path (stringlengths 7-265) | concatenated_notebook (stringlengths 46-17M)
---|---|
private-prediction/a - Secure Model Serving.ipynb | ###Markdown
Part a: Secure Model Serving with TFE Keras Now that you have a trained model with federated learning, you are ready to serve some private predictions. We can do that using TFE Keras. To secure and serve this model, we will need three TFE servers. This is because TF Encrypted uses an encryption technique called [multi-party computation (MPC)](https://en.wikipedia.org/wiki/Secure_multi-party_computation) under the hood. The idea is to split the model weights and input data into shares, then send a share of each value to the different servers. The key property is that if you look at the share on one server, it reveals nothing about the original value (input data or model weights). If you want to learn more about MPC, you can read this excellent [blog post](https://mortendahl.github.io/2017/04/17/private-deep-learning-with-mpc/). In this notebook, you will be able to serve private predictions after a series of simple steps: - Configure the TFE protocol to secure the model via secret sharing. - Launch three TFE servers. - Convert the TF Keras model into a TFE Keras model using `tfe.keras.models.clone_model`. - Serve the secured model using `tfe.serving.QueueServer`. Alright, let's do it!
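To make the secret-sharing idea concrete, here is a minimal sketch of additive secret sharing in plain Python. It only illustrates the principle and is not TF Encrypted's actual implementation; the modulus and helper names are arbitrary choices for this demo:

```python
import random

# Toy additive secret sharing (illustrative only; not TF Encrypted's internals).
MODULUS = 2 ** 32  # arbitrary modulus chosen for this demo

def share(secret, num_shares=3):
    """Split a non-negative integer into additive shares that sum to it modulo MODULUS."""
    shares = [random.randrange(MODULUS) for _ in range(num_shares - 1)]
    shares.append((secret - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares modulo MODULUS."""
    return sum(shares) % MODULUS

secret = 42
shares = share(secret)
print(shares)               # each share on its own looks like a random number
print(reconstruct(shares))  # 42 -- only all shares together reveal the secret
```

Because every individual share is uniformly random on its own, a single server learns nothing about the model weights or input data it holds a share of.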
###Code
from collections import OrderedDict
import numpy as np
import tensorflow as tf
import tf_encrypted as tfe
import tf_encrypted.keras.backend as KE
tf.compat.v1.disable_eager_execution()
###Output
_____no_output_____
###Markdown
First, we load the model into normal TF Keras using the `load_model` function.
###Code
trained_model = '../saved_fl_model'
model = tf.keras.models.load_model(trained_model)
###Output
WARNING:tensorflow:From /Users/jasonmancuso/tf-world/venv/lib/python3.7/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1781: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
###Markdown
Protocol Next, we configure the protocol we will be using, as well as the servers on which we want to run it. We will use the SecureNN protocol to secret-share the model across the three TFE servers. Most importantly, this adds the capability of providing predictions on encrypted data. Note that the configuration is saved to a file, as we will also need it in the client.
###Code
players = OrderedDict([
('server0', 'localhost:4000'),
('server1', 'localhost:4001'),
('server2', 'localhost:4002'),
])
config = tfe.RemoteConfig(players)
config.save('/tmp/config.json')
tfe.set_config(config)
tfe.set_protocol(tfe.protocol.SecureNN())
###Output
_____no_output_____
###Markdown
Launching servers Before actually serving the computation below, we need to launch TFE servers in new processes. Run each of the following commands in a separate terminal. You may have to allow Python to accept incoming connections.
###Code
for player_name in players.keys():
print("python -m tf_encrypted.player --config /tmp/config.json {}".format(player_name))
###Output
python -m tf_encrypted.player --config /tmp/config.json server0
python -m tf_encrypted.player --config /tmp/config.json server1
python -m tf_encrypted.player --config /tmp/config.json server2
###Markdown
Convert TF Keras into TFE Keras Thanks to `tfe.keras.models.clone_model`, you can automatically convert the TF Keras model into a TFE Keras model.
###Code
with tfe.protocol.SecureNN():
tfe_model = tfe.keras.models.clone_model(model)
###Output
_____no_output_____
###Markdown
Set up a new `tfe.serving.QueueServer` `tfe.serving.QueueServer` will launch a serving queue, so that the TFE servers can accept prediction requests on the secured model from external clients.
###Code
q_input_shape = (1, 784)
q_output_shape = (1, 10)
server = tfe.serving.QueueServer(
input_shape=q_input_shape, output_shape=q_output_shape, computation_fn=tfe_model
)
###Output
_____no_output_____
###Markdown
Start Server Perfect! With all of the above in place, we can finally connect to our servers, push our TensorFlow graph to them, and start serving the model. You can set `num_steps` to limit the number of prediction requests served by the model; if it is not specified, the model will be served until interrupted.
###Code
sess = KE.get_session()
request_ix = 1
def step_fn():
global request_ix
print("Served encrypted prediction {i} to client.".format(i=request_ix))
request_ix += 1
server.run(
sess,
num_steps=10,
step_fn=step_fn)
###Output
Served encrypted prediction 1 to client.
Served encrypted prediction 2 to client.
Served encrypted prediction 3 to client.
Served encrypted prediction 4 to client.
Served encrypted prediction 5 to client.
Served encrypted prediction 6 to client.
Served encrypted prediction 7 to client.
Served encrypted prediction 8 to client.
Served encrypted prediction 9 to client.
Served encrypted prediction 10 to client.
###Markdown
You are ready to move to the **c - Private Prediction Client** notebook to request some private predictions. Cleanup! Once the request limit above has been reached, the model will no longer be available for serving requests, but it is still secret-shared between the three workers above. You can kill the workers by executing the cell below. **Congratulations** on finishing b - Secure Model Serving.
###Code
process_ids = !ps aux | grep '[p]ython -m tf_encrypted.player --config' | awk '{print $2}'
for process_id in process_ids:
!kill {process_id}
print("Process ID {id} has been killed.".format(id=process_id))
###Output
Process ID 25752 has been killed.
Process ID 25745 has been killed.
Process ID 25736 has been killed.
|
TSFS_3.ipynb | ###Markdown
Time Series From Scratch (part. 3) — White Noise and Random Walk (Dario Radečić)[Source](https://towardsdatascience.com/time-series-from-scratch-white-noise-and-random-walk-5c96270514d3). From [Time Series From Scratch](https://towardsdatascience.com/tagged/time-series-from-scratch).- Author: Israel Oliveira [\[e-mail\]](mailto:'Israel%20Oliveira%20')
###Code
%load_ext watermark
import numpy as np
import pandas as pd
from statsmodels.graphics.tsaplots import plot_acf
import matplotlib.pyplot as plt
from matplotlib import rcParams
from cycler import cycler
rcParams['figure.figsize'] = 18, 5
rcParams['axes.spines.top'] = False
rcParams['axes.spines.right'] = False
rcParams['axes.prop_cycle'] = cycler(color=['#365977'])
rcParams['lines.linewidth'] = 2.5
# from tqdm import tqdm
# from glob import glob
# import matplotlib.pyplot as plt
# %matplotlib inline
# from IPython.core.pylabtools import figsize
# figsize(12, 8)
# import seaborn as sns
# sns.set_theme()
# pd.set_option("max_columns", None)
# pd.set_option("max_rows", None)
# pd.set_option('display.max_colwidth', None)
# from IPython.display import Markdown, display
# def md(arg):
# display(Markdown(arg))
# from pandas_profiling import ProfileReport
# #report = ProfileReport(#DataFrame here#, minimal=True)
# #report.to
# import pyarrow.parquet as pq
# #df = pq.ParquetDataset(path_to_folder_with_parquets, filesystem=None).read_pandas().to_pandas()
# import json
# def open_file_json(path,mode='r',var=None):
# if mode == 'w':
# with open(path,'w') as f:
# json.dump(var, f)
# if mode == 'r':
# with open(path,'r') as f:
# return json.load(f)
# import functools
# import operator
# def flat(a):
# return functools.reduce(operator.iconcat, a, [])
# import json
# from glob import glob
# from typing import NewType
# DictsPathType = NewType("DictsPath", str)
# def open_file_json(path):
# with open(path, "r") as f:
# return json.load(f)
# class LoadDicts:
# def __init__(self, dict_path: DictsPathType = "./data"):
# Dicts_glob = glob(f"{dict_path}/*.json")
# self.List = []
# self.Dict = {}
# for path_json in Dicts_glob:
# name = path_json.split("/")[-1].replace(".json", "")
# self.List.append(name)
# self.Dict[name] = open_file_json(path_json)
# setattr(self, name, self.Dict[name])
# Run this cell before close.
%watermark -d --iversion -b -r -g -m -v
!cat /proc/cpuinfo |grep 'model name'|head -n 1 |sed -e 's/model\ name/CPU/'
!free -h |cut -d'i' -f1 |grep -v total
# Declare
white_noise = np.random.randn(1000)
# Plot
plt.title('White Noise Plot', size=20)
plt.plot(np.arange(len(white_noise)), white_noise);
plt.grid()
# Split into an arbitrary number of chunks
white_noise_chunks = np.split(white_noise, 20)
means, stds = [], []
# Get the mean and std values for every chunk
for chunk in white_noise_chunks:
means.append(np.mean(chunk))
stds.append(np.std(chunk))
# Plot
plt.title('White Noise Mean and Standard Deviation Comparison', size=20)
plt.plot(np.arange(len(means)), [white_noise.mean()] * len(means), label='Global mean', lw=1.5)
plt.scatter(x=np.arange(len(means)), y=means, label='Mean', s=100)
plt.plot(np.arange(len(stds)), [white_noise.std()] * len(stds), label='Global std', lw=1.5, color='orange')
plt.scatter(x=np.arange(len(stds)), y=stds, label='STD', color='orange', s=100)
plt.legend();
plt.grid()
plot_acf(np.array(white_noise))
plt.grid()
# Simulate a random walk: steps of -1 or +1 with equal probability
random_walk_diff = (np.random.random(1000) < 0.5) * 2 - 1
random_walk_diff[0] = 0  # start the walk at zero
random_walk = np.cumsum(random_walk_diff)
# Ratio of downward to upward steps (should be close to 1)
print(sum(random_walk_diff < 0) / sum(random_walk_diff > 0))
# Plot
plt.title('Random Walk Plot', size=20)
plt.plot(np.arange(len(random_walk)), random_walk)
plt.grid()
plot_acf(np.array(random_walk));
plt.grid()
# Plot
plt.title('Random Walk First Order Difference', size=20)
plt.plot(random_walk_diff)
plt.grid()
plot_acf(random_walk_diff)
plt.grid()
###Output
_____no_output_____ |
5.) Sequence Models/1.) Buiding a Recurrent Neural Network/Building_a_Recurrent_Neural_Network_Step_by_Step_v3a.ipynb | ###Markdown
Building your Recurrent Neural Network - Step by StepWelcome to Course 5's first assignment! In this assignment, you will implement key components of a Recurrent Neural Network in numpy.Recurrent Neural Networks (RNN) are very effective for Natural Language Processing and other sequence tasks because they have "memory". They can read inputs $x^{\langle t \rangle}$ (such as words) one at a time, and remember some information/context through the hidden layer activations that get passed from one time-step to the next. This allows a unidirectional RNN to take information from the past to process later inputs. A bidirectional RNN can take context from both the past and the future. **Notation**:- Superscript $[l]$ denotes an object associated with the $l^{th}$ layer. - Superscript $(i)$ denotes an object associated with the $i^{th}$ example. - Superscript $\langle t \rangle$ denotes an object at the $t^{th}$ time-step. - **Sub**script $i$ denotes the $i^{th}$ entry of a vector.Example: - $a^{(2)[3]}_5$ denotes the activation of the 2nd training example (2), 3rd layer [3], 4th time step , and 5th entry in the vector. Pre-requisites* We assume that you are already familiar with `numpy`. * To refresh your knowledge of numpy, you can review course 1 of this specialization "Neural Networks and Deep Learning". * Specifically, review the week 2 assignment ["Python Basics with numpy (optional)"](https://www.coursera.org/learn/neural-networks-deep-learning/item/Zh0CU). Be careful when modifying the starter code* When working on graded functions, please remember to only modify the code that is between the```Python START CODE HERE```and```Python END CODE HERE```* In particular, Be careful to not modify the first line of graded routines. These start with:```Python GRADED FUNCTION: routine_name```* The automatic grader (autograder) needs these to locate the function.* Even a change in spacing will cause issues with the autograder. * It will return 'failed' if these are modified or missing." Updates If you were working on the notebook before this update...* The current notebook is version "3a".* You can find your original work saved in the notebook with the previous version name ("v3") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates* "Forward propagation for the basic RNN", added sections to clarify variable names and shapes: - "Dimensions of $x^{\langle t \rangle}$" - "Hidden State $a$", - "Dimensions of hidden state $a^{\langle t \rangle}$" - "Dimensions of prediction $y^{\langle t \rangle}$"* `rnn_cell_forward`: * Added additional hints. * Updated figure 2.* `rnn_forward` - Set `xt` in a separate line of code to clarify what code is expected; added additional hints. - Clarifies instructions to specify dimensions (2D or 3D), and clarifies variable names. - Additional Hints - Clarifies when the basic RNN works well. - Updated figure 3.* "About the gates" replaced with "overview of gates and states": - Updated to include conceptual description of each gate's purpose, and an explanation of each equation. - Added sections about the cell state, hidden state, and prediction. - Lists variable names that are used in the code, and notes when they differ from the variables used in the equations. - Lists shapes of the variables. - Updated figure 4.* `lstm_forward` - Added instructions, noting the shapes of the variables. - Added hints about `c` and `c_next` to help students avoid copy-by-reference mistakes. 
- Set `xt` in a separate line to make this step explicit.* Renamed global variables so that they do not conflict with local variables within the function.* Spelling, grammar and wording corrections.* For unit tests, updated print statements and "expected output" for easier comparisons.* Many thanks to mentor Geoff Ladwig for suggested improvements and fixes in the assignments for course 5! Let's first import all the packages that you will need during this assignment.
###Code
import numpy as np
from rnn_utils import *
###Output
_____no_output_____
###Markdown
1 - Forward propagation for the basic Recurrent Neural NetworkLater this week, you will generate music using an RNN. The basic RNN that you will implement has the structure below. In this example, $T_x = T_y$. **Figure 1**: Basic RNN model Dimensions of input $x$ Input with $n_x$ number of units* For a single input example, $x^{(i)}$ is a one-dimensional input vector.* Using language as an example, a language with a 5000 word vocabulary could be one-hot encoded into a vector that has 5000 units. So $x^{(i)}$ would have the shape (5000,). * We'll use the notation $n_x$ to denote the number of units in a single training example. Batches of size $m$* Let's say we have mini-batches, each with 20 training examples. * To benefit from vectorization, we'll stack 20 columns of $x^{(i)}$ examples into a 2D array (a matrix).* For example, this tensor has the shape (5000,20). * We'll use $m$ to denote the number of training examples. * So the shape of a mini-batch is $(n_x,m)$ Time steps of size $T_{x}$* A recurrent neural network has multiple time steps, which we'll index with $t$.* In the lessons, we saw a single training example $x^{(i)}$ (a vector) pass through multiple time steps $T_x$. For example, if there are 10 time steps, $T_{x} = 10$ 3D Tensor of shape $(n_{x},m,T_{x})$* The 3-dimensional tensor $x$ of shape $(n_x,m,T_x)$ represents the input $x$ that is fed into the RNN. Taking a 2D slice for each time step: $x^{\langle t \rangle}$* At each time step, we'll use a mini-batches of training examples (not just a single example).* So, for each time step $t$, we'll use a 2D slice of shape $(n_x,m)$.* We're referring to this 2D slice as $x^{\langle t \rangle}$. The variable name in the code is `xt`. Definition of hidden state $a$* The activation $a^{\langle t \rangle}$ that is passed to the RNN from one time step to another is called a "hidden state." Dimensions of hidden state $a$* Similar to the input tensor $x$, the hidden state for a single training example is a vector of length $n_{a}$.* If we include a mini-batch of $m$ training examples, the shape of a mini-batch is $(n_{a},m)$.* When we include the time step dimension, the shape of the hidden state is $(n_{a}, m, T_x)$* We will loop through the time steps with index $t$, and work with a 2D slice of the 3D tensor. * We'll refer to this 2D slice as $a^{\langle t \rangle}$. * In the code, the variable names we use are either `a_prev` or `a_next`, depending on the function that's being implemented.* The shape of this 2D slice is $(n_{a}, m)$ Dimensions of prediction $\hat{y}$* Similar to the inputs and hidden states, $\hat{y}$ is a 3D tensor of shape $(n_{y}, m, T_{y})$. * $n_{y}$: number of units in the vector representing the prediction. * $m$: number of examples in a mini-batch. * $T_{y}$: number of time steps in the prediction.* For a single time step $t$, a 2D slice $\hat{y}^{\langle t \rangle}$ has shape $(n_{y}, m)$.* In the code, the variable names are: - `y_pred`: $\hat{y}$ - `yt_pred`: $\hat{y}^{\langle t \rangle}$ Here's how you can implement an RNN: **Steps**:1. Implement the calculations needed for one time-step of the RNN.2. Implement a loop over $T_x$ time-steps in order to process all the inputs, one at a time. 1.1 - RNN cellA recurrent neural network can be seen as the repeated use of a single cell. You are first going to implement the computations for a single time-step. The following figure describes the operations for a single time-step of an RNN cell. **Figure 2**: Basic RNN cell. 
Takes as input $x^{\langle t \rangle}$ (current input) and $a^{\langle t - 1\rangle}$ (previous hidden state containing information from the past), and outputs $a^{\langle t \rangle}$ which is given to the next RNN cell and also used to predict $\hat{y}^{\langle t \rangle}$ rnn cell versus rnn_cell_forward* Note that an RNN cell outputs the hidden state $a^{\langle t \rangle}$. * The rnn cell is shown in the figure as the inner box which has solid lines. * The function that we will implement, `rnn_cell_forward`, also calculates the prediction $\hat{y}^{\langle t \rangle}$ * The rnn_cell_forward is shown in the figure as the outer box that has dashed lines. **Exercise**: Implement the RNN-cell described in Figure (2).**Instructions**:1. Compute the hidden state with tanh activation: $a^{\langle t \rangle} = \tanh(W_{aa} a^{\langle t-1 \rangle} + W_{ax} x^{\langle t \rangle} + b_a)$.2. Using your new hidden state $a^{\langle t \rangle}$, compute the prediction $\hat{y}^{\langle t \rangle} = softmax(W_{ya} a^{\langle t \rangle} + b_y)$. We provided the function `softmax`.3. Store $(a^{\langle t \rangle}, a^{\langle t-1 \rangle}, x^{\langle t \rangle}, parameters)$ in a `cache`.4. Return $a^{\langle t \rangle}$ , $\hat{y}^{\langle t \rangle}$ and `cache` Additional Hints* [numpy.tanh](https://www.google.com/search?q=numpy+tanh&rlz=1C5CHFA_enUS854US855&oq=numpy+tanh&aqs=chrome..69i57j0l5.1340j0j7&sourceid=chrome&ie=UTF-8)* We've created a `softmax` function that you can use. It is located in the file 'rnn_utils.py' and has been imported.* For matrix multiplication, use [numpy.dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html)
###Code
# GRADED FUNCTION: rnn_cell_forward
def rnn_cell_forward(xt, a_prev, parameters):
"""
Implements a single forward step of the RNN-cell as described in Figure (2)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
ba -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)
"""
# Retrieve parameters from "parameters"
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ### (≈2 lines)
# compute next activation state using the formula given above
a_next = np.tanh(np.dot(Wax, xt) + np.dot(Waa, a_prev) + ba)
# compute output of the current cell using the formula given above
yt_pred = softmax(np.dot(Wya, a_next) + by)
### END CODE HERE ###
# store values you need for backward propagation in cache
cache = (a_next, a_prev, xt, parameters)
return a_next, yt_pred, cache
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}
a_next, yt_pred, cache = rnn_cell_forward(xt, a_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", a_next.shape)
print("yt_pred[1] =", yt_pred[1])
print("yt_pred.shape = ", yt_pred.shape)
###Output
a_next[4] = [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978
-0.18887155 0.99815551 0.6531151 0.82872037]
a_next.shape = (5, 10)
yt_pred[1] = [ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212
0.36920224 0.9966312 0.9982559 0.17746526]
yt_pred.shape = (2, 10)
###Markdown
**Expected Output**: ```Pythona_next[4] = [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978 -0.18887155 0.99815551 0.6531151 0.82872037]a_next.shape = (5, 10)yt_pred[1] = [ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212 0.36920224 0.9966312 0.9982559 0.17746526]yt_pred.shape = (2, 10)``` 1.2 - RNN forward pass - A recurrent neural network (RNN) is a repetition of the RNN cell that you've just built. - If your input sequence of data is 10 time steps long, then you will re-use the RNN cell 10 times. - Each cell takes two inputs at each time step: - $a^{\langle t-1 \rangle}$: The hidden state from the previous cell. - $x^{\langle t \rangle}$: The current time-step's input data.- It has two outputs at each time step: - A hidden state ($a^{\langle t \rangle}$) - A prediction ($y^{\langle t \rangle}$)- The weights and biases $(W_{aa}, b_{a}, W_{ax}, b_{x})$ are re-used each time step. - They are maintained between calls to rnn_cell_forward in the 'parameters' dictionary. **Figure 3**: Basic RNN. The input sequence $x = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is carried over $T_x$ time steps. The network outputs $y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$. **Exercise**: Code the forward propagation of the RNN described in Figure (3).**Instructions**:* Create a 3D array of zeros, $a$ of shape $(n_{a}, m, T_{x})$ that will store all the hidden states computed by the RNN.* Create a 3D array of zeros, $\hat{y}$, of shape $(n_{y}, m, T_{x})$ that will store the predictions. - Note that in this case, $T_{y} = T_{x}$ (the prediction and input have the same number of time steps).* Initialize the 2D hidden state `a_next` by setting it equal to the initial hidden state, $a_{0}$.* At each time step $t$: - Get $x^{\langle t \rangle}$, which is a 2D slice of $x$ for a single time step $t$. - $x^{\langle t \rangle}$ has shape $(n_{x}, m)$ - $x$ has shape $(n_{x}, m, T_{x})$ - Update the 2D hidden state $a^{\langle t \rangle}$ (variable name `a_next`), the prediction $\hat{y}^{\langle t \rangle}$ and the cache by running `rnn_cell_forward`. - $a^{\langle t \rangle}$ has shape $(n_{a}, m)$ - Store the 2D hidden state in the 3D tensor $a$, at the $t^{th}$ position. - $a$ has shape $(n_{a}, m, T_{x})$ - Store the 2D $\hat{y}^{\langle t \rangle}$ prediction (variable name `yt_pred`) in the 3D tensor $\hat{y}_{pred}$ at the $t^{th}$ position. - $\hat{y}^{\langle t \rangle}$ has shape $(n_{y}, m)$ - $\hat{y}$ has shape $(n_{y}, m, T_x)$ - Append the cache to the list of caches.* Return the 3D tensor $a$ and $\hat{y}$, as well as the list of caches. Additional Hints- [np.zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html)- If you have a 3 dimensional numpy array and are indexing by its third dimension, you can use array slicing like this: `var_name[:,:,i]`.
###Code
# GRADED FUNCTION: rnn_forward
def rnn_forward(x, a0, parameters):
"""
Implement the forward propagation of the recurrent neural network described in Figure (3).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
ba -- Bias numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y_pred -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of caches, x)
"""
# Initialize "caches" which will contain the list of all caches
caches = []
# Retrieve dimensions from shapes of x and parameters["Wya"]
n_x, m, T_x = x.shape
n_y, n_a = parameters["Wya"].shape
### START CODE HERE ###
# initialize "a" and "y" with zeros (≈2 lines)
a = np.zeros((n_a, m, T_x))
y_pred = np.zeros((n_y, m, T_x))
# Initialize a_next (≈1 line)
a_next = a0
# loop over all time-steps
for t in range(T_x):
# Update next hidden state, compute the prediction, get the cache (≈1 line)
a_next, yt_pred, cache = rnn_cell_forward(x[:,:,t], a_next, parameters)
# Save the value of the new "next" hidden state in a (≈1 line)
a[:,:,t] = a_next
# Save the value of the prediction in y (≈1 line)
y_pred[:,:,t] = yt_pred
# Append "cache" to "caches" (≈1 line)
caches.append(cache)
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y_pred, caches
np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}
a, y_pred, caches = rnn_forward(x, a0, parameters)
print("a[4][1] = ", a[4][1])
print("a.shape = ", a.shape)
print("y_pred[1][3] =", y_pred[1][3])
print("y_pred.shape = ", y_pred.shape)
print("caches[1][1][3] =", caches[1][1][3])
print("len(caches) = ", len(caches))
###Output
a[4][1] = [-0.99999375 0.77911235 -0.99861469 -0.99833267]
a.shape = (5, 10, 4)
y_pred[1][3] = [ 0.79560373 0.86224861 0.11118257 0.81515947]
y_pred.shape = (2, 10, 4)
caches[1][1][3] = [-1.1425182 -0.34934272 -0.20889423 0.58662319]
len(caches) = 2
###Markdown
**Expected Output**:```Pythona[4][1] = [-0.99999375 0.77911235 -0.99861469 -0.99833267]a.shape = (5, 10, 4)y_pred[1][3] = [ 0.79560373 0.86224861 0.11118257 0.81515947]y_pred.shape = (2, 10, 4)caches[1][1][3] = [-1.1425182 -0.34934272 -0.20889423 0.58662319]len(caches) = 2``` Congratulations! You've successfully built the forward propagation of a recurrent neural network from scratch. Situations when this RNN will perform better:- This will work well enough for some applications, but it suffers from the vanishing gradient problems. - The RNN works best when each output $\hat{y}^{\langle t \rangle}$ can be estimated using "local" context. - "Local" context refers to information that is close to the prediction's time step $t$.- More formally, local context refers to inputs $x^{\langle t' \rangle}$ and predictions $\hat{y}^{\langle t \rangle}$ where $t'$ is close to $t$.In the next part, you will build a more complex LSTM model, which is better at addressing vanishing gradients. The LSTM will be better able to remember a piece of information and keep it saved for many timesteps. 2 - Long Short-Term Memory (LSTM) networkThe following figure shows the operations of an LSTM-cell. **Figure 4**: LSTM-cell. This tracks and updates a "cell state" or memory variable $c^{\langle t \rangle}$ at every time-step, which can be different from $a^{\langle t \rangle}$. Similar to the RNN example above, you will start by implementing the LSTM cell for a single time-step. Then you can iteratively call it from inside a "for-loop" to have it process an input with $T_x$ time-steps. Overview of gates and states - Forget gate $\mathbf{\Gamma}_{f}$* Let's assume we are reading words in a piece of text, and plan to use an LSTM to keep track of grammatical structures, such as whether the subject is singular ("puppy") or plural ("puppies"). * If the subject changes its state (from a singular word to a plural word), the memory of the previous state becomes outdated, so we "forget" that outdated state.* The "forget gate" is a tensor containing values that are between 0 and 1. * If a unit in the forget gate has a value close to 0, the LSTM will "forget" the stored state in the corresponding unit of the previous cell state. * If a unit in the forget gate has a value close to 1, the LSTM will mostly remember the corresponding value in the stored state. Equation$$\mathbf{\Gamma}_f^{\langle t \rangle} = \sigma(\mathbf{W}_f[\mathbf{a}^{\langle t-1 \rangle}, \mathbf{x}^{\langle t \rangle}] + \mathbf{b}_f)\tag{1} $$ Explanation of the equation:* $\mathbf{W_{f}}$ contains weights that govern the forget gate's behavior. * The previous time step's hidden state $[a^{\langle t-1 \rangle}$ and current time step's input $x^{\langle t \rangle}]$ are concatenated together and multiplied by $\mathbf{W_{f}}$. * A sigmoid function is used to make each of the gate tensor's values $\mathbf{\Gamma}_f^{\langle t \rangle}$ range from 0 to 1.* The forget gate $\mathbf{\Gamma}_f^{\langle t \rangle}$ has the same dimensions as the previous cell state $c^{\langle t-1 \rangle}$. * This means that the two can be multiplied together, element-wise.* Multiplying the tensors $\mathbf{\Gamma}_f^{\langle t \rangle} * \mathbf{c}^{\langle t-1 \rangle}$ is like applying a mask over the previous cell state.* If a single value in $\mathbf{\Gamma}_f^{\langle t \rangle}$ is 0 or close to 0, then the product is close to 0. 
* This keeps the information stored in the corresponding unit in $\mathbf{c}^{\langle t-1 \rangle}$ from being remembered for the next time step.* Similarly, if one value is close to 1, the product is close to the original value in the previous cell state. * The LSTM will keep the information from the corresponding unit of $\mathbf{c}^{\langle t-1 \rangle}$, to be used in the next time step. Variable names in the codeThe variable names in the code are similar to the equations, with slight differences. * `Wf`: forget gate weight $\mathbf{W}_{f}$* `Wb`: forget gate bias $\mathbf{W}_{b}$* `ft`: forget gate $\Gamma_f^{\langle t \rangle}$ Candidate value $\tilde{\mathbf{c}}^{\langle t \rangle}$* The candidate value is a tensor containing information from the current time step that **may** be stored in the current cell state $\mathbf{c}^{\langle t \rangle}$.* Which parts of the candidate value get passed on depends on the update gate.* The candidate value is a tensor containing values that range from -1 to 1.* The tilde "~" is used to differentiate the candidate $\tilde{\mathbf{c}}^{\langle t \rangle}$ from the cell state $\mathbf{c}^{\langle t \rangle}$. Equation$$\mathbf{\tilde{c}}^{\langle t \rangle} = \tanh\left( \mathbf{W}_{c} [\mathbf{a}^{\langle t - 1 \rangle}, \mathbf{x}^{\langle t \rangle}] + \mathbf{b}_{c} \right) \tag{3}$$ Explanation of the equation* The 'tanh' function produces values between -1 and +1. Variable names in the code* `cct`: candidate value $\mathbf{\tilde{c}}^{\langle t \rangle}$ - Update gate $\mathbf{\Gamma}_{i}$* We use the update gate to decide what aspects of the candidate $\tilde{\mathbf{c}}^{\langle t \rangle}$ to add to the cell state $c^{\langle t \rangle}$.* The update gate decides what parts of a "candidate" tensor $\tilde{\mathbf{c}}^{\langle t \rangle}$ are passed onto the cell state $\mathbf{c}^{\langle t \rangle}$.* The update gate is a tensor containing values between 0 and 1. * When a unit in the update gate is close to 1, it allows the value of the candidate $\tilde{\mathbf{c}}^{\langle t \rangle}$ to be passed onto the hidden state $\mathbf{c}^{\langle t \rangle}$ * When a unit in the update gate is close to 0, it prevents the corresponding value in the candidate from being passed onto the hidden state.* Notice that we use the subscript "i" and not "u", to follow the convention used in the literature. Equation$$\mathbf{\Gamma}_i^{\langle t \rangle} = \sigma(\mathbf{W}_i[a^{\langle t-1 \rangle}, \mathbf{x}^{\langle t \rangle}] + \mathbf{b}_i)\tag{2} $$ Explanation of the equation* Similar to the forget gate, here $\mathbf{\Gamma}_i^{\langle t \rangle}$, the sigmoid produces values between 0 and 1.* The update gate is multiplied element-wise with the candidate, and this product ($\mathbf{\Gamma}_{i}^{\langle t \rangle} * \tilde{c}^{\langle t \rangle}$) is used in determining the cell state $\mathbf{c}^{\langle t \rangle}$. Variable names in code (Please note that they're different than the equations)In the code, we'll use the variable names found in the academic literature. These variables don't use "u" to denote "update".* `Wi` is the update gate weight $\mathbf{W}_i$ (not "Wu") * `bi` is the update gate bias $\mathbf{b}_i$ (not "bu")* `it` is the forget gate $\mathbf{\Gamma}_i^{\langle t \rangle}$ (not "ut") - Cell state $\mathbf{c}^{\langle t \rangle}$* The cell state is the "memory" that gets passed onto future time steps.* The new cell state $\mathbf{c}^{\langle t \rangle}$ is a combination of the previous cell state and the candidate value. 
Equation$$ \mathbf{c}^{\langle t \rangle} = \mathbf{\Gamma}_f^{\langle t \rangle}* \mathbf{c}^{\langle t-1 \rangle} + \mathbf{\Gamma}_{i}^{\langle t \rangle} *\mathbf{\tilde{c}}^{\langle t \rangle} \tag{4} $$ Explanation of equation* The previous cell state $\mathbf{c}^{\langle t-1 \rangle}$ is adjusted (weighted) by the forget gate $\mathbf{\Gamma}_{f}^{\langle t \rangle}$* and the candidate value $\tilde{\mathbf{c}}^{\langle t \rangle}$, adjusted (weighted) by the update gate $\mathbf{\Gamma}_{i}^{\langle t \rangle}$ Variable names and shapes in the code* `c`: cell state, including all time steps, $\mathbf{c}$ shape $(n_{a}, m, T)$* `c_next`: new (next) cell state, $\mathbf{c}^{\langle t \rangle}$ shape $(n_{a}, m)$* `c_prev`: previous cell state, $\mathbf{c}^{\langle t-1 \rangle}$, shape $(n_{a}, m)$ - Output gate $\mathbf{\Gamma}_{o}$* The output gate decides what gets sent as the prediction (output) of the time step.* The output gate is like the other gates. It contains values that range from 0 to 1. Equation$$ \mathbf{\Gamma}_o^{\langle t \rangle}= \sigma(\mathbf{W}_o[\mathbf{a}^{\langle t-1 \rangle}, \mathbf{x}^{\langle t \rangle}] + \mathbf{b}_{o})\tag{5}$$ Explanation of the equation* The output gate is determined by the previous hidden state $\mathbf{a}^{\langle t-1 \rangle}$ and the current input $\mathbf{x}^{\langle t \rangle}$* The sigmoid makes the gate range from 0 to 1. Variable names in the code* `Wo`: output gate weight, $\mathbf{W_o}$* `bo`: output gate bias, $\mathbf{b_o}$* `ot`: output gate, $\mathbf{\Gamma}_{o}^{\langle t \rangle}$ - Hidden state $\mathbf{a}^{\langle t \rangle}$* The hidden state gets passed to the LSTM cell's next time step.* It is used to determine the three gates ($\mathbf{\Gamma}_{f}, \mathbf{\Gamma}_{u}, \mathbf{\Gamma}_{o}$) of the next time step.* The hidden state is also used for the prediction $y^{\langle t \rangle}$. Equation$$ \mathbf{a}^{\langle t \rangle} = \mathbf{\Gamma}_o^{\langle t \rangle} * \tanh(\mathbf{c}^{\langle t \rangle})\tag{6} $$ Explanation of equation* The hidden state $\mathbf{a}^{\langle t \rangle}$ is determined by the cell state $\mathbf{c}^{\langle t \rangle}$ in combination with the output gate $\mathbf{\Gamma}_{o}$.* The cell state state is passed through the "tanh" function to rescale values between -1 and +1.* The output gate acts like a "mask" that either preserves the values of $\tanh(\mathbf{c}^{\langle t \rangle})$ or keeps those values from being included in the hidden state $\mathbf{a}^{\langle t \rangle}$ Variable names and shapes in the code* `a`: hidden state, including time steps. $\mathbf{a}$ has shape $(n_{a}, m, T_{x})$* 'a_prev`: hidden state from previous time step. $\mathbf{a}^{\langle t-1 \rangle}$ has shape $(n_{a}, m)$* `a_next`: hidden state for next time step. $\mathbf{a}^{\langle t \rangle}$ has shape $(n_{a}, m)$ - Prediction $\mathbf{y}^{\langle t \rangle}_{pred}$* The prediction in this use case is a classification, so we'll use a softmax.The equation is:$$\mathbf{y}^{\langle t \rangle}_{pred} = \textrm{softmax}(\mathbf{W}_{y} \mathbf{a}^{\langle t \rangle} + \mathbf{b}_{y})$$ Variable names and shapes in the code* `y_pred`: prediction, including all time steps. $\mathbf{y}_{pred}$ has shape $(n_{y}, m, T_{x})$. Note that $(T_{y} = T_{x})$ for this example.* `yt_pred`: prediction for the current time step $t$. $\mathbf{y}^{\langle t \rangle}_{pred}$ has shape $(n_{y}, m)$ 2.1 - LSTM cell**Exercise**: Implement the LSTM cell described in the Figure (4).**Instructions**:1. 
Concatenate the hidden state $a^{\langle t-1 \rangle}$ and input $x^{\langle t \rangle}$ into a single matrix: $$concat = \begin{bmatrix} a^{\langle t-1 \rangle} \\ x^{\langle t \rangle} \end{bmatrix}$$ 2. Compute all the formulas 1 through 6 for the gates, hidden state, and cell state.3. Compute the prediction $y^{\langle t \rangle}$. Additional Hints* You can use [numpy.concatenate](https://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html). Check which value to use for the `axis` parameter.* The functions `sigmoid()` and `softmax` are imported from `rnn_utils.py`.* [numpy.tanh](https://docs.scipy.org/doc/numpy/reference/generated/numpy.tanh.html)* Use [np.dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) for matrix multiplication.* Notice that the variable names `Wi`, `bi` refer to the weights and biases of the **update** gate. There are no variables named "Wu" or "bu" in this function.
###Code
# GRADED FUNCTION: lstm_cell_forward
def lstm_cell_forward(xt, a_prev, c_prev, parameters):
"""
Implement a single forward step of the LSTM-cell as described in Figure (4)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
c_prev -- Memory state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the update gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the output gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
c_next -- next memory state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, c_next, a_prev, c_prev, xt, parameters)
Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilde),
c stands for the cell state (memory)
"""
# Retrieve parameters from "parameters"
Wf = parameters["Wf"] # forget gate weight
bf = parameters["bf"]
Wi = parameters["Wi"] # update gate weight (notice the variable name)
bi = parameters["bi"] # (notice the variable name)
Wc = parameters["Wc"] # candidate value weight
bc = parameters["bc"]
Wo = parameters["Wo"] # output gate weight
bo = parameters["bo"]
Wy = parameters["Wy"] # prediction weight
by = parameters["by"]
# Retrieve dimensions from shapes of xt and Wy
n_x, m = xt.shape
n_y, n_a = Wy.shape
### START CODE HERE ###
# Concatenate a_prev and xt (≈1 line)
concat = np.concatenate((a_prev, xt), axis=0)
# Compute values for ft (forget gate), it (update gate),
# cct (candidate value), c_next (cell state),
# ot (output gate), a_next (hidden state) (≈6 lines)
ft = sigmoid(Wf @ concat + bf) # forget gate
it = sigmoid(Wi @ concat + bi)
cct = np.tanh(Wc @ concat + bc)
c_next = ft * c_prev + it * cct
ot = sigmoid(Wo @ concat + bo)
a_next = ot * np.tanh(c_next)
# Compute prediction of the LSTM cell (≈1 line)
yt_pred = softmax(Wy @ a_next + by)
### END CODE HERE ###
# store values needed for backward propagation in cache
cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)
return a_next, c_next, yt_pred, cache
np.random.seed(1)
xt_tmp = np.random.randn(3,10)
a_prev_tmp = np.random.randn(5,10)
c_prev_tmp = np.random.randn(5,10)
parameters_tmp = {}
parameters_tmp['Wf'] = np.random.randn(5, 5+3)
parameters_tmp['bf'] = np.random.randn(5,1)
parameters_tmp['Wi'] = np.random.randn(5, 5+3)
parameters_tmp['bi'] = np.random.randn(5,1)
parameters_tmp['Wo'] = np.random.randn(5, 5+3)
parameters_tmp['bo'] = np.random.randn(5,1)
parameters_tmp['Wc'] = np.random.randn(5, 5+3)
parameters_tmp['bc'] = np.random.randn(5,1)
parameters_tmp['Wy'] = np.random.randn(2,5)
parameters_tmp['by'] = np.random.randn(2,1)
a_next_tmp, c_next_tmp, yt_tmp, cache_tmp = lstm_cell_forward(xt_tmp, a_prev_tmp, c_prev_tmp, parameters_tmp)
print("a_next[4] = \n", a_next_tmp[4])
print("a_next.shape = ", c_next_tmp.shape)
print("c_next[2] = \n", c_next_tmp[2])
print("c_next.shape = ", c_next_tmp.shape)
print("yt[1] =", yt_tmp[1])
print("yt.shape = ", yt_tmp.shape)
print("cache[1][3] =\n", cache_tmp[1][3])
print("len(cache) = ", len(cache_tmp))
###Output
a_next[4] =
[-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482
0.76566531 0.34631421 -0.00215674 0.43827275]
a_next.shape = (5, 10)
c_next[2] =
[ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942
0.76449811 -0.0981561 -0.74348425 -0.26810932]
c_next.shape = (5, 10)
yt[1] = [ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381
0.00943007 0.12666353 0.39380172 0.07828381]
yt.shape = (2, 10)
cache[1][3] =
[-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874
0.07651101 -1.03752894 1.41219977 -0.37647422]
len(cache) = 10
###Markdown
**Expected Output**:```Pythona_next[4] = [-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482 0.76566531 0.34631421 -0.00215674 0.43827275]a_next.shape = (5, 10)c_next[2] = [ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942 0.76449811 -0.0981561 -0.74348425 -0.26810932]c_next.shape = (5, 10)yt[1] = [ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381 0.00943007 0.12666353 0.39380172 0.07828381]yt.shape = (2, 10)cache[1][3] = [-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874 0.07651101 -1.03752894 1.41219977 -0.37647422]len(cache) = 10``` 2.2 - Forward pass for LSTMNow that you have implemented one step of an LSTM, you can now iterate this over this using a for-loop to process a sequence of $T_x$ inputs. **Figure 5**: LSTM over multiple time-steps. **Exercise:** Implement `lstm_forward()` to run an LSTM over $T_x$ time-steps. **Instructions*** Get the dimensions $n_x, n_a, n_y, m, T_x$ from the shape of the variables: `x` and `parameters`.* Initialize the 3D tensors $a$, $c$ and $y$. - $a$: hidden state, shape $(n_{a}, m, T_{x})$ - $c$: cell state, shape $(n_{a}, m, T_{x})$ - $y$: prediction, shape $(n_{y}, m, T_{x})$ (Note that $T_{y} = T_{x}$ in this example). - **Note** Setting one variable equal to the other is a "copy by reference". In other words, don't do `c = a', otherwise both these variables point to the same underlying variable.* Initialize the 2D tensor $a^{\langle t \rangle}$ - $a^{\langle t \rangle}$ stores the hidden state for time step $t$. The variable name is `a_next`. - $a^{\langle 0 \rangle}$, the initial hidden state at time step 0, is passed in when calling the function. The variable name is `a0`. - $a^{\langle t \rangle}$ and $a^{\langle 0 \rangle}$ represent a single time step, so they both have the shape $(n_{a}, m)$ - Initialize $a^{\langle t \rangle}$ by setting it to the initial hidden state ($a^{\langle 0 \rangle}$) that is passed into the function.* Initialize $c^{\langle t \rangle}$ with zeros. - The variable name is `c_next`. - $c^{\langle t \rangle}$ represents a single time step, so its shape is $(n_{a}, m)$ - **Note**: create `c_next` as its own variable with its own location in memory. Do not initialize it as a slice of the 3D tensor $c$. In other words, **don't** do `c_next = c[:,:,0]`.* For each time step, do the following: - From the 3D tensor $x$, get a 2D slice $x^{\langle t \rangle}$ at time step $t$. - Call the `lstm_cell_forward` function that you defined previously, to get the hidden state, cell state, prediction, and cache. - Store the hidden state, cell state and prediction (the 2D tensors) inside the 3D tensors. - Also append the cache to the list of caches.
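The copy-by-reference warning above is easy to see in a small, illustrative example (the variable names below are just for this demo and are not part of the graded code):

```python
import numpy as np

a_demo = np.zeros((2, 3))
c_demo = a_demo                  # both names now refer to the SAME underlying array
c_demo[0, 0] = 1.0
print(a_demo[0, 0])              # 1.0 -- changing c_demo also changed a_demo

c_safe = np.zeros(a_demo.shape)  # a separate array with its own memory
c_safe[0, 0] = 2.0
print(a_demo[0, 0])              # still 1.0
```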
###Code
# GRADED FUNCTION: lstm_forward
def lstm_forward(x, a0, parameters):
"""
Implement the forward propagation of the recurrent neural network using an LSTM-cell described in Figure (3).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the update gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the output gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of all the caches, x)
"""
# Initialize "caches", which will track the list of all the caches
caches = []
### START CODE HERE ###
# Retrieve dimensions from shapes of x and parameters['Wy'] (≈2 lines)
n_x, m, T_x = x.shape
n_y, n_a = parameters['Wy'].shape
# initialize "a", "c" and "y" with zeros (≈3 lines)
a = np.zeros((n_a, m, T_x))
c = np.zeros((n_a, m, T_x))
y = np.zeros((n_y, m, T_x))
# Initialize a_next and c_next (≈2 lines)
a_next = a0
c_next = np.zeros(a_next.shape)
# loop over all time-steps
for t in range(T_x):
# Update next hidden state, next memory state, compute the prediction, get the cache (≈1 line)
a_next, c_next, yt, cache = lstm_cell_forward(x[:, :, t], a_next, c_next, parameters)
# Save the value of the new "next" hidden state in a (≈1 line)
a[:,:,t] = a_next
# Save the value of the prediction in y (≈1 line)
y[:,:,t] = yt
# Save the value of the next cell state (≈1 line)
c[:,:,t] = c_next
# Append the cache into caches (≈1 line)
caches.append(cache)
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y, c, caches
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a, y, c, caches = lstm_forward(x, a0, parameters)
print("a[4][3][6] = ", a[4][3][6])
print("a.shape = ", a.shape)
print("y[1][4][3] =", y[1][4][3])
print("y.shape = ", y.shape)
print("caches[1][1[1]] =", caches[1][1][1])
print("c[1][2][1]", c[1][2][1])
print("len(caches) = ", len(caches))
###Output
a[4][3][6] = 0.172117767533
a.shape = (5, 10, 7)
y[1][4][3] = 0.95087346185
y.shape = (2, 10, 7)
caches[1][1[1]] = [ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139
0.41005165]
c[1][2][1] -0.855544916718
len(caches) = 2
###Markdown
**Expected Output**:```Pythona[4][3][6] = 0.172117767533a.shape = (5, 10, 7)y[1][4][3] = 0.95087346185y.shape = (2, 10, 7)caches[1][1][1] = [ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139 0.41005165]c[1][2][1] -0.855544916718len(caches) = 2``` Congratulations! You have now implemented the forward passes for the basic RNN and the LSTM. When using a deep learning framework, implementing the forward pass is sufficient to build systems that achieve great performance. The rest of this notebook is optional, and will not be graded. 3 - Backpropagation in recurrent neural networks (OPTIONAL / UNGRADED)In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers do not need to bother with the details of the backward pass. If however you are an expert in calculus and want to see the details of backprop in RNNs, you can work through this optional portion of the notebook. When in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in recurrent neural networks you can calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are quite complicated and we did not derive them in lecture. However, we will briefly present them below. 3.1 - Basic RNN backward passWe will start by computing the backward pass for the basic RNN-cell. **Figure 6**: RNN-cell's backward pass. Just like in a fully-connected neural network, the derivative of the cost function $J$ backpropagates through the RNN by following the chain-rule from calculus. The chain-rule is also used to calculate $(\frac{\partial J}{\partial W_{ax}},\frac{\partial J}{\partial W_{aa}},\frac{\partial J}{\partial b})$ to update the parameters $(W_{ax}, W_{aa}, b_a)$. Deriving the one step backward functions: To compute the `rnn_cell_backward` you need to compute the following equations. It is a good exercise to derive them by hand. The derivative of $\tanh$ is $1-\tanh(x)^2$. You can find the complete proof [here](https://www.wyzant.com/resources/lessons/math/calculus/derivative_proofs/tanx). Note that: $ \text{sech}(x)^2 = 1 - \tanh(x)^2$Similarly for $\frac{ \partial a^{\langle t \rangle} } {\partial W_{ax}}, \frac{ \partial a^{\langle t \rangle} } {\partial W_{aa}}, \frac{ \partial a^{\langle t \rangle} } {\partial b}$, the derivative of $\tanh(u)$ is $(1-\tanh(u)^2)du$. The final two equations also follow the same rule and are derived using the $\tanh$ derivative. Note that the arrangement is done in a way to get the same dimensions to match.
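Before implementing `rnn_cell_backward`, you can sanity-check the tanh derivative numerically. This small snippet is purely illustrative (the test point and step size are arbitrary choices):

```python
import numpy as np

# Numerical check that d/dx tanh(x) = 1 - tanh(x)^2 at an arbitrary point
x = 0.7
eps = 1e-6
numeric = (np.tanh(x + eps) - np.tanh(x - eps)) / (2 * eps)  # central difference
analytic = 1 - np.tanh(x) ** 2
print(numeric, analytic)  # the two values should match to several decimal places
```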
###Code
def rnn_cell_backward(da_next, cache):
"""
Implements the backward pass for the RNN-cell (single time-step).
Arguments:
da_next -- Gradient of loss with respect to next hidden state
cache -- python dictionary containing useful values (output of rnn_cell_forward())
Returns:
gradients -- python dictionary containing:
dx -- Gradients of input data, of shape (n_x, m)
da_prev -- Gradients of previous hidden state, of shape (n_a, m)
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dba -- Gradients of bias vector, of shape (n_a, 1)
"""
# Retrieve values from cache
(a_next, a_prev, xt, parameters) = cache
# Retrieve values from parameters
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ###
# compute the gradient of tanh with respect to a_next (≈1 line)
dtanh = (1- a_next**2) * da_next
# compute the gradient of the loss with respect to Wax (≈2 lines)
dxt = np.dot(Wax.T, dtanh)
dWax = np.dot(dtanh, xt.T)
# compute the gradient with respect to Waa (≈2 lines)
da_prev = np.dot(Waa.T, dtanh)
dWaa = np.dot(dtanh, a_prev.T)
# compute the gradient with respect to b (≈1 line)
dba = np.sum(dtanh, 1, keepdims=True)
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dWax": dWax, "dWaa": dWaa, "dba": dba}
return gradients
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
b = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}
a_next, yt, cache = rnn_cell_forward(xt, a_prev, parameters)
da_next = np.random.randn(5,10)
gradients = rnn_cell_backward(da_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)
###Output
gradients["dxt"][1][2] = -0.460564103059
gradients["dxt"].shape = (3, 10)
gradients["da_prev"][2][3] = 0.0842968653807
gradients["da_prev"].shape = (5, 10)
gradients["dWax"][3][1] = 0.393081873922
gradients["dWax"].shape = (5, 3)
gradients["dWaa"][1][2] = -0.28483955787
gradients["dWaa"].shape = (5, 5)
gradients["dba"][4] = [ 0.80517166]
gradients["dba"].shape = (5, 1)
###Markdown
**Expected Output**: **gradients["dxt"][1][2]** = -0.460564103059 **gradients["dxt"].shape** = (3, 10) **gradients["da_prev"][2][3]** = 0.0842968653807 **gradients["da_prev"].shape** = (5, 10) **gradients["dWax"][3][1]** = 0.393081873922 **gradients["dWax"].shape** = (5, 3) **gradients["dWaa"][1][2]** = -0.28483955787 **gradients["dWaa"].shape** = (5, 5) **gradients["dba"][4]** = [ 0.80517166] **gradients["dba"].shape** = (5, 1) Backward pass through the RNNComputing the gradients of the cost with respect to $a^{\langle t \rangle}$ at every time-step $t$ is useful because it is what helps the gradient backpropagate to the previous RNN-cell. To do so, you need to iterate through all the time steps starting at the end, and at each step, you increment the overall $db_a$, $dW_{aa}$, $dW_{ax}$ and you store $dx$.**Instructions**:Implement the `rnn_backward` function. Initialize the return variables with zeros first and then loop through all the time steps while calling the `rnn_cell_backward` at each time timestep, update the other variables accordingly.
###Code
def rnn_backward(da, caches):
"""
Implement the backward pass for a RNN over an entire sequence of input data.
Arguments:
da -- Upstream gradients of all hidden states, of shape (n_a, m, T_x)
caches -- tuple containing information from the forward pass (rnn_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient w.r.t. the input data, numpy-array of shape (n_x, m, T_x)
da0 -- Gradient w.r.t the initial hidden state, numpy-array of shape (n_a, m)
dWax -- Gradient w.r.t the input's weight matrix, numpy-array of shape (n_a, n_x)
dWaa -- Gradient w.r.t the hidden state's weight matrix, numpy array of shape (n_a, n_a)
dba -- Gradient w.r.t the bias, of shape (n_a, 1)
"""
### START CODE HERE ###
# Retrieve values from the first cache (t=1) of caches (≈2 lines)
(caches, x) = caches
(a1, a0, x1, parameters) = caches[0]
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = da.shape
n_x, m = x1.shape
# initialize the gradients with the right sizes (≈6 lines)
dx = np.zeros((n_x, m, T_x))
dWax = np.zeros((n_a, n_x))
dWaa = np.zeros((n_a, n_a))
dba = np.zeros((n_a, 1))
da0 = np.zeros((n_a, m))
da_prevt = np.zeros((n_a, m))
# Loop through all the time steps
for t in reversed(range(T_x)):
# Compute gradients at time step t. Choose wisely the "da_next" and the "cache" to use in the backward propagation step. (≈1 line)
gradients = rnn_cell_backward(da[:,:,t] + da_prevt, caches[t])
# Retrieve derivatives from gradients (≈ 1 line)
dxt, da_prevt, dWaxt, dWaat, dbat = gradients["dxt"], gradients["da_prev"], gradients["dWax"], gradients["dWaa"], gradients["dba"]
# Increment global derivatives w.r.t parameters by adding their derivative at time-step t (≈4 lines)
dx[:, :, t] = dxt
dWax += dWaxt
dWaa += dWaat
dba += dbat
# Set da0 to the gradient of a which has been backpropagated through all time-steps (≈1 line)
da0 = da_prevt
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWax": dWax, "dWaa": dWaa,"dba": dba}
return gradients
np.random.seed(1)
x_tmp = np.random.randn(3,10,4)
a0_tmp = np.random.randn(5,10)
parameters_tmp = {}
parameters_tmp['Wax'] = np.random.randn(5,3)
parameters_tmp['Waa'] = np.random.randn(5,5)
parameters_tmp['Wya'] = np.random.randn(2,5)
parameters_tmp['ba'] = np.random.randn(5,1)
parameters_tmp['by'] = np.random.randn(2,1)
a_tmp, y_tmp, caches_tmp = rnn_forward(x_tmp, a0_tmp, parameters_tmp)
da_tmp = np.random.randn(5, 10, 4)
gradients_tmp = rnn_backward(da_tmp, caches_tmp)
print("gradients[\"dx\"][1][2] =", gradients_tmp["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients_tmp["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients_tmp["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients_tmp["da0"].shape)
print("gradients[\"dWax\"][3][1] =", gradients_tmp["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients_tmp["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients_tmp["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients_tmp["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients_tmp["dba"][4])
print("gradients[\"dba\"].shape =", gradients_tmp["dba"].shape)
###Output
gradients["dx"][1][2] = [-2.07101689 -0.59255627 0.02466855 0.01483317]
gradients["dx"].shape = (3, 10, 4)
gradients["da0"][2][3] = -0.314942375127
gradients["da0"].shape = (5, 10)
gradients["dWax"][3][1] = 11.2641044965
gradients["dWax"].shape = (5, 3)
gradients["dWaa"][1][2] = 2.30333312658
gradients["dWaa"].shape = (5, 5)
gradients["dba"][4] = [-0.74747722]
gradients["dba"].shape = (5, 1)
###Markdown
**Expected Output**: **gradients["dx"][1][2]** = [-2.07101689 -0.59255627 0.02466855 0.01483317] **gradients["dx"].shape** = (3, 10, 4) **gradients["da0"][2][3]** = -0.314942375127 **gradients["da0"].shape** = (5, 10) **gradients["dWax"][3][1]** = 11.2641044965 **gradients["dWax"].shape** = (5, 3) **gradients["dWaa"][1][2]** = 2.30333312658 **gradients["dWaa"].shape** = (5, 5) **gradients["dba"][4]** = [-0.74747722] **gradients["dba"].shape** = (5, 1) 3.2 - LSTM backward pass 3.2.1 One Step backwardThe LSTM backward pass is slightly more complicated than the forward one. We have provided you with all the equations for the LSTM backward pass below. (If you enjoy calculus exercises feel free to try deriving these from scratch yourself.) 3.2.2 gate derivatives$$d \Gamma_o^{\langle t \rangle} = da_{next}*\tanh(c_{next}) * \Gamma_o^{\langle t \rangle}*(1-\Gamma_o^{\langle t \rangle})\tag{7}$$$$d\tilde c^{\langle t \rangle} = dc_{next}*\Gamma_u^{\langle t \rangle}+ \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * i_t * da_{next} * \tilde c^{\langle t \rangle} * (1-\tanh(\tilde c)^2) \tag{8}$$$$d\Gamma_u^{\langle t \rangle} = dc_{next}*\tilde c^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * \tilde c^{\langle t \rangle} * da_{next}*\Gamma_u^{\langle t \rangle}*(1-\Gamma_u^{\langle t \rangle})\tag{9}$$$$d\Gamma_f^{\langle t \rangle} = dc_{next}*\tilde c_{prev} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * c_{prev} * da_{next}*\Gamma_f^{\langle t \rangle}*(1-\Gamma_f^{\langle t \rangle})\tag{10}$$ 3.2.3 parameter derivatives $$ dW_f = d\Gamma_f^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{11} $$$$ dW_u = d\Gamma_u^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{12} $$$$ dW_c = d\tilde c^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{13} $$$$ dW_o = d\Gamma_o^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{14}$$To calculate $db_f, db_u, db_c, db_o$ you just need to sum across the horizontal (axis= 1) axis on $d\Gamma_f^{\langle t \rangle}, d\Gamma_u^{\langle t \rangle}, d\tilde c^{\langle t \rangle}, d\Gamma_o^{\langle t \rangle}$ respectively. Note that you should have the `keep_dims = True` option.Finally, you will compute the derivative with respect to the previous hidden state, previous memory state, and input.$$ da_{prev} = W_f^T*d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle}+ W_c^T * d\tilde c^{\langle t \rangle} + W_o^T * d\Gamma_o^{\langle t \rangle} \tag{15}$$Here, the weights for equations 13 are the first n_a, (i.e. $W_f = W_f[:n_a,:]$ etc...)$$ dc_{prev} = dc_{next}\Gamma_f^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} * (1- \tanh(c_{next})^2)*\Gamma_f^{\langle t \rangle}*da_{next} \tag{16}$$$$ dx^{\langle t \rangle} = W_f^T*d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle}+ W_c^T * d\tilde c_t + W_o^T * d\Gamma_o^{\langle t \rangle}\tag{17} $$where the weights for equation 15 are from n_a to the end, (i.e. $W_f = W_f[n_a:,:]$ etc...)**Exercise:** Implement `lstm_cell_backward` by implementing equations $7-17$ below. Good luck! :)
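A note on the weight slicing in equations (15) and (17) (added for clarity): each gate weight matrix has shape $(n_a, n_a + n_x)$ because it multiplies the concatenation $\begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}$. The first $n_a$ columns act on $a_{prev}$ and therefore appear in $da_{prev}$ (e.g. `Wf[:, :n_a]`), while the remaining $n_x$ columns act on $x_t$ and appear in $dx^{\langle t \rangle}$ (e.g. `Wf[:, n_a:]`). The same split applies to the update, memory and output gate weights, and it is exactly how the implementation below indexes the parameters.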
###Code
def lstm_cell_backward(da_next, dc_next, cache):
"""
Implement the backward pass for the LSTM-cell (single time-step).
Arguments:
da_next -- Gradients of next hidden state, of shape (n_a, m)
dc_next -- Gradients of next cell state, of shape (n_a, m)
cache -- cache storing information from the forward pass
Returns:
gradients -- python dictionary containing:
dxt -- Gradient of input data at time-step t, of shape (n_x, m)
da_prev -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
dc_prev -- Gradient w.r.t. the previous memory state, of shape (n_a, m)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)
"""
# Retrieve information from "cache"
(a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters) = cache
### START CODE HERE ###
# Retrieve dimensions from xt's and a_next's shape (≈2 lines)
n_x, m = xt.shape
n_a, m = a_next.shape
# Compute gates related derivatives; their values can be found by looking carefully at equations (7) to (10) (≈4 lines)
dot = da_next * np.tanh(c_next) * ot * (1 - ot)
dcct = (dc_next * it + ot * (1 - np.square(np.tanh(c_next))) * it * da_next) * (1 - np.square(cct))
dit = (dc_next * cct + ot * (1 - np.square(np.tanh(c_next))) * cct * da_next) * it * (1 - it)
dft = (dc_next * c_prev + ot *(1 - np.square(np.tanh(c_next))) * c_prev * da_next) * ft * (1 - ft)
concat = np.concatenate((a_prev, xt), axis=0)
# Compute parameters related derivatives. Use equations (11)-(14) (≈8 lines)
dWf = np.dot(dft, concat.T)
dWi = np.dot(dit, concat.T)
dWc = np.dot(dcct, concat.T)
dWo = np.dot(dot, concat.T)
dbf = np.sum(dft, axis=1 ,keepdims = True)
dbi = np.sum(dit, axis=1, keepdims = True)
dbc = np.sum(dcct, axis=1, keepdims = True)
dbo = np.sum(dot, axis=1, keepdims = True)
# Compute derivatives w.r.t previous hidden state, previous memory state and input. Use equations (15)-(17). (≈3 lines)
da_prev = np.dot(parameters['Wf'][:, :n_a].T, dft) + np.dot(parameters['Wi'][:, :n_a].T, dit) + np.dot(parameters['Wc'][:, :n_a].T, dcct) + np.dot(parameters['Wo'][:, :n_a].T, dot)
dc_prev = dc_next * ft + ot * (1 - np.square(np.tanh(c_next))) * ft * da_next
dxt = np.dot(parameters['Wf'][:, n_a:].T, dft) + np.dot(parameters['Wi'][:, n_a:].T, dit) + np.dot(parameters['Wc'][:, n_a:].T, dcct) + np.dot(parameters['Wo'][:, n_a:].T, dot)
### END CODE HERE ###
# Save gradients in dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dc_prev": dc_prev, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
da_next = np.random.randn(5,10)
dc_next = np.random.randn(5,10)
gradients = lstm_cell_backward(da_next, dc_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dc_prev\"][2][3] =", gradients["dc_prev"][2][3])
print("gradients[\"dc_prev\"].shape =", gradients["dc_prev"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)
###Output
gradients["dxt"][1][2] = 3.23055911511
gradients["dxt"].shape = (3, 10)
gradients["da_prev"][2][3] = -0.0639621419711
gradients["da_prev"].shape = (5, 10)
gradients["dc_prev"][2][3] = 0.797522038797
gradients["dc_prev"].shape = (5, 10)
gradients["dWf"][3][1] = -0.147954838164
gradients["dWf"].shape = (5, 8)
gradients["dWi"][1][2] = 1.05749805523
gradients["dWi"].shape = (5, 8)
gradients["dWc"][3][1] = 2.30456216369
gradients["dWc"].shape = (5, 8)
gradients["dWo"][1][2] = 0.331311595289
gradients["dWo"].shape = (5, 8)
gradients["dbf"][4] = [ 0.18864637]
gradients["dbf"].shape = (5, 1)
gradients["dbi"][4] = [-0.40142491]
gradients["dbi"].shape = (5, 1)
gradients["dbc"][4] = [ 0.25587763]
gradients["dbc"].shape = (5, 1)
gradients["dbo"][4] = [ 0.13893342]
gradients["dbo"].shape = (5, 1)
###Markdown
**Expected Output**: **gradients["dxt"][1][2]** = 3.23055911511 **gradients["dxt"].shape** = (3, 10) **gradients["da_prev"][2][3]** = -0.0639621419711 **gradients["da_prev"].shape** = (5, 10) **gradients["dc_prev"][2][3]** = 0.797522038797 **gradients["dc_prev"].shape** = (5, 10) **gradients["dWf"][3][1]** = -0.147954838164 **gradients["dWf"].shape** = (5, 8) **gradients["dWi"][1][2]** = 1.05749805523 **gradients["dWi"].shape** = (5, 8) **gradients["dWc"][3][1]** = 2.30456216369 **gradients["dWc"].shape** = (5, 8) **gradients["dWo"][1][2]** = 0.331311595289 **gradients["dWo"].shape** = (5, 8) **gradients["dbf"][4]** = [ 0.18864637] **gradients["dbf"].shape** = (5, 1) **gradients["dbi"][4]** = [-0.40142491] **gradients["dbi"].shape** = (5, 1) **gradients["dbc"][4]** = [ 0.25587763] **gradients["dbc"].shape** = (5, 1) **gradients["dbo"][4]** = [ 0.13893342] **gradients["dbo"].shape** = (5, 1) 3.3 Backward pass through the LSTM RNNThis part is very similar to the `rnn_backward` function you implemented above. You will first create variables of the same dimension as your return variables. You will then iterate over all the time steps starting from the end and call the one step function you implemented for LSTM at each iteration. You will then update the parameters by summing them individually. Finally return a dictionary with the new gradients. **Instructions**: Implement the `lstm_backward` function. Create a for loop starting from $T_x$ and going backward. For each step call `lstm_cell_backward` and update the your old gradients by adding the new gradients to them. Note that `dxt` is not updated but is stored.
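Before looping over the whole sequence, the single-step `lstm_cell_backward` above can be sanity-checked numerically (an optional addition, not part of the original exercise). For the scalar $J = \sum (da_{next} * a^{\langle t \rangle}) + \sum (dc_{next} * c^{\langle t \rangle})$, the analytic `dWf` should agree with a centered finite-difference estimate; the sketch reuses `xt`, `a_prev`, `c_prev`, `parameters`, `da_next`, `dc_next` and `gradients` from the test cell above.
###Code
# Optional numerical gradient check for one entry of dWf (illustrative, not graded)
epsilon = 1e-5
i, j = 1, 4                                   # an arbitrary entry of Wf to check
old_value = parameters["Wf"][i, j]
parameters["Wf"][i, j] = old_value + epsilon
a_plus, c_plus, _, _ = lstm_cell_forward(xt, a_prev, c_prev, parameters)
parameters["Wf"][i, j] = old_value - epsilon
a_minus, c_minus, _, _ = lstm_cell_forward(xt, a_prev, c_prev, parameters)
parameters["Wf"][i, j] = old_value            # restore the original weight
J_plus = np.sum(da_next * a_plus) + np.sum(dc_next * c_plus)
J_minus = np.sum(da_next * a_minus) + np.sum(dc_next * c_minus)
grad_numeric = (J_plus - J_minus) / (2 * epsilon)
print("analytic  dWf[{},{}] =".format(i, j), gradients["dWf"][i, j])
print("numerical dWf[{},{}] =".format(i, j), grad_numeric)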
###Code
def lstm_backward(da, caches):
"""
Implement the backward pass for the RNN with LSTM-cell (over a whole sequence).
Arguments:
da -- Gradients w.r.t the hidden states, numpy-array of shape (n_a, m, T_x)
caches -- cache storing information from the forward pass (lstm_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient of inputs, of shape (n_x, m, T_x)
da0 -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)
"""
# Retrieve values from the first cache (t=1) of caches.
(caches, x) = caches
(a1, c1, a0, c0, f1, i1, cc1, o1, x1, parameters) = caches[0]
### START CODE HERE ###
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = da.shape
n_x, m = x1.shape
# initialize the gradients with the right sizes (≈12 lines)
dx = np.zeros((n_x, m, T_x))
da0 = np.zeros((n_a, m))
da_prevt = np.zeros(da0.shape)
dc_prevt = np.zeros(da0.shape)
dWf = np.zeros((n_a, n_a + n_x))
dWi = np.zeros(dWf.shape)
dWc = np.zeros(dWf.shape)
dWo = np.zeros(dWf.shape)
dbf = np.zeros((n_a, 1))
dbi = np.zeros(dbf.shape)
dbc = np.zeros(dbf.shape)
dbo = np.zeros(dbf.shape)
# loop back over the whole sequence
for t in reversed(range(T_x)):
# Compute all gradients using lstm_cell_backward
gradients = lstm_cell_backward(da[:, :, t], dc_prevt, caches[t])
# Store or add the gradient to the parameters' previous step's gradient
dx[:,:,t] = gradients["dxt"]
dWf += gradients["dWf"]
dWi += gradients["dWi"]
dWc += gradients["dWc"]
dWo += gradients["dWo"]
dbf += gradients["dbf"]
dbi += gradients["dbi"]
dbc += gradients["dbc"]
dbo += gradients["dbo"]
# Set the first activation's gradient to the backpropagated gradient da_prev.
da0 = gradients["da_prev"]
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a, y, c, caches = lstm_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)
gradients = lstm_backward(da, caches)
print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)
###Output
gradients["dx"][1][2] = [-0.00173313 0.08287442 -0.30545663 -0.43281115]
gradients["dx"].shape = (3, 10, 4)
gradients["da0"][2][3] = -0.095911501954
gradients["da0"].shape = (5, 10)
gradients["dWf"][3][1] = -0.0698198561274
gradients["dWf"].shape = (5, 8)
gradients["dWi"][1][2] = 0.102371820249
gradients["dWi"].shape = (5, 8)
gradients["dWc"][3][1] = -0.0624983794927
gradients["dWc"].shape = (5, 8)
gradients["dWo"][1][2] = 0.0484389131444
gradients["dWo"].shape = (5, 8)
gradients["dbf"][4] = [-0.0565788]
gradients["dbf"].shape = (5, 1)
gradients["dbi"][4] = [-0.15399065]
gradients["dbi"].shape = (5, 1)
gradients["dbc"][4] = [-0.29691142]
gradients["dbc"].shape = (5, 1)
gradients["dbo"][4] = [-0.29798344]
gradients["dbo"].shape = (5, 1)
|
notebooks/2. Single Dataset - Univariate Analysis.ipynb | ###Markdown
Short IntroductionThis notebook conducts deeper univariate analysis to better understand feature correlation with win percent and to identify any statistical significance. Overview First of all we load all needed packages.
###Code
%run ./Libraries
###Output
_____no_output_____
###Markdown
Univariate Analysis
###Code
df = pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/candy-power-ranking/candy-data.csv')
df['competitorname'] = df.competitorname.apply(lambda l: l.replace('Õ', '\''))
display(df)
###Output
_____no_output_____
###Markdown
chocolateIn the top 20 there are only two non-chocolate candies:
###Code
df[['competitorname', 'chocolate', 'winpercent']].sort_values('winpercent', ascending=False)[:20]
###Output
_____no_output_____
###Markdown
The box plot shows the strong correlation of chocolate with winpercent:
###Code
fig = plt.figure()
sns.boxplot(data=df, x="chocolate", y="winpercent", palette="YlGn")
###Output
_____no_output_____
###Markdown
The difference of candies with and without chocolate with respect to winpercent is statistically significant:
###Code
print('p-value = {0:.10%}'.format(stats.ttest_ind(df[df.chocolate == 0].winpercent,
df[df.chocolate == 1].winpercent)[1]))
###Output
_____no_output_____
###Markdown
The p-value of the Student's t-test is the probability of observing a difference at least as large as the one in the data if there were actually no difference between the two groups. Usually, if the p-value is smaller than 5%, the difference between the two datasets is considered statistically significant. barAt first glance, the correlation between bar and winpercent seems interesting, but we can see two facts. First, there is only one bar without chocolate:
###Code
df[(df.bar == 1) & (df.chocolate == 0)]
###Output
_____no_output_____
###Markdown
Second, if we restrict the dataset to chocolate only, the correlation almost vanishes:
###Code
fig = plt.figure(figsize=(15,5))
ax = fig.add_subplot(1,2,1)
ax.set(title='Overall')
sns.heatmap(df[['bar', 'winpercent']].corr(), linewidth=0.5, center=0, cmap="YlGn", annot=True)
ax = fig.add_subplot(1,2,2)
ax.set(title='Chocolate only')
sns.heatmap(df[df.chocolate == 1][['bar', 'winpercent']].corr(), linewidth=0.5, center=0, cmap="YlGn", annot=True)
###Output
_____no_output_____
###Markdown
Let's take a look at the box plot:
###Code
fig = plt.figure()
plt.title('Chocolate only')
sns.boxplot(data=df[df.chocolate == 1], x="bar", y="winpercent", palette="YlGn")
###Output
_____no_output_____
###Markdown
We see there is only a small difference between chocolate candies in bar form and those not in bar form with respect to winpercent. So it is not surprising that the p-value is far away from the 5% threshold:
###Code
print('p-value = {0:.2%}'.format(stats.ttest_ind(df[(df.bar == 0) & (df.chocolate == 1)].winpercent,
df[(df.bar == 1) & (df.chocolate == 1)].winpercent)[1]))
###Output
_____no_output_____
###Markdown
peanutyalmondyBecause of the correlation between peanutyalmondy and chocolate we take a look at this first.
###Code
df[['chocolate', 'peanutyalmondy', 'competitorname']].groupby(['chocolate', 'peanutyalmondy'], as_index=False).count()
###Output
_____no_output_____
###Markdown
Since a peanutyalmondy candy almost always has chocolate, we restrict the analysis to candies with chocolate:
###Code
plt.figure()
sns.boxplot(data=df[df.chocolate == 1], x="peanutyalmondy", y="winpercent", palette="YlGn")
###Output
_____no_output_____
###Markdown
The above plot implies that candies with chocolate and peanut/almond perform better than candies with chocolate but without peanut/almond:
###Code
print('p-value = {0:.2%}'.format(stats.ttest_ind(df[(df.peanutyalmondy == 0) & (df.chocolate == 1)].winpercent,
df[(df.peanutyalmondy == 1) & (df.chocolate == 1)].winpercent)[1]))
###Output
_____no_output_____
###Markdown
FruitySince the correlation between fruity and chocolate is even larger, we look at this first.
###Code
df[['chocolate', 'fruity', 'competitorname']].groupby(['chocolate', 'fruity'], as_index=False).count()
###Output
_____no_output_____
###Markdown
There is only one candy which has chocolate and is fruity. So most candies either have chocolate or are fruity. Fruity and winpercent have a conspicuous negative correlation. Again we create a box plot to see the difference between fruity candies and the others.
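Before the box plot, that single candy which is both chocolate and fruity can be listed directly (an illustrative check added here):
###Code
# The one candy that has chocolate and is also fruity
df[(df.chocolate == 1) & (df.fruity == 1)]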
###Code
plt.figure()
sns.boxplot(data=df, x="fruity", y="winpercent", palette="YlGn")
###Output
_____no_output_____
###Markdown
We see that the difference between fruity and non-fruity candies is statistically significant:
###Code
print('p-value = {0:.2%}'.format(stats.ttest_ind(df[df.fruity == 0].winpercent,
df[df.fruity == 1].winpercent)[1]))
###Output
_____no_output_____
###Markdown
hardWe see that hard candies perform worse than other candies:
###Code
plt.figure()
sns.boxplot(data=df, x="hard", y="winpercent", palette="YlGn")
###Output
_____no_output_____
###Markdown
Again, this difference is statistically significant:
###Code
print('p-value = {0:.2%}'.format(stats.ttest_ind(df[df.hard == 0].winpercent,
df[df.hard == 1].winpercent)[1]))
###Output
_____no_output_____
###Markdown
Almost every hard candy does not have chocolate:
###Code
df[['chocolate', 'hard', 'competitorname']].groupby(['chocolate', 'hard'], as_index=False).count()
###Output
_____no_output_____
###Markdown
So we restrict the analysis to candies without chocolate, to see whether this changes the result from above:
###Code
fig = plt.figure()
sns.boxplot(data=df[df.chocolate == 0], x="hard", y="winpercent", palette="YlGn")
###Output
_____no_output_____
###Markdown
This difference is not statistically significant, because of the small number of observations.
###Code
print('p-value = {0:.2%}'.format(stats.ttest_ind(df[(df.hard == 0) & (df.chocolate == 0)].winpercent,
df[(df.hard == 1) & (df.chocolate == 0)].winpercent)[1]))
###Output
_____no_output_____
###Markdown
pluribusCandies from a box or bag with different candies perform worse than other candies:
###Code
fig = plt.figure()
sns.boxplot(data=df, x="pluribus", y="winpercent", palette="YlGn")
###Output
_____no_output_____
###Markdown
This difference is statistically significant:
###Code
print('p-value = {0:.2%}'.format(stats.ttest_ind(df[df.pluribus == 0].winpercent,
df[df.pluribus == 1].winpercent)[1]))
###Output
_____no_output_____
###Markdown
caramelCandies with caramel perform better than other candies:
###Code
fig = plt.figure()
sns.boxplot(data=df, x="caramel", y="winpercent", palette="YlGn")
###Output
_____no_output_____
###Markdown
This difference is statistically significant:
###Code
print('p-value = {0:.2%}'.format(stats.ttest_ind(df[df.caramel == 0].winpercent,
df[df.caramel == 1].winpercent)[1]))
###Output
_____no_output_____
###Markdown
pricepercentSince pricepercent differs for fruity and for chocolate, we separate the candies for the analysis.
###Code
def bars_winpercent(df, x, title, cnt_bars):
# plot one continuous variable against winpercent
factor = cnt_bars - 1
col = x  # remember which column to bin (previously pricepercent was always used, even for sugarpercent)
x = '%s_rounded' % x
price = pd.DataFrame({x: np.round(df[col]*factor)/factor, 'winpercent': df.winpercent})
fig = plt.figure(figsize=(15,5))
fig.suptitle(title)
ax = fig.add_subplot(1,2,1)
sns.barplot(x=x,
y='winpercent',
data=price.groupby(x, as_index=False).mean().sort_values(x),
palette='YlGn', ax=ax)
ax.set(ylabel='Mean winpercent')
ax = fig.add_subplot(1,2,2)
ax.set(ylabel='Count')
sns.barplot(x=x,
y='winpercent',
data=price.groupby(x, as_index=False).count().sort_values(x),
palette='YlGn', ax=ax)
ax.set(ylabel='Count')
bars_winpercent(df[df.chocolate == 1], 'pricepercent', 'Chocolate only', 5)
bars_winpercent(df[df.fruity == 1], 'pricepercent', 'Fruity only', 5)
###Output
_____no_output_____
###Markdown
sugarpercentDue to the correlation between sugar and fruity and between sugar and chocolate we separate fruity and chocolate.
###Code
bars_winpercent(df[df.chocolate == 1], 'sugarpercent', 'Chocolate only', 5)
bars_winpercent(df[df.fruity == 1], 'sugarpercent', 'Fruity only', 5)
###Output
_____no_output_____
###Markdown
Candies with chocolate tend to have slightly more sugar than fruity candies, but the influence seems very low. SummaryThe univariate analysis shows that chocolate, peanutyalmondy, and caramel are the most important features. Candies with chocolate and peanutyalmondy or with chocolate and caramel lead to the best performing candies.
###Code
###Output
_____no_output_____ |
dev-env.ipynb | ###Markdown
Environment setup- jupyter lab- calysto_scheme- calysto- gnuplot (install libcairo-2.dll and add it to the Path)
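A possible command-line setup is sketched below; the package names and the kernel-registration command are assumptions based on the projects linked in the next cell, and gnuplot itself is installed separately with its directory added to the PATH.
###Code
# Possible setup commands (package names and kernel-install command are assumed, not verified here)
!pip install jupyterlab calysto-scheme calysto
# Register the Calysto Scheme kernel with Jupyter (assumed command)
!python -m calysto_scheme install --user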
###Code
# References:
# https://notebook.community/dsblank/ProgLangBook/Reference%20Guide%20for%20Calysto%20Scheme
# https://github.com/Calysto/calysto/blob/master/calysto/graphics.py
###Output
_____no_output_____ |
Matplotlib/bayes_update_sgskip.ipynb | ###Markdown
The Bayes updateThis animation shows how the posterior estimate is updated (refitted) as new data arrive. The vertical line represents the theoretical value to which the plotted distribution should converge.
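Concretely, the curve being animated is the Beta posterior over the success probability under a uniform $\mathrm{Beta}(1, 1)$ prior: after observing $s$ successes in $i$ Bernoulli trials the posterior is $\mathrm{Beta}(s + 1,\, i - s + 1)$, which is exactly what `ss.beta.pdf(self.x, self.success + 1, (i - self.success) + 1)` evaluates in the code below.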
###Code
# update a distribution based on new data.
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as ss
from matplotlib.animation import FuncAnimation
class UpdateDist(object):
def __init__(self, ax, prob=0.5):
self.success = 0
self.prob = prob
self.line, = ax.plot([], [], 'k-')
self.x = np.linspace(0, 1, 200)
self.ax = ax
# Set up plot parameters
self.ax.set_xlim(0, 1)
self.ax.set_ylim(0, 15)
self.ax.grid(True)
# This vertical line represents the theoretical value, to
# which the plotted distribution should converge.
self.ax.axvline(prob, linestyle='--', color='black')
def init(self):
self.success = 0
self.line.set_data([], [])
return self.line,
def __call__(self, i):
# This way the plot can continuously run and we just keep
# watching new realizations of the process
if i == 0:
return self.init()
# Choose success based on exceed a threshold with a uniform pick
if np.random.rand(1,) < self.prob:
self.success += 1
y = ss.beta.pdf(self.x, self.success + 1, (i - self.success) + 1)
self.line.set_data(self.x, y)
return self.line,
# Fixing random state for reproducibility
np.random.seed(19680801)
fig, ax = plt.subplots()
ud = UpdateDist(ax, prob=0.7)
anim = FuncAnimation(fig, ud, frames=np.arange(100), init_func=ud.init,
interval=100, blit=True)
plt.show()
###Output
_____no_output_____ |
Experiments/20181031/subject_parser.ipynb | ###Markdown
Parse the files in the fmri and dmri folders and find subjects with both fmri and dmri scans
###Code
import os

import pandas as pd

# `import_edgelist` is assumed to come from graspy.utils, which this project uses for reading graphs
from graspy.utils import import_edgelist

fmri_path = '../../data/HBN/fmri/desikan/'
fmris = os.listdir(fmri_path)
dmri_path = '../../data/HBN/dwi/desikan/'
dmris = os.listdir(dmri_path)
subjects = [s.split('_')[0] for s in fmris]
subjects_unique = sorted(list(set(subjects)))
subjects_corrected = []
for subject in subjects_unique:
fmri_tmp = [f for f in fmris if subject in f]
dmri_tmp = [f for f in dmris if subject in f]
if (len(fmri_tmp) == 1) & (len(dmri_tmp) == 1):
subjects_corrected.append(subject)
dmris_corrected = []
fmris_corrected = []
for subject in subjects_corrected:
for i in dmris:
if subject in i:
dmris_corrected.append(i)
for i in fmris:
if subject in i:
fmris_corrected.append(i)
dmris_corrected
fmris_corrected
len(fmris_corrected), len(dmris_corrected)
for idx in range(293):
f = fmris_corrected[idx].split('_')
d = dmris_corrected[idx].split('_')
assert f[0] == d[0]
#assert f[1] == d[1]
###Output
_____no_output_____
###Markdown
Remove subjects with empty dwi or fmri scans
###Code
fmris = []
dmris = []
for idx, (fmri, dmri) in enumerate(zip(fmris_corrected, dmris_corrected)):
fmri_graph = import_edgelist(fmri_path + fmri)
dmri_graph = import_edgelist(dmri_path + dmri)
if fmri_graph.shape == dmri_graph.shape:
fmris.append(fmri)
dmris.append(dmri)
#fmri_graphs.append(fmri_graph)
#dmri_graphs.append(dmri_graph)
###Output
_____no_output_____
###Markdown
Remove subjects without any demographic information
###Code
subjects = [f.split('_')[0] for f in fmris]
subjects = [f.split('-')[1] for f in subjects]
len(subjects)
df = pd.read_csv('../../data/HBN_phenotypic_data/9994_Basic_Demos_20180927.csv')
df = df[['Patient_ID', 'Sex', 'Age']]
df.head()
df.loc[df['Patient_ID'].isin(subjects)].to_csv('./subject_information.csv', index=False)
###Output
_____no_output_____ |
Cats vs Dogs - Classifier.ipynb | ###Markdown
Setting up Environment
###Code
import os
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import plot_model
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
%matplotlib inline
tf.__version__
tf.test.is_gpu_available()
###Output
_____no_output_____
###Markdown
**Using Half Precision FP-16**
###Code
dtype='float16'
K.set_floatx(dtype)
# default is 1e-7 which is too small for float16. Without adjusting the epsilon, we will get NaN predictions because of divide by zero problems
K.set_epsilon(1e-4)
###Output
_____no_output_____
###Markdown
Getting the Data
###Code
filenames = os.listdir("train")
print("No of images: ", len(filenames))
category = []
for file in filenames:
if file[0] == 'd':
category.append('1')
if file[0] == 'c':
category.append('0')
data = pd.DataFrame({'Photo': filenames, 'Class': category})
data = data.sample(frac = 1, replace = False, random_state = 0)
data.reset_index(drop = True, inplace = True)
data.head()
###Output
_____no_output_____
###Markdown
Displaying 10 examples
###Code
fig = plt.figure(figsize = (20,10))
for i in range(10):
img = plt.imread("train/" + data['Photo'].iloc[i])
plt.subplot(2,5,i+1)
plt.imshow(img)
plt.show()
###Output
_____no_output_____
###Markdown
Splitting the Data into Train and Validation
###Code
train_data = data.iloc[:20000]
valid_data = data.iloc[20000:]
train_data.head()
valid_data.head()
###Output
_____no_output_____
###Markdown
Number of Trainable layers in a model
###Code
def num_trainable_layers(model):
count = 0
for layer in model.layers:
if layer.trainable == True:
count += 1
print("Number of Trainable Layers: ", count)
###Output
_____no_output_____
###Markdown
Creating the Train and Validation Image Generators
###Code
train_datagen = ImageDataGenerator(rotation_range = 30,
width_shift_range = 0.25,
height_shift_range = 0.25,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True,
brightness_range = [0.75, 1.25],
preprocessing_function = preprocess_input)
valid_datagen = ImageDataGenerator(preprocessing_function = preprocess_input)
train_generator = train_datagen.flow_from_dataframe(train_data,
directory = 'train/',
x_col = 'Photo',
y_col = 'Class',
target_size = (299,299),
class_mode = 'binary',
seed = 42,
batch_size = 64)
validation_generator = valid_datagen.flow_from_dataframe(valid_data,
directory = 'train/',
x_col = 'Photo',
y_col = 'Class',
target_size = (299,299),
class_mode = 'binary',
seed = 42,
batch_size = 64)
###Output
Found 20000 validated image filenames belonging to 2 classes.
Found 5000 validated image filenames belonging to 2 classes.
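###Markdown
As a quick check (added here, not in the original notebook), the label mapping that `flow_from_dataframe` inferred from the 'Class' column can be inspected; with `class_mode = 'binary'` the string labels are sorted, so '0' (cats) is expected to map to 0 and '1' (dogs) to 1.
###Code
# Inspect the inferred class-to-index mapping (illustrative check)
print(train_generator.class_indices)
print(validation_generator.class_indices)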
###Markdown
Using InceptionV3 as the base CNN model
###Code
inceptionv3 = InceptionV3(include_top = False,
weights = 'imagenet',
input_shape = (299, 299, 3),
pooling = 'avg',
classes = 2)
###Output
_____no_output_____
###Markdown
**Freezing the Base Model**
###Code
inceptionv3.trainable = False
###Output
_____no_output_____
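###Markdown
A quick check (added for illustration, not in the original notebook): with the base model frozen, its list of trainable weight tensors should be empty, so everything trained later comes only from the layers added on top.
###Code
# With the base frozen, its trainable weight list should be empty (illustrative check)
print("trainable weight tensors in base:    ", len(inceptionv3.trainable_weights))
print("non-trainable weight tensors in base:", len(inceptionv3.non_trainable_weights))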
###Markdown
**Adding Dense Layer at the end**
###Code
out = Dense(1, activation = 'sigmoid')(inceptionv3.output)
model = Model(inputs = inceptionv3.inputs, outputs = out)
num_trainable_layers(model)
model.summary()
###Output
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 299, 299, 3) 0
__________________________________________________________________________________________________
conv2d (Conv2D) (None, 149, 149, 32) 864 input_1[0][0]
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 149, 149, 32) 96 conv2d[0][0]
__________________________________________________________________________________________________
activation (Activation) (None, 149, 149, 32) 0 batch_normalization[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 147, 147, 32) 9216 activation[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 147, 147, 32) 96 conv2d_1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 147, 147, 32) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 147, 147, 64) 18432 activation_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 147, 147, 64) 192 conv2d_2[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, 147, 147, 64) 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 73, 73, 64) 0 activation_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 73, 73, 80) 5120 max_pooling2d[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 73, 73, 80) 240 conv2d_3[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, 73, 73, 80) 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 71, 71, 192) 138240 activation_3[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 71, 71, 192) 576 conv2d_4[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, 71, 71, 192) 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 35, 35, 192) 0 activation_4[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 35, 35, 64) 12288 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 35, 35, 64) 192 conv2d_8[0][0]
__________________________________________________________________________________________________
activation_8 (Activation) (None, 35, 35, 64) 0 batch_normalization_8[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 35, 35, 48) 9216 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 35, 35, 96) 55296 activation_8[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 35, 35, 48) 144 conv2d_6[0][0]
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 35, 35, 96) 288 conv2d_9[0][0]
__________________________________________________________________________________________________
activation_6 (Activation) (None, 35, 35, 48) 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
activation_9 (Activation) (None, 35, 35, 96) 0 batch_normalization_9[0][0]
__________________________________________________________________________________________________
average_pooling2d (AveragePooli (None, 35, 35, 192) 0 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 35, 35, 64) 12288 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 35, 35, 64) 76800 activation_6[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 35, 35, 96) 82944 activation_9[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 35, 35, 32) 6144 average_pooling2d[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 35, 35, 64) 192 conv2d_5[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 35, 35, 64) 192 conv2d_7[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 35, 35, 96) 288 conv2d_10[0][0]
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 35, 35, 32) 96 conv2d_11[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, 35, 35, 64) 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
activation_7 (Activation) (None, 35, 35, 64) 0 batch_normalization_7[0][0]
__________________________________________________________________________________________________
activation_10 (Activation) (None, 35, 35, 96) 0 batch_normalization_10[0][0]
__________________________________________________________________________________________________
activation_11 (Activation) (None, 35, 35, 32) 0 batch_normalization_11[0][0]
__________________________________________________________________________________________________
mixed0 (Concatenate) (None, 35, 35, 256) 0 activation_5[0][0]
activation_7[0][0]
activation_10[0][0]
activation_11[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D) (None, 35, 35, 64) 16384 mixed0[0][0]
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 35, 35, 64) 192 conv2d_15[0][0]
__________________________________________________________________________________________________
activation_15 (Activation) (None, 35, 35, 64) 0 batch_normalization_15[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 35, 35, 48) 12288 mixed0[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D) (None, 35, 35, 96) 55296 activation_15[0][0]
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 35, 35, 48) 144 conv2d_13[0][0]
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 35, 35, 96) 288 conv2d_16[0][0]
__________________________________________________________________________________________________
activation_13 (Activation) (None, 35, 35, 48) 0 batch_normalization_13[0][0]
__________________________________________________________________________________________________
activation_16 (Activation) (None, 35, 35, 96) 0 batch_normalization_16[0][0]
__________________________________________________________________________________________________
average_pooling2d_1 (AveragePoo (None, 35, 35, 256) 0 mixed0[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 35, 35, 64) 16384 mixed0[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D) (None, 35, 35, 64) 76800 activation_13[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, 35, 35, 96) 82944 activation_16[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, 35, 35, 64) 16384 average_pooling2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 35, 35, 64) 192 conv2d_12[0][0]
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 35, 35, 64) 192 conv2d_14[0][0]
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 35, 35, 96) 288 conv2d_17[0][0]
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, 35, 35, 64) 192 conv2d_18[0][0]
__________________________________________________________________________________________________
activation_12 (Activation) (None, 35, 35, 64) 0 batch_normalization_12[0][0]
__________________________________________________________________________________________________
activation_14 (Activation) (None, 35, 35, 64) 0 batch_normalization_14[0][0]
__________________________________________________________________________________________________
activation_17 (Activation) (None, 35, 35, 96) 0 batch_normalization_17[0][0]
__________________________________________________________________________________________________
activation_18 (Activation) (None, 35, 35, 64) 0 batch_normalization_18[0][0]
__________________________________________________________________________________________________
mixed1 (Concatenate) (None, 35, 35, 288) 0 activation_12[0][0]
activation_14[0][0]
activation_17[0][0]
activation_18[0][0]
__________________________________________________________________________________________________
conv2d_22 (Conv2D) (None, 35, 35, 64) 18432 mixed1[0][0]
__________________________________________________________________________________________________
batch_normalization_22 (BatchNo (None, 35, 35, 64) 192 conv2d_22[0][0]
__________________________________________________________________________________________________
activation_22 (Activation) (None, 35, 35, 64) 0 batch_normalization_22[0][0]
__________________________________________________________________________________________________
conv2d_20 (Conv2D) (None, 35, 35, 48) 13824 mixed1[0][0]
__________________________________________________________________________________________________
conv2d_23 (Conv2D) (None, 35, 35, 96) 55296 activation_22[0][0]
__________________________________________________________________________________________________
batch_normalization_20 (BatchNo (None, 35, 35, 48) 144 conv2d_20[0][0]
__________________________________________________________________________________________________
batch_normalization_23 (BatchNo (None, 35, 35, 96) 288 conv2d_23[0][0]
__________________________________________________________________________________________________
activation_20 (Activation) (None, 35, 35, 48) 0 batch_normalization_20[0][0]
__________________________________________________________________________________________________
activation_23 (Activation) (None, 35, 35, 96) 0 batch_normalization_23[0][0]
__________________________________________________________________________________________________
average_pooling2d_2 (AveragePoo (None, 35, 35, 288) 0 mixed1[0][0]
__________________________________________________________________________________________________
conv2d_19 (Conv2D) (None, 35, 35, 64) 18432 mixed1[0][0]
__________________________________________________________________________________________________
conv2d_21 (Conv2D) (None, 35, 35, 64) 76800 activation_20[0][0]
__________________________________________________________________________________________________
conv2d_24 (Conv2D) (None, 35, 35, 96) 82944 activation_23[0][0]
__________________________________________________________________________________________________
conv2d_25 (Conv2D) (None, 35, 35, 64) 18432 average_pooling2d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_19 (BatchNo (None, 35, 35, 64) 192 conv2d_19[0][0]
__________________________________________________________________________________________________
batch_normalization_21 (BatchNo (None, 35, 35, 64) 192 conv2d_21[0][0]
__________________________________________________________________________________________________
batch_normalization_24 (BatchNo (None, 35, 35, 96) 288 conv2d_24[0][0]
__________________________________________________________________________________________________
batch_normalization_25 (BatchNo (None, 35, 35, 64) 192 conv2d_25[0][0]
__________________________________________________________________________________________________
activation_19 (Activation) (None, 35, 35, 64) 0 batch_normalization_19[0][0]
__________________________________________________________________________________________________
activation_21 (Activation) (None, 35, 35, 64) 0 batch_normalization_21[0][0]
__________________________________________________________________________________________________
activation_24 (Activation) (None, 35, 35, 96) 0 batch_normalization_24[0][0]
__________________________________________________________________________________________________
activation_25 (Activation) (None, 35, 35, 64) 0 batch_normalization_25[0][0]
__________________________________________________________________________________________________
mixed2 (Concatenate) (None, 35, 35, 288) 0 activation_19[0][0]
activation_21[0][0]
activation_24[0][0]
activation_25[0][0]
__________________________________________________________________________________________________
conv2d_27 (Conv2D) (None, 35, 35, 64) 18432 mixed2[0][0]
__________________________________________________________________________________________________
batch_normalization_27 (BatchNo (None, 35, 35, 64) 192 conv2d_27[0][0]
__________________________________________________________________________________________________
activation_27 (Activation) (None, 35, 35, 64) 0 batch_normalization_27[0][0]
__________________________________________________________________________________________________
conv2d_28 (Conv2D) (None, 35, 35, 96) 55296 activation_27[0][0]
__________________________________________________________________________________________________
batch_normalization_28 (BatchNo (None, 35, 35, 96) 288 conv2d_28[0][0]
__________________________________________________________________________________________________
activation_28 (Activation) (None, 35, 35, 96) 0 batch_normalization_28[0][0]
__________________________________________________________________________________________________
conv2d_26 (Conv2D) (None, 17, 17, 384) 995328 mixed2[0][0]
__________________________________________________________________________________________________
conv2d_29 (Conv2D) (None, 17, 17, 96) 82944 activation_28[0][0]
__________________________________________________________________________________________________
batch_normalization_26 (BatchNo (None, 17, 17, 384) 1152 conv2d_26[0][0]
__________________________________________________________________________________________________
batch_normalization_29 (BatchNo (None, 17, 17, 96) 288 conv2d_29[0][0]
__________________________________________________________________________________________________
activation_26 (Activation) (None, 17, 17, 384) 0 batch_normalization_26[0][0]
__________________________________________________________________________________________________
activation_29 (Activation) (None, 17, 17, 96) 0 batch_normalization_29[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 17, 17, 288) 0 mixed2[0][0]
__________________________________________________________________________________________________
mixed3 (Concatenate) (None, 17, 17, 768) 0 activation_26[0][0]
activation_29[0][0]
max_pooling2d_2[0][0]
__________________________________________________________________________________________________
conv2d_34 (Conv2D) (None, 17, 17, 128) 98304 mixed3[0][0]
__________________________________________________________________________________________________
batch_normalization_34 (BatchNo (None, 17, 17, 128) 384 conv2d_34[0][0]
__________________________________________________________________________________________________
activation_34 (Activation) (None, 17, 17, 128) 0 batch_normalization_34[0][0]
__________________________________________________________________________________________________
conv2d_35 (Conv2D) (None, 17, 17, 128) 114688 activation_34[0][0]
__________________________________________________________________________________________________
batch_normalization_35 (BatchNo (None, 17, 17, 128) 384 conv2d_35[0][0]
__________________________________________________________________________________________________
activation_35 (Activation) (None, 17, 17, 128) 0 batch_normalization_35[0][0]
__________________________________________________________________________________________________
conv2d_31 (Conv2D) (None, 17, 17, 128) 98304 mixed3[0][0]
__________________________________________________________________________________________________
conv2d_36 (Conv2D) (None, 17, 17, 128) 114688 activation_35[0][0]
__________________________________________________________________________________________________
batch_normalization_31 (BatchNo (None, 17, 17, 128) 384 conv2d_31[0][0]
__________________________________________________________________________________________________
batch_normalization_36 (BatchNo (None, 17, 17, 128) 384 conv2d_36[0][0]
__________________________________________________________________________________________________
activation_31 (Activation) (None, 17, 17, 128) 0 batch_normalization_31[0][0]
__________________________________________________________________________________________________
activation_36 (Activation) (None, 17, 17, 128) 0 batch_normalization_36[0][0]
__________________________________________________________________________________________________
conv2d_32 (Conv2D) (None, 17, 17, 128) 114688 activation_31[0][0]
__________________________________________________________________________________________________
conv2d_37 (Conv2D) (None, 17, 17, 128) 114688 activation_36[0][0]
__________________________________________________________________________________________________
batch_normalization_32 (BatchNo (None, 17, 17, 128) 384 conv2d_32[0][0]
__________________________________________________________________________________________________
batch_normalization_37 (BatchNo (None, 17, 17, 128) 384 conv2d_37[0][0]
__________________________________________________________________________________________________
activation_32 (Activation) (None, 17, 17, 128) 0 batch_normalization_32[0][0]
__________________________________________________________________________________________________
activation_37 (Activation) (None, 17, 17, 128) 0 batch_normalization_37[0][0]
__________________________________________________________________________________________________
average_pooling2d_3 (AveragePoo (None, 17, 17, 768) 0 mixed3[0][0]
__________________________________________________________________________________________________
conv2d_30 (Conv2D) (None, 17, 17, 192) 147456 mixed3[0][0]
__________________________________________________________________________________________________
conv2d_33 (Conv2D) (None, 17, 17, 192) 172032 activation_32[0][0]
__________________________________________________________________________________________________
conv2d_38 (Conv2D) (None, 17, 17, 192) 172032 activation_37[0][0]
__________________________________________________________________________________________________
conv2d_39 (Conv2D) (None, 17, 17, 192) 147456 average_pooling2d_3[0][0]
__________________________________________________________________________________________________
batch_normalization_30 (BatchNo (None, 17, 17, 192) 576 conv2d_30[0][0]
__________________________________________________________________________________________________
batch_normalization_33 (BatchNo (None, 17, 17, 192) 576 conv2d_33[0][0]
__________________________________________________________________________________________________
batch_normalization_38 (BatchNo (None, 17, 17, 192) 576 conv2d_38[0][0]
__________________________________________________________________________________________________
batch_normalization_39 (BatchNo (None, 17, 17, 192) 576 conv2d_39[0][0]
__________________________________________________________________________________________________
activation_30 (Activation) (None, 17, 17, 192) 0 batch_normalization_30[0][0]
__________________________________________________________________________________________________
activation_33 (Activation) (None, 17, 17, 192) 0 batch_normalization_33[0][0]
__________________________________________________________________________________________________
activation_38 (Activation) (None, 17, 17, 192) 0 batch_normalization_38[0][0]
__________________________________________________________________________________________________
activation_39 (Activation) (None, 17, 17, 192) 0 batch_normalization_39[0][0]
__________________________________________________________________________________________________
mixed4 (Concatenate) (None, 17, 17, 768) 0 activation_30[0][0]
activation_33[0][0]
activation_38[0][0]
activation_39[0][0]
__________________________________________________________________________________________________
conv2d_44 (Conv2D) (None, 17, 17, 160) 122880 mixed4[0][0]
__________________________________________________________________________________________________
batch_normalization_44 (BatchNo (None, 17, 17, 160) 480 conv2d_44[0][0]
__________________________________________________________________________________________________
activation_44 (Activation) (None, 17, 17, 160) 0 batch_normalization_44[0][0]
__________________________________________________________________________________________________
conv2d_45 (Conv2D) (None, 17, 17, 160) 179200 activation_44[0][0]
__________________________________________________________________________________________________
batch_normalization_45 (BatchNo (None, 17, 17, 160) 480 conv2d_45[0][0]
__________________________________________________________________________________________________
activation_45 (Activation) (None, 17, 17, 160) 0 batch_normalization_45[0][0]
__________________________________________________________________________________________________
conv2d_41 (Conv2D) (None, 17, 17, 160) 122880 mixed4[0][0]
__________________________________________________________________________________________________
conv2d_46 (Conv2D) (None, 17, 17, 160) 179200 activation_45[0][0]
__________________________________________________________________________________________________
batch_normalization_41 (BatchNo (None, 17, 17, 160) 480 conv2d_41[0][0]
__________________________________________________________________________________________________
batch_normalization_46 (BatchNo (None, 17, 17, 160) 480 conv2d_46[0][0]
__________________________________________________________________________________________________
activation_41 (Activation) (None, 17, 17, 160) 0 batch_normalization_41[0][0]
__________________________________________________________________________________________________
activation_46 (Activation) (None, 17, 17, 160) 0 batch_normalization_46[0][0]
__________________________________________________________________________________________________
conv2d_42 (Conv2D) (None, 17, 17, 160) 179200 activation_41[0][0]
__________________________________________________________________________________________________
conv2d_47 (Conv2D) (None, 17, 17, 160) 179200 activation_46[0][0]
__________________________________________________________________________________________________
batch_normalization_42 (BatchNo (None, 17, 17, 160) 480 conv2d_42[0][0]
__________________________________________________________________________________________________
batch_normalization_47 (BatchNo (None, 17, 17, 160) 480 conv2d_47[0][0]
__________________________________________________________________________________________________
activation_42 (Activation) (None, 17, 17, 160) 0 batch_normalization_42[0][0]
__________________________________________________________________________________________________
activation_47 (Activation) (None, 17, 17, 160) 0 batch_normalization_47[0][0]
__________________________________________________________________________________________________
average_pooling2d_4 (AveragePoo (None, 17, 17, 768) 0 mixed4[0][0]
__________________________________________________________________________________________________
conv2d_40 (Conv2D) (None, 17, 17, 192) 147456 mixed4[0][0]
__________________________________________________________________________________________________
conv2d_43 (Conv2D) (None, 17, 17, 192) 215040 activation_42[0][0]
__________________________________________________________________________________________________
conv2d_48 (Conv2D) (None, 17, 17, 192) 215040 activation_47[0][0]
__________________________________________________________________________________________________
conv2d_49 (Conv2D) (None, 17, 17, 192) 147456 average_pooling2d_4[0][0]
__________________________________________________________________________________________________
batch_normalization_40 (BatchNo (None, 17, 17, 192) 576 conv2d_40[0][0]
__________________________________________________________________________________________________
batch_normalization_43 (BatchNo (None, 17, 17, 192) 576 conv2d_43[0][0]
__________________________________________________________________________________________________
batch_normalization_48 (BatchNo (None, 17, 17, 192) 576 conv2d_48[0][0]
__________________________________________________________________________________________________
batch_normalization_49 (BatchNo (None, 17, 17, 192) 576 conv2d_49[0][0]
__________________________________________________________________________________________________
activation_40 (Activation) (None, 17, 17, 192) 0 batch_normalization_40[0][0]
__________________________________________________________________________________________________
activation_43 (Activation) (None, 17, 17, 192) 0 batch_normalization_43[0][0]
__________________________________________________________________________________________________
activation_48 (Activation) (None, 17, 17, 192) 0 batch_normalization_48[0][0]
__________________________________________________________________________________________________
activation_49 (Activation) (None, 17, 17, 192) 0 batch_normalization_49[0][0]
__________________________________________________________________________________________________
mixed5 (Concatenate) (None, 17, 17, 768) 0 activation_40[0][0]
activation_43[0][0]
activation_48[0][0]
activation_49[0][0]
__________________________________________________________________________________________________
conv2d_54 (Conv2D) (None, 17, 17, 160) 122880 mixed5[0][0]
__________________________________________________________________________________________________
batch_normalization_54 (BatchNo (None, 17, 17, 160) 480 conv2d_54[0][0]
__________________________________________________________________________________________________
activation_54 (Activation) (None, 17, 17, 160) 0 batch_normalization_54[0][0]
__________________________________________________________________________________________________
conv2d_55 (Conv2D) (None, 17, 17, 160) 179200 activation_54[0][0]
__________________________________________________________________________________________________
batch_normalization_55 (BatchNo (None, 17, 17, 160) 480 conv2d_55[0][0]
__________________________________________________________________________________________________
activation_55 (Activation) (None, 17, 17, 160) 0 batch_normalization_55[0][0]
__________________________________________________________________________________________________
conv2d_51 (Conv2D) (None, 17, 17, 160) 122880 mixed5[0][0]
__________________________________________________________________________________________________
conv2d_56 (Conv2D) (None, 17, 17, 160) 179200 activation_55[0][0]
__________________________________________________________________________________________________
batch_normalization_51 (BatchNo (None, 17, 17, 160) 480 conv2d_51[0][0]
__________________________________________________________________________________________________
batch_normalization_56 (BatchNo (None, 17, 17, 160) 480 conv2d_56[0][0]
__________________________________________________________________________________________________
activation_51 (Activation) (None, 17, 17, 160) 0 batch_normalization_51[0][0]
__________________________________________________________________________________________________
activation_56 (Activation) (None, 17, 17, 160) 0 batch_normalization_56[0][0]
__________________________________________________________________________________________________
conv2d_52 (Conv2D) (None, 17, 17, 160) 179200 activation_51[0][0]
__________________________________________________________________________________________________
conv2d_57 (Conv2D) (None, 17, 17, 160) 179200 activation_56[0][0]
__________________________________________________________________________________________________
batch_normalization_52 (BatchNo (None, 17, 17, 160) 480 conv2d_52[0][0]
__________________________________________________________________________________________________
batch_normalization_57 (BatchNo (None, 17, 17, 160) 480 conv2d_57[0][0]
__________________________________________________________________________________________________
activation_52 (Activation) (None, 17, 17, 160) 0 batch_normalization_52[0][0]
__________________________________________________________________________________________________
activation_57 (Activation) (None, 17, 17, 160) 0 batch_normalization_57[0][0]
__________________________________________________________________________________________________
average_pooling2d_5 (AveragePoo (None, 17, 17, 768) 0 mixed5[0][0]
__________________________________________________________________________________________________
conv2d_50 (Conv2D) (None, 17, 17, 192) 147456 mixed5[0][0]
__________________________________________________________________________________________________
conv2d_53 (Conv2D) (None, 17, 17, 192) 215040 activation_52[0][0]
__________________________________________________________________________________________________
conv2d_58 (Conv2D) (None, 17, 17, 192) 215040 activation_57[0][0]
__________________________________________________________________________________________________
conv2d_59 (Conv2D) (None, 17, 17, 192) 147456 average_pooling2d_5[0][0]
__________________________________________________________________________________________________
batch_normalization_50 (BatchNo (None, 17, 17, 192) 576 conv2d_50[0][0]
__________________________________________________________________________________________________
batch_normalization_53 (BatchNo (None, 17, 17, 192) 576 conv2d_53[0][0]
__________________________________________________________________________________________________
batch_normalization_58 (BatchNo (None, 17, 17, 192) 576 conv2d_58[0][0]
__________________________________________________________________________________________________
batch_normalization_59 (BatchNo (None, 17, 17, 192) 576 conv2d_59[0][0]
__________________________________________________________________________________________________
activation_50 (Activation) (None, 17, 17, 192) 0 batch_normalization_50[0][0]
__________________________________________________________________________________________________
activation_53 (Activation) (None, 17, 17, 192) 0 batch_normalization_53[0][0]
__________________________________________________________________________________________________
activation_58 (Activation) (None, 17, 17, 192) 0 batch_normalization_58[0][0]
__________________________________________________________________________________________________
activation_59 (Activation) (None, 17, 17, 192) 0 batch_normalization_59[0][0]
__________________________________________________________________________________________________
mixed6 (Concatenate) (None, 17, 17, 768) 0 activation_50[0][0]
activation_53[0][0]
activation_58[0][0]
activation_59[0][0]
__________________________________________________________________________________________________
conv2d_64 (Conv2D) (None, 17, 17, 192) 147456 mixed6[0][0]
__________________________________________________________________________________________________
batch_normalization_64 (BatchNo (None, 17, 17, 192) 576 conv2d_64[0][0]
__________________________________________________________________________________________________
activation_64 (Activation) (None, 17, 17, 192) 0 batch_normalization_64[0][0]
__________________________________________________________________________________________________
conv2d_65 (Conv2D) (None, 17, 17, 192) 258048 activation_64[0][0]
__________________________________________________________________________________________________
batch_normalization_65 (BatchNo (None, 17, 17, 192) 576 conv2d_65[0][0]
__________________________________________________________________________________________________
activation_65 (Activation) (None, 17, 17, 192) 0 batch_normalization_65[0][0]
__________________________________________________________________________________________________
conv2d_61 (Conv2D) (None, 17, 17, 192) 147456 mixed6[0][0]
__________________________________________________________________________________________________
conv2d_66 (Conv2D) (None, 17, 17, 192) 258048 activation_65[0][0]
__________________________________________________________________________________________________
batch_normalization_61 (BatchNo (None, 17, 17, 192) 576 conv2d_61[0][0]
__________________________________________________________________________________________________
batch_normalization_66 (BatchNo (None, 17, 17, 192) 576 conv2d_66[0][0]
__________________________________________________________________________________________________
activation_61 (Activation) (None, 17, 17, 192) 0 batch_normalization_61[0][0]
__________________________________________________________________________________________________
activation_66 (Activation) (None, 17, 17, 192) 0 batch_normalization_66[0][0]
__________________________________________________________________________________________________
conv2d_62 (Conv2D) (None, 17, 17, 192) 258048 activation_61[0][0]
__________________________________________________________________________________________________
conv2d_67 (Conv2D) (None, 17, 17, 192) 258048 activation_66[0][0]
__________________________________________________________________________________________________
batch_normalization_62 (BatchNo (None, 17, 17, 192) 576 conv2d_62[0][0]
__________________________________________________________________________________________________
batch_normalization_67 (BatchNo (None, 17, 17, 192) 576 conv2d_67[0][0]
__________________________________________________________________________________________________
activation_62 (Activation) (None, 17, 17, 192) 0 batch_normalization_62[0][0]
__________________________________________________________________________________________________
activation_67 (Activation) (None, 17, 17, 192) 0 batch_normalization_67[0][0]
__________________________________________________________________________________________________
average_pooling2d_6 (AveragePoo (None, 17, 17, 768) 0 mixed6[0][0]
__________________________________________________________________________________________________
conv2d_60 (Conv2D) (None, 17, 17, 192) 147456 mixed6[0][0]
__________________________________________________________________________________________________
conv2d_63 (Conv2D) (None, 17, 17, 192) 258048 activation_62[0][0]
__________________________________________________________________________________________________
conv2d_68 (Conv2D) (None, 17, 17, 192) 258048 activation_67[0][0]
__________________________________________________________________________________________________
conv2d_69 (Conv2D) (None, 17, 17, 192) 147456 average_pooling2d_6[0][0]
__________________________________________________________________________________________________
batch_normalization_60 (BatchNo (None, 17, 17, 192) 576 conv2d_60[0][0]
__________________________________________________________________________________________________
batch_normalization_63 (BatchNo (None, 17, 17, 192) 576 conv2d_63[0][0]
__________________________________________________________________________________________________
batch_normalization_68 (BatchNo (None, 17, 17, 192) 576 conv2d_68[0][0]
__________________________________________________________________________________________________
batch_normalization_69 (BatchNo (None, 17, 17, 192) 576 conv2d_69[0][0]
__________________________________________________________________________________________________
activation_60 (Activation) (None, 17, 17, 192) 0 batch_normalization_60[0][0]
__________________________________________________________________________________________________
activation_63 (Activation) (None, 17, 17, 192) 0 batch_normalization_63[0][0]
__________________________________________________________________________________________________
activation_68 (Activation) (None, 17, 17, 192) 0 batch_normalization_68[0][0]
__________________________________________________________________________________________________
activation_69 (Activation) (None, 17, 17, 192) 0 batch_normalization_69[0][0]
__________________________________________________________________________________________________
mixed7 (Concatenate) (None, 17, 17, 768) 0 activation_60[0][0]
activation_63[0][0]
activation_68[0][0]
activation_69[0][0]
__________________________________________________________________________________________________
conv2d_72 (Conv2D) (None, 17, 17, 192) 147456 mixed7[0][0]
__________________________________________________________________________________________________
batch_normalization_72 (BatchNo (None, 17, 17, 192) 576 conv2d_72[0][0]
__________________________________________________________________________________________________
activation_72 (Activation) (None, 17, 17, 192) 0 batch_normalization_72[0][0]
__________________________________________________________________________________________________
conv2d_73 (Conv2D) (None, 17, 17, 192) 258048 activation_72[0][0]
__________________________________________________________________________________________________
batch_normalization_73 (BatchNo (None, 17, 17, 192) 576 conv2d_73[0][0]
__________________________________________________________________________________________________
activation_73 (Activation) (None, 17, 17, 192) 0 batch_normalization_73[0][0]
__________________________________________________________________________________________________
conv2d_70 (Conv2D) (None, 17, 17, 192) 147456 mixed7[0][0]
__________________________________________________________________________________________________
conv2d_74 (Conv2D) (None, 17, 17, 192) 258048 activation_73[0][0]
__________________________________________________________________________________________________
batch_normalization_70 (BatchNo (None, 17, 17, 192) 576 conv2d_70[0][0]
__________________________________________________________________________________________________
batch_normalization_74 (BatchNo (None, 17, 17, 192) 576 conv2d_74[0][0]
__________________________________________________________________________________________________
activation_70 (Activation) (None, 17, 17, 192) 0 batch_normalization_70[0][0]
__________________________________________________________________________________________________
activation_74 (Activation) (None, 17, 17, 192) 0 batch_normalization_74[0][0]
__________________________________________________________________________________________________
conv2d_71 (Conv2D) (None, 8, 8, 320) 552960 activation_70[0][0]
__________________________________________________________________________________________________
conv2d_75 (Conv2D) (None, 8, 8, 192) 331776 activation_74[0][0]
__________________________________________________________________________________________________
batch_normalization_71 (BatchNo (None, 8, 8, 320) 960 conv2d_71[0][0]
__________________________________________________________________________________________________
batch_normalization_75 (BatchNo (None, 8, 8, 192) 576 conv2d_75[0][0]
__________________________________________________________________________________________________
activation_71 (Activation) (None, 8, 8, 320) 0 batch_normalization_71[0][0]
__________________________________________________________________________________________________
activation_75 (Activation) (None, 8, 8, 192) 0 batch_normalization_75[0][0]
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 8, 8, 768) 0 mixed7[0][0]
__________________________________________________________________________________________________
mixed8 (Concatenate) (None, 8, 8, 1280) 0 activation_71[0][0]
activation_75[0][0]
max_pooling2d_3[0][0]
__________________________________________________________________________________________________
conv2d_80 (Conv2D) (None, 8, 8, 448) 573440 mixed8[0][0]
__________________________________________________________________________________________________
batch_normalization_80 (BatchNo (None, 8, 8, 448) 1344 conv2d_80[0][0]
__________________________________________________________________________________________________
activation_80 (Activation) (None, 8, 8, 448) 0 batch_normalization_80[0][0]
__________________________________________________________________________________________________
conv2d_77 (Conv2D) (None, 8, 8, 384) 491520 mixed8[0][0]
__________________________________________________________________________________________________
conv2d_81 (Conv2D) (None, 8, 8, 384) 1548288 activation_80[0][0]
__________________________________________________________________________________________________
batch_normalization_77 (BatchNo (None, 8, 8, 384) 1152 conv2d_77[0][0]
__________________________________________________________________________________________________
batch_normalization_81 (BatchNo (None, 8, 8, 384) 1152 conv2d_81[0][0]
__________________________________________________________________________________________________
activation_77 (Activation) (None, 8, 8, 384) 0 batch_normalization_77[0][0]
__________________________________________________________________________________________________
activation_81 (Activation) (None, 8, 8, 384) 0 batch_normalization_81[0][0]
__________________________________________________________________________________________________
conv2d_78 (Conv2D) (None, 8, 8, 384) 442368 activation_77[0][0]
__________________________________________________________________________________________________
conv2d_79 (Conv2D) (None, 8, 8, 384) 442368 activation_77[0][0]
__________________________________________________________________________________________________
conv2d_82 (Conv2D) (None, 8, 8, 384) 442368 activation_81[0][0]
__________________________________________________________________________________________________
conv2d_83 (Conv2D) (None, 8, 8, 384) 442368 activation_81[0][0]
__________________________________________________________________________________________________
average_pooling2d_7 (AveragePoo (None, 8, 8, 1280) 0 mixed8[0][0]
__________________________________________________________________________________________________
conv2d_76 (Conv2D) (None, 8, 8, 320) 409600 mixed8[0][0]
__________________________________________________________________________________________________
batch_normalization_78 (BatchNo (None, 8, 8, 384) 1152 conv2d_78[0][0]
__________________________________________________________________________________________________
batch_normalization_79 (BatchNo (None, 8, 8, 384) 1152 conv2d_79[0][0]
__________________________________________________________________________________________________
batch_normalization_82 (BatchNo (None, 8, 8, 384) 1152 conv2d_82[0][0]
__________________________________________________________________________________________________
batch_normalization_83 (BatchNo (None, 8, 8, 384) 1152 conv2d_83[0][0]
__________________________________________________________________________________________________
conv2d_84 (Conv2D) (None, 8, 8, 192) 245760 average_pooling2d_7[0][0]
__________________________________________________________________________________________________
batch_normalization_76 (BatchNo (None, 8, 8, 320) 960 conv2d_76[0][0]
__________________________________________________________________________________________________
activation_78 (Activation) (None, 8, 8, 384) 0 batch_normalization_78[0][0]
__________________________________________________________________________________________________
activation_79 (Activation) (None, 8, 8, 384) 0 batch_normalization_79[0][0]
__________________________________________________________________________________________________
activation_82 (Activation) (None, 8, 8, 384) 0 batch_normalization_82[0][0]
__________________________________________________________________________________________________
activation_83 (Activation) (None, 8, 8, 384) 0 batch_normalization_83[0][0]
__________________________________________________________________________________________________
batch_normalization_84 (BatchNo (None, 8, 8, 192) 576 conv2d_84[0][0]
__________________________________________________________________________________________________
activation_76 (Activation) (None, 8, 8, 320) 0 batch_normalization_76[0][0]
__________________________________________________________________________________________________
mixed9_0 (Concatenate) (None, 8, 8, 768) 0 activation_78[0][0]
activation_79[0][0]
__________________________________________________________________________________________________
concatenate (Concatenate) (None, 8, 8, 768) 0 activation_82[0][0]
activation_83[0][0]
__________________________________________________________________________________________________
activation_84 (Activation) (None, 8, 8, 192) 0 batch_normalization_84[0][0]
__________________________________________________________________________________________________
mixed9 (Concatenate) (None, 8, 8, 2048) 0 activation_76[0][0]
mixed9_0[0][0]
concatenate[0][0]
activation_84[0][0]
__________________________________________________________________________________________________
conv2d_89 (Conv2D) (None, 8, 8, 448) 917504 mixed9[0][0]
__________________________________________________________________________________________________
batch_normalization_89 (BatchNo (None, 8, 8, 448) 1344 conv2d_89[0][0]
__________________________________________________________________________________________________
activation_89 (Activation) (None, 8, 8, 448) 0 batch_normalization_89[0][0]
__________________________________________________________________________________________________
conv2d_86 (Conv2D) (None, 8, 8, 384) 786432 mixed9[0][0]
__________________________________________________________________________________________________
conv2d_90 (Conv2D) (None, 8, 8, 384) 1548288 activation_89[0][0]
__________________________________________________________________________________________________
batch_normalization_86 (BatchNo (None, 8, 8, 384) 1152 conv2d_86[0][0]
__________________________________________________________________________________________________
batch_normalization_90 (BatchNo (None, 8, 8, 384) 1152 conv2d_90[0][0]
__________________________________________________________________________________________________
activation_86 (Activation) (None, 8, 8, 384) 0 batch_normalization_86[0][0]
__________________________________________________________________________________________________
activation_90 (Activation) (None, 8, 8, 384) 0 batch_normalization_90[0][0]
__________________________________________________________________________________________________
conv2d_87 (Conv2D) (None, 8, 8, 384) 442368 activation_86[0][0]
__________________________________________________________________________________________________
conv2d_88 (Conv2D) (None, 8, 8, 384) 442368 activation_86[0][0]
__________________________________________________________________________________________________
conv2d_91 (Conv2D) (None, 8, 8, 384) 442368 activation_90[0][0]
__________________________________________________________________________________________________
conv2d_92 (Conv2D) (None, 8, 8, 384) 442368 activation_90[0][0]
__________________________________________________________________________________________________
average_pooling2d_8 (AveragePoo (None, 8, 8, 2048) 0 mixed9[0][0]
__________________________________________________________________________________________________
conv2d_85 (Conv2D) (None, 8, 8, 320) 655360 mixed9[0][0]
__________________________________________________________________________________________________
batch_normalization_87 (BatchNo (None, 8, 8, 384) 1152 conv2d_87[0][0]
__________________________________________________________________________________________________
batch_normalization_88 (BatchNo (None, 8, 8, 384) 1152 conv2d_88[0][0]
__________________________________________________________________________________________________
batch_normalization_91 (BatchNo (None, 8, 8, 384) 1152 conv2d_91[0][0]
__________________________________________________________________________________________________
batch_normalization_92 (BatchNo (None, 8, 8, 384) 1152 conv2d_92[0][0]
__________________________________________________________________________________________________
conv2d_93 (Conv2D) (None, 8, 8, 192) 393216 average_pooling2d_8[0][0]
__________________________________________________________________________________________________
batch_normalization_85 (BatchNo (None, 8, 8, 320) 960 conv2d_85[0][0]
__________________________________________________________________________________________________
activation_87 (Activation) (None, 8, 8, 384) 0 batch_normalization_87[0][0]
__________________________________________________________________________________________________
activation_88 (Activation) (None, 8, 8, 384) 0 batch_normalization_88[0][0]
__________________________________________________________________________________________________
activation_91 (Activation) (None, 8, 8, 384) 0 batch_normalization_91[0][0]
__________________________________________________________________________________________________
activation_92 (Activation) (None, 8, 8, 384) 0 batch_normalization_92[0][0]
__________________________________________________________________________________________________
batch_normalization_93 (BatchNo (None, 8, 8, 192) 576 conv2d_93[0][0]
__________________________________________________________________________________________________
activation_85 (Activation) (None, 8, 8, 320) 0 batch_normalization_85[0][0]
__________________________________________________________________________________________________
mixed9_1 (Concatenate) (None, 8, 8, 768) 0 activation_87[0][0]
activation_88[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 8, 8, 768) 0 activation_91[0][0]
activation_92[0][0]
__________________________________________________________________________________________________
activation_93 (Activation) (None, 8, 8, 192) 0 batch_normalization_93[0][0]
__________________________________________________________________________________________________
mixed10 (Concatenate) (None, 8, 8, 2048) 0 activation_85[0][0]
mixed9_1[0][0]
concatenate_1[0][0]
activation_93[0][0]
__________________________________________________________________________________________________
global_average_pooling2d (Globa (None, 2048) 0 mixed10[0][0]
__________________________________________________________________________________________________
dense (Dense) (None, 1) 2049 global_average_pooling2d[0][0]
==================================================================================================
Total params: 21,804,833
Trainable params: 2,049
Non-trainable params: 21,802,784
__________________________________________________________________________________________________
###Markdown
Compiling and Training the Model
###Code
optim = tf.keras.optimizers.Adam(lr = 0.001)
model.compile(optimizer = optim, loss = 'binary_crossentropy', metrics = ['accuracy'])
###Output
_____no_output_____
###Markdown
**Training only the Last (Dense) layer**
###Code
hist = model.fit_generator(train_generator,
steps_per_epoch = len(train_generator),
epochs = 20,
validation_data = validation_generator,
verbose = 2,
validation_steps = len(validation_generator),
validation_freq = 1)
###Output
Epoch 1/20
###Markdown
Saving the Model
###Code
model.save("model_before_finetuning.h5")
###Output
_____no_output_____
###Markdown
We save the model in case we want to fine-tune it at a later time. In that case, re-run the first cell to import the necessary modules and initialize the Training and Validation Image Data Generators, then load the saved model by running the following cell. Loading the Model
###Code
model = load_model("model_before_finetuning.h5")
###Output
WARNING: Logging before flag parsing goes to stderr.
W0920 18:56:22.156484 17052 deprecation.py:323] From C:\Users\RAJDEEP\Anaconda\Anaconda3\lib\site-packages\tensorflow\python\ops\math_grad.py:1250: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
###Markdown
Finetuning the Model **Unfreezing the Base Model**
###Code
model.trainable = True
num_trainable_layers(model)
###Output
Number of Trainable Layers: 313
###Markdown
**Using Checkpoints**
###Code
checkpoint = ModelCheckpoint('checkpoint.h5',
monitor = 'val_accuracy',
verbose = 0,
save_best_only = True,
save_weights_only = False,
mode = 'max',
save_freq = 'epoch')
###Output
_____no_output_____
###Markdown
**Re-Compiling the Model**
###Code
optim = tf.keras.optimizers.Adam(lr = 0.00001)
model.compile(optimizer = optim, loss = 'binary_crossentropy', metrics = ['accuracy'])
###Output
_____no_output_____
###Markdown
**Training all the layers**
###Code
hist = model.fit_generator(train_generator,
steps_per_epoch = len(train_generator),
epochs = 30,
callbacks = [checkpoint],
validation_data = validation_generator,
verbose = 2,
validation_steps = len(validation_generator),
validation_freq = 1)
###Output
Epoch 1/30
313/313 - 619s - loss: 0.0953 - accuracy: 0.9800 - val_loss: 0.0456 - val_accuracy: 0.9849
Epoch 2/30
313/313 - 530s - loss: 0.0851 - accuracy: 0.9839 - val_loss: 0.0473 - val_accuracy: 0.9824
Epoch 3/30
313/313 - 534s - loss: 0.0883 - accuracy: 0.9814 - val_loss: 0.0455 - val_accuracy: 0.9824
Epoch 4/30
313/313 - 524s - loss: 0.0827 - accuracy: 0.9849 - val_loss: 0.0455 - val_accuracy: 0.9858
Epoch 5/30
313/313 - 528s - loss: 0.0839 - accuracy: 0.9839 - val_loss: 0.0445 - val_accuracy: 0.9873
Epoch 6/30
313/313 - 525s - loss: 0.0796 - accuracy: 0.9839 - val_loss: 0.0454 - val_accuracy: 0.9858
Epoch 7/30
313/313 - 543s - loss: 0.0795 - accuracy: 0.9858 - val_loss: 0.0429 - val_accuracy: 0.9858
Epoch 8/30
313/313 - 543s - loss: 0.0740 - accuracy: 0.9863 - val_loss: 0.0435 - val_accuracy: 0.9858
Epoch 9/30
313/313 - 538s - loss: 0.0738 - accuracy: 0.9863 - val_loss: 0.0413 - val_accuracy: 0.9873
Epoch 10/30
313/313 - 533s - loss: 0.0826 - accuracy: 0.9834 - val_loss: 0.0390 - val_accuracy: 0.9873
Epoch 11/30
313/313 - 535s - loss: 0.0671 - accuracy: 0.9888 - val_loss: 0.0382 - val_accuracy: 0.9873
Epoch 12/30
313/313 - 547s - loss: 0.0712 - accuracy: 0.9878 - val_loss: 0.0387 - val_accuracy: 0.9873
Epoch 13/30
313/313 - 528s - loss: 0.0647 - accuracy: 0.9912 - val_loss: 0.0372 - val_accuracy: 0.9888
Epoch 14/30
313/313 - 526s - loss: 0.0680 - accuracy: 0.9897 - val_loss: 0.0371 - val_accuracy: 0.9888
Epoch 15/30
313/313 - 533s - loss: 0.0657 - accuracy: 0.9888 - val_loss: 0.0375 - val_accuracy: 0.9888
Epoch 16/30
313/313 - 528s - loss: 0.0654 - accuracy: 0.9878 - val_loss: 0.0362 - val_accuracy: 0.9888
Epoch 17/30
313/313 - 534s - loss: 0.0636 - accuracy: 0.9912 - val_loss: 0.0374 - val_accuracy: 0.9888
Epoch 18/30
313/313 - 531s - loss: 0.0636 - accuracy: 0.9927 - val_loss: 0.0357 - val_accuracy: 0.9888
Epoch 19/30
313/313 - 538s - loss: 0.0597 - accuracy: 0.9912 - val_loss: 0.0364 - val_accuracy: 0.9888
Epoch 20/30
313/313 - 533s - loss: 0.0607 - accuracy: 0.9902 - val_loss: 0.0364 - val_accuracy: 0.9888
Epoch 21/30
313/313 - 531s - loss: 0.0643 - accuracy: 0.9888 - val_loss: 0.0350 - val_accuracy: 0.9888
Epoch 22/30
313/313 - 538s - loss: 0.0596 - accuracy: 0.9912 - val_loss: 0.0349 - val_accuracy: 0.9888
Epoch 23/30
313/313 - 538s - loss: 0.0565 - accuracy: 0.9912 - val_loss: 0.0335 - val_accuracy: 0.9902
Epoch 24/30
313/313 - 536s - loss: 0.0564 - accuracy: 0.9912 - val_loss: 0.0341 - val_accuracy: 0.9902
Epoch 25/30
313/313 - 547s - loss: 0.0559 - accuracy: 0.9922 - val_loss: 0.0334 - val_accuracy: 0.9902
Epoch 26/30
313/313 - 572s - loss: 0.0518 - accuracy: 0.9902 - val_loss: 0.0326 - val_accuracy: 0.9902
Epoch 27/30
313/313 - 525s - loss: 0.0517 - accuracy: 0.9922 - val_loss: 0.0326 - val_accuracy: 0.9902
Epoch 28/30
313/313 - 529s - loss: 0.0531 - accuracy: 0.9922 - val_loss: 0.0328 - val_accuracy: 0.9902
Epoch 29/30
313/313 - 534s - loss: 0.0511 - accuracy: 0.9902 - val_loss: 0.0330 - val_accuracy: 0.9922
Epoch 30/30
313/313 - 547s - loss: 0.0543 - accuracy: 0.9912 - val_loss: 0.0332 - val_accuracy: 0.9922
###Markdown
Saving the Final Model Since the model attained its best score in terms of both Training and Validation Accuracy after the last epoch, we save it and use it to predict the Test Cases.
###Code
model.save("final_model.h5")
###Output
_____no_output_____
###Markdown
Evaluating the Model
###Code
train_loss, train_acc = model.evaluate_generator(train_generator, steps = len(train_generator))
print("Train Loss: ", train_loss)
print("Train Acc: ", train_acc)
validation_loss, validation_acc = model.evaluate_generator(validation_generator, steps = len(validation_generator))
print("Validation Loss: ", validation_loss)
print("Validation Acc: ", validation_acc)
###Output
Validation Loss: 0.033245949805537356
Validation Acc: 0.992
###Markdown
Getting the Test Data
###Code
test_filenames = os.listdir("test")
print("No of images: ", len(test_filenames))
test_data = pd.DataFrame({'Photo': test_filenames})
test_data.reset_index(drop = True, inplace = True)
test_data.head()
###Output
_____no_output_____
###Markdown
Creating the Test Image Generators
###Code
test_datagen = ImageDataGenerator(preprocessing_function = preprocess_input)
test_generator = test_datagen.flow_from_dataframe(test_data,
directory = 'test',
x_col = 'Photo',
target_size = (299,299),
class_mode = None,
shuffle = False,
batch_size = 64)
###Output
Found 12500 validated image filenames.
###Markdown
Predicting the Test Examples
###Code
out = model.predict_generator(test_generator,
steps = len(test_generator),
verbose=0)
print("Shape of Test Prediction Array:", out.shape)
###Output
Shape of Test Prediction Array: (12500, 1)
###Markdown
Creating the Output CSV
###Code
df = pd.DataFrame({'id': test_filenames, 'label':out.squeeze()})
df.head()
df['id'] = df['id'].str.split(".", expand = True).iloc[:,0].astype(int)
df = df.sort_values('id').reset_index(drop = True)
df.head()
df.to_csv("out.csv", index = False)
###Output
_____no_output_____ |
Model backlog/Train/17-jigsaw-train-1fold-xlm-roberta-large.ipynb | ###Markdown
Dependencies
###Code
import json, warnings, shutil
from jigsaw_utility_scripts import *
from transformers import TFXLMRobertaModel, XLMRobertaConfig
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
TPU configuration
###Code
strategy, tpu = set_up_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
AUTO = tf.data.experimental.AUTOTUNE
###Output
Running on TPU grpc://10.0.0.2:8470
REPLICAS: 8
###Markdown
Load data
###Code
database_base_path = '/kaggle/input/jigsaw-dataset-split-pb-roberta-large-192/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
valid_df = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/validation.csv", usecols=['comment_text', 'toxic', 'lang'])
print('Train set samples: %d' % len(k_fold))
print('Validation set samples: %d' % len(valid_df))
display(k_fold.head())
# Unzip files
!tar -xvf /kaggle/input/jigsaw-dataset-split-pb-roberta-large-192/fold_1.tar.gz
# !tar -xvf /kaggle/input/jigsaw-dataset-split-pb-roberta-large-192/fold_2.tar.gz
# !tar -xvf /kaggle/input/jigsaw-dataset-split-pb-roberta-large-192/fold_3.tar.gz
# !tar -xvf /kaggle/input/jigsaw-dataset-split-pb-roberta-large-192/fold_4.tar.gz
# !tar -xvf /kaggle/input/jigsaw-dataset-split-pb-roberta-large-192/fold_5.tar.gz
###Output
Train set samples: 435775
Validation set samples: 8000
###Markdown
Model parameters
###Code
base_path = '/kaggle/input/jigsaw-transformers/XLM-RoBERTa/'
config = {
"MAX_LEN": 192,
"BATCH_SIZE": 16 * strategy.num_replicas_in_sync,
"EPOCHS": 2,
"LEARNING_RATE": 1e-5,
"ES_PATIENCE": 1,
"N_FOLDS": 1,
"base_model_path": base_path + 'tf-xlm-roberta-large-tf_model.h5',
"config_path": base_path + 'xlm-roberta-large-config.json'
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
###Output
_____no_output_____
###Markdown
Model
###Code
module_config = XLMRobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
base_model = TFXLMRobertaModel.from_pretrained(config['base_model_path'], config=module_config)
sequence_output = base_model({'input_ids': input_ids})
last_state = sequence_output[0]
cls_token = last_state[:, 0, :]
output = layers.Dense(1, activation='sigmoid', name='output')(cls_token)
model = Model(inputs=input_ids, outputs=output)
model.compile(optimizers.Adam(lr=config['LEARNING_RATE']),
loss=losses.BinaryCrossentropy(),
metrics=[metrics.BinaryAccuracy(), metrics.AUC()])
return model
###Output
_____no_output_____
###Markdown
Train
###Code
history_list = []
for n_fold in range(config['N_FOLDS']):
tf.tpu.experimental.initialize_tpu_system(tpu)
print('\nFOLD: %d' % (n_fold+1))
# Load data
base_data_path = 'fold_%d/' % (n_fold+1)
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train.npy')
x_valid = np.load(base_data_path + 'x_valid.npy')
x_valid_ml = np.load(database_base_path + 'x_valid.npy')
y_valid_ml = np.load(database_base_path + 'y_valid.npy')
step_size = x_train.shape[0] // config['BATCH_SIZE']
train_dataset = (
tf.data.Dataset
.from_tensor_slices((x_train, y_train))
.repeat()
.shuffle(2048)
.batch(config['BATCH_SIZE'])
.prefetch(AUTO)
)
valid_dataset = (
tf.data.Dataset
.from_tensor_slices((x_valid_ml, y_valid_ml))
.batch(config['BATCH_SIZE'])
.cache()
.prefetch(AUTO)
)
### Delete data dir
shutil.rmtree(base_data_path)
# Train model
model_path = 'model_fold_%d.h5' % (n_fold+1)
es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'],
restore_best_weights=True, verbose=1)
checkpoint = ModelCheckpoint(model_path, monitor='val_loss', mode='min',
save_best_only=True, save_weights_only=True, verbose=1)
with strategy.scope():
model = model_fn(config['MAX_LEN'])
# history = model.fit(get_training_dataset(x_train, y_train, config['BATCH_SIZE'], AUTO),
# validation_data=(get_validation_dataset(x_valid_ml, y_valid_ml, config['BATCH_SIZE'], AUTO)),
history = model.fit(train_dataset,
validation_data=(valid_dataset),
callbacks=[checkpoint, es],
epochs=config['EPOCHS'],
steps_per_epoch=step_size,
verbose=1).history
history_list.append(history)
# # Make predictions
# train_preds = model.predict(get_test_dataset(x_train, config['BATCH_SIZE'], AUTO))
# valid_preds = model.predict(get_test_dataset(x_valid, config['BATCH_SIZE'], AUTO))
# valid_ml_preds = model.predict(get_test_dataset(x_valid_ml, config['BATCH_SIZE'], AUTO))
# k_fold.loc[k_fold['fold_%d' % (n_fold+1)] == 'train', 'pred_%d' % (n_fold+1)] = np.round(train_preds)
# k_fold.loc[k_fold['fold_%d' % (n_fold+1)] == 'validation', 'pred_%d' % (n_fold+1)] = np.round(valid_preds)
# valid_df['pred_%d' % (n_fold+1)] = np.round(valid_ml_preds)
###Output
FOLD: 1
Train for 2723 steps, validate for 63 steps
Epoch 1/2
2722/2723 [============================>.] - ETA: 0s - loss: 0.0950 - binary_accuracy: 0.9648 - auc: 0.9913
Epoch 00001: val_loss improved from inf to 0.29396, saving model to model_fold_1.h5
2723/2723 [==============================] - 1532s 563ms/step - loss: 0.0950 - binary_accuracy: 0.9648 - auc: 0.9913 - val_loss: 0.2940 - val_binary_accuracy: 0.8583 - val_auc: 0.8986
Epoch 2/2
2722/2723 [============================>.] - ETA: 0s - loss: 0.0578 - binary_accuracy: 0.9767 - auc: 0.9970
Epoch 00002: val_loss improved from 0.29396 to 0.28796, saving model to model_fold_1.h5
2723/2723 [==============================] - 1318s 484ms/step - loss: 0.0578 - binary_accuracy: 0.9767 - auc: 0.9970 - val_loss: 0.2880 - val_binary_accuracy: 0.8678 - val_auc: 0.9009
###Markdown
Model loss graph
###Code
sns.set(style="whitegrid")
for n_fold in range(config['N_FOLDS']):
print('Fold: %d' % (n_fold+1))
plot_metrics(history_list[n_fold])
###Output
Fold: 1
###Markdown
Model evaluation
###Code
# display(evaluate_model(k_fold, config['N_FOLDS']).style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Confusion matrix
###Code
# for n_fold in range(config['N_FOLDS']):
# print('Fold: %d' % (n_fold+1))
# train_set = k_fold[k_fold['fold_%d' % (n_fold+1)] == 'train']
# validation_set = k_fold[k_fold['fold_%d' % (n_fold+1)] == 'validation']
# plot_confusion_matrix(train_set['toxic'], train_set['pred_%d' % (n_fold+1)],
# validation_set['toxic'], validation_set['pred_%d' % (n_fold+1)])
###Output
_____no_output_____
###Markdown
Model evaluation by language
###Code
# display(evaluate_model_lang(valid_df, config['N_FOLDS']).style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Visualize predictions
###Code
# pd.set_option('max_colwidth', 120)
# display(k_fold[['comment_text', 'toxic'] + [c for c in k_fold.columns if c.startswith('pred')]].head(15))
n_steps = x_valid_ml.shape[0] // config['BATCH_SIZE']
train_history_2 = model.fit(
valid_dataset.repeat(),
steps_per_epoch=n_steps,
epochs=config['EPOCHS']
)
x_test = np.load(database_base_path + 'x_test.npy')
test_dataset = (
tf.data.Dataset
.from_tensor_slices(x_test)
.batch(config['BATCH_SIZE'])
)
sub = pd.read_csv('/kaggle/input/jigsaw-multilingual-toxic-comment-classification/sample_submission.csv')
sub['toxic'] = model.predict(test_dataset, verbose=1)
sub.to_csv('submission.csv', index=False)
###Output
499/499 [==============================] - 112s 224ms/step
|
caos_2019-2020/sem17-sockets-tcp-udp/sockets-tcp-udp.ipynb | ###Markdown
Sockets, and TCP sockets in particular

Thanks to Gleb Sova and Dimitris Golyar for their help in writing this text.

**The OSI model** [More details about the layers](https://zvondozvon.ru/tehnologii/model-osi)

1. Physical layer (PHYSICAL)
2. Data link layer (DATA LINK)
   Responsible for transmitting frames of data. Metainformation and a checksum are added to each block. It deals with two important problems:
   1. delivering a single data frame;
   2. data collisions. These can be handled in two ways: retransmit the data, or run the data over Ethernet cables with switches as intermediaries (having only two parties on each link makes sharing the medium much simpler).
3. Network layer (NETWORK)
   IP addresses appear here. This layer chooses the route for the data, taking the path length, network load, etc. into account.
   One IP may correspond to several devices; this is done with a hack at the router level (NAT).
   One device may have several IPs; this needs no hacks.
4. Transport layer (TRANSPORT)
   An important point: the network layer is about passing messages between specific hosts, while the transport layer is about passing messages between specific programs on those hosts.
   It is usually implemented in the operating system kernel.
   Also note that the transport layer provides a single interface but may have different implementations. For example UNIX sockets: there is no network layer underneath them, because the data never leaves the machine.
   The notion of a port appears here: a port identifies the receiving program on a host.
   Data transfer protocols:
   1. TCP - establishes a connection that behaves like a pipe. Reliable, retransmits lost data, but slow. It adjusts the sending rate to the network load so as not to bring the network down.
   2. UDP - fast but unreliable. Sends all the data at once.
5. Session layer (SESSION) (IMHO not needed)
6. Presentation layer (PRESENTATION) (IMHO not needed)
7. Application layer (APPLICATION)

Today's program:
* `socketpair` - an analogue of `pipe`, but the resulting descriptors have socket properties: each file descriptor works both for reading and for writing (so this "pipe" is bidirectional), and it should be closed with a `shutdown` call
* `socket` - the function that creates a socket
  * TCP
    * AF_UNIX - a socket local to the machine. The address in this case is the path of the socket file in the file system.
    * AF_INET - a socket for standard ipv4 connections. **This is the most important example in this notebook**.
    * AF_INET6 - a socket for standard ipv6 connections.
  * UDP
    * AF_INET - sending datagrams over ipv4.

[A site with good pictures of the order of low-level calls in client and server applications](http://support.fastwel.ru/AppNotes/AN/AN-0001.htmlserver_tcp_init)

Comments on the homework

[Yakovlev's reading](https://github.com/victor-yacovlev/mipt-diht-caos/tree/master/practice/sockets-tcp)

netcat

For debugging, the following can be useful:
* `netcat -lv localhost 30000` - listen on the given port over TCP. Prints everything the client writes; sends data from its own stdin to the connected client.
* `netcat localhost 30000` - connect to a server over TCP. Input and output work.
* `netcat -lvu localhost 30000` - listen over UDP. It seems this command can only receive one datagram, after which something breaks.
* `echo "asfrtvf" | netcat -u -q1 localhost 30000` - send a datagram. The -v option behaves strangely in this case for some reason.

socketpair as a pipe

A socket used as a pipe (i.e. within a single machine) lets the code for local connections and for connections over the internet look roughly the same, and gives access to socket features.

[close vs shutdown](https://stackoverflow.com/questions/48208236/tcp-close-vs-shutdown-in-linux-os)
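To make the close-vs-shutdown distinction from the link above concrete, here is a small added sketch (not part of the original lecture; the file name close_vs_shutdown.cpp is made up): close() only releases one descriptor referring to the connection, while shutdown() terminates the connection itself, even if other copies of the descriptor are still open somewhere.
###Code
%%cpp close_vs_shutdown.cpp
%run gcc close_vs_shutdown.cpp -o close_vs_shutdown.exe
%run ./close_vs_shutdown.exe

#include <assert.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    int fds[2];
    assert(socketpair(AF_UNIX, SOCK_STREAM, 0, fds) == 0);
    pid_t pid = fork();
    assert(pid >= 0);
    if (pid == 0) {
        // Reader. It deliberately keeps its inherited copy of fds[0] open,
        // so the connection can not be terminated by close(fds[0]) alone.
        char buf[64];
        ssize_t r;
        while ((r = read(fds[1], buf, sizeof(buf))) > 0) {
            write(STDOUT_FILENO, buf, r);
        }
        // We only get here (r == 0, i.e. EOF) because the parent called shutdown():
        // a plain close(fds[0]) in the parent would leave this process's copy of
        // fds[0] alive and read() would block forever.
        printf("\ngot EOF\n");
        return 0;
    }
    const char msg[] = "hello through socketpair";
    write(fds[0], msg, sizeof(msg) - 1);
    shutdown(fds[0], SHUT_RDWR); // terminates the connection itself ...
    close(fds[0]);               // ... while close() only releases this copy of the descriptor
    int status;
    assert(waitpid(pid, &status, 0) != -1);
    close(fds[1]);
    return 0;
}
###Output
_____no_output_____
###Markdown
Now the full socketpair example: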
###Code
%%cpp socketpair.cpp
%run gcc socketpair.cpp -o socketpair.exe
%run ./socketpair.exe
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <assert.h>
#include <fcntl.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/socket.h>
#include <errno.h>
#include <time.h>
char* extract_t(char* s) { s[19] = '\0'; return s + 10; }
#define log_printf_impl(fmt, ...) { time_t t = time(0); dprintf(2, "%s : " fmt "%s", extract_t(ctime(&t)), __VA_ARGS__); }
#define log_printf(...) log_printf_impl(__VA_ARGS__, "")
void write_smth(int fd) {
for (int i = 0; i < 1000; ++i) {
write(fd, "X", 1);
struct timespec t = {.tv_sec = 0, .tv_nsec = 10000};
nanosleep(&t, &t);
}
}
void read_all(int fd) {
int bytes = 0;
while (true) {
char c;
int r = read(fd, &c, 1);
if (r > 0) {
bytes += r;
} else if (r < 0) {
assert(errno == EAGAIN);
} else {
break;
}
}
log_printf("Read %d bytes\n", bytes);
}
int main() {
union {
int arr_fd[2];
struct {
            int fd_1; // == arr_fd[0]; the order can be swapped, it will still work
int fd_2; // ==arr_fd[1]
};
} fds;
    assert(socketpair(AF_UNIX, SOCK_STREAM, 0, fds.arr_fd) == 0); // socketpair creates a pair of connected sockets (essentially a bidirectional pipe)
pid_t pid_1, pid_2;
if ((pid_1 = fork()) == 0) {
close(fds.fd_2);
write_smth(fds.fd_1);
        shutdown(fds.fd_1, SHUT_RDWR); // important, try commenting it out and compare the timings: without shutting the connection down the reader keeps waiting for data even when there is none left
close(fds.fd_1);
log_printf("Writing is done\n");
sleep(3);
return 0;
}
if ((pid_2 = fork()) == 0) {
close(fds.fd_1);
read_all(fds.fd_2);
shutdown(fds.fd_2, SHUT_RDWR);
close(fds.fd_2);
return 0;
}
close(fds.fd_1);
close(fds.fd_2);
int status;
assert(waitpid(pid_1, &status, 0) != -1);
assert(waitpid(pid_2, &status, 0) != -1);
return 0;
}
###Output
_____no_output_____
###Markdown
socket + AF_UNIX + TCP
###Code
%%cpp socket_unix.cpp
%run gcc socket_unix.cpp -o socket_unix.exe
%run ./socket_unix.exe
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <assert.h>
#include <fcntl.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/socket.h>
#include <errno.h>
#include <time.h>
#include <sys/un.h>
char* extract_t(char* s) { s[19] = '\0'; return s + 10; }
#define log_printf_impl(fmt, ...) { time_t t = time(0); dprintf(2, "%s : " fmt "%s", extract_t(ctime(&t)), __VA_ARGS__); }
#define log_printf(...) log_printf_impl(__VA_ARGS__, "")
#define conditional_handle_error(stmt, msg) \
do { if (stmt) { perror(msg " (" #stmt ")"); exit(EXIT_FAILURE); } } while (0)
void write_smth(int fd) {
for (int i = 0; i < 1000; ++i) {
write(fd, "X", 1);
struct timespec t = {.tv_sec = 0, .tv_nsec = 10000};
nanosleep(&t, &t);
}
}
void read_all(int fd) {
int bytes = 0;
while (true) {
char c;
int r = read(fd, &c, 1);
if (r > 0) {
bytes += r;
} else if (r < 0) {
assert(errno == EAGAIN);
} else {
break;
}
}
log_printf("Read %d bytes\n", bytes);
}
// important to use "/tmp/*", otherwise you can have problems with permissions
const char* SOCKET_PATH = "/tmp/my_precious_unix_socket";
const int LISTEN_BACKLOG = 2;
int main() {
pid_t pid_1, pid_2;
if ((pid_1 = fork()) == 0) {
// client
sleep(1);
int socket_fd = socket(AF_UNIX, SOCK_STREAM, 0); // == connection_fd in this case
conditional_handle_error(socket_fd == -1, "can't initialize socket");
        // The type of the address variable (sockaddr_un) differs from the one in the next example (i.e. the type depends on which kind of connection is used)
struct sockaddr_un addr = {.sun_family = AF_UNIX};
strncpy(addr.sun_path, SOCKET_PATH, sizeof(addr.sun_path) - 1);
        // Cast sockaddr_un* -> sockaddr*. Meet the C way of doing abstract structures.
int connect_ret = connect(socket_fd, (const struct sockaddr*)&addr, sizeof(addr.sun_path));
conditional_handle_error(connect_ret == -1, "can't connect to unix socket");
write_smth(socket_fd);
shutdown(socket_fd, SHUT_RDWR);
close(socket_fd);
log_printf("client finished\n");
return 0;
}
if ((pid_2 = fork()) == 0) {
// server
int socket_fd = socket(AF_UNIX, SOCK_STREAM, 0);
conditional_handle_error(socket_fd == -1, "can't initialize socket");
        unlink(SOCKET_PATH); // remove the socket file if it exists, because bind fails if it already exists
struct sockaddr_un addr = {.sun_family = AF_UNIX};
strncpy(addr.sun_path, SOCKET_PATH, sizeof(addr.sun_path) - 1);
int bind_ret = bind(socket_fd, (struct sockaddr*)&addr, sizeof(addr.sun_path));
conditional_handle_error(bind_ret == -1, "can't bind to unix socket");
int listen_ret = listen(socket_fd, LISTEN_BACKLOG);
conditional_handle_error(listen_ret == -1, "can't listen to unix socket");
struct sockaddr_un peer_addr = {0};
socklen_t peer_addr_size = sizeof(struct sockaddr_un);
        int connection_fd = accept(socket_fd, (struct sockaddr*)&peer_addr, &peer_addr_size); // After accept you can fork and handle the connection in a separate process
conditional_handle_error(connection_fd == -1, "can't accept incoming connection");
read_all(connection_fd);
shutdown(connection_fd, SHUT_RDWR);
close(connection_fd);
shutdown(socket_fd, SHUT_RDWR);
close(socket_fd);
unlink(SOCKET_PATH);
log_printf("server finished\n");
return 0;
}
int status;
assert(waitpid(pid_1, &status, 0) != -1);
assert(waitpid(pid_2, &status, 0) != -1);
return 0;
}
###Output
_____no_output_____
###Markdown
socket + AF_INET + TCP [A decent-looking article about socket programming in Linux](https://www.rsdn.org/article/unix/sockets.xml) [A stackoverflow answer about what shutdown does](https://stackoverflow.com/a/23483487)
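As an extra illustration of the shutdown semantics discussed in that answer, here is a small added sketch (not from the original lecture; the file name half_close.cpp is made up). It uses a socketpair only to keep the code short, but the same half-close idea applies to TCP connections: after shutdown(fd, SHUT_WR) the peer sees EOF, while the reply can still be read from the same descriptor.
###Code
%%cpp half_close.cpp
%run gcc half_close.cpp -o half_close.exe
%run ./half_close.exe

#include <assert.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    int fds[2];
    assert(socketpair(AF_UNIX, SOCK_STREAM, 0, fds) == 0);
    pid_t pid = fork();
    assert(pid >= 0);
    if (pid == 0) {
        // "Server": read the whole request until EOF, then send a reply back.
        close(fds[0]);
        char buf[128];
        ssize_t r, total = 0;
        while ((r = read(fds[1], buf, sizeof(buf))) > 0) {
            total += r;
        }
        dprintf(fds[1], "request of %d bytes received", (int)total);
        shutdown(fds[1], SHUT_RDWR);
        close(fds[1]);
        return 0;
    }
    // "Client": send a request, half-close the writing direction with SHUT_WR
    // (the peer sees EOF), but keep reading the reply.
    close(fds[1]);
    const char request[] = "GET something";
    write(fds[0], request, sizeof(request) - 1);
    shutdown(fds[0], SHUT_WR); // we will not write anymore, but we can still read
    char buf[128];
    ssize_t r;
    while ((r = read(fds[0], buf, sizeof(buf))) > 0) {
        write(STDOUT_FILENO, buf, r);
    }
    printf("\n");
    close(fds[0]);
    int status;
    assert(waitpid(pid, &status, 0) != -1);
    return 0;
}
###Output
_____no_output_____
###Markdown
The full TCP client and server over AF_INET: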
###Code
%%cpp socket_inet.cpp
%run gcc -DDEBUG socket_inet.cpp -o socket_inet.exe
%run ./socket_inet.exe
%run diff socket_unix.cpp socket_inet.cpp | grep -v "// %" | grep -e '>' -e '<' -C 1
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <assert.h>
#include <fcntl.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/socket.h>
#include <errno.h>
#include <time.h>
#include <netinet/in.h>
#include <netdb.h>
#include <string.h>
char* extract_t(char* s) { s[19] = '\0'; return s + 10; }
#define log_printf_impl(fmt, ...) { time_t t = time(0); dprintf(2, "%s : " fmt "%s", extract_t(ctime(&t)), __VA_ARGS__); }
#define log_printf(...) log_printf_impl(__VA_ARGS__, "")
#define conditional_handle_error(stmt, msg) \
do { if (stmt) { perror(msg " (" #stmt ")"); exit(EXIT_FAILURE); } } while (0)
void write_smth(int fd) {
for (int i = 0; i < 1000; ++i) {
int write_ret = write(fd, "X", 1);
conditional_handle_error(write_ret != 1, "writing failed");
struct timespec t = {.tv_sec = 0, .tv_nsec = 10000};
nanosleep(&t, &t);
}
}
void read_all(int fd) {
int bytes = 0;
while (true) {
char c;
int r = read(fd, &c, 1);
if (r > 0) {
bytes += r;
} else if (r < 0) {
assert(errno == EAGAIN);
} else {
break;
}
}
log_printf("Read %d bytes\n", bytes);
}
const int PORT = 31008;
const int LISTEN_BACKLOG = 2;
int main() {
pid_t pid_1, pid_2;
if ((pid_1 = fork()) == 0) {
// client
sleep(1); // Needed so that the server has time to start.
// In the real world this kind of error is handled on the user side with retries.
int socket_fd = socket(AF_INET, SOCK_STREAM, 0); // == connection_fd in this case
conditional_handle_error(socket_fd == -1, "can't initialize socket"); // Check for errors. Always do this, because anything (and anywhere) can break when working with the network
// Build the address
struct sockaddr_in addr; // Address structure of the server we are connecting to
addr.sin_family = AF_INET; // Set the protocol family
addr.sin_port = htons(PORT); // Set the port. htons converts host byte order to network byte order (little endian to big endian).
struct hostent *hosts = gethostbyname("localhost"); // simple function but it is legacy. Prefer getaddrinfo. Gets information about the host named localhost
conditional_handle_error(!hosts, "can't get host by name");
memcpy(&addr.sin_addr, hosts->h_addr_list[0], sizeof(addr.sin_addr)); // Copy the first address from hosts into addr
int connect_ret = connect(socket_fd, (struct sockaddr*)&addr, sizeof(addr)); // Connect here
conditional_handle_error(connect_ret == -1, "can't connect to unix socket");
write_smth(socket_fd);
log_printf("writing is done\n");
shutdown(socket_fd, SHUT_RDWR); // Shut down the connection
close(socket_fd); // Close the file descriptor of the already shut-down connection. It is worth doing both.
log_printf("client finished\n");
return 0;
}
if ((pid_2 = fork()) == 0) {
// server
int socket_fd = socket(AF_INET, SOCK_STREAM, 0);
conditional_handle_error(socket_fd == -1, "can't initialize socket");
#ifdef DEBUG
// See Yakovlev's reading. These calls tell the system that we are ready to reuse the port (it may not yet be fully released after its previous use)
int reuse_val = 1;
setsockopt(socket_fd, SOL_SOCKET, SO_REUSEADDR, &reuse_val, sizeof(reuse_val));
setsockopt(socket_fd, SOL_SOCKET, SO_REUSEPORT, &reuse_val, sizeof(reuse_val));
#endif
struct sockaddr_in addr = {.sin_family = AF_INET, .sin_port = htons(PORT)};
// addr.sin_addr == 0, so we are ready to receive connections directed to all our addresses
int bind_ret = bind(socket_fd, (struct sockaddr*)&addr, sizeof(addr)); // Bind the socket to the port
conditional_handle_error(bind_ret == -1, "can't bind to unix socket");
int listen_ret = listen(socket_fd, LISTEN_BACKLOG); // Declare that we are ready to accept connections, with at most LISTEN_BACKLOG pending at a time
conditional_handle_error(listen_ret == -1, "can't listen to unix socket");
struct sockaddr_in peer_addr = {0}; // The address of the client that connects to us will be written here
socklen_t peer_addr_size = sizeof(struct sockaddr_in); // Pass the length so that accept() writes the address safely and does not overflow anything
int connection_fd = accept(socket_fd, (struct sockaddr*)&peer_addr, &peer_addr_size); // Accept the connection and record the peer address
conditional_handle_error(connection_fd == -1, "can't accept incoming connection");
read_all(connection_fd);
shutdown(connection_fd, SHUT_RDWR); // }
close(connection_fd); // } Close the connection socket
shutdown(socket_fd, SHUT_RDWR); // }
close(socket_fd); // } Close the listening socket itself
log_printf("server finished\n");
return 0;
}
int status;
assert(waitpid(pid_1, &status, 0) != -1);
assert(waitpid(pid_2, &status, 0) != -1);
return 0;
}
###Output
_____no_output_____
###Markdown
getaddrinfo Resolving an address by name. [Documentation](https://linux.die.net/man/3/getaddrinfo) The implementation is taken from the documentation, but it did not work as-is and had to be fixed up a bit :)
###Code
%%cpp getaddrinfo.cpp
%run gcc -DDEBUG getaddrinfo.cpp -o getaddrinfo.exe
%run ./getaddrinfo.exe
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <netdb.h>
#include <string.h>
int try_connect_by_name(const char* name, int port, int ai_family) {
struct addrinfo hints;
struct addrinfo *result, *rp;
int sfd, s, j;
size_t len;
ssize_t nread;
/* Obtain address(es) matching host/port */
memset(&hints, 0, sizeof(struct addrinfo));
hints.ai_family = ai_family;
hints.ai_socktype = SOCK_STREAM;
hints.ai_flags = 0;
hints.ai_protocol = 0; /* Any protocol */
char port_s[20];
sprintf(port_s, "%d", port);
s = getaddrinfo(name, port_s, &hints, &result);
if (s != 0) {
fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(s));
exit(EXIT_FAILURE);
}
/* getaddrinfo() returns a list of address structures.
Try each address until we successfully connect(2).
If socket(2) (or connect(2)) fails, we (close the socket
and) try the next address. */
for (rp = result; rp != NULL; rp = rp->ai_next) {
char hbuf[NI_MAXHOST], sbuf[NI_MAXSERV];
if (getnameinfo(rp->ai_addr, rp->ai_addrlen, hbuf, sizeof(hbuf), sbuf, sizeof(sbuf), NI_NUMERICHOST | NI_NUMERICSERV) == 0)
fprintf(stderr, "Try ai_family=%d host=%s, serv=%s\n", rp->ai_family, hbuf, sbuf);
sfd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
if (sfd == -1)
continue;
if (connect(sfd, rp->ai_addr, rp->ai_addrlen) != -1)
break; /* Success */
close(sfd);
}
freeaddrinfo(result);
if (rp == NULL) { /* No address succeeded */
fprintf(stderr, "Could not connect\n");
return -1;
}
return sfd;
}
int main() {
try_connect_by_name("localhost", 22, AF_UNSPEC);
try_connect_by_name("localhost", 22, AF_INET6);
try_connect_by_name("ya.ru", 80, AF_UNSPEC);
try_connect_by_name("ya.ru", 80, AF_INET6);
return 0;
}
###Output
_____no_output_____
###Markdown
socket + AF_INET6 + getaddrinfo + TCP Here getaddrinfo has to be used because of IPv6. It also had to be bent a little: with the implementation straight from the manual, rp->ai_socktype and rp->ai_protocol gave unsuitable values for establishing the connection.
###Code
%%cpp socket_inet6.cpp
%run gcc -DDEBUG socket_inet6.cpp -o socket_inet6.exe
%run ./socket_inet6.exe
%run diff socket_inet.cpp socket_inet6.cpp | grep -v "// %" | grep -e '>' -e '<' -C 1
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <assert.h>
#include <fcntl.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/socket.h>
#include <errno.h>
#include <time.h>
#include <netinet/in.h>
#include <netdb.h>
#include <string.h>
char* extract_t(char* s) { s[19] = '\0'; return s + 10; }
#define log_printf_impl(fmt, ...) { time_t t = time(0); dprintf(2, "%s : " fmt "%s", extract_t(ctime(&t)), __VA_ARGS__); }
#define log_printf(...) log_printf_impl(__VA_ARGS__, "")
#define conditional_handle_error(stmt, msg) \
do { if (stmt) { perror(msg " (" #stmt ")"); exit(EXIT_FAILURE); } } while (0)
void write_smth(int fd) {
for (int i = 0; i < 1000; ++i) {
int write_ret = write(fd, "X", 1);
conditional_handle_error(write_ret != 1, "writing failed");
struct timespec t = {.tv_sec = 0, .tv_nsec = 10000};
nanosleep(&t, &t);
}
}
void read_all(int fd) {
int bytes = 0;
while (true) {
char c;
int r = read(fd, &c, 1);
if (r > 0) {
bytes += r;
} else if (r < 0) {
assert(errno == EAGAIN);
} else {
break;
}
}
log_printf("Read %d bytes\n", bytes);
}
int try_connect_by_name(const char* name, int port, int ai_family) {
struct addrinfo hints;
struct addrinfo *result, *rp;
int sfd, s, j;
size_t len;
ssize_t nread;
/* Obtain address(es) matching host/port */
memset(&hints, 0, sizeof(struct addrinfo));
hints.ai_family = ai_family;
hints.ai_socktype = SOCK_STREAM;
hints.ai_flags = 0;
hints.ai_protocol = 0; /* Any protocol */
char port_s[20];
sprintf(port_s, "%d", port);
s = getaddrinfo(name, port_s, &hints, &result);
if (s != 0) {
fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(s));
exit(EXIT_FAILURE);
}
/* getaddrinfo() returns a list of address structures.
Try each address until we successfully connect(2).
If socket(2) (or connect(2)) fails, we (close the socket
and) try the next address. */
for (rp = result; rp != NULL; rp = rp->ai_next) {
char hbuf[NI_MAXHOST], sbuf[NI_MAXSERV];
if (getnameinfo(rp->ai_addr, rp->ai_addrlen, hbuf, sizeof(hbuf), sbuf, sizeof(sbuf), NI_NUMERICHOST | NI_NUMERICSERV) == 0)
fprintf(stderr, "Try ai_family=%d host=%s, serv=%s\n", rp->ai_family, hbuf, sbuf);
sfd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
if (sfd == -1)
continue;
if (connect(sfd, rp->ai_addr, rp->ai_addrlen) != -1)
break; /* Success */
close(sfd);
}
freeaddrinfo(result);
if (rp == NULL) { /* No address succeeded */
fprintf(stderr, "Could not connect\n");
return -1;
}
return sfd;
}
const int PORT = 31008;
const int LISTEN_BACKLOG = 2;
int main() {
pid_t pid_1, pid_2;
if ((pid_1 = fork()) == 0) {
// client
sleep(1);
int socket_fd = try_connect_by_name("localhost", PORT, AF_INET6);
write_smth(socket_fd);
shutdown(socket_fd, SHUT_RDWR);
close(socket_fd);
log_printf("client finished\n");
return 0;
}
if ((pid_2 = fork()) == 0) {
// server
int socket_fd = socket(AF_INET6, SOCK_STREAM, 0);
conditional_handle_error(socket_fd == -1, "can't initialize socket");
#ifdef DEBUG
int reuse_val = 1;
setsockopt(socket_fd, SOL_SOCKET, SO_REUSEADDR, &reuse_val, sizeof(reuse_val));
setsockopt(socket_fd, SOL_SOCKET, SO_REUSEPORT, &reuse_val, sizeof(reuse_val));
#endif
struct sockaddr_in6 addr = {.sin6_family = AF_INET6, .sin6_port = htons(PORT)};
// addr.sin6_addr == 0, so we are ready to receive connections directed to all our addresses
int bind_ret = bind(socket_fd, (struct sockaddr*)&addr, sizeof(addr));
conditional_handle_error(bind_ret == -1, "can't bind to unix socket");
int listen_ret = listen(socket_fd, LISTEN_BACKLOG);
conditional_handle_error(listen_ret == -1, "can't listen to unix socket");
struct sockaddr_in6 peer_addr = {0};
socklen_t peer_addr_size = sizeof(struct sockaddr_in6);
int connection_fd = accept(socket_fd, (struct sockaddr*)&peer_addr, &peer_addr_size);
conditional_handle_error(connection_fd == -1, "can't accept incoming connection");
read_all(connection_fd);
shutdown(connection_fd, SHUT_RDWR);
close(connection_fd);
shutdown(socket_fd, SHUT_RDWR);
close(socket_fd);
log_printf("server finished\n");
return 0;
}
int status;
assert(waitpid(pid_1, &status, 0) != -1);
assert(waitpid(pid_2, &status, 0) != -1);
return 0;
}
###Output
_____no_output_____
###Markdown
socket + AF_INET + UDP
###Code
%%cpp socket_inet.cpp
%run gcc -DDEBUG socket_inet.cpp -o socket_inet.exe
%run ./socket_inet.exe
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <assert.h>
#include <fcntl.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/socket.h>
#include <errno.h>
#include <time.h>
#include <netinet/in.h>
#include <netdb.h>
#include <string.h>
char* extract_t(char* s) { s[19] = '\0'; return s + 10; }
#define log_printf_impl(fmt, ...) { time_t t = time(0); dprintf(2, "%s : " fmt "%s", extract_t(ctime(&t)), __VA_ARGS__); }
#define log_printf(...) log_printf_impl(__VA_ARGS__, "")
#define conditional_handle_error(stmt, msg) \
do { if (stmt) { perror(msg " (" #stmt ")"); exit(EXIT_FAILURE); } } while (0)
const int PORT = 31008;
int main() {
pid_t pid_1, pid_2;
if ((pid_1 = fork()) == 0) {
// client
sleep(1);
int socket_fd = socket(AF_INET, SOCK_DGRAM, 0); // create a UDP socket
conditional_handle_error(socket_fd == -1, "can't initialize socket");
struct sockaddr_in addr = {
.sin_family = AF_INET,
.sin_port = htons(PORT),
.sin_addr = {.s_addr = htonl(INADDR_LOOPBACK)}, // a more efficient way to assign the localhost address
};
int written_bytes;
// send the first datagram, explicitly saying to whom (the sendto function)
const char msg1[] = "Hello 1";
written_bytes = sendto(socket_fd, msg1, sizeof(msg1), 0,
(struct sockaddr *)&addr, sizeof(addr));
conditional_handle_error(written_bytes == -1, "can't sendto");
// here we call connect. In this case it just stores the address; no data is sent over the network
// then we send the second datagram to the stored address, using the send function
const char msg2[] = "Hello 2";
int connect_ret = connect(socket_fd, (struct sockaddr *)&addr, sizeof(addr));
conditional_handle_error(connect_ret == -1, "can't connect OoOo");
written_bytes = send(socket_fd, msg2, sizeof(msg2), 0);
conditional_handle_error(written_bytes == -1, "can't send");
// send the third datagram (write is equivalent to send with the last argument = 0)
const char msg3[] = "LastHello";
written_bytes = write(socket_fd, msg3, sizeof(msg3));
conditional_handle_error(written_bytes == -1, "can't write");
log_printf("client finished\n");
shutdown(socket_fd, SHUT_RDWR);
close(socket_fd);
return 0;
}
if ((pid_2 = fork()) == 0) {
// server
int socket_fd = socket(AF_INET, SOCK_DGRAM, 0);
conditional_handle_error(socket_fd == -1, "can't initialize socket");
#ifdef DEBUG
int reuse_val = 1;
setsockopt(socket_fd, SOL_SOCKET, SO_REUSEADDR, &reuse_val, sizeof(reuse_val));
setsockopt(socket_fd, SOL_SOCKET, SO_REUSEPORT, &reuse_val, sizeof(reuse_val));
#endif
struct sockaddr_in addr = {
.sin_family = AF_INET,
.sin_port = htons(PORT),
.sin_addr = {.s_addr = htonl(INADDR_ANY)}, // a more explicit way to say that we are ready to receive on any of our addresses (previously we just left 0 implicitly)
};
int bind_ret = bind(socket_fd, (struct sockaddr *)&addr, sizeof(addr));
conditional_handle_error(bind_ret < 0, "can't bind socket");
char buf[1024];
int bytes_read;
while (true) {
// last 2 arguments: struct sockaddr *src_addr, socklen_t *addrlen)
bytes_read = recvfrom(socket_fd, buf, 1024, 0, NULL, NULL);
buf[bytes_read] = '\0';
log_printf("%s\n", buf);
if (strcmp("LastHello", buf) == 0) {
break;
}
}
log_printf("server finished\n");
return 0;
}
int status;
assert(waitpid(pid_1, &status, 0) != -1);
assert(waitpid(pid_2, &status, 0) != -1);
return 0;
}
###Output
_____no_output_____
###Markdown
QA
```
Dimitris Golyar, [Feb 23, 2020, 18:11:26 (23.02.2020, 18:09:14)]:
Hi! I have a question about how the server works. Task 14-1 says the program must listen for connections on localhost. What would actually happen if I put something other than localhost there? Would I be listening for connections of some other server?

Yuri Pechatnov, [Feb 23, 2020, 18:36:07]:
The way I understand it: a host can have several IP addresses, for example a global one on the internet and 127.0.0.1 (= localhost).
If you specify address 0 when creating the server, you accept packets addressed to any IP of this host.
If you specify a concrete address, you accept only packets addressed to that concrete address.
And if you specify localhost, you handle only packets whose destination address is 127.0.0.1, and such packets can only have been sent from your own host (otherwise they would have stayed on the sending host and never reached you).
By the way, this quirk bites when launching jupyter notebook: if you do not pass "--ip=0.0.0.0" you will not be able to connect to it from another machine, because it will sit listening only for packets addressed to localhost.
```
Comments on the homework
* inf14-0: posix/sockets/tcp-client -- several subtasks have to be solved: 1. Build the address. Here you can use functions that turn a domain name into an address (they do not care whether they convert "192.168.1.2" or "ya.ru"), or the dedicated functions `inet_aton` / `inet_pton`. 2. Establish the connection, just as we did before. 3. Write the logic for reading/writing the numbers. Since the byte order is little-endian, there is nothing network-specific here at all.
* inf14-1: posix/sockets/http-server-1 -- this task is more about working with files than about the networking part. The only subtle point is reacting to signals. You can simply keep the file descriptors in atomics and close them in the handler, followed by exit. Or you can go to the trouble of I/O multiplexing (covered in the next seminar).
* inf14-2: posix/sockets/udp-client -- we will go over UDP in the next seminar, or you can read up on it yourself; it is straightforward by comparison. (An example is already in this notebook.)
* inf14-3: posix/sockets/http-server-2 -- a harder version of inf14-1, but not in the networking part. Just recall how to check files for executability and run them, forwarding the file descriptors correctly.

A long comment about the server tasks. `man sendfile` - this function will come in handy. Looking at your signal handling in the server tasks, in most cases it is rather scary. Unfortunately I cannot offer a single reference way to do this well, but I suggest looking in the following directions:
1. signalfd - information about signals can be read from a file descriptor, so you can run epoll on the pair (socket_fd, signal_fd) and, when a signal arrives, handle it synchronously and cleanly.
2. In the handlers, only set flags recording that a signal arrived. Do not set the SA_RESTART option. Check the flags in the main loop and after every system call.
3. Blocking signals. This is tricky: if signals are blocked during a blocking accept, you will probably not be interrupted. In general you can protect some code regions by blocking signals, but do not make blocking calls inside those regions. (You can, however, do the following: wait with epoll until something appears on socket_fd, and then do connection_fd = accept(...) inside the protected section, which will then complete instantly.)

Classic mistakes:
1. Blocking signals where it is not needed.
2. atomic_connection_fd = accept(...); plus an uncontrollably asynchronous handler in which atomic_connection_fd must be closed and exit called. The handler can then fire after accept has finished but before the atomic is assigned, and the connection will never be closed.

A relatively safe template for the server homework: there is a lot of frankly bad signal handling out there (cases where the solutions break are easy to come up with), so here is my version (the full, un-redacted variant passed ejudge, yes). The idea is to avoid asynchronous signal handling and the problems it brings: turn an incoming signal into data in a descriptor and watch that descriptor with epoll.
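To illustrate direction 1 above, here is a minimal sketch (my addition, not the course template that follows, which uses an extra process and a pipe instead) of turning signals into file-descriptor events with signalfd and watching them with epoll:

```c
// Sketch only: signalfd + epoll instead of the pipe-based template below.
#include <signal.h>
#include <stdio.h>
#include <sys/epoll.h>
#include <sys/signalfd.h>
#include <unistd.h>

int main() {
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGINT);
    sigaddset(&mask, SIGTERM);
    sigprocmask(SIG_BLOCK, &mask, NULL); // block normal delivery: these signals now arrive only via the fd
    int signal_fd = signalfd(-1, &mask, 0); // becomes readable whenever SIGINT/SIGTERM is pending

    int epoll_fd = epoll_create1(0);
    struct epoll_event ev = {.events = EPOLLIN, .data = {.fd = signal_fd}};
    epoll_ctl(epoll_fd, EPOLL_CTL_ADD, signal_fd, &ev);
    // A real server would also register its listening socket_fd here.

    struct epoll_event event;
    while (epoll_wait(epoll_fd, &event, 1, -1) > 0) {
        if (event.data.fd == signal_fd) {
            struct signalfd_siginfo si;
            read(signal_fd, &si, sizeof(si)); // consume the signal synchronously
            printf("got signal %u, shutting down\n", si.ssi_signo);
            break;
        }
        // otherwise: accept() here; it will not block, since epoll reported readiness
    }
    close(epoll_fd);
    close(signal_fd);
    return 0;
}
```

The template below follows the same idea; signalfd would just remove the need for the helper process and the pipe.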
###Code
%%cpp server_sol.c --ejudge-style
//%run gcc server_sol.c -o server_sol.exe
//%run ./server_sol.exe 30045
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <signal.h>
#include <ctype.h>
#include <errno.h>
#include <fcntl.h>
#include <stdbool.h>
#include <sys/stat.h>
#include <wait.h>
#include <sys/epoll.h>
#include <assert.h>
#define conditional_handle_error(stmt, msg) \
do { if (stmt) { perror(msg " (" #stmt ")"); exit(EXIT_FAILURE); } } while (0)
//...
// must keep running until something readable appears in stop_fd
int server_main(int argc, char** argv, int stop_fd) {
assert(argc >= 2);
//...
int epoll_fd = epoll_create1(0);
{
int fds[] = {stop_fd, socket_fd, -1};
for (int* fd = fds; *fd != -1; ++fd) {
struct epoll_event event = {
.events = EPOLLIN | EPOLLERR | EPOLLHUP,
.data = {.fd = *fd}
};
epoll_ctl(epoll_fd, EPOLL_CTL_ADD, *fd, &event);
}
}
while (true) {
struct epoll_event event;
int epoll_ret = epoll_wait(epoll_fd, &event, 1, 1000); // Read events from the epoll object (i.e. from the set of file descriptors that have pending events)
if (epoll_ret <= 0) {
continue;
}
if (event.data.fd == stop_fd) {
break;
}
// completes instantly, since we have already waited in epoll
int fd = accept(socket_fd, NULL, NULL);
// ... and here we handle the connection
shutdown(fd, SHUT_RDWR);
close(fd);
}
close(epoll_fd);
shutdown(socket_fd, SHUT_RDWR);
close(socket_fd);
return 0;
}
// The main work will be done in the child process.
// This process will receive signals and write to the pipe when it is time to stop
// (By the way, the extra process and pipe could be replaced with signalfd, but that is less portable)
// (Alternatively, you could install a signal handler and write to the pipe from it, i.e. avoid the extra process here)
int main(int argc, char** argv) {
sigset_t full_mask;
sigfillset(&full_mask);
sigprocmask(SIG_BLOCK, &full_mask, NULL);
int fds[2];
assert(pipe(fds) == 0);
int child_pid = fork();
assert(child_pid >= 0);
if (child_pid == 0) {
close(fds[1]);
server_main(argc, argv, fds[0]);
close(fds[0]);
return 0;
} else {
// Code of a lazy person who simply copied this template
close(fds[0]);
while (1) {
siginfo_t info;
sigwaitinfo(&full_mask, &info);
int received_signal = info.si_signo;
if (received_signal == SIGTERM || received_signal == SIGINT) {
int written = write(fds[1], "X", 1);
conditional_handle_error(written != 1, "writing failed");
close(fds[1]);
break;
}
}
int status;
assert(waitpid(child_pid, &status, 0) != -1);
}
return 0;
}
###Output
_____no_output_____ |
references/03_07_2021 Time Series DS C26 Assignment/DS C26 Demo.ipynb | ###Markdown
**Assume city is the market-segment column**
- index = Order_date
- Columns = Market_Segment
- Values = Profit
- Aggfunc = sum
###Code
dfag = df.pivot_table(index = 'dt', columns = 'City', values = 'AverageTemperature', aggfunc = 'sum')
dfag.head()
# Assignment:
#train_len = 42
#test = 6
train_len =4500
train_df = dfag[0:train_len]
test_df = dfag[train_len:]
train_df.head()
###Output
_____no_output_____
###Markdown
**COV Calculation**
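For reference, the quantity computed in the loop below is the coefficient of variation of each segment,

$$CoV = \frac{\sigma}{\mu}$$

which is exactly `np.std(train_df[i]) / np.mean(train_df[i])`. Segments with a lower CoV vary less relative to their mean and are generally considered more stable to forecast, which is presumably why the table is sorted by this value below.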
###Code
cov = pd.DataFrame(columns = ['segment', 'cov'])
cov.head()
for i in train_df.columns:
temp = {'segment':i, 'cov': np.std(train_df[i])/np.mean(train_df[i])}
cov = cov.append(temp, ignore_index = True)
cov.head()
cov.sort_values(by = 'cov')
###Output
_____no_output_____
###Markdown
Stage-2
###Code
df_filter = df[df['City'] == 'Abidjan' ]
df_filter.head()
df_filter = df_filter[['dt', 'AverageTemperature']]
df_filter.head()
# Aggregate
df2 = df_filter.groupby('dt').sum()
df2.head()
df2 = df2.to_timestamp()
df2.head()
train_len = 4500#42
train = df2[0:train_len]
test = df2[train_len:]
train.head()
###Output
_____no_output_____
###Markdown
Stage-1
###Code
df['dt'] = pd.to_datetime(df['dt']).dt.to_period('m')
df.head()
###Output
_____no_output_____ |
Tensorflow_Project_Exercise.ipynb | ###Markdown
___ ___ Tensorflow Project Let's wrap up this Deep Learning section by taking a quick look at the effectiveness of Neural Nets! We'll use the [Bank Authentication Data Set](https://archive.ics.uci.edu/ml/datasets/banknote+authentication) from the UCI repository. The data consists of 5 columns: * variance of Wavelet Transformed image (continuous) * skewness of Wavelet Transformed image (continuous) * curtosis of Wavelet Transformed image (continuous) * entropy of image (continuous) * class (integer) Where class indicates whether or not a Bank Note was authentic. This sort of task is perfectly suited for Neural Networks and Deep Learning! Just follow the instructions below to get started! Get the Data ** Use pandas to read in the bank_note_data.csv file **
###Code
import pandas as pd
bndf=pd.read_json("https://datahub.io/machine-learning/banknote-authentication/r/banknote-authentication.json")
bndf.head()
###Output
_____no_output_____
###Markdown
** Check the head of the Data **
###Code
bndf['Class']=bndf['Class'].apply(lambda x:x-1)
bndf.head()
###Output
_____no_output_____
###Markdown
EDA We'll just do a few quick plots of the data. ** Import seaborn and set matplotlib inline for viewing **
###Code
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
** Create a Countplot of the Classes (Authentic 1 vs Fake 0) **
###Code
sns.countplot(x='Class',data=bndf)
###Output
_____no_output_____
###Markdown
** Create a PairPlot of the Data with Seaborn, set Hue to Class **
###Code
sns.pairplot(bndf, hue='Class')
###Output
_____no_output_____
###Markdown
Data Preparation When using Neural Network and Deep Learning based systems, it is usually a good idea to standardize your data. This step isn't actually necessary for our particular data set, but let's run through it for practice! Standard Scaling ** Import StandardScaler() from SciKit Learn **
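As a quick reminder of what standard scaling does (not part of the original exercise text): each feature is transformed as

$$z = \frac{x - \mu}{\sigma}$$

where $\mu$ and $\sigma$ are the per-feature mean and standard deviation learned by `fit`, so every scaled column ends up with mean 0 and unit variance.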
###Code
from sklearn.preprocessing import StandardScaler
scaler=StandardScaler()
###Output
_____no_output_____
###Markdown
**Create a StandardScaler() object called scaler.**
###Code
feats=scaler.fit(bndf.drop('Class', axis=1))
###Output
_____no_output_____
###Markdown
**Fit scaler to the features.**
###Code
feats
###Output
_____no_output_____
###Markdown
**Use the .transform() method to transform the features to a scaled version.**
###Code
scaled=feats.transform(bndf.drop('Class',axis=1))
###Output
_____no_output_____
###Markdown
**Convert the scaled features to a dataframe and check the head of this dataframe to make sure the scaling worked.**
###Code
df=pd.DataFrame(scaled, columns=bndf.columns[1:])
df.head()
###Output
_____no_output_____
###Markdown
Train Test Split** Create two objects X and y which are the scaled feature values and labels respectively.** ** Use the .as_matrix() method on X and Y and reset them equal to this result. We need to do this in order for TensorFlow to accept the data in Numpy array form instead of a pandas series. **
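A small aside on the `.as_matrix()` instruction above: that method has been removed from newer pandas releases, so if you want explicit NumPy arrays the present-day equivalent is `.to_numpy()` (or `.values`); a minimal sketch, assuming a recent pandas version:

```python
# Hypothetical equivalent of the old .as_matrix() call (sketch, recent pandas assumed)
X = df.to_numpy()
y = bndf['Class'].to_numpy()
```

The solution below simply keeps `X` and `y` as pandas objects, which also works with `tf.estimator.inputs.pandas_input_fn`.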
###Code
X=df
y=bndf['Class']
###Output
_____no_output_____
###Markdown
** Use SciKit Learn to create training and testing sets of the data as we've done in previous lectures:**
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.33)
###Output
_____no_output_____
###Markdown
Estimators
###Code
import tensorflow as tf
fc=[]
for f in bndf.columns[1:]:
fc.append(tf.feature_column.numeric_column(f))
input_func=tf.estimator.inputs.pandas_input_fn(X_train,y_train,batch_size=10,num_epochs=5, shuffle=True)
fc
###Output
_____no_output_____
###Markdown
** Create an object called classifier which is a DNNClassifier from learn. Set it to have 2 classes and a [10,20,10] hidden unit layer structure:**
###Code
classifier=tf.estimator.DNNClassifier(hidden_units=[10,20,10], n_classes=2,feature_columns=fc)
###Output
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmp2qdfxgge
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmp2qdfxgge', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
rewrite_options {
meta_optimizer_iterations: ONE
}
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7ff8ebf095f8>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
###Markdown
** Now fit classifier to the training data. Use steps=200 with a batch_size of 20. You can play around with these values if you want!***Note: Ignore any warnings you get, they won't affect your output*
###Code
classifier.train(input_fn=input_func,steps=200)
###Output
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/monitored_session.py:804: start_queue_runners (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the `tf.data` module.
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmp2qdfxgge/model.ckpt.
INFO:tensorflow:loss = 8.247284, step = 1
INFO:tensorflow:global_step/sec: 309.428
INFO:tensorflow:loss = 0.32563835, step = 101 (0.330 sec)
INFO:tensorflow:Saving checkpoints for 200 into /tmp/tmp2qdfxgge/model.ckpt.
INFO:tensorflow:Loss for final step: 0.034902044.
###Markdown
Model Evaluation** Use the predict method from the classifier model to create predictions from X_test **
###Code
pred_in=tf.estimator.inputs.pandas_input_fn(x=X_test,batch_size=len(X_test),shuffle=False)
ypred = list(classifier.predict(input_fn=pred_in))
###Output
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tmp2qdfxgge/model.ckpt-200
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
###Markdown
** Now create a classification report and a Confusion Matrix. Does anything stand out to you?**
###Code
yp=list()
for p in ypred:
yp.append(p['class_ids'][0])
yp
from sklearn.metrics import confusion_matrix, classification_report
import numpy as np
y_test.describe()
print(confusion_matrix(y_test,yp))
print(classification_report(y_test,yp))
###Output
_____no_output_____
###Markdown
Optional Comparison** You should have noticed extremely accurate results from the DNN model. Let's compare this to a Random Forest Classifier for a reality check!****Use SciKit Learn to Create a Random Forest Classifier and compare the confusion matrix and classification report to the DNN model**
###Code
from sklearn.ensemble import RandomForestClassifier
rfc=RandomForestClassifier(n_estimators=200)
rfc.fit(X_train,y_train)
rfc_pred=rfc.predict(X_test)
print(classification_report(y_test,rfc_pred))
print(confusion_matrix(y_test,rfc_pred))
###Output
[[233 4]
[ 1 174]]
|
Discretization.ipynb | ###Markdown
Discretization --- In this notebook, you will deal with continuous state and action spaces by discretizing them. This will enable you to apply reinforcement learning algorithms that are only designed to work with discrete spaces. 1. Import the Necessary Packages
###Code
import sys
import gym
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Set plotting options
%matplotlib inline
plt.style.use('ggplot')
np.set_printoptions(precision=3, linewidth=120)
!python -m pip install pyvirtualdisplay
from pyvirtualdisplay import Display
display = Display(visible=0, size=(1400, 900))
display.start()
is_ipython = 'inline' in plt.get_backend()
if is_ipython:
from IPython import display
plt.ion()
###Output
Requirement already satisfied: pyvirtualdisplay in /opt/conda/lib/python3.6/site-packages
Requirement already satisfied: EasyProcess in /opt/conda/lib/python3.6/site-packages (from pyvirtualdisplay)
[33mYou are using pip version 9.0.1, however version 18.0 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.[0m
###Markdown
2. Specify the Environment, and Explore the State and Action Spaces We'll use [OpenAI Gym](https://gym.openai.com/) environments to test and develop our algorithms. These simulate a variety of classic as well as contemporary reinforcement learning tasks. Let's use an environment that has a continuous state space, but a discrete action space.
###Code
# Create an environment and set random seed
env = gym.make('MountainCar-v0')
env.seed(505);
###Output
[33mWARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.[0m
###Markdown
Run the next code cell to watch a random agent.
###Code
state = env.reset()
img = plt.imshow(env.render(mode='rgb_array'))
for t in range(1000):
action = env.action_space.sample()
img.set_data(env.render(mode='rgb_array'))
plt.axis('off')
display.display(plt.gcf())
display.clear_output(wait=True)
state, reward, done, _ = env.step(action)
if done:
print('Score: ', t+1)
break
env.close()
###Output
Score: 200
###Markdown
In this notebook, you will train an agent to perform much better! For now, we can explore the state and action spaces, as well as sample them.
###Code
# Explore state (observation) space
print("State space:", env.observation_space)
print("- low:", env.observation_space.low)
print("- high:", env.observation_space.high)
# Generate some samples from the state space
print("State space samples:")
print(np.array([env.observation_space.sample() for i in range(10)]))
# Explore the action space
print("Action space:", env.action_space)
# Generate some samples from the action space
print("Action space samples:")
print(np.array([env.action_space.sample() for i in range(10)]))
###Output
Action space: Discrete(3)
Action space samples:
[1 1 1 2 2 2 0 1 2 1]
###Markdown
3. Discretize the State Space with a Uniform Grid We will discretize the space using a uniformly-spaced grid. Implement the following function to create such a grid, given the lower bounds (`low`), upper bounds (`high`), and number of desired `bins` along each dimension. It should return the split points for each dimension, which will be 1 less than the number of bins. For instance, if `low = [-1.0, -5.0]`, `high = [1.0, 5.0]`, and `bins = (10, 10)`, then your function should return the following list of 2 NumPy arrays:```[array([-0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8]), array([-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0])]```Note that the ends of `low` and `high` are **not** included in these split points. It is assumed that any value below the lowest split point maps to index `0` and any value above the highest split point maps to index `n-1`, where `n` is the number of bins along that dimension.
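As an aside (my own sketch, not the exercise solution below): the same split points can be produced with `np.linspace`, one line per dimension, and this version generalizes to any number of dimensions; the helper name is illustrative only.

```python
import numpy as np

def create_uniform_grid_linspace(low, high, bins=(10, 10)):
    # bins[d] + 1 evenly spaced edges per dimension; dropping the two outer edges
    # leaves exactly the bins[d] - 1 interior split points described above
    return [np.linspace(low[d], high[d], bins[d] + 1)[1:-1] for d in range(len(bins))]

print(create_uniform_grid_linspace([-1.0, -5.0], [1.0, 5.0]))
# -> [array([-0.8, -0.6, ..., 0.8]), array([-4., -3., ..., 4.])]
```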
###Code
def create_uniform_grid(low, high, bins=(10, 10)):
"""Define a uniformly-spaced grid that can be used to discretize a space.
Parameters
----------
low : array_like
Lower bounds for each dimension of the continuous space.
high : array_like
Upper bounds for each dimension of the continuous space.
bins : tuple
Number of bins along each corresponding dimension.
Returns
-------
grid : list of array_like
A list of arrays containing split points for each dimension.
"""
extent_dim0 = high[0] - low[0]
extent_dim1 = high[1] - low[1]
step_dim0 = extent_dim0 / bins[0]
step_dim1 = extent_dim1 / bins[1]
steps0 = []
steps1 = []
for i0 in range(1,bins[0]):
steps0.append(low[0] + i0 * step_dim0)
for i1 in range(1,bins[1]):
steps1.append(low[1] + i1 * step_dim1)
return [np.asarray(steps0), np.asarray(steps1)]
low = [-1.0, -5.0]
high = [1.0, 5.0]
create_uniform_grid(low, high) # [test]
###Output
_____no_output_____
###Markdown
Now write a function that can convert samples from a continuous space into their equivalent discretized representation, given a grid like the one you created above. You can use the [`numpy.digitize()`](https://docs.scipy.org/doc/numpy-1.9.3/reference/generated/numpy.digitize.html) function for this purpose. Assume the grid is a list of NumPy arrays containing the following split points:```[array([-0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8]), array([-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0])]```Here are some potential samples and their corresponding discretized representations:```[-1.0 , -5.0] => [0, 0][-0.81, -4.1] => [0, 0][-0.8 , -4.0] => [1, 1][-0.5 , 0.0] => [2, 5][ 0.2 , -1.9] => [6, 3][ 0.8 , 4.0] => [9, 9][ 0.81, 4.1] => [9, 9][ 1.0 , 5.0] => [9, 9]```**Note**: There may be off-by-one differences in binning due to floating-point inaccuracies when samples are close to grid boundaries, but that is alright.
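For comparison with the hand-rolled loop in the solution below, here is a minimal sketch of the same mapping built directly on `numpy.digitize`, as the hint suggests (my own illustration; the helper name is not part of the notebook):

```python
import numpy as np

def discretize_with_digitize(sample, grid):
    # For each dimension, np.digitize returns the number of split points at or below
    # the value, which is exactly the bin index described above
    return [int(np.digitize(s, g)) for s, g in zip(sample, grid)]

grid = [np.linspace(-1.0, 1.0, 11)[1:-1], np.linspace(-5.0, 5.0, 11)[1:-1]]
print(discretize_with_digitize([-0.5, 0.0], grid))  # -> [2, 5]
```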
###Code
def discretize(sample, grid):
"""Discretize a sample as per given grid.
Parameters
----------
sample : array_like
A single sample from the (original) continuous space.
grid : list of array_like
A list of arrays containing split points for each dimension.
Returns
-------
discretized_sample : array_like
A sequence of integers with the same number of dimensions as sample.
"""
discretized_sample = []
for dim in range(len(sample)):
split = len(grid[dim])
for index, value in enumerate(grid[dim]):
if sample[dim] < value:
split = index
break
discretized_sample.append(split)
return discretized_sample
# Test with a simple grid and some samples
grid = create_uniform_grid([-1.0, -5.0], [1.0, 5.0])
samples = np.array(
[[-1.0 , -5.0],
[-0.81, -4.1],
[-0.8 , -4.0],
[-0.5 , 0.0],
[ 0.2 , -1.9],
[ 0.8 , 4.0],
[ 0.81, 4.1],
[ 1.0 , 5.0]])
discretized_samples = np.array([discretize(sample, grid) for sample in samples])
print("\nSamples:", repr(samples), sep="\n")
print("\nDiscretized samples:", repr(discretized_samples), sep="\n")
###Output
Samples:
array([[-1. , -5. ],
[-0.81, -4.1 ],
[-0.8 , -4. ],
[-0.5 , 0. ],
[ 0.2 , -1.9 ],
[ 0.8 , 4. ],
[ 0.81, 4.1 ],
[ 1. , 5. ]])
Discretized samples:
array([[0, 0],
[0, 0],
[1, 1],
[2, 5],
[5, 3],
[9, 9],
[9, 9],
[9, 9]])
###Markdown
4. Visualization It might be helpful to visualize the original and discretized samples to get a sense of how much error you are introducing.
###Code
import matplotlib.collections as mc
def visualize_samples(samples, discretized_samples, grid, low=None, high=None):
"""Visualize original and discretized samples on a given 2-dimensional grid."""
fig, ax = plt.subplots(figsize=(10, 10))
# Show grid
ax.xaxis.set_major_locator(plt.FixedLocator(grid[0]))
ax.yaxis.set_major_locator(plt.FixedLocator(grid[1]))
ax.grid(True)
# If bounds (low, high) are specified, use them to set axis limits
if low is not None and high is not None:
ax.set_xlim(low[0], high[0])
ax.set_ylim(low[1], high[1])
else:
# Otherwise use first, last grid locations as low, high (for further mapping discretized samples)
low = [splits[0] for splits in grid]
high = [splits[-1] for splits in grid]
# Map each discretized sample (which is really an index) to the center of corresponding grid cell
grid_extended = np.hstack((np.array([low]).T, grid, np.array([high]).T)) # add low and high ends
grid_centers = (grid_extended[:, 1:] + grid_extended[:, :-1]) / 2 # compute center of each grid cell
locs = np.stack(grid_centers[i, discretized_samples[:, i]] for i in range(len(grid))).T # map discretized samples
ax.plot(samples[:, 0], samples[:, 1], 'o') # plot original samples
ax.plot(locs[:, 0], locs[:, 1], 's') # plot discretized samples in mapped locations
ax.add_collection(mc.LineCollection(list(zip(samples, locs)), colors='orange')) # add a line connecting each original-discretized sample
ax.legend(['original', 'discretized'])
visualize_samples(samples, discretized_samples, grid, low, high)
###Output
_____no_output_____
###Markdown
Now that we have a way to discretize a state space, let's apply it to our reinforcement learning environment.
###Code
# Create a grid to discretize the state space
state_grid = create_uniform_grid(env.observation_space.low, env.observation_space.high, bins=(10, 10))
state_grid
# Obtain some samples from the space, discretize them, and then visualize them
state_samples = np.array([env.observation_space.sample() for i in range(10)])
discretized_state_samples = np.array([discretize(sample, state_grid) for sample in state_samples])
visualize_samples(state_samples, discretized_state_samples, state_grid,
env.observation_space.low, env.observation_space.high)
plt.xlabel('position'); plt.ylabel('velocity'); # axis labels for MountainCar-v0 state space
###Output
_____no_output_____
###Markdown
You might notice that if you have enough bins, the discretization doesn't introduce too much error into your representation. So we may be able to now apply a reinforcement learning algorithm (like Q-Learning) that operates on discrete spaces. Give it a shot to see how well it works! 5. Q-LearningProvided below is a simple Q-Learning agent. Implement the `preprocess_state()` method to convert each continuous state sample to its corresponding discretized representation.
###Code
class QLearningAgent:
"""Q-Learning agent that can act on a continuous state space by discretizing it."""
def __init__(self, env, state_grid, alpha=0.02, gamma=0.99,
epsilon=1.0, epsilon_decay_rate=0.9995, min_epsilon=.01, seed=505):
"""Initialize variables, create grid for discretization."""
# Environment info
self.env = env
self.state_grid = state_grid
self.state_size = tuple(len(splits) + 1 for splits in self.state_grid) # n-dimensional state space
self.action_size = self.env.action_space.n # 1-dimensional discrete action space
self.seed = np.random.seed(seed)
print("Environment:", self.env)
print("State space size:", self.state_size)
print("Action space size:", self.action_size)
# Learning parameters
self.alpha = alpha # learning rate
self.gamma = gamma # discount factor
self.epsilon = self.initial_epsilon = epsilon # initial exploration rate
self.epsilon_decay_rate = epsilon_decay_rate # how quickly should we decrease epsilon
self.min_epsilon = min_epsilon
# Create Q-table
self.q_table = np.zeros(shape=(self.state_size + (self.action_size,)))
print("Q table size:", self.q_table.shape)
def preprocess_state(self, state):
"""Map a continuous state to its discretized representation."""
# TODO: Implement this
return tuple(discretize(state, self.state_grid))
def reset_episode(self, state):
"""Reset variables for a new episode."""
# Gradually decrease exploration rate
self.epsilon *= self.epsilon_decay_rate
self.epsilon = max(self.epsilon, self.min_epsilon)
# Decide initial action
self.last_state = self.preprocess_state(state)
self.last_action = np.argmax(self.q_table[self.last_state])
return self.last_action
def reset_exploration(self, epsilon=None):
"""Reset exploration rate used when training."""
self.epsilon = epsilon if epsilon is not None else self.initial_epsilon
def act(self, state, reward=None, done=None, mode='train'):
"""Pick next action and update internal Q table (when mode != 'test')."""
state = self.preprocess_state(state)
if mode == 'test':
# Test mode: Simply produce an action
action = np.argmax(self.q_table[state])
else:
# Train mode (default): Update Q table, pick next action
# Note: We update the Q table entry for the *last* (state, action) pair with current state, reward
self.q_table[self.last_state + (self.last_action,)] += self.alpha * \
(reward + self.gamma * max(self.q_table[state]) - self.q_table[self.last_state + (self.last_action,)])
# Exploration vs. exploitation
do_exploration = np.random.uniform(0, 1) < self.epsilon
if do_exploration:
# Pick a random action
action = np.random.randint(0, self.action_size)
else:
# Pick the best action from Q table
action = np.argmax(self.q_table[state])
# Roll over current state, action for next step
self.last_state = state
self.last_action = action
return action
q_agent = QLearningAgent(env, state_grid)
###Output
Environment: <TimeLimit<MountainCarEnv<MountainCar-v0>>>
State space size: (10, 10)
Action space size: 3
Q table size: (10, 10, 3)
###Markdown
Let's also define a convenience function to run an agent on a given environment. When calling this function, you can pass in `mode='test'` to tell the agent not to learn.
###Code
def run(agent, env, num_episodes=20000, mode='train'):
"""Run agent in given reinforcement learning environment and return scores."""
scores = []
max_avg_score = -np.inf
for i_episode in range(1, num_episodes+1):
# Initialize episode
state = env.reset()
action = agent.reset_episode(state)
total_reward = 0
done = False
# Roll out steps until done
while not done:
state, reward, done, info = env.step(action)
total_reward += reward
action = agent.act(state, reward, done, mode)
# Save final score
scores.append(total_reward)
# Print episode stats
if mode == 'train':
if len(scores) > 100:
avg_score = np.mean(scores[-100:])
if avg_score > max_avg_score:
max_avg_score = avg_score
if i_episode % 100 == 0:
print("\rEpisode {}/{} | Max Average Score: {}".format(i_episode, num_episodes, max_avg_score), end="")
sys.stdout.flush()
return scores
scores = run(q_agent, env)
###Output
Episode 20000/20000 | Max Average Score: -137.29
###Markdown
The best way to analyze if your agent was learning the task is to plot the scores. It should generally increase as the agent goes through more episodes.
###Code
# Plot scores obtained per episode
plt.plot(scores); plt.title("Scores");
###Output
_____no_output_____
###Markdown
If the scores are noisy, it might be difficult to tell whether your agent is actually learning. To find the underlying trend, you may want to plot a rolling mean of the scores. Let's write a convenience function to plot both raw scores as well as a rolling mean.
###Code
def plot_scores(scores, rolling_window=100):
"""Plot scores and optional rolling mean using specified window."""
plt.plot(scores); plt.title("Scores");
rolling_mean = pd.Series(scores).rolling(rolling_window).mean()
plt.plot(rolling_mean);
return rolling_mean
rolling_mean = plot_scores(scores)
###Output
_____no_output_____
###Markdown
You should observe the mean episode scores go up over time. Next, you can freeze learning and run the agent in test mode to see how well it performs.
###Code
# Run in test mode and analyze scores obtained
test_scores = run(q_agent, env, num_episodes=100, mode='test')
print("[TEST] Completed {} episodes with avg. score = {}".format(len(test_scores), np.mean(test_scores)))
_ = plot_scores(test_scores, rolling_window=10)
###Output
[TEST] Completed 100 episodes with avg. score = -168.28
###Markdown
It's also interesting to look at the final Q-table that is learned by the agent. Note that the Q-table is of size MxNxA, where (M, N) is the size of the state space, and A is the size of the action space. We are interested in the maximum Q-value for each state, and the corresponding (best) action associated with that value.
###Code
def plot_q_table(q_table):
"""Visualize max Q-value for each state and corresponding action."""
q_image = np.max(q_table, axis=2) # max Q-value for each state
q_actions = np.argmax(q_table, axis=2) # best action for each state
fig, ax = plt.subplots(figsize=(10, 10))
cax = ax.imshow(q_image, cmap='jet');
cbar = fig.colorbar(cax)
for x in range(q_image.shape[0]):
for y in range(q_image.shape[1]):
ax.text(x, y, q_actions[x, y], color='white',
horizontalalignment='center', verticalalignment='center')
ax.grid(False)
ax.set_title("Q-table, size: {}".format(q_table.shape))
ax.set_xlabel('position')
ax.set_ylabel('velocity')
plot_q_table(q_agent.q_table)
###Output
_____no_output_____
###Markdown
6. Modify the Grid Now it's your turn to play with the grid definition and see what gives you optimal results. Your agent's final performance is likely to get better if you use a finer grid, with more bins per dimension, at the cost of higher model complexity (more parameters to learn).
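To make the complexity trade-off concrete: with the original `bins=(10, 10)` and 3 actions, the Q-table holds 10 * 10 * 3 = 300 values, while the `bins=(30, 30)` grid used below gives 30 * 30 * 3 = 2700 values, nine times as many parameters, which is presumably why the training run below also uses more episodes (50000 instead of 20000).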
###Code
# TODO: Create a new agent with a different state space grid
state_grid_new = create_uniform_grid(env.observation_space.low, env.observation_space.high, bins=(30, 30))
q_agent_new = QLearningAgent(env, state_grid_new)
q_agent_new.scores = [] # initialize a list to store scores for this agent
# Train it over a desired number of episodes and analyze scores
# Note: This cell can be run multiple times, and scores will get accumulated
q_agent_new.scores += run(q_agent_new, env, num_episodes=50000) # accumulate scores
rolling_mean_new = plot_scores(q_agent_new.scores)
# Run in test mode and analyze scores obtained
test_scores = run(q_agent_new, env, num_episodes=100, mode='test')
print("[TEST] Completed {} episodes with avg. score = {}".format(len(test_scores), np.mean(test_scores)))
_ = plot_scores(test_scores)
# Visualize the learned Q-table
plot_q_table(q_agent_new.q_table)
import pickle
pickle.dump( q_agent_new.scores, open( "q_agent_new.scores.pkl", "wb" ) )
###Output
_____no_output_____
###Markdown
7. Watch a Smart Agent
###Code
state = env.reset()
score = 0
img = plt.imshow(env.render(mode='rgb_array'))
for t in range(1000):
action = q_agent_new.act(state, mode='test')
img.set_data(env.render(mode='rgb_array'))
plt.axis('off')
display.display(plt.gcf())
display.clear_output(wait=True)
state, reward, done, _ = env.step(action)
score += reward
if done:
print('Score: ', score)
break
env.close()
###Output
Score: -117.0
|
jtnn_vae_quickstart.ipynb | ###Markdown
JTNN quickstart First, install the package if you haven't already. If you use conda:
###Code
! conda create -n jtnn_env --file conda_list.txt
###Output
_____no_output_____
###Markdown
Then
###Code
! pip install -e .
import os
import pickle
# Disable CUDA (workaround for GPU memory leak issue)
os.environ["CUDA_VISIBLE_DEVICES"]=""
import tqdm
import pandas as pd
from IPython.display import display
from fast_jtnn.fp_calculator import FingerprintCalculator
from fast_jtnn.mol_tree import main_mol_tree
from fast_molvae.preprocess import create_tensor_pickle
from fast_molvae.vae_train import main_vae_train
from rdkit import Chem
from sklearn.cluster import KMeans
# Verify that rdkit version is 2020.09.3 as
# version 2021.03.1 does not seem to work
import rdkit
rdkit.__version__
###Output
_____no_output_____
###Markdown
Optional: Remove molecules with high-valence atoms
###Code
dataset = open("data/full_train.txt", "r").read().split()
with open("data/full_train.txt", "w") as f:
for smiles in tqdm.tqdm(dataset):
mol = Chem.MolFromSmiles(smiles)
for atom in mol.GetAtoms():
if atom.GetDegree() > 6:
print(f"Rejecting high-valence molecule {smiles}")
break
else:
f.write(smiles + "\n")
###Output
_____no_output_____
###Markdown
Generate vocabulary
###Code
main_mol_tree('data/full_train.txt', 'data/vocab_full.txt')
###Output
_____no_output_____
###Markdown
Tensorize training set molecules
###Code
create_tensor_pickle('data/full_train.txt', 'data/tensors/tensors_full.p')
###Output
_____no_output_____
###Markdown
Train VAE
###Code
model = main_vae_train('data/tensors/', 'data/vocab_full.txt', 'data/models', num_workers=4)
###Output
_____no_output_____
###Markdown
Check that it works
###Code
smiles_list = open("data/full_train.txt", "r").read().split()[:5000]
fp_calculator = FingerprintCalculator("data/models/model.iter-20000", "data/vocab_full.txt")
smiles_list
%pdb on
fps = fp_calculator(smiles_list)
kmeans = KMeans(n_clusters=500, random_state=0).fit(fps)
labels = kmeans.labels_
db = pd.DataFrame()
db['smiles']=smiles_list
db['label']=labels
db
for smiles in db[db['label']==10]['smiles'].values:
display(Chem.Draw.MolToImage(Chem.MolFromSmiles(smiles)))
###Output
_____no_output_____ |
notebooks/ROcrate-linked-data.ipynb | ###Markdown
Creating a RO-crate entry and serializing it in JSON-LD https://researchobject.github.io/ro-crate/
###Code
from rdflib import *
from datetime import datetime
schema = Namespace("http://schema.org/")
###Output
_____no_output_____
###Markdown
Writing RDF triples to populate a minimal RO-crate
###Code
graph = ConjunctiveGraph()
#graph.bind('foaf', 'http://xmlns.com/foaf/0.1/')
graph.load('https://researchobject.github.io/ro-crate/0.2/context.json', format='json-ld')
# person information
graph.add( (URIRef('https://orcid.org/0000-0002-3597-8557'), RDF.type, schema.Person) )
# contact information
graph.add( (URIRef('[email protected]'), RDF.type, schema.ContactPoint) )
graph.add((URIRef('[email protected]'), schema.contactType, Literal('Developer')) )
graph.add( (URIRef('[email protected]'), schema.name, Literal('Alban Gaignard')) )
graph.add( (URIRef('[email protected]'), schema.email, Literal('[email protected]', datatype=XSD.string)) )
graph.add( (URIRef('[email protected]'), schema.url, Literal('https://orcid.org/0000-0002-3597-8557')) )
# root metadata
graph.add( (URIRef('ro-crate-metadata.jsonld'), RDF.type, schema.CreativeWork) )
graph.add( (URIRef('ro-crate-metadata.jsonld'), schema.identifier, Literal('ro-crate-metadata.jsonld')) )
graph.add( (URIRef('ro-crate-metadata.jsonld'), schema.about, URIRef('./')) )
# Dataset metadata with reference to files
graph.add( (URIRef('./'), RDF.type, schema.Dataset) )
graph.add( (URIRef('./'), schema.name, Literal("workfow outputs")) )
graph.add( (URIRef('./'), schema.datePublished, Literal(datetime.now().isoformat())) )
graph.add( (URIRef('./'), schema.author, URIRef('https://orcid.org/0000-0002-3597-8557')) )
graph.add( (URIRef('./'), schema.contactPoint, URIRef('[email protected]')) )
graph.add( (URIRef('./'), schema.description, Literal("this is the description of the workfow description, this is the description of the workfow description, this is the description of the workfow description")) )
graph.add( (URIRef('./'), schema.license, Literal("MIT?")) )
graph.add( (URIRef('./'), schema.hasPart, (URIRef('./data/provenance.ttl'))) )
# Files metadata
graph.add( (URIRef('./data/provenance.ttl'), RDF.type, schema.MediaObject) )
print(graph.serialize(format='turtle').decode())
#print(graph.serialize(format='json-ld').decode())
###Output
@prefix bibo: <http://purl.org/ontology/bibo/> .
@prefix cc: <http://creativecommons.org/ns#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix frapo: <http://purl.org/cerif/frapo/> .
@prefix pav: <http://purl.org/pav/> .
@prefix pcdm: <http://pcdm.org/models#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfa: <http://www.w3.org/ns/rdfa#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix rel: <https://www.w3.org/ns/iana/link-relations/relation#> .
@prefix roterms: <http://purl.org/ro/roterms#> .
@prefix schema: <http://schema.org/> .
@prefix wf4ever: <http://purl.org/ro/wf4ever#> .
@prefix wfdesc: <http://purl.org/ro/wfdesc#> .
@prefix wfprov: <http://purl.org/ro/wfprov#> .
@prefix xml: <http://www.w3.org/XML/1998/namespace> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
<ro-crate-metadata.jsonld> a schema:CreativeWork ;
schema:about <./> ;
schema:identifier "ro-crate-metadata.jsonld" .
<./> a schema:Dataset ;
schema:author <https://orcid.org/0000-0002-3597-8557> ;
schema:contactPoint <[email protected]> ;
schema:datePublished "2020-04-09T12:09:34.998235" ;
schema:description "this is the description of the workfow description, this is the description of the workfow description, this is the description of the workfow description" ;
schema:hasPart <./data/provenance.ttl> ;
schema:license "MIT?" ;
schema:name "workfow outputs" .
<./data/provenance.ttl> a schema:MediaObject .
<[email protected]> a schema:ContactPoint ;
schema:contactType "Developer" ;
schema:email "[email protected]"^^xsd:string ;
schema:name "Alban Gaignard" ;
schema:url "https://orcid.org/0000-0002-3597-8557" .
<https://orcid.org/0000-0002-3597-8557> a schema:Person .
###Markdown
Wrapping these triples into Python objects
###Code
import requests
import json
class RO_crate_abstract:
"""
An abstract RO-crate class to share common attributes and methods.
"""
def __init__(self, uri):
self.uri = uri
self.graph = ConjunctiveGraph()
def get_uri(self):
return self.uri
def print(self):
print(self.graph.serialize(format='turtle').decode())
def serialize_jsonld(self):
res = requests.get('https://w3id.org/ro/crate/1.0/context')
ctx = json.loads(res.text)['@context']
jsonld = self.graph.serialize(format='json-ld', context=ctx)
print(jsonld.decode())
self.graph.serialize(destination='ro-crate-metadata.jsonld', format='json-ld', context=ctx)
def add_has_part(self, other_ro_crate):
self.graph = self.graph + other_ro_crate.graph
#TODO add has_part property
self.graph.add( (URIRef(self.get_uri()), schema.hasPart, URIRef(other_ro_crate.get_uri())) )
class RO_crate_Root(RO_crate_abstract):
"""
The root RO-crate.
"""
def __init__(self):
RO_crate_abstract.__init__(self, uri='ro-crate-metadata.jsonld')
self.graph.add( (URIRef('ro-crate-metadata.jsonld'), RDF.type, schema.CreativeWork) )
self.graph.add( (URIRef('ro-crate-metadata.jsonld'), schema.identifier, Literal('ro-crate-metadata.jsonld')) )
self.graph.add( (URIRef('ro-crate-metadata.jsonld'), schema.about, URIRef('./')) )
class RO_crate_Person(RO_crate_abstract):
"""
A person RO-crate.
"""
def __init__(self, uri):
RO_crate_abstract.__init__(self, uri)
self.graph.add( (URIRef(uri), RDF.type, schema.Person) )
class RO_crate_Contact(RO_crate_Person):
"""
A person RO-crate.
"""
def __init__(self, uri, name=None, email=None, ctype=None, url=None):
RO_crate_Person.__init__(self, uri)
self.graph.add( (URIRef(uri), RDF.type, schema.Person) )
if name:
self.graph.add( (URIRef(uri), schema.name, Literal(name)) )
if email:
self.graph.add( (URIRef(uri), schema.email, Literal(email, datatype=XSD.string)) )
if ctype:
self.graph.add( (URIRef(uri), schema.contactType, Literal(ctype)) )
if url:
self.graph.add( (URIRef(uri), schema.url, Literal(url)) )
# creating a root RO-crate
root = RO_crate_Root()
root.print()
# creating a person RO-crate
person = RO_crate_Person('https://orcid.org/0000-0002-3597-8557')
person.print()
# creating a contact RO-crate
contact = RO_crate_Contact('https://orcid.org/0000-0002-3597-8557', name='Alban Gaignard', ctype='contributor')
contact.print()
# adding hasPart relation between RO-crates
root.add_has_part(contact)
root.print()
# serializing the output
root.serialize_jsonld()
###Output
{
"@context": {
"3DModel": "http://schema.org/3DModel",
"@base": null,
"@label": "http://www.w3.org/2000/01/rdf-schema#label",
"AMRadioChannel": "http://schema.org/AMRadioChannel",
"APIReference": "http://schema.org/APIReference",
"Abdomen": "http://schema.org/Abdomen",
"AboutPage": "http://schema.org/AboutPage",
"AcceptAction": "http://schema.org/AcceptAction",
"Accommodation": "http://schema.org/Accommodation",
"AccountingService": "http://schema.org/AccountingService",
"AchieveAction": "http://schema.org/AchieveAction",
"Action": "http://schema.org/Action",
"ActionAccessSpecification": "http://schema.org/ActionAccessSpecification",
"ActionStatusType": "http://schema.org/ActionStatusType",
"ActivateAction": "http://schema.org/ActivateAction",
"ActiveActionStatus": "http://schema.org/ActiveActionStatus",
"ActiveNotRecruiting": "http://schema.org/ActiveNotRecruiting",
"AddAction": "http://schema.org/AddAction",
"AdministrativeArea": "http://schema.org/AdministrativeArea",
"AdultEntertainment": "http://schema.org/AdultEntertainment",
"AdvertiserContentArticle": "http://schema.org/AdvertiserContentArticle",
"AerobicActivity": "http://schema.org/AerobicActivity",
"AggregateOffer": "http://schema.org/AggregateOffer",
"AggregateRating": "http://schema.org/AggregateRating",
"AgreeAction": "http://schema.org/AgreeAction",
"Airline": "http://schema.org/Airline",
"Airport": "http://schema.org/Airport",
"AlbumRelease": "http://schema.org/AlbumRelease",
"AlignmentObject": "http://schema.org/AlignmentObject",
"AllWheelDriveConfiguration": "http://schema.org/AllWheelDriveConfiguration",
"AllocateAction": "http://schema.org/AllocateAction",
"AmusementPark": "http://schema.org/AmusementPark",
"AnaerobicActivity": "http://schema.org/AnaerobicActivity",
"AnalysisNewsArticle": "http://schema.org/AnalysisNewsArticle",
"AnatomicalStructure": "http://schema.org/AnatomicalStructure",
"AnatomicalSystem": "http://schema.org/AnatomicalSystem",
"Anesthesia": "http://schema.org/Anesthesia",
"AnimalShelter": "http://schema.org/AnimalShelter",
"Answer": "http://schema.org/Answer",
"Apartment": "http://schema.org/Apartment",
"ApartmentComplex": "http://schema.org/ApartmentComplex",
"Appearance": "http://schema.org/Appearance",
"AppendAction": "http://schema.org/AppendAction",
"ApplyAction": "http://schema.org/ApplyAction",
"ApprovedIndication": "http://schema.org/ApprovedIndication",
"Aquarium": "http://schema.org/Aquarium",
"ArchiveComponent": "http://schema.org/ArchiveComponent",
"ArchiveOrganization": "http://schema.org/ArchiveOrganization",
"ArriveAction": "http://schema.org/ArriveAction",
"ArtGallery": "http://schema.org/ArtGallery",
"Artery": "http://schema.org/Artery",
"Article": "http://schema.org/Article",
"AskAction": "http://schema.org/AskAction",
"AskPublicNewsArticle": "http://schema.org/AskPublicNewsArticle",
"AssessAction": "http://schema.org/AssessAction",
"AssignAction": "http://schema.org/AssignAction",
"Atlas": "http://schema.org/Atlas",
"Attorney": "http://schema.org/Attorney",
"Audience": "http://schema.org/Audience",
"AudioObject": "http://schema.org/AudioObject",
"Audiobook": "http://schema.org/Audiobook",
"AudiobookFormat": "http://schema.org/AudiobookFormat",
"AuthoritativeLegalValue": "http://schema.org/AuthoritativeLegalValue",
"AuthorizeAction": "http://schema.org/AuthorizeAction",
"AutoBodyShop": "http://schema.org/AutoBodyShop",
"AutoDealer": "http://schema.org/AutoDealer",
"AutoPartsStore": "http://schema.org/AutoPartsStore",
"AutoRental": "http://schema.org/AutoRental",
"AutoRepair": "http://schema.org/AutoRepair",
"AutoWash": "http://schema.org/AutoWash",
"AutomatedTeller": "http://schema.org/AutomatedTeller",
"AutomotiveBusiness": "http://schema.org/AutomotiveBusiness",
"Ayurvedic": "http://schema.org/Ayurvedic",
"BackgroundNewsArticle": "http://schema.org/BackgroundNewsArticle",
"Bacteria": "http://schema.org/Bacteria",
"Bakery": "http://schema.org/Bakery",
"Balance": "http://schema.org/Balance",
"BankAccount": "http://schema.org/BankAccount",
"BankOrCreditUnion": "http://schema.org/BankOrCreditUnion",
"BarOrPub": "http://schema.org/BarOrPub",
"Barcode": "http://schema.org/Barcode",
"Beach": "http://schema.org/Beach",
"BeautySalon": "http://schema.org/BeautySalon",
"BedAndBreakfast": "http://schema.org/BedAndBreakfast",
"BedDetails": "http://schema.org/BedDetails",
"BedType": "http://schema.org/BedType",
"BefriendAction": "http://schema.org/BefriendAction",
"BenefitsHealthAspect": "http://schema.org/BenefitsHealthAspect",
"BikeStore": "http://schema.org/BikeStore",
"Blog": "http://schema.org/Blog",
"BlogPosting": "http://schema.org/BlogPosting",
"BloodTest": "http://schema.org/BloodTest",
"BoardingPolicyType": "http://schema.org/BoardingPolicyType",
"BodyOfWater": "http://schema.org/BodyOfWater",
"Bone": "http://schema.org/Bone",
"Book": "http://schema.org/Book",
"BookFormatType": "http://schema.org/BookFormatType",
"BookSeries": "http://schema.org/BookSeries",
"BookStore": "http://schema.org/BookStore",
"BookmarkAction": "http://schema.org/BookmarkAction",
"Boolean": "http://schema.org/Boolean",
"BorrowAction": "http://schema.org/BorrowAction",
"BowlingAlley": "http://schema.org/BowlingAlley",
"BrainStructure": "http://schema.org/BrainStructure",
"Brand": "http://schema.org/Brand",
"BreadcrumbList": "http://schema.org/BreadcrumbList",
"Brewery": "http://schema.org/Brewery",
"Bridge": "http://schema.org/Bridge",
"BroadcastChannel": "http://schema.org/BroadcastChannel",
"BroadcastEvent": "http://schema.org/BroadcastEvent",
"BroadcastFrequencySpecification": "http://schema.org/BroadcastFrequencySpecification",
"BroadcastRelease": "http://schema.org/BroadcastRelease",
"BroadcastService": "http://schema.org/BroadcastService",
"BrokerageAccount": "http://schema.org/BrokerageAccount",
"BuddhistTemple": "http://schema.org/BuddhistTemple",
"BusOrCoach": "http://schema.org/BusOrCoach",
"BusReservation": "http://schema.org/BusReservation",
"BusStation": "http://schema.org/BusStation",
"BusStop": "http://schema.org/BusStop",
"BusTrip": "http://schema.org/BusTrip",
"BusinessAudience": "http://schema.org/BusinessAudience",
"BusinessEntityType": "http://schema.org/BusinessEntityType",
"BusinessEvent": "http://schema.org/BusinessEvent",
"BusinessFunction": "http://schema.org/BusinessFunction",
"BuyAction": "http://schema.org/BuyAction",
"CDFormat": "http://schema.org/CDFormat",
"CT": "http://schema.org/CT",
"CableOrSatelliteService": "http://schema.org/CableOrSatelliteService",
"CafeOrCoffeeShop": "http://schema.org/CafeOrCoffeeShop",
"Campground": "http://schema.org/Campground",
"CampingPitch": "http://schema.org/CampingPitch",
"Canal": "http://schema.org/Canal",
"CancelAction": "http://schema.org/CancelAction",
"Car": "http://schema.org/Car",
"CarUsageType": "http://schema.org/CarUsageType",
"Cardiovascular": "http://schema.org/Cardiovascular",
"CardiovascularExam": "http://schema.org/CardiovascularExam",
"CaseSeries": "http://schema.org/CaseSeries",
"Casino": "http://schema.org/Casino",
"CassetteFormat": "http://schema.org/CassetteFormat",
"CategoryCode": "http://schema.org/CategoryCode",
"CategoryCodeSet": "http://schema.org/CategoryCodeSet",
"CatholicChurch": "http://schema.org/CatholicChurch",
"CausesHealthAspect": "http://schema.org/CausesHealthAspect",
"Cemetery": "http://schema.org/Cemetery",
"Chapter": "http://schema.org/Chapter",
"CheckAction": "http://schema.org/CheckAction",
"CheckInAction": "http://schema.org/CheckInAction",
"CheckOutAction": "http://schema.org/CheckOutAction",
"CheckoutPage": "http://schema.org/CheckoutPage",
"ChildCare": "http://schema.org/ChildCare",
"ChildrensEvent": "http://schema.org/ChildrensEvent",
"Chiropractic": "http://schema.org/Chiropractic",
"ChooseAction": "http://schema.org/ChooseAction",
"Church": "http://schema.org/Church",
"City": "http://schema.org/City",
"CityHall": "http://schema.org/CityHall",
"CivicStructure": "http://schema.org/CivicStructure",
"Claim": "http://schema.org/Claim",
"ClaimReview": "http://schema.org/ClaimReview",
"Class": "http://schema.org/Class",
"Clinician": "http://schema.org/Clinician",
"Clip": "http://schema.org/Clip",
"ClothingStore": "http://schema.org/ClothingStore",
"CoOp": "http://schema.org/CoOp",
"Code": "http://schema.org/Code",
"CohortStudy": "http://schema.org/CohortStudy",
"Collection": "http://schema.org/Collection",
"CollectionPage": "http://schema.org/CollectionPage",
"CollegeOrUniversity": "http://schema.org/CollegeOrUniversity",
"ComedyClub": "http://schema.org/ComedyClub",
"ComedyEvent": "http://schema.org/ComedyEvent",
"ComicCoverArt": "http://schema.org/ComicCoverArt",
"ComicIssue": "http://schema.org/ComicIssue",
"ComicSeries": "http://schema.org/ComicSeries",
"ComicStory": "http://schema.org/ComicStory",
"Comment": "http://schema.org/Comment",
"CommentAction": "http://schema.org/CommentAction",
"CommentPermission": "http://schema.org/CommentPermission",
"CommunicateAction": "http://schema.org/CommunicateAction",
"CommunityHealth": "http://schema.org/CommunityHealth",
"CompilationAlbum": "http://schema.org/CompilationAlbum",
"CompleteDataFeed": "http://schema.org/CompleteDataFeed",
"Completed": "http://schema.org/Completed",
"CompletedActionStatus": "http://schema.org/CompletedActionStatus",
"CompoundPriceSpecification": "http://schema.org/CompoundPriceSpecification",
"ComputerLanguage": "http://schema.org/ComputerLanguage",
"ComputerStore": "http://schema.org/ComputerStore",
"ConfirmAction": "http://schema.org/ConfirmAction",
"Consortium": "http://schema.org/Consortium",
"ConsumeAction": "http://schema.org/ConsumeAction",
"ContactPage": "http://schema.org/ContactPage",
"ContactPoint": "http://schema.org/ContactPoint",
"ContactPointOption": "http://schema.org/ContactPointOption",
"ContagiousnessHealthAspect": "http://schema.org/ContagiousnessHealthAspect",
"Continent": "http://schema.org/Continent",
"ControlAction": "http://schema.org/ControlAction",
"ConvenienceStore": "http://schema.org/ConvenienceStore",
"Conversation": "http://schema.org/Conversation",
"CookAction": "http://schema.org/CookAction",
"Corporation": "http://schema.org/Corporation",
"CorrectionComment": "http://schema.org/CorrectionComment",
"Country": "http://schema.org/Country",
"Course": "http://schema.org/Course",
"CourseInstance": "http://schema.org/CourseInstance",
"Courthouse": "http://schema.org/Courthouse",
"CoverArt": "http://schema.org/CoverArt",
"CreateAction": "http://schema.org/CreateAction",
"CreativeWork": "http://schema.org/CreativeWork",
"CreativeWorkSeason": "http://schema.org/CreativeWorkSeason",
"CreativeWorkSeries": "http://schema.org/CreativeWorkSeries",
"CreditCard": "http://schema.org/CreditCard",
"Crematorium": "http://schema.org/Crematorium",
"CriticReview": "http://schema.org/CriticReview",
"CrossSectional": "http://schema.org/CrossSectional",
"CssSelectorType": "http://schema.org/CssSelectorType",
"CurrencyConversionService": "http://schema.org/CurrencyConversionService",
"DDxElement": "http://schema.org/DDxElement",
"DJMixAlbum": "http://schema.org/DJMixAlbum",
"DVDFormat": "http://schema.org/DVDFormat",
"DamagedCondition": "http://schema.org/DamagedCondition",
"DanceEvent": "http://schema.org/DanceEvent",
"DanceGroup": "http://schema.org/DanceGroup",
"DataCatalog": "http://schema.org/DataCatalog",
"DataDownload": "http://schema.org/DataDownload",
"DataFeed": "http://schema.org/DataFeed",
"DataFeedItem": "http://schema.org/DataFeedItem",
"DataType": "http://schema.org/DataType",
"Dataset": "http://schema.org/Dataset",
"Date": "http://schema.org/Date",
"DateTime": "http://schema.org/DateTime",
"DatedMoneySpecification": "http://schema.org/DatedMoneySpecification",
"DayOfWeek": "http://schema.org/DayOfWeek",
"DaySpa": "http://schema.org/DaySpa",
"DeactivateAction": "http://schema.org/DeactivateAction",
"DefenceEstablishment": "http://schema.org/DefenceEstablishment",
"DefinedTerm": "http://schema.org/DefinedTerm",
"DefinedTermSet": "http://schema.org/DefinedTermSet",
"DefinitiveLegalValue": "http://schema.org/DefinitiveLegalValue",
"DeleteAction": "http://schema.org/DeleteAction",
"DeliveryChargeSpecification": "http://schema.org/DeliveryChargeSpecification",
"DeliveryEvent": "http://schema.org/DeliveryEvent",
"DeliveryMethod": "http://schema.org/DeliveryMethod",
"Demand": "http://schema.org/Demand",
"DemoAlbum": "http://schema.org/DemoAlbum",
"Dentist": "http://schema.org/Dentist",
"Dentistry": "http://schema.org/Dentistry",
"DepartAction": "http://schema.org/DepartAction",
"DepartmentStore": "http://schema.org/DepartmentStore",
"DepositAccount": "http://schema.org/DepositAccount",
"Dermatologic": "http://schema.org/Dermatologic",
"Dermatology": "http://schema.org/Dermatology",
"DiabeticDiet": "http://schema.org/DiabeticDiet",
"Diagnostic": "http://schema.org/Diagnostic",
"DiagnosticLab": "http://schema.org/DiagnosticLab",
"DiagnosticProcedure": "http://schema.org/DiagnosticProcedure",
"Diet": "http://schema.org/Diet",
"DietNutrition": "http://schema.org/DietNutrition",
"DietarySupplement": "http://schema.org/DietarySupplement",
"DigitalAudioTapeFormat": "http://schema.org/DigitalAudioTapeFormat",
"DigitalDocument": "http://schema.org/DigitalDocument",
"DigitalDocumentPermission": "http://schema.org/DigitalDocumentPermission",
"DigitalDocumentPermissionType": "http://schema.org/DigitalDocumentPermissionType",
"DigitalFormat": "http://schema.org/DigitalFormat",
"DisagreeAction": "http://schema.org/DisagreeAction",
"Discontinued": "http://schema.org/Discontinued",
"DiscoverAction": "http://schema.org/DiscoverAction",
"DiscussionForumPosting": "http://schema.org/DiscussionForumPosting",
"DislikeAction": "http://schema.org/DislikeAction",
"Distance": "http://schema.org/Distance",
"Distillery": "http://schema.org/Distillery",
"DonateAction": "http://schema.org/DonateAction",
"DoseSchedule": "http://schema.org/DoseSchedule",
"DoubleBlindedTrial": "http://schema.org/DoubleBlindedTrial",
"DownloadAction": "http://schema.org/DownloadAction",
"DrawAction": "http://schema.org/DrawAction",
"Drawing": "http://schema.org/Drawing",
"DrinkAction": "http://schema.org/DrinkAction",
"DriveWheelConfigurationValue": "http://schema.org/DriveWheelConfigurationValue",
"DrivingSchoolVehicleUsage": "http://schema.org/DrivingSchoolVehicleUsage",
"Drug": "http://schema.org/Drug",
"DrugClass": "http://schema.org/DrugClass",
"DrugCost": "http://schema.org/DrugCost",
"DrugCostCategory": "http://schema.org/DrugCostCategory",
"DrugLegalStatus": "http://schema.org/DrugLegalStatus",
"DrugPregnancyCategory": "http://schema.org/DrugPregnancyCategory",
"DrugPrescriptionStatus": "http://schema.org/DrugPrescriptionStatus",
"DrugStrength": "http://schema.org/DrugStrength",
"DryCleaningOrLaundry": "http://schema.org/DryCleaningOrLaundry",
"Duration": "http://schema.org/Duration",
"EBook": "http://schema.org/EBook",
"EPRelease": "http://schema.org/EPRelease",
"Ear": "http://schema.org/Ear",
"EatAction": "http://schema.org/EatAction",
"EducationEvent": "http://schema.org/EducationEvent",
"EducationalAudience": "http://schema.org/EducationalAudience",
"EducationalOccupationalCredential": "http://schema.org/EducationalOccupationalCredential",
"EducationalOccupationalProgram": "http://schema.org/EducationalOccupationalProgram",
"EducationalOrganization": "http://schema.org/EducationalOrganization",
"Electrician": "http://schema.org/Electrician",
"ElectronicsStore": "http://schema.org/ElectronicsStore",
"ElementarySchool": "http://schema.org/ElementarySchool",
"EmailMessage": "http://schema.org/EmailMessage",
"Embassy": "http://schema.org/Embassy",
"Emergency": "http://schema.org/Emergency",
"EmergencyService": "http://schema.org/EmergencyService",
"EmployeeRole": "http://schema.org/EmployeeRole",
"EmployerAggregateRating": "http://schema.org/EmployerAggregateRating",
"EmployerReview": "http://schema.org/EmployerReview",
"EmploymentAgency": "http://schema.org/EmploymentAgency",
"Endocrine": "http://schema.org/Endocrine",
"EndorseAction": "http://schema.org/EndorseAction",
"EndorsementRating": "http://schema.org/EndorsementRating",
"Energy": "http://schema.org/Energy",
"EngineSpecification": "http://schema.org/EngineSpecification",
"EnrollingByInvitation": "http://schema.org/EnrollingByInvitation",
"EntertainmentBusiness": "http://schema.org/EntertainmentBusiness",
"EntryPoint": "http://schema.org/EntryPoint",
"Enumeration": "http://schema.org/Enumeration",
"Episode": "http://schema.org/Episode",
"Event": "http://schema.org/Event",
"EventCancelled": "http://schema.org/EventCancelled",
"EventPostponed": "http://schema.org/EventPostponed",
"EventRescheduled": "http://schema.org/EventRescheduled",
"EventReservation": "http://schema.org/EventReservation",
"EventScheduled": "http://schema.org/EventScheduled",
"EventSeries": "http://schema.org/EventSeries",
"EventStatusType": "http://schema.org/EventStatusType",
"EventVenue": "http://schema.org/EventVenue",
"EvidenceLevelA": "http://schema.org/EvidenceLevelA",
"EvidenceLevelB": "http://schema.org/EvidenceLevelB",
"EvidenceLevelC": "http://schema.org/EvidenceLevelC",
"ExampleRun": "http://purl.org/ro/roterms#ExampleRun",
"ExchangeRateSpecification": "http://schema.org/ExchangeRateSpecification",
"ExchangeRefund": "http://schema.org/ExchangeRefund",
"ExerciseAction": "http://schema.org/ExerciseAction",
"ExerciseGym": "http://schema.org/ExerciseGym",
"ExercisePlan": "http://schema.org/ExercisePlan",
"ExhibitionEvent": "http://schema.org/ExhibitionEvent",
"Eye": "http://schema.org/Eye",
"FAQPage": "http://schema.org/FAQPage",
"FDAcategoryA": "http://schema.org/FDAcategoryA",
"FDAcategoryB": "http://schema.org/FDAcategoryB",
"FDAcategoryC": "http://schema.org/FDAcategoryC",
"FDAcategoryD": "http://schema.org/FDAcategoryD",
"FDAcategoryX": "http://schema.org/FDAcategoryX",
"FDAnotEvaluated": "http://schema.org/FDAnotEvaluated",
"FMRadioChannel": "http://schema.org/FMRadioChannel",
"FailedActionStatus": "http://schema.org/FailedActionStatus",
"False": "http://schema.org/False",
"FastFoodRestaurant": "http://schema.org/FastFoodRestaurant",
"Female": "http://schema.org/Female",
"Festival": "http://schema.org/Festival",
"File": "http://schema.org/MediaObject",
"FilmAction": "http://schema.org/FilmAction",
"FinancialProduct": "http://schema.org/FinancialProduct",
"FinancialService": "http://schema.org/FinancialService",
"FindAction": "http://schema.org/FindAction",
"FireStation": "http://schema.org/FireStation",
"Flexibility": "http://schema.org/Flexibility",
"Flight": "http://schema.org/Flight",
"FlightReservation": "http://schema.org/FlightReservation",
"Float": "http://schema.org/Float",
"Florist": "http://schema.org/Florist",
"FollowAction": "http://schema.org/FollowAction",
"FoodEstablishment": "http://schema.org/FoodEstablishment",
"FoodEstablishmentReservation": "http://schema.org/FoodEstablishmentReservation",
"FoodEvent": "http://schema.org/FoodEvent",
"FoodService": "http://schema.org/FoodService",
"FourWheelDriveConfiguration": "http://schema.org/FourWheelDriveConfiguration",
"Friday": "http://schema.org/Friday",
"FrontWheelDriveConfiguration": "http://schema.org/FrontWheelDriveConfiguration",
"FullRefund": "http://schema.org/FullRefund",
"FundingAgency": "http://schema.org/FundingAgency",
"FundingScheme": "http://schema.org/FundingScheme",
"Fungus": "http://schema.org/Fungus",
"FurnitureStore": "http://schema.org/FurnitureStore",
"Game": "http://schema.org/Game",
"GamePlayMode": "http://schema.org/GamePlayMode",
"GameServer": "http://schema.org/GameServer",
"GameServerStatus": "http://schema.org/GameServerStatus",
"GardenStore": "http://schema.org/GardenStore",
"GasStation": "http://schema.org/GasStation",
"Gastroenterologic": "http://schema.org/Gastroenterologic",
"GatedResidenceCommunity": "http://schema.org/GatedResidenceCommunity",
"GenderType": "http://schema.org/GenderType",
"GeneralContractor": "http://schema.org/GeneralContractor",
"Genetic": "http://schema.org/Genetic",
"Genitourinary": "http://schema.org/Genitourinary",
"GeoCircle": "http://schema.org/GeoCircle",
"GeoCoordinates": "http://schema.org/GeoCoordinates",
"GeoShape": "http://schema.org/GeoShape",
"GeospatialGeometry": "http://schema.org/GeospatialGeometry",
"Geriatric": "http://schema.org/Geriatric",
"GiveAction": "http://schema.org/GiveAction",
"GlutenFreeDiet": "http://schema.org/GlutenFreeDiet",
"GolfCourse": "http://schema.org/GolfCourse",
"GovernmentBuilding": "http://schema.org/GovernmentBuilding",
"GovernmentOffice": "http://schema.org/GovernmentOffice",
"GovernmentOrganization": "http://schema.org/GovernmentOrganization",
"GovernmentPermit": "http://schema.org/GovernmentPermit",
"GovernmentService": "http://schema.org/GovernmentService",
"Grant": "http://schema.org/Grant",
"GraphicNovel": "http://schema.org/GraphicNovel",
"GroceryStore": "http://schema.org/GroceryStore",
"GroupBoardingPolicy": "http://schema.org/GroupBoardingPolicy",
"Gynecologic": "http://schema.org/Gynecologic",
"HTML": "rdf:HTML",
"HVACBusiness": "http://schema.org/HVACBusiness",
"HairSalon": "http://schema.org/HairSalon",
"HalalDiet": "http://schema.org/HalalDiet",
"Hardcover": "http://schema.org/Hardcover",
"HardwareStore": "http://schema.org/HardwareStore",
"Head": "http://schema.org/Head",
"HealthAndBeautyBusiness": "http://schema.org/HealthAndBeautyBusiness",
"HealthAspectEnumeration": "http://schema.org/HealthAspectEnumeration",
"HealthClub": "http://schema.org/HealthClub",
"HealthInsurancePlan": "http://schema.org/HealthInsurancePlan",
"HealthPlanCostSharingSpecification": "http://schema.org/HealthPlanCostSharingSpecification",
"HealthPlanFormulary": "http://schema.org/HealthPlanFormulary",
"HealthPlanNetwork": "http://schema.org/HealthPlanNetwork",
"HealthTopicContent": "http://schema.org/HealthTopicContent",
"HearingImpairedSupported": "http://schema.org/HearingImpairedSupported",
"Hematologic": "http://schema.org/Hematologic",
"HighSchool": "http://schema.org/HighSchool",
"HinduDiet": "http://schema.org/HinduDiet",
"HinduTemple": "http://schema.org/HinduTemple",
"HobbyShop": "http://schema.org/HobbyShop",
"HomeAndConstructionBusiness": "http://schema.org/HomeAndConstructionBusiness",
"HomeGoodsStore": "http://schema.org/HomeGoodsStore",
"Homeopathic": "http://schema.org/Homeopathic",
"Hospital": "http://schema.org/Hospital",
"Hostel": "http://schema.org/Hostel",
"Hotel": "http://schema.org/Hotel",
"HotelRoom": "http://schema.org/HotelRoom",
"House": "http://schema.org/House",
"HousePainter": "http://schema.org/HousePainter",
"HowOrWhereHealthAspect": "http://schema.org/HowOrWhereHealthAspect",
"HowTo": "http://schema.org/HowTo",
"HowToDirection": "http://schema.org/HowToDirection",
"HowToItem": "http://schema.org/HowToItem",
"HowToSection": "http://schema.org/HowToSection",
"HowToStep": "http://schema.org/HowToStep",
"HowToSupply": "http://schema.org/HowToSupply",
"HowToTip": "http://schema.org/HowToTip",
"HowToTool": "http://schema.org/HowToTool",
"IceCreamShop": "http://schema.org/IceCreamShop",
"IgnoreAction": "http://schema.org/IgnoreAction",
"ImageGallery": "http://schema.org/ImageGallery",
"ImageObject": "http://schema.org/ImageObject",
"ImagingTest": "http://schema.org/ImagingTest",
"InForce": "http://schema.org/InForce",
"InStock": "http://schema.org/InStock",
"InStoreOnly": "http://schema.org/InStoreOnly",
"IndividualProduct": "http://schema.org/IndividualProduct",
"Infectious": "http://schema.org/Infectious",
"InfectiousAgentClass": "http://schema.org/InfectiousAgentClass",
"InfectiousDisease": "http://schema.org/InfectiousDisease",
"InformAction": "http://schema.org/InformAction",
"InsertAction": "http://schema.org/InsertAction",
"InstallAction": "http://schema.org/InstallAction",
"InsuranceAgency": "http://schema.org/InsuranceAgency",
"Intangible": "http://schema.org/Intangible",
"Integer": "http://schema.org/Integer",
"InteractAction": "http://schema.org/InteractAction",
"InteractionCounter": "http://schema.org/InteractionCounter",
"InternationalTrial": "http://schema.org/InternationalTrial",
"InternetCafe": "http://schema.org/InternetCafe",
"InvestmentFund": "http://schema.org/InvestmentFund",
"InvestmentOrDeposit": "http://schema.org/InvestmentOrDeposit",
"InviteAction": "http://schema.org/InviteAction",
"Invoice": "http://schema.org/Invoice",
"ItemAvailability": "http://schema.org/ItemAvailability",
"ItemList": "http://schema.org/ItemList",
"ItemListOrderAscending": "http://schema.org/ItemListOrderAscending",
"ItemListOrderDescending": "http://schema.org/ItemListOrderDescending",
"ItemListOrderType": "http://schema.org/ItemListOrderType",
"ItemListUnordered": "http://schema.org/ItemListUnordered",
"ItemPage": "http://schema.org/ItemPage",
"JewelryStore": "http://schema.org/JewelryStore",
"JobPosting": "http://schema.org/JobPosting",
"JoinAction": "http://schema.org/JoinAction",
"Joint": "http://schema.org/Joint",
"Journal": "http://schema.org/Periodical",
"KosherDiet": "http://schema.org/KosherDiet",
"LaboratoryScience": "http://schema.org/LaboratoryScience",
"LakeBodyOfWater": "http://schema.org/LakeBodyOfWater",
"Landform": "http://schema.org/Landform",
"LandmarksOrHistoricalBuildings": "http://schema.org/LandmarksOrHistoricalBuildings",
"Language": "http://schema.org/Language",
"LaserDiscFormat": "http://schema.org/LaserDiscFormat",
"LeaveAction": "http://schema.org/LeaveAction",
"LeftHandDriving": "http://schema.org/LeftHandDriving",
"LegalForceStatus": "http://schema.org/LegalForceStatus",
"LegalService": "http://schema.org/LegalService",
"LegalValueLevel": "http://schema.org/LegalValueLevel",
"Legislation": "http://schema.org/Legislation",
"LegislationObject": "http://schema.org/LegislationObject",
"LegislativeBuilding": "http://schema.org/LegislativeBuilding",
"LeisureTimeActivity": "http://schema.org/LeisureTimeActivity",
"LendAction": "http://schema.org/LendAction",
"Library": "http://schema.org/Library",
"LibrarySystem": "http://schema.org/LibrarySystem",
"LifestyleModification": "http://schema.org/LifestyleModification",
"Ligament": "http://schema.org/Ligament",
"LikeAction": "http://schema.org/LikeAction",
"LimitedAvailability": "http://schema.org/LimitedAvailability",
"LinkRole": "http://schema.org/LinkRole",
"LiquorStore": "http://schema.org/LiquorStore",
"ListItem": "http://schema.org/ListItem",
"ListenAction": "http://schema.org/ListenAction",
"LiteraryEvent": "http://schema.org/LiteraryEvent",
"LiveAlbum": "http://schema.org/LiveAlbum",
"LiveBlogPosting": "http://schema.org/LiveBlogPosting",
"LivingWithHealthAspect": "http://schema.org/LivingWithHealthAspect",
"LoanOrCredit": "http://schema.org/LoanOrCredit",
"LocalBusiness": "http://schema.org/LocalBusiness",
"LocationFeatureSpecification": "http://schema.org/LocationFeatureSpecification",
"LockerDelivery": "http://schema.org/LockerDelivery",
"Locksmith": "http://schema.org/Locksmith",
"LodgingBusiness": "http://schema.org/LodgingBusiness",
"LodgingReservation": "http://schema.org/LodgingReservation",
"Longitudinal": "http://schema.org/Longitudinal",
"LoseAction": "http://schema.org/LoseAction",
"LowCalorieDiet": "http://schema.org/LowCalorieDiet",
"LowFatDiet": "http://schema.org/LowFatDiet",
"LowLactoseDiet": "http://schema.org/LowLactoseDiet",
"LowSaltDiet": "http://schema.org/LowSaltDiet",
"Lung": "http://schema.org/Lung",
"LymphaticVessel": "http://schema.org/LymphaticVessel",
"MRI": "http://schema.org/MRI",
"Male": "http://schema.org/Male",
"Manuscript": "http://schema.org/Manuscript",
"Map": "http://schema.org/Map",
"MapCategoryType": "http://schema.org/MapCategoryType",
"MarryAction": "http://schema.org/MarryAction",
"Mass": "http://schema.org/Mass",
"MaximumDoseSchedule": "http://schema.org/MaximumDoseSchedule",
"MayTreatHealthAspect": "http://schema.org/MayTreatHealthAspect",
"MediaObject": "http://schema.org/MediaObject",
"MediaSubscription": "http://schema.org/MediaSubscription",
"MedicalAudience": "http://schema.org/MedicalAudience",
"MedicalBusiness": "http://schema.org/MedicalBusiness",
"MedicalCause": "http://schema.org/MedicalCause",
"MedicalClinic": "http://schema.org/MedicalClinic",
"MedicalCode": "http://schema.org/MedicalCode",
"MedicalCondition": "http://schema.org/MedicalCondition",
"MedicalConditionStage": "http://schema.org/MedicalConditionStage",
"MedicalContraindication": "http://schema.org/MedicalContraindication",
"MedicalDevice": "http://schema.org/MedicalDevice",
"MedicalDevicePurpose": "http://schema.org/MedicalDevicePurpose",
"MedicalEntity": "http://schema.org/MedicalEntity",
"MedicalEnumeration": "http://schema.org/MedicalEnumeration",
"MedicalEvidenceLevel": "http://schema.org/MedicalEvidenceLevel",
"MedicalGuideline": "http://schema.org/MedicalGuideline",
"MedicalGuidelineContraindication": "http://schema.org/MedicalGuidelineContraindication",
"MedicalGuidelineRecommendation": "http://schema.org/MedicalGuidelineRecommendation",
"MedicalImagingTechnique": "http://schema.org/MedicalImagingTechnique",
"MedicalIndication": "http://schema.org/MedicalIndication",
"MedicalIntangible": "http://schema.org/MedicalIntangible",
"MedicalObservationalStudy": "http://schema.org/MedicalObservationalStudy",
"MedicalObservationalStudyDesign": "http://schema.org/MedicalObservationalStudyDesign",
"MedicalOrganization": "http://schema.org/MedicalOrganization",
"MedicalProcedure": "http://schema.org/MedicalProcedure",
"MedicalProcedureType": "http://schema.org/MedicalProcedureType",
"MedicalResearcher": "http://schema.org/MedicalResearcher",
"MedicalRiskCalculator": "http://schema.org/MedicalRiskCalculator",
"MedicalRiskEstimator": "http://schema.org/MedicalRiskEstimator",
"MedicalRiskFactor": "http://schema.org/MedicalRiskFactor",
"MedicalRiskScore": "http://schema.org/MedicalRiskScore",
"MedicalScholarlyArticle": "http://schema.org/MedicalScholarlyArticle",
"MedicalSign": "http://schema.org/MedicalSign",
"MedicalSignOrSymptom": "http://schema.org/MedicalSignOrSymptom",
"MedicalSpecialty": "http://schema.org/MedicalSpecialty",
"MedicalStudy": "http://schema.org/MedicalStudy",
"MedicalStudyStatus": "http://schema.org/MedicalStudyStatus",
"MedicalSymptom": "http://schema.org/MedicalSymptom",
"MedicalTest": "http://schema.org/MedicalTest",
"MedicalTestPanel": "http://schema.org/MedicalTestPanel",
"MedicalTherapy": "http://schema.org/MedicalTherapy",
"MedicalTrial": "http://schema.org/MedicalTrial",
"MedicalTrialDesign": "http://schema.org/MedicalTrialDesign",
"MedicalWebPage": "http://schema.org/MedicalWebPage",
"MedicineSystem": "http://schema.org/MedicineSystem",
"MeetingRoom": "http://schema.org/MeetingRoom",
"MensClothingStore": "http://schema.org/MensClothingStore",
"Menu": "http://schema.org/Menu",
"MenuItem": "http://schema.org/MenuItem",
"MenuSection": "http://schema.org/MenuSection",
"Message": "http://schema.org/Message",
"MiddleSchool": "http://schema.org/MiddleSchool",
"Midwifery": "http://schema.org/Midwifery",
"MisconceptionsHealthAspect": "http://schema.org/MisconceptionsHealthAspect",
"MixtapeAlbum": "http://schema.org/MixtapeAlbum",
"MobileApplication": "http://schema.org/MobileApplication",
"MobilePhoneStore": "http://schema.org/MobilePhoneStore",
"Monday": "http://schema.org/Monday",
"MonetaryAmount": "http://schema.org/MonetaryAmount",
"MonetaryAmountDistribution": "http://schema.org/MonetaryAmountDistribution",
"MonetaryGrant": "http://schema.org/MonetaryGrant",
"MoneyTransfer": "http://schema.org/MoneyTransfer",
"MortgageLoan": "http://schema.org/MortgageLoan",
"Mosque": "http://schema.org/Mosque",
"Motel": "http://schema.org/Motel",
"Motorcycle": "http://schema.org/Motorcycle",
"MotorcycleDealer": "http://schema.org/MotorcycleDealer",
"MotorcycleRepair": "http://schema.org/MotorcycleRepair",
"MotorizedBicycle": "http://schema.org/MotorizedBicycle",
"Mountain": "http://schema.org/Mountain",
"MoveAction": "http://schema.org/MoveAction",
"Movie": "http://schema.org/Movie",
"MovieClip": "http://schema.org/MovieClip",
"MovieRentalStore": "http://schema.org/MovieRentalStore",
"MovieSeries": "http://schema.org/MovieSeries",
"MovieTheater": "http://schema.org/MovieTheater",
"MovingCompany": "http://schema.org/MovingCompany",
"MultiCenterTrial": "http://schema.org/MultiCenterTrial",
"MultiPlayer": "http://schema.org/MultiPlayer",
"MulticellularParasite": "http://schema.org/MulticellularParasite",
"Muscle": "http://schema.org/Muscle",
"Musculoskeletal": "http://schema.org/Musculoskeletal",
"MusculoskeletalExam": "http://schema.org/MusculoskeletalExam",
"Museum": "http://schema.org/Museum",
"MusicAlbum": "http://schema.org/MusicAlbum",
"MusicAlbumProductionType": "http://schema.org/MusicAlbumProductionType",
"MusicAlbumReleaseType": "http://schema.org/MusicAlbumReleaseType",
"MusicComposition": "http://schema.org/MusicComposition",
"MusicEvent": "http://schema.org/MusicEvent",
"MusicGroup": "http://schema.org/MusicGroup",
"MusicPlaylist": "http://schema.org/MusicPlaylist",
"MusicRecording": "http://schema.org/MusicRecording",
"MusicRelease": "http://schema.org/MusicRelease",
"MusicReleaseFormatType": "http://schema.org/MusicReleaseFormatType",
"MusicStore": "http://schema.org/MusicStore",
"MusicVenue": "http://schema.org/MusicVenue",
"MusicVideoObject": "http://schema.org/MusicVideoObject",
"NGO": "http://schema.org/NGO",
"NailSalon": "http://schema.org/NailSalon",
"Neck": "http://schema.org/Neck",
"Nerve": "http://schema.org/Nerve",
"Neuro": "http://schema.org/Neuro",
"Neurologic": "http://schema.org/Neurologic",
"NewCondition": "http://schema.org/NewCondition",
"NewsArticle": "http://schema.org/NewsArticle",
"NewsMediaOrganization": "http://schema.org/NewsMediaOrganization",
"Newspaper": "http://schema.org/Newspaper",
"NightClub": "http://schema.org/NightClub",
"NoninvasiveProcedure": "http://schema.org/NoninvasiveProcedure",
"Nose": "http://schema.org/Nose",
"NotInForce": "http://schema.org/NotInForce",
"NotYetRecruiting": "http://schema.org/NotYetRecruiting",
"Notary": "http://schema.org/Notary",
"NoteDigitalDocument": "http://schema.org/NoteDigitalDocument",
"Number": "http://schema.org/Number",
"Nursing": "http://schema.org/Nursing",
"NutritionInformation": "http://schema.org/NutritionInformation",
"OTC": "http://schema.org/OTC",
"Observation": "http://schema.org/Observation",
"Observational": "http://schema.org/Observational",
"Obstetric": "http://schema.org/Obstetric",
"Occupation": "http://schema.org/Occupation",
"OccupationalActivity": "http://schema.org/OccupationalActivity",
"OccupationalTherapy": "http://schema.org/OccupationalTherapy",
"OceanBodyOfWater": "http://schema.org/OceanBodyOfWater",
"Offer": "http://schema.org/Offer",
"OfferCatalog": "http://schema.org/OfferCatalog",
"OfferForLease": "http://schema.org/OfferForLease",
"OfferForPurchase": "http://schema.org/OfferForPurchase",
"OfferItemCondition": "http://schema.org/OfferItemCondition",
"OfficeEquipmentStore": "http://schema.org/OfficeEquipmentStore",
"OfficialLegalValue": "http://schema.org/OfficialLegalValue",
"OfflinePermanently": "http://schema.org/OfflinePermanently",
"OfflineTemporarily": "http://schema.org/OfflineTemporarily",
"OnDemandEvent": "http://schema.org/OnDemandEvent",
"OnSitePickup": "http://schema.org/OnSitePickup",
"Oncologic": "http://schema.org/Oncologic",
"Online": "http://schema.org/Online",
"OnlineFull": "http://schema.org/OnlineFull",
"OnlineOnly": "http://schema.org/OnlineOnly",
"OpenTrial": "http://schema.org/OpenTrial",
"OpeningHoursSpecification": "http://schema.org/OpeningHoursSpecification",
"OpinionNewsArticle": "http://schema.org/OpinionNewsArticle",
"Optician": "http://schema.org/Optician",
"Optometric": "http://schema.org/Optometric",
"Order": "http://schema.org/Order",
"OrderAction": "http://schema.org/OrderAction",
"OrderCancelled": "http://schema.org/OrderCancelled",
"OrderDelivered": "http://schema.org/OrderDelivered",
"OrderInTransit": "http://schema.org/OrderInTransit",
"OrderItem": "http://schema.org/OrderItem",
"OrderPaymentDue": "http://schema.org/OrderPaymentDue",
"OrderPickupAvailable": "http://schema.org/OrderPickupAvailable",
"OrderProblem": "http://schema.org/OrderProblem",
"OrderProcessing": "http://schema.org/OrderProcessing",
"OrderReturned": "http://schema.org/OrderReturned",
"OrderStatus": "http://schema.org/OrderStatus",
"Organization": "http://schema.org/Organization",
"OrganizationRole": "http://schema.org/OrganizationRole",
"OrganizeAction": "http://schema.org/OrganizeAction",
"OriginalShippingFees": "http://schema.org/OriginalShippingFees",
"Osteopathic": "http://schema.org/Osteopathic",
"Otolaryngologic": "http://schema.org/Otolaryngologic",
"OutOfStock": "http://schema.org/OutOfStock",
"OutletStore": "http://schema.org/OutletStore",
"OverviewHealthAspect": "http://schema.org/OverviewHealthAspect",
"OwnershipInfo": "http://schema.org/OwnershipInfo",
"PET": "http://schema.org/PET",
"PaintAction": "http://schema.org/PaintAction",
"Painting": "http://schema.org/Painting",
"PalliativeProcedure": "http://schema.org/PalliativeProcedure",
"Paperback": "http://schema.org/Paperback",
"ParcelDelivery": "http://schema.org/ParcelDelivery",
"ParcelService": "http://schema.org/ParcelService",
"ParentAudience": "http://schema.org/ParentAudience",
"Park": "http://schema.org/Park",
"ParkingFacility": "http://schema.org/ParkingFacility",
"ParkingMap": "http://schema.org/ParkingMap",
"PartiallyInForce": "http://schema.org/PartiallyInForce",
"Pathology": "http://schema.org/Pathology",
"PathologyTest": "http://schema.org/PathologyTest",
"Patient": "http://schema.org/Patient",
"PatientExperienceHealthAspect": "http://schema.org/PatientExperienceHealthAspect",
"PawnShop": "http://schema.org/PawnShop",
"PayAction": "http://schema.org/PayAction",
"PaymentAutomaticallyApplied": "http://schema.org/PaymentAutomaticallyApplied",
"PaymentCard": "http://schema.org/PaymentCard",
"PaymentChargeSpecification": "http://schema.org/PaymentChargeSpecification",
"PaymentComplete": "http://schema.org/PaymentComplete",
"PaymentDeclined": "http://schema.org/PaymentDeclined",
"PaymentDue": "http://schema.org/PaymentDue",
"PaymentMethod": "http://schema.org/PaymentMethod",
"PaymentPastDue": "http://schema.org/PaymentPastDue",
"PaymentService": "http://schema.org/PaymentService",
"PaymentStatusType": "http://schema.org/PaymentStatusType",
"Pediatric": "http://schema.org/Pediatric",
"PeopleAudience": "http://schema.org/PeopleAudience",
"PercutaneousProcedure": "http://schema.org/PercutaneousProcedure",
"PerformAction": "http://schema.org/PerformAction",
"PerformanceRole": "http://schema.org/PerformanceRole",
"PerformingArtsTheater": "http://schema.org/PerformingArtsTheater",
"PerformingGroup": "http://schema.org/PerformingGroup",
"Periodical": "http://schema.org/Periodical",
"Permit": "http://schema.org/Permit",
"Person": "http://schema.org/Person",
"PetStore": "http://schema.org/PetStore",
"Pharmacy": "http://schema.org/Pharmacy",
"PharmacySpecialty": "http://schema.org/PharmacySpecialty",
"Photograph": "http://schema.org/Photograph",
"PhotographAction": "http://schema.org/PhotographAction",
"PhysicalActivity": "http://schema.org/PhysicalActivity",
"PhysicalActivityCategory": "http://schema.org/PhysicalActivityCategory",
"PhysicalExam": "http://schema.org/PhysicalExam",
"PhysicalTherapy": "http://schema.org/PhysicalTherapy",
"Physician": "http://schema.org/Physician",
"Physiotherapy": "http://schema.org/Physiotherapy",
"Place": "http://schema.org/Place",
"PlaceOfWorship": "http://schema.org/PlaceOfWorship",
"PlaceboControlledTrial": "http://schema.org/PlaceboControlledTrial",
"PlanAction": "http://schema.org/PlanAction",
"PlasticSurgery": "http://schema.org/PlasticSurgery",
"Play": "http://schema.org/Play",
"PlayAction": "http://schema.org/PlayAction",
"Playground": "http://schema.org/Playground",
"Plumber": "http://schema.org/Plumber",
"PodcastEpisode": "http://schema.org/PodcastEpisode",
"PodcastSeason": "http://schema.org/PodcastSeason",
"PodcastSeries": "http://schema.org/PodcastSeries",
"Podiatric": "http://schema.org/Podiatric",
"PoliceStation": "http://schema.org/PoliceStation",
"Pond": "http://schema.org/Pond",
"PostOffice": "http://schema.org/PostOffice",
"PostalAddress": "http://schema.org/PostalAddress",
"Poster": "http://schema.org/Poster",
"PotentialActionStatus": "http://schema.org/PotentialActionStatus",
"PreOrder": "http://schema.org/PreOrder",
"PreOrderAction": "http://schema.org/PreOrderAction",
"PreSale": "http://schema.org/PreSale",
"PrependAction": "http://schema.org/PrependAction",
"Preschool": "http://schema.org/Preschool",
"PrescriptionOnly": "http://schema.org/PrescriptionOnly",
"PresentationDigitalDocument": "http://schema.org/PresentationDigitalDocument",
"PreventionHealthAspect": "http://schema.org/PreventionHealthAspect",
"PreventionIndication": "http://schema.org/PreventionIndication",
"PriceSpecification": "http://schema.org/PriceSpecification",
"PrimaryCare": "http://schema.org/PrimaryCare",
"Prion": "http://schema.org/Prion",
"Product": "http://schema.org/Product",
"ProductModel": "http://schema.org/ProductModel",
"ProductReturnEnumeration": "http://schema.org/ProductReturnEnumeration",
"ProductReturnFiniteReturnWindow": "http://schema.org/ProductReturnFiniteReturnWindow",
"ProductReturnNotPermitted": "http://schema.org/ProductReturnNotPermitted",
"ProductReturnPolicy": "http://schema.org/ProductReturnPolicy",
"ProductReturnUnlimitedWindow": "http://schema.org/ProductReturnUnlimitedWindow",
"ProductReturnUnspecified": "http://schema.org/ProductReturnUnspecified",
"ProfessionalService": "http://schema.org/ProfessionalService",
"ProfilePage": "http://schema.org/ProfilePage",
"PrognosisHealthAspect": "http://schema.org/PrognosisHealthAspect",
"ProgramMembership": "http://schema.org/ProgramMembership",
"Project": "http://schema.org/Project",
"Property": "http://schema.org/Property",
"PropertyValue": "http://schema.org/PropertyValue",
"PropertyValueSpecification": "http://schema.org/PropertyValueSpecification",
"Protozoa": "http://schema.org/Protozoa",
"Psychiatric": "http://schema.org/Psychiatric",
"PsychologicalTreatment": "http://schema.org/PsychologicalTreatment",
"PublicHealth": "http://schema.org/PublicHealth",
"PublicHolidays": "http://schema.org/PublicHolidays",
"PublicSwimmingPool": "http://schema.org/PublicSwimmingPool",
"PublicToilet": "http://schema.org/PublicToilet",
"PublicationEvent": "http://schema.org/PublicationEvent",
"PublicationIssue": "http://schema.org/PublicationIssue",
"PublicationVolume": "http://schema.org/PublicationVolume",
"Pulmonary": "http://schema.org/Pulmonary",
"QAPage": "http://schema.org/QAPage",
"QualitativeValue": "http://schema.org/QualitativeValue",
"QuantitativeValue": "http://schema.org/QuantitativeValue",
"QuantitativeValueDistribution": "http://schema.org/QuantitativeValueDistribution",
"Quantity": "http://schema.org/Quantity",
"Question": "http://schema.org/Question",
"Quotation": "http://schema.org/Quotation",
"QuoteAction": "http://schema.org/QuoteAction",
"RVPark": "http://schema.org/RVPark",
"RadiationTherapy": "http://schema.org/RadiationTherapy",
"RadioBroadcastService": "http://schema.org/RadioBroadcastService",
"RadioChannel": "http://schema.org/RadioChannel",
"RadioClip": "http://schema.org/RadioClip",
"RadioEpisode": "http://schema.org/RadioEpisode",
"RadioSeason": "http://schema.org/RadioSeason",
"RadioSeries": "http://schema.org/RadioSeries",
"RadioStation": "http://schema.org/RadioStation",
"Radiography": "http://schema.org/Radiography",
"RandomizedTrial": "http://schema.org/RandomizedTrial",
"Rating": "http://schema.org/Rating",
"ReactAction": "http://schema.org/ReactAction",
"ReadAction": "http://schema.org/ReadAction",
"ReadPermission": "http://schema.org/ReadPermission",
"RealEstateAgent": "http://schema.org/RealEstateAgent",
"RealEstateListing": "http://schema.org/RealEstateListing",
"RearWheelDriveConfiguration": "http://schema.org/RearWheelDriveConfiguration",
"ReceiveAction": "http://schema.org/ReceiveAction",
"Recipe": "http://schema.org/Recipe",
"RecommendedDoseSchedule": "http://schema.org/RecommendedDoseSchedule",
"Recruiting": "http://schema.org/Recruiting",
"RecyclingCenter": "http://schema.org/RecyclingCenter",
"RefundTypeEnumeration": "http://schema.org/RefundTypeEnumeration",
"RefurbishedCondition": "http://schema.org/RefurbishedCondition",
"RegisterAction": "http://schema.org/RegisterAction",
"Registry": "http://schema.org/Registry",
"ReimbursementCap": "http://schema.org/ReimbursementCap",
"RejectAction": "http://schema.org/RejectAction",
"RelatedTopicsHealthAspect": "http://schema.org/RelatedTopicsHealthAspect",
"RemixAlbum": "http://schema.org/RemixAlbum",
"Renal": "http://schema.org/Renal",
"RentAction": "http://schema.org/RentAction",
"RentalCarReservation": "http://schema.org/RentalCarReservation",
"RentalVehicleUsage": "http://schema.org/RentalVehicleUsage",
"RepaymentSpecification": "http://schema.org/RepaymentSpecification",
"ReplaceAction": "http://schema.org/ReplaceAction",
"ReplyAction": "http://schema.org/ReplyAction",
"Report": "http://schema.org/Report",
"ReportageNewsArticle": "http://schema.org/ReportageNewsArticle",
"ReportedDoseSchedule": "http://schema.org/ReportedDoseSchedule",
"RepositoryCollection": "http://pcdm.org/models#Collection",
"RepositoryObject": "http://pcdm.org/models#object",
"ResearchProject": "http://schema.org/ResearchProject",
"Researcher": "http://schema.org/Researcher",
"Reservation": "http://schema.org/Reservation",
"ReservationCancelled": "http://schema.org/ReservationCancelled",
"ReservationConfirmed": "http://schema.org/ReservationConfirmed",
"ReservationHold": "http://schema.org/ReservationHold",
"ReservationPackage": "http://schema.org/ReservationPackage",
"ReservationPending": "http://schema.org/ReservationPending",
"ReservationStatusType": "http://schema.org/ReservationStatusType",
"ReserveAction": "http://schema.org/ReserveAction",
"Reservoir": "http://schema.org/Reservoir",
"Residence": "http://schema.org/Residence",
"Resort": "http://schema.org/Resort",
"RespiratoryTherapy": "http://schema.org/RespiratoryTherapy",
"Restaurant": "http://schema.org/Restaurant",
"RestockingFees": "http://schema.org/RestockingFees",
"RestrictedDiet": "http://schema.org/RestrictedDiet",
"ResultsAvailable": "http://schema.org/ResultsAvailable",
"ResultsNotAvailable": "http://schema.org/ResultsNotAvailable",
"ResumeAction": "http://schema.org/ResumeAction",
"Retail": "http://schema.org/Retail",
"ReturnAction": "http://schema.org/ReturnAction",
"ReturnFeesEnumeration": "http://schema.org/ReturnFeesEnumeration",
"ReturnShippingFees": "http://schema.org/ReturnShippingFees",
"Review": "http://schema.org/Review",
"ReviewAction": "http://schema.org/ReviewAction",
"ReviewNewsArticle": "http://schema.org/ReviewNewsArticle",
"Rheumatologic": "http://schema.org/Rheumatologic",
"RightHandDriving": "http://schema.org/RightHandDriving",
"RisksOrComplicationsHealthAspect": "http://schema.org/RisksOrComplicationsHealthAspect",
"RiverBodyOfWater": "http://schema.org/RiverBodyOfWater",
"Role": "http://schema.org/Role",
"RoofingContractor": "http://schema.org/RoofingContractor",
"Room": "http://schema.org/Room",
"RsvpAction": "http://schema.org/RsvpAction",
"RsvpResponseMaybe": "http://schema.org/RsvpResponseMaybe",
"RsvpResponseNo": "http://schema.org/RsvpResponseNo",
"RsvpResponseType": "http://schema.org/RsvpResponseType",
"RsvpResponseYes": "http://schema.org/RsvpResponseYes",
"SaleEvent": "http://schema.org/SaleEvent",
"SatiricalArticle": "http://schema.org/SatiricalArticle",
"Saturday": "http://schema.org/Saturday",
"Schedule": "http://schema.org/Schedule",
"ScheduleAction": "http://schema.org/ScheduleAction",
"ScholarlyArticle": "http://schema.org/ScholarlyArticle",
"School": "http://schema.org/School",
"ScreeningEvent": "http://schema.org/ScreeningEvent",
"ScreeningHealthAspect": "http://schema.org/ScreeningHealthAspect",
"Script": "http://purl.org/ro/wf4ever#Script",
"Sculpture": "http://schema.org/Sculpture",
"SeaBodyOfWater": "http://schema.org/SeaBodyOfWater",
"SearchAction": "http://schema.org/SearchAction",
"SearchResultsPage": "http://schema.org/SearchResultsPage",
"Season": "http://schema.org/Season",
"Seat": "http://schema.org/Seat",
"SeatingMap": "http://schema.org/SeatingMap",
"SeeDoctorHealthAspect": "http://schema.org/SeeDoctorHealthAspect",
"SelfCareHealthAspect": "http://schema.org/SelfCareHealthAspect",
"SelfStorage": "http://schema.org/SelfStorage",
"SellAction": "http://schema.org/SellAction",
"SendAction": "http://schema.org/SendAction",
"Series": "http://schema.org/Series",
"Service": "http://schema.org/Service",
"ServiceChannel": "http://schema.org/ServiceChannel",
"ShareAction": "http://schema.org/ShareAction",
"SheetMusic": "http://schema.org/SheetMusic",
"ShoeStore": "http://schema.org/ShoeStore",
"ShoppingCenter": "http://schema.org/ShoppingCenter",
"ShortStory": "http://schema.org/ShortStory",
"SideEffectsHealthAspect": "http://schema.org/SideEffectsHealthAspect",
"SingleBlindedTrial": "http://schema.org/SingleBlindedTrial",
"SingleCenterTrial": "http://schema.org/SingleCenterTrial",
"SingleFamilyResidence": "http://schema.org/SingleFamilyResidence",
"SinglePlayer": "http://schema.org/SinglePlayer",
"SingleRelease": "http://schema.org/SingleRelease",
"SiteNavigationElement": "http://schema.org/SiteNavigationElement",
"SkiResort": "http://schema.org/SkiResort",
"Skin": "http://schema.org/Skin",
"SocialEvent": "http://schema.org/SocialEvent",
"SocialMediaPosting": "http://schema.org/SocialMediaPosting",
"SoftwareApplication": "http://schema.org/SoftwareApplication",
"SoftwareSourceCode": "http://schema.org/SoftwareSourceCode",
"SoldOut": "http://schema.org/SoldOut",
"SomeProducts": "http://schema.org/SomeProducts",
"SoundtrackAlbum": "http://schema.org/SoundtrackAlbum",
"SpeakableSpecification": "http://schema.org/SpeakableSpecification",
"Specialty": "http://schema.org/Specialty",
"SpeechPathology": "http://schema.org/SpeechPathology",
"SpokenWordAlbum": "http://schema.org/SpokenWordAlbum",
"SportingGoodsStore": "http://schema.org/SportingGoodsStore",
"SportsActivityLocation": "http://schema.org/SportsActivityLocation",
"SportsClub": "http://schema.org/SportsClub",
"SportsEvent": "http://schema.org/SportsEvent",
"SportsOrganization": "http://schema.org/SportsOrganization",
"SportsTeam": "http://schema.org/SportsTeam",
"SpreadsheetDigitalDocument": "http://schema.org/SpreadsheetDigitalDocument",
"StadiumOrArena": "http://schema.org/StadiumOrArena",
"StagesHealthAspect": "http://schema.org/StagesHealthAspect",
"State": "http://schema.org/State",
"StatisticalPopulation": "http://schema.org/StatisticalPopulation",
"SteeringPositionValue": "http://schema.org/SteeringPositionValue",
"Store": "http://schema.org/Store",
"StoreCreditRefund": "http://schema.org/StoreCreditRefund",
"StrengthTraining": "http://schema.org/StrengthTraining",
"StructuredValue": "http://schema.org/StructuredValue",
"StudioAlbum": "http://schema.org/StudioAlbum",
"StupidType": "http://schema.org/StupidType",
"SubscribeAction": "http://schema.org/SubscribeAction",
"Substance": "http://schema.org/Substance",
"SubwayStation": "http://schema.org/SubwayStation",
"Suite": "http://schema.org/Suite",
"Sunday": "http://schema.org/Sunday",
"SuperficialAnatomy": "http://schema.org/SuperficialAnatomy",
"Surgical": "http://schema.org/Surgical",
"SurgicalProcedure": "http://schema.org/SurgicalProcedure",
"SuspendAction": "http://schema.org/SuspendAction",
"Suspended": "http://schema.org/Suspended",
"SymptomsHealthAspect": "http://schema.org/SymptomsHealthAspect",
"Synagogue": "http://schema.org/Synagogue",
"TVClip": "http://schema.org/TVClip",
"TVEpisode": "http://schema.org/TVEpisode",
"TVSeason": "http://schema.org/TVSeason",
"TVSeries": "http://schema.org/TVSeries",
"Table": "http://schema.org/Table",
"TakeAction": "http://schema.org/TakeAction",
"TattooParlor": "http://schema.org/TattooParlor",
"Taxi": "http://schema.org/Taxi",
"TaxiReservation": "http://schema.org/TaxiReservation",
"TaxiService": "http://schema.org/TaxiService",
"TaxiStand": "http://schema.org/TaxiStand",
"TaxiVehicleUsage": "http://schema.org/TaxiVehicleUsage",
"TechArticle": "http://schema.org/TechArticle",
"TelevisionChannel": "http://schema.org/TelevisionChannel",
"TelevisionStation": "http://schema.org/TelevisionStation",
"TennisComplex": "http://schema.org/TennisComplex",
"Terminated": "http://schema.org/Terminated",
"Text": "http://schema.org/Text",
"TextDigitalDocument": "http://schema.org/TextDigitalDocument",
"TheaterEvent": "http://schema.org/TheaterEvent",
"TheaterGroup": "http://schema.org/TheaterGroup",
"Therapeutic": "http://schema.org/Therapeutic",
"TherapeuticProcedure": "http://schema.org/TherapeuticProcedure",
"Thesis": "http://schema.org/Thesis",
"Thing": "http://schema.org/Thing",
"Throat": "http://schema.org/Throat",
"Thursday": "http://schema.org/Thursday",
"Ticket": "http://schema.org/Ticket",
"TieAction": "http://schema.org/TieAction",
"Time": "http://schema.org/Time",
"TipAction": "http://schema.org/TipAction",
"TireShop": "http://schema.org/TireShop",
"TollFree": "http://schema.org/TollFree",
"TouristAttraction": "http://schema.org/TouristAttraction",
"TouristDestination": "http://schema.org/TouristDestination",
"TouristInformationCenter": "http://schema.org/TouristInformationCenter",
"TouristTrip": "http://schema.org/TouristTrip",
"Toxicologic": "http://schema.org/Toxicologic",
"ToyStore": "http://schema.org/ToyStore",
"TrackAction": "http://schema.org/TrackAction",
"TradeAction": "http://schema.org/TradeAction",
"TraditionalChinese": "http://schema.org/TraditionalChinese",
"TrainReservation": "http://schema.org/TrainReservation",
"TrainStation": "http://schema.org/TrainStation",
"TrainTrip": "http://schema.org/TrainTrip",
"TransferAction": "http://schema.org/TransferAction",
"TransitMap": "http://schema.org/TransitMap",
"TravelAction": "http://schema.org/TravelAction",
"TravelAgency": "http://schema.org/TravelAgency",
"TreatmentIndication": "http://schema.org/TreatmentIndication",
"TreatmentsHealthAspect": "http://schema.org/TreatmentsHealthAspect",
"Trip": "http://schema.org/Trip",
"TripleBlindedTrial": "http://schema.org/TripleBlindedTrial",
"True": "http://schema.org/True",
"Tuesday": "http://schema.org/Tuesday",
"TypeAndQuantityNode": "http://schema.org/TypeAndQuantityNode",
"TypesHealthAspect": "http://schema.org/TypesHealthAspect",
"URL": "http://schema.org/URL",
"Ultrasound": "http://schema.org/Ultrasound",
"UnRegisterAction": "http://schema.org/UnRegisterAction",
"UnitPriceSpecification": "http://schema.org/UnitPriceSpecification",
"UnofficialLegalValue": "http://schema.org/UnofficialLegalValue",
"UpdateAction": "http://schema.org/UpdateAction",
"Urologic": "http://schema.org/Urologic",
"UsageOrScheduleHealthAspect": "http://schema.org/UsageOrScheduleHealthAspect",
"UseAction": "http://schema.org/UseAction",
"UsedCondition": "http://schema.org/UsedCondition",
"UserBlocks": "http://schema.org/UserBlocks",
"UserCheckins": "http://schema.org/UserCheckins",
"UserComments": "http://schema.org/UserComments",
"UserDownloads": "http://schema.org/UserDownloads",
"UserInteraction": "http://schema.org/UserInteraction",
"UserLikes": "http://schema.org/UserLikes",
"UserPageVisits": "http://schema.org/UserPageVisits",
"UserPlays": "http://schema.org/UserPlays",
"UserPlusOnes": "http://schema.org/UserPlusOnes",
"UserReview": "http://schema.org/UserReview",
"UserTweets": "http://schema.org/UserTweets",
"VeganDiet": "http://schema.org/VeganDiet",
"VegetarianDiet": "http://schema.org/VegetarianDiet",
"Vehicle": "http://schema.org/Vehicle",
"Vein": "http://schema.org/Vein",
"VenueMap": "http://schema.org/VenueMap",
"Vessel": "http://schema.org/Vessel",
"VeterinaryCare": "http://schema.org/VeterinaryCare",
"VideoGallery": "http://schema.org/VideoGallery",
"VideoGame": "http://schema.org/VideoGame",
"VideoGameClip": "http://schema.org/VideoGameClip",
"VideoGameSeries": "http://schema.org/VideoGameSeries",
"VideoObject": "http://schema.org/VideoObject",
"ViewAction": "http://schema.org/ViewAction",
"VinylFormat": "http://schema.org/VinylFormat",
"Virus": "http://schema.org/Virus",
"VisualArtsEvent": "http://schema.org/VisualArtsEvent",
"VisualArtwork": "http://schema.org/VisualArtwork",
"VitalSign": "http://schema.org/VitalSign",
"Volcano": "http://schema.org/Volcano",
"VoteAction": "http://schema.org/VoteAction",
"WPAdBlock": "http://schema.org/WPAdBlock",
"WPFooter": "http://schema.org/WPFooter",
"WPHeader": "http://schema.org/WPHeader",
"WPSideBar": "http://schema.org/WPSideBar",
"WantAction": "http://schema.org/WantAction",
"WarrantyPromise": "http://schema.org/WarrantyPromise",
"WarrantyScope": "http://schema.org/WarrantyScope",
"WatchAction": "http://schema.org/WatchAction",
"Waterfall": "http://schema.org/Waterfall",
"WearAction": "http://schema.org/WearAction",
"WebAPI": "http://schema.org/WebAPI",
"WebApplication": "http://schema.org/WebApplication",
"WebContent": "http://schema.org/WebContent",
"WebPage": "http://schema.org/WebPage",
"WebPageElement": "http://schema.org/WebPageElement",
"WebSite": "http://schema.org/WebSite",
"Wednesday": "http://schema.org/Wednesday",
"WesternConventional": "http://schema.org/WesternConventional",
"Wholesale": "http://schema.org/Wholesale",
"WholesaleStore": "http://schema.org/WholesaleStore",
"WinAction": "http://schema.org/WinAction",
"Winery": "http://schema.org/Winery",
"Withdrawn": "http://schema.org/Withdrawn",
"WorkBasedProgram": "http://schema.org/WorkBasedProgram",
"WorkersUnion": "http://schema.org/WorkersUnion",
"Workflow": "http://purl.org/ro/wfdesc#Workflow",
"WorkflowSketch": "http://purl.org/ro/roterms#Sketch",
"WriteAction": "http://schema.org/WriteAction",
"WritePermission": "http://schema.org/WritePermission",
"XPathType": "http://schema.org/XPathType",
"XRay": "http://schema.org/XRay",
"ZoneBoardingPolicy": "http://schema.org/ZoneBoardingPolicy",
"Zoo": "http://schema.org/Zoo",
"about": "http://schema.org/about",
"abridged": "http://schema.org/abridged",
"abstract": "http://schema.org/abstract",
"accelerationTime": "http://schema.org/accelerationTime",
"acceptedAnswer": "http://schema.org/acceptedAnswer",
"acceptedOffer": "http://schema.org/acceptedOffer",
"acceptedPaymentMethod": "http://schema.org/acceptedPaymentMethod",
"acceptsReservations": "http://schema.org/acceptsReservations",
"accessCode": "http://schema.org/accessCode",
"accessMode": "http://schema.org/accessMode",
"accessModeSufficient": "http://schema.org/accessModeSufficient",
"accessibilityAPI": "http://schema.org/accessibilityAPI",
"accessibilityControl": "http://schema.org/accessibilityControl",
"accessibilityFeature": "http://schema.org/accessibilityFeature",
"accessibilityHazard": "http://schema.org/accessibilityHazard",
"accessibilitySummary": "http://schema.org/accessibilitySummary",
"accommodationCategory": "http://schema.org/accommodationCategory",
"accountId": "http://schema.org/accountId",
"accountMinimumInflow": "http://schema.org/accountMinimumInflow",
"accountOverdraftLimit": "http://schema.org/accountOverdraftLimit",
"accountablePerson": "http://schema.org/accountablePerson",
"acquiredFrom": "http://schema.org/acquiredFrom",
"acrissCode": "http://schema.org/acrissCode",
"action": "http://schema.org/action",
"actionAccessibilityRequirement": "http://schema.org/actionAccessibilityRequirement",
"actionApplication": "http://schema.org/actionApplication",
"actionOption": "http://schema.org/actionOption",
"actionPlatform": "http://schema.org/actionPlatform",
"actionStatus": "http://schema.org/actionStatus",
"actionableFeedbackPolicy": "http://schema.org/actionableFeedbackPolicy",
"activeIngredient": "http://schema.org/activeIngredient",
"activityDuration": "http://schema.org/activityDuration",
"activityFrequency": "http://schema.org/activityFrequency",
"actor": "http://schema.org/actor",
"actors": "http://schema.org/actors",
"addOn": "http://schema.org/addOn",
"additionalName": "http://schema.org/additionalName",
"additionalNumberOfGuests": "http://schema.org/additionalNumberOfGuests",
"additionalProperty": "http://schema.org/additionalProperty",
"additionalType": "http://schema.org/additionalType",
"additionalVariable": "http://schema.org/additionalVariable",
"address": "http://schema.org/address",
"addressCountry": "http://schema.org/addressCountry",
"addressLocality": "http://schema.org/addressLocality",
"addressRegion": "http://schema.org/addressRegion",
"administrationRoute": "http://schema.org/administrationRoute",
"advanceBookingRequirement": "http://schema.org/advanceBookingRequirement",
"adverseOutcome": "http://schema.org/adverseOutcome",
"affectedBy": "http://schema.org/affectedBy",
"affiliation": "http://schema.org/affiliation",
"afterMedia": "http://schema.org/afterMedia",
"agent": "http://schema.org/agent",
"aggregateRating": "http://schema.org/aggregateRating",
"aircraft": "http://schema.org/aircraft",
"album": "http://schema.org/album",
"albumProductionType": "http://schema.org/albumProductionType",
"albumRelease": "http://schema.org/albumRelease",
"albumReleaseType": "http://schema.org/albumReleaseType",
"albums": "http://schema.org/albums",
"alcoholWarning": "http://schema.org/alcoholWarning",
"algorithm": "http://schema.org/algorithm",
"alignmentType": "http://schema.org/alignmentType",
"alternateName": "http://schema.org/alternateName",
"alternativeHeadline": "http://schema.org/alternativeHeadline",
"alumni": "http://schema.org/alumni",
"alumniOf": "http://schema.org/alumniOf",
"amenityFeature": "http://schema.org/amenityFeature",
"amount": "http://schema.org/amount",
"amountOfThisGood": "http://schema.org/amountOfThisGood",
"annualPercentageRate": "http://schema.org/annualPercentageRate",
"answerCount": "http://schema.org/answerCount",
"antagonist": "http://schema.org/antagonist",
"appearance": "http://schema.org/appearance",
"applicableLocation": "http://schema.org/applicableLocation",
"applicantLocationRequirements": "http://schema.org/applicantLocationRequirements",
"application": "http://schema.org/application",
"applicationCategory": "http://schema.org/applicationCategory",
"applicationSubCategory": "http://schema.org/applicationSubCategory",
"applicationSuite": "http://schema.org/applicationSuite",
"appliesToDeliveryMethod": "http://schema.org/appliesToDeliveryMethod",
"appliesToPaymentMethod": "http://schema.org/appliesToPaymentMethod",
"archiveHeld": "http://schema.org/archiveHeld",
"area": "http://schema.org/area",
"areaServed": "http://schema.org/areaServed",
"arrivalAirport": "http://schema.org/arrivalAirport",
"arrivalBusStop": "http://schema.org/arrivalBusStop",
"arrivalGate": "http://schema.org/arrivalGate",
"arrivalPlatform": "http://schema.org/arrivalPlatform",
"arrivalStation": "http://schema.org/arrivalStation",
"arrivalTerminal": "http://schema.org/arrivalTerminal",
"arrivalTime": "http://schema.org/arrivalTime",
"artEdition": "http://schema.org/artEdition",
"artMedium": "http://schema.org/artMedium",
"arterialBranch": "http://schema.org/arterialBranch",
"artform": "http://schema.org/artform",
"articleBody": "http://schema.org/articleBody",
"articleSection": "http://schema.org/articleSection",
"artist": "http://schema.org/artist",
"artworkSurface": "http://schema.org/artworkSurface",
"aspect": "http://schema.org/aspect",
"assembly": "http://schema.org/assembly",
"assemblyVersion": "http://schema.org/assemblyVersion",
"associatedAnatomy": "http://schema.org/associatedAnatomy",
"associatedArticle": "http://schema.org/associatedArticle",
"associatedMedia": "http://schema.org/associatedMedia",
"associatedPathophysiology": "http://schema.org/associatedPathophysiology",
"athlete": "http://schema.org/athlete",
"attendee": "http://schema.org/attendee",
"attendees": "http://schema.org/attendees",
"audience": "http://schema.org/audience",
"audienceType": "http://schema.org/audienceType",
"audio": "http://schema.org/audio",
"authenticator": "http://schema.org/authenticator",
"author": "http://schema.org/author",
"availability": "http://schema.org/availability",
"availabilityEnds": "http://schema.org/availabilityEnds",
"availabilityStarts": "http://schema.org/availabilityStarts",
"availableAtOrFrom": "http://schema.org/availableAtOrFrom",
"availableChannel": "http://schema.org/availableChannel",
"availableDeliveryMethod": "http://schema.org/availableDeliveryMethod",
"availableFrom": "http://schema.org/availableFrom",
"availableIn": "http://schema.org/availableIn",
"availableLanguage": "http://schema.org/availableLanguage",
"availableOnDevice": "http://schema.org/availableOnDevice",
"availableService": "http://schema.org/availableService",
"availableStrength": "http://schema.org/availableStrength",
"availableTest": "http://schema.org/availableTest",
"availableThrough": "http://schema.org/availableThrough",
"award": "http://schema.org/award",
"awards": "http://schema.org/awards",
"awayTeam": "http://schema.org/awayTeam",
"background": "http://schema.org/background",
"backstory": "http://schema.org/backstory",
"bankAccountType": "http://schema.org/bankAccountType",
"baseSalary": "http://schema.org/baseSalary",
"bccRecipient": "http://schema.org/bccRecipient",
"bed": "http://schema.org/bed",
"beforeMedia": "http://schema.org/beforeMedia",
"beneficiaryBank": "http://schema.org/beneficiaryBank",
"benefits": "http://schema.org/benefits",
"benefitsSummaryUrl": "http://schema.org/benefitsSummaryUrl",
"bestRating": "http://schema.org/bestRating",
"bibo": "http://purl.org/ontology/bibo/",
"billingAddress": "http://schema.org/billingAddress",
"billingIncrement": "http://schema.org/billingIncrement",
"billingPeriod": "http://schema.org/billingPeriod",
"biomechnicalClass": "http://schema.org/biomechnicalClass",
"birthDate": "http://schema.org/birthDate",
"birthPlace": "http://schema.org/birthPlace",
"bitrate": "http://schema.org/bitrate",
"blogPost": "http://schema.org/blogPost",
"blogPosts": "http://schema.org/blogPosts",
"bloodSupply": "http://schema.org/bloodSupply",
"boardingGroup": "http://schema.org/boardingGroup",
"boardingPolicy": "http://schema.org/boardingPolicy",
"bodyLocation": "http://schema.org/bodyLocation",
"bodyType": "http://schema.org/bodyType",
"bookEdition": "http://schema.org/bookEdition",
"bookFormat": "http://schema.org/bookFormat",
"bookingAgent": "http://schema.org/bookingAgent",
"bookingTime": "http://schema.org/bookingTime",
"borrower": "http://schema.org/borrower",
"box": "http://schema.org/box",
"branch": "http://schema.org/branch",
"branchCode": "http://schema.org/branchCode",
"branchOf": "http://schema.org/branchOf",
"brand": "http://schema.org/brand",
"breadcrumb": "http://schema.org/breadcrumb",
"breastfeedingWarning": "http://schema.org/breastfeedingWarning",
"broadcastAffiliateOf": "http://schema.org/broadcastAffiliateOf",
"broadcastChannelId": "http://schema.org/broadcastChannelId",
"broadcastDisplayName": "http://schema.org/broadcastDisplayName",
"broadcastFrequency": "http://schema.org/broadcastFrequency",
"broadcastFrequencyValue": "http://schema.org/broadcastFrequencyValue",
"broadcastOfEvent": "http://schema.org/broadcastOfEvent",
"broadcastServiceTier": "http://schema.org/broadcastServiceTier",
"broadcastSignalModulation": "http://schema.org/broadcastSignalModulation",
"broadcastSubChannel": "http://schema.org/broadcastSubChannel",
"broadcastTimezone": "http://schema.org/broadcastTimezone",
"broadcaster": "http://schema.org/broadcaster",
"broker": "http://schema.org/broker",
"browserRequirements": "http://schema.org/browserRequirements",
"busName": "http://schema.org/busName",
"busNumber": "http://schema.org/busNumber",
"businessFunction": "http://schema.org/businessFunction",
"buyer": "http://schema.org/buyer",
"byArtist": "http://schema.org/byArtist",
"byDay": "http://schema.org/byDay",
"byMonth": "http://schema.org/byMonth",
"byMonthDay": "http://schema.org/byMonthDay",
"callSign": "http://schema.org/callSign",
"calories": "http://schema.org/calories",
"candidate": "http://schema.org/candidate",
"caption": "http://schema.org/caption",
"carbohydrateContent": "http://schema.org/carbohydrateContent",
"cargoVolume": "http://schema.org/cargoVolume",
"carrier": "http://schema.org/carrier",
"carrierRequirements": "http://schema.org/carrierRequirements",
"cashBack": "http://schema.org/cashBack",
"catalog": "http://schema.org/catalog",
"catalogNumber": "http://schema.org/catalogNumber",
"category": "http://schema.org/category",
"cause": "http://schema.org/cause",
"causeOf": "http://schema.org/causeOf",
"cc": "http://creativecommons.org/ns#",
"ccRecipient": "http://schema.org/ccRecipient",
"character": "http://schema.org/character",
"characterAttribute": "http://schema.org/characterAttribute",
"characterName": "http://schema.org/characterName",
"cheatCode": "http://schema.org/cheatCode",
"checkinTime": "http://schema.org/checkinTime",
"checkoutTime": "http://schema.org/checkoutTime",
"childMaxAge": "http://schema.org/childMaxAge",
"childMinAge": "http://schema.org/childMinAge",
"children": "http://schema.org/children",
"cholesterolContent": "http://schema.org/cholesterolContent",
"circle": "http://schema.org/circle",
"citation": "http://schema.org/citation",
"cite-as": "https://www.w3.org/ns/iana/link-relations/relation#cite-as",
"claimReviewed": "http://schema.org/claimReviewed",
"clincalPharmacology": "http://schema.org/clincalPharmacology",
"clinicalPharmacology": "http://schema.org/clinicalPharmacology",
"clipNumber": "http://schema.org/clipNumber",
"closes": "http://schema.org/closes",
"coach": "http://schema.org/coach",
"code": "http://schema.org/code",
"codeRepository": "http://schema.org/codeRepository",
"codeSampleType": "http://schema.org/codeSampleType",
"codeValue": "http://schema.org/codeValue",
"codingSystem": "http://schema.org/codingSystem",
"colleague": "http://schema.org/colleague",
"colleagues": "http://schema.org/colleagues",
"collection": "http://schema.org/collection",
"collectionSize": "http://schema.org/collectionSize",
"color": "http://schema.org/color",
"colorist": "http://schema.org/colorist",
"comment": "http://schema.org/comment",
"commentCount": "http://schema.org/commentCount",
"commentText": "http://schema.org/commentText",
"commentTime": "http://schema.org/commentTime",
"competencyRequired": "http://schema.org/competencyRequired",
"competitor": "http://schema.org/competitor",
"composer": "http://schema.org/composer",
"comprisedOf": "http://schema.org/comprisedOf",
"conditionsOfAccess": "http://schema.org/conditionsOfAccess",
"confirmationNumber": "http://schema.org/confirmationNumber",
"conformsTo": "http://purl.org/dc/terms/conformsTo",
"connectedTo": "http://schema.org/connectedTo",
"constrainingProperty": "http://schema.org/constrainingProperty",
"contactOption": "http://schema.org/contactOption",
"contactPoint": "http://schema.org/contactPoint",
"contactPoints": "http://schema.org/contactPoints",
"contactType": "http://schema.org/contactType",
"contactlessPayment": "http://schema.org/contactlessPayment",
"containedIn": "http://schema.org/containedIn",
"containedInPlace": "http://schema.org/containedInPlace",
"containsPlace": "http://schema.org/containsPlace",
"containsSeason": "http://schema.org/containsSeason",
"contentLocation": "http://schema.org/contentLocation",
"contentRating": "http://schema.org/contentRating",
"contentReferenceTime": "http://schema.org/contentReferenceTime",
"contentSize": "http://schema.org/contentSize",
"contentType": "http://schema.org/contentType",
"contentUrl": "http://schema.org/contentUrl",
"contraindication": "http://schema.org/contraindication",
"contributor": "http://schema.org/contributor",
"cookTime": "http://schema.org/cookTime",
"cookingMethod": "http://schema.org/cookingMethod",
"copyrightHolder": "http://schema.org/copyrightHolder",
"copyrightYear": "http://schema.org/copyrightYear",
"correction": "http://schema.org/correction",
"correctionsPolicy": "http://schema.org/correctionsPolicy",
"cost": "http://schema.org/cost",
"costCategory": "http://schema.org/costCategory",
"costCurrency": "http://schema.org/costCurrency",
"costOrigin": "http://schema.org/costOrigin",
"costPerUnit": "http://schema.org/costPerUnit",
"countriesNotSupported": "http://schema.org/countriesNotSupported",
"countriesSupported": "http://schema.org/countriesSupported",
"countryOfOrigin": "http://schema.org/countryOfOrigin",
"course": "http://schema.org/course",
"courseCode": "http://schema.org/courseCode",
"courseMode": "http://schema.org/courseMode",
"coursePrerequisites": "http://schema.org/coursePrerequisites",
"courseWorkload": "http://schema.org/courseWorkload",
"coverageEndTime": "http://schema.org/coverageEndTime",
"coverageStartTime": "http://schema.org/coverageStartTime",
"creativeWorkStatus": "http://schema.org/creativeWorkStatus",
"creator": "http://schema.org/creator",
"credentialCategory": "http://schema.org/credentialCategory",
"creditedTo": "http://schema.org/creditedTo",
"cssSelector": "http://schema.org/cssSelector",
"currenciesAccepted": "http://schema.org/currenciesAccepted",
"currency": "http://schema.org/currency",
"currentExchangeRate": "http://schema.org/currentExchangeRate",
"customer": "http://schema.org/customer",
"dataFeedElement": "http://schema.org/dataFeedElement",
"dataset": "http://schema.org/dataset",
"datasetTimeInterval": "http://schema.org/datasetTimeInterval",
"dateCreated": "http://schema.org/dateCreated",
"dateDeleted": "http://schema.org/dateDeleted",
"dateIssued": "http://schema.org/dateIssued",
"dateModified": "http://schema.org/dateModified",
"datePosted": "http://schema.org/datePosted",
"datePublished": "http://schema.org/datePublished",
"dateRead": "http://schema.org/dateRead",
"dateReceived": "http://schema.org/dateReceived",
"dateSent": "http://schema.org/dateSent",
"dateVehicleFirstRegistered": "http://schema.org/dateVehicleFirstRegistered",
"dateline": "http://schema.org/dateline",
"dayOfWeek": "http://schema.org/dayOfWeek",
"dct": "http://purl.org/dc/terms/",
"deathDate": "http://schema.org/deathDate",
"deathPlace": "http://schema.org/deathPlace",
"defaultValue": "http://schema.org/defaultValue",
"deliveryAddress": "http://schema.org/deliveryAddress",
"deliveryLeadTime": "http://schema.org/deliveryLeadTime",
"deliveryMethod": "http://schema.org/deliveryMethod",
"deliveryStatus": "http://schema.org/deliveryStatus",
"department": "http://schema.org/department",
"departureAirport": "http://schema.org/departureAirport",
"departureBusStop": "http://schema.org/departureBusStop",
"departureGate": "http://schema.org/departureGate",
"departurePlatform": "http://schema.org/departurePlatform",
"departureStation": "http://schema.org/departureStation",
"departureTerminal": "http://schema.org/departureTerminal",
"departureTime": "http://schema.org/departureTime",
"dependencies": "http://schema.org/dependencies",
"depth": "http://schema.org/depth",
"description": "http://schema.org/description",
"device": "http://schema.org/device",
"diagnosis": "http://schema.org/diagnosis",
"diagram": "http://schema.org/diagram",
"diet": "http://schema.org/diet",
"dietFeatures": "http://schema.org/dietFeatures",
"differentialDiagnosis": "http://schema.org/differentialDiagnosis",
"director": "http://schema.org/director",
"directors": "http://schema.org/directors",
"disambiguatingDescription": "http://schema.org/disambiguatingDescription",
"discount": "http://schema.org/discount",
"discountCode": "http://schema.org/discountCode",
"discountCurrency": "http://schema.org/discountCurrency",
"discusses": "http://schema.org/discusses",
"discussionUrl": "http://schema.org/discussionUrl",
"dissolutionDate": "http://schema.org/dissolutionDate",
"distance": "http://schema.org/distance",
"distinguishingSign": "http://schema.org/distinguishingSign",
"distribution": "http://schema.org/distribution",
"diversityPolicy": "http://schema.org/diversityPolicy",
"diversityStaffingReport": "http://schema.org/diversityStaffingReport",
"documentation": "http://schema.org/documentation",
"domainIncludes": "http://schema.org/domainIncludes",
"domiciledMortgage": "http://schema.org/domiciledMortgage",
"doorTime": "http://schema.org/doorTime",
"dosageForm": "http://schema.org/dosageForm",
"doseSchedule": "http://schema.org/doseSchedule",
"doseUnit": "http://schema.org/doseUnit",
"doseValue": "http://schema.org/doseValue",
"downPayment": "http://schema.org/downPayment",
"downloadUrl": "http://schema.org/downloadUrl",
"downvoteCount": "http://schema.org/downvoteCount",
"drainsTo": "http://schema.org/drainsTo",
"driveWheelConfiguration": "http://schema.org/driveWheelConfiguration",
"dropoffLocation": "http://schema.org/dropoffLocation",
"dropoffTime": "http://schema.org/dropoffTime",
"drug": "http://schema.org/drug",
"drugClass": "http://schema.org/drugClass",
"drugUnit": "http://schema.org/drugUnit",
"duns": "http://schema.org/duns",
"duplicateTherapy": "http://schema.org/duplicateTherapy",
"duration": "http://schema.org/duration",
"durationOfWarranty": "http://schema.org/durationOfWarranty",
"duringMedia": "http://schema.org/duringMedia",
"earlyPrepaymentPenalty": "http://schema.org/earlyPrepaymentPenalty",
"editor": "http://schema.org/editor",
"educationRequirements": "http://schema.org/educationRequirements",
"educationalAlignment": "http://schema.org/educationalAlignment",
"educationalCredentialAwarded": "http://schema.org/educationalCredentialAwarded",
"educationalFramework": "http://schema.org/educationalFramework",
"educationalLevel": "http://schema.org/educationalLevel",
"educationalRole": "http://schema.org/educationalRole",
"educationalUse": "http://schema.org/educationalUse",
"elevation": "http://schema.org/elevation",
"eligibleCustomerType": "http://schema.org/eligibleCustomerType",
"eligibleDuration": "http://schema.org/eligibleDuration",
"eligibleQuantity": "http://schema.org/eligibleQuantity",
"eligibleRegion": "http://schema.org/eligibleRegion",
"eligibleTransactionVolume": "http://schema.org/eligibleTransactionVolume",
"email": "http://schema.org/email",
"embedUrl": "http://schema.org/embedUrl",
"emissionsCO2": "http://schema.org/emissionsCO2",
"employee": "http://schema.org/employee",
"employees": "http://schema.org/employees",
"employmentType": "http://schema.org/employmentType",
"employmentUnit": "http://schema.org/employmentUnit",
"encodesCreativeWork": "http://schema.org/encodesCreativeWork",
"encoding": "http://schema.org/encoding",
"encodingFormat": "http://schema.org/encodingFormat",
"encodingType": "http://schema.org/encodingType",
"encodings": "http://schema.org/encodings",
"endDate": "http://schema.org/endDate",
"endOffset": "http://schema.org/endOffset",
"endTime": "http://schema.org/endTime",
"endorsee": "http://schema.org/endorsee",
"endorsers": "http://schema.org/endorsers",
"engineDisplacement": "http://schema.org/engineDisplacement",
"enginePower": "http://schema.org/enginePower",
"engineType": "http://schema.org/engineType",
"entertainmentBusiness": "http://schema.org/entertainmentBusiness",
"epidemiology": "http://schema.org/epidemiology",
"episode": "http://schema.org/episode",
"episodeNumber": "http://schema.org/episodeNumber",
"episodes": "http://schema.org/episodes",
"equal": "http://schema.org/equal",
"error": "http://schema.org/error",
"estimatedCost": "http://schema.org/estimatedCost",
"estimatedFlightDuration": "http://schema.org/estimatedFlightDuration",
"estimatedSalary": "http://schema.org/estimatedSalary",
"estimatesRiskOf": "http://schema.org/estimatesRiskOf",
"ethicsPolicy": "http://schema.org/ethicsPolicy",
"event": "http://schema.org/event",
"eventSchedule": "http://schema.org/eventSchedule",
"eventStatus": "http://schema.org/eventStatus",
"events": "http://schema.org/events",
"evidenceLevel": "http://schema.org/evidenceLevel",
"evidenceOrigin": "http://schema.org/evidenceOrigin",
"exampleOfWork": "http://schema.org/exampleOfWork",
"exceptDate": "http://schema.org/exceptDate",
"exchangeRateSpread": "http://schema.org/exchangeRateSpread",
"executableLibraryName": "http://schema.org/executableLibraryName",
"exerciseCourse": "http://schema.org/exerciseCourse",
"exercisePlan": "http://schema.org/exercisePlan",
"exerciseRelatedDiet": "http://schema.org/exerciseRelatedDiet",
"exerciseType": "http://schema.org/exerciseType",
"exifData": "http://schema.org/exifData",
"expectedArrivalFrom": "http://schema.org/expectedArrivalFrom",
"expectedArrivalUntil": "http://schema.org/expectedArrivalUntil",
"expectedPrognosis": "http://schema.org/expectedPrognosis",
"expectsAcceptanceOf": "http://schema.org/expectsAcceptanceOf",
"experienceRequirements": "http://schema.org/experienceRequirements",
"expertConsiderations": "http://schema.org/expertConsiderations",
"expires": "http://schema.org/expires",
"familyName": "http://schema.org/familyName",
"fatContent": "http://schema.org/fatContent",
"faxNumber": "http://schema.org/faxNumber",
"featureList": "http://schema.org/featureList",
"feesAndCommissionsSpecification": "http://schema.org/feesAndCommissionsSpecification",
"fiberContent": "http://schema.org/fiberContent",
"fileFormat": "http://schema.org/fileFormat",
"fileSize": "http://schema.org/fileSize",
"firstAppearance": "http://schema.org/firstAppearance",
"firstPerformance": "http://schema.org/firstPerformance",
"flightDistance": "http://schema.org/flightDistance",
"flightNumber": "http://schema.org/flightNumber",
"floorLevel": "http://schema.org/floorLevel",
"floorLimit": "http://schema.org/floorLimit",
"floorSize": "http://schema.org/floorSize",
"foaf": "http://xmlns.com/foaf/0.1/",
"followee": "http://schema.org/followee",
"follows": "http://schema.org/follows",
"followup": "http://schema.org/followup",
"foodEstablishment": "http://schema.org/foodEstablishment",
"foodEvent": "http://schema.org/foodEvent",
"foodWarning": "http://schema.org/foodWarning",
"founder": "http://schema.org/founder",
"founders": "http://schema.org/founders",
"foundingDate": "http://schema.org/foundingDate",
"foundingLocation": "http://schema.org/foundingLocation",
"frapo": "http://purl.org/cerif/frapo/",
"free": "http://schema.org/free",
"frequency": "http://schema.org/frequency",
"fromLocation": "http://schema.org/fromLocation",
"fuelCapacity": "http://schema.org/fuelCapacity",
"fuelConsumption": "http://schema.org/fuelConsumption",
"fuelEfficiency": "http://schema.org/fuelEfficiency",
"fuelType": "http://schema.org/fuelType",
"function": "http://schema.org/function",
"functionalClass": "http://schema.org/functionalClass",
"fundedItem": "http://schema.org/fundedItem",
"funder": "http://schema.org/funder",
"game": "http://schema.org/game",
"gameItem": "http://schema.org/gameItem",
"gameLocation": "http://schema.org/gameLocation",
"gamePlatform": "http://schema.org/gamePlatform",
"gameServer": "http://schema.org/gameServer",
"gameTip": "http://schema.org/gameTip",
"gender": "http://schema.org/gender",
"genre": "http://schema.org/genre",
"geo": "http://schema.org/geo",
"geoContains": "http://schema.org/geoContains",
"geoCoveredBy": "http://schema.org/geoCoveredBy",
"geoCovers": "http://schema.org/geoCovers",
"geoCrosses": "http://schema.org/geoCrosses",
"geoDisjoint": "http://schema.org/geoDisjoint",
"geoEquals": "http://schema.org/geoEquals",
"geoIntersects": "http://schema.org/geoIntersects",
"geoMidpoint": "http://schema.org/geoMidpoint",
"geoOverlaps": "http://schema.org/geoOverlaps",
"geoRadius": "http://schema.org/geoRadius",
"geoTouches": "http://schema.org/geoTouches",
"geoWithin": "http://schema.org/geoWithin",
"geographicArea": "http://schema.org/geographicArea",
"givenName": "http://schema.org/givenName",
"globalLocationNumber": "http://schema.org/globalLocationNumber",
"gracePeriod": "http://schema.org/gracePeriod",
"grantee": "http://schema.org/grantee",
"greater": "http://schema.org/greater",
"greaterOrEqual": "http://schema.org/greaterOrEqual",
"gtin": "http://schema.org/gtin",
"gtin12": "http://schema.org/gtin12",
"gtin13": "http://schema.org/gtin13",
"gtin14": "http://schema.org/gtin14",
"gtin8": "http://schema.org/gtin8",
"guideline": "http://schema.org/guideline",
"guidelineDate": "http://schema.org/guidelineDate",
"guidelineSubject": "http://schema.org/guidelineSubject",
"hasBroadcastChannel": "http://schema.org/hasBroadcastChannel",
"hasCategoryCode": "http://schema.org/hasCategoryCode",
"hasCourseInstance": "http://schema.org/hasCourseInstance",
"hasCredential": "http://schema.org/hasCredential",
"hasDefinedTerm": "http://schema.org/hasDefinedTerm",
"hasDeliveryMethod": "http://schema.org/hasDeliveryMethod",
"hasDigitalDocumentPermission": "http://schema.org/hasDigitalDocumentPermission",
"hasFile": "http://pcdm.org/models#hasFile",
"hasHealthAspect": "http://schema.org/hasHealthAspect",
"hasMap": "http://schema.org/hasMap",
"hasMember": "http://pcdm.org/models#hasMember",
"hasMenu": "http://schema.org/hasMenu",
"hasMenuItem": "http://schema.org/hasMenuItem",
"hasMenuSection": "http://schema.org/hasMenuSection",
"hasOccupation": "http://schema.org/hasOccupation",
"hasOfferCatalog": "http://schema.org/hasOfferCatalog",
"hasPOS": "http://schema.org/hasPOS",
"hasPart": "http://schema.org/hasPart",
"hasProductReturnPolicy": "http://schema.org/hasProductReturnPolicy",
"headline": "http://schema.org/headline",
"healthCondition": "http://schema.org/healthCondition",
"healthPlanCoinsuranceOption": "http://schema.org/healthPlanCoinsuranceOption",
"healthPlanCoinsuranceRate": "http://schema.org/healthPlanCoinsuranceRate",
"healthPlanCopay": "http://schema.org/healthPlanCopay",
"healthPlanCopayOption": "http://schema.org/healthPlanCopayOption",
"healthPlanCostSharing": "http://schema.org/healthPlanCostSharing",
"healthPlanDrugOption": "http://schema.org/healthPlanDrugOption",
"healthPlanDrugTier": "http://schema.org/healthPlanDrugTier",
"healthPlanId": "http://schema.org/healthPlanId",
"healthPlanMarketingUrl": "http://schema.org/healthPlanMarketingUrl",
"healthPlanNetworkId": "http://schema.org/healthPlanNetworkId",
"healthPlanNetworkTier": "http://schema.org/healthPlanNetworkTier",
"healthPlanPharmacyCategory": "http://schema.org/healthPlanPharmacyCategory",
"height": "http://schema.org/height",
"highPrice": "http://schema.org/highPrice",
"hiringOrganization": "http://schema.org/hiringOrganization",
"holdingArchive": "http://schema.org/holdingArchive",
"homeLocation": "http://schema.org/homeLocation",
"homeTeam": "http://schema.org/homeTeam",
"honorificPrefix": "http://schema.org/honorificPrefix",
"honorificSuffix": "http://schema.org/honorificSuffix",
"hospitalAffiliation": "http://schema.org/hospitalAffiliation",
"hostingOrganization": "http://schema.org/hostingOrganization",
"hoursAvailable": "http://schema.org/hoursAvailable",
"howPerformed": "http://schema.org/howPerformed",
"httpMethod": "http://schema.org/httpMethod",
"iataCode": "http://schema.org/iataCode",
"icaoCode": "http://schema.org/icaoCode",
"identifier": "http://schema.org/identifier",
"identifyingExam": "http://schema.org/identifyingExam",
"identifyingTest": "http://schema.org/identifyingTest",
"illustrator": "http://schema.org/illustrator",
"image": "http://schema.org/image",
"imagingTechnique": "http://schema.org/imagingTechnique",
"importedBy": "http://purl.org/pav/importedBy",
"importedFrom": "http://purl.org/pav/importedFrom",
"importedOn": "http://purl.org/pav/importedOn",
"inAlbum": "http://schema.org/inAlbum",
"inBroadcastLineup": "http://schema.org/inBroadcastLineup",
"inCodeSet": "http://schema.org/inCodeSet",
"inDefinedTermSet": "http://schema.org/inDefinedTermSet",
"inLanguage": "http://schema.org/inLanguage",
"inPlaylist": "http://schema.org/inPlaylist",
"inStoreReturnsOffered": "http://schema.org/inStoreReturnsOffered",
"inSupportOf": "http://schema.org/inSupportOf",
"incentiveCompensation": "http://schema.org/incentiveCompensation",
"incentives": "http://schema.org/incentives",
"includedComposition": "http://schema.org/includedComposition",
"includedDataCatalog": "http://schema.org/includedDataCatalog",
"includedInDataCatalog": "http://schema.org/includedInDataCatalog",
"includedInHealthInsurancePlan": "http://schema.org/includedInHealthInsurancePlan",
"includedRiskFactor": "http://schema.org/includedRiskFactor",
"includesAttraction": "http://schema.org/includesAttraction",
"includesHealthPlanFormulary": "http://schema.org/includesHealthPlanFormulary",
"includesHealthPlanNetwork": "http://schema.org/includesHealthPlanNetwork",
"includesObject": "http://schema.org/includesObject",
"increasesRiskOf": "http://schema.org/increasesRiskOf",
"indication": "http://schema.org/indication",
"industry": "http://schema.org/industry",
"ineligibleRegion": "http://schema.org/ineligibleRegion",
"infectiousAgent": "http://schema.org/infectiousAgent",
"infectiousAgentClass": "http://schema.org/infectiousAgentClass",
"ingredients": "http://schema.org/ingredients",
"inker": "http://schema.org/inker",
"insertion": "http://schema.org/insertion",
"installUrl": "http://schema.org/installUrl",
"instructor": "http://schema.org/instructor",
"instrument": "http://schema.org/instrument",
"intensity": "http://schema.org/intensity",
"interactingDrug": "http://schema.org/interactingDrug",
"interactionCount": "http://schema.org/interactionCount",
"interactionService": "http://schema.org/interactionService",
"interactionStatistic": "http://schema.org/interactionStatistic",
"interactionType": "http://schema.org/interactionType",
"interactivityType": "http://schema.org/interactivityType",
"interestRate": "http://schema.org/interestRate",
"inventoryLevel": "http://schema.org/inventoryLevel",
"inverseOf": "http://schema.org/inverseOf",
"isAcceptingNewPatients": "http://schema.org/isAcceptingNewPatients",
"isAccessibleForFree": "http://schema.org/isAccessibleForFree",
"isAccessoryOrSparePartFor": "http://schema.org/isAccessoryOrSparePartFor",
"isAvailableGenerically": "http://schema.org/isAvailableGenerically",
"isBasedOn": "http://schema.org/isBasedOn",
"isBasedOnUrl": "http://schema.org/isBasedOnUrl",
"isConsumableFor": "http://schema.org/isConsumableFor",
"isFamilyFriendly": "http://schema.org/isFamilyFriendly",
"isGift": "http://schema.org/isGift",
"isLiveBroadcast": "http://schema.org/isLiveBroadcast",
"isPartOf": "http://schema.org/isPartOf",
"isProprietary": "http://schema.org/isProprietary",
"isRelatedTo": "http://schema.org/isRelatedTo",
"isSimilarTo": "http://schema.org/isSimilarTo",
"isVariantOf": "http://schema.org/isVariantOf",
"isbn": "http://schema.org/isbn",
"isicV4": "http://schema.org/isicV4",
"isrcCode": "http://schema.org/isrcCode",
"issn": "http://schema.org/issn",
"issueNumber": "http://schema.org/issueNumber",
"issuedBy": "http://schema.org/issuedBy",
"issuedThrough": "http://schema.org/issuedThrough",
"iswcCode": "http://schema.org/iswcCode",
"item": "http://schema.org/item",
"itemCondition": "http://schema.org/itemCondition",
"itemListElement": "http://schema.org/itemListElement",
"itemListOrder": "http://schema.org/itemListOrder",
"itemLocation": "http://schema.org/itemLocation",
"itemOffered": "http://schema.org/itemOffered",
"itemReviewed": "http://schema.org/itemReviewed",
"itemShipped": "http://schema.org/itemShipped",
"itinerary": "http://schema.org/itinerary",
"jobBenefits": "http://schema.org/jobBenefits",
"jobImmediateStart": "http://schema.org/jobImmediateStart",
"jobLocation": "http://schema.org/jobLocation",
"jobLocationType": "http://schema.org/jobLocationType",
"jobStartDate": "http://schema.org/jobStartDate",
"jobTitle": "http://schema.org/jobTitle",
"keywords": "http://schema.org/keywords",
"knownVehicleDamages": "http://schema.org/knownVehicleDamages",
"knows": "http://schema.org/knows",
"knowsAbout": "http://schema.org/knowsAbout",
"knowsLanguage": "http://schema.org/knowsLanguage",
"labelDetails": "http://schema.org/labelDetails",
"landlord": "http://schema.org/landlord",
"language": "http://schema.org/language",
"lastReviewed": "http://schema.org/lastReviewed",
"latitude": "http://schema.org/latitude",
"learningResourceType": "http://schema.org/learningResourceType",
"leaseLength": "http://schema.org/leaseLength",
"legalName": "http://schema.org/legalName",
"legalStatus": "http://schema.org/legalStatus",
"legislationApplies": "http://schema.org/legislationApplies",
"legislationChanges": "http://schema.org/legislationChanges",
"legislationConsolidates": "http://schema.org/legislationConsolidates",
"legislationDate": "http://schema.org/legislationDate",
"legislationDateVersion": "http://schema.org/legislationDateVersion",
"legislationIdentifier": "http://schema.org/legislationIdentifier",
"legislationJurisdiction": "http://schema.org/legislationJurisdiction",
"legislationLegalForce": "http://schema.org/legislationLegalForce",
"legislationLegalValue": "http://schema.org/legislationLegalValue",
"legislationPassedBy": "http://schema.org/legislationPassedBy",
"legislationResponsible": "http://schema.org/legislationResponsible",
"legislationTransposes": "http://schema.org/legislationTransposes",
"legislationType": "http://schema.org/legislationType",
"leiCode": "http://schema.org/leiCode",
"lender": "http://schema.org/lender",
"lesser": "http://schema.org/lesser",
"lesserOrEqual": "http://schema.org/lesserOrEqual",
"letterer": "http://schema.org/letterer",
"license": "http://schema.org/license",
"line": "http://schema.org/line",
"linkRelationship": "http://schema.org/linkRelationship",
"liveBlogUpdate": "http://schema.org/liveBlogUpdate",
"loanMortgageMandateAmount": "http://schema.org/loanMortgageMandateAmount",
"loanPaymentAmount": "http://schema.org/loanPaymentAmount",
"loanPaymentFrequency": "http://schema.org/loanPaymentFrequency",
"loanRepaymentForm": "http://schema.org/loanRepaymentForm",
"loanTerm": "http://schema.org/loanTerm",
"loanType": "http://schema.org/loanType",
"location": "http://schema.org/location",
"locationCreated": "http://schema.org/locationCreated",
"lodgingUnitDescription": "http://schema.org/lodgingUnitDescription",
"lodgingUnitType": "http://schema.org/lodgingUnitType",
"logo": "http://schema.org/logo",
"longitude": "http://schema.org/longitude",
"loser": "http://schema.org/loser",
"lowPrice": "http://schema.org/lowPrice",
"lyricist": "http://schema.org/lyricist",
"lyrics": "http://schema.org/lyrics",
"mainContentOfPage": "http://schema.org/mainContentOfPage",
"mainEntity": "http://schema.org/mainEntity",
"mainEntityOfPage": "http://schema.org/mainEntityOfPage",
"makesOffer": "http://schema.org/makesOffer",
"manufacturer": "http://schema.org/manufacturer",
"map": "http://schema.org/map",
"mapType": "http://schema.org/mapType",
"maps": "http://schema.org/maps",
"marginOfError": "http://schema.org/marginOfError",
"masthead": "http://schema.org/masthead",
"material": "http://schema.org/material",
"materialExtent": "http://schema.org/materialExtent",
"maxPrice": "http://schema.org/maxPrice",
"maxValue": "http://schema.org/maxValue",
"maximumAttendeeCapacity": "http://schema.org/maximumAttendeeCapacity",
"maximumIntake": "http://schema.org/maximumIntake",
"mealService": "http://schema.org/mealService",
"measuredProperty": "http://schema.org/measuredProperty",
"measuredValue": "http://schema.org/measuredValue",
"measurementTechnique": "http://schema.org/measurementTechnique",
"mechanismOfAction": "http://schema.org/mechanismOfAction",
"median": "http://schema.org/median",
"medicalSpecialty": "http://schema.org/medicalSpecialty",
"medicineSystem": "http://schema.org/medicineSystem",
"meetsEmissionStandard": "http://schema.org/meetsEmissionStandard",
"member": "http://schema.org/member",
"memberOf": "http://schema.org/memberOf",
"members": "http://schema.org/members",
"membershipNumber": "http://schema.org/membershipNumber",
"membershipPointsEarned": "http://schema.org/membershipPointsEarned",
"memoryRequirements": "http://schema.org/memoryRequirements",
"mentions": "http://schema.org/mentions",
"menu": "http://schema.org/menu",
"menuAddOn": "http://schema.org/menuAddOn",
"merchant": "http://schema.org/merchant",
"messageAttachment": "http://schema.org/messageAttachment",
"mileageFromOdometer": "http://schema.org/mileageFromOdometer",
"minPrice": "http://schema.org/minPrice",
"minValue": "http://schema.org/minValue",
"minimumPaymentDue": "http://schema.org/minimumPaymentDue",
"missionCoveragePrioritiesPolicy": "http://schema.org/missionCoveragePrioritiesPolicy",
"model": "http://schema.org/model",
"modelDate": "http://schema.org/modelDate",
"modifiedTime": "http://schema.org/modifiedTime",
"monthlyMinimumRepaymentAmount": "http://schema.org/monthlyMinimumRepaymentAmount",
"mpn": "http://schema.org/mpn",
"multipleValues": "http://schema.org/multipleValues",
"muscleAction": "http://schema.org/muscleAction",
"musicArrangement": "http://schema.org/musicArrangement",
"musicBy": "http://schema.org/musicBy",
"musicCompositionForm": "http://schema.org/musicCompositionForm",
"musicGroupMember": "http://schema.org/musicGroupMember",
"musicReleaseFormat": "http://schema.org/musicReleaseFormat",
"musicalKey": "http://schema.org/musicalKey",
"naics": "http://schema.org/naics",
"name": "http://schema.org/name",
"namedPosition": "http://schema.org/namedPosition",
"nationality": "http://schema.org/nationality",
"naturalProgression": "http://schema.org/naturalProgression",
"nerve": "http://schema.org/nerve",
"nerveMotor": "http://schema.org/nerveMotor",
"netWorth": "http://schema.org/netWorth",
"nextItem": "http://schema.org/nextItem",
"noBylinesPolicy": "http://schema.org/noBylinesPolicy",
"nonEqual": "http://schema.org/nonEqual",
"nonProprietaryName": "http://schema.org/nonProprietaryName",
"normalRange": "http://schema.org/normalRange",
"nsn": "http://schema.org/nsn",
"numAdults": "http://schema.org/numAdults",
"numChildren": "http://schema.org/numChildren",
"numConstraints": "http://schema.org/numConstraints",
"numTracks": "http://schema.org/numTracks",
"numberOfAirbags": "http://schema.org/numberOfAirbags",
"numberOfAxles": "http://schema.org/numberOfAxles",
"numberOfBathroomsTotal": "http://schema.org/numberOfBathroomsTotal",
"numberOfBeds": "http://schema.org/numberOfBeds",
"numberOfDoors": "http://schema.org/numberOfDoors",
"numberOfEmployees": "http://schema.org/numberOfEmployees",
"numberOfEpisodes": "http://schema.org/numberOfEpisodes",
"numberOfForwardGears": "http://schema.org/numberOfForwardGears",
"numberOfFullBathrooms": "http://schema.org/numberOfFullBathrooms",
"numberOfItems": "http://schema.org/numberOfItems",
"numberOfLoanPayments": "http://schema.org/numberOfLoanPayments",
"numberOfPages": "http://schema.org/numberOfPages",
"numberOfPlayers": "http://schema.org/numberOfPlayers",
"numberOfPreviousOwners": "http://schema.org/numberOfPreviousOwners",
"numberOfRooms": "http://schema.org/numberOfRooms",
"numberOfSeasons": "http://schema.org/numberOfSeasons",
"numberedPosition": "http://schema.org/numberedPosition",
"nutrition": "http://schema.org/nutrition",
"object": "http://schema.org/object",
"observationDate": "http://schema.org/observationDate",
"observedNode": "http://schema.org/observedNode",
"occupancy": "http://schema.org/occupancy",
"occupationLocation": "http://schema.org/occupationLocation",
"occupationalCategory": "http://schema.org/occupationalCategory",
"occupationalCredentialAwarded": "http://schema.org/occupationalCredentialAwarded",
"offerCount": "http://schema.org/offerCount",
"offeredBy": "http://schema.org/offeredBy",
"offers": "http://schema.org/offers",
"offersPrescriptionByMail": "http://schema.org/offersPrescriptionByMail",
"openingHours": "http://schema.org/openingHours",
"openingHoursSpecification": "http://schema.org/openingHoursSpecification",
"opens": "http://schema.org/opens",
"operatingSystem": "http://schema.org/operatingSystem",
"opponent": "http://schema.org/opponent",
"option": "http://schema.org/option",
"orderDate": "http://schema.org/orderDate",
"orderDelivery": "http://schema.org/orderDelivery",
"orderItemNumber": "http://schema.org/orderItemNumber",
"orderItemStatus": "http://schema.org/orderItemStatus",
"orderNumber": "http://schema.org/orderNumber",
"orderQuantity": "http://schema.org/orderQuantity",
"orderStatus": "http://schema.org/orderStatus",
"orderedItem": "http://schema.org/orderedItem",
"organizer": "http://schema.org/organizer",
"origin": "http://schema.org/origin",
"originAddress": "http://schema.org/originAddress",
"originatesFrom": "http://schema.org/originatesFrom",
"outcome": "http://schema.org/outcome",
"overdosage": "http://schema.org/overdosage",
"overview": "http://schema.org/overview",
"ownedFrom": "http://schema.org/ownedFrom",
"ownedThrough": "http://schema.org/ownedThrough",
"ownershipFundingInfo": "http://schema.org/ownershipFundingInfo",
"owns": "http://schema.org/owns",
"pageEnd": "http://schema.org/pageEnd",
"pageStart": "http://schema.org/pageStart",
"pagination": "http://schema.org/pagination",
"parent": "http://schema.org/parent",
"parentItem": "http://schema.org/parentItem",
"parentOrganization": "http://schema.org/parentOrganization",
"parentService": "http://schema.org/parentService",
"parents": "http://schema.org/parents",
"partOfEpisode": "http://schema.org/partOfEpisode",
"partOfInvoice": "http://schema.org/partOfInvoice",
"partOfOrder": "http://schema.org/partOfOrder",
"partOfSeason": "http://schema.org/partOfSeason",
"partOfSeries": "http://schema.org/partOfSeries",
"partOfSystem": "http://schema.org/partOfSystem",
"partOfTVSeries": "http://schema.org/partOfTVSeries",
"partOfTrip": "http://schema.org/partOfTrip",
"participant": "http://schema.org/participant",
"partySize": "http://schema.org/partySize",
"passengerPriorityStatus": "http://schema.org/passengerPriorityStatus",
"passengerSequenceNumber": "http://schema.org/passengerSequenceNumber",
"path": "http://schema.org/contentUrl",
"pathophysiology": "http://schema.org/pathophysiology",
"pav": "http://purl.org/pav/",
"payload": "http://schema.org/payload",
"paymentAccepted": "http://schema.org/paymentAccepted",
"paymentDue": "http://schema.org/paymentDue",
"paymentDueDate": "http://schema.org/paymentDueDate",
"paymentMethod": "http://schema.org/paymentMethod",
"paymentMethodId": "http://schema.org/paymentMethodId",
"paymentStatus": "http://schema.org/paymentStatus",
"paymentUrl": "http://schema.org/paymentUrl",
"pcdm": "http://pcdm.org/models#",
"penciler": "http://schema.org/penciler",
"percentile10": "http://schema.org/percentile10",
"percentile25": "http://schema.org/percentile25",
"percentile75": "http://schema.org/percentile75",
"percentile90": "http://schema.org/percentile90",
"performTime": "http://schema.org/performTime",
"performer": "http://schema.org/performer",
"performerIn": "http://schema.org/performerIn",
"performers": "http://schema.org/performers",
"permissionType": "http://schema.org/permissionType",
"permissions": "http://schema.org/permissions",
"permitAudience": "http://schema.org/permitAudience",
"permittedUsage": "http://schema.org/permittedUsage",
"petsAllowed": "http://schema.org/petsAllowed",
"phase": "http://schema.org/phase",
"photo": "http://schema.org/photo",
"photos": "http://schema.org/photos",
"physiologicalBenefits": "http://schema.org/physiologicalBenefits",
"pickupLocation": "http://schema.org/pickupLocation",
"pickupTime": "http://schema.org/pickupTime",
"playMode": "http://schema.org/playMode",
"playerType": "http://schema.org/playerType",
"playersOnline": "http://schema.org/playersOnline",
"polygon": "http://schema.org/polygon",
"population": "http://schema.org/population",
"populationType": "http://schema.org/populationType",
"position": "http://schema.org/position",
"possibleComplication": "http://schema.org/possibleComplication",
"possibleTreatment": "http://schema.org/possibleTreatment",
"postOfficeBoxNumber": "http://schema.org/postOfficeBoxNumber",
"postOp": "http://schema.org/postOp",
"postalCode": "http://schema.org/postalCode",
"potentialAction": "http://schema.org/potentialAction",
"preOp": "http://schema.org/preOp",
"predecessorOf": "http://schema.org/predecessorOf",
"pregnancyCategory": "http://schema.org/pregnancyCategory",
"pregnancyWarning": "http://schema.org/pregnancyWarning",
"prepTime": "http://schema.org/prepTime",
"preparation": "http://schema.org/preparation",
"prescribingInfo": "http://schema.org/prescribingInfo",
"prescriptionStatus": "http://schema.org/prescriptionStatus",
"previousItem": "http://schema.org/previousItem",
"previousStartDate": "http://schema.org/previousStartDate",
"price": "http://schema.org/price",
"priceComponent": "http://schema.org/priceComponent",
"priceCurrency": "http://schema.org/priceCurrency",
"priceRange": "http://schema.org/priceRange",
"priceSpecification": "http://schema.org/priceSpecification",
"priceType": "http://schema.org/priceType",
"priceValidUntil": "http://schema.org/priceValidUntil",
"primaryImageOfPage": "http://schema.org/primaryImageOfPage",
"primaryPrevention": "http://schema.org/primaryPrevention",
"printColumn": "http://schema.org/printColumn",
"printEdition": "http://schema.org/printEdition",
"printPage": "http://schema.org/printPage",
"printSection": "http://schema.org/printSection",
"procedure": "http://schema.org/procedure",
"procedureType": "http://schema.org/procedureType",
"processingTime": "http://schema.org/processingTime",
"processorRequirements": "http://schema.org/processorRequirements",
"producer": "http://schema.org/producer",
"produces": "http://schema.org/produces",
"productID": "http://schema.org/productID",
"productReturnDays": "http://schema.org/productReturnDays",
"productReturnLink": "http://schema.org/productReturnLink",
"productSupported": "http://schema.org/productSupported",
"productionCompany": "http://schema.org/productionCompany",
"productionDate": "http://schema.org/productionDate",
"proficiencyLevel": "http://schema.org/proficiencyLevel",
"programMembershipUsed": "http://schema.org/programMembershipUsed",
"programName": "http://schema.org/programName",
"programPrerequisites": "http://schema.org/programPrerequisites",
"programmingLanguage": "http://schema.org/programmingLanguage",
"programmingModel": "http://schema.org/programmingModel",
"propertyID": "http://schema.org/propertyID",
"proprietaryName": "http://schema.org/proprietaryName",
"proteinContent": "http://schema.org/proteinContent",
"prov": "http://www.w3.org/ns/prov#",
"provider": "http://schema.org/provider",
"providerMobility": "http://schema.org/providerMobility",
"providesBroadcastService": "http://schema.org/providesBroadcastService",
"providesService": "http://schema.org/providesService",
"publicAccess": "http://schema.org/publicAccess",
"publication": "http://schema.org/publication",
"publicationType": "http://schema.org/publicationType",
"publishedBy": "http://schema.org/publishedBy",
"publishedOn": "http://schema.org/publishedOn",
"publisher": "http://schema.org/publisher",
"publisherImprint": "http://schema.org/publisherImprint",
"publishingPrinciples": "http://schema.org/publishingPrinciples",
"purchaseDate": "http://schema.org/purchaseDate",
"purpose": "http://schema.org/purpose",
"qualifications": "http://schema.org/qualifications",
"query": "http://schema.org/query",
"quest": "http://schema.org/quest",
"question": "http://schema.org/question",
"rangeIncludes": "http://schema.org/rangeIncludes",
"ratingCount": "http://schema.org/ratingCount",
"ratingExplanation": "http://schema.org/ratingExplanation",
"ratingValue": "http://schema.org/ratingValue",
"rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
"rdfa": "http://www.w3.org/ns/rdfa#",
"rdfs": "http://www.w3.org/2000/01/rdf-schema#",
"readBy": "http://schema.org/readBy",
"readonlyValue": "http://schema.org/readonlyValue",
"realEstateAgent": "http://schema.org/realEstateAgent",
"recipe": "http://schema.org/recipe",
"recipeCategory": "http://schema.org/recipeCategory",
"recipeCuisine": "http://schema.org/recipeCuisine",
"recipeIngredient": "http://schema.org/recipeIngredient",
"recipeInstructions": "http://schema.org/recipeInstructions",
"recipeYield": "http://schema.org/recipeYield",
"recipient": "http://schema.org/recipient",
"recognizedBy": "http://schema.org/recognizedBy",
"recognizingAuthority": "http://schema.org/recognizingAuthority",
"recommendationStrength": "http://schema.org/recommendationStrength",
"recommendedIntake": "http://schema.org/recommendedIntake",
"recordLabel": "http://schema.org/recordLabel",
"recordedAs": "http://schema.org/recordedAs",
"recordedAt": "http://schema.org/recordedAt",
"recordedIn": "http://schema.org/recordedIn",
"recordingOf": "http://schema.org/recordingOf",
"recourseLoan": "http://schema.org/recourseLoan",
"referenceQuantity": "http://schema.org/referenceQuantity",
"referencesOrder": "http://schema.org/referencesOrder",
"refundType": "http://schema.org/refundType",
"regionDrained": "http://schema.org/regionDrained",
"regionsAllowed": "http://schema.org/regionsAllowed",
"rel": "https://www.w3.org/ns/iana/link-relations/relation#",
"relatedAnatomy": "http://schema.org/relatedAnatomy",
"relatedCondition": "http://schema.org/relatedCondition",
"relatedDrug": "http://schema.org/relatedDrug",
"relatedLink": "http://schema.org/relatedLink",
"relatedStructure": "http://schema.org/relatedStructure",
"relatedTherapy": "http://schema.org/relatedTherapy",
"relatedTo": "http://schema.org/relatedTo",
"releaseDate": "http://schema.org/releaseDate",
"releaseNotes": "http://schema.org/releaseNotes",
"releaseOf": "http://schema.org/releaseOf",
"releasedEvent": "http://schema.org/releasedEvent",
"relevantOccupation": "http://schema.org/relevantOccupation",
"relevantSpecialty": "http://schema.org/relevantSpecialty",
"remainingAttendeeCapacity": "http://schema.org/remainingAttendeeCapacity",
"renegotiableLoan": "http://schema.org/renegotiableLoan",
"repeatCount": "http://schema.org/repeatCount",
"repeatFrequency": "http://schema.org/repeatFrequency",
"repetitions": "http://schema.org/repetitions",
"replacee": "http://schema.org/replacee",
"replacer": "http://schema.org/replacer",
"replyToUrl": "http://schema.org/replyToUrl",
"reportNumber": "http://schema.org/reportNumber",
"representativeOfPage": "http://schema.org/representativeOfPage",
"requiredCollateral": "http://schema.org/requiredCollateral",
"requiredGender": "http://schema.org/requiredGender",
"requiredMaxAge": "http://schema.org/requiredMaxAge",
"requiredMinAge": "http://schema.org/requiredMinAge",
"requiredQuantity": "http://schema.org/requiredQuantity",
"requirements": "http://schema.org/requirements",
"requiresSubscription": "http://schema.org/requiresSubscription",
"reservationFor": "http://schema.org/reservationFor",
"reservationId": "http://schema.org/reservationId",
"reservationStatus": "http://schema.org/reservationStatus",
"reservedTicket": "http://schema.org/reservedTicket",
"responsibilities": "http://schema.org/responsibilities",
"restPeriods": "http://schema.org/restPeriods",
"result": "http://schema.org/result",
"resultComment": "http://schema.org/resultComment",
"resultReview": "http://schema.org/resultReview",
"retrievedBy": "http://purl.org/pav/retrievedBy",
"retrievedFrom": "http://purl.org/pav/retrievedFrom",
"retrievedOn": "http://purl.org/pav/retrievedOn",
"returnFees": "http://schema.org/returnFees",
"returnPolicyCategory": "http://schema.org/returnPolicyCategory",
"review": "http://schema.org/review",
"reviewAspect": "http://schema.org/reviewAspect",
"reviewBody": "http://schema.org/reviewBody",
"reviewCount": "http://schema.org/reviewCount",
"reviewRating": "http://schema.org/reviewRating",
"reviewedBy": "http://schema.org/reviewedBy",
"reviews": "http://schema.org/reviews",
"riskFactor": "http://schema.org/riskFactor",
"risks": "http://schema.org/risks",
"roleName": "http://schema.org/roleName",
"roofLoad": "http://schema.org/roofLoad",
"roterms": "http://purl.org/ro/roterms#",
"rsvpResponse": "http://schema.org/rsvpResponse",
"runsTo": "http://schema.org/runsTo",
"runtime": "http://schema.org/runtime",
"runtimePlatform": "http://schema.org/runtimePlatform",
"rxcui": "http://schema.org/rxcui",
"safetyConsideration": "http://schema.org/safetyConsideration",
"salaryCurrency": "http://schema.org/salaryCurrency",
"salaryUponCompletion": "http://schema.org/salaryUponCompletion",
"sameAs": "http://schema.org/sameAs",
"sampleType": "http://schema.org/sampleType",
"saturatedFatContent": "http://schema.org/saturatedFatContent",
"scheduledPaymentDate": "http://schema.org/scheduledPaymentDate",
"scheduledTime": "http://schema.org/scheduledTime",
"schema": "http://schema.org/",
"schemaVersion": "http://schema.org/schemaVersion",
"screenCount": "http://schema.org/screenCount",
"screenshot": "http://schema.org/screenshot",
"sdDatePublished": "http://schema.org/sdDatePublished",
"sdLicense": "http://schema.org/sdLicense",
"sdPublisher": "http://schema.org/sdPublisher",
"season": "http://schema.org/season",
"seasonNumber": "http://schema.org/seasonNumber",
"seasons": "http://schema.org/seasons",
"seatNumber": "http://schema.org/seatNumber",
"seatRow": "http://schema.org/seatRow",
"seatSection": "http://schema.org/seatSection",
"seatingCapacity": "http://schema.org/seatingCapacity",
"seatingType": "http://schema.org/seatingType",
"secondaryPrevention": "http://schema.org/secondaryPrevention",
"securityScreening": "http://schema.org/securityScreening",
"seeks": "http://schema.org/seeks",
"seller": "http://schema.org/seller",
"sender": "http://schema.org/sender",
"sensoryUnit": "http://schema.org/sensoryUnit",
"serialNumber": "http://schema.org/serialNumber",
"seriousAdverseOutcome": "http://schema.org/seriousAdverseOutcome",
"serverStatus": "http://schema.org/serverStatus",
"servesCuisine": "http://schema.org/servesCuisine",
"serviceArea": "http://schema.org/serviceArea",
"serviceAudience": "http://schema.org/serviceAudience",
"serviceLocation": "http://schema.org/serviceLocation",
"serviceOperator": "http://schema.org/serviceOperator",
"serviceOutput": "http://schema.org/serviceOutput",
"servicePhone": "http://schema.org/servicePhone",
"servicePostalAddress": "http://schema.org/servicePostalAddress",
"serviceSmsNumber": "http://schema.org/serviceSmsNumber",
"serviceType": "http://schema.org/serviceType",
"serviceUrl": "http://schema.org/serviceUrl",
"servingSize": "http://schema.org/servingSize",
"sharedContent": "http://schema.org/sharedContent",
"sibling": "http://schema.org/sibling",
"siblings": "http://schema.org/siblings",
"signDetected": "http://schema.org/signDetected",
"signOrSymptom": "http://schema.org/signOrSymptom",
"significance": "http://schema.org/significance",
"significantLink": "http://schema.org/significantLink",
"significantLinks": "http://schema.org/significantLinks",
"skills": "http://schema.org/skills",
"sku": "http://schema.org/sku",
"slogan": "http://schema.org/slogan",
"smokingAllowed": "http://schema.org/smokingAllowed",
"sodiumContent": "http://schema.org/sodiumContent",
"softwareAddOn": "http://schema.org/softwareAddOn",
"softwareHelp": "http://schema.org/softwareHelp",
"softwareRequirements": "http://schema.org/softwareRequirements",
"softwareVersion": "http://schema.org/softwareVersion",
"source": "http://schema.org/source",
"sourceOrganization": "http://schema.org/sourceOrganization",
"sourcedFrom": "http://schema.org/sourcedFrom",
"spatial": "http://schema.org/spatial",
"spatialCoverage": "http://schema.org/spatialCoverage",
"speakable": "http://schema.org/speakable",
"specialCommitments": "http://schema.org/specialCommitments",
"specialOpeningHoursSpecification": "http://schema.org/specialOpeningHoursSpecification",
"specialty": "http://schema.org/specialty",
"speed": "http://schema.org/speed",
"spokenByCharacter": "http://schema.org/spokenByCharacter",
"sponsor": "http://schema.org/sponsor",
"sport": "http://schema.org/sport",
"sportsActivityLocation": "http://schema.org/sportsActivityLocation",
"sportsEvent": "http://schema.org/sportsEvent",
"sportsTeam": "http://schema.org/sportsTeam",
"spouse": "http://schema.org/spouse",
"stage": "http://schema.org/stage",
"stageAsNumber": "http://schema.org/stageAsNumber",
"starRating": "http://schema.org/starRating",
"startDate": "http://schema.org/startDate",
"startOffset": "http://schema.org/startOffset",
"startTime": "http://schema.org/startTime",
"status": "http://schema.org/status",
"steeringPosition": "http://schema.org/steeringPosition",
"step": "http://schema.org/step",
"stepValue": "http://schema.org/stepValue",
"steps": "http://schema.org/steps",
"storageRequirements": "http://schema.org/storageRequirements",
"streetAddress": "http://schema.org/streetAddress",
"strengthUnit": "http://schema.org/strengthUnit",
"strengthValue": "http://schema.org/strengthValue",
"structuralClass": "http://schema.org/structuralClass",
"study": "http://schema.org/study",
"studyDesign": "http://schema.org/studyDesign",
"studyLocation": "http://schema.org/studyLocation",
"studySubject": "http://schema.org/studySubject",
"stupidProperty": "http://schema.org/stupidProperty",
"subEvent": "http://schema.org/subEvent",
"subEvents": "http://schema.org/subEvents",
"subOrganization": "http://schema.org/subOrganization",
"subReservation": "http://schema.org/subReservation",
"subStageSuffix": "http://schema.org/subStageSuffix",
"subStructure": "http://schema.org/subStructure",
"subTest": "http://schema.org/subTest",
"subTrip": "http://schema.org/subTrip",
"subjectOf": "http://schema.org/subjectOf",
"subtitleLanguage": "http://schema.org/subtitleLanguage",
"subtype": "http://schema.org/subtype",
"successorOf": "http://schema.org/successorOf",
"sugarContent": "http://schema.org/sugarContent",
"suggestedAnswer": "http://schema.org/suggestedAnswer",
"suggestedGender": "http://schema.org/suggestedGender",
"suggestedMaxAge": "http://schema.org/suggestedMaxAge",
"suggestedMinAge": "http://schema.org/suggestedMinAge",
"suitableForDiet": "http://schema.org/suitableForDiet",
"superEvent": "http://schema.org/superEvent",
"supersededBy": "http://schema.org/supersededBy",
"supply": "http://schema.org/supply",
"supplyTo": "http://schema.org/supplyTo",
"supportingData": "http://schema.org/supportingData",
"surface": "http://schema.org/surface",
"target": "http://schema.org/target",
"targetCollection": "http://schema.org/targetCollection",
"targetDescription": "http://schema.org/targetDescription",
"targetName": "http://schema.org/targetName",
"targetPlatform": "http://schema.org/targetPlatform",
"targetPopulation": "http://schema.org/targetPopulation",
"targetProduct": "http://schema.org/targetProduct",
"targetUrl": "http://schema.org/targetUrl",
"taxID": "http://schema.org/taxID",
"telephone": "http://schema.org/telephone",
"temporal": "http://schema.org/temporal",
"temporalCoverage": "http://schema.org/temporalCoverage",
"termCode": "http://schema.org/termCode",
"termsOfService": "http://schema.org/termsOfService",
"text": "http://schema.org/text",
"thumbnail": "http://schema.org/thumbnail",
"thumbnailUrl": "http://schema.org/thumbnailUrl",
"tickerSymbol": "http://schema.org/tickerSymbol",
"ticketNumber": "http://schema.org/ticketNumber",
"ticketToken": "http://schema.org/ticketToken",
"ticketedSeat": "http://schema.org/ticketedSeat",
"timeRequired": "http://schema.org/timeRequired",
"timeToComplete": "http://schema.org/timeToComplete",
"tissueSample": "http://schema.org/tissueSample",
"title": "http://schema.org/title",
"toLocation": "http://schema.org/toLocation",
"toRecipient": "http://schema.org/toRecipient",
"tongueWeight": "http://schema.org/tongueWeight",
"tool": "http://schema.org/tool",
"torque": "http://schema.org/torque",
"totalJobOpenings": "http://schema.org/totalJobOpenings",
"totalPaymentDue": "http://schema.org/totalPaymentDue",
"totalPrice": "http://schema.org/totalPrice",
"totalTime": "http://schema.org/totalTime",
"touristType": "http://schema.org/touristType",
"track": "http://schema.org/track",
"trackingNumber": "http://schema.org/trackingNumber",
"trackingUrl": "http://schema.org/trackingUrl",
"tracks": "http://schema.org/tracks",
"trailer": "http://schema.org/trailer",
"trailerWeight": "http://schema.org/trailerWeight",
"trainName": "http://schema.org/trainName",
"trainNumber": "http://schema.org/trainNumber",
"trainingSalary": "http://schema.org/trainingSalary",
"transFatContent": "http://schema.org/transFatContent",
"transcript": "http://schema.org/transcript",
"translationOfWork": "http://schema.org/translationOfWork",
"translator": "http://schema.org/translator",
"transmissionMethod": "http://schema.org/transmissionMethod",
"trialDesign": "http://schema.org/trialDesign",
"tributary": "http://schema.org/tributary",
"typeOfBed": "http://schema.org/typeOfBed",
"typeOfGood": "http://schema.org/typeOfGood",
"typicalAgeRange": "http://schema.org/typicalAgeRange",
"typicalTest": "http://schema.org/typicalTest",
"underName": "http://schema.org/underName",
"unitCode": "http://schema.org/unitCode",
"unitText": "http://schema.org/unitText",
"unnamedSourcesPolicy": "http://schema.org/unnamedSourcesPolicy",
"unsaturatedFatContent": "http://schema.org/unsaturatedFatContent",
"uploadDate": "http://schema.org/uploadDate",
"upvoteCount": "http://schema.org/upvoteCount",
"url": "http://schema.org/url",
"urlTemplate": "http://schema.org/urlTemplate",
"usedToDiagnose": "http://schema.org/usedToDiagnose",
"userInteractionCount": "http://schema.org/userInteractionCount",
"usesDevice": "http://schema.org/usesDevice",
"usesHealthPlanIdStandard": "http://schema.org/usesHealthPlanIdStandard",
"validFor": "http://schema.org/validFor",
"validFrom": "http://schema.org/validFrom",
"validIn": "http://schema.org/validIn",
"validThrough": "http://schema.org/validThrough",
"validUntil": "http://schema.org/validUntil",
"value": "http://schema.org/value",
"valueAddedTaxIncluded": "http://schema.org/valueAddedTaxIncluded",
"valueMaxLength": "http://schema.org/valueMaxLength",
"valueMinLength": "http://schema.org/valueMinLength",
"valueName": "http://schema.org/valueName",
"valuePattern": "http://schema.org/valuePattern",
"valueReference": "http://schema.org/valueReference",
"valueRequired": "http://schema.org/valueRequired",
"variableMeasured": "http://schema.org/variableMeasured",
"variablesMeasured": "http://schema.org/variablesMeasured",
"variantCover": "http://schema.org/variantCover",
"vatID": "http://schema.org/vatID",
"vehicleConfiguration": "http://schema.org/vehicleConfiguration",
"vehicleEngine": "http://schema.org/vehicleEngine",
"vehicleIdentificationNumber": "http://schema.org/vehicleIdentificationNumber",
"vehicleInteriorColor": "http://schema.org/vehicleInteriorColor",
"vehicleInteriorType": "http://schema.org/vehicleInteriorType",
"vehicleModelDate": "http://schema.org/vehicleModelDate",
"vehicleSeatingCapacity": "http://schema.org/vehicleSeatingCapacity",
"vehicleSpecialUsage": "http://schema.org/vehicleSpecialUsage",
"vehicleTransmission": "http://schema.org/vehicleTransmission",
"vendor": "http://schema.org/vendor",
"verificationFactCheckingPolicy": "http://schema.org/verificationFactCheckingPolicy",
"version": "http://schema.org/version",
"video": "http://schema.org/video",
"videoFormat": "http://schema.org/videoFormat",
"videoFrameSize": "http://schema.org/videoFrameSize",
"videoQuality": "http://schema.org/videoQuality",
"volumeNumber": "http://schema.org/volumeNumber",
"warning": "http://schema.org/warning",
"warranty": "http://schema.org/warranty",
"warrantyPromise": "http://schema.org/warrantyPromise",
"warrantyScope": "http://schema.org/warrantyScope",
"wasDerivedFrom": "http://www.w3.org/ns/prov#wasDerivedFrom",
"webCheckinTime": "http://schema.org/webCheckinTime",
"webFeed": "http://schema.org/webFeed",
"weight": "http://schema.org/weight",
"weightTotal": "http://schema.org/weightTotal",
"wf4ever": "http://purl.org/ro/wf4ever#",
"wfdesc": "http://purl.org/ro/wfdesc#",
"wfprov": "http://purl.org/ro/wfprov#",
"wheelbase": "http://schema.org/wheelbase",
"width": "http://schema.org/width",
"winner": "http://schema.org/winner",
"wordCount": "http://schema.org/wordCount",
"workExample": "http://schema.org/workExample",
"workFeatured": "http://schema.org/workFeatured",
"workHours": "http://schema.org/workHours",
"workLocation": "http://schema.org/workLocation",
"workPerformed": "http://schema.org/workPerformed",
"workPresented": "http://schema.org/workPresented",
"workTranslation": "http://schema.org/workTranslation",
"workload": "http://schema.org/workload",
"worksFor": "http://schema.org/worksFor",
"worstRating": "http://schema.org/worstRating",
"xpath": "http://schema.org/xpath",
"yearlyRevenue": "http://schema.org/yearlyRevenue",
"yearsInOperation": "http://schema.org/yearsInOperation",
"yield": "http://schema.org/yield"
},
"@graph": [
{
"@id": "https://orcid.org/0000-0002-3597-8557",
"@type": "Person",
"contactType": "contributor",
"name": "Alban Gaignard"
},
{
"@id": "ro-crate-metadata.jsonld",
"@type": "CreativeWork",
"about": {
"@id": "./"
},
"hasPart": {
"@id": "https://orcid.org/0000-0002-3597-8557"
},
"identifier": "ro-crate-metadata.jsonld"
}
]
}
|
Python/StabilityAnalysis/Algorithmic stability analysis/Absolute/COIL-20/UFS_50_700Samples-1.ipynb | ###Markdown
1. Import libraries
###Code
#----------------------------Reproducible----------------------------------------------------------------------------------------
import numpy as np
import tensorflow as tf
import random as rn
import os
seed=0
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
rn.seed(seed)
#session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
session_conf =tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
from keras import backend as K
#tf.set_random_seed(seed)
tf.compat.v1.set_random_seed(seed)
#sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
K.set_session(sess)
#----------------------------Reproducible----------------------------------------------------------------------------------------
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
#--------------------------------------------------------------------------------------------------------------------------------
from keras.datasets import fashion_mnist
from keras.models import Model
from keras.layers import Dense, Input, Flatten, Activation, Dropout, Layer
from keras.layers.normalization import BatchNormalization
from keras.utils import to_categorical
from keras import optimizers,initializers,constraints,regularizers
from keras import backend as K
from keras.callbacks import LambdaCallback,ModelCheckpoint
from keras.utils import plot_model
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import ExtraTreesClassifier
from sklearn import svm
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
import h5py
import math
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
matplotlib.style.use('ggplot')
import random
import scipy.sparse as sparse
import pandas as pd
from skimage import io
from PIL import Image
#--------------------------------------------------------------------------------------------------------------------------------
#Import our self-defined methods
import sys
sys.path.append(r"./Defined")
import Functions as F
# The following code should be added before the keras model
#np.random.seed(seed)
#--------------------------------------------------------------------------------------------------------------------------------
def write_to_csv(p_data,p_path):
dataframe = pd.DataFrame(p_data)
dataframe.to_csv(p_path, mode='a',header=False,index=False,sep=',')
del dataframe
###Output
_____no_output_____
###Markdown
2. Loading data
###Code
dataset_path='./Dataset/coil-20-proc/'
samples={}
for dirpath, dirnames, filenames in os.walk(dataset_path):
#print(dirpath)
#print(dirnames)
#print(filenames)
dirnames.sort()
filenames.sort()
for filename in [f for f in filenames if f.endswith(".png") and not f.find('checkpoint')>0]:
full_path = os.path.join(dirpath, filename)
file_identifier=filename.split('__')[0][3:]
if file_identifier not in samples.keys():
samples[file_identifier] = []
# Direct read
#image = io.imread(full_path)
# Resize read
image_=Image.open(full_path).resize((20, 20),Image.ANTIALIAS)
image=np.asarray(image_)
samples[file_identifier].append(image)
#plt.imshow(samples['1'][0].reshape(20,20))
data_arr_list=[]
label_arr_list=[]
for key_i in samples.keys():
key_i_for_label=[int(key_i)-1]
data_arr_list.append(np.array(samples[key_i]))
label_arr_list.append(np.array(72*key_i_for_label))
data_arr=np.concatenate(data_arr_list).reshape(1440, 20*20).astype('float32') / 255.
label_arr_onehot=to_categorical(np.concatenate(label_arr_list))
sample_used=699
x_train_all,x_test,y_train_all,y_test_onehot= train_test_split(data_arr,label_arr_onehot,test_size=0.2,random_state=seed)
x_train=x_train_all[0:sample_used]
y_train_onehot=y_train_all[0:sample_used]
print('Shape of x_train: ' + str(x_train.shape))
print('Shape of x_test: ' + str(x_test.shape))
print('Shape of y_train: ' + str(y_train_onehot.shape))
print('Shape of y_test: ' + str(y_test_onehot.shape))
F.show_data_figures(x_train[0:40],20,20,40)
F.show_data_figures(x_test[0:40],20,20,40)
key_feture_number=50
###Output
_____no_output_____
###Markdown
3. Model
###Code
np.random.seed(seed)
#--------------------------------------------------------------------------------------------------------------------------------
class Feature_Select_Layer(Layer):
def __init__(self, output_dim, **kwargs):
super(Feature_Select_Layer, self).__init__(**kwargs)
self.output_dim = output_dim
def build(self, input_shape):
self.kernel = self.add_weight(name='kernel',
shape=(input_shape[1],),
initializer=initializers.RandomUniform(minval=0.999999, maxval=0.9999999, seed=seed),
trainable=True)
super(Feature_Select_Layer, self).build(input_shape)
def call(self, x, selection=False,k=key_feture_number):
kernel=K.abs(self.kernel)
if selection:
kernel_=K.transpose(kernel)
kth_largest = tf.math.top_k(kernel_, k=k)[0][-1]
kernel = tf.where(condition=K.less(kernel,kth_largest),x=K.zeros_like(kernel),y=kernel)
return K.dot(x, tf.linalg.tensor_diag(kernel))
def compute_output_shape(self, input_shape):
return (input_shape[0], self.output_dim)
#--------------------------------------------------------------------------------------------------------------------------------
def Fractal_Autoencoder(p_data_feature=x_train.shape[1],\
p_feture_number=key_feture_number,\
p_encoding_dim=key_feture_number,\
p_learning_rate=1E-3,\
p_loss_weight_1=1,\
p_loss_weight_2=2):
input_img = Input(shape=(p_data_feature,), name='autoencoder_input')
feature_selection = Feature_Select_Layer(output_dim=p_data_feature,\
input_shape=(p_data_feature,),\
name='feature_selection')
feature_selection_score=feature_selection(input_img)
feature_selection_choose=feature_selection(input_img,selection=True,k=p_feture_number)
encoded = Dense(p_encoding_dim,\
activation='linear',\
kernel_initializer=initializers.glorot_uniform(seed),\
name='autoencoder_hidden_layer')
encoded_score=encoded(feature_selection_score)
encoded_choose=encoded(feature_selection_choose)
bottleneck_score=encoded_score
bottleneck_choose=encoded_choose
decoded = Dense(p_data_feature,\
activation='linear',\
kernel_initializer=initializers.glorot_uniform(seed),\
name='autoencoder_output')
decoded_score =decoded(bottleneck_score)
decoded_choose =decoded(bottleneck_choose)
latent_encoder_score = Model(input_img, bottleneck_score)
latent_encoder_choose = Model(input_img, bottleneck_choose)
feature_selection_output=Model(input_img,feature_selection_choose)
autoencoder = Model(input_img, [decoded_score,decoded_choose])
autoencoder.compile(loss=['mean_squared_error','mean_squared_error'],\
loss_weights=[p_loss_weight_1, p_loss_weight_2],\
optimizer=optimizers.Adam(lr=p_learning_rate))
print('Autoencoder Structure-------------------------------------')
autoencoder.summary()
return autoencoder,feature_selection_output,latent_encoder_score,latent_encoder_choose
###Output
_____no_output_____
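###Markdown
A tiny standalone illustration (not part of the original notebook) of the top-k masking idea used in `Feature_Select_Layer.call` above: keep only the k weights with the largest absolute value and zero out the rest, sketched here with NumPy instead of the Keras backend.
###Code
toy_kernel = np.array([0.1, 0.9, 0.5, 0.3])
k = 2
kth_largest = np.sort(np.abs(toy_kernel))[-k] # analogous to tf.math.top_k(kernel_, k=k)[0][-1]
masked = np.where(np.abs(toy_kernel) < kth_largest, 0.0, toy_kernel)
print(masked) # [0. 0.9 0.5 0. ] -> only the 2 strongest features pass through
###Output
_____no_output_____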
###Markdown
3.1 Structure and parameter testing
###Code
epochs_number=200
batch_size_value=8
###Output
_____no_output_____
###Markdown
3.1.1 Fractal Autoencoder
###Code
loss_weight_1=0.0078125
F_AE,\
feature_selection_output,\
latent_encoder_score_F_AE,\
latent_encoder_choose_F_AE=Fractal_Autoencoder(p_data_feature=x_train.shape[1],\
p_feture_number=key_feture_number,\
p_encoding_dim=key_feture_number,\
p_learning_rate= 1E-3,\
p_loss_weight_1=loss_weight_1,\
p_loss_weight_2=1)
F_AE_history = F_AE.fit(x_train, [x_train,x_train],\
epochs=epochs_number,\
batch_size=batch_size_value,\
shuffle=True)
loss = F_AE_history.history['loss']
epochs = range(epochs_number)
plt.plot(epochs, loss, 'bo', label='Training Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
fina_results=np.array(F_AE.evaluate(x_test,[x_test,x_test]))
fina_results
fina_results_single=np.array(F_AE.evaluate(x_test[0:1],[x_test[0:1],x_test[0:1]]))
fina_results_single
for i in np.arange(x_test.shape[0]):
fina_results_i=np.array(F_AE.evaluate(x_test[i:i+1],[x_test[i:i+1],x_test[i:i+1]]))
write_to_csv(fina_results_i.reshape(1,len(fina_results_i)),"./log/results_"+str(sample_used)+".csv")
###Output
1/1 [==============================] - 0s 3ms/step
1/1 [==============================] - 0s 2ms/step
1/1 [==============================] - 0s 2ms/step
|
tv-script-generation/Anna KaRNNa.ipynb | ###Markdown
Anna KaRNNa: In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.This network is based off of Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Also, some information [here at r2rt](http://r2rt.com/recurrent-neural-networks-in-tensorflow-ii.html) and from [Sherjil Ozair](https://github.com/sherjilozair/char-rnn-tensorflow) on GitHub. Below is the general architecture of the character-wise RNN.
###Code
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
###Output
_____no_output_____
###Markdown
First we'll load the text file and convert it into integers for our network to use.
###Code
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
###Output
_____no_output_____
###Markdown
Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.The idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the `split_frac` keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
###Code
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Number of sequences in each batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
###Output
_____no_output_____
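###Markdown
As a quick sanity check on the shapes described above (a small synthetic example, not from the original notebook): with 1100 fake characters, batch_size=10 and num_steps=20, each slice holds 200 characters, so there are 5 full batches; the stacked matrix is 10 rows by 100 columns, and 90% of the 5 batches (4 of them) go to training.
###Code
toy = np.arange(1100, dtype=np.int32)
tx, ty, vx, vy = split_data(toy, 10, 20, split_frac=0.9)
print(tx.shape, vx.shape) # expected: (10, 80) (10, 20)
###Output
_____no_output_____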
###Markdown
I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size `batch_size X num_steps`. For example, if we want our network to train on a sequence of 100 characters, `num_steps = 100`. For the next batch, we'll shift this window the next sequence of `num_steps` characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
###Code
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN outputs to a softmax layer and calculate the cost
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
preds = tf.nn.softmax(logits, name='predictions')
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
###Output
_____no_output_____
###Markdown
Hyperparameters: Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are `lstm_size` and `num_layers`. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
###Code
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
###Output
_____no_output_____
###Markdown
Write out the graph for TensorBoard
###Code
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/1', sess.graph)
###Output
_____no_output_____
###Markdown
Training: Time for training, which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by `save_every_n`) I calculate the validation loss and save a checkpoint.
###Code
!mkdir -p checkpoints/anna
epochs = 1
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
###Output
_____no_output_____
###Markdown
Sampling: Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
###Code
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
###Output
Farrat, his felt has at it.
"When the pose ther hor exceed
to his sheant was," weat a sime of his sounsed. The coment and the facily that which had began terede a marilicaly whice whether the pose of his hand, at she was alligated herself the same on she had to
taiking to his forthing and streath how to hand
began in a lang at some at it, this he cholded not set all her. "Wo love that is setthing. Him anstering as seen that."
"Yes in the man that say the mare a crances is it?" said Sergazy Ivancatching. "You doon think were somether is ifficult of a mone of
though the most at the countes that the
mean on the come to say the most, to
his feesing of
a man she, whilo he
sained and well, that he would still at to said. He wind at his for the sore in the most
of hoss and almoved to see him. They have betine the sumper into at he his stire, and what he was that at the so steate of the
sound, and shin should have a geest of shall feet on the conderation to she had been at that imporsing the dre
|
docs/allcools/cell_level/dmg/01-AddGenemCFractions.ipynb | ###Markdown
Calculate Gene mC Fractions
###Code
import pandas as pd
import scanpy as sc
import anndata
import xarray as xr
import pybedtools
import dask
from ALLCools.plot import *
from ALLCools.mcds import MCDS
import pathlib
import numpy as np
gene_meta_path = '../../data/genome/gencode.vM22.annotation.gene.flat.tsv.gz'
chrom_to_remove = ['chrM']
# change this to the path to your filtered metadata
metadata_path = '../step_by_step/100kb/CellMetadata.PassQC.csv.gz'
# change this to the paths to your MCDS files
mcds_path_list = [
'../../data/Brain/3C-171206.mcds',
'../../data/Brain/3C-171207.mcds',
'../../data/Brain/9H-190212.mcds',
'../../data/Brain/9H-190219.mcds',
]
obs_dim = 'cell'
var_dim = 'gene'
min_cov = 5
###Output
_____no_output_____
###Markdown
Load metadata
###Code
gene_meta = pd.read_csv(gene_meta_path, index_col='gene_id', sep='\t')
metadata = pd.read_csv(metadata_path, index_col=0)
total_cells = metadata.shape[0]
print(f'Metadata of {total_cells} cells')
###Output
Metadata of 4958 cells
###Markdown
Filter genes by overlap and chromosomes
###Code
genes_to_skip = set()
# skip smaller genes mostly covered by a larger gene, e.g., a miRNA within a protein coding gene.
# F=0.9 means > 90% of gene_b is overlapped with gene_a, in this case, we only keep gene_a for DMG test
gene_bed = pybedtools.BedTool.from_dataframe(
gene_meta.reset_index()[['chrom', 'start', 'end', 'gene_id']])
mapped_bam = gene_bed.map(b=gene_bed, c=4, o='distinct', F=0.9)
for _, (*_, gene_a, gene_b_str) in mapped_bam.to_dataframe().iterrows():
for gene_b in gene_b_str.split(','):
if gene_b != gene_a:
genes_to_skip.add(gene_b)
# remove certain chromosomes
genes_to_skip |= set(gene_meta.index[gene_meta['chrom'].isin(chrom_to_remove)])
use_features = gene_meta.index[~gene_meta.index.isin(genes_to_skip)]
print(f'{use_features.size} features remained')
###Output
/home/hanliu/miniconda3/envs/allcools/lib/python3.8/subprocess.py:849: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
self.stderr = io.open(errread, 'rb', bufsize)
###Markdown
Filter genes by cell mean coverage
###Code
with dask.config.set(**{'array.slicing.split_large_chunks': False}):
# still use all the cells to load MCDS
mcds = MCDS.open(mcds_path_list, obs_dim=obs_dim,
use_obs=metadata.index).sel({var_dim: use_features})
mcds.add_feature_cov_mean(var_dim=var_dim)
feature_cov_mean = mcds.coords[f'{var_dim}_cov_mean'].to_pandas()
use_features &= feature_cov_mean[feature_cov_mean > min_cov].index
print(f'{use_features.size} features remained')
mcds.filter_feature_by_cov_mean(var_dim, min_cov=min_cov)
###Output
Before cov mean filter: 41871 gene
After cov mean filter: 35664 gene 85.2%
###Markdown
Add Gene mC Fraction per MCDS file
###Code
gene_frac_dir = pathlib.Path('gene_frac')
gene_frac_dir.mkdir(exist_ok=True)
for mcds_path in mcds_path_list:
output_path = gene_frac_dir / (pathlib.Path(mcds_path).name + f'{var_dim}_da_frac.mcds')
if output_path.exists():
continue
print(f'Computing gene mC fraction for {mcds_path}')
mcds = MCDS.open(mcds_path, obs_dim=obs_dim)
# remove non-related data
del_das = []
for da in mcds:
if da != f'{var_dim}_da':
del_das.append(da)
for da in del_das:
del mcds[da]
mcds.load()
mcds = mcds.sel({var_dim: use_features})
mcds.add_mc_rate(var_dim=var_dim, normalize_per_cell=True, clip_norm_value=10)
# use float32 to reduce file size and speedup IO
mcds = mcds.rename({var_dim: 'gene', f'{var_dim}_da_frac': 'gene_da_frac'})
mcds['gene_da_frac'].astype('float32').to_netcdf(output_path)
###Output
Computing gene mC fraction for ../../data/Brain/3C-171206.mcds
Computing gene mC fraction for ../../data/Brain/3C-171207.mcds
Computing gene mC fraction for ../../data/Brain/9H-190212.mcds
Computing gene mC fraction for ../../data/Brain/9H-190219.mcds
###Markdown
Save gene metadata together with gene fraction files
###Code
use_gene_meta = gene_meta.loc[use_features]
use_gene_meta.to_csv(gene_frac_dir / 'GeneMetadata.csv.gz')
###Output
_____no_output_____ |
notebook/Grafici/Prove-2/N_Iter_3000/Versicolor-Chi-LowPenalization-N_Iter_3000.ipynb | ###Markdown
Code
###Code
# import and parse the log file
def lettura_log(path):
file = open(path, 'r')
Lines = file.readlines()
coppie = []
for line in Lines:
if 'COUPLE(N_ITER,DISTANCE RMSE)' in line:
split = line.split(':')
s = split[3].replace('[','')
s = s.replace('(','')
s = s.replace(')','')
s = s.replace(']','')
s = s.split(',')
a = int(s[0])
b = float(s[1])
coppie.append((a,b))
return coppie
# function to plot the RMSE distance against the number of iterations
import matplotlib.pyplot as plt
def grafico(path):
coppie = lettura_log(path)
x_val = [x[0] for x in coppie]
y_val = [x[1] for x in coppie]
fig, ax = plt.subplots(figsize=(10, 5))
ax.plot(x_val,y_val)
ax.plot(x_val,y_val,'or')
ax.set_ylabel('RMSE Distance (Gurobi -TensorFlow)')
ax.set_xlabel('Number of Iteration')
return plt.show()
def tabella(path):
coppie = lettura_log(path)
print ("N_ITER RMSE_DISTANCE")
for i in coppie:
print ("{:<14}{:<11}".format(*i))
###Output
_____no_output_____
###Markdown
Versicolor
###Code
grafico("../../log/N_Iter_3000/Versicolor/c1_sigma01_penalization01.log")
tabella("../../log/N_Iter_3000/Versicolor/c1_sigma01_penalization01.log")
grafico("../../log/N_Iter_3000/Versicolor/c1_sigma025_penalization01.log")
tabella("../../log/N_Iter_3000/Versicolor/c1_sigma025_penalization01.log")
grafico("../../log/N_Iter_3000/Versicolor/c1_sigma05_penalization01.log")
tabella("../../log/N_Iter_3000/Versicolor/c1_sigma05_penalization01.log")
grafico("../../log/N_Iter_3000/Versicolor/c75_sigma01_penalization01.log")
tabella("../../log/N_Iter_3000/Versicolor/c75_sigma01_penalization01.log")
grafico("../../log/N_Iter_3000/Versicolor/c75_sigma025_penalization01.log")
tabella("../../log/N_Iter_3000/Versicolor/c75_sigma025_penalization01.log")
grafico("../../log/N_Iter_3000/Versicolor/c75_sigma05_penalization01.log")
tabella("../../log/N_Iter_3000/Versicolor/c75_sigma05_penalization01.log")
grafico("../../log/N_Iter_3000/Versicolor/c200_sigma01_penalization01.log")
tabella("../../log/N_Iter_3000/Versicolor/c200_sigma01_penalization01.log")
grafico("../../log/N_Iter_3000/Versicolor/c200_sigma025_penalization01.log")
tabella("../../log/N_Iter_3000/Versicolor/c200_sigma025_penalization01.log")
grafico("../../log/N_Iter_3000/Versicolor/c200_sigma05_penalization01.log")
tabella("../../log/N_Iter_3000/Versicolor/c200_sigma05_penalization01.log")
###Output
_____no_output_____ |
Chapter 01/Introduction_to_Machine_Learning.ipynb | ###Markdown
What is AutoML?
###Code
# Sklearn has convenient modules to create sample data.
# make_blobs will help us to create a sample data set suitable for clustering
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=100, centers=2, cluster_std=0.30, random_state=0)
# Let's visualize what we have first
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
plt.scatter(X[:, 0], X[:, 1], s=50)
# We will import KMeans model from clustering model family of Sklearn
from sklearn.cluster import KMeans
k_means = KMeans(n_clusters=2)
k_means.fit(X)
predictions = k_means.predict(X)
# Let's plot the predictions
plt.scatter(X[:, 0], X[:, 1], c=predictions, cmap='brg')
k_means.get_params()
###Output
_____no_output_____
###Markdown
Featuretools
###Code
import pandas as pd
# First dataset contains the basic information for databases.
databases_df = pd.DataFrame({"database_id": [2234, 1765, 8796, 2237, 3398],
"creation_date": ["2018-02-01", "2017-03-02", "2017-05-03", "2013-05-12", "2012-05-09"]})
databases_df.head()
# Second dataset contains the information of transaction for each database id
db_transactions_df = pd.DataFrame({"transaction_id": [26482746, 19384752, 48571125, 78546789, 19998765, 26482646, 12484752, 42471125, 75346789, 16498765, 65487547, 23453847, 56756771, 45645667, 23423498, 12335268, 76435357, 34534711, 45656746, 12312987],
"database_id": [2234, 1765, 2234, 2237, 1765, 8796, 2237, 8796, 3398, 2237, 3398, 2237, 2234, 8796, 1765, 2234, 2237, 1765, 8796, 2237],
"transaction_size": [10, 20, 30, 50, 100, 40, 60, 60, 10, 20, 60, 50, 40, 40, 30, 90, 130, 40, 50, 30],
"transaction_date": ["2018-02-02", "2018-03-02", "2018-03-02", "2018-04-02", "2018-04-02", "2018-05-02", "2018-06-02", "2018-06-02", "2018-07-02", "2018-07-02", "2018-01-03", "2018-02-03", "2018-03-03", "2018-04-03", "2018-04-03", "2018-07-03", "2018-07-03", "2018-07-03", "2018-08-03", "2018-08-03"]})
db_transactions_df.head()
# Entities for each of datasets should be defined
entities = {
"databases" : (databases_df, "database_id"),
"transactions" : (db_transactions_df, "transaction_id")
}
# Relationships between tables should also be defined as below
relationships = [("databases", "database_id", "transactions", "database_id")]
print(entities)
# There are 2 entities called ‘databases’ and ‘transactions’
# All the pieces that are necessary to engineer features are in place, you can create your feature matrix as below
import featuretools as ft
feature_matrix_db_transactions, feature_defs = ft.dfs(entities=entities, relationships=relationships, target_entity="databases")
feature_defs
###Output
_____no_output_____
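###Markdown
The resulting feature matrix is a regular pandas DataFrame indexed by database_id, so you can inspect the automatically generated aggregation features (their names, such as SUM(transactions.transaction_size), follow featuretools' primitive naming convention):
###Code
# Peek at the engineered features for each database
feature_matrix_db_transactions.head()
###Output
_____no_output_____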
###Markdown
Auto-sklearn
###Code
# Necessary imports
import autosklearn.classification
import sklearn.model_selection
import sklearn.datasets
import sklearn.metrics
from sklearn.model_selection import train_test_split
# The digits dataset is one of the most popular datasets in the machine learning community.
# Every example in this dataset represents an 8x8 image of a digit.
X, y = sklearn.datasets.load_digits(return_X_y=True)
# Let's see the first image. Image is reshaped to 8x8, otherwise it's a vector of size 64.
X[0].reshape(8,8)
# Let's also plot couple of them
import matplotlib.pyplot as plt
%matplotlib inline
number_of_images = 10
images_and_labels = list(zip(X, y))
for i, (image, label) in enumerate(images_and_labels[:number_of_images]):
plt.subplot(2, number_of_images, i + 1)
plt.axis('off')
plt.imshow(image.reshape(8,8), cmap=plt.cm.gray_r, interpolation='nearest')
plt.title('%i' % label)
plt.show()
# We split our dataset to train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# Similarly to creating an estimator in Scikit-learn, we create AutoSklearnClassifier
automl = autosklearn.classification.AutoSklearnClassifier()
# All you need to do is to invoke fit method to start experiment with different feature engineering methods and machine learning models
automl.fit(X_train, y_train)
# Generating predictions is same as Scikit-learn, you need to invoke predict method.
y_hat = automl.predict(X_test)
print("Accuracy score", sklearn.metrics.accuracy_score(y_test, y_hat))
# Accuracy score 0.98
###Output
Accuracy score 0.9933333333333333
###Markdown
MLBox
###Code
# Necessary Imports
from mlbox.preprocessing import *
from mlbox.optimisation import *
from mlbox.prediction import *
import wget
file_link = 'https://apsportal.ibm.com/exchange-api/v1/entries/8044492073eb964f46597b4be06ff5ea/data?accessKey=9561295fa407698694b1e254d0099600'
file_name = wget.download(file_link)
print(file_name)
# GoSales_Tx_NaiveBayes.csv
import pandas as pd
df = pd.read_csv('GoSales_Tx_NaiveBayes.csv')
df.head()
test_df = df.drop(['PRODUCT_LINE'], axis = 1)
# The first 300 records are saved as the test dataset
test_df[:300].to_csv('test_data.csv')
paths = ["GoSales_Tx_NaiveBayes.csv", "test_data.csv"]
target_name = "PRODUCT_LINE"
rd = Reader(sep = ',')
df = rd.train_test_split(paths, target_name)
dft = Drift_thresholder()
df = dft.fit_transform(df)
opt = Optimiser(scoring = 'accuracy', n_folds = 3)
opt.evaluate(None, df)
space = {
'ne__numerical_strategy':{"search":"choice", "space":[0]},
'ce__strategy':{"search":"choice",
"space":["label_encoding","random_projection", "entity_embedding"]},
'fs__threshold':{"search":"uniform", "space":[0.01,0.3]},
'est__max_depth':{"search":"choice", "space":[3,4,5,6,7]}
}
best = opt.optimise(space, df,15)
predictor = Predictor()
predictor.fit_predict(best, df)
###Output
_____no_output_____
###Markdown
TPOT
###Code
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
# The digits dataset that you used in the Auto-sklearn example
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
train_size=0.75, test_size=0.25)
# You will create your TPOT classifier with commonly used arguments
tpot = TPOTClassifier(generations=10, population_size=30, verbosity=2)
# When you invoke the fit method, TPOT will create generations of populations, seeking the best set of parameters. The arguments you used to create the TPOTClassifier, such as generations and population_size, will affect the search space and the resulting pipeline.
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
# 0.9834
tpot.export('my_pipeline.py')
!cat my_pipeline.py
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
# imports required by the pipeline below
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from tpot.builtins import StackingEstimator
exported_pipeline = make_pipeline(
StackingEstimator(estimator=DecisionTreeClassifier(criterion="entropy", max_depth=6, min_samples_leaf=2, min_samples_split=2)),
KNeighborsClassifier(n_neighbors=2, weights="distance")
)
exported_pipeline.fit(X_train, y_train)
results = exported_pipeline.predict(X_test)
###Output
_____no_output_____ |
tutorial-completed.ipynb | ###Markdown
GPT Tutorial > This code is taken from https://github.com/karpathy/minGPT written by Andrej Karpathy. It's under the MIT license. We will implement the GPT model and train it on a character-level language modeling objective, using Shakespeare plays.In character-level language modeling, given the previous sequence of characters, we want our model to predict the next character.We will use a Transformer-based decoder for this purpose.Then, given an initial text prompt, we will sample the next characters iteratively using our model. We will show that it can generate coherent text similar to Shakespeare plays. GPT GPT is an **autoregressive Transformer decoder** for language modeling.You can find related papers here:- GPT: https://openai.com/blog/language-unsupervised/- GPT-2: https://openai.com/blog/better-language-models/- GPT-3: https://arxiv.org/abs/2005.14165 1) Start by writing the `GPT` module. - `GPT` uses an **embedding** layer and a **positional embedding** layer to represent each token.- Then, it processes the initial input with many **Transformer layers** (`Block`)- Each layer is a sequential combination of a 1-hidden-layer MLP block and a **self-attention layer** (`CausalSelfAttention`)- Then, there is a final **decoder**, just a linear projection.2) Go over the top-k sampler (`sample`) function that we use to sample from the model3) Go over the data loader (`CharDataset`) that reads the raw text and gives (x,y) pairs for the language modeling objective Today, we will not cover the details of `Trainer`
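To preview how these pieces fit together, here is a minimal end-to-end sketch. Everything it refers to (`GPTConfig`, `GPT`, `CharDataset`, `TrainerConfig`, `Trainer`, `sample`, and the Shakespeare text file) is defined or downloaded later in this notebook, and the hyperparameters shown are illustrative only:

```python
# Preview only -- all classes, the sample() function and input.txt are created below
text = open('input.txt', 'r').read()
train_dataset = CharDataset(text, block_size=128)

mconf = GPTConfig(train_dataset.vocab_size, train_dataset.block_size, n_layer=8, n_head=8, n_embd=512)
model = GPT(mconf)

tconf = TrainerConfig(max_epochs=2, batch_size=512, learning_rate=6e-4)
trainer = Trainer(model, train_dataset, None, tconf)
trainer.train()

context = "O God, O God!"
x = torch.tensor([train_dataset.stoi[s] for s in context], dtype=torch.long)[None, ...].to(trainer.device)
y = sample(model, x, 500, temperature=1.0, sample=True, top_k=10)[0]
print(''.join([train_dataset.itos[int(i)] for i in y]))
```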
###Code
import math
import random
import numpy as np
import torch
import torch.nn as nn
from torch.nn import functional as F
from torch.utils.data import Dataset
import logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO,
)
logger = logging.getLogger(__name__)
f"Parallelizing over: {torch.cuda.device_count()} GPUs"
###Output
_____no_output_____
###Markdown
GPT Module We will start by defining a config object which keeps all the configuration values for our model
###Code
class GPTConfig:
n_embd = 768
n_layer = 12
n_head = 12
embd_pdrop = 0.1
resid_pdrop = 0.1
attn_pdrop = 0.1
def __init__(self, vocab_size, block_size, **kwargs):
self.vocab_size = vocab_size
self.block_size = block_size
for (k,v) in kwargs.items():
setattr(self,k,v)
###Output
_____no_output_____
###Markdown
Let's define all the layers and the forward function of the GPT module
###Code
class GPT(nn.Module):
""" the full GPT language model, with a context size of block_size """
def __init__(self, config):
super().__init__()
# input embedding stem
self.tok_emb = nn.Embedding(config.vocab_size, config.n_embd)
self.pos_emb = nn.Parameter(torch.zeros(1, config.block_size, config.n_embd))
self.drop = nn.Dropout(config.embd_pdrop)
# transformer layers
self.blocks = nn.Sequential(*[Block(config) for _ in range(config.n_layer)])
# decoder
self.ln_f = nn.LayerNorm(config.n_embd)
self.head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
###
self.block_size = config.block_size
self.apply(self._init_weights)
logger.info("number of parameters: %e", sum(p.numel() for p in self.parameters()))
def forward(self, idx, targets=None):
b, t = idx.size() # Batch x Seq_length integers
assert t <= self.block_size, "Cannot forward, model block size is exhausted."
# forward the GPT model, produce the scores
token_embeddings = self.tok_emb(idx)
position_embeddings = self.pos_emb[:,:t,:]
x = self.drop(token_embeddings + position_embeddings)
x = self.blocks(x)
x = self.ln_f(x)
logits = self.head(x)
# if we are given some desired targets also calculate the loss
loss = None
if targets is not None:
loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
return logits, loss
def get_block_size(self):
return self.block_size
def _init_weights(self, module):
if isinstance(module, (nn.Linear, nn.Embedding)):
module.weight.data.normal_(mean=0.0, std=0.02)
if isinstance(module, nn.Linear) and module.bias is not None:
module.bias.data.zero_()
elif isinstance(module, nn.LayerNorm):
module.bias.data.zero_()
module.weight.data.fill_(1.0)
def configure_optimizers(self, train_config):
"""
This long function is unfortunately doing something very simple and is being very defensive:
We are separating out all parameters of the model into two buckets: those that will experience
weight decay for regularization and those that won't (biases, and layernorm/embedding weights).
We are then returning the PyTorch optimizer object.
"""
# separate out all parameters to those that will and won't experience regularizing weight decay
decay = set()
no_decay = set()
whitelist_weight_modules = (torch.nn.Linear, )
blacklist_weight_modules = (torch.nn.LayerNorm, torch.nn.Embedding)
for mn, m in self.named_modules():
for pn, p in m.named_parameters():
fpn = '%s.%s' % (mn, pn) if mn else pn # full param name
if pn.endswith('bias'):
# all biases will not be decayed
no_decay.add(fpn)
elif pn.endswith('weight') and isinstance(m, whitelist_weight_modules):
# weights of whitelist modules will be weight decayed
decay.add(fpn)
elif pn.endswith('weight') and isinstance(m, blacklist_weight_modules):
# weights of blacklist modules will NOT be weight decayed
no_decay.add(fpn)
# special case the position embedding parameter in the root GPT module as not decayed
no_decay.add('pos_emb')
# validate that we considered every parameter
param_dict = {pn: p for pn, p in self.named_parameters()}
inter_params = decay & no_decay
union_params = decay | no_decay
assert len(inter_params) == 0, "parameters %s made it into both decay/no_decay sets!" % (str(inter_params), )
assert len(param_dict.keys() - union_params) == 0, "parameters %s were not separated into either decay/no_decay set!" \
% (str(param_dict.keys() - union_params), )
# create the pytorch optimizer object
optim_groups = [
{"params": [param_dict[pn] for pn in sorted(list(decay))], "weight_decay": train_config.weight_decay},
{"params": [param_dict[pn] for pn in sorted(list(no_decay))], "weight_decay": 0.0},
]
optimizer = torch.optim.AdamW(optim_groups, lr=train_config.learning_rate, betas=train_config.betas)
return optimizer
###Output
_____no_output_____
###Markdown
Transformer Blocks
###Code
class Block(nn.Module):
""" an unassuming Transformer block """
def __init__(self, config):
super().__init__()
self.ln1 = nn.LayerNorm(config.n_embd)
self.ln2 = nn.LayerNorm(config.n_embd)
self.attn = CausalSelfAttention(config)
self.mlp = nn.Sequential(
nn.Linear(config.n_embd, 4 * config.n_embd),
nn.GELU(),
nn.Linear(4 * config.n_embd, config.n_embd),
nn.Dropout(config.resid_pdrop),
)
def forward(self, x):
x = x + self.attn(self.ln1(x))
x = x + self.mlp(self.ln2(x))
return x
###Output
_____no_output_____
###Markdown
Self Attention with Causal Masking$$\operatorname{Attention}(Q, K, V)=\operatorname{softmax}\left(\frac{Q K^{T}}{\sqrt{d_{k}}}\right) V$$We will implement Key-Query-Value attention for this module. We want it to be causal so that each token only attends to its predecessors. Given a block size of 4 and the input chunk "hello", we will set $x = [h,e,l,l]$ and $y=[e,l,l,o]$. With causal attention, we automatically ask the model for 4 next-character predictions at once: - given just "h", please predict "e" as next - given "he" please predict "l" next - given "hel" predict "l" next - given "hell" predict "o" next A tiny numerical sketch of how the causal mask works is shown below.
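Before the implementation, here is a tiny standalone sketch of the mask mechanics. The mask construction mirrors the `register_buffer` line in `CausalSelfAttention` below; the random scores tensor is only a stand-in for $QK^T/\sqrt{d_k}$ of a single head:

```python
import torch
import torch.nn.functional as F

T = 4                                        # toy sequence length
mask = torch.tril(torch.ones(T, T))          # lower-triangular: position t sees only positions <= t
scores = torch.randn(T, T)                   # stand-in for q @ k^T / sqrt(d_k) of a single head
scores = scores.masked_fill(mask == 0, float('-inf'))
attn = F.softmax(scores, dim=-1)             # each row sums to 1 over the visible (past) positions only
print(mask)
print(attn)                                  # entries above the diagonal are exactly zero
```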
###Code
class CausalSelfAttention(nn.Module):
"""
A vanilla multi-head masked self-attention layer with a projection at the end.
It is possible to use torch.nn.MultiheadAttention here but I am including an
explicit implementation here to show that there is nothing too scary here.
"""
def __init__(self, config):
super().__init__()
#Because our attention projection size will be n_embd / n_head
#effective projection dimension for attention = 768 / 12
assert config.n_embd % config.n_head == 0
self.n_head = config.n_head
# key, query, value projections for all heads
self.key = nn.Linear(config.n_embd, config.n_embd)
self.query = nn.Linear(config.n_embd, config.n_embd)
self.value = nn.Linear(config.n_embd, config.n_embd)
# regularization
self.attn_drop = nn.Dropout(config.attn_pdrop)
self.resid_drop = nn.Dropout(config.resid_pdrop)
# output projection
self.proj = nn.Linear(config.n_embd, config.n_embd)
# causal mask to ensure that attention is only applied to the left in the input sequence
self.register_buffer("mask", torch.tril(torch.ones(config.block_size, config.block_size)).view(1,1,config.block_size,config.block_size))
def forward(self, x):
B, T, C = x.size()
k = self.key(x).view(B,T,self.n_head,C // self.n_head).transpose(1,2)
v = self.value(x).view(B,T,self.n_head,C // self.n_head).transpose(1,2)
q = self.query(x).view(B,T,self.n_head,C // self.n_head).transpose(1,2)
# calculate query, key, values for all heads in batch and move head forward to be the batch dim
scores = (q @ k.transpose(-2,-1)).div(math.sqrt(q.size(-1)))
scores.masked_fill_(self.mask[:,:,:T,:T]==0, float('-inf'))
scores = F.softmax(scores, dim=-1)
y = scores @ v
y = y.transpose(1,2).contiguous().view(B,T,C)
y = self.resid_drop(self.proj(y))
return y
###Output
_____no_output_____
###Markdown
TopK Sampler
###Code
def top_k_logits(logits, k):
    # keep only the k largest logits per row and push everything else down to -inf,
    # so that the subsequent softmax assigns those positions zero probability
    v, _ = torch.topk(logits, k)
    return logits.masked_fill(logits < v[:, [-1]], -float('Inf'))
@torch.no_grad()
def sample(model, x, steps, temperature=1.0, sample=False, top_k=None):
"""
take a conditioning sequence of indices in x (of shape (b,t)) and predict the next token in
the sequence, feeding the predictions back into the model each time. Clearly the sampling
has quadratic complexity unlike an RNN that is only linear, and has a finite context window
of block_size, unlike an RNN that has an infinite context window.
"""
block_size = model.get_block_size()
model.eval()
for k in range(steps):
x_cond = x if x.size(1) <= block_size else x[:, -block_size:] # crop context if needed
logits, _ = model(x_cond)
# pluck the logits at the final step and scale by temperature
logits = logits[:, -1, :] / temperature
# optionally crop probabilities to only the top k options
if top_k is not None:
logits = top_k_logits(logits, top_k)
# apply softmax to convert to probabilities
probs = F.softmax(logits, dim=-1)
# sample from the distribution or take the most likely
if sample:
ix = torch.multinomial(probs, num_samples=1)
else:
ix = torch.argmax(probs, dim=-1, keepdim=True)
# append to the sequence and continue
x = torch.cat((x, ix), dim=1)
return x
###Output
_____no_output_____
###Markdown
Trainer
###Code
"""
Simple training loop; Boilerplate that could apply to any arbitrary neural network,
so nothing in this cell really has anything to do with GPT specifically.
"""
from tqdm import tqdm
import numpy as np
import torch.optim as optim
from torch.optim.lr_scheduler import LambdaLR
from torch.utils.data.dataloader import DataLoader
class TrainerConfig:
# optimization parameters
max_epochs = 10
batch_size = 64
learning_rate = 3e-4
betas = (0.9, 0.95)
grad_norm_clip = 1.0
weight_decay = 0.1 # only applied on matmul weights
# learning rate decay params: linear warmup followed by cosine decay to 10% of original
lr_decay = False
warmup_tokens = 375e6 # these two numbers come from the GPT-3 paper, but may not be good defaults elsewhere
final_tokens = 260e9 # (at what point we reach 10% of original LR)
# checkpoint settings
ckpt_path = None
num_workers = 0 # for DataLoader
def __init__(self, **kwargs):
for k,v in kwargs.items():
setattr(self, k, v)
class Trainer:
def __init__(self, model, train_dataset, test_dataset, config):
self.model = model
self.train_dataset = train_dataset
self.test_dataset = test_dataset
self.config = config
# take over whatever gpus are on the system
self.device = 'cpu'
if torch.cuda.is_available():
self.device = torch.cuda.current_device()
self.model = torch.nn.DataParallel(self.model).to(self.device)
def save_checkpoint(self):
# DataParallel wrappers keep raw model object in .module attribute
raw_model = self.model.module if hasattr(self.model, "module") else self.model
logger.info("saving %s", self.config.ckpt_path)
torch.save(raw_model.state_dict(), self.config.ckpt_path)
def train(self):
model, config = self.model, self.config
raw_model = model.module if hasattr(self.model, "module") else model
optimizer = raw_model.configure_optimizers(config)
def run_epoch(split):
is_train = split == 'train'
model.train(is_train)
data = self.train_dataset if is_train else self.test_dataset
loader = DataLoader(data, shuffle=True, pin_memory=True,
batch_size=config.batch_size,
num_workers=config.num_workers)
losses = []
pbar = tqdm(enumerate(loader), total=len(loader)) if is_train else enumerate(loader)
for it, (x, y) in pbar:
# place data on the correct device
x = x.to(self.device)
y = y.to(self.device)
# forward the model
with torch.set_grad_enabled(is_train):
logits, loss = model(x, y)
loss = loss.mean() # collapse all losses if they are scattered on multiple gpus
losses.append(loss.item())
if is_train:
# backprop and update the parameters
model.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), config.grad_norm_clip)
optimizer.step()
# decay the learning rate based on our progress
if config.lr_decay:
self.tokens += (y >= 0).sum() # number of tokens processed this step (i.e. label is not -100)
if self.tokens < config.warmup_tokens:
# linear warmup
lr_mult = float(self.tokens) / float(max(1, config.warmup_tokens))
else:
# cosine learning rate decay
progress = float(self.tokens - config.warmup_tokens) / float(max(1, config.final_tokens - config.warmup_tokens))
lr_mult = max(0.1, 0.5 * (1.0 + math.cos(math.pi * progress)))
lr = config.learning_rate * lr_mult
for param_group in optimizer.param_groups:
param_group['lr'] = lr
else:
lr = config.learning_rate
# report progress
pbar.set_description(f"epoch {epoch+1} iter {it}: train loss {loss.item():.5f}. lr {lr:e}")
if not is_train:
test_loss = float(np.mean(losses))
logger.info("test loss: %f", test_loss)
return test_loss
best_loss = float('inf')
self.tokens = 0 # counter used for learning rate decay
for epoch in range(config.max_epochs):
run_epoch('train')
if self.test_dataset is not None:
test_loss = run_epoch('test')
# supports early stopping based on the test loss, or just save always if no test set is provided
good_model = self.test_dataset is None or test_loss < best_loss
if self.config.ckpt_path is not None and good_model:
best_loss = test_loss
self.save_checkpoint()
###Output
_____no_output_____
###Markdown
Dataset The inputs here are simple text files, which we chop up into individual characters and then train GPT on. So you could say this is a char-transformer instead of a char-rnn. Doesn't quite roll off the tongue as well. In this example we will feed it some Shakespeare, which we'll get it to predict at the character level. A tiny sketch of the character encoding and the (x, y) construction is shown below.
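The sketch below mirrors what `CharDataset` does; the toy string and block size of 4 are purely illustrative:

```python
# Illustrative character-level encoding and (x, y) pair construction
data = "hello world"
chars = sorted(list(set(data)))
stoi = {ch: i for i, ch in enumerate(chars)}
itos = {i: ch for i, ch in enumerate(chars)}

block_size = 4
chunk = data[0:block_size + 1]      # "hello"
dix = [stoi[s] for s in chunk]
x, y = dix[:-1], dix[1:]            # x encodes "hell", y encodes "ello"
print([itos[i] for i in x], [itos[i] for i in y])
```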
###Code
class CharDataset(Dataset):
def __init__(self, data, block_size):
chars = sorted(list(set(data)))
data_size, vocab_size = len(data), len(chars)
print('data has %d characters, %d unique.' % (data_size, vocab_size))
self.stoi = { ch:i for i,ch in enumerate(chars) }
self.itos = { i:ch for i,ch in enumerate(chars) }
self.block_size = block_size
self.vocab_size = vocab_size
self.data = data
def __len__(self):
return len(self.data) - self.block_size
def __getitem__(self, idx):
"""
If the block_size is 4, then
we could e.g. sample a chunk of text "hello", the integers in
x will correspond to "hell" and in y will be "ello". This will
then actually "multitask" 4 separate examples at the same time
in the language model:
- given just "h", please predict "e" as next
- given "he" please predict "l" next
- given "hel" predict "l" next
- given "hell" predict "o" next
"""
# grab a chunk of (block_size + 1) characters from the data
chunk = self.data[idx:idx + self.block_size + 1]
# encode every character to an integer
dix = [self.stoi[s] for s in chunk]
x = torch.tensor(dix[:-1], dtype=torch.long) # hell
y = torch.tensor(dix[1:], dtype=torch.long) # ello
return x, y
block_size = 128 # spatial extent of the model for its context
!wget https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt
text = open('input.txt', 'r').read() # don't worry we won't run out of file handles
train_dataset = CharDataset(text, block_size) # one line of poem is roughly 50 characters
###Output
data has 1115394 characters, 65 unique.
###Markdown
Initialize Model
###Code
mconf = GPTConfig(train_dataset.vocab_size, train_dataset.block_size, n_layer=8, n_head=8, n_embd=512)
model = GPT(mconf)
###Output
_____no_output_____
###Markdown
Train Model
###Code
# initialize a trainer instance and kick off training
tconf = TrainerConfig(max_epochs=2, batch_size=6*128, learning_rate=6e-4,
lr_decay=True, warmup_tokens=512*20, final_tokens=2*len(train_dataset)*block_size,
num_workers=4)
trainer = Trainer(model, train_dataset, None, tconf)
trainer.train()
## Sample
context = "O God, O God!"
x = torch.tensor([train_dataset.stoi[s] for s in context], dtype=torch.long)[None,...].to(trainer.device)
y = sample(model, x, 2000, temperature=1.0, sample=True, top_k=10)[0]
completion = ''.join([train_dataset.itos[int(i)] for i in y])
print(completion)
###Output
O God, O God!--O nurse, how shall this be prevented?
My husband is on earth, my faith in heaven;
How shall that faith return again to earth,
Unless that husband send it me from heaven
By leaving earth? comfort me, counsel me.
Alack, alack, that heaven should practise stratagems
Upon so soft a subject as myself!
What say'st thou? hast thou not a word of joy?
Some comfort, nurse.
Nurse:
Faith, here it is.
Romeo is banish'd; and all the world to nothing,
That he dares ne'er come back to challenge you;
Or, if he do, it needs must be by stealth.
Then, since the case so stands as now it doth,
I think it best you married with the county.
O, he's a lovely gentleman!
Romeo's a dishclout to him: an eagle, madam,
Hath not so green, so quick, so fair an eye
As Paris hath. Beshrew my very heart,
I think you are happy in this second match,
For it excels your first: or if it did not,
Your first is dead; or 'twere as good he were,
As living here and you no use of him.
JULIET:
Speakest thou from thy heart?
Nurse:
And from my soul too;
Or else beshrew them both.
JULIET:
Amen!
Nurse:
What?
JULIET:
Well, thou hast comforted me marvellous much.
Go in: and tell my lady I am gone,
Having displeased my father, to Laurence' cell,
To make confession and to be absolved.
Nurse:
Marry, I will; and this is wisely done.
JULIET:
Ancient damnation! O most wicked fiend!
Is it more sin to wish me thus forsworn,
Or to dispraise my lord with that same tongue
Which she hath praised him with above compare
So many thousand times? Go, counsellor;
Thou and my bosom henceforth shall be twain.
I'll to the friar, to know his remedy:
If all else fail, myself have power to die.
FRIAR LAURENCE:
On Thursday, sir? the time is very short.
PARIS:
My father Capulet will have it so;
And I am nothing slow to slack his haste.
FRIAR LAURENCE:
You say you do not know the lady's mind:
Uneven is the course, I like it not.
PARIS:
Immoderately she weeps for Tybalt's death,
And therefore have I little talk'd of love;
For Venus s
|
python_samples/Jupyter_Notebooks/ibm_db-procedures.ipynb | ###Markdown
ibm_db.procedures() Purpose: Retrieve a list of procedures that have been registered in a database. Syntax: `IBM_DBStatement ibm_db.procedures( IBM_DBConnection `*`connection,`*` string `*`qualifierName,`*` string `*`schemaName,`*` string `*`procedureName`*` )` Parameters: * __*connection*__ : A valid Db2 server or database connection. * __qualifierName__ : A valid qualifier name for Db2 databases on OS/390 or z/OS servers; the value `None` or an empty string (`''`) for Db2 databases on other operating systems. * __schemaName__ : The name of the schema that contains the procedure(s) that information is to be obtained for. To match all schemas, provide the value `None` or an empty string; to match select schemas, provide a search pattern that contains __`_`__ and/or __`%`__ wildcards.* __procedureName__ : The name of the procedure(s) that information is to be obtained for. To match all procedures, provide the value `None` or an empty string; to match select procedures, provide a search pattern that contains __`_`__ and/or __`%`__ wildcards. Return values: * If __successful__, an IBM_DBStatement with a result set that contains the following information: * `PROCEDURE_CAT` : The name of the catalog associated with the schema that contains the procedure; Db2 does not use catalogs so this field will always contain the value `None`. *(Db2 databases on OS/390 or z/OS servers can return information in this field.)* * `PROCEDURE_SCHEM` : The name of the schema that contains the procedure. * `PROCEDURE_NAME` : The name of the procedure. * `NUM_INPUT_PARAMS` : The number of input (IN) parameters that have been defined for the procedure. * `NUM_OUTPUT_PARAMS` : The number of output (OUT) parameters that have been defined for the procedure. * `NUM_RESULT_SETS` : The number of result sets the procedure will return. * `REMARKS` : A user-supplied description of the procedure (if one has been provided). * `PROCEDURE_TYPE` : A numerical value that indicates whether the procedure is a stored procedure that does not return a value (`1`) or a function that returns a value (`2`). This field will always contain the value `1`. * If __unsuccessful__, the value `False`. Description: The __ibm_db.procedures()__ API is used to retrieve a list of stored procedures that have been registered in a database.The information returned by this API is placed in a result data set, which can be processed using the same APIs that are used to process result data sets that are generated by SQL queries. That is, a single row can be retrieved and stored in a tuple or dictionary using the __ibm_db.fetch_tuple()__ (tuple), __ibm_db.fetch_assoc()__ (dictionary), or __ibm_db.fetch_both()__ (tuple *and* dictionary) APIs. Alternately, the __ibm_db.fetch_row()__ API can be used to move the result set pointer to each row in the result set produced and the __ibm_db.result()__ API can be used to fetch a column from the current row. Example:
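Before the complete sample program, here is a minimal sketch of the same call with the result set processed via __ibm_db.fetch_tuple()__ instead of __ibm_db.fetch_assoc()__; the open connection `dbConnection` and the schema name used here are assumptions made purely for illustration:

```python
import ibm_db

# Minimal sketch -- assumes dbConnection is an already-open IBM_DBConnection
# and that 'DB2INST1' is the schema of interest (illustrative only)
resultSet = ibm_db.procedures(dbConnection, None, 'DB2INST1', '%')
if resultSet is not False:
    row = ibm_db.fetch_tuple(resultSet)
    while row is not False:
        # Tuple columns follow the result set layout described above:
        # PROCEDURE_CAT, PROCEDURE_SCHEM, PROCEDURE_NAME, NUM_INPUT_PARAMS, ...
        print(row[1] + "." + row[2])
        row = ibm_db.fetch_tuple(resultSet)
```

The complete sample program, with full connection handling, follows.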
###Code
#----------------------------------------------------------------------------------------------#
# NAME: ibm_db-procedures.py #
# #
# PURPOSE: This program is designed to illustrate how to use the ibm_db.procedures() API. #
# #
# Additional APIs used: #
# ibm_db.fetch_assoc() #
# #
#----------------------------------------------------------------------------------------------#
# DISCLAIMER OF WARRANTIES AND LIMITATION OF LIABILITY #
# #
# (C) COPYRIGHT International Business Machines Corp. 2018, 2019 All Rights Reserved #
# Licensed Materials - Property of IBM #
# #
# US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA #
# ADP Schedule Contract with IBM Corp. #
# #
# The following source code ("Sample") is owned by International Business Machines #
# Corporation ("IBM") or one of its subsidiaries and is copyrighted and licensed, not sold. #
# You may use, copy, modify, and distribute the Sample in any form without payment to IBM, #
# for the purpose of assisting you in the creation of Python applications using the ibm_db #
# library. #
# #
# The Sample code is provided to you on an "AS IS" basis, without warranty of any kind. IBM #
# HEREBY EXPRESSLY DISCLAIMS ALL WARRANTIES, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT #
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. #
# Some jurisdictions do not allow for the exclusion or limitation of implied warranties, so #
# the above limitations or exclusions may not apply to you. IBM shall not be liable for any #
# damages you suffer as a result of using, copying, modifying or distributing the Sample, #
# even if IBM has been advised of the possibility of such damages. #
#----------------------------------------------------------------------------------------------#
# Load The Appropriate Python Modules
import sys # Provides Information About Python Interpreter Constants And Functions
import ibm_db # Contains The APIs Needed To Work With Db2 Databases
#----------------------------------------------------------------------------------------------#
# Import The Db2ConnectionMgr Class Definition, Attributes, And Methods That Have Been Defined #
# In The File Named "ibm_db_tools.py"; This Class Contains The Programming Logic Needed To #
# Establish And Terminate A Connection To A Db2 Server Or Database #
#----------------------------------------------------------------------------------------------#
from ibm_db_tools import Db2ConnectionMgr
#----------------------------------------------------------------------------------------------#
# Import The ipynb_exit Class Definition, Attributes, And Methods That Have Been Defined In #
# The File Named "ipynb_exit.py"; This Class Contains The Programming Logic Needed To Allow #
# "exit()" Functionality To Work Without Raising An Error Or Stopping The Kernel If The #
# Application Is Invoked In A Jupyter Notebook #
#----------------------------------------------------------------------------------------------#
from ipynb_exit import exit
# Define And Initialize The Appropriate Variables
dbName = "SAMPLE"
userID = "db2inst1"
passWord = "Passw0rd"
dbConnection = None
schemaName = userID.upper()
resultSet = False
dataRecord = False
# Create An Instance Of The Db2ConnectionMgr Class And Use It To Connect To A Db2 Database
conn = Db2ConnectionMgr('DB', dbName, '', '', userID, passWord)
conn.openConnection()
if conn.returnCode is True:
dbConnection = conn.connectionID
else:
conn.closeConnection()
exit(-1)
# Attempt To Retrieve Information About Stored Procedures That Have Been Defined In The
# Current User's Schema
print("Obtaining information about stored procedures in the ", end="")
print(schemaName + " schema ... ", end="")
try:
resultSet = ibm_db.procedures(dbConnection, None, schemaName, '')
except Exception:
pass
# If The Information Desired Could Not Be Retrieved, Display An Error Message And Exit
if resultSet is False:
print("\nERROR: Unable to obtain the information desired\n.")
conn.closeConnection()
exit(-1)
# Otherwise, Complete The Status Message
else:
print("Done!\n")
# As Long As There Are Records (That Were Produced By The ibm_db.procedures API), ...
noData = False
loopCounter = 1
while noData is False:
# Retrieve A Record And Store It In A Python Dictionary
try:
dataRecord = ibm_db.fetch_assoc(resultSet)
except:
pass
# If The Data Could Not Be Retrieved Or If There Was No Data To Retrieve, Set The
# "No Data" Flag And Exit The Loop
if dataRecord is False:
noData = True
# Otherwise, Display The Information Retrieved
else:
# Display Record Header Information
print("Stored procedure " + str(loopCounter) + " details:")
print("_______________________________________________")
# Display The Information Stored In The Data Record Retrieved
print("Procedure schema : {}" .format(dataRecord['PROCEDURE_SCHEM']))
print("Procedure name : {}" .format(dataRecord['PROCEDURE_NAME']))
print("Number of input parameters : {}" .format(dataRecord['NUM_INPUT_PARAMS']))
print("Number of output parameters : {}" .format(dataRecord['NUM_OUTPUT_PARAMS']))
print("Number of result sets produced : {}" .format(dataRecord['NUM_RESULT_SETS']))
print("Procedure comments : {}" .format(dataRecord['REMARKS']))
# Increment The loopCounter Variable And Print A Blank Line To Separate The
# Records From Each Other
loopCounter += 1
print()
# Close The Database Connection That Was Opened Earlier
conn.closeConnection()
# Return Control To The Operating System
exit()
###Output
Connecting to the SAMPLE database ... Done!
Obtaining information about stored procedures in the DB2INST1 schema ... Done!
Stored procedure 1 details:
_______________________________________________
Procedure schema : DB2INST1
Procedure name : BONUS_INCREASE
Number of input parameters : 0
Number of output parameters : 0
Number of result sets produced : 1
Procedure comments : None
Stored procedure 2 details:
_______________________________________________
Procedure schema : DB2INST1
Procedure name : HIGH_EARNERS
Number of input parameters : 0
Number of output parameters : 0
Number of result sets produced : 3
Procedure comments : None
Stored procedure 3 details:
_______________________________________________
Procedure schema : DB2INST1
Procedure name : SALARY_STATS
Number of input parameters : 0
Number of output parameters : 0
Number of result sets produced : 0
Procedure comments : None
Disconnecting from the SAMPLE database ... Done!
|
Tuning_Notebooks/Seq2point_microwave_tuning.ipynb | ###Markdown
**Grid search over hyperparameters begins here:**
###Code
print(training_directory)
print(validation_directory)
# The training and validation directories are already defined earlier in the notebook
def generate_data(batch_size, offset, window_length):
from data_feeder_offset import TrainSlidingWindowGenerator
#window_offset = int(0.1 * input_window_length - 1)
window_offset = int((offset *window_length) - 1)
training_chunker = TrainSlidingWindowGenerator(file_name= training_directory,
chunk_size= 5 * 10 ** 2,
batch_size= batch_size,
crop=300000, shuffle=True,
skip_rows=0,
offset= window_offset,
windowlength = window_length,
ram_threshold=5*10**5)
validation_chunker = TrainSlidingWindowGenerator(file_name=validation_directory,
chunk_size=5 * 10 ** 2,
batch_size= batch_size,
crop=300000, shuffle=True,
skip_rows=0,
offset= window_offset,
windowlength = window_length,
ram_threshold=5*10**5)
return training_chunker, validation_chunker
def create_model_2(input_window_length, batch_size, window_offset, learning_rate):
    """Specifies the structure of a seq2point model using Keras' Sequential API.
    Returns:
    model (tensorflow.keras.Model): The compiled seq2point model.
    """
from tensorflow.keras.layers import Conv1D, Dense, Dropout, Reshape, Flatten, Conv2D, Input
from tensorflow.keras.models import Sequential
model = Sequential()
model.add(Input(shape=(input_window_length,)))
model.add(Reshape((1, input_window_length, 1)))
model.add(Conv2D(30,kernel_size=(10, 1), strides=(1, 1),activation="relu",input_shape=(1, input_window_length, 1), padding="same"))
model.add(Conv2D(30, kernel_size=(8, 1), activation='relu', strides=(1, 1), padding="same"))
model.add(Conv2D(40, kernel_size=(6, 1), activation='relu', strides=(1, 1), padding="same"))
model.add(Conv2D(60, kernel_size=(5, 1), activation='relu', strides=(1, 1), padding="same"))
model.add(Dropout(.2))
model.add(Conv2D(60, kernel_size=(5, 1), activation='relu', strides=(1, 1), padding="same"))
model.add(Dropout(.2))
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(.2))
model.add(Dense(1))
# compile model
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate= learning_rate, beta_1=0.9, beta_2=0.999), loss="mse", metrics=["mse", "msle", "mae"])
return model
# Validation on a held-out validation set
# Prints out the best parameters
import time
t1 = time.time()
batches = [500, 1000, 2000]
epochs = [2, 5, 10] #[2, 5, 10]
window_length = [11, 21, 51, 99, 199, 599]
learning = [0.01, 0.001, 0.0001]
offset = [0.1, 0.3, 0.5, 0.7]
all_results = []
for batch_size in batches:
for input_window_length in window_length:
for epoch in epochs:
#for learning_rate in learning:
for window_offset in offset:
learning_rate = 0.001
#window_offset = int(0.5 * (2+input_window_length) - 1)
accuracy_dict = {}
training_chunker, validation_chunker = generate_data(batch_size, window_offset, input_window_length)
steps_per_training_epoch = np.round(int(training_chunker.total_num_samples / batch_size), decimals=0)
model = create_model_2(input_window_length, batch_size, window_offset, learning_rate)
training_history = model.fit(training_chunker.load_dataset(),
steps_per_epoch=steps_per_training_epoch,
epochs = epoch,
verbose = 1,
#callbacks=callbacks,
validation_data = validation_chunker.load_dataset(),
validation_freq= 1,
validation_steps=100)
accuracy_dict["batch size"] = batch_size
accuracy_dict["window length"] = input_window_length
accuracy_dict["window offset"] = window_offset
accuracy_dict["epochs"] = epoch
accuracy_dict["validation loss"] = training_history.history['val_loss'][-1]
accuracy_dict["learning rate"] = learning_rate
#print(training_history.history['val_loss'])
all_results.append(accuracy_dict)
print(all_results)
import pandas as pd
df = pd.DataFrame(all_results)
df.to_csv(path +"tuning_results_microwave_2houses_withoffset.csv") #save the tuning results to a csv file
print("\nThe best parameters are:\n")
print(df.iloc[df["validation loss"].idxmin(),:])
t2 = time.time()
print("time elapsed in hours: {}".format((t2 - t1)/3600))
###Output
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
592/600 [============================>.] - ETA: 0s - loss: 0.0342 - mse: 0.0342 - msle: 0.0031 - mae: 0.0723Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 4s 6ms/step - loss: 0.0339 - mse: 0.0339 - msle: 0.0031 - mae: 0.0718 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0026 - val_mae: 0.0090
Epoch 2/2
600/600 [==============================] - 3s 6ms/step - loss: 0.0115 - mse: 0.0115 - msle: 0.0023 - mae: 0.0295 - val_loss: 0.0095 - val_mse: 0.0095 - val_msle: 0.0023 - val_mae: 0.0389
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
600/600 [==============================] - ETA: 0s - loss: 0.0343 - mse: 0.0343 - msle: 0.0032 - mae: 0.0737Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 4s 6ms/step - loss: 0.0343 - mse: 0.0343 - msle: 0.0032 - mae: 0.0737 - val_loss: 0.0090 - val_mse: 0.0090 - val_msle: 0.0026 - val_mae: 0.0096
Epoch 2/2
600/600 [==============================] - 3s 6ms/step - loss: 0.0120 - mse: 0.0120 - msle: 0.0024 - mae: 0.0310 - val_loss: 0.0093 - val_mse: 0.0093 - val_msle: 0.0025 - val_mae: 0.0103
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
600/600 [==============================] - ETA: 0s - loss: 0.0334 - mse: 0.0334 - msle: 0.0031 - mae: 0.0716Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 4s 6ms/step - loss: 0.0334 - mse: 0.0334 - msle: 0.0031 - mae: 0.0715 - val_loss: 0.0094 - val_mse: 0.0094 - val_msle: 0.0027 - val_mae: 0.0178
Epoch 2/2
600/600 [==============================] - 3s 6ms/step - loss: 0.0119 - mse: 0.0119 - msle: 0.0024 - mae: 0.0313 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0023 - val_mae: 0.0268
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
597/600 [============================>.] - ETA: 0s - loss: 0.0326 - mse: 0.0326 - msle: 0.0030 - mae: 0.0719Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 4s 6ms/step - loss: 0.0325 - mse: 0.0325 - msle: 0.0030 - mae: 0.0717 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0024 - val_mae: 0.0107
Epoch 2/2
600/600 [==============================] - 3s 6ms/step - loss: 0.0108 - mse: 0.0108 - msle: 0.0022 - mae: 0.0277 - val_loss: 0.0092 - val_mse: 0.0092 - val_msle: 0.0023 - val_mae: 0.0278
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
591/600 [============================>.] - ETA: 0s - loss: 0.0346 - mse: 0.0346 - msle: 0.0030 - mae: 0.0733Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 4s 6ms/step - loss: 0.0343 - mse: 0.0343 - msle: 0.0030 - mae: 0.0727 - val_loss: 0.0090 - val_mse: 0.0090 - val_msle: 0.0026 - val_mae: 0.0146
Epoch 2/5
600/600 [==============================] - 3s 6ms/step - loss: 0.0116 - mse: 0.0116 - msle: 0.0024 - mae: 0.0296 - val_loss: 0.0090 - val_mse: 0.0090 - val_msle: 0.0024 - val_mae: 0.0229
Epoch 3/5
600/600 [==============================] - 3s 6ms/step - loss: 0.0101 - mse: 0.0101 - msle: 0.0020 - mae: 0.0278 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0022 - val_mae: 0.0097
Epoch 4/5
600/600 [==============================] - 3s 6ms/step - loss: 0.0094 - mse: 0.0094 - msle: 0.0019 - mae: 0.0250 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0022 - val_mae: 0.0160
Epoch 5/5
600/600 [==============================] - 3s 6ms/step - loss: 0.0092 - mse: 0.0092 - msle: 0.0018 - mae: 0.0253 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0020 - val_mae: 0.0106
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
593/600 [============================>.] - ETA: 0s - loss: 0.0341 - mse: 0.0341 - msle: 0.0030 - mae: 0.0734Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 4s 6ms/step - loss: 0.0339 - mse: 0.0339 - msle: 0.0030 - mae: 0.0729 - val_loss: 0.0095 - val_mse: 0.0095 - val_msle: 0.0026 - val_mae: 0.0110
Epoch 2/5
600/600 [==============================] - 3s 6ms/step - loss: 0.0118 - mse: 0.0118 - msle: 0.0024 - mae: 0.0311 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0024 - val_mae: 0.0256
Epoch 3/5
600/600 [==============================] - 3s 6ms/step - loss: 0.0098 - mse: 0.0098 - msle: 0.0019 - mae: 0.0286 - val_loss: 0.0093 - val_mse: 0.0093 - val_msle: 0.0027 - val_mae: 0.0176
Epoch 4/5
600/600 [==============================] - 3s 6ms/step - loss: 0.0091 - mse: 0.0091 - msle: 0.0018 - mae: 0.0258 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0023 - val_mae: 0.0155
Epoch 5/5
600/600 [==============================] - 3s 6ms/step - loss: 0.0087 - mse: 0.0087 - msle: 0.0017 - mae: 0.0250 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0024 - val_mae: 0.0130
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
593/600 [============================>.] - ETA: 0s - loss: 0.0331 - mse: 0.0331 - msle: 0.0030 - mae: 0.0717Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 4s 6ms/step - loss: 0.0329 - mse: 0.0329 - msle: 0.0030 - mae: 0.0713 - val_loss: 0.0098 - val_mse: 0.0098 - val_msle: 0.0027 - val_mae: 0.0189
Epoch 2/5
600/600 [==============================] - 3s 6ms/step - loss: 0.0117 - mse: 0.0117 - msle: 0.0025 - mae: 0.0314 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0022 - val_mae: 0.0288
Epoch 3/5
600/600 [==============================] - 3s 6ms/step - loss: 0.0092 - mse: 0.0092 - msle: 0.0018 - mae: 0.0285 - val_loss: 0.0102 - val_mse: 0.0102 - val_msle: 0.0025 - val_mae: 0.0344
Epoch 4/5
600/600 [==============================] - 3s 6ms/step - loss: 0.0085 - mse: 0.0085 - msle: 0.0017 - mae: 0.0257 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0021 - val_mae: 0.0250
Epoch 5/5
600/600 [==============================] - 3s 6ms/step - loss: 0.0081 - mse: 0.0081 - msle: 0.0016 - mae: 0.0241 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0022 - val_mae: 0.0170
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
598/600 [============================>.] - ETA: 0s - loss: 0.0341 - mse: 0.0341 - msle: 0.0031 - mae: 0.0742Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 4s 6ms/step - loss: 0.0341 - mse: 0.0341 - msle: 0.0031 - mae: 0.0740 - val_loss: 0.0090 - val_mse: 0.0090 - val_msle: 0.0026 - val_mae: 0.0113
Epoch 2/5
600/600 [==============================] - 3s 6ms/step - loss: 0.0117 - mse: 0.0117 - msle: 0.0024 - mae: 0.0293 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0023 - val_mae: 0.0097
Epoch 3/5
600/600 [==============================] - 3s 6ms/step - loss: 0.0099 - mse: 0.0099 - msle: 0.0020 - mae: 0.0275 - val_loss: 0.0093 - val_mse: 0.0093 - val_msle: 0.0024 - val_mae: 0.0320
Epoch 4/5
600/600 [==============================] - 3s 6ms/step - loss: 0.0091 - mse: 0.0091 - msle: 0.0018 - mae: 0.0248 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0022 - val_mae: 0.0173
Epoch 5/5
600/600 [==============================] - 3s 6ms/step - loss: 0.0087 - mse: 0.0087 - msle: 0.0017 - mae: 0.0240 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0021 - val_mae: 0.0261
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
598/600 [============================>.] - ETA: 0s - loss: 0.0348 - mse: 0.0348 - msle: 0.0031 - mae: 0.0732Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 4s 6ms/step - loss: 0.0347 - mse: 0.0347 - msle: 0.0031 - mae: 0.0731 - val_loss: 0.0103 - val_mse: 0.0103 - val_msle: 0.0031 - val_mae: 0.0101
Epoch 2/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0127 - mse: 0.0127 - msle: 0.0026 - mae: 0.0327 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0022 - val_mae: 0.0275
Epoch 3/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0104 - mse: 0.0104 - msle: 0.0021 - mae: 0.0290 - val_loss: 0.0094 - val_mse: 0.0094 - val_msle: 0.0024 - val_mae: 0.0319
Epoch 4/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0098 - mse: 0.0098 - msle: 0.0019 - mae: 0.0269 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0018 - val_mae: 0.0113
Epoch 5/10
600/600 [==============================] - 4s 6ms/step - loss: 0.0090 - mse: 0.0090 - msle: 0.0018 - mae: 0.0247 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0021 - val_mae: 0.0245
Epoch 6/10
600/600 [==============================] - 4s 6ms/step - loss: 0.0087 - mse: 0.0087 - msle: 0.0017 - mae: 0.0240 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0017 - val_mae: 0.0203
Epoch 7/10
600/600 [==============================] - 4s 6ms/step - loss: 0.0083 - mse: 0.0083 - msle: 0.0016 - mae: 0.0226 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0021 - val_mae: 0.0249
Epoch 8/10
600/600 [==============================] - 4s 6ms/step - loss: 0.0083 - mse: 0.0083 - msle: 0.0016 - mae: 0.0226 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0018 - val_mae: 0.0153
Epoch 9/10
600/600 [==============================] - 4s 6ms/step - loss: 0.0081 - mse: 0.0081 - msle: 0.0016 - mae: 0.0219 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0021 - val_mae: 0.0252
Epoch 10/10
600/600 [==============================] - 4s 6ms/step - loss: 0.0079 - mse: 0.0079 - msle: 0.0016 - mae: 0.0211 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0018 - val_mae: 0.0297
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
594/600 [============================>.] - ETA: 0s - loss: 0.0358 - mse: 0.0358 - msle: 0.0032 - mae: 0.0744Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 4s 6ms/step - loss: 0.0356 - mse: 0.0356 - msle: 0.0032 - mae: 0.0740 - val_loss: 0.0103 - val_mse: 0.0103 - val_msle: 0.0031 - val_mae: 0.0137
Epoch 2/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0124 - mse: 0.0124 - msle: 0.0026 - mae: 0.0309 - val_loss: 0.0094 - val_mse: 0.0094 - val_msle: 0.0027 - val_mae: 0.0192
Epoch 3/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0102 - mse: 0.0102 - msle: 0.0020 - mae: 0.0279 - val_loss: 0.0093 - val_mse: 0.0093 - val_msle: 0.0026 - val_mae: 0.0159
Epoch 4/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0091 - mse: 0.0091 - msle: 0.0018 - mae: 0.0250 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0023 - val_mae: 0.0272
Epoch 5/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0086 - mse: 0.0086 - msle: 0.0017 - mae: 0.0241 - val_loss: 0.0095 - val_mse: 0.0095 - val_msle: 0.0024 - val_mae: 0.0253
Epoch 6/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0084 - mse: 0.0084 - msle: 0.0016 - mae: 0.0231 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0023 - val_mae: 0.0219
Epoch 7/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0083 - mse: 0.0083 - msle: 0.0016 - mae: 0.0228 - val_loss: 0.0098 - val_mse: 0.0098 - val_msle: 0.0025 - val_mae: 0.0198
Epoch 8/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0081 - mse: 0.0081 - msle: 0.0016 - mae: 0.0221 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0022 - val_mae: 0.0218
Epoch 9/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0082 - mse: 0.0082 - msle: 0.0016 - mae: 0.0220 - val_loss: 0.0097 - val_mse: 0.0097 - val_msle: 0.0024 - val_mae: 0.0279
Epoch 10/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0079 - mse: 0.0079 - msle: 0.0016 - mae: 0.0216 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0021 - val_mae: 0.0241
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
597/600 [============================>.] - ETA: 0s - loss: 0.0328 - mse: 0.0328 - msle: 0.0031 - mae: 0.0709Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 4s 6ms/step - loss: 0.0327 - mse: 0.0327 - msle: 0.0031 - mae: 0.0707 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0024 - val_mae: 0.0099
Epoch 2/10
600/600 [==============================] - 4s 6ms/step - loss: 0.0116 - mse: 0.0116 - msle: 0.0023 - mae: 0.0308 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0023 - val_mae: 0.0168
Epoch 3/10
600/600 [==============================] - 4s 6ms/step - loss: 0.0098 - mse: 0.0098 - msle: 0.0020 - mae: 0.0283 - val_loss: 0.0100 - val_mse: 0.0100 - val_msle: 0.0023 - val_mae: 0.0396
Epoch 4/10
600/600 [==============================] - 4s 6ms/step - loss: 0.0090 - mse: 0.0090 - msle: 0.0018 - mae: 0.0261 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0021 - val_mae: 0.0266
Epoch 5/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0087 - mse: 0.0087 - msle: 0.0017 - mae: 0.0249 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0022 - val_mae: 0.0172
Epoch 6/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0082 - mse: 0.0082 - msle: 0.0016 - mae: 0.0237 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0018 - val_mae: 0.0169
Epoch 7/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0081 - mse: 0.0081 - msle: 0.0016 - mae: 0.0235 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0020 - val_mae: 0.0118
Epoch 8/10
600/600 [==============================] - 4s 6ms/step - loss: 0.0079 - mse: 0.0079 - msle: 0.0016 - mae: 0.0223 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0017 - val_mae: 0.0120
Epoch 9/10
600/600 [==============================] - 4s 6ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0015 - mae: 0.0214 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0020 - val_mae: 0.0163
Epoch 10/10
600/600 [==============================] - 4s 6ms/step - loss: 0.0073 - mse: 0.0073 - msle: 0.0014 - mae: 0.0209 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0018 - val_mae: 0.0098
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
594/600 [============================>.] - ETA: 0s - loss: 0.0331 - mse: 0.0331 - msle: 0.0030 - mae: 0.0714Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 4s 6ms/step - loss: 0.0329 - mse: 0.0329 - msle: 0.0030 - mae: 0.0711 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0025 - val_mae: 0.0129
Epoch 2/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0116 - mse: 0.0116 - msle: 0.0023 - mae: 0.0304 - val_loss: 0.0095 - val_mse: 0.0095 - val_msle: 0.0025 - val_mae: 0.0101
Epoch 3/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0099 - mse: 0.0099 - msle: 0.0020 - mae: 0.0282 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0022 - val_mae: 0.0208
Epoch 4/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0092 - mse: 0.0092 - msle: 0.0019 - mae: 0.0261 - val_loss: 0.0091 - val_mse: 0.0091 - val_msle: 0.0023 - val_mae: 0.0139
Epoch 5/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0087 - mse: 0.0087 - msle: 0.0018 - mae: 0.0243 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0019 - val_mae: 0.0181
Epoch 6/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0083 - mse: 0.0083 - msle: 0.0016 - mae: 0.0236 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0020 - val_mae: 0.0084
Epoch 7/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0082 - mse: 0.0082 - msle: 0.0016 - mae: 0.0224 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0018 - val_mae: 0.0185
Epoch 8/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0081 - mse: 0.0081 - msle: 0.0016 - mae: 0.0223 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0019 - val_mae: 0.0100
Epoch 9/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0078 - mse: 0.0078 - msle: 0.0015 - mae: 0.0211 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0018 - val_mae: 0.0132
Epoch 10/10
600/600 [==============================] - 3s 6ms/step - loss: 0.0078 - mse: 0.0078 - msle: 0.0015 - mae: 0.0209 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0019 - val_mae: 0.0168
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
600/600 [==============================] - ETA: 0s - loss: 0.0329 - mse: 0.0329 - msle: 0.0032 - mae: 0.0702Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 5s 7ms/step - loss: 0.0329 - mse: 0.0329 - msle: 0.0032 - mae: 0.0701 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0023 - val_mae: 0.0091
Epoch 2/2
600/600 [==============================] - 4s 7ms/step - loss: 0.0112 - mse: 0.0112 - msle: 0.0022 - mae: 0.0296 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0018 - val_mae: 0.0087
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
593/600 [============================>.] - ETA: 0s - loss: 0.0322 - mse: 0.0322 - msle: 0.0030 - mae: 0.0692Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 5s 7ms/step - loss: 0.0320 - mse: 0.0320 - msle: 0.0030 - mae: 0.0688 - val_loss: 0.0101 - val_mse: 0.0101 - val_msle: 0.0028 - val_mae: 0.0207
Epoch 2/2
600/600 [==============================] - 4s 7ms/step - loss: 0.0118 - mse: 0.0118 - msle: 0.0025 - mae: 0.0314 - val_loss: 0.0093 - val_mse: 0.0093 - val_msle: 0.0022 - val_mae: 0.0293
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
598/600 [============================>.] - ETA: 0s - loss: 0.0320 - mse: 0.0320 - msle: 0.0031 - mae: 0.0694Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 5s 7ms/step - loss: 0.0319 - mse: 0.0319 - msle: 0.0031 - mae: 0.0692 - val_loss: 0.0092 - val_mse: 0.0092 - val_msle: 0.0027 - val_mae: 0.0112
Epoch 2/2
600/600 [==============================] - 4s 7ms/step - loss: 0.0106 - mse: 0.0106 - msle: 0.0022 - mae: 0.0299 - val_loss: 0.0098 - val_mse: 0.0098 - val_msle: 0.0019 - val_mae: 0.0454
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
600/600 [==============================] - ETA: 0s - loss: 0.0315 - mse: 0.0315 - msle: 0.0031 - mae: 0.0675Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 5s 7ms/step - loss: 0.0315 - mse: 0.0315 - msle: 0.0031 - mae: 0.0674 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0025 - val_mae: 0.0093
Epoch 2/2
600/600 [==============================] - 4s 7ms/step - loss: 0.0111 - mse: 0.0111 - msle: 0.0023 - mae: 0.0301 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0021 - val_mae: 0.0249
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
596/600 [============================>.] - ETA: 0s - loss: 0.0332 - mse: 0.0332 - msle: 0.0032 - mae: 0.0692Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 5s 7ms/step - loss: 0.0330 - mse: 0.0330 - msle: 0.0032 - mae: 0.0690 - val_loss: 0.0106 - val_mse: 0.0106 - val_msle: 0.0032 - val_mae: 0.0109
Epoch 2/5
600/600 [==============================] - 4s 7ms/step - loss: 0.0133 - mse: 0.0133 - msle: 0.0029 - mae: 0.0343 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0021 - val_mae: 0.0107
Epoch 3/5
600/600 [==============================] - 4s 7ms/step - loss: 0.0107 - mse: 0.0107 - msle: 0.0021 - mae: 0.0307 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0018 - val_mae: 0.0084
Epoch 4/5
600/600 [==============================] - 4s 7ms/step - loss: 0.0096 - mse: 0.0096 - msle: 0.0019 - mae: 0.0271 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0019 - val_mae: 0.0229
Epoch 5/5
600/600 [==============================] - 4s 7ms/step - loss: 0.0088 - mse: 0.0088 - msle: 0.0018 - mae: 0.0255 - val_loss: 0.0066 - val_mse: 0.0066 - val_msle: 0.0016 - val_mae: 0.0165
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
592/600 [============================>.] - ETA: 0s - loss: 0.0336 - mse: 0.0336 - msle: 0.0033 - mae: 0.0706Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 5s 7ms/step - loss: 0.0333 - mse: 0.0333 - msle: 0.0033 - mae: 0.0701 - val_loss: 0.0108 - val_mse: 0.0108 - val_msle: 0.0030 - val_mae: 0.0194
Epoch 2/5
600/600 [==============================] - 4s 7ms/step - loss: 0.0131 - mse: 0.0131 - msle: 0.0028 - mae: 0.0346 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0018 - val_mae: 0.0098
Epoch 3/5
600/600 [==============================] - 4s 7ms/step - loss: 0.0099 - mse: 0.0099 - msle: 0.0019 - mae: 0.0310 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0019 - val_mae: 0.0217
Epoch 4/5
600/600 [==============================] - 4s 7ms/step - loss: 0.0089 - mse: 0.0089 - msle: 0.0018 - mae: 0.0275 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0017 - val_mae: 0.0097
Epoch 5/5
600/600 [==============================] - 4s 7ms/step - loss: 0.0082 - mse: 0.0082 - msle: 0.0016 - mae: 0.0259 - val_loss: 0.0062 - val_mse: 0.0062 - val_msle: 0.0014 - val_mae: 0.0119
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
599/600 [============================>.] - ETA: 0s - loss: 0.0325 - mse: 0.0325 - msle: 0.0032 - mae: 0.0693
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 5s 7ms/step - loss: 0.0324 - mse: 0.0324 - msle: 0.0032 - mae: 0.0692 - val_loss: 0.0099 - val_mse: 0.0099 - val_msle: 0.0028 - val_mae: 0.0114
Epoch 2/5
600/600 [==============================] - 4s 7ms/step - loss: 0.0119 - mse: 0.0119 - msle: 0.0024 - mae: 0.0333 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0021 - val_mae: 0.0181
Epoch 3/5
600/600 [==============================] - 4s 7ms/step - loss: 0.0088 - mse: 0.0088 - msle: 0.0017 - mae: 0.0282 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0020 - val_mae: 0.0167
Epoch 4/5
600/600 [==============================] - 4s 7ms/step - loss: 0.0082 - mse: 0.0082 - msle: 0.0016 - mae: 0.0267 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0021 - val_mae: 0.0151
Epoch 5/5
600/600 [==============================] - 4s 7ms/step - loss: 0.0076 - mse: 0.0076 - msle: 0.0015 - mae: 0.0246 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0019 - val_mae: 0.0082
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
600/600 [==============================] - ETA: 0s - loss: 0.0315 - mse: 0.0315 - msle: 0.0031 - mae: 0.0685
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 5s 7ms/step - loss: 0.0315 - mse: 0.0315 - msle: 0.0031 - mae: 0.0685 - val_loss: 0.0114 - val_mse: 0.0114 - val_msle: 0.0034 - val_mae: 0.0246
Epoch 2/5
600/600 [==============================] - 4s 7ms/step - loss: 0.0129 - mse: 0.0129 - msle: 0.0030 - mae: 0.0324 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0024 - val_mae: 0.0126
Epoch 3/5
600/600 [==============================] - 4s 7ms/step - loss: 0.0102 - mse: 0.0102 - msle: 0.0020 - mae: 0.0305 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0022 - val_mae: 0.0171
Epoch 4/5
600/600 [==============================] - 4s 7ms/step - loss: 0.0082 - mse: 0.0082 - msle: 0.0016 - mae: 0.0277 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0018 - val_mae: 0.0139
Epoch 5/5
600/600 [==============================] - 4s 7ms/step - loss: 0.0076 - mse: 0.0076 - msle: 0.0015 - mae: 0.0261 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0021 - val_mae: 0.0135
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
597/600 [============================>.] - ETA: 0s - loss: 0.0329 - mse: 0.0329 - msle: 0.0030 - mae: 0.0697
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 5s 8ms/step - loss: 0.0328 - mse: 0.0328 - msle: 0.0030 - mae: 0.0695 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0021 - val_mae: 0.0209
Epoch 2/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0117 - mse: 0.0117 - msle: 0.0023 - mae: 0.0319 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0017 - val_mae: 0.0198
Epoch 3/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0094 - mse: 0.0094 - msle: 0.0018 - mae: 0.0278 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0018 - val_mae: 0.0220
Epoch 4/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0088 - mse: 0.0088 - msle: 0.0017 - mae: 0.0256 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0016 - val_mae: 0.0268
Epoch 5/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0084 - mse: 0.0084 - msle: 0.0016 - mae: 0.0251 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0019 - val_mae: 0.0192
Epoch 6/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0079 - mse: 0.0079 - msle: 0.0015 - mae: 0.0239 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0016 - val_mae: 0.0217
Epoch 7/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0014 - mae: 0.0230 - val_loss: 0.0063 - val_mse: 0.0063 - val_msle: 0.0014 - val_mae: 0.0122
Epoch 8/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0074 - mse: 0.0074 - msle: 0.0014 - mae: 0.0221 - val_loss: 0.0065 - val_mse: 0.0065 - val_msle: 0.0015 - val_mae: 0.0104
Epoch 9/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0073 - mse: 0.0073 - msle: 0.0014 - mae: 0.0219 - val_loss: 0.0062 - val_mse: 0.0062 - val_msle: 0.0015 - val_mae: 0.0149
Epoch 10/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0069 - mse: 0.0069 - msle: 0.0013 - mae: 0.0210 - val_loss: 0.0062 - val_mse: 0.0062 - val_msle: 0.0013 - val_mae: 0.0178
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
600/600 [==============================] - ETA: 0s - loss: 0.0310 - mse: 0.0310 - msle: 0.0029 - mae: 0.0690
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 5s 7ms/step - loss: 0.0310 - mse: 0.0310 - msle: 0.0029 - mae: 0.0689 - val_loss: 0.0101 - val_mse: 0.0101 - val_msle: 0.0027 - val_mae: 0.0284
Epoch 2/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0117 - mse: 0.0117 - msle: 0.0024 - mae: 0.0322 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0023 - val_mae: 0.0143
Epoch 3/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0097 - mse: 0.0097 - msle: 0.0019 - mae: 0.0291 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0019 - val_mae: 0.0211
Epoch 4/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0081 - mse: 0.0081 - msle: 0.0016 - mae: 0.0265 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0017 - val_mae: 0.0133
Epoch 5/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0076 - mse: 0.0076 - msle: 0.0015 - mae: 0.0250 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0017 - val_mae: 0.0126
Epoch 6/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0071 - mse: 0.0071 - msle: 0.0013 - mae: 0.0235 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0017 - val_mae: 0.0191
Epoch 7/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0068 - mse: 0.0068 - msle: 0.0013 - mae: 0.0234 - val_loss: 0.0061 - val_mse: 0.0061 - val_msle: 0.0014 - val_mae: 0.0124
Epoch 8/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0065 - mse: 0.0065 - msle: 0.0012 - mae: 0.0216 - val_loss: 0.0066 - val_mse: 0.0066 - val_msle: 0.0016 - val_mae: 0.0114
Epoch 9/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0061 - mse: 0.0061 - msle: 0.0011 - mae: 0.0215 - val_loss: 0.0061 - val_mse: 0.0061 - val_msle: 0.0015 - val_mae: 0.0139
Epoch 10/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0060 - mse: 0.0060 - msle: 0.0011 - mae: 0.0206 - val_loss: 0.0067 - val_mse: 0.0067 - val_msle: 0.0016 - val_mae: 0.0157
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
593/600 [============================>.] - ETA: 0s - loss: 0.0314 - mse: 0.0314 - msle: 0.0030 - mae: 0.0678
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 5s 7ms/step - loss: 0.0312 - mse: 0.0312 - msle: 0.0030 - mae: 0.0674 - val_loss: 0.0096 - val_mse: 0.0096 - val_msle: 0.0028 - val_mae: 0.0113
Epoch 2/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0112 - mse: 0.0112 - msle: 0.0023 - mae: 0.0319 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0021 - val_mae: 0.0141
Epoch 3/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0086 - mse: 0.0086 - msle: 0.0017 - mae: 0.0291 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0021 - val_mae: 0.0312
Epoch 4/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0077 - mse: 0.0077 - msle: 0.0015 - mae: 0.0259 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0020 - val_mae: 0.0303
Epoch 5/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0074 - mse: 0.0074 - msle: 0.0014 - mae: 0.0245 - val_loss: 0.0096 - val_mse: 0.0096 - val_msle: 0.0024 - val_mae: 0.0369
Epoch 6/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0071 - mse: 0.0071 - msle: 0.0013 - mae: 0.0245 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0019 - val_mae: 0.0271
Epoch 7/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0066 - mse: 0.0066 - msle: 0.0012 - mae: 0.0222 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0021 - val_mae: 0.0293
Epoch 8/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0063 - mse: 0.0063 - msle: 0.0012 - mae: 0.0218 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0018 - val_mae: 0.0238
Epoch 9/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0059 - mse: 0.0059 - msle: 0.0011 - mae: 0.0208 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0024 - val_mae: 0.0219
Epoch 10/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0057 - mse: 0.0057 - msle: 0.0010 - mae: 0.0202 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0019 - val_mae: 0.0296
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
598/600 [============================>.] - ETA: 0s - loss: 0.0321 - mse: 0.0321 - msle: 0.0030 - mae: 0.0706
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 5s 7ms/step - loss: 0.0321 - mse: 0.0321 - msle: 0.0030 - mae: 0.0705 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0025 - val_mae: 0.0122
Epoch 2/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0106 - mse: 0.0106 - msle: 0.0021 - mae: 0.0292 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0023 - val_mae: 0.0121
Epoch 3/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0082 - mse: 0.0082 - msle: 0.0016 - mae: 0.0263 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0020 - val_mae: 0.0119
Epoch 4/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0076 - mse: 0.0076 - msle: 0.0015 - mae: 0.0249 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0019 - val_mae: 0.0138
Epoch 5/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0072 - mse: 0.0072 - msle: 0.0014 - mae: 0.0237 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0020 - val_mae: 0.0075
Epoch 6/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0069 - mse: 0.0069 - msle: 0.0013 - mae: 0.0228 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0017 - val_mae: 0.0100
Epoch 7/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0065 - mse: 0.0065 - msle: 0.0012 - mae: 0.0221 - val_loss: 0.0066 - val_mse: 0.0066 - val_msle: 0.0015 - val_mae: 0.0159
Epoch 8/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0063 - mse: 0.0063 - msle: 0.0011 - mae: 0.0218 - val_loss: 0.0063 - val_mse: 0.0063 - val_msle: 0.0014 - val_mae: 0.0162
Epoch 9/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0059 - mse: 0.0059 - msle: 0.0011 - mae: 0.0206 - val_loss: 0.0058 - val_mse: 0.0058 - val_msle: 0.0014 - val_mae: 0.0128
Epoch 10/10
600/600 [==============================] - 4s 7ms/step - loss: 0.0058 - mse: 0.0058 - msle: 0.0011 - mae: 0.0198 - val_loss: 0.0057 - val_mse: 0.0057 - val_msle: 0.0013 - val_mae: 0.0157
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
600/600 [==============================] - ETA: 0s - loss: 0.0326 - mse: 0.0326 - msle: 0.0032 - mae: 0.0687
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 7s 11ms/step - loss: 0.0326 - mse: 0.0326 - msle: 0.0032 - mae: 0.0687 - val_loss: 0.0106 - val_mse: 0.0106 - val_msle: 0.0029 - val_mae: 0.0268
Epoch 2/2
600/600 [==============================] - 6s 10ms/step - loss: 0.0119 - mse: 0.0119 - msle: 0.0025 - mae: 0.0308 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0016 - val_mae: 0.0177
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
599/600 [============================>.] - ETA: 0s - loss: 0.0308 - mse: 0.0308 - msle: 0.0030 - mae: 0.0666
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 7s 10ms/step - loss: 0.0307 - mse: 0.0307 - msle: 0.0030 - mae: 0.0665 - val_loss: 0.0104 - val_mse: 0.0104 - val_msle: 0.0030 - val_mae: 0.0175
Epoch 2/2
600/600 [==============================] - 6s 10ms/step - loss: 0.0121 - mse: 0.0121 - msle: 0.0027 - mae: 0.0328 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0017 - val_mae: 0.0143
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
598/600 [============================>.] - ETA: 0s - loss: 0.0312 - mse: 0.0312 - msle: 0.0033 - mae: 0.0670
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 7s 10ms/step - loss: 0.0311 - mse: 0.0311 - msle: 0.0033 - mae: 0.0669 - val_loss: 0.0098 - val_mse: 0.0098 - val_msle: 0.0029 - val_mae: 0.0140
Epoch 2/2
600/600 [==============================] - 6s 10ms/step - loss: 0.0117 - mse: 0.0117 - msle: 0.0026 - mae: 0.0306 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0016 - val_mae: 0.0315
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
595/600 [============================>.] - ETA: 0s - loss: 0.0303 - mse: 0.0303 - msle: 0.0032 - mae: 0.0669
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 7s 10ms/step - loss: 0.0302 - mse: 0.0302 - msle: 0.0032 - mae: 0.0666 - val_loss: 0.0103 - val_mse: 0.0103 - val_msle: 0.0030 - val_mae: 0.0297
Epoch 2/2
600/600 [==============================] - 6s 10ms/step - loss: 0.0110 - mse: 0.0110 - msle: 0.0025 - mae: 0.0300 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0016 - val_mae: 0.0278
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
596/600 [============================>.] - ETA: 0s - loss: 0.0300 - mse: 0.0300 - msle: 0.0031 - mae: 0.0653
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 7s 10ms/step - loss: 0.0299 - mse: 0.0299 - msle: 0.0031 - mae: 0.0651 - val_loss: 0.0105 - val_mse: 0.0105 - val_msle: 0.0031 - val_mae: 0.0167
Epoch 2/5
600/600 [==============================] - 6s 10ms/step - loss: 0.0115 - mse: 0.0115 - msle: 0.0026 - mae: 0.0291 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0015 - val_mae: 0.0301
Epoch 3/5
600/600 [==============================] - 6s 10ms/step - loss: 0.0093 - mse: 0.0093 - msle: 0.0018 - mae: 0.0282 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0016 - val_mae: 0.0301
Epoch 4/5
600/600 [==============================] - 6s 10ms/step - loss: 0.0083 - mse: 0.0083 - msle: 0.0016 - mae: 0.0269 - val_loss: 0.0066 - val_mse: 0.0066 - val_msle: 0.0015 - val_mae: 0.0145
Epoch 5/5
600/600 [==============================] - 6s 10ms/step - loss: 0.0076 - mse: 0.0076 - msle: 0.0014 - mae: 0.0260 - val_loss: 0.0062 - val_mse: 0.0062 - val_msle: 0.0014 - val_mae: 0.0130
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
595/600 [============================>.] - ETA: 0s - loss: 0.0302 - mse: 0.0302 - msle: 0.0031 - mae: 0.0656
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 7s 10ms/step - loss: 0.0301 - mse: 0.0301 - msle: 0.0031 - mae: 0.0653 - val_loss: 0.0108 - val_mse: 0.0108 - val_msle: 0.0033 - val_mae: 0.0140
Epoch 2/5
600/600 [==============================] - 6s 10ms/step - loss: 0.0119 - mse: 0.0119 - msle: 0.0028 - mae: 0.0295 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0017 - val_mae: 0.0153
Epoch 3/5
600/600 [==============================] - 6s 10ms/step - loss: 0.0093 - mse: 0.0093 - msle: 0.0018 - mae: 0.0281 - val_loss: 0.0062 - val_mse: 0.0062 - val_msle: 0.0014 - val_mae: 0.0212
Epoch 4/5
600/600 [==============================] - 6s 10ms/step - loss: 0.0078 - mse: 0.0078 - msle: 0.0015 - mae: 0.0269 - val_loss: 0.0063 - val_mse: 0.0063 - val_msle: 0.0013 - val_mae: 0.0265
Epoch 5/5
600/600 [==============================] - 6s 10ms/step - loss: 0.0072 - mse: 0.0072 - msle: 0.0014 - mae: 0.0243 - val_loss: 0.0062 - val_mse: 0.0062 - val_msle: 0.0015 - val_mae: 0.0151
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
596/600 [============================>.] - ETA: 0s - loss: 0.0305 - mse: 0.0305 - msle: 0.0032 - mae: 0.0647
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 7s 10ms/step - loss: 0.0304 - mse: 0.0304 - msle: 0.0032 - mae: 0.0645 - val_loss: 0.0109 - val_mse: 0.0109 - val_msle: 0.0033 - val_mae: 0.0106
Epoch 2/5
600/600 [==============================] - 6s 10ms/step - loss: 0.0121 - mse: 0.0121 - msle: 0.0028 - mae: 0.0313 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0017 - val_mae: 0.0087
Epoch 3/5
600/600 [==============================] - 6s 10ms/step - loss: 0.0089 - mse: 0.0089 - msle: 0.0017 - mae: 0.0291 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0020 - val_mae: 0.0189
Epoch 4/5
600/600 [==============================] - 6s 10ms/step - loss: 0.0074 - mse: 0.0074 - msle: 0.0014 - mae: 0.0265 - val_loss: 0.0090 - val_mse: 0.0090 - val_msle: 0.0024 - val_mae: 0.0269
Epoch 5/5
600/600 [==============================] - 6s 10ms/step - loss: 0.0070 - mse: 0.0070 - msle: 0.0013 - mae: 0.0255 - val_loss: 0.0065 - val_mse: 0.0065 - val_msle: 0.0016 - val_mae: 0.0220
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
598/600 [============================>.] - ETA: 0s - loss: 0.0305 - mse: 0.0305 - msle: 0.0031 - mae: 0.0661
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 7s 11ms/step - loss: 0.0304 - mse: 0.0304 - msle: 0.0031 - mae: 0.0660 - val_loss: 0.0098 - val_mse: 0.0098 - val_msle: 0.0029 - val_mae: 0.0196
Epoch 2/5
600/600 [==============================] - 6s 10ms/step - loss: 0.0107 - mse: 0.0107 - msle: 0.0023 - mae: 0.0292 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0019 - val_mae: 0.0217
Epoch 3/5
600/600 [==============================] - 6s 10ms/step - loss: 0.0083 - mse: 0.0083 - msle: 0.0016 - mae: 0.0269 - val_loss: 0.0098 - val_mse: 0.0098 - val_msle: 0.0027 - val_mae: 0.0234
Epoch 4/5
600/600 [==============================] - 6s 10ms/step - loss: 0.0073 - mse: 0.0073 - msle: 0.0014 - mae: 0.0256 - val_loss: 0.0092 - val_mse: 0.0092 - val_msle: 0.0024 - val_mae: 0.0216
Epoch 5/5
600/600 [==============================] - 6s 10ms/step - loss: 0.0066 - mse: 0.0066 - msle: 0.0012 - mae: 0.0246 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0020 - val_mae: 0.0144
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
599/600 [============================>.] - ETA: 0s - loss: 0.0312 - mse: 0.0312 - msle: 0.0031 - mae: 0.0668
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 7s 10ms/step - loss: 0.0312 - mse: 0.0312 - msle: 0.0031 - mae: 0.0667 - val_loss: 0.0107 - val_mse: 0.0107 - val_msle: 0.0030 - val_mae: 0.0209
Epoch 2/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0122 - mse: 0.0122 - msle: 0.0027 - mae: 0.0316 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0019 - val_mae: 0.0239
Epoch 3/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0096 - mse: 0.0096 - msle: 0.0019 - mae: 0.0295 - val_loss: 0.0097 - val_mse: 0.0097 - val_msle: 0.0025 - val_mae: 0.0236
Epoch 4/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0085 - mse: 0.0085 - msle: 0.0017 - mae: 0.0272 - val_loss: 0.0095 - val_mse: 0.0095 - val_msle: 0.0024 - val_mae: 0.0232
Epoch 5/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0014 - mae: 0.0260 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0016 - val_mae: 0.0257
Epoch 6/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0070 - mse: 0.0070 - msle: 0.0013 - mae: 0.0248 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0019 - val_mae: 0.0229
Epoch 7/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0066 - mse: 0.0066 - msle: 0.0012 - mae: 0.0240 - val_loss: 0.0060 - val_mse: 0.0060 - val_msle: 0.0014 - val_mae: 0.0130
Epoch 8/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0063 - mse: 0.0063 - msle: 0.0012 - mae: 0.0227 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0017 - val_mae: 0.0211
Epoch 9/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0058 - mse: 0.0058 - msle: 0.0011 - mae: 0.0216 - val_loss: 0.0069 - val_mse: 0.0069 - val_msle: 0.0017 - val_mae: 0.0205
Epoch 10/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0056 - mse: 0.0056 - msle: 0.0010 - mae: 0.0215 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0019 - val_mae: 0.0195
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
596/600 [============================>.] - ETA: 0s - loss: 0.0304 - mse: 0.0304 - msle: 0.0032 - mae: 0.0660
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 7s 10ms/step - loss: 0.0302 - mse: 0.0302 - msle: 0.0032 - mae: 0.0658 - val_loss: 0.0113 - val_mse: 0.0113 - val_msle: 0.0035 - val_mae: 0.0110
Epoch 2/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0119 - mse: 0.0119 - msle: 0.0027 - mae: 0.0307 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0022 - val_mae: 0.0181
Epoch 3/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0090 - mse: 0.0090 - msle: 0.0018 - mae: 0.0281 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0018 - val_mae: 0.0114
Epoch 4/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0014 - mae: 0.0277 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0019 - val_mae: 0.0094
Epoch 5/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0069 - mse: 0.0069 - msle: 0.0013 - mae: 0.0262 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0018 - val_mae: 0.0225
Epoch 6/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0061 - mse: 0.0061 - msle: 0.0011 - mae: 0.0249 - val_loss: 0.0065 - val_mse: 0.0065 - val_msle: 0.0015 - val_mae: 0.0208
Epoch 7/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0058 - mse: 0.0058 - msle: 0.0011 - mae: 0.0239 - val_loss: 0.0067 - val_mse: 0.0067 - val_msle: 0.0015 - val_mae: 0.0275
Epoch 8/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0054 - mse: 0.0054 - msle: 9.8740e-04 - mae: 0.0232 - val_loss: 0.0056 - val_mse: 0.0056 - val_msle: 0.0012 - val_mae: 0.0169
Epoch 9/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0050 - mse: 0.0050 - msle: 8.9373e-04 - mae: 0.0223 - val_loss: 0.0059 - val_mse: 0.0059 - val_msle: 0.0014 - val_mae: 0.0102
Epoch 10/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0047 - mse: 0.0047 - msle: 8.3685e-04 - mae: 0.0210 - val_loss: 0.0052 - val_mse: 0.0052 - val_msle: 0.0011 - val_mae: 0.0236
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
599/600 [============================>.] - ETA: 0s - loss: 0.0305 - mse: 0.0305 - msle: 0.0031 - mae: 0.0656
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 7s 10ms/step - loss: 0.0305 - mse: 0.0305 - msle: 0.0031 - mae: 0.0655 - val_loss: 0.0112 - val_mse: 0.0112 - val_msle: 0.0033 - val_mae: 0.0174
Epoch 2/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0120 - mse: 0.0120 - msle: 0.0027 - mae: 0.0308 - val_loss: 0.0096 - val_mse: 0.0096 - val_msle: 0.0017 - val_mae: 0.0525
Epoch 3/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0092 - mse: 0.0092 - msle: 0.0018 - mae: 0.0299 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0015 - val_mae: 0.0321
Epoch 4/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0078 - mse: 0.0078 - msle: 0.0015 - mae: 0.0274 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0015 - val_mae: 0.0303
Epoch 5/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0069 - mse: 0.0069 - msle: 0.0013 - mae: 0.0262 - val_loss: 0.0063 - val_mse: 0.0063 - val_msle: 0.0012 - val_mae: 0.0262
Epoch 6/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0066 - mse: 0.0066 - msle: 0.0012 - mae: 0.0252 - val_loss: 0.0064 - val_mse: 0.0064 - val_msle: 0.0012 - val_mae: 0.0218
Epoch 7/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0061 - mse: 0.0061 - msle: 0.0011 - mae: 0.0241 - val_loss: 0.0052 - val_mse: 0.0052 - val_msle: 9.7644e-04 - val_mae: 0.0104
Epoch 8/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0059 - mse: 0.0059 - msle: 0.0011 - mae: 0.0243 - val_loss: 0.0061 - val_mse: 0.0061 - val_msle: 0.0012 - val_mae: 0.0218
Epoch 9/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0052 - mse: 0.0052 - msle: 9.2995e-04 - mae: 0.0221 - val_loss: 0.0055 - val_mse: 0.0055 - val_msle: 9.5360e-04 - val_mae: 0.0272
Epoch 10/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0050 - mse: 0.0050 - msle: 8.7557e-04 - mae: 0.0220 - val_loss: 0.0063 - val_mse: 0.0063 - val_msle: 0.0012 - val_mae: 0.0216
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
600/600 [==============================] - ETA: 0s - loss: 0.0310 - mse: 0.0310 - msle: 0.0031 - mae: 0.0653
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 7s 10ms/step - loss: 0.0310 - mse: 0.0310 - msle: 0.0031 - mae: 0.0652 - val_loss: 0.0117 - val_mse: 0.0117 - val_msle: 0.0034 - val_mae: 0.0205
Epoch 2/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0131 - mse: 0.0131 - msle: 0.0031 - mae: 0.0330 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0021 - val_mae: 0.0112
Epoch 3/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0100 - mse: 0.0100 - msle: 0.0020 - mae: 0.0304 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0020 - val_mae: 0.0207
Epoch 4/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0080 - mse: 0.0080 - msle: 0.0015 - mae: 0.0281 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0020 - val_mae: 0.0137
Epoch 5/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0072 - mse: 0.0072 - msle: 0.0014 - mae: 0.0256 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0020 - val_mae: 0.0096
Epoch 6/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0066 - mse: 0.0066 - msle: 0.0012 - mae: 0.0245 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0019 - val_mae: 0.0113
Epoch 7/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0063 - mse: 0.0063 - msle: 0.0012 - mae: 0.0235 - val_loss: 0.0064 - val_mse: 0.0064 - val_msle: 0.0015 - val_mae: 0.0108
Epoch 8/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0058 - mse: 0.0058 - msle: 0.0011 - mae: 0.0231 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0019 - val_mae: 0.0151
Epoch 9/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0054 - mse: 0.0054 - msle: 9.8122e-04 - mae: 0.0220 - val_loss: 0.0063 - val_mse: 0.0063 - val_msle: 0.0015 - val_mae: 0.0105
Epoch 10/10
600/600 [==============================] - 6s 10ms/step - loss: 0.0051 - mse: 0.0051 - msle: 9.1837e-04 - mae: 0.0212 - val_loss: 0.0055 - val_mse: 0.0055 - val_msle: 0.0012 - val_mae: 0.0096
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
600/600 [==============================] - ETA: 0s - loss: 0.0285 - mse: 0.0285 - msle: 0.0031 - mae: 0.0632
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 11s 16ms/step - loss: 0.0285 - mse: 0.0285 - msle: 0.0031 - mae: 0.0632 - val_loss: 0.0111 - val_mse: 0.0111 - val_msle: 0.0034 - val_mae: 0.0244
Epoch 2/2
600/600 [==============================] - 9s 15ms/step - loss: 0.0123 - mse: 0.0123 - msle: 0.0030 - mae: 0.0298 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0019 - val_mae: 0.0170
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
599/600 [============================>.] - ETA: 0s - loss: 0.0286 - mse: 0.0286 - msle: 0.0030 - mae: 0.0632
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 10s 16ms/step - loss: 0.0286 - mse: 0.0286 - msle: 0.0030 - mae: 0.0631 - val_loss: 0.0109 - val_mse: 0.0109 - val_msle: 0.0034 - val_mae: 0.0127
Epoch 2/2
600/600 [==============================] - 9s 15ms/step - loss: 0.0113 - mse: 0.0113 - msle: 0.0026 - mae: 0.0289 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0017 - val_mae: 0.0308
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
599/600 [============================>.] - ETA: 0s - loss: 0.0297 - mse: 0.0297 - msle: 0.0032 - mae: 0.0642
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 10s 16ms/step - loss: 0.0296 - mse: 0.0296 - msle: 0.0032 - mae: 0.0641 - val_loss: 0.0105 - val_mse: 0.0105 - val_msle: 0.0033 - val_mae: 0.0167
Epoch 2/2
600/600 [==============================] - 9s 15ms/step - loss: 0.0121 - mse: 0.0121 - msle: 0.0030 - mae: 0.0296 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0017 - val_mae: 0.0219
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
598/600 [============================>.] - ETA: 0s - loss: 0.0306 - mse: 0.0306 - msle: 0.0032 - mae: 0.0653
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 10s 16ms/step - loss: 0.0305 - mse: 0.0305 - msle: 0.0032 - mae: 0.0652 - val_loss: 0.0114 - val_mse: 0.0114 - val_msle: 0.0035 - val_mae: 0.0211
Epoch 2/2
600/600 [==============================] - 9s 15ms/step - loss: 0.0124 - mse: 0.0124 - msle: 0.0030 - mae: 0.0298 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0015 - val_mae: 0.0197
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
599/600 [============================>.] - ETA: 0s - loss: 0.0311 - mse: 0.0311 - msle: 0.0032 - mae: 0.0657
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 10s 16ms/step - loss: 0.0311 - mse: 0.0311 - msle: 0.0032 - mae: 0.0656 - val_loss: 0.0106 - val_mse: 0.0106 - val_msle: 0.0033 - val_mae: 0.0275
Epoch 2/5
600/600 [==============================] - 9s 15ms/step - loss: 0.0122 - mse: 0.0122 - msle: 0.0030 - mae: 0.0292 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0019 - val_mae: 0.0219
Epoch 3/5
600/600 [==============================] - 9s 15ms/step - loss: 0.0101 - mse: 0.0101 - msle: 0.0020 - mae: 0.0275 - val_loss: 0.0069 - val_mse: 0.0069 - val_msle: 0.0015 - val_mae: 0.0195
Epoch 4/5
600/600 [==============================] - 9s 15ms/step - loss: 0.0084 - mse: 0.0084 - msle: 0.0016 - mae: 0.0265 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0017 - val_mae: 0.0299
Epoch 5/5
600/600 [==============================] - 9s 15ms/step - loss: 0.0072 - mse: 0.0072 - msle: 0.0014 - mae: 0.0255 - val_loss: 0.0068 - val_mse: 0.0068 - val_msle: 0.0016 - val_mae: 0.0179
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
599/600 [============================>.] - ETA: 0s - loss: 0.0307 - mse: 0.0307 - msle: 0.0031 - mae: 0.0688
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 10s 16ms/step - loss: 0.0307 - mse: 0.0307 - msle: 0.0031 - mae: 0.0687 - val_loss: 0.0110 - val_mse: 0.0110 - val_msle: 0.0034 - val_mae: 0.0128
Epoch 2/5
600/600 [==============================] - 9s 15ms/step - loss: 0.0121 - mse: 0.0121 - msle: 0.0029 - mae: 0.0299 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0019 - val_mae: 0.0310
Epoch 3/5
600/600 [==============================] - 9s 15ms/step - loss: 0.0095 - mse: 0.0095 - msle: 0.0020 - mae: 0.0282 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0016 - val_mae: 0.0130
Epoch 4/5
600/600 [==============================] - 9s 16ms/step - loss: 0.0079 - mse: 0.0079 - msle: 0.0015 - mae: 0.0282 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0016 - val_mae: 0.0279
Epoch 5/5
600/600 [==============================] - 9s 15ms/step - loss: 0.0070 - mse: 0.0070 - msle: 0.0013 - mae: 0.0277 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0017 - val_mae: 0.0122
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
598/600 [============================>.] - ETA: 0s - loss: 0.0293 - mse: 0.0293 - msle: 0.0031 - mae: 0.0645
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 10s 16ms/step - loss: 0.0292 - mse: 0.0292 - msle: 0.0031 - mae: 0.0643 - val_loss: 0.0101 - val_mse: 0.0101 - val_msle: 0.0030 - val_mae: 0.0210
Epoch 2/5
600/600 [==============================] - 9s 15ms/step - loss: 0.0109 - mse: 0.0109 - msle: 0.0024 - mae: 0.0297 - val_loss: 0.0068 - val_mse: 0.0068 - val_msle: 0.0015 - val_mae: 0.0146
Epoch 3/5
600/600 [==============================] - 9s 15ms/step - loss: 0.0078 - mse: 0.0078 - msle: 0.0015 - mae: 0.0275 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0016 - val_mae: 0.0184
Epoch 4/5
600/600 [==============================] - 9s 15ms/step - loss: 0.0065 - mse: 0.0065 - msle: 0.0012 - mae: 0.0257 - val_loss: 0.0062 - val_mse: 0.0062 - val_msle: 0.0013 - val_mae: 0.0087
Epoch 5/5
600/600 [==============================] - 9s 15ms/step - loss: 0.0059 - mse: 0.0059 - msle: 0.0011 - mae: 0.0248 - val_loss: 0.0066 - val_mse: 0.0066 - val_msle: 0.0014 - val_mae: 0.0131
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
598/600 [============================>.] - ETA: 0s - loss: 0.0311 - mse: 0.0311 - msle: 0.0032 - mae: 0.0658
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 10s 16ms/step - loss: 0.0310 - mse: 0.0310 - msle: 0.0032 - mae: 0.0657 - val_loss: 0.0109 - val_mse: 0.0109 - val_msle: 0.0033 - val_mae: 0.0255
Epoch 2/5
600/600 [==============================] - 9s 15ms/step - loss: 0.0128 - mse: 0.0128 - msle: 0.0031 - mae: 0.0311 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0020 - val_mae: 0.0151
Epoch 3/5
600/600 [==============================] - 9s 15ms/step - loss: 0.0096 - mse: 0.0096 - msle: 0.0019 - mae: 0.0311 - val_loss: 0.0069 - val_mse: 0.0069 - val_msle: 0.0016 - val_mae: 0.0126
Epoch 4/5
600/600 [==============================] - 9s 15ms/step - loss: 0.0080 - mse: 0.0080 - msle: 0.0015 - mae: 0.0286 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0020 - val_mae: 0.0136
Epoch 5/5
600/600 [==============================] - 9s 15ms/step - loss: 0.0070 - mse: 0.0070 - msle: 0.0013 - mae: 0.0271 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0016 - val_mae: 0.0233
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
600/600 [==============================] - ETA: 0s - loss: 0.0298 - mse: 0.0298 - msle: 0.0031 - mae: 0.0657
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 10s 16ms/step - loss: 0.0297 - mse: 0.0297 - msle: 0.0031 - mae: 0.0656 - val_loss: 0.0105 - val_mse: 0.0105 - val_msle: 0.0034 - val_mae: 0.0140
Epoch 2/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0121 - mse: 0.0121 - msle: 0.0030 - mae: 0.0296 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0019 - val_mae: 0.0137
Epoch 3/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0099 - mse: 0.0099 - msle: 0.0020 - mae: 0.0285 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0019 - val_mae: 0.0298
Epoch 4/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0083 - mse: 0.0083 - msle: 0.0016 - mae: 0.0274 - val_loss: 0.0106 - val_mse: 0.0106 - val_msle: 0.0028 - val_mae: 0.0240
Epoch 5/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0071 - mse: 0.0071 - msle: 0.0013 - mae: 0.0263 - val_loss: 0.0103 - val_mse: 0.0103 - val_msle: 0.0028 - val_mae: 0.0134
Epoch 6/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0063 - mse: 0.0063 - msle: 0.0011 - mae: 0.0256 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0024 - val_mae: 0.0209
Epoch 7/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0056 - mse: 0.0056 - msle: 9.7000e-04 - mae: 0.0244 - val_loss: 0.0064 - val_mse: 0.0064 - val_msle: 0.0017 - val_mae: 0.0109
Epoch 8/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0051 - mse: 0.0051 - msle: 8.3866e-04 - mae: 0.0232 - val_loss: 0.0069 - val_mse: 0.0069 - val_msle: 0.0018 - val_mae: 0.0180
Epoch 9/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0046 - mse: 0.0046 - msle: 7.5690e-04 - mae: 0.0220 - val_loss: 0.0059 - val_mse: 0.0059 - val_msle: 0.0016 - val_mae: 0.0093
Epoch 10/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0045 - mse: 0.0045 - msle: 7.2174e-04 - mae: 0.0218 - val_loss: 0.0067 - val_mse: 0.0067 - val_msle: 0.0018 - val_mae: 0.0089
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
597/600 [============================>.] - ETA: 0s - loss: 0.0303 - mse: 0.0303 - msle: 0.0031 - mae: 0.0653
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 10s 16ms/step - loss: 0.0302 - mse: 0.0302 - msle: 0.0031 - mae: 0.0651 - val_loss: 0.0107 - val_mse: 0.0107 - val_msle: 0.0034 - val_mae: 0.0106
Epoch 2/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0119 - mse: 0.0119 - msle: 0.0028 - mae: 0.0297 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0017 - val_mae: 0.0367
Epoch 3/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0090 - mse: 0.0090 - msle: 0.0017 - mae: 0.0288 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0015 - val_mae: 0.0340
Epoch 4/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0074 - mse: 0.0074 - msle: 0.0014 - mae: 0.0272 - val_loss: 0.0060 - val_mse: 0.0060 - val_msle: 0.0013 - val_mae: 0.0185
Epoch 5/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0064 - mse: 0.0064 - msle: 0.0012 - mae: 0.0258 - val_loss: 0.0062 - val_mse: 0.0062 - val_msle: 0.0013 - val_mae: 0.0261
Epoch 6/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0059 - mse: 0.0059 - msle: 0.0011 - mae: 0.0248 - val_loss: 0.0057 - val_mse: 0.0057 - val_msle: 0.0013 - val_mae: 0.0174
Epoch 7/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0052 - mse: 0.0052 - msle: 8.8061e-04 - mae: 0.0237 - val_loss: 0.0055 - val_mse: 0.0055 - val_msle: 0.0012 - val_mae: 0.0234
Epoch 8/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0045 - mse: 0.0045 - msle: 7.3885e-04 - mae: 0.0220 - val_loss: 0.0050 - val_mse: 0.0050 - val_msle: 0.0011 - val_mae: 0.0197
Epoch 9/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0043 - mse: 0.0043 - msle: 6.9703e-04 - mae: 0.0214 - val_loss: 0.0047 - val_mse: 0.0047 - val_msle: 0.0010 - val_mae: 0.0113
Epoch 10/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0040 - mse: 0.0040 - msle: 6.3569e-04 - mae: 0.0206 - val_loss: 0.0050 - val_mse: 0.0050 - val_msle: 0.0010 - val_mae: 0.0141
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
599/600 [============================>.] - ETA: 0s - loss: 0.0291 - mse: 0.0291 - msle: 0.0032 - mae: 0.0637
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 10s 16ms/step - loss: 0.0290 - mse: 0.0290 - msle: 0.0032 - mae: 0.0636 - val_loss: 0.0098 - val_mse: 0.0098 - val_msle: 0.0028 - val_mae: 0.0153
Epoch 2/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0113 - mse: 0.0113 - msle: 0.0024 - mae: 0.0306 - val_loss: 0.0067 - val_mse: 0.0067 - val_msle: 0.0015 - val_mae: 0.0132
Epoch 3/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0087 - mse: 0.0087 - msle: 0.0017 - mae: 0.0294 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0020 - val_mae: 0.0183
Epoch 4/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0076 - mse: 0.0076 - msle: 0.0015 - mae: 0.0274 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0022 - val_mae: 0.0215
Epoch 5/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0065 - mse: 0.0065 - msle: 0.0012 - mae: 0.0257 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0021 - val_mae: 0.0131
Epoch 6/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0061 - mse: 0.0061 - msle: 0.0011 - mae: 0.0249 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0022 - val_mae: 0.0138
Epoch 7/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0056 - mse: 0.0056 - msle: 0.0010 - mae: 0.0239 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0023 - val_mae: 0.0198
Epoch 8/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0049 - mse: 0.0049 - msle: 8.5909e-04 - mae: 0.0233 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0021 - val_mae: 0.0096
Epoch 9/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0046 - mse: 0.0046 - msle: 8.0236e-04 - mae: 0.0226 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0018 - val_mae: 0.0206
Epoch 10/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0042 - mse: 0.0042 - msle: 7.0309e-04 - mae: 0.0215 - val_loss: 0.0090 - val_mse: 0.0090 - val_msle: 0.0023 - val_mae: 0.0134
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
598/600 [============================>.] - ETA: 0s - loss: 0.0289 - mse: 0.0289 - msle: 0.0031 - mae: 0.0613
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 10s 16ms/step - loss: 0.0288 - mse: 0.0288 - msle: 0.0031 - mae: 0.0612 - val_loss: 0.0105 - val_mse: 0.0105 - val_msle: 0.0032 - val_mae: 0.0236
Epoch 2/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0113 - mse: 0.0113 - msle: 0.0026 - mae: 0.0305 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0014 - val_mae: 0.0172
Epoch 3/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0082 - mse: 0.0082 - msle: 0.0015 - mae: 0.0287 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0016 - val_mae: 0.0105
Epoch 4/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0070 - mse: 0.0070 - msle: 0.0013 - mae: 0.0267 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0014 - val_mae: 0.0204
Epoch 5/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0062 - mse: 0.0062 - msle: 0.0011 - mae: 0.0256 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0019 - val_mae: 0.0109
Epoch 6/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0057 - mse: 0.0057 - msle: 0.0010 - mae: 0.0246 - val_loss: 0.0065 - val_mse: 0.0065 - val_msle: 0.0014 - val_mae: 0.0115
Epoch 7/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0050 - mse: 0.0050 - msle: 8.7366e-04 - mae: 0.0234 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0019 - val_mae: 0.0189
Epoch 8/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0045 - mse: 0.0045 - msle: 7.7043e-04 - mae: 0.0221 - val_loss: 0.0067 - val_mse: 0.0067 - val_msle: 0.0015 - val_mae: 0.0098
Epoch 9/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0042 - mse: 0.0042 - msle: 7.0724e-04 - mae: 0.0215 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0022 - val_mae: 0.0121
Epoch 10/10
600/600 [==============================] - 9s 15ms/step - loss: 0.0036 - mse: 0.0036 - msle: 5.8261e-04 - mae: 0.0204 - val_loss: 0.0057 - val_mse: 0.0057 - val_msle: 0.0012 - val_mae: 0.0087
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
600/600 [==============================] - ETA: 0s - loss: 0.0275 - mse: 0.0275 - msle: 0.0031 - mae: 0.0602
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 16s 26ms/step - loss: 0.0275 - mse: 0.0275 - msle: 0.0031 - mae: 0.0602 - val_loss: 0.0106 - val_mse: 0.0106 - val_msle: 0.0033 - val_mae: 0.0157
Epoch 2/2
600/600 [==============================] - 15s 25ms/step - loss: 0.0126 - mse: 0.0126 - msle: 0.0031 - mae: 0.0327 - val_loss: 0.0094 - val_mse: 0.0094 - val_msle: 0.0025 - val_mae: 0.0280
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
600/600 [==============================] - ETA: 0s - loss: 0.0269 - mse: 0.0269 - msle: 0.0031 - mae: 0.0579
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 16s 26ms/step - loss: 0.0269 - mse: 0.0269 - msle: 0.0031 - mae: 0.0578 - val_loss: 0.0111 - val_mse: 0.0111 - val_msle: 0.0035 - val_mae: 0.0127
Epoch 2/2
600/600 [==============================] - 15s 25ms/step - loss: 0.0119 - mse: 0.0119 - msle: 0.0028 - mae: 0.0322 - val_loss: 0.0061 - val_mse: 0.0061 - val_msle: 0.0014 - val_mae: 0.0099
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
599/600 [============================>.] - ETA: 0s - loss: 0.0267 - mse: 0.0267 - msle: 0.0030 - mae: 0.0586
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 16s 26ms/step - loss: 0.0267 - mse: 0.0267 - msle: 0.0030 - mae: 0.0585 - val_loss: 0.0104 - val_mse: 0.0104 - val_msle: 0.0033 - val_mae: 0.0130
Epoch 2/2
600/600 [==============================] - 15s 25ms/step - loss: 0.0112 - mse: 0.0112 - msle: 0.0026 - mae: 0.0302 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0017 - val_mae: 0.0251
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
599/600 [============================>.] - ETA: 0s - loss: 0.0274 - mse: 0.0274 - msle: 0.0032 - mae: 0.0594
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 16s 26ms/step - loss: 0.0274 - mse: 0.0274 - msle: 0.0032 - mae: 0.0593 - val_loss: 0.0113 - val_mse: 0.0113 - val_msle: 0.0036 - val_mae: 0.0185
Epoch 2/2
600/600 [==============================] - 15s 25ms/step - loss: 0.0122 - mse: 0.0122 - msle: 0.0030 - mae: 0.0320 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0016 - val_mae: 0.0183
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
600/600 [==============================] - ETA: 0s - loss: 0.0283 - mse: 0.0283 - msle: 0.0032 - mae: 0.0602
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 16s 26ms/step - loss: 0.0282 - mse: 0.0282 - msle: 0.0032 - mae: 0.0602 - val_loss: 0.0114 - val_mse: 0.0114 - val_msle: 0.0034 - val_mae: 0.0316
Epoch 2/5
600/600 [==============================] - 15s 26ms/step - loss: 0.0124 - mse: 0.0124 - msle: 0.0031 - mae: 0.0310 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0021 - val_mae: 0.0179
Epoch 3/5
600/600 [==============================] - 15s 25ms/step - loss: 0.0098 - mse: 0.0098 - msle: 0.0020 - mae: 0.0293 - val_loss: 0.0095 - val_mse: 0.0095 - val_msle: 0.0021 - val_mae: 0.0351
Epoch 4/5
600/600 [==============================] - 15s 25ms/step - loss: 0.0081 - mse: 0.0081 - msle: 0.0016 - mae: 0.0286 - val_loss: 0.0095 - val_mse: 0.0095 - val_msle: 0.0023 - val_mae: 0.0267
Epoch 5/5
600/600 [==============================] - 15s 25ms/step - loss: 0.0072 - mse: 0.0072 - msle: 0.0014 - mae: 0.0276 - val_loss: 0.0096 - val_mse: 0.0096 - val_msle: 0.0023 - val_mae: 0.0248
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
598/600 [============================>.] - ETA: 0s - loss: 0.0272 - mse: 0.0272 - msle: 0.0031 - mae: 0.0590
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 16s 26ms/step - loss: 0.0271 - mse: 0.0271 - msle: 0.0031 - mae: 0.0589 - val_loss: 0.0112 - val_mse: 0.0112 - val_msle: 0.0034 - val_mae: 0.0214
Epoch 2/5
600/600 [==============================] - 15s 25ms/step - loss: 0.0125 - mse: 0.0125 - msle: 0.0030 - mae: 0.0325 - val_loss: 0.0060 - val_mse: 0.0060 - val_msle: 0.0013 - val_mae: 0.0171
Epoch 3/5
600/600 [==============================] - 15s 25ms/step - loss: 0.0087 - mse: 0.0087 - msle: 0.0017 - mae: 0.0307 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0017 - val_mae: 0.0131
Epoch 4/5
600/600 [==============================] - 15s 25ms/step - loss: 0.0064 - mse: 0.0064 - msle: 0.0012 - mae: 0.0273 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0023 - val_mae: 0.0205
Epoch 5/5
600/600 [==============================] - 15s 25ms/step - loss: 0.0057 - mse: 0.0057 - msle: 0.0011 - mae: 0.0256 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0021 - val_mae: 0.0179
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
600/600 [==============================] - ETA: 0s - loss: 0.0270 - mse: 0.0270 - msle: 0.0032 - mae: 0.0577
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 16s 26ms/step - loss: 0.0270 - mse: 0.0270 - msle: 0.0032 - mae: 0.0576 - val_loss: 0.0102 - val_mse: 0.0102 - val_msle: 0.0031 - val_mae: 0.0157
Epoch 2/5
600/600 [==============================] - 15s 25ms/step - loss: 0.0118 - mse: 0.0118 - msle: 0.0027 - mae: 0.0317 - val_loss: 0.0066 - val_mse: 0.0066 - val_msle: 0.0015 - val_mae: 0.0205
Epoch 3/5
600/600 [==============================] - 15s 26ms/step - loss: 0.0083 - mse: 0.0083 - msle: 0.0016 - mae: 0.0290 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0017 - val_mae: 0.0144
Epoch 4/5
600/600 [==============================] - 15s 25ms/step - loss: 0.0065 - mse: 0.0065 - msle: 0.0012 - mae: 0.0274 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0019 - val_mae: 0.0152
Epoch 5/5
600/600 [==============================] - 15s 25ms/step - loss: 0.0052 - mse: 0.0052 - msle: 9.2734e-04 - mae: 0.0255 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0018 - val_mae: 0.0150
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
599/600 [============================>.] - ETA: 0s - loss: 0.0296 - mse: 0.0296 - msle: 0.0031 - mae: 0.0650
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 16s 26ms/step - loss: 0.0295 - mse: 0.0295 - msle: 0.0031 - mae: 0.0649 - val_loss: 0.0106 - val_mse: 0.0106 - val_msle: 0.0033 - val_mae: 0.0128
Epoch 2/5
600/600 [==============================] - 15s 26ms/step - loss: 0.0118 - mse: 0.0118 - msle: 0.0028 - mae: 0.0315 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0018 - val_mae: 0.0137
Epoch 3/5
600/600 [==============================] - 15s 26ms/step - loss: 0.0082 - mse: 0.0082 - msle: 0.0015 - mae: 0.0294 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0017 - val_mae: 0.0186
Epoch 4/5
600/600 [==============================] - 15s 25ms/step - loss: 0.0069 - mse: 0.0069 - msle: 0.0013 - mae: 0.0271 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0020 - val_mae: 0.0124
Epoch 5/5
600/600 [==============================] - 15s 25ms/step - loss: 0.0058 - mse: 0.0058 - msle: 0.0010 - mae: 0.0255 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0022 - val_mae: 0.0189
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
600/600 [==============================] - ETA: 0s - loss: 0.0290 - mse: 0.0290 - msle: 0.0030 - mae: 0.0646
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 16s 26ms/step - loss: 0.0290 - mse: 0.0290 - msle: 0.0030 - mae: 0.0645 - val_loss: 0.0116 - val_mse: 0.0116 - val_msle: 0.0036 - val_mae: 0.0122
Epoch 2/10
600/600 [==============================] - 15s 25ms/step - loss: 0.0121 - mse: 0.0121 - msle: 0.0029 - mae: 0.0303 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0017 - val_mae: 0.0191
Epoch 3/10
600/600 [==============================] - 15s 25ms/step - loss: 0.0094 - mse: 0.0094 - msle: 0.0019 - mae: 0.0289 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0021 - val_mae: 0.0152
Epoch 4/10
600/600 [==============================] - 15s 25ms/step - loss: 0.0072 - mse: 0.0072 - msle: 0.0014 - mae: 0.0271 - val_loss: 0.0097 - val_mse: 0.0097 - val_msle: 0.0025 - val_mae: 0.0168
Epoch 5/10
600/600 [==============================] - 15s 25ms/step - loss: 0.0063 - mse: 0.0063 - msle: 0.0012 - mae: 0.0263 - val_loss: 0.0109 - val_mse: 0.0109 - val_msle: 0.0027 - val_mae: 0.0267
Epoch 6/10
600/600 [==============================] - 15s 25ms/step - loss: 0.0058 - mse: 0.0058 - msle: 0.0011 - mae: 0.0248 - val_loss: 0.0098 - val_mse: 0.0098 - val_msle: 0.0023 - val_mae: 0.0287
Epoch 7/10
600/600 [==============================] - 15s 25ms/step - loss: 0.0051 - mse: 0.0051 - msle: 8.9992e-04 - mae: 0.0245 - val_loss: 0.0107 - val_mse: 0.0107 - val_msle: 0.0027 - val_mae: 0.0236
Epoch 8/10
600/600 [==============================] - 15s 25ms/step - loss: 0.0046 - mse: 0.0046 - msle: 8.0053e-04 - mae: 0.0234 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0019 - val_mae: 0.0299
Epoch 9/10
600/600 [==============================] - 15s 25ms/step - loss: 0.0042 - mse: 0.0042 - msle: 6.9768e-04 - mae: 0.0230 - val_loss: 0.0105 - val_mse: 0.0105 - val_msle: 0.0024 - val_mae: 0.0264
Epoch 10/10
600/600 [==============================] - 15s 25ms/step - loss: 0.0039 - mse: 0.0039 - msle: 6.3496e-04 - mae: 0.0221 - val_loss: 0.0115 - val_mse: 0.0115 - val_msle: 0.0028 - val_mae: 0.0294
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
599/600 [============================>.] - ETA: 0s - loss: 0.0274 - mse: 0.0274 - msle: 0.0032 - mae: 0.0591
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 16s 26ms/step - loss: 0.0274 - mse: 0.0274 - msle: 0.0032 - mae: 0.0591 - val_loss: 0.0100 - val_mse: 0.0100 - val_msle: 0.0031 - val_mae: 0.0152
Epoch 2/10
600/600 [==============================] - 15s 26ms/step - loss: 0.0115 - mse: 0.0115 - msle: 0.0027 - mae: 0.0305 - val_loss: 0.0068 - val_mse: 0.0068 - val_msle: 0.0015 - val_mae: 0.0136
Epoch 3/10
600/600 [==============================] - 15s 26ms/step - loss: 0.0081 - mse: 0.0081 - msle: 0.0015 - mae: 0.0286 - val_loss: 0.0065 - val_mse: 0.0065 - val_msle: 0.0015 - val_mae: 0.0186
Epoch 4/10
600/600 [==============================] - 15s 26ms/step - loss: 0.0064 - mse: 0.0064 - msle: 0.0012 - mae: 0.0266 - val_loss: 0.0057 - val_mse: 0.0057 - val_msle: 0.0013 - val_mae: 0.0161
Epoch 5/10
600/600 [==============================] - 15s 25ms/step - loss: 0.0056 - mse: 0.0056 - msle: 9.9519e-04 - mae: 0.0257 - val_loss: 0.0060 - val_mse: 0.0060 - val_msle: 0.0013 - val_mae: 0.0184
Epoch 6/10
600/600 [==============================] - 15s 26ms/step - loss: 0.0050 - mse: 0.0050 - msle: 8.8581e-04 - mae: 0.0242 - val_loss: 0.0062 - val_mse: 0.0062 - val_msle: 0.0013 - val_mae: 0.0214
Epoch 7/10
600/600 [==============================] - 15s 26ms/step - loss: 0.0047 - mse: 0.0047 - msle: 8.0820e-04 - mae: 0.0236 - val_loss: 0.0054 - val_mse: 0.0054 - val_msle: 0.0012 - val_mae: 0.0162
Epoch 8/10
600/600 [==============================] - 15s 26ms/step - loss: 0.0041 - mse: 0.0041 - msle: 6.7641e-04 - mae: 0.0223 - val_loss: 0.0060 - val_mse: 0.0060 - val_msle: 0.0013 - val_mae: 0.0142
Epoch 9/10
600/600 [==============================] - 15s 26ms/step - loss: 0.0039 - mse: 0.0039 - msle: 6.2484e-04 - mae: 0.0217 - val_loss: 0.0061 - val_mse: 0.0061 - val_msle: 0.0014 - val_mae: 0.0130
Epoch 10/10
600/600 [==============================] - 15s 26ms/step - loss: 0.0036 - mse: 0.0036 - msle: 5.9832e-04 - mae: 0.0208 - val_loss: 0.0068 - val_mse: 0.0068 - val_msle: 0.0014 - val_mae: 0.0275
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
600/600 [==============================] - ETA: 0s - loss: 0.0282 - mse: 0.0282 - msle: 0.0031 - mae: 0.0619
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 16s 26ms/step - loss: 0.0282 - mse: 0.0282 - msle: 0.0031 - mae: 0.0619 - val_loss: 0.0100 - val_mse: 0.0100 - val_msle: 0.0030 - val_mae: 0.0185
Epoch 2/10
600/600 [==============================] - 15s 26ms/step - loss: 0.0115 - mse: 0.0115 - msle: 0.0027 - mae: 0.0314 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0018 - val_mae: 0.0168
Epoch 3/10
600/600 [==============================] - 15s 26ms/step - loss: 0.0084 - mse: 0.0084 - msle: 0.0017 - mae: 0.0294 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0019 - val_mae: 0.0159
Epoch 4/10
600/600 [==============================] - 15s 25ms/step - loss: 0.0069 - mse: 0.0069 - msle: 0.0013 - mae: 0.0271 - val_loss: 0.0090 - val_mse: 0.0090 - val_msle: 0.0023 - val_mae: 0.0162
Epoch 5/10
600/600 [==============================] - 15s 25ms/step - loss: 0.0057 - mse: 0.0057 - msle: 0.0011 - mae: 0.0257 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0021 - val_mae: 0.0184
Epoch 6/10
600/600 [==============================] - 15s 26ms/step - loss: 0.0052 - mse: 0.0052 - msle: 9.3685e-04 - mae: 0.0248 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0021 - val_mae: 0.0167
Epoch 7/10
600/600 [==============================] - 15s 26ms/step - loss: 0.0046 - mse: 0.0046 - msle: 7.9123e-04 - mae: 0.0243 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0022 - val_mae: 0.0142
Epoch 8/10
600/600 [==============================] - 15s 25ms/step - loss: 0.0041 - mse: 0.0041 - msle: 7.1312e-04 - mae: 0.0233 - val_loss: 0.0092 - val_mse: 0.0092 - val_msle: 0.0021 - val_mae: 0.0146
Epoch 9/10
600/600 [==============================] - 15s 25ms/step - loss: 0.0039 - mse: 0.0039 - msle: 6.5688e-04 - mae: 0.0226 - val_loss: 0.0095 - val_mse: 0.0095 - val_msle: 0.0021 - val_mae: 0.0134
Epoch 10/10
600/600 [==============================] - 15s 25ms/step - loss: 0.0035 - mse: 0.0035 - msle: 5.9205e-04 - mae: 0.0216 - val_loss: 0.0107 - val_mse: 0.0107 - val_msle: 0.0022 - val_mae: 0.0142
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
598/600 [============================>.] - ETA: 0s - loss: 0.0272 - mse: 0.0272 - msle: 0.0031 - mae: 0.0596
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 16s 26ms/step - loss: 0.0272 - mse: 0.0272 - msle: 0.0031 - mae: 0.0595 - val_loss: 0.0105 - val_mse: 0.0105 - val_msle: 0.0034 - val_mae: 0.0170
Epoch 2/10
600/600 [==============================] - 15s 25ms/step - loss: 0.0119 - mse: 0.0119 - msle: 0.0030 - mae: 0.0301 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0020 - val_mae: 0.0327
Epoch 3/10
600/600 [==============================] - 15s 26ms/step - loss: 0.0096 - mse: 0.0096 - msle: 0.0020 - mae: 0.0304 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0016 - val_mae: 0.0281
Epoch 4/10
600/600 [==============================] - 15s 25ms/step - loss: 0.0078 - mse: 0.0078 - msle: 0.0015 - mae: 0.0291 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0020 - val_mae: 0.0230
Epoch 5/10
600/600 [==============================] - 15s 25ms/step - loss: 0.0066 - mse: 0.0066 - msle: 0.0012 - mae: 0.0268 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0023 - val_mae: 0.0180
Epoch 6/10
600/600 [==============================] - 15s 25ms/step - loss: 0.0061 - mse: 0.0061 - msle: 0.0012 - mae: 0.0258 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0021 - val_mae: 0.0197
Epoch 7/10
600/600 [==============================] - 15s 25ms/step - loss: 0.0058 - mse: 0.0058 - msle: 0.0011 - mae: 0.0256 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0022 - val_mae: 0.0160
Epoch 8/10
600/600 [==============================] - 15s 26ms/step - loss: 0.0050 - mse: 0.0050 - msle: 9.0073e-04 - mae: 0.0241 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0017 - val_mae: 0.0187
Epoch 9/10
600/600 [==============================] - 15s 26ms/step - loss: 0.0045 - mse: 0.0045 - msle: 7.7345e-04 - mae: 0.0237 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0019 - val_mae: 0.0255
Epoch 10/10
600/600 [==============================] - 15s 26ms/step - loss: 0.0040 - mse: 0.0040 - msle: 6.5804e-04 - mae: 0.0230 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0019 - val_mae: 0.0266
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
600/600 [==============================] - ETA: 0s - loss: 0.0306 - mse: 0.0306 - msle: 0.0031 - mae: 0.0667
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 42s 68ms/step - loss: 0.0306 - mse: 0.0306 - msle: 0.0031 - mae: 0.0667 - val_loss: 0.0114 - val_mse: 0.0114 - val_msle: 0.0036 - val_mae: 0.0148
Epoch 2/2
600/600 [==============================] - 40s 67ms/step - loss: 0.0127 - mse: 0.0127 - msle: 0.0031 - mae: 0.0314 - val_loss: 0.0101 - val_mse: 0.0101 - val_msle: 0.0032 - val_mae: 0.0123
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
600/600 [==============================] - ETA: 0s - loss: 0.0274 - mse: 0.0274 - msle: 0.0033 - mae: 0.0597
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 41s 67ms/step - loss: 0.0274 - mse: 0.0274 - msle: 0.0033 - mae: 0.0597 - val_loss: 0.0115 - val_mse: 0.0115 - val_msle: 0.0035 - val_mae: 0.0166
Epoch 2/2
600/600 [==============================] - 40s 67ms/step - loss: 0.0131 - mse: 0.0131 - msle: 0.0033 - mae: 0.0323 - val_loss: 0.0113 - val_mse: 0.0113 - val_msle: 0.0033 - val_mae: 0.0359
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
600/600 [==============================] - ETA: 0s - loss: 0.0246 - mse: 0.0246 - msle: 0.0031 - mae: 0.0537
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 41s 67ms/step - loss: 0.0246 - mse: 0.0246 - msle: 0.0031 - mae: 0.0537 - val_loss: 0.0118 - val_mse: 0.0118 - val_msle: 0.0035 - val_mae: 0.0384
Epoch 2/2
600/600 [==============================] - 40s 67ms/step - loss: 0.0117 - mse: 0.0117 - msle: 0.0028 - mae: 0.0329 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0012 - val_mae: 0.0417
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
600/600 [==============================] - ETA: 0s - loss: 0.0284 - mse: 0.0284 - msle: 0.0031 - mae: 0.0638
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 41s 67ms/step - loss: 0.0284 - mse: 0.0284 - msle: 0.0031 - mae: 0.0637 - val_loss: 0.0109 - val_mse: 0.0109 - val_msle: 0.0034 - val_mae: 0.0148
Epoch 2/2
600/600 [==============================] - 40s 67ms/step - loss: 0.0118 - mse: 0.0118 - msle: 0.0031 - mae: 0.0295 - val_loss: 0.0104 - val_mse: 0.0104 - val_msle: 0.0034 - val_mae: 0.0185
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
600/600 [==============================] - ETA: 0s - loss: 0.0263 - mse: 0.0263 - msle: 0.0030 - mae: 0.0571
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 41s 67ms/step - loss: 0.0263 - mse: 0.0263 - msle: 0.0030 - mae: 0.0571 - val_loss: 0.0116 - val_mse: 0.0116 - val_msle: 0.0034 - val_mae: 0.0226
Epoch 2/5
600/600 [==============================] - 40s 67ms/step - loss: 0.0123 - mse: 0.0123 - msle: 0.0030 - mae: 0.0315 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0018 - val_mae: 0.0256
Epoch 3/5
600/600 [==============================] - 40s 67ms/step - loss: 0.0088 - mse: 0.0088 - msle: 0.0017 - mae: 0.0322 - val_loss: 0.0099 - val_mse: 0.0099 - val_msle: 0.0026 - val_mae: 0.0180
Epoch 4/5
600/600 [==============================] - 40s 67ms/step - loss: 0.0063 - mse: 0.0063 - msle: 0.0011 - mae: 0.0281 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0022 - val_mae: 0.0169
Epoch 5/5
600/600 [==============================] - 40s 67ms/step - loss: 0.0051 - mse: 0.0051 - msle: 8.9943e-04 - mae: 0.0262 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0019 - val_mae: 0.0284
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
600/600 [==============================] - ETA: 0s - loss: 0.0286 - mse: 0.0286 - msle: 0.0032 - mae: 0.0654
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 41s 68ms/step - loss: 0.0285 - mse: 0.0285 - msle: 0.0032 - mae: 0.0654 - val_loss: 0.0116 - val_mse: 0.0116 - val_msle: 0.0033 - val_mae: 0.0403
Epoch 2/5
600/600 [==============================] - 40s 67ms/step - loss: 0.0123 - mse: 0.0123 - msle: 0.0032 - mae: 0.0305 - val_loss: 0.0091 - val_mse: 0.0091 - val_msle: 0.0026 - val_mae: 0.0155
Epoch 3/5
600/600 [==============================] - 40s 67ms/step - loss: 0.0095 - mse: 0.0095 - msle: 0.0020 - mae: 0.0319 - val_loss: 0.0056 - val_mse: 0.0056 - val_msle: 0.0012 - val_mae: 0.0160
Epoch 4/5
600/600 [==============================] - 40s 67ms/step - loss: 0.0064 - mse: 0.0064 - msle: 0.0012 - mae: 0.0291 - val_loss: 0.0063 - val_mse: 0.0063 - val_msle: 0.0014 - val_mae: 0.0172
Epoch 5/5
600/600 [==============================] - 40s 67ms/step - loss: 0.0051 - mse: 0.0051 - msle: 8.8107e-04 - mae: 0.0270 - val_loss: 0.0056 - val_mse: 0.0056 - val_msle: 0.0012 - val_mae: 0.0127
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
600/600 [==============================] - ETA: 0s - loss: 0.0248 - mse: 0.0248 - msle: 0.0030 - mae: 0.0552
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 41s 67ms/step - loss: 0.0248 - mse: 0.0248 - msle: 0.0030 - mae: 0.0552 - val_loss: 0.0116 - val_mse: 0.0116 - val_msle: 0.0035 - val_mae: 0.0316
Epoch 2/5
600/600 [==============================] - 40s 67ms/step - loss: 0.0113 - mse: 0.0113 - msle: 0.0028 - mae: 0.0307 - val_loss: 0.0066 - val_mse: 0.0066 - val_msle: 0.0015 - val_mae: 0.0214
Epoch 3/5
600/600 [==============================] - 40s 67ms/step - loss: 0.0082 - mse: 0.0082 - msle: 0.0016 - mae: 0.0306 - val_loss: 0.0093 - val_mse: 0.0093 - val_msle: 0.0024 - val_mae: 0.0158
Epoch 4/5
600/600 [==============================] - 40s 67ms/step - loss: 0.0065 - mse: 0.0065 - msle: 0.0012 - mae: 0.0289 - val_loss: 0.0069 - val_mse: 0.0069 - val_msle: 0.0018 - val_mae: 0.0136
Epoch 5/5
600/600 [==============================] - 40s 67ms/step - loss: 0.0049 - mse: 0.0049 - msle: 8.6083e-04 - mae: 0.0264 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0022 - val_mae: 0.0122
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
600/600 [==============================] - ETA: 0s - loss: 0.0254 - mse: 0.0254 - msle: 0.0031 - mae: 0.0579
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 41s 67ms/step - loss: 0.0254 - mse: 0.0254 - msle: 0.0031 - mae: 0.0578 - val_loss: 0.0121 - val_mse: 0.0121 - val_msle: 0.0035 - val_mae: 0.0350
Epoch 2/5
600/600 [==============================] - 40s 67ms/step - loss: 0.0119 - mse: 0.0119 - msle: 0.0030 - mae: 0.0303 - val_loss: 0.0103 - val_mse: 0.0103 - val_msle: 0.0028 - val_mae: 0.0372
Epoch 3/5
600/600 [==============================] - 40s 67ms/step - loss: 0.0088 - mse: 0.0088 - msle: 0.0018 - mae: 0.0305 - val_loss: 0.0099 - val_mse: 0.0099 - val_msle: 0.0027 - val_mae: 0.0200
Epoch 4/5
600/600 [==============================] - 40s 67ms/step - loss: 0.0063 - mse: 0.0063 - msle: 0.0012 - mae: 0.0288 - val_loss: 0.0111 - val_mse: 0.0111 - val_msle: 0.0029 - val_mae: 0.0220
Epoch 5/5
600/600 [==============================] - 40s 67ms/step - loss: 0.0051 - mse: 0.0051 - msle: 9.2420e-04 - mae: 0.0271 - val_loss: 0.0099 - val_mse: 0.0099 - val_msle: 0.0026 - val_mae: 0.0239
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
600/600 [==============================] - ETA: 0s - loss: 0.0291 - mse: 0.0291 - msle: 0.0033 - mae: 0.0622
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 41s 67ms/step - loss: 0.0291 - mse: 0.0291 - msle: 0.0033 - mae: 0.0622 - val_loss: 0.0120 - val_mse: 0.0120 - val_msle: 0.0035 - val_mae: 0.0282
Epoch 2/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0136 - mse: 0.0136 - msle: 0.0033 - mae: 0.0346 - val_loss: 0.0090 - val_mse: 0.0090 - val_msle: 0.0023 - val_mae: 0.0311
Epoch 3/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0101 - mse: 0.0101 - msle: 0.0021 - mae: 0.0327 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0020 - val_mae: 0.0132
Epoch 4/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0014 - mae: 0.0301 - val_loss: 0.0090 - val_mse: 0.0090 - val_msle: 0.0023 - val_mae: 0.0255
Epoch 5/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0062 - mse: 0.0062 - msle: 0.0012 - mae: 0.0281 - val_loss: 0.0099 - val_mse: 0.0099 - val_msle: 0.0025 - val_mae: 0.0132
Epoch 6/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0049 - mse: 0.0049 - msle: 8.4885e-04 - mae: 0.0267 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0021 - val_mae: 0.0242
Epoch 7/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0045 - mse: 0.0045 - msle: 7.7373e-04 - mae: 0.0256 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0019 - val_mae: 0.0182
Epoch 8/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0040 - mse: 0.0040 - msle: 6.7991e-04 - mae: 0.0245 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0018 - val_mae: 0.0166
Epoch 9/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0037 - mse: 0.0037 - msle: 5.9677e-04 - mae: 0.0239 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0017 - val_mae: 0.0206
Epoch 10/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0033 - mse: 0.0033 - msle: 5.3269e-04 - mae: 0.0232 - val_loss: 0.0069 - val_mse: 0.0069 - val_msle: 0.0016 - val_mae: 0.0135
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
600/600 [==============================] - ETA: 0s - loss: 0.0325 - mse: 0.0325 - msle: 0.0032 - mae: 0.0697
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 41s 68ms/step - loss: 0.0325 - mse: 0.0325 - msle: 0.0032 - mae: 0.0697 - val_loss: 0.0119 - val_mse: 0.0119 - val_msle: 0.0036 - val_mae: 0.0218
Epoch 2/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0131 - mse: 0.0131 - msle: 0.0032 - mae: 0.0333 - val_loss: 0.0104 - val_mse: 0.0104 - val_msle: 0.0032 - val_mae: 0.0243
Epoch 3/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0119 - mse: 0.0119 - msle: 0.0031 - mae: 0.0304 - val_loss: 0.0103 - val_mse: 0.0103 - val_msle: 0.0034 - val_mae: 0.0164
Epoch 4/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0109 - mse: 0.0109 - msle: 0.0029 - mae: 0.0303 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0027 - val_mae: 0.0233
Epoch 5/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0101 - mse: 0.0101 - msle: 0.0026 - mae: 0.0307 - val_loss: 0.0095 - val_mse: 0.0095 - val_msle: 0.0028 - val_mae: 0.0251
Epoch 6/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0094 - mse: 0.0094 - msle: 0.0022 - mae: 0.0315 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0022 - val_mae: 0.0225
Epoch 7/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0088 - mse: 0.0088 - msle: 0.0019 - mae: 0.0331 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0025 - val_mae: 0.0165
Epoch 8/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0081 - mse: 0.0081 - msle: 0.0016 - mae: 0.0346 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0020 - val_mae: 0.0199
Epoch 9/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0013 - mae: 0.0362 - val_loss: 0.0069 - val_mse: 0.0069 - val_msle: 0.0017 - val_mae: 0.0187
Epoch 10/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0062 - mse: 0.0062 - msle: 8.3048e-04 - mae: 0.0375 - val_loss: 0.0060 - val_mse: 0.0060 - val_msle: 0.0014 - val_mae: 0.0164
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
600/600 [==============================] - ETA: 0s - loss: 0.0368 - mse: 0.0368 - msle: 0.0030 - mae: 0.0787
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 41s 67ms/step - loss: 0.0368 - mse: 0.0368 - msle: 0.0030 - mae: 0.0787 - val_loss: 0.0131 - val_mse: 0.0131 - val_msle: 0.0036 - val_mae: 0.0147
Epoch 2/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0130 - mse: 0.0130 - msle: 0.0030 - mae: 0.0320 - val_loss: 0.0104 - val_mse: 0.0104 - val_msle: 0.0033 - val_mae: 0.0117
Epoch 3/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0113 - mse: 0.0113 - msle: 0.0029 - mae: 0.0298 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0020 - val_mae: 0.0216
Epoch 4/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0087 - mse: 0.0087 - msle: 0.0018 - mae: 0.0310 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0019 - val_mae: 0.0307
Epoch 5/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0069 - mse: 0.0069 - msle: 0.0013 - mae: 0.0296 - val_loss: 0.0093 - val_mse: 0.0093 - val_msle: 0.0024 - val_mae: 0.0215
Epoch 6/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0056 - mse: 0.0056 - msle: 0.0010 - mae: 0.0278 - val_loss: 0.0091 - val_mse: 0.0091 - val_msle: 0.0024 - val_mae: 0.0170
Epoch 7/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0049 - mse: 0.0049 - msle: 8.8673e-04 - mae: 0.0266 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0022 - val_mae: 0.0235
Epoch 8/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0043 - mse: 0.0043 - msle: 6.9052e-04 - mae: 0.0261 - val_loss: 0.0096 - val_mse: 0.0096 - val_msle: 0.0025 - val_mae: 0.0170
Epoch 9/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0039 - mse: 0.0039 - msle: 6.5000e-04 - mae: 0.0247 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0022 - val_mae: 0.0260
Epoch 10/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0041 - mse: 0.0041 - msle: 6.9928e-04 - mae: 0.0247 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0022 - val_mae: 0.0219
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
600/600 [==============================] - ETA: 0s - loss: 0.0269 - mse: 0.0269 - msle: 0.0032 - mae: 0.0568
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
600/600 [==============================] - 41s 67ms/step - loss: 0.0269 - mse: 0.0269 - msle: 0.0032 - mae: 0.0568 - val_loss: 0.0116 - val_mse: 0.0116 - val_msle: 0.0034 - val_mae: 0.0308
Epoch 2/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0124 - mse: 0.0124 - msle: 0.0031 - mae: 0.0319 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0020 - val_mae: 0.0141
Epoch 3/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0092 - mse: 0.0092 - msle: 0.0019 - mae: 0.0317 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0020 - val_mae: 0.0133
Epoch 4/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0073 - mse: 0.0073 - msle: 0.0014 - mae: 0.0297 - val_loss: 0.0064 - val_mse: 0.0064 - val_msle: 0.0016 - val_mae: 0.0227
Epoch 5/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0063 - mse: 0.0063 - msle: 0.0013 - mae: 0.0284 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0020 - val_mae: 0.0301
Epoch 6/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0057 - mse: 0.0057 - msle: 0.0012 - mae: 0.0274 - val_loss: 0.0064 - val_mse: 0.0064 - val_msle: 0.0016 - val_mae: 0.0162
Epoch 7/10
600/600 [==============================] - 40s 66ms/step - loss: 0.0053 - mse: 0.0053 - msle: 0.0011 - mae: 0.0271 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0018 - val_mae: 0.0146
Epoch 8/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0048 - mse: 0.0048 - msle: 9.9370e-04 - mae: 0.0266 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0017 - val_mae: 0.0255
Epoch 9/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0048 - mse: 0.0048 - msle: 9.8446e-04 - mae: 0.0266 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0018 - val_mae: 0.0182
Epoch 10/10
600/600 [==============================] - 40s 67ms/step - loss: 0.0045 - mse: 0.0045 - msle: 9.2288e-04 - mae: 0.0266 - val_loss: 0.0065 - val_mse: 0.0065 - val_msle: 0.0015 - val_mae: 0.0140
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
300/300 [==============================] - ETA: 0s - loss: 0.0490 - mse: 0.0490 - msle: 0.0031 - mae: 0.1049
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 11ms/step - loss: 0.0489 - mse: 0.0489 - msle: 0.0031 - mae: 0.1048 - val_loss: 0.0127 - val_mse: 0.0127 - val_msle: 0.0034 - val_mae: 0.0162
Epoch 2/2
300/300 [==============================] - 3s 10ms/step - loss: 0.0138 - mse: 0.0138 - msle: 0.0030 - mae: 0.0308 - val_loss: 0.0091 - val_mse: 0.0091 - val_msle: 0.0026 - val_mae: 0.0120
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
296/300 [============================>.] - ETA: 0s - loss: 0.0483 - mse: 0.0483 - msle: 0.0033 - mae: 0.1026
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 10ms/step - loss: 0.0479 - mse: 0.0479 - msle: 0.0033 - mae: 0.1017 - val_loss: 0.0110 - val_mse: 0.0110 - val_msle: 0.0034 - val_mae: 0.0097
Epoch 2/2
300/300 [==============================] - 3s 10ms/step - loss: 0.0134 - mse: 0.0134 - msle: 0.0030 - mae: 0.0302 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0025 - val_mae: 0.0109
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
300/300 [==============================] - ETA: 0s - loss: 0.0482 - mse: 0.0482 - msle: 0.0031 - mae: 0.1037
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 10ms/step - loss: 0.0481 - mse: 0.0481 - msle: 0.0031 - mae: 0.1035 - val_loss: 0.0121 - val_mse: 0.0121 - val_msle: 0.0034 - val_mae: 0.0112
Epoch 2/2
300/300 [==============================] - 3s 10ms/step - loss: 0.0127 - mse: 0.0127 - msle: 0.0028 - mae: 0.0295 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0024 - val_mae: 0.0161
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
298/300 [============================>.] - ETA: 0s - loss: 0.0495 - mse: 0.0495 - msle: 0.0032 - mae: 0.1053
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 10ms/step - loss: 0.0493 - mse: 0.0493 - msle: 0.0032 - mae: 0.1048 - val_loss: 0.0125 - val_mse: 0.0125 - val_msle: 0.0034 - val_mae: 0.0171
Epoch 2/2
300/300 [==============================] - 3s 9ms/step - loss: 0.0138 - mse: 0.0138 - msle: 0.0031 - mae: 0.0310 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0022 - val_mae: 0.0101
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
296/300 [============================>.] - ETA: 0s - loss: 0.0465 - mse: 0.0465 - msle: 0.0029 - mae: 0.1003
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 10ms/step - loss: 0.0461 - mse: 0.0461 - msle: 0.0030 - mae: 0.0995 - val_loss: 0.0124 - val_mse: 0.0124 - val_msle: 0.0034 - val_mae: 0.0108
Epoch 2/5
300/300 [==============================] - 3s 10ms/step - loss: 0.0130 - mse: 0.0130 - msle: 0.0028 - mae: 0.0308 - val_loss: 0.0091 - val_mse: 0.0091 - val_msle: 0.0025 - val_mae: 0.0159
Epoch 3/5
300/300 [==============================] - 3s 10ms/step - loss: 0.0108 - mse: 0.0108 - msle: 0.0022 - mae: 0.0296 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0024 - val_mae: 0.0163
Epoch 4/5
300/300 [==============================] - 3s 10ms/step - loss: 0.0095 - mse: 0.0095 - msle: 0.0019 - mae: 0.0274 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0024 - val_mae: 0.0166
Epoch 5/5
300/300 [==============================] - 3s 10ms/step - loss: 0.0087 - mse: 0.0087 - msle: 0.0017 - mae: 0.0261 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0023 - val_mae: 0.0135
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
299/300 [============================>.] - ETA: 0s - loss: 0.0485 - mse: 0.0485 - msle: 0.0031 - mae: 0.1021
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 10ms/step - loss: 0.0484 - mse: 0.0484 - msle: 0.0031 - mae: 0.1018 - val_loss: 0.0120 - val_mse: 0.0120 - val_msle: 0.0034 - val_mae: 0.0211
Epoch 2/5
300/300 [==============================] - 3s 10ms/step - loss: 0.0137 - mse: 0.0137 - msle: 0.0031 - mae: 0.0310 - val_loss: 0.0090 - val_mse: 0.0090 - val_msle: 0.0026 - val_mae: 0.0089
Epoch 3/5
300/300 [==============================] - 3s 10ms/step - loss: 0.0119 - mse: 0.0119 - msle: 0.0024 - mae: 0.0305 - val_loss: 0.0090 - val_mse: 0.0090 - val_msle: 0.0026 - val_mae: 0.0117
Epoch 4/5
300/300 [==============================] - 3s 10ms/step - loss: 0.0108 - mse: 0.0108 - msle: 0.0022 - mae: 0.0294 - val_loss: 0.0092 - val_mse: 0.0092 - val_msle: 0.0025 - val_mae: 0.0124
Epoch 5/5
300/300 [==============================] - 3s 9ms/step - loss: 0.0097 - mse: 0.0097 - msle: 0.0020 - mae: 0.0276 - val_loss: 0.0103 - val_mse: 0.0103 - val_msle: 0.0024 - val_mae: 0.0445
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
295/300 [============================>.] - ETA: 0s - loss: 0.0469 - mse: 0.0469 - msle: 0.0031 - mae: 0.1023
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 10ms/step - loss: 0.0464 - mse: 0.0464 - msle: 0.0031 - mae: 0.1012 - val_loss: 0.0099 - val_mse: 0.0099 - val_msle: 0.0031 - val_mae: 0.0102
Epoch 2/5
300/300 [==============================] - 3s 10ms/step - loss: 0.0128 - mse: 0.0128 - msle: 0.0028 - mae: 0.0293 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0022 - val_mae: 0.0130
Epoch 3/5
300/300 [==============================] - 3s 10ms/step - loss: 0.0111 - mse: 0.0111 - msle: 0.0022 - mae: 0.0272 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0020 - val_mae: 0.0131
Epoch 4/5
300/300 [==============================] - 3s 10ms/step - loss: 0.0100 - mse: 0.0100 - msle: 0.0020 - mae: 0.0261 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0020 - val_mae: 0.0211
Epoch 5/5
300/300 [==============================] - 3s 10ms/step - loss: 0.0092 - mse: 0.0092 - msle: 0.0018 - mae: 0.0262 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0021 - val_mae: 0.0262
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
297/300 [============================>.] - ETA: 0s - loss: 0.0476 - mse: 0.0476 - msle: 0.0032 - mae: 0.1014
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 10ms/step - loss: 0.0473 - mse: 0.0473 - msle: 0.0032 - mae: 0.1007 - val_loss: 0.0125 - val_mse: 0.0125 - val_msle: 0.0034 - val_mae: 0.0137
Epoch 2/5
300/300 [==============================] - 3s 10ms/step - loss: 0.0137 - mse: 0.0137 - msle: 0.0031 - mae: 0.0303 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0023 - val_mae: 0.0167
Epoch 3/5
300/300 [==============================] - 3s 10ms/step - loss: 0.0116 - mse: 0.0116 - msle: 0.0024 - mae: 0.0300 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0023 - val_mae: 0.0103
Epoch 4/5
300/300 [==============================] - 3s 10ms/step - loss: 0.0104 - mse: 0.0104 - msle: 0.0021 - mae: 0.0280 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0024 - val_mae: 0.0139
Epoch 5/5
300/300 [==============================] - 3s 10ms/step - loss: 0.0094 - mse: 0.0094 - msle: 0.0019 - mae: 0.0272 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0024 - val_mae: 0.0140
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
299/300 [============================>.] - ETA: 0s - loss: 0.0471 - mse: 0.0471 - msle: 0.0030 - mae: 0.1006
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 10ms/step - loss: 0.0469 - mse: 0.0469 - msle: 0.0030 - mae: 0.1002 - val_loss: 0.0123 - val_mse: 0.0123 - val_msle: 0.0033 - val_mae: 0.0131
Epoch 2/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0135 - mse: 0.0135 - msle: 0.0030 - mae: 0.0310 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0022 - val_mae: 0.0092
Epoch 3/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0113 - mse: 0.0113 - msle: 0.0022 - mae: 0.0305 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0023 - val_mae: 0.0184
Epoch 4/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0101 - mse: 0.0101 - msle: 0.0020 - mae: 0.0281 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0023 - val_mae: 0.0280
Epoch 5/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0092 - mse: 0.0092 - msle: 0.0018 - mae: 0.0267 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0022 - val_mae: 0.0264
Epoch 6/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0087 - mse: 0.0087 - msle: 0.0017 - mae: 0.0254 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0022 - val_mae: 0.0115
Epoch 7/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0084 - mse: 0.0084 - msle: 0.0016 - mae: 0.0252 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0021 - val_mae: 0.0162
Epoch 8/10
300/300 [==============================] - 3s 9ms/step - loss: 0.0079 - mse: 0.0079 - msle: 0.0015 - mae: 0.0236 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0021 - val_mae: 0.0208
Epoch 9/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0078 - mse: 0.0078 - msle: 0.0015 - mae: 0.0235 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0020 - val_mae: 0.0210
Epoch 10/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0078 - mse: 0.0078 - msle: 0.0015 - mae: 0.0228 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0020 - val_mae: 0.0095
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
297/300 [============================>.] - ETA: 0s - loss: 0.0481 - mse: 0.0481 - msle: 0.0031 - mae: 0.1037
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 10ms/step - loss: 0.0477 - mse: 0.0477 - msle: 0.0031 - mae: 0.1030 - val_loss: 0.0117 - val_mse: 0.0117 - val_msle: 0.0034 - val_mae: 0.0115
Epoch 2/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0130 - mse: 0.0130 - msle: 0.0029 - mae: 0.0300 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0023 - val_mae: 0.0094
Epoch 3/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0110 - mse: 0.0110 - msle: 0.0022 - mae: 0.0289 - val_loss: 0.0090 - val_mse: 0.0090 - val_msle: 0.0025 - val_mae: 0.0110
Epoch 4/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0097 - mse: 0.0097 - msle: 0.0020 - mae: 0.0274 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0024 - val_mae: 0.0124
Epoch 5/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0092 - mse: 0.0092 - msle: 0.0019 - mae: 0.0263 - val_loss: 0.0091 - val_mse: 0.0091 - val_msle: 0.0024 - val_mae: 0.0201
Epoch 6/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0088 - mse: 0.0088 - msle: 0.0018 - mae: 0.0255 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0024 - val_mae: 0.0111
Epoch 7/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0084 - mse: 0.0084 - msle: 0.0017 - mae: 0.0254 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0022 - val_mae: 0.0127
Epoch 8/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0083 - mse: 0.0083 - msle: 0.0016 - mae: 0.0244 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0021 - val_mae: 0.0149
Epoch 9/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0080 - mse: 0.0080 - msle: 0.0016 - mae: 0.0244 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0021 - val_mae: 0.0149
Epoch 10/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0079 - mse: 0.0079 - msle: 0.0015 - mae: 0.0239 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0021 - val_mae: 0.0146
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
297/300 [============================>.] - ETA: 0s - loss: 0.0484 - mse: 0.0484 - msle: 0.0032 - mae: 0.1037
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 10ms/step - loss: 0.0481 - mse: 0.0481 - msle: 0.0032 - mae: 0.1030 - val_loss: 0.0128 - val_mse: 0.0128 - val_msle: 0.0034 - val_mae: 0.0135
Epoch 2/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0140 - mse: 0.0140 - msle: 0.0031 - mae: 0.0308 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0023 - val_mae: 0.0140
Epoch 3/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0108 - mse: 0.0108 - msle: 0.0022 - mae: 0.0296 - val_loss: 0.0092 - val_mse: 0.0092 - val_msle: 0.0024 - val_mae: 0.0189
Epoch 4/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0094 - mse: 0.0094 - msle: 0.0019 - mae: 0.0280 - val_loss: 0.0093 - val_mse: 0.0093 - val_msle: 0.0024 - val_mae: 0.0228
Epoch 5/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0087 - mse: 0.0087 - msle: 0.0018 - mae: 0.0263 - val_loss: 0.0091 - val_mse: 0.0091 - val_msle: 0.0023 - val_mae: 0.0259
Epoch 6/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0085 - mse: 0.0085 - msle: 0.0017 - mae: 0.0259 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0023 - val_mae: 0.0177
Epoch 7/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0080 - mse: 0.0080 - msle: 0.0016 - mae: 0.0247 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0021 - val_mae: 0.0222
Epoch 8/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0079 - mse: 0.0079 - msle: 0.0015 - mae: 0.0250 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0020 - val_mae: 0.0368
Epoch 9/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0077 - mse: 0.0077 - msle: 0.0015 - mae: 0.0244 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0021 - val_mae: 0.0241
Epoch 10/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0015 - mae: 0.0232 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0019 - val_mae: 0.0294
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
295/300 [============================>.] - ETA: 0s - loss: 0.0480 - mse: 0.0480 - msle: 0.0031 - mae: 0.1037
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 11ms/step - loss: 0.0475 - mse: 0.0475 - msle: 0.0031 - mae: 0.1026 - val_loss: 0.0114 - val_mse: 0.0114 - val_msle: 0.0034 - val_mae: 0.0206
Epoch 2/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0133 - mse: 0.0133 - msle: 0.0029 - mae: 0.0310 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0023 - val_mae: 0.0172
Epoch 3/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0113 - mse: 0.0113 - msle: 0.0022 - mae: 0.0301 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0022 - val_mae: 0.0175
Epoch 4/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0096 - mse: 0.0096 - msle: 0.0019 - mae: 0.0280 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0021 - val_mae: 0.0160
Epoch 5/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0089 - mse: 0.0089 - msle: 0.0017 - mae: 0.0265 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0020 - val_mae: 0.0157
Epoch 6/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0084 - mse: 0.0084 - msle: 0.0016 - mae: 0.0250 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0021 - val_mae: 0.0125
Epoch 7/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0080 - mse: 0.0080 - msle: 0.0015 - mae: 0.0241 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0022 - val_mae: 0.0095
Epoch 8/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0076 - mse: 0.0076 - msle: 0.0015 - mae: 0.0232 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0019 - val_mae: 0.0164
Epoch 9/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0015 - mae: 0.0230 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0022 - val_mae: 0.0093
Epoch 10/10
300/300 [==============================] - 3s 10ms/step - loss: 0.0074 - mse: 0.0074 - msle: 0.0015 - mae: 0.0229 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0021 - val_mae: 0.0143
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
300/300 [==============================] - ETA: 0s - loss: 0.0432 - mse: 0.0432 - msle: 0.0031 - mae: 0.0943
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 5s 13ms/step - loss: 0.0431 - mse: 0.0431 - msle: 0.0031 - mae: 0.0941 - val_loss: 0.0124 - val_mse: 0.0124 - val_msle: 0.0034 - val_mae: 0.0112
Epoch 2/2
300/300 [==============================] - 4s 12ms/step - loss: 0.0133 - mse: 0.0133 - msle: 0.0029 - mae: 0.0315 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0021 - val_mae: 0.0097
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
296/300 [============================>.] - ETA: 0s - loss: 0.0448 - mse: 0.0448 - msle: 0.0032 - mae: 0.0970
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 13ms/step - loss: 0.0444 - mse: 0.0444 - msle: 0.0032 - mae: 0.0961 - val_loss: 0.0105 - val_mse: 0.0105 - val_msle: 0.0033 - val_mae: 0.0106
Epoch 2/2
300/300 [==============================] - 4s 12ms/step - loss: 0.0125 - mse: 0.0125 - msle: 0.0029 - mae: 0.0301 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0019 - val_mae: 0.0157
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
297/300 [============================>.] - ETA: 0s - loss: 0.0444 - mse: 0.0444 - msle: 0.0031 - mae: 0.0963
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 13ms/step - loss: 0.0441 - mse: 0.0441 - msle: 0.0031 - mae: 0.0956 - val_loss: 0.0128 - val_mse: 0.0128 - val_msle: 0.0034 - val_mae: 0.0213
Epoch 2/2
300/300 [==============================] - 4s 12ms/step - loss: 0.0137 - mse: 0.0137 - msle: 0.0030 - mae: 0.0312 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0023 - val_mae: 0.0214
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
297/300 [============================>.] - ETA: 0s - loss: 0.0460 - mse: 0.0460 - msle: 0.0030 - mae: 0.0995
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 5s 14ms/step - loss: 0.0457 - mse: 0.0457 - msle: 0.0030 - mae: 0.0988 - val_loss: 0.0120 - val_mse: 0.0120 - val_msle: 0.0034 - val_mae: 0.0125
Epoch 2/2
300/300 [==============================] - 4s 12ms/step - loss: 0.0130 - mse: 0.0130 - msle: 0.0029 - mae: 0.0304 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0022 - val_mae: 0.0249
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
297/300 [============================>.] - ETA: 0s - loss: 0.0487 - mse: 0.0487 - msle: 0.0032 - mae: 0.1022
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 13ms/step - loss: 0.0483 - mse: 0.0483 - msle: 0.0032 - mae: 0.1015 - val_loss: 0.0107 - val_mse: 0.0107 - val_msle: 0.0033 - val_mae: 0.0173
Epoch 2/5
300/300 [==============================] - 4s 12ms/step - loss: 0.0128 - mse: 0.0128 - msle: 0.0028 - mae: 0.0297 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0020 - val_mae: 0.0216
Epoch 3/5
300/300 [==============================] - 4s 12ms/step - loss: 0.0104 - mse: 0.0104 - msle: 0.0021 - mae: 0.0266 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0018 - val_mae: 0.0169
Epoch 4/5
300/300 [==============================] - 4s 12ms/step - loss: 0.0091 - mse: 0.0091 - msle: 0.0018 - mae: 0.0260 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0017 - val_mae: 0.0102
Epoch 5/5
300/300 [==============================] - 4s 12ms/step - loss: 0.0085 - mse: 0.0085 - msle: 0.0016 - mae: 0.0249 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0017 - val_mae: 0.0103
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
298/300 [============================>.] - ETA: 0s - loss: 0.0454 - mse: 0.0454 - msle: 0.0030 - mae: 0.0987
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 13ms/step - loss: 0.0452 - mse: 0.0452 - msle: 0.0030 - mae: 0.0981 - val_loss: 0.0129 - val_mse: 0.0129 - val_msle: 0.0034 - val_mae: 0.0213
Epoch 2/5
300/300 [==============================] - 4s 12ms/step - loss: 0.0131 - mse: 0.0131 - msle: 0.0029 - mae: 0.0303 - val_loss: 0.0090 - val_mse: 0.0090 - val_msle: 0.0023 - val_mae: 0.0103
Epoch 3/5
300/300 [==============================] - 4s 12ms/step - loss: 0.0106 - mse: 0.0106 - msle: 0.0021 - mae: 0.0309 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0021 - val_mae: 0.0251
Epoch 4/5
300/300 [==============================] - 4s 12ms/step - loss: 0.0088 - mse: 0.0088 - msle: 0.0017 - mae: 0.0297 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0019 - val_mae: 0.0130
Epoch 5/5
300/300 [==============================] - 4s 12ms/step - loss: 0.0082 - mse: 0.0082 - msle: 0.0016 - mae: 0.0268 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0018 - val_mae: 0.0128
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
297/300 [============================>.] - ETA: 0s - loss: 0.0451 - mse: 0.0451 - msle: 0.0032 - mae: 0.0965
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 13ms/step - loss: 0.0448 - mse: 0.0448 - msle: 0.0032 - mae: 0.0958 - val_loss: 0.0130 - val_mse: 0.0130 - val_msle: 0.0034 - val_mae: 0.0149
Epoch 2/5
300/300 [==============================] - 4s 12ms/step - loss: 0.0138 - mse: 0.0138 - msle: 0.0030 - mae: 0.0304 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0020 - val_mae: 0.0113
Epoch 3/5
300/300 [==============================] - 4s 12ms/step - loss: 0.0109 - mse: 0.0109 - msle: 0.0022 - mae: 0.0307 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0018 - val_mae: 0.0160
Epoch 4/5
300/300 [==============================] - 4s 12ms/step - loss: 0.0091 - mse: 0.0091 - msle: 0.0018 - mae: 0.0282 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0017 - val_mae: 0.0107
Epoch 5/5
300/300 [==============================] - 4s 12ms/step - loss: 0.0084 - mse: 0.0084 - msle: 0.0016 - mae: 0.0265 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0016 - val_mae: 0.0112
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
297/300 [============================>.] - ETA: 0s - loss: 0.0450 - mse: 0.0450 - msle: 0.0032 - mae: 0.0972
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 13ms/step - loss: 0.0447 - mse: 0.0447 - msle: 0.0032 - mae: 0.0965 - val_loss: 0.0123 - val_mse: 0.0123 - val_msle: 0.0034 - val_mae: 0.0112
Epoch 2/5
300/300 [==============================] - 4s 12ms/step - loss: 0.0141 - mse: 0.0141 - msle: 0.0032 - mae: 0.0309 - val_loss: 0.0101 - val_mse: 0.0101 - val_msle: 0.0030 - val_mae: 0.0114
Epoch 3/5
300/300 [==============================] - 4s 12ms/step - loss: 0.0121 - mse: 0.0121 - msle: 0.0026 - mae: 0.0319 - val_loss: 0.0091 - val_mse: 0.0091 - val_msle: 0.0023 - val_mae: 0.0100
Epoch 4/5
300/300 [==============================] - 4s 12ms/step - loss: 0.0107 - mse: 0.0107 - msle: 0.0021 - mae: 0.0294 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0019 - val_mae: 0.0276
Epoch 5/5
300/300 [==============================] - 4s 12ms/step - loss: 0.0087 - mse: 0.0087 - msle: 0.0017 - mae: 0.0280 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0017 - val_mae: 0.0097
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
299/300 [============================>.] - ETA: 0s - loss: 0.0444 - mse: 0.0444 - msle: 0.0031 - mae: 0.0963
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 13ms/step - loss: 0.0443 - mse: 0.0443 - msle: 0.0031 - mae: 0.0960 - val_loss: 0.0108 - val_mse: 0.0108 - val_msle: 0.0033 - val_mae: 0.0139
Epoch 2/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0127 - mse: 0.0127 - msle: 0.0029 - mae: 0.0296 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0020 - val_mae: 0.0157
Epoch 3/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0104 - mse: 0.0104 - msle: 0.0020 - mae: 0.0279 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0018 - val_mae: 0.0240
Epoch 4/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0092 - mse: 0.0092 - msle: 0.0018 - mae: 0.0282 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0017 - val_mae: 0.0186
Epoch 5/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0085 - mse: 0.0085 - msle: 0.0017 - mae: 0.0256 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0016 - val_mae: 0.0269
Epoch 6/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0080 - mse: 0.0080 - msle: 0.0016 - mae: 0.0251 - val_loss: 0.0068 - val_mse: 0.0068 - val_msle: 0.0016 - val_mae: 0.0157
Epoch 7/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0080 - mse: 0.0080 - msle: 0.0016 - mae: 0.0251 - val_loss: 0.0068 - val_mse: 0.0068 - val_msle: 0.0017 - val_mae: 0.0111
Epoch 8/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0014 - mae: 0.0248 - val_loss: 0.0067 - val_mse: 0.0067 - val_msle: 0.0014 - val_mae: 0.0293
Epoch 9/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0072 - mse: 0.0072 - msle: 0.0014 - mae: 0.0236 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0016 - val_mae: 0.0313
Epoch 10/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0070 - mse: 0.0070 - msle: 0.0013 - mae: 0.0233 - val_loss: 0.0061 - val_mse: 0.0061 - val_msle: 0.0014 - val_mae: 0.0190
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
297/300 [============================>.] - ETA: 0s - loss: 0.0448 - mse: 0.0448 - msle: 0.0032 - mae: 0.0966
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 13ms/step - loss: 0.0445 - mse: 0.0445 - msle: 0.0032 - mae: 0.0959 - val_loss: 0.0130 - val_mse: 0.0130 - val_msle: 0.0034 - val_mae: 0.0223
Epoch 2/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0134 - mse: 0.0134 - msle: 0.0030 - mae: 0.0304 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0021 - val_mae: 0.0191
Epoch 3/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0107 - mse: 0.0107 - msle: 0.0022 - mae: 0.0300 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0019 - val_mae: 0.0337
Epoch 4/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0091 - mse: 0.0091 - msle: 0.0017 - mae: 0.0286 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0018 - val_mae: 0.0100
Epoch 5/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0087 - mse: 0.0087 - msle: 0.0017 - mae: 0.0267 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0017 - val_mae: 0.0187
Epoch 6/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0084 - mse: 0.0084 - msle: 0.0016 - mae: 0.0256 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0016 - val_mae: 0.0253
Epoch 7/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0081 - mse: 0.0081 - msle: 0.0016 - mae: 0.0250 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0015 - val_mae: 0.0127
Epoch 8/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0080 - mse: 0.0080 - msle: 0.0015 - mae: 0.0244 - val_loss: 0.0066 - val_mse: 0.0066 - val_msle: 0.0014 - val_mae: 0.0233
Epoch 9/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0014 - mae: 0.0239 - val_loss: 0.0063 - val_mse: 0.0063 - val_msle: 0.0013 - val_mae: 0.0222
Epoch 10/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0074 - mse: 0.0074 - msle: 0.0014 - mae: 0.0235 - val_loss: 0.0060 - val_mse: 0.0060 - val_msle: 0.0012 - val_mae: 0.0226
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
300/300 [==============================] - ETA: 0s - loss: 0.0452 - mse: 0.0452 - msle: 0.0032 - mae: 0.0971
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 13ms/step - loss: 0.0451 - mse: 0.0451 - msle: 0.0032 - mae: 0.0970 - val_loss: 0.0111 - val_mse: 0.0111 - val_msle: 0.0034 - val_mae: 0.0147
Epoch 2/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0131 - mse: 0.0131 - msle: 0.0029 - mae: 0.0308 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0020 - val_mae: 0.0113
Epoch 3/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0100 - mse: 0.0100 - msle: 0.0020 - mae: 0.0281 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0020 - val_mae: 0.0176
Epoch 4/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0083 - mse: 0.0083 - msle: 0.0016 - mae: 0.0275 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0017 - val_mae: 0.0110
Epoch 5/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0078 - mse: 0.0078 - msle: 0.0015 - mae: 0.0269 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0019 - val_mae: 0.0155
Epoch 6/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0072 - mse: 0.0072 - msle: 0.0014 - mae: 0.0247 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0018 - val_mae: 0.0097
Epoch 7/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0072 - mse: 0.0072 - msle: 0.0014 - mae: 0.0249 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0020 - val_mae: 0.0167
Epoch 8/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0068 - mse: 0.0068 - msle: 0.0013 - mae: 0.0233 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0019 - val_mae: 0.0105
Epoch 9/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0064 - mse: 0.0064 - msle: 0.0012 - mae: 0.0231 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0022 - val_mae: 0.0095
Epoch 10/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0064 - mse: 0.0064 - msle: 0.0012 - mae: 0.0234 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0019 - val_mae: 0.0110
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
299/300 [============================>.] - ETA: 0s - loss: 0.0447 - mse: 0.0447 - msle: 0.0031 - mae: 0.0971
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 4s 13ms/step - loss: 0.0446 - mse: 0.0446 - msle: 0.0031 - mae: 0.0967 - val_loss: 0.0123 - val_mse: 0.0123 - val_msle: 0.0033 - val_mae: 0.0107
Epoch 2/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0136 - mse: 0.0136 - msle: 0.0030 - mae: 0.0307 - val_loss: 0.0090 - val_mse: 0.0090 - val_msle: 0.0025 - val_mae: 0.0137
Epoch 3/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0114 - mse: 0.0114 - msle: 0.0023 - mae: 0.0317 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0019 - val_mae: 0.0098
Epoch 4/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0100 - mse: 0.0100 - msle: 0.0020 - mae: 0.0294 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0020 - val_mae: 0.0125
Epoch 5/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0089 - mse: 0.0089 - msle: 0.0018 - mae: 0.0290 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0019 - val_mae: 0.0135
Epoch 6/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0082 - mse: 0.0082 - msle: 0.0016 - mae: 0.0272 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0018 - val_mae: 0.0114
Epoch 7/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0078 - mse: 0.0078 - msle: 0.0015 - mae: 0.0267 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0017 - val_mae: 0.0132
Epoch 8/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0073 - mse: 0.0073 - msle: 0.0014 - mae: 0.0249 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0017 - val_mae: 0.0196
Epoch 9/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0070 - mse: 0.0070 - msle: 0.0013 - mae: 0.0240 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0017 - val_mae: 0.0109
Epoch 10/10
300/300 [==============================] - 4s 12ms/step - loss: 0.0068 - mse: 0.0068 - msle: 0.0013 - mae: 0.0242 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0016 - val_mae: 0.0203
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
300/300 [==============================] - ETA: 0s - loss: 0.0444 - mse: 0.0444 - msle: 0.0032 - mae: 0.0953
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 6s 19ms/step - loss: 0.0443 - mse: 0.0443 - msle: 0.0032 - mae: 0.0951 - val_loss: 0.0114 - val_mse: 0.0114 - val_msle: 0.0034 - val_mae: 0.0116
Epoch 2/2
300/300 [==============================] - 5s 18ms/step - loss: 0.0132 - mse: 0.0132 - msle: 0.0030 - mae: 0.0298 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0019 - val_mae: 0.0184
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
299/300 [============================>.] - ETA: 0s - loss: 0.0410 - mse: 0.0410 - msle: 0.0030 - mae: 0.0902
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 6s 18ms/step - loss: 0.0408 - mse: 0.0408 - msle: 0.0030 - mae: 0.0899 - val_loss: 0.0131 - val_mse: 0.0131 - val_msle: 0.0034 - val_mae: 0.0194
Epoch 2/2
300/300 [==============================] - 5s 18ms/step - loss: 0.0137 - mse: 0.0137 - msle: 0.0030 - mae: 0.0309 - val_loss: 0.0098 - val_mse: 0.0098 - val_msle: 0.0027 - val_mae: 0.0248
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
297/300 [============================>.] - ETA: 0s - loss: 0.0431 - mse: 0.0431 - msle: 0.0031 - mae: 0.0937
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 6s 18ms/step - loss: 0.0428 - mse: 0.0428 - msle: 0.0031 - mae: 0.0931 - val_loss: 0.0121 - val_mse: 0.0121 - val_msle: 0.0034 - val_mae: 0.0114
Epoch 2/2
300/300 [==============================] - 5s 18ms/step - loss: 0.0130 - mse: 0.0130 - msle: 0.0030 - mae: 0.0305 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0021 - val_mae: 0.0159
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
300/300 [==============================] - ETA: 0s - loss: 0.0403 - mse: 0.0403 - msle: 0.0030 - mae: 0.0915
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 6s 19ms/step - loss: 0.0402 - mse: 0.0402 - msle: 0.0030 - mae: 0.0914 - val_loss: 0.0115 - val_mse: 0.0115 - val_msle: 0.0034 - val_mae: 0.0150
Epoch 2/2
300/300 [==============================] - 5s 18ms/step - loss: 0.0125 - mse: 0.0125 - msle: 0.0030 - mae: 0.0290 - val_loss: 0.0095 - val_mse: 0.0095 - val_msle: 0.0028 - val_mae: 0.0168
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
298/300 [============================>.] - ETA: 0s - loss: 0.0407 - mse: 0.0407 - msle: 0.0030 - mae: 0.0891
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 6s 18ms/step - loss: 0.0405 - mse: 0.0405 - msle: 0.0030 - mae: 0.0886 - val_loss: 0.0145 - val_mse: 0.0145 - val_msle: 0.0034 - val_mae: 0.0250
Epoch 2/5
300/300 [==============================] - 5s 18ms/step - loss: 0.0142 - mse: 0.0142 - msle: 0.0030 - mae: 0.0324 - val_loss: 0.0100 - val_mse: 0.0100 - val_msle: 0.0027 - val_mae: 0.0154
Epoch 3/5
300/300 [==============================] - 5s 18ms/step - loss: 0.0110 - mse: 0.0110 - msle: 0.0024 - mae: 0.0299 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0017 - val_mae: 0.0113
Epoch 4/5
300/300 [==============================] - 5s 18ms/step - loss: 0.0089 - mse: 0.0089 - msle: 0.0017 - mae: 0.0277 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0015 - val_mae: 0.0187
Epoch 5/5
300/300 [==============================] - 5s 18ms/step - loss: 0.0081 - mse: 0.0081 - msle: 0.0016 - mae: 0.0270 - val_loss: 0.0062 - val_mse: 0.0062 - val_msle: 0.0014 - val_mae: 0.0133
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
300/300 [==============================] - ETA: 0s - loss: 0.0414 - mse: 0.0414 - msle: 0.0032 - mae: 0.0898
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 6s 19ms/step - loss: 0.0413 - mse: 0.0413 - msle: 0.0032 - mae: 0.0897 - val_loss: 0.0123 - val_mse: 0.0123 - val_msle: 0.0034 - val_mae: 0.0193
Epoch 2/5
300/300 [==============================] - 5s 18ms/step - loss: 0.0135 - mse: 0.0135 - msle: 0.0030 - mae: 0.0304 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0020 - val_mae: 0.0106
Epoch 3/5
300/300 [==============================] - 5s 18ms/step - loss: 0.0102 - mse: 0.0102 - msle: 0.0021 - mae: 0.0288 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0021 - val_mae: 0.0153
Epoch 4/5
300/300 [==============================] - 5s 18ms/step - loss: 0.0085 - mse: 0.0085 - msle: 0.0017 - mae: 0.0270 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0020 - val_mae: 0.0165
Epoch 5/5
300/300 [==============================] - 5s 18ms/step - loss: 0.0079 - mse: 0.0079 - msle: 0.0016 - mae: 0.0270 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0018 - val_mae: 0.0080
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
299/300 [============================>.] - ETA: 0s - loss: 0.0412 - mse: 0.0412 - msle: 0.0032 - mae: 0.0896
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 6s 18ms/step - loss: 0.0410 - mse: 0.0410 - msle: 0.0032 - mae: 0.0893 - val_loss: 0.0114 - val_mse: 0.0114 - val_msle: 0.0034 - val_mae: 0.0101
Epoch 2/5
300/300 [==============================] - 5s 18ms/step - loss: 0.0131 - mse: 0.0131 - msle: 0.0031 - mae: 0.0300 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0020 - val_mae: 0.0123
Epoch 3/5
300/300 [==============================] - 5s 18ms/step - loss: 0.0105 - mse: 0.0105 - msle: 0.0022 - mae: 0.0290 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0018 - val_mae: 0.0148
Epoch 4/5
300/300 [==============================] - 5s 18ms/step - loss: 0.0089 - mse: 0.0089 - msle: 0.0017 - mae: 0.0286 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0019 - val_mae: 0.0171
Epoch 5/5
300/300 [==============================] - 5s 18ms/step - loss: 0.0078 - mse: 0.0078 - msle: 0.0015 - mae: 0.0265 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0019 - val_mae: 0.0098
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
300/300 [==============================] - ETA: 0s - loss: 0.0419 - mse: 0.0419 - msle: 0.0031 - mae: 0.0912
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 6s 18ms/step - loss: 0.0418 - mse: 0.0418 - msle: 0.0031 - mae: 0.0911 - val_loss: 0.0118 - val_mse: 0.0118 - val_msle: 0.0034 - val_mae: 0.0212
Epoch 2/5
300/300 [==============================] - 5s 18ms/step - loss: 0.0126 - mse: 0.0126 - msle: 0.0029 - mae: 0.0295 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0021 - val_mae: 0.0238
Epoch 3/5
300/300 [==============================] - 5s 18ms/step - loss: 0.0098 - mse: 0.0098 - msle: 0.0020 - mae: 0.0285 - val_loss: 0.0096 - val_mse: 0.0096 - val_msle: 0.0024 - val_mae: 0.0264
Epoch 4/5
300/300 [==============================] - 5s 18ms/step - loss: 0.0081 - mse: 0.0081 - msle: 0.0016 - mae: 0.0266 - val_loss: 0.0100 - val_mse: 0.0100 - val_msle: 0.0027 - val_mae: 0.0175
Epoch 5/5
300/300 [==============================] - 6s 19ms/step - loss: 0.0072 - mse: 0.0072 - msle: 0.0014 - mae: 0.0255 - val_loss: 0.0096 - val_mse: 0.0096 - val_msle: 0.0027 - val_mae: 0.0101
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
297/300 [============================>.] - ETA: 0s - loss: 0.0432 - mse: 0.0432 - msle: 0.0031 - mae: 0.0952
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 6s 18ms/step - loss: 0.0429 - mse: 0.0429 - msle: 0.0031 - mae: 0.0945 - val_loss: 0.0113 - val_mse: 0.0113 - val_msle: 0.0034 - val_mae: 0.0186
Epoch 2/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0126 - mse: 0.0126 - msle: 0.0029 - mae: 0.0296 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0016 - val_mae: 0.0173
Epoch 3/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0103 - mse: 0.0103 - msle: 0.0021 - mae: 0.0293 - val_loss: 0.0064 - val_mse: 0.0064 - val_msle: 0.0014 - val_mae: 0.0155
Epoch 4/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0088 - mse: 0.0088 - msle: 0.0017 - mae: 0.0277 - val_loss: 0.0062 - val_mse: 0.0062 - val_msle: 0.0013 - val_mae: 0.0188
Epoch 5/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0081 - mse: 0.0081 - msle: 0.0015 - mae: 0.0275 - val_loss: 0.0062 - val_mse: 0.0062 - val_msle: 0.0014 - val_mae: 0.0192
Epoch 6/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0014 - mae: 0.0263 - val_loss: 0.0061 - val_mse: 0.0061 - val_msle: 0.0013 - val_mae: 0.0235
Epoch 7/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0069 - mse: 0.0069 - msle: 0.0013 - mae: 0.0267 - val_loss: 0.0059 - val_mse: 0.0059 - val_msle: 0.0013 - val_mae: 0.0181
Epoch 8/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0065 - mse: 0.0065 - msle: 0.0012 - mae: 0.0255 - val_loss: 0.0062 - val_mse: 0.0062 - val_msle: 0.0011 - val_mae: 0.0312
Epoch 9/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0061 - mse: 0.0061 - msle: 0.0011 - mae: 0.0244 - val_loss: 0.0056 - val_mse: 0.0056 - val_msle: 0.0013 - val_mae: 0.0090
Epoch 10/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0061 - mse: 0.0061 - msle: 0.0011 - mae: 0.0258 - val_loss: 0.0057 - val_mse: 0.0057 - val_msle: 0.0013 - val_mae: 0.0185
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
300/300 [==============================] - ETA: 0s - loss: 0.0421 - mse: 0.0421 - msle: 0.0031 - mae: 0.0920
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 6s 19ms/step - loss: 0.0420 - mse: 0.0420 - msle: 0.0031 - mae: 0.0918 - val_loss: 0.0135 - val_mse: 0.0135 - val_msle: 0.0034 - val_mae: 0.0107
Epoch 2/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0138 - mse: 0.0138 - msle: 0.0030 - mae: 0.0302 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0021 - val_mae: 0.0121
Epoch 3/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0108 - mse: 0.0108 - msle: 0.0022 - mae: 0.0294 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0018 - val_mae: 0.0313
Epoch 4/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0090 - mse: 0.0090 - msle: 0.0017 - mae: 0.0290 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0018 - val_mae: 0.0111
Epoch 5/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0077 - mse: 0.0077 - msle: 0.0015 - mae: 0.0273 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0019 - val_mae: 0.0125
Epoch 6/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0070 - mse: 0.0070 - msle: 0.0013 - mae: 0.0246 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0020 - val_mae: 0.0325
Epoch 7/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0065 - mse: 0.0065 - msle: 0.0013 - mae: 0.0248 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0022 - val_mae: 0.0110
Epoch 8/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0061 - mse: 0.0061 - msle: 0.0012 - mae: 0.0238 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0019 - val_mae: 0.0304
Epoch 9/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0059 - mse: 0.0059 - msle: 0.0011 - mae: 0.0238 - val_loss: 0.0066 - val_mse: 0.0066 - val_msle: 0.0016 - val_mae: 0.0218
Epoch 10/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0056 - mse: 0.0056 - msle: 0.0010 - mae: 0.0227 - val_loss: 0.0066 - val_mse: 0.0066 - val_msle: 0.0017 - val_mae: 0.0176
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
299/300 [============================>.] - ETA: 0s - loss: 0.0431 - mse: 0.0431 - msle: 0.0031 - mae: 0.0942
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 6s 19ms/step - loss: 0.0430 - mse: 0.0430 - msle: 0.0031 - mae: 0.0939 - val_loss: 0.0125 - val_mse: 0.0125 - val_msle: 0.0034 - val_mae: 0.0123
Epoch 2/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0131 - mse: 0.0131 - msle: 0.0030 - mae: 0.0304 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0019 - val_mae: 0.0212
Epoch 3/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0096 - mse: 0.0096 - msle: 0.0019 - mae: 0.0282 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0019 - val_mae: 0.0204
Epoch 4/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0076 - mse: 0.0076 - msle: 0.0014 - mae: 0.0268 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0017 - val_mae: 0.0313
Epoch 5/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0070 - mse: 0.0070 - msle: 0.0013 - mae: 0.0252 - val_loss: 0.0066 - val_mse: 0.0066 - val_msle: 0.0016 - val_mae: 0.0125
Epoch 6/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0066 - mse: 0.0066 - msle: 0.0012 - mae: 0.0257 - val_loss: 0.0059 - val_mse: 0.0059 - val_msle: 0.0013 - val_mae: 0.0184
Epoch 7/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0063 - mse: 0.0063 - msle: 0.0012 - mae: 0.0250 - val_loss: 0.0060 - val_mse: 0.0060 - val_msle: 0.0014 - val_mae: 0.0085
Epoch 8/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0058 - mse: 0.0058 - msle: 0.0011 - mae: 0.0235 - val_loss: 0.0057 - val_mse: 0.0057 - val_msle: 0.0012 - val_mae: 0.0179
Epoch 9/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0054 - mse: 0.0054 - msle: 9.8923e-04 - mae: 0.0228 - val_loss: 0.0052 - val_mse: 0.0052 - val_msle: 0.0012 - val_mae: 0.0105
Epoch 10/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0051 - mse: 0.0051 - msle: 8.9933e-04 - mae: 0.0225 - val_loss: 0.0049 - val_mse: 0.0049 - val_msle: 0.0011 - val_mae: 0.0081
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
300/300 [==============================] - ETA: 0s - loss: 0.0422 - mse: 0.0422 - msle: 0.0031 - mae: 0.0913
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 6s 18ms/step - loss: 0.0421 - mse: 0.0421 - msle: 0.0031 - mae: 0.0911 - val_loss: 0.0126 - val_mse: 0.0126 - val_msle: 0.0034 - val_mae: 0.0142
Epoch 2/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0135 - mse: 0.0135 - msle: 0.0031 - mae: 0.0306 - val_loss: 0.0105 - val_mse: 0.0105 - val_msle: 0.0031 - val_mae: 0.0216
Epoch 3/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0111 - mse: 0.0111 - msle: 0.0025 - mae: 0.0291 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0019 - val_mae: 0.0100
Epoch 4/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0093 - mse: 0.0093 - msle: 0.0018 - mae: 0.0297 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0021 - val_mae: 0.0238
Epoch 5/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0078 - mse: 0.0078 - msle: 0.0015 - mae: 0.0279 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0023 - val_mae: 0.0236
Epoch 6/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0071 - mse: 0.0071 - msle: 0.0013 - mae: 0.0263 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0023 - val_mae: 0.0117
Epoch 7/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0064 - mse: 0.0064 - msle: 0.0012 - mae: 0.0264 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0021 - val_mae: 0.0100
Epoch 8/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0060 - mse: 0.0060 - msle: 0.0011 - mae: 0.0243 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0022 - val_mae: 0.0261
Epoch 9/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0057 - mse: 0.0057 - msle: 0.0010 - mae: 0.0238 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0022 - val_mae: 0.0093
Epoch 10/10
300/300 [==============================] - 5s 18ms/step - loss: 0.0054 - mse: 0.0054 - msle: 9.7063e-04 - mae: 0.0242 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0018 - val_mae: 0.0140
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
300/300 [==============================] - ETA: 0s - loss: 0.0400 - mse: 0.0400 - msle: 0.0032 - mae: 0.0883
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 10s 29ms/step - loss: 0.0399 - mse: 0.0399 - msle: 0.0032 - mae: 0.0881 - val_loss: 0.0128 - val_mse: 0.0128 - val_msle: 0.0034 - val_mae: 0.0161
Epoch 2/2
300/300 [==============================] - 8s 28ms/step - loss: 0.0139 - mse: 0.0139 - msle: 0.0031 - mae: 0.0314 - val_loss: 0.0093 - val_mse: 0.0093 - val_msle: 0.0025 - val_mae: 0.0109
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
299/300 [============================>.] - ETA: 0s - loss: 0.0406 - mse: 0.0406 - msle: 0.0030 - mae: 0.0912
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 9s 28ms/step - loss: 0.0405 - mse: 0.0405 - msle: 0.0030 - mae: 0.0909 - val_loss: 0.0122 - val_mse: 0.0122 - val_msle: 0.0034 - val_mae: 0.0226
Epoch 2/2
300/300 [==============================] - 8s 28ms/step - loss: 0.0132 - mse: 0.0132 - msle: 0.0029 - mae: 0.0314 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0017 - val_mae: 0.0119
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
300/300 [==============================] - ETA: 0s - loss: 0.0386 - mse: 0.0386 - msle: 0.0031 - mae: 0.0862
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 9s 29ms/step - loss: 0.0386 - mse: 0.0386 - msle: 0.0031 - mae: 0.0860 - val_loss: 0.0111 - val_mse: 0.0111 - val_msle: 0.0034 - val_mae: 0.0124
Epoch 2/2
300/300 [==============================] - 8s 27ms/step - loss: 0.0127 - mse: 0.0127 - msle: 0.0031 - mae: 0.0295 - val_loss: 0.0099 - val_mse: 0.0099 - val_msle: 0.0029 - val_mae: 0.0210
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
299/300 [============================>.] - ETA: 0s - loss: 0.0381 - mse: 0.0381 - msle: 0.0030 - mae: 0.0830
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 9s 28ms/step - loss: 0.0380 - mse: 0.0380 - msle: 0.0030 - mae: 0.0828 - val_loss: 0.0119 - val_mse: 0.0119 - val_msle: 0.0034 - val_mae: 0.0126
Epoch 2/2
300/300 [==============================] - 8s 28ms/step - loss: 0.0129 - mse: 0.0129 - msle: 0.0030 - mae: 0.0301 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0024 - val_mae: 0.0102
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
298/300 [============================>.] - ETA: 0s - loss: 0.0398 - mse: 0.0398 - msle: 0.0031 - mae: 0.0888
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 10s 30ms/step - loss: 0.0396 - mse: 0.0396 - msle: 0.0031 - mae: 0.0883 - val_loss: 0.0145 - val_mse: 0.0145 - val_msle: 0.0034 - val_mae: 0.0240
Epoch 2/5
300/300 [==============================] - 8s 28ms/step - loss: 0.0147 - mse: 0.0147 - msle: 0.0031 - mae: 0.0324 - val_loss: 0.0104 - val_mse: 0.0104 - val_msle: 0.0033 - val_mae: 0.0125
Epoch 3/5
300/300 [==============================] - 8s 28ms/step - loss: 0.0116 - mse: 0.0116 - msle: 0.0029 - mae: 0.0301 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0018 - val_mae: 0.0326
Epoch 4/5
300/300 [==============================] - 8s 28ms/step - loss: 0.0098 - mse: 0.0098 - msle: 0.0021 - mae: 0.0284 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0014 - val_mae: 0.0373
Epoch 5/5
300/300 [==============================] - 8s 28ms/step - loss: 0.0087 - mse: 0.0087 - msle: 0.0017 - mae: 0.0273 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0014 - val_mae: 0.0311
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
300/300 [==============================] - ETA: 0s - loss: 0.0399 - mse: 0.0399 - msle: 0.0031 - mae: 0.0889
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 9s 28ms/step - loss: 0.0398 - mse: 0.0398 - msle: 0.0031 - mae: 0.0887 - val_loss: 0.0128 - val_mse: 0.0128 - val_msle: 0.0034 - val_mae: 0.0231
Epoch 2/5
300/300 [==============================] - 8s 28ms/step - loss: 0.0142 - mse: 0.0142 - msle: 0.0031 - mae: 0.0322 - val_loss: 0.0091 - val_mse: 0.0091 - val_msle: 0.0022 - val_mae: 0.0202
Epoch 3/5
300/300 [==============================] - 8s 28ms/step - loss: 0.0111 - mse: 0.0111 - msle: 0.0023 - mae: 0.0299 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0017 - val_mae: 0.0184
Epoch 4/5
300/300 [==============================] - 8s 28ms/step - loss: 0.0092 - mse: 0.0092 - msle: 0.0018 - mae: 0.0300 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0016 - val_mae: 0.0218
Epoch 5/5
300/300 [==============================] - 8s 28ms/step - loss: 0.0082 - mse: 0.0082 - msle: 0.0016 - mae: 0.0289 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0017 - val_mae: 0.0135
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
298/300 [============================>.] - ETA: 0s - loss: 0.0405 - mse: 0.0405 - msle: 0.0031 - mae: 0.0888
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 9s 29ms/step - loss: 0.0403 - mse: 0.0403 - msle: 0.0031 - mae: 0.0883 - val_loss: 0.0115 - val_mse: 0.0115 - val_msle: 0.0034 - val_mae: 0.0120
Epoch 2/5
300/300 [==============================] - 8s 27ms/step - loss: 0.0127 - mse: 0.0127 - msle: 0.0030 - mae: 0.0299 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0021 - val_mae: 0.0105
Epoch 3/5
300/300 [==============================] - 8s 28ms/step - loss: 0.0101 - mse: 0.0101 - msle: 0.0021 - mae: 0.0290 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0019 - val_mae: 0.0183
Epoch 4/5
300/300 [==============================] - 8s 28ms/step - loss: 0.0082 - mse: 0.0082 - msle: 0.0016 - mae: 0.0289 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0019 - val_mae: 0.0186
Epoch 5/5
300/300 [==============================] - 8s 28ms/step - loss: 0.0072 - mse: 0.0072 - msle: 0.0014 - mae: 0.0270 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0022 - val_mae: 0.0098
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
299/300 [============================>.] - ETA: 0s - loss: 0.0393 - mse: 0.0393 - msle: 0.0031 - mae: 0.0876
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 9s 28ms/step - loss: 0.0391 - mse: 0.0391 - msle: 0.0031 - mae: 0.0873 - val_loss: 0.0124 - val_mse: 0.0124 - val_msle: 0.0034 - val_mae: 0.0161
Epoch 2/5
300/300 [==============================] - 8s 28ms/step - loss: 0.0137 - mse: 0.0137 - msle: 0.0031 - mae: 0.0311 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0020 - val_mae: 0.0117
Epoch 3/5
300/300 [==============================] - 8s 28ms/step - loss: 0.0105 - mse: 0.0105 - msle: 0.0022 - mae: 0.0300 - val_loss: 0.0065 - val_mse: 0.0065 - val_msle: 0.0014 - val_mae: 0.0138
Epoch 4/5
300/300 [==============================] - 8s 28ms/step - loss: 0.0085 - mse: 0.0085 - msle: 0.0017 - mae: 0.0287 - val_loss: 0.0067 - val_mse: 0.0067 - val_msle: 0.0014 - val_mae: 0.0254
Epoch 5/5
300/300 [==============================] - 8s 28ms/step - loss: 0.0078 - mse: 0.0078 - msle: 0.0015 - mae: 0.0281 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0016 - val_mae: 0.0201
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
298/300 [============================>.] - ETA: 0s - loss: 0.0408 - mse: 0.0408 - msle: 0.0030 - mae: 0.0907
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 9s 29ms/step - loss: 0.0406 - mse: 0.0406 - msle: 0.0030 - mae: 0.0902 - val_loss: 0.0117 - val_mse: 0.0117 - val_msle: 0.0034 - val_mae: 0.0247
Epoch 2/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0123 - mse: 0.0123 - msle: 0.0029 - mae: 0.0283 - val_loss: 0.0093 - val_mse: 0.0093 - val_msle: 0.0026 - val_mae: 0.0141
Epoch 3/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0104 - mse: 0.0104 - msle: 0.0023 - mae: 0.0274 - val_loss: 0.0090 - val_mse: 0.0090 - val_msle: 0.0023 - val_mae: 0.0188
Epoch 4/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0085 - mse: 0.0085 - msle: 0.0017 - mae: 0.0269 - val_loss: 0.0097 - val_mse: 0.0097 - val_msle: 0.0026 - val_mae: 0.0270
Epoch 5/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0077 - mse: 0.0077 - msle: 0.0015 - mae: 0.0260 - val_loss: 0.0097 - val_mse: 0.0097 - val_msle: 0.0025 - val_mae: 0.0324
Epoch 6/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0072 - mse: 0.0072 - msle: 0.0014 - mae: 0.0257 - val_loss: 0.0095 - val_mse: 0.0095 - val_msle: 0.0025 - val_mae: 0.0238
Epoch 7/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0065 - mse: 0.0065 - msle: 0.0012 - mae: 0.0253 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0023 - val_mae: 0.0314
Epoch 8/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0058 - mse: 0.0058 - msle: 0.0011 - mae: 0.0256 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0023 - val_mae: 0.0213
Epoch 9/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0054 - mse: 0.0054 - msle: 9.7181e-04 - mae: 0.0239 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0023 - val_mae: 0.0198
Epoch 10/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0052 - mse: 0.0052 - msle: 9.2455e-04 - mae: 0.0240 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0021 - val_mae: 0.0327
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
298/300 [============================>.] - ETA: 0s - loss: 0.0410 - mse: 0.0410 - msle: 0.0031 - mae: 0.0899
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 9s 29ms/step - loss: 0.0408 - mse: 0.0408 - msle: 0.0031 - mae: 0.0894 - val_loss: 0.0110 - val_mse: 0.0110 - val_msle: 0.0034 - val_mae: 0.0156
Epoch 2/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0122 - mse: 0.0122 - msle: 0.0029 - mae: 0.0294 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0018 - val_mae: 0.0175
Epoch 3/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0098 - mse: 0.0098 - msle: 0.0019 - mae: 0.0287 - val_loss: 0.0065 - val_mse: 0.0065 - val_msle: 0.0014 - val_mae: 0.0147
Epoch 4/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0080 - mse: 0.0080 - msle: 0.0015 - mae: 0.0272 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0016 - val_mae: 0.0284
Epoch 5/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0072 - mse: 0.0072 - msle: 0.0013 - mae: 0.0272 - val_loss: 0.0069 - val_mse: 0.0069 - val_msle: 0.0016 - val_mae: 0.0133
Epoch 6/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0063 - mse: 0.0063 - msle: 0.0012 - mae: 0.0254 - val_loss: 0.0069 - val_mse: 0.0069 - val_msle: 0.0014 - val_mae: 0.0326
Epoch 7/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0062 - mse: 0.0062 - msle: 0.0011 - mae: 0.0261 - val_loss: 0.0069 - val_mse: 0.0069 - val_msle: 0.0015 - val_mae: 0.0180
Epoch 8/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0055 - mse: 0.0055 - msle: 9.3887e-04 - mae: 0.0251 - val_loss: 0.0063 - val_mse: 0.0063 - val_msle: 0.0014 - val_mae: 0.0200
Epoch 9/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0052 - mse: 0.0052 - msle: 8.9733e-04 - mae: 0.0237 - val_loss: 0.0058 - val_mse: 0.0058 - val_msle: 0.0013 - val_mae: 0.0139
Epoch 10/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0049 - mse: 0.0049 - msle: 8.0929e-04 - mae: 0.0237 - val_loss: 0.0055 - val_mse: 0.0055 - val_msle: 0.0011 - val_mae: 0.0186
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
299/300 [============================>.] - ETA: 0s - loss: 0.0397 - mse: 0.0397 - msle: 0.0030 - mae: 0.0882
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 9s 29ms/step - loss: 0.0396 - mse: 0.0396 - msle: 0.0030 - mae: 0.0879 - val_loss: 0.0113 - val_mse: 0.0113 - val_msle: 0.0034 - val_mae: 0.0198
Epoch 2/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0126 - mse: 0.0126 - msle: 0.0029 - mae: 0.0296 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0020 - val_mae: 0.0320
Epoch 3/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0101 - mse: 0.0101 - msle: 0.0021 - mae: 0.0296 - val_loss: 0.0067 - val_mse: 0.0067 - val_msle: 0.0015 - val_mae: 0.0108
Epoch 4/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0080 - mse: 0.0080 - msle: 0.0015 - mae: 0.0291 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0016 - val_mae: 0.0309
Epoch 5/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0069 - mse: 0.0069 - msle: 0.0013 - mae: 0.0273 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0018 - val_mae: 0.0241
Epoch 6/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0061 - mse: 0.0061 - msle: 0.0011 - mae: 0.0262 - val_loss: 0.0068 - val_mse: 0.0068 - val_msle: 0.0015 - val_mae: 0.0138
Epoch 7/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0059 - mse: 0.0059 - msle: 0.0010 - mae: 0.0259 - val_loss: 0.0069 - val_mse: 0.0069 - val_msle: 0.0015 - val_mae: 0.0238
Epoch 8/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0054 - mse: 0.0054 - msle: 9.1631e-04 - mae: 0.0251 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0017 - val_mae: 0.0233
Epoch 9/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0050 - mse: 0.0050 - msle: 8.3393e-04 - mae: 0.0240 - val_loss: 0.0062 - val_mse: 0.0062 - val_msle: 0.0014 - val_mae: 0.0135
Epoch 10/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0045 - mse: 0.0045 - msle: 7.3766e-04 - mae: 0.0228 - val_loss: 0.0055 - val_mse: 0.0055 - val_msle: 0.0012 - val_mae: 0.0115
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
298/300 [============================>.] - ETA: 0s - loss: 0.0402 - mse: 0.0402 - msle: 0.0031 - mae: 0.0884
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 9s 29ms/step - loss: 0.0400 - mse: 0.0400 - msle: 0.0031 - mae: 0.0879 - val_loss: 0.0129 - val_mse: 0.0129 - val_msle: 0.0034 - val_mae: 0.0220
Epoch 2/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0139 - mse: 0.0139 - msle: 0.0031 - mae: 0.0310 - val_loss: 0.0102 - val_mse: 0.0102 - val_msle: 0.0031 - val_mae: 0.0113
Epoch 3/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0118 - mse: 0.0118 - msle: 0.0029 - mae: 0.0289 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0023 - val_mae: 0.0233
Epoch 4/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0108 - mse: 0.0108 - msle: 0.0024 - mae: 0.0285 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0016 - val_mae: 0.0251
Epoch 5/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0097 - mse: 0.0097 - msle: 0.0019 - mae: 0.0296 - val_loss: 0.0066 - val_mse: 0.0066 - val_msle: 0.0013 - val_mae: 0.0258
Epoch 6/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0083 - mse: 0.0083 - msle: 0.0016 - mae: 0.0290 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0013 - val_mae: 0.0408
Epoch 7/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0073 - mse: 0.0073 - msle: 0.0014 - mae: 0.0272 - val_loss: 0.0113 - val_mse: 0.0113 - val_msle: 0.0013 - val_mae: 0.0760
Epoch 8/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0069 - mse: 0.0069 - msle: 0.0013 - mae: 0.0273 - val_loss: 0.0060 - val_mse: 0.0060 - val_msle: 0.0013 - val_mae: 0.0198
Epoch 9/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0061 - mse: 0.0061 - msle: 0.0011 - mae: 0.0242 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0014 - val_mae: 0.0420
Epoch 10/10
300/300 [==============================] - 8s 28ms/step - loss: 0.0059 - mse: 0.0059 - msle: 0.0011 - mae: 0.0245 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0012 - val_mae: 0.0401
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
300/300 [==============================] - ETA: 0s - loss: 0.0356 - mse: 0.0356 - msle: 0.0033 - mae: 0.0733
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 16s 49ms/step - loss: 0.0355 - mse: 0.0355 - msle: 0.0033 - mae: 0.0732 - val_loss: 0.0136 - val_mse: 0.0136 - val_msle: 0.0034 - val_mae: 0.0228
Epoch 2/2
300/300 [==============================] - 14s 48ms/step - loss: 0.0149 - mse: 0.0149 - msle: 0.0033 - mae: 0.0360 - val_loss: 0.0105 - val_mse: 0.0105 - val_msle: 0.0032 - val_mae: 0.0143
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
299/300 [============================>.] - ETA: 0s - loss: 0.0361 - mse: 0.0361 - msle: 0.0032 - mae: 0.0776
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 16s 49ms/step - loss: 0.0360 - mse: 0.0360 - msle: 0.0032 - mae: 0.0774 - val_loss: 0.0150 - val_mse: 0.0150 - val_msle: 0.0034 - val_mae: 0.0207
Epoch 2/2
300/300 [==============================] - 14s 48ms/step - loss: 0.0150 - mse: 0.0150 - msle: 0.0032 - mae: 0.0323 - val_loss: 0.0102 - val_mse: 0.0102 - val_msle: 0.0032 - val_mae: 0.0117
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
300/300 [==============================] - ETA: 0s - loss: 0.0357 - mse: 0.0357 - msle: 0.0030 - mae: 0.0799
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 15s 49ms/step - loss: 0.0357 - mse: 0.0357 - msle: 0.0030 - mae: 0.0797 - val_loss: 0.0130 - val_mse: 0.0130 - val_msle: 0.0034 - val_mae: 0.0227
Epoch 2/2
300/300 [==============================] - 14s 48ms/step - loss: 0.0134 - mse: 0.0134 - msle: 0.0030 - mae: 0.0321 - val_loss: 0.0103 - val_mse: 0.0103 - val_msle: 0.0032 - val_mae: 0.0144
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
299/300 [============================>.] - ETA: 0s - loss: 0.0355 - mse: 0.0355 - msle: 0.0031 - mae: 0.0791
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 15s 49ms/step - loss: 0.0354 - mse: 0.0354 - msle: 0.0031 - mae: 0.0788 - val_loss: 0.0116 - val_mse: 0.0116 - val_msle: 0.0034 - val_mae: 0.0134
Epoch 2/2
300/300 [==============================] - 14s 48ms/step - loss: 0.0128 - mse: 0.0128 - msle: 0.0031 - mae: 0.0316 - val_loss: 0.0094 - val_mse: 0.0094 - val_msle: 0.0026 - val_mae: 0.0175
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
299/300 [============================>.] - ETA: 0s - loss: 0.0439 - mse: 0.0439 - msle: 0.0033 - mae: 0.0945
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 15s 49ms/step - loss: 0.0438 - mse: 0.0438 - msle: 0.0033 - mae: 0.0942 - val_loss: 0.0128 - val_mse: 0.0128 - val_msle: 0.0034 - val_mae: 0.0126
Epoch 2/5
300/300 [==============================] - 14s 48ms/step - loss: 0.0144 - mse: 0.0144 - msle: 0.0033 - mae: 0.0316 - val_loss: 0.0103 - val_mse: 0.0103 - val_msle: 0.0033 - val_mae: 0.0137
Epoch 3/5
300/300 [==============================] - 14s 48ms/step - loss: 0.0122 - mse: 0.0122 - msle: 0.0030 - mae: 0.0299 - val_loss: 0.0067 - val_mse: 0.0067 - val_msle: 0.0015 - val_mae: 0.0154
Epoch 4/5
300/300 [==============================] - 14s 48ms/step - loss: 0.0097 - mse: 0.0097 - msle: 0.0020 - mae: 0.0291 - val_loss: 0.0061 - val_mse: 0.0061 - val_msle: 0.0014 - val_mae: 0.0220
Epoch 5/5
300/300 [==============================] - 14s 48ms/step - loss: 0.0079 - mse: 0.0079 - msle: 0.0015 - mae: 0.0281 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0018 - val_mae: 0.0201
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
299/300 [============================>.] - ETA: 0s - loss: 0.0356 - mse: 0.0356 - msle: 0.0030 - mae: 0.0777
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 15s 49ms/step - loss: 0.0355 - mse: 0.0355 - msle: 0.0030 - mae: 0.0774 - val_loss: 0.0115 - val_mse: 0.0115 - val_msle: 0.0034 - val_mae: 0.0156
Epoch 2/5
300/300 [==============================] - 14s 48ms/step - loss: 0.0132 - mse: 0.0132 - msle: 0.0030 - mae: 0.0323 - val_loss: 0.0093 - val_mse: 0.0093 - val_msle: 0.0026 - val_mae: 0.0160
Epoch 3/5
300/300 [==============================] - 15s 48ms/step - loss: 0.0107 - mse: 0.0107 - msle: 0.0023 - mae: 0.0298 - val_loss: 0.0055 - val_mse: 0.0055 - val_msle: 0.0012 - val_mae: 0.0150
Epoch 4/5
300/300 [==============================] - 14s 48ms/step - loss: 0.0083 - mse: 0.0083 - msle: 0.0016 - mae: 0.0279 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0018 - val_mae: 0.0190
Epoch 5/5
300/300 [==============================] - 14s 48ms/step - loss: 0.0073 - mse: 0.0073 - msle: 0.0014 - mae: 0.0272 - val_loss: 0.0059 - val_mse: 0.0059 - val_msle: 0.0014 - val_mae: 0.0095
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
300/300 [==============================] - ETA: 0s - loss: 0.0350 - mse: 0.0350 - msle: 0.0030 - mae: 0.0779
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 15s 49ms/step - loss: 0.0349 - mse: 0.0349 - msle: 0.0030 - mae: 0.0778 - val_loss: 0.0125 - val_mse: 0.0125 - val_msle: 0.0034 - val_mae: 0.0245
Epoch 2/5
300/300 [==============================] - 14s 48ms/step - loss: 0.0129 - mse: 0.0129 - msle: 0.0029 - mae: 0.0321 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0017 - val_mae: 0.0173
Epoch 3/5
300/300 [==============================] - 14s 48ms/step - loss: 0.0093 - mse: 0.0093 - msle: 0.0019 - mae: 0.0290 - val_loss: 0.0068 - val_mse: 0.0068 - val_msle: 0.0015 - val_mae: 0.0145
Epoch 4/5
300/300 [==============================] - 14s 48ms/step - loss: 0.0078 - mse: 0.0078 - msle: 0.0015 - mae: 0.0289 - val_loss: 0.0067 - val_mse: 0.0067 - val_msle: 0.0015 - val_mae: 0.0113
Epoch 5/5
300/300 [==============================] - 14s 48ms/step - loss: 0.0063 - mse: 0.0063 - msle: 0.0011 - mae: 0.0267 - val_loss: 0.0068 - val_mse: 0.0068 - val_msle: 0.0016 - val_mae: 0.0178
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
299/300 [============================>.] - ETA: 0s - loss: 0.0461 - mse: 0.0461 - msle: 0.0031 - mae: 0.1038
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 15s 49ms/step - loss: 0.0459 - mse: 0.0459 - msle: 0.0031 - mae: 0.1034 - val_loss: 0.0157 - val_mse: 0.0157 - val_msle: 0.0034 - val_mae: 0.0334
Epoch 2/5
300/300 [==============================] - 14s 48ms/step - loss: 0.0144 - mse: 0.0144 - msle: 0.0030 - mae: 0.0308 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0020 - val_mae: 0.0131
Epoch 3/5
300/300 [==============================] - 14s 48ms/step - loss: 0.0104 - mse: 0.0104 - msle: 0.0022 - mae: 0.0301 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0016 - val_mae: 0.0123
Epoch 4/5
300/300 [==============================] - 14s 48ms/step - loss: 0.0085 - mse: 0.0085 - msle: 0.0017 - mae: 0.0281 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0018 - val_mae: 0.0153
Epoch 5/5
300/300 [==============================] - 15s 48ms/step - loss: 0.0072 - mse: 0.0072 - msle: 0.0014 - mae: 0.0270 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0019 - val_mae: 0.0188
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
300/300 [==============================] - ETA: 0s - loss: 0.0364 - mse: 0.0364 - msle: 0.0033 - mae: 0.0766
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 15s 49ms/step - loss: 0.0363 - mse: 0.0363 - msle: 0.0033 - mae: 0.0765 - val_loss: 0.0146 - val_mse: 0.0146 - val_msle: 0.0034 - val_mae: 0.0115
Epoch 2/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0152 - mse: 0.0152 - msle: 0.0033 - mae: 0.0331 - val_loss: 0.0109 - val_mse: 0.0109 - val_msle: 0.0034 - val_mae: 0.0231
Epoch 3/10
300/300 [==============================] - 15s 48ms/step - loss: 0.0125 - mse: 0.0125 - msle: 0.0031 - mae: 0.0304 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0023 - val_mae: 0.0192
Epoch 4/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0101 - mse: 0.0101 - msle: 0.0021 - mae: 0.0298 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0023 - val_mae: 0.0223
Epoch 5/10
300/300 [==============================] - 15s 49ms/step - loss: 0.0084 - mse: 0.0084 - msle: 0.0016 - mae: 0.0282 - val_loss: 0.0093 - val_mse: 0.0093 - val_msle: 0.0023 - val_mae: 0.0280
Epoch 6/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0074 - mse: 0.0074 - msle: 0.0014 - mae: 0.0280 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0021 - val_mae: 0.0224
Epoch 7/10
300/300 [==============================] - 15s 49ms/step - loss: 0.0067 - mse: 0.0067 - msle: 0.0013 - mae: 0.0267 - val_loss: 0.0097 - val_mse: 0.0097 - val_msle: 0.0024 - val_mae: 0.0308
Epoch 8/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0064 - mse: 0.0064 - msle: 0.0012 - mae: 0.0264 - val_loss: 0.0104 - val_mse: 0.0104 - val_msle: 0.0023 - val_mae: 0.0482
Epoch 9/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0059 - mse: 0.0059 - msle: 0.0011 - mae: 0.0274 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0020 - val_mae: 0.0228
Epoch 10/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0055 - mse: 0.0055 - msle: 9.7845e-04 - mae: 0.0252 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0018 - val_mae: 0.0250
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
299/300 [============================>.] - ETA: 0s - loss: 0.0357 - mse: 0.0357 - msle: 0.0031 - mae: 0.0782
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 15s 49ms/step - loss: 0.0356 - mse: 0.0356 - msle: 0.0031 - mae: 0.0779 - val_loss: 0.0123 - val_mse: 0.0123 - val_msle: 0.0034 - val_mae: 0.0176
Epoch 2/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0136 - mse: 0.0136 - msle: 0.0031 - mae: 0.0316 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0024 - val_mae: 0.0142
Epoch 3/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0102 - mse: 0.0102 - msle: 0.0022 - mae: 0.0303 - val_loss: 0.0057 - val_mse: 0.0057 - val_msle: 0.0012 - val_mae: 0.0229
Epoch 4/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0077 - mse: 0.0077 - msle: 0.0015 - mae: 0.0279 - val_loss: 0.0065 - val_mse: 0.0065 - val_msle: 0.0016 - val_mae: 0.0179
Epoch 5/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0065 - mse: 0.0065 - msle: 0.0012 - mae: 0.0261 - val_loss: 0.0064 - val_mse: 0.0064 - val_msle: 0.0015 - val_mae: 0.0164
Epoch 6/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0058 - mse: 0.0058 - msle: 0.0011 - mae: 0.0258 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0018 - val_mae: 0.0169
Epoch 7/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0054 - mse: 0.0054 - msle: 9.9774e-04 - mae: 0.0251 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0021 - val_mae: 0.0228
Epoch 8/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0051 - mse: 0.0051 - msle: 9.0455e-04 - mae: 0.0252 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0021 - val_mae: 0.0240
Epoch 9/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0046 - mse: 0.0046 - msle: 8.1845e-04 - mae: 0.0237 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0017 - val_mae: 0.0252
Epoch 10/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0043 - mse: 0.0043 - msle: 7.2215e-04 - mae: 0.0232 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0014 - val_mae: 0.0270
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
300/300 [==============================] - ETA: 0s - loss: 0.0362 - mse: 0.0362 - msle: 0.0031 - mae: 0.0790
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 16s 49ms/step - loss: 0.0362 - mse: 0.0362 - msle: 0.0031 - mae: 0.0789 - val_loss: 0.0138 - val_mse: 0.0138 - val_msle: 0.0034 - val_mae: 0.0260
Epoch 2/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0142 - mse: 0.0142 - msle: 0.0031 - mae: 0.0335 - val_loss: 0.0104 - val_mse: 0.0104 - val_msle: 0.0032 - val_mae: 0.0137
Epoch 3/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0112 - mse: 0.0112 - msle: 0.0028 - mae: 0.0300 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0017 - val_mae: 0.0154
Epoch 4/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0091 - mse: 0.0091 - msle: 0.0018 - mae: 0.0291 - val_loss: 0.0067 - val_mse: 0.0067 - val_msle: 0.0015 - val_mae: 0.0143
Epoch 5/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0014 - mae: 0.0272 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0016 - val_mae: 0.0229
Epoch 6/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0067 - mse: 0.0067 - msle: 0.0012 - mae: 0.0266 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0019 - val_mae: 0.0225
Epoch 7/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0056 - mse: 0.0056 - msle: 0.0010 - mae: 0.0250 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0019 - val_mae: 0.0140
Epoch 8/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0052 - mse: 0.0052 - msle: 9.2719e-04 - mae: 0.0249 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0017 - val_mae: 0.0131
Epoch 9/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0048 - mse: 0.0048 - msle: 8.5172e-04 - mae: 0.0243 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0018 - val_mae: 0.0130
Epoch 10/10
300/300 [==============================] - 14s 47ms/step - loss: 0.0044 - mse: 0.0044 - msle: 7.4170e-04 - mae: 0.0240 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0017 - val_mae: 0.0119
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
300/300 [==============================] - ETA: 0s - loss: 0.0409 - mse: 0.0409 - msle: 0.0033 - mae: 0.0887
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 15s 49ms/step - loss: 0.0408 - mse: 0.0408 - msle: 0.0033 - mae: 0.0886 - val_loss: 0.0119 - val_mse: 0.0119 - val_msle: 0.0034 - val_mae: 0.0196
Epoch 2/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0138 - mse: 0.0138 - msle: 0.0033 - mae: 0.0305 - val_loss: 0.0101 - val_mse: 0.0101 - val_msle: 0.0028 - val_mae: 0.0315
Epoch 3/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0115 - mse: 0.0115 - msle: 0.0027 - mae: 0.0306 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0015 - val_mae: 0.0361
Epoch 4/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0084 - mse: 0.0084 - msle: 0.0016 - mae: 0.0295 - val_loss: 0.0064 - val_mse: 0.0064 - val_msle: 0.0015 - val_mae: 0.0122
Epoch 5/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0071 - mse: 0.0071 - msle: 0.0013 - mae: 0.0273 - val_loss: 0.0062 - val_mse: 0.0062 - val_msle: 0.0013 - val_mae: 0.0166
Epoch 6/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0062 - mse: 0.0062 - msle: 0.0011 - mae: 0.0267 - val_loss: 0.0061 - val_mse: 0.0061 - val_msle: 0.0013 - val_mae: 0.0166
Epoch 7/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0056 - mse: 0.0056 - msle: 0.0010 - mae: 0.0256 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0017 - val_mae: 0.0261
Epoch 8/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0052 - mse: 0.0052 - msle: 9.0313e-04 - mae: 0.0256 - val_loss: 0.0059 - val_mse: 0.0059 - val_msle: 0.0013 - val_mae: 0.0162
Epoch 9/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0052 - mse: 0.0052 - msle: 9.1546e-04 - mae: 0.0244 - val_loss: 0.0060 - val_mse: 0.0060 - val_msle: 0.0012 - val_mae: 0.0265
Epoch 10/10
300/300 [==============================] - 14s 48ms/step - loss: 0.0045 - mse: 0.0045 - msle: 7.6100e-04 - mae: 0.0237 - val_loss: 0.0064 - val_mse: 0.0064 - val_msle: 0.0014 - val_mae: 0.0241
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
300/300 [==============================] - ETA: 0s - loss: 0.0311 - mse: 0.0311 - msle: 0.0031 - mae: 0.0647
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 40s 131ms/step - loss: 0.0311 - mse: 0.0311 - msle: 0.0031 - mae: 0.0646 - val_loss: 0.0139 - val_mse: 0.0139 - val_msle: 0.0034 - val_mae: 0.0178
Epoch 2/2
300/300 [==============================] - 39s 130ms/step - loss: 0.0144 - mse: 0.0144 - msle: 0.0031 - mae: 0.0371 - val_loss: 0.0112 - val_mse: 0.0112 - val_msle: 0.0032 - val_mae: 0.0365
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
300/300 [==============================] - ETA: 0s - loss: 0.0314 - mse: 0.0314 - msle: 0.0031 - mae: 0.0644
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 40s 131ms/step - loss: 0.0313 - mse: 0.0313 - msle: 0.0031 - mae: 0.0643 - val_loss: 0.0131 - val_mse: 0.0131 - val_msle: 0.0034 - val_mae: 0.0360
Epoch 2/2
300/300 [==============================] - 39s 130ms/step - loss: 0.0132 - mse: 0.0132 - msle: 0.0031 - mae: 0.0322 - val_loss: 0.0107 - val_mse: 0.0107 - val_msle: 0.0033 - val_mae: 0.0150
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
300/300 [==============================] - ETA: 0s - loss: 0.0319 - mse: 0.0319 - msle: 0.0031 - mae: 0.0694
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 40s 130ms/step - loss: 0.0318 - mse: 0.0318 - msle: 0.0031 - mae: 0.0693 - val_loss: 0.0136 - val_mse: 0.0136 - val_msle: 0.0034 - val_mae: 0.0400
Epoch 2/2
300/300 [==============================] - 39s 130ms/step - loss: 0.0135 - mse: 0.0135 - msle: 0.0031 - mae: 0.0346 - val_loss: 0.0095 - val_mse: 0.0095 - val_msle: 0.0028 - val_mae: 0.0188
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
300/300 [==============================] - ETA: 0s - loss: 0.0349 - mse: 0.0349 - msle: 0.0032 - mae: 0.0767
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 40s 130ms/step - loss: 0.0349 - mse: 0.0349 - msle: 0.0032 - mae: 0.0766 - val_loss: 0.0118 - val_mse: 0.0118 - val_msle: 0.0034 - val_mae: 0.0160
Epoch 2/2
300/300 [==============================] - 39s 129ms/step - loss: 0.0136 - mse: 0.0136 - msle: 0.0032 - mae: 0.0318 - val_loss: 0.0107 - val_mse: 0.0107 - val_msle: 0.0035 - val_mae: 0.0122
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
300/300 [==============================] - ETA: 0s - loss: 0.0609 - mse: 0.0609 - msle: 0.0031 - mae: 0.1234
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 40s 130ms/step - loss: 0.0608 - mse: 0.0608 - msle: 0.0031 - mae: 0.1232 - val_loss: 0.0180 - val_mse: 0.0180 - val_msle: 0.0035 - val_mae: 0.0209
Epoch 2/5
300/300 [==============================] - 39s 130ms/step - loss: 0.0156 - mse: 0.0156 - msle: 0.0031 - mae: 0.0307 - val_loss: 0.0114 - val_mse: 0.0114 - val_msle: 0.0034 - val_mae: 0.0192
Epoch 3/5
300/300 [==============================] - 39s 130ms/step - loss: 0.0120 - mse: 0.0120 - msle: 0.0028 - mae: 0.0316 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0018 - val_mae: 0.0185
Epoch 4/5
300/300 [==============================] - 39s 130ms/step - loss: 0.0096 - mse: 0.0096 - msle: 0.0020 - mae: 0.0301 - val_loss: 0.0055 - val_mse: 0.0055 - val_msle: 0.0013 - val_mae: 0.0109
Epoch 5/5
300/300 [==============================] - 39s 130ms/step - loss: 0.0082 - mse: 0.0082 - msle: 0.0017 - mae: 0.0289 - val_loss: 0.0057 - val_mse: 0.0057 - val_msle: 0.0014 - val_mae: 0.0131
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
300/300 [==============================] - ETA: 0s - loss: 0.0313 - mse: 0.0313 - msle: 0.0032 - mae: 0.0647
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 40s 131ms/step - loss: 0.0313 - mse: 0.0313 - msle: 0.0032 - mae: 0.0646 - val_loss: 0.0126 - val_mse: 0.0126 - val_msle: 0.0035 - val_mae: 0.0333
Epoch 2/5
300/300 [==============================] - 39s 129ms/step - loss: 0.0132 - mse: 0.0132 - msle: 0.0031 - mae: 0.0342 - val_loss: 0.0099 - val_mse: 0.0099 - val_msle: 0.0029 - val_mae: 0.0233
Epoch 3/5
300/300 [==============================] - 39s 129ms/step - loss: 0.0103 - mse: 0.0103 - msle: 0.0022 - mae: 0.0340 - val_loss: 0.0065 - val_mse: 0.0065 - val_msle: 0.0015 - val_mae: 0.0200
Epoch 4/5
300/300 [==============================] - 39s 130ms/step - loss: 0.0070 - mse: 0.0070 - msle: 0.0013 - mae: 0.0292 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0019 - val_mae: 0.0287
Epoch 5/5
300/300 [==============================] - 39s 129ms/step - loss: 0.0059 - mse: 0.0059 - msle: 0.0011 - mae: 0.0276 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0018 - val_mae: 0.0300
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
300/300 [==============================] - ETA: 0s - loss: 0.0312 - mse: 0.0312 - msle: 0.0031 - mae: 0.0651
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 40s 130ms/step - loss: 0.0312 - mse: 0.0312 - msle: 0.0031 - mae: 0.0650 - val_loss: 0.0119 - val_mse: 0.0119 - val_msle: 0.0034 - val_mae: 0.0282
Epoch 2/5
300/300 [==============================] - 39s 129ms/step - loss: 0.0130 - mse: 0.0130 - msle: 0.0031 - mae: 0.0334 - val_loss: 0.0099 - val_mse: 0.0099 - val_msle: 0.0030 - val_mae: 0.0174
Epoch 3/5
300/300 [==============================] - 39s 130ms/step - loss: 0.0109 - mse: 0.0109 - msle: 0.0025 - mae: 0.0333 - val_loss: 0.0068 - val_mse: 0.0068 - val_msle: 0.0016 - val_mae: 0.0179
Epoch 4/5
300/300 [==============================] - 39s 129ms/step - loss: 0.0079 - mse: 0.0079 - msle: 0.0014 - mae: 0.0316 - val_loss: 0.0066 - val_mse: 0.0066 - val_msle: 0.0014 - val_mae: 0.0257
Epoch 5/5
300/300 [==============================] - 39s 130ms/step - loss: 0.0065 - mse: 0.0065 - msle: 0.0012 - mae: 0.0291 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0017 - val_mae: 0.0117
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
300/300 [==============================] - ETA: 0s - loss: 0.0394 - mse: 0.0394 - msle: 0.0031 - mae: 0.0887
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 40s 131ms/step - loss: 0.0394 - mse: 0.0394 - msle: 0.0031 - mae: 0.0885 - val_loss: 0.0135 - val_mse: 0.0135 - val_msle: 0.0034 - val_mae: 0.0133
Epoch 2/5
300/300 [==============================] - 39s 130ms/step - loss: 0.0141 - mse: 0.0141 - msle: 0.0031 - mae: 0.0320 - val_loss: 0.0113 - val_mse: 0.0113 - val_msle: 0.0034 - val_mae: 0.0342
Epoch 3/5
300/300 [==============================] - 39s 130ms/step - loss: 0.0122 - mse: 0.0122 - msle: 0.0031 - mae: 0.0305 - val_loss: 0.0104 - val_mse: 0.0104 - val_msle: 0.0034 - val_mae: 0.0151
Epoch 4/5
300/300 [==============================] - 39s 130ms/step - loss: 0.0114 - mse: 0.0114 - msle: 0.0031 - mae: 0.0293 - val_loss: 0.0102 - val_mse: 0.0102 - val_msle: 0.0033 - val_mae: 0.0214
Epoch 5/5
300/300 [==============================] - 39s 130ms/step - loss: 0.0109 - mse: 0.0109 - msle: 0.0030 - mae: 0.0293 - val_loss: 0.0100 - val_mse: 0.0100 - val_msle: 0.0032 - val_mae: 0.0263
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
300/300 [==============================] - ETA: 0s - loss: 0.0338 - mse: 0.0338 - msle: 0.0032 - mae: 0.0738
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 40s 130ms/step - loss: 0.0338 - mse: 0.0338 - msle: 0.0032 - mae: 0.0737 - val_loss: 0.0146 - val_mse: 0.0146 - val_msle: 0.0035 - val_mae: 0.0502
Epoch 2/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0143 - mse: 0.0143 - msle: 0.0032 - mae: 0.0361 - val_loss: 0.0105 - val_mse: 0.0105 - val_msle: 0.0033 - val_mae: 0.0233
Epoch 3/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0117 - mse: 0.0117 - msle: 0.0029 - mae: 0.0315 - val_loss: 0.0069 - val_mse: 0.0069 - val_msle: 0.0016 - val_mae: 0.0213
Epoch 4/10
300/300 [==============================] - 39s 129ms/step - loss: 0.0091 - mse: 0.0091 - msle: 0.0019 - mae: 0.0306 - val_loss: 0.0065 - val_mse: 0.0065 - val_msle: 0.0016 - val_mae: 0.0157
Epoch 5/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0073 - mse: 0.0073 - msle: 0.0014 - mae: 0.0290 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0017 - val_mae: 0.0134
Epoch 6/10
300/300 [==============================] - 39s 129ms/step - loss: 0.0062 - mse: 0.0062 - msle: 0.0012 - mae: 0.0281 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0018 - val_mae: 0.0254
Epoch 7/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0055 - mse: 0.0055 - msle: 0.0010 - mae: 0.0275 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0021 - val_mae: 0.0142
Epoch 8/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0050 - mse: 0.0050 - msle: 9.2026e-04 - mae: 0.0260 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0022 - val_mae: 0.0189
Epoch 9/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0044 - mse: 0.0044 - msle: 7.6565e-04 - mae: 0.0255 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0018 - val_mae: 0.0114
Epoch 10/10
300/300 [==============================] - 39s 131ms/step - loss: 0.0040 - mse: 0.0040 - msle: 6.7618e-04 - mae: 0.0251 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0018 - val_mae: 0.0195
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
300/300 [==============================] - ETA: 0s - loss: 0.0337 - mse: 0.0337 - msle: 0.0031 - mae: 0.0733
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 40s 131ms/step - loss: 0.0337 - mse: 0.0337 - msle: 0.0031 - mae: 0.0732 - val_loss: 0.0163 - val_mse: 0.0163 - val_msle: 0.0034 - val_mae: 0.0141
Epoch 2/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0156 - mse: 0.0156 - msle: 0.0031 - mae: 0.0360 - val_loss: 0.0111 - val_mse: 0.0111 - val_msle: 0.0034 - val_mae: 0.0132
Epoch 3/10
300/300 [==============================] - 39s 129ms/step - loss: 0.0122 - mse: 0.0122 - msle: 0.0031 - mae: 0.0299 - val_loss: 0.0106 - val_mse: 0.0106 - val_msle: 0.0034 - val_mae: 0.0154
Epoch 4/10
300/300 [==============================] - 39s 129ms/step - loss: 0.0113 - mse: 0.0113 - msle: 0.0031 - mae: 0.0287 - val_loss: 0.0095 - val_mse: 0.0095 - val_msle: 0.0030 - val_mae: 0.0148
Epoch 5/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0102 - mse: 0.0102 - msle: 0.0024 - mae: 0.0305 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0017 - val_mae: 0.0218
Epoch 6/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0077 - mse: 0.0077 - msle: 0.0015 - mae: 0.0301 - val_loss: 0.0061 - val_mse: 0.0061 - val_msle: 0.0014 - val_mae: 0.0203
Epoch 7/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0061 - mse: 0.0061 - msle: 0.0011 - mae: 0.0282 - val_loss: 0.0067 - val_mse: 0.0067 - val_msle: 0.0015 - val_mae: 0.0227
Epoch 8/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0050 - mse: 0.0050 - msle: 8.4500e-04 - mae: 0.0281 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0016 - val_mae: 0.0153
Epoch 9/10
300/300 [==============================] - 39s 129ms/step - loss: 0.0043 - mse: 0.0043 - msle: 7.2013e-04 - mae: 0.0261 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0016 - val_mae: 0.0226
Epoch 10/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0041 - mse: 0.0041 - msle: 6.7295e-04 - mae: 0.0255 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0016 - val_mae: 0.0141
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
300/300 [==============================] - ETA: 0s - loss: 0.0394 - mse: 0.0394 - msle: 0.0032 - mae: 0.0861
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 40s 130ms/step - loss: 0.0394 - mse: 0.0394 - msle: 0.0032 - mae: 0.0859 - val_loss: 0.0142 - val_mse: 0.0142 - val_msle: 0.0034 - val_mae: 0.0175
Epoch 2/10
300/300 [==============================] - 39s 129ms/step - loss: 0.0143 - mse: 0.0143 - msle: 0.0032 - mae: 0.0337 - val_loss: 0.0112 - val_mse: 0.0112 - val_msle: 0.0035 - val_mae: 0.0265
Epoch 3/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0121 - mse: 0.0121 - msle: 0.0032 - mae: 0.0294 - val_loss: 0.0104 - val_mse: 0.0104 - val_msle: 0.0034 - val_mae: 0.0130
Epoch 4/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0115 - mse: 0.0115 - msle: 0.0031 - mae: 0.0291 - val_loss: 0.0104 - val_mse: 0.0104 - val_msle: 0.0033 - val_mae: 0.0318
Epoch 5/10
300/300 [==============================] - 39s 129ms/step - loss: 0.0108 - mse: 0.0108 - msle: 0.0027 - mae: 0.0311 - val_loss: 0.0064 - val_mse: 0.0064 - val_msle: 0.0015 - val_mae: 0.0181
Epoch 6/10
300/300 [==============================] - 39s 129ms/step - loss: 0.0079 - mse: 0.0079 - msle: 0.0015 - mae: 0.0327 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0018 - val_mae: 0.0164
Epoch 7/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0060 - mse: 0.0060 - msle: 0.0010 - mae: 0.0293 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0020 - val_mae: 0.0150
Epoch 8/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0050 - mse: 0.0050 - msle: 8.4296e-04 - mae: 0.0274 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0020 - val_mae: 0.0216
Epoch 9/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0045 - mse: 0.0045 - msle: 7.2580e-04 - mae: 0.0265 - val_loss: 0.0096 - val_mse: 0.0096 - val_msle: 0.0022 - val_mae: 0.0328
Epoch 10/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0041 - mse: 0.0041 - msle: 6.3555e-04 - mae: 0.0254 - val_loss: 0.0095 - val_mse: 0.0095 - val_msle: 0.0022 - val_mae: 0.0300
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
300/300 [==============================] - ETA: 0s - loss: 0.0324 - mse: 0.0324 - msle: 0.0033 - mae: 0.0674
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
300/300 [==============================] - 40s 131ms/step - loss: 0.0324 - mse: 0.0324 - msle: 0.0033 - mae: 0.0673 - val_loss: 0.0143 - val_mse: 0.0143 - val_msle: 0.0034 - val_mae: 0.0163
Epoch 2/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0145 - mse: 0.0145 - msle: 0.0033 - mae: 0.0337 - val_loss: 0.0104 - val_mse: 0.0104 - val_msle: 0.0033 - val_mae: 0.0129
Epoch 3/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0117 - mse: 0.0117 - msle: 0.0028 - mae: 0.0316 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0018 - val_mae: 0.0124
Epoch 4/10
300/300 [==============================] - 39s 131ms/step - loss: 0.0090 - mse: 0.0090 - msle: 0.0017 - mae: 0.0310 - val_loss: 0.0063 - val_mse: 0.0063 - val_msle: 0.0014 - val_mae: 0.0120
Epoch 5/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0068 - mse: 0.0068 - msle: 0.0012 - mae: 0.0285 - val_loss: 0.0068 - val_mse: 0.0068 - val_msle: 0.0015 - val_mae: 0.0200
Epoch 6/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0053 - mse: 0.0053 - msle: 9.0473e-04 - mae: 0.0274 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0018 - val_mae: 0.0139
Epoch 7/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0047 - mse: 0.0047 - msle: 8.0367e-04 - mae: 0.0252 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0017 - val_mae: 0.0142
Epoch 8/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0041 - mse: 0.0041 - msle: 6.6593e-04 - mae: 0.0244 - val_loss: 0.0068 - val_mse: 0.0068 - val_msle: 0.0016 - val_mae: 0.0123
Epoch 9/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0038 - mse: 0.0038 - msle: 5.9233e-04 - mae: 0.0237 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0021 - val_mae: 0.0155
Epoch 10/10
300/300 [==============================] - 39s 130ms/step - loss: 0.0037 - mse: 0.0037 - msle: 5.8982e-04 - mae: 0.0244 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0017 - val_mae: 0.0193
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
150/150 [==============================] - ETA: 0s - loss: 0.0685 - mse: 0.0685 - msle: 0.0031 - mae: 0.1479
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 3s 19ms/step - loss: 0.0682 - mse: 0.0682 - msle: 0.0031 - mae: 0.1474 - val_loss: 0.0138 - val_mse: 0.0138 - val_msle: 0.0034 - val_mae: 0.0260
Epoch 2/2
150/150 [==============================] - 2s 17ms/step - loss: 0.0145 - mse: 0.0145 - msle: 0.0031 - mae: 0.0299 - val_loss: 0.0096 - val_mse: 0.0096 - val_msle: 0.0028 - val_mae: 0.0150
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
148/150 [============================>.] - ETA: 0s - loss: 0.0761 - mse: 0.0761 - msle: 0.0032 - mae: 0.1606
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 3s 18ms/step - loss: 0.0752 - mse: 0.0752 - msle: 0.0032 - mae: 0.1588 - val_loss: 0.0133 - val_mse: 0.0133 - val_msle: 0.0034 - val_mae: 0.0147
Epoch 2/2
150/150 [==============================] - 2s 17ms/step - loss: 0.0145 - mse: 0.0145 - msle: 0.0032 - mae: 0.0303 - val_loss: 0.0096 - val_mse: 0.0096 - val_msle: 0.0028 - val_mae: 0.0095
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
148/150 [============================>.] - ETA: 0s - loss: 0.0721 - mse: 0.0721 - msle: 0.0031 - mae: 0.1543
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 3s 18ms/step - loss: 0.0713 - mse: 0.0713 - msle: 0.0031 - mae: 0.1526 - val_loss: 0.0142 - val_mse: 0.0142 - val_msle: 0.0034 - val_mae: 0.0139
Epoch 2/2
150/150 [==============================] - 3s 17ms/step - loss: 0.0148 - mse: 0.0148 - msle: 0.0031 - mae: 0.0296 - val_loss: 0.0099 - val_mse: 0.0099 - val_msle: 0.0029 - val_mae: 0.0106
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
147/150 [============================>.] - ETA: 0s - loss: 0.0709 - mse: 0.0709 - msle: 0.0031 - mae: 0.1506
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 3s 19ms/step - loss: 0.0698 - mse: 0.0698 - msle: 0.0031 - mae: 0.1483 - val_loss: 0.0143 - val_mse: 0.0143 - val_msle: 0.0034 - val_mae: 0.0146
Epoch 2/2
150/150 [==============================] - 3s 17ms/step - loss: 0.0147 - mse: 0.0147 - msle: 0.0031 - mae: 0.0289 - val_loss: 0.0105 - val_mse: 0.0105 - val_msle: 0.0031 - val_mae: 0.0126
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
148/150 [============================>.] - ETA: 0s - loss: 0.0703 - mse: 0.0703 - msle: 0.0032 - mae: 0.1499
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 4s 18ms/step - loss: 0.0695 - mse: 0.0695 - msle: 0.0032 - mae: 0.1482 - val_loss: 0.0149 - val_mse: 0.0149 - val_msle: 0.0034 - val_mae: 0.0206
Epoch 2/5
150/150 [==============================] - 2s 17ms/step - loss: 0.0154 - mse: 0.0154 - msle: 0.0032 - mae: 0.0288 - val_loss: 0.0123 - val_mse: 0.0123 - val_msle: 0.0034 - val_mae: 0.0169
Epoch 3/5
150/150 [==============================] - 3s 17ms/step - loss: 0.0136 - mse: 0.0136 - msle: 0.0031 - mae: 0.0306 - val_loss: 0.0093 - val_mse: 0.0093 - val_msle: 0.0026 - val_mae: 0.0117
Epoch 4/5
150/150 [==============================] - 2s 17ms/step - loss: 0.0117 - mse: 0.0117 - msle: 0.0024 - mae: 0.0309 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0024 - val_mae: 0.0178
Epoch 5/5
150/150 [==============================] - 3s 17ms/step - loss: 0.0107 - mse: 0.0107 - msle: 0.0022 - mae: 0.0287 - val_loss: 0.0093 - val_mse: 0.0093 - val_msle: 0.0025 - val_mae: 0.0201
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
148/150 [============================>.] - ETA: 0s - loss: 0.0664 - mse: 0.0664 - msle: 0.0032 - mae: 0.1423
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 3s 18ms/step - loss: 0.0657 - mse: 0.0657 - msle: 0.0032 - mae: 0.1407 - val_loss: 0.0164 - val_mse: 0.0164 - val_msle: 0.0034 - val_mae: 0.0204
Epoch 2/5
150/150 [==============================] - 2s 16ms/step - loss: 0.0161 - mse: 0.0161 - msle: 0.0032 - mae: 0.0305 - val_loss: 0.0134 - val_mse: 0.0134 - val_msle: 0.0034 - val_mae: 0.0180
Epoch 3/5
150/150 [==============================] - 2s 17ms/step - loss: 0.0142 - mse: 0.0142 - msle: 0.0031 - mae: 0.0324 - val_loss: 0.0093 - val_mse: 0.0093 - val_msle: 0.0026 - val_mae: 0.0110
Epoch 4/5
150/150 [==============================] - 2s 17ms/step - loss: 0.0119 - mse: 0.0119 - msle: 0.0025 - mae: 0.0324 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0023 - val_mae: 0.0174
Epoch 5/5
150/150 [==============================] - 2s 17ms/step - loss: 0.0108 - mse: 0.0108 - msle: 0.0022 - mae: 0.0296 - val_loss: 0.0091 - val_mse: 0.0091 - val_msle: 0.0025 - val_mae: 0.0105
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
148/150 [============================>.] - ETA: 0s - loss: 0.0717 - mse: 0.0717 - msle: 0.0033 - mae: 0.1524
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 3s 18ms/step - loss: 0.0709 - mse: 0.0709 - msle: 0.0033 - mae: 0.1507 - val_loss: 0.0144 - val_mse: 0.0144 - val_msle: 0.0034 - val_mae: 0.0133
Epoch 2/5
150/150 [==============================] - 3s 17ms/step - loss: 0.0156 - mse: 0.0156 - msle: 0.0033 - mae: 0.0288 - val_loss: 0.0111 - val_mse: 0.0111 - val_msle: 0.0034 - val_mae: 0.0115
Epoch 3/5
150/150 [==============================] - 3s 17ms/step - loss: 0.0133 - mse: 0.0133 - msle: 0.0030 - mae: 0.0299 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0023 - val_mae: 0.0109
Epoch 4/5
150/150 [==============================] - 3s 17ms/step - loss: 0.0116 - mse: 0.0116 - msle: 0.0023 - mae: 0.0296 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0022 - val_mae: 0.0115
Epoch 5/5
150/150 [==============================] - 2s 17ms/step - loss: 0.0107 - mse: 0.0107 - msle: 0.0022 - mae: 0.0280 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0024 - val_mae: 0.0132
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
147/150 [============================>.] - ETA: 0s - loss: 0.0691 - mse: 0.0691 - msle: 0.0032 - mae: 0.1476
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 3s 18ms/step - loss: 0.0681 - mse: 0.0681 - msle: 0.0032 - mae: 0.1454 - val_loss: 0.0143 - val_mse: 0.0143 - val_msle: 0.0034 - val_mae: 0.0178
Epoch 2/5
150/150 [==============================] - 2s 17ms/step - loss: 0.0149 - mse: 0.0149 - msle: 0.0032 - mae: 0.0298 - val_loss: 0.0099 - val_mse: 0.0099 - val_msle: 0.0029 - val_mae: 0.0202
Epoch 3/5
150/150 [==============================] - 3s 17ms/step - loss: 0.0128 - mse: 0.0128 - msle: 0.0028 - mae: 0.0310 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0022 - val_mae: 0.0157
Epoch 4/5
150/150 [==============================] - 2s 16ms/step - loss: 0.0111 - mse: 0.0111 - msle: 0.0022 - mae: 0.0291 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0023 - val_mae: 0.0100
Epoch 5/5
150/150 [==============================] - 2s 16ms/step - loss: 0.0102 - mse: 0.0102 - msle: 0.0020 - mae: 0.0274 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0023 - val_mae: 0.0174
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
150/150 [==============================] - ETA: 0s - loss: 0.0691 - mse: 0.0691 - msle: 0.0031 - mae: 0.1481
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 3s 18ms/step - loss: 0.0688 - mse: 0.0688 - msle: 0.0031 - mae: 0.1476 - val_loss: 0.0147 - val_mse: 0.0147 - val_msle: 0.0034 - val_mae: 0.0122
Epoch 2/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0152 - mse: 0.0152 - msle: 0.0031 - mae: 0.0286 - val_loss: 0.0116 - val_mse: 0.0116 - val_msle: 0.0034 - val_mae: 0.0108
Epoch 3/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0139 - mse: 0.0139 - msle: 0.0031 - mae: 0.0299 - val_loss: 0.0102 - val_mse: 0.0102 - val_msle: 0.0032 - val_mae: 0.0106
Epoch 4/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0126 - mse: 0.0126 - msle: 0.0029 - mae: 0.0300 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0026 - val_mae: 0.0156
Epoch 5/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0115 - mse: 0.0115 - msle: 0.0024 - mae: 0.0293 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0023 - val_mae: 0.0154
Epoch 6/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0108 - mse: 0.0108 - msle: 0.0022 - mae: 0.0278 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0023 - val_mae: 0.0211
Epoch 7/10
150/150 [==============================] - 2s 16ms/step - loss: 0.0099 - mse: 0.0099 - msle: 0.0020 - mae: 0.0274 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0023 - val_mae: 0.0084
Epoch 8/10
150/150 [==============================] - 2s 17ms/step - loss: 0.0093 - mse: 0.0093 - msle: 0.0018 - mae: 0.0269 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0023 - val_mae: 0.0219
Epoch 9/10
150/150 [==============================] - 2s 17ms/step - loss: 0.0088 - mse: 0.0088 - msle: 0.0018 - mae: 0.0257 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0023 - val_mae: 0.0265
Epoch 10/10
150/150 [==============================] - 2s 16ms/step - loss: 0.0085 - mse: 0.0085 - msle: 0.0017 - mae: 0.0248 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0023 - val_mae: 0.0172
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
148/150 [============================>.] - ETA: 0s - loss: 0.0690 - mse: 0.0690 - msle: 0.0033 - mae: 0.1471
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 3s 18ms/step - loss: 0.0682 - mse: 0.0682 - msle: 0.0033 - mae: 0.1454 - val_loss: 0.0142 - val_mse: 0.0142 - val_msle: 0.0034 - val_mae: 0.0149
Epoch 2/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0155 - mse: 0.0155 - msle: 0.0033 - mae: 0.0298 - val_loss: 0.0108 - val_mse: 0.0108 - val_msle: 0.0033 - val_mae: 0.0143
Epoch 3/10
150/150 [==============================] - 2s 17ms/step - loss: 0.0137 - mse: 0.0137 - msle: 0.0031 - mae: 0.0316 - val_loss: 0.0090 - val_mse: 0.0090 - val_msle: 0.0024 - val_mae: 0.0141
Epoch 4/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0119 - mse: 0.0119 - msle: 0.0024 - mae: 0.0307 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0023 - val_mae: 0.0094
Epoch 5/10
150/150 [==============================] - 2s 17ms/step - loss: 0.0112 - mse: 0.0112 - msle: 0.0023 - mae: 0.0289 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0024 - val_mae: 0.0103
Epoch 6/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0103 - mse: 0.0103 - msle: 0.0021 - mae: 0.0273 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0022 - val_mae: 0.0085
Epoch 7/10
150/150 [==============================] - 2s 17ms/step - loss: 0.0094 - mse: 0.0094 - msle: 0.0018 - mae: 0.0261 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0021 - val_mae: 0.0087
Epoch 8/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0087 - mse: 0.0087 - msle: 0.0017 - mae: 0.0255 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0021 - val_mae: 0.0102
Epoch 9/10
150/150 [==============================] - 2s 17ms/step - loss: 0.0084 - mse: 0.0084 - msle: 0.0016 - mae: 0.0243 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0021 - val_mae: 0.0116
Epoch 10/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0080 - mse: 0.0080 - msle: 0.0015 - mae: 0.0238 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0021 - val_mae: 0.0270
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
147/150 [============================>.] - ETA: 0s - loss: 0.0680 - mse: 0.0680 - msle: 0.0031 - mae: 0.1470
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 3s 18ms/step - loss: 0.0670 - mse: 0.0670 - msle: 0.0031 - mae: 0.1448 - val_loss: 0.0144 - val_mse: 0.0144 - val_msle: 0.0034 - val_mae: 0.0168
Epoch 2/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0149 - mse: 0.0149 - msle: 0.0031 - mae: 0.0288 - val_loss: 0.0115 - val_mse: 0.0115 - val_msle: 0.0034 - val_mae: 0.0167
Epoch 3/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0135 - mse: 0.0135 - msle: 0.0030 - mae: 0.0303 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0025 - val_mae: 0.0152
Epoch 4/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0122 - mse: 0.0122 - msle: 0.0025 - mae: 0.0304 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0021 - val_mae: 0.0099
Epoch 5/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0110 - mse: 0.0110 - msle: 0.0021 - mae: 0.0291 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0021 - val_mae: 0.0107
Epoch 6/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0105 - mse: 0.0105 - msle: 0.0021 - mae: 0.0268 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0022 - val_mae: 0.0102
Epoch 7/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0101 - mse: 0.0101 - msle: 0.0020 - mae: 0.0258 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0023 - val_mae: 0.0141
Epoch 8/10
150/150 [==============================] - 3s 18ms/step - loss: 0.0095 - mse: 0.0095 - msle: 0.0019 - mae: 0.0262 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0023 - val_mae: 0.0263
Epoch 9/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0091 - mse: 0.0091 - msle: 0.0019 - mae: 0.0256 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0022 - val_mae: 0.0162
Epoch 10/10
150/150 [==============================] - 2s 17ms/step - loss: 0.0087 - mse: 0.0087 - msle: 0.0017 - mae: 0.0250 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0021 - val_mae: 0.0118
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
147/150 [============================>.] - ETA: 0s - loss: 0.0717 - mse: 0.0717 - msle: 0.0031 - mae: 0.1534
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 3s 18ms/step - loss: 0.0706 - mse: 0.0706 - msle: 0.0031 - mae: 0.1511 - val_loss: 0.0142 - val_mse: 0.0142 - val_msle: 0.0034 - val_mae: 0.0184
Epoch 2/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0146 - mse: 0.0146 - msle: 0.0031 - mae: 0.0285 - val_loss: 0.0118 - val_mse: 0.0118 - val_msle: 0.0034 - val_mae: 0.0156
Epoch 3/10
150/150 [==============================] - 2s 16ms/step - loss: 0.0128 - mse: 0.0128 - msle: 0.0029 - mae: 0.0302 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0024 - val_mae: 0.0153
Epoch 4/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0110 - mse: 0.0110 - msle: 0.0022 - mae: 0.0299 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0023 - val_mae: 0.0136
Epoch 5/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0101 - mse: 0.0101 - msle: 0.0020 - mae: 0.0277 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0024 - val_mae: 0.0122
Epoch 6/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0092 - mse: 0.0092 - msle: 0.0018 - mae: 0.0280 - val_loss: 0.0089 - val_mse: 0.0089 - val_msle: 0.0024 - val_mae: 0.0170
Epoch 7/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0086 - mse: 0.0086 - msle: 0.0017 - mae: 0.0268 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0023 - val_mae: 0.0179
Epoch 8/10
150/150 [==============================] - 2s 17ms/step - loss: 0.0082 - mse: 0.0082 - msle: 0.0016 - mae: 0.0260 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0022 - val_mae: 0.0117
Epoch 9/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0079 - mse: 0.0079 - msle: 0.0015 - mae: 0.0246 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0021 - val_mae: 0.0085
Epoch 10/10
150/150 [==============================] - 3s 17ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0014 - mae: 0.0246 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0021 - val_mae: 0.0080
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
150/150 [==============================] - ETA: 0s - loss: 0.0639 - mse: 0.0639 - msle: 0.0030 - mae: 0.1420
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 4s 25ms/step - loss: 0.0637 - mse: 0.0637 - msle: 0.0030 - mae: 0.1415 - val_loss: 0.0154 - val_mse: 0.0154 - val_msle: 0.0034 - val_mae: 0.0211
Epoch 2/2
150/150 [==============================] - 3s 22ms/step - loss: 0.0151 - mse: 0.0151 - msle: 0.0030 - mae: 0.0291 - val_loss: 0.0112 - val_mse: 0.0112 - val_msle: 0.0033 - val_mae: 0.0169
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
150/150 [==============================] - ETA: 0s - loss: 0.0664 - mse: 0.0664 - msle: 0.0031 - mae: 0.1442
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 4s 26ms/step - loss: 0.0661 - mse: 0.0661 - msle: 0.0031 - mae: 0.1437 - val_loss: 0.0147 - val_mse: 0.0147 - val_msle: 0.0034 - val_mae: 0.0184
Epoch 2/2
150/150 [==============================] - 3s 22ms/step - loss: 0.0150 - mse: 0.0150 - msle: 0.0031 - mae: 0.0282 - val_loss: 0.0122 - val_mse: 0.0122 - val_msle: 0.0034 - val_mae: 0.0195
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
149/150 [============================>.] - ETA: 0s - loss: 0.0647 - mse: 0.0647 - msle: 0.0031 - mae: 0.1406
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 4s 24ms/step - loss: 0.0642 - mse: 0.0642 - msle: 0.0031 - mae: 0.1396 - val_loss: 0.0144 - val_mse: 0.0144 - val_msle: 0.0034 - val_mae: 0.0203
Epoch 2/2
150/150 [==============================] - 3s 22ms/step - loss: 0.0149 - mse: 0.0149 - msle: 0.0031 - mae: 0.0290 - val_loss: 0.0098 - val_mse: 0.0098 - val_msle: 0.0029 - val_mae: 0.0123
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
150/150 [==============================] - ETA: 0s - loss: 0.0660 - mse: 0.0660 - msle: 0.0031 - mae: 0.1430
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 4s 24ms/step - loss: 0.0657 - mse: 0.0657 - msle: 0.0031 - mae: 0.1425 - val_loss: 0.0144 - val_mse: 0.0144 - val_msle: 0.0034 - val_mae: 0.0154
Epoch 2/2
150/150 [==============================] - 3s 22ms/step - loss: 0.0150 - mse: 0.0150 - msle: 0.0031 - mae: 0.0278 - val_loss: 0.0115 - val_mse: 0.0115 - val_msle: 0.0033 - val_mae: 0.0143
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
150/150 [==============================] - ETA: 0s - loss: 0.0605 - mse: 0.0605 - msle: 0.0031 - mae: 0.1328
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 4s 24ms/step - loss: 0.0603 - mse: 0.0603 - msle: 0.0031 - mae: 0.1323 - val_loss: 0.0168 - val_mse: 0.0168 - val_msle: 0.0034 - val_mae: 0.0208
Epoch 2/5
150/150 [==============================] - 3s 22ms/step - loss: 0.0159 - mse: 0.0159 - msle: 0.0031 - mae: 0.0297 - val_loss: 0.0112 - val_mse: 0.0112 - val_msle: 0.0034 - val_mae: 0.0102
Epoch 3/5
150/150 [==============================] - 3s 22ms/step - loss: 0.0130 - mse: 0.0130 - msle: 0.0030 - mae: 0.0295 - val_loss: 0.0096 - val_mse: 0.0096 - val_msle: 0.0029 - val_mae: 0.0142
Epoch 4/5
150/150 [==============================] - 3s 22ms/step - loss: 0.0113 - mse: 0.0113 - msle: 0.0026 - mae: 0.0284 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0021 - val_mae: 0.0100
Epoch 5/5
150/150 [==============================] - 3s 22ms/step - loss: 0.0102 - mse: 0.0102 - msle: 0.0021 - mae: 0.0272 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0018 - val_mae: 0.0108
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
148/150 [============================>.] - ETA: 0s - loss: 0.0626 - mse: 0.0626 - msle: 0.0031 - mae: 0.1376
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 4s 24ms/step - loss: 0.0619 - mse: 0.0619 - msle: 0.0031 - mae: 0.1361 - val_loss: 0.0150 - val_mse: 0.0150 - val_msle: 0.0034 - val_mae: 0.0139
Epoch 2/5
150/150 [==============================] - 3s 22ms/step - loss: 0.0154 - mse: 0.0154 - msle: 0.0031 - mae: 0.0296 - val_loss: 0.0110 - val_mse: 0.0110 - val_msle: 0.0033 - val_mae: 0.0105
Epoch 3/5
150/150 [==============================] - 3s 22ms/step - loss: 0.0136 - mse: 0.0136 - msle: 0.0031 - mae: 0.0312 - val_loss: 0.0090 - val_mse: 0.0090 - val_msle: 0.0025 - val_mae: 0.0118
Epoch 4/5
150/150 [==============================] - 3s 22ms/step - loss: 0.0123 - mse: 0.0123 - msle: 0.0026 - mae: 0.0318 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0020 - val_mae: 0.0119
Epoch 5/5
150/150 [==============================] - 3s 22ms/step - loss: 0.0112 - mse: 0.0112 - msle: 0.0023 - mae: 0.0303 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0019 - val_mae: 0.0167
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
148/150 [============================>.] - ETA: 0s - loss: 0.0640 - mse: 0.0640 - msle: 0.0030 - mae: 0.1422
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 4s 24ms/step - loss: 0.0633 - mse: 0.0633 - msle: 0.0030 - mae: 0.1407 - val_loss: 0.0132 - val_mse: 0.0132 - val_msle: 0.0034 - val_mae: 0.0135
Epoch 2/5
150/150 [==============================] - 3s 23ms/step - loss: 0.0140 - mse: 0.0140 - msle: 0.0030 - mae: 0.0297 - val_loss: 0.0102 - val_mse: 0.0102 - val_msle: 0.0032 - val_mae: 0.0131
Epoch 3/5
150/150 [==============================] - 3s 23ms/step - loss: 0.0118 - mse: 0.0118 - msle: 0.0028 - mae: 0.0288 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0024 - val_mae: 0.0157
Epoch 4/5
150/150 [==============================] - 3s 23ms/step - loss: 0.0102 - mse: 0.0102 - msle: 0.0021 - mae: 0.0281 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0019 - val_mae: 0.0121
Epoch 5/5
150/150 [==============================] - 3s 22ms/step - loss: 0.0090 - mse: 0.0090 - msle: 0.0017 - mae: 0.0261 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0018 - val_mae: 0.0168
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
148/150 [============================>.] - ETA: 0s - loss: 0.0662 - mse: 0.0662 - msle: 0.0032 - mae: 0.1431
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 4s 24ms/step - loss: 0.0654 - mse: 0.0654 - msle: 0.0032 - mae: 0.1415 - val_loss: 0.0147 - val_mse: 0.0147 - val_msle: 0.0034 - val_mae: 0.0177
Epoch 2/5
150/150 [==============================] - 3s 22ms/step - loss: 0.0154 - mse: 0.0154 - msle: 0.0032 - mae: 0.0290 - val_loss: 0.0113 - val_mse: 0.0113 - val_msle: 0.0033 - val_mae: 0.0136
Epoch 3/5
150/150 [==============================] - 3s 22ms/step - loss: 0.0135 - mse: 0.0135 - msle: 0.0030 - mae: 0.0315 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0023 - val_mae: 0.0132
Epoch 4/5
150/150 [==============================] - 3s 22ms/step - loss: 0.0117 - mse: 0.0117 - msle: 0.0024 - mae: 0.0313 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0020 - val_mae: 0.0194
Epoch 5/5
150/150 [==============================] - 3s 22ms/step - loss: 0.0101 - mse: 0.0101 - msle: 0.0020 - mae: 0.0307 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0019 - val_mae: 0.0244
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
148/150 [============================>.] - ETA: 0s - loss: 0.0635 - mse: 0.0635 - msle: 0.0029 - mae: 0.1403
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 4s 24ms/step - loss: 0.0629 - mse: 0.0629 - msle: 0.0029 - mae: 0.1387 - val_loss: 0.0159 - val_mse: 0.0159 - val_msle: 0.0034 - val_mae: 0.0188
Epoch 2/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0146 - mse: 0.0146 - msle: 0.0029 - mae: 0.0288 - val_loss: 0.0106 - val_mse: 0.0106 - val_msle: 0.0033 - val_mae: 0.0145
Epoch 3/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0120 - mse: 0.0120 - msle: 0.0028 - mae: 0.0289 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0021 - val_mae: 0.0209
Epoch 4/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0099 - mse: 0.0099 - msle: 0.0020 - mae: 0.0278 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0017 - val_mae: 0.0125
Epoch 5/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0092 - mse: 0.0092 - msle: 0.0018 - mae: 0.0260 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0017 - val_mae: 0.0162
Epoch 6/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0083 - mse: 0.0083 - msle: 0.0016 - mae: 0.0260 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0016 - val_mae: 0.0230
Epoch 7/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0079 - mse: 0.0079 - msle: 0.0016 - mae: 0.0250 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0016 - val_mae: 0.0134
Epoch 8/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0014 - mae: 0.0253 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0015 - val_mae: 0.0404
Epoch 9/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0074 - mse: 0.0074 - msle: 0.0014 - mae: 0.0249 - val_loss: 0.0069 - val_mse: 0.0069 - val_msle: 0.0015 - val_mae: 0.0276
Epoch 10/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0069 - mse: 0.0069 - msle: 0.0013 - mae: 0.0239 - val_loss: 0.0058 - val_mse: 0.0058 - val_msle: 0.0013 - val_mae: 0.0091
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
150/150 [==============================] - ETA: 0s - loss: 0.0670 - mse: 0.0670 - msle: 0.0033 - mae: 0.1452
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 4s 24ms/step - loss: 0.0667 - mse: 0.0667 - msle: 0.0033 - mae: 0.1447 - val_loss: 0.0150 - val_mse: 0.0150 - val_msle: 0.0034 - val_mae: 0.0149
Epoch 2/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0159 - mse: 0.0159 - msle: 0.0033 - mae: 0.0283 - val_loss: 0.0120 - val_mse: 0.0120 - val_msle: 0.0034 - val_mae: 0.0123
Epoch 3/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0134 - mse: 0.0134 - msle: 0.0031 - mae: 0.0303 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0021 - val_mae: 0.0180
Epoch 4/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0113 - mse: 0.0113 - msle: 0.0023 - mae: 0.0305 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0020 - val_mae: 0.0106
Epoch 5/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0100 - mse: 0.0100 - msle: 0.0020 - mae: 0.0290 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0019 - val_mae: 0.0289
Epoch 6/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0088 - mse: 0.0088 - msle: 0.0017 - mae: 0.0275 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0018 - val_mae: 0.0196
Epoch 7/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0082 - mse: 0.0082 - msle: 0.0016 - mae: 0.0261 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0018 - val_mae: 0.0155
Epoch 8/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0078 - mse: 0.0078 - msle: 0.0015 - mae: 0.0245 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0017 - val_mae: 0.0242
Epoch 9/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0076 - mse: 0.0076 - msle: 0.0015 - mae: 0.0258 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0016 - val_mae: 0.0284
Epoch 10/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0072 - mse: 0.0072 - msle: 0.0014 - mae: 0.0239 - val_loss: 0.0065 - val_mse: 0.0065 - val_msle: 0.0015 - val_mae: 0.0145
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
149/150 [============================>.] - ETA: 0s - loss: 0.0659 - mse: 0.0659 - msle: 0.0030 - mae: 0.1416
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 4s 23ms/step - loss: 0.0654 - mse: 0.0654 - msle: 0.0030 - mae: 0.1405 - val_loss: 0.0157 - val_mse: 0.0157 - val_msle: 0.0034 - val_mae: 0.0175
Epoch 2/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0151 - mse: 0.0151 - msle: 0.0030 - mae: 0.0284 - val_loss: 0.0115 - val_mse: 0.0115 - val_msle: 0.0034 - val_mae: 0.0163
Epoch 3/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0129 - mse: 0.0129 - msle: 0.0029 - mae: 0.0307 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0022 - val_mae: 0.0098
Epoch 4/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0112 - mse: 0.0112 - msle: 0.0023 - mae: 0.0312 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0020 - val_mae: 0.0207
Epoch 5/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0091 - mse: 0.0091 - msle: 0.0018 - mae: 0.0291 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0019 - val_mae: 0.0136
Epoch 6/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0080 - mse: 0.0080 - msle: 0.0015 - mae: 0.0271 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0019 - val_mae: 0.0206
Epoch 7/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0014 - mae: 0.0261 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0020 - val_mae: 0.0143
Epoch 8/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0071 - mse: 0.0071 - msle: 0.0013 - mae: 0.0247 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0018 - val_mae: 0.0191
Epoch 9/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0069 - mse: 0.0069 - msle: 0.0013 - mae: 0.0250 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0017 - val_mae: 0.0122
Epoch 10/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0066 - mse: 0.0066 - msle: 0.0012 - mae: 0.0252 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0016 - val_mae: 0.0212
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
148/150 [============================>.] - ETA: 0s - loss: 0.0625 - mse: 0.0625 - msle: 0.0030 - mae: 0.1391
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 5s 24ms/step - loss: 0.0618 - mse: 0.0618 - msle: 0.0030 - mae: 0.1376 - val_loss: 0.0148 - val_mse: 0.0148 - val_msle: 0.0034 - val_mae: 0.0216
Epoch 2/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0150 - mse: 0.0150 - msle: 0.0030 - mae: 0.0289 - val_loss: 0.0111 - val_mse: 0.0111 - val_msle: 0.0034 - val_mae: 0.0144
Epoch 3/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0128 - mse: 0.0128 - msle: 0.0029 - mae: 0.0308 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0024 - val_mae: 0.0174
Epoch 4/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0106 - mse: 0.0106 - msle: 0.0022 - mae: 0.0298 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0020 - val_mae: 0.0127
Epoch 5/10
150/150 [==============================] - 3s 23ms/step - loss: 0.0091 - mse: 0.0091 - msle: 0.0018 - mae: 0.0278 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0019 - val_mae: 0.0094
Epoch 6/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0081 - mse: 0.0081 - msle: 0.0016 - mae: 0.0272 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0017 - val_mae: 0.0289
Epoch 7/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0077 - mse: 0.0077 - msle: 0.0015 - mae: 0.0269 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0017 - val_mae: 0.0179
Epoch 8/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0074 - mse: 0.0074 - msle: 0.0014 - mae: 0.0249 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0016 - val_mae: 0.0279
Epoch 9/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0070 - mse: 0.0070 - msle: 0.0013 - mae: 0.0255 - val_loss: 0.0065 - val_mse: 0.0065 - val_msle: 0.0015 - val_mae: 0.0111
Epoch 10/10
150/150 [==============================] - 3s 22ms/step - loss: 0.0067 - mse: 0.0067 - msle: 0.0013 - mae: 0.0242 - val_loss: 0.0062 - val_mse: 0.0062 - val_msle: 0.0014 - val_mae: 0.0152
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
150/150 [==============================] - ETA: 0s - loss: 0.0617 - mse: 0.0617 - msle: 0.0032 - mae: 0.1367
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 7s 40ms/step - loss: 0.0615 - mse: 0.0615 - msle: 0.0032 - mae: 0.1362 - val_loss: 0.0155 - val_mse: 0.0155 - val_msle: 0.0034 - val_mae: 0.0183
Epoch 2/2
150/150 [==============================] - 5s 35ms/step - loss: 0.0152 - mse: 0.0152 - msle: 0.0032 - mae: 0.0297 - val_loss: 0.0112 - val_mse: 0.0112 - val_msle: 0.0034 - val_mae: 0.0115
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
149/150 [============================>.] - ETA: 0s - loss: 0.0592 - mse: 0.0592 - msle: 0.0031 - mae: 0.1298
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 6s 36ms/step - loss: 0.0588 - mse: 0.0588 - msle: 0.0031 - mae: 0.1289 - val_loss: 0.0142 - val_mse: 0.0142 - val_msle: 0.0034 - val_mae: 0.0121
Epoch 2/2
150/150 [==============================] - 5s 35ms/step - loss: 0.0147 - mse: 0.0147 - msle: 0.0031 - mae: 0.0293 - val_loss: 0.0109 - val_mse: 0.0109 - val_msle: 0.0034 - val_mae: 0.0103
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
150/150 [==============================] - ETA: 0s - loss: 0.0584 - mse: 0.0584 - msle: 0.0029 - mae: 0.1295
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 6s 37ms/step - loss: 0.0582 - mse: 0.0582 - msle: 0.0029 - mae: 0.1291 - val_loss: 0.0160 - val_mse: 0.0160 - val_msle: 0.0034 - val_mae: 0.0135
Epoch 2/2
150/150 [==============================] - 5s 35ms/step - loss: 0.0151 - mse: 0.0151 - msle: 0.0029 - mae: 0.0281 - val_loss: 0.0112 - val_mse: 0.0112 - val_msle: 0.0034 - val_mae: 0.0109
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
149/150 [============================>.] - ETA: 0s - loss: 0.0617 - mse: 0.0617 - msle: 0.0030 - mae: 0.1363
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 6s 36ms/step - loss: 0.0612 - mse: 0.0612 - msle: 0.0030 - mae: 0.1353 - val_loss: 0.0155 - val_mse: 0.0155 - val_msle: 0.0034 - val_mae: 0.0120
Epoch 2/2
150/150 [==============================] - 5s 35ms/step - loss: 0.0150 - mse: 0.0150 - msle: 0.0030 - mae: 0.0287 - val_loss: 0.0108 - val_mse: 0.0108 - val_msle: 0.0033 - val_mae: 0.0172
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
149/150 [============================>.] - ETA: 0s - loss: 0.0590 - mse: 0.0590 - msle: 0.0032 - mae: 0.1300
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 6s 36ms/step - loss: 0.0586 - mse: 0.0586 - msle: 0.0032 - mae: 0.1290 - val_loss: 0.0163 - val_mse: 0.0163 - val_msle: 0.0034 - val_mae: 0.0223
Epoch 2/5
150/150 [==============================] - 5s 35ms/step - loss: 0.0160 - mse: 0.0160 - msle: 0.0032 - mae: 0.0293 - val_loss: 0.0116 - val_mse: 0.0116 - val_msle: 0.0034 - val_mae: 0.0180
Epoch 3/5
150/150 [==============================] - 5s 35ms/step - loss: 0.0133 - mse: 0.0133 - msle: 0.0032 - mae: 0.0291 - val_loss: 0.0107 - val_mse: 0.0107 - val_msle: 0.0033 - val_mae: 0.0125
Epoch 4/5
150/150 [==============================] - 5s 35ms/step - loss: 0.0120 - mse: 0.0120 - msle: 0.0030 - mae: 0.0282 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0021 - val_mae: 0.0147
Epoch 5/5
150/150 [==============================] - 5s 35ms/step - loss: 0.0106 - mse: 0.0106 - msle: 0.0022 - mae: 0.0278 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0015 - val_mae: 0.0116
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
149/150 [============================>.] - ETA: 0s - loss: 0.0604 - mse: 0.0604 - msle: 0.0031 - mae: 0.1327
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 6s 36ms/step - loss: 0.0599 - mse: 0.0599 - msle: 0.0031 - mae: 0.1317 - val_loss: 0.0165 - val_mse: 0.0165 - val_msle: 0.0034 - val_mae: 0.0241
Epoch 2/5
150/150 [==============================] - 5s 35ms/step - loss: 0.0158 - mse: 0.0158 - msle: 0.0031 - mae: 0.0283 - val_loss: 0.0129 - val_mse: 0.0129 - val_msle: 0.0034 - val_mae: 0.0186
Epoch 3/5
150/150 [==============================] - 5s 35ms/step - loss: 0.0139 - mse: 0.0139 - msle: 0.0030 - mae: 0.0299 - val_loss: 0.0095 - val_mse: 0.0095 - val_msle: 0.0027 - val_mae: 0.0098
Epoch 4/5
150/150 [==============================] - 5s 35ms/step - loss: 0.0120 - mse: 0.0120 - msle: 0.0026 - mae: 0.0309 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0020 - val_mae: 0.0182
Epoch 5/5
150/150 [==============================] - 5s 34ms/step - loss: 0.0111 - mse: 0.0111 - msle: 0.0022 - mae: 0.0300 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0018 - val_mae: 0.0229
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
149/150 [============================>.] - ETA: 0s - loss: 0.0594 - mse: 0.0594 - msle: 0.0031 - mae: 0.1324
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 6s 36ms/step - loss: 0.0590 - mse: 0.0590 - msle: 0.0031 - mae: 0.1314 - val_loss: 0.0153 - val_mse: 0.0153 - val_msle: 0.0034 - val_mae: 0.0114
Epoch 2/5
150/150 [==============================] - 5s 35ms/step - loss: 0.0154 - mse: 0.0154 - msle: 0.0031 - mae: 0.0288 - val_loss: 0.0112 - val_mse: 0.0112 - val_msle: 0.0034 - val_mae: 0.0119
Epoch 3/5
150/150 [==============================] - 5s 35ms/step - loss: 0.0130 - mse: 0.0130 - msle: 0.0031 - mae: 0.0291 - val_loss: 0.0094 - val_mse: 0.0094 - val_msle: 0.0027 - val_mae: 0.0098
Epoch 4/5
150/150 [==============================] - 5s 35ms/step - loss: 0.0114 - mse: 0.0114 - msle: 0.0026 - mae: 0.0291 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0019 - val_mae: 0.0129
Epoch 5/5
150/150 [==============================] - 5s 35ms/step - loss: 0.0099 - mse: 0.0099 - msle: 0.0020 - mae: 0.0280 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0017 - val_mae: 0.0155
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
149/150 [============================>.] - ETA: 0s - loss: 0.0626 - mse: 0.0626 - msle: 0.0032 - mae: 0.1357
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 6s 36ms/step - loss: 0.0622 - mse: 0.0622 - msle: 0.0032 - mae: 0.1346 - val_loss: 0.0163 - val_mse: 0.0163 - val_msle: 0.0034 - val_mae: 0.0134
Epoch 2/5
150/150 [==============================] - 5s 35ms/step - loss: 0.0161 - mse: 0.0161 - msle: 0.0032 - mae: 0.0286 - val_loss: 0.0116 - val_mse: 0.0116 - val_msle: 0.0034 - val_mae: 0.0202
Epoch 3/5
150/150 [==============================] - 5s 35ms/step - loss: 0.0132 - mse: 0.0132 - msle: 0.0031 - mae: 0.0306 - val_loss: 0.0093 - val_mse: 0.0093 - val_msle: 0.0025 - val_mae: 0.0208
Epoch 4/5
150/150 [==============================] - 5s 35ms/step - loss: 0.0111 - mse: 0.0111 - msle: 0.0024 - mae: 0.0297 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0018 - val_mae: 0.0107
Epoch 5/5
150/150 [==============================] - 5s 35ms/step - loss: 0.0095 - mse: 0.0095 - msle: 0.0019 - mae: 0.0293 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0017 - val_mae: 0.0174
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
149/150 [============================>.] - ETA: 0s - loss: 0.0628 - mse: 0.0628 - msle: 0.0031 - mae: 0.1370
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 6s 36ms/step - loss: 0.0623 - mse: 0.0623 - msle: 0.0031 - mae: 0.1360 - val_loss: 0.0164 - val_mse: 0.0164 - val_msle: 0.0034 - val_mae: 0.0258
Epoch 2/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0152 - mse: 0.0152 - msle: 0.0031 - mae: 0.0295 - val_loss: 0.0108 - val_mse: 0.0108 - val_msle: 0.0032 - val_mae: 0.0121
Epoch 3/10
150/150 [==============================] - 5s 34ms/step - loss: 0.0123 - mse: 0.0123 - msle: 0.0029 - mae: 0.0284 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0021 - val_mae: 0.0091
Epoch 4/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0106 - mse: 0.0106 - msle: 0.0022 - mae: 0.0274 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0017 - val_mae: 0.0131
Epoch 5/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0096 - mse: 0.0096 - msle: 0.0019 - mae: 0.0258 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0017 - val_mae: 0.0126
Epoch 6/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0088 - mse: 0.0088 - msle: 0.0017 - mae: 0.0260 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0018 - val_mae: 0.0099
Epoch 7/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0081 - mse: 0.0081 - msle: 0.0016 - mae: 0.0261 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0020 - val_mae: 0.0083
Epoch 8/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0015 - mae: 0.0246 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0021 - val_mae: 0.0109
Epoch 9/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0071 - mse: 0.0071 - msle: 0.0014 - mae: 0.0241 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0023 - val_mae: 0.0151
Epoch 10/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0067 - mse: 0.0067 - msle: 0.0013 - mae: 0.0242 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0018 - val_mae: 0.0150
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
150/150 [==============================] - ETA: 0s - loss: 0.0565 - mse: 0.0565 - msle: 0.0032 - mae: 0.1261
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 7s 36ms/step - loss: 0.0563 - mse: 0.0563 - msle: 0.0032 - mae: 0.1257 - val_loss: 0.0162 - val_mse: 0.0162 - val_msle: 0.0034 - val_mae: 0.0175
Epoch 2/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0158 - mse: 0.0158 - msle: 0.0032 - mae: 0.0295 - val_loss: 0.0109 - val_mse: 0.0109 - val_msle: 0.0034 - val_mae: 0.0200
Epoch 3/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0126 - mse: 0.0126 - msle: 0.0031 - mae: 0.0291 - val_loss: 0.0090 - val_mse: 0.0090 - val_msle: 0.0024 - val_mae: 0.0123
Epoch 4/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0110 - mse: 0.0110 - msle: 0.0024 - mae: 0.0294 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0016 - val_mae: 0.0181
Epoch 5/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0096 - mse: 0.0096 - msle: 0.0019 - mae: 0.0268 - val_loss: 0.0069 - val_mse: 0.0069 - val_msle: 0.0016 - val_mae: 0.0200
Epoch 6/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0087 - mse: 0.0087 - msle: 0.0016 - mae: 0.0286 - val_loss: 0.0067 - val_mse: 0.0067 - val_msle: 0.0015 - val_mae: 0.0215
Epoch 7/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0080 - mse: 0.0080 - msle: 0.0016 - mae: 0.0252 - val_loss: 0.0067 - val_mse: 0.0067 - val_msle: 0.0016 - val_mae: 0.0218
Epoch 8/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0076 - mse: 0.0076 - msle: 0.0014 - mae: 0.0263 - val_loss: 0.0065 - val_mse: 0.0065 - val_msle: 0.0016 - val_mae: 0.0152
Epoch 9/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0070 - mse: 0.0070 - msle: 0.0013 - mae: 0.0247 - val_loss: 0.0066 - val_mse: 0.0066 - val_msle: 0.0016 - val_mae: 0.0079
Epoch 10/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0068 - mse: 0.0068 - msle: 0.0013 - mae: 0.0253 - val_loss: 0.0068 - val_mse: 0.0068 - val_msle: 0.0017 - val_mae: 0.0210
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
150/150 [==============================] - ETA: 0s - loss: 0.0597 - mse: 0.0597 - msle: 0.0031 - mae: 0.1312
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 6s 36ms/step - loss: 0.0595 - mse: 0.0595 - msle: 0.0031 - mae: 0.1308 - val_loss: 0.0157 - val_mse: 0.0157 - val_msle: 0.0034 - val_mae: 0.0249
Epoch 2/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0151 - mse: 0.0151 - msle: 0.0031 - mae: 0.0296 - val_loss: 0.0108 - val_mse: 0.0108 - val_msle: 0.0033 - val_mae: 0.0154
Epoch 3/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0120 - mse: 0.0120 - msle: 0.0029 - mae: 0.0288 - val_loss: 0.0083 - val_mse: 0.0083 - val_msle: 0.0021 - val_mae: 0.0198
Epoch 4/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0101 - mse: 0.0101 - msle: 0.0021 - mae: 0.0282 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0017 - val_mae: 0.0119
Epoch 5/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0083 - mse: 0.0083 - msle: 0.0016 - mae: 0.0269 - val_loss: 0.0066 - val_mse: 0.0066 - val_msle: 0.0015 - val_mae: 0.0161
Epoch 6/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0069 - mse: 0.0069 - msle: 0.0013 - mae: 0.0247 - val_loss: 0.0065 - val_mse: 0.0065 - val_msle: 0.0016 - val_mae: 0.0116
Epoch 7/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0065 - mse: 0.0065 - msle: 0.0012 - mae: 0.0254 - val_loss: 0.0066 - val_mse: 0.0066 - val_msle: 0.0016 - val_mae: 0.0208
Epoch 8/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0063 - mse: 0.0063 - msle: 0.0012 - mae: 0.0248 - val_loss: 0.0063 - val_mse: 0.0063 - val_msle: 0.0015 - val_mae: 0.0189
Epoch 9/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0059 - mse: 0.0059 - msle: 0.0010 - mae: 0.0252 - val_loss: 0.0065 - val_mse: 0.0065 - val_msle: 0.0017 - val_mae: 0.0082
Epoch 10/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0055 - mse: 0.0055 - msle: 9.5379e-04 - mae: 0.0237 - val_loss: 0.0059 - val_mse: 0.0059 - val_msle: 0.0014 - val_mae: 0.0106
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
149/150 [============================>.] - ETA: 0s - loss: 0.0598 - mse: 0.0598 - msle: 0.0031 - mae: 0.1333
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 6s 37ms/step - loss: 0.0594 - mse: 0.0594 - msle: 0.0031 - mae: 0.1324 - val_loss: 0.0160 - val_mse: 0.0160 - val_msle: 0.0034 - val_mae: 0.0254
Epoch 2/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0156 - mse: 0.0156 - msle: 0.0031 - mae: 0.0284 - val_loss: 0.0117 - val_mse: 0.0117 - val_msle: 0.0034 - val_mae: 0.0167
Epoch 3/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0130 - mse: 0.0130 - msle: 0.0031 - mae: 0.0293 - val_loss: 0.0095 - val_mse: 0.0095 - val_msle: 0.0028 - val_mae: 0.0151
Epoch 4/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0110 - mse: 0.0110 - msle: 0.0026 - mae: 0.0283 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0020 - val_mae: 0.0109
Epoch 5/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0101 - mse: 0.0101 - msle: 0.0021 - mae: 0.0271 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0018 - val_mae: 0.0109
Epoch 6/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0094 - mse: 0.0094 - msle: 0.0020 - mae: 0.0266 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0019 - val_mae: 0.0106
Epoch 7/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0086 - mse: 0.0086 - msle: 0.0017 - mae: 0.0274 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0019 - val_mae: 0.0367
Epoch 8/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0080 - mse: 0.0080 - msle: 0.0016 - mae: 0.0284 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0019 - val_mae: 0.0284
Epoch 9/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0015 - mae: 0.0273 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0018 - val_mae: 0.0090
Epoch 10/10
150/150 [==============================] - 5s 35ms/step - loss: 0.0070 - mse: 0.0070 - msle: 0.0014 - mae: 0.0257 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0020 - val_mae: 0.0142
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
150/150 [==============================] - ETA: 0s - loss: 0.0560 - mse: 0.0560 - msle: 0.0033 - mae: 0.1260
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 10s 59ms/step - loss: 0.0558 - mse: 0.0558 - msle: 0.0033 - mae: 0.1255 - val_loss: 0.0177 - val_mse: 0.0177 - val_msle: 0.0034 - val_mae: 0.0132
Epoch 2/2
150/150 [==============================] - 8s 57ms/step - loss: 0.0177 - mse: 0.0177 - msle: 0.0033 - mae: 0.0281 - val_loss: 0.0141 - val_mse: 0.0141 - val_msle: 0.0034 - val_mae: 0.0123
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
150/150 [==============================] - ETA: 0s - loss: 0.0578 - mse: 0.0578 - msle: 0.0032 - mae: 0.1324
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 9s 58ms/step - loss: 0.0576 - mse: 0.0576 - msle: 0.0032 - mae: 0.1319 - val_loss: 0.0177 - val_mse: 0.0177 - val_msle: 0.0034 - val_mae: 0.0220
Epoch 2/2
150/150 [==============================] - 9s 57ms/step - loss: 0.0168 - mse: 0.0168 - msle: 0.0032 - mae: 0.0285 - val_loss: 0.0117 - val_mse: 0.0117 - val_msle: 0.0034 - val_mae: 0.0262
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
149/150 [============================>.] - ETA: 0s - loss: 0.0566 - mse: 0.0566 - msle: 0.0033 - mae: 0.1260
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 9s 58ms/step - loss: 0.0562 - mse: 0.0562 - msle: 0.0033 - mae: 0.1251 - val_loss: 0.0153 - val_mse: 0.0153 - val_msle: 0.0034 - val_mae: 0.0169
Epoch 2/2
150/150 [==============================] - 8s 56ms/step - loss: 0.0159 - mse: 0.0159 - msle: 0.0033 - mae: 0.0315 - val_loss: 0.0116 - val_mse: 0.0116 - val_msle: 0.0034 - val_mae: 0.0122
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
149/150 [============================>.] - ETA: 0s - loss: 0.0596 - mse: 0.0596 - msle: 0.0030 - mae: 0.1338
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 9s 58ms/step - loss: 0.0592 - mse: 0.0592 - msle: 0.0030 - mae: 0.1329 - val_loss: 0.0171 - val_mse: 0.0171 - val_msle: 0.0034 - val_mae: 0.0129
Epoch 2/2
150/150 [==============================] - 8s 56ms/step - loss: 0.0156 - mse: 0.0156 - msle: 0.0030 - mae: 0.0277 - val_loss: 0.0114 - val_mse: 0.0114 - val_msle: 0.0034 - val_mae: 0.0118
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
149/150 [============================>.] - ETA: 0s - loss: 0.0560 - mse: 0.0560 - msle: 0.0032 - mae: 0.1246
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 9s 58ms/step - loss: 0.0556 - mse: 0.0556 - msle: 0.0032 - mae: 0.1237 - val_loss: 0.0177 - val_mse: 0.0177 - val_msle: 0.0034 - val_mae: 0.0226
Epoch 2/5
150/150 [==============================] - 8s 56ms/step - loss: 0.0170 - mse: 0.0170 - msle: 0.0032 - mae: 0.0289 - val_loss: 0.0119 - val_mse: 0.0119 - val_msle: 0.0034 - val_mae: 0.0287
Epoch 3/5
150/150 [==============================] - 8s 57ms/step - loss: 0.0133 - mse: 0.0133 - msle: 0.0032 - mae: 0.0294 - val_loss: 0.0104 - val_mse: 0.0104 - val_msle: 0.0033 - val_mae: 0.0126
Epoch 4/5
150/150 [==============================] - 8s 56ms/step - loss: 0.0121 - mse: 0.0121 - msle: 0.0030 - mae: 0.0287 - val_loss: 0.0095 - val_mse: 0.0095 - val_msle: 0.0028 - val_mae: 0.0176
Epoch 5/5
150/150 [==============================] - 8s 56ms/step - loss: 0.0113 - mse: 0.0113 - msle: 0.0026 - mae: 0.0290 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0020 - val_mae: 0.0163
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
150/150 [==============================] - ETA: 0s - loss: 0.0586 - mse: 0.0586 - msle: 0.0032 - mae: 0.1318
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 9s 58ms/step - loss: 0.0584 - mse: 0.0584 - msle: 0.0032 - mae: 0.1313 - val_loss: 0.0172 - val_mse: 0.0172 - val_msle: 0.0034 - val_mae: 0.0219
Epoch 2/5
150/150 [==============================] - 8s 56ms/step - loss: 0.0166 - mse: 0.0166 - msle: 0.0032 - mae: 0.0275 - val_loss: 0.0131 - val_mse: 0.0131 - val_msle: 0.0034 - val_mae: 0.0151
Epoch 3/5
150/150 [==============================] - 8s 56ms/step - loss: 0.0142 - mse: 0.0142 - msle: 0.0032 - mae: 0.0310 - val_loss: 0.0110 - val_mse: 0.0110 - val_msle: 0.0034 - val_mae: 0.0223
Epoch 4/5
150/150 [==============================] - 8s 56ms/step - loss: 0.0125 - mse: 0.0125 - msle: 0.0031 - mae: 0.0294 - val_loss: 0.0100 - val_mse: 0.0100 - val_msle: 0.0031 - val_mae: 0.0172
Epoch 5/5
150/150 [==============================] - 8s 56ms/step - loss: 0.0113 - mse: 0.0113 - msle: 0.0027 - mae: 0.0286 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0019 - val_mae: 0.0259
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
149/150 [============================>.] - ETA: 0s - loss: 0.0589 - mse: 0.0589 - msle: 0.0032 - mae: 0.1285
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 9s 58ms/step - loss: 0.0585 - mse: 0.0585 - msle: 0.0032 - mae: 0.1276 - val_loss: 0.0179 - val_mse: 0.0179 - val_msle: 0.0034 - val_mae: 0.0263
Epoch 2/5
150/150 [==============================] - 8s 56ms/step - loss: 0.0172 - mse: 0.0172 - msle: 0.0032 - mae: 0.0280 - val_loss: 0.0148 - val_mse: 0.0148 - val_msle: 0.0034 - val_mae: 0.0195
Epoch 3/5
150/150 [==============================] - 8s 56ms/step - loss: 0.0151 - mse: 0.0151 - msle: 0.0032 - mae: 0.0312 - val_loss: 0.0103 - val_mse: 0.0103 - val_msle: 0.0030 - val_mae: 0.0172
Epoch 4/5
150/150 [==============================] - 8s 56ms/step - loss: 0.0123 - mse: 0.0123 - msle: 0.0029 - mae: 0.0313 - val_loss: 0.0087 - val_mse: 0.0087 - val_msle: 0.0020 - val_mae: 0.0276
Epoch 5/5
150/150 [==============================] - 8s 56ms/step - loss: 0.0102 - mse: 0.0102 - msle: 0.0022 - mae: 0.0293 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0016 - val_mae: 0.0212
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
149/150 [============================>.] - ETA: 0s - loss: 0.0546 - mse: 0.0546 - msle: 0.0031 - mae: 0.1232
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 10s 58ms/step - loss: 0.0543 - mse: 0.0543 - msle: 0.0031 - mae: 0.1223 - val_loss: 0.0147 - val_mse: 0.0147 - val_msle: 0.0034 - val_mae: 0.0163
Epoch 2/5
150/150 [==============================] - 8s 57ms/step - loss: 0.0149 - mse: 0.0149 - msle: 0.0031 - mae: 0.0306 - val_loss: 0.0110 - val_mse: 0.0110 - val_msle: 0.0034 - val_mae: 0.0190
Epoch 3/5
150/150 [==============================] - 8s 56ms/step - loss: 0.0123 - mse: 0.0123 - msle: 0.0031 - mae: 0.0288 - val_loss: 0.0091 - val_mse: 0.0091 - val_msle: 0.0026 - val_mae: 0.0146
Epoch 4/5
150/150 [==============================] - 8s 56ms/step - loss: 0.0103 - mse: 0.0103 - msle: 0.0023 - mae: 0.0285 - val_loss: 0.0069 - val_mse: 0.0069 - val_msle: 0.0016 - val_mae: 0.0144
Epoch 5/5
150/150 [==============================] - 8s 56ms/step - loss: 0.0082 - mse: 0.0082 - msle: 0.0016 - mae: 0.0276 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0019 - val_mae: 0.0119
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
150/150 [==============================] - ETA: 0s - loss: 0.0556 - mse: 0.0556 - msle: 0.0031 - mae: 0.1254
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 9s 58ms/step - loss: 0.0554 - mse: 0.0554 - msle: 0.0031 - mae: 0.1250 - val_loss: 0.0178 - val_mse: 0.0178 - val_msle: 0.0034 - val_mae: 0.0310
Epoch 2/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0167 - mse: 0.0167 - msle: 0.0031 - mae: 0.0281 - val_loss: 0.0141 - val_mse: 0.0141 - val_msle: 0.0034 - val_mae: 0.0121
Epoch 3/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0148 - mse: 0.0148 - msle: 0.0031 - mae: 0.0306 - val_loss: 0.0102 - val_mse: 0.0102 - val_msle: 0.0030 - val_mae: 0.0165
Epoch 4/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0125 - mse: 0.0125 - msle: 0.0029 - mae: 0.0304 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0021 - val_mae: 0.0113
Epoch 5/10
150/150 [==============================] - 9s 57ms/step - loss: 0.0113 - mse: 0.0113 - msle: 0.0024 - mae: 0.0294 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0017 - val_mae: 0.0101
Epoch 6/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0102 - mse: 0.0102 - msle: 0.0021 - mae: 0.0286 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0016 - val_mae: 0.0111
Epoch 7/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0096 - mse: 0.0096 - msle: 0.0019 - mae: 0.0281 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0016 - val_mae: 0.0161
Epoch 8/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0088 - mse: 0.0088 - msle: 0.0018 - mae: 0.0276 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0018 - val_mae: 0.0175
Epoch 9/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0085 - mse: 0.0085 - msle: 0.0017 - mae: 0.0271 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0017 - val_mae: 0.0100
Epoch 10/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0080 - mse: 0.0080 - msle: 0.0016 - mae: 0.0266 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0019 - val_mae: 0.0168
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
149/150 [============================>.] - ETA: 0s - loss: 0.0565 - mse: 0.0565 - msle: 0.0032 - mae: 0.1277
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 9s 58ms/step - loss: 0.0561 - mse: 0.0561 - msle: 0.0032 - mae: 0.1267 - val_loss: 0.0176 - val_mse: 0.0176 - val_msle: 0.0034 - val_mae: 0.0154
Epoch 2/10
150/150 [==============================] - 8s 57ms/step - loss: 0.0173 - mse: 0.0173 - msle: 0.0032 - mae: 0.0283 - val_loss: 0.0143 - val_mse: 0.0143 - val_msle: 0.0033 - val_mae: 0.0179
Epoch 3/10
150/150 [==============================] - 8s 57ms/step - loss: 0.0151 - mse: 0.0151 - msle: 0.0032 - mae: 0.0316 - val_loss: 0.0108 - val_mse: 0.0108 - val_msle: 0.0034 - val_mae: 0.0154
Epoch 4/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0127 - mse: 0.0127 - msle: 0.0032 - mae: 0.0306 - val_loss: 0.0099 - val_mse: 0.0099 - val_msle: 0.0029 - val_mae: 0.0168
Epoch 5/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0113 - mse: 0.0113 - msle: 0.0027 - mae: 0.0299 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0018 - val_mae: 0.0247
Epoch 6/10
150/150 [==============================] - 9s 57ms/step - loss: 0.0092 - mse: 0.0092 - msle: 0.0019 - mae: 0.0283 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0016 - val_mae: 0.0273
Epoch 7/10
150/150 [==============================] - 8s 55ms/step - loss: 0.0078 - mse: 0.0078 - msle: 0.0015 - mae: 0.0275 - val_loss: 0.0068 - val_mse: 0.0068 - val_msle: 0.0016 - val_mae: 0.0194
Epoch 8/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0014 - mae: 0.0321 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0017 - val_mae: 0.0243
Epoch 9/10
150/150 [==============================] - 8s 57ms/step - loss: 0.0066 - mse: 0.0066 - msle: 0.0013 - mae: 0.0259 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0016 - val_mae: 0.0330
Epoch 10/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0059 - mse: 0.0059 - msle: 0.0011 - mae: 0.0254 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0014 - val_mae: 0.0389
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
149/150 [============================>.] - ETA: 0s - loss: 0.0558 - mse: 0.0558 - msle: 0.0031 - mae: 0.1255
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 9s 57ms/step - loss: 0.0554 - mse: 0.0554 - msle: 0.0031 - mae: 0.1246 - val_loss: 0.0170 - val_mse: 0.0170 - val_msle: 0.0034 - val_mae: 0.0185
Epoch 2/10
150/150 [==============================] - 8s 57ms/step - loss: 0.0161 - mse: 0.0161 - msle: 0.0031 - mae: 0.0280 - val_loss: 0.0116 - val_mse: 0.0116 - val_msle: 0.0034 - val_mae: 0.0122
Epoch 3/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0134 - mse: 0.0134 - msle: 0.0031 - mae: 0.0305 - val_loss: 0.0106 - val_mse: 0.0106 - val_msle: 0.0033 - val_mae: 0.0130
Epoch 4/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0119 - mse: 0.0119 - msle: 0.0029 - mae: 0.0298 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0019 - val_mae: 0.0164
Epoch 5/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0099 - mse: 0.0099 - msle: 0.0021 - mae: 0.0280 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0016 - val_mae: 0.0147
Epoch 6/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0085 - mse: 0.0085 - msle: 0.0016 - mae: 0.0276 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0018 - val_mae: 0.0160
Epoch 7/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0014 - mae: 0.0273 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0020 - val_mae: 0.0246
Epoch 8/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0070 - mse: 0.0070 - msle: 0.0013 - mae: 0.0277 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0018 - val_mae: 0.0283
Epoch 9/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0063 - mse: 0.0063 - msle: 0.0011 - mae: 0.0261 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0017 - val_mae: 0.0143
Epoch 10/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0058 - mse: 0.0058 - msle: 0.0011 - mae: 0.0250 - val_loss: 0.0069 - val_mse: 0.0069 - val_msle: 0.0016 - val_mae: 0.0179
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
149/150 [============================>.] - ETA: 0s - loss: 0.0566 - mse: 0.0566 - msle: 0.0033 - mae: 0.1266
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 9s 58ms/step - loss: 0.0562 - mse: 0.0562 - msle: 0.0033 - mae: 0.1256 - val_loss: 0.0175 - val_mse: 0.0175 - val_msle: 0.0034 - val_mae: 0.0178
Epoch 2/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0172 - mse: 0.0172 - msle: 0.0033 - mae: 0.0287 - val_loss: 0.0118 - val_mse: 0.0118 - val_msle: 0.0034 - val_mae: 0.0117
Epoch 3/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0137 - mse: 0.0137 - msle: 0.0032 - mae: 0.0303 - val_loss: 0.0085 - val_mse: 0.0085 - val_msle: 0.0022 - val_mae: 0.0146
Epoch 4/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0108 - mse: 0.0108 - msle: 0.0024 - mae: 0.0281 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0017 - val_mae: 0.0116
Epoch 5/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0087 - mse: 0.0087 - msle: 0.0017 - mae: 0.0279 - val_loss: 0.0064 - val_mse: 0.0064 - val_msle: 0.0014 - val_mae: 0.0176
Epoch 6/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0076 - mse: 0.0076 - msle: 0.0015 - mae: 0.0269 - val_loss: 0.0064 - val_mse: 0.0064 - val_msle: 0.0014 - val_mae: 0.0173
Epoch 7/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0068 - mse: 0.0068 - msle: 0.0013 - mae: 0.0256 - val_loss: 0.0064 - val_mse: 0.0064 - val_msle: 0.0014 - val_mae: 0.0246
Epoch 8/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0063 - mse: 0.0063 - msle: 0.0012 - mae: 0.0264 - val_loss: 0.0069 - val_mse: 0.0069 - val_msle: 0.0016 - val_mae: 0.0249
Epoch 9/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0059 - mse: 0.0059 - msle: 0.0011 - mae: 0.0241 - val_loss: 0.0067 - val_mse: 0.0067 - val_msle: 0.0016 - val_mae: 0.0145
Epoch 10/10
150/150 [==============================] - 8s 56ms/step - loss: 0.0056 - mse: 0.0056 - msle: 0.0010 - mae: 0.0234 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0020 - val_mae: 0.0122
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
150/150 [==============================] - ETA: 0s - loss: 0.0581 - mse: 0.0581 - msle: 0.0032 - mae: 0.1318
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 17s 107ms/step - loss: 0.0579 - mse: 0.0579 - msle: 0.0032 - mae: 0.1313 - val_loss: 0.0179 - val_mse: 0.0179 - val_msle: 0.0034 - val_mae: 0.0232
Epoch 2/2
150/150 [==============================] - 15s 103ms/step - loss: 0.0162 - mse: 0.0162 - msle: 0.0032 - mae: 0.0290 - val_loss: 0.0112 - val_mse: 0.0112 - val_msle: 0.0034 - val_mae: 0.0138
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
150/150 [==============================] - ETA: 0s - loss: 0.0510 - mse: 0.0510 - msle: 0.0032 - mae: 0.1162
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 16s 104ms/step - loss: 0.0508 - mse: 0.0508 - msle: 0.0032 - mae: 0.1158 - val_loss: 0.0177 - val_mse: 0.0177 - val_msle: 0.0034 - val_mae: 0.0252
Epoch 2/2
150/150 [==============================] - 15s 103ms/step - loss: 0.0170 - mse: 0.0170 - msle: 0.0032 - mae: 0.0289 - val_loss: 0.0112 - val_mse: 0.0112 - val_msle: 0.0034 - val_mae: 0.0153
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
150/150 [==============================] - ETA: 0s - loss: 0.0498 - mse: 0.0498 - msle: 0.0031 - mae: 0.1148
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 16s 105ms/step - loss: 0.0496 - mse: 0.0496 - msle: 0.0031 - mae: 0.1144 - val_loss: 0.0178 - val_mse: 0.0178 - val_msle: 0.0034 - val_mae: 0.0252
Epoch 2/2
150/150 [==============================] - 15s 103ms/step - loss: 0.0160 - mse: 0.0160 - msle: 0.0031 - mae: 0.0301 - val_loss: 0.0111 - val_mse: 0.0111 - val_msle: 0.0034 - val_mae: 0.0224
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
150/150 [==============================] - ETA: 0s - loss: 0.0490 - mse: 0.0490 - msle: 0.0031 - mae: 0.1103
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 16s 105ms/step - loss: 0.0489 - mse: 0.0489 - msle: 0.0031 - mae: 0.1099 - val_loss: 0.0171 - val_mse: 0.0171 - val_msle: 0.0034 - val_mae: 0.0208
Epoch 2/2
150/150 [==============================] - 15s 103ms/step - loss: 0.0157 - mse: 0.0157 - msle: 0.0031 - mae: 0.0290 - val_loss: 0.0113 - val_mse: 0.0113 - val_msle: 0.0034 - val_mae: 0.0175
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
150/150 [==============================] - ETA: 0s - loss: 0.0519 - mse: 0.0519 - msle: 0.0032 - mae: 0.1178
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 16s 105ms/step - loss: 0.0517 - mse: 0.0517 - msle: 0.0032 - mae: 0.1174 - val_loss: 0.0179 - val_mse: 0.0179 - val_msle: 0.0034 - val_mae: 0.0184
Epoch 2/5
150/150 [==============================] - 16s 104ms/step - loss: 0.0169 - mse: 0.0169 - msle: 0.0032 - mae: 0.0288 - val_loss: 0.0120 - val_mse: 0.0120 - val_msle: 0.0034 - val_mae: 0.0271
Epoch 3/5
150/150 [==============================] - 16s 104ms/step - loss: 0.0136 - mse: 0.0136 - msle: 0.0032 - mae: 0.0308 - val_loss: 0.0107 - val_mse: 0.0107 - val_msle: 0.0034 - val_mae: 0.0127
Epoch 4/5
150/150 [==============================] - 15s 103ms/step - loss: 0.0126 - mse: 0.0126 - msle: 0.0032 - mae: 0.0287 - val_loss: 0.0101 - val_mse: 0.0101 - val_msle: 0.0033 - val_mae: 0.0147
Epoch 5/5
150/150 [==============================] - 16s 104ms/step - loss: 0.0118 - mse: 0.0118 - msle: 0.0030 - mae: 0.0285 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0022 - val_mae: 0.0127
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
150/150 [==============================] - ETA: 0s - loss: 0.0495 - mse: 0.0495 - msle: 0.0031 - mae: 0.1112
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 17s 108ms/step - loss: 0.0493 - mse: 0.0493 - msle: 0.0031 - mae: 0.1108 - val_loss: 0.0177 - val_mse: 0.0177 - val_msle: 0.0034 - val_mae: 0.0199
Epoch 2/5
150/150 [==============================] - 15s 103ms/step - loss: 0.0163 - mse: 0.0163 - msle: 0.0031 - mae: 0.0286 - val_loss: 0.0118 - val_mse: 0.0118 - val_msle: 0.0034 - val_mae: 0.0216
Epoch 3/5
150/150 [==============================] - 16s 104ms/step - loss: 0.0134 - mse: 0.0134 - msle: 0.0031 - mae: 0.0321 - val_loss: 0.0105 - val_mse: 0.0105 - val_msle: 0.0032 - val_mae: 0.0257
Epoch 4/5
150/150 [==============================] - 16s 104ms/step - loss: 0.0115 - mse: 0.0115 - msle: 0.0028 - mae: 0.0302 - val_loss: 0.0071 - val_mse: 0.0071 - val_msle: 0.0017 - val_mae: 0.0276
Epoch 5/5
150/150 [==============================] - 15s 103ms/step - loss: 0.0094 - mse: 0.0094 - msle: 0.0019 - mae: 0.0292 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0018 - val_mae: 0.0143
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
150/150 [==============================] - ETA: 0s - loss: 0.0570 - mse: 0.0570 - msle: 0.0029 - mae: 0.1301
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 16s 105ms/step - loss: 0.0568 - mse: 0.0568 - msle: 0.0029 - mae: 0.1296 - val_loss: 0.0178 - val_mse: 0.0178 - val_msle: 0.0034 - val_mae: 0.0223
Epoch 2/5
150/150 [==============================] - 16s 104ms/step - loss: 0.0154 - mse: 0.0154 - msle: 0.0029 - mae: 0.0277 - val_loss: 0.0117 - val_mse: 0.0117 - val_msle: 0.0034 - val_mae: 0.0163
Epoch 3/5
150/150 [==============================] - 15s 103ms/step - loss: 0.0123 - mse: 0.0123 - msle: 0.0029 - mae: 0.0299 - val_loss: 0.0103 - val_mse: 0.0103 - val_msle: 0.0031 - val_mae: 0.0185
Epoch 4/5
150/150 [==============================] - 15s 103ms/step - loss: 0.0109 - mse: 0.0109 - msle: 0.0026 - mae: 0.0290 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0020 - val_mae: 0.0154
Epoch 5/5
150/150 [==============================] - 15s 103ms/step - loss: 0.0094 - mse: 0.0094 - msle: 0.0020 - mae: 0.0286 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0016 - val_mae: 0.0133
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
150/150 [==============================] - ETA: 0s - loss: 0.0537 - mse: 0.0537 - msle: 0.0031 - mae: 0.1244
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 16s 104ms/step - loss: 0.0536 - mse: 0.0536 - msle: 0.0031 - mae: 0.1240 - val_loss: 0.0179 - val_mse: 0.0179 - val_msle: 0.0034 - val_mae: 0.0198
Epoch 2/5
150/150 [==============================] - 16s 104ms/step - loss: 0.0165 - mse: 0.0165 - msle: 0.0031 - mae: 0.0277 - val_loss: 0.0129 - val_mse: 0.0129 - val_msle: 0.0034 - val_mae: 0.0211
Epoch 3/5
150/150 [==============================] - 16s 104ms/step - loss: 0.0139 - mse: 0.0139 - msle: 0.0031 - mae: 0.0318 - val_loss: 0.0111 - val_mse: 0.0111 - val_msle: 0.0034 - val_mae: 0.0228
Epoch 4/5
150/150 [==============================] - 15s 103ms/step - loss: 0.0119 - mse: 0.0119 - msle: 0.0031 - mae: 0.0300 - val_loss: 0.0099 - val_mse: 0.0099 - val_msle: 0.0030 - val_mae: 0.0122
Epoch 5/5
150/150 [==============================] - 15s 103ms/step - loss: 0.0107 - mse: 0.0107 - msle: 0.0026 - mae: 0.0293 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0019 - val_mae: 0.0134
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
150/150 [==============================] - ETA: 0s - loss: 0.0673 - mse: 0.0673 - msle: 0.0032 - mae: 0.1517
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 16s 105ms/step - loss: 0.0671 - mse: 0.0671 - msle: 0.0032 - mae: 0.1512 - val_loss: 0.0182 - val_mse: 0.0182 - val_msle: 0.0034 - val_mae: 0.0199
Epoch 2/10
150/150 [==============================] - 15s 103ms/step - loss: 0.0176 - mse: 0.0176 - msle: 0.0032 - mae: 0.0278 - val_loss: 0.0137 - val_mse: 0.0137 - val_msle: 0.0034 - val_mae: 0.0309
Epoch 3/10
150/150 [==============================] - 16s 104ms/step - loss: 0.0150 - mse: 0.0150 - msle: 0.0032 - mae: 0.0321 - val_loss: 0.0111 - val_mse: 0.0111 - val_msle: 0.0034 - val_mae: 0.0228
Epoch 4/10
150/150 [==============================] - 15s 103ms/step - loss: 0.0130 - mse: 0.0130 - msle: 0.0032 - mae: 0.0293 - val_loss: 0.0108 - val_mse: 0.0108 - val_msle: 0.0034 - val_mae: 0.0256
Epoch 5/10
150/150 [==============================] - 16s 104ms/step - loss: 0.0122 - mse: 0.0122 - msle: 0.0031 - mae: 0.0291 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0025 - val_mae: 0.0174
Epoch 6/10
150/150 [==============================] - 16s 104ms/step - loss: 0.0111 - mse: 0.0111 - msle: 0.0026 - mae: 0.0296 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0020 - val_mae: 0.0102
Epoch 7/10
150/150 [==============================] - 16s 104ms/step - loss: 0.0100 - mse: 0.0100 - msle: 0.0022 - mae: 0.0290 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0019 - val_mae: 0.0122
Epoch 8/10
150/150 [==============================] - 15s 103ms/step - loss: 0.0091 - mse: 0.0091 - msle: 0.0019 - mae: 0.0295 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0018 - val_mae: 0.0442
Epoch 9/10
150/150 [==============================] - 16s 104ms/step - loss: 0.0081 - mse: 0.0081 - msle: 0.0016 - mae: 0.0286 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0022 - val_mae: 0.0187
Epoch 10/10
150/150 [==============================] - 15s 103ms/step - loss: 0.0074 - mse: 0.0074 - msle: 0.0014 - mae: 0.0273 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0022 - val_mae: 0.0214
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
150/150 [==============================] - ETA: 0s - loss: 0.0509 - mse: 0.0509 - msle: 0.0031 - mae: 0.1163
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 16s 105ms/step - loss: 0.0507 - mse: 0.0507 - msle: 0.0031 - mae: 0.1159 - val_loss: 0.0180 - val_mse: 0.0180 - val_msle: 0.0034 - val_mae: 0.0304
Epoch 2/10
150/150 [==============================] - 16s 104ms/step - loss: 0.0162 - mse: 0.0162 - msle: 0.0031 - mae: 0.0285 - val_loss: 0.0116 - val_mse: 0.0116 - val_msle: 0.0034 - val_mae: 0.0163
Epoch 3/10
150/150 [==============================] - 16s 104ms/step - loss: 0.0130 - mse: 0.0130 - msle: 0.0030 - mae: 0.0310 - val_loss: 0.0097 - val_mse: 0.0097 - val_msle: 0.0029 - val_mae: 0.0123
Epoch 4/10
150/150 [==============================] - 16s 105ms/step - loss: 0.0109 - mse: 0.0109 - msle: 0.0025 - mae: 0.0296 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0018 - val_mae: 0.0155
Epoch 5/10
150/150 [==============================] - 16s 104ms/step - loss: 0.0087 - mse: 0.0087 - msle: 0.0017 - mae: 0.0288 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0020 - val_mae: 0.0129
Epoch 6/10
150/150 [==============================] - 16s 104ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0014 - mae: 0.0282 - val_loss: 0.0093 - val_mse: 0.0093 - val_msle: 0.0023 - val_mae: 0.0196
Epoch 7/10
150/150 [==============================] - 16s 104ms/step - loss: 0.0064 - mse: 0.0064 - msle: 0.0012 - mae: 0.0273 - val_loss: 0.0096 - val_mse: 0.0096 - val_msle: 0.0024 - val_mae: 0.0188
Epoch 8/10
150/150 [==============================] - 16s 104ms/step - loss: 0.0058 - mse: 0.0058 - msle: 0.0010 - mae: 0.0269 - val_loss: 0.0104 - val_mse: 0.0104 - val_msle: 0.0026 - val_mae: 0.0184
Epoch 9/10
150/150 [==============================] - 16s 104ms/step - loss: 0.0054 - mse: 0.0054 - msle: 9.4554e-04 - mae: 0.0269 - val_loss: 0.0093 - val_mse: 0.0093 - val_msle: 0.0022 - val_mae: 0.0131
Epoch 10/10
150/150 [==============================] - 16s 104ms/step - loss: 0.0051 - mse: 0.0051 - msle: 8.6723e-04 - mae: 0.0263 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0020 - val_mae: 0.0189
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
150/150 [==============================] - ETA: 0s - loss: 0.0501 - mse: 0.0501 - msle: 0.0030 - mae: 0.1142
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 16s 104ms/step - loss: 0.0500 - mse: 0.0500 - msle: 0.0030 - mae: 0.1138 - val_loss: 0.0176 - val_mse: 0.0176 - val_msle: 0.0034 - val_mae: 0.0140
Epoch 2/10
150/150 [==============================] - 15s 103ms/step - loss: 0.0158 - mse: 0.0158 - msle: 0.0030 - mae: 0.0271 - val_loss: 0.0124 - val_mse: 0.0124 - val_msle: 0.0034 - val_mae: 0.0174
Epoch 3/10
150/150 [==============================] - 15s 103ms/step - loss: 0.0133 - mse: 0.0133 - msle: 0.0030 - mae: 0.0312 - val_loss: 0.0104 - val_mse: 0.0104 - val_msle: 0.0033 - val_mae: 0.0116
Epoch 4/10
150/150 [==============================] - 15s 103ms/step - loss: 0.0116 - mse: 0.0116 - msle: 0.0029 - mae: 0.0308 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0020 - val_mae: 0.0116
Epoch 5/10
150/150 [==============================] - 15s 102ms/step - loss: 0.0096 - mse: 0.0096 - msle: 0.0020 - mae: 0.0301 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0017 - val_mae: 0.0115
Epoch 6/10
150/150 [==============================] - 15s 102ms/step - loss: 0.0081 - mse: 0.0081 - msle: 0.0015 - mae: 0.0286 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0020 - val_mae: 0.0122
Epoch 7/10
150/150 [==============================] - 16s 104ms/step - loss: 0.0069 - mse: 0.0069 - msle: 0.0013 - mae: 0.0267 - val_loss: 0.0064 - val_mse: 0.0064 - val_msle: 0.0015 - val_mae: 0.0127
Epoch 8/10
150/150 [==============================] - 15s 102ms/step - loss: 0.0060 - mse: 0.0060 - msle: 0.0011 - mae: 0.0257 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0019 - val_mae: 0.0109
Epoch 9/10
150/150 [==============================] - 15s 103ms/step - loss: 0.0057 - mse: 0.0057 - msle: 0.0010 - mae: 0.0257 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0018 - val_mae: 0.0139
Epoch 10/10
150/150 [==============================] - 15s 103ms/step - loss: 0.0050 - mse: 0.0050 - msle: 8.7381e-04 - mae: 0.0251 - val_loss: 0.0079 - val_mse: 0.0079 - val_msle: 0.0018 - val_mae: 0.0153
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
150/150 [==============================] - ETA: 0s - loss: 0.0540 - mse: 0.0540 - msle: 0.0034 - mae: 0.1226
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 16s 105ms/step - loss: 0.0539 - mse: 0.0539 - msle: 0.0034 - mae: 0.1221 - val_loss: 0.0172 - val_mse: 0.0172 - val_msle: 0.0034 - val_mae: 0.0248
Epoch 2/10
150/150 [==============================] - 15s 103ms/step - loss: 0.0169 - mse: 0.0169 - msle: 0.0034 - mae: 0.0315 - val_loss: 0.0112 - val_mse: 0.0112 - val_msle: 0.0034 - val_mae: 0.0227
Epoch 3/10
150/150 [==============================] - 15s 103ms/step - loss: 0.0134 - mse: 0.0134 - msle: 0.0034 - mae: 0.0296 - val_loss: 0.0103 - val_mse: 0.0103 - val_msle: 0.0031 - val_mae: 0.0158
Epoch 4/10
150/150 [==============================] - 15s 102ms/step - loss: 0.0120 - mse: 0.0120 - msle: 0.0028 - mae: 0.0301 - val_loss: 0.0095 - val_mse: 0.0095 - val_msle: 0.0025 - val_mae: 0.0135
Epoch 5/10
150/150 [==============================] - 15s 103ms/step - loss: 0.0102 - mse: 0.0102 - msle: 0.0021 - mae: 0.0292 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0022 - val_mae: 0.0183
Epoch 6/10
150/150 [==============================] - 15s 102ms/step - loss: 0.0084 - mse: 0.0084 - msle: 0.0016 - mae: 0.0285 - val_loss: 0.0100 - val_mse: 0.0100 - val_msle: 0.0026 - val_mae: 0.0193
Epoch 7/10
150/150 [==============================] - 15s 102ms/step - loss: 0.0077 - mse: 0.0077 - msle: 0.0015 - mae: 0.0281 - val_loss: 0.0091 - val_mse: 0.0091 - val_msle: 0.0023 - val_mae: 0.0293
Epoch 8/10
150/150 [==============================] - 15s 102ms/step - loss: 0.0068 - mse: 0.0068 - msle: 0.0013 - mae: 0.0273 - val_loss: 0.0092 - val_mse: 0.0092 - val_msle: 0.0025 - val_mae: 0.0143
Epoch 9/10
150/150 [==============================] - 15s 103ms/step - loss: 0.0066 - mse: 0.0066 - msle: 0.0012 - mae: 0.0261 - val_loss: 0.0080 - val_mse: 0.0080 - val_msle: 0.0019 - val_mae: 0.0250
Epoch 10/10
150/150 [==============================] - 15s 103ms/step - loss: 0.0054 - mse: 0.0054 - msle: 9.5992e-04 - mae: 0.0254 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0021 - val_mae: 0.0211
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
150/150 [==============================] - ETA: 0s - loss: 0.0473 - mse: 0.0473 - msle: 0.0031 - mae: 0.1082
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 44s 286ms/step - loss: 0.0472 - mse: 0.0472 - msle: 0.0031 - mae: 0.1079 - val_loss: 0.0189 - val_mse: 0.0189 - val_msle: 0.0035 - val_mae: 0.0368
Epoch 2/2
150/150 [==============================] - 42s 282ms/step - loss: 0.0173 - mse: 0.0173 - msle: 0.0031 - mae: 0.0273 - val_loss: 0.0169 - val_mse: 0.0169 - val_msle: 0.0034 - val_mae: 0.0275
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
150/150 [==============================] - ETA: 0s - loss: 0.0412 - mse: 0.0412 - msle: 0.0030 - mae: 0.0868
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 43s 284ms/step - loss: 0.0411 - mse: 0.0411 - msle: 0.0030 - mae: 0.0865 - val_loss: 0.0171 - val_mse: 0.0171 - val_msle: 0.0035 - val_mae: 0.0291
Epoch 2/2
150/150 [==============================] - 42s 282ms/step - loss: 0.0149 - mse: 0.0149 - msle: 0.0030 - mae: 0.0307 - val_loss: 0.0119 - val_mse: 0.0119 - val_msle: 0.0034 - val_mae: 0.0352
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
150/150 [==============================] - ETA: 0s - loss: 0.0422 - mse: 0.0422 - msle: 0.0032 - mae: 0.0901
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 43s 284ms/step - loss: 0.0421 - mse: 0.0421 - msle: 0.0032 - mae: 0.0898 - val_loss: 0.0182 - val_mse: 0.0182 - val_msle: 0.0035 - val_mae: 0.0142
Epoch 2/2
150/150 [==============================] - 42s 282ms/step - loss: 0.0174 - mse: 0.0174 - msle: 0.0032 - mae: 0.0307 - val_loss: 0.0135 - val_mse: 0.0135 - val_msle: 0.0034 - val_mae: 0.0301
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/2
150/150 [==============================] - ETA: 0s - loss: 0.0826 - mse: 0.0826 - msle: 0.0032 - mae: 0.1722
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 43s 284ms/step - loss: 0.0823 - mse: 0.0823 - msle: 0.0032 - mae: 0.1716 - val_loss: 0.0186 - val_mse: 0.0186 - val_msle: 0.0035 - val_mae: 0.0207
Epoch 2/2
150/150 [==============================] - 42s 283ms/step - loss: 0.0177 - mse: 0.0177 - msle: 0.0032 - mae: 0.0268 - val_loss: 0.0185 - val_mse: 0.0185 - val_msle: 0.0034 - val_mae: 0.0279
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
150/150 [==============================] - ETA: 0s - loss: 0.0425 - mse: 0.0425 - msle: 0.0032 - mae: 0.0906
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 44s 284ms/step - loss: 0.0424 - mse: 0.0424 - msle: 0.0032 - mae: 0.0903 - val_loss: 0.0180 - val_mse: 0.0180 - val_msle: 0.0035 - val_mae: 0.0159
Epoch 2/5
150/150 [==============================] - 42s 283ms/step - loss: 0.0170 - mse: 0.0170 - msle: 0.0032 - mae: 0.0326 - val_loss: 0.0113 - val_mse: 0.0113 - val_msle: 0.0034 - val_mae: 0.0158
Epoch 3/5
150/150 [==============================] - 42s 282ms/step - loss: 0.0131 - mse: 0.0131 - msle: 0.0032 - mae: 0.0316 - val_loss: 0.0107 - val_mse: 0.0107 - val_msle: 0.0034 - val_mae: 0.0185
Epoch 4/5
150/150 [==============================] - 42s 283ms/step - loss: 0.0119 - mse: 0.0119 - msle: 0.0032 - mae: 0.0293 - val_loss: 0.0098 - val_mse: 0.0098 - val_msle: 0.0030 - val_mae: 0.0171
Epoch 5/5
150/150 [==============================] - 42s 283ms/step - loss: 0.0112 - mse: 0.0112 - msle: 0.0027 - mae: 0.0334 - val_loss: 0.0078 - val_mse: 0.0078 - val_msle: 0.0017 - val_mae: 0.0366
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
150/150 [==============================] - ETA: 0s - loss: 0.0420 - mse: 0.0420 - msle: 0.0032 - mae: 0.0866
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 43s 284ms/step - loss: 0.0419 - mse: 0.0419 - msle: 0.0032 - mae: 0.0863 - val_loss: 0.0184 - val_mse: 0.0184 - val_msle: 0.0034 - val_mae: 0.0343
Epoch 2/5
150/150 [==============================] - 42s 282ms/step - loss: 0.0171 - mse: 0.0171 - msle: 0.0032 - mae: 0.0304 - val_loss: 0.0121 - val_mse: 0.0121 - val_msle: 0.0034 - val_mae: 0.0200
Epoch 3/5
150/150 [==============================] - 42s 282ms/step - loss: 0.0137 - mse: 0.0137 - msle: 0.0032 - mae: 0.0336 - val_loss: 0.0105 - val_mse: 0.0105 - val_msle: 0.0033 - val_mae: 0.0163
Epoch 4/5
150/150 [==============================] - 42s 282ms/step - loss: 0.0117 - mse: 0.0117 - msle: 0.0029 - mae: 0.0331 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0019 - val_mae: 0.0138
Epoch 5/5
150/150 [==============================] - 42s 283ms/step - loss: 0.0088 - mse: 0.0088 - msle: 0.0017 - mae: 0.0322 - val_loss: 0.0075 - val_mse: 0.0075 - val_msle: 0.0017 - val_mae: 0.0279
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
150/150 [==============================] - ETA: 0s - loss: 0.0594 - mse: 0.0594 - msle: 0.0032 - mae: 0.1371
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 43s 285ms/step - loss: 0.0592 - mse: 0.0592 - msle: 0.0032 - mae: 0.1366 - val_loss: 0.0186 - val_mse: 0.0186 - val_msle: 0.0035 - val_mae: 0.0285
Epoch 2/5
150/150 [==============================] - 42s 283ms/step - loss: 0.0175 - mse: 0.0175 - msle: 0.0032 - mae: 0.0274 - val_loss: 0.0181 - val_mse: 0.0181 - val_msle: 0.0035 - val_mae: 0.0445
Epoch 3/5
150/150 [==============================] - 42s 283ms/step - loss: 0.0162 - mse: 0.0162 - msle: 0.0032 - mae: 0.0314 - val_loss: 0.0122 - val_mse: 0.0122 - val_msle: 0.0035 - val_mae: 0.0298
Epoch 4/5
150/150 [==============================] - 42s 282ms/step - loss: 0.0133 - mse: 0.0133 - msle: 0.0032 - mae: 0.0325 - val_loss: 0.0109 - val_mse: 0.0109 - val_msle: 0.0035 - val_mae: 0.0144
Epoch 5/5
150/150 [==============================] - 42s 283ms/step - loss: 0.0122 - mse: 0.0122 - msle: 0.0032 - mae: 0.0295 - val_loss: 0.0106 - val_mse: 0.0106 - val_msle: 0.0035 - val_mae: 0.0165
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/5
150/150 [==============================] - ETA: 0s - loss: 0.0415 - mse: 0.0415 - msle: 0.0031 - mae: 0.0869
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 43s 285ms/step - loss: 0.0413 - mse: 0.0413 - msle: 0.0031 - mae: 0.0866 - val_loss: 0.0163 - val_mse: 0.0163 - val_msle: 0.0034 - val_mae: 0.0177
Epoch 2/5
150/150 [==============================] - 42s 283ms/step - loss: 0.0153 - mse: 0.0153 - msle: 0.0031 - mae: 0.0338 - val_loss: 0.0115 - val_mse: 0.0115 - val_msle: 0.0034 - val_mae: 0.0324
Epoch 3/5
150/150 [==============================] - 42s 283ms/step - loss: 0.0120 - mse: 0.0120 - msle: 0.0030 - mae: 0.0316 - val_loss: 0.0086 - val_mse: 0.0086 - val_msle: 0.0023 - val_mae: 0.0212
Epoch 4/5
150/150 [==============================] - 42s 283ms/step - loss: 0.0097 - mse: 0.0097 - msle: 0.0020 - mae: 0.0314 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0017 - val_mae: 0.0307
Epoch 5/5
150/150 [==============================] - 42s 283ms/step - loss: 0.0071 - mse: 0.0071 - msle: 0.0013 - mae: 0.0297 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0018 - val_mae: 0.0213
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
150/150 [==============================] - ETA: 0s - loss: 0.0410 - mse: 0.0410 - msle: 0.0030 - mae: 0.0867
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 43s 284ms/step - loss: 0.0409 - mse: 0.0409 - msle: 0.0030 - mae: 0.0864 - val_loss: 0.0179 - val_mse: 0.0179 - val_msle: 0.0034 - val_mae: 0.0122
Epoch 2/10
150/150 [==============================] - 42s 283ms/step - loss: 0.0159 - mse: 0.0159 - msle: 0.0030 - mae: 0.0305 - val_loss: 0.0123 - val_mse: 0.0123 - val_msle: 0.0034 - val_mae: 0.0250
Epoch 3/10
150/150 [==============================] - 42s 283ms/step - loss: 0.0132 - mse: 0.0132 - msle: 0.0030 - mae: 0.0338 - val_loss: 0.0109 - val_mse: 0.0109 - val_msle: 0.0034 - val_mae: 0.0224
Epoch 4/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0117 - mse: 0.0117 - msle: 0.0030 - mae: 0.0295 - val_loss: 0.0103 - val_mse: 0.0103 - val_msle: 0.0033 - val_mae: 0.0143
Epoch 5/10
150/150 [==============================] - 42s 283ms/step - loss: 0.0111 - mse: 0.0111 - msle: 0.0029 - mae: 0.0298 - val_loss: 0.0095 - val_mse: 0.0095 - val_msle: 0.0029 - val_mae: 0.0176
Epoch 6/10
150/150 [==============================] - 42s 283ms/step - loss: 0.0106 - mse: 0.0106 - msle: 0.0025 - mae: 0.0311 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0017 - val_mae: 0.0192
Epoch 7/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0091 - mse: 0.0091 - msle: 0.0019 - mae: 0.0320 - val_loss: 0.0061 - val_mse: 0.0061 - val_msle: 0.0013 - val_mae: 0.0236
Epoch 8/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0075 - mse: 0.0075 - msle: 0.0015 - mae: 0.0292 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0015 - val_mae: 0.0288
Epoch 9/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0074 - mse: 0.0074 - msle: 0.0014 - mae: 0.0321 - val_loss: 0.0069 - val_mse: 0.0069 - val_msle: 0.0016 - val_mae: 0.0172
Epoch 10/10
150/150 [==============================] - 42s 283ms/step - loss: 0.0059 - mse: 0.0059 - msle: 0.0011 - mae: 0.0267 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0017 - val_mae: 0.0176
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
150/150 [==============================] - ETA: 0s - loss: 0.0408 - mse: 0.0408 - msle: 0.0032 - mae: 0.0831
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 43s 284ms/step - loss: 0.0407 - mse: 0.0407 - msle: 0.0032 - mae: 0.0828 - val_loss: 0.0181 - val_mse: 0.0181 - val_msle: 0.0035 - val_mae: 0.0247
Epoch 2/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0165 - mse: 0.0165 - msle: 0.0032 - mae: 0.0291 - val_loss: 0.0117 - val_mse: 0.0117 - val_msle: 0.0034 - val_mae: 0.0314
Epoch 3/10
150/150 [==============================] - 42s 283ms/step - loss: 0.0131 - mse: 0.0131 - msle: 0.0031 - mae: 0.0344 - val_loss: 0.0103 - val_mse: 0.0103 - val_msle: 0.0032 - val_mae: 0.0203
Epoch 4/10
150/150 [==============================] - 42s 283ms/step - loss: 0.0115 - mse: 0.0115 - msle: 0.0029 - mae: 0.0322 - val_loss: 0.0084 - val_mse: 0.0084 - val_msle: 0.0021 - val_mae: 0.0266
Epoch 5/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0098 - mse: 0.0098 - msle: 0.0021 - mae: 0.0324 - val_loss: 0.0059 - val_mse: 0.0059 - val_msle: 0.0012 - val_mae: 0.0217
Epoch 6/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0073 - mse: 0.0073 - msle: 0.0013 - mae: 0.0317 - val_loss: 0.0059 - val_mse: 0.0059 - val_msle: 0.0012 - val_mae: 0.0290
Epoch 7/10
150/150 [==============================] - 42s 281ms/step - loss: 0.0059 - mse: 0.0059 - msle: 0.0010 - mae: 0.0287 - val_loss: 0.0066 - val_mse: 0.0066 - val_msle: 0.0015 - val_mae: 0.0197
Epoch 8/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0049 - mse: 0.0049 - msle: 7.9897e-04 - mae: 0.0273 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0017 - val_mae: 0.0144
Epoch 9/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0045 - mse: 0.0045 - msle: 7.2050e-04 - mae: 0.0268 - val_loss: 0.0076 - val_mse: 0.0076 - val_msle: 0.0017 - val_mae: 0.0164
Epoch 10/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0042 - mse: 0.0042 - msle: 6.6887e-04 - mae: 0.0259 - val_loss: 0.0072 - val_mse: 0.0072 - val_msle: 0.0016 - val_mae: 0.0157
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
150/150 [==============================] - ETA: 0s - loss: 0.0485 - mse: 0.0485 - msle: 0.0033 - mae: 0.1094
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 43s 284ms/step - loss: 0.0483 - mse: 0.0483 - msle: 0.0033 - mae: 0.1090 - val_loss: 0.0186 - val_mse: 0.0186 - val_msle: 0.0035 - val_mae: 0.0270
Epoch 2/10
150/150 [==============================] - 42s 283ms/step - loss: 0.0177 - mse: 0.0177 - msle: 0.0033 - mae: 0.0289 - val_loss: 0.0126 - val_mse: 0.0126 - val_msle: 0.0035 - val_mae: 0.0292
Epoch 3/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0140 - mse: 0.0140 - msle: 0.0033 - mae: 0.0331 - val_loss: 0.0104 - val_mse: 0.0104 - val_msle: 0.0032 - val_mae: 0.0166
Epoch 4/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0121 - mse: 0.0121 - msle: 0.0030 - mae: 0.0315 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0022 - val_mae: 0.0152
Epoch 5/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0102 - mse: 0.0102 - msle: 0.0022 - mae: 0.0330 - val_loss: 0.0068 - val_mse: 0.0068 - val_msle: 0.0017 - val_mae: 0.0170
Epoch 6/10
150/150 [==============================] - 42s 283ms/step - loss: 0.0084 - mse: 0.0084 - msle: 0.0017 - mae: 0.0307 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0017 - val_mae: 0.0134
Epoch 7/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0071 - mse: 0.0071 - msle: 0.0013 - mae: 0.0303 - val_loss: 0.0077 - val_mse: 0.0077 - val_msle: 0.0018 - val_mae: 0.0188
Epoch 8/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0059 - mse: 0.0059 - msle: 0.0010 - mae: 0.0281 - val_loss: 0.0081 - val_mse: 0.0081 - val_msle: 0.0019 - val_mae: 0.0151
Epoch 9/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0051 - mse: 0.0051 - msle: 8.5018e-04 - mae: 0.0278 - val_loss: 0.0093 - val_mse: 0.0093 - val_msle: 0.0021 - val_mae: 0.0232
Epoch 10/10
150/150 [==============================] - 42s 283ms/step - loss: 0.0045 - mse: 0.0045 - msle: 7.1403e-04 - mae: 0.0264 - val_loss: 0.0088 - val_mse: 0.0088 - val_msle: 0.0020 - val_mae: 0.0224
Importing training file...
Counting number of rows...
Done.
The dataset contains 300000 rows
Epoch 1/10
150/150 [==============================] - ETA: 0s - loss: 0.0418 - mse: 0.0418 - msle: 0.0032 - mae: 0.0872
Importing training file...
Counting number of rows...
Done.
The dataset contains 33371 rows
150/150 [==============================] - 43s 284ms/step - loss: 0.0417 - mse: 0.0417 - msle: 0.0032 - mae: 0.0869 - val_loss: 0.0167 - val_mse: 0.0167 - val_msle: 0.0034 - val_mae: 0.0162
Epoch 2/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0160 - mse: 0.0160 - msle: 0.0032 - mae: 0.0340 - val_loss: 0.0119 - val_mse: 0.0119 - val_msle: 0.0035 - val_mae: 0.0317
Epoch 3/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0128 - mse: 0.0128 - msle: 0.0032 - mae: 0.0329 - val_loss: 0.0094 - val_mse: 0.0094 - val_msle: 0.0028 - val_mae: 0.0150
Epoch 4/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0107 - mse: 0.0107 - msle: 0.0024 - mae: 0.0335 - val_loss: 0.0068 - val_mse: 0.0068 - val_msle: 0.0017 - val_mae: 0.0127
Epoch 5/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0082 - mse: 0.0082 - msle: 0.0015 - mae: 0.0328 - val_loss: 0.0070 - val_mse: 0.0070 - val_msle: 0.0017 - val_mae: 0.0224
Epoch 6/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0063 - mse: 0.0063 - msle: 0.0010 - mae: 0.0307 - val_loss: 0.0073 - val_mse: 0.0073 - val_msle: 0.0019 - val_mae: 0.0180
Epoch 7/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0054 - mse: 0.0054 - msle: 9.1508e-04 - mae: 0.0277 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0018 - val_mae: 0.0238
Epoch 8/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0050 - mse: 0.0050 - msle: 8.4336e-04 - mae: 0.0265 - val_loss: 0.0069 - val_mse: 0.0069 - val_msle: 0.0017 - val_mae: 0.0218
Epoch 9/10
150/150 [==============================] - 42s 283ms/step - loss: 0.0044 - mse: 0.0044 - msle: 7.0335e-04 - mae: 0.0257 - val_loss: 0.0082 - val_mse: 0.0082 - val_msle: 0.0021 - val_mae: 0.0209
Epoch 10/10
150/150 [==============================] - 42s 282ms/step - loss: 0.0042 - mse: 0.0042 - msle: 7.0340e-04 - mae: 0.0252 - val_loss: 0.0074 - val_mse: 0.0074 - val_msle: 0.0019 - val_mae: 0.0185
[{'batch size': 500, 'window length': 11, 'window offset': 0.1, 'epochs': 2, 'validation loss': 0.009502998553216457, 'learning rate': 0.001}, {'batch size': 500, 'window length': 11, 'window offset': 0.3, 'epochs': 2, 'validation loss': 0.009302418678998947, 'learning rate': 0.001}, {'batch size': 500, 'window length': 11, 'window offset': 0.5, 'epochs': 2, 'validation loss': 0.008931530639529228, 'learning rate': 0.001}, {'batch size': 500, 'window length': 11, 'window offset': 0.7, 'epochs': 2, 'validation loss': 0.0091535784304142, 'learning rate': 0.001}, {'batch size': 500, 'window length': 11, 'window offset': 0.1, 'epochs': 5, 'validation loss': 0.008320311084389687, 'learning rate': 0.001}, {'batch size': 500, 'window length': 11, 'window offset': 0.3, 'epochs': 5, 'validation loss': 0.008477774448692799, 'learning rate': 0.001}, {'batch size': 500, 'window length': 11, 'window offset': 0.5, 'epochs': 5, 'validation loss': 0.008693310432136059, 'learning rate': 0.001}, {'batch size': 500, 'window length': 11, 'window offset': 0.7, 'epochs': 5, 'validation loss': 0.00872851349413395, 'learning rate': 0.001}, {'batch size': 500, 'window length': 11, 'window offset': 0.1, 'epochs': 10, 'validation loss': 0.00750806275755167, 'learning rate': 0.001}, {'batch size': 500, 'window length': 11, 'window offset': 0.3, 'epochs': 10, 'validation loss': 0.00857140775769949, 'learning rate': 0.001}, {'batch size': 500, 'window length': 11, 'window offset': 0.5, 'epochs': 10, 'validation loss': 0.007708210498094559, 'learning rate': 0.001}, {'batch size': 500, 'window length': 11, 'window offset': 0.7, 'epochs': 10, 'validation loss': 0.007590387016534805, 'learning rate': 0.001}, {'batch size': 500, 'window length': 21, 'window offset': 0.1, 'epochs': 2, 'validation loss': 0.00759913120418787, 'learning rate': 0.001}, {'batch size': 500, 'window length': 21, 'window offset': 0.3, 'epochs': 2, 'validation loss': 0.009295889176428318, 'learning rate': 0.001}, {'batch size': 500, 'window length': 21, 'window offset': 0.5, 'epochs': 2, 'validation loss': 0.009766495786607265, 'learning rate': 0.001}, {'batch size': 500, 'window length': 21, 'window offset': 0.7, 'epochs': 2, 'validation loss': 0.00856915395706892, 'learning rate': 0.001}, {'batch size': 500, 'window length': 21, 'window offset': 0.1, 'epochs': 5, 'validation loss': 0.00661576259881258, 'learning rate': 0.001}, {'batch size': 500, 'window length': 21, 'window offset': 0.3, 'epochs': 5, 'validation loss': 0.0062211439944803715, 'learning rate': 0.001}, {'batch size': 500, 'window length': 21, 'window offset': 0.5, 'epochs': 5, 'validation loss': 0.007560122292488813, 'learning rate': 0.001}, {'batch size': 500, 'window length': 21, 'window offset': 0.7, 'epochs': 5, 'validation loss': 0.008275951258838177, 'learning rate': 0.001}, {'batch size': 500, 'window length': 21, 'window offset': 0.1, 'epochs': 10, 'validation loss': 0.006199403665959835, 'learning rate': 0.001}, {'batch size': 500, 'window length': 21, 'window offset': 0.3, 'epochs': 10, 'validation loss': 0.006677649915218353, 'learning rate': 0.001}, {'batch size': 500, 'window length': 21, 'window offset': 0.5, 'epochs': 10, 'validation loss': 0.007483077701181173, 'learning rate': 0.001}, {'batch size': 500, 'window length': 21, 'window offset': 0.7, 'epochs': 10, 'validation loss': 0.005680522881448269, 'learning rate': 0.001}, {'batch size': 500, 'window length': 51, 'window offset': 0.1, 'epochs': 2, 'validation loss': 0.007451065815985203, 'learning rate': 0.001}, 
{'batch size': 500, 'window length': 51, 'window offset': 0.3, 'epochs': 2, 'validation loss': 0.007889975793659687, 'learning rate': 0.001}, {'batch size': 500, 'window length': 51, 'window offset': 0.5, 'epochs': 2, 'validation loss': 0.007449944969266653, 'learning rate': 0.001}, {'batch size': 500, 'window length': 51, 'window offset': 0.7, 'epochs': 2, 'validation loss': 0.007371025625616312, 'learning rate': 0.001}, {'batch size': 500, 'window length': 51, 'window offset': 0.1, 'epochs': 5, 'validation loss': 0.006191415712237358, 'learning rate': 0.001}, {'batch size': 500, 'window length': 51, 'window offset': 0.3, 'epochs': 5, 'validation loss': 0.006193511188030243, 'learning rate': 0.001}, {'batch size': 500, 'window length': 51, 'window offset': 0.5, 'epochs': 5, 'validation loss': 0.006528730038553476, 'learning rate': 0.001}, {'batch size': 500, 'window length': 51, 'window offset': 0.7, 'epochs': 5, 'validation loss': 0.008022726513445377, 'learning rate': 0.001}, {'batch size': 500, 'window length': 51, 'window offset': 0.1, 'epochs': 10, 'validation loss': 0.0073609743267297745, 'learning rate': 0.001}, {'batch size': 500, 'window length': 51, 'window offset': 0.3, 'epochs': 10, 'validation loss': 0.005219689104706049, 'learning rate': 0.001}, {'batch size': 500, 'window length': 51, 'window offset': 0.5, 'epochs': 10, 'validation loss': 0.006320070941001177, 'learning rate': 0.001}, {'batch size': 500, 'window length': 51, 'window offset': 0.7, 'epochs': 10, 'validation loss': 0.005457186605781317, 'learning rate': 0.001}, {'batch size': 500, 'window length': 99, 'window offset': 0.1, 'epochs': 2, 'validation loss': 0.008079676888883114, 'learning rate': 0.001}, {'batch size': 500, 'window length': 99, 'window offset': 0.3, 'epochs': 2, 'validation loss': 0.007823038846254349, 'learning rate': 0.001}, {'batch size': 500, 'window length': 99, 'window offset': 0.5, 'epochs': 2, 'validation loss': 0.00788169540464878, 'learning rate': 0.001}, {'batch size': 500, 'window length': 99, 'window offset': 0.7, 'epochs': 2, 'validation loss': 0.007073689252138138, 'learning rate': 0.001}, {'batch size': 500, 'window length': 99, 'window offset': 0.1, 'epochs': 5, 'validation loss': 0.006789964158087969, 'learning rate': 0.001}, {'batch size': 500, 'window length': 99, 'window offset': 0.3, 'epochs': 5, 'validation loss': 0.0071616824716329575, 'learning rate': 0.001}, {'batch size': 500, 'window length': 99, 'window offset': 0.5, 'epochs': 5, 'validation loss': 0.0066254581324756145, 'learning rate': 0.001}, {'batch size': 500, 'window length': 99, 'window offset': 0.7, 'epochs': 5, 'validation loss': 0.007221763022243977, 'learning rate': 0.001}, {'batch size': 500, 'window length': 99, 'window offset': 0.1, 'epochs': 10, 'validation loss': 0.006679201498627663, 'learning rate': 0.001}, {'batch size': 500, 'window length': 99, 'window offset': 0.3, 'epochs': 10, 'validation loss': 0.004958478733897209, 'learning rate': 0.001}, {'batch size': 500, 'window length': 99, 'window offset': 0.5, 'epochs': 10, 'validation loss': 0.008971652016043663, 'learning rate': 0.001}, {'batch size': 500, 'window length': 99, 'window offset': 0.7, 'epochs': 10, 'validation loss': 0.0056730457581579685, 'learning rate': 0.001}, {'batch size': 500, 'window length': 199, 'window offset': 0.1, 'epochs': 2, 'validation loss': 0.009421883150935173, 'learning rate': 0.001}, {'batch size': 500, 'window length': 199, 'window offset': 0.3, 'epochs': 2, 'validation loss': 0.006122894585132599, 'learning rate': 
0.001}, {'batch size': 500, 'window length': 199, 'window offset': 0.5, 'epochs': 2, 'validation loss': 0.007592730689793825, 'learning rate': 0.001}, {'batch size': 500, 'window length': 199, 'window offset': 0.7, 'epochs': 2, 'validation loss': 0.007057685870677233, 'learning rate': 0.001}, {'batch size': 500, 'window length': 199, 'window offset': 0.1, 'epochs': 5, 'validation loss': 0.00958214234560728, 'learning rate': 0.001}, {'batch size': 500, 'window length': 199, 'window offset': 0.3, 'epochs': 5, 'validation loss': 0.008412824012339115, 'learning rate': 0.001}, {'batch size': 500, 'window length': 199, 'window offset': 0.5, 'epochs': 5, 'validation loss': 0.007924331352114677, 'learning rate': 0.001}, {'batch size': 500, 'window length': 199, 'window offset': 0.7, 'epochs': 5, 'validation loss': 0.008603233844041824, 'learning rate': 0.001}, {'batch size': 500, 'window length': 199, 'window offset': 0.1, 'epochs': 10, 'validation loss': 0.011511997319757938, 'learning rate': 0.001}, {'batch size': 500, 'window length': 199, 'window offset': 0.3, 'epochs': 10, 'validation loss': 0.006752608343958855, 'learning rate': 0.001}, {'batch size': 500, 'window length': 199, 'window offset': 0.5, 'epochs': 10, 'validation loss': 0.0107494555413723, 'learning rate': 0.001}, {'batch size': 500, 'window length': 199, 'window offset': 0.7, 'epochs': 10, 'validation loss': 0.008361773565411568, 'learning rate': 0.001}, {'batch size': 500, 'window length': 599, 'window offset': 0.1, 'epochs': 2, 'validation loss': 0.010125699453055859, 'learning rate': 0.001}, {'batch size': 500, 'window length': 599, 'window offset': 0.3, 'epochs': 2, 'validation loss': 0.011348860338330269, 'learning rate': 0.001}, {'batch size': 500, 'window length': 599, 'window offset': 0.5, 'epochs': 2, 'validation loss': 0.007309103384613991, 'learning rate': 0.001}, {'batch size': 500, 'window length': 599, 'window offset': 0.7, 'epochs': 2, 'validation loss': 0.010389056988060474, 'learning rate': 0.001}, {'batch size': 500, 'window length': 599, 'window offset': 0.1, 'epochs': 5, 'validation loss': 0.008316799998283386, 'learning rate': 0.001}, {'batch size': 500, 'window length': 599, 'window offset': 0.3, 'epochs': 5, 'validation loss': 0.005584654398262501, 'learning rate': 0.001}, {'batch size': 500, 'window length': 599, 'window offset': 0.5, 'epochs': 5, 'validation loss': 0.008450301364064217, 'learning rate': 0.001}, {'batch size': 500, 'window length': 599, 'window offset': 0.7, 'epochs': 5, 'validation loss': 0.009856272488832474, 'learning rate': 0.001}, {'batch size': 500, 'window length': 599, 'window offset': 0.1, 'epochs': 10, 'validation loss': 0.006888531614094973, 'learning rate': 0.001}, {'batch size': 500, 'window length': 599, 'window offset': 0.3, 'epochs': 10, 'validation loss': 0.005959399044513702, 'learning rate': 0.001}, {'batch size': 500, 'window length': 599, 'window offset': 0.5, 'epochs': 10, 'validation loss': 0.008600784465670586, 'learning rate': 0.001}, {'batch size': 500, 'window length': 599, 'window offset': 0.7, 'epochs': 10, 'validation loss': 0.006520700640976429, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 11, 'window offset': 0.1, 'epochs': 2, 'validation loss': 0.009110519662499428, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 11, 'window offset': 0.3, 'epochs': 2, 'validation loss': 0.008757900446653366, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 11, 'window offset': 0.5, 'epochs': 2, 'validation loss': 
0.008701265789568424, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 11, 'window offset': 0.7, 'epochs': 2, 'validation loss': 0.008336121216416359, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 11, 'window offset': 0.1, 'epochs': 5, 'validation loss': 0.008353658951818943, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 11, 'window offset': 0.3, 'epochs': 5, 'validation loss': 0.010332503356039524, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 11, 'window offset': 0.5, 'epochs': 5, 'validation loss': 0.008650537580251694, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 11, 'window offset': 0.7, 'epochs': 5, 'validation loss': 0.00878320261836052, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 11, 'window offset': 0.1, 'epochs': 10, 'validation loss': 0.007388445548713207, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 11, 'window offset': 0.3, 'epochs': 10, 'validation loss': 0.008297048509120941, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 11, 'window offset': 0.5, 'epochs': 10, 'validation loss': 0.00801490992307663, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 11, 'window offset': 0.7, 'epochs': 10, 'validation loss': 0.008055821061134338, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 21, 'window offset': 0.1, 'epochs': 2, 'validation loss': 0.008440058678388596, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 21, 'window offset': 0.3, 'epochs': 2, 'validation loss': 0.007678253576159477, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 21, 'window offset': 0.5, 'epochs': 2, 'validation loss': 0.008800635114312172, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 21, 'window offset': 0.7, 'epochs': 2, 'validation loss': 0.008571222424507141, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 21, 'window offset': 0.1, 'epochs': 5, 'validation loss': 0.0069894008338451385, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 21, 'window offset': 0.3, 'epochs': 5, 'validation loss': 0.007613964844495058, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 21, 'window offset': 0.5, 'epochs': 5, 'validation loss': 0.007501627784222364, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 21, 'window offset': 0.7, 'epochs': 5, 'validation loss': 0.007222901564091444, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 21, 'window offset': 0.1, 'epochs': 10, 'validation loss': 0.006103890482336283, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 21, 'window offset': 0.3, 'epochs': 10, 'validation loss': 0.006005620118230581, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 21, 'window offset': 0.5, 'epochs': 10, 'validation loss': 0.00749818654730916, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 21, 'window offset': 0.7, 'epochs': 10, 'validation loss': 0.007096455432474613, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 51, 'window offset': 0.1, 'epochs': 2, 'validation loss': 0.008171065710484982, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 51, 'window offset': 0.3, 'epochs': 2, 'validation loss': 0.009807048365473747, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 51, 'window offset': 0.5, 'epochs': 2, 'validation loss': 0.008602004498243332, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 51, 'window offset': 0.7, 
'epochs': 2, 'validation loss': 0.009478462859988213, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 51, 'window offset': 0.1, 'epochs': 5, 'validation loss': 0.00618743896484375, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 51, 'window offset': 0.3, 'epochs': 5, 'validation loss': 0.007153203710913658, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 51, 'window offset': 0.5, 'epochs': 5, 'validation loss': 0.007755916099995375, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 51, 'window offset': 0.7, 'epochs': 5, 'validation loss': 0.00963725708425045, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 51, 'window offset': 0.1, 'epochs': 10, 'validation loss': 0.0057249488309025764, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 51, 'window offset': 0.3, 'epochs': 10, 'validation loss': 0.006616417318582535, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 51, 'window offset': 0.5, 'epochs': 10, 'validation loss': 0.004926583729684353, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 51, 'window offset': 0.7, 'epochs': 10, 'validation loss': 0.007150271441787481, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 99, 'window offset': 0.1, 'epochs': 2, 'validation loss': 0.009259389713406563, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 99, 'window offset': 0.3, 'epochs': 2, 'validation loss': 0.007784766145050526, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 99, 'window offset': 0.5, 'epochs': 2, 'validation loss': 0.009879306890070438, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 99, 'window offset': 0.7, 'epochs': 2, 'validation loss': 0.008710634894669056, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 99, 'window offset': 0.1, 'epochs': 5, 'validation loss': 0.007002906873822212, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 99, 'window offset': 0.3, 'epochs': 5, 'validation loss': 0.007229534909129143, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 99, 'window offset': 0.5, 'epochs': 5, 'validation loss': 0.008389640599489212, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 99, 'window offset': 0.7, 'epochs': 5, 'validation loss': 0.007088755257427692, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 99, 'window offset': 0.1, 'epochs': 10, 'validation loss': 0.008817266672849655, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 99, 'window offset': 0.3, 'epochs': 10, 'validation loss': 0.005475538317114115, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 99, 'window offset': 0.5, 'epochs': 10, 'validation loss': 0.0055230967700481415, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 99, 'window offset': 0.7, 'epochs': 10, 'validation loss': 0.007006874307990074, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 199, 'window offset': 0.1, 'epochs': 2, 'validation loss': 0.010517640970647335, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 199, 'window offset': 0.3, 'epochs': 2, 'validation loss': 0.010230195708572865, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 199, 'window offset': 0.5, 'epochs': 2, 'validation loss': 0.010296305641531944, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 199, 'window offset': 0.7, 'epochs': 2, 'validation loss': 0.009404372423887253, 'learning rate': 0.001}, {'batch size': 1000, 'window 
length': 199, 'window offset': 0.1, 'epochs': 5, 'validation loss': 0.007718645967543125, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 199, 'window offset': 0.3, 'epochs': 5, 'validation loss': 0.005870492197573185, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 199, 'window offset': 0.5, 'epochs': 5, 'validation loss': 0.006761378142982721, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 199, 'window offset': 0.7, 'epochs': 5, 'validation loss': 0.007810952141880989, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 199, 'window offset': 0.1, 'epochs': 10, 'validation loss': 0.0075856721960008144, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 199, 'window offset': 0.3, 'epochs': 10, 'validation loss': 0.007107398007065058, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 199, 'window offset': 0.5, 'epochs': 10, 'validation loss': 0.007455440238118172, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 199, 'window offset': 0.7, 'epochs': 10, 'validation loss': 0.006411711219698191, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 599, 'window offset': 0.1, 'epochs': 2, 'validation loss': 0.01117539219558239, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 599, 'window offset': 0.3, 'epochs': 2, 'validation loss': 0.010651585645973682, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 599, 'window offset': 0.5, 'epochs': 2, 'validation loss': 0.009475708939135075, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 599, 'window offset': 0.7, 'epochs': 2, 'validation loss': 0.01067093014717102, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 599, 'window offset': 0.1, 'epochs': 5, 'validation loss': 0.0056986999697983265, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 599, 'window offset': 0.3, 'epochs': 5, 'validation loss': 0.00792985875159502, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 599, 'window offset': 0.5, 'epochs': 5, 'validation loss': 0.007086142431944609, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 599, 'window offset': 0.7, 'epochs': 5, 'validation loss': 0.010028726421296597, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 599, 'window offset': 0.1, 'epochs': 10, 'validation loss': 0.007373429369181395, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 599, 'window offset': 0.3, 'epochs': 10, 'validation loss': 0.007203069049865007, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 599, 'window offset': 0.5, 'epochs': 10, 'validation loss': 0.009468364529311657, 'learning rate': 0.001}, {'batch size': 1000, 'window length': 599, 'window offset': 0.7, 'epochs': 10, 'validation loss': 0.007299786899238825, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 11, 'window offset': 0.1, 'epochs': 2, 'validation loss': 0.00963202677667141, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 11, 'window offset': 0.3, 'epochs': 2, 'validation loss': 0.009571049362421036, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 11, 'window offset': 0.5, 'epochs': 2, 'validation loss': 0.009862778708338737, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 11, 'window offset': 0.7, 'epochs': 2, 'validation loss': 0.010465390048921108, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 11, 'window offset': 0.1, 'epochs': 5, 'validation loss': 0.009336470626294613, 
'learning rate': 0.001}, {'batch size': 2000, 'window length': 11, 'window offset': 0.3, 'epochs': 5, 'validation loss': 0.00912407971918583, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 11, 'window offset': 0.5, 'epochs': 5, 'validation loss': 0.008838788606226444, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 11, 'window offset': 0.7, 'epochs': 5, 'validation loss': 0.008841559290885925, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 11, 'window offset': 0.1, 'epochs': 10, 'validation loss': 0.008636042475700378, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 11, 'window offset': 0.3, 'epochs': 10, 'validation loss': 0.00832443218678236, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 11, 'window offset': 0.5, 'epochs': 10, 'validation loss': 0.007986418902873993, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 11, 'window offset': 0.7, 'epochs': 10, 'validation loss': 0.008205713704228401, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 21, 'window offset': 0.1, 'epochs': 2, 'validation loss': 0.011166801676154137, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 21, 'window offset': 0.3, 'epochs': 2, 'validation loss': 0.01223126519471407, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 21, 'window offset': 0.5, 'epochs': 2, 'validation loss': 0.00984808150678873, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 21, 'window offset': 0.7, 'epochs': 2, 'validation loss': 0.011475819163024426, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 21, 'window offset': 0.1, 'epochs': 5, 'validation loss': 0.007380690425634384, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 21, 'window offset': 0.3, 'epochs': 5, 'validation loss': 0.007824404165148735, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 21, 'window offset': 0.5, 'epochs': 5, 'validation loss': 0.007551869843155146, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 21, 'window offset': 0.7, 'epochs': 5, 'validation loss': 0.007854297757148743, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 21, 'window offset': 0.1, 'epochs': 10, 'validation loss': 0.0058286478742957115, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 21, 'window offset': 0.3, 'epochs': 10, 'validation loss': 0.0064744786359369755, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 21, 'window offset': 0.5, 'epochs': 10, 'validation loss': 0.007238512858748436, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 21, 'window offset': 0.7, 'epochs': 10, 'validation loss': 0.006175574380904436, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 51, 'window offset': 0.1, 'epochs': 2, 'validation loss': 0.011243444867432117, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 51, 'window offset': 0.3, 'epochs': 2, 'validation loss': 0.010948901064693928, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 51, 'window offset': 0.5, 'epochs': 2, 'validation loss': 0.011192272417247295, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 51, 'window offset': 0.7, 'epochs': 2, 'validation loss': 0.010828613303601742, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 51, 'window offset': 0.1, 'epochs': 5, 'validation loss': 0.007158320862799883, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 51, 'window offset': 0.3, 'epochs': 5, 'validation 
loss': 0.007882204838097095, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 51, 'window offset': 0.5, 'epochs': 5, 'validation loss': 0.00707838824018836, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 51, 'window offset': 0.7, 'epochs': 5, 'validation loss': 0.007318996824324131, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 51, 'window offset': 0.1, 'epochs': 10, 'validation loss': 0.007102900184690952, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 51, 'window offset': 0.3, 'epochs': 10, 'validation loss': 0.006818369496613741, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 51, 'window offset': 0.5, 'epochs': 10, 'validation loss': 0.005920222029089928, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 51, 'window offset': 0.7, 'epochs': 10, 'validation loss': 0.007786039263010025, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 99, 'window offset': 0.1, 'epochs': 2, 'validation loss': 0.014139309525489807, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 99, 'window offset': 0.3, 'epochs': 2, 'validation loss': 0.011651254259049892, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 99, 'window offset': 0.5, 'epochs': 2, 'validation loss': 0.011637314222753048, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 99, 'window offset': 0.7, 'epochs': 2, 'validation loss': 0.011414674110710621, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 99, 'window offset': 0.1, 'epochs': 5, 'validation loss': 0.008097875863313675, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 99, 'window offset': 0.3, 'epochs': 5, 'validation loss': 0.00808104407042265, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 99, 'window offset': 0.5, 'epochs': 5, 'validation loss': 0.0072879670187830925, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 99, 'window offset': 0.7, 'epochs': 5, 'validation loss': 0.007425589952617884, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 99, 'window offset': 0.1, 'epochs': 10, 'validation loss': 0.007362073753029108, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 99, 'window offset': 0.3, 'epochs': 10, 'validation loss': 0.007250464987009764, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 99, 'window offset': 0.5, 'epochs': 10, 'validation loss': 0.006919314153492451, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 99, 'window offset': 0.7, 'epochs': 10, 'validation loss': 0.007658490911126137, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 199, 'window offset': 0.1, 'epochs': 2, 'validation loss': 0.011162417009472847, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 199, 'window offset': 0.3, 'epochs': 2, 'validation loss': 0.011158077046275139, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 199, 'window offset': 0.5, 'epochs': 2, 'validation loss': 0.011083951219916344, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 199, 'window offset': 0.7, 'epochs': 2, 'validation loss': 0.011261427775025368, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 199, 'window offset': 0.1, 'epochs': 5, 'validation loss': 0.007881314493715763, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 199, 'window offset': 0.3, 'epochs': 5, 'validation loss': 0.007498761173337698, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 199, 'window 
offset': 0.5, 'epochs': 5, 'validation loss': 0.006976596545428038, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 199, 'window offset': 0.7, 'epochs': 5, 'validation loss': 0.007732865400612354, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 199, 'window offset': 0.1, 'epochs': 10, 'validation loss': 0.008211062289774418, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 199, 'window offset': 0.3, 'epochs': 10, 'validation loss': 0.008801703341305256, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 199, 'window offset': 0.5, 'epochs': 10, 'validation loss': 0.007918705232441425, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 199, 'window offset': 0.7, 'epochs': 10, 'validation loss': 0.008214910514652729, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 599, 'window offset': 0.1, 'epochs': 2, 'validation loss': 0.016854938119649887, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 599, 'window offset': 0.3, 'epochs': 2, 'validation loss': 0.01194289792329073, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 599, 'window offset': 0.5, 'epochs': 2, 'validation loss': 0.013506383635103703, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 599, 'window offset': 0.7, 'epochs': 2, 'validation loss': 0.018538378179073334, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 599, 'window offset': 0.1, 'epochs': 5, 'validation loss': 0.0078028663992881775, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 599, 'window offset': 0.3, 'epochs': 5, 'validation loss': 0.007451891433447599, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 599, 'window offset': 0.5, 'epochs': 5, 'validation loss': 0.010632811114192009, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 599, 'window offset': 0.7, 'epochs': 5, 'validation loss': 0.007661701180040836, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 599, 'window offset': 0.1, 'epochs': 10, 'validation loss': 0.0073591736145317554, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 599, 'window offset': 0.3, 'epochs': 10, 'validation loss': 0.007156928535550833, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 599, 'window offset': 0.5, 'epochs': 10, 'validation loss': 0.008842824958264828, 'learning rate': 0.001}, {'batch size': 2000, 'window length': 599, 'window offset': 0.7, 'epochs': 10, 'validation loss': 0.007437148597091436, 'learning rate': 0.001}]
The best parameters are:
batch size 1000.000000
window length 51.000000
window offset 0.500000
epochs 10.000000
validation loss 0.004927
learning rate 0.001000
Name: 106, dtype: float64
time elapsed in hours: 4.403311675919427
|
Rnotebooks/syntaxTest/Rsyntaxpart1.ipynb | ###Markdown
R Syntax Check Notebook
Assignment and Arithmetic Operations
###Code
# You can also use = for assignment,
# but the idiomatic style in R is to use <-.
x<-3
x
2-1
2*3
4/2
3^4
3**4
###Output
_____no_output_____
###Markdown
Note: to run several statements at once, join them with semicolons (;).
###Code
2-1;2*3;4/2;3^4;3**4
x=0
x=x+1
x
# This apparently does not work (R has no += operator)
x+=1
###Output
_____no_output_____
###Markdown
Vectors and Matrices
###Code
x_vector <- c(1, 2, 3, 4, 5)
x_vector
length(x_vector)
###Output
_____no_output_____
###Markdown
Element access
###Code
vec <- c(10,9,8,7,6,5,4,3,2,1)
vec
# Note the correspondence between the index and the returned element (R indices start at 1)
vec[1]; vec[3]; vec[10]
# Access via slicing
vec[4:8]
###Output
_____no_output_____
###Markdown
Operations
###Code
# Addition and subtraction
vec1<-c(1,3,5)
vec2<-c(2,4,6)
vec1+vec2; vec1-vec2
#Scalar multiplication
scalar=3
# In R, a period . may be used in variable names.
x.vector<-c(2,1,0)
scalar*x.vector
# * and /
# The operations are applied to each element (element-wise)
vec1*vec2;vec1/vec2
###Output
_____no_output_____
###Markdown
Joining elements
###Code
# Concatenate
joined<-append(vec1,vec2)
joined
###Output
_____no_output_____
###Markdown
Writing vec <- c(...) by hand is tedious
###Code
mendokusai<-c(1,2,3,4,5,6,7,8,9,10)
rakuchin<-c(1:10)
mendokusai
rakuchin
all(mendokusai==rakuchin)
vec<-1:10
vec
###Output
_____no_output_____
###Markdown
Documentation
###Code
help(c)
###Output
_____no_output_____
###Markdown
Matrices
###Code
elements<-c(1,2,3,4,5,6)
matrix(elements,2,3)
matrix(elements,nrow=2,ncol=3)
#3 rows x 2 columns; it seems the optional argument names may be omitted
matrix(elements,nrow=3,2)
#2 rows x 3 columns. Not a desirable style in terms of readability
matrix(elements,3,nrow=2)
#Specifying only one of nrow/ncol is also fine
matrix(elements,2)
matrix(elements,ncol=3)
###Output
_____no_output_____
###Markdown
Getting information
###Code
mat<-matrix(elements,nrow=2,ncol=3)
ncol(mat)
nrow(mat)
dim(mat)
###Output
_____no_output_____
###Markdown
Data access
###Code
mat<-matrix(elements,nrow=2,ncol=3)
mat
#Value at row 2, column 3
mat[2,3]
#Extract the elements of row 1
mat[1,]
#Extract the elements of column 3
mat[,3]
#Extract the elements of columns 2 through 3
mat[,2:3]
# Extracting everything except ...
#All rows except row 1
mat[-1,]
#All columns except column 3
mat[,-3]
#Everything except columns 2 through 3
mat[,-(2:3)]
###Output
_____no_output_____
###Markdown
Operations
###Code
mat<-matrix(elements,nrow=2,ncol=3)
mat+1
2*mat
elements.1<-c(1,3,5,7,9,11)
elements.2<-c(2,4,6,8,10,12)
mat.1<-matrix(elements.1,2,3)
mat.2<-matrix(elements.2,2,3)
mat.1; mat.2
mat.1+mat.2; mat.1-mat.2
# The operations are broadcast element-wise
mat.1*mat.2;mat.1/mat.2
###Output
_____no_output_____
###Markdown
Matrix transpose
###Code
mat
t(mat)
###Output
_____no_output_____
###Markdown
Attaching labels
###Code
mat
colnames(mat)<-c("c1","c2","c3")
mat
rownames(mat)<-c("r1","r2")
mat
###Output
_____no_output_____
###Markdown
Documentation
###Code
help(matrix)
###Output
_____no_output_____
###Markdown
Computing summary statistics
###Code
x<-1:5
x
#max min
max(x); min(x)
#Mean
sum(x);sum(x)/length(x);mean(x)
#Variance and standard deviation
var(x); sd(x)
#Median
median(x)
income.a<-c(100,200,300,400,500)
mean(income.a); median(income.a)
income.b<-c(100,200,300,400,100000)
mean(income.b);median(income.b)
###Output
_____no_output_____
###Markdown
I want to see them all at once
###Code
x
# 1st Qu. = lower 25% point (first quartile)
# 3rd Qu. = upper 25% point (third quartile)
summary(x)
###Output
_____no_output_____
###Markdown
Computing summary statistics for 2D arrays
###Code
mat<-matrix(1:12,nrow=3,ncol=4,byrow=TRUE)
mat
###Output
_____no_output_____
###Markdown
Statistics over all elements
###Code
sum(mat);sum(1:12)
mean(mat);mean(1:12)
###Output
_____no_output_____
###Markdown
Row-wise and column-wise statistics
###Code
rowSums(mat); colSums(mat)
rowMeans(mat); colMeans(mat)
###Output
_____no_output_____
###Markdown
apply function
```
apply(X, MARGIN, FUN)
```
MARGIN: a vector giving the subscripts which the function will be applied over. E.g., for a matrix 1 indicates rows, 2 indicates columns, c(1, 2) indicates rows and columns. Where X has named dimnames, it can be a character vector selecting dimension names.
###Code
help(apply)
apply(mat,1,sum)
apply(mat,2,sum)
summary(mat)
apply(mat,1,summary); apply(mat,2,summary)
###Output
_____no_output_____ |
Lesson-10_6_2_Advance-Cnn(Resnet_Cifar10).ipynb | ###Markdown
10-6 ResNet for cifar10
original code is => https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import visdom
vis = visdom.Visdom()
vis.close(env="main")
###Output
_____no_output_____
###Markdown
define value tracker
###Code
def value_tracker(value_plot, value, num):
'''num, loss_value, are Tensor'''
vis.line(X=num,
Y=value,
win = value_plot,
update='append'
)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
torch.manual_seed(777)
if device =='cuda':
torch.cuda.manual_seed_all(777)
###Output
_____no_output_____
###Markdown
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))
How to Calculate mean and std in Normalize
###Code
transform = transforms.Compose([
transforms.ToTensor()
])
trainset = torchvision.datasets.CIFAR10(root='./cifar10', train=True, download=True, transform=transform)
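# Note: train_data/train_labels are the attribute names used by older torchvision releases;
# newer versions expose the same raw array as trainset.data.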
print(trainset.train_data.shape)
train_data_mean = trainset.train_data.mean( axis=(0,1,2) )
train_data_std = trainset.train_data.std( axis=(0,1,2) )
print(train_data_mean)
print(train_data_std)
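# ToTensor() scales pixel values to [0, 1], so divide the raw 0-255 statistics by 255
# to get the per-channel mean/std on the same scale.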
train_data_mean = train_data_mean / 255
train_data_std = train_data_std / 255
print(train_data_mean)
print(train_data_std)
transform_train = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.ToTensor(),
transforms.Normalize(train_data_mean, train_data_std)
])
transform_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(train_data_mean, train_data_std)
])
trainset = torchvision.datasets.CIFAR10(root='./cifar10', train=True,
download=True, transform=transform_train)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=256,
shuffle=True, num_workers=0)
testset = torchvision.datasets.CIFAR10(root='./cifar10', train=False,
download=True, transform=transform_test)
testloader = torch.utils.data.DataLoader(testset, batch_size=256,
shuffle=False, num_workers=0)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
###Output
_____no_output_____
###Markdown
make ResNet50 using resnet.py
###Code
import resnet
conv1x1=resnet.conv1x1
Bottleneck = resnet.Bottleneck
BasicBlock= resnet.BasicBlock
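# resnet.py is assumed to be a local copy of torchvision.models.resnet:
# conv1x1(in_planes, out_planes, stride) is a 1x1 nn.Conv2d with bias=False,
# and Bottleneck (expansion=4) / BasicBlock (expansion=1) are its standard residual blocks.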
class ResNet(nn.Module):
def __init__(self, block, layers, num_classes=1000, zero_init_residual=False):
super(ResNet, self).__init__()
self.inplanes = 16
self.conv1 = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1,
bias=False)
self.bn1 = nn.BatchNorm2d(16)
self.relu = nn.ReLU(inplace=True)
#self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 16, layers[0], stride=1)
self.layer2 = self._make_layer(block, 32, layers[1], stride=1)
self.layer3 = self._make_layer(block, 64, layers[2], stride=2)
self.layer4 = self._make_layer(block, 128, layers[3], stride=2)
self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
self.fc = nn.Linear(128 * block.expansion, num_classes)
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
# Zero-initialize the last BN in each residual branch,
# so that the residual branch starts with zeros, and each residual block behaves like an identity.
# This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677
if zero_init_residual:
for m in self.modules():
if isinstance(m, Bottleneck):
nn.init.constant_(m.bn3.weight, 0)
elif isinstance(m, BasicBlock):
nn.init.constant_(m.bn2.weight, 0)
def _make_layer(self, block, planes, blocks, stride=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
conv1x1(self.inplanes, planes * block.expansion, stride),
nn.BatchNorm2d(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample))
self.inplanes = planes * block.expansion
for _ in range(1, blocks):
layers.append(block(self.inplanes, planes))
return nn.Sequential(*layers)
def forward(self, x):
x = self.conv1(x)
#x.shape =[1, 16, 32,32]
x = self.bn1(x)
x = self.relu(x)
#x = self.maxpool(x)
x = self.layer1(x)
#x.shape =[1, 64, 32,32]
x = self.layer2(x)
#x.shape =[1, 128, 32,32]
x = self.layer3(x)
#x.shape =[1, 256, 16,16]
x = self.layer4(x)
#x.shape =[1, 512, 8,8]
x = self.avgpool(x)
x = x.view(x.size(0), -1)
x = self.fc(x)
return x
resnet50 = ResNet(resnet.Bottleneck, [3, 4, 6, 3], 10, True).to(device)
#1(conv1) + 9(layer1) + 12(layer2) + 18(layer3) + 9(layer4) +1(fc)= ResNet50
resnet50
a=torch.Tensor(1,3,32,32).to(device)
out = resnet50(a)
print(out)
criterion = nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.SGD(resnet50.parameters(), lr = 0.1, momentum = 0.9, weight_decay=5e-4)
lr_sche = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
###Output
_____no_output_____
###Markdown
make plot
###Code
loss_plt = vis.line(Y=torch.Tensor(1).zero_(),opts=dict(title='loss_tracker', legend=['loss'], showlegend=True))
acc_plt = vis.line(Y=torch.Tensor(1).zero_(),opts=dict(title='Accuracy', legend=['Acc'], showlegend=True))
###Output
_____no_output_____
###Markdown
define acc_check function
###Code
def acc_check(net, test_set, epoch, save=1):
correct = 0
total = 0
with torch.no_grad():
for data in test_set:
images, labels = data
images = images.to(device)
labels = labels.to(device)
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
acc = (100 * correct / total)
print('Accuracy of the network on the 10000 test images: %d %%' % acc)
if save:
torch.save(net.state_dict(), "./model/model_epoch_{}_acc_{}.pth".format(epoch, int(acc)))
return acc
###Output
_____no_output_____
###Markdown
Training with (acc check + model save)
###Code
print(len(trainloader))
epochs = 150
for epoch in range(epochs): # loop over the dataset multiple times
running_loss = 0.0
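# Note: recent PyTorch versions expect lr_scheduler.step() to be called after the epoch's
# optimizer updates; calling it at the start of the epoch shifts the LR schedule forward by one epoch.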
lr_sche.step()
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs = inputs.to(device)
labels = labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = resnet50(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 30 == 29: # print every 30 mini-batches
value_tracker(loss_plt, torch.Tensor([running_loss/30]), torch.Tensor([i + epoch*len(trainloader) ]))
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 30))
running_loss = 0.0
#Check Accuracy
acc = acc_check(resnet50, testloader, epoch, save=1)
value_tracker(acc_plt, torch.Tensor([acc]), torch.Tensor([epoch]))
print('Finished Training')
###Output
_____no_output_____
###Markdown
Model Accuracy Testing
###Code
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
images = images.to(device)
labels = labels.to(device)
outputs = resnet50(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
###Output
_____no_output_____ |
RL/RL-Adventure-2/5.ddpg.ipynb | ###Markdown
Use CUDA
###Code
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
###Output
_____no_output_____
###Markdown
Replay Buffer
###Code
class ReplayBuffer:
def __init__(self, capacity):
self.capacity = capacity
self.buffer = []
self.position = 0
def push(self, state, action, reward, next_state, done):
if len(self.buffer) < self.capacity:
self.buffer.append(None)
self.buffer[self.position] = (state, action, reward, next_state, done)
self.position = (self.position + 1) % self.capacity
def sample(self, batch_size):
batch = random.sample(self.buffer, batch_size)
state, action, reward, next_state, done = map(np.stack, zip(*batch))
return state, action, reward, next_state, done
def __len__(self):
return len(self.buffer)
###Output
_____no_output_____
###Markdown
Normalize action space
###Code
class NormalizedActions(gym.ActionWrapper):
def _action(self, action):
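# Rescale an action from the policy's tanh output range [-1, 1] to the environment's [low, high] range.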
low_bound = self.action_space.low
upper_bound = self.action_space.high
action = low_bound + (action + 1.0) * 0.5 * (upper_bound - low_bound)
action = np.clip(action, low_bound, upper_bound)
return action
def _reverse_action(self, action):
low_bound = self.action_space.low
upper_bound = self.action_space.high
action = 2 * (action - low_bound) / (upper_bound - low_bound) - 1
action = np.clip(action, low_bound, upper_bound)
return action
###Output
_____no_output_____
###Markdown
Ornstein-Uhlenbeck process
Adding time-correlated noise to the actions taken by the deterministic policy (wiki)
###Code
class OUNoise(object):
def __init__(self, action_space, mu=0.0, theta=0.15, max_sigma=0.3, min_sigma=0.3, decay_period=100000):
self.mu = mu
self.theta = theta
self.sigma = max_sigma
self.max_sigma = max_sigma
self.min_sigma = min_sigma
self.decay_period = decay_period
self.action_dim = action_space.shape[0]
self.low = action_space.low
self.high = action_space.high
self.reset()
def reset(self):
self.state = np.ones(self.action_dim) * self.mu
def evolve_state(self):
x = self.state
dx = self.theta * (self.mu - x) + self.sigma * np.random.randn(self.action_dim)
self.state = x + dx
return self.state
def get_action(self, action, t=0):
ou_state = self.evolve_state()
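# Anneal sigma linearly from max_sigma to min_sigma over decay_period steps
# (a no-op with the defaults above, since max_sigma == min_sigma).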
self.sigma = self.max_sigma - (self.max_sigma - self.min_sigma) * min(1.0, t / self.decay_period)
return np.clip(action + ou_state, self.low, self.high)
#https://github.com/vitchyr/rlkit/blob/master/rlkit/exploration_strategies/ou_strategy.py
def plot(frame_idx, rewards):
clear_output(True)
plt.figure(figsize=(20,5))
plt.subplot(131)
plt.title('frame %s. reward: %s' % (frame_idx, rewards[-1]))
plt.plot(rewards)
plt.show()
###Output
_____no_output_____
###Markdown
Continuous control with deep reinforcement learning (Arxiv)
###Code
class ValueNetwork(nn.Module):
def __init__(self, num_inputs, num_actions, hidden_size, init_w=3e-3):
super(ValueNetwork, self).__init__()
self.linear1 = nn.Linear(num_inputs + num_actions, hidden_size)
self.linear2 = nn.Linear(hidden_size, hidden_size)
self.linear3 = nn.Linear(hidden_size, 1)
self.linear3.weight.data.uniform_(-init_w, init_w)
self.linear3.bias.data.uniform_(-init_w, init_w)
def forward(self, state, action):
x = torch.cat([state, action], 1)
x = F.relu(self.linear1(x))
x = F.relu(self.linear2(x))
x = self.linear3(x)
return x
class PolicyNetwork(nn.Module):
def __init__(self, num_inputs, num_actions, hidden_size, init_w=3e-3):
super(PolicyNetwork, self).__init__()
self.linear1 = nn.Linear(num_inputs, hidden_size)
self.linear2 = nn.Linear(hidden_size, hidden_size)
self.linear3 = nn.Linear(hidden_size, num_actions)
self.linear3.weight.data.uniform_(-init_w, init_w)
self.linear3.bias.data.uniform_(-init_w, init_w)
def forward(self, state):
x = F.relu(self.linear1(state))
x = F.relu(self.linear2(x))
x = F.tanh(self.linear3(x))
return x
def get_action(self, state):
state = torch.FloatTensor(state).unsqueeze(0).to(device)
action = self.forward(state)
return action.detach().cpu().numpy()[0, 0]
###Output
_____no_output_____
###Markdown
DDPG Update
###Code
def ddpg_update(batch_size,
gamma = 0.99,
min_value=-np.inf,
max_value=np.inf,
soft_tau=1e-2):
state, action, reward, next_state, done = replay_buffer.sample(batch_size)
state = torch.FloatTensor(state).to(device)
next_state = torch.FloatTensor(next_state).to(device)
action = torch.FloatTensor(action).to(device)
reward = torch.FloatTensor(reward).unsqueeze(1).to(device)
done = torch.FloatTensor(np.float32(done)).unsqueeze(1).to(device)
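# Actor (policy) loss: minimize -Q(s, mu(s)), i.e. maximize the critic's value of the actor's actions.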
policy_loss = value_net(state, policy_net(state))
policy_loss = -policy_loss.mean()
next_action = target_policy_net(next_state)
target_value = target_value_net(next_state, next_action.detach())
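# Critic target (Bellman backup): r + gamma * (1 - done) * Q_target(s', mu_target(s')).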
expected_value = reward + (1.0 - done) * gamma * target_value
expected_value = torch.clamp(expected_value, min_value, max_value)
value = value_net(state, action)
value_loss = value_criterion(value, expected_value.detach())
policy_optimizer.zero_grad()
policy_loss.backward()
policy_optimizer.step()
value_optimizer.zero_grad()
value_loss.backward()
value_optimizer.step()
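# Soft (Polyak) update of both target networks: theta_target <- soft_tau * theta + (1 - soft_tau) * theta_target.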
for target_param, param in zip(target_value_net.parameters(), value_net.parameters()):
target_param.data.copy_(
target_param.data * (1.0 - soft_tau) + param.data * soft_tau
)
for target_param, param in zip(target_policy_net.parameters(), policy_net.parameters()):
target_param.data.copy_(
target_param.data * (1.0 - soft_tau) + param.data * soft_tau
)
env = NormalizedActions(gym.make("Pendulum-v0"))
ou_noise = OUNoise(env.action_space)
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.shape[0]
hidden_dim = 256
value_net = ValueNetwork(state_dim, action_dim, hidden_dim).to(device)
policy_net = PolicyNetwork(state_dim, action_dim, hidden_dim).to(device)
target_value_net = ValueNetwork(state_dim, action_dim, hidden_dim).to(device)
target_policy_net = PolicyNetwork(state_dim, action_dim, hidden_dim).to(device)
for target_param, param in zip(target_value_net.parameters(), value_net.parameters()):
target_param.data.copy_(param.data)
for target_param, param in zip(target_policy_net.parameters(), policy_net.parameters()):
target_param.data.copy_(param.data)
value_lr = 1e-3
policy_lr = 1e-4
value_optimizer = optim.Adam(value_net.parameters(), lr=value_lr)
policy_optimizer = optim.Adam(policy_net.parameters(), lr=policy_lr)
value_criterion = nn.MSELoss()
replay_buffer_size = 1000000
replay_buffer = ReplayBuffer(replay_buffer_size)
max_frames = 12000
max_steps = 500
frame_idx = 0
rewards = []
batch_size = 128
while frame_idx < max_frames:
state = env.reset()
ou_noise.reset()
episode_reward = 0
for step in range(max_steps):
action = policy_net.get_action(state)
action = ou_noise.get_action(action, step)
next_state, reward, done, _ = env.step(action)
replay_buffer.push(state, action, reward, next_state, done)
if len(replay_buffer) > batch_size:
ddpg_update(batch_size)
state = next_state
episode_reward += reward
frame_idx += 1
if frame_idx % max(1000, max_steps + 1) == 0:
plot(frame_idx, rewards)
if done:
break
rewards.append(episode_reward)
###Output
_____no_output_____ |
shamanai/notebooks/detecto.ipynb | ###Markdown
Table of Contents:
Step1: Image collection and Labelling
Step2: Installation of the required package
Step3: Custom image augmentation
Step4: Model Training
Step5: Model saving, loading, and predicting
Step1: Image collection and labeling:
The first step of any object detection model is collecting images and annotating them. For this project, I downloaded 50 'Maruti Car' images from Google Images. There is a package called simple_image_download which automates the image download. Feel free to use the following code:
With this code, we will get 50 downloaded images in the 'Maruti car' folder of the working directory. Feel free to change the number of images to as many as you want. After that, we randomly split the images into two parts, i.e. Train (35 images) and Test (15 images).
The next job is labeling the images. There are various image annotation tools available. For this project, I used MAKESENSE.AI. It's a free online tool for labeling; no installation is required and it runs in the browser. Using the link, I dropped my car images and annotated the Train and Validation datasets separately.
Now we can export the annotations in XML format, as 'Detecto' supports it. Then we place the XML files of the train and validation images in the Train and Validation folders, respectively. The resulting folder tree looks like this:
###Code
from simple_image_download import simple_image_download as simp
response = simp.simple_image_download
lst=['Maruti car']
for rep in lst:
response().download(rep, 50)
##MAKESENSE.AI
###Output
_____no_output_____
###Markdown
Step2: Installation of the required packages:
As already mentioned, 'Detecto' is built on top of PyTorch, so we need to install PyTorch first. I have used Google Colab for this project. Then we check whether GPU support is available using the following code:
###Code
import torch
print(torch.cuda.is_available())
###Output
_____no_output_____
###Markdown
If it prints 'True', you can use the GPU. If it prints 'False', change the notebook's 'Hardware Accelerator' setting to 'GPU'. Now your system meets the requirements to install 'Detecto'. Use the following magic command to install it.
###Code
!pip install detecto
###Output
_____no_output_____
###Markdown
Once it’s done, let’s import the libraries using the following code:
###Code
from detecto import core, utils, visualize
from detecto.visualize import show_labeled_image, plot_prediction_grid
from torchvision import transforms
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Step3: Custom image augmentation:
Image augmentation is the process of artificially expanding a dataset by creating modified versions of its images. Detecto has inbuilt support for custom transforms that apply resizing, flipping, and saturation augmentation. Use the following code to augment the image dataset.
###Code
custom_transforms = transforms.Compose([
transforms.ToPILImage(),
transforms.Resize(900),
transforms.RandomHorizontalFlip(0.5),
transforms.ColorJitter(saturation=0.2),
transforms.ToTensor(),
utils.normalize_transform(),
])
###Output
_____no_output_____
###Markdown
Step4: Model Training:
Now we have come to the most awaited step, i.e. model training. Here, the magic happens in five lines of code.
###Code
Train_dataset=core.Dataset('Train/',transform=custom_transforms)#L1
Test_dataset = core.Dataset('Test/')#L2
loader=core.DataLoader(Train_dataset, batch_size=2, shuffle=True)#L3
model = core.Model(['Wheel', 'Head Light'])#L4
losses = model.fit(loader, Test_dataset, epochs=25, lr_step_size=5, learning_rate=0.001, verbose=True)#L5
###Output
_____no_output_____
###Markdown
In the first two lines of code (L1 & L2), we assign the Train and Test datasets. In L3, we create a DataLoader over our dataset. It defines how we batch and feed our images into the model for training. Feel free to experiment by changing 'batch_size'.
Now, it's time to specify the 'Labels' or 'classes', which is done in L4. Finally, model training is started via 'model.fit' in L5. Here, we can play with different options such as epochs, lr_step_size, and learning_rate. The default model is Faster R-CNN ResNet-50 FPN. We have fine-tuned this model for our custom dataset.
Now, we can look at the loss function using the following code:
###Code
plt.plot(losses)
plt.show()
###Output
_____no_output_____
###Markdown
Step5: Model saving, loading, and predicting:
Once we are satisfied with the model's loss, we save the model for future reference, so that we can load it as and when required. Use the following code for saving and loading.
###Code
model.save('model_weights.pth')
model = core.Model.load('model_weights.pth', ['Wheel', 'Head Light'])
###Output
_____no_output_____
###Markdown
After loading the model, we want to use it for prediction. Let’s use it for one observation from the Test folder and plot the image with a bounding box. Here, the prediction format is labels, boxes, and scores.
###Code
image = utils.read_image('Test/Maruti car_27.jpeg')
predictions = model.predict(image)
labels, boxes, scores = predictions
show_labeled_image(image, boxes, labels)
###Output
_____no_output_____
###Markdown
There are many unwanted bounding boxes in the above picture, so we have to remove them. The simplest way to solve this is by applying a threshold on the score. For this project, I set the threshold to 0.6 for both classes; I arrived at this value through trial and error. Use the following code to apply the threshold to the bounding boxes and plot them.
###Code
thresh=0.6
filtered_indices=np.where(scores>thresh)
filtered_scores=scores[filtered_indices]
filtered_boxes=boxes[filtered_indices]
num_list = filtered_indices[0].tolist()
filtered_labels = [labels[i] for i in num_list]
show_labeled_image(image, filtered_boxes, filtered_labels)
###Output
_____no_output_____ |
multipletargets.ipynb | ###Markdown
Our list of targets
###Code
targets = ['ENSG00000069696', 'ENSG00000144285']
targets_string = ', '.join('"{0}"'.format(t) for t in targets)
###Output
_____no_output_____
###Markdown
Make the API call with our list of targets to find the associations. Set facets to true.
###Code
url = 'https://www.targetvalidation.org/api/latest/public/association/filter'
headers = {"Accept": "application/json"}
# There may be an easier way of building these parameters...
data = "{\"target\":[" + targets_string + "], \"facets\":true}"
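# A tidier alternative (assumption: the API accepts a standard JSON body) would be:
# import json
# data = json.dumps({"target": targets, "facets": True})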
response = requests.post(url, headers=headers, data=data)
output = response.json()
###Output
_____no_output_____
###Markdown
Print out all the json returned just for reference
###Code
#print json.dumps(output, indent=2)
###Output
_____no_output_____
###Markdown
The therapeutic area facets look interesting - let's iterate through these and display them
###Code
therapeuticareas = []
for bucket in output['facets']['therapeutic_area']['buckets']:
therapeuticareas.append({
'target_count' : bucket['unique_target_count']['value'],
'disease_count' : bucket['unique_disease_count']['value'],
'therapeutic_area' : bucket['label'],
'key' : bucket['key']
})
###Output
_____no_output_____
###Markdown
Sort by target count and then disease count
###Code
therapeuticareas = sorted(therapeuticareas, key=lambda k: (k['target_count'],k['disease_count']), reverse=True)
###Output
_____no_output_____
###Markdown
Using the python [tabulate](https://pypi.python.org/pypi/tabulate) library to render a pretty table of our extracted therapeutic areas. Note: You may need to run `pip install tabulate` in your python environment
###Code
print tabulate(therapeuticareas, headers="keys", tablefmt="grid")
###Output
+------------------------------+-----------------+-------------+----------------+
| therapeutic_area | disease_count | key | target_count |
+==============================+=================+=============+================+
| genetic disorder | 285 | efo_0000508 | 2 |
+------------------------------+-----------------+-------------+----------------+
| phenotype | 115 | efo_0000651 | 2 |
+------------------------------+-----------------+-------------+----------------+
| nervous system disease | 86 | efo_0000618 | 2 |
+------------------------------+-----------------+-------------+----------------+
| eye disease | 80 | efo_0003966 | 2 |
+------------------------------+-----------------+-------------+----------------+
| neoplasm | 49 | efo_0000616 | 2 |
+------------------------------+-----------------+-------------+----------------+
| metabolic disease | 38 | efo_0000589 | 2 |
+------------------------------+-----------------+-------------+----------------+
| cardiovascular disease | 38 | efo_0000319 | 2 |
+------------------------------+-----------------+-------------+----------------+
| endocrine system disease | 26 | efo_0001379 | 2 |
+------------------------------+-----------------+-------------+----------------+
| reproductive system disease | 25 | efo_0000512 | 2 |
+------------------------------+-----------------+-------------+----------------+
| skeletal system disease | 21 | efo_0002461 | 2 |
+------------------------------+-----------------+-------------+----------------+
| muscular disease | 19 | efo_0002970 | 2 |
+------------------------------+-----------------+-------------+----------------+
| immune system disease | 15 | efo_0000540 | 2 |
+------------------------------+-----------------+-------------+----------------+
| respiratory system disease | 10 | efo_0000684 | 2 |
+------------------------------+-----------------+-------------+----------------+
| infectious disease | 8 | efo_0005741 | 2 |
+------------------------------+-----------------+-------------+----------------+
| hematological system disease | 6 | efo_0005803 | 2 |
+------------------------------+-----------------+-------------+----------------+
| skin disease | 24 | efo_0000701 | 1 |
+------------------------------+-----------------+-------------+----------------+
| digestive system disease | 11 | efo_0000405 | 1 |
+------------------------------+-----------------+-------------+----------------+
| other | 2 | other | 1 |
+------------------------------+-----------------+-------------+----------------+
###Markdown
Let's just consider the top 5 therapeutic areas
###Code
therapeuticareas = therapeuticareas[:5]
print tabulate(therapeuticareas, headers="keys", tablefmt="grid")
###Output
+------------------------+-----------------+-------------+----------------+
| therapeutic_area | disease_count | key | target_count |
+========================+=================+=============+================+
| genetic disorder | 285 | efo_0000508 | 2 |
+------------------------+-----------------+-------------+----------------+
| phenotype | 115 | efo_0000651 | 2 |
+------------------------+-----------------+-------------+----------------+
| nervous system disease | 86 | efo_0000618 | 2 |
+------------------------+-----------------+-------------+----------------+
| eye disease | 80 | efo_0003966 | 2 |
+------------------------+-----------------+-------------+----------------+
| neoplasm | 49 | efo_0000616 | 2 |
+------------------------+-----------------+-------------+----------------+
###Markdown
Now, for each of those, identify the top 5 diseases. Unfortunately we don't get the disease names in the facets, just the codes. If this is the right approach, then perhaps an API change is needed?
###Code
for therapeuticarea in therapeuticareas:
print "Therapeutic area: " + therapeuticarea['therapeutic_area']
data = "{\"target\":[" + targets_string + "], \"facets\":true, \"therapeutic_area\":[\"" + therapeuticarea['key'] + "\"]}"
response = requests.post(url, headers=headers, data=data)
output = response.json()
diseases = []
for bucket in output['facets']['disease']['buckets']:
diseases.append({
'target_count' : bucket['unique_target_count']['value'],
'doc_count' : bucket['doc_count'],
'key' : bucket['key']
})
# Sort and take top 5
diseases = sorted(diseases, key=lambda k: (k['target_count'],k['doc_count']), reverse=True)
diseases = diseases[:5]
print tabulate(diseases, headers="keys", tablefmt="grid")
print ""
###Output
Therapeutic area: genetic disorder
+-------------+-----------------+----------------+
| doc_count | key | target_count |
+=============+=================+================+
| 2 | Orphanet_101435 | 2 |
+-------------+-----------------+----------------+
| 2 | Orphanet_101953 | 2 |
+-------------+-----------------+----------------+
| 2 | Orphanet_139009 | 2 |
+-------------+-----------------+----------------+
| 2 | Orphanet_1478 | 2 |
+-------------+-----------------+----------------+
| 2 | Orphanet_156638 | 2 |
+-------------+-----------------+----------------+
Therapeutic area: phenotype
+-------------+-------------+----------------+
| doc_count | key | target_count |
+=============+=============+================+
| 2 | EFO_0003108 | 2 |
+-------------+-------------+----------------+
| 2 | EFO_0003765 | 2 |
+-------------+-------------+----------------+
| 2 | EFO_0003843 | 2 |
+-------------+-------------+----------------+
| 2 | EFO_0003847 | 2 |
+-------------+-------------+----------------+
| 2 | EFO_0005230 | 2 |
+-------------+-------------+----------------+
Therapeutic area: nervous system disease
+-------------+-------------+----------------+
| doc_count | key | target_count |
+=============+=============+================+
| 2 | EFO_0000249 | 2 |
+-------------+-------------+----------------+
| 2 | EFO_0000289 | 2 |
+-------------+-------------+----------------+
| 2 | EFO_0000326 | 2 |
+-------------+-------------+----------------+
| 2 | EFO_0000474 | 2 |
+-------------+-------------+----------------+
| 2 | EFO_0000677 | 2 |
+-------------+-------------+----------------+
Therapeutic area: eye disease
+-------------+-----------------+----------------+
| doc_count | key | target_count |
+=============+=================+================+
| 2 | EFO_0001365 | 2 |
+-------------+-----------------+----------------+
| 2 | Orphanet_101435 | 2 |
+-------------+-----------------+----------------+
| 2 | Orphanet_183601 | 2 |
+-------------+-----------------+----------------+
| 2 | Orphanet_183616 | 2 |
+-------------+-----------------+----------------+
| 2 | Orphanet_34533 | 2 |
+-------------+-----------------+----------------+
Therapeutic area: neoplasm
+-------------+-------------+----------------+
| doc_count | key | target_count |
+=============+=============+================+
| 2 | EFO_0000305 | 2 |
+-------------+-------------+----------------+
| 2 | EFO_0000311 | 2 |
+-------------+-------------+----------------+
| 2 | EFO_0000313 | 2 |
+-------------+-------------+----------------+
| 2 | EFO_0000326 | 2 |
+-------------+-------------+----------------+
| 2 | EFO_0000565 | 2 |
+-------------+-------------+----------------+
|
iot_practice_01.RNN.ipynb | ###Markdown
SETUP- - -
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams["font.family"] = "Dejavu Sans"
import seaborn as sns
sns.set()
%matplotlib inline
file_path = '/Users/quartz/data/iot-data/cansim-0800020-eng-6674700030567901031.csv'
data_raw = pd.read_csv(file_path, skiprows=6, skipfooter=9)
data_raw.head()
data_raw.dtypes
# convert dates to month-end dates
from pandas.tseries.offsets import MonthEnd
# data_raw['Adjustments'] =
data_raw.Adjustments = pd.to_datetime(data_raw['Adjustments']) + MonthEnd(1)
data_raw = data_raw.set_index('Adjustments')
data_raw.head()
###Output
_____no_output_____
###Markdown
Plotting
###Code
# create the split point (Timestamp)
split_date = pd.Timestamp('01-01-2011')
# build train/test dataframes using only the Unadjusted feature
train = data_raw.loc[:split_date, ['Unadjusted']]
test = data_raw.loc[split_date:, ['Unadjusted']]
print(split_date, train.shape, test.shape)
# plot
ax = train.plot()
test.plot(ax=ax)
plt.legend(['train', 'test'])
###Output
2011-01-01 00:00:00 (240, 1) (73, 1)
###Markdown
preprocessing
###Code
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler()
train_sc = sc.fit_transform(train)
test_sc = sc.fit_transform(test)
train_sc_df = pd.DataFrame(data=train_sc, columns=['Scaled'], index=train.index)
test_sc_df = pd.DataFrame(data=test_sc, columns=['Scaled'], index=test.index)
X_test = test_sc_df.dropna().drop('Scaled', axis=1)
y_test = test_sc_df.dropna()[['Scaled']]
train_sc_df.head()
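# Create 12 lagged (shifted) copies of the scaled series to use as supervised-learning inputs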
for shift in range(1, 13):
train_sc_df['shift_{}'.format(shift)] = train_sc_df['Scaled'].shift(shift)
test_sc_df['shift_{}'.format(shift)] = test_sc_df['Scaled'].shift(shift)
train_sc_df.head(20)
###Output
_____no_output_____
###Markdown
make dataset(train, test)
###Code
# train, test
X_train_df = train_sc_df.dropna().drop('Scaled', axis=1)
y_train_df = train_sc_df.dropna()[['Scaled']]
X_test_df = test_sc_df.dropna().drop('Scaled', axis=1)
y_test_df = test_sc_df.dropna()[['Scaled']]
# DataFrame -> ndarray
X_train = X_train_df.values
y_train = y_train_df.values
X_test = X_test_df.values
y_test = y_test_df.values
# reshape 2D data (samples, features) into 3D data (samples, timesteps, features)
X_train_t = X_train.reshape(X_train.shape[0], 12, 1)
X_test_t = X_test.reshape(X_test.shape[0], 12, 1)
# check shape
X_train.shape, X_train_t.shape, X_test.shape, X_test_t.shape
###Output
_____no_output_____
###Markdown
LSTM Modeling
###Code
from keras.layers import LSTM
from keras.models import Sequential
from keras.layers import Dense
import keras
import keras.backend as K
from keras.callbacks import EarlyStopping
K.clear_session()
# define a loss-history callback class
class LossHistory(keras.callbacks.Callback):
def init(self):
self.losses = []
def on_epoch_end(self, batch, logs={}):
self.losses.append(logs.get('loss'))
# create the loss-history object
history = LossHistory()
history.init()
model = Sequential() # Sequential Model
model.add(LSTM(100, input_shape=(12,1))) # (timestamp, feature)
model.add(Dense(1)) # output = 1
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X_train_t, y_train, epochs=100, batch_size=30, verbose=2, callbacks=[history])
y_pred = model.predict(X_test_t)
###Output
Epoch 1/100
- 1s - loss: 0.1638
Epoch 2/100
- 0s - loss: 0.0272
Epoch 3/100
- 0s - loss: 0.0159
Epoch 4/100
- 0s - loss: 0.0156
Epoch 5/100
- 0s - loss: 0.0108
Epoch 6/100
- 0s - loss: 0.0103
Epoch 7/100
- 0s - loss: 0.0091
Epoch 8/100
- 0s - loss: 0.0088
Epoch 9/100
- 0s - loss: 0.0087
Epoch 10/100
- 0s - loss: 0.0084
Epoch 11/100
- 0s - loss: 0.0084
Epoch 12/100
- 0s - loss: 0.0081
Epoch 13/100
- 0s - loss: 0.0082
Epoch 14/100
- 0s - loss: 0.0084
Epoch 15/100
- 0s - loss: 0.0082
Epoch 16/100
- 0s - loss: 0.0083
Epoch 17/100
- 0s - loss: 0.0081
Epoch 18/100
- 0s - loss: 0.0081
Epoch 19/100
- 0s - loss: 0.0081
Epoch 20/100
- 0s - loss: 0.0079
Epoch 21/100
- 0s - loss: 0.0078
Epoch 22/100
- 0s - loss: 0.0077
Epoch 23/100
- 0s - loss: 0.0077
Epoch 24/100
- 0s - loss: 0.0077
Epoch 25/100
- 0s - loss: 0.0076
Epoch 26/100
- 0s - loss: 0.0076
Epoch 27/100
- 0s - loss: 0.0076
Epoch 28/100
- 0s - loss: 0.0075
Epoch 29/100
- 0s - loss: 0.0075
Epoch 30/100
- 0s - loss: 0.0076
Epoch 31/100
- 0s - loss: 0.0076
Epoch 32/100
- 0s - loss: 0.0077
Epoch 33/100
- 0s - loss: 0.0075
Epoch 34/100
- 0s - loss: 0.0076
Epoch 35/100
- 0s - loss: 0.0073
Epoch 36/100
- 0s - loss: 0.0071
Epoch 37/100
- 0s - loss: 0.0073
Epoch 38/100
- 0s - loss: 0.0070
Epoch 39/100
- 0s - loss: 0.0072
Epoch 40/100
- 0s - loss: 0.0070
Epoch 41/100
- 0s - loss: 0.0070
Epoch 42/100
- 0s - loss: 0.0069
Epoch 43/100
- 0s - loss: 0.0068
Epoch 44/100
- 0s - loss: 0.0068
Epoch 45/100
- 0s - loss: 0.0067
Epoch 46/100
- 0s - loss: 0.0068
Epoch 47/100
- 0s - loss: 0.0067
Epoch 48/100
- 0s - loss: 0.0068
Epoch 49/100
- 0s - loss: 0.0067
Epoch 50/100
- 0s - loss: 0.0065
Epoch 51/100
- 0s - loss: 0.0066
Epoch 52/100
- 0s - loss: 0.0067
Epoch 53/100
- 0s - loss: 0.0065
Epoch 54/100
- 0s - loss: 0.0064
Epoch 55/100
- 0s - loss: 0.0063
Epoch 56/100
- 0s - loss: 0.0064
Epoch 57/100
- 0s - loss: 0.0064
Epoch 58/100
- 0s - loss: 0.0065
Epoch 59/100
- 0s - loss: 0.0069
Epoch 60/100
- 0s - loss: 0.0062
Epoch 61/100
- 0s - loss: 0.0060
Epoch 62/100
- 0s - loss: 0.0060
Epoch 63/100
- 0s - loss: 0.0063
Epoch 64/100
- 0s - loss: 0.0059
Epoch 65/100
- 0s - loss: 0.0058
Epoch 66/100
- 0s - loss: 0.0057
Epoch 67/100
- 0s - loss: 0.0056
Epoch 68/100
- 0s - loss: 0.0055
Epoch 69/100
- 0s - loss: 0.0055
Epoch 70/100
- 0s - loss: 0.0058
Epoch 71/100
- 0s - loss: 0.0059
Epoch 72/100
- 0s - loss: 0.0056
Epoch 73/100
- 0s - loss: 0.0052
Epoch 74/100
- 0s - loss: 0.0051
Epoch 75/100
- 0s - loss: 0.0051
Epoch 76/100
- 0s - loss: 0.0051
Epoch 77/100
- 0s - loss: 0.0050
Epoch 78/100
- 0s - loss: 0.0047
Epoch 79/100
- 0s - loss: 0.0046
Epoch 80/100
- 0s - loss: 0.0045
Epoch 81/100
- 0s - loss: 0.0044
Epoch 82/100
- 0s - loss: 0.0042
Epoch 83/100
- 0s - loss: 0.0043
Epoch 84/100
- 0s - loss: 0.0043
Epoch 85/100
- 0s - loss: 0.0043
Epoch 86/100
- 0s - loss: 0.0037
Epoch 87/100
- 0s - loss: 0.0036
Epoch 88/100
- 0s - loss: 0.0034
Epoch 89/100
- 0s - loss: 0.0031
Epoch 90/100
- 0s - loss: 0.0033
Epoch 91/100
- 0s - loss: 0.0034
Epoch 92/100
- 0s - loss: 0.0027
Epoch 93/100
- 0s - loss: 0.0031
Epoch 94/100
- 0s - loss: 0.0039
Epoch 95/100
- 0s - loss: 0.0038
Epoch 96/100
- 0s - loss: 0.0032
Epoch 97/100
- 0s - loss: 0.0028
Epoch 98/100
- 0s - loss: 0.0031
Epoch 99/100
- 0s - loss: 0.0029
Epoch 100/100
- 0s - loss: 0.0029
###Markdown
Visualization
###Code
fig, loss_ax = plt.subplots()
acc_ax = loss_ax.twinx()
pred = y_pred
loss_ax.plot(pred, 'b', label='pred')
loss_ax.plot(y_test, 'r', label='act')
loss_ax.legend(loc='upper left')
plt.show()
# loss
plt.plot(history.losses)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train'], loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Solar Power LSTM- - - EDA
###Code
!ls /Users/quartz/data/iot-data/solar_data.csv
file_path = '/Users/quartz/data/iot-data/solar_data.csv'
solar_raw = pd.read_csv(file_path, engine='python')
solar_raw.iloc[:, :15].head()
# generation, precipitation, humidity, wind speed, temperature | solar irradiance, fine dust
solar_raw.columns
total = solar_raw['충전시간발전량']
# '5Hr', '6Hr', '7Hr', '8Hr', '9Hr', '10Hr'
sub = solar_raw[['10Hr', '11Hr', '12Hr', '13Hr', '14Hr', '15Hr', '16Hr']]
solar_raw['충전시간발전량'].tail()
sub.tail()
# '충전시간발전량' (charging-hours generation) is the sum of hourly generation from 10 AM to 4 PM
for i in range(10):
print(np.sum(sub.values[i]), total.values[i])
# hourly (Hr) columns
solar_raw.iloc[:, :18].tail()
# precipitation columns
solar_raw.iloc[:, 20:36].tail()
# humidity columns
solar_raw.iloc[:, 36:52].tail()
# wind speed columns
solar_raw.iloc[:, 52:68].tail()
# temperature columns
solar_raw.iloc[:, 68:84].tail()
# check data shape
solar_raw.shape # one year of time-series data
# check data types - int64: 일출시간 (sunrise), 일몰시간 (sunset), 20Hr; the rest are float64
solar_raw.dtypes
# examine the target variable (충전시간발전량)
solar_raw['충전시간발전량'].describe()
# examine correlations between the features
solar_raw.corr()
plt.figure(figsize=(20, 15))
sns.heatmap(solar_revise.corr(), cmap="YlGnBu")
plt.show()
!ls ./tmp
# inspect the distribution of each variable
columns = list(solar_raw.columns)
for column in columns:
y = solar_raw[column].values
sns.distplot(y)
plt.xticks([-0.5, 1.5])
plt.yticks([0, 1])
plt.title("{} distplot".format(column))
plt.savefig('/Users/quartz/Dropbox/iot-data-practice/tmp/{}.png'.format(column))
# check all features (independent variables): Hr, 충전시간발전량 (generation), 일출시간 (sunrise), 일몰시간 (sunset), 강수량 (precipitation), 습도 (humidity), 풍속 (wind speed), 기온 (temperature)
solar_raw.columns
# examine the features one by one
solar_raw.iloc[:1, 17:34]
# check for missing values: none
solar_raw.isna().sum()[50:100]
solar_raw.describe()
###Output
_____no_output_____
###Markdown
preprocessing
###Code
solar_raw['날짜'] = solar_raw['날짜'].apply(lambda x: "20"+str(x))
solar_raw.tail()
# convert dates to month-end dates
solar_raw['날짜'] = pd.to_datetime(solar_raw['날짜'])
solar_raw = solar_raw.set_index('날짜')
solar_raw.head()
###Output
_____no_output_____
###Markdown
Creating a new dataset, solar_revise- Predict tomorrow's 충전시간발전량 (charging-hours generation) from today's data (hourly generation, temperature, precipitation, humidity, wind speed); a simpler pandas-based alternative is sketched below.
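As a minimal sketch, the same one-day shift could also be done directly with pandas (assuming the index is daily and ordered, as above; `solar_shifted` is just an illustrative name):

```python
# Hypothetical equivalent using pandas shift(): move tomorrow's target onto today's row.
solar_shifted = solar_raw.copy()
solar_shifted['충전시간발전량'] = solar_shifted['충전시간발전량'].shift(-1)
solar_shifted = solar_shifted.dropna(subset=['충전시간발전량'])
```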
###Code
solar_1 = solar_raw.drop(['충전시간발전량'], axis=1)
solar_2 = solar_raw['충전시간발전량']
solar_2 = solar_2.values
solar_2 = solar_2[1:]
solar_2[:4]
solar_2 = np.append(solar_2, np.nan)
solar_1.shape, solar_2.shape
solar_1['충전시간발전량'] = solar_2
solar_1.dropna(inplace=True)
solar_revise = solar_1.copy()
solar_revise['20Hr'] = solar_revise['20Hr'].astype('float64')
solar_revise.to_pickle('./solar_revise.pkl')
solar_revise = pd.read_pickle('./solar_revise.pkl')
solar_revise.tail()
###Output
_____no_output_____
###Markdown
Feature Engineering```1. Use only the 10Hr ~ 16Hr columns of the dataset 2. Split the dataset into 4 intervals- 5~8 : 5_8Hr- 9~12 : 9_12Hr- 13~16 : 13_16Hr- 17~20 : 17_20Hr```
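A minimal sketch of the interval features described above (assuming hourly columns named '5Hr' through '20Hr' exist in solar_revise; aggregating by sum is an assumption, not taken from the original analysis):

```python
# Hypothetical interval aggregation: sum the hourly generation columns per interval.
interval_map = {
    '5_8Hr':   ['5Hr', '6Hr', '7Hr', '8Hr'],
    '9_12Hr':  ['9Hr', '10Hr', '11Hr', '12Hr'],
    '13_16Hr': ['13Hr', '14Hr', '15Hr', '16Hr'],
    '17_20Hr': ['17Hr', '18Hr', '19Hr', '20Hr'],
}
for new_col, hour_cols in interval_map.items():
    solar_revise[new_col] = solar_revise[hour_cols].sum(axis=1)
```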
###Code
solar_1 = solar_revise.iloc[:, 5:12]
solar_2 = solar_revise.iloc[:, 21:28]
solar_3 = solar_revise.iloc[:, 37:44]
solar_4 = solar_revise.iloc[:, 53:60]
solar_5 = solar_revise.iloc[:, 69:76]
solar_6 = solar_revise.iloc[:, -1:]
solar_new = pd.concat([solar_1, solar_2, solar_3, solar_4, solar_5, solar_6], axis=1)
solar_new.tail()
solar_new.shape
y = solar_revise.iloc[0:1, :16]
y
solar_revise = solar_revise.drop(['일출시간', '일몰시간'], axis=1)
solar_revise.columns
from IPython.display import clear_output # clear_output() can be used to clear cell output
# inspect the distribution of each variable
n = len(solar_revise)
for i in range(n)[:10]:
data = solar_revise.iloc[i:i+1,:16]
x = list(solar_revise.iloc[i:i+1,:16].columns)
y = list(solar_revise.iloc[i:i+1,:16].values[0])
plt.title('Hr_{}'.format(i))
plt.plot(x, y)
plt.savefig('./tmp_2/Hr_{}'.format(i))
clear_output()
### Creating new features (Hr, temperature, humidity, wind speed, precipitation)
###Output
_____no_output_____
###Markdown
Building the modeling function package, function
###Code
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import GridSearchCV
import keras
import keras.backend as K
from keras.layers import LSTM, Dense, Input, Embedding, Dropout
from keras.models import Sequential
from keras.models import Model
from keras.wrappers.scikit_learn import KerasRegressor
from keras.callbacks import EarlyStopping
early_stopping = EarlyStopping(monitor='val_acc')
from keras.callbacks import CSVLogger
csv_logger = CSVLogger('training.log')
def dataset_reshape(dataset, window_size=1):
data = []
for i in range(len(dataset) - window_size - 1):
change_data = dataset[i:(i+window_size)]
data.append(np.array(change_data))
return np.array(data)
# define a loss-history callback class
class LossHistory(keras.callbacks.Callback):
def init(self):
self.losses = []
def on_epoch_end(self, batch, logs={}):
self.losses.append(logs.get('loss'))
# create the loss-history object
history = LossHistory()
history.init()
###Output
_____no_output_____
###Markdown
modeling: LSTM
###Code
def make_model(dataset, input_shape=(0, 0), epochs=[10], batch_size=[30], dropout_rate=[0.2], layers=50, output_dim=1, cv=3):
columns = list(dataset.columns)
# create the split point (Timestamp)
split_date = pd.Timestamp('2017-04-15')
# build the train/test dataframes from the selected columns
train = dataset.loc[:split_date, columns]
test = dataset.loc[split_date:, columns]
# scaling
sc = MinMaxScaler()
train_sc = sc.fit_transform(train)
test_sc = sc.transform(test)
# train, test
train_sc_df = pd.DataFrame(data=train_sc, columns=columns, index=train.index)
test_sc_df = pd.DataFrame(data=test_sc, columns=columns, index=test.index)
X_train_df = train_sc_df.iloc[:, :-1]
y_train_df = train_sc_df.iloc[:, -1:]
X_test_df = test_sc_df.iloc[:, :-1]
y_test_df = test_sc_df.iloc[:, -1:]
# reshape 2D data (samples, features) into 3D data (samples, timesteps, features)
X_train = dataset_reshape(X_train_df, 7)
y_train = dataset_reshape(y_train_df['충전시간발전량'], 7)
X_test = dataset_reshape(X_test_df, 7)
y_test = dataset_reshape(y_test_df['충전시간발전량'], 7)
# build the model-construction function
def create_model(dropout_rate=0.0):
activation='relu'
dropout_rate=0.0
init_mode='uniform'
optimizer='adam'
lr=0.01
momentum=0
#create model
model = Sequential()
model.add(LSTM(layers, input_shape=input_shape))
model.add(Dropout(dropout_rate))
model.add(Dense(output_dim))
model.compile(loss='mean_squared_error', optimizer=optimizer, metrics=['accuracy'])
return model
# create model
model = KerasRegressor(build_fn=create_model, epochs=30, batch_size=30)
# Use scikit-learn to grid search
# activation = ['relu', 'tahn', 'sigmoid']
# optimizer = ['adam', 'SGD', 'RMSprop']
# dropout_rate = dropout_rate
# grid search epochs, batch size
epochs = epochs
batch_size = batch_size
param_grid = dict(epochs=epochs, batch_size=batch_size, dropout_rate=dropout_rate)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1, verbose=1, cv=cv)
grid = grid.fit(X_train, y_train, callbacks=[history, csv_logger, EarlyStopping])
clear_output()
# make graph
y_pred = grid.predict(X_test)
y_test_tuple = (y_test[0], y_test[1], y_test[2], y_test[3], y_test[4], y_test[5], y_test[6])
y_pred_tuple = (y_pred[0], y_pred[1], y_pred[2], y_pred[3], y_pred[4], y_pred[5], y_pred[6])
plt.figure(figsize=(20, 10))
fig, loss_ax = plt.subplots()
acc_ax = loss_ax.twinx()
loss_ax.plot(np.concatenate(y_test_tuple), 'b', label='act')
loss_ax.plot(np.concatenate(y_pred_tuple), 'r', label='pred')
loss_ax.legend(loc='lower right')
plt.show()
# loss graph
plt.plot(history.losses)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train'], loc='upper left')
plt.show()
return grid
grid_1 = make_model(dataset=solar_revise, input_shape=(7, 80), epochs=[100], dropout_rate=[0.2, 0.4], layers=100, output_dim=7)
grid_1.best_params_
grid_2 = make_model(dataset=solar_new, input_shape=(7, 35), epochs=[100], dropout_rate=[0.2, 0.4], layers=100, output_dim=7)
grid_1.cv_results_
###Output
_____no_output_____
###Markdown
save models
###Code
grid_1.best_estimator_.model.save("grid_1.h5")
grid_2.best_estimator_.model.save("grid_2.h5")
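# Reloading the saved models later (sketch; standard Keras API):
#   from keras.models import load_model
#   model_1 = load_model('grid_1.h5')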
###Output
_____no_output_____
###Markdown
log history Scikit-learn, Statsmodels- - - preprocessing```1. 20Hr, 일몰시간 (sunset), 일출시간 (sunrise), 날짜 (date) : int64 -> float64```
###Code
from sklearn.model_selection import train_test_split
solar_data = solar_raw.copy()
solar_data = solar_data.astype('float32', copy=True)
solar_data.tail()
X_data = solar_data.drop(['충전시간발전량'], axis=1)
y_data = solar_data['충전시간발전량']
y_data.tail()
X_train, X_test, y_train, y_test = train_test_split(X_data, y_data, test_size=0.33, random_state=17)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
modeling: scikit-learn- - -```1. Build the X, y train/test datasets 2. Model with linear regression (statsmodels) 3. Multiple regression 4. Check for and remove multicollinearity (PCA; see the sketch after the VIF table below) 5. Parameter tuning``` scikit-learn
###Code
from sklearn.linear_model import LinearRegression
X_train_df.tail()
LR = LinearRegression(fit_intercept=True)
model_lr_1 = LR.fit(X_train_df.values, y_train_df.values)
# performance evaluation
y_pred = model_lr_1.predict(X_test_df.values)
mse = (np.square(y_pred - y_test_df.values)).mean(axis=0)
mse
from sklearn.metrics import explained_variance_score
explained_variance_score(y_test_df.values, y_pred)
# cross-validation
from sklearn.model_selection import cross_val_score
scores = cross_val_score(model_lr_1, X_data, y_data, cv=50, scoring='r2')
scores = np.mean(scores)
scores
###Output
_____no_output_____
###Markdown
modeling: statsmodels- removing multicollinearity
###Code
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
model_lr_2 = sm.OLS(y_train_df, X_train_df)
result_2 = model_lr_2.fit()
result_2.summary()
pd.set_option('display.max_columns', 200)
pd.set_option('display.width', 1000)
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(
X_train_df.values, i) for i in range(X_train_df.shape[1])]
vif["features"] = X_train_df.columns
vif.sort_values('VIF Factor', ascending=False)
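# A minimal sketch of the PCA option mentioned in the plan above for removing
# multicollinearity (the 95% explained-variance threshold is an arbitrary choice):
from sklearn.decomposition import PCA

pca = PCA(n_components=0.95)  # keep enough components to explain ~95% of the variance
X_train_pca = pca.fit_transform(X_train_df.values)
X_test_pca = pca.transform(X_test_df.values)
print(X_train_pca.shape, X_test_pca.shape)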
formula = "충전시간발전량 ~ "
for column in list(X_data.columns):
to_add = "scale({}) + ".format(column)
formula += to_add
formula
###Output
_____no_output_____ |
2. Supervised Learning/Decision Trees/Gradient Boosting/sklearn.XGBoost.ipynb | ###Markdown
Sklearn, XGBoost sklearn.ensemble.RandomForestClassifier
###Code
from sklearn import ensemble, model_selection, metrics
import numpy as np
import pandas as pd
import xgboost as xgb
%pylab inline
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Data The task on Kaggle: https://www.kaggle.com/c/bioresponse Data: https://www.kaggle.com/c/bioresponse/data Given the characteristics of a molecule, the task is to determine whether a biological response will occur. The features are normalized. For the demonstration we use the training sample from the original data, train.csv; the data file is attached.
###Code
bioresponce = pd.read_csv('bioresponse.csv', header=0, sep=',')
bioresponce.head()
bioresponce_target = bioresponce.Activity.values
bioresponce_data = bioresponce.iloc[:, 1:]
###Output
_____no_output_____
###Markdown
The RandomForestClassifier model Accuracy as a function of the number of trees
###Code
n_trees = [1] + list(range(10, 55, 5))
%%time
scoring = []
for n_tree in n_trees:
estimator = ensemble.RandomForestClassifier(n_estimators = n_tree, min_samples_split=5, random_state=1)
score = model_selection.cross_val_score(estimator, bioresponce_data, bioresponce_target,
scoring = 'accuracy', cv = 3)
scoring.append(score)
scoring = np.asmatrix(scoring)
pylab.plot(n_trees, scoring.mean(axis = 1), marker='.', label='RandomForest')
pylab.grid(True)
pylab.xlabel('n_trees')
pylab.ylabel('score')
pylab.title('Accuracy score')
pylab.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
Learning curves for trees of greater depth
###Code
%%time
xgb_scoring = []
for n_tree in n_trees:
estimator = xgb.XGBClassifier(learning_rate=0.1, max_depth=5, n_estimators=n_tree, min_child_weight=3)
score = model_selection.cross_val_score(estimator, bioresponce_data, bioresponce_target,
scoring = 'accuracy', cv = 3)
xgb_scoring.append(score)
xgb_scoring = np.asmatrix(xgb_scoring)
xgb_scoring
pylab.plot(n_trees, scoring.mean(axis = 1), marker='.', label='RandomForest')
pylab.plot(n_trees, xgb_scoring.mean(axis = 1), marker='.', label='XGBoost')
pylab.grid(True)
pylab.xlabel('n_trees')
pylab.ylabel('score')
pylab.title('Accuracy score')
pylab.legend(loc='lower right')
###Output
_____no_output_____ |
notebooks/rllib/.ipynb_checkpoints/cartpole-checkpoint.ipynb | ###Markdown
https://medium.com/distributed-computing-with-ray/intro-to-rllib-example-environments-3a113f532c70
###Code
%load_ext autoreload
import ray
import ray.rllib.agents.ppo as ppo
from ray.tune.logger import pretty_print
ray.shutdown()
ray.init(ignore_reinit_error=True)
###Output
WARNING:tensorflow:From /home/zciccwf/.conda/envs/deep_scheduler/lib/python3.8/site-packages/tensorflow/python/compat/v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
###Markdown
Configure Checkpoint Saving
###Code
import shutil
import os
# clear saved agent folder
CHECKPOINT_ROOT = 'tmp/ppo/cartpole_v0'
shutil.rmtree(CHECKPOINT_ROOT, ignore_errors=True, onerror=None)
# clear ray results folder
RAY_RESULTS = os.getenv('HOME') + '/ray_results'
print(RAY_RESULTS)
shutil.rmtree(RAY_RESULTS, ignore_errors=True, onerror=None)
###Output
/home/zciccwf/ray_results
###Markdown
Configure RL Params
###Code
%autoreload
config = ppo.DEFAULT_CONFIG.copy() # use 'proximal policy optimisation' policy optimiser
print(config.keys())
config['num_gpus'] = 1
config['num_workers'] = 1
config['eager_tracing'] = False
config['log_level'] = 'WARN'
agent = ppo.PPOTrainer(config=config, env='CartPole-v0')
###Output
2020-11-06 11:33:43,626 INFO trainer.py:591 -- Tip: set framework=tfe or the --eager flag to enable TensorFlow eager execution
2020-11-06 11:33:43,627 INFO trainer.py:616 -- Current log_level is WARN. For more information, set 'log_level': 'INFO' / 'DEBUG' or use the -v and -vv flags.
###Markdown
Train RL Agent
###Code
%autoreload
N_ITER = 50
s = "{:3d} | reward {:6.2f}/{:6.2f}/{:6.2f} | len {:6.2f} | saved agent to {}"
for i in range(N_ITER):
# perform 1 iter of training the policy with the PPO algorithm
result = agent.train()
file_name = agent.save(CHECKPOINT_ROOT)
print(s.format(
i + 1,
result["episode_reward_min"],
result["episode_reward_mean"],
result["episode_reward_max"],
result["episode_len_mean"],
file_name
))
###Output
[2m[36m(pid=196365)[0m WARNING:tensorflow:From /home/zciccwf/.conda/envs/deep_scheduler/lib/python3.8/site-packages/tensorflow/python/ops/resource_variable_ops.py:1659: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
[2m[36m(pid=196365)[0m Instructions for updating:
[2m[36m(pid=196365)[0m If using Keras pass *_constraint arguments to layers.
###Markdown
Examining the Policy
###Code
policy = agent.get_policy()
model = policy.model
print(model.base_model.summary())
###Output
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
observations (InputLayer) [(None, 4)] 0
__________________________________________________________________________________________________
fc_1 (Dense) (None, 256) 1280 observations[0][0]
__________________________________________________________________________________________________
fc_value_1 (Dense) (None, 256) 1280 observations[0][0]
__________________________________________________________________________________________________
fc_2 (Dense) (None, 256) 65792 fc_1[0][0]
__________________________________________________________________________________________________
fc_value_2 (Dense) (None, 256) 65792 fc_value_1[0][0]
__________________________________________________________________________________________________
fc_out (Dense) (None, 2) 514 fc_2[0][0]
__________________________________________________________________________________________________
value_out (Dense) (None, 1) 257 fc_value_2[0][0]
==================================================================================================
Total params: 134,915
Trainable params: 134,915
Non-trainable params: 0
__________________________________________________________________________________________________
None
###Markdown
Rollout a Trained Agent from Saved Checkpoint
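Alternatively, a minimal in-Python rollout sketch (assumes the `gym` package is available and uses the checkpoint path from the command below; `agent.restore` and `agent.compute_action` are the API names in this RLlib version):

```python
import gym

# Restore the trained agent from the saved checkpoint and run a single episode.
agent.restore("tmp/ppo/cartpole_v0/checkpoint_50/checkpoint-50")
env = gym.make("CartPole-v0")
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    action = agent.compute_action(obs)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode reward:", total_reward)
```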
###Code
!rllib rollout tmp/ppo/cartpole_v0/checkpoint_50/checkpoint-50 --config "{\"env\": \"CartPole-v0\"}" --run PPO --steps 2000
###Output
WARNING:tensorflow:From /home/zciccwf/.conda/envs/deep_scheduler/lib/python3.8/site-packages/tensorflow/python/compat/v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
2020-11-06 11:53:02,219 INFO services.py:1164 -- View the Ray dashboard at [1m[32mhttp://127.0.0.1:8265[39m[22m
2020-11-06 11:53:03,774 INFO trainer.py:591 -- Tip: set framework=tfe or the --eager flag to enable TensorFlow eager execution
2020-11-06 11:53:03,774 INFO trainer.py:616 -- Current log_level is WARN. For more information, set 'log_level': 'INFO' / 'DEBUG' or use the -v and -vv flags.
2020-11-06 11:53:04.793561: F tensorflow/stream_executor/lib/statusor.cc:34] Attempting to fetch value instead of handling error Internal: failed initializing StreamExecutor for CUDA device ordinal 1: Internal: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_OUT_OF_MEMORY: out of memory; total memory reported: 16945512448
*** Aborted at 1604663584 (unix time) try "date -d @1604663584" if you are using GNU date ***
PC: @ 0x0 (unknown)
*** SIGABRT (@0x82500003ef5) received by PID 16117 (TID 0x7f1c30f00740) from PID 16117; stack trace: ***
@ 0x7f1c30ada630 (unknown)
@ 0x7f1c30733387 __GI_raise
@ 0x7f1c30734a78 __GI_abort
@ 0x7f1bddbcc447 tensorflow::internal::LogMessageFatal::~LogMessageFatal()
@ 0x7f1bddf5415d stream_executor::port::internal_statusor::Helper::Crash()
@ 0x7f1bd119c90e tensorflow::BaseGPUDeviceFactory::EnablePeerAccess()
@ 0x7f1bd11a2f41 tensorflow::BaseGPUDeviceFactory::CreateDevices()
@ 0x7f1bd11e73bd tensorflow::DeviceFactory::AddDevices()
@ 0x7f1bd55776c8 tensorflow::DirectSessionFactory::NewSession()
@ 0x7f1bd126e7db tensorflow::NewSession()
@ 0x7f1bd4fa6b26 TF_NewSession
@ 0x7f1bd45dee02 tensorflow::TF_NewSessionRef()
@ 0x7f1bcf8aff28 _ZZN8pybind1112cpp_function10initializeIZL32pybind11_init__pywrap_tf_sessionRNS_6moduleEEUlP8TF_GraphPK17TF_SessionOptionsE8_P10TF_SessionJS5_S8_EJNS_4nameENS_5scopeENS_7siblingENS_19return_value_policyEEEEvOT_PFT0_DpT1_EDpRKT2_ENUlRNS_6detail13function_callEE1_4_FUNEST_
@ 0x7f1bcf893e7d pybind11::cpp_function::dispatcher()
@ 0x55e4033b4c1e cfunction_call_varargs
@ 0x55e4033a9fff _PyObject_MakeTpCall
@ 0x55e40345c394 _PyEval_EvalFrameDefault
@ 0x55e4034438f0 _PyEval_EvalCodeWithName
@ 0x55e403444e74 _PyFunction_Vectorcall
@ 0x55e4033e0a5e method_vectorcall
@ 0x55e4034585b9 _PyEval_EvalFrameDefault
@ 0x55e403444099 _PyEval_EvalCodeWithName
@ 0x55e403444e74 _PyFunction_Vectorcall
@ 0x55e40342c69a slot_tp_init
@ 0x55e4033a9e98 _PyObject_MakeTpCall
@ 0x55e40345c3dc _PyEval_EvalFrameDefault
@ 0x55e403444099 _PyEval_EvalCodeWithName
@ 0x55e403444e74 _PyFunction_Vectorcall
@ 0x55e40345773a _PyEval_EvalFrameDefault
@ 0x55e403444099 _PyEval_EvalCodeWithName
@ 0x55e403444e74 _PyFunction_Vectorcall
@ 0x55e40342c69a slot_tp_init
|
Regression/Linear Models/ElasticNet_StandardScaler_PowerTransformer.ipynb | ###Markdown
ElasticNet with Standard Scaler & Power Transformer This code template is for regression tasks using ElasticNet, a linear regression technique, combined with StandardScaler and the PowerTransformer feature transformation in a pipeline. Required Packages
###Code
import warnings as wr
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler,PowerTransformer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_squared_error, r2_score,mean_absolute_error
wr.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Initialization Filepath of CSV file
###Code
#filepath
file_path= ""
###Output
_____no_output_____
###Markdown
List of features required for model training.
###Code
#x_values
features=[]
###Output
_____no_output_____
###Markdown
Target feature for prediction.
###Code
#y_value
target=''
###Output
_____no_output_____
###Markdown
Data Fetching Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
###Code
df=pd.read_csv(file_path) #reading file
df.head()
###Output
_____no_output_____
###Markdown
Data Preprocessing Since most machine learning models in the sklearn library don't handle string category data or null values, we have to explicitly remove or replace them. The snippet below has functions that remove null values if any exist and convert string class data in the dataset by encoding it as integer classes.
###Code
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
###Output
_____no_output_____
###Markdown
Correlation Map In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
###Code
plt.figure(figsize = (15, 10))
corr = df.corr()
mask = np.triu(np.ones_like(corr, dtype = bool))
sns.heatmap(corr, mask = mask, linewidths = 1, annot = True, fmt = ".2f")
plt.show()
###Output
_____no_output_____
###Markdown
Feature Selection It is the process of reducing the number of input variables when developing a predictive model, used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and the target/outcome to Y.
###Code
#spliting data into X(features) and Y(Target)
X=df[features]
Y=df[target]
###Output
_____no_output_____
###Markdown
Calling preprocessing functions on the feature and target set.
###Code
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
###Output
_____no_output_____
###Markdown
Data Splitting The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
###Code
# we can choose random_state and test_size as per our requirement
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 12) #performing datasplitting
###Output
_____no_output_____
###Markdown
Data RescalingStandardScaler standardizes features by removing the mean and scaling to unit varianceThe standard score of a sample x is calculated as:z = (x - u) / swhere u is the mean of the training samples or zero if with_mean=False, and s is the standard deviation of the training samples or one if with_std=False.Refer [API](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) for parameters Feature TransformationApply a power transform featurewise to make data more Gaussian-like.Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.Refer [API](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html) for parameters ModelElastic Net first emerged as a result of critique on Lasso, whose variable selection can be too dependent on data and thus unstable. The solution is to combine the penalties of Ridge regression and Lasso to get the best of both worlds.**Features of ElasticNet Regression-*** It combines the L1 and L2 approaches.* It performs a more efficient regularization process.* It has two parameters to be set, λ and α. Model Tuning Parameters 1. alpha : float, default=1.0 > Constant that multiplies the penalty terms. Defaults to 1.0. See the notes for the exact mathematical meaning of this parameter. alpha = 0 is equivalent to an ordinary least square, solved by the LinearRegression object. For numerical reasons, using alpha = 0 with the Lasso object is not advised. Given this, you should use the LinearRegression object. 2. l1_ratio : float, default=0.5> The ElasticNet mixing parameter, with 0 <= l1_ratio <= 1. For l1_ratio = 0 the penalty is an L2 penalty. For l1_ratio = 1 it is an L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2. 3. normalize : bool, default=False>This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False. 4. max_iter : int, default=1000>The maximum number of iterations. 5. tol : float, default=1e-4>The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol. 6. selection : {‘cyclic’, ‘random’}, default=’cyclic’>If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4.
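For example, the mixing behaviour described above could be set explicitly in the pipeline (a sketch only; the alpha and l1_ratio values below are placeholders, not tuned for this data):

```python
# Hypothetical variant with explicit regularization settings:
# l1_ratio close to 1.0 behaves like Lasso, close to 0.0 like Ridge.
model_variant = make_pipeline(StandardScaler(), PowerTransformer(),
                              ElasticNet(alpha=0.5, l1_ratio=0.7, random_state=42))
```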
###Code
model = make_pipeline(StandardScaler(),PowerTransformer(), ElasticNet(random_state = 42))
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Model Accuracy The score() method returns the mean accuracy on the given test data and labels for classifiers; for a regression pipeline such as this one, score() returns the coefficient of determination (R²) of the prediction.
###Code
print("Accuracy score {:.2f} %\n".format(model.score(X_test,y_test)*100))
#prediction on testing set
prediction=model.predict(X_test)
###Output
_____no_output_____
###Markdown
Model evaluation **r2_score:** The r2_score function computes the proportion of the variability in the target that is explained by our model. **MAE:** The mean absolute error function calculates the total error as the average absolute distance between the real data and the predicted data. **MSE:** The mean squared error function squares the errors, penalizing the model for large errors. The standard definitions are given below.
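For reference, the standard textbook definitions behind these metrics (with $\hat{y}_i$ the predicted value and $\bar{y}$ the mean of the true values):

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\lvert y_i - \hat{y}_i\rvert,\qquad \mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2,\qquad R^2 = 1 - \frac{\sum_{i}(y_i - \hat{y}_i)^2}{\sum_{i}(y_i - \bar{y})^2}$$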
###Code
print('Mean Absolute Error:', mean_absolute_error(y_test, prediction))
print('Mean Squared Error:', mean_squared_error(y_test, prediction))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, prediction)))
print("R-squared score : ",r2_score(y_test,prediction))
###Output
R-squared score : 0.44683140925357334
###Markdown
Prediction Plot We plot the actual test observations (y_test) together with the model's predictions for the first 20 records, so the predicted values can be compared against the true values.
###Code
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(X_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
###Output
_____no_output_____ |
SST5/SST5 Analyze MCMC leave4out3keep200 T5 generations plus E positive compute-20May.ipynb | ###Markdown
Compute % target class for noperturb
###Code
target_classes = [3,4]
topk_list = [10000, 1000, 100, 10]
percent_target_class = []
gt_class_preds = noperturb_df['gt_class_pred']
# gen_input_seq_classes = noperturb_df['gen_input_seq_class']
# sent_deltas = noperturb_df['sent_delta']
df = noperturb_df
###Output
_____no_output_____
###Markdown
iterate through all perturbed result tsv files
###Code
for perturb_results_tsv in perturb_results_tsvs:
print("*-"*30)
print("perturb_results_tsv: ", perturb_results_tsv)
perturb_df = pd.read_table(perturb_results_tsv)
perturb_df = perturb_df.sort_values(by='disc_pred', ascending=False)
# perturb_df['sent_delta'] = perturb_df['gt_class_pred'] - perturb_df['gen_input_seq_class']
gt_class_preds = perturb_df['gt_class_pred']
# gen_input_seq_classes = perturb_df['gen_input_seq_class']
# sent_deltas = perturb_df['sent_delta']
generated_seq_ppls = perturb_df['generated_seq_ppl']
for target_class in target_classes:
total_num = len(perturb_df['gt_class_pred'])
print("target_class: ", target_class)
num_target_class = np.sum(perturb_df['gt_class_pred'] == target_class)
percent_target_class = num_target_class / total_num *100
print("percent_target_class: ", percent_target_class)
for topk in topk_list:
topk_gt_class_preds = gt_class_preds[:topk]
# topk_sent_deltas = sent_deltas[:topk]
topk_num = len(topk_gt_class_preds)
print("topk: ", topk)
# print("topk_gt_class_preds: ", topk_gt_class_preds)
topk_num_target_class = np.sum(topk_gt_class_preds == target_class)
topk_percent_target_class = topk_num_target_class / topk_num *100
# print("topk_num_target_class: ", topk_num_target_class)
# print("topk_num: ", topk_num)
print("topk_percent_target_class: ", topk_percent_target_class)
# topk_sent_delta_mean = np.mean(topk_sent_deltas)
# print("topk_sent_deltas: ", topk_sent_deltas)
# print("topk_sent_delta_mean: ", topk_sent_delta_mean)
print("*")
print("--------------")
print("-------For all target classes-------")
print("target_classes: ", target_classes)
total_num = len(perturb_df['gt_class_pred'])
num_target_class = np.sum(perturb_df['gt_class_pred'].isin(target_classes))
percent_target_class = num_target_class / total_num *100
print("percent_target_class: ", percent_target_class)
for topk in topk_list:
topk_gt_class_preds = gt_class_preds[:topk]
# topk_sent_deltas = sent_deltas[:topk]
topk_generated_seq_ppls = generated_seq_ppls[:topk]
topk_num = len(topk_gt_class_preds)
print("topk: ", topk)
# print("topk_gt_class_preds: ", topk_gt_class_preds)
topk_num_target_class = np.sum(topk_gt_class_preds.isin(target_classes))
topk_percent_target_class = topk_num_target_class / topk_num *100
# print("topk_num_target_class: ", topk_num_target_class)
# print("topk_num: ", topk_num)
print("topk_percent_target_class: ", topk_percent_target_class)
topk_generated_seq_ppl_mean = np.mean(topk_generated_seq_ppls)
topk_generated_seq_ppl_std = np.std(topk_generated_seq_ppls)
print("topk_generated_seq_ppl_mean: ", topk_generated_seq_ppl_mean)
print("topk_generated_seq_ppl_std: ", topk_generated_seq_ppl_std)
# topk_sent_delta_mean = np.mean(topk_sent_deltas)
# print("topk_sent_deltas: ", topk_sent_deltas)
# print("topk_sent_delta_mean: ", topk_sent_delta_mean)
print("*")
# E[% positive, strong-positive] computation
df = perturb_df
num_rounds = 100 # N
round_pool_size = 1000
topk = 100 # K
main_pool_size = 25000
target_classes = [3, 4]
round_topk = {}
# cols_to_sort = ['latent_head_pred']
cols_to_sort = ['disc_pred']
df_main_pool = df.sample(n=main_pool_size)
print("--------------")
print("E[% positive, strong-positive] computation")
# print("Sorted by ", cols_to_sort)
for col_to_sort in cols_to_sort:
print("col_to_sort: ", col_to_sort)
round_topk[col_to_sort] = {}
for round_ind in range(num_rounds):
sampled_rows = df_main_pool.sample(n=round_pool_size)
sorted_sampled_rows = sampled_rows.sort_values(by=col_to_sort, ascending=False)[:topk]
topk_rows = sorted_sampled_rows[:topk]
round_topk[col_to_sort][round_ind] = {}
for target_class in target_classes:
total_num = len(topk_rows['gt_class_pred'])
# print("target_class: ", target_class)
num_target_class = np.sum(topk_rows['gt_class_pred'] == target_class)
percent_target_class = num_target_class / total_num *100
# print("percent_target_class: ", percent_target_class)
round_topk[col_to_sort][round_ind][target_class] = percent_target_class
# print("target_classes: ", target_classes)
total_num = len(topk_rows['gt_class_pred'])
num_target_class = np.sum(topk_rows['gt_class_pred'].isin(target_classes))
percent_target_class = num_target_class / total_num *100
# print("percent_target_class: ", percent_target_class)
round_topk[col_to_sort][round_ind]['all'] = percent_target_class
for target_class in target_classes:
percent_values = []
for round_ind in range(num_rounds):
percent_values.append(round_topk[col_to_sort][round_ind][target_class])
print("target_class: ", target_class)
mean_percent_values = np.mean(percent_values)
std_percent_values = np.std(percent_values)
print("mean_percent_values: ", mean_percent_values)
print("std_percent_values: ", std_percent_values)
percent_values = []
for round_ind in range(num_rounds):
percent_values.append(round_topk[col_to_sort][round_ind]['all'])
print("target_classes: ", target_classes)
mean_percent_values = np.mean(percent_values)
std_percent_values = np.std(percent_values)
print("mean_percent_values: ", mean_percent_values)
print("std_percent_values: ", std_percent_values)
###Output
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
perturb_results_tsv: generated_seqs/mcmc_SST5/SST5_mcmc_trainlabel2initseqs_20iter_temp01_t5mut_maxmask2/20iter_temp01_t5mut_maxmask2-mcmc_seqs.tsv
target_class: 3
percent_target_class: 8.070350985221674
topk: 10000
topk_percent_target_class: 15.8
*
topk: 1000
topk_percent_target_class: 38.7
*
topk: 100
topk_percent_target_class: 36.0
*
topk: 10
topk_percent_target_class: 20.0
*
--------------
target_class: 4
percent_target_class: 0.6388546798029556
topk: 10000
topk_percent_target_class: 1.46
*
topk: 1000
topk_percent_target_class: 11.1
*
topk: 100
topk_percent_target_class: 49.0
*
topk: 10
topk_percent_target_class: 80.0
*
--------------
-------For all target classes-------
target_classes: [3, 4]
percent_target_class: 8.70920566502463
topk: 10000
topk_percent_target_class: 17.26
topk_generated_seq_ppl_mean: 324.971897955456
topk_generated_seq_ppl_std: 7548.950360305394
*
topk: 1000
topk_percent_target_class: 49.8
topk_generated_seq_ppl_mean: 175.74435349369048
topk_generated_seq_ppl_std: 614.0970980091371
*
topk: 100
topk_percent_target_class: 85.0
topk_generated_seq_ppl_mean: 108.07325143814087
topk_generated_seq_ppl_std: 173.36038908109805
*
topk: 10
topk_percent_target_class: 100.0
topk_generated_seq_ppl_mean: 40.88314914703369
topk_generated_seq_ppl_std: 23.54353512475065
*
--------------
E[% positive, strong-positive] computation
col_to_sort: disc_pred
target_class: 3
mean_percent_values: 29.26
std_percent_values: 4.520221233523864
target_class: 4
mean_percent_values: 5.27
std_percent_values: 2.0873667622150163
target_classes: [3, 4]
mean_percent_values: 34.53
std_percent_values: 4.968812735453008
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
perturb_results_tsv: generated_seqs/mcmc_SST5/SST5_mcmc_trainlabel2initseqs_20iter_temp001_t5mut_maxmask2/20iter_temp001_t5mut_maxmask2-mcmc_seqs.tsv
target_class: 3
percent_target_class: 7.558497536945813
topk: 10000
topk_percent_target_class: 14.729999999999999
*
topk: 1000
topk_percent_target_class: 37.0
*
topk: 100
topk_percent_target_class: 31.0
*
topk: 10
topk_percent_target_class: 30.0
*
--------------
target_class: 4
percent_target_class: 0.5849753694581281
topk: 10000
topk_percent_target_class: 1.38
*
topk: 1000
topk_percent_target_class: 10.8
*
topk: 100
topk_percent_target_class: 46.0
*
topk: 10
topk_percent_target_class: 60.0
*
--------------
-------For all target classes-------
target_classes: [3, 4]
percent_target_class: 8.143472906403941
topk: 10000
topk_percent_target_class: 16.11
topk_generated_seq_ppl_mean: 219.9312084484277
topk_generated_seq_ppl_std: 2694.2279915573204
*
topk: 1000
topk_percent_target_class: 47.8
topk_generated_seq_ppl_mean: 181.45425888872145
topk_generated_seq_ppl_std: 681.3446227975745
*
topk: 100
topk_percent_target_class: 77.0
topk_generated_seq_ppl_mean: 125.77347030639649
topk_generated_seq_ppl_std: 299.32699014740854
*
topk: 10
topk_percent_target_class: 90.0
topk_generated_seq_ppl_mean: 104.97072563171386
topk_generated_seq_ppl_std: 121.9682217944817
*
--------------
E[% positive, strong-positive] computation
col_to_sort: disc_pred
target_class: 3
mean_percent_values: 26.37
std_percent_values: 4.788851636874962
target_class: 4
mean_percent_values: 5.15
std_percent_values: 2.2017038856304003
target_classes: [3, 4]
mean_percent_values: 31.52
std_percent_values: 5.016931332996297
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
perturb_results_tsv: generated_seqs/mcmc_SST5/SST5_mcmc_trainlabel2initseqs_100iter_temp01_t5mut_maxmask2/100iter_temp01_t5mut_maxmask2-mcmc_seqs.tsv
target_class: 3
percent_target_class: 7.192887931034483
topk: 10000
topk_percent_target_class: 14.82
*
topk: 1000
topk_percent_target_class: 44.0
*
topk: 100
topk_percent_target_class: 40.0
*
topk: 10
topk_percent_target_class: 0.0
*
--------------
target_class: 4
percent_target_class: 0.5926724137931034
topk: 10000
topk_percent_target_class: 1.51
*
topk: 1000
topk_percent_target_class: 10.8
*
topk: 100
topk_percent_target_class: 53.0
*
topk: 10
topk_percent_target_class: 100.0
*
--------------
-------For all target classes-------
target_classes: [3, 4]
percent_target_class: 7.785560344827585
topk: 10000
topk_percent_target_class: 16.33
topk_generated_seq_ppl_mean: 263.36750125455666
topk_generated_seq_ppl_std: 1132.583156580489
*
topk: 1000
topk_percent_target_class: 54.800000000000004
topk_generated_seq_ppl_mean: 224.14259061717988
topk_generated_seq_ppl_std: 1081.9870376492852
*
topk: 100
topk_percent_target_class: 93.0
topk_generated_seq_ppl_mean: 88.76500040054322
topk_generated_seq_ppl_std: 73.81148288213879
*
topk: 10
topk_percent_target_class: 100.0
topk_generated_seq_ppl_mean: 89.49919853210449
topk_generated_seq_ppl_std: 24.31455858541514
*
--------------
E[% positive, strong-positive] computation
col_to_sort: disc_pred
target_class: 3
mean_percent_values: 30.42
std_percent_values: 4.754324347370507
target_class: 4
mean_percent_values: 4.58
std_percent_values: 2.182567295640618
target_classes: [3, 4]
mean_percent_values: 35.0
std_percent_values: 5.31224999411737
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
perturb_results_tsv: generated_seqs/mcmc_SST5/SST5_mcmc_trainlabel2initseqs_100iter_temp001_t5mut_maxmask2/100iter_temp001_t5mut_maxmask2-mcmc_seqs.tsv
target_class: 3
percent_target_class: 6.758004926108374
topk: 10000
topk_percent_target_class: 15.040000000000001
*
topk: 1000
topk_percent_target_class: 47.099999999999994
*
topk: 100
topk_percent_target_class: 40.0
*
topk: 10
topk_percent_target_class: 20.0
*
--------------
target_class: 4
percent_target_class: 0.5464901477832512
topk: 10000
topk_percent_target_class: 1.35
*
topk: 1000
topk_percent_target_class: 10.100000000000001
*
topk: 100
topk_percent_target_class: 55.00000000000001
*
topk: 10
topk_percent_target_class: 80.0
*
--------------
-------For all target classes-------
target_classes: [3, 4]
percent_target_class: 7.304495073891626
topk: 10000
topk_percent_target_class: 16.39
topk_generated_seq_ppl_mean: 379.0083471046785
topk_generated_seq_ppl_std: 3919.094332940854
*
topk: 1000
topk_percent_target_class: 57.199999999999996
topk_generated_seq_ppl_mean: 722.119066740036
topk_generated_seq_ppl_std: 5147.209599365265
*
topk: 100
topk_percent_target_class: 95.0
topk_generated_seq_ppl_mean: 181.8544462108612
topk_generated_seq_ppl_std: 337.2284880012561
*
topk: 10
topk_percent_target_class: 100.0
topk_generated_seq_ppl_mean: 119.10943069458008
topk_generated_seq_ppl_std: 122.61757969084381
*
--------------
E[% positive, strong-positive] computation
col_to_sort: disc_pred
target_class: 3
mean_percent_values: 33.37
std_percent_values: 4.3695651957603285
target_class: 4
mean_percent_values: 4.19
std_percent_values: 2.1619204425695226
target_classes: [3, 4]
mean_percent_values: 37.56
std_percent_values: 4.800666620376799
|
examples/lenet300/lenet300_lc_example.ipynb | ###Markdown
Data We use the MNIST dataset for this demo. The dataset contains 28x28 grayscale images of digits from 0 to 9. The images are normalized to grayscale values between 0 and 1, and then the mean is subtracted.
###Code
from matplotlib import pyplot as plt
import warnings
warnings.filterwarnings('ignore')
plt.rcParams['figure.figsize'] = [10, 5]
def show_MNIST_images():
train_data_th = datasets.MNIST(root='./datasets', download=True, train=True)
data_train = np.array(train_data_th.data[:])
targets = np.array(train_data_th.targets)
images_to_show = 5
random_indexes = np.random.randint(data_train.shape[0], size=images_to_show)
for i,ind in enumerate(random_indexes):
plt.subplot(1,images_to_show,i+1)
plt.imshow(data_train[ind], cmap='gray')
plt.xlabel(targets[ind])
plt.xticks([])
plt.yticks([])
show_MNIST_images()
def data_loader(batch_size=2048, n_workers=4):
train_data_th = datasets.MNIST(root='./datasets', download=True, train=True)
test_data_th = datasets.MNIST(root='./datasets', download=True, train=False)
data_train = np.array(train_data_th.data[:]).reshape([-1, 28 * 28]).astype(np.float32)
data_test = np.array(test_data_th.data[:]).reshape([-1, 28 * 28]).astype(np.float32)
data_train = (data_train / 255)
dtrain_mean = data_train.mean(axis=0)
data_train -= dtrain_mean
data_test = (data_test / 255).astype(np.float32)
data_test -= dtrain_mean
train_data = TensorDataset(torch.from_numpy(data_train), train_data_th.targets)
test_data = TensorDataset(torch.from_numpy(data_test), test_data_th.targets)
train_loader = DataLoader(train_data, num_workers=n_workers, batch_size=batch_size, shuffle=True,)
test_loader = DataLoader(test_data, num_workers=n_workers, batch_size=batch_size, shuffle=False)
return train_loader, test_loader
###Output
_____no_output_____
###Markdown
Reference NetworkWe use a CUDA-capable GPU for our experiments. The network has 3 fully-connected layers with dimensions 784x300, 300x100, and 100x10, for a total of 266,200 weights plus 410 biases (266,610 parameters). The network was trained to a test error of 1.79%, which is a pretty decent result but not as low as you can get with convolutional neural networks.
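###Markdown
For reference, a minimal sketch of what a LeNet-300-100 style network looks like in PyTorch is shown below. This is an illustration only: the notebook actually loads `lenet300_modern` from the LC toolkit, so the class name, the choice of ReLU activations, and the `loss` attribute here are assumptions.
###Code
import torch
import torch.nn as nn
class LeNet300Sketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 300)
        self.fc2 = nn.Linear(300, 100)
        self.fc3 = nn.Linear(100, 10)
        # the training code below calls model.loss(out, target)
        self.loss = nn.CrossEntropyLoss()
    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        return self.fc3(x)
# 784*300 + 300*100 + 100*10 weights plus (300+100+10) biases = 266,610 parameters
print(sum(p.numel() for p in LeNet300Sketch().parameters()))
###Output
_____no_output_____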
###Code
device = torch.device('cuda')
def train_test_acc_eval_f(net):
train_loader, test_loader = data_loader()
def forward_func(x, target):
y = net(x)
return y, net.loss(y, target)
acc_train, loss_train = compute_acc_loss(forward_func, train_loader)
acc_test, loss_test = compute_acc_loss(forward_func, test_loader)
print(f"Train err: {100-acc_train*100:.2f}%, train loss: {loss_train}")
print(f"TEST ERR: {100-acc_test*100:.2f}%, test loss: {loss_test}")
def load_reference_lenet300():
net = lenet300_modern().to(device)
state_dict = torch.utils.model_zoo.load_url('https://ucmerced.box.com/shared/static/766axnc8qq429hiqqyqqo07ek46oqoxq.th')
net.load_state_dict(state_dict)
net.to(device)
return net
###Output
_____no_output_____
###Markdown
Let's verify the model's train and test errors:
###Code
train_test_acc_eval_f(load_reference_lenet300().eval().to(device))
###Output
_____no_output_____
###Markdown
Compression using the LC toolkit Step 1: L stepWe will use the same L step with the same hyperparameters for all our compression examples.
###Code
def my_l_step(model, lc_penalty, step):
train_loader, test_loader = data_loader()
params = list(filter(lambda p: p.requires_grad, model.parameters()))
lr = 0.7*(0.98**step)
optimizer = optim.SGD(params, lr=lr, momentum=0.9, nesterov=True)
print(f'L-step #{step} with lr: {lr:.5f}')
epochs_per_step_ = 7
if step == 0:
epochs_per_step_ = epochs_per_step_ * 2
for epoch in range(epochs_per_step_):
avg_loss = []
for x, target in train_loader:
optimizer.zero_grad()
x = x.to(device)
target = target.to(dtype=torch.long, device=device)
out = model(x)
loss = model.loss(out, target) + lc_penalty()
avg_loss.append(loss.item())
loss.backward()
optimizer.step()
print(f"\tepoch #{epoch} is finished.")
print(f"\t avg. train loss: {np.mean(avg_loss):.6f}")
###Output
_____no_output_____
###Markdown
Step 2: Schedule of mu values
###Code
mu_s = [9e-5 * (1.1 ** n) for n in range(20)]
# 20 L-C steps in total
# total training epochs is 7 x 20 = 140
###Output
_____no_output_____
###Markdown
Compression time! PruningLet us prune all but 5% of the weights in the network (5% = 13310 weights)
###Code
net = load_reference_lenet300()
layers = [lambda x=x: getattr(x, 'weight') for x in net.modules() if isinstance(x, nn.Linear)]
compression_tasks = {
Param(layers, device): (AsVector, ConstraintL0Pruning(kappa=13310), 'pruning')
}
lc_alg = lc.Algorithm(
model=net, # model to compress
compression_tasks=compression_tasks, # specifications of compression
l_step_optimization=my_l_step, # implementation of L-step
mu_schedule=mu_s, # schedule of mu values
evaluation_func=train_test_acc_eval_f # evaluation function
)
lc_alg.run() # entry point to the LC algorithm
lc_alg.count_params()
compressed_model_bits = lc_alg.count_param_bits() + (300+100+10)*32
uncompressed_model_bits = (784*300+300*100+100*10 + 300 + 100 + 10)*32
compression_ratio = uncompressed_model_bits/compressed_model_bits
print(compression_ratio)
###Output
_____no_output_____
###Markdown
Note that we were pruning 95% of the weights. Naively, you would expect a 20x compression ratio (100%/5%); however, this is not the case. Firstly, there are some uncompressed parts (in this case, the biases), and secondly, storing a compressed model requires additional metadata (in this case, the positions of the non-zero elements). Therefore we get only about a 16x compression ratio (vs. the naively expected 20x). To avoid computing the compression ratio by hand every time, let us create a function below. Note that this function is model-specific.
###Code
def compute_compression_ratio(lc_alg):
compressed_model_bits = lc_alg.count_param_bits() + (300+100+10)*32
uncompressed_model_bits = (784*300+300*100+100*10 + 300 + 100 + 10)*32
compression_ratio = uncompressed_model_bits/compressed_model_bits
return compression_ratio
###Output
_____no_output_____
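###Markdown
As a rough back-of-the-envelope check of the argument above: the sketch below assumes 32-bit weight values and tries two plausible bit widths for storing the indices of the non-zero entries. These bit widths are assumptions made for illustration; the ratio the notebook actually reports comes from the toolkit's own `count_param_bits()` accounting. Either way, the index metadata and the uncompressed biases push the ratio below the naive 20x.
###Code
kappa = 13310                      # non-zero weights kept by ConstraintL0Pruning
bias_params = 300 + 100 + 10       # biases are stored uncompressed
total_weights = 784*300 + 300*100 + 100*10
uncompressed_bits = (total_weights + bias_params) * 32
for index_bits in (32, 18):        # raw int32 indices vs. ~log2(784*300) bits per index
    compressed_bits = kappa * (32 + index_bits) + bias_params * 32
    print(index_bits, round(uncompressed_bits / compressed_bits, 1))
###Output
_____no_output_____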
###Markdown
QuantizationNow let us quantize each layer with its own codebook
###Code
net = load_reference_lenet300()
layers = [lambda x=x: getattr(x, 'weight') for x in net.modules() if isinstance(x, nn.Linear)]
compression_tasks = {
Param(layers[0], device): (AsVector, AdaptiveQuantization(k=2), 'layer0_quant'),
Param(layers[1], device): (AsVector, AdaptiveQuantization(k=2), 'layer1_quant'),
Param(layers[2], device): (AsVector, AdaptiveQuantization(k=2), 'layer2_quant')
}
lc_alg = lc.Algorithm(
model=net, # model to compress
compression_tasks=compression_tasks, # specifications of compression
l_step_optimization=my_l_step, # implementation of L-step
mu_schedule=mu_s, # schedule of mu values
evaluation_func=train_test_acc_eval_f # evaluation function
)
lc_alg.run()
print('Compressed_params:', lc_alg.count_params())
print('Compression_ratio:', compute_compression_ratio(lc_alg))
###Output
_____no_output_____
###Markdown
Mixing pruning, low rank, and quantization
###Code
net = load_reference_lenet300()
layers = [lambda x=x: getattr(x, 'weight') for x in net.modules() if isinstance(x, nn.Linear)]
compression_tasks = {
Param(layers[0], device): (AsVector, ConstraintL0Pruning(kappa=5000), 'pruning'),
Param(layers[1], device): (AsIs, LowRank(target_rank=9, conv_scheme=None), 'low-rank'),
Param(layers[2], device): (AsVector, AdaptiveQuantization(k=2), 'quant')
}
lc_alg = lc.Algorithm(
model=net, # model to compress
compression_tasks=compression_tasks, # specifications of compression
l_step_optimization=my_l_step, # implementation of L-step
mu_schedule=mu_s, # schedule of mu values
evaluation_func=train_test_acc_eval_f # evaluation function
)
lc_alg.run()
print('Compressed_params:', lc_alg.count_params())
print('Compression_ratio:', compute_compression_ratio(lc_alg))
###Output
_____no_output_____
###Markdown
Additive combination of Quantization and Pruning
###Code
net = load_reference_lenet300()
layers = [lambda x=x: getattr(x, 'weight') for x in net.modules() if isinstance(x, nn.Linear)]
compression_tasks = {
Param(layers, device): [
(AsVector, ConstraintL0Pruning(kappa=2662), 'pruning'),
(AsVector, AdaptiveQuantization(k=2), 'quant')
]
}
lc_alg = lc.Algorithm(
model=net, # model to compress
compression_tasks=compression_tasks, # specifications of compression
l_step_optimization=my_l_step, # implementation of L-step
mu_schedule=mu_s, # schedule of mu values
evaluation_func=train_test_acc_eval_f # evaluation function
)
lc_alg.run()
print('Compressed_params:', lc_alg.count_params())
print('Compression_ratio:', compute_compression_ratio(lc_alg))
###Output
_____no_output_____
###Markdown
Low-rank compression with automatic rank selection
###Code
net = load_reference_lenet300()
layers = [lambda x=x: getattr(x, 'weight') for x in net.modules() if isinstance(x, nn.Linear)]
alpha=1e-9
compression_tasks = {
Param(layers[0], device): (AsIs, RankSelection(conv_scheme='scheme_1', alpha=alpha, criterion='storage', module=layers[0], normalize=True), "layer1_lr"),
Param(layers[1], device): (AsIs, RankSelection(conv_scheme='scheme_1', alpha=alpha, criterion='storage', module=layers[1], normalize=True), "layer2_lr"),
Param(layers[2], device): (AsIs, RankSelection(conv_scheme='scheme_1', alpha=alpha, criterion='storage', module=layers[2], normalize=True), "layer3_lr")
}
lc_alg = lc.Algorithm(
model=net, # model to compress
compression_tasks=compression_tasks, # specifications of compression
l_step_optimization=my_l_step, # implementation of L-step
mu_schedule=mu_s, # schedule of mu values
evaluation_func=train_test_acc_eval_f # evaluation function
)
lc_alg.run()
print('Compressed_params:', lc_alg.count_params())
print('Compression_ratio:', compute_compression_ratio(lc_alg))
###Output
_____no_output_____
###Markdown
ScaledTernaryQuantization
###Code
from lc.compression_types import ScaledTernaryQuantization
net = load_reference_lenet300()
layers = [lambda x=x: getattr(x, 'weight') for x in net.modules() if isinstance(x, nn.Linear)]
compression_tasks = {
Param(layers[0], device): (AsVector, ScaledTernaryQuantization(), 'layer0_quant'),
Param(layers[1], device): (AsVector, ScaledTernaryQuantization(), 'layer1_quant'),
Param(layers[2], device): (AsVector, ScaledTernaryQuantization(), 'layer2_quant')
}
lc_alg = lc.Algorithm(
model=net, # model to compress
compression_tasks=compression_tasks, # specifications of compression
l_step_optimization=my_l_step, # implementation of L-step
mu_schedule=mu_s, # schedule of mu values
evaluation_func=train_test_acc_eval_f # evaluation function
)
lc_alg.run()
print('Compressed_params:', lc_alg.count_params())
print('Compression_ratio:', compute_compression_ratio(lc_alg))
###Output
_____no_output_____
###Markdown
ScaledBinaryQuantization
###Code
from lc.compression_types import ScaledBinaryQuantization
net = load_reference_lenet300()
layers = [lambda x=x: getattr(x, 'weight') for x in net.modules() if isinstance(x, nn.Linear)]
compression_tasks = {
Param(layers[0], device): (AsVector, ScaledBinaryQuantization(), 'layer0_quant'),
Param(layers[1], device): (AsVector, ScaledBinaryQuantization(), 'layer1_quant'),
Param(layers[2], device): (AsVector, ScaledBinaryQuantization(), 'layer2_quant')
}
lc_alg = lc.Algorithm(
model=net, # model to compress
compression_tasks=compression_tasks, # specifications of compression
l_step_optimization=my_l_step, # implementation of L-step
mu_schedule=mu_s, # schedule of mu values
evaluation_func=train_test_acc_eval_f # evaluation function
)
lc_alg.run()
print('Compressed_params:', lc_alg.count_params())
print('Compression_ratio:', compute_compression_ratio(lc_alg))
###Output
_____no_output_____ |
week01_intro/w1_seminar_gym_interface.ipynb | ###Markdown
OpenAI GymWe're gonna spend the next several weeks learning algorithms that solve decision processes. We are then in need of some interesting decision problems to test our algorithms on. That's where OpenAI Gym comes into play. It's a Python library that wraps many classical decision problems, including robot control, videogames and board games. So here's how it works:
###Code
import gym
env = gym.make("MountainCar-v0")
env.reset()
plt.imshow(env.render('rgb_array'))
print("Observation space:", env.observation_space)
print("Action space:", env.action_space)
###Output
Observation space: Box(2,)
Action space: Discrete(3)
###Markdown
Note: if you're running this on your local machine, you'll see a window pop up with the image above. Don't close it, just alt-tab away. Gym interfaceThe three main methods of an environment are:
* `reset()`: reset environment to the initial state, _return first observation_
* `render()`: show current environment state (a more colorful version :) )
* `step(a)`: commit action `a` and return `(new_observation, reward, is_done, info)`
  * `new_observation`: an observation right after committing the action `a`
  * `reward`: a number representing your reward for committing action `a`
  * `is_done`: True if the MDP has just finished, False if still in progress
  * `info`: some auxiliary stuff about what just happened. For now, ignore it.
###Code
obs0 = env.reset()
print("initial observation code:", obs0)
# Note: in MountainCar, observation is just two numbers: car position and velocity
print("taking action 2 (right)")
new_obs, reward, is_done, _ = env.step(2)
print("new observation code:", new_obs)
print("reward:", reward)
print("is game over?:", is_done)
# Note: as you can see, the car has moved to the right slightly (around 0.0005)
###Output
taking action 2 (right)
new observation code: [-0.5109678 0.00091213]
reward: -1.0
is game over?: False
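###Markdown
A compact illustration of the same interface (a sketch, using the classic 4-tuple gym step API shown above): run a short episode with random actions and accumulate the reward.
###Code
obs = env.reset()
total_reward, done, t = 0.0, False, 0
while not done and t < 50:
    # sample a random action from the discrete action space
    obs, reward, done, info = env.step(env.action_space.sample())
    total_reward += reward
    t += 1
print("steps:", t, "total reward:", total_reward, "done:", done)
###Output
_____no_output_____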
###Markdown
Play with itBelow is the code that drives the car to the right. However, if you simply use the default policy, the car will not reach the flag at the far right due to gravity. __Your task__ is to fix it. Find a strategy that reaches the flag. You are not required to build any sophisticated algorithms for now, and you definitely don't need to know any reinforcement learning for this. Feel free to hard-code :)
###Code
from IPython import display
# Create env manually to set time limit. Please don't change this.
TIME_LIMIT = 250
env = gym.wrappers.TimeLimit(
gym.envs.classic_control.MountainCarEnv(),
max_episode_steps=TIME_LIMIT + 1,
)
actions = {'left': 0, 'stop': 1, 'right': 2}
def policy(obs, t):
# Write the code for your policy here. You can use the observation
# (a tuple of position and velocity), the current time step, or both,
# if you want.
position, velocity = obs
a = actions['right'] if velocity >= 0 else actions['left']
# This is an example policy. You can try running it, but it will not work.
# Your goal is to fix that. You don't need anything sophisticated here,
# and you can hard-code any policy that seems to work.
# Hint: think how you would make a swing go farther and faster.
return a
plt.figure(figsize=(4, 3))
#display.clear_output(wait=True)
obs = env.reset()
for t in range(TIME_LIMIT):
plt.gca().clear()
action = policy(obs, t) # Call your policy
    print(t, obs, action)
obs, reward, done, _ = env.step(action) # Pass the action chosen by the policy to the environment
# We don't do anything with reward here because MountainCar is a very simple environment,
# and reward is a constant -1. Therefore, your goal is to end the episode as quickly as possible.
# Draw game image on display.
plt.imshow(env.render('rgb_array'))
display.clear_output(wait=True)
display.display(plt.gcf())
if done:
print("Well done!")
break
else:
print("Time limit exceeded. Try again.")
#display.clear_output(wait=True)
assert obs[0] > 0.47
print("You solved it!")
###Output
_____no_output_____ |
Book Review/Book Review.ipynb | ###Markdown
Book Review per chapter - Reviewer: Guinsly - [Github Repo](https://github.com/guinslym/python_earth_science_book). This is the code repository and book review for the book titled "[Introduction to Python in Earth Science Data Analysis](https://www.springer.com/gp/book/9783030780548)". Chapter 1 provides a quick overview of Python's capabilities, compares it to other programming languages, and explains how to install Python with Anaconda (Continuum) and a minimal set of statistical packages so you can get the most out of this book. Chapter 2
###Code
# Deeper overview of Python going through different data types. The Alea of Python.
# High-level overview of Python functions and operators. Then going quickly over mathematical functions.
###Output
_____no_output_____ |
3LabelCNN_hist_over_brightness.ipynb | ###Markdown
Looking at Recall from Class of SNEs
###Code
y_pred_class_hosts = y_pred_labels[:len_each_test_class]
y_pred_class_ia = y_pred_labels[len_each_test_class:2*len_each_test_class]
y_pred_class_iip = y_pred_labels[2*len_each_test_class:]
y_pred_class_ia_predicted_hosts = y_pred_class_ia == 0
y_pred_class_ia_predicted_ia = y_pred_class_ia == 1
y_pred_class_ia_predicted_iip = y_pred_class_ia == 2
rfr_ia_test = rfr_ia[index_begin_test_set:index_end_test_set]
(counts_rfr_ia_total, bins_rfr_ia_total, patches_rfr_ia_total) = plt.hist(rfr_ia_test, label="Total counts in dataset", bins=20, histtype='step', color='black', alpha=0.5)
(counts_rfr_ia_phost, bins_rfr_ia_phost, patches_rfr_ia_phost) = plt.hist(rfr_ia_test[y_pred_class_ia_predicted_hosts], label="Predicted as host", bins=bins_rfr_ia_total, histtype='step', color='red')
(counts_rfr_ia_pia, bins_rfr_ia_pia, patches_rfr_ia_pia) = plt.hist(rfr_ia_test[y_pred_class_ia_predicted_ia], label="Predicted as IA", bins=bins_rfr_ia_total, histtype='step', color='green')
(counts_rfr_ia_piip, bins_rfr_ia_piip, patches_rfr_ia_piip) = plt.hist(rfr_ia_test[y_pred_class_ia_predicted_iip], label="Predicted as IIP", bins=bins_rfr_ia_total, histtype='step', color='blue')
plt.xlabel(r'$\rho$, SNE Flux Ratio', fontsize=12)
plt.ylabel('Counts', fontsize=12)
plt.title('Class of SNE IAs with Prediction Counts over Flux Ratio')
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
rmags_ia_test = rmags_ia[index_begin_test_set:index_end_test_set]
(counts_mags_ia_total, bins_mags_ia_total, patches_mags_ia_total) = plt.hist(rmags_ia_test, label="Total counts in dataset", bins=20, histtype='step', color='black', alpha=0.5)
(counts_mags_ia_phost, bins_mags_ia_phost, patches_mags_ia_phost) = plt.hist(rmags_ia_test[y_pred_class_ia_predicted_hosts], label="Predicted as host", bins=bins_mags_ia_total, histtype='step', color='red')
(counts_mags_ia_pia, bins_mags_ia_pia, patches_mags_ia_pia) = plt.hist(rmags_ia_test[y_pred_class_ia_predicted_ia], label="Predicted as IA", bins=bins_mags_ia_total, histtype='step', color='green')
(counts_mags_ia_piip, bins_mags_ia_piip, patches_mags_ia_piip) = plt.hist(rmags_ia_test[y_pred_class_ia_predicted_iip], label="Predicted as IIP", bins=bins_mags_ia_total, histtype='step', color='blue')
plt.xlabel('Flux Magnitudes', fontsize=12)
plt.ylabel('Counts', fontsize=12)
plt.title('Class of SNE IAs with Prediction Counts over Flux Magnitudes')
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
y_pred_class_iip_predicted_hosts = y_pred_class_iip == 0
y_pred_class_iip_predicted_ia = y_pred_class_iip == 1
y_pred_class_iip_predicted_iip = y_pred_class_iip == 2
rfr_iip_test = rfr_iip[index_begin_test_set:index_end_test_set]
(counts_rfr_iip_total, bins_rfr_iip_total, patches_rfr_iip_total) = plt.hist(rfr_iip_test, label="Total counts in dataset", bins=20, histtype='step', color='black', alpha=0.5)
(counts_rfr_iip_phost, bins_rfr_iip_phost, patches_rfr_iip_phost) = plt.hist(rfr_iip_test[y_pred_class_iip_predicted_hosts], label="Predicted as host", bins=bins_rfr_iip_total, histtype='step', color='red')
(counts_rfr_iip_pia, bins_rfr_iip_pia, patches_rfr_iip_pia) = plt.hist(rfr_iip_test[y_pred_class_iip_predicted_ia], label="Predicted as IA", bins=bins_rfr_iip_total, histtype='step', color='blue')
(counts_rfr_iip_piip, bins_rfr_iip_piip, patches_iip_piip) = plt.hist(rfr_iip_test[y_pred_class_iip_predicted_iip], label="Predicted as IIP", bins=bins_rfr_iip_total, histtype='step', color='green')
plt.xlabel(r'$\rho$, SNE Flux Ratio', fontsize=12)
plt.ylabel('Counts', fontsize=12)
plt.title('Class of SNE IIPs with Prediction Counts over Flux Ratio')
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
rmags_iip_test = rmags_iip[index_begin_test_set:index_end_test_set]
(counts_mags_iip_total, bins_mags_iip_total, patches_mags_iip_total) = plt.hist(rmags_iip_test, label="Total counts in dataset", bins=20, histtype='step', color='black', alpha=0.5)
(counts_mags_iip_phost, bins_mags_iip_phost, patches_mags_iip_phost) = plt.hist(rmags_iip_test[y_pred_class_iip_predicted_hosts], label="Predicted as host", bins=bins_mags_iip_total, histtype='step', color='red')
(counts_mags_iip_pia, bins_mags_iip_pia, patches_mags_iip_pia) = plt.hist(rmags_iip_test[y_pred_class_iip_predicted_ia], label="Predicted as IA", bins=bins_mags_iip_total, histtype='step', color='blue')
(counts_mags_iip_piip, bins_mags_iip_piip, patches_mags_iip_piip) = plt.hist(rmags_iip_test[y_pred_class_iip_predicted_iip], label="Predicted as IIP", bins=bins_mags_iip_total, histtype='step', color='green')
plt.xlabel('Flux Magnitudes', fontsize=12)
plt.ylabel('Counts', fontsize=12)
plt.title('Class of SNE IIPs with Prediction Counts over Flux Magnitudes')
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
med_rfr_ia_bins = [(bins_rfr_ia_total[i]+bins_rfr_ia_total[i+1])/2 for i in range(len(bins_rfr_ia_total)-1)]
ia_efficiency_host_over_total = counts_rfr_ia_phost / counts_rfr_ia_total
ia_efficiency_ia_over_total = counts_rfr_ia_pia / counts_rfr_ia_total
ia_efficiency_iip_over_total = counts_rfr_ia_piip / counts_rfr_ia_total
plt.step(med_rfr_ia_bins, ia_efficiency_host_over_total, label="Predicted as Host", color="red")
plt.step(med_rfr_ia_bins, ia_efficiency_ia_over_total, label="Predicted as IA", color="green")
plt.step(med_rfr_ia_bins, ia_efficiency_iip_over_total, label="Predicted as IIP", color="blue")
plt.xlabel(r'$\rho$, SNE Flux Ratio', fontsize=12)
plt.ylabel('Fraction of Counts', fontsize=12)
plt.title("Recall of IA, with False Negatives, Over Flux Ratio Bins")
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
med_rfr_iip_bins = [(bins_rfr_iip_total[i]+bins_rfr_iip_total[i+1])/2 for i in range(len(bins_rfr_iip_total)-1)]
iip_efficiency_host_over_total = counts_rfr_iip_phost / counts_rfr_iip_total
iip_efficiency_ia_over_total = counts_rfr_iip_pia / counts_rfr_iip_total
iip_efficiency_iip_over_total = counts_rfr_iip_piip / counts_rfr_iip_total
plt.step(med_rfr_iip_bins, iip_efficiency_host_over_total, label="Predicted as Host", color="red")
plt.step(med_rfr_iip_bins, iip_efficiency_ia_over_total, label="Predicted as IA", color="blue")
plt.step(med_rfr_iip_bins, iip_efficiency_iip_over_total, label="Predicted as IIP", color="green")
plt.xlabel(r'$\rho$, SNE Flux Ratio', fontsize=12)
plt.ylabel('Fraction of Counts', fontsize=12)
plt.title("Recall of IIP, with False Negatives, Over Flux Ratio Bins")
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
###Output
_____no_output_____
###Markdown
Looking at Precision from Predicted Classes
###Code
y_predicted_hosts = (y_pred_labels == 0)
y_predicted_hosts_class_hosts = y_predicted_hosts[:len_each_test_class]
y_predicted_hosts_class_ia = y_predicted_hosts[len_each_test_class:2*len_each_test_class]
y_predicted_hosts_class_iip = y_predicted_hosts[2*len_each_test_class:]
rmags_predicted_hosts_class_hosts = rmags_hosts[index_begin_test_set:index_end_test_set][y_predicted_hosts_class_hosts.astype(bool)]
rmags_predicted_hosts_class_ia = rmags_ia[index_begin_test_set:index_end_test_set][y_predicted_hosts_class_ia.astype(bool)]
rmags_predicted_hosts_class_iip = rmags_iip[index_begin_test_set:index_end_test_set][y_predicted_hosts_class_iip.astype(bool)]
(counts_mags_pred_hosts_total, bins_mags_pred_hosts_total, patches_mags_pred_hosts_total) = \
plt.hist(np.concatenate([rmags_predicted_hosts_class_hosts,
rmags_predicted_hosts_class_ia,
rmags_predicted_hosts_class_iip]),
histtype='step', color="black", alpha=0.5, bins=20, label="Total Counts")
(counts_mags_pred_hosts_class_hosts, bins_mags_pred_hosts_class_hosts, patches_mags_pred_hosts_class_hosts) =\
plt.hist(rmags_predicted_hosts_class_hosts, histtype='step', color="green",
bins=bins_mags_pred_hosts_total, label="Actual Class: Hosts")
(counts_mags_pred_hosts_class_ia, bins_mags_pred_hosts_class_ia, patches_mags_pred_hosts_class_ia) =\
plt.hist(rmags_predicted_hosts_class_ia, histtype='step', color="red",
bins=bins_mags_pred_hosts_total, label="Actual Class: SNE IA")
(counts_mags_pred_hosts_class_iip, bins_mags_pred_hosts_class_iip, patches_mags_pred_hosts_class_iip) =\
plt.hist(rmags_predicted_hosts_class_iip, histtype='step', color="blue",
bins=bins_mags_pred_hosts_total, label="Actual Class: SNE IIP")
plt.title("Predicted Hosts with Class Counts over Magnitudes")
plt.ylabel("Counts")
plt.xlabel("Flux Magnitudes")
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
med_rmags_pred_host_bins = [(bins_mags_pred_hosts_total[i]+bins_mags_pred_hosts_total[i+1])/2 for i in range(len(bins_mags_pred_hosts_total)-1)]
# There's a zero in one of the bins...
pred_host_class_host_efficiency_over_total = counts_mags_pred_hosts_class_hosts / counts_mags_pred_hosts_total
pred_host_class_ia_efficiency_over_total = counts_mags_pred_hosts_class_ia / counts_mags_pred_hosts_total
pred_host_class_iip_efficiency_over_total = counts_mags_pred_hosts_class_iip / counts_mags_pred_hosts_total
plt.step(med_rmags_pred_host_bins, pred_host_class_host_efficiency_over_total, label="Actual: Host", color="green")
plt.step(med_rmags_pred_host_bins, pred_host_class_ia_efficiency_over_total, label="Actual: IA", color="red")
plt.step(med_rmags_pred_host_bins, pred_host_class_iip_efficiency_over_total, label="Actual: IIP", color="blue")
plt.xlabel('Flux Magnitudes', fontsize=12)
plt.ylabel('Fraction of Counts', fontsize=12)
plt.title("Precision of Hosts, with False Positives, Over Magnitude Bins")
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
y_predicted_ia = (y_pred_labels == 1)
y_predicted_ia_class_hosts = y_predicted_ia[:len_each_test_class]
y_predicted_ia_class_ia = y_predicted_ia[len_each_test_class:2*len_each_test_class]
y_predicted_ia_class_iip = y_predicted_ia[2*len_each_test_class:]
rmags_predicted_ia_class_hosts = rmags_hosts[index_begin_test_set:index_end_test_set][y_predicted_ia_class_hosts.astype(bool)]
rmags_predicted_ia_class_ia = rmags_ia[index_begin_test_set:index_end_test_set][y_predicted_ia_class_ia.astype(bool)]
rmags_predicted_ia_class_iip = rmags_iip[index_begin_test_set:index_end_test_set][y_predicted_ia_class_iip.astype(bool)]
(counts_mags_pred_ia_total, bins_mags_pred_ia_total, patches_mags_pred_ia_total) = \
plt.hist(np.concatenate([rmags_predicted_ia_class_hosts,
rmags_predicted_ia_class_ia,
rmags_predicted_ia_class_iip]),
histtype='step', color="black", bins=20, alpha=0.5, label="Total Counts")
(counts_mags_pred_ia_class_hosts, bins_mags_pred_ia_class_hosts, patches_mags_pred_ia_class_hosts) = \
plt.hist(rmags_predicted_ia_class_hosts, histtype='step', color="red",
bins=bins_mags_pred_ia_total, label="Actual Class: Hosts")
(counts_mags_pred_ia_class_ia, bins_mags_pred_ia_class_ia, patches_mags_pred_ia_class_ia) = \
plt.hist(rmags_predicted_ia_class_ia, histtype='step', color="green",
bins=bins_mags_pred_ia_total, label="Actual Class: SNE IA")
(counts_mags_pred_ia_class_iip, bins_mags_pred_ia_class_iip, patches_mags_pred_ia_class_iip) = \
plt.hist(rmags_predicted_ia_class_iip, histtype='step', color="blue",
bins=bins_mags_pred_ia_total, label="Actual Class: SNE IIP")
plt.title("Predicted SNE IAs with Class Counts over Magnitudes")
plt.ylabel("Counts")
plt.xlabel("Flux Magnitudes")
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
med_rmags_pred_ia_bins = [(bins_mags_pred_ia_total[i]+bins_mags_pred_ia_total[i+1])/2 for i in range(len(bins_mags_pred_ia_total)-1)]
pred_ia_class_host_efficiency_over_total = counts_mags_pred_ia_class_hosts / counts_mags_pred_ia_total
pred_ia_class_ia_efficiency_over_total = counts_mags_pred_ia_class_ia / counts_mags_pred_ia_total
pred_ia_class_iip_efficiency_over_total = counts_mags_pred_ia_class_iip / counts_mags_pred_ia_total
plt.step(med_rmags_pred_ia_bins, pred_ia_class_host_efficiency_over_total, label="Actual: Host", color="red")
plt.step(med_rmags_pred_ia_bins, pred_ia_class_ia_efficiency_over_total, label="Actual: IA", color="green")
plt.step(med_rmags_pred_ia_bins, pred_ia_class_iip_efficiency_over_total, label="Actual: IIP", color="blue")
plt.xlabel('Flux Magnitudes', fontsize=12)
plt.ylabel('Fraction of Counts', fontsize=12)
plt.title("Precision of Predicted IA, with False Positives, Over Magnitude Bins")
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
y_predicted_iip = (y_pred_labels == 2)
y_predicted_iip_class_hosts = y_predicted_iip[:len_each_test_class]
y_predicted_iip_class_ia = y_predicted_iip[len_each_test_class:2*len_each_test_class]
y_predicted_iip_class_iip = y_predicted_iip[2*len_each_test_class:]
rmags_predicted_iip_class_hosts = rmags_hosts[index_begin_test_set:index_end_test_set][y_predicted_iip_class_hosts.astype(bool)]
rmags_predicted_iip_class_ia = rmags_ia[index_begin_test_set:index_end_test_set][y_predicted_iip_class_ia.astype(bool)]
rmags_predicted_iip_class_iip = rmags_iip[index_begin_test_set:index_end_test_set][y_predicted_iip_class_iip.astype(bool)]
(counts_mags_pred_iip_total, bins_mags_pred_iip_total, patches_mags_pred_iip_total) = \
plt.hist(np.concatenate([rmags_predicted_iip_class_hosts,
rmags_predicted_iip_class_ia,
rmags_predicted_iip_class_iip]),
histtype='step', color="black", bins=20, alpha=0.5, label="Total Counts")
(counts_mags_pred_iip_class_hosts, bins_mags_pred_iip_class_hosts, patches_mags_pred_iip_class_hosts) = \
plt.hist(rmags_predicted_iip_class_hosts, histtype='step', color="red",
bins=bins_mags_pred_iip_total, label="Actual Class: Hosts")
(counts_mags_pred_iip_class_ia, bins_mags_pred_iip_class_ia, patches_mags_pred_iip_class_ia) = \
plt.hist(rmags_predicted_iip_class_ia, histtype='step', color="blue",
bins=bins_mags_pred_iip_total, label="Actual Class: SNE IA")
(counts_mags_pred_iip_class_iip, bins_mags_pred_iip_class_iip, patches_mags_pred_iip_class_iip) = \
plt.hist(rmags_predicted_iip_class_iip, histtype='step', color="green",
bins=bins_mags_pred_iip_total, label="Actual Class: SNE IIP")
plt.title("Predicted SNE IIPs with Class Counts over Magnitudes")
plt.ylabel("Counts")
plt.xlabel("Flux Magnitudes")
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
med_rmags_pred_iip_bins = [(bins_mags_pred_iip_total[i]+bins_mags_pred_iip_total[i+1])/2 for i in range(len(bins_mags_pred_iip_total)-1)]
pred_iip_class_host_efficiency_over_total = counts_mags_pred_iip_class_hosts / counts_mags_pred_iip_total
pred_iip_class_ia_efficiency_over_total = counts_mags_pred_iip_class_ia / counts_mags_pred_iip_total
pred_iip_class_iip_efficiency_over_total = counts_mags_pred_iip_class_iip / counts_mags_pred_iip_total
plt.step(med_rmags_pred_iip_bins, pred_iip_class_host_efficiency_over_total, label="Actual: Host", color="red")
plt.step(med_rmags_pred_iip_bins, pred_iip_class_ia_efficiency_over_total, label="Actual: IA", color="blue")
plt.step(med_rmags_pred_iip_bins, pred_iip_class_iip_efficiency_over_total, label="Actual: IIP", color="green")
plt.xlabel('Flux Magnitudes', fontsize=12)
plt.ylabel('Fraction of Counts', fontsize=12)
plt.title("Precision of Predicted IIP, with False Positives, Over Magnitude Bins")
plt.legend(bbox_to_anchor=(1,0.5), loc="lower left")
plt.show()
###Output
_____no_output_____ |
Day1/3_sentiment/Sentiment Analysis - Unsupervised Lexical.ipynb | ###Markdown
Emotion and Sentiment AnalysisSentiment analysis is perhaps one of the most popular applications of NLP, with a vast number of tutorials, courses, and applications that focus on analyzing sentiments of diverse datasets ranging from corporate surveys to movie reviews. The key aspect of sentiment analysis is to analyze a body of text to understand the opinion expressed by it. Typically, we quantify this sentiment with a positive or negative value, called polarity. The overall sentiment is often inferred as positive, neutral or negative from the sign of the polarity score. Usually, sentiment analysis works better on text that has a subjective context than on text with only an objective context. Objective text usually depicts normal statements or facts without expressing any emotion, feelings, or mood. Subjective text contains text that is usually expressed by a human having typical moods, emotions, and feelings. Sentiment analysis is widely used, especially as a part of social media analysis for any domain, be it a business, a recent movie, or a product launch, to understand its reception by the people and what they think of it based on their opinions or, you guessed it, sentiment! Typically, sentiment analysis for text data can be computed on several levels, including on an individual sentence level, paragraph level, or the entire document as a whole. Often, sentiment is computed on the document as a whole, or some aggregation is done after computing the sentiment for individual sentences. There are two major approaches to sentiment analysis:
- Supervised machine learning or deep learning approaches
- Unsupervised lexicon-based approaches

For the first approach we typically need pre-labeled data. Hence, we will be focusing on the second approach. For a comprehensive coverage of sentiment analysis, refer to Chapter 7: Analyzing Movie Reviews Sentiment, Practical Machine Learning with Python, Springer\Apress, 2018. In this scenario, we do not have the convenience of a well-labeled training dataset. Hence, we will need to use unsupervised techniques for predicting the sentiment by using knowledgebases, ontologies, databases, and lexicons that have detailed information, specially curated and prepared just for sentiment analysis. A lexicon is a dictionary, vocabulary, or a book of words. In our case, lexicons are special dictionaries or vocabularies that have been created for analyzing sentiments. Most of these lexicons have a list of positive and negative polar words with some score associated with them, and using various techniques like the position of words, surrounding words, context, parts of speech, phrases, and so on, scores are assigned to the text documents for which we want to compute the sentiment. After aggregating these scores, we get the final sentiment. Various popular lexicons are used for sentiment analysis, including the following:
- AFINN lexicon
- Bing Liu's lexicon
- MPQA subjectivity lexicon
- SentiWordNet
- VADER lexicon
- TextBlob lexicon

This is not an exhaustive list of lexicons that can be leveraged for sentiment analysis, and there are several other lexicons which can be easily obtained from the Internet. Feel free to check out each of these links and explore them. We will be covering three of these techniques in this section. Some Pre-Processing Import necessary dependencies
###Code
import pandas as pd
import numpy as np
import text_normalizer as tn
import model_evaluation_utils as meu
np.set_printoptions(precision=2, linewidth=80)
###Output
_____no_output_____
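###Markdown
To make the lexicon idea described above concrete, here is a toy, entirely hypothetical illustration of lexicon-based scoring: look each token up in a tiny hand-made polarity dictionary, sum the scores, and map the sign of the aggregate to a label. Real lexicons such as AFINN or VADER are far larger and also handle negation, intensifiers, punctuation, and emoticons.
###Code
toy_lexicon = {'good': 2, 'great': 3, 'fun': 2, 'bad': -2, 'awful': -3, 'boring': -2}
def toy_lexicon_sentiment(text):
    tokens = text.lower().split()
    score = sum(toy_lexicon.get(tok, 0) for tok in tokens)
    return ('positive' if score >= 0 else 'negative'), score
print(toy_lexicon_sentiment('the movie was great fun'))       # positive
print(toy_lexicon_sentiment('an awful and boring sequel'))    # negative
###Output
_____no_output_____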
###Markdown
Load and normalize data
1. Cleaning text - strip HTML
2. Removing accented characters
3. Expanding contractions
4. Removing special characters
5. Lemmatizing text
6. Removing stopwords
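###Markdown
For illustration, here is a minimal sketch of what a few of the normalization steps listed above could look like. The notebook's own `text_normalizer` module (imported as `tn`) is the actual implementation used here, so the helper names below are assumptions; contraction expansion, lemmatization, and stopword removal are omitted since they typically rely on extra resources such as spaCy or NLTK.
###Code
import re
import unicodedata
def strip_html_tags_sketch(text):
    return re.sub(r'<[^>]+>', '', text)          # drop HTML markup
def remove_accented_chars_sketch(text):
    return (unicodedata.normalize('NFKD', text)
            .encode('ascii', 'ignore').decode('utf-8', 'ignore'))
def remove_special_characters_sketch(text):
    return re.sub(r'[^a-zA-Z0-9\s]', '', text)   # keep alphanumerics and whitespace
sample = '<b>Café movie-night!</b> It was great...'
print(remove_special_characters_sketch(
    remove_accented_chars_sketch(strip_html_tags_sketch(sample))))
###Output
_____no_output_____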
###Code
dataset = pd.read_csv(r'movie_reviews_cleaned.csv')
reviews = np.array(dataset['review'])
sentiments = np.array(dataset['sentiment'])
# extract data for model evaluation
train_reviews = reviews[:35000]
train_sentiments = sentiments[:35000]
test_reviews = reviews[35000:]
test_sentiments = sentiments[35000:]
sample_review_ids = [7626, 3533, 13010]
# SKIP FOR THE STUDENTS BECAUSE INSTRUCTOR HAS PRE_NORMALIZED AND SAVED THE FILE
# normalize dataset (time consuming using spacey pipeline)
"""
norm_test_reviews = tn.normalize_corpus(test_reviews)
norm_train_reviews = tn.normalize_corpus(train_reviews)
#output back to a csv file again
import csv
with open(r'movie_reviews_cleaned.csv', mode='w') as cleaned_file:
csv_writer = csv.writer(cleaned_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
csv_writer.writerow(['review', 'sentiment'])
for text, sent in zip(norm_test_reviews, test_sentiments):
csv_writer.writerow([text, sent])
for text, sent in zip(norm_train_reviews, train_sentiments):
csv_writer.writerow([text, sent])
"""
###Output
_____no_output_____
###Markdown
============================================ Part A. Unsupervised (Lexicon) Sentiment Analysis ============================================ 1. Sentiment Analysis with AFINNThe AFINN lexicon is perhaps one of the simplest and most popular lexicons that can be used extensively for sentiment analysis. Developed and curated by Finn Arup Nielsen, you can find more details on this lexicon in the paper, “A new ANEW: evaluation of a word list for sentiment analysis in microblogs”, proceedings of the ESWC 2011 Workshop. The current version of the lexicon is AFINN-en-165.txt and it contains over 3,300 words, each with an associated polarity score. You can find this lexicon at the author’s official GitHub repository along with previous versions of it, including AFINN-111. The author has also created a nice wrapper library on top of this in Python called afinn, which we will be using for our analysis.
###Code
from afinn import Afinn
afn = Afinn(emoticons=True)
# NOTE: to use afinn score, call the function afn.score("text you want the sentiment for")
# the lexicon will be used to compute summary of sentiment for the given text
###Output
_____no_output_____
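###Markdown
As a quick, hypothetical sanity check of the scorer: clearly positive wording should score above zero and clearly negative wording below zero. The exact values depend on the lexicon version, so they are not hard-coded here.
###Code
print(afn.score('This movie was really great, I loved it!'))   # expected > 0
print(afn.score('A boring, terrible waste of time.'))          # expected < 0
###Output
_____no_output_____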
###Markdown
Predict sentiment for sample reviewsWe can get a good idea of general sentiment for different sample.
###Code
for review, sentiment in zip(test_reviews[sample_review_ids], test_sentiments[sample_review_ids]):
print('REVIEW:', review)
print('Actual Sentiment:', sentiment)
print('Predicted Sentiment polarity:', afn.score(review))
print('-'*60)
###Output
REVIEW: word fail whenever want describe feeling movie sequel flaw sure start subspecie not execute well enough special effect glorify movie herd movie mass consumer care quantity quality cheap fun depth crap like blade not even deserve capital letter underworlddracula 2000dracula 3000 good movie munch popcorn drink couple coke make subspecie superior effort anyone claim vampire fanatic hand obvious vampire romanian story set transylvania scene film location convince atmosphere not base action pack chase expensive orchestral music radu source atmosphere vampire look like behave add breathtakingly gloomy castle dark passageway situate romania include typical vampiric element movement shadow wall vampire take flight work art short like fascinated vampire feel appearance well setting sinister dark no good place look subspecie movie vampire journal brilliant spin former
Actual Sentiment: positive
Predicted Sentiment polarity: 20.0
------------------------------------------------------------
REVIEW: good family movie laugh wish not much school stuff like bully fill movie also seem little easy save piece land build mean flow easily make aware wildlife cute way introduce piece land fast runner little slow little hokey remind go back school oh dvd chock full goody not miss 7 10 movie 10 10 dvd extra well worth watch well worth time see
Actual Sentiment: positive
Predicted Sentiment polarity: 12.0
------------------------------------------------------------
REVIEW: opinion movie not good hardly find good thing say still would like explain conclude another bad movie decide watch costas mandylor star main reason watch till end like action movie understand movie build action rather story know not go detail come credibility story event even not explain scene lack sense reality look ridiculous beginning movie look quite promising tough good look specialist not tough smart funny partner must job turn bit different expect story take place cruise ship disaster happen ship turn leave alive struggle survive escape shark professional killer rise water furthermore movie quite violent main weapon beside disaster already take passenger gun successfully use many case personally miss good man man woman woman prefer fight family fun not think think movie shoot hurry without real vision try say make usual action movie trick bit something call love without real meaning result bad movie
Actual Sentiment: negative
Predicted Sentiment polarity: 2.0
------------------------------------------------------------
###Markdown
Predict sentiment for test dataset
###Code
sentiment_polarity = [afn.score(review) for review in test_reviews]
predicted_sentiments = ['positive' if score >= 1.0 else 'negative' for score in sentiment_polarity]
###Output
_____no_output_____
###Markdown
Evaluate model performance
###Code
meu.display_model_performance_metrics(true_labels=test_sentiments, predicted_labels=predicted_sentiments,
classes=['positive', 'negative'])
###Output
Model Performance metrics:
------------------------------
Accuracy: 0.7054
Precision: 0.7212
Recall: 0.7054
F1 Score: 0.6993
Model Classification report:
------------------------------
precision recall f1-score support
positive 0.66 0.84 0.74 7587
negative 0.78 0.56 0.65 7413
micro avg 0.71 0.71 0.71 15000
macro avg 0.72 0.70 0.70 15000
weighted avg 0.72 0.71 0.70 15000
Prediction Confusion Matrix:
------------------------------
Predicted:
positive negative
Actual: positive 6405 1182
negative 3237 4176
###Markdown
2. Sentiment Analysis with SentiWordNetSentiWordNet is a lexical resource for opinion mining. SentiWordNet assigns to each synset of WordNet three sentiment scores: positivity, negativity, and objectivity. SentiWordNet is described in detail in its accompanying papers.
###Code
from nltk.corpus import sentiwordnet as swn
import nltk
nltk.download('sentiwordnet')
awesome = list(swn.senti_synsets('awesome', 'a'))[0]
print('Positive Polarity Score:', awesome.pos_score())
print('Negative Polarity Score:', awesome.neg_score())
print('Objective Score:', awesome.obj_score())
###Output
[nltk_data] Downloading package sentiwordnet to
[nltk_data] /home/anniee/nltk_data...
[nltk_data] Package sentiwordnet is already up-to-date!
Positive Polarity Score: 0.875
Negative Polarity Score: 0.125
Objective Score: 0.0
###Markdown
Build modelFor each word in the review, look up the lexicon entry matching its POS tag (NN, VB, JJ, or RB) and add up the positive, negative, and objective scores of the words that are found in the lexicon.
###Code
def analyze_sentiment_sentiwordnet_lexicon(review,
verbose=False):
# tokenize and POS tag text tokens
tagged_text = [(token.text, token.tag_) for token in tn.nlp(review)]
pos_score = neg_score = token_count = obj_score = 0
# get wordnet synsets based on POS tags
# get sentiment scores if synsets are found
for word, tag in tagged_text:
ss_set = None
if 'NN' in tag and list(swn.senti_synsets(word, 'n')):
ss_set = list(swn.senti_synsets(word, 'n'))[0]
elif 'VB' in tag and list(swn.senti_synsets(word, 'v')):
ss_set = list(swn.senti_synsets(word, 'v'))[0]
elif 'JJ' in tag and list(swn.senti_synsets(word, 'a')):
ss_set = list(swn.senti_synsets(word, 'a'))[0]
elif 'RB' in tag and list(swn.senti_synsets(word, 'r')):
ss_set = list(swn.senti_synsets(word, 'r'))[0]
# if senti-synset is found
if ss_set:
# add scores for all found synsets
pos_score += ss_set.pos_score()
neg_score += ss_set.neg_score()
obj_score += ss_set.obj_score()
token_count += 1
# aggregate final scores
final_score = pos_score - neg_score
norm_final_score = round(float(final_score) / token_count, 2)
final_sentiment = 'positive' if norm_final_score >= 0 else 'negative'
if verbose:
norm_obj_score = round(float(obj_score) / token_count, 2)
norm_pos_score = round(float(pos_score) / token_count, 2)
norm_neg_score = round(float(neg_score) / token_count, 2)
# to display results in a nice table
sentiment_frame = pd.DataFrame([[final_sentiment, norm_obj_score, norm_pos_score,
norm_neg_score, norm_final_score]],
columns=pd.MultiIndex(levels=[['SENTIMENT STATS:'],
['Predicted Sentiment', 'Objectivity',
'Positive', 'Negative', 'Overall']],
labels=[[0,0,0,0,0],[0,1,2,3,4]]))
print(sentiment_frame)
return final_sentiment
###Output
_____no_output_____
###Markdown
Predict sentiment for sample reviews
###Code
for review, sentiment in zip(test_reviews[sample_review_ids], test_sentiments[sample_review_ids]):
print('REVIEW:', review)
print('Actual Sentiment:', sentiment)
pred = analyze_sentiment_sentiwordnet_lexicon(review, verbose=True)
print('-'*60)
###Output
REVIEW: word fail whenever want describe feeling movie sequel flaw sure start subspecie not execute well enough special effect glorify movie herd movie mass consumer care quantity quality cheap fun depth crap like blade not even deserve capital letter underworlddracula 2000dracula 3000 good movie munch popcorn drink couple coke make subspecie superior effort anyone claim vampire fanatic hand obvious vampire romanian story set transylvania scene film location convince atmosphere not base action pack chase expensive orchestral music radu source atmosphere vampire look like behave add breathtakingly gloomy castle dark passageway situate romania include typical vampiric element movement shadow wall vampire take flight work art short like fascinated vampire feel appearance well setting sinister dark no good place look subspecie movie vampire journal brilliant spin former
Actual Sentiment: positive
SENTIMENT STATS:
Predicted Sentiment Objectivity Positive Negative Overall
0 positive 0.84 0.09 0.06 0.03
------------------------------------------------------------
REVIEW: good family movie laugh wish not much school stuff like bully fill movie also seem little easy save piece land build mean flow easily make aware wildlife cute way introduce piece land fast runner little slow little hokey remind go back school oh dvd chock full goody not miss 7 10 movie 10 10 dvd extra well worth watch well worth time see
Actual Sentiment: positive
SENTIMENT STATS:
Predicted Sentiment Objectivity Positive Negative Overall
0 positive 0.85 0.08 0.06 0.02
------------------------------------------------------------
REVIEW: opinion movie not good hardly find good thing say still would like explain conclude another bad movie decide watch costas mandylor star main reason watch till end like action movie understand movie build action rather story know not go detail come credibility story event even not explain scene lack sense reality look ridiculous beginning movie look quite promising tough good look specialist not tough smart funny partner must job turn bit different expect story take place cruise ship disaster happen ship turn leave alive struggle survive escape shark professional killer rise water furthermore movie quite violent main weapon beside disaster already take passenger gun successfully use many case personally miss good man man woman woman prefer fight family fun not think think movie shoot hurry without real vision try say make usual action movie trick bit something call love without real meaning result bad movie
Actual Sentiment: negative
SENTIMENT STATS:
Predicted Sentiment Objectivity Positive Negative Overall
0 positive 0.82 0.09 0.09 -0.0
------------------------------------------------------------
###Markdown
Predict sentiment for test dataset
###Code
predicted_sentiments = [analyze_sentiment_sentiwordnet_lexicon(review, verbose=False) for review in test_reviews]
###Output
_____no_output_____
###Markdown
Evaluate model performance
###Code
meu.display_model_performance_metrics(true_labels=test_sentiments, predicted_labels=predicted_sentiments,
classes=['positive', 'negative'])
###Output
Model Performance metrics:
------------------------------
Accuracy: 0.4981
Precision: 0.4971
Recall: 0.4981
F1 Score: 0.4944
Model Classification report:
------------------------------
precision recall f1-score support
positive 0.50 0.58 0.54 7587
negative 0.49 0.41 0.45 7413
micro avg 0.50 0.50 0.50 15000
macro avg 0.50 0.50 0.49 15000
weighted avg 0.50 0.50 0.49 15000
Prediction Confusion Matrix:
------------------------------
Predicted:
positive negative
Actual: positive 4420 3167
negative 4362 3051
###Markdown
3. Sentiment Analysis with VADER
###Code
from nltk.sentiment.vader import SentimentIntensityAnalyzer
###Output
/home/anniee/.local/lib/python3.6/site-packages/nltk/twitter/__init__.py:20: UserWarning: The twython library has not been installed. Some functionality from the twitter package will not be available.
warnings.warn("The twython library has not been installed. "
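###Markdown
As a quick, hypothetical sanity check of the analyzer before wrapping it in a helper function: `polarity_scores` returns a dict with 'neg', 'neu' and 'pos' proportions plus a normalized 'compound' score in [-1, 1]. Exact values depend on the lexicon version.
###Code
analyzer = SentimentIntensityAnalyzer()
print(analyzer.polarity_scores('The movie was absolutely wonderful!'))
print(analyzer.polarity_scores('The plot was a complete mess.'))
###Output
_____no_output_____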
###Markdown
Build model
###Code
def analyze_sentiment_vader_lexicon(review,
threshold=0.1,
verbose=False):
# pre-process text
review = tn.strip_html_tags(review)
review = tn.remove_accented_chars(review)
review = tn.expand_contractions(review)
# analyze the sentiment for review
analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores(review)
# get aggregate scores and final sentiment
agg_score = scores['compound']
final_sentiment = 'positive' if agg_score >= threshold\
else 'negative'
if verbose:
# display detailed sentiment statistics
positive = str(round(scores['pos'], 2)*100)+'%'
final = round(agg_score, 2)
negative = str(round(scores['neg'], 2)*100)+'%'
neutral = str(round(scores['neu'], 2)*100)+'%'
sentiment_frame = pd.DataFrame([[final_sentiment, final, positive,
negative, neutral]],
columns=pd.MultiIndex(levels=[['SENTIMENT STATS:'],
['Predicted Sentiment', 'Polarity Score',
'Positive', 'Negative', 'Neutral']],
labels=[[0,0,0,0,0],[0,1,2,3,4]]))
print(sentiment_frame)
return final_sentiment
###Output
_____no_output_____
###Markdown
Predict sentiment for sample reviews
###Code
for review, sentiment in zip(test_reviews[sample_review_ids], test_sentiments[sample_review_ids]):
print('REVIEW:', review)
print('Actual Sentiment:', sentiment)
pred = analyze_sentiment_vader_lexicon(review, threshold=0.4, verbose=True)
print('-'*60)
###Output
REVIEW: word fail whenever want describe feeling movie sequel flaw sure start subspecie not execute well enough special effect glorify movie herd movie mass consumer care quantity quality cheap fun depth crap like blade not even deserve capital letter underworlddracula 2000dracula 3000 good movie munch popcorn drink couple coke make subspecie superior effort anyone claim vampire fanatic hand obvious vampire romanian story set transylvania scene film location convince atmosphere not base action pack chase expensive orchestral music radu source atmosphere vampire look like behave add breathtakingly gloomy castle dark passageway situate romania include typical vampiric element movement shadow wall vampire take flight work art short like fascinated vampire feel appearance well setting sinister dark no good place look subspecie movie vampire journal brilliant spin former
Actual Sentiment: positive
###Markdown
Predict sentiment for test dataset
###Code
predicted_sentiments = [analyze_sentiment_vader_lexicon(review, threshold=0.4, verbose=False) for review in test_reviews]
###Output
_____no_output_____
###Markdown
Evaluate model performance
###Code
meu.display_model_performance_metrics(true_labels=test_sentiments, predicted_labels=predicted_sentiments,
classes=['positive', 'negative'])
###Output
_____no_output_____ |
2-Homework/ES07/.src/HW-Week7-Matplotlib-Fall2021-Arefeen-Solution.ipynb | ###Markdown
Weekly HW on Matplotlib for Plotting in Python 1) Pandas and Matplotlib libraries **Exercise-1: Import all the libraries - numpy, pandas, and matplotlib so that we do not have to worry about importing the libraries later on in this assignment****(POINTS: 6)**
###Code
#GIVE YOUR ANSWER FOR EXERCISE-1 IN THIS CELL
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
**Exercise-2: The file named 'Electric_Vehicle_Population_Data.csv' contains the database of electric vehicles registered and operated in different cities and states (primarily Washington State) of the United States. It also contains vehicle details like make, model, model year, vehicle type, electric range, base retail price, and other fields that are mostly self-explanatory.****(POINTS: 54 - each task in this exercise carries 9 points)** **Task-1:** Read the **Electric_Vehicle_Population_Data.csv** file and store it in a variable named **ev_pop**. After reading, display the first 10 rows of the dataframe **ev_pop** as the output.
###Code
#GIVE YOUR ANSWER FOR TASK-1 IN THIS CELL
ev_pop = pd.read_csv('Electric_Vehicle_Population_Data.csv')
ev_pop.head(10)
###Output
_____no_output_____
###Markdown
**Task-2:** Drop the columns 'Clean Alternative Fuel Vehicle (CAFV) Eligibility', 'Legislative District', 'DOL Vehicle ID', 'Vehicle Location' from the dataframe **ev_pop**. After dropping them, display the first 5 rows of the dataframe **ev_pop** as the output. [Hint: The drop() function with the 'columns' and 'inplace' arguments may be used]
###Code
#GIVE YOUR ANSWER FOR TASK-2 IN THIS CELL
cols_2b_dropped = ['Clean Alternative Fuel Vehicle (CAFV) Eligibility', 'Legislative District', 'DOL Vehicle ID',
'Vehicle Location']
ev_pop.drop(columns = cols_2b_dropped, inplace = True)
ev_pop.head()
###Output
_____no_output_____
###Markdown
Let's say we first want to see whether the EV purchase has generally shown an growth trend over the years. The strategy to visualize it is to have the 'Model Year' (in an ordered manner) in the x-axis and the value counts of that 'Model Year' in the y-axis as a scatter plot.**Task-3:** Extract the value counts for each 'Model Year' and save the pandas Series in the variable name 'model_year'
###Code
#GIVE YOUR ANSWER FOR TASK-3 IN THIS CELL
model_year = ev_pop['Model Year'].value_counts()
###Output
_____no_output_____
###Markdown
At this point, model_year is a pandas Series with the 'Model Year' values as its index and the counts as its values. If we explore the dataset, we shall see that the 'Base MSRP' column inexplicably contains a good number of values equal to zero and, in some cases, unusually high values greater than 100,000 (which can be disregarded as outliers).
###Code
#GIVE YOUR ANSWER FOR TASK-4 IN THIS CELL
plt.scatter(model_year.index, model_year)
plt.xlabel('Model Year')
plt.ylabel('No. of Vehicles')
plt.title('EV Population by Model Year')
###Output
_____no_output_____
###Markdown
Student Answer (Expected): The observation is the EV usage or purchase shown a general trend of exponential growth which might be slowed down in last couple of years due to COVID and other challenging situations. Let's work towards plotting a histogram for 'MRSP Base Price' value of the entire EV Population database with the following two tasks.**Task-5:** Clean up the ev_pop dataframe by selecting only the rows of the dataframe where the two conditions: a) the Base MSRP is greater than 0 b) the base MSRP is less than or equal to 100000are simultaneously met (**and** operation).Save the modified dataframe as **ev_pop_cleaned**
###Code
#GIVE YOUR ANSWER FOR TASK-5 IN THIS CELL
ev_pop_cleaned = ev_pop[(ev_pop['Base MSRP']>0) & (ev_pop['Base MSRP']<100000)]
###Output
_____no_output_____
###Markdown
**Task-6:** Plot the Histogram of the 'Base MSRP' series of the **ev_pop_cleaned** dataframe
###Code
#GIVE YOUR ANSWER FOR TASK-6 IN THIS CELL
plt.hist(ev_pop_cleaned['Base MSRP'])
plt.xlabel('Base MSRP')
plt.title('Distribution of Base MSRP')
###Output
_____no_output_____
###Markdown
**Exercise-3: In this exercise, we shall visualize the data from the manufacturer's perspective through the following four tasks.****(POINTS: 40 - each task in this exercise carries 10 points)** **Task-1:** Using the **ev_pop** dataframe, extract the top ten makers of electric vehicles in the dataset. Print the names of the makers and their corresponding value counts (no. of vehicles by the maker). [Hint: Use the **value_counts()** and **nlargest()** functions in tandem to extract the series of the top ten makers]
###Code
#GIVE YOUR ANSWER FOR TASK-1 IN THIS CELL
maker = ev_pop['Make'].value_counts().nlargest(10)
print(maker)
###Output
TESLA 25880
NISSAN 11220
CHEVROLET 8382
FORD 3671
TOYOTA 2832
BMW 2626
KIA 2262
AUDI 1113
VOLKSWAGEN 971
HYUNDAI 880
Name: Make, dtype: int64
###Markdown
**Task-2:** Make a bar plot for the top ten makers and their vehicle counts in the **ev_pop** database. Use all the proper plotting practices for labels and title. For better readability make sure the x labels are rotated 90 degrees. **Hint**: Use the plt.xticks() function with the rotation argument for the label rotation. Use pyplot.show() to avoid unwanted text above the bar graph.
###Code
#GIVE YOUR ANSWER FOR TASK-2 IN THIS CELL
plt.bar(maker.index, maker, color = 'teal')
plt.ylabel('No. of Vehicles')
plt.xticks(rotation = 90)
plt.show()
###Output
_____no_output_____
###Markdown
Now we want to observe the sales (revenue) trend of the top two manufacturers through the following tasks. **Task-3:** Extract two separate dataframes for the top two manufacturers where the 'Make' column value of **ev_pop** matches the manufacturer. Sort the two dataframes by ascending values of 'Model Year' and save the two sorted dataframes as **sorted_Tesla** and **sorted_Nissan**. Apply the **groupby()** and **sum()** functions together to group the sorted dataframes by 'Model Year' and get the sum of the numerical fields. This will produce two dataframes (name them **tesla** and **nissan** respectively) which provide the yearly sum of numerical columns like 'Electric Range' and 'Base MSRP'.
###Code
#GIVE YOUR ANSWER FOR TASK-3 IN THIS CELL
ev_Tesla = ev_pop[ev_pop.Make == 'TESLA']
sorted_Tesla = ev_Tesla.sort_values('Model Year')
ev_Nissan = ev_pop[ev_pop.Make == 'NISSAN']
sorted_Nissan = ev_Nissan.sort_values('Model Year')
tesla = sorted_Tesla.groupby('Model Year').sum()
nissan = sorted_Nissan.groupby('Model Year').sum()
###Output
_____no_output_____
###Markdown
**Task-4:** Plot an overlaid line plot of 'Model Year' vs. 'Revenue' (= summed-up Base MSRP) for each of the top two manufacturers. The plot must display an x-label, a y-label, a title, and a legend. **Note-1:** To increase the line width of the line plot, use the argument **`lw`**. For example: lw = 3. **Note-2:** To give your own label for each color that corresponds to one of the two manufacturers, use the argument **`label`**. For example: label = 'Tesla'.
###Code
#GIVE YOUR ANSWER FOR TASK-4 IN THIS CELL
plt.plot(tesla.index,tesla['Base MSRP'], color = 'black', label = 'Tesla')
plt.plot(nissan.index,nissan['Base MSRP'], color = 'orange', label = 'Nissan')
plt.xlabel('Year')
plt.ylabel('Maker\'s Revenue')
plt.title('Market trend of top two makers')
plt.legend()
###Output
_____no_output_____ |
notebooks/Train Cosmology Hyperparams.ipynb | ###Markdown
I'm gonna overwrite a lot of this notebook's old content. I changed the way I'm calculating wt, and wanna test that my training worked.
###Code
from pearce.emulator import OriginalRecipe, ExtraCrispy
from pearce.mocks import cat_dict
import numpy as np
from os import path
import matplotlib
#matplotlib.use('Agg')
from matplotlib import pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
training_file = '/u/ki/swmclau2/des/PearceRedMagicWpCosmo.hdf5'
em_method = 'gp'
split_method = 'random'
a = 1.0
z = 1.0/a - 1.0
fixed_params = {'z':z}#, 'r':0.18477483}
n_leaves, n_overlap = 100, 2
emu = ExtraCrispy(training_file, n_leaves, n_overlap, split_method, method = em_method, fixed_params=fixed_params,
custom_mean_function = None, downsample_factor = 0.2)
###Output
_____no_output_____
###Markdown
emu = OriginalRecipe(training_file, method = em_method, fixed_params=fixed_params, independent_variable=None,\ custom_mean_function = None)
###Code
emu._ordered_params
params = {'ombh2': 0.021,
'omch2': 0.11,
'w0': -1.01,
'ns': 0.9578462,
'ln10As': 3.08,
'H0': 68.1,
'Neff': 3.04,
'logM1': 14.0,
'logMmin': 11.9,
'f_c': 0.2,
'logM0': 13.2,
'sigma_logM': 0.12,
'alpha':1.1}
###Output
_____no_output_____
###Markdown
params = {'ombh2': 0.021, 'omch2': 0.12, 'w0': -1, 'ns': 0.9578462, 'ln10As': 3.08, 'H0': 68.1, 'Neff': 3.04} params = {'logM1': 14.0, 'logMmin': 11.9, 'f_c': 0.2, 'logM0': 13.2, 'sigma_logM': 0.12, 'alpha':1.1}
###Code
wp = emu.emulate_wrt_r(params, emu.scale_bin_centers)[0]
emu._x_mean, emu._x_std
emu.x.shape
plt.plot(emu.scale_bin_centers, wp)
plt.xscale('log')
plt.xlabel(r'$r$ [Mpc]')
plt.ylabel(r'$w_p(r_p)$')
plt.show()
###Output
_____no_output_____
###Markdown
params = {'ombh2': 0.021, 'omch2': 0.11, 'w0': -1, 'ns': 0.9578462, 'ln10As': 3.08, 'H0': 68.1, 'Neff': 3.04, 'logM1': 14.0, 'logMmin': 11.9, 'f_c': 0.2, 'logM0': 13.2, 'sigma_logM': 0.12, 'alpha':1.1}
###Code
param_name = 'logMmin'
param_bounds = emu.get_param_bounds(param_name)
pvals = np.linspace(param_bounds[0],param_bounds[1], 5)
for val in pvals:
params[param_name] = val
#print params
wp = emu.emulate_wrt_r(params, emu.scale_bin_centers)[0]
#print(wp)
plt.plot(emu.scale_bin_centers, wp, label = '%s = %.2f'%(param_name, val))
plt.plot(emu.scale_bin_centers, np.mean(emu._y_mean)*np.ones_like(emu.scale_bin_centers), color = 'k')
plt.xscale('log')
plt.xlabel(r'$r$ [Mpc]')
plt.ylabel(r'$w_p(r_p)$')
plt.show()
432/18
idx = 25
binlen = len(emu.scale_bin_centers)
params = {pname: p for pname, p in zip(emu.get_param_names(), emu._x_std[:-1]*emu.x[idx*binlen, :-1] + emu._x_mean[:-1])}
wp = emu.emulate_wrt_r(params,emu.scale_bin_centers)[0]
plt.plot(emu.scale_bin_centers, wp, label = 'Emu')
plt.plot(emu.scale_bin_centers, emu._y_std*emu.y[idx*binlen:(idx+1)*binlen]+emu._y_mean, label = 'Truth')
#plt.plot(emu.x[idx*binlen:(idx+1)*binlen, -1], lm_pred)
plt.xscale('log')
plt.xlabel(r'$r$ [Mpc]')
plt.ylabel(r'$w_p(r_p)$')
plt.legend(loc = 'best')
plt.show()
emu.y.shape
emu._y_mean
params['f_c'] = 0.1
params['r'] = emu.scale_bin_centers
t_list = [params[pname] for pname in emu._ordered_params if pname in params]
t_grid = np.meshgrid(*t_list)
t = np.stack(t_grid).T
t = t.reshape((-1, emu.emulator_ndim))
t-=emu._x_mean
t/=(emu._x_std + 1e-5)
for i in xrange(emu.y.shape[0]):
print gp.predict(emu.y[i], t, return_cov= False)
emu.mean_function(t)
emu._mean_func.named_steps['linearregression'].coef_
###Output
_____no_output_____ |
datasets/zoom_test/bound_crowded_regions.ipynb | ###Markdown
Objective: Investigate ways to bound regions with many crowded spots. These bounds will allow us to effectively "zoom in" on these regions and generate crops of these regions. - **Input:** Array of spot coordinates. - **Output:** Bounding boxes for regions with many crowded spots. Result: This approach may have potential: 1. Identify all crowded spots. Crowded spots are less than a crosshair arm length from the nearest neighbor. 2. Separate regions with many crowded spots. 3. Define a bounding box around each region with many crowded spots. Next Steps: Some questions (refer to the plot 'Original coords with crops shown' in this notebook): - Do we want crop boxes to be squares? - If yes, do we want bounds around coordinates to be squares or just the resultant images to be squares? - If yes, do we want to shrink or grow the rectangles to make them squares? - In cases such as the blue box with stray but included points, do we want to try to exclude those stray points? - Some cyan (non-crowded) points are left out of the crop boxes. Do we want to extend the crop boxes to fit these spots, or neglect them and stipulate that they must be found through a first pass of annotating the original image? To investigate: - Automatically setting the preference parameter for AffinityPropagation.
###Code
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
from sklearn.neighbors import KDTree
from sklearn.cluster import AffinityPropagation
coords = np.genfromtxt('coords.csv', delimiter=',')
for coord in coords:
plt.scatter(coord[0], coord[1], facecolors = 'c')
plt.title('Simulated spots from density test, density = 0.008 spots/pixel, snr = 10')
plt.show()
###Output
_____no_output_____
###Markdown
Earlier (outdated) approach: 1. Identify all crowded spots. Crowded spots have at least n neighbors within an m-pixel radius. 2. Separate regions with many crowded spots. 3. Define a bounding box around each region with many crowded spots. Aborted at Step 1 because I realized the way this finds crowded regions doesn't directly address the limiting pixel distance associated with crosshair arm length.
###Code
kdt = KDTree(coords, leaf_size=2, metric='euclidean')
def in_crowd(max_dist, min_num_neighbors, coord, kdt):
dist, ind = kdt.query([coord], k=min_num_neighbors+1)
distance_list = dist[0]
return max(distance_list) <= max_dist
max_dist = 10 # m
min_num_neighbors = 4 # n
for coord in coords:
if in_crowd(max_dist, min_num_neighbors, coord, kdt):
plt.scatter(coord[0], coord[1], facecolors = 'm')
else:
plt.scatter(coord[0], coord[1], facecolors = 'c')
plt.show()
###Output
_____no_output_____
###Markdown
Current approach: 1. Identify all crowded spots. Crowded spots are less than a crosshair arm length from the nearest neighbor. 2. Separate regions with many crowded spots. 3. Define a bounding box around each region with many crowded spots. Goal 1: Identify crowded spots. Highlight spots which are too close to (i.e. less than a crosshair arm length from) the nearest neighbor. Min distance between two spots = crosshair arm length, relative to image width. There's a minimum distance between two spots for the crosshair mark left on one spot not to obscure the other spot. This minimum distance is the length of one arm of a crosshair. It is in proportion to the pixel width of the image, since in Quantius the crosshairs take up the same proportion of the image regardless of image size. Measuring by hand, I found the crosshair-to-image-width ratio to be about 7:115, or 0.0609. Therefore one crosshair arm length is 0.03045 times the image width, so spots should be at least that far apart. **"Crowded spots"** are spots which are less than a crosshair arm's length from the nearest neighbor.
###Code
def get_nnd(coord, kdt):
dist, ind = kdt.query([coord], k=2)
return dist[0][1]
crosshair_arm_to_image_width_ratio = 0.03045 # measured empirically in Quantius's UI
image_width = 300
crosshair_arm_length = crosshair_arm_to_image_width_ratio * image_width
print('crosshair_arm_length = ' + str(crosshair_arm_length))
close_distances = []
crowded_spots = []
for coord in coords:
nnd = get_nnd(coord, kdt)
if nnd < crosshair_arm_length:
close_distances.append(nnd)
crowded_spots.append(coord)
plt.scatter(coord[0], coord[1], facecolors = 'm')
else:
plt.scatter(coord[0], coord[1], facecolors = 'c')
print('crowded spots / total spots = ' + str(len(crowded_spots)) + ' / ' + str(len(coords)) + ' = ' + str(round((float(len(crowded_spots))/len(coords)), 2)) + ' %')
plt.title('magenta = crowded spots, cyan = other spots')
plt.show()
plt.hist(close_distances, color = 'm')
plt.title('Distances between each crowded spots and the nearest crowded spot')
plt.show()
###Output
crosshair_arm_length = 9.135
crowded spots / total spots = 117 / 156 = 0.75 %
###Markdown
Goal 2: Separate regions with many crowded spots. Use AffinityPropagation on the crowded spots to separate out regions with many crowded spots. A smaller (more negative) preference parameter results in fewer separated regions.
###Code
pref_param = -50000
crowded_coords = np.asarray(crowded_spots)
af = AffinityPropagation(preference = pref_param).fit(crowded_coords)
cluster_centers_indices = af.cluster_centers_indices_
centers = [crowded_coords[index] for index in af.cluster_centers_indices_]
print(centers)
for coord in coords:
nnd = get_nnd(coord, kdt)
if nnd < crosshair_arm_length:
plt.scatter(coord[0], coord[1], facecolors = 'm')
else:
plt.scatter(coord[0], coord[1], facecolors = 'c')
for center in centers:
plt.scatter(center[0], center[1], facecolors = 'orange')
plt.show()
###Output
_____no_output_____
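###Markdown
One possible follow-up to the "to investigate" item above: scikit-learn's AffinityPropagation sets the preference to the median of the input similarities (negative squared Euclidean distances) by default, so a simple automatic choice is to scale that median. This is only a sketch; the scale factor of 10 is an arbitrary assumption to experiment with, not a tuned value.
###Code
from sklearn.metrics import euclidean_distances
# similarities used by AffinityPropagation's default 'euclidean' affinity
similarities = -euclidean_distances(crowded_coords, squared=True)
auto_pref = 10 * np.median(similarities)  # more negative than the default median -> fewer regions
af_auto = AffinityPropagation(preference=auto_pref).fit(crowded_coords)
print('automatic preference:', auto_pref)
print('number of regions:', len(af_auto.cluster_centers_indices_))
###Output
_____no_output_____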
###Markdown
Goal 3: Define a bounding box around each region with many crowded spots.
###Code
cluster_members_lists = [[] for i in range(len(centers))]
for label_index, coord in zip(af.labels_, crowded_coords):
cluster_members_lists[label_index].append(coord)
crop_bounds = []
for l in cluster_members_lists:
l = np.asarray(l)
x = l[:,0]
y = l[:,1]
crop_bounds.append((min(x), max(x), min(y), max(y)))
print(crop_bounds)
from matplotlib.patches import Rectangle
fig,ax = plt.subplots(1)
for coord in coords:
nnd = get_nnd(coord, kdt)
if nnd < crosshair_arm_length:
ax.scatter(coord[0], coord[1], facecolors = 'm')
else:
ax.scatter(coord[0], coord[1], facecolors = 'c')
for center in centers:
plt.scatter(center[0], center[1], facecolors = 'orange')
colors = ['red', 'orange', 'green', 'blue', 'violet']
for crop, col in zip(crop_bounds, colors):
rect = Rectangle((crop[0], crop[2]), crop[1]-crop[0], crop[3]-crop[2], edgecolor = col, facecolor = 'none')
ax.add_patch(rect)
plt.title('Original coords with crops shown')
plt.show()
###Output
_____no_output_____
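###Markdown
A sketch addressing the "square crops" question under Next Steps: one option is to grow each bounding box into a square by padding its shorter side. This assumes growing (rather than shrinking) is acceptable, and it ignores clamping to the image borders.
###Code
def make_square(bounds):
    # grow a (xmin, xmax, ymin, ymax) box into a square by padding the shorter side
    xmin, xmax, ymin, ymax = bounds
    w, h = xmax - xmin, ymax - ymin
    pad = abs(w - h) / 2.0
    if w < h:
        xmin, xmax = xmin - pad, xmax + pad
    else:
        ymin, ymax = ymin - pad, ymax + pad
    return (xmin, xmax, ymin, ymax)

square_bounds = [make_square(b) for b in crop_bounds]
print(square_bounds)
###Output
_____no_output_____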
###Markdown
Analyze each crop separately to see whether spots are now spaced far enough apart. On the scatter plots, spots closer to their nearest neighbor than the length of a crosshair arm are marked in magenta.
###Code
for i in range(len(crop_bounds)):
print('-------------------------------------------------')
print('Crop ' + str(i))
crop = crop_bounds[i]
col = colors[i]
xmin = crop[0]
xmax = crop[1]
ymin = crop[2]
ymax = crop[3]
crop_width = crop[1]-crop[0]
crosshair_arm_length = crosshair_arm_to_image_width_ratio * crop_width
print('crosshair_arm_length = ' + str(crosshair_arm_length))
crop_coords = []
for coord in coords:
if coord[0] >= xmin and coord[0] <= xmax:
if coord[1] >= ymin and coord[1] <= ymax:
crop_coords.append(coord)
crop_kdt = KDTree(crop_coords, leaf_size=2, metric='euclidean')
close_distances = []
crowded_spots = []
for coord in crop_coords:
nnd = get_nnd(coord, crop_kdt)
if nnd < crosshair_arm_length:
close_distances.append(nnd)
crowded_spots.append(coord)
plt.scatter(coord[0], coord[1], facecolors = 'm')
else:
plt.scatter(coord[0], coord[1], facecolors = col)
print('crowded spots / total spots = ' + str(len(crowded_spots)) + ' / ' + str(len(crop_coords)) + ' = ' + str(round((float(len(crowded_spots))/len(crop_coords)), 2)) + ' %')
plt.title('magenta = crowded spots, ' + str(col) + ' = other spots')
plt.show()
plt.hist(close_distances, color = 'm')
plt.yticks(np.arange(0, 10, step=1))
plt.title('Dist. from each crowded spot to the nearest crowded spot')
plt.show()
###Output
-------------------------------------------------
Crop 0
crosshair_arm_length = 2.04015
crowded spots / total spots = 4 / 31 = 0.13 %
|
Explainable_Hostel_Recommender_System.ipynb | ###Markdown
Introduction: This recommender system recommends similar hostels with an explanation and, as a result, such recommendations become more effective and more persuasive. Since there is no data available for hostels in Ireland in public data libraries, I have scraped the data from the Hostel World website for the experiment. Let's start by importing the necessary libraries.
###Code
# Importing libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
import seaborn as sns
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.metrics import euclidean_distances
import warnings
warnings.filterwarnings('ignore')
# Loading and displaying the dataset
df_hostels = pd.read_csv("../input/hybrid.csv", encoding='latin1')
df_hostels.head()
###Output
_____no_output_____
###Markdown
Here, for some entertainment features, the values are given as 1 and 0, meaning 0 = No and 1 = Yes. 1. Exploratory Data Analysis
###Code
# General Information
df_hostels.info()
# Statistical characteristics of numerical features
df_hostels.describe()
###Output
_____no_output_____
###Markdown
Let's draw histograms for some relevant fields:
###Code
plt.figure(figsize=(14, 10))
plt.subplot(221)
plt.hist(df_hostels['Price'].values, bins=20)
plt.title('Price')
plt.subplot(222)
plt.hist(df_hostels['summary.score'].values, bins=20)
plt.title('summary.score')
###Output
_____no_output_____
###Markdown
Most hostels in the dataset are priced between 10 and 50 Euros. 2. Data Preprocessing: Handling Missing Values
###Code
df_hostels.isnull().sum()
###Output
_____no_output_____
###Markdown
No missing values are found. Now, I'm deleting the summary.score, Name, and rating.band columns. summary.score gives the average of columns like Value.for.money, Security, Location, Staff, Atmosphere, Cleanliness, and Facilities; since we take those features separately for the similarity computation, the summary.score column is removed. We can access the name later, so the Name column is removed. The rating.band column is also unnecessary and is removed because it assigns a rating band based on summary.score, i.e. if summary.score is between 1 and 3, then rating.band is "Good".
###Code
df_hostels.drop(['summary.score', 'Name', 'rating.band'], inplace=True, axis=1);
# Label Encoding
le = LabelEncoder()
df_hostels['Distance'] = le.fit_transform(df_hostels['Distance'])
df_hostels['City'] = le.fit_transform(df_hostels['City'])
df_hostels.head(3)
###Output
_____no_output_____
###Markdown
3. Modeling
###Code
# Function get all cities
def getSameCityRows(anchor_id):
getRow = df_hostels.loc[anchor_id-1, :]
city = getRow['City']
df_sorted = df_hostels.loc[df_hostels['City'] == int(city)]
return df_sorted
def get_recommendations11(df, anchor_id):
featureString = 'Distance,Price,City'
getRow = df.loc[anchor_id-1, :]
featureString = 'Value.for.money,Security,Location,Staff,Atmosphere,Cleanliness,Facilities,Board.Games,Dvds,Foosball,Games.Room,PlayStation,Pool.Table,Wifi,Distance,City,Price'
features = featureString.split(',')
df_sorted = df.copy()
df_sorted = pd.concat([df_sorted[df_sorted['hostel.id'] == anchor_id],df_sorted[df_sorted['hostel.id'] != anchor_id]])
df_features = df_sorted[features].copy()
df_features = normalize_features(df_features)
# compute the distances
X = df_features.values
Y = df_features.values[0].reshape(1, -1)
distances = euclidean_distances(X, Y)
df_sorted['similarity_distance'] = distances
return df_sorted.sort_values('similarity_distance').reset_index(drop=True)
def get_recommendations(df, anchor_id):
featureString = 'Distance,Price,City'
getRow = df.loc[anchor_id-1, :]
if(getRow['Value.for.money'] != 0.0):
featureString+=",Value.for.money"
if(getRow['Security'] != 0.0):
featureString+=",Security"
if(getRow['Location'] != 0.0):
featureString+=",Location"
if(getRow['Staff'] != 0.0):
featureString+=",Staff"
if(getRow['Atmosphere'] != 0.0):
featureString+=",Atmosphere"
if(getRow['Cleanliness'] != 0.0):
featureString+=",Cleanliness"
if(getRow['Facilities'] != 0.0):
featureString+=",Facilities"
if(getRow['Board.Games'] != 0):
featureString+=",Board.Games"
if(getRow['Dvds'] != 0):
featureString+=",Dvds"
if(getRow['Foosball'] != 0):
featureString+=",Foosball"
if(getRow['Games.Room'] != 0):
featureString+=",Games.Room"
if(getRow['PlayStation'] != 0):
featureString+=",PlayStation"
if(getRow['Pool.Table'] != 0):
featureString+=",Pool.Table"
if(getRow['Wifi'] != 0):
featureString+=",Wifi"
features = featureString.split(',')
df_sorted = df.copy()
df_sorted = pd.concat([df_sorted[df_sorted['hostel.id'] == anchor_id],df_sorted[df_sorted['hostel.id'] != anchor_id]])
df_features = df_sorted[features].copy()
df_features = normalize_features(df_features)
# compute the distances
X = df_features.values
Y = df_features.values[0].reshape(1, -1)
distances = euclidean_distances(X, Y)
df_sorted['similarity_distance'] = distances
return df_sorted.sort_values('similarity_distance').reset_index(drop=True)
def normalize_features(df):
for col in df.columns:
# fill any NaN's with the mean
df[col] = df[col].fillna(df[col].mean())
df[col] = StandardScaler().fit_transform(df[col].values.reshape(-1, 1))
return df
def Remove(duplicate):
final_list = []
for num in duplicate:
if num not in final_list:
final_list.append(num)
return final_list
###Output
_____no_output_____
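###Markdown
Before adding explanations, here is a quick sanity check of the recommender on a single hostel (a sketch; it assumes hostel.id 1 exists in the data, and shows the anchor followed by its three nearest neighbours).
###Code
anchor_id = 1
candidates = getSameCityRows(anchor_id)
recs = get_recommendations(candidates, anchor_id)
recs[['hostel.id', 'City', 'Price', 'similarity_distance']].head(4)
###Output
_____no_output_____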
###Markdown
4. Explainable Processing Methods Now, I will write code to put an explanation before the hostel recommendations according to this logic: if all three recommended hostels have a rating of 8 or higher in any of the following columns (Value.for.money, Security, Location, Staff, Atmosphere, Cleanliness, Facilities), then mention those in the explanation. Also, if all three recommended hostels have a 1 (which means yes) in any of the following columns (Board.Games, Dvds, Foosball, Games.Room, PlayStation, Pool.Table or Wifi), then mention those in the explanation.
###Code
def processSummary(df):
tf = (df.values > 7.9)
if False in tf[:]:
return False
else:
return True
def processEntertainment(df):
tf = (df.values == 1)
if False in tf[:]:
return False
else:
return True
def getExplaination(finalShowDown):
# Summary Fields
value_for_money = processSummary(finalShowDown['Value.for.money'])
security = processSummary(finalShowDown['Security'])
staff = processSummary(finalShowDown['Staff'])
atmosphere = processSummary(finalShowDown['Atmosphere'])
clean = processSummary(finalShowDown['Cleanliness'])
facilities = processSummary(finalShowDown['Facilities'])
location = processSummary(finalShowDown['Location'])
summary = ""
finalDecision = "Similar hostels "
if(value_for_money):
summary+='Value for money, '
if(security):
summary+='Security, '
if(staff):
summary+='Staff, '
if(atmosphere):
summary+='Atmosphere, '
if(clean):
summary+='Cleanliness, '
if(facilities):
summary+='Facilities, '
if(location):
summary+='Location, '
if summary != "":
summary = removeLastOccurence(summary, ",")
summary_split = summary.split(",")
if(len(summary_split) > 1):
summary = removeAgainOccurence(summary, ",")
finalDecision+="who are famous for excellent "+summary
# Entertainment Fields
board_games = processEntertainment(finalShowDown['Board.Games'])
dvd = processEntertainment(finalShowDown['Dvds'])
foosball = processEntertainment(finalShowDown['Foosball'])
games_room = processEntertainment(finalShowDown['Games.Room'])
play_station = processEntertainment(finalShowDown['PlayStation'])
pool_table = processEntertainment(finalShowDown['Pool.Table'])
wifi = processEntertainment(finalShowDown['Wifi'])
entertainment = ""
if(board_games):
entertainment+='Board Games, '
if(dvd):
entertainment+='DVDs, '
if(foosball):
entertainment+='Foosball, '
if(games_room):
entertainment+='Games Room, '
if(play_station):
entertainment+='PlayStation, '
if(pool_table):
entertainment+='Pool Table, '
if(wifi):
entertainment+='Wifi, '
if entertainment != "":
entertainment = removeLastOccurence(entertainment, ",")
ent_split = entertainment.split(",")
if(len(ent_split) > 1):
entertainment = removeAgainOccurence(entertainment, ",")
if summary != "":
finalDecision+="and also have "+entertainment+"."
else:
finalDecision+="who have "+entertainment+"."
else:
finalDecision+="."
return removeLastOccurence(finalDecision, " ")
def removeLastOccurence(str_val, delimiter):
k = str_val.rfind(delimiter)
new_string = str_val[:k] + "" + str_val[k+1:]
return new_string
def removeAgainOccurence(str_val, delimiter):
k = str_val.rfind(delimiter)
new_string = str_val[:k] + " and" + str_val[k+1:]
return new_string
###Output
_____no_output_____
###Markdown
Results for all hostel ids using loop
###Code
for anchor_id in range(1,121):
getRows = getSameCityRows(anchor_id)
recommendations = get_recommendations(getRows, anchor_id)
finalShowDown = recommendations.head(n=4)
print("\n")
print('\033[1m'+"Test Resuts for hostel.id="+str(anchor_id)+"\n")
finalOutput = finalShowDown.loc[[1,2,3]]
explaination = getExplaination(finalOutput)
if(explaination == 'Similar hostels.'):
explaination = 'Similar hostels which have similar Price and have same Distance from the City Centre.'
print(explaination)
display(finalShowDown)
###Output
[1mTest Resuts for hostel.id=1
Similar hostels who are famous for excellent Value for money, Security, Staff, Atmosphere, Cleanliness, Facilities and Location and also have Board Games and Wifi.
|
5 Day Data Cleaning Challenge/Character Encodings.ipynb | ###Markdown
Previous days* [Day 1: Handling missing values](https://www.kaggle.com/rtatman/data-cleaning-challenge-handling-missing-values)* [Day 2: Scaling and normalization](https://www.kaggle.com/rtatman/data-cleaning-challenge-scale-and-normalize-data)* [Day 3: Parsing dates](https://www.kaggle.com/rtatman/data-cleaning-challenge-parsing-dates/)___Welcome to day 4 of the 5-Day Data Challenge! Today, we're going to be working with different character encodings. To get started, click the blue "Fork Notebook" button in the upper, right hand corner. This will create a private copy of this notebook that you can edit and play with. Once you're finished with the exercises, you can choose to make your notebook public to share with others. :)> **Your turn!** As we work through this notebook, you'll see some notebook cells (a block of either code or text) that has "Your Turn!" written in it. These are exercises for you to do to help cement your understanding of the concepts we're talking about. Once you've written the code to answer a specific question, you can run the code by clicking inside the cell (box with code in it) with the code you want to run and then hit CTRL + ENTER (CMD + ENTER on a Mac). You can also click in a cell and then click on the right "play" arrow to the left of the code. If you want to run all the code in your notebook, you can use the double, "fast forward" arrows at the bottom of the notebook editor.Here's what we're going to do today:* [Get our environment set up](Get-our-environment-set-up)* [What are encodings?](What-are-encodings?)* [Reading in files with encoding problems](Reading-in-files-with-encoding-problems)* [Saving your files with UTF-8 encoding](Plot-the-day-of-the-month-to-the-date-parsing)Let's get started! Get our environment set up________The first thing we'll need to do is load in the libraries we'll be using. Not our datasets, though: we'll get to those later!> **Important!** Make sure you run this cell yourself or the rest of your code won't work!
###Code
# modules we'll use
import pandas as pd
import numpy as np
# helpful character encoding module
import chardet
# set seed for reproducibility
np.random.seed(0)
###Output
_____no_output_____
###Markdown
Now we're ready to work with some character encodings! (If you like, you can add a code cell here and take this opportunity to take a look at some of the data.) What are encodings? ____ Character encodings are specific sets of rules for mapping from raw binary byte strings (that look like this: 0110100001101001) to characters that make up human-readable text (like "hi"). There are many different encodings, and if you try to read in text with a different encoding than the one it was originally written in, you end up with scrambled text called "mojibake" (said like mo-gee-bah-kay). Here's an example of mojibake: æ–‡å—化ã?? You might also end up with "unknown" characters. These are what gets printed when there's no mapping between a particular byte and a character in the encoding you're using to read your byte string in, and they look like this: ���������� Character encoding mismatches are less common today than they used to be, but they're definitely still a problem. There are lots of different character encodings, but the main one you need to know is UTF-8. > UTF-8 is **the** standard text encoding. All Python code is in UTF-8 and, ideally, all your data should be as well. It's when things aren't in UTF-8 that you run into trouble. It was pretty hard to deal with encodings in Python 2, but thankfully in Python 3 it's a lot simpler. (Kaggle Kernels only use Python 3.) There are two main data types you'll encounter when working with text in Python 3. One is the string, which is what text is by default.
###Code
# start with a string
before = "This is the euro symbol: €"
# check to see what datatype it is
type(before)
###Output
_____no_output_____
###Markdown
The other data is the [bytes](https://docs.python.org/3.1/library/functions.htmlbytes) data type, which is a sequence of integers. You can convert a string into bytes by specifying which encoding it's in:
###Code
# encode it to a different encoding, replacing characters that raise errors
after = before.encode("utf-8", errors = "replace")
# check the type
type(after)
print (after)
###Output
b'This is the euro symbol: \xe2\x82\xac'
###Markdown
If you look at a bytes object, you'll see that it has a b in front of it, and then maybe some text after. That's because bytes are printed out as if they were characters encoded in ASCII. (ASCII is an older character encoding that doesn't really work for writing any language other than English.) Here you can see that our euro symbol has been replaced with some mojibake that looks like "\xe2\x82\xac" when it's printed as if it were an ASCII string.
###Code
# take a look at what the bytes look like
after
###Output
_____no_output_____
###Markdown
When we convert our bytes back to a string with the correct encoding, we can see that our text is all there correctly, which is great! :)
###Code
# convert it back to utf-8
print(after.decode("utf-8"))
###Output
This is the euro symbol: €
###Markdown
However, when we try to use a different encoding to map our bytes into a string, we get an error. This is because the encoding we're trying to use doesn't know what to do with the bytes we're trying to pass it. You need to tell Python the encoding that the byte string is actually supposed to be in. > You can think of different encodings as different ways of recording music. You can record the same music on a CD, cassette tape or 8-track. While the music may sound more-or-less the same, you need to use the right equipment to play the music from each recording format. The correct decoder is like a cassette player or a CD player. If you try to play a cassette in a CD player, it just won't work.
###Code
# try to decode our bytes with the ascii encoding
print(after.decode("ascii"))
###Output
_____no_output_____
###Markdown
We can also run into trouble if we try to use the wrong encoding to map from a string to bytes. Like I said earlier, strings are UTF-8 by default in Python 3, so if we try to treat them as if they were in another encoding we'll create problems. For example, if we try to convert a string to bytes for ASCII using encode(), we can ask for the bytes to be what they would be if the text was in ASCII. Since our text isn't in ASCII, though, there will be some characters it can't handle. We can automatically replace the characters that ASCII can't handle. If we do that, however, any characters not in ASCII will just be replaced with the unknown character. Then, when we convert the bytes back to a string, the character will be replaced with the unknown character. The dangerous part about this is that there's no way to tell which character it *should* have been. That means we may have just made our data unusable!
###Code
# start with a string
before = "This is the euro symbol: €"
# encode it to a different encoding, replacing characters that raise errors
after = before.encode("ascii", errors = "replace")
print (after)
# convert it back to utf-8
print(after.decode("ascii"))
# We've lost the original underlying byte string! It's been
# replaced with the underlying byte string for the unknown character :(
###Output
b'This is the euro symbol: ?'
This is the euro symbol: ?
###Markdown
This is bad and we want to avoid doing it! It's far better to convert all our text to UTF-8 as soon as we can and keep it in that encoding. The best time to convert non UTF-8 input into UTF-8 is when you read in files, which we'll talk about next.First, however, try converting between bytes and strings with different encodings and see what happens. Notice what this does to your text. Would you want this to happen to data you were trying to analyze?
###Code
# Your turn! Try encoding and decoding different symbols to ASCII and
# see what happens. I'd recommend $, #, 你好 and नमस्ते but feel free to
# try other characters. What happens? When would this cause problems?
temp_string = "$@#नमस्ते"
after_enc = temp_string.encode("ascii", errors="replace")
print (after_enc)
after_dec = after_enc.decode("ascii")
print (after_dec)
###Output
b'$@#??????'
$@#??????
###Markdown
Reading in files with encoding problems___Most files you'll encounter will probably be encoded with UTF-8. This is what Python expects by default, so most of the time you won't run into problems. However, sometimes you'll get an error like this:
###Code
# try to read in a file not in UTF-8
kickstarter_2016 = pd.read_csv("../input/kickstarter-projects/ks-projects-201612.csv")
###Output
_____no_output_____
###Markdown
Notice that we get the same `UnicodeDecodeError` we got when we tried to decode UTF-8 bytes as if they were ASCII! This tells us that this file isn't actually UTF-8. We don't know what encoding it actually *is* though. One way to figure it out is to try and test a bunch of different character encodings and see if any of them work. A better way, though, is to use the chardet module to try and automatically guess what the right encoding is. It's not 100% guaranteed to be right, but it's usually faster than just trying to guess.I'm going to just look at the first ten thousand bytes of this file. This is usually enough for a good guess about what the encoding is and is much faster than trying to look at the whole file. (Especially with a large file this can be very slow.) Another reason to just look at the first part of the file is that we can see by looking at the error message that the first problem is the 11th character. So we probably only need to look at the first little bit of the file to figure out what's going on.
###Code
with open("../input/kickstarter-projects/ks-projects-201801.csv", 'rb') as rawdata:
result = chardet.detect(rawdata.read(10000))
print (result)
###Output
{'encoding': 'Windows-1252', 'confidence': 0.73, 'language': ''}
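###Markdown
To make the "test a bunch of different character encodings" idea above concrete, here is a small sketch that tries to decode the first few kilobytes with a few common encodings. The candidate list is an assumption, and note that latin-1 never raises an error, so a successful decode does not guarantee the text is meaningful.
###Code
with open("../input/kickstarter-projects/ks-projects-201612.csv", 'rb') as rawdata:
    raw_sample = rawdata.read(10000)

for enc in ["utf-8", "ascii", "Windows-1252", "latin-1"]:
    try:
        raw_sample.decode(enc)
        print(enc, "decoded without errors")
    except UnicodeDecodeError:
        print(enc, "failed")
###Output
_____no_output_____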
###Markdown
So chardet is 73% confident that the right encoding is "Windows-1252". Let's see if that's correct:
###Code
kickstarter_2016 = pd.read_csv('../input/kickstarter-projects/ks-projects-201612.csv', encoding='Windows-1252')
kickstarter_2016.head()
# read in the file with the encoding detected by chardet
kickstarter2_2016 = pd.read_csv("../input/kickstarter-projects/ks-projects-201801.csv", encoding='Windows-1252')
# look at the first few lines
kickstarter2_2016.head()
###Output
_____no_output_____
###Markdown
Yep, looks like chardet was right! The file reads in with no problem (although we do get a warning about datatypes) and when we look at the first few rows it seems to be fine.
###Code
with open("../input/fatal-police-shootings-in-the-us/PoliceKillingsUS.csv", 'rb') as rawdata:
next_result = chardet.detect(rawdata.read(100000))
print (next_result)
# Your Turn! Trying to read in this file gives you an error. Figure out
# what the correct encoding should be and read in the file. :)
police_killings = pd.read_csv("../input/fatal-police-shootings-in-the-us/PoliceKillingsUS.csv", encoding='Windows-1252')
###Output
_____no_output_____
###Markdown
Saving your files with UTF-8 encoding ___ Finally, once you've gone through all the trouble of getting your file into UTF-8, you'll probably want to keep it that way. The easiest way to do that is to save your files with UTF-8 encoding. The good news is, since UTF-8 is the standard encoding in Python, when you save a file it will be saved as UTF-8 by default:
###Code
# save our file (will be saved as UTF-8 by default!)
kickstarter_2016.to_csv("ks-projects-201801-utf8.csv")
###Output
_____no_output_____
###Markdown
Pretty easy, huh? :)> If you haven't saved a file in a kernel before, you need to hit the commit & run button and wait for your notebook to finish running first before you can see or access the file you've saved out. If you don't see it at first, wait a couple minutes and it should show up. The files you save will be in the directory "../output/", and you can download them from your notebook.
###Code
# Your turn! Save out a version of the police_killings dataset with UTF-8 encoding
police_killings.to_csv("police-killings-utf8.csv")
###Output
_____no_output_____ |
_notebooks/2022-01-22-mnist.ipynb | ###Markdown
fastbook 04_mnist_basics> done- toc:true- branch: master- badges: false- comments: false - author: 최서연- categories: [fastbook, MNIST] ref: https://github.com/fastai/fastbook
###Code
!pip install -Uqq fastbook
import fastbook
fastbook.setup_book()
from fastai.vision.all import *
from fastbook import *
matplotlib.rc('image', cmap='Greys')
###Output
_____no_output_____
###Markdown
Introduction - We'll explain stochastic gradient descent (SGD), the mechanism for learning by updating weights automatically. - We'll discuss the choice of a loss function for our basic classification task, and the role of mini-batches. - We'll also describe the math that a basic neural network is actually doing. - Finally, we'll put all these pieces together.
###Code
path = untar_data(URLs.MNIST_SAMPLE)
Path.BASE_PATH = path
path.ls()
(path/'train').ls()
###Output
_____no_output_____
###Markdown
Let's take a look in one of these folders (using sorted to ensure we all get the same order of files):
###Code
threes = (path/'train'/'3').ls().sorted()
sevens = (path/'train'/'7').ls().sorted()
###Output
_____no_output_____
###Markdown
Let’s take a look at one now. Here’s an image of a handwritten number 3, taken from the famous MNIST dataset of handwritten numbers:
###Code
im3_path=threes[1]
im3=Image.open(im3_path)
im3
###Output
_____no_output_____
###Markdown
Here we are using the *Image* class from the *Python Imaging Library (PIL)*, which is the most widely used Python package for opening, manipulating, and viewing images.
###Code
array(im3)[4:10,4:10]
###Output
_____no_output_____
###Markdown
NumPy indexes from top to bottom and left to right, so this section is located in the top-left corner of the image.Here's the same thing as a PyTorch tensor:
###Code
tensor(im3)[4:10,4:10]
im3_t=tensor(im3)
df=pd.DataFrame(im3_t[4:15,4:22])
df.style.set_properties(**{'font-size':'6pt'}).background_gradient('Greys')
###Output
_____no_output_____
###Markdown
- You can see that the background **white pixels are stored as the number 0**, **black is the number 255**, and **shades of gray are between the two**. - The entire image contains 28 pixels across and 28 pixels down, for a total of 784 pixels. Pixel Similarity
###Code
array(im3)[4:10,4:10]/255
three_tensors=[tensor(Image.open(i)) for i in threes]
seven_tensors=[tensor(Image.open(i)) for i in sevens]
len(three_tensors),len(seven_tensors)
###Output
_____no_output_____
###Markdown
Since we now *have tensors* (which Jupyter by default will print as values), rather than PIL images (which Jupyter by default will display as images), we need to use fastai's show_image function to display it:
###Code
show_image(seven_tensors[3])
###Output
_____no_output_____
###Markdown
- For every pixel position, we want to compute the average over all the images of the intensity of that pixel. - To do this we first combine all the images in this list into a single three-dimensional tensor. - The most common way to describe such a tensor is to call it a rank-3 tensor. - We often need to stack up individual tensors in a collection into a single tensor. Generally when images are floats, the pixel values are expected to be between 0 and 1, so we will also divide by 255 here:
###Code
stacked_threes = torch.stack(three_tensors).float()/255
stacked_sevens = torch.stack(seven_tensors).float()/255
stacked_threes.shape, stacked_sevens.shape
###Output
_____no_output_____
###Markdown
There is nothing specifically about this tensor that says that the first axis is the number of images, the second is the height, and the third is the width
###Code
len(stacked_sevens.shape)
###Output
_____no_output_____
###Markdown
> important: rank is the number of axes or dimensions in a tensor; shape is the size of each axis of a tensor. We can also get a tensor's rank directly with ndim:
###Code
stacked_sevens.ndim
###Output
_____no_output_____
###Markdown
for every pixel position, this will compute the average of that pixel over all images. The result will be one value for every pixel position, or a single image. Here it is:
###Code
mean3=stacked_threes.mean(0)
show_image(mean3)
mean7=stacked_sevens.mean(0)
show_image(mean7)
a_3 = stacked_threes[1]
show_image(a_3)
a_7 = stacked_sevens[1]
show_image(a_7)
show_image(abs(a_3-mean3))
###Output
_____no_output_____
###Markdown
- We can't just add up the differences between the pixels of this image and the ideal digit. - Some differences will be positive while others will be negative, and these differences will cancel out, resulting in a situation where an image that is too dark in some places and too light in others might be shown as having zero total differences from the ideal.- That would be misleading! 1. Take the mean of the *absolute value* of differences (absolute value is the function that replaces negative values with positive values). This is called the *mean absolute difference* or L1 norm2. Take the mean of the *square* of differences (which makes everything positive) and then take the *square root* (which undoes the squaring). This is called the root mean *squared error (RMSE)* or L2 norm.
###Code
dist_3_abs=(a_3 - mean3).abs().mean()
dist_3_sqr = ((a_3-mean3)**2).mean().sqrt()
dist_3_abs, dist_3_sqr
dist_7_abs=(a_7 - mean7).abs().mean()
dist_7_sqr = ((a_7-mean7)**2).mean().sqrt()
dist_7_abs, dist_7_sqr
###Output
_____no_output_____
###Markdown
- In both cases, the distance between our 3 and the "ideal" 3 is less than the distance to the ideal 7. - So our simple model will give the right prediction in this case. PyTorch already provides both of these as loss functions. You'll find these inside `torch.nn.functional`, which the PyTorch team recommends importing as F (and is available by default under that name in fastai):
###Code
F.l1_loss(a_3.float(),mean3), F.mse_loss(a_3,mean3).sqrt()
F.l1_loss(a_7.float(),mean7), F.mse_loss(a_7,mean7).sqrt()
###Output
_____no_output_____
###Markdown
Here mse stands for mean squared error, and l1 refers to the standard mathematical jargon for mean absolute value (in math it's called the L1 norm). Computing Metrics Using Broadcasting
###Code
valid_3_tens = torch.stack([tensor(Image.open(i)) for i in (path/'valid'/'3').ls()]).float()/255
valid_7_tens = torch.stack([tensor(Image.open(i)) for i in (path/'valid'/'7').ls()]).float()/255
valid_3_tens.shape, valid_7_tens.shape
def mnist_distance(a,b): return (a-b).abs().mean((-1,-2))
mnist_distance(a_3,mean3)
###Output
_____no_output_____
###Markdown
This is the same value we previously calculated for the distance between these two images, the ideal 3 mean3 and the arbitrary sample 3 a_3, which are both single-image tensors with a shape of [28,28].
###Code
valid_3_dist=mnist_distance(valid_3_tens,mean3)
valid_3_dist, valid_3_dist.shape
###Output
_____no_output_____
###Markdown
PyTorch treats mean3, a rank-2 tensor representing a single image, as if it were 1,010 copies of the same image, and then subtracts each of those copies from each 3 in our validation set.
###Code
valid_3_tens.shape, mean3.shape, (valid_3_tens-mean3).shape
###Output
_____no_output_____
###Markdown
- PyTorch doesn't actually copy mean3 1,010 times. It pretends it were a tensor of that shape, but doesn't actually allocate any additional memory- It does the whole calculation in C (or, if you're using a GPU, in CUDA, the equivalent of C on the GPU), tens of thousands of times faster than pure Python (up to millions of times faster on a GPU!). our function calls *mean((-1,-2))*. The tuple (-1,-2) represents a range of axes. In Python, `-1` refers to the last element, and `-2` refers to the second-to-last. So in this case, this tells PyTorch that we want to take the mean ranging over the values indexed by the last two axes of the tensor. The last two axes are the horizontal and vertical dimensions of an image. After taking the mean over the last two axes, we are left with just the first tensor axis, which indexes over our images, which is why our final size was (1010). In other words, for every image, we averaged the intensity of all the pixels in that image. We can use mnist_distance to figure out whether an image is a 3 or not by using the following logic: **if the distance between the digit in question and the ideal 3 is less than the distance to the ideal 7, then it's a 3**. This function will automatically do broadcasting and be applied elementwise, just like all PyTorch functions and operators:
###Code
def is_3(x): return mnist_distance(x,mean3)<mnist_distance(x,mean7)
is_3(a_3),is_3(a_3).float()
###Output
_____no_output_____
###Markdown
Note that when we convert the Boolean response to a float, **we get 1.0 for True and 0.0 for False**.
###Code
is_3(valid_3_tens)
accuracy_3s=is_3(valid_3_tens).float().mean()
accuracy_7s = (1-is_3(valid_7_tens).float()).mean()
accuracy_3s, accuracy_7s, (accuracy_3s+accuracy_7s)/2
###Output
_____no_output_____
###Markdown
Stochastic Gradient Descent (SGD) we could instead look at each individual pixel and come up with a set of weights for each one, such that the highest weights are associated with those pixels most likely to be black for a particular category. - We want to find the specific values for the vector w that causes the result of our function to be high for those images that are actually 8s Here are the steps that we are going to require, to turn this function into a machine learning classifier:1. Initialize the weights. - 가중치를 초기화한다. - We initialize the parameters to random values. 2. For each image, use these weights to predict whether it appears to be a 3 or a 7. - 각 이미지에 대해 3이나 7로 나타나는지 예측하기 위해 가중치를 사용한다.3. Based on these predictions, calculate how good the model is (its loss). - 이 예측을 바탕으로 모델이 얼마나 좋은지 즉, 손실을 계산한다. - We need some function that will return a number that is small if the performance of the model is good. - the standard approach is to treat a small loss as good, and a large loss as bad, although this is just a convention.4. Calculate the gradient, which measures for each weight, how changing that weight would change the loss - 각 가중치를 측정하는 기울기를 계산하고, 가중치가 변경되면 손실은 어떻게 변화하는지 계산한다.5. Step (that is, change) all the weights based on that calculation. - 계산에 따라 모든 가중치를 단계, 즉 변경한다.6. Go back to the step 2, and repeat the process. - 2단계로 돌아가서 과정을 반복한다.7. Iterate until you decide to stop the training process (for instance, because the model is good enough or you don't want to wait any longer). - 학습 과정을 중단하기로 결정할떄까지 반복한다.(모델이 충분히 좋거나 더이상 기다리기 싫을때) - Once we've decided how many epochs to train the model for (a few suggestions for this were given in the earlier list), we apply that decision. - These seven steps, illustrated in , are the key to the training of all deep learning models.- That deep learning turns out to rely entirely on these steps is extremely surprising and counterintuitive.
###Code
gv('''
init->predict->loss->gradient->step->stop
step->predict[label=repeat]
''')
###Output
_____no_output_____
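###Markdown
A schematic sketch of the seven steps above as a Python training loop. The names used here (model, loss_func, dataloader) are placeholders rather than the objects defined later in this notebook; a concrete version is built step by step below.
###Code
def train_sketch(params, model, loss_func, dataloader, lr, n_epochs):
    # step 1 (initializing the weights) is assumed to have produced `params` already
    for epoch in range(n_epochs):            # step 7: iterate until we decide to stop
        for xb, yb in dataloader:
            preds = model(xb, params)        # step 2: predict
            loss = loss_func(preds, yb)      # step 3: measure how good the model is
            loss.backward()                  # step 4: calculate the gradients
            for p in params:                 # step 5: step the weights
                p.data -= p.grad * lr
                p.grad = None
            # step 6: the loop goes back to step 2 for the next mini-batch
    return params
###Output
_____no_output_____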
###Markdown
let's pretend that this is our loss function, and x is a weight parameter of the function:
###Code
def f(x): return x**2
plot_function(f,'x','x**2')
###Output
_____no_output_____
###Markdown
The sequence of steps we described earlier starts by picking some random value for a parameter, and calculating the value of the loss:
###Code
plot_function(f,'x','x**2')
plt.scatter(-1.5,f(-1.5),color='red')
###Output
_____no_output_____
###Markdown
We can change our weight by a little in the direction of the slope, calculate our loss and adjustment again, and repeat this a few times (eventually we will reach the minimum at (0,0)). Calculating Gradients - PyTorch is able to automatically compute the derivative of nearly any function! - What's more, it does it very fast. - Most of the time, it will be at least as fast as any derivative function that you can create by hand. First, let's pick a tensor value which we want gradients at:
###Code
xt = tensor(3.0).requires_grad_()
###Output
_____no_output_____
###Markdown
Notice the special method `requires_grad_`? - That's the magical incantation we use to tell PyTorch that we want to calculate gradients with respect to that variable at that value. - It is essentially tagging the variable, so PyTorch will remember to keep track of how to compute gradients of the other, direct calculations on it that you will ask for.
###Code
yt = f(xt)
yt
yt.backward()
###Output
_____no_output_____
###Markdown
The "backward" here refers to backpropagation, which is the name given to the process of calculating the derivative of each layer. - backward는 역전파를 가리키는데, 이건 각 층의 유도를 계산하는 과정에 주어진 이름이다. - 역전파를 시작하겠다! This is called the "backward pass" of the network, as opposed to the "forward pass," which is where the activations are calculated.
###Code
xt.grad
###Output
_____no_output_____
###Markdown
The value above, 6, is the gradient at the xt we entered: with f(x) = x**2 the derivative is 2*x, and 2*3 = 6. Now we'll repeat the preceding steps, but with a vector argument for our function:
###Code
xt=tensor([3.,4.,10.]).requires_grad_()
xt
###Output
_____no_output_____
###Markdown
we'll **add sum** to our function so it can **take a vector** (i.e., a rank-1 tensor), and return a scalar (i.e., a rank-0 tensor):
###Code
def f(x): return (x**2).sum()
yt=f(xt)
yt
yt.backward()
xt.grad
###Output
_____no_output_____
###Markdown
Stepping With a Learning Rate Nearly all approaches start with the basic idea of multiplying the gradient by some small number, called the **learning rate (LR)**. - The learning rate is often a number between 0.001 and 0.1, although it could be anything. Once you've picked a learning rate, you can adjust your parameters using this simple update: $$\text{w -= gradient(w) * lr}$$ An End-to-End SGD Example: measuring the speed of a roller coaster.
###Code
time = torch.arange(0,20).float()
time
speed=torch.randn(20)*3+0.75*(time-9.5)**2+1
plt.scatter(time,speed)
###Output
_____no_output_____
###Markdown
We've added a bit of random noise, since measuring things manually isn't precise. We'll guess a function of the form $a*(time**2)+(b*time)+c$:
###Code
def f(t, params):
a,b,c = params
return a*(t**2) + (b*t) + c
###Output
_____no_output_____
###Markdown
Thus, to find the best quadratic function, we only need to find the best values for a, b, and c. - We need to define first what we mean by "best." - We define this precisely by choosing a loss function, which will return a value based on a prediction and a target, where lower values of the function correspond to "better" predictions. - For continuous data, it's common to use mean squared error:
###Code
def mse(preds, targets): return ((preds-targets)**2).mean().sqrt()
###Output
_____no_output_____
###Markdown
**Step 1: Initialize the parameters**
###Code
params=torch.randn(3).requires_grad_()
orig_params=params.clone()
orig_params
###Output
_____no_output_____
###Markdown
**Step 2: Calculate the predictions**
###Code
preds=f(time,params)
def show_preds(preds, ax=None):
if ax is None: ax=plt.subplots()[1]
ax.scatter(time, speed)
ax.scatter(time, to_np(preds), color='red')
ax.set_ylim(-300,100)
to_np??
show_preds(preds)
###Output
_____no_output_____
###Markdown
our random parameters suggest that the roller coaster will end up going backwards, since we have negative speeds! **Step 3: Calculate the loss**
###Code
loss=mse(preds,speed)
loss
###Output
_____no_output_____
###Markdown
**Step 4: Calculate the gradients**
###Code
loss.backward()
params.grad
params.grad * 1e-5
###Output
_____no_output_____
###Markdown
We'll need to pick a learning rate (we'll discuss how to do that in practice in the next chapter; for now we'll just use 1e-5, or 0.00001):
###Code
params
###Output
_____no_output_____
###Markdown
**Step 5: Step the weights**
###Code
lr = 1e-5
params.data -= lr * params.grad.data
params.grad = None
###Output
_____no_output_____
###Markdown
> a: Understanding this bit depends on remembering recent history. To calculate the gradients we call backward on the loss. But this loss was itself calculated by mse, which in turn took preds as an input, which was calculated using f taking as an input params, which was the object on which we originally called requires_grad_—which is the original call that now allows us to call backward on loss. This chain of function calls represents the mathematical composition of functions, which enables PyTorch to use calculus's chain rule under the hood to calculate these gradients.
###Code
preds = f(time,params)
mse(preds,speed)
show_preds(preds)
###Output
_____no_output_____
###Markdown
we'll create a function to apply one step:
###Code
def apply_step(params, prn=True):
preds = f(time, params)
loss = mse(preds, speed)
loss.backward()
params.data -= lr * params.grad.data
params.grad = None
if prn: print(loss.item())
return preds
###Output
_____no_output_____
###Markdown
**Step 6: Repeat the process**
###Code
for i in range(10): apply_step(params)
params=orig_params.detach().requires_grad_()
_,axs = plt.subplots(1,4,figsize=(12,3))
for ax in axs: show_preds(apply_step(params, False), ax)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
**Step 7: stop** Summarizing Gradient Descent
###Code
#hide_input
#id gradient_descent
#caption The gradient descent process
#alt Graph showing the steps for Gradient Descent
gv('''
init->predict->loss->gradient->step->stop
step->predict[label=repeat]
''')
###Output
_____no_output_____
###Markdown
The MNIST Loss Function
###Code
train_x = torch.cat([stacked_threes, stacked_sevens]).view(-1, 28*28)
###Output
_____no_output_____
###Markdown
We'll use 1 for 3s and 0 for 7s:
###Code
train_y = tensor([1]*len(threes) + [0]*len(sevens)).unsqueeze(1)
train_x.shape,train_y.shape
dset = list(zip(train_x,train_y))
x,y = dset[0]
x.shape,y
valid_x = torch.cat([valid_3_tens, valid_7_tens]).view(-1, 28*28)
valid_y = tensor([1]*len(valid_3_tens) + [0]*len(valid_7_tens)).unsqueeze(1)
valid_dset = list(zip(valid_x,valid_y))
###Output
_____no_output_____
###Markdown
Now we need an (initially random) weight for every pixel
###Code
def init_params(size,std=1.0): return(torch.randn(size)*std).requires_grad_()
weights = init_params((28*28,1))
###Output
_____no_output_____
###Markdown
The function `weights*pixels` won't be flexible enough—it is always equal to 0 when the pixels are equal to 0 (i.e., its intercept is 0). You might remember from high school math that the formula for a line is `y=w*x+b`; we still need the `b`. We'll initialize it to a random number too:
###Code
bias=init_params(1)
###Output
_____no_output_____
###Markdown
Let's calculate a prediction for a single image.
###Code
(train_x[0]*weights.T).sum() + bias
###Output
_____no_output_____
###Markdown
In Python, matrix multiplication is represented with the @ operator. Let's try it:
###Code
def linear1(xb): return xb@weights + bias
preds = linear1(train_x)
preds
corrects = (preds>0.5).float() == train_y
corrects
corrects.float().mean().item()
preds = linear1(train_x)
((preds>0.0).float() == train_y).float().mean().item()
###Output
_____no_output_____
###Markdown
The purpose of the loss function is to measure the difference between predicted values and the true values — that is, the targets (aka labels). Let's make another argument, trgts, with values of 0 or 1 which tells whether an image actually is a 3 or not. It is also a vector (i.e., another rank-1 tensor), indexed over the images. This would mean our loss function would receive these values as its inputs:
###Code
trgts = tensor([1,0,1])
prds = tensor([0.9, 0.4, 0.2])
def mnist_loss(predictions, targets):
return torch.where(targets==1, 1-predictions, predictions).mean()
###Output
_____no_output_____
###Markdown
`torch.where(a,b,c)` is the same as running the list comprehension [b[i] if a[i] else c[i] for i in range(len(a))], except it works on tensors, at C/CUDA speed.
###Code
torch.where(trgts==1, 1-prds, prds)
mnist_loss(prds,trgts)
mnist_loss(tensor([0.9, 0.4, 0.8]),trgts)
###Output
_____no_output_____
###Markdown
Sigmoid One problem with mnist_loss as currently defined is that it assumes that predictions are always between 0 and 1. We need to ensure, then, that this is actually the case! As it happens, there is a function that does exactly that—let's take a look.
###Code
def sigmoid(x): return 1/(1+torch.exp(-x))
###Output
_____no_output_____
###Markdown
Pytorch defines an accelerated version for us, so we don’t really need our own. This is an important function in deep learning, since we often want to ensure values are between 0 and 1. This is what it looks like:
###Code
plot_function(torch.sigmoid, title='Sigmoid', min=-4, max=4)
###Output
_____no_output_____
###Markdown
As you can see, **it takes any input value, positive or negative**, and smooshes it onto an output value between 0 and 1. It's also a smooth curve that only goes up, which makes it easier for SGD to find meaningful gradients.
###Code
def mnist_loss(predictions, targets):
predictions = predictions.sigmoid()
return torch.where(targets==1, 1-predictions, predictions).mean()
###Output
_____no_output_____
###Markdown
SGD and Mini-Batches Now that we have a loss function that is suitable for driving SGD, we can consider some of the details involved in the next phase of the learning process, which is to change or update the weights based on the gradients. This is called an optimization step. *A larger batch size means that you will get a more accurate and stable estimate of your dataset's gradients from the loss function, but it will take longer, and you will process fewer mini-batches per epoch.* **Choosing a good batch size is one of the decisions you need to make as a deep learning practitioner to train your model quickly and accurately. We will talk about how to make this choice throughout this book.** Another good reason for using mini-batches rather than calculating the gradient on individual data items is that, in practice, we nearly always do our training on an accelerator such as a GPU. One simple and effective thing we can vary is what data items we put in each mini-batch. Rather than simply enumerating our dataset in order for every epoch, instead what we normally do is randomly shuffle it on every epoch, before we create mini-batches. **PyTorch and fastai provide a class that will do the shuffling and mini-batch collation for you, called DataLoader**. A DataLoader can take any Python collection and turn it into an iterator over many batches, like so:
###Code
coll = range(15)
dl = DataLoader(coll, batch_size=5, shuffle=True)
list(dl)
###Output
_____no_output_____
###Markdown
A collection that contains tuples of independent and dependent variables is known in PyTorch as a Dataset. Here's an example of an extremely simple Dataset:
###Code
ds = L(enumerate(string.ascii_lowercase))
ds
###Output
_____no_output_____
###Markdown
When we pass a Dataset to a DataLoader we will get back many batches which are themselves tuples of tensors representing batches of independent and dependent variables:- Passing a Dataset to a DataLoader gives back many batches, each of which is a tuple of tensors representing a batch of independent and dependent variables.
###Code
dl = DataLoader(ds, batch_size=6, shuffle=True)
list(dl)
###Output
_____no_output_____
###Markdown
Putting It All Together
###Code
weights = init_params((28*28,1))
bias = init_params(1)
dl = DataLoader(dset, batch_size=256)
xb,yb = first(dl)
xb.shape,yb.shape
valid_dl = DataLoader(valid_dset, batch_size=256)
batch = train_x[:4]
batch.shape
preds = linear1(batch)
preds
loss = mnist_loss(preds, train_y[:4])
loss
loss.backward()
weights.grad.shape,weights.grad.mean(),bias.grad
def calc_grad(xb, yb, model):
preds = model(xb)
loss = mnist_loss(preds, yb)
loss.backward()
calc_grad(batch, train_y[:4], linear1)
weights.grad.mean(),bias.grad
calc_grad(batch, train_y[:4], linear1)
weights.grad.mean(),bias.grad
###Output
_____no_output_____
###Markdown
The gradients have changed! *The reason for this is that loss.backward actually adds the gradients of loss to any gradients that are currently stored*. So, we have to set the current gradients to 0 first:
###Code
weights.grad.zero_()
bias.grad.zero_();
def train_epoch(model, lr, params):
for xb,yb in dl:
calc_grad(xb, yb, model)
for p in params:
p.data -= p.grad*lr
p.grad.zero_()
###Output
_____no_output_____
###Markdown
To decide if an output represents a 3 or a 7, we can just check whether it's greater than 0. So our accuracy for each item can be calculated (using broadcasting, so no loops!) with:
###Code
(preds>0.0).float() == train_y[:4]
def batch_accuracy(xb, yb):
preds = xb.sigmoid()
correct = (preds>0.5) == yb
return correct.float().mean()
batch_accuracy(linear1(batch), train_y[:4])
def validate_epoch(model):
accs = [batch_accuracy(model(xb), yb) for xb,yb in valid_dl]
return round(torch.stack(accs).mean().item(), 4)
validate_epoch(linear1)
lr = 1.
params = weights,bias
train_epoch(linear1, lr, params)
validate_epoch(linear1)
for i in range(20):
train_epoch(linear1, lr, params)
print(validate_epoch(linear1), end=' ')
###Output
0.8857 0.9321 0.9433 0.9482 0.9516 0.9545 0.9565 0.9584 0.9614 0.9623 0.9628 0.9643 0.9658 0.9658 0.9662 0.9662 0.9662 0.9672 0.9672 0.9672
###Markdown
We're already about at the same accuracy as our "pixel similarity" approach, and we've created a general-purpose foundation we can build on. Our next step will be to create an object that will handle the SGD step for us. In PyTorch, it's called an optimizer.- In PyTorch, we will create an object called an optimizer that handles the SGD step for us. Creating an Optimizer The first thing we can do is replace our linear1 function with PyTorch's `nn.Linear` module. A module is an object of a class that inherits from the PyTorch `nn.Module` class. `nn.Linear` does the same thing as our `init_params` and `linear` together. It contains both the weights and biases in a single class. Here's how we replicate our model from the previous section:
###Code
linear_model = nn.Linear(28*28,1)
w,b = linear_model.parameters()
w.shape,b.shape
###Output
_____no_output_____
###Markdown
We can use this information to create an optimizer:
###Code
class BasicOptim:
def __init__(self,params,lr): self.params,self.lr = list(params),lr
def step(self, *args, **kwargs):
for p in self.params: p.data -= p.grad.data * self.lr
def zero_grad(self, *args, **kwargs):
for p in self.params: p.grad = None
###Output
_____no_output_____
###Markdown
We can create our optimizer by passing in the model's parameters:
###Code
opt = BasicOptim(linear_model.parameters(), lr)
###Output
_____no_output_____
###Markdown
Our training loop can now be simplified to:
###Code
def train_epoch(model):
for xb,yb in dl:
calc_grad(xb, yb, model)
opt.step()
opt.zero_grad()
validate_epoch(linear_model)
###Output
_____no_output_____
###Markdown
Let's put our little training loop in a function, to make things simpler:
###Code
def train_model(model, epochs):
for i in range(epochs):
train_epoch(model)
print(validate_epoch(model), end=' ')
train_model(linear_model, 20)
###Output
0.4932 0.4932 0.6763 0.8701 0.9209 0.9365 0.9516 0.957 0.9634 0.9658 0.9678 0.9702 0.9721 0.9741 0.9746 0.9761 0.977 0.9775 0.978 0.978
###Markdown
fastai provides the `SGD` class which, by default, does the same thing as our `BasicOptim`:
###Code
linear_model = nn.Linear(28*28,1)
opt = SGD(linear_model.parameters(), lr)
train_model(linear_model, 20)
###Output
0.4932 0.7949 0.8535 0.916 0.9355 0.9472 0.9575 0.9634 0.9663 0.9678 0.9697 0.9712 0.9746 0.9751 0.9761 0.9765 0.9775 0.978 0.978 0.979
###Markdown
fastai also provides `Learner.fit`, which we can use instead of `train_model`. To create a `Learner` we first need to create a `DataLoaders`, by passing in our training and validation `DataLoaders`:
###Code
dls = DataLoaders(dl, valid_dl)
###Output
_____no_output_____
###Markdown
To create a Learner without using an application (such as cnn_learner) we need to pass in all the elements that we've created in this chapter: the DataLoaders, the model, the optimization function (which will be passed the parameters), the loss function, and optionally any metrics to print:- To create a Learner without an application such as cnn_learner, we have to pass in the DataLoaders, the model, the optimization function, the loss function, and optionally any metrics to print.
###Code
learn = Learner(dls, nn.Linear(28*28,1), opt_func=SGD,
loss_func=mnist_loss, metrics=batch_accuracy)
learn.fit(10, lr=lr)
###Output
_____no_output_____
###Markdown
As you can see, there's nothing magic about the PyTorch and fastai classes. They are just convenient pre-packaged pieces that make your life a bit easier! Adding a Nonlinearity Here is the entire definition of a basic neural network:
###Code
def simple_net(xb):
res = xb@w1 + b1
res = res.max(tensor(0.0))
res = res@w2 + b2
return res
###Output
_____no_output_____
###Markdown
That's it! All we have in `simple_net` is two linear classifiers with a `max` function between them. Here, `w1` and `w2` are weight tensors, and `b1` and `b2` are bias tensors:
###Code
w1 = init_params((28*28,30)) # has 30 output activations
b1 = init_params(30)
w2 = init_params((30,1)) # must have 30 input activations, so they match
b2 = init_params(1)
###Output
_____no_output_____
###Markdown
That little function res.max(tensor(0.0)) is called a rectified linear unit, also known as ReLU. We think we can all agree that rectified linear unit sounds pretty fancy and complicated... But actually, there's nothing more to it than res.max(tensor(0.0))—in other words, replace every negative number with a zero. This tiny function is also available in PyTorch as F.relu:- res.max(tensor(0.0)), known as ReLU (rectified linear unit), simply replaces every negative number with zero.
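For example, both forms give the same result (a small standalone sketch using plain PyTorch rather than the fastai `tensor` wrapper):

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, -0.5, 0.0, 1.5])
print(x.max(torch.tensor(0.0)))  # tensor([0.0000, 0.0000, 0.0000, 1.5000])
print(F.relu(x))                 # identical: every negative value becomes zero
```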
###Code
plot_function(F.relu)
###Output
_____no_output_____
###Markdown
The basic idea is that by using more linear layers, we can have our model do more computation, and therefore model more complex functions. But there's no point just putting one linear layer directly after another one, because when we multiply things together and then add them up multiple times, that could be replaced by multiplying different things together and adding them up just once! That is to say, a series of any number of linear layers in a row can be replaced with a single linear layer with a different set of parameters.- The basic idea is that the more linear layers we use, the more computation the model can do and the more complex the functions it can model.- But attaching one linear layer directly to another is pointless, because multiplying and adding several times is the same as multiplying and adding once; that is, a series of linear layers in a row can be replaced by a single linear layer with a different set of parameters.But if we put a nonlinear function between them, such as `max`, then this is no longer true. Now each linear layer is actually somewhat decoupled from the other ones, and can do its own useful work. The max function is particularly interesting, because it operates as a simple if statement.- If we put a nonlinear function such as `max` between the linear layers, each linear layer becomes somewhat decoupled from the others and can do its own useful work; the `max` function is particularly interesting because it acts like a simple if statement.Amazingly enough, it can be mathematically proven that this little function can solve any computable problem to an arbitrarily high level of accuracy, if you can find the right parameters for w1 and w2 and if you make these matrices big enough. For any arbitrarily wiggly function, we can approximate it as a bunch of lines joined together; to make it closer to the wiggly function, we just have to use shorter lines. This is known as the `universal approximation theorem`. The three lines of code that we have here are known as layers. The first and third are known as linear layers, and the second line of code is known variously as a nonlinearity, or activation function.- It can be proven mathematically that this little function can solve any computable problem to an arbitrarily high accuracy,- provided we can find the right parameters for w1 and w2 and make these matrices big enough.- Any arbitrarily wiggly function can be approximated as a bunch of lines joined together;- to get closer to the wiggly function we just use shorter lines. This is known as the universal approximation theorem.- These three lines of code are called layers; the first and third are linear layers, and the second is a nonlinearity, or activation function.
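To see why stacking linear layers without a nonlinearity gains nothing, here is a small standalone sketch with made-up matrix sizes (not the notebook's w1/w2) showing that two consecutive linear maps collapse into one:

```python
import torch

torch.manual_seed(0)
x = torch.randn(5, 4)                       # a batch of 5 inputs with 4 features
w1, b1 = torch.randn(4, 3), torch.randn(3)  # first "linear layer"
w2, b2 = torch.randn(3, 2), torch.randn(2)  # second "linear layer"

two_layers = (x @ w1 + b1) @ w2 + b2        # two linear layers in a row, no nonlinearity
w, b = w1 @ w2, b1 @ w2 + b2                # ...collapse into a single equivalent linear layer
one_layer = x @ w + b

print(torch.allclose(two_layers, one_layer, atol=1e-6))  # True: stacking adds no expressive power
```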
###Code
simple_net = nn.Sequential(
nn.Linear(28*28,30),
nn.ReLU(),
nn.Linear(30,1)
)
###Output
_____no_output_____
###Markdown
`nn.Sequential` creates a module that will call each of the listed layers or functions in turn.- nn.Sequential creates a module that calls each of the listed layers or functions in turn.`nn.ReLU` is a PyTorch module that does exactly the same thing as the F.relu function. Most functions that can appear in a model also have identical forms that are modules. Generally, it's just a case of replacing F with nn and changing the capitalization. When using nn.Sequential, PyTorch requires us to use the module version. Since modules are classes, we have to instantiate them, which is why you see nn.ReLU() in this example.- When using nn.Sequential, PyTorch requires us to use the module version.- Since modules are classes, we have to instantiate them, which is why we see nn.ReLU() here.Because `nn.Sequential` is a module, we can get its parameters, which will return a list of all the parameters of all the modules it contains. Let's try it out! As this is a deeper model, we'll use a lower learning rate and a few more epochs.- Because nn.Sequential is a module, we can get its parameters, and it will return a list of all the parameters of all the modules it contains.
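For example, here is a small standalone sketch (re-creating the same architecture rather than reusing the `simple_net` variable above) that lists the parameters the container collects:

```python
import torch.nn as nn

net = nn.Sequential(nn.Linear(28*28, 30), nn.ReLU(), nn.Linear(30, 1))
for p in net.parameters():
    print(p.shape)
# torch.Size([30, 784]), torch.Size([30]), torch.Size([1, 30]), torch.Size([1]):
# the weights and biases of both Linear layers are exposed through the container
```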
###Code
learn = Learner(dls, simple_net, opt_func=SGD,
loss_func=mnist_loss, metrics=batch_accuracy)
#hide_output
learn.fit(40, 0.1)
###Output
_____no_output_____
###Markdown
The training process is recorded in `learn.recorder`, with the table of output stored in the `values` attribute, so we can plot the accuracy over training as:
###Code
plt.plot(L(learn.recorder.values).itemgot(2));
learn.recorder.values[-1][2]
###Output
_____no_output_____
###Markdown
1. A function that can solve any problem to any level of accuracy (the neural network) given the correct set of parameters - Given the correct set of parameters, this function (the neural network) can solve a problem to any level of accuracy.2. A way to find the best set of parameters for any function (stochastic gradient descent) - And we have a way (stochastic gradient descent) to find the best set of parameters for any function. Going Deeper We already know that a single nonlinearity with two linear layers is enough to approximate any function. So why would we use deeper models? The reason is performance. With a deeper model (that is, one with more layers) we do not need to use as many parameters; it turns out that we can use smaller matrices with more layers, and get better results than we would get with larger matrices, and fewer layers.- We saw that a single nonlinearity between two linear layers is enough to approximate any function.- So why use deeper models? The reason is performance.- A deeper model does not need as many parameters;- using smaller matrices with more layers gives better results than larger matrices with fewer layers.That means that we can train the model more quickly, and it will take up less memory.- This means the model trains more quickly and takes up less memory. Here is what happens when we train an 18-layer model:
###Code
dls = ImageDataLoaders.from_folder(path)
learn = cnn_learner(dls, resnet18, pretrained=False,
loss_func=F.cross_entropy, metrics=accuracy)
learn.fit_one_cycle(1, 0.1)
###Output
_____no_output_____ |
02-NoteBooks/02-TargetAnalysis.ipynb | ###Markdown
Libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
pd.set_option('display.max_columns', 200)
###Output
_____no_output_____
###Markdown
Read Data
###Code
data = pd.read_csv('../01-Data/DataGas.csv', parse_dates=['Analysis_Date', 'Last_Day_of_Analyses_of_Week'])
data.sample(10)
data.columns
###Output
_____no_output_____
###Markdown
Target Analysis Train and Validation Split (Simple Holdout)
###Code
data_train = data[data['Last_Day_of_Analyses_of_Week'] < '2011-01-01']
data_valid = data[data['Last_Day_of_Analyses_of_Week'] >= '2011-01-01']
data_train.shape, data_valid.shape
data_train['diff_1_Mean_Price'] = data_train.groupby(['State'])['Mean_Price'].apply(lambda row: row.diff().shift(-1))
###Output
_____no_output_____
###Markdown
Plot Graphic
###Code
plt.figure(figsize=[24,6])
plt.subplot(1, 2, 1)
plt.plot('Last_Day_of_Analyses_of_Week', 'Mean_Price', data=data_train)
plt.xlabel('Year')
plt.ylabel('Average Resale Price')
plt.title('Average GASOLINA COMUM Resale Price by time')
plt.subplot(1, 2, 2)
plt.plot('Last_Day_of_Analyses_of_Week', 'diff_1_Mean_Price', data=data_train)
plt.xlabel('Year')
plt.ylabel('First Difference of Average Resale Price')
plt.title('First Difference of Average GASOLINA COMUM Resale Price by time')
plt.show()
###Output
_____no_output_____ |
src/TeamPlayerStats.ipynb | ###Markdown
###Code
TEAM = "West Ham"
SEASON = "2020"
###Output
_____no_output_____
###Markdown
Installing Required Libraries
###Code
!pip install aiohttp
!pip install understat
###Output
_____no_output_____
###Markdown
Importing Library and Getting Data
###Code
import asyncio
import json
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import aiohttp
from understat import Understat
import nest_asyncio
nest_asyncio.apply()
async def main():
global players
async with aiohttp.ClientSession() as session:
understat = Understat(session)
players = await understat.get_team_players(TEAM, SEASON)
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
###Output
_____no_output_____
###Markdown
Non-Penalty Goals + Assists per 90
###Code
players2 = []
for player in players:
if (((float(player['npg']) + float(player['assists'])) / int(player['time'])) * 90 > 0.1):
players2.append([player['player_name'], ((float(player['npg']) + float(player['assists'])) / int(player['time'])) * 90])
df2 = pd.DataFrame(data = players2, columns=['Player', 'non penalty goals + assists per 90'])
df2
sns.set_theme()
sns.barplot(x = 'non penalty goals + assists per 90', y = 'Player', data = df2, orient='h')
###Output
_____no_output_____
###Markdown
Non Penalty Goals and Assists
###Code
players3 = []
for player in players:
if float(player['npg']) + float(player['assists']) > 1:
players3.append([player['player_name'], float(player['npg']) + float(player['assists'])])
df3 = pd.DataFrame(data = players3, columns=['Player', 'non penalty goals + assists'], )
df3
sns.set_theme()
sns.barplot(x = 'non penalty goals + assists', y = 'Player', data = df3, orient='h')
###Output
_____no_output_____
###Markdown
Non-Penalty Expected Goals + Expected Assists per 90
###Code
players1 = []
for player in players:
if (((float(player['npxG']) + float(player['xA'])) / int(player['time'])) * 90 > 0.1):
players1.append([player['player_name'], ((float(player['npxG']) + float(player['xA'])) / int(player['time'])) * 90])
df1 = pd.DataFrame(data = players1, columns=['Player', 'npxG + xA per 90'], )
df1
sns.set_theme()
sns.barplot(x = 'npxG + xA per 90', y = 'Player', data = df1, orient='h')
###Output
_____no_output_____
###Markdown
Minutes Played
###Code
players4 = []
for player in players:
if (int(player['time']) >= 1000):
players4.append([player['player_name'], int(player['time'])])
df4 = pd.DataFrame(data = players4, columns=['Player', 'time'], )
df4
sns.set_theme()
sns.barplot(x = 'time', y = 'Player', data = df4, orient='h')
###Output
_____no_output_____
###Markdown
Key passes
###Code
players4 = []
for player in players:
if (int(player['key_passes']) > 2):
players4.append([player['player_name'], int(player['key_passes'])])
df4 = pd.DataFrame(data = players4, columns=['Player', 'key_passes'], )
df4
sns.set_theme()
sns.barplot(x = 'key_passes', y = 'Player', data = df4, orient='h')
###Output
_____no_output_____
###Markdown
Average xA per key pass
###Code
players4 = []
for player in players:
if (int(player['key_passes']) > 2):
if (float(player['xA']) / int(player['key_passes']) > 0.1):
players4.append([player['player_name'], float(player['xA']) / int(player['key_passes'])])
df4 = pd.DataFrame(data = players4, columns=['Player', 'avg xA per key pass'], )
df4
sns.set_theme()
sns.barplot(x = 'avg xA per key pass', y = 'Player', data = df4, orient='h')
###Output
_____no_output_____ |
trainings/hyperparameters/.ipynb_checkpoints/hyperparameters-checkpoint.ipynb | ###Markdown
Optimizing hyperparameters with Pipelines and Gridsearch When training machine learning models, usually there are certain model parameters which need to be chosen. Some examples are:- **Random Forest**: The max depth of the trees, splitting parameters, max features per tree, etc.- **Logistic Regression**: Penalty function, stopping criteria, regularization strength- **Neural Network**: Activation functions, learning rate, gradient descent algorithm- **Tokenizing**: Filter out frequent words, unigrams vs bigrams vs trigrams, etc...> Note that parameters in the feature preparation step should also be included in the hyperparameters, as these can also have a big influence on the accuracy of our model. Almost every machine learning model has so-called hyperparameters which need to be chosen. Often default values are given in ML frameworks like scikit-learn, but if we do some optimization we can probably improve our model performance by tweaking these values. Geometric interpretation So how are we going to find these parameters? What problem are we trying to solve?> Given the test/training data, find the hyperparameters that minimize the model error (classification/regression) on the test data. If we consider a problem with two hyperparameters, we can actually plot this as a function: Here the $(x,y)$ points are combinations of our two hyperparameters, and $z:=f(x,y)$ is the error on the training data. Often our hyperparameter-space will be of much higher dimensions, but that becomes more difficult to visualize. Grid Search There are many approaches to tuning these hyperparameters (a whole [field of mathematics](https://en.wikipedia.org/wiki/Mathematical_optimization?oldformat=true) is focused on creating such algorithms!); the simplest one is the grid search. We split the parameter space into an evenly defined grid like this: We then calculate the error for each grid point, and hope we didn't miss much in the points we didn't calculate. We then choose the best one as our optimum. Here's [a fun article](https://medium.com/rants-on-machine-learning/smarter-parameter-sweeps-or-why-grid-search-is-plain-stupid-c17d97a0e881) about why there are much better techniques than grid search, for the curious reader.> **Note**: There are smarter ways of doing such a search (like stochastic gradient descent) but these need some information about the slope of the error function at the point you're evaluating it. In some situations this is possible, but often it won't be. Implementation in scikit-learn For implementation, we're going to use a simple text classification problem. We'll put the entire model (from data to score) into a **`Pipeline`** object so that we can easily pass it to a **`GridSearchCV`** object which will do the searching for us. There are two classes which are used to create machine-learning models in scikit-learn:- Transformers: These have a fit and transform method.- Estimators: These have a fit and predict method. When creating an ML model, you use **multiple transformers** (for feature extraction, text preprocessing, post processing of scores) together with **one estimator**. The pipeline class can be used to combine all these separate steps into one new estimator, which we can then do cool stuff with.
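To make the grid concrete before running the real search, here is a small sketch using scikit-learn's `ParameterGrid` with the same parameter values as the cell below (the `hashing__*` names refer to the pipeline step defined there):

```python
from sklearn.model_selection import ParameterGrid

grid = {
    'hashing__n_features': [2**n for n in range(12, 18)],  # 6 values
    'hashing__ngram_range': [(1, 1), (1, 2), (1, 3)],      # 3 values
}
candidates = list(ParameterGrid(grid))
print(len(candidates))  # 18 grid points; GridSearchCV fits and cross-validates each one
print(candidates[0])    # e.g. {'hashing__n_features': 4096, 'hashing__ngram_range': (1, 1)}
```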
###Code
from sklearn.datasets import fetch_20newsgroups
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression
news = fetch_20newsgroups()
parameters = {
'hashing__n_features': [2**n for n in range(12,18)],
'hashing__ngram_range': [(1,1), (1,2), (1,3)]
}
pipeline = Pipeline([
('hashing', HashingVectorizer(strip_accents='ascii',
analyzer='word',
stop_words='english')),
('logres', LogisticRegression())
])
clf = GridSearchCV(estimator = pipeline,
param_grid = parameters,
scoring = 'accuracy',
n_jobs=-1)
clf.fit(news.data, news.target)
###Output
_____no_output_____
###Markdown
Great! It looks like our grid search was successful. Let's have a look at what the outcome was. We can do this by looking at the **`mean_test_score`** and the corresponding **`params`** lists in the **`cv_results_`** attribute:
###Code
score = clf.cv_results_['mean_test_score']
features = [[par['hashing__n_features'], par['hashing__ngram_range'][1]]
for par in clf.cv_results_['params']]
import matplotlib.pyplot as plt
import matplotlib
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter([x[0] for x in features], [x[1] for x in features], score, c = score, s = score*150)
ax.set_xlabel('hash dimension')
ax.set_ylabel('max ngram size')
ax.set_zlabel('accuracy')
matplotlib.rcParams.update({'figure.figsize':[10,10]})
ax.view_init(30, 20)
plt.draw()
plt.pause(.001)
plt.show()
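# The fitted GridSearchCV also exposes the winning combination directly
# (clf is the fitted search from above; best_score_ and best_params_ are standard attributes):
print(clf.best_score_)   # best mean cross-validated accuracy over the grid
print(clf.best_params_)  # the hyperparameter combination that achieved it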
###Output
_____no_output_____ |
Code/Archive/.ipynb_checkpoints/ndar_fmri_to_subfolders-checkpoint.ipynb | ###Markdown
PART TWO
###Code
pwd
indir = './subjects/'
folders = np.array(os.listdir(indir))
has_nii = np.array([any(['.nii' in cont for cont in os.listdir(os.path.join(indir,f))]) for f in folders])
has_tar = np.array([any(['.tar' in cont for cont in os.listdir(os.path.join(indir,f))]) for f in folders])
has_rest = np.array([any(['rest' in cont.lower() for cont in os.listdir(os.path.join(indir,f))]) for f in folders])
(has_rest * has_nii).sum()
exts = list()
for folder in folders:
these_exts = [cont.split('.')[-1] for cont in os.listdir(os.path.join(indir,folder))]
[exts.append(ext) for ext in these_exts]
c = pd.DataFrame(np.array(exts),columns=['key'])
c['ones'] = np.ones(len(c))
c.groupby(['key']).count()
###Output
_____no_output_____ |
3-Building-the-Classifier.ipynb | ###Markdown
Building the Classifier 1. Loading the corpus
###Code
import re
from nltk import sent_tokenize
cg_sents = []
smg_sents = []
def remove_duplicate_punctuation(s):
return(re.sub(r'(\.|\?|!|;)\1+', r'\1 ', s))
with open('./Data/cg_twitter.txt', 'r', encoding='utf-8') as in_file:
text = remove_duplicate_punctuation(in_file.read())
text = re.sub(r'([a-zα-ωίϊΐόάέύϋΰήώ])(\.|\?|;|!)([A-ZΑ-ΩΆΈΊΌΎΏΉ])', r'\1\2 \3', text)
lines = [p for p in text.split('\n') if p]
for line in lines:
cg_sents += sent_tokenize(line)
with open('./Data/cg_fb.txt', 'r', encoding='utf-8') as in_file:
text = remove_duplicate_punctuation(in_file.read())
text = re.sub(r'([a-zα-ωίϊΐόάέύϋΰήώ])(\.|\?|;|!)([A-ZΑ-ΩΆΈΊΌΎΏΉ])', r'\1\2 \3', text)
lines = [p for p in text.split('\n') if p]
for line in lines:
cg_sents += sent_tokenize(line)
with open('./Data/cg_other.txt', 'r', encoding='utf-8') as in_file:
text = remove_duplicate_punctuation(in_file.read())
text = re.sub(r'([a-zα-ωίϊΐόάέύϋΰήώ])(\.|\?|;|!)([A-ZΑ-ΩΆΈΊΌΎΏΉ])', r'\1\2 \3', text)
lines = [p for p in text.split('\n') if p]
for line in lines:
cg_sents += sent_tokenize(line)
with open('./Data/smg_twitter.txt', 'r', encoding='utf-8') as in_file:
text = remove_duplicate_punctuation(in_file.read())
text = re.sub(r'([a-zα-ωίϊΐόάέύϋΰήώ])(\.|\?|;|!)([A-ZΑ-ΩΆΈΊΌΎΏΉ])', r'\1\2 \3', text)
lines = [p for p in text.split('\n') if p]
for line in lines:
smg_sents += sent_tokenize(line)
with open('./Data/smg_fb.txt', 'r', encoding='utf-8') as in_file:
text = remove_duplicate_punctuation(in_file.read())
text = re.sub(r'([a-zα-ωίϊΐόάέύϋΰήώ])(\.|\?|;|!)([A-ZΑ-ΩΆΈΊΌΎΏΉ])', r'\1\2 \3', text)
lines = [p for p in text.split('\n') if p]
for line in lines:
smg_sents += sent_tokenize(line)
with open('./Data/smg_other.txt', 'r', encoding='utf-8') as in_file:
text = remove_duplicate_punctuation(in_file.read())
text = re.sub(r'([a-zα-ωίϊΐόάέύϋΰήώ])(\.|\?|;|!)([A-ZΑ-ΩΆΈΊΌΎΏΉ])', r'\1\2 \3', text)
lines = [p for p in text.split('\n') if p]
for line in lines:
smg_sents += sent_tokenize(line)
cg_sents[:3]
###Output
_____no_output_____
###Markdown
2. Cleaning the text
###Code
import unicodedata
from string import punctuation
from nltk.tokenize import WhitespaceTokenizer
punctuation += '´΄’…“”–—―»«'
def strip_accents(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def get_clean_sent_el(sentence):
    sentence = re.sub(r'^RT', '', sentence)  # drop the retweet marker
    sentence = re.sub(r'\&\w*;', '', sentence)  # drop HTML entities such as &amp;
    sentence = re.sub(r'\@\w*', '', sentence)  # drop @mentions
    sentence = re.sub(r'\$\w*', '', sentence)  # drop $cashtags
    sentence = re.sub(r'https?:\/\/.*\/\w*', '', sentence)  # drop URLs
    sentence = ''.join(c for c in sentence if c <= '\uFFFF')  # drop characters outside the Basic Multilingual Plane (e.g. emoji)
    sentence = strip_accents(sentence)  # remove accent marks
    sentence = re.sub(r'#\w*', '', sentence)  # drop hashtags
    sentence = sentence.lower()
    tokens = WhitespaceTokenizer().tokenize(sentence)
    new_tokens = []
    for token in tokens:
        if token == 'ο,τι' or token == 'ό,τι' or token == 'o,ti' or token == 'ó,ti':
            new_tokens.append(token)  # keep this word as is: its comma is part of the word, not punctuation
        else:
            token = re.sub(r'(?<=[.,!\?;\'΄´])(?=[^\s])', r' ', token)  # add a space after punctuation glued to the next character
            new_token = token.translate(str.maketrans({key: None for key in punctuation}))  # strip all punctuation
            if (new_token != ''):
                new_tokens.append(new_token)
    sentence = ' '.join(new_tokens)
    sentence = re.sub('\ufeff', '', sentence)  # drop the byte-order mark if present
    sentence = sentence.strip(' ')
    sentence = re.sub(' +', ' ', sentence)  # collapse repeated spaces
    return sentence
cg_sents_clean = []
smg_sents_clean = []
for sent in cg_sents:
cg_sents_clean.append(get_clean_sent_el(sent))
for sent in smg_sents:
smg_sents_clean.append(get_clean_sent_el(sent))
cg_sents_clean = list(filter(None, cg_sents_clean))
smg_sents_clean = list(filter(None, smg_sents_clean))
cg_sents_clean[:3]
###Output
_____no_output_____
###Markdown
3. Building the feature extractor
###Code
from nltk import ngrams
def get_word_ngrams(tokens, n):
ngrams_list = []
ngrams_list.append(list(ngrams(tokens, n)))
ngrams_flat_tuples = [ngram for ngram_list in ngrams_list for ngram in ngram_list]
format_string = '%s'
for i in range(1, n):
format_string += (' %s')
ngrams_list_flat = [format_string % ngram_tuple for ngram_tuple in ngrams_flat_tuples]
return ngrams_list_flat
def get_char_ngrams(word, n):
ngrams_list = []
word = re.sub(r'ς', 'σ', word)
ngrams_list.append(list(ngrams(word, n, pad_left=True, pad_right=True, left_pad_symbol='_', right_pad_symbol='_')))
# Removing redundant ngrams:
if (n > 2):
redundant_combinations = n - 2
ngrams_list = [ngram_list[redundant_combinations : -redundant_combinations] for ngram_list in ngrams_list]
ngrams_flat_tuples = [ngram for ngram_list in ngrams_list for ngram in ngram_list]
format_string = ''
for i in range(0, n):
format_string += ('%s')
ngrams_list_flat = [format_string % ngram_tuple for ngram_tuple in ngrams_flat_tuples]
return ngrams_list_flat
# Feature extractor
def get_ngram_features(sent): # The reason I do not use NLTK's everygrams to extract the features quickly is because the behavior of my n-gram extractor is modified to remove redundant n-grams. Also, I need to label word and char n-grams to avoid ambiguity
sentence_tokens = WhitespaceTokenizer().tokenize(sent)
features = {}
# Word unigrams
ngrams = get_word_ngrams(sentence_tokens, 1)
for ngram in ngrams:
features[f'word({ngram})'] = features.get(f'word({ngram})', 0) + 1 # The second parameter to .get() is a default value if the key doesn't exist.
# Word bigrams
ngrams = get_word_ngrams(sentence_tokens, 2)
for ngram in ngrams:
features[f'word_bigram({ngram})'] = features.get(f'word_bigram({ngram})', 0) + 1
# Char unigrams
for word in sentence_tokens:
ngrams = get_char_ngrams(word, 1)
for ngram in ngrams:
features[f'char({ngram})'] = features.get(f'char({ngram})', 0) + 1
# Char bigrams
for word in sentence_tokens:
ngrams = get_char_ngrams(word, 2)
for ngram in ngrams:
features[f'char_bigram({ngram})'] = features.get(f'char_bigram({ngram})', 0) + 1
# Char trigrams
for word in sentence_tokens:
ngrams = get_char_ngrams(word, 3)
for ngram in ngrams:
features[f'char_trigram({ngram})'] = features.get(f'char_trigram({ngram})', 0) + 1
return features
get_ngram_features('αυτη ειναι η σπαρτη')
# from nltk import everygrams
# def sent_process(sent):
# return [''.join(ng) for ng in everygrams(sent.replace(' ', '_ _'), 1, 4)
# if ' ' not in ng and '\n' not in ng and ng != ('_',)]
# sent_process('αυτη ειναι η σπαρτη')
###Output
_____no_output_____
###Markdown
4. Creating the training and test sets
###Code
import random
all_sents_labeled = ([(sentence, 'CG') for sentence in cg_sents_clean] + [(sentence, 'SMG') for sentence in smg_sents_clean])
random.shuffle(all_sents_labeled)
all_sents_labeled[0]
NO_ALL_SENTENCES = len(all_sents_labeled)
NO_TRAIN_SENTENCES = round(NO_ALL_SENTENCES * .8)
train_set = all_sents_labeled[:NO_TRAIN_SENTENCES]
test_set = all_sents_labeled[NO_TRAIN_SENTENCES:]
train_set_sents = [sent[0] for sent in train_set]
train_set_labels = [sent[1] for sent in train_set]
test_set_sents = [sent[0] for sent in test_set]
test_set_labels = [sent[1] for sent in test_set]
print(train_set_sents[0], train_set_labels[0])
print('DATASET\t', 'SENTENCES')
print('All\t', NO_ALL_SENTENCES)
print('Training', NO_TRAIN_SENTENCES)
print('Testing\t', NO_ALL_SENTENCES - NO_TRAIN_SENTENCES)
###Output
DATASET SENTENCES
All 1039
Training 831
Testing 208
###Markdown
5. Vectorization
###Code
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer(analyzer=get_ngram_features)
train_set_vectors = count_vect.fit_transform(train_set_sents)
test_set_vectors = count_vect.transform(test_set_sents) # Unlike fit_transform(), transform() does not change the count vectorizer's vocabulary so it should be used for the test set.
train_set_vectors
import sys
from numpy import set_printoptions
set_printoptions(threshold=sys.maxsize) # Prints whole array. Required because by default an array with thousands of elements wouldn't be printed in full. (Recent NumPy versions reject threshold=nan, so sys.maxsize is used instead.)
train_set_vectors.toarray()[0]
count_vect.vocabulary_ # The numbers are not counts but indices.
len(count_vect.vocabulary_) # This is the same as the length of each vector.
###Output
_____no_output_____
###Markdown
6. Building the classifiers
###Code
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
def show_confusion_matrix(cm):
print('\t Predicted')
print('\t CG SMG')
print('\t -------- --------')
print('\tCG | {:^6} | {:^6}'.format(cm[0][0], cm[0][1]))
print('Actual\t -------- --------')
print('\tSMG | {:^6} | {:^6}'.format(cm[1][0], cm[1][1]))
def show_most_informative_features(vectorizer, clf, n=10):
print("\t\t CG\t\t\t\t\t\t SMG\n")
feature_names = vectorizer.get_feature_names()
coefs_with_fns = sorted(zip(clf.coef_[0], feature_names))
top = zip(coefs_with_fns[:n], coefs_with_fns[:-(n + 1):-1])
for (coef_1, fn_1), (coef_2, fn_2) in top:
print("\t%.4f\t%17s\t\t\t%.4f\t%17s" % (coef_1, fn_1, coef_2, fn_2))
###Output
_____no_output_____
###Markdown
6.1 Multinomial Naive Bayes classifier
###Code
clf_multinomialNB = MultinomialNB() # There are no params for MultinomialNB that prevent overfitting, so any overfitting is caused by the small dataset size.
clf_multinomialNB.fit(train_set_vectors, train_set_labels)
clf_multinomialNB_predictions = clf_multinomialNB.predict(test_set_vectors)
print('\t\t\tPERFORMANCE\n')
print('Accuracy:', round(accuracy_score(test_set_labels, clf_multinomialNB_predictions), 2), '\n')
print(classification_report(test_set_labels, clf_multinomialNB_predictions))
cmatrix = confusion_matrix(test_set_labels, clf_multinomialNB_predictions)
show_confusion_matrix(cmatrix)
show_most_informative_features(count_vect, clf_multinomialNB, n=20)
###Output
CG SMG
-11.2815 char_bigram(αα) -5.3154 char(α)
-11.2815 char_bigram(β_) -5.3231 char(ι)
-11.2815 char_bigram(δ_) -5.3309 char(ο)
-11.2815 char_bigram(δκ) -5.3335 char(ε)
-11.2815 char_bigram(ηα) -5.3413 char(τ)
-11.2815 char_bigram(ηβ) -5.3466 char(ν)
-11.2815 char_bigram(ηε) -5.3519 char(σ)
-11.2815 char_bigram(ηο) -5.4010 char_bigram(α_)
-11.2815 char_bigram(θθ) -5.4122 char(υ)
-11.2815 char_bigram(θκ) -5.4179 char(ρ)
-11.2815 char_bigram(ιυ) -5.4293 char(κ)
-11.2815 char_bigram(οζ) -5.4467 char(μ)
-11.2815 char_bigram(πδ) -5.4526 char(π)
-11.2815 char_bigram(πκ) -5.4644 char_bigram(ι_)
-11.2815 char_bigram(ππ) -5.4885 char(η)
-11.2815 char_bigram(φκ) -5.4915 char(λ)
-11.2815 char_bigram(ωε) -5.5008 char_bigram(_τ)
-11.2815 char_trigram(_α_) -5.6013 char_bigram(ου)
-11.2815 char_trigram(_αο) -5.6013 char_bigram(ο_)
-11.2815 char_trigram(_γω) -5.6116 char_bigram(ει)
###Markdown
6.2 Linear Support Vector classifier
###Code
clf_linearSVC = LinearSVC(max_iter=1500) # n_samples < n_features in training set so the dual param is kept at its default value of True. Default max_iter = 1000
clf_linearSVC.fit(train_set_vectors, train_set_labels)
clf_linearSVC_predictions = clf_linearSVC.predict(test_set_vectors)
print('\t\t\tPERFORMANCE\n')
print('Accuracy:', round(accuracy_score(test_set_labels, clf_linearSVC_predictions), 2), '\n')
print(classification_report(test_set_labels, clf_linearSVC_predictions))
cmatrix = confusion_matrix(test_set_labels, clf_linearSVC_predictions)
show_confusion_matrix(cmatrix)
show_most_informative_features(count_vect, clf_linearSVC, n=20)
###Output
CG SMG
-0.3471 char_trigram(εν_) 0.1548 word(και)
-0.3013 word(εν) 0.1405 char_bigram(λυ)
-0.2699 char_bigram(τζ) 0.1393 char_trigram(αλλ)
-0.2593 char_trigram(_τζ) 0.1342 char_bigram(_β)
-0.2262 char_bigram(αμ) 0.1301 word(δεν)
-0.1907 char_bigram(_ε) 0.1301 char_trigram(δεν)
-0.1864 char_bigram(θκ) 0.1296 char_trigram(τασ)
-0.1544 char_trigram(καμ) 0.1294 char_trigram(ινα)
-0.1523 char_trigram(_ου) 0.1252 char_bigram(να)
-0.1464 char_trigram(_εν) 0.1212 char_bigram(_ξ)
-0.1449 char(π) 0.1206 char(χ)
-0.1382 char_trigram(θκι) 0.1147 char_bigram(τα)
-0.1362 char(ζ) 0.1132 char_trigram(_τα)
-0.1335 char_bigram(_λ) 0.1106 char_bigram(ικ)
-0.1331 char_trigram(_λα) 0.1078 char_trigram(ιστ)
-0.1299 char_bigram(φκ) 0.1074 char(δ)
-0.1257 char_bigram(νν) 0.1070 char_trigram(θελ)
-0.1216 char_trigram(αμα) 0.1070 char_trigram(μαλ)
-0.1213 char_trigram(ιαν) 0.1034 word(τα)
-0.1189 char_trigram(υλλ) 0.1029 char(ξ)
###Markdown
6.3 Logistic Regression classifier
###Code
clf_logisticRegression = LogisticRegression() # Again, dual = True. Default solver = 'liblinear'. It's recommended for smaller databases. For bigger databases, 'saga' could be used.
clf_logisticRegression.fit(train_set_vectors, train_set_labels)
clf_logisticRegression_predictions = clf_logisticRegression.predict(test_set_vectors)
print('\t\t\tPERFORMANCE\n')
print('Accuracy:', round(accuracy_score(test_set_labels, clf_logisticRegression_predictions), 2), '\n')
print(classification_report(test_set_labels, clf_logisticRegression_predictions))
cmatrix = confusion_matrix(test_set_labels, clf_logisticRegression_predictions)
show_confusion_matrix(cmatrix)
show_most_informative_features(count_vect, clf_logisticRegression, n=20)
###Output
CG SMG
-1.2587 char_trigram(εν_) 0.6666 word(και)
-1.1392 word(εν) 0.4642 char_trigram(δεν)
-1.0289 char_bigram(τζ) 0.4625 char_bigram(λυ)
-0.9605 char_trigram(_τζ) 0.4611 word(δεν)
-0.8454 char_bigram(αμ) 0.4540 char_trigram(ινα)
-0.6719 char_bigram(θκ) 0.4383 char_bigram(_β)
-0.6625 char_bigram(_ε) 0.4321 char_bigram(να)
-0.6533 char_trigram(_εν) 0.4241 char(χ)
-0.5861 char_trigram(καμ) 0.4208 char(δ)
-0.5038 char(ζ) 0.4150 char_trigram(και)
-0.4929 char_trigram(θκι) 0.4109 char_trigram(αλλ)
-0.4926 char_trigram(τζι) 0.4021 char_bigram(τα)
-0.4795 char_trigram(_ου) 0.3963 char_trigram(_απ)
-0.4702 char_bigram(ζι) 0.3951 char_bigram(ικ)
-0.4546 char_bigram(φκ) 0.3829 char_trigram(τασ)
-0.4521 char_trigram(αμα) 0.3620 char_trigram(ιστ)
-0.4311 char(π) 0.3611 word(ειναι)
-0.4307 char_trigram(_λα) 0.3548 char_trigram(αυτ)
-0.4295 char_bigram(_λ) 0.3489 char_trigram(_ει)
-0.4239 char_bigram(εν) 0.3477 char_trigram(μαλ)
###Markdown
**It seems that the classification algorithm with the best performance is *Multinomial Naive Bayes***. 7. Analyzing misclassifications made by the Multinomial Naive Bayes classifier
###Code
print('MISCLASSIFICATIONS\n')
misclassificationCount = 0
for i, sent in enumerate(test_set_sents):
if test_set_labels[i] != clf_multinomialNB_predictions[i]:
misclassificationCount += 1
print(f'{misclassificationCount}.', sent, f'(CORRECT = {test_set_labels[i]},', f'PREDICTED = {clf_multinomialNB_predictions[i]})\n')
###Output
MISCLASSIFICATIONS
1. ολοι ετσι κανουν (CORRECT = SMG, PREDICTED = CG)
2. μιλω σας ποιος με ειδεν και δεν με φοβηθηκε (CORRECT = CG, PREDICTED = SMG)
3. το προβλημα με το παρανομο κυνηγι εξω εχει παραγινει (CORRECT = SMG, PREDICTED = CG)
4. αν εμεις δεν καταφεραμε τοτε οσα λαχταρησαμε δεν ξεσπαμε πανω τους (CORRECT = SMG, PREDICTED = CG)
5. παστιτσιο και παντζαρια σαλατα το μενου για σημερα καλη μας ορεξη (CORRECT = SMG, PREDICTED = CG)
6. πολλα φοουμαι πως το ακελ κατερριψε και αυτο (CORRECT = CG, PREDICTED = SMG)
7. ισως γιατι δεν θυμομαστε καλα (CORRECT = SMG, PREDICTED = CG)
8. τελικα οι πορνες δεν ηταν και τοσο αλλοδαπες αλλα τι σημασια εχει η δουλεια εγινε (CORRECT = SMG, PREDICTED = CG)
9. στη ζωη μου εισαι γουρι στο τζατζικι το αγγουρι (CORRECT = SMG, PREDICTED = CG)
###Markdown
8. Trying the Multinomial Naive Bayes classifier with custom input First, a more powerful version of the classifier is built by using all the data available:
###Code
full_set_sents = [sent[0] for sent in all_sents_labeled]
full_set_labels = [sent[1] for sent in all_sents_labeled]
full_set_vectors = count_vect.fit_transform(full_set_sents)
clf_super_multinomialNB = MultinomialNB()
clf_super_multinomialNB.fit(full_set_vectors, full_set_labels)
###Output
_____no_output_____
###Markdown
Trying 2 custom sentences:
###Code
cgSent = 'Η Κύπρος εν που τες πιο όμορφες χώρες.'
smgSent = 'Η Κύπρος είναι από τις πιο όμορφες χώρες.'
demoSentences = [cgSent, smgSent]
cgSent = get_clean_sent_el(cgSent)
smgSent = get_clean_sent_el(smgSent)
test_vec = count_vect.transform([cgSent, smgSent])
for sentenceNumber, predictionArr in enumerate(clf_super_multinomialNB.predict_proba(test_vec)):
print(f'SENTENCE {sentenceNumber + 1}: “{demoSentences[sentenceNumber]}”')
if predictionArr[0] > predictionArr[1]:
print(f'PREDICTION: Cypriot Greek (Confidence: {predictionArr[0]:.2f})\n')
else:
print(f'PREDICTION: Standard Modern Greek (Confidence: {predictionArr[1]:.2f})\n')
###Output
SENTENCE 1: “Η Κύπρος εν που τες πιο όμορφες χώρες.”
PREDICTION: Cypriot Greek (Confidence: 1.00)
SENTENCE 2: “Η Κύπρος είναι από τις πιο όμορφες χώρες.”
PREDICTION: Standard Modern Greek (Confidence: 1.00)
|
src/0_prepare-dataset.ipynb | ###Markdown
Spoken Language Recognition Using Convolutional Neural Networks_written by Joscha S. Rieber (Fraunhofer IAIS) in 2020_ Dataset preparation Please go to the [Mozilla Common Voice Website](https://commonvoice.mozilla.org/) and download the full German and English datasets. In the following scripts we will thin out the datasets to make them more handy and play with the data.* Download German and English datasets* Extract them* Define paths below
###Code
train = 'train'
test = 'test'
eng = 'english'
ger = 'german'
languages = [eng, ger]
categories = [train, test]
original_dataset_paths = {}
original_dataset_paths[eng] = '/data/jrieber/IFINDER-2143/common-voice/cv-corpus-5.1-2020-06-22/en/' # TODO: Adapt this folder!
original_dataset_paths[ger] = '/data/jrieber/IFINDER-2143/common-voice/cv-corpus-5.1-2020-06-22/de/' # TODO: Adapt this folder!
target_root_path = '../data/'
num_files_to_take_for_each_language = 20000
train_rate = 0.8 # Use 80 % of the data for training and the rest for testing
###Output
_____no_output_____
###Markdown
Check pathsIf something goes wrong here, check paths again and read the documentation of the GitHub repository and check how to set-up your environment correctly
###Code
import os
for lang in languages:
if not os.path.isdir(original_dataset_paths[lang]):
raise
for category in categories:
if not os.path.isdir(target_root_path + category + '/' + lang):
raise
for lang in languages:
if not os.path.isfile(original_dataset_paths[lang] + 'validated.tsv'):
raise
if not os.path.isdir(original_dataset_paths[lang] + 'clips'):
raise
###Output
_____no_output_____
###Markdown
Collect only num_files_to_take_for_each_language files whose duration is between 7.5 and 10 seconds Note that this process might take many hours!
###Code
# If this goes wrong, check your environment and read the documentation
import librosa as lr
from glob import glob
from random import shuffle
from shutil import copy2
import numpy as np
import pandas as pd
import warnings
def copy_audio_files_for_language(lang):
print('')
print('Copying files for language ' + lang + '...')
print('')
# Only take validated speech data
df = pd.read_csv(original_dataset_paths[lang] + 'validated.tsv', sep='\t')
all_filenames = df['path'].tolist()
shuffle(all_filenames)
counter = 0
category = train
# Process files
for filename in all_filenames:
file = original_dataset_paths[lang] + 'clips/' + filename
try:
audio_segment, sample_rate = lr.load(file)
if np.count_nonzero(audio_segment) == 0:
raise Exception('Audio is silent!')
if audio_segment.ndim != 1:
raise Exception('Audio signal has wrong number of dimensions: ' + str(audio_segment.ndim))
duration_sec = lr.core.get_duration(audio_segment, sr=sample_rate)
except Exception as e:
print('WARNING! Error while loading file \"' + file + '\": ' + str(e) + ' - Skipping...')
continue
# Only copy audio files with a certain minimum duration
if 7.5 < duration_sec < 10.0:
copy2(file, target_root_path + category + '/' + lang)
counter += 1
# Stop after collecting enough files
if counter == int(num_files_to_take_for_each_language * train_rate):
category = test
if counter == num_files_to_take_for_each_language:
break
###Output
_____no_output_____
###Markdown
Copy files to create the German language train and test datasets
###Code
warnings.simplefilter('ignore', UserWarning)
copy_audio_files_for_language(ger)
warnings.simplefilter('default', UserWarning)
###Output
Copying files for language german...
###Markdown
Copy files to create the English language train and test datasets
###Code
warnings.simplefilter('ignore', UserWarning)
copy_audio_files_for_language(eng)
warnings.simplefilter('default', UserWarning)
###Output
Copying files for language english...
WARNING! Error while loading file "/data/jrieber/IFINDER-2143/common-voice/cv-corpus-5.1-2020-06-22/en/clips/common_voice_en_190149.mp3": Audio is silent! - Skipping...
###Markdown
Check number of collected files
###Code
for category in categories:
if category == train:
num_files = int(num_files_to_take_for_each_language * train_rate)
else:
num_files = int(num_files_to_take_for_each_language * (1.0 - train_rate))
for lang in languages:
folder = target_root_path + category + '/' + lang + '/'
all_files = glob(folder + '*.mp3')
if len(all_files) < (num_files - 1):
raise Exception('Folder \"' + folder + '\" only contains ' + str(len(all_files)) + ' files instead of ' + str(num_files) + '!')
print('Okay!')
###Output
Okay!
###Markdown
Now make yourself familiar with the dataset by listening to some of the files Statistics
###Code
warnings.simplefilter('ignore', UserWarning)
for category in categories:
for lang in languages:
duration_sec = 0.0
folder = target_root_path + category + '/' + lang + '/'
all_files = glob(folder + '*.mp3')
for file in all_files:
duration_sec += lr.core.get_duration(filename=file)
duration_h = duration_sec / 60.0 / 60.0
print('Total duration of ' + lang + ' ' + category + ' is ' + str(round(duration_h, 1)) + ' h')
warnings.simplefilter('default', UserWarning)
###Output
Total duration of english train is 37.0 h
Total duration of german train is 37.0 h
Total duration of english test is 9.2 h
Total duration of german test is 9.3 h
|
Inference_ApogeeBCNN.ipynb | ###Markdown
Inference for the ApogeeBCNN model which was trained on full spectra Result on High SNR testing set
###Code
import numpy as np
from utils_h5 import H5Loader
from astroNN.models import load_folder
# Load the dataset testing data
loader = H5Loader('_highsnr_test')
loader.load_combined = True # load combined spectra
loader.load_err = False
# load the correct entry with correct order from ApogeeBCNNcensored
loader.target = ['teff', 'logg', 'C', 'C1', 'N', 'O', 'Na', 'Mg', 'Al', 'Si', 'P', 'S', 'K',
'Ca', 'Ti', 'Ti2', 'V', 'Cr', 'Mn', 'Fe','Co', 'Ni']
x, y = loader.load()
# load RA, DEC, SNR entry
RA_visit = loader.load_entry('RA')
DEC_visit = loader.load_entry('DEC')
SNR_visit = loader.load_entry('SNR')
# Load model and do inference
bcnn = load_folder('astroNN_0606_run001')
bcnn.mc_num = 100
pred, pred_err = bcnn.test(x)
import pandas as pd
from IPython.display import display, HTML
from astropy.stats import mad_std as mad
residue = (pred - y)
bias = np.ma.median(np.ma.array(residue, mask=[y == -9999.]), axis=0)
scatter = mad(np.ma.array(residue, mask=[y == -9999.]), axis=0)
d = {'Name': bcnn.targetname, 'Bias': [f'{bias_single:.{3}f}' for bias_single in bias],
'Scatter': [f'{scatter_single:.{3}f}' for scatter_single in scatter]}
df = pd.DataFrame(data=d)
display(HTML(df.to_html()))
###Output
_____no_output_____
###Markdown
Result on Individual Visit
###Code
import numpy as np
from utils_h5 import H5Loader
from astroNN.models import load_folder
# Load the dataset testing data
loader = H5Loader('__train')
loader.load_combined = False # load individual visits spectra
loader.load_err = False
# load the correct entry with correct order from ApogeeBCNNcensored
loader.target = ['teff', 'logg', 'C', 'C1', 'N', 'O', 'Na', 'Mg', 'Al', 'Si', 'P', 'S', 'K',
'Ca', 'Ti', 'Ti2', 'V', 'Cr', 'Mn', 'Fe','Co', 'Ni']
x, y = loader.load()
# load RA, DEC, SNR entry
RA_visit = loader.load_entry('RA')
DEC_visit = loader.load_entry('DEC')
SNR_visit = loader.load_entry('SNR')
# Load model and do inference
bcnn = load_folder('astroNN_0606_run001')
bcnn.mc_num = 100
pred, pred_err = bcnn.test(x)
import pandas as pd
from IPython.display import display, HTML
from astropy.stats import mad_std as mad
residue = (pred - y)
bias = np.ma.median(np.ma.array(residue, mask=[y == -9999.]), axis=0)
scatter = mad(np.ma.array(residue, mask=[y == -9999.]), axis=0)
d = {'Name': bcnn.targetname, 'Bias': [f'{bias_single:.{3}f}' for bias_single in bias],
'Scatter': [f'{scatter_single:.{3}f}' for scatter_single in scatter]}
df = pd.DataFrame(data=d)
display(HTML(df.to_html()))
###Output
_____no_output_____ |
Imputation-BTMF-Bdata.ipynb | ###Markdown
Matrix Computation Concepts Kronecker product- **Definition**:Given two matrices $A\in\mathbb{R}^{m_1\times n_1}$ and $B\in\mathbb{R}^{m_2\times n_2}$, the **Kronecker product** between these two matrices is defined as$$A\otimes B=\left[ \begin{array}{cccc} a_{11}B & a_{12}B & \cdots & a_{1n_1}B \\ a_{21}B & a_{22}B & \cdots & a_{2n_1}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{m_11}B & a_{m_12}B & \cdots & a_{m_1n_1}B \\ \end{array} \right]$$where the symbol $\otimes$ denotes the Kronecker product, and the size of the resulting $A\otimes B$ is $(m_1m_2)\times (n_1n_2)$ (i.e., $m_1m_2$ rows and $n_1n_2$ columns).- **Example**:If $A=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]$ and $B=\left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10 \\ \end{array} \right]$, then we have$$A\otimes B=\left[ \begin{array}{cc} 1\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] & 2\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] \\ 3\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] & 4\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] \\ \end{array} \right]$$$$=\left[ \begin{array}{cccccc} 5 & 6 & 7 & 10 & 12 & 14 \\ 8 & 9 & 10 & 16 & 18 & 20 \\ 15 & 18 & 21 & 20 & 24 & 28 \\ 24 & 27 & 30 & 32 & 36 & 40 \\ \end{array} \right]\in\mathbb{R}^{4\times 6}.$$ Khatri-Rao product (`kr_prod`)- **Definition**:Given two matrices $A=\left( \boldsymbol{a}_1,\boldsymbol{a}_2,...,\boldsymbol{a}_r \right)\in\mathbb{R}^{m\times r}$ and $B=\left( \boldsymbol{b}_1,\boldsymbol{b}_2,...,\boldsymbol{b}_r \right)\in\mathbb{R}^{n\times r}$ with the same number of columns, the **Khatri-Rao product** (or **column-wise Kronecker product**) between $A$ and $B$ is given as follows,$$A\odot B=\left( \boldsymbol{a}_1\otimes \boldsymbol{b}_1,\boldsymbol{a}_2\otimes \boldsymbol{b}_2,...,\boldsymbol{a}_r\otimes \boldsymbol{b}_r \right)\in\mathbb{R}^{(mn)\times r}$$where the symbol $\odot$ denotes the Khatri-Rao product, and $\otimes$ denotes the Kronecker product.- **Example**:If $A=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]=\left( \boldsymbol{a}_1,\boldsymbol{a}_2 \right) $ and $B=\left[ \begin{array}{cc} 5 & 6 \\ 7 & 8 \\ 9 & 10 \\ \end{array} \right]=\left( \boldsymbol{b}_1,\boldsymbol{b}_2 \right) $, then we have$$A\odot B=\left( \boldsymbol{a}_1\otimes \boldsymbol{b}_1,\boldsymbol{a}_2\otimes \boldsymbol{b}_2 \right) $$$$=\left[ \begin{array}{cc} \left[ \begin{array}{c} 1 \\ 3 \\ \end{array} \right]\otimes \left[ \begin{array}{c} 5 \\ 7 \\ 9 \\ \end{array} \right] & \left[ \begin{array}{c} 2 \\ 4 \\ \end{array} \right]\otimes \left[ \begin{array}{c} 6 \\ 8 \\ 10 \\ \end{array} \right] \\ \end{array} \right]$$$$=\left[ \begin{array}{cc} 5 & 12 \\ 7 & 16 \\ 9 & 20 \\ 15 & 24 \\ 21 & 32 \\ 27 & 40 \\ \end{array} \right]\in\mathbb{R}^{6\times 2}.$$
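As a quick numerical check of the Kronecker example above (a small sketch; the Khatri-Rao product itself is implemented by hand in the next cell), NumPy's built-in `np.kron` reproduces the same result:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6, 7], [8, 9, 10]])
print(np.kron(A, B))        # reproduces the 4 x 6 matrix worked out above
print(np.kron(A, B).shape)  # (4, 6)
```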
###Code
import numpy as np

def kr_prod(a, b):
return np.einsum('ir, jr -> ijr', a, b).reshape(a.shape[0] * b.shape[0], -1)
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8], [9, 10]])
print(kr_prod(A, B))
from scipy.stats import wishart  # needed for the wishart(...) sampling below; Normal_Wishart (a Normal-Wishart sampler) is assumed to be defined in another cell

def BTMF(dense_mat, sparse_mat, W, X, theta, time_lags, maxiter1, maxiter2):
"""Bayesian Temporal Matrix Factorization, BTMF."""
d=theta.shape[0]
dim1 = sparse_mat.shape[0]
dim2 = sparse_mat.shape[1]
rank = W.shape[1]
pos = np.where((dense_mat > 0) & (sparse_mat == 0))
position = np.where(sparse_mat > 0)
binary_mat = np.zeros((dim1, dim2))
binary_mat[position] = 1
tau = 1
alpha = 1e-6
beta = 1e-6
beta0 = 1
nu0 = rank
mu0 = np.zeros((rank))
W0 = np.eye(rank)
for iter in range(maxiter1):
W_bar = np.mean(W, axis = 0)
var_mu0 = (dim1 * W_bar + beta0 * mu0)/(dim1 + beta0)
var_nu = dim1 + nu0
var_W = np.linalg.inv(np.linalg.inv(W0)
+ dim1 * np.cov(W.T) + dim1 * beta0/(dim1 + beta0)
* np.outer(W_bar - mu0, W_bar - mu0))
var_W = (var_W + var_W.T)/2
var_mu0, var_Lambda0 = Normal_Wishart(var_mu0, dim1 + beta0, var_W, var_nu, seed = None)
var1 = X.T
var2 = kr_prod(var1, var1)
var3 = tau * np.matmul(var2, binary_mat.T).reshape([rank, rank,
dim1]) + np.dstack([var_Lambda0] * dim1)
var4 = tau * np.matmul(var1, sparse_mat.T) + np.dstack([np.matmul(var_Lambda0,
var_mu0)] * dim1)[0, :, :]
for i in range(dim1):
var_Lambda1 = var3[ :, :, i]
inv_var_Lambda1 = np.linalg.inv((var_Lambda1 + var_Lambda1.T)/2)
var_mu = np.matmul(inv_var_Lambda1, var4[:, i])
W[i, :] = np.random.multivariate_normal(var_mu, inv_var_Lambda1)
var_nu = dim2 + nu0
mat0 = X[0 : np.max(time_lags), :]
mat = np.matmul(mat0.T, mat0)
new_mat = np.zeros((dim2 - np.max(time_lags), rank))
for t in range(dim2 - np.max(time_lags)):
new_mat[t, :] = X[t + np.max(time_lags), :] - np.einsum('ij, ij -> j',
theta, X[t + np.max(time_lags)
- time_lags, :])
mat += np.matmul(new_mat.T, new_mat)
var_W = np.linalg.inv(np.linalg.inv(W0) + mat)
var_W = (var_W + var_W.T)/2
Lambda_x = wishart(df = var_nu, scale = var_W, seed = None).rvs()
var1 = W.T
var2 = kr_prod(var1, var1)
var3 = tau * np.matmul(var2, binary_mat).reshape([rank, rank,
dim2]) + np.dstack([Lambda_x] * dim2)
var4 = tau * np.matmul(var1, sparse_mat)
for t in range(dim2):
Mt = np.zeros((rank, rank))
Nt = np.zeros(rank)
if t >= 0 and t <= np.max(time_lags) - 1:
Qt = np.zeros(rank)
else:
Qt = np.matmul(Lambda_x, np.einsum('ij, ij -> j', theta, X[t - time_lags, :]))
if t >= 0 and t <= dim2 - np.min(time_lags) - 1:
if t > np.max(time_lags) - 1 and t <= dim2 - np.max(time_lags) - 1:
index = list(range(0, d))
else:
index = list(np.where((t + time_lags > np.max(time_lags) - 1)
& (t + time_lags <= dim2 - 1)))[0]
for k in index:
Ak = theta[k, :]
Mt += np.multiply(np.outer(Ak, Ak), Lambda_x)
theta0 = theta.copy()
theta0[k, :] = 0
var5 = X[t + time_lags[k], :] - np.einsum('ij, ij -> j',
theta0, X[t + time_lags[k]
- time_lags, :])
Nt += np.matmul(np.matmul(np.diag(Ak), Lambda_x), var5)
var_mu = var4[:, t] + Nt + Qt
var_Lambda = var3[:, :, t] + Mt
inv_var_Lambda = np.linalg.inv((var_Lambda + var_Lambda.T)/2)
var_mu = np.matmul(inv_var_Lambda, var_mu)
X[t, :] = np.random.multivariate_normal(var_mu, inv_var_Lambda)
mat_hat = np.matmul(W, X.T)
rmse = np.sqrt(np.sum((dense_mat[pos] - mat_hat[pos]) ** 2)/dense_mat[pos].shape[0])
var_alpha = alpha + 0.5 * sparse_mat[position].shape[0]
error = sparse_mat - mat_hat
var_beta = beta + 0.5 * np.sum(error[position] ** 2)
tau = np.random.gamma(var_alpha, 1/var_beta)
theta_bar = np.mean(theta, axis = 0)
var_mu0 = (d * theta_bar + beta0 * mu0)/(d + beta0)
var_nu = d + nu0
var_W = np.linalg.inv(np.linalg.inv(W0)
+ d * np.cov(theta.T) + d * beta0/(d + beta0)
* np.outer(theta_bar - mu0, theta_bar - mu0))
var_W = (var_W + var_W.T)/2
mu_theta, Lambda_theta = Normal_Wishart(var_mu0, d + beta0, var_W, var_nu, seed = None)
for k in range(d):
theta0 = theta.copy()
theta0[k, :] = 0
mat0 = np.zeros((dim2 - np.max(time_lags), rank))
for L in range(d):
mat0 += np.matmul(X[np.max(time_lags) - time_lags[L] : dim2 - time_lags[L] , :],
np.diag(theta0[L, :]))
VarPi = X[np.max(time_lags) : dim2 , :] - mat0
mat1 = np.zeros((rank, rank))
mat2 = np.zeros(rank)
for t in range(np.max(time_lags), dim2):
B = X[t - time_lags[k], :]
mat1 += np.multiply(np.outer(B, B), Lambda_x)
mat2 += np.matmul(np.matmul(np.diag(B), Lambda_x), VarPi[t - np.max(time_lags), :])
var_Lambda = mat1 + Lambda_theta
inv_var_Lambda = np.linalg.inv((var_Lambda + var_Lambda.T)/2)
var_mu = np.matmul(inv_var_Lambda, mat2 + np.matmul(Lambda_theta, mu_theta))
theta[k, :] = np.random.multivariate_normal(var_mu, inv_var_Lambda)
if (iter + 1) % 100 == 0:
print('Iter: {}'.format(iter + 1))
print('RMSE: {:.6}'.format(rmse))
print()
W_plus = np.zeros((dim1, rank))
X_plus = np.zeros((dim2, rank))
theta_plus = np.zeros((d, rank))
mat_hat_plus = np.zeros((dim1, dim2))
for iter in range(maxiter2):
W_bar = np.mean(W, axis = 0)
var_mu0 = (dim1 * W_bar + beta0 * mu0)/(dim1 + beta0)
var_nu = dim1 + nu0
var_W = np.linalg.inv(np.linalg.inv(W0)
+ dim1 * np.cov(W.T) + dim1 * beta0/(dim1 + beta0)
* np.outer(W_bar - mu0, W_bar - mu0))
var_W = (var_W + var_W.T)/2
var_mu0, var_Lambda0 = Normal_Wishart(var_mu0, dim1 + beta0, var_W, var_nu, seed = None)
var1 = X.T
var2 = kr_prod(var1, var1)
var3 = tau * np.matmul(var2, binary_mat.T).reshape([rank, rank,
dim1]) + np.dstack([var_Lambda0] * dim1)
var4 = tau * np.matmul(var1, sparse_mat.T) + np.dstack([np.matmul(var_Lambda0,
var_mu0)] * dim1)[0, :, :]
for i in range(dim1):
var_Lambda1 = var3[ :, :, i]
inv_var_Lambda1 = np.linalg.inv((var_Lambda1 + var_Lambda1.T)/2)
var_mu = np.matmul(inv_var_Lambda1, var4[:, i])
W[i, :] = np.random.multivariate_normal(var_mu, inv_var_Lambda1)
W_plus += W
var_nu = dim2 + nu0
mat0 = X[0 : max(time_lags), :]
mat = np.matmul(mat0.T, mat0)
new_mat = np.zeros((dim2 - max(time_lags), rank))
for t in range(dim2 - np.max(time_lags)):
new_mat[t, :] = X[t + np.max(time_lags), :] - np.einsum('ij, ij -> j',
theta, X[t + np.max(time_lags)
- time_lags, :])
mat += np.matmul(new_mat.T, new_mat)
var_W = np.linalg.inv(np.linalg.inv(W0) + mat)
var_W = (var_W + var_W.T)/2
Lambda_x = wishart(df = var_nu, scale = var_W, seed = None).rvs()
var1 = W.T
var2 = kr_prod(var1,var1)
var3 = tau * np.matmul(var2, binary_mat).reshape([rank, rank,
dim2]) + np.dstack([Lambda_x] * dim2)
var4 = tau * np.matmul(var1, sparse_mat)
for t in range(dim2):
Mt = np.zeros((rank, rank))
Nt = np.zeros(rank)
if t >= 0 and t <= np.max(time_lags) - 1:
Qt = np.zeros(rank)
else:
Qt = np.matmul(Lambda_x, np.einsum('ij, ij -> j', theta, X[t - time_lags, :]))
if t >= 0 and t <= dim2 - np.min(time_lags) - 1:
if t > np.max(time_lags) - 1 and t <= dim2 - np.max(time_lags) - 1:
index = list(range(0, d))
else:
index = list(np.where((t + time_lags > np.max(time_lags) - 1)
& (t + time_lags <= dim2 - 1)))[0]
for k in index:
Ak = theta[k, :]
Mt += np.multiply(np.outer(Ak, Ak), Lambda_x)
theta0 = theta.copy()
theta0[k, :] = 0
var5 = X[t + time_lags[k], :] - np.einsum('ij, ij -> j',
theta0, X[t + time_lags[k]
- time_lags, :])
Nt += np.matmul(np.matmul(np.diag(Ak), Lambda_x), var5)
var_mu = var4[:, t] + Nt + Qt
var_Lambda = var3[:, :, t] + Mt
inv_var_Lambda = np.linalg.inv((var_Lambda + var_Lambda.T)/2)
var_mu = np.matmul(inv_var_Lambda, var_mu)
X[t, :] = np.random.multivariate_normal(var_mu, inv_var_Lambda)
X_plus += X
mat_hat = np.matmul(W, X.T)
mat_hat_plus += mat_hat
var_alpha = alpha + 0.5 * sparse_mat[position].shape[0]
error = sparse_mat - mat_hat
var_beta = beta + 0.5 * np.sum(error[position] ** 2)
tau = np.random.gamma(var_alpha, 1/var_beta)
theta_bar = np.mean(theta, axis = 0)
var_mu0 = (d * theta_bar + beta0 * mu0)/(d + beta0)
var_nu = d + nu0
var_W = np.linalg.inv(np.linalg.inv(W0)
+ d * np.cov(theta.T) + d * beta0/(d + beta0)
* np.outer(theta_bar - mu0, theta_bar - mu0))
var_W = (var_W + var_W.T)/2
mu_theta, Lambda_theta = Normal_Wishart(var_mu0, d + beta0, var_W, var_nu, seed = None)
for k in range(d):
theta0 = theta.copy()
theta0[k, :] = 0
mat0 = np.zeros((dim2 - np.max(time_lags), rank))
for L in range(d):
mat0 += np.matmul(X[np.max(time_lags) - time_lags[L] : dim2 - time_lags[L] , :],
np.diag(theta0[L, :]))
VarPi = X[np.max(time_lags) : dim2 , :] - mat0
mat1 = np.zeros((rank, rank))
mat2 = np.zeros(rank)
for t in range(np.max(time_lags), dim2):
B = X[t - time_lags[k], :]
mat1 += np.multiply(np.outer(B, B), Lambda_x)
mat2 += np.matmul(np.matmul(np.diag(B), Lambda_x), VarPi[t - max(time_lags), :])
var_Lambda = mat1 + Lambda_theta
inv_var_Lambda = np.linalg.inv((var_Lambda + var_Lambda.T)/2)
var_mu = np.matmul(inv_var_Lambda, mat2 + np.matmul(Lambda_theta, mu_theta))
theta[k, :] = np.random.multivariate_normal(var_mu, inv_var_Lambda)
theta_plus += theta
W = W_plus/maxiter2
X = X_plus/maxiter2
theta = theta_plus/maxiter2
mat_hat = mat_hat_plus/maxiter2
final_mape = np.sum(np.abs(dense_mat[pos] -
mat_hat[pos])/dense_mat[pos])/dense_mat[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_mat[pos] -
mat_hat[pos])**2)/dense_mat[pos].shape[0])
print('Final MAPE: {:.6}'.format(final_mape))
print('Final RMSE: {:.6}'.format(final_rmse))
print()
return W, X, theta
###Output
_____no_output_____
###Markdown
Data Organization Part 1: Matrix StructureWe consider a dataset of $m$ discrete time series $\boldsymbol{y}_{i}\in\mathbb{R}^{f},i\in\left\{1,2,...,m\right\}$. The time series may have missing elements. We express the spatio-temporal dataset as a matrix $Y\in\mathbb{R}^{m\times f}$ with $m$ rows (e.g., locations) and $f$ columns (e.g., discrete time intervals),$$Y=\left[ \begin{array}{cccc} y_{11} & y_{12} & \cdots & y_{1f} \\ y_{21} & y_{22} & \cdots & y_{2f} \\ \vdots & \vdots & \ddots & \vdots \\ y_{m1} & y_{m2} & \cdots & y_{mf} \\ \end{array} \right]\in\mathbb{R}^{m\times f}.$$ Part 2: Tensor StructureWe consider a dataset of $m$ discrete time series $\boldsymbol{y}_{i}\in\mathbb{R}^{nf},i\in\left\{1,2,...,m\right\}$. The time series may have missing elements. We partition each time series into intervals of predefined length $f$. We express each partitioned time series as a matrix $Y_{i}$ with $n$ rows (e.g., days) and $f$ columns (e.g., discrete time intervals per day),$$Y_{i}=\left[ \begin{array}{cccc} y_{11} & y_{12} & \cdots & y_{1f} \\ y_{21} & y_{22} & \cdots & y_{2f} \\ \vdots & \vdots & \ddots & \vdots \\ y_{n1} & y_{n2} & \cdots & y_{nf} \\ \end{array} \right]\in\mathbb{R}^{n\times f},i=1,2,...,m,$$therefore, the resulting structure is a tensor $\mathcal{Y}\in\mathbb{R}^{m\times n\times f}$.
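As a small illustration, here is a minimal NumPy sketch (toy sizes chosen arbitrarily) of unfolding an $m\times n\times f$ tensor into the $m\times (n\cdot f)$ matrix form used below; the real data is reshaped in exactly the same way in the next cell.

```python
import numpy as np

# toy example: m = 2 locations, n = 3 days, f = 4 intervals per day
Y_tensor = np.arange(2 * 3 * 4).reshape(2, 3, 4)

# unfold (m, n, f) -> (m, n * f), as done with `tensor` below
Y_matrix = Y_tensor.reshape(Y_tensor.shape[0], Y_tensor.shape[1] * Y_tensor.shape[2])
print(Y_matrix.shape)  # (2, 12)
```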
###Code
import scipy.io
tensor = scipy.io.loadmat('Birmingham-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('Birmingham-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('Birmingham-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.1
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = np.round(random_tensor + 0.5 - missing_rate).reshape([random_tensor.shape[0],
random_tensor.shape[1]
* random_tensor.shape[2]])
# =============================================================================
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
# binary_tensor = np.zeros(tensor.shape)
# for i1 in range(tensor.shape[0]):
# for i2 in range(tensor.shape[1]):
# binary_tensor[i1,i2,:] = np.round(random_matrix[i1,i2] + 0.5 - missing_rate)
# binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1]
# * binary_tensor.shape[2]])
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 30
time_lags = np.array([1, 2, 18])
d = time_lags.shape[0]
W = 0.1 * np.random.randn(dim1, rank)
X = 0.1 * np.random.randn(dim2, rank)
theta = 0.1 * np.random.randn(d, rank)
maxiter1 = 1000
maxiter2 = 500
W, X, theta = BTMF(dense_mat, sparse_mat, W, X, theta, time_lags, maxiter1, maxiter2)
end = time.time()
print('Running time: %d seconds'%(end - start))
###Output
Iter: 100
RMSE: 9.94129
|
Project_Portfolio_Management/Project_Portfolio_Management.ipynb | ###Markdown
This notebook implements the solution to the **Project Portfolio Management** mini-case. It assumes you are familiar with the case and the model. ____
Basic Setup
Import useful modules, read the data and store it in data frames, and set up some useful Python lists. You may want to expand this section and make sure you understand how the data is organized, and also read the last part where the Python lists are created, as these may be very useful when you build your model.
###Code
#@markdown We first import some useful modules.
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# import numpy
import numpy as np
import urllib.request # for file downloading
# Import pandas for data-frames
import pandas as pd
pd.options.display.max_rows = 15
pd.options.display.float_format = "{:,.2f}".format
from IPython.display import display
# Make sure Matplotlib runs inline, for nice figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
import matplotlib.ticker as ticker
# install Gurobi (our linear optimization solver)
!pip install -i https://pypi.gurobi.com gurobipy
from gurobipy import *
# some modules to create local directories (to avoid issues when solving multiple models)
import os
def new_local_directory(name):
full_path = os.path.join(".", name)
os.makedirs(full_path, exist_ok=True)
return full_path
# install the latest version of seaborn for nicer graphics
#!pip install --prefix {sys.prefix} seaborn==0.11.0 &> /dev/null
import seaborn as sns
# Ignore some useless warnings
import warnings
warnings.simplefilter(action="ignore")
print("Completed successfully!")
###Output
Looking in indexes: https://pypi.gurobi.com
Requirement already satisfied: gurobipy in /usr/local/lib/python3.7/dist-packages (9.1.1)
Completed successfully!
###Markdown
Load the case data into Pandas data frames We first download an Excel file with all the data from Github.
###Code
#@markdown Download the entire data as an Excel file from Github
url_Excel = 'https://github.com/dan-a-iancu/airm/blob/master/Project_Portfolio_Management/Project_Data.xlsx?raw=true'
local_file = "Portfolio_Project_Data.xlsx" # name of local file where you want to store the downloaded file
urllib.request.urlretrieve(url_Excel, local_file) # download from website and save it locally
###Output
_____no_output_____
###Markdown
Read in and store the data in suitable dataframes.
###Code
#@markdown Create a dataframe with the information on projects
# Read in the information about the available projects
projectData = pd.read_excel("Portfolio_Project_Data.xlsx", sheet_name = "Projects", index_col = 0)
# Have a look:
display(projectData)
#@markdown Create a dataframe with the information on available resources (this is useful in **Q5**)
# Read in the information about the available projects
resourceData = pd.read_excel("Portfolio_Project_Data.xlsx", sheet_name = "Resources", index_col = 0)
# Have a look:
display(resourceData)
###Output
_____no_output_____
###Markdown
Also set up any other problem data/parameters, such as the initial capital available.
###Code
initialCapital = 25000
###Output
_____no_output_____
###Markdown
Create Python lists based on the data-frames
__NOTE__: Make sure you understand what the __lists__ created here are! These will be very helpful when creating the model.
###Code
#@markdown Some useful lists for building all the models
# the list with project names (A,B, ...)
allProjects = list(projectData.index)
print("This is the list of all the project names:")
print(allProjects)
# the unique locations / continents
allLocations = list(projectData["Location"].unique())
print("\nThis is the list of unique locations:")
print(allLocations)
#@markdown The following lists will be useful in **Q5**
# the list with periods when the projects could be scheduled
allPeriods = list(resourceData.columns)
print("These are periods when the projects could be scheduled:")
print(allPeriods)
# the types of resources needed
allResources = list(resourceData.index)
print("\nThese are the unique resources needed to execute the projects:")
print(allResources)
###Output
These are periods when the projects could be scheduled:
['January-March', 'April-June', 'July-September', 'October-December']
These are the unique resources needed to execute the projects:
['Engineers', 'Field_Workers', 'Support']
###Markdown
_____ **Q1** **Create an empty model**
###Code
#@title Create the model
ProjectSelectionModel = Model("Funding Projects")
###Output
Restricted license - for non-production use only - expires 2022-01-13
###Markdown
**Define the Decision Variables**We have a **binary** decision for whether to select each project or not. We can add one such variable for each project using Gurobi's ``addVars`` method. Note that we specify the **type** of the decision using `vtype`.
###Code
#@title Add the decision variables
fund_project = ProjectSelectionModel.addVars(allProjects, vtype = GRB.BINARY, name="Fund")
###Output
_____no_output_____
###Markdown
**Calculate and add the objective function**
The objective corresponds to the total impact achieved on all the continents.
###Code
#@title Add the objective
ProjectSelectionModel.setObjective(
quicksum(fund_project[p]*projectData["Impact"][p] for p in allProjects), GRB.MAXIMIZE )
###Output
_____no_output_____
###Markdown
**Add the Constraints**
The only constraint here is that the capital used should not exceed the initial capital available.
###Code
#@title Add the constraints
# we only have one constraint, that "capital used <= capital available"
ProjectSelectionModel.addConstr(
quicksum(fund_project[p]*projectData["Capital_Required"][p] for p in allProjects)
<= initialCapital, name = "Initial_capital_avail" )
###Output
_____no_output_____
###Markdown
**Solve the model**
###Code
#@markdown Select whether to run the [Gurobi](https://www.gurobi.com/) optimization algorithms silently (no output details)
run_silently = False #@param {type:"boolean"}
if run_silently:
ProjectSelectionModel.setParam('OutputFlag',0)
else:
ProjectSelectionModel.setParam('OutputFlag',1)
ProjectSelectionModel.optimize()
print('\nSolved the optimization problem...')
###Output
Parameter OutputFlag unchanged
Value: 1 Min: 0 Max: 1 Default: 1
Gurobi Optimizer version 9.1.1 build v9.1.1rc0 (linux64)
Thread count: 1 physical cores, 2 logical processors, using up to 2 threads
Optimize a model with 1 rows, 10 columns and 10 nonzeros
Model fingerprint: 0xfa4d6032
Variable types: 0 continuous, 10 integer (10 binary)
Coefficient statistics:
Matrix range [2e+03, 2e+04]
Objective range [1e+00, 1e+01]
Bounds range [1e+00, 1e+00]
RHS range [2e+04, 2e+04]
Found heuristic solution: objective 30.0000000
Presolve removed 1 rows and 10 columns
Presolve time: 0.00s
Presolve: All rows and columns removed
Explored 0 nodes (0 simplex iterations) in 0.01 seconds
Thread count was 1 (of 2 available processors)
Solution count 2: 32
Optimal solution found (tolerance 1.00e-04)
Best objective 3.200000000000e+01, best bound 3.200000000000e+01, gap 0.0000%
Solved the optimization problem...
###Markdown
**Print the optimal objective and optimal solution**
###Code
#@title Print the solution
print("Projects Funded: ")
for p in allProjects:
if fund_project[p].X == 1:
print("Project ",p)
print("Optimal total impact: {:.2f}".format(ProjectSelectionModel.objVal))
###Output
Projects Funded:
Project A
Project B
Project C
Project E
Project I
Optimal total impact: 32.00
###Markdown
**Calculate and print some additional information about the solution**
We calculate the "bang-for-the-buck" for each project, and display it together with the solution as a dataframe. We also calculate the impact achieved on each continent.
###Code
#@markdown Calculate and print additional information
# change the precision on pandas dataframes
pd.options.display.float_format = "{:,.5f}".format
# create a dataframe with the optimal decisions for each project
df_results = pd.DataFrame({"Fund_Project?" : \
[np.int(fund_project[p].X) for p in allProjects], \
"Bang-for-buck" : \
[ projectData["Impact"][p]/projectData["Capital_Required"][p] for p in allProjects]}, \
index = allProjects)
display(df_results)
# impact by continent
print('\n\n{:<50}\nImpact achieved on each continent:'.format("="*80))
for loc in allLocations:
print(" {:<15} : {:,.2f}".format(loc, sum(projectData["Impact"][p]*fund_project[p].X \
for p in allProjects if projectData["Location"][p]==loc)) )
###Output
_____no_output_____
###Markdown
Store the optimal impact obtained in **Q1**.
###Code
impact_Q1 = ProjectSelectionModel.objVal # optimal value Q1
###Output
_____no_output_____
###Markdown
Create a few useful functions
To help with subsequent parts of the problem, we also add all the steps above inside a **function** that creates and returns a generic model like the one we created in **Q1**, together with all the decision variables and constraints.
###Code
#@title A function that generates a model like the one in **Q1**
def create_model_like_in_Q1():
# create the model
ProjectSelectionModel = Model("Funding Projects")
#@markdown Decision variables
# one binary decision for each project
fund_project = ProjectSelectionModel.addVars(allProjects, vtype = GRB.BINARY, name="Fund")
#@markdown Objective
# calculate the net impact objective and add it to the model
ProjectSelectionModel.setObjective(
quicksum(fund_project[p]*projectData["Impact"][p] for p in allProjects), GRB.MAXIMIZE )
#@markdown Constraints
# add constraint that "capital used <= capital available"
constraints = ProjectSelectionModel.addConstr(
quicksum(fund_project[p]*projectData["Capital_Required"][p] for p in allProjects)
<= initialCapital, name = "Initial_capital_avail" )
# return the model, the decision variables and the constraints
return ProjectSelectionModel, fund_project, constraints
#@title A function that prints a solution and some useful information
def print_solution(fund_project):
# change the precision on pandas dataframes
pd.options.display.float_format = "{:,.5f}".format
# create a dataframe with the optimal decisions for each project
df_results = pd.DataFrame({"Fund_Project?" : \
[np.int(fund_project[p].X) for p in allProjects]}, \
index = allProjects)
display(df_results)
# impact by continent
print('\n\n{:<50}\nImpact achieved on each continent:'.format("="*80))
for loc in allLocations:
print(" {:<15} : {:,.2f}".format(loc, sum(projectData["Impact"][p]*fund_project[p].X \
for p in allProjects if projectData["Location"][p]==loc)) )
###Output
_____no_output_____
###Markdown
______
**Q2**
Before running this section, make sure you have run all the previous sections of the Colab file. Re-create an identical model to the one from **Q1** and store the model, the decision variables and the constraints.
###Code
#@title Create a model like the one in Q1
ProjectSelectionModel, fund_project, budget_constraint = \
create_model_like_in_Q1()
###Output
_____no_output_____
###Markdown
Add a new decision variable **Z** to the model, meant to capture the minimum impact on either continent.
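This is the standard max-min reformulation. Writing $I_p$ for the impact of project $p$ and $X_p$ for its binary funding decision (both are just shorthand for the data and variables used in the code), the next cells implement
> $\max Z \quad \text{subject to} \quad Z \leq \sum_{p \in c} I_p \cdot X_p \quad$ for every continent $c$.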
###Code
#@title Add a new decision variable **Z**
Z = ProjectSelectionModel.addVar(name="min_impact")
###Output
_____no_output_____
###Markdown
Set the objective to maximize the new variable **Z**.
###Code
#@title Set the objective to maximize **Z**
ProjectSelectionModel.setObjective(Z, GRB.MAXIMIZE)
###Output
_____no_output_____
###Markdown
Add constraints that **Z** should be less than or equal to the impact on each continent.
###Code
#@title Add constraints that **Z** cannot exceed the impact on each continent
for loc in allLocations:
ProjectSelectionModel.addConstr( Z <= quicksum(projectData["Impact"][p]*fund_project[p] \
for p in allProjects if projectData["Location"][p]==loc))
###Output
_____no_output_____
###Markdown
Solve the new model.
###Code
#@title Solve the model
#@markdown Select whether to run the [Gurobi](https://www.gurobi.com/) optimization algorithms silently (no output details)
run_silently = True #@param {type:"boolean"}
if run_silently:
ProjectSelectionModel.setParam('OutputFlag',0)
else:
ProjectSelectionModel.setParam('OutputFlag',1)
ProjectSelectionModel.optimize()
print('\nSolved the optimization problem...')
###Output
Solved the optimization problem...
###Markdown
Print information about the solution.
###Code
#@markdown Print the solution
# the objective in this problem
print("The largest impact that can be simultaneously achieved on each continent: {}".\
format(ProjectSelectionModel.objVal))
# other useful information about the solution
print_solution(fund_project)
###Output
The largest impact that can be simultaneously achieved on each continent: 5.0
###Markdown
Store the optimal value of the problem --- this is useful subsequently, e.g., in **Q3**.
###Code
#@markdown Store the optimal value for later use (in **Q3**)
# store the optimal value
opt_value_Q2 = ProjectSelectionModel.objVal
###Output
_____no_output_____
###Markdown
______
**Q3**
Before running this section, make sure you have run all the previous sections of the Colab file. Re-create an identical model to the one from **Q1** and store the model, the decision variables and the constraints.
###Code
#@title Create a model like the one in Q1
ProjectSelectionModel, fund_project, budget_constraint = \
create_model_like_in_Q1()
###Output
_____no_output_____
###Markdown
Add constraints that the impact in each location should be greater than or equal to the (optimal) minimum impact calculated in **Q2**.
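Writing $I_p$ for the impact of project $p$, $X_p$ for its binary funding decision, and $Z^{*}_{Q2}$ for the optimal value stored from **Q2** (these symbols are just shorthand for the data and variables used in the code), the constraints added below are
> $\sum_{p \in c} I_p \cdot X_p \geq Z^{*}_{Q2} \quad$ for every continent $c$.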
###Code
#@title Add constraints that the impact on each continent is at least the minimum calculated in **Q2**
for loc in allLocations:
ProjectSelectionModel.addConstr( quicksum(projectData["Impact"][p]*fund_project[p] \
for p in allProjects if projectData["Location"][p]==loc) \
>= opt_value_Q2)
###Output
_____no_output_____
###Markdown
Solve the new model.
###Code
#@markdown Select whether to run the [Gurobi](https://www.gurobi.com/) optimization algorithms silently (no output details)
run_silently = True #@param {type:"boolean"}
if run_silently:
ProjectSelectionModel.setParam('OutputFlag',0)
else:
ProjectSelectionModel.setParam('OutputFlag',1)
ProjectSelectionModel.optimize()
print('\nSolved the optimization problem...')
###Output
Solved the optimization problem...
###Markdown
Print information about the solution.
###Code
#@markdown Print the solution
# the objective in this problem
print("The largest total cumulative impact that can be achieved while ensuring that each continent has impact of at least {} is : {}".\
format(opt_value_Q2, ProjectSelectionModel.objVal))
# other useful information about the solution
print_solution(fund_project)
###Output
The largest total cumulative impact that can be achieved while ensuring that each continent has impact of at least 5.0 is : 23.0
###Markdown
______
**Q4**
Before running this section, make sure you have run all the previous sections of the Colab file.
###Code
#@markdown Store the max risk score in a parameter
max_risk = 5.5
###Output
_____no_output_____
###Markdown
Re-create an identical model to the one from **Q1** and store the model, the decision variables and the constraints.
###Code
#@title Create a model like the one in Q1
ProjectSelectionModel, fund_project, budget_constraint = \
create_model_like_in_Q1()
###Output
_____no_output_____
###Markdown
Add a constraint that the average risk score should not exceed the maximum allowed risk. Note that this constraint is of the form
> $\frac{\sum_{p} R_p \cdot X_p}{\sum_p X_p} \leq M$
where $R_p$ is the risk score for project $p$, and $X_p \in \{0,1\}$ is the binary variable indicating whether project $p$ is selected. This is a nonlinear constraint, but it can be re-formulated in a linear way as:
> $\sum_{p} R_p \cdot X_p \leq M \cdot \sum_p X_p$
We formulate it in this linear way below. (Formulating the first non-linear constraint above would result in an error from Gurobi.)
###Code
#@title Add a constraint that the average risk does not exceed max allowed
ProjectSelectionModel.addConstr( quicksum(projectData["Risks"][p]*fund_project[p] \
for p in allProjects) <= \
max_risk*quicksum(fund_project[p] for p in allProjects))
###Output
_____no_output_____
###Markdown
Solve the new model.
###Code
#@markdown Select whether to run the [Gurobi](https://www.gurobi.com/) optimization algorithms silently (no output details)
run_silently = True #@param {type:"boolean"}
if run_silently:
ProjectSelectionModel.setParam('OutputFlag',0)
else:
ProjectSelectionModel.setParam('OutputFlag',1)
ProjectSelectionModel.optimize()
print('\nSolved the optimization problem...')
###Output
Solved the optimization problem...
###Markdown
Print information about the solution.
###Code
#@markdown Print the solution
# the objective in this problem
print("The largest total cumulative impact achieved without exceeding an average risk of {} is : {}".\
format(max_risk, ProjectSelectionModel.objVal))
# other useful information about the solution
print_solution(fund_project)
###Output
The largest total cumulative impact achieved without exceeding an average risk of 5.5 is : 27.0
###Markdown
______
**Q5**
Before running this section, make sure you have run all the previous sections of the Colab file. Re-create an identical model to the one from **Q1** and store the model, the decision variables and the constraints.
###Code
#@title Create a model like the one in Q1
ProjectSelectionModel, fund_project, budget_constraint = \
create_model_like_in_Q1()
###Output
_____no_output_____
###Markdown
Add a new set of decision variables for whether to schedule each project in a particular period. Here, we want to have one binary decision for every project and for every potential period, so we use `addVars` to define these.
###Code
#@title Add binary decision variables for whether to schedule the projects in a given period
schedule_project = ProjectSelectionModel.addVars(allProjects, allPeriods, vtype = GRB.BINARY, name="schedule")
###Output
_____no_output_____
###Markdown
Add constraints to connect the decisions on whether to fund projects with the decisions on whether to schedule projects. If we let $X_p$ denote the binary decision of whether to fund project $p$, and $Y_{p,t}$ the binary variable for whether project $p$ should be scheduled in period $t$, then the constraints we need to add here are:
> $\sum_{t=1}^T Y_{p,t} = X_p$, for every project $p$.
In other words, if $X_p=1$ (so project $p$ is funded), it must be scheduled in one of the periods, so the sum on the left must be equal to 1. If $X_p=0$, project $p$ is not funded and it should also not be scheduled.
###Code
#@title Constraints that a project must be scheduled if and only if it is funded.
for p in allProjects:
ProjectSelectionModel.addConstr( fund_project[p] == \
quicksum(schedule_project[p,t] for t in allPeriods))
###Output
_____no_output_____
###Markdown
Add constraints on the available resources. For each resource, the usage should not exceed what is available.
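In symbols, writing $u_{p,r}$ for the amount of resource $r$ required by project $p$ and $a_{r,t}$ for the amount of resource $r$ available in period $t$ (both are just shorthand for the entries of `projectData` and `resourceData`), and using the scheduling variables $Y_{p,t}$ defined above, the constraints added below are
> $\sum_{p} u_{p,r} \cdot Y_{p,t} \leq a_{r,t} \quad$ for every resource $r$ and every period $t$.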
###Code
#@title Constraints on available resources
for t in allPeriods:
for r in allResources:
ProjectSelectionModel.addConstr( quicksum(schedule_project[p,t]*projectData[r][p] \
for p in allProjects) <= \
resourceData[t][r] )
###Output
_____no_output_____
###Markdown
Solve the new model.
###Code
#@markdown Select whether to run the [Gurobi](https://www.gurobi.com/) optimization algorithms silently (no output details)
run_silently = True #@param {type:"boolean"}
if run_silently:
ProjectSelectionModel.setParam('OutputFlag',0)
else:
ProjectSelectionModel.setParam('OutputFlag',1)
ProjectSelectionModel.optimize()
print('\nSolved the optimization problem...')
###Output
Solved the optimization problem...
###Markdown
Print information about the solution.
###Code
#@markdown Print the solution
# the objective in this problem
print("The largest impact that can be simultaneously achieved on each continent: {}".\
format(ProjectSelectionModel.objVal))
# create a small dataframe to store the decisions to fund as well as the decisions to schedule
dict = {"Fund_Project" : [np.int(fund_project[p].X) for p in allProjects]}
for t in allPeriods:
dict["Schedule "+t] = [np.int(schedule_project[p,t].X) for p in allProjects]
all_decisions_Q5 = pd.DataFrame(dict, index=allProjects)
# other useful information about the solution
display(all_decisions_Q5)
###Output
The largest total impact that can be achieved subject to the scheduling and resource constraints: 32.0
|
notebooks/data/wikivoyage/preprocessing/preprocessing-wikivoyage-metadata.ipynb | ###Markdown
Preprocessing WikivoyageAssumes wikivoyage data has been parsed into a dataframe with metadata and a generator for creating the tokens.
###Code
data_dir = '../../../data/wikivoyage/'
path_wiki_metadata_in = data_dir + 'clean/wikivoyage_metadata_all.csv'
path_wiki_metadata_out = data_dir + 'processed/wikivoyage_destinations.csv'
import pandas as pd
###Output
_____no_output_____
###Markdown
Requirements for base productEssentials (= scope of this notebook):- Structured metadata: * Destination name * Geolocation * Parent, needed for retrieving country (but could also be done on geolocation?)- Unstructured content: * Embeddings for retrieving activitiesAlso retrieved in the past, but not yet needed (= not in scope):- Redirect names so that people can search by name?- Links to other datasets: DMOZ, Commons, Wikipedia- Full hierarchy- Number of direct children & sum of all children destinations- Number of parents, and whether the parent is 'odd' (parent is park or city)- Continent Load data
###Code
data = (
# converter is not needed if converted earlier at extraction
pd.read_csv(path_wiki_metadata_in, converters={'status' : str.lower})
# throw away one odd case in which the title is missing
.loc[lambda df: ~df['title'].isnull()]
)
data.shape
###Output
_____no_output_____
###Markdown
Preprocessing Setting some of the parents manuallyNote: "World" for the continents is later undone, as "World" redirects to "Destinations"
###Code
# set is part of for continents
data.loc[(data['ispartof'].isnull()) & (data['articletype'] == 'continent'), 'ispartof'] = 'World'
###Output
_____no_output_____
###Markdown
Some destinations have a missing parent that cannot be fixed programmatically:* "Sonoma County" doesn't have any parent listed in its xml text...So, for these let's also set `ispartof` manually:
###Code
data.loc[lambda df: df['title'] == 'Sonoma County', 'ispartof'] = 'North Coast (California)'
###Output
_____no_output_____
###Markdown
Scope data: throw away most irrelevant content
###Code
df = (
data.copy()
.loc[lambda df: ~df['title'].isin(['Space', 'Moon'])] # these are of type 'park' so need to excl. them by name
.loc[lambda df: ~df['title'].str.contains('disambiguation')]
.loc[lambda df: df['disambiguation'] == False]
.loc[lambda df: df['historical'] == False]
.loc[lambda df: ~df['articletype'].isnull()]
.loc[lambda df: ~df['ispartof'].isnull()]
)
print(df.shape)
###Output
_____no_output_____
###Markdown
Getting the parent path for each destinationBefore we can get each parent, we need to replace all `title`'s and `ispartof`'s with their redirects if available. This way we can avoid 'broken chains' where an `ispartof` refers to a redirect title instead of a title.To do this we create a lookup dataframe and apply a function to replace the title if there is a redirect.
###Code
def replace_title_with_redirect_if_possible(title, lookup_df):
redirect_title = lookup_df.loc[lookup_df['title'] == title, 'redirect']
return redirect_title.iat[0] if len(redirect_title) > 0 else title
redirect_table = (
data
.loc[lambda df: ~df['redirect'].isnull()]
[['pageid', 'title', 'redirect']]
.copy()
)
# filter away all redirects
df = df.loc[lambda df: df['redirect'].isnull()]
df['title'] = df.apply(lambda x: replace_title_with_redirect_if_possible(x['title'], redirect_table), axis=1)
df['ispartof'] = df.apply(lambda x: replace_title_with_redirect_if_possible(x['ispartof'], redirect_table), axis=1)
###Output
_____no_output_____
###Markdown
Now we are going to do a left join of the dataframe with itself to get the `parentid`. We need lowercased helper columns for the join, as sometimes the capitalization of `title` and `ispartof` doesn't match. For example, "Geraldton (Ontario)" has as a parent "northern Ontario", whereas the actual record starts with a capital N: "Northern Ontario".
###Code
lower_case_matching_df = (
df[['pageid', 'title']]
# lowercase for better matching
.assign(title_lower = lambda df: df['title'].str.lower())
.drop('title', axis=1)
# rename columns for matching
.rename({'pageid' : 'parentid'}, axis=1)
)
df = (
df
.assign(ispartof_lower = lambda df: df['ispartof'].str.lower())
.merge(lower_case_matching_df, how='left', left_on='ispartof_lower', right_on='title_lower')
.drop(['title_lower', 'ispartof_lower'], axis=1)
# .assign(parentid = lambda df: df['parentid'].astype(int))
)
###Output
_____no_output_____
###Markdown
Set `parentid` for "Destinations" and "Other destinations" to 0.
###Code
df.loc[df['ispartof'] == 'Destinations', 'parentid'] = 0
df.loc[df['ispartof'] == 'Other destinations', 'parentid'] = 0
print(df.shape)
###Output
_____no_output_____
###Markdown
Scope data: require good articletype and having a parentFocus on core destinations content here
###Code
df_scoped = (
df.copy()
.loc[lambda df: df['articletype'].isin(['district', 'city', 'region', 'park', 'country', 'continent'])]
)
print(df_scoped.shape)
###Output
_____no_output_____
###Markdown
Check the fallout ('uitval') from the parent matching. There are so few records at this point that we simply ignore/delete them.
###Code
uitval_parent = (
df_scoped
.copy()
.loc[lambda df: df['parentid'].isnull()]
# .loc[lambda df: ~df['ispartof'].isin(['Destinations', 'Other destinations'])]
)
print(uitval_parent.shape)
uitval_parent
df_scoped = (
df_scoped.loc[lambda df: ~df['parentid'].isnull()]
# finally convert parent_id into int now that it's always available
.assign(parentid = lambda df: df['parentid'].astype(int))
)
print(df_scoped.shape)
###Output
_____no_output_____
###Markdown
Save country as feature
###Code
def find_record(pageid, lookup_df):
return lookup_df.loc[lookup_df['pageid'] == pageid].iloc[0]
def find_parent(pageid, lookup_df):
current_record = find_record(pageid, lookup_df)
articletype, country, parentid = current_record['articletype'], current_record['title'], current_record['parentid']
# loop until country found, or no other possibilities left
while (current_record['articletype'] != 'country') and (current_record['parentid'] != 0):
# lookup parent record and get type
current_record = find_record(current_record['parentid'], lookup_df)
articletype, country, parentid = current_record['articletype'], current_record['title'], current_record['parentid']
# when done with loop, return country name if found
return country if articletype == 'country' else None
lookup_df = df_scoped.copy()
df_scoped['country'] = df_scoped['pageid'].apply(lambda x: find_parent(x, lookup_df))
###Output
_____no_output_____
###Markdown
There are quite a few destinations for which a country couldn't be found. Many of these are special regions belonging to bigger countries, like many of the Caribbean islands:- Puerto Rico- Cayman Islands- U.S. Virgin Islands- Bonaire- French Guiana (doesn't have its own flag, but is part of France - could set France as parentid)However, many of these islands have their own flag. We need to solve that by matching with a flag dataset in the future.
###Code
uitval_country = df_scoped.loc[(df_scoped['country'].isnull()) & (df_scoped['articletype'] != "region")].copy()
print(uitval_country.shape)
uitval_country.sample(10)
###Output
_____no_output_____
###Markdown
To make sure any destination has a country feature value, set it to `ispartof` when `country` is missing:
###Code
df_scoped.loc[lambda df: df['country'].isnull(), 'country'] = df_scoped.loc[lambda df: df['country'].isnull(), 'ispartof']
###Output
_____no_output_____
###Markdown
Scope data: select only end destinationsKeep cities and parks only.
###Code
df_dest = (
df_scoped
.loc[lambda df: df['articletype'].isin(['city', 'park'])]
.drop(['redirect', 'disambiguation', 'historical'], axis=1)
)
print(df_dest.shape)
###Output
_____no_output_____
###Markdown
Scope data: require geo locationMake sure all have a geo location
###Code
uitval_geo = df_dest.loc[lambda df: (df['lat'].isnull()) | (df['lon'].isnull())].copy()
print(uitval_geo.shape)
uitval_geo.sample(3)
df_final = df_dest.loc[lambda df: (~df['lat'].isnull()) & (~df['lon'].isnull())].copy()
print(df_final.shape)
###Output
_____no_output_____
###Markdown
Write to CSV TODO: make sure the input dataframe's longitude column is renamed from `lon` to `lng`
###Code
(
df_final
.rename(columns={'pageid': 'id', 'title': 'name', 'articletype': 'type', 'lon': 'lng'})
.to_csv(path_wiki_metadata_out, index=False)
)
###Output
_____no_output_____ |
SpaCy/02_Spacy_custom_extensions.ipynb | ###Markdown
Extensions____________________*This notebook is based on / inspired by excellent course materials from https://campus.datacamp.com/courses/advanced-nlp-with-spacy course at DataCamp*____________________To set an extension we use `.set_extension()` method. This method can be used on:* `Doc`* `Span`* `Token`Let's see a set of examples:* `Doc.set_extension('title', default = None)`* `Span.set_extension('is_german_word', default = False)`* `Token.set_extension('has_color', default = False)`To access extensions we need to use `._.` to distinguish them from built-in properties:`doc._.title = 'Document 1'` Types of extensions* Attribute extensions* Property extensions* Method extensions
###Code
# Imports used throughout this notebook
import spacy
from spacy.tokens import Doc, Span, Token
from spacy.matcher import PhraseMatcher

# Get the model
nlp = spacy.load('en_core_web_sm')
# Define some data
my_str = 'I used to live in Vienna 5 years ago.'
doc1 = nlp(my_str)
###Output
_____no_output_____
###Markdown
Token level extensions
###Code
# Token level attributes
Token.set_extension('is_city', default = False)
# Set extension
doc1[5]._.is_city = True
# Let's see how it works
print([(token.text, token._.is_city) for token in doc1])
###Output
[('I', False), ('used', False), ('to', False), ('live', False), ('in', False), ('Vienna', True), ('5', False), ('years', False), ('ago', False), ('.', False)]
###Markdown
Doc level extensions
###Code
# Define a getter function
def get_has_number(doc):
# If any token is like_num - return True
return any(token.like_num for token in doc)
# Register the Doc property extension 'has_number' with the getter get_has_number
Doc.set_extension('has_number', getter = get_has_number)
# Check how it works
print('has_number:', doc1._.has_number)
###Output
has_number: True
###Markdown
Span level extensions
###Code
# Define a method
def to_html(span, tag):
# Wrap the span text in a HTML tag and return it
return '<{tag}>{text}</{tag}>'.format(tag = tag, text = span.text)
# Register the Span property extension 'to_html' with the method to_html
Span.set_extension('to_html', method = to_html)
# Process the text and call to_html method on the span with `h1` tag
doc = nlp("Hello world, this is my sentence.")
span = doc[0:2]
print(span._.to_html('h1'))
###Output
<h1>Hello world</h1>
###Markdown
Extensions and entities Example 1
###Code
def get_wikipedia_url(span):
# Get a Wikipedia URL if the span has one of the labels
if span.label_ in ('PERSON', 'ORG', 'GPE', 'LOCATION'):
entity_text = span.text.replace(' ', '_')
return "https://en.wikipedia.org/w/index.php?search=" + entity_text
# Set the Span extension wikipedia_url using get getter get_wikipedia_url
Span.set_extension('wikipedia_url', getter = get_wikipedia_url, force = True)
# Phrase
doc = nlp("In over fifty years from his very first recordings right through to his last album, David Bowie was at the vanguard of contemporary culture. Annie Lennox")
# Get names and links
for ent in doc.ents:
# Print the text and Wikipedia URL of the entity
print(ent.text, ent._.wikipedia_url)
###Output
fifty years None
David Bowie https://en.wikipedia.org/w/index.php?search=David_Bowie
Annie Lennox https://en.wikipedia.org/w/index.php?search=Annie_Lennox
###Markdown
Example 2
###Code
nlp2 = spacy.load('en_core_web_sm')
# Let's define a list of districts in Tel Aviv
distrs = ['Old Yafo', 'Shapira', 'Ezra', 'Florentin']
# Add patterns
patterns = list(nlp2.pipe(distrs))
# Initialize Matcher
matcher = PhraseMatcher(nlp2.vocab)
matcher.add('DISTRICT', None, *patterns)
def tlv_component(doc_):
# Apply the matcher to the doc
matches = matcher(doc_)
# Create a Span for each match and assign the label 'TLV_DISTRICT'
spans = [Span(doc_, start, end, label = 'TLV_DISTRICT')
for match_id, start, end in matches]
# Overwrite the doc.ents with the matched spans
doc_.ents = tuple(spans)
return doc_
district_loc = {
'Old Yafo': 'Southwest',
'Shapira': 'South',
'Ezra': 'Southeast',
'Florentin': 'South'
}
# Add the component to the pipeline after the 'ner' component
nlp2.add_pipe(tlv_component, after = 'ner')
nlp2.pipeline
# Create a document
doc2 = nlp2('I stayed in Old Yafo for a couple of days and then moved to Shapira to visit my friends. They told me\
that in their opinion Ezra is nicer than Florentin. I disagreed.')
# Register district_loc and getter that looks up the span text in TLV districts
Span.set_extension('district_location', getter = lambda span: district_loc[span.text], force = True)
for ent in doc2.ents:
print(f'DISTRICT: {ent.text:10} | LABEL: {ent.label_:10} | LOCATION: {ent._.district_location}')
###Output
DISTRICT: Old Yafo | LABEL: TLV_DISTRICT | LOCATION: Southwest
DISTRICT: Shapira | LABEL: TLV_DISTRICT | LOCATION: South
DISTRICT: Ezra | LABEL: TLV_DISTRICT | LOCATION: Southeast
DISTRICT: Florentin | LABEL: TLV_DISTRICT | LOCATION: South
|
tutorial/007_3gate.ipynb | ###Markdown
3 qubits gateWe have learned about 1-qubit and 2-qubit gates. Now we learn about a 3-qubit gate.Usually an actual quantum computer implements only 1-qubit and 2-qubit gate operations, so we have to build gates on more qubits out of these 1- and 2-qubit gates. Let's see how. CircuitThe basic circuit for the Toffoli gate consists of H, CX and T gates.
###Code
!pip install blueqat
from blueqat import Circuit
import numpy as np
# The initial X gates are data input that flip qubits 1 and 2 (the control qubits), so the Toffoli gate should flip qubit 0 (the target).
Circuit().x[1:].h[0].cnot[1,0].rz(-np.pi/4)[0].cnot[2,0].rz(np.pi/4)[0].cnot[1,0].rz(-np.pi/4)[0].cnot[2,0].rz(np.pi/4)[:1].h[0].cnot[1,0].cnot[0,1].cnot[1,0].cnot[2,0].rz(-np.pi/4)[0].rz(np.pi/4)[2].cnot[2,0].m[:].run(shots=1)
###Output
_____no_output_____
###Markdown
This is the Toffoli gate. The Toffoli gate is also called the CCX gate. It has 2 control qubits and 1 target qubit. If both control qubits are 1, the gate flips the target qubit. In blueqat it can also be written in the very short expression .ccx[c,c,x]
###Code
Circuit().x[:2].ccx[0,1,2].m[:].run(shots=1)
###Output
_____no_output_____ |
quantum-with-qiskit/.ipynb_checkpoints/B34_Superposition_and_Measurement-checkpoint.ipynb | ###Markdown
prepared by Abuzer Yakaryilmaz (QLatvia) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} 1 \mspace{-1.5mu} \rfloor } $ Superposition[Watch Lecture](https://youtu.be/uJZtxWHAlPI)There is no classical counterpart of the concept "superposition".But, we can still use a classical analogy that might help us to give some intuitions. Probability distribution Suppose that Asja starts in $ \myvector{1\\0} $ and secretly applies the probabilistic operator $ \mymatrix{cc}{ 0.3 & 0.6 \\ 0.7 & 0.4 } $.Because she applies her operator secretly, our information about her state is probabilistic, which is calculated as$$ \myvector{0.3 \\ 0.7} = \mymatrix{cc}{ 0.3 & 0.6 \\ 0.7 & 0.4 } \myvector{1\\0}.$$Asja is either in state 0 or in state 1.However, from our point of view, Asja is in state 0 with probability $ 0.3 $ and in state 1 with probability $ 0.7 $.We can say that Asja is in a probability distribution of states 0 and 1, being in both states at the same time but with different weights.On the other hand, if we observe Asja's state, then our information about Asja becomes deterministic: either $ \myvector{1 \\ 0} $ or $ \myvector{0 \\ 1} $.We can say that, after observing Asja's state, the probabilistic state $ \myvector{0.3 \\ 0.7} $ collapses to either $ \myvector{1 \\ 0} $ or $ \myvector{0 \\ 1} $. The third experiment Remember the following experiment. We trace it step by step by matrix-vector multiplication. The initial Step The photon is in state $ \ket{v_0} = \vzero $. 
The first step Hadamard is applied:$ \ket{v_1} = \hadamard \vzero = \stateplus $.At this point, the photon is in a superposition of state $ \ket{0} $ and state $ \ket{1} $, being in both states with the amplitudes $ \frac{1}{\sqrt{2}} $ and $ \frac{1}{\sqrt{2}} $, respectively.The state of photon is $ \ket{v_1} = \stateplus $, and we can also represent it as follows: $ \ket{v_1} = \frac{1}{\sqrt{2}} \ket{0} + \frac{1}{\sqrt{2}} \ket{1} $. The second step Hadamard is applied again:We write the effect of Hadamard on states $ \ket{0} $ and $ \ket{1} $ as follows:$ H \ket{0} = \frac{1}{\sqrt{2}} \ket{0} + \frac{1}{\sqrt{2}} \ket{1} $ (These are the transition amplitudes of the first column.)$ H \ket{1} = \frac{1}{\sqrt{2}} \ket{0} - \frac{1}{\sqrt{2}} \ket{1} $ (These are the transition amplitudes of the second column.)This representation helps us to see clearly why the state $ \ket{1} $ disappears.Now, let's see the effect of Hadamard on the quantum state $ \ket{v_1} = \frac{1}{\sqrt{2}} \ket{0} + \frac{1}{\sqrt{2}} \ket{1} $:$ \ket{v_2} = H \ket{v_1} = H \mybigpar{ \frac{1}{\sqrt{2}} \ket{0} + \frac{1}{\sqrt{2}} \ket{1} } = \frac{1}{\sqrt{2}} H \ket{0} + \frac{1}{\sqrt{2}} H \ket{1} $We can replace $ H\ket{0} $ and $ H\ket{1} $ as described above. $ \ket{v_2} $ is formed by the summation of the following terms: $~~~~~~~~ \dsqrttwo H \ket{0} = $ $\donehalf \ket{0} $ $ + \donehalf \ket{1} $$~~~~~~~~ \dsqrttwo H \ket{1} = $ $\donehalf \ket{0} $ $ - \donehalf \ket{1} $$ \mathbf{+}\mbox{___________________} $$ ~~ $ $\mypar{ \donehalf+\donehalf } \ket{0} $ + $\mypar{ \donehalf-\donehalf } \ket{1} $ $ = \mathbf{\ket{0}} $.The amplitude of $ \ket{0} $ becomes 1, but the amplitude of $ \ket{1} $ becomes 0 because of cancellation. The photon was in both states at the same time with certain amplitudes.After the second Hadamard, the "outcomes" are interfered with each other.The interference can be constructive or destructive.In our examples, the outcome $ \ket{0} $s are interfered constructively, but the outcome $ \ket{1} $s are interfered destructively. Observations Probabilistic systems: If there is a nonzero transition to a state, then it contributes to the probability of this state positively. Quantum systems: If there is a nonzero transition to a state, then we cannot know its contribution without knowing the other transitions to this state.If it is the only transition, then it contributes to the amplitude (and probability) of the state, and it does not matter whether the sign of the transition is positive or negative.If there is more than one transition, then depending on the summation of all transitions, we can determine whether a specific transition contributes or not.As a simple rule, if the final amplitude of the state and nonzero transition have the same sign, then it is a positive contribution; and, if they have the opposite signs, then it is a negative contribution. Task 1 [on paper]Start in state $ \ket{u_0} = \ket{1} $.Apply Hadamard operator to $ \ket{u_0} $, i.e, find $ \ket{u_1} = H \ket{u_0} $.Apply Hadamard operator to $\ket{u_1}$, i.e, find $ \ket{u_2} = H \ket{u_1} $.Observe the constructive and destructive interferences, when calculating $ \ket{u_2} $. Being in a superposition A quantum system can be in more than one state with nonzero amplitudes.Then, we say that our system is in a superposition of these states.When evolving from a superposition, the resulting transitions may affect each other constructively and destructively. 
This happens because of having opposite sign transition amplitudes. Otherwise, all nonzero transitions are added up to each other as in probabilistic systems. Measurement We can measure a quantum system, and then the system is observed in one of its states. This is the most basic type of measurement in quantum computing. (There are more generic measurement operators, but we will not mention about them.)The probability of the system to be observed in a specified state is the square value of its amplitude. If the amplitude of a state is zero, then this state cannot be observed. If the amplitude of a state is nonzero, then this state can be observed. For example, if the system is in quantum state $$ \myrvector{ -\frac{\sqrt{2}}{\sqrt{3}} \\ \frac{1}{\sqrt{3}} },$$then, after a measurement, we can observe the system in state $\ket{0} $ with probability $ \frac{2}{3} $ and in state $\ket{1}$ with probability $ \frac{1}{3} $. Collapsing After the measurement, the system collapses to the observed state, and so the system is no longer in a superposition. Thus, the information kept in a superposition is lost. - In the above example, when the system is observed in state $\ket{0}$, then the new state becomes $ \myvector{1 \\ 0} $. - If it is observed in state $\ket{1}$, then the new state becomes $ \myvector{0 \\ 1} $. The second experiment of the quantum coin flipping Remember the experiment set-up. In this experiment, after the first quantum coin-flipping, we make a measurement.If the measurement outcome is state $ \ket{0} $, then we apply a second Hadamard.First, we trace the experiment analytically. the tex code of the image The first Hadamard We start in state $ \ket{0} = \vzero $. Then, we apply Hadamard operator: $ \stateplus = \hadamard \vzero $ The first measurement Due to the photon detector A, the photon cannot be in superposition, and so it forces the photon to be observed in state $\ket{0}$ or state $ \ket{1} $. This is a measurement. Since the amplitudes are $ \sqrttwo $, we observe each state with equal probability. Thus, with probability $ \frac{1}{2} $, the new quantum state is $ \ket{0} = \vzero $. And, with probability $ \frac{1}{2} $, the new quantum state is $ \ket{1} = \vone $. The second Hadamard If the photon is in state $ \ket{0} $, then another Hadamard operator is applied. In other words, with probability $ \frac{1}{2} $, the computation continues and another Hadamard is applied: $ \stateplus = \hadamard \vzero $ The second measurement Due to photon detectors B1 and B2, we make another measurement. Thus, we observe state $ \ket{0} $ with probability $ \frac{1}{4} $ and state $ \ket{1} $ with probability $ \frac{1}{4} $. At the end, the state $ \ket{0} $ can be observed with probability $ \frac{1}{4} $, and the state $ \ket{1} $ can be observed with probability $ \frac{3}{4} $. Implementing the second experiment By using the simulator, we can implement the second experiment.For this purpose, qiskit provides a conditional operator based on the value of a classical register.In the following example, the last operator (x-gate) on the quantum register will be executed if the value of the classical register is 1. q = QuantumRegister(1) c = ClassicalRegister(1) qc = QuantumCircuit(q,c) ... qc.measure(q,c) qc.x(q[0]).c_if(c,1) In our experiment, we use such classical control after the first measurement.
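Before implementing the experiment, here is a quick numerical sanity check (a minimal NumPy sketch) of the double-Hadamard cancellation described above: applying Hadamard twice to state |0>, without an intermediate measurement, gives back |0> exactly because the |1> amplitudes cancel.

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
v0 = np.array([1, 0])   # state |0>

v1 = H @ v0             # ~ [0.707, 0.707]: equal amplitudes for |0> and |1>
v2 = H @ v1             # ~ [1, 0]: the |1> amplitudes cancel out
print(v1, v2)
```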
###Code
# import all necessary objects and methods for quantum circuits
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
# define a quantum register with a single qubit
q = QuantumRegister(1,"q")
# define a classical register with a single bit
c = ClassicalRegister(1,"c")
# define a quantum circuit
qc = QuantumCircuit(q,c)
# apply the first Hadamard
qc.h(q[0])
# the first measurement
qc.measure(q,c)
# apply the second Hadamard if the measurement outcome is 0
qc.h(q[0]).c_if(c,0)
# the second measurement
qc.measure(q[0],c)
# draw the circuit
qc.draw(output="mpl")
###Output
_____no_output_____
###Markdown
Task 2 If we execute this circuit 1000 times, what are the expected numbers of observing the outcomes '0' and '1'?Test your result by executing the following code.
###Code
job = execute(qc,Aer.get_backend('qasm_simulator'),shots=1000)
counts = job.result().get_counts(qc)
print(counts)
###Output
_____no_output_____
###Markdown
Task 3 Repeat the second experiment with the following modifications.Start in state $ \ket{1} $.Apply a Hadamard gate.Make a measurement. If the measurement outcome is 0, stop.Otherwise, apply a second Hadamard, and then make a measurement.Execute your circuit 1000 times.Calculate the expected values of observing '0' and '1', and then compare your result with the simulator result.
###Code
# import all necessary objects and methods for quantum circuits
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
#
# your code is here
#
###Output
_____no_output_____
###Markdown
click for our solution Task 4 Design the following quantum circuit.Start in state $ \ket{0} $. Repeat 3 times: if the classical bit is 0: apply a Hadamard operator make a measurementExecute your circuit 1000 times.Calculate the expected values of observing '0' and '1', and then compare your result with the simulator result.
###Code
# import all necessary objects and methods for quantum circuits
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
#
# your code is here
#
###Output
_____no_output_____
###Markdown
Task 5 [extra] Design the following randomly created quantum circuit.Start in state $ \ket{0} $. apply a Hadamard operator make a measurement REPEAT 4 times: randomly pick x in {0,1} if the classical bit is x: apply a Hadamard operator make a measurement Draw your circuit, and guess the expected frequency of observing '0' and '1' if the circuit is executed 10000 times.Then, execute your circuit 10000 times, and compare your result with the simulator result.Repeat execution a few more times.
###Code
# import all necessary objects and methods for quantum circuits
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
# import randrange for random choices
from random import randrange
#
# your code is here
#
###Output
_____no_output_____ |
Part 1 - Intro to Data Science/DS_1.12_How To Break Into the Field.ipynb | ###Markdown
How To Break Into the FieldNow you have had a closer look at the data, and you saw how I approached looking at how the survey respondents think you should break into the field. Let's recreate those results, as well as take a look at another question.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import HowToBreakIntoTheField as t
%matplotlib inline
df = pd.read_csv('./survey_results_public.csv')
schema = pd.read_csv('./survey_results_schema.csv')
df.head()
###Output
_____no_output_____
###Markdown
Question 1**1.** In order to understand how to break into the field, we will look at the **CousinEducation** field. Use the **schema** dataset to answer this question. Write a function called **get_description** that takes the **schema dataframe** and the **column** as a string, and returns a string of the description for that column.
###Code
# cell for work
list(schema[schema.Column == 'CousinEducation']['Question'])
def get_description(column_name, schema=schema):
'''
INPUT - schema - pandas dataframe with the schema of the developers survey
column_name - string - the name of the column you would like to know about
OUTPUT -
desc - string - the description of the column
'''
desc = list(schema[schema['Column'] == column_name]['Question'])
return desc
#test your code
#Check your function against solution - you shouldn't need to change any of the below code
get_description(df.columns[0]) # This should return a string of the first column description
# cell for work
df.columns
# cell for work
descrips = list(get_description(col) for col in df.columns)
# data exploration
descrips[:5]
# cell for work
list(schema[schema['Column'] == 'Country']['Question'])[0]
# cell for work
#https://stackoverflow.com/questions/952914/how-to-make-a-flat-list-out-of-list-of-lists
list(set(sum(descrips, [])))[:5]
#Check your function against solution - you shouldn't need to change any of the below code
descrips = list(get_description(col) for col in df.columns)
descrips_set = set(sum(descrips, []))
t.check_description(descrips_set)
###Output
Nice job it looks like your function works correctly!
###Markdown
The question we have been focused on has been around how to break into the field. Use your **get_description** function below to take a closer look at the **CousinEducation** column.
###Code
get_description('CousinEducation')
###Output
_____no_output_____
###Markdown
Question 2**2.** Provide a pandas series of the different **CousinEducation** status values in the dataset. Store this pandas series in **cous_ed_vals**. If you are correct, you should see a bar chart of the proportion of individuals in each status. If it looks terrible, and you get no information from it, then you followed directions. However, we should clean this up!
###Code
cous_ed_vals = df.CousinEducation.value_counts() #Provide a pandas series of the counts for each CousinEducation status
# assure this looks right
cous_ed_vals[:5]
# The below should be a bar chart of the proportion of individuals in your ed_vals
# if it is set up correctly.
(cous_ed_vals/df.shape[0]).plot(kind="bar", figsize=(15, 5));
plt.title("Formal Education");
###Output
_____no_output_____
###Markdown
We definitely need to clean this. Above is an example of what happens when you do not clean your data. Below I am using the same code you saw in the earlier video to take a look at the data after it has been cleaned.
###Code
# cell to work
study = df['CousinEducation'].value_counts().reset_index()
study[:5]
help(t.total_count)
possible_vals = ["Take online courses", "Buy books and work through the exercises",
"None of these", "Part-time/evening courses", "Return to college",
"Contribute to open source", "Conferences/meet-ups", "Bootcamp",
"Get a job as a QA tester", "Participate in online coding competitions",
"Master's degree", "Participate in hackathons", "Other"]
study.rename(columns={'index': 'method', 'CousinEducation': 'count'}, inplace=True)
study_df = t.total_count(study, 'method', 'count', possible_vals)
study_df
study_df
study_df.plot(kind="bar", x='method', y='count')
possible_vals = ["Take online courses", "Buy books and work through the exercises",
"None of these", "Part-time/evening courses", "Return to college",
"Contribute to open source", "Conferences/meet-ups", "Bootcamp",
"Get a job as a QA tester", "Participate in online coding competitions",
"Master's degree", "Participate in hackathons", "Other"]
def clean_and_plot(df, title='Method of Educating Suggested', plot=True):
'''
INPUT
df - a dataframe holding the CousinEducation column
title - string the title of your plot
plot - bool providing whether or not you want a plot back
OUTPUT
study_df - a dataframe with the count of how many individuals
Displays a plot of pretty things related to the CousinEducation column.
'''
study = df['CousinEducation'].value_counts().reset_index()
study.rename(columns={'index': 'method', 'CousinEducation': 'count'}, inplace=True)
study_df = t.total_count(study, 'method', 'count', possible_vals)
study_df.set_index('method', inplace=True)
if plot:
(study_df/study_df.sum()).plot(kind='bar', legend=None);
plt.title(title);
plt.show()
props_study_df = study_df/study_df.sum()
return props_study_df
props_df = clean_and_plot(df)
###Output
_____no_output_____
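###Markdown
The helper `t.total_count` used above is defined in the course module `HowToBreakIntoTheField.py`, so its body is not shown in this notebook. Purely as an illustration, here is a minimal sketch of what such a helper might look like, assuming it counts how many respondents' answer strings mention each candidate method (the name `total_count_sketch` and its details are assumptions; the real implementation may differ):
###Code
import pandas as pd
from collections import defaultdict

def total_count_sketch(df, col1, col2, look_for):
    '''
    df - dataframe with a text column (col1) and a count column (col2)
    look_for - list of strings to search for in df[col1]
    Returns a dataframe with one row per string in look_for and the summed counts.
    '''
    counts = defaultdict(int)
    for val in look_for:                    # each candidate answer
        for idx in range(df.shape[0]):      # each row of the value_counts frame
            if val in df[col1][idx]:        # answer mentioned in this row's text
                counts[val] += int(df[col2][idx])
    out = pd.DataFrame(pd.Series(counts)).reset_index()
    out.columns = [col1, col2]
    return out.sort_values(col2, ascending=False)
###Output
_____no_output_____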
###Markdown
Question 3**3.** I wonder if some of the individuals might have bias towards their own degrees. Complete the function below that will apply to the elements of the **FormalEducation** column in **df**.
###Code
# cell for work
df.FormalEducation.head()
df.FormalEducation[1] in ["Master's degree", "Doctoral", "Professional degree"]
#for i in range(len(df.FormalEducation)-1):
for i in range(5):
print(int(df.FormalEducation[i] in ["Master's degree", "Doctoral", "Professional degree"]))
def higher_ed(formal_ed_str):
'''
INPUT
formal_ed_str - a string of one of the values from the Formal Education column
OUTPUT
return 1 if the string is in ("Master's degree", "Doctoral", "Professional degree")
return 0 otherwise
'''
if formal_ed_str in ("Master's degree", "Doctoral", "Professional degree"):
return 1
else:
return 0
df["FormalEducation"].apply(higher_ed)[:5] #Test your function to assure it provides 1 and 0 values for the df
# Check your code here
df['HigherEd'] = df["FormalEducation"].apply(higher_ed)
higher_ed_perc = df['HigherEd'].mean()
t.higher_ed_test(higher_ed_perc)
###Output
Nice job! That's right. The percentage of individuals in these three groups is 0.2302376714480159.
###Markdown
Question 4**4.** Now we would like to find out if the proportion of individuals who completed one of these three programs feel differently than those that did not. Store a dataframe of only the individual's who had **HigherEd** equal to 1 in **ed_1**. Similarly, store a dataframe of only the **HigherEd** equal to 0 values in **ed_0**.Notice, you have already created the **HigherEd** column using the check code portion above, so here you only need to subset the dataframe using this newly created column.
###Code
df['HigherEd'].head()
ed_1 = df[df['HigherEd'] == 1] # Subset df to only those with HigherEd of 1
ed_0 = df[df['HigherEd'] == 0] # Subset df to only those with HigherEd of 0
print(ed_1['HigherEd'][:5]) #Assure it looks like what you would expect
print(ed_0['HigherEd'][:5]) #Assure it looks like what you would expect
#Check your subset is correct - you should get a plot that was created using pandas styling
#which you can learn more about here: https://pandas.pydata.org/pandas-docs/stable/style.html
ed_1_perc = clean_and_plot(ed_1, 'Higher Formal Education', plot=False)
ed_0_perc = clean_and_plot(ed_0, 'Max of Bachelors Higher Ed', plot=False)
comp_df = pd.merge(ed_1_perc, ed_0_perc, left_index=True, right_index=True)
comp_df.columns = ['ed_1_perc', 'ed_0_perc']
comp_df['Diff_HigherEd_Vals'] = comp_df['ed_1_perc'] - comp_df['ed_0_perc']
comp_df.style.bar(subset=['Diff_HigherEd_Vals'], align='mid', color=['#d65f5f', '#5fba7d'])
###Output
_____no_output_____
###Markdown
Question 5**5.** What can you conclude from the above plot? Change the dictionary to mark **True** for the keys of any statements you can conclude, and **False** for any of the statements you cannot conclude.
###Code
sol = {'Everyone should get a higher level of formal education': False,
'Regardless of formal education, online courses are the top suggested form of education': True,
'There is less than a 1% difference between suggestions of the two groups for all forms of education': False,
'Those with higher formal education suggest it more than those who do not have it': True}
t.conclusions(sol)
###Output
Nice job that looks right!
|
jupyter_notebooks/trash/ibr_s_new_classes.ipynb | ###Markdown
Vehicle Dynamics $\frac{d}{dt} \vec{x} = f(\vec{x}, \vec{u})$

    def gen_x_next(x_k, u_k, dt):
        k1 = f(x_k, u_k)
        k2 = f(x_k+dt/2*k1, u_k)
        k3 = f(x_k+dt/2*k2, u_k)
        k4 = f(x_k+dt*k3, u_k)
        x_next = x_k + dt/6*(k1+2*k2+2*k3+k4)
        return x_next
    F = cas.Function('F',[x,u,t],[ode],)

States $\vec{x} = [x, y, \phi, \delta, V, s]^T$, $\vec{u} = [\delta^u, v^u]^T$
Discrete (integrated) dynamics $\vec{x}_{t+1} = F(\vec{x}_{t}, \vec{u}_{t})$
###Code
T = 4 #numbr of time horizons
dt = 0.1
N = int(T/dt) #Number of control intervals
###Output
_____no_output_____
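###Markdown
The continuous-time model $f(\vec{x}, \vec{u})$ is defined elsewhere in the original project, so it does not appear in this notebook. Purely as a sketch, a kinematic bicycle model that would be consistent with the state $[x, y, \phi, \delta, V, s]^T$, the controls $[\delta^u, v^u]^T$ and the expression $\dot{\phi} = V \tan(\delta)/L$ used in the cost terms below could look like this (the wheelbase value and the $\dot{s}=V$ choice are assumptions, not the notebook's actual definition):
###Code
import casadi as cas

L = 1.0                                  # wheelbase (assumed value)
x = cas.MX.sym('x', 6)                   # [x, y, phi, delta, V, s]
u = cas.MX.sym('u', 2)                   # [delta_u, v_u]

xdot = cas.vertcat(
    x[4] * cas.cos(x[2]),                # x position
    x[4] * cas.sin(x[2]),                # y position
    x[4] * cas.tan(x[3]) / L,            # heading phi
    u[0],                                # steering input
    u[1],                                # acceleration input
    x[4],                                # progress s along the desired path
)
f = cas.Function('f', [x, u], [xdot], ['x', 'u'], ['xdot'])
###Output
_____no_output_____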
###Markdown
    intg_options = {}
    intg_options['tf'] = dt                          # from dt
    intg_options['simplify'] = True
    intg_options['number_of_finite_elements'] = 6    # from 4
    dae = {}                                         # What's a DAE?
    dae['x'] = x
    dae['p'] = u
    dae['ode'] = f(x,u)
    intg = cas.integrator('intg','rk', dae, intg_options)
    res = intg(x0=x,p=u)
    x_next = res['xf']
    F = cas.Function('F',[x,u],[x_next],['x','u'],['x_next'])
###Code
s = cas.MX.sym('s')
xd = s
yd = 0
phid = 0
des_traj = cas.vertcat(xd, yd, phid)
fd = cas.Function('fd',[s],[des_traj],['s'],['des_traj'])
#Globally true information
min_dist = 2 * (2 * .5**2)**.5
initial_speed = 6.7
# Initial Conditions
x0 = np.array([2*min_dist, .6*min_dist, 0, 0, initial_speed, 0]).T
x0_2 = np.array([2*min_dist,-.6*min_dist, .0, 0, initial_speed, 0]).T
x0_amb = np.array([0, 0.0, 0, 0, 1.25 * initial_speed,0]).T
###Output
_____no_output_____
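###Markdown
As a quick check (an illustrative aside): the desired trajectory defined above is simply the x-axis, so evaluating `fd` at a few arc lengths returns points with $y = 0$ and heading $0$.
###Code
print(fd(0), fd(5.0), fd(10.0))   # each result is [s, 0, 0]
###Output
_____no_output_____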
###Markdown
Solve it centrally just to warm start the solution
###Code
x1_MPC = mpc.MPC()
x2_MPC = mpc.MPC()
x1_MPC.k_s = -1.0
x2_MPC.k_s = -1.0
amb_MPC = mpc.MPC()
amb_MPC.theta_iamb = 0.0
amb_MPC.k_u_v = 0.10
amb_MPC.k_u_change = 1.0
amb_MPC.k_s = -1.0
amb_MPC.max_v = 20.0
amb_MPC.max_X_dev = 5.0
opt = mpc.OptimizationMPC(x1_MPC, x2_MPC,amb_MPC)
opt.generate_optimization(N, dt, min_dist, fd, T, x0, x0_2, x0_amb, 2)
x1, u1, x1_des, x2, u2, x2_des, xamb, uamb, xamb_des = opt.get_solution()
optional_suffix = "_newsclassesibr"
subdir_name = datetime.datetime.now().strftime("%Y%m%d-%H%M%S") + optional_suffix
folder = "results/" + subdir_name + "/"
os.makedirs(folder)
os.makedirs(folder+"imgs/")
print(folder)
cplot.plot_cars(x1, x2, xamb, folder)
CIRCLES = False
if CIRCLES:
vid_fname = folder + subdir_name + 'circle.mp4'
else:
vid_fname = folder + subdir_name + 'car.mp4'
if os.path.exists(vid_fname):
os.remove(vid_fname)
cmd = 'ffmpeg -r 16 -f image2 -i {}imgs/%03d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p {}'.format(folder, vid_fname)
os.system(cmd)
print('Saving video to: {}'.format(vid_fname))
video = io.open(vid_fname, 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii')))
###Output
_____no_output_____
###Markdown
IBR
###Code
br1 = mpc.IterativeBestResponseMPC(x1_MPC, x2_MPC, amb_MPC)
br1.generate_optimization(N, dt, min_dist, fd, T, x0, x0_2, x0_amb, 2)
x1r1, u1r1, x1_desr1 = br1.get_solution(x2, u2, x2_des, xamb, uamb, xamb_des)
cplot.plot_cars(x1r1, x2, xamb, folder)
CIRCLES = False
if CIRCLES:
vid_fname = folder + subdir_name + 'circle1.mp4'
else:
vid_fname = folder + subdir_name + 'car1.mp4'
if os.path.exists(vid_fname):
os.remove(vid_fname)
cmd = 'ffmpeg -r 16 -f image2 -i {}imgs/%03d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p {}'.format(folder, vid_fname)
os.system(cmd)
print('Saving video to: {}'.format(vid_fname))
video = io.open(vid_fname, 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii')))
br1.solution.value(x1_MPC.lat_cost)
br1.solution.value(x1_MPC.lon_cost)
###Output
_____no_output_____
###Markdown
Warm Start
###Code
speeding_amb_u = np.zeros((2,N))
speeding_amb_u[1,:10] = np.ones((1,10)) * 3.999
x = np.zeros((6,N+1))
x1 = np.zeros((6,N+1))
xamb = np.zeros((6,N+1))
u0 = np.zeros((2,N))
u0[0,:10] = np.ones((1,10)) * np.pi/5
u0[1,:10] = np.ones((1,10)) * 3.999
u1 = np.zeros((2,N))
u1[0,:10] = np.ones((1,10)) * -np.pi/5
u1[1,:10] = np.ones((1,10)) * 3.999
x[:,0] = x0
x1[:,0] = x0_2
xamb[:,0] = x0_amb
for t in range(N):
x[:,t+1:t+2] = F(x[:,t],u0[:,t])
x1[:,t+1:t+2] = F(x1[:,t],u1[:,t])
xamb[:,t+1:t+2] = F(xamb[:,t],speeding_amb_u[:,t])
opti.set_initial(uamb_opt, speeding_amb_u)
opti.set_initial(u_opt, u0)
opti.set_initial(u2_opt, u0)
opti.set_initial(xamb_opt, xamb)
opti.set_initial(x2_opt, x1)
opti.set_initial(x_opt, x)
opti.solve()
x_warm = sol.value(x_opt)
u_warm = sol.value(u_opt)
x2_warm = sol.value(x2_opt)
u2_warm = sol.value(u2_opt)
xamb_warm = sol.value(xamb_opt)
uamb_warm = sol.value(uamb_opt)
x_des = sol.value(x_desired)
x2_des = sol.value(x2_desired)
xamb_des = sol.value(xamb_desired)
car1_v_cost = car1_s_cost
car2_v_cost = car2_s_cost
amb_v_cost = amb_s_cost
car1_sub_costs = [car1_u_delta_cost, car1_u_v_cost, k_lat1*car1_lat_cost, k_lon1*car1_lon_cost, k_phi1*car1_phi_cost, k_phid1*phid1_cost, q_v*car1_v_cost]
car1_sub_costs_labels = ['udel1', 'uv1', 'elat1', 'lon1', 'ephi1', 'phid1', 'v1']  # one label per entry in car1_sub_costs
plt.bar(range(len(car1_sub_costs)), [sol.value(c) for c in car1_sub_costs])
plt.xticks(range(len(car1_sub_costs)), car1_sub_costs_labels,rotation=45)
plt.title('Car 1')
plt.xlabel("Subcost")
plt.ylabel("Cost Value")
plt.show()
car2_sub_costs = [car2_u_delta_cost, car2_u_v_cost, 10*car2_lat_cost, 10*car2_lon_cost, k_phi2*car2_phi_cost, k_phid2*phid2_cost, q_v*car2_v_cost]
car2_sub_costs_labels = ['udel2', 'uv2', 'elat2', 'lon2', 'ephi2', 'phid2', 'v2']  # one label per entry in car2_sub_costs
plt.bar(range(len(car2_sub_costs)), [sol.value(c) for c in car2_sub_costs])
plt.xticks(range(len(car2_sub_costs)), car2_sub_costs_labels,rotation=45)
plt.title('Car 2')
plt.xlabel("Subcost")
plt.ylabel("Cost Value")
plt.show()
amb_sub_costs = [amb_u_delta_cost, amb_u_v_cost, 10*amb_lat_cost, 10*amb_lon_cost,k_phiamb*amb_phi_cost, k_phidamb*phidamb_cost, q_v*amb_v_cost]
amb_sub_costs_labels = ['udelA', 'uvA', 'elatA', 'lonA', 'ephiA', 'phidA', 'vA']  # one label per entry in amb_sub_costs
plt.bar(range(len(amb_sub_costs)), [sol.value(c) for c in amb_sub_costs])
plt.xticks(range(len(amb_sub_costs)), amb_sub_costs_labels,rotation=45)
plt.title('Amb')
plt.xlabel("Subcost")
plt.ylabel("Cost Value")
plt.show()
all_costs = [0.1*c for c in car1_sub_costs] + [0.1*c for c in car2_sub_costs] + [10*c for c in amb_sub_costs]
all_labels = car1_sub_costs_labels + car2_sub_costs_labels + amb_sub_costs_labels
plt.bar(range(len(all_costs)), [sol.value(c) for c in all_costs])
plt.xticks(range(len(all_labels)), all_labels,rotation=90)
plt.title('All Cars')
plt.xlabel("Subcost")
plt.ylabel("Cost Value")
###Output
_____no_output_____
###Markdown
Optimization
###Code
## Best response for vehicle 1
opti = cas.Opti()
n_ctrl = 2
n_state = 6
#Variables
x_opt = opti.variable(n_state, N+1) # initialize X for each car that we will optimize
u_opt = opti.variable(n_ctrl, N)
# These are now parameters!!
x2_opt = opti.parameter(n_state, N+1)
xamb_opt = opti.parameter(n_state, N+1)
u2_opt = opti.parameter(n_ctrl, N)
uamb_opt = opti.parameter(n_ctrl, N)
p = opti.parameter(n_state, 1) #this will be the initial state
p2 = opti.parameter(n_state, 1)
pamb = opti.parameter(n_state, 1)
x_desired = opti.variable(3, N+1)
x2_desired = opti.variable(3, N+1)
xamb_desired = opti.variable(3, N+1)
#### Costs
car1_u_delta_cost = 10 * cas.sumsqr(u_opt[0,:])
car1_u_v_cost = 1 * cas.sumsqr(u_opt[1,:])
car1_lat_cost = np.sum([(-cas.sin(x_desired[2,k]) * (x_opt[0,k]-x_desired[0,k]) +
cas.cos(x_desired[2,k]) * (x_opt[1,k]-x_desired[1,k]))**2
for k in range(N+1)])
car1_lon_cost = np.sum([(cas.cos(x_desired[2,k]) * (x_opt[0,k]-x_desired[0,k]) +
cas.sin(x_desired[2,k]) * (x_opt[1,k]-x_desired[1,k]))**2
for k in range(N+1)])
car1_phi_cost = cas.sumsqr(x_desired[2,:]-x_opt[2,:])
car1_v_cost = cas.sumsqr(x_opt[4,:])
phid_1 = x_opt[4,:] * cas.tan(x_opt[3,:]) / L
phid1_cost = cas.sumsqr(phid_1)
k_lat1 = 10
k_lon1 = 10
k_phid1 = 1.0
car1_costs = (car1_u_delta_cost + car1_u_v_cost +
k_lat1*car1_lat_cost + k_lon1*car1_lon_cost + car1_phi_cost +
k_phid1 * phid1_cost +
q_v*car1_v_cost)
car2_u_delta_cost = 10 * cas.sumsqr(u2_opt[0,:])
car2_u_v_cost = 1 * cas.sumsqr(u2_opt[1,:])
car2_lat_cost = np.sum([(-cas.sin(x2_desired[2,k]) * (x2_opt[0,k]-x2_desired[0,k]) +
cas.cos(x2_desired[2,k]) * (x2_opt[1,k]-x2_desired[1,k]))**2
for k in range(N+1)])
car2_lon_cost = np.sum([(cas.cos(x2_desired[2,k]) * (x2_opt[0,k]-x2_desired[0,k]) +
cas.sin(x2_desired[2,k]) * (x2_opt[1,k]-x2_desired[1,k]))**2
for k in range(N+1)])
car2_phi_cost = cas.sumsqr(x2_desired[2,:]-x2_opt[2,:])
car2_v_cost = cas.sumsqr(x2_opt[4,:])
phid_2 = x2_opt[4,:] * cas.tan(x2_opt[3,:]) / L
phid2_cost =cas.sumsqr(phid_2)
k_lat2 = 10
k_lon2 = 10
k_phid2 = 1.0
car2_costs = (car2_u_delta_cost + car2_u_v_cost +
k_lat2*car2_lat_cost + k_lon2*car2_lon_cost + car2_phi_cost +
k_phid2*phid2_cost + q_v*car2_v_cost)
R_k = 1*R_k
# amb_u_v_cost = np.sum([cas.transpose(uamb_opt[:,k]) @ R_k @ uamb_opt[:,k] for k in range(N)])
amb_u_delta_cost = 10 * cas.sumsqr(uamb_opt[0,:])
amb_u_v_cost = 0.1 * cas.sumsqr(uamb_opt[1,:])
amb_lat_cost = np.sum([(-cas.sin(xamb_desired[2,k]) * (xamb_opt[0,k]-xamb_desired[0,k]) +
cas.cos(xamb_desired[2,k]) * (xamb_opt[1,k]-xamb_desired[1,k]))**2
for k in range(N+1)])
amb_lon_cost = np.sum([(cas.cos(xamb_desired[2,k]) * (xamb_opt[0,k]-xamb_desired[0,k]) +
cas.sin(xamb_desired[2,k]) * (xamb_opt[1,k]-xamb_desired[1,k]))**2
for k in range(N+1)])
amb_phi_cost = cas.sumsqr(xamb_desired[2,:]-xamb_opt[2,:])
amb_v_cost = cas.sumsqr(xamb_opt[4,:])
phid_amb= xamb_opt[4,:] * cas.tan(xamb_opt[3,:]) / L
phidamb_cost =cas.sumsqr(phid_amb)
k_latamb = 20
k_lonamb = 10
k_phidamb = 1.0
amb_costs = (amb_u_delta_cost + amb_u_v_cost +
k_latamb*amb_lat_cost + k_lonamb*amb_lon_cost + amb_phi_cost +
k_phidamb * phidamb_cost + q_v*amb_v_cost
)
theta_1 = np.pi/4
theta_2 = np.pi/4
theta_amb = 0
######## optimization ##################################
opti.minimize(np.cos(theta_1)*car1_costs + np.sin(theta_1)*amb_costs)
##########################################################
#constraints
for k in range(N):
opti.subject_to( x_opt[:, k+1] == F(x_opt[:, k], u_opt[:, k]))
for k in range(N+1):
opti.subject_to( x_desired[:, k] == fd(x_opt[-1, k]) ) #This should be the trajectory dynamic constraint
opti.subject_to(opti.bounded(-np.pi/6, u_opt[0,:], np.pi/6))
opti.subject_to(opti.bounded(-4, u_opt[1,:], 4)) # 0-60 around 4 m/s^2
v_max = 10
opti.subject_to(opti.bounded(0, x_opt[4,:],v_max))
opti.subject_to(x_opt[:,0] == p)
# min_dist = 0.6
for k in range(N+1):
opti.subject_to( cas.sumsqr(x_opt[0:2,k] - x2_opt[0:2,k]) > min_dist**2 )
opti.subject_to( cas.sumsqr(x_opt[0:2,k] - xamb_opt[0:2,k]) > min_dist**2 )
# constraints to help out
opti.subject_to( opti.bounded(-1, x_opt[0,:], 30) )
opti.subject_to( opti.bounded(-10, x_opt[1,:], 10) )
opti.subject_to( opti.bounded(-np.pi/2, x_opt[2,:], np.pi/4) )
# constrain the lane deviations to prevent wacky solutions
# opti.subject_to( opti.bounded(-5, x_opt[0,:] - x_desired[0,:], 5))
# opti.subject_to( opti.bounded(-5, x2_opt[0,:] - x2_desired[0,:], 5))
opti.subject_to( opti.bounded(-5, xamb_opt[0,:] - xamb_desired[0,:], 5))
opti.subject_to( opti.bounded(-10, x_opt[1,:] - x_desired[1,:], 10))
opti.solver('ipopt',{'warn_initial_bounds':True},{'print_level':0})
opti.set_value(p,x0)
opti.set_value(p2,x0_2)
opti.set_value(pamb,x0_amb)
opti2 = cas.Opti()
n_ctrl = 2
n_state = 6
#Variables
x2_opt2 = opti2.variable(n_state, N+1)
u2_opt2 = opti2.variable(n_ctrl, N)
### Agent 2 has these as parameters!!!
x_opt2 = opti2.parameter(n_state, N+1) # initialize X for each car that we will opti2mize
xamb_opt2 = opti2.parameter(n_state, N+1)
u_opt2 = opti2.parameter(n_ctrl, N)
uamb_opt2 = opti2.parameter(n_ctrl, N)
p_2 = opti2.parameter(n_state, 1) #this will be the initial state
p2_2 = opti2.parameter(n_state, 1)
pamb_2 = opti2.parameter(n_state, 1)
x_desired_2 = opti2.variable(3, N+1)
x2_desired_2 = opti2.variable(3, N+1)
xamb_desired_2 = opti2.variable(3, N+1)
#### Costs
car1_u_delta_cost_2 = 10 * cas.sumsqr(u_opt2[0,:])
car1_u_v_cost_2 = 1 * cas.sumsqr(u_opt2[1,:])
car1_lat_cost_2 = np.sum([(-cas.sin(x_desired_2[2,k]) * (x_opt2[0,k]-x_desired_2[0,k]) +
cas.cos(x_desired_2[2,k]) * (x_opt2[1,k]-x_desired_2[1,k]))**2
for k in range(N+1)])
car1_lon_cost_2 = np.sum([(cas.cos(x_desired_2[2,k]) * (x_opt2[0,k]-x_desired_2[0,k]) +
cas.sin(x_desired_2[2,k]) * (x_opt2[1,k]-x_desired_2[1,k]))**2
for k in range(N+1)])
car1_phi_cost_2 = cas.sumsqr(x_desired_2[2,:]-x_opt2[2,:])
car1_v_cost_2 = cas.sumsqr(x_opt2[4,:])
phid_1_2 = x_opt2[4,:] * cas.tan(x_opt2[3,:]) / L
phid1_cost_2 = cas.sumsqr(phid_1_2)
k_lat1_2 = 10
k_lon1_2 = 10
k_phid1_2 = 1.0
car1_costs_2 = (car1_u_delta_cost_2 + car1_u_v_cost_2 +
k_lat1_2*car1_lat_cost_2 + k_lon1_2*car1_lon_cost_2 + car1_phi_cost_2 +
k_phid1_2 * phid1_cost_2 +
q_v*car1_v_cost_2)
car2_u_delta_cost_2 = 10 * cas.sumsqr(u2_opt2[0,:])
car2_u_v_cost_2 = 1 * cas.sumsqr(u2_opt2[1,:])
car2_lat_cost_2 = np.sum([(-cas.sin(x2_desired_2[2,k]) * (x2_opt2[0,k]-x2_desired_2[0,k]) +
cas.cos(x2_desired_2[2,k]) * (x2_opt2[1,k]-x2_desired_2[1,k]))**2
for k in range(N+1)])
car2_lon_cost_2 = np.sum([(cas.cos(x2_desired_2[2,k]) * (x2_opt2[0,k]-x2_desired_2[0,k]) +
cas.sin(x2_desired_2[2,k]) * (x2_opt2[1,k]-x2_desired_2[1,k]))**2
for k in range(N+1)])
car2_phi_cost_2 = cas.sumsqr(x2_desired_2[2,:]-x2_opt2[2,:])
car2_v_cost_2 = cas.sumsqr(x2_opt2[4,:])
phid_2_2 = x2_opt2[4,:] * cas.tan(x2_opt2[3,:]) / L
phid2_cost_2 =cas.sumsqr(phid_2_2)
k_lat2_2 = 10
k_lon2_2 = 10
k_phid2_2 = 1.0
car2_costs_2 = (car2_u_delta_cost_2 + car2_u_v_cost_2 +
k_lat2*car2_lat_cost_2 + k_lon2*car2_lon_cost_2 + car2_phi_cost_2 +
k_phid2_2*phid2_cost_2 + q_v*car2_v_cost_2)
R_k_2 = 1*R_k
# amb_u_v_cost = np.sum([cas.transpose(uamb_opt2[:,k]) @ R_k @ uamb_opt2[:,k] for k in range(N)])
amb_u_delta_cost_2 = 10 * cas.sumsqr(uamb_opt2[0,:])
amb_u_v_cost_2 = 0.1 * cas.sumsqr(uamb_opt2[1,:])
amb_lat_cost_2 = np.sum([(-cas.sin(xamb_desired_2[2,k]) * (xamb_opt2[0,k]-xamb_desired_2[0,k]) +
cas.cos(xamb_desired_2[2,k]) * (xamb_opt2[1,k]-xamb_desired_2[1,k]))**2
for k in range(N+1)])
amb_lon_cost_2 = np.sum([(cas.cos(xamb_desired_2[2,k]) * (xamb_opt2[0,k]-xamb_desired_2[0,k]) +
cas.sin(xamb_desired_2[2,k]) * (xamb_opt2[1,k]-xamb_desired_2[1,k]))**2
for k in range(N+1)])
amb_phi_cost_2 = cas.sumsqr(xamb_desired_2[2,:]-xamb_opt2[2,:])
amb_v_cost_2 = cas.sumsqr(xamb_opt2[4,:])
phid_amb_2 = xamb_opt2[4,:] * cas.tan(xamb_opt2[3,:]) / L
phidamb_cost_2 =cas.sumsqr(phid_amb_2)
k_latamb_2 = 20
k_lonamb_2 = 10
k_phidamb_2 = 1.0
amb_costs_2 = (amb_u_delta_cost_2 + amb_u_v_cost_2 +
k_latamb_2*amb_lat_cost_2 + k_lonamb_2*amb_lon_cost_2 + amb_phi_cost_2 +
k_phidamb_2 * phidamb_cost_2 + q_v*amb_v_cost_2
)
theta_2 = np.pi/4
######## opti2mization ##################################
opti2.minimize(np.cos(theta_2)*car2_costs_2 + np.sin(theta_2)*amb_costs_2)
########################################################
#constraints
#Just repeat constraints for x2
for k in range(N):
opti2.subject_to( x2_opt2[:, k+1] == F(x2_opt2[:, k], u2_opt2[:, k]))
for k in range(N+1):
opti2.subject_to( x2_desired_2[:, k] == fd(x2_opt2[-1, k]) ) #This should be the trajectory dynamic constraint
opti2.subject_to(opti2.bounded(-np.pi/6, u2_opt2[0,:], np.pi/6))
opti2.subject_to(opti2.bounded(-4, u2_opt2[1,:], 4))
v_max = 10
opti2.subject_to(opti2.bounded(0, x2_opt2[4,:],v_max))
opti2.subject_to(x2_opt2[:,0] == p2_2)
# min_dist = 0.6
for k in range(N+1):
opti2.subject_to( cas.sumsqr(x_opt2[0:2,k] - x2_opt2[0:2,k]) > min_dist**2 )
opti2.subject_to( cas.sumsqr(x2_opt2[0:2,k] - xamb_opt2[0:2,k]) > min_dist**2 )
# constraints to help out
opti2.subject_to( opti2.bounded(-1, x2_opt2[0,:], 30) )
opti2.subject_to( opti2.bounded(-10, x2_opt2[1,:], 10) )
opti2.subject_to( opti2.bounded(-np.pi/4, x2_opt2[2,:], np.pi/4) )
# constrain the lane deviations to prevent wacky solutions
# opti2.subject_to( opti2.bounded(-5, x_opt2[0,:] - x_desired_2[0,:], 5))
# opti2.subject_to( opti2.bounded(-5, x2_opt2[0,:] - x2_desired_2[0,:], 5))
opti2.subject_to( opti2.bounded(-10, x2_opt2[1,:] - x2_desired_2[1,:], 10))
opti2.solver('ipopt',{'warn_initial_bounds':True},{'print_level':10})
opti2.set_value(p_2,x0)
opti2.set_value(p2_2,x0_2)
opti2.set_value(pamb_2,x0_amb)
###Output
_____no_output_____
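###Markdown
The lateral and longitudinal tracking terms used above are the position error rotated into the frame of the desired trajectory: with desired pose $(x_d, y_d, \phi_d)$ at each knot point,
$$e_{lat} = -\sin(\phi_d)\,(x - x_d) + \cos(\phi_d)\,(y - y_d), \qquad e_{lon} = \cos(\phi_d)\,(x - x_d) + \sin(\phi_d)\,(y - y_d),$$
and the costs sum $e_{lat}^2$ and $e_{lon}^2$ over the horizon, weighted by $k_{lat}$ and $k_{lon}$.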
###Markdown
Ambulance Optimization
###Code
opti3 = cas.Opti()
n_ctrl = 2
n_state = 6
#Variables
xamb_opt3 = opti3.variable(n_state, N+1)
uamb_opt3 = opti3.variable(n_ctrl, N)
### Agent 2 has these as parameters!!!
x_opt3 = opti3.parameter(n_state, N+1) # initialize X for each car that we will opti3mize
x2_opt3 = opti3.parameter(n_state, N+1)
u_opt3 = opti3.parameter(n_ctrl, N)
u2_opt3 = opti3.parameter(n_ctrl, N)
p_3 = opti3.parameter(n_state, 1) #this will be the initial state
p2_3 = opti3.parameter(n_state, 1)
pamb_3 = opti3.parameter(n_state, 1)
x_desired_3 = opti3.variable(3, N+1)
x2_desired_3 = opti3.variable(3, N+1)
xamb_desired_3 = opti3.variable(3, N+1)
#### Costs
amb_u_delta_cost_3 = 10 * cas.sumsqr(uamb_opt3[0,:])
amb_u_v_cost_3 = 0.1 * cas.sumsqr(uamb_opt3[1,:])
amb_lat_cost_3 = np.sum([(-cas.sin(xamb_desired_3[2,k]) * (xamb_opt3[0,k]-xamb_desired_3[0,k]) +
cas.cos(xamb_desired_3[2,k]) * (xamb_opt3[1,k]-xamb_desired_3[1,k]))**2
for k in range(N+1)])
amb_lon_cost_3 = np.sum([(cas.cos(xamb_desired_3[2,k]) * (xamb_opt3[0,k]-xamb_desired_3[0,k]) +
cas.sin(xamb_desired_3[2,k]) * (xamb_opt3[1,k]-xamb_desired_3[1,k]))**2
for k in range(N+1)])
amb_phi_cost_3 = cas.sumsqr(xamb_desired_3[2,:]-xamb_opt3[2,:])
amb_v_cost_3 = cas.sumsqr(xamb_opt3[4,:])
phid_amb_3 = xamb_opt3[4,:] * cas.tan(xamb_opt3[3,:]) / L
phidamb_cost_3 =cas.sumsqr(phid_amb_3)
k_latamb_3 = 20
k_lonamb_3 = 10
k_phidamb_3 = 1.0
amb_costs_3 = (amb_u_delta_cost_3 + amb_u_v_cost_3 +
k_latamb_3*amb_lat_cost_3 + k_lonamb_3*amb_lon_cost_3 + amb_phi_cost_3 +
k_phidamb_3 * phidamb_cost_3 + q_v*amb_v_cost_3
)
theta_3 = np.pi/4
######## opti3mization ##################################
opti3.minimize(amb_costs_3)
########################################################
#constraints
#Just repeat constraints for x2
for k in range(N):
    opti3.subject_to( xamb_opt3[:, k+1] == F(xamb_opt3[:, k], uamb_opt3[:, k]))  # ambulance dynamics driven by its own controls
for k in range(N+1):
opti3.subject_to( xamb_desired_3[:, k] == fd(xamb_opt3[-1, k]) ) #This should be the trajectory dynamic constraint
opti3.subject_to(opti3.bounded(-np.pi/6, uamb_opt3[0,:], np.pi/6))
opti3.subject_to(opti3.bounded(-4, uamb_opt3[1,:], 4))
v_max = 10
opti3.subject_to(opti3.bounded(0, xamb_opt3[4,:],v_max))
opti3.subject_to(xamb_opt3[:,0] == pamb_3)
# min_dist = 0.6
for k in range(N+1):
opti3.subject_to( cas.sumsqr(xamb_opt3[0:2,k] - x2_opt3[0:2,k]) > min_dist**2 )
opti3.subject_to( cas.sumsqr(x_opt3[0:2,k] - xamb_opt3[0:2,k]) > min_dist**2 )
# constraints to help out
opti3.subject_to( opti3.bounded(-1, xamb_opt3[0,:], 30) )
opti3.subject_to( opti3.bounded(-10, xamb_opt3[1,:], 10) )
opti3.subject_to( opti3.bounded(-np.pi/4, xamb_opt3[2,:], np.pi/4) )
# constrain the lane deviations to prevent wacky solutions
opti3.subject_to( opti3.bounded(-10, xamb_opt3[1,:] - xamb_desired_3[1,:], 10))
opti3.solver('ipopt',{'warn_initial_bounds':True},{'print_level':5})
opti3.set_value(p_3,x0)
opti3.set_value(p2_3,x0_2)
opti3.set_value(pamb_3,x0_amb)
###Output
_____no_output_____
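###Markdown
In all three subproblems the pairwise collision-avoidance constraints have the same form: for every knot point $k$ and every pair of vehicles $i \neq j$,
$$\lVert (x_i^k, y_i^k) - (x_j^k, y_j^k) \rVert^2 > d_{min}^2,$$
which is exactly what the `cas.sumsqr(...) > min_dist**2` constraints above encode.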
###Markdown
Best Response, V1
###Code
opti.set_value(x2_opt, x2_warm)
opti.set_value(xamb_opt, xamb_warm)
opti.set_value(u2_opt, u2_warm)
opti.set_value(uamb_opt, uamb_warm)
opti.set_initial(x_opt, x_warm)
opti.set_initial(u_opt, u_warm)
sol = opti.solve()
x1_1 = opti.debug.value(x_opt)
x2_1 = opti.debug.value(x2_opt)
xamb_1 = opti.debug.value(xamb_opt)
x_des = opti.debug.value(x_desired)
# x2_des = opti.debug.value(x2_desired)
for k in range(N+1):
fig, ax = ego_car.get_frame(x1_1[:,k])
fig, ax = ego_car.get_frame(x2_1[:,k], ax)
fig, ax = ego_car.get_frame(xamb_1[:,k], ax, amb=True)
ax.plot(x_des[0,:], x_des[1,:], '--')
# ax.plot(x2_des[0,:], x2_des[1,:], '--')
ax = plt.gca()
window_width = 24
window_height = window_width
xmin, xmax = -1, -1+window_width
ymin, ymax = -int(window_height/4.0), int(window_height/4.0)
ax.set_ylim((ymin, ymax))
ax.set_xlim((xmin, xmax))
fig.savefig(folder + 'imgs/' '{:03d}.png'.format(k))
plt.close(fig)
vid_fname = folder + 'car1.mp4'
if os.path.exists(vid_fname):
os.remove(vid_fname)
cmd = 'ffmpeg -r 16 -f image2 -i {}imgs/%03d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p {}'.format(folder, vid_fname)
os.system(cmd)
print('Saving video to: {}'.format(vid_fname))
video = io.open(vid_fname, 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii')))
x_warm = sol.value(x_opt)
u_warm = sol.value(u_opt)
x2_warm = sol.value(x2_opt)
u2_warm = sol.value(u2_opt)
xamb_warm = sol.value(xamb_opt)
uamb_warm = sol.value(uamb_opt)
x_warm = sol.value(x_opt)
u_warm = sol.value(u_opt)
BR_iteration = 0
opti2.set_value(x_opt2, x_warm)
opti2.set_value(u_opt2, u_warm)
opti2.set_value(xamb_opt2, xamb_warm)
opti2.set_value(uamb_opt2, uamb_warm)
# opti2.set_value(xamb_opt2, sol.value(xamb_opt))
# opti2.set_value(uamb_opt2, sol.value(uamb_opt))
opti2.set_initial(x2_opt2, x2_warm)
opti2.set_initial(u2_opt2, u2_warm)
sol2 = opti2.solve()
x2_warm = sol2.value(x2_opt2)
u2_warm = sol2.value(u2_opt2)
opti3.set_value(x_opt3, x_warm)
opti3.set_value(u_opt3, u_warm)
opti3.set_value(x2_opt3, x2_warm)
opti3.set_value(u2_opt3, u2_warm)
opti3.set_initial(xamb_opt3, xamb_warm)
opti3.set_initial(uamb_opt3, uamb_warm)
sol3 = opti3.solve()
xamb_warm = sol3.value(xamb_opt3)
uamb_warm = sol3.value(uamb_opt3)
opti.set_value(x2_opt, x2_warm)
opti.set_value(xamb_opt, xamb_warm)
opti.set_value(u2_opt, u2_warm)
opti.set_value(uamb_opt, uamb_warm)
opti.set_initial(x_opt, x_warm)
opti.set_initial(u_opt, u_warm)
sol = opti.solve()
x_warm = sol.value(x_opt)
u_warm = sol.value(u_opt)
# x2_warm = sol.value(x2_opt)
# u2_warm = sol.value(u2_opt)
# xamb_warm = sol.value(xamb_opt)
# uamb_warm = sol.value(uamb_opt)
# x_des = sol/
for k in range(N+1):
fig, ax = ego_car.get_frame(x_warm[:,k])
fig, ax = ego_car.get_frame(x2_warm[:,k], ax)
fig, ax = ego_car.get_frame(xamb_warm[:,k], ax, amb=True)
# ax.plot(x_des[0,:], x_des[1,:], '--')
# ax.plot(x2_des[0,:], x2_des[1,:], '--')
ax = plt.gca()
window_width = 24
window_height = window_width
xmin, xmax = -1, -1+window_width
ymin, ymax = -int(window_height/4.0), int(window_height/4.0)
ax.set_ylim((ymin, ymax))
ax.set_xlim((xmin, xmax))
fig.savefig(folder + 'imgs/' '{:03d}.png'.format(k))
plt.close(fig)
vid_fname = folder + '%02d'%BR_iteration + 'car.mp4'
if os.path.exists(vid_fname):
os.remove(vid_fname)
cmd = 'ffmpeg -r 16 -f image2 -i {}imgs/%03d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p {}'.format(folder, vid_fname)
os.system(cmd)
print('Saving video to: {}'.format(vid_fname))
opti3.set_value(x_opt3, x_warm)
opti3.set_value(u_opt3, u_warm)
opti3.set_value(x2_opt3, x2_warm)
opti3.set_value(u2_opt3, u2_warm)
opti3.set_initial(xamb_opt3, xamb_warm)
opti3.set_initial(uamb_opt3, uamb_warm)
sol3 = opti3.solve()
x_plot = opti3.debug.value(x_opt3)
x2_plot = opti3.debug.value(x2_opt3)
xamb_plot = opti3.debug.value(xamb_opt3)
BR_iteration = 2
for k in range(N+1):
fig, ax = ego_car.get_frame(x_plot[:,k],None, False,min_dist,True)
fig, ax = ego_car.get_frame(x2_plot[:,k], ax, False,min_dist,True)
fig, ax = ego_car.get_frame(xamb_plot[:,k], ax, False,min_dist,True)
# ax.plot(x_des[0,:], x_des[1,:], '--')
# ax.plot(x2_des[0,:], x2_des[1,:], '--')
ax = plt.gca()
window_width = 24
window_height = window_width
xmin, xmax = -1, -1+window_width
ymin, ymax = -int(window_height/4.0), int(window_height/4.0)
ax.set_ylim((ymin, ymax))
ax.set_xlim((xmin, xmax))
fig.savefig(folder + 'imgs/' '{:03d}.png'.format(k))
plt.close(fig)
vid_fname = folder + 'circ'+'%02d'%BR_iteration + 'car.mp4'
if os.path.exists(vid_fname):
os.remove(vid_fname)
cmd = 'ffmpeg -r 16 -f image2 -i {}imgs/%03d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p {}'.format(folder, vid_fname)
os.system(cmd)
print('Saving video to: {}'.format(vid_fname))
x_plot = sol2.value(x_opt2)
x2_plot = sol2.value(x2_opt2)
xamb_plot = sol2.value(xamb_opt2)
x2_plot = sol2.value(x2_opt2)
x_plot = x_warm
xamb_plot = xamb_warm
BR_iteration = 1
for k in range(N+1):
fig, ax = ego_car.get_frame(x_plot[:,k],None, False,min_dist,True)
fig, ax = ego_car.get_frame(x2_plot[:,k], ax, False,min_dist,True)
fig, ax = ego_car.get_frame(xamb_plot[:,k], ax, False,min_dist,True)
# ax.plot(x_des[0,:], x_des[1,:], '--')
# ax.plot(x2_des[0,:], x2_des[1,:], '--')
ax = plt.gca()
window_width = 24
window_height = window_width
xmin, xmax = -1, -1+window_width
ymin, ymax = -int(window_height/4.0), int(window_height/4.0)
ax.set_ylim((ymin, ymax))
ax.set_xlim((xmin, xmax))
fig.savefig(folder + 'imgs/' '{:03d}.png'.format(k))
plt.close(fig)
vid_fname = folder + 'circ'+'%02d'%BR_iteration + 'car.mp4'
if os.path.exists(vid_fname):
os.remove(vid_fname)
cmd = 'ffmpeg -r 16 -f image2 -i {}imgs/%03d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p {}'.format(folder, vid_fname)
os.system(cmd)
print('Saving video to: {}'.format(vid_fname))
for BR_iteration in range(20):
opti2.set_value(x_opt2, sol.value(x_opt))
opti2.set_value(u_opt2, sol.value(u_opt))
opti2.set_value(xamb_opt2, sol.value(xamb_opt))
opti2.set_value(uamb_opt2, sol.value(uamb_opt))
opti2.set_initial(x2_opt2, sol.value(x2_opt))
opti2.set_initial(u2_opt2, sol.value(u2_opt))
sol2 = opti2.solve()
opti3.set_value(x_opt3, sol2.value(x_opt2))
opti3.set_value(u_opt3, sol2.value(u_opt2))
opti3.set_value(x2_opt3, sol2.value(x2_opt2))
    opti3.set_value(u2_opt3, sol2.value(u2_opt2))  # car 2's planned controls from the previous solve
opti3.set_initial(xamb_opt3, sol2.value(xamb_opt2))
opti3.set_initial(uamb_opt3, sol2.value(uamb_opt2))
sol3 = opti3.solve()
opti.set_value(x2_opt, sol3.value(x2_opt3))
opti.set_value(xamb_opt, sol3.value(xamb_opt3))
opti.set_value(u2_opt, sol3.value(u2_opt3))
opti.set_value(uamb_opt, sol3.value(uamb_opt3))
opti.set_initial(x_opt, sol3.value(x_opt3))
opti.set_initial(u_opt, sol3.value(u_opt3))
sol = opti.solve()
x_warm = sol.value(x_opt)
u_warm = sol.value(u_opt)
x2_warm = sol.value(x2_opt)
u2_warm = sol.value(u2_opt)
xamb_warm = sol.value(xamb_opt)
uamb_warm = sol.value(uamb_opt)
# x_des = sol/
for k in range(N+1):
fig, ax = ego_car.get_frame(x_warm[:,k])
fig, ax = ego_car.get_frame(x2_warm[:,k], ax)
fig, ax = ego_car.get_frame(xamb_warm[:,k], ax, amb=True)
# ax.plot(x_des[0,:], x_des[1,:], '--')
# ax.plot(x2_des[0,:], x2_des[1,:], '--')
ax = plt.gca()
window_width = 24
window_height = window_width
xmin, xmax = -1, -1+window_width
ymin, ymax = -int(window_height/4.0), int(window_height/4.0)
ax.set_ylim((ymin, ymax))
ax.set_xlim((xmin, xmax))
fig.savefig(folder + 'imgs/' '{:03d}.png'.format(k))
plt.close(fig)
vid_fname = folder + '%02d'%BR_iteration + 'car.mp4'
if os.path.exists(vid_fname):
os.remove(vid_fname)
cmd = 'ffmpeg -r 16 -f image2 -i {}imgs/%03d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p {}'.format(folder, vid_fname)
os.system(cmd)
print('Saving video to: {}'.format(vid_fname))
for BR_iteration in range(20):
vid_fname = folder + '%02d'%BR_iteration + 'car.mp4'
print('Saving video to: {}'.format(vid_fname))
###Output
_____no_output_____
###Markdown
Best Response V2
###Code
x1 = sol3.value(x_opt3)
x2 = sol3.value(x2_opt3)
xamb = sol3.value(xamb_opt3)
x_des = sol3.value(xamb_desired_3)
for k in range(N+1):
fig, ax = ego_car.get_frame(x1[:,k])
fig, ax = ego_car.get_frame(x2[:,k], ax)
fig, ax = ego_car.get_frame(xamb[:,k], ax, amb=True)
ax.plot(x_des[0,:], x_des[1,:], '--')
# ax.plot(x2_des[0,:], x2_des[1,:], '--')
ax = plt.gca()
window_width = 24
window_height = window_width
xmin, xmax = -1, -1+window_width
ymin, ymax = -int(window_height/4.0), int(window_height/4.0)
ax.set_ylim((ymin, ymax))
ax.set_xlim((xmin, xmax))
fig.savefig(folder + 'imgs/' '{:03d}.png'.format(k))
plt.close(fig)
vid_fname = folder + 'caramb.mp4'
if os.path.exists(vid_fname):
os.remove(vid_fname)
cmd = 'ffmpeg -r 16 -f image2 -i {}imgs/%03d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p {}'.format(folder, vid_fname)
os.system(cmd)
print('Saving video to: {}'.format(vid_fname))
car1_sub_costs = [car1_u_delta_cost, car1_u_v_cost, 10*car1_lat_cost, 10*car1_lon_cost, car1_phi_cost, phid1_cost, q_v*car1_v_cost]
car1_sub_costs_labels = ['udel1', 'uv1', 'elat1', 'lon1', 'ephi1', 'phid1', 'v1']  # one label per entry in car1_sub_costs
plt.bar(range(len(car1_sub_costs)), [sol.value(c) for c in car1_sub_costs])
plt.xticks(range(len(car1_sub_costs)), car1_sub_costs_labels,rotation=45)
plt.title('Car 1')
plt.xlabel("Subcost")
plt.ylabel("Cost Value")
plt.show()
car2_sub_costs = [car2_u_delta_cost, car2_u_v_cost, 10*car2_lat_cost, 10*car2_lon_cost, car2_phi_cost, phid2_cost, q_v*car2_v_cost]
car2_sub_costs_labels = ['udel2', 'uv2', 'elat2', 'lon2', 'ephi2', 'phid2', 'v2']  # one label per entry in car2_sub_costs
plt.bar(range(len(car2_sub_costs)), [sol.value(c) for c in car2_sub_costs])
plt.xticks(range(len(car2_sub_costs)), car2_sub_costs_labels,rotation=45)
plt.title('Car 2')
plt.xlabel("Subcost")
plt.ylabel("Cost Value")
plt.show()
amb_sub_costs = [amb_u_delta_cost, amb_u_v_cost, 10*amb_lat_cost, 10*amb_lon_cost, amb_phi_cost, phidamb_cost, q_v*amb_v_cost]
amb_sub_costs_labels = ['udelA', 'uvA', 'elatA', 'lonA', 'ephiA', 'phidA', 'vA']  # one label per entry in amb_sub_costs
plt.bar(range(len(amb_sub_costs)), [sol.value(c) for c in amb_sub_costs])
plt.xticks(range(len(amb_sub_costs)), amb_sub_costs_labels,rotation=45)
plt.title('Amb')
plt.xlabel("Subcost")
plt.ylabel("Cost Value")
plt.show()
all_costs = [0.1*c for c in car1_sub_costs] + [0.1*c for c in car2_sub_costs] + [10*c for c in amb_sub_costs]
all_labels = car1_sub_costs_labels + car2_sub_costs_labels + amb_sub_costs_labels
plt.bar(range(len(all_costs)), [sol.value(c) for c in all_costs])
plt.xticks(range(len(all_labels)), all_labels,rotation=90)
plt.title('All Cars')
plt.xlabel("Subcost")
plt.ylabel("Cost Value")
sol.value(x_opt)[3:5, 10:20]
dt
plt.plot(opti.debug.value(x_opt)[4,:],'o',c='b')
plt.plot(opti.debug.value(x2_opt)[4,:],'o',c='g')
plt.plot(opti.debug.value(xamb_opt)[4,:],'o',c='r')
plt.ylabel("Velocity")
plt.show()
plt.plot(opti.debug.value(u_opt)[1,:],'o',c='b')
plt.plot(opti.debug.value(u2_opt)[1,:],'o',c='g')
plt.plot(opti.debug.value(uamb_opt)[1,:],'o',c='r')
plt.ylabel("Acceleration $\delta V_u$")
plt.show()
plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(x_opt[0:2,k] - x2_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'o',c='b')
plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(x_opt[0:2,k] - x2_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'x',c='g')
plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(x_opt[0:2,k] - xamb_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'o',c='b')
plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(x_opt[0:2,k] - xamb_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'x',c='r')
plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(xamb_opt[0:2,k] - x2_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'o',c='g')
plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(xamb_opt[0:2,k] - x2_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'x',c='r')
plt.hlines(min_dist,0,50)
plt.ylabel('Intervehicle Distance')
plt.ylim([-.1, 2*min_dist])
plt.plot([opti.debug.value(slack1) for k in range(opti.debug.value(x_opt).shape[1])],'.',c='b')
plt.plot([opti.debug.value(slack2) for k in range(opti.debug.value(x_opt).shape[1])],'.',c='r')
plt.plot([opti.debug.value(slack3) for k in range(opti.debug.value(x_opt).shape[1])],'.',c='g')
# plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(x_opt[0:2,k] - x2_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'o',c='b')
# plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(x_opt[0:2,k] - x2_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'x',c='g')
# plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(x_opt[0:2,k] - xamb_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'o',c='b')
# plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(x_opt[0:2,k] - xamb_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'x',c='r')
# plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(xamb_opt[0:2,k] - x2_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'o',c='g')
# plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(xamb_opt[0:2,k] - x2_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'x',c='r')
plt.ylabel('slack')
# plt.ylim([.7,.71])
if not PLOT_LIVE:
for k in range(N+1):
fig, ax = ego_car.get_frame(x_mpc[:,k])
fig, ax = ego_car.get_frame(x2_mpc[:,k], ax)
fig, ax = ego_car.get_frame(xamb_mpc[:,k], ax, amb=True)
ax.plot(x_des[0,:], x_des[1,:], '--')
ax.plot(x2_des[0,:], x2_des[1,:], '--')
ax = plt.gca()
window_width = 24
window_height = window_width
xmin, xmax = -1, -1+window_width
ymin, ymax = -int(window_height/4.0), int(window_height/4.0)
ax.set_ylim((ymin, ymax))
ax.set_xlim((xmin, xmax))
fig.savefig(folder + 'imgs/' '{:03d}.png'.format(k))
plt.close(fig)
###Output
_____no_output_____ |
ccsc_FlareML.ipynb | ###Markdown
Predicting Solar Flares with Machine Learning Yasser Abduallah, Jason T. L. Wang, Haimin Wang 1. Introduction Solar flare prediction plays an important role in understanding and forecasting space weather. The main goal of the Helioseismic and Magnetic Imager (HMI), one of the instruments on NASA's Solar Dynamics Observatory, is to study the origin of solar variability and characterize the Sun's magnetic activity. HMI provides continuous full-disk observations of the solar vector magnetic field with high cadence data that lead to reliable predictive capability; yet, solar flare prediction effort utilizing these data is still limited. In this notebook we provide an overview of the FlareML system to demonstrate how to predict solar flares using machine learning (ML) and SDO/HMI vector magnetic data products (SHARP parameters). 2. FlareML Workflow 2.1 Data Preparation & Loading The data folder includes two sub-directories: train_data and test_data.
* The train_data folder includes a CSV training data file that is used to train the model.
* The test_data folder includes a CSV test data file that is used to predict the included flares.
The files are loaded and used during the testing and training process. 2.2 ENS Model Training and Testing You may train the model with your own data or train the model with the default data (see Sections 2.2.1 and 2.2.2). 2.2.1 ENS Model Training with Default Data Here, we show how to train the model with default data. To train the model with your own data:
1. You should first upload your file to the data directory (in the left-hand side file list).
2. Edit the args variable in the following code and update the path to the training file: 'train_data_file':'data/train_data/flaringar_training_sample.csv' and replace the value 'data/train_data/flaringar_training_sample.csv' with your new file name.
###Code
print('Loading the train_model function...')
from flareml_train import train_model
args = {'train_data_file':'data/train_data/flaringar_training_sample.csv',
'algorithm': 'ENS',
'modelid': 'custom_model_id'
}
train_model(args)
###Output
Loading the train_model function...
Starting training with a model with id: custom_model_id training data file: data/train_data/flaringar_training_sample.csv
Loading data set...
Training is in progress, please wait until it is done...
Finished 1/3 training..
Finished 2/3 training..
Finished 3/3 training..
Finished training the ENS model, you may use the flareml_test.py program to make prediction.
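###Markdown
For example, if you had uploaded a CSV of your own, the same call would look like this (the file name below is hypothetical):
###Code
from flareml_train import train_model
args = {'train_data_file': 'data/train_data/my_flares.csv',   # hypothetical user-supplied file
        'algorithm': 'ENS',
        'modelid': 'custom_model_id'}
# train_model(args)   # uncomment once the file exists
###Output
_____no_output_____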
###Markdown
2.2.2 Predicting with Your ENS ModelTo predict the testing data using the model you trained above, make sure the modelid value in the args variable in the following code is set exactly as the one used in the training, for example: 'custom_model_id'.
###Code
from flareml_test import test_model
args = {'test_data_file': 'data/test_data/flaringar_simple_random_40.csv',
'algorithm': 'ENS',
'modelid': 'custom_model_id'}
custom_result = test_model(args)
###Output
Starting testing with a model with id: custom_model_id testing data file: data/test_data/flaringar_simple_random_40.csv
Loading data set...
Done loading data...
Formatting and mapping the flares classes..
Prediction is in progress, please wait until it is done...
Finished the prediction task..
###Markdown
2.2.3 Plotting the ResultsThe prediction result can be plotted by passing the result variable to the function plot_custom_result as shown in the following example. The result shows the accuracy (TSS value) your model achieves for each flare class.
###Code
from flareml_utils import plot_custom_result
plot_custom_result(custom_result)
###Output
_____no_output_____
###Markdown
2.3 RF Model Training and Testing 2.3.1 RF Model Training with Default Data
###Code
print('Loading the train_model function...')
from flareml_train import train_model
args = {'train_data_file':'data/train_data/flaringar_training_sample.csv',
'algorithm': 'RF',
'modelid': 'custom_model_id'
}
train_model(args)
###Output
Loading the train_model function...
Starting training with a model with id: custom_model_id training data file: data/train_data/flaringar_training_sample.csv
Loading data set...
Training is in progress, please wait until it is done...
Finished training the RF model, you may use the flareml_test.py program to make prediction.
###Markdown
2.3.2 Predicting with Your RF Model
###Code
from flareml_test import test_model
args = {'test_data_file': 'data/test_data/flaringar_simple_random_40.csv',
'algorithm': 'RF',
'modelid': 'custom_model_id'}
custom_result = test_model(args)
###Output
Starting testing with a model with id: custom_model_id testing data file: data/test_data/flaringar_simple_random_40.csv
Loading data set...
Done loading data...
Formatting and mapping the flares classes..
Prediction is in progress, please wait until it is done...
Finished the prediction task..
###Markdown
2.3.3 Plotting the Results
###Code
from flareml_utils import plot_custom_result
plot_custom_result(custom_result)
###Output
_____no_output_____
###Markdown
2.4 MLP Model Training and Testing 2.4.1 MLP Model Training with Default Data
###Code
print('Loading the train_model function...')
from flareml_train import train_model
args = {'train_data_file':'data/train_data/flaringar_training_sample.csv',
'algorithm': 'MLP',
'modelid': 'custom_model_id'
}
train_model(args)
###Output
Loading the train_model function...
Starting training with a model with id: custom_model_id training data file: data/train_data/flaringar_training_sample.csv
Loading data set...
Training is in progress, please wait until it is done...
Finished training the MLP model, you may use the flareml_test.py program to make prediction.
###Markdown
2.4.2 Predicting with Your MLP Model
###Code
from flareml_test import test_model
args = {'test_data_file': 'data/test_data/flaringar_simple_random_40.csv',
'algorithm': 'MLP',
'modelid': 'custom_model_id'}
custom_result = test_model(args)
###Output
Starting testing with a model with id: custom_model_id testing data file: data/test_data/flaringar_simple_random_40.csv
Loading data set...
Done loading data...
Formatting and mapping the flares classes..
Prediction is in progress, please wait until it is done...
Finished the prediction task..
###Markdown
2.4.3 Plotting the Results
###Code
from flareml_utils import plot_custom_result
plot_custom_result(custom_result)
###Output
_____no_output_____
###Markdown
2.5 ELM Model Training and Testing 2.5.1 ELM Model Training with Default Data
###Code
print('Loading the train_model function...')
from flareml_train import train_model
args = {'train_data_file':'data/train_data/flaringar_training_sample.csv',
'algorithm': 'ELM',
'modelid': 'custom_model_id'
}
train_model(args)
###Output
Loading the train_model function...
Starting training with a model with id: custom_model_id training data file: data/train_data/flaringar_training_sample.csv
Loading data set...
Training is in progress, please wait until it is done...
Finished training the ELM model, you may use the flareml_test.py program to make prediction.
###Markdown
2.5.2 Predicting with Your ELM Model
###Code
from flareml_test import test_model
args = {'test_data_file': 'data/test_data/flaringar_simple_random_40.csv',
'algorithm': 'ELM',
'modelid': 'custom_model_id'}
custom_result = test_model(args)
###Output
Starting testing with a model with id: custom_model_id testing data file: data/test_data/flaringar_simple_random_40.csv
Loading data set...
Done loading data...
Formatting and mapping the flares classes..
Prediction is in progress, please wait until it is done...
Finished the prediction task..
###Markdown
2.5.3 Plotting the Resluts
###Code
from flareml_utils import plot_custom_result
plot_custom_result(custom_result)
###Output
_____no_output_____
###Markdown
2.6 Predicting with Pretrained ModelsThere are default and pretrained models that can be used to predict without running your own trained model. The modelid is set to default_model which uses all pretrained algorithms.
###Code
from flareml_test import test_model
args = {'test_data_file': 'data/test_data/flaringar_simple_random_40.csv',
'modelid': 'default_model'}
result = test_model(args)
###Output
Starting testing with a model with id: default_model testing data file: data/test_data/flaringar_simple_random_40.csv
Loading data set...
Done loading data...
Formatting and mapping the flares classes..
Prediction is in progress, please wait until it is done...
Finished the prediction task..
###Markdown
2.6.1 Plotting the ResultsThe prediction result can be plotted by passing the result variable to the function plot_result as shown in the following example.The result shows the accuracy (TSS value) that each of the pretrained models achieves for each flare class.
###Code
from flareml_utils import plot_result
plot_result(result)
###Output
_____no_output_____ |
exam/Stefani_Massimo_ Introduction_Python/Stefani_Massimo_intro_Python.ipynb | ###Markdown
Introduction to the Python Language
Massimo Stefani, Gymnase du Bugnon, 31.05.19

Sources:
* [Pensez en Python](https://allen-downey.developpez.com/livres/python/pensez-python/) by Allen B. Downey,
* [Official Python website](https://www.python.org/)
* [w3schools.com](https://www.w3schools.com/python/python_intro.asp)

Examples:
* Some examples were copied directly from _Pensez Python_.

Images:
* Screenshots

Menu
* [Introduction](Introduction)
* [Console](Console)
* [Running a script](Execution-d'un-script)
* [The language](Le-langage)
  * [The input function](La-fonction-input())
* [Assignment](Affection)
  * [Arithmetic operations](Opérations-arithmétiques)

Introduction
Python was created in 1990 by the Dutch programmer Guido van Rossum. This object-oriented programming language was designed to be easy to learn; its logic and readability are a great advantage, which makes it very accessible to people who are just starting out in programming.

Console
Python can be run in several different ways. One of them is execution in the console (Linux or Windows). A >>> prompt is displayed so that you can enter the desired code and receive an instant response. This method is used to run basic commands or to do quick tests. Here are a few screenshots with examples:
* Example 1: `print('Hello World')`
* Example 2: `2+5`
In Jupyter Notebook, `In[i]` plays the role of our prompt `(>>>)` and `Out[i]` of the console.

Running a script
Python can also be used as a scripting language. Code can be saved in a ``.py`` file and executed by a program in a precise order; the program interprets it and returns the result. Thonny is one of the programs compatible with Python. In it we can see the console, the state of the variables and the contents of the file `test.py`.

The language
Python is a language that works with values. Each of these values has a `type`. Whether they are letters, whole numbers or numbers with a decimal point, Python classifies them in a specific way:
* str : this type is given to values built from a string of characters.
* int : this type is given to whole numbers (integers).
* float : this type is given to numbers with a decimal point.

Strings of characters are of type __str__
###Code
type('Hello, World!')
###Output
_____no_output_____
###Markdown
Whole numbers (integers) belong to the type __int__
###Code
type(2)
###Output
_____no_output_____
###Markdown
Floating-point numbers are given the type __float__
###Code
import math
type(math.pi)
###Output
_____no_output_____
###Markdown
Nevertheless, other types are also present in Python. The type __bool__ is characterised by having exactly two defined values: `True` and `False`. Most of the time these values are used in boolean expressions, to describe a state or the result of a comparison.
###Code
type(True), type(False)
5 == 5, 6 == 5
5 != 6, 5 > 6, 5 < 6, 5 >= 6, 5 <= 6
###Output
_____no_output_____
###Markdown
Types can also be used as functions. `int()` turns a string made up of digits into an __int__
###Code
int('32')
###Output
_____no_output_____
###Markdown
`float()` turns integers and strings into a __float__
###Code
float(32)
float('35')
###Output
_____no_output_____
###Markdown
`str()` converts its argument into a string.
###Code
str(32)
str(35.5415)
###Output
_____no_output_____
###Markdown
The `input()` function. This function is meant to let the user use the keyboard to supply information. The user can then assign values to variables themselves.
###Code
question = "Age? "
reponse = input(question)  # Python 3 built-in; raw_input exists only in Python 2
reponse
###Output
_____no_output_____
###Markdown
It is important to point out that the French language uses the apostrophe (`'`), which is a problem because Python interprets it as the opening or closing of a string. That is why we use `""` here.

Assignment
In Python it is possible to create our own variables and give them a specific value. Programmers have to give names to their variables, and these names should explain what the variables are used for. Variable names must follow certain rules:
* A variable name must never start with a digit, and special characters are not allowed.
* To keep variable names readable, programmers use `_` instead of spaces.
* The keywords reserved by Python cannot be used.
Example of a valid assignment:
###Code
message = 'Ceci est un message ayant comme type: str'
x = 5
y = 2.5
###Output
_____no_output_____
###Markdown
Examples of invalid assignments.
###Code
76trombones = 'grande parade'
plus@ = 1000000 # The character '@' is not allowed.
class = 'Théorie avancée de la fermentation alcoolique'
###Output
_____no_output_____
###Markdown
Here is the list of Python's reserved keywords:

    False None True and as assert break class continue def del elif else except finally for from global if import in is lambda nonlocal not or pass raise return try while with yield

The values we assigned above can be used, and we can perform operations with them. To display the variables we can print them with `print()` or evaluate them directly at the prompt.
###Code
print(message)
print(x)
print(y)
x
###Output
_____no_output_____
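###Markdown
The same list of reserved words can also be obtained programmatically from the standard library `keyword` module (a small aside, not part of the original notebook):
###Code
import keyword
print(keyword.kwlist)     # the reserved words of the running Python version
len(keyword.kwlist)
###Output
_____no_output_____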
###Markdown
Arithmetic operations
With Python it is also possible to perform arithmetic operations. However, you cannot apply them to strings, with two exceptions: `+` and `*`. You can concatenate strings with the addition operator and repeat them with the multiplication operator.
###Code
x+y, x*y, x/y, x**y
###Output
_____no_output_____
###Markdown
Python respects the order of operations: `40 * 2 + 5 != 40 * (2 + 5)`.
###Code
40 * 2 + 5
40 * (2 + 5)
###Output
_____no_output_____
###Markdown
With strings:
###Code
premier = 'plate'
second = 'forme'
premier + second
premier*5
###Output
_____no_output_____
###Markdown
As you may have noticed earlier, I wrote `import math`. This imported a module: in this case, the `math` module, which gathers the functions needed to perform more complicated operations.
###Code
math.sqrt(2) / 2 # square root of 2, divided by 2
###Output
_____no_output_____
###Markdown
You can learn more about the module by running `help(math)`.
###Code
help(math)
###Output
_____no_output_____ |
04-Convolutional Sentiment Analysis.ipynb | ###Markdown
Convolutional Sentiment Analysis
====
Traditionally, convolutional networks are used to analyse images; the convolutional layers are usually followed by one or more linear layers. Convolutional layers use filters (also called kernels) that scan the image and produce another one. The intuition behind what convolutional networks learn is that they act as feature extractors, focusing on the most important parts of our image.

How can convolutional networks be used on text? For example, a 1x2 filter can look at two sequential words, a bi-gram. The intuition is that the presence of certain bi-grams or tri-grams in a sentence is a good indicator of the final result.

Data preparation
----
Instead of building the bi-grams ourselves as in the FastText model, we will let the convolutional layer do this work. The convolutional layer expects the batch dimension to come first, so we have to tell TorchText to prepare the data this way by setting the parameter batch_first = True.
###Code
import torch
from torchtext import data
from torchtext import datasets
import random
import numpy as np
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
TEXT = data.Field(tokenize = 'spacy', batch_first = True)
LABEL = data.LabelField(dtype = torch.float)
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
train_data, valid_data = train_data.split(random_state = random.seed(SEED))
###Output
_____no_output_____
###Markdown
We build the vocabulary and load the pre-trained word embeddings.
###Code
MAX_VOCAB_SIZE = 25_000
TEXT.build_vocab(train_data,
max_size = MAX_VOCAB_SIZE,
vectors = "glove.6B.100d",
unk_init = torch.Tensor.normal_)
LABEL.build_vocab(train_data)
###Output
_____no_output_____
###Markdown
As before, we create the iterators.
###Code
BATCH_SIZE = 64
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
device = device)
###Output
_____no_output_____
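###Markdown
A quick sanity check (an illustrative aside): with `batch_first = True` the text tensor of a batch has the batch dimension first, i.e. shape [batch size, sent len].
###Code
batch = next(iter(train_iterator))
print(batch.text.shape)   # batch dimension first: [batch size, sent len]
###Output
_____no_output_____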
###Markdown
Building the model
----
Let's see how to build a CNN to use on text. Images are typically two-dimensional (ignoring the colour dimension), while text is transformed into a one-dimensional sequence of numbers. However, we know that the first step of almost all the previous notebooks was to convert the words into word embeddings. This is how the words gain a second dimension: each word along one axis and the elements of its vector along the other. Consider the two-dimensional representation of the following sentence: we can use a filter of size [n x emb_dim], which will cover $n$ sequential words. Consider the picture below, with our word vectors shown in green. We have 4 words with an embedding dimensionality of 5, so we create a [4x5] "image" tensor. A filter that covers two words at a time must be a [2x5] filter, shown in yellow. The output of the filter, in red, is a single number, the result of the convolution. The filter then moves down and computes the next result of the convolution, until the end of the sentence. In our case the result is a vector whose number of elements equals the length of the sentence minus the height of the filter plus one, here $4-2+1=3$.

The example shows how to compute the output with a single filter. Our model will have many of these filters. The idea is that each filter concentrates on a different feature to extract. In our model we will also use different filter sizes, of size 3, 4 and 5, with hundreds of each. The intuition is that we will be looking for occurrences of different tri-grams, 4-grams and 5-grams that are relevant for the sentiment analysis of our movie reviews.

The next step of the model is to apply pooling (max pooling) to the output of the convolutional layer. This is similar to what we did in the FastText model, where we averaged each word vector with the F.avg_pool2d function; now, instead of averaging over a dimension, we take the maximum value. Below is a graphical example. The idea is that the maximum value is the "most important" feature for determining the sentiment of a review, which corresponds to the "most important" n-gram of the review. How do we recognise the most important n-gram? Fortunately, we do not have to do it ourselves! Through backpropagation, the weights of the filters are adjusted so that n-grams that are more indicative of the sentiment of the reviews we have read end up with higher values.

Our model has 100 filters of 3 different sizes, which means it focuses on 300 different n-grams. We concatenate the results of these filters into a single vector and pass it to a linear layer to obtain the result. We can think of the weights of this last layer as "weighing up the evidence" from each of the 300 n-grams to obtain the final decision.

Detailed implementation
----
We will implement the convolutional layers with nn.Conv2d. The in_channels parameter is the number of channels of the "image" entering the convolutional layer. Images usually have 3 (the red, blue and green channels); since we are using text, we have only one channel.
`out_channels` is the number of filters and `kernel_size` is the size of the filters themselves. Each of the `kernel_sizes` will have a size of [n x emb_dim], where $n$ is the size of the n-grams.

In PyTorch, RNNs want the batch dimension as the second dimension of the input, whereas CNNs want the batch dimension first; we do not have to change anything because we already set `batch_first = True` in the TEXT field. We can then pass the sentence through our embedding layer. The second dimension of our input is the number of channels to give to `nn.Conv2d`. Text technically does not have a channel dimension, so we `unsqueeze` our tensor to create one.

We then pass the tensors through the convolutional and pooling layers, using the ReLU activation function after each convolutional layer. Another nice feature of the pooling layer is that it can handle sentences of different lengths. The size of the output of the convolutional layer depends only on the number of filters; without the max pooling layer, the input of the linear layer would depend on the length of the input sentence (and that is not what we want). One option would be to trim or pad all sentences to the same length; with the max pooling layer, however, we are sure that the input to the linear layer always has a fixed size.

**Note:** we will get an exception if a sentence is shorter than the largest filter used. If that happens we have to use `<pad>` tokens to pad the sentence. However, in IMDb there are no reviews shorter than 5 words, so we can proceed safely.

Finally, we apply dropout to the concatenation of the filter outputs and feed the tensor to the linear layer to obtain the result.
###Code
import torch.nn as nn
import torch.nn.functional as F
class CNN(nn.Module):
def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim,
dropout, pad_idx):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
self.conv_0 = nn.Conv2d(in_channels = 1,
out_channels = n_filters,
kernel_size = (filter_sizes[0], embedding_dim))
self.conv_1 = nn.Conv2d(in_channels = 1,
out_channels = n_filters,
kernel_size = (filter_sizes[1], embedding_dim))
self.conv_2 = nn.Conv2d(in_channels = 1,
out_channels = n_filters,
kernel_size = (filter_sizes[2], embedding_dim))
self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text):
#text = [batch size, sent len]
embedded = self.embedding(text)
#embedded = [batch size, sent len, emb dim]
embedded = embedded.unsqueeze(1)
#embedded = [batch size, 1, sent len, emb dim]
conved_0 = F.relu(self.conv_0(embedded).squeeze(3))
conved_1 = F.relu(self.conv_1(embedded).squeeze(3))
conved_2 = F.relu(self.conv_2(embedded).squeeze(3))
#conved_n = [batch size, n_filters, sent len - filter_sizes[n] + 1]
pooled_0 = F.max_pool1d(conved_0, conved_0.shape[2]).squeeze(2)
pooled_1 = F.max_pool1d(conved_1, conved_1.shape[2]).squeeze(2)
pooled_2 = F.max_pool1d(conved_2, conved_2.shape[2]).squeeze(2)
#pooled_n = [batch size, n_filters]
cat = self.dropout(torch.cat((pooled_0, pooled_1, pooled_2), dim = 1))
#cat = [batch size, n_filters * len(filter_sizes)]
return self.fc(cat)
###Output
_____no_output_____
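###Markdown
Not part of the original tutorial, but as a quick sanity check of the shapes described above (with made-up toy sizes): a [2 x emb_dim] filter over a 10-token sentence produces sent len - filter size + 1 = 9 convolution outputs per filter, and max pooling reduces that to a single value per filter.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F

batch_size, sent_len, emb_dim, n_filters, fs = 2, 10, 5, 3, 2  # hypothetical toy sizes
embedded = torch.randn(batch_size, 1, sent_len, emb_dim)       # [batch size, 1, sent len, emb dim]
conv = nn.Conv2d(in_channels = 1, out_channels = n_filters, kernel_size = (fs, emb_dim))
conved = F.relu(conv(embedded).squeeze(3))                     # [batch size, n_filters, sent len - fs + 1]
pooled = F.max_pool1d(conved, conved.shape[2]).squeeze(2)      # [batch size, n_filters]
print(conved.shape, pooled.shape)                              # torch.Size([2, 3, 9]) torch.Size([2, 3])
###Output
_____no_output_____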
###Markdown
Currently the CNN model can only use 3 different filter sizes, but we can improve the code of our model to make it more generic and accept any number of filters. We do this by putting all of our convolutional filters in an `nn.ModuleList`, a PyTorch feature for handling a list of `nn.Modules`. If we simply used a plain Python list, the modules in the list would not be "seen" by PyTorch, which would cause problems. We can now use an arbitrary list of filter sizes; the list comprehension creates a convolutional layer for each of the requested filter sizes. In the forward method we iterate through the list of convolutional layers, apply each one to the input sentence, max-pool each result, and then concatenate the results before passing them first through dropout and then through the linear layer.
###Code
class CNN(nn.Module):
def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim,
dropout, pad_idx):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
self.convs = nn.ModuleList([
nn.Conv2d(in_channels = 1,
out_channels = n_filters,
kernel_size = (fs, embedding_dim))
for fs in filter_sizes
])
self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text):
#text = [batch size, sent len]
embedded = self.embedding(text)
#embedded = [batch size, sent len, emb dim]
embedded = embedded.unsqueeze(1)
#embedded = [batch size, 1, sent len, emb dim]
conved = [F.relu(conv(embedded)).squeeze(3) for conv in self.convs]
#conved_n = [batch size, n_filters, sent len - filter_sizes[n] + 1]
pooled = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in conved]
#pooled_n = [batch size, n_filters]
cat = self.dropout(torch.cat(pooled, dim = 1))
#cat = [batch size, n_filters * len(filter_sizes)]
return self.fc(cat)
###Output
_____no_output_____
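###Markdown
A minimal sketch (toy layer sizes, not from the original tutorial) of why `nn.ModuleList` matters: convolutions stored in a plain Python list are not registered as submodules, so their parameters are invisible to `.parameters()` and therefore to the optimizer.
###Code
class WithModuleList(nn.Module):
    def __init__(self):
        super().__init__()
        self.convs = nn.ModuleList([nn.Conv2d(1, 4, (fs, 10)) for fs in [3, 4, 5]])

class WithPlainList(nn.Module):
    def __init__(self):
        super().__init__()
        self.convs = [nn.Conv2d(1, 4, (fs, 10)) for fs in [3, 4, 5]]  # not registered as submodules

print(sum(p.numel() for p in WithModuleList().parameters()))  # > 0, the convolutions are visible
print(sum(p.numel() for p in WithPlainList().parameters()))   # 0, PyTorch cannot see the plain list
###Output
_____no_output_____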
###Markdown
We can also implement the model above using 1-dimensional convolutional layers, where the embedding dimension is the filter depth and the number of tokens is the width.
###Code
class CNN1d(nn.Module):
def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim,
dropout, pad_idx):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
self.convs = nn.ModuleList([
nn.Conv1d(in_channels = embedding_dim,
out_channels = n_filters,
kernel_size = fs)
for fs in filter_sizes
])
self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text):
#text = [batch size, sent len]
embedded = self.embedding(text)
#embedded = [batch size, sent len, emb dim]
embedded = embedded.permute(0, 2, 1)
#embedded = [batch size, emb dim, sent len]
conved = [F.relu(conv(embedded)) for conv in self.convs]
#conved_n = [batch size, n_filters, sent len - filter_sizes[n] + 1]
pooled = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in conved]
#pooled_n = [batch size, n_filters]
cat = self.dropout(torch.cat(pooled, dim = 1))
#cat = [batch size, n_filters * len(filter_sizes)]
return self.fc(cat)
###Output
_____no_output_____
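###Markdown
Again a small shape check with assumed toy sizes (not from the original tutorial): in the 1-dimensional variant the embedding dimension acts as `in_channels`, so no `unsqueeze` is needed and the output length follows the same sent len - filter size + 1 rule.
###Code
batch_size, sent_len, emb_dim, n_filters, fs = 2, 10, 5, 3, 2  # hypothetical toy sizes
embedded = torch.randn(batch_size, sent_len, emb_dim).permute(0, 2, 1)  # [batch size, emb dim, sent len]
conv = nn.Conv1d(in_channels = emb_dim, out_channels = n_filters, kernel_size = fs)
conved = F.relu(conv(embedded))  # [batch size, n_filters, sent len - fs + 1]
print(conved.shape)              # torch.Size([2, 3, 9])
###Output
_____no_output_____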
###Markdown
Let's instantiate the model.
###Code
INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
N_FILTERS = 100
FILTER_SIZES = [2,3,5]
OUTPUT_DIM = 1
DROPOUT = 0.5
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
model = CNN1d(INPUT_DIM, EMBEDDING_DIM, N_FILTERS, FILTER_SIZES, OUTPUT_DIM, DROPOUT, PAD_IDX)
###Output
_____no_output_____
###Markdown
As always, let's see how many trainable parameters the model has.
###Code
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
###Output
The model has 2,600,801 trainable parameters
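###Markdown
The printed total can be reproduced by hand. The vocabulary size of 25,002 is inferred from the count above (it is not shown explicitly in this notebook), so treat it as an assumption; the embedding table accounts for almost all of the parameters, while the three Conv1d layers and the final linear layer contribute the rest.
###Code
vocab_size = 25002  # assumed: the len(TEXT.vocab) that reproduces the printed total
embedding_params = vocab_size * EMBEDDING_DIM
conv_params = sum(EMBEDDING_DIM * N_FILTERS * fs + N_FILTERS for fs in FILTER_SIZES)  # weights + biases
fc_params = len(FILTER_SIZES) * N_FILTERS * OUTPUT_DIM + OUTPUT_DIM
print(embedding_params + conv_params + fc_params)  # 2600801
###Output
_____no_output_____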
###Markdown
Let's load the pre-trained embedding vectors.
###Code
pretrained_embeddings = TEXT.vocab.vectors
model.embedding.weight.data.copy_(pretrained_embeddings)
###Output
_____no_output_____
###Markdown
We then initialize the unk and pad embedding vectors to zero.
###Code
UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]
model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)
###Output
_____no_output_____
###Markdown
Training the model---The training phase does not change from the previous notebooks.
###Code
import torch.optim as optim
optimizer = optim.Adam(model.parameters())
criterion = nn.BCEWithLogitsLoss()
model = model.to(device)
criterion = criterion.to(device)
def binary_accuracy(preds, y):
"""
Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
"""
#round predictions to the closest integer
rounded_preds = torch.round(torch.sigmoid(preds))
correct = (rounded_preds == y).float() #convert into float for division
acc = correct.sum() / len(correct)
return acc
def train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
optimizer.zero_grad()
predictions = model(batch.text).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
def evaluate(model, iterator, criterion):
epoch_loss = 0
epoch_acc = 0
model.eval()
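    # eval() disables dropout; torch.no_grad() below also turns off gradient tracking
    # to save memory and time during validation/testing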
with torch.no_grad():
for batch in iterator:
predictions = model(batch.text).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
import time
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
N_EPOCHS = 5
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut4-model.pt')
print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
model.load_state_dict(torch.load('tut4-model.pt'))
test_loss, test_acc = evaluate(model, test_iterator, criterion)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
###Output
Test Loss: 0.399 | Test Acc: 84.86%
###Markdown
Custom input---**Note:** as mentioned before, if the input sentence is shorter than the largest filter we will get an error. To avoid this, our `predict_sentiment` function accepts a `min_len` parameter. If the input sentence has fewer tokens than min_len, we pad the sentence with padding tokens until it reaches min_len.
###Code
import spacy
nlp = spacy.load('en')
def predict_sentiment(model, sentence, min_len = 5):
model.eval()
tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
if len(tokenized) < min_len:
tokenized += ['<pad>'] * (min_len - len(tokenized))
indexed = [TEXT.vocab.stoi[t] for t in tokenized]
tensor = torch.LongTensor(indexed).to(device)
tensor = tensor.unsqueeze(0)
prediction = torch.sigmoid(model(tensor))
return prediction.item()
###Output
_____no_output_____
###Markdown
With a negative sentence:
###Code
predict_sentiment(model, "This film is terrible")
###Output
_____no_output_____
###Markdown
With a positive sentence:
###Code
predict_sentiment(model, "This film is great")
###Output
_____no_output_____ |
result_scripts/Figure4.ipynb | ###Markdown
Calculate IDseq NT and NR recall
###Code
# Loop through all the samples and determine the recall via idseq-bench-score tool
results = {}
for sample_id in SAMPLE_DICT.keys():
sample_name = SAMPLE_DICT[sample_id]
print(sample_name)
results[sample_name] = {}
# run idseq-bench-score for this sample
bench_result = subprocess.check_output("idseq-bench-score "+ PROJECT + " " + sample_id + " " + VERSION, shell=True)
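    # idseq-bench-score apparently writes a few header lines before the JSON payload,
    # so the line below skips the first 6 lines of stdout before parsing the rest as JSON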
d = json.loads(''.join(bench_result.decode('utf-8').split('\n')[6:]))
try:
# get the Rhinovirus C (species and genus) recall values from idseq-bench-score json
results[sample_name]['NTspecies'] = d['per_rank']['species']['NT']['463676']['recall_per_read']['count']
results[sample_name]['NTgenus'] = d['per_rank']['genus']['NT']['12059']['recall_per_read']['count']
results[sample_name]['NRspecies'] = d['per_rank']['species']['NR']['463676']['recall_per_read']['count']
results[sample_name]['NRgenus'] = d['per_rank']['genus']['NR']['12059']['recall_per_read']['count']
except:
# if the sample does not have any reads mapping to Rhinovirus C, the metrics will not
# appear in the json result; skip these samples
print("failed to gather metrics for sample: " + sample_name)
df = pd.DataFrame(results).transpose()
df.fillna(0, inplace=True)
df[['NTspecies','NRspecies']].plot(figsize=(8,3), alpha=.6)
###Output
_____no_output_____
###Markdown
Read Kraken2 results
###Code
kraken_results_folder = './data/kraken2/hrc_experiment/'
kraken_dict = {}
for sample_id in SAMPLE_DICT.keys():
sample_name = SAMPLE_DICT[sample_id]
kraken_dict[sample_name] = {}
print(sample_name)
kraken_res = kraken_results_folder + sample_name + '.kraken2.out'
this_result = pd.read_csv(kraken_res, sep = '\t', header=None)
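    # Standard Kraken2 per-read output columns: 0 = C/U classified flag, 1 = read id,
    # 2 = assigned taxid, 3 = sequence length, 4 = LCA k-mer mappings.
    # The 2* factors below presumably count both mates of each read pair.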
kraken_dict[sample_name]['classified'] = 2*(Counter(this_result[0])['C'])
correct = 2*Counter(this_result[2])[463676]
false_positives = (kraken_dict[sample_name]['classified'] - correct) #/ kraken_dict[sample_name]['classified']
kraken_dict[sample_name]['recall'] = 2* (Counter(this_result[2])[463676] / kraken_dict[sample_name]['classified'])
kraken_dict[sample_name]['true_positive'] = 2*(Counter(this_result[2])[463676])
kraken_dict[sample_name]['false_positives'] = (false_positives)
df2 = pd.concat([df,pd.DataFrame(kraken_dict).transpose()],axis=1)
df2.head()
df2.columns = ['IDseq NR Genus', 'IDseq NR Species', 'IDseq NT Genus', 'IDseq NT Species',
'Kraken Total Classified','Kraken False Positives','Kraken Recall','Kraken True Positives']
df2[['IDseq NT Species','IDseq NR Species','Kraken True Positives','Kraken False Positives']].plot(style='.-', lw = 3, ms = 10,figsize=(12,4), alpha=.8, cmap='Accent')#color=['#068977','green','darkblue','orange'])
plt.xlabel('Sample ID')
plt.ylabel('Number of Reads')
plt.savefig('../Figures/Figure 4.pdf')
df2.columns = ['IDseq NR Genus', 'IDseq NR Species', 'IDseq NT Genus', 'IDseq NT Species',
'Kraken Total Classified','Kraken False Positives','Kraken Recall','Kraken True Positives']
percentage_scaled_df = df2[['IDseq NT Species','IDseq NR Species','Kraken True Positives','Kraken False Positives']]
# passed-filters (PF) values from IDseq pipeline QC steps
pf_values = [8582,8562,8514,8480,8422,8544,8558,8504,8492,8496,8450,8430,8458,8466,8466,8478,8466]
percentage_scaled_df['IDseq NT Species %'] = [list(df2['IDseq NT Species'])[i] / pf_values[i] for i in range(len(pf_values))]
percentage_scaled_df['IDseq NR Species %'] = [list(df2['IDseq NR Species'])[i] / pf_values[i] for i in range(len(pf_values))]
percentage_scaled_df['Kraken True Positives %'] = [list(df2['Kraken True Positives'])[i] / 10000 for i in range(len(pf_values))]
percentage_scaled_df['Kraken False Positives %'] = [list(df2['Kraken False Positives'])[i] / 10000 for i in range(len(pf_values))]
percentage_scaled_df = percentage_scaled_df[['IDseq NT Species %', 'IDseq NR Species %', 'Kraken True Positives %', 'Kraken False Positives %']]
percentage_scaled_df.plot(style='.-', lw = 3, ms = 10,figsize=(12,4), alpha=.8, cmap='Accent')#color=['#068977','green','darkblue','orange'])
plt.xlabel('Sample ID')
plt.ylabel('Percentage of Reads')
plt.savefig('../Figures/Figure4_v3.pdf')
df2.head(10)
###Output
_____no_output_____ |
BigWordGame.ipynb | ###Markdown
Jon's ~~Wordle Clone~~ "Big Word Game"
* Slide word_size to choose the number of letters
* Slide number_attempts to choose the number of tries you get
* The answer is tied to game_code, so if you use the same code as a friend, you'll both be guessing the same answer. Type in "RANDOM" to get a random word.
↓ Click this circle to play!
###Code
# Peeking at the code, are we?
# Stuff for the colab form
#@title Settings { vertical-output: true, display-mode: "form" }
word_size = 6 #@param {type:"slider", min:4, max:12, step:1}
number_attempts = 7 #@param {type:"slider", min:3, max:20, step:1}
game_code = "HELLO THERE" #@param {type:"string"}
# Libraries and stuff
print("Importing")
import random # Select random words
import time # Calculate time taken
import pandas as pd
import nltk # Natural Language ToolKit, contains word lists
from google.colab import output # Clear the console output
from nltk.corpus import brown # Word corpus
from termcolor import colored # Colored console text
# download corpus (words)
nltk.download("brown")
# Set the randomizer seed according to the room code
output.clear()
print("Setting room code")
if game_code == "RANDOM":
pass
else:
random.seed(a=game_code, version=2)
# Filter words from corpus depending on word length selected
output.clear()
print(" Building word list")
word_list = [
word.lower() for word in brown.words() if len(word) == word_size and word.isalpha()
]
print(f"{len(word_list)} word(s) found with {word_size} letters")
# Initial instructions
output.clear()
print(f"Guess a {word_size} letter word in {number_attempts} attempts!")
print(f"{colored('Green', 'green')} letters are in the right place")
print(f"{colored('Yellow', 'yellow')} letters are in the word, but at the wrong place")
print(f"{colored('Red', 'red')} letters are not in the word")
answer = random.choice(word_list) # Select a random word from the list
start = time.time() # Start the timer
keyboard = " Q W E R T Y U I O P \n A S D F G H J K L \n Z X C V B N M" # This is the keyboard display text
print(keyboard)
initial_number_attempts = number_attempts # Saving this here for later
# Initial checks for word length and validity
total_results = [] # List of results, empty for now
while number_attempts > 0: # While you've still got tries left
while True:
guess = str(input()).lower() # Convert to lowercase to check
if len(guess) < word_size: # If the word is too small
print("Too few letters")
continue
elif len(guess) > word_size: # If the word is too big
print("Too many letters")
continue
elif guess not in word_list: # If the word isn't a word
print("Word not in dictionary")
continue
else:
break
# Main game loop
result = "" # The result, empty for now
for idx, letter in enumerate(guess): # iterate through every letter in your guess
if guess[idx] == answer[idx]: # If it's in the right spot
result += colored(letter, "green") # Color it green and add it to the results
keyboard = keyboard.replace( # Also make the letter green in the keyboard display
letter.upper(), (colored(letter.upper(), "green")) # Keyboard uses uppercase letters
)
elif guess[idx] in list(answer): # Otherwise, if the letter is in the word (but not in the right spot)
result += colored(letter, "yellow") # Color it yellow
keyboard = keyboard.replace(
letter.upper(), (colored(letter.upper(), "yellow"))
)
else:
result += colored(letter, "red") # Otherwise, color it red
keyboard = keyboard.replace(
letter.upper(), (colored(letter.upper(), "red"))
)
total_results.append(result) # Add this result to the list of all results
if guess == answer: # If you get it right
break # exit
else: # otherwise show the previous words and the keyboard, and deduct a try
output.clear()
print(*total_results, sep="\n")
print(keyboard)
number_attempts -= 1
end = time.time() # End timer
output.clear()
# Results
print(*total_results, sep="\n")
print(f"The answer was {answer}")
print(f"{len(total_results)}/{initial_number_attempts} attempts used")
print(f"Time taken: {round(end-start)} seconds")
###Output
_____no_output_____ |
notebooks/freenet_analisis_new.ipynb | ###Markdown
Experiment - Global results Config params:
- MAX_ONGOING_SPIDERS = 10: number of simultaneous spiders running
- MAX_CRAWLING_ATTEMPTS_ON_ERROR = 2: number of crawling attempts for sites in error
- MAX_CRAWLING_ATTEMPTS_ON_DISCOVERING = 2*24*7: number of discovering attempts (7 days, 2 tries per hour)
- MAX_DURATION_ON_DISCOVERING = 24*7*60 minutes --> 7 days: maximum duration of the discovering phase
- MAX_SINGLE_THREADS_ON_DISCOVERING = 25: number of parallel single threads running
- HTTP_TIMEOUT = 180 seconds: HTTP response timeout
- INITIAL_SEEDS = "seed_urls.txt": initial seed file
- INITIAL_SEEDS_BACH_SIZE = 59: batch size of initial seeds (590/10 = 59)
- SEEDS_ASSIGNMENT_PERIOD = 1200 seconds: time to wait until the next seeds self-assignment (10 machines, 2 minutes/machine --> 20 minutes)
- TIME_INTERVAL_TO_DISCOVER = 30 minutes: schedules the discovering; each site will be discovered every TIME_INTERVAL_TO_DISCOVER
MySQL:
- max_connections = 1500
###Code
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import matplotlib.dates as mdates
import seaborn as sns
import numpy as np
from sqlalchemy import create_engine
import pymysql
import spacy
from googletrans import Translator
from collections import Counter
from collections import defaultdict
# Experiment ID used for storage
experiment_id = 'experiment_04082020_2020'
# Directory for the dataframes
data_path = 'data/experiment/' + experiment_id + '/bbdd/'
# Save the experiment: 1 --> keep the results
to_save = 0
# Load the experiment: 1 --> restore from files, 0 --> from the database
from_fs = 1
# Database access, local or remote: 1 --> local, 0 --> remote
bbdd_connection = 1
# Limit for the siteprocessinglog table in case it is very large
logprocessing_limit = 1000000
# Show columns at their full width
pd.set_option('display.max_colwidth', None)
# Directory for images
img_path = "img/"
# Plot configuration parameters
# fontdict for axis labels
font_labels = {'family' : 'arial',
'weight' : 'normal',
'size' : 26}
# fondict for title labels
font_title = {'family' : 'arial',
'weight' : 'bold',
'size' : 24}
# fontsize for ticks
ticks_fontsize=20
# legend fontsize
legend_fontsize=15
# Linewidth and markersize
lw=5
ms=10
# Mapeo de UUID y nombre de la maquina
#uuid = {'30304872-abed-11ea-b816-4889e7cf26ff':'i2pProjectM1'}
#uuid
if from_fs: ## Restaurando ficheros
df_site = pd.read_pickle(data_path + experiment_id + "_site.pickle")
df_status = pd.read_pickle(data_path + experiment_id + "_status.pickle")
df_source = pd.read_pickle(data_path + experiment_id + "_source.pickle")
df_logprocessing = pd.read_pickle(data_path + experiment_id + "_logprocessing.pickle")
df_language = pd.read_pickle(data_path + experiment_id + "_sitelanguage.pickle")
df_sitehomeinfo = pd.read_pickle(data_path + experiment_id + "_sitehomeinfo.pickle")
df_connectivity = pd.read_pickle(data_path + experiment_id + "_siteconnectivity_updated_offline.pickle")
df_src_link = pd.read_pickle(data_path + experiment_id + "_link_site.pickle")
df_dst_link = pd.read_pickle(data_path + experiment_id + "_link_site_2.pickle")
else:## Obteniendo de la base de datos
if bbdd_connection:
port = '3306'
else:
port = '6666'
engine = create_engine('mysql+pymysql://root:toor@localhost:'+port+'/freenet08', echo=False)
df_site = pd.read_sql_query('select * from site', engine)
df_status = pd.read_sql_query('select * from sitestatus', engine)
df_source = pd.read_sql_query('select * from sitesource', engine)
df_logprocessing = pd.read_sql_query('select * from siteprocessinglog limit ' + str(logprocessing_limit), engine)
df_language = pd.read_sql_query('select sitelanguage.* from sitelanguage', engine)
df_sitehomeinfo = pd.read_sql_query('select sitehomeinfo.* from sitehomeinfo', engine)
df_connectivity = pd.read_sql_query('select siteconnectivitysummary.* from siteconnectivitysummary', engine)
df_src_link = pd.read_sql_query('select link_site.* from link_site', engine)
df_dst_link = pd.read_sql_query('select link_site_2.* from link_site_2', engine)
## Almacenando dataframes
if to_save:
df_site.to_pickle(data_path + experiment_id + "_site.pickle")
df_status.to_pickle(data_path + experiment_id + "_status.pickle")
df_source.to_pickle(data_path + experiment_id + "_source.pickle")
df_logprocessing.to_pickle(data_path + experiment_id + "_logprocessing.pickle")
df_language.to_pickle(data_path + experiment_id + "_sitelanguage.pickle")
df_sitehomeinfo.to_pickle(data_path + experiment_id + "_sitehomeinfo.pickle")
df_connectivity.to_pickle(data_path + experiment_id + "_siteconnectivity_updated_offline.pickle")
df_src_link.to_pickle(data_path + experiment_id + "_link_site.pickle")
df_dst_link.to_pickle(data_path + experiment_id + "_link_site_2.pickle")
# Agregamos el dato de la duracion, en minutos, que seria la diferencia entre la fecha de inicio y la de fin
df_site['duration'] = (df_site['timestamp_s'] - df_site['timestamp']).apply(lambda x:x.total_seconds()/60)
# Agregamos el dato host, que es el mapeo del uuid correspondiente
#df_site['host']=df_site['uuid'].map(uuid)
#Add an abbreviation for the Freenet sites
df_site['abbr'] = df_site['name']
for i in range(0, len(df_site.index)):
    name = df_site['abbr'][i]
    #Check whether it ends with a slash
    if name[-1] == "/":
        name = name[:-1]
    #Check whether the key is USK or SSK
    is_usk = False
    if "USK@" in name:
        is_usk = True
    #Keep what comes after the @
    name = name.split("@", 1)[1]
    if is_usk:
        name = name.rsplit("/", 1)[0]
        name = name.split("/", 1)[1]
    else:
        if "/" in name:
            name = name.split("/", 1)[1]
    #df_site['abbr'][i] = name
    df_site.at[i, 'abbr'] = name
# Agregamos la informacion del estado del sitio
df_site_status = df_site.merge(df_status,left_on='current_status',right_on='id')
df_site_status = df_site_status.drop(labels=['type_x','id_y','description','current_status'],axis=1)
df_site_status=df_site_status.rename(columns={'type_y':'status'})
# Agregamos la infromacion de la fuente del sitio
df_site_source = df_site.merge(df_source,left_on='source',right_on='id')
df_site_source = df_site_source.drop(labels=['type_x','id_y','description','source'],axis=1)
df_site_source=df_site_source.rename(columns={'type_y':'source'})
# Unimos ambas informaciones en un mismo lugar
df_site_source_status = df_site_source.merge(df_status,left_on='current_status',right_on='id')
df_site_source_status = df_site_source_status.drop(labels=['id','current_status','description'],axis=1)
df_site_source_status = df_site_source_status.rename(columns={'type':'current_status', 'id_x':'id'})
#Unimos la informacion del sitio con las de la conectividad
df_site_conn = df_site_source_status.merge(df_connectivity,left_on='id',right_on='site')
df_site_conn = df_site_conn.drop(labels=['id_x','id_y','pages_x'],axis=1)
df_site_conn = df_site_conn.rename(columns={'pages_y':'pages'})
#Unimos la conectividad de los nodos para los grafos
df_links = df_src_link.merge(df_dst_link,left_on='link',right_on='link')
df_links = df_links.rename(columns={'site_x':'Source','site_y':'Target','link':'Label'})
#Unimos los site con la info de home
df_site_home = df_site.merge(df_sitehomeinfo,left_on='id',right_on='site')
df_site_home = df_site_home.drop(labels=['id_x','id_y'],axis=1)
#Unimos los sites con la info del home con el lenguaje
df_site_home_lan = df_site_home.merge(df_language[df_language['engine'] == 'GOOGLE'],left_on='site',right_on='site')
#Le agregamos una columna mas que usaremos para el analisis de los datos
df_site_home_lan['illicit_category'] = ""
df_site_home_lan['illicit_value'] = 0
# Vemos la fuente de los sitios en general
total_all_source = df_site_source_status['source'].value_counts()
print(total_all_source)
total_all_source.plot(kind='pie', autopct='%1.1f%%', startangle=90, fontsize=14, figsize=(8,8))
# Vemos la fuente de los sitios activos
df_site_active = df_site_source_status[df_site_source_status['current_status'] == 'FINISHED']
total_active_source = df_site_active['source'].value_counts()
print(total_active_source)
total_active_source.plot(kind='pie', autopct='%1.1f%%', startangle=90, fontsize=14, figsize=(8,8))
# Vemos la distribucion de sitios por estado
total_status_sites = df_site_status['status'].value_counts()
print(total_status_sites)
total_status_sites.plot(kind='pie', autopct='%1.1f%%', labeldistance=None, fontsize=14, figsize=(8,8)).legend(loc='upper right', bbox_to_anchor=(0.25,0.25))
df_ss_analysis = df_site_source_status.copy()
df_ss_analysis = df_ss_analysis.set_index('timestamp')
df_ss_analysis_s = df_site_source_status.copy()
df_ss_analysis_s = df_ss_analysis_s.set_index('timestamp_s')
df_ss_s = df_ss_analysis_s.copy() #Con fecha de stop del crawling
df_ss = df_ss_analysis.copy() #Con fecha de incorporacion a la bbdd
# Evolucion temporal de los sitios procesados
df_ss_all = df_ss['2020-08-03':]
df_ss_all_s = df_ss_s['2020-08-03':]
#df_ss_all = df_ss['2020-07-15':]
temp_evo_sites = df_ss_all.resample('D').count()['name'].cumsum()
print(temp_evo_sites)
ax = temp_evo_sites.plot(kind='line', fontsize=14, figsize=(8,8), style='o-')
ax.set_ylabel('Sitios procesados')
ax.set_xlabel('Fecha')
# Evolucion temporal de los sitios crawleados
temp_evo_sites_active = df_ss_all_s[df_ss_all_s['current_status'] == 'FINISHED'].resample('D').count()['name'].cumsum()
print(temp_evo_sites_active)
ax = temp_evo_sites_active.plot(kind='line', fontsize=14, figsize=(8,8), style='o-')
ax.set_ylabel('Sites successfully crawled', fontsize=14)
ax.set_xlabel('Date', fontsize=16)
ax.get_figure().savefig("/home/emilio/Documentos/SitiosActivosTemporal_eng.pdf")
# Numero de sitios con crawling finalizado tras el primer dia
# VS
# Numero de sitios con crawling finalizado tras el ultimo dia
df_ss_first_day = df_ss_s['2020-08-04':'2020-08-05'] #Primeras 24 horas
#df_ss_first_day = df_ss_s['2020-07-16']
site_crawled_first_day = df_ss_first_day[df_ss_first_day['current_status'] == 'FINISHED']['source'].value_counts()
print(site_crawled_first_day)
site_crawled_last_day = df_site_source_status[df_site_source_status['current_status'] == 'FINISHED']['source'].value_counts()
print(site_crawled_last_day)
df = pd.DataFrame({'Primer dia': site_crawled_first_day, 'Al finalizar': site_crawled_last_day}, index=['DISCOVERED', 'SEED'])
ax = df.plot(rot=0, kind = 'bar', fontsize=14, figsize=(12,8))
for p in ax.patches:
ax.annotate(np.round(p.get_height(),decimals=2), (p.get_x()+p.get_width()/2., p.get_height()), ha='center', va='center', xytext=(0, 10), textcoords='offset points')
ax.set_ylabel('Sitios correctamente crawleados')
ax.set_xlabel('Fuente')
#Histograma de los sites crawleados tras el primer dia en funcion de los outgoing sites
#Se salta por ahora
#Intentos de descubrimientos de los sitios crawleados
total_in_status = df_site_status[df_site_status['status']=='FINISHED']['discovering_tries'].count()
print("Total FINISHED: " + str(total_in_status))
value_count_in_status = df_site_status[df_site_status['status']=='FINISHED']['discovering_tries'].value_counts()
#Porcentaje
print((value_count_in_status/total_in_status)*100)
#Valor absoluto
print(value_count_in_status[value_count_in_status.index > 290])
print("5 intentos o menos: ")
print(value_count_in_status[value_count_in_status.index <= 5].sum())
print("Mas de 5 intentos: ")
print(value_count_in_status[value_count_in_status.index > 5].sum())
try_disc_crawled_sites = df_site_status[df_site_status['status']=='FINISHED']['discovering_tries']
ax = try_disc_crawled_sites.hist(bins=600, figsize=(15,8), xlabelsize=18, ylabelsize=18)
ax.set_ylabel('Sites successfully crawled', fontsize=18)
ax.set_xlabel('Discovery attempts', fontsize=18)
ax.get_figure().savefig("/home/emilio/Documentos/IntentosDescubrimientosActivos_eng600.pdf")
### Analisis de idioma (segun Google)
language_google = df_language[df_language['engine'] == 'GOOGLE']['language']
language_google = language_google.replace('','undefined')
language_google_count = language_google.value_counts()
print("Se han detectado {} idiomas diferentes.".format(language_google_count.count()))
condition = language_google_count<7 # Definir el limite para agrupar en 'others'
mask_obs = language_google_count[condition].index
mask_dict = dict.fromkeys(mask_obs, 'others')
language_google = language_google.replace(mask_dict)
language_google_count = language_google.value_counts() #Valores
language_google_count_norm = language_google.value_counts(normalize=True) #Porcentaje
language_google_count_all = pd.concat([language_google_count, language_google_count_norm], axis=1) #Todo
print(language_google_count_all)
language_google_count.plot(kind='pie', autopct='%1.1f%%', labeldistance=None, fontsize=14, figsize=(8,8)).legend(loc='upper right', bbox_to_anchor=(0.25,0.25))
#Analisis de idioma (segun NLTK)
language_nltk = df_language[df_language['engine'] == 'NLTK']['language']
language_nltk_count = language_nltk.value_counts()
print("Se han detectado {} idiomas diferentes.".format(language_nltk_count.count()))
#print(language_nltk_count) # Seleccionar limite en base a resultados
condition = language_nltk_count<17 # Definir el limite para agrupar en 'others'
mask_obs = language_nltk_count[condition].index
mask_dict = dict.fromkeys(mask_obs, 'others')
language_nltk = language_nltk.replace(mask_dict)
language_nltk_count = language_nltk.value_counts() #Valores
language_nltk_count_norm = language_nltk.value_counts(normalize=True) #Porcentaje
language_nltk_count_all = pd.concat([language_nltk_count, language_nltk_count_norm], axis=1)
print(language_nltk_count_all)
language_nltk_count.plot(kind='pie', autopct='%1.1f%%', labeldistance=None, fontsize=14, figsize=(8,8)).legend(loc='upper right', bbox_to_anchor=(0.25,0.25))
#Numero de paginas en los sitio crawleados
total_in_status = df_site_status[df_site_status['status']=='FINISHED']['pages'].count()
print("Total FINISHED: " + str(total_in_status))
value_count_pagescrawledsites = df_site_status[df_site_status['status']=='FINISHED']['pages'].value_counts()
print("Número de sitios con 5 paginas o menos: {}".format(value_count_pagescrawledsites[value_count_pagescrawledsites.index <= 5].sum()))
#Porcentaje
print((value_count_pagescrawledsites/total_in_status)*100)
#Valor absoluto
print(value_count_pagescrawledsites)
#Aqui se muestra en la primera columna el numero de paginas y en la segunda el numero de sitios con dichas paginas
#range=[0, 100]
num_pages_crawled_sites = df_site_status[df_site_status['status']=='FINISHED']['pages']
ax = num_pages_crawled_sites.hist(bins=40000, figsize=(15,8), xlabelsize=18, ylabelsize=18)
ax.set_ylabel('Sites successfully crawled', fontsize=18)
ax.set_xlabel('Number of pages', fontsize=18)
ax.set_yscale('log')
#ax.get_figure().savefig("/home/emilio/Documentos/PaginasSitiosActivos_eng.pdf")
#Tener en cuenta que puede haber paginas, aunque muy pocas, que tengan mas de 200 y no salgan en el histograma
#TOP 5 de sitios con mas paginas
top_pages = df_site_status[df_site_status['status']=='FINISHED'][['abbr', 'pages', 'name']]
top_pages = top_pages.sort_values(by=['pages'], ascending=False).head()
ax = top_pages.plot.bar(rot=0, fontsize=14, figsize=(12,8), x = 'abbr')
for p in ax.patches:
ax.annotate(np.round(p.get_height(),decimals=2), (p.get_x()+p.get_width()/2., p.get_height()), ha='center', va='center', xytext=(0, 10), textcoords='offset points')
ax.set_ylabel('Nº de páginas')
ax.set_xlabel('Abreviatura del sitio')
top_pages
# Tiempo que tarda en crawlear los sites
total_in_status = df_site_status[df_site_status['status']=='FINISHED']['duration'].count()
print("Total FINISHED: " + str(total_in_status))
#Porcentaje
#print((df_site_status[df_site_status['status']=='FINISHED']['duration'].value_counts()/total_in_status)*100)
#Valor absoluto
print(df_site_status[df_site_status['status']=='FINISHED']['duration'].value_counts())
#Aqui se muestra en la primera columna el tiemop que tarda y en la segunda el numero de sitios que han tardado ese tiempo
duration_crawled_sites = df_site_status[df_site_status['status']=='FINISHED']['duration']
ax = duration_crawled_sites.hist(bins=200, figsize=(15,8))
#ax = duration_crawled_sites.hist(bins=200, figsize=(15,8), range=[0,4000])
#Eje X es el tiempo en minutos y eje Y el número de sites
ax.set_xlabel('Tiempo (minutos)')
ax.set_ylabel('Nº de sitios')
#Relacion entre duracion y numero de paginas
#pages_duration = pd.concat([df_site_status[df_site_status['status']=='FINISHED']['pages'], df_site_status[df_site_status['status']=='FINISHED']['duration']], axis=1)
pages_duration = df_site_status[df_site_status['status']=='FINISHED'][['pages', 'duration']]
ax = pages_duration.plot.scatter(x='duration', y='pages',figsize=(15,8), facecolors='none', edgecolors='deepskyblue', alpha=0.2, s=100)
ax.set_xlabel('Tiempo (minutos)')
ax.set_ylabel('Nº de páginas')
#Relacion entre duracion y los intentos de discovering
discovering_duration = df_site_status[df_site_status['status']=='FINISHED'][['discovering_tries', 'duration']]
ax = discovering_duration.plot.scatter(x='duration', y='discovering_tries',figsize=(15,8), facecolors='none', edgecolors='deepskyblue', alpha=0.2, s=100)
ax.set_xlabel('Tiempo (minutos)')
ax.set_ylabel('Intentos de descubrimiento')
#Estadisticas de los intentos de descubrimientos, paginas y duracion
# https://blog.adrianistan.eu/estadistica-python-media-mediana-varianza-percentiles-parte-iii
#try_pages_duration = pd.concat([df_site_status[df_site_status['status']=='FINISHED']['discovering_tries'], df_site_status[df_site_status['status']=='FINISHED']['pages'], df_site_status[df_site_status['status']=='FINISHED']['duration']], axis=1)
try_pages_duration = df_site_status[df_site_status['status']=='FINISHED'][['discovering_tries', 'pages', 'duration']]
#Media
avg = try_pages_duration.mean()
print("MEDIA:")
print(avg)
print("\n")
#Mediana
median = try_pages_duration.median()
print("MEDIANA:")
print(median)
print("\n")
#Moda
mode = try_pages_duration.mode()
print("MODA:")
print(mode)
print("\n")
#Desviacion estandar
std = try_pages_duration.std(ddof=0)
print("DESVIACION ESTANDAR:")
print(std)
print("\n")
#Rango e IQR
rango = try_pages_duration.max() - try_pages_duration.min()
iqr = try_pages_duration.quantile(0.75) - try_pages_duration.quantile(0.25)
print("MINIMO:")
print(try_pages_duration.min())
print("MAXIMO:")
print(try_pages_duration.max())
print("\n")
print("RANGO (DIFERENCIA ENTRE MAXIMO Y MINIMO):")
print(rango)
print("\n")
print("RANGO INTERCUARTILICO:")
print(iqr)
print("\n")
#Coeficiente de variacion
cv = std / avg
print("COEFICIENTE DE VARIACION:")
print(cv)
print("\n")
# Home page analysis
#letters_home = df_sitehomeinfo['letters']
words_home = df_sitehomeinfo['words']
images_home = df_sitehomeinfo['images']
scripts_home = df_sitehomeinfo['scripts']
plt.grid()
plt.hist(words_home, bins=4000, label = "Words", hatch='/')
plt.hist(images_home, bins=4000, label = "Images", hatch='.')
plt.hist(scripts_home, bins=100, label = "Scripts", hatch='-')
plt.legend(loc='upper right')
#plt.yscale("symlog")
plt.xscale("symlog")
plt.xlabel("Number of words/images/scripts", fontsize=8)
plt.ylabel("Number of sites", fontsize=8)
#plt.show()
plt.savefig('/home/emilio/Documentos/WordsScriptsImages2.svg')
# Letras
#letters_home
#ax = letters_home.hist(bins=100, figsize=(15,8))
#ax = letters_home.hist(bins=100, figsize=(15,8), range=[0, 200000])
#ax.set_xlabel('Nº de letras')
#ax.set_ylabel('Nº de sitios')
#Palabras
ax = words_home.hist(bins=100, figsize=(15,8))
ax.set_xlabel('Nº de palabras')
ax.set_ylabel('Nº de sitios')
#Scripts
ax = scripts_home.hist(bins=5, figsize=(15,8), range=[0, 5])
ax.set_xlabel('Nº de scripts')
ax.set_ylabel('Nº de sitios')
#Imagenes
ax = images_home.hist(bins=150, figsize=(15,8))
ax.set_xlabel('Nº de imágenes')
ax.set_ylabel('Nº de sitios')
#Statistical analysis of the home pages
# https://blog.adrianistan.eu/estadistica-python-media-mediana-varianza-percentiles-parte-iii
# letters_home is not defined (its extraction is commented out above), so only words, scripts and images are analysed
homeinfo_stats = pd.concat([words_home, scripts_home, images_home], axis=1)
#Media
avg = homeinfo_stats.mean()
print("MEDIA:")
print(avg)
print("\n")
#Mediana
median = homeinfo_stats.median()
print("MEDIANA:")
print(median)
print("\n")
#Moda
mode = homeinfo_stats.mode()
print("MODA:")
print(mode)
print("\n")
#Desviacion estandar
std = homeinfo_stats.std(ddof=0)
print("DESVIACION ESTANDAR:")
print(std)
print("\n")
print("MINIMO:")
print(homeinfo_stats.min())
print("MAXIMO:")
print(homeinfo_stats.max())
print("\n")
#Rango e IQR
rango = homeinfo_stats.max() - homeinfo_stats.min()
iqr = homeinfo_stats.quantile(0.75) - homeinfo_stats.quantile(0.25)
print("RANGO (DIFERENCIA ENTRE MAXIMO Y MINIMO):")
print(rango)
print("\n")
print("RANGO INTERCUARTILICO:")
print(iqr)
print("\n")
#Coeficiente de variacion
cv = std / avg
print("COEFICIENTE DE VARIACION:")
print(cv)
print("\n")
#Analisis de la conectividad
#Outgoing (nodos que apuntan, hacia fuera)
outgoing = df_connectivity['outgoing']
outgoing_all = pd.concat([outgoing.value_counts(), outgoing.value_counts(normalize=True)], axis=1) #Todo
print(outgoing_all)
#Outgoing
ax = outgoing.hist(bins=250, figsize=(15,8))
ax.set_xlabel('Nº de outgoing')
ax.set_ylabel('Nº de sitios')
#Outgoing sin contar los que tiene 0 outgoing
outgoing_nozero = df_connectivity[df_connectivity['outgoing'] > 1]['outgoing']
ax = outgoing_nozero.hist(bins=250, figsize=(15,8))
ax.set_xlabel('Nº de outgoing')
ax.set_ylabel('Nº de sitios')
#Estadisticas de los outgoing
# https://blog.adrianistan.eu/estadistica-python-media-mediana-varianza-percentiles-parte-iii
outgoing_stats = df_connectivity['outgoing']
#Media
avg = outgoing_stats.mean()
print("MEDIA:")
print(avg)
print("\n")
#Mediana
median = outgoing_stats.median()
print("MEDIANA:")
print(median)
print("\n")
#Moda
mode = outgoing_stats.mode()
print("MODA:")
print(mode)
print("\n")
#Desviacion estandar
std = outgoing_stats.std(ddof=0)
print("DESVIACION ESTANDAR:")
print(std)
print("\n")
print("MINIMO:")
print(outgoing_stats.min())
print("MAXIMO:")
print(outgoing_stats.max())
print("\n")
#Rango e IQR
rango = outgoing_stats.max() - outgoing_stats.min()
iqr = outgoing_stats.quantile(0.75) - outgoing_stats.quantile(0.25)
print("RANGO (DIFERENCIA ENTRE MAXIMO Y MINIMO):")
print(rango)
print("\n")
print("RANGO INTERCUARTILICO:")
print(iqr)
print("\n")
#Coeficiente de variacion
cv = std / avg
print("COEFICIENTE DE VARIACION:")
print(cv)
print("\n")
#Top 10 sitios con mas outgoing
top_outgoing = df_site_conn.sort_values(by=['outgoing'], ascending=False).head(10).reset_index(drop=True)
top_outgoing
#Para generar grafos se utiliza la herramienta Gephi, para ello, se generan los ficheros de nodos y aristas
df_links_topoutgoing = pd.DataFrame()
#Buscamos relaciones entre los 10 sitios tops
for i in range(0,10):
for j in range(0,10):
df_links_topoutgoing = pd.concat([df_links_topoutgoing, df_links[(df_links['Target'] == top_outgoing['site'][i]) & (df_links['Source'] == top_outgoing['site'][j])]])
print(df_links_topoutgoing)
#Generamos los ficheros de nodos y aristas para Gephi
df_links_topoutgoing.to_csv(data_path + 'aristas_topoutgoing.csv',sep=',',index=False)
df_nodes = top_outgoing[['site','abbr']]
df_nodes = df_nodes.rename(columns={'site':'id','abbr':'Label'})
df_nodes.to_csv(data_path + 'nodos_topoutgoing.csv',sep=',',index=False)
#Relacion entre numero de paginas y outgoing
pages_outgoing = pd.concat([df_site_conn['pages'], df_site_conn['outgoing']], axis=1)
ax = pages_outgoing.plot.scatter(x='outgoing', y='pages', figsize=(15,8), facecolors='none', edgecolors='deepskyblue', alpha=0.2, s=100)
ax.set_xlabel('Nº de outgoing')
ax.set_ylabel('Nº de páginas')
top_pages_outgoing = df_site_conn.sort_values(by=['pages'], ascending=False).head(10)
top_pages_outgoing[['name', 'outgoing', 'pages']]
#Incoming (nodos apuntados, hacia dentro)
incoming = df_connectivity['incoming']
incoming_all = pd.concat([incoming.value_counts(), incoming.value_counts(normalize=True)], axis=1) #Todo
print(incoming_all)
print(incoming_all[incoming_all.index == 0])
#Incoming
ax = incoming.hist(bins=175, figsize=(15,8))
ax.set_xlabel('Nº de incoming')
ax.set_ylabel('Nº de sitios')
#Incoming sin contar los que tiene 0 incoming (innecesaria si hay pocos con 0s)
incoming_nozero = df_connectivity[df_connectivity['incoming'] > 1]['incoming']
ax = incoming_nozero.hist(bins=175, figsize=(15,8))
ax.set_xlabel('Nº de incoming')
ax.set_ylabel('Nº de sitios')
#Estadisticas de los incoming
# https://blog.adrianistan.eu/estadistica-python-media-mediana-varianza-percentiles-parte-iii
incoming_stats = df_connectivity['incoming']
#Media
avg = incoming_stats.mean()
print("MEDIA:")
print(avg)
print("\n")
#Mediana
median = incoming_stats.median()
print("MEDIANA:")
print(median)
print("\n")
#Moda
mode = incoming_stats.mode()
print("MODA:")
print(mode)
print("\n")
#Desviacion estandar
std = incoming_stats.std(ddof=0)
print("DESVIACION ESTANDAR:")
print(std)
print("\n")
print("MINIMO:")
print(incoming_stats.min())
print("MAXIMO:")
print(incoming_stats.max())
print("\n")
#Rango e IQR
rango = incoming_stats.max() - incoming_stats.min()
iqr = incoming_stats.quantile(0.75) - incoming_stats.quantile(0.25)
print("RANGO (DIFERENCIA ENTRE MAXIMO Y MINIMO):")
print(rango)
print("\n")
print("RANGO INTERCUARTILICO:")
print(iqr)
print("\n")
#Coeficiente de variacion
cv = std / avg
print("COEFICIENTE DE VARIACION:")
print(cv)
print("\n")
#Top 10 sitios con mas incoming
top_incoming = df_site_conn.sort_values(by=['incoming'], ascending=False).head(10).reset_index(drop=True)
top_incoming
#Top 10 sitios con menos incoming
bottom_incoming = df_site_conn.sort_values(by=['incoming'], ascending=True).head(10).reset_index(drop=True)
bottom_incoming
#Para generar grafos se utiliza la herramienta Gephi, para ello, se generan los ficheros de nodos y aristas
df_links_topincoming = pd.DataFrame()
#Buscamos relaciones entre los 10 sitios tops
for i in range(0,10):
for j in range(0,10):
df_links_topincoming = pd.concat([df_links_topincoming, df_links[(df_links['Target'] == top_incoming['site'][i]) & (df_links['Source'] == top_incoming['site'][j])]])
print(df_links_topincoming)
#Generamos los ficheros de nodos y aristas para Gephi
df_links_topincoming.to_csv(data_path + 'aristas_topincoming.csv',sep=',',index=False)
df_nodes = top_incoming[['site','abbr']]
df_nodes = df_nodes.rename(columns={'site':'id','abbr':'Label'})
df_nodes.to_csv(data_path + 'nodos_topincoming.csv',sep=',',index=False)
#Buscar sitios aislados
isolate_sites = df_site_conn[(df_site_conn['incoming'] <= 1) & (df_site_conn['outgoing'] == 0)]['name'].count()
some_conn = df_site_conn[(df_site_conn['incoming'] > 1) | (df_site_conn['outgoing'] > 0)]['name'].count()
compl_conn = df_site_conn[(df_site_conn['incoming'] > 1) & (df_site_conn['outgoing'] > 0)]['name'].count()
print("Aislados: ")
print(isolate_sites)
print("Algo conectados: ")
print(some_conn - compl_conn)
print("Completamente conectados: ")
print(compl_conn)
distr_conn = pd.DataFrame({'Tipo': ['Aislados', 'Algo conectados', 'Conectados'], 'Conectividad': [isolate_sites, some_conn - compl_conn, compl_conn]})
distr_conn.plot(kind='pie', y = 'Conectividad', labels = distr_conn['Tipo'], autopct='%1.1f%%', labeldistance=None, fontsize=14, figsize=(8,8)).legend(loc='upper right', bbox_to_anchor=(0.25,0.25))
#Grafo completo
#Para generar grafos se utiliza la herramienta Gephi, para ello, se generan los ficheros de nodos y aristas
#Generamos los ficheros de nodos y aristas para Gephi
df_links.to_csv(data_path + 'aristas_total.csv',sep=',',index=False)
df_nodes = df_site[['id','abbr']]
df_nodes = df_nodes.rename(columns={'abbr':'Label'})
df_nodes.to_csv(data_path + 'nodos_total.csv',sep=',',index=False)
#ANALISIS DEL CONTENIDO
#Algunos sitios destacados a partir de los datos analizados
#Mayor numero de paginas
top_pages = df_site.sort_values(by=['pages'], ascending=False).head(5).reset_index(drop=True)
top_pages
#Mayor numero de intentos de descubrimiento
top_trydiscovering = df_site_home.sort_values(by=['discovering_tries'], ascending=False).head(5).reset_index(drop=True)[['name', 'error_tries', 'discovering_tries', 'pages', 'duration', 'abbr', 'letters', 'words', 'images', 'title', 'site']]
top_trydiscovering
#Mayor numero de palabras
top_words = df_site_home.sort_values(by=['words'], ascending=False).head(5).reset_index(drop=True)[['name', 'error_tries', 'discovering_tries', 'pages', 'duration', 'abbr', 'letters', 'words', 'images', 'title', 'site']]
top_words
#Mayor numero de imagenes
top_images = df_site_home.sort_values(by=['images'], ascending=False).head(5).reset_index(drop=True)[['name', 'error_tries', 'discovering_tries', 'pages', 'duration', 'abbr', 'letters', 'words', 'images', 'title', 'site']]
top_images
#Mayor numero de outgoing
top_outgoing = df_site_conn.sort_values(by=['outgoing'], ascending=False).head(5).reset_index(drop=True)
top_outgoing
#Mayor numero de incoming
top_incoming = df_site_conn.sort_values(by=['incoming'], ascending=False).head(5).reset_index(drop=True)
top_incoming
#Menor numero de incoming
bottom_incoming = df_site_conn.sort_values(by=['incoming'], ascending=True).head(5).reset_index(drop=True)
bottom_incoming
#ANALISIS DE DATOS CON SpaCy
translator = Translator()
#Preparamos las keywords y las traducimos a cada idioma a analizar
#EXTREMISMO
keyword_extremismo = defaultdict(dict)
keyword_extremismo['en'] = ['terrorist', 'terrorism', 'qaeda', 'explosive', 'bomb', 'jihad', 'akbar']
keyword_extremismo['es'] = []
keyword_extremismo['fr'] = []
keyword_extremismo['de'] = []
keyword_extremismo['pl'] = []
#ARMAS
keyword_armas = defaultdict(dict)
keyword_armas['en'] = ['gun', 'firearm', 'rifle', 'hunting', 'ammunition', 'ammo', 'weapon']
keyword_armas['es'] = []
keyword_armas['fr'] = []
keyword_armas['de'] = []
keyword_armas['pl'] = []
#DROGAS
keyword_drogas = defaultdict(dict)
keyword_drogas['en'] = ['drug', 'pharmacy', 'viagra', 'cannabis', 'treatment', 'mdma', 'lsd', 'oxycodine', 'oxicotin', 'fentanyl', 'cocaine', 'ecstasy', 'methamphetamine', 'methadone']
keyword_drogas['es'] = []
keyword_drogas['fr'] = []
keyword_drogas['de'] = []
keyword_drogas['pl'] = []
#FINANZAS
keyword_finanzas = defaultdict(dict)
keyword_finanzas['en'] = ['laundering', 'offshore', 'counterfeit', 'credit', 'bank', 'passport', 'card', 'backdate']
keyword_finanzas['es'] = []
keyword_finanzas['fr'] = []
keyword_finanzas['de'] = []
keyword_finanzas['pl'] = []
#HACKING
#Se mantienen en ingles en todos los idiomas
keyword_hacking = defaultdict(dict)
keyword_hacking['en'] = ['leak', 'malware', 'ddos', 'exploit', 'google dork', 'virus']
keyword_hacking['es'] = ['leak', 'malware', 'ddos', 'exploit', 'google dork', 'virus']
keyword_hacking['fr'] = ['leak', 'malware', 'ddos', 'exploit', 'google dork', 'virus']
keyword_hacking['de'] = ['leak', 'malware', 'ddos', 'exploit', 'google dork', 'virus']
keyword_hacking['pl'] = ['leak', 'malware', 'ddos', 'exploit', 'google dork', 'virus']
#PORNO
keyword_porno = defaultdict(dict)
keyword_porno['en'] = ['newstar', 'tinymodel', 'child', 'zoophili', 'ls', 'star', 'women', 'beautiful', 'cutelovers', 'nymphet', 'anal', 'lolita', 'twink', 'teen']
keyword_porno['es'] = []
keyword_porno['fr'] = []
keyword_porno['de'] = []
keyword_porno['pl'] = []
#VIOLENCIA
keyword_violencia = defaultdict(dict)
keyword_violencia['en'] = ['instructions', 'handbook', 'murder', 'kill', 'hired']
keyword_violencia['es'] = []
keyword_violencia['fr'] = []
keyword_violencia['de'] = []
keyword_violencia['pl'] = []
#Translate the keywords into the other target languages
while True:
    try:
        for lang_dst in keyword_extremismo:
            if lang_dst != 'en':
                translated = translator.translate(keyword_extremismo['en'], src='en', dest=lang_dst)
                for trans in translated:
                    keyword_extremismo[lang_dst].append(trans.text)
                #print(f'{trans.origin} -> {trans.text}')
            if lang_dst != 'en':
                translated = translator.translate(keyword_armas['en'], src='en', dest=lang_dst)
                for trans in translated:
                    keyword_armas[lang_dst].append(trans.text)
                #print(f'{trans.origin} -> {trans.text}')
            if lang_dst != 'en':
                translated = translator.translate(keyword_drogas['en'], src='en', dest=lang_dst)
                for trans in translated:
                    keyword_drogas[lang_dst].append(trans.text)
                #print(f'{trans.origin} -> {trans.text}')
            if lang_dst != 'en':
                translated = translator.translate(keyword_finanzas['en'], src='en', dest=lang_dst)
                for trans in translated:
                    keyword_finanzas[lang_dst].append(trans.text)
                #print(f'{trans.origin} -> {trans.text}')
            #if lang_dst != 'en':
            #    translated = translator.translate(keyword_hacking['en'], src='en', dest=lang_dst)
            #    for trans in translated:
            #        keyword_hacking[lang_dst].append(trans.text)
            #print(f'{trans.origin} -> {trans.text}')
            if lang_dst != 'en':
                translated = translator.translate(keyword_porno['en'], src='en', dest=lang_dst)
                for trans in translated:
                    keyword_porno[lang_dst].append(trans.text)
                #print(f'{trans.origin} -> {trans.text}')
            if lang_dst != 'en':
                translated = translator.translate(keyword_violencia['en'], src='en', dest=lang_dst)
                for trans in translated:
                    keyword_violencia[lang_dst].append(trans.text)
                #print(f'{trans.origin} -> {trans.text}')
    except:
        print("Google Translator ReadTimeout error...")
        continue
    break
#Categorizacion de cada sitio a traves de keywords entre las palabras mas comunes
df_site_home_lan = df_site_home_lan.reset_index(drop=True)
len_df_site_home_lan = len(df_site_home_lan.index) #Numero de sitios a analizar
for i in range(0, len_df_site_home_lan):
#Comprobamos que el sitio no este categorizado
if len(df_site_home_lan['illicit_category'][i]) == 0:
#Informacion del estado del analisis
for_status = round(i*100/len_df_site_home_lan, 1)
if for_status%5 == 0:
print("En proceso... {}%".format(for_status))
#Configura variables segun el idioma del sitio
if df_site_home_lan['language'][i] == 'english':
nlp = spacy.load("en_core_web_sm")
lang_dst = 'en'
elif df_site_home_lan['language'][i] == 'spanish':
nlp = spacy.load("es_core_news_sm")
lang_dst = 'es'
elif df_site_home_lan['language'][i] == 'french':
nlp = spacy.load("fr_core_news_sm")
lang_dst = 'fr'
elif df_site_home_lan['language'][i] == 'german':
nlp = spacy.load("de_core_news_sm")
lang_dst = 'de'
elif df_site_home_lan['language'][i] == 'polish':
nlp = spacy.load("pl_core_news_sm")
lang_dst = 'pl'
else:
nlp = spacy.load("xx_ent_wiki_sm")
lang_dst = 'en'
nlp.max_length = 1905827 # or even higher
#print(df_site_home_lan['language'][i])
#print(df_site_home_lan['name'][i])
#print(df_site_home_lan['site'][i])
text = df_site_home_lan['title'][i] + " " + df_site_home_lan['text'][i]
doc = nlp(text)
words = [token.text for token in doc if token.is_stop != True and token.is_punct != True]
word_freq = Counter(words)
#Matriz donde x es la posicion en el top de apariciones e y puede ser 0 (palabra) o 1 (nº de apariciones)
common_words = word_freq.most_common(10)
#print(common_words)
#Introducimos las palabras en una lista y la ponemos en minusculas
list_words = [] #Lista de palabras
value_words = [] #Lista de valores
for j in range(0,len(common_words)):
list_words.append(common_words[j][0].lower())
value_words.append(common_words[j][1])
#Valores para la categorizacion
category_dict = {"Extremismo": 0, "Armas": 0, "Drogas": 0, "Finanzas": 0, "Hacking": 0, "Porno": 0, "Violencia": 0}
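        # Scoring: any of the 10 most common words that contains a category keyword
        # (substring match) adds that word's frequency to the category's score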
#EXTREMISMO
for key in keyword_extremismo[lang_dst]:
key_contains = [s for s in list_words if key in s]
for key2 in key_contains:
category_dict["Extremismo"] += value_words[list_words.index(key2)]
#ARMAS
for key in keyword_armas[lang_dst]:
key_contains = [s for s in list_words if key in s]
for key2 in key_contains:
category_dict["Armas"] += value_words[list_words.index(key2)]
#DROGAS
for key in keyword_drogas[lang_dst]:
key_contains = [s for s in list_words if key in s]
for key2 in key_contains:
category_dict["Drogas"] += value_words[list_words.index(key2)]
#FINANZAS
for key in keyword_finanzas[lang_dst]:
key_contains = [s for s in list_words if key in s]
for key2 in key_contains:
category_dict["Finanzas"] += value_words[list_words.index(key2)]
#HACKING
for key in keyword_hacking[lang_dst]:
key_contains = [s for s in list_words if key in s]
for key2 in key_contains:
category_dict["Hacking"] += value_words[list_words.index(key2)]
#PORNO
for key in keyword_porno[lang_dst]:
key_contains = [s for s in list_words if key in s]
for key2 in key_contains:
category_dict["Porno"] += value_words[list_words.index(key2)]
#VIOLENCIA
for key in keyword_violencia[lang_dst]:
key_contains = [s for s in list_words if key in s]
for key2 in key_contains:
category_dict["Violencia"] += value_words[list_words.index(key2)]
#print(category_dict)
#Buscamos la categoria más valorada
#Si no se ha encontrado ninguna categoria
if all(value == 0 for value in category_dict.values()):
category = "No ilicito"
category_value = 0
else:
category = max(category_dict, key=category_dict.get)
category_value = category_dict[category]
df_site_home_lan.at[i, 'illicit_category'] = category
df_site_home_lan.at[i, 'illicit_value'] = category_value
print("Proceso finalizado.")
# Vemos la distribucion de sitios por estado
total_illicit_content = df_site_home_lan['illicit_category'].value_counts()
total_illicit_content_all = pd.concat([total_illicit_content, df_site_home_lan['illicit_category'].value_counts(normalize=True)], axis=1)
print(total_illicit_content_all)
total_illicit_content.plot(kind='pie', autopct='%1.1f%%', labeldistance=None, fontsize=14, figsize=(8,8)).legend(loc='upper right', bbox_to_anchor=(0.25,0.25))
# Top 10 sites in the 'Porno' (porn) category
top_porno = df_site_home_lan[df_site_home_lan['illicit_category'] == 'Porno'].sort_values(by=['illicit_value'], ascending=False).head(10).reset_index(drop=True)[['name', 'error_tries', 'discovering_tries', 'pages', 'duration', 'abbr', 'letters', 'words', 'images', 'title', 'site', 'illicit_category', 'illicit_value']]
top_porno
# Top 10 sites in the 'Armas' (weapons) category
top_armas = df_site_home_lan[df_site_home_lan['illicit_category'] == 'Armas'].sort_values(by=['illicit_value'], ascending=False).head(10).reset_index(drop=True)[['name', 'error_tries', 'discovering_tries', 'pages', 'duration', 'abbr', 'letters', 'words', 'images', 'title', 'site', 'illicit_category', 'illicit_value']]
top_armas
# Top 10 sites in the 'Hacking' category
top_hacking = df_site_home_lan[df_site_home_lan['illicit_category'] == 'Hacking'].sort_values(by=['illicit_value'], ascending=False).head(10).reset_index(drop=True)[['name', 'error_tries', 'discovering_tries', 'pages', 'duration', 'abbr', 'letters', 'words', 'images', 'title', 'site', 'illicit_category', 'illicit_value']]
top_hacking
# Top 10 sites in the 'Violencia' (violence) category
top_violencia = df_site_home_lan[df_site_home_lan['illicit_category'] == 'Violencia'].sort_values(by=['illicit_value'], ascending=False).head(10).reset_index(drop=True)[['name', 'error_tries', 'discovering_tries', 'pages', 'duration', 'abbr', 'letters', 'words', 'images', 'title', 'site', 'illicit_category', 'illicit_value']]
top_violencia
# Top 10 sites in the 'Finanzas' (finance) category
top_finanzas = df_site_home_lan[df_site_home_lan['illicit_category'] == 'Finanzas'].sort_values(by=['illicit_value'], ascending=False).head(10).reset_index(drop=True)[['name', 'error_tries', 'discovering_tries', 'pages', 'duration', 'abbr', 'letters', 'words', 'images', 'title', 'site', 'illicit_category', 'illicit_value']]
top_finanzas
# Top 10 sites in the 'Extremismo' (extremism) category
top_extremismo = df_site_home_lan[df_site_home_lan['illicit_category'] == 'Extremismo'].sort_values(by=['illicit_value'], ascending=False).head(10).reset_index(drop=True)[['name', 'error_tries', 'discovering_tries', 'pages', 'duration', 'abbr', 'letters', 'words', 'images', 'title', 'site', 'illicit_category', 'illicit_value']]
top_extremismo
# Top 10 sites in the 'Drogas' (drugs) category
top_drogas = df_site_home_lan[df_site_home_lan['illicit_category'] == 'Drogas'].sort_values(by=['illicit_value'], ascending=False).head(10).reset_index(drop=True)[['name', 'error_tries', 'discovering_tries', 'pages', 'duration', 'abbr', 'letters', 'words', 'images', 'title', 'site', 'illicit_category', 'illicit_value']]
top_drogas
###Output
_____no_output_____ |
notebooks/data_exploration_irene.ipynb | ###Markdown
Are there specific car types that fail more often than others?
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# NOTE: `df` (the vehicle inspection dataset) is assumed to be loaded in an earlier,
# omitted cell, e.g. df = pd.read_csv(...) with the test-result columns used below.
sns.displot(data=df, x='GVW_TYPE', hue='OVERALL_RESULT', multiple='fill', discrete=True, stat='probability')
sns.displot(data=df, x='MODEL_YEAR', hue='OVERALL_RESULT', multiple='fill', discrete=True, stat='probability')
sns.displot(data=df, x='CYL', hue='OVERALL_RESULT', multiple='fill', discrete=True, stat='probability')
sns.displot(data=df, x='ENGINE_SIZE', hue='OVERALL_RESULT', multiple='fill', discrete=False, stat='probability')
sns.displot(data=df[df.ODOMETER < 500000], x="ODOMETER", hue="OVERALL_RESULT", multiple='fill', discrete=False, stat='probability')
sns.displot(data=df, x="VEHICLE_TYPE", hue="OVERALL_RESULT", multiple='fill', discrete=True, stat='probability')
sns.displot(data=df, x="ESC", hue="OVERALL_RESULT", multiple='fill', discrete=True, stat='probability')
###Output
_____no_output_____
###Markdown
Do the types of tests correlate?
###Code
# select columns to correlate
cols = ['E_HIGH_RPM', 'E_HIGH_CO2', 'E_HIGH_O2', 'E_HIGH_HC', 'E_HIGH_HC_DCF',
'E_HIGH_HC_LIMIT', 'E_HIGH_CO', 'E_HIGH_CO_DCF', 'E_HIGH_CO_LIMIT',
'E_IDLE_DCF', 'E_IDLE_RPM', 'E_IDLE_CO2', 'E_IDLE_O2', 'E_IDLE_HC',
'E_IDLE_HC_DCF', 'E_IDLE_HC_LIMIT', 'E_IDLE_CO', 'E_IDLE_CO_DCF',
'E_IDLE_CO_LIMIT', 'E_HIGH_DCF_2', 'E_HIGH_RPM_2', 'E_HIGH_CO2_2',
'E_HIGH_O2_2', 'E_HIGH_HC_2', 'E_HIGH_HC_DCF_2', 'E_HIGH_CO_2',
'E_HIGH_CO_DCF_2','E_IDLE_DCF_2','E_IDLE_RPM_2','E_IDLE_CO2_2',
'E_IDLE_O2_2','E_IDLE_HC_2', 'E_IDLE_HC_DCF_2', 'E_IDLE_CO_2',
'E_IDLE_CO_DCF_2'
]
df_cor = df[cols].copy()
# show some correlations
plt.matshow(df_cor.corr())
plt.show()
# only CO2 stuff
cols = ['E_HIGH_CO2', 'E_IDLE_CO2', 'E_HIGH_CO2_2', 'E_IDLE_CO2_2']
df_cor = df[cols].copy()
# show some correlations
plt.matshow(df_cor.corr())
plt.colorbar()
plt.show()
# only CO stuff
cols = ['E_HIGH_CO', 'E_IDLE_CO', 'E_HIGH_CO_2', 'E_IDLE_CO_2']
df_cor = df[cols].copy()
# show some correlations
plt.matshow(df_cor.corr())
plt.colorbar()
plt.show()
# only O2 stuff
cols = ['E_HIGH_O2', 'E_IDLE_O2', 'E_HIGH_O2_2', 'E_IDLE_O2_2']
df_cor = df[cols].copy()
# show some correlations
plt.matshow(df_cor.corr())
plt.colorbar()
plt.show()
# only HC stuff
cols = ['E_HIGH_HC', 'E_IDLE_HC', 'E_HIGH_HC_2', 'E_IDLE_HC_2']
df_cor = df[cols].copy()
# show some correlations
plt.matshow(df_cor.corr())
plt.colorbar()
plt.show()
# only first reading stuff
cols = ['E_HIGH_CO', 'E_IDLE_CO', 'E_HIGH_O2', 'E_IDLE_O2',
'E_HIGH_CO2', 'E_IDLE_CO2', 'E_HIGH_HC', 'E_IDLE_HC']
df_cor = df[cols].copy()
# show some correlations
plt.matshow(df_cor.corr())
plt.colorbar()
plt.show()
###Output
_____no_output_____ |
matplotlib/gallery_jupyter/lines_bars_and_markers/errorbar_subsample.ipynb | ###Markdown
Errorbar SubsampleDemo for the errorevery keyword, showing full-accuracy data plots with only a few error bars.
###Code
import numpy as np
import matplotlib.pyplot as plt
# example data
x = np.arange(0.1, 4, 0.1)
y1 = np.exp(-1.0 * x)
y2 = np.exp(-0.5 * x)
# example variable error bar values
y1err = 0.1 + 0.1 * np.sqrt(x)
y2err = 0.1 + 0.1 * np.sqrt(x/2)
# Now switch to a more OO interface to exercise more features.
fig, (ax_l, ax_c, ax_r) = plt.subplots(nrows=1, ncols=3,
sharex=True, figsize=(12, 6))
ax_l.set_title('all errorbars')
ax_l.errorbar(x, y1, yerr=y1err)
ax_l.errorbar(x, y2, yerr=y2err)
ax_c.set_title('only every 6th errorbar')
ax_c.errorbar(x, y1, yerr=y1err, errorevery=6)
ax_c.errorbar(x, y2, yerr=y2err, errorevery=6)
ax_r.set_title('second series shifted by 3')
ax_r.errorbar(x, y1, yerr=y1err, errorevery=(0, 6))
ax_r.errorbar(x, y2, yerr=y2err, errorevery=(3, 6))
fig.suptitle('Errorbar subsampling for better appearance')
plt.show()
###Output
_____no_output_____ |
Spring_2022_DeCal_Material/Homework/Week7/HW_7.ipynb | ###Markdown
Homework 7 This homework is all about useful external libraries that are most commonly used in astronomy research, and Object Oriented Programming. The two most important libraries apart from scipy, numpy, and matplotlib are **astropy** and **pandas**. We explore their basics here, followed by a nice problem involving creating Python objects Astropy (40 Points) CRAZY UNIT CONVERSION!!! (20 Points) As you take more astronomy classes, you will face more and more unit conversion problems - they are annoying. That's why astropy.units is very helpful. Let's do some practice here.The documentation for astropy.units and astropy.constants will be very helpful to you.astropy.units documentation: https://docs.astropy.org/en/stable/units/astropy.constants documentation: https://docs.astropy.org/en/stable/constants/NOTE: In this problem, you MUST use astropy.constants when doing calculations involving fundamental constants. Also, you cannot look up values such as solar mass, earth mass, etc. Use the two packages solely. Problem 1.1) Speed of light (5 Points)What is the speed of light ($c$) in $pc/yr$?
###Code
### Write your code here
###Output
_____no_output_____
###Markdown
Problem 1.2) Newton's 2nd Law (5 Points)Recall that NII states $$F =ma\,\,.$$Say a force of $97650134N$ is exerted on an object having a mass of $0.0071$ earth mass. What is the acceleration of the object in $AU/days^2$?
###Code
### Write your code here
###Output
_____no_output_____
###Markdown
Problem 1.3) Newton's Universal Law of Gravitation (10 Points)Recall that the gravitational acceleration due to an object with mass $m$ at a distance $r$ is given by $$a_g = \frac{Gm}{r^2}\,\,.$$What is the gravitational acceleration due to a planet of $3.1415926$ Jupiter-mass at a distance of $1.523AU$? Give your answer in $pc/yr^2$.
###Code
### Write your code here
###Output
_____no_output_____
###Markdown
Problem 1.4: Visualising Coordinate Transformation (20 Points) We introduced coordinate transformation using astropy, but maybe that was too astract to you, so let's use this problem as a way for you to visualise this process. Each part will be worth **5 Points**There are several things you need to do:1. Open up the FITS file named 'clusters.fits' (this part of the code is written for you already)2. Read it as a table using astropy.table (you will have to import the packages you need and write your own code from hereafter)3. Plot the positions of all the objects in the table, COLOUR-CODED by their types (there is a column named 'CLASS'), with RA on the x-axis and DEC on the y-axis. You should see a curved trend with a huge dip in the middle.4. Carry out a coordinate transformation from the ICRS coordinates to the galactic coordinates - there is a column named "DISTANCE" which you will need. 5. Now plot the position of all the objects in the galactic coordinates, with $\ell$ on the x-axis and $b$ on the y-axis; again, colour-code everything by their "CLASS". If you did everything correctly, you should see that the curve in the previous plot resembles a horizontal band. 6. Answer this question: What is that curved band in the first plot and the horizontal band in the second plot? Does it make sense that the band got straightened up? Why?Note: When you make your plots, please include the axis labels with units and the legend.
###Code
from astropy.io import fits
#You will have to import other packages to complete this problem
###IMPORT YOUR OTHER PACKAGES HERE
fits_file = fits.open('clusters.fits')
#To read the fits file as a table, simply run the line: Table.read(fits_file)
#Although you will have to write up your code to get that Table function
### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
(DOUBLE CLICK HERE TO ANSWER QUESTION 6):YOUR ANSWER: Pandas (30 Points)One of the most efficient and easy to use libraries for importing data files. We will explore the basics here.Let's import some data that represents the position of a ball being thrown off the roof of Campbell Hall. Using some basic kinematics we can derive the following equation.$$y(t) = -\frac{1}{2} g t^2 + v_{0,y} t + y_0$$For this problem we need to import our position measurements from our fellow colleagues in our research group. Problem 2.1 (5 Points)Your job for this problem is to simply read in the file named **"projectile.csv"** using the pandas library (DONT USE `numpy`). Print out your DataFrame so we can see what the data looks like as a table.
###Code
###YOUR CODE HERE###
###Output
_____no_output_____
###Markdown
Problem 2.2 (5 Points)Now load your DataFrame columns into numpy arrays and make a plot of Position vs. Time.
###Code
###YOUR CODE HERE###
###Output
_____no_output_____
###Markdown
Problem 2.3 (5 Points)In the last problem set we learned how to curve fit a quadratic equation. The above equation is also a quadratic equation with respect to time. Use what we learned last week to fit a curve to the noisy data from our fellow researchers. Explicitly print out what the initial velocity $v_{0,y}$ and initial height $y_0$ are based on your curve fit along with their respective errors.
###Code
###YOUR CODE HERE###
###Output
_____no_output_____
###Markdown
Problem 2.4 (5 Points)Alright, now we have a model function that describes the position as a function of time. Create two lists/arrays of values using this function. One list should hold the times, created with `t = np.linspace(0,5,100)`, and the other should hold your model's output at all those times (the values you would normally plot).Once you have created your two lists of values, construct a pandas DataFrame using these lists. Your data frame should have two columns with 100 values each.
###Code
###Your Code Here###
###Output
_____no_output_____
###Markdown
Problem 2.5 (10 Points)Last part of the problem set! This is basically one line of code. Export your new DataFrame to a csv file called **"trajectory.csv"**, this will be useful for your colleagues!
###Code
###Your Code Here###
###Output
_____no_output_____
###Markdown
Object Oriented Programming (30 Points) Problem 3.1 (10 Points)Create a "vector" class from scratch. Look at the lecture slides for how to write one from scratch. Your vector should be able to have a length calculation method, and a method for calculating the dot product as well as finding the angle between two vectors for example.
###Code
###Your Code Here###
###Output
_____no_output_____
###Markdown
Problem 3.2 (10 Points)Create a star class that uses vector objects as its position and velocity traits. This star class should also have a temperature trait. Then create two star objects with initial positions Vector(0,0) and Vector(80, 30). The initial velocities can be (0,0) for both; set star1's temperature to be 6000 K and star2's temperature to be 10000 K. Find the distance between the stars using the object traits and methods from both the star and vector classes.
###Code
###Your Code Here###
###Output
_____no_output_____
###Markdown
Problem 3.3 (10 Points)now edit your star class to have a method called `cool_down()` which changes the object's temperature the farther apart the two stars are. This cool_down method should cool with this form$$T_{new} = T_{old} e^{-\frac{|\mathbf{\Delta r}|}{R}}$$where R = 100 and $|\mathbf{\Delta r}|$ is the distance between two stars. Note that it doesn't return anything, but instead just updates the temperature value of BOTH the stars in question.
###Code
###Your Code Here###
###Output
_____no_output_____ |
pycity_base/examples/tutorials/tutorial_pycity_base.ipynb | ###Markdown
pycity_base TutorialThis is a tutorial on how to use pycity_base. pycity_base is a Python package for data handling and scenario generation of city districts and urban energy systems, developed by the Institute of Energy Efficient Buildings and Indoor Climate and the Institute of Automation of Complex Power Systems, E.ON Energy Research Center, RWTH Aachen University. Part 1: Buildings, apartments and loads
(Nearly) every object within pycity_base requires an environment. The environment object holds general data, which are valid for all objects within the city, such as time and weather data or market prices.
Thus, all objects point to the environment. Therefore, the first step is to generate an environment.
###Code
import pycity_base.classes.timer as Timer
import pycity_base.classes.weather as Weather
import pycity_base.classes.prices as Prices
import pycity_base.classes.environment as Env
# Generate timer object for environment
timer = Timer.Timer(time_discretization=3600, timesteps_total=8760)
# Timer object holds timestep, number of timesteps as well as
# forecast horizon
# Generate weather object
weather = Weather.Weather(timer)
# Weather object holds weather data, such as outdoor temperatures,
# direct and diffuse radiation
# Default TRY value is TRY2010_05_Jahr.dat
# (Test reference year 2010 for region 5 in Germany)
# Generate price object
price = Prices.Prices()
# Holding energy prices and subsidies
# Generate environment object
environment = Env.Environment(timer=timer, weather=weather, prices=price)
# Now we got an environment with timer, weather and price data
# Show current timestep
print('Time discretization in seconds:')
print(environment.timer.time_discretization)
# Show weather forecast for outdoor temperature (only extract the first 10 timestep values)
print('\nShow outdoor tempreature forecast:')
print(environment.weather.getWeatherForecast(getTAmbient=True)[0][:10])
###Output
Time discretization in seconds:
3600
Show outdoor tempreature forecast:
[3.6 2.1 1. 0.1 0. 0. 0.2 0.1 0.1 0. ]
###Markdown
After defining the environment, we are going to generate load objects for an apartment
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pycity_base.classes.demand.domestic_hot_water as DomesticHotWater
import pycity_base.classes.demand.electrical_demand as ElectricalDemand
import pycity_base.classes.demand.space_heating as SpaceHeating
# Generate space heating load object
space_heating = SpaceHeating.SpaceHeating(environment, method=1,
living_area=150,
specific_demand=100)
# Method 1 --> Use standardized load profile (SLP)
# Annual demand is calculated product of living_area and specific_demand
# Show space heating power curve in Watt
print('Space heating power curve in Watt:')
print(space_heating.get_power(currentValues=False))
# currentValues = False --> Show values for all timesteps
# (not only for forecast horizon)
# Plot curve
plt.plot(space_heating.get_power(currentValues=False))
plt.xlabel('Time in hours')
plt.ylabel('Thermal power in Watt (space heating)')
plt.title('Space heating power curve')
plt.show()
###Output
Space heating power curve in Watt:
[1894.03202642 1943.44155755 1992.85108867 ... 2483.68049022 1695.48051729
1236.16974171]
###Markdown
After generation a space heating load object, we will define an electrical load object
###Code
# Generate electrical load object
el_demand = ElectricalDemand.ElectricalDemand(environment,
method=1,
annual_demand=3000)
# Method 1 --> Use standardized load profile (SLP)
print('Electrical load in W:')
print(el_demand.get_power(currentValues=False))
###Output
Electrical load in W:
[316.028805 227.846295 174.312204 ... 552.381822 465.609927 354.624798]
###Markdown
Next, we generate a domestic hot water object, based on IEA Annex 42 data
###Code
# Generate domestic hot water object via Annex 42 data
dhw_annex42 = DomesticHotWater.DomesticHotWater(environment,
t_flow=60,
thermal=True,
method=1,
daily_consumption=70,
supply_temperature=25)
# Method 1 --> Use Annex 42 data
print('Hot water power load in W:')
print(dhw_annex42.get_power(currentValues=False, returnTemperature=False))
###Output
Hot water power load in W:
[ 0. 0. 0. ... 244.64611111 108.09944444
0. ]
###Markdown
Now we generate an apartment object and add the loads to the apartment.
###Code
import pycity_base.classes.demand.apartment as Apartment
# Initialize apartment object
apartment = Apartment.Apartment(environment)
# Add single entity to apartment
apartment.addEntity(space_heating)
# Add multiple entities to apartment
apartment.addMultipleEntities([el_demand, dhw_annex42])
el_power_curve = apartment.get_power_curves(getDomesticHotWater=False,
getSpaceHeating=False,
currentValues=False)[0]
print('El. power curve of apartment in Watt:')
print(el_power_curve)
# Plot curve
plt.plot(el_power_curve)
plt.xlabel('Time in number of 15 minute timesteps')
plt.ylabel('Electrical power in Watt')
plt.title('Electrical power of apartment')
plt.show()
###Output
El. power curve of apartment in Watt:
[316.028805 227.846295 174.312204 ... 552.381822 465.609927 354.624798]
###Markdown
Next, we going to generate a building object and add our apartment to it.
###Code
import pycity_base.classes.building as Building
from pycity_base.functions import change_resolution as chres
# Initialize building object
building = Building.Building(environment)
# Add apartment (with loads) to building object
building.addEntity(entity=apartment)
# Return space heating power curve from building
print('Show space heating power curve of building')
space_heat_curve = building.get_space_heating_power_curve()
print(space_heat_curve)
# Return el. power curve from building
print('Show el. power curve of building')
el_power_curve = building.get_electric_power_curve()
print(el_power_curve)
# Return hot water power curve from building
print('Show domestic hot water power curve of building')
dhw_power_curve = building.get_dhw_power_curve()
print(dhw_power_curve)
# Convert to identical timestep (of 3600 seconds)
el_power_curve_res = chres.changeResolution(el_power_curve, 900, 3600)
# Plot all load curves
plt.subplot(3, 1, 1)
plt.title('Load curves of building')
plt.plot(space_heat_curve)
plt.ylabel('Space heat. power in W')
plt.subplot(3, 1, 2)
plt.plot(el_power_curve_res)
plt.ylabel('El. power in W')
plt.subplot(3, 1, 3)
plt.plot(dhw_power_curve)
plt.ylabel('Hot water power in W')
plt.xlabel('Time in hours')
plt.show()
###Output
Show space heating power curve of building
[1894.03202642 1943.44155755 1992.85108867 ... 2483.68049022 1695.48051729
1236.16974171]
Show el. power curve of building
[316.028805 227.846295 174.312204 ... 552.381822 465.609927 354.624798]
Show domestic hot water power curve of building
[ 0. 0. 0. ... 244.64611111 108.09944444
0. ]
###Markdown
pycity_base is also able to generate stochastic user profiles (instead of using standardized profiles). The stochastic occupancy profiles can be used to generate stochastic el. load and hot water profiles.
###Code
import pycity_base.classes.demand.occupancy as Occupancy
# Generate stochastic occupancy object (with 3 occupants)
occupancy_object = Occupancy.Occupancy(environment, number_occupants=3)
# Extract occupancy profile
occupancy_profile = occupancy_object.occupancy
print('Occupancy profile:')
print(occupancy_profile)
print('Maximum number of occupants:')
print(np.max(occupancy_profile))
plt.plot(occupancy_object.occupancy[:200])
plt.ylabel('Number of active occupants')
plt.title('Occupancy profile')
plt.show()
###Output
Occupancy profile:
[0 0 0 ... 2 1 1]
Maximum number of occupants:
3
###Markdown
Based on the occupancy profile, we will generate a stochastic, el. load profile. This is going to take a couple of seconds.
###Code
# Generate stochastic, electrical load object (time intensive calculation!)
el_dem_stochastic = \
ElectricalDemand.ElectricalDemand(environment,
method=2,
total_nb_occupants=3,
randomize_appliances=True,
light_configuration=10,
occupancy=occupancy_object.occupancy)
# Get electric power curve
el_power_curve_stoch = el_dem_stochastic.get_power(currentValues=False)
print('Electric power curve in W:')
print(el_power_curve_stoch)
###Output
Electric power curve in W:
[ 47. 47. 47. ... 746.75463685 782.76275598
652.40803049]
###Markdown
Futhermore, we will generate a stochastic hot water power profile.
###Code
# Generate stochastic, domestic hot water object
dhw_stochastical = \
DomesticHotWater.DomesticHotWater(environment,
t_flow=60,
thermal=True,
method=2,
supply_temperature=20,
occupancy=occupancy_object.occupancy)
# Get dhw power curve
dhw_power_curve = dhw_stochastical.get_power(currentValues=False,
returnTemperature=False)
print('Hot water power curve in W:')
print(dhw_power_curve)
# Plot all load curves
plt.subplot(3, 1, 1)
plt.plot(occupancy_object.occupancy[:432])
plt.ylabel('Number of occupants')
plt.subplot(3, 1, 2)
plt.plot(el_power_curve_stoch[:4320])
plt.ylabel('El. power in W')
plt.subplot(3, 1, 3)
plt.plot(dhw_power_curve[:4320])
plt.ylabel('Hot water power in W')
plt.show()
###Output
_____no_output_____
###Markdown
Part 2: Building energy systems (BES)We learned how to set up the demand/load part of a building object in part 1. Now we will learn how to define building energy systems and add them to a building.The BES class is a 'container' for all kinds of building energy systems. The BES container can be added to the building object.
###Code
import pycity_base.classes.supply.building_energy_system as BES
import pycity_base.classes.supply.boiler as Boiler
# Initialize boiler object
boiler = Boiler.Boiler(environment, q_nominal=10000, eta=0.85)
# Initialize BES object
bes = BES.BES(environment)
# Add device (boiler) to BES
bes.addDevice(boiler)
# Use method getHasDevice to get info about boiler device
print('BES has boiler? (method getHasDevice): ', bes.getHasDevices(all_devices=False, boiler=True))
# Or directly access attribute has_boiler
print('BES has boiler? (attribute has_boiler): ', bes.has_boiler)
# If you like to access the boiler, you can it via BES attribute boiler, which holds the boiler object
print('bes.boiler attribute: ', bes.boiler)
print('bes.boiler.kind: ', bes.boiler[0].kind)
###Output
bes.boiler attribute: [<pycity_base.classes.supply.boiler.Boiler object at 0x00000282CBBF9160>]
###Markdown
The same options are available for any other energy system. First, you have to initialize the energy system (such as CHP, HP or PV). Second, you have to add it to the BES. There are only two exceptions: PV- and Windfarms can also directly be placed on nodes within the city graph (will be shown later, when dealing with city district object).Now we will add the BES to our building object
###Code
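# Illustrative sketch for the paragraph above (not part of the original tutorial): any further
# device is registered in exactly the same way. Here we simply create a second boiler object
# and add it to the same BES container, reusing only classes that were already imported.
backup_boiler = Boiler.Boiler(environment, q_nominal=5000, eta=0.8)
bes.addDevice(backup_boiler)
print('BES still has boiler devices? ', bes.getHasDevices(all_devices=False, boiler=True))

# Now add the BES (with its devices) to the building object: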
building.addEntity(entity=bes)
print('Does building have BES? ', building.has_bes)
print('Does building have apartment? ', building.has_apartment)
print('Does building have heating curve? ', building.has_heating_curve)
###Output
Does building have BES? True
Does building have apartment? True
Does building have heating curve? False
|
examples/notebooks/brml/chapter_01__probabilistic_reasoning.ipynb | ###Markdown
Bayesian Reasoning and Machine Learning 1.1 Probability Refresher 1.1.1 Interpreting Conditional Probability
###Code
from itertools import product

from pandas import DataFrame

# NOTE: the import path below is an assumption -- Discrete and Conditional are the
# discrete-probability classes used throughout this chapter (e.g. from the `probability`
# package); adjust the import to match your installation.
from probability.discrete import Discrete, Conditional

darts = Discrete.from_probs(
data={i: 1 / 20 for i in range(1, 21)},
variables='region'
)
darts.data
1 / 19
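# The next three expressions are three equivalent ways of computing
# P(region = 5 | region != 20): via .given(), via the ratio of the joint probability
# to the marginal, and directly (since region = 5 already implies region != 20).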
darts.given(region__ne=20).p(region=5)
darts.p(region=5, region__ne=20) / darts.p(region__ne=20)
darts.p(region=5) / darts.p(region__ne=20)
###Output
_____no_output_____
###Markdown
1.1.2 Probability Tables
###Code
country = Discrete.from_counts({
'england': 60_776_238,
'scotland': 5_116_900,
'wales': 2_980_700
}, 'country')
country.data
language__given__country = Conditional.from_probs(
data={
('english', 'england'): 0.95,
('english', 'scotland'): 0.7,
('english', 'wales'): 0.6,
('scottish', 'england'): 0.04,
('scottish', 'scotland'): 0.3,
('scottish', 'wales'): 0.0,
('welsh', 'england'): 0.01,
('welsh', 'scotland'): 0.0,
('welsh', 'wales'): 0.4,
},
joint_variables='language',
conditional_variables='country'
)
language__given__country.data
language__country = language__given__country * country
language__country.data
###Output
_____no_output_____
###Markdown
1.2 Probabilistic Reasoning Example 1.2
###Code
has_kj = Discrete.from_probs(data={
'yes': 1e-5,
'no': 1 - 1e-5
}, variables='has_kj')
has_kj.data
eats_hbs__given__has_kj = Conditional.from_probs({
('yes', 'yes'): 0.9,
('no', 'yes'): 0.1
},
joint_variables='eats_hbs',
conditional_variables='has_kj'
)
eats_hbs__given__has_kj.data
###Output
_____no_output_____
###Markdown
1)
###Code
eats_hbs = Discrete.from_probs({'yes': 0.5, 'no': 0.5}, variables='eats_hbs')
eats_hbs.data
has_kj__given__eats_hbs = eats_hbs__given__has_kj * has_kj / eats_hbs
has_kj__given__eats_hbs.data
has_kj__given__eats_hbs.p(has_kj='yes', eats_hbs='yes')
###Output
_____no_output_____
###Markdown
2)
###Code
eats_hbs = Discrete.from_probs({'yes': 0.001, 'no': 0.999}, variables='eats_hbs')
eats_hbs.data
has_kj__given__eats_hbs = eats_hbs__given__has_kj * has_kj / eats_hbs
has_kj__given__eats_hbs.data
has_kj__given__eats_hbs.p(has_kj='yes', eats_hbs='yes')
###Output
_____no_output_____
###Markdown
Example 1.3
###Code
butler = Discrete.from_probs({'yes': 0.6, 'no': 0.4}, variables='butler')
maid = Discrete.from_probs({'yes': 0.2, 'no': 0.8}, variables='maid')
butler__and__maid = butler * maid
butler__and__maid.data
knife__given__butler__and__maid = Conditional.from_probs(data={
('yes', 'no', 'no'): 0.3,
('yes', 'no', 'yes'): 0.2,
('yes', 'yes', 'no'): 0.6,
('yes', 'yes', 'yes'): 0.1,
('no', 'no', 'no'): 0.7,
('no', 'no', 'yes'): 0.8,
('no', 'yes', 'no'): 0.4,
('no', 'yes', 'yes'): 0.9,
},
joint_variables='knife_used',
conditional_variables=['butler', 'maid']
)
knife__given__butler__and__maid.data
butler__and__maid__and__knife = knife__given__butler__and__maid * butler__and__maid
butler__and__maid__and__knife.data
butler__given__knife = butler__and__maid__and__knife.given(knife_used='yes').p(butler='yes')
butler__given__knife
###Output
_____no_output_____
###Markdown
Example 1.4
###Code
occupied__given__alice__and__bob = Conditional.binary_from_probs({
(False, False): 1,
(False, True): 1,
(True, False): 1,
(True, True): 0,
},
joint_variable='occupied',
conditional_variables=['alice', 'bob']
)
occupied__given__alice__and__bob.data
alice__and__bob = Discrete.from_probs({
(False, False): 0.25,
(False, True): 0.25,
(True, False): 0.25,
(True, True): 0.25,
}, variables=['alice', 'bob'])
alice__and__bob.data
alice__and__bob__and__occupied = occupied__given__alice__and__bob * alice__and__bob
alice__and__bob__and__occupied.given(alice=True, occupied=True).p(bob=False)
###Output
_____no_output_____
###Markdown
Example 1.7 xor
###Code
xor = Conditional.from_probs(
data={
(1, 0, 0): 0,
(1, 0, 1): 1,
(1, 1, 0): 1,
(1, 1, 1): 0,
},
joint_variables='A_xor_B',
conditional_variables=['A', 'B']
)
xor.data
###Output
_____no_output_____
###Markdown
soft xor
###Code
c__given__a__and__b = Conditional.binary_from_probs(
data={
(0, 0): 0.1,
(0, 1): 0.99,
(1, 0): 0.8,
(1, 1): 0.25,
},
joint_variable='C',
conditional_variables=['A', 'B']
)
c__given__a__and__b.data
a = Discrete.binary(0.65, 'A')
a.data
b = Discrete.binary(0.77, 'B')
b.data
a__and__b = a * b
a__and__b.data
a__and__b__and__c = a__and__b * c__given__a__and__b
a__and__b__and__c.data
a__and__b__and__c.given(C=0).p(A=1)
###Output
_____no_output_____
###Markdown
1.3.1 Two dice : what were the individual scores?
###Code
t = Discrete.from_observations(
data = DataFrame({
't': [s_a + s_b
for s_a, s_b in product(range(1, 7), range(1, 7))]
})
)
t.data
s_a__s_b = Discrete.from_probs(
data = {
(a, b): 1 / 36
for a, b in product(range(1, 7), range(1, 7))
},
variables=['s_a', 's_b']
)
s_a__s_b.data.unstack('s_a')
t_9__given__s_a__s_b = Conditional.from_probs(
data={
(9, a, b): int(a + b == 9)
for a, b in product(range(1, 7), range(1, 7))
},
joint_variables=['t'],
conditional_variables=['s_a', 's_b']
)
t_9__given__s_a__s_b.data.stack('s_b')
t_9__s_a__s_b = t_9__given__s_a__s_b * s_a__s_b
t_9__s_a__s_b.data.unstack('s_a')
t_9 = t_9__s_a__s_b / t.p(t=9)
t_9.data.unstack('s_a')
###Output
_____no_output_____ |
GeoCodingApp.ipynb | ###Markdown
IntroductionThis script was developed for a COVID-19 related study at UW to help recovery address information from GeoIDs (latitude and longitude). Simply upload a csv file with two columns: "Location Latitude" and "Location Longitude". You may also wish to include additional columns such as a unique ID to link the results back to your dataset. I recommend against uploading any private information onto Google Colab. The best pracitice is to save a copy of this code into your own Google Drive. *Be aware of any laws governing how and where your dataset can be uploaded.For any questions please email us at: [email protected] ChangFounder and CEO, Kai Analytics and Survey Research Inc.Copyright (c) 2020 Kai Analytics and Survey Research Inc. LicensingMIT LicenseCopyright (c) 2020 Kai Analytics and Survey Research Inc.Permission is hereby granted, free of charge, to any person obtaining a copyof this software and associated documentation files (the "Software"), to dealin the Software without restriction, including without limitation the rightsto use, copy, modify, merge, publish, distribute, sublicense, and/or sellcopies of the Software, and to permit persons to whom the Software isfurnished to do so, subject to the following conditions:**The above copyright notice and this permission notice shall be included in allcopies or substantial portions of the Software.**THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS ORIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THEAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHERLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THESOFTWARE.
###Code
import pandas as pd
from google.colab import files
import io
# You need to upload your file using the menu tool bar on the left.
# Make sure the file name matches to the single quotes below.
df = pd.read_csv('covidCopingGeoID.csv')
# Let's check and see we've loaded our data properly.
# We do this by checking its first 10 rows.
pd.options.display.max_columns = None
display(df.head(10))
df['latlon'] = df["Location Latitude"].map(str)+ "," + df["Location Longitude"].map(str)
print(df['latlon'].head())
!pip install geopy
# This is the opensource geo coding package
# Depending on the size of your dataset, you might want to slow down the number
# of address searches per second via min_delay_seconds
from geopy.geocoders import Nominatim
from geopy.extra.rate_limiter import RateLimiter
locator = Nominatim(user_agent='myGeocoder')
#locator = Nominatim(user_agent="geoapiExercises")
rgeocode = RateLimiter(locator.reverse, min_delay_seconds=0.001)
# Define some city, county, and state lookup function
# You can probably combine theses into a bigger function but I thought it's
# easier to show you the steps broken down one by one
def city(coord):
location = locator.reverse(coord, exactly_one=True)
address = location.raw['address']
city = address.get('city', 'N/A')
return city
def county(coord):
location = locator.reverse(coord, exactly_one=True)
address = location.raw['address']
county = address.get('county', 'N/A')
return county
def state(coord):
location = locator.reverse(coord, exactly_one=True)
address = location.raw['address']
state = address.get('state', 'N/A')
return state
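# Illustrative refactor (an assumption, not part of the original script): the three lookup
# functions above differ only in the address key they extract, so one helper could replace
# them. Note that every call issues a separate Nominatim request; reusing a single reverse
# lookup per coordinate (or the rate-limited `rgeocode` defined above) would be gentler on
# the service.
def address_component(coord, key):
    location = locator.reverse(coord, exactly_one=True)
    address = location.raw['address']
    return address.get(key, 'N/A')

# e.g. df['city'] = df['latlon'].apply(lambda c: address_component(c, 'city'))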
# Populate city data
df['city'] = df['latlon'].apply(city)
# Populate county data
df['county'] = df['latlon'].apply(county)
# Populate state data
df['state'] = df['latlon'].apply(state)
#print(df['address'])
print(df)
from google.colab import files
df.to_csv('uwReverseGeocodeResults.csv', index=False)
files.download('uwReverseGeocodeResults.csv')
###Output
_____no_output_____ |
examples/.ipynb_checkpoints/README.md-checkpoint.ipynb | ###Markdown
Calculating distance matrix with mpi using cognet class 1. Initialize cognet class, either from model and dataformatter objects, or directly inputting data:
###Code
from cognet.cognet import cognet as cg
from cognet.model import model
from cognet.dataFormatter import dataFormatter
import pandas as pd
import numpy as np
data_ = dataFormatter(samples='examples_data/gss_2018.csv')
model_ = model()
model_.load("examples_data/gss_2018.joblib")
cognet_ = cg()
cognet_.load_from_model(model_, data_, 'all')
###Output
updating
###Markdown
2. For smaller and less intensive datasets, use cognet.distfunc_multiples
###Code
distance_matrix=cognet_.distfunc_multiples("examples_results/distfunc_multiples_testing.csv")
###Output
_____no_output_____
###Markdown
3. For larger and more intensive datasets, first call cognet.dmat_filewriter to write the necessary files.
###Code
cognet_.dmat_filewriter("GSS_cognet.py", "examples_data/gss_2018.joblib",
MPI_SETUP_FILE="GSS_mpi_setup.sh",
MPI_RUN_FILE="GSS_mpi_run.sh",
MPI_LAUNCHER_FILE="GSS_mpi_launcher.sh",
YEARS='2018',NODES=4,T=14)
###Output
_____no_output_____
###Markdown
4. Make any changes necessary to the run and setup scripts and pyfile, then call the run script in the terminal
###Code
from subprocess import call
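# Note: the generated run script may need execute permission first, e.g.:
# call(["chmod", "+x", "GSS_mpi_run.sh"])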
call(["./GSS_mpi_run.sh"])
###Output
_____no_output_____ |
source/2021/100Beginner/content/022_square_of_sum.ipynb | ###Markdown
第22讲 两个数和与差的平方 Problem 问题描述 使用`qianglib`库提供的方法,绘制方格坐标纸,其中坐标系的原点`(0,0)`位于绘图区的最左下方,使用的`scale`值为`20`. Use the methods provided in the library `qianglib`, draw a grid coordinate system where the origin `(0, 0)` is at the bottom left of the coordinate system. 1. 绘制一个边长为12各边平行于坐标轴的正方形,使得正方形左下角的顶点坐标为(7, 2)。画笔选择红色,画笔线的宽度为3。在正方形内部接近正中的位置书写一个字母”A“来表示这个正方形。 Draw a square with a side length of 12 and each side parallel to the coordinate axis; make sure that the vertex coordinates of the lower left corner of the square is (7, 2). Use "red" color and line width 3 to draw it. Write a letter "A" in the center of the square. 2. 绘制一个新的边长为8个边平行与坐标轴的正方形,使得该正方形左下角的顶点恰好为先前绘制的正方形右上角的顶点。环比选择蓝色,画笔线宽度为3。在正方形内部接近正中的位值书写一个字母”B“来表示这个正方形。 Draw a new square whose side length is 8 and all sides parallel to the coordinate axis; make sure that the vertex of the lower left corner of the square is exactly the vertex of the upper right corner of the square previously drawn. Use "blue" color and line width 3 to draw it. Write a letter "B" in the center of the square. 3. 绘制第三个各边平行于坐标轴的正方型,使得该正方形的左下角顶点与正方形A左下角的顶点重合,右上角顶点与正方形B右上角顶点重合。画笔选择黑色,画笔线宽度为2。Draw the third square with each side parallel to the coordinate axis; make sure that the vertex of the lower left corner of the new square coincides with the vertex of the lower left corner of square A, and the vertex of the upper right corner of the new square coincides with the vertex of the upper right corner of square B. Use color "black" and line width 2 to draw it. 4. 第三个正方形被分割为4个部分,分别为正方形A、B、以及正方形A的上方B的左侧和正方形A的右方B的下方的剩余的两块矩形区域,记作C和D并在图上标记C,D。the Third square is divided into 4 parts: square A, B, and rectangle above A, and rectangle below B, denoted as C and D, respectively. 完成后的图形应该如下图所示: The finished figure should look like the following: 5. 先根据第三个正方形的边长计算该正方形的面积,记作S1。再根据这个正方形由四块区域ABCD组成,计算这四块区域的面积的和,记作S2。S1的大小和S2的大小应该相等。First calculate the area of the third square based on the side length of it, denoted as S1. Then according to this square is composed of four areas A,B,C, and D, calculate the sum of the four areas, denoted as S2. S1 should be exactly equal to S2. 6. 尝试将正方形A和B的边长作如下每一行所示的修改,再重新计算正方形A,B以及第三个大正方形每一个顶点的位置。在计算的时候保持A的左下角坐标为(7, 2),A的右上角与B的左下角重合,大正方形恰好把AB包含在内。然后,直接根据大正方形的边长计算其面积,记为S1;随后计算对应的正方形A、B的面积以及矩形C、D的面积的和,记为S2。把相关结果填入下表。S1应该始终与S2相等。Try to modify the side lengths of squares A and B as shown in each row below, and then calculate the coordinate values of each vertexes of squares A, B and the third large square. During calculating, keep the coordinates of the lower left corner of A always as (7, 2), the upper right corner of A coincides with the lower left corner of B, and the big square happens to include square A and B. Then calculate the area S1 directly base on the the side length of the large square; calculate S2 as sum of the area of the corresponding squares A and B and rectangles C and D. Fill in the relevant results in the table below. Note: S1 should always be equal to S2.| Side A | Side B | S1 | A | B | C | D | S2||:---------:|:---------:|-----|---|---|---|----|---|| 12 | 6 | | | | | | || 10 | 6 | | | | | | || 4 | 8 | | | | | | || 9 | 1 | | | | | | || 8 | 8 | | | | | | | **Answer Area**
###Code
from turtle import setup, reset, pu, pd, bye, left, right, fd, bk, screensize
from turtle import goto, seth, write, ht, st, home, dot, pen, speed
from qianglib import prepare_paper, draw_grid, mark, lines, line, polygon, text
# from qianglib import square
width, height = 800, 600
setup(width, height, 0, 0)
prepare_paper(width, height, scale=20, min_x=0, min_y=0)
A1 = (7, 3) # start point A, lower bottom 左下角顶点
a, b = 12, 8 # side length
###Output
_____no_output_____
###Markdown
Solution 1
###Code
A2 = (A1[0]+a, A1[1]) # use A1, a to represent A2
A3 = (A1[0]+a, A1[1]+a) # use A1, a to represent A3
A4 = (A1[0], A1[1]+a) # use A1, a to represent A4
square_A = [A1, A2, A3, A4]
for point in square_A:
mark(point, color="red")
polygon(square_A, linewidth=3, color="red")
text((A1[0]-1, A1[1]+a/2-1), "a", color='red', font=('Arial', 20, 'normal'))
text((A1[0]+a/2, A1[1]-1.5), "a", color='red', font=('Arial', 20, 'normal'))
B1 = A3
B2 = (B1[0]+b, B1[1]) # use B1, b to represent B2
B3 = (B1[0]+b, B1[1]+b) # use B1, b to represent B2
B4 = (B1[0], B1[1]+b) # use B1, b to represent B2
square_B = [B1, B2, B3, B4]
for point in square_B:
mark(point, color="blue")
polygon(square_B, linewidth=3, color="blue")
text((B1[0]+b+0.5, B1[1]+b/2-1), "b", color='blue', font=('Arial', 20, 'normal'))
text((B1[0]+b/2, B1[1]+b), "b", color='blue', font=('Arial', 20, 'normal'))
C2 = (A1[0]+a+b, A1[1]) #
C4 = (A1[0], A1[1]+a+b) #
square_Big = [A1, C2, B3, C4]
for point in square_Big:
mark(point, color="black")
polygon(square_Big, linewidth=2, color="black")
text((A1[0]-4, A1[1]+(a+b)/2-1), "a+b", color='black', font=('Arial', 20, 'normal'))
text((A1[0]+(a+b)/2, A1[1]-2.5), "a+b", color='black', font=('Arial', 20, 'normal'))
text((A1[0]+(a+b)+0.5, A1[1]+a/2-1), "a", color='red', font=('Arial', 20, 'normal'))
text((A1[0]+a/2, A1[1]+(a+b)), "a", color='red', font=('Arial', 20, 'normal'))
text((B1[0]+b-(a+b)-1, B1[1]+b/2-1), "b", color='blue', font=('Arial', 20, 'normal'))
text((B1[0]+b/2, B1[1]+b-(a+b)-1.5), "b", color='blue', font=('Arial', 20, 'normal'))
center_A = (A1[0]+a/2, A1[1]+a/2-1)
text(center_A, "A=axa", align="center", color="red", font=('Arial', 20, 'normal'))
center_B = (B1[0]+b/2, B1[1]+b/2-1)
text(center_B, "B=bxb", align="center", color="blue", font=('Arial', 20, 'normal'))
center_C = (A2[0]+b/2, A2[1]+a/2-1)
text(center_C, "C=axb", align="center", color="black", font=('Arial', 20, 'normal'))
center_D = (A4[0]+a/2, A4[1]+b/2-1)
text(center_D, "D=bxa", align="center", color="black", font=('Arial', 20, 'normal'))
equation = '(a+b)^2 = a^2 + 2ab + b^2'
equation_pos = (A1[0]+(a+b)/2, B3[1]+2)
text(equation_pos, equation, align="center", color="black", font=('Arial', 30, 'bold'))
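# Quick numeric check of the identity illustrated above, using the side lengths a and b
assert (a + b) ** 2 == a ** 2 + 2 * a * b + b ** 2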
###Output
_____no_output_____
###Markdown
Solution 2
###Code
def square(center, side_length):
"""given a square center and side length, draw the square
给定正方形的中心点和边长,绘制这个正方形
1. calculate each vertex of the square 计算正方形的每一个顶点
2. mark each vertex 标记每一个顶点
3. draw the square 绘制这个正方形
4. text side 标记边长
5. text the square in center 在中心位置标记这个正方形
params
center: coordinates of the center point of a square,
tuple, example:(16, 11)
side_length: side length of a square,
float, example: 10.0
return
None
"""
    # step 1 compute the four vertices 第一步
    half = side_length / 2
    point_left_bottom = (center[0] - half, center[1] - half)   # lower-left vertex 左下角顶点坐标
    point_top_left = (center[0] - half, center[1] + half)      # upper-left vertex 左上角顶点坐标
    point_top_right = (center[0] + half, center[1] + half)     # upper-right vertex 右上角顶点坐标
    point_right_bottom = (center[0] + half, center[1] - half)  # lower-right vertex 右下角顶点坐标
    # the ordered vertices form a list that represents the square 有序的顶点构成一个列表代表正方形
    points = [point_left_bottom, point_right_bottom,
              point_top_right, point_top_left]
    # step 2 mark each vertex
    for point in points:
        mark(point, color="red")
    # step 3 draw the square
    polygon(points, linewidth=3, color="red")
    # step 4 label the side length below the bottom side
    text((center[0], center[1] - half - 1.5), str(side_length), align="center", color="red", font=('Arial', 20, 'normal'))
    # step 5 label the square at its center
    text((center[0], center[1] - 1), "square", align="center", color="red", font=('Arial', 20, 'normal'))
    return
center_A = (16, 11)
mark(center_A, "Center "+str(center_A), color="red")
side_length_A = 10
square(center_A, side_length_A)
def get_square(left_bottom, side_length):
square = []
x, y = left_bottom[0], left_bottom[1]
square.append(left_bottom)
square.append((x+side_length, y))
square.append((x+side_length, y+side_length))
square.append((x, y+side_length))
return square
def get_center(left_bottom, side_length):
x = left_bottom[0]+side_length/2
y = left_bottom[1]+side_length/2
return (x, y)
# SPA ("start point A") and the side lengths are not defined elsewhere in this notebook,
# so reuse the values from Solution 1 above.
SPA, side_A, side_B = A1, a, b
square_A = get_square(SPA, side_A)
print(square_A)
polygon(square_A, linewidth=3, color="red")
center_A = get_center(SPA, side_A)
text(center_A, "A", align="center", font=("Arial", 30, "normal"), color="red")
SPB = (SPA[0]+side_A, SPA[1]+side_A)
square_B = get_square(SPB, side_B)
print(square_B)
polygon(square_B, linewidth=3, color="blue")
center_B = get_center(SPB, side_B)
text(center_B, "B", align="center", font=("Arial", 30, "normal"), color="blue")
square_big = get_square(SPA, side_A+side_B)
polygon(square_big, linewidth=2, color="black")
pos_C = (center_A[0], SPB[1]+side_B/2)
text(pos_C, "C", align="center", font=("Arial", 30, "normal"), color="black")
pos_D = (center_B[0], center_A[1])
text(pos_D, "D", align="center", font=("Arial", 30, "normal"), color="black")
S1 = (side_A + side_B)**2
S2 = side_A**2 + side_B**2 + 2 * side_A * side_B
print(S1 == S2)
###Output
_____no_output_____ |
Scala Programming for Data Science/Data Science with Scala/Module 4: Fitting a Model/3.4.5.ipynb | ###Markdown
" 3.4.5 Evaluation Lesson ObjectivesAfter completing this lesson you should be able to:* Evaluate binary classification algorithms using area under the Receiver Operating Characteristic (ROC) curve* Evaluate multiclass classification and regression algorithms using several metrics* Evaluate logistic and linear regression algorithms using summariesEvaluatorsAfter training a model and making predictions for the test data it is time to evaluate the model.* An evaluator is a class that computes metrics from the predictions* There are three types of evaluators available: * `BinaryClassificationEvaluator` * `MultiClassClassificationEvaluator` * `RegressionEvaluator` Continuing from previous exampleIf you haven't downloaded the data set from the previous lesson then there is a link in the script to download it to your temporary folder and load it.
###Code
import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder().getOrCreate()
import spark.implicits._
import org.apache.spark.sql.functions._
import org.apache.spark.mllib.util.MLUtils.{
convertVectorColumnsFromML => fromML,
convertVectorColumnsToML => toML
}
import org.apache.spark.mllib.util.MLUtils
val data = toML(MLUtils.loadLibSVMFile(sc, "/resources/data/sample_libsvm_data.txt").toDF())
val Array(trainingData, testData) = data.randomSplit(Array(0.7, 0.3))
###Output
_____no_output_____
###Markdown
Example of Logistic RegressionNow we look at an example of binary classification using Logistic Regression. First I create a new instance of a Logistic Regression and set its parameters:* The maximum number of iterations* Regularization* Elastic Net
###Code
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.classification.BinaryLogisticRegressionSummary
val logr = new LogisticRegression().setMaxIter(10).setRegParam(0.3).setElasticNetParam(0.8)
val logrModel = logr.fit(trainingData)
println(s"Weights: ${logrModel.coefficients} Intercept: ${logrModel.intercept}")
###Output
_____no_output_____
###Markdown
summary
###Code
logrModel.summary.objectiveHistory
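// Illustrative sketch: the BinaryLogisticRegressionSummary imported above exposes richer
// training metrics; the cast below follows the usual Spark 2.x pattern (check it against
// your Spark version).
val binarySummary = logrModel.summary.asInstanceOf[BinaryLogisticRegressionSummary]
println(s"Training areaUnderROC: ${binarySummary.areaUnderROC}")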
###Output
_____no_output_____
###Markdown
BinaryClassificationEvaluatorLet's start with the `BinaryClassificationEvaluator`:* Evaluator for binary classification* Expects two input columns: **rawPrediction** and **label*** Supported metric: `areaUnderROC`As its name states, it is used to evaluate binary classifiers. It expects two input columns, the `rawPrediction` column and the label column. The only supported metric is the area under the ROC curve.This is an example of a Binary Classification Evaluator. I'm going to build upon the Logistic Regression model from the previous lesson and evaluate its predictions. First, I call the `transform` method on the test data to get a `DataFrame` with the predictions, which I name `predictionsLogR`:
###Code
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
val predictionsLogR = logrModel.transform(testData)
###Output
_____no_output_____
###Markdown
Then, I create a new instance of a `BinaryClassificationEvaluator` and set the corresponding columns as inputs and the metric name to the only available metric, `areaUnderROC`:
###Code
val evaluator = new BinaryClassificationEvaluator().setLabelCol("label").setRawPredictionCol("rawPrediction").setMetricName("areaUnderROC")
###Output
_____no_output_____
###Markdown
Now I can call the evaluator's evaluate method on the predictions made by the Logistic Regression to get its area under the ROC curve:
###Code
val roc = evaluator.evaluate(predictionsLogR)
###Output
_____no_output_____
###Markdown
MulticlassClassificationEvaluatorOften, there are more than two categories you can classify an item into. The Multi-Class Classification Evaluator is an evaluator for multi-class classification problems.* Expects two input columns: **prediction** and **label*** Supported metrics: * `f1` (default) * `accuracy` * `weightedPrecision` * `weightedRecall` Reusing RF Classification Example ITo show what a `Multiclass` Classification Evaluator can do, we will need a model that predicts more than the two categories handled by the Random Forest classifier we trained before. We will need to prepare the Pipeline for that model.This is the exact script we have run in previous sessions to set up Pipelines for Random Forests and Gradient-Boosting Trees:
###Code
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{StringIndexer, IndexToString, VectorIndexer}
val labelIndexer = new StringIndexer().setInputCol("label").setOutputCol("indexedLabel").fit(data)
val labelConverter = new IndexToString().setInputCol("prediction").setOutputCol("predictedLabel").setLabels(labelIndexer.labels)
val featureIndexer = new VectorIndexer().setInputCol("features").setOutputCol("indexedFeatures").setMaxCategories(4).fit(data)
import org.apache.spark.ml.classification.RandomForestClassifier
import org.apache.spark.ml.classification.RandomForestClassificationModel
val rfC = new RandomForestClassifier().setLabelCol("indexedLabel").setFeaturesCol("indexedFeatures").setNumTrees(3)
###Output
_____no_output_____
###Markdown
Reusing RF Classification Example II
###Code
import org.apache.spark.ml.Pipeline
// split into training and test data
val Array(trainingData, testData) = data.randomSplit(Array(0.7, 0.3))
val pipelineRFC = new Pipeline().setStages(Array(labelIndexer, featureIndexer, rfC, labelConverter))
val modelRFC = pipelineRFC.fit(trainingData)
val predictionsRFC = modelRFC.transform(testData)
###Output
_____no_output_____
###Markdown
All the rest is exactly the same as before, calling the `fit` method to get a model and calling the `transform` method to make predictions. The predictions are then returned in the `predictionsRFC` `DataFrame`. MulticlassClassificationEvaluatorNow an example of a Multi Class Evaluator. For this example, I can evaluate any of the multiclass classifiers I have trained so far, and I choose to evaluate the predictions made by the Random Forest Classifier, which I previously assigned to the `predictionsRFC` `DataFrame`.The true labels of the test set were in the indexed label column and the predictions made by the model were in its prediction column. So, I create a new instance of a `MulticlassClassificationEvaluator` and set the corresponding columns as inputs. Also, I set the metric to be **accuracy** instead of the default **F1-score**.
###Code
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
val evaluator = new MulticlassClassificationEvaluator().setLabelCol("indexedLabel").setPredictionCol("prediction").setMetricName("accuracy")
val accuracy = evaluator.evaluate(predictionsRFC)
println("Test Error = " + (1.0 - accuracy))
###Output
_____no_output_____
###Markdown
Now I can call the evaluator's evaluate method on the predictions made by the Random Forest Classifier to get the estimated accuracy, which here comes out to roughly 96.7% or, put in another way, about a 3.3% test error. RegressionEvaluator* Evaluator for regression problems* Expects two input columns: **prediction** and **label*** Supported metrics: * **rmse**: root mean squared error (default) * **mse**: mean squared error * **r2**: R2, the coefficient of determination * **mae**: mean absolute error Reusing RF Regression ExampleWe will reuse the Random Forest regression from the previous lesson. If you've come to this lesson directly and don't have the context, here is the code that produces the predictions we will evaluate:
###Code
import org.apache.spark.ml.regression.RandomForestRegressor
import org.apache.spark.ml.regression.RandomForestRegressionModel
val rfR = new RandomForestRegressor().setLabelCol("label").setFeaturesCol("indexedFeatures")
val pipelineRFR = new Pipeline().setStages(Array(featureIndexer, rfR))
val modelRFR = pipelineRFR.fit(trainingData)
val predictions = modelRFR.transform(testData)
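// Illustrative completion, mirroring the evaluators above: score the regression predictions
// with a RegressionEvaluator using the default RMSE metric.
import org.apache.spark.ml.evaluation.RegressionEvaluator
val evaluatorRFR = new RegressionEvaluator().setLabelCol("label").setPredictionCol("prediction").setMetricName("rmse")
val rmse = evaluatorRFR.evaluate(predictions)
println("Root Mean Squared Error (RMSE) on test data = " + rmse)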
###Output
_____no_output_____ |
finance/data_api/eod_histo_data.ipynb | ###Markdown
Get Exchanges
###Code
params = {'api_token': API_KEY}
exchanges = requests.get(url="https://eodhistoricaldata.com/api/exchanges-list/", params=params)
len(json.loads(exchanges.text))
json.loads(exchanges.text)
###Output
_____no_output_____
###Markdown
Get Symbols
###Code
EXCHANGE_CODE = 'LSE'
url=f"https://eodhistoricaldata.com/api/exchange-symbol-list/{EXCHANGE_CODE}?api_token={API_KEY}"
tickers = requests.get(url=url)
len(tickers.text.split("\n"))
tickers.text.split("\n")
type_counts = {}
for elem in tickers.text.split("\n"):
sub_elems = elem.split(",")
if len(sub_elems) == 7:
type_ = sub_elems[5]
if type_ in type_counts.keys():
type_counts[type_] += 1
else:
type_counts[type_] = 1
type_counts
exchange_counts = {}
for elem in tickers.text.split("\n"):
sub_elems = elem.split(",")
if len(sub_elems) == 7:
type_ = sub_elems[3]
if type_ in exchange_counts.keys():
exchange_counts[type_] += 1
else:
exchange_counts[type_] = 1
exchange_counts
###Output
_____no_output_____
###Markdown
Get Historical Quotes
###Code
FROM = "2022-02-23"
TO = "2022-03-23"
TICKER = "NVDA.US"
url = f"https://eodhistoricaldata.com/api/eod/{TICKER}?api_token={API_KEY}&from={FROM}&to={TO}"
quotes = requests.get(url=url)
# &fmt=json pd.read_json(quotes.text)
df = pd.read_csv(url, skipfooter=1)
df.head()
###Output
_____no_output_____
###Markdown
Split/Dividend Adjustment
###Code
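# The ratio k = Close / Adjusted_close is the per-day cumulative split/dividend adjustment
# factor; dividing Open/High/Low by k puts the whole OHLC row on the same adjusted basis as
# Adjusted_close, which then replaces the raw Close column below.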
df['k'] = df['Close'] / df['Adjusted_close']
df['Open'] = df['Open'] / df['k']
df['High'] = df['High'] / df['k']
df['Low'] = df['Low'] / df['k']
df.drop(['Close', 'k'], axis='columns', inplace=True)
df.rename(columns={"Adjusted_close": "Close"}, inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Get Dividends And Splits
###Code
url = f"https://eodhistoricaldata.com/api/div/{TICKER}?api_token={API_KEY}&from={FROM}&fmt=json"
dividends = requests.get(url=url)
dividends.json()
url = f"https://eodhistoricaldata.com/api/splits/{TICKER}?api_token={API_KEY}&from={FROM}"
splits = requests.get(url=url)
splits.text
###Output
_____no_output_____
###Markdown
Speedtest historical quotes
###Code
for TICKER in ["NVDA.US", "TSLA.US", "MSFT.US", "AMZN.US", "NFLX.US"]:
url = f"https://eodhistoricaldata.com/api/eod/{TICKER}?api_token={API_KEY}&from={FROM}&to={TO}"
quotes = requests.get(url=url)
# print("-", end="")
# url = f"https://eodhistoricaldata.com/api/div/{TICKER}?api_token={API_KEY}&from={FROM}&fmt=json"
# dividends = requests.get(url=url)
# print("-", end="")
# url = f"https://eodhistoricaldata.com/api/splits/{TICKER}?api_token={API_KEY}&from={FROM}"
# splits = requests.get(url=url)
print("-")
###Output
-
-
-
-
-
|
1.DA-introduction.ipynb | ###Markdown
Data Analysis with Python IntroductionWelcome!In this section, you will learn how to approach data acquisition in various ways, and obtain necessary insights from a dataset. By the end of this lab, you will successfully load the data into Jupyter Notebook, and gain some fundamental insights via Pandas Library. Table of Contents Data Acquisition Basic Insight of DatasetEstimated Time Needed: 10 min Data AcquisitionThere are various formats for a dataset, .csv, .json, .xlsx etc. The dataset can be stored in different places, on your local machine or sometimes online.In this section, you will learn how to load a dataset into our Jupyter Notebook.In our case, the Automobile Dataset is an online source, and it is in CSV (comma separated value) format. Let's use this dataset as an example to practice data reading. data source: https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data data type: csvThe Pandas Library is a useful tool that enables us to read various datasets into a data frame; our Jupyter notebook platforms have a built-in Pandas Library so that all we need to do is import Pandas without installing.
###Code
# import pandas library
import pandas as pd
###Output
_____no_output_____
###Markdown
Read DataWe use pandas.read_csv() function to read the csv file. In the bracket, we put the file path along with a quotation mark, so that pandas will read the file into a data frame from that address. The file path can be either an URL or your local file address.Because the data does not include headers, we can add an argument headers = None inside the read_csv() method, so that pandas will not automatically set the first row as a header.You can also assign the dataset to any variable you create.
###Code
# Import pandas library
import pandas as pd
# Read the online file by the URL provided above, and assign it to variable "df"
other_path = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/auto.csv"
df = pd.read_csv(other_path, header=None)
###Output
_____no_output_____
###Markdown
After reading the dataset, we can use the dataframe.head(n) method to check the top n rows of the dataframe; where n is an integer. Contrary to dataframe.head(n), dataframe.tail(n) will show you the bottom n rows of the dataframe.
###Code
# show the first 5 rows using dataframe.head() method
print("The first 5 rows of the dataframe")
df.head(5)
###Output
_____no_output_____
###Markdown
Question 1: check the bottom 10 rows of data frame "df".
###Code
df.tail(10)
###Output
_____no_output_____
###Markdown
Question 1 Answer: Run the code below for the solution! Double-click here for the solution.<!-- The answer is below:print("The last 10 rows of the dataframe\n")df.tail(10)--> Add Headers Take a look at our dataset; pandas automatically set the header using integers starting from 0. To better describe our data, we can introduce a header; this information is available at: https://archive.ics.uci.edu/ml/datasets/Automobile Thus, we have to add headers manually. Firstly, we create a list "headers" that includes all column names in order. Then, we use dataframe.columns = headers to replace the headers with the list we created.
###Code
# create headers list
headers = ["symboling","normalized-losses","make","fuel-type","aspiration", "num-of-doors","body-style",
"drive-wheels","engine-location","wheel-base", "length","width","height","curb-weight","engine-type",
"num-of-cylinders", "engine-size","fuel-system","bore","stroke","compression-ratio","horsepower",
"peak-rpm","city-mpg","highway-mpg","price"]
print("headers\n", headers)
###Output
_____no_output_____
###Markdown
We replace headers and recheck our data frame
###Code
df.columns = headers
df.head(10)
###Output
_____no_output_____
###Markdown
we can drop missing values along the column "price" as follows
###Code
df.dropna(subset=["price"], axis=0)  # note: without inplace=True (or re-assigning the result) df itself is unchanged
###Output
_____no_output_____
###Markdown
Now, we have successfully read the raw dataset and added the correct headers to the data frame. Question 2: Find the names of the columns of the dataframe
###Code
# Write your code below and press Shift+Enter to execute
###Output
_____no_output_____
###Markdown
Double-click here for the solution.<!-- The answer is below:print(df.columns)--> Save Dataset Correspondingly, Pandas enables us to save the dataset to csv by using the dataframe.to_csv() method; you can add the file path and name, within quotation marks, in the brackets. For example, if you would like to save the dataframe df as automobile.csv to your local machine, you may use the syntax below:
###Code
df.to_csv("automobile.csv", index=False)
###Output
_____no_output_____
###Markdown
We can also read and save other file formats; we can use functions similar to **`pd.read_csv()`** and **`df.to_csv()`** for other data formats. The functions are listed in the following table: Read/Save Other Data Formats| Data Format | Read | Save || ------------- |:--------------:| ----------------:|| csv | `pd.read_csv()` |`df.to_csv()` || json | `pd.read_json()` |`df.to_json()` || excel | `pd.read_excel()`|`df.to_excel()` || hdf | `pd.read_hdf()` |`df.to_hdf()` || sql | `pd.read_sql()` |`df.to_sql()` || ... | ... | ... | Basic Insight of Dataset After reading data into a Pandas dataframe, it is time for us to explore the dataset. There are several ways to obtain essential insights into the data to help us better understand our dataset. Data Types Data has a variety of types. The main types stored in Pandas dataframes are object, float, int, bool and datetime64. In order to better learn about each attribute, it is always good for us to know the data type of each column. In Pandas:
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
returns a Series with the data type of each column.
###Code
# check the data type of data frame "df" by .dtypes
print(df.dtypes)
###Output
_____no_output_____
###Markdown
As a result, as shown above, it is clear that the data types of "symboling" and "curb-weight" are int64, "normalized-losses" is object, and "wheel-base" is float64, etc. These data types can be changed; we will learn how to accomplish this in a later module. Describe If we would like to get a statistical summary of each column, such as the count, column mean value, column standard deviation, etc., we use the describe method:
###Code
dataframe.describe()  # generic syntax -- replace "dataframe" with your own DataFrame variable (e.g. df)
###Output
_____no_output_____
###Markdown
This method will provide various summary statistics, excluding NaN (Not a Number) values.
###Code
df.describe()
###Output
_____no_output_____
###Markdown
This shows the statistical summary of all numeric-typed (int, float) columns. For example, the attribute "symboling" has 205 counts, the mean value of this column is 0.83, the standard deviation is 1.25, the minimum value is -2, the 25th percentile is 0, the 50th percentile is 1, the 75th percentile is 2, and the maximum value is 3. However, what if we would also like to check all the columns, including those that are of type object? You can add the argument include = "all" inside the brackets. Let's try it again.
###Code
# describe all the columns in "df"
df.describe(include = "all")
###Output
_____no_output_____
###Markdown
Now, it provides the statistical summary of all the columns, including object-typed attributes. We can now see how many unique values there are, which value is the most frequent (top), and the frequency of the top value in the object-typed columns. Some values in the table above show as "NaN"; this is because those numbers are not available for a particular column type. Question 3: You can select the columns of a data frame by indicating the name of each column; for example, you can select three columns as follows: dataframe[['column 1', 'column 2', 'column 3']] Where "column" is the name of the column, you can apply the method ".describe()" to get the statistics of those columns as follows: dataframe[['column 1', 'column 2', 'column 3']].describe() Apply the method ".describe()" to the columns 'length' and 'compression-ratio'.
###Code
# Write your code below and press Shift+Enter to execute
###Output
_____no_output_____
###Markdown
Double-click here for the solution.<!-- The answer is below:df[['length', 'compression-ratio']].describe()--> Info Another method you can use to check your dataset is:
###Code
dataframe.info()  # generic syntax -- replace "dataframe" with your own DataFrame variable (e.g. df)
###Output
_____no_output_____
###Markdown
It provides a concise summary of your DataFrame.
###Code
# look at the info of "df"
df.info()
###Output
_____no_output_____ |
IllinoisGRMHD-Trusted/doc/Tutorial-IllinoisGRMHD__outer_boundaries.ipynb | ###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); Tutorial-IllinoisGRMHD: outer_boundaries.C Authors: Leo Werneck & Zach Etienne**This module is currently under development** In this tutorial module we explain the outer boundary conditions imposed on the quantities evolved within `IllinoisGRMHD` Required and recommended citations:* **(Required)** Etienne, Z. B., Paschalidis, V., Haas R., Mösta P., and Shapiro, S. L. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes. Class. Quantum Grav. 32 (2015) 175009. ([arxiv:1501.07276](http://arxiv.org/abs/1501.07276)).* **(Required)** Noble, S. C., Gammie, C. F., McKinney, J. C., Del Zanna, L. Primitive Variable Solvers for Conservative General Relativistic Magnetohydrodynamics. Astrophysical Journal, 641, 626 (2006) ([astro-ph/0512420](https://arxiv.org/abs/astro-ph/0512420)).* **(Recommended)** Del Zanna, L., Bucciantini N., Londrillo, P. An efficient shock-capturing central-type scheme for multidimensional relativistic flows - II. Magnetohydrodynamics. A&A 400 (2) 397-413 (2003). DOI: 10.1051/0004-6361:20021641 ([astro-ph/0210618](https://arxiv.org/abs/astro-ph/0210618)). Table of Contents$$\label{toc}$$This module is organized as follows0. [Step 0](src_dir): **Source directory creation**1. [Step 1](introduction): **Introduction**1. [Step 2](outer_boundaries__c): **`outer_boundaries.C`** 1. [Step 2.a](outer_boundaries__amu): *The vector potential variables* 1. [Step 2.a.i](outer_boundaries__amu__linear_extrapolation): Defining the linear extrapolation operators 1. [Step 2.a.ii](outer_boundaries__amu__applying_bcs): Applying outer boundary conditions to $A_{\mu}$ 1. [Step 2.b](outer_boundaries__hydro_vars): *The hydrodynamic variables* 1. [Step 2.b.i](outer_boundaries__hydro_vars__zero_deriv_outflow): Defining the zero derivative, outflow operators 1. [Step 2.b.ii](outer_boundaries__hydro_vars__applying_bcs): Applying boundary conditions to $\left\{P,\rho_{b},v^{i}\right\}$ 1. [Step 2.c](outer_boundaries__conservatives): *The conservative variables*1. [Step 3](code_validation): **Code validation**1. [Step 4](latex_pdf_output): **Output this notebook to $\LaTeX$-formatted PDF file** Step 0: Source directory creation \[Back to [top](toc)\]$$\label{src_dir}$$We will now use the [cmdline_helper.py NRPy+ module](Tutorial-Tutorial-cmdline_helper.ipynb) to create the source directory within the `IllinoisGRMHD` NRPy+ directory, if it does not exist yet.
###Code
# Step 0: Creation of the IllinoisGRMHD source directory
# Step 0a: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os,sys
nrpy_dir_path = os.path.join("..","..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
# Step 0b: Load up cmdline_helper and create the directory
import cmdline_helper as cmd
IGM_src_dir_path = os.path.join("..","src")
cmd.mkdir(IGM_src_dir_path)
# Step 0c: Create the output file path
outfile_path__outer_boundaries__C = os.path.join(IGM_src_dir_path,"outer_boundaries.C")
###Output
_____no_output_____
###Markdown
Step 1: Introduction \[Back to [top](toc)\]$$\label{introduction}$$ Step 2: `outer_boundaries.C` \[Back to [top](toc)\]$$\label{outer_boundaries__c}$$The strategy used to set outer boundary for the primitives, $\left\{P,\rho_{b},v^{i}\right\}$, and for the scalar and vector potentials, $\left\{\left[\sqrt{\gamma}\Phi\right],A_{i}\right\}$, follows eqs. (39) and (40) of the [original release paper of IllinoisGRMHD](https://arxiv.org/pdf/1501.07276.pdf). For example, if we are trying to apply boundary condition along the $x$-direction, we would have$$\boxed{E_{i+1}=\left\{\begin{align}E_{i}\ , &{\rm\ if\ } E\in\left\{P,\rho_{b},v^{y},v^{z}\right\},{\rm\ or\ } E=v^{x}\ {\rm and\ } v^{x}\geq0\\0\ , &{\rm\ if\ } E=v^{x}\ {\rm and\ } v^{x}<0\\2E_{i} - E_{i-1}\ , &{\rm\ if\ } E\in\left\{\left[\sqrt{\gamma}\Phi\right],A_{x},A_{y},A_{z}\right\}\end{align}\right.}\ ,$$for the ghostzone points along the *positive* $x$-direction, and$$\boxed{E_{i-1}=\left\{\begin{align}E_{i}\ , &{\rm\ if\ } E\in\left\{P,\rho_{b},v^{y},v^{z}\right\},{\rm\ or\ } E=v^{x}\ {\rm and\ } v^{x}\geq0\\0\ , &{\rm\ if\ } E=v^{x}\ {\rm and\ } v^{x}<0\\2E_{i} - E_{i+1}\ , &{\rm\ if\ } E\in\left\{\left[\sqrt{\gamma}\Phi\right],A_{x},A_{y},A_{z}\right\}\end{align}\right.}\ ,$$for the ghostzone points along the *negative* $x$-direction.In this way, linear extrapolation outer boundary conditions are applied to the vector potential variables $\left\{\left[\sqrt{\gamma}\Phi\right],A_{i}\right\}$ and zero-derivative, outflow outer boundary conditions are applied to the hydrodynamic variables $\left\{P,\rho_{b},v^{i}\right\}$. Step 2.a: The vector potential variables \[Back to [top](toc)\]$$\label{outer_boundaries__amu}$$ Step 2.a.i: Defining the linear extrapolation operators \[Back to [top](toc)\]$$\label{outer_boundaries__amu__linear_extrapolation}$$We start by applying outer boundary conditions to $\left\{\left[\sqrt{\gamma}\Phi\right],A_{i}\right\}$. We follow the prescription described above:$$\boxed{\begin{align}\text{Positive direction: }E_{i+1} = 2E_{i} - E_{i-1}\ , &{\rm\ if\ } E\in\left\{\left[\sqrt{\gamma}\Phi\right],A_{x},A_{y},A_{z}\right\}\\\text{Negative direction: }E_{i-1} = 2E_{i} - E_{i+1}\ , &{\rm\ if\ } E\in\left\{\left[\sqrt{\gamma}\Phi\right],A_{x},A_{y},A_{z}\right\}\end{align}}\ ,$$which uses a linear extrapolation outer boundary condition.
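Before generating the C macros below, here is a minimal NumPy sketch of this ghost-zone fill in 1D (illustration only -- it is not part of the generated thorn source, and the array and variable names are made up):
###Code
import numpy as np

NG = 3                                        # number of ghost zones on each side
E  = np.zeros(16)
E[NG:-NG] = np.linspace(1.0, 2.0, 16 - 2*NG)  # interior values only

# Positive direction: E_{i+1} = 2 E_i - E_{i-1}, filled outward
for i in range(len(E) - NG, len(E)):
    E[i] = 2.0*E[i-1] - E[i-2]

# Negative direction: E_{i-1} = 2 E_i - E_{i+1}, filled outward
for i in range(NG - 1, -1, -1):
    E[i] = 2.0*E[i+1] - E[i+2]

print(E)
###Output
_____no_output_____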
###Code
%%writefile $outfile_path__outer_boundaries__C
/*******************************************************
* Outer boundaries are handled as follows:
* (-1) Update RHS quantities, leave RHS quantities zero on all outer ghostzones (including outer AMR refinement, processor, and outer boundaries)
* ( 0) Let MoL update all evolution variables
* ( 1) Apply outer boundary conditions (BCs) on A_{\mu}
* ( 2) Compute B^i from A_i everywhere, synchronize B^i
* ( 3) Call con2prim to get primitives on interior pts
* ( 4) Apply outer BCs on {P,rho_b,vx,vy,vz}.
* ( 5) (optional) set conservatives on outer boundary.
*******************************************************/
#include "cctk.h"
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include "cctk_Arguments.h"
#include "cctk_Parameters.h"
#include "IllinoisGRMHD_headers.h"
#include "IllinoisGRMHD_EoS_lowlevel_functs.C"
#include "inlined_functions.C"
#define IDX(i,j,k) CCTK_GFINDEX3D(cctkGH,(i),(j),(k))
#define XMAX_OB_LINEAR_EXTRAP(FUNC,imax) for(int k=0;k<cctk_lsh[2];k++) for(int j=0;j<cctk_lsh[1];j++) FUNC[IDX(imax,j,k)] = 2.0 * FUNC[IDX(imax-1,j,k)] - FUNC[IDX(imax-2,j,k)];
#define YMAX_OB_LINEAR_EXTRAP(FUNC,jmax) for(int k=0;k<cctk_lsh[2];k++) for(int i=0;i<cctk_lsh[0];i++) FUNC[IDX(i,jmax,k)] = 2.0 * FUNC[IDX(i,jmax-1,k)] - FUNC[IDX(i,jmax-2,k)];
#define ZMAX_OB_LINEAR_EXTRAP(FUNC,kmax) for(int j=0;j<cctk_lsh[1];j++) for(int i=0;i<cctk_lsh[0];i++) FUNC[IDX(i,j,kmax)] = 2.0 * FUNC[IDX(i,j,kmax-1)] - FUNC[IDX(i,j,kmax-2)];
#define XMIN_OB_LINEAR_EXTRAP(FUNC,imin) for(int k=0;k<cctk_lsh[2];k++) for(int j=0;j<cctk_lsh[1];j++) FUNC[IDX(imin,j,k)] = 2.0 * FUNC[IDX(imin+1,j,k)] - FUNC[IDX(imin+2,j,k)];
#define YMIN_OB_LINEAR_EXTRAP(FUNC,jmin) for(int k=0;k<cctk_lsh[2];k++) for(int i=0;i<cctk_lsh[0];i++) FUNC[IDX(i,jmin,k)] = 2.0 * FUNC[IDX(i,jmin+1,k)] - FUNC[IDX(i,jmin+2,k)];
#define ZMIN_OB_LINEAR_EXTRAP(FUNC,kmin) for(int j=0;j<cctk_lsh[1];j++) for(int i=0;i<cctk_lsh[0];i++) FUNC[IDX(i,j,kmin)] = 2.0 * FUNC[IDX(i,j,kmin+1)] - FUNC[IDX(i,j,kmin+2)];
###Output
Overwriting ../src/outer_boundaries.C
###Markdown
Step 2.a.ii: Applying outer boundary conditions to $A_{\mu}$ \[Back to [top](toc)\]$$\label{outer_boundaries__amu__applying_bcs}$$Now we apply boundary conditions to $A_{\mu}$. The code below is pretty straightforward, but it is useful to understand the following `cctk` variables (refer e.g. to page A85/A264 of the [Cactus Reference Manual](https://cactuscode.org/documentation/ReferenceManual.pdf)):1. `cctk_lsh[i]`: the number of *total* number of grid points along direction $x^{i}$, used *by each processor*.2. `cctk_bbox[i]`: an array of integers that tell if the boundary gridpoints used by each processor are *internal* (i.e. artificial) or *physical* (i.e. actual boundary points). The variable follows the pattern: 1. `cctk_bbox[0]`: **Direction**: $x$ | **Orientation**: $+$ | Returns $\color{red}{0}$ if the boundary is $\color{red}{\text{artificial}}$ and $\color{blue}{1}$ if it is $\color{blue}{\text{physical}}$ 1. `cctk_bbox[1]`: **Direction**: $x$ | **Orientation**: $-$ | Returns $\color{red}{0}$ if the boundary is $\color{red}{\text{artificial}}$ and $\color{blue}{1}$ if it is $\color{blue}{\text{physical}}$ 1. `cctk_bbox[2]`: **Direction**: $y$ | **Orientation**: $+$ | Returns $\color{red}{0}$ if the boundary is $\color{red}{\text{artificial}}$ and $\color{blue}{1}$ if it is $\color{blue}{\text{physical}}$ 1. `cctk_bbox[3]`: **Direction**: $y$ | **Orientation**: $-$ | Returns $\color{red}{0}$ if the boundary is $\color{red}{\text{artificial}}$ and $\color{blue}{1}$ if it is $\color{blue}{\text{physical}}$ 1. `cctk_bbox[4]`: **Direction**: $z$ | **Orientation**: $+$ | Returns $\color{red}{0}$ if the boundary is $\color{red}{\text{artificial}}$ and $\color{blue}{1}$ if it is $\color{blue}{\text{physical}}$ 1. `cctk_bbox[5]`: **Direction**: $z$ | **Orientation**: $-$ | Returns $\color{red}{0}$ if the boundary is $\color{red}{\text{artificial}}$ and $\color{blue}{1}$ if it is $\color{blue}{\text{physical}}$
###Code
%%writefile -a $outfile_path__outer_boundaries__C
/*********************************************
* Apply outer boundary conditions on A_{\mu}
********************************************/
extern "C" void IllinoisGRMHD_outer_boundaries_on_A_mu(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
if(CCTK_EQUALS(EM_BC,"frozen")) return;
bool Symmetry_none=false; if(CCTK_EQUALS(Symmetry,"none")) Symmetry_none=true;
int levelnumber = GetRefinementLevel(cctkGH);
IllinoisGRMHD_convert_ADM_to_BSSN__enforce_detgtij_eq_1__and_compute_gtupij(cctkGH,cctk_lsh, gxx,gxy,gxz,gyy,gyz,gzz,alp,
gtxx,gtxy,gtxz,gtyy,gtyz,gtzz,
gtupxx,gtupxy,gtupxz,gtupyy,gtupyz,gtupzz,
phi_bssn,psi_bssn,lapm1);
// Don't apply approximate outer boundary conditions on initial data, which should be defined everywhere, or on levels != [coarsest level].
if(cctk_iteration==0 || levelnumber!=0) return;
if(cctk_nghostzones[0]!=cctk_nghostzones[1] || cctk_nghostzones[0]!=cctk_nghostzones[2])
CCTK_VError(VERR_DEF_PARAMS,"ERROR: IllinoisGRMHD outer BC driver does not support unequal number of ghostzones in different directions!");
for(int which_bdry_pt=0;which_bdry_pt<cctk_nghostzones[0];which_bdry_pt++) {
int imax=cctk_lsh[0]-cctk_nghostzones[0]+which_bdry_pt; // for cctk_nghostzones==3, this goes {cctk_lsh-3,cctk_lsh-2,cctk_lsh-1}; outer bdry pt is at cctk_lsh-1
int jmax=cctk_lsh[1]-cctk_nghostzones[1]+which_bdry_pt;
int kmax=cctk_lsh[2]-cctk_nghostzones[2]+which_bdry_pt;
int imin=cctk_nghostzones[0]-which_bdry_pt-1; // for cctk_nghostzones==3, this goes {2,1,0}
int jmin=cctk_nghostzones[1]-which_bdry_pt-1;
int kmin=cctk_nghostzones[2]-which_bdry_pt-1;
if(cctk_bbox[1]) { XMAX_OB_LINEAR_EXTRAP(Ax,imax); XMAX_OB_LINEAR_EXTRAP(Ay,imax); XMAX_OB_LINEAR_EXTRAP(Az,imax); XMAX_OB_LINEAR_EXTRAP(psi6phi,imax); }
if(cctk_bbox[3]) { YMAX_OB_LINEAR_EXTRAP(Ax,jmax); YMAX_OB_LINEAR_EXTRAP(Ay,jmax); YMAX_OB_LINEAR_EXTRAP(Az,jmax); YMAX_OB_LINEAR_EXTRAP(psi6phi,jmax); }
if(cctk_bbox[5]) { ZMAX_OB_LINEAR_EXTRAP(Ax,kmax); ZMAX_OB_LINEAR_EXTRAP(Ay,kmax); ZMAX_OB_LINEAR_EXTRAP(Az,kmax); ZMAX_OB_LINEAR_EXTRAP(psi6phi,kmax); }
if(cctk_bbox[0]) { XMIN_OB_LINEAR_EXTRAP(Ax,imin); XMIN_OB_LINEAR_EXTRAP(Ay,imin); XMIN_OB_LINEAR_EXTRAP(Az,imin); XMIN_OB_LINEAR_EXTRAP(psi6phi,imin); }
if(cctk_bbox[2]) { YMIN_OB_LINEAR_EXTRAP(Ax,jmin); YMIN_OB_LINEAR_EXTRAP(Ay,jmin); YMIN_OB_LINEAR_EXTRAP(Az,jmin); YMIN_OB_LINEAR_EXTRAP(psi6phi,jmin); }
if((cctk_bbox[4]) && Symmetry_none) { ZMIN_OB_LINEAR_EXTRAP(Ax,kmin); ZMIN_OB_LINEAR_EXTRAP(Ay,kmin); ZMIN_OB_LINEAR_EXTRAP(Az,kmin); ZMIN_OB_LINEAR_EXTRAP(psi6phi,kmin); }
}
}
###Output
Appending to ../src/outer_boundaries.C
###Markdown
Step 2.b: The hydrodynamic variables \[Back to [top](toc)\]$$\label{outer_boundaries__hydro_vars}$$ Step 2.b.i: Defining the zero derivative, outflow operators \[Back to [top](toc)\]$$\label{outer_boundaries__hydro_vars__zero_deriv_outflow}$$We now apply outer boundary conditions to $\left\{P,\rho_{b},v^{i}\right\}$, imposing zero derivative, outflow boundary conditions. We follow the prescription described above:$$\boxed{\begin{matrix}\text{Positive direction: }E_{i+1}=\left\{\begin{matrix}E_{i}\ , &{\rm\ if\ } E\in\left\{P,\rho_{b},v^{y},v^{z}\right\},{\rm\ or\ } E=v^{x}\ {\rm and\ } v^{x}\geq0\\0\ , &{\rm\ if\ } E=v^{x}\ {\rm and\ } v^{x}<0\end{matrix}\right.\\\text{Negative direction: }E_{i-1}=\left\{\begin{matrix}E_{i}\ , &{\rm\ if\ } E\in\left\{P,\rho_{b},v^{y},v^{z}\right\},{\rm\ or\ } E=v^{x}\ {\rm and\ } v^{x}\geq0\\0\ , &{\rm\ if\ } E=v^{x}\ {\rm and\ } v^{x}<0\end{matrix}\right.\end{matrix}}\ .$$
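As a quick illustration (again outside the generated source, with made-up array names), the same rule applied to a 1D velocity component amounts to copying the last interior value outward and then zeroing any ghost-zone value that points back into the grid:
###Code
import numpy as np

NG = 3
vx = np.zeros(16)
vx[NG:-NG] = np.linspace(-0.2, 0.3, 16 - 2*NG)  # interior values only

# Zero-derivative copy into the ghost zones
for i in range(len(vx) - NG, len(vx)):
    vx[i] = vx[i-1]
for i in range(NG - 1, -1, -1):
    vx[i] = vx[i+1]

# Outflow (inflow check): zero any ghost-zone velocity directed back into the grid
vx[-NG:][vx[-NG:] < 0.0] = 0.0   # upper boundary: inflow means vx < 0
vx[:NG][vx[:NG] > 0.0] = 0.0     # lower boundary: inflow means vx > 0

print(vx)
###Output
_____no_output_____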
###Code
%%writefile -a $outfile_path__outer_boundaries__C
#define XMAX_OB_SIMPLE_COPY(FUNC,imax) for(int k=0;k<cctk_lsh[2];k++) for(int j=0;j<cctk_lsh[1];j++) FUNC[IDX(imax,j,k)] = FUNC[IDX(imax-1,j,k)];
#define YMAX_OB_SIMPLE_COPY(FUNC,jmax) for(int k=0;k<cctk_lsh[2];k++) for(int i=0;i<cctk_lsh[0];i++) FUNC[IDX(i,jmax,k)] = FUNC[IDX(i,jmax-1,k)];
#define ZMAX_OB_SIMPLE_COPY(FUNC,kmax) for(int j=0;j<cctk_lsh[1];j++) for(int i=0;i<cctk_lsh[0];i++) FUNC[IDX(i,j,kmax)] = FUNC[IDX(i,j,kmax-1)];
#define XMIN_OB_SIMPLE_COPY(FUNC,imin) for(int k=0;k<cctk_lsh[2];k++) for(int j=0;j<cctk_lsh[1];j++) FUNC[IDX(imin,j,k)] = FUNC[IDX(imin+1,j,k)];
#define YMIN_OB_SIMPLE_COPY(FUNC,jmin) for(int k=0;k<cctk_lsh[2];k++) for(int i=0;i<cctk_lsh[0];i++) FUNC[IDX(i,jmin,k)] = FUNC[IDX(i,jmin+1,k)];
#define ZMIN_OB_SIMPLE_COPY(FUNC,kmin) for(int j=0;j<cctk_lsh[1];j++) for(int i=0;i<cctk_lsh[0];i++) FUNC[IDX(i,j,kmin)] = FUNC[IDX(i,j,kmin+1)];
#define XMAX_INFLOW_CHECK(vx,imax) for(int k=0;k<cctk_lsh[2];k++) for(int j=0;j<cctk_lsh[1];j++) if(vx[IDX(imax,j,k)]<0.) vx[IDX(imax,j,k)]=0.;
#define YMAX_INFLOW_CHECK(vy,jmax) for(int k=0;k<cctk_lsh[2];k++) for(int i=0;i<cctk_lsh[0];i++) if(vy[IDX(i,jmax,k)]<0.) vy[IDX(i,jmax,k)]=0.;
#define ZMAX_INFLOW_CHECK(vz,kmax) for(int j=0;j<cctk_lsh[1];j++) for(int i=0;i<cctk_lsh[0];i++) if(vz[IDX(i,j,kmax)]<0.) vz[IDX(i,j,kmax)]=0.;
#define XMIN_INFLOW_CHECK(vx,imin) for(int k=0;k<cctk_lsh[2];k++) for(int j=0;j<cctk_lsh[1];j++) if(vx[IDX(imin,j,k)]>0.) vx[IDX(imin,j,k)]=0.;
#define YMIN_INFLOW_CHECK(vy,jmin) for(int k=0;k<cctk_lsh[2];k++) for(int i=0;i<cctk_lsh[0];i++) if(vy[IDX(i,jmin,k)]>0.) vy[IDX(i,jmin,k)]=0.;
#define ZMIN_INFLOW_CHECK(vz,kmin) for(int j=0;j<cctk_lsh[1];j++) for(int i=0;i<cctk_lsh[0];i++) if(vz[IDX(i,j,kmin)]>0.) vz[IDX(i,j,kmin)]=0.;
###Output
Appending to ../src/outer_boundaries.C
###Markdown
Step 2.b.ii: Applying boundary conditions to $\left\{P,\rho_{b},v^{i}\right\}$ \[Back to [top](toc)\]$$\label{outer_boundaries__hydro_vars__applying_bcs}$$As with the previous case, applying the boundary conditions is a straightforward procedure. We refer the reader to the `cctk` quantities discussed in [Step 2.a.ii](outer_boundaries__amu__applying_bcs), in case clarifications are needed.
###Code
%%writefile -a $outfile_path__outer_boundaries__C
/*******************************************************
* Apply outer boundary conditions on {P,rho_b,vx,vy,vz}
* It is better to apply BCs on primitives than conservs,
* because small errors in conservs can be greatly
* amplified in con2prim, sometimes leading to unphysical
* primitives & unnecessary fixes.
*******************************************************/
extern "C" void IllinoisGRMHD_outer_boundaries_on_P_rho_b_vx_vy_vz(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
if(CCTK_EQUALS(Matter_BC,"frozen")) return;
bool Symmetry_none=false; if(CCTK_EQUALS(Symmetry,"none")) Symmetry_none=true;
int levelnumber = GetRefinementLevel(cctkGH);
// Don't apply approximate outer boundary conditions on initial data, which should be defined everywhere, or on levels != [coarsest level].
if(cctk_iteration==0 || levelnumber!=0) return;
int ENABLE=1;
IllinoisGRMHD_convert_ADM_to_BSSN__enforce_detgtij_eq_1__and_compute_gtupij(cctkGH,cctk_lsh, gxx,gxy,gxz,gyy,gyz,gzz,alp,
gtxx,gtxy,gtxz,gtyy,gtyz,gtzz,
gtupxx,gtupxy,gtupxz,gtupyy,gtupyz,gtupzz,
phi_bssn,psi_bssn,lapm1);
//if(levelnumber<=11110) {
if(cctk_nghostzones[0]!=cctk_nghostzones[1] || cctk_nghostzones[0]!=cctk_nghostzones[2])
CCTK_VError(VERR_DEF_PARAMS,"ERROR: IllinoisGRMHD outer BC driver does not support unequal number of ghostzones in different directions!");
for(int which_bdry_pt=0;which_bdry_pt<cctk_nghostzones[0];which_bdry_pt++) {
int imax=cctk_lsh[0]-cctk_nghostzones[0]+which_bdry_pt; // for cctk_nghostzones==3, this goes {cctk_lsh-3,cctk_lsh-2,cctk_lsh-1}; outer bdry pt is at cctk_lsh-1
int jmax=cctk_lsh[1]-cctk_nghostzones[1]+which_bdry_pt;
int kmax=cctk_lsh[2]-cctk_nghostzones[2]+which_bdry_pt;
int imin=cctk_nghostzones[0]-which_bdry_pt-1; // for cctk_nghostzones==3, this goes {2,1,0}
int jmin=cctk_nghostzones[1]-which_bdry_pt-1;
int kmin=cctk_nghostzones[2]-which_bdry_pt-1;
// Order here is for compatibility with old version of this code.
/* XMIN & XMAX */
// i=imax=outer boundary
if(cctk_bbox[1]) { XMAX_OB_SIMPLE_COPY(P,imax); XMAX_OB_SIMPLE_COPY(rho_b,imax); XMAX_OB_SIMPLE_COPY(vx,imax); XMAX_OB_SIMPLE_COPY(vy,imax); XMAX_OB_SIMPLE_COPY(vz,imax); if(ENABLE) XMAX_INFLOW_CHECK(vx,imax); }
// i=imin=outer boundary
if(cctk_bbox[0]) {
XMIN_OB_SIMPLE_COPY(P,imin); XMIN_OB_SIMPLE_COPY(rho_b,imin); XMIN_OB_SIMPLE_COPY(vx,imin); XMIN_OB_SIMPLE_COPY(vy,imin); XMIN_OB_SIMPLE_COPY(vz,imin); if(ENABLE) XMIN_INFLOW_CHECK(vx,imin); }
/* YMIN & YMAX */
// j=jmax=outer boundary
if(cctk_bbox[3]) { YMAX_OB_SIMPLE_COPY(P,jmax); YMAX_OB_SIMPLE_COPY(rho_b,jmax); YMAX_OB_SIMPLE_COPY(vx,jmax); YMAX_OB_SIMPLE_COPY(vy,jmax); YMAX_OB_SIMPLE_COPY(vz,jmax); if(ENABLE) YMAX_INFLOW_CHECK(vy,jmax); }
// j=jmin=outer boundary
if(cctk_bbox[2]) {
YMIN_OB_SIMPLE_COPY(P,jmin); YMIN_OB_SIMPLE_COPY(rho_b,jmin); YMIN_OB_SIMPLE_COPY(vx,jmin); YMIN_OB_SIMPLE_COPY(vy,jmin); YMIN_OB_SIMPLE_COPY(vz,jmin); if(ENABLE) YMIN_INFLOW_CHECK(vy,jmin); }
/* ZMIN & ZMAX */
// k=kmax=outer boundary
if(cctk_bbox[5]) { ZMAX_OB_SIMPLE_COPY(P,kmax); ZMAX_OB_SIMPLE_COPY(rho_b,kmax); ZMAX_OB_SIMPLE_COPY(vx,kmax); ZMAX_OB_SIMPLE_COPY(vy,kmax); ZMAX_OB_SIMPLE_COPY(vz,kmax); if(ENABLE) ZMAX_INFLOW_CHECK(vz,kmax); }
// k=kmin=outer boundary
if((cctk_bbox[4]) && Symmetry_none) {
ZMIN_OB_SIMPLE_COPY(P,kmin); ZMIN_OB_SIMPLE_COPY(rho_b,kmin); ZMIN_OB_SIMPLE_COPY(vx,kmin); ZMIN_OB_SIMPLE_COPY(vy,kmin); ZMIN_OB_SIMPLE_COPY(vz,kmin); if(ENABLE) ZMIN_INFLOW_CHECK(vz,kmin); }
}
###Output
Appending to ../src/outer_boundaries.C
###Markdown
Step 2.c: The conservative variables \[Back to [top](toc)\]$$\label{outer_boundaries__conservatives}$$After we have applied boundary conditions to our primitive (i.e. hydrodynamic) variables, we [make sure their values lie within the physical range and then recompute the conservatives](Tutorial-IllinoisGRMHD__apply_tau_floor__enforce_limits_on_primitives_and_recompute_conservs.ipynb). Notice that the boundary conditions are not applied directly to the conservative variables. The reason the code is structured this way is that small variations in the values of the conservative variables can cause the conservative-to-primitive algorithm to fail.
###Code
%%writefile -a $outfile_path__outer_boundaries__C
/**********************************
* Piecewise Polytropic EOS Patch *
* Setting up the EOS struct *
**********************************/
/*
* The short piece of code below takes care
* of initializing the EOS parameters.
* Please refer to the "inlined_functions.C"
* source file for the documentation on the
* function.
*/
eos_struct eos;
initialize_EOS_struct_from_input(eos);
#pragma omp parallel for
for(int k=0;k<cctk_lsh[2];k++) for(int j=0;j<cctk_lsh[1];j++) for(int i=0;i<cctk_lsh[0];i++) {
if(((cctk_bbox[0]) && i<cctk_nghostzones[0]) ||
((cctk_bbox[1]) && i>=cctk_lsh[0]-cctk_nghostzones[0]) ||
((cctk_bbox[2]) && j<cctk_nghostzones[1]) ||
((cctk_bbox[3]) && j>=cctk_lsh[1]-cctk_nghostzones[1]) ||
((cctk_bbox[4]) && k<cctk_nghostzones[2] && CCTK_EQUALS(Symmetry,"none")) ||
((cctk_bbox[5]) && k>=cctk_lsh[2]-cctk_nghostzones[2])) {
int index = CCTK_GFINDEX3D(cctkGH,i,j,k);
int ww;
CCTK_REAL METRIC[NUMVARS_FOR_METRIC],dummy=-1e100; // Set dummy to insane value, to ensure it isn't being used.
ww=0;
//psi[index] = exp(phi[index]);
METRIC[ww] = phi_bssn[index];ww++;
METRIC[ww] = dummy; ww++; // Don't need to set psi.
METRIC[ww] = gtxx[index]; ww++;
METRIC[ww] = gtxy[index]; ww++;
METRIC[ww] = gtxz[index]; ww++;
METRIC[ww] = gtyy[index]; ww++;
METRIC[ww] = gtyz[index]; ww++;
METRIC[ww] = gtzz[index]; ww++;
METRIC[ww] = lapm1[index]; ww++;
METRIC[ww] = betax[index]; ww++;
METRIC[ww] = betay[index]; ww++;
METRIC[ww] = betaz[index]; ww++;
METRIC[ww] = gtupxx[index]; ww++;
METRIC[ww] = gtupyy[index]; ww++;
METRIC[ww] = gtupzz[index]; ww++;
METRIC[ww] = gtupxy[index]; ww++;
METRIC[ww] = gtupxz[index]; ww++;
METRIC[ww] = gtupyz[index]; ww++;
CCTK_REAL U[MAXNUMVARS];
ww=0;
U[ww] = rho_b[index]; ww++;
U[ww] = P[index]; ww++;
U[ww] = vx[index]; ww++;
U[ww] = vy[index]; ww++;
U[ww] = vz[index]; ww++;
U[ww] = Bx[index]; ww++;
U[ww] = By[index]; ww++;
U[ww] = Bz[index]; ww++;
struct output_stats stats;
CCTK_REAL CONSERVS[NUM_CONSERVS],TUPMUNU[10],TDNMUNU[10];
const int already_computed_physical_metric_and_inverse=0;
CCTK_REAL g4dn[4][4],g4up[4][4];
IllinoisGRMHD_enforce_limits_on_primitives_and_recompute_conservs(already_computed_physical_metric_and_inverse,U,stats,eos,METRIC,g4dn,g4up, TUPMUNU,TDNMUNU,CONSERVS);
rho_b[index] = U[RHOB];
P[index] = U[PRESSURE];
vx[index] = U[VX];
vy[index] = U[VY];
vz[index] = U[VZ];
rho_star[index]=CONSERVS[RHOSTAR];
tau[index] =CONSERVS[TAUENERGY];
mhd_st_x[index]=CONSERVS[STILDEX];
mhd_st_y[index]=CONSERVS[STILDEY];
mhd_st_z[index]=CONSERVS[STILDEZ];
if(update_Tmunu) {
ww=0;
eTtt[index] = TDNMUNU[ww]; ww++;
eTtx[index] = TDNMUNU[ww]; ww++;
eTty[index] = TDNMUNU[ww]; ww++;
eTtz[index] = TDNMUNU[ww]; ww++;
eTxx[index] = TDNMUNU[ww]; ww++;
eTxy[index] = TDNMUNU[ww]; ww++;
eTxz[index] = TDNMUNU[ww]; ww++;
eTyy[index] = TDNMUNU[ww]; ww++;
eTyz[index] = TDNMUNU[ww]; ww++;
eTzz[index] = TDNMUNU[ww];
}
//if(i==5 && j==5 && k==5) CCTK_VInfo(CCTK_THORNSTRING,"%e %e %e %e",eTtt[index],eTtx[index],eTty[index],eTxy[index]);
//CCTK_VInfo(CCTK_THORNSTRING,"YAY: "); for(ww=0;ww<10;ww++) CCTK_VInfo(CCTK_THORNSTRING,"%e ",TDNMUNU[ww]); CCTK_VInfo(CCTK_THORNSTRING,"");
}
}
}
###Output
Appending to ../src/outer_boundaries.C
###Markdown
Step 3: Code validation \[Back to [top](toc)\]$$\label{code_validation}$$First we download the original `IllinoisGRMHD` source code and then compare it to the source code generated by this tutorial notebook.
###Code
# Verify if the code generated by this tutorial module
# matches the original IllinoisGRMHD source code
# First download the original IllinoisGRMHD source code
import urllib
from os import path
original_IGM_file_url = "https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/IllinoisGRMHD/src/outer_boundaries.C"
original_IGM_file_name = "outer_boundaries-original.C"
original_IGM_file_path = os.path.join(IGM_src_dir_path,original_IGM_file_name)
# Then download the original IllinoisGRMHD source code
# We try it here in a couple of ways in an attempt to keep
# the code more portable
try:
original_IGM_file_code = urllib.request.urlopen(original_IGM_file_url).read().decode("utf-8")
# Write down the file the original IllinoisGRMHD source code
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
try:
original_IGM_file_code = urllib.urlopen(original_IGM_file_url).read().decode("utf-8")
# Write down the file the original IllinoisGRMHD source code
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
# If all else fails, hope wget does the job
!wget -O $original_IGM_file_path $original_IGM_file_url
# Perform validation
Validation__outer_boundaries__C = !diff $original_IGM_file_path $outfile_path__outer_boundaries__C
if Validation__outer_boundaries__C == []:
# If the validation passes, we do not need to store the original IGM source code file
!rm $original_IGM_file_path
print("Validation test for outer_boundaries.C: PASSED!")
else:
# If the validation fails, we keep the original IGM source code file
print("Validation test for outer_boundaries.C: FAILED!")
# We also print out the difference between the code generated
# in this tutorial module and the original IGM source code
print("Diff:")
for diff_line in Validation__outer_boundaries__C:
print(diff_line)
###Output
Validation test for outer_boundaries.C: FAILED!
Diff:
19a20
> #include "IllinoisGRMHD_EoS_lowlevel_functs.C"
31a33
>
73a76
>
91a95
>
155c159,170
< // FIXME: only for single gamma-law EOS.
---
>
> /**********************************
> * Piecewise Polytropic EOS Patch *
> * Setting up the EOS struct *
> **********************************/
> /*
> * The short piece of code below takes care
> * of initializing the EOS parameters.
> * Please refer to the "inlined_functions.C"
> * source file for the documentation on the
> * function.
> */
157,165c172,173
< eos.neos=neos;
< eos.K_poly=K_poly;
< eos.rho_tab[0]=rho_tab[0];
< eos.P_tab[0]=P_tab[0];
< eos.gamma_th=gamma_th;
< eos.eps_tab[0]=eps_tab[0];
< eos.k_tab[0]=k_tab[0]; eos.k_tab[1]=k_tab[1];
< eos.gamma_tab[0]=gamma_tab[0]; eos.gamma_tab[1]=gamma_tab[1];
<
---
> initialize_EOS_struct_from_input(eos);
>
246a255
>
###Markdown
Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-IllinoisGRMHD__outer_boundaries.pdf](Tutorial-IllinoisGRMHD__outer_boundaries.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means).
###Code
latex_nrpy_style_path = os.path.join(nrpy_dir_path,"latex_nrpy_style.tplx")
#!jupyter nbconvert --to latex --template $latex_nrpy_style_path --log-level='WARN' Tutorial-IllinoisGRMHD__outer_boundaries.ipynb
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__outer_boundaries.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__outer_boundaries.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__outer_boundaries.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
_____no_output_____ |
benchmarking/ColanderBenchmarking.ipynb | ###Markdown
Colander Benchmarking
###Code
from colander.mock_data_generation.utils import *
from colander.estimate import greedy_strain_estimation
import networkx as nx
###Output
_____no_output_____
###Markdown
Broadly speaking, this process involves a few steps: 1. Generate a random "starting genome" 2. Generate strains 3. Shear strains into k-mers, then make the de Bruijn graph 4. Run the greedy strain estimation code. Of course, genomes in practice are not random sequences of nucleotides -- as chapter 1 of Compeau and Pevzner shows, factors like G/C skew and repetitive regions are examples of nonrandomness in real genomes. This gives us reason to doubt the efficacy of modeling genomes completely randomly, as we do here. That being said, we have to start somewhere. Define parameter sets for the tests we're going to run. The parameters are: - Starting genome length - Strain coverages - *N* parameter (each test runs estimation once for each *N* parameter in its list) (We've kept the k-mer size and hypervariable region settings / mutation rates consistent, but these could of course be adjusted on a per-test basis as well.)
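As a rough illustration of step 1 only (a toy sketch, not colander's actual implementation in `mock_data_generation.utils`), a "random starting genome" here is just a uniform draw over the four nucleotides; the real generator may differ:
###Code
import random

def toy_random_sequence(length, seed=None):
    """Uniformly random nucleotide string -- illustrative stand-in for generate_random_sequence."""
    rng = random.Random(seed)
    return "".join(rng.choice("ACGT") for _ in range(length))

toy_random_sequence(20, seed=0)
###Output
_____no_output_____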
###Code
tests = [
[1000, [1, 3, 5], [1, 2, 3, 4, 5]],
[1000, [20, 20, 45], [1, 2, 3, 4, 5]],
[1000, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [5, 10, 15, 20]],
[10000, [1, 3, 5], [1, 2, 3, 4, 5]],
[10000, [20, 20, 45], [1, 2, 3, 4, 5]],
[10000, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [5, 10, 15, 20]]
]
for i in range(len(tests)):
print("PARAMETER SET {}".format(i))
print("----------------".format(i))
glen = tests[i][0]
genome = generate_random_sequence(glen)
# Define hypervariable regions in the genome: these will undergo more mutations
hv_regions = [(glen // 50, glen // 50 + 100), (glen // 2, glen // 2 + 100)]
strains = generate_strains_from_genome(
genome,
tests[i][1],
hv_regions,
hypervariable_mutation_probability=0.01,
normal_mutation_probability=0.001
)
kmers = []
for s in strains:
kmers += shear_into_kmers(s.seq, s.coverage, 15)
g = make_debruijn_graph(kmers)
for n in tests[i][2]:
cs = greedy_strain_estimation(g, n)
print("CycleSet with N = {} has {} cycles and conformity score {}".format(
n, len(cs), cs.conformity_score(g)
))
###Output
PARAMETER SET 0
----------------
CycleSet with N = 1 has 1 cycles and conformity score 15237
CycleSet with N = 2 has 2 cycles and conformity score 999
CycleSet with N = 3 has 3 cycles and conformity score 0
CycleSet with N = 4 has 3 cycles and conformity score 0
CycleSet with N = 5 has 3 cycles and conformity score 0
PARAMETER SET 1
----------------
CycleSet with N = 1 has 1 cycles and conformity score 1551600
CycleSet with N = 2 has 2 cycles and conformity score 400000
CycleSet with N = 3 has 3 cycles and conformity score 0
CycleSet with N = 4 has 3 cycles and conformity score 0
CycleSet with N = 5 has 3 cycles and conformity score 0
PARAMETER SET 2
----------------
CycleSet with N = 5 has 5 cycles and conformity score 1004
CycleSet with N = 10 has 6 cycles and conformity score 0
CycleSet with N = 15 has 6 cycles and conformity score 0
CycleSet with N = 20 has 6 cycles and conformity score 0
PARAMETER SET 3
----------------
CycleSet with N = 1 has 1 cycles and conformity score 152520
CycleSet with N = 2 has 2 cycles and conformity score 10005
CycleSet with N = 3 has 3 cycles and conformity score 0
CycleSet with N = 4 has 3 cycles and conformity score 0
CycleSet with N = 5 has 3 cycles and conformity score 0
PARAMETER SET 4
----------------
CycleSet with N = 1 has 1 cycles and conformity score 14916000
CycleSet with N = 2 has 2 cycles and conformity score 3996000
CycleSet with N = 3 has 3 cycles and conformity score 0
CycleSet with N = 4 has 3 cycles and conformity score 0
CycleSet with N = 5 has 3 cycles and conformity score 0
PARAMETER SET 5
----------------
CycleSet with N = 5 has 4 cycles and conformity score 248895
CycleSet with N = 10 has 4 cycles and conformity score 248895
CycleSet with N = 15 has 4 cycles and conformity score 248895
CycleSet with N = 20 has 4 cycles and conformity score 248895
|
stage2_Ref_Calls/Play_by_play_data.ipynb | ###Markdown
Master Feature Types and Mapping from Raw to Readable Feature Names
###Code
# All types of fouls with 100+ occurences over last 3 seasons, raw as written in play-by-play files
# Other rare types listed on: https://pudding.cool/2017/02/two-minute-report/
# (ex: C.P.Foul or Clear Path)
foul_types_raw_all = ['Personal','P.FOUL',
'Turnover: Shot Clock',
'S.FOUL','Shooting',
'L.B.FOUL',
'OFF.Foul',
'Traveling']
foul_types = ['Personal', '24 Second', 'Shooting', 'Loose Ball', 'Offensive', 'Traveling']
# Originally included more foul types, but processing a season's data took more than 4 hours
foul_types_dict = {'P.FOUL':'Personal', # 'Personal Take Foul':'Personal',
'Personal':'Personal', # 'Personal Block':'Personal',
'Turnover: Shot Clock':'24 Second',
'S.FOUL':'Shooting','Shooting':'Shooting',
'L.B.FOUL':'Loose Ball',
'OFF.Foul':'Offensive',
'Traveling':'Traveling'
#'T.Foul':'Technical','T.Foul (Def. 3 Sec':'Defensive 3 Second'
}
features = ['SCORE','HOMEDESCRIPTION','VISITORDESCRIPTION','EVENTNUM','GAME_ID','PCTIMESTRING',
'PERIOD','PLAYER1_ID','PLAYER1_NAME','PLAYER1_TEAM_NICKNAME',
'PLAYER2_ID','PLAYER2_NAME','PLAYER2_TEAM_NICKNAME',
'PLAYER3_ID','PLAYER3_NAME','PLAYER3_TEAM_NICKNAME',
'SCOREMARGIN'
]
team_to_abrev_dict = {'76ers':'PHI',
'Bucks':'MIL',
'Bulls':'CHI',
'Cavaliers':'CLE',
'Celtics':'BOS',
'Clippers':'LAC',
'Grizzlies':'MEM',
'Hawks':'ATL',
'Heat':'MIA',
'Hornets':'CHA',
'Jazz':'UTA',
'Kings':'SAC',
'Knicks':'NYK',
'Lakers':'LAL',
'Magic':'ORL',
'Mavericks':'DAL',
'Nets':'BKN',
'Nuggets':'DEN',
'Pacers':'IND',
'Pelicans':'NOP',
'Pistons':'DET',
'Raptors':'TOR',
'Rockets':'HOU',
'Spurs':'SAS',
'Suns':'PHX',
'Thunder':'OKC',
'Timberwolves':'MIN',
'Trail Blazers':'POR',
'Warriors':'GSW',
'Wizards':'WAS'}
###Output
_____no_output_____
###Markdown
Checking Which Columns are Relevant in Seasonal Play-by-Play Data
###Code
import pandas as pd
import re

# df = pd.read_csv('./data_raw/2015-16_pbp.csv')
# df.info()
# df.head()
# df = df[features]
# df.to_csv('./data_raw/2015-16_pbp_gzip.csv', index=False, compression='gzip')
df = pd.read_csv('./data_raw/2015-16_pbp_gzip.csv', compression='gzip')
df['PERSON1TYPE'].value_counts()
df['PERSON2TYPE'].value_counts()
# These variables doesn't seem at all relevant, since we have names of each person 1-3
df['PERSON3TYPE'].value_counts()
###Output
_____no_output_____
###Markdown
Mapping Raw to Re-Named Features
###Code
# Only interested in play-by-play events that are fouls and also one of the 6 most common foul types
def process_pbp(df):
c = 0
for idx, row in df.iterrows():
c += 1
if c % 50000 == 0: print(c, ' rows processed')
if type(row['VISITORDESCRIPTION']) == str:
for foul in foul_types_raw_all:
if (foul in row['VISITORDESCRIPTION']):
df.loc[idx, 'Foul'] = foul
df.loc[idx, 'Fouler'] = row['PLAYER1_NAME']
df.loc[idx, 'Fouler_team'] = row['PLAYER1_TEAM_ABBREVIATION']
df.loc[idx, 'is_home_team'] = False
if type(row["PLAYER2_NAME"]) != float: # excluding null values
df.loc[idx, 'Foulee'] = row['PLAYER2_NAME']
df.loc[idx, 'Foulee_team'] = row['PLAYER2_TEAM_ABBREVIATION']
if (type(row['HOMEDESCRIPTION']) == str):
for foul in foul_types_raw_all:
if (foul in row['HOMEDESCRIPTION']):
df.loc[idx, 'Foul'] = foul
df.loc[idx, 'Fouler'] = row['PLAYER1_NAME']
df.loc[idx, 'Fouler_team'] = row['PLAYER1_TEAM_ABBREVIATION']
df.loc[idx, 'is_home_team'] = True
if type(row["PLAYER2_NAME"]) != float: # excluding null values
df.loc[idx, 'Foulee'] = row['PLAYER2_NAME']
df.loc[idx, 'Foulee_team'] = row['PLAYER2_TEAM_ABBREVIATION']
process_pbp(df)
df.info()
df.drop('Home_team', axis=1, inplace=True)
df.to_csv('./data/play_by_play.csv', index=False, compression='gzip')
###Output
_____no_output_____
###Markdown
Engineering 2016-2017 Season Dataset
###Code
df2 = pd.read_csv('./data_raw/2016-17_pbp.csv')
df2.info()
# df2.head()
# df2 = df2[features]
# df2.info()
process_pbp(df2)
# df2.to_csv('./data_raw/2016-17_pbp_gzip.csv', index=False, compression='gzip')
df2 = pd.read_csv('./data_raw/2016-17_pbp_gzip.csv', compression='gzip')
# df2.drop(['PLAYER1_TEAM_ABBREVIATION','PLAYER2_TEAM_ABBREVIATION','PLAYER3_TEAM_ABBREVIATION'],
# axis=1, inplace=True)
# df3 = pd.read_csv('./data_raw/2016-17_pbp_gzip.csv', compression='gzip')
# df2 = pd.concat([df2, df3[['PLAYER1_TEAM_NICKNAME','PLAYER2_TEAM_NICKNAME','PLAYER3_TEAM_NICKNAME']]],
# axis=1)
df2.info()
df3['GAME_ID'][df3['GAME_ID']<21509999].value_counts()
df3['GAME_ID'][df3['GAME_ID']>21599999].value_counts()
df2['GAME_ID'][df2['GAME_ID']<21509999].value_counts()
df2['GAME_ID'][df2['GAME_ID']>21599999].value_counts()
# df3.to_csv('./data_raw/2015-16_pbp_gzip.csv', index=False, compression='gzip')
df3 = df3.append(df2, ignore_index=True)
# Both seasons data appended together (2016-17 and 2015-16)
df3.to_csv('./data/play_by_play.csv', index=False, compression='gzip')
df = None
df2 = None
df3 = pd.read_csv('./data/play_by_play.csv', compression='gzip')
df3.info()
df3['Fouler_team'].value_counts().sort_index()
df3['PLAYER1_TEAM_NICKNAME'].value_counts().sort_index()
###Output
_____no_output_____
###Markdown
Engineering Adjusted Player Usage Variable, Counted Per Game
###Code
# Almost all Types of Shots and their Frequency:
# Jump Shot 0.461887
# Layup Shot 0.103953
# Driving Layup Shot 0.066412
# Pullup Jump shot 0.049389
# Floating Jump shot 0.024552
# Hook Shot 0.023900
# Tip Layup Shot 0.022701
# Step Back Jump shot 0.019193
# Running Layup Shot 0.018051
# Dunk Shot 0.017982
# Turnaround Jump Shot 0.016326
# Cutting Layup Shot 0.015823
# Fadeaway Jump Shot 0.012487
# Driving Finger Roll Layup Shot 0.011093
# Reverse Layup Shot 0.010899
# Putback Layup Shot 0.010773
# Running Jump Shot 0.008614
# Jump Bank Shot 0.008580
# Turnaround Hook Shot 0.008112
# Driving Floating Jump Shot 0.008077
# Alley Oop Dunk Shot 0.007655
# Driving Dunk Shot 0.007380
# Driving Reverse Layup Shot 0.007369
# Turnaround Fadeaway shot 0.006889
# Cutting Dunk Shot 0.006649
# Running Dunk Shot 0.006238
# Driving Hook Shot 0.004776
# Alley Oop Layup shot 0.004684
# Driving Bank shot 0.004536
# Putback Dunk Shot 0.003622
# Finger Roll Layup Shot 0.003496
# Running Finger Roll Layup Shot 0.002491
# Driving Floating Bank Jump Shot 0.001954
# Cutting Finger Roll Layup Shot 0.001748
# Turnaround Bank shot 0.001599
# Running Reverse Layup Shot 0.001554
# Tip Dunk Shot 0.001291
# Pullup Bank shot 0.001280
# Running Pull-Up Jump Shot 0.001234
match = re.search("(\d+)\'", "Neto 4' Fadeaway Jumper (2 PTS)")
match.group(1)  # full captured distance string; match.group()[0] would only return its first character
df3.head()
df3.loc[4,'SCORE']
usage_types1 = ['Layup', 'Dunk', 'Driving'] # Common situations to foul, x2 weight
usage_types2 = ['Shot', 'shot', 'Jumper', 'Turnover', 'REBOUND']
usage_type3 = 'AST' # only type which co-occurs during another event type
# shot_types = ['Layup', 'Dunk', 'Driving', 'Shot', 'shot', 'Jumper'] # All have distance shot from
# Specific type not needed since it takes all remaining shot types including:
# Jump shots, pullups, floaters, hooks, step backs, bank shot
# NBA Usage statistic formula:
# 100 * ((FGA + 0.44 * FTA + TOV) * (Team MP / 5)) / (MP * (Team FGA + 0.44 * Team FTA + Team TOV))
# Above are all the considered types of opportunities where there could be a foul called
# for the player with the ball (included later are all on-ball foul types)
# (each type has a different baseline foul rate but here we assume an overall equal average)
# Types of Turnovers included: Travel, Lost Ball, Bad Pass, Out of Bounds, Foul Turnover, 3 Second
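# Worked example of the usage formula above (made-up numbers, just to show the arithmetic):
# a player with 20 FGA, 5 FTA, 3 TOV in 36 MP, on a team with 85 FGA, 25 FTA, 12 TOV over 240 team minutes:
# 100 * ((20 + 0.44*5 + 3) * (240/5)) / (36 * (85 + 0.44*25 + 12))
#   = 100 * (25.2 * 48) / (36 * 108) = 100 * 1209.6 / 3888 ~= 31.1% usage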
test = pd.DataFrame(data=[[1,2],[3,4]])
test
plus1(test, 2, 2)
test
def plus1(df, row_index, col_name):
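    # Increment the counter cell df.loc[row_index, col_name]; the first increment for a missing
    # row/column raises KeyError, in which case the cell is initialized to 1 instead.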
try:
df.loc[row_index, col_name] += 1
except KeyError:
df.loc[row_index, col_name] = 1
def make_usage(df, df_usage):
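    # Builds per-game features keyed by GAME_ID in df_usage: adjusted-usage counts per player
    # (layups/dunks/drives double-weighted), shot-distance buckets, and per-team foul counts
    # by foul type, using the plus1 helper above.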
c = 0
for idx, row in df.iterrows():
c += 1
if c % 50000 == 0: print(c, ' rows processed')
if type(row['Foul']) == float:
usage1done = False
usage2done = False
if type(row['VISITORDESCRIPTION']) == str:
text = row['VISITORDESCRIPTION']
is_home_text = 'visitor'
player2text = 'home'
elif (type(row['HOMEDESCRIPTION']) == str):
text = row['HOMEDESCRIPTION']
is_home_text = 'home'
                player2text = 'visitor'  # foulee is on the opposite (visiting) side when the foul is in the home description
else:
text = ''
usage1done = True
usage2done = True
is_home_text = 'error'
player2text = 'error'
for usage in usage_types1:
if (usage in text) & (not usage1done):
usage1done = True
plus1(df_usage, row['GAME_ID'], str(row['PLAYER1_NAME']) + '_AdjUsage')
# Plus 1 twice due to double weight for driving shots
plus1(df_usage, row['GAME_ID'], str(row['PLAYER1_NAME']) + '_AdjUsage')
match = re.search("(\d+)\'", text)
try:
                        shot_dist = int(match.group(1))  # full captured distance in feet (handles two-digit values like 24')
if shot_dist <= 5:
plus1(df_usage, row['GAME_ID'], str(row['PLAYER1_NAME']) + '_<5ft')
elif shot_dist <= 10:
plus1(df_usage, row['GAME_ID'], str(row['PLAYER1_NAME']) + '_5-10ft')
elif shot_dist <= 15:
plus1(df_usage, row['GAME_ID'], str(row['PLAYER1_NAME']) + '_10-15ft')
elif shot_dist <= 22:
plus1(df_usage, row['GAME_ID'], str(row['PLAYER1_NAME']) + '_15-22ft')
else:
plus1(df_usage, row['GAME_ID'], str(row['PLAYER1_NAME']) + '_23+ft')
except: pass
if 'AST' in text:
plus1(df_usage, row['GAME_ID'], str(row['PLAYER2_NAME']) + '_AdjUsage')
if not usage1done:
for usage in usage_types2:
if (usage in text) & (not usage2done):
usage2done = True
plus1(df_usage, row['GAME_ID'], str(row['PLAYER1_NAME']) + '_AdjUsage')
elif type(row['Foul']) == str:
foul = row['Foul']
if foul in ['Personal', 'Shooting', 'Loose Ball']:
plus1(df_usage, row['GAME_ID'], str(row['Foulee']))
plus1(df_usage, row['GAME_ID'], str(row['Foulee_team']) + '_' + foul+player2text)
plus1(df_usage, row['GAME_ID'], str(row['Fouler_team']) + '_'+foul+'_given_'+is_home_text)
plus1(df_usage, row['GAME_ID'], str(row['Foulee']) + '_AdjUsage')
elif foul in ['Offensive', 'Traveling']:
plus1(df_usage, row['GAME_ID'], str(row['Fouler']))
plus1(df_usage, row['GAME_ID'], str(row['Fouler_team']) + '_' + foul+is_home_text)
plus1(df_usage, row['GAME_ID'], str(row['Foulee_team']) + '_'+foul+'_taken'+player2text)
plus1(df_usage, row['GAME_ID'], str(row['Fouler']) + '_AdjUsage')
elif foul == '24 Second':
plus1(df_usage, row['GAME_ID'], str(row['Fouler_team']) + '_24_Sec'+is_home_text)
df_usage = pd.DataFrame()
make_usage(df3, df_usage)
df_usage.info()
df_usage.head()
df_usage.to_csv('./data/usage.csv', index=True, compression='gzip')
###Output
_____no_output_____ |
Anomaly & Change point detection/Anomaly detection.ipynb | ###Markdown
* alibi-detect comes with several benchmark datasets for time-series anomaly detection: 1. fetch_ecg — ECG dataset from the BIDMC Congestive Heart Failure Database. 2. fetch_nab — Numenta Anomaly Benchmark. 3. fetch_kdd — KDD Cup '99 dataset of computer network intrusions.
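A hedged sketch of pulling one of these benchmarks in (left commented out: run it only after the `pip install` in the next cell, and note the fetcher signature here is an assumption -- check the alibi-detect docs if it differs):
###Code
# from alibi_detect.datasets import fetch_ecg
# (X_train, y_train), (X_test, y_test) = fetch_ecg(return_X_y=True)
# X_train.shape, X_test.shape
###Output
_____no_output_____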
###Code
!pip install alibi_detect
###Output
Collecting alibi_detect
Downloading alibi_detect-0.7.3-py3-none-any.whl (280 kB)
[K |████████████████████████████████| 280 kB 30.6 MB/s
[?25hCollecting tensorflow<2.7.0,>=2.0.0
Downloading tensorflow-2.6.2-cp37-cp37m-manylinux2010_x86_64.whl (458.3 MB)
[K |████████████████████████████████| 458.3 MB 10 kB/s
[?25hCollecting transformers<5.0.0,>=4.0.0
Downloading transformers-4.12.5-py3-none-any.whl (3.1 MB)
[K |████████████████████████████████| 3.1 MB 45.7 MB/s
[?25hRequirement already satisfied: dill<0.4.0,>=0.3.0 in /usr/local/lib/python3.7/dist-packages (from alibi_detect) (0.3.4)
Requirement already satisfied: numpy<2.0.0,>=1.16.2 in /usr/local/lib/python3.7/dist-packages (from alibi_detect) (1.19.5)
Collecting tensorflow-probability<0.13.0,>=0.8.0
Downloading tensorflow_probability-0.12.2-py2.py3-none-any.whl (4.8 MB)
[K |████████████████████████████████| 4.8 MB 62.9 MB/s
[?25hRequirement already satisfied: matplotlib<4.0.0,>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from alibi_detect) (3.2.2)
Requirement already satisfied: requests<3.0.0,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from alibi_detect) (2.23.0)
Requirement already satisfied: tqdm<5.0.0,>=4.28.1 in /usr/local/lib/python3.7/dist-packages (from alibi_detect) (4.62.3)
Requirement already satisfied: opencv-python<5.0.0,>=3.2.0 in /usr/local/lib/python3.7/dist-packages (from alibi_detect) (4.1.2.30)
Requirement already satisfied: scikit-image!=0.17.1,<0.19,>=0.14.2 in /usr/local/lib/python3.7/dist-packages (from alibi_detect) (0.18.3)
Requirement already satisfied: scikit-learn<1.1.0,>=0.20.2 in /usr/local/lib/python3.7/dist-packages (from alibi_detect) (1.0.1)
Requirement already satisfied: Pillow<9.0.0,>=5.4.1 in /usr/local/lib/python3.7/dist-packages (from alibi_detect) (7.1.2)
Requirement already satisfied: pandas<2.0.0,>=0.23.3 in /usr/local/lib/python3.7/dist-packages (from alibi_detect) (1.1.5)
Requirement already satisfied: scipy<2.0.0,>=1.3.0 in /usr/local/lib/python3.7/dist-packages (from alibi_detect) (1.4.1)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib<4.0.0,>=3.0.0->alibi_detect) (3.0.6)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib<4.0.0,>=3.0.0->alibi_detect) (0.11.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib<4.0.0,>=3.0.0->alibi_detect) (2.8.2)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib<4.0.0,>=3.0.0->alibi_detect) (1.3.2)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas<2.0.0,>=0.23.3->alibi_detect) (2018.9)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.1->matplotlib<4.0.0,>=3.0.0->alibi_detect) (1.15.0)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.21.0->alibi_detect) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.21.0->alibi_detect) (2021.10.8)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.21.0->alibi_detect) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.21.0->alibi_detect) (2.10)
Requirement already satisfied: tifffile>=2019.7.26 in /usr/local/lib/python3.7/dist-packages (from scikit-image!=0.17.1,<0.19,>=0.14.2->alibi_detect) (2021.11.2)
Requirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image!=0.17.1,<0.19,>=0.14.2->alibi_detect) (2.6.3)
Requirement already satisfied: PyWavelets>=1.1.1 in /usr/local/lib/python3.7/dist-packages (from scikit-image!=0.17.1,<0.19,>=0.14.2->alibi_detect) (1.2.0)
Requirement already satisfied: imageio>=2.3.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image!=0.17.1,<0.19,>=0.14.2->alibi_detect) (2.4.1)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn<1.1.0,>=0.20.2->alibi_detect) (3.0.0)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn<1.1.0,>=0.20.2->alibi_detect) (1.1.0)
Requirement already satisfied: opt-einsum~=3.3.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7.0,>=2.0.0->alibi_detect) (3.3.0)
Requirement already satisfied: absl-py~=0.10 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7.0,>=2.0.0->alibi_detect) (0.12.0)
Requirement already satisfied: astunparse~=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7.0,>=2.0.0->alibi_detect) (1.6.3)
Requirement already satisfied: gast==0.4.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7.0,>=2.0.0->alibi_detect) (0.4.0)
Requirement already satisfied: h5py~=3.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7.0,>=2.0.0->alibi_detect) (3.1.0)
Requirement already satisfied: keras-preprocessing~=1.1.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7.0,>=2.0.0->alibi_detect) (1.1.2)
Collecting clang~=5.0
Downloading clang-5.0.tar.gz (30 kB)
Collecting tensorboard<2.7,>=2.6.0
Downloading tensorboard-2.6.0-py3-none-any.whl (5.6 MB)
[K |████████████████████████████████| 5.6 MB 67.8 MB/s
[?25hCollecting flatbuffers~=1.12.0
Downloading flatbuffers-1.12-py2.py3-none-any.whl (15 kB)
Requirement already satisfied: grpcio<2.0,>=1.37.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7.0,>=2.0.0->alibi_detect) (1.42.0)
Requirement already satisfied: google-pasta~=0.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7.0,>=2.0.0->alibi_detect) (0.2.0)
Requirement already satisfied: wheel~=0.35 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7.0,>=2.0.0->alibi_detect) (0.37.0)
Collecting typing-extensions~=3.7.4
Downloading typing_extensions-3.7.4.3-py3-none-any.whl (22 kB)
Collecting keras<2.7,>=2.6.0
Downloading keras-2.6.0-py2.py3-none-any.whl (1.3 MB)
[K |████████████████████████████████| 1.3 MB 56.8 MB/s
[?25hRequirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7.0,>=2.0.0->alibi_detect) (3.17.3)
Requirement already satisfied: termcolor~=1.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow<2.7.0,>=2.0.0->alibi_detect) (1.1.0)
Collecting wrapt~=1.12.1
Downloading wrapt-1.12.1.tar.gz (27 kB)
Collecting tensorflow-estimator<2.7,>=2.6.0
Downloading tensorflow_estimator-2.6.0-py2.py3-none-any.whl (462 kB)
[K |████████████████████████████████| 462 kB 64.4 MB/s
[?25hRequirement already satisfied: cached-property in /usr/local/lib/python3.7/dist-packages (from h5py~=3.1.0->tensorflow<2.7.0,>=2.0.0->alibi_detect) (1.5.2)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.7,>=2.6.0->tensorflow<2.7.0,>=2.0.0->alibi_detect) (1.8.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.7,>=2.6.0->tensorflow<2.7.0,>=2.0.0->alibi_detect) (1.0.1)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.7,>=2.6.0->tensorflow<2.7.0,>=2.0.0->alibi_detect) (57.4.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.7,>=2.6.0->tensorflow<2.7.0,>=2.0.0->alibi_detect) (0.4.6)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.7,>=2.6.0->tensorflow<2.7.0,>=2.0.0->alibi_detect) (0.6.1)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.7,>=2.6.0->tensorflow<2.7.0,>=2.0.0->alibi_detect) (3.3.6)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.7,>=2.6.0->tensorflow<2.7.0,>=2.0.0->alibi_detect) (1.35.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.7,>=2.6.0->tensorflow<2.7.0,>=2.0.0->alibi_detect) (4.2.4)
Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.7,>=2.6.0->tensorflow<2.7.0,>=2.0.0->alibi_detect) (4.7.2)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.7,>=2.6.0->tensorflow<2.7.0,>=2.0.0->alibi_detect) (0.2.8)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.7,>=2.6.0->tensorflow<2.7.0,>=2.0.0->alibi_detect) (1.3.0)
Requirement already satisfied: importlib-metadata>=4.4 in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard<2.7,>=2.6.0->tensorflow<2.7.0,>=2.0.0->alibi_detect) (4.8.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard<2.7,>=2.6.0->tensorflow<2.7.0,>=2.0.0->alibi_detect) (3.6.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard<2.7,>=2.6.0->tensorflow<2.7.0,>=2.0.0->alibi_detect) (0.4.8)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.7,>=2.6.0->tensorflow<2.7.0,>=2.0.0->alibi_detect) (3.1.1)
Requirement already satisfied: decorator in /usr/local/lib/python3.7/dist-packages (from tensorflow-probability<0.13.0,>=0.8.0->alibi_detect) (4.4.2)
Requirement already satisfied: cloudpickle>=1.3 in /usr/local/lib/python3.7/dist-packages (from tensorflow-probability<0.13.0,>=0.8.0->alibi_detect) (1.3.0)
Requirement already satisfied: dm-tree in /usr/local/lib/python3.7/dist-packages (from tensorflow-probability<0.13.0,>=0.8.0->alibi_detect) (0.1.6)
Collecting sacremoses
Downloading sacremoses-0.0.46-py3-none-any.whl (895 kB)
Collecting huggingface-hub<1.0,>=0.1.0
Downloading huggingface_hub-0.1.2-py3-none-any.whl (59 kB)
Collecting tokenizers<0.11,>=0.10.1
Downloading tokenizers-0.10.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (3.3 MB)
Collecting pyyaml>=5.1
Downloading PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (596 kB)
Requirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers<5.0.0,>=4.0.0->alibi_detect) (3.4.0)
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers<5.0.0,>=4.0.0->alibi_detect) (2019.12.20)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.7/dist-packages (from transformers<5.0.0,>=4.0.0->alibi_detect) (21.3)
Requirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers<5.0.0,>=4.0.0->alibi_detect) (7.1.2)
Building wheels for collected packages: clang, wrapt
Building wheel for clang (setup.py) ... done
Created wheel for clang: filename=clang-5.0-py3-none-any.whl size=30692 sha256=69cee2deeae4eea3fc21b7834b6f7e53c62d1d1a3f03c2f9f70d7bbc838fe073
Stored in directory: /root/.cache/pip/wheels/98/91/04/971b4c587cf47ae952b108949b46926f426c02832d120a082a
Building wheel for wrapt (setup.py) ... done
Created wheel for wrapt: filename=wrapt-1.12.1-cp37-cp37m-linux_x86_64.whl size=68720 sha256=0f1adf3709a65b06b1f80ff716f02ead72e12f9ff508ca71d48a643fc9cbca01
Stored in directory: /root/.cache/pip/wheels/62/76/4c/aa25851149f3f6d9785f6c869387ad82b3fd37582fa8147ac6
Successfully built clang wrapt
Installing collected packages: typing-extensions, pyyaml, wrapt, tokenizers, tensorflow-estimator, tensorboard, sacremoses, keras, huggingface-hub, flatbuffers, clang, transformers, tensorflow-probability, tensorflow, alibi-detect
Attempting uninstall: typing-extensions
Found existing installation: typing-extensions 3.10.0.2
Uninstalling typing-extensions-3.10.0.2:
Successfully uninstalled typing-extensions-3.10.0.2
Attempting uninstall: pyyaml
Found existing installation: PyYAML 3.13
Uninstalling PyYAML-3.13:
Successfully uninstalled PyYAML-3.13
Attempting uninstall: wrapt
Found existing installation: wrapt 1.13.3
Uninstalling wrapt-1.13.3:
Successfully uninstalled wrapt-1.13.3
Attempting uninstall: tensorflow-estimator
Found existing installation: tensorflow-estimator 2.7.0
Uninstalling tensorflow-estimator-2.7.0:
Successfully uninstalled tensorflow-estimator-2.7.0
Attempting uninstall: tensorboard
Found existing installation: tensorboard 2.7.0
Uninstalling tensorboard-2.7.0:
Successfully uninstalled tensorboard-2.7.0
Attempting uninstall: keras
Found existing installation: keras 2.7.0
Uninstalling keras-2.7.0:
Successfully uninstalled keras-2.7.0
Attempting uninstall: flatbuffers
Found existing installation: flatbuffers 2.0
Uninstalling flatbuffers-2.0:
Successfully uninstalled flatbuffers-2.0
Attempting uninstall: tensorflow-probability
Found existing installation: tensorflow-probability 0.15.0
Uninstalling tensorflow-probability-0.15.0:
Successfully uninstalled tensorflow-probability-0.15.0
Attempting uninstall: tensorflow
Found existing installation: tensorflow 2.7.0
Uninstalling tensorflow-2.7.0:
Successfully uninstalled tensorflow-2.7.0
Successfully installed alibi-detect-0.7.3 clang-5.0 flatbuffers-1.12 huggingface-hub-0.1.2 keras-2.6.0 pyyaml-6.0 sacremoses-0.0.46 tensorboard-2.6.0 tensorflow-2.6.2 tensorflow-estimator-2.6.0 tensorflow-probability-0.12.2 tokenizers-0.10.3 transformers-4.12.5 typing-extensions-3.7.4.3 wrapt-1.12.1
###Markdown
* Load the time-series of computer network intrusions (KDD99).
###Code
from alibi_detect.datasets import fetch_kdd
intrusions = fetch_kdd()
intrusions["target"].sum() / len(intrusions["target"])
###Output
_____no_output_____
###Markdown
* intrusions is a dictionary whose data key returns a 494021x18 matrix.* The 18 dimensions of the time-series are the continuous features of the dataset, mostly error rates and counts.
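A quick way to confirm this structure is to inspect the returned object directly (a minimal sketch; it assumes only the keys mentioned above, and `fetch_kdd` may expose more):

```python
# sketch: inspect the object returned by fetch_kdd
print(list(intrusions.keys()))      # should include 'data', 'target', 'feature_names'
print(intrusions["data"].shape)     # expected: (494021, 18)
print(intrusions["target"].shape)   # one label per instance
```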
###Code
intrusions["feature_names"]
###Output
_____no_output_____
###Markdown
* Load and run the SpectralResidual model that implements the method proposed by Microsoft.
###Code
from alibi_detect.od import SpectralResidual
od = SpectralResidual(
threshold=1.0, window_amp=20, window_local=20, n_est_points=10, n_grad_points=5
)
intrusion_outliers = od.predict(intrusions["data"])
###Output
_____no_output_____
###Markdown
* Then get the anomaly scores for each point in the time-series.
###Code
scores = od.score(intrusions["data"][:, 0])
###Output
_____no_output_____
###Markdown
* Plot the time-series (we'll arbitrarily choose the first dimension of the dataset).
###Code
import pandas as pd
pd.Series(intrusions["data"][:, 0]).plot();
###Output
_____no_output_____
###Markdown
* Plot the scores superimposed on top of the time-series.* Use a dual y-axis for plotting the scores and the data within the same plot.
###Code
import matplotlib
ax = pd.Series(intrusions["data"][:, 0], name="data").plot(
legend=False, figsize=(12, 6)
)
ax2 = ax.twinx()
ax = pd.Series(scores, name="scores").plot(
ax=ax2, legend=False, color="r", marker=matplotlib.markers.CARETDOWNBASE
)
ax.figure.legend(bbox_to_anchor=(1, 1), loc="upper left");
###Output
_____no_output_____
###Markdown
* Some points are not recognized as outliers since the periodic nature of the signal is removed by the Fourier filter.
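A simple follow-up (a sketch, not part of the original notebook) is to turn the raw scores into outlier flags by thresholding at a high quantile; the 99.9th percentile below is an arbitrary choice:

```python
import numpy as np

# sketch: flag the most extreme scores as outliers via a quantile threshold
# (nanquantile ignores any NaN values the detector may produce at the start of the series)
score_threshold = np.nanquantile(scores, 0.999)
outlier_idx = np.where(scores > score_threshold)[0]
print(len(outlier_idx), "points above the 99.9th-percentile score")
```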
###Code
###Output
_____no_output_____ |
1.research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb | ###Markdown
Machine Learning Pipeline - Feature EngineeringIn the following notebooks, we will go through the implementation of each one of the steps in the Machine Learning Pipeline. We will discuss:1. Data Analysis2. **Feature Engineering**3. Feature Selection4. Model Training5. Obtaining Predictions / ScoringWe will use the house price dataset available on [Kaggle.com](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data). See below for more details.=================================================================================================== Predicting Sale Price of HousesThe aim of the project is to build a machine learning model to predict the sale price of homes based on different explanatory variables describing aspects of residential houses. Why is this important? Predicting house prices is useful to identify fruitful investments, or to determine whether the price advertised for a house is over or under-estimated. What is the objective of the machine learning model?We aim to minimise the difference between the real price and the price estimated by our model. We will evaluate model performance with the:1. mean squared error (mse)2. root squared of the mean squared error (rmse)3. r-squared (r2). Reproducibility: Setting the seedWith the aim to ensure reproducibility between runs of the same notebook, but also between the research and production environment, for each step that includes some element of randomness, it is extremely important that we **set the seed**.
###Code
# to handle datasets
import pandas as pd
import numpy as np
# for plotting
import matplotlib.pyplot as plt
# for the yeo-johnson transformation
import scipy.stats as stats
# to divide train and test set
from sklearn.model_selection import train_test_split
# feature scaling
from sklearn.preprocessing import MinMaxScaler
# to save the trained scaler class
import joblib
# to visualise al the columns in the dataframe
pd.pandas.set_option('display.max_columns', None)
# load dataset
data = pd.read_csv('train.csv')
# rows and columns of the data
print(data.shape)
# visualise the dataset
data.head()
###Output
(1460, 81)
###Markdown
Separate dataset into train and testIt is important to separate our data into a training and a testing set. When we engineer features, some techniques learn parameters from data. It is important to learn these parameters only from the train set. This is to avoid over-fitting.Our feature engineering techniques will learn:- mean- mode- exponents for the yeo-johnson- category frequency- and category to number mappingsfrom the train set.**Separating the data into train and test involves randomness; therefore, we need to set the seed.**
###Code
# Let's separate into train and test set
# Remember to set the seed (random_state for this sklearn function)
X_train, X_test, y_train, y_test = train_test_split(
data.drop(['Id', 'SalePrice'], axis=1), # predictive variables
data['SalePrice'], # target
test_size=0.1, # portion of dataset to allocate to test set
random_state=0, # we are setting the seed here
)
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Feature EngineeringIn the following cells, we will engineer the variables of the House Price Dataset so that we tackle:1. Missing values2. Temporal variables3. Non-Gaussian distributed variables4. Categorical variables: remove rare labels5. Categorical variables: convert strings to numbers6. Put the variables in a similar scale TargetWe apply the logarithm
###Code
y_train = np.log(y_train)
y_test = np.log(y_test)
###Output
_____no_output_____
###Markdown
Missing values Categorical variablesWe will replace missing values with the string "Missing" in those variables with a lot of missing data. For the variables with only a few missing observations, we will instead replace missing data with the most frequent category. This is common practice.
###Code
# let's identify the categorical variables
# we will capture those of type object
cat_vars = [var for var in data.columns if data[var].dtype == 'O']
# MSSubClass is also categorical by definition, despite its numeric values
# (you can find the definitions of the variables in the data_description.txt
# file available on Kaggle, in the same website where you downloaded the data)
# lets add MSSubClass to the list of categorical variables
cat_vars = cat_vars + ['MSSubClass']
# cast all variables as categorical
X_train[cat_vars] = X_train[cat_vars].astype('O')
X_test[cat_vars] = X_test[cat_vars].astype('O')
# number of categorical variables
len(cat_vars)
# make a list of the categorical variables that contain missing values
cat_vars_with_na = [
var for var in cat_vars
if X_train[var].isnull().sum() > 0
]
# print percentage of missing values per variable
X_train[cat_vars_with_na].isnull().mean().sort_values(ascending=False)
# variables to impute with the string missing
with_string_missing = [
var for var in cat_vars_with_na if X_train[var].isnull().mean() > 0.1]
# variables to impute with the most frequent category
with_frequent_category = [
var for var in cat_vars_with_na if X_train[var].isnull().mean() < 0.1]
with_string_missing
# replace missing values with new label: "Missing"
X_train[with_string_missing] = X_train[with_string_missing].fillna('Missing')
X_test[with_string_missing] = X_test[with_string_missing].fillna('Missing')
for var in with_frequent_category:
# there can be more than 1 mode in a variable
# we take the first one with [0]
mode = X_train[var].mode()[0]
print(var, mode)
X_train[var].fillna(mode, inplace=True)
X_test[var].fillna(mode, inplace=True)
# check that we have no missing information in the engineered variables
X_train[cat_vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in cat_vars_with_na if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Numerical variablesTo engineer missing values in numerical variables, we will:- add a binary missing indicator variable- and then replace the missing values in the original variable with the mean
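As a toy illustration of this strategy (made-up values, not taken from the dataset), the indicator records where the value was missing and the mean fills the gap:

```python
# sketch with hypothetical values: missing indicator + mean imputation
toy = pd.Series([10.0, np.nan, 30.0])
toy_na = np.where(toy.isnull(), 1, 0)   # -> array([0, 1, 0])
toy_filled = toy.fillna(toy.mean())     # mean of 10 and 30 is 20 -> [10.0, 20.0, 30.0]
```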
###Code
# now let's identify the numerical variables
num_vars = [
var for var in X_train.columns if var not in cat_vars and var != 'SalePrice'
]
# number of numerical variables
len(num_vars)
# make a list with the numerical variables that contain missing values
vars_with_na = [
var for var in num_vars
if X_train[var].isnull().sum() > 0
]
# print percentage of missing values per variable
X_train[vars_with_na].isnull().mean()
# replace missing values as we described above
for var in vars_with_na:
# calculate the mean using the train set
mean_val = X_train[var].mean()
print(var, mean_val)
# add binary missing indicator (in train and test)
X_train[var + '_na'] = np.where(X_train[var].isnull(), 1, 0)
X_test[var + '_na'] = np.where(X_test[var].isnull(), 1, 0)
# replace missing values by the mean
# (in train and test)
X_train[var].fillna(mean_val, inplace=True)
X_test[var].fillna(mean_val, inplace=True)
# check that we have no more missing values in the engineered variables
X_train[vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in vars_with_na if X_test[var].isnull().sum() > 0]
# check the binary missing indicator variables
X_train[['LotFrontage_na', 'MasVnrArea_na', 'GarageYrBlt_na']].head()
###Output
_____no_output_____
###Markdown
Temporal variables Capture elapsed timeWe learned in the previous notebook that there are 4 variables referring to the years in which the house or the garage was built or remodeled. We will capture the time elapsed between those variables and the year in which the house was sold:
###Code
def elapsed_years(df, var):
# capture difference between the year variable
# and the year in which the house was sold
df[var] = df['YrSold'] - df[var]
return df
for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:
X_train = elapsed_years(X_train, var)
X_test = elapsed_years(X_test, var)
# now we drop YrSold
X_train.drop(['YrSold'], axis=1, inplace=True)
X_test.drop(['YrSold'], axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Numerical variable transformation Logarithmic transformationIn the previous notebook, we observed that the numerical variables are not normally distributed.We will apply the logarithm to the positive numerical variables in order to get a more Gaussian-like distribution.
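Since the logarithm is only defined for strictly positive values, a quick sanity check (a sketch added here, not part of the original flow) is to confirm that none of the chosen variables contain zeros or negative values before transforming:

```python
# sketch: count non-positive values per variable before log-transforming
for var in ["LotFrontage", "1stFlrSF", "GrLivArea"]:
    print(var, (X_train[var] <= 0).sum())
```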
###Code
for var in ["LotFrontage", "1stFlrSF", "GrLivArea"]:
X_train[var] = np.log(X_train[var])
X_test[var] = np.log(X_test[var])
# check that test set does not contain null values in the engineered variables
[var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_test[var].isnull().sum() > 0]
# same for train set
[var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_train[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Yeo-Johnson transformationWe will apply the Yeo-Johnson transformation to LotArea.
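For reference, the Yeo-Johnson transform of a value $y$ with exponent $\lambda$ is

$$
\psi(y;\lambda)=
\begin{cases}
\frac{(y+1)^{\lambda}-1}{\lambda} & y\ge 0,\ \lambda\ne 0\\
\ln(y+1) & y\ge 0,\ \lambda=0\\
-\frac{(1-y)^{2-\lambda}-1}{2-\lambda} & y<0,\ \lambda\ne 2\\
-\ln(1-y) & y<0,\ \lambda=2
\end{cases}
$$

Unlike the Box-Cox transform, it is defined for zero and negative values, which is why no positivity check is needed for LotArea.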
###Code
# the yeo-johnson transformation learns the best exponent to transform the variable
# it needs to learn it from the train set:
X_train['LotArea'], param = stats.yeojohnson(X_train['LotArea'])
# and then apply the transformation to the test set with the same
# parameter: note how this time we pass param as an argument to the
# yeo-johnson
X_test['LotArea'] = stats.yeojohnson(X_test['LotArea'], lmbda=param)
print(param)
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_train.columns if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Binarize skewed variablesThere were a few very skewed variables; we will transform those into binary variables.
###Code
skewed = [
'BsmtFinSF2', 'LowQualFinSF', 'EnclosedPorch',
'3SsnPorch', 'ScreenPorch', 'MiscVal'
]
for var in skewed:
# map the variable values into 0 and 1
X_train[var] = np.where(X_train[var]==0, 0, 1)
X_test[var] = np.where(X_test[var]==0, 0, 1)
###Output
_____no_output_____
###Markdown
Categorical variables Apply mappingsThese are variables whose values have an assigned order, related to quality.
###Code
# re-map strings to numbers, which determine quality
qual_mappings = {'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5, 'Missing': 0, 'NA': 0}
qual_vars = ['ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond',
'HeatingQC', 'KitchenQual', 'FireplaceQu',
'GarageQual', 'GarageCond',
]
for var in qual_vars:
X_train[var] = X_train[var].map(qual_mappings)
X_test[var] = X_test[var].map(qual_mappings)
exposure_mappings = {'No': 1, 'Mn': 2, 'Av': 3, 'Gd': 4}
var = 'BsmtExposure'
X_train[var] = X_train[var].map(exposure_mappings)
X_test[var] = X_test[var].map(exposure_mappings)
finish_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'LwQ': 2, 'Rec': 3, 'BLQ': 4, 'ALQ': 5, 'GLQ': 6}
finish_vars = ['BsmtFinType1', 'BsmtFinType2']
for var in finish_vars:
X_train[var] = X_train[var].map(finish_mappings)
X_test[var] = X_test[var].map(finish_mappings)
garage_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'RFn': 2, 'Fin': 3}
var = 'GarageFinish'
X_train[var] = X_train[var].map(garage_mappings)
X_test[var] = X_test[var].map(garage_mappings)
fence_mappings = {'Missing': 0, 'NA': 0, 'MnWw': 1, 'GdWo': 2, 'MnPrv': 3, 'GdPrv': 4}
var = 'Fence'
X_train[var] = X_train[var].map(fence_mappings)
X_test[var] = X_test[var].map(fence_mappings)
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Removing Rare LabelsFor the remaining categorical variables, we will group those categories that are present in less than 1% of the observations. That is, all values of categorical variables that are shared by fewer than 1% of the houses will be replaced by the string "Rare".
###Code
# capture all quality variables
qual_vars = qual_vars + finish_vars + ['BsmtExposure','GarageFinish','Fence']
# capture the remaining categorical variables
# (those that we did not re-map)
cat_others = [
var for var in cat_vars if var not in qual_vars
]
len(cat_others)
def find_frequent_labels(df, var, rare_perc):
# function finds the labels that are shared by more than
# a certain % of the houses in the dataset
df = df.copy()
tmp = df.groupby(var)[var].count() / len(df)
return tmp[tmp > rare_perc].index
for var in cat_others:
# find the frequent categories
frequent_ls = find_frequent_labels(X_train, var, 0.01)
print(var, frequent_ls)
print()
# replace rare categories by the string "Rare"
X_train[var] = np.where(X_train[var].isin(
frequent_ls), X_train[var], 'Rare')
X_test[var] = np.where(X_test[var].isin(
frequent_ls), X_test[var], 'Rare')
###Output
MSZoning Index(['FV', 'RH', 'RL', 'RM'], dtype='object', name='MSZoning')
Street Index(['Pave'], dtype='object', name='Street')
Alley Index(['Grvl', 'Missing', 'Pave'], dtype='object', name='Alley')
LotShape Index(['IR1', 'IR2', 'Reg'], dtype='object', name='LotShape')
LandContour Index(['Bnk', 'HLS', 'Low', 'Lvl'], dtype='object', name='LandContour')
Utilities Index(['AllPub'], dtype='object', name='Utilities')
LotConfig Index(['Corner', 'CulDSac', 'FR2', 'Inside'], dtype='object', name='LotConfig')
LandSlope Index(['Gtl', 'Mod'], dtype='object', name='LandSlope')
Neighborhood Index(['Blmngtn', 'BrDale', 'BrkSide', 'ClearCr', 'CollgCr', 'Crawfor',
'Edwards', 'Gilbert', 'IDOTRR', 'MeadowV', 'Mitchel', 'NAmes', 'NWAmes',
'NoRidge', 'NridgHt', 'OldTown', 'SWISU', 'Sawyer', 'SawyerW',
'Somerst', 'StoneBr', 'Timber'],
dtype='object', name='Neighborhood')
Condition1 Index(['Artery', 'Feedr', 'Norm', 'PosN', 'RRAn'], dtype='object', name='Condition1')
Condition2 Index(['Norm'], dtype='object', name='Condition2')
BldgType Index(['1Fam', '2fmCon', 'Duplex', 'Twnhs', 'TwnhsE'], dtype='object', name='BldgType')
HouseStyle Index(['1.5Fin', '1Story', '2Story', 'SFoyer', 'SLvl'], dtype='object', name='HouseStyle')
RoofStyle Index(['Gable', 'Hip'], dtype='object', name='RoofStyle')
RoofMatl Index(['CompShg'], dtype='object', name='RoofMatl')
Exterior1st Index(['AsbShng', 'BrkFace', 'CemntBd', 'HdBoard', 'MetalSd', 'Plywood',
'Stucco', 'VinylSd', 'Wd Sdng', 'WdShing'],
dtype='object', name='Exterior1st')
Exterior2nd Index(['AsbShng', 'BrkFace', 'CmentBd', 'HdBoard', 'MetalSd', 'Plywood',
'Stucco', 'VinylSd', 'Wd Sdng', 'Wd Shng'],
dtype='object', name='Exterior2nd')
MasVnrType Index(['BrkFace', 'None', 'Stone'], dtype='object', name='MasVnrType')
Foundation Index(['BrkTil', 'CBlock', 'PConc', 'Slab'], dtype='object', name='Foundation')
Heating Index(['GasA', 'GasW'], dtype='object', name='Heating')
CentralAir Index(['N', 'Y'], dtype='object', name='CentralAir')
Electrical Index(['FuseA', 'FuseF', 'SBrkr'], dtype='object', name='Electrical')
Functional Index(['Min1', 'Min2', 'Mod', 'Typ'], dtype='object', name='Functional')
GarageType Index(['Attchd', 'Basment', 'BuiltIn', 'Detchd'], dtype='object', name='GarageType')
PavedDrive Index(['N', 'P', 'Y'], dtype='object', name='PavedDrive')
PoolQC Index(['Missing'], dtype='object', name='PoolQC')
MiscFeature Index(['Missing', 'Shed'], dtype='object', name='MiscFeature')
SaleType Index(['COD', 'New', 'WD'], dtype='object', name='SaleType')
SaleCondition Index(['Abnorml', 'Family', 'Normal', 'Partial'], dtype='object', name='SaleCondition')
MSSubClass Int64Index([20, 30, 50, 60, 70, 75, 80, 85, 90, 120, 160, 190], dtype='int64', name='MSSubClass')
###Markdown
Encoding of categorical variablesNext, we need to transform the strings of the categorical variables into numbers. We will do this in a way that captures the monotonic relationship between the label and the target.
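As a toy illustration (hypothetical values), categories are ranked by their mean target and then mapped to consecutive integers, so a higher integer corresponds to a higher average sale price:

```python
# sketch of target-guided ordinal encoding on made-up data
toy = pd.DataFrame({"quality": ["low", "high", "low", "mid"], "price": [1, 9, 2, 5]})
order = toy.groupby("quality")["price"].mean().sort_values().index
mapping = {k: i for i, k in enumerate(order)}   # {'low': 0, 'mid': 1, 'high': 2}
toy["quality_encoded"] = toy["quality"].map(mapping)
```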
###Code
# this function will assign discrete values to the strings of the variables,
# so that the smaller value corresponds to the category that shows the smaller
# mean house sale price
def replace_categories(train, test, y_train, var, target):
tmp = pd.concat([train, y_train], axis=1)  # use the train argument rather than the global X_train
# order the categories in a variable from that with the lowest
# house sale price, to that with the highest
ordered_labels = tmp.groupby([var])[target].mean().sort_values().index
# create a dictionary of ordered categories to integer values
ordinal_label = {k: i for i, k in enumerate(ordered_labels, 0)}
print(var, ordinal_label)
print()
# use the dictionary to replace the categorical strings by integers
train[var] = train[var].map(ordinal_label)
test[var] = test[var].map(ordinal_label)
for var in cat_others:
replace_categories(X_train, X_test, y_train, var, 'SalePrice')
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_test.columns if X_test[var].isnull().sum() > 0]
# let me show you what I mean by monotonic relationship
# between labels and target
def analyse_vars(train, y_train, var):
# function plots median house sale price per encoded
# category
tmp = pd.concat([train, np.log(y_train)], axis=1)  # use the train argument rather than the global X_train
tmp.groupby(var)['SalePrice'].median().plot.bar()
plt.title(var)
plt.ylim(2.2, 2.6)
plt.ylabel('SalePrice')
plt.show()
for var in cat_others:
analyse_vars(X_train, y_train, var)
###Output
_____no_output_____
###Markdown
The monotonic relationship is particularly clear for the variables MSZoning and Neighborhood. Note how the higher the integer that now represents the category, the higher the mean house sale price. (Remember that the target is log-transformed, which is why the differences seem so small.) Feature ScalingFor use in linear models, features need to be scaled. We will scale features to the range defined by their minimum and maximum values:
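For each feature, the MinMaxScaler applies

$$
x_{\text{scaled}} = \frac{x - x_{\min}}{x_{\max} - x_{\min}},
$$

where $x_{\min}$ and $x_{\max}$ are learned from the train set only, so the test set is transformed with exactly the same parameters.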
###Code
# create scaler
scaler = MinMaxScaler()
# fit the scaler to the train set
scaler.fit(X_train)
# transform the train and test set
# sklearn returns numpy arrays, so we wrap the
# array with a pandas dataframe
X_train = pd.DataFrame(
scaler.transform(X_train),
columns=X_train.columns
)
X_test = pd.DataFrame(
scaler.transform(X_test),
columns=X_train.columns
)
X_train.head()
# let's now save the train and test sets for the next notebook!
X_train.to_csv('xtrain.csv', index=False)
X_test.to_csv('xtest.csv', index=False)
y_train.to_csv('ytrain.csv', index=False)
y_test.to_csv('ytest.csv', index=False)
# now let's save the scaler
joblib.dump(scaler, 'minmax_scaler.joblib')
###Output
_____no_output_____ |
houseprice_prediction/code/Lets-Model-it.ipynb | ###Markdown
Flow- Sketching out Linear Regression- [1st Modeling](2.-1차-Modeling)- [2nd Modeling](2차-Modeling) 1. Sketching out Linear Regression
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
import warnings
warnings.filterwarnings('ignore')
df = pd.read_csv("../dataset/train.csv")
y_train = df['price']
x_train = df.drop(columns=['price'])
x_train.shape, y_train.shape
df_test = pd.read_csv("../dataset/test.csv")
df_test.tail()
df_test['date'] = df_test['date'].map(lambda x: int(x.split('T')[0]))
df_test['date'].min()
###Output
_____no_output_____
###Markdown
Baseline linear model (no added features)- feature drop - id, date, zipcode (drop)- OLS
###Code
model_ols = sm.OLS.from_formula("np.log1p(price) ~ bedrooms + bathrooms + sqft_living + sqft_lot + floors + C(waterfront) \
+ view + condition + grade + sqft_above + sqft_basement + yr_built \
+ yr_renovated + lat + long + sqft_living15 + sqft_lot15 - 1", df)
result = model_ols.fit()
print(result.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: np.log1p(price) R-squared: 0.769
Model: OLS Adj. R-squared: 0.769
Method: Least Squares F-statistic: 3131.
Date: Tue, 02 Apr 2019 Prob (F-statistic): 0.00
Time: 19:13:29 Log-Likelihood: -689.49
No. Observations: 15035 AIC: 1413.
Df Residuals: 15018 BIC: 1542.
Df Model: 16
Covariance Type: nonrobust
====================================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------------
C(waterfront)[0] -52.4150 2.389 -21.936 0.000 -57.099 -47.731
C(waterfront)[1] -52.0795 2.390 -21.791 0.000 -56.764 -47.395
bedrooms -0.0102 0.003 -3.384 0.001 -0.016 -0.004
bathrooms 0.0672 0.005 13.579 0.000 0.058 0.077
sqft_living 9.45e-05 3.44e-06 27.503 0.000 8.78e-05 0.000
sqft_lot 4.865e-07 7.14e-08 6.817 0.000 3.47e-07 6.26e-07
floors 0.0712 0.005 13.086 0.000 0.061 0.082
view 0.0606 0.003 18.923 0.000 0.054 0.067
condition 0.0703 0.004 19.750 0.000 0.063 0.077
grade 0.1632 0.003 50.555 0.000 0.157 0.170
sqft_above 3.774e-05 3.43e-06 11.014 0.000 3.1e-05 4.45e-05
sqft_basement 5.677e-05 4e-06 14.190 0.000 4.89e-05 6.46e-05
yr_built -0.0032 0.000 -28.974 0.000 -0.003 -0.003
yr_renovated 4.202e-05 5.54e-06 7.589 0.000 3.12e-05 5.29e-05
lat 1.3420 0.016 84.516 0.000 1.311 1.373
long -0.0462 0.018 -2.578 0.010 -0.081 -0.011
sqft_living15 9.957e-05 5.16e-06 19.282 0.000 8.94e-05 0.000
sqft_lot15 -2.323e-07 1.11e-07 -2.100 0.036 -4.49e-07 -1.55e-08
==============================================================================
Omnibus: 288.374 Durbin-Watson: 1.985
Prob(Omnibus): 0.000 Jarque-Bera (JB): 613.038
Skew: -0.021 Prob(JB): 7.60e-134
Kurtosis: 3.988 Cond. No. 1.52e+17
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The smallest eigenvalue is 1.74e-21. This might indicate that there are
strong multicollinearity problems or that the design matrix is singular.
###Markdown
- Using np.log1p(price) instead of price gives a higher R-squared..- The condition number is far too large. 1. std err : standard error; 2. t : (w_hat - w) / std_err = t3. P>|t| : the p-value for the null hypothesis that "w is 0"; the significance of t4. F statistics : --- Features to try based on the EDA- zipcodes located between latitude 47.5 and 47.8 --> city-center vs. suburban areas- ratio of living area (sqft_living) to lot area (sqft_lot)- ratio of above-ground area (sqft_above) to living area (sqft_living)- whether the house was renovated (yr_renovated)- whether it was built in an election year (election_year)
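A quick sketch of a few of the candidate features listed above (the variable names here are illustrative, and the election-year rule simply follows the `yr_built % 4 == 0` definition used later in this notebook):

```python
# sketch: candidate features from the EDA list (illustrative names)
ratio_living_to_lot = df["sqft_living"] / df["sqft_lot"]
ratio_above_to_living = df["sqft_above"] / df["sqft_living"]
is_election_year = (df["yr_built"] % 4 == 0).astype(int)
```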
###Code
# first, let's reduce the correlation among the independent variables a bit
# sqft_above, sqft_basement drop
model_ols = sm.OLS.from_formula("np.log1p(price) ~ bedrooms + bathrooms + sqft_living + sqft_lot + floors + C(waterfront) \
+ view + condition + grade + yr_built \
+ yr_renovated + lat + long + sqft_living15 + sqft_lot15 - 1", df)
result = model_ols.fit()
print(result.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: np.log1p(price) R-squared: 0.769
Model: OLS Adj. R-squared: 0.769
Method: Least Squares F-statistic: 3337.
Date: Fri, 05 Apr 2019 Prob (F-statistic): 0.00
Time: 22:33:55 Log-Likelihood: -693.64
No. Observations: 15035 AIC: 1419.
Df Residuals: 15019 BIC: 1541.
Df Model: 15
Covariance Type: nonrobust
====================================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------------
C(waterfront)[0] -53.9802 2.327 -23.193 0.000 -58.542 -49.418
C(waterfront)[1] -53.6479 2.328 -23.049 0.000 -58.210 -49.086
bedrooms -0.0101 0.003 -3.350 0.001 -0.016 -0.004
bathrooms 0.0694 0.005 14.200 0.000 0.060 0.079
sqft_living 0.0001 5.06e-06 27.444 0.000 0.000 0.000
sqft_lot 4.809e-07 7.14e-08 6.741 0.000 3.41e-07 6.21e-07
floors 0.0643 0.005 13.168 0.000 0.055 0.074
view 0.0621 0.003 19.631 0.000 0.056 0.068
condition 0.0711 0.004 20.022 0.000 0.064 0.078
grade 0.1618 0.003 50.697 0.000 0.156 0.168
yr_built -0.0032 0.000 -28.930 0.000 -0.003 -0.003
yr_renovated 4.217e-05 5.54e-06 7.613 0.000 3.13e-05 5.3e-05
lat 1.3475 0.016 85.480 0.000 1.317 1.378
long -0.0569 0.018 -3.241 0.001 -0.091 -0.022
sqft_living15 9.732e-05 5.11e-06 19.061 0.000 8.73e-05 0.000
sqft_lot15 -2.379e-07 1.11e-07 -2.151 0.032 -4.55e-07 -2.11e-08
==============================================================================
Omnibus: 282.043 Durbin-Watson: 1.986
Prob(Omnibus): 0.000 Jarque-Bera (JB): 592.804
Skew: -0.026 Prob(JB): 1.88e-129
Kurtosis: 3.971 Cond. No. 8.26e+07
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 8.26e+07. This might indicate that there are
strong multicollinearity or other numerical problems.
###Markdown
- The condition number dropped a lot- Let's apply scaling
###Code
model_ols = sm.OLS.from_formula("np.log1p(price) ~ bedrooms + bathrooms + scale(sqft_living) + scale(sqft_lot) + floors + C(waterfront) \
+ view + condition + grade + scale(lat) + scale(long) + scale(sqft_living15) + scale(sqft_lot15) - 1", df)
result = model_ols.fit()
print(result.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: np.log1p(price) R-squared: 0.752
Model: OLS Adj. R-squared: 0.751
Method: Least Squares F-statistic: 3496.
Date: Fri, 05 Apr 2019 Prob (F-statistic): 0.00
Time: 22:37:40 Log-Likelihood: -1246.8
No. Observations: 15035 AIC: 2522.
Df Residuals: 15021 BIC: 2628.
Df Model: 13
Covariance Type: nonrobust
========================================================================================
coef std err t P>|t| [0.025 0.975]
----------------------------------------------------------------------------------------
C(waterfront)[0] 11.5416 0.030 381.223 0.000 11.482 11.601
C(waterfront)[1] 11.8993 0.041 293.033 0.000 11.820 11.979
bedrooms -0.0052 0.003 -1.687 0.092 -0.011 0.001
bathrooms 0.0253 0.005 5.296 0.000 0.016 0.035
scale(sqft_living) 0.1541 0.005 32.258 0.000 0.145 0.163
scale(sqft_lot) 0.0238 0.003 7.562 0.000 0.018 0.030
floors 0.0291 0.005 5.913 0.000 0.019 0.039
view 0.0750 0.003 23.024 0.000 0.069 0.081
condition 0.0988 0.003 28.372 0.000 0.092 0.106
grade 0.1397 0.003 43.293 0.000 0.133 0.146
scale(lat) 0.2015 0.002 91.562 0.000 0.197 0.206
scale(long) -0.0344 0.002 -14.212 0.000 -0.039 -0.030
scale(sqft_living15) 0.0659 0.004 18.015 0.000 0.059 0.073
scale(sqft_lot15) -0.0075 0.003 -2.367 0.018 -0.014 -0.001
==============================================================================
Omnibus: 200.039 Durbin-Watson: 1.994
Prob(Omnibus): 0.000 Jarque-Bera (JB): 331.247
Skew: 0.106 Prob(JB): 1.18e-72
Kurtosis: 3.696 Cond. No. 212.
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
- After scaling lat/long as well, the condition number dropped sharply.
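For context (an added note): the condition number measures how ill-conditioned the design matrix $X$ is, defined as the ratio of its largest to smallest singular value,

$$
\kappa(X) = \frac{\sigma_{\max}(X)}{\sigma_{\min}(X)},
$$

so putting the features on comparable scales, as done above, directly shrinks it.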
###Code
df2 = df.copy()
# price per unit area (in practice, price per square foot)
# sqft_living
df2['per_price'] = df2['price'] / df2['sqft_living']
price_per_zipcode = df2.groupby(['zipcode'])['per_price'].agg({'zipprice_mean' : 'mean', 'zipprice_std' : np.std}).reset_index()
price_per_zipcode.tail()
price_per_zipcode['zipprice_mean'].describe()
# if the price per square foot is 317 or more, could it be the city center?
# merge df2 and price_per_zipcode
df2 = df2.merge(price_per_zipcode, how='left', on='zipcode')
df2.tail()
idx = df2[(df2.zipprice_mean > 317.) & (df2.lat >= 47.5) & (df2.lat < 47.8)].index
df2['center_region'] = 0
df2.loc[idx, 'center_region'] = 1
df2.center_region
model_ols = sm.OLS.from_formula("np.log1p(price) ~ bedrooms + bathrooms + scale(sqft_living) + scale(sqft_lot) + floors + C(waterfront) \
+ view + condition + grade + scale(lat) + scale(long) + scale(sqft_living15) + scale(sqft_lot15) + C(center_region) - 1", df2)
result = model_ols.fit()
print(result.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: np.log1p(price) R-squared: 0.826
Model: OLS Adj. R-squared: 0.826
Method: Least Squares F-statistic: 5106.
Date: Tue, 09 Apr 2019 Prob (F-statistic): 0.00
Time: 19:46:04 Log-Likelihood: 1445.4
No. Observations: 15035 AIC: -2861.
Df Residuals: 15020 BIC: -2747.
Df Model: 14
Covariance Type: nonrobust
=========================================================================================
coef std err t P>|t| [0.025 0.975]
-----------------------------------------------------------------------------------------
C(waterfront)[0] 11.7124 0.025 461.093 0.000 11.663 11.762
C(waterfront)[1] 12.1809 0.034 356.885 0.000 12.114 12.248
C(center_region)[T.1] 0.3974 0.005 80.426 0.000 0.388 0.407
bedrooms 0.0060 0.003 2.296 0.022 0.001 0.011
bathrooms 0.0258 0.004 6.474 0.000 0.018 0.034
scale(sqft_living) 0.1572 0.004 39.380 0.000 0.149 0.165
scale(sqft_lot) 0.0249 0.003 9.451 0.000 0.020 0.030
floors -0.0194 0.004 -4.660 0.000 -0.028 -0.011
view 0.0603 0.003 22.086 0.000 0.055 0.066
condition 0.0732 0.003 24.988 0.000 0.067 0.079
grade 0.1203 0.003 44.423 0.000 0.115 0.126
scale(lat) 0.1560 0.002 81.076 0.000 0.152 0.160
scale(long) 0.0325 0.002 14.872 0.000 0.028 0.037
scale(sqft_living15) 0.0748 0.003 24.454 0.000 0.069 0.081
scale(sqft_lot15) -0.0034 0.003 -1.279 0.201 -0.009 0.002
==============================================================================
Omnibus: 540.426 Durbin-Watson: 2.010
Prob(Omnibus): 0.000 Jarque-Bera (JB): 1607.665
Skew: -0.026 Prob(JB): 0.00
Kurtosis: 4.601 Cond. No. 213.
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
- The condition number did not change much. (As far as I know, it should be brought below 25.)- The R-squared value went up (0.75 --> 0.82)
###Code
df2['is_renovated'] = df2['yr_renovated'].map(lambda x : 0 if x == 0 else 1)
df2['is_election_year'] = df2['yr_built'].map(lambda x : 1 if x % 4 == 0 else 0)
model_ols = sm.OLS.from_formula("np.log1p(price) ~ bedrooms + bathrooms + scale(sqft_living) + scale(sqft_lot) + floors + C(waterfront) \
+ I(view / condition) + grade + scale(lat) + scale(long) + scale(sqft_living15) \
+ scale(sqft_lot15) + C(center_region) + C(is_renovated) + C(is_election_year) - 1", df2)
result = model_ols.fit()
print(result.summary())
model_ols = sm.OLS.from_formula("np.log1p(price) ~ bedrooms + bathrooms + scale(sqft_living) + I(scale(sqft_living / sqft_living15)) + scale(sqft_lot) + floors + C(waterfront) \
+ I(view / condition) + grade + scale(lat) + scale(long) \
+ scale(sqft_lot15) + C(center_region) + C(is_renovated) + C(is_election_year) - 1", df2)
result = model_ols.fit()
print(result.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: np.log1p(price) R-squared: 0.815
Model: OLS Adj. R-squared: 0.814
Method: Least Squares F-statistic: 4401.
Date: Tue, 09 Apr 2019 Prob (F-statistic): 0.00
Time: 19:58:42 Log-Likelihood: 955.22
No. Observations: 15035 AIC: -1878.
Df Residuals: 15019 BIC: -1757.
Df Model: 15
Covariance Type: nonrobust
=========================================================================================================
coef std err t P>|t| [0.025 0.975]
---------------------------------------------------------------------------------------------------------
C(waterfront)[0] 11.9256 0.023 525.088 0.000 11.881 11.970
C(waterfront)[1] 12.3991 0.033 380.179 0.000 12.335 12.463
C(center_region)[T.1] 0.4056 0.005 79.747 0.000 0.396 0.416
C(is_renovated)[T.1] 0.0976 0.009 10.365 0.000 0.079 0.116
C(is_election_year)[T.1] 0.0040 0.004 0.919 0.358 -0.004 0.012
bedrooms 0.0137 0.003 5.080 0.000 0.008 0.019
bathrooms 0.0194 0.004 4.696 0.000 0.011 0.027
scale(sqft_living) 0.2291 0.004 54.238 0.000 0.221 0.237
I(scale(sqft_living / sqft_living15)) -0.0365 0.002 -14.925 0.000 -0.041 -0.032
scale(sqft_lot) 0.0236 0.003 8.687 0.000 0.018 0.029
floors -0.0437 0.004 -10.392 0.000 -0.052 -0.035
I(view / condition) 0.1829 0.009 19.904 0.000 0.165 0.201
grade 0.1275 0.003 46.369 0.000 0.122 0.133
scale(lat) 0.1542 0.002 77.646 0.000 0.150 0.158
scale(long) 0.0356 0.002 15.821 0.000 0.031 0.040
scale(sqft_lot15) -0.0009 0.003 -0.311 0.756 -0.006 0.005
==============================================================================
Omnibus: 759.550 Durbin-Watson: 2.004
Prob(Omnibus): 0.000 Jarque-Bera (JB): 2703.615
Skew: -0.122 Prob(JB): 0.00
Kurtosis: 5.063 Cond. No. 179.
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
--- --- 2. 1st Modeling
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
import warnings
warnings.filterwarnings('ignore')
df_train = pd.read_csv("../dataset/train.csv")
df_test = pd.read_csv("../dataset/test.csv")
train_id = df_train.id
test_id = df_test.id
df_train.shape, df_test.shape
# df_train[df_train.sqft_living > 10000].index
# df_train = df_train.drop(index=8912)
# data preprocessing
# make the test set match the same format as train
# whether a house is in the city center or not : is_center
# def is_center(train, test):
# #  price per unit area (in practice, price per square foot)
# # sqft_living
# train['per_price'] = train['price'] / train['sqft_living']
# price_per_zipcode = train.groupby(['zipcode'])['per_price'].agg({'zipprice_mean' : 'mean', 'zipprice_std' : np.std}).reset_index()
# #  if the price per square foot is 317 or more, could it be the city center?
# # merge df2 and price_per_zipcode
# train = train.merge(price_per_zipcode, how='left', on='zipcode')
# test = test.merge(price_per_zipcode, how='left', on='zipcode')
# #  317.146477 corresponds to the top 25% of price per square foot across zipcodes
# train_idx = train[(train.zipprice_mean >= 317.146477) & (train.lat >= 47.5) & (train.lat < 47.8)].index
# test_idx = test[(test.zipprice_mean >= 317.146477) & (test.lat >= 47.5) & (test.lat < 47.8)].index
# train['is_center'] = 0
# test['is_center'] = 0
# train.loc[train_idx, 'is_center'] = 1
# test.loc[test_idx, 'is_center'] = 1
# train.drop(columns=['zipprice_mean','per_price','zipprice_std'], inplace=True)
# test.drop(columns=['zipprice_mean','zipprice_std'], inplace=True)
# return train, test
# whether the house has been renovated or not
def is_renovated(train, test):
train['is_renovated'] = train['yr_renovated'].map(lambda x: 0 if x == 0 else 1)
test['is_renovated'] = test['yr_renovated'].map(lambda x: 0 if x == 0 else 1)
return train, test
# how many years since the house was built
def years_of_construction(train, test):
train['years_of_construction'] = 2015 - train['yr_built']
test['years_of_construction'] = 2015 - test['yr_built']
return train, test
# log-scale 'price'
# note: if we do this, apply the inverse transform (np.expm1) at submission time
def target_logscaling(train):
train['log_price'] = np.log1p(train['price'])
return train
def buy_year_dummy(train, test):
train['buy_year'] = train['date'].map(lambda x : int(x.split('T')[0][:4]))
test['buy_year'] = test['date'].map(lambda x : int(x.split('T')[0][:4]))
# derive the dummies from the extracted year rather than the raw date
train['buy_2014'] = train['buy_year'].map(lambda x : 1 if x == 2014 else 0)
train['buy_2015'] = train['buy_year'].map(lambda x : 1 if x == 2015 else 0)
test['buy_2014'] = test['buy_year'].map(lambda x : 1 if x == 2014 else 0)
test['buy_2015'] = test['buy_year'].map(lambda x : 1 if x == 2015 else 0)
return train, test
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import RobustScaler
cols = ['sqft_living', 'sqft_lot', 'sqft_living15', 'sqft_lot15', 'lat', 'long','years_of_construction']
def make_normal(train, test, cols):
"""
Scale to zero mean (standardization).
Normalize the test set using the train set's mean and standard deviation!!
Target columns : ['sqft_living', 'sqft_lot', 'sqft_living15', 'sqft_lot15', 'lat', 'long']
RobustScaler was considered to minimize the influence of possible outliers (the code below uses StandardScaler).
"""
# mean_ls = list(train[['sqft_living', 'sqft_lot', 'sqft_living15', 'sqft_lot15', 'lat', 'long']].mean(axis=0))
standardScaler = StandardScaler()
standardScaler.fit(train[cols])
temp1 = standardScaler.transform(train[cols])
temp1 = pd.DataFrame(data=temp1,columns=cols, dtype=np.float32)
df1 = train.drop(columns=cols)
df_train = pd.concat([df1, temp1], axis=1)
temp2 = standardScaler.transform(test[cols])
temp2 = pd.DataFrame(data=temp2, columns=cols ,dtype=np.float32)
df2 = test.drop(columns=cols)
df_test = pd.concat([df2, temp2], axis=1)
return df_train, df_test
def column_selection(train, drop_cols):
cols = list(train.columns)
for drop_col in drop_cols:
if drop_col in cols:
idx = cols.index(drop_col)
cols.pop(idx)
return cols
# distance from the most expensive house
def get_distance_from_max(train, test):
"""
Compute the distance from the most expensive house in the train set.
"""
max_idx = train[train['price'] == train['price'].max()].index[0]
max_location = np.array(train[['lat','long']].loc[max_idx,:]) # coordinates of the most expensive house
train_location = np.array(train[['lat','long']])
test_location = np.array(test[['lat','long']])
# compute distance : ||location1 - location2||
# np.linalg.norm is used here since scipy is not imported in this notebook
distance_from_max_train = np.linalg.norm(max_location - train_location, axis=1)
distance_from_max_test = np.linalg.norm(max_location - test_location, axis=1)
return distance_from_max_train, distance_from_max_test
def features_append(train, test):
# whether the house lies between latitude 47.5 and 47.7
train['is_center'] = train['lat'].map(lambda x : 1 if x > 47.5 and x < 47.7 else 0)
test['is_center'] = test['lat'].map(lambda x : 1 if x > 47.5 and x < 47.7 else 0)
# ratios of above-ground and basement area to living area
train['ratio_above/living'] = np.log1p(train['sqft_above'] / train['sqft_living'])
train['ratio_basement/living'] = np.log1p(train['sqft_basement'] / train['sqft_living'])
test['ratio_above/living'] = np.log1p(test['sqft_above'] / test['sqft_living'])
test['ratio_basement/living'] = np.log1p(test['sqft_basement'] / test['sqft_living'])
# area differences : sqft_living - sqft_living15, sqft_lot - sqft_lot15
train['living_diff'] = train['sqft_living'] - train['sqft_living15']
test['living_diff'] = test['sqft_living'] - test['sqft_living15']
# sqft_living / sqft_lot : ratio of living area to lot area
train['ratio_living/lot'] = np.log1p(train['sqft_living'] / train['sqft_lot'])
test['ratio_living/lot'] = np.log1p(test['sqft_living'] / test['sqft_lot'])
# sqft_living / sqft_living15, sqft_lot / sqft_lot15
train['ratio_living/living15'] = np.log1p(train['sqft_living'] / train['sqft_living15'])
train['ratio_lot/lot15'] = np.log1p(train['sqft_lot'] / train['sqft_lot15'])
test['ratio_living/living15'] = np.log1p(test['sqft_living'] / test['sqft_living15'])
test['ratio_lot/lot15'] = np.log1p(test['sqft_lot'] / test['sqft_lot15'])
# bathrooms + bedrooms : total number of rooms
train['bath+bed'] = train['bathrooms'] + train['bedrooms']
test['bath+bed'] = test['bathrooms'] + test['bedrooms']
# total area
train['total_area'] = train['sqft_living'] + train['sqft_lot']
test['total_area'] = test['sqft_living'] + test['sqft_lot']
# total area 15
train['total_area15'] = train['sqft_living15'] + train['sqft_lot15']
test['total_area15'] = test['sqft_living15'] + test['sqft_lot15']
#
return train, test
# # one-hot encode zipcode
# df_train = pd.concat([df_train,pd.get_dummies(data=df_train['zipcode'], columns=['zipcode'], prefix='zipcode')], axis=1)
# df_test = pd.concat([df_test,pd.get_dummies(data=df_test['zipcode'], columns=['zipcode'], prefix='zipcode')], axis=1)
df_train['date'] = df_train['date'].map(lambda x : int(x.split('T')[0][:6]))
df_test['date'] = df_test['date'].map(lambda x : int(x.split('T')[0][:6]))
# # sqft_living > sqft_living15 : turn into a categorical feature
# def which_bigger(train, test):
# 'ratio_above/living','ratio_basement/living', 'ratio_living/lot','ratio_living/living15','ratio_lot/lot15','bath+bed'
scaling_cols = ['sqft_living','sqft_lot','sqft_above','sqft_basement','sqft_living15','sqft_lot15', 'long','lat','living_diff']
# df_train, df_test = is_center(df_train, df_test)
df_train, df_test = features_append(df_train, df_test)
df_train['distance_from_max'], df_test['distance_from_max'] = get_distance_from_max(df_train, df_test)
df_train, df_test = years_of_construction(df_train, df_test)
df_train, df_test = is_renovated(df_train, df_test)
df_train, df_test = make_normal(df_train, df_test,
cols=scaling_cols)
df_train = target_logscaling(df_train)
# baseline model
# apply scaling (to both features and target)
scaling_cols = ['sqft_living','sqft_lot','sqft_above','sqft_basement','sqft_living15','sqft_lot15', 'long','lat']
df_train, df_test = make_normal(df_train, df_test,
cols=scaling_cols)
df_train = target_logscaling(df_train)
df_train.tail()
training_cols = column_selection(df_train, drop_cols=['id','log_price','price'])
target = df_train['log_price']
df_train, df_test = df_train[training_cols], df_test[training_cols]
import lightgbm as lgb
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold, cross_val_score
from sklearn.model_selection import cross_val_predict
from sklearn.preprocessing import RobustScaler
param = {'num_leaves': 31,
'min_data_in_leaf': 30,
'objective':'regression',
'max_depth': -1,
'learning_rate': 0.015,
"min_child_samples": 20,
"boosting": "gbdt",
"feature_fraction": 0.9,
"bagging_freq": 1,
"bagging_fraction": 0.9 ,
"bagging_seed": 0,
"metric": 'rmse',
"lambda_l1": 0.1,
"verbosity": -1,
"nthread": 4,
"random_state": 9999,
"categorical_feature=name" : 'zipcode'
}
y_reg = target
#prepare fit model with cross-validation
folds = KFold(n_splits=5, shuffle=True, random_state=2019)
oof = np.zeros(len(df_train))
predictions = np.zeros(len(df_test))
feature_importance_df = pd.DataFrame()
#run model
for fold_, (trn_idx, val_idx) in enumerate(folds.split(df_train)):
trn_data = lgb.Dataset(df_train.iloc[trn_idx], label=y_reg.iloc[trn_idx])#, categorical_feature=categorical_feats)
val_data = lgb.Dataset(df_train.iloc[val_idx], label=y_reg.iloc[val_idx])#, categorical_feature=categorical_feats)
num_round = 10000
clf = lgb.train(param, trn_data, num_round, valid_sets = [trn_data, val_data], verbose_eval=500, early_stopping_rounds = 100)
oof[val_idx] = clf.predict(df_train.iloc[val_idx], num_iteration=clf.best_iteration)
#feature importance
fold_importance_df = pd.DataFrame()
fold_importance_df["Feature"] = df_train.columns
fold_importance_df["importance"] = clf.feature_importance()
fold_importance_df["fold"] = fold_ + 1
feature_importance_df = pd.concat([feature_importance_df, fold_importance_df], axis=0)
#predictions
predictions += clf.predict(df_test, num_iteration=clf.best_iteration) / folds.n_splits
cv = np.sqrt(mean_squared_error(oof, y_reg))
print(cv)
cv1 = np.sqrt(mean_squared_error(np.expm1(oof), np.expm1(y_reg)))
print(cv1)
np.expm1(predictions)
##plot the feature importance
cols = (feature_importance_df[["Feature", "importance"]]
.groupby("Feature")
.mean()
.sort_values(by="importance", ascending=False)[:1000].index)
best_features = feature_importance_df.loc[feature_importance_df.Feature.isin(cols)]
plt.figure(figsize=(14,26))
sns.barplot(x="importance", y="Feature", data=best_features.sort_values(by="importance",ascending=False))
plt.title('LightGBM Features (averaged over folds)')
plt.tight_layout()
submission = pd.read_csv("../dataset/sample_submission.csv")
submission.tail()
submission['price'] = np.expm1(predictions)
submission.tail()
submission.to_csv("13th_submission.csv", index=False)
###Output
_____no_output_____
###Markdown
---
###Code
# this modeling references 강천성's kernel
# model blending
from sklearn.ensemble import GradientBoostingRegressor
import xgboost as xgb
import lightgbm as lgb
gboost = GradientBoostingRegressor(random_state=2019)
xgboost = xgb.XGBRegressor(random_state=2019)
lightgbm = lgb.LGBMRegressor(random_state=2019)
models = [{'model':gboost, 'name':'GradientBoosting'}, {'model':xgboost, 'name':'XGBoost'},
{'model':lightgbm, 'name':'LightGBM'}]
def get_cv_score(models):
kfold = KFold(n_splits=5, shuffle=True, random_state=2019)  # shuffle so that random_state takes effect
for m in models:
print("Model {} CV score : {:.4f}".format(m['name'], np.mean(cross_val_score(m['model'], df_train.values, target, cv=kfold))))
get_cv_score(models)
def AveragingBlending(models, x, y, sub_x):
for m in models :
m['model'].fit(x.values, y)
predictions = np.column_stack([
m['model'].predict(sub_x.values) for m in models
])
return np.mean(predictions, axis=1)
y_pred = AveragingBlending(models, df_train, target, df_test)
y_pred = np.expm1(y_pred)
submission = pd.read_csv("../dataset/sample_submission.csv")
submission.tail()
submission['price'] = y_pred
submission.tail()
submission.to_csv("11th_submission.csv", index=False) # lgb하나썼을때보다 오히려 성능 떨어짐
len(df_train.columns)
from sklearn.decomposition import PCA
pca1 = PCA(n_components=70)
df_train_low = pca1.fit_transform(df_train)
df_train_low = pd.DataFrame(df_train_low)
df_train = pd.concat([df_train, df_train_low], axis=1)
import lightgbm as lgb
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold, cross_val_score
from sklearn.model_selection import cross_val_predict
from sklearn.preprocessing import RobustScaler
param = {'num_leaves': 31,
'min_data_in_leaf': 30,
'objective':'regression',
'max_depth': -1,
'learning_rate': 0.015,
"min_child_samples": 20,
"boosting": "gbdt",
"feature_fraction": 0.9,
"bagging_freq": 1,
"bagging_fraction": 0.9 ,
"bagging_seed": 0,
"metric": 'rmse',
"lambda_l1": 0.1,
"verbosity": -1,
"nthread": 4,
"random_state": 9999,
"categorical_feature=name" : 'zipcode'
}
y_reg = target
#prepare fit model with cross-validation
folds = KFold(n_splits=5, shuffle=True, random_state=2019)
oof = np.zeros(len(df_train))
predictions = np.zeros(len(df_test))
feature_importance_df = pd.DataFrame()
#run model
for fold_, (trn_idx, val_idx) in enumerate(folds.split(df_train)):
trn_data = lgb.Dataset(df_train.iloc[trn_idx], label=y_reg.iloc[trn_idx])#, categorical_feature=categorical_feats)
val_data = lgb.Dataset(df_train.iloc[val_idx], label=y_reg.iloc[val_idx])#, categorical_feature=categorical_feats)
num_round = 10000
clf = lgb.train(param, trn_data, num_round, valid_sets = [trn_data, val_data], verbose_eval=500, early_stopping_rounds = 100)
oof[val_idx] = clf.predict(df_train.iloc[val_idx], num_iteration=clf.best_iteration)
#feature importance
fold_importance_df = pd.DataFrame()
fold_importance_df["Feature"] = df_train.columns
fold_importance_df["importance"] = clf.feature_importance()
fold_importance_df["fold"] = fold_ + 1
feature_importance_df = pd.concat([feature_importance_df, fold_importance_df], axis=0)
#predictions
predictions += clf.predict(df_test, num_iteration=clf.best_iteration) / folds.n_splits
cv = np.sqrt(mean_squared_error(oof, y_reg))
print(cv)
cv1 = np.sqrt(mean_squared_error(np.expm1(oof), np.expm1(y_reg)))
print(cv1)
submission = pd.read_csv("../dataset/sample_submission.csv")
submission.tail()
submission['price'] = np.expm1(predictions)
submission.tail()
submission.to_csv("13th_submission.csv", index=False)
###Output
_____no_output_____
###Markdown
--- --- --- 2nd Modeling- The baseline features with scaling scored in the 109,000s... - features : normalization - target : log-scaling- Let's add features little by little
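Since the target is fitted on the `np.log1p` scale and predictions are mapped back with `np.expm1`, a small sanity check (a sketch, not from the original notebook) is that the two are exact inverses, so back-transformed predictions land on the original price scale:

```python
# sketch: log1p and expm1 round-trip exactly
check = np.array([0.0, 1.0, 540000.0])
assert np.allclose(np.expm1(np.log1p(check)), check)
```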
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
from sklearn.decomposition import PCA
import warnings
warnings.filterwarnings('ignore')
df_train = pd.read_csv("../dataset/train.csv")
df_test = pd.read_csv("../dataset/test.csv")
df_train['date'] = df_train['date'].map(lambda x : int(x.split('T')[0][:6]))
df_test['date'] = df_test['date'].map(lambda x : int(x.split('T')[0][:6]))
df_train = df_train.loc[df_train.index != 8912, :] # remove an outlier
test_id = df_test.id
df_train.shape, df_test.shape
# # outlier removal
# # one was already removed above (the one with sqft_living over 13000, id=8912)
# # ratio_total/total15
# # remove rows with ratio_total/total15 > 70 (1 row)
# df_train['total_area'] = df_train['sqft_living'] + df_train['sqft_lot']
# df_train['total_area15'] = df_train['sqft_living15'] + df_train['sqft_lot15']
# df_train['ratio_total/total15'] = df_train['total_area'] / df_train['total_area15']
# idx = df_train[df_train['ratio_total/total15'] > 70].index[0]
# df_train = df_train.loc[df_train.index != idx, :]
# # total area
# # remove rows with total_area of 1,500,000 or more
# df_train['total_area'] = df_train['sqft_living'] + df_train['sqft_lot']
# idx = df_train[df_train.total_area > 1500000].index[0]
# df_train = df_train.loc[df_train.index != idx, :]
# df_train.drop(columns=['total_area','total_area15','ratio_total/total15'], inplace=True)
###Output
_____no_output_____
###Markdown
1. Add distance --> around 107,700 2. 1 (distance) + whether the house is between 47.5 and 47.7 --> around 108,000 3. 2 (distance, 47.5~47.7) + top 25% / bottom 25% regions as a binary category --> around 107,900 4. 1 (distance) + total house area --> around 108,900 5. 1 (distance) + PCA + K-means --> around 108,900 *CV was better than 1. 6. 1 (distance) + K-means --> around 109,500 7. Remove outliers + 1 (distance) + above/floor ratio + log-scaling of the sqft features --> around 106,896 *cv : 116424.83291074971 (worse than 1) 8. 7 + normalization --> around 106,600 9. 7 + extra features (grade + view + waterfront + condition) --> 104,125 10. 9 + ratio_living/living15 --> around 105,000 ; this adds living/living15, the feature with the highest importance when everything was thrown in. 11. 10 + PCA(sqft_living, sqft_lot, sqft_above, sqft_basement) --> around 104,000 12. 9 + PCA(sqft_living, sqft_lot, sqft_above, sqft_basement) --> 103,915 13. 9 + PCA(sqft_living, sqft_above, sqft_basement) --> 103,685
###Code
# # run PCA : lat, long
# pca = PCA(n_components=2).fit(df_train[['lat','long']])
# temp_train = pca.transform(df_train[['lat','long']])
# df_train['pca_comp1'] = temp_train[:,0]
# df_train['pca_comp2'] = temp_train[:,1]
# temp_test = pca.transform(df_test[['lat','long']])
# df_test['pca_comp1'] = temp_test[:,0]
# df_test['pca_comp2'] = temp_test[:,1]
# run PCA : sqft_living, sqft_lot, sqft_above, sqft_basement
# after PCA, take absolute values so every component is positive and can be log-scaled
# pca = PCA(n_components=5).fit(df_train[['sqft_living','sqft_lot','sqft_above','sqft_basement','sqft_living15','sqft_lot15']])
# temp_train = pca.transform(df_train[['sqft_living','sqft_lot','sqft_above','sqft_basement','sqft_living15','sqft_lot15']])
pca = PCA(n_components=3).fit(df_train[['sqft_living','sqft_above','sqft_basement']])
temp_train = pca.transform(df_train[['sqft_living','sqft_above','sqft_basement']])
df_train['pca_comp1'] = np.sqrt(temp_train[:,0]**2)
df_train['pca_comp2'] = np.sqrt(temp_train[:,1]**2)
# df_train['pca_comp3'] = np.sqrt(temp_train[:,2]**2)
# df_train['pca_comp4'] = np.sqrt(temp_train[:,3]**2)
# df_train['pca_comp5'] = np.sqrt(temp_train[:,4]**2)
temp_test = pca.transform(df_test[['sqft_living','sqft_above','sqft_basement']])
df_test['pca_comp1'] = np.sqrt(temp_test[:,0]**2)
df_test['pca_comp2'] = np.sqrt(temp_test[:,1]**2)
# df_test['pca_comp3'] = np.sqrt(temp_test[:,2]**2)
# df_test['pca_comp4'] = np.sqrt(temp_test[:,3]**2)
# df_test['pca_comp5'] = np.sqrt(temp_test[:,4]**2)
# from sklearn.cluster import KMeans
# kmeans = KMeans(n_clusters=5, random_state=42).fit(df_train[['lat','long']])
# coord_cluster_train = kmeans.predict(df_train[['lat','long']])
# coord_cluster_test = kmeans.predict(df_test[['lat','long']])
# df_train['cluster'] = coord_cluster_train
# df_test['cluster'] = coord_cluster_test
def get_distance_from_max(train, test):
"""
    Compute the distance of each house from the most expensive house in the train set
"""
max_idx = train[train['price'] == train['price'].max()].index[0]
    max_location = np.array(train[['lat','long']].loc[max_idx,:]) # coordinates of the most expensive house
train_location = np.array(train[['lat','long']])
test_location = np.array(test[['lat','long']])
# compute distance : ||location1 - location2||
    distance_from_max_train = np.linalg.norm(max_location - train_location, axis=1) # use numpy; scipy is not imported in this notebook
    distance_from_max_test = np.linalg.norm(max_location - test_location, axis=1)
return distance_from_max_train, distance_from_max_test
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import RobustScaler
# def make_normal(train, test, cols):
# """
#     scale to zero mean (standardization)
#     normalize test using the train mean and std!!
#     target columns : ['sqft_living', 'sqft_lot', 'sqft_living15', 'sqft_lot15', 'lat', 'long']
#     use RobustScaler to minimize the impact of any potential outliers
# """
# # mean_ls = list(train[['sqft_living', 'sqft_lot', 'sqft_living15', 'sqft_lot15', 'lat', 'long']].mean(axis=0))
# standardScaler = StandardScaler()
# # robustScaler = RobustScaler()
# standardScaler.fit(train[cols])
# temp1 = standardScaler.transform(train[cols])
# temp1 = pd.DataFrame(data=temp1,columns=cols, dtype=np.float32)
# df1 = train.drop(columns=cols)
# df_train = pd.concat([df1, temp1], axis=1)
# temp2 = standardScaler.transform(test[cols])
# temp2 = pd.DataFrame(data=temp2, columns=cols ,dtype=np.float32)
# df2 = test.drop(columns=cols)
# df_test = pd.concat([df2, temp2], axis=1)
# return df_train, df_test
def make_normal(train, test, cols):
standardScaler = StandardScaler()
standardScaler.fit(train[cols])
train[cols] = standardScaler.transform(train[cols])
test[cols] = standardScaler.transform(test[cols])
return train, test
def feature_logscaling(train, test, scaling_cols):
for col in scaling_cols:
train[col] = np.log1p(train[col])
test[col] = np.log1p(test[col])
return train, test
def target_logscaling(train):
train['log_price'] = np.log1p(train['price'])
return train
def column_selection(train, drop_cols):
cols = list(train.columns)
for drop_col in drop_cols:
if drop_col in cols:
idx = cols.index(drop_col)
cols.pop(idx)
return cols
####################################################################################################################################
df_train['distance_from_max'], df_test['distance_from_max'] = get_distance_from_max(df_train, df_test)
# df_train['is_center'] = df_train['lat'].map(lambda x : 1 if x > 47.5 and x < 47.7 else 0)
# df_test['is_center'] = df_test['lat'].map(lambda x : 1 if x > 47.5 and x < 47.7 else 0)
# df_train['over25%'] = df_train['zipcode'].map(lambda x : 1 if (x==98004) or (x==98005) or (x==98007) or (x==98008) or (x==98007) or (x==98040) or (x==98074) or (x==98075) else 0)
# df_test['over25%'] = df_test['zipcode'].map(lambda x : 1 if (x==98004) or (x==98005) or (x==98007) or (x==98008) or (x==98007) or (x==98040) or (x==98074) or (x==98075) else 0)
# df_train['under25%'] = df_train['zipcode'].map(lambda x : 1 if (x==98070) or (x==98108) or (x==98109) or (x==98168) else 0)
# df_test['under25%'] = df_test['zipcode'].map(lambda x : 1 if (x==98070) or (x==98108) or (x==98109) or (x==98168) else 0)
# df_train['total_area'] = df_train['sqft_living'] + df_train['sqft_lot']
# df_test['total_area'] = df_test['sqft_living'] + df_test['sqft_lot']
# df_train['total_area15'] = df_train['sqft_living15'] + df_train['sqft_lot15']
# df_test['total_area15'] = df_test['sqft_living15'] + df_test['sqft_lot15']
# ratios of above and basement area to living area
# df_train['ratio_above/living'] = np.log1p(df_train['sqft_above'] / df_train['sqft_living'])
# df_train['ratio_basement/living'] = np.log1p(df_train['sqft_basement'] / df_train['sqft_living'])
# df_test['ratio_above/living'] = np.log1p(df_test['sqft_above'] / df_test['sqft_living'])
# df_test['ratio_basement/living'] = np.log1p(df_test['sqft_basement'] / df_test['sqft_living'])
# # area differences : sqft_living - sqft_living15, sqft_lot - sqft_lot15
# df_train['living_diff'] = df_train['sqft_living'] - df_train['sqft_living15']
# df_test['living_diff'] = df_test['sqft_living'] - df_test['sqft_living15']
# # sqft_living / sqft_lot : ratio of living area to lot area
# df_train['ratio_living/lot'] = np.log1p(df_train['sqft_living'] / df_train['sqft_lot'])
# df_test['ratio_living/lot'] = np.log1p(df_test['sqft_living'] / df_test['sqft_lot'])
# sqft_living / sqft_living15, sqft_lot / sqft_lot15
# df_train['ratio_living/living15'] = np.log1p(df_train['sqft_living'] / df_train['sqft_living15'])
# df_train['ratio_lot/lot15'] = np.log1p(df_train['sqft_lot'] / df_train['sqft_lot15'])
# df_test['ratio_living/living15'] = np.log1p(df_test['sqft_living'] / df_test['sqft_living15'])
# df_test['ratio_lot/lot15'] = np.log1p(df_test['sqft_lot'] / df_test['sqft_lot15'])
# # total ratio : total / total15
# df_train['ratio_total/total15'] = df_train['total_area'] / df_train['total_area15']
# df_test['ratio_total/total15'] = df_test['total_area'] / df_test['total_area15']
# # bathrooms + bedrooms : total room count
# df_train['bath+bed'] = df_train['bathrooms'] + df_train['bedrooms']
# df_test['bath+bed'] = df_test['bathrooms'] + df_test['bedrooms']
# waterfront + view + condition
df_train['etc'] = df_train['grade'] + df_train['waterfront'] + df_train['view'] + df_train['condition']
df_test['etc'] = df_test['grade'] + df_test['waterfront'] + df_test['view'] + df_test['condition']
# sqft_above / floor
df_train['ratio_above/floors'] = df_train['sqft_above'] / df_train['floors']
df_test['ratio_above/floors'] = df_test['sqft_above'] / df_test['floors']
# feature : log scaling + normalization
# scaling_cols = ['sqft_living','sqft_lot','sqft_above','sqft_basement','sqft_living15',
# 'sqft_lot15','ratio_above/floors','total_area','total_area15','ratio_above/living','ratio_basement/living',
# 'ratio_living/lot','ratio_living/living15','ratio_lot/lot15','ratio_total/total15']
scaling_cols = ['sqft_living','sqft_lot','sqft_above','sqft_basement','sqft_living15',
'sqft_lot15','ratio_above/floors',
'pca_comp1','pca_comp2']
df_train, df_test = feature_logscaling(df_train, df_test, scaling_cols)
df_train, df_test = make_normal(df_train, df_test,
cols=scaling_cols)
df_train = target_logscaling(df_train)
training_cols = column_selection(df_train, drop_cols=['id','log_price','price'])
target = df_train['log_price']
df_train, df_test = df_train[training_cols], df_test[training_cols]
df_train.tail()
import lightgbm as lgb
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold, cross_val_score
from sklearn.model_selection import cross_val_predict
from sklearn.preprocessing import RobustScaler
param = {'num_leaves': 31,
'min_data_in_leaf': 30,
'objective':'regression',
'max_depth': -1,
'learning_rate': 0.015,
"min_child_samples": 20,
"boosting": "gbdt",
"feature_fraction": 0.9,
"bagging_freq": 1,
"bagging_fraction": 0.9 ,
"bagging_seed": 0,
"metric": 'rmse',
"lambda_l1": 0.1,
"verbosity": -1,
"nthread": 4,
"random_state": 9999,
"categorical_feature=name" : 'zipcode'
}
y_reg = target
#prepare fit model with cross-validation
folds = KFold(n_splits=5, shuffle=True, random_state=2019)
oof = np.zeros(len(df_train))
predictions = np.zeros(len(df_test))
feature_importance_df = pd.DataFrame()
#run model
for fold_, (trn_idx, val_idx) in enumerate(folds.split(df_train)):
trn_data = lgb.Dataset(df_train.iloc[trn_idx], label=y_reg.iloc[trn_idx])#, categorical_feature=categorical_feats)
val_data = lgb.Dataset(df_train.iloc[val_idx], label=y_reg.iloc[val_idx])#, categorical_feature=categorical_feats)
num_round = 10000
clf = lgb.train(param, trn_data, num_round, valid_sets = [trn_data, val_data], verbose_eval=500, early_stopping_rounds = 100)
oof[val_idx] = clf.predict(df_train.iloc[val_idx], num_iteration=clf.best_iteration)
#feature importance
fold_importance_df = pd.DataFrame()
fold_importance_df["Feature"] = df_train.columns
fold_importance_df["importance"] = clf.feature_importance()
fold_importance_df["fold"] = fold_ + 1
feature_importance_df = pd.concat([feature_importance_df, fold_importance_df], axis=0)
#predictions
predictions += clf.predict(df_test, num_iteration=clf.best_iteration) / folds.n_splits
cv = np.sqrt(mean_squared_error(oof, y_reg))
print(cv)
# only distance = 116250.94801079197
cv1 = np.sqrt(mean_squared_error(np.expm1(oof), np.expm1(y_reg)))
print(cv1)
##plot the feature importance
cols = (feature_importance_df[["Feature", "importance"]]
.groupby("Feature")
.mean()
.sort_values(by="importance", ascending=False)[:1000].index)
best_features = feature_importance_df.loc[feature_importance_df.Feature.isin(cols)]
plt.figure(figsize=(14,26))
sns.barplot(x="importance", y="Feature", data=best_features.sort_values(by="importance",ascending=False))
plt.title('LightGBM Features (averaged over folds)')
plt.tight_layout()
submission = pd.read_csv("../dataset/sample_submission.csv")
submission.tail()
submission['price'] = np.expm1(predictions)
submission.tail()
submission.to_csv("33th_submission.csv", index=False)
###Output
_____no_output_____ |
NLP/Learn_by_deeplearning.ai/Course 1 - Classification and Vector Spaces/Original/C1_W4_lecture_nb_01.ipynb | ###Markdown
Vector manipulation in PythonIn this lab, you will have the opportunity to practice once again with the NumPy library. This time, we will explore some advanced operations with arrays and matrices.At the end of the previous module, we used PCA to transform a set of many variables into a set of only two uncorrelated variables. This process was carried out through a transformation of the data called rotation. In this week's assignment, you will need to find a transformation matrix from English to French vector space embeddings. Such a transformation matrix is nothing more than a matrix that rotates and scales vector spaces.In this notebook, we will explain in detail the rotation transformation. Transforming vectorsThere are three main vector transformations:* Scaling* Translation* RotationIn previous notebooks, we have applied the first two kinds of transformations. Now, let us learn how to use a fundamental transformation on vectors called _rotation_.The rotation operation changes the direction of a vector, leaving its dimensionality and its norm unaffected. Let us explain with some examples. In the following cells, we will define a NumPy matrix and a NumPy array. Soon we will explain how this is related to matrix rotation.
###Code
import numpy as np # Import numpy for array manipulation
import matplotlib.pyplot as plt # Import matplotlib for charts
from utils_nb import plot_vectors # Function to plot vectors (arrows)
###Output
_____no_output_____
###Markdown
Example 1
###Code
# Create a 2 x 2 matrix
R = np.array([[2, 0],
[0, -2]])
x = np.array([[1, 1]]) # Create a 1 x 2 matrix
###Output
_____no_output_____
###Markdown
The dot product between a vector and a square matrix produces a rotation and a scaling of the original vector. Remember that our recommended way to get the dot product in Python is np.dot(a, b):
###Code
y = np.dot(x, R) # Apply the dot product between x and R
y
###Output
_____no_output_____
###Markdown
We are going to use Pyplot to inspect the effect of the rotation on 2D vectors visually. For that, we have created a function `plot_vectors()` that takes care of all the intricate parts of the visual formatting. The code for this function is inside the `utils_nb.py` file. Now we can plot the vector $\vec x = [1, 1]$ in a cartesian plane. The cartesian plane will be centered at `[0,0]` and its x and y limits will be between `[-4, +4]`
###Code
plot_vectors([x], axes=[4, 4], fname='transform_x.svg')
###Output
_____no_output_____
###Markdown
Now, let's plot in the same system our vector $\vec x = [1, 1]$ and its dot product with the matrix$$R = \begin{bmatrix} 2 & 0 \\ 0 & -2 \end{bmatrix}$$$$y = x \cdot R = [[2, -2]]$$
###Code
plot_vectors([x, y], axes=[4, 4], fname='transformx_and_y.svg')
###Output
_____no_output_____
###Markdown
Note that the output vector `y` (blue) is transformed into another vector. Example 2We are going to use Pyplot to inspect the effect of the rotation on 2D vectors visually. For that, we have created a function that takes care of all the intricate parts of the visual formatting. The following procedure plots an arrow within a Pyplot canvas.Data that is composed of 2 real attributes is said to belong to a $ RxR $ or $ R^2 $ space. Rotation matrices in $R^2$ rotate a given vector $\vec x$ by a counterclockwise angle $\theta$ in a fixed coordinate system. Rotation matrices are of the form:$$Ro = \begin{bmatrix} cos \theta & -sin \theta \\ sin \theta & cos \theta \end{bmatrix}$$The trigonometric functions in Numpy require the angle in radians, not in degrees. In the next cell, we define a rotation matrix that rotates vectors by $100^o$.
###Code
angle = 100 * (np.pi / 180) #convert degrees to radians
Ro = np.array([[np.cos(angle), -np.sin(angle)],
[np.sin(angle), np.cos(angle)]])
x2 = np.array([2, 2]).reshape(1, -1) # make it a row vector
y2 = np.dot(x2, Ro)
print('Rotation matrix')
print(Ro)
print('\nRotated vector')
print(y2)
print('\n x2 norm', np.linalg.norm(x2))
print('\n y2 norm', np.linalg.norm(y2))
print('\n Rotation matrix norm', np.linalg.norm(Ro))
plot_vectors([x2, y2], fname='transform_02.svg')
###Output
_____no_output_____
###Markdown
Some points to note:* The norm of the input vector is the same as the norm of the output vector. Rotation matrices do not modify the norm of the vector, only its direction.* The Frobenius norm of any $R^2$ rotation matrix is always $\sqrt 2 \approx 1.414214$ Frobenius NormThe Frobenius norm is the generalization to matrices of the already known norm function for vectors $$\| \vec a \| = \sqrt {{\vec a} \cdot {\vec a}} $$For a given $R^2$ matrix A, the Frobenius norm is defined as:$$\|\mathrm{A}\|_{F} \equiv \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n}\left|a_{i j}\right|^{2}}$$
###Code
A = np.array([[2, 2],
[2, 2]])
###Output
_____no_output_____
###Markdown
`np.square()` is a way to square each element of a matrix. It is equivalent to using the `*` operator to multiply a NumPy array by itself element-wise.
###Code
A_squared = np.square(A)
A_squared
###Output
_____no_output_____
###Markdown
Now you can sum over the elements of the resulting array, and then get the square root of the sum.
###Code
A_Frobenius = np.sqrt(np.sum(A_squared))
A_Frobenius
###Output
_____no_output_____
###Markdown
That was the extended version of the `np.linalg.norm()` function. You can check that it yields the same result.
###Code
print('Frobenius norm of the Rotation matrix')
print(np.sqrt(np.sum(Ro * Ro)), '== ', np.linalg.norm(Ro))
###Output
_____no_output_____ |
Linked List/0902/328. Odd Even Linked List.ipynb | ###Markdown
Explanation: Given a singly linked list, group all the odd-numbered nodes together followed by the even-numbered nodes. Note that we are talking about the node numbers here, not the values stored in the nodes. You should try to do it in place. The program should run in O(1) space complexity and O(nodes) time complexity. Constraints: 1. The relative order inside both the even and odd groups should remain as it was in the input. 2. The first node is considered odd, the second node even, and so on... 3. The length of the linked list is between [0, 10^4].
###Code
class Solution(object):
def oddEvenList(self, head):
"""
:type head: ListNode
:rtype: ListNode
"""
dummy1 = odd = ListNode(0)
dummy2 = even = ListNode(0)
while head:
odd.next = head
even.next = head.next
odd = odd.next
even = even.next
head = head.next.next if even else None
        odd.next = dummy2.next
        return dummy1.next
class Solution(object):
def oddEvenList(self, head):
"""
:type head: ListNode
:rtype: ListNode
"""
if not head or not head.next or not head.next.next:
return head
prev = head
p1 = head.next
p2 = head.next
while p2 and p2.next:
q = p2.next
prev.next = q
prev = q
p2.next = q.next
p2 = p2.next
prev.next = p1
return head
class Solution(object):
def oddEvenList(self, head):
"""
:type head: ListNode
:rtype: ListNode
"""
if not head or not head.next or not head.next.next:
return head
dumpy_1 = odd = head
dumpy_2 = even = head.next
while even and even.next:
odd.next = even.next
odd = odd.next
even.next = odd.next
even = even.next
odd.next = dumpy_2
return dumpy_1
###Output
_____no_output_____ |
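###Markdown
The three solutions above assume LeetCode's singly-linked `ListNode` class, which is never defined in this notebook. The cell below is a minimal sketch for local testing; `ListNode`, `build_list` and `to_list` are helper definitions added here for illustration, not part of the original problem.
###Code
class ListNode(object):
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next
def build_list(values):
    # build a singly linked list from a Python list and return its head
    dummy = cur = ListNode(0)
    for v in values:
        cur.next = ListNode(v)
        cur = cur.next
    return dummy.next
def to_list(head):
    # collect the node values back into a Python list
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out
print(to_list(Solution().oddEvenList(build_list([1, 2, 3, 4, 5])))) # expected [1, 3, 5, 2, 4]
print(to_list(Solution().oddEvenList(build_list([2, 1, 3, 5, 6, 4, 7])))) # expected [2, 3, 6, 7, 1, 5, 4]
###Output
_____no_output_____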
notebooks/testing_new_auto_gamma_filtering.ipynb | ###Markdown
Welcome to NeuNormPackage to normalize data using Open Beam (OB) and, optionally, Dark Field (DF).The program allows you to select a background region so that data can be normalized against OB images that do not have the same acquisition time. Cropping the image is also possible using the *crop* method. This notebook will illustrate the use of the NeuNorm library by going through a typical normalization. Set up system
###Code
import os
import sys
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib import gridspec
%matplotlib notebook
###Output
_____no_output_____
###Markdown
Add NeuNorm to python path
###Code
root_folder = os.path.dirname(os.getcwd())
sys.path.insert(0, root_folder)
import NeuNorm as neunorm
from NeuNorm.normalization import Normalization
from NeuNorm.roi import ROI
import NeuNorm
print(NeuNorm.__version__)
###Output
_____no_output_____
###Markdown
Data Folders Sample Data
###Code
path_im = '../data/sample/'
assert os.path.exists(path_im)
###Output
_____no_output_____
###Markdown
Loading Data
###Code
o_norm = Normalization()
o_norm.load(folder=path_im)
###Output
_____no_output_____
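###Markdown
The introduction above mentions normalizing against Open Beam (OB) images and selecting a background ROI, but so far this notebook only loads the sample data. The cell below is a minimal sketch of that workflow, assuming NeuNorm's `load(data_type='ob')`, `ROI`, `normalization` and `export` calls; the OB folder path and the ROI coordinates are placeholders, not part of this experiment.
###Code
path_ob = '../data/ob/' # assumed location of the open beam images (placeholder)
assert os.path.exists(path_ob)
o_norm.load(folder=path_ob, data_type='ob')
# background region used to rescale OB acquired with a different exposure time (placeholder coordinates)
background_roi = ROI(x0=0, y0=0, x1=50, y1=50)
o_norm.normalization(roi=background_roi)
# the normalized stack can then be exported, mirroring the sample export below
o_norm.export(folder='/Users/j35/Desktop/tmp/', data_type='normalized')
###Output
_____no_output_____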
###Markdown
Exporting Data
###Code
o_norm.export(file_type='tif', folder='/Users/j35/Desktop/tmp/', data_type='sample')
###Output
_____no_output_____ |
W10Coin LightGBM Final Submission V4.ipynb | ###Markdown
Windows 10 Cointrain: (rows: 1,347,190, columns: 1,085)test: (rows: 374,136, columns: 1,084)y value: if HasClicked == True, approx. 1.8%How to run1. Put the train and test files in ..\input2. Put the script file in ..\script3. In Jupyter, run all and get the submission file in the same script folder
###Code
# Timer and file info
import math
import time
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
import gc # We're gonna be clearing memory a lot
import matplotlib.pyplot as plt
import seaborn as sns
import random
from ml_metrics import mapk
from datetime import datetime
import re
import csv
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn import ensemble
from sklearn import model_selection
from sklearn.metrics import matthews_corrcoef, f1_score, classification_report, confusion_matrix, precision_score, recall_score
%matplotlib inline
# Timer
class Timer:
def __init__(self, text=None):
self.text = text
def __enter__(self):
self.cpu = time.clock()
self.time = time.time()
if self.text:
print("{}...".format(self.text))
print(datetime.now())
return self
def __exit__(self, *args):
self.cpu = time.clock() - self.cpu
self.time = time.time() - self.time
if self.text:
print("%s: cpu %0.2f, time %0.2f\n" % (self.text, self.cpu, self.time))
# Split to train and holdout sets with counts
def sample_train_holdout(_df, sample_count, holdout_count):
random.seed(7)
sample_RowNumber = random.sample(list(_df['RowNumber']), (sample_count + holdout_count))
train_RowNumber = random.sample(sample_RowNumber, sample_count)
holdout_RowNumber = list(set(sample_RowNumber) - set(train_RowNumber))
holdout = _df[_df['RowNumber'].isin(holdout_RowNumber)].copy()
_df = _df[_df['RowNumber'].isin(train_RowNumber)]
return _df, holdout
# Sampling for train and holdout with imbalanced binary label
def trainHoldoutSampling(_df, _id, _label, _seed=7, t_tr=0.5, t_ho=0.5, f_tr=0.05, f_ho=0.5):
random.seed(_seed)
positive_id = list(_df[_df[_label]==True][_id].values)
negative_id = list(_df[_df[_label]==False][_id].values)
train_positive_id = random.sample(positive_id, int(len(positive_id) * t_tr))
holdout_positive_id = random.sample(list(set(positive_id)-set(train_positive_id)), int(len(positive_id) * t_ho))
train_negative_id = random.sample(negative_id, int(len(negative_id) * f_tr))
holdout_negative_id = random.sample(list(set(negative_id)-set(train_negative_id)), int(len(negative_id) * f_ho))
train_id = list(set(train_positive_id)|set(train_negative_id))
holdout_id = list(set(holdout_positive_id)|set(holdout_negative_id))
print('train count: {}, train positive count: {}'.format(len(train_id),len(train_positive_id)))
print('holdout count: {}, holdout positive count: {}'.format(len(holdout_id),len(holdout_positive_id)))
return _df[_df[_id].isin(train_id)], _df[_df[_id].isin(holdout_id)]
def datetime_features2(_df, _col):
_format='%m/%d/%Y %I:%M:%S %p'
_df[_col] = _df[_col].apply(lambda x: datetime.strptime(x, _format))
colYear = _col+'Year'
colMonth = _col+'Month'
colDay = _col+'Day'
colHour = _col+'Hour'
#colYearMonthDay = _col+'YearMonthDay'
#colYearMonthDayHour = _col+'YearMonthDayHour'
_df[colYear] = _df[_col].apply(lambda x: x.year)
_df[colMonth] = _df[_col].apply(lambda x: x.month)
_df[colDay] = _df[_col].apply(lambda x: x.day)
_df[colHour] = _df[_col].apply(lambda x: x.hour)
#ymd = [colYear, colMonth, colDay]
#ymdh = [colYear, colMonth, colDay, colHour]
#_df[colYearMonthDay] = _df[ymd].apply(lambda x: '_'.join(str(x)), axis=1)
#_df[colYearMonthDayHour] = _df[ymdh].apply(lambda x: '_'.join(str(x)), axis=1)
return _df
# Change date column datetime type and add date time features
def datetime_features(_df, _col, isDelete = False):
# 1. For years greater than 2017, create year folder with regex and change year to 2017 in datetime column
# find and return 4 digit number (1st finding) in dataframe string columns
year_col = _col + 'Year'
_df[year_col] = _df[_col].apply(lambda x: int(re.findall(r"\D(\d{4})\D", " "+ str(x) +" ")[0]))
years = sorted(list(_df[year_col].unique()))
yearsGreaterThan2017 = sorted(i for i in years if i > 2017)
# Two ways for strange year data (1) change it to 2017 temporarily (2) remove from data; we will go with (1)
# because we cannot remove test rows anyway
if isDelete:
_df = _df[~_df[year_col].isin(yearsGreaterThan2017)]
else:
for i in yearsGreaterThan2017:
print("replace ", i, " to 2017 for conversion")
_df.loc[_df[year_col] == i, _col] = _df[_df[year_col] == i][_col].values[0].replace(str(i), "2017")
# How to remove strange year rows
# train = train[~train['year'].isin(yearsGreaterThan2017)]
# 2. Convert string to datetime
_df[_col] = pd.to_datetime(_df[_col])
print(_col, "column conversion to datetime type is done")
# 3. Add more date time features
month_col = _col + 'Month'
week_col = _col + 'Week'
weekday_col = _col + 'Weekday'
day_col = _col + 'Day'
hour_col = _col + 'Hour'
#year_month_day_col = _col + 'YearMonthDay'
#year_month_day_hour_col = _col + 'YearMonthDayHour'
_df[month_col] = pd.DatetimeIndex(_df[_col]).month
_df[week_col] = pd.DatetimeIndex(_df[_col]).week
_df[weekday_col] = pd.DatetimeIndex(_df[_col]).weekday
_df[day_col] = pd.DatetimeIndex(_df[_col]).day
_df[hour_col] = pd.DatetimeIndex(_df[_col]).hour
#_df[year_month_day_col] = _df[[year_col, month_col, day_col]].apply(lambda x: ''.join(str(x)), axis=1)
#_df[year_month_day_hour_col] = _df[[year_col, month_col, day_col, hour_col]].apply(lambda x: ''.join(str(x)), axis=1)
print("year, month, week, weekday, day, hour features are added")
return _df
# Delete rows with list condition for dataframe
def delRows(_df, _col, _list):
_df = _df[~_df[_col].isin(_list)]
return _df
import re
# Create new column using regex pattern for strings for dataframe
def addFeatureRegex(_df, _col, _newCol):
_df[_newCol] = _df[_col].apply(lambda x: int(re.findall(r"\D(\d{4})\D", " "+ str(x) +" ")[0]))
return _df
# Convert string to datetime type
def stringToDatetime(_df, _col):
_df[_col] = _df[_col].astype('datetime64[ns]')
return _df
# Add features from datetime
def addDatetimeFeatures(_df, _col):
_df[_col + 'Year'] = pd.DatetimeIndex(_df[_col]).year
_df[_col + 'Month'] = pd.DatetimeIndex(_df[_col]).month
_df[_col + 'Week'] = pd.DatetimeIndex(_df[_col]).week
_df[_col + 'Weekday'] = pd.DatetimeIndex(_df[_col]).weekday
_df[_col + 'Day'] = pd.DatetimeIndex(_df[_col]).day
_df[_col + 'Hour'] = pd.DatetimeIndex(_df[_col]).hour
return _df
# Get categorical column names
def categoricalColumns(_df):
cat_columns = _df.select_dtypes(['object']).columns
print("Categorical column count:", len(cat_columns))
print("First 5 values:", cat_columns[:5])
return cat_columns
# Get column names starting with
def columnsStartingWith(_df, _str):
sorted_list = sorted(i for i in list(_df) if i.startswith(_str))
print("Column count:", len(sorted_list))
print("First 5 values:", sorted_list[:5])
return sorted_list
# Get column names ending with
def columnsEndingWith(_df, _str):
sorted_list = sorted(i for i in list(_df) if i.endswith(_str))
print("Column count:", len(sorted_list))
print("First 5 values:", sorted_list[:5])
return sorted_list
# Get constant columns
def constantColumns(_df):
constant_list = []
cols = list(_df) # same as _df.columns.values
for col in cols:
if len(_df[col].unique()) == 1:
constant_list.append(col)
print("Constant column count:", len(constant_list))
print("First 5 values:", constant_list[:5])
return constant_list
# Add null columns
def makeNullColumns(_df, _cols):
null_df = _df[_cols].isnull()
null_df.columns = null_df.columns + 'Null'
_df = pd.concat([_df, null_df], axis=1)
return _df
# Union
def union(a, b):
return list(set(a)|set(b))
def unique(a):
return list(set(a))
# undersampling - sample rate 0.8 for 80% samling using isUndersampled column
def underSampling(_df, _sample_rate):
_df['isUnderSampled'] = 1
_rand_num = 1/(1-_sample_rate)
underSample = np.random.randint(_rand_num, size=len(_df[_df['HasClicked'] == 0]))
_df.loc[_df['HasClicked'] == 0, 'isUnderSampled'] = underSample>0
return _df
# Add column with value count
def valueCountColumn(_df, _col):
_dict = dict([(i, a) for i, a in zip(_df[_col].value_counts().index, _df[_col].value_counts().values)])
_df[_col+'ValueCount'] = _df[_col].apply(lambda x: _dict[x])
return _df
# Add column with bool values to check if keyword is contained or not
def containColumn(_df, _col, _str):
_df[_col+'Cotains'+_str] = _df[_col].str.contains(_str)
return _df
# Feature engineering
def feature_engineering(_df):
print("shape:", _df.shape)
print("Add datetime features...")
datetime_columns = ['BubbleShownTime', 'FirstUpdatedDate', 'OSOOBEDateTime']
for col in datetime_columns:
print(col)
if _df[col].isnull().sum() > 0:
_df[col] = _df[col].fillna('1/1/2017 11:11:11 AM')
_df = datetime_features2(_df, col)
print("shape:", _df.shape)
gc.collect()
# Null count
print("Missing value count...")
_df['CntNs'] = _df.isnull().sum(axis=1)
cols = ['AppCategoryNMinus1', 'AppCategoryNMinus2', 'AppCategoryNMinus3', 'AppCategoryNMinus4', 'AppCategoryNMinus5',
'AppCategoryNMinus6', 'AppCategoryNMinus7', 'AppCategoryNMinus8']
_df['AppCatCntNs'] = _df[cols].isnull().sum(axis=1)
#_df[cols] = _df[cols].fillna("NA")
#for col in cols:
# print(col)
# _df[col+'HighLevel'] = _df[col].apply(lambda x: str(x).split(':')[0])
# Game segment parse with '.'
# to-do: 2nd and 3rd parsed values to add as features later, some exception handling is needed
print("Gamer segment parsing...")
_df['GamerSegment1'] = _df['GamerSegment'].apply(lambda x: str(x).split('.')[0] if str(x).split('.') else 'Unknown')
# Check creativeName contains keyword or not
print("CreativeName contains a keyword...")
keywords = ['SL', 'TS', 'Week7', 'Week 7', 'Meet', 'Skype', 'Battery', 'Switch', 'Performance', 'Security',
'Surge', 'Publish', 'Rewards', 'Aggressive', 'Edge', 'Chrome', 'Firefox', 'Discover', 'Free']
for keyword in keywords:
_df = containColumn(_df, 'creativeName', keyword)
#_df['week7'] = _df['Week7'].values + _df['Week 7'].values
#_df.drop(['Week7', 'Week 7'], axis = 1, inplace = True)
# Convert categorical columns to numeric
print("Convert categorical columns to numeric...")
cat_columns = _df.select_dtypes(['object']).columns
for cat_column in cat_columns:
print(cat_column)
if cat_column == 'creativeName':
_df['creativeNameTest'] = _df['creativeName'].values
_df[cat_column] = _df[cat_column].apply(lambda x: abs(hash(x)))
gc.collect()
# Replace missing values with -1
print("Replace missing values with -1")
_df = _df.fillna(-1)
# Value count
print("Value count...")
cols = ['UniqueUserDeviceKey', 'CampaignId']
for col in cols:
print(col)
_df = valueCountColumn(_df, col)
return _df
# Get best threshold value for F1 score
def f1_best_threshold(_actual, _pred):
thresholds = np.linspace(0.01, 0.2, 500)
fc = np.array([f1_score(_actual, _pred>thr) for thr in thresholds])
plt.plot(thresholds, fc)
best_threshold = thresholds[fc.argmax()]
print('f1 score:', fc.max())
print('best threshold:', best_threshold)
print('TF pred mean:', (_pred>best_threshold).mean())
return best_threshold
with Timer("Read train data..."):
train = pd.read_csv('../input/CoinMlCompetitionSoftlandingTrainWithHeader.tsv', sep='\t') # (1347190, 1085)
print(train.shape)
test_header = train.columns[0:1084]
# Before deleting some columns, get missing value count
train['TotalNulls'] = train.isnull().sum(axis=1)
# Reduce size by removing most of days and time features
features = train.columns
print("features without time_ and days_ columns")
time_columns = columnsStartingWith(train, 'Time_')
days_columns = columnsStartingWith(train, 'Days_')
features = list(set(features) - set(time_columns))
features = list(set(features) - set(days_columns))
# Add important time features from feature importance above 50 and some validation
imp_time_features = ['Time_Accessibility', 'Time_Browser', 'Time_Communications', 'Time_Content', 'Time_DevTools',
'Time_Games', 'Time_Malware', 'Time_Media', 'Time_PersonalProductivity', 'Time_Readers',
'Time_Search', 'Time_Social', 'Time_StudentAndLearning', 'Time_ModernApps',
'Time_Games_Core', 'Time_Games_Casual', 'Time_windows_immersivecontrolpanel',
'Time_msascui_exe', 'Time_chrome_exe', 'Time_microsoft_windows_cortana', 'Time_lockapphost_exe',
'Time_excel_exe','Time_consent_exe','Time_explorer_exe',
'Time_applicationframehost_exe','Time_conhost_exe','Time_csrss_exe',
'Time_microsoft_microsoftedge','Time_onedrive_exe',
'Time_dwm_exe','Time_rundll32_exe','Time_setup_exe','Time_winword_exe',
'Time_dllhost_exe','Time_logonui_exe','Time_microsoft_lockapp',
'Time_microsoft_windows_photos','Time_powerpnt_exe',
'Time_pickerhost_exe','Time_werfault_exe','Time_iexplore_exe',
'Time_taskmgr_exe','Time_softwareupdate_exe',
'Time_microsoft_getstarted','Time_idman_exe','Time_firefox_exe',
'Time_microsoft_windowsstore','Time_notepad_exe']
features = list(set(features) | set(imp_time_features))
train = train[features]
print(train.shape)
# Train feature engineering
with Timer("Train feature engineering..."):
#train = feature_engineering(train, isDeleteOddDateRows=True)
train = feature_engineering(train)
train_y = train['HasClicked'].values
print("train y mean:", train_y.mean())
with Timer("Read test and feature engineering..."):
# Read tsv file
test = pd.read_csv('../input/CoinMlCompetitionSoftlandingEvaluateNoLabel.tsv', sep='\t', header = None)
# Add header because test does not header
test.columns = test_header
# Before deleting some columns, get missing value count
test['TotalNulls'] = test.isnull().sum(axis=1)
# Reduce test size by leaving train features only
test = test[list(set(features) - set(['HasClicked']))]
# Feature engineering - should not delete odd date rows
#test = feature_engineering(test, isDeleteOddDateRows=False)
test = feature_engineering(test)
print(test.shape)
# Get column groups and features
all_columns = train.columns
print("All columns:", len(all_columns))
# Remove constant columns for train (all included in time_ and days_ columns)
print("features without constant columns")
constant_columns = constantColumns(train)
features = list(set(all_columns) - set(constant_columns))
print("features:", len(features))
# With a lot of nulls, exclude time and days columns first and add later for improvement
#print("features without time_ and days_ columns")
#time_columns = columnsStartingWith(train, 'Time_')
#days_columns = columnsStartingWith(train, 'Days_')
#features = list(set(features) - set(time_columns))
#features = list(set(features) - set(days_columns))
# Drop features
drop_features = ['HasClicked', 'RowNumber', 'BubbleShownTime', 'FirstUpdatedDate', 'OSOOBEDateTime', 'creativeNameTest']
features = list(set(features) - set(drop_features))
print("Final features:", len(features))
sorted(features)
from sklearn.model_selection import train_test_split
with Timer('# train validation split'):
#X_train, X_val, y_train, y_val = train_test_split(train[train.isUnderSampled == True][features], train_y[train.isUnderSampled == True], test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(train[features], train_y, test_size=0.15, random_state=0)
gc.collect()
print(y_train.shape)
print(X_train.shape)
print(y_val.shape)
print(X_val.shape)
print(y_train.mean())
print(y_val.mean())
#del train
gc.collect()
import lightgbm as lgb
#train_data = lgb.Dataset(X_train[X_train.isUnderSampled == True][features], label=X_train[X_train.isUnderSampled == True]['HasClicked'].values)
train_data = lgb.Dataset(X_train[features], label=y_train)
val_data = lgb.Dataset(X_val[features], y_val)
# use train holdout directly with t f ratio
#train_data = lgb.Dataset(train[features], label=train_y)
#val_data = lgb.Dataset(holdout[features], y_holdout)
print(X_train[features].shape)
print(X_val[features].shape)
random.seed(2007)
params = {
'task' : 'train',
'boosting_type' : 'dart', #'gbdt', # dart
'objective' : 'binary',
'metric' : 'auc', # 'binary_logloss', #'binary_logloss', # binary_logloss, auc
'is_training_metric': True,
'max_bin': 255,
'num_leaves' : 64,
'learning_rate' : 0.02, # 0.05, #0.1,
'feature_fraction' : 0.8,
'min_data_in_leaf': 10,
'min_sum_hessian_in_leaf': 5,
# 'num_threads': 16,
}
num_round = 10000
bst = lgb.train(params, train_data, num_round, valid_sets=val_data, early_stopping_rounds=10)
val_preds = bst.predict(X_val[features], num_iteration=bst.best_iteration)
#holdout_preds = bst.predict(holdout[features], num_iteration=bst.best_iteration)
#test_preds = bst.predict(test[features], num_iteration=bst.best_iteration)
# Sampling score
# Including all high level and ymd and ymdh
# [297] valid_0's auc:0.67564 F1 score: 0.096338028169, best thr: 0.325385385385, Click mean: 0.0343981839588
# without ymd; f1 score not improved, so keep this
# [201] valid_0's auc:0.67772 F1 score: 0.0966780126125, best thr: 0.306746746747, Click mean: 0.0379598932823
# With uniqueUserDeviceKey valueCount
# [368] valid_0's auc:0.664831 F1 score: 0.06x ???
# Value counts
# [525] valid_0's auc:0.686445 f1 score: 0.104380886546 thr: 0.325875875876 Click mean: 0.0332386612486 (gain: 0.04)
# Count UniqueUserDeviceKey
# [505] valid_0's auc:0.706443 f1 score: 0.128913201081 thr: 0.371491491491 Click mean: 0.0267462248702 (gain:0.024)
# Count CampaignId
# [544] valid_0's auc:0.707357 f1 score: 0.13101569594 thr: 0.363643643644 Click mean: 0.0274719972684 (gain: 0.002)
# Remove all time and days
# [392] valid_0's auc:0.703582 f1 score: 0.123669773283 thr: 0.378358358358 Click mean: 0.0266139148895
# Include imp time features
# [418] valid_0's auc:0.706095 f1 score: 0.126989843694 thr: 0.386206206206 Click mean: 0.0229143624878 (loss: 0.004)
print('Validaion')
val_best_threshold = f1_best_threshold(y_val, val_preds)
feature_list = X_val[features].columns.values
df_fi = pd.DataFrame(bst.feature_importance(), columns=['importance'])
df_fi['feature'] = feature_list
df_fi = df_fi.sort_values('importance', ascending = 0)
df_fi[df_fi.importance >= 10]
zeroImportance = df_fi[df_fi.importance == 0]['feature'].values
print(len(zeroImportance))
with Timer('# predict test data'):
preds = bst.predict(test[features], num_iteration=bst.best_iteration)
#print(bestEpsilon)
print(val_best_threshold)
test_id = test.RowNumber.values
submission = pd.DataFrame({'RowNumber': test_id})
submission['HasClicked'] = preds > val_best_threshold
print("Click mean:", submission.HasClicked.mean())
print("Submission file...")
submission.to_csv("W10_Coin_LightGBM_FinalV4.csv", index = False)
submission.head()
###Output
Click mean: 0.0191106466348
Submission file...
|
Python_Projects/Mach_Probe/Mach_Probe_Analysis.ipynb | ###Markdown
Mach Probe ExB drift Analysis Reference - Rüdiger Back and Roger D. Bengtson, A Langmuir/Mach probe array for edge plasma turbulence and flow, 1997
###Code
from numpy import sin, pi, sqrt, arccos, log
from pandas import read_excel
e = 1.602e-19 # [C] electron charge
r_p = 0.15e-3 # [m] probe radius
l_p = 1.3e-3 # [m] probe length
h = 0.55e-3 # [m] Hole radius
s = 1.32e-3 # [m] Rotation center to Hole edge
R = 0.86e-3 # [m] Rotation center to Wire center
m_i = (40) * 1.67e-27 #[kg] mass of BF2+
k = 1.38e-23 #[m2kg/s2K] Boltzmann const
alpha = pi/2 # [rad] angle between B-field and Rotation center to Wire center
gamma = (1+0.5)/(2+0.5)
S_probe = pi*r_p*(r_p+2*pi*l_p)
class Machprobe():
def __init__(self, ne, Te, Ti, m_i, I):
self.ne = ne
self.I = I
self.Cs =sqrt(e*(Te+Ti)/(m_i))
d_alpha = arccos((s**2 + R**2 - h**2)/(2*s*R))
self.A_eff = l_p*(R*sin(alpha)+r_p-max(R*sin(alpha)-r_p, s*sin(alpha-d_alpha)))
#print('Te : {} eV'.format(Te))
#print('Ti : {} eV'.format(Ti))
#print('Effective area : {} m2'.format(self.A_eff))
#print('Ion sound speed : {} m/s'.format(self.Cs))
def perp_current(self):
self.I_D = (r_p/l_p)*(1-gamma)*self.A_eff # diffusion current calculation
self.I_sat = gamma*e*self.A_eff*self.Cs*self.ne # saturation current calculation
self.I_perp = self.I - self.I_D - self.I_sat # perpendicular current calculation
#print('diffusion current : ',self.I_D)
#print('saturation current : ',self.I_sat)
#print('perp current : ',self.I_perp)
file_path = 'exp_data/'
name = '128G_Reverse.xlsx'
data = read_excel(file_path + name, encoding='cp1252')
data
for i in range(len(data)):
I_upstream_electron = data.loc[i,'I_Upstream_electron [A]']*S_probe
I_downstream_electron = data.loc[i,'I_Downstream_electron [A]']*S_probe
I_upstream_ion = data.loc[i,'I_Upstream_ion [A]']*S_probe
I_downstream_ion = data.loc[i,'I_Downstream_ion [A]']*S_probe
ne_upstream = data.loc[i,'I_Upstream_electron_density [m-3]']
ne_downstream = data.loc[i,'I_Downstream_electron_density [m-3]']
Te = data.loc[i,'Electron temperature [eV]']
Ti = data.loc[i,'Ion temperature [eV]']
upstream_electron = Machprobe(ne_upstream, Te, Ti, m_i, I_upstream_electron)
downstream_electron = Machprobe(ne_downstream, Te, Ti, m_i, I_downstream_electron)
upstream_ion = Machprobe(ne_upstream, Te, Ti, m_i, I_upstream_ion)
downstream_ion = Machprobe(ne_downstream, Te, Ti, m_i, I_downstream_ion)
upstream_electron.perp_current()
downstream_electron.perp_current()
upstream_ion.perp_current()
downstream_ion.perp_current()
data.loc[i,'Mach number_electron'] = 0.73*log(upstream_electron.I_perp/downstream_electron.I_perp)
data.loc[i,'Mach number_ion'] = 0.73*log(upstream_ion.I_perp/downstream_ion.I_perp)
#data.loc[i,'I_sat (Up) [A]'] = upstream.I_sat
#data.loc[i,'I_D (Up) [A]'] = upstream.I_D
#data.loc[i,'I_Perp (Up) [A]'] = upstream.I_perp
#data.loc[i,'I_sat (Down) [A]'] = downstream.I_sat
#data.loc[i,'I_D (Down) [A]'] = downstream.I_D
#data.loc[i,'I_Perp (Down) [A]'] = downstream.I_perp
data.to_excel(file_path + 'No_ne_correction_Result_' + name,encoding='cp1252')
###Output
_____no_output_____ |
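###Markdown
As a quick sanity check of the Mach-number relation used above, $M = K \ln(I_{\perp,up}/I_{\perp,down})$ with $K = 0.73$: equal perpendicular currents give $M = 0$ and the sign follows the larger side. The current values below are synthetic, for illustration only.
###Code
K = 0.73 # same calibration constant as in the analysis above
for I_up, I_down in [(1.0, 1.0), (2.0, 1.0), (1.0, 2.0)]:
    print('I_up = {:.1f}, I_down = {:.1f} -> M = {:+.3f}'.format(I_up, I_down, K * log(I_up / I_down)))
# expected: M = +0.000, +0.506, -0.506
###Output
_____no_output_____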
with_numpyro/Ch_23_MetricPredictors_0.ipynb | ###Markdown
Chapter 23.4 The Case of Metric Predictors
###Code
#
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import arviz as az
# numpyro
import jax
import jax.numpy as jnp
from jax import random, vmap
from jax.nn import softmax, softplus
from jax.scipy.special import logsumexp, logit, expit
import numpyro
import numpyro as npr
import numpyro.distributions as dist
from numpyro import handlers
from numpyro.infer import MCMC, NUTS, Predictive
numpyro.set_host_device_count(4) # 4 chains in MCMC
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
import scipy
import scipy.stats as stats
from matplotlib import gridspec
from IPython.display import Image
%matplotlib inline
plt.style.use('seaborn-white')
color = '#87ceeb'
f_dict = {'size':16}
%load_ext watermark
%watermark -p pandas,numpy,matplotlib,seaborn,scipy,arviz,numpyro,jax
###Output
pandas : 1.2.1
numpy : 1.19.5
matplotlib: 3.3.3
seaborn : 0.11.1
scipy : 1.6.0
arviz : 0.11.2
numpyro : 0.5.0
jax : 0.2.8
###Markdown
Helper Functions
###Code
# these helper functions convert numpy array into jax.numpy array
# then after computation, convert back to numpy array
def prior_predictive(model, d):
Pred = Predictive(model, num_samples=2021)
jax_data = {k: jnp.array(v) if isinstance(v, np.ndarray) else v for k, v in d.items() }
samples = Pred(random.PRNGKey(0), **jax_data)
np_samples = {k: np.array(v) if isinstance(v, jnp.ndarray) else v for k, v in samples.items() }
return np_samples
def np2jnp(samples):
    jnp_samples = {k: jnp.array(v) if isinstance(v, np.ndarray) else v for k, v in samples.items() }
return jnp_samples
def jnp2np(samples):
np_samples = {k: np.array(v) if isinstance(v, jnp.ndarray) else v for k, v in samples.items() }
return np_samples
def mcmc_sampling(model, d, num_warmup=500, num_samples=2000, num_chains=4):
jax_data = {k: jnp.array(v) if isinstance(v, np.ndarray) else v for k, v in d.items() }
# MCMC
mcmc_engine = MCMC(NUTS(model), num_warmup=num_warmup, num_samples=num_samples, num_chains=num_chains)
mcmc_engine.run(random.PRNGKey(0), **jax_data)
samples = mcmc_engine.get_samples()
#
np_samples = {k: np.array(v) if isinstance(v, jnp.ndarray) else v for k, v in samples.items() }
mcmc_engine.print_summary()
return np_samples
def print_shapes(s):
for k in s.keys():
print(f'{k:12} ', s[k].shape)
def plot_npdfs(a, method=None): #='hist'):
fig, axes = plt.subplots(1, a.shape[1], figsize=(13,2))
for i, ax in enumerate(axes):
if method == 'hist':
ax.hist(a[:,i], bins=20)
else:
az.plot_posterior(a[:,i], ax=ax)
def plot_thresholds(thresh_samples):
    thmat = thresh_samples # caller passes thresholds already shifted to the 1-based DBDA scale (e.g. cuts1)
th_mean = np.repeat(thmat.mean(axis=1)[:,np.newaxis], thmat.shape[1], axis=1)
# the same dimensions: th_mean.shape, thmat.shape
plt.vlines(x=thmat.mean(axis=0), ymin=th_mean.min(), ymax=th_mean.max(),
colors='#DDBBAA', linestyles='dashed', alpha=.4)
plt.scatter(thmat, th_mean, alpha=.1, s=1);
plt.xticks(ticks=thmat.mean(axis=0))
plt.xlabel('thresholds')
plt.ylabel('Mean Thresholds (per sample)')
###Output
_____no_output_____
###Markdown
One Predictor - $\mu_i = \beta_0 + \beta_1 * X_i$
###Code
df3 = pd.read_csv('data/OrdinalProbitData-LinReg-2.csv')
df3.info()
nYlevels = len(df3.Y.unique())
Y = jnp.array(df3.Y.values) # python is 0-based, Categorical produces 0-based outcomes
X = jnp.array(df3.X.values)
print(nYlevels, len(Y), len(X))
plt.scatter(X, Y); plt.xlabel('X: Metric Predictor'); plt.ylabel('Y: Ordinal Predicted');
Y[:3]
###Output
_____no_output_____
###Markdown
Ordinal Probit, Book's way
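###Markdown
In outline, the thresholded-probit ("ordinal probit") model coded below maps the latent regression mean $\mu$ to category probabilities by differencing the normal CDF at the cutpoints: $$p(y{=}k \mid \mu,\sigma,\theta) = \Phi\!\left(\frac{\theta_{k}-\mu}{\sigma}\right)-\Phi\!\left(\frac{\theta_{k-1}-\mu}{\sigma}\right),$$ with $\theta_{0}=-\infty$ and $\theta_{K}=+\infty$. In the code, the differencing is carried out with the bidiagonal matrix returned by `getAmat`, and `jnp.maximum(0., diff)` clips numerically negative differences before renormalizing.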
###Code
def getAmat(nYlevels):
a = np.eye(nYlevels)
for j in range(a.shape[0]-1):
a[j+1,j] = -1
return jnp.array(a)
def model_1group_1metric(nYlevels, Amat, X, y=None):
# preprocessing
zY = y - 1 if y is not None else None
Xmean = X.mean()
Xstd = X.std()
zX = (X - Xmean) / Xstd
# cutpoints, we need K-1 thresholds
cuts_init = jnp.array([i+.5 for i in range(nYlevels-1)])
with npr.plate('cuts_draw_plate', size=nYlevels-1):
cuts_normal = npr.sample('cuts_normal', dist.Normal(0., 2.))
cuts = cuts_init + cuts_normal
npr.deterministic('cuts_draw', cuts)
cuts = jax.ops.index_update(cuts, jax.ops.index[0], 0.5) # cuts[0] = .5
cuts = jax.ops.index_update(cuts, jax.ops.index[-1], nYlevels - 1 -.5) # cuts[-1] = 5.5
npr.deterministic('cuts', cuts)
# base normal
sigma = npr.sample(f'sigma', dist.Uniform(low=nYlevels/1000, high=nYlevels*10))
zb0 = npr.sample(f'zb0', dist.Normal((1. + nYlevels) / 2, nYlevels))
zb = npr.sample('zb', dist.Normal(0, nYlevels))
# regression,
mu = zb0 + zb * zX # zX[k] --> y[k] , zb0 + zb * (X - mX)/sX
# print('mu', mu.shape)
# print('cuts', cuts.shape)
# probit comp.
cdfs = jax.scipy.stats.norm.cdf(cuts.reshape(-1,1), loc=mu, scale=sigma)
# print('cdfs norm', cdfs.shape)
cdfs = jnp.concatenate((cdfs, jnp.array([[1.]*mu.shape[0]]))) # nYlevels x len(zX)
npr.deterministic(f'cdfs', cdfs)
# print('cdfs', cdfs.shape)
diff = jnp.dot(Amat, cdfs)
npr.deterministic(f'diff', diff)
max0 = jnp.maximum(0., diff) # prob = max(0, cdf[i] - cdf[i-1]), see the matrix A
probs = max0 / max0.sum(axis=0)
npr.deterministic(f'probs', probs)
# print(probs[:,0].sum(), probs.shape)
# observation
yobs = npr.sample(f'obs', dist.Categorical(probs=probs.T), obs=zY) ## The probability should be transposed.
# transform back
cuts1 = cuts + 1
b0 = zb0 - zb*Xmean/Xstd + 1
    b = zb / Xstd # slope on the original X scale: zb0 + zb*(X-mX)/sX = (zb0 - zb*mX/sX) + (zb/sX)*X
mu = mu + 1
yobs = yobs + 1
npr.deterministic('cuts1', cuts1)
npr.deterministic('b0', b0)
npr.deterministic('b', b)
npr.deterministic('mu', mu)
npr.deterministic('yobs', yobs)
#
###Output
_____no_output_____
###Markdown
Prior Predictive
###Code
data_prior = dict(nYlevels=nYlevels, Amat=getAmat(nYlevels), X=X) # for Prior Predictive
s= prior_predictive(model_1group_1metric, data_prior)
[(f'{k:12} ', s[k].shape) for k in s.keys()]
s['cuts']
plot_npdfs(s['cuts'], method='hist')
fig, axes = plt.subplots(1,4, figsize=(13,3))
axes[0].hist(s['sigma']);
axes[1].hist(s['zb']);
axes[2].hist(s['zb0'], bins=20);
###Output
_____no_output_____
###Markdown
MCMC Inference
###Code
%%time
data = dict(nYlevels=nYlevels, Amat=getAmat(nYlevels), X=X, y=Y) # for MCMC
ps = mcmc_sampling(model_1group_1metric, data)
[(f'{k:12} ', ps[k].shape) for k in ps.keys()]
plt.scatter(X, Y); plt.xlabel('X: Metric Predictor'); plt.ylabel('Y: Ordinal Predicted');
x = 1.
mu = ps['b0'] + ps['b']*x
fig, axes = plt.subplots(1, 4, figsize=(12,2))
i=0; ax=axes[i]; az.plot_posterior(ps['b0'], ax=axes[i]); ax.set_title('beta_0');
i=1; ax=axes[i]; az.plot_posterior(ps['b'], ax=axes[i]); ax.set_title('beta');
ax=axes[2]; az.plot_posterior(ps['sigma'], ax=axes[2]); ax.set_title('sigma');
plot_thresholds(ps['cuts1']) # shift +1 to have DBDA range
X.shape
# def posterior_predictive(model, ps):
Xtst = jnp.array([1., 2.])
Pred = Predictive(model_1group_1metric, posterior_samples=np2jnp(ps))
samples = Pred(random.PRNGKey(0), nYlevels, getAmat(nYlevels), Xtst)
np_samples = {k: np.array(v) if isinstance(v, jnp.ndarray) else v for k, v in samples.items() }
print_shapes(np_samples)
a = np_samples
yobs = a['yobs']
plt.hist(yobs[:,0], bins=7)
plt.hist(yobs[:,1], bins=7, alpha=.5)
###Output
_____no_output_____ |
0.1-antong-step-1-scripting_catboost.ipynb | ###Markdown
Config
###Code
# Base
random_state = 42
# Raw data
target_raw = '../data/raw/target.feather'
user_features_raw = '../data/raw/user_features.feather'
# Features
categories = ['feature_17', 'feature_21', 'feature_11', 'feature_16', 'feature_22']
features_path = '../data/processed/features.feather'
# Train
top_K_coef = 0.05 # calculate metrics for top 5%
model_path = '../models/model.joblib'
train_metrics = '../reports/train_metrics.json'
###Output
_____no_output_____
###Markdown
Create features Load data
###Code
target_df = pd.read_feather(target_raw)
target_df.head()
target_df.shape
target_df.month.astype(str).unique()
# imbalanced
target_df.target.value_counts()
target_df.month.value_counts()
user_features_df = pd.read_feather(user_features_raw)
user_features_df = user_features_df.loc[user_features_df.user_id.isin(target_df.user_id)]
user_features_df.head()
user_features_df.user_id.isin(target_df.user_id)
user_features_df.user_id.isin(target_df.user_id).value_counts()
user_features_df.month.value_counts()
###Output
_____no_output_____
###Markdown
Process 'month' column
###Code
# Convert 'month' to datetime
target_df['month'] = pd.to_datetime(target_df['month'])
user_features_df['month'] = pd.to_datetime(user_features_df['month'])
target_df.head()
features = user_features_df.copy()
features.head()
# features.sort_values(by=['month']).tail()
###Output
_____no_output_____
###Markdown
Add target column
###Code
%%time
features = pd.merge(
left=features,
right=target_df,
how='left',
on=['user_id', 'month']
)
features.head()
features['target'].value_counts(dropna=False)
features.shape
(
user_features_df
.merge(target_df[target_df.target == 1],
how="left",
on=["user_id", "month"])
.fillna({"target": 0})
.groupby("user_id")
.agg({"target": "sum"})
.value_counts()
.sort_index()
)
###Output
_____no_output_____
###Markdown
Process nulls
###Code
# Drop rows with missing values (rows that received no target in the merge)
features.dropna(inplace=True)
features.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 752128 entries, 0 to 752127
Data columns (total 33 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 user_id 752128 non-null int64
1 month 752128 non-null datetime64[ns]
2 feature_1 752128 non-null float64
3 feature_2 752128 non-null int32
4 feature_3 752128 non-null float64
5 feature_4 752128 non-null int32
6 feature_5 752128 non-null int32
7 feature_6 752128 non-null float64
8 feature_7 752128 non-null float64
9 feature_8 752128 non-null float64
10 feature_9 752128 non-null float64
11 feature_10 752128 non-null float64
12 feature_11 752128 non-null category
13 feature_12 752128 non-null float64
14 feature_13 752128 non-null float64
15 feature_14 752128 non-null float64
16 feature_15 752128 non-null float64
17 feature_16 752128 non-null category
18 feature_17 752128 non-null category
19 feature_18 752128 non-null int8
20 feature_19 752128 non-null float64
21 feature_20 752128 non-null float64
22 feature_21 752128 non-null category
23 feature_22 752128 non-null category
24 feature_23 752128 non-null float64
25 feature_24 752128 non-null float64
26 feature_25 752128 non-null int32
27 feature_26 752128 non-null float64
28 feature_27 752128 non-null int32
29 feature_28 752128 non-null float64
30 feature_29 752128 non-null float64
31 feature_30 752128 non-null float64
32 target 752128 non-null float64
dtypes: category(5), datetime64[ns](1), float64(20), int32(5), int64(1), int8(1)
memory usage: 150.6 MB
###Markdown
Save features
###Code
features.to_feather(features_path) # binary Feather format
! tree ../data # features is in processed dir
###Output
[01;34m../data[00m
├── [01;34mprocessed[00m
│ └── features.feather
└── [01;34mraw[00m
├── scoring_target.feather
├── scoring_user_features.feather
├── target.feather
└── user_features.feather
2 directories, 5 files
###Markdown
Split dataTime-based cross validation: forms a type of “sliding window” training approach to create a general and robust model.Example for reference: https://towardsdatascience.com/time-based-cross-validation-d259b13d42b8
###Code
from pandas.tseries.offsets import MonthEnd
def custom_ts_split(months, train_period = 0):
for k, month in enumerate(months):
start_train = pd.to_datetime(months.min())
end_train = start_train + MonthEnd(train_period + k-1)
test_period = pd.to_datetime(end_train + MonthEnd(1))
if test_period <= pd.to_datetime(months.max()):
yield start_train, end_train, test_period
else:
print(test_period)
print(months.max())
# months
months = features.month.sort_values().unique()
months
# test custom_ts_split function - iterate over splits
k = 1
for start_train, end_train, test_period in custom_ts_split(months, train_period=1):
print(f'Fold {k}:')
print(f'Train: {start_train} - {end_train}')
print(f'Test: {test_period} \n')
k+=1
###Output
Fold 1:
Train: 2020-04-30 00:00:00 - 2020-04-30 00:00:00
Test: 2020-05-31 00:00:00
Fold 2:
Train: 2020-04-30 00:00:00 - 2020-05-31 00:00:00
Test: 2020-06-30 00:00:00
Fold 3:
Train: 2020-04-30 00:00:00 - 2020-06-30 00:00:00
Test: 2020-07-31 00:00:00
Fold 4:
Train: 2020-04-30 00:00:00 - 2020-07-31 00:00:00
Test: 2020-08-31 00:00:00
2020-09-30 00:00:00
2020-08-31T00:00:00.000000000
###Markdown
Train MetricsReference: https://towardsdatascience.com/the-lift-curve-unveiled-998851147871 Lift Curve
###Code
def plot_Lift_curve(y_true, y_pred, step=0.01):
"""
Plots a Lift curve using the real label values of a dataset
and the probability predictions of a Machine Learning Algorithm/model
Params:
        y_true: true labels
y_pred: probability predictions
step: steps in the percentiles
Reference: https://towardsdatascience.com/the-lift-curve-unveiled-998851147871
"""
# Define an auxiliar dataframe to plot the curve
aux_lift = pd.DataFrame()
aux_lift['real'] = y_true
aux_lift['predicted'] = y_pred
aux_lift.sort_values('predicted', ascending=False, inplace=True)
x_val = np.arange(step, 1+step, step) # values on the X axis of our plot
ratio_ones = aux_lift['real'].sum() / len(aux_lift) # ratio of ones in our data
y_v = [] # empty vector with the values that will go on the Y axis our our plot
# for each x value calculate its corresponding y value
for x in x_val:
num_data = int(np.ceil(x*len(aux_lift)))
data_here = aux_lift.iloc[:num_data,:]
ratio_ones_here = data_here['real'].sum() / len(data_here)
y_v.append(ratio_ones_here / ratio_ones)
# plot the figure
fig, axis = plt.subplots()
fig.figsize = (40,40)
axis.plot(x_val, y_v, 'g-', linewidth= 3, markersize = 5)
axis.plot(x_val, np.ones(len(x_val)), 'k-')
axis.set_xlabel('Proportion of sample')
axis.set_ylabel('Lift')
plt.title('Lift Curve')
plt.show()
###Output
_____no_output_____
###Markdown
Precision @k
###Code
def precision_at_k_score(actual, predicted, predicted_probas, k) -> float:
df = pd.DataFrame({'actual': actual, 'predicted': predicted, 'probas': predicted_probas})
df = df.sort_values(by=['probas'], ascending=False).reset_index(drop=True)
df = df[:k]
return precision_score(df['actual'], df['predicted'])
###Output
_____no_output_____
###Markdown
Recall @k
###Code
def recall_at_k_score(actual, predicted, predicted_probas, k) -> float:
df = pd.DataFrame({'actual': actual, 'predicted': predicted, 'probas': predicted_probas})
df = df.sort_values(by=['probas'], ascending=False).reset_index(drop=True)
df = df[:k]
return recall_score(df['actual'], df['predicted'])
###Output
_____no_output_____
###Markdown
Lift @k
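###Markdown
For intuition: lift@K divides the precision in the top-K by the overall positive rate. If, say, precision@K were 0.10 while `np.mean(actual)` were 0.02, the lift would be 0.10 / 0.02 = 5, i.e. the top-K slice contains positives five times more often than random selection would (these two numbers are illustrative, not taken from this run).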
###Code
def lift_score(actual, predicted, predicted_probas, k) -> float:
numerator = precision_at_k_score(actual, predicted, predicted_probas, k)
denominator = np.mean(actual)
lift = numerator / denominator
print(f'Lift: {numerator} / {denominator} = {lift}')
return lift
###Output
_____no_output_____
###Markdown
Load data
###Code
features = pd.read_feather(features_path)
# format data
features['month'] = pd.to_datetime(features['month'])
features.head()
features.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 752128 entries, 0 to 752127
Data columns (total 33 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 user_id 752128 non-null int64
1 month 752128 non-null datetime64[ns]
2 feature_1 752128 non-null float64
3 feature_2 752128 non-null int32
4 feature_3 752128 non-null float64
5 feature_4 752128 non-null int32
6 feature_5 752128 non-null int32
7 feature_6 752128 non-null float64
8 feature_7 752128 non-null float64
9 feature_8 752128 non-null float64
10 feature_9 752128 non-null float64
11 feature_10 752128 non-null float64
12 feature_11 752128 non-null category
13 feature_12 752128 non-null float64
14 feature_13 752128 non-null float64
15 feature_14 752128 non-null float64
16 feature_15 752128 non-null float64
17 feature_16 752128 non-null category
18 feature_17 752128 non-null category
19 feature_18 752128 non-null int8
20 feature_19 752128 non-null float64
21 feature_20 752128 non-null float64
22 feature_21 752128 non-null category
23 feature_22 752128 non-null category
24 feature_23 752128 non-null float64
25 feature_24 752128 non-null float64
26 feature_25 752128 non-null int32
27 feature_26 752128 non-null float64
28 feature_27 752128 non-null int32
29 feature_28 752128 non-null float64
30 feature_29 752128 non-null float64
31 feature_30 752128 non-null float64
32 target 752128 non-null float64
dtypes: category(5), datetime64[ns](1), float64(20), int32(5), int64(1), int8(1)
memory usage: 144.9 MB
###Markdown
Fit
###Code
import catboost as ctb
clf = ctb.CatBoostClassifier(
iterations = 10,
thread_count = 50,
has_time = True,
allow_writing_files = False,
cat_features = categories,
loss_function = "Logloss",
# eval_metric='AUC',
)
top_K_coef # defined above in Config section
"""Fit / evaluate estimator for each split
params from config:
- top_K
"""
# DF to store metrics for each fold
metrics_df = pd.DataFrame(columns=['test_period', 'lift', 'precision_at_k', 'recall_at_k'])
top_K = int(features.shape[0] * top_K_coef)
print(f'top_K is {top_K_coef*100}% of dataset_size: {top_K}')
k = 1
for start_train, end_train, test_period in custom_ts_split(months, train_period = 1):
print(f'Fold {k}:')
print(f'Train: {start_train} - {end_train}')
print(f'Test: {test_period} \n')
# Get train / test data for the split
X_train = (features[(features.month >= start_train) & (features.month <= end_train)]
.drop(columns=['user_id', 'month', 'target'], axis=1))
X_test = (features[(features.month == test_period)]
.drop(columns=['user_id', 'month', 'target'], axis=1))
y_train = features.loc[(features.month >= start_train) & (features.month <= end_train), 'target']
y_test = features.loc[(features.month == test_period), 'target']
print(f'Train shapes: X is {X_train.shape}, y is {y_train.shape}')
print(f'Test shapes: X is {X_test.shape}, y is {y_test.shape}')
# Fit estimator
clf.fit(X_train, y_train)
# clf.fit(
# X_train, y_train,
# eval_set=(X_test, y_test),
# cat_features=categories,
# plot=True,
# verbose=False
# );
y_pred = clf.predict(X_test)
probas = clf.predict_proba(X_test)
print(f'Max probas: {probas[:, 1].max()}')
lift = lift_score(y_test, y_pred, probas[:, 1], top_K)
precision_at_k = precision_at_k_score(y_test, y_pred, probas[:, 1], top_K)
recall_at_k = recall_at_k_score(y_test, y_pred, probas[:, 1], top_K)
metrics_df = metrics_df.append(
dict(zip(metrics_df.columns, [test_period, lift, precision_at_k, recall_at_k])),
ignore_index=True
)
k += 1
print(f'Precision at {top_K}: {precision_at_k}')
print(f'Recall at {top_K}: {recall_at_k}')
plot_Lift_curve(y_test[:top_K], y_pred[:top_K], step=0.1)
print('\n')
metrics_df
metrics_aggs = metrics_df[['lift', 'precision_at_k', 'recall_at_k']].agg(['max', 'min', 'std', 'mean'])
metrics = {
f'{metric}_{agg}': metrics_aggs.loc[agg, metric]
for metric in metrics_aggs.columns
for agg in metrics_aggs.index
}
with open(train_metrics, 'w') as metrics_f:
json.dump(obj=metrics, fp=metrics_f, indent=4)
metrics_aggs
###Output
_____no_output_____
###Markdown
Save model
###Code
joblib.dump(clf, model_path)
###Output
_____no_output_____
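###Markdown
The saved estimator can later be restored with joblib.load (shown here as a small added sketch, reusing the model_path defined in the Config section):
###Code
# Reload the model from disk to verify the saved artifact is usable
clf_loaded = joblib.load(model_path)
clf_loaded
###Output
_____no_output_____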
###Markdown
Predict
###Code
PATH_TO_DATA = '../data/raw'
df_scoring = pd.read_feather(os.path.join(PATH_TO_DATA, 'scoring_user_features.feather'))
df_scoring_targets = pd.read_feather(os.path.join(PATH_TO_DATA, 'scoring_target.feather'))
df_scoring_targets.head()
# Get train / test data for the split
scoring_period = str(df_scoring_targets.month.unique()[0])
print(f'Scoring period: {scoring_period}')
y_true_scoring = df_scoring_targets['target']
y_pred_scoring = clf.predict(df_scoring.drop(columns=['user_id', 'month'], axis=1))
probas_scoring = clf.predict_proba(df_scoring.drop(columns=['user_id', 'month'], axis=1))
print(f'Max probas: {probas_scoring[:, 1].max()}')
metrics_scoring = pd.DataFrame(columns=['scoring_period', 'lift', 'precision_at_k', 'recall_at_k'])
lift = lift_score(y_true_scoring, y_pred_scoring, probas_scoring[:, 1], top_K)
precision_at_k = precision_at_k_score(y_true_scoring, y_pred_scoring, probas_scoring[:, 1], top_K)
recall_at_k = recall_at_k_score(y_true_scoring, y_pred_scoring, probas_scoring[:, 1], top_K)
metrics_scoring = metrics_scoring.append(
dict(zip(metrics_scoring.columns, [scoring_period, lift, precision_at_k, recall_at_k])),
ignore_index=True,
)
metrics_scoring
from sklearn.metrics import roc_auc_score
roc_auc_score(y_true_scoring, y_pred_scoring)
###Output
_____no_output_____ |
Python/Regex.ipynb | ###Markdown
Regular expressions, or regex, are a syntax to search, extract and manipulate specific string patterns from a larger text. A basic example is '\s+'. Here the '\s' matches any whitespace character, and adding a '+' at the end makes the pattern match one or more whitespace characters. So this pattern will match tab '\t' characters as well.
###Code
import re
regex = re.compile('\s+')
###Output
_____no_output_____
###Markdown
split a string separated by a regex
###Code
text = """101 COM Computers
205 MAT Mathematics
189 ENG English"""
###Output
_____no_output_____
###Markdown
I have three course items in the format “[Course Number] [Course Code] [Course Name]”. The spacing between the words is not equal. I want to split these three course items into individual units of numbers and words. How to do that? This can be done in two ways: 1. By using the re.split method. 2. By calling the split method of the compiled regex object.
###Code
# split the text around 1 or more space characters
re.split('\s+', text)
regex.split(text) # this is preferred one
###Output
_____no_output_____
###Markdown
Finding pattern matches using findall, search and match. Let’s suppose you want to extract all the course numbers, that is, the numbers 101, 205 and 189, from the above text. How? Use re.findall(). '\d' is a regular expression which matches any digit; adding a '+' symbol to it requires at least 1 digit to be present for a match. Similar to '+', there is a '*' symbol which matches 0 or more digits.
###Code
# find all numbers within the text
print(text)
regex_num = re.compile('\d+')
regex_num.findall(text)
###Output
_____no_output_____
###Markdown
re.search() vs re.match(): regex.search() searches for the pattern anywhere in a given text, whereas regex.match() requires the pattern to be present at the beginning of the text itself.
###Code
text2 = """COM Computers
205 MAT Mathematics 189"""
# compile the regex and search the pattern
regex_num = re.compile('\d+')
s = regex_num.search(text2)
print('Starting Position: ', s.start())
print('Ending Position: ', s.end())
print(text2[s.start():s.end()])
###Output
('Starting Position: ', 17)
('Ending Position: ', 20)
205
###Markdown
We can get the same output using the group() method of the match object.
###Code
print(s.group())
m = regex_num.match(text2)
print(m) # unlike search, match returns None here because the pattern is not at the start of text2
###Output
_____no_output_____
###Markdown
Substitute one text with another using regex. To replace text, use regex.sub().
###Code
text = """101 COM \t Computers
205 MAT \t Mathematics
189 ENG \t English"""
print(text)
###Output
101 COM Computers
205 MAT Mathematics
189 ENG English
###Markdown
We want to even out all the extra spaces and put all the words on one single line. To do this, we just have to use regex.sub to replace the '\s+' pattern with a single space ' '.
###Code
# replace one or more spaces with single space
regex = re.compile('\s+')
print(regex.sub(' ', text))
# or
print(re.sub('\s+', ' ', text))
###Output
101 COM Computers 205 MAT Mathematics 189 ENG English
###Markdown
What if we only want to get rid of the extra spaces but keep the course entries on their own lines? This can be done using a negative lookahead (?!\n). It checks for an upcoming newline character and excludes it from the match.
###Code
# get rid of all extra spaces except newline
regex = re.compile('((?!\n)\s+)')
print(regex.sub(' ', text))
###Output
_____no_output_____
###Markdown
Regex groups. I want to extract the course number, code and name as separate items. First, without groups:
###Code
text = """101 COM Computers
205 MAT Mathematics
189 ENG English"""
# 1. extract all course numbers
print(re.findall('[0-9]+', text))
# 2. extract all course codes
print(re.findall('[A-Z]{3}', text))
# 3. extract all course names
print(re.findall('[A-Za-z]{4,}', text))
###Output
['101', '205', '189']
['COM', 'MAT', 'ENG']
['Computers', 'Mathematics', 'English']
###Markdown
We compiled 3 separate regular expressions, one each for matching the course number, code and name. For the course number, the pattern [0-9]+ matches digits from 0 to 9; adding a + symbol at the end requires at least 1 occurrence of a digit. If you know the course number will always have exactly 3 digits, the pattern could have been [0-9]{3} instead. For the course code, '[A-Z]{3}' will match exactly 3 consecutive capital letters A-Z. For the course name, '[A-Za-z]{4,}' will look for upper and lower case letters, assuming all course names have at least 4 characters. If we want to do all of this in a single expression, we use regex groups.
###Code
# define the course text pattern groups and extract
course_pattern = '([0-9]+)\s*([A-Z]{3})\s*([A-Za-z]{4,})'
re.findall(course_pattern, text)
###Output
_____no_output_____
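###Markdown
As an optional extension (an added sketch, not part of the original notebook), named groups make the extracted pieces self-describing. This reuses the same course text and pattern idea as above:
###Code
# Hypothetical variant of the course pattern using named groups
named_pattern = r'(?P<number>[0-9]+)\s*(?P<code>[A-Z]{3})\s*(?P<name>[A-Za-z]{4,})'
for m in re.finditer(named_pattern, text):
    print(m.groupdict())  # e.g. {'number': '101', 'code': 'COM', 'name': 'Computers'}
###Output
_____no_output_____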
###Markdown
Greedy matching in regex. The default behavior of regular expressions is to be greedy: the engine tries to match as much text as possible that still conforms to the pattern, even when a smaller part would have been syntactically sufficient.
###Code
text = "< body>Regex Greedy Matching Example < /body>"
re.findall('<.*>', text)
###Output
_____no_output_____
###Markdown
Instead of matching up to the first occurrence of ‘>’, which I was hoping would happen at the end of the first body tag, it extracted the whole string. This is the default greedy or ‘take it all’ behavior of regex. Lazy matching, on the other hand, ‘takes as little as possible’. This can be achieved by adding a `?` after the quantifier in the pattern.
###Code
re.findall('<.*?>', text)
###Output
_____no_output_____
###Markdown
If you want only the first match to be retrieved, use the search method instead.
###Code
re.search('<.*?>', text).group()
###Output
_____no_output_____
###Markdown
Regular expression syntax and patterns

Regular Expressions Syntax

BASIC SYNTAX
.   One character except new line
\.  A period. \ escapes a special character.
\d  One digit
\D  One non-digit
\w  One word character including digits
\W  One non-word character
\s  One whitespace
\S  One non-whitespace
\b  Word boundary
\n  Newline
\t  Tab

MODIFIERS
$       End of string
^       Start of string
ab|cd   Matches ab or cd
[ab-d]  One character of: a, b, c, d
[^ab-d] One character except: a, b, c, d
()      Items within parentheses are retrieved
(a(bc)) Items within the sub-parentheses are retrieved

REPETITIONS
[ab]{2}   Exactly 2 continuous occurrences of a or b
[ab]{2,5} 2 to 5 continuous occurrences of a or b
[ab]{2,}  2 or more continuous occurrences of a or b
+  One or more
*  Zero or more
?  0 or 1
###Code
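# A few quick checks of the syntax table above (added here for illustration; not from the original notebook)
print(re.findall(r'\d', 'abc123'))            # ['1', '2', '3']
print(re.findall(r'\w+', 'hello_world!'))     # ['hello_world']
print(re.findall(r'[ab]{2,5}', 'aababbbcc'))  # ['aabab', 'bb'] (greedy chunks of a/b)
print(re.findall(r'colou?r', 'color colour')) # ['color', 'colour']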
###Output
_____no_output_____
###Markdown
Regular Expressions Examples Any character except for a new line
###Code
text = 'https://www.howlongtoreadthis.com'
print(re.findall('.', text)) # . Any character except for a new line
print(re.findall('...', text))
###Output
['h', 't', 't', 'p', 's', ':', '/', '/', 'w', 'w', 'w', '.', 'h', 'o', 'w', 'l', 'o', 'n', 'g', 't', 'o', 'r', 'e', 'a', 'd', 't', 'h', 'i', 's', '.', 'c', 'o', 'm']
['htt', 'ps:', '//w', 'ww.', 'how', 'lon', 'gto', 'rea', 'dth', 'is.', 'com']
###Markdown
A period
###Code
text = 'https://www.howlongtoreadthis.com'
print(re.findall('\.', text)) # matches a period
print(re.findall('[^\.]', text)) # matches anything but a period
###Output
['.', '.']
['h', 't', 't', 'p', 's', ':', '/', '/', 'w', 'w', 'w', 'h', 'o', 'w', 'l', 'o', 'n', 'g', 't', 'o', 'r', 'e', 'a', 'd', 't', 'h', 'i', 's', 'c', 'o', 'm']
###Markdown
Any digit
###Code
text = '01, Jan 2015'
print(re.findall('\d+', text)) # \d Any digit. The + mandates at least 1 digit.
###Output
['01', '2015']
###Markdown
Anything but a digit
###Code
text = '01, Jan 2015'
print(re.findall('\D+', text)) # \D Anything but a digit
###Output
[', Jan ']
###Markdown
Any character, including digits
###Code
text = '01, Jan 2015'
print(re.findall('\w+', text)) # \w Any character
###Output
['01', 'Jan', '2015']
###Markdown
Anything but a character
###Code
text = '01, Jan 2015'
print(re.findall('\W+', text)) # \W Anything but a character
###Output
[', ', ' ']
###Markdown
Collection of characters
###Code
text = '01, Jan 2015'
print(re.findall('[a-zA-Z]+', text)) # [] Matches any character inside
###Output
['Jan']
###Markdown
Match something upto ‘n’ times
###Code
text = '01, Jan 2015'
print(re.findall('\d{4}', text)) # {n} Matches repeat n times.
print(re.findall('\d{2,4}', text))
###Output
['2015']
['01', '2015']
###Markdown
Match 1 or more occurrences
###Code
print(re.findall(r'Co+l', 'So Cooool')) # Match for 1 or more occurrences
###Output
['Cooool']
###Markdown
Match any number of occurrences (0 or more times)
###Code
print(re.findall(r'Pi*lani', 'Pilani'))
###Output
['Pilani']
###Markdown
Match exactly zero or one occurrence
###Code
print(re.findall(r'colou?r', 'color'))
###Output
['color']
###Markdown
Match word boundaries. Word boundaries \b are commonly used to detect and match the beginning or end of a word, that is, a position where one side is a word character and the other side is whitespace or the edge of the string. For example, the regex \btoy will match the ‘toy’ in ‘toy cat’ and not in ‘tolstoy’. In order to match the ‘toy’ in ‘tolstoy’, you should use toy\b. Can you come up with a regex that will match only the first ‘toy’ in ‘play toy broke toys’? (hint: \b on both sides) Likewise, \B will match any non-boundary. For example, \Btoy\B will match ‘toy’ surrounded by word characters on both sides, as in ‘antoynet’.
###Code
re.findall(r'\btoy\b', 'play toy broke toys') # match toy with boundary on both sides
###Output
_____no_output_____
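###Markdown
A quick check of the \B example mentioned above (this cell is an added illustration):
###Code
re.findall(r'\Btoy\B', 'antoynet')  # 'toy' with word characters on both sides -> ['toy']
###Output
_____no_output_____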
###Markdown
Practice Exercises

1. Extract the user id, domain name and suffix from the following email addresses.
emails = """[email protected]
[email protected]
[email protected]"""
desired_output = [('zuck26', 'facebook', 'com'), ('page33', 'google', 'com'), ('jeff42', 'amazon', 'com')]
###Code
emails = """[email protected]
[email protected]
[email protected]"""
# Solution
pattern = r'(\w+)@([A-Z0-9]+)\.([A-Z]{2,4})'
re.findall(pattern, emails, flags=re.IGNORECASE)
pattern = r'(\w+)@([A-Za-z]+)\.([A-Za-z]{3})'
re.findall(pattern, emails)
###Output
_____no_output_____
###Markdown
2. Retrieve all the words starting with ‘b’ or ‘B’ from the following text.
text = """Betty bought a bit of butter, But the butter was so bitter, So she bought some better butter, To make the bitter butter better."""
###Code
text = """Betty bought a bit of butter, But the butter was so bitter, So she bought some better butter, To make the bitter butter better."""
# Solution:
import re
re.findall(r'\bB\w+', text, flags=re.IGNORECASE)
# '\b' mandates the left of 'B' is a word boundary, effectively requiring the word to start with 'B'.
# Setting 'flags' arg to 're.IGNORECASE' makes the pattern case insensitive.
re.findall(r'\b[Bb]\w+', text)
###Output
_____no_output_____
###Markdown
3. Split the following irregular sentence into words.
sentence = """A, very very; irregular_sentence"""
desired_output = "A very very irregular sentence"
###Code
sentence = """A, very very; irregular_sentence"""
# Solution
import re
" ".join(re.split('[;,\s_]+', sentence))
#> 'A very very irregular sentence'
# Add more delimiters into the pattern as needed.
###Output
_____no_output_____
###Markdown
4. Clean up the following tweet so that it contains only the user’s message. That is, remove all URLs, hashtags, mentions, punctuation, RTs and CCs.
tweet = '''Good advice! RT @TheNextWeb: What I would do differently if I was learning to code today http://t.co/lbwej0pxOd cc: @garybernhardt #rstats'''
desired_output = 'Good advice What I would do differently if I was learning to code today'
###Code
tweet = '''Good advice! RT @TheNextWeb: What I would do differently if I was learning to code today http://t.co/lbwej0pxOd cc: @garybernhardt #rstats'''
# Solution
import re
def clean_tweet(tweet):
tweet = re.sub('http\S+\s*', '', tweet) # remove URLs
tweet = re.sub('RT|cc', '', tweet) # remove RT and cc
tweet = re.sub('#\S+', '', tweet) # remove hashtags
tweet = re.sub('@\S+', '', tweet) # remove mentions
tweet = re.sub('[%s]' % re.escape("""!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~"""), '', tweet) # remove punctuations
tweet = re.sub('\s+', ' ', tweet) # remove extra whitespace
return tweet
print(clean_tweet(tweet))
###Output
Good advice What I would do differently if I was learning to code today
###Markdown
5. Extract all the text portions between the tags from the following HTML page: https://raw.githubusercontent.com/selva86/datasets/master/sample.html
import requests
r = requests.get("https://raw.githubusercontent.com/selva86/datasets/master/sample.html")
r.text  # html text is contained here
desired_output = ['Your Title Here', 'Link Name', 'This is a Header', 'This is a Medium Header', 'This is a new paragraph! ', 'This is a another paragraph!', 'This is a new sentence without a paragraph break, in bold italics.']
###Code
import requests
r = requests.get("https://raw.githubusercontent.com/selva86/datasets/master/sample.html")
r.text
# Solution:
# Note: remove the space after < and /.*> for the pattern to work
re.findall('<.*?>(.*)</.*?>', r.text)
#> ['Your Title Here', 'Link Name', 'This is a Header', 'This is a Medium Header', 'This is a new paragraph! ', 'This is a another paragraph!', 'This is a new sentence without a paragraph break, in bold italics.']
###Output
_____no_output_____ |
docs/_sources/docs/Chapter_6/ODEs.ipynb | ###Markdown
Ordinary Differential Equations (ODEs)

If you give an object (e.g., a large wooden rabbit) some initial velocity $v$ over a castle wall, then you will note that the vertical component of velocity gradually decreases. Eventually, the vertical velocity component changes direction and the object (or wooden rabbit) impacts the ground with a speed roughly equal to the speed it had when it left. This example probably reminds you of the kinematics that you learned in your introductory physics course. Resurrecting that prior knowledge will assist you to develop intuitions to set up and solve ordinary differential equations (ODEs) numerically. Through kinematics, we define an acceleration $a$ as: $a = \frac{dv}{dt}$ or $dv = a\,dt$ (with finite steps $d \rightarrow \Delta$). However, we can perform a [Galilean transformation](https://en.wikipedia.org/wiki/Galilean_transformation) to make this result more general, which incorporates the initial velocity $v_o$, and then we can estimate the new velocity (given that we can calculate $a$). Mathematically, this is: $v = v_o + a\Delta t$.

Eventually, we'll want to change this into code. This is much easier if we write the above equation as a [recurrence relation](https://en.wikipedia.org/wiki/Recurrence_relation), where the next value can be determined by the previous one: $v_{i+1} = v_i + \frac{dv}{dt}\Delta t$. We can use an identical process to estimate the position of the projectile along a direction $x$, using the definition $v = dx/dt$: $x_{i+1} = x_i + \frac{dx}{dt}\Delta t$. This method does not give an exact result with large $\Delta t$, but for small enough $\Delta t$ it's close. We can generalize further to define a state **$y$**$=[x,v]$ and use a single relation: $y_{i+1} = [x_i,v_i] + \frac{d}{dt}[x_i,v_i]\Delta t$.

Euler's method

The method we've described is called *Euler's method* and it is a good first step in solving ODEs. Let's consider a picture of how it works. Starting from an initial point $(t_o,x_o)$, we estimate the slope of $x(t)$ between the current point and a time step $\Delta t$ forward to find the approximate value of $x_1$. Because we used a recurrence relation, these steps can be repeated to find each step $x_i$ in the series of $x_n$. The error through Euler's method can be large, but you can reduce the error introduced in the approximation by decreasing the time step $\Delta t$. Essentially, you are decreasing the interval for the function $x(t)$ until it becomes approximately linear. To determine the sensitivity of Euler's method relative to the step size $\Delta t$, we perform a Taylor expansion of $x(t)$ as: $x(t+\Delta t) = x(t) + \frac{dx}{dt}\Delta t + \frac{d^2x}{dt^2}\frac{\Delta t^2}{2} + \cdots$. The first two terms on the right hand side are Euler's method and the error in each step is on the order of $\Delta t^2$, since that's the first term omitted in the Taylor series expansion. However, you accumulate error over the full time interval $\tau$ with the number of steps $N = \tau/\Delta t$, which changes the total error to order $\Delta t$. Notice that decreasing the step size $\Delta t$ improves your result linearly, and Euler's method only works on first-order differential equations. This means that if we can re-cast higher-order differential equations into a series of first-order differential equations, then we have a very general method.

For example, let's explore the case of a mass on a spring. The force of a spring is $F_{spr} = ma = -kx$, where $k$ is a spring constant and $m$ is the mass.
This is a second-order ODE, but we can re-write the equations like in the introductory section with the Galilean transformation. This involves a new variable $v = dx/dt$. We have the following: $a = -\frac{k}{m}x$ (original), which can be transformed into two coupled first-order differential equations: $\frac{dx}{dt} = v$ and $\frac{dv}{dt} = -\frac{k}{m}x$.

Standard Method for Solving ODEs

Here we develop a *standard method* for solving ODEs, which will be a blueprint for using different algorithms that have been developed. This way it takes only a minimum amount of reprogramming to change between algorithms. To start, consider the differential equation for a large wooden rabbit in free-fall with the *English pig-dogs* as the intended target: $\ddot{x} = \frac{d^2x}{dt^2} = -g$, where we've introduced the *dot* notation to make things a little easier when writing time derivatives. The above equation can be broken into two first-order equations: $\dot{x} = \frac{dx}{dt} = v$ and $\dot{v} = \frac{dv}{dt} = -g$. The individual Euler solutions to those first-order equations are: $x_{i+1} = x_i + \dot{x}\Delta t$ and $v_{i+1} = v_i + \dot{v}\Delta t$. There is a symmetry (which will help in producing code later) that lets you write them as a single vector equation: $y_{i+1} = y_i + \dot{y}\Delta t$, where $y = [x,v]$ and $\dot{y} = [v,-g]$. By writing the equations as vectors, we can better define the problem and change our thinking into one where **states** evolve. Let's turn this into code:
###Code
import numpy as np

def deriv_freefall(y,t):
#function to define the derivatives need to solve the problem of free-fall
#y is the current state and holds the variables [x,v] (position and first derivative)
#t is the current time; not really used but kept for consistency
yprime = np.zeros(len(y)) #derivative vector to be returned
yprime[0] = y[1] #store the velocity
yprime[1] = -9.8 #store the acceleration
return yprime
def Euler(y,t,dt,derivs):
#function to implement Euler's method given the
#y = [x,v] current state
#t = current time
#dt = time step
#derivs = derivative function that defines the problem
y_next = y + derivs(y,t)*dt
return y_next
###Output
_____no_output_____
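###Markdown
As a quick illustration (an added sketch, not part of the original notebook), here is a single Euler step for the free-fall problem starting from rest:
###Code
# One Euler step of size dt = 0.1 s from the state [x, v] = [0, 0]
import numpy as np

y0 = np.array([0.0, 0.0])
dt = 0.1
y1 = Euler(y0, 0.0, dt, deriv_freefall)
print(y1)  # position is still 0, velocity has decreased to about -0.98 m/s
###Output
_____no_output_____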
###Markdown
The above functions include the time $t$ in their arguments, but it is not used in the functions at all. This is on purpose because we want to create a general method, where in some other case the time variable could be more important. Note also that the derivative function we created, *deriv_freefall*, is specific to the problem at hand, but the *Euler* function is completely general. Using our standard method, let's put everything together to solve the mass on a spring problem, but vertically. **Will the forces be different?** Let's define the problem using code and see what we get:
###Code
#SHO_Euler.py; Simple Harmonic motion (vertical mass on a spring)
import numpy as np
import matplotlib.pyplot as plt
N = 1000 #number of steps to take
y_o = 0. #initial position (spring unstretched)
v_o = 0. #starting at rest
tau = 3. #total time for simulation (in seconds)
h = tau/float(N-1) #time step
k = 3.5 #spring constant (in N/m)
m = 0.2 #mass (in kg)
g = 9.8 #gravity (in m/s^2); new force since the spring is now vertical
states_Euler = np.zeros((N,2)) #storage for each state (used for plotting later)
times = np.arange(0,tau+h,h)
states_Euler[0,:] = [y_o,v_o] #set initial state (for completeness)
def Euler(y,t,h,derivs):
#function to implement Euler's method
#y = [x,v] current state
#t = current time
#h = time step
#derivs = derivative function that defines the problem
y_next = y + derivs(y,t)*h
return y_next
def SHO(x,time):
#Simple Harmonic Oscillator
#x = [y_t,v_t]; t = time (unused)
#2nd order eqn: dy^2/dt^2 = -k/m y - g
yp = np.zeros(2) #initialize return state
yp[0] = x[1] #dy/dt = v_t
yp[1] = -k/m*x[0] - g #dv/dt = -k/m y - g
return yp
for j in range(0,N-1):
#We obtain the j+1 state by feeding the j state to Euler
states_Euler[j+1,:] = Euler(states_Euler[j,:],times[j],h,SHO)
#Now let's visualize our results
fig = plt.figure(1,figsize=(6,12))
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
ax1.plot(times,states_Euler[:,0],'r.',ms=10)
ax2.plot(times,states_Euler[:,1],'b.',ms=10)
ax1.set_xlim(0,tau)
ax2.set_xlim(0,tau)
ax1.set_ylabel("y (m)",fontsize=20)
ax2.set_ylabel("$v_y$ (m/s)", fontsize=20)
ax2.set_xlabel("Time (s)",fontsize=20)
###Output
_____no_output_____
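###Markdown
As an added check (a sketch using the variables defined above), this linear ODE has the closed-form solution $y(t) = -\frac{mg}{k}\left(1-\cos\sqrt{k/m}\,t\right)$ for a mass released from rest at the unstretched position, so we can measure the Euler error directly:
###Code
# Compare the Euler solution to the exact solution of dv/dt = -(k/m) y - g with y(0) = v(0) = 0
omega = np.sqrt(k/m)
n = min(len(times), len(states_Euler))
y_exact = -(m*g/k)*(1.0 - np.cos(omega*times[:n]))
print('max |error| =', np.max(np.abs(states_Euler[:n,0] - y_exact)))
###Output
_____no_output_____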
###Markdown
Notice that we defined our time step using the total time and the number of steps. **What do you think will happen if we increase the time or decrease the number of steps?** **Will the results be very different?**

Runge-Kutta Methods

The most popular and general technique of solving ODEs is a set of methods called *Runge-Kutta* or **rk** methods. The Runge-Kutta algorithms for integrating a differential equation are based upon the formal (exact) integral of a differential equation: $\frac{dy}{dt} = f(t,y) \implies y(t) = \int f(t,y) dt$, $\Rightarrow y_{n+1} = y_n + \displaystyle \int_{t_n}^{t_{n+1}} f(t,y) dt$.

To derive the second-order Runge-Kutta algorithm (**rk2**), we expand $f(t,y)$ in a Taylor series about the *midpoint* of the integration interval and retain two terms: $f(t,y) \simeq f(t_{n+1/2},y_{n+1/2}) + (t-t_{n+1/2})\frac{df}{dt}(t_{n+1/2}) + O(h^2)$. Because $(t-t_{n+1/2})$ to any odd power is symmetric (equally positive and negative) over the interval $t_n\leq t \leq t_{n+1}$, the integral of the second term with $(t-t_{n+1/2})$ vanishes and we obtain our algorithm: $\displaystyle \int_{t_n}^{t_{n+1}} f(t,y) dt \simeq f(t_{n+1/2},y_{n+1/2})h + O(h^3)$, $\implies y_{n+1} \simeq y_n + f(t_{n+1/2},y_{n+1/2})h + O(h^3)$ (**rk2**).

We should notice that while **rk2** contains the same number of terms as Euler's rule, it obtains a higher level of precision by taking advantage of the cancellation of the $O(h)$ terms (recall something similar happened when comparing Trapezoid to Simpson's rule). The price for improved precision is having to evaluate the derivative function and $y$ at the middle of the time interval, $t = t_n + h/2$. But, we don't have a function to evaluate at this point! The way out of this quagmire is to use Euler's algorithm for the midpoint $y_{n+1/2}$: $y_{n+1/2} \simeq y_n + \frac{h}{2}\frac{dy}{dt}=y_n + \frac{h}{2}f(t_n,y_n)$. Combining the above expression with our equation for **rk2**, we get:

$y_{n+1} \simeq y_n + k_2$, (**rk2**)

$k_2 = hf(t_n+h/2,y_n+k_1/2),\;\;\;\;k_1 = hf(t_n,y_n)$,

where $y$ is a state vector (and hence $f(t,y)$ is a state vector too). The known derivative function $\frac{dy}{dt} = f(t,y)$ is evaluated at the start and the midpoint of the interval, but only the known initial value of $y$ is required. This makes the algorithm self-starting.

Just like how we expanded our integration methods to consider more steps, we can do the same with **rk2** to get **rk4**. Here is the algorithm for **rk4**:

$y_{n+1} = y_n + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)$,

$k_1 = hf(t_n,y_n),\;\;\;\;k_2 = hf(t_n+h/2,y_n+k_1/2)$,

$k_3 = hf(t_n+h/2,y_n+k_2/2),\;\;\;\;k_4 = hf(t_n+h,y_n+k_3)$.

Let's apply this to our previous problem of the mass on a spring in code!
###Code
#SHO_rk4.py; Simple Harmonic motion (vertical mass on a spring)
import numpy as np
import matplotlib.pyplot as plt
N = 1000 #number of steps to take
y_o = 0. #initial position (spring unstretched)
v_o = 0. #starting at rest
tau = 3. #total time for simulation (in seconds)
h = tau/float(N-1) #time step
k = 3.5 #spring constant (in N/m)
m = 0.2 #mass (in kg)
g = 9.8 #gravity (in m/s^2); new force since the spring is now vertical
states_rk4 = np.zeros((N,2)) #storage for each state (used for plotting later)
times = np.arange(0,tau+h,h)
states_rk4[0,:] = [y_o,v_o] #set initial state (for completeness)
def rk4(y,t,h,derivs):
#function to implement rk4
#y = [x,v] current state
#t = current time
#h = time step
#derivs = derivative function that defines the problem
k1,k2,k3,k4 = np.zeros(2),np.zeros(2),np.zeros(2),np.zeros(2)
k1 = h*derivs(y,t)
y_halfstep = y + k1/2. #Euler half step using k1
k2 = h*derivs(y_halfstep,t+h/2)
y_halfstep = y + k2/2. #Euler half step using k2
k3 = h*derivs(y_halfstep,t+h/2)
k4 = h*derivs(y + k3,t+h) #full step using k3
y_next = y + (k1+2*k2+2*k3+k4)/6.
return y_next
def SHO(x,time):
#Simple Harmonic Oscillator
#x = [y_t,v_t]; t = time (unused)
#2nd order eqn: dy^2/dt^2 = -k/m y - g
yp = np.zeros(2) #initialize return state
yp[0] = x[1] #dy/dt = v_t
yp[1] = -k/m*x[0] - g #dv/dt = -k/m y - g
return yp
for j in range(0,N-1):
#We obtain the j+1 state by feeding the j state to rk4
states_rk4[j+1,:] = rk4(states_rk4[j,:],times[j],h,SHO)
#Now let's visualize our results
fig = plt.figure(1,figsize=(6,12))
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
ax1.plot(times,states_rk4[:,0]-states_Euler[:,0],'r.',ms=10)
ax2.plot(times,states_rk4[:,1]-states_Euler[:,1],'b.',ms=10)
#ax1.plot(times,states_rk4[:,0],'r.',ms=10)
#ax2.plot(times,states_rk4[:,1],'b.',ms=10)
ax1.set_xlim(0,tau)
ax2.set_xlim(0,tau)
ax1.set_ylabel("y (m)",fontsize=20)
ax2.set_ylabel("$v_y$ (m/s)", fontsize=20)
ax2.set_xlabel("Time (s)",fontsize=20)
###Output
_____no_output_____
###Markdown
The above plots show the differences between **Euler** and **rk4**, where the solutions for the position are virtually identical, but the differences in velocity are more substantial. The **rk4** is the more accurate and versatile of the two methods. There are higher-order methods with adaptive step sizes, which you can find in [scipy.integrate](https://docs.scipy.org/doc/scipy/reference/integrate.html). Here is an example function: scipy.integrate.RK45(fun, t0, y0, t_bound=tau, max_step=h, rtol=0.001, atol=1e-06), where fun defines the derivative function (SHO in our case, noting that SciPy expects the call signature fun(t, y), with time as the first argument), t0 is the initial time, y0 is the initial **state**, t_bound is the final time, max_step is the maximum step size that we limit to h, rtol is a relative error tolerance level, and atol is the absolute error tolerance level.

Let's evaluate a more difficult problem using the mass on a spring, where we include a friction coefficient.
###Code
#SHO_rk4_friction.py; Simple Harmonic motion (vertical mass on a spring)
import numpy as np
import matplotlib.pyplot as plt
N = 1000 #number of steps to take
y_o = 0.2 #initial position (spring unstretched)
v_o = 0. #starting at rest
tau = 3. #total time for simulation (in seconds)
h = tau/float(N-1) #time step
k = 42 #spring constant (in N/m)
m = 0.25 #mass (in kg)
g = 9.8 #gravity (in m/s^2); new force since the spring is now vertical
mu = 0.15 #coefficient of friction
states_rk4_fric = np.zeros((N,2)) #storage for each state (used for plotting later)
times = np.arange(0,tau+h,h)
states_rk4_fric[0,:] = [y_o,v_o] #set initial state (for completeness)
def rk4(y,t,h,derivs):
#function to implement rk4
#y = [x,v] current state
#t = current time
#h = time step
#derivs = derivative function that defines the problem
k1,k2,k3,k4 = np.zeros(2),np.zeros(2),np.zeros(2),np.zeros(2)
k1 = h*derivs(y,t)
y_halfstep = y + k1/2. #Euler half step using k1
k2 = h*derivs(y_halfstep,t+h/2)
y_halfstep = y + k2/2. #Euler half step using k2
k3 = h*derivs(y_halfstep,t+h/2)
k4 = h*derivs(y + k3,t+h) #full step using k3
y_next = y + (k1+2*k2+2*k3+k4)/6.
return y_next
def SHO(x,time):
#Simple Harmonic Oscillator
#x = [y_t,v_t]; t = time (unused)
#2nd order eqn: dy^2/dt^2 = -k/m y - g
yp = np.zeros(2) #initialize return state
yp[0] = x[1] #dy/dt = v_t
if yp[0] > 0: #check if velocity is positive
yp[1] = -k/m*x[0] - g*mu #dv/dt = -k/m y - g mu; w/friction
else: #check if velocity is positive
yp[1] = -k/m*x[0] + g*mu #dv/dt = -k/m y + g mu; w/friction
return yp
for j in range(0,N-1):
#We obtain the j+1 state by feeding the j state to rk4
states_rk4_fric[j+1,:] = rk4(states_rk4_fric[j,:],times[j],h,SHO)
#Now let's visualize our results
fig = plt.figure(1,figsize=(6,12))
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
#ax1.plot(times,states_rk4[:,0],'k.',ms=10)
#ax2.plot(times,states_rk4[:,1],'k.',ms=10)
ax1.plot(times,states_rk4_fric[:,0],'r.',ms=10)
ax2.plot(times,states_rk4_fric[:,1],'b.',ms=10)
ax1.set_xlim(0,tau)
ax2.set_xlim(0,tau)
ax1.set_ylabel("y (m)",fontsize=20)
ax2.set_ylabel("$v_y$ (m/s)", fontsize=20)
ax2.set_xlabel("Time (s)",fontsize=20)
###Output
_____no_output_____ |
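###Markdown
As a cross-check of the hand-written rk4 above (an added sketch, assuming SciPy is available), we can re-solve the same friction problem with scipy.integrate.solve_ivp, which expects the derivative in the order fun(t, y):
###Code
from scipy.integrate import solve_ivp

def rhs(t, y):
    # adapt the notebook's SHO(x, time) to SciPy's expected fun(t, y) signature
    return SHO(y, t)

n = min(len(times), len(states_rk4_fric))
sol = solve_ivp(rhs, (0.0, times[n-1]), [y_o, v_o], t_eval=times[:n], max_step=h)
print('max |y_rk4 - y_scipy| =', np.max(np.abs(states_rk4_fric[:n,0] - sol.y[0])))
###Output
_____no_output_____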
examples/Example Spine Sagittal Body.ipynb | ###Markdown
Surface
###Code
dist = ss.dist_to_surface()
show_dists(dist)
###Output
2020-11-19 16:01:29.111 | DEBUG | imma.image:resize_to_shape:93 - resize to orig with skimage
###Markdown
Spine
###Code
dist = ss.dist_to_spine()
show_dists(dist)
###Output
C:\Users\Jirik\Miniconda3\envs\lisa3qt5\lib\site-packages\ipykernel_launcher.py:10: UserWarning: No contour levels were found within the data range.
# Remove the CWD from sys.path while we load stuff.
C:\Users\Jirik\Miniconda3\envs\lisa3qt5\lib\site-packages\ipykernel_launcher.py:13: UserWarning: No contour levels were found within the data range.
del sys.path[0]
###Markdown
Lungs
###Code
dist = ss.dist_to_lungs()
show_dists(dist)
###Output
2020-11-19 16:01:39.444 | DEBUG | imma.image:resize_to_shape:93 - resize to orig with skimage
2020-11-19 16:01:43.833 | DEBUG | imma.image:resize_to_shape:93 - resize to orig with skimage
###Markdown
Sagittal plane
###Code
dist = ss.dist_to_sagittal()
show_dists(dist)
###Output
_____no_output_____ |
Pandas - Advanced Pandas/02.Advanced Calculations/02-05-map_apply_functions.ipynb | ###Markdown
`Apply`, `Map`, and `Applymap` in Pandas
###Code
import pandas as pd
df = pd.DataFrame({
"Region":['North','West','East','South','North','West','East','South'],
"Team":['One','One','One','One','Two','Two','Two','Two'],
"Squad":['A','B','C','D','E','F','G','H'],
"Revenue":[7500,5500,2750,6400,2300,3750,1900,575],
"Cost":[5200,5100,4400,5300,1250,1300,2100,50]
})
df.head()
###Output
_____no_output_____
###Markdown
------- Apply

Use `apply()` to alter values along an axis in your **dataframe** or in a **series** by applying a function. Make sure to pass axis=1 if you are comparing values from different columns, i.e., applying the function row by row.
###Code
df['Profit'] = df.apply(lambda x: 'Profit' if x['Revenue'] > x['Cost'] else 'Loss', axis=1)
df
###Output
_____no_output_____
###Markdown
---- Map

Use `map` to substitute each value in a **series**, using either a function, dictionary, or series.
###Code
team_color = {
'One': 'Red',
'Two': 'Blue'
}
df['Team Color'] = df['Team'].map(team_color)
df
###Output
_____no_output_____
###Markdown
--------- Applymap

Use `applymap()` to apply a function to each element in your **dataframe**.
###Code
df.applymap(lambda x : len(str(x)))
# in this case every elements character length is calculated (both Numeric and String values)
###Output
_____no_output_____
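###Markdown
Another small illustration (added here as a sketch): applymap is often combined with column selection, for example to format only the numeric columns as currency strings:
###Code
# Format the two currency columns as strings (illustrative only)
df[['Revenue', 'Cost']].applymap(lambda x: f'${x:,.0f}')
###Output
_____no_output_____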
###Markdown
------ If all else fails, use a `for` loop. We want to find each row's Revenue as a percentage of its Region's total Revenue.
###Code
df.head()
# Total Revenue for Row Index 0's region (North)
df[df['Region'] == df.loc[0, 'Region']]['Revenue'].sum()
revenue_percent = []
for i in range(0, len(df)):
total_revenue_per_region = df[df['Region'] == df.loc[i, 'Region']]['Revenue'].sum()
rev = df['Revenue'][i] / total_revenue_per_region
revenue_percent.append(rev)
df['Revenue Share of Region'] = revenue_percent
df.sort_values(by='Region')
###Output
_____no_output_____ |
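###Markdown
For comparison (an added sketch, not part of the original notebook), the same column can be computed without an explicit loop using a vectorized groupby transform:
###Code
# Vectorized alternative: each row's Revenue divided by the total Revenue of its Region
df['Revenue Share of Region (vectorized)'] = df['Revenue'] / df.groupby('Region')['Revenue'].transform('sum')
df.sort_values(by='Region')
###Output
_____no_output_____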
Project/Sample Based Learning/Assignment3.ipynb | ###Markdown
Assignment3: Dyna-Q and Dyna-Q+ Welcome to this programming assignment! In this notebook, you will:1. implement the Dyna-Q and Dyna-Q+ algorithms. 2. compare their performance on an environment which changes to become 'better' than it was before, that is, the task becomes easier. We will give you the environment and infrastructure to run the experiment and visualize the performance. The assignment will be graded automatically by comparing the behavior of your agent to our implementations of the algorithms. The random seed will be set explicitly to avoid different behaviors due to randomness. Please go through the cells in order. The Shortcut Maze EnvironmentIn this maze environment, the goal is to reach the goal state (G) as fast as possible from the starting state (S). There are four actions – up, down, right, left – which take the agent deterministically from a state to the corresponding neighboring states, except when movement is blocked by a wall (denoted by grey) or the edge of the maze, in which case the agent remains where it is. The reward is +1 on reaching the goal state, 0 otherwise. On reaching the goal state G, the agent returns to the start state S to being a new episode. This is a discounted, episodic task with $\gamma = 0.95$.Later in the assignment, we will use a variant of this maze in which a 'shortcut' opens up after a certain number of timesteps. We will test if the the Dyna-Q and Dyna-Q+ agents are able to find the newly-opened shorter route to the goal state. PackagesWe import the following libraries that are required for this assignment. Primarily, we shall be using the following libraries:1. numpy: the fundamental package for scientific computing with Python.2. matplotlib: the library for plotting graphs in Python.3. RL-Glue: the library for reinforcement learning experiments.**Please do not import other libraries** — this will break the autograder.
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import os, jdc, shutil
from tqdm import tqdm
from rl_glue import RLGlue
from agent import BaseAgent
from maze_env import ShortcutMazeEnvironment
plt.rcParams.update({'font.size': 15})
plt.rcParams.update({'figure.figsize': [8,5]})
###Output
_____no_output_____
###Markdown
Section 1: Dyna-Q

Let's start with a quick recap of the tabular Dyna-Q algorithm. Dyna-Q involves four basic steps:
1. Action selection: given an observation, select an action to be performed (here, using the $\epsilon$-greedy method).
2. Direct RL: using the observed next state and reward, update the action values (here, using one-step tabular Q-learning).
3. Model learning: using the observed next state and reward, update the model (here, updating a table as the environment is assumed to be deterministic).
4. Planning: update the action values by generating $n$ simulated experiences using certain starting states and actions (here, using the random-sample one-step tabular Q-planning method). This is also known as the 'Indirect RL' step. The process of choosing the state and action to simulate an experience with is known as 'search control'.

Steps 1 and 2 are parts of the [tabular Q-learning algorithm](http://www.incompleteideas.net/book/RLbook2018.pdf#page=153) and are denoted by line numbers (a)–(d) in the pseudocode above. Step 3 is performed in line (e), and Step 4 in the block of lines (f). We highly recommend revising the Dyna videos in the course and the material in the RL textbook (in particular, [Section 8.2](http://www.incompleteideas.net/book/RLbook2018.pdf#page=183)).

Alright, let's begin coding. As you already know by now, you will develop an agent which interacts with the given environment via RL-Glue. More specifically, you will implement the usual methods `agent_start`, `agent_step`, and `agent_end` in your `DynaQAgent` class, along with a couple of helper methods specific to Dyna-Q, namely `update_model` and `planning_step`. We will provide detailed comments in each method describing what your code should do. Let's break this down in pieces and do it one-by-one.

First of all, check out the `agent_init` method below. As in earlier assignments, some of the attributes are initialized with the data passed inside `agent_info`. In particular, pay attention to the attributes which are new to `DynaQAgent`, since you shall be using them later.
###Code
# Do not modify this cell!
class DynaQAgent(BaseAgent):
def agent_init(self, agent_info):
"""Setup for the agent called when the experiment first starts.
Args:
agent_init_info (dict), the parameters used to initialize the agent. The dictionary contains:
{
num_states (int): The number of states,
num_actions (int): The number of actions,
epsilon (float): The parameter for epsilon-greedy exploration,
step_size (float): The step-size,
discount (float): The discount factor,
planning_steps (int): The number of planning steps per environmental interaction
random_seed (int): the seed for the RNG used in epsilon-greedy
planning_random_seed (int): the seed for the RNG used in the planner
}
"""
# First, we get the relevant information from agent_info
# NOTE: we use np.random.RandomState(seed) to set the two different RNGs
# for the planner and the rest of the code
try:
self.num_states = agent_info["num_states"]
self.num_actions = agent_info["num_actions"]
except:
print("You need to pass both 'num_states' and 'num_actions' \
in agent_info to initialize the action-value table")
self.gamma = agent_info.get("discount", 0.95)
self.step_size = agent_info.get("step_size", 0.1)
self.epsilon = agent_info.get("epsilon", 0.1)
self.planning_steps = agent_info.get("planning_steps", 10)
self.rand_generator = np.random.RandomState(agent_info.get('random_seed', 42))
self.planning_rand_generator = np.random.RandomState(agent_info.get('planning_random_seed', 42))
# Next, we initialize the attributes required by the agent, e.g., q_values, model, etc.
# A simple way to implement the model is to have a dictionary of dictionaries,
# mapping each state to a dictionary which maps actions to (reward, next state) tuples.
self.q_values = np.zeros((self.num_states, self.num_actions))
self.actions = list(range(self.num_actions))
self.past_action = -1
self.past_state = -1
self.model = {} # model is a dictionary of dictionaries, which maps states to actions to
# (reward, next_state) tuples
###Output
_____no_output_____
###Markdown
Now let's create the `update_model` method, which performs the 'Model Update' step in the pseudocode. It takes a `(s, a, s', r)` tuple and stores the next state and reward corresponding to a state-action pair.Remember, because the environment is deterministic, an easy way to implement the model is to have a dictionary of encountered states, each mapping to a dictionary of actions taken in those states, which in turn maps to a tuple of next state and reward. In this way, the model can be easily accessed by `model[s][a]`, which would return the `(s', r)` tuple.
###Code
%%add_to DynaQAgent
# [GRADED]
def update_model(self, past_state, past_action, state, reward):
"""updates the model
Args:
past_state (int): s
past_action (int): a
state (int): s'
reward (int): r
Returns:
Nothing
"""
# Update the model with the (s,a,s',r) tuple (1~4 lines)
### START CODE HERE ###
if past_state not in self.model:
self.model[past_state] = { past_action : (state,reward) }
else:
self.model[past_state][past_action] = (state,reward)
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
Test `update_model()`
###Code
# Do not modify this cell!
## Test code for update_model() ##
actions = []
agent_info = {"num_actions": 4,
"num_states": 3,
"epsilon": 0.1,
"step_size": 0.1,
"discount": 1.0,
"random_seed": 0,
"planning_random_seed": 0}
test_agent = DynaQAgent()
test_agent.agent_init(agent_info)
test_agent.update_model(0,2,0,1)
test_agent.update_model(2,0,1,1)
test_agent.update_model(0,3,1,2)
print("Model: \n", test_agent.model)
###Output
Model:
{0: {2: (0, 1), 3: (1, 2)}, 2: {0: (1, 1)}}
###Markdown
Expected output:
```
Model: 
 {0: {2: (0, 1), 3: (1, 2)}, 2: {0: (1, 1)}}
```
Next, you will implement the planning step, the crux of the Dyna-Q algorithm. You shall be calling this `planning_step` method at every timestep of every trajectory.
###Code
%%add_to DynaQAgent
# [GRADED]
def planning_step(self):
"""performs planning, i.e. indirect RL.
Args:
None
Returns:
Nothing
"""
# The indirect RL step:
# - Choose a state and action from the set of experiences that are stored in the model. (~2 lines)
# - Query the model with this state-action pair for the predicted next state and reward.(~1 line)
# - Update the action values with this simulated experience. (2~4 lines)
# - Repeat for the required number of planning steps.
#
# Note that the update equation is different for terminal and non-terminal transitions.
# To differentiate between a terminal and a non-terminal next state, assume that the model stores
# the terminal state as a dummy state like -1
#
# Important: remember you have a random number generator 'planning_rand_generator' as
# a part of the class which you need to use as self.planning_rand_generator.choice()
# For the sake of reproducibility and grading, *do not* use anything else like
# np.random.choice() for performing search control.
### START CODE HERE ###
for _ in range(self.planning_steps):
state = self.planning_rand_generator.choice(list(self.model.keys()))
action = self.planning_rand_generator.choice(list(self.model[state].keys()))
nstate,reward = self.model[state][action]
if nstate==-1:
self.q_values[state,action] = self.q_values[state,action] + self.step_size*(
reward - self.q_values[state,action])
else:
self.q_values[state,action] = self.q_values[state,action] + self.step_size*(
reward + self.gamma*np.max(self.q_values[nstate]) - self.q_values[state,action])
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
Test `planning_step()`
###Code
# Do not modify this cell!
## Test code for planning_step() ##
actions = []
agent_info = {"num_actions": 4,
"num_states": 3,
"epsilon": 0.1,
"step_size": 0.1,
"discount": 1.0,
"planning_steps": 4,
"random_seed": 0,
"planning_random_seed": 5}
test_agent = DynaQAgent()
test_agent.agent_init(agent_info)
test_agent.update_model(0,2,1,1)
test_agent.update_model(2,0,1,1)
test_agent.update_model(0,3,0,1)
test_agent.update_model(0,1,-1,1)
test_agent.planning_step()
print("Model: \n", test_agent.model)
print("Action-value estimates: \n", test_agent.q_values)
###Output
Model:
{0: {2: (1, 1), 3: (0, 1), 1: (-1, 1)}, 2: {0: (1, 1)}}
Action-value estimates:
[[0. 0.1 0. 0.2]
[0. 0. 0. 0. ]
[0.1 0. 0. 0. ]]
###Markdown
Expected output:
```
Model: 
 {0: {2: (1, 1), 3: (0, 1), 1: (-1, 1)}, 2: {0: (1, 1)}}
Action-value estimates: 
 [[0. 0.1 0. 0.2 ]
 [0. 0. 0. 0. ]
 [0.1 0. 0. 0. ]]
```
If your output does not match the above, one of the first things to check is to make sure that you haven't changed the `planning_random_seed` in the test cell. Additionally, make sure you have handled terminal updates correctly.

Now before you move on to implement the rest of the agent methods, here are the helper functions that you've used in the previous assessments for choosing an action using an $\epsilon$-greedy policy.
###Code
%%add_to DynaQAgent
# Do not modify this cell!
def argmax(self, q_values):
"""argmax with random tie-breaking
Args:
q_values (Numpy array): the array of action values
Returns:
action (int): an action with the highest value
"""
top = float("-inf")
ties = []
for i in range(len(q_values)):
if q_values[i] > top:
top = q_values[i]
ties = []
if q_values[i] == top:
ties.append(i)
return self.rand_generator.choice(ties)
def choose_action_egreedy(self, state):
"""returns an action using an epsilon-greedy policy w.r.t. the current action-value function.
Important: assume you have a random number generator 'rand_generator' as a part of the class
which you can use as self.rand_generator.choice() or self.rand_generator.rand()
Args:
state (List): coordinates of the agent (two elements)
Returns:
The action taken w.r.t. the aforementioned epsilon-greedy policy
"""
if self.rand_generator.rand() < self.epsilon:
action = self.rand_generator.choice(self.actions)
else:
values = self.q_values[state]
action = self.argmax(values)
return action
###Output
_____no_output_____
###Markdown
Next, you will implement the rest of the agent-related methods, namely `agent_start`, `agent_step`, and `agent_end`.
###Code
%%add_to DynaQAgent
# [GRADED]
def agent_start(self, state):
"""The first method called when the experiment starts,
called after the environment starts.
Args:
state (Numpy array): the state from the
environment's env_start function.
Returns:
(int) the first action the agent takes.
"""
# given the state, select the action using self.choose_action_egreedy()),
# and save current state and action (~2 lines)
### self.past_state = ?
### self.past_action = ?
### START CODE HERE ###
self.past_action = self.choose_action_egreedy(state)
self.past_state = state
### END CODE HERE ###
return self.past_action
def agent_step(self, reward, state):
"""A step taken by the agent.
Args:
reward (float): the reward received for taking the last action taken
state (Numpy array): the state from the
environment's step based on where the agent ended up after the
last step
Returns:
(int) The action the agent takes given this state.
"""
# - Direct-RL step (~1-3 lines)
# - Model Update step (~1 line)
# - `planning_step` (~1 line)
# - Action Selection step (~1 line)
# Save the current state and action before returning the action to be performed. (~2 lines)
### START CODE HERE ###
self.q_values[self.past_state,self.past_action] = self.q_values[self.past_state,self.past_action] + self.step_size*(
reward + self.gamma*np.max(self.q_values[state]) - self.q_values[self.past_state,self.past_action])
self.update_model(self.past_state, self.past_action, state, reward)
self.planning_step()
self.past_action = self.choose_action_egreedy(state)
self.past_state = state
### END CODE HERE ###
return self.past_action
def agent_end(self, reward):
"""Called when the agent terminates.
Args:
reward (float): the reward the agent received for entering the
terminal state.
"""
# - Direct RL update with this final transition (1~2 lines)
# - Model Update step with this final transition (~1 line)
# - One final `planning_step` (~1 line)
#
# Note: the final transition needs to be handled carefully. Since there is no next state,
# you will have to pass a dummy state (like -1), which you will be using in the planning_step() to
# differentiate between updates with usual terminal and non-terminal transitions.
### START CODE HERE ###
self.q_values[self.past_state,self.past_action] = self.q_values[self.past_state,self.past_action] + self.step_size*(
reward - self.q_values[self.past_state,self.past_action])
self.update_model(self.past_state, self.past_action, -1, reward)
self.planning_step()
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
Test `agent_start()`
###Code
# Do not modify this cell!
## Test code for agent_start() ##
agent_info = {"num_actions": 4,
"num_states": 3,
"epsilon": 0.1,
"step_size": 0.1,
"discount": 1.0,
"random_seed": 0,
"planning_random_seed": 0}
test_agent = DynaQAgent()
test_agent.agent_init(agent_info)
action = test_agent.agent_start(0)
print("Action:", action)
print("Model: \n", test_agent.model)
print("Action-value estimates: \n", test_agent.q_values)
###Output
Action: 1
Model:
{}
Action-value estimates:
[[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]]
###Markdown
Expected output:
```
Action: 1
Model: 
 {}
Action-value estimates: 
 [[0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]]
```
Test `agent_step()`
###Code
# Do not modify this cell!
## Test code for agent_step() ##
actions = []
agent_info = {"num_actions": 4,
"num_states": 3,
"epsilon": 0.1,
"step_size": 0.1,
"discount": 1.0,
"planning_steps": 2,
"random_seed": 0,
"planning_random_seed": 0}
test_agent = DynaQAgent()
test_agent.agent_init(agent_info)
actions.append(test_agent.agent_start(0))
actions.append(test_agent.agent_step(1,2))
actions.append(test_agent.agent_step(0,1))
print("Actions:", actions)
print("Model: \n", test_agent.model)
print("Action-value estimates: \n", test_agent.q_values)
###Output
Actions: [1, 3, 1]
Model:
{0: {1: (2, 1)}, 2: {3: (1, 0)}}
Action-value estimates:
[[0. 0.3439 0. 0. ]
[0. 0. 0. 0. ]
[0. 0. 0. 0. ]]
###Markdown
Expected output:
```
Actions: [1, 3, 1]
Model: 
 {0: {1: (2, 1)}, 2: {3: (1, 0)}}
Action-value estimates: 
 [[0. 0.3439 0. 0. ]
 [0. 0. 0. 0. ]
 [0. 0. 0. 0. ]]
```
Test `agent_end()`
###Code
# Do not modify this cell!
## Test code for agent_end() ##
actions = []
agent_info = {"num_actions": 4,
"num_states": 3,
"epsilon": 0.1,
"step_size": 0.1,
"discount": 1.0,
"planning_steps": 2,
"random_seed": 0,
"planning_random_seed": 0}
test_agent = DynaQAgent()
test_agent.agent_init(agent_info)
actions.append(test_agent.agent_start(0))
actions.append(test_agent.agent_step(1,2))
actions.append(test_agent.agent_step(0,1))
test_agent.agent_end(1)
print("Actions:", actions)
print("Model: \n", test_agent.model)
print("Action-value Estimates: \n", test_agent.q_values)
###Output
Actions: [1, 3, 1]
Model:
{0: {1: (2, 1)}, 2: {3: (1, 0)}, 1: {1: (-1, 1)}}
Action-value Estimates:
[[0. 0.41051 0. 0. ]
[0. 0.1 0. 0. ]
[0. 0. 0. 0.01 ]]
###Markdown
Expected output:
```
Actions: [1, 3, 1]
Model: 
 {0: {1: (2, 1)}, 2: {3: (1, 0)}, 1: {1: (-1, 1)}}
Action-value Estimates: 
 [[0. 0.41051 0. 0. ]
 [0. 0.1 0. 0. ]
 [0. 0. 0. 0.01 ]]
```
Experiment: Dyna-Q agent in the maze environment

Alright. Now we have all the components of the `DynaQAgent` ready. Let's try it out on the maze environment! The next cell runs an experiment on this maze environment to test your implementation. The initial action values are $0$, the step-size parameter is $0.125$, and the exploration parameter is $\epsilon=0.1$. After the experiment, the sum of rewards in each episode should match the correct result.

We will try planning steps of $0,5,50$ and compare their performance in terms of the average number of steps taken to reach the goal state in the aforementioned maze environment. For scientific rigor, we will run each experiment $30$ times. In each experiment, we set the initial random-number-generator (RNG) seeds for a fair comparison across algorithms.
###Code
# Do not modify this cell!
def run_experiment(env, agent, env_parameters, agent_parameters, exp_parameters):
# Experiment settings
num_runs = exp_parameters['num_runs']
num_episodes = exp_parameters['num_episodes']
planning_steps_all = agent_parameters['planning_steps']
env_info = env_parameters
agent_info = {"num_states" : agent_parameters["num_states"], # We pass the agent the information it needs.
"num_actions" : agent_parameters["num_actions"],
"epsilon": agent_parameters["epsilon"],
"discount": env_parameters["discount"],
"step_size" : agent_parameters["step_size"]}
all_averages = np.zeros((len(planning_steps_all), num_runs, num_episodes)) # for collecting metrics
log_data = {'planning_steps_all' : planning_steps_all} # that shall be plotted later
for idx, planning_steps in enumerate(planning_steps_all):
print('Planning steps : ', planning_steps)
os.system('sleep 0.5') # to prevent tqdm printing out-of-order before the above print()
agent_info["planning_steps"] = planning_steps
for i in tqdm(range(num_runs)):
agent_info['random_seed'] = i
agent_info['planning_random_seed'] = i
rl_glue = RLGlue(env, agent) # Creates a new RLGlue experiment with the env and agent we chose above
rl_glue.rl_init(agent_info, env_info) # We pass RLGlue what it needs to initialize the agent and environment
for j in range(num_episodes):
rl_glue.rl_start() # We start an episode. Here we aren't using rl_glue.rl_episode()
# like the other assessments because we'll be requiring some
is_terminal = False # data from within the episodes in some of the experiments here
num_steps = 0
while not is_terminal:
reward, _, action, is_terminal = rl_glue.rl_step() # The environment and agent take a step
num_steps += 1 # and return the reward and action taken.
all_averages[idx][i][j] = num_steps
log_data['all_averages'] = all_averages
np.save("results/Dyna-Q_planning_steps", log_data)
def plot_steps_per_episode(file_path):
data = np.load(file_path).item()
all_averages = data['all_averages']
planning_steps_all = data['planning_steps_all']
for i, planning_steps in enumerate(planning_steps_all):
plt.plot(np.mean(all_averages[i], axis=0), label='Planning steps = '+str(planning_steps))
plt.legend(loc='upper right')
plt.xlabel('Episodes')
plt.ylabel('Steps\nper\nepisode', rotation=0, labelpad=40)
plt.axhline(y=16, linestyle='--', color='grey', alpha=0.4)
plt.show()
# Do NOT modify the parameter settings.
# Experiment parameters
experiment_parameters = {
"num_runs" : 30, # The number of times we run the experiment
"num_episodes" : 40, # The number of episodes per experiment
}
# Environment parameters
environment_parameters = {
"discount": 0.95,
}
# Agent parameters
agent_parameters = {
"num_states" : 54,
"num_actions" : 4,
"epsilon": 0.1,
"step_size" : 0.125,
"planning_steps" : [0, 5, 50] # The list of planning_steps we want to try
}
current_env = ShortcutMazeEnvironment # The environment
current_agent = DynaQAgent # The agent
run_experiment(current_env, current_agent, environment_parameters, agent_parameters, experiment_parameters)
plot_steps_per_episode('results/Dyna-Q_planning_steps.npy')
shutil.make_archive('results', 'zip', 'results');
###Output
Planning steps : 0
###Markdown
What do you notice? As the number of planning steps increases, the number of steps the agent takes to reach the goal in each episode decreases rapidly. Remember that the RNG seed was set the same for all three values of planning steps, resulting in the same number of steps taken to reach the goal in the first episode. Thereafter, the performance improves. The slowest improvement is when there are $n=0$ planning steps, i.e., for the non-planning Q-learning agent, even though the step size parameter was optimized for it. Note that the grey dotted line shows the minimum number of steps required to reach the goal state under the optimal greedy policy. --- Experiment(s): Dyna-Q agent in the _changing_ maze environment Great! Now let us see how Dyna-Q performs on the version of the maze in which a shorter path opens up after 3000 steps. The rest of the transition and reward dynamics remain the same. Before you proceed, take a moment to think about what you expect to see. Will Dyna-Q find the new, shorter path to the goal? If so, why? If not, why not?
###Code
# Do not modify this cell!
def run_experiment_with_state_visitations(env, agent, env_parameters, agent_parameters, exp_parameters, result_file_name):
# Experiment settings
num_runs = exp_parameters['num_runs']
num_max_steps = exp_parameters['num_max_steps']
planning_steps_all = agent_parameters['planning_steps']
env_info = {"change_at_n" : env_parameters["change_at_n"]}
agent_info = {"num_states" : agent_parameters["num_states"],
"num_actions" : agent_parameters["num_actions"],
"epsilon": agent_parameters["epsilon"],
"discount": env_parameters["discount"],
"step_size" : agent_parameters["step_size"]}
state_visits_before_change = np.zeros((len(planning_steps_all), num_runs, 54)) # For saving the number of
state_visits_after_change = np.zeros((len(planning_steps_all), num_runs, 54)) # state-visitations
cum_reward_all = np.zeros((len(planning_steps_all), num_runs, num_max_steps)) # For saving the cumulative reward
log_data = {'planning_steps_all' : planning_steps_all}
for idx, planning_steps in enumerate(planning_steps_all):
print('Planning steps : ', planning_steps)
os.system('sleep 1') # to prevent tqdm printing out-of-order before the above print()
agent_info["planning_steps"] = planning_steps # We pass the agent the information it needs.
for run in tqdm(range(num_runs)):
agent_info['random_seed'] = run
agent_info['planning_random_seed'] = run
rl_glue = RLGlue(env, agent) # Creates a new RLGlue experiment with the env and agent we chose above
rl_glue.rl_init(agent_info, env_info) # We pass RLGlue what it needs to initialize the agent and environment
num_steps = 0
cum_reward = 0
while num_steps < num_max_steps-1 :
state, _ = rl_glue.rl_start() # We start the experiment. We'll be collecting the
is_terminal = False # state-visitation counts to visualize the learned policy
if num_steps < env_parameters["change_at_n"]:
state_visits_before_change[idx][run][state] += 1
else:
state_visits_after_change[idx][run][state] += 1
while not is_terminal and num_steps < num_max_steps-1 :
reward, state, action, is_terminal = rl_glue.rl_step()
num_steps += 1
cum_reward += reward
cum_reward_all[idx][run][num_steps] = cum_reward
if num_steps < env_parameters["change_at_n"]:
state_visits_before_change[idx][run][state] += 1
else:
state_visits_after_change[idx][run][state] += 1
log_data['state_visits_before'] = state_visits_before_change
log_data['state_visits_after'] = state_visits_after_change
log_data['cum_reward_all'] = cum_reward_all
np.save("results/" + result_file_name, log_data)
def plot_cumulative_reward(file_path, item_key, y_key, y_axis_label, legend_prefix, title):
data_all = np.load(file_path).item()
data_y_all = data_all[y_key]
items = data_all[item_key]
for i, item in enumerate(items):
plt.plot(np.mean(data_y_all[i], axis=0), label=legend_prefix+str(item))
plt.axvline(x=3000, linestyle='--', color='grey', alpha=0.4)
plt.xlabel('Timesteps')
plt.ylabel(y_axis_label, rotation=0, labelpad=60)
plt.legend(loc='upper left')
plt.title(title)
plt.show()
###Output
_____no_output_____
###Markdown
Did you notice that the environment changes after a fixed number of _steps_ and not episodes? This is because the environment is separate from the agent, and the environment changes irrespective of the length of each episode (i.e., the number of environmental interactions per episode) that the agent perceives. And hence we are now plotting the data per step or interaction of the agent and the environment, in order to comfortably see the differences in the behaviours of the agents before and after the environment changes. Okay, now we will first plot the cumulative reward obtained by the agent per interaction with the environment, averaged over 10 runs of the experiment on this changing world.
###Code
# Do NOT modify the parameter settings.
# Experiment parameters
experiment_parameters = {
"num_runs" : 10, # The number of times we run the experiment
"num_max_steps" : 6000, # The number of steps per experiment
}
# Environment parameters
environment_parameters = {
"discount": 0.95,
"change_at_n": 3000
}
# Agent parameters
agent_parameters = {
"num_states" : 54,
"num_actions" : 4,
"epsilon": 0.1,
"step_size" : 0.125,
"planning_steps" : [5, 10, 50] # The list of planning_steps we want to try
}
current_env = ShortcutMazeEnvironment # The environment
current_agent = DynaQAgent # The agent
run_experiment_with_state_visitations(current_env, current_agent, environment_parameters, agent_parameters, experiment_parameters, "Dyna-Q_shortcut_steps")
plot_cumulative_reward('results/Dyna-Q_shortcut_steps.npy', 'planning_steps_all', 'cum_reward_all', 'Cumulative\nreward', 'Planning steps = ', 'Dyna-Q : Varying planning_steps')
###Output
Planning steps : 5
###Markdown
We observe that the slope of the curves is almost constant. If the agent had discovered the shortcut and begun using it, we would expect to see an increase in the slope of the curves towards the later stages of training, because the agent could then reach the goal state faster and collect the positive reward sooner. Note that the timestep at which the shortcut opens up is marked by the grey dotted line. This trend holds across all three values of planning steps. Now let's check the heatmap of the state visitations of the agent with `planning_steps=10` during training, before and after the shortcut opens up after 3000 timesteps.
###Code
# Do not modify this cell!
def plot_state_visitations(file_path, plot_titles, idx):
data = np.load(file_path).item()
data_keys = ["state_visits_before", "state_visits_after"]
positions = [211,212]
titles = plot_titles
wall_ends = [None,-1]
for i in range(2):
state_visits = data[data_keys[i]][idx]
average_state_visits = np.mean(state_visits, axis=0)
grid_state_visits = np.rot90(average_state_visits.reshape((6,9)).T)
grid_state_visits[2,1:wall_ends[i]] = np.nan # walls
#print(average_state_visits.reshape((6,9)))
plt.subplot(positions[i])
plt.pcolormesh(grid_state_visits, edgecolors='gray', linewidth=1, cmap='viridis')
plt.text(3+0.5, 0+0.5, 'S', horizontalalignment='center', verticalalignment='center')
plt.text(8+0.5, 5+0.5, 'G', horizontalalignment='center', verticalalignment='center')
plt.title(titles[i])
plt.axis('off')
cm = plt.get_cmap()
cm.set_bad('gray')
plt.subplots_adjust(bottom=0.0, right=0.7, top=1.0)
cax = plt.axes([1., 0.0, 0.075, 1.])
cbar = plt.colorbar(cax=cax)
plt.show()
# Do not modify this cell!
plot_state_visitations("results/Dyna-Q_shortcut_steps.npy", ['Dyna-Q : State visitations before the env changes', 'Dyna-Q : State visitations after the env changes'], 1)
###Output
_____no_output_____
###Markdown
What do you observe?The state visitation map looks almost the same before and after the shortcut opens. This means that the Dyna-Q agent hasn't quite discovered and started exploiting the new shortcut.Now let's try increasing the exploration parameter $\epsilon$ to see if it helps the Dyna-Q agent discover the shortcut.
###Code
# Do not modify this cell!
def run_experiment_only_cumulative_reward(env, agent, env_parameters, agent_parameters, exp_parameters):
# Experiment settings
num_runs = exp_parameters['num_runs']
num_max_steps = exp_parameters['num_max_steps']
epsilons = agent_parameters['epsilons']
env_info = {"change_at_n" : env_parameters["change_at_n"]}
agent_info = {"num_states" : agent_parameters["num_states"],
"num_actions" : agent_parameters["num_actions"],
"planning_steps": agent_parameters["planning_steps"],
"discount": env_parameters["discount"],
"step_size" : agent_parameters["step_size"]}
log_data = {'epsilons' : epsilons}
cum_reward_all = np.zeros((len(epsilons), num_runs, num_max_steps))
for eps_idx, epsilon in enumerate(epsilons):
print('Agent : Dyna-Q, epsilon : %f' % epsilon)
os.system('sleep 1') # to prevent tqdm printing out-of-order before the above print()
agent_info["epsilon"] = epsilon
for run in tqdm(range(num_runs)):
agent_info['random_seed'] = run
agent_info['planning_random_seed'] = run
rl_glue = RLGlue(env, agent) # Creates a new RLGlue experiment with the env and agent we chose above
rl_glue.rl_init(agent_info, env_info) # We pass RLGlue what it needs to initialize the agent and environment
num_steps = 0
cum_reward = 0
while num_steps < num_max_steps-1 :
rl_glue.rl_start() # We start the experiment
is_terminal = False
while not is_terminal and num_steps < num_max_steps-1 :
reward, _, action, is_terminal = rl_glue.rl_step() # The environment and agent take a step and return
# the reward, and action taken.
num_steps += 1
cum_reward += reward
cum_reward_all[eps_idx][run][num_steps] = cum_reward
log_data['cum_reward_all'] = cum_reward_all
np.save("results/Dyna-Q_epsilons", log_data)
# Do NOT modify the parameter settings.
# Experiment parameters
experiment_parameters = {
"num_runs" : 30, # The number of times we run the experiment
"num_max_steps" : 6000, # The number of steps per experiment
}
# Environment parameters
environment_parameters = {
"discount": 0.95,
"change_at_n": 3000
}
# Agent parameters
agent_parameters = {
"num_states" : 54,
"num_actions" : 4,
"step_size" : 0.125,
"planning_steps" : 10,
"epsilons": [0.1, 0.2, 0.4, 0.8] # The list of epsilons we want to try
}
current_env = ShortcutMazeEnvironment # The environment
current_agent = DynaQAgent # The agent
run_experiment_only_cumulative_reward(current_env, current_agent, environment_parameters, agent_parameters, experiment_parameters)
plot_cumulative_reward('results/Dyna-Q_epsilons.npy', 'epsilons', 'cum_reward_all', 'Cumulative\nreward', r'$\epsilon$ = ', r'Dyna-Q : Varying $\epsilon$')
###Output
Agent : Dyna-Q, epsilon : 0.100000
###Markdown
What do you observe? Increasing the exploration via the $\epsilon$-greedy strategy does not seem to be helping. In fact, the agent's cumulative reward decreases because it is spending more and more time trying out the exploratory actions. Can we do better...? Section 2: Dyna-Q+ The motivation behind Dyna-Q+ is to give a bonus reward for actions that haven't been tried for a long time, since there is a greater chance that the dynamics for those actions might have changed. In particular, if the modeled reward for a transition is $r$, and the transition has not been tried in $\tau(s,a)$ time steps, then planning updates are done as if that transition produced a reward of $r + \kappa \sqrt{ \tau(s,a)}$, for some small $\kappa$. Let's implement that! Based on your `DynaQAgent`, create a new class `DynaQPlusAgent` to implement the aforementioned exploration heuristic. Additionally: (1) actions that had never been tried before from a state should now be allowed to be considered in the planning step, and (2) the initial model for such actions is that they lead back to the same state with a reward of zero. At this point, you might want to refer to the video lectures and [Section 8.3](http://www.incompleteideas.net/book/RLbook2018.pdf#page=188) of the RL textbook for a refresher on Dyna-Q+. As usual, let's break this down in pieces and do it one-by-one. First of all, check out the `agent_init` method below. In particular, pay attention to the attributes which are new to `DynaQPlusAgent` (the state-visitation counts $\tau$ and the scaling parameter $\kappa$), because you shall be using them later.
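To make the bonus concrete, here is a minimal standalone sketch (not part of the graded code below) of how a modeled reward would be augmented; the names `kappa`, `tau`, and `r_model` are illustrative stand-ins for the attributes initialized in `agent_init`:

```python
import numpy as np

# Illustrative values: kappa is the small scaling factor and tau[s, a] counts
# how many time steps have passed since (s, a) was last tried in the real environment.
kappa = 0.001
tau = np.zeros((3, 4))
tau[0, 1] = 2500          # suppose action 1 in state 0 has not been tried for 2500 steps

r_model = 1.0             # reward stored in the model for that transition
planning_reward = r_model + kappa * np.sqrt(tau[0, 1])
print(planning_reward)    # 1.05 -- long-untried transitions look slightly more attractive
```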
###Code
# Do not modify this cell!
class DynaQPlusAgent(BaseAgent):
def agent_init(self, agent_info):
"""Setup for the agent called when the experiment first starts.
Args:
agent_init_info (dict), the parameters used to initialize the agent. The dictionary contains:
{
num_states (int): The number of states,
num_actions (int): The number of actions,
epsilon (float): The parameter for epsilon-greedy exploration,
step_size (float): The step-size,
discount (float): The discount factor,
planning_steps (int): The number of planning steps per environmental interaction
kappa (float): The scaling factor for the reward bonus
random_seed (int): the seed for the RNG used in epsilon-greedy
planning_random_seed (int): the seed for the RNG used in the planner
}
"""
# First, we get the relevant information from agent_info
# Note: we use np.random.RandomState(seed) to set the two different RNGs
# for the planner and the rest of the code
try:
self.num_states = agent_info["num_states"]
self.num_actions = agent_info["num_actions"]
except:
print("You need to pass both 'num_states' and 'num_actions' \
in agent_info to initialize the action-value table")
self.gamma = agent_info.get("discount", 0.95)
self.step_size = agent_info.get("step_size", 0.1)
self.epsilon = agent_info.get("epsilon", 0.1)
self.planning_steps = agent_info.get("planning_steps", 10)
self.kappa = agent_info.get("kappa", 0.001)
self.rand_generator = np.random.RandomState(agent_info.get('random_seed', 42))
self.planning_rand_generator = np.random.RandomState(agent_info.get('planning_random_seed', 42))
# Next, we initialize the attributes required by the agent, e.g., q_values, model, tau, etc.
# The visitation-counts can be stored as a table as well, like the action values
self.q_values = np.zeros((self.num_states, self.num_actions))
self.tau = np.zeros((self.num_states, self.num_actions))
self.actions = list(range(self.num_actions))
self.past_action = -1
self.past_state = -1
self.model = {}
###Output
_____no_output_____
###Markdown
Now first up, implement the `update_model` method. Note that this is different from Dyna-Q in the aforementioned way.
###Code
%%add_to DynaQPlusAgent
# [GRADED]
def update_model(self, past_state, past_action, state, reward):
"""updates the model
Args:
past_state (int): s
past_action (int): a
state (int): s'
reward (int): r
Returns:
Nothing
"""
# Recall that when adding a state-action to the model, if the agent is visiting the state
# for the first time, then the remaining actions need to be added to the model as well
# with zero reward and a transition into itself. Something like:
## for action in self.actions:
## if action != past_action:
## self.model[past_state][action] = (past_state, 0)
#
# Note: do *not* update the visitation-counts here. We will do that in `agent_step`.
#
# (3 lines)
if past_state not in self.model:
self.model[past_state] = {past_action : (state, reward)}
### START CODE HERE ###
for action in self.actions:
if action != past_action:
self.model[past_state][action] = (past_state,0)
### END CODE HERE ###
else:
self.model[past_state][past_action] = (state, reward)
###Output
_____no_output_____
###Markdown
Test `update_model()`
###Code
# Do not modify this cell!
## Test code for update_model() ##
actions = []
agent_info = {"num_actions": 4,
"num_states": 3,
"epsilon": 0.1,
"step_size": 0.1,
"discount": 1.0,
"random_seed": 0,
"planning_random_seed": 0}
test_agent = DynaQPlusAgent()
test_agent.agent_init(agent_info)
test_agent.update_model(0,2,0,1)
test_agent.update_model(2,0,1,1)
test_agent.update_model(0,3,1,2)
test_agent.tau[0][0] += 1
print("Model: \n", test_agent.model)
###Output
Model:
{0: {2: (0, 1), 0: (0, 0), 1: (0, 0), 3: (1, 2)}, 2: {0: (1, 1), 1: (2, 0), 2: (2, 0), 3: (2, 0)}}
###Markdown
Expected output:```Model: {0: {2: (0, 1), 0: (0, 0), 1: (0, 0), 3: (1, 2)}, 2: {0: (1, 1), 1: (2, 0), 2: (2, 0), 3: (2, 0)}}```Note that the actions that were not taken from a state are also added to the model, with a loop back into the same state with a reward of 0. Next, you will implement the `planning_step()` method. This will be very similar to the one you implemented in `DynaQAgent`, but here you will be adding the exploration bonus to the reward in the simulated transition.
###Code
%%add_to DynaQPlusAgent
# [GRADED]
def planning_step(self):
"""performs planning, i.e. indirect RL.
Args:
None
Returns:
Nothing
"""
# The indirect RL step:
# - Choose a state and action from the set of experiences that are stored in the model. (~2 lines)
# - Query the model with this state-action pair for the predicted next state and reward.(~1 line)
# - **Add the bonus to the reward** (~1 line)
# - Update the action values with this simulated experience. (2~4 lines)
# - Repeat for the required number of planning steps.
#
# Note that the update equation is different for terminal and non-terminal transitions.
# To differentiate between a terminal and a non-terminal next state, assume that the model stores
# the terminal state as a dummy state like -1
#
# Important: remember you have a random number generator 'planning_rand_generator' as
# a part of the class which you need to use as self.planning_rand_generator.choice()
# For the sake of reproducibility and grading, *do not* use anything else like
# np.random.choice() for performing search control.
### START CODE HERE ###
for _ in range(self.planning_steps):
state = self.planning_rand_generator.choice(list(self.model.keys()))
action = self.planning_rand_generator.choice(list(self.model[state].keys()))
nstate,reward = self.model[state][action]
reward += self.kappa*np.sqrt( self.tau[state,action] )
if nstate==-1:
self.q_values[state,action] = self.q_values[state,action] + self.step_size*(
reward - self.q_values[state,action] )
else:
self.q_values[state,action] = self.q_values[state,action] + self.step_size*(
reward + self.gamma*np.max(self.q_values[nstate]) - self.q_values[state,action] )
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
Test `planning_step()`
###Code
# Do not modify this cell!
## Test code for planning_step() ##
actions = []
agent_info = {"num_actions": 4,
"num_states": 3,
"epsilon": 0.1,
"step_size": 0.1,
"discount": 1.0,
"kappa": 0.001,
"planning_steps": 4,
"random_seed": 0,
"planning_random_seed": 1}
test_agent = DynaQPlusAgent()
test_agent.agent_init(agent_info)
test_agent.update_model(0,1,-1,1)
test_agent.tau += 1; test_agent.tau[0][1] = 0
test_agent.update_model(0,2,1,1)
test_agent.tau += 1; test_agent.tau[0][2] = 0 # Note that these counts are manually updated
test_agent.update_model(2,0,1,1) # as we'll code them in `agent_step'
test_agent.tau += 1; test_agent.tau[2][0] = 0 # which hasn't been implemented yet.
test_agent.planning_step()
print("Model: \n", test_agent.model)
print("Action-value estimates: \n", test_agent.q_values)
###Output
Model:
{0: {1: (-1, 1), 0: (0, 0), 2: (1, 1), 3: (0, 0)}, 2: {0: (1, 1), 1: (2, 0), 2: (2, 0), 3: (2, 0)}}
Action-value estimates:
[[0. 0.10014142 0. 0. ]
[0. 0. 0. 0. ]
[0. 0.00036373 0. 0.00017321]]
###Markdown
Expected output:```Model: {0: {1: (-1, 1), 0: (0, 0), 2: (1, 1), 3: (0, 0)}, 2: {0: (1, 1), 1: (2, 0), 2: (2, 0), 3: (2, 0)}}Action-value estimates: [[0. 0.10014142 0. 0. ] [0. 0. 0. 0. ] [0. 0.00036373 0. 0.00017321]]``` Again, before you move on to implement the rest of the agent methods, here are the couple of helper functions that you've used in the previous assessments for choosing an action using an $\epsilon$-greedy policy.
###Code
%%add_to DynaQPlusAgent
# Do not modify this cell!
def argmax(self, q_values):
"""argmax with random tie-breaking
Args:
q_values (Numpy array): the array of action values
Returns:
action (int): an action with the highest value
"""
top = float("-inf")
ties = []
for i in range(len(q_values)):
if q_values[i] > top:
top = q_values[i]
ties = []
if q_values[i] == top:
ties.append(i)
return self.rand_generator.choice(ties)
def choose_action_egreedy(self, state):
"""returns an action using an epsilon-greedy policy w.r.t. the current action-value function.
Important: assume you have a random number generator 'rand_generator' as a part of the class
which you can use as self.rand_generator.choice() or self.rand_generator.rand()
Args:
state (List): coordinates of the agent (two elements)
Returns:
The action taken w.r.t. the aforementioned epsilon-greedy policy
"""
if self.rand_generator.rand() < self.epsilon:
action = self.rand_generator.choice(self.actions)
else:
values = self.q_values[state]
action = self.argmax(values)
return action
###Output
_____no_output_____
###Markdown
Now implement the rest of the agent-related methods, namely `agent_start`, `agent_step`, and `agent_end`. Again, these will be very similar to the ones in the `DynaQAgent`, but you will have to think of a way to update the counts since the last visit.
###Code
%%add_to DynaQPlusAgent
# [GRADED]
def agent_start(self, state):
"""The first method called when the experiment starts, called after
the environment starts.
Args:
state (Numpy array): the state from the
environment's env_start function.
Returns:
(int) The first action the agent takes.
"""
# given the state, select the action using self.choose_action_egreedy(),
# and save current state and action (~2 lines)
### self.past_state = ?
### self.past_action = ?
# Note that the last-visit counts are not updated here.
### START CODE HERE ###
self.past_action = self.choose_action_egreedy(state)
self.past_state = state
### END CODE HERE ###
return self.past_action
def agent_step(self, reward, state):
"""A step taken by the agent.
Args:
reward (float): the reward received for taking the last action taken
state (Numpy array): the state from the
environment's step based on where the agent ended up after the
last step
Returns:
(int) The action the agent is taking.
"""
# Update the last-visited counts (~2 lines)
# - Direct-RL step (1~3 lines)
# - Model Update step (~1 line)
# - `planning_step` (~1 line)
# - Action Selection step (~1 line)
# Save the current state and action before returning the action to be performed. (~2 lines)
### START CODE HERE ###
self.tau += 1
self.tau[self.past_state][self.past_action] = 0
self.q_values[self.past_state,self.past_action] = self.q_values[self.past_state,self.past_action] + self.step_size*(
reward + self.gamma*np.max(self.q_values[state]) - self.q_values[self.past_state,self.past_action] )
self.update_model(self.past_state, self.past_action, state, reward)
self.planning_step()
self.past_action = self.choose_action_egreedy(state)
self.past_state=state
### END CODE HERE ###
return self.past_action
def agent_end(self, reward):
"""Called when the agent terminates.
Args:
reward (float): the reward the agent received for entering the
terminal state.
"""
# Again, add the same components you added in agent_step to augment Dyna-Q into Dyna-Q+
### START CODE HERE ###
self.tau += 1
self.tau[self.past_state][self.past_action] = 0
self.q_values[self.past_state,self.past_action] = self.q_values[self.past_state,self.past_action] + self.step_size*(
reward - self.q_values[self.past_state,self.past_action])
self.update_model(self.past_state, self.past_action, -1, reward)
self.planning_step()
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
Let's test these methods one-by-one. Test `agent_start()`
###Code
# Do not modify this cell!
## Test code for agent_start() ##
agent_info = {"num_actions": 4,
"num_states": 3,
"epsilon": 0.1,
"step_size": 0.1,
"discount": 1.0,
"kappa": 0.001,
"random_seed": 0,
"planning_random_seed": 0}
test_agent = DynaQPlusAgent()
test_agent.agent_init(agent_info)
action = test_agent.agent_start(0) # state
print("Action:", action)
print("Timesteps since last visit: \n", test_agent.tau)
print("Action-value estimates: \n", test_agent.q_values)
print("Model: \n", test_agent.model)
###Output
Action: 1
Timesteps since last visit:
[[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]]
Action-value estimates:
[[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]]
Model:
{}
###Markdown
Expected output:```Action: 1Timesteps since last visit: [[0. 0. 0. 0.] [0. 0. 0. 0.] [0. 0. 0. 0.]]Action-value estimates: [[0. 0. 0. 0.] [0. 0. 0. 0.] [0. 0. 0. 0.]]Model: {}```Remember the last-visit counts are not updated in `agent_start()`. Test `agent_step()`
###Code
# Do not modify this cell!
## Test code for agent_step() ##
agent_info = {"num_actions": 4,
"num_states": 3,
"epsilon": 0.1,
"step_size": 0.1,
"discount": 1.0,
"kappa": 0.001,
"planning_steps": 4,
"random_seed": 0,
"planning_random_seed": 0}
test_agent = DynaQPlusAgent()
test_agent.agent_init(agent_info)
actions = []
actions.append(test_agent.agent_start(0)) # state
actions.append(test_agent.agent_step(1,2)) # (reward, state)
actions.append(test_agent.agent_step(0,1)) # (reward, state)
print("Actions:", actions)
print("Timesteps since last visit: \n", test_agent.tau)
print("Action-value estimates: \n", test_agent.q_values)
print("Model: \n", test_agent.model)
###Output
Actions: [1, 3, 1]
Timesteps since last visit:
[[2. 1. 2. 2.]
[2. 2. 2. 2.]
[2. 2. 2. 0.]]
Action-value estimates:
[[1.91000000e-02 2.71000000e-01 0.00000000e+00 1.91000000e-02]
[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
[0.00000000e+00 1.83847763e-04 4.24264069e-04 0.00000000e+00]]
Model:
{0: {1: (2, 1), 0: (0, 0), 2: (0, 0), 3: (0, 0)}, 2: {3: (1, 0), 0: (2, 0), 1: (2, 0), 2: (2, 0)}}
###Markdown
Expected output:```Actions: [1, 3, 1]Timesteps since last visit: [[2. 1. 2. 2.] [2. 2. 2. 2.] [2. 2. 2. 0.]]Action-value estimates: [[1.91000000e-02 2.71000000e-01 0.00000000e+00 1.91000000e-02] [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00] [0.00000000e+00 1.83847763e-04 4.24264069e-04 0.00000000e+00]]Model: {0: {1: (2, 1), 0: (0, 0), 2: (0, 0), 3: (0, 0)}, 2: {3: (1, 0), 0: (2, 0), 1: (2, 0), 2: (2, 0)}}``` Test `agent_end()`
###Code
# Do not modify this cell!
## Test code for agent_end() ##
agent_info = {"num_actions": 4,
"num_states": 3,
"epsilon": 0.1,
"step_size": 0.1,
"discount": 1.0,
"kappa": 0.001,
"planning_steps": 4,
"random_seed": 0,
"planning_random_seed": 0}
test_agent = DynaQPlusAgent()
test_agent.agent_init(agent_info)
actions = []
actions.append(test_agent.agent_start(0))
actions.append(test_agent.agent_step(1,2))
actions.append(test_agent.agent_step(0,1))
test_agent.agent_end(1)
print("Actions:", actions)
print("Timesteps since last visit: \n", test_agent.tau)
print("Action-value estimates: \n", test_agent.q_values)
print("Model: \n", test_agent.model)
###Output
Actions: [1, 3, 1]
Timesteps since last visit:
[[3. 2. 3. 3.]
[3. 0. 3. 3.]
[3. 3. 3. 1.]]
Action-value estimates:
[[1.91000000e-02 3.44083848e-01 0.00000000e+00 4.44632051e-02]
[1.91732051e-02 1.90000000e-01 0.00000000e+00 0.00000000e+00]
[0.00000000e+00 1.83847763e-04 4.24264069e-04 0.00000000e+00]]
Model:
{0: {1: (2, 1), 0: (0, 0), 2: (0, 0), 3: (0, 0)}, 2: {3: (1, 0), 0: (2, 0), 1: (2, 0), 2: (2, 0)}, 1: {1: (-1, 1), 0: (1, 0), 2: (1, 0), 3: (1, 0)}}
###Markdown
Expected output:```Actions: [1, 3, 1]Timesteps since last visit: [[3. 2. 3. 3.] [3. 0. 3. 3.] [3. 3. 3. 1.]]Action-value estimates: [[1.91000000e-02 3.44083848e-01 0.00000000e+00 4.44632051e-02] [1.91732051e-02 1.90000000e-01 0.00000000e+00 0.00000000e+00] [0.00000000e+00 1.83847763e-04 4.24264069e-04 0.00000000e+00]]Model: {0: {1: (2, 1), 0: (0, 0), 2: (0, 0), 3: (0, 0)}, 2: {3: (1, 0), 0: (2, 0), 1: (2, 0), 2: (2, 0)}, 1: {1: (-1, 1), 0: (1, 0), 2: (1, 0), 3: (1, 0)}} ``` Experiment: Dyna-Q+ agent in the _changing_ environmentOkay, now we're ready to test our Dyna-Q+ agent on the Shortcut Maze. As usual, we will average the results over 30 independent runs of the experiment.
###Code
# Do NOT modify the parameter settings.
# Experiment parameters
experiment_parameters = {
"num_runs" : 30, # The number of times we run the experiment
"num_max_steps" : 6000, # The number of steps per experiment
}
# Environment parameters
environment_parameters = {
"discount": 0.95,
"change_at_n": 3000
}
# Agent parameters
agent_parameters = {
"num_states" : 54,
"num_actions" : 4,
"epsilon": 0.1,
"step_size" : 0.5,
"planning_steps" : [50]
}
current_env = ShortcutMazeEnvironment # The environment
current_agent = DynaQPlusAgent # The agent
run_experiment_with_state_visitations(current_env, current_agent, environment_parameters, agent_parameters, experiment_parameters, "Dyna-Q+")
shutil.make_archive('results', 'zip', 'results');
###Output
Planning steps : 50
###Markdown
Let's compare the Dyna-Q and Dyna-Q+ agents with `planning_steps=50` each.
###Code
# Do not modify this cell!
def plot_cumulative_reward_comparison(file_name_dynaq, file_name_dynaqplus):
cum_reward_q = np.load(file_name_dynaq).item()['cum_reward_all'][2]
cum_reward_qPlus = np.load(file_name_dynaqplus).item()['cum_reward_all'][0]
plt.plot(np.mean(cum_reward_qPlus, axis=0), label='Dyna-Q+')
plt.plot(np.mean(cum_reward_q, axis=0), label='Dyna-Q')
plt.axvline(x=3000, linestyle='--', color='grey', alpha=0.4)
plt.xlabel('Timesteps')
plt.ylabel('Cumulative\nreward', rotation=0, labelpad=60)
plt.legend(loc='upper left')
plt.title('Average performance of Dyna-Q and Dyna-Q+ agents in the Shortcut Maze\n')
plt.show()
# Do not modify this cell!
plot_cumulative_reward_comparison('results/Dyna-Q_shortcut_steps.npy', 'results/Dyna-Q+.npy')
###Output
_____no_output_____
###Markdown
What do you observe? (For reference, your graph should look like [Figure 8.5 in Chapter 8](http://www.incompleteideas.net/book/RLbook2018.pdf#page=189) of the RL textbook.) The slope of the Dyna-Q+ curve increases shortly after the shortcut opens up at 3000 steps, which indicates that the rate of receiving the positive reward increases. This implies that the Dyna-Q+ agent finds the shorter path to the goal. To verify this, let us plot the state-visitations of the Dyna-Q+ agent before and after the shortcut opens up.
###Code
# Do not modify this cell!
plot_state_visitations("results/Dyna-Q+.npy", ['Dyna-Q+ : State visitations before the env changes', 'Dyna-Q+ : State visitations after the env changes'], 0)
###Output
_____no_output_____ |
Sequence Models/Dinosaur Island - Character-Level Language Modeling/Dinosaurus_Island_Character_level_language_model_final_v3b.ipynb | ###Markdown
Character level language model - Dinosaurus IslandWelcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go berserk, so choose wisely! Luckily you have learned some deep learning and you will use it to save the day. Your assistant has collected a list of all the dinosaur names they could find, and compiled them into this [dataset](dinos.txt). (Feel free to take a look by clicking the previous link.) To create new dinosaur names, you will build a character level language model to generate new names. Your algorithm will learn the different name patterns, and randomly generate new names. Hopefully this algorithm will keep you and your team safe from the dinosaurs' wrath! By completing this assignment you will learn:- How to store text data for processing using an RNN - How to synthesize data, by sampling predictions at each time step and passing it to the next RNN-cell unit- How to build a character-level text generation recurrent neural network- Why clipping the gradients is importantWe will begin by loading in some functions that we have provided for you in `rnn_utils`. Specifically, you have access to functions such as `rnn_forward` and `rnn_backward` which are equivalent to those you've implemented in the previous assignment. Updates If you were working on the notebook before this update...* The current notebook is version "3b".* You can find your original work saved in the notebook with the previous version name ("v3a") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates 3b- removed redundant numpy import* `clip` - change test code to use variable name 'mvalue' rather than 'maxvalue' and deleted it from namespace to avoid confusion.* `optimize` - removed redundant description of clip function to discourage use of using 'maxvalue' which is not an argument to optimize* `model` - added 'verbose mode to print X,Y to aid in creating that code. - wordsmith instructions to prevent confusion - 2000 examples vs 100, 7 displayed vs 10 - no randomization of order* `sample` - removed comments regarding potential different sample outputs to reduce confusion.
###Code
import numpy as np
from utils import *
import random
import pprint
###Output
_____no_output_____
###Markdown
1 - Problem Statement 1.1 - Dataset and PreprocessingRun the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size.
###Code
data = open('dinos.txt', 'r').read()
data= data.lower()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))
###Output
There are 19909 total characters and 27 unique characters in your data.
###Markdown
* The characters are a-z (26 characters) plus the "\n" (or newline character).* In this assignment, the newline character "\n" plays a role similar to the `<EOS>` (or "End of sentence") token we had discussed in lecture. - Here, "\n" indicates the end of the dinosaur name rather than the end of a sentence. * `char_to_ix`: In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26.* `ix_to_char`: We also create a second python dictionary that maps each index back to the corresponding character. - This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer.
###Code
chars = sorted(chars)
print(chars)
char_to_ix = { ch:i for i,ch in enumerate(chars) }
ix_to_char = { i:ch for i,ch in enumerate(chars) }
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(ix_to_char)
###Output
{ 0: '\n',
1: 'a',
2: 'b',
3: 'c',
4: 'd',
5: 'e',
6: 'f',
7: 'g',
8: 'h',
9: 'i',
10: 'j',
11: 'k',
12: 'l',
13: 'm',
14: 'n',
15: 'o',
16: 'p',
17: 'q',
18: 'r',
19: 's',
20: 't',
21: 'u',
22: 'v',
23: 'w',
24: 'x',
25: 'y',
26: 'z'}
###Markdown
1.2 - Overview of the modelYour model will have the following structure: - Initialize parameters - Run the optimization loop - Forward propagation to compute the loss function - Backward propagation to compute the gradients with respect to the loss function - Clip the gradients to avoid exploding gradients - Using the gradients, update your parameters with the gradient descent update rule.- Return the learned parameters **Figure 1**: Recurrent Neural Network, similar to what you had built in the previous notebook "Building a Recurrent Neural Network - Step by Step". * At each time-step, the RNN tries to predict what is the next character given the previous characters. * The dataset $\mathbf{X} = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is a list of characters in the training set.* $\mathbf{Y} = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$ is the same list of characters but shifted one character forward. * At every time-step $t$, $y^{\langle t \rangle} = x^{\langle t+1 \rangle}$. The prediction at time $t$ is the same as the input at time $t + 1$. 2 - Building blocks of the modelIn this part, you will build two important blocks of the overall model:- Gradient clipping: to avoid exploding gradients- Sampling: a technique used to generate charactersYou will then apply these two functions to build the model. 2.1 - Clipping the gradients in the optimization loopIn this section you will implement the `clip` function that you will call inside of your optimization loop. Exploding gradients* When gradients are very large, they're called "exploding gradients." * Exploding gradients make the training process more difficult, because the updates may be so large that they "overshoot" the optimal values during back propagation.Recall that your overall loop structure usually consists of:* forward pass, * cost computation, * backward pass, * parameter update. Before updating the parameters, you will perform gradient clipping to make sure that your gradients are not "exploding." gradient clippingIn the exercise below, you will implement a function `clip` that takes in a dictionary of gradients and returns a clipped version of gradients if needed. * There are different ways to clip gradients.* We will use a simple element-wise clipping procedure, in which every element of the gradient vector is clipped to lie between some range [-N, N]. * For example, if the N=10 - The range is [-10, 10] - If any component of the gradient vector is greater than 10, it is set to 10. - If any component of the gradient vector is less than -10, it is set to -10. - If any components are between -10 and 10, they keep their original values. **Figure 2**: Visualization of gradient descent with and without gradient clipping, in a case where the network is running into "exploding gradient" problems. **Exercise**: Implement the function below to return the clipped gradients of your dictionary `gradients`. * Your function takes in a maximum threshold and returns the clipped versions of the gradients. * You can check out [numpy.clip](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.clip.html). - You will need to use the argument "`out = ...`". - Using the "`out`" parameter allows you to update a variable "in-place". - If you don't use "`out`" argument, the clipped variable is stored in the variable "gradient" but does not update the gradient variables `dWax`, `dWaa`, `dWya`, `db`, `dby`.
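As a small standalone illustration of the `out` argument (separate from the graded `clip` function), clipping with and without `out` behaves as follows:

```python
import numpy as np

dW = np.array([-12.0, 3.0, 42.0])

# Without `out`, np.clip returns a new array and leaves dW unchanged.
clipped_copy = np.clip(dW, -10, 10)
print(dW)            # [-12.   3.  42.]
print(clipped_copy)  # [-10.   3.  10.]

# With `out=dW`, the clipping happens in place, so dW itself is updated.
np.clip(dW, -10, 10, out=dW)
print(dW)            # [-10.   3.  10.]
```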
###Code
### GRADED FUNCTION: clip
def clip(gradients, maxValue):
'''
Clips the gradients' values between minimum and maximum.
Arguments:
gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby"
maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue
Returns:
gradients -- a dictionary with the clipped gradients.
'''
dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']
### START CODE HERE ###
# clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines)
for gradient in [dWaa, dWax, dWya, db, dby]:
np.clip(gradient, -maxValue, maxValue, out=gradient)
### END CODE HERE ###
gradients = {"dWaa": dWaa, "dWax": dWax, "dWya": dWya, "db": db, "dby": dby}
return gradients
# Test with a maxvalue of 10
mValue = 10
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, mValue)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
###Output
gradients["dWaa"][1][2] = 10.0
gradients["dWax"][3][1] = -10.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 10.]
gradients["dby"][1] = [ 8.45833407]
###Markdown
** Expected output:**```Pythongradients["dWaa"][1][2] = 10.0gradients["dWax"][3][1] = -10.0gradients["dWya"][1][2] = 0.29713815361gradients["db"][4] = [ 10.]gradients["dby"][1] = [ 8.45833407]```
###Code
# Test with a maxValue of 5
mValue = 5
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, mValue)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
del mValue # avoid common issue
###Output
gradients["dWaa"][1][2] = 5.0
gradients["dWax"][3][1] = -5.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 5.]
gradients["dby"][1] = [ 5.]
###Markdown
** Expected Output: **```Pythongradients["dWaa"][1][2] = 5.0gradients["dWax"][3][1] = -5.0gradients["dWya"][1][2] = 0.29713815361gradients["db"][4] = [ 5.]gradients["dby"][1] = [ 5.]``` 2.2 - SamplingNow assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below: **Figure 3**: In this picture, we assume the model is already trained. We pass in $x^{\langle 1\rangle} = \vec{0}$ at the first time step, and have the network sample one character at a time. **Exercise**: Implement the `sample` function below to sample characters. You need to carry out 4 steps:- **Step 1**: Input the "dummy" vector of zeros $x^{\langle 1 \rangle} = \vec{0}$. - This is the default input before we've generated any characters. We also set $a^{\langle 0 \rangle} = \vec{0}$ - **Step 2**: Run one step of forward propagation to get $a^{\langle 1 \rangle}$ and $\hat{y}^{\langle 1 \rangle}$. Here are the equations:hidden state: $$ a^{\langle t+1 \rangle} = \tanh(W_{ax} x^{\langle t+1 \rangle } + W_{aa} a^{\langle t \rangle } + b)\tag{1}$$activation:$$ z^{\langle t + 1 \rangle } = W_{ya} a^{\langle t + 1 \rangle } + b_y \tag{2}$$prediction:$$ \hat{y}^{\langle t+1 \rangle } = softmax(z^{\langle t + 1 \rangle })\tag{3}$$- Details about $\hat{y}^{\langle t+1 \rangle }$: - Note that $\hat{y}^{\langle t+1 \rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1). - $\hat{y}^{\langle t+1 \rangle}_i$ represents the probability that the character indexed by "i" is the next character. - We have provided a `softmax()` function that you can use. Additional Hints- $x^{\langle 1 \rangle}$ is `x` in the code. When creating the one-hot vector, make a numpy array of zeros, with the number of rows equal to the number of unique characters, and the number of columns equal to one. It's a 2D and not a 1D array.- $a^{\langle 0 \rangle}$ is `a_prev` in the code. It is a numpy array of zeros, where the number of rows is $n_{a}$, and number of columns is 1. It is a 2D array as well. $n_{a}$ is retrieved by getting the number of columns in $W_{aa}$ (the numbers need to match in order for the matrix multiplication $W_{aa}a^{\langle t \rangle}$ to work.- [numpy.dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html)- [numpy.tanh](https://docs.scipy.org/doc/numpy/reference/generated/numpy.tanh.html) Using 2D arrays instead of 1D arrays* You may be wondering why we emphasize that $x^{\langle 1 \rangle}$ and $a^{\langle 0 \rangle}$ are 2D arrays and not 1D vectors.* For matrix multiplication in numpy, if we multiply a 2D matrix with a 1D vector, we end up with with a 1D array.* This becomes a problem when we add two arrays where we expected them to have the same shape.* When two arrays with a different number of dimensions are added together, Python "broadcasts" one across the other.* Here is some sample code that shows the difference between using a 1D and 2D array.
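Before the graded exercise, here is a minimal sketch of equations (1)-(3) for a single time step. It assumes randomly initialized parameters and defines its own `softmax` helper as a stand-in for the one provided in `utils`:

```python
import numpy as np

def softmax(z):
    # stand-in for the provided softmax() helper
    e = np.exp(z - np.max(z))
    return e / e.sum(axis=0)

n_a, vocab_size = 100, 27
np.random.seed(0)
Wax = np.random.randn(n_a, vocab_size)
Waa = np.random.randn(n_a, n_a)
Wya = np.random.randn(vocab_size, n_a)
b = np.random.randn(n_a, 1)
by = np.random.randn(vocab_size, 1)

x = np.zeros((vocab_size, 1))    # x<1>: the all-zeros "dummy" input, a 2D column vector
a_prev = np.zeros((n_a, 1))      # a<0>: also a 2D column vector of zeros

a = np.tanh(np.dot(Wax, x) + np.dot(Waa, a_prev) + b)   # equation (1)
z = np.dot(Wya, a) + by                                  # equation (2)
y = softmax(z)                                           # equation (3)
print(y.shape, round(float(y.sum()), 4))                 # (27, 1) 1.0 -- probabilities over characters
```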
###Code
matrix1 = np.array([[1,1],[2,2],[3,3]]) # (3,2)
matrix2 = np.array([[0],[0],[0]]) # (3,1)
vector1D = np.array([1,1]) # (2,)
vector2D = np.array([[1],[1]]) # (2,1)
print("matrix1 \n", matrix1,"\n")
print("matrix2 \n", matrix2,"\n")
print("vector1D \n", vector1D,"\n")
print("vector2D \n", vector2D)
print("Multiply 2D and 1D arrays: result is a 1D array\n",
np.dot(matrix1,vector1D))
print("Multiply 2D and 2D arrays: result is a 2D array\n",
np.dot(matrix1,vector2D))
print("Adding (3 x 1) vector to a (3 x 1) vector is a (3 x 1) vector\n",
"This is what we want here!\n",
np.dot(matrix1,vector2D) + matrix2)
print("Adding a (3,) vector to a (3 x 1) vector\n",
"broadcasts the 1D array across the second dimension\n",
"Not what we want here!\n",
np.dot(matrix1,vector1D) + matrix2
)
###Output
Adding a (3,) vector to a (3 x 1) vector
broadcasts the 1D array across the second dimension
Not what we want here!
[[2 4 6]
[2 4 6]
[2 4 6]]
###Markdown
- **Step 3**: Sampling: - Now that we have $y^{\langle t+1 \rangle}$, we want to select the next letter in the dinosaur name. If we select the most probable, the model will always generate the same result given a starting letter. To make the results more interesting, we will use np.random.choice to select a next letter that is *likely*, but not always the same. - Pick the next character's **index** according to the probability distribution specified by $\hat{y}^{\langle t+1 \rangle }$. - This means that if $\hat{y}^{\langle t+1 \rangle }_i = 0.16$, you will pick the index "i" with 16% probability. - Use [np.random.choice](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.choice.html). Example of how to use `np.random.choice()`: ```python np.random.seed(0) probs = np.array([0.1, 0.0, 0.7, 0.2]) idx = np.random.choice(range(len(probs)), p = probs) ``` - This means that you will pick the index (`idx`) according to the distribution: $P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$. - Note that the value that's set to `p` should be set to a 1D vector. - Also notice that $\hat{y}^{\langle t+1 \rangle}$, which is `y` in the code, is a 2D array. - Also notice that, while in your implementation the first argument to np.random.choice is just an ordered list [0,1,.., vocab_len-1], it is *not* appropriate to use char_to_ix.values(). The *order* of values returned by a python dictionary .values() call will be the same order as they are added to the dictionary. The grader may have a different order when it runs your routine than when you run it in your notebook. Additional Hints- [range](https://docs.python.org/3/library/functions.html#func-range)- [numpy.ravel](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel.html) takes a multi-dimensional array and returns its contents inside of a 1D vector.```Pythonarr = np.array([[1,2],[3,4]])print("arr")print(arr)print("arr.ravel()")print(arr.ravel())```Output:```Pythonarr[[1 2] [3 4]]arr.ravel()[1 2 3 4]```- Note that `append` is an "in-place" operation. In other words, don't do this:```Pythonfun_hobbies = fun_hobbies.append('learning') # Doesn't give you what you want``` - **Step 4**: Update to $x^{\langle t \rangle }$ - The last step to implement in `sample()` is to update the variable `x`, which currently stores $x^{\langle t \rangle }$, with the value of $x^{\langle t + 1 \rangle }$. - You will represent $x^{\langle t + 1 \rangle }$ by creating a one-hot vector corresponding to the character that you have chosen as your prediction. - You will then forward propagate $x^{\langle t + 1 \rangle }$ in Step 1 and keep repeating the process until you get a "\n" character, indicating that you have reached the end of the dinosaur name. Additional Hints- In order to reset `x` before setting it to the new one-hot vector, you'll want to set all the values to zero. - You can either create a new numpy array: [numpy.zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html) - Or fill all values with a single number: [numpy.ndarray.fill](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.fill.html)
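Putting Steps 3 and 4 together outside the graded function, a small sketch (with `y` standing in for the softmax output of Step 2) might look like:

```python
import numpy as np

vocab_size = 27
np.random.seed(0)
y = np.random.rand(vocab_size, 1)
y = y / y.sum()                    # pretend this is the (vocab_size, 1) softmax output

# Step 3: ravel() flattens the 2D column vector into the 1D vector that `p` expects.
idx = np.random.choice(np.arange(vocab_size), p=y.ravel())

# Step 4: the next input x<t+1> is a fresh one-hot column vector for the sampled index.
x = np.zeros((vocab_size, 1))
x[idx] = 1
print(idx, int(x[idx, 0]))         # the sampled index and the 1 placed at that position
```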
###Code
# GRADED FUNCTION: sample
def sample(parameters, char_to_ix, seed):
"""
Sample a sequence of characters according to a sequence of probability distributions output of the RNN
Arguments:
parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b.
char_to_ix -- python dictionary mapping each character to an index.
seed -- used for grading purposes. Do not worry about it.
Returns:
indices -- a list of length n containing the indices of the sampled characters.
"""
# Retrieve parameters and relevant shapes from "parameters" dictionary
Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']
vocab_size = by.shape[0]
n_a = Waa.shape[1]
### START CODE HERE ###
# Step 1: Create the a zero vector x that can be used as the one-hot vector
# representing the first character (initializing the sequence generation). (≈1 line)
x = np.zeros((vocab_size, 1))
# Step 1': Initialize a_prev as zeros (≈1 line)
a_prev = np.zeros((n_a, 1))
# Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line)
indices = []
# idx is the index of the one-hot vector x that is set to 1
# All other positions in x are zero.
# We will initialize idx to -1
idx = -1
# Loop over time-steps t. At each time-step:
# sample a character from a probability distribution
# and append its index (`idx`) to the list "indices".
# We'll stop if we reach 50 characters
# (which should be very unlikely with a well trained model).
# Setting the maximum number of characters helps with debugging and prevents infinite loops.
counter = 0
newline_character = char_to_ix['\n']
while (idx != newline_character and counter != 50):
# Step 2: Forward propagate x using the equations (1), (2) and (3)
a = np.tanh(Waa @ a_prev + Wax @ x + b)
z = Wya @ a + by
y = softmax(z)
# for grading purposes
np.random.seed(counter+seed)
# Step 3: Sample the index of a character within the vocabulary from the probability distribution y
# (see additional hints above)
idx = np.random.choice([i for i in range(vocab_size)], p=y.ravel())
# Append the index to "indices"
indices.append(idx)
# Step 4: Overwrite the input x with one that corresponds to the sampled index `idx`.
# (see additional hints above)
x = np.zeros((vocab_size, 1))
x[idx] = 1
# Update "a_prev" to be "a"
a_prev = a
# for grading purposes
seed += 1
counter +=1
### END CODE HERE ###
if (counter == 50):
indices.append(char_to_ix['\n'])
return indices
np.random.seed(2)
_, n_a = 20, 100
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
indices = sample(parameters, char_to_ix, 0)
print("Sampling:")
print("list of sampled indices:\n", indices)
print("list of sampled characters:\n", [ix_to_char[i] for i in indices])
###Output
Sampling:
list of sampled indices:
[12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 17, 24, 12, 13, 24, 0]
list of sampled characters:
['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'q', 'x', 'l', 'm', 'x', '\n']
###Markdown
** Expected output:**```PythonSampling:list of sampled indices: [12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 17, 24, 12, 13, 24, 0]list of sampled characters: ['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'q', 'x', 'l', 'm', 'x', '\n']``` 3 - Building the language model It is time to build the character-level language model for text generation. 3.1 - Gradient descent * In this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients). * You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent. As a reminder, here are the steps of a common optimization loop for an RNN:- Forward propagate through the RNN to compute the loss- Backward propagate through time to compute the gradients of the loss with respect to the parameters- Clip the gradients- Update the parameters using gradient descent **Exercise**: Implement the optimization process (one step of stochastic gradient descent). The following functions are provided:```pythondef rnn_forward(X, Y, a_prev, parameters): """ Performs the forward propagation through the RNN and computes the cross-entropy loss. It returns the loss' value as well as a "cache" storing values to be used in backpropagation.""" .... return loss, cache def rnn_backward(X, Y, parameters, cache): """ Performs the backward propagation through time to compute the gradients of the loss with respect to the parameters. It returns also all the hidden states.""" ... return gradients, adef update_parameters(parameters, gradients, learning_rate): """ Updates parameters using the Gradient Descent Update Rule.""" ... return parameters```Recall that you previously implemented the `clip` function: parameters* Note that the weights and biases inside the `parameters` dictionary are being updated by the optimization, even though `parameters` is not one of the returned values of the `optimize` function. The `parameters` dictionary is passed by reference into the function, so changes to this dictionary are making changes to the `parameters` dictionary even when accessed outside of the function.* Python dictionaries and lists are "pass by reference", which means that if you pass a dictionary into a function and modify the dictionary within the function, this changes that same dictionary (it's not a copy of the dictionary).
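The pass-by-reference behaviour can be seen with a tiny standalone sketch (the dictionary here is just an illustration, not the real RNN parameters):

```python
def scale_weights(params):
    # Mutating the dictionary inside the function changes the caller's dictionary too,
    # because both names refer to the same object.
    params["Wax"] = params["Wax"] * 0.5

parameters = {"Wax": 4.0}
scale_weights(parameters)
print(parameters["Wax"])   # 2.0 -- the caller sees the update even though nothing was returned
```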
###Code
# GRADED FUNCTION: optimize
def optimize(X, Y, a_prev, parameters, learning_rate = 0.01):
"""
Execute one step of the optimization to train the model.
Arguments:
X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
Y -- list of integers, exactly the same as X but shifted one index to the left.
a_prev -- previous hidden state.
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
b -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
learning_rate -- learning rate for the model.
Returns:
loss -- value of the loss function (cross-entropy)
gradients -- python dictionary containing:
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)
db -- Gradients of bias vector, of shape (n_a, 1)
dby -- Gradients of output bias vector, of shape (n_y, 1)
a[len(X)-1] -- the last hidden state, of shape (n_a, 1)
"""
### START CODE HERE ###
# Forward propagate through time (≈1 line)
loss, cache = rnn_forward(X, Y, a_prev, parameters)
# Backpropagate through time (≈1 line)
gradients, a = rnn_backward(X, Y, parameters, cache)
# Clip your gradients between -5 (min) and 5 (max) (≈1 line)
gradients = clip(gradients, 5)
# Update parameters (≈1 line)
parameters = update_parameters(parameters, gradients, learning_rate)
### END CODE HERE ###
return loss, gradients, a[len(X)-1]
np.random.seed(1)
vocab_size, n_a = 27, 100
a_prev = np.random.randn(n_a, 1)
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
X = [12,3,5,11,22,3]
Y = [4,14,11,22,25, 26]
loss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
print("Loss =", loss)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("np.argmax(gradients[\"dWax\"]) =", np.argmax(gradients["dWax"]))
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
print("a_last[4] =", a_last[4])
###Output
Loss = 126.503975722
gradients["dWaa"][1][2] = 0.194709315347
np.argmax(gradients["dWax"]) = 93
gradients["dWya"][1][2] = -0.007773876032
gradients["db"][4] = [-0.06809825]
gradients["dby"][1] = [ 0.01538192]
a_last[4] = [-1.]
###Markdown
** Expected output:**```PythonLoss = 126.503975722gradients["dWaa"][1][2] = 0.194709315347np.argmax(gradients["dWax"]) = 93gradients["dWya"][1][2] = -0.007773876032gradients["db"][4] = [-0.06809825]gradients["dby"][1] = [ 0.01538192]a_last[4] = [-1.]``` 3.2 - Training the model * Given the dataset of dinosaur names, we use each line of the dataset (one name) as one training example. * Every 2000 steps of stochastic gradient descent, you will sample several randomly chosen names to see how the algorithm is doing. **Exercise**: Follow the instructions and implement `model()`. When `examples[index]` contains one dinosaur name (string), to create an example (X, Y), you can use this: Set the index `idx` into the list of examples* Using the for-loop, walk through the shuffled list of dinosaur names in the list "examples".* For example, if there are n_e examples, the for-loop index will eventually reach n_e and beyond; think of how you would make the index cycle back to 0, so that we can continue feeding examples into the model when j is n_e, n_e + 1, etc.* Hint: n_e + 1 divided by n_e is one with a remainder of 1, so `(n_e + 1) % n_e` equals 1.* `%` is the modulus operator in Python. Extract a single example from the list of examples* `single_example`: use the `idx` index that you set previously to get one word from the list of examples. Convert a string into a list of characters: `single_example_chars`* `single_example_chars`: a Python string can be iterated over as a sequence of characters.* You can use a list comprehension (recommended over for-loops) to generate a list of characters.```Pythonstr = 'I love learning'list_of_chars = [c for c in str]print(list_of_chars)``````['I', ' ', 'l', 'o', 'v', 'e', ' ', 'l', 'e', 'a', 'r', 'n', 'i', 'n', 'g']``` Convert list of characters to a list of integers: `single_example_ix`* Create a list that contains the index numbers associated with each character.* Use the dictionary `char_to_ix`* You can combine this with the list comprehension that is used to get a list of characters from a string. Create the list of input characters: `X`* `rnn_forward` uses the **`None`** value as a flag to set the input vector as a zero-vector.* Prepend the list [**`None`**] in front of the list of input characters.* There is more than one way to prepend a value to a list. One way is to add two lists together: `['a'] + ['b']` Get the integer representation of the newline character `ix_newline`* `ix_newline`: The newline character signals the end of the dinosaur name. - get the integer representation of the newline character `'\n'`. - Use `char_to_ix` Set the list of labels (integer representation of the characters): `Y`* The goal is to train the RNN to predict the next letter in the name, so the labels are the list of characters that are one time step ahead of the characters in the input `X`. - For example, `Y[0]` contains the same value as `X[1]` * The RNN should predict a newline at the last letter, so add ix_newline to the end of the labels. - Append the integer representation of the newline character to the end of `Y`. - Note that `append` is an in-place operation. - It might be easier for you to add two lists together.
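Before filling in the graded cell, here is a small sketch of the X/Y construction described above, using a made-up mini-vocabulary (the real `char_to_ix` comes from the dataset):

```python
# Hypothetical mini-vocabulary, just to illustrate how X and Y are built.
char_to_ix_demo = {"\n": 0, "a": 1, "b": 2, "c": 3}
single_example_demo = "abc"

single_example_ix_demo = [char_to_ix_demo[c] for c in single_example_demo]
X_demo = [None] + single_example_ix_demo       # [None, 1, 2, 3]
Y_demo = X_demo[1:] + [char_to_ix_demo["\n"]]  # [1, 2, 3, 0] -- shifted by one, ends with newline
print(X_demo, Y_demo)
```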
###Code
# GRADED FUNCTION: model
def model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27, verbose = False):
"""
Trains the model and generates dinosaur names.
Arguments:
data -- text corpus
ix_to_char -- dictionary that maps the index to a character
char_to_ix -- dictionary that maps a character to an index
num_iterations -- number of iterations to train the model for
n_a -- number of units of the RNN cell
dino_names -- number of dinosaur names you want to sample at each iteration.
vocab_size -- number of unique characters found in the text (size of the vocabulary)
Returns:
parameters -- learned parameters
"""
# Retrieve n_x and n_y from vocab_size
n_x, n_y = vocab_size, vocab_size
# Initialize parameters
parameters = initialize_parameters(n_a, n_x, n_y)
# Initialize loss (this is required because we want to smooth our loss)
loss = get_initial_loss(vocab_size, dino_names)
# Build list of all dinosaur names (training examples).
with open("dinos.txt") as f:
examples = f.readlines()
examples = [x.lower().strip() for x in examples]
# Shuffle list of all dinosaur names
np.random.seed(0)
np.random.shuffle(examples)
# Initialize the hidden state of your RNN
a_prev = np.zeros((n_a, 1))
# Optimization loop
for j in range(num_iterations):
### START CODE HERE ###
# Set the index `idx` (see instructions above)
idx = j % len(examples)
# Set the input X (see instructions above)
single_example = examples[idx]
single_example_chars = [c for c in single_example]
single_example_ix = [char_to_ix[ch] for ch in single_example_chars]
X = [None] + single_example_ix
# Set the labels Y (see instructions above)
ix_newline = char_to_ix["\n"]
Y = X[1:] + [ix_newline]
# Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters
# Choose a learning rate of 0.01
curr_loss, gradients, a_prev = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
### END CODE HERE ###
# debug statements to aid in correctly forming X, Y
if verbose and j in [0, len(examples) -1, len(examples)]:
print("j = " , j, "idx = ", idx,)
if verbose and j in [0]:
print("single_example =", single_example)
print("single_example_chars", single_example_chars)
print("single_example_ix", single_example_ix)
print(" X = ", X, "\n", "Y = ", Y, "\n")
# Use a latency trick to keep the loss smooth. It happens here to accelerate the training.
loss = smooth(loss, curr_loss)
# Every 2000 Iteration, generate "n" characters thanks to sample() to check if the model is learning properly
if j % 2000 == 0:
print('Iteration: %d, Loss: %f' % (j, loss) + '\n')
# The number of dinosaur names to print
seed = 0
for name in range(dino_names):
# Sample indices and print them
sampled_indices = sample(parameters, char_to_ix, seed)
print_sample(sampled_indices, ix_to_char)
seed += 1 # To get the same result (for grading purposes), increment the seed by one.
print('\n')
return parameters
###Output
_____no_output_____
###Markdown
Run the following cell, you should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names.
###Code
parameters = model(data, ix_to_char, char_to_ix, verbose = True)
###Output
j = 0 idx = 0
single_example = turiasaurus
single_example_chars ['t', 'u', 'r', 'i', 'a', 's', 'a', 'u', 'r', 'u', 's']
single_example_ix [20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19]
X = [None, 20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19]
Y = [20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19, 0]
Iteration: 0, Loss: 23.087336
Nkzxwtdmfqoeyhsqwasjkjvu
Kneb
Kzxwtdmfqoeyhsqwasjkjvu
Neb
Zxwtdmfqoeyhsqwasjkjvu
Eb
Xwtdmfqoeyhsqwasjkjvu
j = 1535 idx = 1535
j = 1536 idx = 0
Iteration: 2000, Loss: 27.884160
Liusskeomnolxeros
Hmdaairus
Hytroligoraurus
Lecalosapaus
Xusicikoraurus
Abalpsamantisaurus
Tpraneronxeros
Iteration: 4000, Loss: 25.901815
Mivrosaurus
Inee
Ivtroplisaurus
Mbaaisaurus
Wusichisaurus
Cabaselachus
Toraperlethosdarenitochusthiamamumamaon
Iteration: 6000, Loss: 24.608779
Onwusceomosaurus
Lieeaerosaurus
Lxussaurus
Oma
Xusteonosaurus
Eeahosaurus
Toreonosaurus
Iteration: 8000, Loss: 24.070350
Onxusichepriuon
Kilabersaurus
Lutrodon
Omaaerosaurus
Xutrcheps
Edaksoje
Trodiktonus
Iteration: 10000, Loss: 23.844446
Onyusaurus
Klecalosaurus
Lustodon
Ola
Xusodonia
Eeaeosaurus
Troceosaurus
Iteration: 12000, Loss: 23.291971
Onyxosaurus
Kica
Lustrepiosaurus
Olaagrraiansaurus
Yuspangosaurus
Eealosaurus
Trognesaurus
Iteration: 14000, Loss: 23.382338
Meutromodromurus
Inda
Iutroinatorsaurus
Maca
Yusteratoptititan
Ca
Troclosaurus
Iteration: 16000, Loss: 23.255630
Meustolkanolus
Indabestacarospceryradwalosaurus
Justolopinaveraterasauracoptelalenyden
Maca
Yusocles
Daahosaurus
Trodon
Iteration: 18000, Loss: 22.905483
Phytronn
Meicanstolanthus
Mustrisaurus
Pegalosaurus
Yuskercis
Egalosaurus
Tromelosaurus
Iteration: 20000, Loss: 22.873854
Nlyushanerohyisaurus
Loga
Lustrhigosaurus
Nedalosaurus
Yuslangosaurus
Elagosaurus
Trrangosaurus
Iteration: 22000, Loss: 22.710545
Onyxromicoraurospareiosatrus
Liga
Mustoffankeugoptardoros
Ola
Yusodogongterosaurus
Ehaerona
Trododongxernochenhus
Iteration: 24000, Loss: 22.604827
Meustognathiterhucoplithaloptha
Jigaadosaurus
Kurrodon
Mecaistheansaurus
Yuromelosaurus
Eiaeropeeton
Troenathiteritaus
Iteration: 26000, Loss: 22.714486
Nhyxosaurus
Kola
Lvrosaurus
Necalosaurus
Yurolonlus
Ejakosaurus
Troindronykus
Iteration: 28000, Loss: 22.647640
Onyxosaurus
Loceahosaurus
Lustleonlonx
Olabasicachudrakhurgawamosaurus
Ytrojianiisaurus
Eladon
Tromacimathoshargicitan
Iteration: 30000, Loss: 22.598485
Oryuton
Locaaesaurus
Lustoendosaurus
Olaahus
Yusaurus
Ehadopldarshuellus
Troia
Iteration: 32000, Loss: 22.211861
Meutronlapsaurus
Kracallthcaps
Lustrathus
Macairugeanosaurus
Yusidoneraverataus
Eialosaurus
Troimaniathonsaurus
Iteration: 34000, Loss: 22.447230
Onyxipaledisons
Kiabaeropa
Lussiamang
Pacaeptabalsaurus
Xosalong
Eiacoteg
Troia
###Markdown
** Expected Output**```Pythonj = 0 idx = 0single_example = turiasaurussingle_example_chars ['t', 'u', 'r', 'i', 'a', 's', 'a', 'u', 'r', 'u', 's']single_example_ix [20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19] X = [None, 20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19] Y = [20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19, 0] Iteration: 0, Loss: 23.087336NkzxwtdmfqoeyhsqwasjkjvuKnebKzxwtdmfqoeyhsqwasjkjvuNebZxwtdmfqoeyhsqwasjkjvuEbXwtdmfqoeyhsqwasjkjvuj = 1535 idx = 1535j = 1536 idx = 0Iteration: 2000, Loss: 27.884160...Iteration: 34000, Loss: 22.447230OnyxipaledisonsKiabaeropaLussiamangPacaeptabalsaurusXosalongEiacotegTroia``` ConclusionYou can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implementation generated some really cool names like `maconucon`, `marloralus` and `macingsersaurus`. Your model hopefully also learned that dinosaur names tend to end in `saurus`, `don`, `aura`, `tor`, etc.If your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, `dromaeosauroides` is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest! This assignment had used a relatively small dataset, so that you could train an RNN quickly on a CPU. Training a model of the english language requires a much bigger dataset, and usually needs much more computation, and could run for many hours on GPUs. We ran our dinosaur name for quite some time, and so far our favorite name is the great, undefeatable, and fierce: Mangosaurus! 4 - Writing like ShakespeareThe rest of this notebook is optional and is not graded, but we hope you'll do it anyway since it's quite fun and informative. A similar (but more complicated) task is to generate Shakespeare poems. Instead of learning from a dataset of Dinosaur names you can use a collection of Shakespearian poems. Using LSTM cells, you can learn longer term dependencies that span many characters in the text--e.g., where a character appearing somewhere a sequence can influence what should be a different character much much later in the sequence. These long term dependencies were less important with dinosaur names, since the names were quite short. Let's become poets! We have implemented a Shakespeare poem generator with Keras. Run the following cell to load the required packages and models. This may take a few minutes.
###Code
from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from shakespeare_utils import *
import sys
import io
###Output
Using TensorFlow backend.
###Markdown
To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called [*"The Sonnets"*](shakespeare.txt). Let's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run `generate_output`, which will prompt asking you for an input (`<`40 characters). The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try "Forsooth this maketh no sense " (don't enter the quotation marks). Depending on whether you include the space at the end, your results might also differ--try it both ways, and try other inputs as well.
###Code
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])
# Run this cell to try with different inputs without having to re-train the model
generate_output()
###Output
Write the beginning of your poem, the Shakespeare machine will complete it. Your input is: boy
Here is your poem:
boy,
thing lovese lies a fiver my langes cans,
withoutpation lost with upain the beauty,
the part not bedo a with rigon far i retord.
to be the am his swoemh is evens comfiss,
that beauty corwats wifad dolds to preventedse.
lide and from my fevarned cu mey my higst,
my paeten liver mound be greess in where in my grow creash,
a tite lilen so mid my pull munts thee krows,
that love the cike corse t |
neural_networks/Logistic_Regression_with_a_Neural_Network_mindset_v6a.ipynb | ###Markdown
Logistic Regression with a Neural Network mindsetWelcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.**Instructions:**- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.**You will learn to:**- Build the general architecture of a learning algorithm, including: - Initializing parameters - Calculating the cost function and its gradient - Using an optimization algorithm (gradient descent) - Gather all three functions above into a main model function, in the right order. UpdatesThis notebook has been updated over the past few months. The prior version was named "v5", and the current versionis now named '6a' If you were working on a previous version:* You can find your prior work by looking in the file directory for the older files (named by version name).* To view the file directory, click on the "Coursera" icon in the top left corner of this notebook.* Please copy your work from the older versions to the new version, in order to submit your work for grading. List of Updates* Forward propagation formula, indexing now starts at 1 instead of 0.* Optimization function comment now says "print cost every 100 training iterations" instead of "examples".* Fixed grammar in the comments.* Y_prediction_test variable name is used consistently.* Plot's axis label now says "iterations (hundred)" instead of "iterations".* When testing the model, the test image is normalized by dividing by 255. 1 - Packages First, let's run the cell below to import all the packages that you will need during this assignment. - [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.- [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.- [matplotlib](http://matplotlib.org) is a famous library to plot graphs in Python.- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end.
###Code
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
###Output
_____no_output_____
###Markdown
2 - Overview of the Problem set **Problem Statement**: You are given a dataset ("data.h5") containing: - a training set of m_train images labeled as cat (y=1) or non-cat (y=0) - a test set of m_test images labeled as cat or non-cat - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.Let's get more familiar with the dataset. Load the data by running the following code.
###Code
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
###Output
_____no_output_____
###Markdown
We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the `index` value and re-run to see other images.
###Code
# Example of a picture
index = 25
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
###Output
y = [1], it's a 'cat' picture.
###Markdown
Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs. **Exercise:** Find the values for: - m_train (number of training examples) - m_test (number of test examples) - num_px (= height = width of a training image)Remember that `train_set_x_orig` is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access `m_train` by writing `train_set_x_orig.shape[0]`.
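As an optional habit (not required by the exercise), once the cell below has defined `m_train`, `m_test` and `num_px`, you can assert the shapes you expect so dimension bugs surface early:

```python
# Optional sanity checks -- run after the cell below.
assert train_set_x_orig.shape == (m_train, num_px, num_px, 3)
assert train_set_y.shape == (1, m_train)
assert test_set_x_orig.shape == (m_test, num_px, num_px, 3)
assert test_set_y.shape == (1, m_test)
```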
###Code
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
###Output
Number of training examples: m_train = 209
Number of testing examples: m_test = 50
Height/Width of each image: num_px = 64
Each image is of size: (64, 64, 3)
train_set_x shape: (209, 64, 64, 3)
train_set_y shape: (1, 209)
test_set_x shape: (50, 64, 64, 3)
test_set_y shape: (1, 50)
###Markdown
**Expected Output for m_train, m_test and num_px**: **m_train** 209 **m_test** 50 **num_px** 64 For convenience, you should now reshape images of shape (num_px, num_px, 3) in a numpy-array of shape (num_px $*$ num_px $*$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.**Exercise:** Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num\_px $*$ num\_px $*$ 3, 1).A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b$*$c$*$d, a) is to use: ```pythonX_flatten = X.reshape(X.shape[0], -1).T X.T is the transpose of X```
###Code
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
###Output
train_set_x_flatten shape: (12288, 209)
train_set_y shape: (1, 209)
test_set_x_flatten shape: (12288, 50)
test_set_y shape: (1, 50)
sanity check after reshaping: [17 31 56 22 33]
###Markdown
**Expected Output**: **train_set_x_flatten shape** (12288, 209) **train_set_y shape** (1, 209) **test_set_x_flatten shape** (12288, 50) **test_set_y shape** (1, 50) **sanity check after reshaping** [17 31 56 22 33] To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you substract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel). Let's standardize our dataset.
###Code
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
###Output
_____no_output_____
###Markdown
**What you need to remember:**Common steps for pre-processing a new dataset are:- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)- Reshape the datasets such that each example is now a vector of size (num_px \* num_px \* 3, 1)- "Standardize" the data 3 - General Architecture of the learning algorithm It's time to design a simple algorithm to distinguish cat images from non-cat images.You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why **Logistic Regression is actually a very simple Neural Network!****Mathematical expression of the algorithm**:For one example $x^{(i)}$:$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$ $$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$The cost is then computed by summing over all training examples:$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$**Key steps**:In this exercise, you will carry out the following steps: - Initialize the parameters of the model - Learn the parameters for the model by minimizing the cost - Use the learned parameters to make predictions (on the test set) - Analyse the results and conclude 4 - Building the parts of our algorithm The main steps for building a Neural Network are:1. Define the model structure (such as number of input features) 2. Initialize the model's parameters3. Loop: - Calculate current loss (forward propagation) - Calculate current gradient (backward propagation) - Update parameters (gradient descent)You often build 1-3 separately and integrate them into one function we call `model()`. 4.1 - Helper functions**Exercise**: Using your code from "Python Basics", implement `sigmoid()`. As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp().
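As a tiny numeric sanity check of the loss formula above (the numbers are made up, chosen only to show the behaviour):

```python
# Cross-entropy loss for a single example: a confident correct prediction is
# cheap, a confident wrong one is expensive.
a, y = 0.9, 1
loss_correct = -(y * np.log(a) + (1 - y) * np.log(1 - a))  # ~0.105
a, y = 0.9, 0
loss_wrong = -(y * np.log(a) + (1 - y) * np.log(1 - a))    # ~2.303
print(loss_correct, loss_wrong)
```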
###Code
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+np.exp(-1*z))
### END CODE HERE ###
return s
print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
###Output
sigmoid([0, 2]) = [ 0.5 0.88079708]
###Markdown
**Expected Output**: **sigmoid([0, 2])** [ 0.5 0.88079708] 4.2 - Initializing parameters**Exercise:** Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.
###Code
# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
"""
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
"""
### START CODE HERE ### (≈ 1 line of code)
w = np.zeros((dim,1))
b = 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
###Output
w = [[ 0.]
[ 0.]]
b = 0
###Markdown
**Expected Output**: ** w ** [[ 0.] [ 0.]] ** b ** 0 For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1). 4.3 - Forward and Backward propagationNow that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.**Exercise:** Implement a function `propagate()` that computes the cost function and its gradient.**Hints**:Forward Propagation:- You get X- You compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, ..., a^{(m-1)}, a^{(m)})$- You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})$Here are the two formulas you will be using: $$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
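Once your `propagate()` below matches the expected output, an easy way to gain extra confidence in formulas (7) and (8) is a finite-difference check on the bias (a sketch, not part of the assignment):

```python
# Compare the analytic db with a centered finite difference of the cost.
# Run this after propagate() has been defined below.
eps = 1e-7
w_chk, b_chk = np.array([[1.], [2.]]), 2.
X_chk = np.array([[1., 2., -1.], [3., 4., -3.2]])
Y_chk = np.array([[1, 0, 1]])
grads_chk, _ = propagate(w_chk, b_chk, X_chk, Y_chk)
_, cost_plus = propagate(w_chk, b_chk + eps, X_chk, Y_chk)
_, cost_minus = propagate(w_chk, b_chk - eps, X_chk, Y_chk)
print(grads_chk["db"], (cost_plus - cost_minus) / (2 * eps))  # should agree closely
```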
###Code
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
"""
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
"""
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
A = sigmoid(np.dot(w.T,X) + b) # compute activation
cost = -1/m * np.sum(Y*np.log(A) + (1-Y)*np.log(1-A)) # compute cost
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
dw = 1/m * np.dot(X, (A - Y).T)
db = 1/m * np.sum(A-Y)
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
###Output
dw = [[ 0.99845601]
[ 2.39507239]]
db = 0.00145557813678
cost = 5.80154531939
###Markdown
**Expected Output**: ** dw ** [[ 0.99845601] [ 2.39507239]] ** db ** 0.00145557813678 ** cost ** 5.801545319394553 4.4 - Optimization- You have initialized your parameters.- You are also able to compute a cost function and its gradient.- Now, you want to update the parameters using gradient descent.**Exercise:** Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate.
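The update rule is easiest to see on a one-dimensional toy function before applying it to $w$ and $b$ (purely illustrative):

```python
# Gradient descent on f(theta) = theta**2, whose gradient is 2*theta.
theta, alpha = 5.0, 0.1
for _ in range(50):
    theta = theta - alpha * (2 * theta)
print(theta)  # close to 0, the minimizer of f
```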
###Code
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
"""
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
"""
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
grads, cost = propagate(w, b, X, Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
w = w - learning_rate * dw
b = b - learning_rate * db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
# Print the cost every 100 training iterations
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
###Output
w = [[ 0.19033591]
[ 0.12259159]]
b = 1.92535983008
dw = [[ 0.67752042]
[ 1.41625495]]
db = 0.219194504541
###Markdown
**Expected Output**: **w** [[ 0.19033591] [ 0.12259159]] **b** 1.92535983008 **dw** [[ 0.67752042] [ 1.41625495]] **db** 0.219194504541 **Exercise:** The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the `predict()` function. There are two steps to computing predictions: 1. Calculate $\hat{Y} = A = \sigma(w^T X + b)$ 2. Convert the entries of A into 0 (if activation <= 0.5) or 1 (if activation > 0.5), and store the predictions in a vector `Y_prediction`. If you wish, you can use an `if`/`else` statement in a `for` loop (though there is also a way to vectorize this).
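For the "way to vectorize this" hinted at above, one possibility (shown on a dummy probability vector; the graded cell keeps the explicit loop) is boolean thresholding:

```python
# Vectorized thresholding: probabilities > 0.5 become 1.0, the rest 0.0.
A_demo = np.array([[0.2, 0.9, 0.51]])
Y_prediction_demo = (A_demo > 0.5).astype(float)
print(Y_prediction_demo)  # [[0. 1. 1.]]
```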
###Code
# GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
w = w.reshape(X.shape[0], 1)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
A = sigmoid(np.dot(w.T, X) + b)
### END CODE HERE ###
for i in range(A.shape[1]):
# Convert probabilities A[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
if A[0,i]<=0.5:
Y_prediction[0,i] = 0
else:
Y_prediction[0,i] = 1
pass
### END CODE HERE ###
assert(Y_prediction.shape == (1, m))
return Y_prediction
w = np.array([[0.1124579],[0.23106775]])
b = -0.3
X = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]])
print ("predictions = " + str(predict(w, b, X)))
###Output
predictions = [[ 1. 1. 0.]]
###Markdown
**Expected Output**: **predictions** [[ 1. 1. 0.]] **What to remember:**You've implemented several functions that:- Initialize (w,b)- Optimize the loss iteratively to learn parameters (w,b): - computing the cost and its gradient - updating the parameters using gradient descent- Use the learned (w,b) to predict the labels for a given set of examples 5 - Merge all functions into a model You will now see how the overall model is structured by putting together all the building blocks (functions implemented in the previous parts) together, in the right order.**Exercise:** Implement the model function. Use the following notation: - Y_prediction_test for your predictions on the test set - Y_prediction_train for your predictions on the train set - w, costs, grads for the outputs of optimize()
###Code
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
"""
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
print_cost -- Set to true to print the cost every 100 iterations
Returns:
d -- dictionary containing information about the model.
"""
### START CODE HERE ###
# initialize parameters with zeros (≈ 1 line of code)
w, b = initialize_with_zeros(X_train.shape[0])
# Gradient descent (≈ 1 line of code)
parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
# Retrieve parameters w and b from dictionary "parameters"
w = parameters["w"]
b = parameters["b"]
# Predict test/train set examples (≈ 2 lines of code)
Y_prediction_test = predict(w,b, X_test)
Y_prediction_train = predict(w,b, X_train)
### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
###Output
_____no_output_____
###Markdown
Run the following cell to train your model.
###Code
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
###Output
Cost after iteration 0: 0.693147
Cost after iteration 100: 0.584508
Cost after iteration 200: 0.466949
Cost after iteration 300: 0.376007
Cost after iteration 400: 0.331463
Cost after iteration 500: 0.303273
Cost after iteration 600: 0.279880
Cost after iteration 700: 0.260042
Cost after iteration 800: 0.242941
Cost after iteration 900: 0.228004
Cost after iteration 1000: 0.214820
Cost after iteration 1100: 0.203078
Cost after iteration 1200: 0.192544
Cost after iteration 1300: 0.183033
Cost after iteration 1400: 0.174399
Cost after iteration 1500: 0.166521
Cost after iteration 1600: 0.159305
Cost after iteration 1700: 0.152667
Cost after iteration 1800: 0.146542
Cost after iteration 1900: 0.140872
train accuracy: 99.04306220095694 %
test accuracy: 70.0 %
###Markdown
**Expected Output**: **Cost after iteration 0 ** 0.693147 $\vdots$ $\vdots$ **Train Accuracy** 99.04306220095694 % **Test Accuracy** 70.0 % **Comment**: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week! Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the `index` variable) you can look at predictions on pictures of the test set.
###Code
# Example of a picture that was wrongly classified.
index = 1
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.")
###Output
y = 1, you predicted that it is a "cat" picture.
###Markdown
Let's also plot the cost function and the gradients.
###Code
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
###Output
_____no_output_____
###Markdown
**Interpretation**:You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting. 6 - Further analysis (optional/ungraded exercise) Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$. Choice of learning rate **Reminder**:In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free also to try different values than the three we have initialized the `learning_rates` variable to contain, and see what happens.
###Code
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations (hundreds)')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
###Output
learning rate is: 0.01
train accuracy: 99.52153110047847 %
test accuracy: 68.0 %
-------------------------------------------------------
learning rate is: 0.001
train accuracy: 88.99521531100478 %
test accuracy: 64.0 %
-------------------------------------------------------
learning rate is: 0.0001
train accuracy: 68.42105263157895 %
test accuracy: 36.0 %
-------------------------------------------------------
###Markdown
**Interpretation**: - Different learning rates give different costs and thus different predictions results.- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost). - A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.- In deep learning, we usually recommend that you: - Choose the learning rate that better minimizes the cost function. - If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.) 7 - Test with your own image (optional/ungraded exercise) Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Change your image's name in the following code 4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
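Note that the cell below relies on `scipy.ndimage.imread` and `scipy.misc.imresize`, which have been removed from recent SciPy releases. If they are unavailable in your environment, an equivalent preprocessing with PIL (already imported above) could look like the following sketch; the file name is just a placeholder:

```python
# PIL-based alternative to the SciPy image calls in the next cell (sketch).
img = Image.open("images/my_image.jpg").resize((num_px, num_px))
image = np.array(img) / 255.
my_image = image.reshape((1, num_px * num_px * 3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
print(np.squeeze(my_predicted_image))
```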
###Code
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "my_image.jpg" # change this to the name of your image file
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
image = image/255.
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
###Output
y = 0.0, your algorithm predicts a "non-cat" picture.
|
AN03_Analysis_Integralrechnung.ipynb | ###Markdown
AN03 Analysis - Integral Calculus. Exercises: C17, C18 (integration by parts), C1, C2, C7 (substitution), F1, F2, F3 (double integrals) --- F34, F35, F36 (triple integrals), F17, F18 (double integrals in polar coordinates) --- Exercise sheet for ordinary integrals. Example for partial fraction decomposition: $$\frac{x+1}{x^3-5x^2+8x-4}$$ Examples from the lecture: $$\int_{x=0}^2 \int_{y=-1}^{1} \int_{z=0}^{1} (1+2x-z^3)\, dz\, dy\, dx = 11$$ Todo: $$\int_{z=0}^1 \int_{y=1}^{z-1} \int_{x=-y}^{y+z^2} (x+y+z^2)\, dx\, dy\, dz = -\frac{8}{5} \text{ or } -\frac{91}{90}$$ Definition of the ordinary integral. Rules and integration methods for ordinary integrals: integration by parts, substitution, integration via partial fraction decomposition. Definition of multiple integrals (double/triple integrals). Rules for multiple integrals. Fundamental theorem of calculus. Coordinate systems and the corresponding integrals: Cartesian coordinates, polar coordinates, cylindrical coordinates, spherical coordinates.
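As a quick cross-check of the partial-fraction example and the first lecture integral above, the following snippet can be used (it was not part of the original notebook and assumes `sympy` is available):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

# Partial fraction decomposition of (x+1)/(x^3 - 5x^2 + 8x - 4)
print(sp.apart((x + 1) / (x**3 - 5*x**2 + 8*x - 4), x))
# expected: 3/(x - 2)**2 - 2/(x - 2) + 2/(x - 1)

# Triple integral from the lecture example; should evaluate to 11
print(sp.integrate(1 + 2*x - z**3, (z, 0, 1), (y, -1, 1), (x, 0, 2)))
```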
###Code
from scipy import integrate
def f(x,y,z):
return x + y + z**2
def bounds_z():
return [0, 1]
def bounds_y(z):
return [1, z-1]
def bounds_x(y,z):
return [-y, y+z**2]
I = integrate.nquad(f, [bounds_x, bounds_y, bounds_z])
I
def f(x,y,z):
return 1 + 2*x - z**3
# nquad pairs the ranges with f's arguments in order (x first, innermost),
# so the x-range must come first; each bounds function receives the values
# of the remaining (outer) variables.
def bounds_x(y,z):
return [0, 2]
def bounds_y(z):
return [-1, 1]
def bounds_z():
return [0, 1]
I = integrate.nquad(f, [bounds_x, bounds_y, bounds_z])
I
###Output
_____no_output_____ |
machine-learning-with-go/ml_with_go/bonus/bonus3/bonus3.ipynb | ###Markdown
Streaming sentiment analysis of tweets Imports
###Code
import (
"encoding/json"
"net"
"net/http"
"net/url"
"strconv"
"strings"
"sync"
"time"
"fmt"
"os"
"context"
"github.com/garyburd/go-oauth/oauth"
"github.com/machinebox/sdk-go/textbox"
)
###Output
_____no_output_____
###Markdown
Previously discussed types, values, and functions Twitter related types:
###Code
// Tweet is a single tweet.
type Tweet struct {
Text string
Terms []string
}
// TweetReader includes the info we need to access Twitter.
type TweetReader struct {
ConsumerKey, ConsumerSecret, AccessToken, AccessSecret string
}
// NewTweetReader creates a new TweetReader with the given credentials.
func NewTweetReader(consumerKey, consumerSecret, accessToken, accessSecret string) *TweetReader {
return &TweetReader{
ConsumerKey: consumerKey,
ConsumerSecret: consumerSecret,
AccessToken: accessToken,
AccessSecret: accessSecret,
}
}
###Output
_____no_output_____
###Markdown
HTTP Client:
###Code
// Create a new HTTP client.
var connLock sync.Mutex
var conn net.Conn
client := &http.Client{
Transport: &http.Transport{
Dial: func(netw, addr string) (net.Conn, error) {
connLock.Lock()
defer connLock.Unlock()
if conn != nil {
conn.Close()
conn = nil
}
netc, err := net.DialTimeout(netw, addr, 5*time.Second)
if err != nil {
return nil, err
}
conn = netc
return netc, nil
},
},
}
###Output
_____no_output_____
###Markdown
Credentials:
###Code
// Create a new Tweet Reader.
consumerKey := ""
consumerSecret := ""
accessToken := ""
accessSecret := ""
r := NewTweetReader(consumerKey, consumerSecret, accessToken, accessSecret)
// Create oauth Credentials.
creds := &oauth.Credentials{
Token: r.AccessToken,
Secret: r.AccessSecret,
}
// Create an oauth Client.
authClient := &oauth.Client{
Credentials: oauth.Credentials{
Token: r.ConsumerKey,
Secret: r.ConsumerSecret,
},
}
###Output
_____no_output_____
###Markdown
MachineBox client:
###Code
machBoxIP := ""
mbClient := textbox.New(machBoxIP)
###Output
_____no_output_____
###Markdown
Streaming sentiment analysis We will perform this analysis in a manner similar to our lasting streaming collection of tweets. However, in this case, our two goroutines will:1. Collect tweets and send them on a channel `tweets`, and2. Analyze the tweets from the channel `tweets` and update our tweet statisticsAs such, we need to define our `Stats` type:
###Code
// Stats stores aggregated stats about
// tweets collected over time
type Stats struct {
SentimentAverage float64
Counts map[string]int
Mux sync.Mutex
}
// Initialize the stats.
myStats := Stats{
SentimentAverage: 0.0,
Counts: map[string]int{
"positive": 0,
"negative": 0,
"neutral": 0,
"total": 0,
},
Mux: sync.Mutex{},
}
###Output
_____no_output_____
###Markdown
And the corresponding functions to update the stats:
###Code
// IncrementCount the count of tweets.
func (s *Stats) IncrementCount(sentiment float64) {
// Get the appropriate counter.
var key string
switch {
case sentiment > 0.80:
key = "positive"
case sentiment < 0.50:
key = "negative"
default:
key = "neutral"
}
// Update the counts.
s.Mux.Lock()
s.Counts[key]++
s.Counts["total"]++
s.Mux.Unlock()
}
// Update the tweet stream sentiment.
func (s *Stats) UpdateSentiment(newSentiment float64) {
// Lock so only the current goroutine can access the sentiment.
s.Mux.Lock()
// Get the current count of tweets.
total, ok := s.Counts["total"]
if !ok {
fmt.Println("Could not get key value \"total\"")
return
}
// Update the value.
s.SentimentAverage = (newSentiment + s.SentimentAverage * float64(total))/(float64(total) + 1.0)
// Unlock the data.
s.Mux.Unlock()
}
###Output
_____no_output_____
###Markdown
Now we are going to start our streaming collection and analysis of tweets. After starting the streaming analysis, we will check our stats occasionally to see the current values. Here we go!
###Code
ctx, _ := context.WithTimeout(context.Background(), 10*time.Second)
tweets := make(chan Tweet)
terms := []string{"Trump", "Russia"}
fmt.Println("Start 1st goroutine to collect tweets...")
go func() {
// Keep re-connecting to the stream until the context is cancelled,
// so the `continue` statements below retry instead of being illegal outside a loop.
for {
select {
case <-ctx.Done():
return
default:
}
// Prepare the query.
form := url.Values{"track": terms}
formEnc := form.Encode()
u, err := url.Parse("https://stream.twitter.com/1.1/statuses/filter.json")
if err != nil {
fmt.Println("Error parsing URL:", err)
continue
}
// Prepare the request.
req, err := http.NewRequest("POST", u.String(), strings.NewReader(formEnc))
if err != nil {
fmt.Println("creating filter request failed:", err)
continue
}
req.Header.Set("Authorization", authClient.AuthorizationHeader(creds, "POST", u, form))
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
req.Header.Set("Content-Length", strconv.Itoa(len(formEnc)))
// Execute the request.
resp, err := client.Do(req)
if err != nil {
fmt.Println("Error getting response:", err)
continue
}
if resp.StatusCode != http.StatusOK {
fmt.Println("Unexpected HTTP status code:", resp.StatusCode)
continue
}
// Decode the results.
decoder := json.NewDecoder(resp.Body)
for {
var t Tweet
if err := decoder.Decode(&t); err != nil {
break
}
tweets <- t
}
resp.Body.Close()
}
}()
fmt.Println("Start a 2nd goroutine that analyzes the collected tweets and updates the stats...")
go func() {
for {
select {
// Stop the goroutine.
case <-ctx.Done():
return
// Print the tweets.
case t := <-tweets:
// Analyze the tweet.
analysis, err := mbClient.Check(strings.NewReader(t.Text))
if err != nil {
fmt.Println("MachineBox error:", err)
continue
}
// Get the sentiment.
sentimentTotal := 0.0
for _, sentence := range analysis.Sentences {
sentimentTotal += sentence.Sentiment
}
sentimentTotal = sentimentTotal/float64(len(analysis.Sentences))
// Update the stats.
myStats.UpdateSentiment(sentimentTotal)
myStats.IncrementCount(sentimentTotal)
}
}
}()
// Check on our stats.
for i := 0; i < 10; i++ {
fmt.Println("")
time.Sleep(time.Second)
myStats.Mux.Lock()
fmt.Printf("Sentiment: %0.2f\n", myStats.SentimentAverage)
fmt.Printf("Total tweets analyzed: %d\n", myStats.Counts["total"])
fmt.Printf("Total positive tweets: %d\n", myStats.Counts["positive"])
fmt.Printf("Total negative tweets: %d\n", myStats.Counts["negative"])
fmt.Printf("Total neutral tweets: %d\n", myStats.Counts["neutral"])
myStats.Mux.Unlock()
}
###Output
_____no_output_____ |
notebooks/Ch02 - Deep Learning Essentials/NNBasics.ipynb | ###Markdown
Meal Item Price Problem
###Code
#The true prices used by the cashier
p_fish = 150;p_chips = 50;p_ketchup = 100
#sample meal prices: generate data meal prices for 5 days.
np.random.seed(100)
portions = np.random.randint(low=1, high=10, size=3 )
portions
X = [];y = [];days=10
for i in range(days):
portions = np.random.randint(low=1, high=10, size=3 )
price = p_fish * portions[0] + p_chips * portions[1] + p_ketchup * portions[2]
X.append(portions)
y.append(price)
X = np.array(X)
y = np.array(y)
print (X,y)
#Create a linear model
from keras.layers import Input, Dense , Activation
from keras.models import Model
from keras.optimizers import SGD
from keras.callbacks import Callback
price_guess = [np.array([[ 50 ],
[ 50],
[ 50 ]]) ]
model_input = Input(shape=(3,), dtype='float32')
model_output = Dense(1, activation='linear', use_bias=False,
name='LinearNeuron',
weights=price_guess)(model_input)
sgd = SGD(lr=0.01)
model = Model(model_input, model_output)
model.compile(loss="mean_squared_error", optimizer=sgd)
model.summary()
history = model.fit(X, y, batch_size=20, epochs=30,verbose=2)
l4 = history.history['loss']
model.get_layer('LinearNeuron').get_weights()
###Output
_____no_output_____
###Markdown
XOR Problem in Keras
###Code
X = np.array([[0,0],[0,1],[1,0],[1,1]])
y = np.array([[0],[1],[1],[0]])
# XOR is not a linearly separable problem.
# Let's see that a purely linear model cannot solve it, and add a non-linear hidden layer.
model_input = Input(shape=(2,), dtype='float32')
z = Dense(2,name='HiddenLayer', kernel_initializer='ones', activation='relu')(model_input)
#z = Activation('relu')(z)
z = Dense(1, name='OutputLayer')(z)
model_output = Activation('sigmoid')(z)
model = Model(model_input, model_output)
#model.summary()
sgd = SGD(lr=0.5)
#model.compile(loss="mse", optimizer=sgd)
model.compile(loss="binary_crossentropy", optimizer=sgd)
model.fit(X, y, batch_size=4, epochs=200,verbose=0)
preds = np.round(model.predict(X),decimals=3)
pd.DataFrame({'Y_actual':list(y), 'Predictions':list(preds)})
model.get_weights()
hidden_layer_output = Model(inputs=model.input,
outputs=model.get_layer('HiddenLayer').output)
projection = hidden_layer_output.predict(X)
for i in range(4):
print (X[i], projection[i])
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(5,10))
ax = fig.add_subplot(211)
plt.scatter(x=projection[:, 0], y=projection[:, 1], c=('g'))
ax.set_xlabel('X axis (h1)')
ax.set_ylabel('Y axis (h2)')
ax.set_label('Transformed Space')
# The hidden layer transforms the input into a linearly separable representation.
x1, y1 = [projection[0, 0]-0.5, projection[3, 0]], [projection[0, 1]+0.5, projection[3, 1]+0.5]
plt.plot(x1, y1)
for i, inputx in enumerate(X):
ax.annotate(str(inputx), (projection[i, 0]+0.1,projection[i, 1]))
ax = fig.add_subplot(212)
ax.set_label('Original Space')
plt.scatter(x=X[:, 0], y=X[:, 1], c=('b'))
for i, inputx in enumerate(X):
ax.annotate(str(inputx), (X[i, 0]+0.05,X[i, 1]))
plt.show()
projection
#Logistic neuron: Logistic regression
from sklearn.datasets import load_breast_cancer
data = load_breast_cancer()
X = data.data
y = data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42)
X_train.shape
model_input = Input(shape=(30,), dtype='float32')
model_output = Dense(1, activation='sigmoid',
name='SigmoidNeuron')(model_input)
sgd = SGD(lr=0.01)
model = Model(model_input, model_output)
model.compile(loss="binary_crossentropy", optimizer=sgd, metrics=["accuracy"])
scaler = StandardScaler()
model.fit(scaler.fit_transform(X_train), y_train, batch_size=10, epochs=5,verbose=2,
validation_data=(scaler.fit_transform(X_test), y_test))
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from scipy import stats
from sklearn.datasets import make_regression  # sklearn.datasets.samples_generator was removed in newer scikit-learn
x, y = make_regression(n_samples = 100,
n_features=1,
n_informative=1,
noise=20,
random_state=2017)
x = x.flatten()
slope, intercept, _,_,_ = stats.linregress(x,y)
print("m={}, c={}".format(slope,intercept))
best_fit = np.vectorize(lambda x: x * slope + intercept)
plt.plot(x,y, 'o', alpha=0.5)
grid = np.arange(-3,3,0.1)
plt.plot(grid,best_fit(grid), '.')
plt.show()
def gradient_descent(x, y, theta_init, step=0.1, maxsteps=0, precision=0.001, ):
costs = []
m = y.size # number of data points
theta = theta_init
history = [] # to store all thetas
preds = []
counter = 0
oldcost = 0
pred = np.dot(x, theta)
error = pred - y
currentcost = np.sum(error ** 2) / (2 * m)
preds.append(pred)
costs.append(currentcost)
history.append(theta)
counter+=1
while abs(currentcost - oldcost) > precision:
oldcost=currentcost
gradient = x.T.dot(error)/m
theta = theta - step * gradient # update
history.append(theta)
pred = np.dot(x, theta)
error = pred - y
currentcost = np.sum(error ** 2) / (2 * m)
costs.append(currentcost)
if counter % 25 == 0: preds.append(pred)
counter+=1
if maxsteps:
if counter == maxsteps:
break
return history, costs, preds, counter
xaug = np.c_[np.ones(x.shape[0]), x]
theta_i = [-15, 40] + np.random.rand(2)
history, cost, preds, iters = gradient_descent(xaug, y, theta_i, step=0.1)
theta = history[-1]
print("Gradient Descent: {:.2f}, {:.2f} {:d}".format(theta[0], theta[1], iters))
print("Least Squares: {:.2f}, {:.2f}".format(intercept, slope))
from mpl_toolkits.mplot3d import Axes3D
def error(X, Y, THETA):
return np.sum((X.dot(THETA) - Y)**2)/(2*Y.size)
ms = np.linspace(theta[0] - 20 , theta[0] + 20, 20)
bs = np.linspace(theta[1] - 40 , theta[1] + 40, 40)
M, B = np.meshgrid(ms, bs)
zs = np.array([error(xaug, y, theta)
for theta in zip(np.ravel(M), np.ravel(B))])
Z = zs.reshape(M.shape)
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(M, B, Z, rstride=1, cstride=1, color='b', alpha=0.2)
ax.contour(M, B, Z, 20, color='b', alpha=0.5, offset=0, stride=30)
ax.set_xlabel('Intercept')
ax.set_ylabel('Slope')
ax.set_zlabel('Cost')
ax.view_init(elev=30., azim=30)
ax.plot([theta[0]], [theta[1]], [cost[-1]] , markerfacecolor='r', markeredgecolor='r', marker='o', markersize=7);
#ax.plot([history[0][0]], [history[0][1]], [cost[0]] , markerfacecolor='r', markeredgecolor='r', marker='o', markersize=7);
ax.plot([t[0] for t in history], [t[1] for t in history], cost , markerfacecolor='r', markeredgecolor='r', marker='.', markersize=2);
ax.plot([t[0] for t in history], [t[1] for t in history], 0 , markerfacecolor='r', markeredgecolor='r', marker='.', markersize=2);
plt.show()
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(111)
xlist = np.linspace(-7.0, 7.0, 100) # Create 1-D arrays for x,y dimensions
ylist = np.linspace(-7.0, 7.0, 100)
X,Y = np.meshgrid(xlist, ylist) # Create 2-D grid xlist,ylist values
Z = 50 - X**2 - 2*Y**2 # Compute function values on the grid
plt.contour(X, Y, Z, [10,20,30,40], colors = ['y','orange','r','b'], linestyles = 'solid')
ax.annotate('Direction Of Gradient', xy=(.6, 0.3), xytext=(.6, 0.3))
ax.annotate('Temp=30', xy=(2.8, 2.5), xytext=(2.8, 2.5))
ax.annotate('Temp=40', xy=(2.3, 2), xytext=(2.3, 1.5))
#ax.arrow(0, 0, 6.9, 6.8, head_width=0.5, head_length=0.5, fc='k', ec='k')
ax.arrow(2, 1.75, 2*2/20, 4*1.75/20, head_width=0.2, head_length=0.5, fc='r', ec='r')
ax.arrow(2, 1.75, -2*2/10, -4*1.75/10, head_width=0.3, head_length=0.5, fc='g', ec='g')
plt.show()
50 - 2**2 - 2*1.75**2
import numpy as np
import matplotlib.pylab as plt
def step(x):
return np.array(x > 0, dtype=np.int)
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def relu(x):
return np.maximum(0, x)
def tanh(x):
return (np.exp(x)-np.exp(-x)) / (np.exp(x) + np.exp(-x))
x = np.arange(-5.0, 5.0, 0.1)
y_step = step(x)
y_sigmoid = sigmoid(x)
y_relu = relu(x)
y_tanh = tanh(x)
fig, axes = plt.subplots(ncols=4, figsize=(20, 5))
ax = axes[0]
ax.plot(x, y_step,label='Binary Threshold', color='k', lw=1, linestyle=None)
ax.set_ylim(-0.8,2)
ax.set_title('Binary Threshold')
ax = axes[1]
ax.plot(x, y_sigmoid,label='Sigmoid', color='k', lw=1, linestyle=None)
ax.set_ylim(-0.001,1)
ax.set_title('Sigmoid')
ax = axes[2]
ax.plot(x, y_tanh,label='Tanh', color='k', lw=1, linestyle=None)
ax.set_ylim(-1.,1)
ax.set_title('Tanh')
ax = axes[3]
ax.plot(x, y_relu,label='ReLU', color='k', lw=1, linestyle=None)
ax.set_ylim(-0.8,5)
ax.set_title('ReLU')
plt.show()
x = np.arange(-10.0, 10.0, 0.1)
def lineup(x):
return (x-4)/12-1
def cliff(x):
x1 = -tanh(x[x<4])
x2 = np.apply_along_axis(lineup, 0, x[x>4])
return np.concatenate([x1, x2])
y_cliff = cliff(x)
fig, axes = plt.subplots(ncols=1, figsize=(10, 5))
ax = axes
ax.plot(x, y_cliff,label='Steep Cliff', color='k', lw=1, linestyle=None)
ax.set_ylim(-1.,1)
ax.set_title('Steep Cliff')
plt.show()
###Output
_____no_output_____
###Markdown
Polynomial curve fitting: Model Capacity
###Code
from math import sin, pi
import pandas as pd  # likely imported earlier in the notebook; repeated here so this cell runs on its own
N = 100; max_degree = 20
noise = np.random.normal(0, 0.2, N)
df = pd.DataFrame( index=list(range(N)),columns=list(range(1,max_degree)))
for i in range(N):
df.loc[i]=[pow(i/N,n) for n in range(1,max_degree)]
df['y']=[sin(2*pi*x/N)+noise[x] for x in range(N)]
plt.scatter(x=df[1], y=df['y'])
plt.show()
from keras.initializers import RandomNormal
# the following imports are likely made earlier in the notebook; repeated here so this cell runs on its own
from keras.layers import Input, Dense
from keras.models import Model
from keras.optimizers import SGD
from sklearn.model_selection import train_test_split
degree = 3
X = df[list(range(1,degree+1))].values
y = df['y'].values
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.60, random_state=42)
model_input = Input(shape=(degree,), dtype='float32')
model_output = Dense(1, activation='linear', name='LinearNeuron')(model_input)
sgd = SGD(lr=0.4)
model = Model(model_input, model_output)
model.compile(loss="mean_squared_error", optimizer=sgd)
history = model.fit(X_train,y_train , batch_size=10, epochs=4000,verbose=0, validation_data=(X_test,y_test) )
y_pred = model.predict(X_train)
plt.scatter(X_train[:,0], y_train)
plt.plot(np.sort(X_train[:,0]), y_pred[X_train[:,0].argsort()], label='poly fit')
plt.plot(np.sort(X_train[:,0]), [sin(2*pi*x) for x in np.sort(X_train[:,0]).tolist()], label='sin (actual pattern)')
plt.title("Model fit for polynomial of degree {}".format(degree))
plt.legend(loc='upper right')
plt.show()
model.get_weights()
y_pred = model.predict(X_test)
plt.scatter(X_test[:,0], y_test)
plt.plot(np.sort(X_test[:,0]), y_pred[X_test[:,0].argsort()])
plt.title("Model fit for plynomial of degree {}".format(degree))
plt.show()
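# To quantify the fit shown above (a minimal sketch): the History object returned by model.fit stores the
# loss curves, so the final training and validation mean squared errors summarise how well this degree fits.
# Re-running the cell with a different `degree` (e.g. 1 or 15) shows the effect of model capacity:
# a low degree underfits, while a high degree can start to overfit the 40 training points.
print("final train MSE: {:.4f}".format(history.history["loss"][-1]))
print("final test MSE: {:.4f}".format(history.history["val_loss"][-1]))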
###Output
_____no_output_____ |
AIOpSchool/Zorg/0900_OefeningTitanicEcht.ipynb | ###Markdown
DECISION TREE: TITANIC - large dataset Quite a lot is known about the passengers of the Titanic, such as via which deck they boarded, whether they travelled in first, second or third class, whether they had siblings, a spouse, children or parents on board, and whether or not they survived the Titanic disaster. You have a dataset, trainTitanic.csv, which you can find in the data folder. Based on this dataset, build a decision tree that predicts who survives and who does not. Example solution
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import tree
titanic = pd.read_csv("data/trainTitanic.csv")
titanic
# remove columns that are not needed
del titanic["PassengerId"]
del titanic["Name"]
del titanic["Ticket"]
del titanic["Fare"]
del titanic["Cabin"]
# convert the categorical variables to numeric values; passenger name, PassengerId, ticket and fare are not needed
titanic["Sex"]= titanic["Sex"].replace("female", 0)
titanic["Sex"]= titanic["Sex"].replace("male", 1)
titanic["Embarked"]= titanic["Embarked"].replace("S", 0)
titanic["Embarked"]= titanic["Embarked"].replace("C", 1)
titanic["Embarked"]= titanic["Embarked"].replace("Q", 2)
titanic
titanic = titanic.dropna()
titanic
titanic = np.array(titanic)
titanic
###Output
_____no_output_____
###Markdown
The features considered are 'passenger class', 'sex', 'age', 'siblings/spouse aboard', 'parents and children aboard' and 'port of embarkation'. These features are in columns 1, 2, ..., 6 of the matrix, respectively. Each row corresponds to one person. The first column indicates whether the person survived ('1') or not ('0').
###Code
# separate the features from the class label
parameters = titanic[:, 1:] # last 6 columns of the matrix are the considered features
klasse = titanic[:, 0] # first column is the class each person belongs to
print(parameters)
print(klasse)
# generate a decision tree from the data
beslissingsboom = tree.DecisionTreeClassifier(criterion="gini") # the tree is built using the Gini index
beslissingsboom.fit(parameters, klasse) # fit the tree to the data
plt.figure(figsize=(20,20))
tree.plot_tree(beslissingsboom,
class_names=["overleefde niet", "overleefde"],
feature_names=["Pclass", "Sex", "Age", "SibSp", "Parch", "Embarked"],
filled=True, rounded=True)
plt.show()
###Output
_____no_output_____ |
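###Markdown
As a possible extension (a minimal sketch, not part of the original exercise), the fitted tree can also be used to predict the outcome for a new passenger and to report its accuracy on the training data. The feature order and encodings follow the columns described above: Pclass, Sex (female=0, male=1), Age, SibSp, Parch, Embarked (S=0, C=1, Q=2); the passenger values below are made up for illustration.
###Code
# a hypothetical passenger: 2nd class, female, 28 years old, 1 sibling/spouse aboard, no parents/children, embarked at Cherbourg
nieuwe_passagier = np.array([[2, 0, 28, 1, 0, 1]])
print(beslissingsboom.predict(nieuwe_passagier)) # 1 = survived, 0 = did not survive
# accuracy on the training data itself (optimistic, because the tree was grown on these same rows)
print(beslissingsboom.score(parameters, klasse))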