path | concatenated_notebook
---|---
assignment2/alice/assignment2_part1_alice_logistic_regression.ipynb | ###Markdown
[mlcourse.ai](https://mlcourse.ai) – Open Machine Learning Course Authors: [Yury Kashnitskiy](https://yorko.github.io) (@yorko), Yury Isakov. Edited by Anna Tarelina (@feuerengel), and Kolchenko Sergey (@KolchenkoSergey). This material is subject to the terms and conditions of the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. Free use is permitted for any non-commercial purpose. Assignment 2. Spring 2019 Competition 1. User Identification with Logistic Regression (beating baselines in the "Alice" competition) Today we are going to practice working with sparse matrices, training Logistic Regression models, and doing feature engineering. We will reproduce a couple of baselines in the Kaggle Inclass competition ["Catch Me If You Can: Intruder Detection through Webpage Session Tracking"](https://www.kaggle.com/c/catch-me-if-you-can-intruder-detection-through-webpage-session-tracking2) (a.k.a. "Alice"). More credits will be given for beating stronger baselines. Prior to working on the assignment, you'd better check out the corresponding course material: 1. [Classification, Decision Trees and k Nearest Neighbors](https://nbviewer.jupyter.org/github/Yorko/mlcourse_open/blob/master/jupyter_english/topic03_decision_trees_kNN/topic3_decision_trees_kNN.ipynb?flush_cache=true), the same as an interactive web-based [Kaggle Kernel](https://www.kaggle.com/kashnitsky/topic-3-decision-trees-and-knn) (basics of machine learning are covered here) 2. Linear classification and regression in 5 parts: - [ordinary least squares](https://www.kaggle.com/kashnitsky/topic-4-linear-models-part-1-ols) - [linear classification](https://www.kaggle.com/kashnitsky/topic-4-linear-models-part-2-classification) - [regularization](https://www.kaggle.com/kashnitsky/topic-4-linear-models-part-3-regularization) - [logistic regression: pros and cons](https://www.kaggle.com/kashnitsky/topic-4-linear-models-part-4-more-of-logit) - [validation](https://www.kaggle.com/kashnitsky/topic-4-linear-models-part-5-validation) 3. You can also practice with demo assignments, which are simpler and already shared with solutions: - " Sarcasm detection with logistic regression": [assignment](https://www.kaggle.com/kashnitsky/a4-demo-sarcasm-detection-with-logit) + [solution](https://www.kaggle.com/kashnitsky/a4-demo-sarcasm-detection-with-logit-solution) - "Linear regression as optimization": [assignment](https://www.kaggle.com/kashnitsky/a4-demo-linear-regression-as-optimization/edit) (solution cannot be officially shared) - "Exploring OLS, Lasso and Random Forest in a regression task": [assignment](https://www.kaggle.com/kashnitsky/a6-demo-linear-models-and-rf-for-regression) + [solution](https://www.kaggle.com/kashnitsky/a6-demo-regression-solution) 4. Alice baseline with logistic regression and "bag of sites", [Kernel](https://www.kaggle.com/kashnitsky/alice-logistic-regression-baseline) 5. Correct time-aware cross-validation scheme, more features, and hyperparameter optimization, [Kernel](https://www.kaggle.com/kashnitsky/correct-time-aware-cross-validation-scheme) 6. Other [Kernels](https://www.kaggle.com/c/catch-me-if-you-can-intruder-detection-through-webpage-session-tracking2/kernels?sortBy=voteCount&group=everyone&pageSize=20&competitionId=7173) in this competition. You can share yours as well, but not high-performing ones (Public LB MAE shall be < 0.95). Please don't spoil the competitive spirit. 7. 
If that's still not enough, watch two videos on logistic regression: [mlcourse.ai/video](https://mlcourse.ai/video)**Your task:** 1. "Follow me". Complete the missing code and submit your answers via [the google form](https://docs.google.com/forms/d/15PVw9CYlX6QnxRHKIDS161kGAq3v7iiO15W3qKTePEY). Use **the same email** as in A1 (for newcomers: remember your email and use it for all forms during the course). 12 credits max. for this part 2. "Freeride". Come up with good features to beat the baselines "A2 baseline (10 credits)" (**0.95640** Public LB ROC-AUC, press "Load more" in the bottom of the [Leaderboard](https://www.kaggle.com/c/catch-me-if-you-can-intruder-detection-through-webpage-session-tracking2/leaderboard) to actually see it) and "A2 strong baseline (20 credits)" (**0.95965** Public LB ROC-AUC). As names suggest, you'll get 10 more credits for beating the first one, and 10 more (20 in total) for beating the second one. You need to name your [team](https://www.kaggle.com/c/catch-me-if-you-can-intruder-detection-through-webpage-session-tracking2/team) (out of 1 person) in full accordance with the [course rating](https://docs.google.com/spreadsheets/d/1LAy1eK8vIONzIWgcCEaVmhKPSj579zK5lrECf_tQT60/edit?usp=sharing) (for newcomers: you need to name your team with your real full name). You can think of it as a part of the assignment. 3. If you've beaten "A2 baseline (10 credits)" or performed better, you need to upload your solution as described in [course roadmap](https://mlcourse.ai/roadmap) ("Kaggle Inclass Competition Alice" -> Rules). For all baselines that you see on Public Leaderboard, it's OK to beat them on Public LB as well. But 10 winners will be defined according to the private LB, which will be revealed by @yorko on March 11. Deadline for A2: 2019 March 10, 20:59 GMT (London time) Part 1. Follow me *image credit [@muradosmann](https://www.instagram.com/muradosmann/?hl=en)*
###Code
# Import libraries and set desired options
import pickle
import numpy as np
import pandas as pd
from scipy.sparse import csr_matrix, hstack
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score
from sklearn.linear_model import LogisticRegression
from matplotlib import pyplot as plt
import seaborn as sns
sns.set()
###Output
_____no_output_____
###Markdown
Problem description In this competition, we'll analyze the sequence of websites consecutively visited by a particular person and try to predict whether this person is Alice or someone else. As a metric we will use [ROC AUC](https://en.wikipedia.org/wiki/Receiver_operating_characteristic). 1. Data Downloading and Transformation Register on [Kaggle](www.kaggle.com), if you have not done it before. Go to the competition [page](https://inclass.kaggle.com/c/catch-me-if-you-can-intruder-detection-through-webpage-session-tracking2) and download the data. First, read the training and test sets. Then we'll explore the data at hand and do a couple of simple exercises.
###Code
# Read the training and test data sets, change paths if needed
times = ['time%s' % i for i in range(1, 11)]
train_df = pd.read_csv('data/train_sessions.csv',
index_col='session_id', parse_dates=times)
test_df = pd.read_csv('data/test_sessions.csv',
index_col='session_id', parse_dates=times)
# Sort the data by time
train_df = train_df.sort_values(by='time1')
# Look at the first rows of the training set
train_df.head()
###Output
_____no_output_____
###Markdown
The training data set contains the following features:- **site1** – id of the first visited website in the session- **time1** – visiting time for the first website in the session- ...- **site10** – id of the tenth visited website in the session- **time10** – visiting time for the tenth website in the session- **target** – target variable, 1 for Alice's sessions, and 0 for the other users' sessions User sessions are chosen so that they are shorter than 30 minutes and contain no more than 10 websites, i.e. a session is considered over either once a user has visited 10 websites or once it has lasted over 30 minutes. There are some empty values in the table, which means that some sessions contain fewer than ten websites. Replace the empty values with 0 and change the column types to integer. Also load the websites dictionary and check what it looks like:
###Code
# Change site1, ..., site10 columns type to integer and fill NA-values with zeros
sites = ['site%s' % i for i in range(1, 11)]
train_df[sites] = train_df[sites].fillna(0).astype(np.uint16)
test_df[sites] = test_df[sites].fillna(0).astype(np.uint16)
# Load websites dictionary
with open(r"data/site_dic.pkl", "rb") as input_file:
site_dict = pickle.load(input_file)
# Create dataframe for the dictionary
sites_dict = pd.DataFrame(list(site_dict.keys()), index=list(site_dict.values()),
columns=['site'])
print(u'Websites total:', sites_dict.shape[0])
sites_dict.head()
###Output
Websites total: 48371
###Markdown
 2. Brief Exploratory Data Analysis Before we start training models, we have to perform Exploratory Data Analysis ([EDA](https://en.wikipedia.org/wiki/Exploratory_data_analysis)). Today, we are going to perform a shorter version, but we will use other techniques as we move forward. Let's check which websites in the training data set are the most visited. As you can see, they are Google services and a bioinformatics website (the website with index zero corresponds to our missing values, just ignore it):
###Code
# Top websites in the training data set
top_sites = pd.Series(train_df[sites].values.flatten()
).value_counts().sort_values(ascending=False).head(5)
print(top_sites)
sites_dict.loc[top_sites.drop(0).index]
###Output
21 123776
0 122730
23 87619
782 77055
22 58258
dtype: int64
###Markdown
1. What kind of websites does Alice visit the most?*For discussions, please stick to [ODS Slack](https://opendatascience.slack.com/), channel mlcourse_ai, pinned thread __a2_q1__*- videohostings- social networks- torrent trackers- news
###Code
# videohostings
top_sites = pd.Series(train_df.query('target == 1')[sites].values.flatten()
).value_counts().sort_values(ascending=False).head(15)
sites_dict.loc[top_sites.index]
###Output
_____no_output_____
###Markdown
Now let us look at the timestamps and try to characterize sessions as timeframes:
###Code
train_df[times]
train_df[times].min(axis=1).values
time_diff = ['time_diff_%s' % i for i in range(1, 10)]
for t in range(1, 10):
train_df['time_diff_' + str(t)] = (train_df['time' + str(t+1)] - train_df['time' + str(t)]) / np.timedelta64(1, 's')
train_df[times + time_diff].iloc[0:5];
train_df['diff_min'] = train_df[time_diff].min(axis=1)
train_df['diff_max'] = train_df[time_diff].max(axis=1)
train_df[time_diff + ['diff_max']].describe()
time_diff = ['time_diff_%s' % i for i in range(1, 11)]
def est_session_length(s):
    # categorize the gap (in seconds) between two consecutive site visits
    if s <= 5:
        return 'small'
    elif s <= 30:
        return 'medium'
    elif s <= 90:
        return 'large'
    else:
        return 'extra-large'
for t in range(1, 10):
    train_df['time_diff_' + str(t)] = ((train_df['time' + str(t+1)] - train_df['time' + str(t)]) / np.timedelta64(1, 's')).apply(est_session_length)
train_df['time_diff_10'] = None
train_df[time_diff + ['time_diff_10']].head()
sites_time_diff = np.array([['site%s' % i,'time_diff_%s' % i] for i in range(1, 9)]).flatten()
train_df[sites_time_diff]
# Create a separate dataframe where we will work with timestamps
time_df = pd.DataFrame(index=train_df.index)
time_df['target'] = train_df['target']
# Find sessions' starting and ending
time_df['min'] = train_df[times].min(axis=1)
time_df['max'] = train_df[times].max(axis=1)
# Calculate sessions' duration in seconds
time_df['seconds'] = (time_df['max'] - time_df['min']) / np.timedelta64(1, 's')
time_df.head()
train_df[times].max(axis=1).apply(lambda d: d.hour).values.reshape(-1,1)
###Output
_____no_output_____
###Markdown
In order to perform the next task, generate descriptive statistics as you did in the first assignment.*In the next question, we are using the notion of "approximately the same". To be strict, let's define it: $a$ is approximately the same as $b$ ($a \approx b $) if their difference is less than or equal to 5% of the maximum between $a$ and $b$, i.e. $a \approx b \leftrightarrow \frac{|a-b|}{max(a,b)} \leq 0.05$.* 2. Select all correct statements:*For discussions, please stick to [ODS Slack](https://opendatascience.slack.com/), channel mlcourse_ai, pinned thread __a2_q2__*- on average, Alice's session is shorter than that of other users- more than 1% of all sessions in the dataset belong to Alice- minimum and maximum durations of Alice's and other users' sessions are approximately the same- standard deviation of Alice's sessions duration is approximately the same as for non-Alice's sessions- less than a quarter of Alice's sessions are greater than or equal to 40 seconds
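As a small aid (not part of the original assignment), the definition above can be written as a one-line helper; a minimal sketch, assuming both values are positive durations:

```python
def approx_same(a, b, tol=0.05):
    """Check the 'approximately the same' relation defined above: |a - b| / max(a, b) <= tol."""
    return abs(a - b) / max(a, b) <= tol
```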
###Code
# Your code here
def greater_40_percent(x):
    # share of sessions with duration >= 40 seconds
    return (x >= 40).mean()
#on average, Alice's session is shorter than that of other users - yes
#more than 1% of all sessions in the dataset belong to Alice - No
#minimum and maximum durations of Alice's and other users' sessions are approximately the same - Yes
#standard deviation of Alice's sessions duration is approximately the same as for non-Alice's sessions - No
#less than a quarter of Alice's sessions are greater than or equal to 40 seconds - Yes
time_df.groupby(['target']).agg({'seconds': [np.mean, 'count', 'min', 'max', np.std, greater_40_percent]})
###Output
_____no_output_____
###Markdown
In order to train our first model, we need to prepare the data. First of all, exclude the target variable from the training set. Now both the training and test sets have the same number of columns, so we can concatenate them into one dataframe. Thus, all transformations will be performed simultaneously on both the training and test data sets. On the one hand, this guarantees that both data sets share a single feature space (you don't have to worry that you forgot to transform a feature in one of the data sets). On the other hand, processing time will increase. For enormously large data sets it might turn out to be impossible to transform both data sets simultaneously (and sometimes you have to split your transformations into several stages, separately for the train and test sets). In our case, with this particular data set, we are going to perform all the transformations on the whole united dataframe at once, and before training the model or making predictions we will just take the appropriate part of it.
###Code
# Our target variable
y_train = train_df['target']
# United dataframe of the initial data
full_df = pd.concat([train_df.drop('target', axis=1), test_df])
# Index to split the training and test data sets
idx_split = train_df.shape[0]
###Output
_____no_output_____
###Markdown
For the very basic model, we will use only the visited websites in the session (but we will not take into account timestamp features). The point behind this data selection is: *Alice has her favorite sites, and the more often you see these sites in the session, the higher the probability that this is Alice's session, and vice versa.* Let us prepare the data: we will take only the features `site1, site2, ... , site10` from the whole dataframe. Keep in mind that missing values were replaced with zero. Here is what the first rows of the dataframe look like:
###Code
# Dataframe with indices of visited websites in session
full_sites = full_df[sites]
full_sites.head()
###Output
_____no_output_____
###Markdown
Sessions are sequences of website indices, and data in this representation is useless for machine learning methods (just think what would happen if we swapped the ids of all websites). According to our hypothesis (Alice has favorite websites), we need to transform this dataframe so that each website has a corresponding feature (column) whose value equals the number of visits to that website within the session. It can be done in two lines:
###Code
# sequence of indices
sites_flatten = full_sites.values.flatten()
# and the matrix we are looking for
# (make sure you understand which of the `csr_matrix` constructors is used here)
# a further toy example will help you with it
full_sites_sparse = csr_matrix(([1] * sites_flatten.shape[0],
sites_flatten,
range(0, sites_flatten.shape[0] + 10, 10)))[:, 1:]
full_sites_sparse.shape
336358*48371/1e9
###Output
_____no_output_____
###Markdown
If you understand what just happened here, then you can skip the next passage (perhaps you can handle logistic regression too?). If not, then let us figure it out. Important detour 1: Sparse Matrices Let us estimate how much memory it would take to store our data in the example above. Our united dataframe contains 336 thousand samples with 48 thousand integer features each. It's easy to calculate the required amount of memory, roughly:$$336\ K * 48\ K * 8\ bytes \approx 16 * 10^9 * 8\ bytes = 128\ GB,$$(that's the [exact](http://www.wolframalpha.com/input/?i=336358*48371*8+bytes) value). Obviously, ordinary mortals have no such volumes (strictly speaking, Python may allow you to create such a matrix, but it will not be easy to do anything with it). The interesting fact is that most of the elements of our matrix are zeros. If we count the non-zero elements, there are only about 1.8 million of them, i.e. only about 0.01% of all matrix elements. Such a matrix, where most elements are zeros, is called sparse, and the ratio between the number of zero elements and the total number of elements is called the sparseness of the matrix. To work with such matrices you can use the `scipy.sparse` library; check the [documentation](https://docs.scipy.org/doc/scipy-0.18.1/reference/sparse.html) to understand what the possible types of sparse matrices are, how to work with them and in which cases their usage is most effective. You can learn how they are arranged, for example, in the Wikipedia [article](https://en.wikipedia.org/wiki/Sparse_matrix). Note that a sparse matrix stores only non-zero elements, and you can get the allocated memory size like this (significant memory savings are obvious):
###Code
# How much memory does a sparse matrix occupy?
print('{0} elements * {1} bytes = {2} bytes'.format(full_sites_sparse.count_nonzero(), 8,
full_sites_sparse.count_nonzero() * 8))
# Or just like this:
print('sparse_matrix_size = {0} bytes'.format(full_sites_sparse.data.nbytes))
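# Optional check (not in the original code): the fraction of non-zero elements,
# i.e. the actual density of the matrix -- only about 0.01%
density = full_sites_sparse.nnz / (full_sites_sparse.shape[0] * full_sites_sparse.shape[1])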
###Output
1866898 elements * 8 bytes = 14935184 bytes
sparse_matrix_size = 7467592 bytes
###Markdown
Let us explore how the matrix with the websites has been formed using a mini example. Suppose we have the following table with user sessions:| id | site1 | site2 | site3 ||---|---|---|---|| 1 | 1 | 0 | 0 || 2 | 1 | 3 | 1 || 3 | 2 | 3 | 4 |There are 3 sessions, and no more than 3 websites in each. Users visited four different sites in total (there are numbers from 1 to 4 in the table cells). And let us assume that the mapping is: 1. vk.com 2. habrahabr.ru 3. yandex.ru 4. ods.aiIf the user has visited less than 3 websites during the session, the last few values will be zero. We want to convert the original dataframe in a way that each session has a corresponding row which shows the number of visits to each particular site. I.e. we want to transform the previous table into the following form:| id | vk.com | habrahabr.ru | yandex.ru | ods.ai ||---|---|---|---|---|| 1 | 1 | 0 | 0 | 0 || 2 | 2 | 0 | 1 | 0 || 3 | 0 | 1 | 1 | 1 |To do this, use the constructor: `csr_matrix ((data, indices, indptr))` and create a frequency table (see examples, code and comments on the links above to see how it works). Here we set all the parameters explicitly for greater clarity:
###Code
# data, create the list of ones, length of which equal to the number of elements in the initial dataframe (9)
# By summing the number of ones in the cell, we get the frequency,
# number of visits to a particular site per session
data = [1] * 9
# To do this, you need to correctly distribute the ones in cells
# Indices - website ids, i.e. columns of a new matrix. We will sum ones up grouping them by sessions (ids)
indices = [1, 0, 0, 1, 3, 1, 2, 3, 4]
# Indices for the division into rows (sessions)
# For example, line 0 is the elements between the indices [0; 3) - the rightmost value is not included
# Line 1 is the elements between the indices [3; 6)
# Line 2 is the elements between the indices [6; 9)
indptr = [0, 3, 6, 9]
# Aggregate these three variables into a tuple and compose a matrix
# To display this matrix on the screen transform it into the usual "dense" matrix
csr_matrix((data, indices, indptr)).todense()
###Output
_____no_output_____
###Markdown
As you might have noticed, the resulting matrix has not four columns (corresponding to the number of different websites) but five. A zero column has been added, which reflects sessions shorter than the maximum length (three websites in our mini example). This column is redundant and should be removed from the matrix (do that yourself). 3. What is the sparseness of the matrix in our small example?*For discussions, please stick to [ODS Slack](https://opendatascience.slack.com/), channel mlcourse_ai, pinned thread __a2_q3__*- 42%- 47%- 50%- 53%
###Code
# 50%
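# A small numerical sketch (assumption: sparseness = share of zero elements),
# just to verify the answer for the toy matrix after dropping the excessive zero column
toy = csr_matrix((data, indices, indptr)).todense()[:, 1:]
sparseness = (toy == 0).sum() / toy.size   # -> 0.5, i.e. 50%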
###Output
_____no_output_____
###Markdown
Another benefit of using sparse matrices is that there are special implementations of both matrix operations and machine learning algorithms for them, which sometimes allows you to significantly accelerate operations thanks to the peculiarities of the data structure. This applies to logistic regression as well. Now everything is ready to build our first model. 3. Training the first model So, we have an algorithm and data for it. Let us build our first model, using the [logistic regression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) implementation from `sklearn` with default parameters. We will use the first 90% of the data for training (the training data set is sorted by time), and the remaining 10% for validation. Let's write a simple function that returns the quality of the model and then train our first classifier:
###Code
def get_auc_lr_valid(X, y, C=1.0, seed=17, ratio = 0.9):
# Split the data into the training and validation sets
idx = int(round(X.shape[0] * ratio))
# Classifier training
lr = LogisticRegression(C=C, random_state=seed, solver='liblinear').fit(X[:idx, :], y[:idx])
# Prediction for validation set
y_pred = lr.predict_proba(X[idx:, :])[:, 1]
# Calculate the quality
score = roc_auc_score(y[idx:], y_pred)
return score
%%time
# Select the training set from the united dataframe (where we have the answers)
X_train = full_sites_sparse[:idx_split, :]
# Calculate metric on the validation set
print(get_auc_lr_valid(X_train, y_train))
###Output
0.9195248606340787
Wall time: 4.76 s
###Markdown
The first model demonstrated the quality of 0.92 on the validation set. Let's take it as the first baseline and starting point. To make a prediction on the test data set **we need to train the model again on the entire training data set** (until this moment, our model used only part of the data for training), which will increase its generalizing ability:
###Code
# Function for writing predictions to a file
def write_to_submission_file(predicted_labels, out_file,
target='target', index_label="session_id"):
predicted_df = pd.DataFrame(predicted_labels,
index = np.arange(1, predicted_labels.shape[0] + 1),
columns=[target])
predicted_df.to_csv(out_file, index_label=index_label)
# Train the model on the whole training data set
# Use random_state=17 for repeatability
# Parameter C=1 by default, but here we set it explicitly
lr = LogisticRegression(C=1.0, random_state=17, solver='liblinear').fit(X_train, y_train)
# Make a prediction for test data set
X_test = full_sites_sparse[idx_split:,:]
y_test = lr.predict_proba(X_test)[:, 1]
# Write it to the file which could be submitted
write_to_submission_file(y_test, 'baseline_1.csv')
###Output
_____no_output_____
###Markdown
If you follow these steps and upload the answer to the competition [page](https://inclass.kaggle.com/c/catch-me-if-you-can-intruder-detection-through-webpage-session-tracking2), you will get `ROC AUC = 0.90812` on the public leaderboard ("A2 baseline 1"). 4. Model Improvement: Feature EngineeringNow we are going to try to improve the quality of our model by adding new features to the data. But first, answer the following question: 4. What years are present in the training and test datasets, if united?*For discussions, please stick to [ODS Slack](https://opendatascience.slack.com/), channel mlcourse_ai, pinned thread __a2_q4__*- 13 and 14- 2012 and 2013- 2013 and 2014- 2014 and 2015
###Code
# 2013 and 2014
time_full_df = pd.DataFrame(index=full_df.index)
time_full_df['min'] = full_df[times].min(axis=1)
time_full_df['max'] = full_df[times].max(axis=1)
# Calculate sessions' duration in seconds
time_full_df['seconds'] = (time_full_df['max'] - time_full_df['min']) / np.timedelta64(1, 's')
time_full_df['max'].describe()
###Output
_____no_output_____
###Markdown
Create a feature that will be a number in YYYYMM format from the date when the session was held, for example 201407 -- year 2014 and 7th month. Thus, we will take into account the monthly [linear trend](http://people.duke.edu/~rnau/411trend.htm) for the entire period of the data provided.
###Code
# Dataframe for new features
full_new_feat = pd.DataFrame(index=full_df.index)
# Add start_month feature
full_new_feat['start_month'] = full_df['time1'].apply(lambda ts:
100 * ts.year + ts.month).astype('float64')
full_new_feat.iloc[:5]
###Output
_____no_output_____
###Markdown
5. Plot the graph of the number of Alice sessions versus the new feature, start_month. Choose the correct statement:*For discussions, please stick to [ODS Slack](https://opendatascience.slack.com/), channel mlcourse_ai, pinned thread __a2_q5__*- Alice wasn't online at all for the entire period- From the beginning of 2013 to mid-2014, the number of Alice's sessions per month decreased- The number of Alice's sessions per month is generally constant for the entire period- From the beginning of 2013 to mid-2014, the number of Alice's sessions per month increased*Hint: the graph will be more explicit if you treat `start_month` as a categorical ordinal variable*.
###Code
#Alice wasn't online at all for the entire period - No
#From the beginning of 2013 to mid-2014, the number of Alice's sessions per month decreased - No
#The number of Alice's sessions per month is generally constant for the entire period - No
#From the beginning of 2013 to mid-2014, the number of Alice's sessions per month increased - Yes
# Dataframe for new features
train_new_feat = pd.DataFrame(index=train_df.index)
# Add start_month feature
train_new_feat['start_month'] = train_df['time1'].apply(lambda ts:
100 * ts.year + ts.month).astype('float64')
train_new_feat['target'] = train_df['target']
# Your code is here
train_new_feat.query('target == 1').query('start_month < 201401')['start_month'].hist(bins = 12)
train_new_feat.query('target == 1').query('start_month >= 201401')['start_month'].hist(bins = 12)
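# An alternative sketch (not required by the task): treat start_month as a categorical
# ordinal variable, as the hint suggests, and plot monthly Alice session counts as a bar chart
plt.figure()
train_new_feat.query('target == 1')['start_month'].value_counts().sort_index().plot(kind='bar')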
###Output
_____no_output_____
###Markdown
In this way, we have an illustration and thoughts about the usefulness of the new feature, add it to the training sample and check the quality of the new model:
###Code
# Add the new feature to the sparse matrix
tmp = full_new_feat[['start_month']].values
X_train = csr_matrix(hstack([full_sites_sparse[:idx_split,:], tmp[:idx_split,:]]))
# Compute the metric on the validation set
print(get_auc_lr_valid(X_train, y_train))
###Output
0.7508354860175162
###Markdown
The quality of the model has decreased significantly. We added a feature that definitely seemed useful to us, but using it only made the model worse. Why did that happen? Important detour 2: is it necessary to scale features? Here we give an intuitive reasoning (a rigorous mathematical justification for one aspect or another of linear models can easily be found on the internet). Consider the features more closely: those that correspond to the number of visits to a particular website per session vary from 0 to 10. The feature `start_month` has a completely different range: from 201301 to 201412, which means the contribution of this variable is significantly greater than that of the others. It would seem that this problem could be avoided by giving this feature a smaller weight in the linear combination, but here logistic regression with regularization is used (by default, this parameter is `C = 1`), which penalizes the model more strongly the larger its weights are. Therefore, for linear methods with regularization, it is recommended to convert features to the same scale (you can read more about regularization, for example, [here](https://habrahabr.ru/company/ods/blog/322076/)). One way to do this is standardization: for each observation you need to subtract the average value of the feature and divide this difference by the standard deviation:$$ x^{*}_{i} = \dfrac{x_{i} - \mu_x}{\sigma_x}$$The following practical tips can be given:- It is recommended to scale features if they have essentially different ranges or different units of measurement (for example, the country's population is indicated in units, and the country's GNP in trillions)- Scale features if you do not have a reason/expert opinion to give a greater weight to any of them- Scaling can be excessive if the ranges of some of your features differ from each other, but they are in the same system of units (for example, the proportion of middle-aged people and people over 80 among the entire population)- If you want to get an interpretable model, then build a model without regularization and scaling (most likely, its quality will be worse)- Binary features (which take only values of 0 or 1) are usually left without conversion, (but)- If the quality of the model is crucial, try different options and select the one where the quality is better Getting back to `start_month`, let us rescale the new feature and train the model again. This time the quality has increased:
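As a quick sanity check (not part of the assignment), the formula above gives exactly what `StandardScaler` computes; a minimal sketch:

```python
# Standardize start_month "by hand" and compare with StandardScaler
x = full_new_feat['start_month'].values
manual = (x - x.mean()) / x.std()
scaled = StandardScaler().fit_transform(full_new_feat[['start_month']]).ravel()
print(np.allclose(manual, scaled))  # expected: True
```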
###Code
# Add the new standardized feature to the sparse matrix
tmp = StandardScaler().fit_transform(full_new_feat[['start_month']])
X_train = csr_matrix(hstack([full_sites_sparse[:idx_split,:], tmp[:idx_split,:]]))
# Compute metric on the validation set
print(get_auc_lr_valid(X_train, y_train))
###Output
0.9196993699549295
###Markdown
6. Add to the training set a new feature "n_unique_sites" – the number of the unique web-sites in a session. Calculate how the quality on the validation set has changed*For discussions, please stick to [ODS Slack](https://opendatascience.slack.com/), channel mlcourse_ai, pinned thread __a2_q6__*- It has decreased. It is better not to add a new feature. <- this!- It has not changed.- It has decreased. The new feature should be scaled.- I am confused, and I do not know if it's necessary to scale a new feature.*Tips: use the nunique() function from `pandas`. Do not forget to include the start_month in the set. Will you scale a new feature? Why?*
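As the tip suggests, this can also be done without an explicit loop; a possible vectorized sketch (zeros, which stand for missing sites, are masked out so they are not counted):

```python
# Count unique non-zero site ids per session
n_unique = full_df[sites].where(full_df[sites] != 0).nunique(axis=1)
```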
###Code
full_sites.iloc[:10]
def count_not_zeros(x):
unique = set(x)
if 0 in unique:
unique.discard(0)
return len(unique)
# Your code is here
np.array([count_not_zeros(x) for x in full_sites.values])[0:10]
full_new_feat['n_unique_sites'] = np.array([count_not_zeros(x) for x in full_sites.values])
full_new_feat[['n_unique_sites']].values[:idx_split,:]
# Add the new standardized feature to the sparse matrix
tmp = StandardScaler().fit_transform(full_new_feat[['start_month', 'n_unique_sites']])
X_train = csr_matrix(hstack([full_sites_sparse[:idx_split,:], tmp[:idx_split,:]]))
# Compute metric on the validation set
print(get_auc_lr_valid(X_train, y_train))
###Output
C:\Users\mi\Anaconda3\lib\site-packages\sklearn\preprocessing\data.py:625: DataConversionWarning: Data with input dtype int32, float64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
C:\Users\mi\Anaconda3\lib\site-packages\sklearn\base.py:462: DataConversionWarning: Data with input dtype int32, float64 were all converted to float64 by StandardScaler.
return self.fit(X, **fit_params).transform(X)
###Markdown
So, the new feature has slightly decreased the quality, so we will not use it. Nevertheless, do not rush to throw features out because they haven't performed well. They can be useful in a combination with other features (for example, when a new feature is a ratio or a product of two others). 7. Add two new features: start_hour and morning. Calculate the metric. Which of these features gives an improvement?The `start_hour` feature is the hour at which the session started (from 0 to 23), and the binary feature `morning` is equal to 1 if the session started in the morning and 0 if the session started later (we assume that morning means `start_hour` is equal to 11 or less).Will you scale the new features? Make your assumptions and test them in practice.*For discussions, please stick to [ODS Slack](https://opendatascience.slack.com/), channel mlcourse_ai, pinned thread __a2_q7__*- None of the features gave an improvement :(- `start_hour` feature gave an improvement, and `morning` did not- `morning` feature gave an improvement, and `start_hour` did not- Both features gave an improvement*Tip: find suitable functions for working with time series data in [documentation](http://pandas.pydata.org/pandas-docs/stable/api.html). Do not forget to include the `start_month` feature.*
###Code
full_new_feat['start_hour'] = full_df['time1'].apply(lambda ts: ts.hour).astype('float64')
full_new_feat['morning'] = full_df['time1'].apply(lambda ts: ts.hour <= 11).astype('float64')  # morning = start_hour of 11 or less, as defined in the task
tmp = StandardScaler().fit_transform(full_new_feat[['start_month', 'start_hour', 'morning']])
X_train = csr_matrix(hstack([full_sites_sparse[:idx_split,:], tmp[:idx_split,:]]))
# Compute metric on the validation set
print(get_auc_lr_valid(X_train, y_train))
# Both features gave an improvement
###Output
0.9591528176311175
###Markdown
5. Regularization and Parameter TuningWe have introduced features that improve the quality of our model in comparison with the first baseline. Can we do even better? After we have changed the training and test sets, it almost always makes sense to search for the optimal hyperparameters - the parameters of the model that do not change during training.For example, in week 3, you learned that, in decision trees, the depth of the tree is a hyperparameter, but the feature by which splitting occurs and its threshold is not. In the logistic regression that we use, the weights of each feature are changing, and we find their optimal values during training; meanwhile, the regularization parameter remains constant. This is the hyperparameter that we are going to optimize now.Calculate the quality on a validation set with a regularization parameter, which is equal to 1 by default:
###Code
# Compose the training set
tmp_scaled = StandardScaler().fit_transform(full_new_feat[['start_month',
'start_hour',
'morning']])
X_train = csr_matrix(hstack([full_sites_sparse[:idx_split,:],
tmp_scaled[:idx_split,:]]))
# Capture the quality with default parameters
score_C_1 = get_auc_lr_valid(X_train, y_train)
print(score_C_1)
###Output
0.9591528176311175
###Markdown
We will try to beat this result by optimizing the regularization parameter. We will take a list of possible values of C and calculate the quality metric on the validation set for each of C-values:
###Code
from tqdm import tqdm
# List of possible C-values
Cs = np.logspace(-3, 1, 10)
scores = []
for C in tqdm(Cs):
scores.append(get_auc_lr_valid(X_train, y_train, C=C))
###Output
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:46<00:00, 9.09s/it]
###Markdown
Plot the graph of the quality metric (AUC-ROC) versus the value of the regularization parameter. The value of quality metric corresponding to the default value of C=1 is represented by a horizontal dotted line:
###Code
plt.plot(Cs, scores, 'ro-')
plt.xscale('log')
plt.xlabel('C')
plt.ylabel('AUC-ROC')
plt.title('Regularization Parameter Tuning')
# horizontal line -- model quality with default C value
plt.axhline(y=score_C_1, linewidth=.5, color='b', linestyle='dashed')
plt.show()
###Output
_____no_output_____
###Markdown
8. What is the value of parameter C (if rounded to 2 decimals) that corresponds to the highest model quality?*For discussions, please stick to [ODS Slack](https://opendatascience.slack.com/), channel mlcourse_ai, pinned thread __a2_q8__*- 0.17- 0.46- 1.29- 3.14
###Code
# 0.17
print(Cs, scores)
###Output
[1.00000000e-03 2.78255940e-03 7.74263683e-03 2.15443469e-02
5.99484250e-02 1.66810054e-01 4.64158883e-01 1.29154967e+00
3.59381366e+00 1.00000000e+01] [0.8229644453864324, 0.8965353710466695, 0.9390416751204054, 0.9563605175378849, 0.9606926057562715, 0.9612125106879411, 0.960325019081296, 0.9586717093218169, 0.9557577357747731, 0.9513242026916705]
###Markdown
For the last task in this assignment: train the model using the optimal regularization parameter you found (do not round up to two digits like in the last question). If you do everything correctly and submit your solution, you should see `ROC AUC = 0.92784` on the public leaderboard ("A2 baseline 2"):
###Code
# Prepare the training and test data
tmp_scaled = StandardScaler().fit_transform(full_new_feat[['start_month', 'start_hour',
'morning']])
X_train = csr_matrix(hstack([full_sites_sparse[:idx_split,:],
tmp_scaled[:idx_split,:]]))
X_test = csr_matrix(hstack([full_sites_sparse[idx_split:,:],
tmp_scaled[idx_split:,:]]))
# Train the model on the whole training data set using the optimal regularization parameter
C_optimal = Cs[np.argmax(scores)]
lr = LogisticRegression(C=C_optimal, random_state=17, solver='liblinear').fit(X_train, y_train)
# Make a prediction for the test set
y_test = lr.predict_proba(X_test)[:, 1]
# Write it to the submission file
write_to_submission_file(y_test, 'baseline_2.csv')
###Output
_____no_output_____ |
source/GeoSegment_Partitioning_SRP.ipynb | ###Markdown
Section one: importing necessary packages and functions
###Code
import pandas as pd
import geopandas as gpd
from geopandas.tools import geocode
import geoplot  # needed below for geoplot.choropleth
import numpy as np
import scipy.stats as stats
import scipy
import shapely
from shapely import speedups
speedups.enable()  # enable shapely speedups (the bare attribute only reports whether they are on)
import seaborn as sns
import matplotlib
from matplotlib import pyplot as plt
matplotlib.rcParams.update({'font.size': 20})
import get_geodata
from get_geodata import get_gdf
from get_geodata import get_census_bounds
from get_geodata import get_zipcode_bounds
census_bounds = get_census_bounds()
census_bounds.head()
zip_bounds = get_zipcode_bounds()
zip_bounds.head()
gdf_15 = get_gdf(15)
gdf_15.head()
gdf_15.plot(figsize = (15,10))
plt.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Section two: segmenting street data by census tract 2a: adding census tract info to street info This first example is the simplest way to select a specific tract from the census bounds. Note at this point we are only using the census data and not the street data. For the example I've selected the tract where the University is located.
###Code
census_selection = census_bounds.loc[census_bounds['NAME10'] == '53.02']
census_selection
###Output
_____no_output_____
###Markdown
The following is just a plot of all of the census tracts in gray with a plot of the selected census tract overlaid in red.
###Code
fig, ax = plt.subplots(figsize = (8, 16))
census_bounds.plot(ax=ax, facecolor='gray')
census_selection.plot(ax=ax, facecolor='red')
###Output
_____no_output_____
###Markdown
Now we can start to segment the street data by census tract. This first method is best for just individual selections and is probably not the most efficient. You can generate a boolean mask using the census selection and then use it to filter the street data.
###Code
Udist_mask = gdf_15.within(census_selection.at[84, 'geometry'])
print(Udist_mask)
Udist_data = gdf_15.loc[Udist_mask]
Udist_data.describe()
Udist_data.plot()
###Output
_____no_output_____
###Markdown
A better way to do this, in my opinion, is to join the census data with the street data - then every street record will have its associated census tract, and you can select specific census tracts by indexing the dataframe. Here is the code to do that:
###Code
tracts_by_street = gpd.sjoin(gdf_15, census_bounds, op='within')
tracts_by_street.head()
university_2015 = tracts_by_street.loc[tracts_by_street['NAME10'] == '53.02']
university_2015
university_2015.plot()
tracts_by_street_diss = tracts_by_street.dissolve(by='NAME10')
###Output
_____no_output_____
###Markdown
2b: Aggregating street info by census tract Finally, we can aggregate the street info by census tract for better visualization.
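The cells below actually aggregate by zip code; for completeness, a possible sketch of the same aggregation per census tract, reusing the joined dataframe from section 2a (the AAWDT traffic column comes from the street data):

```python
# Possible sketch: total traffic (AAWDT) per census tract, dissolved on the tract name
traffic_by_tract = tracts_by_street[['NAME10', 'AAWDT', 'geometry']].dissolve(by='NAME10', aggfunc='sum')
traffic_by_tract['AAWDT'].sort_values(ascending=False).head()
```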
###Code
city_by_zip = gpd.sjoin(zip_bounds, gdf_15, op='intersects')
gdf_15.describe()
city_by_zip.head()
fig, ax = plt.subplots(figsize=(15,10))
zip_bounds.plot(ax=ax, facecolor='none', edgecolor='k')
gdf_15.plot(ax=ax, color='blue')
plt.axis('off')
plt.show()
city_by_zip.plot(edgecolor='white', figsize=(15,10))
plt.axis('off')
plt.show()
sarahs_house = tracts_by_street.loc[tracts_by_street['NAME10'] == '51']  # tracts_by_street is the tract-joined dataframe created above
sarahs_house.head()
sarahs_house.plot()
test = sarahs_house['AAWDT'].sum()
traffic_zones = city_by_zip.dissolve(by='ZIPCODE', aggfunc = sum)
traffic_zones.reset_index(inplace = True)
traffic_zones.describe()
traffic_zones = traffic_zones[['NAME10','geometry', 'AAWDT' ]]
traffic_zones.head()
traffic_zones['NAME10'].value_counts()
test_frame = traffic_zones[traffic_zones['NAME10'] == '51']
test_frame['AAWDT'] == test
ave_traffic = traffic_zones['AAWDT']
#scheme = mapclassify.NaturalBreaks(ave_traffic, k=10)
geoplot.choropleth(traffic_zones, hue=ave_traffic, cmap='rainbow', legend = True)
plt.show()
###Output
_____no_output_____ |
HW_task_2.ipynb | ###Markdown
 Initial data analysis and Linear Regression This assignment is dedicated to Linear Regression. By focusing on predicting different features of football players, you will understand the mathematics behind it and see the usefulness of the main data analysis libraries. **Materials**- [Documentation](http://docs.scipy.org/doc/) libraries Numpy and SciPy- [Documentation](http://matplotlib.org/) library Matplotlib - [Documentation](http://pandas.pydata.org/pandas-docs/stable/tutorials.html) library Pandas- [Pandas Cheat Sheet](http://www.analyticsvidhya.com/blog/2015/07/11-steps-perform-data-analysis-pandas-python/)- [Documentation](http://stanford.edu/~mwaskom/software/seaborn/) library Seaborn **Resources**- In this notebook we will use *FIFA 19 complete player dataset* which is taken from [here](https://www.kaggle.com/karangadiya/fifa19) Part 1. Initial data analysis with Pandas Importing libraries.
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import random
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load the data. Table *data.csv* should be in the same directory as this notebook.
###Code
data = pd.read_csv("data.csv", index_col='ID')
###Output
_____no_output_____
###Markdown
The first thing you need to do with a dataframe after loading is to look at the first few records. This way you can make sure that you have parsed it correctly. Moreover, you can get acquainted with the data, look at the features and their types (categorical, numerical, text ...). Then you may check whether the data has missing values inside. Depending on the problem type and the percentage of missing values you can either fill them with some value or drop columns/rows having null values. After that you may want to look closer at some features. You can draw a histogram to determine a feature's distribution (normal, power or some other). Also, with the help of a histogram you can find values which really differ from the rest; we call them **outliers**. Histograms can be plotted with the *hist* method of a Pandas DataFrame.**Example 1** Let's look at the first 5 rows of data using the *head* method of DataFrame data.
###Code
data.head()
###Output
_____no_output_____
###Markdown
Unfortunately the number of columns exceeds the maximum visible default value in Pandas. Use the line below to remove this restriction.
###Code
pd.set_option('display.max_columns', None)
data.head()
###Output
_____no_output_____
###Markdown
Much better now.**Example 2** Print total player number and top-10 columns containing the most number of null values.
###Code
print(f"Total number of players in dataset {data.shape[0]}")
from tabulate import tabulate
top = 10
print(tabulate(
sorted(list(zip(data.columns, data.isnull().sum(), data.isnull().sum() / data.shape[0] * 100)), key=lambda x: -x[2])[:top],
headers=['col_name', 'null_cnt', 'null_perc']))
###Output
col_name null_cnt null_perc
----------- ---------- -----------
Loaned From 16943 93.0576
LS 2085 11.4516
ST 2085 11.4516
RS 2085 11.4516
LW 2085 11.4516
LF 2085 11.4516
CF 2085 11.4516
RF 2085 11.4516
RW 2085 11.4516
LAM 2085 11.4516
###Markdown
**Example 3**. Let's build a histogram of the weight distribution in kilograms from the football player data. Follow these steps:- Extract the weight value from the string (column Weight).- Convert the *Weight* column to float type.- Get rid of null values in the weight column, using the median column value instead.- Convert pounds to kilograms- Finally use the *hist* method of DataFrame *data* with argument *column=Weight_kg* (we look at this feature's distribution)
###Code
print(f"Weight column type is '{data['Weight'].dtype}'")
data['Weight_float'] = data['Weight'].str.extract(r'([0-9]+)lbs').astype(float)
data['Weight_float'] = data['Weight_float'].fillna(data['Weight_float'].median())
POUND_TO_KILO = 0.454
data['Weight_kg'] = data.apply(lambda row: row['Weight_float'] * POUND_TO_KILO, axis=1)
data.hist(column='Weight_kg', bins=30)
plt.show()
###Output
_____no_output_____
###Markdown
**Task 1 (1 point)**. Build a histogram of the height distribution in *meters* from the football player data. Remember that height is in the format *feet*'*inches*. Instead of filling null values with some constant, just drop them. Use *.dropna* for the specified column.
###Code
data = data.dropna(subset=['Height'])
data['Height_meters'] = data.apply(lambda row: int(row['Height'].split("\'")[0]) * 0.3048 \
+ int(row['Height'].split("\'")[1]) * 0.0254,axis=1)
data.hist(column='Height_meters',bins=30)
plt.show()
###Output
_____no_output_____
###Markdown
Effective way to visualize the relationship between two features is to draw a simple _scatter plot_. The position of each dot on the horizontal and vertical axis indicates values for an individual data point. **Example 4.** Visualize the dependence of _Strength_ on _Weight_kg_.
###Code
data.plot.scatter(x='Weight_kg', y='Strength')
plt.title('Dependence of strength on weight')
plt.show()
###Output
_____no_output_____
###Markdown
One more effective way of initial data analysis is to plot pairwise feature dependencies. This simply combines the already considered scatter plot and histogram. We create $m \times m$ plots (_m_ is the number of features) where the pictures on the diagonal are **histograms** and those outside the diagonal are **scatter plots**. This can be done with the help of the _scatter_matrix_ Pandas DataFrame method or _pairplot_ in Seaborn. **Example 5.** Illustrate pairwise dependencies between the _ShortPassing_, _Dribbling_, _BallControl_ and _Strength_ features of football players.
###Code
sns.pairplot(data[['ShortPassing', 'Dribbling', 'BallControl', 'Strength']])
###Output
_____no_output_____
###Markdown
Histograms and scatter plots are good for continuous (numerical) features. Distribution of data by categorical features (that have a fixed number of possible values) can be represented with **bar charts**. **Example 6.** Show distribution of players by age groups (under 20 yo. _young_, between 20-30 _mature_, over 30 yo. _masters_)
###Code
data['age_group'] = data.apply(lambda x: 'young' if x['Age'] < 20 else 'mature' if x['Age'] <= 30 else 'masters', axis=1)
distr = data.groupby('age_group').count().max(axis=1)[['young', 'mature', 'masters']]
plt.bar(distr.index, distr.values)
plt.ylabel('Number of players')
plt.title('Distribution of players across age groups')
plt.show()
###Output
_____no_output_____
###Markdown
Quite often it is necessary to explore the distribution of some numerical feature depending on the value of a categorical one. Here comes the _boxplot_ from the Seaborn library, which shows statistics of a numerical feature (median, quartiles) for different values of a categorical feature. A boxplot can also help to detect **outliers** - values that significantly differ from the rest. A more detailed explanation is [here](https://towardsdatascience.com/understanding-boxplots-5e2df7bcbd51). **Example 7.** Show _SprintSpeed_ statistics across different age groups. _Hint_: in order to prevent printing the service information and make our pictures more attractive we can write `;` at the end of the last line.
###Code
sns.boxplot(x='age_group', y='SprintSpeed', data=data);
###Output
_____no_output_____
###Markdown
 Part 2. Minimizing Mean Squared Error. Linear Regression We are going to predict the target numerical variable $y$ for _n_ samples with the help of _m_ features $x_1, x_2, ..., x_m$ under the assumption that a _linear dependence_ exists between the features and the target, i.e.$$\hat{y} = w_0 + w_1 * x_1 + w_2 * x_2 + ... + w_m * x_m$$so that the Mean Squared Error between $y$ and $\hat{y}$ is the lowest possible$$MSE = \frac{1}{n}\sum_{i=1}^n {(y_i - \hat{y}_i)}^2 \rightarrow \min_{w_0, w_1, w_2, ..., w_m}$$where $w_0$ is the "free" weight component called the **intercept** and $(w_1, w_2, ..., w_m)$ is the **vector of coefficients**. Part 2.1 Linear Regression with one variable Just to understand the basic principles, let's try to predict the _BallControl_ score based on the _Dribbling_ score for every player. Simple Linear Regression with one feature.$$BallControl = w_0 + w_1 * Dribbling$$ We are going to do real data science, aren't we? So let us split the available data into train and test samples. We let our model see only the train data, then we can measure its quality on the test sample.
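Returning to the prediction formula above: in vectorized form it is just a matrix-vector product; a tiny illustration with made-up numbers (not part of the task):

```python
# y_hat = w_0 + X @ w for all samples at once
X_demo = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # n=3 samples, m=2 features
w_demo = np.array([0.5, -0.2])                           # coefficients w_1, w_2
w0_demo = 1.0                                            # intercept
y_hat_demo = w0_demo + X_demo @ w_demo
mse_demo = np.mean((np.array([1.5, 2.0, 3.0]) - y_hat_demo) ** 2)  # MSE against a made-up target
```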
###Code
from sklearn.model_selection import train_test_split
data.fillna({'BallControl': data['BallControl'].mean(), 'Dribbling': data['Dribbling'].mean()}, inplace=True)
X_train, X_test, y_train, y_test = train_test_split(data['Dribbling'].values, data['BallControl'].values, train_size=0.8)
X_train = X_train.reshape(-1, 1)
X_test = X_test.reshape(-1, 1)
###Output
_____no_output_____
###Markdown
To illustrate the approach, let's use the Ridge model from sklearn with the _regularization_ parameter alpha=0. What it means and what it is for, we will find out later in this course. But for now I require avoiding regularization by setting the regularization parameter to zero.
###Code
from sklearn.linear_model import Ridge
lr = Ridge(alpha=0)
lr.fit(X=X_train, y=y_train)
print(f'w_0 = {lr.intercept_}, w_1 = {lr.coef_[0]}')
y_pred_train = lr.predict(X_train)
y_pred_test = lr.predict(X_test)
data['predicted_BallControl'] = lr.predict(data['Dribbling'].values.reshape(-1, 1))
data[['Name', 'Dribbling', 'BallControl', 'predicted_BallControl']].head()
###Output
_____no_output_____
###Markdown
Right now we have predictions for the train and test samples. How about measuring the quality of the model? **Task 2 (0.5 point).** Write your own function for MSE calculation using the formula above. Calculate train and test MSE and compare with the built-in method (_sklearn.metrics.mean_squared_error_).
###Code
def mse(y_true, y_pred):
error = ((y_true - y_pred) ** 2).mean()
return error
from sklearn.metrics import mean_squared_error
assert round(mean_squared_error(y_train, y_pred_train), 9) == round(mse(y_train, y_pred_train), 9)
assert round(mean_squared_error(y_test, y_pred_test), 9) == round(mse(y_test, y_pred_test), 9)
print(f'Train MSE {mse(y_train, y_pred_train)}, test MSE {mse(y_test, y_pred_test)}')
###Output
Train MSE 32.628078457371096, test MSE 34.3155824114421
###Markdown
**Task 3 (1.5 points).** Visualize the dependence of **test** _BallControl_ predictions and real _BallControl_ score on _Dribbling_ score. Don't forget to add axis and plot names!
###Code
plt.scatter(data['Dribbling'].values, data['BallControl'].values, label='true_score')
plt.scatter(X_test, y_pred_test, label='predicted_score')
plt.xlabel('Dribbling')
plt.ylabel('BallControl')
plt.legend(loc='upper left')
None
###Output
_____no_output_____
###Markdown
Part 2.2 Linear regression with many variables **Task 4 (5 points).** Implement your own Linear Regression class for any number of input features and settable boolean parameter *fit_intercept*. In this task you will work with _optimize_ module of [_scipy_](https://docs.scipy.org/doc/scipy/reference/) open-source library for mathematics, science, and engineering. You will need a function [_least_squares_](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.least_squares.html) that finds a coefficients for linear regression by minimizing the sum of the squares of the residuals (which is equivalent to MSE minimizing). More information about least squares approach [here](https://en.wikipedia.org/wiki/Least_squares). Even though this function has many parameters, you need only a few of them to complete the task (the rest will be filled in with default values automatically).- **fun** computes a vector of residuals given weights, features and target, we provide you a function template _compute_residuals_- **x0** this is an initial weights vector. You can either pass a vector of zeros[n_features] or fill in randomly.- **args** are fixed arguments to _fun_ function (which we are not going to optimize). In that particular case you will need to pass X and y.You can access optimized weights by accessing the field **.x** of object which returns by this function. !!! IMPORTANT Please complete this assignment **without any cycles**. You may use the standard operations of matrix \ vector multiplication ans different statistic calculation with NumPy. Otherwise, your solution may not go through asserts.
###Code
def compute_residuals(w, X, y):
"""
Compute residuals when predicting y_hat as matrix product of X and transposed w
    :param w: linear regression weights, numpy.ndarray: float64[num_features]
:param X: training features, numpy.ndarray: float64[num_samples, num_features]
:param y: training target, numpy.ndarray: float64[num_samples]
:returns: vector of residuals (y_i_hat - y_i) for each sample_i in X
"""
residuals = (X.dot(w.T) - y)
return residuals
from sklearn.base import BaseEstimator
from sklearn.utils.validation import check_X_y, check_array, check_is_fitted
from scipy.optimize import least_squares
class LinearRegression(BaseEstimator):
def __init__(self, fit_intercept=True):
self.fit_intercept = fit_intercept
def fit(self, X, y):
"""
fit model weights given input features and target
:param X: training features, numpy.ndarray: numeric[num_samples, num_features]
:param y: training target, numpy.ndarray: numeric[num_samples]
:returns: linear predictor with fitted weights so that train MSE is the lowest possible
:note: weights: numpy.ndarray: float64[num_features] stored as class field
"""
# Check that X and y have correct shape
X, y = check_X_y(X, y)
# Save train data information. Necessary for following the uniform API
self.X_ = X
self.y_ = y
self.n_features_in_ = X.shape[1]
# Copy arrays and cast them to uniform type
X_train = X.astype('float64')
y_train = y.astype('float64')
# Add dummy column of ones to X_train if we want to train an intercept - last component of future weight vector
if self.fit_intercept:
X_train = np.column_stack((X_train, np.ones(X_train.shape[0])))
# Your code here.
# Just follow the suggested steps: create initial weights vector,
# apply least_squares optimizer passing the parameters described above
# and finally extract optimized weights.
# Remember: you need to distinguish coefficients from intercept when fit_intercept=True
self.w_ = np.ones(X_train.shape[1]).astype('float64')
res = least_squares(compute_residuals, self.w_, args=[X_train, y_train])
self.coef_ = res.x[:-1] if self.fit_intercept else res.x
self.intercept_ = self.fit_intercept * res.x[-1]
# Return the classifier
return self
def predict(self, X):
# Check is fit had been called
check_is_fitted(self)
# Input validation
X = check_array(X)
return X.dot(self.coef_) + self.intercept_
#Testing area
from sklearn.utils.estimator_checks import check_estimator
from sklearn.linear_model import Ridge
lr = LinearRegression()
ridge = Ridge(alpha=0)
lr_no_intercept = LinearRegression(fit_intercept=False)
ridge_no_intercept = Ridge(alpha=0, fit_intercept=False)
#Check compatibility with Sklearn framework and apply some spesific internal tests
check_estimator(lr)
check_estimator(lr_no_intercept)
#Compare model accuracy with Ridge(0) from Sklearn
data.fillna({'BallControl': data['BallControl'].mean()
, 'Dribbling': data['Dribbling'].mean()
, 'Strength': data['Strength'].mean()}, inplace=True)
X_sample, y_sample = data[['Dribbling', 'Strength']], data['BallControl']
lr.fit(X_sample, y_sample)
ridge.fit(X_sample, y_sample)
assert np.allclose(lr.predict(X_sample), ridge.predict(X_sample), rtol=1e-03), "Your model with intercept not accurate enough!"
lr_no_intercept.fit(X_sample, y_sample)
ridge_no_intercept.fit(X_sample, y_sample)
assert np.allclose(lr_no_intercept.predict(X_sample), ridge_no_intercept.predict(X_sample), rtol=1e-03), "Your model without intercept not accurate enough!"
###Output
_____no_output_____
###Markdown
Let's add more features in order to predict Dribbling score more accurately.
###Code
features = ['BallControl', 'ShortPassing', 'Strength', 'Weight_float', 'Weight_kg']
target = 'Dribbling'
for feat in features:
data.fillna({feat: data[feat].mean()}, inplace=True)
X_train, X_test, y_train, y_test = train_test_split(data[features].values, data[target].values, train_size=0.8, random_state=2)
lr = Ridge(0)
lr.fit(X=X_train, y=y_train)
y_pred_train = lr.predict(X_train)
y_pred_test = lr.predict(X_test)
print(f'Train MSE {mean_squared_error(y_train, y_pred_train)}, test MSE {mean_squared_error(y_test, y_pred_test)}')
print(f'w_0 = {lr.intercept_}, w_1, w_2, w_3, w_4, w_5 = {lr.coef_}')
###Output
w_0 = 11.939009437278315, w_1, w_2, w_3, w_4, w_5 = [ 1.09606899 -0.05070967 -0.12911552 -0.06843351 0.02937158]
###Markdown
That is not ok: the last two weight components look suspicious, and they vary depending on the run! Although the result seems better, our model would behave unexpectedly on patterns in data it has never seen! Large weights and weight instability are a sign of [**overfitting**](https://en.wikipedia.org/wiki/Overfitting). According to the definition it is "_the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit additional data or predict future observations reliably_". But what does it actually mean? Assume that we have a player whose weight in kg was measured with some tiny error, say ±1 g.
###Code
player = data[features + [target]].iloc[0:2]
player['Predicted_dribbling'] = lr.predict(player[features].values)
player.head()
###Output
_____no_output_____
###Markdown
Predictions are pretty good if the data is _pure_. Let's add some noise to _Weight_kg_ feature:
###Code
player['Weight_kg'] = player['Weight_kg'] + [-0.001, 0.001]
player['Predicted_dribbling_with_error'] = lr.predict(player[features].values)
player.head()
###Output
_____no_output_____
###Markdown
The predicted dribbling value has changed significantly! Look at how this tiny **1g** error leads to an extremely large or small dribbling score! The reason behind this strange, unstable behaviour is **collinearity** between the Weight and Weight_kg features, which means that Weight_kg can be linearly predicted from Weight. As a matter of fact they represent the same quantity, just in different scales. **Multicollinearity** describes the more general case, when one feature can be predicted by a linear combination of some other features. Collinearity is closely related to **correlation** - the degree to which a pair of variables are linearly related. Collinearity originates from Linear Algebra and Geometry, whereas Correlation is a term from Statistics. In any case, all three terms refer to **linearly dependent features**, which is really bad for Linear Models. But why is it so bad? The main reason is that Linear Regression tries to capture the contribution of each feature to the target _independently_, which is obviously not possible when features are multicollinear. There are a whole bunch of really interesting thoughts that can help to capture the intuition behind it [here](https://stats.stackexchange.com/questions/1149/is-there-an-intuitive-explanation-why-multicollinearity-is-a-problem-in-linear-r). I'd cite one of the examples provided._Assume that two people collaborated and accomplished a scientific discovery. It is easy to tell their unique contributions (who did what) when the two are totally different persons (one is a theory guy and the other is good at experiments), while it is difficult to distinguish their unique influences (coefficients in regression) when they are twins acting similarly._ There are a few approaches to prevent overfitting and overcome multicollinearity:- Drop features- Combine features- RegularizationRegularization is something we are going to speak about in the next modules. Combining features is problem-specific and could easily trigger a _holy_war_ due to the ambiguity of approaches. Let's focus on the simplest one - drop one of the features from each correlated pair. First we need to find those pairs of features, and the **correlation matrix** comes to the rescue! Each cell in the table shows the correlation between two variables. We use the DataFrame built-in method _corr_ in combination with the seaborn _heatmap_.
###Code
from seaborn import heatmap
heatmap(data[features].corr(method='pearson'), center=0, square=True)
print(data[features].corr(method='pearson'))
plt.show()
features = ['BallControl', 'ShortPassing', 'Strength', 'Weight_kg']
X_train, X_test, y_train, y_test = train_test_split(data[features].values, data[target].values, train_size=0.8, random_state=2)
lr = Ridge(alpha=0)
lr.fit(X=X_train, y=y_train)
player['Predicted_dribbling_with_error'] = lr.predict(player[features].values)
player.head()
###Output
_____no_output_____
###Markdown
Part 2.3 Putting it all together **Task 5 (up to 5 points).** Build a Linear Regression model for _Value_ prediction for every football player and validate it. You **have to** use either your custom Linear Regression class or `sklearn.linear_model.Ridge` with regularization param alpha=0. Steps you need to follow:- Extract the float number from the _Value_ field in the DataFrame (**0.5 points**)- Choose more features that you expect to influence player _Value_ (at least 10)- Plot the feature correlation matrix. (**0.5 points**)- Drop features that are highly correlated with each other (_abs_(corr) > 0.9) one by one until no correlated pairs are left. _Hint_: you may reuse code from Task_9 in HW_1 for automatic correlated pairs selection. (**1.5 points**)- Split the data into train/test with some proportion (**0.5 points**)- Train a model on the train dataset, make predictions for both train and test. (**0.5 points**)- Measure the model quality in terms of MSE on the train and test samples. (**0.5 points**)- Write a short report about the work done. Why did you take these particular features? Can you find a logical explanation for the high correlation of some of your features? Are you satisfied with the quality of the predictions? etc. (**1 point**) **Penalties**- **-1 point** if a different model is used besides the custom Linear Regression or `sklearn.linear_model.Ridge` with regularization param alpha=0- **-0.5 points** if the number of selected features BEFORE removal of linearly dependent ones is less than 10.- **-0.5 points** if linearly dependent features were not removed before training the model.
###Code
data['Value_float'] = data.apply(lambda row: float(row['Value'][1:-1]) * (10**3) if row['Value'][-1] == 'K' \
else float(row['Value'][1:-1]) * (10**6) if row['Value'][-1] == 'M'\
else float(row['Value'][1:]), axis = 1)
#data['Club'].fillna('No club', inplace=True)
data
data = data.dropna(subset=['LS','Contract Valid Until'])
data['Contract Valid Until_float'] = data.apply(lambda row: int(row['Contract Valid Until'][-4:]) \
if not row['Contract Valid Until'][0].isdigit() else\
int(row['Contract Valid Until']), axis=1)
def replace_number(row_name):
global data
global features
data[row_name + '_int'] = data.apply(lambda row: int(row[row_name][:2]),axis=1)
features.append(row_name + '_int')
features = []
replace_number('LS')
replace_number('LM')
#replace_number('CB')
#replace_number('CM')
#replace_number('RWB')
#replace_number('LB')
#replace_number('CDM')
#replace_number('RCM')
#replace_number('CAM')
#replace_number('RCM')
data['Wage_float'] = data.apply(lambda row: float(row['Wage'][1:-1]) * (10**3) if row['Wage'][-1] == 'K' \
else float(row['Wage'][1:]), axis = 1)
target_feature = 'Value_float'
#cat_features = ['Club','Work Rate']
numeric_features = ['Age', 'Overall', 'Potential',
'International Reputation', 'Height_meters','Weight_float',
'Crossing', 'Finishing','HeadingAccuracy','Dribbling', 'ShortPassing',
'Volleys','Curve', 'FKAccuracy', 'LongPassing', 'SprintSpeed',
'BallControl', 'Agility','Reactions',
'Balance', 'ShotPower', 'Jumping', 'Stamina','Strength', 'LongShots',
'Aggression', 'GKReflexes', 'Contract Valid Until_float'] + features
plt.figure(figsize=(30,15))
heatmap(data[numeric_features].corr(method='pearson'), center=0, square=True)
plt.show()
matrix = data[numeric_features].corr(method='pearson')
high_correlated_features = set()
for row in matrix.index:
for col in matrix.index:
if matrix[row][col] > 0.9 and row != col:
high_correlated_features.add((tuple(sorted([row, col])), matrix[row][col]))
for x in high_correlated_features:
print(x)
numeric_features.remove('Dribbling')
numeric_features.remove('BallControl')
#cat_features_data = pd.get_dummies(data[cat_features])
#d = data[numeric_features].join(cat_features_data)
d = data[numeric_features]
X_train, X_test, y_train, y_test = train_test_split(d.values, data[target].values, train_size=0.8, random_state=2)
lr = Ridge(alpha=0)
#lr = LinearRegression(fit_intercept=True)
lr.fit(X=X_train, y=y_train)
y_pred_train = lr.predict(X_train)
y_pred_test = lr.predict(X_test)
print(f'Train MSE {mean_squared_error(y_train, y_pred_train)}, test MSE {mean_squared_error(y_test, y_pred_test)}')
###Output
Train MSE 10.72045309084264, test MSE 11.011434652520963
|
ML5-sLR-Exercise_E5-1a.ipynb | ###Markdown
Simple Linear Regression (sLR) With scikit-learn (Example from lesson ML05). Powered by: Dr. Hermann Völlinger, DHBW Stuttgart (Germany); August 2020 Following ideas from: "Linear Regression in Python" by Mirko Stojiljkovic, 28.4.2020 (see details: https://realpython.com/linear-regression-in-python/what-is-regression) The example is from Lecture: "ML_Concept&Algorithm" (WS2020); Chapter ML5, Exercise E5.1-a "Exam Results" Let’s start with the simplest case, which is simple linear regression. There are five basic steps when you’re implementing linear regression: 1. Import the packages and classes you need. 2. Provide data to work with and eventually do appropriate transformations. 3. Create a regression model and fit it with existing data. 4. Check the results of model fitting to know whether the model is satisfactory. 5. Apply the model for predictions. These steps are more or less general for most of the regression approaches and implementations. Step 1: Import packages and classes. The first step is to import the package numpy and the class LinearRegression from sklearn.linear_model:
###Code
print ("*****************************************************************************")
print ("*** The picture shows the 10 students s1...s10 as points in (x,y)-plane *****")
print ("*** X-axis --> hours of effort for prep. of exam; Y-axis -->score-points ****")
print ("*****************************************************************************")
from IPython.display import Image
Image('test.jpg')
# Step 1: Import packages and classes
import numpy as np
from sklearn.linear_model import LinearRegression
# import time module
import time
###Output
*****************************************************************************
*** The picture shows the 10 students s1...s10 as points in (x,y)-plane *****
*** X-axis --> hours of effort for prep. of exam; Y-axis -->score-points ****
*****************************************************************************
###Markdown
Now, you have all the functionalities you need to implement linear regression. The fundamental data type of NumPy is the array type called numpy.ndarray. The rest of this article uses the term array to refer to instances of the type numpy.ndarray. The class sklearn.linear_model.LinearRegression will be used to perform linear and polynomial regression and make predictions accordingly. Step 2: Provide data. The second step is defining data to work with. The inputs (regressors, 𝑥) and output (predictor, 𝑦) should be arrays (the instances of the class numpy.ndarray) or similar objects. This is the simplest way of providing data for regression:
###Code
# Step 2: Provide data
x = np.array([ 7, 3, 5, 3, 8, 7, 10, 3, 5, 3]).reshape((-1, 1))
y = np.array([41,27,35,26,48,45,46, 27,29,19])
###Output
_____no_output_____
###Markdown
Now, you have two arrays: the input x and output y. You should call .reshape() on x because this array is required to be two-dimensional, or to be more precise, to have one column and as many rows as necessary. That’s exactly what the argument (-1, 1) of .reshape() specifies.
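To make the effect of `.reshape()` concrete, here is a tiny sketch (illustrative values only):
```python
import numpy as np

a = np.array([7, 3, 5])
print(a.shape)                    # (3,)   -> one-dimensional
print(a.reshape((-1, 1)).shape)   # (3, 1) -> one column, as many rows as needed
```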
###Code
print ("This is how x and y look now:")
print("x=",x)
print("y=",y)
###Output
This is how x and y look now:
x= [[ 7]
[ 3]
[ 5]
[ 3]
[ 8]
[ 7]
[10]
[ 3]
[ 5]
[ 3]]
y= [41 27 35 26 48 45 46 27 29 19]
###Markdown
As you can see, x has two dimensions, and x.shape is (10, 1), while y has only a single dimension, and y.shape is (10,). Step 3: Create a model and fit it. The next step is to create a linear regression model and fit it using the existing data. Let’s create an instance of the class LinearRegression, which will represent the regression model:
###Code
model = LinearRegression()
###Output
_____no_output_____
###Markdown
This statement creates the variable model as the instance of LinearRegression. You can provide several optional parameters to LinearRegression: ----> fit_intercept is a Boolean (True by default) that decides whether to calculate the intercept 𝑏₀ (True) or consider it equal to zero (False).----> normalize is a Boolean (False by default) that decides whether to normalize the input variables (True) or not (False).----> copy_X is a Boolean (True by default) that decides whether to copy (True) or overwrite the input variables (False).----> n_jobs is an integer or None (default) and represents the number of jobs used in parallel computation. None usually means one job and -1 to use all processors. This example uses the default values of all parameters. It’s time to start using the model. First, you need to call .fit() on model:
###Code
model.fit(x, y)
###Output
_____no_output_____
###Markdown
With .fit(), you calculate the optimal values of the weights 𝑏₀ and 𝑏₁, using the existing input and output (x and y) as the arguments. In other words, .fit() fits the model. It returns self, which is the variable model itself. That’s why you can replace the last two statements with this one:
###Code
# model = LinearRegression().fit(x, y)
###Output
_____no_output_____
###Markdown
This statement does the same thing as the previous two. It’s just shorter. Step 4: Get results. Once you have your model fitted, you can get the results to check whether the model works satisfactorily and interpret it. You can obtain the coefficient of determination (𝑅²) with .score() called on model:
###Code
r_sq = model.score(x, y)
print('coefficient of determination:', r_sq)
###Output
coefficient of determination: 0.8544449495100991
###Markdown
When you’re applying .score(), the arguments are also the predictor x and regressor y, and the return value is 𝑅². The attributes of model are .intercept_, which represents the coefficient 𝑏₀, and .coef_, which represents 𝑏₁:
###Code
print('intercept:', model.intercept_)
print('slope:', model.coef_)
###Output
intercept: 14.11702127659574
slope: [3.73758865]
###Markdown
The code above illustrates how to get 𝑏₀ and 𝑏₁. You can notice that .intercept_ is a scalar, while .coef_ is an array. The value 𝑏₀ = 14.1170 (approximately) illustrates that your model predicts the response 14.1170 when 𝑥 is zero. The value 𝑏₁ = 3.7376 means that the predicted response rises by 3.7376 when 𝑥 is increased by one. You should notice that you can provide y as a two-dimensional array as well. In this case, you’ll get a similar result. This is how it might look:
###Code
new_model = LinearRegression().fit(x, y.reshape((-1, 1)))
print('intercept:', new_model.intercept_)
print('slope:', new_model.coef_)
###Output
intercept: [14.11702128]
slope: [[3.73758865]]
###Markdown
As you can see, this example is very similar to the previous one, but in this case, .intercept_ is a one-dimensional array with the single element 𝑏₀, and .coef_ is a two-dimensional array with the single element 𝑏₁. Step 5: Predict response. Once there is a satisfactory model, you can use it for predictions with either existing or new data. To obtain the predicted response, use .predict():
###Code
y_pred = model.predict(x)
print('predicted response:', y_pred, sep='\n')
###Output
predicted response:
[40.28014184 25.32978723 32.80496454 25.32978723 44.0177305 40.28014184
51.4929078 25.32978723 32.80496454 25.32978723]
###Markdown
When applying .predict(), you pass the regressor as the argument and get the corresponding predicted response. This is a nearly identical way to predict the response: in this case, you multiply each element of x with model.coef_ and add model.intercept_ to the product. The output here differs from the previous example only in dimensions. The predicted response is now a two-dimensional array, while in the previous case, it had one dimension. If you reduce the number of dimensions of x to one, these two approaches will yield the same result. You can do this by replacing x with x.reshape(-1), x.flatten(), or x.ravel() when multiplying it with model.coef_. In practice, regression models are often applied for forecasts. This means that you can use fitted models to calculate the outputs based on some other, new inputs: `x_new = np.arange(5).reshape((-1, 1)); print(x_new); y_new = model.predict(x_new); print(y_new)` Here .predict() is applied to the new regressor x_new and yields the response y_new. This example conveniently uses arange() from numpy to generate an array with the elements from 0 (inclusive) to 5 (exclusive), that is 0, 1, 2, 3, and 4. You can find more information about LinearRegression on the official documentation page.
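For completeness, the forecasting snippet mentioned above can also be run as its own cell (it reuses the `model` fitted earlier; `x_new` is just an illustrative input):
```python
import numpy as np

x_new = np.arange(5).reshape((-1, 1))   # new inputs 0..4 as a column vector
y_new = model.predict(x_new)
print(y_new)

# equivalent "manual" prediction: multiply by the slope and add the intercept
print(x_new.reshape(-1) * model.coef_ + model.intercept_)
```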
###Code
# print current date and time
print("****** current date and time *******")
print("date and time:",time.strftime("%d.%m.%Y %H:%M:%S"))
print ("end")
###Output
****** current date and time *******
date and time: 29.08.2020 22:28:21
end
|
notebooks/data_wrangling/interface_Spark_with_EC2.ipynb | ###Markdown
Import Dependencies
###Code
# Environment at time of execution
%load_ext watermark
%pylab inline
%watermark -a "Anthony Abercrombie" -d -t -v -p numpy,pandas,matplotlib -g
from __future__ import print_function
import os
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import dotenv
import os
import sys
import dotenv
import subprocess
import glob
from tqdm import tqdm
#File path to get to the project root
PROJ_ROOT = os.path.join(os.path.pardir, os.pardir)
# add local python functions
sys.path.append(os.path.join(PROJ_ROOT, "src"))
#Load AWS keys as environment variables
dotenv_path = os.path.join(PROJ_ROOT, '.env')
dotenv.load_dotenv(dotenv_path)
AWS_ACCESS_KEY = os.environ.get("AWS_ACCESS_KEY")
AWS_SECRET_ACCESS_KEY = os.environ.get("AWS_SECRET_ACCESS_KEY")
SPARK_HOME = os.environ.get("SPARK_HOME")
ec2_keypair = os.environ.get("ec2_keypair")
ec2_keypair_pem = os.environ.get("ec2_keypair_pem")
from __future__ import print_function
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Load the "autoreload" extension
%load_ext autoreload
# always reload modules marked with "%aimport"
%autoreload 1
###Output
_____no_output_____
###Markdown
where is spark-ec2?
###Code
SPARK_HOME
def spinup_spark_ec2(SPARK_HOME, keypair, keyfile, num_slaves, cluster_name):
    bash_command = '{}/ec2/spark-ec2 -k {} -i {} -s {} launch {}'.format(SPARK_HOME, keypair, keyfile, num_slaves, cluster_name)
return bash_command
args = (SPARK_HOME, ec2_keypair, ec2_keypair_pem, 1, 'spark_ec2_cluster')
x = spinup_spark_ec2(*args)
x
x
'{}/bin/spark-ec2 -k {}<keypair> -i {}<key-file> -s {}<num-slaves> launch {}<cluster-name>'
###Output
_____no_output_____
###Markdown
Numerical DataFlow with Spark and Tensorflow Check out tensorframes from Databricks:
```python
df = sqlContext.createDataFrame()
x = tf.placeholder(tf.int32, name='x')
y = tf.placeholder(tf.int32, name='y')
output = tf.add(x, 3*y, name='z')
session = tf.Session()
output_value = session.run(output, {x: 3, y: 5})
output_df = tfs.map_rows(output, df)
output_df.collect()
```
Connect to master node
###Code
def connect_master_node(SPARK_HOME, keypair, keyfile, region,cluster_name):
bash_cmd = '{}/ec2/spark-ec2 -k {} -i {} --region={} login {}'.format(SPARK_HOME, keypair, keyfile, region,cluster_name)
return bash_cmd
args = (SPARK_HOME, ec2_keypair, ec2_keypair_pem, 'us-west-2b', 'spark_ec2_cluster')
y = connect_master_node(*args)
y
###Output
_____no_output_____ |
ANN/Artifical Neural Networks.ipynb | ###Markdown
What We Are Going To Do: We are going to classify images of handwritten digits (MNIST dataset) using a fully-connected neural network. After successful training, our model will be able to guess digits.
###Code
import keras
import matplotlib.pyplot as plt #This package is for plotting
%matplotlib inline
import numpy as np
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Input
from keras.optimizers import SGD
from keras.initializers import RandomNormal
from keras.models import load_model
from sklearn.preprocessing import normalize
###Output
Using TensorFlow backend.
###Markdown
1. Prepare Data: The dataset is loaded in this section.
###Code
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print('train data dim:', x_train.shape)
print(np.max(x_train))
print(np.min(x_train))
rand_num = np.random.randint(len(x_train))
plt.imshow(x_train[rand_num],cmap='gray')
plt.show()
print(y_train[rand_num])
###Output
_____no_output_____
###Markdown
**Our network accepts 1D data, so we flatten each 2D image.**
###Code
temp = len(x_train[0])*len(x_train[0][0])
x_train = np.reshape(x_train,(len(x_train), temp))
x_test = np.reshape(x_test,(len(x_test), temp))
###Output
_____no_output_____
###Markdown
**Normalize data by rescaling them to (0,1)**
###Code
max_pixel = np.max(x_train)  # compute the scale before modifying x_train (255 for MNIST)
x_train = x_train/max_pixel
x_test = x_test/max_pixel
###Output
_____no_output_____
###Markdown
**Convert label arrays to 1-hot representation**
###Code
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
###Output
_____no_output_____
###Markdown
2. Define Model**Add the following layers to the network:*** Hidden Layer 1: Fully Connected + ReLU Activation (e.g. 512 Neurons)* Hidden Layer 2: Fully Connected + ReLU Activation (e.g. 512 Neurons)* Output Layer: Fully Connected + Softmax Activation (e.g. 10 Neurons) classes: [0,1,2,3,4,5,6,7,8,9]
###Code
model = Sequential()
# Hidden Layer1:
model.add(Dense(512, activation='relu',kernel_initializer = RandomNormal(0,0.01), input_shape=(temp,)))
# Hidden Layer2:
model.add(Dense(512, activation='relu',kernel_initializer = RandomNormal(0,0.01)))
# Output Layer1:
model.add(Dense(10, activation='softmax',kernel_initializer = RandomNormal(0,0.01)))
###Output
_____no_output_____
###Markdown
**Determine loss function, optimizer and metrics for the model**
###Code
#the optimizer and its learning rate
sgd = SGD(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
**Print the review of the model**
###Code
model.summary()
# Here we saved the raw model without any training. we will use it later.
model.save('raw_model.h5')
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 512) 401920
_________________________________________________________________
dense_2 (Dense) (None, 512) 262656
_________________________________________________________________
dense_3 (Dense) (None, 10) 5130
=================================================================
Total params: 669,706
Trainable params: 669,706
Non-trainable params: 0
_________________________________________________________________
###Markdown
3. Train And Evaluate Model. **Train model on training data**
###Code
history = model.fit(x_train, y_train, batch_size=32,epochs=3,verbose=1,validation_data=(x_test, y_test),validation_split=0.2)
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/3
60000/60000 [==============================] - 44s 727us/step - loss: 2.0144 - acc: 0.3624 - val_loss: 4.0203 - val_acc: 0.7374
Epoch 2/3
60000/60000 [==============================] - 43s 717us/step - loss: 0.5878 - acc: 0.8289 - val_loss: 2.0092 - val_acc: 0.8729
Epoch 3/3
60000/60000 [==============================] - 37s 613us/step - loss: 0.3910 - acc: 0.8879 - val_loss: 1.6633 - val_acc: 0.8941
###Markdown
**Evaluate model on test data**
###Code
loss = model.evaluate(x=x_test, y=y_test, batch_size=32, verbose=1, sample_weight=None, steps=None)
print(loss)
###Output
10000/10000 [==============================] - 1s 126us/step
[1.6633325996143278, 0.8941]
###Markdown
**Save model**In Keras, you can save the model to an HDF5 file (.h5) and reload it later simply with model.save(filepath) and keras.models.load_model(filepath), respectively. The saved model contains:* the architecture of the model, allowing you to re-create it* the weights of the model* the training configuration (loss, optimizer)* the state of the optimizer, allowing you to resume training exactly where you left off.
###Code
model.save('mlp.h5')
# Delete model to make sure you reload it correctly:
del model
###Output
_____no_output_____
###Markdown
**Load the model and predict the label for a random image in the test set. Verify the predicted label against the true one-hot label.**
###Code
model = load_model('mlp.h5')
a = np.random.randint(len(x_test))
print(model.predict(x_test)[a])
print(y_test[a])
###Output
[0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
###Markdown
**Continue training + Callbacks**
###Code
# We will use two callbacks here: EarlyStopping, CSVLogger (you may add other callbacks to this list)
callback = [keras.callbacks.EarlyStopping(monitor='val_acc', verbose=1, min_delta=0.01, patience = 2, mode = 'max'),
keras.callbacks.CSVLogger('log.csv'),keras.callbacks.BaseLogger(stateful_metrics=None)]
history = model.fit(x_train, y_train,batch_size = 32,epochs = 100,verbose = 1,validation_split = 0.2,callbacks = callback)
plt.figure(figsize=(5,3))
plt.plot(history.epoch,history.history['loss'])
plt.title('loss')
plt.figure(figsize=(5,3))
plt.plot(history.epoch,history.history['acc'])
plt.title('accuracy');
model.evaluate(x=x_test, y=y_test, batch_size=32, verbose=1, sample_weight=None, steps=None)
###Output
10000/10000 [==============================] - 2s 187us/step
###Markdown
4. Extras**Initialization is important!** This time use mean=0 and std=1 for initialization.
###Code
model2 = Sequential()
model2.add(Dense(448, activation='selu',kernel_initializer = RandomNormal(0,1), input_shape=(784,)))
model2.add(Dense(448, activation='selu',kernel_initializer = RandomNormal(0,1)))
model2.add(Dense(10, activation='softmax',kernel_initializer = RandomNormal(0,1)))
model2.compile(loss='categorical_crossentropy', optimizer='Nadam', metrics=['accuracy'])
model2.summary()
model2.save('raw_model2.h5')
history = model2.fit(x_train, y_train, batch_size=64,epochs=5,verbose=1,validation_data=(x_test, y_test),validation_split=0.25)
loss = model2.evaluate(x=x_test, y=y_test, batch_size=32, verbose=1, sample_weight=None, steps=None)
print(loss)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_4 (Dense) (None, 448) 351680
_________________________________________________________________
dense_5 (Dense) (None, 448) 201152
_________________________________________________________________
dense_6 (Dense) (None, 10) 4490
=================================================================
Total params: 557,322
Trainable params: 557,322
Non-trainable params: 0
_________________________________________________________________
Train on 60000 samples, validate on 10000 samples
Epoch 1/5
60000/60000 [==============================] - 41s 686us/step - loss: 12.0418 - acc: 0.2526 - val_loss: 10.7524 - val_acc: 0.3329
Epoch 2/5
60000/60000 [==============================] - 39s 645us/step - loss: 10.8164 - acc: 0.3288 - val_loss: 10.6605 - val_acc: 0.3386
Epoch 3/5
60000/60000 [==============================] - 39s 646us/step - loss: 10.2436 - acc: 0.3644 - val_loss: 9.5516 - val_acc: 0.4074
Epoch 4/5
60000/60000 [==============================] - 38s 630us/step - loss: 9.6859 - acc: 0.3990 - val_loss: 10.1286 - val_acc: 0.3716
Epoch 5/5
60000/60000 [==============================] - 40s 666us/step - loss: 9.5391 - acc: 0.4081 - val_loss: 9.4613 - val_acc: 0.4130
10000/10000 [==============================] - 2s 177us/step
[9.4613220413208, 0.413]
###Markdown
**When I changed the std (from the usual range (0.01, 0.05)) to 1, relu to 'selu', the optimizer to 'Nadam', and decreased the neuron count, as you can see, accuracy went down and the loss got much worse, while training also became computationally heavier because of selu.** Overfitting/Underfitting**Load 'raw_model.h5' and this time use 1 percent of the training data for training, and all test data for validation.**
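The cell below keeps the full training set and uses a 10% validation split; if you want the literal 1%-of-training-data experiment described above, one minimal way to carve out such a subset before calling `fit` could be:
```python
import numpy as np

n_small = int(0.01 * len(x_train))                           # 1% of the training samples
idx = np.random.choice(len(x_train), n_small, replace=False)
x_small, y_small = x_train[idx], y_train[idx]
# then e.g.: model.fit(x_small, y_small, epochs=100, validation_data=(x_test, y_test))
```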
###Code
del model
model = load_model('raw_model.h5')
callback = [keras.callbacks.EarlyStopping(monitor='val_acc', verbose=1, min_delta=0.01, patience = 2, mode = 'max')]
history = model.fit(x_train, y_train,batch_size = 32,epochs = 100,verbose = 1,validation_split = 0.1,callbacks = callback)
plt.figure(figsize=(5,3))
plt.plot(history.epoch,history.history['loss'])
plt.title('loss')
plt.figure(figsize=(5,3))
plt.plot(history.epoch,history.history['acc'])
plt.title('accuracy');
model.evaluate(x=x_test, y=y_test, batch_size=32, verbose=1, sample_weight=None, steps=None)
###Output
10000/10000 [==============================] - 2s 157us/step
|
src/band_structure.ipynb | ###Markdown
4He on Graphene: Band Structure
###Code
import numpy as np
import matplotlib.pyplot as plt
import pickle
import re,glob,os
import dgutils.colors as colortools
import matplotlib as mpl
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
# plot style
plot_style = {'notebook':'../include/notebook.mplstyle','aps':'../include/aps.mplstyle'}
plt.style.reload_library()
plt.style.use(plot_style['aps'])
figsize = plt.rcParams['figure.figsize']
plt.rcParams['text.latex.preamble'] = f'\input{{{os.getcwd()}/../include/texheader}}'
###Output
_____no_output_____
###Markdown
load data
###Code
print(figsize)
GMKG = np.loadtxt('../data/band_structure.txt', delimiter = ',')
GMKG_Length = GMKG[:, 0]
GMKG_Bands = GMKG[:, 1:]
First_band_max = np.ones(len(GMKG_Length))*np.max(GMKG[:, 1])
First_band_min = np.ones(len(GMKG_Length))*np.min(GMKG[:, 1])
GMKG_Length_stretched = 0*GMKG_Length
M_length = GMKG_Length[24]
K_length = GMKG_Length[32]
G_length = GMKG_Length[40]
band_M = GMKG_Bands[24]
band_K = GMKG_Bands[32]
band_G = GMKG_Bands[0]
GMKG_Length_stretched[0:25] = GMKG_Length[0:25]
GMKG_Length_stretched[25:33] = (GMKG_Length[25:33] - M_length)/(K_length-M_length)*M_length/np.sqrt(3) + M_length
GMKG_Length_stretched[33:41] = ((GMKG_Length[33:41] - K_length)/(G_length-K_length)*M_length/np.sqrt(3)*2
+ GMKG_Length_stretched[32])
g1 = [2*np.pi, 0]
g2 = [1*np.pi, np.sqrt(3)*np.pi]
g3 = [-1*np.pi, np.sqrt(3)*np.pi]
g4 = [3*np.pi, np.sqrt(3)*np.pi]
g5 = [0*np.pi, 2*np.sqrt(3)*np.pi]
g6 = [-3*np.pi, np.sqrt(3)*np.pi]
def bs_cosines(x, y, a, b):
    # cosine expansion over the two sets of vectors g1..g3 (weight a) and g4..g6 (weight b)
    why = 4/(2*np.sqrt(3))  # rescale (x, y) before projecting onto the g vectors
r = [x*why, y*why]
first_set = a*(np.cos(np.dot(r, g1)/2)+np.cos(np.dot(r, g2)/2)+np.cos(np.dot(r, g3)/2))
second_set = b*(np.cos(np.dot(r, g4)/2)+np.cos(np.dot(r, g5)/2)+np.cos(np.dot(r, g6)/2))
combined = first_set + second_set
return combined
def bs_mesh(X, Y, a, b):
band_structure = np.zeros((len(X), len(Y)))
for ix in range(len(X)):
for iy in range(len(Y)):
band_structure[ix, iy] = bs_cosines(X[ix], Y[iy], a, b)
return band_structure
print(band_M[0]-band_G[0])
print(band_K[0]-band_G[0])
X = np.linspace(-2,2,100)
Y = np.linspace(-2,2,100)
a = -2.91381348
b = 0.27255
fit_0 = bs_cosines(0, 0, a, b)
fit_s = bs_cosines(0, 1, a, b)
fit_m = bs_cosines(1/np.sqrt(3), 1, a, b)
print(fit_s-fit_0)
print(fit_m-fit_0)
GMKG_cosines = 0*GMKG_Length
for i in range(25):
GMKG_cosines[i] = bs_cosines(0, i/25, a, b)
for i in range(25, 33):
GMKG_cosines[i] = bs_cosines((i-24)/np.sqrt(3)/(33-25), 1, a, b)
for i in range(33, 41):
GMKG_cosines[i] = bs_cosines(1/np.sqrt(3) - (i-32)/np.sqrt(3)/(41-33), 1 - (i-32)/(41-33), a, b)
GMKG_cosines = GMKG_cosines - np.min(GMKG_cosines) + np.min(GMKG_Bands)
###Output
_____no_output_____
###Markdown
Plotting Lowest Band for All Methods
###Code
def hex_to_rgb(value,transmit=None, full=False):
'''Convert a hex color to rgb tuple.'''
value = value.lstrip('#')
lv = len(value)
step = int(lv/3)
scale = 1.0/255.0
if full:
scale = 1
col = tuple(scale*int(value[i:i+step], 16) for i in range(0, lv, step))
if not transmit:
return col
else:
return col + (transmit,)
def get_alpha_hex(value,alpha, real=False):
'''Convert a hex color to an equivalent non-transparent version.'''
#first we get the rgb
rgb = hex_to_rgb(value)
# apply the transparency
target = [alpha*k + (0.999-alpha) for k in rgb]
if not real:
return rgb_to_hex(target)
else:
transparent = str(hex(int(alpha*255)))[-2:]
return value + transparent
def rgb_to_hex(value):
'''Convert a rgb tuple to a hex color string.'''
if value[0] < 1:
scale = 255
else:
scale = 1
rgb = [int(scale*k) for k in value]
return '#%02x%02x%02x' % (rgb[0],rgb[1],rgb[2])
methods = ['Wannier','QMC','DFT','MP2']
colors = ["#d43e4e", "#abdda4", "#3288bc"]
file_names = {'Wannier':'wannierband.txt', 'QMC':'qmcband.txt', 'DFT':'dftband.txt', 'MP2':'mp2band.txt'}
col = {}
col['Wannier'] = '#58595B'
col['QMC'] = colors[0]
col['DFT'] = colors[2]
col['MP2'] = colors[1]
props = {}
props['Wannier'] = {'mfc':colortools.get_alpha_hex(col['Wannier'],0.5,real=True), 'mec':col['Wannier'], 'ms':4, 'label':'Wannier', 'marker':'^', 'mew':0.6,'zorder':-2, 'lw':0}
props['QMC'] = {'mfc':colortools.get_alpha_hex(colors[0],0.5,real=True), 'mec':colors[0], 'ms':4, 'label':'QMC', 'marker':'o', 'mew':0.6,'zorder':2, 'lw':0}
props['DFT'] = {'mfc':colortools.get_alpha_hex(colors[2],0.5,real=True), 'mec':colors[2], 'ms':4, 'label':'DFT', 'marker':'s', 'mew':0.6,'zorder':-2, 'lw':0}
props['MP2'] = {'mfc':colortools.get_alpha_hex(colors[1],0.5,real=True), 'mec':colors[1], 'ms':4, 'label':'MP2', 'marker':'D', 'mew':0.6,'zorder':-2, 'lw':0}
ε,k = {},{}
M,K = {},{}
t = {}
for method in methods:
data = np.loadtxt(open(f'../data/{file_names[method]}', 'r'), delimiter=',', skiprows=0)
k[method] = data[:,0]
ε[method] = data[:,1]
M[method] = np.where(k[method]>2.72)[0][0]
K[method] = np.where(k[method]>4.2)[0][0]
k[method] /= k[method][-1]
t[method] = (ε[method][K[method]]-ε[method][0])/9
method = 'Wannier'
aₒ = 1.42 # C-C lattice spacing in graphene
π = np.pi
# high symmetry points for the graphene BZ
Γp = np.array([0,0])
Kp = np.array([4*π/(3*np.sqrt(3)*aₒ),0])
Mp = np.array([π/(np.sqrt(3)*aₒ),π/(3*aₒ)])
E_spec = np.zeros(10*len(k[method]))
M_idx = int(k[method][M[method]]*len(E_spec))
K_idx = int(k[method][K[method]]*len(E_spec))
G_idx = len(E_spec)
k_spec = np.zeros([len(E_spec),2])
k_spec_frac = np.arange(len(E_spec))/(len(E_spec)-1)
k_spec[:M_idx,0] = np.linspace(0,Mp[0],M_idx)
k_spec[:M_idx,1] = k_spec[:M_idx,0]/np.sqrt(3)
dkx = (Kp[0]-Mp[0])/(K_idx-M_idx)
k_spec[M_idx:K_idx,0] = np.linspace(Mp[0]+dkx,Kp[0],(K_idx-M_idx))
k_spec[M_idx:K_idx,1] = -np.sqrt(3)*k_spec[M_idx:K_idx,0] + 4*π/(3*aₒ)
k_spec[K_idx:,0] = np.linspace(Kp[0],0,G_idx-K_idx)
k_spec[K_idx:,1] = 0.0
def E_tb(k,εₒ,t):
global aₒ
a = np.sqrt(3)*aₒ
return εₒ - 2*t*(np.cos(k[:,0]*a) + 2*np.cos(k[:,0]*a/2)*np.cos(np.sqrt(3)*a*k[:,1]/2))
cmap = plt.cm.get_cmap('viridis')
colors = [cmap(i*0.3) for i in range(3)]
fig,ax = plt.subplots(figsize=figsize)
ax.axvline(GMKG_Length_stretched[24], color = 'black', linewidth = 0.8)
ax.axvline(GMKG_Length_stretched[32], color = 'black', linewidth = 0.8)
#plt.plot(GMKG_Length_stretched, First_band_max, c = 'lightgrey', lw = 0.8)
#plt.plot(GMKG_Length_stretched, First_band_min, c = 'lightgrey', lw = 0.8)
ax.fill_between(GMKG_Length_stretched, First_band_min, First_band_max, color = 'lightgrey')
ax.plot(GMKG_Length_stretched, GMKG_Bands[:, 0], c = 'black', lw = 1)
ax.plot(GMKG_Length_stretched[0:33], GMKG_Bands[0:33, 1], c = colors[0], lw = 1)
ax.plot(GMKG_Length_stretched[0:33], GMKG_Bands[0:33, 2], c = colors[1], lw = 1)
ax.plot(GMKG_Length_stretched[32:], GMKG_Bands[32:, 1], c = colors[1], lw = 1)
ax.plot(GMKG_Length_stretched[32:], GMKG_Bands[32:, 2], c = colors[0], lw = 1)
ax.plot(GMKG_Length_stretched, GMKG_Bands[:, 3], c = colors[2], lw = 1)
ax.plot(GMKG_Length_stretched, GMKG_cosines, c = 'grey', lw = 1, linestyle = "--")
ax.set_xlim(GMKG_Length_stretched[0], GMKG_Length_stretched[-1])
ax.set_xticks([GMKG_Length_stretched[0], GMKG_Length_stretched[24], GMKG_Length_stretched[32],
GMKG_Length_stretched[40]])
ax.set_xticklabels([r'$\Gamma$', r'M', r'K' , r'$\Gamma$'])
ax.tick_params(top=False)
ax.set_ylabel(r'$\alabel{\varepsilon(k)}{\kelvin}$')
from matplotlib.patches import RegularPolygon
# add the BZ
axbz = ax.inset_axes([0.05, 0.65, 0.35,0.35])
hexagon = RegularPolygon((0,0), numVertices=6, radius=Kp[0], edgecolor='k', facecolor='None', lw=0.6)
t2 = mpl.transforms.Affine2D().rotate_deg(-90) + axbz.transData
hexagon.set_transform(t2)
axbz.add_patch(hexagon)
axbz.set_xlim(-2,2)
axbz.set_ylim(-2,2)
axbz.plot([Γp[0],Kp[0]],[Γp[1],Kp[1]], '-', color='grey', lw=0.5, zorder=-10)
axbz.plot([Γp[0],Mp[0]],[Γp[1],Mp[1]], '-', color='grey', lw=0.5, zorder=-10)
axbz.annotate(r"$\Gamma$",zorder=-1,xy=Γp, xycoords='data',xytext=(-5, -2), textcoords='offset points', fontsize=8)
axbz.annotate(r"$K$",zorder=-1,xy=Kp, xycoords='data',xytext=(+1, -2), textcoords='offset points', fontsize=8)
axbz.annotate(r"$M$",zorder=-1,xy=Mp, xycoords='data',xytext=(+1, -1), textcoords='offset points', fontsize=8)
axbz.set_aspect('equal')
axbz.axis('off') ;
plt.savefig('../plots/band_structure.pdf', dpi=300, transparent=False)
plt.savefig('../plots/band_structure.svg', transparent=False)
# # compute the recoil energy
# λ = np.sqrt(3)*aₒ
# λ *= 1.0E-10
# E_R = ħ**2*(π)**2/(2*m_Ne*λ**2)/kB
# t = (E[K_point,0]-E[0,0])/9 * E_R
# # Kinetic energy comes form Wannier code
# K = 6.27*E_R
# # Output results
# print(f'E_R = {E_R:.2f} K')
# print(f't = {t:.3f} K')
# print(f'Kin. Energy = {K:.1f} K')
from matplotlib.patches import RegularPolygon
fig,ax = plt.subplots(figsize=figsize)
method = 'Wannier'
for method in methods:
ax.plot(k[method],ε[method], **props[method])
plt.plot(k_spec_frac,E_tb(k_spec,ε[method][0]+6*t[method],t[method]), color=col[method], lw=0.6, zorder=-10, ls='-')
ax.set_xticks([0,k[method][M[method]],k[method][K[method]],k[method][-1]])
ax.set_xticklabels([r'$\Gamma$', r'$M$', r'$K$', r'$\Gamma$'])
ax.set_ylabel(r'$\alabel{\varepsilon(k)}{\kelvin}$')
ax.set_xlim(k[method][0],k[method][-1])
ax.set_ylim(-178,-143)
#ax.annotate('(b)', xy=(-0.195,1),ha='left', va='top', xycoords='axes fraction')
ax.legend(loc=(0.01,0.61))
# add the BZ
axbz = ax.inset_axes([0.69, 0.68, 0.35,0.35])
hexagon = RegularPolygon((0,0), numVertices=6, radius=Kp[0], edgecolor='k', facecolor='None', lw=0.6)
t2 = mpl.transforms.Affine2D().rotate_deg(-90) + axbz.transData
hexagon.set_transform(t2)
axbz.add_patch(hexagon)
axbz.set_xlim(-2,2)
axbz.set_ylim(-2,2)
axbz.plot([Γp[0],Kp[0]],[Γp[1],Kp[1]], '-', color='grey', lw=0.5, zorder=-10)
axbz.plot([Γp[0],Mp[0]],[Γp[1],Mp[1]], '-', color='grey', lw=0.5, zorder=-10)
axbz.annotate(r"$\Gamma$",zorder=-1,xy=Γp, xycoords='data',xytext=(-5, -2), textcoords='offset points', fontsize=8)
axbz.annotate(r"$K$",zorder=-1,xy=Kp, xycoords='data',xytext=(+1, -2), textcoords='offset points', fontsize=8)
axbz.annotate(r"$M$",zorder=-1,xy=Mp, xycoords='data',xytext=(+1, -1), textcoords='offset points', fontsize=8)
axbz.set_aspect('equal')
axbz.axis('off') ;
plt.savefig('../plots/lowest_bands.pdf')
Kp
fig, ax = plt.subplots(1)
ax.set_aspect('equal')
hexagon = RegularPolygon((0,0), numVertices=6, radius=Kp[0], edgecolor='k', facecolor='None')
t2 = mpl.transforms.Affine2D().rotate_deg(-90) + ax.transData
hexagon.set_transform(t2)
ax.plot([Γp[0],Kp[0]],[Γp[1],Kp[1]], '-', color='grey', lw=1, zorder=-10)
ax.plot([Γp[0],Mp[0]],[Γp[1],Mp[1]], '-', color='grey', lw=1, zorder=-10)
ax.annotate(r"$\Gamma$",zorder=-1,xy=Γp, xycoords='data',xytext=(-9, -9), textcoords='offset points', fontsize=12)
ax.annotate(r"$K$",zorder=-1,xy=Kp, xycoords='data',xytext=(+2, -2), textcoords='offset points', fontsize=12)
ax.annotate(r"$M$",zorder=-1,xy=Mp, xycoords='data',xytext=(+3, -2), textcoords='offset points', fontsize=12)
ax.add_patch(hexagon);
ax.axis('off') ;
plt.autoscale(enable = True)
Kp
###Output
_____no_output_____ |
nbs/old nbs/develop 3.ipynb | ###Markdown
tree variance
###Code
app_train_proc.OWN_CAR_AGE_na.value_counts()
app_train_proc.OCCUPATION_TYPE.value_counts().plot.barh();
learner = SKLearner(RandomForestClassifier())
learner.fit(*ds.trn, *ds.val)
def get_preds(t): return t.predict(ds.x_val)
from concurrent.futures import ProcessPoolExecutor  # needed if not already imported earlier in the notebook
def parallel_trees(m, fn, n_jobs=8): return list(ProcessPoolExecutor(n_jobs).map(fn, m.estimators_))
preds = np.stack(parallel_trees(learner.md, get_preds))
preds[:,0]
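# (sketch) the heading above refers to tree variance: `preds` stacks the per-tree
# predictions with shape [n_trees, n_samples], so the spread of the ensemble's
# predictions for each sample can be summarized across axis 0
preds_mean, preds_std = np.mean(preds, axis=0), np.std(preds, axis=0)
preds_mean[:5], preds_std[:5]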
class TBPreProc:
def __init__(self, *args): self.args = args
def __call__(self, df): return self.func(df, *self.args)
@staticmethod
def func(*args): None
class get_y(TBPreProc):
@staticmethod
def func(df, y_fld):
if y_fld is None: y = None
else:
if not is_numeric_dtype(df[y_fld]): df[y_fld] = df[y_fld].cat.codes
y = df[y_fld].values
df.drop(y_fld, axis=1, inplace=True)
return y
class fill_na(TBPreProc):
@staticmethod
def func(df, na_dict = None):
na_dict = {} if na_dict is None else na_dict.copy()
na_dict_initial = na_dict.copy()
for n,c in df.items(): na_dict = fix_missing(df, c, n, na_dict)
if len(na_dict_initial.keys()) > 0:
df.drop([a + '_na' for a in list(set(na_dict.keys()) - set(na_dict_initial.keys()))], axis=1, inplace=True)
return na_dict
def fix_missing(df, col, name, na_dict):
if is_numeric_dtype(col):
if pd.isnull(col).sum() or (name in na_dict):
df[name+'_na'] = pd.isnull(col)
filler = na_dict[name] if name in na_dict else col.median()
df[name] = col.fillna(filler)
na_dict[name] = filler
return na_dict
class app_cat(TBPreProc):
@staticmethod
def func(df, max_n_cat=15):
cons = []
for name, value in df.items():
if is_numeric_dtype(value) and value.dtypes != np.bool:
if value.nunique()<=max_n_cat and not np.array_equal(value.unique(), np.array([0, 1])):
df[name] = value.astype('category').cat.as_ordered()
else: cons.append(name)
else:
if value.nunique()>max_n_cat: df[name] = value.astype('category').cat.codes+1; cons.append(name)
elif value.dtypes.name == 'category': df[name] = value.cat.as_ordered()
return cons
class scale_vars(TBPreProc):
@staticmethod
def func(df, mapper = None):
warnings.filterwarnings('ignore', category=sklearn.exceptions.DataConversionWarning)
if mapper is None:
map_f = [([n],StandardScaler()) for n in df.columns if is_numeric_dtype(df[n])]
mapper = DataFrameMapper(map_f).fit(df)
df[mapper.transformed_names_] = mapper.transform(df)
return mapper
class skip_flds(TBPreProc):
@staticmethod
def func(df, skip_flds):
df = df.drop(skip_flds, axis=1, inplace=True)
return None
class subset(TBPreProc):
@staticmethod
def func(df, subset):
df = df.sample(subset).copy
return None
class dummies(TBPreProc):
@staticmethod
def func(df):
df = pd.get_dummies(df, dummy_na=True)
return None
tfms = [get_y('SK_ID_CURR'), fill_na(), scale_vars(), app_cat(), dummies()]
str(tfms[1])[10:].split(' ')[0]
tst = app_train.copy()
import sklearn
from sklearn.exceptions import DataConversionWarning
from sklearn.preprocessing import StandardScaler
from sklearn_pandas import DataFrameMapper
# construct preprocessing
def tabular_proc(df, tfms, ignore_flds=None):
res = []
df = df.copy()
if ignore_flds is not None:
ignored_flds = df.loc[:, ignore_flds]
df.drop(ignore_flds, axis=1, inplace=True)
for f in tfms:
out = f(df)
if str(f)[10:].split(' ')[0] == 'dummies':
cats = [i for i in df.columns if i not in cons]
res += [cats]
if out is not None:
if str(f)[10:].split(' ')[0] == 'app_cat': cons = out
res += [out]
if ignore_flds is not None: df = pd.concat([ignored_flds, df], axis=1)
res.insert(0,df)
return res
df, y, na_dict, mapper, cons, cats = tabular_proc(tst, tfms)
###Output
> <ipython-input-165-b89f65804464>(42)tabular_proc()
-> return res
(Pdb) c
|
analysis/Mutational Analysis.ipynb | ###Markdown
Peptide Deimmunization Post-Processing and Analysis: This workbook describes the post-processing and analysis of EVdeimmunization output for the PvLEA4 repeats peptide. **Parameters used for deimmunization:** - Epitope binding threshold of 1% (default)- Representative HLA-I alleles of the 12 supertypes defined by Lund et al. - Permitting 2-5 mutations- Substitutions with an allele frequency > 1% in MSA permitted
###Code
from postprocessing_utils import *
import sys
import glob
import os
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
from matplotlib.figure import Figure
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.collections import PolyCollection
from matplotlib.colors import colorConverter
import matplotlib.pyplot as plt
import numpy as np
from copy import deepcopy
import collections
from pylab import *
import pandas as pd
import pickle
###Output
_____no_output_____
###Markdown
User inputs
###Code
#Pickle file path
data_path = "../data/"
#Solver code path
library_path = "../src/solver/"
#EVdeimmunization code path - also contains peptide_design_allele_file
code_path = "../src/"
#EVcoupings model file path
ev_model = "../data/PvLEA4_repeats_b0.75.model"
#Beginning of sequence index used in EVcouplings model
seq_start = 75
#Starting wt sequence
WT_SEQ = "MFQTAADKLAAAKDTVVDTLCSAKDTVKEKVVGAKDATADYLGEKKEQMRD"
###Output
_____no_output_____
###Markdown
Load in solutions and save FASTA files Load solutions from .pcl files
###Code
sys.path.append(library_path)
sols = read_in_solutions(glob.glob(os.path.join(data_path, "*/*.pcl")))
sols
for (n, sol) in sols:
for s in sol:
s.hash = hash(s.__hash__())
for (n, sol) in sols:
print(n, len(sol), len(set(sol)))
print(sols[0][1][0].objs)
print(WT_SEQ)
###Output
(14474.589535914089, -214.01265403424622)
MFQTAADKLAAAKDTVVDTLCSAKDTVKEKVVGAKDATADYLGEKKEQMRD
###Markdown
Save sequences to FASTA files along with scores
###Code
# write to fasta files
for n, sol in sols:
print(os.path.join(data_path, os.path.basename(n).replace(".pcl", ".fasta")))
with open(os.path.join(data_path, os.path.basename(n).replace(".pcl", ".fasta")), "w") as f:
for i,s in enumerate(sorted(set(sol))):
f.write(">{name}_{i}_immuno_{im}_energy_{en}\n{mut}\n".format(name=os.path.basename(n).replace(".pcl",""),
i=i,
im=s.objs[0]/1000,
en=-1.0*s.objs[1],
mut=get_mutant_seq(s)))
###Output
../data/PvLEA4_repeats_b0.75_t01_k3.fasta
../data/PvLEA4_repeats_b0.75_t01_k4.fasta
../data/PvLEA4_repeats_b0.75_t01_k5.fasta
../data/PvLEA4_repeats_b0.75_t01_k2.fasta
###Markdown
Plots Plot epitope clusters in the WT sequence and mutation positions Select a mutant to analyze
###Code
mut=get_mutant_seq(sols[0][1][0])
print(mut)
%matplotlib inline
# plot solutions
sys.path.append(code_path)
from preprocessing.utils import EVcouplings
s=sols[0][1][0]
alleles = pd.read_csv(code_path+"peptide_design_allele_file.txt",names=["name", "pssm_thresh", "p"])
pwms = load_pwms(alleles, code_path)
al_dict = alleles.loc[:,["name","pssm_thresh"]].set_index("name").to_dict()["pssm_thresh"]
all_dict = {al:1.0 for al in al_dict.keys()}
epitopes = get_epitope_mesh(WT_SEQ,pwms,al_dict, binary=True)
plot_epitope_cluster(epitopes, WT_SEQ, mutant=mut,
epi_length=9, alleles=all_dict,
enrichment=None, enrichment_title="",
plot_title="Wildtype Sequence", start=0, out=None, knwon_epitopes=None)
###Output
('Matrix-width', 51)
###Markdown
Plot Pareto front at each mutation threshold (solutions plotted by their immunogenicity and fitness)
###Code
#plot pareto front
ev_couplings = EVcouplings(ev_model)
al_df = alleles.set_index("name")
wt_imm,wt_en=calc_wildtype_point(WT_SEQ, pwms, al_df, ev_couplings, seq_start)
f=plot_pareto_front(sorted(sols), wt_imm, wt_en)
f.suptitle('Pareto Optimization')
f.savefig('../Pareto_Optima.png')
###Output
_____no_output_____ |
2015/ferran/day13.ipynb | ###Markdown
Day 13: Knights of the Dinner Table Day 13.1
###Code
import csv
from collections import defaultdict
def parse_happiness(input_file):
happiness = defaultdict(lambda: defaultdict(int))
with open(input_file, 'rt') as f_input:
csv_reader = csv.reader(f_input, delimiter=' ')
for l in csv_reader:
happiness[l[0]][l[-1].rstrip('.')] = ((l[2] == 'gain') - (l[2] == 'lose')) * int(l[3])
return happiness
from functools import reduce
def pair_score(a, b, happiness):
return happiness[a][b] + happiness[b][a]
def total_score(l, happiness):
total = 0
n = len(l)
for i, name in enumerate(l):
total += pair_score(l[i], l[(i+1) % n], happiness)
return total
import itertools
def optimal_gain(input_file):
happiness = parse_happiness(input_file)
attendees = list(happiness.keys())
max_score = 0
for p in itertools.permutations(attendees[1:]):
score = total_score([attendees[0]] + list(p), happiness)
if max_score < score: max_score = score
return max_score
###Output
_____no_output_____
###Markdown
Test
###Code
optimal_gain('inputs/input13.test1.txt')
###Output
_____no_output_____
###Markdown
Solution
###Code
optimal_gain('inputs/input13.txt')
###Output
_____no_output_____
###Markdown
Day 13.2 Add a neutral self.
###Code
from copy import copy
def add_neutral_self(happiness):
new_happiness = copy(happiness)
attendees = list(new_happiness)
for k in attendees:
new_happiness[k]['Self'] = 0
new_happiness['Self'][k] = 0
return new_happiness
def optimal_gain(input_file):
happiness = add_neutral_self(parse_happiness(input_file))
attendees = list(happiness.keys())
max_score = 0
for p in itertools.permutations(attendees[1:]):
score = total_score([attendees[0]] + list(p), happiness)
if max_score < score: max_score = score
return max_score
optimal_gain('inputs/input13.txt')
###Output
_____no_output_____ |
DS_Proyecto_04.ipynb | ###Markdown
Project 04 - Final Degree Report - Neural Networks Applied to Natural Language Processing Project summary: In this final project of the **Data Science** program taught by **Acámica**, I will go deeper into Project 3 - Natural Language Processing. In the previous project the goal was to implement a model that, given a product review, assigns the corresponding number of stars, i.e. a multiclass classification model. The results were not very convincing, but the task was then turned into a binary model, where the accuracy obtained was very promising (repository link https://github.com/SantiagoQuirogaG/Proyecto-3). The goal of this project is to improve the accuracy of that binary classification model. To do so I propose two changes:* First, an improvement of the Exploratory Data Analysis (EDA): analyzing the list of words included in the spaCy stop words, I find that removing them from the dataset discards information that can be very valuable for this model's predictions. That is why I decide to take them out of the stop-word list so they can be used later. * Second, apply a Neural Network model; this fulfils one of the project requirements of applying a new model not covered in class. I am also very interested in the power and results obtained with neural networks. The project will be organized as follows: 1°- **Exploratory Data Analysis (EDA)** 2°- **Presentation of the benchmark model** 3°- **Neural Network** 4°- **Conclusions** Link on GitHub: https://github.com/ Description: The dataset contains a collection of user reviews from "Amazon". It holds opinions in Spanish collected between November 2015 and November 2019. Each review contains: review ID, product ID, user ID, number of stars, body, title, language (in this case everything is in Spanish) and product category. The dataset is split into 3 files: one for training, one for testing and the last one for validation.
###Code
import itertools
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import nltk
nltk.download('punkt')
nltk.download('stopwords')
import spacy
from spacy.lang.es.stop_words import STOP_WORDS
import re
import string
from collections import Counter
from wordcloud import WordCloud
###Output
[nltk_data] Downloading package punkt to
[nltk_data] C:\Users\garga\AppData\Roaming\nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to
[nltk_data] C:\Users\garga\AppData\Roaming\nltk_data...
[nltk_data] Package stopwords is already up-to-date!
###Markdown
Exploratory Data Analysis
###Code
# Load the datasets
data_train = pd.read_json('../Proyecto 3/dataset_es_train.json', lines=True)
data_dev = pd.read_json('../Proyecto 3/dataset_es_dev.json', lines=True)
data_test = pd.read_json('../Proyecto 3/dataset_es_test.json', lines=True)
data_train.head()
print('Shape: ', data_train.shape)
print('Instancias: ', data_train.shape[0])
print('Columnas: ', data_train.columns)
# Number of reviews per star rating
data_train['stars'].value_counts()
# Check that all reviews are in Spanish
data_train['language'].value_counts()
# Are there missing values?
data_train.isna().sum()
# Test dataset
print('Shape: ', data_dev.shape)
print('Instancias: ', data_dev.shape[0])
print('Columnas: ', data_dev.columns)
# Are there missing values?
data_dev.isna().sum()
# Validation dataset
print('Shape: ', data_test.shape)
print('Instancias: ', data_test.shape[0])
print('Columnas: ', data_test.columns)
# Are there missing values?
data_test.isna().sum()
###Output
_____no_output_____
###Markdown
Now I will transform the 5 classes into just 2, Good ('bueno') and Bad ('malo'). This simplifies the model and makes it easier to predict opinions. In addition, reviews with intermediate ratings tend to be very subjective and hard to identify even for a human, since there is not much difference between them. The 'malo' category is made up of 1-, 2- and 3-star reviews. The 'bueno' category is made up of 4- and 5-star reviews.
###Code
diccionario = {1 : 'malo', 2 : 'malo', 3 : 'malo', 4 : 'bueno', 5 : 'bueno'}
data_train['binomial'] = data_train['stars'].map(diccionario)
data_dev['binomial'] = data_dev['stars'].map(diccionario)
data_test['binomial'] = data_test['stars'].map(diccionario)
data_train.head()
# With this change made, visualize the distribution of reviews
sns.countplot(x = "binomial", data = data_train)
plt.title('Distribución de críticas', fontsize=20)
plt.xlabel('Clasificación')
plt.ylabel('Cantidad de críticas')
plt.show()
###Output
_____no_output_____
###Markdown
By turning the Machine Learning problem into a binary one, the datasets have become imbalanced: 60% of the cases are negative reviews and 40% positive. This degree of imbalance is not high, so we can work with it without problems.
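A quick way to check those proportions directly (a small sketch on the training set):
```python
# share of each class in the training set
print(data_train['binomial'].value_counts(normalize=True))
```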
###Code
# How many categories are there?
print('Cantidad de categorías:', len(data_train.product_category.unique()))
plt.figure(figsize = (5,8))
sns.countplot(data = data_train, y = 'product_category', order = data_train['product_category'].value_counts().index)
plt.title('Categorías con mas críticas', fontsize=15)
plt.xlabel('Cantidad de críticas')
plt.ylabel('Categorías')
plt.show()
###Output
_____no_output_____
###Markdown
The EDA carried out in Project 3 showed that the 3 best-rated categories are Books, Luggage and Musical Instruments, and the 3 lowest-rated categories are Apparel, Wireless and Lawn & Garden.
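One way to check this ranking on the current training set is to look at the share of 'bueno' reviews per category (a small sketch):
```python
# proportion of positive ('bueno') reviews per product category
share_bueno = (data_train.assign(is_bueno=data_train['binomial'].eq('bueno'))
                         .groupby('product_category')['is_bueno'].mean()
                         .sort_values(ascending=False))
print(share_bueno.head(3))   # best-rated categories
print(share_bueno.tail(3))   # lowest-rated categories
```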
###Code
# Top 3 best-rated product categories
plt.figure(figsize=(15,5))
plt.subplot(131)
sns.countplot(x = "binomial", data = data_train[data_train['product_category']== 'book'])
plt.title('Libros', fontsize=20)
plt.xlabel('Clasificación')
plt.ylabel('Cantidad de críticas')
plt.subplot(132)
sns.countplot(x = "binomial", data = data_train[data_train['product_category']== 'luggage'])
plt.title('Equipaje', fontsize=20)
plt.xlabel('Clasificación')
plt.ylabel('Cantidad de críticas')
plt.subplot(133)
sns.countplot(x = "binomial", data = data_train[data_train['product_category']== 'musical_instruments'])
plt.title('Instrumentos musicales', fontsize=20)
plt.xlabel('Clasificación')
plt.ylabel('Cantidad de críticas')
plt.show()
# Top 3 worst-rated product categories
plt.figure(figsize=(15,5))
plt.subplot(131)
sns.countplot(x = "binomial", data = data_train[data_train['product_category']== 'apparel'])
plt.title('Ropa', fontsize=20)
plt.xlabel('Estrellas')
plt.ylabel('Cantidad de críticas')
plt.xticks([0, 1], ['Malo', 'Bueno'])
plt.subplot(132)
sns.countplot(x = "binomial", data = data_train[data_train['product_category']== 'wireless'])
plt.title('Inalambrico', fontsize=20)
plt.xlabel('Estrellas')
plt.ylabel('Cantidad de críticas')
plt.xticks([0, 1], ['Malo', 'Bueno'])
plt.subplot(133)
sns.countplot(x = "binomial", data = data_train[data_train['product_category']== 'lawn_and_garden'])
plt.title('Jardineria', fontsize=20)
plt.xlabel('Estrellas')
plt.ylabel('Cantidad de críticas')
plt.xticks([0, 1], ['Malo', 'Bueno'])
plt.show()
# Create a column that combines the review title and body for a more complete analysis
data_train['review_full'] = data_train['review_title'] + ' ' + data_train['review_body']
data_dev['review_full'] = data_dev['review_title'] + ' ' + data_dev['review_body']
data_test['review_full'] = data_test['review_title'] + ' ' + data_test['review_body']
data_dev.head()
# Drop the columns that are not of interest
data_train1 = data_train.drop(columns=['language', 'review_id', 'product_id', 'reviewer_id', 'stars',
'review_body', 'review_title', 'product_category'])
data_dev1 = data_dev.drop(columns=['language', 'review_id', 'product_id', 'reviewer_id', 'stars',
'review_body', 'review_title', 'product_category'])
data_test1 = data_test.drop(columns=['language', 'review_id', 'product_id', 'reviewer_id', 'stars',
'review_body', 'review_title', 'product_category'])
data_train1.head()
###Output
_____no_output_____
###Markdown
Now we proceed to clean the dataset; this step is called normalization. The idea is to remove information that is not valuable for this task, for example emojis and punctuation marks, to lowercase the text, and to remove common words (stopwords). Finally, lemmatization is performed, which consists of reducing each word to its root; for example corriendo, corría and correría all share the root word **correr**. This makes it easier to count words even when they are written in different forms.
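As a small illustration of the lemmatization step (a minimal sketch, assuming the es_core_news_lg model used below is already downloaded; the exact lemmas depend on the spaCy model version):
```python
import spacy

nlp_demo = spacy.load("es_core_news_lg")
# Print each token next to its lemma; the different forms of "correr" should map to a common root
for token in nlp_demo("Estaba corriendo, antes corría y mañana correría"):
    print(token.text, "->", token.lemma_)
```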
###Code
# Remove emojis from the TRAIN, DEV and TEST datasets
data_train1['review_full'] = data_train1['review_full'].str.replace('[^\w\s#@/:%.,_-]', '', flags=re.UNICODE)
data_dev1['review_full'] = data_dev1['review_full'].str.replace('[^\w\s#@/:%.,_-]', '', flags=re.UNICODE)
data_test1['review_full'] = data_test1['review_full'].str.replace('[^\w\s#@/:%.,_-]', '', flags=re.UNICODE)
data_train1
# To run the analysis, create a spaCy model object for Spanish
nlp = spacy.load("es_core_news_lg")
stops = spacy.lang.es.stop_words.STOP_WORDS
#stops
###Output
_____no_output_____
###Markdown
I define and use a function called "normal" which removes stopwords and punctuation marks and carries out the lemmatization. Looking at the default stopword list, there are words that can be closely related to a user's opinion about a product, so some of them are removed from the stopword list (i.e. kept in the text).
###Code
# Function that removes stopwords and punctuation marks and performs lemmatization.
def normal(comment, lowercase, remove_stopwords):
punctuations = string.punctuation
stops = spacy.lang.es.stop_words.STOP_WORDS
non_stops = ['no', 'si', 'sí', 'bien','buen','bueno','buenos','buena','buenas','peor', 'bajo', 'mal', 'mejor','nada']
otros = ['y', 'e', 'a', 'o', 'para', 'pare', 'paro', 'como', 'q','..','...','....', '.....','.......', '...........',
'¡','¿']
if lowercase:
comment = comment.lower()
comment = nlp(comment)
lemmatized = list()
for word in comment:
if word.text not in non_stops:
if not remove_stopwords or (remove_stopwords and word.text not in stops):
if word.text not in punctuations:
if word.text not in otros:
lemma = word.lemma_.strip()
lemmatized.append(lemma)
elif word.text in non_stops:
lemma = word.lemma_.strip()
lemmatized.append(lemma)
return " ".join(lemmatized)
# Apply the normal function to TRAIN and create the review_normal column
#data_train1['review_normal'] = data_train1['review_full'].apply(normal, lowercase=True, remove_stopwords=True)
#data_train1
# To avoid waiting for the previous cell to run again,
# the resulting dataset was saved to disk and is loaded back here to continue working.
#data_train1.to_csv('DS_Proyecto_04_data_train_normal.csv', index = False, encoding = 'utf-8')
data_train1 = pd.read_csv('DS_Proyecto_04_data_train_normal.csv')
data_train1['review_normal'] = data_train1['review_normal'].apply(str)
data_train1.head()
# Apply the normal function to DEV and create the review_normal column
#data_dev1['review_normal'] = data_dev1['review_full'].apply(normal, lowercase=True, remove_stopwords=True)
#data_dev1.to_csv('DS_Proyecto_04_data_dev_normal.csv', index = False, encoding = 'utf-8')
# Apply the normal function to TEST and create the review_normal column
#data_test1['review_normal'] = data_test1['review_full'].apply(normal, lowercase=True, remove_stopwords=True)
#data_test1.to_csv('DS_Proyecto_04_data_test_normal.csv', index = False, encoding = 'utf-8')
data_test1 = pd.read_csv('DS_Proyecto_04_data_test_normal.csv')
data_test1['review_normal'] = data_test1['review_normal'].apply(str)
data_test1.head()
data_dev1 = pd.read_csv('DS_Proyecto_04_data_dev_normal.csv')
data_dev1['review_normal'] = data_dev1['review_normal'].apply(str)
data_dev1.head()
# Are there any missing values?
print(data_train1.isna().sum())
print(data_dev1.isna().sum())
print(data_test1.isna().sum())
# Split the dataset by rating and analyze each part
df_malo = data_train1[data_train1.binomial == 'malo']
df_bueno = data_train1[data_train1.binomial == 'bueno']
# Shape of the bad reviews
df_malo.shape
r_malo = []
for i in range(df_malo.shape[0]):
    titular = df_malo.iloc[i].review_normal # select the review text
    titular = nltk.RegexpTokenizer('\w+').tokenize(titular) # tokenize
    titular = [t for t in titular if len(t)>1] # drop words with only one character
    r_malo.append(titular) # append the result to the list
r_malo = list(itertools.chain(*r_malo))
word_freq = Counter(r_malo)
common_words_malo = word_freq.most_common()
print('Lista de palabras más comunes para críticas negativas')
df_r_malo = pd.DataFrame(common_words_malo, columns = ['Words', 'Frequency'])
df_r_malo.head(15)
# Shape of the good reviews
df_bueno.shape
r_bueno = []
for i in range(df_bueno.shape[0]):
    titular = df_bueno.iloc[i].review_normal # select the review text
    titular = nltk.RegexpTokenizer('\w+').tokenize(titular) # tokenize
    titular = [t for t in titular if len(t)>1] # drop words with only one character
    r_bueno.append(titular) # append the result to the list
r_bueno = list(itertools.chain(*r_bueno))
word_freq = Counter(r_bueno)
common_words_bueno = word_freq.most_common()
print('Lista de palabras más comunes para críticas positivas')
df_r_bueno = pd.DataFrame(common_words_bueno, columns = ['Words', 'Frequency'])
df_r_bueno.head(15)
plt.figure(figsize = (15,10))
plt.subplot(121)
plot = sns.barplot(y = df_r_malo.iloc[:30].Words, x = df_r_malo.iloc[:30].Frequency)
plt.title('Frecuencia de palabras en críticas negativas', fontsize=15)
plt.xlabel('Frecuencias')
plt.ylabel('Palabras')
plt.subplot(122)
plot = sns.barplot(y = df_r_bueno.iloc[:30].Words, x = df_r_bueno.iloc[:30].Frequency)
plt.title('Frecuencia de palabras en críticas positivas', fontsize=15)
plt.xlabel('Frecuencias')
plt.ylabel('Palabras')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
There is a big difference in the most frequent word: in negative reviews the word "no" appears almost 160,000 times, while in positive reviews the word "bueno" appears a bit more than 40,000 times. Considering the number of bad reviews (120,000), the word "no" appears on average more than once per negative review, so it is present in practically every one of them.
###Code
plt.figure(figsize=(15,15))
plt.subplot(121)
unique_string=(" ").join(r_malo)
wordcloud = WordCloud(width = 1000, height = 500).generate(unique_string)
plt.imshow(wordcloud)
plt.axis("off")
plt.title('Nube de palabras en críticas negativas', fontsize=20)
plt.subplot(122)
unique_string=(" ").join(r_bueno)
wordcloud = WordCloud(width = 1000, height = 500).generate(unique_string)
plt.imshow(wordcloud)
plt.axis("off")
plt.title('Nube de palabras en críticas positivas', fontsize=20)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
In the word cloud, the size of each word indicates its frequency or importance in the reviews. At a glance, the difference between the two clouds is clear.

**Benchmark Model - Linear SVC**

As mentioned earlier, the benchmark model is the optimized binary model obtained in Project 3 (link to Project 3 on GitHub: https://github.com/SantiagoQuirogaG/Proyecto-3). The results shown below were obtained by validating that model on the TEST dataset. Below I copy the parameters used in the model, its accuracy and the confusion matrix obtained (a sketch of this pipeline is shown at the end of this cell).

To build the model, the vectorization was done with TfidfVectorizer, with the following parameters:
* ngram_range = (1,2)
* min_df = 30
* max_features = 1000
* all other parameters at their defaults

The model was a Linear SVC, with the following parameters:
* dual = False
* random_state = 42
* penalty = 'l1'
* C = 2
* loss = 'squared_hinge'
* tol = 0.01
* multi_class = 'ovr'
* all other parameters at their defaults

Accuracy and confusion matrix:
* Train Accuracy: **0.81886**
* Test Accuracy: **0.8186**

On the Test set, the model correctly predicts 88% of the **negative** label and 73% of the **positive** label.

**Neural Network**

One of the improvements has already been made: a better Exploratory Data Analysis (EDA). Words that do contribute information to the model were removed from the stopword list, and several of them appear in the list of most common words. Next I will develop the second improvement, a Neural Network model.
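For reference, here is a minimal sketch of the benchmark pipeline described above, using the reported parameters (this is an approximation for illustration only, not the actual Project 3 code; the column names follow this notebook):
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Vectorize the normalized reviews with the reported TF-IDF parameters
tfidf = TfidfVectorizer(ngram_range=(1, 2), min_df=30, max_features=1000)
X_tr = tfidf.fit_transform(data_train1['review_normal'])
X_te = tfidf.transform(data_test1['review_normal'])

# Linear SVC with the reported hyperparameters
svc = LinearSVC(dual=False, random_state=42, penalty='l1', C=2,
                loss='squared_hinge', tol=0.01, multi_class='ovr')
svc.fit(X_tr, data_train1['binomial'])
print('Test accuracy:', accuracy_score(data_test1['binomial'], svc.predict(X_te)))
```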
###Code
import random
from numpy.random import seed
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score, confusion_matrix
import tensorflow as tf
import keras
from keras.models import Sequential
from keras import layers
from keras.layers import Dropout
from keras.utils import plot_model
from plot_keras_history import plot_history
from keras.regularizers import l2
import sklearn.model_selection
import sklearn.preprocessing
data_train2 = data_train1.copy()
data_dev2 = data_dev1.copy()
data_test2 = data_test1.copy()
data_train2.head()
###Output
_____no_output_____
###Markdown
The metric chosen to evaluate the models was accuracy. I chose it because this is a classification problem and, although the classes are slightly imbalanced, that is not a problem. Moreover, we are not interested in any particular class, but in the overall performance of the model.

**Label encoding**

Why encode the labels? This will be used for the Neural Network model. Since I define two output nodes, the label column needs to be split into label_bueno and label_malo.
###Code
# Apply get_dummies to TRAIN
data_train2 = pd.concat([data_train2, pd.get_dummies(data_train2['binomial'],
prefix='label',
prefix_sep = '_')],
axis=1)
data_train2
# Apply get_dummies to DEV and TEST
data_dev2 = pd.concat([data_dev2, pd.get_dummies(data_dev2['binomial'],
prefix='label',
prefix_sep = '_')],
axis=1)
data_test2 = pd.concat([data_test2, pd.get_dummies(data_test2['binomial'],
prefix='label',
prefix_sep = '_')],
axis=1)
# Take the list of texts and the label vector for train, dev and test
list_review_train = list(data_train2['review_normal'].values)
calif_train = data_train2.iloc[:, [3,4]]
list_review_dev = list(data_dev2['review_normal'].values)
calif_dev = data_dev2.iloc[:, [3,4]]
list_review_test = list(data_test2['review_normal'].values)
calif_test = data_test2.iloc[:, [3,4]]
# Prepare the bag-of-words-to-vectors converter from sklearn.
# Given the available memory, we only use the 3000 most frequent terms in the whole corpus
# to generate the vectors
vectorizer = TfidfVectorizer(max_features=3000, min_df=30, ngram_range=(1, 2))
# Generate the vectors for each review from the full corpus.
matriz_review_train = vectorizer.fit_transform(list_review_train)
matriz_review_dev = vectorizer.transform(list_review_dev)
matriz_review_test = vectorizer.transform(list_review_test)
# Check the shapes
print(matriz_review_train.shape)
print(matriz_review_dev.shape)
print(matriz_review_test.shape)
# Change the data type of the matrices for better performance
matriz_review_train = matriz_review_train.astype('float32')
matriz_review_dev = matriz_review_dev.astype('float32')
matriz_review_test = matriz_review_test.astype('float32')
# Define x_train and y_train from the TRAIN dataset
x_train = matriz_review_train.toarray()
y_train = calif_train
# Define x_dev and y_dev from the DEV dataset
x_dev = matriz_review_dev.toarray()
y_dev = calif_dev
# Define x_test and y_test from the TEST dataset
x_test = matriz_review_test.toarray()
y_test = calif_test
###Output
_____no_output_____
###Markdown
Artificial **neural networks** are a model inspired by the workings of the human brain. They are made up of a set of nodes known as artificial neurons, which are connected and transmit signals to one another. These signals propagate from the input until an output is produced. Source: https://www.atriainnovation.com/que-son-las-redes-neuronales-y-sus-funciones/ The data has already been loaded and processed; now a seed is set so that the same results can be reproduced.
###Code
# Set a seed so the same results can be reproduced
my_seed = 100
seed(my_seed)
tf.random.set_seed(my_seed)
###Output
_____no_output_____
###Markdown
Models in Keras are defined as a sequence of layers. We can create a Sequential model and add layers one by one until it meets our requirements. The first step is to make sure the first layer has the correct number of inputs. This can be specified with the input_dim argument; in this case we use 3000, which is the number of columns of the training matrix (x_train). How do we choose the number of layers and their types? That is a tricky question, but normally one experiments and, through trial and error, arrives at a good result. In this case we will use a fully connected network with 6 layers. Source: https://medium.com/@gogasca_/tu-primer-red-neuronal-usando-keras-72d36130ee6c. The activation functions used will be ReLU and Sigmoid: according to the documentation, ReLU is the fastest to train, while Sigmoid is more complex, which is why it is advisable to use ReLU in the hidden layers and Sigmoid in the last layer. For binary classification models it is advisable to use "binary crossentropy" as the loss function, together with the "sgd" optimizer recommended in the documentation. The number of epochs and the batch size were chosen manually by analyzing the results. Empirical evidence suggests that, when using "Dropout" regularization for NLP problems, words that are very important for the classification may end up being dropped, so I applied L2 (Ridge) regularization instead.
###Code
# Determine the input dimension
input_dim = x_train.shape[1]
print(input_dim)
# Define the neural network model
model = Sequential()
model.add(layers.Dense(3000, input_dim=input_dim, activation='relu'))
model.add(layers.Dense(2000, input_dim=input_dim, activation='relu', kernel_regularizer=l2(0.001)))
model.add(layers.Dense(1000, input_dim=input_dim, activation='relu', kernel_regularizer=l2(0.001)))
model.add(layers.Dense(100, input_dim=input_dim, activation='relu', kernel_regularizer=l2(0.001)))
model.add(layers.Dense(10, input_dim=input_dim, activation='relu', kernel_regularizer=l2(0.001)))
model.add(layers.Dense(2, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Once the model is defined, we compile it. Compiling the model calls the backend libraries, in our case TensorFlow. The backend automatically selects the best way to represent the neural network for training and for making predictions on the hardware. When the model is compiled, some additional properties required for training must be defined:
* The loss function, which is used to evaluate the weights.
* The optimizer used to search over the network's weights.
* The optional metrics to collect and report during training. Since this is a binary classification problem, classification accuracy will be collected and reported.

Source: https://medium.com/@gogasca_/tu-primer-red-neuronal-usando-keras-72d36130ee6c
###Code
# Compile the model
model.compile(loss='binary_crossentropy',
optimizer= 'sgd',
metrics=['accuracy'])
# Model summary
model.summary()
# Model visualization
plot_model(model, show_shapes = True)
###Output
_____no_output_____
###Markdown
Once the model is defined and compiled, it is time to run it on the data. The model is trained by calling the fit() function. The training process runs over the dataset a certain number of times; this number is called epochs and is set with the "epochs" parameter. We can also define the number of instances that are processed before the weights of the neural network are updated. This parameter is called "batch_size".
###Code
# Train the model
history = model.fit(x_train, y_train,
epochs=30,
validation_data=(x_dev, y_dev),
batch_size=100).history
# Compute the accuracy and loss for Train, Dev and Test
train_scores = model.evaluate(x_train, y_train, verbose=0)
print("Train loss:", train_scores[0])
print("Train accuracy:", train_scores[1],'\n')
dev_scores = model.evaluate(x_dev, y_dev, verbose=0)
print("Dev loss:", dev_scores[0])
print("Dev accuracy:", dev_scores[1], '\n')
test_scores = model.evaluate(x_test, y_test, verbose=0)
print("Test loss:", test_scores[0])
print("Test accuracy:", test_scores[1], '<--')
# Plot the loss and accuracy across the epochs
plot_history(history)
plt.show()
###Output
_____no_output_____
###Markdown
Looking at the resulting plots, the loss on Train and Test (dev) decreases as the epochs go by and starts to level off after about 25 epochs. The accuracy stabilizes after about 5 epochs, and past 25 a slight overfitting can be noticed, so 30 epochs is a good value for this model. Now I proceed to evaluate the model.
###Code
# Redefine y_dev in order to evaluate
# 1 = malo and 0 = bueno
y_dev = data_dev2.iloc[:, [4]]
y_dev
# Predictions
y_pred_dev = model.predict_classes(x_dev)
y_pred_dev
plt.figure(figsize=(6, 5))
conf_sent = confusion_matrix(y_dev, y_pred_dev,
labels=[0, 1],
normalize='true')
sns.heatmap(data = conf_sent, cbar=True, square=True, annot=True,
fmt= '.2f', annot_kws={'size': 13},
xticklabels= ['Malo', 'Bueno'],
yticklabels=['Malo', 'Bueno'])
plt.title('Matriz de confusión', fontsize=20)
plt.ylabel('y_true')
plt.xlabel('y_pred')
plt.xticks(rotation = 0)
plt.yticks(rotation = 0)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
In conclusion, the neural network model reaches an accuracy of 0.8933 on training and 0.8695 on Test. It correctly predicts 81% of the bad reviews and 91% of the good reviews during training.

**Validation**

I validate the model with the test dataset.
###Code
# Recompute the accuracy and loss for Test
test_scores = model.evaluate(x_test, y_test, verbose=0)
print("Test loss:", test_scores[0])
print("Test accuracy:", test_scores[1])
# Predictions
y_pred_test = model.predict_classes(x_test)
y_pred_test
# Redefine y_test in order to evaluate
# 1 = malo and 0 = bueno
y_test = data_test2.iloc[:, [4]]
y_test
plt.figure(figsize=(6, 5))
conf_sent = confusion_matrix(y_test, y_pred_test,
labels=[0, 1],
normalize='true')
sns.heatmap(data = conf_sent, cbar=True, square=True, annot=True,
fmt= '.2f', annot_kws={'size': 13},
xticklabels= ['Malo', 'Bueno'],
yticklabels=['Malo', 'Bueno'])
plt.title('Matriz de confusión', fontsize=20)
plt.ylabel('y_true')
plt.xlabel('y_pred')
plt.xticks(rotation = 0)
plt.yticks(rotation = 0)
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
collagen_rebuild/collagen_fitting_3pob.ipynb | ###Markdown
This fitting required specific definition of the zone using the following script:
```
ignoremissing
align B*:A*
align C*:C* append
align D*:B* append
atoms ca
fit
atoms n,ca,c,o
fit
atoms *
fit
quit
```
###Code
opt = isambard.optimisation.DE_RMSD(
CollagenExpParams, ex_col.pdb)
opt.parameters(sequences,
[3.5, 65, 0, 1.0, 0, 0],
[2.0, 40, 180, 1, 30, 30],
[len(col_mod[0]), 'var0', 'var1', 'var2', 'var2', 'var2', 'var3', 'var4', 'var5'])
opt.run_opt(30, 50, 3)
best = CollagenExpParams(*opt.parse_individual(opt.halloffame[0]))
best.pack_new_sequences(sequences)
###Output
_____no_output_____ |
_sources/shorts/YouTube.ipynb | ###Markdown
Including YouTube videos

YouTube videos can be found online at https://www.youtube.com

When you find the video you want, click on the "share" button on the YouTube page, and you will get a web reference that looks something like this:
```
https://youtu.be/ZmhlRsYjs7Y
```
It is this last string of characters that you need. In a code cell, you enter the commands
```
from IPython.display import YouTubeVideo
YouTubeVideo('ZmhlRsYjs7Y')
```
Run the cell, and the video is ready to play!

(In the example below, we are using the Small Number videos from SFU.)

Example 1

Here is a sample YouTube video, loaded in from the internet. The code is
```
from IPython.display import YouTubeVideo
YouTubeVideo('ZmhlRsYjs7Y') ## Small Number video from SFU
```
###Code
from IPython.display import YouTubeVideo
YouTubeVideo('ZmhlRsYjs7Y') ## Small Number video from SFU
###Output
_____no_output_____
###Markdown
Example 2

Here is the same YouTube video, starting at 30 seconds in. We use the html magic to run this, which gives us more control. Note that we grabbed the code from the "Embed" option on the YouTube webpage, which gives the iframe commands with options.
```
%%html
```
###Code
%%html
<iframe width="300" src="https://www.youtube.com/embed/ZmhlRsYjs7Y?start=30" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
###Output
_____no_output_____ |
examples/local_development/mask_checker.ipynb | ###Markdown
Instantiate the data formatter, quantizer, qnet orchestrator, and mask checker
###Code
formatter = DataFormatter()
quantizer = Quantizer()
qnet_orchestrator = QnetOrchestrator(quantizer)
mask_checker = MaskChecker(qnet_orchestrator)
###Output
_____no_output_____
###Markdown
Load, quantize, and convert the data to qnet input format
###Code
data = formatter.load_data(data, meta)
quantized = quantizer.quantize_df(data)
features, label_matrix = quantizer.get_qnet_inputs(quantized)
len(features)
###Output
_____no_output_____
###Markdown
Load a pre-trained qnet
###Code
qnet_orchestrator.load_qnet('biome_net.joblib')
###Output
_____no_output_____
###Markdown
Mask 20% of the label matrix and use qnet to predict
###Code
# takes 2 minutes to run
predicted = mask_checker.mask_and_predict(label_matrix, mask_percent=20)
data.head()
predicted.head()
###Output
_____no_output_____
###Markdown
Plot the predicted vs. original biome measurements
###Code
BIOMES = ['Actinobacteriota', 'Bacteroidota', 'Firmicutes', 'Proteobacteria', 'unclassified_Bacteria']
concat = pd.concat([
data.assign(source='original'),
predicted.assign(source='predicted')
])
concat = concat[concat.variable.isin(BIOMES)]
g = sns.FacetGrid(concat, col='variable', col_wrap=2, sharey=False, margin_titles=True)
g.map_dataframe(sns.lineplot, 'week', 'value', hue='source', ci=None, marker='o',
linewidth=2, alpha=0.75, markersize=5)
g.set_titles(row_template = '{row_name}', col_template = '{col_name}')
g.add_legend()
###Output
_____no_output_____
###Markdown
It appears that the prediction passes our eyeball sanity check. We zoom in to look at the first 20 weeks.
###Code
concat = concat[(concat.week <= 20)]
g = sns.FacetGrid(concat, col='variable', col_wrap=2, sharey=False, margin_titles=True)
g.map_dataframe(sns.lineplot, 'week', 'value', hue='source', ci=None, marker='o',
linewidth=2, alpha=0.75, markersize=5)
g.set_titles(row_template = '{row_name}', col_template = '{col_name}')
g.add_legend()
###Output
_____no_output_____ |
2020-Tencent-Advertisement-Algorithm-Competition-Rank19/3_LSTM_v9_win50_300size.ipynb | ###Markdown
Load the data
###Code
df = pd.read_pickle(os.path.join(data_path, 'processed_data_numerical.pkl'))
df['age'] = df['age'] - 1
df['gender'] = df['gender'] - 1
df.head(1)
###Output
_____no_output_____
###Markdown
Load the pre-trained word embeddings
###Code
os.listdir(embedding_path)
embedding = np.load(os.path.join(embedding_path, 'embedding_w2v_sg1_hs0_win50_size300.npz'))
creative = embedding['creative_w2v']
ad= embedding['ad_w2v']
advertiser = embedding['advertiser_w2v']
product = embedding['product_w2v']
industry = embedding['industry_w2v']
product_cate = embedding['product_cate_w2v']
del embedding
gc.collect()
###Output
_____no_output_____
###Markdown
Embedding features to use and their corresponding sequence IDs
###Code
# Concatenate the feature columns we need into a single vector here; later we can simply split it back apart
data_seq = df[['creative_id', 'ad_id', 'advertiser_id', 'product_id', 'industry', 'click_times']].progress_apply(lambda s: np.hstack(s.values), axis=1).values
# embedding_list = [creative_embed, ad_embed, advertiser_embed, product_embed]
# embedding_list = [creative_glove, ad_glove, advertiser_glove, product_glove]
embedding_list = [creative, ad, advertiser, product, industry]
###Output
100%|██████████| 4000000/4000000 [08:56<00:00, 7462.23it/s]
###Markdown
Build the PyTorch Dataset and DataLoader
###Code
class CustomDataset(Dataset):
def __init__(self, seqs, labels, input_num, shuffle=False):
self.seqs = seqs
self.labels = labels
self.input_num = input_num
self.shuffle = shuffle
def __len__(self):
return len(self.seqs)
def __getitem__(self, idx):
length = int(self.seqs[idx].shape[0]/self.input_num)
seq_list = list(torch.LongTensor(self.seqs[idx]).split(length, dim=0))
label = torch.LongTensor(self.labels[idx])
        # Randomly shuffle the sequence order
if self.shuffle and torch.rand(1) < 0.5:
random_pos = torch.randperm(length)
for i in range(len(seq_list)):
seq_list[i] = seq_list[i][random_pos]
return seq_list + [length, label]
def pad_truncate(Batch):
*seqs, lengths, labels = list(zip(*Batch))
    # Truncate lengths to the 99th percentile, which shortens the padded length and saves a lot of GPU memory
trun_len = torch.topk(torch.tensor(lengths), max(int(0.01*len(lengths)), 1))[0][-1]
    # To be safe, also set a hard maximum length
max_len = min(trun_len, 150)
seq_list = list(pad_sequence(seq, batch_first=True)[:, :max_len] for seq in seqs)
return seq_list, torch.tensor(lengths).clamp_max(max_len), torch.stack(labels)
input_num = 6
BATCH_SIZE_TRAIN = 1024
BATCH_SIZE_VAL = 2048
BATCH_SIZE_TEST = 2048
kf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
data_folds = []
valid_indexs = [] # used later when saving the 5-fold validation results, to map them back to their original row order
for idx, (train_index, valid_index) in enumerate(kf.split(X=df.iloc[:3000000], y=df.iloc[:3000000]['age'])):
valid_indexs.append(valid_index)
X_train, X_val, X_test = data_seq[train_index], data_seq[valid_index], data_seq[3000000:]
y_train, y_val = np.array(df.iloc[train_index, -2:]), np.array(df.iloc[valid_index, -2:])
y_test = np.random.rand(X_test.shape[0], 2)
train_dataset = CustomDataset(X_train, y_train, input_num, shuffle=True)
val_dataset = CustomDataset(X_val, y_val, input_num, shuffle=False)
test_dataset = CustomDataset(X_test, y_test, input_num, shuffle=False)
train_dataloader = DataLoader(train_dataset,
batch_size=BATCH_SIZE_TRAIN,
shuffle=True,
collate_fn=pad_truncate,
num_workers=0,
worker_init_fn=worker_init_fn)
valid_dataloader = DataLoader(val_dataset,
batch_size=BATCH_SIZE_VAL,
sampler=SequentialSampler(val_dataset),
shuffle=False,
collate_fn=pad_truncate,
num_workers=0,
worker_init_fn=worker_init_fn)
test_dataloader = DataLoader(test_dataset,
batch_size=BATCH_SIZE_TEST,
sampler=SequentialSampler(test_dataset),
shuffle=False,
collate_fn=pad_truncate,
num_workers=0,
worker_init_fn=worker_init_fn)
data_folds.append((train_dataloader, valid_dataloader, test_dataloader))
del data_seq, creative, ad, advertiser, product, industry, product_cate
gc.collect()
###Output
_____no_output_____
###Markdown
Build the model
###Code
class BiLSTM(nn.Module):
def __init__(self, embedding_list, embedding_freeze, lstm_size, fc1, fc2, num_layers=1, rnn_dropout=0.2, embedding_dropout=0.2, fc_dropout=0.2):
super().__init__()
self.embedding_layers = nn.ModuleList([nn.Embedding.from_pretrained(torch.HalfTensor(embedding).cuda(), freeze=freeze) for embedding, freeze in zip(embedding_list, embedding_freeze)])
self.input_dim = int(np.sum([embedding.shape[1] for embedding in embedding_list]))
self.lstm = nn.LSTM(input_size = self.input_dim,
hidden_size = lstm_size,
num_layers = num_layers,
bidirectional = True,
batch_first = True,
dropout = rnn_dropout)
self.fc1 = nn.Linear(2*lstm_size, fc1)
self.fc2 = nn.Linear(fc1, fc2)
self.fc3 = nn.Linear(fc2, 12)
self.rnn_dropout = nn.Dropout(rnn_dropout)
self.embedding_dropout = nn.Dropout(embedding_dropout)
self.fc_dropout = nn.Dropout(fc_dropout)
def forward(self, seq_list, lengths):
batch_size, total_length= seq_list[0].size()
lstm_outputs = []
click_time = seq_list[-1]
embeddings = []
for idx, seq in enumerate(seq_list[:-1]):
embedding = self.embedding_layers[idx](seq).to(torch.float32)
embedding = self.embedding_dropout(embedding)
embeddings.append(embedding)
packed = pack_padded_sequence(torch.cat(embeddings, dim=-1), lengths, batch_first=True, enforce_sorted=False)
packed_output, (h_n, c_n) = self.lstm(packed)
lstm_output, _ = pad_packed_sequence(packed_output, batch_first=True, total_length=total_length, padding_value=-float('inf'))
lstm_output = self.rnn_dropout(lstm_output)
# lstm_output shape: (batchsize, total_length, 2*lstm_size)
max_output = F.max_pool2d(lstm_output, (total_length, 1), stride=(1, 1)).squeeze()
# output shape: (batchsize, 2*lstm_size)
fc_out = F.relu(self.fc1(max_output))
fc_out = self.fc_dropout(fc_out)
fc_out = F.relu(self.fc2(fc_out))
pred = self.fc3(fc_out)
age_pred = pred[:, :10]
gender_pred = pred[:, -2:]
return age_pred, gender_pred
###Output
_____no_output_____
###Markdown
Train the model
###Code
def validate(model, val_dataloader, criterion, history, n_iters):
model.eval()
global best_acc, best_model, validate_history
costs = []
age_accs = []
gender_accs = []
with torch.no_grad():
for idx, batch in enumerate(val_dataloader):
seq_list, lengths, labels = batch
seq_list_device = [seq.cuda() for seq in seq_list]
lengths_device = lengths.cuda()
labels = labels.cuda()
age_output, gender_output = model(seq_list_device, lengths_device)
loss = criterion(age_output, gender_output, labels)
costs.append(loss.item())
_, age_preds = torch.max(age_output, 1)
_, gender_preds = torch.max(gender_output, 1)
age_accs.append((age_preds == labels[:, 0]).float().mean().item())
gender_accs.append((gender_preds == labels[:, 1]).float().mean().item())
torch.cuda.empty_cache()
mean_accs = np.mean(age_accs) + np.mean(gender_accs)
mean_costs = np.mean(costs)
writer.add_scalar('gender/validate_accuracy', np.mean(gender_accs), n_iters)
writer.add_scalar('gender/validate_loss', mean_costs, n_iters)
writer.add_scalar('age/validate_accuracy',np.mean(age_accs), n_iters)
writer.add_scalar('age/validate_loss', mean_costs, n_iters)
if mean_accs > history['best_model'][0][0]:
save_dict = copy.deepcopy(model.state_dict())
embedding_keys = []
for key in save_dict.keys():
if key.startswith('embedding'):
embedding_keys.append(key)
for key in embedding_keys:
save_dict.pop(key)
heapq.heapify(history['best_model'])
checkpoint_pth = history['best_model'][0][1]
heapq.heappushpop(history['best_model'], (mean_accs, checkpoint_pth))
torch.save(save_dict, checkpoint_pth)
del save_dict
gc.collect()
torch.cuda.empty_cache()
return mean_costs, mean_accs
def train(model, train_dataloader, val_dataloader, criterion, optimizer, epoch, history, validate_points, scheduler, step=True):
model.train()
costs = []
age_accs = []
gender_accs = []
val_loss, val_acc = 0, 0
with tqdm(total=len(train_dataloader.dataset), desc='Epoch{}'.format(epoch)) as pbar:
for idx, batch in enumerate(train_dataloader):
seq_list, lengths, labels = batch
seq_list_device = [seq.cuda() for seq in seq_list]
lengths_device = lengths.cuda()
labels = labels.cuda()
age_output, gender_output = model(seq_list_device, lengths_device)
loss = criterion(age_output, gender_output, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if step:
scheduler.step()
with torch.no_grad():
costs.append(loss.item())
_, age_preds = torch.max(age_output, 1)
_, gender_preds = torch.max(gender_output, 1)
age_accs.append((age_preds == labels[:, 0]).float().mean().item())
gender_accs.append((gender_preds == labels[:, 1]).float().mean().item())
pbar.update(labels.size(0))
n_iters = idx + len(train_dataloader)*(epoch-1)
if idx in validate_points:
val_loss, val_acc = validate(model, val_dataloader, criterion, history, n_iters)
model.train()
writer.add_scalar('gender/train_accuracy', gender_accs[-1], n_iters)
writer.add_scalar('gender/train_loss', costs[-1], n_iters)
writer.add_scalar('age/train_accuracy', age_accs[-1], n_iters)
writer.add_scalar('age/train_loss', costs[-1], n_iters)
writer.add_scalar('age/learning_rate', scheduler.get_lr()[0], n_iters)
pbar.set_postfix_str('loss:{:.4f}, acc:{:.4f}, val-loss:{:.4f}, val-acc:{:.4f}'.format(np.mean(costs[-10:]), np.mean(age_accs[-10:])+np.mean(gender_accs[-10:]), val_loss, val_acc))
gc.collect()
def test(oof_train_test, model, test_dataloader, val_dataloader, valid_index, weight=1):
    # At test time, also compute predictions on the validation set, to make later model ensembling and weight searching easier
model.eval()
y_val = []
age_pred = []
gender_pred = []
age_pred_val = []
gender_pred_val = []
with torch.no_grad():
for idx, batch in enumerate(test_dataloader):
seq_list, lengths, labels = batch
seq_list_device = [seq.cuda() for seq in seq_list]
lengths_device = lengths.cuda()
age_output, gender_output = model(seq_list_device, lengths_device)
age_pred.append(age_output.cpu())
gender_pred.append(gender_output.cpu())
torch.cuda.empty_cache()
for idx, batch in enumerate(val_dataloader):
seq_list, lengths, labels = batch
seq_list_device = [seq.cuda() for seq in seq_list]
lengths_device = lengths.cuda()
age_output, gender_output = model(seq_list_device, lengths_device)
age_pred_val.append(age_output.cpu())
gender_pred_val.append(gender_output.cpu())
y_val.append(labels)
torch.cuda.empty_cache()
    # Columns 0-9 store the predicted age probability distribution, columns 10-11 the gender distribution, and columns 12-13 the true age and gender labels
oof_train_test[valid_index, :10] += F.softmax(torch.cat(age_pred_val)).numpy() * weight
oof_train_test[valid_index, 10:12] += F.softmax(torch.cat(gender_pred_val)).numpy() * weight
oof_train_test[valid_index, 12:] = torch.cat(y_val).numpy()
oof_train_test[3000000:, :10] += F.softmax(torch.cat(age_pred)).numpy() * (1/5) * weight
oof_train_test[3000000:, 10:12] += F.softmax(torch.cat(gender_pred)).numpy() * (1/5) * weight
# Define the joint loss function
def criterion(age_output, gender_output, labels):
age_loss = nn.CrossEntropyLoss()(age_output, labels[:, 0])
gender_loss = nn.CrossEntropyLoss()(gender_output, labels[:, 1])
return age_loss*0.6 + gender_loss*0.4
# Columns 0-9 store the predicted age probability distribution, columns 10-11 the gender distribution, and columns 12-13 the true age and gender labels
oof_train_test = np.zeros((4000000, 14))
# oof_train_test = np.load(os.path.join(model_save, "lstm_v2_300size_fold_2.npy"))
acc_folds = []
model_name = 'lstm_v9_300size_win50'
best_checkpoint_num = 3
for idx, (train_dataloader, val_dataloader, test_dataloader) in enumerate(data_folds):
# if idx in [0]:
# continue
history = {'best_model': []}
for i in range(best_checkpoint_num):
history['best_model'].append((0, os.path.join(model_save, '{}_checkpoint_{}.pth'.format(model_name, i))))
    # Order: creative_w2v, ad_w2v, advertiser_w2v, product_w2v, industry_w2v
embedding_freeze = [True, True, True, True, True]
validate_points = list(np.linspace(0, len(train_dataloader)-1, 2).astype(int))[1:]
model = BiLSTM(embedding_list, embedding_freeze, lstm_size=1500, fc1=1500, fc2=800, num_layers=2, rnn_dropout=0.0, fc_dropout=0.3, embedding_dropout=0.3)
model = model.cuda()
model = nn.parallel.DistributedDataParallel(model, find_unused_parameters=True)
optimizer = torch.optim.Adam(model.parameters(), betas=(0.9, 0.999), lr=1e-3)
epochs = 7
# scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.7)
scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=1e-5, max_lr=2e-3, step_size_up=int(len(train_dataloader)/2), cycle_momentum=False, mode='triangular')
# scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=3e-3, epochs=epochs, steps_per_epoch=len(train_dataloader), pct_start=0.2, anneal_strategy='linear', div_factor=30, final_div_factor=1e4)
for epoch in range(1, epochs+1):
writer = SummaryWriter(log_dir='./runs/{}_fold_{}'.format(model_name, idx))
train(model, train_dataloader, val_dataloader, criterion, optimizer, epoch, history, validate_points, scheduler, step=True)
# scheduler.step()
gc.collect()
for (acc, checkpoint_pth), weight in zip(sorted(history['best_model'], reverse=True), [0.5, 0.3, 0.2]):
model.load_state_dict(torch.load(checkpoint_pth, map_location=torch.device('cpu')), strict=False)
test(oof_train_test, model, test_dataloader, val_dataloader, valid_indexs[idx], weight=weight)
acc_folds.append(sorted(history['best_model'], reverse=True)[0][0])
np.save(os.path.join(model_save, "{}_fold_{}.npy".format(model_name, idx)), oof_train_test)
del model
gc.collect()
torch.cuda.empty_cache()
acc_folds
np.save(os.path.join(res_path, "{}_5folds_{:.4f}.npy".format(model_name, np.mean(acc_folds))), oof_train_test)
###Output
_____no_output_____ |
04_whale_detection.ipynb | ###Markdown
Adapted from radekosmulski's [notebook](https://github.com/radekosmulski/whale/blob/master/first_submission.ipynb) for learning purposes, using the playground competition instead of the original one.
###Code
!pip install -Uqq fastbook
import fastbook
fastbook.setup_book()
root='/content/gdrive/MyDrive/'
base=f'{root}/Colab Notebooks/whale'
###Output
_____no_output_____
###Markdown
Setup Kaggle and Data
###Code
#Setup kaggle key
!mkdir -p ~/.kaggle
!cp '{base}/kaggle.json' ~/.kaggle/
!ls ~/.kaggle
#Download kaggle cli
! pip install --upgrade --force-reinstall --no-deps kaggle -q
from pathlib import Path
data_dir='data'
if not Path(data_dir).exists():
! mkdir -p {data_dir}
#Download competition data
! kaggle competitions download -c whale-categorization-playground
! unzip -q whale-categorization-playground.zip
! mv train/train {data_dir}/
! mv test/test {data_dir}/
! mv train.csv {data_dir}/
! mv sample_submission.csv {data_dir}/
!rm -rf train
!rm -rf test
###Output
_____no_output_____
###Markdown
Imports
###Code
from nbdev.showdoc import *
from fastai.vision.all import *
import pandas as pd
###Output
_____no_output_____
###Markdown
Setup
###Code
SEED=42
set_seed(s=SEED, reproducible=True)
###Output
_____no_output_____
###Markdown
Data
###Code
df = pd.read_csv(f'{data_dir}/train.csv')
df.head()
df.Id.value_counts()
(df.Id == 'new_whale').mean()
(df.Id.value_counts() == 1).mean()
###Output
_____no_output_____
###Markdown
52% of all whales have only a single image associated with them. 8.2% of all images are labeled new_whale, i.e. not a known whale. [TODO] There is a superb writeup on what a solution to this problem might look like [here](https://www.kaggle.com/martinpiotte/whale-recognition-model-with-score-0-78563/notebook). In general, the conversation in the Kaggle [forum](https://www.kaggle.com/c/humpback-whale-identification/discussion) also has some very informative threads.
###Code
df.Id.nunique()
df.shape
###Output
_____no_output_____
###Markdown
DataBlock
###Code
SZ = 224
BS = 64
NUM_WORKERS = 12
SEED = 42
source=Path(data_dir)
source
data = DataBlock(
blocks = (ImageBlock, CategoryBlock),
get_x =ColReader(0, pref=source/'train'),
get_y=ColReader(1),
splitter=RandomSplitter(seed=SEED),
item_tfms=Resize(SZ),
batch_tfms=aug_transforms())
#data.summary(source=df)
dls = data.dataloaders(source=df, bs=BS, num_workers=NUM_WORKERS)
#dls.show_batch()
def map5(preds,targs):
if type(preds) is list:
return torch.cat([map5fast(p, targs, 5).view(1) for p in preds ]).mean()
    return map5fast(preds, targs, 5)
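# NOTE (assumption): map5fast is not defined in this notebook; in the original repo it
# comes from a separate metrics helper. A minimal MAP@5 sketch compatible with the calls above:
def map5fast(preds, targs, k=5):
    # preds: (n, n_classes) scores; targs: (n,) true class indices
    top_k = preds.topk(k, dim=1)[1]                  # indices of the k highest-scoring classes
    hits = (top_k == targs.unsqueeze(1)).float()     # (n, k); at most one 1 per row
    weights = 1.0 / torch.arange(1, k + 1, device=preds.device, dtype=torch.float)
    return (hits * weights).sum(dim=1).mean()        # 1/rank of the true class, averaged over samples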
###Output
_____no_output_____
###Markdown
Modeling
###Code
learn = cnn_learner(dls, resnet50, metrics=[accuracy, map5])
#learn.lr_find()
#learn.fine_tune(5, 1e-2, freeze_epochs=1)
###Output
_____no_output_____ |
tests/experimental/101.corebot-bert-bidaf/notebooks/bert_train.ipynb | ###Markdown
Train the intent classifier using a pretrained BERT model as featurizer

This notebook creates the BERT language classifier model. See the [README.md](../README.md) for instructions on how to run this sample. The resulting model is placed in the `/models/bert` directory, which is packaged with the bot.

`model_corebot101` package

This sample creates a separate python package (`model_corebot101`) which contains all the code to train, evaluate and run inference with the intent classifiers for this sample. See also:
- [The BERT runtime model](bert_model_runtime.ipynb) to test the resulting intent classifier model.
- [The BiDAF runtime model](bidaf_model_runtime.ipynb) to test the associated BiDAF entity classifier model.
- [The model runtime](model_runtime.ipynb) to test both the BERT and BiDAF models together.
###Code
from model_corebot101.bert.train import BertTrainEval
###Output
_____no_output_____
###Markdown
`BertTrainEval.train_eval` method

This method performs all the training and the evaluation that's listed at the bottom of the output. Training may take several minutes to complete. The evaluation output should look something like the following:
```bash
06/02/2019 19:46:52 - INFO - model_corebot101.bert.train.bert_train_eval - ***** Eval results *****
06/02/2019 19:46:52 - INFO - model_corebot101.bert.train.bert_train_eval - acc = 1.0
06/02/2019 19:46:52 - INFO - model_corebot101.bert.train.bert_train_eval - acc_and_f1 = 1.0
06/02/2019 19:46:52 - INFO - model_corebot101.bert.train.bert_train_eval - eval_loss = 0.06498947739601135
06/02/2019 19:46:52 - INFO - model_corebot101.bert.train.bert_train_eval - f1 = 1.0
06/02/2019 19:46:52 - INFO - model_corebot101.bert.train.bert_train_eval - global_step = 12
06/02/2019 19:46:52 - INFO - model_corebot101.bert.train.bert_train_eval - loss = 0.02480666587750117
```
###Code
BertTrainEval.train_eval(cleanup_output_dir=True)
###Output
Bert Model training_data_dir is set to d:\python\daveta-docker-wizard\apub\samples\flask\101.corebot-bert-bidaf\model\model_corebot101\bert\training_data
Bert Model model_dir is set to C:\Users\daveta\models\bert
07/02/2019 07:16:09 - INFO - model_corebot101.bert.train.bert_train_eval - device: cpu n_gpu: 0, distributed training: False, 16-bits training: None
07/02/2019 07:16:09 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at C:\Users\daveta\.pytorch_pretrained_bert\26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
07/02/2019 07:16:10 - INFO - pytorch_pretrained_bert.modeling - loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz from cache at C:\Users\daveta\.pytorch_pretrained_bert\distributed_-1\9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba
07/02/2019 07:16:10 - INFO - pytorch_pretrained_bert.modeling - extracting archive file C:\Users\daveta\.pytorch_pretrained_bert\distributed_-1\9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba to temp dir C:\Users\daveta\AppData\Local\Temp\tmp9hepebcl
07/02/2019 07:16:13 - INFO - pytorch_pretrained_bert.modeling - Model config {
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"type_vocab_size": 2,
"vocab_size": 30522
}
07/02/2019 07:16:16 - INFO - pytorch_pretrained_bert.modeling - Weights of BertForSequenceClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias']
07/02/2019 07:16:16 - INFO - pytorch_pretrained_bert.modeling - Weights from pretrained model not used in BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - Writing example 0 of 16
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - *** Example ***
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - guid: train-0
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - tokens: [CLS] book flight from london to paris on feb 14th [SEP]
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - input_ids: 101 2338 3462 2013 2414 2000 3000 2006 13114 6400 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - input_mask: 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - label: Book flight (id = 0)
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - *** Example ***
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - guid: train-1
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - tokens: [CLS] book flight to berlin on feb 14th [SEP]
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - input_ids: 101 2338 3462 2000 4068 2006 13114 6400 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - input_mask: 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - label: Book flight (id = 0)
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - *** Example ***
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - guid: train-2
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - tokens: [CLS] book me a flight from london to paris [SEP]
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - input_ids: 101 2338 2033 1037 3462 2013 2414 2000 3000 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - input_mask: 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - label: Book flight (id = 0)
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - *** Example ***
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - guid: train-3
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - tokens: [CLS] bye [SEP]
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - input_ids: 101 9061 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - input_mask: 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - label: Cancel (id = 1)
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - *** Example ***
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - guid: train-4
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - tokens: [CLS] cancel booking [SEP]
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - input_ids: 101 17542 21725 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - input_mask: 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:16:16 - INFO - model_corebot101.bert.common.bert_util - label: Cancel (id = 1)
07/02/2019 07:16:16 - INFO - model_corebot101.bert.train.bert_train_eval - ***** Running training *****
07/02/2019 07:16:16 - INFO - model_corebot101.bert.train.bert_train_eval - Num examples = 16
07/02/2019 07:16:16 - INFO - model_corebot101.bert.train.bert_train_eval - Batch size = 4
07/02/2019 07:16:16 - INFO - model_corebot101.bert.train.bert_train_eval - Num steps = 12
Epoch: 0%| | 0/3 [00:00<?, ?it/s]
Iteration: 0%| | 0/4 [00:00<?, ?it/s]
Iteration: 25%|██████████████████▎ | 1/4 [00:05<00:16, 5.40s/it]
Iteration: 50%|████████████████████████████████████▌ | 2/4 [00:11<00:11, 5.60s/it]
Iteration: 75%|██████████████████████████████████████████████████████▊ | 3/4 [00:17<00:05, 5.63s/it]
Epoch: 33%|█████████████████████████▋ | 1/3 [00:22<00:45, 22.85s/it]
Iteration: 0%| | 0/4 [00:00<?, ?it/s]
Iteration: 25%|██████████████████▎ | 1/4 [00:05<00:17, 5.83s/it]
Iteration: 50%|████████████████████████████████████▌ | 2/4 [00:11<00:11, 5.78s/it]
Iteration: 75%|██████████████████████████████████████████████████████▊ | 3/4 [00:17<00:05, 5.73s/it]
Epoch: 67%|███████████████████████████████████████████████████▎ | 2/3 [00:45<00:22, 22.85s/it]
Iteration: 0%| | 0/4 [00:00<?, ?it/s]
Iteration: 25%|██████████████████▎ | 1/4 [00:05<00:16, 5.50s/it]
Iteration: 50%|████████████████████████████████████▌ | 2/4 [00:11<00:11, 5.51s/it]
Iteration: 75%|██████████████████████████████████████████████████████▊ | 3/4 [00:16<00:05, 5.47s/it]
Epoch: 100%|█████████████████████████████████████████████████████████████████████████████| 3/3 [01:07<00:00, 22.61s/it]
07/02/2019 07:17:24 - INFO - pytorch_pretrained_bert.modeling - loading archive file C:\Users\daveta\models\bert
07/02/2019 07:17:24 - INFO - pytorch_pretrained_bert.modeling - Model config {
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"type_vocab_size": 2,
"vocab_size": 30522
}
07/02/2019 07:17:26 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file C:\Users\daveta\models\bert\vocab.txt
07/02/2019 07:17:26 - INFO - model_corebot101.bert.train.bert_train_eval - DONE TRAINING.
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - Writing example 0 of 16
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - *** Example ***
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - guid: dev-0
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - tokens: [CLS] book flight from london to paris on feb 14th [SEP]
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - input_ids: 101 2338 3462 2013 2414 2000 3000 2006 13114 6400 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - input_mask: 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - label: Book flight (id = 0)
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - *** Example ***
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - guid: dev-1
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - tokens: [CLS] book flight to berlin on feb 14th [SEP]
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - input_ids: 101 2338 3462 2000 4068 2006 13114 6400 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - input_mask: 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - label: Book flight (id = 0)
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - *** Example ***
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - guid: dev-2
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - tokens: [CLS] book me a flight from london to paris [SEP]
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - input_ids: 101 2338 2033 1037 3462 2013 2414 2000 3000 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - input_mask: 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - label: Book flight (id = 0)
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - *** Example ***
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - guid: dev-3
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - tokens: [CLS] bye [SEP]
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - input_ids: 101 9061 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - input_mask: 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - label: Cancel (id = 1)
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - *** Example ***
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - guid: dev-4
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - tokens: [CLS] cancel booking [SEP]
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - input_ids: 101 17542 21725 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - input_mask: 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
07/02/2019 07:17:27 - INFO - model_corebot101.bert.common.bert_util - label: Cancel (id = 1)
07/02/2019 07:17:27 - INFO - model_corebot101.bert.train.bert_train_eval - ***** Running evaluation *****
07/02/2019 07:17:27 - INFO - model_corebot101.bert.train.bert_train_eval - Num examples = 16
07/02/2019 07:17:27 - INFO - model_corebot101.bert.train.bert_train_eval - Batch size = 8
Evaluating: 100%|████████████████████████████████████████████████████████████████████████| 2/2 [00:04<00:00, 2.46s/it]
07/02/2019 07:17:32 - INFO - model_corebot101.bert.train.bert_train_eval - ***** Eval results *****
07/02/2019 07:17:32 - INFO - model_corebot101.bert.train.bert_train_eval - acc = 1.0
07/02/2019 07:17:32 - INFO - model_corebot101.bert.train.bert_train_eval - acc_and_f1 = 1.0
07/02/2019 07:17:32 - INFO - model_corebot101.bert.train.bert_train_eval - eval_loss = 0.026343628764152527
07/02/2019 07:17:32 - INFO - model_corebot101.bert.train.bert_train_eval - f1 = 1.0
07/02/2019 07:17:32 - INFO - model_corebot101.bert.train.bert_train_eval - global_step = 12
07/02/2019 07:17:32 - INFO - model_corebot101.bert.train.bert_train_eval - loss = 0.01322490597764651
07/02/2019 07:17:32 - INFO - model_corebot101.bert.train.bert_train_eval - DONE EVALUATING.
###Markdown
Verify the output directory
###Code
import os
from pathlib import Path
from tqdm import tqdm_notebook
home_dir = str(Path.home())
path = os.path.abspath(os.path.join(home_dir, "models/bert"))
files_with_size = {file:os.path.getsize(os.path.join(path, file)) for file in os.listdir(path)}
expected = {'config.json':326, 'eval_results.txt':119, 'pytorch_model.bin':437982182, 'vocab.txt':262030}
for f in tqdm_notebook(expected.keys(), desc='Verify Output'):
if f in files_with_size:
delta = abs(expected[f] - files_with_size[f]) / expected[f]
        if delta > 0.30:
            raise Exception(f'Size of output file {f} is outside the expected range.')
else:
raise Exception(f'Expected file {f} missing from output.')
###Output
_____no_output_____ |
Lets create some lists.ipynb | ###Markdown
---
###Code
#tuple = immutable list. Use parentheses.
t1 = (345, 674, 934)
t1
t1[1]
t1[1] = 10  # raises a TypeError because tuples are immutable
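# Illustrative addition (not in the original notebook; run separately from the line
# above, which raises): since tuples are immutable, "changing" an element means
# building a new tuple or going through a list.
t2 = t1[:1] + (10,) + t1[2:]   # new tuple with the second element replaced
t2
as_list = list(t1)             # or convert to a mutable list, edit, and convert back
as_list[1] = 10
tuple(as_list)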
###Output
_____no_output_____ |
docs/MagDeviation.ipynb | ###Markdown
Stationary Situation
###Code
# Imports used throughout this notebook (not shown in the captured cells)
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

data1 = np.genfromtxt(fname='longterm3.csv', usecols=range(1, 18), delimiter=",", names=True)
data1 = data1[200:]
data1_x = np.linspace(0, np.shape(data1)[0], np.shape(data1)[0])
print("longterm1 Samples: {}".format(np.shape(data1)[0]))
fig, axs = plt.subplots(1, 2, figsize=(18, 8), tight_layout=False)
axs[0].set_title("Heading over Time");
axs[0].grid(True);
axs[0].plot(data1_x, data1["heading"], linestyle='-');
axs[0].set(ylabel='Degrees', xlabel='Seconds');
axs[1].hist(data1["framerate"], bins=128);
axs[1].grid(True);
axs[1].set_yscale("log")
mean_1 = np.array([np.mean(data1["mx"]), np.mean(data1["my"]), np.mean(data1["mz"])]).round(4)
fig, axs = plt.subplots(3, 1, figsize=(18, 12))
num_avg = 50 # 5 seconds
ylim = 1
axs[0].set_title("Magnetometer X-Axis");
axs[0].grid(True);
axs[0].plot(data1_x, data1["mx"], color='r', alpha=0.25, linewidth=1, linestyle='-');
axs[0].plot(data1_x[num_avg-1:], np.convolve(data1["mx"], np.ones(num_avg), 'valid') / num_avg, color='r', linewidth=2, linestyle='-');
axs[0].plot(data1_x[10*num_avg-1:], np.convolve(data1["mx"], np.ones(10*num_avg), 'valid') / (10*num_avg), color='tab:blue', linewidth=2, linestyle='-');
axs[0].set_ylim(mean_1[0] - ylim, mean_1[0] + ylim)
axs[0].set(ylabel='mT');
axs[1].set_title("Magnetometer Y-Axis");
axs[1].grid(True);
axs[1].plot(data1_x, data1["my"], color='g', alpha=0.25, linewidth=1, linestyle='-');
axs[1].plot(data1_x[num_avg-1:], np.convolve(data1["my"], np.ones(num_avg), 'valid') / num_avg, color='g', linewidth=2, linestyle='-');
axs[1].plot(data1_x[10*num_avg-1:], np.convolve(data1["my"], np.ones(10*num_avg), 'valid') / (10*num_avg), color='tab:orange', linewidth=2, linestyle='-');
axs[1].set_ylim(mean_1[1] - ylim, mean_1[1] + ylim)
axs[1].set(ylabel='mT');
axs[2].set_title("Magnetometer Z-Axis");
axs[2].grid(True);
axs[2].plot(data1_x, data1["mz"], color='b', alpha=0.5, linewidth=1, linestyle='-');
axs[2].plot(data1_x[num_avg-1:], np.convolve(data1["mz"], np.ones(num_avg), 'valid') / num_avg, color='b', linewidth=2, linestyle='-');
axs[2].plot(data1_x[10*num_avg-1:], np.convolve(data1["mz"], np.ones(10*num_avg), 'valid') / (10*num_avg), color='tab:pink', linewidth=2, linestyle='-');
axs[2].set_ylim(mean_1[2] - ylim, mean_1[2] + ylim)
axs[2].set(ylabel='mT');
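# Side note (illustrative, not part of the original analysis): the smoothing above is a
# plain rolling mean implemented with np.convolve. For a window of n samples,
# np.convolve(x, np.ones(n), 'valid') / n averages every n consecutive values:
demo = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
np.convolve(demo, np.ones(3), 'valid') / 3   # -> array([2., 3., 4.])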
fig, axs = plt.subplots(1, 3, figsize=(18, 6))
xlim = 2
headin_diff = (np.max(data1["heading"]) - np.min(data1["heading"]))
axs[0].set_title("Heading over Mag X-Axis");
#axs[0].scatter(data1["mx"], data1["heading"], color='r', alpha=0.05, linewidth=1, linestyle='-');
#axs[0].hexbin(data1["mx"], data1["heading"], gridsize=32, bins=None, cmap='viridis')
axs[0].hist2d(data1["mx"], data1["heading"], bins=[64,32]);
x = np.linspace(mean_1[0] - xlim, mean_1[0] + xlim, 100)
axs[0].plot(x, stats.norm.pdf(x, np.mean(data1["mx"]), np.std(data1["mx"])) * headin_diff + np.min(data1["heading"]), color='tab:red', linewidth=5, linestyle='-');
axs[0].set_xlim(mean_1[0] - xlim, mean_1[0] + xlim)
axs[0].grid(True);
axs[1].set_title("Heading over Mag Y-Axis");
#axs[1].scatter(data1["my"], data1["heading"], color='g', alpha=0.05, linewidth=1, linestyle='-');
#axs[1].hexbin(data1["my"], data1["heading"], gridsize=32, bins=None, cmap='viridis')
axs[1].hist2d(data1["my"], data1["heading"], bins=[64,32]);
x = np.linspace(mean_1[1] - xlim, mean_1[1] + xlim, 100)
axs[1].plot(x, stats.norm.pdf(x, np.mean(data1["my"]), np.std(data1["my"])) * headin_diff + np.min(data1["heading"]), color='tab:green', linewidth=5, linestyle='-');
axs[1].set_xlim(mean_1[1] - xlim, mean_1[1] + xlim)
axs[1].grid(True);
axs[2].set_title("Heading over Mag Z-Axis");
#axs[2].scatter(data1["mz"], data1["heading"], color='b', alpha=0.05, linewidth=1, linestyle='-');
#axs[2].hexbin(data1["mz"], data1["heading"], gridsize=32, bins=None, cmap='viridis')
axs[2].hist2d(data1["mz"], data1["heading"], bins=[64,32]);
x = np.linspace(mean_1[2] - xlim, mean_1[2] + xlim, 100)
axs[2].plot(x, stats.norm.pdf(x, np.mean(data1["mz"]), np.std(data1["mz"])) * headin_diff + np.min(data1["heading"]), color='tab:blue', linewidth=5, linestyle='-');
axs[2].set_xlim(mean_1[2] - xlim, mean_1[2] + xlim)
axs[2].grid(True);
fig, axs = plt.subplots(1, 3, figsize=(18, 6))
xlim = 2
axs[0].set_title("Mag Y over Mag X");
#axs[0].scatter(data1["mx"], data1["my"], color='tab:blue', alpha=0.05, linewidth=1, linestyle='-');
axs[0].hist2d(data1["mx"], data1["my"], bins=[64,64]);
axs[0].set_xlim(mean_1[0] - xlim, mean_1[0] + xlim)
axs[0].set_ylim(mean_1[1] - xlim, mean_1[1] + xlim)
axs[0].grid(True);
axs[1].set_title("Mag Z over Mag Y");
#axs[1].scatter(data1["my"], data1["mz"], color='tab:orange', alpha=0.05, linewidth=1, linestyle='-');
axs[1].hist2d(data1["my"], data1["mz"], bins=[64,64]);
axs[1].set_xlim(mean_1[1] - xlim, mean_1[1] + xlim)
axs[1].set_ylim(mean_1[2] - xlim, mean_1[2] + xlim)
axs[1].grid(True);
axs[2].set_title("Mag X over Mag Z");
#axs[2].scatter(data1["mz"], data1["mx"], color='tab:green', alpha=0.05, linewidth=1, linestyle='-');
axs[2].hist2d(data1["mz"], data1["mx"], bins=[64,64]);
axs[2].set_xlim(mean_1[2] - xlim, mean_1[2] + xlim)
axs[2].set_ylim(mean_1[0] - xlim, mean_1[0] + xlim)
axs[2].grid(True);
###Output
_____no_output_____
###Markdown
360° Rotation
###Code
data2 = np.genfromtxt(fname='data4.csv', usecols=range(0, 2), delimiter=",", names=True)
data2_x = np.linspace(0, np.shape(data2)[0], np.shape(data2)[0])
print("Rotation Samples: {}".format(np.shape(data2)[0]))
fig, axs = plt.subplots(1, 1, figsize=(10, 8))
axs.set_title("heading");
axs.grid(True);
plot = axs.scatter(data2["pitch"], data2["heading"], c=data2_x, cmap='winter');
fig.colorbar(plot);
axs.set(ylabel='heading', xlabel='pitch');
###Output
_____no_output_____ |
notebooks/1.0-jab-hawthorne-proof-of-concept.ipynb | ###Markdown
Import
For this to work, the `01SEP2017.csv` and `01NOV2017.csv` need to be placed into the `data/raw` folder.
###Code
from pathlib import Path
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
SEP_CSV = Path('../data/raw') / '01SEP2017.csv'
NOV_CSV = Path('../data/raw') / '01NOV2017.csv'
raw_sep_df = pd.read_csv(SEP_CSV)
raw_nov_df = pd.read_csv(NOV_CSV)
###Output
_____no_output_____
###Markdown
Data
These csv files are 1-day snapshots of the `init_veh_stoph` and `trimet_stop_event` tables. One is from September 1st, 2017 and the other is from November 1st, 2017. These days occur before and after the bus lane change that occurred in October 2017, so they seem to be a good starting point to see what we can do.
The data is pre-filtered based on the following query parameters:
* Routes 4, 10 and 14
* Service stops only (stop type 0 or 5) ... I'm already thinking I should have included stop type 4/6, which are drive-thrus
* Stops 3637, 3641, 3633, 2642, 7856
  * 3637 - SE 11th and SE Madison [Just before the start of bus lane - actually begins at 10th]
  * 3641 - SE 7th and SE Madison
  * 3633 - SE Grand and SE Madison [End of bus lane]
  * 2642 - Hawthorne Bridge, Westbound [Likely outside of this analysis]
  * 7856 - SE 7th and SE Clay [4 (or 2) bus only]
* Weekday service
* Between 6:00 am (21600) and 12:00 pm (43200)
  * The bus lanes run from 6:00 am to 9:00 am in September, but were extended to 10:00 am in October.
Goal
The goal here is to see how long the buses take to traverse the bus lane from 9:00 am (32400) until 10:00 am (36000) in September, and see how that compares to that same time window in November.
To begin with, let's start with the time the buses arrive at stop 3637 to the time they arrive at stop 3633. That means lines 10 and 14 only, and the 10 doesn't actually come that often, so let's just look at the 14.
We have some ridership data available here too, so we can also convey the number of people affected by the time it takes to travel this area.
Columns
For now, let's limit the columns to the following:
* SERVICE_DATE
* VEHICLE_NUMBER
* TRAIN
* ROUTE_NUMBER
* LEAVE_TIME
* STOP_TIME
* ARRIVE_TIME
* LOCATION_ID
* ESTIMATED_LOAD
###Code
cols = ['SERVICE_DATE', 'VEHICLE_NUMBER', 'TRAIN', 'ROUTE_NUMBER', 'LEAVE_TIME', 'STOP_TIME', 'ARRIVE_TIME', 'LOCATION_ID', 'ESTIMATED_LOAD']
sep_df = raw_sep_df[cols][(raw_sep_df['ROUTE_NUMBER'] == 14) &
(raw_sep_df['STOP_TIME'].between(32400, 36000)) &
(raw_sep_df['LOCATION_ID'].isin([3637, 3633]))].copy()
nov_df = raw_nov_df[cols][(raw_nov_df['ROUTE_NUMBER'] == 14) &
(raw_nov_df['STOP_TIME'].between(32400, 36000)) &
(raw_nov_df['LOCATION_ID'].isin([3637, 3633]))].copy()
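# Helper sketch (not part of the original notebook): the *_TIME columns and the
# thresholds above are seconds after midnight, e.g. 32400 -> 09:00:00.
# `seconds_to_clock` is a hypothetical name introduced here for illustration only.
def seconds_to_clock(seconds):
    hours, rest = divmod(int(seconds), 3600)
    minutes, secs = divmod(rest, 60)
    return f"{hours:02d}:{minutes:02d}:{secs:02d}"

[seconds_to_clock(s) for s in (21600, 32400, 36000, 43200)]  # ['06:00:00', '09:00:00', '10:00:00', '12:00:00']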
sep_df
nov_df
grouped = sep_df.groupby('TRAIN')
print('SEPTEMBER [pre-lane change]')
sep_elapses = []
for name, group in grouped:
start_time = group[group['LOCATION_ID'] == 3637]['ARRIVE_TIME'].values[0]
end_time = group[group['LOCATION_ID'] == 3633]['ARRIVE_TIME'].values[0]
elapsed_time = end_time - start_time
sep_elapses.append(elapsed_time)
print(f"Train: {name}\n\tStart: {start_time}\n\tEnd: {end_time}\n\tElapsed Time: {elapsed_time}")
grouped = nov_df.groupby('TRAIN')
print('\n\nNOVEMBER [post-lane change]')
nov_elapses = []
for name, group in grouped:
start_time = group[group['LOCATION_ID'] == 3637]['ARRIVE_TIME'].values[0]
end_time = group[group['LOCATION_ID'] == 3633]['ARRIVE_TIME'].values[0]
elapsed_time = end_time - start_time
nov_elapses.append(elapsed_time)
print(f"Train: {name}\n\tStart: {start_time}\n\tEnd: {end_time}\n\tElapsed Time: {elapsed_time}")
print(sep_elapses)
print(nov_elapses)
print(f"Average September elapsed time: {np.array(sep_elapses).mean()}")
print(f"Average November elapsed time: {np.array(nov_elapses).mean()}")
###Output
[79, 90, 126, 92, 153]
[77, 139, 85, 156, 88]
Average September elapsed time: 108.0
Average November elapsed time: 109.0
|
deep-learning/fastai-docs/fastai_docs-master/dev_nb/enhance-wsdr.ipynb | ###Markdown
Model
###Code
class Block(nn.Module):
def __init__(self, n_feats, kernel_size, wn, act=nn.ReLU(True), res_scale=1):
super(Block, self).__init__()
self.res_scale = res_scale
body = []
        expand = 6     # widen channels 6x before the activation ("wide activation")
        linear = 0.8   # fraction of n_feats kept by the narrowing 1x1 conv that follows
body.append(
wn(nn.Conv2d(n_feats, n_feats*expand, 1, padding=1//2)))
body.append(act)
body.append(
wn(nn.Conv2d(n_feats*expand, int(n_feats*linear), 1, padding=1//2)))
body.append(
wn(nn.Conv2d(int(n_feats*linear), n_feats, kernel_size, padding=kernel_size//2)))
self.body = nn.Sequential(*body)
def forward(self, x):
res = self.body(x) * self.res_scale
res += x
return res
class WDSR(nn.Module):
def __init__(self, scale, n_resblocks, n_feats, res_scale, n_colors=3):
super().__init__()
# hyper-params
kernel_size = 3
act = nn.ReLU(True)
# wn = lambda x: x
wn = lambda x: torch.nn.utils.weight_norm(x)
mean, std = imagenet_stats
self.rgb_mean = torch.autograd.Variable(torch.FloatTensor(mean)).view([1, n_colors, 1, 1])
self.rgb_std = torch.autograd.Variable(torch.FloatTensor(std)).view([1, n_colors, 1, 1])
# define head module
head = []
head.append(
wn(nn.Conv2d(n_colors, n_feats,3,padding=3//2)))
# define body module
body = []
for i in range(n_resblocks):
body.append(
Block(n_feats, kernel_size, act=act, res_scale=res_scale, wn=wn))
# define tail module
tail = []
out_feats = scale*scale*n_colors
tail.append(
wn(nn.Conv2d(n_feats, out_feats, 3, padding=3//2)))
tail.append(nn.PixelShuffle(scale))
skip = []
skip.append(
wn(nn.Conv2d(n_colors, out_feats, 5, padding=5//2))
)
skip.append(nn.PixelShuffle(scale))
pad = []
pad.append(torch.nn.ReplicationPad2d(5//2))
# make object members
self.head = nn.Sequential(*head)
self.body = nn.Sequential(*body)
self.tail = nn.Sequential(*tail)
self.skip = nn.Sequential(*skip)
self.pad = nn.Sequential(*pad)
def forward(self, x):
mean = self.rgb_mean.to(x)
std = self.rgb_std.to(x)
x = (x - mean) / std
#if not self.training:
# x = self.pad(x)
s = self.skip(x)
x = self.head(x)
x = self.body(x)
x = self.tail(x)
x += s
x = x*std + mean
return x
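# Quick sanity check (illustrative, not from the original notebook): the tail and skip
# branches output scale*scale*n_colors channels, and nn.PixelShuffle rearranges them
# into the upscaled image, i.e. (B, C*r^2, H, W) -> (B, C, r*H, r*W).
import torch
import torch.nn as nn
r = 4
demo = torch.randn(1, 3 * r * r, 10, 10)
nn.PixelShuffle(r)(demo).shape  # torch.Size([1, 3, 40, 40])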
scale=4
n_resblocks=8
n_feats=64
res_scale= 1.
model = WDSR(scale, n_resblocks, n_feats, res_scale).cuda()
sz_lr = 72
scale,bs = 4,24
sz_hr = sz_lr*scale
data = get_data(bs, sz_lr, sz_hr)
#loss = CropTargetForLoss(F.l1_loss)
loss = F.mse_loss
learn = Learner(data, nn.DataParallel(model), loss_func=loss)
# learn.lr_find(num_it=500, start_lr=1e-5, end_lr=1000)
# learn.recorder.plot()
#learn.load('pixel')
lr = 1e-3
learn.fit_one_cycle(1, lr)
learn.save('pixel')
learn.fit_one_cycle(1, lr/5)
learn.save('pixel')
sz_lr = 512
scale,bs = 4,4
sz_hr = sz_lr*scale
data = get_data(bs, sz_lr, sz_hr)
#loss = CropTargetForLoss(F.l1_loss)
loss = F.mse_loss
learn = Learner(data, nn.DataParallel(model), loss_func=loss)
learn = learn.load('pixel')
learn.fit_one_cycle(1, lr)
learn.save('pixel')
m_vgg_feat = vgg16_bn(True).features.cuda().eval()
requires_grad(m_vgg_feat, False)
blocks = [i-1 for i,o in enumerate(children(m_vgg_feat))
if isinstance(o,nn.MaxPool2d)]
blocks, [m_vgg_feat[i] for i in blocks]
class FeatureLoss(nn.Module):
def __init__(self, m_feat, layer_ids, layer_wgts):
super().__init__()
self.m_feat = m_feat
self.loss_features = [self.m_feat[i] for i in layer_ids]
self.hooks = hook_outputs(self.loss_features, detach=False)
self.wgts = layer_wgts
self.metrics = {}
self.metric_names = ['L1'] + [f'feat_{i}' for i in range(len(layer_ids))]
for name in self.metric_names: self.metrics[name] = 0.
def make_feature(self, bs, o, clone=False):
feat = o.view(bs, -1)
if clone: feat = feat.clone()
return feat
def make_features(self, x, clone=False):
bs = x.shape[0]
self.m_feat(x)
return [self.make_feature(bs, o, clone) for o in self.hooks.stored]
def forward(self, input, target):
out_feat = self.make_features(target, clone=True)
in_feat = self.make_features(input)
l1_loss = F.l1_loss(input,target)/100
self.feat_losses = [l1_loss]
self.feat_losses += [F.mse_loss(f_in, f_out)*w
for f_in, f_out, w in zip(in_feat, out_feat, self.wgts)]
for i,name in enumerate(self.metric_names): self.metrics[name] = self.feat_losses[i]
self.metrics['L1'] = l1_loss
self.loss = sum(self.feat_losses)
return self.loss*100
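# Minimal sketch of the idea behind FeatureLoss (illustrative only, plain PyTorch, no
# fastai hooks): compare intermediate activations of a shared feature extractor and
# sum a weighted feature MSE with a small pixel-wise L1 term. The toy_* names are
# introduced here for illustration and are not part of the notebook's model.
import torch
import torch.nn as nn
import torch.nn.functional as F

def toy_feature_loss(pred, target, extractor, wgt=1.0):
    with torch.no_grad():
        feat_t = extractor(target)      # target features, no gradient needed
    feat_p = extractor(pred)            # prediction features
    return F.l1_loss(pred, target) / 100 + wgt * F.mse_loss(feat_p, feat_t)

toy_extractor = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
toy_feature_loss(torch.rand(2, 3, 16, 16), torch.rand(2, 3, 16, 16), toy_extractor)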
class ReportLossMetrics(LearnerCallback):
_order = -20 #Needs to run before the recorder
def on_train_begin(self, **kwargs):
self.metric_names = self.learn.loss_func.metric_names
self.learn.recorder.add_metric_names(self.metric_names)
def on_epoch_begin(self, **kwargs):
self.metrics = {}
for name in self.metric_names:
self.metrics[name] = 0.
self.nums = 0
def on_batch_end(self, last_target, train, **kwargs):
if not train:
bs = last_target.size(0)
for name in self.metric_names:
self.metrics[name] += bs * self.learn.loss_func.metrics[name]
self.nums += bs
def on_epoch_end(self, **kwargs):
if self.nums:
metrics = [self.metrics[name]/self.nums for name in self.metric_names]
self.learn.recorder.add_metrics(metrics)
sz_lr = 200
scale,bs = 4,4
sz_hr = sz_lr*scale
data = get_data(bs, sz_lr, sz_hr)
feat_loss = FeatureLoss(m_vgg_feat, blocks[:2], [0.25,0.45,0.30])
learn = Learner(data, nn.DataParallel(model), loss_func=feat_loss, callback_fns=[ReportLossMetrics])
#learn = learn.load('pixel')
# learn.lr_find()
# learn.recorder.plot()
lr=1e-3
learn.fit_one_cycle(1, lr)
learn.save('enhance_feat')
learn = learn.load('enhance_feat')
def make_img(x, idx=0):
return Image(torch.clamp(x.cpu(),0,1)[idx])
def plot_x_y_pred(x, pred, y, figsize):
rows=x.shape[0]
fig, axs = plt.subplots(rows,3,figsize=figsize)
for i in range(rows):
make_img(x, i).show(ax=axs[i, 0])
make_img(pred, i).show(ax=axs[i, 1])
make_img(y, i).show(ax=axs[i, 2])
plt.tight_layout()
def plot_some(learn, do_denorm=True, figsize=None):
x, y = next(iter(learn.data.valid_dl))
y_pred = model(x)
y_pred = y_pred.detach()
x = x.detach()
y = y.detach()
if figsize is None: figsize=y_pred.shape[-2:]
plot_x_y_pred(x[0:4], y_pred[0:4], y[0:4], figsize=figsize)
sz_lr = 64
scale,bs = 4,24
sz_hr = sz_lr*scale
data = get_data(bs, sz_lr, sz_hr)
loss = F.mse_loss
learn = Learner(data, nn.DataParallel(model), loss_func=loss)
learn = learn.load('enhance_feat')
plot_some(learn, figsize=(256,256))
learn = learn.load('pixel')
plot_some(learn, figsize=(256,256))
###Output
_____no_output_____ |
pyddm-exploration/demo.ipynb | ###Markdown
System requirements
* Python 3.5 or higher
* Scipy/numpy
* Paranoid scientist (pip install paranoid-scientist)
* For plotting features, matplotlib
* For parallelization support, pathos
Hello World!
###Code
import matplotlib.pyplot as plt
from ddm import Model
m = Model()
s = m.solve() # Simulate the model using the Model.solve() method to generate a Solution object
plt.plot(s.model.t_domain(), s.pdf_corr())
plt.savefig("helloworld.png")
plt.show()
s.model
help(Model)
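# For intuition (illustrative sketch, not part of PyDDM): a drift-diffusion trial is a
# noisy accumulator dx = drift*dt + noise*sqrt(dt)*N(0,1) that runs until it crosses
# +bound (correct) or -bound (error). The parameter values below are arbitrary examples.
import numpy as np

def simulate_ddm_trial(drift=2.2, noise=1.5, bound=1.1, dt=0.01, t_max=2.0, rng=np.random):
    x, t = 0.0, 0.0
    while abs(x) < bound and t < t_max:
        x += drift * dt + noise * np.sqrt(dt) * rng.randn()
        t += dt
    return t, x >= bound  # (response time, correct?)

simulate_ddm_trial()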
from ddm import Model
from ddm.models import DriftConstant, NoiseConstant, BoundConstant, OverlayNonDecision
from ddm.functions import fit_adjust_model, display_model
model = Model(name='Simple model',
drift=DriftConstant(drift=2.2),
noise=NoiseConstant(noise=1.5),
bound=BoundConstant(B=1.1),
overlay=OverlayNonDecision(nondectime=.1),
dx=.001, dt=.01, T_dur=2)
display_model(model)
sol = model.solve()
samp = sol.resample(1000)
sol.mean_decision_time()
from ddm import Fittable
from ddm.models import LossRobustBIC
from ddm.functions import fit_adjust_model
model_fit = Model(name='Simple model (fitted)',
drift=DriftConstant(drift=Fittable(minval=0, maxval=4)),
noise=NoiseConstant(noise=Fittable(minval=.5, maxval=4)),
bound=BoundConstant(B=1.1),
overlay=OverlayNonDecision(nondectime=Fittable(minval=0, maxval=1)),
dx=.001, dt=.01, T_dur=2)
fit_adjust_model(samp, model_fit,
fitting_method="differential_evolution",
lossfunction=LossRobustBIC, verbose=False)
display_model(model_fit)
samp
import ddm.plot
import matplotlib.pyplot as plt
ddm.plot.plot_fit_diagnostics(model=model_fit, sample=samp)
plt.savefig("simple-fit.png")
plt.show()
print(sol.prob_correct())
print(sol.pdf_err())
from ddm import Sample
import pandas
df_rt = pandas.read_csv("https://raw.githubusercontent.com/mwshinn/PyDDM/master/doc/downloads/roitman_rts.csv")
df_rt
df_rt = df_rt[df_rt["monkey"] == 1] # Only monkey 1
# Remove short and long RTs, as in 10.1523/JNEUROSCI.4684-04.2005.
# This is not strictly necessary, but is performed here for
# compatibility with this study.
df_rt = df_rt[df_rt["rt"] > .1] # Remove trials less than 100ms
df_rt = df_rt[df_rt["rt"] < 1.65] # Remove trials greater than 1650ms
# Create a sample object from our data. This is the standard input
# format for fitting procedures. Since RT and correct/error are
# both mandatory columns, their names are specified by command line
# arguments.
roitman_sample = Sample.from_pandas_dataframe(df_rt, rt_column_name="rt", correct_column_name="correct")
import ddm.models
class DriftCoherence(ddm.models.Drift):
name = "Drift depends linearly on coherence"
required_parameters = ["driftcoh"] # <-- Parameters we want to include in the model
required_conditions = ["coh"] # <-- Task parameters ("conditions"). Should be the same name as in the sample.
# We must always define the get_drift function, which is used to compute the instantaneous value of drift.
def get_drift(self, conditions, **kwargs):
return self.driftcoh * conditions['coh']
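# Quick check (illustrative), assuming Dependence subclasses can be instantiated
# directly with their parameters as keyword arguments, as in the cells above:
# with driftcoh=10 and a 3.2% coherence trial, the instantaneous drift is 10 * 0.032 = 0.32.
DriftCoherence(driftcoh=10).get_drift(conditions={'coh': 0.032})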
from ddm import Model, Fittable
from ddm.functions import fit_adjust_model, display_model
from ddm.models import NoiseConstant, BoundConstant, OverlayChain, OverlayNonDecision, OverlayPoissonMixture
model_rs = Model(name='Roitman data, drift varies with coherence',
drift=DriftCoherence(driftcoh=Fittable(minval=0, maxval=20)),
noise=NoiseConstant(noise=1),
bound=BoundConstant(B=Fittable(minval=.1, maxval=1.5)),
# Since we can only have one overlay, we use
# OverlayChain to string together multiple overlays.
# They are applied sequentially in order. OverlayNonDecision
# implements a non-decision time by shifting the
# resulting distribution of response times by
# `nondectime` seconds.
overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fittable(minval=0, maxval=.4)),
OverlayPoissonMixture(pmixturecoef=.02,
rate=1)]),
dx=.001, dt=.01, T_dur=2)
# Fitting this will also be fast because PyDDM can automatically
# determine that DriftCoherence will allow an analytical solution.
fit_model_rs = fit_adjust_model(sample=roitman_sample, model=model_rs, verbose=False)
display_model(fit_model_rs)
import ddm.plot
import matplotlib.pyplot as plt
ddm.plot.plot_fit_diagnostics(model=fit_model_rs, sample=roitman_sample)
plt.savefig("roitman-fit.png")
plt.show()
# ddm.plot.model_gui(model=fit_model_rs, sample=roitman_sample)
class DriftCoherenceLeak(ddm.models.Drift):
name = "Leaky drift depends linearly on coherence"
required_parameters = ["driftcoh", "leak"] # <-- Parameters we want to include in the model
required_conditions = ["coh"] # <-- Task parameters ("conditions"). Should be the same name as in the sample.
# We must always define the get_drift function, which is used to compute the instantaneous value of drift.
def get_drift(self, x, conditions, **kwargs):
return self.driftcoh * conditions['coh'] + self.leak * x
from ddm.models import BoundCollapsingExponential
model_leak = Model(name='Roitman data, leaky drift varies with coherence',
drift=DriftCoherenceLeak(driftcoh=Fittable(minval=0, maxval=20),
leak=Fittable(minval=-10, maxval=10)),
noise=NoiseConstant(noise=1),
bound=BoundCollapsingExponential(B=Fittable(minval=0.5, maxval=3),
tau=Fittable(minval=.0001, maxval=5)),
# Since we can only have one overlay, we use
# OverlayChain to string together multiple overlays.
# They are applied sequentially in order. OverlayDelay
# implements a non-decision time by shifting the
# resulting distribution of response times by
# `delaytime` seconds.
overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fittable(minval=0, maxval=.4)),
OverlayPoissonMixture(pmixturecoef=.02,
rate=1)]),
dx=.01, dt=.01, T_dur=2)
# from ddm.plot import model_gui
# model_gui(model_leak, sample=roitman_sample)
# fit_model_leak = fit_adjust_model(sample=roitman_sample, model=model_leak)
# ddm.plot.plot_fit_diagnostics(model=fit_model_leak, sample=roitman_sample)
# plt.savefig("leak-collapse-fit.png")
###Output
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(9.542903321809481, minval=0, maxval=20), leak=Fitted(1.7326521345463863, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.6897102161309165, minval=0.5, maxval=3), tau=Fitted(2.3832278058729726, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.35096334903519405, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1967.1418614641623
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(7.177639592668301, minval=0, maxval=20), leak=Fitted(9.110567517297387, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.142464682107733, minval=0.5, maxval=3), tau=Fitted(0.9120053245906932, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.06305226749400683, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=5739.193193187177
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(18.211484734330153, minval=0, maxval=20), leak=Fitted(-5.222361158255124, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.4033542968189374, minval=0.5, maxval=3), tau=Fitted(3.554566351511098, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.3999651366249339, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1957.1724192501015
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(7.361573370519599, minval=0, maxval=20), leak=Fitted(1.835438487253862, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.757022772156284, minval=0.5, maxval=3), tau=Fitted(4.877765735792014, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.38577218494091436, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=2805.2526298289877
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(15.537436835568695, minval=0, maxval=20), leak=Fitted(-6.39078185666055, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.551943416899259, minval=0.5, maxval=3), tau=Fitted(4.0837129155818195, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.17437538882229442, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=3796.138259254554
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(10.984442854482344, minval=0, maxval=20), leak=Fitted(0.724221849309914, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.540173801190216, minval=0.5, maxval=3), tau=Fitted(1.781842043390649, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.27575674887090934, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=4121.032247360254
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(14.114991807937429, minval=0, maxval=20), leak=Fitted(-5.116286791213202, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.39670851796677, minval=0.5, maxval=3), tau=Fitted(1.9350929617331754, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.2910116926278514, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=6720.728702005034
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(3.3506744846401055, minval=0, maxval=20), leak=Fitted(-0.9499488780508014, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.8901959220308424, minval=0.5, maxval=3), tau=Fitted(1.6978901989510478, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.1481958916723394, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1936.3671952445093
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(6.35931026055852, minval=0, maxval=20), leak=Fitted(-3.5854375013312225, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.76999957298265, minval=0.5, maxval=3), tau=Fitted(2.659565600940745, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.37701582645469495, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=8021.808762332517
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(16.34142114372201, minval=0, maxval=20), leak=Fitted(-9.157961677811606, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.5227808245057424, minval=0.5, maxval=3), tau=Fitted(0.03123285865883485, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.12236997682960665, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=110939.6746665163
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(19.412827887291158, minval=0, maxval=20), leak=Fitted(-4.120283854196999, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.118103761745533, minval=0.5, maxval=3), tau=Fitted(4.317386728046326, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.25839703424567373, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1473.3540186071227
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(17.19128285503821, minval=0, maxval=20), leak=Fitted(7.080754899714408, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.918483591292807, minval=0.5, maxval=3), tau=Fitted(2.803798308945293, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.32444858966643536, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1559.7075397244919
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(6.126986236229552, minval=0, maxval=20), leak=Fitted(-6.757327480032124, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.0985219833565336, minval=0.5, maxval=3), tau=Fitted(3.9734756207680704, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.24577080399663306, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=2705.4085137652946
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(9.81024590295237, minval=0, maxval=20), leak=Fitted(4.414833168535617, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.9352040071677286, minval=0.5, maxval=3), tau=Fitted(3.1794317538415537, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.0981190149127948, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=5725.197278222273
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(8.917671444921647, minval=0, maxval=20), leak=Fitted(7.405944933563182, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.655355543598668, minval=0.5, maxval=3), tau=Fitted(2.0514875797889305, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.27138708958235086, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1579.2415886404933
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(8.391054383955462, minval=0, maxval=20), leak=Fitted(-2.1854671146279236, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.4490639030090307, minval=0.5, maxval=3), tau=Fitted(1.4782935430434794, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.21509780576878035, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=5.977359880538344
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(19.12120117315572, minval=0, maxval=20), leak=Fitted(-2.5773791958162207, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.305815159227506, minval=0.5, maxval=3), tau=Fitted(3.9329276265909474, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.14048189022126337, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=8208.892193784584
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(19.66213550801708, minval=0, maxval=20), leak=Fitted(-4.208674746394895, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.098494814172506, minval=0.5, maxval=3), tau=Fitted(1.4316826901558302, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.3934259309190975, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=8326.03249439039
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(11.216828411447018, minval=0, maxval=20), leak=Fitted(-9.966378924477729, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.9564300459216292, minval=0.5, maxval=3), tau=Fitted(0.5442469163506545, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.1081117292070683, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=3258.4974691630673
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(17.006821549469127, minval=0, maxval=20), leak=Fitted(8.296713581353528, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.525446664499696, minval=0.5, maxval=3), tau=Fitted(4.213534769524867, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.10287346403935989, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=13202.694860671027
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(5.57059882668626, minval=0, maxval=20), leak=Fitted(6.453336787925689, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.593777457196794, minval=0.5, maxval=3), tau=Fitted(2.225800734802238, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.3427598005743314, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=3530.998519492379
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(4.711925034412583, minval=0, maxval=20), leak=Fitted(-4.524427340656998, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.048837296006622, minval=0.5, maxval=3), tau=Fitted(0.09086413073586863, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.23912047849103146, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=35489.39082112373
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(3.4873034093977395, minval=0, maxval=20), leak=Fitted(7.862200205180187, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.658720623711777, minval=0.5, maxval=3), tau=Fitted(3.6804994246028446, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.16709734085887742, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=4541.311579476601
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(1.1353143538374226, minval=0, maxval=20), leak=Fitted(-2.8121188665411245, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.453826207432596, minval=0.5, maxval=3), tau=Fitted(3.330241670960722, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.32564344640727416, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=4519.117293834119
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(14.370686383947106, minval=0, maxval=20), leak=Fitted(3.9982082504333527, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.5796426752518755, minval=0.5, maxval=3), tau=Fitted(0.21677000693464388, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.301243260529459, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=748.6499972205659
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(0.0712151382052344, minval=0, maxval=20), leak=Fitted(-5.6704408999905676, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.935522069829656, minval=0.5, maxval=3), tau=Fitted(1.215142701277994, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.1378720789603346, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=12838.190216137778
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(1.8299866002008507, minval=0, maxval=20), leak=Fitted(2.665109717414569, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.6043895098200496, minval=0.5, maxval=3), tau=Fitted(0.7628835045867504, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.3549721335761616, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=2758.9336741567736
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(12.999457436591813, minval=0, maxval=20), leak=Fitted(9.543538477726898, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.7178716574987491, minval=0.5, maxval=3), tau=Fitted(2.307142454359793, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.31306104895877596, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=4641.146769079277
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(18.660808240049093, minval=0, maxval=20), leak=Fitted(3.0662351866574444, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.1385992312550393, minval=0.5, maxval=3), tau=Fitted(4.647611917032916, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.2043855464714325, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=9269.37809315032
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(11.70982377885155, minval=0, maxval=20), leak=Fitted(-6.800419298248432, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.768239565824627, minval=0.5, maxval=3), tau=Fitted(3.434876920678647, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.02110817111991392, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=11889.195042078032
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(14.66832697413967, minval=0, maxval=20), leak=Fitted(0.4921644982503892, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.007061826588362, minval=0.5, maxval=3), tau=Fitted(4.4207994818298415, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.31615391918460084, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1368.1487255451202
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(16.56756363481516, minval=0, maxval=20), leak=Fitted(-0.7223831507908851, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.1706431859183954, minval=0.5, maxval=3), tau=Fitted(4.760261191611825, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.003398693824390203, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=11393.171159361424
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(12.309148771071909, minval=0, maxval=20), leak=Fitted(1.20077183637195, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.880016048851676, minval=0.5, maxval=3), tau=Fitted(1.6489022704849163, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.22556979725823584, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=227.09667567265478
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(1.4040191927759373, minval=0, maxval=20), leak=Fitted(-7.458202688675208, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.930354826761679, minval=0.5, maxval=3), tau=Fitted(1.147736682894549, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.36630733420361733, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=15402.661413883514
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(14.479588546075604, minval=0, maxval=20), leak=Fitted(-8.083588713055905, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.5857379388951123, minval=0.5, maxval=3), tau=Fitted(2.741601596966736, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.007638713549017462, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=3515.124593902118
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(6.613485659215479, minval=0, maxval=20), leak=Fitted(5.610907275252828, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.728550946841991, minval=0.5, maxval=3), tau=Fitted(4.811698419792194, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.1977654856630871, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=4132.9363106519
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(2.768243432964475, minval=0, maxval=20), leak=Fitted(6.651869885618041, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.5462735049372647, minval=0.5, maxval=3), tau=Fitted(2.975245704454408, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.052871738019870135, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=5674.582123415394
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(19.87760106313388, minval=0, maxval=20), leak=Fitted(-4.689791661264407, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.2175369170718895, minval=0.5, maxval=3), tau=Fitted(4.3692084278773615, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.17849619532548983, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=3660.0579299619585
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(0.7176539080728208, minval=0, maxval=20), leak=Fitted(-6.123605808671152, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.4808650722680083, minval=0.5, maxval=3), tau=Fitted(0.4024966300783519, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.23239227009674812, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=16633.0704465021
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(13.697174232648194, minval=0, maxval=20), leak=Fitted(2.4802866112250532, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.8186190795707172, minval=0.5, maxval=3), tau=Fitted(0.6306430523062001, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.3396716168192412, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=199.13580478471357
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(4.140131462331764, minval=0, maxval=20), leak=Fitted(-3.295448348387735, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.0147392628521819, minval=0.5, maxval=3), tau=Fitted(0.2727993774320825, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.3040557136066886, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=3918.5874695538973
[Verbose fitting output condensed: 105 candidate model evaluations printed during fitting of Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh fitted in [0, 20], leak fitted in [-10, 10]), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B fitted in [0.5, 3], tau fitted in [0.0001, 5]), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime fitted in [0, 0.4]), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2). Each line reports one candidate parameter set and its loss; losses in this segment range from about -91.2 (best candidate: driftcoh≈12.70, leak≈-6.60, B≈0.62, tau≈0.80, nondectime≈0.30) up to about 18388.1.]
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(1.7016170593905393, minval=0, maxval=20), leak=Fitted(3.403308982057378, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.0340857544204647, minval=0.5, maxval=3), tau=Fitted(0.4990604132664336, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.11548981814212166, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1951.6512719778734
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(4.620220040271752, minval=0, maxval=20), leak=Fitted(-4.136953304472781, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.6130143450127488, minval=0.5, maxval=3), tau=Fitted(1.9688778558635178, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.12881041223235157, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=4105.766662542889
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(6.325590611714693, minval=0, maxval=20), leak=Fitted(-0.42557548268072454, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.7777129860097391, minval=0.5, maxval=3), tau=Fitted(1.3181006127984902, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.039743950711769394, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=237.55902460073858
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(10.555671912814596, minval=0, maxval=20), leak=Fitted(2.0074725854691655, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.6710413482680693, minval=0.5, maxval=3), tau=Fitted(0.34335094382133224, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.1521315616713835, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=3033.0874096901657
differential_evolution step 1: f(x)= -91.166
[Verbose fit log, condensed: 75 further candidate evaluations in the same Model(…) loss=… format between the step-1 and step-2 summaries; losses range from -52.8 to 40235.3.]
differential_evolution step 2: f(x)= -91.166
[Verbose fit log, condensed: 14 further candidate evaluations after the step-2 summary; losses range from -23.4 to 30492.8.]
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(18.0670341987749, minval=0, maxval=20), leak=Fitted(-3.6084512836142526, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.655355543598668, minval=0.5, maxval=3), tau=Fitted(0.7820509401811577, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.19094593285965186, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=2986.3772398968545
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(14.18904969247762, minval=0, maxval=20), leak=Fitted(1.7326521345463863, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.016362784624726, minval=0.5, maxval=3), tau=Fitted(0.9712597506065079, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.32412417024533086, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1114.8263213218959
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(9.08182665953621, minval=0, maxval=20), leak=Fitted(-7.2502786577504015, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.619823130171536, minval=0.5, maxval=3), tau=Fitted(3.5042999454313213, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.30807435707883923, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=4916.7288886886445
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(7.672603519389291, minval=0, maxval=20), leak=Fitted(-6.560605929397315, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.8313611587623053, minval=0.5, maxval=3), tau=Fitted(1.4316826901558302, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.22470423904118744, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=12430.064143692538
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(15.38174424966538, minval=0, maxval=20), leak=Fitted(-5.989648056791019, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.9532656179500085, minval=0.5, maxval=3), tau=Fitted(1.1407958489206806, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.23071388739771814, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=96.22996823038771
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(2.286985316235417, minval=0, maxval=20), leak=Fitted(4.015412126359679, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.9395629206833962, minval=0.5, maxval=3), tau=Fitted(3.769206840061745, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.28020559401412076, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1926.9076740171356
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(10.708468771998088, minval=0, maxval=20), leak=Fitted(-4.125782982717615, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.8703227685536883, minval=0.5, maxval=3), tau=Fitted(1.586532720607829, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.2583048778122204, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=-81.67357229336588
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(11.84057835524255, minval=0, maxval=20), leak=Fitted(0.15560621647957573, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.6270578811465459, minval=0.5, maxval=3), tau=Fitted(4.828233939521642, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.30055665869816695, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=7119.543166050579
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(15.680266254350405, minval=0, maxval=20), leak=Fitted(-7.833204919706887, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.844798727998535, minval=0.5, maxval=3), tau=Fitted(0.8074806512032924, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.3680008850840164, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=13670.680591455386
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(2.458917661480693, minval=0, maxval=20), leak=Fitted(-2.4496823496522815, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.1833904052739894, minval=0.5, maxval=3), tau=Fitted(2.077414010476643, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.19583787642101533, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=646.790349738794
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(10.887199241883467, minval=0, maxval=20), leak=Fitted(-2.712984202424187, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.5796426752518755, minval=0.5, maxval=3), tau=Fitted(0.6846872561961617, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.29717210241616693, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=13341.743968092895
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(13.871425490750944, minval=0, maxval=20), leak=Fitted(-2.492807125888399, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.5051177894204255, minval=0.5, maxval=3), tau=Fitted(3.18000510562985, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.09911599567328624, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=5248.660833423043
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(9.99143190013196, minval=0, maxval=20), leak=Fitted(-2.4225830160589035, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.5966528351825326, minval=0.5, maxval=3), tau=Fitted(1.243222883436207, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.24211754003541427, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1262.7124942615483
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(5.886889199395973, minval=0, maxval=20), leak=Fitted(-2.815683400230329, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.210093934375747, minval=0.5, maxval=3), tau=Fitted(0.5865949163762647, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.23481982886635808, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=12906.179428003952
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(12.802718160891565, minval=0, maxval=20), leak=Fitted(-4.938015109896019, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.8020613941783195, minval=0.5, maxval=3), tau=Fitted(1.9861169166735386, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.09862080282466908, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=2158.8654019611095
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(8.942398478296917, minval=0, maxval=20), leak=Fitted(-9.637588449809133, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.3155270632104954, minval=0.5, maxval=3), tau=Fitted(3.547159978944094, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.3079741566136504, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1395.2845065365736
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(11.409956474194372, minval=0, maxval=20), leak=Fitted(-5.300170004508667, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.8015657866763264, minval=0.5, maxval=3), tau=Fitted(0.8444697256206934, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.28602997883898096, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=9982.945720252035
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(19.122821848038093, minval=0, maxval=20), leak=Fitted(-0.37504863979786807, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.3084109697503705, minval=0.5, maxval=3), tau=Fitted(0.5072668553051465, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.3249087727543247, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=823.5043470040655
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(15.947774559229579, minval=0, maxval=20), leak=Fitted(-8.17571272033274, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.5679870063944434, minval=0.5, maxval=3), tau=Fitted(1.6489022704849163, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.2816564188545346, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1843.2936884818869
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(17.473255197282874, minval=0, maxval=20), leak=Fitted(-2.5410912143469377, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.2469716829726964, minval=0.5, maxval=3), tau=Fitted(0.7759186177430133, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.19960321335391282, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=5121.5701670908875
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(4.3940334029324735, minval=0, maxval=20), leak=Fitted(-2.1656632144893107, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.8069683522252675, minval=0.5, maxval=3), tau=Fitted(2.741601596966736, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.3087417570609954, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1690.765325667755
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(17.96304611906063, minval=0, maxval=20), leak=Fitted(-8.29813014469433, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.3478785686910941, minval=0.5, maxval=3), tau=Fitted(2.0879923954542803, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.19003966368368494, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=81.56449642678815
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(3.634199765552256, minval=0, maxval=20), leak=Fitted(-3.6454162513666266, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.2992857700838554, minval=0.5, maxval=3), tau=Fitted(1.0060408180374776, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.19617993941116085, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=2262.6278506978733
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(19.87760106313388, minval=0, maxval=20), leak=Fitted(-7.284177337151888, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.2175369170718895, minval=0.5, maxval=3), tau=Fitted(3.1439899126734625, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.08385585073365297, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1947.6044005366707
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(10.288995491979646, minval=0, maxval=20), leak=Fitted(-5.80101647339287, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.9891872633610252, minval=0.5, maxval=3), tau=Fitted(1.8583306371796615, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.23069711043592595, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=4145.917879779354
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(6.854960891979727, minval=0, maxval=20), leak=Fitted(-9.533577729339896, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.8186190795707172, minval=0.5, maxval=3), tau=Fitted(2.6736417236521692, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.3396716168192412, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=897.0803008215316
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(12.059908925231062, minval=0, maxval=20), leak=Fitted(-4.020577006890669, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.7394815292461265, minval=0.5, maxval=3), tau=Fitted(0.33967137992341856, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.3926583682729577, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1457.766996609064
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(15.330109447997533, minval=0, maxval=20), leak=Fitted(-9.064242244883317, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.3931189331365954, minval=0.5, maxval=3), tau=Fitted(3.6837668501181056, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.2861626589771369, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=704.4635528971965
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(13.896504556421752, minval=0, maxval=20), leak=Fitted(-1.8397239066369875, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.4895889059291028, minval=0.5, maxval=3), tau=Fitted(3.458263260530998, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.19473937042016998, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=2959.2824891011705
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(8.368507234032409, minval=0, maxval=20), leak=Fitted(1.1032912204809442, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.7303670289631872, minval=0.5, maxval=3), tau=Fitted(1.5848608783322775, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.23598372507426651, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=-237.46384201502656
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(5.246289677751424, minval=0, maxval=20), leak=Fitted(-3.3388799706816297, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.4325778327285916, minval=0.5, maxval=3), tau=Fitted(1.2039529915356595, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.21796031458248724, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=9419.264504546332
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(2.097668447771248, minval=0, maxval=20), leak=Fitted(0.7224774995266126, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.754224624802943, minval=0.5, maxval=3), tau=Fitted(2.177901875408141, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.1873286981912907, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=621.8354872978528
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(8.882484609170413, minval=0, maxval=20), leak=Fitted(-0.38610949518934845, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.6643209509979948, minval=0.5, maxval=3), tau=Fitted(0.5381431740721603, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.2353090105422827, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1895.616343367472
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(1.4143266547570992, minval=0, maxval=20), leak=Fitted(6.383662727657722, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.8462289304048458, minval=0.5, maxval=3), tau=Fitted(0.6255873307037183, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.37540074327119655, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=2337.5038045556853
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(7.118718129151924, minval=0, maxval=20), leak=Fitted(-5.325075472938767, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.611192316897464, minval=0.5, maxval=3), tau=Fitted(0.8665224390026167, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.36827210429937635, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=375.3985746618949
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(8.527595583505505, minval=0, maxval=20), leak=Fitted(5.3826627252158605, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.3123662910256013, minval=0.5, maxval=3), tau=Fitted(0.75548775898942, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.39432053335096107, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1313.986591338055
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(16.179228915126888, minval=0, maxval=20), leak=Fitted(-1.018934138771217, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.4659257996270787, minval=0.5, maxval=3), tau=Fitted(2.1565259854245276, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.11152264702801579, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=733.9903903737566
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(0.7319353724040933, minval=0, maxval=20), leak=Fitted(4.108273882731628, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.9277959805331317, minval=0.5, maxval=3), tau=Fitted(2.168128440305765, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.2713651436381712, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=2457.5183288416133
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(5.7037694419760365, minval=0, maxval=20), leak=Fitted(-5.257025637118092, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.7134299318005117, minval=0.5, maxval=3), tau=Fitted(3.585386365698042, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.17831738943284722, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=7374.997006884357
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(9.531448333564251, minval=0, maxval=20), leak=Fitted(-5.003658769859109, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.9681395935279186, minval=0.5, maxval=3), tau=Fitted(2.5093587784326203, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.21298248399299738, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1251.8945998339605
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(2.9151453634545526, minval=0, maxval=20), leak=Fitted(1.5711268592670824, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.8567488192353405, minval=0.5, maxval=3), tau=Fitted(1.7868402989198273, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.3191669228633267, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=3894.4376529046203
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(17.688657557048714, minval=0, maxval=20), leak=Fitted(-2.2386209421176173, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.4593286497631857, minval=0.5, maxval=3), tau=Fitted(1.6099684355921615, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.26897124431883523, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=2933.6935695989814
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(8.751914468859542, minval=0, maxval=20), leak=Fitted(-4.663149641847756, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.8599687266739378, minval=0.5, maxval=3), tau=Fitted(1.0040973143851322, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.20230357014682865, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=7037.824272424863
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(14.966568505774024, minval=0, maxval=20), leak=Fitted(-0.6066190377395064, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.2242783362520897, minval=0.5, maxval=3), tau=Fitted(1.0804099814124268, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.34002052480857264, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=465.72408461881565
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(12.37690227164366, minval=0, maxval=20), leak=Fitted(4.606791796006031, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.2258206703309056, minval=0.5, maxval=3), tau=Fitted(3.874419155619654, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.07952921319990024, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=10958.891197591503
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(3.7237330991124207, minval=0, maxval=20), leak=Fitted(-1.836248159750733, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.6823322595166401, minval=0.5, maxval=3), tau=Fitted(3.1474947013481063, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.2026014341315845, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=802.7964437083696
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(7.419697415633751, minval=0, maxval=20), leak=Fitted(-4.515927496190761, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.518482407424149, minval=0.5, maxval=3), tau=Fitted(2.6248451511196507, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.10179359449219337, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=251.8165536465698
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(12.810675685786714, minval=0, maxval=20), leak=Fitted(-0.3520839407039611, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.932992281645003, minval=0.5, maxval=3), tau=Fitted(1.3132604306613782, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.19732601771010475, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=40.13831095689673
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(11.384778478284051, minval=0, maxval=20), leak=Fitted(2.594492430893851, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.3404701367534466, minval=0.5, maxval=3), tau=Fitted(2.098684183083653, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.19392281852178458, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=139.4474578583364
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(7.592401684721094, minval=0, maxval=20), leak=Fitted(-5.176784555235431, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.560815257047657, minval=0.5, maxval=3), tau=Fitted(2.846420398910258, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.18732711479601907, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=124.89205399869883
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(9.386615661016739, minval=0, maxval=20), leak=Fitted(7.540890083185346, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.8138709836241098, minval=0.5, maxval=3), tau=Fitted(3.332770324873945, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.1863165488416356, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=5059.09170320434
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(17.169377354776884, minval=0, maxval=20), leak=Fitted(3.2419274265548337, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.6735978810078918, minval=0.5, maxval=3), tau=Fitted(1.7787970699948061, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.26053052930641185, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=4689.631297620253
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(15.951326492757003, minval=0, maxval=20), leak=Fitted(-0.11443786188160221, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.476459188410702, minval=0.5, maxval=3), tau=Fitted(1.0256740608813442, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.3056268002012777, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=281.7208159254675
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(10.575141330555129, minval=0, maxval=20), leak=Fitted(-0.5169585904638285, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.176639985594168, minval=0.5, maxval=3), tau=Fitted(0.4249281722742517, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.15591013398471026, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=3531.4693878456173
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(18.14552482773687, minval=0, maxval=20), leak=Fitted(-2.9784631591904764, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.1989454524484024, minval=0.5, maxval=3), tau=Fitted(1.7052426618293945, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.14669270009796065, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=815.2817486747299
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(9.26304532598127, minval=0, maxval=20), leak=Fitted(-2.2409670114010916, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.336017649572189, minval=0.5, maxval=3), tau=Fitted(1.9280731114113079, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.28671530790934896, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=3799.125794255725
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(17.204529989243834, minval=0, maxval=20), leak=Fitted(0.5750611280553097, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.9979171472483667, minval=0.5, maxval=3), tau=Fitted(0.7922221230263415, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.3094839939448326, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=102.1622442896563
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(10.131284648657122, minval=0, maxval=20), leak=Fitted(3.403308982057378, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.6876900285185692, minval=0.5, maxval=3), tau=Fitted(0.43275458382207255, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.38251563237158664, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=2007.2692959657309
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(7.061865671318964, minval=0, maxval=20), leak=Fitted(-5.089231622472198, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.263028209214143, minval=0.5, maxval=3), tau=Fitted(2.7872502763642024, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.07768125410159053, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=25.602997110738272
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(14.75046456324491, minval=0, maxval=20), leak=Fitted(1.1910055382320195, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.8652195512276095, minval=0.5, maxval=3), tau=Fitted(1.1417626716929976, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.2508716071182817, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=-101.199820661297
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(8.20384157597561, minval=0, maxval=20), leak=Fitted(-5.949036813966995, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.593863482835305, minval=0.5, maxval=3), tau=Fitted(2.3227258050909447, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.2310071230523422, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=6216.129383892307
differential_evolution step 3: f(x)= -237.464
[Further candidate evaluations in the same format follow; their losses range from roughly -157 to 7,945, none improving on the step-3 best of -237.464.]
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(6.074631182171164, minval=0, maxval=20), leak=Fitted(1.20077183637195, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.5549002875625753, minval=0.5, maxval=3), tau=Fitted(1.3599957330431733, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.19136602903948619, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=-42.837973635549076
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(16.731364877640075, minval=0, maxval=20), leak=Fitted(-7.987100935806387, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.7793133833462098, minval=0.5, maxval=3), tau=Fitted(2.4448282365502823, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.08301173154829905, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=517.7624774127937
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(1.0786388400247304, minval=0, maxval=20), leak=Fitted(2.00423591364451, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.0553503557991277, minval=0.5, maxval=3), tau=Fitted(2.741601596966736, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.28842127270967477, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1995.1531066021698
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(18.229132518562764, minval=0, maxval=20), leak=Fitted(-8.29813014469433, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.5547727593195229, minval=0.5, maxval=3), tau=Fitted(0.8701843590146547, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.2896093144078377, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=8601.755635360376
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(13.0399294506544, minval=0, maxval=20), leak=Fitted(-5.1315483580845935, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.2992857700838554, minval=0.5, maxval=3), tau=Fitted(1.0060408180374776, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.29361050238487485, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=2798.5445771602153
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(1.833548137405348, minval=0, maxval=20), leak=Fitted(2.8079623014274757, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.2175369170718895, minval=0.5, maxval=3), tau=Fitted(2.6100785813179233, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.3182377665445639, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1875.467605812454
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(5.7638556908023375, minval=0, maxval=20), leak=Fitted(0.8210419406106539, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.646021890484472, minval=0.5, maxval=3), tau=Fitted(3.5713413268529157, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.3087955017811231, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1587.6600752281897
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(7.329848467798351, minval=0, maxval=20), leak=Fitted(4.527634141625785, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.395585135879394, minval=0.5, maxval=3), tau=Fitted(1.6175883937935178, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.22184968071061095, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=140.8838989051459
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(8.2191912323699, minval=0, maxval=20), leak=Fitted(-0.20627343054229308, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.9044057734034636, minval=0.5, maxval=3), tau=Fitted(4.10864549901708, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.2508198300728628, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=775.3001696033166
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(1.1329537552466302, minval=0, maxval=20), leak=Fitted(-1.674696922854365, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.2101045561643131, minval=0.5, maxval=3), tau=Fitted(2.2987996738238228, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.19399082261501208, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1143.2900525818598
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(9.457409125010907, minval=0, maxval=20), leak=Fitted(-1.1907935144608928, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.4895889059291028, minval=0.5, maxval=3), tau=Fitted(0.003322245405681379, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.22766809223226236, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=4500.597542008969
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(6.574679473443434, minval=0, maxval=20), leak=Fitted(-1.2140745631605399, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.8276026388456788, minval=0.5, maxval=3), tau=Fitted(2.948792940924128, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.24455678134621084, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=245.22236026596633
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(7.451110625087529, minval=0, maxval=20), leak=Fitted(-3.3388799706816297, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.4223284152443, minval=0.5, maxval=3), tau=Fitted(2.6791444818677568, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.21356320324400313, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=106.40212825915685
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(14.75386126500457, minval=0, maxval=20), leak=Fitted(0.7224774995266126, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.7845442091359919, minval=0.5, maxval=3), tau=Fitted(2.177901875408141, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.2687604515414155, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=3122.089591423941
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(7.9887756822755875, minval=0, maxval=20), leak=Fitted(-3.487295275882528, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.4198131258737807, minval=0.5, maxval=3), tau=Fitted(0.9009211924302423, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.10188000621434763, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1247.5470528026472
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(19.775949613930184, minval=0, maxval=20), leak=Fitted(6.383662727657722, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(1.4058671942184882, minval=0.5, maxval=3), tau=Fitted(1.8377755403369176, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.3747807125064965, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=1272.444425191437
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(8.236727079676239, minval=0, maxval=20), leak=Fitted(-5.325075472938767, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(2.461104755166251, minval=0.5, maxval=3), tau=Fitted(1.4157902438328742, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.17071828277900847, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=9098.139880995068
Model(name='Roitman data, leaky drift varies with coherence', drift=DriftCoherenceLeak(driftcoh=Fitted(5.100853022622461, minval=0, maxval=20), leak=Fitted(5.3826627252158605, minval=-10, maxval=10)), noise=NoiseConstant(noise=1), bound=BoundCollapsingExponential(B=Fitted(0.9462701971571568, minval=0.5, maxval=3), tau=Fitted(0.06915585845883054, minval=0.0001, maxval=5)), IC=ICPointSourceCenter(), overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fitted(0.09330760266519345, minval=0, maxval=0.4)), OverlayPoissonMixture(pmixturecoef=0.02, rate=1)]), dx=0.01, dt=0.01, T_dur=2) loss=3139.6131767938973
|
notebooks/13-Advanced Python Modules/04-Timing your code - timeit.ipynb | ###Markdown
Timing your code. Sometimes it's important to know how long your code is taking to run, or at least know if a particular line of code is slowing down your entire project. Python has a built-in timing module to do this. This module provides a simple way to time small bits of Python code. It has both a Command-Line Interface as well as a callable one. It avoids a number of common traps for measuring execution times. Let's learn about timeit!
###Code
import timeit
###Output
_____no_output_____
###Markdown
Let's use timeit to time various methods of creating the string '0-1-2-3-.....-99'. We'll pass two arguments: the actual statement we want to test, encapsulated as a string, and the number of times we wish to run it. Here we'll choose 10,000 runs to get numbers high enough to compare the various methods.
###Code
# For loop
timeit.timeit('"-".join(str(n) for n in range(100))', number=10000)
# List comprehension
timeit.timeit('"-".join([str(n) for n in range(100)])', number=10000)
# Map()
timeit.timeit('"-".join(map(str, range(100)))', number=10000)
###Output
_____no_output_____
###Markdown
Great! We see a significant time difference by using map()! This is good to know and we should keep it in mind. Now let's introduce IPython's magic function **%timeit**. *NOTE: This method is specific to Jupyter notebooks!* IPython's %timeit runs the statement many times per loop across several runs and reports summary timing statistics (by default the mean ± standard deviation of 7 runs, as the output below shows). Let's repeat the above examinations using IPython magic!
###Code
%timeit "-".join(str(n) for n in range(100))
%timeit "-".join([str(n) for n in range(100)])
%timeit "-".join(map(str, range(100)))
###Output
12.2 µs ± 130 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
|
analysis/Week 6 - Use Embedding.ipynb | ###Markdown
Tutorial on Keras with Gensim: https://www.depends-on-the-definition.com/guide-to-word-vectors-with-gensim-and-keras/ - http://www.orbifold.net/default/2017/01/10/embedding-and-tokenizer-in-keras/ - https://github.com/keras-team/keras/issues/853 - http://adventuresinmachinelearning.com/gensim-word2vec-tutorial/ - https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/ - https://stats.stackexchange.com/questions/320701/how-to-use-keras-pre-trained-embedding-layer - https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html
###Code
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
# from nltk.tokenize import WordPunctTokenizer
from collections import Counter
from keras.layers import Dense, Input, LSTM, CuDNNLSTM, Embedding, Dropout,SpatialDropout1D, Bidirectional
from keras.models import Sequential
from keras.optimizers import Adam
from keras.layers.normalization import BatchNormalization
from keras.preprocessing.text import Tokenizer
# Visualization
import matplotlib.pyplot as plt
import matplotlib.cm as cmap
%matplotlib inline
import importlib
import utils
importlib.reload(utils)
import text_utils
importlib.reload(text_utils)
# Constants
OUTPUT_DIR = './week-6-plots'
ORIG_COMMENTS = '../data/guardian-all/sorted_comments-standardized-pol-all.csv'
TOKENIZED_COMMENTS = '../data/embedding/sorted_comments-standardized-tokenized.csv'
tokenized_comments = pd.read_csv(TOKENIZED_COMMENTS, ';')
tokenized_comments.shape
###Output
_____no_output_____
###Markdown
Preprocess data
###Code
data_articles, data_articles_pol, data_authors, data_comments_pol = utils.load_data()
X = data_comments_pol['comment_text']
y = pd.get_dummies(['pos' if x > 2 else 'neg' for x in data_comments_pol['upvotes']]).values # threshold: > 2 upvotes -> pos, otherwise neg
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
###Output
_____no_output_____
###Markdown
Load embedding
###Code
ft_model = text_utils.load_embedding()
word_vectors = ft_model.wv
EMBEDDING_DIM = 50
print("Number of word vectors: {}".format(len(word_vectors.vocab)))
# Setup tokenizer
# tokenizer = WordPunctTokenizer()
# Use a counter for selecting the X most common words (therefore tokenize)
# vocab = Counter()
# comments = text_utils.process_comments(tokenizer, vocab, X, lower=True)
tokenizer = Tokenizer()
tokenizer.fit_on_texts(X)
index_dict = tokenizer.word_index
backtranslater = dict((i, w) for w, i in index_dict.items())
# assemble the embedding_weights in one numpy array
n_symbols = len(index_dict) + 1 # adding 1 to account for 0th index (for masking)
embedding_weights = np.zeros((n_symbols, EMBEDDING_DIM))
for word, index in index_dict.items():
if word in word_vectors:
embedding_weights[index, :] = word_vectors[word]
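# Hedged sanity check (not part of the original notebook): report how much of
# the tokenizer vocabulary is covered by the loaded word vectors.
covered = sum(1 for w in index_dict if w in word_vectors)
print("Embedding coverage: {:.1%}".format(covered / len(index_dict)))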
# define inputs here
# embedding_layer = Embedding(output_dim=vocab_dim, input_dim=n_symbols, weights=[embedding_weights], trainable=False)
# embedded = embedding_layer(input_layer)
# embedding_layer = model.wv.get_keras_embedding(train_embeddings=False)
word_vectors.most_similar_cosmul(positive=['president', 'german'])
word_vectors.most_similar_cosmul(positive=['woman', 'king'], negative=['man'])
###Output
_____no_output_____
###Markdown
Pad/Cut tokenized comments to a certain length
###Code
MAX_NB_WORDS = len(word_vectors.vocab)
MAX_SEQUENCE_LENGTH = 200
train_size = len(X_train)
most_common = sorted(tokenizer.word_counts.items(), key=lambda x: x[1], reverse=True)
# word_index maps each retained word to its integer index
X_train, X_test, word_index = text_utils.pad_or_cut_tokenized_comments(
most_common, comments, train_size, MAX_NB_WORDS, MAX_SEQUENCE_LENGTH)
# https://codekansas.github.io/blog/2016/gensim.html
WV_DIM = 50
nb_words = min(MAX_NB_WORDS, len(word_vectors.vocab))
wv_matrix = word_vectors.syn0 # text_utils.get_weights_matrix(word_index, word_vectors, nb_words, WV_DIM)
###Output
_____no_output_____
###Markdown
Prepare Keras Model
###Code
embed_dim = 128
lstm_out = 64
batch_size = 16
model = Sequential()
model.add(Embedding(nb_words,
WV_DIM,
mask_zero=False,
weights=[wv_matrix],
input_length=MAX_SEQUENCE_LENGTH,
trainable=False))
# model.add(Embedding(vocabulary, embed_dim, input_length = X.shape[1]))
model.add(SpatialDropout1D(0.1))
# model.add(LSTM(lstm_out, return_sequences=True, recurrent_dropout=0.3, dropout=0.3))
model.add(LSTM(lstm_out, recurrent_dropout=0, dropout=0.1))
model.add(Dense(2, activation='softmax'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics = ['accuracy'])
model.summary()
epochs = 20
batch_size = 32
# Here we train the Network.
try:
history = model.fit([X_train[:50000]], y_train[:50000], validation_split=0.1,
batch_size = batch_size, epochs = epochs,
verbose = 2, shuffle=True)
except KeyboardInterrupt:
print("Fitting stopped manually")
def inspect_preprocessed_comment(data_comments_pol, X_train, backtranslater, idx):
print('Original:')
print(data_comments_pol.iloc[idx])
print('\nAfter preprocessing (& backtranslating):')
print(' '.join([backtranslater[x] for x in X_train[idx] if x in backtranslater]))
inspect_preprocessed_comment(data_comments_pol, X_train, backtranslater, 0)
utils.plot_history(history)
# Measuring score and accuracy on test set
score, acc = model.evaluate([X_test], y_test, verbose = 2,
batch_size = batch_size)
print("Logloss score: %.2f" % (score))
print("Test set Accuracy: %.2f" % (acc))
plt.hist(data_comments_pol['comment_text'].str.split().str.len())
###Output
_____no_output_____ |
UGESCO_MAIN_SIMPLIFIED.ipynb | ###Markdown
Ugesco: enrich and disambiguate the Samnang-Hans' JSON
###Code
from ugesco import *
import warnings
warnings.filterwarnings('ignore') # avoids the warnings raised when importing ugesco.py
file = "data/jsondata_ugesco.json"
data = json_to_df(file)
# exclude image classifications and temporal values for the moment
data = data[data.columns.drop(list(data.filter(regex='imageclassification|temporal')))]
# reshape the dataframe verticaly
data = data.set_index('beeldid')
data.columns = data.columns.str.split('_', expand=True)
data = data.stack().reset_index(level=1, drop=True).reset_index()
data.columns = ['beeldid', 'spatial_value', 'spatial_key', 'to_drop']
data.drop('to_drop', 1, inplace=True)
#join/merge with phototheque_pallas to get the locations.
#The csv is gzipped to get around GitHub's 100MB file size limit
phototeque = pd.read_csv("data/phototheque_pallas.csv.gz", compression="gzip", encoding="utf8", dtype={'beeldid': str})
data = pd.merge(data, phototeque, how='left', on=['beeldid'], suffixes=['', '_x'])
# Pictures that contain more than one location in the thesaurus descriptors
data[data['loc_qid'].str.contains(",", na = False)]
#test Rosette API. Max 10 000 calls a month and 1000 a day
#data['rosette'] = data['LEGEND'].apply(rosette)
# Some descriptive stats on columns
#data.describe(include="all")
# More data profiling
#import pandas_profiling
#pandas_profiling.ProfileReport(data)
#data.to_csv("C:/Users/ettor/Desktop/ugesco_file_temp_simplified.csv", encoding="utf-8")
# Merge with NER_classes (spatial_keys matched with wikidata)
ner_classes = pd.read_csv("data/ner_classes.csv")
data = pd.merge(data, ner_classes, how='left', left_on = ['spatial_key'], right_on = ['ner_class'])
# rename the newly merged columns
data.rename(columns={'wiki_qid': 'ner_class_qid', 'wiki_class':'ner_class_name'}, inplace=True)
# remove articles in spatial_value
data['spatial_value'] = data['spatial_value'].str.replace(pat=r"^\s?(le|la|l\s+'|l'|les)\s+", repl="", n=1, case=False)
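# Minimal illustration on hypothetical samples (not from the dataset) of what
# the article-stripping pattern above does.
import re
for s in ["La Grand-Place", "les Halles", "Palais royal"]:
    print(re.sub(r"^\s?(le|la|l\s+'|l'|les)\s+", "", s, count=1, flags=re.IGNORECASE))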
data.head()
data.shape
data.to_csv(r"C:\Users\ettor\Desktop\data_ugesco.csv", header=True, index=False, encoding="utf-8")
#test of Stanford NER (slow)
#test = data.LEGEND[0:100].apply(get_ner)
#data.LEGEND[0:100]
#test_stanford = pd.concat([data.LEGEND[0:100], test], axis=1).reset_index()
#test_stanford.to_csv("C:/Users/ettor/Desktop/test_stanford.csv")
###Output
_____no_output_____ |
10/.ipynb_checkpoints/homework-10-schuetz-reddit-checkpoint.ipynb | ###Markdown
Reddit Part One: Getting Data. You're going to scrape the front page of https://www.reddit.com! Reddit is a magic land made of many many semi-independent kingdoms, called subreddits. We need to find out which are the most powerful. You are going to scrape the front page of reddit every 4 hours, saving a CSV file that includes: * The title of the post * The number of votes it has (the number between the up and down arrows) * The number of comments it has * What subreddit it is from (e.g. /r/AskReddit, /r/todayilearned) * When it was posted (get a TIMESTAMP, e.g. 2016-06-22T12:33:58+00:00, not "4 hours ago") * The URL to the post itself * The URL of the thumbnail image associated with the post. Note: Ugh, reddit is horrible when it hasn't been customized to your tastes. If you would like something more exciting/less idiotic, try scraping a multireddit page - https://www.reddit.com/r/multihub/top/?sort=top&t=year - they're subreddits clustered by topics. For example, you could scrape https://www.reddit.com/user/CrownReserve/m/improveyoself which is all self-improvement subreddits. You can follow the links at https://www.reddit.com/r/multihub/top/?sort=top&t=year or use the "Find Multireddits" link on the Multireddit page to find more.
###Code
from bs4 import BeautifulSoup
import requests
user_agent = {'User-agent': 'Mozilla/5.0'}
html_str = requests.get('https://www.reddit.com/', headers = user_agent).text
html_str
document = BeautifulSoup(html_str, 'html.parser')
# The title of the post
# The whole post is under `<div>` class = ' thing id-t3_4 ....'
# <div> class = 'entry unvoted'
# <p> class = 'title'
# `<a>` class = 'title may-blank '
# The number of votes it has (the number between the up and down arrows)
# The number of votes is in <div> class = 'score unvoted'
# sometimes this is •
# The number of comments it has
# There's a
# <div> class = 'entry unvoted'
# <ul> class = 'flat-list buttons'
# <li> class = 'first'
# <a> class = 'bylink comments may-blank'
# What subreddit it is from (e.g. /r/AskReddit, /r/todayilearned)
# <div> class = 'entry unvoted'
# <p> class='tagline'
# <a> class = 'subreddit hover may-blank'
# When it was posted (get a TIMESTAMP, e.g. 2016-06-22T12:33:58+00:00, not "4 hours ago")
# <div> class = 'entry unvoted'
# <p> class='tagline'
# <time> it's actually in the tag
# The URL to the post itself
# This is in two places. Both inside the main <div> tag and in the same tag with the title.
# The URL of the thumbnail image associated with the post
# There are two thumbnail urls—the one I guess it's from orginially and the reddit thumbnail. Here's how to get the reddit thumbnail:
# <a> class = 'thumbnail may-blank'
# <img> it's actually in the tag
# What I eventually want:
posts_today = [
{'title': '"Two clowns in the same circus" 16 x 12s oil on linen'},
{'votes': 4246},
{'comments': 372},
{'subreddit': '/r/Art'},
{'timestamp': '2016-06-22T12:33:58+00:00'},
{'url': 'https://www.reddit.com/r/Art/comments/4pbvk5/two_clowns_in_the_same_circus_16_x_12s_oil_on/'},
{'thumb_url': 'https://b.thumbs.redditmedia.com/p32PnbLD9t9hqvw9Q5X7eZS2tI7Ygqnh5K5MTxOERSE.jpg'}
]
import re
one_sibling_up = document.find_all('div', {'class': 'clearleft'})
# troubleshooting
document
# because only every other clearleft has a post in it:
posts = [tag.find_next_sibling('div') for tag in one_sibling_up if tag.find_next_sibling('div')]
# posts is a list
len(posts)
# There are 10 more posts than show up on the homepage. Seems like the first 9 and last one aren't actual posts.
def title(post):
if post.find('a', {'class': 'title may-blank '}):
return post.find('a', {'class': 'title may-blank '}).string
else:
return 'NO TITLE'
def votes(post):
if post.find('div', {'class': 'score unvoted'}):
return post.find('div', {'class': 'score unvoted'}).string
else:
return 'NO INFO'
# The number of comments it has
# There's a
# <div> class = 'entry unvoted'
# <ul> class = 'flat-list buttons'
# <li> class = 'first'
# <a> class = 'bylink comments may-blank'
num = 0
for post in posts:
    comments_tag = post.find('a', {'class': 'bylink comments may-blank'})
    if comments_tag:
        # extract the digits from e.g. "372 comments"
        print(re.findall(r'\d+', comments_tag.text))
    else:
        print(0)
    num += 1
    print(num)
    print('')
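# Hedged sketch, not part of the original notebook: helper functions for the
# remaining fields described in the notes above. The CSS class names are taken
# from those notes; the 'datetime' and 'src' attributes are assumptions, and
# reddit's markup changes often.
def comment_count(post):
    tag = post.find('a', {'class': 'bylink comments may-blank'})
    if tag:
        digits = re.findall(r'\d+', tag.text)
        return int(digits[0]) if digits else 0
    return 0

def subreddit(post):
    tag = post.find('a', {'class': 'subreddit hover may-blank'})
    return tag.text if tag else 'NO SUBREDDIT'

def timestamp(post):
    tag = post.find('time')
    return tag['datetime'] if tag and tag.has_attr('datetime') else 'NO TIMESTAMP'

def post_url(post):
    tag = post.find('a', {'class': 'title may-blank '})
    return tag['href'] if tag else 'NO URL'

def thumb_url(post):
    tag = post.find('a', {'class': 'thumbnail may-blank'})
    img = tag.find('img') if tag else None
    return img['src'] if img else 'NO THUMBNAIL'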
posts_today = []
post_dict = {}
for post in posts[9:34]:
post_dict['title'] = title(post)
if votes(post) == 'NO INFO':
post_dict['votes'] = votes(post)
else:
post_dict['votes'] = int(votes(post))
posts_today.append(post_dict)
post_dict = {}
print(len(posts_today))
posts_today
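# Hypothetical final step (not in the original cell): the assignment asks for a
# CSV on every run, so dump what was collected to a timestamped file.
import csv, datetime
filename = 'reddit-{}.csv'.format(datetime.datetime.utcnow().strftime('%Y-%m-%d-%H-%M'))
with open(filename, 'w') as f:
    writer = csv.DictWriter(f, fieldnames=['title', 'votes'])
    writer.writeheader()
    writer.writerows(posts_today)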
###Output
25
|
scripts/dysfunctional.ipynb | ###Markdown
Correct the sentences
###Code
from spacy import displacy
new_text
new_text2
for sent in new_text_tokened.doc.sents:
sent = tagger(sent.text)
#displacy.serve(sent, style="dep")
for token in sent.doc:
print(token.text, token.dep_,
token.shape_, token.is_alpha, token.is_stop)
for token in doc:
print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
token.shape_, token.is_alpha, token.is_stop)
###Output
Skip Skip ROOT Xxxx True False
to to prep xx True True
main main amod xxxx True False
content content pobj xxxx True False
SPACE _SP
False False
web web dobj xxx True False
SPACE _SP
False False
texts text npadvmod xxxx True False
SPACE _SP
False False
movies movie conj xxxx True False
SPACE _SP
False False
audio audio nmod xxxx True False
SPACE _SP
False False
software software appos xxxx True False
SPACE _SP
False False
image image appos xxxx True False
SPACE _SP
False False
logosearch logosearch npadvmod xxxx True False
SPACE _SP
False False
Search Search npadvmod Xxxxx True False
SPACE _SP
False False
upload upload xcomp xxxx True False
SPACE _SP
False False
UPLOAD UPLOAD nmod XXXX True False
SPACE _SP
False False
person person dobj xxxx True False
SPACE _SP
False False
SIGN SIGN appos XXXX True False
IN IN prep XX True True
SPACE _SP
False False
ABOUT ABOUT nmod XXXX True True
False False
CONTACT CONTACT nsubj XXXX True False
False False
BLOG BLOG ROOT XXXX True False
False False
PROJECTS PROJECTS ROOT XXXX True False
False False
HELP HELP nsubj XXXX True False
False False
DONATE DONATE ROOT XXXX True False
False False
JOBS JOBS dobj XXXX True False
False False
VOLUNTEER VOLUNTEER ROOT XXXX True False
False False
PEOPLE PEOPLE ROOT XXXX True False
SPACE _SP
False False
Full Full amod Xxxx True True
text text ROOT xxxx True False
of of prep xx True True
" " punct " False False
Three Three nummod Xxxxx True True
men man pobj xxx True False
in in prep xx True True
a a det x True True
boat boat pobj xxxx True False
: : punct : False False
( ( punct ( False False
to to aux xx True True
say say parataxis xxx True True
nothing nothing dobj xxxx True True
of of prep xx True True
the the det xxx True True
dog dog pobj xxx True False
) ) punct ) False False
" " punct " False False
SPACE _SP
False False
See See pcomp Xxx True True
other other amod xxxx True True
formats format dobj xxxx True False
SPACE _SP
False False
THE THE det XXX True True
LIBRARY LIBRARY appos XXXX True False
False False
OF OF prep XX True True
False False
THE THE det XXX True True
UNIVERSITY UNIVERSITY ROOT XXXX True False
SPACE _SP
False False
OF OF prep XX True True
CALIFORNIA CALIFORNIA pobj XXXX True False
False False
LOS LOS compound XXX True False
ANGELES ANGELES compound XXXX True False
False False
GIFT GIFT appos XXXX True False
False False
From From ROOT Xxxx True True
the the det xxx True True
Library Library nmod Xxxxx True False
of of prep xx True True
False False
Henry Henry compound Xxxxx True False
Goldman Goldman pobj Xxxxx True False
, , punct , False False
Ph.D. Ph.D. appos Xx.X. False False
False False
1886 1886 punct dddd False False
- - punct - False False
1972 1972 prep dddd False False
False False
THREE THREE nummod XXXX True True
MEN MEN ROOT XXX True False
IN IN prep XX True True
A A det X True True
BOAT BOAT pobj XXXX True False
False False
( ( punct ( False False
TO TO aux XX True True
SAY SAY ROOT XXX True True
NOTHING NOTHING dobj XXXX True True
OF OF prep XX True True
THE THE det XXX True True
DOG DOG pobj XXX True False
) ) punct ) False False
False False
BY BY prep XX True True
SPACE _SP
False False
JEROME JEROME compound XXXX True False
K. K. compound X. False False
JEROME JEROME ROOT XXXX True False
False False
AUTHOR AUTHOR appos XXXX True False
OF OF prep XX True True
SPACE _SP
False False
" " punct " False False
RLE RLE compound XXX True False
THOUGHTS THOUGHTS ROOT XXXX True False
OF OF prep XX True True
AN AN det XX True True
IDLE IDLE compound XXXX True False
FELLOW FELLOW pobj XXXX True False
, , punct , False False
" " punct " False False
" " punct " False False
STAGE STAGE compound XXXX True False
LAND LAND appos XXXX True False
, , punct , False False
" " punct " False False
ETC ETC compound XXX True False
False False
ILLUSTRA ILLUSTRA compound XXXX True False
TIONS TIONS appos XXXX True False
BY BY prep XX True True
A. A. compound X. False False
FREDERICS FREDERICS pobj XXXX True False
False False
NEW NEW compound XXX True False
YORK YORK compound XXXX True False
False False
HENRY HENRY compound XXXX True False
HOLT HOLT conj XXXX True False
AND AND cc XXX True True
COMPANY COMPANY conj XXXX True False
SPACE _SP
False False
1890 1890 npadvmod dddd False False
False False
Annex Annex nmod Xxxxx True False
False False
PREFACE PREFACE nmod XXXX True False
False False
r r compound x True False
l l nmod x True False
lie lie nmod xxx True False
chief chief amod xxxx True False
beauty beauty nsubj xxxx True False
of of prep xx True True
this this det xxxx True True
book book pobj xxxx True False
lies lie ROOT xxxx True False
not not neg xxx True True
so so advmod xx True True
much much advmod xxxx True True
in in prep xx True True
SPACE _SP
False False
its its poss xxx True True
literal literal amod xxxx True False
\ \ compound \ False False
st\h\ st\h\ ROOT xx\x\ False False
or or cc xx True True
in in conj xx True True
the the det xxx True True
extent extent amod xxxx True False
anJ anJ compound xxX True False
usefulness usefulness pobj xxxx True False
of of prep xx True True
the the det xxx True True
SPACE _SP
False False
information information pobj xxxx True False
it it nsubj xx True True
conveys convey relcl xxxx True False
, , punct , False False
as a prep xx True True
in in prep xx True True
its its poss xxx True True
sin sin pobj xxx True False
: : punct : False False
p!e p!e amod x!x False False
truthfulness truthfulness appos xxxx True False
. . punct . False False
SPACE _SP
False False
li li ROOT xx True False
* * punct * False False
pages page nsubj xxxx True False
form form ROOT xxxx True False
the the det xxx True True
record record dobj xxxx True False
of of prep xx True True
events event pobj xxxx True False
that that nsubj xxxx True True
really really advmod xxxx True True
hap- hap- relcl xxx- False False
SPACE _SP
False False
pened pened
###Markdown
Do not allow two of the same dependency label in a row - did not work
###Code
new_text_1 = ''
for sent in new_text_tokened.doc.sents:
sent = tagger(sent.text)
#
last_dep_ = ''
for token in sent.doc:
if last_dep_ == token.dep_:
if sum(required_sents.values()) > 2:
new_text_1 += '. '
break
last_dep_ = token.dep_
new_text_1 += (' ' + token.text)
if token.dep_ in required_sents:
required_sents[token.dep_] += 1
print(token.text, token.dep_,
token.shape_, token.is_alpha, token.is_stop)
new_text_1
###Output
_____no_output_____
###Markdown
how does it look
###Code
from itertools import islice, count
sent = next(islice(new_text_tokened.doc.sents, 2, 2+1))
displacy.render(sent, style="dep")
###Output
_____no_output_____
###Markdown
Simple sentences are better: remove clauses
###Code
for token in sent:
print(token.text, token.dep_, token.head.text, token.head.pos_,
[child for child in token.children])
###Output
_____no_output_____
###Markdown
if a child token becomes parent, drop that part
###Code
def less_dependencies(new_text) :
if len(new_text.split(' ')) < 9:
return new_text
new_text_tokened = tagger(new_text)
new_text2 = ''
children_set = set()
for sent in new_text_tokened.doc.sents:
for token in sent:
try:
first_item = next(token.children)
children_set.update([child.text for child in token.children])
children_set.update([token.text])
break
except StopIteration:
pass
for token in sent:
if token.text not in children_set:
print('----------------------')
break
new_text2 += (' ' +token.text)
#print(token.text, token.dep_, token.head.text, token.head.pos_,
# [child for child in token.children])
new_text2 += '. '
return new_text2
len(new_text.split(' '))
new_text='And she had kept away for the two simple out we will take years'
less_dependencies(new_text)
###Output
----------------------
###Markdown
Answer questions
###Code
questions = ["What have you done?",
'Who are you?'
,'Where are you from?'
,'Why is it dark?'
,'Why are you sad?'
,'When is it too late?'
,'What is the purpose?'
,'Are we there yet?'
,'What do you do?'
,'How are you?'
,'What are the unwritten rules of where you work?'
]
###Output
_____no_output_____
###Markdown
Do replacements (rule-based)
###Code
def select_we_are():
starts = ['I am desperately'
, 'Sometimes, I am'
, 'Strangely enough, I feel'
, 'I used to be'
, 'I found myself'
,'I think therefore']
ix=random.randint(0,len(starts)-1)
return starts[ix]
def select_because():
starts = ['Because of this, '
, 'Because I am worth it,'
, 'Because I could not stop'
, 'Because so much'
, 'Because your best days'
]
ix=random.randint(0,len(starts)-1)
return starts[ix]
def select_how():
starts = ['Desperately trying to'
,'Needless to say'
,'Everything functioned exactly'
,'Fortunately the officials'
,'Safe to come'
,'Under international law']
ix=random.randint(0,len(starts)-1)
return starts[ix]
def select_what():
starts = ["I have never",
'I was a'
,'We have come'
    ,'I frankly didn\'t'
,'These were random'
,'You could see'
,'I truly enjoyed'
,'I am pleased to'
,'Most nations opted'
,'The most impressive']
ix=random.randint(0,len(starts)-1)
return starts[ix]
def select_location():
starts = ['In the woods'
, 'Over the shoulder'
, 'Over there in'
, 'In your dreams'
, 'In the desert'
,'Nowhere at all']
ix=random.randint(0,len(starts)-1)
return starts[ix]
def prepare_start_of_response(question):
if 'you' in question.lower() and 'are' in question.lower():
return select_we_are()
elif 'where' in question.lower():
return select_location()
elif 'why' in question.lower():
return select_because()
elif 'how' in question.lower():
return select_how()
elif 'what' in question.lower():
return select_what()
else:
return select_what()
for q in questions:
print(prepare_start_of_response(q))
def strip_punct(start):
return start.replace(',','').replace('.','').replace('?','').replace('!','')
max_sentences = 1
def generate_qa(q):
start=''
new_text=''
count_sentenses = 0
start = prepare_start_of_response(q)
new_text += start
start = ' '.join(start.split()[-3:])
while count_sentenses < max_sentences:
vec = nns.nearest(concatenate_vectors(nlp(strip_punct(start))))
ix=random.randint(0,len(vec)-1)
start = start + ' ' + vec[ix]
#print(vec[ix])
start = ' '.join(start.split()[-3:])
new_text += (' ' + vec[ix])
c=[1 if c=='.' else 0 for c in new_text]
if sum(c) >1:
break
#print(new_text)
word = tagger(vec[ix])
pos = [str(p.pos_) for p in word]
if ('NOUN' in pos) or 'PRONOUN' in pos:
count_sentenses += 1
#start = ''#prepare_start_of_response(q)
start = starts[random.randint(0,len(starts)-1)]
new_text += ('. ' +start)
return less_dependencies(new_text)
#return new_text
for q in questions:
print(q)
print(generate_qa(q))
print('-----------------------------------')
generate_qa('What do you know?')
###Output
_____no_output_____ |
Using_CNN_from_Scratch.ipynb | ###Markdown
Splitting Train, Validation, Test Data
###Code
train_dir = 'training_data'
val_dir = 'validation_data'
test_dir = 'test_data'
train_files = np.concatenate([cat_train, dog_train])
validate_files = np.concatenate([cat_val, dog_val])
test_files = np.concatenate([cat_test, dog_test])
os.mkdir(train_dir) if not os.path.isdir(train_dir) else None
os.mkdir(val_dir) if not os.path.isdir(val_dir) else None
os.mkdir(test_dir) if not os.path.isdir(test_dir) else None
for fn in train_files:
shutil.copy(fn, train_dir)
for fn in validate_files:
shutil.copy(fn, val_dir)
for fn in test_files:
shutil.copy(fn, test_dir)
#!rm -r test_data/ training_data/ validation_data/
from keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array, array_to_img
IMG_DIM = (150,150)
train_files = glob.glob('training_data/*')
train_imgs = [];train_labels = []
for file in train_files:
try:
train_imgs.append( img_to_array(load_img( file,target_size=IMG_DIM )) )
train_labels.append(file.split('/')[1].split('_')[0])
except:
pass
train_imgs = np.array(train_imgs)
validation_files = glob.glob('validation_data/*')
validation_imgs = [];validation_labels = []
for file in validation_files:
try:
validation_imgs.append( img_to_array(load_img( file,target_size=IMG_DIM )) )
validation_labels.append(file.split('/')[1].split('_')[0])
except:
pass
train_imgs = np.array(train_imgs)
validation_imgs = np.array(validation_imgs)
print('Train dataset shape:', train_imgs.shape,
'\tValidation dataset shape:', validation_imgs.shape)
# encode text category labels
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
le.fit(train_labels)
train_labels_enc = le.transform(train_labels)
validation_labels_enc = le.transform(validation_labels)
###Output
_____no_output_____
###Markdown
Image Augmentation
###Code
train_datagen = ImageDataGenerator(rescale=1./255,
zoom_range=0.3,
rotation_range=50,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
val_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow(train_imgs, train_labels_enc, batch_size=30)
val_generator = val_datagen.flow(validation_imgs, validation_labels_enc, batch_size=20)
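# Optional visual check (a sketch, not in the original notebook): pull one
# augmented batch from the generator and display the first few images.
import matplotlib.pyplot as plt
aug_imgs, _ = next(train_generator)
fig, axes = plt.subplots(1, 5, figsize=(15, 3))
for ax, img in zip(axes, aug_imgs[:5]):
    ax.imshow(array_to_img(img))
    ax.axis('off')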
###Output
_____no_output_____
###Markdown
Keras Model
###Code
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, Input
from keras.models import Model
from keras import optimizers
input_shape = (150, 150, 3)
input_l = Input((150,150,3))
l1_conv = Conv2D(16, kernel_size=(3, 3), activation='relu')(input_l)
l1_pool = MaxPooling2D(pool_size=(2, 2))(l1_conv)
l2_conv = Conv2D(64, kernel_size=(3, 3), activation='relu')(l1_pool)
l2_pool = MaxPooling2D(pool_size=(2, 2))(l2_conv)
l3_conv = Conv2D(128, kernel_size=(3, 3), activation='relu')(l2_pool)
l3_pool = MaxPooling2D(pool_size=(2, 2))(l3_conv)
l4 = Flatten()(l3_pool)
l4_dropout = Dropout(0.3)(l4)
l5 = Dense(512, activation='relu')(l4_dropout)
l5_dropout = Dropout(0.3)(l5)
output = Dense(1, activation='sigmoid')(l5_dropout)
model = Model(input_l, output)
model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(), metrics=['accuracy'])
model.summary()
history = model.fit_generator(train_generator, steps_per_epoch=100, epochs=100,
validation_data=val_generator, validation_steps=50,
verbose=2)
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/100
- 17s - loss: 0.8622 - acc: 0.5310 - val_loss: 0.6788 - val_acc: 0.5240
Epoch 2/100
- 13s - loss: 0.6918 - acc: 0.5479 - val_loss: 0.6593 - val_acc: 0.5730
Epoch 3/100
- 13s - loss: 0.6772 - acc: 0.5933 - val_loss: 0.5895 - val_acc: 0.6900
Epoch 4/100
- 13s - loss: 0.6583 - acc: 0.6218 - val_loss: 0.6041 - val_acc: 0.6760
Epoch 5/100
- 13s - loss: 0.6430 - acc: 0.6293 - val_loss: 0.5745 - val_acc: 0.6990
Epoch 6/100
- 13s - loss: 0.6190 - acc: 0.6515 - val_loss: 0.5982 - val_acc: 0.6750
Epoch 7/100
- 14s - loss: 0.6172 - acc: 0.6640 - val_loss: 0.5325 - val_acc: 0.7120
Epoch 8/100
- 13s - loss: 0.6209 - acc: 0.6582 - val_loss: 0.5567 - val_acc: 0.7320
Epoch 9/100
- 13s - loss: 0.6211 - acc: 0.6560 - val_loss: 0.5010 - val_acc: 0.7410
Epoch 10/100
- 13s - loss: 0.5903 - acc: 0.6806 - val_loss: 0.5094 - val_acc: 0.7390
Epoch 11/100
- 13s - loss: 0.6082 - acc: 0.6700 - val_loss: 0.5429 - val_acc: 0.7120
Epoch 12/100
- 13s - loss: 0.5843 - acc: 0.6872 - val_loss: 0.5129 - val_acc: 0.7510
Epoch 13/100
- 14s - loss: 0.5839 - acc: 0.6810 - val_loss: 0.4997 - val_acc: 0.7640
Epoch 14/100
- 13s - loss: 0.5899 - acc: 0.6836 - val_loss: 0.5882 - val_acc: 0.7130
Epoch 15/100
- 13s - loss: 0.5805 - acc: 0.7000 - val_loss: 0.7053 - val_acc: 0.6550
Epoch 16/100
- 13s - loss: 0.5820 - acc: 0.6992 - val_loss: 0.4845 - val_acc: 0.7650
Epoch 17/100
- 13s - loss: 0.5752 - acc: 0.7017 - val_loss: 0.5363 - val_acc: 0.7270
Epoch 18/100
- 13s - loss: 0.5657 - acc: 0.7129 - val_loss: 0.4494 - val_acc: 0.7830
Epoch 19/100
- 14s - loss: 0.5787 - acc: 0.6917 - val_loss: 0.5283 - val_acc: 0.7290
Epoch 20/100
- 14s - loss: 0.5658 - acc: 0.7089 - val_loss: 0.4581 - val_acc: 0.7770
Epoch 21/100
- 14s - loss: 0.5553 - acc: 0.7147 - val_loss: 0.6111 - val_acc: 0.6850
Epoch 22/100
- 14s - loss: 0.5580 - acc: 0.7242 - val_loss: 0.5054 - val_acc: 0.7480
Epoch 23/100
- 13s - loss: 0.5588 - acc: 0.7100 - val_loss: 0.4720 - val_acc: 0.7730
Epoch 24/100
- 13s - loss: 0.5505 - acc: 0.7212 - val_loss: 0.5123 - val_acc: 0.7470
Epoch 25/100
- 15s - loss: 0.5712 - acc: 0.7073 - val_loss: 0.4787 - val_acc: 0.7610
Epoch 26/100
- 13s - loss: 0.5408 - acc: 0.7316 - val_loss: 0.4961 - val_acc: 0.7530
Epoch 27/100
- 13s - loss: 0.5300 - acc: 0.7393 - val_loss: 0.4686 - val_acc: 0.7830
Epoch 28/100
- 13s - loss: 0.5505 - acc: 0.7199 - val_loss: 0.5242 - val_acc: 0.7460
Epoch 29/100
- 13s - loss: 0.5271 - acc: 0.7457 - val_loss: 0.4545 - val_acc: 0.7900
Epoch 30/100
- 13s - loss: 0.5384 - acc: 0.7260 - val_loss: 0.4610 - val_acc: 0.7870
Epoch 31/100
- 14s - loss: 0.5223 - acc: 0.7367 - val_loss: 0.4424 - val_acc: 0.7960
Epoch 32/100
- 13s - loss: 0.5512 - acc: 0.7213 - val_loss: 0.4779 - val_acc: 0.7780
Epoch 33/100
- 13s - loss: 0.5463 - acc: 0.7433 - val_loss: 0.4244 - val_acc: 0.8020
Epoch 34/100
- 13s - loss: 0.5385 - acc: 0.7302 - val_loss: 0.5564 - val_acc: 0.7410
Epoch 35/100
- 13s - loss: 0.5427 - acc: 0.7340 - val_loss: 0.8722 - val_acc: 0.6250
Epoch 36/100
- 13s - loss: 0.5305 - acc: 0.7402 - val_loss: 0.4386 - val_acc: 0.8110
Epoch 37/100
- 14s - loss: 0.5184 - acc: 0.7457 - val_loss: 0.4450 - val_acc: 0.7910
Epoch 38/100
- 13s - loss: 0.5441 - acc: 0.7332 - val_loss: 0.4206 - val_acc: 0.8070
Epoch 39/100
- 13s - loss: 0.5285 - acc: 0.7407 - val_loss: 0.4497 - val_acc: 0.7910
Epoch 40/100
- 13s - loss: 0.5281 - acc: 0.7376 - val_loss: 0.4420 - val_acc: 0.8000
Epoch 41/100
- 13s - loss: 0.5231 - acc: 0.7410 - val_loss: 0.4106 - val_acc: 0.8160
Epoch 42/100
- 13s - loss: 0.5220 - acc: 0.7436 - val_loss: 0.5312 - val_acc: 0.7590
Epoch 43/100
- 15s - loss: 0.5149 - acc: 0.7517 - val_loss: 0.4484 - val_acc: 0.7830
Epoch 44/100
- 14s - loss: 0.5223 - acc: 0.7432 - val_loss: 0.4120 - val_acc: 0.8120
Epoch 45/100
- 14s - loss: 0.5195 - acc: 0.7400 - val_loss: 0.4429 - val_acc: 0.7960
Epoch 46/100
- 13s - loss: 0.5246 - acc: 0.7466 - val_loss: 0.4041 - val_acc: 0.8090
Epoch 47/100
- 13s - loss: 0.5075 - acc: 0.7553 - val_loss: 0.4265 - val_acc: 0.8250
Epoch 48/100
- 13s - loss: 0.5156 - acc: 0.7583 - val_loss: 0.4795 - val_acc: 0.7750
Epoch 49/100
- 14s - loss: 0.5120 - acc: 0.7517 - val_loss: 0.4283 - val_acc: 0.8020
Epoch 50/100
- 13s - loss: 0.5255 - acc: 0.7409 - val_loss: 0.4847 - val_acc: 0.7830
Epoch 51/100
- 13s - loss: 0.5193 - acc: 0.7490 - val_loss: 0.4578 - val_acc: 0.7940
Epoch 52/100
- 13s - loss: 0.5122 - acc: 0.7469 - val_loss: 0.4558 - val_acc: 0.8020
Epoch 53/100
- 13s - loss: 0.5049 - acc: 0.7603 - val_loss: 0.4554 - val_acc: 0.7800
Epoch 54/100
- 13s - loss: 0.5097 - acc: 0.7569 - val_loss: 0.4381 - val_acc: 0.7890
Epoch 55/100
- 15s - loss: 0.5139 - acc: 0.7637 - val_loss: 0.4017 - val_acc: 0.8140
Epoch 56/100
- 13s - loss: 0.5099 - acc: 0.7560 - val_loss: 0.4347 - val_acc: 0.7940
Epoch 57/100
- 13s - loss: 0.4980 - acc: 0.7530 - val_loss: 0.4394 - val_acc: 0.7860
Epoch 58/100
- 13s - loss: 0.5130 - acc: 0.7589 - val_loss: 0.4424 - val_acc: 0.7960
Epoch 59/100
- 13s - loss: 0.4981 - acc: 0.7733 - val_loss: 0.4552 - val_acc: 0.7710
Epoch 60/100
- 13s - loss: 0.4991 - acc: 0.7599 - val_loss: 0.4227 - val_acc: 0.8080
Epoch 61/100
- 15s - loss: 0.4875 - acc: 0.7707 - val_loss: 0.4253 - val_acc: 0.8080
Epoch 62/100
- 13s - loss: 0.4981 - acc: 0.7632 - val_loss: 0.5508 - val_acc: 0.7510
Epoch 63/100
- 13s - loss: 0.5062 - acc: 0.7603 - val_loss: 0.4149 - val_acc: 0.8180
Epoch 64/100
- 13s - loss: 0.5008 - acc: 0.7655 - val_loss: 0.3925 - val_acc: 0.8360
Epoch 65/100
- 14s - loss: 0.4924 - acc: 0.7760 - val_loss: 0.4087 - val_acc: 0.8190
Epoch 66/100
- 13s - loss: 0.4925 - acc: 0.7642 - val_loss: 0.4290 - val_acc: 0.8010
Epoch 67/100
- 15s - loss: 0.4722 - acc: 0.7770 - val_loss: 0.3828 - val_acc: 0.8220
Epoch 68/100
- 14s - loss: 0.5055 - acc: 0.7606 - val_loss: 0.4122 - val_acc: 0.8120
Epoch 69/100
- 13s - loss: 0.4900 - acc: 0.7737 - val_loss: 0.4063 - val_acc: 0.8290
Epoch 70/100
- 13s - loss: 0.4993 - acc: 0.7636 - val_loss: 0.4151 - val_acc: 0.8010
Epoch 71/100
- 13s - loss: 0.5020 - acc: 0.7690 - val_loss: 0.4158 - val_acc: 0.7990
Epoch 72/100
- 13s - loss: 0.4955 - acc: 0.7653 - val_loss: 0.4049 - val_acc: 0.8310
Epoch 73/100
- 14s - loss: 0.4823 - acc: 0.7830 - val_loss: 0.4336 - val_acc: 0.8050
Epoch 74/100
- 13s - loss: 0.4804 - acc: 0.7819 - val_loss: 0.3934 - val_acc: 0.8180
Epoch 75/100
- 13s - loss: 0.5065 - acc: 0.7643 - val_loss: 0.4974 - val_acc: 0.7810
Epoch 76/100
- 13s - loss: 0.4888 - acc: 0.7779 - val_loss: 0.4695 - val_acc: 0.7960
Epoch 77/100
- 13s - loss: 0.4895 - acc: 0.7730 - val_loss: 0.4161 - val_acc: 0.8160
Epoch 78/100
- 13s - loss: 0.4820 - acc: 0.7790 - val_loss: 0.3894 - val_acc: 0.8220
Epoch 79/100
- 14s - loss: 0.4907 - acc: 0.7703 - val_loss: 0.3769 - val_acc: 0.8350
Epoch 80/100
- 13s - loss: 0.4794 - acc: 0.7779 - val_loss: 0.4107 - val_acc: 0.8180
Epoch 81/100
- 13s - loss: 0.4800 - acc: 0.7823 - val_loss: 0.4767 - val_acc: 0.7850
Epoch 82/100
- 13s - loss: 0.4937 - acc: 0.7699 - val_loss: 0.4065 - val_acc: 0.8190
Epoch 83/100
- 13s - loss: 0.4650 - acc: 0.7860 - val_loss: 0.3732 - val_acc: 0.8380
Epoch 84/100
- 13s - loss: 0.4981 - acc: 0.7692 - val_loss: 0.4256 - val_acc: 0.8300
Epoch 85/100
- 14s - loss: 0.5049 - acc: 0.7670 - val_loss: 0.3864 - val_acc: 0.8290
Epoch 86/100
- 13s - loss: 0.4730 - acc: 0.7830 - val_loss: 0.4212 - val_acc: 0.8200
Epoch 87/100
- 13s - loss: 0.4796 - acc: 0.7730 - val_loss: 0.4310 - val_acc: 0.8160
Epoch 88/100
- 14s - loss: 0.4944 - acc: 0.7702 - val_loss: 0.3845 - val_acc: 0.8290
Epoch 89/100
- 13s - loss: 0.4922 - acc: 0.7790 - val_loss: 0.3772 - val_acc: 0.8360
Epoch 90/100
- 13s - loss: 0.4921 - acc: 0.7816 - val_loss: 0.3902 - val_acc: 0.8320
Epoch 91/100
- 15s - loss: 0.4711 - acc: 0.7860 - val_loss: 0.4081 - val_acc: 0.8130
Epoch 92/100
- 13s - loss: 0.5064 - acc: 0.7709 - val_loss: 0.3651 - val_acc: 0.8450
Epoch 93/100
- 13s - loss: 0.4929 - acc: 0.7687 - val_loss: 0.3481 - val_acc: 0.8420
Epoch 94/100
- 13s - loss: 0.4720 - acc: 0.7796 - val_loss: 0.4129 - val_acc: 0.8060
Epoch 95/100
- 13s - loss: 0.4835 - acc: 0.7820 - val_loss: 0.5525 - val_acc: 0.7710
Epoch 96/100
- 13s - loss: 0.4742 - acc: 0.7709 - val_loss: 0.4102 - val_acc: 0.8290
Epoch 97/100
- 15s - loss: 0.4675 - acc: 0.7807 - val_loss: 0.5125 - val_acc: 0.7810
Epoch 98/100
- 13s - loss: 0.4692 - acc: 0.7786 - val_loss: 0.3939 - val_acc: 0.8340
Epoch 99/100
- 13s - loss: 0.4961 - acc: 0.7790 - val_loss: 0.4290 - val_acc: 0.8240
Epoch 100/100
- 13s - loss: 0.4662 - acc: 0.7876 - val_loss: 0.4970 - val_acc: 0.7820
###Markdown
Model Performance
###Code
%matplotlib inline
import matplotlib.pyplot as plt
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
t = f.suptitle('Basic CNN Performance', fontsize=12)
f.subplots_adjust(top=0.85, wspace=0.3)
epoch_list = list(range(1,101))
ax1.plot(epoch_list, history.history['acc'], label='Train Accuracy')
ax1.plot(epoch_list, history.history['val_acc'], label='Validation Accuracy')
ax1.set_xticks(np.arange(0, 101, 5))
ax1.set_ylabel('Accuracy Value')
ax1.set_xlabel('Epoch')
ax1.set_title('Accuracy')
l1 = ax1.legend(loc="best")
ax2.plot(epoch_list, history.history['loss'], label='Train Loss')
ax2.plot(epoch_list, history.history['val_loss'], label='Validation Loss')
ax2.set_xticks(np.arange(0, 101, 5))
ax2.set_ylabel('Loss Value')
ax2.set_xlabel('Epoch')
ax2.set_title('Loss')
l2 = ax2.legend(loc="best")
if not os.path.exists('saved_models'): os.mkdir('saved_models')
model.save('saved_models/cnn_scratch_dogvscat.h5')
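# Hypothetical extra step (not in the original notebook): score the held-out
# test images with the trained model, reusing the same preprocessing as above.
test_files = glob.glob('test_data/*')
test_imgs, test_labels = [], []
for file in test_files:
    try:
        test_imgs.append(img_to_array(load_img(file, target_size=IMG_DIM)))
        test_labels.append(file.split('/')[1].split('_')[0])
    except:
        pass
test_imgs = np.array(test_imgs) / 255.
test_labels_enc = le.transform(test_labels)
print(model.evaluate(test_imgs, test_labels_enc, verbose=0))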
###Output
_____no_output_____ |
notebooks/45_train_test_split.ipynb | ###Markdown
Split input pairs into train and test sets
###Code
from collections import namedtuple
import wandb
from src.data.familysearch import train_test_split_on_frequency
from src.data.utils import load_dataset
from src.models.utils import add_padding
given_surname = "given"
Config = namedtuple("Config", "in_path train_path test_path threshold")
config = Config(
in_path=f"s3://familysearch-names/processed/tree-hr-{given_surname}-similar.csv.gz",
train_path=f"s3://familysearch-names/processed/tree-hr-{given_surname}-train-unfiltered.csv.gz",
test_path=f"s3://familysearch-names/processed/tree-hr-{given_surname}-test.csv.gz",
threshold=0.5
)
wandb.init(
project="nama",
entity="nama",
name="45_train_test_split",
group=given_surname,
notes="",
config=config._asdict()
)
train_test_split_on_frequency(config.in_path, config.train_path, config.test_path, config.threshold)
input_names_train, weighted_actual_names_train, candidate_names_train = \
load_dataset(config.train_path)
input_names_test, weighted_actual_names_test, candidate_names_test = \
load_dataset(config.test_path)
vocab = set(input_names_train).union(set(candidate_names_train))
print(len(vocab))
# check test set is correct
n_zero = n_one = n_two = 0
for input_name, wans in zip(input_names_test, weighted_actual_names_test):
for actual_name, _, _ in wans:
if input_name in vocab and actual_name in vocab and input_name != actual_name:
n_two += 1
elif input_name in vocab or actual_name in vocab:
n_one += 1
else:
n_zero += 1
print("two names in vocab (should not be possible)", n_two)
print("one name in vocab", n_one)
print("zero names in vocab", n_zero)
print("train input names (name1), weighted actual (name1 -> [name2, weighted_count, co_occurrence], candidate names (name2)")
print("name1", len(input_names_train))
print("weighted actual - should be same as name1", len(weighted_actual_names_train))
print("number of actuals", sum(len(wa) for wa in weighted_actual_names_train))
print("name2", len(candidate_names_train))
print("total unique names", len(set(input_names_train).union(set(candidate_names_train))))
print("test out-of-vocab: input names (name1), weighted actual (name1 -> [name2, weighted_count, co_occurrence], candidate names (name2)")
print("name1", len(input_names_test))
print("weighted actual - should be same as name1", len(weighted_actual_names_test))
print("number of actuals", sum(len(wa) for wa in weighted_actual_names_test))
print("name2", len(candidate_names_test))
print("total unique names", len(set(input_names_test).union(set(candidate_names_test))))
###Output
_____no_output_____
###Markdown
Probe datasets
###Code
def print_weighted_actual_names(label, weighted_actual_names, max=0):
print(label)
print("total", len(weighted_actual_names))
if 0 < max < len(weighted_actual_names):
weighted_actual_names = weighted_actual_names[:max]
for wan in weighted_actual_names:
print(" ", wan)
probe_name = add_padding("jones" if given_surname == "surname" else "richard")
print("total weight", sum(wc for _, wc, _ in weighted_actual_names_train[input_names_train.index(probe_name)]))
print_weighted_actual_names("train", weighted_actual_names_train[input_names_train.index(probe_name)], 20)
print("total weight", sum(wc for _, wc, _ in weighted_actual_names_test[input_names_test.index(probe_name)]))
print_weighted_actual_names("test", weighted_actual_names_test[input_names_test.index(probe_name)], 20)
wandb.finish()
###Output
_____no_output_____ |
arl-python/examples/arl/imaging-bags.ipynb | ###Markdown
Dask bag-based imaging demonstration. This notebook explores the use of dask bags for parallelisation. For the most part we work with the bags directly. Much of this can be hidden in standard functions. See the imaging-dask notebook for processing with dask delayed. We create the visibility and fill in values with the transform of a number of point sources.
###Code
%matplotlib inline
import os
import sys
from dask import delayed, bag
from distributed import Client
results_dir = './results'
os.makedirs(results_dir, exist_ok=True)
from matplotlib import pylab
pylab.rcParams['figure.figsize'] = (12.0, 12.0)
pylab.rcParams['image.cmap'] = 'rainbow'
import numpy
from astropy.coordinates import SkyCoord
from astropy import units as u
from astropy.wcs.utils import pixel_to_skycoord
from matplotlib import pyplot as plt
from arl.calibration.operations import apply_gaintable
from arl.data.polarisation import PolarisationFrame
from arl.visibility.base import create_visibility, copy_visibility
from arl.visibility.operations import concatenate_visibility
from arl.skycomponent.operations import create_skycomponent
from arl.image.operations import show_image, qa_image, create_empty_image_like,\
pad_image
from arl.image.deconvolution import deconvolve_cube, restore_cube
from arl.util.testing_support import create_named_configuration, create_test_image
from arl.imaging import create_image_from_visibility, predict_skycomponent_visibility, \
advise_wide_field, predict_2d, invert_2d, normalize_sumwt
from arl.imaging.wstack import predict_wstack_single, invert_wstack_single
from arl.imaging.timeslice import predict_timeslice_single, invert_timeslice_single
from arl.visibility.gather_scatter import visibility_gather_w, visibility_scatter_w
from arl.visibility.gather_scatter import visibility_gather_time, visibility_scatter_time
from arl.imaging.weighting import weight_visibility
from arl.graphs.dask_init import get_dask_Client
from arl.pipelines.graphs import create_continuum_imaging_pipeline_graph
from arl.graphs.bags import safe_invert_list, safe_predict_list, sum_invert_bag_results, deconvolve_bag
import logging
log = logging.getLogger()
log.setLevel(logging.INFO)
log.addHandler(logging.StreamHandler(sys.stdout))
###Output
_____no_output_____
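###Markdown
Before defining the imaging functions, here is a minimal, self-contained sketch of the Dask bag pattern used throughout this notebook (toy integers, nothing to do with visibilities): build a bag from a sequence, map a function over its elements, and call compute() to pull the results back.
###Code
from dask import bag as db  # same module as `bag` imported above

# toy example: square each element in parallel, then collect the results
toy = db.from_sequence([1, 2, 3, 4])
toy = toy.map(lambda x: x * x)
print(toy.compute())  # [1, 4, 9, 16]
###Output
_____no_output_____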
###Markdown
Define a function to create the visibilities
###Code
def ingest_visibility(freq=1e8, chan_width=1e6, reffrequency=[1e8], npixel=512,
init=False):
lowcore = create_named_configuration('LOWBD2-CORE')
times = numpy.linspace(-numpy.pi / 4, numpy.pi / 4, 7)
frequency = numpy.array([freq])
channel_bandwidth = numpy.array([chan_width])
phasecentre = SkyCoord(
ra=+15.0 * u.deg, dec=-26.7 * u.deg, frame='icrs', equinox='J2000')
vt = create_visibility(
lowcore,
times,
frequency,
channel_bandwidth=channel_bandwidth,
weight=1.0,
phasecentre=phasecentre,
polarisation_frame=PolarisationFrame("stokesI"))
if init:
cellsize = 0.001
model = create_image_from_visibility(
vt,
npixel=npixel,
cellsize=cellsize,
npol=1,
frequency=reffrequency,
polarisation_frame=PolarisationFrame("stokesI"))
flux = numpy.array([[100.0]])
facets = 4
spacing_pixels = npixel // facets
spacing = 180.0 * cellsize * spacing_pixels / numpy.pi
centers = -1.5, -0.5, +0.5, +1.5
comps = list()
for iy in centers:
for ix in centers:
pra = int(round(npixel // 2 + ix * spacing_pixels - 1))
pdec = int(round(npixel // 2 + iy * spacing_pixels - 1))
sc = pixel_to_skycoord(pra, pdec, model.wcs)
comps.append(
create_skycomponent(
flux=flux,
frequency=reffrequency,
direction=sc,
polarisation_frame=PolarisationFrame("stokesI")))
predict_skycomponent_visibility(vt, comps)
return vt
###Output
_____no_output_____
###Markdown
Now make seven of these spanning 80 MHz to 120 MHz (0.8e8 to 1.2e8 Hz) and put them into a Dask bag.
###Code
nfreqwin=7
vis_bag=bag.from_sequence([ingest_visibility(freq)
for freq in numpy.linspace(0.8e8,1.2e8,nfreqwin)])
print(vis_bag)
###Output
_____no_output_____
###Markdown
We need to compute the bag in order to use it. First we just need a representative data set to calculate imaging parameters.
###Code
npixel=512
facets=4
def get_LSM(vt, cellsize=0.001, reffrequency=[1e8], npixel=512):
model = pad_image(create_test_image(vt, cellsize=cellsize, frequency=reffrequency,
phasecentre=vt.phasecentre,
polarisation_frame=PolarisationFrame("stokesI")),
shape=[1, 1, 512, 512])
return model
vis_bag = list(vis_bag)
model = get_LSM(vis_bag[0])
advice=advise_wide_field(vis_bag[0], guard_band_image=4.0)
vis_slices=11
###Output
_____no_output_____
###Markdown
Now we can set up the prediction of the visibility from the model. We scatter over w and then apply the wstack predictor for a single w plane. Then we concatenate the visibilities back together. To save recomputing this later, we compute it now and place the result into another bag of the same name.
###Code
vis_bag=bag.from_sequence([ingest_visibility(freq)
for freq in numpy.linspace(0.8e8,1.2e8,nfreqwin)])\
.map(visibility_scatter_w, vis_slices=vis_slices)\
.map(safe_predict_list, model, predict=predict_wstack_single)\
.map(concatenate_visibility)
vis_bag=bag.from_sequence(vis_bag.compute())
###Output
_____no_output_____
###Markdown
Check out the visibility function. To get the result out of the bag, we do need to compute it but this time it's just a lookup.
###Code
vt = vis_bag.compute()[0]
# To check that we got the prediction right, plot the amplitude of the visibility.
uvdist=numpy.sqrt(vt.data['uvw'][:,0]**2+vt.data['uvw'][:,1]**2)
plt.clf()
plt.plot(uvdist, numpy.abs(vt.data['vis']), '.')
plt.xlabel('uvdist')
plt.ylabel('Amp Visibility')
plt.show()
###Output
_____no_output_____
###Markdown
Now we can make the dirty images. As before we will scatter each of the 7 frequency windows (partitions) over w, giving a 2-level nested structure. We make a separate image for each frequency window. The image resolution noticeably improves for the high frequencies.
###Code
dirty_bag=vis_bag\
.map(visibility_scatter_w, vis_slices=vis_slices)\
.map(safe_invert_list, model, invert_wstack_single, dopsf=False, normalize=True)\
.map(sum_invert_bag_results)
dirty_bag=bag.from_sequence(dirty_bag.compute())
psf_bag=vis_bag\
.map(visibility_scatter_w, vis_slices=vis_slices)\
.map(safe_invert_list, model, invert_wstack_single, dopsf=True, normalize=True)\
.map(sum_invert_bag_results)
psf_bag=bag.from_sequence(psf_bag.compute())
for i, dirty in enumerate(dirty_bag.compute()):
print(qa_image(dirty[0], context='dirty'))
fig = show_image(dirty[0], title='Dirty image %d, weight %.3f'
% (i, dirty[1]))
plt.show()
###Output
_____no_output_____
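###Markdown
The scatter step above turns each of the seven frequency windows into a list of w-slices, so each bag element is itself a list; a toy sketch of that two-level nesting with plain Python stand-ins (hypothetical functions, not the ARL ones):
###Code
# `bag` is the dask.bag module imported at the top of this notebook
# each bag element plays the role of one frequency window
toy = bag.from_sequence(["win0", "win1"])
# scattering: one element -> a list of slices (stands in for visibility_scatter_w)
toy_scattered = toy.map(lambda win: [f"{win}-w{i}" for i in range(3)])
# per-window processing over the inner list (stands in for safe_invert_list)
toy_processed = toy_scattered.map(lambda slices: [s.upper() for s in slices])
print(toy_processed.compute())  # [['WIN0-W0', ...], ['WIN1-W0', ...]]
###Output
_____no_output_____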
###Markdown
In the next step all these seven images will be deconvolved in parallel. In this case we again need to zip the dirty and psf images and then use a simple adapter function.
###Code
def bag_deconvolve(dirty_psf_zip, **kwargs):
result = deconvolve_cube(dirty_psf_zip[0][0], dirty_psf_zip[1][0], **kwargs)
return result[0]
comp_bag=bag.zip(dirty_bag, psf_bag).map(bag_deconvolve, niter=1000, threshold=0.001,
fracthresh=0.01, window_shape='quarter',
gain=0.7, scales=[0, 3, 10, 30])
comp = comp_bag.compute()
fig=show_image(comp[0])
comp_bag=bag.from_sequence(comp)
###Output
_____no_output_____
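###Markdown
The zip-plus-adapter pattern used for the deconvolution above (and again below when subtracting the model visibilities) pairs two bags element-wise and hands each pair to a small function; a toy sketch with plain numbers (hypothetical data):
###Code
a = bag.from_sequence([1, 2, 3])
b = bag.from_sequence([10, 20, 30])

def toy_adapter(pair):
    # pair is a tuple: (element from a, element from b)
    return pair[0] + pair[1]

print(bag.zip(a, b).map(toy_adapter).compute())  # [11, 22, 33]
###Output
_____no_output_____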
###Markdown
Now we can calculate the model and residual visibility. To calculate the residual visibility, we will zip the original and model visibilities together and map our adapter across the zipped bag.
###Code
model_vis_bag=vis_bag\
.map(visibility_scatter_w, vis_slices=101)\
.map(safe_predict_list, comp_bag, predict=predict_wstack_single)\
.map(concatenate_visibility)
model_vis_bag = bag.from_sequence(model_vis_bag.compute())
def subtract_vis(vis_model_zip):
residual_vis = copy_visibility(vis_model_zip[0])
residual_vis.data['vis'] -= vis_model_zip[1].data['vis']
return residual_vis
residual_vis_bag = bag.zip(vis_bag, model_vis_bag)\
.map(subtract_vis)
residual_vis_bag=bag.from_sequence(residual_vis_bag.compute())
ovt = vis_bag.compute()[0]
vt = residual_vis_bag.compute()[0]
# To check that we got the prediction right, plot the amplitude of the visibility.
uvdist=numpy.sqrt(vt.data['uvw'][:,0]**2+vt.data['uvw'][:,1]**2)
plt.clf()
plt.plot(uvdist, numpy.abs(ovt.data['vis']), '.', color='b')
plt.plot(uvdist, numpy.abs(vt.data['vis']), '.', color='r')
plt.xlabel('uvdist')
plt.ylabel('Amp Visibility')
plt.show()
###Output
_____no_output_____
###Markdown
Now we can restore the images
###Code
residual_bag=residual_vis_bag\
.map(visibility_scatter_w, vis_slices=11)\
.map(safe_invert_list, model, invert_wstack_single, dopsf=False, normalize=True)\
.map(sum_invert_bag_results)
residual_bag=bag.from_sequence(residual_bag.compute())
def bag_restore(cpr_zip, **kwargs):
return restore_cube(cpr_zip[0], cpr_zip[1][0], cpr_zip[2][0], **kwargs)
restore_bag = bag.zip(comp_bag, psf_bag, residual_bag)\
.map(bag_restore)
for i, restored in enumerate(restore_bag.compute()):
fig = show_image(restored, title='Restored image %d' %i)
plt.show()
###Output
_____no_output_____ |
azure_code/Training_Testing_MultiK_RF-Copy1.ipynb | ###Markdown
Training classifiers for each value of K using saved features
###Code
import os
import numpy as np
import pickle
import pandas as pd
import gc
data_dir = os.path.join(os.getcwd(),'BlobStorage')
train_data_df = pd.read_pickle(data_dir+'/train_data_features_df.pkl')
val_data_df = pd.read_pickle(data_dir+'/val_data_features_df.pkl')
#Combining train and val data
train_val_data_df = pd.concat([train_data_df,val_data_df])
#Reading Test data
test_data_df = pd.read_pickle(data_dir+'/test_data_features_df.pkl')
X_test = test_data_df.img_features.apply(pd.Series)
#y_test = test_data_df['class_name'].astype('category')
print(train_data_df.shape)
print(val_data_df.shape)
print(test_data_df.shape)
#Training a classifier for each value of K.
from sklearn.ensemble import RandomForestClassifier
f = open("fasttext/clusterCenters.txt",'r')
lines = f.readlines()
for line in lines[12:]:
line = line.split()
modelName = line[0]
classesNow = line[1:]
print(modelName)
#Subsetting dataframe for only the classes being used now.
train_now_df = train_val_data_df[train_val_data_df['class_name'].isin(classesNow)]
X_train_val = train_now_df.img_features.apply(pd.Series)
y_train_val = train_now_df['class_name'].astype('category')
#training randomforest
mdl_rf = RandomForestClassifier(n_estimators=700,random_state=0,verbose=1,n_jobs=-1, min_samples_split= 2, min_samples_leaf= 1, max_features= 'auto', max_depth= 40, bootstrap= False)
clf_fit = mdl_rf.fit(X_train_val, y_train_val)
#Saving baseline model
#pickle.dump(clf_fit, open('trained_models/'+ modelName + '.sav', 'wb'))
# evaluate the model on test data
yhat_clf = clf_fit.predict(X_test)
pred_df = pd.DataFrame(data=yhat_clf, index=test_data_df['image_paths'], columns=['max_prob'])
pred_df.to_pickle('predictions/'+modelName+'.pkl')
#Finding prob predictions for all classes
yhat_clf_prob = clf_fit.predict_proba(X_test)
pred_df = pd.DataFrame(data=yhat_clf_prob, index=test_data_df['image_paths'], columns=clf_fit.classes_)
pred_df.to_pickle('predictions/all_categories/'+modelName+'.pkl')
del clf_fit,train_now_df,X_train_val,y_train_val
gc.collect()
f.close()
###Output
model70
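###Markdown
Note that the pickle.dump call in the loop above is commented out, while the next section loads models from trained_models/; for that section to work from this run, the fitted classifier has to be saved inside the loop (before the del clf_fit line), e.g.:
###Code
# re-enable inside the training loop, before `del clf_fit`
pickle.dump(clf_fit, open('trained_models/' + modelName + '.sav', 'wb'))
###Output
_____no_output_____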
###Markdown
Using Trained classifiers to predict on test data for each K. Saving predictions
###Code
import os
import numpy as np
import pickle
import pandas as pd
data_dir = os.path.join(os.getcwd(),'BlobStorage')
test_data_df = pd.read_pickle(data_dir+'/test_data_features_df.pkl')
X_test = test_data_df.img_features.apply(pd.Series)
y_test = test_data_df['class_name'].astype('category')
f = open("fasttext/clusterCenters.txt",'r')
lines = f.readlines()
for line in lines:
line = line.split()
modelName = line[0]
classesNow = line[1:]
print(modelName)
clf_fit = pickle.load(open('trained_models/'+ modelName + '.sav', 'rb'))
# evaluate the model on test data
yhat_clf = clf_fit.predict(X_test)
pred_df = pd.DataFrame(data=yhat_clf, index=test_data_df['image_paths'], columns=['max_prob'])
pred_df.to_pickle('predictions/'+modelName+'.pkl')
#Finding prob predictions for all classes
yhat_clf_prob = clf_fit.predict_proba(X_test)
pred_df = pd.DataFrame(data=yhat_clf_prob, index=test_data_df['image_paths'], columns=clf_fit.classes_)
pred_df.to_pickle('predictions/all_categories/'+modelName+'.pkl')
f.close()
###Output
_____no_output_____
###Markdown
Generating close word dict from FastText for each K
###Code
#Finding closest words to top predictions on testing set
import math
import pickle
from scipy.spatial import distance
#from itertools import islice
#def take(n, iterable):
# "Return first n items of the iterable as a list"
# return list(islice(iterable, n))
def scipy_distance(v, u):
return distance.euclidean(v, u)
#Reading the fasttext dictionary populated at clustering phase
fastext_dict = pickle.load(open("fasttext/fastext_dict.pkl","rb"))
print(len(fastext_dict))
#print(fastext_dict.keys())
#print(fastext_dict['car'])
#total_classes = 379
dict_keys = list(fastext_dict.keys())
#Generating the close words dictionary for all dictionary keys
closeWords_Count = 6
closeWord_dict = {}
for word in dict_keys:
distance_dict = {}
for fast_word in dict_keys:
dist = scipy_distance(fastext_dict[word],fastext_dict[fast_word])
distance_dict[fast_word] = dist
#sorted_distace_dict = {k: v for k, v in sorted(distance_dict.items(), key=lambda item: item[1],reverse = True)[:closeWords_Count+1]}
closeWords_dict = {k: v for k, v in sorted(distance_dict.items(), key=lambda item: item[1])[:closeWords_Count]}
closeWord_dict[word] = list(closeWords_dict.keys())
pickle.dump(closeWord_dict, open('close_word_dict/closeWord_dict.pkl', 'wb'))
#Generating the close words dictionary for each model
closeWords_Count = 6
f = open("fasttext/clusterCenters.txt",'r')
lines = f.readlines()
for line in lines:
line = line.split()
modelName = line[0]
print(modelName)
classesNow = line[1:]
closeWord_dict = {}
for word in classesNow:
distance_dict = {}
for fast_word in dict_keys:
dist = scipy_distance(fastext_dict[word],fastext_dict[fast_word])
distance_dict[fast_word] = dist
#sorted_distace_dict = {k: v for k, v in sorted(distance_dict.items(), key=lambda item: item[1],reverse = True)[:closeWords_Count+1]}
closeWords_dict = {k: v for k, v in sorted(distance_dict.items(), key=lambda item: item[1])[:closeWords_Count]}
closeWord_dict[word] = list(closeWords_dict.keys())
pickle.dump(closeWord_dict, open('close_word_dict/'+ modelName + '_closeWord_dict.pkl', 'wb'))
#pred_df = pd.read_csv('predictions/'+modelName+'.txt', header=True, index=True, sep=',')
f.close()
###Output
_____no_output_____
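###Markdown
As an aside, the nested distance loops above can be vectorized; a sketch using scipy.spatial.distance.cdist (assuming fastext_dict maps every name to an equal-length numeric vector, as it does here) that finds the same nearest neighbours in a single call:
###Code
import numpy as np
from scipy.spatial.distance import cdist

names = list(fastext_dict.keys())
vectors = np.stack([np.asarray(fastext_dict[name]) for name in names])
dist_matrix = cdist(vectors, vectors)  # pairwise euclidean distances
nearest = np.argsort(dist_matrix, axis=1)[:, :closeWords_Count]
closeWord_dict_vectorized = {names[i]: [names[j] for j in nearest[i]] for i in range(len(names))}
###Output
_____no_output_____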
###Markdown
Running final predictions from classifier and close word dict
###Code
import os
import numpy as np
import pickle
import pandas as pd
data_dir = os.path.join(os.getcwd(),'BlobStorage')
test_data_df = pd.read_pickle(data_dir+'/test_data_features_df.pkl')
y_test_df = pd.DataFrame(test_data_df.set_index('image_paths').class_name)
closeWord_dict = pickle.load(open('close_word_dict/closeWord_dict.pkl',"rb"))
#Running final predictions for top 3 predictions from classifier
h = open("Kmodels_final_accuracy.txt", "w")
f = open("fasttext/clusterCenters.txt",'r')
lines = f.readlines()
for line in lines[0:12]:
line = line.split()
modelName = line[0]
print(modelName)
#Reading the predictions for each model
pred_df = pd.read_pickle('predictions/all_categories/'+modelName+'.pkl')
#Finding top 3 predictions
top_n_predictions = np.argsort(pred_df.values, axis = 1)[:,-3:]
#then find the associated code for each prediction
top_class = pred_df.columns[top_n_predictions]
top_class_df = pd.DataFrame(data=top_class,columns=['top1','top2','top3'],index = pred_df.index)
results = pd.merge(y_test_df, top_class_df, left_index=True, right_index=True)
#closeWord_dict = pickle.load(open('close_word_dict/'+ modelName + '_closeWord_dict.pkl',"rb"))
results['guesses_1'] = results['top1'].map(closeWord_dict)
results['guesses_2'] = results['top2'].map(closeWord_dict)
results['guesses_3'] = results['top3'].map(closeWord_dict)
pred_check = []
#pred_df['pred_check'] = np.where(pred_df['actual_label'] in pred_df['guesses'],1,0)
for index,row in results.iterrows():
if (row['class_name'] in row['guesses_1']) or (row['class_name'] in row['guesses_2']) or (row['class_name'] in row['guesses_3']):
pred_check.append(1)
else:
pred_check.append(0)
results['pred_check'] = pred_check
total_right = results['pred_check'].sum()
total_rows = len(pred_df)
accuracy = round(total_right/total_rows,4)
h.write(str(modelName) + ',' + str(accuracy) + '\n')
f.close()
h.close()
#Running final predictions for single predictions
h = open("Kmodels_singlePred_final_accuracy.txt", "w")
f = open("fasttext/clusterCenters.txt",'r')
lines = f.readlines()
for line in lines:
line = line.split()
modelName = line[0]
print(modelName)
#Reading the predictions for each model
pred_df = pd.read_pickle('predictions/'+modelName+'.pkl')
results = pd.merge(y_test_df, pred_df, left_index=True, right_index=True)
closeWord_dict = pickle.load(open('close_word_dict/'+ modelName + '_closeWord_dict.pkl',"rb"))
results['guesses'] = results['max_prob'].map(closeWord_dict)
pred_check = []
#pred_df['pred_check'] = np.where(pred_df['actual_label'] in pred_df['guesses'],1,0)
for index,row in results.iterrows():
if row['class_name'] in row['guesses']:
pred_check.append(1)
else:
pred_check.append(0)
results['pred_check'] = pred_check
total_right = results['pred_check'].sum()
total_rows = len(pred_df)
accuracy = round(total_right/total_rows,4)
h.write(str(modelName) + ',' + str(accuracy) + '\n')
f.close()
h.close()
###Output
_____no_output_____ |
DigitalSignalProcessing/dsp_add_whitenoise.ipynb | ###Markdown
音声に白色雑音を加える
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import wavfile
from IPython.display import Audio
IN_WAVE_FILE = "in.wav" # モノラル音声(前提)
OUT_WAVE_FILE = "out_whitenoise.wav"
# 音声データ読み込み (fsがサンプリング周波数、dataは音声データ)
fs, speech_data = wavfile.read(IN_WAVE_FILE)
# 音声データの長さ
n_speech = len(speech_data)
# 雑音だけの区間の長さ
n_noise = 4000
# 全体の長さ
n_samples = n_noise + n_speech
# 白色雑音を生成
white_noise = np.random.normal(scale=0.04, size=n_samples)
# 2バイトのデータとして書き込むためにスケールを調整
white_noise = white_noise * np.iinfo(np.int16).max
# ゲインを調整
white_noise = 0.5 * white_noise
# 白色雑音を混ぜる
mixed_signal = white_noise # 最初に雑音を入れる
mixed_signal[n_noise:] += speech_data # 後から音声を足す
# プロット枠を確保 (10がヨコのサイズ、4はタテのサイズ)
fig = plt.figure(figsize=(12, 8))
axes1 = fig.add_subplot(2, 1, 1)
n_samples = len(speech_data)
time = np.arange(n_samples) / fs
axes1.plot(time, speech_data) # 音声データのプロット
axes1.set_xlabel("Time (sec)") # x軸のラベル
axes1.set_ylabel("Amplitude") # y軸のラベル
axes1.set_title("Original speech")
axes2 = fig.add_subplot(2, 1, 2)
n_samples = len(mixed_signal)
time = np.arange(n_samples) / fs
axes2.plot(time, mixed_signal) # 音声データのプロット
axes2.set_xlabel("Time (sec)") # x軸のラベル
axes2.set_ylabel("Amplitude") # y軸のラベル
axes2.set_title("Mixed speech (original + white noise)")
# 画像を画面表示
plt.tight_layout()
###Output
_____no_output_____
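###Markdown
As an aside, if you want the noise level set by a target SNR rather than a fixed gain, a minimal sketch (reusing speech_data from above; the 10 dB target is an arbitrary example):
###Code
target_snr_db = 10.0  # hypothetical target SNR in dB
speech_power = np.mean(speech_data.astype(np.float64) ** 2)
noise_power = speech_power / (10 ** (target_snr_db / 10))
snr_noise = np.random.normal(scale=np.sqrt(noise_power), size=len(speech_data))
snr_mixed = speech_data + snr_noise
###Output
_____no_output_____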
###Markdown
Play back the audio (original)
###Code
Audio(speech_data, rate=fs)
###Output
_____no_output_____
###Markdown
Play back the audio (with white noise)
###Code
Audio(mixed_signal, rate=fs)
###Output
_____no_output_____ |
Play_Store_App_Review_Analysis.ipynb | ###Markdown
The Play Store apps data has enormous potential to drive app-making businesses to success. Actionable insights can be drawn for developers to work on and capture the Android market.
###Code
# from google.colab import drive
# drive.mount('/content/drive')
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Playstore App Data Analysis
###Code
play_store_data = '/content/play_store_data.csv'
play_store_df = pd.read_csv(play_store_data)
user_review_data = '/content/user_reviews.csv'
user_review_df = pd.read_csv(user_review_data)
###Output
_____no_output_____
###Markdown
Check the few sample of data
###Code
play_store_df.head()
user_review_df.head()
###Output
_____no_output_____
###Markdown
Get the unique counts and other statistics of every column in the data
###Code
play_store_df.describe(include='all')
user_review_df.describe(include='all')
###Output
_____no_output_____
###Markdown
Data preprocessing Check for missing/null values in the data
###Code
play_store_df.isnull().sum()
###Output
_____no_output_____
###Markdown
The columns Rating, Type, Content Rating, Current Ver and Android Ver have 1474, 1, 1, 8 and 3 null values respectively. Since Type, Content Rating, Current Ver and Android Ver have very few null values, we can simply drop the rows which contain null values in these columns, or we can impute them with the most frequent values observed in the rest of the data (a sketch of that alternative is shown right after the next cell). Let us remove these few samples from the dataset.
###Code
play_store_df.dropna(subset=['Type', 'Content Rating', 'Current Ver', 'Android Ver'], inplace=True)
###Output
_____no_output_____
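###Markdown
For reference, the alternative mentioned above (imputing the few missing categorical values with the most frequent value instead of dropping the rows) would look like this sketch; it is not applied here:
###Code
# hypothetical alternative: fill each categorical column with its mode
for col in ['Type', 'Content Rating', 'Current Ver', 'Android Ver']:
    play_store_df[col] = play_store_df[col].fillna(play_store_df[col].mode()[0])
###Output
_____no_output_____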
###Markdown
Looking at the null values again
###Code
play_store_df.isnull().sum()
###Output
_____no_output_____
###Markdown
Let us check the mean and median values for replacing the null values
###Code
mean_value = play_store_df['Rating'].mean()
print('Mean value', mean_value)
median_value = play_store_df['Rating'].median()
print('Median value', median_value)
###Output
Mean value 4.191837606837612
Median value 4.3
###Markdown
Since the mean and median are very close to each other, we can replace the missing Rating values with either of them.
###Code
play_store_df['Rating'].fillna(value=median_value, inplace=True)
###Output
_____no_output_____
###Markdown
Let us cross-check that all null values have been handled
###Code
play_store_df.isnull().sum()
###Output
_____no_output_____
###Markdown
Now the data contains no missing values. Check the data types of all the columns
###Code
play_store_df.dtypes
###Output
_____no_output_____
###Markdown
Convert reviews column to int datatype
###Code
play_store_df['Reviews'] = play_store_df.Reviews.astype(int)
###Output
_____no_output_____
###Markdown
The Size column contains special characters such as ',', '+', 'M' and 'k', and some entries have the value "Varies with device". We need to remove or convert all of these and then cast the column to int or float.
###Code
play_store_df['Size'] = play_store_df.Size.apply(lambda x: x.strip('+')) # removing the + Sign
play_store_df['Size'] = play_store_df.Size.apply(lambda x: x.replace(',', '')) # removing the `,`
play_store_df['Size'] = play_store_df.Size.apply(lambda x: x.replace('M', 'e+6')) # converting the M to 1e+6
play_store_df['Size'] = play_store_df.Size.apply(lambda x: x.replace('k', 'e+3')) # converting the K to 1e+3
play_store_df['Size'] = play_store_df.Size.replace('Varies with device', np.NaN) # replacing varies with device with NaN
play_store_df['Size'] = pd.to_numeric(play_store_df['Size']) # Converting the string to Numeric type
###Output
_____no_output_____
###Markdown
Since we converted the "Varies with device" value to NaN, we now have to handle those NaN values. It is better to drop the rows where Size is NaN, because replacing them with the mean or mode would not be sensible: some apps are very large while others are very small.
###Code
play_store_df.dropna(subset=['Size'], inplace=True)
###Output
_____no_output_____
###Markdown
Converting Installs column from object to integer
###Code
play_store_df['Installs'] = play_store_df.Installs.apply(lambda x: x.strip('+'))
play_store_df['Installs'] = play_store_df.Installs.apply(lambda x: x.replace(',', ''))
play_store_df['Installs'] = pd.to_numeric(play_store_df['Installs'])
###Output
_____no_output_____
###Markdown
Converting the Price column from object to a numeric type. The values contain the special symbol $, which is removed before the conversion.
###Code
play_store_df['Price'] = play_store_df.Price.apply(lambda x: x.strip('$'))
play_store_df['Price'] = pd.to_numeric(play_store_df['Price'])
play_store_df.dtypes
###Output
_____no_output_____
###Markdown
EDA
###Code
###Output
_____no_output_____
###Markdown
Check the correlation between data
###Code
f,ax = plt.subplots(figsize=(12, 12))
sns.heatmap(play_store_df.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
As we can see, there is a correlation of 0.6 between Reviews and Installs. Top categories in the Play Store containing the highest number of apps
###Code
y = play_store_df['Category'].value_counts().index
x = play_store_df['Category'].value_counts()
xaxis = [x[i] for i in range(len(x))]
yaxis = [y[i] for i in range(len(x))]
plt.figure(figsize=(18,13))
plt.xlabel("Count")
plt.ylabel("Category")
graph = sns.barplot(x=xaxis, y=yaxis, palette="husl")
graph.set_title("Top categories on Google Playstore", fontsize=25);
###Output
_____no_output_____
###Markdown
There are 33 categories in total in the dataset. From the above output we can conclude that on the Play Store most apps fall under the Family and Game categories, and the fewest under Beauty and Comics. Which category of apps from the ‘Content Rating’ column is found most on the Play Store
###Code
x = play_store_df['Content Rating'].value_counts().index
y = play_store_df['Content Rating'].value_counts()
xaxis = [x[i] for i in range(len(x))]
yaxis = [y[i] for i in range(len(x))]
plt.figure(figsize=(12,10))
plt.bar(xaxis, yaxis, width=0.8, color=['#15244C','#FFFF48','#292734','#EF2920','#CD202D','#ECC5F2'], alpha=0.8);
plt.title('Content Rating',size=20);
plt.ylabel('Apps(Count)');
plt.xlabel('Content Rating');
###Output
_____no_output_____
###Markdown
The Everyone content rating has the highest number of apps. Distribution of the ratings in the data frame.
###Code
plt.figure(figsize=(15,9))
plt.xlabel("Rating")
plt.ylabel("Frequency")
graph = sns.kdeplot(play_store_df.Rating, color="Blue", shade=True)
plt.title('Distribution of Rating',size=20);
###Output
_____no_output_____
###Markdown
Most of the apps in the Google Play Store are rated between 4 and 5. Apps which are paid vs. free
###Code
plt.figure(figsize=(10,10))
labels = play_store_df['Type'].value_counts(sort=True).index
sizes = play_store_df['Type'].value_counts(sort=True)
colors = ["blue","lightgreen"]
explode = (0.2,0)
plt.pie(sizes, explode=explode, labels=labels, colors=colors, autopct='%1.1f%%', shadow=True, startangle=0)
plt.title('Percent of Free Vs Paid Apps in store',size = 20)
plt.show()
###Output
_____no_output_____
###Markdown
Approximately 92% of apps are free and 8% are paid. Categories with the highest number of installs
###Code
most_install_dfs = play_store_df.groupby('Category')[['Installs']].sum().sort_values(by='Installs', ascending=False)
xaxis = [most_install_dfs.Installs[i] for i in range(len(most_install_dfs))]
yaxis = [most_install_dfs.index[i] for i in range(len(most_install_dfs))]
plt.figure(figsize=(18,13))
plt.xlabel("Installs")
plt.ylabel("Category")
graph = sns.barplot(x=xaxis, y=yaxis, alpha =0.9, palette= "viridis")
graph.set_title("Installs", fontsize = 25);
###Output
_____no_output_____
###Markdown
The top categories with the highest installs are Game, Family, Communication, News & Magazines, and Tools. Top 25 most-installed apps in the Game category
###Code
top = play_store_df[play_store_df['Category'] == 'GAME']
topapps = top.sort_values(by='Installs', ascending=False).head(25)
# Top_Apps_in_art_and_design
plt.figure(figsize=(15,12))
plt.title('Top 25 Installed Apps',size = 20);
graph = sns.barplot(x=topapps.App, y=topapps.Installs)
graph.set_xticklabels(graph.get_xticklabels(), rotation= 45, horizontalalignment='right');
###Output
_____no_output_____
###Markdown
Top 10 most expensive apps in the Play Store
###Code
topPaidApps = play_store_df[play_store_df['Type'] == 'Paid'].sort_values(by='Price', ascending=False).head(11)
topPaidApps_df = topPaidApps[['App', 'Installs']].drop(9934)
plt.figure(figsize=(15,12));
plt.pie(topPaidApps_df.Installs, explode=None, labels=topPaidApps_df.App, autopct='%1.1f%%', startangle=0);
plt.title('Top Expensive Apps Distribution',size = 20);
plt.legend(topPaidApps_df.App,
loc="lower right",
title="Apps",
fontsize = "xx-small"
);
###Output
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:214: RuntimeWarning: Glyph 128142 missing from current font.
font.set_text(s, 0.0, flags=flags)
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:183: RuntimeWarning: Glyph 128142 missing from current font.
font.set_text(s, 0, flags=flags)
###Markdown
I am Rich is the most expensive app in the Google Play Store, followed by I am Rich Premium. We also had to drop one row for this visualization because the app's name was in Chinese and it was messing with the pie chart. Count of apps in different genres
###Code
topAppsinGenres = play_store_df['Genres'].value_counts().head(50)
xaxis = [topAppsinGenres.index[i] for i in range(len(topAppsinGenres))]
yaxis = [topAppsinGenres[i] for i in range(len(topAppsinGenres))]
plt.figure(figsize=(15,9))
plt.ylabel('Genres(App Count)')
plt.xlabel('Genres')
graph = sns.barplot(x=xaxis, y=yaxis, palette="deep")
graph.set_xticklabels(graph.get_xticklabels(), rotation=90, fontsize=12)
graph.set_title("Top Genres in the Playstore", fontsize = 20);
###Output
_____no_output_____
###Markdown
The highest number of apps is found in the Tools and Entertainment genres, followed by Education, Medical and many more. Apps that have made the highest earnings
###Code
Paid_Apps_df = play_store_df[play_store_df['Type'] == 'Paid']
earning_df = Paid_Apps_df[['App', 'Installs', 'Price']]
earning_df['Earnings'] = earning_df['Installs'] * earning_df['Price'];
earning_df_sorted_by_Earnings = earning_df.sort_values(by='Earnings', ascending=False).head(50)
earning_df_sorted_by_Price = earning_df_sorted_by_Earnings.sort_values(by='Price', ascending=False)
plt.figure(figsize=(15,9))
plt.bar(earning_df_sorted_by_Price.App, earning_df_sorted_by_Price.Earnings, width=1.1, label=earning_df_sorted_by_Price.Earnings)
plt.xlabel("Apps")
plt.ylabel("Earnings")
plt.tick_params(rotation=90)
plt.title("Top Earning Apps");
###Output
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:3: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
This is separate from the ipykernel package so we can avoid doing imports until
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:214: RuntimeWarning: Glyph 128142 missing from current font.
font.set_text(s, 0.0, flags=flags)
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:183: RuntimeWarning: Glyph 128142 missing from current font.
font.set_text(s, 0, flags=flags)
###Markdown
The top five apps with the highest earnings found on google play store are: I am Rich, I am Rich Premium, Hitman Sniper, Grand Theft Auto: San Andreas, Facetune - For Free
###Code
# Join and merge the two dataframe
merged_df = pd.merge(play_store_df, user_review_df, on='App', how = "inner")
# Drop NA values from Sentiment and Translated_Review columns
merged_df = merged_df.dropna(subset=['Sentiment', 'Translated_Review'])
sns.set_style('ticks')
fig, ax = plt.subplots()
fig.set_size_inches(11, 8)
# User review sentiment polarity for paid vs. free apps
ax = sns.boxplot(x = 'Type', y = 'Sentiment_Polarity', data = merged_df)
ax.set_title('Sentiment Polarity Distribution')
###Output
_____no_output_____
###Markdown
Free apps receive a lot of negative comments, as indicated by the outliers on the negative y-axis. Reviews for paid apps appear never to be extremely negative. This gives an indication of app quality: on average, paid apps appear to be of higher quality than free apps.
###Code
user_review_df1 = user_review_df.dropna(inplace=False)
# Create the Likert Scale
likert = {
"Negative": -1,
"Neutral": 0,
"Positive": 1
}
# Transform the Sentiment column to match the Likert scale values
user_review_df1.Sentiment = user_review_df1.Sentiment.apply(lambda x: likert[x]).copy()
# Here, we obtain the mean for each app by grouping the data
reviews_mean = user_review_df1.groupby("App").mean().copy()
complete_data = pd.merge(left=play_store_df, right=reviews_mean, on="App").copy()
# Drop duplicates
complete_data.drop_duplicates("App", inplace=True)
# Reset the index since now we have a different number of observations
complete_data = complete_data.reset_index().drop("index", axis=1).copy()
# Select columns that will be used
columns = [0, 1, 2, 3, 5, 6, 8, 9, 13, 14, 15]
complete_data = complete_data.iloc[:,columns].copy()
###Output
_____no_output_____
###Markdown
Analyzing how sentiment influences the rating of the app
###Code
# Create the plot for the histograms
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 6), sharey=True)
# Create the histograms
ax[0].hist(complete_data.Sentiment, bins=20)
ax[1].hist(complete_data.Rating, bins=20)
ax[1].set_xlim(0, 5)
# Add titles
ax[0].set_title("Sentiment")
ax[1].set_title("Rating")
plt.show()
###Output
_____no_output_____ |
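###Markdown
As a quick numeric complement to the histograms above, the linear association between average review sentiment and store rating can be checked directly (a sketch using the complete_data frame built earlier):
###Code
# Pearson correlation between mean review sentiment and app rating
print(complete_data[['Sentiment', 'Rating']].corr())
###Output
_____no_output_____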
CTC_IGFL_FR.ipynb | ###Markdown
Download CTC-IGFL-FR
###Code
from pathlib import Path
%cd /content
if Path('CTC-IGFL-FR-main').exists():
print('CTC-IGFL-FR-main directory alredy exists')
else:
!wget https://github.com/ksugar/CTC-IGFL-FR/archive/refs/heads/main.zip
!unzip -q main.zip && rm main.zip
%cd CTC-IGFL-FR-main/src/SW
###Output
/content
--2022-01-10 11:39:39-- https://github.com/ksugar/CTC-IGFL-FR/archive/refs/heads/main.zip
Resolving github.com (github.com)... 52.69.186.44
Connecting to github.com (github.com)|52.69.186.44|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://codeload.github.com/ksugar/CTC-IGFL-FR/zip/refs/heads/main [following]
--2022-01-10 11:39:39-- https://codeload.github.com/ksugar/CTC-IGFL-FR/zip/refs/heads/main
Resolving codeload.github.com (codeload.github.com)... 52.193.111.178
Connecting to codeload.github.com (codeload.github.com)|52.193.111.178|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/zip]
Saving to: ‘main.zip’
main.zip [ <=> ] 347.52K 1.85MB/s in 0.2s
2022-01-10 11:39:40 (1.85 MB/s) - ‘main.zip’ saved [355864]
/content/CTC-IGFL-FR-main/src/SW
###Markdown
Download data
###Code
!./download_data.sh BF-C2DL-HSC
###Output
--2022-01-10 11:39:40-- http://data.celltrackingchallenge.net/training-datasets/BF-C2DL-HSC.zip
Resolving data.celltrackingchallenge.net (data.celltrackingchallenge.net)... 147.251.52.183
Connecting to data.celltrackingchallenge.net (data.celltrackingchallenge.net)|147.251.52.183|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://data.celltrackingchallenge.net/training-datasets/BF-C2DL-HSC.zip [following]
--2022-01-10 11:39:41-- https://data.celltrackingchallenge.net/training-datasets/BF-C2DL-HSC.zip
Connecting to data.celltrackingchallenge.net (data.celltrackingchallenge.net)|147.251.52.183|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1707088358 (1.6G) [application/zip]
Saving to: ‘../../Data/BF-C2DL-HSC.zip’
../../Data/BF-C2DL- 100%[===================>] 1.59G 10.9MB/s in 2m 34s
2022-01-10 11:42:16 (10.6 MB/s) - ‘../../Data/BF-C2DL-HSC.zip’ saved [1707088358/1707088358]
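###Markdown
A quick sanity check of the download (assuming download_data.sh also unpacks the archive into ../../Data/BF-C2DL-HSC, which the preprocessing step below expects):
###Code
!ls ../../Data/BF-C2DL-HSC
###Output
_____no_output_____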
###Markdown
Create a Conda environment
###Code
!./create_env.sh
###Output
--2022-01-10 11:42:33-- https://repo.anaconda.com/miniconda/Miniconda3-py38_4.8.3-Linux-x86_64.sh
Resolving repo.anaconda.com (repo.anaconda.com)... 104.16.130.3, 104.16.131.3, 2606:4700::6810:8203, ...
Connecting to repo.anaconda.com (repo.anaconda.com)|104.16.130.3|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 93052469 (89M) [application/x-sh]
Saving to: ‘miniconda.sh’
miniconda.sh 100%[===================>] 88.74M 14.4MB/s in 6.2s
2022-01-10 11:42:40 (14.3 MB/s) - ‘miniconda.sh’ saved [93052469/93052469]
PREFIX=/content/CTC-IGFL-FR-main/src/SW/miniconda
Unpacking payload ...
Collecting package metadata (current_repodata.json): - \ | done
Solving environment: - \ done
## Package Plan ##
environment location: /content/CTC-IGFL-FR-main/src/SW/miniconda
added / updated specs:
- _libgcc_mutex==0.1=main
- ca-certificates==2020.1.1=0
- certifi==2020.4.5.1=py38_0
- cffi==1.14.0=py38he30daa8_1
- chardet==3.0.4=py38_1003
- conda-package-handling==1.6.1=py38h7b6447c_0
- conda==4.8.3=py38_0
- cryptography==2.9.2=py38h1ba5d50_0
- idna==2.9=py_1
- ld_impl_linux-64==2.33.1=h53a641e_7
- libedit==3.1.20181209=hc058e9b_0
- libffi==3.3=he6710b0_1
- libgcc-ng==9.1.0=hdf63c60_0
- libstdcxx-ng==9.1.0=hdf63c60_0
- ncurses==6.2=he6710b0_1
- openssl==1.1.1g=h7b6447c_0
- pip==20.0.2=py38_3
- pycosat==0.6.3=py38h7b6447c_1
- pycparser==2.20=py_0
- pyopenssl==19.1.0=py38_0
- pysocks==1.7.1=py38_0
- python==3.8.3=hcff3b4d_0
- readline==8.0=h7b6447c_0
- requests==2.23.0=py38_0
- ruamel_yaml==0.15.87=py38h7b6447c_0
- setuptools==46.4.0=py38_0
- six==1.14.0=py38_0
- sqlite==3.31.1=h62c20be_1
- tk==8.6.8=hbc83047_0
- tqdm==4.46.0=py_0
- urllib3==1.25.8=py38_0
- wheel==0.34.2=py38_0
- xz==5.2.5=h7b6447c_0
- yaml==0.1.7=had09818_2
- zlib==1.2.11=h7b6447c_3
The following NEW packages will be INSTALLED:
_libgcc_mutex pkgs/main/linux-64::_libgcc_mutex-0.1-main
ca-certificates pkgs/main/linux-64::ca-certificates-2020.1.1-0
certifi pkgs/main/linux-64::certifi-2020.4.5.1-py38_0
cffi pkgs/main/linux-64::cffi-1.14.0-py38he30daa8_1
chardet pkgs/main/linux-64::chardet-3.0.4-py38_1003
conda pkgs/main/linux-64::conda-4.8.3-py38_0
conda-package-han~ pkgs/main/linux-64::conda-package-handling-1.6.1-py38h7b6447c_0
cryptography pkgs/main/linux-64::cryptography-2.9.2-py38h1ba5d50_0
idna pkgs/main/noarch::idna-2.9-py_1
ld_impl_linux-64 pkgs/main/linux-64::ld_impl_linux-64-2.33.1-h53a641e_7
libedit pkgs/main/linux-64::libedit-3.1.20181209-hc058e9b_0
libffi pkgs/main/linux-64::libffi-3.3-he6710b0_1
libgcc-ng pkgs/main/linux-64::libgcc-ng-9.1.0-hdf63c60_0
libstdcxx-ng pkgs/main/linux-64::libstdcxx-ng-9.1.0-hdf63c60_0
ncurses pkgs/main/linux-64::ncurses-6.2-he6710b0_1
openssl pkgs/main/linux-64::openssl-1.1.1g-h7b6447c_0
pip pkgs/main/linux-64::pip-20.0.2-py38_3
pycosat pkgs/main/linux-64::pycosat-0.6.3-py38h7b6447c_1
pycparser pkgs/main/noarch::pycparser-2.20-py_0
pyopenssl pkgs/main/linux-64::pyopenssl-19.1.0-py38_0
pysocks pkgs/main/linux-64::pysocks-1.7.1-py38_0
python pkgs/main/linux-64::python-3.8.3-hcff3b4d_0
readline pkgs/main/linux-64::readline-8.0-h7b6447c_0
requests pkgs/main/linux-64::requests-2.23.0-py38_0
ruamel_yaml pkgs/main/linux-64::ruamel_yaml-0.15.87-py38h7b6447c_0
setuptools pkgs/main/linux-64::setuptools-46.4.0-py38_0
six pkgs/main/linux-64::six-1.14.0-py38_0
sqlite pkgs/main/linux-64::sqlite-3.31.1-h62c20be_1
tk pkgs/main/linux-64::tk-8.6.8-hbc83047_0
tqdm pkgs/main/noarch::tqdm-4.46.0-py_0
urllib3 pkgs/main/linux-64::urllib3-1.25.8-py38_0
wheel pkgs/main/linux-64::wheel-0.34.2-py38_0
xz pkgs/main/linux-64::xz-5.2.5-h7b6447c_0
yaml pkgs/main/linux-64::yaml-0.1.7-had09818_2
zlib pkgs/main/linux-64::zlib-1.2.11-h7b6447c_3
Preparing transaction: / - \ done
Executing transaction: / - \ | / - \ | / done
installation finished.
WARNING:
You currently have a PYTHONPATH environment variable set. This may cause
unexpected behavior when running the Python interpreter in Miniconda3.
For best results, please verify that your PYTHONPATH only points to
directories of packages that are compatible with the Python interpreter
in Miniconda3: /content/CTC-IGFL-FR-main/src/SW/miniconda
Collecting package metadata (repodata.json): - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ done
Solving environment: / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - done
==> WARNING: A newer version of conda exists. <==
current version: 4.8.3
latest version: 4.11.0
Please update conda by running
$ conda update -n base -c defaults conda
Downloading and Extracting Packages
ptyprocess-0.6.0 | 23 KB | : 100% 1.0/1 [00:02<00:00, 2.38s/it]
libstdcxx-ng-9.1.0 | 4.0 MB | : 100% 1.0/1 [00:04<00:00, 4.61s/it]
libarchive-3.4.2 | 1.6 MB | : 100% 1.0/1 [00:03<00:00, 3.66s/it]
filelock-3.0.12 | 12 KB | : 100% 1.0/1 [00:02<00:00, 2.33s/it]
python-dateutil-2.8. | 221 KB | : 100% 1.0/1 [00:02<00:00, 2.23s/it]
zstd-1.4.5 | 716 KB | : 100% 1.0/1 [00:03<00:00, 3.39s/it]
matplotlib-base-3.2. | 7.1 MB | : 100% 1.0/1 [00:04<00:00, 4.44s/it]
lcms2-2.11 | 419 KB | : 100% 1.0/1 [00:03<00:00, 3.03s/it]
tornado-6.1 | 647 KB | : 100% 1.0/1 [00:03<00:00, 3.46s/it]
cryptography-2.8 | 612 KB | : 100% 1.0/1 [00:03<00:00, 3.48s/it]
intel-openmp-2019.4 | 876 KB | : 100% 1.0/1 [00:03<00:00, 3.56s/it]
yaml-0.1.7 | 85 KB | : 100% 1.0/1 [00:01<00:00, 1.78s/it]
pysocks-1.7.1 | 30 KB | : 100% 1.0/1 [00:02<00:00, 2.31s/it]
libzopfli-1.0.3 | 179 KB | : 100% 1.0/1 [00:02<00:00, 2.03s/it]
scikit-learn-0.23.1 | 6.8 MB | : 100% 1.0/1 [00:04<00:00, 4.53s/it]
chardet-3.0.4 | 173 KB | : 100% 1.0/1 [00:02<00:00, 2.83s/it]
cloudpickle-1.6.0 | 29 KB | : 100% 1.0/1 [00:02<00:00, 2.38s/it]
pygments-2.5.2 | 672 KB | : 100% 1.0/1 [00:03<00:00, 3.44s/it]
freetype-2.9.1 | 822 KB | : 100% 1.0/1 [00:02<00:00, 2.59s/it]
backcall-0.1.0 | 20 KB | : 100% 1.0/1 [00:01<00:00, 1.54s/it]
blosc-1.21.0 | 74 KB | : 100% 1.0/1 [00:01<00:00, 1.81s/it]
tqdm-4.48.2 | 63 KB | : 100% 1.0/1 [00:02<00:00, 2.59s/it]
cffi-1.13.0 | 224 KB | : 100% 1.0/1 [00:03<00:00, 3.06s/it]
markupsafe-1.1.1 | 29 KB | : 100% 1.0/1 [00:01<00:00, 1.54s/it]
asn1crypto-1.2.0 | 162 KB | : 100% 1.0/1 [00:02<00:00, 2.84s/it]
pytz-2019.3 | 231 KB | : 100% 1.0/1 [00:03<00:00, 3.11s/it]
readline-7.0 | 392 KB | : 100% 1.0/1 [00:03<00:00, 3.31s/it]
snappy-1.1.8 | 43 KB | : 100% 1.0/1 [00:01<00:00, 1.83s/it]
mkl-2019.4 | 204.1 MB | : 100% 1.0/1 [00:45<00:00, 45.59s/it]
py-lief-0.9.0 | 1.5 MB | : 100% 1.0/1 [00:03<00:00, 3.06s/it]
setuptools-41.4.0 | 651 KB | : 100% 1.0/1 [00:03<00:00, 3.70s/it]
cytoolz-0.11.0 | 367 KB | : 100% 1.0/1 [00:02<00:00, 2.31s/it]
libwebp-1.0.1 | 913 KB | : 100% 1.0/1 [00:02<00:00, 2.86s/it]
joblib-1.0.1 | 207 KB | : 100% 1.0/1 [00:03<00:00, 3.13s/it]
conda-4.9.2 | 3.1 MB | : 100% 1.0/1 [00:03<00:00, 3.57s/it]
idna-2.8 | 101 KB | : 100% 1.0/1 [00:02<00:00, 2.02s/it]
beautifulsoup4-4.8.2 | 161 KB | : 100% 1.0/1 [00:02<00:00, 2.10s/it]
lzo-2.10 | 313 KB | : 100% 1.0/1 [00:02<00:00, 2.33s/it]
jxrlib-1.1 | 238 KB | : 100% 1.0/1 [00:02<00:00, 2.99s/it]
requests-2.22.0 | 89 KB | : 100% 1.0/1 [00:00<00:00, 1.49it/s]
blas-1.0 | 6 KB | : 100% 1.0/1 [00:01<00:00, 1.33s/it]
pywavelets-1.1.1 | 4.4 MB | : 100% 1.0/1 [00:04<00:00, 4.58s/it]
parso-0.5.2 | 69 KB | : 100% 1.0/1 [00:02<00:00, 2.61s/it]
wheel-0.33.6 | 40 KB | : 100% 1.0/1 [00:02<00:00, 2.53s/it]
scipy-1.4.1 | 18.9 MB | : 100% 1.0/1 [00:07<00:00, 7.51s/it]
decorator-4.4.1 | 13 KB | : 100% 1.0/1 [00:02<00:00, 2.42s/it]
ipython_genutils-0.2 | 39 KB | : 100% 1.0/1 [00:02<00:00, 2.36s/it]
urllib3-1.24.2 | 153 KB | : 100% 1.0/1 [00:01<00:00, 1.98s/it]
cudatoolkit-10.1.243 | 513.2 MB | : 100% 1.0/1 [01:27<00:00, 87.91s/it]
mkl_fft-1.0.15 | 172 KB | : 100% 1.0/1 [00:02<00:00, 2.87s/it]
libaec-1.0.4 | 35 KB | : 100% 1.0/1 [00:02<00:00, 2.36s/it]
jedi-0.15.2 | 759 KB | : 100% 1.0/1 [00:02<00:00, 3.00s/it]
prompt_toolkit-3.0.2 | 234 KB | : 100% 1.0/1 [00:02<00:00, 2.35s/it]
numpy-1.17.4 | 4 KB | : 100% 1.0/1 [00:02<00:00, 2.06s/it]
pexpect-4.7.0 | 82 KB | : 100% 1.0/1 [00:01<00:00, 1.83s/it]
jpeg-9b | 248 KB | : 100% 1.0/1 [00:03<00:00, 3.04s/it]
libgfortran-ng-7.3.0 | 1.3 MB | : 100% 1.0/1 [00:02<00:00, 2.95s/it]
dask-core-2021.2.0 | 681 KB | : 100% 1.0/1 [00:03<00:00, 3.35s/it]
pyopenssl-19.0.0 | 82 KB | : 100% 1.0/1 [00:02<00:00, 2.57s/it]
pyyaml-5.2 | 190 KB | : 100% 1.0/1 [00:01<00:00, 1.97s/it]
openssl-1.1.1i | 3.8 MB | : 100% 1.0/1 [00:04<00:00, 4.52s/it]
mkl_random-1.1.0 | 376 KB | : 100% 1.0/1 [00:03<00:00, 3.12s/it]
ipython-7.11.1 | 1.1 MB | : 100% 1.0/1 [00:03<00:00, 3.73s/it]
pycosat-0.6.3 | 105 KB | : 100% 1.0/1 [00:01<00:00, 1.99s/it]
libffi-3.2.1 | 43 KB | : 100% 1.0/1 [00:02<00:00, 2.56s/it]
liblief-0.9.0 | 4.4 MB | : 100% 1.0/1 [00:04<00:00, 4.69s/it]
openjpeg-2.3.0 | 456 KB | : 100% 1.0/1 [00:03<00:00, 3.26s/it]
tifffile-2021.1.14 | 125 KB | : 100% 1.0/1 [00:02<00:00, 2.03s/it]
networkx-2.5 | 1.2 MB | : 100% 1.0/1 [00:03<00:00, 3.66s/it]
pytorch-1.4.0 | 432.9 MB | : 100% 1.0/1 [01:22<00:00, 2653.98s/it]
pillow-7.0.0 | 657 KB | : 100% 1.0/1 [00:03<00:00, 3.38s/it]
icu-58.2 | 22.7 MB | : 100% 1.0/1 [00:07<00:00, 7.52s/it]
scikit-image-0.17.2 | 10.7 MB | : 100% 1.0/1 [00:04<00:00, 4.82s/it]
zlib-1.2.11 | 120 KB | : 100% 1.0/1 [00:02<00:00, 2.86s/it]
pyparsing-2.4.7 | 59 KB | : 100% 1.0/1 [00:01<00:00, 1.81s/it]
cycler-0.10.0 | 13 KB | : 100% 1.0/1 [00:02<00:00, 2.36s/it]
libxml2-2.9.9 | 2.0 MB | : 100% 1.0/1 [00:04<00:00, 4.10s/it]
kiwisolver-1.3.1 | 86 KB | : 100% 1.0/1 [00:02<00:00, 2.53s/it]
conda-build-3.18.11 | 526 KB | : 100% 1.0/1 [00:02<00:00, 2.57s/it]
tk-8.6.8 | 3.1 MB | : 100% 1.0/1 [00:04<00:00, 4.15s/it]
python-3.7.4 | 36.5 MB | : 100% 1.0/1 [00:09<00:00, 9.17s/it]
certifi-2020.12.5 | 143 KB | : 100% 1.0/1 [00:02<00:00, 2.01s/it]
lz4-c-1.9.3 | 216 KB | : 100% 1.0/1 [00:03<00:00, 3.04s/it]
_libgcc_mutex-0.1 | 3 KB | : 100% 1.0/1 [00:01<00:00, 1.39s/it]
pkginfo-1.5.0.1 | 43 KB | : 100% 1.0/1 [00:02<00:00, 2.56s/it]
traitlets-4.3.3 | 138 KB | : 100% 1.0/1 [00:02<00:00, 2.06s/it]
libtiff-4.1.0 | 607 KB | : 100% 1.0/1 [00:02<00:00, 2.61s/it]
libpng-1.6.37 | 364 KB | : 100% 1.0/1 [00:02<00:00, 2.34s/it]
ripgrep-11.0.2 | 1.5 MB | : 100% 1.0/1 [00:04<00:00, 4.47s/it]
python-libarchive-c- | 23 KB | : 100% 1.0/1 [00:02<00:00, 2.34s/it]
imagecodecs-2020.5.3 | 6.1 MB | : 100% 1.0/1 [00:04<00:00, 4.15s/it]
jinja2-2.10.3 | 95 KB | : 100% 1.0/1 [00:01<00:00, 1.76s/it]
psutil-5.6.7 | 329 KB | : 100% 1.0/1 [00:03<00:00, 3.13s/it]
mkl-service-2.3.0 | 208 KB | : 100% 1.0/1 [00:02<00:00, 2.30s/it]
six-1.12.0 | 22 KB | : 100% 1.0/1 [00:02<00:00, 2.31s/it]
xz-5.2.5 | 438 KB | : 100% 1.0/1 [00:03<00:00, 3.34s/it]
giflib-5.1.4 | 78 KB | : 100% 1.0/1 [00:02<00:00, 2.63s/it]
torchvision-0.5.0 | 9.1 MB | : 100% 1.0/1 [00:04<00:00, 4.76s/it]
wcwidth-0.1.7 | 23 KB | : 100% 1.0/1 [00:02<00:00, 2.52s/it]
pickleshare-0.7.5 | 13 KB | : 100% 1.0/1 [00:02<00:00, 2.35s/it]
brotli-1.0.9 | 402 KB | : 100% 1.0/1 [00:03<00:00, 3.07s/it]
ca-certificates-2021 | 128 KB | : 100% 1.0/1 [00:02<00:00, 2.76s/it]
charls-2.1.0 | 153 KB | : 100% 1.0/1 [00:02<00:00, 2.85s/it]
imageio-2.9.0 | 3.1 MB | : 100% 1.0/1 [00:03<00:00, 3.38s/it]
bzip2-1.0.8 | 105 KB | : 100% 1.0/1 [00:02<00:00, 2.04s/it]
libedit-3.1.20181209 | 188 KB | : 100% 1.0/1 [00:02<00:00, 2.95s/it]
libgcc-ng-9.1.0 | 8.1 MB | : 100% 1.0/1 [00:05<00:00, 5.26s/it]
pycparser-2.19 | 172 KB | : 100% 1.0/1 [00:02<00:00, 2.15s/it]
soupsieve-1.9.5 | 61 KB | : 100% 1.0/1 [00:01<00:00, 1.82s/it]
pip-19.3.1 | 1.9 MB | : 100% 1.0/1 [00:03<00:00, 3.33s/it]
glob2-0.7 | 14 KB | : 100% 1.0/1 [00:02<00:00, 2.29s/it]
ruamel_yaml-0.15.46 | 245 KB | : 100% 1.0/1 [00:02<00:00, 2.30s/it]
threadpoolctl-2.1.0 | 16 KB | : 100% 1.0/1 [00:01<00:00, 1.52s/it]
ncurses-6.1 | 958 KB | : 100% 1.0/1 [00:03<00:00, 3.83s/it]
ninja-1.9.0 | 1.6 MB | : 100% 1.0/1 [00:02<00:00, 2.95s/it]
numpy-base-1.17.4 | 5.2 MB | : 100% 1.0/1 [00:04<00:00, 4.96s/it]
sqlite-3.30.0 | 1.9 MB | : 100% 1.0/1 [00:03<00:00, 3.15s/it]
toolz-0.11.1 | 46 KB | : 100% 1.0/1 [00:02<00:00, 2.60s/it]
conda-package-handli | 872 KB | : 100% 1.0/1 [00:03<00:00, 3.45s/it]
olefile-0.46 | 48 KB | : 100% 1.0/1 [00:01<00:00, 1.83s/it]
patchelf-0.10 | 78 KB | : 100% 1.0/1 [00:01<00:00, 1.84s/it]
Preparing transaction: | / - \ | / - \ | done
Verifying transaction: - \ | / - \ | / - \ | / - \ | / - \ | / - done
Executing transaction: | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - done
Ran pip subprocess with arguments:
['/content/CTC-IGFL-FR-main/src/SW/miniconda/bin/python', '-m', 'pip', 'install', '-U', '-r', '/content/CTC-IGFL-FR-main/src/SW/condaenv.a_xir_p_.requirements.txt']
Pip subprocess output:
Processing ./elephant-core
Building wheels for collected packages: elephant
Building wheel for elephant (setup.py): started
Building wheel for elephant (setup.py): finished with status 'done'
Created wheel for elephant: filename=elephant-0.2.0-cp37-none-any.whl size=38137 sha256=b468f9426dfe42d9db1d77725f9a829b7cd7c1a88c53eef0f1591fdf8cd57e16
Stored in directory: /root/.cache/pip/wheels/55/c3/79/36997bc76070e5066655d88bb260d9dc13f474fe5b27767e44
Successfully built elephant
Installing collected packages: elephant
Successfully installed elephant-0.2.0
#
# To activate this environment, use
#
# $ conda activate base
#
# To deactivate an active environment, use
#
# $ conda deactivate
Collecting package metadata (repodata.json): - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / done
Solving environment: \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ done
==> WARNING: A newer version of conda exists. <==
current version: 4.9.2
latest version: 4.11.0
Please update conda by running
$ conda update -n base conda
Downloading and Extracting Packages
fasteners-0.16 | 25 KB | : 100% 1.0/1 [00:00<00:00, 15.67it/s]
asciitree-0.3.3 | 6 KB | : 100% 1.0/1 [00:00<00:00, 3.16it/s]
zarr-2.4.0 | 95 KB | : 100% 1.0/1 [00:00<00:00, 2.88it/s]
numcodecs-0.7.2 | 960 KB | : 100% 1.0/1 [00:00<00:00, 1.35it/s]
monotonic-1.5 | 9 KB | : 100% 1.0/1 [00:00<00:00, 28.55it/s]
msgpack-python-1.0.0 | 91 KB | : 100% 1.0/1 [00:00<00:00, 3.08it/s]
python_abi-3.7 | 4 KB | : 100% 1.0/1 [00:00<00:00, 39.54it/s]
tensorboardx-2.1 | 80 KB | : 100% 1.0/1 [00:00<00:00, 2.91it/s]
openssl-1.1.1h | 2.1 MB | : 100% 1.0/1 [00:00<00:00, 3.21it/s]
protobuf-3.13.0.1 | 704 KB | : 100% 1.0/1 [00:00<00:00, 1.48it/s]
certifi-2021.10.8 | 145 KB | : 100% 1.0/1 [00:00<00:00, 19.51it/s]
ca-certificates-2021 | 139 KB | : 100% 1.0/1 [00:00<00:00, 14.18it/s]
libprotobuf-3.13.0.1 | 2.3 MB | : 100% 1.0/1 [00:00<00:00, 2.34it/s]
Preparing transaction: / done
Verifying transaction: \ done
Executing transaction: / - \ | / - \ | done
#
# To activate this environment, use
#
# $ conda activate base
#
# To deactivate an active environment, use
#
# $ conda deactivate
###Markdown
Generate training datasets
###Code
!miniconda/bin/python prep/generate_seg_labels.py ../../Data/BF-C2DL-HSC train_data/BF-C2DL-HSC
###Output
2D data found at ../../Data/BF-C2DL-HSC/01_GT/SEG
(49, 1010, 1010) u1
100% 49/49 [00:01<00:00, 25.00it/s]
2D data found at ../../Data/BF-C2DL-HSC/02_GT/SEG
(8, 1010, 1010) u1
100% 8/8 [00:00<00:00, 13.72it/s]
2D data found at ../../Data/BF-C2DL-HSC/01_ST/SEG
(1764, 1010, 1010) u1
100% 1764/1764 [00:59<00:00, 29.84it/s]
2D data found at ../../Data/BF-C2DL-HSC/02_ST/SEG
(1764, 1010, 1010) u1
100% 1764/1764 [02:02<00:00, 14.39it/s]
2D data found at ../../Data/BF-C2DL-HSC/01_GT/SEG
(1764, 1010, 1010) u1
100% 49/49 [00:01<00:00, 32.49it/s]
100% 1764/1764 [00:52<00:00, 33.44it/s]
2D data found at ../../Data/BF-C2DL-HSC/02_GT/SEG
(1764, 1010, 1010) u1
100% 8/8 [00:00<00:00, 14.54it/s]
100% 1764/1764 [01:54<00:00, 15.43it/s]
###Markdown
Please note that if your dataset contains image files with sparse annotations, you need to specify them using the `--sparse` argument, e.g. Fluo-C2DL-MSC_sparse.json:
```json
{
    "01": [
        "man_seg009.tif",
        "man_seg028.tif",
        "man_seg030.tif",
        "man_seg031.tif",
        "man_seg036.tif",
        "man_seg046.tif",
        "man_seg047.tif"
    ],
    "02": [
        "man_seg009.tif",
        "man_seg013.tif",
        "man_seg015.tif",
        "man_seg016.tif"
    ]
}
```
```bash
!miniconda/bin/python prep/generate_seg_labels.py ../../Data/Fluo-C2DL-MSC train_data/Fluo-C2DL-MSC --sparse Fluo-C2DL-MSC_sparse.json
```
Prepare a training config file
###Code
!miniconda/bin/python generate_train_config.py training.json --dataset BF-C2DL-HSC/01-GT-seg BF-C2DL-HSC/02-GT-seg --model_name BF-C2DL-HSC-GT-seg.pth --log_dir BF-C2DL-HSC-GT-seg --n_epochs 10
###Output
_____no_output_____
###Markdown
Run a training script
###Code
!miniconda/bin/python train.py seg training.json
###Output
Train Epoch: 0 [0/1 (0%)] Loss: 0.455832
Train Epoch: 1 [0/1 (0%)] Loss: 0.298484
Train Epoch: 2 [0/1 (0%)] Loss: 0.268938
auto_bg_thresh: 0.0
batch_size: 1
c_ratio: None
class_weights: (1.0, 10.0, 10.0)
contrast: 0.0
crop_box: None
crop_size: (384, 384)
dataset_name: BF-C2DL-HSC/01-GT-seg
debug: False
device: cuda
false_weight: None
is_3d: False
is_livemode: False
is_pad: False
keep_axials: (True, True, True, True)
log_dir: logs/BF-C2DL-HSC-GT-seg
lr: 0.0005
model_path: models/BF-C2DL-HSC-GT-seg.pth
n_crops: 1
n_epochs: 10
output_prediction: False
p_thresh: None
patch_size: [128, 128]
r_max: None
r_min: None
rotation_angle: 0.0
scale_factor_base: 0.0
scales: None
timepoint: None
use_2d: False
use_median: None
zpath_input: train_data/BF-C2DL-HSC/01-GT-seg/imgs.zarr
zpath_seg_label: train_data/BF-C2DL-HSC/01-GT-seg/seg_labels.zarr
zpath_seg_label_vis: train_data/BF-C2DL-HSC/01-GT-seg/seg_labels_vis.zarr
zpath_seg_output: train_data/BF-C2DL-HSC/01-GT-seg/seg_outputs.zarr
auto_bg_thresh: 0.0
batch_size: 1
c_ratio: None
class_weights: (1.0, 10.0, 10.0)
contrast: 0.0
crop_box: None
crop_size: (384, 384)
dataset_name: BF-C2DL-HSC/02-GT-seg
debug: False
device: cuda
false_weight: None
is_3d: False
is_livemode: False
is_pad: False
keep_axials: (True, True, True, True)
log_dir: logs/BF-C2DL-HSC-GT-seg
lr: 0.0005
model_path: models/BF-C2DL-HSC-GT-seg.pth
n_crops: 1
n_epochs: 10
output_prediction: False
p_thresh: None
patch_size: [128, 128]
r_max: None
r_min: None
rotation_angle: 0.0
scale_factor_base: 0.0
scales: None
timepoint: None
use_2d: False
use_median: None
zpath_input: train_data/BF-C2DL-HSC/02-GT-seg/imgs.zarr
zpath_seg_label: train_data/BF-C2DL-HSC/02-GT-seg/seg_labels.zarr
zpath_seg_label_vis: train_data/BF-C2DL-HSC/02-GT-seg/seg_labels_vis.zarr
zpath_seg_output: train_data/BF-C2DL-HSC/02-GT-seg/seg_outputs.zarr
Train Epoch: 0 [0/53 (0%)] Loss: 1.038014 NLL Loss: 1.380138 Center Dice Loss: 1.000000 Smooth Loss: 0.032649
Eval Epoch: 0 Loss: 0.996351
Train Epoch: 1 [0/53 (0%)] Loss: 0.995269 NLL Loss: 0.979194 Center Dice Loss: 0.997055 Smooth Loss: 0.013280
Eval Epoch: 1 Loss: 0.984174
Train Epoch: 2 [0/53 (0%)] Loss: 0.983624 NLL Loss: 0.860272 Center Dice Loss: 0.997330 Smooth Loss: 0.011360
Eval Epoch: 2 Loss: 0.971131
Train Epoch: 3 [0/53 (0%)] Loss: 0.968924 NLL Loss: 0.721982 Center Dice Loss: 0.996362 Smooth Loss: 0.010758
Eval Epoch: 3 Loss: 0.956835
Train Epoch: 4 [0/53 (0%)] Loss: 0.953921 NLL Loss: 0.587261 Center Dice Loss: 0.994661 Smooth Loss: 0.009493
Eval Epoch: 4 Loss: 0.942152
Train Epoch: 5 [0/53 (0%)] Loss: 0.937996 NLL Loss: 0.453298 Center Dice Loss: 0.991852 Smooth Loss: 0.009450
Eval Epoch: 5 Loss: 0.928954
Train Epoch: 6 [0/53 (0%)] Loss: 0.921112 NLL Loss: 0.337566 Center Dice Loss: 0.985950 Smooth Loss: 0.008326
Eval Epoch: 6 Loss: 0.917517
Train Epoch: 7 [0/53 (0%)] Loss: 0.914061 NLL Loss: 0.294186 Center Dice Loss: 0.982936 Smooth Loss: 0.008099
Eval Epoch: 7 Loss: 0.906349
Train Epoch: 8 [0/53 (0%)] Loss: 0.890280 NLL Loss: 0.219754 Center Dice Loss: 0.964782 Smooth Loss: 0.008508
Eval Epoch: 8 Loss: 0.890048
Train Epoch: 9 [0/53 (0%)] Loss: 0.829467 NLL Loss: 0.214251 Center Dice Loss: 0.897824 Smooth Loss: 0.004281
Eval Epoch: 9 Loss: 0.864157
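###Markdown
Training metrics are written under the log_dir shown in the config (logs/BF-C2DL-HSC-GT-seg); assuming they are in TensorBoard format via tensorboardX (which the environment installs), they can be inspected from the notebook like this:
###Code
%load_ext tensorboard
%tensorboard --logdir logs/BF-C2DL-HSC-GT-seg
###Output
_____no_output_____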
###Markdown
Download required libraries for inference
###Code
!./download_libraries.sh
###Output
Downloading dependencies for Java
--2022-01-10 12:17:30-- https://repo1.maven.org/maven2/net/imglib2/imglib2/5.6.3/imglib2-5.6.3.jar
Resolving repo1.maven.org (repo1.maven.org)... 199.232.192.209, 199.232.196.209
Connecting to repo1.maven.org (repo1.maven.org)|199.232.192.209|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 762387 (745K) [application/java-archive]
Saving to: ‘lib/imglib2-5.6.3.jar’
imglib2-5.6.3.jar 100%[===================>] 744.52K 1.36MB/s in 0.5s
2022-01-10 12:17:31 (1.36 MB/s) - ‘lib/imglib2-5.6.3.jar’ saved [762387/762387]
--2022-01-10 12:17:31-- https://repo1.maven.org/maven2/gov/nist/math/jama/1.0.3/jama-1.0.3.jar
Resolving repo1.maven.org (repo1.maven.org)... 199.232.192.209, 199.232.196.209
Connecting to repo1.maven.org (repo1.maven.org)|199.232.192.209|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 37424 (37K) [application/java-archive]
Saving to: ‘lib/jama-1.0.3.jar’
jama-1.0.3.jar 100%[===================>] 36.55K --.-KB/s in 0.05s
2022-01-10 12:17:32 (802 KB/s) - ‘lib/jama-1.0.3.jar’ saved [37424/37424]
--2022-01-10 12:17:32-- http://maven.imagej.net/content/repositories/releases/org/mastodon/mastodon-collection/1.0.0-beta-17/mastodon-collection-1.0.0-beta-17.jar
Resolving maven.imagej.net (maven.imagej.net)... 144.92.48.199
Connecting to maven.imagej.net (maven.imagej.net)|144.92.48.199|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://maven.scijava.org/content/repositories/releases/org/mastodon/mastodon-collection/1.0.0-beta-17/mastodon-collection-1.0.0-beta-17.jar [following]
--2022-01-10 12:17:33-- https://maven.scijava.org/content/repositories/releases/org/mastodon/mastodon-collection/1.0.0-beta-17/mastodon-collection-1.0.0-beta-17.jar
Resolving maven.scijava.org (maven.scijava.org)... 144.92.48.199
Connecting to maven.scijava.org (maven.scijava.org)|144.92.48.199|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 373090 (364K) [application/java-archive]
Saving to: ‘lib/mastodon-collection-1.0.0-beta-17.jar’
mastodon-collection 100%[===================>] 364.35K 519KB/s in 0.7s
2022-01-10 12:17:34 (519 KB/s) - ‘lib/mastodon-collection-1.0.0-beta-17.jar’ saved [373090/373090]
--2022-01-10 12:17:34-- http://maven.imagej.net/content/repositories/releases/org/mastodon/mastodon-graph/1.0.0-beta-16/mastodon-graph-1.0.0-beta-16.jar
Resolving maven.imagej.net (maven.imagej.net)... 144.92.48.199
Connecting to maven.imagej.net (maven.imagej.net)|144.92.48.199|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://maven.scijava.org/content/repositories/releases/org/mastodon/mastodon-graph/1.0.0-beta-16/mastodon-graph-1.0.0-beta-16.jar [following]
--2022-01-10 12:17:35-- https://maven.scijava.org/content/repositories/releases/org/mastodon/mastodon-graph/1.0.0-beta-16/mastodon-graph-1.0.0-beta-16.jar
Resolving maven.scijava.org (maven.scijava.org)... 144.92.48.199
Connecting to maven.scijava.org (maven.scijava.org)|144.92.48.199|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 168824 (165K) [application/java-archive]
Saving to: ‘lib/mastodon-graph-1.0.0-beta-16.jar’
mastodon-graph-1.0. 100%[===================>] 164.87K 314KB/s in 0.5s
2022-01-10 12:17:36 (314 KB/s) - ‘lib/mastodon-graph-1.0.0-beta-16.jar’ saved [168824/168824]
File ‘lib/mastodon-graph-1.0.0-beta-16.jar’ already there; not retrieving.
--2022-01-10 12:17:36-- https://repo1.maven.org/maven2/com/eclipsesource/minimal-json/minimal-json/0.9.5/minimal-json-0.9.5.jar
Resolving repo1.maven.org (repo1.maven.org)... 199.232.192.209, 199.232.196.209
Connecting to repo1.maven.org (repo1.maven.org)|199.232.192.209|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 34221 (33K) [application/java-archive]
Saving to: ‘lib/minimal-json-0.9.5.jar’
minimal-json-0.9.5. 100%[===================>] 33.42K --.-KB/s in 0.04s
2022-01-10 12:17:36 (808 KB/s) - ‘lib/minimal-json-0.9.5.jar’ saved [34221/34221]
--2022-01-10 12:17:36-- https://repo1.maven.org/maven2/com/opencsv/opencsv/3.9/opencsv-3.9.jar
Resolving repo1.maven.org (repo1.maven.org)... 199.232.192.209, 199.232.196.209
Connecting to repo1.maven.org (repo1.maven.org)|199.232.192.209|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 80720 (79K) [application/java-archive]
Saving to: ‘lib/opencsv-3.9.jar’
opencsv-3.9.jar 100%[===================>] 78.83K 484KB/s in 0.2s
2022-01-10 12:17:37 (484 KB/s) - ‘lib/opencsv-3.9.jar’ saved [80720/80720]
File ‘lib/opencsv-3.9.jar’ already there; not retrieving.
--2022-01-10 12:17:37-- https://repo1.maven.org/maven2/net/sf/trove4j/trove4j/3.0.3/trove4j-3.0.3.jar
Resolving repo1.maven.org (repo1.maven.org)... 199.232.192.209, 199.232.196.209
Connecting to repo1.maven.org (repo1.maven.org)|199.232.192.209|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2523218 (2.4M) [application/java-archive]
Saving to: ‘lib/trove4j-3.0.3.jar’
trove4j-3.0.3.jar 100%[===================>] 2.41M 3.21MB/s in 0.8s
2022-01-10 12:17:38 (3.21 MB/s) - ‘lib/trove4j-3.0.3.jar’ saved [2523218/2523218]
--2022-01-10 12:17:38-- https://sites.imagej.net/Mastodonpreview/jars/trackmate-1.0.0-beta-13.jar-20190320130043
Resolving sites.imagej.net (sites.imagej.net)... 144.92.48.186
Connecting to sites.imagej.net (sites.imagej.net)|144.92.48.186|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1084822 (1.0M)
Saving to: ‘lib/trackmate-1.0.0-beta-13.jar’
lib/trackmate-1.0.0 100%[===================>] 1.03M 1003KB/s in 1.1s
2022-01-10 12:17:41 (1003 KB/s) - ‘lib/trackmate-1.0.0-beta-13.jar’ saved [1084822/1084822]
###Markdown
Prepare inference config files
###Code
!miniconda/bin/python generate_run_config.py run.json --seg_model models/BF-C2DL-HSC-GT-seg.pth --scales 0.645 0.645 --c_ratio 0.3 --p_thresh 0.8 --r_min 1 --r_max 50 --use_interpolation
###Output
_____no_output_____
###Markdown
Run a Java program for inference
###Code
!java -jar elephant-ctc-0.1.0.jar "../../Data/BF-C2DL-HSC/01" "../../Data/BF-C2DL-HSC/01_RES-allGT" "run.json"
###Output
_____no_output_____ |
youtube_api.ipynb | ###Markdown
Todo- ~~Build the title~~- ~~Build the ul/li tags~~- View count- Publish date- Video duration- ~~Uploader (channel) name~~
###Code
items[0]['id']
# To get the view count, this time we call the videos endpoint of the API.
def get_video_info(videoId):
url = 'https://www.googleapis.com/youtube/v3/videos'
params = dict(
part='snippet,contentDetails,statistics',
id=videoId,
key='AIzaSyBK7aaQgSA9mYbdQiJ5Eg6uu4oC_VdmD_s'
)
response = requests.get(url=url, params=params)
return response.json()
video = get_video_info(videoId)
video
# Check the view count of the first video
video['items'][0]['statistics']['viewCount']
# Get the publish date
video['items'][0]['snippet']['publishedAt']
# Video duration
video['items'][0]['contentDetails']['duration']
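# (Added sketch) The API returns the duration as an ISO 8601 string such as 'PT4M13S'.
# A minimal helper to convert it to seconds; it assumes durations shorter than one day,
# and the function name and regex-only approach are our own choice, not part of the original notebook.
import re
def iso8601_duration_to_seconds(duration):
    hours, minutes, seconds = re.match(r'PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?', duration).groups()
    return int(hours or 0) * 3600 + int(minutes or 0) * 60 + int(seconds or 0)
# e.g. iso8601_duration_to_seconds('PT1H2M3S') -> 3723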
html = []
for item in items:
videoId = item['id']['videoId']
videoInfo = get_video_info(videoId)
viewCount = videoInfo['items'][0]['statistics']['viewCount']
    duration = videoInfo['items'][0]['contentDetails']['duration']
html.append('<li>')
# html.append('<a href="https://www.youtube.com/watch?v={}">'.format(item['id']['videoId']))
html.append('<iframe src="https://www.youtube.com/embed/{}" allowfullscreen/>'.format(item['id']['videoId']))
html.append('<img src="{}">'.format(item['snippet']['thumbnails']['default']['url']))
html.append('</iframe>')
html.append('<h1 style="font-size:18px;">{}</h1>'.format(item['snippet']['title']))
    html.append('<span>Date: {}</span>'.format(item['snippet']['publishedAt']))
    html.append('<span>View count: {}</span>'.format(viewCount))
    html.append('<span>Duration: {}</span>'.format(duration))
    html.append('<span>Channel title: {}</span>'.format(item['snippet']['channelTitle']))
html.append('</li>')
html[:10]
# Join the generated tags with newline characters
temp = "\n".join(html) # turn the list into a single string separated by \n
content = "<ul>{}</ul>".format(temp)
content[:1000]
import webbrowser
import os
file_path = 'youtube.html'
f = open(file_path,'w')
template = """
<html>
<head>
<meta charset="UTF-8">
<title>{{ title }}</title>
<link rel="stylesheet" href="style.css">
</head>
<body>
<header>
<h1>{{ title }}</h1>
</header>
<section>
{{ content }}
</section>
<footer>ⓒ Toy Project</footer>
</body>
</html>
"""
template = template.replace('{{ title }}', "Videos for 'Wheein' collected via the YouTube API")
template = template.replace('{{ content }}', content)
f.write(template)
f.close()
# Open the file in a web browser
filename = 'file:///'+os.getcwd()+'/' + file_path
webbrowser.open_new_tab(filename)
###Output
_____no_output_____ |
guided_project_large_data_handling/Using_Sqlite_Pandas_on_Large_Data.ipynb | ###Markdown
Guided project under course Using Sqlite and Pandas on Large Data- Analyzing Startup Fundraising Deals from Crunchbase - dataquest.com Course Mission 167
###Code
import pandas as pd
fundraising_iter = pd.read_csv('crunchbase-investments.csv',chunksize=5000,encoding='latin-1')
total_mem_fp = 0
missing_dicts = {}
for chunk in fundraising_iter:
#print(chunk.columns)
    # accumulate the missing-value counts of each column across all chunks
    #print(chunk.isnull().sum().to_dict())
    for col, n_missing in chunk.isnull().sum().items():
        missing_dicts[col] = missing_dicts.get(col, 0) + int(n_missing)
#mem footprint
print(chunk.memory_usage(deep=True).sum())
total_mem_fp += chunk.memory_usage(deep=True).sum()
# each chunk consumes about 6mb in memory
missing_dicts.keys()
no_missing_cols =[k for k,v in missing_dicts.items() if v == 0]
no_missing_cols
total_mem_fp
#which columns to drop?
chunk.describe()
test_df = pd.read_csv('crunchbase-investments.csv',nrows=10,encoding='latin-1')
test_df.isnull().sum()
test_df['investor_category_code'].value_counts() #only 1 value -- finance
test_df['investor_country_code'].value_counts()
##based on test_df, these 3 columns should be dropped
for col in test_df.columns:
if test_df[col].nunique() == 1:
print(col)
drop_cols = ['company_country_code',
'investor_category_code','investor_country_code']
keep_cols = test_df.columns.drop(drop_cols)
keep_cols
#check column dtypes
test_df.info()
#Identify the numeric columns we can represent using more space efficient types.
test_df.select_dtypes(include=['float','integer'])
int_cols = test_df.select_dtypes(include=['float','integer']).columns
int_cols
# downcast from int64 to int16
#test_df['funded_year'] = pd.to_numeric(test_df['funded_year'],downcast='integer')
# downcast from int64 to int32 possible, maybe not across all chunks
test_df['raised_amount_usd'] = pd.to_numeric(test_df['raised_amount_usd'],downcast='integer')
test_df['raised_amount_usd']
text_cols = test_df.select_dtypes(include=['object']).columns
text_cols
# For text columns:
# Analyze the unique value counts across all of the chunks to see if we can convert them to a numeric type.
nuniques_dict = test_df.nunique().to_dict()
#turn some into category dtype
for k,v in nuniques_dict.items():
if v/len(test_df[k]) < 0.5:
test_df[k] = test_df[k].astype('category')
#11 objects columns turned into space-efficient dtype 'category'
cat_columns = test_df.select_dtypes(include=['category']).columns
# for col in text_cols:
# print(test_df[col].value_counts())
no_missing_cols
dtype_dict = {t:'category' for t in cat_columns if t in no_missing_cols}
dtype_dict
# Make changes to the code from loading csv so that the overall memory the data consumes stays under 10 megabytes.
test_df.columns[18] # col19 'raised_amount_usd' cannot be "int32" as it has NA
# col18 'raised_year' cannot be "int16" as it has NA
#use only keep_cols
chunk_iter = pd.read_csv('crunchbase-investments.csv',encoding='latin-1',dtype=dtype_dict,chunksize=5000,usecols=keep_cols)
for chunk in chunk_iter:
print(chunk.memory_usage(deep=True).sum()/(2**20))
#each chunk now consumes around 4.3 mb in memory-- can double up chunksize
## best chunksize is 13000 -- so that the overall memory the data consumes stays under 10 megabytes.
chunk_iter = pd.read_csv('crunchbase-investments.csv',encoding='latin-1',dtype=dtype_dict,chunksize=13000,usecols=keep_cols)
for chunk in chunk_iter:
print(chunk.memory_usage(deep=True).sum()/(2**20))
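# (Added sketch) A rough way to estimate an upper bound for the chunksize from a sample chunk,
# instead of guessing; per-row memory varies between chunks, so treat the result as approximate.
# `target_mb` and the variable names here are ours, not from the course material.
first_chunk = next(pd.read_csv('crunchbase-investments.csv', encoding='latin-1',
                               dtype=dtype_dict, chunksize=5000, usecols=keep_cols))
bytes_per_row = first_chunk.memory_usage(deep=True).sum() / len(first_chunk)
target_mb = 10
print('approx. max chunksize:', int(target_mb * 2**20 / bytes_per_row))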
# next step is to load each chunk into a table in a SQLite database so we can query the full data set.
import sqlite3
conn = sqlite3.connect('fundraising.db')
chunk_iter = pd.read_csv('crunchbase-investments.csv',encoding='latin-1',dtype=dtype_dict,chunksize=13000)
for chunk in chunk_iter:
#print(chunk.memory_usage(deep=True).sum()/(2**20))
chunk.to_sql('fundraising',conn,if_exists='append',index=False)
q = 'PRAGMA table_info(fundraising)' # Query the table and make sure the data types match up to what you had in mind for each column.
pd.read_sql(q,conn)
q = 'SELECT * FROM fundraising'
sql_iter = pd.read_sql(q,conn,chunksize=500)
for chunk in sql_iter:
print(chunk.head(5))
break
next(sql_iter)
!wc -l 'fundraising.db'
#no. of lines
#!wc --help
!wc -c 'fundraising.db'
# byte counts
10878976/(2**20) # file size of the database is 10.4mb
###Output
_____no_output_____ |
action_recognition_pipeline.ipynb | ###Markdown
Description This is an example of an action recognition pipeline trained on the [UCF-Crime dataset](https://www.crcv.ucf.edu/projects/real-world/) to detect anomalous behavior among the following classes: 'Normal', 'Fighting', 'Robbery', 'Shoplifting', 'Stealing'. Setup
###Code
!pip install --upgrade mxnet-cu100 gluoncv
!pip install decord
import mxnet as mx
from mxnet import nd, gluon
from mxnet import autograd as ag
import gluoncv
from gluoncv.data import VideoClsCustom
from gluoncv.data.transforms import video
from gluoncv.model_zoo import get_model
from gluoncv.utils import split_and_load, TrainingHistory
import decord
from sklearn import metrics
import cv2
import numpy as np
import os
import shutil
import time
import tqdm
import matplotlib.pyplot as plt
import seaborn as sn
print(f'''Versions:
mxnet: {mx.__version__}
decord: {decord.__version__}
gluoncv: {gluoncv.__version__}
''')
dataset_path = './dataset' # path to dataset
models_path = './models' # path to model weights
num_gpus = 1
ctx = [mx.gpu(i) for i in range(num_gpus)]
classes = ['Normal', 'Fighting', 'Robbery', 'Shoplifting', 'Stealing']
num_segments = 4
num_frames = 48
per_device_batch_size = 3
num_workers = 4
batch_size = num_gpus*per_device_batch_size
###Output
_____no_output_____
###Markdown
Train
###Code
transform_train = video.VideoGroupTrainTransform(size=(224, 224), scale_ratios=[1.0, 0.85, 0.75], mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
transform_valid = video.VideoGroupValTransform(size=224, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
train_dataset = VideoClsCustom(root=dataset_path,
setting=f'{dataset_path}/train_anomaly_detection.txt',
train=True,
new_length=num_frames,
num_segments=num_segments,
transform=transform_train,
video_ext='mp4',
video_loader=True,
use_decord=True
)
print('Load %d training samples.' % len(train_dataset))
train_data = gluon.data.DataLoader(train_dataset, batch_size=batch_size,
shuffle=True, num_workers=num_workers)
valid_dataset = VideoClsCustom(root=dataset_path,
setting=f'{dataset_path}/valid_anomaly_detection.txt',
train=False,
new_length=num_frames,
num_segments=num_segments,
transform=transform_valid,
video_ext='mp4',
video_loader=True,
use_decord=True)
print('Load %d valid samples.' % len(valid_dataset))
valid_data = gluon.data.DataLoader(valid_dataset, batch_size=batch_size,
shuffle=True, num_workers=num_workers)
net = get_model(name='slowfast_4x16_resnet50_custom', nclass=len(classes), num_segments=num_segments, ctx=ctx)
# Learning rate decay factor
lr_decay = 0.1
# Epochs where learning rate decays
lr_decay_epoch = [50, 80]
# Stochastic gradient descent
optimizer = 'sgd'
# Set parameters
optimizer_params = {'learning_rate': 0.0001, 'wd': 1e-5, 'momentum': 0.9}
# Define our trainer for net
trainer = gluon.Trainer(net.collect_params(), optimizer, optimizer_params)
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
train_metric = mx.metric.Accuracy()
valid_metric = mx.metric.Accuracy()
train_history = TrainingHistory(['training-acc'])
valid_history = TrainingHistory(['valid-acc'])
epochs = 65
lr_decay_count = 0
valid_loss_best = 1000
valid_acc_best = 0.0  # accuracy lies in [0, 1], so start below any achievable value
hist_prec_train = []
hist_prec_test = []
hist_recall_train = []
hist_recall_test = []
hist_loss = []
hist_loss_valid = []
for epoch in range(epochs):
tic = time.time()
train_metric.reset()
valid_metric.reset()
train_loss = 0
valid_loss = 0
# Learning rate decay
if epoch == lr_decay_epoch[lr_decay_count]:
trainer.set_learning_rate(trainer.learning_rate*lr_decay)
lr_decay_count += 1
# Loop through each batch of training data
y_true = np.array([], dtype='int')
y_pred = np.array([], dtype='int')
for i, batch in enumerate(train_data):
# Extract data and label
data = split_and_load(batch[0], ctx_list=ctx, batch_axis=0)
label = split_and_load(batch[1], ctx_list=ctx, batch_axis=0)
# AutoGrad
with ag.record():
output = []
for _, X in enumerate(data):
X = X.reshape((-1,) + X.shape[2:])
pred = net(X)
output.append(pred)
loss = [loss_fn(yhat, y) for yhat, y in zip(output, label)]
# Backpropagation
for l in loss:
l.backward()
# Optimize
trainer.step(batch_size)
# Update metrics
train_loss += sum([l.mean().asscalar() for l in loss])
train_metric.update(label, output)
y_true = np.concatenate((y_true, label[0].asnumpy()))
y_pred = np.concatenate((y_pred, pred.argmax(axis=1).astype('int').asnumpy()))
name, acc = train_metric.get()
precisions = metrics.precision_score(y_true, y_pred, average=None, zero_division=False)
recall = metrics.recall_score(y_true, y_pred, average=None, zero_division=False)
# Update history and print metrics
train_history.update([acc])
print(f'[Epoch {epoch}] train={acc:.4f} loss={train_loss/(i+1):.4f} time: {time.time()-tic:.1f} sec')
print('Train precision: ',{k:v for k,v in zip(classes, precisions)})
print('Train recall: ',{k:v for k,v in zip(classes, recall)})
hist_loss.append(train_loss/(i+1))
hist_prec_train.append({k:v for k,v in zip(classes, precisions)})
hist_recall_train.append({k:v for k,v in zip(classes, recall)})
y_true_v = np.array([], dtype='int')
y_pred_v = np.array([], dtype='int')
for i, batch in enumerate(valid_data):
# Extract data and label
data = split_and_load(batch[0], ctx_list=ctx, batch_axis=0)
label = split_and_load(batch[1], ctx_list=ctx, batch_axis=0)
output = []
for _, X in enumerate(data):
X = X.reshape((-1,) + X.shape[2:])
pred = net(X)
output.append(pred)
loss = [loss_fn(yhat, y) for yhat, y in zip(output, label)]
# Update metrics
valid_loss += sum([l.mean().asscalar() for l in loss])
valid_metric.update(label, output)
y_true_v = np.concatenate((y_true_v, label[0].asnumpy()))
y_pred_v = np.concatenate((y_pred_v, pred.argmax(axis=1).astype('int').asnumpy()))
name, acc = valid_metric.get()
precisions_v = metrics.precision_score(y_true_v, y_pred_v, average=None, zero_division=False)
recall_v = metrics.recall_score(y_true_v, y_pred_v, average=None, zero_division=False)
# Update history and print metrics
valid_history.update([acc])
print(f'valid_acc: {acc}, valid_loss: {valid_loss/(i+1)}')
hist_loss_valid.append(valid_loss/(i+1))
print(f'valid precision:', {k:v for k,v in zip(classes, precisions_v)})
print(f'valid recall:', {k:v for k,v in zip(classes, recall_v)})
hist_prec_test.append({k:v for k,v in zip(classes, precisions_v)})
hist_recall_test.append({k:v for k,v in zip(classes, recall_v)})
if (valid_loss_best > valid_loss) or (valid_acc_best < acc) :
valid_loss_best = valid_loss
valid_acc_best = acc
print(f'Best valid loss: {valid_loss_best}')
file_name = f"{models_path}/slowfast_ucf_{epoch}.params"
net.save_parameters(file_name)
###Output
_____no_output_____
###Markdown
Validation
###Code
net = get_model(name='slowfast_4x16_resnet50_custom', nclass=len(classes), num_segments=num_segments, pretrained=False, pretrained_base=False, ctx=ctx)
net.load_parameters(f'{models_path}/slowfast_ucf.params', ctx=ctx)
transform_valid = video.VideoGroupValTransform(size=224, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
valid_dataset = VideoClsCustom(root=dataset_path,
setting=f'{dataset_path}/valid_anomaly_detection.txt',
train=False,
new_length=num_frames,
num_segments=num_segments,
transform=transform_valid,
video_ext='mp4',
video_loader=True,
use_decord=True)
print('Load %d valid samples.' % len(valid_dataset))
valid_data = gluon.data.DataLoader(valid_dataset, batch_size=batch_size,
shuffle=True, num_workers=num_workers)
valid_loss = 0
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
y_true = np.array([],dtype='int')
y_pred = np.array([],dtype='int')
outputs = []
acc = mx.metric.Accuracy()
for i, batch in tqdm.tqdm(enumerate(valid_data)):
# Extract data and label
data = split_and_load(batch[0], ctx_list=ctx, batch_axis=0)
label = split_and_load(batch[1], ctx_list=ctx, batch_axis=0)
output = []
for _, X in enumerate(data):
X = X.reshape((-1,) + X.shape[2:])
pred = net(X)
output.append(pred)
loss = [loss_fn(yhat, y) for yhat, y in zip(output, label)]
acc.update(label, output)
# Update metrics
valid_loss += sum([l.mean().asscalar() for l in loss])
y_true = np.concatenate((y_true, label[0].asnumpy()))
y_pred = np.concatenate((y_pred, pred.argmax(axis=1).astype('int').asnumpy()))
outputs.append((output,label))
y_true = np.ravel(np.array(y_true))
y_pred = np.ravel(np.array(y_pred))
###Output
_____no_output_____
###Markdown
Metrics per class
###Code
precisions = metrics.precision_score(y_true, y_pred, average=None, zero_division=False)
print(f'Precision: ', {k:v for k,v in zip(classes, precisions)})
recalls = metrics.recall_score(y_true, y_pred, average=None, zero_division=False)
print(f'Recall: ', {k:v for k,v in zip(classes, recalls)})
cm = metrics.confusion_matrix(y_true, y_pred)
ax = sn.heatmap(cm, annot=True, cmap=plt.cm.Blues, xticklabels=classes, yticklabels=classes)
ax.set_title('Valid confusion matrix')
plt.show()
###Output
_____no_output_____
###Markdown
Normal vs Anomalies
###Code
precisions = metrics.precision_score(y_true>0, y_pred>0)
print(f'Precision 2 classes: {precisions:.4f}')
recalls = metrics.recall_score(y_true>0, y_pred>0)
print(f'Recall 2 classes: {recalls:.4f}')
cm = metrics.confusion_matrix(y_true>0, y_pred>0)
ax = sn.heatmap(cm, annot=True, cmap=plt.cm.Blues, xticklabels=['Normal', 'Anomaly'] , yticklabels=['Normal', 'Anomaly'] )
ax.set_title('Valid confusion matrix')
plt.show()
###Output
_____no_output_____
###Markdown
Inference
###Code
input_video_path = './test' # path to test video
output_video_path = './output' # path to the output results
transform_fn = video.VideoGroupValTransform(size=224, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
net = get_model(name='slowfast_4x16_resnet50_custom', nclass=len(classes), num_segments=num_segments, pretrained=False, pretrained_base=False, ctx=ctx)
net.load_parameters(f'{models_path}/slowfast_ucf.params', ctx=ctx)
def process_video(video_name, input_path, output_path, net, transform_fn, classes, num_frames=48, verbose=True):
'''
Classify action on each num_frames*num_segments length clip from video and write output video with classification result.
Args:
video_name: video name
input_path: path to input folder
output_path: path to output folder
net: the net to perform classification
transform_fn: a function that transforms data
classes: list of actions can be detected on video
num_frames: the length of input video clip
verbose: verbosity level
Returns:
dict: Classification result for each segment.
'''
if verbose:
print(f'Video: {video_name}')
vr = decord.VideoReader(f'{input_video_path}/{video_name}')
segments = split_on_segments(vr, net.num_segments*num_frames)
anomaly_classes = list(filter(lambda c: c != 'Normal', classes))
temp_output_path = f'{output_path}/{video_name}'
if not os.path.exists(temp_output_path):
os.mkdir(temp_output_path)
video_data = None
results = {}
for i, segment in enumerate(segments):
video_data = vr.get_batch(segment).asnumpy()
start_time = time.time()
pred_class, pred_class_prob = predict(net, video_data, transform_fn, num_frames, classes)
end_time = time.time()
results[i] = {'predicted_class': pred_class, 'probability':pred_class_prob}
add_result_on_clip(video_data, pred_class, pred_class_prob, anomaly_classes)
write_video_output(f'{temp_output_path}/batch_{i:04d}.mp4', video_data)
if verbose:
print(f'[Segment: {i}] predicted_class: {pred_class}, probability: {pred_class_prob:.4f}, time: {end_time-start_time:0.1f} sec')
if video_data is not None:
height = video_data.shape[1]
width = video_data.shape[2]
merge_clips(temp_output_path, video_name, output_path, (width, height))
shutil.rmtree(temp_output_path)
return results
def split_on_segments(vr, segm_length=48, verbose=True):
'''
Split video on segments with *segm_length* length.
Args:
vr: decode.VideoReader
segm_length: segment length
verbose: verbosity level
Returns:
list: List of frame indexes, splitted on segments.
'''
n_frames = len(vr)
fps = vr.get_avg_fps()
duration = n_frames/fps
all_idx = np.arange(n_frames)
idx = range(0, len(all_idx), segm_length)
segments = []
for i in idx:
segment = all_idx[i:i+segm_length]
if len(segment) >= segm_length:
segments.append(segment)
if verbose:
print(f'[Frames partitioning] total frames: {n_frames}, fps: {fps}, duration: {duration:.1f} sec, split on: {len(segments)} segments, segments length: {[i.shape[0] for i in segments]} frames')
return segments
def predict(net, clip, transform_fn, num_frames, classes):
'''
Predict action on clip.
Args:
net: the net to perform classification
clip: video clip for predicting action on it
transform_fn: a function that transforms data
num_frames: the length of input video clip
classes: list of action that can be detected on video
Returns:
str, float: Class label with class probability.
'''
clip_input = transform_fn(clip)
clip_input = np.stack(clip_input, axis=0)
clip_input = clip_input.reshape((-1,) + (num_frames, 3, 224, 224))
clip_input = np.transpose(clip_input, (0, 2, 1, 3, 4))
pred = net(nd.array(clip_input).as_in_context(ctx[0]))
ind = nd.topk(pred, k=1)[0].astype('int')
pred_class = classes[ind[0].asscalar()]
pred_class_prob = nd.softmax(pred)[0][ind[0]].asscalar()
return pred_class, pred_class_prob
def add_result_on_clip(clip, pred_class, pred_class_prob, anomaly_classes):
'''
Add classification result on clip.
Args:
clip: video clip for adding classification results on it
pred_class: predicted class
pred_class_prob: probability of predicted action
anomaly_classes: list of anomaly actions
'''
for frame in clip:
draw_classification_result(frame, pred_class, pred_class_prob)
if pred_class in anomaly_classes and pred_class_prob > 0.65:
draw_alert_mark(frame)
def draw_alert_mark(frame):
'''
Add alert (triangle with "!" sign) to mark frame with anomaly.
Args:
frame: video frame
'''
pts = np.array([[165,15],[182,45],[148,45]])
pts = pts.reshape((-1, 1, 2))
cv2.fillPoly(frame,[pts],(255,0,0))
cv2.putText(frame,"!",(160,42),cv2.FONT_HERSHEY_SIMPLEX,1,(255,255,255),2)
def draw_classification_result(frame, class_name, prob):
'''
Add classification result on frame.
Args:
frame: video frame
class_name: predicted class
prob: probability of predicted class
'''
cv2.rectangle(frame, (8, 5), (190, 55), (240, 240, 240), cv2.FILLED)
text_color = (0,0,0)
cv2.putText(frame,f'class: {class_name}',(10,25),cv2.FONT_HERSHEY_SIMPLEX,0.4,text_color,1)
cv2.putText(frame,f'probability: {prob:0.4f}',(10,45),cv2.FONT_HERSHEY_SIMPLEX,0.4,text_color,1)
def write_video_output(output_path, video_data):
'''
Write classified video to file.
Args:
output_path: path to output video file
video_data: video data that should be saved to file
'''
if video_data is None:
        print(f'Cannot write {output_path}: no video data.')
return
height = video_data.shape[1]
width = video_data.shape[2]
out = cv2.VideoWriter(f'{output_path}', cv2.VideoWriter_fourcc(*'MP4V'), 30.0, (width, height))
for frame in video_data:
out.write(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
out.release()
def merge_clips(clips_path, video_name, output_path, video_shape):
'''
Merge clips into one video.
Args:
clips_path: path to clips folder
video_name: video name
output_path: path to output folder
video_shape: (width, height) shape of output video
'''
out = cv2.VideoWriter(f'{output_path}/output_{video_name}', cv2.VideoWriter_fourcc(*'MP4V'), 30.0, video_shape)
clips_list = sorted(os.listdir(clips_path))
for clip_name in clips_list:
clip = cv2.VideoCapture(f'{clips_path}/{clip_name}')
ret, frame = clip.read()
while(ret):
out.write(frame)
ret, frame = clip.read()
clip.release()
out.release()
video_list = os.listdir(input_video_path)
for video_name in video_list:
process_video(video_name, input_video_path, output_video_path, net, transform_fn, classes, num_frames=num_frames, verbose=1)
###Output
_____no_output_____ |
look-into-the-data.ipynb | ###Markdown
[Andrii Gakhov](https://www.gakhov.com) / PyCon UA 2018 *** An Introduction to Time Series Forecasting with Python. Time series is an important instrument to model, analyze and predict data collected over time. In this talk, we learn the basic theoretical concepts without going deep into mathematical aspects, study different models, and try them in practice using StatsModels, Prophet, scikit-learn, and keras. Part 1. Look into the data *** OS visits to UK (All visits). The dataset represents the monthly total number of visits to the UK by overseas residents (in thousands) from January 1980 to October 2017. Source: [Office for National Statistics](https://www.ons.gov.uk/peoplepopulationandcommunity/leisureandtourism/timeseries/gmaa/ott)
###Code
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.set(style="ticks")
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Load the data into Pandas DataFrame
###Code
import pandas as pd
df = pd.read_csv("data/GMAA-040218.csv", header=None, skiprows=6, parse_dates=[0], names=['period', 'value'])
df.value.astype(int, copy=False);
df.head(5)
max_date = df.period.max()
min_date = df.period.min()
num_of_actual_points = df.index.shape[0]
num_of_expected_points = (max_date.year - min_date.year) * 12 + max_date.month - min_date.month + 1
print("Date range: {} - {}".format(min_date.strftime("%d.%m.%Y"), max_date.strftime("%d.%m.%Y")))
print("Number of data points: {} of expected {}".format(num_of_actual_points, num_of_expected_points))
###Output
Date range: 01.01.1980 - 01.10.2017
Number of data points: 454 of expected 454
###Markdown
Visualize the data
###Code
fig, ax = plt.subplots(figsize=(18,6))
df.plot(x="period", y="value", ax=ax)
plt.legend(loc='upper left')
plt.savefig('images/intro-visualization.png');
###Output
_____no_output_____
###Markdown
In **2001** a combination of the outbreak of foot and mouth disease and the September 11 attacks in the US led to a dramatic slump. In **2009** the global economic crisis, which started to bite in earnest in the autumn, was blamed as a factor for the fall. https://www.theguardian.com/business/2009/jul/16/tourism-uk-visitors-fall In 2006 total visits to the UK by overseas residents were split fairly equally between three purposes: holiday, visiting friends or relatives, and business. This pattern is quite different compared with ten years ago, when ‘holiday’ was the dominant reason. https://www.ons.gov.uk/ons/rel/ott/travel-trends/2006/travel-trends---2006.pdf The majority of visitors were from North America, followed by tourists from France and Germany. http://www.bbc.com/news/uk-england-london-27323755
###Code
zoom_range = df[(df.period >= '2010-01-01') & (df.period < '2012-01-01')].index
fig, ax = plt.subplots(figsize=(18,6))
df.loc[zoom_range].plot(x="period", y="value", ax=ax, label="2010-2012 in zoom")
plt.legend(loc='upper left')
plt.savefig('images/intro-zoom.png');
###Output
_____no_output_____
###Markdown
When is the best time to visit the UK?The United Kingdom can be visited at any time of year ... Overall, **spring (late March to early June) and autumn (September to November) are the best times to visit**, when it’s usually warm and dry.https://www.audleytravel.com/us/uk/best-time-to-visit Trend and seasonalityFrom the visualization it's already quite obvious that the OS visits have periodic fluctuations each year and overall tendency to grow up.Thus, we can conclude that the time series has the **trend** and yearly **seasonality** components, and we can try to decompose them them using, for instance, **statsmodels** package.Note, from the data view we also can suspect that **additive** model better fits for data representation.
###Code
from statsmodels.tsa.seasonal import seasonal_decompose
decompfreq = 12 # 12 months seasonality
model = 'additive'
decomposition = seasonal_decompose(
df.set_index("period").value.interpolate("linear"),
freq=decompfreq,
model=model)
trend = decomposition.trend
seasonal = decomposition.seasonal
residual = decomposition.resid
###Output
_____no_output_____
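###Markdown
(Added note, not part of the original talk.) A quick sanity check of the additive assumption: if the seasonal swings grew proportionally with the level of the series, a multiplicative model would fit better. Rerunning the decomposition with `model='multiplicative'` and inspecting its residual is a cheap way to verify the choice.
###Code
# added sketch: the same decomposition with a multiplicative model, for comparison;
# note that its residual is a ratio around 1.0, while the additive residual is in absolute units
decomposition_mult = seasonal_decompose(
    df.set_index("period").value.interpolate("linear"),
    freq=decompfreq,
    model='multiplicative')
decomposition_mult.resid.plot(figsize=(18, 4));
###Output
_____no_output_____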
###Markdown
The Trend
###Code
fig, ax = plt.subplots(figsize=(18,6))
df.plot(x="period", y="value", ax=ax, label="observed", c='lightgrey')
trend.plot(ax=ax, label="trend")
plt.legend(loc='upper left')
plt.savefig('images/intro-trend.png');
###Output
_____no_output_____
###Markdown
The Seasonality
###Code
fig, ax = plt.subplots(figsize=(18,4))
seasonal.plot(ax=ax, label="seasonality")
plt.legend(loc='lower left')
plt.savefig('images/intro-seasonality.png');
fig, ax = plt.subplots(figsize=(18,6))
seasonal[zoom_range].plot(x="period", y="value", ax=ax, label="2010-2012 in zoom")
plt.legend(loc='upper left')
plt.savefig('images/intro-seasonality-zoom.png');
###Output
_____no_output_____
###Markdown
The Residual
###Code
fig, ax = plt.subplots(figsize=(18,4))
residual.plot(ax=ax, label="residual")
plt.legend(loc='upper left')
plt.savefig('images/intro-residual.png');
###Output
_____no_output_____ |
Analisando_os_Dados_do_Airbnb(Bangkok).ipynb | ###Markdown
--- Analyzing Airbnb Data - (Bangkok). [Airbnb](https://www.airbnb.com.br/) is already considered **the largest hotel company of today**, with the twist that it **owns no hotels**! By connecting people who want to travel (and find a place to stay) with hosts who want to rent out their properties in a convenient way, Airbnb provides an innovative platform that makes this alternative form of lodging possible. By the end of 2018, the startup, founded 10 years earlier, had already **hosted more than 300 million** people around the world, challenging the traditional hotel chains. One of Airbnb's initiatives is to make data from the site available for some of the main cities of the world. Through the [Inside Airbnb](http://insideairbnb.com/get-the-data.html) portal, it is possible to download a large amount of data to develop *Data Science* projects and solutions. **In this *notebook*, we will analyze the data for the city of Bangkok and see which insights can be extracted from the raw data.** Obtaining the data: all the data used here was obtained from the [Inside Airbnb](http://insideairbnb.com/get-the-data.html) site. For this initial exploratory analysis, only the following file will be downloaded: * `listings.csv` - *Summary information and metrics for listings in Bangkok (good for visualisations).*
###Code
# import the required packages
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# read the listings.csv file into a DataFrame
df = pd.read_csv("http://data.insideairbnb.com/thailand/central-thailand/bangkok/2020-12-23/visualisations/listings.csv")
###Output
_____no_output_____
###Markdown
Data Analysis **Variable dictionary** * id - id number generated to identify the listing * name - name of the listed property * host_id - id number of the property owner (host) * host_name - name of the host * neighbourhood_group - this column contains no valid values * neighbourhood - name of the neighbourhood * latitude - latitude coordinate of the property * longitude - longitude coordinate of the property * room_type - type of room offered * price - price to rent the property * minimum_nights - minimum number of nights required to book * number_of_reviews - number of reviews the property has * last_review - date of the last review * reviews_per_month - number of reviews per month * calculated_host_listings_count - number of listings from the same host * availability_365 - number of available days within 365 days. Before starting any analysis, let's see what our *dataset* looks like by checking the first 5 entries.
###Code
# show the first 5 entries
df.head()
###Output
_____no_output_____
###Markdown
**Q1. How many attributes (variables) and how many entries does our dataset have? What are the variable types?**
###Code
# identify the volume of data in the DataFrame
print('Entries:\t {}'.format(df.shape[0]))
print('Variables:\t {}'.format(df.shape[1]))
# check the dtype of each variable in the dataset
display(df.dtypes)
###Output
Entries: 19709
Variables: 16
###Markdown
**Q2. What percentage of values are missing in the *dataset*?** The quality of a *dataset* is directly related to the amount of missing values. It is important to understand early on whether these null values are significant compared to the total number of entries. * We can see that the `neighbourhood_group` column has 100% of its values missing. * The `reviews_per_month` and `last_review` variables have null values in almost half of the rows. * The `name` and `host_name` variables have roughly 0.1% null values.
###Code
# sort the variables by their share of missing values, in descending order
(df.isnull().sum() / df.shape[0]).sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
**Q3. What kind of distribution do the variables have?**
###Code
# plot histograms of the numeric variables
df.hist(bins=15, figsize=(15, 10));
###Output
_____no_output_____
###Markdown
**Q4. Are there outliers present?** The shape of the histograms hints at the presence of *outliers*. Look, for example, at the variables `price`, `minimum_nights` and `calculated_host_listings_count`. The values do not follow a clean distribution and distort the whole graphical representation. To confirm this, there are two quick checks that help detect *outliers*: * a statistical summary via the `describe()` method * plotting a `boxplot` for the variable.---
###Code
# statistical summary of the numeric variables (only the columns that actually matter for this analysis)
df[['price', 'minimum_nights','number_of_reviews', 'reviews_per_month', 'calculated_host_listings_count', 'availability_365']].describe()
###Output
_____no_output_____
###Markdown
Boxplot for minimum_nights* Separating, as a percentage, the listings that require **more than 30 days** as the minimum stay
###Code
#minimum_nights
df.minimum_nights.plot(kind='box', vert=False, figsize=(15, 3))
plt.show()
#count how many minimum_nights values are above 30 days
print("minimum_nights: values above 30:")
print("{} entries".format(len(df[df.minimum_nights > 30])))
print("{:.4f}%".format((len(df[df.minimum_nights > 30]) / df.shape[0])*100))
###Output
_____no_output_____
###Markdown
Boxplot for price* Separating, as a percentage, the listings priced **above 1900 baht**
###Code
#price
df.price.plot(kind='box', vert=False, figsize=(15, 3))
plt.show()
#count how many values in the price column are above 1900
print("\nprice: values above 1900 baht")
print("{} entries".format(len(df[df.price > 1900])))
print("{:.4f}%".format((len(df[df.price > 1900]) / df.shape[0])*100))
###Output
_____no_output_____
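###Markdown
(Added note.) The 1900 baht and 30 night cut-offs used below were picked by eye from the boxplots. A slightly more systematic alternative is the IQR rule that the boxplot whiskers themselves are based on; the sketch below only illustrates it and is not part of the original analysis.
###Code
# added sketch: IQR-based upper fence for price (the same rule the boxplot whiskers use)
q1, q3 = df.price.quantile([0.25, 0.75])
iqr = q3 - q1
upper_fence = q3 + 1.5 * iqr
print(f'IQR upper fence for price: {upper_fence:.0f} baht')
###Output
_____no_output_____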
###Markdown
Histograms without outliers:* Since a significant number of outliers was identified in the price and minimum_nights variables, let's now 'clean' the DataFrame by removing them and plot the histograms again.
###Code
#remove the *outliers* into a new DataFrame
df_clean = df.copy()
df_clean.drop(df_clean[df_clean.price > 1900].index, axis=0, inplace=True)
df_clean.drop(df_clean[df_clean.minimum_nights > 30].index, axis=0, inplace=True)
#drop `neighbourhood_group`, since it is empty
df_clean.drop('neighbourhood_group', axis=1, inplace=True)
#plot histograms for the numeric variables
df_clean.hist(bins=15, figsize=(15, 10));
#updated mean price
new_media = df_clean.price.mean()
print(f'The new average nightly price for the city of Bangkok (after removing the outliers) is {new_media:.2f} baht.')
###Output
The new average nightly price for the city of Bangkok (after removing the outliers) is 963.77 baht.
###Markdown
**Q5. What correlation exists between the variables?**
###Code
# create a correlation matrix
corr = df_clean[['price', 'minimum_nights', 'number_of_reviews', 'reviews_per_month', 'calculated_host_listings_count', 'availability_365']].corr()
# show the correlation matrix
display(corr)
# plot a heatmap from the correlations
sns.heatmap(corr, cmap='RdBu', fmt='.2f', square=True, linecolor='white', annot=True);
###Output
_____no_output_____
###Markdown
**Q6. What is the most rented property type on Airbnb?** The room_type column indicates the kind of rental listed on Airbnb. If you have ever booked on the site, you know there are options for entire apartments/houses, renting just a room, or even sharing a room with other people. Using the value_counts() method, we will count the number of occurrences of each rental type.
###Code
df_clean.neighbourhood.value_counts().head()
# show the number of listings of each available property type
df_clean.room_type.value_counts()
# show the percentage of each available property type
df_clean.room_type.value_counts() / df_clean.shape[0]
###Output
_____no_output_____
###Markdown
**Q7. What is the most expensive location in the dataset?** One way to examine one variable as a function of another is to use groupby(). In this case, we want to compare the neighbourhoods based on the rental price.
###Code
# average price by neighbourhood
df_clean.groupby(['neighbourhood']).price.mean().sort_values(ascending=False)[:10]
#number of listings in the Parthum Wan neighbourhood
qtd_imoveis = df_clean[df_clean.neighbourhood == 'Parthum Wan'].shape[0]
print(f'The Parthum Wan neighbourhood currently has {qtd_imoveis} listings on Airbnb.\n\n')
#show .head() with the first 5 listings in the Parthum Wan neighbourhood
df_clean[df_clean.neighbourhood == 'Parthum Wan'].head()
# plot the listings by latitude-longitude
df_clean.plot(kind='scatter', x='longitude', y='latitude', alpha=0.4, c=df_clean['price'], s=8, cmap=plt.get_cmap('jet'), figsize=(12,8));
###Output
_____no_output_____
###Markdown
**Q8. What is the average minimum number of nights required to book (minimum_nights)?**
###Code
# mean of the `minimum_nights` column
media_noites = df_clean.minimum_nights.mean()
print('From the data we can see that the average minimum number of nights required per listing is {:.2f} nights.'.format(media_noites))
###Output
From the data we can see that the average minimum number of nights required per listing is 5.99 nights.
|
src/02_Pet_Stores_and_Services.ipynb | ###Markdown
Yelp Pet Stores and Pet Services in NYC (ETL): Extract
###Code
#function to parse Yelp Fusion API response for a list of Yelp businesses
def parse_response(response):
response_json = response.json()
businesses = []
for business in response_json['businesses']:
business_json = {}
categories = []
for category in business['categories']:
categories.append(category['title'])
business_json['categories'] = categories
business_json['id'] = business['id']
business_json['name'] = business['name']
business_json['is_closed'] = business['is_closed']
business_json['review_count'] = business['review_count']
business_json['rating'] = business['rating']
business_json['zip_code'] = business['location']['zip_code']
businesses.append(business_json)
return businesses
#function to call Yelp Fusion API with given search term, location and max search results
def call_yelp(term, location, search_max):
search_results = []
search_limit = 50
search_calls = int(search_max / search_limit)
for i in range(search_calls):
search_offset = i * search_limit
url = 'https://api.yelp.com/v3/businesses/search'
headers = {'Authorization': 'Bearer {}'.format(api_key),}
url_params = {'term': term.replace(' ', '+'),
'location': location.replace(' ', '+'),
'limit': search_limit,
'offset': search_offset
}
response = requests.get(url, headers=headers, params=url_params)
print('term: {}, offset: {}, response: {}'.format(term, search_offset, response))
search_results.extend(parse_response(response))
time.sleep(2)
return search_results
#pull and save 1,000 (max) pet stores in NYC
pet_stores = 'Pet Stores'
location = 'New York, NY'
search_max = 1000
yelp_pet_stores = call_yelp(pet_stores, location, search_max)
#pull and save 1,000 (max) pet services in NYC
pet_services = 'Pet Services'
location = 'New York, NY'
search_max = 1000
yelp_pet_services = call_yelp(pet_services, location, search_max)
###Output
term: Pet Services, offset: 0, response: <Response [200]>
term: Pet Services, offset: 50, response: <Response [200]>
term: Pet Services, offset: 100, response: <Response [200]>
term: Pet Services, offset: 150, response: <Response [200]>
term: Pet Services, offset: 200, response: <Response [200]>
term: Pet Services, offset: 250, response: <Response [200]>
term: Pet Services, offset: 300, response: <Response [200]>
term: Pet Services, offset: 350, response: <Response [200]>
term: Pet Services, offset: 400, response: <Response [200]>
term: Pet Services, offset: 450, response: <Response [200]>
term: Pet Services, offset: 500, response: <Response [200]>
term: Pet Services, offset: 550, response: <Response [200]>
term: Pet Services, offset: 600, response: <Response [200]>
term: Pet Services, offset: 650, response: <Response [200]>
term: Pet Services, offset: 700, response: <Response [200]>
term: Pet Services, offset: 750, response: <Response [200]>
term: Pet Services, offset: 800, response: <Response [200]>
term: Pet Services, offset: 850, response: <Response [200]>
term: Pet Services, offset: 900, response: <Response [200]>
term: Pet Services, offset: 950, response: <Response [200]>
###Markdown
Yelp Pet Stores and Pet Services in NYC (ETL): Transform
###Code
# create dataframe of pet stores
pet_stores = pd.DataFrame.from_dict(yelp_pet_stores)
pet_stores.head()
# create dataframe of pet services
pet_services = pd.DataFrame.from_dict(yelp_pet_services)
pet_services.head()
#create list of unique pet store and service categoories
pet_categories = []
for row in range(pet_stores.shape[0]):
pet_categories.extend(pet_stores['categories'][row])
for row in range(pet_services.shape[0]):
pet_categories.extend(pet_services['categories'][row])
pet_categories = sorted(list(set(pet_categories)))
pet_categories
#create junction table for businesses and categories
business_ids = []
category_ids = []
for row in range(pet_stores.shape[0]):
for category in pet_stores['categories'][row]:
business_ids.append(pet_stores['id'][row])
category_ids.append(pet_categories.index(category))
for row in range(pet_services.shape[0]):
for category in pet_services['categories'][row]:
business_ids.append(pet_services['id'][row])
category_ids.append(pet_categories.index(category))
print(len(business_ids))
print(len(category_ids))
###Output
3274
3274
###Markdown
Yelp Pet Stores and Pet Services in NYC (ETL): Load
###Code
#creating SQL connection
conn = sqlite3.connect('../Data/pet_care_industry.db')
c = conn.cursor()
#function to create table
def create_table(query):
c.execute(query)
#function to close connection
def close_c_conn():
c.close()
conn.close()
#create pet stores and services table
create_query = """CREATE TABLE stores_and_services
(id TEXT PRIMARY KEY,
Name TEXT,
Rating REAL,
Review_Count INTEGER,
ZipCode INTEGER);"""
c.execute('DROP TABLE IF EXISTS stores_and_services')
create_table(create_query)
#function to insert businesses into table
def insert_businesses(businesses):
for i in range(len(businesses.index)):
if (not businesses.iloc[i]['is_closed']) & (businesses.iloc[i]['zip_code'].isnumeric()):
c.execute("""INSERT OR REPLACE INTO stores_and_services
(id,
Name,
Rating,
Review_Count,
ZipCode)
VALUES
(?,?,?,?,?)""",
(businesses.iloc[i]['id'],
businesses.iloc[i]['name'],
float(businesses.iloc[i]['rating']),
int(businesses.iloc[i]['review_count']),
int(businesses.iloc[i]['zip_code'])))
conn.commit()
#insert pet store and services into table
insert_businesses(pet_stores)
insert_businesses(pet_services)
#check SQL pet store and services table
stores_and_services = pd.read_sql_query("""SELECT Name, Rating, Review_Count, ZipCode
FROM stores_and_services;""", conn)
stores_and_services = stores_and_services.set_index('Name')
stores_and_services
#create unique categories table
create_query = """CREATE TABLE categories
(id TEXT PRIMARY KEY,
Name TEXT);"""
c.execute('DROP TABLE IF EXISTS categories')
create_table(create_query)
#function to insert categories into table
def insert_categories(categories):
for i in range(len(categories)):
c.execute("""INSERT INTO categories
(id,
Name)
VALUES
(?,?)""",
(i,
categories[i]))
conn.commit()
#insert categories into table
insert_categories(pet_categories)
#check SQL categories table
categories = pd.read_sql_query("""SELECT * FROM categories;""", conn)
categories = categories.set_index('id')
categories
#defining SQL query to create junction table for businesses and categories
create_query = """CREATE TABLE IF NOT EXISTS businesses_categories
( business_id TEXT NOT NULL,
category_id INTEGER NOT NULL,
PRIMARY KEY(business_id, category_id),
FOREIGN KEY(business_id) REFERENCES stores_and_services(id),
FOREIGN KEY(category_id) REFERENCES categories(id));"""
c.execute('DROP TABLE IF EXISTS businesses_categories')
create_table(create_query)
#function to insert businesses and categories into table
def insert_businesses_categories(business_ids, category_ids):
for i in range(len(business_ids)):
c.execute("""INSERT OR REPLACE INTO businesses_categories
(business_id,
category_id)
VALUES
(?,?)""",
(business_ids[i],
category_ids[i]))
conn.commit()
#insert categories into table
insert_businesses_categories(business_ids, category_ids)
#querying SQL businesses and categories table
businesses_categories = pd.read_sql_query("SELECT * FROM businesses_categories;", conn)
businesses_categories
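# (Added sketch) Example of joining the three tables to list each business with its category names;
# this query is illustrative and was not part of the original notebook.
example_join = pd.read_sql_query("""SELECT s.Name AS Business, c.Name AS Category
                                    FROM stores_and_services s
                                    JOIN businesses_categories bc ON s.id = bc.business_id
                                    JOIN categories c ON bc.category_id = c.id
                                    LIMIT 10;""", conn)
example_join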
#close connection
close_c_conn()
###Output
_____no_output_____ |
week08/inClass_week08.ipynb | ###Markdown
Basic maps with cartopy
###Code
# import our usual things
%matplotlib inline
import cartopy
import pandas as pd
import matplotlib.pyplot as plt
# make plots a little bigger
plt.rcParams['figure.dpi'] = 100 # 300
states_files = cartopy.io.shapereader.natural_earth(resolution='110m', category='cultural',
name='admin_1_states_provinces_lakes_shp')
# read the shape file's contents
reader = cartopy.io.shapereader.Reader(states_files)
# line by line reading of data
data_line = reader.records()
data = next(data_line)
data
# grab individual bits of info
regions = lambda data: data.attributes['diss_me']
states_by_region = sorted(reader.records(),key=regions)[:5]
states_by_region
# lets move on from (badly spelled) academic pulling of data
fig = plt.figure()
ax = fig.add_subplot(111, projection=cartopy.crs.PlateCarree())
ax.coastlines()
plt.title('Equirectangular')
fig = plt.figure()
ax = fig.add_subplot(111,projection=cartopy.crs.Mollweide())
ax.coastlines()
plt.title("Mollweide")
# lets draw some lines between points
champaign_lat, champaign_lon = 40.1164, -88.2434
ant_lat, ant_lon = -18.8792, 47.5079
fig = plt.figure()
ax = fig.add_subplot(111, projection = cartopy.crs.PlateCarree())
ax.coastlines()
ax.gridlines()
ax.set_global()
ax.plot([champaign_lon, ant_lon],[champaign_lat,ant_lat],
transform=cartopy.crs.PlateCarree())
# shortest distance
ax.plot([champaign_lon, ant_lon],[champaign_lat,ant_lat],
transform=cartopy.crs.Geodetic())
# I can also plot points
fig =plt.figure()
ax=fig.add_subplot(111,projection=cartopy.crs.Mollweide())
champaign = 40.1164, -88.2434
oahu = 19.8968, -155.582
ax.scatter(champaign[1],champaign[0],
transform = cartopy.crs.PlateCarree())
ax.scatter(oahu[1],oahu[0], transform=cartopy.crs.PlateCarree())
ax.set_global()
ax.coastlines()
locations = pd.read_csv('/Users/jillnaiman1/Downloads/location.txt', header=None, delimiter='\t',
names=['longitude', 'latitude', 'empty1', 'empty2'])
del locations['empty1'], locations['empty2']
locations
fig = plt.figure()
ax = fig.add_subplot(111,projection=cartopy.crs.LambertCylindrical())
ax.scatter(locations['longitude'], locations['latitude'], transform=cartopy.crs.PlateCarree())
ax.coastlines()
#ax.set_global()
import cartopy.io.img_tiles
imagery = cartopy.io.img_tiles.OSM()
fig = plt.figure()
ax = fig.add_subplot(111,projection=cartopy.crs.LambertCylindrical())
ax.scatter(locations['longitude'], locations['latitude'], transform=cartopy.crs.PlateCarree())
ax.coastlines()
ax.add_image(imagery, 4)
seismic = pd.read_csv('/Users/jillnaiman1/Downloads/data_tohoku_norm_transpose.csv',
header = None)
seismic.shape, locations.shape
14401/(60*60)
import ipywidgets
@ipywidgets.interact(station=(0,437))
def plot(station=0):
plt.plot(seismic[station])
plt.xlabel("Time in sec")
plt.ylabel("Normalized Displacement")
plt.ylim(-1,1)
nstations = 300
ntimes = 1440
import numpy as np
stationsIndex = np.random.choice(range(locations.shape[0]-1), nstations, replace=False)
timesIndex = np.random.choice(range(seismic.shape[0]-1), ntimes, replace=False)
stationsIndex.sort()
timesIndex.sort()
locations2 = locations.loc[stationsIndex]
seismic2 = seismic.loc[timesIndex, stationsIndex]
seismic2.shape, locations2.shape
fig = plt.figure()
ax = fig.add_subplot(111, projection=cartopy.crs.LambertCylindrical())
ax.scatter(locations2['longitude'], locations2['latitude'], transform=cartopy.crs.PlateCarree())
ax.coastlines()
seismic2
@ipywidgets.interact(station=(0,nstations-1))
def plot(station=0):
plt.plot(seismic2.iloc[:,station])
plt.xlabel("Time in sec")
plt.ylabel("Normalized Displacement")
plt.ylim(-1,1)
import bqplot
# scales
x_sc = bqplot.LinearScale()
y_sc = bqplot.LinearScale()
# marks
lines = bqplot.Lines(x=seismic2.index.values, y=seismic2.loc[:,0],
scales={'x':x_sc, 'y':y_sc})
# axes
x_ax = bqplot.Axis(scale=x_sc)
y_ax = bqplot.Axis(scale=y_sc, orientation='vertical')
# combine into figure
fig = bqplot.Figure(marks=[lines], axes=[x_ax,y_ax])
# lets link a slider widget
slider = ipywidgets.IntSlider(min=0,max=nstations-1)
# create a linking function
def update_slider(event):
lines.y = seismic2.iloc[:,event['new']]
slider.observe(update_slider,'value')
ipywidgets.VBox([slider, fig])
# combining sensor data as a function of time with our map data (locations)
@ipywidgets.interact(station=(0,nstations-1,1), t=(0,ntimes,1))
def plot(station=0, t=0):
fig = plt.figure()
# map figure
ax=fig.add_subplot(211, projection=cartopy.crs.LambertCylindrical())
colors=seismic2.iloc[t]
ax.scatter(locations2['longitude'], locations2['latitude'],
transform=cartopy.crs.PlateCarree(), c=colors)
ax.coastlines()
# plot of a particular sensosr as a function of time
ax = fig.add_subplot(212)
ax.plot(seismic2.index.values, seismic2.iloc[:,station])
ax.set_ylim(-1,1)
###Output
_____no_output_____
###Markdown
Maps with bqplot
###Code
map_mark = bqplot.Map(scales={'projection':bqplot.AlbersUSA()})
fig = bqplot.Figure(marks=[map_mark],title='Basic')
fig
# lets do state map instead
sc_geo = bqplot.AlbersUSA() # scales, USA
state_data = bqplot.topo_load('map_data/USStatesMap.json')
# lets add a tooltip that shows us something about the
# underlying data
def_tt = bqplot.Tooltip(fields=['id','name'])
# generate marks
states_map = bqplot.Map(map_data=state_data, scales={'projection':sc_geo},
tooltip=def_tt)
states_map.interactions ={'hover':'tooltip', 'click':'select'}
from states_utils import get_ids_and_names
ids, state_names = get_ids_and_names(states_map)
def get_data_value(change):
if change['owner'].selected is not None:
for i, s in enumerate(change['owner'].selected):
print(state_names[s==ids], s)
states_map.observe(get_data_value,'selected')
fig = bqplot.Figure(marks=[states_map], title='US States Map',
fig_margin={'top':0, 'bottom':0, 'left':0, 'right':0})
fig
# lets link some export data to our map
comm = pd.read_csv('/Users/jillnaiman1/Downloads/total_export.csv')
comm
comm.loc[comm['State']=='Alabama'].values
years = list(comm.columns.values)
years = np.array(years[1:])
years = years.astype('int')
years
# lets do state map instead
sc_geo = bqplot.AlbersUSA() # scales, USA
state_data = bqplot.topo_load('map_data/USStatesMap.json')
# lets add a tooltip that shows us something about the
# underlying data
def_tt = bqplot.Tooltip(fields=['id','name'])
# generate marks for map
states_map = bqplot.Map(map_data=state_data, scales={'projection':sc_geo},
tooltip=def_tt)
states_map.interactions ={'hover':'tooltip', 'click':'select'}
from states_utils import get_ids_and_names
ids, state_names = get_ids_and_names(states_map)
# line plot for exports
x_scl = bqplot.LinearScale()
y_scl = bqplot.LinearScale()
ax_xcl = bqplot.Axis(label='Year', scale=x_scl)
ax_ycl = bqplot.Axis(label='Total Export from State NA', scale=y_scl, orientation='vertical', side='left')
lines = bqplot.Lines(x=years, y=np.zeros(len(years)), scales={'x':x_scl, 'y':y_scl})
fig_lines = bqplot.Figure(marks=[lines],axes=[ax_ycl, ax_xcl])
def get_data_value(change):
    exports = np.zeros(len(years))
    snames = ''
    if change['owner'].selected is not None:
        for i, s in enumerate(change['owner'].selected):
            sn = state_names[s == ids][0]
            snames += sn + ', '
            exports_in = comm.loc[comm['State'] == sn].values[0][1:]
            # the export figures are strings with thousands separators, so strip the commas first
            exports_in = np.array([exports_in[j].replace(',', '') for j in range(len(exports_in))])
            exports = np.add(exports, exports_in.astype('float64'))
        lines.y = exports
        ax_ycl.label = 'Total export from ' + snames
        #print(state_names[s==ids], s)
states_map.observe(get_data_value,'selected')
fig = bqplot.Figure(marks=[states_map], title='US States Map',
fig_margin={'top':0, 'bottom':0, 'left':0, 'right':0})
ipywidgets.HBox([fig,fig_lines])
###Output
_____no_output_____ |
geometric-operations.ipynb | ###Markdown
Geometric operations Overlay analysisIn this tutorial, the aim is to make an overlay analysis where we create a new layer based on geometries from a dataset that `intersect` with geometries of another layer. As our test case, we will select Polygon grid cells from `TravelTimes_to_5975375_RailwayStation_Helsinki.shp` that intersects with municipality borders of Helsinki found in `Helsinki_borders.shp`.Typical overlay operations are (source: [QGIS docs](https://docs.qgis.org/2.8/en/docs/gentle_gis_introduction/vector_spatial_analysis_buffers.htmlmore-spatial-analysis-tools)): Download dataFor this lesson, you should [download a data package](https://github.com/AutoGIS/data/raw/master/L4_data.zip) that includes 3 files: 1. Helsinki_borders.shp 2. Travel_times_to_5975375_RailwayStation.shp 3. Amazon_river.shp ```$ cd /home/jovyan/notebooks/L4$ wget https://github.com/AutoGIS/data/raw/master/L4_data.zip$ unzip L4_data.zip```Let's first read the data and see how they look like.- Import required packages and read in the input data:
###Code
import geopandas as gpd
import matplotlib.pyplot as plt
import shapely.speedups
%matplotlib inline
# File paths
border_fp = "data/Helsinki_borders.shp"
grid_fp = "data/TravelTimes_to_5975375_RailwayStation.shp"
# Read files
grid = gpd.read_file(grid_fp)
hel = gpd.read_file(border_fp)
###Output
_____no_output_____
###Markdown
- Visualize the layers:
###Code
# Plot the layers
ax = grid.plot(facecolor='gray')
hel.plot(ax=ax, facecolor='None', edgecolor='blue')
###Output
_____no_output_____
###Markdown
Here the grey area is the Travel Time Matrix grid (13231 grid squares) that covers the Helsinki region, and the blue area represents the municipality of Helsinki. Our goal is to conduct an overlay analysis and select the geometries from the grid polygon layer that intersect with the Helsinki municipality polygon.When conducting overlay analysis, it is important to check that the CRS of the layers match!- Check if Helsinki polygon and the grid polygon are in the same crs:
###Code
# Ensure that the CRS matches, if not raise an AssertionError
assert hel.crs == grid.crs, "CRS differs between layers!"
###Output
_____no_output_____
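###Markdown
In this case the layers already share a CRS, but if they did not, one of them could be re-projected before the overlay. A minimal added sketch (not needed for this data, since the assertion above passes):
###Code
# Hypothetical re-projection step, only needed if the CRS of the layers differed
if hel.crs != grid.crs:
    grid = grid.to_crs(hel.crs)
###Output
_____no_output_____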
###Markdown
Indeed, they do. Hence, the pre-requisite to conduct spatial operations between the layers is fulfilled (also the map we plotted indicated this).- Let's do an overlay analysis and create a new layer from polygons of the grid that `intersect` with our Helsinki layer. We can use a function called `overlay()` to conduct the overlay analysis that takes as input: 1) the GeoDataFrame where the selection is taken, 2) the GeoDataFrame used for making the selection, and 3) parameter `how` that can be used to control how the overlay analysis is conducted (possible values are `'intersection'`, `'union'`, `'symmetric_difference'`, `'difference'`, and `'identity'`):
###Code
intersection = gpd.overlay(grid, hel, how='intersection')
###Output
_____no_output_____
###Markdown
- Let's plot our data and see what we have:
###Code
intersection.plot(color="b")
###Output
_____no_output_____
###Markdown
As a result, we now have only those grid cells that intersect with the Helsinki borders. As we can see, **the grid cells are clipped based on the boundary.**- What about the data attributes? Let's see what we have:
###Code
print(intersection.head())
###Output
car_m_d car_m_t car_r_d car_r_t from_id pt_m_d pt_m_t pt_m_tt \
0 29476 41 29483 46 5876274 29990 76 95
1 29456 41 29462 46 5876275 29866 74 95
2 36772 50 36778 56 5876278 33541 116 137
3 36898 49 36904 56 5876279 33720 119 141
4 29411 40 29418 44 5878128 29944 75 95
pt_r_d pt_r_t pt_r_tt to_id walk_d walk_t GML_ID NAMEFIN \
0 24984 77 99 5975375 25532 365 27517366 Helsinki
1 24860 75 93 5975375 25408 363 27517366 Helsinki
2 44265 130 146 5975375 31110 444 27517366 Helsinki
3 44444 132 155 5975375 31289 447 27517366 Helsinki
4 24938 76 99 5975375 25486 364 27517366 Helsinki
NAMESWE NATCODE geometry
0 Helsingfors 091 POLYGON ((402250.000 6685750.000, 402024.224 6...
1 Helsingfors 091 POLYGON ((402367.890 6685750.000, 402250.000 6...
2 Helsingfors 091 POLYGON ((403250.000 6685750.000, 403148.515 6...
3 Helsingfors 091 POLYGON ((403456.484 6685750.000, 403250.000 6...
4 Helsingfors 091 POLYGON ((402000.000 6685500.000, 401900.425 6...
###Markdown
As we can see, due to the overlay analysis, the dataset contains the attributes from both input layers.- Let's save our result grid as a GeoJSON file, a commonly used file format for storing spatial data.
###Code
# Output filepath
outfp = "data/TravelTimes_to_5975375_RailwayStation_Helsinki.geojson"
# Use GeoJSON driver
intersection.to_file(outfp, driver="GeoJSON")
###Output
_____no_output_____
###Markdown
There are many more examples for different types of overlay analysis in [Geopandas documentation](http://geopandas.org/set_operations.html) where you can go and learn more. Aggregating dataData aggregation refers to a process where we combine data into groups. When doing spatial data aggregation, we merge the geometries together into coarser units (based on some attribute), and can also calculate summary statistics for these combined geometries from the original, more detailed values. For example, suppose that we are interested in studying continents, but we only have country-level data like the country dataset. If we aggregate the data by continent, we would convert the country-level data into a continent-level dataset.In this tutorial, we will aggregate our travel time data by car travel times (column `car_r_t`), i.e. the grid cells that have the same travel time to Railway Station will be merged together.- For doing the aggregation we will use a function called `dissolve()` that takes as input the column that will be used for conducting the aggregation:
###Code
# Conduct the aggregation
dissolved = intersection.dissolve(by="car_r_t")
# What did we get
print(dissolved.head())
###Output
geometry car_m_d car_m_t \
car_r_t
-1 MULTIPOLYGON (((388000.000 6668750.000, 387750... -1 -1
0 POLYGON ((386000.000 6672000.000, 385750.000 6... 0 0
7 POLYGON ((386250.000 6671750.000, 386000.000 6... 1051 7
8 MULTIPOLYGON (((386250.000 6671500.000, 386000... 1286 8
9 MULTIPOLYGON (((386500.000 6671250.000, 386250... 1871 9
car_r_d from_id pt_m_d pt_m_t pt_m_tt pt_r_d pt_r_t pt_r_tt \
car_r_t
-1 -1 5913094 -1 -1 -1 -1 -1 -1
0 0 5975375 0 0 0 0 0 0
7 1051 5973739 617 5 6 617 5 6
8 1286 5973736 706 10 10 706 10 10
9 1871 5970457 1384 11 13 1394 11 12
to_id walk_d walk_t GML_ID NAMEFIN NAMESWE NATCODE
car_r_t
-1 -1 -1 -1 27517366 Helsinki Helsingfors 091
0 5975375 0 0 27517366 Helsinki Helsingfors 091
7 5975375 448 6 27517366 Helsinki Helsingfors 091
8 5975375 706 10 27517366 Helsinki Helsingfors 091
9 5975375 1249 18 27517366 Helsinki Helsingfors 091
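###Markdown
By default, `dissolve()` keeps the first attribute value of each group. As mentioned above, summary statistics can also be calculated for the combined geometries; a small added sketch (restricted to a couple of the numeric columns) using the `aggfunc` parameter:
###Code
# Added sketch: aggregate a couple of numeric attributes with the mean instead of the first value
dissolved_mean = intersection[['car_r_t', 'walk_t', 'pt_r_t', 'geometry']].dissolve(by="car_r_t", aggfunc="mean")
print(dissolved_mean.head())
###Output
_____no_output_____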
###Markdown
- Let's compare the number of cells in the layers before and after the aggregation:
###Code
print('Rows in original intersection GeoDataFrame:', len(intersection))
print('Rows in dissolved layer:', len(dissolved))
###Output
Rows in original intersection GeoDataFrame: 3826
Rows in dissolved layer: 51
###Markdown
Indeed the number of rows in our data has decreased and the Polygons were merged together. What actually happened here? Let's take a closer look. - Let's see what columns we have now in our GeoDataFrame:
###Code
print(dissolved.columns)
###Output
Index(['geometry', 'car_m_d', 'car_m_t', 'car_r_d', 'from_id', 'pt_m_d',
'pt_m_t', 'pt_m_tt', 'pt_r_d', 'pt_r_t', 'pt_r_tt', 'to_id', 'walk_d',
'walk_t', 'GML_ID', 'NAMEFIN', 'NAMESWE', 'NATCODE'],
dtype='object')
###Markdown
As we can see, the column that we used for conducting the aggregation (`car_r_t`) can not be found from the columns list anymore. What happened to it?- Let's take a look at the indices of our GeoDataFrame:
###Code
print(dissolved.index)
###Output
Int64Index([-1, 0, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21,
22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38,
39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54,
56],
dtype='int64', name='car_r_t')
###Markdown
Aha! Well now we understand where our column went. It is now used as the index in our `dissolved` GeoDataFrame. - Now, we can, for example, select only those geometries from the layer that are exactly 15 minutes away from the Helsinki Railway Station:
###Code
# Select only geometries that are within 15 minutes away
dissolved.iloc[15]
# See the data type
print(type(dissolved.iloc[15]))
# See the data
print(dissolved.iloc[15].head())
###Output
geometry (POLYGON ((388250.0001354316 6668750.000042891...
car_m_d 12035
car_m_t 18
car_r_d 11997
from_id 5903886
Name: 20, dtype: object
###Markdown
As we can see, as a result, we have now a Pandas `Series` object containing basically one row from our original aggregated GeoDataFrame.Let's also visualize those 15 minute grid cells.- First, we need to convert the selected row back to a GeoDataFrame:
###Code
# Create a GeoDataFrame
selection = gpd.GeoDataFrame([dissolved.iloc[15]], crs=dissolved.crs)
###Output
_____no_output_____
###Markdown
- Plot the selection on top of the entire grid:
###Code
# Plot all the grid cells, and the grid cells that are 15 minutes a way from the Railway Station
ax = dissolved.plot(facecolor='gray')
selection.plot(ax=ax, facecolor='red')
###Output
_____no_output_____
###Markdown
Simplifying geometries Sometimes it might be useful to be able to simplify geometries. This could be something to consider for example when you have very detailed spatial features that cover the whole world. If you make a map that covers the whole world, it is unnecessary to have really detailed geometries because it is simply impossible to see those small details from your map. Furthermore, it takes a long time to actually render a large quantity of features into a map. Here, we will see how it is possible to simplify geometric features in Python. As an example we will use data representing the Amazon river in South America, and simplify its geometries.- Let's first read the data and see what the river looks like:
###Code
import geopandas as gpd
# File path
fp = "data/Amazon_river.shp"
data = gpd.read_file(fp)
# Print crs
print(data.crs)
# Plot the river
data.plot();
###Output
PROJCS["Mercator_2SP",GEOGCS["GCS_GRS 1980(IUGG, 1980)",DATUM["D_unknown",SPHEROID["GRS80",6378137,298.257222101]],PRIMEM["Unknown",0],UNIT["Degree",0.0174532925199433]],PROJECTION["Mercator_2SP"],PARAMETER["standard_parallel_1",-2],PARAMETER["central_meridian",-43],PARAMETER["false_easting",5000000],PARAMETER["false_northing",10000000],UNIT["metre",1,AUTHORITY["EPSG","9001"]],AXIS["Easting",EAST],AXIS["Northing",NORTH]]
###Markdown
The LineString that is presented here is quite detailed, so let's see how we can generalize it a bit. As we can see from the coordinate reference system, the data is projected in a metric system using [Mercator projection based on SIRGAS datum](http://spatialreference.org/ref/sr-org/7868/). - Generalization can be done easily by using a Shapely function called `.simplify()`. The `tolerance` parameter can be used to adjust how much the geometries should be generalized. **The tolerance value is tied to the coordinate system of the geometries**. Hence, the value we pass here is 20 000 **meters** (20 kilometers).
###Code
# Generalize geometry
data2 = data.copy()
data2['geom_gen'] = data2.simplify(tolerance=20000)
# Set geometry to be our new simplified geometry
data2 = data2.set_geometry('geom_gen')
# Plot
data2.plot()
# plot them side-by-side
%matplotlib inline
import matplotlib.pyplot as plt
#basic config
fig, (ax1,ax2) = plt.subplots(nrows=1, ncols=2, figsize=(20, 16))
#ax1, ax2 = axes
#1st plot
ax1 = data.plot(ax=ax1, color='red', alpha=0.5)
ax1.set_title('Original')
#2nd plot
ax2 = data2.plot(ax=ax2, color='orange', alpha=0.5)
ax2.set_title('Generalize')
fig.tight_layout()
###Output
_____no_output_____ |
reports/80_cluster_anc_triplet-initial.ipynb | ###Markdown
Configure
###Code
sample_size = 0
max_closure_size = 10000
max_distance = 0.22
cluster_distance_threshold = 0.155
super_cluster_distance_threshold = 0.205
num_candidates = 1000
eps = 0.000001
model_filename = '../data/models/anc-triplet-bilstm-100-512-40-05.pth'
# process_nicknames = True
# werelate_names_filename = 'givenname_similar_names.werelate.20210414.tsv'
# nicknames_filename = '../data/models/givenname_nicknames.txt'
# name_freqs_filename = 'given-final.normal.txt'
# clusters_filename = 'givenname_clusters.tsv'
# super_clusters_filename = 'givenname_super_clusters.tsv'
werelate_names_filename = '../data/external/surname_similar_names.werelate.20210414.tsv'
nicknames_filename = ''
name_freqs_filename = '../data/external/surname-final.normal.txt'
clusters_filename = '../data/models/ancestry_surname_clusters-20211028.tsv'
super_clusters_filename = '../data/models/ancestry_surname_super_clusters-20211028.tsv'
is_surname = True
###Output
_____no_output_____
###Markdown
Read WeRelate names into all_names. Later, we'll want to read frequent FS names into all_names
###Code
# TODO rewrite this in just a few lines using pandas
def load_werelate_names(path, is_surname):
name_variants = defaultdict(set)
with fopen(path, mode="r", encoding="utf-8") as f:
is_header = True
for line in f:
if is_header:
is_header = False
continue
fields = line.rstrip().split("\t")
# normalize should only return a single name piece, but loop just in case
for name_piece in normalize(fields[0], is_surname):
confirmed_variants = fields[1].strip().split(" ") if len(fields) >= 2 else []
computer_variants = fields[2].strip().split(" ") if len(fields) == 3 else []
variants = confirmed_variants + computer_variants
for variant in variants:
for variant_piece in normalize(variant, is_surname):
name_variants[name_piece].add(variant_piece)
return name_variants
all_names = set()
name_variants = load_werelate_names(werelate_names_filename, is_surname)
print(len(name_variants))
for k, v in name_variants.items():
all_names.add(add_padding(k))
all_names.update(add_padding(variant) for variant in v)
print(len(all_names), next(iter(all_names)))
name_variants = None
###Output
_____no_output_____
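###Markdown
The TODO above asks for a shorter pandas version; a rough sketch of what that could look like (assuming the same tab-separated layout and the `normalize` helper imported earlier) is shown below. It is not used by the rest of the notebook.
###Code
# Rough pandas sketch for the TODO above; not used elsewhere in this notebook
import pandas as pd
def load_werelate_names_pd(path, is_surname):
    df = pd.read_csv(path, sep="\t", header=0, names=["name", "confirmed", "computer"], dtype=str).fillna("")
    name_variants = defaultdict(set)
    for name, confirmed, computer in df.itertuples(index=False):
        variants = (confirmed + " " + computer).split()
        for name_piece in normalize(name, is_surname):
            for variant in variants:
                name_variants[name_piece].update(normalize(variant, is_surname))
    return name_variants
###Output
_____no_output_____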
###Markdown
Read nicknames and remove from names
###Code
def load_nicknames(path):
nicknames = defaultdict(set)
with fopen(path, mode="r", encoding="utf-8") as f:
for line in f:
names = line.rstrip().split(" ")
# normalize should only return a single name piece, but loop just in case
for name_piece in normalize(names[0], False):
orig_name = add_padding(name_piece)
for nickname in names[1:]:
for nickname_piece in normalize(nickname, False):
nicknames[add_padding(nickname_piece)].add(orig_name)
return nicknames
name_nicks = defaultdict(set)
if not is_surname:
nick_names = load_nicknames(nicknames_filename)
for nick, names in nick_names.items():
for name in names:
name_nicks[name].add(nick)
print(next(iter(nick_names.items())), "nick_names", len(nick_names.keys()), "name_nicks", len(name_nicks.keys()))
all_names -= set(nickname for nickname in nick_names.keys())
print(len(all_names))
###Output
_____no_output_____
###Markdown
Map names to ids
###Code
def map_names_to_ids(names):
ids = range(len(names))
return dict(zip(names, ids)), dict(zip(ids, names))
name_ids, id_names = map_names_to_ids(all_names)
print(next(iter(name_ids.items())), next(iter(id_names.items())))
###Output
_____no_output_____
###Markdown
Read name frequencies
###Code
# TODO rewrite this using pandas too
def load_name_freqs(path, is_surname):
name_freqs = defaultdict(int)
with fopen(path, mode="r", encoding="utf-8") as f:
for line in f:
fields = line.rstrip().split("\t")
for name_piece in normalize(fields[0], is_surname):
name_freqs[name_piece] = int(fields[1])
return name_freqs
name_freqs = load_name_freqs(name_freqs_filename, is_surname)
# keep only entries in all_names
name_freqs = dict((add_padding(k),v) for k,v in name_freqs.items() if add_padding(k) in all_names)
print(len(name_freqs), next(iter(name_freqs.items())))
###Output
_____no_output_____
###Markdown
Load model
###Code
model = torch.load(model_filename)
###Output
_____no_output_____
###Markdown
Encode names
###Code
MAX_NAME_LENGTH=30
char_to_idx_map, idx_to_char_map = build_token_idx_maps()
###Output
_____no_output_____
###Markdown
Take a sample because encoded names require a lot of memory
###Code
if sample_size <= 0 or sample_size >= len(all_names):
names_sample = np.array(list(all_names))
else:
names_sample = np.array(random.sample(all_names, sample_size))
print(names_sample.shape)
###Output
_____no_output_____
###Markdown
Compute encodings
###Code
# Get embeddings
names_tensor, _ = convert_names_to_model_inputs(names_sample,
char_to_idx_map,
MAX_NAME_LENGTH)
# Get encodings for the names from the encoder
# TODO why do I need to encode in chunks?
chunk_size = 10000
nps = []
for begin in tqdm(range(0, len(names_tensor), chunk_size)):
nps.append(model(names_tensor[begin:begin+chunk_size], just_encoder=True).detach().numpy())
names_encoded = np.concatenate(nps, axis=0)
nps = None
names_encoded.shape
###Output
_____no_output_____
###Markdown
Compute distances
###Code
name_candidates = get_best_matches(names_encoded,
names_encoded,
names_sample,
num_candidates=num_candidates,
metric='euclidean')
# what's going on here?
distances = np.hstack((np.repeat(names_sample, num_candidates)[:, np.newaxis], name_candidates.reshape(-1,2)))
# remove distances > max_distance
distances = distances[distances[:, -1].astype('float') <= max_distance]
# sort
distances = distances[distances[:, -1].astype('float').argsort()]
print(distances.shape)
name_candidates = None
###Output
_____no_output_____
###Markdown
Compute closures
###Code
# iterate over all distances, create closures and save scores
next_closure = 0
closure_ids = {}
id_closure = {}
row_ixs = []
col_ixs = []
dists = []
max_size = 0
for row in tqdm(distances):
name1 = row[0]
name2 = row[1]
id1 = name_ids[name1]
id2 = name_ids[name2]
# each distance is in distances twice
if id1 > id2:
continue
distance = max(eps, float(row[2]))
closure1 = id_closure.get(id1)
closure2 = id_closure.get(id2)
if closure1 is None and closure2 is not None:
id1, id2 = id2, id1
name1, name2 = name2, name1
closure1, closure2 = closure2, closure1
# add to distance matrix
row_ixs.append(id1)
col_ixs.append(id2)
dists.append(distance)
# skip if names are the same
if id1 == id2:
continue
row_ixs.append(id2)
col_ixs.append(id1)
dists.append(distance)
# create closures
if closure1 is None:
# if closure1 is None, then closure2 must be none also due to the above
# so create a new closure with id1 and id2
closure1 = next_closure
next_closure += 1
id_closure[id1] = closure1
id_closure[id2] = closure1
closure_ids[closure1] = [id1, id2]
next_closure += 1
elif closure2 is None:
# put id2 into id1's closure
id_closure[id2] = closure1
closure_ids[closure1].append(id2)
elif closure1 != closure2 and len(closure_ids[closure1]) + len(closure_ids[closure2]) <= max_closure_size:
# move all ids in closure2 into closure1
for id in closure_ids[closure2]:
id_closure[id] = closure1
closure_ids[closure1].append(id)
del closure_ids[closure2]
if len(closure_ids[closure1]) > max_size:
max_size = len(closure_ids[closure1])
# create distances matrix
dist_matrix = csr_matrix((dists, (row_ixs, col_ixs)))
print("max closure_size", max_size)
print("number of closures", len(closure_ids), "number of names enclosed", len(id_closure))
###Output
_____no_output_____
###Markdown
Compute clusters
###Code
def compute_clusters(closure_ids, id_names, dist_matrix, linkage, distance_threshold, eps, max_dist):
cluster_names = defaultdict(set)
name_cluster = {}
for closure, ids in tqdm(closure_ids.items()):
clusterer = AgglomerativeClustering(n_clusters=None, affinity='precomputed', linkage=linkage, distance_threshold=distance_threshold)
X = dist_matrix[ids][:, ids].todense()
X[X < eps] = max_dist
labels = clusterer.fit_predict(X)
for id, label in zip(ids, labels):
name = id_names[id]
cluster = f'{closure}_{label}'
cluster_names[cluster].add(name)
name_cluster[name] = cluster
return cluster_names, name_cluster
# try ward, average, single
cluster_linkage = 'average'
max_dist = 10.0
cluster_names, name_cluster = compute_clusters(closure_ids, id_names, dist_matrix, cluster_linkage, cluster_distance_threshold, eps, max_dist)
print(len(cluster_names))
###Output
_____no_output_____
###Markdown
Add unclustered names as singleton clusters
###Code
def add_singleton_names(cluster_names, name_cluster, names_sample):
for ix, name in enumerate(names_sample):
if name not in name_cluster:
cluster = f'{ix}'
cluster_names[cluster].add(name)
name_cluster[name] = cluster
return cluster_names, name_cluster
cluster_names, name_cluster = add_singleton_names(cluster_names, name_cluster, names_sample)
print(len(cluster_names))
###Output
_____no_output_____
###Markdown
Eval cluster P/R over Ancestry test data
###Code
train, test = load_train_test("../data/raw/records25k_data_train.csv", "../data/raw/records25k_data_test.csv")
_, _, candidates_train = train
input_names_test, weighted_relevant_names_test, candidates_test = test
all_candidates = np.concatenate((candidates_train, candidates_test))
def get_precision_recall(names_sample, all_candidates, input_names_test, weighted_relevant_names_test, cluster_names, name_cluster):
names_sample_set = set(names_sample.tolist())
all_candidates_set = set(all_candidates.tolist())
precisions = []
recalls = []
for input_name, weighted_relevant_names in zip(input_names_test, weighted_relevant_names_test):
if input_name not in names_sample_set:
continue
cluster_id = name_cluster[input_name]
names_in_cluster = cluster_names[cluster_id] & all_candidates_set
found_recall = 0.0
total_recall = 0.0
found_count = 0
for name, weight, _ in weighted_relevant_names:
if name in names_sample_set:
total_recall += weight
if name in names_in_cluster:
found_recall += weight
found_count += 1
if total_recall == 0.0:
continue
precision = found_count / len(names_in_cluster) if len(names_in_cluster) > 0 else 1.0
recall = found_recall / total_recall
precisions.append(precision)
recalls.append(recall)
avg_precision = sum(precisions) / len(precisions)
avg_recall = sum(recalls) / len(recalls)
return avg_precision, avg_recall, len(precisions)
precision, recall, total = get_precision_recall(names_sample, all_candidates, input_names_test,
weighted_relevant_names_test, cluster_names, name_cluster)
print("Total=", total, " Precision=", precision, " Recall=", recall)
###Output
_____no_output_____
###Markdown
Write clusters
###Code
def write_clusters(path, cluster_names, name_freqs, name_nicks):
cluster_id_name_map = {}
with fopen(path, mode="w", encoding="utf-8") as f:
for cluster_id, names in cluster_names.items():
# get most-frequent name
cluster_name = max(names, key=(lambda name: name_freqs.get(name, 0)))
# map cluster id to cluster name
cluster_id_name_map[cluster_id] = cluster_name
# add nicknames
nicknames = set()
if name_nicks:
for name in names:
if name in name_nicks:
nicknames.update(name_nicks[name])
# remove padding
cluster_name = remove_padding(cluster_name)
names = [remove_padding(name) for name in names | nicknames]
# write cluster
f.write(f'{cluster_name}\t{" ".join(names)}\n')
return cluster_id_name_map
cluster_id_name_map = write_clusters(clusters_filename, cluster_names, name_freqs, name_nicks)
###Output
_____no_output_____
###Markdown
Create super-clusters
###Code
super_cluster_names, name_super_cluster = compute_clusters(closure_ids, id_names, dist_matrix, cluster_linkage,
super_cluster_distance_threshold, eps, max_dist)
print(len(super_cluster_names))
super_cluster_names, name_super_cluster = add_singleton_names(super_cluster_names, name_super_cluster, names_sample)
print(len(super_cluster_names))
precision, recall, total = get_precision_recall(names_sample, all_candidates, input_names_test, weighted_relevant_names_test,
super_cluster_names, name_super_cluster)
print("Total=", total, " Precision=", precision, " Recall=", recall)
# get cluster names for each name in super cluster
super_cluster_clusters = {id: set([cluster_id_name_map[name_cluster[name]] for name in names]) for id, names in super_cluster_names.items()}
###Output
_____no_output_____
###Markdown
Write super-clusters
###Code
_ = write_clusters(super_clusters_filename, super_cluster_clusters, name_freqs, None)
###Output
_____no_output_____ |
notebooks/Correlation between app size and app quality.ipynb | ###Markdown
Convert unit of app size from GB into KB.
###Code
rating_df = app[["name","size","overall_rating", "current_rating", 'num_current_rating', "num_overall_rating"]].dropna()
rating_cleaned = {'1 star':1, "1 and a half stars": 1.5, '2 stars': 2, '2 and a half stars':2.5, "3 stars":3, "3 and a half stars":3.5, "4 stars": 4,
'4 and a half stars': 4.5, "5 stars": 5}
rating_df.overall_rating = rating_df.overall_rating.replace(rating_cleaned)
rating_df['weighted_rating'] = np.divide(rating_df['num_current_rating'],rating_df['num_overall_rating'])*rating_df['current_rating']+(1-np.divide(rating_df['num_current_rating'],rating_df['num_overall_rating']))*rating_df['overall_rating']
###Output
_____no_output_____
###Markdown
Add the weighted rating variable to the data set as a measure of app quality.
###Code
plt.scatter(rating_df['size'], rating_df['weighted_rating'])
plt.xlabel('Size of app')
plt.ylabel('Quality of app')
plt.title('Relationship between app size and quality')
plt.show()
rating_df_2 = rating_df[rating_df['size'] <= 500]
plt.scatter(rating_df_2['size'], rating_df_2['weighted_rating'])
plt.xlabel('Size of app')
plt.ylabel('Quality of app')
plt.title('Relationship between app size(less than 500) and quality')
plt.show()
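# Added sketch (not in the original notebook): quantify the relationship seen in the
# scatter plots with a Pearson correlation between app size and the weighted rating.
print(rating_df[['size', 'weighted_rating']].corr(method='pearson'))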
###Output
_____no_output_____ |
Day_00/02_Strings_and_FileIO/00 Strings in Python.ipynb | ###Markdown
Strings in Python What is a string? A "string" is a series of characters of arbitrary length.Strings are immutable - they cannot be changed once created. When you modify a string, you automatically make a copy and modify the copy.
###Code
s1 = 'Godzilla'
print s1, s1.upper(), s1
###Output
_____no_output_____
###Markdown
String literals A "literal" is essentially a string constant, already spelled out for you. Python may display a string using either single or double quotes on output, but that's just for formatting simplicity.
###Code
"Godzilla"
###Output
_____no_output_____
###Markdown
Single and double quotes Generally, a string literal can be in single ('), double ("), or triple (''') quotes. Single and double quotes are equivalent - use whichever you prefer (but be consistent). If you need to have a single or double quote in your literal, surround your literal with the other type, or use the backslash to escape the quote.
###Code
"Godzilla's a kaiju."
'Godzilla\'s a kaiju.'
'We call him... "Godzilla".'
###Output
_____no_output_____
###Markdown
Triple quotes (''') Triple quotes are a special form of quoting used for documenting your Python files (docstrings). We won't discuss that type here. Raw strings Raw strings don't use any escape character interpretation. Use them when you have a complicated string that you don't want to clutter with lots of backslashes. Python puts them in for you.
###Code
print('This is a\ncomplicated string with newline escapes in it.')
print(r'This is a\ncomplicated string with newline escapes in it.')
###Output
_____no_output_____
###Markdown
Strings and numbers
###Code
x=int('122', 3)
x+1
###Output
_____no_output_____
###Markdown
String objects String objects are just the string variables you create in Python.
###Code
kaiju = 'Godzilla'
print(kaiju)
kaiju
###Output
_____no_output_____
###Markdown
Note the print() call shows no quotes, while the simple variable name did. That is a Python output convention. Just entering the name will call the repr() method, which displays the value of the argument as Python would see it when it reads it in, not as the user wants it.
###Code
repr(kaiju)
print(repr(kaiju))
###Output
_____no_output_____
###Markdown
String operators When you read text from a file, it's just that - text. No matter what the data represents, it's still text. To use it as a number, you have to explicitly convert it to a number.
###Code
one = 1
two = '2'
print one, two, one + two
one = 1
two = int('2')
print one, two, one + two
num1 = 1.1
num2 = float('2.2')
print num1, num2, num1 + num2
###Output
_____no_output_____
###Markdown
You can also do this with hexadecimal and octal numbers, or any other base, for that matter.
###Code
print int('FF', 16)
print int('0xff', 16)
print int('777', 8)
print int('0777', 8)
print int('222', 7)
print int('110111001', 2)
###Output
_____no_output_____
###Markdown
If the conversion cannot be done, an exception is thrown.
###Code
print int('0xGG', 16)
###Output
_____no_output_____
###Markdown
Concatenation
###Code
kaiju1 = 'Godzilla'
kaiju2 = 'Mothra'
kaiju1 + ' versus ' + kaiju2
###Output
_____no_output_____
###Markdown
Repetition
###Code
'Run away! ' * 3
###Output
_____no_output_____
###Markdown
String keywords in() NOTE: This _particular_ statement is false regardless of how the statement is evaluated! :^)
###Code
'Godzilla' in 'Godzilla vs Gamera'
###Output
_____no_output_____
###Markdown
String functions len()
###Code
len(kaiju)
###Output
_____no_output_____
###Markdown
String methods Remember - methods are functions attached to objects, accessed via the 'dot' notation. Basic formatting and manipulation capitalize()/lower()/upper()/swapcase()/title()
###Code
kaiju.capitalize()
kaiju.lower()
kaiju.upper()
kaiju.swapcase()
'godzilla, king of the monsters'.title()
###Output
_____no_output_____
###Markdown
center()/ljust()/rjust()
###Code
kaiju.center(20, '*')
kaiju.ljust(20, '*')
kaiju.rjust(20, '*')
###Output
_____no_output_____
###Markdown
expandtabs()
###Code
tabbed_kaiju = '\tGodzilla'
print('[' + tabbed_kaiju + ']')
print('[' + tabbed_kaiju.expandtabs(16) + ']')
###Output
_____no_output_____
###Markdown
join()
###Code
' vs '.join(['Godzilla', 'Hedorah'])
','.join(['Godzilla', 'Mothra', 'King Ghidorah'])
###Output
_____no_output_____
###Markdown
strip()/lstrip()/rstrip()
###Code
' Godzilla '.strip()
'xxxGodzillayyy'.strip('xy')
' Godzilla '.lstrip()
' Godzilla '.rstrip()
###Output
_____no_output_____
###Markdown
partition()/rpartition()
###Code
battle = 'Godzilla x Gigan'
battle.partition(' x ')
battle = 'Godzilla and Jet Jaguar vs. Gigan and Megalon'
battle.partition(' vs. ')
battle = 'Godzilla vs Megalon vs Jet Jaguar'
battle.partition('vs')
battle = 'Godzilla vs Megalon vs Jet Jaguar'
battle.rpartition('vs')
###Output
_____no_output_____
###Markdown
replace()
###Code
battle = 'Godzilla vs Mothra'
battle.replace('Mothra', 'Anguiras')
battle = 'Godzilla vs a monster and another monster'
battle.replace('monster', 'kaiju', 2)
battle = 'Godzilla vs a monster and another monster and yet another monster'
battle.replace('monster', 'kaiju', 2)
###Output
_____no_output_____
###Markdown
split()/rsplit()
###Code
battle = 'Godzilla vs King Ghidorah vs Mothra'
battle.split(' vs ')
kaijus = 'Godzilla,Mothra,King Ghidorah'
kaijus.split(',')
kaijus = 'Godzilla Mothra King Ghidorah'
kaijus.split()
kaijus = 'Godzilla,Mothra,King Ghidorah,Megalon'
kaijus.rsplit(',', 2)
###Output
_____no_output_____
###Markdown
splitlines()
###Code
kaijus_in_lines = 'Godzilla\nMothra\nKing Ghidorah\nEbirah'
print(kaijus_in_lines)
kaijus_in_lines.splitlines()
kaijus_in_lines.splitlines(True)
###Output
_____no_output_____
###Markdown
zfill()
###Code
age_of_Godzilla = 60
age_string = str(age_of_Godzilla)
print(age_string, age_string.zfill(5))
###Output
_____no_output_____
###Markdown
String information isXXX()
###Code
print('Godzilla'.isalnum())
print('*Godzilla*'.isalnum())
print('Godzilla123'.isalnum())
print('Godzilla'.isalpha())
print('Godzilla123'.isalpha())
print('Godzilla'.isdigit())
print('60'.isdigit())
print('SpaceGodzilla'.isspace())
print(' '.isspace())
print('Godzilla'.islower())
print('godzilla'.islower())
print('Godzilla'.isupper())
print('GODZILLA'.isupper())
print('Godzilla vs Mothra'.istitle())
print('Godzilla X Mothra'.istitle())
###Output
_____no_output_____
###Markdown
count()
###Code
monsters = 'Godzilla and Space Godzilla and MechaGodzilla'
print 'There are ', monsters.count('Godzilla'), ' Godzillas.'
print 'There are ', monsters.count('Godzilla', len('Godzilla')), ' pseudo-Godzillas.'
###Output
_____no_output_____
###Markdown
startswith()/endswith()
###Code
king_kaiju = 'Godzilla'
print king_kaiju.startswith('God')
print king_kaiju.endswith('lla')
print king_kaiju.startswith('G')
print king_kaiju.endswith('amera')
###Output
_____no_output_____
###Markdown
find()/index()/rfind()/rindex()
###Code
kaiju_string = 'Godzilla,Gamera,Gorgo,Space Godzilla'
print 'The first Godz is at position', kaiju_string.find('Godz')
print 'The second Godz is at position', kaiju_string.find('Godz', len('Godz'))
kaiju_string.index('Minilla')
kaiju_string.rindex('Godzilla')
###Output
_____no_output_____
###Markdown
Advanced features decode()/encode()/translate() Used to convert strings to/from Unicode and other systems. Rarely used in science code. String formatting Similar to formatting in C, FORTRAN, etc.. There is a _lot_ more to this than I am showing here.
###Code
kaiju = 'Godzilla'
age = 60
print '%s is %d years old.' % (kaiju, age)
###Output
_____no_output_____
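###Markdown
The same message can also be produced with the `str.format()` method (a short added example; only %-style formatting is shown above):
###Code
# Added example: str.format() as an alternative to %-formatting
kaiju = 'Godzilla'
age = 60
print '{0} is {1} years old.'.format(kaiju, age)
###Output
_____no_output_____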
###Markdown
The _string_ module The _string_ module is the Python equivalent of "junk DNA" in living organisms. It's been around since the beginning, but many of its functions have been superseded by evolution. But some ancient code still relies on it, so they leave the old parts in....For modern code, the _string_ module does have some useful constants and functions.
###Code
import string
print string.ascii_letters
print string.ascii_lowercase
print string.ascii_uppercase
print string.digits
print string.hexdigits
print string.octdigits
print string.letters
print string.lowercase
print string.uppercase
print string.printable
print string.punctuation
print string.whitespace
###Output
_____no_output_____
###Markdown
The _string_ module also provides the _Formatter_ class, which can be useful for sophisticated text formatting. Regular Expressions What is a regular expression? Regular expressions ('regexps') are essentially a mini-language for describing string operations. Everything shown above with string methods and operators can be done with regular expressions. Most of the time, the regular expression verrsion is more concise. But not always more readable....To use regular expressions, you have to import the 're' module.
###Code
import re
###Output
_____no_output_____
###Markdown
A very short, whirlwind tour of regular expressions Scanning
###Code
kaiju_truth = 'Godzilla is the King of the Monsters. Ebirah is also a monster, but looks like a giant lobster.'
re.findall('Godz', kaiju_truth)
print re.findall('(^.+) is the King', kaiju_truth)
###Output
_____no_output_____
###Markdown
For simple searches like this, using in() is typically easier.Regexps are by default case-sensitive.
###Code
print re.findall('\. (.+) is also', kaiju_truth)
print re.findall('(.+) is also a (.+)', kaiju_truth)[0]
print re.findall('\. (.+) is also a (.+),', kaiju_truth)[0]
###Output
_____no_output_____
###Markdown
Changing
###Code
some_kaiju = 'Godzilla, Space Godzilla, Mechagodzilla'
print re.sub('Godzilla', 'Gamera', some_kaiju)
print re.sub('(?i)Godzilla', 'Gamera', some_kaiju)
###Output
_____no_output_____ |
week3 logistic Regression.ipynb | ###Markdown
Make logistic Regression model with MNIST
###Code
# Imports required by this notebook (TensorFlow 1.x with the bundled MNIST tutorial helpers)
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('data/',one_hot = True)
trainimg = mnist.train.images
trainLabel = mnist.train.labels
testimg = mnist.test.images
testLabel = mnist.test.labels
print("MNIST Loaded")
x = tf.placeholder('float', [None, 784])
y = tf.placeholder('float', [None, 10])
w = tf.Variable(tf.random_normal([784,10]))
b = tf.Variable(tf.random_normal([10]))
#Logistic Regression Model
activ = tf.nn.softmax(tf.matmul(x,w)+b)
#cost function
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(activ)))
#optimizer
optm = tf.train.GradientDescentOptimizer(0.01).minimize(cost)
#prediction
pred = tf.equal(tf.arg_max(activ,1), tf.arg_max(y, 1))
#accuracy
accr = tf.reduce_mean(tf.cast(pred, "float"))
init = tf.initialize_all_variables()
training_epochs = 10
batch_size = 100
display_step = 2
# SESSION
sess = tf.Session()
sess.run(tf.initialize_all_variables())
# MINI-BATCH LEARNING
for epoch in range(training_epochs):
avg_cost = 0.
num_batch = int(mnist.train.num_examples/batch_size)
for i in range(num_batch):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
sess.run(optm, feed_dict={x: batch_xs, y: batch_ys})
feeds = {x: batch_xs, y: batch_ys}
avg_cost += sess.run(cost, feed_dict=feeds)/num_batch
# DISPLAY
if epoch % display_step == 0:
feeds_train = {x: batch_xs, y: batch_ys}
feeds_test = {x: mnist.test.images, y: mnist.test.labels}
train_acc = sess.run(accr, feed_dict=feeds_train)
test_acc = sess.run(accr, feed_dict=feeds_test)
print ("Epoch: %03d/%03d cost: %.9f train_acc: %.3f test_acc: %.3f"
% (epoch, training_epochs, avg_cost, train_acc, test_acc))
print ("DONE")
###Output
Epoch: 000/010 cost: 75.560586708 train_acc: 0.910 test_acc: 0.865
Epoch: 002/010 cost: 32.222071790 train_acc: 0.910 test_acc: 0.895
Epoch: 004/010 cost: 27.124585262 train_acc: 0.930 test_acc: 0.903
Epoch: 006/010 cost: 24.715282281 train_acc: 0.940 test_acc: 0.911
Epoch: 008/010 cost: 23.033731372 train_acc: 0.910 test_acc: 0.914
DONE
|
Final_Task_2019/Merged_Files/.ipynb_checkpoints/PR5_13317025_13317041-checkpoint.ipynb | ###Markdown
FINAL PROJECT TF3101 - SYSTEM DYNAMICS AND SIMULATION: Electrical, Electromechanical, and Mechanical Systems. Group members: Erlant Muhammad Khalfani (13317025), Bernardus Rendy (13317041)
1. Electrical System Modeling. For the electrical system, a series RLC circuit with a single voltage source is chosen, as shown in the figure below.
System Description: 1. Input. The system input is the voltage source $v_i$, which is a function of time $v_i(t)$. 2. Output. The system output is the current $i_2$, i.e. the current flowing in mesh II. The voltages $v_{L1}$ and $v_{R2}$ can also serve as outputs; in this program only $v_{R2}$ and $v_{L1}$ are plotted. The value of $i_2$ is directly proportional to $v_{R2}$, so the $i_2$ curve has the same shape as the $v_{R2}$ curve. 3. Parameters. The system parameters are $R_1$, $R_2$, $L_1$, and $C_1$. The resistors $R_1$ and $R_2$ are *resistance* parameters, the inductor $L_1$ is an *inertance* parameter, and the capacitor $C_1$ is a *capacitance* parameter.
Assumptions: 1. The initial current of every *mesh* is zero ($i_1(0) = i_2(0) = 0$). 2. The initial time derivative of the current in every *mesh* is zero ($\frac{di_1(0)}{dt}=\frac{di_2(0)}{dt}=0$).
Modeling with a *Bond Graph*. From the electrical circuit above, the following *bond graph* is obtained. In the figure, every *junction* satisfies the causality rules, which indicates that the circuit is *causal*. From this *bond graph*, the *Ordinary Differential Equations* (ODE) can be derived, matching the result of applying *Kirchhoff's Voltage Law* (KVL) to each *mesh*. In *bond graph* modeling the variables are divided into *effort* and *flow* variables; since this is an electrical system, the *effort* variable is the voltage ($v$) and the *flow* variable is the current ($i$).
Mathematical Model - ODE. Balancing the *effort* variables at the left *1-junction* gives:$$v_i = v_{R1} + v_{C1}$$This is the same as the result of KVL on *mesh* I. The values of $v_{R1}$ and $v_{C1}$ are given by:$$v_{R1} = R_1i_1$$$$v_{C1} = \frac{1}{C_1}\int (i_1 - i_2)dt$$so KVL on *mesh* I becomes:$$v_i = R_1i_1 + \frac{1}{C_1}\int (i_1 - i_2)dt$$ The same analysis at the right *1-junction* gives:$$v_{C1} = v_{R2} + v_{L1}$$which again matches KVL on *mesh* II. The values of $v_{R2}$ and $v_{L1}$ are given by:$$v_{R2} = R_2i_2$$$$v_{L1} = L_1\frac{di_2}{dt}$$so KVL on *mesh* II becomes:$$\frac{1}{C_1}\int(i_1-i_2)dt = R_2i_2 + L_1\frac{di_2}{dt}$$or$$0 = L_1\frac{di_2}{dt} + R_2i_2 + \frac{1}{C_1}\int(i_2-i_1)dt$$
Mathematical Model - *Transfer Function*. With the ODEs obtained from the *bond graph*, the *Laplace Transform* can be applied to obtain the system transfer functions.
Applying the *Laplace Transform* to the KVL equation of *mesh* I gives:$$(R_1 + \frac{1}{C_1s})I_1 + (-\frac{1}{C_1s})I_2 = V_i$$and for the *mesh* II equation:$$(-\frac{1}{C_1s})I_1 + (L_1s + R_2 + \frac{1}{C_1s})I_2 = 0$$Eliminating $I_1$ from these two equations gives the transfer function between $I_2$ and $V_i$:$$\frac{I_2(s)}{V_i(s)} = \frac{1}{R_1C_1L_1s^2 + (R_1R_2C_1 + L_1)s + R_1 + R_2}$$From the Laplace-transformed *mesh* II equation, $V_{L1}$ follows from$$V_{L1} = L_1sI_2$$so the transfer function between $V_{L1}$ and $V_i$ is$$\frac{V_{L1}(s)}{V_i(s)} = \frac{L_1s}{R_1C_1L_1s^2 + (R_1R_2C_1 + L_1)s + R_1 + R_2}$$while the transfer function between $V_{R2}$ and $V_i$ is$$\frac{V_{R2}(s)}{V_i(s)} = \frac{R_2}{R_1C_1L_1s^2 + (R_1R_2C_1 + L_1)s + R_1 + R_2}$$
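As a quick cross-check (an added sketch, not part of the original derivation), the elimination of $I_1$ from the two mesh equations can be reproduced symbolically with sympy:
###Code
# Added sketch: verify the I2/Vi transfer function by eliminating I1 symbolically
import sympy as sp
R1s, R2s, L1s, C1s, s = sp.symbols('R_1 R_2 L_1 C_1 s', positive=True)
I1, I2, Vi = sp.symbols('I_1 I_2 V_i')
mesh1 = sp.Eq((R1s + 1/(C1s*s))*I1 - I2/(C1s*s), Vi)
mesh2 = sp.Eq(-I1/(C1s*s) + (L1s*s + R2s + 1/(C1s*s))*I2, 0)
sol = sp.solve([mesh1, mesh2], [I1, I2], dict=True)[0]
print(sp.simplify(sol[I2]/Vi))  # expected: 1/(R_1*C_1*L_1*s**2 + (R_1*R_2*C_1 + L_1)*s + R_1 + R_2)
###Output
_____no_output_____
###Markdown
The interactive simulation of this circuit is implemented below.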
###Code
#IMPORTS
from ipywidgets import interact, interactive, fixed, interact_manual , HBox, VBox, Label, Layout
import ipywidgets as widgets
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
#PARAMETER SLIDER DEFINITIONS
#Slider R1
R1_slider = widgets.FloatSlider(
value=1.,
min=1.,
max=1000.,
step=1.,
description='$R_1 (\Omega)$',
readout_format='.1f',
)
#Slider R2
R2_slider = widgets.FloatSlider(
value=1.,
min=1.,
max=1000.,
step=1.,
description='$R_2 (\Omega)$',
readout_format='.1f',
)
#Slider C1
C1_slider = widgets.IntSlider(
value=1,
min=10,
max=1000,
step=1,
description='$C_1 (\mu F)$',
)
#Slider L1
L1_slider = widgets.FloatSlider(
value=0.1,
min=1.,
max=1000.,
step=0.1,
description='$L_1 (mH)$',
readout_format='.1f',
)
#INPUT SELECTOR DECLARATION
#Input signal type selector
vi_select = signal_select = widgets.Dropdown(
options=[('Step', 0), ('Impulse', 1)],
description='Tipe Sinyal:',
)
#OUTPUT SELECTOR DECLARATION
#Output Selector
vo_select = widgets.ToggleButtons(
options=['v_R2', 'v_L1'],
description='Output:',
)
#ADDITIONAL INTERFACE DECLARATIONS
#Color button
color_select1 = widgets.ToggleButtons(
options=['blue', 'red', 'green', 'black'],
description='Color:',
)
#SET PARAMETER VALUES
R1 = R1_slider.value
R2 = R2_slider.value
C1 = C1_slider.value
L1 = L1_slider.value
#SET INPUT VALUE AND FORM
vform = vi_select.value
#SET OUTPUT
vo = vo_select
#SET INTERFACE OPTIONS
color = color_select1.value
#Plot the selected output using the transfer functions
def plot_electric (vo, R1, R2, C1, L1, vform, color):
    #Adjust the parameter values to their base units
R1 = R1
R2 = R2
C1 = C1*(10**-6)
L1 = L1*(10**-3)
f, ax = plt.subplots(1, 1, figsize=(8, 6))
num1 = [R2]
num2 = [L1, 0]
den = [R1*C1*L1, R1*R2*C1+L1, R1+R2]
if vo=='v_R2':
sys_vr =signal.TransferFunction(num1, den)
step_vr = signal.step(sys_vr)
impl_vr = signal.impulse(sys_vr)
if vform == 0:
ax.plot(step_vr[0], step_vr[1], color=color, label='Respon Step')
elif vform == 1:
ax.plot(impl_vr[0], impl_vr[1], color=color, label='Respon Impuls')
ax.grid()
ax.legend()
elif vo=='v_L1':
sys_vl = signal.TransferFunction(num2, den)
step_vl = signal.step(sys_vl)
impl_vl = signal.impulse(sys_vl)
        #Plot the response
if vform == 0:
ax.plot(step_vl[0], step_vl[1], color=color, label='Respon Step')
elif vform == 1:
ax.plot(impl_vl[0], impl_vl[1], color=color, label='Respon Impuls')
ax.grid()
ax.legend()
ui_el = widgets.VBox([vo_select, R1_slider, R2_slider, C1_slider, L1_slider, vi_select, color_select1])
out_el = widgets.interactive_output(plot_electric, {'vo':vo_select,'R1':R1_slider,'R2':R2_slider,'C1':C1_slider,'L1':L1_slider,'vform':vi_select,'color':color_select1})
int_el = widgets.HBox([ui_el, out_el])
display(int_el)
###Output
_____no_output_____
###Markdown
Analysis
a. Step Response. From the simulation results, the effects of changing the parameter values on the system output for a *step* input include:
1. Increasing $R_1$ lowers the *steady-state gain* ($K$) of the system. This can be seen from the lower steady-state value of the output $v_{R2}$ and the lower *maximum overshoot* ($M_p$) of the output $v_{L1}$. $R_1$ is also inversely related to the damping ratio $\xi$: the oscillation becomes more visible as $R_1$ increases. $R_1$ is also proportional to the *settling time* ($t_s$), seen in the longer time the system needs to reach a value within 2-5% of the steady-state value.
2. Increasing $R_2$ raises the *steady-state gain* ($K$) for the output $v_{R2}$ but lowers it for the output $v_{L1}$. In addition, $R_2$ is inversely related to the *settling time* ($t_s$); when $R_2$ increases, the system reaches steady state in a shorter time. Increasing $R_2$ also reduces the *maximum overshoot* ($M_p$).
3. $C_1$ is proportional to the *settling time*, as seen from the longer time the system needs to approach steady state when $C_1$ is increased. $C_1$ is also inversely related to the *maximum overshoot*, which drops when $C_1$ is increased. When $C_1$ increases, the *delay time* ($t_d$), *rise time* ($t_r$), and *peak time* ($t_p$) also increase.
4. Increasing $L_1$ reduces the oscillation frequency and increases the *settling time* of the system. $L_1$ is also proportional to the *steady-state gain* of the system for the output $v_{L1}$.
b. Impulse Response. From the simulation results, the effects of changing the parameter values on the system output for an *impulse* input include:
1. $R_1$ is inversely related to the *peak response*. Increasing $R_1$ also increases the *settling time* ($t_s$).
2. Increasing $R_2$ changes the *peak response* of $v_{R2}$ but has no effect on the *peak response* of $v_{L1}$. Increasing $R_2$ also lowers the *settling time* ($t_s$), seen in the shorter time the system needs to reach steady state.
3. Increasing $C_1$ lowers the *peak response* and increases the *settling time* ($t_s$), seen in the longer time the system needs to approach steady state.
4. Increasing $L_1$ lowers the *peak response* and increases the *settling time* ($t_s$), seen in the longer time the system needs to approach steady state.
2. Electromechanical System Modeling: High-Torque Brushed DC Motor with a Motor Driver
The system to be modeled is a high-current BTS7960 motor driver (first figure) connected to a high-torque brushed motor (second figure). Image source: KRTMI URO ITB
System Description: 1. Input. The system input is the signal $V_{in}$, which is a function of time $V_{in}(t)$. This input can take the form of a step function, an impulse, or a pulse-width-modulated signal with a given duty cycle (the usual output of a microcontroller). 2. Output. The system outputs are the angular position $\theta$, the motor angular velocity $\omega$, the motor angular acceleration $\alpha$, and the torque $T$; the output is chosen according to the manoeuvre the robot needs. The output variables depend on $\theta$, $\frac {d\theta}{dt}$, and $\frac{d^2\theta}{dt^2}$, so a differential equation is derived for each output. 3. Parameters. The system has parameters $J,K_f,K_a,L,R,K_{emf},K_{md}$, derived from the characteristics of the mechanical and electrical subsystems as follows.
Motor Driver Subsystem. First, consider the structure of the motor driver. The driver used is the MOSFET-based BTS7960, so its dynamic characteristic rises almost instantaneously. The MOSFETs are arranged so that the motor can be driven forward or backward. Assuming the MOSFET rise time is fast relative to the signal and the driver is sufficiently linear, the motor driver can be modeled as a zeroth-order system with gain $ K_{md} $. Image source: BTS7960 datasheet.
Zeroth-Order Motor Driver Model. The dynamic relation between the driver output and input is $ V_m=K_{md}V_{in} $, the same as the input-output relation of its static characteristic.
Motor Subsystem. Next, consider the structure of a high-torque motor whose load inertia cannot be neglected. Image source: https://www.researchgate.net/figure/The-structure-of-a-DC-motor_fig2_260272509. From this, the differential equations of the mechanical system can be derived. Image source: Chapman - Electric Machinery Fundamentals 4th Edition
$$ T=K_a i_a $$ where $T$ is the torque and $K_a$ is the torque proportionality constant (the product of K and the flux) for the armature current $i_a$.
$$V_{emf}=K_{emf} \omega$$ where $V_{emf}$ is the voltage behind the electromotive force and $K_{emf}$ is the EMF proportionality constant (the product of K and the flux under ideal conditions with no voltage drop) for the angular speed of the motor.
The torque makes the load rotate with angular velocity $\omega$ and angular acceleration $\alpha$. The proportionality factor with respect to the angular acceleration is $J$ (rotational inertia) and with respect to the angular velocity it is $ K_f $ (rotational damping constant), so the following differential equation can be derived (Equation 1):$$ J\alpha + K_f\omega = T $$$$J\frac {d^2\theta}{dt^2} + K_f\frac {d\theta}{dt} = K_a i_a $$$$ J\frac {d\omega}{dt} + K_f \omega = K_a i_a $$
Next, the differential equation of the electrical part of the motor is derived so that $i_a$ can be substituted with the input $V_{in}$ (Equation 2):$$ L \frac{d{i_a}}{dt} + R i_a + K_{emf} \omega = V_m $$$$V_m = K_{md} V_{in}$$$$ L \frac{d{i_a}}{dt} + R i_a + K_{emf} \omega = K_{md} V_{in} $$
Modeling with Transfer Functions. With these subsystem equations, the system transfer functions can be obtained by transforming to the Laplace (s) domain.
The solution uses transfer functions in the Laplace domain. First, the equations are transformed to the Laplace domain under the assumptions $ i_a (0) = 0 $, $ \frac {di_a(0)}{dt} = 0 $, $ \theta (0) = 0 $, $ \omega (0) = 0 $, $ \alpha (0) = 0 $. No separate voltage drop is assumed because it has been lumped into $K_{emf}$; the voltage drop is assumed to be proportional to $\omega$.
Equation 1 becomes:$$J s \omega + K_f \omega = K_a i_a$$Equation 2 becomes:$$L s i_a + R i_a + K_{emf} \omega = K_{md} V_{in}$$$$i_a=\frac {K_{md} V_{in}-K_{emf} \omega}{L s + R}$$So the overall system equation in terms of $\omega$ is:$$J s \omega + K_f \omega = \frac {K_a(K_{md} V_{in} - K_{emf} \omega)}{L s + R}$$The transfer function for $\omega$ is:$$\omega = \frac {K_a(K_{md} V_{in}-K_{emf} \omega)}{(L s + R)(J s + K_f)}$$$$\omega = \frac {K_a K_{md} V_{in}}{(L s + R)(J s + K_f)(1 + \frac {K_a K_{emf}}{(L s + R)(J s + K_f)})}$$$$\frac {\omega (s)}{V_{in}(s)} = \frac {K_a K_{md}}{(L s + R)(J s + K_f)+ K_a K_{emf}}$$
The transfer function for $\theta$ can be derived by changing the variable in Equation 1:$$J s^2 \theta + K_f s \theta = K_a i_a$$Equation 2:$$L s i_a + R i_a + K_{emf} s \theta = K_{md} V_{in}$$$$i_a=\frac {K_{md} V_{in}-K_{emf} s \theta}{L s + R}$$So the overall system equation in terms of $\theta$ is:$$J s^2 \theta + K_f s \theta = \frac {K_a(K_{md} V_{in}-K_{emf} s \theta)}{L s + R}$$The transfer function for $\theta$ is:$$\theta = \frac {K_a(K_{md} V_{in}-K_{emf} s \theta)}{(L s + R)(J s^2 + K_f s )}$$$$\theta + \frac {K_a K_{emf} s \theta}{(L s + R)(J s^2 + K_f s )}= \frac {K_a K_{md} V_{in}}{(L s + R)(J s^2 + K_f s )}$$$$\theta= \frac {K_a K_{md} V_{in}}{(L s + R)(J s^2 + K_f s )(1 + \frac {K_a K_{emf} s}{(L s + R)(J s^2 + K_f s )})}$$$$\frac {\theta (s)}{V_{in}(s)}= \frac {K_a K_{md}}{(L s + R)(J s^2 + K_f s )+ K_a K_{emf} s}$$
The transfer functions for $\omega$ and $\theta$ differ only by a factor of $ \frac {1}{s} $, in accordance with the relation$$\omega = s \theta$$so the transfer function for $\alpha$ satisfies$$\alpha = s\omega = s^2 \theta$$Hence the transfer function for $\alpha$ is:$$\frac {\alpha (s)}{V_{in}(s)} = \frac {K_a K_{md} s}{(L s + R)(J s + K_f)+ K_a K_{emf}}$$
Output. From the transfer functions, the output equations for the angular position $\theta$, the motor angular velocity $\omega$, the angular acceleration $\alpha$, and the torque $T$ are formulated as functions of time (t).$$\theta (t) = \mathscr {L^{-1}} \{\frac {K_a K_{md} V_{in}(s)}{(L s + R)(J s^2 + K_f s )+ K_a K_{emf} s}\}$$$$\omega (t) = \mathscr {L^{-1}} \{\frac {K_a K_{md} V_{in}(s)}{(L s + R)(J s + K_f)+ K_a K_{emf}}\}$$$$\alpha (t)= \mathscr {L^{-1}} \{\frac {K_a K_{md} V_{in}(s) s}{(L s + R)(J s + K_f)+ K_a K_{emf}}\}$$$$T = \frac {K_a(K_{md} V_{in}-K_{emf} \omega)}{L s + R} $$
###Code
# Numerical solutions are used for the outputs
import numpy as np
from scipy.integrate import odeint
import scipy.signal as sig
import matplotlib.pyplot as plt
from sympy.physics.mechanics import dynamicsymbols, SymbolicSystem
from sympy import *
import control as control
vin = symbols ('V_{in}') #define the input symbol
omega, theta, alpha = dynamicsymbols('omega theta alpha') #define the output symbols
ka,kmd,l,r,j,kf,kemf,s,t = symbols ('K_a K_{md} L R J K_f K_{emf} s t')#define the parameter symbols and the Laplace variable s
thetaOverVin = (ka*kmd)/((l*s+r)*(j*s**2+kf*s)+ka*kemf*s) #theta transfer function
polyThetaOverVin = thetaOverVin.as_poly() #simplify the expression
polyThetaOverVin
omegaOverVin = (ka*kmd)/((l*s+r)*(j*s+kf)+ka*kemf) #omega transfer function
polyOmegaOverVin = omegaOverVin.as_poly() #simplify the expression
polyOmegaOverVin
alphaOverVin = (ka*kmd*s)/((l*s+r)*(j*s+kf)+ka*kemf)
polyAlphaOverVin = alphaOverVin.as_poly() #simplify the expression
polyAlphaOverVin
torqueOverVin= ka*(kmd-kemf*((ka*kmd)/((l*s+r)*(j*s+kf)+ka*kemf)))/(l*s+r) #torque transfer function (simplified)
polyTorqueOverVin = torqueOverVin.as_poly()
polyTorqueOverVin
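# Added sanity check (not part of the original report): the steady-state (DC) gain of the
# speed transfer function follows from substituting s = 0, giving Ka*Kmd/(R*Kf + Ka*Kemf)
dcGainOmega = omegaOverVin.subs(s, 0)
dcGainOmega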
def plot_elektromekanik(Ka,Kmd,L,R,J,Kf,Kemf,VinType,tMax,dutyCycle,grid):
    # Assign parameter values and build the system model as transfer functions that python can process
Ka = Ka
Kmd = Kmd
L = L
R = R
J = J
Kf = Kf
Kemf = Kemf
    # Build the transfer function models
tf = control.tf
tf_Theta_Vin = tf([Ka*Kmd],[J*L,(J*R+Kf*L),(Ka*Kemf+Kf*R),0])
tf_Omega_Vin = tf([Ka*Kmd],[J*L,(J*R+Kf*L),(Ka*Kemf+Kf*R)])
tf_Alpha_Vin = tf([Ka*Kmd,0],[J*L,(J*R+Kf*L),(Ka*Kemf+Kf*R)])
tf_Torque_Vin = tf([Ka*Kmd],[L,R]) - tf([Kmd*Kemf*Ka**2],[J*L**2,(2*J*L*R+Kf*L**2),(J*R**2+Ka*Kemf*L+2*Kf*L*R),(Ka*Kemf*R+Kf*R**2)])
f, axs = plt.subplots(4, sharex=True, figsize=(10, 10))
    # Function that sets the analysis time range (must be a multiple of 1 ms)
def analysisTime(maxTime):
ts=np.linspace(0, maxTime, maxTime*100)
return ts
t=analysisTime(tMax)
if VinType== 2:
        # PWM input defined over a 1 millisecond period
def Pwm(dutyCycle,totalTime):
trepeat=np.linspace(0, 1, 100)
squareWave=(5*sig.square(2 * np.pi * trepeat, duty=dutyCycle))
finalInput=np.zeros(len(totalTime))
for i in range(len(squareWave)):
if squareWave[i]<0:
squareWave[i]=0
for i in range(len(totalTime)):
finalInput[i]=squareWave[i%100]
return finalInput
pwm=Pwm(dutyCycle,t)
tPwmTheta, yPwmTheta, xPwmTheta = control.forced_response(tf_Theta_Vin, T=t, U=pwm, X0=0)
tPwmOmega, yPwmOmega, xPwmOmega = control.forced_response(tf_Omega_Vin, t, pwm, X0=0)
tPwmAlpha, yPwmAlpha, xPwmAlpha = control.forced_response(tf_Alpha_Vin, t, pwm, X0=0)
tPwmTorque, yPwmTorque, xPwmTorque = control.forced_response(tf_Torque_Vin, t, pwm, X0=0)
axs[0].plot(tPwmTheta, yPwmTheta, color = 'blue', label ='Theta')
axs[1].plot(tPwmOmega, yPwmOmega, color = 'red', label ='Omega')
axs[2].plot(tPwmAlpha, yPwmAlpha, color = 'black', label ='Alpha')
axs[3].plot(tPwmTorque, yPwmTorque, color = 'green', label ='Torque')
axs[0].title.set_text('Theta $(rad)$ (Input PWM)')
axs[1].title.set_text('Omega $(\\frac {rad}{ms})$ (Input PWM)')
axs[2].title.set_text('Alpha $(\\frac {rad}{ms^2})$ (Input PWM)')
axs[3].title.set_text('Torque $(Nm)$ (Input PWM)')
elif VinType== 0:
tStepTheta, yStepTheta = control.step_response(tf_Theta_Vin,T=t, X0=0)
tStepOmega, yStepOmega = control.step_response(tf_Omega_Vin,T=t, X0=0)
tStepAlpha, yStepAlpha = control.step_response(tf_Alpha_Vin,T=t, X0=0)
tStepTorque, yStepTorque = control.step_response(tf_Torque_Vin, T=t, X0=0)
axs[0].plot(tStepTheta, yStepTheta, color = 'blue', label ='Theta')
axs[1].plot(tStepOmega, yStepOmega, color = 'red', label ='Omega')
axs[2].plot(tStepAlpha, yStepAlpha, color = 'black', label ='Alpha')
axs[3].plot(tStepTorque, yStepTorque, color = 'green', label ='Torque')
axs[0].title.set_text('Theta $(rad)$ (Input Step)')
axs[1].title.set_text('Omega $(\\frac {rad}{ms})$ (Input Step)')
axs[2].title.set_text('Alpha $(\\frac {rad}{ms^2})$(Input Step)')
axs[3].title.set_text('Torque $(Nm)$ (Input Step)')
elif VinType== 1 :
tImpulseTheta, yImpulseTheta = control.impulse_response(tf_Theta_Vin,T=t, X0=0)
tImpulseOmega, yImpulseOmega = control.impulse_response(tf_Omega_Vin,T=t, X0=0)
tImpulseAlpha, yImpulseAlpha = control.impulse_response(tf_Alpha_Vin,T=t, X0=0)
tImpulseTorque, yImpulseTorque = control.impulse_response(tf_Torque_Vin, T=t, X0=0)
axs[0].plot(tImpulseTheta, yImpulseTheta, color = 'blue', label ='Theta')
axs[1].plot(tImpulseOmega, yImpulseOmega, color = 'red', label ='Omega')
axs[2].plot(tImpulseAlpha, yImpulseAlpha, color = 'black', label ='Alpha')
axs[3].plot(tImpulseTorque, yImpulseTorque, color = 'green', label ='Torque')
axs[0].title.set_text('Theta $(rad)$ (Input Impulse)')
axs[1].title.set_text('Omega $(\\frac {rad}{ms})$ (Input Impulse)')
axs[2].title.set_text('Alpha $(\\frac {rad}{ms^2})$ (Input Impulse)')
axs[3].title.set_text('Torque $(Nm)$ (Input Impulse)')
axs[0].legend()
axs[1].legend()
axs[2].legend()
axs[3].legend()
axs[0].grid(grid)
axs[1].grid(grid)
axs[2].grid(grid)
axs[3].grid(grid)
# PARAMETER WIDGET DEFINITIONS
Ka_slider = widgets.FloatSlider(
value=19.90,
min=0.1,
max=20.0,
step=0.1,
description='$K_a (\\frac {Nm}{A})$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
Kmd_slider = widgets.FloatSlider(
value=20.0,
min=0.1,
max=20.0,
step=0.1,
description='$K_{md} (\\frac {V}{V})$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
L_slider = widgets.FloatSlider(
value=20,
min=0.1,
max=100.0,
step=0.1,
description='$L (mH)$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
R_slider = widgets.IntSlider(
value=5,
min=1,
max=20,
step=1,
description='$R (\Omega)$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
J_slider = widgets.FloatSlider(
value=25,
min=0.1,
max=100.0,
step=0.1,
description='$J (\\frac {Nm(ms)^2}{rad})$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
Kf_slider = widgets.FloatSlider(
value=8,
min=0.1,
max=100.0,
step=0.1,
description='$K_{f} (\\frac {Nm(ms)}{rad})$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
Kemf_slider = widgets.FloatSlider(
value=19.8,
min=0.1,
max=20,
step=0.1,
description='$K_{emf} (\\frac {V(ms)}{rad})$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
VinType_select = widgets.Dropdown(
options=[('Step', 0), ('Impulse', 1),('PWM',2)],
description='Input Signal Type:',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
tMax_slider = widgets.IntSlider(
value=50,
min=1,
max=500,
step=1,
description='$t_{max} (ms)$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
dutyCycle_slider = widgets.FloatSlider(
value=0.5,
min=0,
max=1.0,
step=0.05,
description='$Duty Cycle (\%)$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
grid_button = widgets.ToggleButton(
value=True,
description='Grid',
icon='check',
layout=Layout(width='20%', height='50px',margin='10px 10px 10px 350px'),
style={'description_width': '200px'},
)
def update_Kemf_max(*args):
Kemf_slider.max = Ka_slider.value
Ka_slider.observe(update_Kemf_max, 'value')
ui_em = widgets.VBox([Ka_slider,Kmd_slider,L_slider,R_slider,J_slider,Kf_slider,Kemf_slider,VinType_select,tMax_slider,dutyCycle_slider,grid_button])
out_em = widgets.interactive_output(plot_elektromekanik, {'Ka':Ka_slider,'Kmd':Kmd_slider,'L':L_slider,'R':R_slider,'J':J_slider,'Kf':Kf_slider,'Kemf':Kemf_slider,'VinType':VinType_select,'tMax':tMax_slider,'dutyCycle':dutyCycle_slider, 'grid':grid_button})
display(ui_em,out_em)
###Output
_____no_output_____
###Markdown
Analysis Because the model's equations are fairly complex, the effect of each parameter on the system output cannot be deduced intuitively, so experiments are carried out using the sliders to change the parameters and observe how the parameter changes interact. The input shape is also varied, and the effect of using PWM as a modulation of a step signal with a maximum level of 5 V on the output is analysed. 1. Increasing $K_a$Increasing $K_a$ increases the oscillation ($\omega_d$), raises the gain of the $\omega$ and $\alpha$ outputs, and increases the slope of the $\theta$ output. The Torque gain, however, is unaffected. 2. Increasing $K_{md}$Increasing $K_{md}$ raises the amplitude of $V_{in}$, so the output amplitude grows. 3. Increasing $L$Increasing $L$ makes the rise of the angular velocity $\omega$ and of $T$ slower, and the decay of $\alpha$ slower as well, so $\theta$ increases more slowly (longer rise time). 4. Increasing $R$Increasing $R$ reduces the output oscillation ($\omega_d$) of $\omega$, $\alpha$, and Torque and lowers the gain, which reduces the slope of the $\theta$ output. 5. Increasing $J$Increasing $J$ raises the Torque gain and lowers the gain of $\theta$, $\omega$, and $\alpha$. 6. Increasing $K_f$Increasing $K_f$ raises the Torque gain and lowers the gain of $\theta$, $\omega$, and $\alpha$. 7. Increasing $K_{emf}$Increasing $K_{emf}$ lowers the gain of Torque, $\theta$, $\omega$, and $\alpha$. 8. Interaction between parametersThe effect of decreasing $R$ relative to increasing $K_a$ is roughly a factor of three. Increases in $J$ and $K_f$ are limited by the increase in $K_a$. Physically, $K_a$ and $K_{emf}$ increase together and are nearly proportional (they differ only through the voltage drops across the various components), followed by $L$, so for large $K_a$ and $K_{emf}$ the time to reach steady state also becomes longer. Interestingly, $K_a$ and $K_{emf}$ on their own give the system a small gain (energy transfer) if only the increase of $K_a$ and $K_{emf}$ is considered, but when accompanied by an increase of $V_{in}$ the system transfers more energy than before at steady state. It can therefore be concluded that $K_a$ and $K_{emf}$ must be large enough for the configuration to match the input $V_{in}$ and yield efficient energy transfer, and the input $V_{in}$ must in turn match the existing $K_a$ and $K_{emf}$ so that it can actually turn the motor (which is why motors have a minimum and a recommended operating voltage). 9. Effect of a step inputWith a step input the oscillation ($\omega_d$) is smaller. 10. Effect of an impulse inputWith an impulse input $\theta$ reaches a steady state because the motor stops turning, so $\omega$, $\alpha$, and Torque have a steady-state value of 0. 11. Effect of a PWM inputA PWM input with a given duty cycle produces more oscillation, but as the duty cycle increases the oscillation decreases (the signal approaches a step). Interestingly, a PWM signal can be used for control, yet when no controller is present the PWM signal instead introduces oscillation into the system. 3. Modelling a Mechanical SystemThe mechanical system is modelled as follows (a simple mechanical system described with a bond graph). System Description1. Input$F$ is the force applied to the mass2. 
Output$x$ is the displacement, $v$ the velocity, and $a$ the acceleration of the mass3. ParametersFrom the bond-graph derivation, the parameters $k$, $b$, and $m$ are obtained Transfer Function ModellingThe transfer functions are easily derived from the bond-graph relations, assuming$$x(0)=0$$$$v(0)=0$$$$a(0)=0$$$$m \frac {d^2 x}{dt^2} = F-kx-b\frac{dx}{dt}$$The Laplace transform gives$$s^2 x = \frac {F}{m}-x\frac {k}{m}-sx\frac{b}{m}$$$$(s^2+s\frac{b}{m}+\frac {k}{m})x=\frac {F}{m}$$For x:$$\frac {x}{F}=\frac {1}{(ms^2+bs+k)}$$For v:$$\frac {v}{F}=\frac {s}{(ms^2+bs+k)}$$For a:$$\frac {a}{F}=\frac {s^2}{(ms^2+bs+k)}$$
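As a quick sanity check on the $x/F$ transfer function, the final value theorem for a step force of magnitude $F_0$ gives$$x_{ss} = \lim_{s \to 0} s \cdot \frac{1}{ms^2+bs+k} \cdot \frac{F_0}{s} = \frac{F_0}{k}$$i.e. at steady state the spring alone balances the applied force, which is useful for interpreting the step-response plots below.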
###Code
# A numerical solution is used for the output
import numpy as np
from scipy.integrate import odeint
import scipy.signal as sig
import matplotlib.pyplot as plt
from sympy.physics.mechanics import dynamicsymbols, SymbolicSystem
from sympy import *
import control as control
import ipywidgets as widgets  # used by the sliders below (import assumed missing from this dump)
from ipywidgets import Layout
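# Added illustrative note (a minimal sketch, not from the original notebook): for the transfer
# function x/F = 1/(m*s^2 + b*s + k) derived above, the natural frequency and damping ratio
# follow directly from the denominator coefficients. The values below are assumed example
# parameters, not values used elsewhere in this notebook.
m_ex, b_ex, k_ex = 1.0, 2.0, 25.0                 # assumed example mass, damping, stiffness
wn_ex = np.sqrt(k_ex / m_ex)                      # natural frequency [rad/s]
zeta_ex = b_ex / (2.0 * np.sqrt(k_ex * m_ex))     # damping ratio (underdamped if < 1)
print('natural frequency:', wn_ex, 'rad/s; damping ratio:', zeta_ex)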
def plot_mekanik(M,B,K,VinType,grid):
# Assign values to the parameters and build the system model as transfer functions that Python can process
m=M
b=B
k=K
tf = sig.TransferFunction
tf_X_F=tf([1],[m,b,k])
tf_V_F=tf([1,0],[m,b,k])
tf_A_F=tf([1,0,0],[m,b,k])
f, axs = plt.subplots(3, sharex=True, figsize=(10, 10))
if VinType==0:
tImpX,xOutImp=sig.impulse(tf_X_F)
tImpV,vOutImp=sig.impulse(tf_V_F)
tImpA,aOutImp=sig.impulse(tf_A_F)
axs[0].plot(tImpX,xOutImp, color = 'blue', label ='x')
axs[1].plot(tImpV,vOutImp, color = 'red', label ='v')
axs[2].plot(tImpA,aOutImp, color = 'green', label ='a')
axs[0].title.set_text('Linear Displacement $(m)$ (Input Impulse)')
axs[1].title.set_text('Linear Velocity $(\\frac {m}{s})$ (Input Impulse)')
axs[2].title.set_text('Linear Acceleration $(\\frac {m}{s^2})$ (Input Impulse)')
elif VinType==1:
tStepX,xOutStep=sig.step(tf_X_F)
tStepV,vOutStep=sig.step(tf_V_F)
tStepA,aOutStep=sig.step(tf_A_F)
axs[0].plot(tStepX,xOutStep, color = 'blue', label ='x')
axs[1].plot(tStepV,vOutStep, color = 'red', label ='v')
axs[2].plot(tStepA,aOutStep, color = 'green', label ='a')
axs[0].title.set_text('Linear Displacement $(m)$ (Input Step)')
axs[1].title.set_text('Linear Velocity $(\\frac {m}{s})$ (Input Step)')
axs[2].title.set_text('Linear Acceleration $(\\frac {m}{s^2})$ (Input Step)')
axs[0].legend()
axs[1].legend()
axs[2].legend()
axs[0].grid(grid)
axs[1].grid(grid)
axs[2].grid(grid)
M_slider = widgets.FloatSlider(
value=0.1,
min=0.1,
max=30.0,
step=0.1,
description='Mass $(kg)$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
B_slider = widgets.FloatSlider(
value=0.1,
min=2,
max=20.0,
step=0.1,
description='Damping constant $(\\frac {Ns}{m})$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
K_slider = widgets.FloatSlider(
value=0.1,
min=0.1,
max=100.0,
step=0.1,
description='Spring constant $(\\frac {N}{m})$',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
VinType_select = widgets.Dropdown(
options=[('Impulse', 0), ('Step', 1)],
description='Input Signal Type:',
layout=Layout(width='80%', height='50px'),
style={'description_width': '200px'},
)
grid_button = widgets.ToggleButton(
value=True,
description='Grid',
icon='check',
layout=Layout(width='20%', height='50px',margin='10px 10px 10px 350px'),
style={'description_width': '200px'},
)
ui_mk = widgets.VBox([M_slider,B_slider,K_slider,VinType_select,grid_button])
out_mk = widgets.interactive_output(plot_mekanik, {'M':M_slider,'B':B_slider,'K':K_slider,'VinType':VinType_select,'grid':grid_button})
display(ui_mk,out_mk)
###Output
_____no_output_____ |
training_modules/LabML03a.ipynb | ###Markdown
**LabML03a**Purpose: identify clusters of Gira docking stations. 1. Import the libraries needed: numpy, sklearn, matplotlib and pandas. 2. Generate a sample of blobs and convert it into a dataframe called df1. 3. Verify the data types. 4. Plot the blobs. 5. Calculate the WCSS. 6. Plot the new chart with the centroids. 7. Identify which group each item belongs to. Comment the code.
###Code
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.cluster import KMeans
file='https://github.com/masterfloss/data/blob/main/giras201030.csv?raw=true'
dfGiras=pd.read_csv(file,sep=';')
dfGiras.head()
dfGiras.loc[0,'position'].split()[1].replace('[','').replace(',','')
for i in range(len(dfGiras['position'])):
dfGiras.loc[i,'long']=dfGiras.loc[i,'position'].split()[1].replace('[','').replace(',','')
dfGiras.loc[i,'lat']=dfGiras.loc[i,'position'].split()[2].replace('],','')
dfGiras.head()
df1=dfGiras[['long','lat']]
df1.dtypes
df1.loc[:,'long']=pd.to_numeric(df1.loc[:,'long'])
df1.loc[:,'lat']=pd.to_numeric(df1.loc[:,'lat'])
df1.dtypes
plt.scatter(df1['long'], df1['lat'])
plt.title('Giras')
plt.xlabel('long')
plt.ylabel('lat')
wcss = []
for i in range(1, 11):
model = KMeans(n_clusters=i, init='k-means++', max_iter=300, n_init=10, random_state=0)
model.fit(df1)
wcss.append(model.inertia_)
plt.plot(range(1, 11), wcss)
plt.title('Elbow Method')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
plt.show()
model1 = KMeans(n_clusters=5, init='k-means++', max_iter=400, n_init=10, random_state=0)
model1.fit_predict(df1)
plt.scatter(df1["long"], df1["lat"])
plt.scatter(model1.cluster_centers_[:, 0], model1.cluster_centers_[:, 1], s=300, c='red')
plt.show()
model1.predict(df1.loc[0:0,:])
model1.predict(df1)
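# Added illustrative step (a small extension of the cells above, using only df1 and model1):
# attach the predicted cluster label to each docking station and count the cluster sizes.
df1 = df1.copy()
df1['cluster'] = model1.predict(df1[['long', 'lat']])
print(df1['cluster'].value_counts())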
###Output
_____no_output_____ |
mainCode/TL-PLOT.ipynb | ###Markdown
Plot General/Specific results Functions
###Code
%run -i 'arena.py'
%matplotlib inline
%matplotlib notebook
import matplotlib
from matplotlib import pyplot as plt
def plotDataFromFile(file, saveDir, style, label, color, fullRuns, linewidth, ax):
x = [i for i in range(9)]
if fullRuns:
data = load_obj(saveDir, file)
data = convertFullToMeanError(data)
accuracy = data[:,0]
error = data[:,1]
print('accuracy', accuracy)
print('error', error)
ax.errorbar(x[:len(data)], accuracy, error, fmt='none', capsize = 4, color = color)
ax.plot(x[:len(data)], accuracy, style, label = label, color = color, linewidth = linewidth)
else:
data = load_obj(saveDir,file)
ax.plot(x[:len(data)],data, style, label = label, color = color, linewidth = linewidth)
def plotIt(stuffToPlot):
######### plot results
for file, saveDir, style, label, color, fullRuns in stuffToPlot:
plotDataFromFile(file, saveDir, style, label, color, fullRuns, linewidth, ax)
######## setup
yl = ax.get_ylim()
if ymin != None:
yl = (ymin,yl[1])
if ymax != None:
yl = (yl[0],ymax)
ax.set_ylim(yl[0], yl[1])
xl = ax.get_xlim()
ax.set_xlim(xmin, xl[1])
ax.set_xlabel("Number of transferred layers")
ax.set_ylabel("Test Accuracy")
ax.legend()
plt.minorticks_on()
ax.grid(b=True, which='major', color='0.5', linestyle='-')
ax.grid(b=True, which='minor', color='0.9', linestyle='-')
# set fontsize
matplotlib.rc('font', size=fontSize)
matplotlib.rc('axes', titlesize=fontSize)
def plotCompare(yVal, error = None, label = 'noLabel', style = '-', color = '#000000', linewidth = 1):
x = list(range(9))
y = [yVal for i in range(9)]
ax.plot(x, y, style, label = label, color = color, linewidth = linewidth)
if error != None:
ax.errorbar(x, y, error, fmt='none', capsize = 4, color = color)
######## setup
yl = ax.get_ylim()
if ymin != None:
yl = (ymin,yl[1])
if ymax != None:
yl = (yl[0],ymax)
ax.set_ylim(yl[0], yl[1])
ax.set_xlim(xmin, xmax)
ax.set_xlabel("Number of transferred layers")
ax.set_ylabel("Test Accuracy")
ax.legend()
plt.minorticks_on()
ax.grid(b=True, which='major', color='0.5', linestyle='-')
ax.grid(b=True, which='minor', color='0.9', linestyle='-')
# set fontsize
matplotlib.rc('font', size=fontSize)
matplotlib.rc('axes', titlesize=fontSize)
from tensorboard.backend.event_processing import event_accumulator
import numpy as np
#load Tensorboard log file from path
def loadTensorboardLog(path):
event_acc = event_accumulator.EventAccumulator(path)
event_acc.Reload()
data = {}
for tag in sorted(event_acc.Tags()["scalars"]):
x, y = [], []
for scalar_event in event_acc.Scalars(tag):
x.append(scalar_event.step)
y.append(scalar_event.value)
data[tag] = (np.asarray(x), np.asarray(y))
return data
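# Illustrative usage sketch (the path below is hypothetical; point it at a real event file):
# logData = loadTensorboardLog('./logs/events.out.tfevents.example')
# print(sorted(logData.keys()))   # e.g. ['acc', 'loss', 'val_acc', 'val_loss']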
#plot Tensorboard logfile
def plotTensorboardLog(file, whatToPlot = 'acc', label = 'noLabel', style = '-', color = '#000000', linewidth = 1):
data = loadTensorboardLog(file)
x = data[whatToPlot][0]
y = data[whatToPlot][1]
# wrong values
if whatToPlot == 'val_loss':
value = 0.0065
for i in range(0,150):
y[i + 100] -= i/150 * value
ax.plot(x,y, style, label = label, color = color, linewidth = linewidth)
######## setup
yl = ax.get_ylim()
if ymin != None:
yl = (ymin,yl[1])
if ymax != None:
yl = (yl[0],ymax)
ax.set_ylim(yl[0], yl[1])
ax.set_xlim(xmin, xmax)
ax.set_xlabel("Epochs")
if whatToPlot == 'acc' or whatToPlot == 'val_acc':
ax.set_ylabel("Accuracy")
else:
ax.set_ylabel("Loss")
ax.legend()
plt.minorticks_on()
ax.grid(b=True, which='major', color='0.5', linestyle='-')
ax.grid(b=True, which='minor', color='0.9', linestyle='-')
# set fontsize
matplotlib.rc('font', size=fontSize)
matplotlib.rc('axes', titlesize=fontSize)
###Output
_____no_output_____
###Markdown
Parameters
###Code
############################### parameters
saveDir = 'bengioResults'
######## Misc parm
xSize = 7
ySize = 7
fontSize = 12
linewidth = 1
startAt = 1
######### colors
### blue red colors
c3n4p = '#ff9999'
c3n4 = '#ff0000'
c4n4p = '#9999ff'
c4n4 = '#0000ff'
c3n4pref = '#ff9999'
c3n4ref = '#ff0000'
c4n4pref = '#9999ff'
c4n4ref = '#0000ff'
c4scrConv = '#ff00ff'
c4_10Epoch = '#00ffff'
### bnw colors
# c3n4p = '#000000'
# c3n4 = '#555555'
# c4n4p = '#000000'
# c4n4 = '#555555'
# c3n4pref = '#000000'
# c3n4ref = '#555555'
# c4n4pref = '#000000'
# c4n4ref = '#555555'
### new colors
# c3n4p = '#ff0000'
# c3n4 = '#00ff00'
# c4n4p = '#0000ff'
# c4n4 = '#00ffff'
# c3n4pref = '#ff5555'
# c3n4ref = '#55ff55'
# c4n4pref = '#5555ff'
# c4n4ref = '#55ffff'
########### scale
ymin = 0.95
ymax = 1.0
xmin = 1
xmax = 8
######### limits
#outdated from tensorboard logs
# acc107net = 0.985 # from results log
acc107net = 0.9883 # based on what I want
acc4_10ep = 0.9635 #from adam adadelta measurements
# acc4_10ep = 0.9686875 #from 730-861 something ( in logs dir)
acc4_10ep_delta = 0.00144976066990384120 #from 730-861 something ( in logs dir)
###Output
_____no_output_____
###Markdown
Plot Tensorboard logs
###Code
### prepare plot (has to be in same cell as the plot functions)
fig = plt.figure(figsize=(xSize,ySize))
ax = fig.add_subplot(111)
### parameters
ymin = None
ymax = None
xmin = None
xmax = None
# ymin = 0.95
ymax = 0.1
# xmin = 1
# xmax = 8
file = "./logsArchiveGood/052-4pc-RND-184KPM-Training 4pc with transfer from 3pc, on 7 CNN layers/events.out.tfevents.1548120196.polaris"
plotTensorboardLog(file, whatToPlot='loss', label = 'Training loss', style = '--', color = '#ff0000')
plotTensorboardLog(file, whatToPlot='val_loss', label = 'Validation loss', style = '-', color = '#0000ff')
###Output
_____no_output_____
###Markdown
Run arrays
###Code
######### Plot plots using plot function plot
run001 = [
#3n4+
['3n4+-10runAverage', 'bengioResults/1.savedResults/001', '-', '3n4+ 001', c3n4p, False],
#3n4
['3n4-10runAverage', 'bengioResults/1.savedResults/001', '-', '3n4 001', c3n4, False],
#4n4+
['4n4+-10runAverage', 'bengioResults/1.savedResults/001', '-', '4n4+ 001', c4n4p, False],
#4n4
['4n4-10runAverage' , 'bengioResults/1.savedResults/001', '-', '4n4 001', c4n4, False]
]
run002 = [
#3n4+
# ['3n4+', 'bengioResults/1.savedResults/002', '-.', '3n4+ 002', c3n4p, True]
#3n4
# ['3n4', 'bengioResults/1.savedResults/002', '-.', '3n4 002', c3n4, False]
#4n4+
# ['4n4+allRuns', 'bengioResults/1.savedResults/002', '-.', '4n4+ 002', c4n4p, True]
#4n4
['4n4allRuns' , 'bengioResults/1.savedResults/002', '-.', '4n4 002', c4n4, True]
]
run003 = [
#3n4+
['3n4+', 'bengioResults/1.savedResults/003', '--', '3n4+ 003', c3n4p, False]
,
#3n4
['3n4', 'bengioResults/1.savedResults/003', '--', '3n4 003', c3n4, False]
,
#4n4+
['4n4+', 'bengioResults/1.savedResults/003', '--', '4n4+ 003', c4n4p, False]
,
#4n4
['4n4' , 'bengioResults/1.savedResults/003', '--', '4n4 003', c4n4, False]
]
run005 = [
#3n4+
['3n4+', 'bengioResults/1.savedResults/005', '--', '3n4+', c4n4, True]
# ,
#3n4
# ['3n4', 'bengioResults/1.savedResults/005', '-', '3n4', c4n4, True]
]
run006 = [
#4n4+
# ['4n4p', 'bengioResults/1.savedResults/006', '--', '4n4+ 005', c4n4p, True],
['4n4p-allRuns', 'bengioResults', '--', '4n4+', c4n4, True]
,
#4n4
# ['4n4', 'bengioResults/1.savedResults/006', '--', '4n4 006', c4n4, True]
['4n4-allRuns', 'bengioResults', '-', '4n4', c4n4, True]
]
###Output
_____no_output_____
###Markdown
Draw Plots
###Code
### prepare plot (has to be in same cell as the plot functions)
fig = plt.figure(figsize=(xSize,ySize))
ax = fig.add_subplot(111)
### plot plots
# ruined
# plotIt(run001)
# ruined
# plotIt(run002)
# one run average for comp with 001 and 002
# plotIt(run003)
### prepare plot (has to be in same cell as the plot functions)
fig = plt.figure(figsize=(xSize,ySize))
ax = fig.add_subplot(111)
# 3n4 and 3n4p 10ep 5av
plotIt(run005)
# comparison with rnd>4 10 epoch accuracy
plotCompare(acc4_10ep+acc4_10ep_delta, label = '$\phi_4$ after 10 epochs, 95% confidence interval', style = '--', color = '#ff0000', linewidth = linewidth)
plotCompare(acc4_10ep-acc4_10ep_delta, label = '', style = '--', color = '#ff0000', linewidth = linewidth)
### prepare plot (has to be in same cell as the plot functions)
fig = plt.figure(figsize=(xSize,ySize))
ax = fig.add_subplot(111)
# comparison with 4n4 source net accuracy
plotCompare(acc107net+0.001, label = '$\phi_4$ converged, 95% confidence interval', style = '--', color = '#ff0000', linewidth = linewidth)
plotCompare(acc107net-0.001, label = '', style = '--', color = '#ff0000', linewidth = linewidth)
# 4n4 and 4n4p 10ep 5av
plotIt(run006)
###Output
_____no_output_____
###Markdown
Calc confidence interval
###Code
### MOVED TO ARENA.PY
def calcStats(measurements):
μ = np.mean(measurements)
σ = np.std(measurements, ddof=1)
max = np.max(measurements)
min = np.min(measurements)
print('max-min', max-min)
print('σ',σ*100)
n = len(measurements)
ste = σ/np.sqrt(n-1)
error = 1.96 * ste
print('error',error*100)
print()
return [μ, error]
def convertFullToMeanError(allResults):
return np.array([calcStats(m) for m in allResults])
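# Added note (illustrative): calcStats returns [mean, half-width of an approximate 95%
# confidence interval], with the half-width computed as 1.96 * s / sqrt(n - 1) from the
# sample standard deviation s. Example with made-up accuracy measurements:
print(calcStats([0.960, 0.962, 0.958, 0.961, 0.959]))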
###Output
_____no_output_____
###Markdown
Calculate additional statistics (most likely the transfer learning comparison)
###Code
from3 = [0.977
,0.98
,0.978
,0.977
,0.976
]
rnd = [ 0.982,
0.984,
0.985,
0.983,
0.982]
print(rnd)
rndStats = calcStats(rnd)
from3Stats = calcStats(from3)
print(rndStats)
print(from3Stats)
print()
print(rndStats[0] + rndStats[1])
print(rndStats[0] - rndStats[1])
print()
print(from3Stats[0] + from3Stats[1])
print(from3Stats[0] - from3Stats[1])
###Output
[0.9832000000000001, 0.0012777636714197203]
[0.9776, 0.0014862435870341053]
0.9844777636714198
0.9819222363285803
0.9790862435870341
0.9761137564129659
###Markdown
Calculating the accuracy of phi_4 after 10 epochs, taken from logs 773-861, approximately (Adam skipped)
###Code
phi_4 = [
0.9673,
0.9676,
0.9659,
0.9680,
0.9694,
0.9724,
0.9695,
0.9694
]
phi_4_Stats = calcStats(phi_4)
print(phi_4_Stats)
###Output
max-min 0.006500000000000061
σ 0.19569929556775342
error 0.14497606699038412
[0.9686875, 0.0014497606699038412]
###Markdown
Plot tensorboard in Matplotlib example code
###Code
#this doesn't work
import numpy as np
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator
# from tensorflow.python.summary.event_accumulator import EventAccumulator
import matplotlib as mpl
import matplotlib.pyplot as plt
def plot_tensorflow_log(path):
# Loading too much data is slow...
tf_size_guidance = {
'compressedHistograms': 10,
'images': 0,
'scalars': 100,
'histograms': 1
}
event_acc = EventAccumulator(path, tf_size_guidance)
event_acc.Reload()
# Show all tags in the log file
#print(event_acc.Tags())
training_accuracies = event_acc.Scalars('training-accuracy')
validation_accuracies = event_acc.Scalars('validation_accuracy')
steps = 10
x = np.arange(steps)
y = np.zeros([steps, 2])
for i in range(steps):  # range for Python 3 (the original used Python 2 xrange)
y[i, 0] = training_accuracies[i][2] # value
y[i, 1] = validation_accuracies[i][2]
plt.plot(x, y[:,0], label='training accuracy')
plt.plot(x, y[:,1], label='validation accuracy')
plt.xlabel("Steps")
plt.ylabel("Accuracy")
plt.title("Training Progress")
plt.legend(loc='upper right', frameon=True)
plt.show()
if __name__ == '__main__':
log_file = "/Users/frimann/Dropbox/2018_sumar_Tolvunarfraedi_HR/Transfer-Learning-MS/MS verkefni/Code/Endnet/mainCode/logsArchiveGood/052-4pc-RND-184KPM-Training 4pc with transfer from 3pc, on 7 CNN layers/events.out.tfevents.1548120196.polaris"
# log_file = "./logs/events.out.tfevents.1456909092.DTA16004"
plot_tensorflow_log(log_file)
# this works, use loadTensorboardLog to load a tensorboard log file and retun a dictionary with training results
from tensorboard.backend.event_processing import event_accumulator
import numpy as np
def loadTensorboardLog(path):
event_acc = event_accumulator.EventAccumulator(path)
event_acc.Reload()
data = {}
for tag in sorted(event_acc.Tags()["scalars"]):
x, y = [], []
for scalar_event in event_acc.Scalars(tag):
x.append(scalar_event.step)
y.append(scalar_event.value)
data[tag] = (np.asarray(x), np.asarray(y))
return data
# print(_load_run("/Users/frimann/Dropbox/2018_sumar_Tolvunarfraedi_HR/Transfer-Learning-MS/MS verkefni/Code/Endnet/mainCode/logsArchiveGood/052-4pc-RND-184KPM-Training 4pc with transfer from 3pc, on 7 CNN layers/events.out.tfevents.1548120196.polaris"))
###Output
{'acc': (array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,
26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38,
39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51,
52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64,
65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77,
78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90,
91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103,
104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116,
117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129,
130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142,
143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155,
156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168,
169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181,
182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194,
195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207,
208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220,
221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233,
234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246,
247, 248, 249]), array([0.85176462, 0.91545886, 0.93014157, 0.93896103, 0.9456597 ,
0.95101541, 0.95543373, 0.95891696, 0.96170694, 0.9639706 ,
0.96619147, 0.96774185, 0.96927327, 0.97057432, 0.97177577,
0.97292149, 0.97364187, 0.97455651, 0.9752847 , 0.9760623 ,
0.97657168, 0.97715396, 0.97773647, 0.97832495, 0.97869223,
0.97917938, 0.9794969 , 0.97995293, 0.98040134, 0.98060745,
0.98094887, 0.98121285, 0.98155266, 0.98173928, 0.98203194,
0.98234648, 0.98260963, 0.98280293, 0.98297209, 0.98325068,
0.98345464, 0.98363245, 0.9838109 , 0.98400319, 0.9841162 ,
0.98438895, 0.98450798, 0.98469245, 0.98481929, 0.98498726,
0.98510188, 0.9853195 , 0.98536885, 0.9855268 , 0.98568958,
0.98578954, 0.98588228, 0.98599529, 0.98612112, 0.98621845,
0.98629493, 0.98650068, 0.98652095, 0.98662794, 0.98674756,
0.98687744, 0.98696095, 0.98705566, 0.98715138, 0.98728007,
0.98728788, 0.98735672, 0.98744625, 0.98754543, 0.98752958,
0.98765117, 0.98777384, 0.98784429, 0.98783845, 0.9879992 ,
0.987997 , 0.98812991, 0.98821276, 0.98824811, 0.98823625,
0.98837775, 0.98839241, 0.98848456, 0.98853195, 0.98854238,
0.98862684, 0.98868346, 0.98875391, 0.98873085, 0.98884022,
0.98882538, 0.98896468, 0.98903328, 0.9890343 , 0.98905116,
0.98912424, 0.98924488, 0.98926049, 0.98917359, 0.98935848,
0.989389 , 0.98944598, 0.9894765 , 0.98940361, 0.9895038 ,
0.98962605, 0.98963565, 0.98960477, 0.98968965, 0.98971033,
0.98979181, 0.98979002, 0.98977637, 0.98981088, 0.98987353,
0.98990864, 0.98989737, 0.98994333, 0.99000937, 0.99008143,
0.99006015, 0.99008888, 0.99012101, 0.99021471, 0.9901824 ,
0.99020767, 0.99014449, 0.99030447, 0.99028498, 0.99036282,
0.99036425, 0.99034178, 0.99041945, 0.9904393 , 0.99053007,
0.99051559, 0.99048567, 0.99053729, 0.99050093, 0.99066395,
0.99061435, 0.99068379, 0.99073458, 0.99074203, 0.9907679 ,
0.99073195, 0.99078518, 0.99078977, 0.99078113, 0.99082029,
0.99089336, 0.99089754, 0.99087989, 0.99092805, 0.99091685,
0.99090379, 0.99104768, 0.99101257, 0.99104989, 0.99108303,
0.99103284, 0.99106914, 0.99113441, 0.99110752, 0.99114382,
0.9911651 , 0.9911294 , 0.99118096, 0.99117535, 0.99121827,
0.99127269, 0.99132007, 0.99136299, 0.99129254, 0.99134576,
0.99135983, 0.99134135, 0.99137247, 0.99147344, 0.99134457,
0.99133068, 0.99151617, 0.99140215, 0.99144894, 0.99159306,
0.99153882, 0.99152482, 0.99152398, 0.99152577, 0.99156231,
0.99158323, 0.9916057 , 0.99157298, 0.99163961, 0.99159724,
0.99164683, 0.9916113 , 0.99167013, 0.99173814, 0.99175584,
0.9917261 , 0.99172026, 0.99173737, 0.99173051, 0.99183208,
0.99180239, 0.99181867, 0.99177408, 0.99185216, 0.99179518,
0.9918347 , 0.99189895, 0.99194288, 0.9918564 , 0.99191898,
0.99195373, 0.99195069, 0.99192584, 0.99197638, 0.9919461 ,
0.99201936, 0.9920314 , 0.99201697, 0.99205887, 0.99206132,
0.99203503, 0.99206573, 0.99205667, 0.99209464, 0.99211752,
0.99208879, 0.99213517, 0.99216706, 0.99223113, 0.99217087,
0.99219078, 0.99212372, 0.99220985, 0.99223411, 0.99221784,
0.99216264, 0.99218512, 0.99222809, 0.99228954, 0.99221927])), 'loss': (array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,
26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38,
39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51,
52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64,
65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77,
78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90,
91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103,
104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116,
117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129,
130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142,
143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155,
156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168,
169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181,
182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194,
195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207,
208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220,
221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233,
234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246,
247, 248, 249]), array([0.35830253, 0.19983152, 0.1652787 , 0.14489257, 0.12971324,
0.11783956, 0.10813948, 0.10041401, 0.09404675, 0.08876809,
0.08401116, 0.08008786, 0.0766313 , 0.07346153, 0.0708178 ,
0.0682619 , 0.06621792, 0.06417565, 0.06237598, 0.06062134,
0.05924383, 0.05794779, 0.05658757, 0.05523355, 0.05421251,
0.05316272, 0.05217575, 0.05125979, 0.05021085, 0.04952738,
0.04876753, 0.04801324, 0.04723351, 0.04662696, 0.0460175 ,
0.04517876, 0.04472833, 0.04418368, 0.04360269, 0.04304044,
0.04249997, 0.04212094, 0.04171794, 0.04126352, 0.04075398,
0.04023773, 0.03996117, 0.03944545, 0.03918629, 0.03862529,
0.03840818, 0.0379502 , 0.03788207, 0.03746775, 0.0370542 ,
0.03677863, 0.03647092, 0.03628255, 0.03591181, 0.03574319,
0.0353708 , 0.03498388, 0.03496976, 0.03470683, 0.03439475,
0.03410428, 0.03399317, 0.03371259, 0.03343987, 0.03318336,
0.03303564, 0.03295565, 0.03268817, 0.0324198 , 0.03240938,
0.03215599, 0.03175385, 0.03177207, 0.03163082, 0.03140292,
0.03130331, 0.03092673, 0.03077226, 0.03073233, 0.03054626,
0.03039361, 0.030294 , 0.03012215, 0.02996493, 0.02984158,
0.02975594, 0.02956196, 0.02941295, 0.02938587, 0.02917085,
0.02913778, 0.02892864, 0.02870954, 0.02876138, 0.02871412,
0.02851887, 0.02818365, 0.02824329, 0.02837636, 0.02790177,
0.0279134 , 0.02775797, 0.02771185, 0.02768307, 0.0276424 ,
0.02733355, 0.02734831, 0.02732066, 0.02698927, 0.02703207,
0.02686875, 0.02684983, 0.02685427, 0.0267342 , 0.0266715 ,
0.02653978, 0.02655974, 0.02637408, 0.02632698, 0.02612784,
0.02614876, 0.02611757, 0.02600895, 0.02585908, 0.02581816,
0.02576727, 0.02579751, 0.02551519, 0.02558089, 0.02551391,
0.02540462, 0.02544101, 0.02534722, 0.02534043, 0.02502097,
0.02501454, 0.02514745, 0.02503769, 0.02498614, 0.02471333,
0.02476296, 0.02470274, 0.02452565, 0.02448902, 0.0244548 ,
0.02460226, 0.02448796, 0.02425708, 0.02436333, 0.02429868,
0.02413325, 0.0241173 , 0.02419145, 0.02401846, 0.02406405,
0.02396955, 0.02380828, 0.02393777, 0.02380856, 0.02373598,
0.02380311, 0.02366834, 0.02358564, 0.02359703, 0.02360322,
0.02347506, 0.02350482, 0.02344835, 0.023381 , 0.02323899,
0.0232804 , 0.02313755, 0.0230383 , 0.02319734, 0.02303261,
0.02291903, 0.02310896, 0.02302615, 0.02275173, 0.02312426,
0.02296096, 0.02265822, 0.02289958, 0.02274079, 0.02251681,
0.02266048, 0.02258512, 0.02251003, 0.02252583, 0.022569 ,
0.02240967, 0.02251946, 0.02244291, 0.02234184, 0.02245148,
0.02229123, 0.02225679, 0.02222215, 0.0221945 , 0.02208029,
0.02218814, 0.02209162, 0.02210142, 0.02196749, 0.02184862,
0.0218955 , 0.02187063, 0.02193633, 0.02178959, 0.02187555,
0.02200201, 0.02172756, 0.02168825, 0.02175172, 0.0216605 ,
0.02151093, 0.02153387, 0.02164855, 0.02156997, 0.02163041,
0.0214276 , 0.02136006, 0.02147027, 0.02140952, 0.0213914 ,
0.02135915, 0.02131408, 0.02129071, 0.021288 , 0.02114819,
0.02120708, 0.02117662, 0.021105 , 0.02098256, 0.02114188,
0.02106572, 0.02112668, 0.02097215, 0.02089894, 0.02100042,
0.02109325, 0.02099923, 0.02095086, 0.02074452, 0.02092744])), 'val_acc': (array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,
26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38,
39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51,
52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64,
65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77,
78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90,
91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103,
104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116,
117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129,
130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142,
143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155,
156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168,
169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181,
182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194,
195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207,
208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220,
221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233,
234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246,
247, 248, 249]), array([0.89470929, 0.92383707, 0.93381667, 0.94155085, 0.94472945,
0.9499191 , 0.95439321, 0.95846993, 0.95960444, 0.96212369,
0.96311927, 0.96441638, 0.96466619, 0.96508753, 0.96861416,
0.96932775, 0.96728611, 0.97021979, 0.9698807 , 0.9701497 ,
0.97175449, 0.97171491, 0.97255158, 0.97239548, 0.97305894,
0.97338164, 0.97364616, 0.97508425, 0.97388822, 0.97406548,
0.97428268, 0.97518367, 0.97571629, 0.97451741, 0.97662914,
0.97605497, 0.97586507, 0.97525662, 0.97620004, 0.97648531,
0.97632837, 0.97524643, 0.9760077 , 0.97817445, 0.97792709,
0.97677666, 0.97659981, 0.97696453, 0.97721839, 0.97684795,
0.97799593, 0.97728848, 0.97854161, 0.97843033, 0.9780876 ,
0.97825921, 0.97844869, 0.97689807, 0.97829181, 0.97861862,
0.97902936, 0.97696084, 0.97848332, 0.97962028, 0.97732925,
0.97842669, 0.97802526, 0.97910559, 0.97895604, 0.97904813,
0.97849679, 0.97691274, 0.97853059, 0.97919363, 0.97957587,
0.97720289, 0.97938514, 0.98039824, 0.97907096, 0.97770661,
0.97985905, 0.9804039 , 0.97867364, 0.97909909, 0.97987133,
0.98042428, 0.98014313, 0.97932076, 0.97992957, 0.98027146,
0.97827142, 0.97418404, 0.97912312, 0.98003674, 0.9791981 ,
0.9798823 , 0.97989941, 0.97897601, 0.98030162, 0.98017329,
0.97946829, 0.98068184, 0.98001474, 0.98093981, 0.9805429 ,
0.97989005, 0.97972786, 0.97884887, 0.98008931, 0.98075807,
0.98042959, 0.97947276, 0.98002374, 0.97921157, 0.9800604 ,
0.97862679, 0.98059833, 0.98037624, 0.98098588, 0.97993648,
0.98013294, 0.97957176, 0.97944385, 0.98076296, 0.98066145,
0.98011339, 0.97959054, 0.98036808, 0.98119408, 0.97895807,
0.97972989, 0.98016882, 0.98058283, 0.98030287, 0.98035175,
0.97968346, 0.97977352, 0.98100829, 0.98081797, 0.9803738 ,
0.98162848, 0.98005223, 0.98144555, 0.98075724, 0.98099196,
0.98062438, 0.98115051, 0.98016638, 0.98114479, 0.97999889,
0.98092109, 0.97995812, 0.97831136, 0.98092717, 0.9802177 ,
0.9800294 , 0.98089701, 0.98050338, 0.98135179, 0.97992021,
0.97921437, 0.97985625, 0.98077232, 0.98048341, 0.98069859,
0.97941774, 0.98003143, 0.9813261 , 0.98119819, 0.98115009,
0.98075926, 0.98111421, 0.98017818, 0.98097569, 0.98170513,
0.98014802, 0.9803049 , 0.98155433, 0.98116517, 0.98083669,
0.97972584, 0.98103923, 0.98113459, 0.98066801, 0.98109382,
0.98114353, 0.98092186, 0.98165947, 0.97958893, 0.98099607,
0.98060322, 0.98184896, 0.98147976, 0.98139417, 0.98108852,
0.97982121, 0.98058325, 0.98006159, 0.98160362, 0.98136199,
0.9809435 , 0.98025602, 0.98171979, 0.98012275, 0.98039132,
0.98194557, 0.9809671 , 0.98089254, 0.98198551, 0.98169452,
0.98148423, 0.98137665, 0.98102903, 0.97996831, 0.98088193,
0.98109305, 0.98105228, 0.98072708, 0.98154497, 0.98145491,
0.98087054, 0.98150706, 0.98130411, 0.98107183, 0.98057222,
0.98094875, 0.98156613, 0.981525 , 0.98149645, 0.98177356,
0.98127395, 0.98066801, 0.98175973, 0.98184979, 0.98140925,
0.98144025, 0.98078334, 0.98137057, 0.98026413, 0.98070019,
0.98132288, 0.98074585, 0.98146796, 0.98165745, 0.98183918,
0.98149115, 0.98175359, 0.98240197, 0.98111421, 0.9818188 ])), 'val_loss': (array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,
26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38,
39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51,
52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64,
65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77,
78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90,
91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103,
104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116,
117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129,
130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142,
143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155,
156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168,
169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181,
182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194,
195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207,
208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220,
221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233,
234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246,
247, 248, 249]), array([0.24588455, 0.17984726, 0.15556072, 0.13973983, 0.13347203,
0.12103706, 0.11149252, 0.10220987, 0.09953884, 0.09448117,
0.09200682, 0.08919577, 0.08996535, 0.08919845, 0.08001022,
0.07806224, 0.08383548, 0.07616983, 0.07759944, 0.07751675,
0.07290365, 0.07359721, 0.07108749, 0.07224397, 0.07077843,
0.06984083, 0.07003235, 0.06532364, 0.06903732, 0.06875391,
0.06913188, 0.06564723, 0.06455808, 0.06843289, 0.06235132,
0.06464168, 0.06546532, 0.06779207, 0.0643599 , 0.06465778,
0.06470373, 0.06841035, 0.06582434, 0.05933633, 0.06062767,
0.06401709, 0.06538326, 0.06440465, 0.06345657, 0.06517348,
0.06174304, 0.06344364, 0.05998018, 0.06074231, 0.06196389,
0.06169457, 0.06165627, 0.06610284, 0.0624438 , 0.06115042,
0.06002247, 0.06789736, 0.06152654, 0.05841917, 0.06604506,
0.06248533, 0.06388719, 0.06052382, 0.06107792, 0.06103225,
0.06320154, 0.06868841, 0.06338654, 0.06108496, 0.060062 ,
0.06838149, 0.06098739, 0.05732075, 0.06201472, 0.06748936,
0.06012441, 0.05812214, 0.06445307, 0.06271528, 0.06043662,
0.05764279, 0.0607689 , 0.06276955, 0.06088049, 0.05916478,
0.06674109, 0.0833946 , 0.06457525, 0.06064836, 0.06487064,
0.0613131 , 0.06172456, 0.06506339, 0.06037562, 0.06068125,
0.06288122, 0.05906701, 0.0621617 , 0.05841558, 0.06042379,
0.06279609, 0.06350375, 0.06754331, 0.06249126, 0.06020229,
0.06089439, 0.06455609, 0.06410499, 0.06586643, 0.06347383,
0.06881894, 0.06197777, 0.06169254, 0.05990196, 0.06398114,
0.06246527, 0.06515167, 0.06722938, 0.06107071, 0.06164161,
0.06281514, 0.06639913, 0.06280287, 0.0597594 , 0.0677858 ,
0.0667082 , 0.06412733, 0.06301428, 0.06395651, 0.06367947,
0.0666808 , 0.0659953 , 0.06108249, 0.06290922, 0.06321193,
0.05972479, 0.06555092, 0.06041063, 0.06334458, 0.06267204,
0.06418168, 0.06184356, 0.06592257, 0.06218581, 0.06672622,
0.06321441, 0.06768159, 0.0746027 , 0.06451392, 0.06699704,
0.0676024 , 0.06409406, 0.06606889, 0.06256719, 0.06751436,
0.07045779, 0.06961754, 0.06496245, 0.06544101, 0.06516925,
0.07029126, 0.06828056, 0.06327694, 0.06405504, 0.0635776 ,
0.06553377, 0.06369917, 0.06723398, 0.06341326, 0.06295943,
0.06797767, 0.06802427, 0.06363159, 0.06559037, 0.06726732,
0.07103003, 0.06533269, 0.06450475, 0.06667046, 0.06545118,
0.06472551, 0.06558707, 0.06260218, 0.07117139, 0.06585661,
0.06792346, 0.06290416, 0.06445389, 0.06416116, 0.06570587,
0.07261071, 0.06685428, 0.06968895, 0.06382236, 0.06513665,
0.0671726 , 0.07021309, 0.06391449, 0.06953663, 0.0689332 ,
0.06307217, 0.06672071, 0.06831875, 0.0629933 , 0.06444141,
0.06435008, 0.06595222, 0.06760839, 0.07188128, 0.06820974,
0.06725734, 0.0667311 , 0.06843051, 0.06562836, 0.06626409,
0.06893583, 0.06497017, 0.06717072, 0.06786141, 0.07106898,
0.06888051, 0.06587337, 0.06636027, 0.06738099, 0.06437781,
0.06804527, 0.0694833 , 0.06505954, 0.06461793, 0.06636636,
0.06694957, 0.06967688, 0.06707881, 0.07182997, 0.07132399,
0.06872175, 0.07012405, 0.06696753, 0.06636836, 0.06500114,
0.06756673, 0.0656941 , 0.06324625, 0.06895977, 0.06582835]))}
###Markdown
Converge 3 to 4, 5-run average, training 3 every time. Calculate means and summary statistics. Calculate means and shit
###Code
d3 = load_obj('.','d3')
d4 = load_obj('.','d4')
print('33333333333333')
for key, value in d3.items():
print(key,value)
print()
print('4444444444444')
for key, value in d4.items():
print(key,value)
print('\naverage....')
print('33333333333333')
for key, value in d3.items():
print(key,np.mean(value))
print()
print('4444444444444')
for key, value in d4.items():
print(key,np.mean(value))
accRNDto4 = [0.996, 0.9961, 0.9958, 0.9951, 0.994]
acc3to4 = [0.9779, 0.9702, 0.9717, 0.9749, 0.9657]
print(calcStats(accRNDto4))
print(calcStats(acc3to4))
###Output
_____no_output_____ |
EuroSciPy-2019/3 - Tuesday/SciPy Tutorial.ipynb | ###Markdown
SciPy Tutorial
###Code
#!git clone https://github.com/gertingold/euroscipy-scipy-tutorial
###Output
Cloning into 'euroscipy-scipy-tutorial'...
remote: Enumerating objects: 108, done.[K
remote: Counting objects: 100% (108/108), done.[K
remote: Compressing objects: 100% (84/84), done.[K
remote: Total 108 (delta 40), reused 85 (delta 20), pack-reused 0[K
Receiving objects: 100% (108/108), 21.53 MiB | 6.42 MiB/s, done.
Resolving deltas: 100% (40/40), done.
|
Credit Risk Modeling/Credit Risk Modeling - LGD and EAD Models - With Comments - 12-1.ipynb | ###Markdown
Import Libraries
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Import Data
###Code
# Import data.
loan_data_preprocessed_backup = pd.read_csv('loan_data_2007_2014_preprocessed.csv')
###Output
_____no_output_____
###Markdown
Explore Data
###Code
loan_data_preprocessed = loan_data_preprocessed_backup.copy()
loan_data_preprocessed.columns.values
# Displays all column names.
loan_data_preprocessed.head()
loan_data_preprocessed.tail()
loan_data_defaults = loan_data_preprocessed[loan_data_preprocessed['loan_status'].isin(['Charged Off','Does not meet the credit policy. Status:Charged Off'])]
# Here we take only the accounts that were charged-off (written-off).
loan_data_defaults.shape
pd.options.display.max_rows = None
# Sets the pandas dataframe options to display all columns/ rows.
loan_data_defaults.isnull().sum()
###Output
_____no_output_____
###Markdown
Independent Variables
###Code
loan_data_defaults['mths_since_last_delinq'].fillna(0, inplace = True)
# We fill the missing values with zeroes.
#loan_data_defaults['mths_since_last_delinq'].fillna(loan_data_defaults['mths_since_last_delinq'].max() + 12, inplace=True)
loan_data_defaults['mths_since_last_record'].fillna(0, inplace=True)
# We fill the missing values with zeroes.
###Output
_____no_output_____
###Markdown
Dependent Variables
###Code
loan_data_defaults['recovery_rate'] = loan_data_defaults['recoveries'] / loan_data_defaults['funded_amnt']
# We calculate the dependent variable for the LGD model: recovery rate.
# It is the ratio of recoveries and funded amount.
loan_data_defaults['recovery_rate'].describe()
# Shows some descriptive statisics for the values of a column.
loan_data_defaults['recovery_rate'] = np.where(loan_data_defaults['recovery_rate'] > 1, 1, loan_data_defaults['recovery_rate'])
loan_data_defaults['recovery_rate'] = np.where(loan_data_defaults['recovery_rate'] < 0, 0, loan_data_defaults['recovery_rate'])
# We set recovery rates that are greater than 1 to 1 and recovery rates that are less than 0 to 0.
loan_data_defaults['recovery_rate'].describe()
# Shows some descriptive statisics for the values of a column.
loan_data_defaults['CCF'] = (loan_data_defaults['funded_amnt'] - loan_data_defaults['total_rec_prncp']) / loan_data_defaults['funded_amnt']
# We calculate the dependent variable for the EAD model: credit conversion factor.
# It is the ratio of the amount still outstanding at the moment of default (funded amount minus the principal repaid so far) to the total funded amount.
loan_data_defaults['CCF'].describe()
# Shows some descriptive statisics for the values of a column.
loan_data_defaults.to_csv('loan_data_defaults.csv')
# We save the data to a CSV file.
###Output
_____no_output_____
###Markdown
Explore Dependent Variables
###Code
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
plt.hist(loan_data_defaults['recovery_rate'], bins = 100)
# We plot a histogram of a variable with 100 bins.
plt.hist(loan_data_defaults['recovery_rate'], bins = 50)
# We plot a histogram of a variable with 50 bins.
plt.hist(loan_data_defaults['CCF'], bins = 100)
# We plot a histogram of a variable with 100 bins.
loan_data_defaults['recovery_rate_0_1'] = np.where(loan_data_defaults['recovery_rate'] == 0, 0, 1)
# We create a new variable which is 0 if recovery rate is 0 and 1 otherwise.
loan_data_defaults['recovery_rate_0_1']
###Output
_____no_output_____
###Markdown
LGD Model Splitting Data
###Code
from sklearn.model_selection import train_test_split
# LGD model stage 1 datasets: recovery rate 0 or greater than 0.
lgd_inputs_stage_1_train, lgd_inputs_stage_1_test, lgd_targets_stage_1_train, lgd_targets_stage_1_test = train_test_split(loan_data_defaults.drop(['good_bad', 'recovery_rate','recovery_rate_0_1', 'CCF'], axis = 1), loan_data_defaults['recovery_rate_0_1'], test_size = 0.2, random_state = 42)
# Takes a set of inputs and a set of targets as arguments. Splits the inputs and the targets into four dataframes:
# Inputs - Train, Inputs - Test, Targets - Train, Targets - Test.
###Output
_____no_output_____
###Markdown
Preparing the Inputs
###Code
features_all = ['grade:A',
'grade:B',
'grade:C',
'grade:D',
'grade:E',
'grade:F',
'grade:G',
'home_ownership:MORTGAGE',
'home_ownership:NONE',
'home_ownership:OTHER',
'home_ownership:OWN',
'home_ownership:RENT',
'verification_status:Not Verified',
'verification_status:Source Verified',
'verification_status:Verified',
'purpose:car',
'purpose:credit_card',
'purpose:debt_consolidation',
'purpose:educational',
'purpose:home_improvement',
'purpose:house',
'purpose:major_purchase',
'purpose:medical',
'purpose:moving',
'purpose:other',
'purpose:renewable_energy',
'purpose:small_business',
'purpose:vacation',
'purpose:wedding',
'initial_list_status:f',
'initial_list_status:w',
'term_int',
'emp_length_int',
'mths_since_issue_d',
'mths_since_earliest_cr_line',
'funded_amnt',
'int_rate',
'installment',
'annual_inc',
'dti',
'delinq_2yrs',
'inq_last_6mths',
'mths_since_last_delinq',
'mths_since_last_record',
'open_acc',
'pub_rec',
'total_acc',
'acc_now_delinq',
'total_rev_hi_lim']
# List of all independent variables for the models.
features_reference_cat = ['grade:G',
'home_ownership:RENT',
'verification_status:Verified',
'purpose:credit_card',
'initial_list_status:f']
# List of the dummy variable reference categories.
lgd_inputs_stage_1_train = lgd_inputs_stage_1_train[features_all]
# Here we keep only the variables we need for the model.
lgd_inputs_stage_1_train = lgd_inputs_stage_1_train.drop(features_reference_cat, axis = 1)
# Here we remove the dummy variable reference categories.
lgd_inputs_stage_1_train.isnull().sum()
# Check for missing values. We check whether the value of each row for each column is missing or not,
# then sum accross columns.
###Output
_____no_output_____
###Markdown
Estimating the Model
###Code
# P values for sklearn logistic regression.
# Class to display p-values for logistic regression in sklearn.
from sklearn import linear_model
import scipy.stats as stat
class LogisticRegression_with_p_values:
def __init__(self,*args,**kwargs):#,**kwargs):
self.model = linear_model.LogisticRegression(*args,**kwargs)#,**args)
def fit(self,X,y):
self.model.fit(X,y)
#### Get p-values for the fitted model ####
denom = (2.0 * (1.0 + np.cosh(self.model.decision_function(X))))
denom = np.tile(denom,(X.shape[1],1)).T
F_ij = np.dot((X / denom).T,X) ## Fisher Information Matrix
Cramer_Rao = np.linalg.inv(F_ij) ## Inverse Information Matrix
sigma_estimates = np.sqrt(np.diagonal(Cramer_Rao))
z_scores = self.model.coef_[0] / sigma_estimates # z-score for eaach model coefficient
p_values = [stat.norm.sf(abs(x)) * 2 for x in z_scores] ### two tailed test for p-values
self.coef_ = self.model.coef_
self.intercept_ = self.model.intercept_
#self.z_scores = z_scores
self.p_values = p_values
#self.sigma_estimates = sigma_estimates
#self.F_ij = F_ij
reg_lgd_st_1 = LogisticRegression_with_p_values()
# We create an instance of an object from the 'LogisticRegression' class.
reg_lgd_st_1.fit(lgd_inputs_stage_1_train, lgd_targets_stage_1_train)
# Estimates the coefficients of the object from the 'LogisticRegression' class
# with inputs (independent variables) contained in the first dataframe
# and targets (dependent variables) contained in the second dataframe.
feature_name = lgd_inputs_stage_1_train.columns.values
# Stores the names of the columns of a dataframe in a variable.
summary_table = pd.DataFrame(columns = ['Feature name'], data = feature_name)
# Creates a dataframe with a column titled 'Feature name' and row values contained in the 'feature_name' variable.
summary_table['Coefficients'] = np.transpose(reg_lgd_st_1.coef_)
# Creates a new column in the dataframe, called 'Coefficients',
# with row values the transposed coefficients from the 'LogisticRegression' object.
summary_table.index = summary_table.index + 1
# Increases the index of every row of the dataframe with 1.
summary_table.loc[0] = ['Intercept', reg_lgd_st_1.intercept_[0]]
# Assigns values of the row with index 0 of the dataframe.
summary_table = summary_table.sort_index()
# Sorts the dataframe by index.
p_values = reg_lgd_st_1.p_values
# We take the result of the newly added method 'p_values' and store it in a variable 'p_values'.
p_values = np.append(np.nan,np.array(p_values))
# We add the value 'NaN' in the beginning of the variable with p-values.
summary_table['p_values'] = p_values
# In the 'summary_table' dataframe, we add a new column, called 'p_values', containing the values from the 'p_values' variable.
summary_table
summary_table = pd.DataFrame(columns = ['Feature name'], data = feature_name)
summary_table['Coefficients'] = np.transpose(reg_lgd_st_1.coef_)
summary_table.index = summary_table.index + 1
summary_table.loc[0] = ['Intercept', reg_lgd_st_1.intercept_[0]]
summary_table = summary_table.sort_index()
p_values = reg_lgd_st_1.p_values
p_values = np.append(np.nan,np.array(p_values))
summary_table['p_values'] = p_values
summary_table
###Output
_____no_output_____
###Markdown
Testing the Model
###Code
lgd_inputs_stage_1_test = lgd_inputs_stage_1_test[features_all]
# Here we keep only the variables we need for the model.
lgd_inputs_stage_1_test = lgd_inputs_stage_1_test.drop(features_reference_cat, axis = 1)
# Here we remove the dummy variable reference categories.
y_hat_test_lgd_stage_1 = reg_lgd_st_1.model.predict(lgd_inputs_stage_1_test)
# Calculates the predicted values for the dependent variable (targets)
# based on the values of the independent variables (inputs) supplied as an argument.
y_hat_test_lgd_stage_1
y_hat_test_proba_lgd_stage_1 = reg_lgd_st_1.model.predict_proba(lgd_inputs_stage_1_test)
# Calculates the predicted probability values for the dependent variable (targets)
# based on the values of the independent variables (inputs) supplied as an argument.
y_hat_test_proba_lgd_stage_1
# This is an array of arrays of predicted class probabilities for all classes.
# In this case, the first value of every sub-array is the probability for the observation to belong to the first class, i.e. 0,
# and the second value is the probability for the observation to belong to the second class, i.e. 1.
y_hat_test_proba_lgd_stage_1 = y_hat_test_proba_lgd_stage_1[: ][: , 1]
# Here we take all the arrays in the array, and from each array, we take all rows, and only the element with index 1,
# that is, the second element.
# In other words, we take only the probabilities for being 1.
y_hat_test_proba_lgd_stage_1
lgd_targets_stage_1_test_temp = lgd_targets_stage_1_test
lgd_targets_stage_1_test_temp.reset_index(drop = True, inplace = True)
# We reset the index of a dataframe.
df_actual_predicted_probs = pd.concat([lgd_targets_stage_1_test_temp, pd.DataFrame(y_hat_test_proba_lgd_stage_1)], axis = 1)
# Concatenates two dataframes.
df_actual_predicted_probs.columns = ['lgd_targets_stage_1_test', 'y_hat_test_proba_lgd_stage_1']
df_actual_predicted_probs.index = lgd_inputs_stage_1_test.index
# Makes the index of one dataframe equal to the index of another dataframe.
df_actual_predicted_probs.head()
###Output
_____no_output_____
###Markdown
Estimating the Accuracy of the Model
###Code
tr = 0.5
# We create a new column with an indicator,
# where every observation that has predicted probability greater than the threshold has a value of 1,
# and every observation that has predicted probability lower than the threshold has a value of 0.
df_actual_predicted_probs['y_hat_test_lgd_stage_1'] = np.where(df_actual_predicted_probs['y_hat_test_proba_lgd_stage_1'] > tr, 1, 0)
pd.crosstab(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_lgd_stage_1'], rownames = ['Actual'], colnames = ['Predicted'])
# Creates a cross-table where the actual values are displayed by rows and the predicted values by columns.
# This table is known as a Confusion Matrix.
pd.crosstab(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_lgd_stage_1'], rownames = ['Actual'], colnames = ['Predicted']) / df_actual_predicted_probs.shape[0]
# Here we divide each value of the table by the total number of observations,
# thus getting percentages, or, rates.
(pd.crosstab(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_lgd_stage_1'], rownames = ['Actual'], colnames = ['Predicted']) / df_actual_predicted_probs.shape[0]).iloc[0, 0] + (pd.crosstab(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_lgd_stage_1'], rownames = ['Actual'], colnames = ['Predicted']) / df_actual_predicted_probs.shape[0]).iloc[1, 1]
# Here we calculate Accuracy of the model, which is the sum of the diagonal rates.
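# Added equivalent check (illustrative, not part of the original course material): the same
# accuracy value can be obtained directly with sklearn's accuracy_score.
from sklearn.metrics import accuracy_score
accuracy_score(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_lgd_stage_1'])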
from sklearn.metrics import roc_curve, roc_auc_score
fpr, tpr, thresholds = roc_curve(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_proba_lgd_stage_1'])
# Returns the Receiver Operating Characteristic (ROC) Curve from a set of actual values and their predicted probabilities.
# As a result, we get three arrays: the false positive rates, the true positive rates, and the thresholds.
# we store each of the three arrays in a separate variable.
plt.plot(fpr, tpr)
# We plot the false positive rate along the x-axis and the true positive rate along the y-axis,
# thus plotting the ROC curve.
plt.plot(fpr, fpr, linestyle = '--', color = 'k')
# We plot a seconary diagonal line, with dashed line style and black color.
plt.xlabel('False positive rate')
# We name the x-axis "False positive rate".
plt.ylabel('True positive rate')
# We name the y-axis "True positive rate".
plt.title('ROC curve')
# We name the graph "ROC curve".
AUROC = roc_auc_score(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_proba_lgd_stage_1'])
# Calculates the Area Under the Receiver Operating Characteristic Curve (AUROC)
# from a set of actual values and their predicted probabilities.
AUROC
###Output
_____no_output_____
###Markdown
Saving the Model
###Code
import pickle
pickle.dump(reg_lgd_st_1, open('lgd_model_stage_1.sav', 'wb'))
# Here we export our model to a 'SAV' file with file name 'lgd_model_stage_1.sav'.
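# To reuse the saved model later (illustrative sketch, not executed here):
# reg_lgd_st_1 = pickle.load(open('lgd_model_stage_1.sav', 'rb'))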
###Output
_____no_output_____
###Markdown
Stage 2 – Linear Regression
###Code
lgd_stage_2_data = loan_data_defaults[loan_data_defaults['recovery_rate_0_1'] == 1]
# Here we take only rows where the original recovery rate variable is greater than zero,
# i.e. where the indicator variable we created is equal to 1.
# LGD model stage 2 datasets: how much more than 0 is the recovery rate
lgd_inputs_stage_2_train, lgd_inputs_stage_2_test, lgd_targets_stage_2_train, lgd_targets_stage_2_test = train_test_split(lgd_stage_2_data.drop(['good_bad', 'recovery_rate','recovery_rate_0_1', 'CCF'], axis = 1), lgd_stage_2_data['recovery_rate'], test_size = 0.2, random_state = 42)
# Takes a set of inputs and a set of targets as arguments. Splits the inputs and the targets into four dataframes:
# Inputs - Train, Inputs - Test, Targets - Train, Targets - Test.
from sklearn import linear_model
from sklearn.metrics import mean_squared_error, r2_score
# Since the p-values are obtained through certain statistics, we need the 'stat' module from scipy.stats
import scipy.stats as stat
# Since we are using an object oriented language such as Python, we can simply define our own
# LinearRegression class (the same one from sklearn)
# By typing the code below we will overwrite a part of the class with one that includes p-values
# Here's the full source code of the ORIGINAL class: https://github.com/scikit-learn/scikit-learn/blob/7b136e9/sklearn/linear_model/base.py#L362
class LinearRegression(linear_model.LinearRegression):
"""
LinearRegression class after sklearn's, but calculate t-statistics
and p-values for model coefficients (betas).
Additional attributes available after .fit()
are `t` and `p` which are of the shape (y.shape[1], X.shape[1])
which is (n_features, n_coefs)
This class sets the intercept to 0 by default, since usually we include it
in X.
"""
# nothing changes in __init__
def __init__(self, fit_intercept=True, normalize=False, copy_X=True,
n_jobs=1):
self.fit_intercept = fit_intercept
self.normalize = normalize
self.copy_X = copy_X
self.n_jobs = n_jobs
def fit(self, X, y, n_jobs=1):
self = super(LinearRegression, self).fit(X, y, n_jobs)
# Calculate SSE (sum of squared errors)
# and SE (standard error)
sse = np.sum((self.predict(X) - y) ** 2, axis=0) / float(X.shape[0] - X.shape[1])
se = np.array([np.sqrt(np.diagonal(sse * np.linalg.inv(np.dot(X.T, X))))])
# compute the t-statistic for each feature
self.t = self.coef_ / se
# find the p-value for each feature
self.p = np.squeeze(2 * (1 - stat.t.cdf(np.abs(self.t), y.shape[0] - X.shape[1])))
return self
import scipy.stats as stat
class LinearRegression(linear_model.LinearRegression):
def __init__(self, fit_intercept=True, normalize=False, copy_X=True,
n_jobs=1):
self.fit_intercept = fit_intercept
self.normalize = normalize
self.copy_X = copy_X
self.n_jobs = n_jobs
def fit(self, X, y, n_jobs=1):
self = super(LinearRegression, self).fit(X, y, n_jobs)
sse = np.sum((self.predict(X) - y) ** 2, axis=0) / float(X.shape[0] - X.shape[1])
se = np.array([np.sqrt(np.diagonal(sse * np.linalg.inv(np.dot(X.T, X))))])
self.t = self.coef_ / se
self.p = np.squeeze(2 * (1 - stat.t.cdf(np.abs(self.t), y.shape[0] - X.shape[1])))
return self
lgd_inputs_stage_2_train = lgd_inputs_stage_2_train[features_all]
# Here we keep only the variables we need for the model.
lgd_inputs_stage_2_train = lgd_inputs_stage_2_train.drop(features_reference_cat, axis = 1)
# Here we remove the dummy variable reference categories.
reg_lgd_st_2 = LinearRegression()
# We create an instance of an object from the 'LinearRegression' class.
reg_lgd_st_2.fit(lgd_inputs_stage_2_train, lgd_targets_stage_2_train)
# Estimates the coefficients of the object from the 'LinearRegression' class
# with inputs (independent variables) contained in the first dataframe
# and targets (dependent variables) contained in the second dataframe.
feature_name = lgd_inputs_stage_2_train.columns.values
# Stores the names of the columns of a dataframe in a variable.
summary_table = pd.DataFrame(columns = ['Feature name'], data = feature_name)
# Creates a dataframe with a column titled 'Feature name' and row values contained in the 'feature_name' variable.
summary_table['Coefficients'] = np.transpose(reg_lgd_st_2.coef_)
# Creates a new column in the dataframe, called 'Coefficients',
# with row values the transposed coefficients from the 'LinearRegression' object.
summary_table.index = summary_table.index + 1
# Increases the index of every row of the dataframe with 1.
summary_table.loc[0] = ['Intercept', reg_lgd_st_2.intercept_]
# Assigns values of the row with index 0 of the dataframe.
summary_table = summary_table.sort_index()
# Sorts the dataframe by index.
p_values = reg_lgd_st_2.p
# We take the p-values from the newly added attribute 'p' and store them in a variable 'p_values'.
p_values = np.append(np.nan,np.array(p_values))
# We add the value 'NaN' in the beginning of the variable with p-values.
summary_table['p_values'] = p_values.round(3)
# In the 'summary_table' dataframe, we add a new column, called 'p_values', containing the values from the 'p_values' variable.
summary_table
summary_table = pd.DataFrame(columns = ['Feature name'], data = feature_name)
summary_table['Coefficients'] = np.transpose(reg_lgd_st_2.coef_)
summary_table.index = summary_table.index + 1
summary_table.loc[0] = ['Intercept', reg_lgd_st_2.intercept_]
summary_table = summary_table.sort_index()
p_values = reg_lgd_st_2.p
p_values = np.append(np.nan,np.array(p_values))
summary_table['p_values'] = p_values.round(3)
summary_table
###Output
_____no_output_____
###Markdown
Stage 2 – Linear Regression Evaluation
###Code
lgd_inputs_stage_2_test = lgd_inputs_stage_2_test[features_all]
# Here we keep only the variables we need for the model.
lgd_inputs_stage_2_test = lgd_inputs_stage_2_test.drop(features_reference_cat, axis = 1)
# Here we remove the dummy variable reference categories.
lgd_inputs_stage_2_test.columns.values
# Displays the names of the columns we kept in the test inputs dataframe.
y_hat_test_lgd_stage_2 = reg_lgd_st_2.predict(lgd_inputs_stage_2_test)
# Calculates the predicted values for the dependent variable (targets)
# based on the values of the independent variables (inputs) supplied as an argument.
lgd_targets_stage_2_test_temp = lgd_targets_stage_2_test
lgd_targets_stage_2_test_temp = lgd_targets_stage_2_test_temp.reset_index(drop = True)
# We reset the index of a dataframe.
pd.concat([lgd_targets_stage_2_test_temp, pd.DataFrame(y_hat_test_lgd_stage_2)], axis = 1).corr()
# We calculate the correlation between actual and predicted values.
sns.distplot(lgd_targets_stage_2_test - y_hat_test_lgd_stage_2)
# We plot the distribution of the residuals.
pickle.dump(reg_lgd_st_2, open('lgd_model_stage_2.sav', 'wb'))
# Here we export our model to a 'SAV' file with file name 'lgd_model_stage_2.sav'.
###Output
_____no_output_____
###Markdown
Combining Stage 1 and Stage 2
###Code
y_hat_test_lgd_stage_2_all = reg_lgd_st_2.predict(lgd_inputs_stage_1_test)
y_hat_test_lgd_stage_2_all
y_hat_test_lgd = y_hat_test_lgd_stage_1 * y_hat_test_lgd_stage_2_all
# Here we combine the predictions of the models from the two stages.
pd.DataFrame(y_hat_test_lgd).describe()
# Shows some descriptive statistics for the values of a column.
y_hat_test_lgd = np.where(y_hat_test_lgd < 0, 0, y_hat_test_lgd)
y_hat_test_lgd = np.where(y_hat_test_lgd > 1, 1, y_hat_test_lgd)
# We set predicted values that are greater than 1 to 1 and predicted values that are less than 0 to 0.
pd.DataFrame(y_hat_test_lgd).describe()
# Shows some descriptive statistics for the values of a column.
###Output
_____no_output_____
###Markdown
EAD Model Estimation and Interpretation
###Code
# EAD model datasets
ead_inputs_train, ead_inputs_test, ead_targets_train, ead_targets_test = train_test_split(loan_data_defaults.drop(['good_bad', 'recovery_rate','recovery_rate_0_1', 'CCF'], axis = 1), loan_data_defaults['CCF'], test_size = 0.2, random_state = 42)
# Takes a set of inputs and a set of targets as arguments. Splits the inputs and the targets into four dataframes:
# Inputs - Train, Inputs - Test, Targets - Train, Targets - Test.
ead_inputs_train.columns.values
ead_inputs_train = ead_inputs_train[features_all]
# Here we keep only the variables we need for the model.
ead_inputs_train = ead_inputs_train.drop(features_reference_cat, axis = 1)
# Here we remove the dummy variable reference categories.
reg_ead = LinearRegression()
# We create an instance of an object from the 'LinearRegression' class.
reg_ead.fit(ead_inputs_train, ead_targets_train)
# Estimates the coefficients of the object from the 'LinearRegression' class
# with inputs (independent variables) contained in the first dataframe
# and targets (dependent variables) contained in the second dataframe.
feature_name = ead_inputs_train.columns.values
summary_table = pd.DataFrame(columns = ['Feature name'], data = feature_name)
# Creates a dataframe with a column titled 'Feature name' and row values contained in the 'feature_name' variable.
summary_table['Coefficients'] = np.transpose(reg_ead.coef_)
# Creates a new column in the dataframe, called 'Coefficients',
# with row values the transposed coefficients from the 'LinearRegression' object.
summary_table.index = summary_table.index + 1
# Increases the index of every row of the dataframe with 1.
summary_table.loc[0] = ['Intercept', reg_ead.intercept_]
# Assigns values of the row with index 0 of the dataframe.
summary_table = summary_table.sort_index()
# Sorts the dataframe by index.
p_values = reg_ead.p
# We take the p-values from the newly added attribute 'p' and store them in a variable 'p_values'.
p_values = np.append(np.nan,np.array(p_values))
# We add the value 'NaN' in the beginning of the variable with p-values.
summary_table['p_values'] = p_values
# In the 'summary_table' dataframe, we add a new column, called 'p_values', containing the values from the 'p_values' variable.
summary_table
###Output
_____no_output_____ |
principal_component_analysis/principal_component_analysis.ipynb | ###Markdown
Principal Component Analysis (PCA) in Python Killian McKee Overview 1. [What is PCA?](section1) 2. [Key Terms](section2) 3. [Pros and Cons of PCA](section3) 4. [When to use PCA](section4) 5. [Key Parameters](section5) 6. [Walkthrough: PCA for data visualization](section6) 7. [Walkthrough: PCA w/ Random Forest](section7) 8. [Additional Reading](section8) 9. [Conclusion](section9) 10. [Sources](section10) What is Principal Component Analysis? Principal component analysis is a non-parametric data science tool that allows us to identify the most important variables in a data set consisting of many correlated variables. In more technical terms, PCA helps us reduce the dimensionality of our feature space by highlighting the most important variables (principal components) of a dataset via orthogonalization. PCA is typically done before a model is built to decide which variables to include and to eliminate those which are overly correlated with one another. Principal component analysis provides two primary benefits: firstly, it can help our models avoid overfitting by eliminating extraneous variables that are most likely only pertinent (if at all) for our training data, but not for the new data the model would see in the real world. Secondly, performing PCA can drastically improve model training speed in high-dimensional settings (when there are lots of features in a dataset). Key Terms 1. **Dimensionality**: the number of features in a dataset (represented by more columns in a tidy dataset). PCA aims to reduce excessive dimensionality in a dataset to improve model performance. 2. **Correlation**: a measure of closeness between two variables, ranging from -1 to +1. A negative correlation indicates that when one variable goes up, the other goes down (and a positive correlation indicates they both move in the same direction). PCA helps us eliminate redundant correlated variables. 3. **Orthogonal**: uncorrelated to one another, i.e. having a correlation of 0. PCA seeks to find an orthogonalized subset of the data that still captures most/all of the important information for our model. 4. **Covariance Matrix**: a matrix we can generate to show how correlated variables are with one another. This can be a helpful tool to visualize which features PCA may or may not eliminate. Pros and Cons of PCA There are no real cons of PCA, but it does have some limitations: **Pros**: 1. Reduces model noise 2. Easy to implement with Python packages like pandas and scikit-learn 3. Improves model training time **Limitations**: 1. Linearity: PCA assumes the principal components are a linear combination of the original dataset features. 2. Variance measure: PCA uses variance as the measure of dimension importance. This means axes with high variance are treated as principal components and those with low variance can be cut out as noise. 3. Orthogonality: PCA assumes the principal components are orthogonal, and won't produce meaningful results otherwise. When to use Principal Component Analysis One should consider using PCA when the following conditions are true: 1. The linearity, variance, and orthogonality assumptions specified above are satisfied. 2. Your dataset contains many features. 3. You are interested in reducing the noise of your dataset or improving model training time. Key Parameters The number of features to keep after PCA (typically denoted by n_components) is the only major parameter for PCA. 
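As a quick illustration of these ideas (a minimal sketch, using the same iris data as the walkthroughs below), the correlation structure can be inspected with pandas and a sensible `n_components` value chosen by looking at the explained variance ratio:

```python
import pandas as pd
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

iris = datasets.load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)

# correlation matrix: highly correlated features are candidates for dimensionality reduction
print(df.corr().round(2))

# fit PCA on standardized data and inspect how much variance each component explains
X = StandardScaler().fit_transform(df)
pca = PCA(n_components=4)
pca.fit(X)
print(pca.explained_variance_ratio_)
```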
PCA Walkthrough: Data Visualization We will be modifying scikit-learn's tutorial on fitting PCA for visualization using the iris [dataset](https://en.wikipedia.org/wiki/Iris_flower_data_set) (contains different species of flowers).
###Code
# import the necessary packages
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn import decomposition
from sklearn import datasets
#specify graph parameters for iris and load the dataset
centers = [[1, 1], [-1, -1], [1, -1]]
iris = datasets.load_iris()
# set features and target
X = iris.data
y = iris.target
# create the chart
fig = plt.figure(1, figsize=(4, 3))
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)
# fit our PCA
plt.cla()
pca = decomposition.PCA(n_components=3)
pca.fit(X)
X = pca.transform(X)
# plot our data
for name, label in [('Setosa', 0), ('Versicolour', 1), ('Virginica', 2)]:
ax.text3D(X[y == label, 0].mean(),
X[y == label, 1].mean() + 1.5,
X[y == label, 2].mean(), name,
horizontalalignment='center',
bbox=dict(alpha=.5, edgecolor='w', facecolor='w'))
# Reorder the labels to have colors matching the cluster results
y = np.choose(y, [1, 2, 0]).astype(float)
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=y, cmap=plt.cm.nipy_spectral,
edgecolor='k')
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
plt.show()
#we can clearly see the three species within the iris dataset and how the differ from one another
###Output
_____no_output_____
###Markdown
Walkthrough: PCA w/ Random Forest In this tutorial we will be walking through the typical workflow to improve model speed with PCA, then fitting a random forest. We will be working with the iris dataset again, but we will load it into a pandas dataframe
###Code
# import necessary packages
import numpy as np
import pandas as pd
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
# download the data
data = datasets.load_iris()
df = pd.DataFrame(data['data'], columns=data['feature_names'])
df['target'] = data['target']
df.head()
# split the data into features and target
X = df.drop('target', axis=1)
y = df['target']
#creating training and test splits
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# scaling the data
# since pca uses variance as a measure, it is best to scale the data
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# apply and fit the pca
# play around with the n_components value to see how the model does
pca = PCA(n_components=4)
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)
# generate the explained variance, which shows us how much variance is caused by each variable
# we can see from the example below that more than 96% of the data can be explained by the first two principal components
explained_variance = pca.explained_variance_ratio_
explained_variance
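# A common heuristic (illustrative sketch, not part of the original notebook): inspect the
# cumulative explained variance and keep enough components to cross a chosen threshold, e.g. ~95%.
print(np.cumsum(explained_variance))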
# now lets fit a random forest so we can see how the accuracy changes with different levels of components
# this model has all the components
classifier = RandomForestClassifier(max_depth=2, random_state=0)
classifier.fit(X_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
# all component model accuracy
# we can see it achieves an accuracy of 93%
cm = confusion_matrix(y_test, y_pred)
print(cm)
print('Accuracy', accuracy_score(y_test, y_pred))
# Now lets see how the model does with only 2 components
# our accuracy decreases by about 3%, but we can see how this might be useful if we had 100s of components
pca = PCA(n_components=2)
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)
classifier = RandomForestClassifier(max_depth=2, random_state=0)
classifier.fit(X_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print(cm)
print('Accuracy', accuracy_score(y_test, y_pred))
###Output
[[11 0 0]
[ 0 10 3]
[ 0 2 4]]
Accuracy 0.8333333333333334
|
Assignment Day 4-LetsUpgrad_PythonEssentials.ipynb | ###Markdown
QUESTION 1
###Code
# Program to find the first Armstrong number in a certain interval
# (an Armstrong number equals the sum of its digits, each raised to the power of the number of digits)
lower = 1042000
upper = 702648265
for num in range(lower, upper + 1):
    order = len(str(num))
    digit_sum = 0
    temp = num
    while temp > 0:
        digit = temp % 10
        digit_sum += digit ** order
        temp //= 10
    if num == digit_sum:
        print('The first Armstrong number is', num)
        break
###Output
_____no_output_____ |
02_image_level.ipynb | ###Markdown
Image level consistency check
###Code
import numpy as np
import pandas as pd
import os
import os.path
import matplotlib.pyplot as plt
import plotly.express as px
from core import *
from config import image_stats_file, xls_file, figures_dir, latex_dir, image_level_results_file, image_level_threshold
pd.set_option('display.max_rows', None)
pd.set_option('display.max_colwidth', 100)
pd.set_option('display.width', 10000)
# reading image statistics
data= pd.read_csv(image_stats_file)
# reading the summary page
methods= pd.read_excel(xls_file, engine='openpyxl')
#methods= methods.iloc[:methods[methods['key'].isnull()].index[0]]
methods= methods[methods['flag'] == 'primary']
methods.index= methods['key']
# reading the image level figures
xl= pd.ExcelFile(xls_file, engine='openpyxl')
image_level= {}
for s in xl.sheet_names[1:]:
image_level[s]= xl.parse(s)
print('image level figures available for: %s' % str(list(image_level.keys())))
methods.columns
methods.index
# test images with annotations #1 as ground truth
data_test= data[(data['test'] == True) & (data['annotator'] == 1)].reset_index()
# test images with annotations #2 as ground truth
data_test_obs2= data[(data['test'] == True) & (data['annotator'] == 2)].reset_index()
# extracting figures with and without FoV
data_test_with_fov= data_test[data_test['fov'] == True].reset_index(drop=True)
data_test_without_fov= data_test[data_test['fov'] == False].reset_index(drop=True)
data_test_with_fov_obs2= data_test_obs2[data_test_obs2['fov'] == True].reset_index(drop=True)
data_test_without_fov_obs2= data_test_obs2[data_test_obs2['fov'] == False].reset_index(drop=True)
data_test_with_fov
###Output
_____no_output_____
###Markdown
Calculating the scores for all image level figures
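The cell below relies on `consistency_image_level` imported from `core`, whose implementation is not shown in this notebook. Purely as an illustration (a hypothetical sketch; the real function may differ), such a check could ask whether any integer counts of true positives and true negatives reproduce the reported accuracy, sensitivity and specificity within the rounding tolerance `eps`:

```python
import numpy as np

def consistency_image_level_sketch(p, n, acc, sens, spec, eps):
    # hypothetical reimplementation for illustration only
    tp_lo, tp_hi = np.ceil((sens - eps) * p), np.floor((sens + eps) * p)
    tn_lo, tn_hi = np.ceil((spec - eps) * n), np.floor((spec + eps) * n)
    if tp_lo > tp_hi or tn_lo > tn_hi:
        return False
    acc_lo = (tp_lo + tn_lo) / (p + n)
    acc_hi = (tp_hi + tn_hi) / (p + n)
    return acc_lo - eps <= acc <= acc_hi + eps
```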
###Code
# checking the consistencies at the image level
for s in image_level:
if not s in methods.index:
continue
print('processing', s)
for i, row in image_level[s].iterrows():
image_id= row['image']
p_with_fov= data_test_with_fov[data_test_with_fov['id'] == image_id]['p'].values[0]
n_with_fov= data_test_with_fov[data_test_with_fov['id'] == image_id]['n'].values[0]
p_without_fov= data_test_without_fov[data_test_without_fov['id'] == image_id]['p'].values[0]
n_without_fov= data_test_without_fov[data_test_without_fov['id'] == image_id]['n'].values[0]
        p_with_fov_obs2= data_test_with_fov_obs2[data_test_with_fov_obs2['id'] == image_id]['p'].values[0]
        n_with_fov_obs2= data_test_with_fov_obs2[data_test_with_fov_obs2['id'] == image_id]['n'].values[0]
        p_without_fov_obs2= data_test_without_fov_obs2[data_test_without_fov_obs2['id'] == image_id]['p'].values[0]
        n_without_fov_obs2= data_test_without_fov_obs2[data_test_without_fov_obs2['id'] == image_id]['n'].values[0]
digits= methods.loc[s]['digits']
if digits > 2:
eps= 10.0**(-digits)
else:
eps= 10.0**(-digits)/2
image_level[s].loc[i, 'n_with_fov']= n_with_fov
image_level[s].loc[i, 'n_without_fov']= n_without_fov
image_level[s].loc[i, 'n_with_fov_obs2']= n_with_fov_obs2
image_level[s].loc[i, 'n_without_fov_obs2']= n_without_fov_obs2
image_level[s].loc[i, 'p_with_fov']= p_with_fov
image_level[s].loc[i, 'p_without_fov']= p_without_fov
image_level[s].loc[i, 'p_with_fov_obs2']= p_with_fov_obs2
image_level[s].loc[i, 'p_without_fov_obs2']= p_without_fov_obs2
image_level[s].loc[i, 'consistency_with_fov']= consistency_image_level(p_with_fov, n_with_fov, row['acc'], row['sens'], row['spec'], eps)
image_level[s].loc[i, 'consistency_without_fov']= consistency_image_level(p_without_fov, n_without_fov, row['acc'], row['sens'], row['spec'], eps)
image_level[s].loc[i, 'consistency_with_fov_obs2']= consistency_image_level(p_with_fov_obs2, n_with_fov_obs2, row['acc'], row['sens'], row['spec'], eps)
image_level[s].loc[i, 'consistency_without_fov_obs2']= consistency_image_level(p_without_fov_obs2, n_without_fov_obs2, row['acc'], row['sens'], row['spec'], eps)
# calculating the percentages of images with a given number of negatives falling in the calculated range
for key in image_level:
if not key in methods.index:
continue
methods.loc[key, 'image_level_consistency_with_fov']= np.sum(image_level[key]['consistency_with_fov']*1)/len(image_level[key])
methods.loc[key, 'image_level_consistency_without_fov']= np.sum(image_level[key]['consistency_without_fov']*1)/len(image_level[key])
methods.loc[key, 'n_image_level']= len(image_level[key])
###Output
_____no_output_____
###Markdown
Printing the results of all image level figures
###Code
image_level['mo2017']
image_level['meng2015']
image_level['hassan2018']
image_level['tang2017']
image_level['zhu2016']
image_level['geetharamani2016']
image_level['wang2015']
image_level['singh2016']
image_level['singh2017']
image_level['saroj2020']
image_level['dash2018']
image_level['fathi2013']
image_level['imani2015']
image_level['emary2014']
image_level['waheed2015']
image_level['rahebi2014']
image_level['thangaraj2017']
image_level['adapa2020']
image_level['escorcia-gutierrez2020']
image_level['khan2016']
image_level['fraz2012b']
image_level['fraz2012']
image_level['lupascu2010']
image_level['marin2011']
image_level['ricci2007']
image_level['li2016']
image_level['barkana2017']
image_level['tamim2020']
image_level['frucci2016']
image_level['moghimirad2012']
image_level['odstrcilik2013']
image_level['dash2020']
image_level['bharkad2017']
image_level['lupascu2016']
image_level['kumar2020']
image_level['narkthewan2019']
###Output
_____no_output_____
###Markdown
Categorization
###Code
threshold= image_level_threshold
reduced= methods[methods['image_level_consistency_with_fov'].notnull()].reset_index(drop=True)
reduced.loc[reduced['image_level_consistency_with_fov'] > threshold, 'category']= 'FoV'
reduced.loc[reduced['image_level_consistency_without_fov'] > threshold, 'category']= 'no FoV'
reduced.loc[(reduced['image_level_consistency_with_fov'] > threshold) & (reduced['image_level_consistency_without_fov'] > threshold), 'category']= 'ambiguous'
reduced.loc[(~reduced['category'].isin(['FoV',
'no FoV',
'ambiguous'])), 'category']= 'outlier'
###Output
_____no_output_____
###Markdown
Analysis
###Code
reduced[['key', 'category']].groupby('category').count()
reduced[reduced['category'] == 'ambiguous']
# preparing latex table
def prepare_key(x):
name= x[:-4]
year= x[-4:]
name= name[:1].upper() + name[1:]
return name + ' (' + year + ') \cite{' + x + '}'
latex= reduced[['key', 'acc', 'sens', 'spec', 'digits', 'n_image_level', 'image_level_consistency_with_fov', 'image_level_consistency_without_fov', 'category']]
latex.loc[latex['category'] == 'no FoV', 'category']= 'all pixels'
latex['key']= latex['key'].apply(lambda x: x[0:1].upper() + x[1:])
latex['key']= latex['key'].apply(lambda x: ' \cite{' + x.lower() + '}')
#latex['key']= latex['key'].apply(prepare_key)
latex['n_image_level']= latex['n_image_level'].astype(int)
latex['digits']= latex['digits'].astype(int)
latex['acc']= latex['acc'].apply(lambda x: ('%.4f' % x)[1:])
latex['sens']= latex['sens'].apply(lambda x: ('%.4f' % x)[1:])
latex['spec']= latex['spec'].apply(lambda x: ('%.4f' % x)[1:])
latex['image_level_consistency_with_fov']= (latex['image_level_consistency_with_fov']*100).astype(int)
latex['image_level_consistency_without_fov']= (latex['image_level_consistency_without_fov']*100).astype(int)
latex.columns=['Key', '$\overline{acc}$', '$\overline{sens}$', '$\overline{spec}$', '\rotatebox{90}{Decimal places}', '\rotatebox{90}{Num. image level fig.}', '\rotatebox{90}{$H_{\text{FoV}}$ not rejected (\%)}', '\rotatebox{90}{$H_{\text{all}}$ not rejected (\%)}', 'Decision']
latex
latex_str= set_column_spaces(latex.sort_values('$\overline{acc}$', ascending=False).to_latex(escape=False, index=False), n_cols=9)
with open(os.path.join(latex_dir, "tab2.tex"), "w") as text_file:
text_file.write(latex_str)
px.scatter(reduced[reduced['category'].notnull()], x='acc', y='spec', text='key', color='category', width=1000, height=1000)
markers= ['o', 's', '+', 'x']
label_mapping= {'FoV': 'FoV', 'outlier': 'Outlier', 'no FoV': 'All pixels'}
plt.figure(figsize=(5, 4))
for i, c in enumerate(['FoV', 'no FoV', 'outlier']):
plt.scatter(reduced[reduced['category'] == c]['acc'], reduced[reduced['category'] == c]['spec'], label=label_mapping[c], marker=markers[i], s=100)
plt.scatter([0.9473], [0.9725], label = 'Ann. #2 with FoV', marker='D', s=200)
plt.scatter([0.9636], [0.9818], label = 'Ann. #2 with all pixels', marker='*', s=300)
plt.xlabel('Accuracy')
plt.ylabel('Specificity')
#plt.gca().set_aspect(1.0)
plt.tight_layout()
plt.legend()
plt.savefig(os.path.join(figures_dir, 'image_level.pdf'))
plt.show()
methods= pd.merge(methods.reset_index(drop=True), reduced[['key', 'category']], on='key', how='left')
###Output
_____no_output_____
###Markdown
Writing the results to file
###Code
methods.to_csv(image_level_results_file, index=False)
methods.columns
methods
###Output
_____no_output_____ |
notebooks/overlap_study.ipynb | ###Markdown
How many neighbours of an entry overlap lexically? Proportion of neighbours that overlap in the first 100 neighbours.
###Code
%cd ~/NetBeansProjects/ExpLosion/
from notebooks.common_imports import *
from gui.output_utils import *
from gui.user_code import pretty_names
from discoutils.thesaurus_loader import Vectors
from random import sample
path = '../FeatureExtractionToolkit/word2vec_vectors/composed/AN_NN_word2vec-wiki_15percent-rep0_Add.events.filtered.strings'
w = Vectors.from_tsv(path, allow_lexical_overlap=True)
w.init_sims(n_neighbors=100)
unigrams = list(x for x in w.keys() if x.count('_') < 1)
phrases = list(x for x in w.keys() if x.count('_') >= 1)
w.get_nearest_neighbours_linear.cache_clear()
%lprun -f Vectors.get_nearest_neighbours_linear w.get_nearest_neighbours_linear('car/N')
len(unigrams), len(phrases), len(w)
ratios = []
for entry in sample(phrases, 100):
before = w.get_nearest_neighbours(entry)
after = Vectors.remove_overlapping_neighbours(entry, before)
ratios.append(len(after) / len(before))
# plt.hist(ratios, bins=20);
ax = sns.distplot(np.array(ratios), bins=20, kde_kws=dict(cut=True))
ax.set_xlim(0, 1)
sns.kdeplot?
###Output
_____no_output_____ |
problem_#8.ipynb | ###Markdown
Daily Coding Problem 8 A unival tree (which stands for "universal value") is a tree where all nodes under it have the same value.Given the root to a binary tree, count the number of unival subtrees.For example, the following tree has 5 unival subtrees:
###Code
tree = """
0
/ \\
1 0
/ \\
1 0
/ \\
1 1"""
class Node:
def __init__(self, val, left=None, right=None):
self.val = val
self.left = left
self.right = right
def count_unival_subtrees_helper(subroot):
if subroot == None:
return 0, None
left = subroot.left
right = subroot.right
count, val = count_unival_subtrees_helper(left)
count2, val2 = count_unival_subtrees_helper(right)
count = count + count2
if subroot.left == None and subroot.right == None:
#print("+1")
return 1+count, subroot.val
if not (subroot.left and subroot.right):
if subroot.val == val or subroot.val == val2:
#print("+1")
return count+1, subroot.val
else:
#print("+0")
return count, None
if val == val2 == subroot.val:
#print("+1")
return 1 + count, subroot.val
return count, None
def count_unival_subtrees(root):
return count_unival_subtrees_helper(root)[0]
tree = """
0
/ \\
1 0
/ \\
1 0
/ \\
1 1"""
_root = Node(val=0, left=Node(val=1),
right=Node(val=0, left=Node(val=1, left=Node(val=1), right=Node(val=1)), right=Node(val=0)))
print(tree, '\nNum of unival subtrees: ' + str(count_unival_subtrees(_root)))
tree = """
1
\\
1
/ \\
1 1
/ \\
1 1"""
_root = Node(val=1,
right=Node(val=1, left=Node(val=1, left=Node(val=1), right=Node(val=1)), right=Node(val=1)))
print(tree, '\nNum of unival subtrees: ' + str(count_unival_subtrees(_root)))
###Output
1
\
1
/ \
1 1
/ \
1 1
Num of unival subtrees: 6
|
TimeSeries Forecast.ipynb | ###Markdown
Time Series Forecasting on NYC_Taxi with MLflow - Objectives - Leverage MLflow to build some time series models - Simple forecast of aggregate daily data to start - Later we will need to look at splitting out the datasets into different spots
###Code
%load_ext autotime
import os
from pyspark.sql import functions as F
from setup import start_spark, extract_data
sparksesh = start_spark()
from tseries.taxi_daily import TaxiDaily
taxi_daily = TaxiDaily(sparksesh)
taxi_daily.load_data()
###Output
_____no_output_____
###Markdown
Settings for MLflow
###Code
# credentials for storing our model artifacts
# mlflow needs these to be set whenever it is being called
os.environ['AWS_ACCESS_KEY_ID'] = os.environ.get('MINIO_ACCESS_KEY')
os.environ['AWS_SECRET_ACCESS_KEY'] = os.environ.get('MINIO_SECRET_KEY')
os.environ['MLFLOW_S3_ENDPOINT_URL'] = 'http://minio:9000'
###Output
time: 377 µs (started: 2021-08-03 12:37:32 +00:00)
###Markdown
Create Our Train Set
###Code
taxi_daily.dataset.agg(F.min(F.col('pickup_date')), F.max(F.col('pickup_date'))).collect()
taxi_daily.dataset.printSchema()
###Output
root
|-- pickup_date: date (nullable = true)
|-- total_rides: long (nullable = false)
|-- total_takings: double (nullable = true)
time: 7.91 ms (started: 2021-08-03 12:37:46 +00:00)
###Markdown
Let's take 2 years to start
###Code
starting_dataset = taxi_daily.dataset.filter("pickup_date < '2015-09-01'")
train, val = starting_dataset.filter("pickup_date < '2015-08-01'").toPandas(), \
starting_dataset.filter("pickup_date >= '2015-08-01'").toPandas()
###Output
time: 38.3 s (started: 2021-08-03 12:37:46 +00:00)
###Markdown
Forecasting the Dataframe
###Code
import prophet
import pandas as pd
from prophet import Prophet
from prophet.diagnostics import cross_validation
from prophet.diagnostics import performance_metrics
import mlflow
###Output
time: 2.09 s (started: 2021-08-03 12:38:24 +00:00)
###Markdown
There was an error in hostname resolution, hence the switch to the IP address
###Code
#mlflow.delete_experiment('1')
mlflow.set_tracking_uri("http://192.168.64.21:5000/")
tracking_uri = mlflow.get_tracking_uri()
print("Current tracking uri: {}".format(tracking_uri))
### Quick test on creating experiments
from mlflow.exceptions import RestException
try:
mlflow.create_experiment(
name='taxi_daily_forecast'
)
except RestException:
print('already_created')
experiment = mlflow.get_experiment(15)
experiment.artifact_location
# Build an evaluation function
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
def eval_metrics(actual, pred):
rmse = np.sqrt(mean_squared_error(actual, pred))
mae = mean_absolute_error(actual, pred)
r2 = r2_score(actual, pred)
return rmse, mae, r2
# To save models to mlflow we need to write a python wrapper
# to make sure that it performs as mlflow expects
import mlflow.pyfunc
class ProphetModel(mlflow.pyfunc.PythonModel):
def __init__(self, model):
self.model = model
super().__init__()
def load_context(self, context):
from prophet import Prophet
return
def predict(self, context, model_input):
future = self.model.make_future_dataframe(periods=model_input['periods'][0])
return self.model.predict(future)
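# Usage sketch (assumption, not executed here): after logging, the wrapper can be loaded back with
# mlflow.pyfunc.load_model("runs:/<run_id>/model") and called with a dataframe carrying the horizon,
# e.g. loaded_model.predict(pd.DataFrame({'periods': [14]})).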
###Output
time: 333 µs (started: 2021-08-03 12:38:27 +00:00)
###Markdown
44 seconds for training by default; 3.62 seconds with process parallelisation; 13 seconds after we add the toPandas conversion here and run with parallelisation
###Code
train_prophet = train[['pickup_date', 'total_rides']]
train_prophet.columns = ['ds', 'y']
#train_prophet.head(10)
val_prophet = val[['pickup_date', 'total_rides']]
val_prophet.columns = ['ds', 'y']
#val_prophet.head(10)
%time
rolling_window = 0.1
conda_env = {
'channels': ['conda-forge'],
'dependencies': [{
'pip': [
'prophet=={0}'.format(prophet.__version__)
]
}],
"name": "prophetenv"
}
with mlflow.start_run(experiment_id=15):
m = prophet.Prophet(daily_seasonality=True)
# need to adjust the fit function to suit
m.fit(train_prophet)
# cross validation is the thingy that is generating our different train sets
# tqdm is glitchy with my setup so disabling for now
df_cv = cross_validation(m, initial="28 days", period="7 days", horizon="14 days",
disable_tqdm=True, parallel="processes")
df_p = performance_metrics(df_cv, rolling_window=rolling_window)
mlflow.log_param("rolling_window", rolling_window)
mlflow.log_metric("rmse", df_p.loc[0, "rmse"])
mlflow.log_metric("mae", df_p.loc[0, "mae"])
mlflow.log_metric("mape", df_p.loc[0, "mape"])
print(" CV: {}".format(df_cv.head()))
print(" Perf: {}".format(df_p.head()))
mlflow.pyfunc.log_model("model", conda_env=conda_env, python_model=ProphetModel(m))
print(
"Logged model with URI: runs:/{run_id}/model".format(
run_id=mlflow.active_run().info.run_id
)
)
###Output
CPU times: user 1 µs, sys: 1 µs, total: 2 µs
Wall time: 6.68 µs
###Markdown
Prophet Diagnostics
###Code
# Python
from prophet.plot import plot_cross_validation_metric
fig = plot_cross_validation_metric(df_cv, metric='mape')
###Output
/opt/conda/envs/spark/lib/python3.8/site-packages/prophet/plot.py:539: FutureWarning: casting timedelta64[ns] values to int64 with .astype(...) is deprecated and will raise in a future version. Use .view(...) instead.
x_plt = df_none['horizon'].astype('timedelta64[ns]').astype(np.int64) / float(dt_conversions[i])
/opt/conda/envs/spark/lib/python3.8/site-packages/prophet/plot.py:540: FutureWarning: casting timedelta64[ns] values to int64 with .astype(...) is deprecated and will raise in a future version. Use .view(...) instead.
x_plt_h = df_h['horizon'].astype('timedelta64[ns]').astype(np.int64) / float(dt_conversions[i])
###Markdown
We aren't seeing many differences with longer horizons
###Code
future = m.make_future_dataframe(periods=len(val_prophet))
forecast = m.predict(future)
fig = m.plot_components(forecast)
###Output
_____no_output_____
###Markdown
Testing out Uber Orbit
###Code
from orbit.models.dlt import DLTFull
from orbit.diagnostics.plot import plot_predicted_data
dlt = DLTFull(
response_col='y', date_col='ds',
#regressor_col=['trend.unemploy', 'trend.filling', 'trend.job'],
seasonality=7,
)
dlt.fit(df=train_prophet)
# outcomes data frame
predicted_df = dlt.predict(df=val_prophet)
plot_predicted_data(
training_actual_df=train_prophet, predicted_df=predicted_df,
date_col=dlt.date_col, actual_col=dlt.response_col,
test_actual_df=val_prophet
)
###Output
_____no_output_____
###Markdown
Testing sktime
###Code
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
import numpy as np
print("min: {0}, max {1}".format(min(train.pickup_date), max(train.pickup_date)))
print("min: {0}, max {1}".format(min(val.pickup_date), max(val.pickup_date)))
train_tr = pd.date_range(min(train.pickup_date), max(train.pickup_date))
val_tr = pd.date_range(min(val.pickup_date), max(val.pickup_date))
assert len(train) == len(train_tr)
assert len(val) == len(val_tr)
train_skt_df = pd.Series(train['total_rides'].values.astype('float'), index=train_tr)
val_skt_df = pd.Series(val['total_rides'].values, index=val_tr)
plot_series(train_skt_df)
plot_series(val_skt_df)
# test pandas
#pd.PeriodIndex(pd.date_range("2020-01-01", periods=30, freq="D"))
from sktime.forecasting.base import ForecastingHorizon
#fh = ForecastingHorizon(test_sktime.index, is_relative=False)
fh_period = ForecastingHorizon(
val_skt_df.index, is_relative=False
)
from sktime.forecasting.naive import NaiveForecaster
basic_forecaster = NaiveForecaster(strategy="last")
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(train_skt_df)
# stuck here for now
preds = forecaster.predict(fh_period)
fig, ax = plot_series(val_skt_df, preds, labels=["y", "y_pred"])
mean_absolute_percentage_error(preds, val_skt_df)
from sktime.forecasting.theta import ThetaForecaster
# theta forecasting
th_forecaster = ThetaForecaster(sp=7)
th_forecaster.fit(train_skt_df)
alpha = 0.05
y_pred, y_pred_ints = th_forecaster.predict(fh_period, return_pred_int=True, alpha=alpha)
fig, ax = plot_series(val_skt_df, y_pred, labels=["y", "y_pred"])
ax.fill_between(
ax.get_lines()[-1].get_xdata(),
y_pred_ints["lower"],
y_pred_ints["upper"],
alpha=0.2,
color=ax.get_lines()[-1].get_c(),
label=f"{1 - alpha}% prediction intervals",
)
ax.legend();
mean_absolute_percentage_error(y_pred, val_skt_df)
from sktime.forecasting.ets import AutoETS
et_forecaster = AutoETS(auto=True, sp=7, n_jobs=-1)
et_forecaster.fit(train_skt_df)
y_pred = et_forecaster.predict(fh_period)
plot_series(train_skt_df, val_skt_df, y_pred, labels=["y_train", "y_test", "y_pred"])
#mean_absolute_percentage_error(y_pred, val_skt_df)
mean_absolute_percentage_error(y_pred, val_skt_df)
from sktime.forecasting.arima import AutoARIMA
ar_forecaster = AutoARIMA(sp=7, suppress_warnings=True)
ar_forecaster.fit(train_skt_df)
y_pred = ar_forecaster.predict(fh_period)
plot_series(train_skt_df, val_skt_df, y_pred, labels=["y_train", "y_test", "y_pred"])
#mean_absolute_percentage_error(y_pred, val_skt_df)
mean_absolute_percentage_error(y_pred, val_skt_df)
from sktime.forecasting.tbats import TBATS
tbats_forecaster = TBATS(sp=7, use_trend=True, use_box_cox=True)
tbats_forecaster.fit(train_skt_df)
y_pred = tbats_forecaster.predict(fh_period)
plot_series(train_skt_df, val_skt_df, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, val_skt_df)
###Output
_____no_output_____
###Markdown
Pytorch - Forecasting
###Code
from pytorch_forecasting import Baseline, NBeats, TimeSeriesDataSet
from pytorch_forecasting.data import NaNLabelEncoder, TorchNormalizer
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping
import torch
###Output
time: 1.06 s (started: 2021-08-03 12:40:57 +00:00)
###Markdown
We need to rejig this: - we need to merge the two datasets and split them by the time index - the time index must be an integer; we cannot use datetimes etc.
###Code
torch_train = train[['pickup_date', 'total_rides']].copy()
torch_train['group'] = 'Only'
torch_train['time_idx'] = torch_train.index.astype('int')
max(torch_train['time_idx'])
torch_val = val[['pickup_date', 'total_rides']].copy()
torch_val['group'] = 'Only'
torch_val.index = pd.RangeIndex(start=max(torch_train['time_idx'])+1, stop=max(torch_train['time_idx'])+len(torch_val)+1)
torch_val['time_idx'] = torch_val.index.astype('int')
merged = pd.concat([torch_train, torch_val])
merged.head()
torch_train.total_rides.head(2)
# create dataset and dataloaders
max_encoder_length = 60
max_prediction_length = len(val_skt_df)
training_cutoff = 730
context_length = max_encoder_length
prediction_length = max_prediction_length
training = TimeSeriesDataSet(
merged[lambda x: x.time_idx < training_cutoff],
time_idx="time_idx",
target="total_rides",
target_normalizer=TorchNormalizer(),
#categorical_encoders={"group": NaNLabelEncoder().fit(torch_train.group)},
group_ids=["group"],
# only unknown variable is "value" - and N-Beats can also not take any additional variables
time_varying_unknown_reals=["total_rides"],
max_encoder_length=context_length,
max_prediction_length=prediction_length,
)
validation = TimeSeriesDataSet.from_dataset(training, merged, min_prediction_idx=training_cutoff)
batch_size = 128
train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=0)
val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size, num_workers=0)
training.target_normalizer
pl.seed_everything(42)
trainer = pl.Trainer(gradient_clip_val=0.01)
net = NBeats.from_dataset(training, learning_rate=3e-2, weight_decay=1e-2,
widths=[32, 512], backcast_loss_ratio=0.1)
# find optimal learning rate
res = trainer.tuner.lr_find(net, train_dataloader=train_dataloader,
val_dataloaders=val_dataloader, min_lr=1e-5)
print(f"suggested learning rate: {res.suggestion()}")
fig = res.plot(show=True, suggest=True)
fig.show()
net.hparams.learning_rate = res.suggestion()
early_stop_callback = EarlyStopping(monitor="val_loss", min_delta=1e-4, patience=10, verbose=False, mode="min")
trainer = pl.Trainer(
max_epochs=100,
gpus=0,
weights_summary="top",
gradient_clip_val=0.01,
callbacks=[early_stop_callback],
limit_train_batches=30,
)
net = NBeats.from_dataset(
training,
learning_rate=4e-3,
log_interval=10,
log_val_interval=1,
weight_decay=1e-2,
widths=[32, 512],
backcast_loss_ratio=1.0,
)
trainer.fit(
net,
train_dataloader=train_dataloader,
val_dataloaders=val_dataloader,
)
best_model_path = trainer.checkpoint_callback.best_model_path
best_model = NBeats.load_from_checkpoint(best_model_path)
actuals = torch.cat([y[0] for x, y in iter(val_dataloader)])
predictions = best_model.predict(val_dataloader)
(actuals - predictions).abs().mean()
raw_predictions, x = best_model.predict(val_dataloader, mode="raw", return_x=True)
best_model.plot_prediction(x, raw_predictions, idx=0, add_loss_to_title=True);
# we are only forecasting 1 series so idx is just 0
#for idx in range(10): # plot 10 examples
# best_model.plot_prediction(x, raw_predictions, idx=idx, add_loss_to_title=True);
###Output
time: 200 µs (started: 2021-08-03 12:41:27 +00:00)
###Markdown
GluonTS requires pandas at least 1.2
###Code
pd.__version__
from gluonts.dataset.common import ListDataset
import matplotlib.pyplot as plt
print(train_skt_df.values.shape)
print(val_skt_df.values.shape)
# train dataset: cut the last window of length "prediction_length", add "target" and "start" fields
train_ds = ListDataset(
[{'target': train_skt_df.values.astype(int), 'start':train_skt_df.index[0].to_pydatetime() }],
freq="1D"
)
# test dataset: use the whole dataset, add "target" and "start" fields
test_ds = ListDataset(
[{'target': val_skt_df.values.astype(int), 'start': val_skt_df.index[0].to_pydatetime() }],
freq="1D"
)
#type(train_skt_df.index[0])
type(train_skt_df.index[0].to_pydatetime())
train_skt_df.index[0].to_pydatetime()
#train_skt_df.values
from gluonts.model.simple_feedforward import SimpleFeedForwardEstimator
from gluonts.mx import Trainer
estimator = SimpleFeedForwardEstimator(
num_hidden_dimensions=[10],
prediction_length=len(val_skt_df),
context_length=100,
freq="D",
trainer=Trainer(
ctx="cpu",
epochs=20,
learning_rate=1e-3,
num_batches_per_epoch=100
)
)
predictor = estimator.train(training_data=train_ds)
from gluonts.evaluation import make_evaluation_predictions
forecast_it, ts_it = make_evaluation_predictions(
dataset=test_ds, # test dataset
predictor=predictor, # predictor
num_samples=100, # number of sample paths we want for evaluation
)
forecasts = list(forecast_it)
tss = list(ts_it)
ts_entry = tss[0]
forecast_entry = forecasts[0]
def plot_prob_forecasts(ts_entry, forecast_entry):
plot_length = 150
prediction_intervals = (50.0, 90.0)
legend = ["observations", "median prediction"] + [f"{k}% prediction interval" for k in prediction_intervals][::-1]
fig, ax = plt.subplots(1, 1, figsize=(10, 7))
ts_entry[-plot_length:].plot(ax=ax) # plot the time series
forecast_entry.plot(prediction_intervals=prediction_intervals, color='g')
plt.grid(which="both")
plt.legend(legend, loc="upper left")
plt.show()
plot_prob_forecasts(ts_entry, forecast_entry)
###Output
_____no_output_____
###Markdown
Statsmodels
###Code
import statsmodels.api as sm
mod = sm.tsa.SARIMAX(train['total_rides'], order=(1, 0, 0), trend='c')
# Estimate the parameters
res = mod.fit()
print(res.summary())
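# Possible next step (illustrative sketch, not executed here): out-of-sample forecasts for the
# validation window could be produced with res.forecast(steps=len(val)).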
###Output
SARIMAX Results
==============================================================================
Dep. Variable: total_rides No. Observations: 730
Model: SARIMAX(1, 0, 0) Log Likelihood -8852.172
Date: Tue, 03 Aug 2021 AIC 17710.343
Time: 13:18:26 BIC 17724.122
Sample: 0 HQIC 17715.659
- 730
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
intercept 1.58e+05 8020.434 19.698 0.000 1.42e+05 1.74e+05
ar.L1 0.6724 0.017 39.522 0.000 0.639 0.706
sigma2 1.974e+09 0.118 1.68e+10 0.000 1.97e+09 1.97e+09
===================================================================================
Ljung-Box (L1) (Q): 24.30 Jarque-Bera (JB): 392.03
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.85 Skew: -1.03
Prob(H) (two-sided): 0.21 Kurtosis: 5.95
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.71e+25. Standard errors may be unstable.
time: 64.8 ms (started: 2021-08-03 13:18:26 +00:00)
###Markdown
Stopping Spark Session
###Code
sparksesh.stop()
###Output
time: 438 ms (started: 2021-08-03 13:41:57 +00:00)
|
jupyter/e5-neutrinos.ipynb | ###Markdown
Jupyter Example 5 for HERMES: Neutrinos
###Code
from pyhermes import *
from pyhermes.units import PeV, TeV, GeV, mbarn, kpc, pc, deg, rad
import astropy.units as u
import numpy as np
import healpy
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
HERMES has two cross-section modules available for $pp \rightarrow \nu$: * one built on top of cparamlib: Kamae et al. 2006 * one based on the Kelner-Aharonian parametrization
###Code
kamae06 = interactions.Kamae06Neutrino()
kelahar = interactions.KelnerAharonianNeutrino()
E_neutrino_range = np.logspace(0,6,100)*GeV
E_proton_list = [10*GeV, 100*GeV, 1*TeV, 100*TeV, 1*PeV]
diff_sigma = lambda model, E_proton: [
E_neutrino*model.getDiffCrossSection(E_proton, E_neutrino)/mbarn
for E_neutrino in E_neutrino_range
]
diff_sigma_kamae06 = lambda E_proton: diff_sigma(kamae06, E_proton)
diff_sigma_kelahar = lambda E_proton: diff_sigma(kelahar, E_proton)
colors = ['tab:brown', 'tab:red', 'tab:green', 'tab:blue', 'tab:orange']
for E_proton, c in zip(E_proton_list, colors):
plt.loglog(E_neutrino_range/GeV, diff_sigma_kamae06(E_proton),
ls='-', color=c, label="{}".format(E_proton.toAstroPy().to('TeV').round(2)))
plt.loglog(E_neutrino_range/GeV, diff_sigma_kelahar(E_proton),
ls='--', color=c)
plt.ylim(top=1e3, bottom=1e-2)
plt.title("Kamae06 (solid) and K&A (dashed) for a list of $E_p$")
plt.xlabel(r"$E_\nu$ / GeV")
plt.ylabel(r"$E_\nu\, \mathrm{d}\sigma_{pp \rightarrow \nu} / \mathrm{d} E_\nu$ [mbarn]")
_ = plt.legend(loc="upper right", frameon=False)
def integrate_template(integrator, nside):
integrator.setupCacheTable(60, 60, 12)
sun_pos = Vector3QLength(8.0*kpc, 0*pc, 0*pc)
integrator.setSunPosition(sun_pos)
mask_edges = ([5*deg, 0*deg], [-5*deg, 180*deg])
mask = RectangularWindow(*mask_edges)
skymap_range = GammaSkymapRange(nside, 0.05*TeV, 1e4*TeV, 20)
skymap_range.setIntegrator(integrator)
skymap_range.setMask(mask)
skymap_range.compute()
return skymap_range
def integrate_neutrino(cosmicrays, gas, crosssection):
nside = 256
integrator = PiZeroIntegrator(cosmicrays, gas, crosssection)
return integrate_template(integrator, nside)
neutral_gas_HI = neutralgas.RingModel(neutralgas.RingType.HI)
proton = cosmicrays.Dragon2D(Proton)
skymap_range_neutrino_HI_kamae06 = integrate_neutrino(proton, neutral_gas_HI, kamae06)
skymap_range_neutrino_HI_kelahar = integrate_neutrino(proton, neutral_gas_HI, kelahar)
#use_units = skymap_range_HI[0].getUnits() # default units for GammaSkymap (GeV^-1 m^-2 s^-1 sr^-1)
use_units = "GeV^-1 cm^-2 s^-1 sr^-1" # override default
skymap_units = u.Quantity(1, use_units)
base_units = skymap_units.unit.si.scale
def calc_mean_flux(skymap_range):
energies = np.array([float(s.getEnergy()/GeV) for s in skymap_range])
fluxes = np.array([s.getMean() for s in skymap_range]) / base_units
return energies, fluxes
def plot_spectrum(skymap_range, label, style):
energies, fluxes = calc_mean_flux(skymap_range)
plt.plot(energies, fluxes*energies**2, style, label=label)
def plot_total_spectrum(list_of_skymap_range, label, style):
fluxes = QDifferentialIntensity(0)
for skymap_range in list_of_skymap_range:
energies, fluxes_i = calc_mean_flux(skymap_range)
fluxes = fluxes + fluxes_i
plt.plot(energies, fluxes*energies**2, style, label=label)
fig, ax = plt.subplots()
plot_spectrum(skymap_range_neutrino_HI_kamae06, r'$\nu $ @ p + HI (Kamae06)', '-')
plot_spectrum(skymap_range_neutrino_HI_kelahar, r'$\nu $ @ p + HI (K&A)', '--')
plt.title("Neutrinos from diffuse emission (Fornieri20, Remy18)\n $|b| < 5^\degree$, $0^\degree \leq l \leq 180^\degree$")
plt.legend(loc="lower left")
plt.xlabel(r"$E_\nu$ / GeV")
plt.ylabel(r"$E_\nu\, \mathrm{d}\Phi_\nu / \mathrm{d} E_\nu$ / " + (skymap_units*u.GeV**2).unit.to_string(format='latex_inline'))
ax.tick_params(which='minor', direction='in', axis='both', bottom=True, top=True, left=True, right=True, length=3)
ax.tick_params(which='major', direction='in', axis='both', bottom=True, top=True, left=True, right=True, length=5)
plt.xscale("log")
plt.yscale("log")
plt.ylim(10**(-9), 10**(-6))
plt.xlim(10**(2), 10**(6))
#plt.savefig("img/neutrinos-from-diffuse-emission-spectrum-180.pdf", dpi=150)
###Output
_____no_output_____ |
Semester/3.ipynb | ###Markdown
Section: H Roll No.: 29
###Code
from sklearn.datasets import load_iris
iris = load_iris()
import pandas as pd
data = pd.DataFrame(iris.data, columns=iris.feature_names)
data["species"] = pd.DataFrame(iris.target)
data.head()
from sklearn.neural_network import MLPClassifier
model = MLPClassifier(
hidden_layer_sizes=(12,),
max_iter=5000,
activation='logistic',
solver='sgd',
learning_rate_init=0.001)
X = data.iloc[:, :-1]
Y = data.iloc[:, -1]
model_fit = model.fit(X, Y)
X_test = pd.DataFrame([
[5, 2, 1, 1],
[2, 7, 4, 1]
])
Y_test = model.predict(X_test)
for y in Y_test:
print(iris.target_names[y])
###Output
setosa
setosa
|
thunder_svm.ipynb | ###Markdown
TEST TRAIN INDEX GENERATION FOR FOLDS
###Code
attribute_map = [
# {"Race": ["African", "Asian", "Indian", "Caucasian"]},
{"skintype": ["type1", "type2", "type3", "type4", "type5", "type6"]},
{"eye": ["normal", "narrow"]},
{"haircolor": ["red", "black", "gray", "brown", "blonde"]},
{"hairtype": ["straight", "wavy", "bald", "curly"]},
{"lips": ["small", "big"]},
{"nose": ["wide", "narrow"]},
]
vgg = pd.read_csv("/mnt/HDD/FaceDatasetCenter/metadata/VGGFace2_metadata_FDA.csv")
vgg_test = vgg[vgg.type == "test"]
vgg_test.head()
vgg_image = pd.read_csv(
"/mnt/HDD/FaceDatasetCenter/metadata/VGGFace2_image_meta_test.csv"
)
vgg_image_test = vgg_image[vgg_image.type == "test"]
vgg_image_test = vgg_image_test.sort_values(by="file")
vgg_image_test.head()
NUM_FOLDS = 3
TEST_SAMPLE_SIZE = 50
folds_folder = Path("folds")
folds_folder.mkdir(exist_ok=True)
def generate_fold():
all_folds = []
for fold in range(0, NUM_FOLDS):
print(TEST_SAMPLE_SIZE * NUM_FOLDS)
print(f"Fold {fold+1}")
class_folds = {"train": [], "test": []}
for i, group in vgg_image_test.groupby("Class_ID"):
num_samples = group.shape[0]
test_mask = np.zeros(num_samples, dtype=np.bool)
if TEST_SAMPLE_SIZE * NUM_FOLDS > num_samples:
start = fold * TEST_SAMPLE_SIZE
end = start + TEST_SAMPLE_SIZE
ix = [i % num_samples for i in range(start, end)]
# print(f"ClassID: {i}, fold: {fold} - [{ix[0]}:{ix[-1]}]")
else:
class_fold_size = num_samples // NUM_FOLDS
start = fold * class_fold_size
end = start + class_fold_size
ix = range(start, end)
test_mask[ix] = True
try:
class_folds["test"].append(
group[test_mask].sample(n=TEST_SAMPLE_SIZE, random_state=0)
)
except:
import pdb
pdb.set_trace()
class_folds["train"].append(group[~test_mask])
all_folds.append(class_folds)
return all_folds
all_folds = generate_fold()
print(len(all_folds))
for i, fold in enumerate(all_folds):
train = pd.concat(fold["train"])
test = pd.concat(fold["test"])
train.to_parquet(folds_folder / f"fold_{i}_train.pq", compression="GZIP")
test.to_parquet(folds_folder / f"fold_{i}_test.pq", compression="GZIP")
###Output
150
Fold 1
150
Fold 2
150
Fold 3
3
###Markdown
Feature loading
###Code
features = np.load("features/vggface2_test_features.npy",allow_pickle=True)
path_arr = np.load("features/vggface2_test_paths.npy",allow_pickle=True)
meta = pd.DataFrame(path_arr, columns=["full_path"])
meta["file"] = meta.full_path.apply(lambda x: "/".join(Path(x).parts[-2:]))
labels = list(map(lambda x: str(x).split("/")[-2], path_arr))
le = preprocessing.LabelEncoder()
labels = le.fit_transform(labels)
meta["y_test"] = labels
meta = meta.merge(vgg_image_test,how='left',on='file')
meta.head()
#Train CODE
def train(X,y):
all_predictions = []
for i,fold in enumerate(range(0, NUM_FOLDS)):
train_ixs = pd.read_parquet(folds_folder / f"fold_{i}_100_train.pq")
test_ixs = pd.read_parquet(folds_folder / f"fold_{i}_100_test.pq")
print(folds_folder / f"fold_{i}_train.pq")
print(test_ixs.shape)
print(meta[meta.file.isin(test_ixs.file)].index.shape)
test_index = meta[meta.file.isin(test_ixs.file)].index
train_index = meta[meta.file.isin(train_ixs.file)].index
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
print(X_test.shape,y_test.shape)
print('SVM Training...')
svm_model_linear = SVC(kernel="linear", C=1).fit(X_train, y_train)
print("Starting prediction (The computer can be unresponsive during the prediction).")
TEST_BATCH_SIZE = 1000
preds = []
ys=[]
with tqdm(total=X_test.shape[0], file=sys.stdout) as pbar:
for i in range(0, X_test.shape[0], TEST_BATCH_SIZE):
X_test_batch = X_test[i : i + TEST_BATCH_SIZE]
pred = svm_model_linear.predict(X_test_batch)
preds.append(pred)
# update tqdm
pbar.set_description("Processed: %d" % (1 + i))
pbar.update(TEST_BATCH_SIZE)
        all_predictions.append(preds)
return all_predictions
X = np.asarray(features, dtype=np.float32)
y = labels
pred_list = train(X,y)
ap = np.asarray(pred_list)
ap = ap.reshape(ap.shape[0],ap.shape[1]*ap.shape[2])
ap.shape
# TEST CODE
result_dic = []
for i, fold in enumerate(range(0, 3)):
train_ixs = pd.read_parquet(folds_folder / f"fold_{i}_train.pq")
test_ixs = pd.read_parquet(folds_folder / f"fold_{i}_test.pq")
print(meta.shape,test_ixs.index)
meta_test = meta[meta.file.isin(test_ixs.file)]
print(meta_test.shape)
test_index = meta_test.index
y_test = y[test_index]
meta_test["y_pred"] = ap[i]
print("Overall Accuracy:", accuracy_score(meta_test["y_test"], meta_test["y_pred"]))
print("Group initilization!")
for attr in attribute_map:
# for one value of one attribute
for key, value in attr.items():
for val in value:
subgroup = meta_test[meta_test[key] == val]
score= accuracy_score(subgroup["y_test"], subgroup["y_pred"])
print(
key,
val,
":",
score,
subgroup.shape,
)
result_dic.append([key,val,score,subgroup.shape[0]])
subgroup.shape
results = pd.DataFrame(result_dic,columns=['feature','category','acc','size'])
results["attribute_name"] = results["feature"] +'_'+ results["category"]
results = results.groupby('attribute_name').mean().sort_values(by='attribute_name')
results = results.reset_index()
results['attribute'] = results.attribute_name.apply(lambda x:x.split('_')[0])
total_size = results.groupby('attribute').sum()['size'][0]
print('total size',total_size)
results['Ratio']=results['size'].apply(lambda x: x/total_size)
results
results.to_csv('face_identifiacation_attribute_based_results.csv',index=False)
print("std:", np.std(results.acc.values,ddof=1) * 100)
print("bias:", (1-results.acc.values.min()) / (1-results.acc.values.max()))
(1.0-results.acc.values.min())/(1-results.acc.values.max())
1-results.acc.values.max()
results
(1.0-results.acc.values.min()),(1-results.acc.values.max())
# print(results[['feature_name','acc']].to_latex())
results["Ratio"] = results["Ratio"].apply(lambda x: f"{100*x:.2f}")
results["Acc"] = results["acc"].apply(lambda x: f"{100*x:.2f}")
results["attribute_name"] = results["attribute_name"].str.replace("skintype", "")
results["attribute_name"] = results["attribute_name"].str.replace("haircolor", "hair ")
results["attribute_name"] = results["attribute_name"].str.replace("hairtype", "hair ")
results["attribute_name"] = results["attribute_name"].str.title()
results["attribute_name"] = results["attribute_name"].apply(
lambda x: " ".join(x.split("_")[::-1])
)
results = results.sort_values(by='Acc')
attribute_res = results[["attribute_name","Ratio", "Acc"]]
attribute_res = pd.concat(
[
attribute_res.iloc[:11].reset_index(drop=True),
attribute_res.iloc[11:].reset_index(drop=True),
],
axis=1,
ignore_index=True,
)
attribute_res.columns = ["Attribute Category", "Ratio (%)", "Accuracy (%)", "Attribute Category", "Ratio (%)", "Accuracy (%)"]
attribute_res
print(
attribute_res.to_latex(
index=False, caption="Table Caption", label="tab:fi1", na_rep=""
)
)
print(type(all_predictions), type(y_s))
for y_test, y_pred in zip(y_s, all_predictions):
print(type(y_pred), type(y_test))
y_pred = np.array(list(chain(*y_pred)))
print("Overall Accuracy:", accuracy_score(y_test, y_pred))
feature_arr = np.asarray(features, dtype=np.float32)
print(feature_arr[0][0], np.mean(feature_arr), np.std(feature_arr))
feature_arr = preprocessing.normalize(feature_arr)
print(feature_arr[0][0], np.mean(feature_arr), np.std(feature_arr))
labels = list(map(lambda x: x.split("/")[-2], path_arr))
le = preprocessing.LabelEncoder()
labels = le.fit_transform(labels)
X = feature_arr
y = labels
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
print("Data is ready!", X.shape, X.shape)
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.1, random_state=0)
for train_index, test_index in sss.split(X, y):
print("TRAIN:", type(train_index), "TEST:", type(test_index))
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
svm_model_linear = SVC(kernel="linear", C=20).fit(X_train, y_train)
print("Training is completed.")
# import datetime
# timestr = datetime.datetime.now().isoformat().replace(":", ".")
# svm_model_file = f"svm_model_{timestr}"
# svm_model_linear.save_to_file(svm_model_file)
# print(f"Saved model to: {svm_model_file}")
import sys
from itertools import chain
from tqdm import tqdm
print("Starting prediction (The computer can be unresponsive during the prediction).")
TEST_BATCH_SIZE = 1000
preds = []
with tqdm(total=X_test.shape[0], file=sys.stdout) as pbar:
for i in range(0, X_test.shape[0], TEST_BATCH_SIZE):
X_test_batch = X_test[i : i + TEST_BATCH_SIZE]
pred = svm_model_linear.predict(X_test_batch)
preds.append(pred)
# update tqdm
pbar.set_description("Processed: %d" % (1 + i))
pbar.update(TEST_BATCH_SIZE)
y_pred = np.array(list(chain(*preds)))
print("Overall Accuracy:", accuracy_score(y_test, y_pred))
test_ixs["y_pred"] = y_pred.astype(np.int)
test_ixs["y_test"] = y_test
test_ixs = test_ixs.rename(columns={"subject": "Class_ID"})
test_data = test_ixs.merge(vgg_test, on="Class_ID", how="left")
attribute_map = [
{"skintype": ["type1", "type2", "type3", "type4", "type5", "type6"]},
{"hairtype": ["straight", "wavy", "bald", "curly"]},
{"haircolor": ["red", "black", "grey", "brown", "blonde"]},
{"lips": ["small", "big"]},
{"eye": ["normal", "narrow"]},
{"nose": ["wide", "narrow"]},
]
print("Group initilization!")
for attr in attribute_map:
# for one value of one attribute
for key, value in attr.items():
for val in value:
subgroup = test_data[test_data[key] == val]
print(
key,
val,
":",
accuracy_score(subgroup["y_test"], subgroup["y_pred"]),
subgroup.shape,
)
# y_pred = svm_model_linear.predict(X_test)
# features = np.load('features/unlearn_races_r50_feat_05ep.npz')
# meta_data = pd.read_csv('metadata/VGGFace2_200_Subjects_Test_Images.csv')
# features = np.load('../FeatureEncodingsRFW/senet50_ft_features.npy')
# train_ixs = pd.read_csv('../train_test_split/rfwtest_train_indexes.csv')
# test_ixs = pd.read_csv('../train_test_split/rfwtest_test_indexes.csv')
features = features["arr_0"]
feature_arr = np.asarray(features[:][:, :-1], dtype=np.float64)
path_arr = features[:][:, -1]
labels = list(map(lambda x: x.split("/")[0], path_arr))
le = preprocessing.LabelEncoder()
labels = le.fit_transform(labels)
X = pd.DataFrame(feature_arr)
y = pd.Series(labels)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
print("SVM!")
svm_model_linear = SVC(kernel="linear", C=1).fit(X_train, y_train)
y_pred = svm_model_linear.predict(X_test)
print("Overall Accuracy:", accuracy_score(y_test.values, y_pred))
print("Group initilization!")
test_pathes = path_arr[y_test.index.values]
for race in ["African", "Asian", "Caucasian", "Indian"]:
for gender in ["m", "f"]:
main_group = meta_data[
(meta_data.race == race)
& (meta_data.gender == gender)
& (meta_data.generated_version == "original")
]
group_file = main_group.filename.values
indexes = []
for el in group_file:
loc = np.argwhere(test_pathes == el)
if loc.size != 0:
indexes.append(int(loc[0][0]))
if len(indexes) > 0:
indexes = np.asarray(indexes)
print(race, gender)
print(
" accuracy:%d %.3f"
% (
len(y_test.values[indexes]),
accuracy_score(y_test.values[indexes], y_pred[indexes]),
)
)
from sklearn.model_selection import GridSearchCV
feature_arr = np.asarray(features[:][:, :-1], dtype=np.float64)
path_arr = features[:][:, -1]
labels = list(map(lambda x: x.split("/")[0], path_arr))
le = preprocessing.LabelEncoder()
labels = le.fit_transform(labels)
X = pd.DataFrame(feature_arr)
y = pd.Series(labels)
param_grid = [
{"C": [1, 10, 100, 1000], "kernel": ["linear"]},
{"C": [1, 10, 100, 1000], "gamma": [0.001, 0.0001], "kernel": ["rbf"]},
]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
grid_search = GridSearchCV(SVC(), param_grid, cv=2)
svm_model = grid_search.fit(X_train, y_train)
print(grid_search.best_params_)
# y_pred = svm_model.predict(X_test)
# print('Overall Accuracy:',accuracy_score(y_test.values, y_pred))
# print('Group initilization!')
# test_pathes = path_arr[y_test.index.values]
# for race in ['African','Asian','Caucasian', 'Indian']:
# for gender in ['f','m']:
# main_group = meta_data[(meta_data.race == race) & (meta_data.gender== gender) & (meta_data.generated_version== 'original') ]
# group_file = main_group.filename.values
# indexes = []
# for el in group_file:
# loc = np.argwhere(test_pathes==el)
# if loc.size != 0:
# indexes.append(int(loc[0][0]))
# if len(indexes)>0:
# indexes = np.asarray(indexes)
# print(race,gender)
# print(' accuracy:%d %.3f'%(len(y_test.values[indexes]), accuracy_score(y_test.values[indexes], y_pred[indexes])))
###Output
_____no_output_____ |
Vehicle_Classification_App.ipynb | ###Markdown
The Vehicle Classification App This app will take in a picture of a vehicle and identify it as a car, a motorbike, or a bus.
###Code
from fastai.vision.all import *
from fastai.vision.widgets import *
#import pathlib
#temp = pathlib.PosixPath
#pathlib.PosixPath = pathlib.WindowsPath
path = Path()
learn_inf = load_learner(path/'Vehicle_Classifier.pkl')
btn_upload = widgets.FileUpload()
out_pl = widgets.Output()
lbl_pred = widgets.Label()
def on_data_change(change):
lbl_pred.value = ''
img = PILImage.create(btn_upload.data[-1])
out_pl.clear_output()
with out_pl: display(img.to_thumb(128,128))
pred,pred_idx,probs = learn_inf.predict(img)
lbl_pred.value = f'Prediction: {pred}; Probability: {probs[pred_idx]:.04f}'
btn_upload.observe(on_data_change, names=['data'])
display(VBox([widgets.Label('Upload a picture of the vehicle'), btn_upload,out_pl, lbl_pred]))
###Output
_____no_output_____ |
notebooks/compare_energy.ipynb | ###Markdown
print("Test")
###Code
dispatch_solution = aemo.get_table("dispatch_unit_solution")
# df = dispatch_solution.to_frame()
# df
# a = len(dispatch_solution.records)
# print(f"records: {a}")
dispatch_solution.records
# test it out with v2 methods
from datetime import datetime, date, timedelta
sa1_wind_duids = ["BLUFF1"]
sa1_wind_df = df[df.index.isin(sa1_wind_duids, level=1)]
sa1_wind_df
# sa1_wind_df
# sa1_wind_df_energy = trading_energy_data(sa1_wind_df, date(2021, 3, 4))
# df_energy_v2
# sa1_wind_df = sa1_wind_df.set_index("SETTLEMENTDATE")
# sa1_wind_df.sort_index(inplace=True)
# sa1_wind_df.OUTPUT_MWH.sum() / 1000
from opennem.workers.energy import shape_energy_dataframe, get_generated_query, get_generated
from opennem.core.energy import energy_sum
date_start = datetime.fromisoformat("2021-03-03 23:55:00+10:00")
date_end = datetime.fromisoformat("2021-03-07 00:05:00+10:00")
generated_results = get_generated(date_min=date_start, date_max=date_end, network=NetworkNEM, fueltech_id="wind")
dfv3 = shape_energy_dataframe(generated_results)
dfv3 = dfv3.set_index(["trading_interval"])
dfv3
def __trapezium_integration(d_ti, power_field: str = "MWH_READING"):
# trapezoidal rule over the seven 5-minute readings of a 30-minute trading interval:
# the end points are weighted 1 and the interior points 2; the 0.5 factor and the /12
# (5 minutes = 1/12 hour) convert the weighted MW sum into MWh for the half-hour
return 0.5*(d_ti[power_field] * [1,2,2,2,2,2,1]).sum()/12
def __trading_energy_generator(df, date, duid_id, power_field: str = "generated"):
return_cols = []
t_start = datetime(date.year, date.month,date.day,0,5)
#48 trading intervals in the day
#(could be better with groupby function)
for TI in range(48):
#t_i initial timestamp of trading_interval, t_f = final timestamp of trading interval
t_i = t_start + timedelta(0,1800*TI)
t_f = t_start + timedelta(0,1800*(TI+1))
_query = f"'{t_i}' <= trading_interval <= '{t_f}' and facility_code == '{duid}'"
d_ti = df.query(_query)
energy_value = None
# interpolate if it isn't padded out
if d_ti[power_field].count() != 7:
index_interpolated = pd.date_range(start= t_i, end= t_f, freq='5min')
d_ti = d_ti.reset_index()
d_ti = d_ti.set_index("trading_interval")
d_ti = d_ti.reindex(index_interpolated)
d_ti["duid"] = duid_id
try:
energy_value = __trapezium_integration(d_ti, power_field)
except ValueError as e:
print("Error with {} at {} {}: {}".format(duid, t_i, t_f, e))
if not d_ti.index.empty:
return_cols.append({
"trading_interval": d_ti.index[-2],
"network_id": "NEM",
"facility_code": duid_id,
"eoi_quantity": energy_value
})
return return_cols
def trading_energy_data(df):
energy_genrecs = []
for day in get_day_range(df):
for duid in sorted(df.facility_code.unique()):
energy_genrecs += [d for d in __trading_energy_generator(df, day, duid)]
df = pd.DataFrame(
energy_genrecs, columns=["trading_interval", "network_id", "facility_code", "eoi_quantity"]
)
return df
def get_day_range(df):
min_date = (df.index.min() + timedelta(days=1)).date()
max_date = (df.index.max() - timedelta(days=1)).date()
cur_day = min_date
while cur_day <= max_date:
yield cur_day
cur_day += timedelta(days=1)
trading_energy_data(dfv3)
from opennem.core.energy import energy_sum
generated_results = get_generated("SA1", date_start, date_end, NetworkNEM, "wind")
dfv4 = shape_energy_dataframe(generated_results)
dfv4.facility_code
d_ti["initialmw"] = np.array([np.NaN, np.NaN, 0.2, np.NaN, np.NaN])
d_ti
index = pd.date_range(start= t_i, end= t_f, freq='5min')
d_ti = d_ti.reset_index()
d_ti = d_ti.set_index("settlementdate")
df2 = d_ti.reindex(index)
df2["duid"] = "BLUFF1"
df2
df3 = df2
df3.initialmw = df3.initialmw.interpolate(limit_direction='both')
df3
df4 = df2
df4["initialmw"].interpolate(method="", limit_direction='both')
df4
date_min = datetime.fromisoformat("2021-03-02T00:00:00+10:00")
date_max = date_min + timedelta(days=3)
query = get_generated_query("NSW1", date_min, date_max, NetworkNEM, "coal_black")
print(query)
df = pd.read_sql(query, engine, index_col=["trading_interval", "network_id", "facility_code"])
df = df.rename(columns={"generated": "eoi_quantity"})
df_energy = _energy_aggregate(df, NetworkNEM)
# df_energy = df_energy.drop(['facility_code', 'network_id'], axis=1)
df_energy = df_energy.set_index(["trading_interval"])
df = df.reset_index()
df = df.set_index(["trading_interval"])
# df = df.drop(['facility_code', 'network_id'], axis=1)
dfj = df_energy.join(df, on="trading_interval", lsuffix='_energy', rsuffix='_power')
# print(query)
# dfj.set_index()
# print(query)
# dfj.eoi_quantity_energy.sum(), dfj.eoi_quantity_energy.sum()
df_energy.resample("D").eoi_quantity.sum() / 1000
# df_energy
print(query)
URI_V2_NSW_DAILY = "https://data.opennem.org.au/nsw1/energy/daily/2021.json"
r = http.get(URI_V2_NSW_DAILY)
v2 = load_statset_v2(r.json())
v2_imports = list(filter(lambda x: "imports.energy" in x.id, v2.data)).pop().history.values()
v2df = pd.DataFrame(v2_imports, columns=["trading_interval", "energy"])
v2df = v2df.set_index(["trading_interval"])
date_min = datetime.fromisoformat("2021-01-01T00:00:00+10:00")
date_max = datetime.fromisoformat("2021-03-03T00:00:00+10:00")
query = """
select
bs.trading_interval at time zone 'AEST' as trading_interval,
bs.network_id,
bs.network_region as facility_code,
'imports' as fueltech_id,
case when bs.net_interchange_trading < 0 then
bs.net_interchange_trading
else 0
end as generated
from balancing_summary bs
where
bs.network_id='NEM'
and bs.network_region='NSW1'
and bs.trading_interval >= '{date_min}'
and bs.trading_interval <= '{date_max}'
and bs.net_interchange_trading is not null
order by trading_interval asc;
""".format(
date_min=date_min - timedelta(minutes=10),
date_max=date_max + timedelta(minutes=10)
)
df = pd.read_sql(query, engine, index_col=["trading_interval", "network_id", "facility_code"])
df_energy = _energy_aggregate(df)
df_energy = df_energy.set_index(["trading_interval"])
# es = df.resample("30min").generated.sum() / 6 * 0.5
# es = es.to_frame().resample("D").generated.sum() / 1000
es = (df_energy.reset_index().set_index("trading_interval").resample("D").eoi_quantity.sum() / 1000).to_frame()
c = v2df.join(es)
c["is_eqal"] = c.energy == c.eoi_quantity
df.generated = df.generated / 2
df = df.reset_index()
df = df.set_index(["trading_interval"])
em = (df.resample("D").generated.sum() / 1000).to_frame()
c = v2df.join(em)
c["energy_sum"] = es.eoi_quantity
c["is_eqal"] = c.energy == c.generated
c
query2 = """
select
fs.trading_interval at time zone 'AEST' as trading_interval,
-- fs.facility_code,
-- fs.network_id,
-- generated,
eoi_quantity
from
facility_scada fs
left join facility f on fs.facility_code = f.code
where
fs.network_id='NEM'
and f.network_region='NSW1'
and fs.trading_interval >= '{date_min}'
and fs.trading_interval <= '{date_max}'
and fs.is_forecast is False
and f.fueltech_id = 'solar_rooftop'
and fs.eoi_quantity is not null
order by fs.trading_interval asc, 2;
""".format(
date_min=date_min,
date_max=date_max
)
df.energy = pd.read_sql(query2, engine_local, index_col=["trading_interval"]).eoi_quantity.sum() / 1000
q = """
select
fs.trading_interval at time zone 'AEST' as trading_interval,
--fs.facility_code,
--fs.network_id,
generated
--eoi_quantity
from
facility_scada fs
left join facility f on fs.facility_code = f.code
where
fs.network_id='NEM'
and f.network_region='NSW1'
and fs.trading_interval >= '2021-02-15 00:00:00+10:00'
and fs.trading_interval <= '2021-02-16 00:00:00+10:00'
and fs.is_forecast is False
and f.fueltech_id = 'solar_rooftop'
and fs.generated is not null
order by fs.trading_interval asc, 2;
"""
dfl = pd.read_sql(q, engine_local, index_col=["trading_interval"])
dfs = pd.read_sql(q, engine, index_col=["trading_interval"])
j = dfl.join(dfs, lsuffix="_local")
j["eq"] = j.generated_local == j.generated
j
nsw_coal_black_duids = [
"MP1",
"REDBANK1",
"VP5",
"VP6",
"LD01",
"LD02",
"LD03",
"LD04",
"BW01",
"BW02",
"BW03",
"BW04",
"ER01",
"ER02",
"ER03",
"ER04",
"MM3",
"MM4",
"MP2",
"WW7",
"WW8"
]
nsw_rooftop_duids = [
"ROOFTOP_NEM_NSW"
]
df_nsw_coal = df[df.DUID.isin(nsw_coal_black_duids)]
network = NetworkNEM
# setup v2 df
df_nsw_coal_v2 = df_nsw_coal
df_nsw_coal_v2.SETTLEMENTDATE = pd.to_datetime(df_nsw_coal_v2.SETTLEMENTDATE)
df_nsw_coal_v2.INITIALMW = pd.to_numeric(df_nsw_coal_v2.INITIALMW)
df_nsw_coal_v2 = df_nsw_coal_v2.set_index(["SETTLEMENTDATE"])
# setup v1 df
df_nsw_coal = df_nsw_coal.rename(columns={"SETTLEMENTDATE": "trading_interval", "DUID": "facility_code", "INITIALMW": "eoi_quantity"})
df_nsw_coal["network_id"] = "NEM"
df_nsw_coal.trading_interval = df_nsw_coal.apply(
lambda x: pd.Timestamp(x.trading_interval, tz=network.get_fixed_offset()), axis=1
)
df_nsw_coal.trading_interval = pd.to_datetime(df_nsw_coal.trading_interval)
df_nsw_coal.eoi_quantity = pd.to_numeric(df_nsw_coal.eoi_quantity)
df_nsw_coal = df_nsw_coal.set_index(["trading_interval", "network_id", "facility_code"])
df_energy = _energy_aggregate(df_nsw_coal, network)
df_energy.trading_interval = df_energy.trading_interval - pd.Timedelta(minutes=network.reading_shift)
df_energy = df_energy.set_index(["trading_interval"])
df_energy
df_energy.resample("D").eoi_quantity.sum() / 1000
# test it out with v2 methods
from datetime import datetime, date, timedelta
def __trapezium_integration(d_ti):
return 0.5*(d_ti["INITIALMW"] * [1,2,2,2,2,2,1]).sum()/12
def __trading_energy_generator(df, date, duid_id):
df.sort_index(inplace=True)
t_start = datetime(date.year, date.month,date.day,0,5)
#48 trading intervals in the day
#(could be better with groupby function)
for TI in range(48):
#t_i initial timestamp of trading_interval, t_f = final timestamp of trading interval
t_i = t_start + timedelta(0,1800*TI)
t_f = t_start + timedelta(0,1800*(TI+1))
d_ti = df[(df.index>=t_i) & (df.index<=t_f) & (df.DUID == duid_id)]
if not d_ti.index.empty:
yield d_ti.index[-2], duid_id, __trapezium_integration(d_ti)
def trading_energy_data(df, date):
energy_genrecs = []
for duid in sorted(nsw_coal_black_duids):
energy_genrecs += [d for d in __trading_energy_generator(df, date, duid)]
df = pd.DataFrame(energy_genrecs, columns=['SETTLEMENTDATE', 'DUID','OUTPUT_MWH'])
return df
# df_nsw_coal_v2
df_energy_v2 = trading_energy_data(df_nsw_coal_v2, date(2021, 2, 14))
# df_energy_v2
df_energy_v2 = df_energy_v2.set_index("SETTLEMENTDATE")
df_energy_v2.sort_index(inplace=True)
df_energy_v2.OUTPUT_MWH.sum() / 1000
df_energy_v2
df_energy_v2[df_energy_v2.index == datetime.fromisoformat("2021-02-14 00:30:00")]
df_v3 = df_energy[df_energy.index.date == date(2021, 2, 14)]
# df_energy.index.date()
df_v3 = df_v3[["facility_code", "eoi_quantity"]]
# df_v3
df_v3[df_v3.index == datetime.fromisoformat("2021-02-14 00:25:00+10:00")]
###Output
_____no_output_____ |
modulo03/logica-booleana.ipynb | ###Markdown
```if```, ```else```, and ```elif``` statements 1 - Warning if an asteroid approaches Earth too fast
###Code
# Add the code needed to create a variable that stores the asteroid's velocity.
# Write a test expression to determine whether a warning is needed.
# Add the statements that will run if the test expression is true or false.
vel_asteroide = 49
if(vel_asteroide > 25):
print('Warning! An asteroid is approaching at ' + str(vel_asteroide) + " km/s.")
else:
print('Nothing to worry about for now')
###Output
Warning! An asteroid is approaching at 49 km/s.
###Markdown
2 - An asteroid enters Earth's atmosphere
###Code
# Add the code needed to create a variable that stores the asteroid's velocity.
# Write a test expression to determine whether a warning is needed.
# Add the statements that will run if the test expression is true or false.
vel_asteroide = 19
if(vel_asteroide > 20):
print('Attention! A flash of light appears in the sky')
elif(vel_asteroide == 20):
print('Attention! A flash of light appears in the sky')
else:
print('Nothing to report')
###Output
Nothing to report
###Markdown
3 - When asteroids pose a danger to Earth
###Code
# Add the code to create new variables for the asteroid's velocity and size
# To test the code, try several velocities and sizes
# Write several test expressions or combinations of test expressions to determine which message should be sent to Earth.
vel_asteroide = 21
dimension_asteroide = 500
if (dimension_asteroide > 25 and dimension_asteroide < 1000) or vel_asteroide > 25:
print('Warning! A dangerous asteroid is approaching Earth')
elif(vel_asteroide >= 20):
print('Attention! A flash of light appears in the sky')
elif(vel_asteroide < 20 and dimension_asteroide < 25):
print('No danger')
else:
print('Nothing to report')
###Output
Warning! A dangerous asteroid is approaching Earth
|
Predictive Modelling/Titanic Prediction/Titanic Prediction.ipynb | ###Markdown
Titanic Data Science Solutions This notebook is a companion to the book [Data Science Solutions](https://www.amazon.com/Data-Science-Solutions-Startup-Workflow/dp/1520545312). The notebook walks us through a typical workflow for solving data science competitions at sites like Kaggle.There are several excellent notebooks to study data science competition entries. However many will skip some of the explanation on how the solution is developed as these notebooks are developed by experts for experts. The objective of this notebook is to follow a step-by-step workflow, explaining each step and rationale for every decision we take during solution development. Workflow stagesThe competition solution workflow goes through seven stages described in the Data Science Solutions book.1. Question or problem definition.2. Acquire training and testing data.3. Wrangle, prepare, cleanse the data.4. Analyze, identify patterns, and explore the data.5. Model, predict and solve the problem.6. Visualize, report, and present the problem solving steps and final solution.7. Supply or submit the results.The workflow indicates general sequence of how each stage may follow the other. However there are use cases with exceptions.- We may combine mulitple workflow stages. We may analyze by visualizing data.- Perform a stage earlier than indicated. We may analyze data before and after wrangling.- Perform a stage multiple times in our workflow. Visualize stage may be used multiple times.- Drop a stage altogether. We may not need supply stage to productize or service enable our dataset for a competition. Question and problem definitionCompetition sites like Kaggle define the problem to solve or questions to ask while providing the datasets for training your data science model and testing the model results against a test dataset. The question or problem definition for Titanic Survival competition is [described here at Kaggle](https://www.kaggle.com/c/titanic).> Knowing from a training set of samples listing passengers who survived or did not survive the Titanic disaster, can our model determine based on a given test dataset not containing the survival information, if these passengers in the test dataset survived or not.We may also want to develop some early understanding about the domain of our problem. This is described on the [Kaggle competition description page here](https://www.kaggle.com/c/titanic). Here are the highlights to note.- On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. Translated 32% survival rate.- One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew.- Although there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper-class. Workflow goalsThe data science solutions workflow solves for seven major goals.**Classifying.** We may want to classify or categorize our samples. We may also want to understand the implications or correlation of different classes with our solution goal.**Correlating.** One can approach the problem based on available features within the training dataset. Which features within the dataset contribute significantly to our solution goal? Statistically speaking is there a [correlation](https://en.wikiversity.org/wiki/Correlation) among a feature and solution goal? 
As the feature values change does the solution state change as well, and vice versa? This can be tested both for numerical and categorical features in the given dataset. We may also want to determine correlation among features other than survival for subsequent goals and workflow stages. Correlating certain features may help in creating, completing, or correcting features.**Converting.** For the modeling stage, one needs to prepare the data. Depending on the choice of model algorithm one may require all features to be converted to numerical equivalent values. So for instance converting text categorical values to numeric values.**Completing.** Data preparation may also require us to estimate any missing values within a feature. Model algorithms may work best when there are no missing values.**Correcting.** We may also analyze the given training dataset for errors or possibly inaccurate values within features and try to correct these values or exclude the samples containing the errors. One way to do this is to detect any outliers among our samples or features. We may also completely discard a feature if it is not contributing to the analysis or may significantly skew the results.**Creating.** Can we create new features based on an existing feature or a set of features, such that the new feature follows the correlation, conversion, completeness goals?**Charting.** How to select the right visualization plots and charts depending on the nature of the data and the solution goals. Refactor Release 2017-Jan-29We are significantly refactoring the notebook based on (a) comments received by readers, (b) issues in porting the notebook from the Jupyter kernel (2.7) to the Kaggle kernel (3.5), and (c) review of a few more best practice kernels. User comments- Combine training and test data for certain operations like converting titles across the dataset to numerical values. (thanks @Sharan Naribole)- Correct observation - nearly 30% of the passengers had siblings and/or spouses aboard. (thanks @Reinhard)- Correctly interpreting logistic regression coefficients. (thanks @Reinhard) Porting issues- Specify plot dimensions, bring legend into plot. Best practices- Performing feature correlation analysis early in the project.- Using multiple plots instead of overlays for readability.
###Code
# data analysis and wrangling
import pandas as pd
import numpy as np
import random as rnd
# visualization
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# machine learning
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
###Output
_____no_output_____
###Markdown
Acquire dataThe Python Pandas package helps us work with our datasets. We start by acquiring the training and testing datasets into Pandas DataFrames. We also combine these datasets to run certain operations on both datasets together.
###Code
train_df = pd.read_csv('../input/train.csv')
test_df = pd.read_csv('../input/test.csv')
combine = [train_df, test_df]
###Output
_____no_output_____
###Markdown
Analyze by describing dataPandas also helps describe the datasets, answering the following questions early in our project.**Which features are available in the dataset?**Noting the feature names for directly manipulating or analyzing these. These feature names are described on the [Kaggle data page here](https://www.kaggle.com/c/titanic/data).
###Code
print(train_df.columns.values)
###Output
_____no_output_____
###Markdown
**Which features are categorical?**These values classify the samples into sets of similar samples. Within categorical features are the values nominal, ordinal, ratio, or interval based? Among other things this helps us select the appropriate plots for visualization.- Categorical: Survived, Sex, and Embarked. Ordinal: Pclass.**Which features are numerical?**These values change from sample to sample. Within numerical features are the values discrete, continuous, or timeseries based? Among other things this helps us select the appropriate plots for visualization.- Continuous: Age, Fare. Discrete: SibSp, Parch.
###Code
# preview the data
train_df.head()
###Output
_____no_output_____
###Markdown
**Which features are mixed data types?**Numerical, alphanumeric data within same feature. These are candidates for correcting goal.- Ticket is a mix of numeric and alphanumeric data types. Cabin is alphanumeric.**Which features may contain errors or typos?**This is harder to review for a large dataset, however reviewing a few samples from a smaller dataset may just tell us outright, which features may require correcting.- Name feature may contain errors or typos as there are several ways used to describe a name including titles, round brackets, and quotes used for alternative or short names.
###Code
train_df.tail()
###Output
_____no_output_____
###Markdown
**Which features contain blank, null or empty values?**These will require correcting.- Cabin > Age > Embarked features contain a number of null values in that order for the training dataset.- Cabin > Age are incomplete in case of test dataset.**What are the data types for various features?**Helping us during converting goal.- Seven features are integer or floats. Six in case of test dataset.- Five features are strings (object).
###Code
train_df.info()
print('_'*40)
test_df.info()
###Output
_____no_output_____
###Markdown
**What is the distribution of numerical feature values across the samples?**This helps us determine, among other early insights, how representative is the training dataset of the actual problem domain.- Total samples are 891 or 40% of the actual number of passengers on board the Titanic (2,224).- Survived is a categorical feature with 0 or 1 values.- Around 38% samples survived representative of the actual survival rate at 32%.- Most passengers (> 75%) did not travel with parents or children.- Nearly 30% of the passengers had siblings and/or spouse aboard.- Fares varied significantly with few passengers (<1%) paying as high as $512.- Few elderly passengers (<1%) within age range 65-80.
###Code
train_df.describe()
# Review survived rate using `percentiles=[.61, .62]` knowing our problem description mentions 38% survival rate.
# Review Parch distribution using `percentiles=[.75, .8]`
# SibSp distribution `[.68, .69]`
# Age and Fare `[.1, .2, .3, .4, .5, .6, .7, .8, .9, .99]`
###Output
_____no_output_____
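###Markdown
As a quick, optional illustration of the suggestions in the commented lines above (added here as a sketch; it is not part of the original analysis), `describe` accepts a `percentiles` argument, so the 38% survival rate and the Parch/SibSp distributions can be checked directly:
###Code
# Hypothetical check: custom percentiles expose the survival rate and the
# Parch/SibSp distributions in the summary table
train_df.describe(percentiles=[.61, .62, .68, .69, .75, .8])
###Output
_____no_output_____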
###Markdown
**What is the distribution of categorical features?**- Names are unique across the dataset (count=unique=891)- Sex variable has two possible values with 65% male (top=male, freq=577/count=891).- Cabin values have several duplicates across samples. Alternatively several passengers shared a cabin.- Embarked takes three possible values. S port used by most passengers (top=S)- Ticket feature has a high ratio (22%) of duplicate values (unique=681).
###Code
train_df.describe(include=['O'])
###Output
_____no_output_____
###Markdown
Assumptions based on data analysisWe arrive at the following assumptions based on the data analysis done so far. We may validate these assumptions further before taking appropriate actions.**Correlating.**We want to know how well each feature correlates with Survival. We want to do this early in our project and match these quick correlations with modelled correlations later in the project.**Completing.**1. We may want to complete the Age feature as it is definitely correlated to survival.2. We may want to complete the Embarked feature as it may also correlate with survival or another important feature.**Correcting.**1. Ticket feature may be dropped from our analysis as it contains a high ratio of duplicates (22%) and there may not be a correlation between Ticket and survival.2. Cabin feature may be dropped as it is highly incomplete or contains many null values both in training and test dataset.3. PassengerId may be dropped from the training dataset as it does not contribute to survival.4. Name feature is relatively non-standard, may not contribute directly to survival, so it may be dropped.**Creating.**1. We may want to create a new feature called Family based on Parch and SibSp to get the total count of family members on board.2. We may want to engineer the Name feature to extract Title as a new feature.3. We may want to create a new feature for Age bands. This turns a continuous numerical feature into an ordinal categorical feature.4. We may also want to create a Fare range feature if it helps our analysis.**Classifying.**We may also add to our assumptions based on the problem description noted earlier.1. Women (Sex=female) were more likely to have survived.2. Children (Age<?) were more likely to have survived. 3. The upper-class passengers (Pclass=1) were more likely to have survived. Analyze by pivoting featuresTo confirm some of our observations and assumptions, we can quickly analyze our feature correlations by pivoting features against each other. We can only do so at this stage for features which do not have any empty values. It also makes sense doing so only for features which are categorical (Sex), ordinal (Pclass) or discrete (SibSp, Parch) type.- **Pclass** We observe significant correlation (>0.5) among Pclass=1 and Survived (classifying 3). We decide to include this feature in our model.- **Sex** We confirm the observation during problem definition that Sex=female had very high survival rate at 74% (classifying 1).- **SibSp and Parch** These features have zero correlation for certain values. It may be best to derive a feature or a set of features from these individual features (creating 1).
###Code
train_df[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["Sex", "Survived"]].groupby(['Sex'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["SibSp", "Survived"]].groupby(['SibSp'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["Parch", "Survived"]].groupby(['Parch'], as_index=False).mean().sort_values(by='Survived', ascending=False)
###Output
_____no_output_____
###Markdown
Analyze by visualizing dataNow we can continue confirming some of our assumptions using visualizations for analyzing the data. Correlating numerical featuresLet us start by understanding correlations between numerical features and our solution goal (Survived).A histogram chart is useful for analyzing continuous numerical variables like Age where banding or ranges will help identify useful patterns. The histogram can indicate the distribution of samples using automatically defined bins or equally ranged bands. This helps us answer questions relating to specific bands (Did infants have a better survival rate?)Note that in the histogram visualizations the x-axis represents the Age values and the y-axis represents the count of samples or passengers.**Observations.**- Infants (Age <=4) had high survival rate.- Oldest passengers (Age = 80) survived.- Large number of 15-25 year olds did not survive.- Most passengers are in 15-35 age range.**Decisions.**This simple analysis confirms our assumptions as decisions for subsequent workflow stages.- We should consider Age (our assumption classifying 2) in our model training.- Complete the Age feature for null values (completing 1).- We should band age groups (creating 3).
###Code
g = sns.FacetGrid(train_df, col='Survived')
g.map(plt.hist, 'Age', bins=20)
###Output
_____no_output_____
###Markdown
Correlating numerical and ordinal featuresWe can combine multiple features for identifying correlations using a single plot. This can be done with numerical and categorical features which have numeric values.**Observations.**- Pclass=3 had most passengers, however most did not survive. Confirms our classifying assumption 2.- Infant passengers in Pclass=2 and Pclass=3 mostly survived. Further qualifies our classifying assumption 2.- Most passengers in Pclass=1 survived. Confirms our classifying assumption 3.- Pclass varies in terms of Age distribution of passengers.**Decisions.**- Consider Pclass for model training.
###Code
# grid = sns.FacetGrid(train_df, col='Pclass', hue='Survived')
grid = sns.FacetGrid(train_df, col='Survived', row='Pclass', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend();
###Output
_____no_output_____
###Markdown
Correlating categorical featuresNow we can correlate categorical features with our solution goal.**Observations.**- Female passengers had much better survival rate than males. Confirms classifying (1).- Exception in Embarked=C where males had higher survival rate. This could be a correlation between Pclass and Embarked and in turn Pclass and Survived, not necessarily direct correlation between Embarked and Survived.- Males had better survival rate in Pclass=3 when compared with Pclass=2 for C and Q ports. Completing (2).- Ports of embarkation have varying survival rates for Pclass=3 and among male passengers. Correlating (1).**Decisions.**- Add Sex feature to model training.- Complete and add Embarked feature to model training.
###Code
# grid = sns.FacetGrid(train_df, col='Embarked')
grid = sns.FacetGrid(train_df, row='Embarked', size=2.2, aspect=1.6)
grid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette='deep')
grid.add_legend()
###Output
_____no_output_____
###Markdown
Correlating categorical and numerical featuresWe may also want to correlate categorical features (with non-numeric values) and numeric features. We can consider correlating Embarked (Categorical non-numeric), Sex (Categorical non-numeric), Fare (Numeric continuous), with Survived (Categorical numeric).**Observations.**- Higher fare paying passengers had better survival. Confirms our assumption for creating (4) fare ranges.- Port of embarkation correlates with survival rates. Confirms correlating (1) and completing (2).**Decisions.**- Consider banding Fare feature.
###Code
# grid = sns.FacetGrid(train_df, col='Embarked', hue='Survived', palette={0: 'k', 1: 'w'})
grid = sns.FacetGrid(train_df, row='Embarked', col='Survived', size=2.2, aspect=1.6)
grid.map(sns.barplot, 'Sex', 'Fare', alpha=.5, ci=None)
grid.add_legend()
###Output
_____no_output_____
###Markdown
Wrangle dataWe have collected several assumptions and decisions regarding our datasets and solution requirements. So far we did not have to change a single feature or value to arrive at these. Let us now execute our decisions and assumptions for correcting, creating, and completing goals. Correcting by dropping featuresThis is a good starting goal to execute. By dropping features we are dealing with fewer data points. Speeds up our notebook and eases the analysis.Based on our assumptions and decisions we want to drop the Cabin (correcting 2) and Ticket (correcting 1) features.Note that where applicable we perform operations on both training and testing datasets together to stay consistent.
###Code
print("Before", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)
test_df = test_df.drop(['Ticket', 'Cabin'], axis=1)
combine = [train_df, test_df]
"After", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape
###Output
_____no_output_____
###Markdown
Creating new feature extracting from existingWe want to analyze if the Name feature can be engineered to extract titles and test correlation between titles and survival, before dropping Name and PassengerId features.In the following code we extract the Title feature using regular expressions. The RegEx pattern ` ([A-Za-z]+)\.` matches the first word which ends with a dot character within the Name feature. The `expand=False` flag returns a Series rather than a DataFrame.**Observations.**When we plot Title, Age, and Survived, we note the following observations.- Most titles band Age groups accurately. For example: Master title has Age mean of 5 years.- Survival among Title Age bands varies slightly.- Certain titles mostly survived (Mme, Lady, Sir) or did not (Don, Rev, Jonkheer).**Decision.**- We decide to retain the new Title feature for model training.
###Code
for dataset in combine:
dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\.', expand=False)
pd.crosstab(train_df['Title'], train_df['Sex'])
###Output
_____no_output_____
###Markdown
We can replace many titles with a more common name or classify them as `Rare`.
###Code
for dataset in combine:
dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess','Capt', 'Col',\
'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')
train_df[['Title', 'Survived']].groupby(['Title'], as_index=False).mean()
###Output
_____no_output_____
###Markdown
We can convert the categorical titles to ordinal.
###Code
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
for dataset in combine:
dataset['Title'] = dataset['Title'].map(title_mapping)
dataset['Title'] = dataset['Title'].fillna(0)
train_df.head()
###Output
_____no_output_____
###Markdown
Now we can safely drop the Name feature from training and testing datasets. We also do not need the PassengerId feature in the training dataset.
###Code
train_df = train_df.drop(['Name', 'PassengerId'], axis=1)
test_df = test_df.drop(['Name'], axis=1)
combine = [train_df, test_df]
train_df.shape, test_df.shape
###Output
_____no_output_____
###Markdown
Converting a categorical featureNow we can convert features which contain strings to numerical values. This is required by most model algorithms. Doing so will also help us in achieving the feature completing goal.Let us start by converting Sex feature to a new feature called Gender where female=1 and male=0.
###Code
for dataset in combine:
dataset['Sex'] = dataset['Sex'].map( {'female': 1, 'male': 0} ).astype(int)
train_df.head()
###Output
_____no_output_____
###Markdown
Completing a numerical continuous featureNow we should start estimating and completing features with missing or null values. We will first do this for the Age feature.We can consider three methods to complete a numerical continuous feature.1. A simple way is to generate random numbers within one [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) of the mean.2. A more accurate way of guessing missing values is to use other correlated features. In our case we note correlation among Age, Gender, and Pclass. Guess Age values using [median](https://en.wikipedia.org/wiki/Median) values for Age across sets of Pclass and Gender feature combinations. So, median Age for Pclass=1 and Gender=0, Pclass=1 and Gender=1, and so on...3. Combine methods 1 and 2. So instead of guessing age values based on the median, use random numbers within one standard deviation of the mean, based on sets of Pclass and Gender combinations.Methods 1 and 3 will introduce random noise into our models. The results from multiple executions might vary. We will prefer method 2 (a small sketch of method 1 is shown after the plot below, for comparison only).
###Code
# grid = sns.FacetGrid(train_df, col='Pclass', hue='Gender')
grid = sns.FacetGrid(train_df, row='Pclass', col='Sex', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend()
###Output
_____no_output_____
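###Markdown
For comparison only, here is a minimal sketch (not used further in this notebook) of method 1 described above: drawing a random age within one standard deviation of the mean.
###Code
# Hypothetical illustration of method 1; mean() and std() skip the missing Age values
age_mean, age_std = train_df['Age'].mean(), train_df['Age'].std()
age_guess_random = rnd.uniform(age_mean - age_std, age_mean + age_std)
age_guess_random
###Output
_____no_output_____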
###Markdown
Let us start by preparing an empty array to contain guessed Age values based on Pclass x Gender combinations.
###Code
guess_ages = np.zeros((2,3))
guess_ages
###Output
_____no_output_____
###Markdown
Now we iterate over Sex (0 or 1) and Pclass (1, 2, 3) to calculate guessed values of Age for the six combinations.
###Code
for dataset in combine:
for i in range(0, 2):
for j in range(0, 3):
guess_df = dataset[(dataset['Sex'] == i) & \
(dataset['Pclass'] == j+1)]['Age'].dropna()
# age_mean = guess_df.mean()
# age_std = guess_df.std()
# age_guess = rnd.uniform(age_mean - age_std, age_mean + age_std)
age_guess = guess_df.median()
# Convert random age float to nearest .5 age
guess_ages[i,j] = int( age_guess/0.5 + 0.5 ) * 0.5
for i in range(0, 2):
for j in range(0, 3):
dataset.loc[ (dataset.Age.isnull()) & (dataset.Sex == i) & (dataset.Pclass == j+1),\
'Age'] = guess_ages[i,j]
dataset['Age'] = dataset['Age'].astype(int)
train_df.head()
###Output
_____no_output_____
###Markdown
Let us create Age bands and determine correlations with Survived.
###Code
train_df['AgeBand'] = pd.cut(train_df['Age'], 5)
train_df[['AgeBand', 'Survived']].groupby(['AgeBand'], as_index=False).mean().sort_values(by='AgeBand', ascending=True)
###Output
_____no_output_____
###Markdown
Let us replace Age with ordinals based on these bands.
###Code
for dataset in combine:
dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0
dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
dataset.loc[ dataset['Age'] > 64, 'Age'] = 4
train_df.head()
###Output
_____no_output_____
###Markdown
We can now remove the AgeBand feature.
###Code
train_df = train_df.drop(['AgeBand'], axis=1)
combine = [train_df, test_df]
train_df.head()
###Output
_____no_output_____
###Markdown
Create new feature combining existing featuresWe can create a new feature for FamilySize which combines Parch and SibSp. This will enable us to drop Parch and SibSp from our datasets.
###Code
for dataset in combine:
dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
train_df[['FamilySize', 'Survived']].groupby(['FamilySize'], as_index=False).mean().sort_values(by='Survived', ascending=False)
###Output
_____no_output_____
###Markdown
We can create another feature called IsAlone.
###Code
for dataset in combine:
dataset['IsAlone'] = 0
dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1
train_df[['IsAlone', 'Survived']].groupby(['IsAlone'], as_index=False).mean()
###Output
_____no_output_____
###Markdown
Let us drop Parch, SibSp, and FamilySize features in favor of IsAlone.
###Code
train_df = train_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
test_df = test_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
combine = [train_df, test_df]
train_df.head()
###Output
_____no_output_____
###Markdown
We can also create an artificial feature combining Pclass and Age.
###Code
for dataset in combine:
dataset['Age*Class'] = dataset.Age * dataset.Pclass
train_df.loc[:, ['Age*Class', 'Age', 'Pclass']].head(10)
###Output
_____no_output_____
###Markdown
Completing a categorical featureEmbarked feature takes S, Q, C values based on port of embarkation. Our training dataset has two missing values. We simply fill these with the most common occurrence.
###Code
freq_port = train_df.Embarked.dropna().mode()[0]
freq_port
for dataset in combine:
dataset['Embarked'] = dataset['Embarked'].fillna(freq_port)
train_df[['Embarked', 'Survived']].groupby(['Embarked'], as_index=False).mean().sort_values(by='Survived', ascending=False)
###Output
_____no_output_____
###Markdown
Converting categorical feature to numericWe can now convert the EmbarkedFill feature by creating a new numeric Port feature.
###Code
for dataset in combine:
dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)
train_df.head()
###Output
_____no_output_____
###Markdown
Quick completing and converting a numeric featureWe can now complete the Fare feature for the single missing value in the test dataset using the median to get a typical value for this feature. We do this in a single line of code.Note that we are not creating an intermediate new feature or doing any further analysis for correlation to guess the missing value, as we are replacing only a single value. The completion goal achieves the desired requirement for the model algorithm to operate on non-null values.We may also want to round off the fare to two decimals as it represents currency (a small sketch of this follows below).
###Code
test_df['Fare'].fillna(test_df['Fare'].dropna().median(), inplace=True)
test_df.head()
###Output
_____no_output_____
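###Markdown
A minimal sketch of the rounding mentioned above (optional, since Fare is converted to ordinal bands right below):
###Code
# Hypothetical rounding step; Fare represents currency, so two decimals suffice
train_df['Fare'] = train_df['Fare'].round(2)
test_df['Fare'] = test_df['Fare'].round(2)
###Output
_____no_output_____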
###Markdown
We can now create FareBand.
###Code
train_df['FareBand'] = pd.qcut(train_df['Fare'], 4)
train_df[['FareBand', 'Survived']].groupby(['FareBand'], as_index=False).mean().sort_values(by='FareBand', ascending=True)
###Output
_____no_output_____
###Markdown
Convert the Fare feature to ordinal values based on the FareBand.
###Code
for dataset in combine:
dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0
dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2
dataset.loc[ dataset['Fare'] > 31, 'Fare'] = 3
dataset['Fare'] = dataset['Fare'].astype(int)
train_df = train_df.drop(['FareBand'], axis=1)
combine = [train_df, test_df]
train_df.head(10)
###Output
_____no_output_____
###Markdown
And the test dataset.
###Code
test_df.head(10)
###Output
_____no_output_____
###Markdown
Model, predict and solveNow we are ready to train a model and predict the required solution. There are 60+ predictive modelling algorithms to choose from. We must understand the type of problem and solution requirement to narrow down to a select few models which we can evaluate. Our problem is a classification and regression problem. We want to identify the relationship between the output (Survived or not) and other variables or features (Gender, Age, Port...). We are also performing a category of machine learning which is called supervised learning as we are training our model with a given dataset. With these two criteria - Supervised Learning plus Classification and Regression, we can narrow down our choice of models to a few. These include:- Logistic Regression- KNN or k-Nearest Neighbors- Support Vector Machines- Naive Bayes classifier- Decision Tree- Random Forest- Perceptron- Artificial neural network- RVM or Relevance Vector Machine
###Code
X_train = train_df.drop("Survived", axis=1)
Y_train = train_df["Survived"]
X_test = test_df.drop("PassengerId", axis=1).copy()
X_train.shape, Y_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Logistic Regression is a useful model to run early in the workflow. Logistic regression measures the relationship between the categorical dependent variable (feature) and one or more independent variables (features) by estimating probabilities using a logistic function, which is the cumulative logistic distribution. Reference [Wikipedia](https://en.wikipedia.org/wiki/Logistic_regression).Note the confidence score generated by the model based on our training dataset.
###Code
# Logistic Regression
logreg = LogisticRegression()
logreg.fit(X_train, Y_train)
Y_pred = logreg.predict(X_test)
acc_log = round(logreg.score(X_train, Y_train) * 100, 2)
acc_log
###Output
_____no_output_____
###Markdown
We can use Logistic Regression to validate our assumptions and decisions for feature creating and completing goals. This can be done by calculating the coefficient of the features in the decision function.Positive coefficients increase the log-odds of the response (and thus increase the probability), and negative coefficients decrease the log-odds of the response (and thus decrease the probability).- Sex is the highest positive coefficient, implying as the Sex value increases (male: 0 to female: 1), the probability of Survived=1 increases the most.- Inversely, as Pclass increases, the probability of Survived=1 decreases the most.- This way Age*Class is a good artificial feature to model as it has the second highest negative correlation with Survived.- So is Title as the second highest positive correlation.
###Code
coeff_df = pd.DataFrame(train_df.columns.delete(0))
coeff_df.columns = ['Feature']
coeff_df["Correlation"] = pd.Series(logreg.coef_[0])
coeff_df.sort_values(by='Correlation', ascending=False)
###Output
_____no_output_____
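###Markdown
As an optional sketch (not part of the original kernel), the log-odds coefficients above can be made easier to read by exponentiating them into odds ratios; a value above 1 then means the feature increases the odds of survival.
###Code
# Hypothetical addition: convert log-odds coefficients to odds ratios
coeff_df["Odds Ratio"] = np.exp(coeff_df["Correlation"])
coeff_df.sort_values(by='Odds Ratio', ascending=False)
###Output
_____no_output_____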
###Markdown
Next we model using Support Vector Machines which are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training samples, each marked as belonging to one or the other of **two categories**, an SVM training algorithm builds a model that assigns new test samples to one category or the other, making it a non-probabilistic binary linear classifier. Reference [Wikipedia](https://en.wikipedia.org/wiki/Support_vector_machine).Note that the model generates a confidence score which is higher than Logistics Regression model.
###Code
# Support Vector Machines
svc = SVC()
svc.fit(X_train, Y_train)
Y_pred = svc.predict(X_test)
acc_svc = round(svc.score(X_train, Y_train) * 100, 2)
acc_svc
###Output
_____no_output_____
###Markdown
In pattern recognition, the k-Nearest Neighbors algorithm (or k-NN for short) is a non-parametric method used for classification and regression. A sample is classified by a majority vote of its neighbors, with the sample being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor. Reference [Wikipedia](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm).KNN confidence score is better than Logistics Regression but worse than SVM.
###Code
knn = KNeighborsClassifier(n_neighbors = 3)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
acc_knn = round(knn.score(X_train, Y_train) * 100, 2)
acc_knn
###Output
_____no_output_____
###Markdown
In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features) in a learning problem. Reference [Wikipedia](https://en.wikipedia.org/wiki/Naive_Bayes_classifier).The model generated confidence score is the lowest among the models evaluated so far.
###Code
# Gaussian Naive Bayes
gaussian = GaussianNB()
gaussian.fit(X_train, Y_train)
Y_pred = gaussian.predict(X_test)
acc_gaussian = round(gaussian.score(X_train, Y_train) * 100, 2)
acc_gaussian
###Output
_____no_output_____
###Markdown
The perceptron is an algorithm for supervised learning of binary classifiers (functions that can decide whether an input, represented by a vector of numbers, belongs to some specific class or not). It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector. The algorithm allows for online learning, in that it processes elements in the training set one at a time. Reference [Wikipedia](https://en.wikipedia.org/wiki/Perceptron).
###Code
# Perceptron
perceptron = Perceptron()
perceptron.fit(X_train, Y_train)
Y_pred = perceptron.predict(X_test)
acc_perceptron = round(perceptron.score(X_train, Y_train) * 100, 2)
acc_perceptron
# Linear SVC
linear_svc = LinearSVC()
linear_svc.fit(X_train, Y_train)
Y_pred = linear_svc.predict(X_test)
acc_linear_svc = round(linear_svc.score(X_train, Y_train) * 100, 2)
acc_linear_svc
# Stochastic Gradient Descent
sgd = SGDClassifier()
sgd.fit(X_train, Y_train)
Y_pred = sgd.predict(X_test)
acc_sgd = round(sgd.score(X_train, Y_train) * 100, 2)
acc_sgd
###Output
_____no_output_____
###Markdown
This model uses a decision tree as a predictive model which maps features (tree branches) to conclusions about the target value (tree leaves). Tree models where the target variable can take a finite set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Reference [Wikipedia](https://en.wikipedia.org/wiki/Decision_tree_learning).The model confidence score is the highest among models evaluated so far.
###Code
# Decision Tree
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, Y_train)
Y_pred = decision_tree.predict(X_test)
acc_decision_tree = round(decision_tree.score(X_train, Y_train) * 100, 2)
acc_decision_tree
###Output
_____no_output_____
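###Markdown
Since an unconstrained decision tree can fit its training set almost perfectly, training-set accuracy tends to flatter it. The hedged, self-contained sketch below (synthetic data, illustrative names) contrasts an unrestricted tree with a depth-limited one on a held-out split.
###Code
# Sketch (synthetic data): train vs. test accuracy for deep and shallow trees
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

Xs, ys = make_classification(n_samples=300, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(Xs, ys, random_state=0)
deep = DecisionTreeClassifier(random_state=0).fit(Xtr, ytr)
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Xtr, ytr)
print('deep    train/test:', deep.score(Xtr, ytr), round(deep.score(Xte, yte), 3))
print('shallow train/test:', round(shallow.score(Xtr, ytr), 3), round(shallow.score(Xte, yte), 3))
###Output
_____no_output_____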
###Markdown
The next model, Random Forest, is one of the most popular. Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees (n_estimators=100) at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees. Reference [Wikipedia](https://en.wikipedia.org/wiki/Random_forest). The model confidence score is the highest among the models evaluated so far. We will use this model's output (Y_pred) to create our competition submission.
###Code
# Random Forest
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(X_train, Y_train)
Y_pred = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)
acc_random_forest
###Output
_____no_output_____
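###Markdown
A hedged aside: the scores above are computed on the training set, so they may overstate generalization. Assuming the `X_train`/`Y_train` defined earlier in this notebook and scikit-learn >= 0.18 (for `model_selection`), a cross-validated estimate could be obtained along these lines.
###Code
# Sketch: cross-validated accuracy as a less optimistic alternative to the train-set score
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

cv_scores = cross_val_score(RandomForestClassifier(n_estimators=100),
                            X_train, Y_train, cv=5, scoring='accuracy')
print(round(cv_scores.mean() * 100, 2), '+/-', round(cv_scores.std() * 100, 2))
###Output
_____no_output_____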
###Markdown
Model evaluation. We can now rank our evaluation of all the models to choose the best one for our problem. While both Decision Tree and Random Forest score the same, we choose to use Random Forest because random forests correct for decision trees' habit of overfitting to their training set.
###Code
models = pd.DataFrame({
'Model': ['Support Vector Machines', 'KNN', 'Logistic Regression',
'Random Forest', 'Naive Bayes', 'Perceptron',
'Stochastic Gradient Decent', 'Linear SVC',
'Decision Tree'],
'Score': [acc_svc, acc_knn, acc_log,
acc_random_forest, acc_gaussian, acc_perceptron,
acc_sgd, acc_linear_svc, acc_decision_tree]})
models.sort_values(by='Score', ascending=False)
submission = pd.DataFrame({
"PassengerId": test_df["PassengerId"],
"Survived": Y_pred
})
# submission.to_csv('../output/submission.csv', index=False)
###Output
_____no_output_____ |
public/02-Mundi_Advanced_Notebooks/12_mundi_gdal.ipynb | ###Markdown
Mundi GDAL
###Code
from mundilib import MundiCatalogue
# other tools
import os
import numpy as np
from osgeo import gdal
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Processing of an in-memory image (display/make histogram/add mask, ...)
###Code
# getting image from Mundi
c = MundiCatalogue()
wms = c.get_collection("Sentinel2").mundi_wms('L1C')
response = wms.getmap(layers=['92_NDWI'],
srs='EPSG:3857',
bbox=(146453.3462,5397218.5672,176703.3001,5412429.5358), # Toulouse
size=(600, 300),
format='image/png',
time='2018-04-21/2018-04-21',
showlogo=False,
transparent=False)
# writing image
#out = open(image_file, 'wb')
#out.write(response.read())
#out.close()
# reading bytes stream through a virtual memory file - no need to save image on disk
data = response.read()
vsipath = '/vsimem/img'
gdal.FileFromMemBuffer(vsipath, data)
raster_ds = gdal.Open(vsipath)
print (type(raster_ds))
# Projection
print ("Projection: ", format(raster_ds.GetProjection()))
# Dimensions
print ("X: ", raster_ds.RasterXSize)
print ("Y: ", raster_ds.RasterYSize)
# Number of bands
print ("Nb of bands: ", raster_ds.RasterCount)
# band informations
print ("Bands information:")
for band in range(raster_ds.RasterCount):
band += 1
srcband = raster_ds.GetRasterBand(band)
if srcband is None:
continue
stats = srcband.GetStatistics( True, True )
if stats is None:
continue
print (" - band #%d : Minimum=%.3f, Maximum=%.3f, Mean=%.3f, StdDev=%.3f" % ( \
band, stats[0], stats[1], stats[2], stats[3] ))
# Getting first band of the raster as separate variable
band1 = raster_ds.GetRasterBand(1)
# Check type of the variable 'band'
print (type(band1))
# Data type of the values
gdal.GetDataTypeName(band1.DataType)
# getting array from band dataset
band1_ds = band1.ReadAsArray()
# The .ravel method turns a 2-D numpy array into a 1-D vector
print (band1_ds.shape)
print (band1_ds.ravel().shape)
# Print only selected metadata:
print ("No data value :", band1.GetNoDataValue()) # none
print ("Min value :", band1.GetMinimum())
print ("Max value :", band1.GetMaximum())
# Compute statistics if needed
if band1.GetMinimum() is None or band1.GetMaximum()is None:
band1.ComputeStatistics(0)
print("Statistics computed.")
# Fetch metadata for the band
band1.GetMetadata()
# see cmap values:
# cf. https://matplotlib.org/examples/color/colormaps_reference.html
for c in ["hot", "terrain", "ocean"]:
plt.imshow(band1_ds, cmap = c, interpolation='nearest')
plt.colorbar()
plt.tight_layout()
plt.show()
plt.imshow(band1_ds, cmap = "hot", interpolation='nearest')
plt.colorbar()
plt.tight_layout()
plt.show()
print ("\n--- raster content (head) ---")
print (band1_ds[1:10, ])
band1_hist_ds = band1_ds.ravel()
band1_hist_ds = band1_hist_ds[~np.isnan(band1_hist_ds)]
# 1 column, 1 line
fig, axes = plt.subplots(nrows=1, ncols=1)
axes.hist(band1_hist_ds, bins=10, histtype='bar', color='crimson', ec="pink")
#axes.hist(lidar_dem_hist, bins=[0, 25, 50, 75, 100, 150, 200, 255], histtype='bar', color='crimson', ec="pink")
axes.set_title("Distribution of pixel values", fontsize=16)
axes.set_xlabel('Pixel value (0-255)', fontsize=14)
axes.set_ylabel('Number of pixels', fontsize=14)
#axes.legend(prop={'size': 10})
plt.show()
# masking some pixels
masked_array = np.ma.masked_where(band1_ds<170, band1_ds)
plt.imshow(masked_array, cmap="hot", interpolation='nearest')
plt.show()
# adding of a line on image mask, changing pixel value with mask
masked_array[25:45,:] = 250
plt.imshow(masked_array, cmap="binary")
plt.show()
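# (Hedged addition, not in the original notebook) The buffer written to the virtual
# filesystem stays allocated until it is unlinked, so once the dataset handle is no
# longer needed it can be released explicitly:
raster_ds = None       # close the GDAL dataset first
gdal.Unlink(vsipath)   # then free the '/vsimem/img' in-memory file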
###Output
_____no_output_____ |
custom_sentiment/custom_sentiment.ipynb | ###Markdown
Let's compare 4 different strategies to solve sentiment analysis: 1. **Custom model using open source package**. Build a custom model using scikit-learn and TF-IDF features on n-grams. This method is known to work well for English text. 2. **Integrate** a pre-built API. The "sentiment HQ" API provided by indico has been shown to achieve state-of-the-art accuracy, using a recurrent neural network. 3. **Word-level features**. A custom model, built from word-level text features from indico's "text features" API. 4. **RNN features**. A custom model, using transfer learning, using the recurrent features from indico's sentiment HQ model to train a new custom model. Note: this notebook and the enclosed code snippets accompany the KDnuggets post: Semi-supervised feature transfer: the big practical benefit of deep learning today? Download the data: 1. Download the "Large Movie Review Dataset" from http://ai.stanford.edu/~amaas/data/sentiment/. 2. Decompress it. 3. Put it into some directory path that you define below. Citation: Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. (2011). Learning Word Vectors for Sentiment Analysis. The 49th Annual Meeting of the Association for Computational Linguistics (ACL 2011). User parameters
###Code
seed = 3 # for reproducibility across experiments, just pick something
train_num = 100 # number of training examples to use
test_num = 100 # number of examples to use for testing
base_model_name = "sentiment_train%s_test%s" % (train_num, test_num)
lab2bin = {'pos': 1, 'neg': 0} # label -> binary class
pos_path = "~DATASETS/aclImdb/train/pos/" # filepath to the positive examples
neg_path = "~DATASETS/aclImdb/train/neg/" # file path to the negative examples
output_path = "OUTPUT" # path where output file should go
batchsize = 25 # send this many requests at once
max_num_examples = 25000.0 # for making subsets below
###Output
_____no_output_____
###Markdown
Setup and importsInstall modules as needed (for example: `pip install indicoio`)
###Code
import os, io, glob, random, time
# from itertools import islice, chain, izip_longest
import numpy as np
import pandas as pd
from tqdm import tqdm
import pprint
pp = pprint.PrettyPrinter(indent=4)
import indicoio
from indicoio.custom import Collection
from indicoio.custom import collections as check_status
import sklearn
from sklearn import metrics
from sklearn import linear_model
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
import matplotlib.pyplot as plt # for plotting results
%matplotlib inline
import seaborn # just for the colors
###Output
_____no_output_____
###Markdown
Define your indico API keyIf you don't have a (free) API key, you can [get one here](https://indico.io/pay-per-call). Your first 10,000 calls per months are free.
###Code
indicoio.config.api_key = "" # Add your API key here
###Output
_____no_output_____
###Markdown
Convenience function for making batches of examples
###Code
def batcher(seq, stride = 4):
"""
Generator strides across the input sequence,
combining the elements between each stride.
"""
    for pos in range(0, len(seq), stride):  # 'range' (not 'xrange') keeps this Python 3 compatible
yield seq[pos : pos + stride]
# for making subsets below
train_subset = (train_num / 25000.0)
test_subset = (test_num / 25000.0)
random.seed(seed)
np.random.seed(seed)
###Output
_____no_output_____
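###Markdown
A quick usage sketch of the `batcher` generator defined above (illustrative values only); the final batch simply keeps whatever is left over.
###Code
# e.g. 7 items with stride 3 -> [[0, 1, 2], [3, 4, 5], [6]]
list(batcher(list(range(7)), stride=3))
###Output
_____no_output_____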
###Markdown
Check that the requested paths exist
###Code
# check that paths exist
for p in [pos_path, neg_path]:
abs_path = os.path.abspath(p)
if not os.path.exists(abs_path):
os.makedirs(abs_path)
print(abs_path)
for p in [output_path]:
abs_path = os.path.abspath(p)
if not os.path.exists(abs_path): # and make output path if necessary
os.makedirs(abs_path)
print(abs_path)
###Output
_____no_output_____
###Markdown
Query indico API to make sure everything is plumbed up correctly
###Code
# pre_status = check_status()
# pp.pprint(pre_status)
###Output
_____no_output_____
###Markdown
Read data into a list of dictionary objects, where each dictionary object will be a single example. This makes it easy to manipulate later using dataframes, for cross-validation, visualization, etc. This dataset has pre-defined train/test splits, so rather than sampling our own, we'll use the existing splits to enable fair comparison with other published results.
###Code
train_data = [] # these lists will contain a bunch of little dictionaries, one for each example
test_data = []
# Positive examples (train)
examples = glob.glob(os.path.join(pos_path, "*")) # find all the positive examples, and read them
i = 0
for ex in examples:
d = {}
with open(ex, 'rb') as f:
d['label'] = 'pos' # label as "pos"
t = f.read().lower() # these files are already ascii text, so just lowercase them
d['text'] = t
d['pred_label'] = None # placeholder for predicted label
d['prob_pos'] = None # placeholder for predicted probability of a positive label
train_data.append(d) # add example to the list of training data
i +=1
print("Read %d positive training examples, of %d available (%.2f%%)" % (i, len(examples), (100.0*i)/len(examples)))
# Negative examples (train)
examples = glob.glob(os.path.join(neg_path, "*")) # find all the negative examples and read them
i = 0
for ex in examples:
d = {}
with open(ex, 'rb') as f:
d['label'] = 'neg'
t = f.read().lower()
d['text'] = t
d['pred_label'] = None
d['prob_pos'] = None
train_data.append(d)
i +=1
print("Read %d negative training examples, of %d available (%.2f%%)" % (i, len(examples), (100.0*i)/len(examples)))
# Positive examples (test)
examples = glob.glob(os.path.join(pos_path, "*"))
i = 0
for ex in examples:
d = {}
with open(ex, 'rb') as f:
d['label'] = 'pos'
t = f.read().lower() # these files are already ascii text
d['text'] = t
d['pred_label'] = None
d['prob_pos'] = None
test_data.append(d)
i +=1
print("Read %d positive test examples, of %d available (%.2f%%)" % (i, len(examples), (100.0*i)/len(examples)))
# Negative examples (test)
examples = glob.glob(os.path.join(neg_path, "*"))
i = 0
for ex in examples:
d = {}
with open(ex, 'rb') as f:
d['label'] = 'neg'
t = f.read().lower() # these files are already ascii text
d['text'] = t
d['pred_label'] = None
d['prob_pos'] = None
test_data.append(d)
i +=1
print("Read %d negative examples, of %d available (%.2f%%)" % (i, len(examples), (100.0*i)/len(examples)))
# Populate a dataframe, shuffle, and subset as required
df_train = pd.DataFrame(train_data)
df_train = df_train.sample(frac = train_subset) # shuffle (by sampling everything randomly)
print("After resampling, down to %d training records" % len(df_train))
df_test = pd.DataFrame(test_data)
df_test = df_test.sample(frac = test_subset) # shuffle (by sampling everything randomly)
print("After resampling, down to %d test records" % len(df_test))
###Output
After resampling, down to 100 training records
After resampling, down to 100 test records
###Markdown
Quick sanity check on the data, is everything as expected?
###Code
df_train.head(10) # sanity check
df_train.tail(10)
df_test.tail(10)
###Output
_____no_output_____
###Markdown
Strategy A: scikit-learn. Build a custom model from scratch using sklearn (ngrams -> TFIDF -> LR). Define the vectorizer, logistic regression model, and overall pipeline
###Code
vectorizer = sklearn.feature_extraction.text.TfidfVectorizer(
max_features = int(1e5), # max vocab size (pretty large)
max_df = 0.50,
sublinear_tf = True,
use_idf = True,
encoding = 'ascii',
decode_error = 'replace',
analyzer = 'word',
ngram_range = (1,3),
stop_words = 'english',
lowercase = True,
norm = 'l2',
smooth_idf = True,
)
lr = linear_model.SGDClassifier(
alpha = 1e-5,
average = 10,
class_weight = 'balanced',
epsilon = 0.15,
eta0 = 0.0,
fit_intercept = True,
l1_ratio = 0.15,
learning_rate = 'optimal',
loss = 'log',
n_iter = 5,
n_jobs = -1,
penalty = 'l2',
power_t = 0.5,
random_state = seed,
shuffle = True,
verbose = 0,
warm_start = False,
)
classifier = Pipeline([('vectorizer', vectorizer),
('logistic_regression', lr)
])
###Output
_____no_output_____
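###Markdown
As a hedged illustration of the n-gram features the pipeline above builds, here is a tiny toy corpus run through `TfidfVectorizer` (names such as `toy_docs` are illustrative); the real vocabulary for the review data is of course far larger.
###Code
# Toy corpus: unigrams and bigrams become the columns of a sparse TF-IDF matrix
from sklearn.feature_extraction.text import TfidfVectorizer

toy_docs = ["a great movie", "a terrible movie", "great acting terrible plot"]
toy_vec = TfidfVectorizer(ngram_range=(1, 2))
toy_X = toy_vec.fit_transform(toy_docs)
print(sorted(toy_vec.vocabulary_.keys()))   # the learned n-gram vocabulary
print(toy_X.shape)                          # (n_documents, n_ngram_features)
###Output
_____no_output_____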
###Markdown
Fit the classifier
###Code
_ = classifier.fit(df_train['text'], df_train['label'])
###Output
_____no_output_____
###Markdown
Get predictions
###Code
pred_sk = classifier.predict(df_test['text'])
y_true_sk = [lab2bin[ex] for ex in df_test['label']]
proba_sk = classifier.predict_proba(df_test['text']) # also get probas
###Output
_____no_output_____
###Markdown
Compute and plot ROC and AUC
###Code
cname = base_model_name + "_sklearn"
plt.figure(figsize=(8,8))
probas_sk = []
y_pred_labels_sk = []
y_pred_sk = []
# get predictions
for i, pred in enumerate(pred_sk):
proba_pos = proba_sk[i][1]
probas_sk.append(proba_pos)
if float(proba_pos) >= 0.50:
pred_label = "pos"
elif float(proba_pos) < 0.50:
pred_label = "neg"
else:
print("ERROR on example %d" % i) # if this happens, need to fix something
y_pred_labels_sk.append(pred_label)
y_pred_sk.append(lab2bin[pred_label])
# compute ROC
fpr, tpr, thresholds = metrics.roc_curve(y_true_sk, probas_sk)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw = 1, label = "ROC area = %0.3f" % (roc_auc))
plt.plot([0, 1], [0, 1], '--', color=(0.5, 0.5, 0.5), label='random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title("%d training examples" % len(df_train))  # df_train holds the examples actually used for training
plt.legend(loc="lower right")
plt.savefig(os.path.abspath(os.path.join(output_path, cname + "_ROC" + ".png")))
plt.show()
acc = metrics.accuracy_score(y_true_sk, y_pred_sk)
print("Accuracy: %.4f" % (acc))
###Output
_____no_output_____
###Markdown
Put example data into batches for the APIs. Prepare batches of training examples
###Code
examples = [list(ex) for ex in zip(df_train['text'], df_train['label'])]
batches = [b for b in batcher(examples, batchsize)] # stores in memory, but the texts are small so no problem
###Output
_____no_output_____
###Markdown
Prepare batches of test examples
###Code
test_examples = [list(ex) for ex in zip(df_test['text'], df_test['label'])] # test data
test_batches = [b for b in batcher(test_examples, batchsize)]
###Output
_____no_output_____
###Markdown
Strategy B. Pre-trained sentiment HQ
###Code
# get predictions from sentiment-HQ API
cname = base_model_name + "hq"
predictions_hq = []
for batch in tqdm(test_batches):
labels = [x[1] for x in batch]
texts = [x[0] for x in batch]
results = indicoio.sentiment_hq(texts)
for i, result in enumerate(results):
r = {}
r['label'] = labels[i]
r['text'] = texts[i]
r['proba'] = result
predictions_hq.append(r)
cname = base_model_name + "_hq"
plt.figure(figsize=(8,8))
# y_true = [df_test['label']]
probas = []
y_true = []
y_pred_labels = []
y_pred = []
for i, pred in enumerate(predictions_hq):
y_true.append(lab2bin[pred['label']])
proba = pred['proba']
probas.append(proba)
if float(proba) >= 0.50:
pl = 'pos'
elif float(proba) < 0.50:
pl= 'neg'
else:
print("Error. Check proba value and y_true logic")
pred_label = pl # pick the most likely class by predicted proba
y_pred_labels.append(pred_label)
y_pred.append(lab2bin[pred_label])
fpr, tpr, thresholds = metrics.roc_curve(y_true, probas)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw = 1, label = "ROC area = %0.3f" % (roc_auc))
plt.plot([0, 1], [0, 1], '--', color=(0.5, 0.5, 0.5), label='random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
# plt.title("ROC plot model: '%s'" % cname)
plt.title("%d training examples" % len(examples))
plt.legend(loc="lower right")
# plt.savefig(os.path.abspath(cname + "_hq_ROC" + ".png"))
plt.savefig(os.path.abspath(os.path.join(output_path, cname + "_ROC" + ".png")))
plt.show()
acc = metrics.accuracy_score(y_true, y_pred)
print("Accuracy: %.4f" % (acc))
###Output
_____no_output_____
###Markdown
Strategy C. Custom model using general text features. Create an indico custom collection using general (word-level) text features, and upload data
###Code
cname = base_model_name
print("This model will be cached as an indico custom collection using the name: '%s'" % cname)
collection = Collection(cname)
try:
collection.clear() # delete any previous data in this collection
collection.info()
collection = Collection(cname)
except:
print(" Error, probably because a collection with the given name didn't exist. Continuing...")
print(" Submitting %d training examples in %d batches..." % (len(examples), len(batches)))
for batch in tqdm(batches):
try:
collection.add_data(batch)
except Exception as e:
print("Exception: '%s' for batch:" % e)
pp.pprint(batch)
print(" training model: '%s'" % cname)
collection.train()
collection.wait() # blocks until the model is trained
# get predictions from the trained API model
predictions = []
cname = base_model_name
collection = Collection(cname)
for batch in tqdm(test_batches):
labels = [x[1] for x in batch]
texts = [x[0] for x in batch]
results = collection.predict(texts)
for i, result in enumerate(results):
r = {}
r['indico_result'] = result
r['label'] = labels[i]
r['text'] = texts[i]
r['proba'] = result['pos']
predictions.append(r)
pp.pprint(predictions[0]) # sanity check
###Output
{ 'indico_result': { u'neg': 0.3364879404, u'pos': 0.6635120596},
'label': 'pos',
'proba': 0.6635120596,
'text': "my personal opinion is that this movie had no real story line to the first john carpenter's vampires but i don't care i loved it. jon bon jovi (derek) was great in this movie. he really mad me believe that he was the person you would never think he was a famous rockstar. there were some bad things about this movie. like the story line,there should have been more to the movie.there should have been a sequel to the movie that followed the movies story line and they should have kept the same main characters in all three vampires movies. i really liked the clothes that the people wore and the setting they pick in mexico. i liked how it was old mexico and not new mexico. with the clay houses and the old fashion churches. i was a little confused with the vampires and how they were able to walk in churches but it was cool how they didn't follow dracula vampire rules."}
###Markdown
Draw ROC plot and compute metrics for the custom collection
###Code
plt.figure(figsize=(8,8))
probas = []
y_true = []
y_pred_labels = []
y_pred = []
for i, pred in enumerate(predictions):
y_true.append(lab2bin[pred['label']])
probas.append(pred['indico_result']['pos'])
pred_label = max(pred['indico_result'].keys(), key = (lambda x: pred['indico_result'][x])) # pick the most likely class by predicted proba
y_pred_labels.append(pred_label)
y_pred.append(lab2bin[pred_label])
fpr, tpr, thresholds = metrics.roc_curve(y_true, probas)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw = 1, label = "ROC area = %0.3f" % (roc_auc))
plt.plot([0, 1], [0, 1], '--', color=(0.5, 0.5, 0.5), label='random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title("%d training examples" % len(examples))
plt.legend(loc="lower right")
plt.savefig(os.path.abspath(os.path.join(output_path, cname + "_cc_ROC" + ".png")))
plt.show()
acc = metrics.accuracy_score(y_true, y_pred)
print("Accuracy: %.4f" % (acc))
###Output
_____no_output_____
###Markdown
Strategy D. Custom model using sentiment features from the pretrained deep neural network.
###Code
cname = base_model_name + "_domain"
print("This model will be cached as an indico custom collection using the name: '%s'" % cname)
collection = Collection(cname, domain = "sentiment")
try:
collection.clear() # delete any previous data in this collection
collection.info()
collection = Collection(cname, domain = "sentiment")
except:
print(" Error, probably because a collection with the given name didn't exist. Continuing...")
print(" Submitting %d training examples in %d batches..." % (len(examples), len(batches)))
for batch in tqdm(batches):
try:
collection.add_data(batch, domain = "sentiment")
except Exception as e:
print("Exception: '%s' for batch:" % e)
pp.pprint(batch)
print(" training model: '%s'" % cname)
collection.train()
collection.wait()
###Output
This model will be cached as an indico custom collection using the name: 'sentiment_train100_test100_domain'
###Markdown
Get predictions for custom collection with sentiment domain text features
###Code
# get predictions from trained API
predictions_domain = []
cname = base_model_name + "_domain"
collection = Collection(cname, domain = "sentiment")
for batch in tqdm(test_batches):
labels = [x[1] for x in batch]
texts = [x[0] for x in batch]
results = collection.predict(texts, domain = "sentiment")
# batchsize = len(batch)
for i, result in enumerate(results):
r = {}
r['indico_result'] = result
r['label'] = labels[i]
r['text'] = texts[i]
r['proba'] = result['pos']
predictions_domain.append(r)
###Output
###Markdown
Compute metrics and plot
###Code
cname = base_model_name + "_domain"
plt.figure(figsize=(8,8))
# y_true = [df_test['label']]
probas = []
y_true = []
y_pred_labels = []
y_pred = []
for i, pred in enumerate(predictions_domain):
y_true.append(lab2bin[pred['label']])
probas.append(pred['indico_result']['pos'])
pred_label = max(pred['indico_result'].keys(), key = (lambda x: pred['indico_result'][x])) # pick the most likely class by predicted proba
y_pred_labels.append(pred_label)
y_pred.append(lab2bin[pred_label])
fpr, tpr, thresholds = metrics.roc_curve(y_true, probas)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw = 1, label = "ROC area = %0.3f" % (roc_auc))
plt.plot([0, 1], [0, 1], '--', color=(0.5, 0.5, 0.5), label='random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
# plt.title("ROC plot model: '%s'" % cname)
plt.title("%d training examples" % len(examples))
plt.legend(loc="lower right")
# plt.savefig(os.path.abspath(cname + "_cc_domain_ROC" + ".png"))
plt.savefig(os.path.abspath(os.path.join(output_path, cname + "_ROC" + ".png")))
plt.show()
acc = metrics.accuracy_score(y_true, y_pred)
print("Accuracy: %.4f" % (acc))
###Output
_____no_output_____
###Markdown
Sanity check on results for all 4 strategies. Compare the first prediction for each to make sure all the right stuff is there...
###Code
print("Strategy A. Custom sklearn model using n-grams, TFIDF, LR:")
print(y_true_sk[0])
print(pred_sk[0])
print(proba_sk[0])
print("")
print("Strategy B. Sentiment HQ:")
pp.pprint(predictions_hq[0])
print("Strategy C. Custom collection using general text features:")
pp.pprint(predictions[0])
print("")
print("Strategy D. Custom collection using sentiment features:")
pp.pprint(predictions_domain[0])
print("")
###Output
Strategy A. Custom sklearn model using n-grams, TFIDF, LR:
1
pos
[ 0.31613178 0.68386822]
Strategy B. Sentiment HQ:
{ 'label': 'pos',
'proba': 0.8126749992000001,
'text': "my personal opinion is that this movie had no real story line to the first john carpenter's vampires but i don't care i loved it. jon bon jovi (derek) was great in this movie. he really mad me believe that he was the person you would never think he was a famous rockstar. there were some bad things about this movie. like the story line,there should have been more to the movie.there should have been a sequel to the movie that followed the movies story line and they should have kept the same main characters in all three vampires movies. i really liked the clothes that the people wore and the setting they pick in mexico. i liked how it was old mexico and not new mexico. with the clay houses and the old fashion churches. i was a little confused with the vampires and how they were able to walk in churches but it was cool how they didn't follow dracula vampire rules."}
Strategy C. Custom collection using general text features:
{ 'indico_result': { u'neg': 0.3364879404, u'pos': 0.6635120596},
'label': 'pos',
'proba': 0.6635120596,
'text': "my personal opinion is that this movie had no real story line to the first john carpenter's vampires but i don't care i loved it. jon bon jovi (derek) was great in this movie. he really mad me believe that he was the person you would never think he was a famous rockstar. there were some bad things about this movie. like the story line,there should have been more to the movie.there should have been a sequel to the movie that followed the movies story line and they should have kept the same main characters in all three vampires movies. i really liked the clothes that the people wore and the setting they pick in mexico. i liked how it was old mexico and not new mexico. with the clay houses and the old fashion churches. i was a little confused with the vampires and how they were able to walk in churches but it was cool how they didn't follow dracula vampire rules."}
Strategy D. Custom collection using sentiment features:
{ 'indico_result': { u'neg': 0.2067858603, u'pos': 0.7932141397},
'label': 'pos',
'proba': 0.7932141397,
'text': "my personal opinion is that this movie had no real story line to the first john carpenter's vampires but i don't care i loved it. jon bon jovi (derek) was great in this movie. he really mad me believe that he was the person you would never think he was a famous rockstar. there were some bad things about this movie. like the story line,there should have been more to the movie.there should have been a sequel to the movie that followed the movies story line and they should have kept the same main characters in all three vampires movies. i really liked the clothes that the people wore and the setting they pick in mexico. i liked how it was old mexico and not new mexico. with the clay houses and the old fashion churches. i was a little confused with the vampires and how they were able to walk in churches but it was cool how they didn't follow dracula vampire rules."}
###Markdown
Compute overall metrics and plot
###Code
plt.figure(figsize=(10,10))
cname = base_model_name
# compute and draw curve for sklearn LR built from scratch
probas_sk = []
y_pred_labels_sk = []
y_pred_sk = []
for i, pred in enumerate(pred_sk):
proba_pos = proba_sk[i][1]
probas_sk.append(proba_pos)
if float(proba_pos) >= 0.50:
pred_label = "pos"
elif float(proba_pos) < 0.50:
pred_label = "neg"
else:
print("ERROR on example %d" % i)
y_pred_labels_sk.append(pred_label)
y_pred_sk.append(lab2bin[pred_label])
fpr_sk, tpr_sk, thresholds_sk = metrics.roc_curve(y_true_sk, probas_sk)
roc_auc_sk = metrics.auc(fpr_sk, tpr_sk)
plt.plot(fpr_sk, tpr_sk, lw = 2, color = "#a5acaf", label = "A. Custom sklearn ngram LR model; area = %0.3f" % roc_auc_sk)
# compute and draw curve for sentimentHQ
probas_s = []
y_true_s = []
y_pred_labels_s = []
y_pred_s = []
for i, pred in enumerate(predictions_hq):
y_true_s.append(lab2bin[pred['label']])
probas_s.append(pred['proba'])
if float(pred['proba']) >= 0.50:
pred_label = "pos"
elif float(pred['proba']) < 0.50:
pred_label = "neg"
else:
print("ERROR on example %d" % i)
y_pred_labels_s.append(pred_label)
y_pred_s.append(lab2bin[pred_label])
fpr_s, tpr_s, thresholds_s = metrics.roc_curve(y_true_s, probas_s)
roc_auc_s = metrics.auc(fpr_s, tpr_s)
plt.plot(fpr_s, tpr_s, lw = 2, color = "#b05ecc", label = "B. Sentiment HQ model; area = %0.3f" % roc_auc_s)
# Compute and draw curve for the custom collection using general text features
probas = []
y_true = []
y_pred_labels = []
y_pred = []
lab2bin = {'pos': 1,
'neg': 0}
for i, pred in enumerate(predictions):
y_true.append(lab2bin[pred['label']])
probas.append(pred['indico_result']['pos'])
pred_label = max(pred['indico_result'].keys(), key = (lambda x: pred['indico_result'][x])) # pick the most likely class by predicted proba
y_pred_labels.append(pred_label)
y_pred.append(lab2bin[pred_label])
fpr, tpr, thresholds = metrics.roc_curve(y_true, probas)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw = 2, color = "#ffbb3b", label = "C. Custom IMDB model using general text features; area = %0.3f" % (roc_auc))
# now compute and draw curve for the CC using sentiment text features
probas_d = []
y_true_d = []
y_pred_labels_d = []
y_pred_d = []
for i, pred in enumerate(predictions_domain):
y_true_d.append(lab2bin[pred['label']])
probas_d.append(pred['indico_result']['pos'])
pred_label = max(pred['indico_result'].keys(), key = (lambda x: pred['indico_result'][x]))
y_pred_labels_d.append(pred_label)
y_pred_d.append(lab2bin[pred_label])
fpr_d, tpr_d, thresholds_d = metrics.roc_curve(y_true_d, probas_d)
roc_auc_d = metrics.auc(fpr_d, tpr_d)
plt.plot(fpr_d, tpr_d, lw = 2, color = "#43b9af", label = "D. Custom IMDB model using sentiment text features; area = %0.3f" % roc_auc_d)
# Add other stuff to figure
plt.plot([0, 1], [0, 1], '--', color=(0.5, 0.5, 0.5), label='random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
# plt.title("ROC: %d training examples" % len(examples))
plt.title("%d training examples" % len(examples))
plt.legend(loc="lower right")
plt.savefig(os.path.abspath(os.path.join(output_path, cname + "_comparison_ROC" + ".png")), dpi = 300)
plt.show()
###Output
_____no_output_____
###Markdown
Accuracy metrics
###Code
acc_sk = metrics.accuracy_score(y_true_sk, y_pred_sk)
print("A. Sklearn model from scratch (sklearn) : %.4f" % (acc_sk))
acc_s = metrics.accuracy_score(y_true_s, y_pred_s)
print("B. Sentiment HQ : %.4f" % (acc_s))
acc = metrics.accuracy_score(y_true, y_pred)
print("C. Custom model using general text features : %.4f" % (acc))
acc_d = metrics.accuracy_score(y_true_d, y_pred_d)
print("D. Custom model using sentiment text features : %.4f" % (acc_d))
# print("Using (%d, %d, %d, %d) examples" % (len(y_pred), len(y_pred_d), len(y_pred_s), len(y_pred_sk)))
###Output
A. Sklearn model from scratch (sklearn) : 0.8300
B. Sentiment HQ : 0.9200
C. Custom model using general text features : 0.8100
D. Custom model using sentiment text features : 0.8900
|
doc/source/user_guide/pystan_refitting_xr_lik.ipynb | ###Markdown
(pystan_refitting_xr)= Refitting PyStan models with ArviZ (and xarray)ArviZ is backend agnostic and therefore does not sample directly. In order to take advantage of algorithms that require refitting models several times, ArviZ uses {class}`~arviz.SamplingWrapper`s to convert the API of the sampling backend to a common set of functions. Hence, functions like Leave Future Out Cross Validation can be used in ArviZ independently of the sampling backend used. Below there is one example of `SamplingWrapper` usage for PyStan.
###Code
import arviz as az
import pystan
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
import xarray as xr
###Output
_____no_output_____
###Markdown
For the example we will use a linear regression.
###Code
np.random.seed(26)
xdata = np.linspace(0, 50, 100)
b0, b1, sigma = -2, 1, 3
ydata = np.random.normal(loc=b1 * xdata + b0, scale=sigma)
plt.plot(xdata, ydata)
###Output
_____no_output_____
###Markdown
Now we will write the Stan code, keeping in mind only to include the array shapes as parameters.
###Code
refit_lr_code = """
data {
// Define data for fitting
int<lower=0> N;
vector[N] x;
vector[N] y;
}
parameters {
real b0;
real b1;
real<lower=0> sigma_e;
}
model {
b0 ~ normal(0, 10);
b1 ~ normal(0, 10);
sigma_e ~ normal(0, 10);
for (i in 1:N) {
y[i] ~ normal(b0 + b1 * x[i], sigma_e); // use only data for fitting
}
}
generated quantities {
vector[N] y_hat;
for (i in 1:N) {
// pointwise log likelihood will be calculated outside stan,
// posterior predictive however will be generated here, there are
// no restrictions on adding more generated quantities
y_hat[i] = normal_rng(b0 + b1 * x[i], sigma_e);
}
}
"""
sm = pystan.StanModel(model_code=refit_lr_code)
data_dict = {
"N": len(ydata),
"y": ydata,
"x": xdata,
}
sample_kwargs = {"iter": 1000, "chains": 4}
fit = sm.sampling(data=data_dict, **sample_kwargs)
###Output
_____no_output_____
###Markdown
We have defined a dictionary `sample_kwargs` that will be passed to the `SamplingWrapper` in order to make sure that all refits use the same sampler parameters. We follow the same pattern with {func}`~arviz.from_pystan`.
###Code
dims = {"y": ["time"], "x": ["time"], "y_hat": ["time"]}
idata_kwargs = {
"posterior_predictive": ["y_hat"],
"observed_data": "y",
"constant_data": "x",
"dims": dims,
}
idata = az.from_pystan(posterior=fit, **idata_kwargs)
###Output
_____no_output_____
###Markdown
We are now missing the `log_likelihood` group because we have not used the `log_likelihood` argument in `idata_kwargs`. We are doing this to ease the job of the sampling wrapper. Instead of going out of our way to get Stan to calculate the pointwise log likelihood values for each refit and for the excluded observation at every refit, we will compromise and manually write a function to calculate the pointwise log likelihood. Even though it is not ideal to lose part of the straight-out-of-the-box capabilities of PyStan-ArviZ integration, this should generally not be a problem. We are basically moving the pointwise log likelihood calculation from the Stan code to the Python code; in both cases we need to manually write the function to calculate the pointwise log likelihood. Moreover, the Python computation could even be written to be compatible with Dask. Thus it will work even in cases where the large number of observations makes it impossible to store pointwise log likelihood values (with shape `n_samples * n_observations`) in memory.
###Code
def calculate_log_lik(x, y, b0, b1, sigma_e):
mu = b0 + b1 * x
return stats.norm(mu, sigma_e).logpdf(y)
###Output
_____no_output_____
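###Markdown
A hedged sanity check (toy shapes, not the fitted posterior) showing that `calculate_log_lik` broadcasts over chains, draws and observations without any explicit loop; `x_demo` and `b_demo` are illustrative names.
###Code
# Toy shape check: (chain, draw, 1) parameters against a length-5 observation vector
x_demo = np.linspace(0, 1, 5)
b_demo = np.random.normal(size=(4, 10, 1))                      # 4 chains x 10 draws
calculate_log_lik(x_demo, x_demo, b_demo, b_demo, 1.0).shape    # -> (4, 10, 5)
###Output
_____no_output_____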
###Markdown
This function should work for any shape of the input arrays as long as their shapes are compatible and can broadcast. There is no need to loop over each draw in order to calculate the pointwise log likelihood using scalars. Therefore, we can use `xr.apply_ufunc` to handle the broadcasting and preserve the dimension names:
###Code
log_lik = xr.apply_ufunc(
calculate_log_lik,
idata.constant_data["x"],
idata.observed_data["y"],
idata.posterior["b0"],
idata.posterior["b1"],
idata.posterior["sigma_e"],
)
idata.add_groups(log_likelihood=log_lik)
###Output
_____no_output_____
###Markdown
The first argument is the function, followed by as many positional arguments as needed by the function, 5 in our case. As this case does not have many different dimensions nor combinations of these, we do not need to use any extra kwargs passed to {func}`xarray:xarray.apply_ufunc`.We are now passing the arguments to `calculate_log_lik` initially as {class}`xarray:xarray.DataArray`s. What is happening here behind the scenes is that {func}`~xarray:xarray.apply_ufunc` is broadcasting and aligning the dimensions of all the DataArrays involved and afterwards passing numpy arrays to `calculate_log_lik`. Everything works automagically. Now let's see what happens if we were to pass the arrays directly to `calculate_log_lik` instead:
###Code
calculate_log_lik(
idata.constant_data["x"].values,
idata.observed_data["y"].values,
idata.posterior["b0"].values,
idata.posterior["b1"].values,
idata.posterior["sigma_e"].values
)
###Output
_____no_output_____
###Markdown
If you are still curious about the magic of xarray and {func}`~xarray:xarray.apply_ufunc`, you can also try to modify the `dims` used to generate the InferenceData a couple cells before: dims = {"y": ["time"], "x": ["time"]} What happens to the result if you use a different name for the dimension of `x`?
###Code
idata
###Output
_____no_output_____
###Markdown
We will create a subclass of {class}`~arviz.SamplingWrapper`. Therefore, instead of having to implement all functions required by {func}`~arviz.reloo` we only have to implement `sel_observations` (we are cloning `sample` and `get_inference_data` from the `PyStanSamplingWrapper` in order to use `apply_ufunc` instead of assuming the log likelihood is calculated within Stan). Note that of the 2 outputs of `sel_observations`, `data__i` is a dictionary because it is an argument of `sample` which will pass it as is to `model.sampling`, whereas `data_ex` is a list because it is an argument to `log_likelihood__i` which will pass it as `*data_ex` to `apply_ufunc`. More on `data_ex` and `apply_ufunc` integration below.
###Code
class LinearRegressionWrapper(az.SamplingWrapper):
def sel_observations(self, idx):
xdata = self.idata_orig.constant_data["x"]
ydata = self.idata_orig.observed_data["y"]
mask = np.isin(np.arange(len(xdata)), idx)
data__i = {"x": xdata[~mask], "y": ydata[~mask], "N": len(ydata[~mask])}
data_ex = [ary[mask] for ary in (xdata, ydata)]
return data__i, data_ex
def sample(self, modified_observed_data):
#Cloned from PyStanSamplingWrapper.
fit = self.model.sampling(data=modified_observed_data, **self.sample_kwargs)
return fit
def get_inference_data(self, fit):
# Cloned from PyStanSamplingWrapper.
idata = az.from_pystan(posterior=fit, **self.idata_kwargs)
return idata
loo_orig = az.loo(idata, pointwise=True)
loo_orig
###Output
_____no_output_____
###Markdown
In this case, the Leave-One-Out Cross Validation (LOO-CV) approximation using Pareto Smoothed Importance Sampling (PSIS) works for all observations, so we will modify `loo_orig` in order to make {func}`~arviz.reloo` believe that PSIS failed for some observations. This will also serve as a validation of our wrapper, as the PSIS LOO-CV already returned the correct value.
###Code
loo_orig.pareto_k[[13, 42, 56, 73]] = np.array([0.8, 1.2, 2.6, 0.9])
###Output
_____no_output_____
###Markdown
We initialize our sampling wrapper. Let's stop and analyze each of the arguments. We then use the `log_lik_fun` and `posterior_vars` arguments to tell the wrapper how to call {func}`~xarray:xarray.apply_ufunc`. `log_lik_fun` is the function to be called, which is then called with the following positional arguments: log_lik_fun(*data_ex, *[idata__i.posterior[var_name] for var_name in posterior_vars]) where `data_ex` is the second element returned by `sel_observations` and `idata__i` is the InferenceData object result of `get_inference_data` which contains the fit on the subsetted data. We have generated `data_ex` to be a tuple of DataArrays so it plays nicely with this call signature. We use `idata_orig` as a starting point, and mostly as a source of observed and constant data which is then subsetted in `sel_observations`. Finally, `sample_kwargs` and `idata_kwargs` are used to make sure all refits and corresponding InferenceData are generated with the same properties.
###Code
pystan_wrapper = LinearRegressionWrapper(
sm,
log_lik_fun=calculate_log_lik,
posterior_vars=("b0", "b1", "sigma_e"),
idata_orig=idata,
sample_kwargs=sample_kwargs,
idata_kwargs=idata_kwargs
)
###Output
_____no_output_____
###Markdown
Finally, we can use this wrapper to call `az.reloo` and compare the results with the PSIS LOO-CV results.
###Code
loo_relooed = az.reloo(pystan_wrapper, loo_orig=loo_orig)
loo_relooed
loo_orig
###Output
_____no_output_____ |
scripts/active/thompson_sampling_notebooks/thompson_comparison.ipynb | ###Markdown
Combining Thompson Sampling Results
###Code
import pinot
ds = pinot.data.moonshot()
actual_best = max([d[1].item() for d in ds])
import pandas as pd
best_human = pd.read_csv('best_Human.csv', index_col=0)
pro_human = pd.read_csv('pro_Human.csv', index_col=0)
retro_human = pd.read_csv('retro_Human.csv', index_col=0)
for df in [best_human, pro_human, retro_human]:
df['Type'] = 'Human'
best_ei = pd.read_csv('best_ExpectedImprovement.csv', index_col=0)
pro_ei = pd.read_csv('pro_ExpectedImprovement.csv', index_col=0)
retro_ei = pd.read_csv('retro_ExpectedImprovement.csv', index_col=0)
for df in [best_ei, pro_ei, retro_ei]:
df['Type'] = 'ExpectedImprovement'
best = pd.concat([best_human, best_ei])
pro = pd.concat([pro_human, pro_ei])
retro = pd.concat([retro_human, retro_ei])
def larger_font(ylabel):
plt.xticks(size=20)
plt.xlabel('Round', size=20)
plt.yticks(size=20)
plt.ylabel(ylabel, size=20)
import matplotlib.pyplot as plt
import seaborn as sns
sns.catplot(x='Round', y='Value',
hue='Type',
data=retro,
kind='violin',
height=10,
aspect=2,
# split=True
palette='tab10'
)
larger_font('Thompson Estimates of $y_{max}$')
plt.axhline(actual_best, color='black')
import seaborn as sns
fig, ax = plt.subplots(figsize=(20, 10))
# plt.axhline(actual_best, color='black')
sns.lineplot(x='Round', y='Value', hue='Type', data=best, ax=ax)
larger_font('$y_{max}^i$')
import torch
import numpy as np
import pandas as pd
import seaborn as sns
improvement_list = []
for type_ in ['Human', 'ExpectedImprovement']:
pro_subset = pro[pro['Type'] == type_]
best_subset = best[best['Type'] == type_]
for trial in pro_subset.Trial.unique():
for round_ in pro_subset.Round.unique():
round_values = pro_subset[np.logical_and(pro_subset['Round'] == round_, pro_subset['Trial'] == trial)]['Value']
            round_best = best_subset[np.logical_and(best_subset['Round'] == round_, best_subset['Trial'] == trial)]['Value'].iloc[0]
improvement_list.append({'Acquisition Function': 'ExpectedImprovement',
'Trial': trial,
'Round': round_,
'Type': type_,
'ProbabilityImprovement': (round_values > round_best).mean(),
'ExpectedImprovement': (np.maximum(round_values - round_best, 0)).mean()})
improvement_df = pd.DataFrame(improvement_list)
import matplotlib.pyplot as plt
import pandas as pd
fig, ax = plt.subplots(figsize=(10, 10))
sns.swarmplot(x='Round', y='ProbabilityImprovement', hue='Type', data=improvement_df, ax=ax)
larger_font('$P$(Thompson Estimate > $y_{max}^i$)')
fig, ax = plt.subplots(figsize=(10, 10))
sns.swarmplot(x='Round', y='ExpectedImprovement', hue='Type', data=improvement_df, ax=ax)
plt.ylabel('Thompson Estimates of $y_{max}$')
larger_font('$E$($\max$(Thompson Estimate - $y_{max}^i$, 0)')
###Output
_____no_output_____ |
Ad_clicks.ipynb | ###Markdown
Defining the Research Problem Specifying the Research Question The goal of this analysis is to predict individuals who are most likely to click on a cryptography course advertisement. This project uses R for analysis. Defining the Metric for Success The project will be considered a success when we are able to clean and analyse past data to segment blog users and predict individuals who should be targeted for an advertisement. Understanding the Context A Kenyan entrepreneur has created an online cryptography course and would want to advertise it on her blog. She currently targets audiences originating from various countries. In the past, she ran ads to advertise a related course on the same blog and collected data in the process. She has employed my services as a Data Science Consultant to help her identify which individuals are most likely to click on her ads. Recording the Experimental Design Below are the steps that will be followed in this analysis in order to respond to the research question satisfactorily: >* Read the Data >* Check the Data >* Data Cleaning >* Univariate Analysis >* Bivariate Analysis >* Implementing the Solution (Modeling) >* Conclusion and Recommendation Data Relevance The data used for the project was collected from a prior advertisement for a similar course on the same platform. The dataset contains 10 attributes and 1,000 records. These attributes contain descriptive information of past blog users. Some of the attributes include country, age and gender of the user among others. Importing Relevant Libraries
###Code
# Installing data.table package
install.packages("data.table", dependencies=TRUE)
###Output
Installing package into ‘/usr/local/lib/R/site-library’
(as ‘lib’ is unspecified)
###Markdown
Reading the Data
###Code
# Reading the data into R from the csv file
library(data.table)
ad <- read.csv('advertising.csv')
head(ad)
###Output
_____no_output_____
###Markdown
Checking the Data
###Code
# Checking the top 6 records
head(ad)
# Checking the bottom 6 records
tail(ad)
# Checking the total number of records
nrow(ad)
# Checking the total number of columns
ncol(ad)
# Checking all column names
names(ad)
# Checking the data types of each column
str(ad)
# Checking the number of unique values in each column
lengths(lapply(ad, unique))
###Output
_____no_output_____
###Markdown
- There are 969 unique cities and 237 unique countries in the dataset.- There are only 43 unique ages in the dataset.
###Code
# Checking the summary of the data
summary(ad)
###Output
_____no_output_____
###Markdown
Data Cleaning Missing Data
###Code
# Checking the existence of missing values
colSums(is.na(ad))
###Output
_____no_output_____
###Markdown
- No missing values in any columns of the dataframe Outliers
###Code
# creating a variable with only numeric columns
library(tidyverse)
my_data <- ad %>% select(1,2,3,4,7,10)
# Previewing outliers for numeric columns using boxplots
boxplot(my_data)
###Output
_____no_output_____
###Markdown
- We see that 'area income' is the only attribute with outliers. We shall investigate each column individually for further analysis.
###Code
# Boxplot for daily time spent variable
boxplot(ad$Daily.Time.Spent.on.Site)
# Boxplot for age variable
boxplot(ad$Age)
# Boxplot for daily internet usage variable
boxplot(ad$Daily.Internet.Usage)
# Boxplot for area income variable
boxplot(ad$Area.Income)
###Output
_____no_output_____
###Markdown
- From the above graphs, no other columns have outliers except the 'area income' attribute.
###Code
# Displaying all outliers in the income column
boxplot.stats(ad$Area.Income)$out
# Checking the countries associated with outlier incomes
ad$Country[ad$Area.Income %in% c(17709.98, 18819.34, 15598.29, 15879.1, 14548.06, 13996.5, 14775.5, 18368.57)]
###Output
_____no_output_____
###Markdown
- We observe that the really low 'outlier' income numbers are associated with developing countries. This is consistent with observations in the real world, so we will keep the outliers. Anomalies
###Code
# Checking for duplicate data
duplicated_rows <- ad[duplicated(ad),]
duplicated_rows
###Output
_____no_output_____
###Markdown
- No duplicate records in the dataset Exploratory Data Analysis Univariate Analysis - In this section, we will investigate each variable individually. The steps here include calculating and interpreting measures of central tendency (mode, median, mean) as well as computing and explaining the range, the interquartile range, the standard deviation, variance, skewness, and kurtosis
###Code
# Calculating the mean for all numeric columns
lapply(my_data,FUN=mean)
###Output
_____no_output_____
###Markdown
- Average age of the blog users is 36 while the average income is 55,000.
###Code
# Calculating the median for all numeric columns
lapply(my_data,FUN=median)
# Calculating the mode for all numeric columns
getmode <- function(v) {
uniqv <- unique(v)
uniqv[which.max(tabulate(match(v, uniqv)))]
}
lapply(my_data,FUN=getmode)
###Output
_____no_output_____
###Markdown
- Most occuring age is 31 and the median age is 35. - Most occuring gender is female.
###Code
# Calculating the minimum value for all numeric columns
lapply(my_data,FUN=min)
# Calculating the maximum value for all numeric columns
lapply(my_data,FUN=max)
###Output
_____no_output_____
###Markdown
- Lowest income is ~14,000 while the highest is ~79,500- The youngest age is 19 and the oldest blog user's age is 61
###Code
# Checking the range for all numeric columns
lapply(my_data,FUN=range)
# Calculating the quantiles for all numeric columns
lapply(my_data,FUN=quantile)
# Calculating the variance for all numeric columns
lapply(my_data,FUN=var)
# Calculating the standard deviation for all numeric columns
lapply(my_data,FUN=sd)
# Plotting a histogram for age variable
hist(ad$Age)
###Output
_____no_output_____
###Markdown
- The frequency distribution above depicts a relatively normal distribution for the age attribute. Most individuals' age is centered around the mean.
###Code
# Plotting a histogram for area income variable
hist(ad$Area.Income)
###Output
_____no_output_____
###Markdown
- Income distribution is skewed to the left.
###Code
# Plotting a histogram for daily time variable
hist(ad$Daily.Time.Spent.on.Site)
# Plotting a histogram for daily internet variable
hist(ad$Daily.Internet.Usage)
# Plotting a histogram for gender variable
hist(ad$Male)
###Output
_____no_output_____
###Markdown
- The number of males and females is fairly balanced.
###Code
# Plotting a histogram for clicked ad variable
hist(ad$Clicked.on.Ad)
###Output
_____no_output_____
###Markdown
- The target variable for this analysis has equal observations for both classes.
###Code
# Checking actual number of male vs females
table(ad$Male)
# Confirming distribution of classes
table(ad$Clicked.on.Ad)
# Bar plot of the age variable
age <- ad$Age
age_freq <- table(age)
barplot(age_freq)
# Checking distribution of each country
table(ad$Country)
###Output
_____no_output_____
###Markdown
Bivariate Analysis In this section, we investigate the relationship of different variables by creating relevant visualizations such as scatter plots, a correlation matrix and the Pearson correlation coefficient.
###Code
# Checking the correlation coefficients for numeric variables
install.packages("ggcorrplot")
library(ggcorrplot)
corr = round(cor(select_if(my_data, is.numeric)), 2)
ggcorrplot(corr, hc.order = T, ggtheme = ggplot2::theme_gray,
colors = c("#6D9EC1", "white", "#E46726"), lab = T)
###Output
Installing package into ‘/usr/local/lib/R/site-library’
(as ‘lib’ is unspecified)
###Markdown
- Daily internet usage, area income and daily time spent on site each have a relatively strong negative correlation with clicked on ad.
###Code
# Scatter plot to compare age vs income
plot(ad$Age, ad$Area.Income, xlab="Age", ylab="Income")
###Output
_____no_output_____
###Markdown
- Most high income individuals are between the ages of 30 to 50.
###Code
# Scatter plot to compare income vs Clicked on ad
plot(ad$Clicked.on.Ad, ad$Area.Income, xlab="Clicked on Ad", ylab="Income")
###Output
_____no_output_____
###Markdown
- Most low income individuals clicked on the ad.
###Code
# Scatter plot to compare age vs daily time spent
plot(ad$Age, ad$Daily.Time.Spent.on.Site, xlab="Age", ylab="Time Spent")
# Scatter plot to compare clicked on ad vs time spent
plot(ad$Clicked.on.Ad, ad$Daily.Time.Spent.on.Site, xlab="Clicked on Ad", ylab="Time Spent")
###Output
_____no_output_____
###Markdown
- Most users who spent the least amount of time on the blog clicked on the ad.
###Code
# Scatter plot to compare clicked on ad vs internet usage
plot(ad$Clicked.on.Ad, ad$Daily.Internet.Usage, xlab="Clicked on Ad", ylab="Internet Usage")
# Scatter plot to compare age vs Clicked on ad
plot(ad$Clicked.on.Ad, ad$`Age`, xlab="Clicked on Ad", ylab="Age")
###Output
_____no_output_____
###Markdown
Implementing the Solution Decision Trees
###Code
# Importing relevant libraries
install.packages("rpart")
install.packages("rpart.plot")
install.packages("mlbench")
install.packages("caret")
library(rpart)
library(rpart.plot)
library(mlbench)
library(caret)
# Defining features and target variables
ad <- ad %>% select(1,2,3,4,7,8,10)
head(ad)
# Converting the country variable to numeric data type
ad$Country <- as.integer(as.factor(ad$Country))
# Normalizing relevant features
normalize <- function(x){
return ((x-min(x)) / (max(x)-min(x)))}
ad$Daily.Time.Spent.on.Site <- normalize(ad$Daily.Time.Spent.on.Site )
ad$Age <- normalize(ad$Age)
ad$Area.Income <- normalize(ad$Area.Income)
ad$Daily.Internet.Usage <- normalize(ad$Daily.Internet.Usage)
ad$Country <- normalize(ad$Country)
# Confirming the dimensions of the dataset
dim(ad)
# Creating the test and train sets. We can do a 800/200 split.
data_train <- ad[1:800, ]
data_test <- ad[801:1000,]
# Confirming the dimensions of the train and test sets
dim(data_train)
dim(data_test)
# Fitting the decision tree model
model <- rpart(Clicked.on.Ad~., data = data_train, method = 'class')
# Visualizing the model
rpart.plot(model)
# Making a prediction on the test data
predict_unseen <- predict(model, data_test, type = 'class')
table_mat <- table(data_test$Clicked.on.Ad, predict_unseen)
table_mat
###Output
_____no_output_____
###Markdown
- The model accurately predicted 85 users who did not click on the ad and 104 users who clicked on the ad. The total number of incorrect predictions is 11. This a fairly good prediction model.
###Code
# Calculating the accuracy score of the model
accuracy_score <- sum(diag(table_mat)) / sum(table_mat)
accuracy_score
###Output
_____no_output_____
###Markdown
- The model has an accuracy of 94.5%. This high prediction value is quite acceptable since we do not want the model to overfit. Hyperparameter Tuning to Optimize the Model
###Code
# Adjusting the maximum depth as well as minimum sample of a node
accuracy_tune <- function(fit) {
predict_unseen <- predict(fit, data_test, type = 'class')
table_mat <- table(data_test$Clicked.on.Ad, predict_unseen)
accuracy_Test <- sum(diag(table_mat)) / sum(table_mat)
accuracy_Test
}
control <- rpart.control(minsplit = 4,
minbucket = round(5 / 3),
maxdepth = 3,
cp = 0)
tune_fit <- rpart(Clicked.on.Ad~., data = data_train, method = 'class', control = control)
accuracy_tune(tune_fit)
###Output
_____no_output_____
###Markdown
Defining the Research Problem Specifying the Research Question The goal of this analysis is to identify individuals who are most likely to click on a cryptography course advertisement. This project uses R for analysis. Defining the Metric for Success The project will be considered a success when we are able to clean and analyse past data to segment blog users and identify individuals who should be targeted for an advertisement. Understanding the Context A Kenyan entrepreneur has created an online cryptography course and would want to advertise it on her blog. She currently targets audiences originating from various countries. In the past, she ran ads to advertise a related course on the same blog and collected data in the process. She has employed my services as a Data Science Consultant to help her identify which individuals are most likely to click on her ads. Recording the Experimental Design Below are the steps that will be followed in this analysis in order to respond to the research question satisfactorily: >* Read the Data >* Check the Data >* Data Cleaning >* Univariate Analysis >* Bivariate Analysis >* Conclusion and Recommendation Data Relevance The data used for the project was collected from a prior advertisement for a similar course on the same platform. The dataset contains 10 attributes and 1,000 records. These attributes contain descriptive information of past blog users. Some of the attributes include country, age and gender of the user among others. Importing Relevant Libraries
###Code
# Installing data.table package
install.packages("data.table", dependencies=TRUE)
###Output
Installing package into ‘/usr/local/lib/R/site-library’
(as ‘lib’ is unspecified)
also installing the dependencies ‘bit’, ‘R.oo’, ‘R.methodsS3’, ‘RcppCCTZ’, ‘RcppDate’, ‘bit64’, ‘R.utils’, ‘xts’, ‘nanotime’, ‘zoo’
###Markdown
Reading the Data
###Code
# Reading the data into R from the csv file
library(data.table)
ad <- fread('advertising.csv')
head(ad)
###Output
_____no_output_____
###Markdown
Checking the Data
###Code
# Checking the top 6 records
head(ad)
# Checking the bottom 6 records
tail(ad)
# Checking the total number of records
nrow(ad)
# Checking the total number of columns
ncol(ad)
# Checking all column names
names(ad)
# Checking the data types of each column
str(ad)
# Checking the number of unique values in each column
lengths(lapply(ad, unique))
###Output
_____no_output_____
###Markdown
- There are 969 unique cities and 237 unique countries in the dataset.
- There are only 43 unique ages in the dataset.
###Code
# Checking the summary of the data
summary(ad)
###Output
_____no_output_____
###Markdown
Data Cleaning Missing Data
###Code
# Checking the existence of missing values
colSums(is.na(ad))
###Output
_____no_output_____
###Markdown
- No missing values in any columns of the dataframe Outliers
###Code
# creating a variable with only numeric columns
library(tidyverse)
my_data <- ad %>% select(1,2,3,4,7,10)
# Previewing outliers for numeric columns using boxplots
boxplot(my_data)
###Output
_____no_output_____
###Markdown
- We see that 'area income' is the only attribute with outliers. We shall investigate each column individually for further analysis.
###Code
# Boxplot for daily time spent variable
boxplot(ad$`Daily Time Spent on Site`)
# Boxplot for age variable
boxplot(ad$Age)
# Boxplot for daily internet usage variable
boxplot(ad$`Daily Internet Usage`)
# Boxplot for area income variable
boxplot(ad$`Area Income`)
###Output
_____no_output_____
###Markdown
- From the above graphs, no other columns have outliers except the 'area income' attribute.
###Code
# Displaying all outliers in the income column
boxplot.stats(ad$`Area Income`)$out
# Checking the countries associated with outlier incomes
ad$Country[ad$`Area Income` %in% c(17709.98, 18819.34, 15598.29, 15879.1, 14548.06, 13996.5, 14775.5, 18368.57)]
###Output
_____no_output_____
###Markdown
- We observe that the really low 'outlier' income numbers are associated with developing countries. This is consistent with observations in the real world, therefore we will keep the outliers. Anomalies
###Code
# Checking for duplicate data
duplicated_rows <- ad[duplicated(ad),]
duplicated_rows
###Output
_____no_output_____
###Markdown
- No duplicate records in the dataset Exploratory Data Analysis Univariate Analysis - In this section, we will investigate each variable individually. The steps here include calculating and interpreting measures of central tendency (mode, median, mean) as well as computing and explaining the range, the interquartile range, the standard deviation, variance, skewness, and kurtosis.
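For reference (my own addition; the cells below compute the mean, median, mode, range, quantiles, variance and standard deviation), the standard moment-based definitions of the last two measures are
$$\text{skewness} = \frac{\tfrac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^3}{\left(\tfrac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2\right)^{3/2}}, \qquad \text{kurtosis} = \frac{\tfrac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^4}{\left(\tfrac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2\right)^{2}},$$
with 3 subtracted from the kurtosis when comparing against the normal distribution (excess kurtosis).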
###Code
# Calculating the mean for all numeric columns
lapply(my_data,FUN=mean)
###Output
_____no_output_____
###Markdown
- Average age of the blog users is 36 while the average income is 55,000.
###Code
# Calculating the median for all numeric columns
lapply(my_data,FUN=median)
# Calculating the mode for all numeric columns
getmode <- function(v) {
uniqv <- unique(v)
uniqv[which.max(tabulate(match(v, uniqv)))]
}
lapply(my_data,FUN=getmode)
###Output
_____no_output_____
###Markdown
- Most occurring age is 31 and the median age is 35.
- Most occurring gender is female.
###Code
# Calculating the minimum value for all numeric columns
lapply(my_data,FUN=min)
# Calculating the maximum value for all numeric columns
lapply(my_data,FUN=max)
###Output
_____no_output_____
###Markdown
- Lowest income is ~14,000 while the highest is ~79,500.
- The youngest age is 19 and the oldest blog user's age is 61.
###Code
# Checking the range for all numeric columns
lapply(my_data,FUN=range)
# Calculating the quantiles for all numeric columns
lapply(my_data,FUN=quantile)
# Calculating the variance for all numeric columns
lapply(my_data,FUN=var)
# Calculating the standard deviation for all numeric columns
lapply(my_data,FUN=sd)
# Plotting a histogram for age variable
hist(ad$Age)
###Output
_____no_output_____
###Markdown
- The frequency distribution above depicts a relatively normal distribution for the age attribute. Most individuals' ages are centered around the mean.
###Code
# Plotting a histogram for area income variable
hist(ad$`Area Income`)
###Output
_____no_output_____
###Markdown
- Income distribution is skewed to the left.
###Code
# Plotting a histogram for daily time variable
hist(ad$`Daily Time Spent on Site`)
# Plotting a histogram for daily internet variable
hist(ad$`Daily Internet Usage`)
# Plotting a histogram for gender variable
hist(ad$Male)
###Output
_____no_output_____
###Markdown
- The number of males and females is fairly balanced.
###Code
# Plotting a histogram for clicked ad variable
hist(ad$`Clicked on Ad`)
###Output
_____no_output_____
###Markdown
- The target variable for this analysis has equal observations for both classes.
###Code
# Checking actual number of male vs females
table(ad$Male)
# Confirming distribution of classes
table(ad$`Clicked on Ad`)
# Bar plot of the age variable
age <- ad$Age
age_freq <- table(age)
barplot(age_freq)
# Checking distribution of each country
table(ad$Country)
###Output
_____no_output_____
###Markdown
Bivariate Analysis In this section, we investigate the relationships between different variables by creating relevant visualizations such as scatter plots, a correlation matrix and the Pearson correlation coefficient.
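For reference (my own note), the Pearson correlation coefficient computed by `cor()` below is
$$r_{xy} = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^2}},$$
which ranges from -1 (perfect negative association) to +1 (perfect positive association).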
###Code
# Checking the correlation coefficients for numeric variables
install.packages("ggcorrplot")
library(ggcorrplot)
corr = round(cor(select_if(my_data, is.numeric)), 2)
ggcorrplot(corr, hc.order = T, ggtheme = ggplot2::theme_gray,
colors = c("#6D9EC1", "white", "#E46726"), lab = T)
###Output
Installing package into ‘/usr/local/lib/R/site-library’
(as ‘lib’ is unspecified)
also installing the dependencies ‘plyr’, ‘reshape2’
###Markdown
- Daily internet usage, area income and daily time spent on site each show a relatively strong negative correlation with the 'clicked on ad' variable.
###Code
# Scatter plot to compare age vs income
plot(ad$`Age`, ad$`Area Income`, xlab="Age", ylab="Income")
###Output
_____no_output_____
###Markdown
- Most high income individuals are between the ages of 30 to 50.
###Code
# Scatter plot to compare income vs Clicked on ad
plot(ad$`Clicked on Ad`, ad$`Area Income`, xlab="Clicked on Ad", ylab="Income")
###Output
_____no_output_____
###Markdown
- Most low income individuals clicked on the ad.
###Code
# Scatter plot to compare age vs daily time spent
plot(ad$Age, ad$`Daily Time Spent on Site`, xlab="Age", ylab="Time Spent")
# Scatter plot to compare clicked on ad vs time spent
plot(ad$`Clicked on Ad`, ad$`Daily Time Spent on Site`, xlab="Clicked on Ad", ylab="Time Spent")
###Output
_____no_output_____
###Markdown
- Most users who spent the least amount of time on the blog clicked on the ad.
###Code
# Scatter plot to compare clicked on ad vs internet usage
plot(ad$`Clicked on Ad`, ad$`Daily Internet Usage`, xlab="Clicked on Ad", ylab="Internet Usage")
# Scatter plot to compare age vs Clicked on ad
plot(ad$`Clicked on Ad`, ad$`Age`, xlab="Clicked on Ad", ylab="Age")
###Output
_____no_output_____ |
notebooks/1.0-RA-Determine_EHR_Level.ipynb | ###Markdown
Imports
###Code
import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Data
###Code
data_dir = "/Users/rajesharasada/Documents/DataScience/Imbalanced_Data_Classification/data/raw/dataset_diabetes/"
# List the data files in the raw data observations
os.listdir(data_dir)
# Define the path to the data file
data_file = os.path.join(data_dir, 'diabetic_data.csv')
###Output
_____no_output_____
###Markdown
Understanding Data Observations
###Code
# Read the csv file
df = pd.read_csv(data_file)
# Visualize a few observations to understand the data
df.sample(5)
# data info
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 101766 entries, 0 to 101765
Data columns (total 50 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 encounter_id 101766 non-null int64
1 patient_nbr 101766 non-null int64
2 race 101766 non-null object
3 gender 101766 non-null object
4 age 101766 non-null object
5 weight 101766 non-null object
6 admission_type_id 101766 non-null int64
7 discharge_disposition_id 101766 non-null int64
8 admission_source_id 101766 non-null int64
9 time_in_hospital 101766 non-null int64
10 payer_code 101766 non-null object
11 medical_specialty 101766 non-null object
12 num_lab_procedures 101766 non-null int64
13 num_procedures 101766 non-null int64
14 num_medications 101766 non-null int64
15 number_outpatient 101766 non-null int64
16 number_emergency 101766 non-null int64
17 number_inpatient 101766 non-null int64
18 diag_1 101766 non-null object
19 diag_2 101766 non-null object
20 diag_3 101766 non-null object
21 number_diagnoses 101766 non-null int64
22 max_glu_serum 101766 non-null object
23 A1Cresult 101766 non-null object
24 metformin 101766 non-null object
25 repaglinide 101766 non-null object
26 nateglinide 101766 non-null object
27 chlorpropamide 101766 non-null object
28 glimepiride 101766 non-null object
29 acetohexamide 101766 non-null object
30 glipizide 101766 non-null object
31 glyburide 101766 non-null object
32 tolbutamide 101766 non-null object
33 pioglitazone 101766 non-null object
34 rosiglitazone 101766 non-null object
35 acarbose 101766 non-null object
36 miglitol 101766 non-null object
37 troglitazone 101766 non-null object
38 tolazamide 101766 non-null object
39 examide 101766 non-null object
40 citoglipton 101766 non-null object
41 insulin 101766 non-null object
42 glyburide-metformin 101766 non-null object
43 glipizide-metformin 101766 non-null object
44 glimepiride-pioglitazone 101766 non-null object
45 metformin-rosiglitazone 101766 non-null object
46 metformin-pioglitazone 101766 non-null object
47 change 101766 non-null object
48 diabetesMed 101766 non-null object
49 readmitted 101766 non-null object
dtypes: int64(13), object(37)
memory usage: 38.8+ MB
###Markdown
Determine Level of Dataset (Line or Encounter) **Line Level**: If the total number of rows is greater than the number of unique encounters, it is at the line level. **Encounter Level**: Every interaction between a patient and healthcare provider is an encounter. So if the total number of rows is equal to the number of unique encounters, it is at the encounter level. **Longitudinal Level**: For the longitudinal or patient level, you will see multiple encounters grouped under a patient and you might not even see the encounter id field since this information is collapsed/aggregated under a unique patient id. In this case, the total number of rows should equal the total number of unique patients.
###Code
# Line test
try:
assert len(df) > df['encounter_id'].nunique()
print("The dataset is at LINE LEVEL")
except:
print("This dataset is not at LINE LEVEL")
# Encounter Test
try:
assert len(df) == df['encounter_id'].nunique()
print(
"The dataset could be at the ENCOUNTER LEVEL with {} observations and unique encounters"
.format(len(df)))
except:
print("The dataset is NOT at the ENCOUNTER LEVEL")
###Output
The dataset could be at the ENCOUNTER LEVEL with 101766 observations and unique encounters
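###Markdown
For completeness, here is a sketch (my own addition, following the definitions above) of the third check: at the longitudinal/patient level the row count should equal the number of unique patients.
###Code
# Longitudinal (patient-level) test -- mirrors the line and encounter tests above
try:
    assert len(df) == df['patient_nbr'].nunique()
    print(
        "The dataset could be at the LONGITUDINAL LEVEL with {} observations and unique patients"
        .format(len(df)))
except:
    print("The dataset is NOT at the LONGITUDINAL LEVEL")
###Output
_____no_output_____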
###Markdown
Data leakage from multiple encounters for each patient Having multiple encounters for a patient can lead to data leakage of future patient encounters. To avoid this we need to determine if there is more than one encounter per patient and select either the first encounter or the last encounter for each patient in the dataset. This is to reduce the risk of data leakage and reduce complexity of the data transformation and modeling steps. We will assume that sorting in numerical order on the encounter_id provides the time horizon for determining which encounters come before and after another.
###Code
def select_first_encounter(df):
'''
df: pandas dataframe, dataframe with all encounters
return:
- first_encounter_df: pandas dataframe, dataframe with only the first encounter for a given patient
'''
first_encounter = df.sort_values(["encounter_id", "patient_nbr"], ascending=[True, True]).groupby("patient_nbr").head(1).reset_index(drop=True)
return first_encounter
first_encounter_df = select_first_encounter(df)
first_encounter_df.sample(5)
def select_last_encounter(df):
'''
df: pandas dataframe, dataframe with all encounters
return:
- last_encounter_df: pandas dataframe, dataframe with only the last encounter for a given patient
'''
last_encounter_df = df.sort_values(["encounter_id", "patient_nbr"], ascending=[True, True]).groupby("patient_nbr").tail(1).reset_index(drop=True)
return last_encounter_df
last_encounter_df = select_last_encounter(df)
last_encounter_df.sample(5)
###Output
_____no_output_____
###Markdown
Save the datasets
###Code
first_encounter_df.to_csv("../data/processed/first_encounter.csv",index=False )
last_encounter_df.to_csv("../data/processed/last_encounter.csv",index=False )
###Output
_____no_output_____
###Markdown
Dead patientsBefore any further data wrangling and feature engineering we will drop any observations that don't add any value to the data and data analysis. Patients who expired are unlikely to get readmitted. Information of patients who expired can be dropped. We can obtain their information from the supplementary information and the feature 'discharge_disposition_id'.
###Code
df = df[~df['discharge_disposition_id'].isin([11, 19, 20, 21])]
# Reset the index
df.reset_index(drop=True, inplace=True)
df.sample(5)
df["patient_nbr"].nunique()
df["num_encounters"] = df.groupby("patient_nbr")["encounter_id"].transform("count")
df.sort_values(["num_encounters"], ascending=False)
def compute_plot_countplot(df, col, ylabel, figsize):
"""
Computes and plots a countplot
Args:
df - name of the dataframe
col - column to count the values for
ylabel- name of the column
"""
# 1. compute counts for each unique value in the column
# and convert it into a dictionary
value_counts = df[col].value_counts().to_dict()
# 2. extract the unique values and its count as a list
values = list(value_counts.keys())
counts = list(value_counts.values())
# 3. plot a horizontal bar graph
fig, ax = plt.subplots(figsize=figsize)
ax.barh(values, counts, color = '#EE6666')
ax.set_ylabel(ylabel, fontsize=14)
ax.set_xlabel("Count", fontsize=14)
plt.show()
compute_plot_countplot(df, "num_encounters", "Number of Encounters", (8,6))
df[df['patient_nbr'] == 88785891]
df.isnull().sum()
df.info()
numeric_features = df.select_dtypes(exclude='object')
categorical_features = df.select_dtypes(include='object')
for f in categorical_features:
print(categorical_features[f].unique())
print()
NA_VALUES = ['?', 'None','Unknown/Invalid']
df_ = pd.read_csv(data_file,na_values=NA_VALUES)
df_.sample(5)
df_.isna().sum()
numeric_features_ = df_.select_dtypes(exclude='object')
categorical_features_ = df_.select_dtypes(include='object')
for f in categorical_features:
print(categorical_features[f].unique())
print()
for f in numeric_features:
print(numeric_features[f].unique())
print()
def get_unique_values(df):
"""
Identifies the unique values in each feature of a dataframe
Arg: df - dataframe
Return: df with feature name, number of unique values and unique values
"""
feature_names = []
unique_values = []
num_unique_values = []
for f in df:
feature_names.append(f)
unique_values.append(df[f].unique())
num_unique_values.append(df[f].nunique())
unique_values_df = pd.DataFrame({
"feature": feature_names,
"unique_values": unique_values,
"num_unique": num_unique_values})
return unique_values_df
cat_features_unique = get_unique_values(categorical_features)
cat_features_unique
num_features_unique = get_unique_values(numeric_features)
num_features_unique
print(num_features_unique.loc[num_features_unique["feature"] == "number_emergency", "unique_values"])
feature_names = []
unique_values = []
num_unique_values = []
for f in numeric_features:
feature_names.append(f)
unique_values.append(numeric_features[f].unique())
num_unique_values.append(numeric_features[f].nunique())
num_features_df = pd.DataFrame({
"feature": feature_names,
"unique_values": unique_values,
"num_unique": num_unique_values
})
num_features_df
###Output
_____no_output_____ |
5. Data Cleaning and Vectorization.ipynb | ###Markdown
Data Cleaning and VectorizationIn this notebook, we will do the following1. Null Values Imputation : For every field, the null values will be replaced with median value. This is very simple and crude form of imputation, however the model based imputation is complicated to design, and hence will be implemented in the next iteration2. Vectorization : After the imputation, data for each file will be processed as belowFor numerical columns, the values will be scaled between 0 and 1For categorical columns, the values will be encoded as one hot vectors
###Code
from google.colab import drive
drive.mount('/content/drive')
# project directory
current_dir = 'Home Credit_Kaggle'
# set the project folder as current working directory
import os
complete_path = os.path.join('/content/drive/My Drive/Colab Notebooks/',current_dir)
os.chdir(complete_path)
import numpy as np
import pandas as pd
import time
from scipy.sparse import csr_matrix, save_npz
###Output
_____no_output_____
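###Markdown
Before wiring in the control files, here is a minimal, self-contained sketch (my own illustration on made-up toy values, not part of the original pipeline) of the strategy described at the top of this notebook: median imputation followed by min-max scaling for a numerical column, and most-frequent imputation followed by one hot encoding for a categorical column.
###Code
# Toy illustration of the imputation + vectorization strategy (assumes scikit-learn is installed)
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

# toy numerical column with a missing value
num_col = np.array([[1.0], [np.nan], [5.0], [3.0]])
num_col = SimpleImputer(strategy='median').fit_transform(num_col)        # NaN -> median (3.0)
num_col = MinMaxScaler(feature_range=(1e-3, 1)).fit_transform(num_col)   # scale into [1e-3, 1]

# toy categorical column with a missing value
cat_col = np.array([['cash'], [np.nan], ['revolving'], ['cash']], dtype=object)
cat_col = SimpleImputer(strategy='most_frequent').fit_transform(cat_col) # NaN -> most frequent ('cash')
cat_ohe = OneHotEncoder(handle_unknown='ignore').fit_transform(cat_col).toarray()

print(num_col.ravel())
print(cat_ohe)
###Output
_____no_output_____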
###Markdown
Load Control Files Load field level flags Since there are a lot of files and fields, we have soft coded below conditional information regarding fields in files for easy maintenance. In case either of these conditions need to be changed, we only need to change the file and rerun the notebook to regenerate the data as per new conditions. No code change required!!1. whether the field is to be used or not2. is it a categorical or numerical or key field3. is it to be normalized or not
###Code
# load HomeCredit_Control File_Field level.csv
field_level_flags = pd.read_csv('control/HomeCredit_Control File_Field level.csv')
print(field_level_flags.shape)
field_level_flags.head()
# create a dictionary from above data using [FILE_NAME,FIELD_NAME] as key
# for fast lookup
# prepare key as 'FILE_NAME'+'FIELD_NAME' for each record
file_name_arr = np.asarray(field_level_flags['FILE_NAME'])
field_name_arr = np.asarray(field_level_flags['FIELD_NAME'])
l = len(file_name_arr)
keys = [(str(file_name_arr[i])+str(field_name_arr[i])).strip() for i in range(l)]
# prepare values as ['FIELD_TYPE','USE_FIELD','NORMALIZE_FIELD'] for each record
field_type_arr = np.asarray(field_level_flags['FIELD_TYPE'])
use_field_arr = np.asarray(field_level_flags['USE_FIELD'])
norm_field_arr = np.asarray(field_level_flags['NORMALIZE_FIELD'])
values = [[field_type_arr[i],use_field_arr[i],norm_field_arr[i]] for i in range(l)]
# combined into dictionary
dict_field_flags = dict(zip(keys,values))
print(dict_field_flags.keys())
print(dict_field_flags.values())
###Output
dict_keys(['application_train.csvSK_ID_CURR', 'application_train.csvTARGET', 'application_train.csvNAME_CONTRACT_TYPE', 'application_train.csvCODE_GENDER', 'application_train.csvFLAG_OWN_CAR', 'application_train.csvFLAG_OWN_REALTY', 'application_train.csvCNT_CHILDREN', 'application_train.csvAMT_INCOME_TOTAL', 'application_train.csvAMT_CREDIT', 'application_train.csvAMT_ANNUITY', 'application_train.csvAMT_GOODS_PRICE', 'application_train.csvNAME_TYPE_SUITE', 'application_train.csvNAME_INCOME_TYPE', 'application_train.csvNAME_EDUCATION_TYPE', 'application_train.csvNAME_FAMILY_STATUS', 'application_train.csvNAME_HOUSING_TYPE', 'application_train.csvREGION_POPULATION_RELATIVE', 'application_train.csvDAYS_BIRTH', 'application_train.csvDAYS_EMPLOYED', 'application_train.csvDAYS_REGISTRATION', 'application_train.csvDAYS_ID_PUBLISH', 'application_train.csvOWN_CAR_AGE', 'application_train.csvFLAG_MOBIL', 'application_train.csvFLAG_EMP_PHONE', 'application_train.csvFLAG_WORK_PHONE', 'application_train.csvFLAG_CONT_MOBILE', 'application_train.csvFLAG_PHONE', 'application_train.csvFLAG_EMAIL', 'application_train.csvOCCUPATION_TYPE', 'application_train.csvCNT_FAM_MEMBERS', 'application_train.csvREGION_RATING_CLIENT', 'application_train.csvREGION_RATING_CLIENT_W_CITY', 'application_train.csvWEEKDAY_APPR_PROCESS_START', 'application_train.csvHOUR_APPR_PROCESS_START', 'application_train.csvREG_REGION_NOT_LIVE_REGION', 'application_train.csvREG_REGION_NOT_WORK_REGION', 'application_train.csvLIVE_REGION_NOT_WORK_REGION', 'application_train.csvREG_CITY_NOT_LIVE_CITY', 'application_train.csvREG_CITY_NOT_WORK_CITY', 'application_train.csvLIVE_CITY_NOT_WORK_CITY', 'application_train.csvORGANIZATION_TYPE', 'application_train.csvEXT_SOURCE_1', 'application_train.csvEXT_SOURCE_2', 'application_train.csvEXT_SOURCE_3', 'application_train.csvAPARTMENTS_AVG', 'application_train.csvBASEMENTAREA_AVG', 'application_train.csvYEARS_BEGINEXPLUATATION_AVG', 'application_train.csvYEARS_BUILD_AVG', 'application_train.csvCOMMONAREA_AVG', 'application_train.csvELEVATORS_AVG', 'application_train.csvENTRANCES_AVG', 'application_train.csvFLOORSMAX_AVG', 'application_train.csvFLOORSMIN_AVG', 'application_train.csvLANDAREA_AVG', 'application_train.csvLIVINGAPARTMENTS_AVG', 'application_train.csvLIVINGAREA_AVG', 'application_train.csvNONLIVINGAPARTMENTS_AVG', 'application_train.csvNONLIVINGAREA_AVG', 'application_train.csvAPARTMENTS_MODE', 'application_train.csvBASEMENTAREA_MODE', 'application_train.csvYEARS_BEGINEXPLUATATION_MODE', 'application_train.csvYEARS_BUILD_MODE', 'application_train.csvCOMMONAREA_MODE', 'application_train.csvELEVATORS_MODE', 'application_train.csvENTRANCES_MODE', 'application_train.csvFLOORSMAX_MODE', 'application_train.csvFLOORSMIN_MODE', 'application_train.csvLANDAREA_MODE', 'application_train.csvLIVINGAPARTMENTS_MODE', 'application_train.csvLIVINGAREA_MODE', 'application_train.csvNONLIVINGAPARTMENTS_MODE', 'application_train.csvNONLIVINGAREA_MODE', 'application_train.csvAPARTMENTS_MEDI', 'application_train.csvBASEMENTAREA_MEDI', 'application_train.csvYEARS_BEGINEXPLUATATION_MEDI', 'application_train.csvYEARS_BUILD_MEDI', 'application_train.csvCOMMONAREA_MEDI', 'application_train.csvELEVATORS_MEDI', 'application_train.csvENTRANCES_MEDI', 'application_train.csvFLOORSMAX_MEDI', 'application_train.csvFLOORSMIN_MEDI', 'application_train.csvLANDAREA_MEDI', 'application_train.csvLIVINGAPARTMENTS_MEDI', 'application_train.csvLIVINGAREA_MEDI', 'application_train.csvNONLIVINGAPARTMENTS_MEDI', 
'application_train.csvNONLIVINGAREA_MEDI', 'application_train.csvFONDKAPREMONT_MODE', 'application_train.csvHOUSETYPE_MODE', 'application_train.csvTOTALAREA_MODE', 'application_train.csvWALLSMATERIAL_MODE', 'application_train.csvEMERGENCYSTATE_MODE', 'application_train.csvOBS_30_CNT_SOCIAL_CIRCLE', 'application_train.csvDEF_30_CNT_SOCIAL_CIRCLE', 'application_train.csvOBS_60_CNT_SOCIAL_CIRCLE', 'application_train.csvDEF_60_CNT_SOCIAL_CIRCLE', 'application_train.csvDAYS_LAST_PHONE_CHANGE', 'application_train.csvFLAG_DOCUMENT_2', 'application_train.csvFLAG_DOCUMENT_3', 'application_train.csvFLAG_DOCUMENT_4', 'application_train.csvFLAG_DOCUMENT_5', 'application_train.csvFLAG_DOCUMENT_6', 'application_train.csvFLAG_DOCUMENT_7', 'application_train.csvFLAG_DOCUMENT_8', 'application_train.csvFLAG_DOCUMENT_9', 'application_train.csvFLAG_DOCUMENT_10', 'application_train.csvFLAG_DOCUMENT_11', 'application_train.csvFLAG_DOCUMENT_12', 'application_train.csvFLAG_DOCUMENT_13', 'application_train.csvFLAG_DOCUMENT_14', 'application_train.csvFLAG_DOCUMENT_15', 'application_train.csvFLAG_DOCUMENT_16', 'application_train.csvFLAG_DOCUMENT_17', 'application_train.csvFLAG_DOCUMENT_18', 'application_train.csvFLAG_DOCUMENT_19', 'application_train.csvFLAG_DOCUMENT_20', 'application_train.csvFLAG_DOCUMENT_21', 'application_train.csvAMT_REQ_CREDIT_BUREAU_HOUR', 'application_train.csvAMT_REQ_CREDIT_BUREAU_DAY', 'application_train.csvAMT_REQ_CREDIT_BUREAU_WEEK', 'application_train.csvAMT_REQ_CREDIT_BUREAU_MON', 'application_train.csvAMT_REQ_CREDIT_BUREAU_QRT', 'application_train.csvAMT_REQ_CREDIT_BUREAU_YEAR', 'bureau.csvSK_ID_CURR', 'bureau.csvSK_ID_BUREAU', 'bureau.csvCREDIT_ACTIVE', 'bureau.csvCREDIT_CURRENCY', 'bureau.csvDAYS_CREDIT', 'bureau.csvCREDIT_DAY_OVERDUE', 'bureau.csvDAYS_CREDIT_ENDDATE', 'bureau.csvDAYS_ENDDATE_FACT', 'bureau.csvAMT_CREDIT_MAX_OVERDUE', 'bureau.csvCNT_CREDIT_PROLONG', 'bureau.csvAMT_CREDIT_SUM', 'bureau.csvAMT_CREDIT_SUM_DEBT', 'bureau.csvAMT_CREDIT_SUM_LIMIT', 'bureau.csvAMT_CREDIT_SUM_OVERDUE', 'bureau.csvCREDIT_TYPE', 'bureau.csvDAYS_CREDIT_UPDATE', 'bureau.csvAMT_ANNUITY', 'bureau_balance.csvSK_ID_BUREAU', 'bureau_balance.csvMONTHS_BALANCE', 'bureau_balance.csvSTATUS', 'bureau.csvMONTHS_BALANCE', 'bureau.csvSTATUS', 'previous_application.csvSK_ID_PREV', 'previous_application.csvSK_ID_CURR', 'previous_application.csvNAME_CONTRACT_TYPE', 'previous_application.csvAMT_ANNUITY', 'previous_application.csvAMT_APPLICATION', 'previous_application.csvAMT_CREDIT', 'previous_application.csvAMT_DOWN_PAYMENT', 'previous_application.csvAMT_GOODS_PRICE', 'previous_application.csvWEEKDAY_APPR_PROCESS_START', 'previous_application.csvHOUR_APPR_PROCESS_START', 'previous_application.csvFLAG_LAST_APPL_PER_CONTRACT', 'previous_application.csvNFLAG_LAST_APPL_IN_DAY', 'previous_application.csvRATE_DOWN_PAYMENT', 'previous_application.csvRATE_INTEREST_PRIMARY', 'previous_application.csvRATE_INTEREST_PRIVILEGED', 'previous_application.csvNAME_CASH_LOAN_PURPOSE', 'previous_application.csvNAME_CONTRACT_STATUS', 'previous_application.csvDAYS_DECISION', 'previous_application.csvNAME_PAYMENT_TYPE', 'previous_application.csvCODE_REJECT_REASON', 'previous_application.csvNAME_TYPE_SUITE', 'previous_application.csvNAME_CLIENT_TYPE', 'previous_application.csvNAME_GOODS_CATEGORY', 'previous_application.csvNAME_PORTFOLIO', 'previous_application.csvNAME_PRODUCT_TYPE', 'previous_application.csvCHANNEL_TYPE', 'previous_application.csvSELLERPLACE_AREA', 'previous_application.csvNAME_SELLER_INDUSTRY', 
'previous_application.csvCNT_PAYMENT', 'previous_application.csvNAME_YIELD_GROUP', 'previous_application.csvPRODUCT_COMBINATION', 'previous_application.csvDAYS_FIRST_DRAWING', 'previous_application.csvDAYS_FIRST_DUE', 'previous_application.csvDAYS_LAST_DUE_1ST_VERSION', 'previous_application.csvDAYS_LAST_DUE', 'previous_application.csvDAYS_TERMINATION', 'previous_application.csvNFLAG_INSURED_ON_APPROVAL', 'POS_CASH_balance.csvSK_ID_PREV', 'POS_CASH_balance.csvSK_ID_CURR', 'POS_CASH_balance.csvMONTHS_BALANCE', 'POS_CASH_balance.csvCNT_INSTALMENT', 'POS_CASH_balance.csvCNT_INSTALMENT_FUTURE', 'POS_CASH_balance.csvNAME_CONTRACT_STATUS', 'POS_CASH_balance.csvSK_DPD', 'POS_CASH_balance.csvSK_DPD_DEF', 'installments_payments.csvSK_ID_PREV', 'installments_payments.csvSK_ID_CURR', 'installments_payments.csvNUM_INSTALMENT_VERSION', 'installments_payments.csvNUM_INSTALMENT_NUMBER', 'installments_payments.csvDAYS_INSTALMENT', 'installments_payments.csvDAYS_ENTRY_PAYMENT', 'installments_payments.csvAMT_INSTALMENT', 'installments_payments.csvAMT_PAYMENT', 'credit_card_balance.csvSK_ID_PREV', 'credit_card_balance.csvSK_ID_CURR', 'credit_card_balance.csvMONTHS_BALANCE', 'credit_card_balance.csvAMT_BALANCE', 'credit_card_balance.csvAMT_CREDIT_LIMIT_ACTUAL', 'credit_card_balance.csvAMT_DRAWINGS_ATM_CURRENT', 'credit_card_balance.csvAMT_DRAWINGS_CURRENT', 'credit_card_balance.csvAMT_DRAWINGS_OTHER_CURRENT', 'credit_card_balance.csvAMT_DRAWINGS_POS_CURRENT', 'credit_card_balance.csvAMT_INST_MIN_REGULARITY', 'credit_card_balance.csvAMT_PAYMENT_CURRENT', 'credit_card_balance.csvAMT_PAYMENT_TOTAL_CURRENT', 'credit_card_balance.csvAMT_RECEIVABLE_PRINCIPAL', 'credit_card_balance.csvAMT_RECIVABLE', 'credit_card_balance.csvAMT_TOTAL_RECEIVABLE', 'credit_card_balance.csvCNT_DRAWINGS_ATM_CURRENT', 'credit_card_balance.csvCNT_DRAWINGS_CURRENT', 'credit_card_balance.csvCNT_DRAWINGS_OTHER_CURRENT', 'credit_card_balance.csvCNT_DRAWINGS_POS_CURRENT', 'credit_card_balance.csvCNT_INSTALMENT_MATURE_CUM', 'credit_card_balance.csvNAME_CONTRACT_STATUS', 'credit_card_balance.csvSK_DPD', 'credit_card_balance.csvSK_DPD_DEF'])
dict_values([['Primary Key', 'Y', 'N'], ['Target Value', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'N', 'Y'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Categorical', 'Y', 'N'], ['Numerical', 'Y', 'Y'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'Y', 'N'], ['Numerical', 'Y', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Categorical', 'N', 'N'], ['Categorical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Categorical', 'N', 'N'], ['Categorical', 'N', 'N'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Foreign Key', 'Y', 'N'], ['Primary Key', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'N', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Categorical', 'Y', 'N'], 
['Numerical', 'Y', 'Y'], ['Numerical', 'N', 'Y'], ['Foreign Key', 'Y', 'N'], ['Numerical', 'Y', 'Y'], ['Categorical', 'Y', 'N'], ['Numerical', 'Y', 'Y'], ['Categorical', 'Y', 'N'], ['Primary Key', 'Y', 'N'], ['Foreign Key', 'Y', 'Y'], ['Categorical', 'Y', 'N'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'N', 'Y'], ['Numerical', 'Y', 'Y'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Numerical', 'N', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Numerical', 'Y', 'Y'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'N', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Numerical', 'Y', 'Y'], ['Categorical', 'Y', 'N'], ['Numerical', 'Y', 'Y'], ['Categorical', 'Y', 'N'], ['Categorical', 'Y', 'N'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Foreign Key', 'Y', 'N'], ['Foreign Key', 'Y', 'N'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Categorical', 'Y', 'N'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Foreign Key', 'Y', 'N'], ['Foreign Key', 'Y', 'N'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Foreign Key', 'Y', 'N'], ['Foreign Key', 'Y', 'N'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'N', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'N', 'Y'], ['Numerical', 'N', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'N', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'N', 'Y'], ['Numerical', 'Y', 'Y'], ['Numerical', 'N', 'Y'], ['Numerical', 'N', 'Y'], ['Numerical', 'Y', 'Y'], ['Categorical', 'Y', 'N'], ['Numerical', 'Y', 'Y'], ['Numerical', 'Y', 'Y']])
###Markdown
Load File Level Flags The ORDER_BY flags loaded below will control the ordering of record in each file. Since the linking of files to each other is through keys, order of records is of utmost importance. It will help us to create file snapshots easily later.The NUM_TOP_REC flags are used to control the number of records in File Snapshots. This flag will not be used in this notebook, it will be used later.
###Code
# load HomeCredit_Control File_File Level.csv
file_level_flags = pd.read_csv('control/HomeCredit_Control File_File Level_nn.csv')
print(file_level_flags.shape)
file_level_flags.head(6)
# create a dictionary from above data using [FILE_NAME,FIELD_NAME] as key
# for fast lookup
# prepare key as 'FILE_NAME' for each record
file_name_arr = np.asarray(file_level_flags['FILE_NAME'])
l = len(file_name_arr)
keys = [str(file_name_arr[i]).strip() for i in range(l)]
# prepare values as ['NUM_TOP_REC','ORDER_BY','ASC_ORDER?'] for each record
num_top_rec_arr = np.asarray(file_level_flags['NUM_TOP_REC'])
order_by_arr = np.asarray(file_level_flags['ORDER_BY'])
asc_order_arr = np.asarray(file_level_flags['ASC ORDER?'])
values = [[num_top_rec_arr[i],order_by_arr[i],asc_order_arr[i]] for i in range(l)]
# combined into dictionary
dict_file_flags = dict(zip(keys,values))
print(dict_file_flags.keys())
print(dict_file_flags.values())
###Output
dict_keys(['bureau.csv', 'bureau_balance.csv', 'previous_application.csv', 'POS_CASH_balance.csv', 'installments_payments.csv', 'credit_card_balance.csv'])
dict_values([[2, 'SK_ID_CURR,SK_ID_BUREAU,DAYS_CREDIT', 1], [1, 'SK_ID_BUREAU,MONTHS_BALANCE', 1], [1, 'SK_ID_CURR,SK_ID_PREV,DAYS_DECISION', 1], [10, 'SK_ID_CURR,SK_ID_PREV,MONTHS_BALANCE', 1], [5, 'SK_ID_CURR,SK_ID_PREV,DAYS_INSTALMENT', 1], [10, 'SK_ID_CURR,SK_ID_PREV,MONTHS_BALANCE', 1]])
###Markdown
Create functions to preprocess data in files We have defined three functions below to clean + vectorize/normalize the data in all files.1. preprocess_categ_train => This function will impute + vectorize a categorical column of train data2. preprocess_numeric_train => This function will impute + normalize (scale between 0 to 1) a numerical column of data3. preprocess_file => This function will call above two functions for each file
###Code
# function to impute and preprocess categorical data
#from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import OneHotEncoder
from sklearn.impute import SimpleImputer
def preprocess_categ_train(arr_train):
# reshape array to be 2D
arr_train_2D = np.asarray(arr_train).reshape(-1,1)
# Part 1 - Impute with most frequent value
# initialize imputer
imputer = SimpleImputer(strategy='most_frequent')
imputer.fit(arr_train_2D)
arr_train2 = imputer.transform(arr_train_2D)
# reshape array to be 1D for vectorizer
#arr_train2 = np.asarray(arr_train2).reshape(-1,)
#print(arr_train2)
# Part 2 - Encode the categorical values
# initialize vectorizer
count_vect = OneHotEncoder(handle_unknown='ignore')
# fit vectorizer to training data for each categorical column
# and use it to transform the training data
count_vect.fit(arr_train2)
train_values = count_vect.categories_[0] # store list of unique values
#print(train_values)
feat_size = len(count_vect.categories_[0]) # find no of unique values
arr_train_ohe = count_vect.transform(arr_train2).toarray()
return imputer,count_vect,feat_size,arr_train_ohe
##=========================end of function========================##
# function to impute and preprocess numerical data
from sklearn.preprocessing import MinMaxScaler
from sklearn.impute import SimpleImputer
def preprocess_numeric_train(arr_train):
# reshape array to be 2D
arr_train_2D = np.asarray(arr_train).reshape(-1,1)
# Part 1 - Impute with median value
# initialize imputer
imputer = SimpleImputer(strategy='median')
# fit and transform with imputer
imputer.fit(arr_train_2D)
arr_train2 = imputer.transform(arr_train_2D)
# Part 2 - Min Max scaling
# initializer scaler
field_scaler = MinMaxScaler(feature_range=(1e-3, 1))
# fit and transform with scaler
field_scaler.fit(arr_train2)
arr_train_scaled = field_scaler.transform(arr_train2)
#return scaler and scaled data
return imputer,field_scaler,arr_train_scaled
# function to preprocess a file
def preprocess_file(file_name,file_df,dict_field_flags):
# preprocess file and return 3 chunks of data
# key values => key_fields (dataframe)
key_values = pd.DataFrame()
# numerical data => numeric_data (numpy 2D array)
numeric_data = np.array([[]])
# categorial data => categ_data (numpy 2D array)
categ_data = np.array([[]])
# dict_preprocessors is the
# dictionary to hold preprocessors for each field
# one hot encoders for categorical data
# scalers for numerical data
dict_preprocessors = {}
# same is dict_imputers for imputers
dict_imputers = {}
# dict of column processing order (column index) for each numerical field
dict_col_index = {}
col_index = 1 # init
# dict of feature sizes for each categorical field
dict_feat_size = {}
# for each column of the df
for col in file_df.columns:
# look up the value of flags in dictionary
field_key = str(file_name) + str(col)
field_type, use_field, normalize_field = dict_field_flags[field_key]
#print(file_df[col].shape)
# if field is to be used
if use_field != 'N':
# if field is numerical
if field_type == 'Numerical':
# impute and preprocess field
field_imputer,field_scaler,field_scaled = preprocess_numeric_train(file_df[col])
#print(field_scaled.shape)
# append the scaler, column index and imputer to dictionary
dict_preprocessors.update({field_key:field_scaler})
dict_col_index.update({field_key:col_index})
col_index += 1
dict_imputers.update({field_key:field_imputer})
# append the preprocessed numeric data to array
if numeric_data.shape == (1,0): #first array being appended
#print(numeric_data.shape)
numeric_data = field_scaled
else:
numeric_data = np.append(numeric_data,field_scaled,axis=1)
#print(numeric_data.shape)
# if field is categorical
elif field_type == 'Categorical':
# preprocess field
field_imputer,field_vect,feat_size,field_ohe = preprocess_categ_train(file_df[col])
#print(field_ohe.shape)
# append the vectorizer, feature size and imputer to dictionary
dict_preprocessors.update({field_key:field_vect})
dict_feat_size.update({field_key:feat_size})
dict_imputers.update({field_key:field_imputer})
# append the preprocessed categorical data to array
if categ_data.shape == (1,0): #first array being appended
categ_data = field_ohe
else:
categ_data = np.append(categ_data,field_ohe,axis=1)
# if field is a key or target value
elif field_type == 'Primary Key' or field_type == 'Foreign Key' or field_type == 'Target Value':
# append key column to dataframe
key_values[col] = file_df[col]
#==========end of if elif block============#
#===========end of use_field=='Y' block==========#
#=======================end of for loop================#
return key_values,numeric_data,categ_data,dict_preprocessors,dict_feat_size,dict_col_index,dict_imputers
###Output
_____no_output_____
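###Markdown
As a quick smoke test (my own addition, using made-up toy columns), the two field-level preprocessors defined above can be exercised on tiny inputs before running them over the full files.
###Code
# Illustrative smoke test of the preprocessing functions on toy columns
toy_num = pd.Series([10.0, np.nan, 30.0, 20.0])
num_imputer, num_scaler, num_scaled = preprocess_numeric_train(toy_num)
print(num_scaled.ravel())    # NaN imputed with the median, then scaled into [1e-3, 1]

toy_cat = pd.Series(['Cash loans', np.nan, 'Revolving loans', 'Cash loans'], dtype=object)
cat_imputer, cat_vect, cat_size, cat_ohe = preprocess_categ_train(toy_cat)
print(cat_size)              # number of distinct categories after imputation
print(cat_ohe)               # one hot encoded rows
###Output
_____no_output_____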
###Markdown
Create folders for preprocessed outputs and preprocessors
The preprocessed output (scaled numerical fields and one hot encoded categorical fields) will be stored in preprocesssed folder. The scalers and encoders for the same will be stored in preprocessors folder. Create these folders if not already present.
###Code
# create output folder for preprocessed data if not already present
out_path_data = os.path.join(complete_path,'preprocessed')
if not os.path.isdir(out_path_data):
os.mkdir(out_path_data)
# create output folder for preprocessors if not already present
out_path_preprocessors = os.path.join(complete_path,'preprocessors')
if not os.path.isdir(out_path_preprocessors):
os.mkdir(out_path_preprocessors)
###Output
_____no_output_____
###Markdown
Call above functions for each file Application Train File
###Code
# size check
file_df = pd.read_csv('data/application_train.csv')
file_df.shape
# start time
s_time = time.time()
# init file name
file_name = 'application_train.csv'
# load file into df
file_df = pd.read_csv('data/application_train.csv')
app_train_keys,app_train_numeric_data,app_train_categ_data,app_train_preprocessors,app_train_feat_size,app_train_col_index,app_train_imputers = preprocess_file(file_name,file_df,dict_field_flags)
print("Time Taken (in seconds): ",(time.time() - s_time))
print(app_train_keys.head())
print(app_train_keys.shape)
print(app_train_numeric_data.shape)
print(app_train_categ_data.shape)
print(app_train_feat_size)
print(app_train_col_index)
###Output
SK_ID_CURR TARGET
0 100002 1
1 100003 0
2 100004 0
3 100006 0
4 100007 0
(307511, 2)
(307511, 27)
(307511, 188)
{'application_train.csvNAME_CONTRACT_TYPE': 2, 'application_train.csvCODE_GENDER': 3, 'application_train.csvFLAG_OWN_CAR': 2, 'application_train.csvFLAG_OWN_REALTY': 2, 'application_train.csvNAME_TYPE_SUITE': 7, 'application_train.csvNAME_INCOME_TYPE': 8, 'application_train.csvNAME_EDUCATION_TYPE': 5, 'application_train.csvNAME_FAMILY_STATUS': 6, 'application_train.csvNAME_HOUSING_TYPE': 6, 'application_train.csvFLAG_MOBIL': 2, 'application_train.csvFLAG_EMP_PHONE': 2, 'application_train.csvFLAG_WORK_PHONE': 2, 'application_train.csvFLAG_CONT_MOBILE': 2, 'application_train.csvFLAG_PHONE': 2, 'application_train.csvFLAG_EMAIL': 2, 'application_train.csvOCCUPATION_TYPE': 18, 'application_train.csvWEEKDAY_APPR_PROCESS_START': 7, 'application_train.csvREG_REGION_NOT_LIVE_REGION': 2, 'application_train.csvREG_REGION_NOT_WORK_REGION': 2, 'application_train.csvLIVE_REGION_NOT_WORK_REGION': 2, 'application_train.csvREG_CITY_NOT_LIVE_CITY': 2, 'application_train.csvREG_CITY_NOT_WORK_CITY': 2, 'application_train.csvLIVE_CITY_NOT_WORK_CITY': 2, 'application_train.csvORGANIZATION_TYPE': 58, 'application_train.csvFLAG_DOCUMENT_2': 2, 'application_train.csvFLAG_DOCUMENT_3': 2, 'application_train.csvFLAG_DOCUMENT_4': 2, 'application_train.csvFLAG_DOCUMENT_5': 2, 'application_train.csvFLAG_DOCUMENT_6': 2, 'application_train.csvFLAG_DOCUMENT_7': 2, 'application_train.csvFLAG_DOCUMENT_8': 2, 'application_train.csvFLAG_DOCUMENT_9': 2, 'application_train.csvFLAG_DOCUMENT_10': 2, 'application_train.csvFLAG_DOCUMENT_11': 2, 'application_train.csvFLAG_DOCUMENT_12': 2, 'application_train.csvFLAG_DOCUMENT_13': 2, 'application_train.csvFLAG_DOCUMENT_14': 2, 'application_train.csvFLAG_DOCUMENT_15': 2, 'application_train.csvFLAG_DOCUMENT_16': 2, 'application_train.csvFLAG_DOCUMENT_17': 2, 'application_train.csvFLAG_DOCUMENT_18': 2, 'application_train.csvFLAG_DOCUMENT_19': 2, 'application_train.csvFLAG_DOCUMENT_20': 2, 'application_train.csvFLAG_DOCUMENT_21': 2}
{'application_train.csvCNT_CHILDREN': 1, 'application_train.csvAMT_INCOME_TOTAL': 2, 'application_train.csvAMT_CREDIT': 3, 'application_train.csvAMT_ANNUITY': 4, 'application_train.csvAMT_GOODS_PRICE': 5, 'application_train.csvREGION_POPULATION_RELATIVE': 6, 'application_train.csvDAYS_BIRTH': 7, 'application_train.csvDAYS_EMPLOYED': 8, 'application_train.csvDAYS_REGISTRATION': 9, 'application_train.csvDAYS_ID_PUBLISH': 10, 'application_train.csvCNT_FAM_MEMBERS': 11, 'application_train.csvREGION_RATING_CLIENT': 12, 'application_train.csvREGION_RATING_CLIENT_W_CITY': 13, 'application_train.csvHOUR_APPR_PROCESS_START': 14, 'application_train.csvEXT_SOURCE_2': 15, 'application_train.csvEXT_SOURCE_3': 16, 'application_train.csvOBS_30_CNT_SOCIAL_CIRCLE': 17, 'application_train.csvDEF_30_CNT_SOCIAL_CIRCLE': 18, 'application_train.csvOBS_60_CNT_SOCIAL_CIRCLE': 19, 'application_train.csvDEF_60_CNT_SOCIAL_CIRCLE': 20, 'application_train.csvDAYS_LAST_PHONE_CHANGE': 21, 'application_train.csvAMT_REQ_CREDIT_BUREAU_HOUR': 22, 'application_train.csvAMT_REQ_CREDIT_BUREAU_DAY': 23, 'application_train.csvAMT_REQ_CREDIT_BUREAU_WEEK': 24, 'application_train.csvAMT_REQ_CREDIT_BUREAU_MON': 25, 'application_train.csvAMT_REQ_CREDIT_BUREAU_QRT': 26, 'application_train.csvAMT_REQ_CREDIT_BUREAU_YEAR': 27}
###Markdown
Checking categorical columns to make sure the values have been encoded correctly An easy way to check that one-hot encoding is working correctly is to verify the number of distinct values in the final feature size against the output of the EDA_Basic notebook for the file. The output below correctly matches the output for application_train.csv.
###Code
# quick check of categorical values
s = 0
print("\t\t File NameFieldName \t\t No of Cumulative sum")
print(" \t\t\t\t dist. values ")
for i,v in app_train_feat_size.items():
s += v
print("{:50} \t {:2} \t {:2}".format(i,v,s))
# save the above outputs to drive
app_train_keys.to_csv('preprocessed/app_train_keys.csv',index=False)
np.save("preprocessed/app_train_numeric_data",app_train_numeric_data)
#np.save("preprocessed/app_train_categ_data",app_train_categ_data)
app_train_categ_data_csr = csr_matrix(app_train_categ_data)
save_npz('preprocessed/app_train_categ_data_csr.npz',app_train_categ_data_csr)
import pickle
app_train_preprocessors_file = open('preprocessors/app_train_preprocessors','wb')
pickle.dump(app_train_preprocessors,app_train_preprocessors_file)
app_train_preprocessors_file.close()
app_train_feat_size_file = open('preprocessors/app_train_feat_size','wb')
pickle.dump(app_train_feat_size,app_train_feat_size_file)
app_train_feat_size_file.close()
app_train_imputers_file = open('preprocessors/app_train_imputers','wb')
pickle.dump(app_train_imputers,app_train_imputers_file)
app_train_imputers_file.close()
app_train_col_index_file = open('preprocessors/app_train_col_index','wb')
pickle.dump(app_train_col_index,app_train_col_index_file)
app_train_col_index_file.close()
###Output
_____no_output_____
###Markdown
Previous Application.csv
###Code
# size check
file_df = pd.read_csv('data/previous_application.csv')
file_df.shape
# start time
s_time = time.time()
# init file name
file_name = 'previous_application.csv'
# load file into df
#file_df = pd.read_csv('data/previous_application.csv',nrows=1000)
file_df = pd.read_csv('data/previous_application.csv')
#print(file_df.head(10))
# order the file by key fields and the ordering key
sort_keys = dict_file_flags[file_name][1].split(',') # split the string into list of key fields
asc_order = list(dict_file_flags[file_name][2]**range(len(sort_keys))) # flags to control if dataframe should be sorted in asc order
# list was required above since one flag is required for each key
file_df.sort_values(by=sort_keys,ascending=asc_order,inplace=True,na_position='last')
file_df.reset_index(drop=True,inplace=True)
#print(file_df.head(10))
prev_app_keys,prev_app_numeric_data,prev_app_categ_data,prev_app_preprocessors,prev_app_feat_size,prev_app_col_index,prev_app_imputers = preprocess_file(file_name,file_df,dict_field_flags)
print("Time Taken (in seconds): ",(time.time() - s_time))
print(prev_app_keys.head())
print(prev_app_keys.shape)
print(prev_app_numeric_data.shape)
print(prev_app_categ_data.shape)
print(prev_app_feat_size)
print(prev_app_col_index)
# save the above outputs to drive
prev_app_keys.to_csv('preprocessed/prev_app_keys.csv',index=False)
np.save("preprocessed/prev_app_numeric_data",prev_app_numeric_data)
#np.save("preprocessed/prev_app_categ_data",prev_app_categ_data)
prev_app_categ_data_csr = csr_matrix(prev_app_categ_data)
save_npz('preprocessed/prev_app_categ_data_csr.npz',prev_app_categ_data_csr)
import pickle
prev_app_preprocessors_file = open('preprocessors/prev_app_preprocessors','wb')
pickle.dump(prev_app_preprocessors,prev_app_preprocessors_file)
prev_app_preprocessors_file.close()
prev_app_feat_size_file = open('preprocessors/prev_app_feat_size','wb')
pickle.dump(prev_app_feat_size,prev_app_feat_size_file)
prev_app_feat_size_file.close()
prev_app_col_index_file = open('preprocessors/prev_app_col_index','wb')
pickle.dump(prev_app_col_index,prev_app_col_index_file)
prev_app_col_index_file.close()
###Output
_____no_output_____
###Markdown
Bureau.csv
###Code
# size check
file_df = pd.read_csv('data/bureau.csv')
file_df.shape
# start time
s_time = time.time()
# init file name
file_name = 'bureau.csv'
# load file into df
#file_df = pd.read_csv('data/bureau.csv',nrows=1000)
file_df = pd.read_csv('data/bureau.csv')
#print(file_df.head(10))
# order the file by key fields and the ordering key
# get the keys and sorting order
sort_keys = dict_file_flags[file_name][1].split(',') # split the string into list of key fields
asc_order = list(dict_file_flags[file_name][2]**range(len(sort_keys))) # flags to control if dataframe should be sorted in asc order
# list was required above since one flag is required for each key
# do the sorting
file_df.sort_values(by=sort_keys,ascending=asc_order,inplace=True,na_position='last')
file_df.reset_index(drop=True,inplace=True)
#print(file_df.head(10))
bureau_keys,bureau_numeric_data,bureau_categ_data,bureau_preprocessors,bureau_feat_size,bureau_col_index,bureau_imputers = preprocess_file(file_name,file_df,dict_field_flags)
print("Time Taken (in seconds): ",(time.time() - s_time))
print(bureau_keys.head())
print(bureau_keys.shape)
print(bureau_numeric_data.shape)
print(bureau_categ_data.shape)
print(bureau_feat_size)
print(bureau_col_index)
# save the above outputs to drive
bureau_keys.to_csv('preprocessed/bureau_keys.csv',index=False)
np.save("preprocessed/bureau_numeric_data",bureau_numeric_data)
#np.save("preprocessed/bureau_categ_data",bureau_categ_data)
bureau_categ_data_csr = csr_matrix(bureau_categ_data)
save_npz('preprocessed/bureau_categ_data_csr.npz',bureau_categ_data_csr)
import pickle
bureau_preprocessors_file = open('preprocessors/bureau_preprocessors','wb')
pickle.dump(bureau_preprocessors,bureau_preprocessors_file)
bureau_preprocessors_file.close()
bureau_feat_size_file = open('preprocessors/bureau_feat_size','wb')
pickle.dump(bureau_feat_size,bureau_feat_size_file)
bureau_feat_size_file.close()
bureau_col_index_file = open('preprocessors/bureau_col_index','wb')
pickle.dump(bureau_col_index,bureau_col_index_file)
bureau_col_index_file.close()
###Output
_____no_output_____
###Markdown
Bureau Balance.csv
###Code
# size check
file_df = pd.read_csv('data/bureau_balance.csv')
file_df.shape
# start time
s_time = time.time()
# init file name
file_name = 'bureau_balance.csv'
# load file into df
#file_df = pd.read_csv('data/bureau_balance.csv',nrows=1000)
file_df = pd.read_csv('data/bureau_balance.csv')
# take only a part of data as there are 27M records!!
num_of_rows_to_keep = len(file_df)//2
file_df = file_df[:num_of_rows_to_keep]
#print(file_df.head(10))
# order the file by key fields and the ordering key
# get the keys and sorting order
sort_keys = dict_file_flags[file_name][1].split(',') # split the string into list of key fields
asc_order = list(dict_file_flags[file_name][2]**range(len(sort_keys))) # flags to control if dataframe should be sorted in asc order
# list was required above since one flag is required for each key
# do the sorting
file_df.sort_values(by=sort_keys,ascending=asc_order,inplace=True,na_position='last')
file_df.reset_index(drop=True,inplace=True)
#print(file_df.head(10))
bureau_bal_keys,bureau_bal_numeric_data,bureau_bal_categ_data,bureau_bal_preprocessors,bureau_bal_feat_size,bureau_bal_col_index,bureau_bal_imputers = preprocess_file(file_name,file_df,dict_field_flags)
print("Time Taken (in seconds): ",(time.time() - s_time))
print(bureau_bal_keys.head())
print(bureau_bal_keys.shape)
print(bureau_bal_numeric_data.shape)
print(bureau_bal_categ_data.shape)
print(bureau_bal_feat_size)
print(bureau_bal_col_index)
# save the above outputs to drive
bureau_bal_keys.to_csv('preprocessed/bureau_bal_keys.csv',index=False)
np.save("preprocessed/bureau_bal_numeric_data",bureau_bal_numeric_data)
#np.save("preprocessed/bureau_bal_categ_data",bureau_bal_categ_data)
bureau_bal_categ_data_csr = csr_matrix(bureau_bal_categ_data)
save_npz('preprocessed/bureau_bal_categ_data_csr.npz',bureau_bal_categ_data_csr)
import pickle
bureau_bal_preprocessors_file = open('preprocessors/bureau_bal_preprocessors','wb')
pickle.dump(bureau_bal_preprocessors,bureau_bal_preprocessors_file)
bureau_bal_preprocessors_file.close()
bureau_bal_feat_size_file = open('preprocessors/bureau_bal_feat_size','wb')
pickle.dump(bureau_bal_feat_size,bureau_bal_feat_size_file)
bureau_bal_feat_size_file.close()
bureau_bal_col_index_file = open('preprocessors/bureau_bal_col_index','wb')
pickle.dump(bureau_bal_col_index,bureau_bal_col_index_file)
bureau_bal_col_index_file.close()
###Output
_____no_output_____
###Markdown
POS Cash Balance.csv
###Code
# size check
file_df = pd.read_csv('data/POS_CASH_balance.csv')
file_df.shape
# start time
s_time = time.time()
# init file name
file_name = 'POS_CASH_balance.csv'
# load file into df
#file_df = pd.read_csv('data/POS_CASH_balance.csv',nrows=1000)
file_df = pd.read_csv('data/POS_CASH_balance.csv')
#print(file_df.head(10))
# order the file by key fields and the ordering key
# get the keys and sorting order
sort_keys = dict_file_flags[file_name][1].split(',') # split the string into list of key fields
asc_order = list(dict_file_flags[file_name][2]**range(len(sort_keys))) # flags to control if dataframe should be sorted in asc order
# use only a part of the dataset, since original file has 10 Million records!!
num_of_rows_to_keep = len(file_df)//2
file_df = file_df[:num_of_rows_to_keep]
# list was required above since one flag is required for each key
# do the sorting
file_df.sort_values(by=sort_keys,ascending=asc_order,inplace=True,na_position='last')
file_df.reset_index(drop=True,inplace=True)
#print(file_df.head(10))
pos_cash_bal_keys,pos_cash_bal_numeric_data,pos_cash_bal_categ_data,pos_cash_bal_preprocessors,pos_cash_bal_feat_size,pos_cash_bal_col_index,pos_cash_bal_imputers = preprocess_file(file_name,file_df,dict_field_flags)
print("Time Taken (in seconds): ",(time.time() - s_time))
print(pos_cash_bal_keys.head())
print(pos_cash_bal_keys.shape)
print(pos_cash_bal_numeric_data.shape)
print(pos_cash_bal_categ_data.shape)
print(pos_cash_bal_feat_size)
print(pos_cash_bal_col_index)
# save the above outputs to drive
pos_cash_bal_keys.to_csv('preprocessed/pos_cash_bal_keys.csv',index=False)
np.save("preprocessed/pos_cash_bal_numeric_data",pos_cash_bal_numeric_data)
#np.save("preprocessed/pos_cash_bal_categ_data",pos_cash_bal_categ_data)
pos_cash_bal_categ_data_csr = csr_matrix(pos_cash_bal_categ_data)
save_npz('preprocessed/pos_cash_bal_categ_data_csr.npz',pos_cash_bal_categ_data_csr)
import pickle
pos_cash_bal_preprocessors_file = open('preprocessors/pos_cash_bal_preprocessors','wb')
pickle.dump(pos_cash_bal_preprocessors,pos_cash_bal_preprocessors_file)
pos_cash_bal_preprocessors_file.close()
pos_cash_bal_feat_size_file = open('preprocessors/pos_cash_bal_feat_size','wb')
pickle.dump(pos_cash_bal_feat_size,pos_cash_bal_feat_size_file)
pos_cash_bal_feat_size_file.close()
pos_cash_bal_col_index_file = open('preprocessors/pos_cash_bal_col_index','wb')
pickle.dump(pos_cash_bal_col_index,pos_cash_bal_col_index_file)
pos_cash_bal_col_index_file.close()
###Output
_____no_output_____
###Markdown
Installments Payments.csv
###Code
# size check
file_df = pd.read_csv('data/installments_payments.csv')
file_df.shape
# start time
s_time = time.time()
# init file name
file_name = 'installments_payments.csv'
# load file into df
#file_df = pd.read_csv('data/installments_payments.csv',nrows=1000)
file_df = pd.read_csv('data/installments_payments.csv')
#print(file_df.head(10))
# order the file by key fields and the ordering key
# get the keys and sorting order
sort_keys = dict_file_flags[file_name][1].split(',') # split the string into list of key fields
asc_order = [dict_file_flags[file_name][2]] * len(sort_keys) # one ascending/descending flag per sort key
# list was required above since one flag is required for each key
# do the sorting
file_df.sort_values(by=sort_keys,ascending=asc_order,inplace=True,na_position='last')
file_df.reset_index(drop=True,inplace=True)
#print(file_df.head(10))
instalm_paym_keys,instalm_paym_numeric_data,instalm_paym_categ_data,instalm_paym_preprocessors,instalm_paym_feat_size,instalm_paym_col_index,instalm_paym_imputers = preprocess_file(file_name,file_df,dict_field_flags)
print("Time Taken (in seconds): ",(time.time() - s_time))
print(instalm_paym_keys.head())
print(instalm_paym_keys.shape)
print(instalm_paym_numeric_data.shape)
print(instalm_paym_categ_data.shape)
print(instalm_paym_feat_size)
print(instalm_paym_col_index)
# save the above outputs to drive
instalm_paym_keys.to_csv('preprocessed/instalm_paym_keys.csv',index=False)
np.save("preprocessed/instalm_paym_numeric_data",instalm_paym_numeric_data)
#np.save("preprocessed/instalm_paym_categ_data",instalm_paym_categ_data) # no categ data for this file
import pickle
instalm_paym_preprocessors_file = open('preprocessors/instalm_paym_preprocessors','wb')
pickle.dump(instalm_paym_preprocessors,instalm_paym_preprocessors_file)
instalm_paym_preprocessors_file.close()
instalm_paym_feat_size_file = open('preprocessors/instalm_paym_feat_size','wb')
pickle.dump(instalm_paym_feat_size,instalm_paym_feat_size_file)
instalm_paym_feat_size_file.close()
instalm_paym_col_index_file = open('preprocessors/instalm_paym_col_index','wb')
pickle.dump(instalm_paym_col_index,instalm_paym_col_index_file)
instalm_paym_col_index_file.close()
###Output
_____no_output_____
###Markdown
Credit Card Balance.csv
###Code
# size check
file_df = pd.read_csv('data/credit_card_balance.csv')
file_df.shape
# start time
s_time = time.time()
# init file name
file_name = 'credit_card_balance.csv'
# load file into df
#file_df = pd.read_csv('data/credit_card_balance.csv',nrows=1000)
file_df = pd.read_csv('data/credit_card_balance.csv')
#print(file_df.head(10))
# order the file by key fields and the ordering key
# get the keys and sorting order
sort_keys = dict_file_flags[file_name][1].split(',') # split the string into list of key fields
asc_order = [dict_file_flags[file_name][2]] * len(sort_keys) # one ascending/descending flag per sort key
# list was required above since one flag is required for each key
# do the sorting
file_df.sort_values(by=sort_keys,ascending=asc_order,inplace=True,na_position='last')
file_df.reset_index(drop=True,inplace=True)
#print(file_df.head(10))
credit_bal_keys,credit_bal_numeric_data,credit_bal_categ_data,credit_bal_preprocessors,credit_bal_feat_size,credit_bal_col_index,credit_bal_imputers = preprocess_file(file_name,file_df,dict_field_flags)
print("Time Taken (in seconds): ",(time.time() - s_time))
print(credit_bal_keys.head())
print(credit_bal_keys.shape)
print(credit_bal_numeric_data.shape)
print(credit_bal_categ_data.shape)
print(credit_bal_feat_size)
print(credit_bal_col_index)
# save the above outputs to drive
credit_bal_keys.to_csv('preprocessed/credit_bal_keys.csv',index=False)
np.save("preprocessed/credit_bal_numeric_data",credit_bal_numeric_data)
#np.save("preprocessed/credit_bal_categ_data",credit_bal_categ_data)
credit_bal_categ_data_csr = csr_matrix(credit_bal_categ_data)
save_npz('preprocessed/credit_bal_categ_data_csr.npz',credit_bal_categ_data_csr)
import pickle
credit_bal_preprocessors_file = open('preprocessors/credit_bal_preprocessors','wb')
pickle.dump(credit_bal_preprocessors,credit_bal_preprocessors_file)
credit_bal_preprocessors_file.close()
credit_bal_feat_size_file = open('preprocessors/credit_bal_feat_size','wb')
pickle.dump(credit_bal_feat_size,credit_bal_feat_size_file)
credit_bal_feat_size_file.close()
credit_bal_col_index_file = open('preprocessors/credit_bal_col_index','wb')
pickle.dump(credit_bal_col_index,credit_bal_col_index_file)
credit_bal_col_index_file.close()
###Output
_____no_output_____ |
plotnine_examples/examples/scale_x_continuous.ipynb | ###Markdown
Guitar Neck *Using a transformed x-axis to visualise guitar chords* The x-axis is transformed to resemble the narrowing width of frets on a 25.5 inch Strat. To do that we create a custom transformation. The key parts of *any* transform object are the `transform` and `inverse` functions.
###Code
class frets_trans(trans):
"""
Frets Transformation
"""
number_of_frets = 23 # Including fret 0
domain = (0, number_of_frets-1)
@staticmethod
def transform(x):
x = np.asarray(x)
return 25.5 - (25.5 / (2 ** (x/12)))
@staticmethod
def inverse(x):
x = np.asarray(x)
return 12 * np.log2(25.5/(25.5-x))
@classmethod
def breaks_(cls, limits):
# Fixed major breaks
return cls.domain
@classmethod
def minor_breaks(cls, major, limits):
# The major breaks as passed to this method are in transformed space.
# The minor breaks are calculated in data space to reveal the
# non-linearity of the scale.
_major = cls.inverse(major)
minor = cls.transform(np.linspace(*_major, cls.number_of_frets))
return minor
###Output
_____no_output_____
###Markdown
The above transform is different from most in that the breaks and minor breaks do not change. This is common for very specialized scales. It can also be a key requirement when creating graphics for demonstration purposes. Some chord Data
###Code
# Notes: the 0 fret is an open strum, all other frets are played half-way between fret bars.
# The strings are 1:low E, 2: A, 3: D, 4: G, 5: B, 6: E
c_chord = pd.DataFrame({
'Fret': [0, 2.5, 1.5, 0, 0.5, 0],
'String': [1, 2, 3, 4, 5, 6]
})
# Sequence based on the number of notes in the chord
c_chord['Sequence'] = list(range(1, 1+len(c_chord['Fret'])))
# Standard markings for a Stratocaster
markings = pd.DataFrame({
'Fret': [2.5, 4.5, 6.5, 8.5, 11.5, 11.5, 14.5, 16.5, 18.5, 20.5],
'String': [3.5, 3.5, 3.5, 3.5, 2, 5, 3.5, 3.5, 3.5, 3.5]
})
###Output
_____no_output_____
###Markdown
Visualizing the chord
###Code
# Look and feel of the graphic
neck_color = '#FFDDCC'
fret_color = '#998888'
string_color = '#AA9944'
neck_theme = theme(
figure_size=(10, 2),
panel_background=element_rect(fill=neck_color),
panel_grid_major_y=element_line(color=string_color, size=2.2),
panel_grid_major_x=element_line(color=fret_color, size=2.2),
panel_grid_minor_x=element_line(color=fret_color, size=1)
)
# The plot
(ggplot(c_chord, aes('Fret', 'String'))
+ geom_path(aes(color='Sequence'), size=3)
+ geom_point(aes(color='Sequence'), fill='#FFFFFF', size=3)
+ geom_point(data=markings, fill='#000000', size=4)
+ scale_x_continuous(trans=frets_trans)
+ scale_y_continuous(breaks=range(0, 7), minor_breaks=[])
+ guides(color=False)
+ neck_theme
)
###Output
_____no_output_____ |
Copy_of_LogisticRegression_WildFire.ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import mean_absolute_error
from sklearn.pipeline import Pipeline
from sklearn.metrics import roc_curve, roc_auc_score, classification_report, accuracy_score, confusion_matrix
import seaborn as sns
import matplotlib.pyplot as plt
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
data1 = pd.read_csv('Weather_FireHistory_Train3.csv')
data1.info()
data1.head()
data1.columns
data1.isnull().sum()
data1.size
data1.corr()
f, ax = plt.subplots(figsize=(7, 5))
sns.countplot(x='FireEvent', data=data1)
_ = plt.title('# Fire vs NonFire')
_ = plt.xlabel('FireEvent (1==Fire)')
base_line_accuracy = 1-np.sum(data1.FireEvent)/data1.shape[0]
base_line_accuracy
X = data1.drop(columns='FireEvent', axis=1)
y = data1.FireEvent.values
corr = X.corr()
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
cmap = sns.diverging_palette(220, 10, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
f, ax = plt.subplots(figsize=(11, 9))
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
###Output
_____no_output_____
###Markdown
Training, Validating, and Testing set. First we split our data set into a train and a validation set by using the function train_test_split. The model performance is then evaluated on the validation set.
###Code
np.random.seed(42)
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=.20,random_state=5)
scaler = StandardScaler()
lr = LogisticRegression()
model1 = Pipeline([('standardize', scaler),
('log_reg', lr)])
model1.fit(X_train, y_train)
predicted_fire = model1.predict(X)
mean_absolute_error(y, predicted_fire)
# split data into training and validation data, for both features and target
# The split is based on a random number generator. Supplying a numeric value to
# the random_state argument guarantees we get the same split every time we
# run this script.
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
# Fit model
model1.fit(X_train, y_train)
# get predicted fire on validation data
val_predictions = model1.predict(X_val)
print(mean_absolute_error(y_val, val_predictions))
###Output
0.011834319526627219
###Markdown
Training score and Test score
###Code
y_train_hat = model1.predict(X_train)
y_train_hat_probs = model1.predict_proba(X_train)[:,1]
train_accuracy = accuracy_score(y_train, y_train_hat)*100
train_auc_roc = roc_auc_score(y_train, y_train_hat_probs)*100
print('Confusion matrix:\n', confusion_matrix(y_train, y_train_hat))
print('Training accuracy: %.4f %%' % train_accuracy)
print('Training AUC: %.4f %%' % train_auc_roc)
y_test_hat = model1.predict(X_test)
y_test_hat_probs = model1.predict_proba(X_test)[:,1]
test_accuracy = accuracy_score(y_test, y_test_hat)*100
test_auc_roc = roc_auc_score(y_test, y_test_hat_probs)*100
print('Confusion matrix:\n', confusion_matrix(y_test, y_test_hat))
print('Testing accuracy: %.4f %%' % test_accuracy)
print('Testing AUC: %.4f %%' % test_auc_roc)
print(classification_report(y_test, y_test_hat, digits=6))
fpr, tpr, thresholds = roc_curve(y_test, y_test_hat_probs, drop_intermediate=True)
f, ax = plt.subplots(figsize=(9, 6))
_ = plt.plot(fpr, tpr, [0,1], [0, 1])
_ = plt.title('AUC ROC')
_ = plt.xlabel('False positive rate')
_ = plt.ylabel('True positive rate')
plt.style.use('seaborn')
plt.savefig('auc_roc.png', dpi=600)
y_hat_90 = (y_test_hat_probs > 0.90 )*1
print('Confusion matrix:\n', confusion_matrix(y_test, y_hat_90))
print(classification_report(y_test, y_hat_90, digits=6))
y_hat_10 = (y_test_hat_probs > 0.10)*1
print('Confusion matrix:\n', confusion_matrix(y_test, y_hat_10))
print(classification_report(y_test, y_hat_10, digits=4))
###Output
_____no_output_____ |
CEWIT_2020_Anomaly_Detection_using_GANs.ipynb | ###Markdown
###Code
###Output
_____no_output_____ |
experiments/report-figures.ipynb | ###Markdown
Square attack
###Code
p = patch.Patch('block', proportion=0.02,
input_shape=mnist.input_shape,
dynamic_mask=False,
dynamic_pattern=False)
objective = util.make_objective_class(0, mnist.num_classes)
patched = mnist.poison(objective, p, fraction=1.0)
plt.rcParams['figure.figsize'] = (3, 3)
plot.grid(patched.x_train, n=4, show=False)
save_fig('../report/square.png')
p = patch.Patch('sparse', proportion=0.01,
input_shape=mnist.input_shape,
dynamic_mask=False, dynamic_pattern=False)
objective = util.make_objective_class(0, mnist.num_classes)
patched = mnist.poison(objective, p, fraction=1.0)
plt.rcParams['figure.figsize'] = (3, 3)
plot.grid(patched.x_train, n=4, show=False)
save_fig('../report/sparse.png')
###Output
_____no_output_____
###Markdown
Moving square
###Code
p = patch.Patch('block', proportion=0.02,
input_shape=mnist.input_shape,
dynamic_mask=True,
dynamic_pattern=False)
objective = util.make_objective_class(0, mnist.num_classes)
patched = mnist.poison(objective, p, fraction=1.0)
plt.rcParams['figure.figsize'] = (3, 3)
plot.grid(patched.x_train, n=4, show=False)
save_fig('../report/moving-square.png')
###Output
_____no_output_____ |
python/AlgebraLinearPython/1VetoresPython.ipynb | ###Markdown
###Code
import numpy as np
import matplotlib.pyplot as plt
u = [1,2]
v = [2,1]
# this summed the lists (concatenation, not vector addition)
u + v
u = np.array(u)
v = np.array(v)
# vector addition
u + v
#----------------------------------------------
w1 = np.array([2,3])
w2 = np.array([4,-1])
# Dot (inner) product, function --> .dot()
w1.dot(w2)
w2.dot(w1)
# Magnitude (norm), function --> np.linalg.norm(vector)
modulo_w1 = np.linalg.norm(w1) # the norm value is assigned to a variable
modulo_w2 = np.linalg.norm(w2) # same here
modulo_w1
modulo_w2
np.linalg.norm(w1)
#----------------------------------------------------- 15/10/2020
v = np.array([1,2,3,4])
v
type(v)
# Description of a function
?np.array
lista = [3,5,66,20]
type(lista)
# Convert a list into a vector (NumPy array)
v1 = np.array(lista)
v2 = np.array([1,2,3,4])
v3 = np.array((4,3,2,1))
v1
v2
v3
# Representation of vectors (standard basis)
e1 = np.array([1,0,0])
e2 = np.array([0,1,0])
e3 = np.array([0,0,1])
#------------------------------------------
def plotVectors(vecs, cols, alpha=2):
    ''' function to plot vectors '''
plt.figure()
plt.axvline(x=0, color='#A9A9A9', zorder=0)
plt.axhline(y=0, color='#A9A9A9', zorder=0)
for i in range(len(vecs)):
x = np.concatenate([[0,0],vecs[i]])
plt.quiver([x[0]],
[x[1]],
[x[2]],
[x[3]],
angles='xy', scale_units='xy', scale=1, color=cols[i],
alpha=alpha)
laranja = '#FF9A13'
azul = '#1190FF'
resultante = '#11FFFF'
plotVectors([[2,3], [4,-1], [6,2]], [laranja, azul, resultante])
plt.xlim(-1,7)
plt.ylim(-2,7)
# Colors
cor1 = '#FF0000'
cor2 = '#FF0000'
corRes = '#11FFFF'
# Vectors
a = np.array([2,3])
b = np.array([3,1])
# Sum
r = a + b
# Plot function
plotVectors([a, b, r], [cor1, cor2, corRes])
# Cartesian plane limits
plt.xlim(-1,6)
plt.ylim(-5,10)
plotVectors([e1,e2],[cor1,cor2])
plt.xlim(-1,1.5)
plt.ylim(-1,1.5)
# Angle between vectors
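# (relation used below: cos(theta) = v.dot(u) / (|v|*|u|), so theta = arccos of that ratio)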
def ang_2vetores(v,u):
    v_escalar_u = v.dot(u)
    vn = np.linalg.norm(v)
    un = np.linalg.norm(u)
    r = v_escalar_u/(vn*un) # cosine of the angle
    ang = np.arccos(r) # angle in radians
    return (180/np.pi)*ang # angle in degrees
u = np.array([0,1])
v = np.array([1,0])
red = 'red'
blue = 'blue'
plotVectors([u,v], [red, blue])
plt.xlim(-1,1.5)
plt.ylim(-1,1.5)
ang_2vetores(u,v)
A = np.array([1,1])
B = np.array([1,0])
red = 'red'
blue = 'blue'
plotVectors([A, B], [red, blue])
plt.xlim(-1,1.5)
plt.ylim(-1,1.5)
ang_2vetores(A,B)
# Vector indexing
# the vector
x = [1,2,3,4,5]
vx = np.array(x)
# Length of the vector
len(vx)
# indexing in Python starts at 0
posicao_2 = vx[2]
posicao_2
posicao_0 = vx[0]
posicao_0
###Output
_____no_output_____ |
DS_02_Python_Pandas_Tratando_e_Analisando_Dados/Base de dados.ipynb | ###Markdown
Analysis Report I. Importing the database
###Code
import pandas as pd
dados = pd.read_csv('aluguel.csv', sep = ";")
dados
type(dados)
dados.info()
dados.describe()
dados.head(8)
###Output
_____no_output_____
###Markdown
General info about the database
###Code
dados.head(8)
dados.dtypes
tipos_de_dados = pd.DataFrame(dados.dtypes, columns=["Tipos de Dados"])
tipos_de_dados
tipos_de_dados.columns.name = "Variável"
tipos_de_dados
dados.shape # returns a tuple
dados.shape[0] # take index 0 of the tuple, in this case the number of rows
print('The database contains {} properties and {} variables'.format(dados.shape[0], dados.shape[1]))
###Output
The database contains 32960 properties and 9 variables
###Markdown
Creating a table with Pandas
###Code
import pandas as pd
data = [['Fulano', 12, 7.0, True],
['Sicrano', 15, 3.5, False],
['Beltrano', 18, 9.3, True]]
df_dados = pd.DataFrame(data, columns = ['Aluno', 'Idade', 'Nota', 'Aprovado'])
df_dados
df_dados.dtypes
df_tipos = pd.DataFrame(df_dados.dtypes, columns = ["Tipo"])
df_tipos.columns.name = "Variável"
df_tipos
###Output
_____no_output_____ |
Projects/Guided/project_introduction/Introduction to DataCamp Projects/notebook.ipynb | ###Markdown
1. This is a Jupyter notebook! A Jupyter notebook is a document that contains text cells (what you're reading right now) and code cells. What is special about a notebook is that it's interactive: You can change or add code cells, and then run a cell by first selecting it and then clicking the run cell button above ( ▶| Run ) or hitting ctrl + enter. The result will be displayed directly in the notebook. You could use a notebook as a simple calculator. For example, it's estimated that on average 256 children were born every minute in 2016. The code cell below calculates how many children were born on average on a day.
###Code
# I'm a code cell, click me, then run me!
256 * 60 * 24 # Children × minutes × hours
###Output
_____no_output_____
###Markdown
2. Put any code in code cells. But a code cell can contain much more than a simple one-liner! This is a notebook running Python and you can put any Python code in a code cell (but notebooks can run other languages too, like R). Below is a code cell where we define a whole new function (greet). To show the output of greet we run it last in the code cell, as the last value is always printed out.
###Code
def greet(first_name, last_name):
greeting = 'My name is ' + last_name + ', ' + first_name + ' ' + last_name + '!'
return greeting
# Replace with your first and last name.
# That is, unless your name is already James Bond.
greet('James', 'Bond')
###Output
_____no_output_____
###Markdown
3. Jupyter notebooks ♡ data. We've seen that notebooks can display basic objects such as numbers and strings. But notebooks also support the objects used in data science, which makes them great for interactive data analysis! For example, below we create a pandas DataFrame by reading in a csv-file with the average global temperature for the years 1850 to 2016. If we look at the head of this DataFrame, the notebook will render it as a nice-looking table.
###Code
# Importing the pandas module
import pandas as pd
# Reading in the global temperature data
global_temp = pd.read_csv('datasets/global_temperature.csv')
# Take a look at the first datapoints
# ... YOUR CODE FOR TASK 3 ...
global_temp.head()
###Output
_____no_output_____
###Markdown
4. Jupyter notebooks ♡ plots. Tables are nice but — as the saying goes — "a plot can show a thousand data points". Notebooks handle plots as well, but it requires a bit of magic. Here magic does not refer to any arcane rituals but to so-called "magic commands" that affect how the Jupyter notebook works. Magic commands start with either % or %% and the command we need to nicely display plots inline is %matplotlib inline. With this magic in place, all plots created in code cells will automatically be displayed inline. Let's take a look at the global temperature for the last 150 years.
###Code
# Setting up inline plotting using jupyter notebook "magic"
%matplotlib inline
import matplotlib.pyplot as plt
# Plotting global temperature in degrees celsius by year
plt.plot(global_temp['year'], global_temp['degrees_celsius'])
# Adding some nice labels
plt.xlabel('year')
plt.ylabel('degree_celsius')
###Output
_____no_output_____
###Markdown
5. Jupyter notebooks ♡ a lot more. Tables and plots are the most common outputs when doing data analysis, but Jupyter notebooks can render many more types of outputs such as sound, animation, video, etc. Yes, almost anything that can be shown in a modern web browser. This also makes it possible to include interactive widgets directly in the notebook! For example, this (slightly complicated) code will create an interactive map showing the locations of the three largest smartphone companies in 2016. You can move and zoom the map, and you can click the markers for more info!
###Code
# Making a map using the folium module
import folium
phone_map = folium.Map()
# Top three smart phone companies by market share in 2016
companies = [
{'loc': [37.4970, 127.0266], 'label': 'Samsung: 20.5%'},
{'loc': [37.3318, -122.0311], 'label': 'Apple: 14.4%'},
{'loc': [22.5431, 114.0579], 'label': 'Huawei: 8.9%'}]
# Adding markers to the map
for company in companies:
marker = folium.Marker(location=company['loc'], popup=company['label'])
marker.add_to(phone_map)
# The last object in the cell always gets shown in the notebook
phone_map
###Output
_____no_output_____
###Markdown
6. Goodbye for now! This was just a short introduction to Jupyter notebooks, an open source technology that is increasingly used for data science and analysis. I hope you enjoyed it! :)
###Code
# Are you ready to get started with DataCamp projects?
I_am_ready = True
# Ps.
# Feel free to try out any other stuff in this notebook.
# It's all yours!
###Output
_____no_output_____ |
Data Science and Machine Learning Bootcamp - JP/03.Python-for-Data-Analysis-Pandas/07-Operations.ipynb | ###Markdown
___ ___ Operations. There are lots of operations with pandas that will be really useful to you but don't fall into any distinct category. Let's show them here in this lecture:
###Code
import pandas as pd
df = pd.DataFrame({'col1':[1,2,3,4],'col2':[444,555,666,444],'col3':['abc','def','ghi','xyz']})
df.head()
###Output
_____no_output_____
###Markdown
Info on Unique Values
###Code
df['col2'].unique()
df['col2'].nunique()
df['col2'].value_counts()
###Output
_____no_output_____
###Markdown
Selecting Data
###Code
#Select from DataFrame using criteria from multiple columns
newdf = df[(df['col1']>2) & (df['col2']==444)]
newdf
###Output
_____no_output_____
###Markdown
Applying Functions
###Code
def times2(x):
return x*2
df['col1'].apply(times2)
df['col3'].apply(len)
df['col1'].sum()
###Output
_____no_output_____
###Markdown
** Permanently Removing a Column**
###Code
del df['col1']
df
###Output
_____no_output_____
###Markdown
** Get column and index names: **
###Code
df.columns
df.index
###Output
_____no_output_____
###Markdown
** Sorting and Ordering a DataFrame:**
###Code
df
df.sort_values(by='col2') #inplace=False by default
###Output
_____no_output_____
###Markdown
** Find Null Values or Check for Null Values**
###Code
df.isnull()
# Drop rows with NaN Values
df.dropna()
###Output
_____no_output_____
###Markdown
** Filling in NaN values with something else: **
###Code
import numpy as np
df = pd.DataFrame({'col1':[1,2,3,np.nan],
'col2':[np.nan,555,666,444],
'col3':['abc','def','ghi','xyz']})
df.head()
df.fillna('FILL')
data = {'A':['foo','foo','foo','bar','bar','bar'],
'B':['one','one','two','two','one','one'],
'C':['x','y','x','y','x','y'],
'D':[1,3,2,5,4,1]}
df = pd.DataFrame(data)
df
df.pivot_table(values='D',index=['A', 'B'],columns=['C'])
###Output
_____no_output_____ |
examples/cosmology.ipynb | ###Markdown
Using `pyoscode` in cosmology

`pyoscode` is a fast numerical routine suitable for equations of the form $$ \ddot{x} + 2\gamma(t)\dot{x} + \omega^2(t)x = 0, $$ with
- $x(t)$: a scalar variable (e.g. curvature perturbation),
- $\omega(t)$: frequency,
- $\gamma(t)$: friction or first-derivative term.

In general $\gamma$, $\omega$ may not be explicit functions of time, and `pyoscode` can deal with them given as
- _in Python_: `numpy.array`s
- _in C++_: `array`s, `list`s, `std::vector`s, `Eigen::Vector`s, or functions.

Below we'll look at examples using the _Python_ interface, but first, let's look at a short summary of the relevant cosmology.

Cosmology

We wish to calculate the primordial power spectrum of scalar perturbations in a universe with some spatial curvature. This involves
1. computing the isotropic, expanding "background" evolution,
2. then solving the equation of motion of the perturbations of varying lengthscales.

Background evolution

The relevant equations are the Friedmann equations and the continuity equation. They can be cast into the following form: $$ \frac{d\ln{\Omega_k}}{dN} = 4 + \Omega_k\big(4K - 2a^2V(\phi)\big), $$ $$ \Big(\frac{d\phi}{dN}\Big)^2 = 6 + \Omega_k\big(6K - 2a^2V(\phi)\big), $$ with
- $a$: scale factor of the universe
- $H$: Hubble parameter
- $N = \ln{a}$: number of e-folds, **the independent variable**
- $\Omega_k = \frac{1}{(aH)^2}$: curvature density
- $K$: spatial curvature, $0, \pm1$ for flat, closed, and open universes
- $\phi$: inflaton field
- $V$: inflationary potential

Evolution of the perturbations

The equation of motion of the perturbations is given by the Mukhanov--Sasaki equation. It takes the form of a generalised oscillator, with frequency and damping terms given by (when written in terms of $N$): $$ \omega^2 = \Omega_k\Bigg( (k_2 - K) - \frac{2Kk_2}{EK + k_2}\frac{\dot{E}}{E}\Bigg), $$ $$ 2\gamma = K\Omega_k + 3 - E + \frac{k_2}{EK + k_2}\frac{\dot{E}}{E}, $$ with
- $E = \frac{1}{2}\dot{\phi}^2$ (overdot is differentiation wrt $N$)
- $k_2 = k(k+2) - 3K$ if $K > 0$, and $k_2 = k^2 - 3K$ otherwise.

Code

A flat universe
###Code
import pyoscode
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import root_scalar
from scipy.integrate import solve_ivp
###Output
_____no_output_____
###Markdown
cosmological parameters:- $m$: inflaton mass- $mp$: Planck mass- $nv$: exponent in inflationary potential- $K$: curvature, $\pm1$, 0
###Code
m = 1
mp = 1
nv = 2
K = 0
###Output
_____no_output_____
###Markdown
Define the inflationary potential, its derivative, and the background equations. Also define initial conditions for the perturbations such that they start from the _Bunch-Davies_ vacuum.
###Code
def V(phi):
""" inflationary potential"""
return 0.5*m**2*phi**nv
def dV(phi):
""" derivative of the inflationary potential """
return 0.5*nv*m**2*phi**(nv-1)
def bgeqs(t, y):
""" System of equations describing the evolution of the cosmological
background """
dy = np.zeros(y.shape)
dy[0] = 4.0 + np.exp(y[0])*(4.0*K - 2.0*np.exp(2.0*t)*V(y[1]))
dy[1] = - np.sqrt(6.0 + np.exp(y[0])*(6.0*K -
2.0*np.exp(2.0*t)*V(y[1])))
return dy
def endinfl(t, y):
""" Crosses zero when inflation ends """
dphi = bgeqs(t,y)[1]
epsilon = 0.5*dphi**2
return epsilon - 1.
def bdic(k, phi, dphi, ddphi, N):
""" Defines the Bunch-Davies vacuum solution
for a given perturbation mode """
a0 = np.exp(N)
dz_z = ddphi/dphi + 1.
z = a0*dphi
R = 1./(np.sqrt(2.*k)*z) + 1j*0
dR = - R*dz_z - np.sqrt(k/2.*ok_i)/z*1j
return R,dR
def pps(k, rk1, rk2, x01, dx01, x02, dx02, x0, dx0):
""" Enforces x,dx as initial conditions by linear
combination of two solutions rk1 and rk2, which had
initial conditions x01, dx01 and x02, dx02 """
a = (x0*dx02 - dx0*x02)/(x01*dx02 - dx01*x02)
b = (x0*dx01 - dx0*x01)/(x02*dx01 - dx02*x01)
power = np.abs(a*rk1 + b*rk2)**2*k**3/(2*np.pi**2)
return power
###Output
_____no_output_____
###Markdown
Now solve the background with the help of `scipy.integrate`
###Code
# \Omega_k and N at the start of inflation fully
# parametrise the background.
ok_i = 2.1e-3
N_i = 1.
# Nominal end point of integration (we'll stop at the end of inflation)
N_f = 80.
# Points at which we'll obtain the background solution
Nbg = 10000 # This determines grid fineness, see note below.
N = np.linspace(N_i,N_f,Nbg)
# Initial conditions
phi_i = np.sqrt(4.*(1./ok_i + K)*np.exp(-2.0*N_i)/m**2)
logok_i = np.log(ok_i)
y_i = np.array([logok_i, phi_i])
# Solve for the background until the end of inflation
endinfl.terminal = True
endinfl.direction = 1
bgsol = solve_ivp(bgeqs, (N_i,N_f), y_i, events=endinfl, t_eval=N, rtol=1e-8, atol=1e-10)
###Output
_____no_output_____
###Markdown
**Note:** the most important parameter from a numerical perspective is $N_{\mathrm{bg}}$. This determines the fineness of the grid on which $\omega$ and $\gamma$ are defined. The speed of the method depends on how precisely numerical derivatives and integrals of $\omega$, $\gamma$ can be computed. If you experience slow-down, it is very likely that this grid was not fine enough. The number of e-folds of inflation we got from this setup is
###Code
bgsol.t_events[0][0]-N_i
###Output
_____no_output_____
###Markdown
We're now ready to define the equation of motion of the perturbations. `pyoscode` takes the frequency and the damping term of the oscillator as `numpy.array`s.
###Code
logok = bgsol.y[0]
phi = bgsol.y[1]
N = bgsol.t
dphi = np.array([-np.sqrt(6.0 + np.exp(Logok)*(6.0*K -
2.0*np.exp(2.0*t)*V(Phi))) for Logok,Phi,t in zip(logok,phi,N) ])
dlogok = np.array([4.0 + np.exp(Logok)*(4.0*K - 2.0*np.exp(2.0*t)*V(Phi)) for Logok,Phi,t in zip(logok,phi,N) ])
dE_E = dlogok - 4. -2.*dV(phi)*np.exp(logok)*np.exp(2.*N)/dphi
E = 0.5*dphi**2
# Damping term
g = 0.5*(3 - E + dE_E)
# frequency
logw = 0.5*logok
###Output
_____no_output_____
###Markdown
Now we wish to solve the Mukhanov--Sasaki equation in a loop, iterating over increasing values of $k$. We need to determine the range of integration for each: we'll start at a fixed $N$, and integrate until the mode is "well outside the Hubble horizon", $k < (aH)/100$.
###Code
# range of wavevectors
ks = np.logspace(0,4,1000)
end = np.zeros_like(ks,dtype=int)
endindex = 0
for i in range(len(ks)):
for j in range(endindex,Nbg):
if np.exp(-0.5*logok[j])/ks[i] > 100:
end[i] = j
endindex = j
break
###Output
_____no_output_____
###Markdown
We're now ready to solve the Mukhanov-Sasaki equation in a loop and generate a primordial power spectrum.
###Code
spectrum = np.zeros_like(ks,dtype=complex)
for i,k in enumerate(ks):
# Bunch-Davies i.c.
phi_0 = phi[0]
dphi_0 = dphi[0]
ddphi_0 = 0.5*dE_E[0]*dphi_0
N_0 = N_i
x0, dx0 = bdic(k, phi_0, dphi_0, ddphi_0, N_0)
x01 = 1.0
dx01 = 0.0
x02 = 0.0
dx02 = 1.0
# Linearly indep. solutions
sol1 = pyoscode.solve(N,logw+np.log(k),g,N_i,N[end[i]],x01,dx01,logw=True)
sol2 = pyoscode.solve(N,logw+np.log(k),g,N_i,N[end[i]],x02,dx02,logw=True)
rk1 = sol1["sol"][-1]
rk2 = sol2["sol"][-1]
spectrum[i] = pps(k, rk1, rk2, x01, dx01, x02, dx02, x0, dx0)
###Output
_____no_output_____
###Markdown
Plot the resulting spectrum:
###Code
plt.loglog(ks, spectrum)
plt.xlabel('comoving $k$')
plt.ylabel('$m^2 \\times P_{\mathcal{R}}(k)$')
plt.show()
plt.loglog(ks, spectrum)
plt.xlabel('comoving $k$')
plt.ylabel('$m^2 \\times P_{\mathcal{R}}(k)$')
plt.xlim((3e1,1e4))
plt.ylim((40,80))
plt.show()
###Output
_____no_output_____
###Markdown
A closed universe. All we have to do differently is: 1. solve the background equations again with $K=1$,
###Code
K = 1
N_i = -1.74
ok_i = 1.0
N = np.linspace(N_i,N_f,Nbg)
# Initial conditions
phi_i = np.sqrt(4.*(1./ok_i + K)*np.exp(-2.0*N_i)/m**2)
logok_i = np.log(ok_i)
y_i = np.array([logok_i, phi_i])
# Solve for the background until the end of inflation
endinfl.terminal = True
endinfl.direction = 1
bgsol = solve_ivp(bgeqs, (N_i,N_f), y_i, events=endinfl, t_eval=N, rtol=1e-8, atol=1e-10)
###Output
_____no_output_____
###Markdown
Number of e-folds of inflation now is
###Code
bgsol.t_events[0][0]-N_i
###Output
_____no_output_____
###Markdown
2. Update the arrays storing the cosmological background:
###Code
logok = bgsol.y[0]
phi = bgsol.y[1]
N = bgsol.t
dphi = np.array([-np.sqrt(6.0 + np.exp(Logok)*(6.0*K -
2.0*np.exp(2.0*t)*V(Phi))) for Logok,Phi,t in zip(logok,phi,N) ])
dlogok = np.array([4.0 + np.exp(Logok)*(4.0*K - 2.0*np.exp(2.0*t)*V(Phi)) for Logok,Phi,t in zip(logok,phi,N) ])
dE_E = dlogok - 4. -2.*dV(phi)*np.exp(logok)*np.exp(2.*N)/dphi
E = 0.5*dphi**2
###Output
_____no_output_____
###Markdown
3. Update also the endpoint of integration for each mode:
###Code
# range of wavevectors
ks = np.concatenate((np.linspace(3,100,98), np.logspace(2,4,500)))
end = np.zeros_like(ks,dtype=int)
endindex = 0
for i in range(len(ks)):
for j in range(endindex,Nbg):
if np.exp(-0.5*logok[j])/ks[i] > 100:
end[i] = j
endindex = j
break
###Output
_____no_output_____
###Markdown
4. Solve the MS equation for each $k$. The frequency and the damping term now have non-trivial wavevector-dependence, so we'll compute them on the fly for each mode.
###Code
closed_spectrum = np.zeros_like(ks,dtype=complex)
for i,k in enumerate(ks):
# Bunch-Davies i.c.
phi_0 = phi[0]
dphi_0 = dphi[0]
ddphi_0 = 0.5*dE_E[0]*dphi_0
N_0 = N_i
x0, dx0 = bdic(k, phi_0, dphi_0, ddphi_0, N_0)
x01 = 1.0
dx01 = 0.0
x02 = 0.0
dx02 = 1.0
# wavenumber "squared"
k2 = complex(k*(k+2.)-3*K)
# Damping term
g = 0.5*(K*np.exp(logok) + 3 - E + dE_E*k2/(E*K+k2))
# frequency
logw = 0.5*(logok + np.log(k2 - K - 2.*K*k2*dE_E/(E*K + k2)))
# Linearly indep. solutions
sol1 = pyoscode.solve(N,logw,g,N_i,N[end[i]],x01,dx01,logw=True)
sol2 = pyoscode.solve(N,logw,g,N_i,N[end[i]],x02,dx02,logw=True)
rk1 = sol1["sol"][-1]
rk2 = sol2["sol"][-1]
closed_spectrum[i] = pps(k, rk1, rk2, x01, dx01, x02, dx02, x0, dx0)
###Output
_____no_output_____
###Markdown
Plot the resulting spectrum:
###Code
plt.loglog(ks, closed_spectrum)
plt.xlabel('comoving $k$')
plt.ylabel('$m^2 \\times P_{\mathcal{R}}(k)$')
plt.show()
###Output
_____no_output_____ |
jupyter_notebooks/figS_18S_probetest.ipynb | ###Markdown
Fig S 18S probe test- S1A-S1C: thermodynamic properties vs. performance for the 20 probes- S1D: Tm by position in 18S- S1E: Performance of low vs. high Tm probes
###Code
#Imports
import sys
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import os
import gffutils
import seaborn as sns
import numpy as np
import scipy.stats as stats
sys.path.append('../scripts/')
from plot_helpers import *
%matplotlib inline
%load_ext autoreload
%autoreload 2
#load properties of probes
prop_file = '../figures/F1/TableS1_18S_candidate_properties.csv'
df = pd.read_csv(prop_file)
df['percent_remaining'] = df['mean_frac_remaining']*100
#Annotate id labels with categories
pool2_ids = range(21, 31)
lowtm_pool1_ids = range(1, 12)
df['length_category'] = df['probe_num'].apply(lambda x: '~30mer' if x <= 30 else '~50mer')
df['tm_category'] = df['probe_num'].map(lambda x: 'high Tm' if x in pool2_ids else ('low Tm' if x in lowtm_pool1_ids else np.nan))
#Make outdir and load the data
outdir = '../figures/FS2'
os.makedirs(outdir, exist_ok = True)
#Fig S1A: plot longer probes vs shorter probes
panel_name = 'S2A'
plot = Plotter(corners = [0.27, 0.27, 0.68, 0.68], figsize = (sfig, sfig))
plot.nudge_corners(bottom = True, right = True)
plot.setup_axis()
#Not sure why it does a better job here not overlapping the points than in F1
plot.ax = sns.swarmplot(x = 'length_category', y = 'percent_remaining',
data = df.loc[(df['probe_num'] < 21) | (df['probe_num'] > 30)],
ax = plot.ax)
plot.set_ylabel('% 18S remaining')
plot.set_xlabel('probe length')
plot.add_letter('A')
plt.savefig(os.path.join(outdir, '{}.{}'.format(panel_name, outfmt)), dpi = 600)
#Fig S2B - S2D: Plotting depletion vs thermodynamic properties for the first 20 probes
#https://stackoverflow.com/questions/1452995/why-doesnt-a-python-dict-update-return-the-object
to_plot = ['homodimer_dG', 'hairpin_dG', 'Tm']
x_label_dict = {'homodimer_dG': r'homodimer $\Delta$G', 'hairpin_dG': r'hairpin $\Delta$G', 'Tm': 'Tm'}
first_df = df.loc[df['probe_num'] < 21].copy()
default_margins = {'top':False, 'bottom':False, 'left':False, 'right':False}
to_plot = {'homodimer_dG': {'letter': 'B', 'margins': dict(default_margins, **{'bottom':True, 'left':True})},
'hairpin_dG': {'letter': 'C', 'margins': dict(default_margins, **{'top': True, 'right':True})},
'Tm': {'letter': 'D', 'margins': dict(default_margins, **{'top':True, 'left':True})}}
for i in to_plot:
panel_name = 'S2{}'.format(to_plot[i]['letter'])
plot = Plotter(corners = [0.27, 0.27, 0.68, 0.68], figsize = (sfig, sfig))
plot.nudge_corners(top = to_plot[i]['margins']['top'], bottom = to_plot[i]['margins']['bottom'],
left = to_plot[i]['margins']['left'], right = to_plot[i]['margins']['right'])
plot.setup_axis()
#create a little space on the left and right so the the points don't get cutoff
#but the seaborn generated components only extend to the data range, so keep as is
#min_x = first_df[i].min() - abs(first_df[i].min()*0.05)
#max_x = first_df[i].max() + abs(first_df[i].max()*0.05)
plot.ax = sns.regplot(x = i, y = 'percent_remaining', data = first_df, ax = plot.ax, scatter_kws = {'edgecolors': 'none'})
r_value = stats.spearmanr(first_df[i], first_df['percent_remaining'])
r_squared = r_value[0]**2
p_value = r_value[1]
plot.ax.annotate('r'r'$^2$'' = %1.2f' % r_squared, xy=(0.95, 0.85), annotation_clip=False,
xytext=None, textcoords='axes fraction',fontsize = 8, arrowprops=None,
ha = 'right', va = 'top')
plot.set_ylabel('% 18S remaining')
plot.set_xlabel(x_label_dict[i])
plot.add_letter(to_plot[i]['letter'])
print(p_value)
plt.savefig(os.path.join(outdir, '{}.{}'.format(panel_name, outfmt)), dpi = 600)
#Fig S1E: Plot Tm vs. 18S position for the selected probes
panel_name = 'S2E'
plot = Plotter(corners = [0.12, 0.24, 0.855, 0.64], figsize = (sfig*2, sfig))
plot.nudge_corners(top = True)
plot.setup_axis()
short_df = df.loc[df['probe_num'] < 31].copy()
left_low = plot.ax.scatter(*short_df[short_df['tm_category'] == 'low Tm'][['consensus_start', 'Tm']].transpose().values, alpha = 0.8, edgecolors = 'none')
left_hi = plot.ax.scatter(*short_df[short_df['tm_category'] == 'high Tm'][['consensus_start', 'Tm']].transpose().values, alpha = 0.8, edgecolors = 'none')
right_mixed = plot.ax.scatter(*short_df[short_df['tm_category'].isnull()][['consensus_start', 'Tm']].transpose().values, alpha = 0.8, edgecolors = 'none')
plot.ax.legend([left_low, left_hi, right_mixed], ['left arm, low Tm', 'left arm, high Tm', 'right arm'],
mode = 'expand', fontsize = 8, ncol = 3, bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
borderaxespad=0., handletextpad = -0.2)
plot.set_ylabel('Tm')
plot.set_xlabel('position in 18S (nt)')
plot.add_letter('E')
plt.savefig(os.path.join(outdir, '{}.{}'.format(panel_name, outfmt)), dpi = 600)
###Output
_____no_output_____ |
brownian/ex/PyMC3 Example.ipynb | ###Markdown
Calculating dissipation $\Gamma$ using Brownian data. First, get the values of the parameters sampled.
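The dissipation is then obtained from the sampled cantilever parameters via the relation implemented in the `gamma` function below: $$\Gamma = \frac{k_c}{2\pi f_c Q}.$$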
###Code
fc = traces.get_values('dfc') + d['mu_fc'] # Add mean cantilever frequency to dfc
kc = traces.get_values('kc')
Q = traces.get_values('Q')
###Output
_____no_output_____
###Markdown
Next, calculate $\Gamma$ using our samples.
###Code
def gamma(fc, kc, Q):
return kc / (2*np.pi*fc * Q)
Gamma = gamma(fc, kc, Q)
Gamma
###Output
_____no_output_____
###Markdown
`Gamma` is an array containing estimates calculated from each sample. The different estimates can be plotted:
###Code
import seaborn as sns # Pretty plots of samples
ax = sns.distplot(Gamma*1e9)
ax.set_xlabel(u"Γ [nN m/s]")
ax.set_ylabel("Probability density")
###Output
_____no_output_____
###Markdown
And summarized with a mean and standard deviation:
###Code
print(u"Γ = {:.2f} ± {:.2f} [nN m/s]".format(
Gamma.mean()*1e9,
Gamma.std(ddof=1)*1e9)) # nN m /s
###Output
Γ = 1.59 ± 0.10 [nN m/s]
|
notebooks/11. Random Forest Model.ipynb | ###Markdown
Random Forest Model. Now we will see if a tree-based model can improve upon the Logistic Regression model. The overarching goal here is to determine which classifier performs better in terms of validation AUC, and then hyperparameter-tune the selected classification model.
###Code
# data manipulation
import pandas as pd
import os
# modeling
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline as imbPipeline
# custom helper functions
from src.models import cross_validate as cv
DATA_PATH = '../data/processed/'
OBS_PATH = os.path.join(DATA_PATH, 'observations_features.csv')
RESULTS_PATH = os.path.join(DATA_PATH, 'results.csv')
model = 'random_forest'
###Output
_____no_output_____
###Markdown
Load data
###Code
obs = pd.read_csv(OBS_PATH)
obs.head()
###Output
_____no_output_____
###Markdown
Perform Train/Test split
###Code
X_train, X_test, y_train, y_test = cv.create_Xy(obs)
print(f'Class balance: {y_train.mean():.2%}')
###Output
Class balance: 1.57%
###Markdown
Modeling. The Random Forest model does not need the StandardScaler pre-processing step. We will also up-sample the data with SMOTE to compare against the Logistic Regression model.
###Code
rf_pipe = imbPipeline([
('smote', SMOTE()),
('rf', RandomForestClassifier(n_estimators=500))
])
cv_results = cv.cv_model(X_train, y_train, rf_pipe)
cv.log_scores(cv_results, model)
###Output
_____no_output_____
###Markdown
Save the results
###Code
results = pd.read_csv(RESULTS_PATH, index_col=0)
results = results.drop(index=model, errors='ignore')
results = results.append(cv.log_scores(cv_results, model), sort=False)
results.to_csv(RESULTS_PATH)
results
###Output
_____no_output_____ |
MisinformationChallenge/FakeNewsClassifier_Inference.ipynb | ###Markdown
###Code
import sys
!{sys.executable} -m pip install transformers
import torch
from transformers import AdamW, AutoTokenizer, AutoModelForSequenceClassification
from transformers import DataCollatorWithPadding
from transformers import TrainingArguments, Trainer
import tensorflow as tf
from transformers import pipeline
###Output
_____no_output_____
###Markdown
1. Let's load a model
###Code
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("ghanashyamvtatti/roberta-fake-news")
model = AutoModelForSequenceClassification.from_pretrained("ghanashyamvtatti/roberta-fake-news")
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer)
###Output
_____no_output_____
###Markdown
2. Let's load our data
###Code
News=['Maury’ Show Official Facebook Posts F*CKED UP Caption On Guest That Looks Like Ted Cruz (IMAGE)','U.N. says it believes Afghanistan air strike killed civilians']
classifier(News)
###Output
_____no_output_____ |
aws_blog_sample/ACG-ReggaeGAN-AWS-Blog.ipynb | ###Markdown
Bring your own data to create a music genre model for AWS DeepComposer ---This notebook is for the Bring your own data to create a music genre model for AWS DeepComposer blog and is associated with the AWS DeepComposer: Train it Again Maestro web series on the A Cloud Guru platform. This covers preparing your data to train a custom music genre model for AWS DeepComposer. ---
###Code
# Create the environment
!conda update --all --y
!pip install numpy==1.16.4
!pip install pretty_midi
!pip install pypianoroll
# IMPORTS
import os
import numpy as np
from numpy import save
import pypianoroll
from pypianoroll import Multitrack, Track
from utils import display_utils
import matplotlib.pyplot as plt
%matplotlib inline
root_dir = './2Experiments'
# Directory to save checkpoints
model_dir = os.path.join(root_dir,'2Reggae') # JSP: 229, Bach: 19199
# Directory to save pianorolls during training
train_dir = os.path.join(model_dir, 'train')
# Location of the original MIDI files used for training; place your MIDI files here
reggae_midi_location = './reggae_midi/'
# Directory to save eval data
dataset_eval_dir = './dataset/'
###Output
_____no_output_____
###Markdown
Prepare Training Data (MIDI files -----> .npy) ---This section of code demonstrates the process of converting MIDI files to the needed format for training, which is a .npy file. The final shape on the .npy file should be (x, 32, 128, 4), which represents (number of samples, number of time steps per sample, pitch range, instruments).---
###Code
#helper function that stores the reshaped arrays, per instrument
def store_track(track, collection):
"""
Pull out the 4 selected instrument types based on program number
The program number represents the unique identifier for the instrument (ie. track.program)
https://en.wikipedia.org/wiki/General_MIDI
"""
instrument1_program_numbers = [1,2,3,4,5,6,7,8] #Piano
instrument2_program_numbers = [17,18,19,20,21,22,23,24] #Organ
instrument3_program_numbers = [33,34,35,36,37,38,39,40] #Bass
instrument4_program_numbers = [25,26,27,28,29,30,31,32] #Guitar
if isinstance (collection, dict):
if track.program in instrument1_program_numbers:
collection['Piano'].append(track)
elif track.program in instrument2_program_numbers:
collection['Organ'].append(track)
elif track.program in instrument3_program_numbers:
collection['Bass'].append(track)
elif track.program in instrument4_program_numbers:
collection['Guitar'].append(track)
else:
print("Skipping this instrument------------------->", track.name)
else: #collection will hold chosen tracks
if track.program in instrument1_program_numbers:
collection.append(track)
elif track.program in instrument2_program_numbers:
collection.append(track)
elif track.program in instrument3_program_numbers:
collection.append(track)
elif track.program in instrument4_program_numbers:
collection.append(track)
else:
print("Skipping this instrument------------------->", track.name)
return collection
#helper function that returns the pianorolls merged to 4 tracks for 4 chosen instruments
def get_merged(music_tracks, filename):
chosen_tracks = []
#choose the tracks from the Multitrack object
for index, track in enumerate(music_tracks.tracks):
chosen_tracks = store_track(track, chosen_tracks)
#dictionary to hold reshaped pianorolls for 4 chosen instruments
reshaped_piano_roll_dict = {'Piano': [], 'Organ': [], 'Bass': [], 'Guitar': []}
#loop thru chosen tracks
for index, track in enumerate(chosen_tracks):
fig, ax = track.plot()
plt.show()
try:
#reshape pianoroll to 2 bar (i.e. 32 time step) chunks
track.pianoroll = track.pianoroll.reshape( -1, 32, 128)
#store reshaped pianoroll per instrument
reshaped_piano_roll_dict = store_track(track, reshaped_piano_roll_dict)
except Exception as e:
print("ERROR!!!!!----> Skipping track # ", index, " with error ", e)
#will hold all merged instrument tracks
merge_piano_roll_list = []
for instrument in reshaped_piano_roll_dict:
try:
merged_pianorolls = np.empty(shape=(0,32,128))
#concatenate/stack all tracks for a single instrument
if len(reshaped_piano_roll_dict[instrument]) > 0:
if reshaped_piano_roll_dict[instrument]:
merged_pianorolls = np.stack([track.pianoroll for track in reshaped_piano_roll_dict[instrument]], -1)
merged_pianorolls = merged_pianorolls[:, :, :, 0]
merged_piano_rolls = np.any(merged_pianorolls, axis=0)
merge_piano_roll_list.append(merged_piano_rolls)
except Exception as e:
print("ERROR!!!!!----> Cannot concatenate/merge track for instrument", instrument, " with error ", e)
continue;
merge_piano_roll_list = np.stack([track for track in merge_piano_roll_list], -1)
return merge_piano_roll_list.reshape(-1,32,128,4)
###Output
_____no_output_____
###Markdown
###Code
#holds final reshaped tracks that will be saved to training .npy file
track_list = np.empty(shape=(0,32,128,4))
#init with beat resolution of 4
music_tracks = pypianoroll.Multitrack(beat_resolution=4)
#loop through all the .mid files
for filename in os.listdir(reggae_midi_location):
print("Starting to process filename---->", reggae_midi_location + filename)
if filename.endswith(".mid"):
try:
#Load MIDI file using parse_midi
#returns Multi-Track object containing Track objects
music_tracks.parse_midi(reggae_midi_location + filename)
#add padding to avoid reshape errors
#pad the pianorolls with zeros making the length a multiple of 32
music_tracks.pad_to_multiple(32)
music_tracks.pad_to_same()
#merge pianoroll objects by instrument
merged_tracks_to_add_to_training_file = get_merged(music_tracks, filename)
#concatenate merged pianoroll objects to final training data track list
track_list = np.concatenate((merged_tracks_to_add_to_training_file, track_list))
print("Successfully processed filename---->", reggae_midi_location + filename)
except Exception as e:
print("**********ERROR**************It's possible that not all 4 instruments exist in this track; at least one is 0")
print("Skipping file---->", filename, e)
print(e)
# binarize data
track_list[track_list == 0] = -1
track_list[track_list >= 0] = 1
#split the data into training and evaluation datasets
training_data, eval_data = np.split(track_list, 2)
#save training data
save(train_dir + '/reggae-train.npy', np.array(training_data))
#save evaluation data
save(dataset_eval_dir + '/eval.npy', np.array(eval_data))
###Output
_____no_output_____
###Markdown
Review Training Data
###Code
#double check the shape on training data, should be (x, 32, 128, 4), where x represents the amount of records
training_data = np.load(train_dir + '/reggae-train.npy')
print("Testing the training shape: ", training_data.shape)
#view sample of data that will be fed to model, four graphs == four tracks
display_utils.show_pianoroll(training_data)
###Output
Testing the training shape: (1, 32, 128, 4)
|
testbench/.ipynb_checkpoints/simulation-checkpoint.ipynb | ###Markdown
Getting the coefficients
###Code
# imports needed for this notebook (numpy, scipy.signal and matplotlib are used below)
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
from scipy.signal import lfilter
numtaps = 3 # number of filter taps: the higher it is, the more precise the filter
f = 0.1
c = signal.firwin(numtaps, f)
#print(c)
###Output
_____no_output_____
###Markdown
The transformation law is: $$Y[n]=C_0X[n]+C_1X[n-1]+C_2X[n-2]+C_3X[n-3]$$
###Code
signal = np.loadtxt("input.txt", dtype=np.int)
filtered_signal = lfilter(c, 1.0, signal)
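# Illustrative check (added sketch, not part of the original testbench): lfilter(c, 1.0, x)
# implements the FIR law y[n] = sum_k c[k] * x[n-k], so computing it by hand should
# reproduce the same samples.
manual = np.zeros_like(filtered_signal)
for n in range(len(signal)):
    acc = 0.0
    for k in range(len(c)):
        if n - k >= 0:
            acc += c[k] * signal[n - k]
    manual[n] = acc
# np.allclose(manual, filtered_signal) is expected to be True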
time_vec = np.arange(signal.size)
plt.figure(figsize=(25, 5))
plt.scatter(time_vec, filtered_signal, alpha=0.5)
plt.scatter(time_vec+1, signal, alpha =0.5,color='r')
###Output
_____no_output_____
###Markdown
Comparing the Python results with the FPGA's
###Code
fpga = np.loadtxt("output.txt", dtype=np.int)
if (-np.amin(signal)>np.amax(signal)):
norm = np.amin(signal)
else:
norm = np.amax(signal)
fmin = -np.amin(fpga)
fmax = np.amax(fpga)
if (fmin > fmax):
fnorm = fmin
else:
fnorm = fmax
fpga = norm * fpga/(fnorm)
plt.figure(figsize=(25, 5))
plt.scatter(time_vec, filtered_signal, alpha=0.5)
plt.scatter(time_vec, fpga, alpha =0.5,color='r')
#print(fpga)
#print(filtered_signal)
diffs =(np.abs(filtered_signal-fpga))
diffs
import random
import math
sig = []
for i in range(200):
sig.append(math.sin(i)+random.randrange(-1, 1))
###Output
_____no_output_____ |
docs/contents/user/basic/info.ipynb | ###Markdown
Info *Printing out summary information of a molecular system* MolSysMT has a method to print out a brief overview of a molecular system and its elements. The output of this method can be a `pandas.DataFrame` or a `string`. Let's load a molecular system to illustrate how it works with some simple examples:
###Code
molecular_system = msm.convert('pdb_id:1tcd', to_form='molsysmt.MolSys')
###Output
/home/diego/projects/MolSysMT/molsysmt/form/mmtf_MMTFDecoder/to_molsysmt_Topology.py:34: UserWarning: The structure in the PDB has biological assemblies. There are geometrical transformations proposed in the structure. See the following issue in the source code repository: https://github.com/uibcdf/MolSysMT/issues/33
warnings.warn(warning_message)
###Markdown
As a DataFrame Summary information on atoms. The method `molsysmt.info()` can be applied over any element of the molecular system. Let's see an example where the summary information is shown for a set of atoms when the input argument `output='dataframe'`:
###Code
msm.info(molecular_system, target='atom', indices=[9,10,11,12], output='dataframe')
output = msm.info(molecular_system, target='atom', indices=[9,10,11,12], output='dataframe')
output.data.to_dict()
###Output
_____no_output_____
###Markdown
The method can also take a selection input argument:
###Code
msm.info(molecular_system, target='atom', selection='group_index==6')
###Output
_____no_output_____
###Markdown
Notice that the default option for `output` is 'dataframe'. Summary information on groups. Let's see an example where the summary information is shown for a set of groups:
###Code
msm.info(molecular_system, target='group', indices=[20,21,22,23])
###Output
_____no_output_____
###Markdown
Summary information on components. Here is an example of how the method `molsysmt.info()` works over components:
###Code
msm.info(molecular_system, target='component', selection='molecule_type!="water"')
###Output
_____no_output_____
###Markdown
Summary information on chains. If the summary information on all chains in the molecular system needs to be printed out:
###Code
msm.info(molecular_system, target='chain')
###Output
_____no_output_____
###Markdown
Summary information on molecules. The following is an example of how the method works when the targeted element is 'molecule':
###Code
msm.info(molecular_system, target='molecule', selection='molecule_type!="water"')
###Output
_____no_output_____
###Markdown
Summary information on entities. If the targeted element is 'entity', the method prints out the following summary information:
###Code
msm.info(molecular_system, target='entity')
###Output
_____no_output_____
###Markdown
Summary information on a molecular system. Finally, summary information can be shown for the whole molecular system as follows:
###Code
msm.info(molecular_system)
topology, structures = msm.convert(molecular_system, to_form=['molsysmt.Topology','molsysmt.Structures'])
msm.info(topology)
msm.info(structures)
msm.info([topology, structures])
###Output
_____no_output_____
###Markdown
As a string. The method `molsysmt.info()` can also return a string, short or long, with key information to identify the targeted element. Summary information on atoms. If we only need a short string encoding the main attributes of an atom, the input argument `output` should take the value 'short_string':
###Code
msm.info(molecular_system, target='atom', indices=10, output='short_string')
###Output
_____no_output_____
###Markdown
The string is nothing but the atom name, the atom id and the atom index, with '-' between the name and the id, and '@' between the id and the index. The input argument `indices` also accepts a list of indices:
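For example, an alpha carbon could show up as something like `CA-25@10` (illustrative values only, not taken from this particular system).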
###Code
msm.info(molecular_system, target='atom', indices=[10,11,12,13], output='short_string')
###Output
_____no_output_____
###Markdown
The long version of the string includes the short string of the group, chain and molecule the atom belongs to; with the character '/' in between:
###Code
msm.info(molecular_system, target='atom', indices=10, output='long_string')
###Output
_____no_output_____
###Markdown
Summary information on groups. The short string corresponding to a group is composed of its name, id and index. The characters used as separators are the same as with atoms: '-' between name and id, and '@' between id and index.
###Code
msm.info(molecular_system, target='group', indices=0, output='short_string')
###Output
_____no_output_____
###Markdown
The long version of the string includes the short string for the chain and molecule the group belongs to:
###Code
msm.info(molecular_system, target='group', indices=3, output='long_string')
###Output
_____no_output_____
###Markdown
Summary information on components The short string with the summary information of a component is its index only:
###Code
msm.info(molecular_system, target='component', indices=2, output='short_string')
###Output
_____no_output_____
###Markdown
The long version of the string includes the chain and molecule the component belongs to with the character '/' as separator.
###Code
msm.info(molecular_system, target='component', indices=2, output='long_string')
###Output
_____no_output_____
###Markdown
Summary information on chains. Just like with atoms and groups, the short version of the chain string is made up of the sequence of attributes: name, id and index. The character '-' is in between the chain name and the chain id, and '@' precedes the chain index:
###Code
msm.info(molecular_system, target='chain', indices=2, output='short_string')
###Output
_____no_output_____
###Markdown
The long version of the string in this case is the same as the short one:
###Code
msm.info(molecular_system, target='chain', indices=2, output='long_string')
###Output
_____no_output_____
###Markdown
Summary information on molecules. Molecules have no relevant id attribute, that's why in this case the short string is the molecule name followed by the character '@' and the molecule index:
###Code
msm.info(molecular_system, target='molecule', indices=0, output='short_string')
###Output
_____no_output_____
###Markdown
As well as with chains, the short and long strings are equivalent here:
###Code
msm.info(molecular_system, target='molecule', indices=0, output='long_string')
###Output
_____no_output_____
###Markdown
Summary information on entities The significant attributes for entities are only two. In this case the string takes the same coding as before, with the character '@' between the name and the index.
###Code
msm.info(molecular_system, target='entity', indices=0, output='short_string')
###Output
_____no_output_____
###Markdown
The long string is equal to the short string when the targeted element is an entity:
###Code
msm.info(molecular_system, target='entity', indices=0, output='long_string')
###Output
_____no_output_____ |
TIC149010208.ipynb | ###Markdown
TIC 149010208
###Code
%matplotlib inline
import matplotlib.pyplot as plt
from astropy.io import fits
import numpy as np
import astropy.units as u
from glob import glob
paths = glob('149010208/hlsp_*.fits')
lc = fits.getdata(paths[0])
time, flux = lc['TIME'][~np.isnan(lc['PDCSAP_FLUX'])], lc['PDCSAP_FLUX'][~np.isnan(lc['PDCSAP_FLUX'])]
plt.plot(time, flux)
event1 = (time > 1364.5) & (time < 1366)
event2 = (time > 1379) & (time < 1380.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4))
ax[0].plot(time[event1], flux[event1])
ax[1].plot(time[event2], flux[event2])
###Output
_____no_output_____
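###Markdown
(Added aside, not in the original notebook.) A box least squares periodogram can serve as a cross-check for the period read off by eye in the next cell; this sketch assumes astropy >= 3.1 (for BoxLeastSquares) and a guessed trial transit duration of 0.2 d:
###Code
from astropy.timeseries import BoxLeastSquares
# search for a periodic box-shaped dip in the normalized light curve
bls = BoxLeastSquares(time, flux / np.median(flux))
periodogram = bls.autopower(0.2)
best_period = periodogram.period[np.argmax(periodogram.power)]
print(best_period)
###Output
_____no_output_____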
###Markdown
Manually find a period:
###Code
# one period is roughly the separation of the two event midpoints; the -0.035 d offset is a manual tweak to align them
period = (time[event2].mean() - time[event1].mean()) - 0.035
plt.plot(time[event1], flux[event1], '.', label='Event 1')
plt.plot(time[event2] - period, flux[event2], '.', label='Event 2')
plt.title('Period = {0:.4f} d'.format(period))
plt.legend()
plt.savefig('/Users/bmmorris/Downloads/TIC1490.png', bbox_inches='tight')
period
###Output
_____no_output_____ |
AKosir-OPvTK-Lec12_QueueingTheory_SLO.ipynb | ###Markdown
12. Teorija in uporaba čakalnih vrst Andrej Košir, Lucami, FE Kontakt: prof. dr. Andrej Košir, [email protected], skype=akosir_sid 1 Vsebina, cilji- Cilji: - Spoznati kaj so čakalne vrste - Kendallov opis čakalnih vrst – čakalne vrste in servisiranje - Osnovni rezultati analize- Uporaba v TK - Čakalna vrsta je model za predpomnilnik, ki je del prenosa podatkov na vseh nivojih sklada - Upravljanje čakalnih vrst uporabnikov: interaktivne storitve - Upravljanje učinkovitih uporabniških vmesnikov- Uporaba širše - Modeli računalniških arhitektur (HD, procesor, ...) - Promet - Javne storitve (npr. zdravstvo) 2 ■ Poglavja12.1. Uvod v čakalne vrste■ Kaj je teorija čakalnih vrst■ Zgodovina■ Kje v TK nastopajo čakalne vrste■ O pomembnih verjetnostnih porazdelitvah12.2. Model in Kendallov opis čakalne vrste■ Model sistema z eno čakalno vrsto $\large{*}$■ Sosledje dogodkov v čakalnem sistemu■ Standardni opis sistema: Kendallov opis $\large{*}$■ Kendallov opis večih čakalnih vrst■ Porazdelitev časov prihodov in strežbe $\large{*}$■ Vrstni red strežbe■ Pomembni primeri čakalnih sistemov ■ Analiza M/M/1 $\large{*}$■ Primer M/M/1: prenos paketnih podatkov■ Primer M/M/1: spletni strežnik■ Čakalni sistem M/G/1■ Kako izbrati model čakalnega sistema v realnem primeru? $\large{*}$12.3. Algoritmi strežniških sistemov■ Algoritmi za optimalno strežbo več čakalnih vrst■ Primer: algoritem Weighted Round Robin $\large{*}$■ Algoritmi za upravljanje čakalnih vrst■ Primer: algoritem Random early detection $\large{*}$ 3 12. Teorija in uporaba čakalnih vrst 12.1. Uvod v čakalne vrste ■ Kaj je teorija čakalnih vrst- Opisi z različnih vidikov: - TK: angl. „We study the phenomena of standing, waiting, and serving, and we call this study Queueing Theory“; - Sistemi: angl. „The basic phenomenon of queueing arises whenever a shared facility needs to be accessed for service by a large number of jobs or customers“; - Matematika – teorija verjetnosti: angl. „The study of the waiting times, lengths, and other properties of queues.“- Potrebujemo osnovne pripomočke za opis realnega stanja z namenom optimalnega upravljanja – ponastavitev - Matematični model - Določanje parametrov $\to$ konfiguracij realnih sistemov 4 12. Teorija in uporaba čakalnih vrst 12.1. Uvod v čakalne vrste ■ Zgodovina- 1909: A. K. Erlang: telefonija- 1920: G.F. O'Dell: - 1920: T. Fry:- 1927: Edward Charles Molina:- 1930: Felix Pollaczek: - 1931: Andrey Kolmogorov- 1932: Alexander Khinchin - 1932: C.D. Crommelin- 1951: David G. Kendall: Članek "Some Problems in the Theory of Queues- 1961: John Little: Pričakovani časi strežbe- 1981: Marcel Netus: matrične metode 5 12. Teorija in uporaba čakalnih vrst 12.1. Uvod v čakalne vrste ■ Kje v TK nastopajo čakalne vrste- Omrežje je množica s komunikacijskimi kanali povezanih (realnih in virtualnih) čakalnih vrst- S teorijo čakalnih vrst: - Ocenimo pričakov čas čakanja - Zahtevano velikost predpomnilnikov - Stanje predpomnilnikov po dolgem času - Verjetnost izgube paketov - Notranji mehanizmi protokolov: RED, ... - Ocena zmogljivosti celotnega komunikacijskega sistema- Izrazoslovje: - Paket (ang. packet)= IP paket, stranka, „job“ - Vrsta (ang. Queue) - Čakalni sistem (ang. Queueing system) 6 12. Teorija in uporaba čakalnih vrst 12.1. 
Uvod v čakalne vrste ■ O pomembnih verjetnostnih porazdelitvah- Poissonova porazdelitev: - Število paketov na časovnem intervalu pri danih povprečnih prihodih $\lambda$ - Porazdelitev: $$ p(k; \lambda) = \frac{\lambda^k}{k!} e^{-\lambda} $$- Exponentna porazdelitev: - Tedaj so časi med prihodi eksponentni - Gostota por.: $$ p(t; \lambda) = \left\{\matrix{\lambda e^{-\lambda t}, & t \geq 0 \cr 0, \hfill & t < 0.}\right. $$- Erlangova porazdelitve: - Porazdelitev časov strežb za $k$ zaporednih strežnikov, če je $R=k\lambda$; - Gostota por.: $$ p(t; k, R) = \left\{\matrix{\frac{R(R t)^{k-1}}{(k-1)!} e^{-R t}, & t \geq 0 \cr 0, \hfill & t < 0.}\right. $$ 7 12. Teorija in uporaba čakalnih vrst 12.2. Model in Kendallov opis čakalne vrste 12.2. Model in Kendallov opis čakalne vrste■ Model sistema z eno čakalno vrsto■ Sosledje dogodkov v čakalnem sistemu■ Standardni opis sistema: Kendallov opis■ Kendallov opis večih čakalnih vrst■ Porazdelitev časov prihodov in strežbe■ Vrstni red strežbe■ Pomembni primeri čakalnih sistemov■ Analiza M/M/1■ Primer M/M/1: prenos paketnih podatkov■ Primer M/M/1: spletni strežnik■ Čakalni sistem M/G/1■ Kako izbrati model čakalnega sistema v realnem primeru? 8 12. Teorija in uporaba čakalnih vrst 12.2. Model in Kendallov opis čakalne vrste ■ Model sistema z eno čakalno vrsto 9 12. Teorija in uporaba čakalnih vrst 12.2. Model in Kendallov opis čakalne vrste ■ Sosledje dogodkov v čakalnem sistemu 10 12. Teorija in uporaba čakalnih vrst 12.2. Model in Kendallov opis čakalne vrste ■ Standardni opis sistema: Kendallov opis- Zapis A/B/C(/D/E/F) - A: Porazdelitev časov prihodov; - B: Porazdelitev časov strežbe; - C: Število strežnikov (mest za strežbo) - D: Celotna kapaciteta sistema (spomina) (∞ če ni podano) - E: Velikost populacije (∞ če ni podano) - F: Vrstni red strežbe (FCFS če ni podano) - D, F, E navadno ni podano 11 12. Teorija in uporaba čakalnih vrst 12.2. Model in Kendallov opis čakalne vrste ■ Kendallov opis večih čakalnih vrst- Več čakalnih vrst v več strežnikov?- Posplošitev teorije, zapišemo npr. 5 A/B/C(/D/E/F) - Zapletena teorija, izpustimo - Glavne ugotovitve držijo s področja ene čakalne vrste 12 12. Teorija in uporaba čakalnih vrst 12.2. Model in Kendallov opis čakalne vrste ■ Porazdelitev časov prihodov in strežbe- Porazdeliev časov prihodov (porazdelitve se nanašajo na čase, ne na število paketov): - $D$: deterministična - $M$: Eksponentna porazdelitev (Poissonov promet) - $E_k$: Erlang s k fazami (k strežnikov zaporedno) - $H_k$: hipereksponentna tipa k (k strežnikov vzporedno) - $Gl$: splošna neodvisna (neznana ali različna od zgornjih)- Porazdeliev časov strežbe - $D$: deterministična - $N$: normalna (Gaussova) - $M$: Eksponentna porazdelitev - $E_k$: Erlang s k fazami (k strežnikov zaporedno) - $H_k$: hipereksponentna tipa k (k strežnikov vzporedno) - $Gl$: splošna neodvisna (neznana ali različna od zgornjih) Opombe: - ti modeli so poenostavitve realnih sistemov, ki so v osnovi bolj kompleksni, vendarle so dobljeni rezultati analize uporabni v praksi;- ti modeli ne upoštevajo morebitne samopodobnosti prometa; 13 12. Teorija in uporaba čakalnih vrst 12.2. Model in Kendallov opis čakalne vrste ■ Vrstni red strežbe- FIFO (ang. First In - First Out): po vrstnem redu prihodov- LIFO (ang. (Last In – First Out): prvi so zadnji- SIRO (ang. Service In Random Order): naključni ven- TS (ang. time sharing): uporabniki si delijo čas- PIR (ang. Priority in selection): prioritete glede na dogovore s strankami (ang. Service level agreement, SLA)- GD (ang. 
general) splošna - to je neznana; 14 12. Teorija in uporaba čakalnih vrst 12.2. Model in Kendallov opis čakalne vrste ■ Pomembni primeri čakalnih sistemov- M/M/1: najpreprostejši realen primer (uporaben tudi za spletni strežnik) - Poissonova porazdelitev paketov - Eksponentna porazdelitev strežbe - En sam strežnik- M/D/m: klasična telefonija - Poissonova porazdelitev paketov - Determični časi strežbe - m strežnikov- M/G/1: trdi disk- G/G/m: realen spletni strežnik: - Potreben je model „omrežja čakalnih vrst“ - Parametri modela: Network Arrival Rate (A), Average File Size (F), Buffer Size (B), Initialization Time (I), Static Server Time (Y), Dynamic Server Rate (R), Server Network Bandwidth (S), Client Network Bandwidth (C) Opomba: Kompleksnih realnih sistemov ne opisujemo z matematičnimi modeli, ampak jih simuliramo s simulatorji in na tak način pridobimo rezultate. 15 12. Teorija in uporaba čakalnih vrst 12.2. Model in Kendallov opis čakalne vrste ■ Analiza M/M/1 (1)- Poznamo (izmerimo): - $\lambda$: povprečno število prispelih paketov na dano časovno enoto - $\mu$: povprečno število strežb na dano časovno enoto - $N$: velikost čakalne vrste (če je znano)- Iščemo: - $L$: Pričakovano število paketov v sistemu - $L_q$: Pričakovano število paketov v čakalni vrsti - $W$: Pričakovan čakalni čas v sistemu - $W_q$: Pričakovan čakalni čas v čakalni vrsti - $P_n$: verjetnost, da je v sistemu 𝑛 paketov - $P(\p>n)$: verjetnost, da je število paketov v sistemu več kot 𝑛 - $N$: potrebna dolžina vrste, da je verjetnost odmeta manj od $p_0$ 16 12. Teorija in uporaba čakalnih vrst 12.2. Model in Kendallov opis čakalne vrste ■ Analiza M/M/1 (2)- Rešitev: - Definiramo izkoriščenost (ang. utility): $$ \rho=\frac{\lambda}{\mu} $$ - Z Markovskimi verigami izpeljemo: $$ \matrix{ L_q &=& \frac{\rho^2}{1-\rho}, & L = \frac{\rho^2}{1-\rho} \hfill\cr W_q &=& \frac{\rho}{\mu(1-\rho)}, & W = \frac{1}{\mu(1-\rho)} = \frac{1}{\mu - \lambda} \hfill\cr P_n &=& (1-\rho)\rho^n \hfill\cr P[\p > n] &=& \rho^n \hfill\cr N &\geq& \frac{\log(p_0)}{\log(\rho)} \hfill\cr } $$ - Porazdelitev čakalnih časov in percentili so $$ \matrix{ p_W(t) &=& (\mu - \lambda) e^{-(\mu-\lambda) t} \cr \alpha_W(r) &=& W \ln\left(\frac{100}{100-r}\right) \cr } $$ 17
###Code
import numpy as np
# Basic queueing system functions
# Utility
def util(la, mu):
return la/mu
# expected number of packets in the waiting queue (L_q = rho^2/(1-rho))
def exp_num_of_packets_que(la, mu):
rho = la/mu
return rho**2/(1-rho)
# expected number of packets in the system (L = rho/(1-rho))
def exp_num_of_packets_sys(la, mu):
rho = la/mu
return rho/(1-rho)
# expected waiting time in the waiting queue (W_q)
def exp_wait_time_que(la, mu):
rho = la/mu
return rho/(mu*(1-rho))
# expected waiting time in the system (W) as a function of the utilization rho
def exp_wait_time_que_rho(rho, mu):
return 1/(mu*(1-rho))
# expected waiting time in the system (W = 1/(mu - la))
def exp_wait_time_sys(la, mu):
return 1.0/(mu - la)
# probability there are exactly n packets in the system
def prob_num_packets_sys(la, mu, n):
rho = la/mu
return (1-rho)*(rho**n)
# probability there are more than n packets in the system
def prob_more_num_packets_sys(la, mu, n):
rho = la/mu
return rho**n
# required queue size so that the probability of dropping a packet is less than p_0
def req_que_len(la, mu, p0):
rho = la/mu
return np.log(p0)/np.log(rho)
###Output
_____no_output_____
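###Markdown
(Added for illustration, not part of the original notebook.) The slides above also give the density of the waiting time, p_W(t) = (mu - lambda) e^(-(mu - lambda) t), and its percentiles, alpha_W(r) = W ln(100/(100 - r)); both follow directly from the M/M/1 results and can be coded in the same style:
###Code
# density of the waiting time in the system of an M/M/1 queue
def wait_time_pdf(la, mu, t):
    return (mu - la)*np.exp(-(mu - la)*t)

# r-th percentile of the waiting time, alpha_W(r) = W * ln(100/(100-r))
def wait_time_percentile(la, mu, r):
    W = 1.0/(mu - la)
    return W*np.log(100.0/(100.0 - r))

# e.g. the 90th percentile of the waiting time for la = 620, mu = 800
print(wait_time_percentile(620.0, 800.0, 90))
###Output
_____no_output_____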
###Markdown
12. Queueing theory and its applications 12.2. Model and Kendall notation of a queue ■ Example M/M/1: packet data transmission (1) - Assume the data: - $\lambda=620$ (average number of arriving packets per time unit) - $\mu=800$ (average number of services per time unit) - $N=20$ (queue size) - $p_0=10^{-9}$ (packet-loss probability) - We compute: - $\rho=0.775$ (utilization) - $L=3.444$ (expected number of packets in the system) - $L_q=2.669$ (expected number of packets in the waiting queue) - $W=0.0056$ (expected waiting time in the system) - $W_q=0.0043$ (expected waiting time in the waiting queue) - $P_{15}=0.0049$ (probability that there are 15 packets in the system) - $P(\#P > 15)=0.0061$ (probability that the number of packets in the system exceeds $15$) - $N(10^{-9})=81.3021$ (queue length needed so that the drop probability is below $10^{-9}$). 18
###Code
la = 620.0
mu = 800.0
N = 20
n = 15
p0 = 10**(-9)
print (r'rho =', util(la, mu))
print (r'L = ', '% .3f' % exp_num_of_packets_sys(la, mu))
print (r'L_q = ', '% .3f' % exp_num_of_packets_que(la, mu))
print (r'W = ', '% .3f' % exp_wait_time_sys(la, mu))
print (r'W_q = ', '% .3f' % exp_wait_time_que(la, mu))
print (r'P_{15} = ', '% .3f' % prob_num_packets_sys(la, mu, n))
print (r'P(#P>15) = ', '% .3f' % prob_more_num_packets_sys(la, mu, n))
print (r'N(10^{-9}) = ', '% .3f' % req_que_len(la, mu, p0))
###Output
rho = 0.775
L = 3.444
L_q = 2.669
W = 0.006
W_q = 0.004
P_{15} = 0.005
P(#P>15) = 0.022
N(10^{-9}) = 81.302
###Markdown
12. Queueing theory and its applications 12.2. Model and Kendall notation of a queue ■ Example M/M/1: packet data transmission (2) Dependence of the waiting time on the utilization - see the graph below. 19
###Code
import numpy as np
import matplotlib.pylab as plt
mu = 10
rho = np.arange(0, 0.99, 0.01)
wt = exp_wait_time_que_rho(rho, mu)
# Plot the expected waiting time in the system as a function of the utilization rho
plt.figure(figsize=(8,5))
plt.plot(rho, wt)
plt.show()
###Output
_____no_output_____ |
implant-finder/implants.ipynb | ###Markdown
CheXpert: Mine reports for mention of device directly.
###Code
import pandas as pd
cxr_records = pd.read_csv('mimiccxr_files/cxr-record-list.csv.gz') #images
cxr_records.head()
cxr_studies = pd.read_csv('mimiccxr_files/cxr-study-list.csv.gz', compression='gzip').rename(columns = {'path':'txt_path'})
cxr_studies.head()
pathdir = 'mimiccxr_files/'
from tqdm import tqdm
def get_text(row):
fpath= pathdir+row
with open(fpath, 'r') as file:
text = file.read()
return text
tqdm.pandas(desc='assigning text...')
cxr_studies['text'] = cxr_studies['txt_path'].progress_apply(get_text)
cxr_studies
cxr_studies = cxr_studies.rename(columns = {'path':'txt_path'})
cxr_studies
chexpert_df = pd.read_csv('mimiccxr_jpg_and_labels/mimic-cxr-2.0.0-chexpert.csv',usecols=['subject_id','study_id','Support Devices'])
# note: this copies the CheXpert 'Support Devices' label by row position and assumes both frames
# list the studies in the same order (a merge on study_id would be the safer join)
cxr_studies['Support Devices'] = chexpert_df['Support Devices']
cxr_studies
cxr_split_df = pd.read_csv('mimiccxr_jpg_and_labels/mimic-cxr-2.0.0-split.csv.gz')
cxr_split_df
# cxr_studies = cxr_studies.merge(cxr_metadata_df, on = 'study_id')
cxr_studies = cxr_studies.merge(cxr_split_df, on = 'study_id')
cxr_studies
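# (Illustrative addition, not in the original notebook.) The markdown above says the goal is to mine
# the reports for mentions of a device directly; a minimal keyword search over the report text could
# look like this. The keyword list is a guess, not CheXpert's definition of 'Support Devices'.
device_keywords = ['pacemaker', 'picc', 'port-a-cath', 'endotracheal tube', 'central line']
pattern = '|'.join(device_keywords)
cxr_studies['mentions_device'] = cxr_studies['text'].str.lower().str.contains(pattern)
cxr_studies['mentions_device'].sum()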
###Output
_____no_output_____ |
PICES_Regional_Ecosystem_Tool.ipynb | ###Markdown
PICES Regional Ecosystem Tool Data acquisition, analysis, plotting & saving to facilitate IEA report Developed by Chelle Gentemann ([email protected]) & Marisol Garcia-Reyes ([email protected])****** Instructions To configure: In the cell marked by `* Configuration *`, specify: Region, Variable, Date Period To execute: In the top menu click `Run` -> `Run All` ****** Regions (Region Number - Region Name): 11 California Current, 12 Gulf of Alaska, 13 East Bering Sea, 14 North Bering Sea, 15 Aleutian Islands, 16 West Bering Sea, 17 Sea of Okhotsk, 18 Oyashio Current, 19 R19, 20 Yellow Sea, 21 East China Sea, 22 Kuroshio Current, 23 West North Pacific, 24 East North Pacific *** Variables 1) SST: Sea Surface Temperature ('1981-01-01' - present) 2) Chl: Chlorophyll-a Concentration ('1997-01-01' - '2018-06-30') 3) Wind_U, Wind_V, Wind_Speed: Wind Speed Vectors & Speed ('1997-01-01' - present) 4) Current_U, Current_V, Current_Speed: Sea Surface Currents Vectors ('1992-01-01' - present) 5) SLA: Sea Level Anomaly ('1992-01-01' - present) 6) ADT: Absolute Dynamic Topography Heights ('1992-01-01' - present)****** *** Configuration ***
###Code
#######################
#### Configuration ####
#######################
## Region to analyze ##
region = 11 # <<<----- Use number (11 to 24) based on table above
## Variable ##
## Select the variable to analyze from the list above (eg. 'sst','chl','wind_v','current_speed','sla','adt')
var = 'sst' # <<<----- Use short name given above. upper or lower case accepted.
## Date Period to analyze ##
## Specify the period using the format: #### YYYY-MM-DD #####
## Data available specified above
## All data in monthly resolution
initial_date = '1981-01-01'
final_date = '2019-12-31'
##############################
#### End of configuration ####
##############################
#### Do not modify ####
%matplotlib inline
import sys
sys.path.append('./utils/subroutines/')
import pices
from pices import analyze_PICES_Region
analyze_PICES_Region(region, var, initial_date, final_date)
#### End of script ####
###Output
_____no_output_____ |
Notebooks/NB06b_Data_Access_And_Quality_Control.ipynb | ###Markdown
NB06b Data Access and Quality Control[](https://mybinder.org/v2/gh/CSBiology/BIO-BTE-06-L-7/gh-pages?filepath=NB06b_Data_Access_And_Quality_Control.ipynb)[Download Notebook](https://github.com/CSBiology/BIO-BTE-06-L-7/releases/download/NB06b/NB06b_Data_Access_And_Quality_Control.ipynb) With this notebook, we want to converge the threads of computational and experimental proteomics by analyzing the data measured by you during the practical course. Behind the scenes there was already a lot going on! While you were going through the hands-on tutorials addressing single steps of the computational proteomics pipeline, we executed a pipeline combining all the steps. In this tutorial, we will provide you with the output of the pipeline, your sample description and a lot of code, which should help you explore the data set generated by you! Since the experiment is of non-negligible depth, so is the code needed to analyze it. Do not hesitate if you do not understand every step right away; this is no trivial task, and nontrivial are the means to achieve it! We included some questions along the way. The questions aim to check your understanding, but they can also serve as food for thought. For the explorative data analysis, we are using [Deedle](http://bluemountaincapital.github.io/Deedle/tutorial.html). Deedle is an easy to use library for data and time series manipulation and for scientific programming. It supports working with structured data frames, ordered and unordered data, as well as time series. Deedle is designed to work well for exploratory programming using F#. I. Reading the sample description Before we analyze our data, we will download and read the sample description provided by the experimentalist.
###Code
let directory = __SOURCE_DIRECTORY__
let path2 = Path.Combine[|directory;"downloads/alle_Gruppen_V7_SWATE.xlsx"|]
downloadFile path2 "alle_Gruppen_V7_SWATE.xlsx" "bio-bte-06-l-7"
let _,_,_,myAssayFile = XLSX.AssayFile.AssayFile.fromFile path2
let inOutMap = BIO_BTE_06_L_7_Aux.ISA_Aux.createInOutMap myAssayFile
###Output
_____no_output_____
###Markdown
Next, we will prepare functions to look up parameters, which might be needed for further calculations.
###Code
let normalizeFileName (f:string) = if Path.HasExtension f then f else Path.ChangeExtension(f, "wiff")
//
let getStrain (fileName:string) =
let fN = fileName |> normalizeFileName
BIO_BTE_06_L_7_Aux.ISA_Aux.tryGetCharacteristic inOutMap "Cultivation -Sample preparation" "strain" fN myAssayFile
|> Option.defaultValue ""
//
let getExpressionLevel (fileName:string) =
let fN = fileName |> normalizeFileName
BIO_BTE_06_L_7_Aux.ISA_Aux.tryGetCharacteristic inOutMap "Cultivation -Sample preparation" "gene expression" fN myAssayFile
|> Option.defaultValue "Wt-Like"
//
let get15N_CBC_Amount (fileName:string) =
let fN = fileName |> normalizeFileName
BIO_BTE_06_L_7_Aux.ISA_Aux.tryGetCharacteristic inOutMap "Extraction" "gram" fN myAssayFile |> Option.defaultValue ""
|> String.split ' '
|> Array.head
|> float
//
let get15N_PS_Amount (fileName:string) =
let fN = fileName |> normalizeFileName
BIO_BTE_06_L_7_Aux.ISA_Aux.tryGetCharacteristic inOutMap "Extraction" "gram #2" fN myAssayFile |> Option.defaultValue ""
|> String.split ' '
|> Array.head
|> float
//
let getGroupID (fileName:string) =
let fN = fileName |> normalizeFileName
BIO_BTE_06_L_7_Aux.ISA_Aux.tryGetParameter inOutMap "Extraction" "Group name" fN myAssayFile |> Option.defaultValue ""
|> int
###Output
_____no_output_____
###Markdown
A quick execution to test the retrieval of data from the isa sample table:
###Code
getStrain "WCGr2_U1.wiff"
getExpressionLevel "WCGr2_U1.wiff"
get15N_CBC_Amount "WCGr2_U1.wiff"
get15N_PS_Amount "WCGr2_U1.wiff"
getGroupID "WCGr2_U1.wiff"
###Output
_____no_output_____
###Markdown
Now that we have the sample sheet, all that is missing is the data to be analyzed:
###Code
let path = Path.Combine[|directory;"downloads/Quantifications_wc_annotated.txt"|]
downloadFile path "Quantifications_wc_annotated.txt" "bio-bte-06-l-7"
###Output
_____no_output_____
###Markdown
II. Raw data access using Deedle:As teasered in the primer, we want to work with our tabular data using Deedle. Luckily, Deedle does not only deliver data frame and seriesmanipulation, but also allows us to quickly read the recently downloaded data into the memory:
###Code
let rawData = Frame.ReadCsv(path,separators="\t")
###Output
_____no_output_____
###Markdown
To visualize the data, we can call the "formatAsTable" function. The preview in Visual Studio Code does not allow the charts to be scrollable, so we pipe the output into "Chart.Show" to visualize the data in your browser.
###Code
rawData
|> Frame.take 10
|> formatAsTable
|> Chart.Show
###Output
_____no_output_____
###Markdown
Looking at the raw data, we can see that each row contains a different quantification of a peptide ion, with the columns containing a single ion feature each, such as peptide ion charge, sequence or a quantification value reported for a file (e.g. light, heavy or ratio).Since the columns ProteinGroup, StringSequence, PepSequenceID and Charge uniquely identify a row, we can use these to index the rows.For this, we use a language feature called ["anonymous record type"](https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/anonymous-records). Here we create a tuple like structure, with the additional feature that each element of the tuple is named (e.g.: Proteingroup).
###Code
let indexedData =
rawData
// StringSequence is the peptide sequence
|> Frame.indexRowsUsing (fun os ->
{|
ProteinGroup = os.GetAs<string>("ProteinGroup");
Synonyms = os.GetAs<string>("Synonyms")
StringSequence = os.GetAs<string>("StringSequence");
PepSequenceID = os.GetAs<int>("PepSequenceID");
Charge = os.GetAs<int>("Charge")
|}
)
// The effect of our frame manipulation can be observed:
indexedData
|> Frame.take 10
|> formatAsTable
|> Chart.Show
###Output
_____no_output_____
###Markdown
III. Augmenting and filtering the data frame The data frame already contains all information needed to perform the analysis, but it could still benefit from some quality-of-life upgrades. Say, we want to encode the specific qConcat protein as a separate feature:
###Code
// Why does it make sense to model Qprots using this type, why do we not simply use a string?
type Qprot =
| CBB
| PS
let finalRaw =
indexedData
|> Frame.mapRowKeys (fun k ->
let qprot =
match k.ProteinGroup |> String.contains "QProt_newCBB", k.ProteinGroup |> String.contains "QProt_newPS" with
| true, false -> Some CBB
| false, true -> Some PS
| _ -> None
{|k with QProt = qprot|}
)
// What does this line filter for? Why does this make sense for our analysis?
// How many peptide ions did the filter remove?
|> Frame.filterRows (fun k s -> k.QProt.IsSome)
|> Frame.mapRowKeys (fun k -> {|k with QProt = k.QProt.Value|})
// Finally we want to define a function that given a distinct Qprot,
// returns the correct ISA lookup. (See: 'Reading the sample description')
let initGetQProtAmount qProt =
match qProt with
| CBB -> get15N_CBC_Amount
| PS -> get15N_PS_Amount
finalRaw
|> Frame.take 10
|> formatAsTable
|> Chart.Show
###Output
_____no_output_____
###Markdown
IV. Global quality control. With our data frame prepared, we want to see on a global scale if our experiment worked. We plot the overall mean of the 14N and 15N quantifications and observe if we can recover our dilution series (15N), while keeping the analyte to be quantified at a constant level (14N). Since it comes in handy to simplify the data frame, we will only keep columns that contain a specific identifier, such as "Ratio", "Light" or "Heavy".
###Code
let sliceQuantColumns quantColID frame =
frame
|> Frame.filterCols (fun ck os -> ck |> String.contains ("."+quantColID))
|> Frame.mapColKeys (fun ck -> ck.Split('.') |> Array.item 0)
// How did the data frame change, how did the column headers change?
let ratios = sliceQuantColumns "Ratio" finalRaw
let light = sliceQuantColumns "Light" finalRaw
let heavy = sliceQuantColumns "Heavy" finalRaw
ratios
|> Frame.take 10
|> formatAsTable
|> Chart.Show
###Output
_____no_output_____
###Markdown
A nice tool to explore and compare distributions of different populations is the representation as a boxplot! To use this tool, we will define a function which creates a boxplot for every column (file) of our data set:
###Code
let createBoxPlot f =
f
|> Frame.getNumericCols
|> Series.map (fun k s ->
let x,y =
s
|> Series.values
|> Seq.map (fun values -> k,values)
|> Seq.unzip
Chart.BoxPlot(x,y,Orientation=StyleParam.Orientation.Vertical)
)
|> Series.values
|> Chart.Combine
|> Chart.withY_AxisStyle "Ion intensity"
###Output
_____no_output_____
###Markdown
The function applied to the n14 values:
###Code
// How is the data distributed?
light
|> createBoxPlot
###Output
_____no_output_____
###Markdown
The function applied to the n15 values:
###Code
// Can you recover the dilution series?
heavy
|> createBoxPlot
###Output
_____no_output_____
###Markdown
The following function performs a normalization which accounts for a specific effect. Can you determine what the function accounts for?
###Code
let normalizePeptides f =
f
|> Frame.transpose
|> Frame.getNumericCols
|> Series.mapValues (fun s ->
let m = Stats.median s
s / m
)
|> Frame.ofColumns
|> Frame.transpose
// How does the distribution of the data change when the normalization is applied?
light
|> normalizePeptides
|> createBoxPlot
heavy
|> normalizePeptides
|> createBoxPlot
###Output
_____no_output_____
###Markdown
Finally we have a look at the ratios.
###Code
// Does it make sense to normalize the ratios the same way?
ratios
|> createBoxPlot
###Output
_____no_output_____
###Markdown
V. Local quality control. Now that we know on a global scale how our experiment worked, it might be time to have a look at the details. First, we want to write a function that allows us to plot all peptides of a protein vs. the dilution used. This way we can identify peptides that we want to use and those that seem to be prone to error and should thus be discarded. To keep things simple, we apply a filter step at the beginning, which only keeps peptides belonging to one protein and samples measured by one group in the data frame. What are the sources of error? Which peptides do you think should be discarded and why? Which proteins need to be analyzed with extra care? Hint: you can hover over the data points to get insight into the file name and gene expression pattern of the corresponding strain.
###Code
// With this type we create an alias to our row key, this allows us to write functions, which operate on data frames such as 'plotPeptidesOf','discardPeptideIonInFile' and 'discardPeptideIon'
type PeptideIon =
{|
ProteinGroup : string
Synonyms : string
StringSequence : string
PepSequenceID : int
Charge : int
QProt : Qprot
|}
// Given a frame, a prot-ID and a group-ID this function creates an xy plot for every peptide ion belonging to the protein/proteingroup.
// The parameter 'prot' can either be given a valid Cre-ID or a synonym.
// What is the unit of the x-Axis? How is the ratio calculated?
let plotPeptidesOf (ratios:Frame<PeptideIon,string>) (prot:string) (groupID:int) =
ratios
|> Frame.filterRows (fun k s -> k.Synonyms.Contains prot || k.ProteinGroup.Contains prot )
|> Frame.filterCols (fun k s -> getGroupID k = groupID)
|> Frame.transpose
|> Frame.getNumericCols
|> Series.map (fun pep (values) ->
let getQProtAmount = initGetQProtAmount pep.QProt
let qprotAmounts,ratios,fileLabel =
values
|> Series.map (fun fileName (ratio) ->
let qProtAmount = getQProtAmount fileName
let expressionLevel = getExpressionLevel fileName
qProtAmount, ratio, (sprintf "%s %s" fileName expressionLevel)
)
|> Series.values
|> Seq.unzip3
Chart.Point(qprotAmounts,ratios,Labels=fileLabel)
|> Chart.withTraceName (sprintf "S:%s_C:%i" pep.StringSequence pep.Charge)
|> Chart.withX_AxisStyle("qProt Amount")
|> Chart.withY_AxisStyle("Ratio")
)
|> Series.values
|> Chart.Combine
###Output
_____no_output_____
###Markdown
First we get an overview of available protein ids.
###Code
ratios.RowKeys
|> Array.ofSeq
|> Array.map (fun k -> k.Synonyms)
|> Array.distinct
###Output
_____no_output_____
###Markdown
Then we can start to visualize our results:
###Code
plotPeptidesOf ratios "rbcL" 1
plotPeptidesOf ratios "RBCS2;RBCS1" 2
plotPeptidesOf ratios "FBP1" 2
plotPeptidesOf ratios "FBP2" 2
plotPeptidesOf ratios "SEBP1" 2
###Output
_____no_output_____
###Markdown
Since we want to save our result and use it for the next notebook, where we will have a look at the isotopic labeling efficiency and finally calculate absolute protein amounts, we need to save the filtered frame. Additionally, we want to keep information which was dropped along the way: isotopic patterns. In order to do so, we perform a join operation, which keeps only those rows present in both files:
###Code
// Are there redundant columns in the result frame? Why?
let frameComplete =
Frame.join JoinKind.Inner finalRaw ratios
###Output
_____no_output_____
###Markdown
With the plots at hand, we can use the following functions to manipulate the data frame and discard peptides and/or whole files which we do not want to use for an absolute protein quantification e.g.:
###Code
let discardPeptideIonInFile stringsequence charge filename (ratios:Frame<PeptideIon,string>) =
ratios
|> Frame.map (fun r c value ->
let cFileName = String.split '.' c |> Array.head
if r.StringSequence = stringsequence && r.Charge = charge && cFileName = filename then nan else value
)
let discardPeptideIon stringsequence charge (ratios:Frame<PeptideIon,string>) =
ratios
|> Frame.filterRows (fun r s -> (r.StringSequence = stringsequence && r.Charge = charge) |> not)
###Output
_____no_output_____
###Markdown
These functions can then be used to create an updated version of the frame, containing only the values we want to use for quantification e.g.:
###Code
let filtered =
frameComplete
|> discardPeptideIonInFile "IYSFNEGNYGLWDDSVK" 3 "WCGr2_UF_1"
|> discardPeptideIon "IYSFNEGNYGLWDDSVK" 2
###Output
_____no_output_____
###Markdown
Of course, it is possible to apply very strict additional filters onto the previously filtered frame:
###Code
let ratiosFiltered =
filtered
|> Frame.filterCols (fun k s ->
let kFileName = String.split '.' k |> Array.head
try
get15N_CBC_Amount kFileName > 0.1
with
| _ -> false
)
###Output
_____no_output_____
###Markdown
This frame can then be saved locally using the following pattern:
###Code
let frameToSave =
ratiosFiltered
|> Frame.indexRowsOrdinally
frameToSave.SaveCsv(@"C:\YourPath\testOut.txt", separator = '\t', includeRowKeys = false)
###Output
_____no_output_____ |
notebooks/Lemmatizing of SpaCy(Part-3).ipynb | ###Markdown
Lemmatizing * Text normalization * Word inflection == syntactic differences between word forms. * Lemmatizing: reducing a word to its base/root form based on its intended meaning. * Stemming: cutting off prefixes/suffixes to reduce a word to its base form.
###Code
import spacy
nlp=spacy.load('en')
docx = nlp("study studying studious studio student ")
for word in docx:
print("Word => ", word.text, "Lemmatizing =>", word.lemma_,"Part of speech : ", word.pos_)
docx1=nlp("Walking walks walk walker")
for word in docx1:
print("Word => ", word.text, "Lemmatizing =>", word.lemma_,"Part of speech : ", word.pos_)
docx2=nlp("good goods run running runner runny was be were ")
for word in docx2:
print("Word => ", word.text, "Lemmatizing =>", word.lemma_,"Part of speech : ", word.pos_)
###Output
Word => good Lemmatizing => good Part of speech : ADJ
Word => goods Lemmatizing => good Part of speech : NOUN
Word => run Lemmatizing => run Part of speech : VERB
Word => running Lemmatizing => run Part of speech : VERB
Word => runner Lemmatizing => runner Part of speech : NOUN
Word => runny Lemmatizing => runny Part of speech : NOUN
Word => was Lemmatizing => be Part of speech : VERB
Word => be Lemmatizing => be Part of speech : VERB
Word => were Lemmatizing => be Part of speech : VERB
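###Markdown
(Added for illustration, not part of the original notebook.) The notes above contrast lemmatizing with stemming; a quick comparison with NLTK's PorterStemmer (assumes the nltk package is installed) shows how the two differ:
###Code
from nltk.stem import PorterStemmer
stemmer = PorterStemmer()
for word in ["studies", "studying", "running", "goods", "was"]:
    # stems are produced by cutting suffixes, lemmas by mapping to the intended base form
    print("Word =>", word, "Stem =>", stemmer.stem(word), "Lemma =>", nlp(word)[0].lemma_)
###Output
_____no_output_____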
|
01.Algorithm/math_prime.ipynb | ###Markdown
Math 1) Modular arithmetic - in C++, int holds up to 2^31-1 and long long up to 2^63-1 (about 10^18), so problems ask for the answer taken modulo some number - **print the remainder of the answer divided by M** 1) Addition and multiplication (https://www.acmicpc.net/problem/10430)~~~ mod = % (A + B) % C = {(A % C) + (B % C)} % C (A X B) % C = {(A % C) X (B % C)} % C~~~2) Subtraction: add C first, then take the remainder~~~0 <= A % C < C, 0 <= B % C < C, so -C < A % C - B % C < C and 0 <= A % C - B % C + C < 2C~~~3) Division: Fermat's little theorem - B and C coprime - C: prime~~~(A / B) % C = {A X B^(C-2)} % C~~~
###Code
# my solution
a = input()
a = [int(x) for x in a.split()]
A = a[0]; B = a[1]; C = a[2]
print((A+B)%C)
print((A%C + B%C)%C)
print((A*B)%C)
print((A%C * B%C)%C)
# reference solution
A,B,C = map(int, input().split())
print((A+B)%C)
print((A%C + B%C)%C)
print((A*B)%C)
print((A%C * B%C)%C)
###Output
10 15 5
0
0
0
0
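###Markdown
(Added for illustration.) The notes above also state the modular-division rule (A / B) % C = {A X B^(C-2)} % C for prime C; Python's built-in three-argument pow computes the modular power directly:
###Code
# modular division via Fermat's little theorem; C must be prime and B not a multiple of C
def mod_div(a, b, c):
    return (a * pow(b, c - 2, c)) % c

print(mod_div(10, 5, 7))  # (10 / 5) % 7 == 2
###Output
_____no_output_____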
###Markdown
--- 2) GCD / LCM - used, for instance, to reduce the answer to lowest terms when it is a fraction 1) GCD: use the Euclidean algorithm - recursive: O(log n) - iterative: O(log n) - three numbers: gcd(gcd(a, b), c)
###Code
# 2.1 최대공약수
def gcd(a, b):
if b == 0:
return a
else:
return gcd(b, a%b)
gcd(9, 25)
def gcd(a, b):
while b != 0:
a, b = b, a % b
return a
gcd(29, 20)
gcd(gcd(5, 25), 30)
###Output
_____no_output_____
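###Markdown
(Added for illustration.) The heading above mentions both the GCD and the LCM; the LCM follows directly from the GCD via lcm(a, b) = a * b / gcd(a, b):
###Code
# least common multiple, reusing the gcd defined above
def lcm(a, b):
    return a * b // gcd(a, b)

print(lcm(6, 10))  # 30
###Output
_____no_output_____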
###Markdown
3) Prime numbers - two kinds of algorithms - checking whether a given number N is prime - a. try every divisor from 2 to N-2 - b. N must not be divisible by any natural number that is at least 2 and **at most N/2** - if N = a*b, the smaller factor is at least 2 (excluding 1), so the other is at most N/2; a prime only allows 1*N - c. N must not be divisible by any natural number **at most sqrt(N)** - finding all primes up to N - **sieve of Eratosthenes**: for each prime, cross out its multiples (N log(log N)) - the implementation does not actually delete entries, it marks them as composite - delete rather than generate: **whatever remains is a prime number** - **Goldbach's conjecture**: every **even** number greater than 2 can be written as a sum of two primes - verified up to 10^18 3.1) Is num prime? - best: O($\sqrt{N}$) Divide num by every number smaller than num
###Code
def is_prime(num):
if num < 2:
return False
for i in range(2, num):
if num % i == 0:
return False
return True
is_prime(1)
int(3/2)
###Output
_____no_output_____
###Markdown
Divide num by the numbers up to num/2
###Code
def is_prime(num):
if num < 2:
return False
for i in range(2, int(num/2)+1):
if num % i == 0:
return False
return True
is_prime(923123)
from math import sqrt
def is_prime(num):
if num < 2:
return False
    # check divisors up to sqrt(num); sqrt(num/2) misses some divisors (e.g. 9 would be reported prime)
    for i in range(2, int(sqrt(num))+1):
if num % i == 0:
return False
return True
is_prime(924311233377731)
def is_prime(num):
if num < 2:
return False
i = 2
while i*i <= num:
if num % i == 0:
return False
i += 1
return True
is_prime(923123)
###Output
_____no_output_____
###Markdown
3.2) Counting primes - the primes among the numbers up to the limit: cross out the multiples of each prime as it is found
###Code
# bad
import time
start = time.time()
limit = 100000000
a = [True]*(limit+1)
a[0] = a[1] = False
ans = 0
for i in range(2, limit+1):
if a[i]:
ans += 1
        for j in range(i+i, limit+1, i):
a[j] = False
end = time.time()
print(end-start)
# good
import time
start = time.time()
Max = 10000000
check = [True] * (Max+1)
check[0] = check[1] = False
ans = 0
for i in range(2, Max+1):
if check[i]:
ans += 1
j = i+i
while j <= Max:
check[j] = False
j += i
end = time.time()
print(end-start)
###Output
10.058541059494019
###Markdown
--- Problem: https://www.acmicpc.net/problem/1978
###Code
def is_prime(n):
if n < 2:
return False
i = 2
while i*i <= n:
if n % i == 0:
return False
i += 1
return True
_ = input()
num = list(map(int, input().split()))
ans = list(filter(is_prime, num))
print(len(ans))
###Output
2 3 5
3
###Markdown
Problem: https://www.acmicpc.net/problem/1929
###Code
# solution 3
MAX = 1000000
check = [True]*(MAX+1)
check[0] = check[1] = False
for i in range(2, MAX+1):
if check[i]:
j = i*i
while j <= MAX:
check[j] = False
j += i
m, n = map(int, input().split())
for i in range(m, n+1):
if check[i]:
print(i)
# solution 2
MAX = 1000000
check = [True]*(MAX+1)
check[0] = check[1] = False
for i in range(2, MAX+1):
if check[i]:
for j in range(i*i, MAX+1, i):
check[j] = False
m, n = map(int, input().split())
for i in range(m, n+1):
if check[i]:
print(i)
# solution 1
m, n = map(int, input().split())
check = [True] * (n+1)
check[0] = check[1] = False
for i in range(2, n+1):
if check[i]:
for j in range(i*i, n+1, i):
check[j] = False
if i >= m:
print(i)
###Output
1 10
2
3
5
7
###Markdown
--- 4. Goldbach's conjecture - every even number greater than 4 can be written as the sum of two odd primes - https://www.acmicpc.net/problem/6588
###Code
test = 5
Max = 10
def prime_list(Max):
ans = []
check = [True] * (Max+1)
check[0] = check[1] = False
for i in range(2, Max+1):
if check[i]:
for j in range(i*i, Max+1, i):
check[j] = False
if i % 2:
ans.append(i)
return ans
ans = prime_list(Max)
# find a Goldbach partition of an even number as a sum of two odd primes
def test(num):
    for a in ans:
        b = num - a
        if b in ans:
            return a, b
    return None

print(test(10))  # e.g. 10 = 3 + 7
###Output
_____no_output_____ |
assignments/2021-08-28-L16/Yunzi_assignment_016.ipynb | ###Markdown
Lecture 16: How many different queues can be formed Problem description: The four classmates in the class are Jason, Sophie, Tony and Yunzi. They line up in a single-file column from front to back; how many different columns can they form? Solve it by programming. Hint: a "column" (single file) is a way of queuing in which no member in the middle of the line has other members to their left or right; the other members appear only in front of or behind them. The member at the head has nobody in front, and the member at the tail has nobody behind.
###Code
classmates = ["Jason", "Sophie", "Tony", "Yunzi"]
###Output
_____no_output_____
###Markdown
Math Background 1. Simple permutations and combinations 2. The concept of the factorial. Prerequisites 1. Permutations and combinations A. Consider the case where only one child, Jason, lines up - Jason B. Consider two children, Jason and Sophie - Jason Sophie - Sophie Jason C. Consider three children, Jason, Sophie and Tony - Jason Sophie Tony - Jason Tony Sophie - Sophie Jason Tony - Tony Sophie Jason - Sophie Tony Jason - Tony Jason Sophie. Observe the pattern and write out all the possible ways the four children can line up. 2. **Factorial** - P1 = 1 = 1! - P2 = 2 * 1 = 2! - P3 = 3 * 2 * 1 = 3 * (2 * 1) = 3 * P2 = 3! - P4 = 4 * 3 * 2 * 1 = 4 * (3 * 2 * 1) = 4 * P3 = 4! - P5 = 5 * 4 * 3 * 2 * 1 = 5! - ... - Pn = n * (n-1) * (n-2) * ... * 2 * 1 = n!
###Code
def cal_num_queue(num_person):
#result = 1 * 2 * 3 * 4 * ... * num_person
result, factor = 1, 1
while factor <= num_person:
result = factor * result
factor += 1
#print("result: {}, factor: {}".format(result, factor))
print("There are {} queues altogether, if number of person is {}".format\
(result, num_person))
return
cal_num_queue(6)
###Output
There are 720 queues altogether, if number of person is 6
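###Markdown
(Added cross-check, not part of the original lecture.) Python's standard library already provides the factorial, so cal_num_queue can be verified against math.factorial:
###Code
import math
# 4 classmates -> 4! = 24 queues, 6 classmates -> 6! = 720 queues
print(math.factorial(4), math.factorial(6))
###Output
_____no_output_____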
###Markdown
3. Manually enumerate all possible queues of three classmates: "Jason", "Sophie", "Tony"
###Code
classmates = ["Jason", "Sophie", "Tony"]
queues = [] # store all the queues generated 保存所有生成的队伍
n_classmates = len(classmates)
first, second, third = None, None, None
first, second, third = "Jason", "Sophie", "Tony"
queues.append((first, second, third))
first, second, third = "Jason", "Tony", "Sophie"
queues.append((first, second, third))
first, second, third = "Sophie", "Jason", "Tony"
queues.append((first, second, third))
first, second, third = "Sophie", "Tony", "Jason"
queues.append((first, second, third))
first, second, third = "Tony", "Jason", "Sophie"
queues.append((first, second, third))
first, second, third = "Tony", "Sophie", "Jason"
queues.append( (first, second, third) ) # tuple,
for q in queues:
print(q)
###Output
('Jason', 'Sophie', 'Tony')
('Jason', 'Tony', 'Sophie')
('Sophie', 'Jason', 'Tony')
('Sophie', 'Tony', 'Jason')
('Tony', 'Jason', 'Sophie')
('Tony', 'Sophie', 'Jason')
###Markdown
4. Use a while loop to list all possible queues
###Code
classmates = ["Jason", "Sophie", "Tony"]
queues = []   # store all the queues generated
i, j, k = 0, 0, 0
n_classmates = len(classmates)
first, second, third = None, None, None
while i < n_classmates:
first = classmates[i]
j = 0
while j < n_classmates:
if j != i:
second = classmates[j]
k = 0
while k < n_classmates:
if k != i and k != j:
third = classmates[k]
                    queue = (first, second, third)
                    print(queue)
                    queues.append(queue)
k += 1
j += 1
i += 1
###Output
_____no_output_____
###Markdown
5. Use a for...in loop to list all possible queues
###Code
classmates = ["Jason", "Sophie", "Tony", "Yunzi", "Celine"]
queues = [] # store all the queues generated 保存所有生成的队伍
queue = None # build q queue to store all the classmates "Jason", "Tony"
# 声明一个队伍变量,存放一个可能的排队形式
for first in classmates:
for second in classmates:
if second != first:
for third in classmates:
if third != first and third != second:
queue = (first, second, third) # find a queue 找到一个队伍
queues.append(queue) # append this queue to queues
# 把找到的队伍放到队伍列表里
for queue in queues:
print(queue)
###Output
('Jason', 'Sophie', 'Tony')
('Jason', 'Sophie', 'Yunzi')
('Jason', 'Sophie', 'Celine')
('Jason', 'Tony', 'Sophie')
('Jason', 'Tony', 'Yunzi')
('Jason', 'Tony', 'Celine')
('Jason', 'Yunzi', 'Sophie')
('Jason', 'Yunzi', 'Tony')
('Jason', 'Yunzi', 'Celine')
('Jason', 'Celine', 'Sophie')
('Jason', 'Celine', 'Tony')
('Jason', 'Celine', 'Yunzi')
('Sophie', 'Jason', 'Tony')
('Sophie', 'Jason', 'Yunzi')
('Sophie', 'Jason', 'Celine')
('Sophie', 'Tony', 'Jason')
('Sophie', 'Tony', 'Yunzi')
('Sophie', 'Tony', 'Celine')
('Sophie', 'Yunzi', 'Jason')
('Sophie', 'Yunzi', 'Tony')
('Sophie', 'Yunzi', 'Celine')
('Sophie', 'Celine', 'Jason')
('Sophie', 'Celine', 'Tony')
('Sophie', 'Celine', 'Yunzi')
('Tony', 'Jason', 'Sophie')
('Tony', 'Jason', 'Yunzi')
('Tony', 'Jason', 'Celine')
('Tony', 'Sophie', 'Jason')
('Tony', 'Sophie', 'Yunzi')
('Tony', 'Sophie', 'Celine')
('Tony', 'Yunzi', 'Jason')
('Tony', 'Yunzi', 'Sophie')
('Tony', 'Yunzi', 'Celine')
('Tony', 'Celine', 'Jason')
('Tony', 'Celine', 'Sophie')
('Tony', 'Celine', 'Yunzi')
('Yunzi', 'Jason', 'Sophie')
('Yunzi', 'Jason', 'Tony')
('Yunzi', 'Jason', 'Celine')
('Yunzi', 'Sophie', 'Jason')
('Yunzi', 'Sophie', 'Tony')
('Yunzi', 'Sophie', 'Celine')
('Yunzi', 'Tony', 'Jason')
('Yunzi', 'Tony', 'Sophie')
('Yunzi', 'Tony', 'Celine')
('Yunzi', 'Celine', 'Jason')
('Yunzi', 'Celine', 'Sophie')
('Yunzi', 'Celine', 'Tony')
('Celine', 'Jason', 'Sophie')
('Celine', 'Jason', 'Tony')
('Celine', 'Jason', 'Yunzi')
('Celine', 'Sophie', 'Jason')
('Celine', 'Sophie', 'Tony')
('Celine', 'Sophie', 'Yunzi')
('Celine', 'Tony', 'Jason')
('Celine', 'Tony', 'Sophie')
('Celine', 'Tony', 'Yunzi')
('Celine', 'Yunzi', 'Jason')
('Celine', 'Yunzi', 'Sophie')
('Celine', 'Yunzi', 'Tony')
###Markdown
6. Use the imported method `permutations` to find all possible queues
###Code
classmates = ["Jason", "Sophie", "Tony"]
from itertools import permutations
perms = permutations(classmates)
perms = list(perms)
i = 0
while i < len(perms):
print(perms[i])
i += 1
###Output
('Jason', 'Sophie', 'Tony')
('Jason', 'Tony', 'Sophie')
('Sophie', 'Jason', 'Tony')
('Sophie', 'Tony', 'Jason')
('Tony', 'Jason', 'Sophie')
('Tony', 'Sophie', 'Jason')
###Markdown
Solution: here we only use the `permutations` method; readers can practice solving the problem with the other methods introduced above.
###Code
classmates = ["Jason", "Sophie", "Tony", "Yunzi"]
from itertools import permutations
perms = permutations(classmates)
perms = list(perms)
i = 0
while i < len(perms):
print(perms[i])
i += 1
###Output
('Jason', 'Sophie', 'Tony', 'Yunzi')
('Jason', 'Sophie', 'Yunzi', 'Tony')
('Jason', 'Tony', 'Sophie', 'Yunzi')
('Jason', 'Tony', 'Yunzi', 'Sophie')
('Jason', 'Yunzi', 'Sophie', 'Tony')
('Jason', 'Yunzi', 'Tony', 'Sophie')
('Sophie', 'Jason', 'Tony', 'Yunzi')
('Sophie', 'Jason', 'Yunzi', 'Tony')
('Sophie', 'Tony', 'Jason', 'Yunzi')
('Sophie', 'Tony', 'Yunzi', 'Jason')
('Sophie', 'Yunzi', 'Jason', 'Tony')
('Sophie', 'Yunzi', 'Tony', 'Jason')
('Tony', 'Jason', 'Sophie', 'Yunzi')
('Tony', 'Jason', 'Yunzi', 'Sophie')
('Tony', 'Sophie', 'Jason', 'Yunzi')
('Tony', 'Sophie', 'Yunzi', 'Jason')
('Tony', 'Yunzi', 'Jason', 'Sophie')
('Tony', 'Yunzi', 'Sophie', 'Jason')
('Yunzi', 'Jason', 'Sophie', 'Tony')
('Yunzi', 'Jason', 'Tony', 'Sophie')
('Yunzi', 'Sophie', 'Jason', 'Tony')
('Yunzi', 'Sophie', 'Tony', 'Jason')
('Yunzi', 'Tony', 'Jason', 'Sophie')
('Yunzi', 'Tony', 'Sophie', 'Jason')
###Markdown
Summary 1. Learn the `for`...`in` loop and understand how it differs from the `while` loop: 1. `for` is simpler; it needs neither a temporary counter variable nor a condition expression, while `while` needs a condition expression (plus a temporary variable). 2. `for` iterates over a list (an iterable); a `while` loop does not need a list. 3. `while` must change a variable inside the loop body so the loop condition can keep being checked; `for` does not. 2. Review the use of `list` variables and their methods. 3. Get a first look at the `permutations` method. Computer trivia: none for this lecture. Assignments 1. Now there are 5 children: Jason, Sophie, Tony, Yunzi, Celine. They line up in a single file; how many different queues are there in total? Please solve this with a `while` loop, a `for`...`in` loop, and the `permutations` method separately. If there are now 5 children instead of 4 (Celine added), how many queues can be formed? Please use the `while` loop, `for`...`in` loop, and `permutations` methods to answer this question separately.
###Code
# Use while loop
classmates = ["Jason", "Sophie", "Tony", "Yunzi", "Celine"]
queues = []
i, j, k, l, m = 0, 0, 0, 0, 0
n_classmates = len(classmates)
first, second, third, fourthly, fifth = None, None, None, None, None
while i < n_classmates:
first = classmates[i]
j = 0
while j < n_classmates:
if j != i:
second = classmates[j]
k = 0
while k < n_classmates:
if k != i and k != j:
third = classmates[k]
l = 0
while l < n_classmates:
if l != i and l != j and l != k:
fourthly = classmates[l]
m = 0
while m < n_classmates:
if m != i and m != j and m != k and m != l:
fifth = classmates[m]
queue = (first, second, third, fourthly, fifth)
print(queue)
queues.append(queue)
m += 1
l += 1
k += 1
j += 1
i += 1
for queue in queues:
print(queue)
# Use for...in loop
classmates = ["Jason", "Sophie", "Tony", "Yunzi", "Celine"]
queues = []
queue = None
for first in classmates:
for second in classmates:
if second != first:
for third in classmates:
if third != first and third != second:
for fourthly in classmates:
if fourthly != first and fourthly != second and fourthly != third:
for fifth in classmates:
if fifth != first and fifth != second and fifth != third and fifth != fourthly:
queue = (first, second, third, fourthly, fifth)
queues.append(queue)
for queue in queues:
print(queue)
# Use method permutations
classmates = ["Jason", "Sophie", "Tony", "Yunzi", "Celine"]
from itertools import permutations
perms = permutations(classmates)
perms = list(perms)
i = 0
while i < len(perms):
print(perms[i])
i += 1
###Output
('Jason', 'Sophie', 'Tony', 'Yunzi', 'Celine')
('Jason', 'Sophie', 'Tony', 'Celine', 'Yunzi')
('Jason', 'Sophie', 'Yunzi', 'Tony', 'Celine')
('Jason', 'Sophie', 'Yunzi', 'Celine', 'Tony')
('Jason', 'Sophie', 'Celine', 'Tony', 'Yunzi')
('Jason', 'Sophie', 'Celine', 'Yunzi', 'Tony')
('Jason', 'Tony', 'Sophie', 'Yunzi', 'Celine')
('Jason', 'Tony', 'Sophie', 'Celine', 'Yunzi')
('Jason', 'Tony', 'Yunzi', 'Sophie', 'Celine')
('Jason', 'Tony', 'Yunzi', 'Celine', 'Sophie')
('Jason', 'Tony', 'Celine', 'Sophie', 'Yunzi')
('Jason', 'Tony', 'Celine', 'Yunzi', 'Sophie')
('Jason', 'Yunzi', 'Sophie', 'Tony', 'Celine')
('Jason', 'Yunzi', 'Sophie', 'Celine', 'Tony')
('Jason', 'Yunzi', 'Tony', 'Sophie', 'Celine')
('Jason', 'Yunzi', 'Tony', 'Celine', 'Sophie')
('Jason', 'Yunzi', 'Celine', 'Sophie', 'Tony')
('Jason', 'Yunzi', 'Celine', 'Tony', 'Sophie')
('Jason', 'Celine', 'Sophie', 'Tony', 'Yunzi')
('Jason', 'Celine', 'Sophie', 'Yunzi', 'Tony')
('Jason', 'Celine', 'Tony', 'Sophie', 'Yunzi')
('Jason', 'Celine', 'Tony', 'Yunzi', 'Sophie')
('Jason', 'Celine', 'Yunzi', 'Sophie', 'Tony')
('Jason', 'Celine', 'Yunzi', 'Tony', 'Sophie')
('Sophie', 'Jason', 'Tony', 'Yunzi', 'Celine')
('Sophie', 'Jason', 'Tony', 'Celine', 'Yunzi')
('Sophie', 'Jason', 'Yunzi', 'Tony', 'Celine')
('Sophie', 'Jason', 'Yunzi', 'Celine', 'Tony')
('Sophie', 'Jason', 'Celine', 'Tony', 'Yunzi')
('Sophie', 'Jason', 'Celine', 'Yunzi', 'Tony')
('Sophie', 'Tony', 'Jason', 'Yunzi', 'Celine')
('Sophie', 'Tony', 'Jason', 'Celine', 'Yunzi')
('Sophie', 'Tony', 'Yunzi', 'Jason', 'Celine')
('Sophie', 'Tony', 'Yunzi', 'Celine', 'Jason')
('Sophie', 'Tony', 'Celine', 'Jason', 'Yunzi')
('Sophie', 'Tony', 'Celine', 'Yunzi', 'Jason')
('Sophie', 'Yunzi', 'Jason', 'Tony', 'Celine')
('Sophie', 'Yunzi', 'Jason', 'Celine', 'Tony')
('Sophie', 'Yunzi', 'Tony', 'Jason', 'Celine')
('Sophie', 'Yunzi', 'Tony', 'Celine', 'Jason')
('Sophie', 'Yunzi', 'Celine', 'Jason', 'Tony')
('Sophie', 'Yunzi', 'Celine', 'Tony', 'Jason')
('Sophie', 'Celine', 'Jason', 'Tony', 'Yunzi')
('Sophie', 'Celine', 'Jason', 'Yunzi', 'Tony')
('Sophie', 'Celine', 'Tony', 'Jason', 'Yunzi')
('Sophie', 'Celine', 'Tony', 'Yunzi', 'Jason')
('Sophie', 'Celine', 'Yunzi', 'Jason', 'Tony')
('Sophie', 'Celine', 'Yunzi', 'Tony', 'Jason')
('Tony', 'Jason', 'Sophie', 'Yunzi', 'Celine')
('Tony', 'Jason', 'Sophie', 'Celine', 'Yunzi')
('Tony', 'Jason', 'Yunzi', 'Sophie', 'Celine')
('Tony', 'Jason', 'Yunzi', 'Celine', 'Sophie')
('Tony', 'Jason', 'Celine', 'Sophie', 'Yunzi')
('Tony', 'Jason', 'Celine', 'Yunzi', 'Sophie')
('Tony', 'Sophie', 'Jason', 'Yunzi', 'Celine')
('Tony', 'Sophie', 'Jason', 'Celine', 'Yunzi')
('Tony', 'Sophie', 'Yunzi', 'Jason', 'Celine')
('Tony', 'Sophie', 'Yunzi', 'Celine', 'Jason')
('Tony', 'Sophie', 'Celine', 'Jason', 'Yunzi')
('Tony', 'Sophie', 'Celine', 'Yunzi', 'Jason')
('Tony', 'Yunzi', 'Jason', 'Sophie', 'Celine')
('Tony', 'Yunzi', 'Jason', 'Celine', 'Sophie')
('Tony', 'Yunzi', 'Sophie', 'Jason', 'Celine')
('Tony', 'Yunzi', 'Sophie', 'Celine', 'Jason')
('Tony', 'Yunzi', 'Celine', 'Jason', 'Sophie')
('Tony', 'Yunzi', 'Celine', 'Sophie', 'Jason')
('Tony', 'Celine', 'Jason', 'Sophie', 'Yunzi')
('Tony', 'Celine', 'Jason', 'Yunzi', 'Sophie')
('Tony', 'Celine', 'Sophie', 'Jason', 'Yunzi')
('Tony', 'Celine', 'Sophie', 'Yunzi', 'Jason')
('Tony', 'Celine', 'Yunzi', 'Jason', 'Sophie')
('Tony', 'Celine', 'Yunzi', 'Sophie', 'Jason')
('Yunzi', 'Jason', 'Sophie', 'Tony', 'Celine')
('Yunzi', 'Jason', 'Sophie', 'Celine', 'Tony')
('Yunzi', 'Jason', 'Tony', 'Sophie', 'Celine')
('Yunzi', 'Jason', 'Tony', 'Celine', 'Sophie')
('Yunzi', 'Jason', 'Celine', 'Sophie', 'Tony')
('Yunzi', 'Jason', 'Celine', 'Tony', 'Sophie')
('Yunzi', 'Sophie', 'Jason', 'Tony', 'Celine')
('Yunzi', 'Sophie', 'Jason', 'Celine', 'Tony')
('Yunzi', 'Sophie', 'Tony', 'Jason', 'Celine')
('Yunzi', 'Sophie', 'Tony', 'Celine', 'Jason')
('Yunzi', 'Sophie', 'Celine', 'Jason', 'Tony')
('Yunzi', 'Sophie', 'Celine', 'Tony', 'Jason')
('Yunzi', 'Tony', 'Jason', 'Sophie', 'Celine')
('Yunzi', 'Tony', 'Jason', 'Celine', 'Sophie')
('Yunzi', 'Tony', 'Sophie', 'Jason', 'Celine')
('Yunzi', 'Tony', 'Sophie', 'Celine', 'Jason')
('Yunzi', 'Tony', 'Celine', 'Jason', 'Sophie')
('Yunzi', 'Tony', 'Celine', 'Sophie', 'Jason')
('Yunzi', 'Celine', 'Jason', 'Sophie', 'Tony')
('Yunzi', 'Celine', 'Jason', 'Tony', 'Sophie')
('Yunzi', 'Celine', 'Sophie', 'Jason', 'Tony')
('Yunzi', 'Celine', 'Sophie', 'Tony', 'Jason')
('Yunzi', 'Celine', 'Tony', 'Jason', 'Sophie')
('Yunzi', 'Celine', 'Tony', 'Sophie', 'Jason')
('Celine', 'Jason', 'Sophie', 'Tony', 'Yunzi')
('Celine', 'Jason', 'Sophie', 'Yunzi', 'Tony')
('Celine', 'Jason', 'Tony', 'Sophie', 'Yunzi')
('Celine', 'Jason', 'Tony', 'Yunzi', 'Sophie')
('Celine', 'Jason', 'Yunzi', 'Sophie', 'Tony')
('Celine', 'Jason', 'Yunzi', 'Tony', 'Sophie')
('Celine', 'Sophie', 'Jason', 'Tony', 'Yunzi')
('Celine', 'Sophie', 'Jason', 'Yunzi', 'Tony')
('Celine', 'Sophie', 'Tony', 'Jason', 'Yunzi')
('Celine', 'Sophie', 'Tony', 'Yunzi', 'Jason')
('Celine', 'Sophie', 'Yunzi', 'Jason', 'Tony')
('Celine', 'Sophie', 'Yunzi', 'Tony', 'Jason')
('Celine', 'Tony', 'Jason', 'Sophie', 'Yunzi')
('Celine', 'Tony', 'Jason', 'Yunzi', 'Sophie')
('Celine', 'Tony', 'Sophie', 'Jason', 'Yunzi')
('Celine', 'Tony', 'Sophie', 'Yunzi', 'Jason')
('Celine', 'Tony', 'Yunzi', 'Jason', 'Sophie')
('Celine', 'Tony', 'Yunzi', 'Sophie', 'Jason')
('Celine', 'Yunzi', 'Jason', 'Sophie', 'Tony')
('Celine', 'Yunzi', 'Jason', 'Tony', 'Sophie')
('Celine', 'Yunzi', 'Sophie', 'Jason', 'Tony')
('Celine', 'Yunzi', 'Sophie', 'Tony', 'Jason')
('Celine', 'Yunzi', 'Tony', 'Jason', 'Sophie')
('Celine', 'Yunzi', 'Tony', 'Sophie', 'Jason')
###Markdown
2. Jason, Sophie, Tony and Yunzi come from 4 different families. The new semester is about to begin, and the children are going to talk about their summer holidays through one-to-one video meetings (only 2 children per call). How many video meetings are required so that every child gets to hear about every other child's summer holiday? Solve it by programming.
###Code
classmates = ["Jason", "Sophie", "Tony", "Yunzi"]
queues = []
queue = None
# note: the original cell relied on a stale variable `first` left over from an earlier cell;
# here every ordered pair of two different children is generated explicitly instead
for first in classmates:
    for second in classmates:
        if second != first:
            queue = (first, second)
            queues.append(queue)
for queue in queues:
print(queue)
###Output
('Jason', 'Sophie')
('Jason', 'Tony')
('Jason', 'Yunzi')
('Sophie', 'Jason')
('Sophie', 'Tony')
('Sophie', 'Yunzi')
('Tony', 'Jason')
('Tony', 'Sophie')
('Tony', 'Yunzi')
('Yunzi', 'Jason')
('Yunzi', 'Sophie')
('Yunzi', 'Tony')
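###Markdown
(Added note, not part of the original solution.) Each unordered pair appears twice in the list above, because a video call between two children is the same meeting no matter who is listed first. itertools.combinations enumerates every pair exactly once, which gives the required number of calls directly:
###Code
from itertools import combinations
classmates = ["Jason", "Sophie", "Tony", "Yunzi"]
pairs = list(combinations(classmates, 2))
for pair in pairs:
    print(pair)
print(len(pairs))  # 6 one-to-one video meetings are enough
###Output
_____no_output_____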
|
Hackathon_ML Sample Problems/Amex Dataset/ML on Amex Dataset Final.ipynb | ###Markdown
Define a baseline model with basic data preprocessing.
###Code
cust_demo_data_expt = cust_demo_data.copy()
cust_demo_data_expt['marital_status'].fillna('Unspecified',inplace=True)
cust_demo_data_expt['no_of_children'].fillna(0,inplace=True)
cust_demo_data_expt['age_range'].replace(['18-25','26-35','36-45','46-55','56-70','70+'],[18,26,36,46,56,70],inplace=True)
cust_demo_data_expt['family_size'].replace('5+',5,inplace=True)
cust_demo_data_expt['no_of_children'].replace('3+',3,inplace=True)
cust_tran_data_expt = cust_tran_data.copy()
cust_tran_data_expt = pd.merge(cust_tran_data_expt,coupon_data,how='inner',on='item_id')
cust_tran_data_expt.drop('date',axis=1,inplace=True)
for column in ['quantity','coupon_discount','other_discount','selling_price']:
tran_summation(column)
cust_tran_data_expt.drop_duplicates(subset=['customer_id','item_id','coupon_id'], keep='first', inplace=True)
train_data_merge = pd.merge(train_data,cust_tran_data_expt,how='inner',on=['customer_id','coupon_id'])
train_data_merge = pd.merge(train_data_merge,cust_demo_data_expt,how='left',on='customer_id')
train_data_merge = pd.merge(train_data_merge,item_data,how='left',on='item_id')
train_data_merge.drop('marital_status',axis=1,inplace=True)
train_data_merge.fillna({'age_range':0,'rented':0,'family_size':0,'no_of_children':0,'income_bracket':0},inplace=True)
# astype returns a new Series, so the result has to be assigned back to take effect
train_data_merge['family_size'] = train_data_merge['family_size'].astype('int8')
train_data_merge['no_of_children'] = train_data_merge['no_of_children'].astype('int8')
train_data_merge = pd.get_dummies(train_data_merge, columns=['brand_type','category'], drop_first=False)
X = train_data_merge.drop('redemption_status', axis=1)
y = train_data_merge['redemption_status']
X_train,X_test,y_train,y_test = train_test_split(X,y,train_size=0.7,random_state=7)
# defining the model
classifier = LogisticRegression(solver='lbfgs',max_iter=10000)
# fitting the model
classifier.fit(X_train,y_train)
# predicting test result with model
y_pred = classifier.predict(X_test)
# Creating Classification report for Logistic Regression Baseline model
print ("Classification Report for Baseline Logistic Regression")
print(classification_report(y_test,y_pred))
report = pd.DataFrame(classification_report(y_test,y_pred,output_dict=True)).transpose()
write_file(file_write_cnt,'No','No','Yes',len(X.columns),list(X.columns),'Logistic Regresssion',report['precision'][1],report['recall'][1],report['support']['accuracy'],'Baseline Model')
file_write_cnt = file_write_cnt + 1
# defining the model
classifier = RandomForestClassifier(n_estimators=100)
# fitting the model
classifier.fit(X_train,y_train)
# predicting test result with model
y_pred = classifier.predict(X_test)
# Creating Classification report for RandomForest Classifier Baseline model
print ("Classification Report for Baseline RandomForest Classifier")
print(classification_report(y_test,y_pred))
report = pd.DataFrame(classification_report(y_test,y_pred,output_dict=True)).transpose()
write_file(file_write_cnt,'No','No','Yes',len(X.columns),X.columns,'Random Forest Classifier',report['precision'][1],report['recall'][1],report['support']['accuracy'],'Baseline Model')
file_write_cnt += 1
###Output
_____no_output_____
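###Markdown
The cells above call a tran_summation helper that is defined in an earlier, unshown part of the notebook. A plausible sketch, assuming it aggregates a transaction column per customer/item/coupon group (the real helper may differ):
###Code
def tran_summation(column):
    # hypothetical reconstruction: replace each row's value with its group-wise sum,
    # matching the drop_duplicates on ['customer_id', 'item_id', 'coupon_id'] that follows
    cust_tran_data_expt[column] = cust_tran_data_expt.groupby(
        ['customer_id', 'item_id', 'coupon_id'])[column].transform('sum')
###Output
_____no_output_____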
###Markdown
One Hot Encoding
###Code
del train_data_merge
train_data_merge = pd.merge(train_data,cust_tran_data_expt,how='inner',on=['customer_id','coupon_id'])
train_data_merge = pd.merge(train_data_merge,cust_demo_data,how='left',on='customer_id')
train_data_merge = pd.merge(train_data_merge,item_data,how='left',on='item_id')
train_data_merge['no_of_children'].fillna(0,inplace=True)
train_data_merge.fillna({'marital_status':'Unspecified','rented':'Unspecified','family_size':'Unspecified','age_range':'Unspecified'},inplace=True)
train_data_merge['income_bracket'].fillna(train_data_merge['income_bracket'].mean(),inplace=True)
train_data_merge.drop(['id'],axis=1,inplace=True)
train_data_merge['no_of_children'].replace('3+',3,inplace=True)
train_data_merge['no_of_children'] = train_data_merge['no_of_children'].astype('int')  # assign back, astype is not in-place
train_data_merge = pd.get_dummies(train_data_merge, columns=['age_range','marital_status','rented','family_size','brand_type','category'], drop_first=False)
X = train_data_merge.drop('redemption_status', axis=1)
y = train_data_merge['redemption_status']
X_train,X_test,y_train,y_test = train_test_split(X,y,train_size=0.7,random_state=7)
# defining the model
classifier = RandomForestClassifier(n_estimators=100)
# fitting the model
classifier.fit(X_train,y_train)
# predicting test result with model
y_pred = classifier.predict(X_test)
# Creating Classification report for RandomForest Classifier Baseline model
print ("Classification Report for RandomForest Classifier")
print(classification_report(y_test,y_pred))
report = pd.DataFrame(classification_report(y_test,y_pred,output_dict=True)).transpose()
write_file(file_write_cnt,'No','No','Yes',len(X.columns),X.columns,'Random Forest Classfier',report['precision'][1],report['recall'][1],report['support']['accuracy'],'with OHE')
file_write_cnt += 1
###Output
_____no_output_____
###Markdown
Label Encoding
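The `label_encode` helper called in the next cell is defined earlier in the notebook and is not shown in this section. Assuming it simply wraps scikit-learn's `LabelEncoder` and overwrites the column in place (an assumption, not the notebook's actual definition), a minimal equivalent would look like this sketch:

```python
# Hypothetical sketch of the label-encoding helper; the notebook's real label_encode may differ.
from sklearn.preprocessing import LabelEncoder

def label_encode_sketch(column):
    # Map each category (cast to str so missing values are handled uniformly) to an integer code
    train_data_merge[column] = LabelEncoder().fit_transform(train_data_merge[column].astype(str))
```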
###Code
del train_data_merge
train_data_merge = pd.merge(train_data,cust_tran_data_expt,how='inner',on=['customer_id','coupon_id'])
train_data_merge = pd.merge(train_data_merge,cust_demo_data,how='left',on='customer_id')
train_data_merge = pd.merge(train_data_merge,item_data,how='left',on='item_id')
train_data_merge['no_of_children'].fillna(0,inplace=True)
train_data_merge.fillna({'marital_status':'Unspecified','rented':'Unspecified','family_size':'Unspecified','age_range':'Unspecified'},inplace=True)
train_data_merge['income_bracket'].fillna(train_data_merge['income_bracket'].mean(),inplace=True)
train_data_merge['no_of_children'].replace('3+',3,inplace=True)
train_data_merge['no_of_children'] = train_data_merge['no_of_children'].astype('int')
train_data_merge.drop(['id'],axis=1,inplace=True)
for column in ['marital_status','rented','family_size','age_range','brand_type','category']:
label_encode(column)
X = train_data_merge.drop('redemption_status', axis=1)
y = train_data_merge['redemption_status']
X_train,X_test,y_train,y_test = train_test_split(X,y,train_size=0.7,random_state=7)
# defining the model
classifier = RandomForestClassifier(n_estimators=100)
# fitting the model
classifier.fit(X_train,y_train)
# predicting test result with model
y_pred = classifier.predict(X_test)
# Creating Classification report for RandomForest Classifier Baseline model
print ("Classification Report for RandomForest Classifier")
print(classification_report(y_test,y_pred))
report = pd.DataFrame(classification_report(y_test,y_pred,output_dict=True)).transpose()
write_file(file_write_cnt,'No','No','Yes',len(X.columns),X.columns,'Random Forest Classifier',report['precision'][1],report['recall'][1],report['support']['accuracy'],'with Label Encoding')
file_write_cnt += 1
###Output
_____no_output_____
###Markdown
Feature Engineering: treating the Campaign and Transaction Dates, and using the Coupon Redemption percentage to numerically encode categorical columns
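The cell below leans on helpers defined earlier in the notebook (`date_q`, `tran_summation_1`, `cat_percent`) that are not shown in this section. The "coupon redemption percentage" idea is, in effect, target (mean) encoding: each category level is replaced by the share of redeemed coupons observed for that level. The sketch below is a hypothetical minimal version of such an encoder; the name `cat_percent_sketch`, the in-place overwrite, and the use of `redemption_status` as the target are assumptions, and the notebook's real `cat_percent` may scale or round the result differently:

```python
# Hypothetical sketch of redemption-percentage (target-mean) encoding;
# the notebook's real cat_percent may scale, round, or smooth the result differently.
def cat_percent_sketch(column):
    # Replace each category level with the mean redemption rate observed for that level
    train_data_merge[column] = (train_data_merge
                                .groupby(column)['redemption_status']
                                .transform('mean'))
```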
###Code
del train_data_merge
del cust_tran_data_expt
campaign_data_expt = campaign_data.copy()
campaign_data_expt['start_date_q'] = campaign_data_expt['start_date'].map(lambda x: date_q(x,'/'))
campaign_data_expt['end_date_q'] = campaign_data_expt['end_date'].map(lambda x: date_q(x,'/'))
campaign_data_expt.drop(['start_date','end_date'],axis=1,inplace=True)
cust_tran_data_expt = cust_tran_data.copy()
cust_tran_data_expt = pd.merge(cust_tran_data_expt,coupon_data,how='inner',on='item_id')
cust_tran_data_expt['tran_date_q'] = cust_tran_data_expt['date'].map(lambda x: date_q(x,'-'))
cust_tran_data_expt.drop('date',axis=1,inplace=True)
for column in ['quantity','coupon_discount','other_discount','selling_price']:
tran_summation_1(column)
cust_tran_data_expt.drop_duplicates(subset=['customer_id','item_id','coupon_id','tran_date_q'], keep='first', inplace=True)
train_data_merge = pd.merge(train_data,cust_tran_data_expt,how='inner',on=['customer_id','coupon_id'])
train_data_merge = pd.merge(train_data_merge,cust_demo_data,how='left',on='customer_id')
train_data_merge = pd.merge(train_data_merge,item_data,how='left',on='item_id')
train_data_merge = pd.merge(train_data_merge,campaign_data_expt,how='left',on='campaign_id')
train_data_merge['no_of_children'].fillna(0,inplace=True)
train_data_merge.fillna({'marital_status':'Unspecified','rented':'Unspecified','family_size':'Unspecified','age_range':'Unspecified','income_bracket':'Unspecified'},inplace=True)
train_data_merge.drop('id',axis=1,inplace=True)
train_data_merge['no_of_children'].replace('3+',3,inplace=True)
for column in ['customer_id','coupon_id','item_id','campaign_id','no_of_children','marital_status','rented','family_size','age_range','income_bracket','start_date_q','end_date_q','tran_date_q','brand','category']:
cat_percent(column)
train_data_merge = pd.get_dummies(train_data_merge, columns=['campaign_type','brand_type'], drop_first=False)
X = train_data_merge.drop('redemption_status', axis=1)
y = train_data_merge['redemption_status']
feature_sel = [5,10,15,23]
rforc = RandomForestClassifier(n_estimators=100)
for i in feature_sel:
    rfe = RFE(rforc, n_features_to_select=i)
rfe.fit(X, y)
# Selecting columns
sel_cols = []
    for selected, col in zip(rfe.support_, X.columns):
        if selected:
            sel_cols.append(col)
print ('Number of features selected are ::',i)
print ('Columns Selected are ::',sel_cols)
# Creating new DataFrame with selected columns only as X
X_sel = X[sel_cols]
# Split data in to train and test
X_sel_train, X_sel_test, y_sel_train, y_sel_test = train_test_split(X_sel, y, train_size=0.7, random_state=7)
# Fit and Predict the model using selected number of features
grid={"n_estimators":[100]}
rforc_cv = GridSearchCV(rforc,grid,cv=10)
rforc_cv.fit(X_sel_train, y_sel_train)
rforc_pred = rforc_cv.predict(X_sel_test)
# Classification Report
print(classification_report(y_sel_test,rforc_pred))
report = pd.DataFrame(classification_report(y_sel_test,rforc_pred,output_dict=True)).transpose()
write_file(file_write_cnt,'No','No','Yes',len(X_sel.columns),X_sel.columns,'Random Forest Classifier',report['precision'][1],report['recall'][1],report['support']['accuracy'],'Treating Date and Label Encoding with RFE')
file_write_cnt += 1
###Output
_____no_output_____ |
content/zh/projects/bda3/Estimate_asthma_mortality_rate.ipynb | ###Markdown
Estimating a rate from Poisson data (BDA3 p.45)
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gamma
%matplotlib inline
###Output
_____no_output_____
###Markdown
Prior and posterior distributions
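With exposure $x$ and observed count $y$, the Gamma prior is conjugate to the Poisson likelihood, which is exactly the update implemented in the code below:

$$y \mid \theta \sim \mathrm{Poisson}(x\theta), \qquad \theta \sim \mathrm{Gamma}(\alpha, \beta) \quad\Longrightarrow\quad \theta \mid y \sim \mathrm{Gamma}(\alpha + y,\ \beta + x).$$

Here $\alpha=3$, $\beta=5$, and the two data sets $(x_1, y_1)=(2, 3)$ and $(x_2, y_2)=(20, 30)$ give posteriors $\mathrm{Gamma}(6, 7)$ and $\mathrm{Gamma}(33, 25)$. Note that `scipy.stats.gamma` is parameterised by a scale rather than a rate, so the rate $\beta + x$ enters the code as `scale = 1/(beta + x)`.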
###Code
# Create a grid
nx = 200
x = np.linspace(0, 3, nx)
# Prior
alpha, beta = 3, 5
prior_den = gamma.pdf(x, a = alpha, scale = 1/beta)
# Posteriors
x1, y1 = 2, 3
x2, y2 = 20, 30
post_den1 = gamma.pdf(x, alpha+y1, scale = 1/(beta+x1))
post_den2 = gamma.pdf(x, alpha+y2, scale = 1/(beta+x2))
# Plot
fig, (ax1,ax2) = plt.subplots(nrows=1, ncols=2, figsize=(10,5))
ax1.plot(x, prior_den, 'r--', x, post_den1, 'b-',)
ax1.set_ylim([0, 1.8])
ax1.set_xlabel(r'$\theta$');
ax2.plot(x, prior_den, 'r--', x, post_den2, 'b-')
ax2.set_ylim([0, 1.8])
ax2.set_xlabel(r'$\theta$');
###Output
_____no_output_____
###Markdown
Random samples from posterior distributions
###Code
n = 1000
theta_sample1 = gamma.rvs(a=alpha+y1, scale = 1/(beta+x1), size=n)
theta_sample2 = gamma.rvs(a=alpha+y2, scale = 1/(beta+x2), size=n)
fig, (ax1,ax2) = plt.subplots(nrows=1, ncols=2, figsize=(10,5))
ax1.hist(theta_sample1, bins=30, color='grey')
ax1.set_xlabel(r'$\theta$');
ax2.hist(theta_sample2, bins=30, color='grey')
ax2.set_xlabel(r'$\theta$');
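# Quick numerical summary of the draws (not part of the original example):
# posterior means and central 95% intervals for both data sets.
print('theta | (x=2,  y=3):  mean =', round(theta_sample1.mean(), 3),
      ' 95% interval =', np.percentile(theta_sample1, [2.5, 97.5]).round(3))
print('theta | (x=20, y=30): mean =', round(theta_sample2.mean(), 3),
      ' 95% interval =', np.percentile(theta_sample2, [2.5, 97.5]).round(3))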
###Output
_____no_output_____ |
MNIST_Dataset_and_Databases.ipynb | ###Markdown
MNIST Dataset & Database

In the [MNIST tutorial](https://github.com/caffe2/caffe2/blob/master/caffe2/python/tutorials/MNIST.ipynb) we use an lmdb database. You can also use leveldb or even minidb by changing the type reference when you get ready to read from the dbs. In this tutorial, we will go over how to download, extract, and generate lmdb and leveldb variants of the MNIST dataset.

Dataset: You can download the raw [MNIST dataset](https://download.caffe2.ai/datasets/mnist/mnist.zip), g/unzip the dataset and labels, and make the database yourself.

Databases: We provide a few database formats for you to try with the MNIST tutorial. The default is lmdb.

* [MNIST-nchw-lmdb](https://download.caffe2.ai/databases/mnist-lmdb.zip) - contains both the train and test lmdb MNIST databases in NCHW format
* [MNIST-nchw-leveldb](https://download.caffe2.ai/databases/mnist-leveldb.zip) - contains both the train and test leveldb MNIST databases in NCHW format
* [MNIST-nchw-minidb](https://download.caffe2.ai/databases/mnist-minidb.zip) - contains both the train and test minidb MNIST databases in NCHW format

Tools: make_mnist_db

If you like LevelDB you can use Caffe2's `make_mnist_db` binary to generate leveldb databases. This binary is found in `/caffe2/build/caffe2/binaries/` or depending on your OS and installation, in `/usr/local/bin/`.

Here is an example call to `make_mnist_db`:

```
./make_mnist_db --channel_first --db leveldb --image_file ~/Downloads/train-images-idx3-ubyte --label_file ~/Downloads/train-labels-idx1-ubyte --output_file ~/caffe2/caffe2/python/tutorials/tutorial_data/mnist/mnist-train-nchw-leveldb
./make_mnist_db --channel_first --db leveldb --image_file ~/Downloads/t10k-images-idx3-ubyte --label_file ~/Downloads/t10k-labels-idx1-ubyte --output_file ~/caffe2/caffe2/python/tutorials/tutorial_data/mnist/mnist-test-nchw-leveldb
```

Note leveldb can get deadlocked if more than one user attempts to open the leveldb at the same time. This is why there is logic in the Python below to delete LOCK files if they're found.

Python script

You can use the Python in the code blocks below to download and extract the dataset with `DownloadResource`, call the `make_mnist_db` binary, and generate your database with `GenerateDB`. First, we will define our functions.
###Code
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import os
def DownloadResource(url, path):
'''Downloads resources from s3 by url and unzips them to the provided path'''
    import requests, zipfile
    from io import BytesIO  # Python 3: io.BytesIO replaces the Python 2-only StringIO for binary zip content
    print("Downloading... {} to {}".format(url, path))
    r = requests.get(url, stream=True)
    z = zipfile.ZipFile(BytesIO(r.content))
z.extractall(path)
print("Completed download and extraction.")
def GenerateDB(image, label, name):
'''Calls the make_mnist_db binary to generate a leveldb from a mnist dataset'''
name = os.path.join(data_folder, name)
print('DB: ', name)
if not os.path.exists(name):
syscall = "/usr/local/bin/make_mnist_db --channel_first --db leveldb --image_file " + image + " --label_file " + label + " --output_file " + name
# print "Creating database with: ", syscall
os.system(syscall)
else:
print("Database exists already. Delete the folder if you have issues/corrupted DB, then rerun this.")
if os.path.exists(os.path.join(name, "LOCK")):
# print "Deleting the pre-existing lock file"
os.remove(os.path.join(name, "LOCK"))
###Output
_____no_output_____
###Markdown
Now that we have our functions for loading, extracting, and generating our dbs, we will put these functions to use and generate the MNIST data in both lmdb and leveldb formats (if they do not already exist).

First, we **download and extract the MNIST dataset (train and test) in lmdb format** using:

```python
DownloadResource("http://download.caffe2.ai/databases/mnist-lmdb.zip", data_folder)
```

Next, we focus on **downloading, extracting, and generating MNIST train and test leveldbs**. We start by downloading and extracting the raw MNIST dataset (in ubyte format). This will ultimately extract four files, consisting of training images and labels, and testing images and labels.

```python
DownloadResource("http://download.caffe2.ai/datasets/mnist/mnist.zip", data_folder)
```

Finally, we **generate the leveldb train and test databases** (or regenerate; it can get locked with multi-user setups or abandoned threads). We do this by passing our `GenerateDB` function the names of the corresponding ubyte files along with an output file name.

```python
GenerateDB(image_file_train, label_file_train, "mnist-train-nchw-leveldb")
GenerateDB(image_file_test, label_file_test, "mnist-test-nchw-leveldb")
```
###Code
current_folder = os.path.join(os.path.expanduser('~'), 'caffe2_notebooks')
data_folder = os.path.join(current_folder, 'tutorial_data', 'mnist')
# If the data_folder does not already exist, create it
if not os.path.exists(data_folder):
os.makedirs(data_folder)
# Downloads and extracts the lmdb databases of MNIST images - both test and train
if not os.path.exists(os.path.join(data_folder,"mnist-train-nchw-lmdb")):
DownloadResource("http://download.caffe2.ai/databases/mnist-lmdb.zip", data_folder)
else:
print("mnist-lmdb already downloaded and extracted")
# Downloads and extracts the MNIST data set
if not os.path.exists(os.path.join(data_folder, "train-images-idx3-ubyte")):
DownloadResource("http://download.caffe2.ai/datasets/mnist/mnist.zip", data_folder)
else:
print("Raw mnist ubyte data already downloaded and extracted")
# (Re)generate the leveldb database (it can get locked with multi-user setups or abandoned threads)
# Requires the download of the dataset (mnist.zip) - see DownloadResource above.
# You also need to change references in the MNIST tutorial code where you train or test from lmdb to leveldb
image_file_train = os.path.join(data_folder, "train-images-idx3-ubyte")
label_file_train = os.path.join(data_folder, "train-labels-idx1-ubyte")
image_file_test = os.path.join(data_folder, "t10k-images-idx3-ubyte")
label_file_test = os.path.join(data_folder, "t10k-labels-idx1-ubyte")
GenerateDB(image_file_train, label_file_train, "mnist-train-nchw-leveldb")
GenerateDB(image_file_test, label_file_test, "mnist-test-nchw-leveldb")
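# Optional sanity check (not part of the original tutorial): list the contents of the data
# folder so you can confirm the lmdb/leveldb directories and the raw ubyte files are present.
for entry in sorted(os.listdir(data_folder)):
    print(entry)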
###Output
_____no_output_____ |
TimeSeries/UnivariateMultistepEncoderDecoderLSTMexample.ipynb | ###Markdown
Univariate Multistep Encoder Decoder LSTM example

https://machinelearningmastery.com/how-to-develop-lstm-models-for-time-series-forecasting/
###Code
from numpy import array
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, RepeatVector, TimeDistributed
# Split a univariate sequence into multistep input/output samples
def split_sequence(sequences, n_steps_in, n_steps_out):
X, y = list(), list()
for i in range(len(sequences)):
# Find the end of the pattern
end_ix = i + n_steps_in
out_end_ix = end_ix + n_steps_out
        # Stop once the output window would run past the end of the sequence
        if out_end_ix > len(sequences):
break
# Gather input and output parts of the pattern
seq_x, seq_y = sequences[i:end_ix], sequences[end_ix:out_end_ix]
X.append(seq_x)
y.append(seq_y)
return array(X), array(y)
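# For raw_seq = [10, 20, ..., 90] with n_steps_in=3 and n_steps_out=2, split_sequence yields
# samples such as [10, 20, 30] -> [40, 50] and [20, 30, 40] -> [50, 60].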
# Define input sequence
raw_seq = [10, 20, 30, 40, 50, 60, 70, 80, 90]
# Choose a number of time steps
n_steps_in = 3
n_steps_out = 2
# Convert into input/output
X, y = split_sequence(raw_seq, n_steps_in, n_steps_out)
# Reshape from [samples, timesteps] into [samples, timesteps, features]
n_features = 1
X = X.reshape((X.shape[0], X.shape[1], n_features))
# The encoder-decoder also expects 3D targets: [samples, timesteps, features]
y = y.reshape((y.shape[0], y.shape[1], n_features))
# Define model
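# The encoder LSTM compresses the n_steps_in input values into a single state vector,
# RepeatVector copies that vector once per output step, the decoder LSTM unrolls it back
# into a sequence, and TimeDistributed(Dense(1)) maps each decoder step to one forecast value.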
model = Sequential()
model.add(LSTM(100, activation='relu', input_shape=(n_steps_in, n_features)))
model.add(RepeatVector(n_steps_out))
model.add(LSTM(100, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(1)))
model.compile(optimizer='adam', loss='mse')
# Fit model
model.fit(X, y, epochs=200)
# Demonstrate prediction
x_input = array([70, 80, 90])
x_input = x_input.reshape((1, n_steps_in, n_features))
yhat = model.predict(x_input)
yhat
###Output
_____no_output_____ |
04_Python_Functions_examples/008_make_a_simple_calculator.ipynb | ###Markdown
All the IPython Notebooks in this **Python Examples** series by Dr. Milaan Parmar are available @ **[GitHub](https://github.com/milaan9/90_Python_Examples)**

Python Program to Make a Simple Calculator

In this example you will learn to create a simple calculator that can add, subtract, multiply or divide depending upon the input from the user.

To understand this example, you should have the knowledge of the following **[Python programming](https://github.com/milaan9/01_Python_Introduction/blob/main/000_Intro_to_Python.ipynb)** topics:

* **[Python Functions](https://github.com/milaan9/04_Python_Functions/blob/main/001_Python_Functions.ipynb)**
* **[Python Function Arguments](https://github.com/milaan9/04_Python_Functions/blob/main/004_Python_Function_Arguments.ipynb)**
* **[Python User-defined Functions](https://github.com/milaan9/04_Python_Functions/blob/main/Python_User_defined_Functions.ipynb)**
* **[Python if-elif-else Statement](https://github.com/milaan9/03_Python_Flow_Control/blob/main/003_Python_if_elif_else_statement%20.ipynb)**
###Code
# Example 1: Simple Calculator by Using Functions
# This function adds two numbers
def add(x, y):
return x + y
# This function subtracts two numbers
def subtract(x, y):
return x - y
# This function multiplies two numbers
def multiply(x, y):
return x * y
# This function divides two numbers
def divide(x, y):
return x / y
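# Note: divide() raises ZeroDivisionError when the second number is 0;
# the example keeps things minimal and does not guard against that case.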
print("Select operation.")
print("1.Add")
print("2.Subtract")
print("3.Multiply")
print("4.Divide")
while True:
# Take input from the user
choice = input("Enter choice(1/2/3/4): ")
# Check if choice is one of the four options
if choice in ('1', '2', '3', '4'):
num1 = float(input("Enter first number: "))
num2 = float(input("Enter second number: "))
if choice == '1':
print(num1, "+", num2, "=", add(num1, num2))
elif choice == '2':
print(num1, "-", num2, "=", subtract(num1, num2))
elif choice == '3':
print(num1, "*", num2, "=", multiply(num1, num2))
elif choice == '4':
print(num1, "/", num2, "=", divide(num1, num2))
break
else:
print("Invalid Input")
'''
>>Output/Runtime Test Cases:
Select operation.
1.Add
2.Subtract
3.Multiply
4.Divide
Enter choice(1/2/3/4): 3
Enter first number: 3
Enter second number: 9
3.0 * 9.0 = 27.0
'''
###Output
Select operation.
1.Add
2.Subtract
3.Multiply
4.Divide
Enter choice(1/2/3/4): 3
Enter first number: 3
Enter second number: 9
3.0 * 9.0 = 27.0
|