path | concatenated_notebook
---|---|
plateflex/examples/Notebooks/Ex7_full_suite_NW_Pacific.ipynb | ###Markdown
Example 7: Full suite for NW Pacific

Let's take a look now at an oceanic area, where we should be using Free-air gravity anomaly data instead of Bouguer. This example should produce elastic thickness estimates that are very close to those of Kalnins et al. (2009), although we use the parameter F as an additional model parameter (the analysis of Kalnins et al. (2009) is equivalent to setting `F` equal to zero everywhere).

Warning: These grids are fairly large and may require too much memory for the wavelet transform calculations. You can try to decimate the original data sets by a factor of two before starting the analysis if you run into memory problems; however, this will change the sampling distance, so be extra careful in that case. The estimation of flexural parameters over the whole grid will also be much slower, so make sure to use an appropriate decimation factor for testing.
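For reference, a decimation step might look like the rough sketch below (not part of the original workflow): it assumes the `bathydata`, `fairdata` and `thickdata` arrays and the `dx`, `dy` spacings defined in the next cell, keeps every second sample along both axes, and doubles the sampling distances accordingly.

```python
# Hypothetical decimation by a factor of two to reduce memory use.
decim = 2
bathydata = bathydata[::decim, ::decim]
fairdata = fairdata[::decim, ::decim]
thickdata = thickdata[::decim, ::decim]
# The sampling distance changes with decimation, so update it as well.
dx *= decim
dy *= decim
```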
###Code
import numpy as np
import pandas as pd
from plateflex import TopoGrid, FairGrid, ZcGrid, Project
# Read header (first line) of data set using pandas to get grid parameters
xmin, xmax, ymin, ymax, zmin, zmax, dx, dy, nx, ny = \
pd.read_csv('../data/Bathy_PAC.xyz', sep='\t', nrows=0).columns[1:].values.astype(float)
# Change type of nx and ny from float to integers
nx = int(nx); ny = int(ny)
# Read bathymetry and free-air anomaly data
bathydata = pd.read_csv('../data/Bathy_PAC.xyz', sep='\t', \
skiprows=1, names=['x', 'y', 'z'])['z'].values.reshape(ny,nx)[::-1]
fairdata = pd.read_csv('../data/Freeair_PAC.xyz', sep='\t', \
skiprows=1, names=['x', 'y', 'z'])['z'].values.reshape(ny,nx)[::-1]
# Read crustal thickness data
thickdata = pd.read_csv('../data/crustal_thickness_PAC.xyz', sep='\t', \
skiprows=1, names=['x', 'y', 'z'])['z'].values.reshape(ny,nx)[::-1]
# # Here we could extract a smaller grid to make things easier to test - power of 2 for faster processing
# bathydata = bathydata[100:356, 100:356]
# fairdata = fairdata[100:356, 100:356]
# thickdata = thickdata[100:356, 100:356]
###Output
_____no_output_____
###Markdown
All those data sets can be imported into their corresponding `Grid` objects:
###Code
# Load the data as `TopoGrid` and `FairGrid` objects
bathy = TopoGrid(bathydata, dx, dy)
fair = FairGrid(fairdata, dx, dy)
# Create contours
contours = bathy.make_contours(0.)
# Make mask over land areas
mask = (bathy.data > 0.)
# Load the crustal thickness as `ZcGrid` object
thick = ZcGrid(thickdata, dx, dy)
# Plot the three data sets
bathy.plot(mask=mask, contours=contours, cmap='Spectral_r', vmin=-6000, vmax=6000)
fair.plot(mask=mask, contours=contours, cmap='seismic', vmin=-200, vmax=200)
thick.plot(mask=mask, contours=contours, cmap='Spectral_r', vmin=0., vmax=40000)
###Output
grid contains NaN values. Performing interpolation...
###Markdown
Filter water depth attribute and plot it
###Code
# Produce filtered version of water depth
bathy.filter_water_depth()
bathy.plot_water_depth(mask=mask, contours=contours, cmap='Spectral')
###Output
_____no_output_____
###Markdown
We might want to change the value of crustal density, since it should be higher than 2700 kg/m^3 (default value). However, we don't have a `Grid` object for density, so let's fix a new global value:
###Code
# Import plateflex to change default variables
from plateflex.flex import conf_flex
conf_flex.rhoc = 2800.
###Output
_____no_output_____
###Markdown
Define the project with new `Grid` objects, initialize it and execute!
###Code
# Define new project
project = Project(grids=[bathy, fair, thick])
# Initialize project
project.init()
# Calculate wavelet admittance and coherence
project.wlet_admit_coh()
# Make sure we are using 'L2'
project.inverse = 'L2'
# Insert mask
project.mask = mask
# Estimate flexural parameters at every 5 points of the initial grid
project.estimate_grid(5, atype='admit')
###Output
Computing: [##########] 145/145
###Markdown
Now plot everything
###Code
project.plot_results(mean_Te=True, mask=True, contours=contours, cmap='Spectral', vmin=0., vmax=50.)
project.plot_results(std_Te=True, mask=True, contours=contours, cmap='magma_r')
project.plot_results(mean_F=True, mask=True, contours=contours, cmap='Spectral')
project.plot_results(std_F=True, mask=True, contours=contours, cmap='magma_r')
project.plot_results(chi2=True, mask=True, contours=contours, cmap='cividis', vmin=0, vmax=40)
###Output
_____no_output_____ |
sklearn/sklearn learning/demonstration/auto_examples_jupyter/ensemble/plot_adaboost_hastie_10_2.ipynb | ###Markdown
Discrete versus Real AdaBoost

This example is based on Figure 10.2 from Hastie et al 2009 [1]_ and illustrates the difference in performance between the discrete SAMME [2]_ boosting algorithm and the real SAMME.R boosting algorithm. Both algorithms are evaluated on a binary classification task where the target Y is a non-linear function of 10 input features. Discrete SAMME AdaBoost adapts based on errors in predicted class labels whereas real SAMME.R uses the predicted class probabilities.

.. [1] T. Hastie, R. Tibshirani and J. Friedman, "Elements of Statistical Learning Ed. 2", Springer, 2009.

.. [2] J. Zhu, H. Zou, S. Rosset, T. Hastie, "Multi-class AdaBoost", 2009.
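(For reference, `make_hastie_10_2` draws ten independent standard-normal features and sets $Y = 1$ when $\sum_{j=1}^{10} X_j^2 > 9.34$ and $Y = -1$ otherwise, which is what makes the target a non-linear function of the inputs.)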
###Code
print(__doc__)
# Author: Peter Prettenhofer <[email protected]>,
# Noel Dawe <[email protected]>
#
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import zero_one_loss
from sklearn.ensemble import AdaBoostClassifier
n_estimators = 400
# A learning rate of 1. may not be optimal for both SAMME and SAMME.R
learning_rate = 1.
X, y = datasets.make_hastie_10_2(n_samples=12000, random_state=1)
X_test, y_test = X[2000:], y[2000:]
X_train, y_train = X[:2000], y[:2000]
dt_stump = DecisionTreeClassifier(max_depth=1, min_samples_leaf=1)
dt_stump.fit(X_train, y_train)
dt_stump_err = 1.0 - dt_stump.score(X_test, y_test)
dt = DecisionTreeClassifier(max_depth=9, min_samples_leaf=1)
dt.fit(X_train, y_train)
dt_err = 1.0 - dt.score(X_test, y_test)
ada_discrete = AdaBoostClassifier(
base_estimator=dt_stump,
learning_rate=learning_rate,
n_estimators=n_estimators,
algorithm="SAMME")
ada_discrete.fit(X_train, y_train)
ada_real = AdaBoostClassifier(
base_estimator=dt_stump,
learning_rate=learning_rate,
n_estimators=n_estimators,
algorithm="SAMME.R")
ada_real.fit(X_train, y_train)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot([1, n_estimators], [dt_stump_err] * 2, 'k-',
label='Decision Stump Error')
ax.plot([1, n_estimators], [dt_err] * 2, 'k--',
label='Decision Tree Error')
ada_discrete_err = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_discrete.staged_predict(X_test)):
ada_discrete_err[i] = zero_one_loss(y_pred, y_test)
ada_discrete_err_train = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_discrete.staged_predict(X_train)):
ada_discrete_err_train[i] = zero_one_loss(y_pred, y_train)
ada_real_err = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_real.staged_predict(X_test)):
ada_real_err[i] = zero_one_loss(y_pred, y_test)
ada_real_err_train = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_real.staged_predict(X_train)):
ada_real_err_train[i] = zero_one_loss(y_pred, y_train)
ax.plot(np.arange(n_estimators) + 1, ada_discrete_err,
label='Discrete AdaBoost Test Error',
color='red')
ax.plot(np.arange(n_estimators) + 1, ada_discrete_err_train,
label='Discrete AdaBoost Train Error',
color='blue')
ax.plot(np.arange(n_estimators) + 1, ada_real_err,
label='Real AdaBoost Test Error',
color='orange')
ax.plot(np.arange(n_estimators) + 1, ada_real_err_train,
label='Real AdaBoost Train Error',
color='green')
ax.set_ylim((0.0, 0.5))
ax.set_xlabel('n_estimators')
ax.set_ylabel('error rate')
leg = ax.legend(loc='upper right', fancybox=True)
leg.get_frame().set_alpha(0.7)
plt.show()
###Output
_____no_output_____ |
Linear Transformation.ipynb | ###Markdown
###Code
import numpy as np
# Solve the linear system A·X = B using the matrix inverse.
A = np.array([[4, 3], [5, 9]])  # creation of matrix A
print(A)
inv_A = np.linalg.inv(A)  # inverse of A
print(inv_A)
B = np.array([[20], [26]])  # right-hand side B
print(B)
X = np.linalg.inv(A).dot(B)  # X = A^-1 · B
print(X)
X = np.dot(inv_A, B)  # same result using the precomputed inverse
print(X)
# A second linear system, solved the same way.
A = np.array([[20, 10], [17, 22]])
print(A)
Inv_A = np.linalg.inv(A)
print(Inv_A)
B = np.array([[350], [500]])
print(B)
X = np.dot(Inv_A, B)
print(X)
B = np.dot(A, X)  # check: A · X recovers B
print(B)
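# Added sketch: np.linalg.solve solves A·X = B directly without forming the
# inverse explicitly, which is generally faster and more numerically stable.
# For the second system above it should reproduce the same X.
X_solve = np.linalg.solve(A, B)
print(X_solve)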
###Output
[[350.]
[500.]]
|
AzureCognitiveSearchService/SetupAzureCognitiveSearchService.ipynb | ###Markdown
Setup Azure Cognitive Search Service

This notebook will set up your Azure Cognitive Search Service for the COVID-19 example described at https://aka.ms/Covid19CognitiveSearchCode. Data is pulled from two folders in the same Azure blob storage container. The main indexer runs data in json format through a skillset which reshapes the data and extracts medical entities, and puts the enriched data in the search index. A second metadata indexer pulls additional metadata into the same search index.

First, you will need an Azure account. If you don't already have one, you can start a free trial of Azure [here](https://azure.microsoft.com/free/).

Second, create a new Azure search service using the Azure portal. Select your Azure subscription. You may create a new resource group (you can name it something like "covid19-search-rg"). You will need a globally-unique URL as the name of your search service (try something like "covid19-search-" plus your name, organization, or numbers). Finally, choose a nearby location to host your search service - please remember the location that you chose, as your Cognitive Services instance will need to be based in the same location. Click "Review + create" and then (after validation) click "Create" to instantiate and deploy the service.

After deployment is complete, click "Go to resource" to navigate to your new search service. We will need some information about your search service to fill in the "Azure Search variables" section in the cell below. First, on the "Overview" main page, you should see a "Url" value. Copy that value into the "azsearch_url" variable in the cell below (you can just update the `<YourSearchServiceName>` section of the URL with the name of your Azure search service). Then, on the Azure portal page in the left-hand pane under "Settings", click on "Keys". Update the azsearch_key value below with one of the keys from your service on the Azure portal page.

Finally, you will need to create an Azure storage account and upload the COVID-19 data set. The data set can be downloaded from https://www.semanticscholar.org/cord19/download. There are two different sections to download: the metadata and document parses. Then, back on the Azure portal, you can create a new Azure storage account at https://portal.azure.com/create/Microsoft.StorageAccount. Use the same subscription, resource group, and location that you did for the Azure search service. Choose your own unique storage account name (it must be lowercase letters and numbers only). You can change the replication to LRS. You can use the defaults for everything else, and then create the storage. Once it has been deployed, update the blob_connection_string variable in the cell below. Then create a container in your blob storage called "covid19". Inside of that container, create a folder called "json" and upload the document parses data there. Then create a folder called "metadata" in the same blob container, and upload the metadata.csv file to that folder. If you modify those names, update their respective values below.
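If you prefer to script the storage upload instead of using the portal, a rough sketch with the `azure-storage-blob` package (v12-style API) is shown below. The connection string, local file paths and blob names are placeholders, so adapt them to where you downloaded the CORD-19 files.

```python
from azure.storage.blob import BlobServiceClient

# Placeholder connection string copied from the storage account's "Access keys" page.
service = BlobServiceClient.from_connection_string(
    "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net")
container = service.create_container("covid19")  # or get_container_client("covid19") if it already exists

# Upload one example document parse into the "json" folder and the metadata file
# into the "metadata" folder (blob name prefixes act as folders).
with open("document_parses/example.json", "rb") as f:
    container.upload_blob(name="json/example.json", data=f)
with open("metadata.csv", "rb") as f:
    container.upload_blob(name="metadata/metadata.csv", data=f)
```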
###Code
# Azure Search variables
azsearch_url = "<YourSearchServiceName>.search.windows.net" # If you copy this value from the portal, leave off the "https://" from the beginning
azsearch_key = "TODO"
# Data source which contains documents to process
blob_connection_string = "DefaultEndpointsProtocol=https;AccountName=TODO;AccountKey=TODO;EndpointSuffix=core.windows.net"
blob_container = "covid19"
data_folder = "json"
metadata_folder = "metadata"
# Prefix for elements of the Cognitive Search service
search_prefix = "covid19" # Note that if you change this value, you will also have to change the values in the indexer json.
print("The variables are initialized.")
###Output
_____no_output_____
###Markdown
We will first create a simple function to wrap REST requests to the Azure Search service. If called with no parameters, it will get the service statistics.
###Code
import json
def azsearch_rest(request_type="GET", endpoint="servicestats", body=None):
# Imports and constants
import http.client, urllib.request, urllib.parse, urllib.error, base64, json, urllib
# Request headers.
headers = {
'Content-Type': 'application/json',
'api-key': azsearch_key
}
# Request parameters
params = urllib.parse.urlencode({
'api-version':'2019-05-06-Preview'
})
try:
# Execute the REST API call and get the response.
conn = http.client.HTTPSConnection(azsearch_url)
request_path = "/{0}?{1}".format(endpoint, params)
conn.request(request_type, request_path, body, headers)
response = conn.getresponse()
print(response.status)
data = response.read().decode("UTF-8")
result = None
if len(data) > 0:
result = json.loads(data)
return result
except Exception as ex:
raise ex
# Test the function
try:
response = azsearch_rest()
if response != None:
print(json.dumps(response, sort_keys=True, indent=2))
except Exception as ex:
print(ex.message)
###Output
_____no_output_____
###Markdown
First, let's set up data sources for your search service. In this service, we have two data sources, one that pulls data from a json folder and one that pulls data from a metadata folder.
###Code
def create_datasource(datasource_name, blob_connection_string, blob_container, folder):
# Define the request body with details of the data source we want to create
body = {
"name": datasource_name,
"description": "",
"type": "azureblob",
"credentials":
{
"connectionString": blob_connection_string
},
"container": {
"name": blob_container,
"query": folder
}
}
try:
# Call the REST API's 'datasources' endpoint to create a data source
result = azsearch_rest(request_type="POST", endpoint="datasources", body=json.dumps(body))
if result != None:
print(json.dumps(result, sort_keys=True, indent=2))
except Exception as ex:
print(ex)
# Create two datasources
datasource_name = search_prefix + "-ds"
metadata_datasource_name = "metadata-ds"
create_datasource(datasource_name, blob_connection_string, blob_container, data_folder)
create_datasource(metadata_datasource_name, blob_connection_string, blob_container, metadata_folder)
###Output
_____no_output_____
###Markdown
Then let's set up your search index.
###Code
index_name = search_prefix + "-index"
# Define the request body
with open("index.json") as datafile:
index_json = json.load(datafile)
try:
result = azsearch_rest(request_type="PUT", endpoint="indexes/" + index_name, body=json.dumps(index_json))
if result != None:
print(json.dumps(result, sort_keys=True, indent=2))
except Exception as e:
print('Error:')
print(e)
###Output
_____no_output_____
###Markdown
Next, we will set up your skillset.
###Code
skillset_name = search_prefix + "-skillset"
# Define the request body
with open("skillset.json") as datafile:
skillset_json = json.load(datafile)
try:
result = azsearch_rest(request_type="PUT", endpoint="skillsets/" + skillset_name, body=json.dumps(skillset_json))
if result != None:
print(json.dumps(result, sort_keys=True, indent=2))
except Exception as e:
print('Error:')
print(e)
###Output
_____no_output_____
###Markdown
Now, we will set up your main indexer. This indexer will take the data from the json folder in your Azure blob container, run it through the skillset, and put the results in the search index.
###Code
def create_indexer(indexer_name, filename):
# Define the request body
with open(filename) as datafile:
indexer_json = json.load(datafile)
try:
result = azsearch_rest(request_type="PUT", endpoint="indexers/" + indexer_name, body=json.dumps(indexer_json))
if result != None:
print(json.dumps(result, sort_keys=True, indent=2))
except Exception as e:
print('Error:')
print(e)
# Create main indexer
indexer_name = search_prefix + "-indexer"
create_indexer(indexer_name, filename="data-indexer.json")
###Output
_____no_output_____
###Markdown
Finally, we will set up your metadata indexer. This indexer pulls the data from the metadata folder in your Azure blob container and adds it to the search index.
###Code
metadata_indexer_name = "metadata-indexer"
create_indexer(metadata_indexer_name, filename="metadata-indexer.json")
###Output
_____no_output_____
###Markdown
If this is your first time running an indexer, you won't need to reset it. But just in case you want to reuse this code and rerun your indexer with changes (perhaps pointing to your own dataset in Azure blob storage instead of ours), you will need to reset the indexer before making changes.
###Code
def reset_indexer(indexer_name):
# Reset the indexer.
result = azsearch_rest(request_type="POST", endpoint="/indexers/{0}/reset".format(indexer_name), body=None)
if result != None:
print(json.dumps(result, sort_keys=True, indent=2))
def run_indexer(indexer_name):
# Rerun the indexer.
result = azsearch_rest(request_type="POST", endpoint="/indexers/{0}/run".format(indexer_name), body=None)
if result != None:
print(json.dumps(result, sort_keys=True, indent=2))
# Reset and rerun main indexer.
reset_indexer(indexer_name)
run_indexer(indexer_name)
# Reset and rerun the metadata indexer.
reset_indexer(metadata_indexer_name)
run_indexer(metadata_indexer_name)
###Output
_____no_output_____
###Markdown
The indexer run can take a while, so let's check the status to see when it is ready. Below we are checking the main indexer, not the metadata indexer, but you can do both if you want.
###Code
import time, json
def check_indexer_status(indexer_name):
try:
complete = False
while (complete == False):
result = azsearch_rest(request_type="GET", endpoint="indexers/{0}/status".format(indexer_name))
state = result["status"]
if result['lastResult'] is not None:
state = result['lastResult']['status']
print (state)
if state in ("success", "error"):
complete = True
time.sleep(1)
except Exception as e:
print('Error:')
print(e)
# Check the main indexer
check_indexer_status(indexer_name)
###Output
_____no_output_____
###Markdown
Now that the indexers have run to build the index, we can query it. First, we will create a wrapper function for querying an Azure Search service.
###Code
def azsearch_query(index, params):
# Imports and constants
import http.client, urllib.request, urllib.parse, urllib.error, base64, json, urllib
# Request headers.
headers = {
'Content-Type': 'application/json',
'api-key': azsearch_key
}
try:
# Execute the REST API call and get the response.
conn = http.client.HTTPSConnection(azsearch_url)
request_path = "/indexes/{0}/docs?{1}".format(index, params)
conn.request("GET", request_path, None, headers)
response = conn.getresponse()
data = response.read().decode("UTF-8")
result = json.loads(data)
return result
except Exception as ex:
raise ex
print("Ready to use the REST API for Queries")
###Output
_____no_output_____
###Markdown
Finally, you can query your Azure search service. Try searching for "coronavirus".
###Code
import urllib.parse, json
search_terms = input("Search: ")
# Define the search parameters
searchParams = urllib.parse.urlencode({
'search':'"{0}"'.format(search_terms),
'searchMode':'All',
'queryType':'full',
'$count':'true',
'$select':'docID, title, abstractContent, body, pubDate, journalId, contributors, bodyStructure, conditionQualifier, diagnosis, direction, examinationName, examinationRelation, familyRelation, gender, gene, medicationClass, medicationName, routeOrMode, symptomOrSign, treatmentName, variant, url',
'api-version':'2019-05-06-Preview'
})
try:
result = azsearch_query(index=index_name, params=searchParams)
print('Hits:',result['@odata.count'])
print(json.dumps(result, indent=2))
except Exception as e:
print('Error:')
print(e)
###Output
_____no_output_____ |
ECM445_CW_1/coursework1_template.ipynb | ###Markdown
Coursework 1 - Decision Trees Learning

Enter your candidate number here: 700041488

Summary

In this coursework, your task is to develop a machine learning classifier for predicting female patients that are at high risk of Diabetes. Your model is to support clinicians in identifying patients who are likely to have “Diabetes”. The dataset has 9 attributes in total including the “target/label” attribute. The full dataset is available on ELE under assessment coursework 1. The dataset consists of the following:

Dataset
1. preg: Number of times pregnant
2. plas: Plasma glucose concentration at 2 hours in an oral glucose tolerance test
3. pres: Diastolic blood pressure (mm Hg)
4. skin: Triceps skin fold thickness (mm)
5. insu: 2-Hour serum insulin (mu U/ml)
6. mass: Body mass index (weight in kg/(height in m)^2)
7. pedi: Diabetes pedigree function
8. age: Age (years)
9. class: Class variable (0 or 1)
###Code
from matplotlib import pyplot as plt
from sklearn.utils import shuffle
import pandas as pd
import os
%matplotlib inline
pd.set_option('mode.chained_assignment', None)
dia_all = pd.read_csv("diabetes.txt") # This loads the full dataset # In the file, attributes are separated by ,
dia_all.head(5)
###Output
_____no_output_____
###Markdown
Separate the input (attributes) from the target (label)
###Code
dia_all = shuffle(dia_all)
dia_all['class'] = dia_all['class'].apply(lambda x: 1 if x == 'tested_positive' else 0)
sourcevars = dia_all.iloc[:,:-1].astype(float) #all rows + all columns except the last one
targetvar = dia_all.iloc[:,-1:] #all rows + only the last column
###Output
_____no_output_____
###Markdown
Your answers

Please clearly highlight each task.

Task 1 [Exploratory data analysis]

Task 1.a [Data Processing, Statistical Analysis, Cleaning and Correlation Matrix]

$Helper \thinspace Functions$
###Code
def calculate_stats(df, col_name):
'''
Returns array of mean and mode of given column
Arguments:
df -- pandas dataframe
col_name -- valid column name of dataframe
'''
try:
mean = df[col_name].mean()
mode = df[col_name].mode()
    except Exception as err:
        print('Column not found: %s' % col_name)
        raise
    mm_array = [mean, mode]
return mm_array
###Output
_____no_output_____
###Markdown
$Zero \thinspace Replacement$
###Code
df = pd.DataFrame()
for col in sourcevars.columns:
sourcevars[col] = sourcevars[col].mask(sourcevars[col] == 0,calculate_stats(sourcevars, col)[0])
###Output
_____no_output_____
###Markdown
$Data \thinspace Statistics$
###Code
dia_all.describe()
from IPython.display import Image
corr = sourcevars.corr()
fig = corr.style.background_gradient('coolwarm', axis=1).set_properties(**{'max-width': '180px', 'font-size': '10pt', 'padding': "1em 2em"}).set_caption("Correlation Matrix").set_precision(2)
Image(filename='fig.png')
sourcevars.corr()
###Output
_____no_output_____
###Markdown
Task 1.b [Understand data using grouping and Class Distribution]
###Code
ax = dia_all[dia_all['class']==1].plot.scatter(x='plas', y='age', marker='o', color='green', s=50, label='positive', figsize=(12,8))
dia_all[dia_all['class']==0].plot.scatter(x='plas', y='age', marker='*', color='red', s=60, label='negative', ax=ax)
plt.grid(True, linewidth=0.7, color='#ff0000', linestyle='-')
df2 = dia_all.groupby(['class']).agg(['sum'])
df2.plot(kind='barh', stacked=False, figsize=(10,7));
###Output
_____no_output_____
###Markdown
$Check \thinspace for \thinspace distribution \thinspace of \thinspace true \thinspace and \thinspace false \thinspace cases$
###Code
num_obs = len(dia_all)
num_true = len(targetvar.loc[targetvar['class'] == 1])
num_false = len(targetvar.loc[targetvar['class'] == 0])
print("Number of True cases: {0} ({1:2.2f}%)".format(num_true, ((1.00 * num_true)/(1.0 * num_obs)) * 100))
print('_________________________________________________')
print("Number of False cases: {0} ({1:2.2f}%)".format(num_false, (( 1.0 * num_false)/(1.0 * num_obs)) * 100))
###Output
Number of True cases: 268 (34.90%)
_________________________________________________
Number of False cases: 500 (65.10%)
###Markdown
Task 2.a [ Classification] 2.a.1 Decision Tree (DT) classifier
###Code
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV
from sklearn import metrics
import numpy as np
###Output
_____no_output_____
###Markdown
General normalization function
###Code
def standardize(X):
""" Standardize the dataset X """
X_std = X
mean = X.mean(axis=0)
std = X.std(axis=0)
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
return X_std
def split_data(split_test_size = 0.30):
X = sourcevars
y = targetvar
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=split_test_size, random_state=0)
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
return X_train, X_test, y_train, y_test
###Output
_____no_output_____
###Markdown
Find optimum parameters using Grid Search technique
###Code
# Parameter evaluation
treeclf = DecisionTreeClassifier(random_state=42)
parameters = {'max_depth': [6, 7, 8, 9],
'min_samples_split': [2, 3, 4, 5],
'max_features': [1, 2, 3, 4]
}
gridsearch=GridSearchCV(treeclf, parameters, cv=100, scoring='roc_auc')
gridsearch.fit(sourcevars,targetvar)
print(gridsearch.best_params_)
print(gridsearch.best_score_)
X_train, X_test, y_train, y_test = split_data()
tree = DecisionTreeClassifier(max_depth=6,max_features = 4, min_samples_split = 3, random_state = 0)
tree.fit(X_train,y_train)
print("Accuracy on training set: {:.3f}".format(tree.score(X_train,y_train)))
print("Accuracy on test set: {:.3f}".format(tree.score(X_test,y_test)))
prediction_from_test_data = tree.predict(X_test)
accuracy = metrics.accuracy_score(y_test, prediction_from_test_data)
print ("Accuracy of Decision Tree is: {0:0.4f}".format(accuracy))
print ("Confusion Matrix")
print ("{0}".format(metrics.confusion_matrix(y_test, prediction_from_test_data, labels=[1, 0])))
print ("Classification Report")
print('_________________________________________________________')
print ("{0}".format(metrics.classification_report(y_test, prediction_from_test_data, labels=[1, 0])))
# Making the Confusion Matrix
from sklearn.metrics import classification_report, confusion_matrix
y_pred = tree.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print('TN - True Negative {}'.format(cm[0,0]))
print('FP - False Positive {}'.format(cm[0,1]))
print('FN - False Negative {}'.format(cm[1,0]))
print('TP - True Positive {}'.format(cm[1,1]))
print('_________________________________________________')
print('Accuracy Rate: {}'.format(np.divide(np.sum([cm[0,0],cm[1,1]]),np.sum(cm))))
print('Misclassification Rate: {}'.format(np.divide(np.sum([cm[0,1],cm[1,0]]),np.sum(cm))))
print('ROC AUC score: {}'.format(round(metrics.roc_auc_score(y_test, y_pred), 5)))
print('_________________________________________________')
print ("Confusion Matrix")
print(cm)
###Output
TP - True Negative 125
FP - False Positive 29
FN - False Negative 25
TP - True Positive 52
_________________________________________________
Accuracy Rate: 0.7662337662337663
Misclassification Rate: 0.23376623376623376
_________________________________________________
Confusion Matrix
[[125 29]
[ 25 52]]
###Markdown
2.a.2 - Repeat(2.a.1) the experiment 10 times (General function for multiple iterations)
###Code
MSE = []
ACCURACY = []
TN, FP, FN, TP = [],[],[],[]
PRECISION = []
for i in range(10):
random_train_test_split = round(np.random.uniform(0,1),2)
X_train, X_test, y_train, y_test = split_data(random_train_test_split)
model = DecisionTreeClassifier(max_depth=7,max_features = 3, min_samples_split = 5, random_state = 0)
model.fit(X_train,y_train)
prediction_from_test_data = model.predict(X_test)
accuracy = metrics.accuracy_score(y_test, prediction_from_test_data)
MSE.append(metrics.mean_squared_error(y_test, prediction_from_test_data))
cm = confusion_matrix(y_test, prediction_from_test_data)
precision = metrics.precision_score(prediction_from_test_data, y_test, average='micro')
TN.append(cm[0,0]); FP.append(cm[0,1]); FN.append(cm[1,0]); TP.append(cm[1,1])
PRECISION.append(precision); ACCURACY.append(accuracy)
print('Running cyle: %s'%str(i))
print('__________________________________________________________________')
print('Random Split {}'.format(random_train_test_split))
print('TN - True Negative {}'.format(cm[0,0]))
print('FP - False Positive {}'.format(cm[0,1]))
print('FN - False Negative {}'.format(cm[1,0]))
print('TP - True Positive {}'.format(cm[1,1]))
print('Precision of Decision Tree {0:0.4f}'.format(precision))
print ("Accuracy of Decision Tree {0:0.4f}".format(accuracy))
print("Test set MSE for {} cycle:{}".format(i+1,MSE[i]))
print('__________________________________________________________________')
print("Mean MSE for {}-random split cross validation : {}".format(len(MSE), np.mean(MSE)))
print("Mean Accuracy for {}-random split cross validation : {}".format(len(ACCURACY), np.mean(ACCURACY)))
print("Mean Precision for {}-random split cross validation : {}".format(len(PRECISION), np.mean(PRECISION)))
print("Mean True Negative for {}-random split cross validation : {}".format(len(TN), np.mean(TN)))
print("Mean False Positive for {}-random split cross validation : {}".format(len(FP), np.mean(FP)))
print("Mean False Negative for {}-random split cross validation : {}".format(len(FN), np.mean(FN)))
print("Mean True Positive for {}-random split cross validation : {}".format(len(TP), np.mean(TP)))
###Output
Running cyle: 0
__________________________________________________________________
Random Split 0.99
TN - True Negative 404
FP - False Positive 90
FN - False Negative 112
TP - True Positive 155
Precision of Decision Tree 0.7346
Accuracy of Decision Tree 0.7346
Test set MSE for 1 cycle:0.26544021024967146
__________________________________________________________________
Running cyle: 1
__________________________________________________________________
Random Split 0.16
TN - True Negative 61
FP - False Positive 21
FN - False Negative 10
TP - True Positive 31
Precision of Decision Tree 0.7480
Accuracy of Decision Tree 0.7480
Test set MSE for 2 cycle:0.25203252032520324
__________________________________________________________________
Running cyle: 2
__________________________________________________________________
Random Split 0.31
TN - True Negative 128
FP - False Positive 32
FN - False Negative 37
TP - True Positive 42
Precision of Decision Tree 0.7113
Accuracy of Decision Tree 0.7113
Test set MSE for 3 cycle:0.28870292887029286
__________________________________________________________________
Running cyle: 3
__________________________________________________________________
Random Split 0.42
TN - True Negative 161
FP - False Positive 59
FN - False Negative 37
TP - True Positive 66
Precision of Decision Tree 0.7028
Accuracy of Decision Tree 0.7028
Test set MSE for 4 cycle:0.29721362229102166
__________________________________________________________________
Running cyle: 4
__________________________________________________________________
Random Split 0.36
TN - True Negative 145
FP - False Positive 44
FN - False Negative 36
TP - True Positive 52
Precision of Decision Tree 0.7112
Accuracy of Decision Tree 0.7112
Test set MSE for 5 cycle:0.2888086642599278
__________________________________________________________________
Running cyle: 5
__________________________________________________________________
Random Split 0.51
TN - True Negative 202
FP - False Positive 64
FN - False Negative 49
TP - True Positive 77
Precision of Decision Tree 0.7117
Accuracy of Decision Tree 0.7117
Test set MSE for 6 cycle:0.288265306122449
__________________________________________________________________
Running cyle: 6
__________________________________________________________________
Random Split 0.7
TN - True Negative 276
FP - False Positive 79
FN - False Negative 82
TP - True Positive 101
Precision of Decision Tree 0.7007
Accuracy of Decision Tree 0.7007
Test set MSE for 7 cycle:0.2992565055762082
__________________________________________________________________
Running cyle: 7
__________________________________________________________________
Random Split 0.58
TN - True Negative 214
FP - False Positive 85
FN - False Negative 54
TP - True Positive 93
Precision of Decision Tree 0.6883
Accuracy of Decision Tree 0.6883
Test set MSE for 8 cycle:0.3116591928251121
__________________________________________________________________
Running cyle: 8
__________________________________________________________________
Random Split 0.95
TN - True Negative 311
FP - False Positive 162
FN - False Negative 77
TP - True Positive 180
Precision of Decision Tree 0.6726
Accuracy of Decision Tree 0.6726
Test set MSE for 9 cycle:0.3273972602739726
__________________________________________________________________
Running cyle: 9
__________________________________________________________________
Random Split 0.36
TN - True Negative 145
FP - False Positive 44
FN - False Negative 36
TP - True Positive 52
Precision of Decision Tree 0.7112
Accuracy of Decision Tree 0.7112
Test set MSE for 10 cycle:0.2888086642599278
__________________________________________________________________
Mean MSE for 10-random split cross validation : 0.2907584875053787
Mean Accuracy for 10-random split cross validation : 0.7092415124946214
Mean Precision for 10-random split cross validation : 0.7092415124946214
Mean True Negative for 10-random split cross validation : 204.7
Mean False Positive for 10-random split cross validation : 68.0
Mean False Negative for 10-random split cross validation : 53.0
Mean True Positive for 10-random split cross validation : 84.9
###Markdown
2.b.1 Performance comparison between Gini impurity (“gini”) and information gain (“entropy”)
###Code
def compare_performance(criterion='gini', max_depth = 7, min_samples_split = 5):
tree = DecisionTreeClassifier(max_depth=max_depth, max_features = 3, min_samples_split = min_samples_split, random_state=0, criterion=criterion)
tree.fit(X_train,y_train)
return [tree.score(X_train,y_train), tree.score(X_test,y_test)]
print('Performance Check on: gini')
train_gini, test_gini = compare_performance(criterion='gini')
print('________________________________________________')
print("Accuracy on training set: {:.3f}".format(train_gini))
print("Accuracy on test set: {:.3f}".format(test_gini))
print('________________________________________________')
print('Performance Check on: entropy')
train_entropy, test_entropy = compare_performance(criterion='entropy')
print('________________________________________________')
print("Accuracy on training set: {:.3f}".format(train_entropy))
print("Accuracy on test set: {:.3f}".format(test_entropy))
print('________________________________________________')
###Output
Performance Check on: gini
________________________________________________
Accuracy on training set: 0.868
Accuracy on test set: 0.711
________________________________________________
Performance Check on: entropy
________________________________________________
Accuracy on training set: 0.843
Accuracy on test set: 0.704
________________________________________________
###Markdown
2.b.2 Performance comparison between Gini impurity ("gini") and information gain ("entropy") on random train-test splits over 10 iterations
###Code
def repeat_experiment(criterion = 'gini'):
ACCURACY = []
for i in range(10):
random_train_test_split = round(np.random.uniform(0,1),2)
X_train, X_test, y_train, y_test = split_data(random_train_test_split)
        model = DecisionTreeClassifier(max_depth=7, max_features=3, min_samples_split=5, random_state=0, criterion=criterion)
model.fit(X_train,y_train)
prediction_from_test_data = model.predict(X_test)
ACCURACY.append(metrics.accuracy_score(y_test, prediction_from_test_data))
return ACCURACY
accuracy_gini = repeat_experiment(criterion='gini')
print("Mean Accuracy gini for {}-random train test cross validation : {}".format(len(accuracy_gini), np.mean(accuracy_gini)))
accuracy_entropy = repeat_experiment(criterion='entropy')
print("Mean Accuracy entropy for {}-random train test cross validation : {}".format(len(accuracy_entropy), np.mean(accuracy_entropy)))
###Output
Mean Accuracy entropy for 10-random train test cross validation : 0.6967989583165975
###Markdown
2.c Performance comparison between "gini" and "entropy" using a chart
###Code
cycles = range(10)
plt.plot(cycles, accuracy_gini, label='gini')
plt.plot(cycles, accuracy_entropy, label='entropy')
plt.title('gini vs entropy comparsion')
plt.xlabel('Input cycles')
plt.ylabel('Output accuracy')
plt.grid(True, linewidth=0.7, color='#ff0000', linestyle='-')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
2.d why standardizing helps in improving performance

*Standardizing a dataset in machine learning helps with making the data comparable across tasks and algorithms. There are many data preprocessing steps that could be applied to a dataset, such as data normalization, feature selection, data transformations, and so on. In the given dataset there were some zero values, which are effectively outliers in the data, and hence replacing the zero values (with column means) before applying the DT algorithm definitely improved the performance. Also, when I tried to standardize the dataset using the formula*

__standardized_data__ $= \frac{data - \mu}{\sigma}$

there was no change in the performance of the model. The reason for that is that the data is highly correlated and already standardized.

Task 3 [Classification parameters DT]

Task 3.a min_samples_split effect on performance of algorithm
###Code
min_samples_split = [2, 5, 10, 15]
acc_comparsion_train = []
acc_comparsion_test = []
for sample in min_samples_split:
acc_comparsion_train.append(compare_performance(min_samples_split = sample)[0])
acc_comparsion_test.append(compare_performance(min_samples_split = sample)[1])
plt.plot(min_samples_split, acc_comparsion_train, label='train', color='r')
plt.plot(min_samples_split, acc_comparsion_test, label='test', color='b')
plt.title('Performance on different sample splits')
plt.grid(True, linewidth=0.7, color='#ff0000', linestyle='-')
plt.xlabel('min_samples_split')
plt.ylabel('Output accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Task 3.b max_depth effect on performance of algorithm
###Code
max_depth = [3, 4, 5, 6]
acc_comparsion_train = []
acc_comparsion_test = []
for sample in max_depth:
    acc_comparsion_train.append(compare_performance(max_depth = sample)[0])
    acc_comparsion_test.append(compare_performance(max_depth = sample)[1])
plt.plot(max_depth, acc_comparsion_train, label='train', color='r')
plt.plot(max_depth, acc_comparsion_test, label='test', color='b')
plt.title('Performance on different max depth values')
plt.grid(True, linewidth=0.7, color='#ff0000', linestyle='-')
plt.xlabel('max_depth')
plt.ylabel('Output accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Task 4 [Decision Tree Boundaries] - Implemented this part to understand the decision tree better
###Code
#Feature Importance DecisionTreeClassifier
importance = tree.feature_importances_
indices = np.argsort(importance)[::-1]
feature = X_train
feat_names = sourcevars.columns
print("DecisionTree Feature ranking:")
print('_________________________________________________')
for f in range(feature.shape[1]):
print("%d. feature %s (%f)" % (f + 1, feat_names[indices[f]], importance[indices[f]]))
print('_________________________________________________')
plt.figure(figsize=(15,5))
plt.title("DecisionTree Feature importances")
plt.bar(range(feature.shape[1]), importance[indices], color='#00008B', align="center")
plt.xticks(range(feature.shape[1]), list(feat_names[indices]))
plt.xlim([-1, feature.shape[1]])
plt.grid(True, linewidth=0.7, color='#ff0000', linestyle='-')
plt.show()
from sklearn.tree import export_graphviz
import graphviz
importance = tree.feature_importances_
indices = np.argsort(importance)[::-1]
export_graphviz(tree,out_file="diabetes_tree.dot",class_names=["0","1"],
feature_names=sourcevars.columns,impurity=False,filled=True)
with open("diabetes_tree.dot") as f:
dot_graph = f.read()
graphviz.Source(dot_graph)
#Evaluation DecisionTreeClassifier
from sklearn.metrics import roc_curve, auc
import random
y_pred = model.predict(X_test)
fpr,tpr,thres = roc_curve(y_test, y_pred)
roc_auc = auc(fpr, tpr)
plt.title('DecisionTreeClassifier-Receiver Operating Characteristic Test Data')
plt.plot(fpr, tpr, color='green', lw=2, label='DecisionTree ROC curve (area = %0.2f)' % roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.grid(True, linewidth=0.7, color='black', linestyle='-')
plt.show()
###Output
_____no_output_____ |
predictive-analytics-and-maintenance/predictive-analysis-pipeline.ipynb | ###Markdown
Methodology of Predictive Outlier Analysis of Server Data

__This is a two-step pipeline designed to first forecast the behavior of each variable independently of the others. This independence amongst variables is a reasonable assumption and an appropriate analytics decision when making predictions for future data, i.e. forecasting the target variables. The forecasted variables are then fed, in combined form, to our unsupervised outlier detection ensemble algorithm.__

__Below, we state the exact steps followed for the modelling and prediction process:__
* Loading the dataset onto dataframes. Here, we have decided to work with `multi-var-four-two.csv`, which describes the scenario or state features for a given single AWS EC2 VM.
* Second, we separate these variables and feed them individually for prediction into `fbprophet` addition-based models. We keep our forecasting range at `4.5 days`, as knowing about a defect almost one working week in advance is more than enough time to take suitable remedial actions.
* After training the model and making a forecast, we evaluate the accuracy of the model.
* After making the forecasts, we use the same data for training our `pyod` outlier models, keeping the previous training split at row `3024`.
* After training the model and making outlier predictions, we quantitatively analyze the output results by comparing outlier prediction values on actual and forecasted data in this notebook.

Standardizing the model data, visualizing it and preparing train-test data splits.
###Code
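# Imports assumed by the cells below (the original import cell is not included here).
# The pyod and fbprophet module paths follow those libraries' usual layouts and may
# need adjusting for the installed versions.
import pandas as pd
import matplotlib.pyplot as plt
import fbprophet
from sklearn.metrics import mean_absolute_error
from pyod.models.lof import LOF
from pyod.models.cblof import CBLOF
from pyod.models.loda import LODA
from pyod.models.lscp import LSCP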
vm_df = pd.read_csv('nab-sample-data-subset/multi-var-four-two.csv')
vm_df.head()
vm_df.describe()
# Now, standardizing the numerical values and doing a value analysis again.
vm_df.loc[:, 'net_util_ec2_one':] = (vm_df.loc[:,'net_util_ec2_one':]-vm_df.loc[:,'net_util_ec2_one':].min())/(vm_df.loc[:,'net_util_ec2_one':].max()-vm_df.loc[:,'net_util_ec2_one':].min())
vm_df.head()
vm_df.describe()
# Converting to datatype to timestamp for analysis.
vm_plot = pd.read_csv('nab-sample-data-subset/multi-var-four-two.csv')
vm_plot.loc[:, 'net_util_ec2_one':] = (vm_plot.loc[:,'net_util_ec2_one':]-vm_plot.loc[:,'net_util_ec2_one':].min())/(vm_plot.loc[:,'net_util_ec2_one':].max()-vm_plot.loc[:,'net_util_ec2_one':].min())
vm_plot['timestamp']= pd.to_datetime(vm_plot['timestamp'])
vm_plot.set_index('timestamp', inplace=True)
ax = vm_plot.plot(colormap='Dark2', figsize=(20, 10))
ax.set_xlabel('Monitoring Date Window')
ax.set_ylabel('Feature Variation Values')
plt.show()
# We clearly have very interesting forecasting functions available to us for prediction.
# changing the 'timestamp' field to datetime type
vm_df['timestamp']= pd.to_datetime(vm_df['timestamp'])
# creating a train-test split for approximately 8.5 days forecast window.
train_vm_df = vm_df[:3024]
test_vm_df = vm_df[3024:]
net_train = train_vm_df[['timestamp','net_util_ec2_one']]
cpu_train = train_vm_df[['timestamp','cpu_util_ec2_one']]
count_train = train_vm_df[['timestamp','req_count_ec2_one']]
rds_train = train_vm_df[['timestamp','rds_util_ec2_one']]
net_train.columns = ['ds', 'y']
cpu_train.columns = ['ds', 'y']
count_train.columns = ['ds', 'y']
rds_train.columns = ['ds', 'y']
net_test = test_vm_df[['timestamp','net_util_ec2_one']]
cpu_test = test_vm_df[['timestamp','cpu_util_ec2_one']]
count_test = test_vm_df[['timestamp','req_count_ec2_one']]
rds_test = test_vm_df[['timestamp','rds_util_ec2_one']]
net_test.columns = ['ds', 'y']
cpu_test.columns = ['ds', 'y']
count_test.columns = ['ds', 'y']
rds_test.columns = ['ds', 'y']
###Output
_____no_output_____
###Markdown
Model forecasts for each variable under consideration, made independently, for the VM under analysis.
###Code
# declaring different models for variable forecasting
model_net_util = fbprophet.Prophet()
model_net_util.fit(net_train)
net_fut = pd.DataFrame(net_test['ds'])
net_forecast = model_net_util.predict(net_fut)
fig1 = model_net_util.plot(net_forecast)
fig1.show()
# cpu utilization forecasting.
# declaring different models for variable forecasting
model_cpu_util = fbprophet.Prophet()
model_cpu_util.fit(cpu_train)
# Making forecasts for CPU utilization
cpu_fut = pd.DataFrame(cpu_test['ds'])
cpu_forecast = model_cpu_util.predict(cpu_fut)
fig1 = model_net_util.plot(cpu_forecast)
fig1.show()
# count request forecasting.
# declaring different models for variable forecasting
model_count_util = fbprophet.Prophet()
model_count_util.fit(count_train)
# Making forecasts for count requests
count_fut = pd.DataFrame(count_test['ds'])
count_forecast = model_count_util.predict(count_fut)
fig1 = model_net_util.plot(count_forecast)
fig1.show()
# rds utilization forecasting.
# declaring different models for variable forecasting
model_rds_util = fbprophet.Prophet()
model_rds_util.fit(rds_train)
# Making forecasts for rds utilization
rds_fut = pd.DataFrame(rds_test['ds'])
rds_forecast = model_rds_util.predict(rds_fut)
fig1 = model_net_util.plot(rds_forecast)
fig1.show()
###Output
INFO:fbprophet:Disabling yearly seasonality. Run prophet with yearly_seasonality=True to override this.
INFO:fbprophet:Disabling weekly seasonality. Run prophet with weekly_seasonality=True to override this.
/home/ashishrana160796/.local/lib/python3.6/site-packages/matplotlib/cbook/__init__.py:1377: FutureWarning:
Support for multi-dimensional indexing (e.g. `obj[:, None]`) is deprecated and will be removed in a future version. Convert to a numpy array before indexing instead.
/home/ashishrana160796/.local/lib/python3.6/site-packages/matplotlib/axes/_base.py:239: FutureWarning:
Support for multi-dimensional indexing (e.g. `obj[:, None]`) is deprecated and will be removed in a future version. Convert to a numpy array before indexing instead.
/home/ashishrana160796/.local/lib/python3.6/site-packages/matplotlib/figure.py:445: UserWarning:
Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.
###Markdown
Metric-based evaluation of the models for the given features, and visualization of the results.
###Code
# MAE evaluation for all the forecasts that are made.
print("Network Utilization MAE: "+ str(mean_absolute_error(net_test['y'], net_forecast['yhat'])) )
print("CPU Utilization MAE: "+ str(mean_absolute_error(cpu_test['y'], cpu_forecast['yhat'])) )
print("Request Count MAE: "+ str(mean_absolute_error(count_test['y'], count_forecast['yhat'])) )
print("RDS Utilization MAE: "+ str(mean_absolute_error(rds_test['y'], rds_forecast['yhat'])) )
# Really accurate values of forecasting.
# Plotting the actul vs forecast data.
plt.rcParams["figure.figsize"] = (25,4)
plt.plot(net_test['y'], label='Actual Network Utilization', color='green')
plt.plot(net_forecast['yhat'], label='Forecasted Network Utilization', color='green')
plt.plot(cpu_test['y'], label='Actual CPU Utilization', color = 'purple')
plt.plot(cpu_forecast['yhat'], label='Forecasted CPU Utilization', color = 'purple' )
plt.plot(count_test['y'], label='Actual Request Counts', color = 'orange')
plt.plot(count_forecast['yhat'], label='Forecasted Request Counts', color = 'orange' )
plt.plot(rds_test['y'], label='Actual RDS Utilization', color = 'grey')
plt.plot(rds_forecast['yhat'], label='Forecasted RDS Utilization', color = 'grey' )
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Outlier analysis on forecasted data & sanity check.

__If the outlier predictions made on forecasted data show different results in comparison to outlier predictions made on actual data, then making predictions further in time is not a feasible option.__
###Code
# dataframe already available to us for model training
# train_vm_df, test_vm_df.
# create a forcast time frame for analysis.
forcasted_data = [net_forecast['yhat'], cpu_forecast['yhat'], count_forecast['yhat'], rds_forecast['yhat']]
headers = ['net_util_ec2_one', 'cpu_util_ec2_one' , 'req_count_ec2_one', 'rds_util_ec2_one']
forcast_df = pd.concat(forcasted_data, axis=1, keys=headers)
forcast_df.head()
detector_list = [ LOF(n_neighbors=16), LOF(n_neighbors=24),
CBLOF(random_state=42), LODA() ]
clf_org = LSCP(detector_list, random_state=42)
clf_org.fit(train_vm_df.loc[:,"net_util_ec2_one":])
pred_org = clf_org.predict_proba(test_vm_df.loc[:,"net_util_ec2_one":])
pred_for = clf_org.predict_proba(forcast_df)
# Sanity check on prediction variation metric
for_pred = pred_for[:,0].tolist()
org_pred = pred_org[:,0].tolist()
# Plotting the actul vs forecast data.
pred_plot_df = pd.DataFrame(list(zip(test_vm_df['timestamp'], org_pred, for_pred)),
columns =['timestamp', 'original_predictions', 'forcasted_predictions'])
pred_plot_df['timestamp']= pd.to_datetime(pred_plot_df['timestamp'])
pred_plot_df.set_index('timestamp', inplace=True)
ax = pred_plot_df.plot(colormap='Dark2', figsize=(10, 4))
ax.set_xlabel('Monitoring Date Window')
ax.set_ylabel('Feature Variation Values')
plt.show()
# Model performance sanity check establish.
print("Sanity Check Evaluation half day: "+ str(mean_absolute_error( for_pred[:143], org_pred[:143] )) )
print("Sanity Check Evaluation one day: "+ str(mean_absolute_error( for_pred[:287], org_pred[:287] )) )
print("Sanity Check Evaluation three days: "+ str(mean_absolute_error( for_pred[:861], org_pred[:861] )) )
print("Sanity Check Evaluation four and half days: "+ str(mean_absolute_error( for_pred, org_pred )) )
###Output
Sanity Check Evaluation half day: 0.03270968074605652
Sanity Check Evaluation one day: 0.07351063800487506
Sanity Check Evaluation three days: 0.17144583917568554
Sanity Check Evaluation four and half days: 0.19182688468024553
|
0.11.0/BOSSVS.ipynb | ###Markdown
BOSSVS: Bag of SFA Symbols in Vector Space
* Website: https://www2.informatik.hu-berlin.de/~schaefpa/bossVS/
* Paper: https://www2.informatik.hu-berlin.de/~schaefpa/bossvs.pdf

**Note: an Internet connection is required to download the datasets used in this benchmark.**
###Code
import numpy as np
import pyts
from pyts.classification import BOSSVS
from pyts.datasets import fetch_ucr_dataset
from sklearn.ensemble import VotingClassifier
print("pyts: {0}".format(pyts.__version__))
dataset_params = {
'Adiac': {'word_size': 12,
'window_size': 80,
'norm_mean': True,
'drop_sum': True},
'ECG200': {'word_size': 5,
'window_size': 40,
'norm_mean': False,
'drop_sum': False},
'GunPoint': {'word_size': 14,
'window_size': 40,
'norm_mean': True,
'drop_sum': True},
'MiddlePhalanxTW': {'word_size': 10,
'window_size': 25,
'norm_mean': False,
'drop_sum': False},
'Plane': {'word_size': 6,
'window_size': 10,
'norm_mean': False,
'drop_sum': False},
'SyntheticControl': {'word_size': np.full(20, 6),
'window_size': np.arange(18, 37),
'norm_mean': np.full(20, False),
'drop_sum': np.full(20, False)}
}
for dataset, params in dataset_params.items():
print(dataset)
print('-' * len(dataset))
X_train, X_test, y_train, y_test = fetch_ucr_dataset(dataset, return_X_y=True)
# Truncate the input data containing padding values
if dataset == 'MiddlePhalanxTW':
X_train, X_test = X_train[:, :-29], X_test[:, :-29]
if isinstance(params['window_size'], np.ndarray):
dicts = [{key: value[i] for key, value in params.items()}
for i in range(len(params['window_size']))]
bossvses = [BOSSVS(**param) for param in dicts]
clf = VotingClassifier([('bossvs_' + str(i), bossvs)
for i, bossvs in enumerate(bossvses)])
else:
clf = BOSSVS(**params)
accuracy = clf.fit(X_train, y_train).score(X_test, y_test)
print('Accuracy on the test set: {0:.3f}'.format(accuracy))
print()
###Output
Adiac
-----
Accuracy on the test set: 0.703
ECG200
------
Accuracy on the test set: 0.860
GunPoint
--------
Accuracy on the test set: 1.000
MiddlePhalanxTW
---------------
Accuracy on the test set: 0.545
Plane
-----
Accuracy on the test set: 1.000
SyntheticControl
----------------
Accuracy on the test set: 0.980
|
Lab0_Intro2Python.ipynb | ###Markdown
Lab 0: Introduction To Python 🐍

Outline
1. [Why Python](Why-Python?)
2. [About This Lab](About-This-Lab)
   1. [A Jupyter Notebook](This-is-a-Jupyter-Notebook)
3. [Working in Python](Working-in-Python)
   1. [Expressions](Expressions)
   2. [Variables](Variables)
   3. [Strings](Strings)
   4. [Lists](Lists)
   5. [Control Flow and Loops](Control-Flow-and-Loops)
   6. [Functions](Functions)
   7. [Other Python Topics](Other-Python-Topics)
4. [NumPy](NumPy)
   1. [Creating Arrays](Creating-Arrays)
   2. [Thinking of Arrays as Vectors](Thinking-of-Arrays-as-Vectors)
   3. [Arrays as Matrices](Arrays-as-Matrices)
   4. [Arrays have Axes](Arrays-have-Axes)

Why Python?
Python is one of the hottest languages in industry today, especially in machine learning and data science. According to Stack Overflow's [2018 Developer Survey Results](https://insights.stackoverflow.com/survey/2018/most-loved-dreaded-and-wanted), Python is the third "most loved" and *the* "most wanted" language as chosen by industry professionals. It is widely appreciated for its clean, readable, and generally no-nonsense code that enables singleminded focus on the task at hand. It was designed to be both simple and friendly, yet still be powerful and expressive. In this course, we will be using Python 3.6.

About This Lab
In this lab we assume that you have taken an introductory course in Computer Science, or are familiar with programming in another programming language (eg. C++ or Java). If you are already familiar with Python, then you may skip to the [discussion of NumPy](NumPy). Here we will briefly demonstrate how to write Python code and introduce some of the tools we frequently use to analyze data.

This is a Jupyter Notebook
As you may already know, this lab is formatted as a Jupyter Notebook, which provides a Python environment that you can interact with. This means that you can run all of the code examples you find below, and even create your own code to try things out — in fact, you are highly encouraged to do so. Each chunk of code or text in this notebook is written in a **block**. There are three types of blocks in Jupyter Notebooks: code, markdown, and raw.

Code Blocks
**Code blocks** are exactly what their name implies: you write code in them. They are blocks that can be executed in the interactive Python environment we mentioned earlier. You can **run** code blocks by clicking into them so that you see a blue bar to the left of the block, and then either pressing the small ▶ button in the top bar, or pressing `Shift + Enter`.

**Try this!** Here is an example of a code block, run it!
###Code
print("I'm a code block!")
###Output
_____no_output_____
###Markdown
You can also add new blocks by pressing the + button in the top bar.

Markdown Blocks
All of the text you see in this notebook was written with the second type of block we mentioned: **markdown blocks**. These blocks allow you to write formatted text with a simple markdown language called, well, [Markdown](https://en.wikipedia.org/wiki/Markdown). You can find a very simple demonstration of markdown [here](https://markdown-here.com/livedemo.html). You can also double-click any of the text blocks in this document, **like this one**, to see the markdown "source" code that produced it. Just **run** it to render it again.

Raw Blocks
The last type of block is a **raw block**, which allows you to write preformatted or "raw" text. The contents of these blocks are not rendered as happens with the markdown blocks. These blocks are not very common in a notebook, but there are times where you will find them useful.
###Code
This is a raw text block!
Everything is
formatted exactly as written.
###Output
_____no_output_____
###Markdown
Working in Python
As mentioned above, Python is renowned for its simplicity. And, while this introduction may be long and intense, we strongly believe that as you continue to write Python code, you will find that the language just "fades into the background," allowing you to focus on the data science that you are here to learn. As you read through the rest of the lab, remember that all of the code blocks are interactive and that you should **run** them to see what happens. Furthermore, you are _encouraged_ to make your own blocks (with the + button) and to try experimenting with things yourself.

Expressions
As with any programming language, complex ideas are built up from small, *primitive* expressions. More generally, anything that can be *evaluated* to a value is an expression. For example, a number (1, 2, etc.) in code is an expression because that code can be evaluated to the value of that number.

**Try this!** Execute the cells in this section and observe their output.
###Code
217
###Output
_____no_output_____
###Markdown
By combining simple expressions with operators, we can even express complex, almost magical, ideas like those used to find patterns in data. For example,
###Code
217 * 9 + 8 * 7 + 10
###Output
_____no_output_____
###Markdown
Some other types of expressions are `strings` and `booleans`.
###Code
"I'm an expression too!"
###Output
_____no_output_____
###Markdown
And of course:
###Code
True or False is not False
###Output
_____no_output_____
###Markdown
VariablesYou can create variables by assigning a value to a name (or, if you prefer, by giving a name a value). Values can be expressed as expressions, which are evaluated prior to being assigned to a name.
###Code
greeting = 'Hello, DS'
greeting
###Output
_____no_output_____
###Markdown
If you are in a hurry, you can also assign names to multiple values simultaneously. But a caveat for this is that assignment is done after evaluation so you cannot use `one` and `two` in the expression for `three`.
###Code
one, two, three = 1, 2, 3
# this won't work!
# four, five, nine = 4, 5, four + five
###Output
_____no_output_____
###Markdown
**Try this!** Uncomment the last line in the previous code block and run the cell to verify that it doesn't work. StringsWorking with data often means working with strings. Recall that strings are what we call words and sentences in programming languages because they are essentially a group of characters, like `a` or `b`, that have been *strung* together like `hello there!`. Being familiar with how to manipulate strings is not only important, but very useful. Many professionals love Python because of how easy it is to work with strings, especially in areas like [Natural Language Processing](https://en.wikipedia.org/wiki/Natural_language_processing). Like in other languages, you can denote a string using the `"` symbol. For example, `"this is a string"`. Alternatively in Python you may also use a "single quote", `'`, to denote a string (eg. `'this is also a string'`). **Try this!** In the following block, try assigning a string containing your name to a variable called `my_name`.
###Code
# replace None with a string containing your name
my_name = None
my_name
###Output
_____no_output_____
###Markdown
Indexing and SlicingYou can access specific characters in a string using square bracket notation, `[]`.
###Code
my_name[0]
###Output
_____no_output_____
###Markdown
Remember that, like Java, strings are indexed from `0`. You can also slice strings, or get a part of a string using **slice** indexing. Slicing works by specifying the range of indices that you are interested in retrieving. The way to describe this is the same as [interval notation](http://www.mathwords.com/i/interval_notation.htm) if you remember from math: $[\text{begin} : \text{end})$.The selected string will include the character at the first index given, but will not include the character of the second index.
###Code
my_name[0:2]
###Output
_____no_output_____
###Markdown
Often, you will be interested in either the first few or last few characters in a string, in which case you can leave out the corresponding index. In the following example, I only want the characters from index `1` to the end so I can leave off the ending index.
###Code
my_name[1:]
###Output
_____no_output_____
###Markdown
**Try this!** Get all of the characters of the following string except the first four, without counting its length:
###Code
my_string = "This is a really long string and you have no clue how long it is!"
# your code here
###Output
_____no_output_____
###Markdown
More On StringsThere are many more things that you can do with strings, for example you can concatenate them together (`'hello' + ', '+ 'world'`). However, in this course, you will primarily be working with _numerical_ data, instead of strings, so we will not linger on this topic. For more information on strings, please see this [article from Google](https://developers.google.com/edu/python/strings). ListsLists are like `arrays` in Java, except that you can put whatever you want into them. They are also flexible in size so you can keep adding to them as much as you'd like. Python strings behave much like lists of characters, so strings and lists share many of the same features. Here we create a new list, using square bracket notation, and fill it with the number `1`, the variable `one`, the string `'a'`, and the value of the `my_name` variable.
###Code
[1, one, 'a', my_name]
###Output
_____no_output_____
###Markdown
Adding to a ListYou can add to lists using the `append` method like this.
###Code
a_list = [1, 2, 3, 4, 5]
a_list.append(6)
a_list
###Output
_____no_output_____
###Markdown
It is also possible to concatenate two lists, i.e. you can join two lists end-to-end.
###Code
a_list + a_list
###Output
_____no_output_____
###Markdown
IndexingThe good thing about this is that lists are indexed in the same way as strings, so you can use the same indexing techniques on lists that you used for strings.**Try This!** Get the second to fourth elements of `a_list`.
###Code
# your code here
###Output
_____no_output_____
###Markdown
We can also do more crazy indexing things, like skipping elements:
###Code
a_list[0::2]
###Output
_____no_output_____
###Markdown
Or reversing it:
###Code
a_list[::-1]
###Output
_____no_output_____
###Markdown
Control Flow and Loops If-Else ConditionalsBesides the `elif` statement, there are three more things to notice. First, the conditional statements don't have to be in parentheses, which allows for easier reading of code with much less clutter. Second, each statement is terminated with a `:`. This is essentially saying that we will "define" what this statement entails. And, third, that each "block" following a statement is merely indented with either 4 spaces or a tab. This is, again, for readability. You can almost think of Python code as _outlining_ what you want it to do, where each statement is kind of like a heading and each block is an indented "idea", so to speak.
###Code
if 2 < 1 and 1 > 0:
print('Not both!')
elif 1 > 0:
print('Just one.')
else:
print('Everything else')
###Output
_____no_output_____
###Markdown
LoopsThe syntax for a `while` loop should look relatively similar to those in Java. The differences lie in the same places as for the `if` statements. A `while` loop will iterate "while" its condition remains `True`.
###Code
n = 5
while n > 0:
print(n)
n -= 1
###Output
_____no_output_____
###Markdown
`For` loops may look a little unfamiliar at first, but there is good reason for this. For loops in Python have managed to achieve greater functionality without sacrificing readability. Below is an example of how you would iterate 5 times.
###Code
for i in range(5):
print(i)
###Output
_____no_output_____
###Markdown
You can also easily iterate through lists the same way as lists are also `iterables`.
###Code
a_list = ['Hello', 'Machine', 'Learning', '!!!']
for word in a_list:
print(word)
###Output
_____no_output_____
###Markdown
**Try this!** Create a for loop to print out every even number in the interval `[1, 100]`. Hint: create an appropriate `iterable` first.
###Code
# your code here
###Output
_____no_output_____
###Markdown
List ComprehensionsA very useful feature of Python lists is called **comprehension** notation. It allows us to use for loops to create lists! This notation is very similar to set-builder notation from math and allows us to succinctly create lists from other lists. In the following cell, we take a `range`, which is "list-like", and square each value. The `range` function returns an `iterable` from an _optional_ starting point to an end point.
###Code
range(5)
squares = [x * x for x in range(5)]
squares
squares_plus = [x + 1 for x in squares]
squares_plus
###Output
_____no_output_____
###Markdown
FunctionsFunctions in Python, and in general, can be thought of as a generalized method in Java. Where methods operate on and are attached to classes, functions are not. However, both constructs facilitate "abstraction" in our code. Abstraction is usually defined as a process by which we hide away all the little details about something in order to focus on how the thing interacts. Practically in CS this means> [Abstraction's] main goal is to handle complexity by hiding unnecessary details from the user. -[Stackify](https://stackify.com/oop-concept-abstraction/?utm_referrer=https%3A%2F%2Fwww.google.com%2F)This is what allows us to think about driving a car without actually thinking about all the complex detail of what happens when we press the gas pedal. In the same way, functional abstraction allows us to think about functions as a collection of code that, given a particular input, will return a particular output. A very simple example might be a sum function, which computes the sum of a `list` of values.
###Code
sum([1, 2, 3, 4, 5])
###Output
_____no_output_____
###Markdown
With this function, I can retrieve the sum of a group of values without ever having to know _how_ the sum is actually computed. Another benefit of grouping code into functions is that it makes the code easy to test -- just call the function with some inputs and see if the right output is returned. The last benefit I will mention here is that a function is a way of separating concerns and ideas. When you write a `sum` function, all you are responsible for is the correctness and efficiency of the computation, nothing else. Also, when writing a function, you are putting code that does a particular computation into a logical grouping, much like how in writing you would group similar ideas into a paragraph. SyntaxThe syntax of defining a function in Python is quite simple. You let the interpreter know that you want to define a function using the `def` statement. Then, you follow that with your function's name with the arguments. There is no need to specify the types that the function accepts as Python will figure that out itself. If it cannot, then it will let you know in the form of an error.
###Code
def my_function(arg1, arg2):
output = arg1 + arg2
return output
###Output
_____no_output_____
###Markdown
Notice that, again, much like the `if` statements and loops, function definitions end with a `:` and the contents are indented. Some will miss the safety of static typing in Java (others won't). While this is true, in exchange you can reduce the redundancy of code between one version of a function that takes in one type of input and another version of the same function that merely takes in another type. Why write a separate `max` function for `byte`, `char`, `int`, `float`, etc. when you can just write one? More Examples
###Code
def mean(values):
return sum(values) / len(values)
###Output
_____no_output_____
###Markdown
Notice that the `len` function returns the length of any list. LambdasLambdas are a special type of function called anonymous functions. Whereas with normal functions you must name the function in the definition, you do not have to name lambda functions.```def named_function(): pass```Lambdas are treated as expressions that evaluate to functions exactly like how `'hello'` is evaluated to a string. This means that you can store lambdas in variables but this is _widely_ considered bad practice.```lambda x, y: x + y```Here, the `lambda` keyword tells Python that we want to make a lambda function. The `x` and `y` before the colon denote the arguments of the function. The expression after the colon represents the logic of the function. In this case, we could call this function `add` since it takes two values and adds them. Again, it is very bad practice to assign lambdas to variables. This might make you think that they are useless, but I assure you that they are not. The most common use case of lambdas is when another function takes a function as an argument, for example the `max` function.I'm sure everyone is familiar with the `max` function, but in case you are not, this function returns the largest element in a list. Typically you would simply call the `max` function with `some_list` and get the largest element. Try evaluating this next cell.
###Code
some_list = [1, 2, 3, 4]
max(some_list)
###Output
_____no_output_____
###Markdown
As expected, the largest value in `some_list` is `4`. However, there are cases where you want to get the max element of a list but the elements have complex structure. For example, consider a class roster, which is a list of students. In this situation, let us represent each student as a `list` containing their name, age, and graduation year.
###Code
roster = [
['Billy', 50, 2021],
['Meghan', 18, 2020],
['Jeff', 21, 2019],
['Alex', 21, 2019],
['Cate', 21, 2020]
]
###Output
_____no_output_____
###Markdown
We want to find the oldest student, `Billy`, in this group of students. How can we do that? Let's try directly calling `max` with this `roster` and see what we get.
###Code
max(roster)
###Output
_____no_output_____
###Markdown
The `max` function returned `['Meghan', 18, 2020]`, which isn't what we are looking for.> **Note**: this result makes sense because the max function sees a list of lists and defaults to using the first element of each list to compare them. In this case, `Meghan` starts with `M`, which comes later in the alphabet than the rest of the students' initials.In order to get the oldest student, we will need to show the `max` function which values we want it to compare. To do this we will use a `key`, which is a function. Instead of writing an entire function for this, we can just pass in a lambda that, when called with a student-list, will return the age of the student.
###Code
max(roster, key=lambda student: student[1])
###Output
_____no_output_____
###Markdown
Here we see that `'Billy'`, who is 50, was returned as the oldest student. Other Python TopicsA complete discussion of Python would include topics such as [dictionaries](https://www.programiz.com/python-programming/dictionary), [iterators](https://www.programiz.com/python-programming/iterator), and [classes](https://www.programiz.com/python-programming/object-oriented-programming), among other important topics. However, this has already been a lot and we will not run into these in the first few labs. Because of this, we will opt to introduce these other topics as they appear throughout this course. NumPyYou can put just about anything into a Python list. They are designed to be completely agnostic to types, making them just about as flexible as a data structure can get. You want to store a string, an int, and another list? No problem. However, this versatility doesn’t come for free. In exchange for quality of life, we must give up some degree of computational efficiency — though probably not enough to tip the scales against lists in most use cases.One case, however, where lists are not ideal is mathematical computation. Here, we don't need the flexibility that lists give us since we know upfront that we are only dealing with _numbers_ and _how many_ of these numbers we have (eg. the dimensionality of a column vector is fixed for a particular problem). This leads us to seek an alternative data structure that is optimized for these constraints (ie. known type and shape): the **array**. These types of math-specialized arrays are not provided by Python itself and, instead, can be found in the `numpy` package. Here we import `numpy` with the alias `np`. This is for convention and because, simply, $2 < 5$.
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Creating ArraysArrays can be created from many "list-like" objects by calling `np.array` on the object.
###Code
some_list = range(10)
np.array(some_list)
###Output
_____no_output_____
###Markdown
You may also create multi-dimensional arrays, or `ndarray`s, in this way.
###Code
some_ndlist = [
[1, 2, 3],
[4, 5, 6]
]
np.array(some_ndlist)
###Output
_____no_output_____
###Markdown
Zeros Sometimes you just need a _uniform_ array of some value. Here are some examples of how you can make these.
###Code
np.zeros(5)
###Output
_____no_output_____
###Markdown
In the following example, we pass in a `tuple` (kind of like a list) with the shape of the array we want, i.e. `(5, 3)`.
###Code
np.zeros((5, 3))
###Output
_____no_output_____
###Markdown
**Try this!** Create a three-dimensional array of zeros with size 3 along each of the first two dimensions and size 2 along the third dimension.
###Code
# your code here
###Output
_____no_output_____
###Markdown
This looks pretty complicated, but luckily for our course we will mostly use one- and two-dimensional arrays, as they can represent _vectors_ and _matrices_. Ones
###Code
np.ones(5)
###Output
_____no_output_____
###Markdown
If you need an array filled with `5`s, then make an array of `ones` of the desired shape and multiply it by `5`.
###Code
np.ones(5) * 5
###Output
_____no_output_____
###Markdown
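If you find yourself doing this a lot, NumPy also has a helper that builds a constant array directly. The next cell is a small aside that is not part of the original lab text; `np.full` takes the desired shape and the fill value.
###Code
# same result as np.ones(5) * 5
np.full(5, 5.0)
###Output
_____no_output_____
###Markdown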
Evenly Spaced Sequences You can also get an array of evenly spaced numbers over a specified interval using `np.linspace`.```np.linspace(start, stop, number)```
###Code
np.linspace(0, 10, 20)
###Output
_____no_output_____
###Markdown
Thinking of Arrays as VectorsOne of the main benefits of using arrays rather than lists is because of _vectorized_ operations, which essentially allow us to think of an entire array as a unit and operate at the array level -- we don't have to concern ourselves with each individual number. From here on out, we will refer to arrays as **vectors** as the mathematical definitions of _vectorized_ operations are typically defined for vectors. It is important to note that, in many cases, we can use the terms interchangeably, but this is not always the case. For example, a dot product, or inner product, of two vectors (mathspeak for same-sized arrays) $\textbf{a}$ and $\textbf{b}$ is defined as $$\textbf{a} \cdot \textbf{b} = \sum_{i=1}^{d} a_i b_i.$$ This roughly translates to "multiply each element of $\textbf{a}$ by the element at the same index in $\textbf{b}$ and sum the products". A Python function for this might look like
###Code
def dot(a, b):
products = []
for i in range(len(a)):
p = a[i] * b[i]
products.append(p)
return sum(products)
###Output
_____no_output_____
###Markdown
Notice the `for` loop that is required. Often times, we will be taking "elementwise" products, sums, etc. all of which will involve looping. Looping itself in Python is not necessarily slow, but given the constraints of this context (recall, we are only dealing with numbers and we know how many of them we have) we can leverage "super" fast C libraries to do this for us.> **For the curious**: In this case, we use Python and NumPy as an interface for highly optimized C routines.
###Code
a = np.ones(3) # a = [1, 1, 1]
b = np.array([1, 2, 3]) # b = [1, 2, 3]
dot(a, b) # 1*1 + 1*2 + 1*3 = 6
###Output
_____no_output_____
###Markdown
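As an aside (this cell is added for illustration and is not part of the original lab text), NumPy already ships this exact computation as a fast vectorized routine, so we rarely need to write our own `dot`. Both `np.dot` and the `@` operator give the same answer as the hand-written function above.
###Code
# vectorized dot product - no explicit Python loop required
np.dot(a, b), a @ b
###Output
_____no_output_____
###Markdown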
Even better, all of this comes with intuitive syntax. Below is an example of "vectorized" addition.
###Code
a + b
###Output
_____no_output_____
###Markdown
And an "elementwise" product of two vectors (here scaled by 2), $$2\, a_i b_i \quad i=1, 2, \ldots, d,$$ can be done as follows.
###Code
2 * a * b
###Output
_____no_output_____
###Markdown
Now that you know what these arrays look like, let's try to add them together. Write down what you would expect to see in the cell below.
###Code
# your code here
###Output
_____no_output_____
###Markdown
There are many other vectorized array operations, such as `np.sum`, `np.min`, among others that also take advantage of the `array` data structure to compute results very quickly. An extensive list of mathematical operations provided by NumPy can be found in [its documentation](https://docs.scipy.org/doc/numpy-1.14.0/reference/routines.math.html).Furthermore, some computations can be found as methods of arrays themselves. These include `array.min()`, `array.mean()`, etc. A list of these can be found [here](https://docs.scipy.org/doc/numpy-1.14.0/reference/arrays.ndarray.html#calculation). Arrays as MatricesArrays are considered to be very generic. They can be used to represent vectors, often as points in space, but they can also just represent a collection of numbers you would like to do math with. Moving from $1$ dimension to $2$ dimensions, an array could represent both a collection of vectors aligned side-by-side or a matrix in a more traditional mathematical sense. We won't spend much time on the matrix sense as most of the applications of _ndarrays_ we will see in this class are better described as collections of vectors.An example of this could be a collection of vectors representing the prices of several stocks and their "earnings per share," which together are used in finance to compute a price-earnings ratio, or more commonly, a [P/E ratio](https://www.investopedia.com/university/peratio/peratio1.asp).$$\begin{bmatrix}\text{prices} \\\text{earnings per share}\end{bmatrix}=\begin{bmatrix}175 & 150 & 180 \\1.60 & 1.03 & 2.00\end{bmatrix}$$ In NumPy, this could be represented by a 2D array as follows.
###Code
prices = [175, 150, 180]
earnings = [1.60, 1.03, 2.00]
data = np.array([prices, earnings])
data
###Output
_____no_output_____
###Markdown
Here we have represented our data, $\mathcal{D}$, as a collection of three column vectors, each representing one observation. With this data, we can calculate the P/E ratios of each of these stocks.$$\text{P/E Ratio}\; = \frac{\text{Price per Share}}{\text{Earnings per Share}}$$
###Code
def pe_ratio(data):
prices = data[0, :] # all values from row 1
earnings = data[1, :] # row 2
return prices / earnings
f'P/E ratios are: {pe_ratio(data)}'
###Output
_____no_output_____
###Markdown
Notice how we did not need to _manually_ loop through the elements. Arrays have AxesNumPy arrays have axes that correspond to the order of numbers supplied when indexing. It makes sense to consider axes for many operations. For example, by default the `min` method on an array returns the minimum over the entire array. However, there are cases where you want the minimum of each column (axis 0) or of each row (axis 1) instead. For these cases, you can specify which axis `min` should use. To demonstrate, we will use our sample stock price/earnings dataset from above. Here is a reminder of what the data looked like.
###Code
data
###Output
_____no_output_____
###Markdown
If we wanted to find the minimum value in `data` we would simply use the `min` method on the array as is done in the next cell.
###Code
data.min()
###Output
_____no_output_____
###Markdown
However, often times we would like to find the minimum value in a particular row or column. Depending on the data you are representing with the array, this might mean finding the minimum stock price and the minimum earnings per share, as with our P/E ratio example. To find the minimum price and the minimum earnings (the minimum of each row; axis 1), we specify the axis along which `min` should operate.
###Code
data.min(axis=1)
###Output
_____no_output_____
###Markdown
Many vectorized operations that manipulate arrays can take an axis argument, allowing you more flexibility. Now, using the power of arrays, let's reattempt our goal from above to find the oldest student, Billy, in the group of students. Note that we do not want strings in our numpy arrays, so we will put the names into a list and the numeric entries into a two-dimensional array (matrix).
###Code
names = ['Billy','Meghan','Jeff', 'Alex','Cate']
roster = [
[50, 2021],
[18, 2020],
[21, 2019],
[21, 2019],
[21, 2020]
]
R = np.array(roster)
R
###Output
_____no_output_____
###Markdown
**Try this!** Get the age and graduation year of the oldest person in our roster.
###Code
R.max(axis=0)
###Output
_____no_output_____
###Markdown
**Try this!** Here is a last challenging exercise! Using the reference on [array functions](https://docs.scipy.org/doc/numpy-1.14.0/reference/arrays.ndarray.html#calculation) linked above, get the index of the oldest person in our roster and retrieve their name from the `names` list. Then adapt your code to reveal the name of the youngest person.
###Code
# your code here
###Output
_____no_output_____ |
Implementations/FY21/ACC_mapbox_traffic/final_traffic_step5_post_processing_step2.ipynb | ###Markdown
Step5: Post-Processing function step 2
###Code
import os, sys, time, importlib
import osmnx
import geopandas as gpd
import pandas as pd
import networkx as nx
import numpy as np
sys.path.append("../../../GOSTnets")
import GOSTnets as gn
# pip install osmium
# import osmium, logging
# import shapely.wkb as wkblib
from shapely.geometry import LineString, Point
# This is a Jupyter Notebook extension which reloads all of the modules whenever you run the code
# This is optional but good if you are modifying and testing source code
%load_ext autoreload
%autoreload 2
from GOSTnets.load_traffic2 import *
# read graph
G = nx.read_gpickle('./sri_lanka_unclean2_w_time_largest_20200616.pickle')
len(G.edges)
# import shapefile as GPD
study_area = gpd.read_file("./output_edges_shapefiles_merged_20200616/weighted_sec_saved_edges_all_merged2.shp")
study_area
sorted_study_area = study_area.sort_values(by=['weighted_s'], ascending=False)
sorted_study_area
total_cost = 0
total_count = 0
cutoff_row = 0
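# Descriptive comment (added for clarity): the helper below implements a greedy
# selection. The edges are already sorted by weighted seconds saved in descending
# order, so we walk down the list accumulating improvement costs until the
# 20,000,000 budget is exhausted; cutoff_row records the last row that still fits.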
def sort_weighted_s(index, sec_saved, imp_cost):
global total_cost
global total_count
global cutoff_row
if total_cost < 20000000 and sec_saved > 0:
print(f"total_cost before:{total_cost}")
print(f"imp_cost:{imp_cost}")
total_cost = total_cost + imp_cost
total_count += 1
print(f"sec_saved:{sec_saved}")
print(f"total_cost:{total_cost}")
print(f"total_count:{total_count}")
print(f"cutoff_row:{index}")
cutoff_row = index
sorted_study_area.apply(lambda x: sort_weighted_s(x.name, x['sec_saved'],x['imp_cost']),axis=1)
total_cost
total_count
cutoff_row
###Output
_____no_output_____ |
notebooks/Gas_and_Airbnb_Visualizations.ipynb | ###Markdown
TO DO:
* gas price visualization of some kind (maybe showing yearly trend/seasonal trend)
* lodging price visualization of some kind (seasonal or cost differentiation bubble chart by number of rooms, amenities, etc.)
* for visualizations we might be best served by making the visualization ourselves, and passing the plotly viz to the front end (not exactly sure how to do this, but I roughly understand from our DS guide). But we could just return the visualization data for them to render too.
Gas Price Visualizations Get data, Per Region
###Code
data_url = 'https://www.eia.gov/dnav/pet/xls/PET_PRI_GND_A_EPM0_PTE_DPGAL_W.xls'
import pandas as pd
df = pd.read_excel(data_url, sheet_name=2, header=2)
df.isna().sum()
###Output
_____no_output_____
###Markdown
East Coast Regional Plot
###Code
col_name = 'Weekly East Coast All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)'
import plotly.express as px
fig = px.line(df, x="Date", y=col_name, title="East Coast Gas Prices - Weekly",
labels={"Date" : "Date", col_name: "East Coast Gas Prices (Dollars per Gallon)"})
fig.update_layout(title_text='East Coast Gas Prices - Weekly', title_x=0.5)
fig.show()
###Output
_____no_output_____
###Markdown
New England Plot
###Code
fig = px.line(df, x="Date", y="Weekly New England (PADD 1A) All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)", title="East Coast Gas Prices - Weekly",
labels={"Date" : "Date", "Weekly New England (PADD 1A) All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)": "New England Gas Prices (Dollars per Gallon)"})
fig.update_layout(title_text='New England Gas Prices - Weekly', title_x=0.5)
fig.show()
###Output
_____no_output_____
###Markdown
Central Atlantic Plot
###Code
fig = px.line(df, x="Date", y="Weekly Central Atlantic (PADD 1B) All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)", title="Central Atlantic Gas Prices - Weekly",
labels={"Date" : "Date", "Weekly Central Atlantic (PADD 1B) All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)": "Central Atlantic Gas Prices (Dollars per Gallon)"})
fig.update_layout(title_text='Central Atlantic Gas Prices - Weekly', title_x=0.5)
fig.show()
###Output
_____no_output_____
###Markdown
Lower Atlantic Plot
###Code
fig = px.line(df, x="Date", y="Weekly Lower Atlantic (PADD 1C) All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)", title="Lower Atlantic Gas Prices - Weekly",
labels={"Date" : "Date", "Weekly Lower Atlantic (PADD 1C) All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)": "Lower Atlantic Gas Prices (Dollars per Gallon)"})
fig.update_layout(title_text='Lower Atlantic Gas Prices - Weekly', title_x=0.5)
fig.show()
###Output
_____no_output_____
###Markdown
Midwest Plot
###Code
fig = px.line(df, x="Date", y="Weekly Midwest All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)", title="Midwest Gas Prices - Weekly",
labels={"Date" : "Date", "Weekly Midwest All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)": "Midwest Gas Prices (Dollars per Gallon)"})
fig.update_layout(title_text='Midwest Gas Prices - Weekly', title_x=0.5)
fig.show()
###Output
_____no_output_____
###Markdown
Gulf Coast Plot
###Code
fig = px.line(df, x="Date", y="Weekly Gulf Coast All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)", title="Gulf Coast Gas Prices - Weekly",
labels={"Date" : "Date", "Weekly Gulf Coast All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)": "Gulf Coast Gas Prices (Dollars per Gallon)"})
fig.update_layout(title_text='Gulf Coast Gas Prices - Weekly', title_x=0.5)
fig.show()
###Output
_____no_output_____
###Markdown
Rocky Mountain Plot
###Code
fig = px.line(df, x="Date", y="Weekly Rocky Mountain All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)", title="Rocky Mountain Gas Prices - Weekly",
labels={"Date" : "Date", "Weekly Rocky Mountain All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)": "Rocky Mountain Gas Prices (Dollars per Gallon)"})
fig.update_layout(title_text='Rocky Mountain Gas Prices - Weekly', title_x=0.5)
fig.show()
###Output
_____no_output_____
###Markdown
West Coast Plot
###Code
fig = px.line(df, x="Date", y="Weekly West Coast All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)", title="West Coast Gas Prices - Weekly",
labels={"Date" : "Date", "Weekly West Coast All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)": "West Coast Gas Prices (Dollars per Gallon)"})
fig.update_layout(title_text='West Coast Gas Prices - Weekly', title_x=0.5)
fig.show()
###Output
_____no_output_____
###Markdown
Average Gas Price, Entire U.S. Plot
###Code
data = 'https://www.eia.gov/dnav/pet/xls/PET_PRI_GND_DCUS_NUS_W.xls'
import pandas as pd
df_all_regions = pd.read_excel(data_url, sheet_name=1, header=2)
fig = px.line(df_all_regions, x="Date", y="Weekly U.S. All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)", title="Average U.S. Gas Prices - Weekly",
labels={"Date" : "Date", "Weekly U.S. All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)": "Average U.S. Gas Prices (Dollars per Gallon)"})
fig.update_layout(title_text='Average U.S. Gas Prices - Weekly', title_x=0.5)
fig.show()
###Output
_____no_output_____
###Markdown
All Regions Together Plot
###Code
column_names = df.columns.to_list()
column_names.remove('Date')
column_names.remove('Weekly West Coast (PADD 5) Except California All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)')
regions_dict = {
'Weekly East Coast All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)': 'East Coast',
'Weekly New England (PADD 1A) All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)': 'New England',
'Weekly Central Atlantic (PADD 1B) All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)': 'Central Atlantic',
'Weekly Lower Atlantic (PADD 1C) All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)': 'Lower Atlantic',
'Weekly Midwest All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)': 'Midwest',
'Weekly Gulf Coast All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)': 'Gulf Coast',
'Weekly Rocky Mountain All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)': 'Rocky Mountain',
'Weekly West Coast All Grades All Formulations Retail Gasoline Prices (Dollars per Gallon)': 'West Coast',
}
df_list = []
for col in column_names:
aux_df = pd.DataFrame()
aux_df['Date'] = df['Date']
aux_df['Price'] = df[col]
aux_df['Region'] = regions_dict[col]
df_list.append(aux_df)
separated_df = pd.concat(df_list)
separated_df.sample(10)
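# Aside (not part of the original notebook): pandas can build the same long-format
# table in one call with DataFrame.melt, then map the raw column names to the
# short region labels via regions_dict.
melted_df = df.melt(id_vars='Date', value_vars=column_names,
                    var_name='Region', value_name='Price')
melted_df['Region'] = melted_df['Region'].map(regions_dict)
melted_df.sample(10)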
fig = px.line(separated_df, x="Date", y="Price", color="Region")
fig.update_layout(title_text='U.S. Gas Prices By Region - Weekly', title_x=0.5)
fig.show()
###Output
_____no_output_____
###Markdown
For FastAPI
###Code
# Completed sketch of the plotting helper for the API. It assumes the regions_dict
# defined above, and that `region` is one of its values (e.g. "West Coast");
# the reverse lookup below is introduced here for illustration.
region_to_column = {v: k for k, v in regions_dict.items()}
def make_plot(region):
    col_name = region_to_column[region]
    label = f"{region} Gas Prices (Dollars per Gallon)"
    title = f"{region} Gas Prices - Weekly"
    fig = px.line(df, x="Date", y=col_name, labels={"Date": "Date", col_name: label})
    fig.update_layout(title_text=title, title_x=0.5)
    return fig.to_json()
###Output
_____no_output_____ |
notebooks/workshops/01Pandas.ipynb | ###Markdown
Anatomy of Pandas
###Code
import pandas as pd
import numpy as np
import datetime as dt
###Output
_____no_output_____
###Markdown
Basic Datatypes Series
###Code
# Series are a (comparatively) thin wrapper around arrays
# Get an array to work with
x = np.random.normal(size=(100,))
# Turn this into a Series
simple_series = pd.Series(x)
# This contains the same data - but note the index column on the left
# All Series objects contain and Index in addition to their data
simple_series
# By default, this index is an ordinal count (from 0), so the same as numpy/C indexes
# But, the similarity ends here!
# Pandas indexes are persistent, and will be subsetted along with data
simple_series[10:20]
# Pandas indexes can be of (almost) any datatype
# The library includes some very useful and common cases - in particular, the DatetimeIndex
# There are many ways to construct these - we will use some of Pandas builtin tools for these examples
dti = pd.date_range('1 jun 2000', periods=100, freq='d')
# As you can see, this is a special class - it is not a Series or an Array, although it shares some features
dti
# Rebuild our Series using the DatetimeIndex
date_series = pd.Series(x, index=dti)
date_series
date_series.plot()
# Now, indexing into the Series will use the Index data itself as index locations - not simply an integer index
start_date = dt.datetime(2000,7,15)
date_series[start_date]
# DatetimeIndexes wrap the standard Python datetime library. Get to the know this library, it makes working with indexing much easier!
end_date = start_date + dt.timedelta(10)
start_date, end_date
# Note that unlike indexing with integers, indexing into a Series or DataFrame with a custom index class will
# select data inclusively (ie it is a closed interval on both ends)
date_series[start_date:end_date]
# You can still access the contents of a Series by (0-based) ordinal indexing, by using the iloc method
# Note that iloc indexes are also inclusive
date_series.iloc[0:10]
# There is a similar method available that allows indexing by value rather than ordinal indexing
# Looks kind of pointless, since we can just use indexes...
# It will be important later!
date_series.loc[start_date:end_date]
# The index of a Series is available as its own object - this will also be very useful later
date_series.iloc[0:10].index
# Pandas has lots of convenience shortcuts, especially useful for interactive use
date_series['jul 2000':'aug 15 2000'].plot()
###Output
_____no_output_____
###Markdown
DataFrameA DataFrame can be thought of as a collection of Series with a shared Index
###Code
# Let's construct a minimal DataFrame with just one Series - the date_series from above
df = pd.DataFrame({'x': date_series})
# So far it doesn't contain anything additional to the Series data - with the exception of a Column name, 'x'
df
# Columns are selected using standard indexing
# Selecting a single column will return the Series containing that column's data
df['x']
# You can also select columns as if they were member variables of the DataFrame object
# Don't!
# Don't ever do this!
# This looks like it works fine...
df.x
# ... but
# DataFrames have hundreds of methods and member variables
# The moment one of your columns shares a name with them, this happens...
bad_df = pd.DataFrame({'columns': [0,1,5,2]})
bad_df
bad_df.columns
# Because DataFrames are indexed 'column first', passing index values directly in will cause an error
df[start_date]
# Or maybe they won't?
df['jul 2000']
# Pandas is a big, complicated library with a lot of baggage and technical debt ("backwards compatibility")
# Wherever possible, use the least ambiguous methods you can
# In this case, that is the loc method (I told you it would be important)
df.loc[start_date]
# Now, let's get a more complicated DataFrame from some real AuTuMN data
from autumn.tools.runs import ManagedRun
mr = ManagedRun("covid_19/hume/1633437782/f6ca627")
param_df = mr.calibration.get_mcmc_params()
# These are the parameters of a calibration run
param_df
# As you can see, there are multiple columns in this run; you can access this programmatically
# Columns is also an Index! Just one that runs along a different axis
param_df.columns
# Multiple columns can be selected at once.
# This is extremely useful in a programmatic context, where lists can be generated in code and then used as arguments in indexing
param_df[['seasonal_force','vic_2021_seeding.seed_time','contact_rate']]
# Let's get another DataFrame
mcmc_runs_df = mr.calibration.get_mcmc_runs()
mcmc_runs_df
# Boolean comparisons on Pandas objects produce boolean arrays, just like numpy
mcmc_runs_df['run'] > 500
# You can use these to select subsets of Series or DataFrames
burned_df = mcmc_runs_df[mcmc_runs_df['run'] > 500]
burned_df
# Take care to make sure your index matches the object you are indexing
# The following example will throw an exception in some versions of pandas, but just produce a warning in later versions
# Either way - don't do it! Warnings exist for a reason, and if you see one, there is almost certainly a better way
# to write the code that produced it
burned_df[mcmc_runs_df['accept'] == 1]
# Same object use for comparison and indexing - no complaints
burned_df[burned_df['accept'] == 1]
# You can combine boolean indexes using boolean operators
mcmc_runs_df[(mcmc_runs_df['run'] > 500) & (mcmc_runs_df['accept'] == 1)]
# Still.. that all seems a bit cumbersome
# OK, we're going to cheat a little here and use a custom function from the autumn library that makes life with pandas a little easier
from autumn.tools.utils.pandas import pdfilt
selected_runs = pdfilt(mcmc_runs_df, ["run > 500", "accept == 1"])
selected_runs
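# Aside (not part of the original workshop): plain pandas can express the same
# filter with DataFrame.query, which also avoids repeating the frame name
mcmc_runs_df.query("run > 500 and accept == 1")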
# Now, we can access the index, and do something useful with it...
selected_runs.index
# Get the parameters from our params_df, using the index of our selected runs
param_df.loc[selected_runs.index]
###Output
_____no_output_____
###Markdown
Pivots, Melts, MultiIndexing
###Code
# Those DataFrames above looked nice. A little too nice. Is that even our data?
raw_params = mr.calibration.get_mcmc_params(raw=True)
raw_params
# To reshape this DataFrame, we use the pivot_table method
# This needs to know which columns contain Index data, and which contain Column identifiers
# In this case, 'urun' has been handily filled in by combining run and chain in an earlier step,
# so we can use this directly as an index
raw_params.pivot_table(index='urun',columns='name')
# Hang on, that looks a bit weird - we've still got the chain and run columns in there, confusing matters...
# Use drop to tidy things up
raw_params_urun = raw_params.drop(['chain','run'],axis='columns')
raw_params_urun.pivot_table(index='urun',columns='name')
# An alternative, if we don't have a unique identifier, or more importantly want to retain access to both these
# "dimensions", is to use a MultiIndex
# We'll drop 'urun', and build an index using both chain and run
midx_df = raw_params.drop('urun',axis='columns').pivot_table(index=['chain','run'], columns='name')
midx_df
# MultiIndexes work a bit more like multidimensional arrays
# You can index by either subsetting on a single dimension...
midx_df.loc[6]
# ... or by passing in multidimensional coordinates
midx_df.loc[6,13569]
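# Aside (not part of the original workshop): .xs() returns a cross-section for a
# single value of one index level - here, the rows for run 13569 across all chains
midx_df.xs(13569, level='run')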
# Because columns are just Indexes on another axis, there can be column MultiIndexes too
pbi = mr.powerbi.get_db()
udf = pbi.get_uncertainty()
udf
# Finally - if you need to export this data, especially to CSV or a relational database, use melt
# This is the inverse of pivot_table
# It can require quite a lot of fine tuning, but is important to be aware of
udf.melt()
melted = udf.melt(ignore_index=False)
melted['date'] = melted.index
melted
###Output
_____no_output_____ |
NASA/Python_codes/ML_prepration/DeepLearning_CleanPlot.ipynb | ###Markdown
Set up Directories
###Code
data_dir = "/Users/hn/Documents/01_research_data/NASA/VI_TS/05_SG_TS/"
ML_data_dir = "/Users/hn/Documents/01_research_data/NASA/ML_data/"
###Output
_____no_output_____
###Markdown
Set other parameters
###Code
idx="EVI"
###Output
_____no_output_____
###Markdown
Read Train Labels and IDs
###Code
train_labels = pd.read_csv(ML_data_dir + "train_labels.csv")
train_labels.head(2)
train_labels.shape
len(train_labels.ID.unique())
###Output
_____no_output_____
###Markdown
Read TS files
###Code
file_names = ["SG_Walla2015_" + idx + "_JFD.csv", "SG_AdamBenton2016_" + idx + "_JFD.csv",
"SG_Grant2017_" + idx + "_JFD.csv", "SG_FranklinYakima2018_"+ idx +"_JFD.csv"]
data=pd.DataFrame()
for file in file_names:
curr_file=pd.read_csv(data_dir + file)
curr_file['human_system_start_time'] = pd.to_datetime(curr_file['human_system_start_time'])
# These data are for 3 years. The middle one is the correct one
all_years = sorted(curr_file.human_system_start_time.dt.year.unique())
if len(all_years)==3 or len(all_years)==2:
proper_year = all_years[1]
elif len(all_years)==1:
proper_year = all_years[0]
curr_file = curr_file[curr_file.human_system_start_time.dt.year==proper_year]
data=pd.concat([data, curr_file])
data.reset_index(drop=True, inplace=True)
data.loc[data[idx]<0, idx]=0
data.head(2)
###Output
_____no_output_____
###Markdown
Filter the train fields TS
###Code
trainIDs = list(train_labels.ID.unique())
data = data[data.ID.isin(trainIDs)]
data.reset_index(drop=True, inplace=True)
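# Descriptive comment (added for clarity): for each training field, plot its
# smoothed EVI time series as a bare, axis-free line and save it as a JPEG whose
# file name starts with "single_" or "double_" according to the expert vote label.
# The same pattern is repeated for NDVI further below.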
for curr_ID in data.ID.unique():
crr_fld=data[data.ID==curr_ID].copy()
crr_fld.reset_index(drop=True, inplace=True)
# crr_fld['human_system_start_time'] = pd.to_datetime(crr_fld['human_system_start_time'])
SFYr = crr_fld.human_system_start_time.dt.year.unique()[0]
fig, ax = plt.subplots();
fig.set_size_inches(10, 2.5)
ax.grid(False);
ax.plot(crr_fld['human_system_start_time'], crr_fld[idx],
c ='dodgerblue', linewidth=5)
ax.axis("off")
# ax.set_xlabel('time'); # , labelpad = 15
# ax.set_ylabel(idx, fontsize=12); # , labelpad = 15
# ax.tick_params(axis = 'y', which = 'major');
# ax = plt.gca()
# ax.axes.xaxis.set_visible(False)
# ax.axes.yaxis.set_visible(False)
left = crr_fld['human_system_start_time'][0]
right = crr_fld['human_system_start_time'].values[-1]
    ax.set_xlim([left, right]); # the following line also works
    ax.set_ylim([-0.005, 1]); # the following line also works
crop_count = train_labels[train_labels.ID==curr_ID]["Vote"].values[0]
if crop_count==1:
crop_count_letter="single"
else:
crop_count_letter="double"
# train_images is the same as expert labels!
plot_path = "/Users/hn/Documents/01_research_data/NASA/ML_data/train_images_" + idx + "/"
os.makedirs(plot_path, exist_ok=True)
fig_name = plot_path + crop_count_letter + "_" + curr_ID +'.jpg'
plt.savefig(fname = fig_name, dpi=100, bbox_inches='tight', facecolor="w")
plt.close('all')
# ax.legend(loc = "upper left");
print (plot_path)
###Output
/Users/hn/Documents/01_research_data/NASA/ML_data/train_images_EVI/
###Markdown
NDVI
###Code
idx="NDVI"
file_names = ["SG_Walla2015_" + idx + "_JFD.csv", "SG_AdamBenton2016_" + idx + "_JFD.csv",
"SG_Grant2017_" + idx + "_JFD.csv", "SG_FranklinYakima2018_"+ idx +"_JFD.csv"]
data=pd.DataFrame()
for file in file_names:
curr_file=pd.read_csv(data_dir + file)
curr_file['human_system_start_time'] = pd.to_datetime(curr_file['human_system_start_time'])
# These data are for 3 years. The middle one is the correct one
all_years = sorted(curr_file.human_system_start_time.dt.year.unique())
if len(all_years)==3 or len(all_years)==2:
proper_year = all_years[1]
elif len(all_years)==1:
proper_year = all_years[0]
curr_file = curr_file[curr_file.human_system_start_time.dt.year==proper_year]
data=pd.concat([data, curr_file])
data.reset_index(drop=True, inplace=True)
data.loc[data[idx]<0, idx]=0
data.head(2)
trainIDs = list(train_labels.ID.unique())
data = data[data.ID.isin(trainIDs)]
data.reset_index(drop=True, inplace=True)
for curr_ID in data.ID.unique():
crr_fld=data[data.ID==curr_ID].copy()
crr_fld.reset_index(drop=True, inplace=True)
# crr_fld['human_system_start_time'] = pd.to_datetime(crr_fld['human_system_start_time'])
SFYr = crr_fld.human_system_start_time.dt.year.unique()[0]
fig, ax = plt.subplots();
fig.set_size_inches(10, 2.5)
ax.grid(False);
ax.plot(crr_fld['human_system_start_time'], crr_fld[idx],
c ='dodgerblue', linewidth=5)
ax.axis("off")
# ax.set_xlabel('time'); # , labelpad = 15
# ax.set_ylabel(idx, fontsize=12); # , labelpad = 15
# ax.tick_params(axis = 'y', which = 'major');
# ax = plt.gca()
# ax.axes.xaxis.set_visible(False)
# ax.axes.yaxis.set_visible(False)
left = crr_fld['human_system_start_time'][0]
right = crr_fld['human_system_start_time'].values[-1]
    ax.set_xlim([left, right]); # the following line also works
    ax.set_ylim([-0.005, 1]); # the following line also works
crop_count = train_labels[train_labels.ID==curr_ID]["Vote"].values[0]
if crop_count==1:
crop_count_letter="single"
else:
crop_count_letter="double"
# train_images is the same as expert labels!
plot_path = "/Users/hn/Documents/01_research_data/NASA/ML_data/train_images_" + idx + "/"
os.makedirs(plot_path, exist_ok=True)
fig_name = plot_path + crop_count_letter + "_" + curr_ID +'.jpg'
plt.savefig(fname = fig_name, dpi=100, bbox_inches='tight', facecolor="w")
plt.close('all')
# ax.legend(loc = "upper left");
plot_path
crr_fld.head(2)
###Output
_____no_output_____ |
data_processing_in_pandas/4_create_subsets.ipynb | ###Markdown
Create manageable data sets- oneInHundred (1% the size of the original database. Takes every hundredth row)- oneInThousand (.1% the size of the original database. Takes every thousandth row)
###Code
oneInHundred = []
lngth = len(citibike_rides_df)
for i in range(lngth):
if i%100==0: oneInHundred.append(i)
citibike_rides_df_oneInHundred = citibike_rides_df.iloc[oneInHundred, :].copy()
citibike_rides_df_oneInHundred
# Export clean CSV file to master CSV file
tic = timeit.default_timer() # Monitor performance
citibike_rides_df_oneInHundred.to_csv("../2_clean_datasets_by_year/rides_2013-2021_oneInHundred.csv", index=False)
toc = timeit.default_timer() # Monitor performance
print(f'Time (in seconds) to export unified CSV file: {round(toc - tic, 2)}')
citibike_rides_df_oneInHundred.head()
oneInThousand = []
lngth = len(citibike_rides_df)
for i in range(lngth):
if i%1000==0: oneInThousand.append(i)
citibike_rides_df_oneInThousand = citibike_rides_df.iloc[oneInThousand, :].copy()
citibike_rides_df_oneInThousand
# Export clean CSV file to master CSV file
tic = timeit.default_timer() # Monitor performance
citibike_rides_df_oneInThousand.to_csv("../2_clean_datasets_by_year/rides_2013-2021_oneInThousand.csv", index=False)
toc = timeit.default_timer() # Monitor performance
print(f'Time (in seconds) to export unified CSV file: {round(toc - tic, 2)}')
citibike_rides_df_oneInThousand.head()
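# Aside (not part of the original notebook): the same 1-in-N subsets can be taken
# directly with iloc step slicing, without building the index lists by hand.
oneInHundred_alt = citibike_rides_df.iloc[::100].copy()
oneInThousand_alt = citibike_rides_df.iloc[::1000].copy()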
###Output
Time (in seconds) to export unified CSV file: 1.93
|
Keeping Track of x and y.ipynb | ###Markdown
Keeping Track of Vehicle x and yNow that you know how to solve trigonometry problems, you can keep track of a vehicle's $x$ and $y$ coordinates as it moves in any direction. The goal of this lesson is for you to implement a few methods in a `Vehicle` class. Once complete, your code will be used like this:```python instantiate vehiclev = Vehicle() drive forward 10 metersv.drive_forward(10) turn left in 10 increments of 9 degrees each.for _ in range(10): v.turn(9.0) v.drive_forward(1)v.drive_forward(10)v.show_trajectory()```and this final call to `show_trajectory` should produce a graph that looks like this: If, instead of calling ```pythonv.show_trajectory()```we had written:```pythonprint(v.history)```we would have seen a list of `(x,y)` tuples representing the vehicle's history that looks like this:```python[(0.0, 0.0), (10.0, 0.0), (10.988, 0.156), (11.939, 0.465), (12.830, 0.919), (13.639, 1.507), (14.346, 2.214), (14.934, 3.023), (15.388, 3.914), (15.697, 4.865), (15.853, 5.853), (15.853, 6.853)]```Note that it's this `history` data that is used to plot the points in `show_trajectory`.
###Code
import numpy as np
from math import sin, cos, pi
from matplotlib import pyplot as plt
class Vehicle:
def __init__(self):
self.x = 0.0 # meters
self.y = 0.0
self.heading = 0.0 # radians
self.history = []
def drive_forward(self, displacement):
"""
Updates x and y coordinates of vehicle based on
heading and appends previous (x,y) position to
history.
"""
delta_x = displacement * np.cos(self.heading)
delta_y = displacement * np.sin(self.heading)
new_x = self.x + delta_x
new_y = self.y + delta_y
self.history.append((self.x, self.y))
self.x = new_x
self.y = new_y
def set_heading(self, heading_in_degrees):
"""
Sets the current heading (in radians) to a new value
based on heading_in_degrees. Vehicle heading is always
between 0 and 2 * pi.
"""
assert(-180 <= heading_in_degrees <= 180)
rads = (heading_in_degrees * pi / 180) % (2*pi)
self.heading = rads
def turn(self, angle_in_degrees):
"""
Changes the vehicle's heading by angle_in_degrees. Vehicle
heading is always between 0 and 2 * pi.
"""
rads = (angle_in_degrees * pi / 180)
        new_head = (self.heading + rads) % (2*pi)  # wrap so the heading stays within [0, 2*pi)
self.heading = new_head
def show_trajectory(self):
"""
Creates a scatter plot of vehicle's trajectory.
"""
X = [p[0] for p in self.history]
Y = [p[1] for p in self.history]
X.append(self.x)
Y.append(self.y)
# create scatter AND plot (to connect the dots)
plt.scatter(X,Y)
plt.plot(X,Y)
plt.title("Vehicle (x, y) Trajectory")
plt.xlabel("X Position")
plt.ylabel("Y Position")
plt.axes().set_aspect('equal', 'datalim')
plt.show()
from testing import test_drive_forward, test_set_heading
test_set_heading(Vehicle)
test_drive_forward(Vehicle)
# instantiate vehicle
v = Vehicle()
# drive forward 10 meters
v.drive_forward(10)
# turn left in 10 increments of 9 degrees each.
for _ in range(10):
v.turn(9.0)
v.drive_forward(1)
v.drive_forward(10)
v.show_trajectory()
###Output
/opt/conda/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:106: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.
warnings.warn(message, mplDeprecation, stacklevel=1)
|
2018_material/labs/2017-hw3.ipynb | ###Markdown
STA 208: Homework 3This is based on the material in Chapter 4 of 'Elements of Statistical Learning' (ESL), in addition to lectures 7-8. Chunzhe Zhang came up with the dataset and the analysis in the second section. InstructionsWe use a script that extracts your answers by looking for cells in between the cells containing the exercise statements (beginning with __Exercise X.X__). So you - MUST add cells in between the exercise statements and add answers within them and- MUST NOT modify the existing cells, particularly not the problem statementTo make markdown, please switch the cell type to markdown (from code) - you can hit 'm' when you are in command mode - and use the markdown language. For a brief tutorial see: https://daringfireball.net/projects/markdown/syntaxIn the conceptual exercises you should provide an explanation, with math when necessary, for any answers. When answering with math you should use basic LaTeX, as in $$E(Y|X=x) = \int_{\mathcal{Y}} f_{Y|X}(y|x) dy = \int_{\mathcal{Y}} \frac{f_{Y,X}(y,x)}{f_{X}(x)} dy$$for displayed equations, and $R_{i,j} = 2^{-|i-j|}$ for inline equations. (To see the contents of this cell in markdown, double click on it or hit Enter in escape mode.) To see a list of latex math symbols see here: http://web.ift.uib.no/Teori/KURS/WRK/TeX/symALL.htmlWhen writing pseudocode, you should use enumerated lists, such as __Algorithm: Ordinary Least Squares Fit__(Input: X, y; Output: $\beta$)1. Initialize the $p \times p$ Gram matrix, $G \gets 0$, and the vector $b \gets 0$.2. For each sample, $x_i$: 1. $G \gets G + x_i x_i^\top$. 2. $b \gets b + y_i x_i$3. Solve the linear system $G \beta = b$ and return $\beta$ __Exercise 1.1__ (10 pts - 2 each)Recall that surrogate losses for large margin classification take the form, $\phi(y_i x_i^\top \beta)$ where $y_i \in \{-1,1\}$ and $\beta, x_i \in \mathbb R^p$.The following functions are used as surrogate losses for large margin classification. Demonstrate if they are convex or not, and follow the instructions.1. exponential loss: $\phi(x) = e^{-x}$1. truncated quadratic loss: $\phi(x) = (\max\{1-x,0\})^2$1. hinge loss: $\phi(x) = \max\{1-x,0\}$1. sigmoid loss: $\phi(x) = 1 - \tanh(\kappa x)$, for fixed $\kappa > 0$1. Plot these as a function of $x$.(This problem is due to notes of Larry Wasserman.) __Exercise 1.2__ (10 pts)Consider the truncated quadratic loss from (1.1.2). For brevity let $a_+ = max\{a,0\}$ denote the positive part of $a$.$$\ell(y_i,x_i,\beta) = \phi(y_i x_i^\top \beta) = (1-y_i x_i^\top \beta)_+^2$$1. Consider the empirical risk, $R_n$ (the average loss over a training set) for the truncated quadratic loss. What is gradient of $R_n$ in $\beta$? Does it always exists?1. Demonstrate that the gradient does not have continuous derivative everywhere.1. Recall that support vector machines used the hinge loss $(1 - y_i x_i^\top)_+$ with a ridge regularization. Write the regularized optimization method for the truncated quadratic loss, and derive the gradient of the regularized empirical risk.1. Because the loss does not have continuous Hessian, instead of the Newton method, we will use a quasi-Newton method that replaces the Hessian with a quasi-Hessian (another matrix that is meant to approximate the Hessian). Consider the following quasi-Hessian of the regularized objective to be $$G(\beta) = \frac 1n \sum_i 2 (x_i x_i^\top 1\{ y_i x_i^\top \beta > 1 \}) + 2 \lambda.$$ Demonstrate that the quasi-Hessian is positive definite, and write pseudo-code for quasi-Newton optimization. 
(There was a correction in the lectures, that when minimizing a function you should subtract the gradient $\beta \gets \beta - H^{-1} g$). HW3 Logistic, LDA, SVM
###Code
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import sklearn.linear_model as skl_lm
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix, classification_report, precision_recall_curve, roc_curve
from sklearn import preprocessing
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
# dataset path
data_dir = "."
###Output
_____no_output_____
###Markdown
The following code reads the data, subselects the $y$ and $X$ variables, and makes a training and test split. This is the Abalone dataset and we will be predicting the age. V9 is age, 1 represents old, 0 represents young.
###Code
sample_data = pd.read_csv(data_dir+"/hw3.csv", delimiter=',')
sample_data.V1=sample_data.V1.factorize()[0]
X = np.array(sample_data.iloc[:,range(0,8)])
y = np.array(sample_data.iloc[:,8])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3, random_state=0)
###Output
_____no_output_____ |
notebooks/.ipynb_checkpoints/07_data_mining_solution-checkpoint.ipynb | ###Markdown
Data mining EXERCISE: Association analysis from scratch[Adapted from http://aimotion.blogspot.com.au/2013/01/machine-learning-and-data-mining.html.][For more on efficient approaches, see http://www-users.cs.umn.edu/~kumar/dmbook/ch6.pdf.]Refer to slides for definitions (itemset, support, frequent itemset, confidence, etc). Generate frequent itemsetsLet's find all sets of items with a support greater than some threshold.We define 4 functions for generating frequent itemsets:* createC1 - Create first candidate itemsets for k=1* scanD - Identify itemsets that meet the support threshold* aprioriGen - Generate the next list of candidates* apriori - Generate all frequent itemsetsSee slides for explanation of functions.
###Code
def createC1(dataset):
"Create a list of candidate item sets of size one."
c1 = []
for transaction in dataset:
for item in transaction:
if not [item] in c1:
c1.append([item])
c1.sort()
    #frozenset because it will be a key of a dictionary.
return list(map(frozenset, c1))
def scanD(dataset, candidates, min_support):
"Returns all candidates that meets a minimum support level"
sscnt = {}
for tid in dataset:
for can in candidates:
if can.issubset(tid):
sscnt.setdefault(can, 0)
sscnt[can] += 1
num_items = float(len(dataset))
retlist = []
support_data = {}
for key in sscnt:
support = sscnt[key] / num_items
if support >= min_support:
retlist.insert(0, key)
support_data[key] = support
return retlist, support_data
def aprioriGen(freq_sets, k):
"Generate the joint transactions from candidate sets"
retList = []
lenLk = len(freq_sets)
for i in range(lenLk):
for j in range(i + 1, lenLk):
L1 = list(freq_sets[i])[:k - 2]
L2 = list(freq_sets[j])[:k - 2]
L1.sort()
L2.sort()
if L1 == L2:
retList.append(freq_sets[i] | freq_sets[j]) # | is set union
return retList
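# (Added note) In aprioriGen above, two frequent (k-1)-itemsets are joined only when
# their first k-2 listed elements agree; this prefix test keeps each candidate k-itemset
# from being generated more than once. For the small integer itemsets used here, e.g.
# {1,2} and {1,3} join to give {1,2,3}, while {2,3} and {1,3} do not join.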
def apriori(dataset, min_support=0.5):
"Generate a list of candidate item sets"
C1 = createC1(dataset)
D = list(map(set, dataset))
L1, support_data = scanD(D, C1, min_support)
L = [L1]
k = 2
while (len(L[k - 2]) > 0):
Ck = aprioriGen(L[k - 2], k)
Lk, supK = scanD(D, Ck, min_support)
support_data.update(supK)
L.append(Lk)
k += 1
return L, support_data
###Output
_____no_output_____
###Markdown
Itemset generation on sample data
###Code
MIN_SUPPORT=0.5
# Sample data
DATASET = [[1, 3, 4], [2, 3, 5], [1, 2, 3, 5], [2, 5]]
print('Dataset in list-of-lists format:\n', DATASET, '\n')
# Generate a first candidate itemsets for k=1
C1 = createC1(DATASET)
print('Initial 1-itemset candidates:\n', C1, '\n')
# Convert data to a list of sets
D = list(map(set, DATASET))
print('Dataset in list-of-sets format:\n', D, '\n')
# Identify items that meet support threshold (0.5)
# Note that {4} isn't here as it only occurs in one transaction.
# Remove it so we don't generate any further candidate itemsets containing {4}.
L1, support_data = scanD(D, C1, MIN_SUPPORT)
print('1-itemsets that appear in at least 50% of transactions:\n', L1, '\n')
# Generate the next list of candidates
print('Next set of candidates:\n', aprioriGen(L1,2), '\n')
# Generate all candidate itemsets
L, support_data = apriori(DATASET, min_support=MIN_SUPPORT)
print('Full list of candidate itemsets:\n', L, '\n')
print('Support values for candidate itemsets:\n', support_data, '\n')
###Output
('Dataset in list-of-lists format:\n', [[1, 3, 4], [2, 3, 5], [1, 2, 3, 5], [2, 5]], '\n')
('Initial 1-itemset candidates:\n', [frozenset([1]), frozenset([2]), frozenset([3]), frozenset([4]), frozenset([5])], '\n')
('Dataset in list-of-sets format:\n', [set([1, 3, 4]), set([2, 3, 5]), set([1, 2, 3, 5]), set([2, 5])], '\n')
('1-itemsets that appear in at least 50% of transactions:\n', [frozenset([1]), frozenset([3]), frozenset([2]), frozenset([5])], '\n')
('Next set of candidates:\n', [frozenset([1, 3]), frozenset([1, 2]), frozenset([1, 5]), frozenset([2, 3]), frozenset([3, 5]), frozenset([2, 5])], '\n')
('Full list of candidate itemsets:\n', [[frozenset([1]), frozenset([3]), frozenset([2]), frozenset([5])], [frozenset([1, 3]), frozenset([2, 5]), frozenset([2, 3]), frozenset([3, 5])], [frozenset([2, 3, 5])], []], '\n')
('Support values for candidate itemsets:\n', {frozenset([5]): 0.75, frozenset([3]): 0.75, frozenset([2, 3, 5]): 0.5, frozenset([1, 2]): 0.25, frozenset([1, 5]): 0.25, frozenset([3, 5]): 0.5, frozenset([4]): 0.25, frozenset([2, 3]): 0.5, frozenset([2, 5]): 0.75, frozenset([1]): 0.5, frozenset([1, 3]): 0.5, frozenset([2]): 0.75}, '\n')
###Markdown
TODO Exploring support thresholds* Generate frequent itemsets with a support threshold of 0.7* How many frequent itemsets do we get at 0.7?* How many do we get at 0.3?* What would be a reasonable value for supermarket transaction data?* Do you have datasets that resemble transactions?* What about the apps/websites you use?
###Code
# 1 -
l0_7, sd0_7 = apriori(DATASET, min_support=0.7)
print('Full list of candidate itemsets:\n', l0_7, '\n')
print('Support values for candidate itemsets:\n', sd0_7, '\n')
# 2 -
print('Number of frequent itemsets at 0.7:', len([i for ksets in l0_7 for i in ksets]))
# 3 -
l0_3, sd0_3 = apriori(DATASET, min_support=0.3)
print('Number of frequent itemsets at 0.3:', len([i for ksets in l0_3 for i in ksets]))
# 4 - Much lower (e.g., 5%) to actually generate any frequent itemsets on real data
# 5 - Could imagine doing this for files to know what tends to be open at the same time.
# 6 - Many, many! E.g., Amazon, Netflix.
###Output
_____no_output_____
###Markdown
Mine association rulesGiven frequent itemsets, we can create association rules.We add three more functions:* calc_confidence - Identify rules that meet the confidence threshold* rules_from_conseq - Recursively generate and evaluate candidate rules* generateRules - Mine all confident association rulesSee slides for explanation of functions.
###Code
def calc_confidence(freqSet, H, support_data, rules, min_confidence=0.7):
"Evaluate the rule generated"
pruned_H = []
for conseq in H:
conf = support_data[freqSet] / support_data[freqSet - conseq]
if conf >= min_confidence:
#print(freqSet - conseq, '--->', conseq, 'conf:', conf)
rules.append((freqSet - conseq, conseq, conf))
pruned_H.append(conseq)
return pruned_H
def rules_from_conseq(freqSet, H, support_data, rules, min_confidence=0.7):
"Generate a set of candidate rules"
m = len(H[0])
if (len(freqSet) > (m + 1)):
Hmp1 = aprioriGen(H, m + 1)
Hmp1 = calc_confidence(freqSet, Hmp1, support_data, rules, min_confidence)
if len(Hmp1) > 1:
rules_from_conseq(freqSet, Hmp1, support_data, rules, min_confidence)
def generateRules(L, support_data, min_confidence=0.7):
"""Create the association rules
L: list of frequent item sets
support_data: support data for those itemsets
min_confidence: minimum confidence threshold
"""
rules = []
for i in range(1, len(L)):
for freqSet in L[i]:
H1 = [frozenset([item]) for item in freqSet]
print("freqSet", freqSet, 'H1', H1)
if (i > 1):
rules_from_conseq(freqSet, H1, support_data, rules, min_confidence)
else:
calc_confidence(freqSet, H1, support_data, rules, min_confidence)
return rules
def print_rules(rules):
for r in rules:
print('{} ==> {} (c={})'.format(*r))
###Output
_____no_output_____
###Markdown
Rule mining on sample data
###Code
MIN_CONFIDENCE=0.7
# Mine association rules
association_rules = generateRules(L, support_data, min_confidence=MIN_CONFIDENCE)
print_rules(association_rules)
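# (Added check) confidence is just a ratio of supports; e.g. for the rule {5} ==> {2}
# printed above: support({2,5}) / support({5}) = 0.75 / 0.75 = 1.0
conf_5_to_2 = support_data[frozenset([2, 5])] / support_data[frozenset([5])]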
###Output
('freqSet', frozenset([1, 3]), 'H1', [frozenset([1]), frozenset([3])])
('freqSet', frozenset([2, 5]), 'H1', [frozenset([2]), frozenset([5])])
('freqSet', frozenset([2, 3]), 'H1', [frozenset([2]), frozenset([3])])
('freqSet', frozenset([3, 5]), 'H1', [frozenset([3]), frozenset([5])])
('freqSet', frozenset([2, 3, 5]), 'H1', [frozenset([2]), frozenset([3]), frozenset([5])])
frozenset([1]) ==> frozenset([3]) (c=1.0)
frozenset([5]) ==> frozenset([2]) (c=1.0)
frozenset([2]) ==> frozenset([5]) (c=1.0)
###Markdown
TODO Exploring confidence thresholds* Mine rules with a confidence threshold of 0.9* How many rules do we get at 0.9?* How many do we get at 0.5?* What would be a reasonable value for supermarket transaction data?* Can we use this for recommendation (e.g., Amazon, Netflix)?
###Code
# 1 -
r0_9 = generateRules(L, support_data, min_confidence=0.9)
print('Rules for confidence threshold of 0.9:')
print_rules(r0_9)
# 2 -
print('Number of rules at 0.9:', len(r0_9))
# 3 -
r0_5 = generateRules(L, support_data, min_confidence=0.5)
print('Rules for confidence threshold of 0.5:')
print_rules(r0_5)
print('Number of rules at 0.5:', len(r0_5))
# 4 - 70% might be reasonable; it will depend on the data and how many rules the business can use
# 5 - Absolutely, especially in session-focused recommendation ignoring user profile and history.
# [https://en.wikipedia.org/wiki/Recommender_system]
# [https://www.quora.com/How-does-Amazons-collaborative-filtering-recommendation-engine-work]
###Output
_____no_output_____
###Markdown
*STOP PLEASE. THE FOLLOWING IS FOR THE NEXT EXERCISE. THANKS.* EXERCISE: Clustering with k-means[Adapted from http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_digits.htmlexample-cluster-plot-kmeans-digits-py] Loading handwritten digits dataWe'll work with the handwritten digits dataset, a classic machine-learning dataset used to explore automatic recognition of handwritten digits (i.e., 0, 1, 2, ..., 9).For more information:* http://archive.ics.uci.edu/ml/datasets/Pen-Based+Recognition+of+Handwritten+Digits* http://scikit-learn.org/stable/tutorial/basic/tutorial.htmlloading-an-example-dataset
###Code
%matplotlib inline
from sklearn.datasets import load_digits
import matplotlib.pyplot as plt
digits = load_digits()
print('The digits data comprises {} {}-dimensional representations of handwritten digits:\n{}\n'.format(
digits.data.shape[0],
digits.data.shape[1],
digits.data
))
print('It also includes labels:\n{}\n'.format(digits.target))
print('And it includes the original 8x8 image representation:\n{}\n'.format(digits.images[0]))
print('Let\'s look at a few images:\n')
NUM_SUBPLOT_ROWS = 1
NUM_SUBPLOT_COLS = 8
for i in range(NUM_SUBPLOT_ROWS*NUM_SUBPLOT_COLS):
_ = plt.subplot(NUM_SUBPLOT_ROWS,NUM_SUBPLOT_COLS,i+1)
_ = plt.imshow(digits.images[i], cmap=plt.cm.gray_r, interpolation='nearest')
###Output
The digits data comprises 1797 64-dimensional representations of handwritten digits:
[[ 0. 0. 5. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 10. 0. 0.]
[ 0. 0. 0. ..., 16. 9. 0.]
...,
[ 0. 0. 1. ..., 6. 0. 0.]
[ 0. 0. 2. ..., 12. 0. 0.]
[ 0. 0. 10. ..., 12. 1. 0.]]
It also includes labels:
[0 1 2 ..., 8 9 8]
And it includes the original 8x8 image representation:
[[ 0. 0. 5. 13. 9. 1. 0. 0.]
[ 0. 0. 13. 15. 10. 15. 5. 0.]
[ 0. 3. 15. 2. 0. 11. 8. 0.]
[ 0. 4. 12. 0. 0. 8. 8. 0.]
[ 0. 5. 8. 0. 0. 9. 8. 0.]
[ 0. 4. 11. 0. 1. 12. 7. 0.]
[ 0. 2. 14. 5. 10. 12. 0. 0.]
[ 0. 0. 6. 13. 10. 0. 0. 0.]]
Let's look at a few images:
###Markdown
Clustering handwritten digitsThat's the data. Now let's try clustering these 64d vectors.`scikit-learn` implements many different machine learning algorithms.The normal pattern is to:1. initialise an estimator (e.g., `estimator = KMeans()`)1. fit to the training data (e.g., `estimator.fit(training_data)`)1. label the test data (e.g., `estimator.predict(test_data)`)For clustering, we don't have separate training and test data.So the labelling is created when we fit and accessed by `estimator.labels_`. Note that, for clustering, these are cluster IDs. They are NOT labels.`estimator.inertia_` gives the sum of squared errors (SSE).
###Code
from sklearn.cluster import KMeans
from sklearn.preprocessing import scale
import numpy as np
# First let's scale the digits data (center to mean and scale to unit variance)
data = scale(digits.data)
print('Scaled digits data:\n{}\n'.format(data))
# Let's grab the data we'll need
n_samples, n_features = data.shape
n_digits = len(np.unique(digits.target)) # classes
labels = digits.target
# And let's run k-means, specifying initialisation (k-means++), k (n_digits),
# and the number of runs (10)
estimator = KMeans(init='k-means++', n_clusters=n_digits, n_init=10)
estimator.fit(data)
print('Sum of squared errors:', estimator.inertia_)
print('Clusters from k-means:', estimator.labels_[:10])
print('Gold standard classes:', labels[:10])
###Output
Scaled digits data:
[[ 0. -0.33501649 -0.04308102 ..., -1.14664746 -0.5056698
-0.19600752]
[ 0. -0.33501649 -1.09493684 ..., 0.54856067 -0.5056698
-0.19600752]
[ 0. -0.33501649 -1.09493684 ..., 1.56568555 1.6951369
-0.19600752]
...,
[ 0. -0.33501649 -0.88456568 ..., -0.12952258 -0.5056698
-0.19600752]
[ 0. -0.33501649 -0.67419451 ..., 0.8876023 -0.5056698
-0.19600752]
[ 0. -0.33501649 1.00877481 ..., 0.8876023 -0.26113572
-0.19600752]]
('Sum of squared errors:', 69417.009107181017)
('Clusters from k-means:', array([7, 5, 5, 3, 6, 3, 1, 9, 3, 3], dtype=int32))
('Gold standard classes:', array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]))
###Markdown
TODO Try different initialisationsInitialisation has a large effect on cluster output. Let's try a few options.* Initialise with random (`init='random'`)* Run PCA with k components (`pca = PCA(n_components=n_digits).fit(data)`)* Use PCA components to initialise `KMeans` (`init=pca.components_`)* Can we determine which approach is best?
###Code
# 1 -
est_random = KMeans(init='random', n_clusters=n_digits, n_init=10)
est_random.fit(data)
print('RANDOM INITIALISATION')
print('Num of squared errors:', est_random.inertia_)
print('Clusters from k-means:', est_random.labels_[:10])
print('Gold standard classes:', labels[:10])
print('')
# 2 -
from sklearn.decomposition import PCA
pca = PCA(n_components=n_digits).fit(data)
# 3 -
est_pca = KMeans(init=pca.components_, n_clusters=n_digits, n_init=1)
est_pca.fit(data)
print('INITIALISATION WITH PCA COMPONENTS')
print('Num of squared errors:', est_pca.inertia_)
print('Clusters from k-means:', est_pca.labels_[:10])
print('Gold standard classes:', labels[:10])
print('')
# 4 - It looks like k-means++ >> random >> PCA from SSE/inertia.
# But SSE is an internal validation measure.
# Since we're trying to cluster by digit, we can't really say
# which is best without comparing to the gold partition.
###Output
RANDOM INITIALISATION
('Num of squared errors:', 69865.653191838268)
('Clusters from k-means:', array([5, 7, 7, 9, 2, 9, 1, 4, 9, 9], dtype=int32))
('Gold standard classes:', array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]))
INITIALISATION WITH PCA COMPONENTS
('Num of squared errors:', 71820.930407889362)
('Clusters from k-means:', array([2, 9, 9, 3, 0, 3, 4, 1, 3, 3], dtype=int32))
('Gold standard classes:', array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]))
###Markdown
*STOP PLEASE. THE FOLLOWING IS FOR THE NEXT EXERCISE. THANKS.* EXERCISE: Evaluating clusteringSince we have gold-standard labels, we can compare our system clustering to the true partition.`scikit-learn` includes various metrics for this:* Homogeneity* Completeness* V-measure* Adjusted Rand index (ARI)* Adjusted mutual information (AMI)* Silhouette coefficientFor more information:* http://scikit-learn.org/stable/modules/clustering.htmlclustering-evaluationLet's compare the above clusterings using V-measure.Note that you may need to re-estimate or rename `est_random` and `est_pca` from the last exercise.
###Code
from sklearn import metrics
print('k-means++ initialisation:', metrics.v_measure_score(labels, estimator.labels_))
print('random initialisation: ', metrics.v_measure_score(labels, est_random.labels_))
print('pca initialisation: ', metrics.v_measure_score(labels, est_pca.labels_))
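# (Added example) the other metrics listed in the markdown above follow the same call
# pattern; e.g. the adjusted Rand index for the k-means++ run (it is also part of the
# benchmark in the next exercise):
ari_kmeanspp = metrics.adjusted_rand_score(labels, estimator.labels_)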
###Output
('k-means++ initialisation:', 0.62759639152686164)
('random initialisation: ', 0.6777181043537035)
('pca initialisation: ', 0.69322745961589083)
###Markdown
Comparing initialisationsLet's be a bit more exhaustive, comparing initialisations using various evaluation metrics.
###Code
from time import time
sample_size = 300
def bench_k_means(estimator, name, data):
"Calculate various metrics for comparing system clustering to a gold partition"
t0 = time()
estimator.fit(data)
print('% 9s %.2fs %i %.3f %.3f %.3f %.3f %.3f %.3f'
% (name, (time() - t0), estimator.inertia_,
metrics.homogeneity_score(labels, estimator.labels_),
metrics.completeness_score(labels, estimator.labels_),
metrics.v_measure_score(labels, estimator.labels_),
metrics.adjusted_rand_score(labels, estimator.labels_),
metrics.adjusted_mutual_info_score(labels, estimator.labels_),
metrics.silhouette_score(data, estimator.labels_,
metric='euclidean',
sample_size=sample_size)))
# print table header
print(75 * '_')
print('init time inertia homo compl v-meas ARI AMI silhouet')
print(75 * '_')
# benchmark k-means++ initialisation
bench_k_means(KMeans(init='k-means++', n_clusters=n_digits, n_init=10),
name="k-means++", data=data)
# benchmark random initialisation
bench_k_means(KMeans(init='random', n_clusters=n_digits, n_init=10),
name="random", data=data)
# benchmark PCA initalisation
pca = PCA(n_components=n_digits).fit(data)
bench_k_means(KMeans(init=pca.components_, n_clusters=n_digits, n_init=1),
name="PCA-based",
data=data)
print(75 * '_')
###Output
_____no_output_____
###Markdown
TODO Reading evaluation output- Which approach performs best? How would you order the other two?- Do your neighbours get the same result?- Can we apply statistical significance testing?- How else can we test reliability?
###Code
# 1 - PCA > random, PCA > k-means++, hard to say for random and k-means++
# 2 - Run multiple times, k-means clustering depends on initialisation and changes
# 3 - Not directly to the clustering output. Clustering is often more of an exploratory tool.
# 4 - A few possibilities..
# We could evaluate the impact of different clusterings on another task.
# We could do a bootstrap stability analysis
# (http://www.r-bloggers.com/bootstrap-evaluation-of-clusters/).
#    We could use cophenetic correlation for hierarchical clustering
# (https://en.wikipedia.org/wiki/Cophenetic_correlation).
###Output
_____no_output_____
###Markdown
*STOP PLEASE. THE FOLLOWING IS FOR THE NEXT EXERCISE. THANKS.* EXERCISE: Choosing k Create example data for choosing kFirst, let's create some example data with 4 clusters using make_blobs.We set `random_state=1` so we all get the same clusters.
###Code
from sklearn.datasets import make_blobs
# Generating the sample data from make_blobs
# This particular setting has one distict cluster and 3 clusters placed close
# together.
X, y = make_blobs(n_samples=500,
n_features=2,
centers=4,
cluster_std=1,
center_box=(-10.0, 10.0),
shuffle=True,
random_state=1) # For reproducibility
d1 = X[:,0] # first dimension
d2 = X[:,1] # second dimension
_ = plt.scatter(d1,d2)
###Output
_____no_output_____
###Markdown
Choosing k using silhouette analysis[Adapted from http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html]For good clusterings:* the average silhouette should be close to 1 indicating that points are far away from neighbouring clusters * all cluster silhouettes should be close to the average silhouette score
###Code
from sklearn.metrics import silhouette_samples, silhouette_score
import matplotlib.cm as cm
range_n_clusters = range(2,6)
for n_clusters in range_n_clusters:
# Create a subplot with 1 row and 2 columns
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.set_size_inches(18, 7)
# The 1st subplot is the silhouette plot
# The silhouette coefficient can range from -1, 1 but in this example all
# lie within [-0.1, 1]
ax1.set_xlim([-0.1, 1])
# The (n_clusters+1)*10 is for inserting blank space between silhouette
# plots of individual clusters, to demarcate them clearly.
ax1.set_ylim([0, len(X) + (n_clusters + 1) * 10])
# Initialize the clusterer with n_clusters value and a random generator
# seed of 10 for reproducibility.
clusterer = KMeans(n_clusters=n_clusters, random_state=10)
cluster_labels = clusterer.fit_predict(X)
# The silhouette_score gives the average value for all the samples.
# This gives a perspective into the density and separation of the formed
# clusters
silhouette_avg = silhouette_score(X, cluster_labels)
print("For n_clusters =", n_clusters,
"The average silhouette_score is :", silhouette_avg)
# Compute the silhouette scores for each sample
sample_silhouette_values = silhouette_samples(X, cluster_labels)
y_lower = 10
for i in range(n_clusters):
# Aggregate the silhouette scores for samples belonging to
# cluster i, and sort them
ith_cluster_silhouette_values = \
sample_silhouette_values[cluster_labels == i]
ith_cluster_silhouette_values.sort()
size_cluster_i = ith_cluster_silhouette_values.shape[0]
y_upper = y_lower + size_cluster_i
color = cm.spectral(float(i) / n_clusters)
ax1.fill_betweenx(np.arange(y_lower, y_upper),
0, ith_cluster_silhouette_values,
facecolor=color, edgecolor=color, alpha=0.7)
# Label the silhouette plots with their cluster numbers at the middle
ax1.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))
# Compute the new y_lower for next plot
y_lower = y_upper + 10 # 10 for the 0 samples
ax1.set_title("The silhouette plot for the various clusters.")
ax1.set_xlabel("The silhouette coefficient values")
ax1.set_ylabel("Cluster label")
# The vertical line for average silhoutte score of all the values
ax1.axvline(x=silhouette_avg, color="red", linestyle="--")
ax1.set_yticks([]) # Clear the yaxis labels / ticks
ax1.set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])
# 2nd Plot showing the actual clusters formed
colors = cm.spectral(cluster_labels.astype(float) / n_clusters)
ax2.scatter(X[:, 0], X[:, 1], marker='.', s=30, lw=0, alpha=0.7,
c=colors)
# Labeling the clusters
centers = clusterer.cluster_centers_
# Draw white circles at cluster centers
ax2.scatter(centers[:, 0], centers[:, 1],
marker='o', c="white", alpha=1, s=200)
for i, c in enumerate(centers):
ax2.scatter(c[0], c[1], marker='$%d$' % i, alpha=1, s=50)
ax2.set_title("The visualization of the clustered data.")
ax2.set_xlabel("Feature space for the 1st feature")
ax2.set_ylabel("Feature space for the 2nd feature")
plt.suptitle(("Silhouette analysis for KMeans clustering on sample data "
"with n_clusters = %d" % n_clusters),
fontsize=14, fontweight='bold')
plt.show()
###Output
_____no_output_____
###Markdown
TODO Choosing k[Derived from Data Science from Scratch, Chapter 19]The textbook suggests another interactive approach for choosing the number of clusters: plot SSE versus k and look for the knee (the point where the graph bends).- Plot inertia against k for k from 2 to 6- What k would you choose for each? Is it the same?- Does this work on the handwritten digits code? Why / why not?
###Code
# 1 -
range_n_clusters = [2, 3, 4, 5, 6]
inertia_values = [KMeans(n_clusters=k).fit(X).inertia_ for k in range_n_clusters]
_ = plt.plot(range_n_clusters, inertia_values)
# 2 - From this plot, it looks like the knee is at 4 or maybe 3
# 3 - Nope. The handwritten digits data is difficult to cluster.
# However, we haven't done anything clever with our feature representation.
# We might do better, e.g., with spectral clustering
# (http://scikit-learn.org/stable/modules/clustering.html#spectral-clustering).
#     Q: Does spectral clustering outperform k-means on the digits data?
# We leave this as an extra exercise for the keen reader.
range_n_clusters = range(5,15)
inertia_values = [KMeans(n_clusters=k).fit(data).inertia_ for k in range_n_clusters]
_ = plt.plot(range_n_clusters, inertia_values)
###Output
_____no_output_____ |
_notebooks/2020-01-22-HAR.ipynb | ###Markdown
"Physical activity classification using smartphone-data"> "The goal of this project is to predict the type of physical activity (e.g., walking, climbing stairs) from tri-axial smartphone accelerometer data."- toc: false- branch: master- badges: true- comments: true- categories: [project, machine-learning, random-forest, notebook, python]- image: images/vignette/har.png- hide: false- search_exclude: true Data Loading and Exploration load essential libraries
###Code
import pandas as pd
import numpy as np
import matplotlib as mpl
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
X = pd.read_csv("data/har/time_series.csv")
y = pd.read_csv("data/har/labels.csv").label
activities = {1:'standing', 2:'walking', 3:'stairs-down', 4:'stairs-up'}
labels = []
for i in range(len(y)):
label = np.repeat(y[i], 9)
labels.extend([*label, y[i]])
X['label'] = labels[:-6]
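# (Added note) the block above appears to expand each activity label to the ten
# accelerometer samples it annotates (nine repeats plus the original), then trims the
# last six entries so the label vector matches the number of rows in X.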
y = X.label
X.head()
X.info()
y
standing = X.label == 1
walking = X.label == 2
stairs_down = X.label == 3
stairs_up = X.label == 4
x = np.linspace(0, len(labels)-6, len(labels)-6)
mpl.style.use("fivethirtyeight")
%matplotlib notebook
fig, ax = plt.subplots(2, 2, figsize=(12, 8))
ax[0, 0].plot(x[standing], X.x[standing], x[standing],
X.y[standing], x[standing], X.z[standing], '-', alpha=0.4)
ax[0, 0].set_title(activities[1])
ax[0, 1].plot(x[walking], X.x[walking], x[walking],
X.y[walking], x[walking], X.z[walking], '-', alpha=0.4)
ax[0, 1].set_title(activities[2])
ax[1, 0].plot(x[stairs_down],
X.x[stairs_down], x[stairs_down],
X.y[stairs_down], x[stairs_down],
X.z[stairs_down], '-', alpha=0.4)
ax[1, 0].set_title(activities[3])
ax[1, 1].plot(X.timestamp[stairs_up], X.x[stairs_up], X.timestamp[stairs_up],
X.y[stairs_up], X.timestamp[stairs_up], X.z[stairs_up], '-', alpha=0.4)
ax[1, 1].set_title(activities[4])
fig.suptitle("Tri-Axial Linear Acceleration", fontsize=25)
plt.gcf().autofmt_xdate()
fig.text(0.5, 0.05, 'time', ha='center', fontsize=16)
fig.text(0.01, 0.5, 'acceleration', va='center', rotation='vertical', fontsize=16)
fig.show()
mpl.style.use("fivethirtyeight")
plt.plot(X.timestamp, X.x, X.timestamp, X.y, X.timestamp, X.z, '-', alpha=0.4)
plt.title("Tri-Axial Linear Acceleration")
plt.xlabel("time")
plt.ylabel("acceleration")
plt.gcf().autofmt_xdate()
plt.show()
standing = X.label == 1
walking = X.label == 2
stairs_down = X.label == 3
stairs_up = X.label == 4
%matplotlib notebook
fig,axs = plt.subplots(4,1, figsize = (16,12), sharex=True)
sns.kdeplot(X.x[standing], shade=True, ax=axs[0])
sns.kdeplot(X.y[standing], shade=True, ax=axs[0])
sns.kdeplot(X.z[standing], shade=True, ax=axs[0])
sns.kdeplot(X.x[walking], shade=True, ax=axs[1])
sns.kdeplot(X.y[walking], shade=True, ax=axs[1])
sns.kdeplot(X.z[walking], shade=True, ax=axs[1])
sns.kdeplot(X.x[stairs_down], shade=True, ax=axs[2])
sns.kdeplot(X.y[stairs_down], shade=True, ax=axs[2])
sns.kdeplot(X.z[stairs_down], shade=True, ax=axs[2])
sns.kdeplot(X.x[stairs_up], shade=True, ax=axs[3])
sns.kdeplot(X.y[stairs_up], shade=True, ax=axs[3])
sns.kdeplot(X.z[stairs_up], shade=True, ax=axs[3])
axs[0].set_title(activities[1])
axs[1].set_title(activities[2])
axs[2].set_title(activities[3])
axs[3].set_title(activities[4])
axs[0].set_xlim((-3,2))
fig.suptitle("Tri-Axial Acceralometer Data", fontsize=20)
fig.show()
###Output
_____no_output_____
###Markdown
Modelling
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_score
train_covariates = X[['x', 'y', 'z']]
target = X.label
clf = RandomForestClassifier(max_depth=10, random_state=0)
def correlation(estimator, X, y):
estimator.fit(X,y)
y_pred = estimator.predict(X)
return r2_score(y, y_pred)
def accuracy(estimator, X, y):
estimator.fit(X,y)
y_pred = estimator.predict(X)
return accuracy_score(y, y_pred)
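# (Added note) these custom scorers refit the estimator on whatever split they receive,
# so inside cross_val_score below they effectively report training accuracy on each
# validation fold rather than held-out accuracy; passing scoring='accuracy' (or a scorer
# that only predicts) would give honest cross-validated estimates.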
test_score = accuracy(clf, train_covariates, target)
val_scores = cross_val_score(clf,
train_covariates,
target,
cv=10,
scoring=accuracy)
val_scores
test_score
val_scores.mean()
###Output
_____no_output_____ |
DSFS Chapter 12 - k Nearest Neighbor.ipynb | ###Markdown
Chapter 12 - K Nearest NeighborAll about the k nearest neighbor technique for classification. It relies on no mathematical assumptions and no heavy lifting. It just requires a distance calculation and an assumption that points close to one another are similar.
###Code
#bringing in functions from other chapters
from collections import Counter
import matplotlib.pyplot as plt
# dot product
def dot(v, w):
return sum(v_i * w_i
for v_i, w_i in zip(v,w))
def sum_of_squares(v):
return dot(v,v)
# magnitudes
import math
def magnitude(v):
return math.sqrt(sum_of_squares(v))
def squared_dist(v,w):
return sum_of_squares(vector_subtract(v,w))
def distance(v,w):
return magnitude(vector_subtract(v,w))
def vector_subtract(v, w):
return [v_i - w_i
for v_i, w_i in zip(v,w)]
def mean(x):
return sum(x) / len(x)
# we need a way to count the votes from a k set of classifiers
def raw_majority_vote(labels):
votes = Counter(labels)
winner, _ = votes.most_common(1)[0]
return winner
# nothing to do about ties though.
# reducing k until there's a winner
def majority_vote(labels):
vote_counts = Counter(labels)
winner, winner_count = vote_counts.most_common(1)[0]
num_winners = len([count for count in vote_counts.values() if count == winner_count])
if num_winners == 1:
return winner
else:
return majority_vote(labels[:-1])
# easy to then make a classifier as it reduces labels until there's only one left and it wins
def knn_classify(k, labeled_points, new_point):
by_distance = sorted(labeled_points, key=lambda point_label : distance(point_label[0], new_point))
k_nearest_labels = [label for _, label in by_distance[:k]]
return majority_vote(k_nearest_labels)
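# (Added sanity check, hypothetical points not from the book) the nearest neighbour of
# [0.5, 0] below is [0, 0], so with k=1 the classifier should return "a":
_toy_points = [([0., 0.], "a"), ([1., 1.], "a"), ([5., 5.], "b")]
_toy_prediction = knn_classify(1, _toy_points, [0.5, 0.])  # expected: "a"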
# state border plotter
import re
import matplotlib.pyplot as plt
segments = []
points = []
lat_long_regex = r"<point lat=\"(.*)\" lng=\"(.*)\""
with open("states.txt", "r") as f:
lines = [line for line in f]
for line in lines:
if line.startswith("</state>"):
for p1, p2 in zip(points, points[1:]):
segments.append((p1, p2))
points = []
s = re.search(lat_long_regex, line)
if s:
lat, lon = s.groups()
points.append((float(lon), float(lat)))
def plot_state_borders(color='0.8'):
for (lon1, lat1), (lon2, lat2) in segments:
plt.plot([lon1, lon2], [lat1, lat2], color=color)
# example: favorite languages
cities = [([-122.3, 47.53], "Python"), ([ -96.85, 32.85], "Java"), ([ -89.33, 43.13], "R"),] # etc
plots = { "Java" : ([], []), "Python": ([],[]), "R" : ([], []) }
markers = { "Java" : "o", "Python" : 's', 'R' : "^" }
colors = { "Java" : 'r', "Python" : 'b', "R" : "g" }
for (longitude, latitude), language in cities:
plots[language][0].append(longitude)
plots[language][1].append(latitude)
for language, (x,y) in plots.items():
plt.scatter(x,y, color=colors[language], marker=markers[language], label=language, zorder=10)
plot_state_borders() # pretending we have a function that does this. Actually we do from the github page...
plt.legend(loc=0)
plt.axis([-130, -60, 20, 55])
plt.title("Favorite Programming Languages by City")
plt.show()
# now we will predict using the neighbors
for k in [1, 3, 5, 7]:
num_correct = 0
for city in cities:
location, actual_language = city
other_cities = [other_city for other_city in cities if other_city != city]
predicted_language = knn_classify(k, other_cities, location)
if predicted_language == actual_language:
num_correct += 1
print(k, "neigbor[s]: ", num_correct, "correct out of", len(cities))
# we can plot an entire grid worth of points and then plotting them as we did with cities
plots = { "Java" : ([], []), "Python": ([],[]), "R" : ([], []) }
#k = 1 # or 3 or 5 etc
k= 3
#k = 5
for longitude in range(-130, -60):
for latitude in range(20, 55):
predicted_language = knn_classify(k, cities, [longitude, latitude])
plots[predicted_language][0].append(longitude)
plots[predicted_language][1].append(latitude)
for language, (x,y) in plots.items():
plt.scatter(x,y, color=colors[language], marker=markers[language], label=language, zorder=10)
plot_state_borders() # pretending we have a function that does this. Actually we do from the github page...
plt.legend(loc=0)
plt.axis([-130, -60, 20, 55])
plt.title("Favorite Programming Languages by City")
plt.show()
###Output
_____no_output_____
###Markdown
The Curse of DimensionalityHigh-dimensional spaces are VAST. Points in these spaces tend not to be close to one another at all; one way to see this is by generating a d-dimensional unit cube and calculating the distances.
###Code
# generate some random data
import random
def random_point(dim):
return [random.random() for _ in range(dim)]
def random_distance(dim, num_pairs):
return [distance(random_point(dim), random_point(dim)) for _ in range(num_pairs)]
# for every dimension 1 to 100 we compute 10,000 distances and use those to compute the average distance between points
# and the minimum distance between points in each dimension
dimensions = range(1, 101)
avg_distance = []
min_distance = []
random.seed(0)
for dim in dimensions:
distances = random_distance(dim, 10000)
avg_distance.append(mean(distances))
min_distance.append(min(distances))
plt.plot(dimensions, (avg_distance))
plt.plot(dimensions, (min_distance))
plt.show()
# as the number of dimensions increases, the average distance increases; more importantly,
# the ratio between the minimum and the average distance approaches 1, so the closest
# point is barely closer than the average point.
min_avg_ratio = [min_dist / avg_dist for min_dist, avg_dist in zip(min_distance, avg_distance)]
plt.plot(dimensions, min_avg_ratio)
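# (Added note) analytically, each coordinate difference of two uniform points has
# E[(x_i - y_i)^2] = 1/6, so the mean squared distance is d/6 and the average distance
# grows roughly like sqrt(d/6) for large d; uncommenting the lines below overlays that curve:
# expected_avg = [math.sqrt(d / 6.) for d in dimensions]
# plt.plot(dimensions, expected_avg)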
###Output
_____no_output_____ |
2020.07.2400_classification/.ipynb_checkpoints/RF_knn_sqrt-checkpoint.ipynb | ###Markdown
Random Forest
###Code
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split,KFold
from sklearn.utils import shuffle
from sklearn.metrics import confusion_matrix,accuracy_score,precision_score,\
recall_score,roc_curve,auc
#import expectation_reflection as ER
#from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
#from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import GridSearchCV
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
from function import split_train_test,make_data_balance
np.random.seed(1)
###Output
_____no_output_____
###Markdown
First of all, the processed data are imported.
###Code
#data_list = ['1paradox']
#data_list = ['29parkinson','30paradox2','31renal','32patientcare','33svr','34newt','35pcos']
data_list = np.loadtxt('data_list_30sets.txt',dtype='str')
print(data_list)
def read_data(data_id):
data_name = data_list[data_id]
print('data_name:',data_name)
#Xy = np.loadtxt('%s/data_processed.dat'%data_name)
Xy = np.loadtxt('../classification_data/%s/data_processed_knn_sqrt.dat'%data_name)
X = Xy[:,:-1]
y = Xy[:,-1]
#print(np.unique(y,return_counts=True))
X,y = make_data_balance(X,y)
print(np.unique(y,return_counts=True))
X, y = shuffle(X, y, random_state=1)
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.5,random_state = 1)
sc = MinMaxScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
return X_train,X_test,y_train,y_test
def measure_performance(X_train,X_test,y_train,y_test):
model = RandomForestClassifier()
# Number of trees in random forest
#n_estimators = [int(x) for x in np.linspace(start = 10, stop = 100, num = 10)]
n_estimators = [10,50,100]
# Number of features to consider at every split
max_features = ['auto']
# Maximum number of levels in tree
#max_depth = [int(x) for x in np.linspace(1, 10, num = 10)]
max_depth = [2,4,6,8,10]
#max_depth.append(None)
# Minimum number of samples required to split a node
min_samples_split = [5, 10, 15, 20]
# Minimum number of samples required at each leaf node
min_samples_leaf = [int(x) for x in np.linspace(start = 1, stop = 5, num = 5)]
# Method of selecting samples for training each tree
#bootstrap = [True, False]
bootstrap = [True]
# Create the random grid
hyper_parameters = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
'bootstrap': bootstrap}
#random_search = RandomizedSearchCV(estimator = model, param_distributions = random_grid, n_iter = 100,
# cv = 4, verbose=2, random_state=1, n_jobs = -1)
# Create grid search using cross validation
clf = GridSearchCV(model, hyper_parameters, cv=4, iid='deprecated')
# Fit grid search
best_model = clf.fit(X_train, y_train)
# View best hyperparameters
#print('Best Penalty:', best_model.best_estimator_.get_params()['penalty'])
#print('Best C:', best_model.best_estimator_.get_params()['C'])
# best hyper parameters
print('best_hyper_parameters:',best_model.best_params_)
# performance:
y_test_pred = best_model.best_estimator_.predict(X_test)
acc = accuracy_score(y_test,y_test_pred)
#print('Accuracy:', acc)
p_test_pred = best_model.best_estimator_.predict_proba(X_test) # prob of [0,1]
p_test_pred = p_test_pred[:,1] # prob of 1
fp,tp,thresholds = roc_curve(y_test, p_test_pred, drop_intermediate=False)
roc_auc = auc(fp,tp)
#print('AUC:', roc_auc)
precision = precision_score(y_test,y_test_pred)
#print('Precision:',precision)
recall = recall_score(y_test,y_test_pred)
#print('Recall:',recall)
f1_score = 2*precision*recall/(precision+recall)
return acc,roc_auc,precision,recall,f1_score
n_data = len(data_list)
roc_auc = np.zeros(n_data) ; acc = np.zeros(n_data)
precision = np.zeros(n_data) ; recall = np.zeros(n_data)
f1_score = np.zeros(n_data)
#data_id = 0
for data_id in range(n_data):
X_train,X_test,y_train,y_test = read_data(data_id)
acc[data_id],roc_auc[data_id],precision[data_id],recall[data_id],f1_score[data_id] =\
measure_performance(X_train,X_test,y_train,y_test)
print(data_id,acc[data_id],roc_auc[data_id],precision[data_id],recall[data_id],f1_score[data_id])
print('acc_mean:',acc.mean())
print('roc_mean:',roc_auc.mean())
print('precision:',precision.mean())
print('recall:',recall.mean())
print('f1_score:',f1_score.mean())
np.savetxt('result_knn_sqrt_RF.dat',(roc_auc,acc,precision,recall,f1_score),fmt='%f')
###Output
_____no_output_____ |
data_preparation/prepare_dataframe.ipynb | ###Markdown
Helper Functions
###Code
def get_paths(raw_name, raw_index, dataset):
'''
raw_name: image_name, e.g., ag_trainset_renderpeople_bfh_archviz_5_10_cam02_00001.png
raw_index: index of person in the image (dataframe["min_occ_idx"])
dataset: for example, train_0
'''
# generate img path
img_name = raw_name.replace('.png','_1280x720.png')
img_name_ele = img_name.split("_")
img_path = "./{}/{}".format(dataset, img_name)
img_name_ele[-2] = "0"+img_name_ele[-2]
if (raw_index+1<10):
img_name_ele.insert(-1,"0000{}".format(raw_index+1))
else:
img_name_ele.insert(-1,"000{}".format(raw_index+1))
# generate target path
tgt_path = "_".join(img_name_ele) # for example, ag_trainset_renderpeople_bfh_archviz_5_10_cam02_000001_00001_1280x720.png
tgt_path = "./dataset/{}/{}_{}".format(dataset.split("_")[0], dataset, tgt_path)
# generate mask path
mask_folder = "_".join(img_name_ele[:5])
if (img_name_ele[-4].startswith("cam")):
img_name_ele.insert(-4,"mask")
else:
img_name_ele.insert(-3,"mask")
mask_name = "_".join(img_name_ele) # for example, ag_trainset_renderpeople_bfh_archviz_5_10_mask_cam02_000001_00001_1280x720.png
if dataset.startswith("train"):
mask_path = "./train_masks_1280x720/train/{}/{}".format(mask_folder,mask_name)
else:
mask_path = "./validation_masks_1280x720/{}/{}".format(mask_folder,mask_name)
return img_path, tgt_path, mask_path
def get_final_image(img_path, tgt_path, mask_path):
try:
# get mask image of selected person
img = cv2.imread(img_path)
mask = cv2.imread(mask_path, 0) # for foreground (person)
masked_img = cv2.bitwise_and(img, img, mask=mask)
new_mask = np.logical_not(mask) # for background => we want white background eventually
masked_img[new_mask]=255 # new_mask contains boolean entries and therefore can be used in this way
masked_img = cv2.cvtColor(masked_img, cv2.COLOR_BGR2RGB)
# crop image from the mask
c = np.nonzero(mask)
x_min = int(min(c[1]))
x_max = int(max(c[1]))
y_min = int(min(c[0]))
y_max = int(max(c[0]))
cropped_img = masked_img[y_min:y_max, x_min:x_max]
w = x_max - x_min
h = y_max - y_min
# scale the cropped image
scale = 200/max(w, h)
resized_w = int(scale*w)
resized_h = int(scale*h)
resized_cropped_img = cv2.resize(cropped_img, (resized_w, resized_h))
# generate final result (256*256 white background image)
final_result = np.zeros((256,256,3))
final_c_x = 128
final_c_y = 128
final_result += 255
final_result[int(final_c_y-resized_h/2):int(final_c_y+resized_h/2),int(final_c_x-resized_w/2):int(final_c_x+resized_w/2)] = resized_cropped_img
final_result = final_result.astype(int) # necessary
plt.imshow(final_result)
plt.axis("off")
plt.savefig("{}".format(tgt_path))
except:
print (img_path, tgt_path, mask_path)
pass
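# (Added illustration) using the example name from the docstring above,
# get_paths("ag_trainset_renderpeople_bfh_archviz_5_10_cam02_00001.png", 0, "train_0")
# returns the 1280x720 source image path under ./train_0/, the cropped target path
# under ./dataset/train/, and the matching mask path under ./train_masks_1280x720/train/.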
###Output
_____no_output_____
###Markdown
Prepare Training Images
###Code
train_df = []
# read data for each of the 10 training group
for i in range(10):
df = pd.read_pickle("./SMPLX/train_{}_withjv.pkl".format(i))[["imgPath", "occlusion", "gt_path_smplx"]]
df["dataset"] = "train_{}".format(i)
df["indices"] = df.apply(lambda x: list(range(len(x["occlusion"]))), axis=1)
df = df.explode("indices")
df["smplx_path"] = df.apply(lambda x: x["gt_path_smplx"][x["indices"]], axis=1)
df["occlusions"] = df.apply(lambda x: x["occlusion"][x["indices"]], axis=1)
paths = df.apply(lambda x: get_paths(x["imgPath"], x["indices"], x["dataset"]), axis=1)
df["src_img_path"] = paths.apply(lambda x: x[0])
df["tgt_img_path"] = paths.apply(lambda x: x[1])
df["mask_path"] = paths.apply(lambda x: x[2])
train_df.append(df[["dataset", "smplx_path", "src_img_path", "mask_path", "tgt_img_path", "indices", "occlusions"]])
train_df = pd.concat(train_df)
# select threshold of occlusions for training images
final_train_df = train_df[(train_df["occlusions"]>=0) & (train_df["occlusions"]<0.3)] # 9655 rows
print (final_train_df.shape)
final_train_df.to_csv("train_dataframe.csv", index=False)
###Output
(9655, 7)
###Markdown
Prepare Developing Images
###Code
dev_df = final_train_df.iloc[:100]
dev_df.to_csv("dev_dataframe.csv", index=False)
###Output
_____no_output_____
###Markdown
Prepare Validation Images (will be combined with Training Images for Training)
###Code
val_df = []
# read data for each of the 10 training group
for i in range(10):
df = pd.read_pickle("./SMPLX/validation_{}_withjv.pkl".format(i))[["imgPath", "occlusion", "gt_path_smplx"]]
df["dataset"] = "validation"
df["indices"] = df.apply(lambda x: list(range(len(x["occlusion"]))), axis=1)
df = df.explode("indices")
df["smplx_path"] = df.apply(lambda x: x["gt_path_smplx"][x["indices"]], axis=1)
df["occlusions"] = df.apply(lambda x: x["occlusion"][x["indices"]], axis=1)
paths = df.apply(lambda x: get_paths(x["imgPath"], x["indices"], x["dataset"]), axis=1)
df["src_img_path"] = paths.apply(lambda x: x[0])
df["tgt_img_path"] = paths.apply(lambda x: x[1])
df["mask_path"] = paths.apply(lambda x: x[2])
val_df.append(df[["dataset", "smplx_path", "src_img_path", "mask_path", "tgt_img_path", "indices", "occlusions"]])
val_df = pd.concat(val_df) # 10175 rows
final_val_df = val_df[(val_df["occlusions"]>=0) & (val_df["occlusions"]<0.3)]
print (final_val_df.shape) # 596 rows
final_val_df.to_csv("val_dataframe.csv", index=False)
final_df = pd.concat([final_train_df, final_val_df]) # 10251 rows
print (final_df.shape)
final_df.to_csv("agora_dataframe.csv", index=False)
###Output
(10251, 7)
|
reddit-science.ipynb | ###Markdown
**r/Sciences Post Classification with Interpretability**In this project I have created text classification models to classify Reddit posts according to their topics. I use Local Interpretable Model-Agnostic Explanations ([lime](https://lime-ml.readthedocs.io/en/latest/index.html)) to generate explanations for the predictions.[r/Sciences](https://www.reddit.com/r/sciences/) subreddit is a great community to share and discuss new scientific research and read about the latest advances in astronomy, biology, medicine, physics, social science, and more.
###Code
# Imports necessary libraries
import numpy as np
import pandas as pd
from google.colab import drive
import os
import praw
import time
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from lime import lime_text
from lime.lime_text import LimeTextExplainer
import warnings
warnings.filterwarnings("ignore")
# Changes directory to my Google Drive folder
drive.mount("/content/drive", force_remount=True)
os.chdir("/content/drive/My Drive/Colab Notebooks/Reddit/")
def add_or_update_posts(period="all", limit=1000):
"""Downloads and saves Reddit posts for a given period.
Args:
period (str): Can be one of the following – all, day, hour, month, week, year
limit (int): Number of posts
Returns:
None
"""
# Initializes a Reddit instance using provided credentials
reddit = praw.Reddit(client_id="client_id",
client_secret="client_secret",
password="password",
user_agent="user_agent",
username="username")
# Loads the "raw.csv" file, if it does not exist creates one
if not os.path.isfile("raw.csv"):
df = pd.DataFrame(columns=["id", "added_utc", "author.id", "author.name",
"edited", "created_utc", "link_flair_text",
"locked", "num_comments", "over_18", "permalink",
"title", "score", "upvote_ratio"])
df.to_csv("raw.csv", index=False)
else:
df = pd.read_csv("raw.csv")
# Imports top posts from a subreddit called "Science"
top_posts = reddit.subreddit("science").top(period, limit=limit)
# Creates a list with the names of the columns of the dataframe
cols = df.columns.tolist()
# Appends all posts from top_posts if they are not present in the dataframe or have been edited since
for post in top_posts:
d = {}
if post.id not in df["id"].values or post.edited > df.loc[df["id"] == post.id, "edited"].iloc[0]:
for c in cols:
try: d[c] = eval("post." + c)
except: d[c] = None
d["added_utc"] = round(time.time())
df.drop(df[df["id"] == post.id].index, inplace=True)
df = df.append(d, ignore_index=True)
# Writes the dataframe to the csv file
df.to_csv("raw.csv", index=False)
# Downloads and saves top posts from the following periods – all/year/month/week
for period in ["all", "year", "month", "week"]:
add_or_update_posts(period=period, limit = 1000)
# Loads all saved Reddit posts
df = pd.read_csv("raw.csv")
# Prints the number of posts and columns
df.shape
# Prints the counts of the top 20 most common flairs/subjects
df["link_flair_text"].value_counts().head(20)
# Remove all posts with subjects that have less than 100 posts
unique_flairs = df["link_flair_text"].value_counts()
df = df[df["link_flair_text"].isin(unique_flairs.index[unique_flairs>100])]
# Prints the counts of remaining subjects
df["link_flair_text"].value_counts()
# Prints the average upvote ratio for each subject
df.groupby("link_flair_text")["upvote_ratio"].mean().sort_values(ascending=False)
# Plots the distributions of upvote ratios by subject
fig, axs = plt.subplots(ncols=2, figsize=(15,5))
g1 = sns.kdeplot(df["upvote_ratio"], label="", ax=axs[0])
for flair in df.groupby("link_flair_text")["upvote_ratio"].mean().sort_values(ascending=False).index.tolist():
g2 = sns.kdeplot(df.loc[df["link_flair_text"] == flair, "upvote_ratio"], label=flair, ax=axs[1])
g1.set(xlim=(0,1), title="All Posts", xlabel="Upvote ratio", ylabel="Density")
g2.set(xlim=(0,1), title="Posts by Flair", xlabel="Upvote ratio", ylabel="Density")
fig.show()
# Creates a new column from the post title
df["text"] = df["title"]
# Replaces new lines and multiple spaces with a single space
df["text"] = df["text"].str.replace("\n", " ", regex=False)
df["text"] = df["text"].str.replace("\s{2,}", " ")
# Removes leading and trailing whitespace
df["text"] = df["text"].str.strip()
# Splits the dataset into train and test sets stratified by the subject
df_train, df_test = train_test_split(df, test_size=0.3, stratify=df["link_flair_text"], random_state=1)
# Prints the counts of subjects in the training set
df_train["link_flair_text"].value_counts()
# Prints the counts of subjects in the test set
df_test["link_flair_text"].value_counts()
# Upsamples the posts in the training set by duplication
max_size = df_train["link_flair_text"].value_counts().max()
dfs = [df_train]
for i, g in df_train.groupby("link_flair_text"):
dfs.append(g.sample(max_size-len(g), replace=True))
df_train = pd.concat(dfs)
df_train["link_flair_text"].value_counts()
# Separates the training and test sets into text and category
x_train, y_train = df_train["text"], df_train["link_flair_text"]
x_test, y_test = df_test["text"], df_test["link_flair_text"]
# Converts the posts into a matrix of TF-IDF features
vec = TfidfVectorizer(sublinear_tf=True, min_df=5, norm="l2", ngram_range=(1, 2), stop_words="english")
# Creates a list of classification models
clfs = [SGDClassifier(loss="log"), LogisticRegression(), MultinomialNB()]
# Creates models for each type of classifier and record classification accuracies
accs = []
for clf in clfs:
pipe = make_pipeline(vec, clf)
model = pipe.fit(x_train, y_train)
prediction = model.predict(x_test)
accs.append(accuracy_score(y_test, prediction))
print("Accuracies – " + ", ".join([f"{acc*100:.4f}%" for acc in accs]))
# Retrains the model with the highest accuracy
clf = clfs[np.argmax(accs)]
pipe = make_pipeline(vec, clf)
model = pipe.fit(x_train, y_train)
prediction = model.predict(x_test)
acc = accuracy_score(y_test, prediction)
"The selected classifier is '" + list(model.named_steps.keys())[1] + "' with " f"{acc*100:.2f}%" + " accuracy."
# Plots the confusion matrix for evaluation of the classification accuracy
conf_matrix = confusion_matrix(y_test, prediction)
rows = clf.classes_
columns = clf.classes_
df_conf_matrix = pd.DataFrame(conf_matrix, columns, rows)
plt.figure(figsize=(10,6))
sns.heatmap(df_conf_matrix, annot=True, cmap=sns.cubehelix_palette(30), fmt="g")
plt.xlabel("Predicted", fontsize = 12)
plt.ylabel("Actual", fontsize = 12)
plt.show()
# Picks a random post from the test set
random_row_number = np.random.randint(0, len(x_test))
print("Actual:", y_test.iloc[random_row_number])
print("Prediction:", prediction[random_row_number])
# Generates LIME explanation of the prediction
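# (Added note) LIME perturbs the post text, queries the fitted pipeline for predicted
# probabilities on the perturbed samples, and fits a sparse local linear surrogate model;
# the weights shown below are those of that surrogate, for the top_labels most probable
# classes and limited to num_features tokens.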
explainer = LimeTextExplainer(class_names = clf.classes_)
exp = explainer.explain_instance(x_test.iloc[random_row_number], model.predict_proba, num_features=10, top_labels=2)
exp.show_in_notebook(text=True)
###Output
Actual: Psychology
Prediction: Psychology
|
DensePose_COCO_Visualize.ipynb | ###Markdown
Visualization of DensePose-COCO datasetIn this notebook, we visualize the DensePose-COCO annotations on the images.The densepose COCO dataset annotations are provided within the coco annotation framework and can be handled directly using the pycocotools. _Visualization of the partitioning of the surface and demonstration of "correspondence to a single point on a part"._ DensePose fields in annotations: Collected Masks* **'dp_masks' :** RLE encoded dense masks. All part masks are of size 256x256 and maps to 14 labels. Please note that these are not linked to the 3D template. These are semantically meaningful parts collected from annotators, we use these to sample annotation points. Collected Points* **'dp_x'**, **'dp_y' :** The spatial coordinates of collected points on the image. The coordinates are scaled such that the bounding box size is 256x256.* **'dp_I' :** The patch index that indicates which of the 24 surface patches the point is on.* **'dp_U'**, **'dp_V' :** Coordinates in the UV space. Each surface patch has a separate 2D parameterization.In the following, we reshape the collected masks and points
###Code
from pycocotools.coco import COCO
import os
import cv2
import matplotlib.pyplot as plt
import numpy as np
import pycocotools.mask as mask_util
from random import randint
coco_folder = '../detectron/datasets/data/coco/'
dp_coco = COCO( coco_folder + '/annotations/densepose_coco_2014_minival.json')
###Output
loading annotations into memory...
Done (t=0.66s)
creating index...
index created!
###Markdown
Select a random image, read it and load the annotations that correspond to this image.
###Code
# Get img id's for the minival dataset.
im_ids = dp_coco.getImgIds()
# Select a random image id.
Selected_im = im_ids[randint(0, len(im_ids) - 1)] # Choose im no 57 to replicate
# Load the image
im = dp_coco.loadImgs(Selected_im)[0]
# Load Anns for the selected image.
ann_ids = dp_coco.getAnnIds( imgIds=im['id'] )
anns = dp_coco.loadAnns(ann_ids)
# Now read and display the image.
im_name = os.path.join( coco_folder + 'val2014', im['file_name'] )
I=cv2.imread(im_name)
plt.imshow(I[:,:,::-1]); plt.axis('off'); plt.show()
###Output
_____no_output_____
###Markdown
Visualization of Collected MasksLet's visualize the collected masks on the image. These masks are used:* to sample points to collect dense correspondences.* as an auxiliary loss in DensePose-RCNN.* to obtain dense FG/BG maps. A function to get dense masks from the decoded masks.
###Code
def GetDensePoseMask(Polys):
MaskGen = np.zeros([256,256])
for i in range(1,15):
if(Polys[i-1]):
current_mask = mask_util.decode(Polys[i-1])
MaskGen[current_mask>0] = i
return MaskGen
###Output
_____no_output_____
###Markdown
Go over all anns and visualize them one by one.
###Code
I_vis=I.copy()/2 # Dim the image.
for ann in anns:
bbr = np.array(ann['bbox']).astype(int) # the box.
if( 'dp_masks' in ann.keys()): # If we have densepose annotation for this ann,
Mask = GetDensePoseMask(ann['dp_masks'])
################
x1,y1,x2,y2 = bbr[0],bbr[1],bbr[0]+bbr[2],bbr[1]+bbr[3]
x2 = min( [ x2,I.shape[1] ] ); y2 = min( [ y2,I.shape[0] ] )
################
MaskIm = cv2.resize( Mask, (int(x2-x1),int(y2-y1)) ,interpolation=cv2.INTER_NEAREST)
MaskBool = np.tile((MaskIm==0)[:,:,np.newaxis],[1,1,3])
# Replace the visualized mask image with I_vis.
Mask_vis = cv2.applyColorMap( (MaskIm*15).astype(np.uint8) , cv2.COLORMAP_PARULA)[:,:,:]
Mask_vis[MaskBool]=I_vis[y1:y2,x1:x2,:][MaskBool]
I_vis[y1:y2,x1:x2,:] = I_vis[y1:y2,x1:x2,:]*0.3 + Mask_vis*0.7
plt.imshow(I_vis[:,:,::-1]); plt.axis('off'); plt.show()
###Output
_____no_output_____
###Markdown
Visualization of Collected pointsLet's visualize the collected points on the image. For each collected point we have the surface patch index, and UV coordinates.The following snippet creates plots colored by I U and V coordinates respectively.
###Code
# Show images for each subplot.
fig = plt.figure(figsize=[15,5])
plt.subplot(1,3,1)
plt.imshow(I[:,:,::-1]/2);plt.axis('off');plt.title('Patch Indices')
plt.subplot(1,3,2)
plt.imshow(I[:,:,::-1]/2);plt.axis('off');plt.title('U coordinates')
plt.subplot(1,3,3)
plt.imshow(I[:,:,::-1]/2);plt.axis('off');plt.title('V coordinates')
## For each ann, scatter plot the collected points.
for ann in anns:
bbr = np.round(ann['bbox'])
if( 'dp_masks' in ann.keys()):
        Point_x = np.array(ann['dp_x'])/ 255. * bbr[2] # Stretch the points to current box.
        Point_y = np.array(ann['dp_y'])/ 255. * bbr[3] # Stretch the points to current box.
#
Point_I = np.array(ann['dp_I'])
Point_U = np.array(ann['dp_U'])
Point_V = np.array(ann['dp_V'])
#
x1,y1,x2,y2 = bbr[0],bbr[1],bbr[0]+bbr[2],bbr[1]+bbr[3]
x2 = min( [ x2,I.shape[1] ] ); y2 = min( [ y2,I.shape[0] ] )
###############
Point_x = Point_x + x1 ; Point_y = Point_y + y1
plt.subplot(1,3,1)
plt.scatter(Point_x,Point_y,22,Point_I)
plt.subplot(1,3,2)
plt.scatter(Point_x,Point_y,22,Point_U)
plt.subplot(1,3,3)
plt.scatter(Point_x,Point_y,22,Point_V)
plt.show()
###Output
_____no_output_____ |
dmu18/dmu18_CDFS-SWIRE/normalize_PACS_160_psf.ipynb | ###Markdown
PSF normalizationLet us assume that we have reduced an observation, for which we have determined the PSF by stacking the flux of point-like sources. The PSF we obtain will not be as high S/N as the instrumental PSF that has been determined by the instrument team. Moreover, it is likely to be fattened due to some small pointing errors. We need to find out what fraction of a point-like flux the PSF we have determined represents. In order to do this, we use the growth curve of the theoretical PSF that has been determined by the instrument team, and compare it to the growth curve we determine from our PSF.We will first look at a theoretical case, then turn to a practical example drawn from the PACS observation of the XMM-LSS. 1) Theoretical example. Let us suppose we have a perfect telescope, without any central obscuration or spider to support the secondary. Diffraction theory gives us the shape of the PSF in this case, an Airy function. Let's compute it, and assume the resolution is 10".
###Code
# import what we will need.
%matplotlib inline
import numpy as np
from astropy.io import fits
from astropy.table import Table
from astropy.io import ascii as asciiread
from matplotlib import pyplot as plt
from scipy import interpolate
from scipy import special
from scipy import signal
from scipy import fftpack
# Let us perform our computation with a 0.1" resolution on a 5' field of view
resol = 0.1
size = 300.
# wavelength
wavelength = 250e-6
# primary aperture = 3.6 m diameter
aperture = 3.6 / 2.
# Ensure we have an odd number of points
nbpix = np.ceil(size/resol) // 2 * 2 + 1
xcen = int((nbpix - 1) / 2)
ycen = int((nbpix - 1) / 2)
x = y = (np.arange(nbpix) - xcen)*resol
xv, yv = np.meshgrid(x, y, sparse=False, indexing='xy')
r = np.sqrt(xv**2+yv**2)
# avoid division by 0 problems in the center
r[xcen,ycen] = 1e-6
# coordinates in fourier
q = 2 * np.pi / wavelength * aperture * np.sin(r/3600.*np.pi/180.)
psf = (2*special.jn(1, q)/q)**2
# put back the correct value at center
psf[xcen, ycen] = 1.
# and normalize the PSF
psf = psf/(np.sum(psf)*resol**2)
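# (Added sanity check) for an unobscured circular aperture the first Airy minimum lies
# at ~1.22 lambda / D with D = 2 * aperture as defined above, i.e. about 17.5 arcsec
# for the 250 micron / 3.6 m values assumed here:
first_airy_zero_arcsec = np.degrees(1.22 * wavelength / (2. * aperture)) * 3600.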
plt.imshow(np.log10(psf))
print(r'$\int\int$ psf dx dy = {}'.format(np.sum(psf)*resol**2))
plt.plot(y[ycen-500:ycen+500], psf[ycen-500:ycen+500, xcen], label='Without obscuration')
plt.legend()
###Output
_____no_output_____
###Markdown
Let us now suppose that we observe a point source, and our image reconstruction has a ... This will show as a blurring of the image with a Gaussian of 10" FWHM. Let's generate this blurring.
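The blurring kernel used below is a normalized 2-D Gaussian, $$G(r) = \frac{1}{2\pi\sigma^2}\, e^{-r^2/2\sigma^2}, \qquad \mathrm{FWHM} = 2\sqrt{2\ln 2}\;\sigma,$$ which is the relation used in the next cell to convert the 10" FWHM into $\sigma$.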
###Code
fwhm = 10.
sigma = fwhm / 2. / np.sqrt(2. * np.log(2.))  # FWHM = 2 sqrt(2 ln 2) sigma
sigmasq = sigma**2
kernel_blur = 1./ 2./ np.pi / sigmasq * np.exp(-(r**2/2./sigmasq))
# Check our kernel is properly normalized
np.sum(kernel_blur*resol**2)
# apply the blur
psfblur = signal.convolve(psf, kernel_blur, mode='same')*resol**2
plt.plot(y[ycen-500:ycen+500], psf[ycen-500:ycen+500, xcen], label='Original')
plt.plot(y[ycen-500:ycen+500], psfblur[ycen-500:ycen+500, xcen], label='With blurring')
plt.legend()
###Output
_____no_output_____
###Markdown
We see the effect of blurring: the observed PSF is wider, and we have lost some flux in the central core. Suppose now that we observed this PSF with sources of unknown fluxes, so that we are unsure of its scaling, and that a background remains in our observation.
###Code
psfobs = psfblur * 2. + 1e-4
###Output
_____no_output_____
###Markdown
The question is now how to recover the PSF that serves for our observation. For this, we will use the PSFs' curves of growth.
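The curve of growth is simply the encircled flux as a function of radius, which we approximate on the pixel grid as $$E(r) = \iint_{r' \le r} \mathrm{PSF}(x, y)\, dx\, dy \;\approx\; \sum_{r_{ij} \le r} \mathrm{PSF}_{ij}\, \Delta^2,$$ where $\Delta$ is the pixel size (`resol`); this is exactly the cumulative sum over annuli computed in the next cell.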
###Code
radii = np.arange(0, np.max(r), resol)
growth_psf = np.zeros(radii.shape)
growth_psfobs = np.zeros(radii.shape)
nbpix_psfobs = np.zeros(radii.shape)
for i, radius in enumerate(radii):
if ((i % 100) == 0):
print(radius, np.max(radii))
if i == 0:
idj, idi = np.where(r <= radius)
growth_psf[i] = np.sum(psf[idj, idi])*resol**2
growth_psfobs[i] = np.sum(psfobs[idj, idi])*resol**2
nbpix_psfobs[i] =len(idi)
else:
idj, idi = np.where((r > radii[i-1]) & (r <= radius))
growth_psf[i] = growth_psf[i-1]+np.sum(psf[idj, idi])*resol**2
growth_psfobs[i] = growth_psfobs[i-1]+np.sum(psfobs[idj, idi])*resol**2
nbpix_psfobs[i] = nbpix_psfobs[i-1]+len(idi)
plt.plot(radii, growth_psf, label='PSF')
plt.plot(radii, growth_psfobs, label='Observed PSF')
plt.xlabel('Radius [arcsec]')
plt.ylabel('Encircled flux')
plt.legend()
###Output
_____no_output_____
###Markdown
This strongly rising shape of the observed PSF's curve of growth is a sure sign of a non-zero background. Let's determine it.
###Code
plt.plot(nbpix_psfobs, growth_psfobs)
plt.xlabel('Number of pixels')
plt.ylabel('Encircled flux')
###Output
_____no_output_____
###Markdown
When plotted as a function of the integrated area, there is a clear linear relation that we will fit:
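This is expected: for a constant residual background $b$ per unit area, the measured curve of growth is $$E_\mathrm{obs}(r) = E_\mathrm{source}(r) + b\, N_\mathrm{pix}(r)\, \Delta^2,$$ so once the source term has converged (at large radii) the slope of $E_\mathrm{obs}$ versus $N_\mathrm{pix}$ gives $b\,\Delta^2$, which is what the linear fit below extracts.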
###Code
idx, = np.where(radii > 50)
p = np.polyfit(nbpix_psfobs[idx], growth_psfobs[idx], 1)
bkg = p[0]/resol**2
# Correct PSF and curve of growth
psfcor = psfobs-bkg
growth_psfcor = growth_psfobs - bkg*nbpix_psfobs*resol**2
plt.plot(radii, growth_psf, label='PSF')
plt.plot(radii, growth_psfcor, label='Observed PSF')
plt.xlabel('Radius [arcsec]')
plt.ylabel('Encircled flux')
plt.legend()
###Output
_____no_output_____
###Markdown
Let's have a look at the ratio of the two:
###Code
plt.plot(radii[1:], growth_psfcor[1:]/growth_psf[1:])
plt.xlabel('Radius [arcsec]')
plt.ylabel('Ratio of encircled flux')
###Output
_____no_output_____
###Markdown
Due to the different resolutions, the ratio is not constant. Let's denote the calibration $C(r)$. Let us assume that our observed PSF encircled energy is of the form $E(r) = \alpha C(r \times \beta)$, where $\beta$ is the fattening of the PSF. If we differentiate as a function of $r$: $E'(r) = \alpha \beta C'(r \times \beta)$.
###Code
# compute the derivatives
deriv_growth_psf = (growth_psf[2:]-growth_psf[0:-2])/(radii[2:]-radii[0:-2])
deriv_growth_psfcor = (growth_psfcor[2:]-growth_psfcor[0:-2])/(radii[2:]-radii[0:-2])
plt.plot(radii[1:-1], deriv_growth_psf)
plt.plot(radii[1:-1], deriv_growth_psfcor)
plt.xlim([0,60])
###Output
_____no_output_____
###Markdown
Compared with the growth curve plot, the derivatives show clear maxima and minima that are out of phase. Finding the positions of these will tell us if our assumption of homothetic variation is correct.
###Code
# Find the local minima and maxima of the two curves.
# To find a local extremum, we will fit the portion of the curve with a degree 3 polynomial,
# extract the roots of its derivative and only retain the one that lies between the bounds.
# This is what the following function does.
def local_max(xvalues, yvalues, lower_bound, upper_bound, check_plot=False):
idx,=np.where((xvalues > lower_bound) & (xvalues < upper_bound))
p = np.polyfit(xvalues[idx], yvalues[idx], 3)
delta = (2.*p[1])**2 - 4.*3.*p[0]*p[2]
r1 = (-2*p[1]+np.sqrt(delta))/(2*3*p[0])
r2 = (-2*p[1]-np.sqrt(delta))/(2*3*p[0])
result = r1 if ((r1 > lower_bound) and (r1 < upper_bound)) else r2
if check_plot:
plt.plot(xvalues[idx], yvalues[idx])
plt.plot(xvalues[idx], p[0]*xvalues[idx]**3+p[1]*xvalues[idx]**2+
p[2]*xvalues[idx]+p[3], '--')
plt.plot(np.array([result, result]), np.array([np.min(yvalues), np.max(yvalues)]), '-')
return result
max_dpsf_1 = local_max(radii[1:-1], deriv_growth_psf, 3, 10, check_plot=True)
max_dpsfcor_1 = local_max(radii[1:-1], deriv_growth_psfcor, 3, 10, check_plot=True)
max_dpsf_2 = local_max(radii[1:-1], deriv_growth_psf, 14, 21, check_plot=True)
max_dpsfcor_2 = local_max(radii[1:-1], deriv_growth_psfcor, 14, 21, check_plot=True)
max_dpsf_3 = local_max(radii[1:-1], deriv_growth_psf, 21, 28, check_plot=True)
max_dpsfcor_3 = local_max(radii[1:-1], deriv_growth_psfcor, 21, 28, check_plot=True)
max_dpsf_4 = local_max(radii[1:-1], deriv_growth_psf, 28, 35, check_plot=True)
max_dpsfcor_4 = local_max(radii[1:-1], deriv_growth_psfcor, 28, 35, check_plot=True)
max_dpsf_5 = local_max(radii[1:-1], deriv_growth_psf, 35, 45, check_plot=True)
max_dpsfcor_5 = local_max(radii[1:-1], deriv_growth_psfcor, 35, 45, check_plot=True)
max_dpsf_6 = local_max(radii[1:-1], deriv_growth_psf, 40, 50, check_plot=True)
max_dpsfcor_6 = local_max(radii[1:-1], deriv_growth_psfcor, 40, 50, check_plot=True)
plt.xlabel('Radius [arcsec]')
# Lets pack all of them, adding the r=0 point.
max_dpsf = np.array([0, max_dpsf_1, max_dpsf_2, max_dpsf_3, max_dpsf_4, max_dpsf_5, max_dpsf_6])
max_dpsfcor = np.array([0, max_dpsfcor_1, max_dpsfcor_2, max_dpsfcor_3, max_dpsfcor_4,
max_dpsfcor_5, max_dpsfcor_6])
print(max_dpsf,max_dpsfcor)
###Output
[ 0. 6.18161082 17.48546882 23.79928199 32.07353102
38.40607579 46.76238796] [ 0. 6.52203326 18.7589197 24.07489413 32.78748297
38.5386345 47.21468159]
###Markdown
From the plot, we can deduce that our homothetic assumption is not perfect: the spacing increases for the first three (don't forget the point at 0, 0, not shown), is very small for the 4th and 6th, and gets narrower for the 5th and 7th... Let's plot the situation.
###Code
plt.plot(max_dpsf, max_dpsfcor, 'o-')
p = np.polyfit(max_dpsf[0:3], max_dpsfcor[0:3], 1)
plt.plot(max_dpsf, p[0]*max_dpsf+p[1])
plt.xlabel('extremum position of theoretical psf [arcsec]')
plt.ylabel('extremum position of observed blurred psf [arcsec]')
print(p)
print((max_dpsfcor[1]-max_dpsfcor[0])/(max_dpsf[1]-max_dpsf[0]))
print((max_dpsfcor[2]-max_dpsfcor[0])/(max_dpsf[2]-max_dpsf[0]))
# Lets use the data before 20", corresponding to the central core
beta = (max_dpsfcor[2]-max_dpsfcor[0])/(max_dpsf[2]-max_dpsf[0])
# lets interpolate at the scaled radius
tckpsfcor = interpolate.splrep(radii, growth_psfcor, s=0)
interp_growth_psfcor = interpolate.splev(radii*beta, tckpsfcor, der=0)
# check interpolation
plt.plot(radii*beta, growth_psf)
plt.plot(radii, growth_psfcor)
plt.plot(radii*beta, interp_growth_psfcor)
plt.xlim([0,60])
plt.xlabel('radius [arcsec]')
plt.ylabel('Encircled flux')
###Output
_____no_output_____
###Markdown
Let us check the ratio, using the psf with a corrected radius
###Code
plt.plot(radii[1:]*beta, interp_growth_psfcor[1:]/growth_psf[1:])
plt.xlabel('radius [arcsec]')
plt.ylabel('Ratio of encircled flux')
plt.xlim([0,60])
idx, = np.where(((radii*p[0]) > 0) & ((radii*p[0]) < 60))
scale_factor = np.median(interp_growth_psfcor[idx]/growth_psf[idx])
print("alpha = {:.3f}".format(scale_factor))
###Output
alpha = 2.005
###Markdown
We now have a much better looking ratio [compared with the cell where we computed the direct ratio](the_ratio), and we have a decent determination of the psf scaling. The normalized PSF to use for our observations is then:
###Code
psf_obs_norm = psfcor / scale_factor
print('\int \int psf_obs_norm dx dy = {}'.format(np.sum(psf_obs_norm)*resol**2))
###Output
\int \int psf_obs_norm dx dy = 0.9664739715532878
###Markdown
Indeed, let's look at the encircled energy in the core of our PSF. In this example, we have used the derivative of the curve of growth to derive the PSF fattening and, from it, the scale factor.
###Code
idj, idi = np.where(r<max_dpsfcor_2)
print('central core for observation: {}'.format(np.sum(psf_obs_norm[idj, idi])*resol**2))
idj, idi = np.where(r<max_dpsf_2)
print('central core for theoretical: {}'.format(np.sum(psf[idj, idi])*resol**2))
###Output
central core for observation: 0.8526738059811085
central core for theoretical: 0.8526789463354869
###Markdown
The two agree extremely well. Unfortunately, with real data it is not always possible, as we will see, to use the derivative of the curve of growth to derive the factor beta of PSF fattening. For real observations, one can use a brute-force approach: try all the reasonable couples (alpha, beta) and match the theoretical PSF to the observed one. This is how we will proceed next on real data. 2) Real data: PACS observations. We will look at a real stack of point sources in the PACS 160 $\mathrm{\mu m}$ CDFS-SWIRE observations, and try to find its normalization factor. Let's load the stacked PSF:
###Code
stackhd = fits.open('../../dmu26/data/CDFS-SWIRE/PACS/PSF/output_data_160/psf_native.fits')
psf = stackhd[1].data
hd = stackhd[1].header
plt.imshow(psf)
###Output
_____no_output_____
###Markdown
Set the resolution of the PSF. Because the map is in units of Jy/pixel, this turns out to be: = 1 if the PSF is at the same resolution as the map; otherwise, it should be the factor between the PSF and map pixel sizes. NOTE: the units are actually MJy/sr!
###Code
resol= np.abs(stackhd[1].header['CDELT1'])/np.abs(stackhd[0].header['CDELT1'])
resol
###Output
_____no_output_____
###Markdown
Now let's build the growthcurve for our PSF.
###Code
# find the brightest pixel, it will be our center.
jmax, imax = np.unravel_index(np.argmax(psf), psf.shape)
# build the array of coordinates
x = np.arange(hd['NAXIS1'])
y = np.arange(hd['NAXIS2'])
xv, yv = np.meshgrid(x, y, sparse=False, indexing='xy')
xp = (xv-imax)*np.abs(hd['CDELT1'])*3600.
yp = (yv-jmax)*np.abs(hd['CDELT2'])*3600.
r = np.sqrt(xp**2 + yp**2)
# build the growth curve
radii = np.unique(r)
encircled_flux = np.zeros(radii.shape)
nbpix = np.zeros(radii.shape)
for i, radius in enumerate(radii):
idj, idi = np.where(r <= radius)
nbpix[i] =len(idi)
#multiply by ((np.abs(hd['CDELT1'])*3600.)**2)/4.25E10 as map is in units of MJy/sr
encircled_flux[i] = np.sum(psf[idj, idi])*((np.abs(hd['CDELT1'])*3600.)**2)/4.25E10
hd['CDELT1']*3600.
plt.plot(radii, encircled_flux)
plt.xlabel('Radius [arcsec]')
plt.ylabel('Encircled flux')
###Output
_____no_output_____
###Markdown
Looking at the shape of the encircled flux, it looks like the background level of our PSF is not zero. Let's check
###Code
# This is clearly non-zero: check the median far from the PSF centre.
print(np.median(psf[0:5,:]))
plt.plot(nbpix, encircled_flux)
plt.xlabel('Number of pixels')
plt.ylabel('Encircled flux')
nbpix[1000]
# Let's do a linear fit to the outer part of the curve to determine the background
p = np.polyfit(nbpix[1000:], encircled_flux[1000:], 1)
bkg = p[0]/resol**2
print(bkg)
# Lets correct the psf and encircled flux
psf = psf - bkg
encircled_flux = encircled_flux - bkg * nbpix*resol**2
plt.plot(radii, encircled_flux)
plt.xlabel('Radius [arcsec]')
plt.ylabel('Encircled flux')
###Output
_____no_output_____
###Markdown
Our PSF now behaves correctly. Now let us compare our growth curve with the encircled energy curve provided by the instrument team. We use the standard growth curve for the PACS red (160 µm) band, taken with a 20"/s scan speed.
###Code
f = open('../data/PACS/EEF_red_20.txt', 'r')
lines = f.readlines()
f.close()
radiuseff = np.zeros(len(lines)-3)
valeff = np.zeros(len(lines)-3)
i = 0
for line in lines:
if line[0] != '#':
bits = line.split()
radiuseff[i] = float(bits[0])
valeff[i] = float(bits[1])
i = i+1
plt.plot(radiuseff, valeff, label='Calibration')
plt.plot(radii, encircled_flux/np.max(encircled_flux), label='Our PSF')
plt.xlim([0, 50])
plt.xlabel('Radius [arcsec]')
plt.ylabel('Encircled flux')
plt.legend()
###Output
_____no_output_____
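###Markdown
As an aside, since the EEF file is just two whitespace-separated columns with `#` comment lines (which is what the loop above assumes), the same table could be read more compactly. This is only an equivalent sketch under that assumption, not a change to the processing:
```python
import numpy as np

# Compact alternative read of the encircled-energy table
# (assumes two whitespace-separated columns and '#' comment lines).
eef = np.loadtxt('../data/PACS/EEF_red_20.txt', comments='#')
radiuseff, valeff = eef[:, 0], eef[:, 1]
```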
###Markdown
We will work below 30" where our PSF is well behaved
###Code
plt.plot(radiuseff, valeff, label='Calibration')
plt.plot(radii, encircled_flux/np.max(encircled_flux), label='Our PSF')
plt.xlim([0, 30])
plt.xlabel('Radius [arcsec]')
plt.ylabel('Encircled flux')
plt.legend()
###Output
_____no_output_____
###Markdown
We see that while the calibration curve still rises beyond 30", our PSF has reached a plateau. Let's denote the calibration $C(r)$. Our PSF encircled energy is of the form $E(r) = \alpha C(r \times \beta)$, where $\beta$ is the fattening of the PSF. We could take the derivative, but this is too noisy. Instead we use a brute-force approach.
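Concretely, the grid search in the next cell minimizes the sum of squared residuals $$R(r_f, f_f) = \sum_i \Big[ E(r_i) - f_f\, C(r_i / r_f) \Big]^2$$ over a grid of radius factors $r_f$ (the fattening) and flux factors $f_f$ (the scaling), using a spline interpolation of the calibration curve stretched by $r_f$.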
###Code
plt.plot(radiuseff, valeff, label='Calibration')
plt.plot(radii, encircled_flux/np.max(encircled_flux), label='Our PSF')
plt.xlim([0, 30])
plt.xlabel('Radius [arcsec]')
plt.ylabel('Encircled flux')
plt.legend()
rfactor = np.arange(1.,2., 1e-3)
ffactor = np.arange(1.,2., 1e-3)
# work with the data points between 2 and 6"
idx, = np.where((radii > 2) & (radii < 6))
xv = radii[idx]
yv = encircled_flux[idx]/np.max(encircled_flux)
resid = np.zeros((len(rfactor), len(ffactor)))
for i, rf in enumerate(rfactor):
print(i, rf)
tck = interpolate.splrep(radiuseff*rf, valeff, s=0)
yfit = interpolate.splev(xv, tck, der=0)
for j, ff in enumerate(ffactor):
resid[i, j] = np.sum((yv-yfit*ff)**2)
plt.imshow(np.log(resid))
###Output
_____no_output_____
###Markdown
This shows a minimum, with some degeneracy.
###Code
imin = np.argmin(resid)
rmin, fmin = np.unravel_index(imin, resid.shape)
print("rf = {:.3f}, ff = {:.3f}, residual = {:.3f}".format(rfactor[rmin], ffactor[fmin], resid[rmin, fmin]))
plt.plot(radiuseff*rfactor[rmin], valeff, label='Calibration')
plt.plot(radii, encircled_flux/np.max(encircled_flux)/ffactor[fmin], label='Our PSF')
plt.xlim([0, 200])
plt.xlabel('Radius [arcsec]')
plt.ylabel('Encircled flux')
plt.legend()
# The two curves overlap
psfok = psf/np.max(encircled_flux)/ffactor[fmin]
np.sum(psfok)
###Output
_____no_output_____
###Markdown
psfok is the PSF that a source of flux 1 Jy has in our data, and is to be used for source extraction. As the map is in units of MJy/sr, we divide by 1E6.
###Code
psfok=psfok/1.0E6
###Output
_____no_output_____
###Markdown
Validation. To check that the PSF is reasonable, let's look at a 100 micron source, e.g. `EN1-PACSxID24-ELAIS-N1-HerMES-1-29844`. We can see from `WP4-ELAIS-N1-HerMES-PACSxID24-v1.fits.gz` that it has a flux of 55 mJy. The maximum value in our normalised PSF gives the peak printed below. Since the PSF is at three times the resolution of the map, the peak could also be off-centre.
###Code
from astropy.table import Table
PACScat=Table.read('../../dmu26/data/CDFS-SWIRE/PACS/WP4-CDFS-SWIRE-PACSxID24-v1.fits.gz')
PACScat[PACScat['HELP_ID']=='CDS-PACSxID24-1-26528']
cpix=100
print("Max PSF = {:.4f} MJy/sr, off pixel Max PSF = {:.4f} MJy/sr".format(psfok[cpix-1,cpix-1]*0.108,psfok[cpix-2,cpix-2]*0.108))
import aplpy
import seaborn as sns
sns.set_style("white")
cmap=sns.cubehelix_palette(8, start=.5, rot=-.75,as_cmap=True)
fig=aplpy.FITSFigure('../../dmu26/data/CDFS-SWIRE/PACS/CDFS-SWIRE_PACS160_20160413_img_wgls.fits')
fig.recenter(PACScat[PACScat['HELP_ID']=='CDS-PACSxID24-1-26528']['RA'],PACScat[PACScat['HELP_ID']=='CDS-PACSxID24-1-26528']['Dec'], radius=0.002)
fig.show_colorscale(vmin=-1.0,vmax=25,cmap=cmap)
fig.add_colorbar()
fig.colorbar.set_location('top')
###Output
WARNING: Cannot determine equinox. Assuming J2000. [aplpy.wcs_util]
WARNING: Cannot determine equinox. Assuming J2000. [aplpy.wcs_util]
/Users/pdh21/anaconda3/envs/new/lib/python3.6/site-packages/aplpy/normalize.py:115: RuntimeWarning: invalid value encountered in less
negative = result < 0.
###Markdown
In summary, the PSF peak is within 10% of this source, and given that noise, the shape of the source, and a non-zero background will add additional uncertainty, this seems reasonable. Create PSF fits file
###Code
stackhd[1].data=psfok
stackhd.writeto('dmu18_PACS_160_PSF_CDFS-SWIRE_20171002.fits',output_verify='fix+warn', overwrite=True)
plt.hist(psfok.flatten(),bins=np.arange(-0.01,0.05,0.0005));
plt.yscale('log')
np.max(psfok)
###Output
_____no_output_____ |
docs/advanced/python/gaussian-filter-with-numpy.ipynb | ###Markdown
Gaussian Filter with Numpy. _Scipy is not required for this example._
###Code
# start by importing what we need
import numpy as np
import matplotlib.pyplot as plt
# I like using this function
def filterGaussian(signal,sigma):
"""Return the Gaussian-filtered signal. The returned array will be the same length, padded with None."""
size=sigma*10
points=np.exp(-np.power(np.arange(size)-size/2,2)/(2*np.power(sigma,2)))
kernel=points/sum(points)
smooth=np.convolve(signal,kernel,mode='valid')
smooth=np.concatenate(([None]*int(size/2),smooth,[None]*int(size/2)))
smooth=smooth[:len(signal)]
return smooth
# create some data points
nPoints=500
data=np.sin(np.arange(nPoints)/nPoints*np.pi*2)
# add some randomness
data+=np.random.random_sample(len(data))
# plot the data
plt.plot(data,'.',label="original data",alpha=.2)
plt.plot(filterGaussian(data,2),label="sigma: 2")
plt.plot(filterGaussian(data,10),label="sigma: 10")
plt.legend(fontsize=8)
# show the plot
plt.margins(0,.1)
plt.show()
###Output
_____no_output_____ |
tutorials/submit.ipynb | ###Markdown
Submitting an Extractor JobThis Jupyter Notebook tutorial uses Python to show the steps needed get your data processed by an extractor. In this tutorial we will be using the OpenDroneMap extractor. The same process can be used for any extractor although some extractor-specific details may vary. --- Contents- [Overview](overview)- [Audience](audience)- [What to expect](expect)- [Prerequisites](prerequisites)- [Cautions](cautions)- [Step 1 - Python Imports and Setup](step1)- [Step 2 - Specify the Experiment](step2)- [Step 3 - Required Request Parameters](step3)- [Step 4 - Optional Request Parameters](step4)- [Step 5 - Making the Request](step5)- [Completed](completed)- [Feedback](feedback)- [References](references)- [Acknowledgements](acknowledgements)--- OverviewThis tutorial covers how to use Python within a dockerized Jupyter notebook to send a request to Clowder to start processing a set of previously loaded drone data.Completing this tutorial will provide the background for submitting other extractor requests and determining if the requests are successful. AudienceThis notebook is for people that want to learn how to process drone data using the Clowder-based Drone Pipeline.It's helpful, but not necessary, to be familiar with Jupyter Notebooks and, perhaps, have some experience with Python. What to expect We will be using a Python library to do most of the work for us.Each step of this tutorial contains text describing what needs to be done and then presents code that performs those actions.In the code cells below, we will be loading the pipelineutils library, defining variables that provide information about the experiment and our Clowder credentials, and then making the request to start the extractor using the variables we defined.You will need to modify the code cells to match your actual data (sample data will work as well). Prerequisites To successfully complete this tutorial you will need to have an existing Clowder account and have data loaded into a dataset.Additionally, the Python `pipelineutils` library will need to have been installed on the Jupyter Notebook instance this tutorial is running on.>Perform the following steps to install the `pipelineutils` library:>1. click the "New Launcher" icon and select a terminal>2. In the terminal window execute the following command 'pip install pipelineutils' to install the library>>If you are having trouble installing, try adding a version number to the install request. Assuming the latest version is 1.0.4, your command would look like 'pip install pipelineutils==1.0.4'You can create a Clowder account at the [Drone Processing Pipeline](https://dronepipeline.cyverse.org/) instance of Clowder. Once you have your account, create a dataset and load a flight's worth of data into the dataset. Cautions There are two main files in the Clowder dataset to be processed that, if they are in the dataset, will be overwritten.These files are the *experiment.yaml* file and the *extractors-opendronemap.txt* file.If you have placed these files in the dataset this tutorial will process, you should download them to preserve them. --- Step 1 - Python Imports and Setup The first step is to let Python know which libraries you will be needing for your commands.We are also going to define the Clowder URL so the calls we make know which instance to access.You will need to replace the endpoint with the URL of your Clowder instance.
###Code
# Importing the libraries we will need
import pipelineutils.pipelineutils as dpu
clowder_url="https://dronepipeline.cyverse.org" # Replace this value with your Clowder URL
###Output
_____no_output_____
###Markdown
--- Step 2 - Specify your Experiment There are several pieces of information needed by the extractor for its processing.We are focused on the OpenDroneMap extractor in this tutorial and are providing the information that it needs.Other extractors have different requirements which can be found with their documentation.The timestamp needed is an ISO 8601 timestamp, formatted as a complete date with hours, minutes, and seconds: `YYYY-MM-DDThh:mm:ssTZD`.Each of the angle bracket values that are shown below, and the text within them, need to be replaced with your values.For example, if your study name is "Height 2019", you would replace "<study name>" with "Height 2019".
###Code
# Provide experiment information for the extractor
experiment = dpu.prepare_experiment("<study name>", # Replace <study name> with your study name
"<season name>", # Replace <season name> with your season name
"<timestamp>" # Replace <timestamp> with your timestamp
)
# Display what we have
print(experiment)
###Output
_____no_output_____
###Markdown
Assuming a study name of "Height 2019", a season of "Season 3", and a data capture timestamp of "2019-05-31T14:20:40-08:00", you would have the following as your experiment data after making the call:```pythonexperiment = { "studyName": "Height 2019", "season": "Season 3", "observationTimeStamp": "2019-05-31T14:20:40-08:00"}``` --- Step 3 - Required Request Parameters We have encountered two of the call parameters above when we configured the Clowder URL and the experiment. What they areThe additional required parameters are your Clowder credentials, the dataset name, the name of a space in Clowder, and the extractor name.- username and password: these are your Clowder login credentials- dataset: the name of the loaded drone data to process- extractor: the shorthand name of the extractor- space name: location where the results of processing will be organized Why they're neededThe credentials are used to access Clowder on your behalf; the dataset name is used to identify where the data resides that should be processed; a space name is where resulting data is organized in Clowder; the extractor name identifies which extractor we'll be running.
###Code
# Specify required parameters
username="email@address" # The Clowder username portion of credentials
password="password" # The password associated with the Clowder username
dataset="my dataset" # The dataset to associate with the extractor request
extractor="opendronemap" # The extractor to run. Note that this is not the full Clowder name
space_name="Processed" # The space name for processed data organization
###Output
_____no_output_____
###Markdown
--- Step 4 - Optional Request Parameters In addition to the required parameters described above, there are other parameters that could be specified when we make the call. What they areThe `space_must_exist` optional parameter has three values: *None*, *False*, and *True*.The default value for this parameter is `None` indicating that an attempt will be made to create the space in Clowder if it doesn't already exist.If the value for this parameter is changed to `True`, the space must already exist in Clowder when the call is made or an error will be returned.If the value for this parameter is `False`, then the space must *not* exist when the call is made or an error is returned. If `False` is specified and the space does not exist, it's created before the extractor is run.The `config_file` optional parameter defaults to `None` indicating that there isn't a configuration file specified. This parameter can be overridden with the path to a configuration file or a with a configuration string. In our case we will use an empty string as our OpenDroneMap configuration override - indicating we will accept the default configuration.Refer to the [extractors-opendronemap.txt.sample](https://opensource.ncsa.illinois.edu/bitbucket/projects/CATS/repos/extractors-opendronemap/browse/extractors-opendronemap.txt.sample?at=refs%2Fheads%2Fupdate_odm_extractor) file in BitBucket for more information on the contents of the OpenDroneMap extractor configuration overrides.The `api_key` optional parameter is used when a specific key is to be used when making calls to clowder.The default behavior by the library is to fetch a key associated with the username and password and then used to make calls.Specifying a key will override this behavior.The default value for this parameter is `None`.
###Code
# Defining optional parameters
space_must_exist=None # The variable name does not need to be the same as the parameter name
config_file="" # We are using a string to indicate acceptance of the default configuration
api_key=None # The Clowder API key to use when making requests
###Output
_____no_output_____
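###Markdown
If you want to exercise the optional parameters as well, the call in the next step can be extended with them. This is only a sketch: it assumes that `space_must_exist` and `api_key` are accepted as keyword arguments of `start_extractor`, matching the parameter names described above.
```python
# Hypothetical call passing the optional parameters (not required for this tutorial)
res = dpu.start_extractor(clowder_url, experiment, username, password,
                          dataset, extractor, space_name,
                          space_must_exist=space_must_exist,  # None / True / False, as described above
                          config_file=config_file,            # "" accepts the extractor defaults
                          api_key=api_key)                    # None lets the library fetch a key for you
```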
###Markdown
--- Step 5 - Making the Request We are now ready to make the call to schedule the OpenDroneMap extractor. In our example below we will only be using the required parameters, but you are free to experiment with using the optional parameters.
###Code
# Make the call
res = dpu.start_extractor(clowder_url, # The URL of Clowder instance
experiment, # Experiment configuration
username, # The username portion of Clowder credentials
password, # The password associated with the username
dataset, # The dataset to associate with the extractor
extractor, # Name of the extractor to schedule
space_name, # Name of the target space
config_file=config_file # The configuration to submit the job with
)
# Check the result for a problem
if res == False:
raise RuntimeError
# Everything is OK
print("Extractor request submitted")
###Output
_____no_output_____ |
_posts/matplotlib/area/matplotlib_area.ipynb | ###Markdown
New to Plotly? Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/). You can set up Plotly to work in [online](https://plot.ly/python/getting-started/initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/start-plotting-online). We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started! Version Check. Plotly's Python package is updated frequently. Run `pip install plotly --upgrade` to use the latest version.
###Code
import plotly
plotly.__version__
###Output
_____no_output_____
###Markdown
Area Plot
###Code
import plotly.plotly as py
import plotly.tools as tls
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot([2,1,3,1,2])
update = {'data':[{'fill': 'tozeroy'}]} # this updates the trace
plotly_fig = tls.mpl_to_plotly( fig )
plotly_fig.update(update)
py.iplot(plotly_fig, update=update, filename='mpl-basic-area')
###Output
_____no_output_____
###Markdown
Multiple Line Area Plot
###Code
import plotly.plotly as py
import plotly.tools as tls
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 2*np.pi, 100)
fig, ax = plt.subplots()
ax.plot(np.sin(x), label='sin'); ax.plot(np.cos(x), label='cos')
update = {'data':[{'fill': 'tozeroy'}]} # this updates BOTH traces now
plotly_fig = tls.mpl_to_plotly( fig )
plotly_fig.update(update)
py.iplot(plotly_fig, filename='mpl-multi-fill')
###Output
_____no_output_____
###Markdown
Stacked Line Plot
###Code
import plotly.plotly as py
import plotly.tools as tls
import numpy as np
import matplotlib.pyplot as plt
# create our stacked data manually
y0 = np.random.rand(100)
y1 = y0 + np.random.rand(100)
y2 = y1 + np.random.rand(100)
capacity = 3*np.ones(100)
# make the mpl plot (no fill yet)
fig, ax = plt.subplots()
ax.plot(y0, label='y0')
ax.plot(y1, label='y1')
ax.plot(y2, label='y2')
ax.plot(capacity, label='capacity')
# set all traces' "fill" so that it fills to the next 'y' trace
update = {'data':[{'fill': 'tonexty'}]}
# strip style just lets Plotly make the styling choices (e.g., colors)
plotly_fig = tls.mpl_to_plotly( fig )
plotly_fig.update(update)
py.iplot(plotly_fig, strip_style=True, filename='mpl-stacked-line')
###Output
_____no_output_____
###Markdown
Reference. See https://plot.ly/python/reference/ for more information and chart attribute options!
###Code
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'matplotlib_area.ipynb', 'matplotlib/filled-area-plots/', 'Filled Area Plots',
'How to make a filled area plot in matplotlib. An area chart displays a solid color between the traces of a graph.',
title = 'Matplotlib Filled Area Plots | Plotly',
has_thumbnail='true', thumbnail='thumbnail/area.jpg',
language='matplotlib',
page_type='example_index',
display_as='basic', ipynb='~notebook_demo/246')
###Output
_____no_output_____ |
Course4 - Convolutional Neural Networks/week4 Face Recognition and Neural Style Transfer/Face Recognition/Face Recognition for the Happy House - v3.ipynb | ###Markdown
Face Recognition for the Happy HouseWelcome to the first assignment of week 4! Here you will build a face recognition system. Many of the ideas presented here are from [FaceNet](https://arxiv.org/pdf/1503.03832.pdf). In lecture, we also talked about [DeepFace](https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf). Face recognition problems commonly fall into two categories: - **Face Verification** - "is this the claimed person?". For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem. - **Face Recognition** - "who is this person?". For example, the video lecture showed a face recognition video (https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem. FaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person. **In this assignment, you will:**- Implement the triplet loss function- Use a pretrained model to map face images into 128-dimensional encodings- Use these encodings to perform face verification and face recognitionIn this exercise, we will be using a pre-trained model which represents ConvNet activations using a "channels first" convention, as opposed to the "channels last" convention used in lecture and previous programming assignments. In other words, a batch of images will be of shape $(m, n_C, n_H, n_W)$ instead of $(m, n_H, n_W, n_C)$. Both of these conventions have a reasonable amount of traction among open-source implementations; there isn't a uniform standard yet within the deep learning community. Let's load the required packages.
###Code
from keras.models import Sequential
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
from keras.layers.merge import Concatenate
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.engine.topology import Layer
from keras import backend as K
K.set_image_data_format('channels_first')
import cv2
import os
import numpy as np
from numpy import genfromtxt
import pandas as pd
import tensorflow as tf
from fr_utils import *
from inception_blocks_v2 import *
%matplotlib inline
%load_ext autoreload
%autoreload 2
np.set_printoptions(threshold=np.nan)
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
0 - Naive Face VerificationIn Face Verification, you're given two images and you have to tell if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images are less than a chosen threshold, it may be the same person! **Figure 1** Of course, this algorithm performs really poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, even minor changes in head position, and so on. You'll see that rather than using the raw image, you can learn an encoding $f(img)$ so that element-wise comparisons of this encoding gives more accurate judgements as to whether two pictures are of the same person. 1 - Encoding face images into a 128-dimensional vector 1.1 - Using an ConvNet to compute encodingsThe FaceNet model takes a lot of data and a long time to train. So following common practice in applied deep learning settings, let's just load weights that someone else has already trained. The network architecture follows the Inception model from [Szegedy *et al.*](https://arxiv.org/abs/1409.4842). We have provided an inception network implementation. You can look in the file `inception_blocks.py` to see how it is implemented (do so by going to "File->Open..." at the top of the Jupyter notebook). The key things you need to know are:- This network uses 96x96 dimensional RGB images as its input. Specifically, inputs a face image (or batch of $m$ face images) as a tensor of shape $(m, n_C, n_H, n_W) = (m, 3, 96, 96)$ - It outputs a matrix of shape $(m, 128)$ that encodes each input face image into a 128-dimensional vectorRun the cell below to create the model for face images.
###Code
FRmodel = faceRecoModel(input_shape=(3, 96, 96))
print("Total Params:", FRmodel.count_params())
###Output
Total Params: 3743280
###Markdown
** Expected Output **Total Params: 3743280 By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings the compare two face images as follows: **Figure 2**: By computing a distance between two encodings and thresholding, you can determine if the two pictures represent the same personSo, an encoding is a good one if: - The encodings of two images of the same person are quite similar to each other - The encodings of two images of different persons are very differentThe triplet loss function formalizes this, and tries to "push" the encodings of two images of the same person (Anchor and Positive) closer together, while "pulling" the encodings of two images of different persons (Anchor, Negative) further apart. **Figure 3**: In the next part, we will call the pictures from left to right: Anchor (A), Positive (P), Negative (N) 1.2 - The Triplet LossFor an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network.<!--We will also add a normalization step at the end of our model so that $\mid \mid f(x) \mid \mid_2 = 1$ (means the vector of encoding should be of norm 1).!-->Training will use triplets of images $(A, P, N)$: - A is an "Anchor" image--a picture of a person. - P is a "Positive" image--a picture of the same person as the Anchor image.- N is a "Negative" image--a picture of a different person than the Anchor image.These triplets are picked from our training dataset. We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example. You'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$) by at least a margin $\alpha$:$$\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 + \alpha < \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$$You would thus like to minimize the following "triplet cost":$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \underbrace{\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2}_\text{(1)} - \underbrace{\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2}_\text{(2)} + \alpha \large ] \small_+ \tag{3}$$Here, we are using the notation "$[z]_+$" to denote $max(z,0)$. Notes:- The term (1) is the squared distance between the anchor "A" and the positive "P" for a given triplet; you want this to be small. - The term (2) is the squared distance between the anchor "A" and the negative "N" for a given triplet, you want this to be relatively large, so it thus makes sense to have a minus sign preceding it. - $\alpha$ is called the margin. It is a hyperparameter that you should pick manually. We will use $\alpha = 0.2$. Most implementations also normalize the encoding vectors to have norm equal one (i.e., $\mid \mid f(img)\mid \mid_2$=1); you won't have to worry about that here.**Exercise**: Implement the triplet loss as defined by formula (3). Here are the 4 steps:1. Compute the distance between the encodings of "anchor" and "positive": $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$2. Compute the distance between the encodings of "anchor" and "negative": $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$3. Compute the formula per training example: $ \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha$3. 
Compute the full formula by taking the max with zero and summing over the training examples:$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2+ \alpha \large ] \small_+ \tag{3}$$Useful functions: `tf.reduce_sum()`, `tf.square()`, `tf.subtract()`, `tf.add()`, `tf.maximum()`.For steps 1 and 2, you will need to sum over the entries of $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$ and $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$ while for step 4 you will need to sum over the training examples.
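To make formula (3) concrete, here is a tiny, hand-checkable NumPy sketch for a single triplet of toy 2-dimensional encodings (purely illustrative, not part of the graded code):
```python
import numpy as np

# Toy 2-D "encodings" for one triplet (illustration only)
A = np.array([1.0, 0.0])   # anchor
P = np.array([0.0, 1.0])   # positive, far from A -> large term (1)
N = np.array([1.0, 0.1])   # negative, close to A -> small term (2)
alpha = 0.2

pos_dist = np.sum((A - P) ** 2)   # term (1) = 2.0
neg_dist = np.sum((A - N) ** 2)   # term (2) = 0.01
loss = np.maximum(pos_dist - neg_dist + alpha, 0.0)
print(loss)                       # ~2.19 = max(2.0 - 0.01 + 0.2, 0)
```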
###Code
# GRADED FUNCTION: triplet_loss
def triplet_loss(y_true, y_pred, alpha = 0.2):
"""
Implementation of the triplet loss as defined by formula (3)
Arguments:
y_true -- true labels, required when you define a loss in Keras, you don't need it in this function.
y_pred -- python list containing three objects:
anchor -- the encodings for the anchor images, of shape (None, 128)
positive -- the encodings for the positive images, of shape (None, 128)
negative -- the encodings for the negative images, of shape (None, 128)
Returns:
loss -- real number, value of the loss
"""
anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
### START CODE HERE ### (≈ 4 lines)
# Step 1: Compute the (encoding) distance between the anchor and the positive, you will need to sum over axis=-1
pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis=-1)
# Step 2: Compute the (encoding) distance between the anchor and the negative, you will need to sum over axis=-1
neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis=-1)
# Step 3: subtract the two previous distances and add alpha.
basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha)
# Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
loss = tf.reduce_sum(tf.maximum(basic_loss, 0))
### END CODE HERE ###
return loss
with tf.Session() as test:
tf.set_random_seed(1)
y_true = (None, None, None)
y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1),
tf.random_normal([3, 128], mean=1, stddev=1, seed = 1),
tf.random_normal([3, 128], mean=3, stddev=4, seed = 1))
loss = triplet_loss(y_true, y_pred)
print("loss = " + str(loss.eval()))
###Output
loss = 528.143
###Markdown
**Expected Output**: **loss** 528.143 2 - Loading the trained model. FaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, we won't train it from scratch here. Instead, we load a previously trained model. Load a model using the following cell; this might take a couple of minutes to run.
###Code
FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
load_weights_from_FaceNet(FRmodel)
###Output
_____no_output_____
###Markdown
Here're some examples of distances between the encodings between three individuals: **Figure 4**: Example of distance outputs between three individuals' encodingsLet's now use this model to perform face verification and face recognition! 3 - Applying the model Back to the Happy House! Residents are living blissfully since you implemented happiness recognition for the house in an earlier assignment. However, several issues keep coming up: The Happy House became so happy that every happy person in the neighborhood is coming to hang out in your living room. It is getting really crowded, which is having a negative impact on the residents of the house. All these random happy people are also eating all your food. So, you decide to change the door entry policy, and not just let random happy people enter anymore, even if they are happy! Instead, you'd like to build a **Face verification** system so as to only let people from a specified list come in. To get admitted, each person has to swipe an ID card (identification card) to identify themselves at the door. The face recognition system then checks that they are who they claim to be. 3.1 - Face VerificationLet's build a database containing one encoding vector for each person allowed to enter the happy house. To generate the encoding we use `img_to_encoding(image_path, model)` which basically runs the forward propagation of the model on the specified image. Run the following code to build the database (represented as a python dictionary). This database maps each person's name to a 128-dimensional encoding of their face.
###Code
database = {}
database["danielle"] = img_to_encoding("images/danielle.png", FRmodel)
database["younes"] = img_to_encoding("images/younes.jpg", FRmodel)
database["tian"] = img_to_encoding("images/tian.jpg", FRmodel)
database["andrew"] = img_to_encoding("images/andrew.jpg", FRmodel)
database["kian"] = img_to_encoding("images/kian.jpg", FRmodel)
database["dan"] = img_to_encoding("images/dan.jpg", FRmodel)
database["sebastiano"] = img_to_encoding("images/sebastiano.jpg", FRmodel)
database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel)
database["kevin"] = img_to_encoding("images/kevin.jpg", FRmodel)
database["felix"] = img_to_encoding("images/felix.jpg", FRmodel)
database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel)
database["arnaud"] = img_to_encoding("images/arnaud.jpg", FRmodel)
###Output
_____no_output_____
###Markdown
Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.**Exercise**: Implement the verify() function which checks if the front-door camera picture (`image_path`) is actually the person called "identity". You will have to go through the following steps:1. Compute the encoding of the image from image_path2. Compute the distance about this encoding and the encoding of the identity image stored in the database3. Open the door if the distance is less than 0.7, else do not open.As presented above, you should use the L2 distance (np.linalg.norm). (Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.)
###Code
# GRADED FUNCTION: verify
def verify(image_path, identity, database, model):
"""
Function that verifies if the person on the "image_path" image is "identity".
Arguments:
image_path -- path to an image
identity -- string, name of the person you'd like to verify the identity. Has to be a resident of the Happy house.
database -- python dictionary mapping names of allowed people's names (strings) to their encodings (vectors).
model -- your Inception model instance in Keras
Returns:
dist -- distance between the image_path and the image of "identity" in the database.
door_open -- True, if the door should open. False otherwise.
"""
### START CODE HERE ###
# Step 1: Compute the encoding for the image. Use img_to_encoding() see example above. (≈ 1 line)
encoding = img_to_encoding(image_path, model)
# Step 2: Compute distance with identity's image (≈ 1 line)
dist = np.linalg.norm(database[identity] - encoding)
# Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)
if dist < 0.7:
print("It's " + str(identity) + ", welcome home!")
door_open = True
else:
print("It's not " + str(identity) + ", please go away")
door_open = False
### END CODE HERE ###
return dist, door_open
###Output
_____no_output_____
###Markdown
Younes is trying to enter the Happy House and the camera takes a picture of him ("images/camera_0.jpg"). Let's run your verification algorithm on this picture:
###Code
verify("images/camera_0.jpg", "younes", database, FRmodel)
###Output
It's younes, welcome home!
###Markdown
**Expected Output**: **It's younes, welcome home!** (0.65939283, True) Benoit, who broke the aquarium last weekend, has been banned from the house and removed from the database. He stole Kian's ID card and came back to the house to try to present himself as Kian. The front-door camera took a picture of Benoit ("images/camera_2.jpg). Let's run the verification algorithm to check if benoit can enter.
###Code
verify("images/camera_2.jpg", "kian", database, FRmodel)
###Output
It's not kian, please go away
###Markdown
**Expected Output**: **It's not kian, please go away** (0.86224014, False) 3.2 - Face RecognitionYour face verification system is mostly working well. But since Kian got his ID card stolen, when he came back to the house that evening he couldn't get in! To reduce such shenanigans, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the house, and the front door will unlock for them! You'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, we will no longer get a person's name as another input. **Exercise**: Implement `who_is_it()`. You will have to go through the following steps:1. Compute the target encoding of the image from image_path2. Find the encoding from the database that has smallest distance with the target encoding. - Initialize the `min_dist` variable to a large enough number (100). It will help you keep track of what is the closest encoding to the input's encoding. - Loop over the database dictionary's names and encodings. To loop use `for (name, db_enc) in database.items()`. - Compute L2 distance between the target "encoding" and the current "encoding" from the database. - If this distance is less than the min_dist, then set min_dist to dist, and identity to name.
###Code
# GRADED FUNCTION: who_is_it
def who_is_it(image_path, database, model):
"""
Implements face recognition for the happy house by finding who is the person on the image_path image.
Arguments:
image_path -- path to an image
database -- database containing image encodings along with the name of the person on the image
model -- your Inception model instance in Keras
Returns:
min_dist -- the minimum distance between image_path encoding and the encodings from the database
identity -- string, the name prediction for the person on image_path
"""
### START CODE HERE ###
## Step 1: Compute the target "encoding" for the image. Use img_to_encoding() see example above. ## (≈ 1 line)
encoding = img_to_encoding(image_path, model)
## Step 2: Find the closest encoding ##
# Initialize "min_dist" to a large value, say 100 (≈1 line)
min_dist = 100
# Loop over the database dictionary's names and encodings.
for (name, db_enc) in database.items():
# Compute L2 distance between the target "encoding" and the current "emb" from the database. (≈ 1 line)
dist = np.linalg.norm(db_enc - encoding)
# If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)
if dist < min_dist:
min_dist = dist
identity = name
### END CODE HERE ###
if min_dist > 0.7:
print("Not in the database.")
else:
print ("it's " + str(identity) + ", the distance is " + str(min_dist))
return min_dist, identity
###Output
_____no_output_____
###Markdown
Younes is at the front door and the camera takes a picture of him ("images/camera_0.jpg"). Let's see if your who_is_it() algorithm identifies Younes.
###Code
who_is_it("images/camera_0.jpg", database, FRmodel)
###Output
it's younes, the distance is 0.659393
|
05_Image_recognition_and_classification/cnn.ipynb | ###Markdown
Convolutional Neural Networks - Image Classification and Recognition - TensorFlow Implementation. Requires an Anaconda3-1.3.0 (Python 3) environment to run.
###Code
import time
import math
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import dataset
import cv2
from sklearn.metrics import confusion_matrix
from datetime import timedelta
%matplotlib inline
###Output
_____no_output_____
###Markdown
Hyperparameter configuration
###Code
# Convolutional Layer 1.
filter_size1 = 3
num_filters1 = 32
# Convolutional Layer 2.
filter_size2 = 3
num_filters2 = 32
# Convolutional Layer 3.
filter_size3 = 3
num_filters3 = 64
# Fully-connected layer.
fc_size = 128 # Number of neurons in fully-connected layer.
# Number of color channels for the images: 1 channel for gray-scale.
num_channels = 3
# image dimensions (only squares for now)
img_size = 128
# Size of image when flattened to a single dimension
img_size_flat = img_size * img_size * num_channels
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# class info
classes = ['dogs', 'cats']
num_classes = len(classes)
# batch size
batch_size = 32
# validation split
validation_size = .16
# how long to wait after validation loss stops improving before terminating training
early_stopping = None # use None if you don't want to implement early stoping
train_path = 'data/train/'
test_path = 'data/test/'
checkpoint_dir = "models/"
###Output
_____no_output_____
###Markdown
Data loading
###Code
data = dataset.read_train_sets(train_path, img_size, classes, validation_size=validation_size)
test_images, test_ids = dataset.read_test_set(test_path, img_size)
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(test_images)))
print("- Validation-set:\t{}".format(len(data.valid.labels)))
###Output
Size of:
- Training-set: 21000
- Test-set: 12500
- Validation-set: 4000
###Markdown
Plotting helper function. Function used to plot 9 images in a 3x3 grid (or fewer, depending on how many images are passed), writing the true and predicted classes below each image.
###Code
def plot_images(images, cls_true, cls_pred=None):
if len(images) == 0:
print("no images to show")
return
else:
random_indices = random.sample(range(len(images)), min(len(images), 9))
images, cls_true = zip(*[(images[i], cls_true[i]) for i in random_indices])
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_size, img_size, num_channels))
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Display a few random images to check that the data was loaded correctly
###Code
# Get some random images and their labels from the train set.
images, cls_true = data.train.images, data.train.cls
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
###Output
_____no_output_____
###Markdown
The TensorFlow graph mainly consists of the following parts: * Placeholder variables used for inputting data to the graph. * Variables that are going to be optimized so as to make the convolutional network perform better. * The mathematical formulas for the convolutional network. * A cost measure that can be used to guide the optimization of the variables. * An optimization method which updates the variables. Helper functions for creating new parameters. Functions for creating new TensorFlow variables in the given shape and initializing them with random values. Note that the initialization is not actually done at this point, it is merely being defined in the TensorFlow graph.
###Code
def new_weights(shape):
return tf.Variable(tf.truncated_normal(shape, stddev=0.05))
def new_biases(length):
return tf.Variable(tf.constant(0.05, shape=[length]))
###Output
_____no_output_____
###Markdown
Helper function to create a new convolutional layer. This function defines part of the model; the input is assumed to be 4-dimensional: 1. Image number. 2. Y-axis (height) of each image. 3. X-axis (width) of each image. 4. Channels of each image. The channels may be the colour channels of the original image, or the channels of the feature maps produced by a previous layer. The output is also 4-dimensional: 1. Image number, same as the input. 2. Y-axis of each image (halved if 2x2 max-pooling is used). 3. X-axis, likewise. 4. Channels produced by the convolutional filters, determined by the number of filters in this layer.
###Code
def new_conv_layer(input, # The previous layer.
num_input_channels, # Num. channels in prev. layer.
filter_size, # Width and height of each filter.
num_filters, # Number of filters.
use_pooling=True): # Use 2x2 max-pooling.
# Shape of the filter-weights for the convolution.
# This format is determined by the TensorFlow API.
shape = [filter_size, filter_size, num_input_channels, num_filters]
# Create new weights aka. filters with the given shape.
weights = new_weights(shape=shape)
# Create new biases, one for each filter.
biases = new_biases(length=num_filters)
# Create the TensorFlow operation for convolution.
# Note the strides are set to 1 in all dimensions.
# The first and last stride must always be 1,
# because the first is for the image-number and
# the last is for the input-channel.
# But e.g. strides=[1, 2, 2, 1] would mean that the filter
# is moved 2 pixels across the x- and y-axis of the image.
# The padding is set to 'SAME' which means the input image
# is padded with zeroes so the size of the output is the same.
layer = tf.nn.conv2d(input=input,
filter=weights,
strides=[1, 1, 1, 1],
padding='SAME')
# Add the biases to the results of the convolution.
# A bias-value is added to each filter-channel.
layer += biases
# Use pooling to down-sample the image resolution?
if use_pooling:
# This is 2x2 max-pooling, which means that we
# consider 2x2 windows and select the largest value
# in each window. Then we move 2 pixels to the next window.
layer = tf.nn.max_pool(value=layer,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
# Rectified Linear Unit (ReLU).
# It calculates max(x, 0) for each input pixel x.
# This adds some non-linearity to the formula and allows us
# to learn more complicated functions.
layer = tf.nn.relu(layer)
# Note that ReLU is normally executed before the pooling,
# but since relu(max_pool(x)) == max_pool(relu(x)) we can
# save 75% of the relu-operations by max-pooling first.
# We return both the resulting layer and the filter-weights
# because we will plot the weights later.
return layer, weights
###Output
_____no_output_____
###Markdown
Helper function to flatten a convolutional layer. A convolutional layer produces an output tensor with 4 dimensions. We will add fully-connected layers after the convolution layers, so we need to reduce the 4-dim tensor to 2-dim which can be used as input to the fully-connected layer.
###Code
def flatten_layer(layer):
# Get the shape of the input layer.
layer_shape = layer.get_shape()
# The shape of the input layer is assumed to be:
# layer_shape == [num_images, img_height, img_width, num_channels]
# The number of features is: img_height * img_width * num_channels
# We can use a function from TensorFlow to calculate this.
num_features = layer_shape[1:4].num_elements()
# Reshape the layer to [num_images, num_features].
# Note that we just set the size of the second dimension
# to num_features and the size of the first dimension to -1
# which means the size in that dimension is calculated
# so the total size of the tensor is unchanged from the reshaping.
layer_flat = tf.reshape(layer, [-1, num_features])
# The shape of the flattened layer is now:
# [num_images, img_height * img_width * num_channels]
# Return both the flattened layer and the number of features.
return layer_flat, num_features
###Output
_____no_output_____
###Markdown
Helper function for creating a new fully-connected layer This function creates a new fully-connected layer in the computational graph for TensorFlow. Nothing is actually calculated here, we are just adding the mathematical formulas to the TensorFlow graph.It is assumed that the input is a 2-dim tensor of shape `[num_images, num_inputs]`. The output is a 2-dim tensor of shape `[num_images, num_outputs]`.
###Code
def new_fc_layer(input, # The previous layer.
num_inputs, # Num. inputs from prev. layer.
num_outputs, # Num. outputs.
use_relu=True): # Use Rectified Linear Unit (ReLU)?
# Create new weights and biases.
weights = new_weights(shape=[num_inputs, num_outputs])
biases = new_biases(length=num_outputs)
# Calculate the layer as the matrix multiplication of
# the input and weights, and then add the bias-values.
layer = tf.matmul(input, weights) + biases
# Use ReLU?
if use_relu:
layer = tf.nn.relu(layer)
return layer
###Output
_____no_output_____
###Markdown
Defining placeholder variables Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional vector or matrix. The data-type is set to `float32` and the shape is set to `[None, img_size_flat]`, where `None` means that the tensor may hold an arbitrary number of images with each image being a vector of length `img_size_flat`.
###Code
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
###Output
_____no_output_____
###Markdown
The convolutional layers expect `x` to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead `[num_images, img_height, img_width, num_channels]`. Note that `img_height == img_width == img_size` and `num_images` can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
###Code
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
###Output
_____no_output_____
###Markdown
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable `x`. The shape of this placeholder variable is `[None, num_classes]` which means it may hold an arbitrary number of labels and each label is a vector of length `num_classes`.
###Code
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
###Output
_____no_output_____
###Markdown
We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
###Code
y_true_cls = tf.argmax(y_true, dimension=1)
###Output
WARNING:tensorflow:From <ipython-input-16-4674210f2acc>:1: calling argmax (from tensorflow.python.ops.math_ops) with dimension is deprecated and will be removed in a future version.
Instructions for updating:
Use the `axis` argument instead
###Markdown
Convolutional layer 1 Create the first convolutional layer. It takes `x_image` as input and creates `num_filters1` different filters, each having width and height equal to `filter_size1`. Finally we wish to down-sample the image so it is half the size by using 2x2 max-pooling.
###Code
layer_conv1, weights_conv1 = \
new_conv_layer(input=x_image,
num_input_channels=num_channels,
filter_size=filter_size1,
num_filters=num_filters1,
use_pooling=True)
layer_conv1
###Output
_____no_output_____
###Markdown
Convolutional layers 2 and 3 Create the second and third convolutional layers, which take as input the output from the first and second convolutional layer respectively. The number of input channels corresponds to the number of filters in the previous convolutional layer.
###Code
layer_conv2, weights_conv2 = \
new_conv_layer(input=layer_conv1,
num_input_channels=num_filters1,
filter_size=filter_size2,
num_filters=num_filters2,
use_pooling=True)
layer_conv2
layer_conv3, weights_conv3 = \
new_conv_layer(input=layer_conv2,
num_input_channels=num_filters2,
filter_size=filter_size3,
num_filters=num_filters3,
use_pooling=True)
layer_conv3
###Output
_____no_output_____
###Markdown
Flatten layer The convolutional layers output 4-dim tensors. We now wish to use these as input in a fully-connected network, which requires the tensors to be reshaped or flattened to 2-dim tensors.
###Code
layer_flat, num_features = flatten_layer(layer_conv3)
layer_flat
num_features
###Output
_____no_output_____
###Markdown
Fully-connected layer 1 Add a fully-connected layer to the network. The input is the flattened layer from the previous convolution. The number of neurons or nodes in the fully-connected layer is `fc_size`. ReLU is used so we can learn non-linear relations.
###Code
layer_fc1 = new_fc_layer(input=layer_flat,
num_inputs=num_features,
num_outputs=fc_size,
use_relu=True)
###Output
_____no_output_____
###Markdown
Check that the output of the fully-connected layer is a tensor with shape (?, 128) where the ? means there is an arbitrary number of images and `fc_size` == 128.
###Code
layer_fc1
###Output
_____no_output_____
###Markdown
Fully-connected layer 2 Add another fully-connected layer that outputs vectors of length `num_classes` for determining which of the classes the input image belongs to. Note that ReLU is not used in this layer.
###Code
layer_fc2 = new_fc_layer(input=layer_fc1,
num_inputs=fc_size,
num_outputs=num_classes,
use_relu=False)
layer_fc2
###Output
_____no_output_____
###Markdown
Predicted class The second fully-connected layer estimates how likely it is that the input image belongs to each of the 2 classes. However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or large, so we want to normalize them so that each element is limited between zero and one and all the elements sum to one. This is calculated using the so-called softmax function and the result is stored in `y_pred`.
###Code
y_pred = tf.nn.softmax(layer_fc2)
###Output
_____no_output_____
###Markdown
The class-number is the index of the largest element.
###Code
y_pred_cls = tf.argmax(y_pred, dimension=1)
###Output
WARNING:tensorflow:From <ipython-input-31-6aa54000365b>:1: calling argmax (from tensorflow.python.ops.math_ops) with dimension is deprecated and will be removed in a future version.
Instructions for updating:
Use the `axis` argument instead
###Markdown
Cost function to be optimized To make the model better at classifying the input images, we must somehow change the variables for all the network layers. To do this we first need to know how well the model currently performs by comparing the predicted output of the model `y_pred` to the desired output `y_true`.The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the variables of the network layers.TensorFlow has a built-in function for calculating the cross-entropy. Note that the function calculates the softmax internally so we must use the output of `layer_fc2` directly rather than `y_pred` which has already had the softmax applied.
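For reference, the categorical cross-entropy used here follows the standard definition, with $y$ the one-hot true label and $\hat{y}$ the softmax of the logits:

$$H(y, \hat{y}) = -\sum_{j} y_j \log \hat{y}_j$$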
###Code
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2,
labels=y_true)
###Output
_____no_output_____
###Markdown
We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
###Code
cost = tf.reduce_mean(cross_entropy)
###Output
_____no_output_____
###Markdown
Optimization method Now that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the `AdamOptimizer` which is an advanced form of Gradient Descent.Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
###Code
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost)
###Output
_____no_output_____
###Markdown
Performance measures We need a few more performance measures to display the progress to the user.This is a vector of booleans indicating whether the predicted class equals the true class of each image.
###Code
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
###Output
_____no_output_____
###Markdown
This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
###Code
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
###Output
_____no_output_____
###Markdown
Compiling and running the TensorFlow graph Create a TensorFlow session Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
###Code
session = tf.Session()
###Output
_____no_output_____
###Markdown
Initialize variables The variables for `weights` and `biases` must be initialized before we start optimizing them.
###Code
session.run(tf.initialize_all_variables())
###Output
WARNING:tensorflow:From /home/xrong/.pyenv/versions/anaconda3-4.4.0/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py:175: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.
###Markdown
Helper function for performing optimization iterations It takes a long time to calculate the gradient of the model using the entirety of a large dataset. We therefore only use a small batch of images in each iteration of the optimizer.If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
###Code
train_batch_size = batch_size
def print_progress(epoch, feed_dict_train, feed_dict_validate, val_loss):
# Calculate the accuracy on the training-set.
acc = session.run(accuracy, feed_dict=feed_dict_train)
val_acc = session.run(accuracy, feed_dict=feed_dict_validate)
msg = "Epoch {0} --- Training Accuracy: {1:>6.1%}, Validation Accuracy: {2:>6.1%}, Validation Loss: {3:.3f}"
print(msg.format(epoch + 1, acc, val_acc, val_loss))
###Output
_____no_output_____
###Markdown
Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed at the end of each epoch (a full pass through the training set).
###Code
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variable rather than a local copy.
global total_iterations
# Start-time used for printing time-usage below.
start_time = time.time()
best_val_loss = float("inf")
patience = 0
for i in range(total_iterations,
total_iterations + num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch, _, cls_batch = data.train.next_batch(train_batch_size)
x_valid_batch, y_valid_batch, _, valid_cls_batch = data.valid.next_batch(train_batch_size)
# Convert shape from [num examples, rows, columns, depth]
# to [num examples, flattened image shape]
x_batch = x_batch.reshape(train_batch_size, img_size_flat)
x_valid_batch = x_valid_batch.reshape(train_batch_size, img_size_flat)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
feed_dict_validate = {x: x_valid_batch,
y_true: y_valid_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status at end of each epoch (defined as full pass through training dataset).
if i % int(data.train.num_examples/batch_size) == 0:
val_loss = session.run(cost, feed_dict=feed_dict_validate)
epoch = int(i / int(data.train.num_examples/batch_size))
print_progress(epoch, feed_dict_train, feed_dict_validate, val_loss)
if early_stopping:
if val_loss < best_val_loss:
best_val_loss = val_loss
patience = 0
else:
patience += 1
if patience == early_stopping:
break
# Update the total number of iterations performed.
total_iterations += num_iterations
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time elapsed: " + str(timedelta(seconds=int(round(time_dif)))))
###Output
_____no_output_____
###Markdown
Helper function for plotting example errors Function for plotting examples of images from the test-set that have been mis-classified.
###Code
def plot_example_errors(cls_pred, correct):
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.valid.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.valid.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
###Output
_____no_output_____
###Markdown
Helper function for plotting the confusion matrix
###Code
def plot_confusion_matrix(cls_pred):
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.valid.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Helper function for showing the results and model performance Function for printing the classification accuracy on the test-set. It takes a while to compute the classification for all the images in the test-set, which is why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.Note that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size.
###Code
def print_validation_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# Number of images in the test-set.
num_test = len(data.valid.images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_test, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_test:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_test)
# Get the images from the test-set between index i and j.
images = data.valid.images[i:j, :].reshape(batch_size, img_size_flat)
# Get the associated labels.
labels = data.valid.labels[i:j, :]
# Create a feed-dict with these images and labels.
feed_dict = {x: images,
y_true: labels}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
cls_true = np.array(data.valid.cls)
cls_pred = np.array([classes[x] for x in cls_pred])
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / num_test
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, correct_sum, num_test))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
###Output
_____no_output_____
###Markdown
Results after 1 optimization iteration
###Code
optimize(num_iterations=1)
print_validation_accuracy()
###Output
Epoch 1 --- Training Accuracy: 56.2%, Validation Accuracy: 31.2%, Validation Loss: 0.774
Time elapsed: 0:00:01
Accuracy on Test-Set: 48.8% (1950 / 4000)
###Markdown
Results after 100 optimization iterations After 100 optimization iterations, the model should have significantly improved its classification accuracy.
###Code
optimize(num_iterations=99) # We already performed 1 iteration above.
print_validation_accuracy(show_example_errors=True)
###Output
Accuracy on Test-Set: 48.8% (1950 / 4000)
Example errors:
###Markdown
Results after 1,000 optimization iterations
###Code
optimize(num_iterations=900) # We performed 100 iterations above.
print_validation_accuracy(show_example_errors=True)
###Output
Accuracy on Test-Set: 68.9% (2755 / 4000)
Example errors:
###Markdown
Results after 10,000 optimization iterations
###Code
optimize(num_iterations=9000) # We performed 1000 iterations above.
print_validation_accuracy(show_example_errors=True, show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 79.5% (3180 / 4000)
Example errors:
###Markdown
Visualization of weights and layer outputs In trying to understand why the convolutional neural network can recognize images, we will now visualize the weights of the convolutional filters and the resulting output images. Helper function for plotting convolutional filter weights
###Code
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# See new_conv_layer() for details on the format
# of this 4-dim tensor.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Helper function for plotting the output of a convolutional layer
###Code
def plot_conv_layer(layer, image):
# Assume layer is a TensorFlow op that outputs a 4-dim tensor
# which is the output of a convolutional layer,
# e.g. layer_conv1 or layer_conv2.
image = image.reshape(img_size_flat)
# Create a feed-dict containing just one image.
# Note that we don't need to feed y_true because it is
# not used in this calculation.
feed_dict = {x: [image]}
# Calculate and retrieve the output values of the layer
# when inputting that image.
values = session.run(layer, feed_dict=feed_dict)
# Number of filters used in the conv. layer.
num_filters = values.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot the output images of all the filters.
for i, ax in enumerate(axes.flat):
# Only plot the images for valid filters.
if i<num_filters:
# Get the output image of using the i'th filter.
# See new_conv_layer() for details on the format
# of this 4-dim tensor.
img = values[0, :, :, i]
# Plot image.
ax.imshow(img, interpolation='nearest', cmap='binary')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Input images Helper-function for plotting an image.
###Code
def plot_image(image):
plt.imshow(image.reshape(img_size, img_size, num_channels),
interpolation='nearest')
plt.show()
###Output
_____no_output_____
###Markdown
Plot an image from the test-set which will be used as an example below.
###Code
image1 = test_images[0]
plot_image(image1)
###Output
_____no_output_____
###Markdown
Plot another example image from the test-set.
###Code
image2 = test_images[13]
plot_image(image2)
###Output
_____no_output_____
###Markdown
Convolutional layer 1 Now plot the filter-weights for the first convolutional layer. Note that positive weights are red and negative weights are blue.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
_____no_output_____
###Markdown
Applying each of these convolutional filters to the first input image gives the following output images, which are then used as input to the second convolutional layer. Note that these images are down-sampled to about half the resolution of the original input image.
###Code
plot_conv_layer(layer=layer_conv1, image=image1)
###Output
_____no_output_____
###Markdown
The following images are the results of applying the convolutional filters to the second image.
###Code
plot_conv_layer(layer=layer_conv1, image=image2)
###Output
_____no_output_____
###Markdown
Convolutional layer 2 Now plot the filter-weights for the second convolutional layer. There are 16 output channels from the first conv-layer, which means there are 16 input channels to the second conv-layer. The second conv-layer has a set of filter-weights for each of its input channels. We start by plotting the filter-weights for the first channel. Note again that positive weights are red and negative weights are blue.
###Code
plot_conv_weights(weights=weights_conv2, input_channel=0)
###Output
_____no_output_____
###Markdown
There are 16 input channels to the second convolutional layer, so we can make another 15 plots of filter-weights like this. We just make one more with the filter-weights for the second channel.
###Code
plot_conv_weights(weights=weights_conv2, input_channel=1)
###Output
_____no_output_____
###Markdown
It can be difficult to understand and keep track of how these filters are applied because of the high dimensionality. Applying these convolutional filters to the images that were output from the first conv-layer gives the following images. Note that these are down-sampled yet again to half the resolution of the images from the first conv-layer.
###Code
plot_conv_layer(layer=layer_conv2, image=image1)
###Output
_____no_output_____
###Markdown
And these are the results of applying the filter-weights to the second image.
###Code
plot_conv_layer(layer=layer_conv2, image=image2)
###Output
_____no_output_____
###Markdown
Write the test predictions to a CSV file
###Code
# def write_predictions(ims, ids):
# ims = ims.reshape(ims.shape[0], img_size_flat)
# preds = session.run(y_pred, feed_dict={x: ims})
# result = pd.DataFrame(preds, columns=classes)
# result.loc[:, 'id'] = pd.Series(ids, index=result.index)
# pred_file = 'predictions.csv'
# result.to_csv(pred_file, index=False)
# write_predictions(test_images, test_ids)
###Output
_____no_output_____
###Markdown
Close the TensorFlow session We are now done using TensorFlow, so we close the session to release its resources.
###Code
session.close()
###Output
_____no_output_____ |
Day-4/3.Treasure_Map.ipynb | ###Markdown
InstructionsWrite a program that will mark a spot with an X.
###Code
# draw the map
row1 = ["⬜️","⬜️","⬜️"]
row2 = ["⬜️","⬜️","⬜️"]
row3 = ["⬜️","⬜️","⬜️"]
map = [row1, row2, row3]
print(f"{row1}\n{row2}\n{row3}")
#input the position
position = input("Where do you want to put the treasure? ")
# Draw the X
column = int(position[0]) - 1
row = int(position[1]) - 1
map[row][column] = "X"
print(f"{row1}\n{row2}\n{row3}")
###Output
_____no_output_____ |
Lecture 2 - Basic Machine Learning/linear_regression.ipynb | ###Markdown
Diabetes Dataset
###Code
# imports (numpy, matplotlib, random) and the scikit-learn diabetes dataset
# (assumed source of the 442-sample `diabetes` object used below)
import random
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets

diabetes = datasets.load_diabetes()

# take only 2nd feature of dataset
X = diabetes.data[:, np.newaxis, 2]
Y = diabetes.target.reshape(442,1)
# spliting to train and test
x_train = X[:350]
y_train = Y[:350]
x_test = X[351:]
y_test = Y[351:]
plt.scatter(x_train,y_train)
plt.show()
class Lin_reg():
def __init__(self, epoch=5000, learning_rate=1):
self.epoch = epoch # max iterations
self.learning_rate = learning_rate # learning rate
self.bias = 1
self.theta = random.random() # initializing parameter
self.iteration = 0 # stores count of iteration
self.tolerance = 0.001 # minimum error
self.params = {'b': [], 'w': [], 'loss': []}
def fit(self, x, y):
m = x.shape[0] # training examples
for i in range(self.epoch): # iterate till maximum epoch
init_bias = self.bias
init_theta = self.theta
            y_hat = self.bias + self.theta*x # prediction
loss = (0.5/m) * (y_hat - y)**2 # mean squared error
dloss = y_hat - y # loss gradient
# updating parameters
self.bias = self.bias - self.learning_rate*np.sum(dloss)/m
self.theta = self.theta - self.learning_rate*np.sum(dloss*x)/m
# saving parameters
self.params['b'].append(self.bias)
self.params['w'].append(self.theta)
self.params['loss'].append(np.sum(loss))
self.iteration = i
# getting change in parameter value
change = abs(init_bias-self.bias)+abs(init_theta-self.theta)
if(change < self.tolerance):
break # stop training when tolerance reached
def predict(self, x):
        return (self.bias + self.theta*x)
model = Lin_reg() # initializing model
model.fit(x_train, y_train) # training model
y_predict = model.predict(x_test) # prediction
print('Iterations to converge:', model.iteration)
# Live Plotting
%matplotlib notebook
plt.ion()
fig = plt.figure(figsize=(12, 8))
fig.show()
fig.canvas.draw()
ax = fig.axes.copy()
ax = fig.add_subplot(121)
ax1 = fig.add_subplot(122)
for i in range(0, len(model.params['w']), 100):
ax.clear()
ax1.clear()
ax1.plot(model.params['loss'][:i])
weights = model.params['w'][i]
bias = model.params['b'][i]
pred = weights*x_test + bias
ax.scatter(x_test, y_test, c = 'r')
ax.plot(x_test, pred,c = 'g')
ax.axis([-0.1,0.1,0,300])
ax.legend(['regression line','test set'])
ax.set_title('regression line and data set')
ax1.axis([-100,model.iteration,1900,4000])
ax1.legend(['loss'])
ax1.set_title('Loss curve')
fig.canvas.draw()
%matplotlib inline
plt.scatter(x_test, y_test, color='black')
plt.plot(x_test, y_predict, color='blue', linewidth=3)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
###Output
_____no_output_____
###Markdown
Sk Learn Implementation
###Code
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
model_sk = LinearRegression()
model_sk.fit(x_train, y_train)
y_predict_sk = model_sk.predict(x_test)
plt.scatter(x_test, y_test, color='black')
plt.plot(x_test, y_predict_sk, color='blue', linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()
###Output
_____no_output_____ |
Plant_Diseases_Detection_with_TF2_V4.ipynb | ###Markdown
TensorFlow Lite End-to-End Android ApplicationBy [Yannick Serge Obam](https://www.linkedin.com/in/yannick-serge-obam/)For this project, we will create an end-to-end Android application with TFLite that will then be open-sourced as a template design pattern. We opted to develop an **Android application that detects plant diseases**. The project is broken down into multiple steps:* Building and creating a machine learning model using TensorFlow with Keras* Deploying the model to an Android application using TFLite* Documenting and open-sourcing the development process **Machine Learning model using Tensorflow with Keras**We designed algorithms and models to recognize species and diseases in the crop leaves by using a Convolutional Neural Network **Importing the Libraries**
###Code
# Install nightly package for some functionalities that aren't in alpha
!pip install tensorflow-gpu==2.0.0-beta1
# Install TF Hub for TF2
!pip install 'tensorflow-hub == 0.5'
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
#tf.logging.set_verbosity(tf.logging.ERROR)
#tf.enable_eager_execution()
import tensorflow_hub as hub
import os
from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import layers
#from keras import optimizers
# verify TensorFlow version
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
###Output
Version: 2.0.0-beta1
Eager mode: True
Hub version: 0.5.0
GPU is available
###Markdown
Load the dataWe will download a public dataset of 54,305 images of diseased and healthy plant leaves collected under controlled conditions ( [PlantVillage Dataset](https://storage.googleapis.com/plantdata/PlantVillage.tar)). The images cover 14 species of crops, including: apple, blueberry, cherry, grape, orange, peach, pepper, potato, raspberry, soy, squash, strawberry and tomato. It contains images of 17 basic diseases, 4 bacterial diseases, 2 diseases caused by mold (oomycete), 2 viral diseases and 1 disease caused by a mite. 12 crop species also have healthy leaf images that are not visibly affected by disease. Then store the downloaded zip file to the "/tmp/" directory. We'll need to make sure the input data is resized to 224x224 or 299x299 pixels as required by the networks.
###Code
zip_file = tf.keras.utils.get_file(origin='https://storage.googleapis.com/plantdata/PlantVillage.zip',
fname='PlantVillage.zip', extract=True)
###Output
Downloading data from https://storage.googleapis.com/plantdata/PlantVillage.zip
856842240/856839084 [==============================] - 14s 0us/step
###Markdown
Prepare training and validation datasetCreate the training and validation directories
###Code
data_dir = os.path.join(os.path.dirname(zip_file), 'PlantVillage')
train_dir = os.path.join(data_dir, 'train')
validation_dir = os.path.join(data_dir, 'validation')
import time
import os
from os.path import exists
def count(dir, counter=0):
"returns number of files in dir and subdirs"
for pack in os.walk(dir):
for f in pack[2]:
counter += 1
return dir + " : " + str(counter) + "files"
print('total images for training :', count(train_dir))
print('total images for validation :', count(validation_dir))
###Output
total images for training : /root/.keras/datasets/PlantVillage/train : 43444files
total images for validation : /root/.keras/datasets/PlantVillage/validation : 10861files
###Markdown
Label mappingYou'll also need to load in a mapping from category label to category name. You can find this in the file `categories.json`. It's a JSON object which you can read in with the [`json` module](https://docs.python.org/2/library/json.html). This will give you a dictionary mapping the integer encoded categories to the actual names of the plants and diseases.
###Code
!!wget https://github.com/obeshor/Plant-Diseases-Detector/archive/master.zip
!unzip master.zip;
import json
with open('Plant-Diseases-Detector-master/categories.json', 'r') as f:
cat_to_name = json.load(f)
classes = list(cat_to_name.values())
print (classes)
print('Number of classes:',len(classes))
###Output
Number of classes: 38
###Markdown
Setup Image shape and batch size
###Code
IMAGE_SHAPE = (224, 224)
BATCH_SIZE = 64 #@param {type:"integer"}
###Output
_____no_output_____
###Markdown
Data PreprocessingLet's set up data generators that will read pictures in our source folders, convert them to `float32` tensors, and feed them (with their labels) to our network. As you may already know, data that goes into neural networks should usually be normalized in some way to make it more amenable to processing by the network. (It is uncommon to feed raw pixels into a convnet.) In our case, we will preprocess our images by normalizing the pixel values to be in the `[0, 1]` range (originally all values are in the `[0, 255]` range).
###Code
# Inputs are suitably resized for the selected module. Dataset augmentation (i.e., random distortions of an image each time it is read) improves training, esp. when fine-tuning.
validation_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
validation_generator = validation_datagen.flow_from_directory(
validation_dir,
shuffle=False,
seed=42,
color_mode="rgb",
class_mode="categorical",
target_size=IMAGE_SHAPE,
batch_size=BATCH_SIZE)
do_data_augmentation = True #@param {type:"boolean"}
if do_data_augmentation:
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
rescale = 1./255,
rotation_range=40,
horizontal_flip=True,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
fill_mode='nearest' )
else:
train_datagen = validation_datagen
train_generator = train_datagen.flow_from_directory(
train_dir,
subset="training",
shuffle=True,
seed=42,
color_mode="rgb",
class_mode="categorical",
target_size=IMAGE_SHAPE,
batch_size=BATCH_SIZE)
###Output
Found 10861 images belonging to 38 classes.
Found 43444 images belonging to 38 classes.
###Markdown
Build the modelAll it takes is to put a linear classifier on top of the feature_extractor_layer with the Hub module.For speed, we start out with a non-trainable feature_extractor_layer, but you can also enable fine-tuning for greater accuracy.
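If you want to experiment with fine-tuning later, the only change needed in the model below is to make the Hub layer trainable (a sketch only, not executed in this notebook; it assumes the same module URL used below and usually calls for a lower learning rate):

```python
# Sketch: a trainable feature extractor for fine-tuning (assumes the same TF Hub module as below)
feature_extractor = hub.KerasLayer(
    "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4",
    output_shape=[1280],
    trainable=True)  # allow the pre-trained MobileNetV2 weights to be updated
```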
###Code
model = tf.keras.Sequential([
hub.KerasLayer("https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4",
output_shape=[1280],
trainable=False),
tf.keras.layers.Dropout(0.4),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(rate=0.2),
tf.keras.layers.Dense(train_generator.num_classes, activation='softmax')
])
###Output
_____no_output_____
###Markdown
Specify Loss Function and Optimizer
###Code
#Compile model specifying the optimizer learning rate
LEARNING_RATE = 0.001 #@param {type:"number"}
model.compile(
optimizer=tf.keras.optimizers.Adam(lr=LEARNING_RATE),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train ModelTrain the model, using the validation dataset to validate at each epoch.
###Code
EPOCHS=10 #@param {type:"integer"}
history = model.fit_generator(
train_generator,
steps_per_epoch=train_generator.samples//train_generator.batch_size,
epochs=EPOCHS,
validation_data=validation_generator,
validation_steps=validation_generator.samples//validation_generator.batch_size)
###Output
Epoch 1/10
###Markdown
Check PerformancePlot training and validation accuracy and loss. Random testRandomly sample images from the validation dataset and predict.
###Code
import matplotlib.pylab as plt
import numpy as np
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(EPOCHS)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.ylabel("Accuracy (training and validation)")
plt.xlabel("Training Steps")
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.ylabel("Loss (training and validation)")
plt.xlabel("Training Steps")
plt.show()
# Import OpenCV
import cv2
# Utility
import itertools
import random
from collections import Counter
from glob import iglob
def load_image(filename):
img = cv2.imread(os.path.join(data_dir, validation_dir, filename))
img = cv2.resize(img, (IMAGE_SHAPE[0], IMAGE_SHAPE[1]) )
img = img /255
return img
def predict(image):
probabilities = model.predict(np.asarray([img]))[0]
class_idx = np.argmax(probabilities)
return {classes[class_idx]: probabilities[class_idx]}
for idx, filename in enumerate(random.sample(validation_generator.filenames, 5)):
print("SOURCE: class: %s, file: %s" % (os.path.split(filename)[0], filename))
img = load_image(filename)
prediction = predict(img)
print("PREDICTED: class: %s, confidence: %f" % (list(prediction.keys())[0], list(prediction.values())[0]))
plt.imshow(img)
plt.figure(idx)
plt.show()
###Output
SOURCE: class: Tomato___Leaf_Mold, file: Tomato___Leaf_Mold/22cb45f2-6368-4e94-8deb-939d1f6b85ca___Crnl_L.Mold 7084.JPG
PREDICTED: class: Tomato___Leaf_Mold, confidence: 0.998060
###Markdown
Export as saved model and convert to TFLiteNow that you've trained the model, export it as a saved model
###Code
import time
t = time.time()
export_path = "/tmp/saved_models/{}".format(int(t))
tf.keras.experimental.export_saved_model(model, export_path)
export_path
# Now confirm that we can reload it, and it still gives the same results
reloaded = tf.keras.experimental.load_from_saved_model(export_path, custom_objects={'KerasLayer':hub.KerasLayer})
def predict_reload(image):
probabilities = reloaded.predict(np.asarray([img]))[0]
class_idx = np.argmax(probabilities)
return {classes[class_idx]: probabilities[class_idx]}
for idx, filename in enumerate(random.sample(validation_generator.filenames, 2)):
print("SOURCE: class: %s, file: %s" % (os.path.split(filename)[0], filename))
img = load_image(filename)
prediction = predict_reload(img)
print("PREDICTED: class: %s, confidence: %f" % (list(prediction.keys())[0], list(prediction.values())[0]))
plt.imshow(img)
plt.figure(idx)
plt.show()
###Output
SOURCE: class: Raspberry___healthy, file: Raspberry___healthy/f40dd477-0530-4619-a99a-03a51f053dfe___Mary_HL 9167.JPG
PREDICTED: class: Raspberry___healthy, confidence: 0.820325
###Markdown
Convert Model to TFLite
###Code
# convert the model to TFLite
!mkdir "tflite_models"
TFLITE_MODEL = "tflite_models/plant_disease_model.tflite"
# Get the concrete function from the Keras model.
run_model = tf.function(lambda x : reloaded(x))
# Save the concrete function.
concrete_func = run_model.get_concrete_function(
tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype)
)
# Convert the model to standard TensorFlow Lite model
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
converted_tflite_model = converter.convert()
open(TFLITE_MODEL, "wb").write(converted_tflite_model)
###Output
_____no_output_____ |
basketball-blogs.ipynb | ###Markdown
Get links from Basketball Intelligence- Parse HTML to pull links- Save lists of links by date, title, author, etc (pick metadata) Part 1- Ray's site goes back to 2013. We will use Python standard library `calendar` to get all the days and it looks like we can just iterate over the dates as URL and pull the respective `a` tags. Part 2- Download Article, will use Python `newspaper` module and see how it does otherwise will look at `Scrapy`
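For Part 2, which is not implemented in this notebook, a minimal sketch of downloading a single article with the `newspaper` (newspaper3k) module could look like this; the URL is a placeholder, not a real link from the scrape:

```python
from newspaper import Article

url = "https://example.com/some-nba-article"  # placeholder URL
article = Article(url)
article.download()   # fetch the HTML
article.parse()      # extract title, authors, text and publish date
print(article.title, article.authors, article.publish_date)
```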
###Code
import requests
from bs4 import BeautifulSoup, Tag, NavigableString
import re
import pandas as pd
import time
import random
import os
try:
os.mkdir('downloads')
except:
print("Directory Already Exists")
BASE_URL = "http://basketballintelligence.net/2018/12/"
# creating a list of years and months I want to scrape
years = [year for year in range(2019, 2012, -1)]
months = [month for month in range(12, 0, -1)]
###Output
_____no_output_____
###Markdown
Some Regex Tutorials- https://docs.python.org/3.3/howto/regex.html- https://regexone.com/
###Code
def extract_link(text):
    # compiled regex pattern used to extract URLs from the scraped strings
extractor = re.compile(r'http[s]?.+/.+.[com|net|html].+')
try:
return re.search(extractor, text).group(0)
except:
pass
# return 0
extract_link("bobs homes http://basketballintelligence.net/2018/12/")
###Output
_____no_output_____
###Markdown
If you need to scrape by actual Day instead of Year-MonthYou can use this:```pythonimport calendarnew_list = []def flatten_list(lists): for elem in lists: if not isinstance(elem, list): new_list.append(elem) else: flatten_list(elem) return new_listall_days = []for year in range(2013, 2020): print(year) for i in calendar.Calendar().yeardatescalendar(year): all_days.append(i)days_to_parse = flatten_list(all_days)[105:-3]```
###Code
def scrap_daily_links(soup):
daily_links = []
print(len(soup.find_all('div', {"class":"entry-content"})))
if len(soup.find_all('div', {"class":"entry-content"})) > 1:
for group in soup.find_all('div', {"class":"entry-content"}):
for j in group.find_all('div'):
try:
if 'http' in j.text:
daily_links.append(extract_link(j.text))
#print(j.find('div')['data-url'])
except:
pass
# print('****')
    elif len(soup.find_all('div', {"class":"entry-content"})) == 1:
for group in soup.find_all('div', {"class":"entry-content"}):
for j in group.find_all('div'):
try:
if 'http' in j.text:
daily_links.append(extract_link(j.text))
#print(j.find('div')['data-url'])
except:
pass
# print('****')
else:
print('<<<<<<<<<< ---------- >>>>>>>>>>>')
print('No Div Entry-Content in Blog')
return list(set(daily_links))
for year in years:
for month in months:
page = 1
print(f'{year} - {month}')
BASE_URL = f"http://basketballintelligence.net/{year}/{str(month).zfill(2)}/"
r = requests.get(BASE_URL)
soup = BeautifulSoup(r.text, 'lxml')
daily_links = scrap_daily_links(soup)
# save DF, also add some metadata to enrich data set
df = pd.DataFrame(daily_links, columns=['daily_links'])
df.insert(0, 'page', page)
df.insert(0, 'month', month)
df.insert(0, 'year', year)
filename = f"downloads/articles{str(month).zfill(2)}{year}.csv"
df.to_csv(filename,index=False)
print(f"DF Rows: {len(df)}")
try:
while soup.find('div',{"class":"nav-previous"}).find('a')['href'] is not None:
print("----- Older Posts - Paginating")
page +=1
URL = soup.find('div',{"class":"nav-previous"}).find('a')['href']
r = requests.get(URL)
soup = BeautifulSoup(r.text, 'lxml')
daily_links = scrap_daily_links(soup)
df = pd.DataFrame(daily_links, columns=['daily_links'])
df.insert(0, 'page', page)
df.insert(0, 'month', month)
df.insert(0, 'year', year)
filename = f"downloads/articles{str(month).zfill(2)}{year}_{page}.csv"
df.to_csv(filename,index=False)
print(f"DF Rows: {len(df)}")
print(f"Page: {page}")
wait = random.uniform(1.2, 2)
print(wait)
time.sleep(wait)
except:
print("**** No More Pages")
pass
wait = random.uniform(1.4, 2.5)
print(wait)
time.sleep(wait)
pd.read_csv("z_downloads/articles012019.csv")
###Output
_____no_output_____ |
_downloads/plot_blur.ipynb | ###Markdown
Blurring of images===================An example showing various processes that blur an image.
###Code
import scipy.misc
from scipy import ndimage
import matplotlib.pyplot as plt
face = scipy.misc.face(gray=True)
blurred_face = ndimage.gaussian_filter(face, sigma=3)
very_blurred = ndimage.gaussian_filter(face, sigma=5)
local_mean = ndimage.uniform_filter(face, size=11)
plt.figure(figsize=(9, 3))
plt.subplot(131)
plt.imshow(blurred_face, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(132)
plt.imshow(very_blurred, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(133)
plt.imshow(local_mean, cmap=plt.cm.gray)
plt.axis('off')
plt.subplots_adjust(wspace=0, hspace=0., top=0.99, bottom=0.01,
left=0.01, right=0.99)
plt.show()
###Output
_____no_output_____ |
notebooks/result_analysis-pyod.ipynb | ###Markdown
import pandas as pd
import pathlib

flist = [p for p in pathlib.Path('/home/philipp/projects/dad4td/reports/clustering').iterdir() if p.is_file()]
flist

files = ["/home/philipp/projects/dad4td/reports/clustering/0001_pyod_test.tsv",
         "/home/philipp/projects/dad4td/reports/clustering/0002_pyod_test.tsv",
         "/home/philipp/projects/dad4td/reports/clustering/0001_pyod_test_2.tsv"]

df = pd.concat([pd.read_csv(filename, sep="\t") for filename in flist]).reset_index(drop=True)
df_mean = df

score_cols = ['model_train_data', 'f1_macro', 'in_f1', 'out_f1', "model_path" ]
data_cols = ['Unnamed: 0', 'contamination', 'data_frac', 'seed', 'out_prec', 'out_rec']
param_cols = [x for x in list(df.columns) if x not in score_cols]
param_cols = [x for x in param_cols if x not in data_cols]
cols = score_cols + param_cols

def get_name(x):
    x = str(x).split("/")
    if len(x) >= 2:
        return x[-2]
    else:
        return x[0]

df["model_name"] = df["model_path"].map(lambda x: get_name(x))
df["model_name"]
###Code
# best parameters
# look at v_measure, homogeneity, out_f1 or out_f1_LOF
import os.path
score = "f1_macro"
all_avg_ivis = ['outlier_detector', "model_name", "model_train_data", "n_components", "set_op_mix_ratio", "distance","model"]
all_avg = ['outlier_detector', "model_name", "model_train_data", "n_components", "set_op_mix_ratio"]
only_nc = ['outlier_detector', "model_name", "model_train_data", "n_components"]
df_mean = df.groupby(all_avg)[score_cols].mean()
df_mean["runs"] = df.groupby(all_avg)[score_cols].size()
df_mean = df_mean.reset_index()
df_mean = df_mean[df_mean["runs"]>=1]
df_mean = df_mean.sort_values(by=score, ascending=False).reset_index(drop=True)
df_mean.head(50)
###Output
_____no_output_____ |
jupyter_notebooks/HSMA_tutorial_1.ipynb | ###Markdown
Types of machine learning(Image from https://technovert.com/introduction-to-machine-learning/) An introduction to classification with machine learningIn classification tasks we seek to classify a 'case' into one or more classes, given one or more input features. This may be extended to enquiring about probability of classification. Examples of classification include:* What diagnosis should this patient be given?* What is the probability that an emergency department will breach four-hour waiting in the next two hours?* What treatment should a patient be given?* What is the probability that a patient will be re-admitted?**Reflection:**1. Can you think of three occasions where people make classifications?1. Can you think of an example where it might be useful to provide automated classification? Regression and logistic regressionWith ordinary regression we are trying to predict a value given one or more input features.*(Flashcard images from machinelearningflashcards.com)*With logistic regression we are trying to predict the probability, given one or more inputs, that an example belongs to a particular class (e.g. *pass* vs *fail* in an exam). Our training data has the actual class (which we turn into 0 or 1), but the model returns a probability of a new example being in either class 0 or 1. The logistic regression fit limits the range of output to between 0 and 1 (where a linear regression could predict outside of this range).$$P = \dfrac{e^{a+bX}}{1+e^{a+bX}}$$ Import libraries
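As a quick numerical illustration of the logistic curve above (a sketch only; the coefficients `a` and `b` are arbitrary and not fitted to any data):

```python
import numpy as np

a, b = -4.0, 0.08            # arbitrary illustrative coefficients
X = np.linspace(0, 100, 5)   # e.g. a feature such as hours of study
P = np.exp(a + b * X) / (1 + np.exp(a + b * X))
print(np.round(P, 3))        # every value lies between 0 and 1
```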
###Code
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Data scalingIt is usual to scale input features in machine learning, so that all features are on a similar scale. Consider these two features (we will create artifical data).
###Code
# Create two sets of data with different means and standard deviations
np.random.seed(123) # Make reproducible
x1 = np.random.normal(50,10,size=1000)
x2 = np.random.normal(150,30,size=1000)
# Plot data
with plt.xkcd():
# Set up single plot
fig, ax = plt.subplots(figsize=(8,5))
# Add histogram of x1
ax.hist(x1, bins=50, alpha=0.5, color='b', label='x1')
# Add histogram of x2
ax.hist(x2, bins=50, alpha=0.5, color='r', label='x2')
# Add labels
ax.set_xlabel('value')
ax.set_ylabel('count')
# Add legend
ax.legend()
# Finalise and show plot
plt.show()
###Output
_____no_output_____
###Markdown
MinMax NormalisationWith MinMax normalisation we scale all values between 0 and 1.$$z = \frac{x-min(x)}{max(x) - min(x)}$$A less common alternative is to scale between -1 and 1.$$z = -1 + 2\frac{x-min(x)}{max(x) - min(x)}$$Here we will use 0-1 normalisation:
###Code
x1_norm = (x1 - x1.min()) / (x1.max() - x1.min())
x2_norm = (x2 - x2.min()) / (x2.max() - x2.min())
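# The -1 to 1 variant mentioned above, included only as a sketch;
# these variables are illustrative and are not used later in the notebook.
x1_pm1 = -1 + 2 * (x1 - x1.min()) / (x1.max() - x1.min())
x2_pm1 = -1 + 2 * (x2 - x2.min()) / (x2.max() - x2.min())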
###Output
_____no_output_____
###Markdown
StandardisationWith standardisation we scale data such that all features have a mean of 0 and standard deviation of 1. To do this we simply subtract by the mean and divide by the standard deviation.$$z = \frac{x-\mu}{\sigma}$$
###Code
x1_std = (x1 - x1.mean()) / x1.std()
x2_std = (x2 - x2.mean()) / x2.std()
###Output
_____no_output_____
###Markdown
Plotting the transformations
###Code
with plt.xkcd():
# Set up three subplots (12 x 5 inch plot)
fig, axs = plt.subplots(1, 3, figsize=(12,5))
# Plot original data in ax[0]
axs[0].hist(x1, bins=50, alpha=0.5, color='b', label='x1')
axs[0].hist(x2, bins=50, alpha=0.5, color='r', label='x2')
axs[0].set_xlabel('value')
axs[0].set_ylabel('count')
axs[0].legend()
axs[0].set_title('Original data')
# Plot normalised data in axs[1]
axs[1].hist(x1_norm, bins=50, alpha=0.5, color='b', label='x1 norm')
axs[1].hist(x2_norm, bins=50, alpha=0.5, color='r', label='x2 norm')
axs[1].set_xlabel('value')
axs[1].set_ylabel('count')
axs[1].set_title('MinMax Normalised data')
# Plot standardised data in axs[2]
    axs[2].hist(x1_std, bins=50, alpha=0.5, color='b', label='x1 std')
    axs[2].hist(x2_std, bins=50, alpha=0.5, color='r', label='x2 std')
axs[2].set_xlabel('value')
axs[2].set_ylabel('count')
axs[2].set_title('Standardised data')
# Adjust padding between subplots and show figure
fig.tight_layout(pad=1.0)
plt.show()
###Output
_____no_output_____ |
Supervised Learning/MNIST dataset classification/.ipynb_checkpoints/assignment1_template-checkpoint.ipynb | ###Markdown
Assignment 1This jupyter notebook is meant to be used in conjunction with the full questions in the assignment pdf. Instructions- Write your code and analyses in the indicated cells.- Ensure that this notebook runs without errors when the cells are run in sequence.- Do not attempt to change the contents of the other cells. Submission- Ensure that this notebook runs without errors when the cells are run in sequence.- Rename the notebook to `.ipynb` and submit ONLY the notebook file on moodle. Environment setupThe following code reads the train and test data (provided along with this template) and outputs the data and labels as numpy arrays. Use these variables in your code.--- Note on conventionsIn mathematical notation, the convention is that data matrices are column-indexed, which means that an input data $x$ has shape $[d, n]$, where $d$ is the number of dimensions and $n$ is the number of data points, respectively.Programming languages have a slightly different convention. Data matrices are of shape $[n, d]$. This has the benefit of being able to access the ith data point as a simple `data[i]`.What this means is that you need to be careful about your handling of matrix dimensions. For example, while the covariance matrix (of shape $[d,d]$) for input data $x$ is calculated as $(x-u)(x-u)^T$, while programming you would do $(x-u)^T(x-u)$ to get the correct output shapes.
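A tiny NumPy illustration of this shape convention (made-up numbers, not the assignment data):

```python
import numpy as np

n, d = 5, 3                           # 5 data points, 3 dimensions
x = np.random.randn(n, d)             # data matrix of shape [n, d]
u = x.mean(axis=0)                    # per-dimension mean, shape [d]
cov = (x - u).T @ (x - u) / (n - 1)   # covariance matrix, shape [d, d]
print(cov.shape)                      # (3, 3)
```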
###Code
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
def read_data(filename):
with open(filename, 'r') as f:
lines = f.readlines()
num_points = len(lines)
dim_points = 28 * 28
data = np.empty((num_points, dim_points))
labels = np.empty(num_points)
for ind, line in enumerate(lines):
num = line.split(',')
labels[ind] = int(num[0])
data[ind] = [ int(x) for x in num[1:] ]
return (data, labels)
train_data, train_labels = read_data("sample_train.csv")
test_data, test_labels = read_data("sample_test.csv")
print(train_data.shape, test_data.shape)
print(train_labels.shape, test_labels.shape)
###Output
_____no_output_____
###Markdown
Questions--- 1.3.1 RepresentationThe next code cells, when run, should plot the eigen value spectrum of the covariance matrices corresponding to the mentioned samples. Normalize the eigen value spectrum and only show the first 100 values.
###Code
# Samples corresponding to the last digit of your roll number (plot a)
# Roll no. 2019701007
from numpy import linalg as LA
from numpy.linalg import matrix_rank
train_sample_of_7 = train_data[train_labels[:]==7]
cov = np.cov(train_sample_of_7.T) # need to do transpose to use np.cov
# otherwise do it formula wise
# mean_vec = np.mean(train_sample_of_7, axis=0)
# cov_mat = (train_sample_of_7 - mean_vec).T.dot((train_sample_of_7 - mean_vec)) / (train_sample_of_7.shape[0]-1)
eigenvalues, eigenvectors = LA.eigh(cov)
eigenvalues.sort()
eigenvalues = eigenvalues[::-1]
total = sum(eigenvalues)
var_exp = [(i / total)*100 for i in eigenvalues]
approx_rank_k = len((np.where(eigenvalues>0))[0])
print('Approx Rank of Cov matrix', approx_rank_k)
xaxis = np.arange(100)
plt.bar(xaxis, var_exp[:100])
plt.rcParams['figure.figsize'] = [15, 10]
plt.title('Eigen Spectrum')
plt.show()
# Samples corresponding to the last digit of (your roll number + 1) % 10 (plot b)
# for sample no - 8
train_sample_of_8 = train_data[train_labels[:]==8]
cov = np.cov(train_sample_of_8.T)
eigenvalues, eigenvectors = LA.eigh(cov)
eigenvalues.sort()
eigenvalues = eigenvalues[::-1]
total = sum(eigenvalues)
var_exp = [(i / total)*100 for i in eigenvalues]
approx_rank_k = len((np.where(eigenvalues>0))[0])
print('Approx Rank of Cov matrix', approx_rank_k)
xaxis = np.arange(100)
plt.bar(xaxis, var_exp[:100])
plt.rcParams['figure.figsize'] = [15, 10]
plt.title('Eigen Spectrum')
plt.show()
# All training data (plot c)
cov = np.cov(train_data.T)
eigenvalues, eigenvectors = LA.eigh(cov)
eigenvalues.sort()
eigenvalues = eigenvalues[::-1]
total = sum(eigenvalues)
var_exp = [(i / total)*100 for i in eigenvalues]
approx_rank_k = len((np.where(eigenvalues>0))[0])
print('Approx Rank of Cov matrix', approx_rank_k)
xaxis = np.arange(100)
plt.bar(xaxis, var_exp[:100])
plt.rcParams['figure.figsize'] = [15, 10]
plt.title('Eigen Spectrum')
plt.show()
# Randomly selected 50% of the training data (plot d)
# df = pd.DataFrame(train_data)
# half_train_data = df.sample(frac = 0.5)
idx = np.random.randint(6000, size=3000)
half_train_data = train_data[idx,:]
cov = np.cov(half_train_data.T)
eigenvalues, eigenvectors = LA.eigh(cov)
eigenvalues.sort()
eigenvalues = eigenvalues[::-1]
total = sum(eigenvalues)
var_exp = [(i / total)*100 for i in eigenvalues]
approx_rank_k = len((np.where(eigenvalues>0))[0])
print('Approx Rank of Cov matrix', approx_rank_k)
xaxis = np.arange(100)
plt.bar(xaxis, var_exp[:100])
plt.rcParams['figure.figsize'] = [15, 10]
plt.title('Eigen Spectrum')
plt.show()
###Output
_____no_output_____
###Markdown
1.3.1 Question 1- Are plots a and b different? Why?- Are plots b and c different? Why?- What are the approximate ranks of each plot? ---- Plot a is for the samples of digit 7 and plot b is for the samples of digit 8. Overall the plots have a similar structure (a maximum followed by values close to zero), but they differ in magnitude and in how quickly the magnitude decreases. Eigen values correspond to variances across features, so for different digits the variance across different features will be different. Roughly, the eigen values of plot a go like 17.5, 12.5, 8, 5,... while the eigen values of plot b go like 14.5, 8, 6, 5,...- Plot b is for the samples of digit 8 and plot c is for the whole training set. Overall the plots again have a similar structure (a maximum followed by values close to zero), but there is a much larger difference in magnitude and in the rate of decrease. Roughly, the eigen values of plot b go like 14.5, 8, 6, 5,... while those of plot c go like 10, 7.5, 6.7, 5,... In plot b the highest eigen value is larger because it covers a single digit, so all samples follow quite a similar pattern, resulting in high variance; plot c covers the whole training set with all digits, so it does not have as high a variance as plot b.- Approximate ranks: Plot a: 561, Plot b: 562, Plot c: 704, Plot d: 675--- 1.3.1 Question 2- How many possible images could there be?- What percentage is accessible to us as MNIST data?- If we had access to all the data, how would the eigen value spectrum of the covariance matrix look? ---- There are 784 dimensions in total and each dimension can take two values {0,1}, so there are 2^784 possible images.- Percentage: training data (6000/(2^784)) * 100, test data (1000/(2^784)) * 100, both vanishingly small.- If we had access to all the data, all eigen values would have the same value, i.e. the eigen value spectrum would be constant throughout. This makes sense: we would need everything to generate the full data and could not omit anything. --- 1.3.2 Linear Transformation--- 1.3.2 Question 1How does the eigen spectrum change if the original data was multiplied by an orthonormal matrix? Answer analytically and then also validate experimentally. ---Consider $A$ to be our data matrix and, for simplicity, assume its columns have mean 0, so the covariance matrix is $A^TA$ (where $A^T$ is the transpose of $A$). Taking the eigen value decomposition of the covariance matrix gives $PDP^{-1}$, where $P$ is an orthogonal matrix because it consists of the eigen vectors of a symmetric matrix (the covariance matrix is symmetric). Case 1: $X = AP$ (multiply the data by the orthogonal matrix on the right). New covariance matrix: $X^TX = (AP)^T(AP) = P^T A^T A P = P^T (PDP^{-1}) P = D$ (since $A^TA = PDP^{-1}$ and $P^T = P^{-1}$ for an orthogonal $P$), resulting in the same set of eigen values. Case 2: $X = PA$ (multiply the data by the orthogonal matrix on the left). New covariance matrix: $X^TX = (PA)^T(PA) = A^T P^T P A = A^TA$ (since $P^TP = I$), i.e. the same covariance matrix, resulting in the same set of eigen values. These were fairly generic scenarios, multiplying with the orthonormal matrix formed by the eigen vectors of the covariance matrix, so overall the eigen value spectrum would be similar, though the eigen values themselves may differ.---
###Code
# Experimental validation here.
# Multiply your data (train_data) with an orthonormal matrix and plot the
# eigen value specturm of the new covariance matrix.
# code goes here
def rvs(dim):
print('Calculating Orthogonal matrix..........')
random_state = np.random
H = np.eye(dim)
D = np.ones((dim,))
for n in range(1, dim):
x = random_state.normal(size=(dim-n+1,))
D[n-1] = np.sign(x[0])
x[0] -= D[n-1]*np.sqrt((x*x).sum())
# Householder transformation
Hx = (np.eye(dim-n+1) - 2.*np.outer(x, x)/(x*x).sum())
mat = np.eye(dim)
mat[n-1:, n-1:] = Hx
H = np.dot(H, mat)
# Fix the last sign such that the determinant is 1
D[-1] = (-1)**(1-(dim % 2))*D.prod()
# Equivalent to np.dot(np.diag(D), H) but faster, apparently
H = (D*H.T).T
return H
cov = np.cov(train_data.T)
eigenvalues, eigenvectors = LA.eigh(cov)
eigenvalues.sort()
eigenvalues = eigenvalues[::-1]
total = sum(eigenvalues)
var_exp = [(i / total)*100 for i in eigenvalues]
approx_rank_k = len((np.where(eigenvalues>0))[0])
xaxis1 = np.arange(100)
plt.subplot(2, 2, 1)
plt.bar(xaxis1, var_exp[:100])
plt.rcParams['figure.figsize'] = [15, 10]
plt.title('Eigen Spectrum of original data')
# Case: new_data = train_data x orthonormal matrix
orthogonalmatrix1 = rvs(784)
new_data = train_data.dot(orthogonalmatrix1)
new_cov = np.cov(new_data.T)   # covariance of the transformed data, not the original
new_eigenvalues, new_eigenvectors = LA.eigh(new_cov)
new_eigenvalues.sort()
new_eigenvalues = new_eigenvalues[::-1]
new_total = sum(new_eigenvalues)
new_var_exp = [(i / new_total)*100 for i in new_eigenvalues]
plt.subplot(2, 2, 2)
xaxis1 = np.arange(100)
plt.bar(xaxis1, new_var_exp[:100])
plt.rcParams['figure.figsize'] = [15, 10]
plt.title('Eigen Spectrum of new data')
plt.show()
###Output
_____no_output_____
###Markdown
1.3.2 Question 2If samples were multiplied by a 784 × 784 matrix of rank 1 or 2 (rank-deficient matrices), how will the eigen spectrum look? ---
A rank-1 matrix has only one non-zero eigenvalue, and when we multiply the data by a rank-1 matrix, the product also becomes rank 1.
Proof: a rank-1 matrix is an outer product, column × row = u v^T. If the data matrix A is multiplied by it we get A u v^T; now A u is again a single column, and column × row is rank 1.
In general, multiplying the data by a rank-deficient matrix makes the result rank deficient: many columns become linearly dependent, so there is little independent information left, and the covariance matrix of the result is rank deficient too. For the rank-1 case, (A u v^T)^T (A u v^T) = v (u^T A^T A u) v^T = scalar × v v^T, which is again a rank-1 matrix.
The eigen spectrum is therefore affected drastically: only one (or two) eigenvalues remain non-zero and a lot of information is lost. A small experimental check follows just below.
--- 1.3.2 Question 3Project the original data into the first and second eigenvectors and plot in 2D
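Experimental check for Question 2 (a hypothetical sketch, not part of the original notebook; it assumes `train_data` and `LA` from the cells above are available). The Question 3 plot follows in the next cell.
###Code
# Multiply the data by a random rank-1 matrix and inspect the eigen spectrum of the
# resulting covariance matrix: only one eigenvalue should remain far from zero.
u = np.random.randn(784, 1)
v = np.random.randn(784, 1)
rank1_matrix = u.dot(v.T)                          # 784 x 784 matrix of rank 1
rank1_data = np.asarray(train_data, dtype=float).dot(rank1_matrix)
rank1_cov = np.cov(rank1_data.T)
rank1_eigenvalues, _ = LA.eigh(rank1_cov)
rank1_eigenvalues = np.sort(rank1_eigenvalues)[::-1]
print('Largest eigenvalues:', rank1_eigenvalues[:5])
print('Eigenvalues significantly above zero:',
      np.sum(rank1_eigenvalues > 1e-6 * rank1_eigenvalues[0]))
###Output
_____no_output_____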
###Code
# Plotting code here
cov = np.cov(train_data.T) # need to do transpose to use np.cov
[eigenvalues, eigenvectors] = LA.eigh(cov)
#Get the top two eigen vector
# Make a list of (eigenvalue, eigenvector) tuples
eig_pairs = [(np.abs(eigenvalues[i]), eigenvectors[:,i]) for i in range(len(eigenvalues))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eig_pairs.sort(key=lambda x: x[0], reverse=True)
matrix_w = np.hstack((eig_pairs[0][1].reshape(784,1), eig_pairs[1][1].reshape(784,1)))
transformed = train_data.dot(matrix_w)
# then plot
plt.plot(transformed[0:6000,0], transformed[0:6000,1], 'o', color='blue', label='Transformed data')
plt.xlim([-3000,1000])
plt.ylim([-2000,2000])
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.legend()
plt.title('Transformed samples')
plt.show()
###Output
_____no_output_____
###Markdown
1.3.3 Probabilistic View---In this section you will classify the test set by fitting multivariate gaussians on the train set, with different choices for decision boundaries. On running, your code should print the accuracy on your test set.
###Code
# Print accuracy on the test set using MLE
from numpy import diag
def train (train_data, train_labels):
dict_alldata = {};
n_features = len(train_labels)
for i in range(10):
print('Training for sample ', i)
train_sample = train_data[train_labels[:]==i]
cov = np.cov(train_sample.T)
mu = train_sample.mean ( axis=0 )
eigenvalues, eigenvectors = LA.eigh(cov)
eigen = np.column_stack((eigenvalues,eigenvectors))
eigen = eigen[(-eigen[:,0]).argsort()]
sortedEigenValues = np.array(eigen[:,0])
sortedEigenVectors = np.array(eigen[:,1:])
approx_rank_k = len((np.where(sortedEigenValues>0))[0])
first_k_eigen_values = sortedEigenValues[:approx_rank_k]
first_k_eigen_values_inv = np.reciprocal(first_k_eigen_values)
first_k_eigen_vectors = sortedEigenVectors[:approx_rank_k,:]
inverse = first_k_eigen_vectors.T.dot(diag(first_k_eigen_values_inv)).dot(first_k_eigen_vectors)
(sign, logdet) = np.linalg.slogdet(diag(first_k_eigen_values))
Wi = -0.5 * (inverse)
wi = inverse.dot(mu)
        wi0 = -0.5*( mu.T.dot(inverse).dot(mu) + logdet )  # -0.5*(mu^T Sigma^-1 mu) - 0.5*ln|Sigma|; sign fixed on the log-determinant term
dict_alldata[i]={
"Wi":Wi,
"wi":wi,
"wi0":wi0
}
return dict_alldata
def classify ( x_test ):
prob = [];
for i in range(10):
Wi = dict_alldata[i]["Wi"]
wi = dict_alldata[i]["wi"]
wi0 = dict_alldata[i]["wi0"]
log_prob = x_test.T.dot(Wi).dot(x_test) + wi.T.dot(x_test) + wi0
prob.append(log_prob)
maxElement = np.amax(prob)
itemindex = (np.where(prob == np.amax(prob)))
identified_number = itemindex[0][0]
return identified_number
dict_alldata = train(train_data,train_labels)
accuracy = 0;
for index in range(len(test_data)):
identified_number = (classify(test_data[index]))
# print('Identified ',identified_number, 'actually is', test_labels[index])
if(identified_number == test_labels[index]):
accuracy = accuracy + 1
print('Accuracy',(accuracy * 100) / len(test_data), '%')
# Print accuracy on the test set using MAP
# (assume a reasonable prior and mention it in the comments)
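# A hypothetical MAP sketch (not part of the original notebook). It reuses
# `dict_alldata`, `test_data` and `test_labels` from the MLE cells above and
# assumes a uniform prior P(class) = 1/10 over the 10 digits; with a uniform
# prior the log-prior term is the same for every class, so MAP reduces to the
# MLE decision rule. A non-uniform prior would simply change `log_priors`.
log_priors = [np.log(1.0 / 10.0)] * 10
def classify_map(x_test):
    log_posteriors = []
    for i in range(10):
        Wi = dict_alldata[i]["Wi"]
        wi = dict_alldata[i]["wi"]
        wi0 = dict_alldata[i]["wi0"]
        log_likelihood = x_test.T.dot(Wi).dot(x_test) + wi.T.dot(x_test) + wi0
        log_posteriors.append(log_likelihood + log_priors[i])
    return int(np.argmax(log_posteriors))
map_correct = 0
for index in range(len(test_data)):
    if classify_map(test_data[index]) == test_labels[index]:
        map_correct = map_correct + 1
print('MAP Accuracy', (map_correct * 100) / len(test_data), '%')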
# Print accuracy using Bayesian pairwise majority voting method
import itertools
all_numbers_data = {};
# Calculate means for each class from data
# now for pair (0,1) calculate cov matrix. and all inv, and constant term.
#
pairs =(list(itertools.combinations(range(10), 2)))
def train_pairwise (train_data, train_labels):
mus = [];
n_features = len(train_labels)
for i in range(10):
train_sample = train_data[train_labels[:]==i]
mu = train_sample.mean ( axis=0 )
cov = np.cov(train_sample.T)
all_numbers_data[i]={
"mu":mu,
"cov":cov
}
for pair in pairs:
print('Training Pair', pair)
cov = 0.5* (all_numbers_data[pair[0]]["cov"] + all_numbers_data[pair[1]]["cov"] )
mu_0 = all_numbers_data[pair[0]]["mu"]
mu_1 = all_numbers_data[pair[1]]["mu"]
inverse = getInverseFromCov(cov)
# finding m and c
w = inverse.dot((mu_0 - mu_1))
x0 = 0.5*(mu_0 + mu_1)
m = w.T
c = w.T.dot(x0)
dict_alldata[pair]={
"m":m,
"c":c
}
return dict_alldata
def getInverseFromCov(cov):
eigenvalues, eigenvectors = LA.eigh(cov)
eigen = np.column_stack((eigenvalues,eigenvectors))
eigen = eigen[(-eigen[:,0]).argsort()]
sortedEigenValues = np.array(eigen[:,0])
sortedEigenVectors = np.array(eigen[:,1:])
approx_rank_k = len((np.where(sortedEigenValues>0))[0])
first_k_eigen_values = sortedEigenValues[:approx_rank_k]
first_k_eigen_values_inv = np.reciprocal(first_k_eigen_values)
first_k_eigen_vectors = sortedEigenVectors[:approx_rank_k,:]
inverse = first_k_eigen_vectors.T.dot(diag(first_k_eigen_values_inv)).dot(first_k_eigen_vectors)
return (inverse)
dict_alldata = train_pairwise(train_data,train_labels)
def predict_classwise(x_test):
votes={}
for i in range(10):
votes[i]=0
for pair in pairs:
m = dict_alldata[pair]["m"]
c = dict_alldata[pair]["c"]
# print('c', c)
line = m.dot(x_test) - c
# print('line', line)
if( line > 0):
votes[pair[0]] = votes[pair[0]] + 1
else:
votes[pair[1]] = votes[pair[1]] + 1
maximum_votes_number = max(votes, key=votes.get)
return maximum_votes_number
accuracy = 0;
for index in range(len(test_data)):
identified_number = (predict_classwise(test_data[index]))
# print('Identified ',identified_number, 'actually is', test_labels[index])
if(identified_number == test_labels[index]):
accuracy = accuracy + 1
print('Accuracy is',(accuracy*100/len(test_data)), ' % ')
# Print accuracy using Simple Perpendicular Bisector majority voting method
import itertools
all_numbers_data = {};
# Calculate means for each class from data
# now for pair (0,1) calculate cov matrix. and all inv, and constant term.
#
pairs =(list(itertools.combinations(range(10), 2)))
def train_pairwise (train_data, train_labels):
mus = [];
n_features = len(train_labels)
for i in range(10):
train_sample = train_data[train_labels[:]==i]
mu = train_sample.mean ( axis=0 )
cov = np.cov(train_sample.T)
all_numbers_data[i]={
"mu":mu,
"cov":cov
}
for pair in pairs:
print('Training Pair : ' ,pair)
cov = 0.5* (all_numbers_data[pair[0]]["cov"] + all_numbers_data[pair[1]]["cov"] )
mu_0 = all_numbers_data[pair[0]]["mu"]
mu_1 = all_numbers_data[pair[1]]["mu"]
inverse = getInverseFromCov(cov)
# basically wT (x-x0) = 0 the perpendicular line. w is mu0-mu1. and x-x0 is the vector of line perpendicular
# its the same as question 3
w=mu_0 - mu_1
x0 = 0.5*(mu_0+mu_1)
c = w.T.dot(x0)
m = w.T
dict_alldata[pair]={
"m":m,
"c":c
}
return dict_alldata
def getInverseFromCov(cov):
eigenvalues, eigenvectors = LA.eigh(cov)
eigen = np.column_stack((eigenvalues,eigenvectors))
eigen = eigen[(-eigen[:,0]).argsort()]
sortedEigenValues = np.array(eigen[:,0])
sortedEigenVectors = np.array(eigen[:,1:])
approx_rank_k = len((np.where(sortedEigenValues>0))[0])
first_k_eigen_values = sortedEigenValues[:approx_rank_k]
first_k_eigen_values_inv = np.reciprocal(first_k_eigen_values)
first_k_eigen_vectors = sortedEigenVectors[:approx_rank_k,:]
inverse = first_k_eigen_vectors.T.dot(diag(first_k_eigen_values_inv)).dot(first_k_eigen_vectors)
return (inverse)
dict_alldata = train_pairwise(train_data,train_labels)
def predict_classwise(x_test):
votes={}
for i in range(10):
votes[i]=0
for pair in pairs:
m = dict_alldata[pair]["m"]
c = dict_alldata[pair]["c"]
# print('c', c)
line = m.dot(x_test) - c
# print('line', line)
if( line > 0):
votes[pair[0]] = votes[pair[0]] + 1
else:
votes[pair[1]] = votes[pair[1]] + 1
maximum_votes_number = max(votes, key=votes.get)
return maximum_votes_number
accuracy = 0;
for index in range(len(test_data)):
identified_number = (predict_classwise(test_data[index]))
# print('Identified ',identified_number, 'actually is', test_labels[index])
if(identified_number == test_labels[index]):
accuracy = accuracy + 1
print('Accuracy',accuracy*100/len(test_data))
###Output
_____no_output_____
###Markdown
1.3.3 Question 4Compare performances and salient observations ---Performances:
1) MLE => accuracy came out to be 10%; it identified all samples as 5. Under MLE parameter optimisation, the optimal mean is the mean of the training data and the optimal covariance is the covariance of the training data for each class.
2) MAP =>
3) Pairwise Bayesian => 77%.
4) Perpendicular bisector => maximum accuracy: 77%.
Methods 3 and 4 give the same accuracy: because we assume equal class probabilities and a shared covariance for each pair, the decision boundary is a line passing through the midpoint of the two means, and when the shared covariance is treated as (a multiple of) the identity it is exactly the perpendicular bisector of the segment joining them (written out in display form just below). For a pair, the covariance matrix is the same and only the mean differs, so the Gaussian has the same shape and is merely shifted according to the mean; the line bisecting the means therefore gives the linear boundary.
--- 1.3.4 Nearest Neighbour based Tasks and Design--- 1.3.4 Question 1 : NN Classification with various KImplement a KNN classifier and print accuracies on the test set with K=1,3,7
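(Display form of the pairwise decision boundary discussed in Question 4 above — a restatement of what the two code cells implement, nothing new assumed. With equal priors and a shared covariance $\Sigma$ for the pair $(i, j)$:)
$$ \mathbf{w}^{T}(\mathbf{x}-\mathbf{x}_0)=0,\qquad \mathbf{w}=\Sigma^{-1}(\boldsymbol{\mu}_i-\boldsymbol{\mu}_j),\qquad \mathbf{x}_0=\tfrac{1}{2}(\boldsymbol{\mu}_i+\boldsymbol{\mu}_j) $$
and when $\Sigma$ is treated as (a multiple of) the identity, $\mathbf{w}=\boldsymbol{\mu}_i-\boldsymbol{\mu}_j$ and the boundary is exactly the perpendicular bisector of the segment joining the two means.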
###Code
# Your code here
# Print accuracies with K = 1, 3, 7
import operator
def euclideanDistance(data1, data2, length):
distance = 0
for x in range(length):
distance += np.square(data1[x] - data2[x])
return np.sqrt(distance)
# Defining our KNN model
def knn(trainingSet, testInstance, k):
distances = {}
sort = {}
length = len(testInstance)
# Calculating euclidean distance between each row of training data and test data
for x in range(len(trainingSet)):
dist = euclideanDistance(testInstance, trainingSet[x], length)
distances[x] = dist
#### Start of STEP 3.2
# Sorting them on the basis of distance
sorted_d = sorted(distances.items(), key=operator.itemgetter(1))
#### End of STEP 3.2
neighbors = []
#### Start of STEP 3.3
# Extracting top k neighbors
for x in range(k):
neighbors.append(sorted_d[x][0])
classVotes = {}
for x in range(len(neighbors)):
response = train_labels[neighbors[x]]
if response in classVotes:
classVotes[response] += 1
else:
classVotes[response] = 1
sortedVotes = sorted(classVotes.items(), key=operator.itemgetter(1), reverse=True)
return(sortedVotes[0][0], neighbors)
def calculateAccuracy(k):
    accuracy = 0;
    # use samples as rows (no transpose), consistent with the earlier cells
    test = test_data
    train = train_data
    for x in range(len(test)):
        result,neigh = knn(train, test[x], k)
        print('Predicted result', result, 'Actually it is ', test_labels[x])
        accuracy = accuracy + (result == test_labels[x])
    print('accuracy for k ::' ,k , ' : ', accuracy*100/len(test))
calculateAccuracy(1)
calculateAccuracy(3)
calculateAccuracy(7)
# from sklearn.neighbors import KNeighborsClassifier
# #Import scikit-learn metrics module for accuracy calculation
# from sklearn import metrics
# #Create KNN Classifier K=1
# knn1 = KNeighborsClassifier(n_neighbors=1)
# #Train the model using the training sets
# knn1.fit(train_data, train_labels)
# #Predict the response for test dataset
# label_pred = knn1.predict(test_data)
# print(label_pred.shape)
# print("Accuracy for k=1:",metrics.accuracy_score(test_labels, label_pred))
# #Create KNN Classifier K=3
# knn3 = KNeighborsClassifier(n_neighbors=3)
# #Train the model using the training sets
# knn3.fit(train_data, train_labels)
# #Predict the response for test dataset
# label_pred = knn3.predict(test_data)
# print(label_pred.shape)
# print("Accuracy for k=3:",metrics.accuracy_score(test_labels, label_pred))
# #Create KNN Classifier K=7
# knn7 = KNeighborsClassifier(n_neighbors=7)
# #Train the model using the training sets
# knn7.fit(train_data, train_labels)
# #Predict the response for test dataset
# label_pred = knn7.predict(test_data)
# print(label_pred.shape)
# print("Accuracy for k=7:",metrics.accuracy_score(test_labels, label_pred))
###Output
_____no_output_____
###Markdown
1.3.4 Question 1 continued- Why / why not are the accuracies the same?- How do we identify the best K? Suggest a computational procedure with a logical explanation. ---
Accuracies: k=1 is different (90%), while k=3 and k=7 give the same accuracy (91%). k=1 is too small to draw any conclusion from — a single outlier or noisy neighbour can change the result. k=3 and k=7 give the same accuracy, but k=7 increases the computation. In general, a small value of k means that noise has a higher influence on the result, while a large value makes the prediction computationally expensive.
Algorithm: we can choose k to be around sqrt(n), where n is the number of classes.
--- 1.3.4 Question 2 : Reverse NN based outlier detectionA sample can be thought of as an outlier if it is NOT in the nearest neighbour set of anybody else. Expand this idea into an algorithm.
###Code
# This cell reads mixed data containing both MNIST digits and English characters.
# The labels for this mixed data are random and are hence ignored.
mixed_data, _ = read_data("outliers.csv")
print(mixed_data.shape)
###Output
_____no_output_____
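###Markdown
A minimal sketch of the reverse-NN idea described above (hypothetical, not part of the original notebook): for every sample in `mixed_data`, compute its k nearest neighbours; any sample that never appears in another sample's neighbour set is flagged as an outlier candidate. The value of k and the brute-force distance computation are arbitrary choices here.
###Code
def reverse_nn_outliers(data, k=3):
    data = np.asarray(data, dtype=float)
    n = len(data)
    # pairwise squared euclidean distances (brute force)
    sq_norms = np.sum(data**2, axis=1)
    dists = sq_norms[:, None] + sq_norms[None, :] - 2 * data.dot(data.T)
    np.fill_diagonal(dists, np.inf)              # a point is not its own neighbour
    neighbours = np.argsort(dists, axis=1)[:, :k]
    referenced = np.unique(neighbours)           # indices that appear in someone's NN set
    return np.setdiff1d(np.arange(n), referenced)
outlier_idx = reverse_nn_outliers(mixed_data, k=3)
print('Number of candidate outliers:', len(outlier_idx))
###Output
_____no_output_____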
###Markdown
1.3.4 Question 3 : NN for regressionAssume that each classID in the train set corresponds to a neatness score as:$$ neatness = \frac{classID}{10} $$---Assume we had to predict the neatness score for each test sample using NN based techniques on the train set. Describe the algorithm. ---
Yes, we can use NN here: simply use the nearest neighbours to get the class, and then divide by 10.
Algorithm (keep k = 3, close to sqrt(10)):
-> for each sample in test_data
-> compute its distance to every sample in the training set
-> sort the distances and take the three smallest
-> identify the majority class among them and divide it by 10
-> that gives the predicted neatness score
--- 1.3.4 Question 3 continuedValidate your algorithm on the test set. This code should print mean absolute error on the test set, using the train set for NN based regression.
###Code
# Your code here
# Print accuracies with K = 1, 3, 7
mean = np.mean(test_labels)
print('Mean is ', mean)
import operator
def euclideanDistance(data1, data2, length):
distance = 0
for x in range(length):
distance += np.square(data1[x] - data2[x])
return np.sqrt(distance)
# Defining our KNN model
def knn(trainingSet, testInstance, k):
distances = {}
sort = {}
length = len(testInstance)
# Calculating euclidean distance between each row of training data and test data
for x in range(len(trainingSet)):
dist = euclideanDistance(testInstance, trainingSet[x], length)
distances[x] = dist
#### Start of STEP 3.2
# Sorting them on the basis of distance
sorted_d = sorted(distances.items(), key=operator.itemgetter(1))
#### End of STEP 3.2
neighbors = []
#### Start of STEP 3.3
# Extracting top k neighbors
for x in range(k):
neighbors.append(sorted_d[x][0])
classVotes = {}
for x in range(len(neighbors)):
response = train_labels[neighbors[x]]
if response in classVotes:
classVotes[response] += 1
else:
classVotes[response] = 1
sortedVotes = sorted(classVotes.items(), key=operator.itemgetter(1), reverse=True)
return(sortedVotes[0][0], neighbors)
def calculateMAE(k):
    mae = 0;
    # use samples as rows (no transpose), consistent with the earlier cells
    test = test_data
    train = train_data
    for x in range(len(test)):
        result,neigh = knn(train, test[x], k)
        result_neatness = result/10
        actual_neatness = test_labels[x]/10   # neatness = classID / 10
        mae = mae + np.absolute(result_neatness - actual_neatness)
        print('current mae...', mae)
    mae = mae / (len(test))
    print('MAE for', k , ' is : ', mae)
calculateMAE(3)
###Output
_____no_output_____ |
MINI PROJECTS/Machine Learning/Car Price Prediction.ipynb | ###Markdown
Importing the Dependencies
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Lasso
from sklearn import metrics
###Output
_____no_output_____
###Markdown
Data Collection and Processing
###Code
# loading the data from csv file to pandas dataframe
car_dataset = pd.read_csv('Desktop/Python/MINI PROJECTS/Machine Learning/archive1/car data.csv')
# print first 5 rows of the dataframe
car_dataset.head()
# checking the number of rows and columns
car_dataset.shape
# getting the some information about the dataset
car_dataset.info()
# checking the number of missing values
car_dataset.isnull().sum()
# checking the distribution of categorical data
print(car_dataset.Fuel_Type.value_counts())
print(car_dataset.Seller_Type.value_counts())
print(car_dataset.Transmission.value_counts())
###Output
Petrol 239
Diesel 60
CNG 2
Name: Fuel_Type, dtype: int64
Dealer 195
Individual 106
Name: Seller_Type, dtype: int64
Manual 261
Automatic 40
Name: Transmission, dtype: int64
###Markdown
Encoding the Categorical Data
###Code
# encoding "Fuel_Type" Column
car_dataset.replace({'Fuel_Type':{'Petrol':0, 'Diesel':1, 'CNG':2}}, inplace=True)
# encode "Seller Type" Column
car_dataset.replace({'Seller_Type': {'Dealer':0, 'Individual':1}}, inplace=True)
# encode "Transmission" Column
car_dataset.replace({'Transmission': {"Manual":0, 'Automatic':1}}, inplace=True)
car_dataset.head()
###Output
_____no_output_____
###Markdown
Splitting the data and Target
###Code
X = car_dataset.drop(['Car_Name', 'Selling_Price'],axis=1)
Y = car_dataset['Selling_Price']
print(X)
print(Y)
###Output
0 3.35
1 4.75
2 7.25
3 2.85
4 4.60
...
296 9.50
297 4.00
298 3.35
299 11.50
300 5.30
Name: Selling_Price, Length: 301, dtype: float64
###Markdown
Splitting Training and Test data
###Code
X_train, x_test, Y_train, y_test = train_test_split(X, Y, test_size = 0.1, random_state=2)
###Output
_____no_output_____
###Markdown
Model Training 1. Linear Regression
###Code
# loading the linear regression model
lin_reg_model = LinearRegression()
lin_reg_model.fit(X_train, Y_train)
###Output
_____no_output_____
###Markdown
Model Evaluation
###Code
# prediction on Training data
training_data_prediction = lin_reg_model.predict(X_train)
# R square Error
error_score = metrics.r2_score(Y_train, training_data_prediction)
print("R square Error : ", error_score)
###Output
R square Error : 0.8799451660493711
###Markdown
Visualize the actual prices and Predicated Prices
###Code
plt.scatter(Y_train, training_data_prediction)
plt.xlabel("Actual Price")
plt.ylabel("Predicted Price")
plt.title("Actual Prices vs Predicted Prices")
plt.show()
# prediction on Training data
test_data_prediction = lin_reg_model.predict(x_test)
# R squared Error
error_score = metrics.r2_score(y_test, test_data_prediction)
print("R squared Error : ", error_score)
plt.scatter(y_test, test_data_prediction)
plt.xlabel("Actual Price")
plt.ylabel("Predicted Price")
plt.title(" Actual Prices vs Predicted Prices")
plt.show()
###Output
_____no_output_____
###Markdown
2. Lasso Regression
###Code
# loading the linear regression model
lass_reg_model = Lasso()
lass_reg_model.fit(X_train, Y_train)
###Output
_____no_output_____
###Markdown
Model Evaluation
###Code
# prediction on Training data
training_data_prediction = lass_reg_model.predict(X_train)
# R square Error
error_score = metrics.r2_score(Y_train, training_data_prediction)
print("R square : ", error_score)
###Output
R square : 0.8365766715026396
###Markdown
Visualize the actual prices and Predicted prices
###Code
plt.scatter(Y_train, training_data_prediction)
plt.xlabel('Actual Price')
plt.ylabel('Predicted Price')
plt.title('Actual Price vs Predicted Prices')
plt.show()
# prediction on Training data
test_data_prediction = lass_reg_model.predict(x_test)
# R square Error
error_score = metrics.r2_score(y_test, test_data_prediction)
print("R square Error : ", error_score)
plt.scatter(y_test, test_data_prediction)
plt.xlabel("Actual Prices")
plt.ylabel("Predicted Prices")
plt.title("Actual Prices vs Predicted Prices")
plt.show()
###Output
_____no_output_____ |
Dimensionality Reduction/PCA/SparsePCA_Normalize.ipynb | ###Markdown
Sparse PCA with Normalize This code template is for Sparse Principal Component Analysis (SparsePCA) in Python as a dimensionality reduction technique, with data rescaling using Normalize. It is used to decompose a multivariate dataset into a set of successive orthogonal components that explain a maximum amount of the variance, keeping only the most significant singular vectors to project the data to a lower-dimensional space. Required Packages
###Code
import warnings
import itertools
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
from sklearn.preprocessing import LabelEncoder
from sklearn import preprocessing
from sklearn.decomposition import SparsePCA
from numpy.linalg import eigh
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
InitializationFilepath of CSV file
###Code
#filepath
file_path= ""
###Output
_____no_output_____
###Markdown
List of features which are required for model training .
###Code
#x_values
features=[]
###Output
_____no_output_____
###Markdown
Target feature for prediction.
###Code
#y_value
target=''
###Output
_____no_output_____
###Markdown
Data FetchingPandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.We will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry.
###Code
df=pd.read_csv(file_path)
df.head()
###Output
_____no_output_____
###Markdown
Feature SelectionsIt is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.We will assign all the required input features to X and target/outcome to Y.
###Code
X = df[features]
Y = df[target]
###Output
_____no_output_____
###Markdown
Data PreprocessingSince the majority of the machine learning models in the Sklearn library doesn't handle string category data and Null value, we have to explicitly remove or replace null values. The below snippet have functions, which removes the null value if any exists. And convert the string classes data in the datasets by encoding them to integer classes.
###Code
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
###Output
_____no_output_____
###Markdown
Correlation MapIn order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
###Code
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
###Output
_____no_output_____
###Markdown
Data RescalingFor rescaling the data normalize function of Sklearn is used.Normalization is the process of scaling individual samples to have unit norm. This process can be useful if you plan to use a quadratic form such as the dot-product or any other kernel to quantify the similarity of any pair of samples.The function normalize provides a quick and easy way to scale input vectors individually to unit norm (vector length).More about Normalize
###Code
X_Norm = preprocessing.normalize(X)
X=pd.DataFrame(X_Norm,columns=X.columns)
X.head(3)
###Output
_____no_output_____
###Markdown
Choosing the number of componentsA vital part of using Sparse PCA in practice is the ability to estimate how many components are needed to describe the data. This can be determined by looking at the cumulative explained variance ratio as a function of the number of components.This curve quantifies how much of the total, dimensional variance is contained within the first N components. Explained Variance Explained variance refers to the variance explained by each of the principal components (eigenvectors). It can be represented as a function of ratio of related eigenvalue and sum of eigenvalues of all eigenvectors. The function below returns a list with the values of explained variance and also plots cumulative explained variance
###Code
def explained_variance_plot(X):
cov_matrix = np.cov(X, rowvar=False) #this function returns the co-variance matrix for the features
    egnvalues, egnvectors = eigh(cov_matrix) #eigen decomposition is done here to fetch eigen-values and eigen-vectors
total_egnvalues = sum(egnvalues)
var_exp = [(i/total_egnvalues) for i in sorted(egnvalues, reverse=True)]
plt.plot(np.cumsum(var_exp))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
return var_exp
var_exp=explained_variance_plot(X)
###Output
_____no_output_____
###Markdown
Scree plotThe scree plot helps you to determine the optimal number of components. The eigenvalue of each component in the initial solution is plotted. Generally, you want to extract the components on the steep slope. The components on the shallow slope contribute little to the solution.
###Code
plt.plot(var_exp, 'ro-', linewidth=2)
plt.title('Scree Plot')
plt.xlabel('Principal Component')
plt.ylabel('Proportion of Variance Explained')
plt.show()
###Output
_____no_output_____
###Markdown
ModelSparse PCA is used to decompose a multivariate dataset in a set of successive orthogonal components that explain a maximum amount of the variance. In scikit-learn, Sparse PCA finds the set of sparse components that can optimally reconstruct the data. The amount of sparseness is controllable by the coefficient of the L1 penalty, given by the parameter alpha. Tunning parameters reference : [API](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.SparsePCA.html)
###Code
spca = SparsePCA(n_components=4)
spcaX = pd.DataFrame(data = spca.fit_transform(X))
###Output
_____no_output_____
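###Markdown
For reference, the `alpha` parameter mentioned above is passed directly to the constructor to control sparsity; the cell below is only an illustration (the value 1.0 is scikit-learn's default, not a choice made by this template).
###Code
# Example only: a larger alpha gives sparser components (more exactly-zero loadings)
spca_sparser = SparsePCA(n_components=4, alpha=1.0)
spcaX_sparser = pd.DataFrame(data=spca_sparser.fit_transform(X))
###Output
_____no_output_____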
###Markdown
Output Dataframe
###Code
finalDf = pd.concat([spcaX, Y], axis = 1)
finalDf.head()
###Output
_____no_output_____
Specifics/TabCompletion.ipynb | ###Markdown
Tab completion acts as an autocomplete. It can be used for instances, functions, keywords, or commands
###Code
array = [1, 2, 3]
###Output
_____no_output_____
###Markdown
Try pressing the _Tab_ key after both to see available options
###Code
a
array.
###Output
_____no_output_____ |
2019/12-spark/12-spark-intro/spark_OtacilioBezerra.ipynb | ###Markdown
Hands-on!In this practice session we suggest a few small examples for you to implement on Spark. Estimating PiThere is an algorithm for estimating Pi with random numbers. Implement it on Spark.Description of the algorithm: http://www.eveandersson.com/pi/monte-carlo-circleImplementation IN PYTHON (__not on SPARK__): http://www.stealthcopter.com/blog/2009/09/python-calculating-pi-using-random-numbers/The number of points must be 100000 (one hundred thousand) times the default minimum number of partitions of your SparkContext (`sc.defaultMinPartitions`). These points must be selected randomly in the map step (see the notes).Notes: use the __map__ function (to map each occurrence to `0` or `1`, with `1` meaning the random point falls inside the circle and `0` otherwise) and __reduce__ (to sum the occurrences).
###Code
# Libraries
import numpy as np
# Function that checks whether the point falls inside the circle
def piCheck(n):
    if(np.sqrt(n[0]*n[0] + n[1]*n[1]) <= 1):
        return 1
    else: return 0
# Define the number of points
n = sc.defaultMinPartitions * 100000
# Generate the points, map them to 1's and 0's, and count the number of 1's
data = sc.parallelize([[np.random.random(), np.random.random()] for i in range(n)])
res = data.map(piCheck)
inside = res.reduce( lambda x,y: x+y )
# Pi is estimated using the specified formula
pi = 4 * inside / n
print(pi)
###Output
3.13994
###Markdown
Filtering Primes Given a sequence of numbers from `1` to `1000000`, filter only the primes in that sequence.
###Code
# Function that checks whether a number is prime
def isPrimo(n):
    if(n < 2): return False    # 0 and 1 are not prime
    if(n == 2): return True
    for i in range(2, n):
        if(n % i == 0):
            return False
    return True
# Generate all the numbers from 1 to 1000000 and filter only the primes
data = sc.parallelize(range(1,1000000))
res = data.filter(isPrimo)
print(res.collect())
###Output
_____no_output_____
###Markdown
Municipalities of Brazil (Municípios do Brasil)Given the dataset `municipios_do_Brasil.csv`, perform two operations with it:1. Build a list of the municipalities in each state.2. Count how many municipalities there are in each state.Hints: use the groupByKey and reduceByKey operations, and do not do a count on the list from operation 1.
###Code
# Function to build the key-value (uf, city) structure
def chaveUF(l):
line = l.split(",")
return (line[0], line[1])
# Load the dataset as {uf: city} key-value pairs
lines = sc.textFile("municipios_do_Brasil.csv")
lines = lines.filter( lambda l: l.split(",")[0] != "uf")
dict = lines.map(chaveUF)
for pair in dict.groupByKey().collect():
print(pair[0], ":", list(pair[1]))
print()
# Load the dataset as a dataframe
df = spark.read.csv("municipios_do_Brasil.csv", header=True)
df.groupBy("uf").count().show()
###Output
+---+-----+
| uf|count|
+---+-----+
| SC| 293|
| RO| 67|
| PI| 224|
| AM| 62|
| GO| 246|
| TO| 137|
| MT| 141|
| SP| 645|
| ES| 78|
| PB| 223|
| RS| 496|
| MS| 78|
| AL| 102|
| MG| 853|
| PA| 143|
| BA| 417|
| SE| 75|
| PE| 185|
| CE| 184|
| RN| 167|
+---+-----+
only showing top 20 rows
###Markdown
Word Count - Memórias Póstumas de Brás CubasMemórias Póstumas de Brás Cubas is a novel written by Machado de Assis, developed at first as a serial from March to December 1880 in the Revista Brasileira and published as a book the following year by the then Tipografia Nacional.The work portrays slavery, social classes, and the scientism and positivism of the period. Given that, can we identify these characteristics from the words most used in the work?Using the dataset `Machado-de-Assis-Memorias-Postumas.txt`, do a word count and find the words most used by Machado de Assis in this work. Don't forget to use `stopwords.pt` to remove the `stop words`!
###Code
stopwords = sc.textFile("stopwords.pt").collect()
livro = sc.textFile("Machado-de-Assis-Memorias-Postumas.txt")
words = livro.flatMap( lambda l: l.split(" ") )
words = words.filter( lambda w: w not in stopwords )
words = words.map( lambda w: (w, 1))
wordCnt = words.reduceByKey( lambda x,y: x+y )
sortedWordCnt = wordCnt.map( lambda w: (w[1], w[0])).sortByKey(False)
for word in sortedWordCnt.take(100):
print(word)
###Output
(723, '—')
(303, '')
(210, 'Não')
(165, 'O')
(163, 'A')
(159, 'lhe')
(145, 'CAPÍTULO')
(117, 'E')
(109, 'Virgília')
(101, 'olhos')
(85, 'D.')
(81, 'alguma')
(76, 'Era')
(75, 'Mas')
(66, 'que,')
(65, 'Quincas')
(63, 'homem')
(60, 'ia')
(59, 'Que')
(58, 'podia')
(57, 'dizer')
(55, 'Um')
(52, 'Lobo')
(52, 'Eu')
(51, 'sei')
(50, 'fui')
(49, 'e,')
(47, 'meus')
(47, 'dizia')
(47, 'logo')
(45, 'mim')
(45, 'algum')
(45, 'idéia')
(44, 'olhar')
(44, 'isto')
(44, 'eram')
(43, 'lá')
(43, 'talvez')
(41, 'nossa')
(41, 'Virgília,')
(40, 'mão')
(40, 'coisas')
(40, 'fosse')
(39, 'certo')
(38, 'modo')
(38, 'eu,')
(38, 'No')
(37, 'te')
(37, 'tal')
(36, 'tempo,')
(35, 'nada.')
(35, 'depois,')
(35, 'Meu')
(35, 'ver')
(35, 'De')
(34, 'pai')
(33, 'ir')
(33, 'Ao')
(33, 'uns')
(33, 'simples')
(32, 'si')
(32, 'algumas')
(31, 'aí')
(31, 'mim,')
(31, 'Marcela')
(31, 'Talvez')
(30, 'É')
(30, 'certa')
(29, 'pé')
(29, 'morte')
(29, 'Se')
(29, 'capítulo')
(28, 'anos,')
(28, 'nosso')
(28, 'Borba')
(28, 'Brás')
(28, 'casa,')
(27, 'ele,')
(27, 'muitas')
(27, 'filha')
(27, 'Já')
(27, 'Uma')
(26, 'boa')
(26, 'pai,')
(26, 'digo')
(26, 'disse-me')
(26, 'gesto')
(26, 'porém,')
(26, 'Plácida')
(25, 'verdade,')
(25, 'ali')
(25, 'Pois')
(25, 'coisa,')
(25, 'nunca')
(25, 'muita')
(25, 'tu')
(25, 'homem,')
(25, 'porta')
(25, 'muito,')
(24, 'nenhum')
|
Hospital - Word2Vec Embedding.ipynb | ###Markdown
Read DataRead the hospital dataset
###Code
df = pd.read_csv("CleanHospitalDataset.csv",dtype=object, encoding='utf8')
###Output
_____no_output_____
###Markdown
Drop some columns
###Code
df.drop(columns=['HospitalType','label','State'], axis=1, inplace=True)
df['ProviderNumber'] = df['ProviderNumber'].apply(lambda x: str(int(float(x))))
df['ZipCode'] = df['ZipCode'].apply(lambda x: str(int(float(x))))
df['PhoneNumber'] = df['PhoneNumber'].apply(lambda x: str(int(float(x))))
df.head(3)
###Output
_____no_output_____
###Markdown
Word2Vec Embedding using the Gensim libraryDetails here: https://radimrehurek.com/gensim/
###Code
dfList = df.values.tolist()
dfList[0]
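# Each hospital row (a list of string attribute values) is treated as a "sentence";
# sg=1 trains a skip-gram model and min_count=1 keeps every attribute value in the vocabulary.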
model = Word2Vec(dfList, sg=1, min_count=1, workers=8, iter=1000)
print(model)
print(model['GUNTERSVILLE'])
model.most_similar("GUNTERSVILLE")
model.save("HospitalWord2Vec.w2v")
###Output
_____no_output_____ |
07b_exercises.ipynb | ###Markdown
Exercises 8 and 9 (Chapter 7)8. Load the MNIST data (introduced in Chapter 3), and:- split it into a training set, a validation set, and a test set (e.g., use 50,000 instances for training, 10,000 for validation, and 10,000 for testing)- Then train various classifiers, such as a Random Forest classifier, an Extra-Trees classifier, and an SVM classifier.- Next, try to combine them into an ensemble that outperforms each individual classifier on the validation set, using soft or hard voting.- Once you have found one, try it on the test set. How much better does it perform compared to the individual classifiers?
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Is this notebook running on Colab or Kaggle?
IS_COLAB = "google.colab" in sys.modules
IS_KAGGLE = "kaggle_secrets" in sys.modules
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import os
import pandas as pd
# For saving the models after they're trained
import pickle
from joblib import dump, load
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1, as_frame=False)
X, y = mnist["data"], mnist["target"]
from sklearn.model_selection import train_test_split
X_train_full, X_test, y_train_full, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
X_train, X_val, y_train, y_val = train_test_split(X_train_full, y_train_full,
test_size=10000,
random_state=1989)
# Train RandomForest, ExtraTrees and SVM
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from os.path import exists
if exists('outputs/grid_rf_ch7.joblib'):
grid_rf = load('outputs/grid_rf_ch7.joblib')
else:
rnd_clf = RandomForestClassifier(n_estimators=500, # number of trees
n_jobs=-1) # use all the cores
max_leaf_nodes_params = list(range(2, 500, 20))
max_leaf_nodes_params.append(None)
grid_rf = {
'max_leaf_nodes': max_leaf_nodes_params
}
grid_rf = GridSearchCV(rnd_clf, grid_rf, cv = 3, scoring = 'accuracy')
grid_rf.fit(X_train, y_train)
dump(grid_rf, 'outputs/grid_rf_ch7.joblib')
grid_rf.best_score_
grid_rf.best_params_
# Now get the performance (score) on the validation set
from sklearn.metrics import accuracy_score
y_pred_rf = grid_rf.predict(X_val)
accuracy_score(y_val, y_pred_rf)
# Now it's the turn of an ExtraTrees classifier
from sklearn.ensemble import ExtraTreesClassifier
if exists('outputs/gridsearch_xt_ch7.joblib'):
gridsearch_xt = load('outputs/gridsearch_xt_ch7.joblib')
else:
xt_clf = ExtraTreesClassifier(n_jobs=-1,
random_state=1989)
# max_leaf_nodes and n_estimators
max_leaf_nodes_params = list(range(2, 500, 35))
max_leaf_nodes_params.append(None)
grid_xt = {
'max_leaf_nodes': max_leaf_nodes_params,
'n_estimators': [250, 500, 750]
}
gridsearch_xt = GridSearchCV(xt_clf,
grid_xt,
cv = 3,
scoring = 'accuracy')
gridsearch_xt.fit(X_train, y_train)
dump(gridsearch_xt, 'outputs/gridsearch_xt_ch7.joblib')
gridsearch_xt.best_score_
gridsearch_xt.best_params_
y_pred_xt = gridsearch_xt.predict(X_val)
accuracy_score(y_val, y_pred_xt)
# AND FINALLY! THE SVM PREDICTOR
# Look at the SVM chapter exercises
# For this I have to scale and center the data
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
if exists('outputs/svm_clf_ch7.joblib'):
svm_clf = load('outputs/svm_clf_ch7.joblib')
else:
svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="rbf", probability=True))
])
svm_clf.fit(X_train, y_train)
dump(svm_clf, 'outputs/svm_clf_ch7.joblib')
y_pred_svm = svm_clf.predict(X_val)
accuracy_score(y_val, y_pred_svm)
###Output
_____no_output_____
###Markdown
Now I'm gonna combine them in a Voting Classifier.Note that if this is done with the default `VotingClassifier` class, it is going to retrain all the models. But here I want to compare the individual classifiers' performance with an ensemble that combines them, so I need a voting classifier that preserves the training.
###Code
# Solution using mlxtend
from mlxtend.classifier import EnsembleVoteClassifier
import copy
eclf_hard = EnsembleVoteClassifier(clfs=[grid_rf, gridsearch_xt, svm_clf],
voting='hard',
fit_base_estimators=False,
use_clones=False)
eclf_hard.fit(X_train, y_train)
y_pred_vot_hard = eclf_hard.predict(X_val)
accuracy_score(y_val, y_pred_vot_hard)
eclf_soft = EnsembleVoteClassifier(clfs=[grid_rf, gridsearch_xt, svm_clf],
voting='soft',
fit_base_estimators=False,
use_clones=False)
eclf_soft.fit(X_train, y_train)
y_pred_vot_soft = eclf_soft.predict(X_val)
accuracy_score(y_val, y_pred_vot_soft)
###Output
_____no_output_____
###Markdown
Now it's time to check the accuracy of the base models against the best voting classifier *on test data* (no validation data)
###Code
labels = ['Random Forest', 'ExtraTrees', 'SVM', 'Voting Classifier']
for clf, label in zip([grid_rf, gridsearch_xt, svm_clf, eclf_soft], labels):
y_pred = clf.predict(X_test)
score = accuracy_score(y_test, y_pred)
print("Accuracy: %0.4f [%s]"
% (score, label))
###Output
Accuracy: 0.9701 [Random Forest]
Accuracy: 0.9732 [ExtraTrees]
Accuracy: 0.9645 [SVM]
Accuracy: 0.9733 [Voting Classifier]
###Markdown
9. - Run the individual classifiers from the previous exercise to make predictions on the validation set- create a new training set with the resulting predictions: each training instance is a vector containing the set of predictions from all your classifiers for an image, and the target is the image’s class
###Code
# Create columns of predictions from each model on validation set
y_pred_rf.shape
y_pred_xt.shape
y_pred_svm.shape
new_X_train = np.vstack((y_pred_rf, y_pred_xt, y_pred_svm)).T
new_X_train[:5]
###Output
_____no_output_____
###Markdown
- Train a classifier on this new training set. Congratulations, you have just trained a blender, and together with the classifiers it forms a stacking ensemble!
###Code
blender_xt = ExtraTreesClassifier(n_jobs=-1,
random_state=1989)
# Using cross validation for this one too
max_leaf_nodes_blender = list(range(2, 20, 1))
max_leaf_nodes_blender.append(None)
grid_blender = {
'max_leaf_nodes': max_leaf_nodes_blender,
'n_estimators': [20, 30, 40, 50, 60, 70, 75, 80, 90, 100]
}
gridsearch_blender = GridSearchCV(blender_xt,
grid_blender,
cv = 3,
scoring = 'accuracy')
gridsearch_blender.fit(new_X_train, y_val)
gridsearch_blender.best_score_
gridsearch_blender.best_params_
###Output
_____no_output_____
###Markdown
- Now evaluate the ensemble on the test set. For each image in the test set, make predictions with all your classifiers, then feed the predictions to the blender to get the ensemble’s predictions. How does it compare to the voting classifier you trained earlier?
###Code
y_pred_test_rf = grid_rf.predict(X_test)
y_pred_test_xt = gridsearch_xt.predict(X_test)
y_pred_test_svm = svm_clf.predict(X_test)
new_X_test = np.vstack((y_pred_test_rf, y_pred_test_xt, y_pred_test_svm)).T
y_pred_test_blender = gridsearch_blender.predict(new_X_test)
accuracy_score(y_test, y_pred_test_blender)
###Output
_____no_output_____ |
OCRtest.ipynb | ###Markdown
Test of the automatic OCR
###Code
import glob
import os
import subprocess
for file in glob.glob("/media/benjamin/Elements/pdfs/*.pdf"):
path,filename = os.path.split(file)
print(filename)
out_name =filename[0:-4]+".%d.png"
print(out_name)
file
out_file = 'test.png'
cmd = 'gs -dSAFER -dNOPAUSE -q -r300x300 -sDEVICE=pnggray -dBATCH -dLastPage=5 -sOutputFile='+out_file+' '+file
proc_results = subprocess.run(cmd.split(), stdout=subprocess.PIPE,timeout=60)
print(proc_results.returncode)
print(proc_results)
testfile = 'test.1.txt'
'.' in testfile
testfile.index('.')
testfile[0:-6]
import pandas as pd
i,j=0,0
df = pd.DataFrame(columns=['file','text'])
for idx,file in enumerate(glob.glob("/media/benjamin/Elements/pdfs/txt/*.txt")):
path,txtfile = os.path.split(file)
fname = txtfile[0:-6]
text = []
with open(file,'r') as text_file:
text_block = text_file.read()
df.loc[idx,'file'] = fname
df.loc[idx,'text'] = text_block
if len(text_block)>20:
i+=1
else:
j+=1
print(i,j)
df2 = pd.DataFrame(df.groupby('file')['text'].apply(lambda x: ' '.join(x)))
df2['text_length'] = df2['text'].apply(len)
df3 = df2.sort_values('text_length',ascending=False)
df3[df3['text_length']>20]
import re
def text_split(text):
text_list = re.split('; |, | |\n+',text)
return [word for word in text_list if word]
df3['text_list'] = df3['text'].apply(text_split)
df3['nb_words'] = df3['text_list'].apply(len)
len(df3[df3['text_length']<10])
len(df3[df3['nb_words']<5])
###Output
_____no_output_____ |
Twitter Tutorial.ipynb | ###Markdown
Scraping Twitter with PythonThere are a number of different Python libraries we could use, but for this example we will be using twython. The library contains convenient methods to interact with Twitter's RESTful API. [Twython Documentation](https://twython.readthedocs.io/en/latest/index.html) [Twitter API Documentation](https://dev.twitter.com/rest/public) Import packages
###Code
from twython import Twython
import json
import pytz
from datetime import datetime, timedelta
###Output
_____no_output_____
###Markdown
AuthenticationThis logs you in to your app and lets Twitter keep track of you. You cannot use the API without authenticating. One reason Twitter requires authentication is so that they can limit the number of requests you make in a given time period. If you are querying a high volume of data, you will need to account for this limitation. Read about it [here](https://dev.twitter.com/rest/public/rate-limiting) First, you need to create API keys for access; you can do this [here](https://apps.twitter.com). Then, you need to copy the app keys you created on Twitter's website into the variables below.
###Code
APP_KEY = "your_app_key"
APP_SECRET = "your_secret_key"
twitter = Twython(APP_KEY, APP_SECRET,oauth_version=2)
ACCESS_TOKEN = twitter.obtain_access_token()
twitter = Twython(APP_KEY, access_token=ACCESS_TOKEN)
###Output
_____no_output_____
###Markdown
Querying the garden hose
###Code
results = twitter.search(q="sports")
results.keys()
tweets=[]
for result in results["statuses"]:
tweets.append(result)
len(tweets)
tweets
tweets[0].keys()
tweets[0]
###Output
_____no_output_____
###Markdown
Get an account's tweets
###Code
accounts = ["espn","foxsports","skepticalsports"]
acounts_tweets = []
for account in accounts:
results=twitter.get_user_timeline(screen_name=account,cursor=-1,format="json")
acounts_tweets.extend(results)
###Output
_____no_output_____
###Markdown
Get all tweets from an account from yesterday
###Code
def get_yesterday_tweets(handle):
    tweets=[]
    now=datetime.utcnow().date()
    today_date = datetime(now.year,now.month,now.day,0,0)
    yesterday_date = today_date - timedelta(days=1)   # safe across month boundaries
    max_id=None
    while True:
        results=twitter.get_user_timeline(screen_name=handle,max_id=max_id,format="json")
        if len(results)>0:
            for result in results:
                tweet_date = datetime.strptime(result["created_at"],'%a %b %d %H:%M:%S +0000 %Y')
                if tweet_date < yesterday_date:
                    return tweets
                elif tweet_date < today_date:
                    tweets.append(result)
                # step below the lowest id seen so far so the next page is strictly older
                max_id = str(int(result["id_str"]) - 1)
        else:
            # no more tweets available for this account
            return tweets
len(get_yesterday_tweets("skepticalsports"))
###Output
_____no_output_____
###Markdown
Save tweets as json file
###Code
for account in accounts:
with open(account + "_" + str(datetime.utcnow().date()) + ".json","w") as outfile:
json.dump(get_yesterday_tweets(account),outfile)
###Output
_____no_output_____ |
RegressaoLogistica-e-commerce.ipynb | ###Markdown
e-commerce - Logistic Regression
###Code
# Importing libraries
import pandas as pd
import numpy as np
# Loading the data and viewing the first rows
dados = pd.read_csv('advertising.csv')
dados.head()
# Checking the data types
dados.info()
# Complementing the data analysis with describe
dados.describe()
# Generating the data report with Pandas Profiling and saving it to disk
from pandas_profiling import ProfileReport
report=ProfileReport(dados, title='Relatório')
report.to_file('relatorioEcommerce1.html')
# As done in Power BI, I will split the timestamp_clique attribute into time, day, month and year
dados['timestamp_clique']
dados['timestamp_clique'] = pd.to_datetime(dados['timestamp_clique'])
dados['ano'] = dados['timestamp_clique'].dt.year
dados['mes'] = dados['timestamp_clique'].dt.month
dados['dia'] = dados['timestamp_clique'].dt.day
dados
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
import warnings
warnings.filterwarnings('ignore')
dados.columns
# Splitting the data into train and test sets
x = dados[['sexo', 'tempo_diario_site','idade']]
y = dados['clique']
X_train, X_test, y_train, y_test = train_test_split(x, y, random_state=30)
maq5 = LogisticRegression()
maq5.fit(X_train, y_train)
maq5_predict = maq5.predict(X_test)
accuracy_score(y_test, maq5_predict)
print(classification_report(y_test, maq5_predict))
###Output
precision recall f1-score support
0 0.90 0.90 0.90 128
1 0.89 0.89 0.89 122
accuracy 0.90 250
macro avg 0.90 0.90 0.90 250
weighted avg 0.90 0.90 0.90 250
|
Visualizing Service Coverage.ipynb | ###Markdown
Visualizing Fleet Distances I was having a discussion with a friend about how one could visualize the geographical coverage of a service fleet for a known set of customers. For example, if set A is a set of students and B is a set of tutors, we are interested in being able to plot areas which are lacking a good student-to-tutor ratio, given an expected reasonable threshold of distance for the student to travel to get to the tutor.
###Code
import pandas as pd
from numpy.random import rand
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
%matplotlib inline
###Output
_____no_output_____
###Markdown
Make a sample set of customers and providersLet's make some fake data sets. For simplicity, I'm just going to work with Cartesian coordinates, but if you had real data it would be easy to transition this into latitude/longitude.
###Code
def create_data(n, xrange, yrange):
""" A function to create a dummy data set for plotting """
return pd.DataFrame({"x":rand(n)*yrange, "y":rand(n)*xrange,'name':range(n)})
xrange=10
yrange=10
C = create_data(10,xrange,yrange)
P = create_data(20,xrange,yrange)
C.head()
###Output
_____no_output_____
###Markdown
Create gridded summary of densityWe can summarize the density of the customers or providers in each area by generating a grid for the area of interest at a resolution of our choice, and then counting the providers or customers within a threshold of those points. Note that the choice of the number of gridpoints should probably result in a resolution of at most half of the desired distance threshold to have a meaningful plot.
###Code
gridpoints = 50
# The x and y coodinates of the grid
X,Y = np.meshgrid(np.linspace(0,xrange,gridpoints),np.linspace(0,yrange,gridpoints))
# The arrays which will hold the values for the proximate elements at that grid point.
# For Customers
N_C = np.zeros((gridpoints,gridpoints))
# and for Providers
N_P = np.zeros((gridpoints,gridpoints))
# A matrix to store the ratio of providers per customer within the threshold distance of this gridpoint
P_per_C = np.zeros((gridpoints,gridpoints))
###Output
_____no_output_____
###Markdown
Specify the threshold distance that a provider must be within to be considered within range of a customer.
###Code
threshold = 2
if (xrange/gridpoints) > (threshold/2) or (yrange/gridpoints) > (threshold/2):
print("Warning, grid resolution is too low {},{} vs threshold {}".format(xrange/gridpoints,yrange/gridpoints, threshold) )
###Output
_____no_output_____
###Markdown
Define some functions for calculating distance. This uses Euclidean distance, though the argument could be made for Manhattan distance.
###Code
def griddistance(row,x,y):
""" Calculate cartesian distance for a row with attributes, x and y """
return np.sqrt((row.x-x)**2+(row.y-y)**2)
def units_in_threshold(x, y ,A ,threshold):
""" For a pandas series, A, calculate the distance of all points from
given x,y coordinates, and return the number of values within a distance threshold"""
distance = A.apply(griddistance,args=(x,y),raw=True,axis=1)
return (distance[distance<threshold].count() or 0)
###Output
_____no_output_____
###Markdown
For each point in the grid, record the number of customers or providers within the threshold distance. Also calculate the ratio of provider to customer where applicable. I know there is a way to do this that is vectorized and more beautiful, but for clarity I'm putting up with using loops.
###Code
for i in range(gridpoints):
for j in range(gridpoints):
C_density = units_in_threshold(X[i, j], Y[i, j], C, threshold)
P_density = units_in_threshold(X[i, j], Y[i, j], P, threshold)
N_C[i,j]=C_density
N_P[i,j]=P_density
if C_density>0:
P_per_C[i,j] = P_density/C_density
else:
P_per_C[i,j] = np.nan
###Output
_____no_output_____
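As the note above says, the nested loops can also be expressed as a single broadcast computation. A rough sketch of the vectorized idea, assuming the same X, Y, C, P and threshold defined earlier:
```python
# Distances from every grid point to every customer / provider via broadcasting
dC = np.sqrt((X[..., None] - C.x.values) ** 2 + (Y[..., None] - C.y.values) ** 2)
dP = np.sqrt((X[..., None] - P.x.values) ** 2 + (Y[..., None] - P.y.values) ** 2)
N_C_vec = (dC < threshold).sum(axis=-1)
N_P_vec = (dP < threshold).sum(axis=-1)
# Ratio of providers per customer, NaN where no customers are in range
P_per_C_vec = np.where(N_C_vec > 0, N_P_vec / np.maximum(N_C_vec, 1), np.nan)
```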
###Markdown
Make some contour plots
###Code
f, axes = plt.subplots(1,3, sharex=True, sharey=True, figsize=(15, 5) )
# First plot the Customer Density
c1=axes[0].contourf(X, Y, N_C,
cmap=cm.Reds,
# norm=cm.colors.Normalize(vmax=N_C.max(), vmin=0)
)
axes[0].scatter(C.x,C.y, color='b')
axes[0].set_title('Customer Density')
plt.colorbar(c1,ax=axes[0])
axes[0].tick_params(axis='both', left='off', top='off', right='off', bottom='off', labelleft='off', labeltop='off', labelright='off', labelbottom='off')
# Then the Provider Density
c2=axes[1].contourf(X, Y, N_P,
cmap=cm.Blues,
)
axes[1].scatter(P.x,P.y, color='r' )
axes[1].set_title('Provider Density')
plt.colorbar(c2,ax=axes[1])
axes[1].tick_params(axis='both', left='off', top='off', right='off', bottom='off', labelleft='off', labeltop='off', labelright='off', labelbottom='off')
# Finally plot the shortfall of coverage at each location, based on the provider density
target_P_per_C = 3
c3=axes[2].contourf(X, Y, P_per_C-target_P_per_C,
cmap=cm.RdYlGn,
norm=cm.colors.Normalize(vmax=0, vmin=-target_P_per_C))
axes[2].scatter(C.x,C.y, color='b',label='Customers')
axes[2].scatter(P.x,P.y, color='r',label='Providers')
axes[2].set_title('Provider/Customer - Target')
plt.colorbar(c3,ax=axes[2])
axes[2].tick_params(axis='both', left='off', top='off', right='off', bottom='off', labelleft='off', labeltop='off', labelright='off', labelbottom='off')
###Output
_____no_output_____
###Markdown
The third chart shows one way of evaluating the coverage. Areas which would not impact customers are null, and show as white. Within those areas which impact customers, the density of coverage is shown, offset by the target number of providers. You can interpret this by saying that any areas which are dark green would result in users with sufficient coverage.One thing I don't love about this approach is that it only calculates the distance from the grid point to the provider, so you could have a situation where the distance from customer to provider is up to twice the threshold. Also, if a customer has sufficient providers to the north of them, it can indicate that providers are desirable to the south of them, even when this is not necessary. Visualizing customer coverageAs an alternative, instead of simply combining the density of the providers and customers, we could create a score which, for the customers within the threshold of a gridpoint, counts those that don't have sufficient providers within their own threshold. This gives you the value of adding a provider at a given grid point, and that value increases if it can benefit multiple customers.To do this, first calculate the distance from every customer to every provider. If you are working with a big data set, this should probably be precalculated and could potentially benefit from optimization. For the purpose of demonstration I am simply cross joining the two sets, and applying the distance calculation to all combinations.
###Code
# in order to get a cross join from our merge, we have to make a dummy key
C.loc[:,'key']=0
P.loc[:,'key']=0
PC = C.merge(P, how='outer',on='key', suffixes=('C','P'))
# Calculate the distance for each combination of customer and provider
def rowdist(row):
return np.sqrt((row.xC-row.xP)**2+(row.yC-row.yP)**2)
PC.loc[:,'distance'] = PC.apply(rowdist, axis=1)
# Calculate if the provider is within the threshold distance
PC.loc[:,'provider_count'] = (PC.distance<threshold).astype(int)
PC.head()
###Output
_____no_output_____
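On a recent pandas version (1.2 or newer), the dummy-key trick can be replaced by a built-in cross join, and the row-wise apply by a vectorized expression. A hedged sketch with the same C and P frames (before the 'key' columns are added):
```python
PC_alt = C.merge(P, how='cross', suffixes=('C', 'P'))
PC_alt['distance'] = np.sqrt((PC_alt.xC - PC_alt.xP) ** 2 + (PC_alt.yC - PC_alt.yP) ** 2)
PC_alt['provider_count'] = (PC_alt.distance < threshold).astype(int)
```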
###Markdown
Once you have calculated the distance between each customer and provider, summarize the number of providers available to that customer.
###Code
newC = C.copy().set_index('name')
newC = newC.join(PC.groupby('nameC').provider_count.sum().to_frame())
newC
###Output
_____no_output_____
###Markdown
Now for each gridpoint, calculate the number of customers which could use a provider in this location. You can also calculate the number of customers impacted at each gridpoint, to understand the value of different locations.
###Code
def customers_in_threshold(x, y ,A ,threshold):
""" For a pandas series, A, calculate the distance of all points from
given x,y coordinates, and return the customers within that distance"""
distance = A.apply(griddistance,args=(x,y),raw=True,axis=1)
return A[distance<threshold]
Customers_shortfall = np.zeros((gridpoints,gridpoints))
for i in range(gridpoints):
for j in range(gridpoints):
if N_C[i,j]>0:
S = customers_in_threshold(X[i, j], Y[i, j],newC ,threshold)
Customers_shortfall[i, j] = (S[S.provider_count<target_P_per_C].provider_count).count()
else:
Customers_shortfall[i, j] = np.nan
###Output
_____no_output_____
###Markdown
Make the plots
###Code
f, axes = plt.subplots(1,2, sharex=True, sharey=True, figsize=(18, 6) )
c1=axes[0].contourf(X, Y, N_C,
cmap=cm.Reds)
axes[0].scatter(C.x,C.y, color='b',label='Customers')
axes[0].scatter(P.x,P.y, color='r',label='Providers')
axes[0].set_title('Customers Density Plot')
plt.colorbar(c1,ax=axes[0])
axes[0].legend()
axes[0].tick_params(axis='both', left='off', top='off', right='off', bottom='off', labelleft='off', labeltop='off', labelright='off', labelbottom='off')
c2=axes[1].contourf(X, Y, Customers_shortfall,
cmap=cm.viridis,
norm=cm.colors.Normalize( vmin=0))
axes[1].scatter(C.x,C.y, color='b',label='Customers')
axes[1].scatter(P.x,P.y, color='r',label='Providers')
axes[1].set_title('Customers with less than Target Providers')
plt.colorbar(c2,ax=axes[1])
axes[1].legend()
axes[1].tick_params(axis='both', left='off', top='off', right='off', bottom='off', labelleft='off', labeltop='off', labelright='off', labelbottom='off')
###Output
_____no_output_____ |
jupyter_russian/projects_individual/project_example_banks.ipynb | ###Markdown
Mini-project: full data analysis and building a predictive model Author: Grigory Demin This assignment uses a dataset on direct marketing by a Portuguese bank. Link: [Bank Marketing Data Set](http://archive.ics.uci.edu/ml/datasets/Bank+Marketing).
###Code
from __future__ import division, print_function
# suppress all Anaconda warnings
import warnings
warnings.filterwarnings('ignore')
%pylab inline
import seaborn as sns
import pandas as pd
###Output
_____no_output_____
###Markdown
__Dataset description__The data describe a direct marketing campaign of a Portuguese bank. The campaign consisted of calling clients and offering them a deposit; quite often the same client received several calls.The goal is to predict whether the client will take the deposit or not.__Variable list:__* 1 - age: age (numeric)* 2 - job: type of job (nominal: "admin.", "unknown", "unemployed", "management", "housemaid", "entrepreneur", "student", "blue-collar", "self-employed", "retired", "technician", "services")* 3 - marital: marital status (nominal: "married", "divorced" (divorced/widowed), "single")* 4 - education: education (nominal: "unknown", "secondary", "primary", "tertiary")* 5 - default: has credit in default? (binary: "yes", "no")* 6 - balance: average yearly account balance, in euros (numeric)* 7 - housing: has a housing loan? (binary: "yes", "no")* 8 - loan: has personal loans? (binary: "yes", "no")* _the following variables describe previous contacts with the client:_* 9 - contact: contact communication type (nominal: "unknown", "telephone" (landline), "cellular")* 10 - day: day of the month of the last contact (numeric)* 11 - month: month of the last contact (nominal: "jan", "feb", "mar", ..., "nov", "dec")* 12 - duration: duration of the last contact, in seconds (numeric)* _other attributes:_* 13 - campaign: number of contacts with this client during this campaign (numeric, includes the last contact)* 14 - pdays: number of days since the last contact of this campaign (numeric, -1 means there were no previous contacts)* 15 - previous: number of contacts with this client before this campaign (numeric)* 16 - poutcome: outcome of the previous campaign (nominal: "unknown", "other", "failure", "success") Target variable: 17 - y: did the client open a deposit? (binary: 1, 0)
###Code
data = pd.read_csv("../../data/bank.csv")
data.head()
###Output
_____no_output_____
###Markdown
**Let's look at the basic characteristics of the variables:**
###Code
print(data.shape)
data.describe(include = "all").T
# frequency counts for the categorical variables
categorical = ["marital","education","default","housing","loan","contact","month","poutcome"]
for each_var in categorical:
print('********')
print('*',each_var,'*')
print(data[each_var].value_counts())
###Output
_____no_output_____
###Markdown
Let's visualize the data:
###Code
sns.pairplot(data)
###Output
_____no_output_____
###Markdown
Next comes data preprocessing. * We drop the duration variable: it is the length of the last contact, and if it equals zero there was clearly no success. That gives it good predictive power, but its value is not known before the call is made.* pdays is the number of days since the last call; -1 means there was no previous call. We therefore encode -1 into a separate indicator variable ("was called / was not called") and replace -1 in the original variable with the median.* In addition to the continuous age we also create categorical age features.* We encode all categorical variables as dummies (0, 1).* We scale all variables.* We split the sample into test and training sets.
###Code
### Process pdays:
first_call = (data.pdays == -1).astype(int)
first_call.name = "first_call"
data.pdays[data.pdays==-1] = NaN
data.pdays = data.pdays.fillna(value = data.pdays.median())
### Create age categories
age1 = (data.age<25).astype(int)
age2 = (data.age>50).astype(int)
age1.name = "age1"
age2.name = "age2"
### dummy variables:
data_dummies = pd.concat([
pd.get_dummies(data.job , prefix = 'job'),
pd.get_dummies(data.marital , prefix = 'marital'),
pd.get_dummies(data.education, prefix = 'education'),
pd.get_dummies(data.default , prefix = 'default'),
pd.get_dummies(data.housing , prefix = 'housing'),
pd.get_dummies(data.loan , prefix = 'loan'),
pd.get_dummies(data.contact , prefix = 'contact'),
pd.get_dummies(data.month , prefix = 'month'),
pd.get_dummies(data.poutcome, prefix = 'poutcome'),
data.age ,
data.balance ,
data.day ,
data.campaign ,
data.pdays ,
data.previous ,
first_call,
age1,
age2], axis=1)
###Output
_____no_output_____
###Markdown
Split into training and test sets
###Code
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data_dummies, data.y,
test_size=0.3,
random_state=20160212,
stratify = data.y)
###Output
_____no_output_____
###Markdown
Scale the variables
###Code
from sklearn import preprocessing
X_train = preprocessing.scale(X_train)
X_test = preprocessing.scale(X_test)
### Convert back to a Pandas DataFrame
X_train = pd.DataFrame(X_train)
X_train.columns = data_dummies.columns
X_test = pd.DataFrame(X_test)
X_test.columns = data_dummies.columns
###Output
_____no_output_____
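Note that preprocessing.scale above standardizes the training and test sets with their own statistics, which leaks test-set information into the preprocessing. A sketch of the leakage-free alternative (meant to replace the cell above, applied to the unscaled split):
```python
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler().fit(X_train)          # statistics from the training set only
X_train_std = pd.DataFrame(scaler.transform(X_train), columns=data_dummies.columns)
X_test_std = pd.DataFrame(scaler.transform(X_test), columns=data_dummies.columns)
```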
###Markdown
**Let's look at the baseline: random forest classification without any parameter tuning.**
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score
forest = RandomForestClassifier(n_estimators=500)
forest.fit(X_train, y_train)
test_pred = forest.predict(X_test)
###Output
_____no_output_____
###Markdown
Let's check that plain accuracy is not a good quality metric for this problem.
###Code
accuracy_score(y_test, test_pred)
###Output
_____no_output_____
###Markdown
However, even the trivial prediction that no client opens a deposit achieves better accuracy.
###Code
y_test.value_counts()[0] / y_test.shape[0]
f1_score(y_test, test_pred)
confusion_matrix(y_test, test_pred)
###Output
_____no_output_____
###Markdown
**Select the most important features**
###Code
forest = RandomForestClassifier(n_estimators=1000, max_depth = 5,
random_state=42).fit(X_train, y_train)
features = pd.DataFrame(forest.feature_importances_,
index=X_train.columns,
columns=['Importance']).sort(['Importance'],
ascending=False)
features
plt.plot(range(len(features.Importance.tolist())),
features.Importance.tolist())
###Output
_____no_output_____
###Markdown
We select 20 features and convert the samples to NumPy matrices.
###Code
selected_attr = features.index.tolist()[0:20]
X_train = X_train[selected_attr].as_matrix()
X_test = X_test[selected_attr].as_matrix()
###Output
_____no_output_____
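A fixed top-20 cut-off is one option; another is to keep enough features to cover most of the total importance. A small sketch using the same features frame (the 95% level is an arbitrary choice):
```python
cum_importance = features.Importance.cumsum() / features.Importance.sum()
selected_95 = features.index[cum_importance <= 0.95].tolist()
len(selected_95)
```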
###Markdown
Let's try four different classifiers: logistic regression, gradient boosting, random forest and SVM. Since the classes are heavily imbalanced (only 10% positives), we will use the F1 score as the quality metric.
###Code
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import f1_score
classifiers = [LogisticRegression(),
GradientBoostingClassifier(),
RandomForestClassifier(),
SVC()]
classifiers_name = ["LogisticRegression",
"GradientBoostingClassifier",
"RandomForestClassifier",
"SVC"]
###Output
_____no_output_____
###Markdown
Tune the parameters of the selected algorithms with GridSearchCV and pick the best classifier.
###Code
from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import StratifiedKFold
n_folds = 5
scores = []
fits = []
logistic_params = {'penalty': ('l1', 'l2'),
'C': (.01,.1,1,5)}
gbm_params = { 'n_estimators': [100, 300, 500],
'learning_rate':(0.1, 0.5, 1),
'max_depth': list(range(3, 6)),
'min_samples_leaf': list(range(10, 31, 10))}
forest_params = {'n_estimators': [100, 300, 500],
'criterion': ('gini', 'entropy'),
'max_depth': list(range(3, 6)),
'min_samples_leaf': list(range(10, 31, 10))}
svm_param = {'kernel' : ('linear', 'rbf'),
'C': (.5, 1, 2)
}
params = [logistic_params, gbm_params, forest_params, svm_param]
# Run cross-validation for all models
for i, each_classifier in enumerate(classifiers):
clf = each_classifier
clf_params = params[i]
grid = GridSearchCV(clf, clf_params,
cv=StratifiedKFold(y_train, n_folds=n_folds,
shuffle=False, random_state=42),
n_jobs=-1, scoring="f1")
grid.fit(X_train, y_train)
fits.append(grid.best_params_)
clf_best_score = grid.best_score_
scores.append(clf_best_score)
print(classifiers_name[i], clf_best_score, "\n", grid.best_params_)
# Print the parameters of the best classifier
grid_value = max(scores)
grid_index = [i for i in range(len(scores)) if scores[i]==grid_value][0]
print("Best classifier from GridSearch:",
classifiers_name[grid_index], grid_value)
print(fits[grid_index])
###Output
_____no_output_____
###Markdown
**For the best classifier, refine the search over a finer parameter grid around its best values.**
###Code
clf_params = {'n_estimators': (250, 300, 350),
'learning_rate': (0.75, 1, 1.25, 1.5),
'min_samples_leaf': list(range(1, 14, 3))}
clf = classifiers[grid_index]
grid = GridSearchCV(clf, clf_params, cv=n_folds,
n_jobs=-1, scoring="f1")
grid.fit(X_train, y_train)
clf_best_score = grid.best_score_
clf_best_params = grid.best_params_
clf_best = grid.best_estimator_
mean_validation_scores = []
print("Лучший результат", clf_best_score,
"лучшие параметры", clf_best_params)
###Output
_____no_output_____
###Markdown
**Plot the learning curve**
###Code
def plot_with_std(x, data, **kwargs):
mu, std = data.mean(1), data.std(1)
lines = plt.plot(x, mu, '-', **kwargs)
plt.fill_between(x, mu - std, mu + std, edgecolor='none',
facecolor=lines[0].get_color(), alpha=0.2)
from sklearn.learning_curve import learning_curve  # needed below; not imported anywhere else in this notebook
def plot_learning_curve(clf, X, y, scoring, cv=5):
train_sizes = np.linspace(0.05, 1, 20)
n_train, val_train, val_test = learning_curve(clf,
X, y, train_sizes, cv=cv,
scoring=scoring)
plot_with_std(n_train, val_train, label='training scores', c='green')
plot_with_std(n_train, val_test, label='validation scores', c='red')
plt.xlabel('Training Set Size'); plt.ylabel(scoring)
plt.legend()
plot_learning_curve(GradientBoostingClassifier(n_estimators=2,
learning_rate=1.5, min_samples_leaf=7),
X_train, y_train, scoring='f1', cv=10)
###Output
_____no_output_____
###Markdown
**Let's build a validation curve for these boosting parameters, using learning_rate as the complexity parameter:**
###Code
from sklearn.learning_curve import validation_curve
def plot_validation_curve(clf, X, y, cv_param_name,
cv_param_values, scoring):
val_train, val_test = validation_curve(clf, X, y, cv_param_name,
cv_param_values, cv=5,
scoring=scoring)
plot_with_std(cv_param_values, val_train,
label='training scores', c='green')
plot_with_std(cv_param_values, val_test,
label='validation scores', c='red')
plt.xlabel(cv_param_name); plt.ylabel(scoring)
plt.legend()
learning_rates = np.linspace(0.1, 2.3, 20)
plot_validation_curve(GradientBoostingClassifier(n_estimators=250,
min_samples_leaf=7), X_train, y_train,
cv_param_name='learning_rate',
cv_param_values=learning_rates,
scoring='f1')
final_gbm = GradientBoostingClassifier(n_estimators=250,
min_samples_leaf=7, learning_rate=1.5)
final_gbm.fit(X_train, y_train)
final_pred = final_gbm.predict(X_test)
accuracy_score(y_test, final_pred), f1_score(y_test, final_pred)
###Output
_____no_output_____ |
3_Teams.ipynb | ###Markdown
Baseball Statistics: Teams Baseball is a game full of statistics, and most of those statistics have been consistently and carefully tracked going back to the late 1800s. That makes professional baseball a playground for data analysts. Here I look at interesting correlations between players, their stats, and their salaries.**Data Source:** [Lahman's Baseball Database](http://www.seanlahman.com/baseball-archive/statistics/). The data set I used was through the 2018 season.Copyright © 2019 Ken Norton ([email protected])
###Code
%run ./1_Data_Preparation.ipynb
plt.style.use(['default', 'fivethirtyeight', 'seaborn-poster'])
###Output
_____no_output_____
###Markdown
Let's take a look at the Teams table. Here are the individual columns:```yearID YearlgID LeagueteamID TeamfranchID Franchise (links to TeamsFranchise table)divID Team's divisionRank Position in final standingsG Games playedGHome Games played at homeW WinsL LossesDivWin Division Winner (Y or N)WCWin Wild Card Winner (Y or N)LgWin League Champion(Y or N)WSWin World Series Winner (Y or N)R Runs scoredAB At batsH Hits by batters2B Doubles3B TriplesHR Homeruns by battersBB Walks by battersSO Strikeouts by battersSB Stolen basesCS Caught stealingHBP Batters hit by pitchSF Sacrifice fliesRA Opponents runs scoredER Earned runs allowedERA Earned run averageCG Complete gamesSHO ShutoutsSV SavesIPOuts Outs Pitched (innings pitched x 3)HA Hits allowedHRA Homeruns allowedBBA Walks allowedSOA Strikeouts by pitchersE ErrorsDP Double PlaysFP Fielding percentagename Team's full namepark Name of team's home ballparkattendance Home attendance totalBPF Three-year park factor for battersPPF Three-year park factor for pitchersteamIDBR Team ID used by Baseball Reference websiteteamIDlahman45 Team ID used in Lahman database version 4.5teamIDretro Team ID used by Retrosheet```
###Code
teams.describe()
plt.hist(teams['W'], bins=25)
plt.hist(teams['L'], bins=25)
teams['Win_Pct'] = teams['W'] / (teams['W'] + teams['L'])
plt.hist(teams['Win_Pct'], bins=25)
###Output
_____no_output_____
###Markdown
San Francisco GiantsThe Giants are my favorite team, so I've decided to do some analysis of their stats over the years.
###Code
# Teams are identified both with teamIDs and franchID (franchise IDs). The San Francisco Giants were
# originally the New York Giants. If we selected only data with the SFN teamID, we'd miss out on
# all of the stats from their years in NY. I want the full franchise history, so I'm going to filter
# on franchID
sfg = teams[teams['franchID'] == 'SFG']
sfg.teamID.unique()
plt.style.use('dark_background')
sfgplot(sfg['yearID'], sfg['attendance'], "Year", "Attendance")
###Output
_____no_output_____
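The sfgplot helper used above comes from the 1_Data_Preparation notebook pulled in by %run. If you want to run this section on its own, a minimal stand-in might look like the sketch below (the Giants-orange hex code is borrowed from the Rank histogram later in this notebook):
```python
def sfgplot(x, y, xlabel, ylabel):
    """Minimal stand-in for the %run-provided Giants plotting helper."""
    fig, ax = plt.subplots()
    ax.plot(x, y, 'o-', color='#FD5A1E', markersize=4, linewidth=1)
    ax.set_xlabel(xlabel)
    ax.set_ylabel(ylabel)
    plt.show()
```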
###Markdown
It's cool that the data include attendance numbers. With the Giants, you can see the year they moved into their new stadium in San Francisco (2000) as well as their record-breaking attendance numbers since then, including through three World Series titles in 2010, 2012, and 2014.
###Code
sfgplot(sfg['yearID'], sfg['HR'], "Year", "Home Runs")
###Output
_____no_output_____
###Markdown
Unsurprisingly, when you look at Giants home run totals throughout history you can clearly see Barry Bonds' record-breaking years in the late 1990s/early 2000s.
###Code
sfgplot(sfg['yearID'], sfg['SB'], "Year", "Stolen Bases")
sfgplot(sfg['yearID'], sfg['Win_Pct'], "Year", "Win Percentage")
###Output
_____no_output_____
###Markdown
In what place (rank) did the Giants finish at the end of the season?
###Code
plt.hist(sfg['Rank'], color='#FD5A1E', bins=10)
###Output
_____no_output_____
###Markdown
How many World Series titles do the Giants have?
###Code
sfg[sfg['WSWin'] == "Y"]['yearID'].count()
sfg[sfg['WSWin'] == "Y"]['yearID']
###Output
_____no_output_____ |
DotGraph Example.ipynb | ###Markdown
DotGraph ExampleTesting example created in the [Zephyr](https://github.com/uwoseis/zephyr) project using `pyreverse` from the `pylint` project. The file `Example/packages_zephyr.dot` was generated by:```bashpyreverse -my -A -o dot -p zephyr ../zephyr/**/**.py``` Rendered using pyreverse  DotGraph Example
###Code
from dotgraph import *
dg = DotGraph('./Example/packages_zephyr.dot')
dg
help(DotGraph)
###Output
Help on class DotGraph in module dotgraph:
class DotGraph(__builtin__.object)
| Class that returns various representations of the directed graph
| in a 'dot' file. This includes converting to NetworkX graph object,
| Python dictionary, JSON, and HTML with d3.js rendering (which can
| be displayed inline in an IPython/Jupyter notebook).
|
| Methods defined here:
|
| __init__(self, infile, template=None)
| Initialize DotGraph
|
| Args:
| infile (str): Input file in dot format
| template (str): Input file for HTML template
|
| Returns:
| new DotGraph instance
|
| render(self)
| Returns IPython display representation
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| dict
| Returns dictionary representation
|
| graph
| Returns NetworkX graph representation
|
| html
| Returns HTML representation
|
| json
| Returns JSON representation
|
Time Series Analysis of Data/task-time series analysis.ipynb | ###Markdown
..
###Code
import pandas as pd
import numpy as np
df = pd.read_csv('mmc4.csv')
df.shape
df.head()
df.tail()
df.dtypes
df['date_time'] = pd.to_datetime(df['date_time'])
df.dtypes
df = df.set_index('date_time')
df.head()
# Add columns
df['Year'] = df.index.year
df['Month'] = df.index.month
df['Weekday Name'] = df.index.weekday
df.head()
# Displaying random sample of 5 rows
df.sample(5, random_state = 0)
# one day data
df.loc['2016-09-24']
# one hour data
df.loc['2016-09-24 00:00:00': '2016-09-24 01:00:00']
###Output
_____no_output_____
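With the DatetimeIndex in place, resampling and rolling statistics are natural next steps; a small sketch assuming the numeric columns shown above (e.g. 'hr'):
```python
# Hourly mean heart rate and a 60-sample rolling average for smoother plots
hourly_hr = df['hr'].resample('H').mean()
df['hr_rolling'] = df['hr'].rolling(window=60, min_periods=1).mean()
hourly_hr.head()
```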
###Markdown
Visualizing time series data
###Code
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(rc = {'figure.figsize': (11, 4)})
df['hr'].plot(linewidth = 0.5)
df.columns
cols_plot = ['hr', 'calories', 'temp']
axes = df[cols_plot].plot(marker = '.', alpha = 0.5, figsize=(10, 9), subplots = True)
for ax in axes:
ax.set_ylabel('Variation')
# same above plot for one hour data
df_hour = df.loc['2016-09-24 00:00:00': '2016-09-24 01:00:00']
cols_plot = ['hr', 'calories', 'temp']
axes = df_hour[cols_plot].plot(marker = '.', alpha = 0.5, figsize=(10, 9), subplots = True)
for ax in axes:
ax.set_ylabel('variation')
# plot for a different two hour window
df_2hour = df.loc['2016-09-25 00:00:00': '2016-09-25 02:00:00']
cols_plot = ['hr', 'calories', 'temp']
axes = df_2hour[cols_plot].plot(marker = '.', alpha = 0.5, figsize=(10, 9), subplots = True)
for ax in axes:
ax.set_ylabel('variation')
df.iloc[43199,:]
fig, axes = plt.subplots(3, 1, figsize=(11, 10), sharex=True)
df_day = df.loc['2016-01-01 00:00:00' : '2016-01-02 00:00:00']
for name, ax in zip(['hr', 'calories', 'temp'], axes):
sns.boxplot(data=df, x='Month', y=name, ax=ax)
ax.set_ylabel('variation')
ax.set_title(name)
# Remove the automatic x-axis label from all but the bottom subplot
if ax != axes[-1]:
ax.set_xlabel('')
df.loc['2016-08-15 00:00:00' : '2016-08-16 00:00:00']
df.head()
###Output
_____no_output_____ |
RI-GW.ipynb | ###Markdown
RI GW Abstract Here we present a reference implementation of a GW class with the option of applying resolution-of-identity (JK with Coulomb metric, available for both self-energy and W) as well as SPA approximation when evaluating W. The class will be further extended to incorporate GW@DFT and spin polarization.1. Bruneval, F., Rangel, T., Hamed, S. M., Shao, M., Yang, C., & Neaton, J. B. (2016). molgw 1: Many-body perturbation theory software for atoms, molecules, and clusters. Computer Physics Communications, 208, 149–161. https://doi.org/10.1016/J.CPC.2016.06.0192. van Setten, M. J., Weigend, F., & Evers, F. (2013). The GW -Method for Quantum Chemistry Applications: Theory and Implementation. Journal of Chemical Theory and Computation, 9(1), 232–246. https://doi.org/10.1021/ct300648t
###Code
import psi4
import numpy as np
import scipy as sp
from matplotlib import pyplot as plt
%matplotlib inline
h2o = psi4.geometry("""O 0.000000000000 -0.143225816552 0.000000000000
H 1.638036840407 1.136548822547 -0.000000000000
H -1.638036840407 1.136548822547 -0.000000000000
symmetry c1
units bohr
""")
psi4.set_options({'basis': 'aug-cc-pvdz', 'd_convergence' : 1e-7,'scf_type' : 'direct'})
#psi4.set_options({'basis': 'aug-cc-pvdz', 'd_convergence' : 1e-7,'scf_type' : 'df', 'ints_tolerance' : 1.0E-10})
psi4.set_memory('2 GB')
psi4.set_output_file('h2o_accpvdz_rigw.out')
# Get the SCF wavefunction & energies
scf_e, scf_wfn = psi4.energy('hf', return_wfn=True)
print("SCF energy is %16.10f" % scf_e)
# GW implementation will be folded into a standalone class that uses molecule and wave function data from Psi4
class GW:
def __init__(self, wfn, mol, gw_par):
# wfn - Psi4 w.f. object from SCF calculation
# mol - Psi4 molecule object
# gw_par - is a dictionary with GW calculation parameters
# such as the number of states or the number of
# omega sampling points
self.scf_wfn = wfn
self.mol = mol
self._init_sys_params() # sets some basic system parameters
# Determine if we are doing RI
self.do_ri = True if not 'do_ri' in gw_par.keys() else gw_par['do_ri']
if self.do_ri:
self._gen_ri_ints() # generates integrals for RI-GW and
# RI integrals are now available in self.nmR
else:
self._transform_eri()
#self._calculate_W(gw_par) # this will produce RPA excitation energies and X + Y
self._calculate_W_SPA(gw_par) # this will produce RPA excitation energies and X + Y
# set GW calculation parameters
# parameters of the self-energy calculation
nomega_sigma = 501 if not 'nomega_sigma' in gw_par.keys() else gw_par['nomega_sigma']
step_sigma = 0.01 if not 'step_sigma' in gw_par.keys() else gw_par['step_sigma']
# Quasi-particle states
self.no_qp = self.nocc if not 'no_qp' in gw_par.keys() else gw_par['no_qp'] # Number of hole states
self.nv_qp = 0 if not 'nv_qp' in gw_par.keys() else gw_par['nv_qp'] # Number of particle states
self.eta = 1e-3 # Default eta value as recommended by F. Bruneval
# Quick sanity check
assert self.no_qp <= self.nocc and self.nv_qp <= self.nvir
# create an array of sampling frequencies similar to MolGW
nomega_grid = nomega_sigma // 2 # note this is a truncation (aka integer) division
omega_grid = np.array(range(-nomega_grid, nomega_grid + 1)) * step_sigma
# sampling energies for all the states so we could calculate the self-energy matrix (broadcasting)
omega_grid_all = omega_grid + self.eps[self.nocc - self.no_qp:self.nocc + self.nv_qp].reshape((-1, 1))
assert omega_grid_all.shape == (self.no_qp + self.nv_qp, 2*nomega_grid + 1)
Sigma_c_grid = self._calculate_iGW(omega_grid_all) # self-energy matrix
# Apply solvers; Similar to MolGW - linear & graphic solutions
print("Performing one-shot G0W0")
qp_molgw_lin_ = np.zeros(self.no_qp + self.nv_qp)
# Calculate pole strengths by performing numerical derivative on the omega grid
zz = np.real(Sigma_c_grid[:, nomega_grid + 1] - Sigma_c_grid[:, nomega_grid - 1]) / (omega_grid[nomega_grid + 1] - omega_grid[nomega_grid - 1])
zz = 1. / (1. - zz)
zz[zz <= 0.0] = 0.0
zz[zz >= 1.0] = 1.0
qp_molgw_lin_ = self.eps[self.nocc - self.no_qp:self.nocc + self.nv_qp] + zz * np.real(Sigma_c_grid[:, nomega_grid])
print("Perfoming graphic solution of the inverse Dyson equation")
# both rhs and lhs of the QP equation have been calculated above
qp_molgw_graph_ = np.copy(self.eps[self.nocc - self.no_qp:self.nocc + self.nv_qp])
zz_graph = np.zeros(self.no_qp + self.nv_qp)
for state in range(self.no_qp + self.nv_qp):
z , e = self._find_fixed_point(omega_grid_all[state], np.real(Sigma_c_grid[state, :]) + self.eps[state + self.nocc - self.no_qp])
if z[0] < 1e-6:
print("Graphical solver failed for state %d" % (state + 1))
# Do nothing since the array cell already contains HF orbital energy
else:
qp_molgw_graph_[state] = e[0]
zz_graph[state] = z[0]
self.zz = np.copy(zz)
self.qp_molgw_lin_ = np.copy(qp_molgw_lin_)
self.qp_molgw_graph_ = np.copy(qp_molgw_graph_)
print("Done!")
def print_summary(self):
Ha2eV = 27.21138505
print("E^lin, eV E^graph, eV Z ")
for i in range(self.no_qp + self.nv_qp):
print("%13.6f %13.6f %13.6f" % (self.qp_molgw_lin_[i]*Ha2eV, self.qp_molgw_graph_[i]*Ha2eV, self.zz[i]))
def _init_sys_params(self):
self.nocc = self.scf_wfn.nalpha()
self.nbf = self.scf_wfn.nmo()
self.nvir = self.nbf - self.nocc
self.C = self.scf_wfn.Ca()
self.Cocc = self.scf_wfn.Ca_subset("AO", "OCC")
self.Cvirt = self.scf_wfn.Ca_subset("AO", "VIR")
self.eps = np.asarray(self.scf_wfn.epsilon_a())
# print a quick summary
print("Number of basis functions: ", self.nbf)
print("occ/virt: %d/%d" % (self.nocc, self.nvir))
def _transform_eri(self):
Co = self.Cocc
C = self.C
mints = psi4.core.MintsHelper(self.scf_wfn.basisset())
self.MO = np.asarray(mints.mo_eri(Co, C, C, C))
def _gen_ri_ints(self):
# MO coefficients
C = np.asarray(self.C)
# Extract basis set from the wfn object
orb = self.scf_wfn.basisset()
# Build auxiliary basis set
aux = psi4.core.BasisSet.build(self.mol, "DF_BASIS_SCF", "", "JKFIT", orb.name())
# From Psi4 doc as of March, 2019 (http://www.psicode.org/psi4manual/1.2/psi4api.html#psi4.core.BasisSet.zero_ao_basis_set):
# Returns a BasisSet object that actually has a single s-function at
# the origin with an exponent of 0.0 and contraction of 1.0.
zero_bas = psi4.core.BasisSet.zero_ao_basis_set()
# Create a MintsHelper Instance
mints = psi4.core.MintsHelper(orb)
# Build (pq|P) raw 3-index ERIs, dimension (nbf, nbf, Naux, 1)
pqP = mints.ao_eri(orb, orb, aux, zero_bas)
# Build and invert the metric
metric = mints.ao_eri(zero_bas, aux, zero_bas, aux)
metric.power(-0.5, 1.e-14)
# Remove the dimensions of size 1
pqP = np.squeeze(pqP)
metric = np.squeeze(metric)
# Transform (pq|P) to obtain (nm|P) in molecular orbital basis
nmP = np.einsum("pn, qm, pqR-> nmR", C, C, pqP)
# Contract with the inverse square root of the metric tensor
self.nmR = np.einsum( "nmP, PR-> nmR", nmP, metric)
print("Auxiliary basis set has been generated!")
print("Number of auxiliary basis functions: ", self.nmR.shape[2])
def _calculate_W_SPA(self, gw_par):
# spa_onset - is the orbital # starting from which SPA will be employed,
# i.e. spa_onset - 1 is the index of the last virtual active in
# RPA calculation
spa_onset = self.nbf if not 'spa_onset' in gw_par.keys() else gw_par['spa_onset']
spa_alpha = 1.5 if not 'spa_alpha' in gw_par.keys() else gw_par['spa_alpha']
assert (spa_onset >= self.nocc) and (spa_alpha >= 0.0)
nocc = self.nocc
nvir = self.nvir
nbf = self.nbf
if spa_onset < nbf:
# The actual number of virtual orbitals is reduced
nvir = spa_onset - nocc # if spa_onset is nocc (i.e. LUMO) => there is no virtual orbitals in the RPA
else:
spa_onset = nbf # for extra safety so that nothing fails if spa_onset is too large; the last virtual is nbf - 1
print("Number of virtual orbitals in RPA: %d" % (nvir))
# Diagonal \epsilon_a - \epsilon_i
eps_diag = self.eps[nocc:nocc + nvir].reshape(-1, 1) - self.eps[:nocc]
assert eps_diag.shape == (nvir, nocc)
# Diagonal orbital difference matrix for SPA
eps_spa_diag = self.eps[spa_onset:].reshape(-1, 1) - self.eps[:nocc]
assert eps_spa_diag.shape == (nbf - nvir - nocc, nocc)
# A^{+} + B^{+}
ApB = np.zeros((nocc, nvir, nocc, nvir))
ApB_spa_diag = np.zeros((nocc, nbf - nvir - nocc))
if self.do_ri:
ApB = np.einsum("ij,ab,ai -> iajb", np.eye(nocc), np.eye(nvir), eps_diag) + 4. * np.einsum("iaQ, jbQ->iajb", self.nmR[:nocc, nocc:nocc+nvir], self.nmR[:nocc, nocc:nocc+nvir])
else:
ApB = np.einsum("ij,ab,ai -> iajb", np.eye(nocc), np.eye(nvir), eps_diag) + 4. * self.MO[:nocc, nocc:nocc + nvir, :nocc, nocc:nocc + nvir ]
if spa_onset < nbf:
# calculate ApB_spa_diag
if self.do_ri:
ApB_spa_diag = spa_alpha * ( eps_spa_diag.T + 4. * np.einsum("iaQ, iaQ->ia", self.nmR[:nocc, spa_onset:], self.nmR[:nocc, spa_onset:]))
else:
ApB_spa_diag = spa_alpha * (eps_spa_diag.T + 4. * np.einsum("iaia->ia", self.MO[:nocc, spa_onset:, :nocc, spa_onset: ])) # not sure if this is 100% correct
ApB = ApB.reshape((nocc*nvir, nocc*nvir))
# since nD numpy arrays have C-style memory layout the occupied orbital index changes slower than the virtual one
# Diagonal of A^{+} - B^{+}
AmB_diag = eps_diag.T.reshape((1, -1))
AmB_diag = np.diag(AmB_diag[0,:])
# Reshape the ApB_spa_diag as well
AmB_spa_diag = spa_alpha * eps_spa_diag.T.reshape((1, -1))
#AmB_spa_diag = np.diag(AmB_spa_diag[0,:])
ApB_spa_diag = ApB_spa_diag.reshape((1, -1))
# Form C matrix (RPA eigenvalue problem)
C_ = np.einsum("ij,jk,kl->il", np.sqrt(AmB_diag), ApB, np.sqrt(AmB_diag))
# Solve for the excitation energies and calculate X + Y eigenvectors
omega2, Z = np.linalg.eigh(C_)
self.omega_s = np.sqrt(omega2)
self.xpy = np.einsum("ij,jk,kl->il", np.sqrt(AmB_diag), Z, np.diag(1./np.sqrt(self.omega_s)))
if spa_onset < nbf:
omega_s_spa = np.sqrt(ApB_spa_diag * AmB_spa_diag)
#print(omega_s_spa.shape)
#print(np.sqrt(np.diag(AmB_spa_diag[0,:])).shape)
#print(np.eye(nocc * (nbf - nocc - nvir)).shape)
#print(np.diag(1./np.sqrt(omega_s_spa)).shape)
xpy_spa = np.einsum("ij,jk,kl->il", np.sqrt(np.diag(AmB_spa_diag[0,:])), np.eye(nocc * (nbf - nocc - nvir)), np.diag(1./np.sqrt(omega_s_spa[0,:])))
# The trickiest part is the eigenvectors since the index spaces should be
# aligned; Here we need to do the following:
# 1. Stretch xpy array such that its shape = (nocc * (nbf - nocc), nocc * (nbf - nocc))
# 2. Copy the eigenvectors adding 0 in place of excitations excluded via SPA
# 3. Add an apropriate number of xpy_spa padding them with zero at the indexes corresponding
# to excitations excluded from SPA
xpy_tmp = np.zeros((nocc * (nbf - nocc), nocc * (nbf - nocc)))
# Merge non-SPA eigenvectors first
nvir_spa = nbf - nocc - nvir
for i in range(len(self.omega_s)):
for j in range(nocc):
xpy_tmp[j * (nbf - nocc) : j * (nbf - nocc) + nvir , i] = self.xpy[j * nvir: (j+1) * nvir, i]
# Now merge SPA vectors
for i in range(len(omega_s_spa)):
for j in range(nocc):
xpy_tmp[ j * (nbf - nocc) + nvir : (j + 1) * (nbf - nocc) , i + len(self.omega_s)] = xpy_spa[j * nvir_spa: (j+1) * nvir_spa, i]
# Replace xpy with xpy_tmp and update the list of excitation energies accordingly
self.xpy = np.copy(xpy_tmp)
self.omega_s = np.hstack((self.omega_s, omega_s_spa[0,:]))
def _calculate_W(self, gw_par):
nocc = self.nocc
nvir = self.nvir
# Diagonal \epsilon_a - \epsilon_i
eps_diag = self.eps[nocc:].reshape(-1, 1) - self.eps[:nocc]
assert eps_diag.shape == (nvir, nocc)
# A^{+} + B^{+}
ApB = np.zeros((nocc, nvir, nocc, nvir))
if self.do_ri:
ApB = np.einsum("ij,ab,ai -> iajb", np.eye(nocc), np.eye(nvir), eps_diag) + 4. * np.einsum("iaQ, jbQ->iajb", self.nmR[:nocc, nocc:], self.nmR[:nocc, nocc:])
else:
ApB = np.einsum("ij,ab,ai -> iajb", np.eye(nocc), np.eye(nvir), eps_diag) + 4. * self.MO[:nocc, nocc:, :nocc, nocc: ]
ApB = ApB.reshape((nocc*nvir, nocc*nvir))
# since nD numpy arrays have C-style memory layout the occupied orbital index changes slower than the virtual one
# Diagonal of A^{+} - B^{+}
AmB_diag = eps_diag.T.reshape((1, -1))
AmB_diag = np.diag(AmB_diag[0,:])
assert AmB_diag.shape == ApB.shape
# Form C matrix (as one usually does when solving RPA eigenvalue problem)
C_ = np.einsum("ij,jk,kl->il", np.sqrt(AmB_diag), ApB, np.sqrt(AmB_diag))
# Solve for the excitation energies and calculate X + Y eigenvectors
omega2, Z = np.linalg.eigh(C_)
self.omega_s = np.sqrt(omega2)
self.xpy = np.einsum("ij,jk,kl->il", np.sqrt(AmB_diag), Z, np.diag(1./np.sqrt(self.omega_s)))
def _calculate_iGW(self, omega_grid_all):
nocc = self.nocc
nvir = self.nvir
eps = self.eps
no_qp = self.no_qp
nv_qp = self.nv_qp
nbf = self.nbf
# Self-energy denominators; those are of two kinds
Dis = -eps[:nocc].reshape((-1, 1)) + self.omega_s
Das = -eps[nocc:].reshape((-1, 1)) - self.omega_s
# Omega tensors; This will be refactored to improve memory efficiency
i_rtia = np.zeros((nbf,nbf, nocc, nvir))
if self.do_ri:
i_rtia = np.einsum("iaQ, rtQ ->rtia", self.nmR[:nocc, nocc:, :], self.nmR)
i_rtia = i_rtia.reshape((nbf, nbf, nocc*nvir))
else:
i_rtia = np.einsum("iart->rtia", self.MO[:,nocc:,:,:])
i_rtia = i_rtia.reshape((nbf, nbf, nocc*nvir))
omega_rts = np.sqrt(2.) * np.einsum("rtk, ks->rts", i_rtia, self.xpy)
#Calculate denominators
Dis_ = Dis + omega_grid_all.reshape((no_qp + nv_qp, omega_grid_all.shape[1], 1, 1)) - 1.j*self.eta
Das_ = Das + omega_grid_all.reshape((no_qp + nv_qp, omega_grid_all.shape[1], 1, 1)) + 1.j*self.eta
# self-energy matrix (with the shape (no_qp + nv_qp, 2*nomega_grid + 1))
# Contribution due to occupied orbitals (note that the shape of the structure of the denominator array is not optimal)
Sigma_c_grid = np.einsum("kis, klis, kis->kl", omega_rts[nocc - no_qp:nocc + nv_qp,:nocc,:], 1./Dis_, omega_rts[nocc - no_qp:nocc + nv_qp,:nocc,:])
# Contribution due to virtuals
Sigma_c_grid += np.einsum("kas, klas, kas->kl", omega_rts[nocc - no_qp:nocc + nv_qp,nocc:,:], 1./Das_, omega_rts[nocc - no_qp:nocc + nv_qp,nocc:,:])
return Sigma_c_grid
def _find_fixed_point(self, lhs, rhs):
# This function returns an array of fixed points and corresponding pole strengths
# Its application can be vectorized using standard NumPy np.vectorize
assert lhs.shape == rhs.shape
# Maximum number of fixed points (same as in MolGW)
nfp_max = 4
# Pole strength threshold
pthresh = 1e-5
# Arrays of f.p. energies and Z
zfp = np.zeros(nfp_max)
zfp[:] = -1.0
efp = np.zeros(nfp_max)
# Auxiliary index array
idx = np.arange(nfp_max)
n = len(lhs)
ifixed = 0
g = rhs - lhs
# loop over grid points excluding the last one
for i in range(n - 1):
if g[i] * g[i + 1] < 0.0:
#print("Fixed point found betwenn %13.6f and %13.6f eV! " % (lhs[i] * Ha2eV, lhs[i+1] * Ha2eV))
z_zero = 1. / ( 1. - ( g[i+1] - g[i] ) / ( lhs[i+1] - lhs[i] ) )
if z_zero < pthresh:
continue
# Do some bookkeeping; the code looks ugly but that is exactly what F.Bruneval has in MolGW package
if z_zero > zfp[-1]:
jfixed = np.min(idx[z_zero > zfp])
zfp[jfixed + 1:] = zfp[jfixed:nfp_max - 1]
efp[jfixed + 1:] = efp[jfixed:nfp_max - 1]
zfp[jfixed] = z_zero
# Perform linear interpolation to find the root
zeta = (g[i + 1] - g[i]) / (lhs[i + 1] - lhs[i])
efp[jfixed] = lhs[i] - g[i] / zeta
#print("Graphical solver concluded operation")
return (zfp, efp)
# Quick test for the GW class
gw_par = {'no_qp' : 5, 'nv_qp' : 1, 'nomega_sigma' : 501, 'step_sigma' : 0.01, 'do_ri' : False}
gw_h2o_accpvdz = GW(scf_wfn, h2o, gw_par)
gw_h2o_accpvdz.print_summary()
gw_par = {'no_qp' : 5, 'nv_qp' : 1, 'nomega_sigma' : 501, 'step_sigma' : 0.01, 'do_ri' : True}
rigw_h2o_accpvdz = GW(scf_wfn, h2o, gw_par)
rigw_h2o_accpvdz.print_summary()
# GW class with the new SPA-enabled calculate_W method (SPA will be enabled starting from orbital #7 (LUMO + 2))
gw_par = {'no_qp' : 5, 'nv_qp' : 1, 'nomega_sigma' : 501, 'step_sigma' : 0.01, 'do_ri' : False, 'spa_onset' : 7}
gw_h2o_accpvdz = GW(scf_wfn, h2o, gw_par)
gw_h2o_accpvdz.print_summary()
###Output
Number of basis functions: 41
occ/virt: 5/36
Number of virtual orbitals in RPA: 2
Performing one-shot G0W0
Perfoming graphic solution of the inverse Dyson equation
Done!
E^lin, eV E^graph, eV Z
-559.330154 -559.326189 0.943804
-35.092657 -35.108317 0.736476
-17.457153 -17.457130 0.988255
-14.989264 -14.989221 0.985092
-13.319399 -13.319324 0.983257
0.864673 0.864673 0.999210
###Markdown
``` 1 -560.533646 -0.000000 15.096607 0.794502 -548.539363 -547.986931 2 -35.249771 -0.000000 4.458541 0.651707 -32.344109 -31.106131 3 -17.621318 -0.000000 0.449470 0.938071 -17.199683 -17.199450 4 -15.211273 -0.000000 1.038641 0.928771 -14.246613 -14.245203 5 -13.614305 -0.000000 1.685604 0.921941 -12.060277 -12.056026 6 0.873198 -0.000000 -0.155930 0.994145 0.718181 0.718176`````` WITH SPA (2 virtuals in RPA) 1 -560.533646 -0.000000 1.269277 0.943812 -559.335688 -559.331759 2 -35.249771 -0.000000 0.213172 0.736479 -35.092774 -35.108421 3 -17.621318 -0.000000 0.166196 0.988255 -17.457074 -17.457051 4 -15.211273 -0.000000 0.225294 0.985092 -14.989337 -14.989294 5 -13.614305 -0.000000 0.300034 0.983257 -13.319294 -13.319219 6 0.873198 -0.000000 -0.008534 0.999210 0.864670 0.864670```
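To put numbers on how closely the runs agree, the graphical-solution quasiparticle energies can be compared directly; a rough sketch assuming the objects from the test cell are still in scope (gw_h2o_accpvdz currently holds the SPA-enabled run, rigw_h2o_accpvdz the RI run):
```python
Ha2eV = 27.21138505
diff_eV = (rigw_h2o_accpvdz.qp_molgw_graph_ - gw_h2o_accpvdz.qp_molgw_graph_) * Ha2eV
for state, d in enumerate(diff_eV, start=1):
    print("state %d: %+10.6f eV" % (state, d))
```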
###Code
# Convergence test for water (noRI; SPA as specified below)
global_gw_par = {'no_qp' : 5, 'nv_qp' : 10, 'nomega_sigma' : 501, 'step_sigma' : 0.01, 'do_ri' : False}
ilumo = 5 # Water has five occ. orbitals
ilastvir = 40 # for aug-cc-pvdz
qp_en = np.zeros((global_gw_par['no_qp'] + global_gw_par['nv_qp'], 36))
for i in range(ilumo, ilastvir+1):
global_gw_par['spa_onset'] = i
gw_h2o = GW(scf_wfn, h2o, global_gw_par)
qp_en[:, i - ilumo] = gw_h2o.qp_molgw_graph_
Ha2eV = 27.21138505
x = np.asarray(range(ilumo, ilastvir + 1))
for state in range(7, global_gw_par['no_qp'] + global_gw_par['nv_qp']):
plt.plot(x + 1, Ha2eV * qp_en[state, :], 'o-', label='state '+str(state + 1) )
plt.xlabel('SPA onset (counting from 1)')
plt.ylabel('BE, eV')
plt.legend()
# Some error convergence analysis
qp_en_err = (qp_en.T - qp_en[:,-1].T).T
x = np.asarray(range(ilumo, ilastvir + 1))
for state in range(5, global_gw_par['no_qp'] + global_gw_par['nv_qp']):
plt.plot(x + 1, Ha2eV * qp_en_err[state, :], 'o-', label='state '+str(state + 1) )
plt.xlabel('SPA onset (counting from 1)')
plt.ylabel('BE Error, eV')
plt.legend()
# Error analysis for occupied states
for state in range(0, 5):
plt.plot(x + 1, Ha2eV * qp_en_err[state, :], 'o-', label='state '+str(state + 1) )
plt.xlabel('SPA onset (counting from 1)')
plt.ylabel('BE Error (occupied), eV')
plt.legend()
###Output
_____no_output_____ |
python/projects/houseSales/basicHousePrice.ipynb | ###Markdown
Dataset https://www.kaggle.com/harlfoxem/housesalesprediction Import Libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use("seaborn-whitegrid")
###Output
_____no_output_____
###Markdown
Import the dataset
###Code
data = pd.read_csv("houseSales.csv")
data.head()
###Output
_____no_output_____
###Markdown
Information about the dataset
###Code
data.info()
data.describe().T
data.columns
###Output
_____no_output_____
###Markdown
Visualizing and Examining the Dataset
###Code
data.hist(bins=25,figsize=(15,15),xlabelsize='10',ylabelsize='10')
plt.tight_layout()
plt.show()
plt.figure(dpi=120)
sns.boxplot(x=data['bedrooms'],y=data['price'])
plt.show()
plt.figure(dpi=120)
sns.boxplot(x=data['floors'],y=data['price'])
plt.show()
plt.figure(dpi=120,figsize=(15,7))
sns.boxplot(x=data['bathrooms'],y=data['price'])
plt.show()
###Output
_____no_output_____
###Markdown
It seems that there is not a perfect linear relationship between the price and these features
###Code
# Waterfront does have an effect on the price
plt.figure(dpi=120)
sns.boxplot(x=data['waterfront'],y=data['price'])
plt.show()
# View have less effect on the price
plt.figure(dpi=120)
sns.boxplot(x=data['view'],y=data['price'])
plt.show()
plt.figure(dpi=130)
sns.boxplot(x=data['grade'],y=data['price'])
plt.show()
###Output
_____no_output_____
###Markdown
Correlation
###Code
data.corr()
plt.figure(dpi=120,figsize=(16, 12))
sns.heatmap(data.corr(),square=True,cmap="BuGn",linecolor='w',annot=True,annot_kws={"size":8})
plt.show()
plt.figure(dpi=120,figsize=(16, 12))
sns.heatmap(data.corr(),square=True,cmap="BuGn",
linecolor='w',annot=True,annot_kws={"size":8},
mask=np.triu(data.corr()))
plt.show()
data.corr().index!="price"
corr_data = data.corr().loc[data.corr().index!="price","price"]
corr_data
np.abs(corr_data)
threshold=0.3
corr_data[np.abs(corr_data)>threshold]
feature_name = corr_data[np.abs(corr_data)>threshold].index.values
feature_name
X = data[feature_name]
print(X.shape)
print(X.head())
y = data["price"]
print(y.shape)
print(y.head())
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.2,random_state=123)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
from sklearn.linear_model import LinearRegression
model=LinearRegression()
model.fit(X_train,y_train)
model.coef_
model.intercept_
model.score(X_test,y_test)
###Output
_____no_output_____
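Before moving on to polynomial features, it can help to look at the linear model's predictions directly; a quick sketch using the model, X_test and y_test from the cells above:
```python
y_pred = model.predict(X_test)
plt.figure(dpi=100)
plt.scatter(y_test, y_pred, s=5, alpha=0.3)
plt.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], 'r--')
plt.xlabel("Actual price")
plt.ylabel("Predicted price")
plt.show()
```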
###Markdown
Polynomial Regression
###Code
X.head()
from sklearn.preprocessing import PolynomialFeatures
poly=PolynomialFeatures(degree=2)
new_X=poly.fit_transform(X)
print(new_X.shape)
print(new_X[0])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(new_X, y,test_size=0.2,random_state=123)
model=LinearRegression()
model.fit(X_train,y_train)
model.coef_[:10]
model.intercept_
model.score(X_test,y_test)
###Output
_____no_output_____ |
examples/neurolib_example.ipynb | ###Markdown
Parameter exploration with `neurolib` In this example, we will draw a bifurcation diagram of a neural mass model that we load using the brain simulation framework `neurolib`. Please visit the [Github repo](https://github.com/neurolib-dev/neurolib) to learn more about this library or read the [gentle introduction to `neurolib`](https://caglorithm.github.io/notebooks/neurolib-intro/) to learn more about the neuroscience background of neural mass models and whole-brain simulations. What we will do We will scan through a 2-dimensional parameter space and record all outputs of the model to a hdf file. We will then load the simulated results from the hdf file and condense the output to a single scalar so we can plot it. Let's get to it! We'll first start with some unnecessary but useful imports for logging and auto-reloading code. You can skip this block if you don't know what it does.
###Code
# change into the root directory of the project
import os
if os.getcwd().split("/")[-1] == "examples":
os.chdir('..')
%load_ext autoreload
%autoreload 2
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
###Output
_____no_output_____
###Markdown
We have to install `matplotlib` and `neurolib` for this example, which we can do from the notebook using Jupyter's shell command syntax, e.g. `!pip install matplotlib`.
###Code
# install matplotlib
!pip install matplotlib
import matplotlib.pyplot as plt
import numpy as np
# a nice color map
plt.rcParams['image.cmap'] = 'plasma'
# install neurolib
!pip install neurolib
from neurolib.models.aln import ALNModel
import mopet
###Output
_____no_output_____
###Markdown
We can now load the `ALNModel` model from `neurolib`.
###Code
model = ALNModel()
model.params.duration = 1*1000
###Output
INFO:root:aln: Model initialized.
###Markdown
Like in other examples, we need to define an `evalFunction` that `mopet` can call. The parameters `params` that are passed to that function are the parameters of the model to run. We load the parameters to the model using a simple dictionary update `old_dict.update(new_dict)`. We then return the output of the model.
###Code
def evalFunction(params):
model.params.update(params)
model.run()
return model.outputs
###Output
_____no_output_____
###Markdown
Now we define the parameter ranges to explore (the parameters here are called `mue_ext_mean` and `mui_ext_mean` which represent the background input to two neural populations).
###Code
# NOTE: These values are low for testing!
explore_params = {"mue_ext_mean" : np.linspace(0, 3, 3),
"mui_ext_mean" : np.linspace(0, 3, 3)}
# For a real run, use these values:
# explore_params = {"mue_ext_mean" : np.linspace(0, 3, 31),
# "mui_ext_mean" : np.linspace(0, 3, 31)}
# we need this random filename to avoid testing clashes
hdf_filename = f"exploration-{np.random.randint(99999)}.h5"
ex = mopet.Exploration(evalFunction, explore_params, default_params=model.params, hdf_filename=hdf_filename)
###Output
_____no_output_____
###Markdown
Everything is ready and we can now simply run the exploration.
###Code
ex.run()
###Output
2021-02-15 14:01:44,651 INFO resource_spec.py:212 -- Starting Ray with 3.81 GiB memory available for workers and up to 1.92 GiB for objects. You can adjust these settings with ray.init(memory=<bytes>, object_store_memory=<bytes>).
2021-02-15 14:01:44,913 INFO services.py:1093 -- View the Ray dashboard at [1m[32mlocalhost:8265[39m[22m
WARNING:root:Could not store dict entry model (type: <class 'str'>)
WARNING:root:Could not store dict entry name (type: <class 'str'>)
WARNING:root:Could not store dict entry description (type: <class 'str'>)
WARNING:root:Could not store dict entry seed (type: <class 'NoneType'>)
INFO:root:Starting 9 jobs.
100%|██████████| 9/9 [00:00<00:00, 41.28it/s]
INFO:root:Runs took 0.2351241111755371 s to submit.
100%|██████████| 9/9 [00:12<00:00, 1.35s/it]
INFO:root:Runs and storage took 12.125471115112305 s to complete.
###Markdown
After the exploration is done, we can load the results into a pandas Dataframe. By adding the argument `arrays=True`, we also tell `mopet` to load all simulated output (including arrays!) of the exploration into the Dataframe.
###Code
ex.load_results(arrays=True, as_dict=True)
###Output
INFO:root:exploration-82611.h5 opened for reading.
INFO:root:Gettings runs of exploration ``exploration_2021_02_15_14H_01M_43S``
100%|██████████| 9/9 [00:00<00:00, 172.57it/s]
INFO:root:Creating new results DataFrame
INFO:root:Aggregating all results ...
100%|██████████| 9/9 [00:00<00:00, 228.06it/s]
INFO:root:exploration-82611.h5 closed.
###Markdown
Because we have used `as_dict=True`, the outputs of each run are also stored in the `results` dictionary.
###Code
ex.results
###Output
_____no_output_____
###Markdown
Reducing the results As you can see, the results above are time series. For each simulation, we've got multiple arrays. We would like to visualize the results somehow. However, it can be quite challenging to plot many time series in a single figure and still understand what's happening. Therefore, to reduce the dimensionality of the data to a single scalar number per simulation, we will loop through the entire results Dataframe, grab the time series called `rates_exc` and simply compute its maximum after a transient time of `t>500` time steps. We then store this number in the Dataframe and simply call the new column `result`.
###Code
ex.df["result"] = None
for r in ex.df.index:
t = ex.results[r]['t']
rates_exc = ex.results[r]['rates_exc']
ex.df.loc[r, "result"] = np.max(rates_exc[:, t>500])
###Output
_____no_output_____
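The same reduction could also live inside the evaluation function, so that only scalars are written to the HDF file instead of full time series; a possible variant under that assumption (names chosen here for illustration):
```python
def evalFunction_reduced(params):
    model.params.update(params)
    model.run()
    t = model.outputs['t']
    rates_exc = model.outputs['rates_exc']
    # store only the summary statistic
    return {'max_rate': np.max(rates_exc[:, t > 500])}
```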
###Markdown
We can now inspect the updated dataframe using the attribute `.df`.
###Code
ex.df
###Output
_____no_output_____
###Markdown
As you can see, the column `result` only has a single number for each run. Perfect for plotting! In order to plot this long-format Dataframe, we need to pivot it to create a new Dataframe `pivoted` which has the correct x and y coordinates for plotting the `results` value.
###Code
pivoted = ex.df.pivot_table(values='result', index = 'mui_ext_mean', columns='mue_ext_mean', aggfunc='first')
###Output
_____no_output_____
###Markdown
Let's have a look at the new Dataframe that we're about to plot.
###Code
pivoted
###Output
_____no_output_____
###Markdown
Perfect, that's exactly what we need. We have two indices, which are both equal to the parameters that we have explored before. The only entry of the Dataframe is the `results` value that we've put in before. Now, let's plot this Dataframe using `matplotlib.imshow()`.
###Code
plt.imshow(pivoted, \
extent = [min(ex.df.mue_ext_mean), max(ex.df.mue_ext_mean),
min(ex.df.mui_ext_mean), max(ex.df.mui_ext_mean)], origin='lower')
plt.colorbar(label='Maximum firing rate')
plt.xlabel("Input to E")
plt.ylabel("Input to I")
###Output
_____no_output_____ |
AI pandas/pandas2.ipynb | ###Markdown
Sorting DataFramesTo sort the values of a pandas DataFrame we use the sort_values function.
###Code
#df.sort_values(by=['age'],ascending=False)
df.sort_values(by=['fare','age'],ascending=False)
#df.sort_values(by=['fare'],ascending=False)
data={ 'age':[12,12,33,44],
'fare':[55,66,88,99]
}
data=pd.DataFrame(data)
data.sort_values(by=['age','fare'],ascending=False)
###Output
_____no_output_____
###Markdown
The apply functionThe apply function applies a function to the values of one or more columns. A lambda expression is passed to apply to specify the operation to perform; apply then evaluates that expression on every value (row) of the selected column.
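Besides the single-column use in the next cell, apply can also work row-wise across several columns; a small sketch assuming the same df with 'age' and 'fare' columns (the derived column is just an illustration):
```python
df['fare_per_year'] = df[['age', 'fare']].apply(lambda row: row['fare'] / max(row['age'], 1), axis=1)
df[['age', 'fare', 'fare_per_year']].head()
```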
###Code
df.pclass.apply(lambda x:x+2)
###Output
_____no_output_____ |
datasets/sandiegodata.org-downtown_homeless/notebooks/eda-homeless_counts.ipynb | ###Markdown
Exploratory Data AnalysisWhen placed in a Metapack data package, this notebook will load the package and run a variety of common EDA operations on the first resource.
###Code
import matplotlib.pyplot as plt
import seaborn as sns
import metapack as mp
import pandas as pd
import numpy as np
from IPython.display import display
%matplotlib inline
sns.set_context('notebook')
pkg = mp.jupyter.open_package()
# For testing and development
#pkg = mp.open_package('http://s3.amazonaws.com/library.metatab.org/cde.ca.gov-accountability_dashboard-2.zip')
pkg
resource_name='homeless_counts'
pkg.resource(resource_name)
df = pkg.resource(resource_name).read_csv(parse_dates=True)
df.head()
empty_col_names = [cn for cn in df.columns if df[cn].nunique() == 0]
const_col_names= [cn for cn in df.columns if df[cn].nunique() == 1]
ignore_cols = empty_col_names+const_col_names
dt_col_names= list(df.select_dtypes(include=[np.datetime64]).columns)
number_col_names = [ cn for cn in df.select_dtypes(include=[np.number]).columns if cn not in ignore_cols ]
other_col_names = [cn for cn in df.columns if cn not in (empty_col_names+const_col_names+dt_col_names+number_col_names)]
pd.DataFrame.from_dict({'empty':[len(empty_col_names)],
'const':[len(const_col_names)],
'datetime':[len(dt_col_names)],
'number':[len(number_col_names)],
'other':[len(other_col_names)],
},
orient='index', columns=['count'])
###Output
_____no_output_____
###Markdown
Constant Columns
###Code
if const_col_names:
display(df[const_col_names].drop_duplicates().T)
###Output
_____no_output_____
###Markdown
Empty Columns
###Code
if empty_col_names:
display(df[empty_col_names].drop_duplicates().T)
###Output
_____no_output_____
###Markdown
Date and Time Columns
###Code
if dt_col_names:
display(df[dt_col_names].info())
display(df[dt_col_names].describe().T)
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 39202 entries, 0 to 39201
Data columns (total 1 columns):
date 39202 non-null datetime64[ns]
dtypes: datetime64[ns](1)
memory usage: 306.3 KB
###Markdown
Number Columns
###Code
if number_col_names:
display(df[number_col_names].info())
display(df[number_col_names].describe().T)
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 39202 entries, 0 to 39201
Data columns (total 3 columns):
temp 14190 non-null float64
x 39202 non-null float64
y 39202 non-null float64
dtypes: float64(3)
memory usage: 918.9 KB
###Markdown
Distributions
###Code
def plot_histograms(df):
col_names = list(df.columns)
n_cols = np.ceil(np.sqrt(len(col_names)))
n_rows = np.ceil(np.sqrt(len(col_names)))
#plt.figure(figsize=(3*n_cols,3*n_rows))
fig, ax = plt.subplots(figsize=(3*n_cols,3*n_rows))
for i in range(0,len(col_names)):
plt.subplot(n_rows + 1,n_cols,i+1)
try:
g = sns.distplot(df[col_names[i]].dropna(),kde=True)
g.set(xticklabels=[])
g.set(yticklabels=[])
except:
pass
plt.tight_layout()
plot_histograms(df[number_col_names])
###Output
_____no_output_____
###Markdown
Box Plots
###Code
def plot_boxes(df):
col_names = list(df.columns)
n_cols = np.ceil(np.sqrt(len(col_names)))
n_rows = np.ceil(np.sqrt(len(col_names)))
#plt.figure(figsize=(2*n_cols,3*n_rows))
fig, ax = plt.subplots(figsize=(2*n_cols,5*n_rows))
for i in range(0,len(col_names)):
plt.subplot(n_rows + 1,n_cols,i+1)
try:
g = sns.boxplot(df[col_names[i]].dropna(),orient='v')
except:
pass
plt.tight_layout()
plot_boxes(df[number_col_names])
## Correlations
cm = df[number_col_names].corr()
mask = np.zeros_like(cm, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
plt.figure(figsize=(.5*len(number_col_names),.5*len(number_col_names)))
sns.heatmap(cm, mask=mask, cmap = 'viridis')
###Output
_____no_output_____
###Markdown
Other Columns
###Code
if other_col_names:
display(df[other_col_names].info())
display(df[other_col_names].describe().T)
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 39202 entries, 0 to 39201
Data columns (total 5 columns):
neighborhood 39202 non-null object
type 39202 non-null object
rain 17119 non-null object
geoid 39202 non-null object
geometry 39202 non-null object
dtypes: object(5)
memory usage: 1.5+ MB
###Markdown
Nulls
###Code
cols = dt_col_names + number_col_names + other_col_names
fig, ax = plt.subplots(figsize=(15,.5*len(cols)))
sns.heatmap(df[cols].isnull().T,cbar=False,xticklabels=False,cmap = 'viridis', ax=ax )
###Output
_____no_output_____ |
P8_Data_Engineering_Capstone_Project/P8_capstone_project/P8_Capstone_Project_Data_Preparation_Step_1_and_2.ipynb | ###Markdown
Project 08 - Analysis of U.S. Immigration (I-94) Data Udacity Data Engineer - Capstone Project> by Peter Wissel | 2021-05-05 Project OverviewThis project works with a data set for immigration to the United States. The supplementary datasets will include data onairport codes, U.S. city demographics and temperature data.The following process is divided into different sub-steps to illustrate how to answer the questions set by the businessanalytics team.The project file follows the following steps:* Step 1: Scope the Project and Gather Data* Step 2: Explore and Assess the Data Step 1: Scope the Project and Gather Data Scope of the ProjectBased on the given data set, the following four project questions (PQ) are posed for business analysis, which need to be answered in this project. The data pipeline and star data model are completely aligned with the questions.1. From which country do immigrants come to the U.S. and how many?2. At what airports do foreign persons arrive for immigration to the U.S.?3. At what times do foreign persons arrive for immigration to the U.S.?4. To which states in the U.S. do immigrants want to continue their travel after their initial arrival and what demographics can immigrants expect when they arrive in the destination state, such as average temperature, population numbers or population density? Gather DataThe project works primarily with a dataset based on immigration data (I94) to the United States.- Gathering Data (given data sets): 1. [Immigration data '18-83510-I94-Data-2016' to the U.S.](https://travel.trade.gov/research/programs/i94/description.asp) 2. [airport-codes_csv.csv: Airports around the world](https://datahub.io/core/airport-codesdata) 3. [us-cities-demographics.csv: US cities and it's information about citizens](https://public.opendatasoft.com/explore/dataset/us-cities-demographics/export/) 4. [GlobalLandTemperaturesByCity.csv: Temperature grouped by City and Country](https://www.kaggle.com/berkeleyearth/climate-change-earth-surface-temperature-data) Step 2: Explore and Assess the DataThe next step is used to find insights within given data. Summary for Immigration data `18-83510-I94-Data-2016` to the U.S.:* **Source**: [Visitor Arrivals Program (I-94 Form)](https://travel.trade.gov/research/programs/i94/description.asp)* **Description**: [I94_SAS_Labels_Descriptions.SAS](../P8_capstone_resource_files/I94_SAS_Labels_Descriptions.SAS) filecontains descriptions for the I94 data* **Data**: Month based dataset for year 2016* **Format**: SAS (SAS7BDAT - e.g. `i94_apr16_sub.sas7bdat`)* **Rows**: Over 3 million lines for each file. In total, about 40 million lines.* **Data description**: Data has 29 columns containing information about event date, arriving person, airport, airline, etc.NOTE: The Data has to be paid. Year 2016 is included and available for Udacity DEND course. Immigration data '18-83510-I94-Data-2016' to the U.S. The descriptions for the listed columns were taken from file [I94_SAS_Labels_Descriptions.SAS](../P8_capstone_resource_files/I94_SAS_Labels_Descriptions.SAS). - **i94yr:** 4 digit year - **i94mon:** numeric month - **i94cit + i94res:** Country where the immigrants come from - `Country code, country name`Look at file [I94_SAS_Labels_I94CIT_I94RES.txt](../P8_capstone_resource_files/I94_sas_labels_descriptions_extracted_data/I94_SAS_Labels_I94CIT_I94RES.txt)for more details. 438 = 'AUSTRALIA' 112 = 'GERMANY' ! Note that the I94 country codes are different from the ISO country numbers. 
- **i94port:** arrival airport - `Airport code, Airport city, State of Airport`. Note that the airport code is **not** the same as the [IATA](https://en.wikipedia.org/wiki/International_Air_Transport_Association) code. [IATA-Code Search Engine](https://www.iatacodes.de/) The data of the I-94 table do not correspond to the current ISO standards. Therefore, `SFR` is used for San Francisco Airport rather than the more common `SFO` designation. 'SFR' = 'SAN FRANCISCO, CA ' 'LOS' = 'LOS ANGELES, CA ' 'NYC' = 'NEW YORK, NY ' Look at file [I94_SAS_Labels_I94PORT.txt](../P8_capstone_resource_files/I94_sas_labels_descriptions_extracted_data/I94_SAS_Labels_I94PORT.txt) for more details. - **arrdate:** Arrival date in the U.S. (SAS Date format) SAS: Start Date is 01.01.1960 (SAS - Days since 1/1/1960: 0) Example: 01.01.1960: (SAS: Days since 1/1/1960: 0) 01.01.1970: (SAS: Days since 1/1/1960: 3653) Take a look at [Free SAS Date Calculator](https://www.sastipsbyhal.com) - **i94mode:** Type of immigration to U.S. Look at file [I94_SAS_Labels_I94MODE.txt](../P8_capstone_resource_files/I94_sas_labels_descriptions_extracted_data/I94_SAS_Labels_I94MODE.txt) for more details. 1 = 'Air' 2 = 'Sea' 3 = 'Land' 9 = 'Not reported' - **i94addr:** Location State where the immigrants want travel to. Look at file [I94_SAS_Labels_I94ADDR.txt](../P8_capstone_resource_files/I94_sas_labels_descriptions_extracted_data/I94_SAS_Labels_I94ADDR.txt) for more details. 'AL'='ALABAMA' 'IN'='INDIANA' - **depdate:** Departure date from USA (SAS Date format) -> look at `arrdate` for calculation - **i94bir:** Age of respondent in years - **i94ivsa:** Visa codes collapsed into three categories: Look at file [I94_SAS_Labels_I94VISA.txt](../P8_capstone_resource_files/I94_sas_labels_descriptions_extracted_data/I94_SAS_Labels_I94VISA.txt) for more details. 1 = Business 2 = Pleasure 3 = Student - **count:** value is for summary statistics - **dtadfile:** Date added to I-94 Files - Character date field as YYYYMMDD (represents `arrdate`) - **visapost:** Department of state where Visa was issued - **occup:** Occupation that will be performed in U.S. - **entdepa:** Arrival Flag - admitted or paroled into the U.S. - **entdepd:** Departure Flag - Departed, lost I-94 or is deceased - **entdepu:** Update Flag - Either apprehended, overstayed, adjusted to perm residence - **matflag:** Match flag - Match of arrival and departure records - **biryear:** 4 digit year of birth - **dtaddto:** Date to which admitted to U.S. (allowed to stay until) - Character date field as MMDDYYYY (represents `depdate`) - **gender:** Gender - Non-immigrant sex - **insnum:** Insurance (INS) number - **airline:** Airline used to arrive in U.S. - **admnum:** Admission Number - **fltno:** Flight number of Airline used to arrive in U.S. - **viatype:** Class of admission legally admitting the non-immigrant to temporarily stay in U.S. Imports and Installs section
###Code
import shutil
import pandas as pd
import pyspark.sql.functions as F
# import spark as spark
from pyspark.sql.types import StructType, StructField, DoubleType, StringType, IntegerType, LongType, TimestampType, DateType
from datetime import datetime, timedelta
from pyspark.sql import SparkSession, DataFrameNaFunctions
from pyspark.sql.functions import when, count, col, to_date, datediff, date_format, month
import re
import json
from os import path
###Output
_____no_output_____
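###Markdown
The I-94 `arrdate` and `depdate` fields are SAS date values, i.e. day counts since 1960-01-01, as described in the column overview above. A minimal sketch of the conversion (the helper name and the plain integer input are illustrative assumptions, not part of the original pipeline):
###Code
# Hedged sketch: convert a SAS day count (days since 1960-01-01) into a date
sas_epoch = datetime(1960, 1, 1)
def sas_days_to_date(days):
    return (sas_epoch + timedelta(days=int(days))).date()
# sas_days_to_date(0) gives 1960-01-01 and sas_days_to_date(3653) gives 1970-01-01,
# matching the example in the description above
###Output
_____no_output_____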
###Markdown
Create Pandas and SparkSession to create data frames from source data
###Code
# If code will be executed in Udacity workbench --> use the following config(...)
#spark = SparkSession.builder.config("spark.jars.packages","saurfang:spark-sas7bdat:2.0.0-s_2.11").enableHiveSupport().getOrCreate()
# The version number for "saurfang:spark-sas7bdat" had to be updated for the local installation
MAX_MEMORY = "5g"
spark = SparkSession\
.builder\
.appName("etl pipeline for project 8 - I94 data") \
.config("spark.jars.packages","saurfang:spark-sas7bdat:3.0.0-s_2.12")\
.config('spark.sql.repl.eagerEval.enabled', True) \
.config("spark.executor.memory", MAX_MEMORY) \
.config("spark.driver.memory", MAX_MEMORY) \
.appName("Foo") \
.enableHiveSupport()\
.getOrCreate()
# setting the current LOG-Level
spark.sparkContext.setLogLevel('ERROR')
# Read data from Immigration data '18-83510-I94-Data-2016' to the U.S.
filepath = '../P8_capstone_resource_files/immigration_data/18-83510-I94-Data-2016/i94_feb16_sub.sas7bdat'
df_pd_i94 = pd.read_sas(filepath, format=None, index=None, encoding=None, chunksize=None, iterator=False)
# Show data (1st 5 rows)
df_pd_i94.head()
# Show data (last 5 rows)
df_pd_i94.tail()
# Get an overview about filled fields (not null)
df_pd_i94.count()
###Output
_____no_output_____
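###Markdown
The SparkSession above is configured with the `saurfang:spark-sas7bdat` package, so the same SAS file could also be read directly into a Spark DataFrame instead of pandas. A hedged sketch, assuming the reader registers under the format name `com.github.saurfang.sas.spark` (the package's documented short name) and reusing `filepath` from the cell above:
###Code
# Hedged sketch: read the SAS7BDAT file with Spark via the spark-sas7bdat package
df_spark_i94 = spark.read.format('com.github.saurfang.sas.spark').load(filepath)
df_spark_i94.printSchema()
###Output
_____no_output_____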
###Markdown
Summary for Airport Codes [`airport-codes_csv.csv`](../P8_capstone_resource_files/airport-codes_csv.csv):* **Source**: [datahub.io - Airport codes](https://datahub.io/core/airport-codesdata)* **Description**: Airport codes from around the world contain codes that may refer to either IATA airport code, a three-letter code which is used in passenger reservation, ticketing and baggage-handling systems, or the ICAO airport code which is a four letter code used by ATC systems and for airports that do not have an IATA airport code.* **Data**: Large file, containing information about all airports from [this site](https://ourairports.com/data/)* **Format**: CSV File - Comma separated text file format* **Rows**: over 55k* **Data description**: Detailed information about each listed airport is displayed in 12 columns.  Read data from file Airport Codes: `airport-codes_csv.csv`
###Code
filepath = '../P8_capstone_resource_files/airport-codes_csv.csv'
df_pd_airport = pd.read_csv(filepath)
# Show data (1st 5 rows)
df_pd_airport.head()
# Show data (last 5 rows)
df_pd_airport.tail()
# Get an overview about filled fields
df_pd_airport.count()
###Output
_____no_output_____
###Markdown
Summary for US Cities: Demographics [`us-cities-demographics.json`](../P8_capstone_resource_files/us-cities-demographics.json):* **Source:** [US Cities: Demographics ](https://public.opendatasoft.com/explore/dataset/us-cities-demographics/information/)* **Description:** This dataset contains information about the demographics of all US cities and census-designated places with a population greater or equal to 65,000. This data comes from the [US Census Bureau's 2015 American Community Survey](https://www.census.gov/en.html).* **Data:** Structured data about City, State, Age, Population, etc.* **Format:** JSON File - Structured data* **Rows:** 2,8k* **Data description:** 12 columns describing facts from cities across the U.S. about demographics.  Read data from file US Cities and it's information about citizens: `us-cities-demographics.csv:`
###Code
filepath = '../P8_capstone_resource_files/us-cities-demographics.json'
df_pd_us_cities = pd.read_json(filepath, orient='columns')
# Show data (1st 5 rows)
df_pd_us_cities.head()
# Show data (last 5 rows)
df_pd_us_cities.tail()
# Get an overview about filled fields
df_pd_us_cities.count()
###Output
_____no_output_____
###Markdown
Summary for World Temperature Data [`GlobalLandTemperaturesByCity.csv`](../P8_capstone_resource_files/GlobalLandTemperaturesByCity.csv):* **Source:** [World Temperature Data: Temperature grouped by City and Country](https://www.kaggle.com/berkeleyearth/climate-change-earth-surface-temperature-data)* **Description:** Climate Change: Earth Surface Temperature Data. Global temperatures since 1750.* **Data:** Structured data about Average Temperature, City, Country, Location (Latitude and Longitude)* **Format:** CSV File - Comma separated text file format* **Rows:** 8,5 million entries* **Data description:** Temperature record as time series information since 1750. * **Note:** Temperature data must be formatted correctly Read data from World Temperature Data where Temperature is grouped by City and Country: `GlobalLandTemperaturesByCity.csv`
###Code
filepath = '../P8_capstone_resource_files/GlobalLandTemperaturesByCity.csv'
df_pd_temperature = pd.read_csv(filepath)
# Show data (1st 5 rows)
df_pd_temperature.head()
# Show data (last 5 rows)
df_pd_temperature.tail()
# Get an overview about filled fields
df_pd_temperature.count()
###Output
_____no_output_____ |
examples/encoding/RareLabelEncoder.ipynb | ###Markdown
RareLabelEncoderThe RareLabelEncoder() groups labels that show a small number of observations in the dataset into a new category called 'Rare'. This helps to avoid overfitting.The argument ' tol ' indicates the percentage of observations that the label needs to have in order not to be re-grouped into the "Rare" label. The argument n_categories indicates the minimum number of distinct categories that a variable needs to have for any of the labels to be re-grouped into 'Rare'. NoteIf the number of labels is smaller than n_categories, then the encoder will not group the labels for that variable.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from feature_engine.encoding import RareLabelEncoder
# Load titanic dataset from OpenML
def load_titanic():
data = pd.read_csv('https://www.openml.org/data/get_csv/16826755/phpMYEkMl')
data = data.replace('?', np.nan)
data['cabin'] = data['cabin'].astype(str).str[0]
data['pclass'] = data['pclass'].astype('O')
data['age'] = data['age'].astype('float')
data['fare'] = data['fare'].astype('float')
data['embarked'].fillna('C', inplace=True)
data.drop(labels=['boat', 'body', 'home.dest'], axis=1, inplace=True)
return data
data = load_titanic()
data.head()
X = data.drop(['survived', 'name', 'ticket'], axis=1)
y = data.survived
# we will encode the below variables, they have no missing values
X[['cabin', 'pclass', 'embarked']].isnull().sum()
''' Make sure that the variables are of type object.
If not, cast them as object, otherwise the transformer will either raise an error (if we pass them as an argument)
or not pick them up (if we leave variables=None). '''
X[['cabin', 'pclass', 'embarked']].dtypes
# let's separate into training and testing set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
X_train.shape, X_test.shape
###Output
_____no_output_____
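###Markdown
Before fitting the encoder, it can help to inspect the raw category frequencies that the `tol` threshold is compared against. A quick sketch (assuming the train/test split above):
###Code
# fraction of observations per category; labels below tol will be grouped into 'Rare'
X_train['cabin'].value_counts(normalize=True)
###Output
_____no_output_____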
###Markdown
The RareLabelEncoder() groups rare / infrequent categories in a new category called "Rare", or any other name entered by the user. For example in the variable colour, if the percentage of observations for the categories magenta, cyan and burgundy are < 5%, all those categories will be replaced by the new label "Rare". Note, infrequent labels can also be grouped under a user defined name, for example 'Other'. The name to replace infrequent categories is defined with the parameter replace_with. The encoder will encode only categorical variables (type 'object'). A list of variables can be passed as an argument. If no variables are passed as argument, the encoder will find and encode all categorical variables (object type).
###Code
## Rare value encoder
'''
Parameters
----------
tol: float, default=0.05
the minimum frequency a label should have to be considered frequent.
Categories with frequencies lower than tol will be grouped.
n_categories: int, default=10
the minimum number of categories a variable should have for the encoder
to find frequent labels. If the variable contains less categories, all
of them will be considered frequent.
max_n_categories: int, default=None
the maximum number of categories that should be considered frequent.
If None, all categories with frequency above the tolerance (tol) will be
considered.
variables : list, default=None
The list of categorical variables that will be encoded. If None, the
encoder will find and select all object type variables.
replace_with : string, default='Rare'
The category name that will be used to replace infrequent categories.
'''
rare_encoder = RareLabelEncoder(tol=0.05,
n_categories=5,
variables=['cabin', 'pclass', 'embarked'])
rare_encoder.fit(X_train)
rare_encoder.encoder_dict_
train_t = rare_encoder.transform(X_train)
test_t = rare_encoder.transform(X_train)
test_t.head()
test_t.cabin.value_counts()
###Output
_____no_output_____
###Markdown
The user can change the string from 'Rare' to something else.
###Code
## Rare value encoder
rare_encoder = RareLabelEncoder(tol = 0.03,
replace_with='Other', #replacing 'Rare' with 'Other'
variables=['cabin', 'pclass', 'embarked'],
n_categories=2
)
rare_encoder.fit(X_train)
train_t = rare_encoder.transform(X_train)
test_t = rare_encoder.transform(X_train)
test_t.sample(5)
rare_encoder.encoder_dict_
test_t.cabin.value_counts()
###Output
_____no_output_____
###Markdown
The user can choose to retain only the most popular categories with the argument max_n_categories.
###Code
## Rare value encoder
rare_encoder = RareLabelEncoder(tol = 0.03,
variables=['cabin', 'pclass', 'embarked'],
n_categories=2,
max_n_categories=3 #keeps only the most popular 3 categories in every variable.
)
rare_encoder.fit(X_train)
train_t = rare_encoder.transform(X_train)
test_t = rare_encoder.transform(X_train)
test_t.sample(5)
rare_encoder.encoder_dict_
###Output
_____no_output_____
###Markdown
Automatically select all categorical variablesIf no variable list is passed as argument, it selects all the categorical variables.
###Code
## Rare value encoder
rare_encoder = RareLabelEncoder(tol = 0.03, n_categories=3)
rare_encoder.fit(X_train)
rare_encoder.encoder_dict_
train_t = rare_encoder.transform(X_train)
test_t = rare_encoder.transform(X_train)
test_t.sample(5)
###Output
_____no_output_____ |
live-tutorials/Holoscope.ipynb | ###Markdown
HoloScope: detecting collective anomalies of contrast suspiciousnessHoloScope is topology-and-spike aware fraud detection. HoloScope detects a subgraph of high contrast suspiciousness on topological, temporal, and categorical (e.g. rating score, topic, tag) information. Temporal spike of retweeting a message: AbstractAs online fraudsters invest more resources, including purchasing large pools of fake user accounts and dedicated IPs, fraudulent attacks become less obvious and their detection becomes increasingly challenging. Existing approaches such as average degree maximization suffer from the bias of including more nodes than necessary, resulting in lower accuracy and increased need for manual verification. Hence, we propose HoloScope, which uses information from graph topology and temporal spikes to more accurately detect groups of fraudulent users. In terms of graph topology, we introduce contrast suspiciousness, a dynamic weighting approach, which allows us to more accurately detect fraudulent blocks, particularly low-density blocks. In terms of temporal spikes, HoloScope takes into account the sudden bursts and drops of fraudsters' attacking patterns. In addition, we provide theoretical bounds for how much this increases the time cost needed for fraudsters to conduct adversarial attacks. Additionally, from the perspective of ratings, HoloScope incorporates the deviation of rating scores in order to catch fraudsters more accurately. Moreover, HoloScope has a concise framework and sub-quadratic time complexity, making the algorithm reproducible and scalable. Extensive experiments showed that HoloScope achieved significant accuracy improvements on synthetic and real data, compared with state-of-the-art fraud detection methods.
###Code
import spartan as st
###Output
_____no_output_____
###Markdown
You can configure the backend to use the GPU or the CPU only. \The default backend is CPU.
###Code
# load graph data
tensor_data = st.loadTensor(path = "./inputData/yelp.tensor.gz", header=None)
###Output
_____no_output_____
###Markdown
"tensor_data.data" has multiple-colum attributes, and a single-colum values (optional). The following table shows an example of 10000 four-tuple (user, object, date, score) and the 5th-colum is the frequency. |row id | 0 | 1 | 2 | 3 | 4 ||-----: |-----: |----: |-----------: |----: |----- || 0 | 0 | 0 | 2012-08-01 | 4 | 1 || 1 | 1 | 0 | 2014-02-13 | 5 | 1 || 2 | 2 | 0 | 2015-10-31 | 5 | 1 || 3 | 3 | 0 | 2015-12-26 | 3 | 1 || 4 | 4 | 0 | 2016-04-08 | 2 | 1 || ... | ... | ... | ... | ... | ... || 9995 | 4523 | 508 | 2013-03-06 | 5 | 1 || 9996 | 118 | 508 | 2013-03-07 | 4 | 1 || 9997 | 5884 | 508 | 2013-03-07 | 1 | 1 || 9998 | 2628 | 508 | 2013-04-08 | 5 | 1 || 9999 | 5885 | 508 | 2013-06-17 | 5 | 1 |
###Code
stensor = tensor_data.toSTensor(hasvalue=True, mappers={2:st.TimeMapper(timeformat='%Y-%m-%d')})
#stensor._data
###Output
_____no_output_____
###Markdown
Sparse tensor "stensor" is a multi-mode constructed from tensor_data. users, objects, date time, and score are all mapped into $[0, N]$ integers. \This example constructs a tensor of $5886 \times 509 \times 3857 \times 6$.
###Code
graph = st.Graph(stensor, bipartite=True, weighted=True, modet=2)
###Output
_____no_output_____
###Markdown
Get a Graph instance from a sparse tensor. Run holoscope as a single model
###Code
hs = st.HoloScope(graph)
print(hs)
###Output
_____no_output_____
###Markdown
Default parameters are:{'alg': 'fastgreedy', 'eps': 1.6, 'numSing': 10, 'qfun': 'exp', 'b': 32, 'level': 0}You can change them by passing keyword arguments (name=value) as the doc shows.
###Code
res = hs.run(level=0, k=1)
###Output
_____no_output_____
###Markdown
Running level can be 0: topology only; 1: topology with time; 2: topology with category (e.g. rating score); 3: all three. Use k for the number of dense blocks you want to get. res is a list of the detected blocks. Each block contains $((rows, nnzcols), susp\_score, levelcols, nnzcol\_scores)$
###Code
# create a anomaly detection model
ad_model = st.AnomalyDetection.create(graph, st.ADPolicy.HoloScope, 'holoscope')
# run the model
#default k=2, eps=1.6
res = ad_model.run(k=2)
###Output
_____no_output_____
###Markdown
The result is a list of the top-k suspicious blocks. For each block, the resulting tuple contains $(user~nodes, object~nodes)$, suspicious score, and suspicious scores of all object nodes.\Then we can visualize the subgraphs as follows.
###Code
#visualize the graphs with networkx
import matplotlib.pyplot as plt
for r in res:
rows, cols = r[0]
# to subgraph
sg = graph.get_sub_graph(rows, cols)
# networkx plot
fig = st.plot_graph(sg, bipartite=True, labels=[*rows, *cols])
fig = st.plot_graph(sg, layout='circular', bipartite=True, labels=[*rows, *cols])
###Output
_____no_output_____ |
xu_where-theres-smoke/opendatachallenge_new_wenfei.ipynb | ###Markdown
Results for Random Forest classifiersUsing around 80-120 trees seems best here, which gets us around 0.61 AUC and a 0.99 accuracy. However, increasing the number of trees also increases the run-time by quite a bit.
###Code
from sklearn.ensemble import GradientBoostingClassifier
for i in np.arange(.1,.9,.05):
start = time.time()
clf=GradientBoostingClassifier(learning_rate=i,n_estimators=int(100*(1+i)))
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
score = np.mean(cross_val_score(clf, X_test, y_test,scoring ='accuracy',cv=4))
model_auc = np.mean(cross_val_score(clf, X_test, y_test,scoring ='roc_auc',cv=4))
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred)
print ('learning rate is %s'%i,'\t AUC is %s'%model_auc,'\t Accuracy is %s'%score)
end = time.time()
print ('Runtime is %s'%(end - start))
###Output
learning rate is 0.1 AUC is 0.686639026922 Accuracy is 0.891957941284
Runtime is 21.089849710464478
learning rate is 0.15 AUC is 0.693736637971 Accuracy is 0.928911487624
Runtime is 23.61993980407715
learning rate is 0.2 AUC is 0.634074934677 Accuracy is 0.933624393114
Runtime is 25.314812898635864
learning rate is 0.25 AUC is 0.620524595231 Accuracy is 0.857290726553
Runtime is 26.039467096328735
learning rate is 0.3 AUC is 0.667528112169 Accuracy is 0.984839430649
Runtime is 27.768565893173218
learning rate is 0.35 AUC is 0.67979029022 Accuracy is 0.977036790598
Runtime is 29.458000898361206
learning rate is 0.4 AUC is 0.687259716779 Accuracy is 0.924931626716
Runtime is 28.532164096832275
learning rate is 0.45 AUC is 0.572584908081 Accuracy is 0.816418298021
Runtime is 28.686460971832275
learning rate is 0.5 AUC is 0.623950109649 Accuracy is 0.798849247646
Runtime is 27.56224274635315
learning rate is 0.55 AUC is 0.659882955394 Accuracy is 0.85925842147
Runtime is 29.808526039123535
learning rate is 0.6 AUC is 0.650792833147 Accuracy is 0.818853439748
Runtime is 31.76380181312561
learning rate is 0.65 AUC is 0.575895938317 Accuracy is 0.819665791766
Runtime is 31.493573904037476
learning rate is 0.7 AUC is 0.624298397256 Accuracy is 0.920742278659
Runtime is 32.5714910030365
learning rate is 0.75 AUC is 0.62894097611 Accuracy is 0.878244419304
Runtime is 35.30757713317871
learning rate is 0.8 AUC is 0.625402464772 Accuracy is 0.903277816903
Runtime is 34.725735902786255
learning rate is 0.85 AUC is 0.62568820572 Accuracy is 0.813223909019
Runtime is 36.176201820373535
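###Markdown
The loop above only prints each score. Below is a small sketch of the same sweep that also stores the scores, so the best learning rate can be read off programmatically (assuming the same imports and train/test split as above; variable names are illustrative):
###Code
# Hedged sketch: keep (learning_rate, AUC) pairs instead of only printing them
gb_results = []
for lr in np.arange(.1, .9, .05):
    clf = GradientBoostingClassifier(learning_rate=lr, n_estimators=int(100*(1+lr)))
    clf.fit(X_train, y_train)
    auc = np.mean(cross_val_score(clf, X_test, y_test, scoring='roc_auc', cv=4))
    gb_results.append((lr, auc))
best_lr, best_auc = max(gb_results, key=lambda t: t[1])
print('Best learning rate %.2f with AUC %.3f' % (best_lr, best_auc))
###Output
_____no_output_____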
###Markdown
Results for Gradient Boosted ForestsUsing a learning rate of 0.35-0.4 gets us between 0.68 - 0.69 AUC and a 0.97 - 0.92 accuracy, which seems like it's a slightly better classifier to use here. And it's also significantly faster.
###Code
importances = clf.feature_importances_
print (importances)
###Output
_____no_output_____ |
le_jit.ipynb | ###Markdown
The validity of the JIT model
###Code
from pathlib import Path
import os
HOME = Path(os.environ['HOME'])
MODELNAME = "fpn"
ENCODER = "efficientnet-b5"
DOWN = False # Downsample at the bottom
SRC = HOME/"ucsi"/"fastai"/"models"/"bestmodel_3.pth" # source model path
DST = HOME/"ucsi"/"jit"/"fpn_b5_e3.pth" # desitination model path
from torch import jit
import segmentation_models_pytorch as smp
import torch
from torch import nn
if MODELNAME =="fpn":
model_class = smp.FPN
elif MODELNAME == "unet":
model_class = smp.Unet
###Output
_____no_output_____
###Markdown
Loading The Model
###Code
seg_conf = {
"encoder_name":ENCODER,
"encoder_weights":None,
"classes":4,
"activation":"sigmoid",
}
print("Constructing the model")
print(seg_conf)
if DOWN:
class majorModel(nn.Module):
def __init__(self, seg_model):
super().__init__()
self.seq = nn.Sequential(*[
nn.Conv2d(3,12,kernel_size=(3,3), padding=1, stride=1, ),
nn.ReLU(),
nn.Conv2d(12,3,kernel_size=(3,3), padding=1, stride=2),
nn.ReLU(),
seg_model,])
def forward(self,x):
return self.seq(x)
model = majorModel(model_class(**seg_conf))
else:
model = model_class(**seg_conf)
CUDA = torch.cuda.is_available()
print("CUDA available:\t%s"%(CUDA))
print("Loading from weights:\t%s"%(SRC))
state = torch.load(SRC)
if "model" in state:
state = state["model"]
if CUDA:
model = model.cuda()
model.load_state_dict(state)
testimg = torch.rand(2, 3, 320, 640)
if CUDA:
testimg = testimg.cuda()
model = model.eval()
with torch.no_grad():
y1 = model(testimg)
###Output
_____no_output_____
###Markdown
Save to JIT
###Code
print("Saving to jit traced model:\t%s"%(DST))
with torch.no_grad():
traced = jit.trace(model, testimg)
traced.save(str(DST))
if CUDA:
model = model.cpu()
###Output
_____no_output_____
###Markdown
Recover from saved JIT
###Code
recovered = jit.load(str(DST))
if CUDA:
recovered = recovered.cuda()
with torch.no_grad():
y2 = recovered(testimg)
print("Absolute Mean Error:%s"%(torch.abs(y1-y2).mean().item()))
###Output
_____no_output_____ |
alc/ep1/ep1.ipynb | ###Markdown
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
###Code
NAME = "Breno Poggiali de Sousa"
COLLABORATORS = ""
###Output
_____no_output_____
###Markdown
--- Practical Exercise 1: ConvolutionIn this exercise we will implement the function that computes the convolution of a matrix ```top``` over an image. Using the functions correlate or convolve from scipy.ndimage.filters is not allowed.
###Code
# import the libraries and set some parameters
%matplotlib inline
import numpy as np
from sklearn.datasets import fetch_openml
from matplotlib import pyplot as plt, rcParams
rcParams['figure.figsize'] = 3, 6
%precision 4
np.set_printoptions(precision=4, linewidth=100)
# define two functions to display matrices as images
def plots(ims, interp=False, titles=None):
ims=np.array(ims)
mn,mx=ims.min(),ims.max()
f = plt.figure(figsize=(12,24))
for i in range(len(ims)):
sp=f.add_subplot(1, len(ims), i+1)
if not titles is None: sp.set_title(titles[i], fontsize=18)
plt.imshow(ims[i], interpolation=None if interp else 'none', vmin=mn,vmax=mx)
def plot(im, interp=False):
f = plt.figure(figsize=(3,6), frameon=True)
# plt.show(im)
plt.imshow(im, interpolation=None if interp else 'none')
plt.gray()
plt.close()
# load a handwritten '5' from the file entrada.npy
with open('entrada.npy','rb') as infile:
image = np.load(infile)
# Download and load the mnist_784 dataset, which contains 70000 handwritten digits.
# It is commented out because it is not needed here.
# from sklearn.datasets import fetch_openml
# mnist = fetch_openml('mnist_784', version=1, cache=True)
# images = np.reshape(mnist['data'], (70000, 28, 28))
# labels = mnist['target'].astype(int)
# n=len(images)
# images.shape, labels.shape
# images = images/255
# image = images[0]
# plot the image
plot(image)
# define and plot the top matrix
top=[[-1,-1,-1],
[ 1, 1, 1],
[ 0, 0, 0]]
top = np.array(top)
plot(top)
def convolucao(top, image):
""" Calcula a matriz result que é obtida pela convolução da matriz top
sobre a imagem image.
Dicas:
1. Inicializar a matriz result com np.zeros ou np.empty (Qual o número de linhas? E de colunas?)
2. Iterar sobre cada posição de result fazendo a combinação linear dos coeficientes de top e das
posições correspondentes em image. Note que é possível multiplicar matrizes elemento a elemento
usando o operador *. Consulte np.sum() também.
3. Retornar result
"""
    # the output has one row/column for every valid placement of the kernel
    result = np.zeros([image.shape[0]-top.shape[0]+1, image.shape[1]-top.shape[1]+1])
    for line in range(result.shape[0]):
        for col in range(result.shape[1]):
            # element-wise product of the kernel with the image patch, then sum
            window = image[line:line+top.shape[0], col:col+top.shape[1]]
            result[line, col] = np.sum(top*window)
return result
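# A hedged sanity check with hand-picked values (not part of the original exercise):
# a 1x1 kernel [[1]] should reproduce the image itself.
assert (convolucao(np.array([[1]]), np.array([[1, 2], [3, 4]])) == np.array([[1, 2], [3, 4]])).all()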
# plot the result
result = convolucao(top,image)
plot(result)
with open('saida.npy','rb') as infile:
answer = np.load(infile)
assert (result == answer).all()
# additional *hidden* tests
###Output
_____no_output_____ |
src/homework03/e05-adventures.ipynb | ###Markdown
Exercise 5 ~ The Adventures of the Crypto-Apprentice: Return Of Vernam Cipher
###Code
# Get variable from parameters file
import utils
import binascii
params = utils.get_parameters()
p = params.Q5_p # the prime number
a = params.Q5_a # a first integer
b = params.Q5_b # a second integer
C = params.Q5_C # the ciphertext
y = params.Q5_y # y-coordinate of [2^{2|M'|}]P
n = params.Q5_n # the order of the elliptic curve E
###Output
_____no_output_____
###Markdown
Random point $P=(P_x,P_y)$ of an elliptic curve $E$ is the shared secret and a seed of the key sequence.Let $K=K_0K_1\cdots$ be a key sequence. Then:$K_i=\times([2^i]P)\ \textrm{mod}\ 2$$K_i=1\ \textrm{if}\ [2^i]P\ \textrm{is the point at infinity}\ \mathcal{O}$Where $\times(P)=P_x$ and $[2^i]P$ is a scalar multiplication between an integer $2^i$ and a point $P$. Elliptic curve $E$:$E=\{\mathcal{O}\}\cup\{(x,y)\in K^2\mid y^2=x^3+ax+b\}$Where $K=\mathbb{Z}_p$. Also:We are given the $y$-coordinate of $[2^{2\mid M^\prime\mid}]P$
###Code
G = IntegerModRing(p)
###Output
_____no_output_____
###Markdown
First, let's find the solutions of the elliptic curve equation with the given value $y$ :
###Code
R.<u> = PolynomialRing(G, 'u')
f = (- y ** 2 + u ** 3 + a * u + b)
r = [i[0] for i in f.roots()]
print r
###Output
[2863630260273906879738304382327082098766838571003235254560, 829539754520273617339926976669563027315213831455604281432, 215435968619537503897656783387023535044104393106441343301]
###Markdown
Now let's get these points in the elliptic curve :
###Code
E = EllipticCurve(G, [0, 0, 0, a, b])
points = [E(i, y) for i in r]
###Output
_____no_output_____
###Markdown
We can easily find the possible private keys using the inverse of $2^{2|M'|}$ and the three points! The apprentice should not have given us the value $y$!
###Code
k = inverse_mod(2 ** (2 * len(C)), n)
possible_private_keys = [k * point for point in points]
# decode_binary_string will decode a binary string using a given encoding [FOUND ON STACKOVERFLOW]
def decode_binary_string(s, encoding='utf-8'):
byte_string = ''.join(chr(int(s[i*8:i*8+8],2)) for i in range(len(s)//8))
return byte_string.decode(encoding)
def decrypt(P):
plaintext = []
two_i = 1
for i, b in enumerate(C):
Ki = mod((two_i * P)[0], 2)
Mi = mod(Ki + int(b), 2)
plaintext.append(str(Mi))
two_i *= 2
try:
plain = decode_binary_string(''.join(plaintext), 'ascii')
return plain
except:
return ''
for P in possible_private_keys:
d = decrypt(P)
if d != '':
print d
break
###Output
Revenge? Revenge! The King under Raglan is dead and where are his kin that dare seek revenge? Girion Lord of Dwarf is dead, and I have eaten his people like a wolf among sheep, and where are his sons' sons that dare approach me? I kill where I wish and none dare resist. I laid low the warriors of old and their like is not in the world today. Then I was but young and tender. Now I am old and strong, strong, strong, Thief in Siestas!
|
exercise3/lab6.ipynb | ###Markdown
6 Support Vector Machine 6.1 KernelsSupport Vector Machine can [use different kernels](https://en.wikipedia.org/wiki/Kernel_method): linear, radial basis function, polynomial, sigmoid, etc. The difference between some of them can be seen after running the code below that uses a classical example. Besides the usual packages, the *sklearn* package is also used here.
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
#take the well-known iris dataset
iris = datasets.load_iris()
#we will use only sepal length and width
x=iris.data[:, :2]
y=iris.target
#plot points
x1, x2=x[:, 0], x[:, 1]
x_min, x_max=x1.min()-1, x1.max()+1
y_min, y_max=x2.min()-1, x2.max()+1
h=0.02
plot_x, plot_y=np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
#regularization
C=1.0
models=(svm.SVC(kernel="linear", C=C),
svm.SVC(kernel="rbf", gamma=0.7, C=C),
svm.SVC(kernel="poly", degree=3, C=C))
models=(model.fit(x, y) for model in models)
# title for the plots
titles = ("Linear kernel", "RBF kernel", "Polynomial (degree 3) kernel")
for model, title in zip(models, titles):
points=model.predict(np.c_[plot_x.ravel(), plot_y.ravel()]).reshape(plot_x.shape)
plt.contourf(plot_x, plot_y, points, cmap=plt.cm.coolwarm, alpha=0.8)
plt.xlim(plot_x.min(), plot_x.max())
plt.ylim(plot_y.min(), plot_y.max())
plt.xlabel("Sepal length")
plt.ylabel("Sepal width")
plt.title(title)
predicted=model.predict(x);
print("Accuracy: %.2lf%%"%(100*np.sum(y==predicted)/y.size))
plt.scatter(x1, x2, c=y, cmap=plt.cm.coolwarm, s=20, edgecolors="k")
plt.show()
###Output
Accuracy: 82.00%
###Markdown
**Tasks**1. What accuracies are achieved when other features are used as well?2. Split the dataset into a training and testing part, fit the SVM model on the training part, and test it on the testing part. What gives the highest accuracy? (A sketch for this task follows the code below.)3. Make the code below give over 90% accuracy and then explain how you achieved it and why it worked.
###Code
import numpy as np
from sklearn import svm, datasets
n1=400
n2=400
class1=(np.tile(np.random.uniform(low=0.0, high=1, size=n2).reshape((n2, 1)), (1, 2))+3/2)*\
np.array([(np.cos(a), np.sin(a)) for a in np.random.uniform(low=2, high=8, size=n2)])+np.tile(np.array([[3/2, 0]]), (n1, 1))
class2=(np.tile(np.random.uniform(low=0.0, high=1, size=n2).reshape((n2, 1)), (1, 2))+3/2)*\
np.array([(np.cos(a), np.sin(a)) for a in np.random.uniform(low=-1, high=4, size=n2)])
x=np.vstack((class1, class2))
y=np.concatenate((np.ones((n1)), 2*np.ones((n2))))
idx=np.random.permutation(y.size)
x=x[idx, :]
y=y[idx]
s=round((n1+n2)/2)
#s=600
x_train=x[:s, :]
y_train=y[:s]
x_test=x[s:, :]
y_test=y[s:]
#EDIT ONLY FROM HERE...
model=svm.SVC(kernel="rbf")
model.fit(x_train, y_train)
#...TO HERE
predicted=model.predict(x_test);
print("Accuracy: %.2lf%%"%(100*np.sum(y_test==predicted)/y_test.size))
###Output
Accuracy: 96.00%
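###Markdown
For task 2 of section 6.1, a minimal sketch of a train/test-split evaluation on the two iris features used above (the split ratio, random_state and kernel settings are illustrative assumptions):
###Code
# Hedged sketch: hold out half of the iris data and evaluate on the held-out part
from sklearn.model_selection import train_test_split
iris = datasets.load_iris()
x_tr, x_te, y_tr, y_te = train_test_split(iris.data[:, :2], iris.target, test_size=0.5, random_state=0)
model = svm.SVC(kernel="rbf", gamma=0.7, C=1.0)
model.fit(x_tr, y_tr)
predicted = model.predict(x_te)
print("Accuracy: %.2lf%%"%(100*np.sum(y_te==predicted)/y_te.size))
###Output
_____no_output_____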
###Markdown
6.2 Wine datasetHere we are going to make some experiments with the wine dataset to see how features can [affect](https://en.wikipedia.org/wiki/Feature_selection) the classification.**Tasks**1. Which SVM kernel will achieve the highest accuracy when all features are used?2. If you can use **only one** feature and any kernel to achieve highest possible accuracy, which feature and kernel would that be?3. If you can use **only two** features and any kernel to achieve highest possible accuracy, which feature and kernel would that be?4. How do you explain the results?
###Code
from sklearn.datasets import load_wine
wine=load_wine()
x=wine.data
y=wine.target
idx=np.random.permutation(y.size)
x=x[idx, :]
y=y[idx]
#all features
features_idx=range(x.shape[1])
#only some of the features
#features_idx=[0, 1]
x=x[:, features_idx]
s=round(y.size/2)
x_train=x[:s, :]
y_train=y[:s]
x_test=x[s:, :]
y_test=y[s:]
model=svm.SVC(kernel="linear")
model.fit(x_train, y_train)
predicted=model.predict(x_test);
print("Accuracy: %.2lf%%"%(100*np.sum(y_test==predicted)/y_test.size))
###Output
Accuracy: 95.51%
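###Markdown
For tasks 2 and 3 above, a small sketch that scores each wine feature on its own with a linear kernel (the kernel choice and variable names are illustrative; the same idea extends to pairs of features and other kernels):
###Code
# Hedged sketch: accuracy of an SVM trained on each single feature, highest first
single_feature_acc = []
for f in range(x_train.shape[1]):
    m = svm.SVC(kernel="linear")
    m.fit(x_train[:, [f]], y_train)
    acc = 100*np.sum(y_test == m.predict(x_test[:, [f]]))/y_test.size
    single_feature_acc.append((f, acc))
print(sorted(single_feature_acc, key=lambda t: -t[1])[:3])
###Output
_____no_output_____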
###Markdown
6.3 SpeedSVM is really great, but it has an important disadvantage with respect to neural networks in general. Here we are going to demonstrate it.**Tasks**1. Run the code below for various dataset sizes and each time store the time needed for the model to fit.2. Draw a plot that shows the influence of dataset size on execution time.3. How would you model the influence?4. How would you model the same influence in the case of a multilayer perceptron?
###Code
import numpy as np
from sklearn import svm, datasets
def create_data(n1, n2):
class1=np.c_[np.random.normal(0, 1, size=n1), np.random.normal(0, 1, size=n1)]
class2=np.c_[np.random.normal(2, 1, size=n2), np.random.normal(0, 1, size=n2)]
x=np.vstack((class1, class2))
y=np.concatenate((np.ones((n1)), 2*np.ones((n2))))
return x, y
x, y=create_data(5000, 5000)
model=svm.SVC(kernel="poly", C=1.0)
import time;
start=time.time()
model.fit(x, y)
end=time.time();
t=end-start
print(t)
###Output
5.469843864440918
|
Artificial Neural Network (ANN)/ANN For Classification/artificial_neural_network_for_classification_samrat.ipynb | ###Markdown
Artificial Neural Network (For Classification) Importing the libraries
###Code
import pandas as pd
import numpy as np
import tensorflow as tf
tf.__version__
###Output
_____no_output_____
###Markdown
Part 1 - Data Preprocessing Importing the dataset
###Code
dataset = pd.read_csv('Churn_Modelling.csv')
X = dataset.iloc[:, 3:-1].values
y = dataset.iloc[:, -1].values
print(X)
print(y) # Contains whether each customer has left the bank or not
###Output
[1 0 1 ... 1 1 0]
###Markdown
Encoding categorical data Label Encoding the "Gender" column
###Code
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
X[:, 2] = le.fit_transform(X[:, 2]) # 0 for Female and 1 for Male
print(X)
###Output
[[619 'France' 0 ... 1 1 101348.88]
[608 'Spain' 0 ... 0 1 112542.58]
[502 'France' 0 ... 1 0 113931.57]
...
[709 'France' 0 ... 0 1 42085.58]
[772 'Germany' 1 ... 1 0 92888.52]
[792 'France' 0 ... 1 0 38190.78]]
###Markdown
One Hot Encoding the "Geography" column
###Code
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(), [1])], remainder='passthrough')
X = np.array(ct.fit_transform(X))
print(X)
###Output
[[1.0 0.0 0.0 ... 1 1 101348.88]
[0.0 0.0 1.0 ... 0 1 112542.58]
[1.0 0.0 0.0 ... 1 0 113931.57]
...
[1.0 0.0 0.0 ... 0 1 42085.58]
[0.0 1.0 0.0 ... 1 0 92888.52]
[1.0 0.0 0.0 ... 1 0 38190.78]]
###Markdown
Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
###Output
_____no_output_____
###Markdown
Feature Scaling
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
Part 2 - Building the ANN Initializing the ANN
###Code
ann = tf.keras.models.Sequential(); # Keras is integrated in Tensorflow 2.0.X
###Output
_____no_output_____
###Markdown
Adding the input layer and the first hidden layer
###Code
ann.add(tf.keras.layers.Dense(units = 6, activation = 'relu')) # units specifies the number of neurons in the hidden layer; ReLU (rectifier) is used as the hidden-layer activation
###Output
_____no_output_____
###Markdown
Adding the second hidden layer
###Code
ann.add(tf.keras.layers.Dense(units = 6, activation = 'relu')) # Adds the Second Hidden Layer which also contains 6 Neurons
###Output
_____no_output_____
###Markdown
Adding the output layer
###Code
ann.add(tf.keras.layers.Dense(units = 1, activation = 'sigmoid'))
###Output
_____no_output_____
###Markdown
Part 3 - Training the ANN Compiling the ANN
###Code
ann.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy']) # the Adam optimizer is an adaptive variant of Stochastic Gradient Descent
###Output
_____no_output_____
###Markdown
Training the ANN on the Training set
###Code
ann.fit(X_train, y_train, batch_size = 32, epochs = 100) # batch_size is 32 by default # We have almost 86% accuracy
###Output
Epoch 1/100
250/250 [==============================] - 1s 1ms/step - loss: 0.5900 - accuracy: 0.6895
Epoch 2/100
250/250 [==============================] - 0s 1ms/step - loss: 0.4554 - accuracy: 0.8006
Epoch 3/100
250/250 [==============================] - 0s 1ms/step - loss: 0.4304 - accuracy: 0.8101
Epoch 4/100
250/250 [==============================] - 0s 1ms/step - loss: 0.4172 - accuracy: 0.8140
Epoch 5/100
250/250 [==============================] - 0s 1ms/step - loss: 0.4044 - accuracy: 0.8202
Epoch 6/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3934 - accuracy: 0.8304
Epoch 7/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3818 - accuracy: 0.8393
Epoch 8/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3728 - accuracy: 0.8449
Epoch 9/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3657 - accuracy: 0.8487
Epoch 10/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3604 - accuracy: 0.8506
Epoch 11/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3556 - accuracy: 0.8545
Epoch 12/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3525 - accuracy: 0.8539
Epoch 13/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3495 - accuracy: 0.8562
Epoch 14/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3474 - accuracy: 0.8583
Epoch 15/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3462 - accuracy: 0.8581
Epoch 16/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3454 - accuracy: 0.8568
Epoch 17/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3443 - accuracy: 0.8591
Epoch 18/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3438 - accuracy: 0.8610
Epoch 19/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3429 - accuracy: 0.8602
Epoch 20/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3427 - accuracy: 0.8604
Epoch 21/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3424 - accuracy: 0.8602
Epoch 22/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3416 - accuracy: 0.8620
Epoch 23/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3414 - accuracy: 0.8605
Epoch 24/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3413 - accuracy: 0.8625
Epoch 25/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3414 - accuracy: 0.8612
Epoch 26/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3405 - accuracy: 0.8629
Epoch 27/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3407 - accuracy: 0.8634
Epoch 28/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3403 - accuracy: 0.8618
Epoch 29/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3401 - accuracy: 0.8618
Epoch 30/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3401 - accuracy: 0.8625
Epoch 31/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3398 - accuracy: 0.8636
Epoch 32/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3396 - accuracy: 0.8644
Epoch 33/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3393 - accuracy: 0.8635
Epoch 34/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3393 - accuracy: 0.8639
Epoch 35/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3384 - accuracy: 0.8633
Epoch 36/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3386 - accuracy: 0.8630
Epoch 37/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3383 - accuracy: 0.8639
Epoch 38/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3384 - accuracy: 0.8625
Epoch 39/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3380 - accuracy: 0.8649
Epoch 40/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3380 - accuracy: 0.8629
Epoch 41/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3375 - accuracy: 0.8640
Epoch 42/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3377 - accuracy: 0.8637
Epoch 43/100
250/250 [==============================] - 0s 2ms/step - loss: 0.3374 - accuracy: 0.8631
Epoch 44/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3372 - accuracy: 0.8640
Epoch 45/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3371 - accuracy: 0.8626
Epoch 46/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3364 - accuracy: 0.8643
Epoch 47/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3364 - accuracy: 0.8639
Epoch 48/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3365 - accuracy: 0.8640
Epoch 49/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3365 - accuracy: 0.8635
Epoch 50/100
250/250 [==============================] - 0s 2ms/step - loss: 0.3366 - accuracy: 0.8627
Epoch 51/100
250/250 [==============================] - 0s 2ms/step - loss: 0.3364 - accuracy: 0.8639
Epoch 52/100
250/250 [==============================] - 0s 2ms/step - loss: 0.3362 - accuracy: 0.8631
Epoch 53/100
250/250 [==============================] - 0s 2ms/step - loss: 0.3357 - accuracy: 0.8646
Epoch 54/100
250/250 [==============================] - 0s 2ms/step - loss: 0.3359 - accuracy: 0.8649
Epoch 55/100
250/250 [==============================] - 0s 2ms/step - loss: 0.3353 - accuracy: 0.8664
Epoch 56/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3356 - accuracy: 0.8648
Epoch 57/100
250/250 [==============================] - 0s 2ms/step - loss: 0.3351 - accuracy: 0.8640
Epoch 58/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3350 - accuracy: 0.8631
Epoch 59/100
250/250 [==============================] - 0s 2ms/step - loss: 0.3353 - accuracy: 0.8645
Epoch 60/100
250/250 [==============================] - 0s 2ms/step - loss: 0.3351 - accuracy: 0.8633
Epoch 61/100
250/250 [==============================] - 0s 2ms/step - loss: 0.3345 - accuracy: 0.8634
Epoch 62/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3346 - accuracy: 0.8636
Epoch 63/100
250/250 [==============================] - 0s 2ms/step - loss: 0.3339 - accuracy: 0.8631
Epoch 64/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3344 - accuracy: 0.8639
Epoch 65/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3346 - accuracy: 0.8636
Epoch 66/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3348 - accuracy: 0.8649
Epoch 67/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3342 - accuracy: 0.8626
Epoch 68/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3344 - accuracy: 0.8644
Epoch 69/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3341 - accuracy: 0.8630
Epoch 70/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3344 - accuracy: 0.8637
Epoch 71/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3341 - accuracy: 0.8648
Epoch 72/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3343 - accuracy: 0.8631
Epoch 73/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3344 - accuracy: 0.8654
Epoch 74/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3340 - accuracy: 0.8624
Epoch 75/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3336 - accuracy: 0.8629
Epoch 76/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3338 - accuracy: 0.8650
Epoch 77/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3338 - accuracy: 0.8648
Epoch 78/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3337 - accuracy: 0.8643
Epoch 79/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3334 - accuracy: 0.8650
Epoch 80/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3337 - accuracy: 0.8640
Epoch 81/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3337 - accuracy: 0.8640
Epoch 82/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3335 - accuracy: 0.8652
Epoch 83/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3334 - accuracy: 0.8641
Epoch 84/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3339 - accuracy: 0.8654
Epoch 85/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3334 - accuracy: 0.8644
Epoch 86/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3331 - accuracy: 0.8650
Epoch 87/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3337 - accuracy: 0.8644
Epoch 88/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3332 - accuracy: 0.8637
Epoch 89/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3330 - accuracy: 0.8656
Epoch 90/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3335 - accuracy: 0.8659
Epoch 91/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3335 - accuracy: 0.8643
Epoch 92/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3335 - accuracy: 0.8656
Epoch 93/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3333 - accuracy: 0.8637
Epoch 94/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3332 - accuracy: 0.8649
Epoch 95/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3336 - accuracy: 0.8641
Epoch 96/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3325 - accuracy: 0.8670
Epoch 97/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3328 - accuracy: 0.8635
Epoch 98/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3331 - accuracy: 0.8654
Epoch 99/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3330 - accuracy: 0.8649
Epoch 100/100
250/250 [==============================] - 0s 1ms/step - loss: 0.3329 - accuracy: 0.8644
###Markdown
Part 4 - Making the predictions and evaluating the model Predicting the result of a single observation **Homework**Use our ANN model to predict if the customer with the following informations will leave the bank: Geography: FranceCredit Score: 600Gender: MaleAge: 40 years oldTenure: 3 yearsBalance: \$ 60000Number of Products: 2Does this customer have a credit card? YesIs this customer an Active Member: YesEstimated Salary: \$ 50000So, should we say goodbye to that customer? **Solution**
###Code
print(ann.predict(sc.transform([[1, 0, 0, 600, 1, 40, 3, 60000, 2, 1, 1, 50000]])) > 0.5) # As we have used Sigmoid Function in Output Layer we will get a Probability
###Output
[[False]]
###Markdown
Therefore, our ANN model predicts that this customer stays in the bank!**Important note 1:** Notice that the values of the features were all input in a double pair of square brackets. That's because the "predict" method always expects a 2D array as the format of its inputs. And putting our values into a double pair of square brackets makes the input exactly a 2D array.**Important note 2:** Notice also that the "France" country was not input as a string in the last column but as "1, 0, 0" in the first three columns. That's because of course the predict method expects the one-hot-encoded values of the state, and as we see in the first row of the matrix of features X, "France" was encoded as "1, 0, 0". And be careful to include these values in the first three columns, because the dummy variables are always created in the first columns. Predicting the Test set results
###Code
y_pred = ann.predict(X_test)
y_pred = (y_pred > 0.5)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
###Output
[[0 0]
[0 1]
[0 0]
...
[0 0]
[0 0]
[0 0]]
###Markdown
Making the Confusion Matrix
###Code
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred) # So our Accuracy is 86.05%
###Output
[[1532 63]
[ 216 189]]
|
notebooks/LeNet_MNIST_Experiments.ipynb | ###Markdown
Obtain Training and Test Datasets
###Code
## Obtaining the training and testing data
%reload_ext autoreload
%autoreload 2
NUM_NORMAL = 5000
NUM_ANOMALIES = 1000
TEST_NUM_ANOMALIES = 50
# trainX,trainY = createData.get_MNIST_TrainingData(NUM_NORMAL)
trainX,trainY,testX,testY = createData.get_MNIST_TrainingData(NUM_NORMAL,NUM_ANOMALIES)
trainX = np.concatenate((trainX,testX),axis=0)
trainY = np.concatenate((trainY,testY),axis=0)
[Xtest_Pos,label_ones,Xtest_Neg,label_sevens]= createData.get_MNIST_TestingData(NUM_NORMAL,TEST_NUM_ANOMALIES)
###Output
5000 Positive test samples
1000 Negative test samples
5000 Positive test samples
50 Negative test samples
###Markdown
Train LeNet Model - Supervised Model
###Code
%reload_ext autoreload
%autoreload 2
IMG_HGT =28
IMG_WDT=28
IMG_DEPTH=1
### Reshape the numpy array
trainX = np.reshape(trainX,(len(trainX),IMG_HGT,IMG_WDT,IMG_DEPTH))
Xtest_Pos = np.reshape(Xtest_Pos,(len(Xtest_Pos),IMG_HGT,IMG_WDT,IMG_DEPTH))
Xtest_Neg = np.reshape(Xtest_Neg,(len(Xtest_Neg),IMG_HGT,IMG_WDT,IMG_DEPTH))
testX = np.concatenate((Xtest_Pos,Xtest_Neg),axis=0)
testY = np.concatenate((label_ones,label_sevens),axis=0)
nClass =2
NUM_EPOCHS = 25
clf_LeNet = LeNet()
clf_LeNet.fit(trainX,trainY,testX,testY,NUM_EPOCHS,IMG_HGT,IMG_WDT,IMG_DEPTH,nClass)
###Output
[INFO] compiling model...
[INFO] training network...
Epoch 1/25
187/187 [==============================] - 8s 43ms/step - loss: 0.1077 - acc: 0.9574 - val_loss: 0.0122 - val_acc: 0.9956
Epoch 2/25
187/187 [==============================] - 8s 44ms/step - loss: 0.0438 - acc: 0.9846 - val_loss: 0.0384 - val_acc: 0.9869
Epoch 3/25
187/187 [==============================] - 9s 46ms/step - loss: 0.0302 - acc: 0.9888 - val_loss: 0.0099 - val_acc: 0.9964
Epoch 4/25
187/187 [==============================] - 9s 47ms/step - loss: 0.0300 - acc: 0.9900 - val_loss: 0.0161 - val_acc: 0.9949
Epoch 5/25
187/187 [==============================] - 9s 48ms/step - loss: 0.0240 - acc: 0.9918 - val_loss: 0.0034 - val_acc: 0.9976
Epoch 6/25
187/187 [==============================] - 9s 48ms/step - loss: 0.0259 - acc: 0.9915 - val_loss: 0.0026 - val_acc: 0.9990
Epoch 7/25
187/187 [==============================] - 9s 49ms/step - loss: 0.0236 - acc: 0.9910 - val_loss: 0.0101 - val_acc: 0.9964
Epoch 8/25
187/187 [==============================] - 9s 49ms/step - loss: 0.0197 - acc: 0.9926 - val_loss: 0.0042 - val_acc: 0.9990
Epoch 9/25
187/187 [==============================] - 9s 49ms/step - loss: 0.0172 - acc: 0.9940 - val_loss: 0.0056 - val_acc: 0.9980
Epoch 10/25
187/187 [==============================] - 9s 50ms/step - loss: 0.0192 - acc: 0.9937 - val_loss: 0.0100 - val_acc: 0.9966
Epoch 11/25
187/187 [==============================] - 9s 49ms/step - loss: 0.0200 - acc: 0.9947 - val_loss: 0.0041 - val_acc: 0.9986
Epoch 12/25
187/187 [==============================] - 10s 51ms/step - loss: 0.0207 - acc: 0.9936 - val_loss: 0.0070 - val_acc: 0.9970
Epoch 13/25
187/187 [==============================] - 9s 50ms/step - loss: 0.0172 - acc: 0.9950 - val_loss: 0.0101 - val_acc: 0.9968
Epoch 14/25
187/187 [==============================] - 9s 50ms/step - loss: 0.0177 - acc: 0.9935 - val_loss: 0.0046 - val_acc: 0.9978
Epoch 15/25
187/187 [==============================] - 9s 50ms/step - loss: 0.0129 - acc: 0.9952 - val_loss: 0.0061 - val_acc: 0.9978
Epoch 16/25
187/187 [==============================] - 9s 50ms/step - loss: 0.0194 - acc: 0.9942 - val_loss: 0.0026 - val_acc: 0.9992
Epoch 17/25
187/187 [==============================] - 9s 51ms/step - loss: 0.0151 - acc: 0.9957 - val_loss: 0.0025 - val_acc: 0.9990
Epoch 18/25
187/187 [==============================] - 9s 51ms/step - loss: 0.0136 - acc: 0.9945 - val_loss: 0.0200 - val_acc: 0.9923
Epoch 19/25
187/187 [==============================] - 9s 51ms/step - loss: 0.0195 - acc: 0.9943 - val_loss: 0.0029 - val_acc: 0.9986
Epoch 20/25
187/187 [==============================] - 10s 51ms/step - loss: 0.0127 - acc: 0.9960 - val_loss: 0.0081 - val_acc: 0.9970
Epoch 21/25
187/187 [==============================] - 10s 51ms/step - loss: 0.0121 - acc: 0.9957 - val_loss: 0.0028 - val_acc: 0.9990
Epoch 22/25
187/187 [==============================] - 9s 50ms/step - loss: 0.0114 - acc: 0.9955 - val_loss: 0.0076 - val_acc: 0.9976
Epoch 23/25
187/187 [==============================] - 9s 50ms/step - loss: 0.0157 - acc: 0.9945 - val_loss: 0.0093 - val_acc: 0.9960
Epoch 24/25
187/187 [==============================] - 9s 50ms/step - loss: 0.0139 - acc: 0.9947 - val_loss: 0.0050 - val_acc: 0.9980
Epoch 25/25
187/187 [==============================] - 9s 50ms/step - loss: 0.0125 - acc: 0.9955 - val_loss: 0.0037 - val_acc: 0.9986
[INFO] serializing network...
###Markdown
Test LeNet Model
###Code
%reload_ext autoreload
%autoreload 2
auc_LeNet = clf_LeNet.score(Xtest_Pos,label_ones,Xtest_Neg,label_sevens)
print("===========")
print("AUC: ",auc_LeNet)
print("===========")
###Output
[INFO] loading network...
5050 Actual test samples
===================================
auccary_score: 0.9986138613861386
roc_auc_score: 0.9993000000000001
y_true [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
y_pred [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
===================================
===========
AUC: 0.9993000000000001
===========
|
docsource/source/auto_examples/wifibot.ipynb | ###Markdown
********************************************************************************2D Robot Localization on Real Data********************************************************************************Goals of this script:- apply the UKF for the 2D robot localization example on real data.*We assume the reader is already familiar with the considered problem described in the tutorial.*We address the same problem as described in the tutorial on our own data. Import==============================================================================
###Code
from ukfm import LOCALIZATION as MODEL
import ukfm
import numpy as np
import matplotlib
ukfm.utils.set_matplotlib_config()
###Output
_____no_output_____
###Markdown
Model and Data==============================================================================This script uses the :meth:`~ukfm.LOCALIZATION` model. Instead of creating data, we load recorded data. We have recorded five sequences (sequences 2 and 3 are the most interesting).
###Code
# sequence number
n_sequence = 3
# GPS frequency (Hz)
gps_freq = 2
# GPS noise standard deviation (m)
gps_std = 0.1
# load data
states, omegas, ys, one_hot_ys, t = MODEL.load(n_sequence, gps_freq, gps_std)
###Output
_____no_output_____
###Markdown
Data has been obtained in an experiment conducted at the Centre for Robotics, MINES ParisTech. We used a so-called Wifibot, which is a small wheeled robot equipped with independent odometers on the left and right wheels, see figure. A set of seven highly precise cameras, the OptiTrack motion capture system, provides the reference trajectory (ground truth) with sub-millimeter precision at a rate of 120 Hz. (Figure ../images/robot.jpg: testing arena with the Wifibot robot in the foreground of the picture; two of the seven OptiTrack cameras are visible in the background.) We define the odometry noise standard deviation for the filter.
###Code
odo_std = np.array([0.15, # longitudinal speed
0.05, # transverse shift speed
0.15]) # differential odometry
###Output
_____no_output_____
###Markdown
Filter Design==============================================================================We embed here the state in $SE(2)$ with left multiplication, i.e. - the retraction $\varphi(.,.)$ is the $SE(2)$ exponential, where the state multiplies on the left the uncertainty $\boldsymbol{\xi}$.- the inverse retraction $\varphi^{-1}_.(.)$ is the $SE(2)$ logarithm.We define the filter parameters based on the model parameters.
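For reference, restating the two bullet points above as equations: $$\varphi(\boldsymbol{\chi}, \boldsymbol{\xi}) = \boldsymbol{\chi} \exp(\boldsymbol{\xi}), \qquad \varphi^{-1}_{\boldsymbol{\chi}}(\bar{\boldsymbol{\chi}}) = \log\left(\boldsymbol{\chi}^{-1} \bar{\boldsymbol{\chi}}\right),$$ where $\exp$ and $\log$ denote the $SE(2)$ exponential and logarithm maps.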
###Code
# propagation noise covariance matrix
Q = np.diag(odo_std ** 2)
# measurement noise covariance matrix
R = gps_std ** 2 * np.eye(2)
# sigma point parameters
alpha = np.array([1e-3, 1e-3, 1e-3])
###Output
_____no_output_____
###Markdown
Filter Initialization------------------------------------------------------------------------------We initialize the filter with the true state plus an initial heading error of30°, and set corresponding initial covariance matrices.
###Code
# "add" orientation error to the initial state
SO2 = ukfm.SO2
state0 = MODEL.STATE(Rot=states[0].Rot.dot(SO2.exp(30/180*np.pi)),
p=states[0].p)
# initial state uncertainty covariance matrix
P0 = np.zeros((3, 3))
# The state is not perfectly initialized
P0[0, 0] = (30/180*np.pi)**2
###Output
_____no_output_____
###Markdown
We define the filter as an instance of the ``UKF`` class.
###Code
ukf = ukfm.UKF(state0=state0, # initial state
P0=P0, # initial covariance
f=MODEL.f, # propagation model
h=MODEL.h, # observation model
Q=Q, # process noise covariance
R=R, # observation noise covariance
phi=MODEL.left_phi, # retraction function
phi_inv=MODEL.left_phi_inv, # inverse retraction function
alpha=alpha # sigma point parameters
)
###Output
_____no_output_____
###Markdown
Before launching the filter, we set a list for recording estimates along the full trajectory and a 3D array to record covariance estimates.
###Code
N = t.shape[0]
ukf_states = [ukf.state]
ukf_Ps = np.zeros((N, 3, 3))
ukf_Ps[0] = ukf.P
###Output
_____no_output_____
###Markdown
Filtering------------------------------------------------------------------------------The UKF proceeds as a standard Kalman filter with a for loop.
###Code
# measurement iteration number (first measurement is for n == 0)
k = 1
for n in range(1, N):
# propagation
dt = t[n] - t[n-1]
ukf.propagation(omegas[n-1], dt)
# update only if a measurement is received
if one_hot_ys[n] == 1:
ukf.update(ys[k])
k += 1
# save estimates
ukf_states.append(ukf.state)
ukf_Ps[n] = ukf.P
###Output
_____no_output_____
###Markdown
Results==============================================================================We plot the trajectory, the measurements and the estimated trajectory. We then plot the position and orientation error with their 95% ($3\sigma$) confidence interval.
###Code
MODEL.plot_wifibot(ukf_states, ukf_Ps, states, ys, t)
###Output
_____no_output_____ |
notebooks/AnalysesReasoning.ipynb | ###Markdown
How much more often do words related to 'free reasoning' occur, compared to 'following existing reasoning'?
###Code
print(counts_category.free.sum() / counts_category.following.sum())
###Output
3.37278390034
###Markdown
By school: we divide the text per school and look at what percentage of the total words refers to 'free reasoning' and 'following existing reasoning'.
###Code
by_school = counts_category.groupby(['School']).sum()[['following', 'free', 'Number_of_tokens']]
by_school = by_school[['following', 'free']].divide(by_school.Number_of_tokens, axis=0)
by_school
###Output
_____no_output_____
###Markdown
How much more often does free occur compared to following?
###Code
by_school.free / by_school.following
###Output
_____no_output_____
###Markdown
Let's plot these percentages. The higher the total bar, the more often these words appear in the text, relatively
###Code
import matplotlib.ticker as ticker
ax = (by_school*100).plot(kind='bar', stacked=True)
plt.title('Categories as percentages of total texts')
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter('%1.3f%%'))
plt.show()
###Output
_____no_output_____
###Markdown
The same plot, but with the bars not stacked, to see the difference in height more clearly
###Code
ax = (by_school*100).plot(kind='bar', stacked=False)
plt.title('Categories as percentages of total texts')
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter('%1.3f%%'))
plt.show()
###Output
_____no_output_____
###Markdown
Over centuries: we create the same counts and plots, but now for the centuries. But first we check how many books we have per century.
###Code
counts_category.century_n.value_counts(sort=False).plot(kind='bar')
by_century = counts_category.groupby(['century_n']).sum()[['following', 'free', 'Number_of_tokens']]
by_century = by_century[['following', 'free']].divide(by_century.Number_of_tokens, axis=0)
by_century
###Output
_____no_output_____
###Markdown
How much more often does free occur?
###Code
by_century.free / by_century.following
ax = (by_century*100).plot(kind='bar', stacked=True)
plt.title('Categories as percentages of total texts')
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter('%1.3f%%'))
plt.show()
ax = (by_century*100).plot(kind='bar', stacked=False)
plt.title('Categories as percentages of total texts')
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter('%1.3f%%'))
plt.show()
###Output
_____no_output_____
###Markdown
Per region: the same plots for the regions. First the number of books for each region. For some regions, this number is very low and it is difficult to draw conclusions.
###Code
counts_category.Geographical_area.value_counts(sort=False).plot(kind='bar')
by_region = counts_category.groupby(['Geographical_area']).sum()[['following', 'free', 'Number_of_tokens']]
by_region = by_region[['following', 'free']].divide(by_region.Number_of_tokens, axis=0)
by_region
by_region.free / by_region.following
ax = (by_region*100).plot(kind='bar', stacked=True)
plt.title('Categories as percentages of total texts')
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter('%1.3f%%'))
plt.show()
ax = (by_region*100).plot(kind='bar', stacked=False)
plt.title('Categories as percentages of total texts')
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter('%1.3f%%'))
plt.show()
###Output
_____no_output_____ |
2020_04_06/Npy文件训练.ipynb | ###Markdown
Load the Data - load the data - check that the images and labels match
###Code
import numpy as np
from matplotlib import pyplot as plt
from datetime import datetime

train_images = np.load('/content/drive/My Drive/Machine Learning/dataset/cats_dogs/npy/train-images-idx3.npy')
train_labels = np.load('/content/drive/My Drive/Machine Learning/dataset/cats_dogs/npy/train-labels-idx1.npy')
test_images = np.load('/content/drive/My Drive/Machine Learning/dataset/cats_dogs/npy/t10k-images-idx3.npy')
test_labels = np.load('/content/drive/My Drive/Machine Learning/dataset/cats_dogs/npy/t10k-labels-idx1.npy')
print(train_images.shape, train_labels.shape, test_images.shape, test_labels.shape)
# Randomly check that the labels and images match up
# - sample 9 images and 9 labels from the training set
image_no = np.random.randint(0,3602, size=9) # randomly pick 9 indices
fig, axes = plt.subplots(nrows=3, ncols=3,figsize=(7,7))
for i in range(3):
for j in range(3):
axes[i][j].imshow(train_images[image_no[i*3+j]])
axes[i][j].set_title(train_labels[image_no[i*3+j]])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Start Training - define the DataLoader - build the model - define the loss function and optimizer - start training
###Code
import torch
from torch import nn, optim
from torch.autograd import Variable
from torch.utils.data import DataLoader
from torchvision import transforms
from torch.utils.checkpoint import checkpoint, checkpoint_sequential
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
###Output
cuda
###Markdown
Create the DataLoader - the numpy arrays need to be converted to tensors first
###Code
# Define the DataLoader
# ---------
# Convert to tensors
# ---------
X_train = torch.from_numpy(train_images.reshape(-1, 3, 64, 64)).float() # input x tensor
X_test = torch.from_numpy(test_images.reshape(-1, 3, 64, 64)).float()
Y_train = torch.from_numpy(train_labels).long() # target y tensor
Y_test = torch.from_numpy(test_labels).long()
print(X_train.shape, Y_train.shape)
# ---------------
# Create the dataloaders
# ---------------
MINIBATCH_SIZE = 200
trainDataset = torch.utils.data.TensorDataset(X_train, Y_train) # combine training data and targets
trainDataloader = torch.utils.data.DataLoader(
dataset=trainDataset,
batch_size=MINIBATCH_SIZE,
shuffle=True,
    num_workers=1 # number of worker processes for data loading
)
testDataset = torch.utils.data.TensorDataset(X_test, Y_test) # test data and targets
testDataloader = torch.utils.data.DataLoader(
    dataset=testDataset,
    batch_size=MINIBATCH_SIZE, # batch size
    shuffle=True, # shuffle
    num_workers=1 # number of worker processes
)
###Output
torch.Size([22500, 3, 64, 64]) torch.Size([22500])
###Markdown
Model Definition
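A quick shape check (a sketch, assuming the 3x64x64 inputs prepared above): the `conv1` stack keeps the spatial size at 64x64 (each Conv2d with kernel 2, stride 1, padding 1 gives 65, and the following MaxPool2d with kernel 2, stride 1 brings it back to 64), while `conv2` reduces it to 2x2 with 256 channels (64 -> 33 -> 16 -> 6 -> 2). The flattened feature vector therefore has 256 * 2 * 2 = 1024 elements, which is why the first fully connected layer below uses `in_features=1024`.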
###Code
torch.cuda.empty_cache()
# Model definition
# -------
# Define the network
# -------
class ModuleWrapperIgnores2ndArg(nn.Module):
def __init__(self, module):
super().__init__()
self.module = module
def forward(self,x, dummy_arg=None):
assert dummy_arg is not None
x = self.module(x)
return x
class cnn(nn.Module):
def __init__(self):
super(cnn, self).__init__()
        # Parameter definitions
        # Convolution + pooling layers
        self.conv1 = nn.Sequential(
            # layer 1
            nn.Conv2d(kernel_size=2, in_channels=3, out_channels=64, stride=1, padding=1),
            nn.BatchNorm2d(64), # with batch normalization
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=1),
            # layer 2
            nn.Conv2d(kernel_size=2, in_channels=64, out_channels=128, stride=1, padding=1),
            nn.BatchNorm2d(128), # with batch normalization
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=1),
            # layer 3
            nn.Conv2d(kernel_size=2, in_channels=128, out_channels=256, stride=1, padding=1),
            nn.BatchNorm2d(256), # with batch normalization
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=1),
        )
self.conv2 = nn.Sequential(
            # layer 4
            nn.Conv2d(kernel_size=2, in_channels=256, out_channels=256, stride=2, padding=1),
            nn.BatchNorm2d(256), # with batch normalization
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
            # layer 5
            nn.Conv2d(kernel_size=3, in_channels=256, out_channels=256, stride=3, padding=1),
            nn.BatchNorm2d(256), # with batch normalization
nn.ReLU(),
nn.MaxPool2d(kernel_size=3, stride=3)
)
        # Fully connected layers
self.line = nn.Sequential(
nn.Linear(in_features=1024, out_features=512),
nn.Dropout(p=0.7),
nn.Linear(in_features=512, out_features=2)
)
        # used for gradient checkpointing
# self.dummy_tensor = torch.ones(1, dtype=torch.float32, requires_grad=True)
# self.module_wrapper = ModuleWrapperIgnores2ndArg(self.conv1)
def forward(self, x):
x = self.conv1(x)
x = self.conv2(x)
# x = self.conv(x)
        x = x.view(x.size(0), -1) # flatten
x = self.line(x)
return x
# ------------
# Inspect the network structure
# ------------
Cnn = cnn().to(device)
print(Cnn)
# ------------
# Test with some input data
# ------------
Cnn(X_train[0:3].to(device))
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Select the Runtime → "Change runtime type" menu to enable a GPU accelerator, ')
print('and then re-execute this cell.')
else:
print(gpu_info)
###Output
Sun Apr 5 21:53:28 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.64.00 Driver Version: 418.67 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla P100-PCIE... Off | 00000000:00:04.0 Off | 0 |
| N/A 55C P0 36W / 250W | 875MiB / 16280MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
###Markdown
Define the Optimizer and Loss Function & Start Training
###Code
loss_fn = nn.CrossEntropyLoss() # define the loss function
optimiser = optim.Adam(params=Cnn.parameters(), lr=0.001) # define the optimizer
print('Starting training.')
num_epochs = 42
total_step = len(trainDataloader) # number of steps in each epoch
lossList = []
AccuryList = []
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimiser, step_size=45*7, gamma=0.9) # decay the learning rate to 90% of its value every step_size steps
for epoch in range(num_epochs):
    Cnn.train() # training mode
    totalLoss = 0 # running total used to compute the average training loss
for i, (images, labels) in enumerate(trainDataloader):
images = images.to(device)
labels = labels.to(device)
        pre = Cnn(images) # model predictions
        loss = loss_fn(pre, labels) # compute the loss
        # backpropagation
optimiser.zero_grad()
loss.backward()
optimiser.step()
lr_scheduler.step()
        # accumulate the loss for the running average
totalLoss = totalLoss + loss.item()
# ---------
        # print results
# ---------
if (i+2) % 30 == 0:
            t = datetime.now() # get the current time
print('Time {}, Epoch [{}/{}], Step [{}/{}], loss:{:.4f}'.format(t, epoch, num_epochs, i+1, total_step, totalLoss/(i+1)))
            # take a look at the training-set accuracy
_, predicted = torch.max(pre.data, 1)
total = labels.size(0)
correct = (predicted == labels).sum().item()
print('Training Accuracy: {}, Training Rate: {}'.format(100 * correct / total, optimiser.param_groups[0]['lr']))
lossList.append(totalLoss/(i+1))
# --------------------------
    # evaluate on the test set after each epoch
# --------------------------
Cnn.eval() # eval mode (batchnorm uses moving mean/variance instead of mini-batch mean/variance)
with torch.no_grad():
correct = 0
total = 0
for images, labels in testDataloader:
images = images.to(device)
labels = labels.to(device)
            outputs = Cnn(images) # model output (class scores)
            _, predicted = torch.max(outputs.data, 1) # predicted label for each image
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Test Accuracy of the model on the 10000 test images: {} %'.format(100 * correct / total))
AccuryList.append(correct / total)
print('-'*10)
fig, axes = plt.subplots(nrows=1, ncols=1,figsize=(13,7))
axes.plot(lossList, 'k--')
fig, axes = plt.subplots(nrows=1, ncols=1,figsize=(13,7))
axes.plot(AccuryList, 'k--')
###Output
_____no_output_____ |
examples/mixup_example_using_IMDB_sentiment.ipynb | ###Markdown
Mixup augmentation for NLP, using the IMDB sentiment classification dataset
###Code
# Import libraries
try:
import textaugment
except ModuleNotFoundError:
!pip install textaugment
import textaugment
import pandas as pd
import tensorflow as tf
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation
from tensorflow.keras.layers import Embedding
from tensorflow.keras.layers import Conv1D, GlobalMaxPooling1D
from tensorflow.keras.datasets import imdb
from textaugment import MIXUP
%matplotlib inline
tf.__version__
textaugment.__version__
###Output
_____no_output_____
###Markdown
Initialize constant variables
###Code
# set parameters:
max_features = 5000
maxlen = 400
batch_size = 32
embedding_dims = 50
filters = 250
kernel_size = 3
hidden_dims = 250
epochs = 10
runs = 1
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
###Output
Loading data...
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
17465344/17464789 [==============================] - 0s 0us/step
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 400)
x_test shape: (25000, 400)
###Markdown
Initialize mixup
###Code
mixup = MIXUP()
generator, step = mixup.flow(x_train, y_train, batch_size=batch_size, runs=runs)
print('Build model...')
model = Sequential()
# we start off with an efficient embedding layer which maps
# our vocab indices into embedding_dims dimensions
model.add(Embedding(max_features,
embedding_dims,
input_length=maxlen))
model.add(Dropout(0.2))
# we add a Convolution1D, which will learn filters
# word group filters of size filter_length:
model.add(Conv1D(filters,
kernel_size,
padding='valid',
activation='relu',
strides=1))
# we use max pooling:
model.add(GlobalMaxPooling1D())
# We add a vanilla hidden layer:
model.add(Dense(hidden_dims))
model.add(Dropout(0.2))
model.add(Activation('relu'))
# We project onto a single unit output layer, and squash it with a sigmoid:
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.summary()
###Output
Build model...
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (None, 400, 50) 250000
_________________________________________________________________
dropout (Dropout) (None, 400, 50) 0
_________________________________________________________________
conv1d (Conv1D) (None, 398, 250) 37750
_________________________________________________________________
global_max_pooling1d (Global (None, 250) 0
_________________________________________________________________
dense (Dense) (None, 250) 62750
_________________________________________________________________
dropout_1 (Dropout) (None, 250) 0
_________________________________________________________________
activation (Activation) (None, 250) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 251
_________________________________________________________________
activation_1 (Activation) (None, 1) 0
=================================================================
Total params: 350,751
Trainable params: 350,751
Non-trainable params: 0
_________________________________________________________________
###Markdown
Train model using mixup augmentation
###Code
h1 = model.fit(generator, steps_per_epoch=step,
epochs=epochs,
validation_data=(x_test, y_test))
pd.DataFrame(h1.history)[['loss','val_loss']].plot(title="With mixup")
print('Build model...')
model2 = Sequential()
# we start off with an efficient embedding layer which maps
# our vocab indices into embedding_dims dimensions
model2.add(Embedding(max_features,
embedding_dims,
input_length=maxlen))
model2.add(Dropout(0.2))
# we add a Convolution1D, which will learn filters
# word group filters of size filter_length:
model2.add(Conv1D(filters,
kernel_size,
padding='valid',
activation='relu',
strides=1))
# we use max pooling:
model2.add(GlobalMaxPooling1D())
# We add a vanilla hidden layer:
model2.add(Dense(hidden_dims))
model2.add(Dropout(0.2))
model2.add(Activation('relu'))
# We project onto a single unit output layer, and squash it with a sigmoid:
model2.add(Dense(1))
model2.add(Activation('sigmoid'))
model2.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model2.summary()
h2 = model2.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test))
pd.DataFrame(h2.history)[['loss','val_loss']].plot(title="Without mixup")
###Output
_____no_output_____ |
pelajaran/09_Bab_9_Pemrograman_Fungsional.ipynb | ###Markdown
Indonesian Python Module, Ninth Series ___Coded by psychohaxer | Version 1.1 (2020.12.24)___ This notebook contains example Python code together with its output, as a reference while coding. This notebook may be shared and edited without changing or removing the author's name. Happy learning, and may your time be enjoyable. Note: this module uses Python 3. This notebook is licensed under the [MIT License](https://opensource.org/licenses/MIT).___ Chapter 9: Functional Programming
Functional programming is a programming paradigm in which particular blocks of code only need to be written once and are simply called whenever they are to be run. A `function` itself is a set of code blocks that can be executed by calling the function's name. By applying functional programming, our code becomes more structured and we avoid wasteful typing. _DRY (Don't Repeat Yourself)_ is the concept being applied, while at the same time avoiding _WET (Write Everything Twice)_. In short, we avoid writing the same code over and over.
Characteristics of a function:
* a set of code blocks that performs a specific task
* reusable
* executed by calling its name
* can have parameters and be given arguments
  * a parameter is a variable defined inside the parentheses `()`
  * example of a function with a parameter: `hitungLuas(sisi)`
  * an argument is the actual value given to the function when it is called
  * example of a function with an argument: `hitungLuas(4)`
* can produce a value as its result (`return`)
* there are built-in functions (for example `print()`)
* we can create our own functions
Creating a Function: if we want to create a function, we should use a clear name that matches its purpose. Creating a function starts with the keyword `def` followed by the function name.
###Code
def nama_fungsi():
print("Hai aku adalah fungsi!")
###Output
_____no_output_____
###Markdown
The above is a simple example of creating a function. Calling a Function: a function will not run if it is not called. Calling a function is also known as _call_, _invoke_, _run_, _execute_, etc.; they all mean the same thing. A function is called by writing its name followed by parentheses, passing arguments if needed.
###Code
nama_fungsi()
###Output
Hai aku adalah fungsi!
###Markdown
Functions with Parameters: as an example, we will create a function to compare two numbers. The numbers to be processed are passed in through the parameters, which in the function below are the variables a and b.
###Code
def cek_bil(a,b):
if (a == b):
print("A == B")
elif (a < b):
print("A < B")
elif (a > b):
print("A > B")
###Output
_____no_output_____
###Markdown
We will call this function and give it arguments.
###Code
cek_bil(13,13)
cek_bil(12,14)
cek_bil(10,7)
###Output
A == B
A < B
A > B
###Markdown
The `return` Keyword: return is used when we want the function we create to give back a value.
###Code
def jumlah(a,b):
return a+b
jumlah(3,4)
###Output
_____no_output_____
###Markdown
Default Parameters: a default parameter value is the value used when the function is called without an argument for that parameter. **Default values can only be given to the trailing parameters** (a parameter with a default cannot be followed by one without), because arguments are assigned by position. So if we have several parameters, only the last ones can have default values.
###Code
def sapa(nama, sapaan="Halo"):
print(sapaan, nama)
###Output
_____no_output_____
###Markdown
If we call the function above and only give an argument for the `nama` parameter, the `sapaan` parameter is filled with `"Halo"`.
###Code
sapa("Adi")
###Output
Halo Adi
###Markdown
It is a different story if we provide a value.
###Code
sapa("Fajar","Selamat pagi")
###Output
Selamat pagi Fajar
###Markdown
What happens if we give a default value to a parameter that is not in the right position?
###Code
def fungsi(a=10,b):
return a+b
###Output
_____no_output_____
###Markdown
Functions with Many Arguments: if we do not know the exact number of arguments, we write it as below.
###Code
def simpan(variabel, * tup):
print(variabel)
for variabel in tup:
print(variabel)
simpan(2)
simpan(2,3,4,5,6)
###Output
2
3
4
5
6
###Markdown
Assigning a Function to a Variable: we can use a variable to call a function. Remember the `cek_bil` function above? We will assign it to the variable `cek`.
###Code
cek = cek_bil
cek(2,3)
###Output
A < B
###Markdown
A Function Returning Another Function: a function can call and return the value of a function defined inside it. This is done by paying attention to the functions' indentation and by assigning the function to a variable.
###Code
def salam():
def ucapkan():
return "Assalamualaikum"
return ucapkan
ucapan_salam = salam()
print(ucapan_salam())
###Output
Assalamualaikum
###Markdown
Local Variables and Global Variables: not every variable can be accessed from everywhere; it depends on where we define the variable. A local variable is a variable defined inside a function, whereas a global variable is defined outside any function.
###Code
variabel_global = "Aku variabel global! Aku bisa diakses dimana saja."
def fungsi1():
variabel_lokal = "Aku variabel lokal! Aku hanya bisa diakses didalam fungsiku."
print(variabel_global)
print(variabel_lokal)
fungsi1()
###Output
Aku variabel global! Aku bisa diakses dimana saja.
Aku variabel lokal! Aku hanya bisa diakses didalam fungsiku.
###Markdown
Calling a local variable outside its function raises a `NameError` because the variable is not found.
###Code
print(variabel_lokal)
def fungsi2():
variabel_lokal_juga = "Aku variabel lokal juga lho!"
print(variabel_global)
print(variabel_lokal_juga)
print(variabel_lokal)
fungsi2()
###Output
Aku variabel global! Aku bisa diakses dimana saja.
Aku variabel lokal juga lho!
###Markdown
Nested Functions: a function can contain other functions, in which case it is called a nested function. As an example, we will create a function to compute the area of a square or a rectangle: if we give 1 argument it is treated as a square, and if 2 arguments, as a rectangle.
###Code
def hitungLuasSegiEmpat(a, b=0):
def persegi(s):
luas = s*s
return luas
def persegiPanjang(p,l):
luas = p*l
return luas
if (b == 0):
return persegi(a)
else:
return persegiPanjang(a,b)
hitungLuasSegiEmpat(4)
hitungLuasSegiEmpat(3,5)
###Output
_____no_output_____ |
notebooks/05.1-Corrupted-convE.ipynb | ###Markdown
Corrupted labels are a useful way to measure a network's generalization potential- https://arxiv.org/pdf/1611.03530.pdf
###Code
import os
import json
import numpy as np
import pandas as pd
import scipy
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from src.data_loader import Shifted_Data_Loader,upsample_dataset
from src.plot import orig_vs_transformed as plot_ovt
from src.plot import enc_dec_samples
from src.models import GResNet,EDense,EResNet,EConvNet
from src.config import get_config
from src.trainer import Trainer
from src.utils import prepare_dirs_and_logger
from keras.datasets import fashion_mnist,mnist
from keras.layers import Dense
from keras.models import Model
from keras.utils import to_categorical
from keras.optimizers import adadelta
config,_ = get_config()
# Boilerplate
setattr(config, 'proj_root', '/home/elijahc/projects/vae')
setattr(config, 'log_dir', '/home/elijahc/projects/vae/logs')
setattr(config, 'dev_mode',False)
setattr(config, 'seed', 7)
setattr(config, 'project','vae')
setattr(config, 'ecc_max',4.8/8.0)
setattr(config, 'bg_noise',0.2)
setattr(config, 'contrast_level',0.3)
# setattr(config, 'rot_max',90.0/360.0)
setattr(config, 'rot_max',0)
# Training Params
setattr(config, 'batch_size', 512)
setattr(config, 'dataset', 'fashion_mnist')
setattr(config, 'epochs',27)
setattr(config, 'monitor', None)
# setattr(config, 'lr', 10)
# setattr(config, 'min_delta', 0.25)
# setattr(config, 'monitor', 'val_loss')
setattr(config, 'optimizer', 'nadam')
setattr(config, 'label_corruption',0.0)
# Architecture Params
setattr(config, 'enc_blocks', [128,256,512])
setattr(config, 'enc_arch', 'convnet-4')
setattr(config, 'dec_blocks', [4,2,1])
setattr(config, 'z_dim', 35)
setattr(config, 'y_dim', 35)
if config.ecc_max == 0.:
translation_amt = None
else:
translation_amt = config.ecc_max
if config.rot_max == 0.:
rot_max = None
else:
rot_max = config.rot_max
if config.bg_noise == 0.:
bg_noise = None
else:
bg_noise = config.bg_noise
# Loss Weights
setattr(config, 'xcov', 0)
setattr(config, 'recon', 3)
setattr(config, 'xent', 15)
# setattr(config,'model_dir','/home/elijahc/projects/vae/models/2019-06-07/recon_{}_xent_{}/label_corruption_{}'.format(config.recon,config.xent,config.label_corruption))
setattr(config,'model_dir','/home/elijahc/projects/vae/models/2019-06-05/xent_{}_recon_{}_{}/bg_noise_{}'.format(config.xent,config.recon,config.enc_arch,config.bg_noise))
np.random.seed(7)
if not config.dev_mode:
print('setting up...')
prepare_dirs_and_logger(config)
vars(config)
oversample_factor=3
DL = Shifted_Data_Loader(dataset=config.dataset,flatten=False,num_train=60000*oversample_factor,
translation=translation_amt,
rotation=rot_max,
contrast_level=config.contrast_level,
noise_mode='uniform',
noise_kws={
'amount':1,
'width':config.bg_noise,
},
bg_only=False,
)
pt,idx = plot_ovt(DL,cmap='gray')
# plt.imshow(DL.fg_train[50].reshape(56,56),cmap='gray',vmin=0,vmax=1)
G_builder = GResNet(y_dim=config.y_dim,z_dim=config.z_dim,dec_blocks=config.dec_blocks,flatten_out=False)
E_builder = EConvNet(blocks=config.enc_blocks,z_dim=config.z_dim,output_size=512)
trainer = Trainer(config,DL,E_builder,G_builder,)
# setattr(trainer.config,'model_dir','/home/elijahc/projects/vae/models/2019-01-22/')
trainer.model.summary()
# trainer.build_model()
trainer.compile_model()
# trainer.G.summary()
DL.sx_test.shape
val_pct = 0.05
val_idxs = np.random.choice(np.arange(10000),int(val_pct*60000),replace=False)
validation_set = (DL.sx_test[val_idxs],
{'class':DL.y_test_oh[val_idxs],
'G':DL.fg_test[val_idxs]}
)
if config.label_corruption >= 0.1:
# Load corrupted Labels
y_tr_corr = np.load('../data/fashion_mnist_corrupted_labels/y_train_{}.npy'.format(config.label_corruption))
y_tr_corr = upsample_dataset(y_tr_corr,180000-60000)
y_corr_idxs = np.load('../data/fashion_mnist_corrupted_labels/corrupted_idxs_{}.npy'.format(config.label_corruption))
y_corr_idxs = np.concatenate([y_corr_idxs,(y_corr_idxs+60000),y_corr_idxs+120000],axis=0)
y_corrupted_oh = to_categorical(y_tr_corr,num_classes=10)
y = y_corrupted_oh
DL.gen_corrupted_shift_image(y_corr_idxs,y_tr_corr)
else:
y = DL.y_train_oh
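# Hedged aside (an assumption, not the pipeline that produced the saved .npy files loaded above):
# label corruption of this kind can be generated by re-drawing a random fraction of labels
# uniformly over the 10 classes, e.g.
#   idx = np.random.choice(len(y_true), int(frac * len(y_true)), replace=False)
#   y_corr = y_true.copy(); y_corr[idx] = np.random.randint(0, 10, size=len(idx))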
trainer.go(x=DL.sx_train,
y={
'class':y,
# 'D_real':RF,
'G':DL.fg_train},
# validation_split=0.05,
shuffle=True,
validation_data=validation_set,
verbose=0)
DL.sx_train.shape
hist_df = pd.DataFrame.from_records(trainer.model.history.history)
hist_df.head()
sns.set_context('paper')
metrics = ['loss','G_loss','class_acc']
fig,axs = plt.subplots(nrows=len(metrics),sharex=True,figsize=(10,10))
for metric_name,ax in zip(metrics,axs):
sns.scatterplot(data=hist_df[[metric_name,'val_'+metric_name]],ax=ax)
hist_df['generalization_error'] = hist_df.val_loss - hist_df.loss
hist_df['G_generalization_error'] = hist_df.val_G_loss - hist_df.G_loss
hist_df['class_generalization_error'] = hist_df.val_class_loss - hist_df.class_loss
sns.lineplot(data=hist_df[['class_generalization_error']])
# plt.yscale('log')
import datetime as dt
def clean_config(config,keys=['dev_mode','log_dir','log_level','proj_root']):
c = vars(config)
for k in keys:
if k in c.keys():
del c[k]
c['uploaded_by']='elijahc'
c['last_updated']= str(dt.datetime.now())
return c
run_meta = clean_config(config)
run_meta['project']='vae'
# run_meta['ecc_max']=0.8
run_meta
trainer.config = config
trainer.save_model()
run_conf = clean_config(config)
with open(os.path.join(run_conf['model_dir'],'config.json'), 'w') as fp:
json.dump(run_conf, fp)
hist_df.to_parquet(os.path.join(run_conf['model_dir'],'train_history.parquet'))
generator = trainer.G
trainer.E.summary()
z_encoder = Model(trainer.input,trainer.z_lat)
y_encoder = Model(trainer.input,trainer.y_lat)
classifier = Model(trainer.input,trainer.y_class)
l3_encoder = Model(trainer.input,trainer.model.get_layer(name='dense_1').output)
l1_encoder = Model(trainer.input,trainer.model.get_layer(name='conv2d_1').output)
# l2_encoder = Model(trainer.input,trainer.model.get_layer(name='block_2_Add_2').output)
# l2_encoder = Model(trainer.input,trainer.model.get_layer(name='block_4_Add_1').output)
l2_encoder = Model(trainer.input,trainer.model.get_layer(name='conv2d_3').output)
mod = trainer.model
# mod.summary()
from keras import backend as K

def get_weight_grad(model, inputs, outputs):
""" Gets gradient of model for given inputs and outputs for all weights"""
grads = model.optimizer.get_gradients(model.total_loss, model.trainable_weights)
symb_inputs = (model._feed_inputs + model._feed_targets + model._feed_sample_weights)
f = K.function(symb_inputs, grads)
x, y, sample_weight = model._standardize_user_data(inputs, outputs)
output_grad = f(x + y + sample_weight)
return output_grad
classifier.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['acc'])
res = classifier.evaluate(DL.sx_test,DL.y_test_oh,batch_size=config.batch_size)
ts_error = 1-res[1]
print(res[1])
df = pd.DataFrame.from_records({'test_acc':[res[1]],
'label_corruption':[config.label_corruption],
'recon':[config.recon],
'xent':[config.xent],
'ecc_max':[config.ecc_max],
'xcov': [config.xcov]})
df.to_json(os.path.join(config.model_dir,'performance.json'))
out_s = l1_encoder.output_shape
type(out_s)
l1_enc = l1_encoder.predict(DL.sx_test,batch_size=config.batch_size).reshape(10000,np.prod(l1_encoder.output_shape[1:]))
l2_enc = l2_encoder.predict(DL.sx_test,batch_size=config.batch_size).reshape(10000,np.prod(l2_encoder.output_shape[1:]))
l3_enc = l3_encoder.predict(DL.sx_test,batch_size=config.batch_size).reshape(10000,np.prod(l3_encoder.output_shape[1:]))
z_enc = z_encoder.predict(DL.sx_test,batch_size=config.batch_size)
# y_lat = y_lat_encoder.predict(DL.sx_test,batch_size=config.batch_size)
y_enc = y_encoder.predict(DL.sx_test,batch_size=config.batch_size)
l1_enc.shape
import xarray
import hashlib
import random
def raw_to_xr(encodings,l_2_depth,stimulus_set):
obj_names = [
"T-shirt",
"Trouser",
"Pullover",
"Dress",
"Coat",
"Sandal",
"Dress Shirt",
"Sneaker",
"Bag",
"Ankle boot",
]
all_das = []
for layer,activations in encodings.items():
neuroid_n = activations.shape[1]
n_idx = pd.MultiIndex.from_arrays([
pd.Series(['{}_{}'.format(layer,i) for i in np.arange(neuroid_n)],name='neuroid_id'),
pd.Series([l_2_depth[layer]]*neuroid_n,name='layer'),
pd.Series([layer]*neuroid_n,name='region')
])
p_idx = pd.MultiIndex.from_arrays([
stimulus_set.image_id,
stimulus_set.dx,
stimulus_set.dy,
stimulus_set.rxy,
stimulus_set.numeric_label.astype('int8'),
pd.Series([obj_names[i] for i in stimulus_set.numeric_label],name='object_name'),
pd.Series(stimulus_set.dx.values/28, name='tx'),
pd.Series(stimulus_set.dy.values/28, name='ty'),
pd.Series([1.0]*len(stimulus_set),name='s'),
])
da = xarray.DataArray(activations.astype('float32'),
coords={'presentation':p_idx,'neuroid':n_idx},
dims=['presentation','neuroid'])
all_das.append(da)
return xarray.concat(all_das,dim='neuroid')
encodings = {
'pixel':DL.sx_test.reshape(10000,np.prod(DL.sx_test.shape[1:])),
'dense_1':l1_enc,
'dense_2':l2_enc,
'dense_3':l3_enc,
'y_lat':y_enc,
'z_lat':z_enc
}
depths = {
'pixel':0,
'dense_1':1,
'dense_2':2,
'dense_3':3,
'y_lat':4,
'z_lat':4
}
slug = [(dx,dy,float(lab),float(random.randrange(20))) for dx,dy,rxy,lab in zip(DL.dx[1],DL.dy[1],DL.dtheta[1],DL.y_test)]
image_id = [hashlib.md5(json.dumps(list(p),sort_keys=True).encode('utf-8')).digest().hex() for p in slug]
stim_set = pd.DataFrame({'dx':DL.dx[1]-14,'dy':DL.dy[1]-14,'numeric_label':DL.y_test,'rxy':DL.dtheta[1],'image_id':image_id})
out = raw_to_xr(encodings,depths,stim_set)
out = raw_to_xr(encodings,depths,stim_set)
from collections import OrderedDict
def save_assembly(da,run_dir,fname,**kwargs):
da = da.reset_index(da.coords.dims)
da.attrs = OrderedDict()
with open(os.path.join(run_dir,fname), 'wb') as fp:
da.to_netcdf(fp,**kwargs)
save_assembly(out,run_dir=config.model_dir,fname='dataset.nc',
format='NETCDF3_64BIT',
# engine=
# encoding=enc,
)
# z_enc_tr = z_encoder.predict(DL.sx_train,batch_size=config.batch_size)
# y_lat = y_lat_encoder.predict(DL.sx_test,batch_size=config.batch_size)
# y_enc_tr = y_encoder.predict(DL.sx_train,batch_size=config.batch_size)
np.save(os.path.join(config.model_dir,'z_enc'),z_enc)
np.save(os.path.join(config.model_dir,'l1_enc'),l1_enc)
np.save(os.path.join(config.model_dir,'l2_enc'),l2_enc)
np.save(os.path.join(config.model_dir,'y_enc'),y_enc)
y_enc.shape
_lat_vec = np.concatenate([y_enc,z_enc],axis=1)
_lat_vec.shape
z_enc_mu = np.mean(z_enc,axis=0)
z_enc_cov = np.cov(z_enc,rowvar=False)
np.random.multivariate_normal(z_enc_mu,z_enc_cov,size=50).shape
regen = generator.predict(_lat_vec,batch_size=config.batch_size)
rand_im = np.random.randint(0,10000)
plt.imshow(regen[rand_im].reshape(56,56),cmap='gray')
_lat_vec[rand_im]
DL2 = Shifted_Data_Loader(dataset=config.dataset,flatten=False,
rotation=None,
translation=translation_amt,
bg_noise=bg_noise,
bg_only=False,
)
# enc_dec_samples(DL.x_train,DL.sx_train,z_enc_tr,y_enc_tr,generator)
enc_dec_samples(DL.x_test,DL.sx_test,z_enc,y_enc,generator)
z_enc2 = z_encoder.predict(DL2.sx_test,batch_size=config.batch_size)
y_lat2 = y_encoder.predict(DL2.sx_test,batch_size=config.batch_size)
_lat_vec2 = np.concatenate([y_lat2,z_enc2],axis=1)
regen2 = generator.predict(_lat_vec2,batch_size=config.batch_size)
from src.plot import remove_axes,remove_labels
from src.utils import gen_trajectory
examples = 5
rand_im = np.random.randint(0,10000,size=examples)
fix,axs = plt.subplots(examples,11,figsize=(8,4))
_lat_s = []
regen_s = []
out = gen_trajectory(z_enc[rand_im],z_enc2[rand_im],delta=.25)
out_y = gen_trajectory(y_enc[rand_im],y_lat2[rand_im],delta=.25)
for z,y in zip(out,out_y):
_lat = np.concatenate([y,z],axis=1)
_lat_s.append(_lat)
regen_s.append(generator.predict(_lat,batch_size=config.batch_size))
i=0
for axr,idx in zip(axs,rand_im):
axr[0].imshow(DL.x_test[idx].reshape(28,28),cmap='gray')
axr[1].imshow(DL.sx_test[idx].reshape(56,56),cmap='gray')
axr[2].imshow(regen[idx].reshape(56,56),cmap='gray')
for j,a in enumerate(axr[3:-3]):
a.imshow(regen_s[j][i,:].reshape(56,56),cmap='gray')
# a.imshow(s.reshape(56,56),cmap='gray')
axr[-3].imshow(regen2[idx].reshape(56,56),cmap='gray')
axr[-2].imshow(DL2.sx_test[idx].reshape(56,56),cmap='gray')
axr[-1].imshow(DL2.x_test[idx].reshape(28,28),cmap='gray')
for a in axr:
remove_axes(a)
remove_labels(a)
i+=1
# plt.imshow(regen[rand_im].reshape(56,56),cmap='gray')
# fix.savefig('../../updates/2019-02-05/assets/img/translocate_{}.png'.format(translation_amt))
fdjsakl;fdsa  # raises NameError, halting execution at this point
dxs = DL.dx[1]-14
dys = DL.dy[1]-14
from sklearn.preprocessing import MinMaxScaler
feat_range = (0,30)
z_enc_scaled = [MinMaxScaler(feat_range).fit_transform(z_enc[:,i].reshape(-1,1)).tolist() for i in np.arange(25)]
z_enc_scaled = np.squeeze(np.array(z_enc_scaled,dtype=int))
l2_enc_scaled = [MinMaxScaler(feat_range).fit_transform(l2_enc[:,i].reshape(-1,1)).tolist() for i in np.arange(2000)]
l2_enc_scaled = np.squeeze(np.array(l2_enc_scaled,dtype=int))
l2_enc_scaled.shape
from collections import Counter
import dit
from dit import Distribution
def mutual_information(X,Y):
XY_c = Counter(zip(X,Y))
XY_pmf = {k:v/float(sum(XY_c.values())) for k,v in XY_c.items()}
XY_jdist = Distribution(XY_pmf)
return dit.shannon.mutual_information(XY_jdist,[0],[1])
z_dx_I = [mutual_information(z_enc_scaled[i],dxs.astype(int)+14) for i in np.arange(25)]
l2_dx_I = [mutual_information(l2_enc_scaled[i],dxs.astype(int)+14) for i in np.arange(2000)]
z_dy_I = [mutual_information(z_enc_scaled[i],dys.astype(int)+14) for i in np.arange(25)]
l2_dy_I = [mutual_information(l2_enc_scaled[i],dys.astype(int)+14) for i in np.arange(2000)]
z_class_I = [mutual_information(z_enc_scaled[i],DL.y_test) for i in np.arange(25)]
l2_class_I = [mutual_information(l2_enc_scaled[i],DL.y_test) for i in np.arange(2000)]
z_I_df = pd.DataFrame.from_records({'class':z_class_I,'dy':z_dy_I,'dx':z_dx_I})
z_I_df['class'] = z_I_df['class'].values.round(decimals=1)
l2_I_df = pd.DataFrame.from_records({
'class':l2_class_I,
'dy':l2_dy_I,
'dx':l2_dx_I})
l2_I_df['class'] = l2_I_df['class'].values.round(decimals=1)
l2_I_df.head()
plt.hist(l2_I_df.dx)
plt.hist(z_I_df.dx)
config.translation_amt = translation_amt
config.translation_amt
dir_path = '../data/xcov_importance/dist_{}/'.format(translation_amt)
z_I_df.to_pickle('../data/xcov_importance/dist_{}/z_mutual_info.pk'.format(translation_amt))
np.save('../data/xcov_importance/dist_{}/dxs'.format(translation_amt), DL.dx[1]-14)
np.save('../data/xcov_importance/dist_{}/dys'.format(translation_amt), DL.dy[1]-14)
np.save('../data/xcov_importance/dist_{}/z_enc'.format(translation_amt), z_enc)
hist_df.to_pickle(os.path.join(dir_path,'training_hist.df'))
with open(os.path.join(dir_path,'config.json'), 'w') as fp:
json.dump(vars(config), fp)
def filter_by_weight(wts,thresh=0.01):
idxs = np.abs(wts)>thresh
return idxs
dx_max = np.argmax(z_I_df.dx.values)
dy_max = np.argmax(z_I_df.dy.values)
t = 0.05
dx_filt = filter_by_weight(z_w_k[:,dx_max],thresh=t)
dy_filt = filter_by_weight(z_w_k[:,dy_max],thresh=t)
union = np.union1d(np.where(dx_filt==True),np.where(dy_filt==True))
intersect = np.intersect1d(np.where(dx_filt==True),np.where(dy_filt==True))
# filt = np.array([False]*2000)
# filt[union] = True
sns.set_context('talk')
fig,axs = plt.subplots(1,2,figsize=(6*2,5))
filt = dy_filt
print('num: ',len(union))
print('intersect_frac: ',float(len(intersect))/len(union))
print('mean dx_I: ',l2_I_df.dx[filt].mean())
print('mean dy_I: ',l2_I_df.dy[filt].mean())
points = axs[0].scatter(x=l2_I_df['dx'],y=l2_I_df['dy'],
c=l2_I_df['class'],cmap='viridis',vmin=0,vmax=0.4,s=z_I_df['class']*100
)
plt.colorbar(points)
points = axs[1].scatter(x=z_I_df['dx'],y=z_I_df['dy'],c=z_I_df['class'],cmap='viridis',s=z_I_df['class']*100,vmin=0,vmax=0.4)
# plt.colorbar(points)
axs[0].set_ylim(0,0.9)
axs[0].set_xlim(0,0.9)
axs[1].set_ylim(0,0.9)
axs[1].set_xlim(0,0.9)
fig,ax = plt.subplots(1,1,figsize=(5,5))
ax.scatter(z_dx_I,z_dy_I)
# ax.set_ylim(0,0.8)
# ax.set_xlim(0,0.8)
plt.scatter(np.arange(25),sorted(z_class_I,reverse=True))
# plt.scatter(np.arange(25),z_dx_I)
# plt.scatter(np.arange(25),z_dy_I)
from src.metrics import var_expl,norm_var_expl
from collections import Counter
dtheta = DL.dtheta[1]
fve_dx = norm_var_expl(features=z_enc,cond=dxs,bins=21)
fve_dy = norm_var_expl(features=z_enc,cond=dys,bins=21)
fve_class = norm_var_expl(features=z_enc, cond=DL.y_test, bins=21)
# fve_dt = norm_var_expl(features=z_enc,cond=dtheta,bins=21)
# fve_dx_norm = (dxs.var()-fve_dx)/dxs.var()
# fve_dy_norm = (dys.var()-fve_dy)/dys.var()
# fve_dth_norm = (dtheta.var()-fve_dt)/dtheta.var()
fve_dx_norm = fve_dx
fve_dy_norm = fve_dy
import seaborn as sns
sns.set_context('talk')
fve_dx_norm.shape
# np.save(os.path.join(config.model_dir,'fve_dx_norm'),fve_dx_norm)
fig,ax = plt.subplots(1,1,figsize=(5,5))
plt.scatter(fve_dx_norm.mean(axis=0),fve_dy_norm.mean(axis=0))
plt.xlabel('fve_dx')
plt.ylabel('fve_dy')
plt.tight_layout()
# plt.savefig(os.path.join(config.model_dir,'fve_dx.png'))
# plt.ylim(-0.125,0.25)
xdim = np.argmax(fve_dx_norm.mean(axis=0))
fve_dy_norm.mean(axis=0)
# np.save(os.path.join(config.model_dir,'fve_dy_norm'),fve_dy_norm)
plt.scatter(np.arange(config.z_dim),fve_dy_norm.mean(axis=0))
plt.xlabel('Z_n')
plt.ylabel('fve_dy')
plt.tight_layout()
# plt.savefig(os.path.join(config.model_dir,'fve_dy.png'))
# plt.ylim(-0.125,0.25)
ydim = np.argmax(fve_dy_norm.mean(axis=0))
plt.scatter(np.arange(config.z_dim),fve_class.mean(axis=0))
plt.xlabel('Z_n')
plt.ylabel('fve_class')
# plt.ylim(0.0,0.5)
np.argmax(fve_class.mean(axis=0))
from src.plot import Z_color_scatter
Z_color_scatter(z_enc,[xdim,ydim],dxs)
Z_color_scatter(z_enc,[xdim,ydim],dys)
Z_color_scatter(z_enc,[7,18],dtheta)
from plt.
###Output
_____no_output_____ |
src/pokec/pokec_low_inf_results.ipynb | ###Markdown
Load results
###Code
pokec_out = '../../out/pokec_low_inf'
exps = 10
embed='user'
models = ['unadjusted.main',
'spf.main',
'network_pref_only.main',
'pif.z-theta-joint']
conf_types = ['homophily', 'exog', 'both']
confounding_strengths = [(50, 10), (50, 50), (50, 100)]
exp_results = {}
for i in range(1, exps+1):
for model in models:
for (cov1conf, cov2conf) in confounding_strengths:
for ct in conf_types:
try:
base_file_name = 'conf=' + str((cov1conf, cov2conf)) +';conf_type=' +ct + '.npz'
result_file = os.path.join(pokec_out, str(i), model + '_model_fitted_params', base_file_name)
res = np.load(result_file)
params = res['fitted']
truth = res['true']
if (ct, (cov1conf,cov2conf)) in exp_results:
if model in exp_results[(ct, (cov1conf,cov2conf))]:
exp_results[(ct, (cov1conf,cov2conf))][model].append((params, truth))
else:
exp_results[(ct, (cov1conf,cov2conf))][model]= [(params, truth)]
else:
exp_results[(ct, (cov1conf,cov2conf))] = {model:[(params, truth)]}
except:
print(result_file,' not found')
###Output
_____no_output_____
###Markdown
Confounding bias from per-item confounders only
###Code
confounding_type='exog'
models = list(exp_results[(confounding_type, confounding_strengths[1])].keys())
regime1 = {'Low':(confounding_type, confounding_strengths[0]),
'Med.':(confounding_type, confounding_strengths[1]),
'High':(confounding_type, confounding_strengths[2])}
df1 = print_table(exp_results, regime1, models)
df1
###Output
_____no_output_____
###Markdown
Confounding bias from per-person confounders only
###Code
confounding_type='homophily'
models = list(exp_results[(confounding_type, confounding_strengths[0])].keys())
regime1 = {'Low':(confounding_type, confounding_strengths[0]),
'Med.':(confounding_type, confounding_strengths[1]),
'High':(confounding_type, confounding_strengths[2])}
df2= print_table(exp_results, regime1, models)
df2
###Output
_____no_output_____
###Markdown
Confounding bias from both sources
###Code
confounding_type='both'
models = list(exp_results[(confounding_type, confounding_strengths[0])].keys())
regime1 = {'Low':(confounding_type, confounding_strengths[0]),
'Med.':(confounding_type, confounding_strengths[1]),
'High':(confounding_type, confounding_strengths[2])}
df3 = print_table(exp_results, regime1, models)
df3
###Output
_____no_output_____
###Markdown
Visualize all results
###Code
all_results = pd.concat([df1, df2, df3], axis=1)
all_results
###Output
_____no_output_____ |
feature_extraction/src/in_out.ipynb | ###Markdown
In/out file. In this file I put all the methods related to the data processing: * Add metadata * Label stuff * Outlier treatment !!! TODO * Missings * Split. I'll build it in the Jupyter notebook and then move it into a .py file once it is tested and working. ````Input: data_path/allfiles + data_path/metadatos_v2.0.txt Output: name.csv or name_train.csv, name_train_target.csv, name_test.csv, name_test_target.csv````
###Code
import os
import sys
sys.path.insert(1, '../../src')
import warnings
warnings.simplefilter('ignore')
warnings.filterwarnings("ignore", message="numpy.dtype size changed")
warnings.filterwarnings("ignore", message="numpy.ufunc size changed")
from datetime import timedelta
import time
import numpy as np
import pandas as pd
import networkx as nx
from fancyimpute import IterativeImputer
from sklearn.model_selection import train_test_split
from natsort import natsorted
from matplotlib import pyplot as plt
import gc
# Data paths:
DATA_PATH = '../definitive_data_folder'
PATIENTS_PATH = DATA_PATH + '/allfiles'
# The program will try to load the csv; if the csv does not exist it will generate it using the txt.
METADATA_PATH = DATA_PATH + '/metadatos_v2.0.csv'
if not os.path.exists(METADATA_PATH):
generate_metadata_csv()
OUTPUT_PATH = DATA_PATH + '/datasets'
try: os.mkdir(DATA_PATH)
except: pass
try: os.mkdir(OUTPUT_PATH)
except: pass
# Globals
labels=['ECTODERM', 'NEURAL_CREST', 'MESODERM', 'ENDODERM']
hist2 = np.array(['Biliary', 'Bladder', 'Bone/SoftTissue', 'Breast', 'CNS', 'Cervix',
'Colon/Rectum', 'Esophagus', 'Head/Neck', 'Kidney', 'Liver',
'Lung', 'Lymphoid', 'Myeloid', 'Ovary', 'Pancreas', 'Prostate',
'Skin', 'Stomach', 'Thyroid', 'Uterus'])
chromosomes = ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18',
'19', '20', '21', '22', 'X', 'Y']
svclass = ['DEL', 'DUP', 'TRA', 'h2hINV', 't2tINV']
k = 300
TOMMY = '43dadc68-c623-11e3-bf01-24c6515278c0'
def generate_metadata_csv():
"""
This function generates a real dataset using the txt given and saves it as a csv.
:return:
"""
data = pd.DataFrame(
columns=['sampleID', 'donor_sex', 'donor_age_at_diagnosis', 'histology_tier1', 'histology_tier2',
'tumor_stage1', 'tumor_stage2'])
with open(METADATA_PATH.replace('.csv','.txt')) as f:
for l in f:
words = l.split()
id = words[0]
sex = words[1]
age = words[2]
tier1 = words[3]
tier2 = words[4]
tumor_stage1 = '_'.join(words[5:7])
tumor_stage2 = '_'.join(words[8:])
data = data.append({'sampleID': id, 'donor_sex': sex, 'donor_age_at_diagnosis': age,
'histology_tier1': tier1, 'histology_tier2': tier2,
'tumor_stage1': tumor_stage1, 'tumor_stage2': tumor_stage2}, ignore_index=True)
data = data.drop(data.index[0])
data.to_csv(METADATA_PATH, index=False)
def generateTRAGraph(patient):
'''
This function generates a graph per patient representing the traslocations of this patient.
vertex: Chromosomes
edge: the number of traslocations between each chromosome
Input:
patient(string): The patient id.
Output:
graph: networkx format
edge_list: List with the format:
node1 node2 weight (edge between node1 and node2 with weight weight)
'''
patient_path = PATIENTS_PATH + '/'+ patient + '.vcf.tsv'
# Load the patient breaks, and select only the traslocations
patient_breaks = pd.read_csv(patient_path, sep='\t', index_col=None)
# patient_breaks['chrom2'] = patient_breaks['chrom2'].map(str)
only_TRA = patient_breaks.loc[patient_breaks['svclass'] == 'TRA']
# The crosstab is equivalent to the adjacency matrix, so we use this to calculate it
ct_tra = pd.crosstab(only_TRA['#chrom1'], only_TRA['chrom2'])
ct_tra.index = ct_tra.index.map(str)
adjacency_matrix_connected_only = ct_tra
aux = pd.DataFrame(0,columns=chromosomes, index=chromosomes)
aux.index = aux.index.map(str)
ct_tra = aux.add(ct_tra,fill_value=0)
aux = None
# Reorder
ct_tra = ct_tra.reindex(index=natsorted(ct_tra.index))
ct_tra = ct_tra[chromosomes]
# change the values to int
ct_tra = ct_tra.astype(int)
# Generate the adjacency matrix
adjacency_matrix = pd.DataFrame(data=ct_tra.values,
columns=chromosomes, index=chromosomes)
# print(adjacency_matrix)
graph = nx.from_pandas_adjacency(adjacency_matrix)
graph.to_undirected()
# Remove isolated vertices
graph.remove_nodes_from(list(nx.isolates(graph)))
edge_list = nx.generate_edgelist(graph,data=['weight'])
return graph, edge_list
def nan_imputing(df):
"""
    There is only one feature with NaNs: donor age at diagnosis.
    We impute it with an iterative (MICE-style) imputer.
    :param df:
    :return:
    """
    # Impute missing data with IterativeImputer (MICE)
fancy_imputed = df
dummies = pd.get_dummies(df)
imputed = pd.DataFrame(data=IterativeImputer().fit_transform(dummies), columns=dummies.columns, index=dummies.index)
fancy_imputed.donor_age_at_diagnosis = imputed.donor_age_at_diagnosis
    fancy_imputed['donor_age_at_diagnosis'] = fancy_imputed['donor_age_at_diagnosis'].astype(int)
return fancy_imputed
def preprocessing_without_split(X):
    # this function is only meant for data analysis
X['donor_sex'] = X['donor_sex'].str.replace('female','1')
X['donor_sex'] = X['donor_sex'].str.replace('male','0')
X['female'] = pd.to_numeric(X['donor_sex'])
X = X.drop('donor_sex',axis=1)
# X['number_of_breaks'] = X['DUP'] + X['DEL'] + X['TRA'] + X['h2hINV'] + X['t2tINV']
for column in X.columns:
if 'chr' in column:
X['proportion_' + column] = 0
X[['proportion_' + column]] = np.true_divide(np.float32(X[[column]]),
np.float32(X[['number_of_breaks']]))
if 'DUP' in column or 'DEL' in column or 'TRA' in column or 'h2hINV' in column or 't2tINV' in column:
X['proportion_' + column] = 0
X[['proportion_' + column]] = np.true_divide(np.float32(X[[column]]),
np.float32(X[['number_of_breaks']]))
X = nan_imputing(X)
X = pd.get_dummies(X,columns=['tumor_stage1', 'tumor_stage2'])
return X
def preprocessing(df,hist1=True):
if hist1:
y = df.pop('histology_tier1')
X = df.drop('histology_tier2', axis=1)
else:
y = df.pop('histology_tier2')
X = df.drop('histology_tier1', axis=1)
X['donor_sex'] = X['donor_sex'].str.replace('female','1')
X['donor_sex'] = X['donor_sex'].str.replace('male','0')
X['female'] = pd.to_numeric(X['donor_sex'])
X = X.drop('donor_sex',axis=1)
X_train, X_test, Y_train, Y_test = \
train_test_split(pd.get_dummies(X), y, stratify=y, test_size=.2)
X_train = nan_imputing(X_train)
X_test = nan_imputing(X_test)
for column in X_train.columns:
if 'chr' in column:
X_train['proportion_' + column] = 0
X_train[['proportion_' + column]] = np.true_divide(np.float32(X_train[[column]]),
np.float32(X_train[['number_of_breaks']]))
X_test['proportion_' + column] = 0
X_test[['proportion_' + column]] = np.true_divide(np.float32(X_test[[column]]),
np.float32(X_test[['number_of_breaks']]))
if 'DUP' in column or 'DEL' in column or 'TRA' in column or 'h2hINV' in column or 't2tINV' in column:
X_train['proportion_' + column] = 0
X_train[['proportion_' + column]] = np.true_divide(np.float32(X_train[[column]]),
np.float32(X_train[['number_of_breaks']]))
X_test['proportion_' + column] = 0
X_test[['proportion_' + column]] = np.true_divide(np.float32(X_test[[column]]),
np.float32(X_test[['number_of_breaks']]))
return X_train, Y_train, X_test, Y_test
def generate_one_vs_all_datasets(Y, class_name):
to_replace = [c for c in labels if c != class_name]
Y_class = Y.replace(to_replace=to_replace, value='OTHER')
return Y_class
def generate_dataset(name, split=True, hist1=True):
"""
    Slow, but you only need to run it once. Graph-derived features added include:
connected_components
connected_components_max_size
"""
    print('Generating csv...')
# load the metadata
metadata = pd.read_csv(METADATA_PATH)
metadata = metadata.set_index('sampleID')
# load the patient ids and remove the ones that don't have metadata.
patients = os.listdir(PATIENTS_PATH)
patients = [p.replace('.vcf.tsv','') for p in patients if p in list(metadata.index)]
# The initial dataset is the metadata one.
dataset = metadata
for i, patient in enumerate(metadata.index):
        # Generate the translocation graph of the patient and the edge_list
g, edge_list = generateTRAGraph(patient=patient)
dataset.loc[patient, 'connected_components'] = len(list(nx.connected_component_subgraphs(g)))
# add the max of the number of vertex of the connected components of the graph
if len(list(nx.connected_component_subgraphs(g))) > 0:
dataset.loc[patient, 'connected_components_max_size'] = np.max(
[len(list(component.nodes())) for component in nx.connected_component_subgraphs(g)])
else:
dataset.loc[patient, 'connected_components_max_size'] = 0
# add the translocations
for edge in edge_list:
edge = edge.split(' ')
if edge[0] in ['X', 'Y'] and edge[1] in ['X','Y']:
edge_column = '(' + 'X' + ',' + 'Y' + ')'
elif edge[0] in ['X', 'Y']:
edge_column = '(' + edge[1] + ',' + edge[0] + ')'
elif edge[1] in ['X', 'Y']:
edge_column = '(' + edge[0] + ',' + edge[1] + ')'
elif int(edge[0]) < int(edge[1]):
edge_column = '(' + edge[0] + ',' + edge[1] + ')'
else:
edge_column = '(' + edge[1] + ',' + edge[0] + ')'
edge_weight = int(edge[2])
dataset.loc[patient, edge_column] = edge_weight
# now we load the breaks
patient_path = PATIENTS_PATH + '/'+ patient + '.vcf.tsv'
patient_breaks = pd.read_csv(patient_path, sep='\t', index_col=None)
# load the chromosomes as strings
patient_breaks['chrom2'] = patient_breaks['chrom2'].map(str)
# generate a crosstab of the svclass with the chromosomes and add this info to the dataset
ct = pd.crosstab(patient_breaks['chrom2'], patient_breaks['svclass'])
ct.index = ct.index.map(str)
for chrom in ct.index:
for svc in ct.columns:
dataset.loc[patient, svc + '_' + str(chrom)]= ct.loc[chrom, svc]
# add the number of breaks
number_of_breaks = len(patient_breaks)
dataset.loc[patient, 'number_of_breaks'] = number_of_breaks
        # Count how many times each chromosome appears among the breaks.
contained_chromosomes = patient_breaks[['#chrom1', 'chrom2']].apply(pd.Series.value_counts)
contained_chromosomes = contained_chromosomes.fillna(0)
contained_chromosomes[['#chrom1', 'chrom2']] = contained_chromosomes[['#chrom1', 'chrom2']].astype(int)
contained_chromosomes['chromosome'] = contained_chromosomes.index
contained_chromosomes['count'] = contained_chromosomes['#chrom1'] + contained_chromosomes['chrom2']
# Then saves it on the chromosome feature.
for chrom in contained_chromosomes.index:
dataset.loc[patient, 'chr_' + str(chrom)] = contained_chromosomes.loc[chrom, 'count']
# Counts how many breaks of each class there are on the breaks and saves it.
count_svclass = patient_breaks[['svclass', ]].apply(pd.Series.value_counts)
for svclass in count_svclass.index:
dataset.loc[patient, svclass] = count_svclass.loc[svclass, 'svclass']
# fill with zeros the false nans generated now
dataset.loc[:, dataset.columns != 'donor_age_at_diagnosis'] = dataset.loc[:, dataset.columns != 'donor_age_at_diagnosis'].fillna(0)
if split:
X_train, Y_train, X_test, Y_test = preprocessing(dataset, hist1)
# and save
X_train.to_csv(OUTPUT_PATH + '/' + name + '_train.csv')
Y_train.to_csv(OUTPUT_PATH + '/' + name + '_train_target.csv')
X_test.to_csv(OUTPUT_PATH + '/' + name + '_test.csv')
Y_test.to_csv(OUTPUT_PATH + '/' + name + '_test_target.csv')
return X_train, Y_train, X_test, Y_test
else:
dataset = preprocessing_without_split(dataset)
dataset.to_csv(OUTPUT_PATH +'/'+ name + '.csv')
return dataset
init = time.time()
name = 'dataset'
data = generate_dataset(name,split=True)
print('Total time:', timedelta(seconds=time.time() - init))
data[0]
def load_data(name):
# todo reformat
try:
X_train =pd.read_csv(OUTPUT_PATH + '/' + name + '_train.csv',index_col=0)
Y_train=pd.read_csv(OUTPUT_PATH + '/' + name + '_train_target.csv',index_col=0, names = ['SampleID','histology'])
X_test=pd.read_csv(OUTPUT_PATH + '/' + name + '_test.csv',index_col=0)
Y_test=pd.read_csv(OUTPUT_PATH + '/' + name + '_test_target.csv',index_col=0,names = ['SampleID','histology'])
        print('Loaded')
    except Exception as e:
        print('Loading failed:', e)
return
return X_train, Y_train, X_test, Y_test
X_train, Y_train, X_test, Y_test = load_data('dataset')
X_train.head()
# Compare the columns of the freshly generated dataset with the reloaded training set
a = set(data[0].columns)
b = set(X_train.columns)
a.difference(b)
b.difference(a)
###Output
_____no_output_____ |
.ipynb_checkpoints/2-differentiable-quantum-computing-checkpoint.ipynb | ###Markdown
Differentiable quantum computing with PennyLaneIn this tutorial we will:* learn step-by-step how quantum computations are implemented in PennyLane,* understand parameter-dependent quantum computations ("variational circuits"), * build our first quantum machine learning model, and* compute its gradient.We need the following imports:
###Code
import pennylane as qml
from pennylane import numpy as np
###Output
_____no_output_____
###Markdown
1. Quantum nodes In PennyLane, a *quantum node* is a computational unit that involves the construction, evaluation, pre- and postprocessing of quantum computations.A quantum node consists of a *quantum function* that defines a circuit, as well as a *device* on which it is run. There is a growing [device ecosystem](https://pennylane.ai/plugins.html) which allows you to change only one line of code to dispatch your quantum computation to local simulators, remote simulators and remote hardware from different vendors.Here we will use the built-in `default.qubit` device.
###Code
dev = qml.device('default.qubit', wires=2)
###Output
_____no_output_____
###Markdown
To combine the device with a quantum function to a quantum node we can use the `qml.qnode` decorator. The function can then be evaluated as if it was any other python function. Internally, it will construct a circuit and run it on the device.
###Code
@qml.qnode(dev)
def circuit():
qml.Hadamard(wires=0)
return qml.probs(wires=[0, 1])
circuit()
###Output
_____no_output_____
###Markdown
2. Building quantum circuits The initial stateThe initial state has 100% probability to be measured in the "0..0" configuration. Let's see how we can verify this with PennyLane.
###Code
@qml.qnode(dev)
def circuit():
return qml.probs(wires=[0, 1])
circuit()
###Output
_____no_output_____
###Markdown
The internal state vector that we use to mathematically keep track of probabilities is complex-valued. Since `default.qubit` is a simulator we can have a look at the state, for example by checking the device's `state` attribute.
###Code
dev.state
###Output
_____no_output_____
###Markdown
 Unitary evolutionsQuantum circuits are represented by unitary matrices. We can evolve the initial state by an arbitrary unitary matrix as follows:
###Code
s = 1/np.sqrt(2)
U = np.array([[0., -s, 0., s],
[ s, 0., -s, 0.],
[ s, 0., s, 0.],
[0., -s, 0., -s]])
@qml.qnode(dev)
def circuit():
qml.QubitUnitary(U, wires=[0, 1])
return qml.probs(wires=[0, 1])
circuit()
###Output
_____no_output_____
###Markdown
The internal quantum state changed.
###Code
dev.state
###Output
_____no_output_____
###Markdown
Measurements sample outcomes from the distributionThe most common measurement takes samples $-1, 1$ from the "Pauli-Z" observable. The samples indicate if the qubit was measured in state $| 0 \rangle$ or $| 1 \rangle$.
###Code
@qml.qnode(dev)
def circuit():
qml.QubitUnitary(U, wires=[0, 1])
return qml.sample(qml.PauliZ(wires=0)), qml.sample(qml.PauliZ(wires=1))
circuit()
###Output
_____no_output_____
###Markdown
The quantum state should be still the same as above.
###Code
dev.state
###Output
_____no_output_____
###Markdown
Computing expectation values When we want outputs of computations to be deterministic, we often interpret the expected measurement outcome as the result. This value is estimated by taking lots of samples and averaging over them.
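A quick sanity check of that statement (a sketch; it assumes the sampling circuit from the previous cells is still in scope and that the device draws a finite number of shots):

```python
samples = np.array(circuit())   # the sampling circuit returning PauliZ samples on both wires
samples.mean(axis=1)            # sample averages; they should be close to the expectation values below
```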
###Code
@qml.qnode(dev)
def circuit():
qml.QubitUnitary(U, wires=[0, 1])
return qml.expval(qml.PauliZ(wires=0)), qml.expval(qml.PauliZ(wires=1))
circuit()
###Output
_____no_output_____
###Markdown
Again, the quantum state should be the same as above.
###Code
dev.state
###Output
_____no_output_____
###Markdown
 Quantum circuits are decomposed into gatesQuantum circuits rarely consist of one large unitary (which quickly becomes intractably large as the number of qubits grows). Instead, they are composed of *quantum gates*.
###Code
@qml.qnode(dev)
def circuit():
qml.PauliX(wires=0)
qml.CNOT(wires=[0,1])
qml.Hadamard(wires=0)
qml.PauliZ(wires=1)
return qml.expval(qml.PauliZ(wires=0)), qml.expval(qml.PauliZ(wires=1))
circuit()
###Output
_____no_output_____
###Markdown
Some gates depend on "control" parametersTo train circuits, there is a special subset of gates which is of particular interest: the Pauli rotation gates. These "rotate" a special representation of the quantum state around a specific axis. The gates depend on a scalar parameter which is the angle of the rotation.
###Code
@qml.qnode(dev)
def circuit(w1, w2):
qml.RX(w1, wires=0)
qml.CNOT(wires=[0,1])
qml.RY(w2, wires=1)
return qml.expval(qml.PauliZ(wires=0)), qml.expval(qml.PauliZ(wires=1))
circuit(0.2, 1.3)
###Output
_____no_output_____
###Markdown
 The names `w1`, `w2` are already suggestive that these can be used like the trainable parameters of a classical machine learning model. But we could also call the control parameters `x1`, `x2` and encode data features into quantum states. 3. A full quantum machine learning model and its gradient Finally, we can use pre-coded routines or [templates](https://pennylane.readthedocs.io/en/stable/introduction/templates.html) to conveniently build a full quantum machine learning model that includes a data encoding part and a trainable part. Here, we will use the `AngleEmbedding` template to load the data, and `BasicEntanglerLayers` as the trainable part of the circuit.
###Code
@qml.qnode(dev)
def quantum_model(x, w):
qml.templates.AngleEmbedding(x, wires=[0, 1])
qml.templates.BasicEntanglerLayers(w, wires=[0, 1])
return qml.expval(qml.PauliZ(wires=0))
x = np.array([0.1, 0.2], requires_grad=False)
w = np.array([[-2.1, 1.2], [-1.4, -3.9], [0.5, 0.2]])
quantum_model(x, w)
###Output
_____no_output_____
###Markdown
We can draw the circuit.
###Code
print(quantum_model.draw())
###Output
0: ──RX(0.1)──RX(-2.1)──╭C──RX(-1.4)──╭C──RX(0.5)──╭C──┤ ⟨Z⟩
1: ──RX(0.2)──RX(1.2)───╰X──RX(-3.9)──╰X──RX(0.2)──╰X──┤
###Markdown
The best thing is that by using PennyLane, we can easily compute its gradient!
###Code
gradient_fn = qml.grad(quantum_model)
gradient_fn(x, w)
###Output
_____no_output_____ |
001-Jupyter/001-Tutorials/003-IPython-in-Depth/examples/Parallel Computing/Using MPI with IPython Parallel.ipynb | ###Markdown
Simple usage of a set of MPI engines This example assumes you've started a cluster of N engines (4 in this example) as partof an MPI world. Our documentation describes [how to create an MPI profile](http://ipython.org/ipython-doc/dev/parallel/parallel_process.htmlusing-ipcluster-in-mpiexec-mpirun-mode)and explains [basic MPI usage of the IPython cluster](http://ipython.org/ipython-doc/dev/parallel/parallel_mpi.html).For the simplest possible way to start 4 engines that belong to the same MPI world, you can run this in a terminal:ipcluster start --engines=MPI -n 4or start an MPI cluster from the cluster tab if you have one configured.Once the cluster is running, we can connect to it and open a view into it:
###Code
from ipyparallel import Client
c = Client()
view = c[:]
###Output
_____no_output_____
###Markdown
Let's define a simple function that gets the MPI rank from each engine.
###Code
@view.remote(block=True)
def mpi_rank():
from mpi4py import MPI
comm = MPI.COMM_WORLD
return comm.Get_rank()
###Output
_____no_output_____
###Code
mpi_rank()
###Output
_____no_output_____
###Markdown
To get a mapping of IPython IDs and MPI rank (these do not always match),
you can use the get_dict method on AsyncResults.
###Code
mpi_rank.block = False
ar = mpi_rank()
ar.get_dict()
###Output
_____no_output_____
###Markdown
With %%px cell magic, the next cell will actually execute *entirely on each engine*:
###Code
%%px
from mpi4py import MPI
comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()
if rank == 0:
    data = [(i+1)**2 for i in range(size)]
else:
    data = None
data = comm.scatter(data, root=0)
assert data == (rank+1)**2, 'data=%s, rank=%s' % (data, rank)
###Output
_____no_output_____
###Code
view['data']
###Output
_____no_output_____ |
python/notebooks/L2.ipynb | ###Markdown
 Introduction to Python for Health Data Analytics*Lectured by [Md. Atik Shariar Sammo](https://hdrobd.org/member/atik_shariar_sammo/) | Course & Materials Designed by [Jubayer Hossain](https://jhossain.me/)* Agenda- Part-1: Algorithmic Thinking- Part-2: Python Control Flow: Conditional Logic- Part-3: Python Control Flow: Loops Part-1: Algorithmic Thinking Topics - What is an algorithm?- Importance of algorithmic thinking- What is a Flowchart?- What is Pseudocode? What is an Algorithm? An algorithm is a finite **sequence** of steps to solve a particular problem. > “Algorithmic thinking is likely to cause the most disruptive paradigm shift in the sciences since quantum mechanics.” —Bernard Chazelle For example: Multiply two numbers - **Step-1:** Take two inputs(a,b) - **Step-2:** Multiply `a` and `b` and store in `sum`- **Step-3:** Print `sum` Importance of algorithms - To improve the efficiency of a computer program - Proper utilization of resources Algorithmic Thinking: The Ultimate Steps - **Step-1:** Understand the problem - **Step-2:** Formulate it - **Step-3:** Design an algorithm- **Step-4:** Implement it - **Step-5:** Run the code and solve the original problem Understanding the Problem- Understand the description of the problem - What are the input/output? - Do a few examples by hand - Think about special cases Formulate the Problem - Think about the data and the best way to represent them (graphs, strings, etc.)- What mathematical criterion corresponds to the desired output? Design an Algorithm - Is the formulated problem amenable to a certain algorithm design technique (greedy, divide-and-conquer etc.)- What data structures match Examples \begin{example}Write a Python Program to Add Two Integers \end{example} - Start - Inputs A, B(INT) - SUM = A + B - PRINT SUM - End
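As a quick illustration, the "multiply two numbers" algorithm above can be written directly in Python (a minimal sketch; the variable names are just for illustration):

```python
# Step-1: take two inputs (a, b)
a = int(input())
b = int(input())
# Step-2: multiply a and b and store the result
product = a * b
# Step-3: print the result
print(product)
```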
###Code
A = int(input())
B = int(input())
SUM = A+B
print(SUM)
###Output
_____no_output_____
###Markdown
 \begin{example}Write a Python Program to Compute the Average of Two Integers \end{example} - Start - INPUT A, B(INT) - AVG = (A+B)/2- PRINT AVG - End
###Code
X = int(input())
Y = int(input())
AVG = (X+Y)/2
print(AVG)
###Output
_____no_output_____
###Markdown
What is Flow Chart? A flowchart is a type of diagram that represents a workflow or process. **A flowchart can also be defined as a diagrammatic representation of an algorithm, a step-by-step approach to solving a task.** The flowchart shows the steps as boxes of various kinds, and their order by connecting the boxes with arrows. Sources: [wikipedia](https://en.wikipedia.org/wiki/Flowchart) Decision Making - **Step-1:** Start - **Step-2:** Input Marks - **Step-3:** Calculate Grade and Store in a Variable(Grade = M1+M2+M3+M4)- **Step-4:** Grade < 60? - **Step-4.1:** Yes, then print `FAIL` - **Step-4.2:** False, then print `PASS` - **Step-5:** End  What is Pseudocode? Pseudocode is an informal high-level description of the operating principle of a computer program or other algorithm. It uses the structural conventions of a normal programming language, but is intended for human reading rather than machine reading. Source: [wikipedia](https://en.wikipedia.org/wiki/Pseudocode) Pseudocode Example```pythonInput M1Input M2 Input M3 Input M4 Grade = (M1+M2+M3+M4)/4 if grade < 60 Print FAILelse Print Pass ``` Back to Top Part-2: Python Control Flow: Conditional Logic Topics- Conditional Execution Patterns - `if` statement - `else` statement - `elif` statement `if` Statement Syntax ```pythonif condition: Statement...1 Statement...2 Statement...n ```- The `if` keyword - A Condition(that is an expression that evaluates True or False) - A colon - Starting on the next line, an **indented** block of code(called if clause) Flowchart 
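The grade pseudocode above translates almost line for line into Python (a sketch; m1 to m4 are the four marks and the grade is their average, as in the pseudocode):

```python
m1 = int(input("Mark 1: "))
m2 = int(input("Mark 2: "))
m3 = int(input("Mark 3: "))
m4 = int(input("Mark 4: "))
grade = (m1 + m2 + m3 + m4) / 4
if grade < 60:
    print("FAIL")
else:
    print("PASS")
```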
###Code
# Example-1
x = 5
if x > 3:
print("Smaller")
print("Inside if")
print("Outside if")
# Example-2
if x < 3:
print("Smaller")
if x > 3:
print("Larger")
print("End")
###Output
_____no_output_____
###Markdown
`else` Statement Syntax ```pythonif condition: Body of if block else: Body of else block ```- The `else` keyword - A colon - Starting on the next line, an **indented** block of code(called else clause) Flowchart 
###Code
a = -10
if a > 0:
print("Positive")
else:
print("Negative")
a = 10
if a > 0:
print("Positive")
else:
print("Negative")
a = -3
if a >= 0:
print("Positive")
else:
print("Negative")
###Output
_____no_output_____
###Markdown
`elif` Statement Syntax ```pythonif test expression: Body of ifelif test expression: Body of elifelse: Body of else```- The `elif` keyword - A Condition(that is an expression that evaluates True or False) - A colon - Starting on the next line, an **indented** block of code(called elif clause) Flowchart 
###Code
bmi = 20
if bmi <= 18.5:
print("Unhealthy")
elif bmi >= 18.5 and bmi < 24.5:
print("Normal")
elif bmi >= 24.5 and bmi < 30:
print("Healthy")
else:
print("Obese")
# Even or Odd
A = int(input("Enter a number: "))
if A % 2 == 0:
print("Even")
else:
print("Odd")
20 % 2
11 % 2
25 % 2
###Output
_____no_output_____
###Markdown
Back to Top Part-3: Python Control Flow Loops Topics- `while` loop - `range()` function- `for` loop - `pass` statement- `break` statement - `continue` statement Why Loops?
###Code
print("Bangladesh!")
print("Bangladesh!")
print("Bangladesh!")
print("Bangladesh!")
print("Bangladesh!")
###Output
_____no_output_____
###Markdown
`while` loop Syntax ```pythonCounter while condition: Body of while ``` Flowchart 
###Code
# Sum of 1-100 natural numbers
total = 0
n = 1
while n <= 100:
total = total + n
n = n+ 1
print(total)
# Increment
i = 0
while i < 10:
i += 1
print(i)
# Decrement
i = 10
while i > 0:
i -= 1
print(i)
###Output
_____no_output_____
###Markdown
`range()` function
###Code
range(1,10)
range(1, 10, 2)
list(range(10)) # range(i) ==> i - 1
list(range(1, 10)) # range(i) ==> i - 1
list(range(1, 11)) # range(i) ==> i - 1
list(range(1, 11, 2)) # range(i) ==> i - 1
list(range(10, 1, -2)) # range(i) ==> i - 1
# 1400, 2000(included) and 2 step
list(range(1400, 2001, 2))
###Output
_____no_output_____
###Markdown
`for` loop Syntax ```pythonfor var in sequence: Body of for ``` Flowchart
###Code
# List Iteration
li = [1, 2, 3]
for i in li:
print(i)
# String iteration
s = "Bangladesh"
for j in s:
print(j)
# for loop using range function: Increment
for n in range(1, 11):
print(n)
# for loop using range function: Decrement
for m in range(10, 0, -1):
print(m)
###Output
_____no_output_____
###Markdown
`break` statement 
###Code
# Example of break statement in while loop-1
j = 0
while j < 10:
j += 1
if j == 5:
break
print(j)
# Example of break statement in while loop-2
x = 0
while x < 100:
x += 1
if x == 5:
break
print(x)
# Example of break statement in for loop-1
for y in range(1, 100):
if y == 5:
break
print(y)
# Example of break statement in for loop-2
for y in range(1, 100):
if y % 5 == 0:
break
print(y)
###Output
_____no_output_____
###Markdown
`continue` Statement in `for` and `while` loop 
###Code
# Example of continue satement in while loop
x = 0
while x < 10:
x += 1
if x == 5:
continue
print(x)
# Example of continue satement in for loop
for y in range(1, 10):
if y == 5:
continue
print(y)
###Output
_____no_output_____
###Markdown
`pass` statement
###Code
# pass statement in python control flow structure
for i in range(10):
pass
x = 2
if x < 0:
pass
# Odd-Even
n = int(input("Enter a number: "))
if n % 2 == 0:
print("Even")
else:
print("Odd")
# Negative-Positive
m = int(input("Enter a number: "))
if m >= 0:
print("Positive")
else:
print("Negative")
# greater-less
a = int(input())
b = int(input())
if a > b:
print(f'{a} is greater than {b}')
else:
print(f'{a} is less than {b}')
# 1-10 even
for i in range(1, 11):
if i % 2 == 0:
print(i)
# 1-10 odd
for i in range(1, 11):
if i % 2 != 0:
print(i)
# Positive
li = [-3, -4, - 5, 11, 14, 14, 5]
for n in li:
if n >= 0:
print(n)
# Negative
li = [-3, -4, - 5, 11, 14, 14, 5]
for n in li:
if n < 0:
print(n)
###Output
_____no_output_____ |
week2/NumpyNN (honor).ipynb | ###Markdown
Your very own neural networkIn this notebook we're going to build a neural network using naught but pure numpy and steel nerves. It's going to be fun, I promise!
###Code
import sys
sys.path.append("..")
import tqdm_utils
import download_utils
# use the preloaded keras datasets and models
download_utils.link_all_keras_resources()
from __future__ import print_function
import numpy as np
np.random.seed(42)
###Output
_____no_output_____
###Markdown
Here goes our main class: a layer that can do .forward() and .backward() passes.
###Code
class Layer:
"""
A building block. Each layer is capable of performing two things:
- Process input to get output: output = layer.forward(input)
- Propagate gradients through itself: grad_input = layer.backward(input, grad_output)
Some layers also have learnable parameters which they update during layer.backward.
"""
def __init__(self):
"""Here you can initialize layer parameters (if any) and auxiliary stuff."""
# A dummy layer does nothing
pass
def forward(self, input):
"""
Takes input data of shape [batch, input_units], returns output data [batch, output_units]
"""
# A dummy layer just returns whatever it gets as input.
return input
def backward(self, input, grad_output):
"""
Performs a backpropagation step through the layer, with respect to the given input.
To compute loss gradients w.r.t input, you need to apply chain rule (backprop):
d loss / d x = (d loss / d layer) * (d layer / d x)
Luckily, you already receive d loss / d layer as input, so you only need to multiply it by d layer / d x.
If your layer has parameters (e.g. dense layer), you also need to update them here using d loss / d layer
"""
# The gradient of a dummy layer is precisely grad_output, but we'll write it more explicitly
num_units = input.shape[1]
d_layer_d_input = np.eye(num_units)
return np.dot(grad_output, d_layer_d_input) # chain rule
###Output
_____no_output_____
###Markdown
 The road aheadWe're going to build a neural network that classifies MNIST digits. To do so, we'll need a few building blocks:- Dense layer - a fully-connected layer, $f(X)=W \cdot X + \vec{b}$- ReLU layer (or any other nonlinearity you want)- Loss function - crossentropy- Backprop algorithm - a stochastic gradient descent with backpropagated gradientsLet's approach them one at a time. Nonlinearity layerThis is the simplest layer you can get: it simply applies a nonlinearity to each element of its input.
###Code
class ReLU(Layer):
def __init__(self):
"""ReLU layer simply applies elementwise rectified linear unit to all inputs"""
pass
def forward(self, input):
"""Apply elementwise ReLU to [batch, input_units] matrix"""
        # elementwise rectified linear unit: max(0, x) for every entry of the matrix
        return np.maximum(0, input)
def backward(self, input, grad_output):
"""Compute gradient of loss w.r.t. ReLU input"""
relu_grad = input > 0
return grad_output*relu_grad
# some tests
from util import eval_numerical_gradient
x = np.linspace(-1,1,10*32).reshape([10,32])
l = ReLU()
grads = l.backward(x,np.ones([10,32])/(32*10))
numeric_grads = eval_numerical_gradient(lambda x: l.forward(x).mean(), x=x)
assert np.allclose(grads, numeric_grads, rtol=1e-3, atol=0),\
"gradient returned by your layer does not match the numerically computed gradient"
###Output
_____no_output_____
###Markdown
Instant primer: lambda functionsIn python, you can define functions in one line using the `lambda` syntax: `lambda param1, param2: expression`For example: `f = lambda x, y: x+y` is equivalent to a normal function:```def f(x,y): return x+y```For more information, click [here](http://www.secnetix.de/olli/Python/lambda_functions.hawk). Dense layerNow let's build something more complicated. Unlike nonlinearity, a dense layer actually has something to learn.A dense layer applies affine transformation. In a vectorized form, it can be described as:$$f(X)= W \cdot X + \vec b $$Where * X is an object-feature matrix of shape [batch_size, num_features],* W is a weight matrix [num_features, num_outputs] * and b is a vector of num_outputs biases.Both W and b are initialized during layer creation and updated each time backward is called.
###Code
class Dense(Layer):
def __init__(self, input_units, output_units, learning_rate=0.1):
"""
A dense layer is a layer which performs a learned affine transformation:
f(x) = <W*x> + b
"""
self.learning_rate = learning_rate
# initialize weights with small random numbers. We use normal initialization,
# but surely there is something better. Try this once you got it working: http://bit.ly/2vTlmaJ
self.weights = np.random.randn(input_units, output_units)*0.01
self.biases = np.zeros(output_units)
def forward(self,input):
"""
Perform an affine transformation:
f(x) = <W*x> + b
input shape: [batch, input_units]
output shape: [batch, output units]
"""
        # affine transformation: [batch, input_units] @ [input_units, output_units] + biases
        return np.dot(input, self.weights) + self.biases
def backward(self,input,grad_output):
# compute d f / d x = d f / d dense * d dense / d x
# where d dense/ d x = weights transposed
        grad_input = np.dot(grad_output, self.weights.T)
        # compute gradient w.r.t. weights and biases (summed over the batch;
        # grad_output already carries the 1/batch_size factor)
        grad_weights = np.dot(input.T, grad_output)
        grad_biases = grad_output.sum(axis=0)
assert grad_weights.shape == self.weights.shape and grad_biases.shape == self.biases.shape
# Here we perform a stochastic gradient descent step.
# Later on, you can try replacing that with something better.
self.weights = self.weights - self.learning_rate * grad_weights
self.biases = self.biases - self.learning_rate * grad_biases
return grad_input
###Output
_____no_output_____
###Markdown
Testing the dense layerHere we have a few tests to make sure your dense layer works properly. You can just run them, get 3 "well done"s and forget they ever existed.... or not get 3 "well done"s and go fix stuff. If that is the case, here are some tips for you:* Make sure you compute gradients for W and b as __sum of gradients over batch__, not mean over gradients. Grad_output is already divided by batch size.* If you're debugging, try saving gradients in class fields, like "self.grad_w = grad_w" or print first 3-5 weights. This helps debugging.* If nothing else helps, try ignoring tests and proceed to network training. If it trains alright, you may be off by something that does not affect network training.
###Code
l = Dense(128, 150)
assert -0.05 < l.weights.mean() < 0.05 and 1e-3 < l.weights.std() < 1e-1,\
"The initial weights must have zero mean and small variance. "\
"If you know what you're doing, remove this assertion."
assert -0.05 < l.biases.mean() < 0.05, "Biases must be zero mean. Ignore if you have a reason to do otherwise."
# To test the outputs, we explicitly set weights with fixed values. DO NOT DO THAT IN ACTUAL NETWORK!
l = Dense(3,4)
x = np.linspace(-1,1,2*3).reshape([2,3])
l.weights = np.linspace(-1,1,3*4).reshape([3,4])
l.biases = np.linspace(-1,1,4)
assert np.allclose(l.forward(x),np.array([[ 0.07272727, 0.41212121, 0.75151515, 1.09090909],
[-0.90909091, 0.08484848, 1.07878788, 2.07272727]]))
print("Well done!")
# To test the grads, we use gradients obtained via finite differences
from util import eval_numerical_gradient
x = np.linspace(-1,1,10*32).reshape([10,32])
l = Dense(32,64,learning_rate=0)
numeric_grads = eval_numerical_gradient(lambda x: l.forward(x).sum(),x)
grads = l.backward(x,np.ones([10,64]))
assert np.allclose(grads,numeric_grads,rtol=1e-3,atol=0), "input gradient does not match numeric grad"
print("Well done!")
#test gradients w.r.t. params
def compute_out_given_wb(w,b):
l = Dense(32,64,learning_rate=1)
l.weights = np.array(w)
l.biases = np.array(b)
x = np.linspace(-1,1,10*32).reshape([10,32])
return l.forward(x)
def compute_grad_by_params(w,b):
l = Dense(32,64,learning_rate=1)
l.weights = np.array(w)
l.biases = np.array(b)
x = np.linspace(-1,1,10*32).reshape([10,32])
l.backward(x,np.ones([10,64]) / 10.)
return w - l.weights, b - l.biases
w,b = np.random.randn(32,64), np.linspace(-1,1,64)
numeric_dw = eval_numerical_gradient(lambda w: compute_out_given_wb(w,b).mean(0).sum(),w )
numeric_db = eval_numerical_gradient(lambda b: compute_out_given_wb(w,b).mean(0).sum(),b )
grad_w,grad_b = compute_grad_by_params(w,b)
assert np.allclose(numeric_dw,grad_w,rtol=1e-3,atol=0), "weight gradient does not match numeric weight gradient"
assert np.allclose(numeric_db,grad_b,rtol=1e-3,atol=0), "weight gradient does not match numeric weight gradient"
print("Well done!")
###Output
Well done!
###Markdown
 The loss functionSince we want to predict probabilities, it would be logical for us to define a softmax nonlinearity on top of our network and compute loss given predicted probabilities. However, there is a better way to do so.If you write down the expression for crossentropy as a function of softmax logits (a), you'll see:$$ loss = - log \space {e^{a_{correct}} \over {\underset i \sum e^{a_i} } } $$If you take a closer look, you'll see that it can be rewritten as:$$ loss = - a_{correct} + log {\underset i \sum e^{a_i} } $$It's called Log-softmax and it's better than naive log(softmax(a)) in all aspects:* Better numerical stability* Easier to get derivative right* Marginally faster to computeSo why not just use log-softmax throughout our computation and never actually bother to estimate probabilities.Here you are! We've defined both loss functions for you so that you can focus on the neural network part.
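A quick numerical illustration of the stability point (a sketch, not part of the assignment): with one very large logit the naive formula overflows, while the shifted log-sum-exp stays finite.

```python
a = np.array([1000., 0.])                                   # logits with one huge entry
naive = np.log(np.exp(a) / np.exp(a).sum())                 # overflow: array([ nan, -inf])
logsumexp = a.max() + np.log(np.sum(np.exp(a - a.max())))   # stable log-sum-exp
stable = a - logsumexp                                      # array([    0., -1000.]): finite and correct
```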
###Code
def softmax_crossentropy_with_logits(logits,reference_answers):
"""Compute crossentropy from logits[batch,n_classes] and ids of correct answers"""
logits_for_answers = logits[np.arange(len(logits)),reference_answers]
xentropy = - logits_for_answers + np.log(np.sum(np.exp(logits),axis=-1))
return xentropy
def grad_softmax_crossentropy_with_logits(logits,reference_answers):
"""Compute crossentropy gradient from logits[batch,n_classes] and ids of correct answers"""
ones_for_answers = np.zeros_like(logits)
ones_for_answers[np.arange(len(logits)),reference_answers] = 1
softmax = np.exp(logits) / np.exp(logits).sum(axis=-1,keepdims=True)
return (- ones_for_answers + softmax) / logits.shape[0]
logits = np.linspace(-1,1,500).reshape([50,10])
answers = np.arange(50)%10
softmax_crossentropy_with_logits(logits,answers)
grads = grad_softmax_crossentropy_with_logits(logits,answers)
numeric_grads = eval_numerical_gradient(lambda l: softmax_crossentropy_with_logits(l,answers).mean(),logits)
assert np.allclose(numeric_grads,grads,rtol=1e-3,atol=0), "The reference implementation has just failed. Someone has just changed the rules of math."
###Output
_____no_output_____
###Markdown
Full networkNow let's combine what we've just built into a working neural network. As we announced, we're gonna use this monster to classify handwritten digits, so let's get them loaded.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
from preprocessed_mnist import load_dataset
X_train, y_train, X_val, y_val, X_test, y_test = load_dataset(flatten=True)
plt.figure(figsize=[6,6])
for i in range(4):
plt.subplot(2,2,i+1)
plt.title("Label: %i"%y_train[i])
plt.imshow(X_train[i].reshape([28,28]),cmap='gray');
###Output
Using TensorFlow backend.
###Markdown
We'll define network as a list of layers, each applied on top of previous one. In this setting, computing predictions and training becomes trivial.
###Code
network = []
network.append(Dense(X_train.shape[1],100))
network.append(ReLU())
network.append(Dense(100,200))
network.append(ReLU())
network.append(Dense(200,10))
def forward(network, X):
"""
Compute activations of all network layers by applying them sequentially.
Return a list of activations for each layer.
Make sure last activation corresponds to network logits.
"""
activations = []
input = X
    # apply each layer to the output of the previous one
    for layer in network:
        activations.append(layer.forward(input))
        input = activations[-1]
assert len(activations) == len(network)
return activations
def predict(network,X):
"""
Compute network predictions.
"""
logits = forward(network,X)[-1]
return logits.argmax(axis=-1)
def train(network,X,y):
"""
Train your network on a given batch of X and y.
You first need to run forward to get all layer activations.
Then you can run layer.backward going from last to first layer.
After you called backward for all layers, all Dense layers have already made one gradient step.
"""
# Get the layer activations
layer_activations = forward(network,X)
layer_inputs = [X]+layer_activations #layer_input[i] is an input for network[i]
logits = layer_activations[-1]
# Compute the loss and the initial gradient
loss = softmax_crossentropy_with_logits(logits,y)
loss_grad = grad_softmax_crossentropy_with_logits(logits,y)
    # propagate gradients backwards through the network, from the last layer to the first;
    # each Dense layer also takes its gradient step inside backward()
    for layer_index in range(len(network))[::-1]:
        loss_grad = network[layer_index].backward(layer_inputs[layer_index], loss_grad)
return np.mean(loss)
###Output
_____no_output_____
###Markdown
 Instead of tests, we provide you with a training loop that prints training and validation accuracies on every epoch. If your implementations of forward and backward are correct, your accuracy should grow from 90~93% to >97% with the default network. Training loopAs usual, we split data into minibatches, feed each such minibatch into the network and update weights.
###Code
def iterate_minibatches(inputs, targets, batchsize, shuffle=False):
assert len(inputs) == len(targets)
if shuffle:
indices = np.random.permutation(len(inputs))
for start_idx in tqdm_utils.tqdm_notebook_failsafe(range(0, len(inputs) - batchsize + 1, batchsize)):
if shuffle:
excerpt = indices[start_idx:start_idx + batchsize]
else:
excerpt = slice(start_idx, start_idx + batchsize)
yield inputs[excerpt], targets[excerpt]
from IPython.display import clear_output
train_log = []
val_log = []
for epoch in range(25):
for x_batch,y_batch in iterate_minibatches(X_train,y_train,batchsize=32,shuffle=True):
train(network,x_batch,y_batch)
train_log.append(np.mean(predict(network,X_train)==y_train))
val_log.append(np.mean(predict(network,X_val)==y_val))
clear_output()
print("Epoch",epoch)
print("Train accuracy:",train_log[-1])
print("Val accuracy:",val_log[-1])
plt.plot(train_log,label='train accuracy')
plt.plot(val_log,label='val accuracy')
plt.legend(loc='best')
plt.grid()
plt.show()
###Output
_____no_output_____ |
notebooks/113. grav and mag fetch.ipynb | ###Markdown
Table of Contents1 map2loop: Fetching grav/mag grids (modified from example in geophys_utils by Alex Ip) map2loop: Fetching grav/mag grids (modified from example in geophys_utils by Alex Ip)https://github.com/GeoscienceAustralia/geophys_utils
###Code
%matplotlib inline
import os
import netCDF4
import numpy as np
from geophys_utils import NetCDFGridUtils
from geophys_utils import get_netcdf_edge_points, points2convex_hull
import matplotlib.pyplot as plt
minlong=117 # should back calc from metre system
maxlong=118
minlat=-23
maxlat=-22
# Open mag tmi vrtp netCDF4 Dataset
mnetcdf_path = "http://dapds00.nci.org.au/thredds/dodsC/rr2/national_geophysical_compilations/magmap_v6_2015_VRTP/magmap_v6_2015_VRTP.nc"
mnetcdf_dataset = netCDF4.Dataset(mnetcdf_path, 'r')
print(type(mnetcdf_dataset))
max_bytes = 500000000
mnetcdf_grid_utils = NetCDFGridUtils(mnetcdf_dataset)
#netcdf_grid_utils.__dict__
mnetcdf_dataset.variables.keys()
lats = mnetcdf_dataset.variables['lat'][:]
lons = mnetcdf_dataset.variables['lon'][:]
latselect = np.logical_and(lats>minlat,lats<maxlat)
lonselect = np.logical_and(lons>minlong,lons<maxlong)
mdata = mnetcdf_dataset.variables['mag_tmi_rtp_anomaly'][latselect,lonselect]
print(mdata.shape)
# Open grav netCDF4 Dataset
gnetcdf_path = "http://dapds00.nci.org.au/thredds/dodsC/rr2/national_geophysical_compilations/IR_gravity_anomaly_Australia_V1/IR_gravity_anomaly_Australia_V1.nc"
gnetcdf_dataset = netCDF4.Dataset(gnetcdf_path, 'r')
print(type(gnetcdf_dataset))
max_bytes = 500000000
gnetcdf_grid_utils = NetCDFGridUtils(gnetcdf_dataset)
#netcdf_grid_utils.__dict__
gnetcdf_dataset.variables.keys()
lats = gnetcdf_dataset.variables['lat'][:]
lons = gnetcdf_dataset.variables['lon'][:]
latselect = np.logical_and(lats>minlat,lats<maxlat)
lonselect = np.logical_and(lons>minlong,lons<maxlong)
gdata = gnetcdf_dataset.variables['grav_ir_anomaly'][latselect,lonselect]
print(gdata.shape)
fig, ax = plt.subplots(1,2,figsize=(13, 13))
fig.tight_layout()
ax[0].title.set_text('Mag TMI vRTP' )
ax[1].title.set_text('Grav')
ax[0].imshow(mdata[::],cmap='YlGnBu');
ax[1].imshow(gdata[::],cmap='gist_rainbow');
plt.show()
###Output
_____no_output_____ |
Semana 6/.ipynb_checkpoints/pandas-checkpoint.ipynb | ###Markdown
 Pandas (Part 1) Pandas is the most popular Python library for data analysis. It specializes in working with certain types of data. The most important ones are:- Tabular data, such as dataframes- Series- Data matrices- Time series The most important Pandas objects are 2: - Series- Dataframes (These classes will focus on working with dataframes). This library is a _must_ when learning data analysis, machine learning, etc., for several reasons:- It integrates seamlessly with libraries such as scikit-learn (machine learning), matplotlib, seaborn and altair (data visualization), statsmodels (ready-to-use statistical models), and much more. - Most of the data-analysis community, which relies on forums and the like for support, uses Pandas. - Pandas is very broad and has a lot of functionality. 1. Import pandas
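Although these classes focus on dataframes, a minimal sketch of the other core object, the Series, helps to see the difference (a Series is one-dimensional labelled data, a dataframe is two-dimensional):

```python
import pandas as pd

edades = pd.Series([32, 21], index=['Cristina', 'Diego'])  # one-dimensional, labelled data
edades['Diego']                                            # -> 21
```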
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
 2. Make our dataframe available- We will almost ALWAYS work with dataframes. But what is a dataframe? - It is a two-dimensional data structure (rows and columns). - Each row is an observation, and each column a variable (attribute, feature, etc.). - A dataframe is used to represent databases. 2 ways: create it ourselves or read data. A simple way to create dataframes:
###Code
nombres_columnas = ['nombre', 'edad']
obs1 = ['Cristina', 32]
obs2 = ['Diego', 21]
data = [obs1, obs2]
ejemplo_df = pd.DataFrame(data = data, columns = nombres_columnas)
ejemplo_df
###Output
_____no_output_____
###Markdown
 Another way to do it
###Code
df = pd.DataFrame()
df['nombre'] = ['Cristina', 'Diego']
df['edad'] = [32 , 21]
df
###Output
_____no_output_____
###Markdown
 Reading the information: The files we have can come in different formats: dta, sav, json, csv. Pandas can read any of these files.
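For the formats not shown below the pattern is the same; a sketch with hypothetical file names ('datos.csv' and 'encuesta.dta' are placeholders):

```python
df_csv = pd.read_csv('datos.csv')       # comma-separated values
df_dta = pd.read_stata('encuesta.dta')  # Stata .dta files
```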
###Code
locacion_datos = "https://otorongo.club/2021/json/ingresos/"
cong = pd.read_json(locacion_datos) ## This has the same effect as having saved the file on your PC
cong ## Gives us a representation of the cong dataframe
cong.head() ## Gives us the first observations of the dataframe
cong.tail() ## Gives us the last observations of the dataframe.
## Opening an SPSS file is very similar to opening a dta file
enc_lgbt = pd.read_spss("/Users/ccsuehara/Downloads/602-Modulo1287.sav")
enc_lgbt.head(5)
###Output
_____no_output_____
###Markdown
 Exploring the database we have. The first step of our analysis always has to be understanding the data. To do that, we follow these steps:
###Code
cong.info() ## We get the column names, whether there are missing values, and the type of each column
cong.shape ### Helps us see the dimensions of our database.
### in this case: 3316 congresspeople and 10 variables
cong.columns ## Gives us an array with the names of the variables that make up the database
cong.describe() ## Gives the central measures (summary statistics) of the numeric variables
###Output
_____no_output_____
###Markdown
 How to select data. Sometimes we only need to understand a part of all our data. For example, we want to see the political party data:
###Code
cong.partido ## 1st way
cong['partido'] # 2nd way
## Sometimes we want to count the unique values of a single column
cong['partido'].value_counts()
### Stepping away from the congresspeople example:
pd.set_option('display.max_columns', 28, 'display.max_rows', 28)
## Sometimes we want to do a crosstab between 2 variables.
pd.crosstab(enc_lgbt['depa'], enc_lgbt['depa_nac'], normalize = "columns")
# Assign the result as a dataframe
###Output
_____no_output_____
###Markdown
 Selecting a group of columns. Sometimes we only want to see a group of columns
###Code
cong.columns
peque_lst = ['nombre', 'dni', 'partido']
cong[peque_lst]
###Output
_____no_output_____
###Markdown
 A dataframe is made up of several series (each column is a different series)
###Code
type(cong['nombre'])
type(cong['partido'])
###Output
_____no_output_____
###Markdown
 Selecting a group of rows. For this task, there are 2 ways to do it: using ```iloc``` or ```loc```. Using iloc: iloc locates the observations at the positional index we give it. Recalling our congresspeople database:
###Code
# grabbing a specific row
cong.iloc[0]
# Grabbing a range of rows
cong.iloc[0:3]
# Grabbing specific observations
cong.iloc[[10,13,3000]]
###Output
_____no_output_____
###Markdown
 Using ```loc```: ```loc``` is a way of locating observations based on labels, for both columns and rows. Let's re-index the congresspeople database:
###Code
cong_ind = cong.set_index('dni')
cong_ind.head(10)
cong_ind.iloc[0]
## cong_ind.loc[0] This raises an error! because there is no index = 0, since we set the DNIs as the index.
cong_ind.loc[27428781]
columnas = ["nombre", "partido", "total_ingreso"]
filas = range(0,6)
cong.loc[filas, columnas]
cong.loc[:, columnas]
###Output
_____no_output_____
###Markdown
 Filtering information. Many times we will want to keep a subset of the data that meets a certain condition. The conditions have to evaluate to a boolean. For example:
###Code
condicion = [True] * 4 + [False] * 3312
condicion2 = [True] * 5 + [False] * (len(cong) - 5)
cong[condicion]
cong.loc[condicion2]
cond = cong['total_ingreso'] > 0
cong[cond]
cond_1 = cong['total_ingreso'] > 0
cond_2 = cong['partido'] == "JUNTOS POR EL PERU"
cong[cond_1 & cond_2]
###Output
_____no_output_____ |
nb/supervised/supervisedML.ipynb | ###Markdown
 Supervised machine learning on the TCGA Breast Cancer setThis notebook can be run locally or on a remote cloud computer by clicking the badge below:[](https://mybinder.org/v2/gh/statisticalbiotechnology/cb2030/master?filepath=nb%2Fsupervised%2FsupervisedML.ipynb)We begin by reading in the TCGA Breast Cancer dataset, and calculate the significance of the measured genes being differentially expressed when comparing Progesterone Positive with Progesterone Negative cancers.
###Code
import pandas as pd
import numpy as np
from scipy.stats import ttest_ind
import sys
sys.path.append("..") # Read loacal modules for tcga access and qvalue calculations
import tcga_read as tcga
import qvalue
brca = tcga.get_expression_data("../../data/brca.tsv.gz", 'http://download.cbioportal.org/brca_tcga_pub2015.tar.gz',"data_RNA_Seq_v2_expression_median.txt")
brca_clin = tcga.get_clinical_data("../../data/brca_clin.tsv.gz", 'http://download.cbioportal.org/brca_tcga_pub2015.tar.gz',"data_clinical_sample.txt")
brca.dropna(axis=0, how='any', inplace=True)
brca = brca.loc[~(brca<=0.0).any(axis=1)]
brca = pd.DataFrame(data=np.log2(brca),index=brca.index,columns=brca.columns)
brca_clin.loc["PR"]= (brca_clin.loc["PR status by ihc"]!="Negative")
pr_bool = (brca_clin.loc["PR"] == True)
def get_significance_two_groups(row):
log_fold_change = row[pr_bool].mean() - row[~pr_bool].mean()
p = ttest_ind(row[pr_bool],row[~pr_bool],equal_var=False)[1]
return [p,-np.log10(p),log_fold_change]
pvalues = brca.apply(get_significance_two_groups,axis=1,result_type="expand")
pvalues.rename(columns = {list(pvalues)[0]: 'p', list(pvalues)[1]: '-log_p', list(pvalues)[2]: 'log_FC'}, inplace = True)
qvalues = qvalue.qvalues(pvalues)
###Output
_____no_output_____
###Markdown
 The overoptimistic investigatorWe begin with a case of supervised machine learning meant as a warning, as it illustrates the importance of separating training from testing data. Imagine a situation where we want to find the best combination of genes unrelated to a condition that are still telling of the condition. If that sounds like an impossibility, it is because it is impossible. However, there is nothing stopping us from trying. So first we select the 1000 genes which are the least differentially expressed when comparing PR positive with PR negative breast cancers.
###Code
last1k=brca.loc[qvalues.iloc[-1000:,:].index]
###Output
_____no_output_____
###Markdown
Subsequently we standardize the data, i.e. we assure a standard deviation of 1 and a mean of zero for every gene among our 1k genes.
###Code
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X = scaler.fit_transform(last1k.values.T) # Scale all gene expression values to stdv =1 and mean =0
y = 2*pr_bool.values.astype(int) - 1 # transform from bool to -1 and 1
###Output
_____no_output_____
###Markdown
 We are now ready to try to train a linear SVM for the task of predicting PR negatives from PR positives. We test the performance of our classifier on the training data.
###Code
from sklearn import svm
from sklearn.metrics import confusion_matrix
clf = svm.LinearSVC(C=1,max_iter=5000).fit(X, y) # Train a SVM
y_pred = clf.predict(X) # Predict labels for the give features
pd.DataFrame(data = confusion_matrix(y, y_pred),columns = ["predicted_PR-","predicted_PR+"],index=["actual_PR-","actualPR+"])
###Output
_____no_output_____
###Markdown
 Fantastic! The classifier manages to use junk data to perfectly separate our PR+ from PR- cancers. However, before we call NEJM, let's try to see if we can separate an *independent* test set in the same manner. We use the function train_test_split to divide the data into 60% training data and 40% test data.
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.4, random_state=0)
clf = svm.LinearSVC(C=1,max_iter=5000).fit(X_train, y_train) # Train an SVM
y_pred = clf.predict(X_test) # Predict labels for the give features
pd.DataFrame(data = confusion_matrix(y_test, y_pred),columns = ["predicted_PR-","predicted_PR+"],index=["actual_PR-","actualPR+"])
###Output
_____no_output_____
###Markdown
 In this setting, the classifier seems to have very little predictive power. The reason for the discrepancy between the two evaluations is that the large number of variables makes the predictor overfit the training data. In the first instance, we could not detect the problem as we were testing on the overfitted data. However, when holding out a separate test set, the predictor's weak performance was blatantly visible. A low dimensional classifierLet's now focus on an alternative setting, where we instead select a handful of genes that are among the most differentially expressed transcripts when comparing PR+ and PR-. How would we combine their expression values optimally? Again we begin by standardizing our features.
###Code
top6=brca.loc[qvalues.iloc[[1,2,5,6,9],:].index]
scaler = StandardScaler()
X = scaler.fit_transform(top6.values.T) # Scale all gene expression values to stdv =1 and mean =0
y = 2*pr_bool.values.astype(int) - 1 # transform from bool to -1 and 1
###Output
_____no_output_____
###Markdown
 We then set aside 30% of our cancers as a separate test set. The function $GridSearchCV$ uses cross validation (k=5) to select an optimal slack penalty $C$ from a vector of different choices.
###Code
from sklearn.model_selection import GridSearchCV
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=0)
param_grid = [{'C': [0.0001, 0.001, 0.1, 1, 10, 100, 1000]}]
clf = GridSearchCV(svm.LinearSVC(max_iter=10000000,class_weight="balanced"), param_grid, cv=5, scoring='accuracy')
clf.fit(X_train, y_train)
print("Best cross validation accuracy for the model: " + str(clf.best_params_))
y_pred = clf.predict(X_test)
pd.DataFrame(data = confusion_matrix(y_test, y_pred),columns = ["predicted_PR-","predicted_PR+"],index=["actual_PR-","actualPR+"])
###Output
Best cross validation accuracy for the model: {'C': 0.1}
###Markdown
 Given the choice of penalty $C=0.1$, we can now perform a cross validation (k=5) on the full data set. Here we will train five separate classifiers, one on each cross validation training set, and subsequently merge each such predictor's predictions into one combined result.
###Code
from sklearn.model_selection import StratifiedKFold
y_pred, y_real = np.array([]), np.array([])
skf = StratifiedKFold(n_splits=5)
for train_id, test_id in skf.split(X, y):
X_train, X_test, y_train, y_test = X[train_id,:], X[test_id,:], y[train_id],y[test_id]
clf = svm.LinearSVC(C=0.1,max_iter=100000).fit(X_train, y_train) # Train an SVM
y_pred_fold = clf.predict(X_test) # Predict labels for the give features
y_pred = np.concatenate([y_pred,y_pred_fold])
y_real = np.concatenate([y_real,y_test])
pd.DataFrame(data = confusion_matrix(y_real, y_pred),columns = ["predicted_PR-","predicted_PR+"],index=["actual_PR-","actualPR+"])
###Output
_____no_output_____ |
ipynbs/get-refseq-taxonomic-info.ipynb | ###Markdown
Retrieve NCBI Taxonomy Info For RefSeq GenomesIn `../refseq_masher/data/RefSeqSketches.msh`, each sketch ID has an NCBI Taxonomy UID as the 3rd element if you were to split the ID on `-` (dashes):
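For example, taking the first ID shown below and splitting it on dashes, the taxid sits at position 2 (0-based):

```python
sketch_id = 'refseq-AC-10090-PRJNA16113-SAMN03004379-GCF_000002165.2-.-Mus_musculus.fna'
sketch_id.split('-')[2]   # -> '10090', the NCBI Taxonomy UID
```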
###Code
!mash info -t ../refseq_masher/data/RefSeqSketches.msh | head
###Output
#Hashes Length ID Comment
400 2679921514 ./rcn/refseq-AC-10090-PRJNA16113-SAMN03004379-GCF_000002165.2-.-Mus_musculus.fna
400 16300 ./rcn/refseq-AC-10116-PRJNA12455-.-.-.-Rattus_norvegicus.fna
400 2559974438 ./rcn/refseq-AC-10116-PRJNA16219-.-GCF_000002265.2-.-Rattus_norvegicus.fna
400 30536 ./rcn/refseq-AC-10512-PRJNA15101-.-.-.-Canine_adenovirus_1.fna
400 31323 ./rcn/refseq-AC-10514-PRJNA15102-.-.-.-Canine_adenovirus_2.fna
400 35937 ./rcn/refseq-AC-10515-PRJNA15106-.-.-.-Human_adenovirus_2.fna
400 35514 ./rcn/refseq-AC-10519-PRJNA15114-.-.-.-Human_adenovirus_7.fna
400 34794 ./rcn/refseq-AC-10522-PRJNA15115-.-.-.-Human_adenovirus_35.fna
400 36001 ./rcn/refseq-AC-10533-PRJNA15113-.-.-.-Human_adenovirus_1.fna
###Markdown
Output tabular Mash info output to a file
###Code
!mash info -t ../refseq_masher/data/RefSeqSketches.msh > RefSeqSketches.msh-info.tab
###Output
_____no_output_____
###Markdown
Read file into Pandas DataFrame
###Code
import pandas as pd
df_mash_info = pd.read_table('RefSeqSketches.msh-info.tab')
df_mash_info.head()
###Output
_____no_output_____
###Markdown
Split the Mash info IDs on `-` (dashes)
###Code
import re
ids = []
for mash_id in df_mash_info['ID']:
r = re.search(r'\./rcn/(.+)', mash_id)
if r:
sp = r.group(1).split('-')
ids.append(sp)
###Output
_____no_output_____
###Markdown
Mash info IDs into a Pandas DataFrame
###Code
df_split_ids = pd.DataFrame(ids)
df_split_ids.head()
###Output
_____no_output_____
###Markdown
Assign more meaningful column names
###Code
columns = """
0
prefix
taxid
bioproject
biosample
assembly_accession
plasmid
fna_filename
""".strip().split('\n')
df_split_ids.columns = columns
df_split_ids.head()
###Output
_____no_output_____
###Markdown
The number of unique NCBI Taxonomy UIDs
###Code
df_split_ids.taxid.unique().size
###Output
_____no_output_____
###Markdown
Fetch NCBI Taxonomy info using BioPython Entrez efetchFetch the Taxonomy info from NCBI for a list of NCBI Taxonomy UIDs.
###Code
from Bio import Entrez
Entrez.email = '[email protected]'
def get_tax_lineage(taxids, filter_no_rank=True):
assert isinstance(taxids, list)
h = Entrez.efetch(db='Taxonomy', id=taxids, retmode='xml')
recs = Entrez.read(h)
dfs = []
for taxid, rec in zip(taxids, recs):
lineage_info = rec['LineageEx']
lineage_info.append({'TaxId': rec['TaxId'],
'ScientificName': rec['ScientificName'],
'Rank': rec['Rank']})
dflineage = pd.DataFrame(lineage_info)
if filter_no_rank:
dflineage = dflineage[dflineage['Rank'] != 'no rank']
dflineage = dflineage[~dflineage.duplicated(keep='first')]
dflineage['query_taxid'] = taxid
dfs.append(dflineage)
return pd.concat(dfs), recs
###Output
_____no_output_____
###Markdown
Here's what the output looks like for a single taxid, 562, corresponding to *E. coli*
###Code
dftest, recs_test = get_tax_lineage(['562'], filter_no_rank=False)
dftest
len(recs_test)
recs_test
###Output
_____no_output_____
###Markdown
Getting all taxonomic info for all 42367 taxidsThe taxonomy info was fetched in batches of 1000 (max taxids you can query at once with the API is 10000).However there are a few missing taxids that needed to be accounted for:```153997816082751609188```Unfortunately, the response from NCBI API doesn't let you know that your taxid was deleted or changed, so it was something that had to be manually tracked down.*Don't execute the following cell*
###Code
# do not execute this cell; it takes a long time
# the list of query taxids (assumed here; derived from the split-ID table built above)
taxids = list(df_split_ids.taxid.unique())
step = 1000
all_dfs = []
all_recs = []
for i in range(0, len(taxids), step):
retry = 0
while True:
try:
if retry >= 5:
break
big_df, recs = get_tax_lineage(taxids[i:i+step], filter_no_rank=False)
print(i, big_df.query_taxid.unique().size, len(recs))
all_dfs.append(big_df)
all_recs += recs
break
except Exception as ex:
            print(ex)
retry += 1
print('Retrying', retry)
###Output
0 1000 1000
1000 1000 1000
2000 1000 1000
3000 1000 1000
4000 997 997
5000 1000 1000
6000 1000 1000
7000 1000 1000
8000 1000 1000
9000 1000 1000
10000 1000 1000
11000 1000 1000
12000 1000 1000
13000 1000 1000
14000 1000 1000
15000 1000 1000
16000 1000 1000
17000 1000 1000
18000 1000 1000
19000 1000 1000
20000 1000 1000
21000 1000 1000
22000 1000 1000
23000 1000 1000
24000 1000 1000
25000 1000 1000
26000 1000 1000
27000 1000 1000
28000 1000 1000
29000 1000 1000
30000 1000 1000
31000 1000 1000
32000 1000 1000
33000 1000 1000
34000 1000 1000
35000 1000 1000
36000 1000 1000
37000 1000 1000
38000 1000 1000
39000 1000 1000
40000 1000 1000
41000 1000 1000
42000 367 367
###Markdown
Taxonomy info for 3 of the taxids could not be retrieved
###Code
df_split_ids.taxid.unique().size - len(all_recs)
###Output
_____no_output_____
###Markdown
The missing taxids were in between taxids 4000 and 5000
###Code
missing_recs = all_recs[4000:5000]
###Output
_____no_output_____
###Markdown
Manually determined missing taxids
###Code
deleted_taxids = '''
1539978
1608275
1609188
'''.strip().split('\n')
taxids_4000_no_missing = [x for x in taxids[4000:5000] if not x in deleted_taxids]
len(taxids_4000_no_missing)
###Output
_____no_output_____
###Markdown
Queried for non-missing taxids in `taxids[4000:5000]`
###Code
big_df, recs = get_tax_lineage(taxids_4000_no_missing, filter_no_rank=False)
print(i, big_df.query_taxid.unique().size, len(recs))
###Output
4900 997 997
###Markdown
Replaced DataFrame at index 4 with missing taxids accounted for
###Code
all_dfs[4] = big_df
###Output
_____no_output_____
###Markdown
Concatenated DataFrames from batched NCBI Taxonomy DB queries
###Code
dfalltaxid = pd.concat(all_dfs)
dfalltaxid = dfalltaxid.reset_index()
dfalltaxid = dfalltaxid[dfalltaxid.columns[1:]]
###Output
_____no_output_____
###Markdown
Removed some uninformative info
###Code
dfalltaxid = dfalltaxid[~(dfalltaxid.ScientificName == 'cellular organisms')]
dfalltaxid.Rank[dfalltaxid.Rank == 'no rank'] = None
###Output
_____no_output_____
###Markdown
Rename columns
###Code
dfalltaxid.columns = """
rank
name
taxid
query_taxid
""".strip().split('\n')
###Output
_____no_output_____
###Markdown
Save taxonomy information table
###Code
dfalltaxid.to_csv('refseq-taxid-info.csv', index=None)
dfalltaxid[dfalltaxid.query_taxid == '10090']
###Output
_____no_output_____
###Markdown
Generate NCBI RefSeq Taxonomy Summary TableThe `dfalltaxid` table contains a lot of extra information in an inefficient format so it would be useful to summarize that info in a table with one row corresponding to one RefSeq taxid.
###Code
from typing import List
def get_full_taxonomy(l: List[str]) -> str:
"""From an ordered list of taxonomic classifications, return a string of concatenated taxonomic info
Remove some of the redundancy between classifications, e.g.
["Salmonella enterica", "Salmonella enterica subsp. enterica"]
is concatenated to
"Salmonella enterica; subsp. enterica" rather than "Salmonella enterica; Salmonella enterica subsp. enterica"
Args:
l: list of taxonomic classifications
Returns:
(str): concatenated taxonomic classifications with some redundancy removed
"""
out = [l[0]]
for i in range(1, len(l)):
prev = l[i - 1]
curr = l[i]
out.append(curr.replace(prev, '').strip())
return '; '.join(out)
def summary_taxid(dftax):
taxids = dftax.query_taxid.unique()
df_has_rank = dftax[~pd.isnull(dftax['rank'])]
dicts = []
for i, taxid in enumerate(taxids):
d = {'taxid': taxid}
df_has_rank_taxid = df_has_rank[df_has_rank['query_taxid'] == taxid]
for _, r in df_has_rank_taxid.iterrows():
d['taxonomic_{}'.format(r['rank'])] = r['name']
df_matching_taxid = dftax[dftax['query_taxid'] == taxid]
# highest resolution taxonomic classification is the last entry for that taxid so get last row
try:
_, row = df_matching_taxid[~df_matching_taxid['query_taxid'].duplicated(keep='last')].iterrows().__next__()
d['top_taxonomy_name'] = row['name']
except StopIteration:
d['top_taxonomy_name'] = None
            print('Could not get top taxonomic classification for taxid %s' % taxid)
# build string of concatenated all taxonomic info with some redundancy between successive terms
try:
d['full_taxonomy'] = get_full_taxonomy(list(df_matching_taxid['name']))
except IndexError:
d['full_taxonomy'] = None
            print('Could not get full taxonomy for taxid %s' % taxid)
dicts.append(d)
if i % 1000 == 0:
            print('Parsed taxonomic info entries', i, d, len(dicts))
return pd.DataFrame(dicts)
###Output
_____no_output_____
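###Markdown
A small usage example (added for illustration, not in the original notebook) showing the de-duplication behaviour described in the `get_full_taxonomy` docstring above.
###Code
# Illustrative only: the behaviour described in the docstring above
print(get_full_taxonomy(["Salmonella enterica", "Salmonella enterica subsp. enterica"]))
# expected: Salmonella enterica; subsp. enterica
###Output
_____no_output_____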
###Markdown
Produce summarized NCBI Taxonomy info table for each unique taxid in `../refseq_masher/data/RefSeqSketches.msh`
###Code
df_summary_taxid = summary_taxid(dfalltaxid)
df_summary_taxid.head()
df_summary_taxid.shape
###Output
_____no_output_____
###Markdown
Save NCBI RefSeq Taxonomy Summary Table
###Code
df_summary_taxid.to_csv('../refseq_masher/data/ncbi_refseq_taxonomy_summary.csv', index=None)
###Output
_____no_output_____ |
Notebooks/HNCommentsDB.ipynb | ###Markdown
HackerNews Database.
###Code
# import pandas.
import pandas as pd
# load the csv files.
db1 = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/HackerNews/2006TopCommentorsComments.csv')
db2 = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/HackerNews/2007TopCommentorsComments.csv')
db3 = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/HackerNews/2008TopCommentorsComments.csv')
db4 = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/HackerNews/2009TopCommentorsComments.csv')
db5 = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/HackerNews/2010TopCommentorsComments.csv')
db6= pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/HackerNews/2011TopCommentorsComments.csv')
db7 = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/HackerNews/2012TopCommentorsComments.csv')
db8 = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/HackerNews/2013TopCommentorsComments.csv')
db9 = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/HackerNews/2014TopCommentorsComments.csv')
db10 = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/HackerNews/2015TopCommentorsComments.csv')
db11 = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/HackerNews/2016TopCommentorsComments.csv')
db12 = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/HackerNews/2017TopCommentorsComments.csv')
db13 = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/HackerNews/2018TopCommentorsComments.csv')
db14 = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/HackerNews/2019TopCommentorsComments.csv')
db15 = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/HackerNews/2020TopCommentorsComments.csv')
# merge the csv files.
df = pd.concat([db1, db2, db3, db4, db5, db6, db7, db8, db9, db10, db11, db12,
db13, db14, db15], sort=True)
df = df[['by','timestamp','time','text','parent', 'score']]
print(df.shape)
df.head()
# set elephantsql instance details.
host = 'rajje.db.elephantsql.com'
user = 'ibnzqkfl'
database = 'ibnzqkfl'
password = 'rYgeprTJq6jD_eR0bxEXwAnYX7fM-yRD'
import psycopg2
# connect the the elephantsql instance.
conn = psycopg2.connect(database=database, user=user, password=password, host=host)
# create cursor from connection.
cur = conn.cursor()
# drop any existing hackernews table in the instance.
cur.execute('DROP TABLE "public"."hackernews"')
# commit the change.
conn.commit()
# define a new hackernews table, with details for each column.
create_hackernews_table = """
CREATE TABLE hackernews(
by VARCHAR NOT NULL,
timestamp TIMESTAMP NOT NULL,
time INT NOT NULL,
text VARCHAR,
parent INT NOT NULL,
score SMALLINT
);
"""
# create the hackernews table.
cur.execute(create_hackernews_table)
# commit the change.
conn.commit()
from sqlalchemy import create_engine
# set the postgres url.
db_string = "postgres://ibnzqkfl:[email protected]:5432/ibnzqkfl"
# create the engine with postgres.
engine = create_engine(db_string)
# create the connection with the engine.
conn_2 = engine.connect()
# change the df to sql.
df.to_sql('hackernews', conn_2, index=False, if_exists='append')
conn.rollback()
import pandas as pd
df = pd.read_csv('mypath.csv')
df.columns = [c.lower() for c in df.columns] #postgres doesn't like capitals or spaces
from sqlalchemy import create_engine
engine = create_engine('postgresql://ibnzqkfl:rYgeprTJq6jD_eR0bxEXwAnYX7fM-yRD@localhost:5432/dbname')
df.to_sql("HackerNews", engine)
###Output
----------------------------------------
Exception happened during processing of request from ('::ffff:127.0.0.1', 45392, 0, 0)
Traceback (most recent call last):
File "/usr/lib/python3.6/socketserver.py", line 320, in _handle_request_noblock
self.process_request(request, client_address)
File "/usr/lib/python3.6/socketserver.py", line 351, in process_request
self.finish_request(request, client_address)
File "/usr/lib/python3.6/socketserver.py", line 364, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/lib/python3.6/socketserver.py", line 724, in __init__
self.handle()
File "/usr/lib/python3.6/http/server.py", line 418, in handle
self.handle_one_request()
File "/usr/lib/python3.6/http/server.py", line 406, in handle_one_request
method()
File "/usr/lib/python3.6/http/server.py", line 639, in do_GET
self.copyfile(f, self.wfile)
File "/usr/lib/python3.6/http/server.py", line 800, in copyfile
shutil.copyfileobj(source, outputfile)
File "/usr/lib/python3.6/shutil.py", line 82, in copyfileobj
fdst.write(buf)
File "/usr/lib/python3.6/socketserver.py", line 803, in write
self._sock.sendall(b)
ConnectionResetError: [Errno 104] Connection reset by peer
----------------------------------------
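###Markdown
As an added sketch (not part of the original notebook), the load into the `hackernews` table could be verified by reading a few rows back through the same SQLAlchemy engine; this assumes the ElephantSQL instance above is still reachable and the earlier `df.to_sql('hackernews', ...)` insert succeeded.
###Code
# Sketch only: verify the load by reading a few rows back (assumes the engine above is usable)
pd.read_sql('SELECT * FROM hackernews LIMIT 5;', engine)
###Output
_____no_output_____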
|
first-neural-network/dlnd-first-neural-network.ipynb | ###Markdown
Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but the implementation of the neural network (most of it) is left to you. After submitting this project, feel free to explore the data and the model further.
###Code
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load and prepare the data A critical step in building a neural network is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below we have provided the code to load and prepare the data. You'll learn more about this code soon!
###Code
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
###Output
_____no_output_____
###Markdown
Data overview This dataset contains the hourly count of riders for each day from January 1, 2011 to December 31, 2012. Riders are split into casual and registered users, and the cnt column is the total rider count. You can see the first few rows of the data above. The chart below shows the number of riders over roughly the first 10 days of the dataset (some days don't have exactly 24 entries, so it isn't exactly 10 days). You can see the hourly rentals here. This data is complicated! Ridership is lower on weekends, and weekday commute hours are the peak riding periods. We can also see temperature, humidity and wind speed in the data above, and all of this information affects the number of riders. Your model needs to capture all of this data.
###Code
rides[:24*10].plot(x='dteday', y='cnt')
###Output
_____no_output_____
###Markdown
Dummy variables Below are some categorical variables, such as season, weather and month. To include this data in our model, we need to create binary dummy variables. This is easy to do with `get_dummies()` from the Pandas library.
###Code
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
###Output
_____no_output_____
###Markdown
Scaling the target variables To make training the network easier, we'll standardize each of the continuous variables, i.e. shift and scale them so that they have zero mean and a standard deviation of 1. We save the scaling factors so that we can convert the data back when we use the network to make predictions.
###Code
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
###Output
_____no_output_____
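###Markdown
The saved scaling factors can be used to map a standardized column back to its original units; the cell below is a small illustrative sketch (not part of the original notebook) and assumes the scaling cell above has been run.
###Code
# Illustrative sketch: undo the standardization for one column using the saved factors
mean, std = scaled_features['cnt']
unscaled_cnt = data['cnt']*std + mean
unscaled_cnt.head()
###Output
_____no_output_____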
###Markdown
Splitting the data into training, test and validation sets We'll save roughly the last 21 days of the data as a test set, to be used after the network is trained. We'll use this set to make predictions and compare them with the actual number of riders.
###Code
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
###Output
_____no_output_____
###Markdown
We'll split the remaining data into two sets: one for training, and one for validating the network after it has been trained. Since the data has a time-series character, we train on historical data and then try to predict future data (the validation set).
###Code
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
###Output
_____no_output_____
###Markdown
Time to build the network Below you'll build your own network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You also need to set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. The network has two layers: a hidden layer and an output layer. The hidden layer uses the sigmoid function as its activation function. The output layer has only one node and is used for regression: the output of the node is the same as the input of the node, i.e. the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, taking a threshold into account, is called an activation function. We work through each layer of the network and calculate the outputs of each neuron. All of the outputs from one layer become inputs to the neurons of the next layer. This process is called forward propagation. We use the weights to propagate signals from the input layer to the output layer of the neural network. We also use the weights to propagate the error from the output layer back through the network in order to update the weights. This is called backpropagation.> **Hint**: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.You need to complete the following tasks:1. Implement the sigmoid activation function. Set `self.activation_function` in `__init__` to your sigmoid function.2. Implement the forward pass in the `train` method.3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.4. Implement the forward pass in the `run` method.
###Code
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes #3
self.hidden_nodes = hidden_nodes #2
self.output_nodes = output_nodes #1
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes)) #3,2
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes)) #2,1
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
#self.activation_function = lambda x : 1.0/1+np.exp(-x) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
def sigmoid(x):
return 1/(1+np.exp(-x)) # Replace 0 with your sigmoid calculation here
self.activation_function = sigmoid
def train(self, features, targets): #1,3 1,1
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape) #3,2
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape) #2,1
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer 1,3 3,2 1,2
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer 1,2
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) #1,2 2,1 1,1
# signals into final output layer
final_outputs = final_inputs #self.activation_function(final_inputs) # signals from final output layer 1,1
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Backpropagated error terms - Replace these values with your calculations.
output_error_term = error # * final_outputs * (1 - final_outputs) #1,0
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = (output_error_term * self.weights_hidden_to_output).T # 1,1, 2,1 2,1
hidden_error_term = hidden_error * hidden_outputs * (1-hidden_outputs) #1,2 1,2 1,2 2,2
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:,None] # 3,1 1,2 3,2
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:,None] # 1,0 1,2 2,1
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer 1,3 3,2 1,2
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) #1,2 2,1 1,1 signals into final output layer
final_outputs = final_inputs #self.activation_function(final_inputs) # signals from final output layer 1,1
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
###Output
_____no_output_____
###Markdown
Unit tests Run these unit tests to check whether your network implementation is correct. They help you make sure the network is implemented correctly before you start training it. These tests must pass for the project to be accepted.
###Code
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
###Output
.....
----------------------------------------------------------------------
Ran 5 tests in 0.008s
OK
###Markdown
Training the network Now you'll set the hyperparameters for the network. The strategy is to choose hyperparameters that give a low error on the training set without overfitting the data. If you train the network too long, or it has too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. For each training pass, you grab a random sample of the data instead of the whole dataset. You use many more training passes than with ordinary gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of iterations This is the number of batches sampled from the training data when training the network. The more iterations you use, the better the model fits the data. However, if you use too many iterations, the model won't generalize well to other data; this is called overfitting. You want a number that gives a low training loss while keeping the validation loss at a moderate level. Once you start overfitting, you'll see the training loss keep decreasing while the validation loss starts to increase. Choose the learning rate This scales the size of the weight updates. If it's too large, the weights blow up and the network fails to fit the data. A good starting point is 0.1. If the network has trouble fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the weight-update steps and the longer it takes for the neural network to converge. Choose the number of hidden nodes The more hidden nodes you have, the more accurate the model's predictions can be. Try different numbers of hidden nodes and see how it affects performance. You can look at the losses dictionary for a metric of the network's performance. If the number of hidden units is too low, the model doesn't have enough capacity to learn; if it's too high, there are too many directions the learning can take. The trick is to find the right balance in the number of hidden units.
###Code
import sys
### Set the hyperparameters here ###
iterations = 600
learning_rate = 0.80
hidden_nodes = 18
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
###Output
_____no_output_____
###Markdown
Check out your predictions Use the test data to see how well your network models the data. If it's completely wrong, make sure every step of the network is implemented correctly.
###Code
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
###Output
_____no_output_____ |
parts-distributor-sku-classifier-part-1.ipynb | ###Markdown
Parts Distributor SKU classifier, part 1: Build the model IntroductionElectronic parts distributors like Digi-Key, Mouser etc assign their own product IDs (known as a SKU, or "stock keeping unit") to every product they sell, which is different from the "part number" that manufacturers assign to the products they make.For example, `SN74LVC541APWR` is a part number identifying a particular IC made by Texas Instruments. Digi-Key's assigned SKU for it is `296-8521-1-ND`. Mouser calls it `595-SN74LVC541APWR`.Once you look at a few examples, you'll notice simple patterns that allow you to (mostly) identify the source of each part number/SKU. If you wanted a computer to do that for you, regular expressions would work. _But that wouldn't be fun, would it?_This turns out to be a great toy problem to try some machine learning algorithms on. What we want to do is use a whole lot of labeled data (where we already know the answers from some other data source) to build a model that we can then ask to categorize part numbers/SKUs that it hasn't seen before.~~~Me: Hey computer, what's "595-SN74LVC541APWR"?Computer: That looks like a Mouser SKU.Me: Ok, how about "296-8521-1-ND"?Computer: Pretty sure it's a Digi-Key SKU.Me: And what about "the AI is a lie"?Computer: ...*&$*&^$....~~~Since we'll be classifying sequences of characters, something like an [LSTM recurrent neural network](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) should do the trick.We'll be using a fairly typical machine learning environment: Python, pandas, numpy, Keras and TensorFlow.
###Code
import pandas as pd
import numpy as np
import json
from IPython.display import Markdown, display
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers import LSTM
from keras.utils import to_categorical
###Output
Using TensorFlow backend.
###Markdown
Training dataLet's start with a simple labeled dataset with 2 columns:- `partnum` is the part number/SKU string that we'll teach the model to classify- `class` is the known classification. It has 3 possible values: - `0` is a manufacturer's part number - `1` is a Mouser SKU - `2` is a Digi-Key SKUHere are the first few rows of our source file:
###Code
df_raw = pd.read_csv('data/mpn_mouser_digikey.csv')
class_names = ['MPN', 'Mouser SKU', 'Digi-Key SKU']
df_raw.sample(n=10)
###Output
_____no_output_____
###Markdown
It'd be good to know how many examples of each class we have, to make sure we don't run into issues with [unbalanced training sets](https://www.quora.com/In-classification-how-do-you-handle-an-unbalanced-training-set)Let's plot how many samples of manufacturer part numbers, Mouser SKUs and Digi-Key SKUs we have in our training set.
###Code
df_raw.groupby('class').count().plot.bar()
###Output
_____no_output_____
###Markdown
Looks like we have a lot more samples of Digi-Key SKUs than others. Let's drop some data to equalize the number of samples in each class, and reshuffle the rows.
###Code
limit_rows_per_class = int(df_raw.groupby('class').count().min())
limit_rows_per_class
df = pd.concat(list(df_raw[df_raw['class'] == c][:limit_rows_per_class] for c in df_raw['class'].unique()))
df = df.sample(frac=1, random_state=20181203)
###Output
_____no_output_____
###Markdown
To [properly train the model and evaluate results](https://towardsdatascience.com/train-test-split-and-cross-validation-in-python-80b61beca4b6), we'll separate our data into 2 sets:- train - these are the rows that the model will be learning from. (80% of the data)- validate - use this data to evaluate the accuracy of the model. This data will NOT be used for actual training. (20% of the data)
###Code
# Create a new column, randomly assign each row to a dataset
np.random.seed(20181203)
df['dataset'] = np.random.choice(['train', 'val'], size=len(df), replace=True, p=[0.80, 0.20])
df.head()
###Output
_____no_output_____
###Markdown
Prepare the inputsThe Keras LSTM layer operates on dense vectors (arrays of floats). To turn our part number strings into sequences of vectors, we'll take two steps: turn the strings into sequences of integers using a dictionary to map every character to a number, then use [Keras's "embedding" layer](https://keras.io/layers/embeddings/) to turn those into vectors. We also need to remember to terminate every sequence with a special code (we'll use a zero) to tell the model when the input stops.
###Code
# build the dictionary - map every unique character to an integer
unique_chars = set()
for s in df['partnum'].values:
unique_chars |= set(c for c in s)
partnum_dict = {c: i+1 for i, c in enumerate(unique_chars)}
df['x'] = list(df['partnum'].map(lambda s: list(partnum_dict[c] for c in s)))
maxlen = max(len(pn) for pn in df['partnum'].values)
df['x'] = list(list(l) for l in sequence.pad_sequences(df['x'], maxlen=maxlen+1, padding='post'))
###Output
_____no_output_____
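###Markdown
To make the encoding concrete, here is an illustrative cell (not in the original notebook) that encodes a single part number string with the character dictionary and pads it the same way as above.
###Code
# Illustrative only: encode and pad one part number string
sample = df['partnum'].iloc[0]
encoded = [partnum_dict[c] for c in sample]
padded = sequence.pad_sequences([encoded], maxlen=maxlen+1, padding='post')[0]
print(sample)
print(list(padded))
###Output
_____no_output_____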
###Markdown
Here is a snippet of that dictionary:
###Code
list(partnum_dict.items())[:10]
###Output
_____no_output_____
###Markdown
To tell our model which class a particular string belongs to, we'll use a technique called ["one-hot encoding"](https://towardsdatascience.com/choosing-the-right-encoding-method-label-vs-onehot-encoder-a4434493149b) using a helper method from Keras. Now each class will be represented by an array of mostly 0s. By the way, we'll use these same arrays when we start classifying data with our model, only then the values aren't going to be crisp 0s and 1s, but somewhere in between.
###Code
df['y'] = list(list(l) for l in to_categorical(df['class']))
df.head()
###Output
_____no_output_____
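###Markdown
For reference (an added illustration, not in the original notebook), this is what the one-hot vectors for the three classes look like on their own.
###Code
# Illustrative only: one-hot vectors for classes 0 (MPN), 1 (Mouser SKU) and 2 (Digi-Key SKU)
to_categorical([0, 1, 2])
###Output
_____no_output_____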
###Markdown
Build and train the modelWe are now ready to build and train the model. The simple architecture of this particular network was taken from a [Keras example](https://github.com/keras-team/keras/blob/2.0.5/examples/imdb_lstm.py), and it just happened to work for our toy problem with only minor modifications, so we'll just leave it as is.
###Code
def d(col, ds, class_filter=None):
if class_filter is not None:
return list(df[(df['dataset'] == ds) & (df['class'] == class_filter)][col])
else:
return list(df[df['dataset'] == ds][col])
# config
batch_size = 32
# build model
model = Sequential()
model.add(Embedding(len(partnum_dict)+1, 32))
model.add(LSTM(32, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(3, activation='softmax'))
# try using different optimizers and different optimizer configs
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(d('x', 'train'), d('y', 'train'),
batch_size=batch_size,
epochs=7,
validation_data=(d('x', 'val'), d('y', 'val')))
score, acc = model.evaluate(d('x', 'val'), d('y', 'val'), batch_size=batch_size)
display(Markdown('### Accuracy of the model: {:.2f}%'.format(acc * 100.0)))
###Output
2789/2789 [==============================] - 0s
###Markdown
That looks pretty good, but I'm actually curious as to what kind of samples were miscategorized. Save the model to diskLet's try predicting samples from each class separately to get an idea of where the model gets confused.
###Code
res = []
for c in sorted(df['class'].unique()):
score, acc = model.evaluate(d('x', 'val', class_filter=c), d('y', 'val', class_filter=c), batch_size=batch_size)
res.append([class_names[c], '{:.2f}%'.format(acc*100.0)])
pd.DataFrame(res, columns=['class', 'accuracy'])
###Output
800/921 [=========================>....] - ETA: 0s
###Markdown
Looks like the model nailed the Digi-Key SKUs, is very good with Mouser SKUs, but is misclassifying some part numbers as either Mouser or Digi-Key SKUs. Let's save the model so that we can reload it in [part 2](parts-distributor-sku-classifier-part-2-explore.ipynb) of the notebook and poke around a bit.
###Code
# Serialize the model architecture
with open("data/trained_model_layers.json", "w") as json_file:
json_file.write(model.to_json())
# Serialize the model weights to HDF5
model.save_weights("data/trained_model_weights.h5")
# Serialize our part number character dictionary - we'll need it to classify strings
with open("data/char_dictionary.json", "w") as json_file:
json.dump(partnum_dict, json_file)
# Finally, save our cleaned and prepared data set - we'll need it to explore the model in part 2
df.to_json("data/cleaned_training_data.json")
###Output
_____no_output_____ |
ACEkrillDepth.ipynb | ###Markdown
ACE_krill dataset computing the depth Data description Data collected during the Antarctic Circumnavigation Expedition (ACE) in 2017 using an EK80 echosounder running at a frequency of 200 kHz. Objective Extracting the krill parameters: the depth and height of the krill swarm Import packages
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
#import scipy.signal
#import skimage
###Output
_____no_output_____
###Markdown
Import dataWe import the csv file.
###Code
import ACE_box
import importlib
importlib.reload(ACE_box)
data_path = '/home/benjamin/Documents/datascience/ACE/'
filename = data_path + 'ACE_-D20170207-T104031.sv.csv'
info_df,data_trunc,depth_data = ACE_box.extract_data(filename)
###Output
Data matrix size: (2693, 9163)
Start depth (in meters): 0.09278976
Stop depth (in meters): 499.8584359
Nb of pixels along depth axis: 2693
Depth per pixel (in meters): 0.185579519547
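###Markdown
As an added sketch (not part of the original notebook), the grid parameters printed above can be used to convert a pixel row index on the depth axis into metres; pixel-edge versus pixel-centre conventions may shift the result by up to one pixel.
###Code
# Sketch: map a pixel index along the depth axis to metres, using the values printed above
start_depth = 0.09278976          # metres, from the extract_data printout
depth_per_pixel = 0.185579519547  # metres per pixel, from the extract_data printout
def pixel_to_depth_m(row_index):
    return start_depth + row_index * depth_per_pixel
print(pixel_to_depth_m(2692))  # deepest pixel, close to the reported stop depth (~499.9 m)
###Output
_____no_output_____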
###Markdown
Filtering processes
###Code
data_rescale = ACE_box.fix_contrast(data_trunc)
data2 =data_rescale.copy()
#data2[data<-70] = -70
#data2[data>-65] = -65
#data2 = data2 + 70
np.min(data3)
data3 = ACE_box.remove_vertical_lines(data2)
data3 = ACE_box.substract_meanovertime(data3)
import ACE_box
import importlib
importlib.reload(ACE_box)
gauss_denoised = ACE_box.gaussian_filter(data3)
krillsignal,energy_fluctuation = ACE_box.krill_function(gauss_denoised,1)
energy_fluctuation
import ACE_box
import importlib
importlib.reload(ACE_box)
kchunks = ACE_box.extract_krillchunks(krillsignal,data3)
kchunks_gauss = ACE_box.extract_krillchunks(krillsignal,gauss_denoised)
print("Nb of chunks: ",len(kchunks))
idx = 1
print(kchunks[idx]['Ping_start_index'],kchunks[idx]['Ping_end_index'])
print(kchunks[idx]['data'].shape)
%matplotlib notebook
from matplotlib import pyplot as plt
idx=19
plt.figure(figsize=(10,10))
ax1 = plt.subplot(2,1,1)
ax1.imshow(kchunks[idx]['data'],aspect='auto')
plt.title('Echogram',fontsize=20, color="darkblue")
ax2 = plt.subplot(2,1,2)
ax2.imshow(kchunks_gauss[idx]['data'],aspect='auto')
plt.ylabel('Depth')
plt.xlabel('Number of pings')
plt.show()
idx = 19
test_chunk = kchunks_gauss[idx]['data']
distribution = np.sum(test_chunk,axis=1)
plt.plot(distribution)
distribution_large = distribution.copy()
distribution_large[distribution_large<0.5*np.max(distribution_large)] =0
#density = distribution**2/np.sum(distribution**2)
#density = (distribution-np.min(distribution))**2/np.sum((distribution-np.min(distribution))**2)
#density = distribution
#density[density<0] = 0
density = distribution_large/np.sum(distribution_large)
depth_coord = np.arange(0,len(density))
mean_point = np.sum(density*depth_coord)
sigma = np.sqrt((np.sum(density*(depth_coord-mean_point)**2)))
print(mean_point-2*sigma,'Mean:',mean_point,mean_point+2*sigma,'Height:',4*sigma)
plt.plot(density)
print(ACE_box.swarm_depth(test_chunk))
%matplotlib notebook
from matplotlib import pyplot as plt
idx=13
plt.figure(figsize=(10,10))
ax1 = plt.subplot(2,1,1)
ax1.imshow(kchunks[idx]['data'],aspect='auto')
plt.title('Echogram',fontsize=20, color="darkblue")
ax2 = plt.subplot(2,1,2, sharex=ax1, sharey=ax1)
ax2.imshow(kchunks_gauss[idx]['data'],aspect='auto')
plt.ylabel('Depth')
plt.xlabel('Number of pings')
test_chunk = kchunks_gauss[idx]['data']
depth_point ,height = ACE_box.swarm_depth(test_chunk)
ax1.plot([0-1/2, test_chunk.shape[1]-1/2], [depth_point,depth_point], 'k--',linewidth=2)
ax1.plot([0-1/2, test_chunk.shape[1]-1/2], [depth_point-height/2,depth_point-height/2], 'r--',linewidth=2)
ax1.plot([0-1/2, test_chunk.shape[1]-1/2], [depth_point+height/2,depth_point+height/2], 'r--',linewidth=2)
ax2.plot([0-1/2, test_chunk.shape[1]-1/2], [depth_point,depth_point], 'k--',linewidth=2)
ax2.plot([0-1/2, test_chunk.shape[1]-1/2], [depth_point-height/2,depth_point-height/2], 'r--',linewidth=2)
ax2.plot([0-1/2, test_chunk.shape[1]-1/2], [depth_point+height/2,depth_point+height/2], 'r--',linewidth=2)
###Output
_____no_output_____ |
Shooting victims by block.ipynb | ###Markdown
Shooting victims by blockWhich Chicago block has the most shooting victims so far this year? Fetch the data from NewsroomDBNewsroomDB is the Tribune's proprietary database for tracking data that needs to be manually entered and validated rather than something that can be ingested from an official source. It's mostly used to track shooting victims and homicides. As far as I know, CPD doesn't provide granular data on shooting victims and the definition of homicide can be tricky (and vary from source to source).We'll grab shooting victims from the `shootings` collection.
###Code
import os
import requests
def get_table_url(table_name, base_url=os.environ['NEWSROOMDB_URL']):
return '{}table/json/{}'.format(os.environ['NEWSROOMDB_URL'], table_name)
def get_table_data(table_name):
url = get_table_url(table_name)
try:
r = requests.get(url)
return r.json()
except:
print("Request failed. Probably because the response is huge. We should fix this.")
return get_table_data(table_name)
shooting_victims = get_table_data('shootings')
print("Loaded {} shooting victims".format(len(data['shooting_victims'])))
###Output
Loaded 11713 shooting victims
###Markdown
Filter to only shootings this year
###Code
from datetime import date, datetime
def get_shooting_date(shooting_victim):
return datetime.strptime(shooting_victim['Date'], '%Y-%m-%d')
def shooting_is_this_year(shooting_victim, today):
try:
shooting_date = get_shooting_date(shooting_victim)
except ValueError:
if shooting_victim['RD Number']:
msg = "Could not parse date for shooting victim with RD Number {}".format(
shooting_victim['RD Number'])
else:
msg = "Could not parse date for shooting victim with record ID {}".format(
shooting_victim['_id'])
print(msg)
return False
return shooting_date.year == today.year
today = date.today()
# Use a list comprehension to filter the shooting victims to ones that
# occured on or before today's month and day.
# Also sort by date because it makes it easier to group by year
shooting_victims_this_year = sorted([sv for sv in shooting_victims
if shooting_is_this_year(sv, today)],
key=get_shooting_date)
###Output
Could not parse date for shooting victim with RD Number HX448309
Could not parse date for shooting victim with record ID 560bc169db573e1c2c67789e
Could not parse date for shooting victim with record ID 565d8490389ce82a2a5b07dc
Could not parse date for shooting victim with record ID 56d6c55e389ce82a2a5b09ac
Could not parse date for shooting victim with record ID 536b0f4edb573e257039a258
Could not parse date for shooting victim with record ID 53693edc389ce83e25cd4823
Could not parse date for shooting victim with record ID 536cf216db573e256fa3af22
Could not parse date for shooting victim with record ID 53ac49c8389ce835c90b18b9
Could not parse date for shooting victim with record ID 536cf773389ce835c8d88b28
Could not parse date for shooting victim with record ID 5421c1c1db573e3dc9db2e98
Could not parse date for shooting victim with RD Number HX445856
Could not parse date for shooting victim with RD Number HX447455
Could not parse date for shooting victim with RD Number HY182250
Could not parse date for shooting victim with record ID 552c0a0f389ce8650e9a9916
Could not parse date for shooting victim with record ID 55c79ce6389ce865f1892777
Could not parse date for shooting victim with RD Number HY369178
Could not parse date for shooting victim with record ID 565d882edb573e070ae4c259
Could not parse date for shooting victim with record ID 565da430389ce82a2bd86b3b
Could not parse date for shooting victim with record ID 56e09073389ce82a2a5b09d1
###Markdown
Get the block address
###Code
import re
def blockify(address):
"""
Convert a street address to a block level address
Example:
>>> blockify("1440 W 84th St, Chicago, IL 60620")
'1400 W 84th St, Chicago, IL 60620'
"""
m = re.search(r'^(?P<address_number>\d+) ', address)
address_number = m.group('address_number')
block_address_number = (int(address_number) // 100) * 100
return address.replace(address_number, str(block_address_number))
def add_block(sv):
"""Make a copy of a shooting victim record with an added block field"""
with_block = dict(**sv)
if not sv['Shooting Location']:
# No location, just set block to none
print("Record with RD number {0} has no location.".format(
sv['RD Number']))
with_block['block'] = None
return with_block
if sv['Shooting Specificity'] == 'Exact':
# Address is exact, convert to 100-block
with_block['block'] = blockify(sv['Shooting Location'])
else:
# Address is already block. Use it
with_block['block'] = sv['Shooting Location']
return with_block
# Create a list of shooting victim dictionaries with blocks
shooting_victims_this_year_with_block = [add_block(sv) for sv in shooting_victims_this_year]
###Output
Record with RD number has no location.
Record with RD number has no location.
###Markdown
Count victims by block
###Code
import pandas as pd
# Load shooting victims into a dataframe,
# filtering out victim records for which we couldn't determine the block
shooting_victims_this_year_df = pd.DataFrame([sv for sv in shooting_victims_this_year_with_block if sv['block'] is not None])
# Group by block
shooting_victims_this_year_by_block = shooting_victims_this_year_df.groupby('block').size().sort_values(ascending=False)
shooting_victims_this_year_by_block
# Output to a CSV file so I can email to the reporter who requested it
shooting_victims_this_year_by_block.to_csv("shooting_victims_by_block.csv")
###Output
_____no_output_____ |
ti_python_sdk/pytorch_non_distributed/pytorch_non_distributed.ipynb | ###Markdown
Single-machine PyTorch job This example shows how to train a PyTorch model with the TI SDK Import dependencies
###Code
from __future__ import absolute_import, print_function
import sys
from ti import session
from ti.pytorch import PyTorch
###Output
_____no_output_____
###Markdown
Prepare the training data and script We have already prepared a dataset in COS (object storage); you can also replace it with your own data.**Test dataset** (download via [this link](https://ti-ap-guangzhou-1300268737.cos.ap-guangzhou.myqcloud.com/training_data/pytorch/simple/test))**Training dataset** (download via [this link](https://ti-ap-guangzhou-1300268737.cos.ap-guangzhou.myqcloud.com/training_data/pytorch/simple/train))**Validation dataset** (download via [this link](https://ti-ap-guangzhou-1300268737.cos.ap-guangzhou.myqcloud.com/training_data/pytorch/simple/valid)) Initialize the PyTorch Estimator
###Code
import os
# initialize the session
ti_session = session.Session()
# service role authorized to TI
role = "TIONE_QCSRole"
# training data in COS
inputs = 'cos://ti-%s-1300268737/training_data/pytorch/simple' % (os.environ.get('REGION'))
# create a PyTorch Estimator
estimator = PyTorch(entry_point='train.py',
role=role,
framework_version='1.1.0',
train_instance_count=1,
train_instance_type='TI.SMALL2.1core2g',
source_dir='code',
# available hyperparameters: emsize, nhid, nlayers, lr, clip, epochs, batch_size,
# bptt, dropout, tied, seed, log_interval
hyperparameters={
'epochs': 1,
'tied': True
})
# submit the PyTorch training job
estimator.fit({'training': inputs})
###Output
_____no_output_____ |
notebooks/Spark_DataFrame_To_Tensorflow_Dataset_Perf_Testing.ipynb | ###Markdown
Spark DataFrame -> Tensorflow DatasetThis notebook serves as a playground for testing `oarphpy.spark.spark_df_to_tf_dataset()`. See also the unit tests for this utility.
###Code
# Common imports and setup
from oarphpy.spark import NBSpark
from oarphpy.spark import spark_df_to_tf_dataset
from oarphpy import util
import os
import random
import sys
import numpy as np
import tensorflow as tf
from pyspark.sql import Row
spark = NBSpark.getOrCreate()
###Output
/usr/local/lib/python3.6/dist-packages/google/protobuf/__init__.py:37: UserWarning: Module oarphpy was already imported from /opt/oarphpy/oarphpy/__init__.py, but /opt/oarphpy/notebooks is being added to sys.path
__import__('pkg_resources').declare_namespace(__name__)
2019-12-27 21:01:03,560 oarph 336 : Trying to auto-resolve path to src root ...
2019-12-27 21:01:03,561 oarph 336 : Using source root /opt/oarphpy
2019-12-27 21:01:03,589 oarph 336 : Generating egg to /tmp/op_spark_eggs_e2392756-5287-4e0e-bdb3-3bc52ee6cde4 ...
2019-12-27 21:01:03,641 oarph 336 : ... done. Egg at /tmp/op_spark_eggs_e2392756-5287-4e0e-bdb3-3bc52ee6cde4/oarphpy-0.0.0-py3.6.egg
###Markdown
Test on a "large" 2GB random dataset Create the dataset
###Code
NUM_RECORDS = 1000
DATASET_PATH = '/tmp/spark_df_to_tf_dataset_test_large'
def gen_data(n):
import numpy as np
y = np.random.rand(2 ** 15).tolist()
return Row(part=n % 100, id=str(n), x=1, y=y)
rdd = spark.sparkContext.parallelize(range(NUM_RECORDS))
df = spark.createDataFrame(rdd.map(gen_data))
if util.missing_or_empty(DATASET_PATH):
df.write.parquet(DATASET_PATH, partitionBy=['part'], mode='overwrite')
%%bash -s "$DATASET_PATH"
du -sh $1
###Output
2.7M /tmp/spark_df_to_tf_dataset_test_large
###Markdown
Test reading the dataset through Tensorflow
###Code
udf = spark.read.parquet(DATASET_PATH)
print("Have %s rows" % udf.count())
n_expect = udf.count()
ds = spark_df_to_tf_dataset(
udf,
'part',
spark_row_to_tf_element=lambda r: (r.x, r.id, r.y),
tf_element_types=(tf.int64, tf.string, tf.float64))
n = 0
t = util.ThruputObserver(name='test_spark_df_to_tf_dataset_large')
with util.tf_data_session(ds) as (sess, iter_dataset):
t.start_block()
for actual in iter_dataset():
n += 1
t.update_tallies(n=1)
for i in range(len(actual)):
t.update_tallies(num_bytes=sys.getsizeof(actual[i]))
t.maybe_log_progress()
t.stop_block()
print("Read %s records" % n)
assert n == n_expect
###Output
Have 10 rows
getting shards
10 [1, 6, 3, 5, 9, 4, 8, 7, 2, 0]
|
quantum-with-qiskit/.ipynb_checkpoints/Q64_Phase_Kickback-checkpoint.ipynb | ###Markdown
$ \newcommand{\bra}[1]{\langle #1|} $$ \newcommand{\ket}[1]{|#1\rangle} $$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $$ \newcommand{\dot}[2]{ #1 \cdot #2} $$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $$ \newcommand{\mypar}[1]{\left( #1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $$ \newcommand{\greenbit}[1] {\mathbf{{\color{green}#1}}} $$ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}#1}}} $$ \newcommand{\redbit}[1] {\mathbf{{\color{red}#1}}} $$ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}#1}}} $$ \newcommand{\blackbit}[1] {\mathbf{{\color{black}#1}}} $ Phase Kickback _prepared by Abuzer Yakaryilmaz_[](https://youtu.be/7H7A9IRPc8s) We observe another interesting quantum effect here.We apply a Controlled-NOT operator, but the controller qubit will be affected! Task 1Create a quantum circuit with two qubits, say $ q[1] $ and $ q[0] $ in the reading order of Qiskit.We start in quantum state $ \ket{01} $:- set the state of $ q[1] $ to $ \ket{0} $, and- set the state of $ q[0] $ to $ \ket{1} $.Apply Hadamard to both qubits.Apply CNOT operator, where the controller qubit is $ q[1] $ and the target qubit is $ q[0] $.Apply Hadamard to both qubits.Measure the outcomes.
###Code
# import all necessary objects and methods for quantum circuits
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
#
# your code is here
#
q = QuantumRegister(2,"q") # quantum register with 2 qubits
c = ClassicalRegister(2,"c") # classical register with 2 bits
qc = QuantumCircuit(q,c) # quantum circuit with quantum and classical registers
# the up qubit is in |0>
# set the down qubit to |1>
qc.x(q[0]) # apply x-gate (NOT operator)
qc.barrier()
# apply Hadamard to both qubits.
qc.h(q[0])
qc.h(q[1])
# apply CNOT operator, where the controller qubit is the up qubit and the target qubit is the down qubit.
qc.cx(1,0)
# apply Hadamard to both qubits.
qc.h(q[0])
qc.h(q[1])
# measure both qubits
qc.measure(q,c)
# draw the circuit in Qiskit reading order
display(qc.draw(output='mpl',reverse_bits=True))
# execute the circuit 100 times in the local simulator
job = execute(qc,Aer.get_backend('qasm_simulator'),shots=100)
counts = job.result().get_counts(qc)
print(counts)
###Output
_____no_output_____
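###Markdown
Before reading the solution below, we can also peek at the state of this circuit just before the measurements. The cell below is an added sketch (not part of the original task) and assumes the Qiskit/Aer version used above provides the local 'statevector_simulator' backend.
###Code
# Sketch: rebuild the Task 1 circuit without measurements and inspect its statevector
q2 = QuantumRegister(2,"q2")
qc2 = QuantumCircuit(q2)
qc2.x(q2[0])      # set the down qubit to |1>
qc2.h(q2[0])
qc2.h(q2[1])
qc2.cx(q2[1],q2[0])
qc2.h(q2[0])
qc2.h(q2[1])
state = execute(qc2,Aer.get_backend('statevector_simulator')).result().get_statevector(qc2)
print(state)  # the solution below explains why all of the amplitude should sit on |11>
###Output
_____no_output_____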
###Markdown
click for our solution The effect of CNOT The quantum state of the up qubit before CNOT:$$ \ket{0} \xrightarrow{H} \frac{1}{\sqrt{2}} \ket{0} + \frac{1}{\sqrt{2}} \ket{1}.$$The quantum state of the down qubit before CNOT:$$ \ket{1} \xrightarrow{H} \frac{1}{\sqrt{2}} \ket{0} - \frac{1}{\sqrt{2}} \ket{1}.$$ The quantum state of the composite system:$$ \mypar{ \frac{1}{\sqrt{2}} \ket{0} + \frac{1}{\sqrt{2}} \ket{1} } \otimes \mypar{ \frac{1}{\sqrt{2}} \ket{0} - \frac{1}{\sqrt{2}} \ket{1} }$$ CNOT affects when the up qubit has the value 1.Let's rewrite the composite state as below to explicitly represent the effect of CNOT.$$ \frac{1}{\sqrt{2}} \ket{0} \otimes \mypar{ \frac{1}{\sqrt{2}} \ket{0} - \frac{1}{\sqrt{2}} \ket{1} } + \frac{1}{\sqrt{2}} \ket{1} \otimes \mypar{ \frac{1}{\sqrt{2}} \ket{0} - \frac{1}{\sqrt{2}} \ket{1} }$$ CNOT flips the state of the down qubit.After CNOT, we have:$$ \frac{1}{\sqrt{2}} \ket{0} \otimes \mypar{ \frac{1}{\sqrt{2}} \ket{0} - \frac{1}{\sqrt{2}} \ket{1} } + \frac{1}{\sqrt{2}} \ket{1} \otimes \mypar{ \frac{1}{\sqrt{2}} \ket{1} - \frac{1}{\sqrt{2}} \ket{0} }$$Remark that $\ket{0}$ and $ \ket{1} $ are swapped in the second qubit.If we write the quantum state of the down qubit as before, the sign of $ \ket{1} $ in the up qubit should be flipped.Thus the last equation can be equivalently written as follows:$$ \frac{1}{\sqrt{2}} \ket{0} \otimes \mypar{ \frac{1}{\sqrt{2}} \ket{0} - \frac{1}{\sqrt{2}} \ket{1} } - \frac{1}{\sqrt{2}} \ket{1} \otimes \mypar{ \frac{1}{\sqrt{2}} \ket{0} - \frac{1}{\sqrt{2}} \ket{1} }$$ Before CNOT operator, the sign of $ \ket{1} $ in the up qubit is positive. After CNOT operator, its sign changes to negative.This is called phase kickback. After CNOT It is easy to see from the last expression, that the quantum states of the qubits are separable (no correlation):$$ \mypar{ \frac{1}{\sqrt{2}} \ket{0} - \frac{1}{\sqrt{2}} \ket{1} } \otimes \mypar{ \frac{1}{\sqrt{2}} \ket{0} - \frac{1}{\sqrt{2}} \ket{1} }$$If we apply Hadamard to each qubit, both qubits evolve to state $ \ket{1} $.The final state is $ \ket{11} $. Task 2 Create a circuit with 7 qubits, say $ q[6],\ldots,q[0] $ in the reading order of Qiskit.Set the states of the top six qubits to $ \ket{0} $.Set the state of the bottom qubit to $ \ket{1} $.Apply Hadamard operators to all qubits.Apply CNOT operator ($q[1]$,$q[0]$) Apply CNOT operator ($q[4]$,$q[0]$) Apply CNOT operator ($q[5]$,$q[0]$) Apply Hadamard operators to all qubits.Measure all qubits. For each CNOT operator, is there a phase-kickback effect?
###Code
# import all necessary objects and methods for quantum circuits
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
#
# your code is here
#
# Create a circuit with 7 qubits.
q = QuantumRegister(7,"q") # quantum register with 7 qubits
c = ClassicalRegister(7) # classical register with 7 bits
qc = QuantumCircuit(q,c) # quantum circuit with quantum and classical registers
# the top six qubits are already in |0>
# set the bottom qubit to |1>
qc.x(0) # apply x-gate (NOT operator)
# define a barrier
qc.barrier()
# apply Hadamard to all qubits.
for i in range(7):
qc.h(q[i])
# define a barrier
qc.barrier()
# apply CNOT operator (q[1],q[0])
# apply CNOT operator (q[4],q[0])
# apply CNOT operator (q[5],q[0])
qc.cx(q[1],q[0])
qc.cx(q[4],q[0])
qc.cx(q[5],q[0])
# define a barrier
qc.barrier()
# apply Hadamard to all qubits.
for i in range(7):
qc.h(q[i])
# define a barrier
qc.barrier()
# measure all qubits
qc.measure(q,c)
# draw the circuit in Qiskit reading order
display(qc.draw(output='mpl',reverse_bits=True))
# execute the circuit 100 times in the local simulator
job = execute(qc,Aer.get_backend('qasm_simulator'),shots=100)
counts = job.result().get_counts(qc)
print(counts)
###Output
_____no_output_____ |
notebooks/tg/ttn/simple/real/amplitude/mnist_0_1_n8_q3.ipynb | ###Markdown
Imports
###Code
import math
import pandas as pd
import pennylane as qml
import time
from keras.datasets import mnist
from matplotlib import pyplot as plt
from pennylane import numpy as np
from pennylane.templates import AmplitudeEmbedding, AngleEmbedding
from pennylane.templates.subroutines import ArbitraryUnitary
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
###Output
_____no_output_____
###Markdown
Model Params
###Code
np.random.seed(131)
initial_params = np.random.random([5])
INITIALIZATION_METHOD = 'Amplitude'
BATCH_SIZE = 20
EPOCHS = 400
STEP_SIZE = 0.01
BETA_1 = 0.9
BETA_2 = 0.99
EPSILON = 0.00000001
TRAINING_SIZE = 0.78
VALIDATION_SIZE = 0.07
TEST_SIZE = 1-TRAINING_SIZE-VALIDATION_SIZE
initial_time = time.time()
###Output
_____no_output_____
###Markdown
Import dataset
###Code
(train_X, train_y), (test_X, test_y) = mnist.load_data()
examples = np.append(train_X, test_X, axis=0)
examples = examples.reshape(70000, 28*28)
classes = np.append(train_y, test_y)
x = []
y = []
for (example, label) in zip(examples, classes):
if label == 0:
x.append(example)
y.append(-1)
elif label == 1:
x.append(example)
y.append(1)
x = np.array(x)
y = np.array(y)
# Normalize pixels values
x = x / 255
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=TEST_SIZE, shuffle=True)
validation_indexes = np.random.random_integers(len(X_train), size=(math.floor(len(X_train)*VALIDATION_SIZE),))
X_validation = [X_train[n] for n in validation_indexes]
y_validation = [y_train[n] for n in validation_indexes]
pca = PCA(n_components=8)
pca.fit(X_train)
X_train = pca.transform(X_train)
X_validation = pca.transform(X_validation)
X_test = pca.transform(X_test)
preprocessing_time = time.time()
###Output
_____no_output_____
###Markdown
Circuit creation
###Code
device = qml.device("default.qubit", wires=3)
@qml.qnode(device)
def circuit(features, params):
# Load state
if INITIALIZATION_METHOD == 'Amplitude':
AmplitudeEmbedding(features=features, wires=range(3), normalize=True, pad_with=0.)
else:
AngleEmbedding(features=features, wires=range(3), rotation='Y')
# First layer
qml.RY(params[0], wires=0)
qml.RY(params[1], wires=1)
qml.CNOT(wires=[0, 1])
# Second layer
qml.RY(params[2], wires=1)
qml.RY(params[3], wires=2)
qml.CNOT(wires=[1, 2])
# Third layer
qml.RY(params[4], wires=2)
# Measurement
return qml.expval(qml.PauliZ(2))
###Output
_____no_output_____
###Markdown
Circuit example
###Code
features = X_train[0]
print(f"Inital parameters: {initial_params}\n")
print(f"Example features: {features}\n")
print(f"Expectation value: {circuit(features, initial_params)}\n")
print(circuit.draw())
###Output
Inital parameters: [0.65015361 0.94810917 0.38802889 0.64129616 0.69051205]
Example features: [-3.62782683 -2.63177839 -1.03062668 -0.42073574 -0.76211978 -0.06781324
0.54118154 0.30746003]
Expectation value: -0.607111385954991
0: ──╭QubitStateVector(M0)──RY(0.65)───╭C────────────────────────────┤
1: ──├QubitStateVector(M0)──RY(0.948)──╰X──RY(0.388)──╭C─────────────┤
2: ──╰QubitStateVector(M0)──RY(0.641)─────────────────╰X──RY(0.691)──┤ ⟨Z⟩
M0 =
[-0.76824488+0.j -0.5573172 +0.j -0.21825013+0.j -0.08909689+0.j
-0.1613899 +0.j -0.01436044+0.j 0.11460303+0.j 0.06510912+0.j]
###Markdown
Accuracy test definition
###Code
def measure_accuracy(x, y, circuit_params):
class_errors = 0
for example, example_class in zip(x, y):
predicted_value = circuit(example, circuit_params)
if (example_class > 0 and predicted_value <= 0) or (example_class <= 0 and predicted_value > 0):
class_errors += 1
return 1 - (class_errors/len(y))
###Output
_____no_output_____
###Markdown
Training
###Code
params = initial_params
opt = qml.AdamOptimizer(stepsize=STEP_SIZE, beta1=BETA_1, beta2=BETA_2, eps=EPSILON)
test_accuracies = []
best_validation_accuracy = 0.0
best_params = []
for i in range(len(X_train)):
features = X_train[i]
expected_value = y_train[i]
def cost(circuit_params):
value = circuit(features, circuit_params)
return ((expected_value - value) ** 2)/len(X_train)
params = opt.step(cost, params)
if i % BATCH_SIZE == 0:
print(f"epoch {i//BATCH_SIZE}")
if i % (10*BATCH_SIZE) == 0:
current_accuracy = measure_accuracy(X_validation, y_validation, params)
test_accuracies.append(current_accuracy)
print(f"accuracy: {current_accuracy}")
if current_accuracy > best_validation_accuracy:
print("best accuracy so far!")
best_validation_accuracy = current_accuracy
best_params = params
if len(test_accuracies) == 30:
print(f"test_accuracies: {test_accuracies}")
if np.allclose(best_validation_accuracy, test_accuracies[0]):
params = best_params
break
del test_accuracies[0]
print("Optimized rotation angles: {}".format(params))
training_time = time.time()
###Output
Optimized rotation angles: [-0.12911659 2.14395627 -0.64302077 1.53472696 -0.02795154]
###Markdown
Testing
###Code
accuracy = measure_accuracy(X_test, y_test, params)
print(accuracy)
test_time = time.time()
print(f"pre-processing time: {preprocessing_time-initial_time}")
print(f"training time: {training_time - preprocessing_time}")
print(f"test time: {test_time - training_time}")
print(f"total time: {test_time - initial_time}")
###Output
pre-processing time: 3.1723732948303223
training time: 158.48766684532166
test time: 7.157676458358765
total time: 168.81771659851074
|
02-Variaveis_Tipo_Estrutura_Dados/01-Numeros.ipynb | ###Markdown
Numbers Basic operations
###Code
#addition
4 + 4
#subtraction
4 - 3
#multiplication
3 * 3
#Division
3 / 2
#Power
4 ** 2
#Modulo
10 % 3
###Output
_____no_output_____
###Markdown
The type function
###Code
type(5)
type(5.0)
a = 'string'
type(a)
###Output
_____no_output_____
###Markdown
Operations with float numbers
###Code
3.1 + 6.4
4 + 4.0
4 + 4
#The result is a float
4 / 2
#The result is an int
4 // 2
4 / 3.0
4 // 3.0
###Output
_____no_output_____
###Markdown
Conversion
###Code
float(9)
int(6.0)
int(6.5)
###Output
_____no_output_____
###Markdown
Hexadecimal and binary
###Code
hex(394)
hex(217)
bin(286)
bin(390)
###Output
_____no_output_____
###Markdown
The abs, round and pow functions
###Code
#returns the absolute value
abs(-8)
#returns the absolute value
abs(8)
#returns the rounded value
#(number, number of decimal places to keep)
round(3.14151922,2)
#power
pow(4, 2)
pow(5, 3)
###Output
_____no_output_____ |
Creating a Scalable Recommender System with Spark & Elasticsearch.ipynb | ###Markdown
Overview1. Create Elasticsearch Mappings 1. Load data into Elasticsearch (see `Enrich & Prepare MovieLens Dataset.ipynb`)2. Load ratings data and run ALS3. Save ALS model factors to Elasticsearch4. Show similar items using Elasticsearch 1. Set up Elasticsearch mappingsReferences:* [Create index request](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html)* [Delimited payload filter](https://www.elastic.co/guide/en/elasticsearch/reference/2.4/analysis-delimited-payload-tokenfilter.html)* [Term vectors](https://www.elastic.co/guide/en/elasticsearch/reference/2.4/docs-termvectors.html_term_information)* [Mapping](https://www.elastic.co/guide/en/elasticsearch/reference/2.4/mapping.html)
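For reference, the delimited payload format referenced above stores each factor component as a `position|value` token separated by spaces: a factor vector such as `[0.1, 0.2, 0.3]` becomes the string `"0|0.1 1|0.2 2|0.3"`. This is exactly the string that the `convert_vector` helper later in this notebook produces before the factors are indexed, and the `payload_analyzer` defined below splits it back into terms with numeric payloads.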
###Code
from elasticsearch import Elasticsearch
es = Elasticsearch()
create_index = {
"settings": {
"analysis": {
"analyzer": {
"payload_analyzer": {
"type": "custom",
"tokenizer":"whitespace",
"filter":"delimited_payload_filter"
}
}
}
},
"mappings": {
"ratings": {
"properties": {
"timestamp": {
"type": "date"
},
"userId": {
"type": "string",
"index": "not_analyzed"
},
"movieId": {
"type": "string",
"index": "not_analyzed"
},
"rating": {
"type": "double"
}
}
},
"users": {
"properties": {
"name": {
"type": "string"
},
"@model": {
"properties": {
"factor": {
"type": "string",
"term_vector": "with_positions_offsets_payloads",
"analyzer" : "payload_analyzer"
},
"version": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
},
"movies": {
"properties": {
"genres": {
"type": "string"
},
"original_language": {
"type": "string",
"index": "not_analyzed"
},
"image_url": {
"type": "string",
"index": "not_analyzed"
},
"release_date": {
"type": "date"
},
"popularity": {
"type": "double"
},
"@model": {
"properties": {
"factor": {
"type": "string",
"term_vector": "with_positions_offsets_payloads",
"analyzer" : "payload_analyzer"
},
"version": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
}
}
# create index with the settings & mappings above
#es.indices.create(index="demo", body=create_index)
###Output
_____no_output_____
###Markdown
Load User, Movie and Ratings DataFrames from ElasticsearchShow schemas
###Code
user_df = sqlContext.read.format("es").load("demo/users")
user_df.printSchema()
user_df.select("userId", "name").show(5)
movie_df = sqlContext.read.format("es").load("demo/movies")
movie_df.printSchema()
movie_df.select("movieId", "title", "genres", "release_date", "popularity").show(5)
ratings_df = sqlContext.read.format("es").load("demo/ratings")
ratings_df.printSchema()
ratings_df.show(5)
###Output
_____no_output_____
###Markdown
2. Run ALS
###Code
from pyspark.ml.recommendation import ALS
als = ALS(userCol="userId", itemCol="movieId", ratingCol="rating", regParam=0.1, rank=20)
model = als.fit(ratings_df)
model.userFactors.show(5)
model.itemFactors.show(5)
###Output
_____no_output_____
###Markdown
3. Write ALS user and item factors to Elasticsearch Utility functions for converting factor vectors
###Code
from pyspark.sql.types import *
from pyspark.sql.functions import udf, lit
def convert_vector(x):
'''Convert a list or numpy array to delimited token filter format'''
return " ".join(["%s|%s" % (i, v) for i, v in enumerate(x)])
def reverse_convert(s):
'''Convert a delimited token filter format string back to list format'''
return [float(f.split("|")[1]) for f in s.split(" ")]
def vector_to_struct(x, version):
'''Convert a vector to a SparkSQL Struct with string-format vector and version fields'''
return (convert_vector(x), version)
vector_struct = udf(vector_to_struct, \
StructType([StructField("factor", StringType(), True), \
StructField("version", StringType(), True)]))
# test out the vector conversion function
test_vec = model.userFactors.select("features").first().features
print(test_vec)
print()
print(convert_vector(test_vec))
###Output
_____no_output_____
###Markdown
Convert factor vectors to [factor, version] form and write to Elasticsearch
###Code
ver = model.uid
movie_vectors = model.itemFactors.select("id", vector_struct("features", lit(ver)).alias("@model"))
movie_vectors.select("id", "@model.factor", "@model.version").show(5)
user_vectors = model.userFactors.select("id", vector_struct("features", lit(ver)).alias("@model"))
user_vectors.select("id", "@model.factor", "@model.version").show(5)
# write data to ES, use:
# - "id" as the column to map to ES movie id
# - "update" write mode for ES
# - "append" write mode for Spark
movie_vectors.write.format("es") \
.option("es.mapping.id", "id") \
.option("es.write.operation", "update") \
.save("demo/movies", mode="append")
user_vectors.write.format("es") \
.option("es.mapping.id", "id") \
.option("es.write.operation", "update") \
.save("demo/users", mode="append")
###Output
_____no_output_____
###Markdown
Check the data was written correctly
###Code
es.search(index="demo", doc_type="movies", q="star wars force", size=1)
###Output
_____no_output_____
###Markdown
4. Recommend using Elasticsearch!
###Code
from IPython.display import Image, HTML, display
def fn_query(query_vec, q="*", cosine=False):
return {
"query": {
"function_score": {
"query" : {
"query_string": {
"query": q
}
},
"script_score": {
"script": "payload_vector_score",
"lang": "native",
"params": {
"field": "@model.factor",
"vector": query_vec,
"cosine" : cosine
}
},
"boost_mode": "replace"
}
}
}
def get_similar(the_id, q="*", num=10, index="demo", dt="movies"):
response = es.get(index=index, doc_type=dt, id=the_id)
src = response['_source']
if '@model' in src and 'factor' in src['@model']:
raw_vec = src['@model']['factor']
# our script actually uses the list form for the query vector and handles conversion internally
query_vec = reverse_convert(raw_vec)
q = fn_query(query_vec, q=q, cosine=True)
results = es.search(index, dt, body=q)
hits = results['hits']['hits']
return src, hits[1:num+1]
def display_similar(the_id, q="*", num=10, index="demo", dt="movies"):
movie, recs = get_similar(the_id, q, num, index, dt)
# display query
q_im_url = movie['image_url']
display(HTML("<h2>Get similar movies for:</h2>"))
display(Image(q_im_url, width=200))
display(HTML("<br>"))
display(HTML("<h2>Similar movies:</h2>"))
sim_html = "<table border=0><tr>"
i = 0
for rec in recs:
r_im_url = rec['_source']['image_url']
r_score = rec['_score']
sim_html += "<td><img src=%s width=200></img></td><td>%2.3f</td>" % (r_im_url, r_score)
i += 1
if i % 5 == 0:
sim_html += "</tr><tr>"
sim_html += "</tr></table>"
display(HTML(sim_html))
display_similar(122886, num=5)
display_similar(122886, num=5, q="title:(NOT trek)")
display_similar(6377, num=5, q="genres:children AND release_date:[now-3y/y TO now]")
###Output
_____no_output_____ |
_doc/notebooks/lectures/wines_knn_split.ipynb | ###Markdown
Training and test sets The model is estimated on a training set and evaluated on a test set.
###Code
%matplotlib inline
from papierstat.datasets import load_wines_dataset
df = load_wines_dataset()
X = df.drop(['quality', 'color'], axis=1)
y = df['quality']
###Output
_____no_output_____
###Markdown
We split into training and test sets with the [train_test_split](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function.
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
from sklearn.neighbors import KNeighborsRegressor
knn = KNeighborsRegressor(n_neighbors=1)
knn.fit(X_train, y_train)
prediction = knn.predict(X_test)
import pandas
res = pandas.DataFrame(dict(expected=y_test, prediction=prediction))
res.head()
from seaborn import jointplot
ax = jointplot("expected", "prediction", res, kind="kde", size=4)
ax.ax_marg_y.set_title('Distribution valeurs attendues\nvaleurs prédites');
###Output
_____no_output_____
###Markdown
The result looks acceptable. Let's remove the correct answers.
###Code
ax = jointplot("expected", "prediction", res[res['expected'] != res['prediction']], kind="kde", size=4)
ax.ax_marg_x.set_title('Distribution valeurs attendues\nvaleurs prédites\n' +
'sans les réponses correctes');
res['diff'] = res['prediction'] - res["expected"]
ax = res['diff'].hist(bins=15, figsize=(3,3))
ax.set_title("Répartition des différences");
###Output
_____no_output_____
###Markdown
If we take the mean of the absolute errors:
###Code
import numpy
numpy.abs(res['diff']).mean()
###Output
_____no_output_____
###Markdown
The model is off by half a point on average. The *scikit-learn* module offers many [metrics](http://scikit-learn.org/stable/modules/classes.htmlsklearn-metrics-metrics) to evaluate results. We are particularly interested in those for [regression](http://scikit-learn.org/stable/modules/classes.htmlregression-metrics). The one we used is called [mean_absolute_error](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.htmlsklearn.metrics.mean_absolute_error).
###Code
from sklearn.metrics import mean_absolute_error
mean_absolute_error(y_test, prediction)
###Output
_____no_output_____
###Markdown
Another widely used indicator: [R2](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.htmlsklearn.metrics.r2_score).
###Code
from sklearn.metrics import r2_score
r2_score(y_test, prediction)
###Output
_____no_output_____
###Markdown
A negative value means the model does worse than a constant prediction equal to the mean of the scores on the test set. Let's try it.
###Code
const = numpy.mean(y_test) * numpy.ones(y_test.shape[0])
r2_score(y_test, const)
###Output
_____no_output_____
###Markdown
To be rigorous, we should take the mean of the scores on the training set, i.e. those of the known wines.
###Code
const = numpy.mean(y_train) * numpy.ones(y_test.shape[0])
r2_score(y_test, const)
###Output
_____no_output_____
###Markdown
Roughly the same, and we now know that the model is not good. Let's look for an explanation. One possible reason is that the training and test sets are not homogeneous: the model learns on some data and is tested on other data that have nothing in common with it. Let's start by looking at the distribution of the scores.
###Code
ys = pandas.DataFrame(dict(y=y_train))
ys['base'] = 'train'
ys2 = pandas.DataFrame(dict(y=y_test))
ys2['base'] = 'test'
ys = pandas.concat([ys, ys2])
ys['compte'] = 1
piv = ys.groupby(['base', 'y'], as_index=False).count().pivot('y', 'base', 'compte')
piv['ratio'] = piv['test'] / piv['train']
piv
###Output
_____no_output_____ |
Section 2/Scipy_stats.ipynb | ###Markdown
Scipy Stats
###Code
from scipy import stats as sps
# cumulative area under the normal curve
sps.norm.cdf(2)
# generate a list of critical values for various levels of significance
sps.t.isf([.1,.05,.01], [[30],[40]])
# descriptives
x = sps.norm.rvs(size=100)
sps.describe(x)
desc = sps.describe(x)
desc[2]
def print_descriptives(array):
'''print table of descriptive statistics
input - list, numpy array, pandas dataframe'''
import scipy.stats as sps
import numpy as np
stats = sps.describe(array)
print ("{:30s}".format("Descriptive Statistics"))
print ("-" * 30)
print("{:14s} {:15.4f}".format("Count:", stats[0]))
print("{:14s} {:15.4f}".format("Min:", stats[1][0]))
print("{:14s} {:15.4f}".format("Max:", stats[1][1]))
print("{:14s} {:15.4f}".format("Mean:", stats[2]))
print("{:14s} {:15.4f}".format("St Dev:", np.sqrt(stats[3])))
print("{:14s} {:15.4f}".format("Skew:", stats[4]))
print("{:14s} {:15.4f}".format("Kurtosis:", stats[5]))
print_descriptives(x)
percentiles = [5,10,25,75,90,95]
sps.scoreatpercentile(x, percentiles)
import scipy
scipy.__version__
###Output
_____no_output_____ |
Improving Deep Neural Networks Hyperparameter tuning, Regularization and Optimization/Week 2/Optimization_methods_v1b.ipynb | ###Markdown
Optimization MethodsUntil now, you've always used Gradient Descent to update the parameters and minimize the cost. In this notebook, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value for the cost function. Having a good optimization algorithm can be the difference between waiting days vs. just a few hours to get a good result. Gradient descent goes "downhill" on a cost function $J$. Think of it as trying to do this: **Figure 1** : **Minimizing the cost is like finding the lowest point in a hilly landscape** At each step of the training, you update your parameters following a certain direction to try to get to the lowest possible point. **Notations**: As usual, $\frac{\partial J}{\partial a } = $ `da` for any variable `a`.To get started, run the following code to import the libraries you will need. Updates to Assignment If you were working on a previous version* The current notebook filename is version "Optimization_methods_v1b". * You can find your work in the file directory as version "Optimization methods'.* To see the file directory, click on the Coursera logo at the top left of the notebook. List of Updates* op_utils is now opt_utils_v1a. Assertion statement in `initialize_parameters` is fixed.* opt_utils_v1a: `compute_cost` function now accumulates total cost of the batch without taking the average (average is taken for entire epoch instead).* In `model` function, the total cost per mini-batch is accumulated, and the average of the entire epoch is taken as the average cost. So the plot of the cost function over time is now a smooth downward curve instead of an oscillating curve.* Print statements used to check each function are reformatted, and 'expected output` is reformatted to match the format of the print statements (for easier visual comparisons).
###Code
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import math
import sklearn
import sklearn.datasets
from opt_utils_v1a import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation
from opt_utils_v1a import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
###Output
_____no_output_____
###Markdown
1 - Gradient DescentA simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent. **Warm-up exercise**: Implement the gradient descent update rule. The gradient descent rule is, for $l = 1, ..., L$: $$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{1}$$$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{2}$$where L is the number of layers and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding.
###Code
# GRADED FUNCTION: update_parameters_with_gd
def update_parameters_with_gd(parameters, grads, learning_rate):
"""
Update parameters using one step of gradient descent
Arguments:
parameters -- python dictionary containing your parameters to be updated:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients to update each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
learning_rate -- the learning rate, scalar.
Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Update rule for each parameter
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - (learning_rate * grads['dW' + str(l+1)])
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - (learning_rate * grads['dW' + str(l+1)])
### END CODE HERE ###
return parameters
parameters, grads, learning_rate = update_parameters_with_gd_test_case()
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
print("W1 =\n" + str(parameters["W1"]))
print("b1 =\n" + str(parameters["b1"]))
print("W2 =\n" + str(parameters["W2"]))
print("b2 =\n" + str(parameters["b2"]))
###Output
W1 =
[[ 1.63535156 -0.62320365 -0.53718766]
[-1.07799357 0.85639907 -2.29470142]]
b1 =
[[ 1.74604067]
[-0.75184921]]
W2 =
[[ 0.32171798 -0.25467393 1.46902454]
[-2.05617317 -0.31554548 -0.3756023 ]
[ 1.1404819 -1.09976462 -0.1612551 ]]
b2 =
[[-0.88020257]
[ 0.02561572]
[ 0.57539477]]
###Markdown
**Expected Output**:```W1 =[[ 1.63535156 -0.62320365 -0.53718766] [-1.07799357 0.85639907 -2.29470142]]b1 =[[ 1.74604067] [-0.75184921]]W2 =[[ 0.32171798 -0.25467393 1.46902454] [-2.05617317 -0.31554548 -0.3756023 ] [ 1.1404819 -1.09976462 -0.1612551 ]]b2 =[[-0.88020257] [ 0.02561572] [ 0.57539477]]``` A variant of this is Stochastic Gradient Descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch has just 1 example. The update rule that you have just implemented does not change. What changes is that you would be computing gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent. - **(Batch) Gradient Descent**:``` pythonX = data_inputY = labelsparameters = initialize_parameters(layers_dims)for i in range(0, num_iterations): Forward propagation a, caches = forward_propagation(X, parameters) Compute cost. cost += compute_cost(a, Y) Backward propagation. grads = backward_propagation(a, caches, parameters) Update parameters. parameters = update_parameters(parameters, grads) ```- **Stochastic Gradient Descent**:```pythonX = data_inputY = labelsparameters = initialize_parameters(layers_dims)for i in range(0, num_iterations): for j in range(0, m): Forward propagation a, caches = forward_propagation(X[:,j], parameters) Compute cost cost += compute_cost(a, Y[:,j]) Backward propagation grads = backward_propagation(a, caches, parameters) Update parameters. parameters = update_parameters(parameters, grads)``` In Stochastic Gradient Descent, you use only 1 training example before updating the gradients. When the training set is large, SGD can be faster. But the parameters will "oscillate" toward the minimum rather than converge smoothly. Here is an illustration of this: **Figure 1** : **SGD vs GD** "+" denotes a minimum of the cost. SGD leads to many oscillations to reach convergence. But each step is a lot faster to compute for SGD than for GD, as it uses only one training example (vs. the whole batch for GD). **Note** also that implementing SGD requires 3 for-loops in total:1. Over the number of iterations2. Over the $m$ training examples3. Over the layers (to update all parameters, from $(W^{[1]},b^{[1]})$ to $(W^{[L]},b^{[L]})$)In practice, you'll often get faster results if you do not use neither the whole training set, nor only one training example, to perform each update. Mini-batch gradient descent uses an intermediate number of examples for each step. With mini-batch gradient descent, you loop over the mini-batches instead of looping over individual training examples. **Figure 2** : **SGD vs Mini-Batch GD** "+" denotes a minimum of the cost. Using mini-batches in your optimization algorithm often leads to faster optimization. **What you should remember**:- The difference between gradient descent, mini-batch gradient descent and stochastic gradient descent is the number of examples you use to perform one update step.- You have to tune a learning rate hyperparameter $\alpha$.- With a well-turned mini-batch size, usually it outperforms either gradient descent or stochastic gradient descent (particularly when the training set is large). 2 - Mini-Batch Gradient descentLet's learn how to build mini-batches from the training set (X, Y).There are two steps:- **Shuffle**: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. 
Note that the random shuffling is done synchronously between X and Y. Such that after the shuffling the $i^{th}$ column of X is the example corresponding to the $i^{th}$ label in Y. The shuffling step ensures that examples will be split randomly into different mini-batches. - **Partition**: Partition the shuffled (X, Y) into mini-batches of size `mini_batch_size` (here 64). Note that the number of training examples is not always divisible by `mini_batch_size`. The last mini batch might be smaller, but you don't need to worry about this. When the final mini-batch is smaller than the full `mini_batch_size`, it will look like this: **Exercise**: Implement `random_mini_batches`. We coded the shuffling part for you. To help you with the partitioning step, we give you the following code that selects the indexes for the $1^{st}$ and $2^{nd}$ mini-batches:```pythonfirst_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]second_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]...```Note that the last mini-batch might end up smaller than `mini_batch_size=64`. Let $\lfloor s \rfloor$ represents $s$ rounded down to the nearest integer (this is `math.floor(s)` in Python). If the total number of examples is not a multiple of `mini_batch_size=64` then there will be $\lfloor \frac{m}{mini\_batch\_size}\rfloor$ mini-batches with a full 64 examples, and the number of examples in the final mini-batch will be ($m-mini_\_batch_\_size \times \lfloor \frac{m}{mini\_batch\_size}\rfloor$).
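For example, with $m = 148$ training examples and `mini_batch_size = 64` (the setting used in the test case below), there are $\lfloor 148/64 \rfloor = 2$ full mini-batches plus a final mini-batch of $148 - 2 \times 64 = 20$ examples.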
###Code
# GRADED FUNCTION: random_mini_batches
def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
"""
Creates a list of random minibatches from (X, Y)
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
mini_batch_size -- size of the mini-batches, integer
Returns:
mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
"""
np.random.seed(seed) # To make your "random" minibatches the same as ours
m = X.shape[1] # number of training examples
mini_batches = []
# Step 1: Shuffle (X, Y)
permutation = list(np.random.permutation(m))
shuffled_X = X[:, permutation]
shuffled_Y = Y[:, permutation].reshape((1,m))
# Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning
for k in range(0, num_complete_minibatches):
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = shuffled_X[:, (k * mini_batch_size) : (k+1) * mini_batch_size]
mini_batch_Y = shuffled_Y[:, (k * mini_batch_size) : (k+1) * mini_batch_size]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
# Handling the end case (last mini-batch < mini_batch_size)
if m % mini_batch_size != 0:
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size : m]
mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size : m]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
return mini_batches
X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case()
mini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size)
print ("shape of the 1st mini_batch_X: " + str(mini_batches[0][0].shape))
print ("shape of the 2nd mini_batch_X: " + str(mini_batches[1][0].shape))
print ("shape of the 3rd mini_batch_X: " + str(mini_batches[2][0].shape))
print ("shape of the 1st mini_batch_Y: " + str(mini_batches[0][1].shape))
print ("shape of the 2nd mini_batch_Y: " + str(mini_batches[1][1].shape))
print ("shape of the 3rd mini_batch_Y: " + str(mini_batches[2][1].shape))
print ("mini batch sanity check: " + str(mini_batches[0][0][0][0:3]))
###Output
shape of the 1st mini_batch_X: (12288, 64)
shape of the 2nd mini_batch_X: (12288, 64)
shape of the 3rd mini_batch_X: (12288, 20)
shape of the 1st mini_batch_Y: (1, 64)
shape of the 2nd mini_batch_Y: (1, 64)
shape of the 3rd mini_batch_Y: (1, 20)
mini batch sanity check: [ 0.90085595 -0.7612069 0.2344157 ]
###Markdown
**Expected Output**: **shape of the 1st mini_batch_X** (12288, 64) **shape of the 2nd mini_batch_X** (12288, 64) **shape of the 3rd mini_batch_X** (12288, 20) **shape of the 1st mini_batch_Y** (1, 64) **shape of the 2nd mini_batch_Y** (1, 64) **shape of the 3rd mini_batch_Y** (1, 20) **mini batch sanity check** [ 0.90085595 -0.7612069 0.2344157 ] **What you should remember**:- Shuffling and Partitioning are the two steps required to build mini-batches- Powers of two are often chosen to be the mini-batch size, e.g., 16, 32, 64, 128. 3 - MomentumBecause mini-batch gradient descent makes a parameter update after seeing just a subset of examples, the direction of the update has some variance, and so the path taken by mini-batch gradient descent will "oscillate" toward convergence. Using momentum can reduce these oscillations. Momentum takes into account the past gradients to smooth out the update. We will store the 'direction' of the previous gradients in the variable $v$. Formally, this will be the exponentially weighted average of the gradient on previous steps. You can also think of $v$ as the "velocity" of a ball rolling downhill, building up speed (and momentum) according to the direction of the gradient/slope of the hill. **Figure 3**: The red arrows shows the direction taken by one step of mini-batch gradient descent with momentum. The blue points show the direction of the gradient (with respect to the current mini-batch) on each step. Rather than just following the gradient, we let the gradient influence $v$ and then take a step in the direction of $v$. **Exercise**: Initialize the velocity. The velocity, $v$, is a python dictionary that needs to be initialized with arrays of zeros. Its keys are the same as those in the `grads` dictionary, that is:for $l =1,...,L$:```pythonv["dW" + str(l+1)] = ... (numpy array of zeros with the same shape as parameters["W" + str(l+1)])v["db" + str(l+1)] = ... (numpy array of zeros with the same shape as parameters["b" + str(l+1)])```**Note** that the iterator l starts at 0 in the for loop while the first parameters are v["dW1"] and v["db1"] (that's a "one" on the superscript). This is why we are shifting l to l+1 in the `for` loop.
###Code
# GRADED FUNCTION: initialize_velocity
def initialize_velocity(parameters):
"""
Initializes the velocity as a python dictionary with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
Returns:
v -- python dictionary containing the current velocity.
v['dW' + str(l)] = velocity of dWl
v['db' + str(l)] = velocity of dbl
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
# Initialize velocity
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
### END CODE HERE ###
return v
parameters = initialize_velocity_test_case()
v = initialize_velocity(parameters)
print("v[\"dW1\"] =\n" + str(v["dW1"]))
print("v[\"db1\"] =\n" + str(v["db1"]))
print("v[\"dW2\"] =\n" + str(v["dW2"]))
print("v[\"db2\"] =\n" + str(v["db2"]))
###Output
v["dW1"] =
[[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db1"] =
[[ 0.]
[ 0.]]
v["dW2"] =
[[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db2"] =
[[ 0.]
[ 0.]
[ 0.]]
###Markdown
**Expected Output**:```v["dW1"] =[[ 0. 0. 0.] [ 0. 0. 0.]]v["db1"] =[[ 0.] [ 0.]]v["dW2"] =[[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]]v["db2"] =[[ 0.] [ 0.] [ 0.]]``` **Exercise**: Now, implement the parameters update with momentum. The momentum update rule is, for $l = 1, ..., L$: $$ \begin{cases}v_{dW^{[l]}} = \beta v_{dW^{[l]}} + (1 - \beta) dW^{[l]} \\W^{[l]} = W^{[l]} - \alpha v_{dW^{[l]}}\end{cases}\tag{3}$$$$\begin{cases}v_{db^{[l]}} = \beta v_{db^{[l]}} + (1 - \beta) db^{[l]} \\b^{[l]} = b^{[l]} - \alpha v_{db^{[l]}} \end{cases}\tag{4}$$where L is the number of layers, $\beta$ is the momentum and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$ (that's a "one" on the superscript). So you will need to shift `l` to `l+1` when coding.
###Code
# GRADED FUNCTION: update_parameters_with_momentum
def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
"""
Update parameters using Momentum
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- python dictionary containing the current velocity:
v['dW' + str(l)] = ...
v['db' + str(l)] = ...
beta -- the momentum hyperparameter, scalar
learning_rate -- the learning rate, scalar
Returns:
parameters -- python dictionary containing your updated parameters
v -- python dictionary containing your updated velocities
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Momentum update for each parameter
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
# compute velocities
v["dW" + str(l+1)] = beta*v["dW" + str(l+1)] + (1-beta)* grads['dW' + str(l+1)]
v["db" + str(l+1)] = beta*v["db" + str(l+1)] + (1-beta)* grads['db' + str(l+1)]
# update parameters
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - (learning_rate * v["dW" + str(l+1)])
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - (learning_rate * v["db" + str(l+1)])
### END CODE HERE ###
return parameters, v
parameters, grads, v = update_parameters_with_momentum_test_case()
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01)
print("W1 = \n" + str(parameters["W1"]))
print("b1 = \n" + str(parameters["b1"]))
print("W2 = \n" + str(parameters["W2"]))
print("b2 = \n" + str(parameters["b2"]))
print("v[\"dW1\"] = \n" + str(v["dW1"]))
print("v[\"db1\"] = \n" + str(v["db1"]))
print("v[\"dW2\"] = \n" + str(v["dW2"]))
print("v[\"db2\"] = v" + str(v["db2"]))
###Output
W1 =
[[ 1.62544598 -0.61290114 -0.52907334]
[-1.07347112 0.86450677 -2.30085497]]
b1 =
[[ 1.74493465]
[-0.76027113]]
W2 =
[[ 0.31930698 -0.24990073 1.4627996 ]
[-2.05974396 -0.32173003 -0.38320915]
[ 1.13444069 -1.0998786 -0.1713109 ]]
b2 =
[[-0.87809283]
[ 0.04055394]
[ 0.58207317]]
v["dW1"] =
[[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]]
v["db1"] =
[[-0.01228902]
[-0.09357694]]
v["dW2"] =
[[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]]
v["db2"] = v[[ 0.02344157]
[ 0.16598022]
[ 0.07420442]]
###Markdown
**Expected Output**:```W1 = [[ 1.62544598 -0.61290114 -0.52907334] [-1.07347112 0.86450677 -2.30085497]]b1 = [[ 1.74493465] [-0.76027113]]W2 = [[ 0.31930698 -0.24990073 1.4627996 ] [-2.05974396 -0.32173003 -0.38320915] [ 1.13444069 -1.0998786 -0.1713109 ]]b2 = [[-0.87809283] [ 0.04055394] [ 0.58207317]]v["dW1"] = [[-0.11006192 0.11447237 0.09015907] [ 0.05024943 0.09008559 -0.06837279]]v["db1"] = [[-0.01228902] [-0.09357694]]v["dW2"] = [[-0.02678881 0.05303555 -0.06916608] [-0.03967535 -0.06871727 -0.08452056] [-0.06712461 -0.00126646 -0.11173103]]v["db2"] = v[[ 0.02344157] [ 0.16598022] [ 0.07420442]]``` **Note** that:- The velocity is initialized with zeros. So the algorithm will take a few iterations to "build up" velocity and start to take bigger steps.- If $\beta = 0$, then this just becomes standard gradient descent without momentum. **How do you choose $\beta$?**- The larger the momentum $\beta$ is, the smoother the update because the more we take the past gradients into account. But if $\beta$ is too big, it could also smooth out the updates too much. - Common values for $\beta$ range from 0.8 to 0.999. If you don't feel inclined to tune this, $\beta = 0.9$ is often a reasonable default. - Tuning the optimal $\beta$ for your model might need trying several values to see what works best in term of reducing the value of the cost function $J$. **What you should remember**:- Momentum takes past gradients into account to smooth out the steps of gradient descent. It can be applied with batch gradient descent, mini-batch gradient descent or stochastic gradient descent.- You have to tune a momentum hyperparameter $\beta$ and a learning rate $\alpha$. 4 - AdamAdam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp (described in lecture) and Momentum. **How does Adam work?**1. It calculates an exponentially weighted average of past gradients, and stores it in variables $v$ (before bias correction) and $v^{corrected}$ (with bias correction). 2. It calculates an exponentially weighted average of the squares of the past gradients, and stores it in variables $s$ (before bias correction) and $s^{corrected}$ (with bias correction). 3. It updates parameters in a direction based on combining information from "1" and "2".The update rule is, for $l = 1, ..., L$: $$\begin{cases}v_{dW^{[l]}} = \beta_1 v_{dW^{[l]}} + (1 - \beta_1) \frac{\partial \mathcal{J} }{ \partial W^{[l]} } \\v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1 - (\beta_1)^t} \\s_{dW^{[l]}} = \beta_2 s_{dW^{[l]}} + (1 - \beta_2) (\frac{\partial \mathcal{J} }{\partial W^{[l]} })^2 \\s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1 - (\beta_2)^t} \\W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}}} + \varepsilon}\end{cases}$$where:- t counts the number of steps taken of Adam - L is the number of layers- $\beta_1$ and $\beta_2$ are hyperparameters that control the two exponentially weighted averages. - $\alpha$ is the learning rate- $\varepsilon$ is a very small number to avoid dividing by zeroAs usual, we will store all parameters in the `parameters` dictionary **Exercise**: Initialize the Adam variables $v, s$ which keep track of the past information.**Instruction**: The variables $v, s$ are python dictionaries that need to be initialized with arrays of zeros. Their keys are the same as for `grads`, that is:for $l = 1, ..., L$:```pythonv["dW" + str(l+1)] = ... 
(numpy array of zeros with the same shape as parameters["W" + str(l+1)])v["db" + str(l+1)] = ... (numpy array of zeros with the same shape as parameters["b" + str(l+1)])s["dW" + str(l+1)] = ... (numpy array of zeros with the same shape as parameters["W" + str(l+1)])s["db" + str(l+1)] = ... (numpy array of zeros with the same shape as parameters["b" + str(l+1)])```
###Code
# GRADED FUNCTION: initialize_adam
def initialize_adam(parameters) :
"""
Initializes v and s as two python dictionaries with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters["W" + str(l)] = Wl
parameters["b" + str(l)] = bl
Returns:
v -- python dictionary that will contain the exponentially weighted average of the gradient.
v["dW" + str(l)] = ...
v["db" + str(l)] = ...
s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
s["dW" + str(l)] = ...
s["db" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
s = {}
# Initialize v, s. Input: "parameters". Outputs: "v, s".
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
s["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
s["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
### END CODE HERE ###
return v, s
parameters = initialize_adam_test_case()
v, s = initialize_adam(parameters)
print("v[\"dW1\"] = \n" + str(v["dW1"]))
print("v[\"db1\"] = \n" + str(v["db1"]))
print("v[\"dW2\"] = \n" + str(v["dW2"]))
print("v[\"db2\"] = \n" + str(v["db2"]))
print("s[\"dW1\"] = \n" + str(s["dW1"]))
print("s[\"db1\"] = \n" + str(s["db1"]))
print("s[\"dW2\"] = \n" + str(s["dW2"]))
print("s[\"db2\"] = \n" + str(s["db2"]))
###Output
v["dW1"] =
[[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db1"] =
[[ 0.]
[ 0.]]
v["dW2"] =
[[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db2"] =
[[ 0.]
[ 0.]
[ 0.]]
s["dW1"] =
[[ 0. 0. 0.]
[ 0. 0. 0.]]
s["db1"] =
[[ 0.]
[ 0.]]
s["dW2"] =
[[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
s["db2"] =
[[ 0.]
[ 0.]
[ 0.]]
###Markdown
**Expected Output**:```v["dW1"] = [[ 0. 0. 0.] [ 0. 0. 0.]]v["db1"] = [[ 0.] [ 0.]]v["dW2"] = [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]]v["db2"] = [[ 0.] [ 0.] [ 0.]]s["dW1"] = [[ 0. 0. 0.] [ 0. 0. 0.]]s["db1"] = [[ 0.] [ 0.]]s["dW2"] = [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]]s["db2"] = [[ 0.] [ 0.] [ 0.]]``` **Exercise**: Now, implement the parameters update with Adam. Recall the general update rule is, for $l = 1, ..., L$: $$\begin{cases}v_{W^{[l]}} = \beta_1 v_{W^{[l]}} + (1 - \beta_1) \frac{\partial J }{ \partial W^{[l]} } \\v^{corrected}_{W^{[l]}} = \frac{v_{W^{[l]}}}{1 - (\beta_1)^t} \\s_{W^{[l]}} = \beta_2 s_{W^{[l]}} + (1 - \beta_2) (\frac{\partial J }{\partial W^{[l]} })^2 \\s^{corrected}_{W^{[l]}} = \frac{s_{W^{[l]}}}{1 - (\beta_2)^t} \\W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{W^{[l]}}}{\sqrt{s^{corrected}_{W^{[l]}}}+\varepsilon}\end{cases}$$**Note** that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding.
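A quick worked check of the bias correction: at the first step $t=1$, $v_{W^{[l]}} = (1-\beta_1)\frac{\partial J}{\partial W^{[l]}}$, so $v^{corrected}_{W^{[l]}} = \frac{(1-\beta_1)}{1-(\beta_1)^1}\frac{\partial J}{\partial W^{[l]}} = \frac{\partial J}{\partial W^{[l]}}$, which exactly undoes the bias toward zero caused by initializing $v$ at zero. As $t$ grows, $1-(\beta_1)^t$ approaches 1 and the correction fades away; the same reasoning applies to $s$ with $\beta_2$.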
###Code
# GRADED FUNCTION: update_parameters_with_adam
def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate=0.01,
beta1=0.9, beta2=0.999, epsilon=1e-8):
"""
Update parameters using Adam
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
learning_rate -- the learning rate, scalar.
beta1 -- Exponential decay hyperparameter for the first moment estimates
beta2 -- Exponential decay hyperparameter for the second moment estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
Returns:
parameters -- python dictionary containing your updated parameters
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
"""
L = len(parameters) // 2 # number of layers in the neural networks
v_corrected = {} # Initializing first moment estimate, python dictionary
s_corrected = {} # Initializing second moment estimate, python dictionary
# Perform Adam update on all parameters
for l in range(L):
# Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v".
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l + 1)] = beta1 * v["dW" + str(l + 1)] + (1 - beta1) * grads['dW' + str(l + 1)]
v["db" + str(l + 1)] = beta1 * v["db" + str(l + 1)] + (1 - beta1) * grads['db' + str(l + 1)]
### END CODE HERE ###
# Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected".
### START CODE HERE ### (approx. 2 lines)
v_corrected["dW" + str(l + 1)] = v["dW" + str(l + 1)] / (1 - np.power(beta1, t))
v_corrected["db" + str(l + 1)] = v["db" + str(l + 1)] / (1 - np.power(beta1, t))
### END CODE HERE ###
# Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s".
### START CODE HERE ### (approx. 2 lines)
s["dW" + str(l + 1)] = beta2 * s["dW" + str(l + 1)] + (1 - beta2) * np.power(grads['dW' + str(l + 1)], 2)
s["db" + str(l + 1)] = beta2 * s["db" + str(l + 1)] + (1 - beta2) * np.power(grads['db' + str(l + 1)], 2)
### END CODE HERE ###
# Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected".
### START CODE HERE ### (approx. 2 lines)
s_corrected["dW" + str(l + 1)] = s["dW" + str(l + 1)] / (1 - np.power(beta2, t))
s_corrected["db" + str(l + 1)] = s["db" + str(l + 1)] / (1 - np.power(beta2, t))
### END CODE HERE ###
# Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters".
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)] - (learning_rate * v_corrected["dW" + str(l + 1)]) / (np.sqrt(s_corrected["dW" + str(l + 1)]) + epsilon)
parameters["b" + str(l + 1)] = parameters["b" + str(l + 1)] - (learning_rate * v_corrected["db" + str(l + 1)]) / (np.sqrt(s_corrected["db" + str(l + 1)]) + epsilon)
### END CODE HERE ###
return parameters, v, s
parameters, grads, v, s = update_parameters_with_adam_test_case()
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2)
print("W1 = \n" + str(parameters["W1"]))
print("b1 = \n" + str(parameters["b1"]))
print("W2 = \n" + str(parameters["W2"]))
print("b2 = \n" + str(parameters["b2"]))
print("v[\"dW1\"] = \n" + str(v["dW1"]))
print("v[\"db1\"] = \n" + str(v["db1"]))
print("v[\"dW2\"] = \n" + str(v["dW2"]))
print("v[\"db2\"] = \n" + str(v["db2"]))
print("s[\"dW1\"] = \n" + str(s["dW1"]))
print("s[\"db1\"] = \n" + str(s["db1"]))
print("s[\"dW2\"] = \n" + str(s["dW2"]))
print("s[\"db2\"] = \n" + str(s["db2"]))
###Output
W1 =
[[ 1.63178673 -0.61919778 -0.53561312]
[-1.08040999 0.85796626 -2.29409733]]
b1 =
[[ 1.75225313]
[-0.75376553]]
W2 =
[[ 0.32648046 -0.25681174 1.46954931]
[-2.05269934 -0.31497584 -0.37661299]
[ 1.14121081 -1.09244991 -0.16498684]]
b2 =
[[-0.88529979]
[ 0.03477238]
[ 0.57537385]]
v["dW1"] =
[[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]]
v["db1"] =
[[-0.01228902]
[-0.09357694]]
v["dW2"] =
[[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]]
v["db2"] =
[[ 0.02344157]
[ 0.16598022]
[ 0.07420442]]
s["dW1"] =
[[ 0.00121136 0.00131039 0.00081287]
[ 0.0002525 0.00081154 0.00046748]]
s["db1"] =
[[ 1.51020075e-05]
[ 8.75664434e-04]]
s["dW2"] =
[[ 7.17640232e-05 2.81276921e-04 4.78394595e-04]
[ 1.57413361e-04 4.72206320e-04 7.14372576e-04]
[ 4.50571368e-04 1.60392066e-07 1.24838242e-03]]
s["db2"] =
[[ 5.49507194e-05]
[ 2.75494327e-03]
[ 5.50629536e-04]]
###Markdown
**Expected Output**:```W1 = [[ 1.63178673 -0.61919778 -0.53561312] [-1.08040999 0.85796626 -2.29409733]]b1 = [[ 1.75225313] [-0.75376553]]W2 = [[ 0.32648046 -0.25681174 1.46954931] [-2.05269934 -0.31497584 -0.37661299] [ 1.14121081 -1.09245036 -0.16498684]]b2 = [[-0.88529978] [ 0.03477238] [ 0.57537385]]v["dW1"] = [[-0.11006192 0.11447237 0.09015907] [ 0.05024943 0.09008559 -0.06837279]]v["db1"] = [[-0.01228902] [-0.09357694]]v["dW2"] = [[-0.02678881 0.05303555 -0.06916608] [-0.03967535 -0.06871727 -0.08452056] [-0.06712461 -0.00126646 -0.11173103]]v["db2"] = [[ 0.02344157] [ 0.16598022] [ 0.07420442]]s["dW1"] = [[ 0.00121136 0.00131039 0.00081287] [ 0.0002525 0.00081154 0.00046748]]s["db1"] = [[ 1.51020075e-05] [ 8.75664434e-04]]s["dW2"] = [[ 7.17640232e-05 2.81276921e-04 4.78394595e-04] [ 1.57413361e-04 4.72206320e-04 7.14372576e-04] [ 4.50571368e-04 1.60392066e-07 1.24838242e-03]]s["db2"] = [[ 5.49507194e-05] [ 2.75494327e-03] [ 5.50629536e-04]]``` You now have three working optimization algorithms (mini-batch gradient descent, Momentum, Adam). Let's implement a model with each of these optimizers and observe the difference. 5 - Model with different optimization algorithmsLets use the following "moons" dataset to test the different optimization methods. (The dataset is named "moons" because the data from each of the two classes looks a bit like a crescent-shaped moon.)
###Code
train_X, train_Y = load_dataset()
###Output
_____no_output_____
###Markdown
We have already implemented a 3-layer neural network. You will train it with: - Mini-batch **Gradient Descent**: it will call your function: - `update_parameters_with_gd()`- Mini-batch **Momentum**: it will call your functions: - `initialize_velocity()` and `update_parameters_with_momentum()`- Mini-batch **Adam**: it will call your functions: - `initialize_adam()` and `update_parameters_with_adam()`
###Code
def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):
"""
3-layer neural network model which can be run in different optimizer modes.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
layers_dims -- python list, containing the size of each layer
learning_rate -- the learning rate, scalar.
mini_batch_size -- the size of a mini batch
beta -- Momentum hyperparameter
beta1 -- Exponential decay hyperparameter for the past gradients estimates
beta2 -- Exponential decay hyperparameter for the past squared gradients estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
num_epochs -- number of epochs
print_cost -- True to print the cost every 1000 epochs
Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(layers_dims) # number of layers in the neural networks
costs = [] # to keep track of the cost
t = 0 # initializing the counter required for Adam update
seed = 10 # For grading purposes, so that your "random" minibatches are the same as ours
m = X.shape[1] # number of training examples
# Initialize parameters
parameters = initialize_parameters(layers_dims)
# Initialize the optimizer
if optimizer == "gd":
pass # no initialization required for gradient descent
elif optimizer == "momentum":
v = initialize_velocity(parameters)
elif optimizer == "adam":
v, s = initialize_adam(parameters)
# Optimization loop
for i in range(num_epochs):
# Define the random minibatches. We increment the seed to reshuffle differently the dataset after each epoch
seed = seed + 1
minibatches = random_mini_batches(X, Y, mini_batch_size, seed)
cost_total = 0
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# Forward propagation
a3, caches = forward_propagation(minibatch_X, parameters)
# Compute cost and add to the cost total
cost_total += compute_cost(a3, minibatch_Y)
# Backward propagation
grads = backward_propagation(minibatch_X, minibatch_Y, caches)
# Update parameters
if optimizer == "gd":
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
elif optimizer == "momentum":
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
elif optimizer == "adam":
t = t + 1 # Adam counter
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,
t, learning_rate, beta1, beta2, epsilon)
cost_avg = cost_total / m
# Print the cost every 1000 epoch
if print_cost and i % 1000 == 0:
print ("Cost after epoch %i: %f" %(i, cost_avg))
if print_cost and i % 100 == 0:
costs.append(cost_avg)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('epochs (per 100)')
plt.title("Learning rate = " + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
You will now run this 3 layer neural network with each of the 3 optimization methods. 5.1 - Mini-batch Gradient descentRun the following code to see how the model does with mini-batch gradient descent.
###Code
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "gd")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Gradient Descent optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
5.2 - Mini-batch gradient descent with momentumRun the following code to see how the model does with momentum. Because this example is relatively simple, the gains from using momentum are small; but for more complex problems you might see bigger gains.
###Code
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = "momentum")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
5.3 - Mini-batch with Adam modeRun the following code to see how the model does with Adam.
###Code
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "adam")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Adam optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____ |
jupyter_notebooks/ploitly tries/Untitled.ipynb | ###Markdown
Function to return the helmet images according to the users' gender percentages
###Code
import pandas as pd
df_ecobici = pd.read_csv("../../data/production_data/viajes_ecobici.csv")
df_ecobici.head()
CVE_AGEB_arribo_ = 745
CVE_AGEB_retiro_ = 120
def nombre_archivo(CVE_AGEB_arribo_, CVE_AGEB_retiro_):
"""
Return the helmet image path matching the average gender split of trips between the given arrival and departure AGEB codes.
"""
query_genero = df_ecobici[(df_ecobici["CVE_AGEB_arribo_"]==CVE_AGEB_arribo_) &
( df_ecobici["CVE_AGEB_retiro_"]==CVE_AGEB_retiro_)][["porcentage_hombres", "porcentage_mujeres"]].mean().round().astype(int)
numero_hombres = query_genero[0]
# helmet image filenames indexed by the rounded number of male users (0 to 10)
archivos = ["10m-0h.png", "9m-1h.png", "8m-2h.png", "7m-3h.png", "6m-4h.png",
"5m-5h.png", "4m-6h.png", "3m-7h.png", "2m-8h.png", "1m-9h.png", "0m-1h.png"]
archivo = archivos[numero_hombres]
return str("./assets/cascos/" + archivo)
ejemplo_path = nombre_archivo(CVE_AGEB_arribo_, CVE_AGEB_retiro_)
ejemplo_path
%pylab inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
img=mpimg.imread(str("../../"+ejemplo_path))
imgplot = plt.imshow(img)
plt.show()
###Output
Populating the interactive namespace from numpy and matplotlib
|
classification/simple_classification.ipynb | ###Markdown
Naive implementation of Naive Bayes$$P(class \mid features) \propto P(class)\prod_{i} P(feature_i \mid class)$$
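Since the evidence term $P(features)$ is the same for every class, the code below simply scores each class with $$\hat{c} = \arg\max_{c}\; P(c)\prod_{i} P(f_i \mid c),$$ using the Laplace-smoothed (+1) word frequencies per category as $P(f_i \mid c)$. With many active features, summing log-probabilities would be numerically safer than this direct product, but the plain product is kept here for simplicity.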
###Code
size = int(I.shape[0]*0.8)
train, test = I.iloc[:size], I.iloc[size:]
###Output
_____no_output_____
###Markdown
Priors
###Code
categories = train.category.unique()
categories
K = {}
for c in categories:
K[c] = train[train.category == c]
priors = np.array([K[c].shape[0] for c in categories])
priors = priors / priors.sum()
priors
###Output
_____no_output_____
###Markdown
Likelihood
###Code
models = {}
for c in categories:
models[c] = K[c][K[c].columns.difference(['doc', 'url', 'category'])].sum() + 1
for c, m in models.items():
models[c] = m / m.sum()
train_true = train.category.values
test_true = test.category.values
train_pred, test_pred = [], []
for i, row in train[train.columns.difference([
'doc', 'url', 'category'])].iterrows():
predictions = np.ones(len(categories))
for k, _ in [(x, y) for x, y in row.items() if y > 0]:
for j, z in enumerate(categories):
predictions[j] *= models[z][k]
pred = np.argmax(predictions * priors)
train_pred.append(categories[pred])
for i, row in test[test.columns.difference([
'doc', 'url', 'category'])].iterrows():
predictions = np.ones(len(categories))
for k, _ in [(x, y) for x, y in row.items() if y > 0]:
for j, z in enumerate(categories):
predictions[j] *= models[z][k]
pred = np.argmax(predictions * priors)
test_pred.append(categories[pred])
def cm_plot(ax, classes, CM, title, figure):
im = ax.imshow(CM, interpolation='nearest', cmap=plt.cm.Blues)
divider = make_axes_locatable(ax)
cax = divider.append_axes('right', size='5%', pad=0.05)
figure.colorbar(im, cax=cax, orientation='vertical')
tick_marks = np.arange(len(classes))
ax.set_xticks(tick_marks)
ax.set_xticklabels(classes, rotation=90, fontsize=12)
ax.set_yticks(tick_marks)
ax.set_yticklabels(classes, rotation=0, fontsize=12)
ax.set_title(title, fontsize=16)
thresh = CM.max() / 2.
for i, j in itertools.product(range(CM.shape[0]), range(CM.shape[1])):
ax.text(j, i, CM[i, j], horizontalalignment="center",
color="white" if CM[i, j] > thresh else "black", fontsize=12)
ax.set_ylabel('True label', fontsize=16)
ax.set_xlabel('Predicted label', fontsize=16)
cm_train = metrics.confusion_matrix(train_true, train_pred, labels=categories)
cm_test = metrics.confusion_matrix(test_true, test_pred, labels=categories)
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(14, 7))
cm_plot(axes[0], categories, cm_train, 'Train set', fig)
cm_plot(axes[1], categories, cm_test, 'Test set', fig)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Visualize results
###Code
images = {'image': [], 'true': [], 'predicted': [], 'url': []}
for j, (i, row) in enumerate(test.iterrows()):
images['image'].append('<img src="{}">'.format(row.url))
images['true'].append(row.category)
images['predicted'].append(test_pred[j])
images['url'].append(row.url)
rows = []
for i, image in enumerate(images['image']):
row = '<tr><td>{}</td><td>{}</td><td>{}</td></tr>'.format(
image, images['true'][i], images['predicted'][i]
)
rows.append(row)
table = "<table><tr><th>{}</th><th>{}</th><th>{}</th></tr>{}</table>".format(
'image', 'true', 'predicted', "".join(rows)
)
display(HTML(table))
###Output
_____no_output_____ |
extra-course-parts/10_Modules_Extra.ipynb | ###Markdown
10 Modules Extra 10.1 IntroductionWe know how to make functions, but how can you re-use them? Imagine that you've started writing code and functions in one file and the project has grown to such an extent that it would be easier to maintain it in different files each containing a specific part of the project. Or you want to re-use some of the functions in other projects as well. In Python you can import functions and chunks of code from files. Such a file containing the functions is called a *module*. Generally we say that we import a *definition* from a *module*. A module can have one or multiple functions in it. The file name is the module name with the suffix `.py` appended. Using the code from this module is possible by using **import**. In this way you can import your own functions, but also draw on a very extensive library of functions provided by Python (built-in modules). In this extra sections part we will look at the syntax for imports and how to import your own functions. 10.2 How imports workThe easiest way to import a module looks like this:```pythonimport module1```Imagine that in the module `module1`, there is a function called `getMeanValue()`. This way of importing does not make the name of the function available; it only remembers the module name `module1` which you can than use to access the functions within the module:```pythonimport module1module1.getMeanValue([1,2,3])``` 10.3 How to create your own moduleThe easiest example is importing a module from within the same working directory. Let's create a Python module called `module1.py` with the code of the function `getMeanValue()` that we have written earlier (and you can find here below). **Create a module in Jupyter Lab/Notebook**- In order to create a module in Jupyter Lab, first create a new notebook - Rename the notebook (e.g. 'module1.ipynb') and copy paste the code in the notebook - Click 'File', 'Download as' and 'Python' - Jupyter will not download it in some local folder, copy it to your current working directory (in our case in the same directory as we're in right now). Unfortunately, Jupyter Lab/Notebook doesn't have a streamlined & straightforward way of creating Python modules and Python scripts. When you export the notebook, it will always export the whole Notebook and not just a part of it, which makes it very messy if you have a very large notebook. Import the following code in the `module1.py` file.
###Code
# When you download this as a Python script, Jupyter will automatically insert the environment shebang here.
def getMeanValue(valueList):
"""
Calculate the mean (average) value from a list of values.
Input: list of integers/floats
Output: mean value
"""
valueTotal = 0.0
for value in valueList:
valueTotal += value
numberValues = len(valueList)
return (valueTotal/numberValues)
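# Optional sketch (our addition, not part of the course file): a tiny self-test that
# only runs when module1.py is executed directly, not when it is imported as a module.
if __name__ == '__main__':
    print(getMeanValue([1, 2, 3]))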
###Output
_____no_output_____
###Markdown
10.4 Import syntax We can now use the module we just created by importing it. In this case where we import the whole 'module1' file, we can call the function as a method, similar to the methods for lists and strings that we saw earlier:
###Code
import module1
print(module1.getMeanValue([4,6,77,3,67,54,6,5]))
###Output
_____no_output_____
###Markdown
If we were to write code for a huge project, long names quickly become tedious. Programmers will instinctively make shortcut names for functions they use a lot. Renaming a module on import is therefore a common thing to do (e.g. NumPy as np, pandas as pd, etc.):
###Code
import module1 as m1
print(m1.getMeanValue([4,6,77,3,67,54,6,5]))
###Output
_____no_output_____
###Markdown
When importing a file, Python only searches the current directory, the directory that the entry-point script is running from, and sys.path, which includes locations such as the package installation directory (it's actually a little more complex than this, but this covers most cases). However, you can specify the Python path yourself as well, as sketched at the end of the next code cell. Note that within our folders there is a directory named `modules` and within this folder, there is a module named `module2` (recognizable due to its .py extension). In that module there are two functions: 'getMeanValue' and 'compareMeanValueOfLists'.
###Code
from modules import module2
print(module2.getMeanValue([4,6,77,3,67,54,6,5]))
from modules import module2 as m2
print(m2.getMeanValue([4,6,77,3,67,54,6,5]))
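# Sketch (assumption, not required for this course): you can also extend sys.path
# yourself so Python finds modules in other locations; 'some/other/folder' is a
# hypothetical path used only for illustration.
import sys
sys.path.append('some/other/folder')
print('some/other/folder' in sys.path)  # the folder is now searched on import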
###Output
_____no_output_____
###Markdown
Another way of writing this is with the module's full dotted path. You can also explicitly import a single attribute (e.g. a function) from a module.
###Code
from modules.module2 import compareMeanValueOfLists
print(compareMeanValueOfLists([1,2,3,4,5,6,7], [4,6,77,3,67,54,6,5]))
###Output
_____no_output_____
###Markdown
So here we *import* the function compareMeanValueOfLists (without brackets!) from the file *module2* (without .py extension!).In order to have an overview of all the different functions within a module, use `dir()`:
###Code
dir(module2)
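# A small extra sketch: hide the dunder attributes to see only the module's own names
print([name for name in dir(module2) if not name.startswith('_')])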
###Output
_____no_output_____
###Markdown
--- 10.4.5 Extra exercisesInspect the file `SampleInfo.txt`. Write a program that:- Has a function `readSampleInformationFile()` to read the information from this sample data file into a dictionary. Also check whether the file exists.- Has a function `getSampleIdsForValueRange()` that can extract sample IDs from this dictionary. Print the sample IDs for pH 6.0-7.0, temperature 280-290 and volume 200-220 using this function.---
###Code
import os
def readSampleInformationFile(fileName):
# Read in the sample information file in .csv (comma-delimited) format
# Doublecheck if file exists
if not os.path.exists(fileName):
print(f"File {fileName} does not exist!")
return None
# Open the file and read the information
with open(fileName) as fileHandle:
lines = fileHandle.readlines()
# Now read the information. The first line has the header information which
# we are going to use to create the dictionary!
fileInfoDict = {}
headerCols = lines[0].strip().split(',')
# Now read in the information, use the first column as the key for the dictionary
# Note that you could organise this differently by creating a dictionary with
# the header names as keys, then a list of the values for each of the columns.
for line in lines[1:]:
line = line.strip() # Remove newline characters
cols = line.split(',')
sampleId = int(cols[0])
fileInfoDict[sampleId] = {}
        # Don't use the first column, it is already the key!
for i in range(1,len(headerCols)):
valueName = headerCols[i]
value = cols[i]
if valueName in ('pH','temperature','volume'):
value = float(value)
fileInfoDict[sampleId][valueName] = value
# Return the dictionary with the file information
return fileInfoDict
def getSampleIdsForValueRange(fileInfoDict,valueName,lowValue,highValue):
# Return the sample IDs that fit within the given value range for a kind of value
#sampleIdList = fileInfoDict.keys()
#sampleIdList.sort()
sampleIdList = sorted(fileInfoDict.keys())
sampleIdsFound = []
for sampleId in sampleIdList:
currentValue = fileInfoDict[sampleId][valueName]
if lowValue <= currentValue <= highValue:
sampleIdsFound.append(sampleId)
return sampleIdsFound
if __name__ == '__main__':
fileInfoDict = readSampleInformationFile("../data/SampleInfo.txt")
print(getSampleIdsForValueRange(fileInfoDict,'pH',6.0,7.0))
print(getSampleIdsForValueRange(fileInfoDict,'temperature',280,290))
print(getSampleIdsForValueRange(fileInfoDict,'volume',200,220))
###Output
_____no_output_____ |
weather_vacation/.ipynb_checkpoints/VacationPy-checkpoint.ipynb | ###Markdown
VacationPy---- Note* Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing.* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
# Import API key
from api_keys import g_key
###Output
_____no_output_____
###Markdown
Store Part I results into DataFrame* Load the csv exported in Part I to a DataFrame
###Code
# Load the city_data.csv file from output folder
ct_df = pd.read_csv("../output_data/city_data.csv")
ct_df
###Output
_____no_output_____
###Markdown
Humidity Heatmap* Configure gmaps.* Use the Lat and Lng as locations and Humidity as the weight.* Add Heatmap layer to map.
###Code
# Configure gmaps
gmaps.configure(api_key=g_key)
# Store latitude and longitude in locations
locations = ct_df[["Lat", "Lng"]]
# Store Humidity in humidity
humidity = ct_df["Humidity"]
# Plot Heatmap
fig = gmaps.figure(center=(20, 150), zoom_level=1.6)
# Create heat layer
heat_layer = gmaps.heatmap_layer(locations, weights = humidity,dissipating=False, max_intensity=max(humidity), point_radius=3)
# Add layer
fig.add_layer(heat_layer)
# Display figure
fig
###Output
_____no_output_____
###Markdown
Create new DataFrame fitting weather criteria* Narrow down the cities to fit weather conditions.* Drop any rows will null values.
###Code
# Narrow down the cities to fit ideal weather:
# A max temperature lower than 80 degrees but higher than 70.
# Wind speed less than 10 mph.
# Zero cloudiness.
# Humidity is lower than 80% but greater than 50.
select_city_df = ct_df.loc[(ct_df["Max Temp"] >= 70) & (ct_df["Max Temp"] <= 80) &\
(ct_df["Wind Speed"] <= 10) & (ct_df["Cloudiness"] == 0) &\
(ct_df["Humidity"] >= 50) & (ct_df["Humidity"] <= 80)]
# Drop any rows that don't contain all above conditions
select_city_df = select_city_df.dropna()
select_city_df.reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Hotel Map* Store into variable named `hotel_df`.* Add a "Hotel Name" column to the DataFrame.* Set parameters to search for hotels with 5000 meters.* Hit the Google Places API for each city's coordinates.* Store the first Hotel result into the DataFrame.* Plot markers on top of the heatmap.
###Code
# Create a hotel_df
hotel_df = select_city_df.reset_index(drop=True)
# Add a "Hotel Name" column to the DataFrame.
hotel_df["Hotel Name"] = ""
# Display the result
hotel_df
# Build URL using the Google Maps API
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
params = {"type" : "hotel",
"keyword" : "hotel",
"radius" : 5000,
"key" : g_key}
# Loop through the cities_pd and run a hotel search for each city
for index, row in hotel_df.iterrows():
# Extract latitude and longitude and the city name
lat = row["Lat"]
lng = row["Lng"]
city_name = row["City"]
# update address key value
params["location"] = f"{lat},{lng}"
# make request
response = requests.get(base_url, params=params).json()
    # test if there is an available hotel in the city
try:
hotel_name = response["results"][0]["name"]
print(f"Closest hotel in {city_name} is {hotel_name}.")
hotel_df.loc[index, "Hotel Name"] = hotel_name
    # if there is no hotel available, print a message instead
except (KeyError, IndexError):
print(f"No Hotel available in {city_name}")
# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]
locations = hotel_df[["Lat", "Lng"]]
# Add marker layer ontop of heat map
markers = gmaps.marker_layer(locations, info_box_content = hotel_info)
# Add the layer to the map
fig.add_layer(markers)
# Display figure
fig
###Output
_____no_output_____ |
FashionMNIST/Generative Adversarial Networks Files/[2] Student Configurations on FashionMNIST using GANs.ipynb | ###Markdown
Distilling Knowledge in Multiple Students Using Generative Models
###Code
# %tensorflow_version 1.x
# !pip install --upgrade opencv-python==3.4.2.17
import numpy as np
import tensorflow as tf
import tensorflow.keras
import tensorflow.keras.backend as K
# import os
from tensorflow.keras.datasets import fashion_mnist,mnist,cifar10
# import keras.backend as K
from tensorflow.keras.layers import Conv2D,Activation,BatchNormalization,UpSampling2D,Embedding,ZeroPadding2D, Input, Flatten, Dense, Reshape, LeakyReLU, Dropout,MaxPooling2D
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam, SGD, RMSprop
from tensorflow.keras import regularizers
from tensorflow.keras.utils import Progbar
from tensorflow.keras.initializers import RandomNormal  # use tensorflow.keras consistently rather than standalone keras
import random
from sklearn.model_selection import train_test_split
# from keras.utils import np_utils
from tensorflow.keras import utils as np_utils
#Loading and splitting the dataset into train, validation and test
nb_classes = 10
(X_Train, y_Train), (X_test, y_test) = fashion_mnist.load_data()
X_train, X_val, y_train, y_val = train_test_split(X_Train, y_Train, test_size=0.20)
# convert y_train and y_test to categorical binary values
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_val = np_utils.to_categorical(y_val, nb_classes)
X_train = X_train.reshape(48000, 28, 28, 1)
X_val = X_val.reshape(12000, 28, 28, 1)
X_train = X_train.astype('float32')
X_val = X_val.astype('float32')
# Normalize the values
X_train /= 255
X_val /= 255
#Creating a teacher network
input_shape = (28, 28, 1) # Input shape of each image
teacher = Sequential()
teacher.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
teacher.add(Conv2D(64, (3, 3), activation='relu'))
teacher.add(MaxPooling2D(pool_size=(2, 2)))
teacher.add(Dropout(0.25)) # For regularization
teacher.add(Flatten())
teacher.add(Dense(256, activation='relu'))
teacher.add(Dense(256, activation='relu', name="dense_1"))
teacher.add(Dropout(0.5)) # For regularization
teacher.add(Dense(nb_classes, name = 'dense_2'))
teacher.add(Activation('softmax')) # Note that we add a normal softmax layer to begin with
teacher.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print(teacher.summary())
# Train the teacher model as usual
epochs = 20
batch_size = 256
teacher.fit(X_train, Y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(X_val, Y_val))
teacher.save_weights("Teacher_FMNIST_92.h5")
Y_test = np_utils.to_categorical(y_test, nb_classes)
X_test = X_test.reshape(10000, 28, 28, 1)
X_test = X_test.astype('float32')
X_test /= 255
teacher.evaluate(X_test,Y_test)
teacher_WO_Softmax = Model(teacher.input, teacher.get_layer('dense_1').output)
train_dense = teacher_WO_Softmax.predict(X_train)
val_dense = teacher_WO_Softmax.predict(X_val)
# 2 Students case
# ---------------------------------------------
s1Train=train_dense[:,:128]
s2Train=train_dense[:,128:]
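# The teacher's 'dense_1' layer outputs 256-d features, so each student gets one
# 128-d half of that representation as its regression target (s1 the first half,
# s2 the second half).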
###Output
_____no_output_____
###Markdown
GANs' Training
###Code
BATCH_SIZE=32
def smooth_real_labels(y):
return y - 0.3+(np.random.random(y.shape)*0.5)
def smooth_fake_labels(y):
return y + (0.3 * np.random.random(y.shape))
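# Note: these label-smoothing helpers (real labels in [0.7, 1.2), fake labels in
# [0.0, 0.3)) are defined but never called in the training loop below; they could
# optionally be applied to y_real / y_fake there.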
def build_gan(gen,disc):
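    # Combined model: image -> student ("generator") features -> frozen discriminator.
    # Training it pushes the student to fool the discriminator (binary_crossentropy)
    # while also matching the teacher's 128-d feature slice (mse).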
disc.trainable = False
input= Input(shape=input_shape)
output = gen(input)
output2= disc(output)
gan=Model(input,output2)
gan.compile(Adam(lr=0.0002),loss=['binary_crossentropy','mse'],metrics=['accuracy'])
return gan
def build_sdiscriminator():
input2 = Input(shape=(128,),name='input')
inp=Dense(64,use_bias=False)(input2)
leaky_relu = LeakyReLU(alpha=0.2)(inp)
conv3 = Dense(128)(leaky_relu)
b_n = BatchNormalization()(conv3)
leaky_relu = LeakyReLU(alpha=0.2)(b_n)
conv4 = Dense(256)(leaky_relu)
b_n = BatchNormalization()(conv4)
leaky_relu = LeakyReLU(alpha=0.2)(b_n)
conv4 = Dense(512)(leaky_relu)
b_n = BatchNormalization()(conv4)
leaky_relu = LeakyReLU(alpha=0.2)(b_n)
conv4 = Dense(1024)(leaky_relu)
b_n = BatchNormalization()(conv4)
leaky_relu = LeakyReLU(alpha=0.2)(b_n)
dense = Dense(1,activation='sigmoid',name='dense')(leaky_relu)
output2=Dense(128)(leaky_relu)
disc = Model(input2,[dense,output2])
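    # Two heads: 'dense' scores whether a 128-d feature vector comes from the teacher
    # (real) or a student (fake), while 'output2' regresses the teacher feature slice,
    # serving as an auxiliary target during distillation.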
disc.compile(optd,loss=['binary_crossentropy','mse'],metrics=['accuracy'])
return disc
optd = Adam(lr=0.0002)
opt = Adam(lr=0.0002)
def build_sgenerator(name):
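    # The "generator" here is a small student CNN: it maps a 28x28x1 image to a
    # 128-d feature vector (layer 'req'+name) matching one half of the teacher's
    # dense_1 output.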
student1 = Sequential()
student1.add(Conv2D(32, kernel_size=(3, 3),activation='relu',input_shape=(28, 28, 1),kernel_initializer='normal', name=name))
student1.add(Conv2D(32, (3, 3), activation='relu',kernel_initializer='normal'))
student1.add(MaxPooling2D(pool_size=(2, 2)))
student1.add(Conv2D(16, kernel_size=(3, 3),activation='relu',kernel_initializer='normal'))
student1.add(Conv2D(16, (3, 3), activation='relu',kernel_initializer='normal'))
student1.add(MaxPooling2D(pool_size=(2, 2)))
    student1.add(Dropout(0.25)) # For regularization
student1.add(Flatten())
student1.add(Dense(16, activation='relu'))
student1.add(Dropout(0.3))
student1.add(Dense(128,name='req'+name))
student1.compile(opt,loss='mse',metrics=['accuracy'])
student1.summary()
return student1
def training(generator,discriminator,gan,features,epo=20):
BATCH_SIZE = 128
discriminator.trainable = True
total_size = X_train.shape[0]
indices = np.arange(0,total_size ,BATCH_SIZE)
all_disc_loss = []
all_gen_loss = []
all_class_loss=[]
if total_size % BATCH_SIZE:
indices = indices[:-1]
for e in range(epo):
progress_bar = Progbar(target=len(indices))
np.random.shuffle(indices)
epoch_gen_loss = []
epoch_disc_loss = []
epoch_class_loss= []
for i,index in enumerate(indices):
inputs=X_train[index:index+BATCH_SIZE]
strain = features[index:index+BATCH_SIZE]
y_real = np.ones((BATCH_SIZE,1))
y_fake = np.zeros((BATCH_SIZE,1))
            # Generate "fake" samples with the student network; these are 128-d feature vectors, not images (despite the variable name)
fake_images = generator.predict_on_batch(inputs)
            # Discriminator training
disc_real_loss1,_,disc_real_loss2,_,_= discriminator.train_on_batch(strain,[y_real,strain])
disc_fake_loss1,_,disc_fake_loss2,_,_= discriminator.train_on_batch(fake_images,[y_fake,strain])
            # GAN training: update the student through the frozen discriminator
discriminator.trainable = False
gan_loss,_,gan_loss2,_,_ = gan.train_on_batch(inputs, [y_real,strain])
discriminator.trainable = True
disc_loss = (disc_fake_loss1 + disc_real_loss1)/2
epoch_disc_loss.append(disc_loss)
progress_bar.update(i+1)
epoch_gen_loss.append((gan_loss))
avg_epoch_disc_loss = np.array(epoch_disc_loss).mean()
avg_epoch_gen_loss = np.array(epoch_gen_loss).mean()
all_disc_loss.append(avg_epoch_disc_loss)
all_gen_loss.append(avg_epoch_gen_loss)
print("Epoch: %d | Discriminator Loss: %f | Generator Loss: %f | " % (e+1,avg_epoch_disc_loss,avg_epoch_gen_loss))
return generator
discriminator1 = build_sdiscriminator()
discriminator2 = build_sdiscriminator()
s1=build_sgenerator("s1")
s2=build_sgenerator('s2')
# s3=build_sgenerator("s3")
# s4=build_sgenerator('s4')
gan1 = build_gan(s1,discriminator1)
gan2 = build_gan(s2,discriminator2)
s1 = training(s1,discriminator1,gan1,s1Train,epo=55)
s2 = training(s2,discriminator2,gan2,s2Train,epo=55)
###Output
375/375 [==============================] - 15s 35ms/step
Epoch: 1 | Discriminator Loss: 0.484974 | Generator Loss: 1.522924 |
375/375 [==============================] - 13s 35ms/step
Epoch: 2 | Discriminator Loss: 0.232264 | Generator Loss: 1.050430 |
375/375 [==============================] - 13s 34ms/step
Epoch: 3 | Discriminator Loss: 0.185770 | Generator Loss: 0.835174 |
375/375 [==============================] - 13s 34ms/step
Epoch: 4 | Discriminator Loss: 0.165592 | Generator Loss: 0.767950 |
375/375 [==============================] - 13s 35ms/step
Epoch: 5 | Discriminator Loss: 0.154636 | Generator Loss: 0.755518 |
375/375 [==============================] - 13s 35ms/step
Epoch: 6 | Discriminator Loss: 0.146981 | Generator Loss: 0.735421 |
375/375 [==============================] - 13s 36ms/step
Epoch: 7 | Discriminator Loss: 0.141064 | Generator Loss: 0.720830 |
375/375 [==============================] - 13s 34ms/step
Epoch: 8 | Discriminator Loss: 0.137847 | Generator Loss: 0.699222 |
375/375 [==============================] - 13s 35ms/step
Epoch: 9 | Discriminator Loss: 0.134544 | Generator Loss: 0.679605 |
375/375 [==============================] - 13s 36ms/step
Epoch: 10 | Discriminator Loss: 0.131767 | Generator Loss: 0.645129 |
375/375 [==============================] - 13s 35ms/step
Epoch: 11 | Discriminator Loss: 0.130213 | Generator Loss: 0.615478 |
375/375 [==============================] - 13s 36ms/step
Epoch: 12 | Discriminator Loss: 0.127408 | Generator Loss: 0.595560 |
375/375 [==============================] - 14s 37ms/step
Epoch: 13 | Discriminator Loss: 0.125131 | Generator Loss: 0.586648 |
375/375 [==============================] - 13s 36ms/step
Epoch: 14 | Discriminator Loss: 0.122381 | Generator Loss: 0.568220 |
375/375 [==============================] - 14s 37ms/step
Epoch: 15 | Discriminator Loss: 0.120320 | Generator Loss: 0.548418 |
375/375 [==============================] - 14s 38ms/step
Epoch: 16 | Discriminator Loss: 0.118519 | Generator Loss: 0.529006 |
375/375 [==============================] - 14s 37ms/step
Epoch: 17 | Discriminator Loss: 0.117571 | Generator Loss: 0.512678 |
375/375 [==============================] - 14s 37ms/step
Epoch: 18 | Discriminator Loss: 0.117157 | Generator Loss: 0.503690 |
375/375 [==============================] - 13s 35ms/step
Epoch: 19 | Discriminator Loss: 0.115521 | Generator Loss: 0.494017 |
375/375 [==============================] - 14s 36ms/step
Epoch: 20 | Discriminator Loss: 0.115009 | Generator Loss: 0.504843 |
375/375 [==============================] - 13s 35ms/step
Epoch: 21 | Discriminator Loss: 0.114051 | Generator Loss: 0.496370 |
375/375 [==============================] - 13s 35ms/step
Epoch: 22 | Discriminator Loss: 0.113176 | Generator Loss: 0.493024 |
375/375 [==============================] - 14s 37ms/step
Epoch: 23 | Discriminator Loss: 0.112032 | Generator Loss: 0.487736 |
375/375 [==============================] - 13s 35ms/step
Epoch: 24 | Discriminator Loss: 0.111232 | Generator Loss: 0.480365 |
375/375 [==============================] - 13s 36ms/step
Epoch: 25 | Discriminator Loss: 0.109836 | Generator Loss: 0.471305 |
375/375 [==============================] - 14s 37ms/step
Epoch: 26 | Discriminator Loss: 0.109529 | Generator Loss: 0.472105 |
375/375 [==============================] - 14s 37ms/step
Epoch: 27 | Discriminator Loss: 0.108370 | Generator Loss: 0.460461 |
375/375 [==============================] - 14s 38ms/step
Epoch: 28 | Discriminator Loss: 0.106953 | Generator Loss: 0.457317 |
375/375 [==============================] - 14s 37ms/step
Epoch: 29 | Discriminator Loss: 0.105820 | Generator Loss: 0.448319 |
375/375 [==============================] - 13s 34ms/step
Epoch: 30 | Discriminator Loss: 0.104658 | Generator Loss: 0.443948 |
375/375 [==============================] - 13s 36ms/step
Epoch: 31 | Discriminator Loss: 0.103484 | Generator Loss: 0.440299 |
375/375 [==============================] - 14s 36ms/step
Epoch: 32 | Discriminator Loss: 0.102861 | Generator Loss: 0.430401 |
375/375 [==============================] - 13s 35ms/step
Epoch: 33 | Discriminator Loss: 0.102172 | Generator Loss: 0.420914 |
375/375 [==============================] - 13s 33ms/step
Epoch: 34 | Discriminator Loss: 0.101647 | Generator Loss: 0.417422 |
375/375 [==============================] - 13s 35ms/step
Epoch: 35 | Discriminator Loss: 0.100422 | Generator Loss: 0.416226 |
375/375 [==============================] - 13s 34ms/step
Epoch: 36 | Discriminator Loss: 0.099239 | Generator Loss: 0.406801 |
375/375 [==============================] - 13s 34ms/step
Epoch: 37 | Discriminator Loss: 0.098300 | Generator Loss: 0.400543 |
375/375 [==============================] - 13s 35ms/step
Epoch: 38 | Discriminator Loss: 0.097639 | Generator Loss: 0.395971 |
375/375 [==============================] - 13s 35ms/step
Epoch: 39 | Discriminator Loss: 0.097171 | Generator Loss: 0.392078 |
375/375 [==============================] - 13s 34ms/step
Epoch: 40 | Discriminator Loss: 0.097180 | Generator Loss: 0.389433 |
375/375 [==============================] - 12s 33ms/step
Epoch: 41 | Discriminator Loss: 0.097358 | Generator Loss: 0.388491 |
375/375 [==============================] - 13s 34ms/step
Epoch: 42 | Discriminator Loss: 0.096732 | Generator Loss: 0.387960 |
375/375 [==============================] - 13s 34ms/step
Epoch: 43 | Discriminator Loss: 0.095770 | Generator Loss: 0.388074 |
375/375 [==============================] - 13s 35ms/step
Epoch: 44 | Discriminator Loss: 0.094859 | Generator Loss: 0.384738 |
375/375 [==============================] - 13s 35ms/step
Epoch: 45 | Discriminator Loss: 0.094145 | Generator Loss: 0.382081 |
375/375 [==============================] - 13s 36ms/step
Epoch: 46 | Discriminator Loss: 0.093751 | Generator Loss: 0.375897 |
375/375 [==============================] - 13s 34ms/step
Epoch: 47 | Discriminator Loss: 0.093353 | Generator Loss: 0.373716 |
375/375 [==============================] - 12s 33ms/step
Epoch: 48 | Discriminator Loss: 0.092383 | Generator Loss: 0.374362 |
375/375 [==============================] - 12s 32ms/step
Epoch: 49 | Discriminator Loss: 0.091846 | Generator Loss: 0.372938 |
375/375 [==============================] - 13s 33ms/step
Epoch: 50 | Discriminator Loss: 0.090778 | Generator Loss: 0.374312 |
375/375 [==============================] - 13s 35ms/step
Epoch: 51 | Discriminator Loss: 0.089997 | Generator Loss: 0.372233 |
375/375 [==============================] - 13s 34ms/step
Epoch: 52 | Discriminator Loss: 0.089343 | Generator Loss: 0.371324 |
375/375 [==============================] - 13s 34ms/step
Epoch: 53 | Discriminator Loss: 0.089036 | Generator Loss: 0.381762 |
375/375 [==============================] - 12s 33ms/step
Epoch: 54 | Discriminator Loss: 0.089487 | Generator Loss: 0.387412 |
375/375 [==============================] - 12s 33ms/step
Epoch: 55 | Discriminator Loss: 0.088369 | Generator Loss: 0.384037 |
375/375 [==============================] - 14s 34ms/step
Epoch: 1 | Discriminator Loss: 0.371198 | Generator Loss: 1.559743 |
375/375 [==============================] - 13s 35ms/step
Epoch: 2 | Discriminator Loss: 0.210504 | Generator Loss: 1.048438 |
375/375 [==============================] - 12s 33ms/step
Epoch: 3 | Discriminator Loss: 0.190263 | Generator Loss: 0.996902 |
375/375 [==============================] - 12s 32ms/step
Epoch: 4 | Discriminator Loss: 0.174197 | Generator Loss: 0.923824 |
375/375 [==============================] - 12s 32ms/step
Epoch: 5 | Discriminator Loss: 0.170226 | Generator Loss: 0.903478 |
375/375 [==============================] - 12s 33ms/step
Epoch: 6 | Discriminator Loss: 0.161821 | Generator Loss: 0.845493 |
375/375 [==============================] - 12s 32ms/step
Epoch: 7 | Discriminator Loss: 0.158882 | Generator Loss: 0.957419 |
375/375 [==============================] - 13s 34ms/step
Epoch: 8 | Discriminator Loss: 0.152584 | Generator Loss: 0.875005 |
375/375 [==============================] - 13s 34ms/step
Epoch: 9 | Discriminator Loss: 0.148019 | Generator Loss: 0.864026 |
###Markdown
**2 Students**
###Code
o1=s1.get_layer("reqs1").output
o2=s2.get_layer("reqs2").output
output=tensorflow.keras.layers.concatenate([o1,o2])
output=Activation('relu')(output)
output2=Dropout(0.5)(output) # For regularization
output3=Dense(10,activation="softmax", name="d1")(output2)
mm2=Model([s1.get_layer("s1").input,s2.get_layer("s2").input], output3)
my_weights=teacher.get_layer('dense_2').get_weights()
mm2.get_layer('d1').set_weights(my_weights)
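# The concatenated student features are 128 + 128 = 256-d, matching the input size of
# the teacher's final 'dense_2' layer, so the teacher's softmax weights can be reused
# directly as the merged students' classifier ('d1').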
for l in mm2.layers[:len(mm2.layers)-1]:
l.trainable=False
mm2.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
batch_size = 128
mm2_history=mm2.fit([X_train,X_train], Y_train,
batch_size=batch_size,
epochs=30,
verbose=1, validation_data=([X_val,X_val], Y_val))
loss, acc = mm2.evaluate([X_test,X_test], Y_test, verbose=1)
loss, acc
###Output
313/313 [==============================] - 2s 5ms/step - loss: 0.5186 - accuracy: 0.8217
|