09 Model Debugging, Model Monitoring and AutoML/xgboost_debugger_demo.ipynb | ###Markdown
# Targeting Direct Marketing with Amazon SageMaker XGBoost
_**Supervised Learning with Gradient Boosted Trees: A Binary Prediction Problem With Unbalanced Classes**_

## Background
Direct marketing, whether through mail, email, phone, etc., is a common tactic to acquire customers. Because resources and a customer's attention are limited, the goal is to target only the subset of prospects who are likely to engage with a specific offer. Predicting those potential customers based on readily available information like demographics, past interactions, and environmental factors is a common machine learning problem.

This notebook presents an example problem: predicting whether a customer will enroll for a term deposit at a bank after one or more phone calls. The steps include:
* Preparing your Amazon SageMaker notebook
* Downloading data from the internet into Amazon SageMaker
* Investigating and transforming the data so that it can be fed to Amazon SageMaker algorithms
* Estimating a model using the Gradient Boosting algorithm
* Evaluating the effectiveness of the model
* Setting the model up to make on-going predictions

---
## Preparation
_This notebook was created and tested on an ml.m4.xlarge notebook instance._

Let's start by specifying:
- The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.
- The IAM role ARN used to give training and hosting access to your data. See the documentation for how to create these. Note: if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role ARN string(s).
###Code
bucket = 'luis-guides'
prefix = 'sagemaker/DEMO-xgboost-dm'
# Define IAM role
import boto3
import re
import sagemaker
from sagemaker import get_execution_role
role = get_execution_role()
session = sagemaker.Session()
bucket = session.default_bucket()  # overrides the hard-coded bucket above with the session's default bucket
###Output
_____no_output_____
###Markdown
Now let's bring in the Python libraries that we'll use throughout the analysis
###Code
import numpy as np # For matrix operations and numerical processing
import pandas as pd # For munging tabular data
import matplotlib.pyplot as plt # For charts and visualizations
from IPython.display import Image # For displaying images in the notebook
from IPython.display import display # For displaying outputs in the notebook
from time import gmtime, strftime # For labeling SageMaker models, endpoints, etc.
import sys # For writing outputs to notebook
import math # For ceiling function
import json # For parsing hosting outputs
import os # For manipulating filepath names
import sagemaker # Amazon SageMaker's Python SDK provides many helper functions
from sagemaker.predictor import csv_serializer # Converts strings for HTTP POST requests on inference
! python -m pip install smdebug
###Output
Requirement already satisfied: smdebug in /opt/conda/lib/python3.7/site-packages (0.8.1)
Requirement already satisfied: protobuf>=3.6.0 in /opt/conda/lib/python3.7/site-packages (from smdebug) (3.12.2)
Requirement already satisfied: boto3>=1.10.32 in /opt/conda/lib/python3.7/site-packages (from smdebug) (1.14.17)
Requirement already satisfied: packaging in /opt/conda/lib/python3.7/site-packages (from smdebug) (20.1)
Requirement already satisfied: numpy<2.0.0,>1.16.0 in /opt/conda/lib/python3.7/site-packages (from smdebug) (1.18.1)
Requirement already satisfied: six>=1.9 in /opt/conda/lib/python3.7/site-packages (from protobuf>=3.6.0->smdebug) (1.14.0)
Requirement already satisfied: setuptools in /opt/conda/lib/python3.7/site-packages (from protobuf>=3.6.0->smdebug) (45.2.0.post20200210)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /opt/conda/lib/python3.7/site-packages (from boto3>=1.10.32->smdebug) (0.3.3)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /opt/conda/lib/python3.7/site-packages (from boto3>=1.10.32->smdebug) (0.10.0)
Requirement already satisfied: botocore<1.18.0,>=1.17.17 in /opt/conda/lib/python3.7/site-packages (from boto3>=1.10.32->smdebug) (1.17.17)
Requirement already satisfied: pyparsing>=2.0.2 in /opt/conda/lib/python3.7/site-packages (from packaging->smdebug) (2.4.6)
Requirement already satisfied: urllib3<1.26,>=1.20; python_version != "3.4" in /opt/conda/lib/python3.7/site-packages (from botocore<1.18.0,>=1.17.17->boto3>=1.10.32->smdebug) (1.25.8)
Requirement already satisfied: docutils<0.16,>=0.10 in /opt/conda/lib/python3.7/site-packages (from botocore<1.18.0,>=1.17.17->boto3>=1.10.32->smdebug) (0.15.2)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /opt/conda/lib/python3.7/site-packages (from botocore<1.18.0,>=1.17.17->boto3>=1.10.32->smdebug) (2.8.1)
###Markdown
---
## Data
Let's start by downloading the [direct marketing dataset](https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip) from the sample data S3 bucket.

\[Moro et al., 2014\] S. Moro, P. Cortez and P. Rita. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014
###Code
!apt-get install unzip zip -y
!wget https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip
!unzip -o bank-additional.zip
###Output
Reading package lists... Done
Building dependency tree
Reading state information... Done
unzip is already the newest version (6.0-23+deb10u1).
zip is already the newest version (3.0-11+b1).
0 upgraded, 0 newly installed, 0 to remove and 15 not upgraded.
--2020-07-31 17:16:34-- https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip
Resolving sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com (sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com)... 52.218.208.249
Connecting to sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com (sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com)|52.218.208.249|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 432828 (423K) [application/zip]
Saving to: ‘bank-additional.zip.4’
bank-additional.zip 100%[===================>] 422.68K 1.46MB/s in 0.3s
2020-07-31 17:16:34 (1.46 MB/s) - ‘bank-additional.zip.4’ saved [432828/432828]
Archive: bank-additional.zip
inflating: bank-additional/bank-additional-names.txt
inflating: bank-additional/bank-additional.csv
inflating: bank-additional/bank-additional-full.csv
###Markdown
Now let's read this into a pandas DataFrame and take a look.
###Code
data = pd.read_csv('./bank-additional/bank-additional-full.csv')
pd.set_option('display.max_rows',10)
data
###Output
_____no_output_____
###Markdown
Let's talk about the data. At a high level, we can see:* We have a little over 40K customer records, and 20 features for each customer* The features are mixed; some numeric, some categorical* The data appears to be sorted, at least by `time` and `contact`, maybe more_**Specifics on each of the features:**_*Demographics:** `age`: Customer's age (numeric)* `job`: Type of job (categorical: 'admin.', 'services', ...)* `marital`: Marital status (categorical: 'married', 'single', ...)* `education`: Level of education (categorical: 'basic.4y', 'high.school', ...)*Past customer events:** `default`: Has credit in default? (categorical: 'no', 'unknown', ...)* `housing`: Has housing loan? (categorical: 'no', 'yes', ...)* `loan`: Has personal loan? (categorical: 'no', 'yes', ...)*Past direct marketing contacts:** `contact`: Contact communication type (categorical: 'cellular', 'telephone', ...)* `month`: Last contact month of year (categorical: 'may', 'nov', ...)* `day_of_week`: Last contact day of the week (categorical: 'mon', 'fri', ...)* `duration`: Last contact duration, in seconds (numeric). Important note: If duration = 0 then `y` = 'no'. *Campaign information:** `campaign`: Number of contacts performed during this campaign and for this client (numeric, includes last contact)* `pdays`: Number of days that passed by after the client was last contacted from a previous campaign (numeric)* `previous`: Number of contacts performed before this campaign and for this client (numeric)* `poutcome`: Outcome of the previous marketing campaign (categorical: 'nonexistent','success', ...)*External environment factors:** `emp.var.rate`: Employment variation rate - quarterly indicator (numeric)* `cons.price.idx`: Consumer price index - monthly indicator (numeric)* `cons.conf.idx`: Consumer confidence index - monthly indicator (numeric)* `euribor3m`: Euribor 3 month rate - daily indicator (numeric)* `nr.employed`: Number of employees - quarterly indicator (numeric)*Target variable:** `y`: Has the client subscribed a term deposit? (binary: 'yes','no') TransformationCleaning up data is part of nearly every machine learning project. It arguably presents the biggest risk if done incorrectly and is one of the more subjective aspects in the process. Several common techniques include:* Handling missing values: Some machine learning algorithms are capable of handling missing values, but most would rather not. Options include: * Removing observations with missing values: This works well if only a very small fraction of observations have incomplete information. * Removing features with missing values: This works well if there are a small number of features which have a large number of missing values. * Imputing missing values: Entire [books](https://www.amazon.com/Flexible-Imputation-Missing-Interdisciplinary-Statistics/dp/1439868247) have been written on this topic, but common choices are replacing the missing value with the mode or mean of that column's non-missing values.* Converting categorical to numeric: The most common method is one hot encoding, which for each feature maps every distinct value of that column to its own feature which takes a value of 1 when the categorical feature is equal to that value, and 0 otherwise.* Oddly distributed data: Although for non-linear models like Gradient Boosted Trees, this has very limited implications, parametric models like regression can produce wildly inaccurate estimates when fed highly skewed data. 
In some cases, simply taking the natural log of the features is sufficient to produce more normally distributed data. In others, bucketing values into discrete ranges is helpful. These buckets can then be treated as categorical variables and included in the model when one hot encoded.* Handling more complicated data types: Manipulating images, text, or data at varying grains is left for other notebook templates. Luckily, some of these aspects have already been handled for us, and the algorithm we are showcasing tends to do well at handling sparse or oddly distributed data. Therefore, let's keep pre-processing simple.
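The next code cell applies the indicator-variable and one hot encoding steps to this dataset. As a hedged illustration only, the other techniques mentioned above (imputation, log transforms, and bucketing) could look roughly like the following on a toy frame; none of the column names below exist in this dataset.

```python
# Illustrative sketch only; this dataset needs none of these steps.
import numpy as np
import pandas as pd

toy = pd.DataFrame({'income': [35000.0, np.nan, 120000.0, 56000.0],
                    'visits': [1, 40, 3, 7]})

# Impute missing values with the column mean (the mode is another common choice)
toy['income'] = toy['income'].fillna(toy['income'].mean())

# Natural log to reduce skew (log1p handles zeros safely)
toy['log_income'] = np.log1p(toy['income'])

# Bucket a skewed numeric column into discrete ranges, then one hot encode the buckets
toy['visit_bucket'] = pd.cut(toy['visits'], bins=[0, 2, 10, np.inf], labels=['low', 'medium', 'high'])
toy = pd.get_dummies(toy, columns=['visit_bucket'])
print(toy)
```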
###Code
data['no_previous_contact'] = np.where(data['pdays'] == 999, 1, 0) # Indicator variable to capture when pdays takes a value of 999
data['not_working'] = np.where(np.in1d(data['job'], ['student', 'retired', 'unemployed']), 1, 0) # Indicator for individuals not actively employed
model_data = pd.get_dummies(data) # Convert categorical variables to sets of indicators
###Output
_____no_output_____
###Markdown
Another question to ask yourself before building a model is whether certain features will add value in your final use case. For example, if your goal is to deliver the best prediction, then will you have access to that data at the moment of prediction? Knowing it's raining is highly predictive for umbrella sales, but forecasting weather far enough out to plan inventory on umbrellas is probably just as difficult as forecasting umbrella sales without knowledge of the weather. So, including this in your model may give you a false sense of precision.Following this logic, let's remove the economic features and `duration` from our data as they would need to be forecasted with high precision to use as inputs in future predictions.Even if we were to use values of the economic indicators from the previous quarter, this value is likely not as relevant for prospects contacted early in the next quarter as those contacted later on.
###Code
model_data = model_data.drop(['duration', 'emp.var.rate', 'cons.price.idx', 'cons.conf.idx', 'euribor3m', 'nr.employed'], axis=1)
###Output
_____no_output_____
###Markdown
When building a model whose primary goal is to predict a target value on new data, it is important to understand overfitting. Supervised learning models are designed to minimize error between their predictions of the target value and actuals, in the data they are given. This last part is key, as frequently in their quest for greater accuracy, machine learning models bias themselves toward picking up on minor idiosyncrasies within the data they are shown. These idiosyncrasies then don't repeat themselves in subsequent data, meaning those predictions can actually be made less accurate, at the expense of more accurate predictions in the training phase.The most common way of preventing this is to build models with the concept that a model shouldn't only be judged on its fit to the data it was trained on, but also on "new" data. There are several different ways of operationalizing this, holdout validation, cross-validation, leave-one-out validation, etc. For our purposes, we'll simply randomly split the data into 3 uneven groups. The model will be trained on 70% of data, it will then be evaluated on 20% of data to give us an estimate of the accuracy we hope to have on "new" data, and 10% will be held back as a final testing dataset which will be used later on.
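The next cell implements the simple random 70/20/10 holdout split. As a sketch of the cross-validation alternative mentioned above (not used in the rest of this notebook), k-fold evaluation could be wired up roughly like this, assuming scikit-learn is available in the kernel and `model_data` has been built as in the previous cells:

```python
# Sketch: 5-fold cross-validation indices over model_data.
from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=1729)
for fold, (train_idx, val_idx) in enumerate(kf.split(model_data)):
    print(f"Fold {fold}: {len(train_idx)} training rows, {len(val_idx)} validation rows")
```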
###Code
train_data, validation_data, test_data = np.split(model_data.sample(frac=1, random_state=1729), [int(0.7 * len(model_data)), int(0.9 * len(model_data))]) # Randomly sort the data then split out first 70%, second 20%, and last 10%
###Output
_____no_output_____
###Markdown
Amazon SageMaker's XGBoost container expects data in the libSVM or CSV data format. For this example, we'll stick to CSV. Note that the first column must be the target variable and the CSV should not include headers. Also, notice that although repetitive it's easiest to do this after the train|validation|test split rather than before. This avoids any misalignment issues due to random reordering.
###Code
pd.concat([train_data['y_yes'], train_data.drop(['y_no', 'y_yes'], axis=1)], axis=1).to_csv('train.csv', index=False, header=False)
pd.concat([validation_data['y_yes'], validation_data.drop(['y_no', 'y_yes'], axis=1)], axis=1).to_csv('validation.csv', index=False, header=False)
###Output
_____no_output_____
###Markdown
Now we'll copy the files to S3 for Amazon SageMaker's managed training to pick up.
###Code
boto3.Session().resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'train/train.csv')).upload_file('train.csv')
boto3.Session().resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'validation/validation.csv')).upload_file('validation.csv')
###Output
_____no_output_____
###Markdown
---
## Training
Now we know that most of our features have skewed distributions, some are highly correlated with one another, and some appear to have non-linear relationships with our target variable. Also, for targeting future prospects, good predictive accuracy is preferred to being able to explain why that prospect was targeted. Taken together, these aspects make gradient boosted trees a good candidate algorithm.

There are several intricacies to understanding the algorithm, but at a high level, gradient boosted trees work by combining predictions from many simple models, each of which tries to address the weaknesses of the previous models. By doing this, the collection of simple models can actually outperform large, complex models. Other Amazon SageMaker notebooks elaborate further on gradient boosted trees and how they differ from similar algorithms.

`xgboost` is an extremely popular, open-source package for gradient boosted trees. It is computationally powerful, fully featured, and has been successfully used in many machine learning competitions. Let's start with a simple `xgboost` model, trained using Amazon SageMaker's managed, distributed training framework. First we'll need to specify the ECR container location for Amazon SageMaker's implementation of XGBoost.
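As an aside before the managed run, the idea that many shallow trees combine into a strong model can be illustrated locally. The sketch below uses scikit-learn's gradient boosting purely as an illustration (it assumes scikit-learn is installed in the notebook kernel and that `train_data`/`validation_data` exist from the split above); the next cell continues the actual SageMaker workflow by locating the XGBoost container.

```python
# Local illustration only: an ensemble of shallow trees fit with scikit-learn.
from sklearn.ensemble import GradientBoostingClassifier

local_X = train_data.drop(['y_no', 'y_yes'], axis=1)
local_y = train_data['y_yes']
local_gbm = GradientBoostingClassifier(n_estimators=100, max_depth=3, learning_rate=0.1)
local_gbm.fit(local_X, local_y)
print('Local validation accuracy:',
      local_gbm.score(validation_data.drop(['y_no', 'y_yes'], axis=1), validation_data['y_yes']))
```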
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(boto3.Session().region_name,'xgboost', '1.0-1')
###Output
WARNING:sagemaker.amazon.amazon_estimator:'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
###Markdown
Then, because we're training with the CSV file format, we'll create `s3_input`s that our training function can use as a pointer to the files in S3, which also specify that the content type is CSV.
###Code
s3_input_train = sagemaker.s3_input(s3_data='s3://{}/{}/train'.format(bucket, prefix), content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data='s3://{}/{}/validation/'.format(bucket, prefix), content_type='csv')
base_job_name = "demo-smdebug-xgboost-regression"
bucket_path='s3://{}/{}/output'.format(bucket, prefix)
###Output
_____no_output_____
###Markdown
## Enabling Debugger in the Estimator object

### DebuggerHookConfig
Enabling Amazon SageMaker Debugger in a training job can be accomplished by adding its configuration to the Estimator object constructor:

```python
from sagemaker.debugger import DebuggerHookConfig, CollectionConfig

estimator = Estimator(
    ...,
    debugger_hook_config = DebuggerHookConfig(
        s3_output_path="s3://{bucket_name}/{location_in_bucket}",  # Required
        collection_configs=[
            CollectionConfig(
                name="metrics",
                parameters={
                    "save_interval": "10"
                }
            )
        ]
    )
)
```

Here, the `DebuggerHookConfig` object instructs the `Estimator` what data we are interested in. Two parameters are provided in the example:

- `s3_output_path`: points to the S3 bucket/path where we intend to store our debugging tensors. The amount of data saved depends on multiple factors, the major ones being the training job, data set, model, and frequency of saving tensors. This bucket should be in your AWS account, and you should have full access control over it. **Important note**: this S3 bucket should be created in the same region where your training job will run, otherwise you might run into problems with cross-region access.
- `collection_configs`: enumerates named collections of tensors we want to save. Collections are a convenient way to organize relevant tensors under the same umbrella to make it easy to navigate them during analysis. In this particular example, you are instructing Amazon SageMaker Debugger that you are interested in a single collection named `metrics`, saved every 10 iterations. See the [Collection](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/api.md#collection) documentation for all parameters that are supported by Collections, and the DebuggerConfig documentation for more details about all the parameters DebuggerConfig supports.

### Rules
Enabling rules in a training job can be accomplished by adding the `rules` configuration to the Estimator object constructor.

- `rules`: This parameter accepts a list of rules you wish to evaluate against the tensors output by this training job. Amazon SageMaker Debugger supports two types of rules:
  - SageMaker Rules: rules specially curated by the data science and engineering teams in Amazon SageMaker, which you can opt to evaluate against your training job.
  - Custom Rules: you can optionally write your own rule as a Python source file and have it evaluated against your training job. For Amazon SageMaker Debugger to evaluate this rule, you have to provide the S3 location of the rule source and the evaluator image.

In this example, you will use Amazon SageMaker's LossNotDecreasing rule, which helps you identify if you are running into a situation where the training loss is not going down.

```python
from sagemaker.debugger import rule_configs, Rule

estimator = Estimator(
    ...,
    rules=[
        Rule.sagemaker(
            rule_configs.loss_not_decreasing(),
            rule_parameters={
                "collection_names": "metrics",
                "num_steps": "10",
            },
        ),
    ],
)
```

- `rule_parameters`: In this parameter, you provide the runtime values of the parameters in your constructor. You can still choose to pass in other values which may be necessary for your rule to be evaluated. In this example, the LossNotDecreasing rule monitors the `metrics` collection and will alert you if the tensors in `metrics` have not decreased for more than 10 steps.

First we'll need to specify training parameters to the estimator. This includes:
1. The `xgboost` algorithm container
2. The IAM role to use
3. Training instance type and count
4. S3 location for output data
5. Algorithm hyperparameters

And then a `.fit()` function which specifies:
1. The S3 locations for the input data. In this case we have both a training and validation set, which are passed in.
###Code
from sagemaker.debugger import rule_configs, Rule, DebuggerHookConfig, CollectionConfig
from sagemaker.estimator import Estimator
sess = sagemaker.Session()
save_interval = 5
xgboost_estimator = Estimator(
role=role,
base_job_name=base_job_name,
train_instance_count=1,
train_instance_type='ml.m5.4xlarge',
image_name=container,
train_max_run=1800,
sagemaker_session=sess,
debugger_hook_config=DebuggerHookConfig(
s3_output_path=bucket_path, # Required
collection_configs=[
CollectionConfig(
name="metrics",
parameters={
"save_interval": str(save_interval)
}
),
CollectionConfig(
name="predictions",
parameters={
"save_interval": str(save_interval)
}
),
CollectionConfig(
name="feature_importance",
parameters={
"save_interval": str(save_interval)
}
),
CollectionConfig(
name="average_shap",
parameters={
"save_interval": str(save_interval)
}
)
],
)
)
xgboost_estimator.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
num_round=100)
xgboost_estimator.fit(
{"train": s3_input_train, "validation": s3_input_validation},
# This is a fire and forget event. By setting wait=False, you submit the job to run in the background.
# Amazon SageMaker starts one training job and release control to next cells in the notebook.
# Follow this notebook to see status of the training job.
wait=False
)
###Output
_____no_output_____
###Markdown
## Result
As a result of the above command, Amazon SageMaker starts one training job and one rule job for you. The first one is the job that produces the tensors to be analyzed. The second one analyzes the tensors to check if `train-rmse` and `validation-rmse` are not decreasing at any point during training.

Check the status of the training job below. After your training job is started, Amazon SageMaker starts a rule-execution job to run the LossNotDecreasing rule.

**Note that the next cell blocks until the rule execution job ends. You can stop it at any point to proceed to the rest of the notebook. Once it says Rule Evaluation Status is Started, and shows the `RuleEvaluationJobArn`, you can look at the status of the rule being monitored.**
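If rules were attached to the estimator (the estimator above configures only the debugger hook, so this may not apply to this particular run), the status of the rule-evaluation job could be inspected with a sketch along these lines; the exact response fields depend on the SageMaker Python SDK version.

```python
# Sketch: list the status of any rule-evaluation jobs attached to this training job.
for rule_summary in xgboost_estimator.latest_training_job.rule_job_summary():
    print(rule_summary.get('RuleConfigurationName'), '-', rule_summary.get('RuleEvaluationStatus'))
```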
###Code
import time
from time import gmtime, strftime
# Below command will give the status of training job
job_name = xgboost_estimator.latest_training_job.name
client = xgboost_estimator.sagemaker_session.sagemaker_client
description = client.describe_training_job(TrainingJobName=job_name)
print('Training job name: ' + job_name)
print(description['TrainingJobStatus'])
if description['TrainingJobStatus'] != 'Completed':
while description['SecondaryStatus'] not in ['Training', 'Completed']:
description = client.describe_training_job(TrainingJobName=job_name)
primary_status = description['TrainingJobStatus']
secondary_status = description['SecondaryStatus']
print("{}: {}, {}".format(strftime('%X', gmtime()), primary_status, secondary_status))
time.sleep(15)
###Output
Training job name: demo-smdebug-xgboost-regression-2020-05-06-21-00-31-554
InProgress
21:00:33: InProgress, Starting
21:00:48: InProgress, Starting
21:01:03: InProgress, Starting
21:01:18: InProgress, Starting
21:01:33: InProgress, Starting
21:01:49: InProgress, Starting
21:02:04: InProgress, Starting
21:02:19: InProgress, Starting
21:02:34: InProgress, Starting
21:02:49: InProgress, Starting
21:03:04: InProgress, Downloading
21:03:19: InProgress, Downloading
21:03:34: InProgress, Training
###Markdown
## Data Analysis - Manual
Now that you've trained the system, analyze the data. Here, you focus on after-the-fact analysis. You import a basic analysis library, which defines the concept of a trial, representing a single training run.
###Code
from smdebug.trials import create_trial
description = client.describe_training_job(TrainingJobName=job_name)
s3_output_path = xgboost_estimator.latest_job_debugger_artifacts_path()
# This is where we create a Trial object that allows access to saved tensors.
trial = create_trial(s3_output_path)
###Output
[2020-05-06 21:11:02.031 ip-172-16-10-134:9629 INFO s3_trial.py:42] Loading trial debug-output at path s3://luis-guides/sagemaker/DEMO-xgboost-dm/output/demo-smdebug-xgboost-regression-2020-05-06-21-00-31-554/debug-output
###Markdown
You can list all the tensors that you know something about. Each one of these names is the name of a tensor. The name is a combination of the feature name, which in these cases, is auto-assigned by XGBoost, and whether it's an evaluation metric, feature importance, or SHAP value.
###Code
trial.tensor_names()
###Output
[2020-05-06 21:11:04.201 ip-172-16-10-134:9629 INFO trial.py:198] Training has ended, will refresh one final time in 1 sec.
[2020-05-06 21:11:05.221 ip-172-16-10-134:9629 INFO trial.py:210] Loaded all steps
###Markdown
For each tensor, you can ask for the steps where you have data; in this case, tensors were saved every five steps.
###Code
trial.tensor("predictions").values()
###Output
_____no_output_____
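The cell above shows the saved values; the recorded step numbers themselves can be listed like this:

```python
# Steps at which the "predictions" tensor was saved (every save_interval = 5 iterations in this run).
trial.tensor("predictions").steps()
```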
###Markdown
You can obtain each tensor at each step as a NumPy array.
###Code
type(trial.tensor("predictions").value(10))
###Output
_____no_output_____
###Markdown
## Performance metrics
You can also create a simple function that visualizes the training and validation errors as the training progresses. Each error metric should get smaller over time, as the system converges to a good solution. Remember that this is an interactive analysis. You are showing these tensors to give an idea of the data.
###Code
import matplotlib.pyplot as plt
import seaborn as sns
import re
def get_data(trial, tname):
"""
For the given tensor name, walks though all the iterations
for which you have data and fetches the values.
Returns the set of steps and the values.
"""
tensor = trial.tensor(tname)
steps = tensor.steps()
vals = [tensor.value(s) for s in steps]
return steps, vals
def plot_collection(trial, collection_name, regex='.*', figsize=(8, 6)):
"""
Takes a `trial` and a collection name, and
plots all tensors that match the given regex.
"""
fig, ax = plt.subplots(figsize=figsize)
sns.despine()
tensors = trial.collection(collection_name).tensor_names
for tensor_name in sorted(tensors):
if re.match(regex, tensor_name):
steps, data = get_data(trial, tensor_name)
ax.plot(steps, data, label=tensor_name)
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
ax.set_xlabel('Iteration')
plot_collection(trial, "metrics")
###Output
_____no_output_____
###Markdown
## Feature importances
You can also visualize the feature priorities as determined by [xgboost.get_score()](https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.Booster.get_score). If you instructed the Estimator to log the `feature_importance` collection, all five importance types supported by `xgboost.get_score()` will be available in the collection.
###Code
def plot_feature_importance(trial, importance_type="weight"):
SUPPORTED_IMPORTANCE_TYPES = ["weight", "gain", "cover", "total_gain", "total_cover"]
if importance_type not in SUPPORTED_IMPORTANCE_TYPES:
raise ValueError(f"{importance_type} is not one of the supported importance types.")
plot_collection(
trial,
"feature_importance",
regex=f"feature_importance/{importance_type}/.*")
plot_feature_importance(trial)
plot_feature_importance(trial, importance_type="cover")
###Output
_____no_output_____
###Markdown
## SHAP
[SHAP](https://github.com/slundberg/shap) (SHapley Additive exPlanations) is another approach to explain the output of machine learning models. SHAP values represent a feature's contribution to a change in the model output. You instructed the Estimator to log the average SHAP values in this example, so the SHAP values (as calculated by [xgboost.predict(pred_contribs=True)](https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.Booster.predict)) will be available in the `average_shap` collection.
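The cell below simply plots the averaged SHAP values already logged by the debugger. If you instead wanted to recompute per-row SHAP contributions locally, a hedged sketch using the `xgboost` package directly would look roughly like this; `booster` and `validation_features` are assumed names for an already-loaded `xgboost.Booster` and a DataFrame with the training feature columns, neither of which is created in this notebook.

```python
# Sketch: per-row SHAP contributions straight from an XGBoost booster.
import xgboost as xgb

dmatrix = xgb.DMatrix(validation_features)                     # validation_features: assumed feature DataFrame
shap_contribs = booster.predict(dmatrix, pred_contribs=True)   # booster: assumed loaded xgboost.Booster
print(shap_contribs.shape)                                     # one column per feature plus a bias term
```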
###Code
plot_collection(trial,"average_shap")
###Output
_____no_output_____ |
api-examples/CHIRPS_example.ipynb | ###Markdown
# Analyzing Historical Rainfall in Palo Alto, CA with CHIRPS Data
In this notebook we demonstrate how you can use Climate Hazards Group InfraRed Precipitation With Station (CHIRPS) data. CHIRPS is a 30+ year quasi-global rainfall dataset that enables comparing current rainfall patterns with historical averages with very accurate results. Palo Alto is used as an example location, but we encourage you to try out other locations. Palo Alto has a Mediterranean climate with cool, relatively wet winters and warm, dry summers. However, because the city is located next to the Santa Cruz Mountains, which block the passage of rain-producing weather systems, there is a so-called __[rain shadow](https://en.wikipedia.org/wiki/Rain_shadow)__ in Palo Alto, resulting in very low average annual rainfall. We will make several plots which show how precipitation can differ within one year and between different years. Note that this notebook uses Python 3.

First of all, we import some modules.
###Code
%matplotlib notebook
import pandas as pd
import numpy
from po_data_process import get_data_from_point_API, make_histogram, make_plot
import warnings
import matplotlib.cbook
warnings.filterwarnings("ignore",category=matplotlib.cbook.mplDeprecation)
###Output
_____no_output_____
###Markdown
Please put your Datahub API key into a file called APIKEY and place it in the notebook folder, or assign your API key directly to the variable API_key! Here we define the dataset_key and the point of interest.
###Code
API_key = open('APIKEY').read().strip()
dataset_key = 'chg_chirps_global_05'
#Palo Alto
latitude = 37.42
longitude = -122.17
###Output
_____no_output_____
###Markdown
Now it's time to read the actual data as a pandas DataFrame. We add separate columns for 'year' and 'month' for later use. For reading the data, we use the function *get_data_from_point_API* from the module *po_data_process*, which is in the notebook folder (the same folder where this script is located in GitHub). At the end of this cell we also print out the pandas data structure so that you can see what it looks like.
###Code
data = get_data_from_point_API(dataset_key, longitude, latitude, API_key)
data['time'] = pd.to_datetime(data['time'])
data['year'] = data['time'].dt.year
data['month'] = data['time'].dt.month
data = data.loc[data['year'] < 2019]
print (data.keys())
###Output
http://api.planetos.com/v1/datasets/chg_chirps_global_05/point?lat=37.42&lon=-122.17&apikey=8428878e4b944abeb84790e832c633fc&count=100000&z=all&verbose=False
Index(['context', 'axes', 'time', 'latitude', 'longitude', 'precip', 'year',
'month'],
dtype='object')
###Markdown
Now that we have fetched the time series of precipitation data, we can easily compute the following statistics:
1. Number of completely dry days since 1981
2. Number of completely dry days by year
3. Number of days with more than 20 mm precipitation in a year
4. Annual precipitation by year
5. Daily maximum precipitation by year
6. Average annual cycle of precipitation
7. Histogram
###Code
print ('There have been ' + str(len(data.loc[data['precip'] == 0])) + ' completely dry days in Palo Alto of ' + str(len(data)) + ' total.')
print ('It means that ' + str(round((100. * len(data.loc[data['precip'] == 0]))/len(data),2)) + '% of the days since 1981 have been completely dry in Palo Alto.')
###Output
There have been 12526 completely dry days in Palo Alto of 13879 total.
It means that 90.25% of the days since 1981 have been completely dry in Palo Alto.
###Markdown
The following plot will show the number of days by year with no observed precipitation. We can see that the difference between the years is not significant.
###Code
make_plot(data.loc[data['precip'] == 0].groupby('year').count()['precip'],dataset_key,'Completely dry days by year')
###Output
_____no_output_____
###Markdown
In this plot we look at how many days had more than 20 mm precipitation in a year. 1998, one of the strongest El Niño years in recent history, is the clear winner. __[Daniel Swain](http://weatherwest.com/about)__ also brought out __[the fact](http://weatherwest.com/archives/3836)__ in his Weather West blog that California’s wettest years on record were 1982-1983 and 1997-1998, and they occurred during the strongest El Niño years. Those years clearly stand out in our demo as well. Daniel also says that the common belief that El Niño always brings a lot of water to the Golden State is not particularly true. It does, however, increase the potential for more precipitation. For example, the years 2015-2016, when the El Niño was very strong, don't stand out in this plot. The unusual precipitation pattern of 2016 was also __[discussed by Daniel Swain in his blog post](http://weatherwest.com/archives/tag/el-nino-california)__ because, curiously, it was almost the opposite of what was expected based upon theoretical and empirical models for ENSO teleconnections.
###Code
make_plot(data.loc[data['precip'] > 20].groupby('year').count()['precip'],dataset_key,'Number of days with more than 20 mm precipitation in a year')
###Output
_____no_output_____
###Markdown
The next plot shows annual total precipitation. Two of the three driest years in the whole period, 2013 and 2015, have been very recent. 1998 is among the highest and is exceeded only by the exceptionally large values of 1982 and 1983. Those, again, were the strongest El Niño years we talked about above. The El Niño of 2015-2016 still doesn't stand out.
###Code
make_plot(data.groupby('year').sum()['precip'],dataset_key,'Annual precipitation by year')
###Output
_____no_output_____
###Markdown
The highest daily maximum precipitation occurred in 1982. Again, this plot confirms the results from the previous plots.
###Code
make_plot(data.groupby('year')['precip'].max(),dataset_key,'Daily maximum precipitation by year')
###Output
_____no_output_____
###Markdown
The average annual cycle of precipitation shows that it mostly rains during the winter months and the summer is usually dry.
###Code
make_plot(data.groupby('month')['precip'].mean(),dataset_key, 'Average annual cycle of precipitation')
###Output
_____no_output_____
###Markdown
Finally, let's look at a histogram. As we saw from the previous plots, Palo Alto has very many completely dry days. From the histogram we can see that when it does rain, it rains a lot! On almost 350 days since 1981 there has been 8-16 mm/day of rain, and on nearly 300 days 16-32 mm/day. On 30 days of the entire period it has even rained 64-128 mm/day.
###Code
bins = [1,2,4,8,16,32,64,128]
make_histogram(data['precip'],bins)
###Output
_____no_output_____ |
src/mvp/MVP.ipynb | ###Markdown
# MVP: Minimum Viable Project
The simplest start is the training and testing subset provided by the dataset author. The simplest algorithm is one-nearest-neighbor: tag the samples of the testing set with the label that their nearest neighbor in the training set has.

## Library imports
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
import tqdm
from sklearn.preprocessing import RobustScaler
from sklearn.preprocessing import FunctionTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, classification_report, roc_auc_score, accuracy_score
%load_ext watermark
%watermark -iv
%matplotlib inline
###Output
_____no_output_____
###Markdown
## Connect to Postgres
The training subset is small enough to load into Pandas.
###Code
%load_ext sql
%config SqlMagic.autopandas = True
%sql postgres://localhost/nb15
trainset = %sql select * from trainset;
trainset.shape
trainset.info()
testset = %sql select * from testset;
testset.shape
###Output
* postgres://localhost/nb15
82332 rows affected.
###Markdown
Feature selection and engineering
###Code
normset = trainset.query('label == 0')
attackset = trainset.query('label == 1')
attackgroup = trainset.groupby('attack_cat')
fig, axs = plt.subplots(nrows=1, ncols=3, sharey=True, figsize=(15,4))
corrdata = trainset.corr()
sns.heatmap(corrdata, ax=axs[0], vmin=-1, vmax=1, cmap='PiYG')
axs[0].set_title('Full dataset')
corrdata = attackset.corr()
sns.heatmap(corrdata, ax=axs[1], vmin=-1, vmax=1, cmap='PiYG')
axs[1].set_title('Label = 1')
corrdata = normset.corr()
sns.heatmap(corrdata, ax = axs[2], vmin=-1, vmax=1, cmap='PiYG')
axs[2].set_title('Label = 0')
plt.show()
###Output
_____no_output_____
###Markdown
Looks like we have some correlated columns to clean up, but a lot of white means there are plenty of columns that are uncorrelated. And it looks like we have a few good candidates for feature selection, looking at the difference between the two correlation maps for the attack and normal labels.
###Code
corrdata = trainset.corr()
# we're not going to model on row-id or our target!
corrdata = corrdata.drop(columns='id')
corrdata = corrdata.drop(index='id')
corrdata = corrdata.drop(index='label')
corrdata = corrdata.drop(columns='label')
for r, row in enumerate(corrdata.index):
for c, col in enumerate(corrdata.columns):
if r < c:
abc = abs(corrdata.loc[row,col])
if abc > .5:
print(row, col, corrdata.loc[row,col])
###Output
spkts sbytes 0.9637905453632948
spkts sloss 0.9710686917734201
dpkts dbytes 0.9719070079937576
dpkts dloss 0.9786363765707413
sbytes sloss 0.9961094729162718
dbytes dloss 0.9965035947658807
rate sload 0.6024917209485852
rate swin -0.515680523970463
rate dwin -0.5181167101716453
sttl dmean -0.5503894995831023
sttl ct_state_ttl 0.6723247538872464
dttl swin 0.746246550391458
dttl stcpb 0.5927892500491115
dttl dtcpb 0.5967704444222411
dttl dwin 0.7540175796120973
dttl tcprtt 0.8073405596741321
dttl synack 0.7469996936389071
dttl ackdat 0.781261466971933
dload dmean 0.550601586510385
sinpkt is_sm_ips_ports 0.9413189007356698
dinpkt sjit 0.6731042707786768
swin stcpb 0.7812149404258413
swin dtcpb 0.7816351259976572
swin dwin 0.9901399299450626
swin tcprtt 0.5697518462385165
swin synack 0.52896194809928
swin ackdat 0.5494258227235498
swin ct_state_ttl -0.5575336991045338
stcpb dtcpb 0.6498790909569351
stcpb dwin 0.7888342470350359
dtcpb dwin 0.7893866556371264
dwin tcprtt 0.5754476712726219
dwin synack 0.5343137609233369
dwin ackdat 0.5548501275329738
dwin ct_state_ttl -0.6061249642528826
tcprtt synack 0.949467661103672
tcprtt ackdat 0.9417603738099297
synack ackdat 0.7886230642429164
ct_srv_src ct_dst_ltm 0.8412804888191318
ct_srv_src ct_src_dport_ltm 0.8660103830153143
ct_srv_src ct_dst_sport_ltm 0.8235830768433696
ct_srv_src ct_dst_src_ltm 0.9671378245459187
ct_srv_src ct_src_ltm 0.7810512201424434
ct_srv_src ct_srv_dst 0.980323009991157
ct_dst_ltm ct_src_dport_ltm 0.9620518416457033
ct_dst_ltm ct_dst_sport_ltm 0.8706444443778707
ct_dst_ltm ct_dst_src_ltm 0.852252416488471
ct_dst_ltm ct_src_ltm 0.8860718014695137
ct_dst_ltm ct_srv_dst 0.8525834503494464
ct_src_dport_ltm ct_dst_sport_ltm 0.9067931558833536
ct_src_dport_ltm ct_dst_src_ltm 0.8699407842490623
ct_src_dport_ltm ct_src_ltm 0.8974378792234384
ct_src_dport_ltm ct_srv_dst 0.8688500773270724
ct_dst_sport_ltm ct_dst_src_ltm 0.838678463641535
ct_dst_sport_ltm ct_src_ltm 0.8030132947214229
ct_dst_sport_ltm ct_srv_dst 0.8301524161854222
ct_dst_src_ltm ct_src_ltm 0.783753395432917
ct_dst_src_ltm ct_srv_dst 0.9723704538695266
is_ftp_login ct_ftp_cmd 1.0
ct_src_ltm ct_srv_dst 0.7778914761278788
###Markdown
whoa that 1.0 correlation looks really weird
###Code
trainset.ct_ftp_cmd.value_counts()
trainset.is_ftp_login.value_counts()
# looks like these are really exactly the same
# so drop one
corrdata = corrdata.drop(columns='is_ftp_login')
corrdata = corrdata.drop(index='is_ftp_login')
corrdata.index
# make some kde plots
len(corrdata.index)
fig, axs = plt.subplots(nrows=10, ncols=4, figsize=(10,40) )
for i,label in enumerate(corrdata.index):
row = i // 4
col = i % 4
ax = axs[row, col]
trainset[label].plot.kde(ax=ax)
ax.set_title(label)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_visible(False)
###Output
_____no_output_____
###Markdown
Normalization -- We have a lot of Poisson-arrival-type data here. It seems like a good candidate for log and robust scaling, but let's check.
###Code
l1p = FunctionTransformer(np.log1p, np.expm1, validate=False)
rsc = RobustScaler()
xform = make_pipeline(l1p, rsc)
fig, axs = plt.subplots(nrows=10, ncols=4, figsize=(15,40) )
for i,label in tqdm.tqdm(enumerate(corrdata.index)):
row = i // 4
col = i % 4
ax = axs[row, col]
data = xform.fit_transform(
trainset[label].values.reshape(-1,1)
)
slabel = pd.Series(
data.reshape(-1)
)
slabel.plot.kde(ax=ax)
ax.set_title(label)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_visible(False)
## looks like is_sm_ips_ports is really categorical
trainset.is_sm_ips_ports.value_counts()
corrdata = corrdata.drop(index='is_sm_ips_ports')
corrdata = corrdata.drop(columns='is_sm_ips_ports')
corrdata.index
final_list_numeric_cols = corrdata.index
trainset[final_list_numeric_cols].shape
xformed_data = xform.fit_transform(trainset[final_list_numeric_cols].values)
xformed_data = pd.DataFrame(data=xformed_data, columns=final_list_numeric_cols)
xformed_data.head()
xformed_test_data = xform.transform(testset[final_list_numeric_cols].values)
xformed_test_data = pd.DataFrame(data=xformed_test_data,
columns=final_list_numeric_cols)
xformed_test_data.head()
###Output
_____no_output_____
###Markdown
Categorical valuesWe need to process the train and test sets simultaneously here, so that the onehot vectors turn out the same. Combine infrequent values of proto into one category other
###Code
trainset.proto.value_counts()
# m_ just means my modified version
trainset['m_proto'] = trainset['proto']
testset['m_proto'] = testset['proto']
val_counts = trainset.proto.value_counts()
common_proto = list(val_counts[val_counts > 1000].index)
query_string = "proto != '" + "'"" and proto != '".join(common_proto) + "'"
query_string
trainset.loc[trainset.eval(query_string), 'm_proto'] = 'other'
testset.loc[testset.eval(query_string), 'm_proto'] = 'other'
trainset.m_proto.value_counts()
###Output
_____no_output_____
###Markdown
Combine infrequent values of service into one category otherAlso renaming the '-' value to 'none'
###Code
trainset.service.value_counts()
trainset['m_service'] = trainset['service']
trainset.loc[trainset.eval("service == '-'"), 'm_service'] = 'none'
testset['m_service'] = testset['service']
testset.loc[testset.eval("service == '-'"), 'm_service'] = 'none'
val_counts = trainset.m_service.value_counts()
common_service = list(val_counts[val_counts > 1000].index)
query_string = "m_service != '" + "'"" and m_service != '".join(common_service) + "'"
query_string
trainset.loc[trainset.eval(query_string), 'm_service'] = 'other'
testset.loc[testset.eval(query_string), 'm_service'] = 'other'
trainset.m_service.value_counts()
###Output
_____no_output_____
###Markdown
Combine infrequent values of state into one category other
###Code
trainset.state.value_counts()
trainset['m_state'] = trainset['state']
testset['m_state'] = testset['state']
val_counts = trainset.state.value_counts()
common_state = list(val_counts[val_counts > 1000].index)
query_string = "state != '" + "'"" and state != '".join(common_state) + "'"
query_string
trainset.loc[trainset.eval(query_string), 'm_state'] = 'other'
testset.loc[testset.eval(query_string), 'm_state'] = 'other'
trainset.m_state.value_counts()
###Output
_____no_output_____
###Markdown
Create dummy variables
###Code
dummies = pd.get_dummies(trainset[['m_proto', 'm_state', 'm_service']])
test_dummies = pd.get_dummies(testset[['m_proto', 'm_state', 'm_service']])
# rather than dropping the first, I'm dropping the 'catch-all' column
dummies = dummies.drop(columns=['m_proto_other', 'm_state_other', 'm_service_other'])
dummies.columns = [ s.replace('m_','') for s in dummies.columns ]
test_dummies = test_dummies.drop(columns=['m_proto_other', 'm_state_other', 'm_service_other'])
test_dummies.columns = [ s.replace('m_','') for s in test_dummies.columns ]
dummies.head(5)
test_dummies.head(5)
test_dummies.columns
dummies.columns
# Yay! Both tables have the columns in the same order!
###Output
_____no_output_____
###Markdown
## KNN-1 as the baseline model
KNN-1 just takes a vector from the testing set, finds its single nearest neighbor point in the training set, and gives the previously-unseen data the same label. It's a good baseline model because it's fairly simple, yet has predicted well on similar datasets in the past. I'm going to do this twice: once with the up/down label and once with the specific attack categories.
###Code
## Pull all the stuff together
prepped_data = pd.concat([dummies, trainset.is_sm_ips_ports, xformed_data],
axis=1)
prepped_test_data = pd.concat([test_dummies, testset.is_sm_ips_ports, xformed_test_data],
axis=1)
prepped_data.shape
prepped_test_data.shape
knn1 = KNeighborsClassifier(n_neighbors=1)
X = prepped_data.values
y = trainset.label
knn1.fit(X, y)
test_predictions = knn1.predict(prepped_test_data.values)
# Now to figure out my confusion matrix
y_gt = testset.label
confusion_matrix(y_gt, test_predictions)
print(classification_report(y_gt, test_predictions))
roc_auc_score(y_gt, test_predictions)
accuracy_score(y_gt, test_predictions)
# So those are the scores for my models to beat!
# 95% recall doesn't seem like a bad start
attack_types = trainset.attack_cat.unique()
ordinal_encoder = dict()
for i,name in enumerate(attack_types):
ordinal_encoder[name] = i
y = trainset.attack_cat.map(ordinal_encoder)
y_gt = testset.attack_cat.map(ordinal_encoder)
knn1.fit(X, y)
y_pred = knn1.predict(prepped_test_data.values)
print(confusion_matrix(y_gt, y_pred))
print(classification_report(y_gt, y_pred))
list(enumerate(attack_types))
###Output
_____no_output_____
###Markdown
Lots of room for improvement on the second-level task. We recognized normal traffic and generic bad packets very well, but there is quite a bit of room for improvement elsewhere. You can also see in the 'support' column of the classification report the imbalanced-learning challenges in this problem set.
###Code
accuracy_score(y_gt, y_pred)
###Output
_____no_output_____ |
ex2.3.2.ipynb | ###Markdown
Table of Contents
0.1 Set $x_0 = -1.0$
0.2 $x_0 = 0$ cannot be used

Let $f(x) = −x^3 − \cos x$ and $p_0 = −1$. Use Newton's method to find $p_2$. Could $p_0 = 0$ be used?
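For reference, the first two Newton iterates can be worked by hand from $p_{n+1} = p_n - \frac{f(p_n)}{f'(p_n)}$ with $f'(x) = -3x^2 + \sin x$ (values rounded to six decimal places):

$$
\begin{aligned}
p_1 &= -1 - \frac{f(-1)}{f'(-1)} = -1 - \frac{1 - \cos(1)}{-3 - \sin(1)} \approx -0.880333,\\
p_2 &= p_1 - \frac{f(p_1)}{f'(p_1)} \approx -0.865684.
\end{aligned}
$$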
###Code
import numpy as np
from numpy import linalg
from abc import abstractmethod
import pandas as pd
import math
pd.options.display.float_format = '{:,.8f}'.format
np.set_printoptions(suppress=True, precision=8)
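# Convergence tolerance and maximum number of Newton iterations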
TOR = pow(10.0, -9)
MAX_ITR = 150
class NewtonMethod(object):
def __init__(self):
return
@abstractmethod
def f(self, x):
raise NotImplementedError('Implement f()!')
@abstractmethod
def jacobian(self, x):
raise NotImplementedError('Implement jacobian()!')
@abstractmethod
def run(self, x):
raise NotImplementedError('Implement run()!')
class Newton1D(NewtonMethod):
def __init__(self):
super(NewtonMethod, self).__init__()
def f(self, x):
return -pow(x, 3) - math.cos(x)
def jacobian(self, x):
return -3 * pow(x, 2) + math.sin(x)
def run(self, x0):
df = pd.DataFrame(columns=['(NT) f(x)'])
row = len(df)
x = x0
df.loc[row] = [x]
for k in range(MAX_ITR):
try:
y = x - self.f(x) / self.jacobian(x)
except ValueError:
break
residual = math.fabs(x - y)
x = y
row = len(df)
df.loc[row] = [y]
if residual < TOR or x > 1e9:
break
return df
###Output
_____no_output_____
###Markdown
Set $x_0 = -1.0$
###Code
Newton1D().run(-1.0).astype(np.float64)
###Output
_____no_output_____
###Markdown
$x_0 = 0$ cannot be used, because $f'(0) = -3\cdot0^2 + \sin(0) = 0$, so the first Newton step would require division by zero.
###Code
Newton1D().run(0).astype(np.float64)
###Output
_____no_output_____ |
traffic_sign_classification/Traffic_sign_classification.ipynb | ###Markdown
Traffic Sign Classification with CNN
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os
import cv2
import warnings
warnings.filterwarnings('ignore')
from PIL import Image
import tensorflow as tf
from sklearn.model_selection import train_test_split
from skimage.transform import resize
from sklearn.metrics import accuracy_score
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout, Activation
# loading dataset
data = []
labels = []
classes = 43
path = '/home/eva/Schreibtisch/Bootcamp_Data_Science/12_final_project_v2/data/train'
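# NOTE: this path is machine-specific; point it at your local copy of the training images before running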
for i in os.listdir(path):
dir = path + '/' + i
for j in os.listdir(dir):
img_path = dir+'/'+j
img = cv2.imread(img_path,-1)
img = cv2.resize(img, (28,28), interpolation = cv2.INTER_NEAREST)
data.append(img)
labels.append(i)
data = np.array(data)
labels = np.array(labels)
print(data.shape, labels.shape)
plt.figure(figsize=(5,5))
plt.subplot(121)
plt.imshow(Image.open('/home/eva/Schreibtisch/Bootcamp_Data_Science/12_final_project_v2/data/train/1/00001_00072_00027.png'))
plt.axis('off')
plt.subplot(122)
plt.imshow(Image.open('/home/eva/Schreibtisch/Bootcamp_Data_Science/12_final_project_v2/data/train/5/00005_00029_00022.png'))
plt.axis('off')
# number of images in each class
data_dic = {}
for folder in os.listdir(path):
data_dic[folder] = len(os.listdir(path + '/' + folder))
data_df= pd.Series(data_dic)
plt.figure(figsize = (15, 6))
data_df.sort_values().plot(kind = 'bar')
plt.xlabel('Classes')
plt.ylabel('Number of images')
X_train, X_test, y_train, y_test = train_test_split(data, labels, test_size= 0.20, random_state=42)
print((X_train.shape, y_train.shape), (X_test.shape, y_test.shape))
# converting the labels into one hot encoding
y_train = to_categorical(y_train, 43)
y_test = to_categorical(y_test, 43)
from tensorflow.keras import backend as K
K.clear_session()
model = Sequential([
Conv2D(32, kernel_size = (5,5),strides = (2,2), padding = 'same', input_shape = (28,28,3)),
MaxPooling2D(pool_size = (2,2), strides = (2,2), padding = 'same'),
Activation('relu'),
Conv2D(32, kernel_size = (3,3), strides = (2,2), padding = 'same'),
MaxPooling2D(pool_size = (2,2), strides = (2,2), padding = 'same'),
Activation('relu'),
Conv2D(32, kernel_size = (3,3), strides = (2,2), padding = 'same'),
MaxPooling2D(pool_size = (2,2), strides = (2,2), padding = 'same'),
Activation('relu'),
Conv2D(32, kernel_size = (3,3), strides = (2,2), padding = 'same'),
MaxPooling2D(pool_size = (2,2), strides = (2,2), padding = 'same'),
Activation('relu'),
Conv2D(32, kernel_size = (3,3), strides = (2,2), padding = 'same'),
MaxPooling2D(pool_size = (2,2), strides = (2,2), padding = 'same'),
Activation('relu'),
Conv2D(32, kernel_size = (3,3), strides = (2,2), padding = 'same'),
MaxPooling2D(pool_size = (2,2), strides = (2,2), padding = 'same'),
Activation('relu'),
Flatten(),
Dense(43),
Activation('softmax')
])
model.summary()
model.compile(optimizer='adam',loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(X_train,y_train,epochs=20, batch_size=32,validation_split=0.2)
from matplotlib import pyplot as plt
plt.plot(history.history['accuracy'], label= 'train accuracy')
plt.plot(history.history['val_accuracy'],label= 'val accuracy')
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.legend()
from matplotlib import pyplot as plt
plt.plot(history.history['loss'], label= 'train loss')
plt.plot(history.history['val_loss'], label= 'test loss')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.legend()
## Test
plt.imshow(X_test[7])
pred = model.predict(X_test)
pred
pred[7].argmax()
plt.imshow(X_test[1])
pred[1].argmax()
plt.imshow(X_test[981])
pred[981].argmax()
from keras.models import load_model
#model.save('traffic_signs.h5')
model = load_model('traffic_signs.h5')
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 14, 14, 32) 2432
max_pooling2d (MaxPooling2D (None, 7, 7, 32) 0
)
activation (Activation) (None, 7, 7, 32) 0
conv2d_1 (Conv2D) (None, 4, 4, 32) 9248
max_pooling2d_1 (MaxPooling (None, 2, 2, 32) 0
2D)
activation_1 (Activation) (None, 2, 2, 32) 0
conv2d_2 (Conv2D) (None, 1, 1, 32) 9248
max_pooling2d_2 (MaxPooling (None, 1, 1, 32) 0
2D)
activation_2 (Activation) (None, 1, 1, 32) 0
conv2d_3 (Conv2D) (None, 1, 1, 32) 9248
max_pooling2d_3 (MaxPooling (None, 1, 1, 32) 0
2D)
activation_3 (Activation) (None, 1, 1, 32) 0
conv2d_4 (Conv2D) (None, 1, 1, 32) 9248
max_pooling2d_4 (MaxPooling (None, 1, 1, 32) 0
2D)
activation_4 (Activation) (None, 1, 1, 32) 0
conv2d_5 (Conv2D) (None, 1, 1, 32) 9248
max_pooling2d_5 (MaxPooling (None, 1, 1, 32) 0
2D)
activation_5 (Activation) (None, 1, 1, 32) 0
flatten (Flatten) (None, 32) 0
dense (Dense) (None, 43) 1419
activation_6 (Activation) (None, 43) 0
=================================================================
Total params: 50,091
Trainable params: 50,091
Non-trainable params: 0
_________________________________________________________________
|
populate_df.ipynb | ###Markdown
This isn't normally distributed. Therefore, we could set `class_weight='balanced'`, or we could use tree-based models, which perform well on imbalanced datasets as their hierarchical structure allows them to learn signals from both classes.

## Lipinski molecules
This section identifies all molecules that satisfy all criteria of Lipinski's rule of five for bioavailable small molecules:
* 5 or fewer hydrogen bond donors;
* 10 or fewer hydrogen bond acceptors;
* A molecular weight (MW) of less than 500 Daltons;
* An octanol-water partition coefficient (log $P_{o/w}$) of less than 5. A larger log $P_{o/w}$ means more lipophilic (i.e., less water soluble).
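As a hedged sketch of those two options (nothing below is fitted in this notebook, and `X`/`y` are placeholder names for a feature matrix and binary target):

```python
# Sketch only: two common ways to handle an imbalanced binary target.
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

balanced_logreg = LogisticRegression(class_weight='balanced', max_iter=1000)
tree_model = RandomForestClassifier(n_estimators=200, class_weight='balanced')
# balanced_logreg.fit(X, y); tree_model.fit(X, y)   # X, y are placeholders, not defined here
```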
###Code
# Lipinski molecules
def lipinski_mol(df):
"""Determines whether a molecule in dataframe follows Lipinski rules"""
few_hb_donor = (df["H_donors"] <= 5)
few_hb_acceptor = (df["H_acceptors"] <= 10)
low_MW = (df["MW"] < 500)
low_pc = (df["LogP"] < 5)
return (few_hb_acceptor & few_hb_donor & low_MW & low_pc)
def select_lipinski(df, rule = lipinski_mol):
"""Removes compounds from database which don't follow Lipinski rules"""
df_lip = analyser.analyse_compounds(df, df, 'lipinski_mol', rule)
# df is passed twice, as it contains the data to operate on, and also is the dataframe to add output to
df_lip.drop(df_lip[df_lip.lipinski_mol == False].index, inplace=True)
df_lip.drop(columns = ['lipinski_mol'], inplace=True) # As all values are now true so column is defunct
return df_lip
df_lip_compounds = select_lipinski(df_compounds.copy())
print(f"Selected {df_lip_compounds.shape[0]} molecules out of {df_compounds.shape[0]}.")
def select_lipinski_assay(df_c, df_a, rule = lipinski_mol):
"""Removes assay corresponding to compounds from database which don't follow Lipinski rules"""
df_lip = analyser.analyse_compounds(df_c, df_c, 'lipinski_mol', rule)
df_a['lipinski_mol'] = df_lip['lipinski_mol']
df_a.drop(df_a[df_a.lipinski_mol == False].index, inplace=True)
df_a.drop(columns = ['lipinski_mol'], inplace=True) # As all values are now true so column is defunct
return df_a
df_lip_assays = select_lipinski_assay(df_compounds.copy(), df_assay.copy())
print(f"Remaining: {df_lip_compounds.shape[0]} assays out of {df_compounds.shape[0]}.")
# Representation of Lipinski molecules in sample space
def compare_dataframe_pair(df_1, df_2, x_var, y_var, subset_name = 'Lipinski Molecules'):
try:
fig, ax = plt.subplots()
df_1.plot(x=x_var, y=y_var, ax=ax, style='o', color = 'grey', label = 'Molecule Space')
df_2.plot(x=x_var, y=y_var, ax=ax, style='o', color = 'g', label = subset_name)
ax.set_ylabel(y_var); ax.legend()
except KeyError as k:
print(f"Column name {k} does not exist in dataframe")
compare_dataframe_pair(df_compounds, df_lip_compounds, 'H_acceptors', 'H_donors')
compare_dataframe_pair(df_compounds, df_lip_compounds, 'MW', 'LogP')
###Output
_____no_output_____
###Markdown
Note that the MW and $P_{o/w}$ are the most significant factors in determining whether a molecule satisfies the Lipinski conditions, with the hydrogen bond acceptor and donor limits ruling out very few molecules that have not already been ruled out by these factors. Relaxed Lipinski conditions We may also consider the subset of molecules that satisfy at least three of the Lipinski conditions, as shown below:
###Code
def relaxed_lipinski(df):
"""Determines whether a molecule in dataframe follows all but one Lipinski rules"""
few_hb_donor = (df["H_donors"] <= 5) * 1 # (* 1) to convert to int
few_hb_acceptor = (df["H_acceptors"] <= 10) * 1
low_MW = (df["MW"] < 500) * 1
low_pc = (df["LogP"] < 5) * 1
accepted_cond = few_hb_acceptor + few_hb_donor + low_MW + low_pc
return (accepted_cond >= 3)
df_rlip_compounds = select_lipinski(df_compounds.copy(), rule=relaxed_lipinski)
print(f"Selected {df_rlip_compounds.shape[0]} molecules out of {df_compounds.shape[0]}.")
###Output
Selected 1950 molecules out of 2037.
###Markdown
We see that this new criterion includes a further 422 molecules, compared to the stricter condition where all Lipinski conditions must be passed. We expect from our previous work that most of these new molecules will have broken the MW or $P_{o/w}$ conditions, but may verify this graphically.
###Code
def compare_dataframe_trio(df_1, df_2, df_3, x_var, y_var):
try:
fig, ax = plt.subplots()
df_1.plot(x=x_var, y=y_var, ax=ax, style='o', color = 'grey', label = 'Molecule Space')
df_2.plot(x=x_var, y=y_var, ax=ax, style='o', color = 'blue', label = 'Relaxed Lipinski Molecules')
df_3.plot(x=x_var, y=y_var, ax=ax, style='o', color = 'g', label = 'Lipinski Molecules')
ax.set_ylabel(y_var); ax.legend()
except KeyError as k:
print(f"Column name {k} does not exist in dataframe")
compare_dataframe_trio(df_compounds, df_rlip_compounds, df_lip_compounds, 'H_acceptors', 'H_donors')
compare_dataframe_trio(df_compounds, df_rlip_compounds, df_lip_compounds, 'MW', 'LogP')
###Output
_____no_output_____
###Markdown
As expected, we see two new tranches of molecules (in blue) on the MW/$P_{o/w}$ plot that have been accepted under the new relaxed conditions. In general, outliers that lie further from the main cluster have not been accepted: these heavier molecules are more likely to fail the hydrogen bond donor/acceptor conditions, as they contain more functional groups. We may also consider the distribution of these compounds across subspaces of other assay data:
###Code
compare_dataframe_trio(df_compounds, df_rlip_compounds, df_lip_compounds, 'NHOH_count', 'NO_count')
compare_dataframe_trio(df_compounds, df_rlip_compounds, df_lip_compounds, 'Heavy_atoms', 'Heavy_atom_mw')
compare_dataframe_trio(df_compounds, df_rlip_compounds, df_lip_compounds, 'Rotatable_bonds', 'Valence_electonrs')
compare_dataframe_trio(df_compounds, df_rlip_compounds, df_lip_compounds, 'Rings', 'TPSA')
#tpsa is good, rings are not - compare tpsa to valence e?
#nhoh is h bond donor, no is hydrogen bond acceptor
print(df_lip_compounds.columns)
###Output
Index(['SMILES', 'H_acceptors', 'H_donors', 'MW', 'LogP', 'Heavy_atoms',
'Heavy_atom_mw', 'NHOH_count', 'NO_count', 'Rotatable_bonds',
'Valence_electonrs', 'Rings', 'TPSA', 'fingerprint'],
dtype='object')
###Markdown
Clustering We have computed ECFP (Morgan) fingerprints with radius 2 and 2048 bits for each of the Moonshot compounds (an illustrative sketch of the usual fingerprint call is shown below). We will then cluster the compounds using the Tanimoto (a.k.a. Jaccard) index. Then: remove zero-size clusters, or remove zero-variance bits and conduct a PCA analysis?
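As a reference point, an illustrative sketch (the fingerprints themselves were computed upstream of this notebook) of the usual RDKit call for ECFP/Morgan bit vectors with radius 2 and 2048 bits:

```python
# Illustrative sketch only; df_compounds['fingerprint'] was generated upstream.
from rdkit import Chem
from rdkit.Chem import AllChem

def morgan_fingerprint(smiles, radius=2, n_bits=2048):
    """Return an ECFP-style Morgan bit vector for one SMILES string (None if parsing fails)."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    return AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)

# e.g. df_compounds["fingerprint"] = df_compounds["SMILES"].map(morgan_fingerprint)
```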
###Code
#Define clustering setup
def ClusterFps(fps,cutoff=0.2):
from rdkit import DataStructs
from rdkit.ML.Cluster import Butina
# first generate the distance matrix:
dists = []
nfps = len(fps)
for i in range(1,nfps):
sims = DataStructs.BulkTanimotoSimilarity(fps[i],fps[:i])
dists.extend([1-x for x in sims])
# now cluster the data:
cs = Butina.ClusterData(dists,nfps,cutoff,isDistData=True)
return cs
clusters=ClusterFps(df_compounds['fingerprint'],cutoff=0.322) # for 95% confidence that clusters are different
def cluster_size_dist(clusters):
""" Records the size of each cluster.
"""
cluster_sizes = []
for cluster in clusters:
cluster_sizes.append(len(cluster))
return cluster_sizes
cluster_size_dist = cluster_size_dist(clusters)
print(f"Generated {len(clusters)} with mean occupancy of {sum(cluster_size_dist)/len(clusters):.2f} molecules")
###Output
Generated 911 with mean occupancy of 2.24 molecules
###Markdown
Our clustering algorithm has generated a large number of clusters, with very few molecules per cluster. What does this mean though - are our molecules evenly spread between clusters?
###Code
plt.figure(figsize=(14,6))
plt.subplot(121)
hist, bins, _ = plt.hist(cluster_size_dist, bins=12)
plt.xlabel('Cluster Size'); plt.ylabel('Frequency')
plt.title("Linear Scale")
plt.subplot(122)
logbins = np.logspace(np.log10(bins[0]),np.log10(bins[-1]),len(bins))
plt.hist(cluster_size_dist, bins=logbins)
plt.xscale('log'); plt.yscale('log')
plt.xlabel('Cluster Size'); plt.ylabel('Frequency')
plt.title("Log Scale")
plt.show()
###Output
_____no_output_____
###Markdown
From the linear histogram, it is clear that the vast majority of the clusters are small; in fact the log scale shows the frequency of clusters decreases almost exponentially with size. We believe this over-representation of small clusters is a result of an extremely diverse dataset, with many unique molecules unrelated to others in the dataset (and a small number of well-clustered molecules, i.e., derived from the same scaffold).
###Code
# find features associated with the most common clusters, and display graphically.
# remove molecules in smallest clusters and recluster?
# or remove zero variance bits in fingerprint and repeat.
def find_max_cluster(clusters):
""" Returns the largest cluster
"""
max_cluster = ()
for cluster in clusters:
if len(cluster) > len(max_cluster):
max_cluster = cluster
return max_cluster
max_cluster = find_max_cluster(clusters)
print(f"The largest cluster has {len(max_cluster)} compounds.")
###Output
The largest cluster has 143 compounds.
###Markdown
This cluster can be visualised within the sample space, for example in terms of MW and $P_{o/w}$:
###Code
compare_dataframe_pair(df_compounds, df_compounds.iloc[list(max_cluster),], 'MW', 'LogP', subset_name='Largest Cluster')
compare_dataframe_pair(df_compounds, df_compounds.iloc[list(max_cluster),], 'NHOH_count', 'NO_count', subset_name='Largest Cluster')
compare_dataframe_pair(df_compounds, df_compounds.iloc[list(max_cluster),], 'TPSA', 'Valence_electonrs', subset_name='Largest Cluster')
'NHOH_count', 'NO_count'
###Output
_____no_output_____
###Markdown
We may also visualise molecules from the cluster, using RDKit's built-in drawer:
###Code
n_molecule = 3
cluster_mols=[]
for i in range(n_molecule):
m = Chem.MolFromSmiles(df_compounds['SMILES'][max_cluster[i]])
cluster_mols.append(m)
Draw.MolsToGridImage(cluster_mols)
###Output
_____no_output_____ |
examples/Prior_distribution_BRIE2.ipynb | ###Markdown
Prior distribution
###Code
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow_probability import distributions as tfd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Logit-Normal See more on the [logit-normal distribution Wikipedia page](https://en.wikipedia.org/wiki/Logit-normal_distribution). In practice, we use $\mu=0, \sigma=3.0$ as the prior.
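For reference, the curve computed in the next cell is the logit-normal density obtained by a change of variables: if $Y\sim\mathcal{N}(\mu,\sigma^2)$ and $X=\operatorname{sigmoid}(Y)$, then
$$ f_X(x;\mu,\sigma)=\frac{1}{\sigma\sqrt{2\pi}}\;\frac{1}{x(1-x)}\;\exp\!\left(-\frac{(\operatorname{logit}(x)-\mu)^{2}}{2\sigma^{2}}\right),\qquad x\in(0,1), $$
which is exactly `Normal(loc, scale).prob(logit(x))` multiplied by the Jacobian $1/(x(1-x))$, with $\mu=\operatorname{logit}(\text{mean})$ and $\sigma=\text{std}$ in the plots below.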
###Code
def _logit(x):
return tf.math.log(x / (1 - x))
_logit(np.array([0.6]))
_means = np.array([0.05, 0.3, 0.5, 0.7, 0.95], dtype=np.float32)
_vars = np.array([0.1, 0.5, 1.0, 1.5, 2.0, 3.0], dtype=np.float32)
xx = np.arange(0.001, 0.999, 0.001).astype(np.float32)
fig = plt.figure(figsize=(10, 6))
for j in range(len(_vars)):
plt.subplot(2, 3, j + 1)
for i in range(len(_means)):
_model = tfd.Normal(_logit(_means[i:i+1]), _vars[j:j+1])
_pdf = _model.prob(_logit(xx)) * 1 / (xx * (1 - xx))
plt.plot(xx, _pdf, label="mean=%.2f" %(_means[i]))
plt.title("std=%.2f" %(_vars[j]))
if j == 5:
plt.legend(loc="best")
plt.tight_layout()
plt.show()
_model = tfd.Normal([0], [3])
plt.hist(np.array(tf.sigmoid(_model.sample(1000))).reshape(-1), bins=100)
plt.show()
###Output
_____no_output_____
###Markdown
Gamma distribution See more on the [Gamma distribution Wikipedia page](https://en.wikipedia.org/wiki/Gamma_distribution). In practice, we use $\alpha=10, \beta=3$ as the prior.
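For reference, with the shape and rate parameterization used by `tfd.Gamma(concentration, rate)`, the density plotted below is
$$ f(x;\alpha,\beta)=\frac{\beta^{\alpha}}{\Gamma(\alpha)}\,x^{\alpha-1}e^{-\beta x},\qquad x>0, $$
with mean $\alpha/\beta$ and variance $\alpha/\beta^{2}$; the $\alpha=10,\ \beta=3$ prior therefore has mean $10/3\approx 3.3$.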
###Code
_alpha = np.array([2, 3, 5, 10, 15], dtype=np.float32)
_beta = np.array([0.5, 1, 2, 3, 5, 8], dtype=np.float32)
xx = np.arange(0.01, 10, 0.1).astype(np.float32)
fig = plt.figure(figsize=(10, 6))
for j in range(len(_beta)):
    plt.subplot(2, 3, j + 1)
    for i in range(len(_alpha)):
_model = tfd.Gamma(_alpha[i:i+1], _beta[j:j+1])
_pdf = _model.prob(xx)
plt.plot(xx, _pdf, label="alpha=%.2f" %(_alpha[i]))
plt.title("beta=%.2f" %(_beta[j]))
if j == 5:
plt.legend(loc="best")
plt.tight_layout()
plt.show()
_model = tfd.Gamma([10], [3])
plt.hist(_model.sample(1000).numpy().reshape(-1), bins=100)
plt.show()
###Output
_____no_output_____ |
mcis6273_f20_datamining/homework/hw3/hw3.ipynb | ###Markdown
MCIS6273 Data Mining (Prof. Maull) / Fall 2020 / HW3 Student Name: Manideepak Neeli SAU ID : 999901000 ------------------------------------------------------------------------------------------------------------------------------ OBJECTIVES* Perform Bayesian text classification ASSIGNMENT TASKS (100%) Perform Bayesian text classification
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
###Output
_____no_output_____
###Markdown
In order to work with the documents we want to load, you will want to study * [`from sklearn.feature_extraction.text import TfidfVectorizer`]()* [`from sklearn.naive_bayes import MultinomialNB`]()Our test data will be given by the following dictionary:
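Before tackling the parts below, a minimal toy sketch of how the two classes fit together (the strings and labels here are made up purely for illustration):

```python
# Toy illustration only: a tiny made-up corpus, not the Gutenberg files used below.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

toy_docs   = ["the republic and the laws", "a treatise of human nature", "politics poetics ethics"]
toy_labels = ["a", "b", "c"]

toy_vec = TfidfVectorizer(stop_words="english")
toy_X   = toy_vec.fit_transform(toy_docs)         # sparse TF-IDF matrix, one row per document
toy_clf = MultinomialNB().fit(toy_X, toy_labels)  # fit() takes the feature matrix plus class labels

print(toy_clf.predict(toy_vec.transform(["the laws of the republic"])))  # expected: ['a']
```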
###Code
test_map = {
'a': # Plato
[
'data/plato/test/pg1726.txt', # Title: Cratylus
'data/plato/test/pg1616.txt', # Title: Ion
'data/plato/test/pg1735.txt', # Title: Theaetetus
'data/plato/test/pg1635.txt' # Title: Sophist
],
'b':
[ # Hume
'data/hume/test/pg59792-0.txt', # Title: Hume's Political Discourses
'data/hume/test/pg62856-0.txt', # Title: A Treatise of Human Nature Being an Attempt to Introduce the Experimental Method into Moral Subjects
'data/hume/test/pg9662.txt', # Title: An Enquiry Concerning Human Understanding
],
'c':
[ # Aristotle
'data/aristotle/test/pg59058.txt', # Title: Aristotle's History of Animals In Ten Books
'data/aristotle/test/pg2412.txt', # Title: The Categories
'data/aristotle/test/pg6762.txt', # Title: Politics A Treatise on Government
'data/aristotle/test/pg1974.txt', # Title: Poetics
]
}
training_map = {
'a':
[ # Plato
'data/plato/train/pg1750.txt', # Laws
'data/plato/train/pg1497.txt', # The Republic
'data/plato/train/pg1600.txt', # Symposium
],
'b':
[ # Hume
'data/hume/train/pg10574.txt', # The History of England, Volume I
'data/hume/train/pg4705.txt', # A Treatise of Human Nature
'data/hume/train/pg36120.txt', # Essays
],
'c':
[ # Aristotle
'data/aristotle/train/pg8438.txt', # Ethics
'data/aristotle/train/pg26095.txt',# The Athenian Constitution
'data/aristotle/train/pg6763.txt' # The Poetics
]
}
files_train = []
y_train = []
for k in training_map.keys():
files_train.extend(training_map[k])
y_train.extend(k * len(training_map[k]))
pass
files_train
y_train
###Output
_____no_output_____
###Markdown
PART I Use the dictionary map in the variable `training_map`. Your function will take the files (in the order they appear in `training_map`) and pass the data into the [`TfidfVectorizer`]() vectorizer. You will need to set the parameter to the constructor to `input='file'` and the `stop_words` to `'english'` (e.g. initialize the vectorizer to `TfidfVectorizer(input='file', stop_words='english')`.* **You will just need to show the new function and the initialization of the vectorizer in this step.** This will be one or two cells at most.* You will use `fit_transform()` with the parameter being a list of the training files objects.
###Code
def getVectorizerVectors(files_train,vectorizer):
# call vectorizer.fit_transform on the list of FILE OBJECTS
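    # note: .astype(int) below truncates the fractional TF-IDF weights (all below 1) toward zero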
vectorizer_vectors = vectorizer.fit_transform([open(file, 'r', encoding='utf-8') for file in files_train if file.endswith("txt")]).astype(int)
return vectorizer_vectors
vectorizer = TfidfVectorizer(input='file', stop_words='english')
tfidf_vectorizer_vectors = getVectorizerVectors(files_train,vectorizer)
tfidf_vectorizer_vectors
type(tfidf_vectorizer_vectors)
tfidf_vectorizer_vectors.toarray()
tfidf_vectorizer_vectors.toarray().shape
len(y_train)
###Output
_____no_output_____
###Markdown
PART IINow that you have a vectorizer which effectively builds the data structure to hold theTF-IDF of all the words which appear for each document, you can move to the trainingphase for the Bayesian classifier. Look in the sample notebook for guidance. You will take asinput the vectorizer output (the documents vectorized by TF-IDF) and the correspondingclasses (in the order they appear in the original dictionary map) and pass that into the [`MultinomialNB.fit()`](https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.htmlsklearn.naive_bayes.MultinomialNB.fit) method.* **Show the initialization of your `MultinomialNB()` classifier and the application of the `fit()` method.**
###Code
from sklearn.naive_bayes import MultinomialNB
# initialize MultinomialNB (one line) (e.g. clf = ???)
clf = MultinomialNB()
# e.g. clf.fit( with_the_approproate_parameters )
clf.fit(tfidf_vectorizer_vectors, y_train)
tfidf_vectorizer_vectors.shape
len(y_train)
clf.score(tfidf_vectorizer_vectors,y_train)*100
###Output
_____no_output_____
###Markdown
PART IIIOnce you have the classifier, you will need to convert a test file usingthe vectorizer from part I. Then you will execute the `predict()` method of your classifier.Assume `vectorizer` is your TF-IDF vectorizer from above and the `clf` yourclassifier from part II above, your code could be modeled after this:```pythonx_test = vectorizer.transform([open("data/aristotle/test/pg2412.txt")]) should be class C!clf.predict(x_test)``` In your notebook do the following:1. **Write a function to take as input a vectorized document and trained classifier and returnthe predicted label for the document.** See the sample notebook for guidance.1. **Test on the files in the `data/philosopher_name/test` folders and show the output of your test.**You can wrap your function from the previous step in a loopto run through all data in the folder. This will be short enough to be coded in a single Jupyter cell.
###Code
files_test = [
#("data/aristotle/test/pg2412.txt", 'c'), # the class should match the file (e.g. Hume is 'b')
# add all the remaining files
('data/plato/test/pg1726.txt','a') ,
('data/plato/test/pg1616.txt','a'),
('data/plato/test/pg1735.txt','a'),
('data/plato/test/pg1635.txt','a'),
('data/hume/test/pg59792-0.txt','b'),
('data/hume/test/pg62856-0.txt','b'),
('data/hume/test/pg9662.txt','b'),
('data/aristotle/test/pg59058.txt','c'),
('data/aristotle/test/pg2412.txt','c'),
('data/aristotle/test/pg6762.txt','c'),
('data/aristotle/test/pg1974.txt','c')
]
def getPredictValue(x_test,clf):
return clf.predict(x_test)[0]
for f, cls_predict in files_test:
# load the test dataset
x_test = vectorizer.transform([open(f,'r', encoding='utf-8')])
# should be class C!
print(x_test.shape)
pred = getPredictValue(x_test,clf)
print(cls_predict,":",pred[0])
print (f"{f}: {cls_predict == pred}")
pass # remove
###Output
(1, 24856)
a : a
data/plato/test/pg1726.txt: True
(1, 24856)
a : a
data/plato/test/pg1616.txt: True
(1, 24856)
a : a
data/plato/test/pg1735.txt: True
(1, 24856)
a : a
data/plato/test/pg1635.txt: True
(1, 24856)
b : a
data/hume/test/pg59792-0.txt: False
(1, 24856)
b : a
data/hume/test/pg62856-0.txt: False
(1, 24856)
b : a
data/hume/test/pg9662.txt: False
(1, 24856)
c : a
data/aristotle/test/pg59058.txt: False
(1, 24856)
c : a
data/aristotle/test/pg2412.txt: False
(1, 24856)
c : a
data/aristotle/test/pg6762.txt: False
(1, 24856)
c : a
data/aristotle/test/pg1974.txt: False
###Markdown
§ You have now built your first document classifier! Now answer the following questions: 1. How many of the documents did your classifier correctly classify? Ans: Four documents were correctly classified (the four Plato texts), as the per-file results above show. The process of classifying or grouping documents into a predefined set of classes, based on criteria defined in advance, is called text classification. Text classification has been exploited in various applications such as document organization, automated document indexing, spam filtering, text filtering, and word sense disambiguation. § The classifier `predict` method only returns the label, but you can get the probabilities assigned to all classes using [`predict_proba()`](https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.html#sklearn.naive_bayes.MultinomialNB.predict_proba). Please take a look at the example notebook to see how that is done.
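For reference, in a Naive Bayes text classifier the class prior $P(c)$ and the per-term conditional probabilities $P(x_i \mid c)$ combine through Bayes' rule: for a document $d$ with terms $x_1,\dots,x_n$, the learned classifier returns
$$ \Upsilon(d)=\arg\max_{c\in C} P(c\mid d)=\arg\max_{c\in C}\; P(c)\prod_{i=1}^{n}P(x_i\mid c). $$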
###Code
print("Classes Labels: ",clf.classes_)
for f, cls_predict in files_test:
# load the test dataset
x_test = vectorizer.transform([open(f,'r', encoding='utf-8')])
pred_proba = clf.predict_proba(x_test)
print(f,":",pred_proba)
###Output
Classes Labels: ['a' 'b' 'c']
data/plato/test/pg1726.txt : [[0.33333333 0.33333333 0.33333333]]
data/plato/test/pg1616.txt : [[0.33333333 0.33333333 0.33333333]]
data/plato/test/pg1735.txt : [[0.33333333 0.33333333 0.33333333]]
data/plato/test/pg1635.txt : [[0.33333333 0.33333333 0.33333333]]
data/hume/test/pg59792-0.txt : [[0.33333333 0.33333333 0.33333333]]
data/hume/test/pg62856-0.txt : [[0.33333333 0.33333333 0.33333333]]
data/hume/test/pg9662.txt : [[0.33333333 0.33333333 0.33333333]]
data/aristotle/test/pg59058.txt : [[0.33333333 0.33333333 0.33333333]]
data/aristotle/test/pg2412.txt : [[0.33333333 0.33333333 0.33333333]]
data/aristotle/test/pg6762.txt : [[0.33333333 0.33333333 0.33333333]]
data/aristotle/test/pg1974.txt : [[0.33333333 0.33333333 0.33333333]]
###Markdown
**Answer the following questions inside your notebook:** 1. Make an observation about the class probabilities. What did you notice? Ans: In the output above, every test document receives essentially the same probability (about 0.333) for each of the three classes, which matches the earlier predictions all coming out as 'a'. More generally, an observation or input to the model is referred to as X and the class label or output of the model is referred to as y; together, X and y represent observations collected from the domain. Note that P(y) is also called the class probability and P(xi | y) is called the conditional probability. In the theorem, P(A) represents the probability of each event, and in the Naive Bayes classifier we can interpret these class probabilities simply as the frequency of each instance of the event divided by the total number of instances. We assume: a document d; a fixed set of classes C = { c1, c2, … , cn }; and a training set of m documents that we have pre-determined to belong to specific classes. We train our classifier on the training set, resulting in a learned classifier, which we can then use to classify new documents. Notation: we use Υ(d) = c to represent our classifier, where Υ() is the classifier, d is the document, and c is the class we assign to the document. 2. Provide some commentary on how the probabilities might be improved (you can provide your answer as a thought exercise or, if you have time, provide some example code). Ans: The predict_proba(X) method returns probability estimates for the test vector X, whereas predict() returns only the label assigned on the basis of the trained model. Inspecting the predict_proba() output makes it easier to see how confident the model is and therefore where the probabilities could be improved. An example of predict() and predict_proba() on the iris dataset is shown below. Example:
###Code
# load the iris dataset
from sklearn.datasets import load_iris
iris = load_iris()
# store the feature matrix (X) and response vector (y)
X = iris.data
y = iris.target
# splitting X and y into training and testing sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=1)
# training the model on training set
from sklearn.naive_bayes import MultinomialNB
mnb = MultinomialNB()
mnb.fit(X_train, y_train)
# making predictions on the testing set
y_pred = mnb.predict(X_test)
print('Predicted Class: ',y_pred)
print(len(y_pred))
# comparing actual response values (y_test) with predicted response values (y_pred)
from sklearn import metrics
print("MultinomialNB Bayesian text classification model accuracy(in %):", metrics.accuracy_score(y_test, y_pred)*100)
print("Classes Labels: ",mnb.classes_)
# Returns the probability of the samples for each class in the model. The columns correspond to the classes in sorted order,
# as they appear in the attribute classes_.
# predict(X): Perform classification on an array of test vectors X.
# predict_proba(X) : Return probability estimates for the test vector X.
y_pred_prob = mnb.predict_proba(X_test)
print('Classes: ',mnb.classes_)
print('Predicted Probabilities: ')
print(y_pred_prob)
###Output
Classes: [0 1 2]
Predicted Probabilities:
[[0.84782459 0.10087277 0.05130264]
[0.13897718 0.45635525 0.40466757]
[0.065804 0.49378465 0.44041136]
[0.77565499 0.14319566 0.08114934]
[0.01479228 0.48808597 0.49712175]
[0.04597056 0.4836567 0.47037274]
[0.01343944 0.45799804 0.52856252]
[0.66688299 0.20602622 0.12709079]
[0.67993551 0.19761207 0.12245242]
[0.0092437 0.44988817 0.54086813]
[0.06001761 0.47836301 0.46161938]
[0.72194203 0.17542687 0.1026311 ]
[0.00979182 0.46035138 0.5298568 ]
[0.05802771 0.48828932 0.45368297]
[0.04535742 0.48033587 0.47430671]
[0.71303904 0.17674608 0.11021487]
[0.0769015 0.48261562 0.44048288]
[0.04434897 0.47287291 0.48277812]
[0.69120713 0.19348583 0.11530704]
[0.74997422 0.1580045 0.09202128]
[0.05535675 0.47924742 0.46539583]
[0.04262238 0.46891271 0.4884649 ]
[0.02974367 0.48440425 0.48585208]
[0.74380682 0.16107334 0.09511984]
[0.01583604 0.48099169 0.50317227]
[0.06920573 0.47947769 0.45131658]
[0.83669391 0.10706452 0.05624158]
[0.75341297 0.15588276 0.09070427]
[0.05661488 0.49044861 0.45293651]
[0.01560128 0.46007061 0.52432811]
[0.05213339 0.48534563 0.46252098]
[0.00726961 0.45938214 0.53334825]
[0.08518579 0.48000014 0.43481407]
[0.00933963 0.44573683 0.54492354]
[0.00777305 0.4219696 0.57025735]
[0.7743441 0.14472812 0.08092778]
[0.05399892 0.476305 0.46969608]
[0.75222418 0.15761511 0.09016071]
[0.04412842 0.48942642 0.46644516]
[0.0109453 0.463015 0.5260397 ]
[0.01227721 0.45960354 0.52811925]
[0.73372126 0.16754483 0.0987339 ]
[0.02134589 0.4709074 0.50774671]
[0.01471222 0.46203543 0.52325235]
[0.06190376 0.49672496 0.44137127]
[0.0032063 0.43875495 0.55803876]
[0.74829331 0.16170478 0.09000191]
[0.7664619 0.14821399 0.08532412]
[0.66000765 0.20876249 0.13122986]
[0.059795 0.46838882 0.47181618]
[0.7348892 0.16753374 0.09757707]
[0.78551104 0.13866183 0.07582713]
[0.00867533 0.46707934 0.52424534]
[0.01428294 0.46478155 0.5209355 ]
[0.01093249 0.47487673 0.51419078]
[0.00812278 0.43977369 0.55210353]
[0.02408976 0.46901186 0.50689838]
[0.05264372 0.4785256 0.46883069]
[0.0063535 0.46429468 0.52935182]
[0.08208005 0.48025993 0.43766003]]
|
notebooks/4.2-jalvaradoruiz-construccion-dataset-tribunales-orales.ipynb | ###Markdown
Creating the dataset for ORAL TRIBUNALS (TRIBUNALES ORALES) An analysis is carried out using regular expressions on Article 21 of the Organic Code of Courts (Código Orgánico de Tribunales, COT). This article defines the communes of Chile where this type of tribunal has its seat, together with its territorial jurisdiction.
###Code
import re
import os
import pandas as pd
import numpy as np
from tqdm import tqdm
from unicodedata import normalize
from pjud.data import cleandata
tqdm.pandas()
path_raw = "../data/raw/cot"
###Output
_____no_output_____
###Markdown
Article 21 of the COT, ref: ORAL TRIBUNALS (TRIBUNALES ORALES)
###Code
with open(f'{path_raw}/Tribunales_orales.txt', 'r') as file:
contenido_top = ''
for line in file.readlines():
contenido_top += line
###Output
_____no_output_____
###Markdown
A regular expression is built to capture a list with the information to be processed, and thus generate a dataframe.
###Code
regex_top=r"(?:(?P<Region>^[\w \']+)\:\n)|(?P<JG>^[\w. \-]+)\,\scon\s(?P<Jueces>[\w.\-]+)[a-z\-\s\,]+(?P<Competencia>\.|\s[\w. \-\,\']+)"
matches = re.findall(regex_top, contenido_top, re.MULTILINE)
#matches
# GENERATES AN ARRAY OF LISTS OF 4 ELEMENTS
# ELEMENT 0: REGION
# ELEMENT 1: ORAL TRIBUNAL OF ...
# ELEMENT 2: Number of judges
# ELEMENT 3: Jurisdiction ('.' means the same as the seat city) or several cities (separated by ',' or 'y')
data_top=[]
for item in range(0,len(matches)):
if matches[item][0] != '':
region = matches[item][0].upper()
else:
if matches[item][1] != '':
ciudad = matches[item][1].upper()
if ciudad.find("TRIBUNAL") != -1:
juzgado = ciudad
else:
juzgado = f"TRIBUNAL DE JUICIO ORAL EN LO PENAL {ciudad}"
if matches[item][2] != '':
cantidad_jueces = cleandata.transforma_numero(matches[item][2])
if matches[item][3] == '.':
competencia = ciudad
else:
if matches[item][3] != '':
competencia = matches[item][3].upper()
competencia = competencia.replace(" Y ",",")
competencia = competencia.replace(" E ",",")
competencia = competencia.replace(".","")
comunas = competencia.split(",")
for comuna in comunas:
data_top.append([region,juzgado,ciudad,cantidad_jueces,comuna.strip(),'ORAL'])
df_tribunal_oral = pd.DataFrame(data_top, columns = ['REGION','TRIBUNAL','ASIENTO','JUECES','COMUNA','TIPO JUZGADO'])
df_tribunal_oral['JUECES'] = df_tribunal_oral['JUECES'].fillna(0).astype(np.int8)
cols = df_tribunal_oral.select_dtypes(include = ["object"]).columns
df_tribunal_oral[cols] = df_tribunal_oral[cols].progress_apply(cleandata.elimina_tilde)
df_tribunal_oral
# Reset the index so the dataframe can be written to feather
df_tribunal_oral.reset_index(inplace = True)
path_interim = "../data/interim/pjud"
os.makedirs(path_interim, exist_ok = True)
# Save the dataset as a feather file
df_tribunal_oral.to_feather(f'{path_interim}/generates_TribunalOral.feather')
###Output
_____no_output_____ |
_posts/python/layout/horizontal-legend/horizontal-legends.ipynb | ###Markdown
New to Plotly?Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).You can set up Plotly to work in [online](https://plot.ly/python/getting-started/initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/start-plotting-online).We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started! Horizontal Legend
###Code
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
trace1 = go.Scatter(
x=np.random.randn(75),
mode='markers',
name="Plot1",
marker=dict(
size=16,
color='rgba(152, 0, 0, .8)'
))
trace2 = go.Scatter(
x=np.random.randn(75),
mode='markers',
name="Plot2",
marker=dict(
size=16,
color='rgba(0, 152, 0, .8)'
))
trace3 = go.Scatter(
x=np.random.randn(75),
mode='markers',
name="Plot3",
marker=dict(
size=16,
color='rgba(0, 0, 152, .8)'
))
data = [trace1, trace2, trace3]
layout = go.Layout(
legend=dict(
orientation="h")
)
figure=go.Figure(data=data, layout=layout)
py.iplot(figure)
###Output
_____no_output_____
###Markdown
ReferenceSee https://plot.ly/python/reference/layout-legend-orientation for more information and chart attribute options!
###Code
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'horizontal-legends.ipynb', 'python/horizontal-legend/', 'Horizontal legend | plotly',
'How to add images to charts as background images or logos.',
title = 'Horizontal legend | plotly',
name = 'Horizontal Legends',
has_thumbnail='false', thumbnail='thumbnail/your-tutorial-chart.jpg',
language='python', page_type='example_index',
display_as='layout_opt', order=12,
ipynb= '~notebook_demo/94')
###Output
_____no_output_____ |
zetus/ZetusFeedings.ipynb | ###Markdown
separate same time feedings
###Code
zetus_dedup["prev_dur"] = zetus_dedup.sort_values("time_").groupby(["time_"])["dur"].shift(1)
zetus_dedup["fixed_time_"] = zetus_dedup.apply(
lambda x: x.time_ + pd.Timedelta(minutes = (x.prev_dur if pd.notna(x.prev_dur) else 0)),
axis = 1)
init_notebook_mode(connected=True)
zetus_dedup.dtypes
import colorlover as cl
colorscale = cl.scales['5']['qual']['Dark2']
light_colorscale = cl.scales['5']['qual']['Set2']
zetus_dedup["day"] = zetus_dedup.fixed_time_.dt.date
zetus_dedup["time_of_day"] = pd.to_datetime(zetus_dedup.fixed_time_.dt.time.astype("str").map(lambda x: "01/01/19 {}".format(x)))
feed_data = [
go.Bar(
x = zetus_dedup[zetus_dedup["type"] == side].sort_values("fixed_time_").fixed_time_,
y = 2 * (i - 0.5) * zetus_dedup[zetus_dedup["type"] == side].sort_values("fixed_time_").dur,
name = side,
width = 4 * 1000 * 60,
marker = {"color": colorscale[i]}) for i, side in enumerate(["L", "R"])
]
output_data = [
go.Bar(
x = zetus_dedup[zetus_dedup["type"] == output].sort_values("fixed_time_").fixed_time_,
y = len(zetus_dedup[zetus_dedup["type"] == output]) * [60],
name = output,
base = -30,
width = 4 * 1000 * 60,
marker = {"color": colorscale[i + 2]}) for i, output in enumerate(["S", "W"])
]
layout = go.Layout()
fig = go.Figure(data = feed_data + output_data, layout = layout)
iplot(fig)
plot(fig)
from plotly import tools
days = zetus_dedup.day.unique()
days
feed_data = {
day: [
go.Bar(
x = zetus_dedup[(zetus_dedup["day"] == day) & (zetus_dedup["type"] == side)].sort_values("fixed_time_").time_of_day,
y = len(zetus_dedup[(zetus_dedup["day"] == day) & (zetus_dedup["type"] == side)]) * [1],
width = 1000 * 60 * zetus_dedup[(zetus_dedup["day"] == day) & (zetus_dedup["type"] == side)].sort_values("fixed_time_").dur,
name = side,
legendgroup = side,
showlegend = day == days[0],
hoverinfo = "text",
text = zetus_dedup[
(zetus_dedup["day"] == day) & (zetus_dedup["type"] == side)].sort_values("fixed_time_").apply(
lambda x: "{} for {} min".format(x["type"], x.dur), axis = 1),
# width = 4 * 1000 * 60,
offset = 0,
marker = {"color": colorscale[i]}) for i, side in enumerate(["L", "R"])] + [
go.Scatter(
x = zetus_dedup[(zetus_dedup["day"] == day) & (zetus_dedup["type"] == output)].sort_values("fixed_time_").time_of_day,
y = len(zetus_dedup[(zetus_dedup["day"] == day) & (zetus_dedup["type"] == output)]) * [i],
name = output,
legendgroup = output,
showlegend = day == days[0],
hoverinfo = "text",
text = zetus_dedup[
(zetus_dedup["day"] == day) & (zetus_dedup["type"] == output)].sort_values("fixed_time_").apply(
lambda x: "{} at {}".format(x["type"], x["time_"].strftime("%H:%M")), axis = 1),
mode = "markers",
marker = {"color": colorscale[i + 2], "size": 10}) for i, output in enumerate(["S", "W"])] for day in days}
fig = tools.make_subplots(
rows = len(days),
cols = 1,
shared_xaxes = True,
shared_yaxes = True,
vertical_spacing = 0.02,
subplot_titles = ["First Days Input/Output"]
)
for i, day in enumerate(days):
for bar in feed_data[day]:
fig.append_trace(bar, i + 1, 1)
import datetime as dt
fig["layout"].update(
# height = 400,
# width = 1000,
xaxis = {
"tickformat": "%H:%M",
"range": [
dt.datetime.strptime("01/01/19 00:00", "%m/%d/%y %H:%M"),
dt.datetime.strptime("01/01/19 23:59", "%m/%d/%y %H:%M")],
"tick0": dt.datetime.strptime("01/01/19 00:00", "%m/%d/%y %H:%M"),
"dtick": 1000 * 60 * 60 * 2
},
hovermode = "closest",
plot_bgcolor = "#DCDCDC")
for i in range(len(days)):
# fig["layout"]["yaxis{}".format(i + 1)]["showgrid"] = False
fig["layout"]["yaxis{}".format(i + 1)]["tick0"] = 0
fig["layout"]["yaxis{}".format(i + 1)]["dtick"] = 1
fig["layout"]["yaxis{}".format(i + 1)]["range"] = [0, 1]
fig["layout"]["yaxis{}".format(i + 1)]["showticklabels"] = False
fig["layout"]["yaxis{}".format(i + 1)]["zeroline"] = False
fig["layout"]["yaxis{}".format(i + 1)]["title"] = i + 1
iplot(fig)
plot(fig, filename = "input_output.html")
def simp_plot(num_days = 14):
chosen_days = days[:num_days]
in_out_data = {
day: [
go.Bar(
x = zetus_dedup[(zetus_dedup["day"] == day) & (zetus_dedup["type"].isin(["L", "R"]))].sort_values("fixed_time_").time_of_day,
y = len(zetus_dedup[(zetus_dedup["day"] == day) & (zetus_dedup["type"].isin(["L", "R"]))]) * [1],
width = 1000 * 60 * zetus_dedup[(zetus_dedup["day"] == day) & (zetus_dedup["type"].isin(["L", "R"]))].sort_values("fixed_time_").dur,
name = "Feeding",
legendgroup = "Feeding",
showlegend = day == days[0],
hoverinfo = "text",
text = zetus_dedup[
(zetus_dedup["day"] == day) & (zetus_dedup["type"].isin(["L", "R"]))].sort_values("fixed_time_").apply(
lambda x: "{} min at {}".format(x.dur, x["fixed_time_"].strftime("%H:%M")), axis = 1),
offset = 0,
marker = {"color": colorscale[0]})] + [
go.Scatter(
x = zetus_dedup[(zetus_dedup["day"] == day) & (zetus_dedup["type"].isin(["S", "W"]))].sort_values("fixed_time_").time_of_day,
y = len(zetus_dedup[(zetus_dedup["day"] == day) & (zetus_dedup["type"].isin(["S", "W"]))]) * [0.5],
name = "Diaper change",
legendgroup = "Diaper change",
showlegend = day == days[0],
hoverinfo = "text",
text = zetus_dedup[
(zetus_dedup["day"] == day) & (zetus_dedup["type"].isin(["S", "W"]))].sort_values("fixed_time_").apply(
lambda x: "{}".format(x["time_"].strftime("%H:%M")), axis = 1),
mode = "markers",
marker = {"color": colorscale[1], "size": 8})] for day in chosen_days}
fig2 = tools.make_subplots(
rows = len(chosen_days),
cols = 1,
shared_xaxes = True,
shared_yaxes = True,
vertical_spacing = 0.02,
subplot_titles = ["First Days Input/Output"])
for i, day in enumerate(chosen_days):
for ob in in_out_data[day]:
fig2.append_trace(ob, i + 1, 1)
fig2["layout"].update(
height = 30 * len(chosen_days),
# width = 1000,
xaxis = {
"tickformat": "%H:%M",
"range": [
dt.datetime.strptime("01/01/19 00:00", "%m/%d/%y %H:%M"),
dt.datetime.strptime("01/01/19 23:59", "%m/%d/%y %H:%M")],
"tick0": dt.datetime.strptime("01/01/19 00:00", "%m/%d/%y %H:%M"),
"dtick": 1000 * 60 * 60 * 2
},
hovermode = "closest",
plot_bgcolor = "#DCDCDC")
for i in range(len(chosen_days)):
# fig["layout"]["yaxis{}".format(i + 1)]["showgrid"] = False
fig2["layout"]["yaxis{}".format(i + 1)]["tick0"] = 0
fig2["layout"]["yaxis{}".format(i + 1)]["dtick"] = 1
fig2["layout"]["yaxis{}".format(i + 1)]["range"] = [0, 1]
fig2["layout"]["yaxis{}".format(i + 1)]["showticklabels"] = False
fig2["layout"]["yaxis{}".format(i + 1)]["zeroline"] = False
fig2["layout"]["yaxis{}".format(i + 1)]["title"] = i + 1
return fig2
first_two_weeks = days[:14]
in_out_data = {
day: [
go.Bar(
x = zetus_dedup[(zetus_dedup["day"] == day) & (zetus_dedup["type"].isin(["L", "R"]))].sort_values("fixed_time_").time_of_day,
y = len(zetus_dedup[(zetus_dedup["day"] == day) & (zetus_dedup["type"].isin(["L", "R"]))]) * [1],
width = 1000 * 60 * zetus_dedup[(zetus_dedup["day"] == day) & (zetus_dedup["type"].isin(["L", "R"]))].sort_values("fixed_time_").dur,
name = "Feeding",
legendgroup = "Feeding",
showlegend = day == days[0],
hoverinfo = "text",
text = zetus_dedup[
(zetus_dedup["day"] == day) & (zetus_dedup["type"].isin(["L", "R"]))].sort_values("fixed_time_").apply(
lambda x: "{} min at {}".format(x.dur, x["fixed_time_"].strftime("%H:%M")), axis = 1),
offset = 0,
marker = {"color": colorscale[0]})] + [
go.Scatter(
x = zetus_dedup[(zetus_dedup["day"] == day) & (zetus_dedup["type"].isin(["S", "W"]))].sort_values("fixed_time_").time_of_day,
y = len(zetus_dedup[(zetus_dedup["day"] == day) & (zetus_dedup["type"].isin(["S", "W"]))]) * [0.5],
name = "Diaper change",
legendgroup = "Diaper change",
showlegend = day == days[0],
hoverinfo = "text",
text = zetus_dedup[
(zetus_dedup["day"] == day) & (zetus_dedup["type"].isin(["S", "W"]))].sort_values("fixed_time_").apply(
lambda x: "{}".format(x["time_"].strftime("%H:%M")), axis = 1),
mode = "markers",
marker = {"color": colorscale[1], "size": 8})] for day in first_two_weeks}
fig2 = tools.make_subplots(
rows = len(first_two_weeks),
cols = 1,
shared_xaxes = True,
shared_yaxes = True,
vertical_spacing = 0.02,
subplot_titles = ["First Days Input/Output"]
)
for i, day in enumerate(first_two_weeks):
for ob in in_out_data[day]:
fig2.append_trace(ob, i + 1, 1)
fig2["layout"].update(
# height = 400,
# width = 1000,
xaxis = {
"tickformat": "%H:%M",
"range": [
dt.datetime.strptime("01/01/19 00:00", "%m/%d/%y %H:%M"),
dt.datetime.strptime("01/01/19 23:59", "%m/%d/%y %H:%M")],
"tick0": dt.datetime.strptime("01/01/19 00:00", "%m/%d/%y %H:%M"),
"dtick": 1000 * 60 * 60 * 2
},
hovermode = "closest",
plot_bgcolor = "#DCDCDC")
for i in range(len(first_two_weeks)):
# fig["layout"]["yaxis{}".format(i + 1)]["showgrid"] = False
fig2["layout"]["yaxis{}".format(i + 1)]["tick0"] = 0
    fig2["layout"]["yaxis{}".format(i + 1)]["dtick"] = 1
fig2["layout"]["yaxis{}".format(i + 1)]["range"] = [0, 1]
fig2["layout"]["yaxis{}".format(i + 1)]["showticklabels"] = False
fig2["layout"]["yaxis{}".format(i + 1)]["zeroline"] = False
fig2["layout"]["yaxis{}".format(i + 1)]["title"] = i + 1
iplot(simp_plot(14))
iplot(simp_plot(28))
plot(simp_plot(28), filename = "input_output_simple_3.html")
zetus_dedup["mod_dur"] = zetus_dedup.apply(lambda x: x.dur if x["type"] in ["R", "L"] else 1, axis = 1)
zetus_dedup.head()
zetus_stats = zetus_dedup.groupby(["day", "type"], as_index = False)["mod_dur"].sum().sort_values("day").pivot(index = "day", columns = "type", values = "mod_dur")
zetus_num_feeds = zetus_dedup[zetus_dedup["type"].isin(["L", "R"])].groupby(["day", "type"], as_index = False)["mod_dur"].count().sort_values("day").pivot(index = "day", columns = "type", values = "mod_dur")
zetus_num_feeds
zetus_stats
zetus_stats.reset_index()
zetus_stats_2 = zetus_stats.reset_index().merge(
right = zetus_num_feeds.rename(columns = {"L": "num_L", "R": "num_R"}).reset_index(),
on = ["day"],
how = "left")
zetus_stats_2["avg_L"] = zetus_stats_2.L / zetus_stats_2.num_L
zetus_stats_2["avg_R"] = zetus_stats_2.R / zetus_stats_2.num_R
zetus_stats_2
###Output
_____no_output_____
###Markdown
Stats part 2
###Code
zetus_dedup["mod_type"] = zetus_dedup["type"].map(lambda x: "F" if x in ["L", "R"] else x)
zetus_dedup.head()
zetus_daily_stats = zetus_dedup.groupby(
["day", "time", "mod_type"], as_index = False)["dur"].sum().groupby(
["day", "mod_type"], as_index = False).agg({"time": "count", "dur": "sum"}).rename(
columns = {"time": "num", "dur": "min"})
zetus_daily_stats.iloc[:10]
zetus_rolling_stats = zetus_daily_stats.sort_values("day").groupby(["mod_type"], as_index = False).rolling(7, min_periods = 1).mean()
zetus_rolling_num_stats = zetus_rolling_stats.pivot(
index = "day",
columns = "mod_type",
values = "num").reset_index().rename(columns = {"F": "avg_num_F", "S": "avg_num_S", "W": "avg_num_W"})
zetus_rolling_min_stats = zetus_rolling_stats[zetus_rolling_stats["mod_type"] == "F"].pivot(
index = "day",
columns = "mod_type",
values = "min").reset_index().rename(columns = {"F": "avg_ttl_min_F"})
zetus_rolling_stats_table = zetus_rolling_num_stats.merge(right = zetus_rolling_min_stats, on = ["day"], how = "left")
zetus_rolling_stats_table["avg_min_F"] = zetus_rolling_stats_table.avg_ttl_min_F / zetus_rolling_stats_table.avg_num_F
zetus_rolling_stats_table_2 = zetus_rolling_stats_table.set_index("day")[["avg_num_F", "avg_min_F", "avg_num_W", "avg_num_S"]]
zetus_rolling_stats_table_2
zetus_rolling_stats_table_2.to_csv("zetus_stats_021819.csv")
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_style("darkgrid")
fig, ax = plt.subplots(figsize = (15, 10))
sns.lineplot(
data = pd.melt(
zetus_rolling_stats_table_2.reset_index(),
id_vars = ["day"],
value_vars = ["avg_num_F", "avg_min_F", "avg_num_W", "avg_num_S"]),
x = "day",
y = "value",
hue = "mod_type",
ax = ax)
ax.legend(loc = 4)
###Output
_____no_output_____ |
example_notebooks/02_get_sequences.ipynb | ###Markdown
Get sequences for datasetThis workflow to get sequences was inspired by [notebooks/functions from this repository](https://github.com/emdann/scATAC_prep).
###Code
%load_ext rpy2.ipython
# Ignore R warning messages
#Note: this can be commented out to get more verbose R output
import logging
import rpy2.rinterface_lib.callbacks
rpy2.rinterface_lib.callbacks.logger.setLevel(logging.ERROR)
###Output
_____no_output_____
###Markdown
Load dataset
###Code
import anndata as annd
import scanpy as sc
ad = annd.read("data/Marrow_pooled.h5ad")
sc.pl.umap(ad, color='most_abundant_celltype')
gene_ids = ad.var["symbol"]
gene_ids[0:10]
###Output
_____no_output_____
###Markdown
Use R to extract sequences Most/all of these packages are available through R's package manager or Bioconductor. For the human genome, use `library(EnsDb.Hsapiens.v86)` instead.
###Code
%%R
library(Matrix)
library(GenomicRanges)
library(ensembldb)
library(EnsDb.Mmusculus.v79)
library(Biostrings)
library(tidyr)
library(AnnotationHub)
options(stringsAsFactors = F)
%%R
genome <- ensembldb:::getGenomeTwoBitFile(EnsDb.Mmusculus.v79)
genome_genes <- ensembldb::genes(EnsDb.Mmusculus.v79)
###Output
_____no_output_____
###Markdown
Since we are working with symbols, we set the gene names to symbols:
###Code
%%R
names(genome_genes) <- genome_genes$symbol
###Output
_____no_output_____
###Markdown
Get genes and filter out those that are not included:
###Code
%%R -i gene_ids -o gene_ids
gene_ids <- as.vector(gene_ids)
good_seqname <- as.character(seqnames(genome_genes)) %in% c(as.character(1:19), "MT", "X", "Y")
genome_genes <- genome_genes[good_seqname]
gene_ids <- gene_ids[gene_ids %in% genome_genes$symbol]
###Output
_____no_output_____
###Markdown
Define sequence length relative to (canonical) TSS of genes and extract sequences
###Code
%%R -i gene_ids -o promoter_seq
upstream = 500
downstream = 500
# This returns a GRanges object:
promoter_ranges <- GenomicRanges::promoters(genome_genes[gene_ids],
upstream = upstream,
downstream = downstream)
promoter_seq <- Biostrings::getSeq(genome, promoter_ranges)
promoter_seq <- tolower(as.character(promoter_seq))
ad = ad[:,ad.var["symbol"].isin(gene_ids)]
ad.var["promoter_seq"] = promoter_seq
sum(["N" in x for x in ad.var["promoter_seq"]])
ad.var
ad.write("data/Marrow_pooled.h5ad")
###Output
_____no_output_____ |
Codes/Final Version- All models.ipynb | ###Markdown
0. Data and Preparation
###Code
from google.colab import drive
drive.mount('/content/drive')
import os
from keras.preprocessing.image import ImageDataGenerator, load_img
from pathlib import Path
import matplotlib.pyplot as plt
from matplotlib import pyplot
%matplotlib inline
import cv2
import random as rn
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tqdm import tqdm
from keras.models import Model
from keras.layers import Conv2D, MaxPooling2D, Dense, Dropout, Input, Flatten, SeparableConv2D
from keras.optimizers import Adam
from keras.layers.normalization import BatchNormalization
from tensorflow.keras.optimizers import RMSprop
#setting up path for data
train_path = Path('drive/Shared drives/STA221 - Final Project/datash/train/train')
test_path = Path('drive/Shared drives/STA221 - Final Project/datash/test/test')
normal_data_path = train_path / 'NORMAL'
covid_data_path = train_path / 'COVID19 AND PNEUMONIA'
#get the counts of each type of training data
#https://stackoverflow.com/questions/2632205/how-to-count-the-number-of-files-in-a-directory-using-python
list1 = os.listdir(normal_data_path)
num_normal = len(list1)
print ('Number of normal data in training set:',num_normal)
list2 = os.listdir(covid_data_path)
num_sick = len(list2)
print ('Number of covid data in training set:',num_sick)
group_names=['Healthy', 'Sick']
cases_count=[num_normal, num_sick]
plt.figure(figsize=(10,8))
pyplot.bar(group_names, cases_count)
plt.title('Number of cases', fontsize=14)
plt.xlabel('Case type', fontsize=12)
plt.ylabel('Count', fontsize=12)
plt.show()
###Output
_____no_output_____
###Markdown
The histogram of the available data is as shown.
###Code
#resize everything to 250,250
all_data=[]
all_labels=[]
for img in tqdm(os.listdir(normal_data_path)):
path = os.path.join(normal_data_path,img)
img = cv2.imread(path,cv2.IMREAD_COLOR)
img = cv2.resize(img, (250,250)) # try different size here
img = img.astype(np.float32)/255.
all_data.append(np.array(img))
#create labels list, 0 for normal
all_labels.append(str(0))
#resize everything for the covid/pneumonia images
for img in tqdm(os.listdir(covid_data_path)):
path = os.path.join(covid_data_path,img)
img = cv2.imread(path,cv2.IMREAD_COLOR)
img = cv2.resize(img, (250,250)) # try different size here
img = img.astype(np.float32)/255.
all_data.append(np.array(img))
#add to labels list, 1 for covid
all_labels.append(str(1))
###Output
100%|██████████| 3925/3925 [21:16<00:00, 3.07it/s]
###Markdown
The two cells of code executed above resize the image data to 250$\times$250 pixels. Resizing to smaller scales was also performed, with those results shown in the report; however, for convenience and performance, the only resizing presented here is the one the models worked best with.
###Code
#print some random data
fig,ax=plt.subplots(2,2)
fig.set_size_inches(7,7)
for i in range(2):
for j in range (2):
l=rn.randint(0,len(all_labels))
ax[i,j].imshow(all_data[l])
if all_labels[l]=='0':stat='normal'
else:stat='sick'
ax[i,j].set_title('status: '+ stat)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Some of the images are printed at random as a sanity check, so everything can be visually compared before training the model.
###Code
all_labels = np.array(all_labels)
all_data = np.array(all_data)
#get training and test sets (use .25 as test size)
x_train,x_test,y_train,y_test=train_test_split(all_data,all_labels,test_size=0.25,random_state=99)
#data augmentation?
datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
zoom_range = 0.1, # Randomly zoom image
width_shift_range=0.2, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.2, # randomly shift images vertically (fraction of total height)
horizontal_flip=True, # randomly flip images
vertical_flip=False) # randomly flip images
datagen.fit(x_train)
###Output
_____no_output_____
###Markdown
Data augmentation is applied with the parameters mentioned in the report to prevent overfitting; a small sketch for previewing augmented samples is shown below.
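As a quick visual check of what the generator produces, a minimal sketch (not part of the original notebook) that previews a few augmented copies of one training image; it assumes `datagen` and `x_train` from the cells above:

```python
# Hedged sketch: preview augmented samples from the fitted ImageDataGenerator.
# Assumes `datagen` and `x_train` already exist as defined above.
import matplotlib.pyplot as plt

aug_iter = datagen.flow(x_train[:1], batch_size=1)   # stream of augmented copies of one image
fig, axes = plt.subplots(1, 4, figsize=(12, 3))
for ax in axes:
    batch = next(aug_iter)                           # array of shape (1, 250, 250, 3)
    ax.imshow(batch[0])
    ax.axis('off')
plt.show()
```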
###Code
test_data=[]
#resize the unknown test data
for img in tqdm(os.listdir(test_path)):
path = os.path.join(test_path,img)
img = cv2.imread(path,cv2.IMREAD_COLOR)
img = cv2.resize(img, (250,250))
img = img.astype(np.float32)/255.
test_data.append(np.array(img))
names=[]
for img in tqdm(os.listdir(test_path)):
names.append(img)
import pandas as pd
names=pd.DataFrame(names)
#ready the file for final submissions on Kaggle
###Output
_____no_output_____
###Markdown
ResNet50 (92.27%)
###Code
pip install keras-resnet
import keras_resnet.models
from tensorflow.keras import layers
from keras.layers import *
from keras.models import Sequential
from keras.applications.resnet50 import ResNet50
import keras
CLASS_COUNT = 2
#using ResNet as the base model
base_model = ResNet50(
weights='imagenet',
include_top=False,
input_shape=(250, 250, 3),
pooling='avg',
)
#base_model.trainable = False
model_resnet50 = Sequential([
base_model,
Dense(CLASS_COUNT, activation='softmax'),
])
#add the optimizer and loss functions
opt = keras.optimizers.Adam(learning_rate=0.0001)
model_resnet50.compile(optimizer = opt , loss = 'categorical_crossentropy' , metrics = ['accuracy'])
model_resnet50.summary()
from keras.utils import to_categorical
train_labels = to_categorical(y_train)
test_labels = to_categorical(y_test)
history = model_resnet50.fit(datagen.flow(x_train,train_labels, batch_size = 32) ,epochs = 15 , validation_data = (x_test,test_labels))
# Plot training & validation accuracy values
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
#explain interpretation
# All Training Set Confusion Matrix
predictions = model_resnet50.predict_classes(all_data)
predictions = predictions.reshape(1,-1)[0]
predictions = np.array(list(map(str,predictions)))
from mlxtend.plotting import plot_confusion_matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(all_labels.astype(int), predictions.astype(int))
plt.figure()
plot_confusion_matrix(cm,figsize=(12,8), hide_ticks=True,cmap=plt.cm.Blues)
plt.xticks(range(2), ['Normal', 'Pneumonia'], fontsize=16)
plt.yticks(range(2), ['Normal', 'Pneumonia'], fontsize=16)
plt.show()
# Validation Set Confusion Matrix
predictions = model_resnet50.predict_classes(x_test)
predictions = predictions.reshape(1,-1)[0]
predictions = np.array(list(map(str,predictions)))
from mlxtend.plotting import plot_confusion_matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test.astype(int), predictions.astype(int))
plt.figure()
plot_confusion_matrix(cm,figsize=(12,8), hide_ticks=True,cmap=plt.cm.Blues)
plt.xticks(range(2), ['Normal', 'Pneumonia'], fontsize=16)
plt.yticks(range(2), ['Normal', 'Pneumonia'], fontsize=16)
plt.show()
test_data = np.array(test_data)
predictions = model_resnet50.predict_classes(test_data)
predictions = predictions.reshape(1,-1)[0]
test_labels = np.array(list(map(str,predictions)))
names['labels']=test_labels
import pandas as pd
pd.DataFrame(test_labels).to_csv("/content/drive/My Drive/ResNet50_Label.csv")
pd.DataFrame(names).to_csv("/content/drive/My Drive/ResNet50_Label.csv")
###Output
_____no_output_____
###Markdown
VGG16 (83.82%)
###Code
import tensorflow as tf
import numpy as np
import keras
VGG16_MODEL=tf.keras.applications.VGG16(input_shape=(250,250,3), include_top=False,weights='imagenet')
VGG16_MODEL.trainable=False
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
prediction_layer = tf.keras.layers.Dense(2,activation='softmax')
model_vgg16 = tf.keras.Sequential([
VGG16_MODEL,
global_average_layer,
prediction_layer
])
model_vgg16.compile(optimizer= 'adam' , loss= 'categorical_crossentropy', metrics=['accuracy'])
model_vgg16.summary()
#from keras.utils import to_categorical
#train_labels = to_categorical(y_train)
#test_labels = to_categorical(y_test)
history_vgg16 = model_vgg16.fit(x_train,train_labels,
epochs=20,
#steps_per_epoch=2,
#validation_steps=2,
validation_data=(x_test,test_labels))
# Plot training & validation accuracy values
plt.plot(history_vgg16.history['accuracy'])
plt.plot(history_vgg16.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# All Training Set Confusion Matrix
predictions_vgg16 = model_vgg16.predict_classes(all_data)
predictions_vgg16 = predictions_vgg16.reshape(1,-1)[0]
predictions_vgg16 = np.array(list(map(str,predictions_vgg16)))
from mlxtend.plotting import plot_confusion_matrix
from sklearn.metrics import confusion_matrix
cm_vgg16 = confusion_matrix(all_labels.astype(int), predictions_vgg16.astype(int))
plt.figure()
plot_confusion_matrix(cm_vgg16,figsize=(12,8), hide_ticks=True,cmap=plt.cm.Blues)
plt.xticks(range(2), ['Normal', 'Pneumonia'], fontsize=16)
plt.yticks(range(2), ['Normal', 'Pneumonia'], fontsize=16)
plt.show()
# Validation Set Confusion Matrix
predictions_vgg16_val = model_vgg16.predict_classes(x_test)
predictions_vgg16_val = predictions_vgg16_val.reshape(1,-1)[0]
predictions_vgg16_val = np.array(list(map(str,predictions_vgg16_val)))
from mlxtend.plotting import plot_confusion_matrix
from sklearn.metrics import confusion_matrix
cm_vgg16_val = confusion_matrix(y_test.astype(int), predictions_vgg16_val.astype(int))
plt.figure()
plot_confusion_matrix(cm_vgg16_val,figsize=(12,8), hide_ticks=True,cmap=plt.cm.Blues)
plt.xticks(range(2), ['Normal', 'Pneumonia'], fontsize=16)
plt.yticks(range(2), ['Normal', 'Pneumonia'], fontsize=16)
plt.show()
test_data = np.array(test_data)
predictions = model_vgg16.predict_classes(test_data)
predictions = predictions.reshape(1,-1)[0]
test_labels = np.array(list(map(str,predictions)))
names['labels']=test_labels
import pandas as pd
pd.DataFrame(test_labels).to_csv("/content/drive/My Drive/VGG16_Label.csv")
pd.DataFrame(names).to_csv("/content/drive/My Drive/VGG16_Label.csv")
###Output
_____no_output_____
###Markdown
AlexNet (82.61%)
###Code
# Importing dependency
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, Flatten,\
Conv2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
import numpy as np
np.random.seed(99)
# Create a sequential model
model = Sequential()
# 1st Convolutional Layer
model.add(Conv2D(filters=96, input_shape=(250,250,3), kernel_size=(11,11),strides=(4,4), padding='valid'))
model.add(Activation('relu'))
# Pooling
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid'))
# Batch Normalisation before passing it to the next layer
model.add(BatchNormalization())
# 2nd Convolutional Layer
model.add(Conv2D(filters=256, kernel_size=(11,11), strides=(1,1), padding='valid'))
model.add(Activation('relu'))
# Pooling
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid'))
# Batch Normalisation
model.add(BatchNormalization())
# 3rd Convolutional Layer
model.add(Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), padding='valid'))
model.add(Activation('relu'))
# Batch Normalisation
model.add(BatchNormalization())
# 4th Convolutional Layer
model.add(Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), padding='valid'))
model.add(Activation('relu'))
# Batch Normalisation
model.add(BatchNormalization())
# 5th Convolutional Layer
model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), padding='valid'))
model.add(Activation('relu'))
# Pooling
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid'))
# Batch Normalisation
model.add(BatchNormalization())
# Passing it to a dense layer
model.add(Flatten())
# 1st Dense Layer
model.add(Dense(4096, input_shape=(250*250*3,)))
model.add(Activation('relu'))
# Add Dropout to prevent overfitting
model.add(Dropout(0.4))
# Batch Normalisation
model.add(BatchNormalization())
# 2nd Dense Layer
model.add(Dense(4096))
model.add(Activation('relu'))
# Add Dropout
model.add(Dropout(0.4))
# Batch Normalisation
model.add(BatchNormalization())
# 3rd Dense Layer
model.add(Dense(1000))
model.add(Activation('relu'))
# Add Dropout
model.add(Dropout(0.4))
# Batch Normalisation
model.add(BatchNormalization())
# Output Layer
model.add(Dense(2))
model.add(Activation('softmax'))
model.summary()
# (4) Compile
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
from keras.utils import to_categorical
train_labels = to_categorical(y_train)
test_labels = to_categorical(y_test)
history = model.fit(datagen.flow(x_train,train_labels, batch_size = 32) ,epochs = 15 , validation_data = (x_test,test_labels))
# Plot training & validation accuracy values
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# All Training Set Confusion Matrix
predictions = model.predict_classes(all_data)
predictions = predictions.reshape(1,-1)[0]
predictions = np.array(list(map(str,predictions)))
from mlxtend.plotting import plot_confusion_matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(all_labels.astype(int), predictions.astype(int))
plt.figure()
plot_confusion_matrix(cm,figsize=(12,8), hide_ticks=True,cmap=plt.cm.Blues)
plt.xticks(range(2), ['Normal', 'Pneumonia'], fontsize=16)
plt.yticks(range(2), ['Normal', 'Pneumonia'], fontsize=16)
plt.show()
# Validation Set Confusion Matrix
predictions = model.predict_classes(x_test)
predictions = predictions.reshape(1,-1)[0]
predictions = np.array(list(map(str,predictions)))
from mlxtend.plotting import plot_confusion_matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test.astype(int), predictions.astype(int))
plt.figure()
plot_confusion_matrix(cm,figsize=(12,8), hide_ticks=True,cmap=plt.cm.Blues)
plt.xticks(range(2), ['Normal', 'Pneumonia'], fontsize=16)
plt.yticks(range(2), ['Normal', 'Pneumonia'], fontsize=16)
plt.show()
test_data = np.array(test_data)
predictions = model.predict_classes(test_data)
predictions = predictions.reshape(1,-1)[0]
test_labels = np.array(list(map(str,predictions)))
names['labels']=test_labels
import pandas as pd
pd.DataFrame(test_labels).to_csv("/content/drive/My Drive/AlexNet_Label.csv")
pd.DataFrame(names).to_csv("/content/drive/My Drive/AlexNet_Label.csv")
###Output
_____no_output_____
###Markdown
CNN1: 83.82%
###Code
#build the model step by step
def build_model():
input_img = Input(shape=(250,250,3), name='ImageInput')
x = Conv2D(32, (3,3), activation='relu' , padding='same', name='Conv1_1')(input_img)
x = MaxPooling2D((2,2), name='pool1')(x)
x = Conv2D(64, (3,3), activation='relu', padding='same', name='Conv2_1')(x)
x = MaxPooling2D((2,2), name='pool2')(x)
x = Conv2D(128, (3,3), activation='relu', padding='same', name='Conv3_1')(x)
x = BatchNormalization(name='bn1')(x)
x = MaxPooling2D((2,2), name='pool3_1')(x)
x = Flatten(name='flatten')(x)
x = Dense(64, activation='relu', name='fc1')(x)
x = Dropout(0.5, name='dropout2')(x)
#try sigmoid function for classification
x = Dense(1, activation='sigmoid', name='fc3')(x)
model = Model(inputs=input_img, outputs=x)
return model
CNN1=build_model()
CNN1.summary()
epochs = 20
batch_size = 60
l_r = Adam(lr=0.0001)
CNN1.compile(loss='binary_crossentropy', metrics=['accuracy'],optimizer=l_r)
# Fit the model
History = CNN1.fit_generator(datagen.flow(x_train,y_train, batch_size=batch_size),
epochs = epochs, validation_data = (x_test,y_test),
verbose = 1, steps_per_epoch=x_train.shape[0] // batch_size)
plt.plot(History.history['accuracy'])
plt.plot(History.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(History.history['loss'])
plt.plot(History.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
#explain interpretation
pred=CNN1.predict(all_data)
pred_labels = np.where(pred>0.5, 1, 0)
cm = confusion_matrix(all_labels.astype(int), pred_labels.astype(int))
plt.figure()
plot_confusion_matrix(cm,figsize=(12,8), hide_ticks=True,cmap=plt.cm.Blues)
plt.xticks(range(2), ['Normal', 'Pneumonia'], fontsize=16)
plt.yticks(range(2), ['Normal', 'Pneumonia'], fontsize=16)
plt.show()
training_loss, training_score = CNN1.evaluate(all_data, all_labels, batch_size=60)
print("Loss on training set: ", training_loss)
print("Accuracy on training set: ", training_score)
test_data=np.array(test_data)
test_pred = CNN1.predict(test_data, batch_size=16)
test_labels = np.where(test_pred>0.5, 1, 0)
names['labels']=test_labels
#pd.DataFrame(test_labels).to_csv("/content/drive/My Drive/CNN1_label.csv")
pd.DataFrame(names).to_csv("/content/drive/My Drive/CNN1_label.csv")
###Output
_____no_output_____
###Markdown
CNN2: 83.82%
###Code
#build the model step by step
def build_model():
input_img = Input(shape=(250,250,3), name='ImageInput')
x = Conv2D(128, (3,3), activation='relu' , padding='same', name='Conv1_1')(input_img)
x = MaxPooling2D((2,2), name='pool1')(x)
x = Conv2D(512, (3,3), activation='relu', padding='same', name='Conv2_1')(x)
x = MaxPooling2D((2,2), name='pool2')(x)
x = Flatten(name='flatten')(x)
x = Dense(128, activation='relu', name='fc1')(x)
x = Dropout(0.5, name='dropout1')(x)
x = Dense(64, activation='relu', name='fc2')(x)
x = Dropout(0.5, name='dropout2')(x)
#try sigmoid function for classification
x = Dense(1, activation='sigmoid', name='fc3')(x)
model = Model(inputs=input_img, outputs=x)
return model
CNN2=build_model()
CNN2.summary()
epochs = 20
batch_size = 60
l_r = Adam(lr=0.0001)
CNN2.compile(loss='binary_crossentropy', metrics=['accuracy'],optimizer=l_r)
# Fit the model
History = CNN2.fit_generator(datagen.flow(x_train,y_train, batch_size=batch_size),
epochs = epochs, validation_data = (x_test,y_test),
verbose = 1, steps_per_epoch=x_train.shape[0] // batch_size)
plt.plot(History.history['accuracy'])
plt.plot(History.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(History.history['loss'])
plt.plot(History.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
#explain interpretation
pred=CNN2.predict(all_data)
pred_labels = np.where(pred>0.5, 1, 0)
from mlxtend.plotting import plot_confusion_matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(all_labels.astype(int), pred_labels.astype(int))
plt.figure()
plot_confusion_matrix(cm,figsize=(12,8), hide_ticks=True,cmap=plt.cm.Blues)
plt.xticks(range(2), ['Normal', 'Pneumonia'], fontsize=16)
plt.yticks(range(2), ['Normal', 'Pneumonia'], fontsize=16)
plt.show()
training_loss, training_score = CNN2.evaluate(all_data, all_labels, batch_size=60)
print("Loss on training set: ", training_loss)
print("Accuracy on training set: ", training_score)
test_data=np.array(test_data)
test_pred = CNN2.predict(test_data, batch_size=16)
test_labels = np.where(test_pred>0.5, 1, 0)
names['labels']=test_labels
#pd.DataFrame(test_labels).to_csv("/content/drive/My Drive/CNN2_label.csv")
pd.DataFrame(names).to_csv("/content/drive/My Drive/CNN2_label.csv")
###Output
_____no_output_____
###Markdown
CNN3: 86.47%
###Code
#build the model step by step
def build_model():
input_img = Input(shape=(250,250,3), name='ImageInput')
x = Conv2D(64, (3,3), activation='relu' , padding='same', name='Conv1_1')(input_img)
x = MaxPooling2D((2,2), name='pool1')(x)
x = Conv2D(64, (3,3), activation='relu', padding='same', name='Conv2_1')(x)
x = MaxPooling2D((2,2), name='pool2')(x)
x = Conv2D(256, (3,3), activation='relu', padding='same', name='Conv3_1')(x)
x = MaxPooling2D((2,2), name='pool3')(x)
x = Flatten(name='flatten')(x)
x = Dense(128, activation='relu', name='fc1')(x)
x = Dropout(0.5, name='dropout1')(x)
x = Dense(64, activation='relu', name='fc2')(x)
x = Dropout(0.5, name='dropout2')(x)
#try sigmoid function for classification
x = Dense(1, activation='sigmoid', name='fc3')(x)
model = Model(inputs=input_img, outputs=x)
return model
CNN3=build_model()
CNN3.summary()
epochs = 45
batch_size = 60
l_r = Adam(lr=0.0001)
CNN3.compile(loss='binary_crossentropy', metrics=['accuracy'],optimizer=l_r)
# Fit the model
History = CNN3.fit_generator(datagen.flow(x_train,y_train, batch_size=batch_size),
epochs = epochs, validation_data = (x_test,y_test),
verbose = 1, steps_per_epoch=x_train.shape[0] // batch_size)
plt.plot(History.history['accuracy'])
plt.plot(History.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(History.history['loss'])
plt.plot(History.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
#explain interpretation
pred=CNN3.predict(all_data)
pred_labels = np.where(pred>0.5, 1, 0)
from mlxtend.plotting import plot_confusion_matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(all_labels.astype(int), pred_labels.astype(int))
plt.figure()
plot_confusion_matrix(cm,figsize=(12,8), hide_ticks=True,cmap=plt.cm.Blues)
plt.xticks(range(2), ['Normal', 'Pneumonia'], fontsize=16)
plt.yticks(range(2), ['Normal', 'Pneumonia'], fontsize=16)
plt.show()
training_loss, training_score = CNN3.evaluate(all_data, all_labels, batch_size=60)
print("Loss on training set: ", training_loss)
print("Accuracy on training set: ", training_score)
test_data=np.array(test_data)
test_pred = CNN3.predict(test_data, batch_size=16)
test_labels = np.where(test_pred>0.5, 1, 0)
names['labels']=test_labels
#pd.DataFrame(test_labels).to_csv("/content/drive/My Drive/CNN3_label.csv")
pd.DataFrame(names).to_csv("/content/drive/My Drive/CNN3_label.csv")
###Output
_____no_output_____ |
spectrum/spectrum.ipynb | ###Markdown
Energy Spectrum of Prompt Fission Neutrons
=======================
The energy spectrum of the prompt fission neutrons, which we'll call the prompt fission spectrum for simplicity, represents the probability of neutrons of different energies being born in a fission event, a very important parameter.
This quantity, usually denoted by $\chi_p$, is a probability distribution. So let's check some simple approaches to it.
###Code
import numpy as np
from math import pi, sqrt
import matplotlib as mpl
import matplotlib.pyplot as plt
import sympy as sym
from pyne import ace
from sympy.plotting import plot
mpl.style.use('seaborn')
###Output
_____no_output_____
###Markdown
We'll define 2 common distributions used to represent the fission spectrum: the Maxwell distribution and the Watt distribution. At fission spectrum energies we are quite far from thermal equilibrium, so both are empirical fits. We'll talk just briefly about the modern theory in the end.
The Maxwell distribution **in energy** is of the form:
$ P(E') = \frac{2\pi}{(\pi kT)^{3/2}} \sqrt{E'} \ \mathrm{e}^{\frac{-E'}{kT}} $
For convenience, we can separate the coefficients in such a way:
$ P(E') = c_1 \sqrt{E'} \ \mathrm{e}^{-c_2 E'} $; where:
$ c_1(kT) = \frac{2\pi}{(\pi kT)^{3/2}} $ and $ c_2(kT) = \frac{1}{kT} $
In these coefficients $ k $ is the Boltzmann constant and $ T $ is the absolute temperature; however, it is convenient to work with the product $ kT $, which is a measure of temperature in units of energy, because $ T $ itself is unknown, while $ kT $ is a quantity called "nuclear temperature" (also found as $ T(E) $), which is tabulated for incoming neutron energies in ENDF files.
We can find the approximate Prompt Fission Neutron Spectrum (PFNS), represented by $ \chi_p(E') $, taking $ kT = 1.2887 $ MeV, giving $ c_1 = 0.771 $ and $ c_2 = 0.776 $:
$ \chi_p(E') = 0.771 \sqrt{E'} \ \mathrm{e}^{-0.776E'} $
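As a quick arithmetic check of those two numbers (just plugging $ kT = 1.2887 $ MeV into the definitions above): $ c_2 = \frac{1}{1.2887} \approx 0.776 $ and $ c_1 = \frac{2\pi}{(\pi \times 1.2887)^{3/2}} \approx \frac{6.283}{8.15} \approx 0.771 $, matching the coefficients quoted for $ \chi_p $.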
###Code
def maxwell(c1, c2):
P = c1 * sym.sqrt(Ep) * sym.exp(-c2 * Ep)
return P
def maxwell_coe(kT):
c1 = 2*pi / pow(pi*kT, 3/2)
c2 = 1/kT
return c1, c2
xmin = 0
xmax = 10
points = 10000000
fig_size = (5,3)
xlabel = "$E'$ (MeV)"
ylabel = "$ \chi_{\mathrm{p}} (\mathrm{MeV}^{-1}) $"
###Output
_____no_output_____
###Markdown
Let's start by checking how the Maxwell distribution does.
###Code
kT = 1.2887 # MeV
print("Coefficients c1 and c2 are:", maxwell_coe(kT))
Ep = sym.symbols('Ep')
maxwell_chi = maxwell(*maxwell_coe(kT))
maxwell_lin = plot(maxwell_chi, (Ep, xmin, xmax),
adaptive = False,
nb_of_points = points,
size= fig_size,
xlabel=xlabel,
ylabel=ylabel,
axis_center=(-0.5,-0.01),
show=False)
maxwell_lin.save('chi_maxwell.pdf')
x_maxwellPFNS, y_maxwellPFNS = maxwell_lin._series[0].get_points()
fig1, ax1 = plt.subplots(figsize=fig_size)
# ax1.set_xscale('log')
ax1.semilogx(x_maxwellPFNS, y_maxwellPFNS, label='Maxwell PFNS')
ax1.set_xlabel(xlabel)
ax1.set_ylabel(ylabel)
plt.savefig('chi_maxwell_logx.pdf', bbox_inches='tight')
fig2, ax2 = plt.subplots(figsize=fig_size)
ax2.semilogy(x_maxwellPFNS, y_maxwellPFNS, label='Maxwell PFNS')
ax2.set_xlabel(xlabel)
ax2.set_ylabel(ylabel)
plt.savefig('chi_maxwell_logy.pdf', bbox_inches='tight')
fig3, ax3 = plt.subplots(figsize=fig_size)
ax3.loglog(x_maxwellPFNS, y_maxwellPFNS, label='Maxwell PFNS')
ax3.set_xlabel(xlabel)
ax3.set_ylabel(ylabel)
ax3.set_xlim(1E-4, 10)
ax3.set_ylim(0.005, 1)
plt.savefig('chi_maxwell_logxy.pdf', bbox_inches='tight')
###Output
_____no_output_____
###Markdown
The Maxwell distribution is unimodal (i.e., has only one mode), so when we calculate the mode, we can simply pick the only value in the list. We can find this mode as one would expect: take the derivative and solve for the 0s.
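Before reaching for the symbolic solver, note that this particular shape has a simple closed-form mode, which is worth keeping as a check: setting $ \frac{d}{dE'}\left( \sqrt{E'}\, \mathrm{e}^{-c_2 E'} \right) = \mathrm{e}^{-c_2 E'}\left( \frac{1}{2\sqrt{E'}} - c_2 \sqrt{E'} \right) = 0 $ gives $ E'_{mode} = \frac{1}{2 c_2} = \frac{kT}{2} \approx 0.644 $ MeV, the value quoted below.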
###Code
def dist_modes(f_x, x):
fp_x = sym.diff(f_x, x)
modes = sym.solve(fp_x, x)
return modes
# modes = dist_modes(maxwell_chi, Ep)
#
# max_Ep = modes[0]
# max_chi = maxwell_chi.subs(Ep, max_Ep)
#
# print('Most probable fission promp neutron energy: ', max_Ep)
# print('With probability of: ', max_chi)
###Output
_____no_output_____
###Markdown
With this, we found that the most probable prompt neutron from fission has energy of around 0.64 MeV (at least according to Maxwell). But for such a skewed distribution, it is quite obvious that the average energy of the prompt neutrons $ \bar{E'} $ is quite different. We can find it by calculating the mean of the distribution:
$ \bar{E'} = \frac{\int_{0}^{\infty} E'\ \chi(E') dE'}{\int_{0}^{\infty} \chi(E') dE'} $
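For the Maxwellian this ratio of integrals also has a closed form, which we can use to check the numerical result below: $ \bar{E'} = \frac{\Gamma(5/2)}{\Gamma(3/2)}\, kT = \frac{3}{2} kT \approx 1.933 $ MeV.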
###Code
def dist_mean(f_x, x):
interval = (x, 0, sym.oo)
num = sym.integrate(f_x*x, interval)
den = sym.integrate(f_x, interval)
return num / den
mean_Ep = dist_mean(maxwell_chi, Ep)
mean_chi = maxwell_chi.subs(Ep, mean_Ep)
print('Fission prompt neutron mean energy: ', mean_Ep)
print('With probability of : ', mean_chi)
###Output
Fission prompt neutron mean energy:  1.93305000000000
With probability of : 0.239280406306773
###Markdown
So, according to the Maxwellian distribution, the average fission prompt neutron energy is around 1.93 MeV. Usually, the value is simply rounded and accepted to be around 2 MeV.
Now let's repeat everything for the Watt distribution and see how it does. The Watt distribution is empirical, of the form:
$ \chi_p(E') = c \sinh{\sqrt{b(E)\ E'}} \ \mathrm{e}^{\frac{-E'}{a(E)}} $
where $E$ is the energy of the incoming neutron (the neutron that causes fission), $E'$ is the energy of the outgoing prompt fission neutron, and the parameters $a$, $b$ and $c$ are tabulated; $ a $ seems to represent something like "nuclear temperature" as in the Maxwellian distribution.
The idea is that it approximates the Maxwellian spectrum corrected for the transformation from the center-of-mass system to the laboratory system. We can already expect the Watt function to perform a bit better, if for no other reason than by having more tuning parameters (assuming that the functions were well-chosen, and they were).
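One handy property to keep in mind as a sanity check (a standard closed form for the Watt shape: the Maxwellian mean $ \frac{3a}{2} $ plus the average fragment kinetic energy per nucleon $ \frac{a^2 b}{4} $): the mean energy is $ \bar{E'} = \frac{3a}{2} + \frac{a^2 b}{4} $, so with the $ a = 1 $ and $ b = 2 $ used below we should land exactly on $ 1.5 + 0.5 = 2 $ MeV.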
###Code
def watt_function(c, b, a):
P = c * sym.sinh(sym.sqrt(b * Ep)) * sym.exp(-Ep/a)
return P
# I actually forgot where I got these, but I know that they are tabulated on ENDF
c = 0.4865
b = 2
a = 1
watt_chi = watt_function(c, b, a)
watt_lin = plot(watt_chi, (Ep, xmin, xmax),
adaptive = False,
nb_of_points = points,
size=fig_size,
xlabel=xlabel,
ylabel=ylabel,
axis_center=(-0.5,-0.01),
show=False)
watt_lin.save ('chi_watt.pdf')
# modes = dist_modes(watt_chi, Ep)
#
# max_Ep = modes[0]
# max_chi = watt_chi.subs(Ep, max_Ep)
# print('Most probable fission promp neutron energy: ', max_Ep)
# print('With probability of: ', max_chi)
mean_Ep = dist_mean(watt_chi, Ep)
mean_chi = watt_chi.evalf(subs={Ep: mean_Ep})
print('Fission prompt neutron mean energy: ', mean_Ep)
print('With probability of : ', mean_chi)
###Output
Fission prompt neutron mean energy:  2.00000000000000
With probability of : 0.238794720840315
###Markdown
As an empirical formulation, the Watt distribution nails the mean fission prompt neutron energy at 2 MeV. Naturally, that happens because the parameters were carefully chosen. Now let's plot both distributions together to see how they differ qualitatively.
###Code
x_wattPFNS, y_wattPFNS = watt_lin._series[0].get_points()
fig, ax = plt.subplots(figsize=fig_size)
ax.plot(x_maxwellPFNS, y_maxwellPFNS, label='Maxwell PFNS')
ax.plot(x_wattPFNS, y_wattPFNS, label='Watt PFNS')
ax.legend()
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
plt.savefig('maxwell_watt_comparison.pdf', bbox_inches='tight')
fig, ax = plt.subplots(figsize=fig_size)
ax.semilogy(x_maxwellPFNS, y_maxwellPFNS, label='Maxwell PFNS')
ax.plot(x_wattPFNS, y_wattPFNS, label='Watt PFNS')
ax.legend()
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
plt.savefig('maxwell_watt_comparison_logy.pdf', bbox_inches='tight')
###Output
_____no_output_____
###Markdown
We can also plot using a log scale on the energy axis to investigate the emission of neutrons with lower energy. The distribution shows that some do appear, but the vast majority is clearly in the MeV range. Considering that fission cross-sections become much bigger in the eV range, we can already start to build an intuition for the idea of moderation.
###Code
fig, ax = plt.subplots(figsize=fig_size)
ax.set_xscale('log')
ax.plot(x_maxwellPFNS, y_maxwellPFNS, label='Maxwell PFNS')
ax.plot(x_wattPFNS, y_wattPFNS, label='Watt PFNS')
ax.legend()
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
plt.savefig('maxwell_watt_comparison_logx.pdf', bbox_inches='tight')
ymin = 8E-4
fig, ax = plt.subplots(figsize=fig_size)
ax.loglog(x_maxwellPFNS, y_maxwellPFNS, label='Maxwell PFNS')
ax.plot(x_wattPFNS, y_wattPFNS, label='Watt PFNS')
ax.legend()
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.set_ylim(ymin=ymin)
plt.savefig('maxwell_watt_comparison_logxy.pdf', bbox_inches='tight')
###Output
_____no_output_____
###Markdown
The modern take on the fission spectrum of prompt neutrons is based on the Madland-Nix spectrum. This model is based on the standard nuclear evaporation theory and utilizes an isospin-dependent optical potential for the inverse process of compound nucleus formation in neutron-rich fission fragments. Quite impressive keywords there, you have to admit!
The intention of this elaborate model is to represent the fission products in full acceleration in the laboratory system in order to get the right spectrum for the neutrons that evaporate from the primary fission products.
This is enough about the fission spectrum for our purposes. It lets us know that there are different models available, with different levels of accuracy and sophistication behind them. But more importantly, now you know that the mean fission neutron energy is around 2 MeV, so when we plot some other figures you know where the fission neutrons sit on the energy axis. Below, we do even better by plotting the PFNS together with the fission cross-section for U-235, which shows how minimal the overlap between the two is.
###Code
xlims = (1E-9, 10)
ylims = np.array([8E-1, 4E3])
xlabel_xs = 'Neutron energy (MeV)'
ylabel_xs = '$ \sigma_{\mathrm{f}} $ (barn)'
U235 = ace.Library("/Users/rodrigo/opt/Serpent2/xsdata/jeff311/acedata/92235JEF311.ace")
U235.read()
U235_12 = U235.tables['92235.12c']
fig, ax = plt.subplots()
ax.loglog(U235_12.energy, U235_12.reactions[18].sigma,
linewidth=0.5,
label='U235(n,f)')
ax.set_xlim(*xlims)
ax.set_ylim(*ylims)
ax.set(xlabel=xlabel_xs,
ylabel=ylabel_xs,
title='U-235 fission XS at 1200 K')
h1, l1 = ax.get_legend_handles_labels()
plt.savefig('U235_1200K_MT18.pdf', bbox_inches='tight')
ax.set(title='U-235 fission XS at 1200 K and Watt PFNS')
ax2 = ax.twinx()
ax2.loglog(x_wattPFNS, y_wattPFNS, color='red', label='Watt PFNS')
ax2.set_ylabel(ylabel)
ax2.set_ylim(*ylims/1E3)
ax2.grid(False)
h2, l2 = ax2.get_legend_handles_labels()
plt.legend(h1+h2, l1+l2)
plt.savefig('U235_1200K_MT18_chip.pdf', bbox_inches='tight')
###Output
_____no_output_____
###Markdown
In order to improve the chance of fission, we would like to shift the neutron spectrum to somewhere in the eV range, which is the idea behind adding a moderator to the reactor. So we'll check that idea here by studying neutron energy and speed at reactor operating temperatures.
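For the conversions below we only need the non-relativistic relation $ v = \sqrt{2E/m_n} $ with $ E = k_B T $. As a reference point, the textbook thermal value $ E = 0.0253 $ eV corresponds to $ v \approx 2200 $ m/s (a temperature of about 293.6 K), so the numbers for 300 K and 600 K should come out a bit above that.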
###Code
#From NIST in 14/2/2022
k_B = 8.617333262E-5 # eV/K
kB_J = 1.380649E-23 # J/K
neutron_mass = 1.67492749804E-27 # kg
eV2Joule = 1.602176565E-19
def E_mean(T):
return 3/2 * k_B * T
def particle_speed(m, E):
return sqrt(2*E/m)
def report_neutron_data(T):
E = T*k_B
v = particle_speed(neutron_mass, E*eV2Joule)
print(f"{T} K represents", E, "eV")
print(f"Neutron speed", v, "m/s")
Em = E_mean(T)
vm = particle_speed(neutron_mass, Em*eV2Joule)
print(f"Mean energy", Em, "eV")
print(f"Mean speed:", vm, "m/s\n")
# test energy and idea
report_neutron_data(300)
report_neutron_data(600)
###Output
300 K represents 0.025851999786 eV
Neutron speed 2223.920462567311 m/s
Mean energy 0.038777999679 eV
Mean speed: 2723.735180912124 m/s
600 K represents 0.051703999572 eV
Neutron speed 3145.098479801738 m/s
Mean energy 0.077555999358 eV
Mean speed: 3851.9432331586618 m/s
###Markdown
So we can see that with a moderator at 300 K (roughly room temperature) we have a neutron energy of just around 0.025 eV, which intuitively already looks promising if we look at the cross-section values. But we can reach better conclusions by plotting the Maxwellian spectrum at this temperature and seeing what happens. Unlike when we wanted to calculate the PFNS, which required a tabulated "nuclear temperature", here we can calculate everything from the desired temperature.
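The closed forms used for the PFNS carry over directly and give us something to check the plots against: at 300 K, $ kT \approx 0.0259 $ eV, so the thermal Maxwellian should peak at $ kT/2 \approx 0.013 $ eV and have a mean of $ 3kT/2 \approx 0.0388 $ eV, the "Mean energy" printed in the previous cell.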
###Code
T1 = 300
T2 = 600 # reactor coolant temperature in K
kT1_thermal = T1*k_B # eV
kT2_thermal = T2*k_B
print(f"Coefficients c1 and c2 at {T1} K:", maxwell_coe(kT1_thermal))
print(f"Coefficients c1 and c2 at {T2} K:", maxwell_coe(kT2_thermal))
maxwell_T1 = maxwell(*maxwell_coe(kT1_thermal))
maxwell_T2 = maxwell(*maxwell_coe(kT2_thermal))
xlabel_thermal = "$E'$ (eV)"
ylabel_thermal = "Spectrum $(\mathrm{eV}^{-1})$"
maxwell_T_lin = plot(maxwell_T1, maxwell_T2, (Ep, 1E-9, 1),
adaptive = False,
nb_of_points = 1000000,
size= fig_size,
xlabel=xlabel_thermal,
ylabel=ylabel_thermal,
axis_center=(-0.04,-0.3),
show=False)
x_maxwellT1, y_maxwellT1 = maxwell_T_lin._series[0].get_points()
x_maxwellT2, y_maxwellT2 = maxwell_T_lin._series[1].get_points()
fig, ax = plt.subplots(figsize=fig_size)
ax.plot(x_maxwellT1, y_maxwellT1, label=f'{T1} K')
ax.plot(x_maxwellT2, y_maxwellT2, label=f'{T2} K')
ax.set_xlabel(xlabel_thermal)
ax.set_ylabel(ylabel_thermal)
ax.set_xlim(-0.01, 0.4)
ax.legend()
plt.savefig('maxwell_thermal.pdf', bbox_inches='tight')
fig, ax = plt.subplots(figsize=fig_size)
ax.semilogx(x_maxwellT1, y_maxwellT1, label=f'{T1} K')
ax.semilogx(x_maxwellT2, y_maxwellT2, label=f'{T2} K')
ax.set_xlabel(xlabel_thermal)
ax.set_ylabel(ylabel_thermal)
ax.set_xlim(1E-6, 1)
ax.legend()
plt.savefig('maxwell_thermal_logx.pdf', bbox_inches='tight')
fig, ax = plt.subplots(figsize=fig_size)
ax.loglog(x_maxwellT1, y_maxwellT1, label=f'{T1} K')
ax.loglog(x_maxwellT2, y_maxwellT2, label=f'{T2} K')
ax.set_xlabel(xlabel_thermal)
ax.set_ylabel(ylabel_thermal)
ax.set_xlim(1E-6, 1)
ax.set_ylim(1E-4, 50)
ax.legend()
plt.savefig('maxwell_thermal_logxy.pdf', bbox_inches='tight')
###Output
Coefficients c1 and c2 at 300 K: (271.46499994684825, 38.68172707248528)
Coefficients c1 and c2 at 600 K: (95.97737115861109, 19.34086353624264)
###Markdown
The distributions appear to be in the right place. We can put that together with the fission cross-section and PFNS and observe that we shifted the spectrum to the left of the resonance range. This is much better from the point of view of maximizing fission as long as we can achieve that effectively, which means that we have to reduce the neutron energy as fast as possible to avoid the resonance range that is in between the PFNS and the Maxwellian distributions.
###Code
x_maxwellT1 = x_maxwellT1*1E-6 # convert the energy axis from eV to MeV
x_maxwellT2 = x_maxwellT2*1E-6
y_maxwellT1 = y_maxwellT1/1E-6 # rescale the spectrum from per-eV to per-MeV
y_maxwellT2 = y_maxwellT2/1E-6
fig, ax = plt.subplots()
ax.loglog(U235_12.energy, U235_12.reactions[18].sigma,
linewidth=0.5,
label='U235(n,f)')
ax.set(xlabel='Neutron energy (MeV)',
ylabel='$ \sigma_{\mathrm{f}} $ (barn)',
title='U-235 fission XS at 1200 K with PFNS and thermal spectra')
ax.set_xlim(*xlims)
ax.set_ylim(*ylims)
h1, l1 = ax.get_legend_handles_labels()
ax2 = ax.twinx()
ax2.grid(False)
ax2.set_ylabel(ylabel)
ax2.set_ylim(*ylims/1E3)
ax2.loglog(x_wattPFNS, y_wattPFNS, color='red', label='Watt PFNS')
h2, l2 = ax2.get_legend_handles_labels()
ax3 = ax.twinx()
ax3.spines['right'].set_position(('outward', 40))
ax3.grid(False)
ax3.set_ylabel("Spectrum $(\mathrm{MeV}^{-1})$")
ax3.set_ylim(*ylims*1E3)
ax3.loglog(x_maxwellT1, y_maxwellT1, color='green', label=f'Maxwell {T1} K')
h3, l3 = ax3.get_legend_handles_labels()
plt.legend(h1+h2+h3, l1+l2+l3)
plt.savefig('U235_1200K_MT18_chip_maxwellT1.pdf', bbox_inches='tight')
ax3.loglog(x_maxwellT2, y_maxwellT2, color='orange', label=f'Maxwell {T2} K')
h3, l3 = ax3.get_legend_handles_labels()
plt.legend(h1+h2+h3, l1+l2+l3)
plt.savefig('U235_1200K_MT18_chip_maxwellT12.pdf', bbox_inches='tight')
###Output
_____no_output_____ |
R for Drug Development - Theophylline.ipynb | ###Markdown
R for Drug Development Theophylline single-dose Concentration vs. time by individual https://github.com/philbowsher/Managers-R-for-Drug-Development-Session/tree/master/nlmixr
###Code
# install.packages("devtools")
devtools::install_github("nlmixrdevelopment/nlmixr")
###Output
_____no_output_____
###Markdown
nlmixr
nlmixr is an R package for fitting general dynamic models, pharmacokinetic (PK) models and pharmacokinetic-pharmacodynamic (PKPD) models in particular, with either individual data or population data.
https://github.com/nlmixrdevelopment/nlmixr
###Code
library(nlmixr)
library(ggplot2)
library(xpose.nlmixr)
###Output
_____no_output_____ |
nbs/xor-trajectory.ipynb | ###Markdown
XOR Trajectory when learning _Aside: Should do [Working efficiently with jupyter lab](https://florianwilhelm.info/2018/11/working_efficiently_with_jupyter_lab/)_
###Code
%load_ext autoreload
%autoreload 2
#%matplotlib widget
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Fetch our tools:
###Code
from lib.nn import Network, Layer, IdentityLayer, AffineLayer, MapLayer
from lib.nnbench import NNBench
from lib.nnvis import NNVis
###Output
_____no_output_____
###Markdown
___ Setup
Build our `xor` net:
###Code
net = Network()
net.extend(AffineLayer(2,2))
net.extend(MapLayer(np.tanh, lambda d: 1.0 - np.tanh(d)**2))
net.extend(AffineLayer(2,1))
net.extend(MapLayer(np.tanh, lambda d: 1.0 - np.tanh(d)**2))
###Output
_____no_output_____
###Markdown
Make a test bench and a visualizer:
###Code
bench = NNBench(net)
vis = NNVis(bench)
###Output
_____no_output_____
###Markdown
Prepare fixed training data for the learning process:
###Code
training_batch = (np.array([[-0.5, -0.5],
[-0.5, 0.5],
[ 0.5, 0.5],
[ 0.5, -0.5]]),
np.array([[-0.5],
[ 0.5],
[-0.5],
[ 0.5]]))
bench.training_batch = lambda n: training_batch
###Output
_____no_output_____
###Markdown
Set the state to an ordinary example starting point, for consistent notebook behavior below. We also make it the checkpoint in the bench.
###Code
net.set_state_from_vector(np.array([-0.88681521, -1.28596788, 0.3248974 , -2.33838503, 0.34761944,
-0.94541789, 1.99448043, 0.38704839, -3.8844268 ]))
bench.checkpoint_net()
###Output
_____no_output_____
###Markdown
The track that learning takes
Let us examine the trajectory in state space during learning, and the loss function. Each learning iteration changes the net state. We can examine those deltas.
Questions:
1. Are there regimes of direction-of-change (DoC) in state space, or does the DoC wander chaotically?
1. What are the spectral characteristics of the DoC? Length characteristics?
1. How do the DoC characteristics relate to the loss function, and its first difference?
1. How do these trajectories vary with learning rate? Are there clues in these to adapt the learning rate?
1. How do the trajectory characteristics vary across different starting nets?
1. How do these measures vary with the objective function of the learning process, that is, what you're trying to teach the net?
1. How do the different layers with learning state evolve? Do they settle at different times? How does an upstream layer change, as a consequence of learning, affect downstream layers? Down affect up?
###Code
bench.rollback_net()
#bench.net.eta = 0.25
bench.net.eta = 1
learned_track = bench.learn_track(1000)
###Output
_____no_output_____
###Markdown
Find the angles between trajectory steps, from$$\mathbf {a} \cdot \mathbf {b} = \left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\|\cos \theta \\\cos \theta = \frac{\mathbf {a} \cdot \mathbf {b}}{\left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\|} \\$$where $\mathbf {a}$ and $\mathbf {b}$ are a state-space trajectory step and the succeeding step respectively
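Purely as an illustration (not the library's actual code), here is a minimal NumPy sketch of the computation the formula above describes; `analyze_learning_track` in `lib.nnbench` presumably returns something equivalent, and the scratch cells at the bottom of this notebook walk through the same steps explicitly.

```python
import numpy as np

def step_cosines(trajectory):
    """Cosine of the angle between each state-space step and the next one.

    trajectory: array of shape (iterations, state_dim), one state vector per row.
    """
    steps = np.diff(trajectory, axis=0)                    # per-iteration deltas
    norms = np.sqrt(np.einsum('...i,...i', steps, steps))  # L2 norm of each step
    dots = np.einsum('...i,...i', steps[:-1], steps[1:])   # a . b for successive steps
    denom = norms[:-1] * norms[1:]
    # guard against zero-length steps, as in the scratch cells below
    return np.divide(dots, denom, out=np.zeros_like(dots), where=denom != 0)
```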
###Code
traja = bench.analyze_learning_track(learned_track)
vis.plot_trajectory(traja)
###Output
_____no_output_____
###Markdown
Conclusions
We asked:
1. Are there regimes of direction-of-change (DoC) in state space, or does the DoC wander chaotically? \ _**Answer:** Regimes exist. Often the initial direction is maintained (cos near 1) up to some region where the direction becomes chaotic, the loss improves, then falls into back-and-forth oscillations (cos near 0) perfecting the loss._
1. What are the spectral characteristics of the DoC? Length characteristics?
1. How do the DoC characteristics relate to the loss function, and its first difference?
1. How do these trajectories vary with learning rate? Are there clues in these to adapt the learning rate?
1. How do the trajectory characteristics vary across different starting nets?
1. How do these measures vary with the objective function of the learning process, that is, what you're trying to teach the net?
1. How do the different layers with learning state evolve? Do they settle at different times? How does an upstream layer change, as a consequence of learning, affect downstream layers? Down affect up?
---
Scratch
###Code
assert False, "Stop notebook execution here if entering from above"
bench.randomize_net()
t = bench.learn(1000)
bench.net.state_vector()
net = bench.net
net([[-.5, -.5], [-.5, .5]])
net.layers
from nnbench import Thing
t = Thing(color='brown', weight=7)
t.cow = 'moo'
t.cow
t.color
###Output
_____no_output_____
###Markdown
---
###Code
# Boneyard
assert False, "Stop notebook execution, the rest is scrap"
###Output
_____no_output_____
###Markdown
Wrangle the state-space trajectory and the losses into form.
###Code
trajectory = np.vstack([v[0] for v in lt])
losses = np.vstack([v[1] for v in lt])
###Output
_____no_output_____
###Markdown
Take first differences, which represent the changes at each step
###Code
traj_steps = np.diff(trajectory, axis=0)
loss_steps = np.diff(losses, axis=0)
traj_steps[:5]
###Output
_____no_output_____
###Markdown
Find the L2 norm of the trajectory steps $\lVert traj \rVert$:
###Code
traj_L2 = np.sqrt(np.einsum('...i,...i', traj_steps, traj_steps))
len(traj_L2), traj_L2[:5], traj_L2[-5:]
###Output
_____no_output_____
###Markdown
Find the angles between trajectory steps, from$$\mathbf {a} \cdot \mathbf {b} = \left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\|\cos \theta \\\cos \theta = \frac{\mathbf {a} \cdot \mathbf {b}}{\left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\|} \\$$where $\mathbf {a}$ and $\mathbf {b}$ are a state-space trajectory step and the succeeding step respectively Find $\mathbf {a} \cdot \mathbf {b}$:
###Code
trajn_dot_nplus1 = np.einsum('...i,...i', traj_steps[:-1], traj_steps[1:])
trajn_dot_nplus1[:5], np.any(trajn_dot_nplus1 < 0)
###Output
_____no_output_____
###Markdown
Find $\left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\|$:
###Code
traj_cos_denom = np.multiply(traj_L2[:-1], traj_L2[1:])
###Output
_____no_output_____
###Markdown
This will be the divisor. Some entries may be zero, so we adapt
###Code
len(traj_L2) - np.count_nonzero(traj_L2)
np.equal(traj_L2, 0)
###Output
_____no_output_____
###Markdown
Find $\cos \theta$ by dividing, excluding division by zero:
###Code
traj_cos = np.divide(trajn_dot_nplus1, traj_cos_denom, where=traj_cos_denom!=0.0)
traj_cos[:5], traj_cos[-5:], min(traj_cos), max(traj_cos)
#traj_theta = np.arccos(traj_cos)
#traj_theta[:5], traj_theta[-5:]
net = Network()
net.extend(AffineLayer(2,2))
#leak = 0
#net.extend(MapLayer(lambda x: (x*(1+leak/2)+abs(x)*(1-leak/2))/2, lambda d: [leak,1][1 if d>0 else 0]))
#net.extend(MapLayer(lambda x: max(0, np.sign(x)) * x, lambda d: max(0, np.sign(d))))
net.extend(MapLayer(np.tanh, lambda d: 1.0 - np.tanh(d)**2))
net.extend(AffineLayer(2,1))
net.extend(MapLayer(np.tanh, lambda d: 1.0 - np.tanh(d)**2))
#sigmoid = lambda x: 1/(np.exp(x)+1)
#net.extend(MapLayer(sigmoid, lambda d: sigmoid(d)*(1-sigmoid(d))))
#net.extend(MapLayer(lambda x: max(0, np.sign(x)) * x, lambda d: max(0, np.sign(d))))
###Output
_____no_output_____ |
Week 1/Practicing Numpy.ipynb | ###Markdown
Numpy
- **NumPy (or Numpy) is a Linear Algebra Library for Python**
- **Almost all of the libraries in the PyData Ecosystem rely on NumPy as one of their main building blocks.**
- **Numpy is also incredibly fast because of its bindings to C libraries.**
- **To install use:** conda install numpy
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Numpy arrays have two forms: vectors and matrices. Vectors are strictly 1-d arrays and matrices are 2-d (but you should note a matrix can still have only one row or one column). Creating numpy arrays:
- **From a Python List**
- **Built-in ways to create Arrays**
  - **arange: for evenly spaced values within a given interval**
  - **zeros and ones: arrays of zeros or ones**
  - **np.full: Return a new array of given shape and type, filled with fill_value.**
  - **np.repeat: Repeat elements of an array**
  - **linspace: evenly spaced numbers over a specified interval**
  - **eye: for an identity matrix**
- **Random: create random number arrays**
  - **rand: For creating an array of the given shape, filled with random samples from a uniform distribution over ``[0, 1)``**
  - **randn: creates a sample (or samples) from the "standard normal" distribution (unlike rand, which uses a uniform distribution)**
  - **randint: Creates random integers from low (inclusive) to high (exclusive). A third `size` argument sets how many values to generate**
Methods and Attributes:
- **reshape: returns array with same data but with a new shape**
- **max,min,argmax,argmin: Methods for finding max or min values or to find their index locations using argmin or argmax**
- **shape: to know the shape of array**
- **dtype: to know the data type of the object in the array**
###Code
list1 = [5,5,6,7,8,5,2,5]
list1
np.array(list1)
matrix1 = [[8,9,5.2],[5,161,2],[846,2,8]]
matrix1
np.array(matrix1)
###Output
_____no_output_____
###Markdown
Built-in ways to create Arrays arange: for evenly spaced values within a given interval
###Code
np.arange(0,20)
np.arange(0,50,3)
###Output
_____no_output_____
###Markdown
np.zeros and np.ones: arrays of zeros or ones
np.full: Return a new array of given shape and type, filled with fill_value.
np.repeat: Repeat elements of an array
###Code
zeros = np.zeros(10)
zeros
ones = np.ones(10)
ones
array = np.full(10, 0.0)
array
array = np.repeat(0.0, 10)
array
array = np.repeat([0.0, 1.0], 5)
array
array = np.repeat([0.0, 1.0], [2, 3])
array
###Output
_____no_output_____
###Markdown
linspace: evenly spaced numbers over a specified interval
###Code
np.linspace(0,20,3)
np.linspace(0,30,50)
###Output
_____no_output_____
###Markdown
eye: for an identity matrix
###Code
np.eye(7)
###Output
_____no_output_____
###Markdown
Random: create random number arrays
rand: - **For creating an array of the given shape, filled with random samples from a uniform distribution over [0, 1)**
###Code
np.random.rand(5)
np.random.rand(7,5)
###Output
_____no_output_____
###Markdown
randn: - **creates a sample (or samples) from the "standard normal" distribution (unlike rand, which samples from a uniform distribution)**
###Code
np.random.randn(5)
np.random.randn(7,5)
###Output
_____no_output_____
###Markdown
randint: - **Creates random integers from low (inclusive) to high (exclusive). A third `size` argument sets how many values to generate**
###Code
np.random.randint(1,500)
np.random.randint(1,500,10)
###Output
_____no_output_____
###Markdown
Methods and Attributes: reshape: - **returns array with same data but with a new shape**
###Code
arr = np.arange(30)
ranarr = np.random.randint(0,500,10)
arr
ranarr
###Output
_____no_output_____
###Markdown
Reshape- **Returns an array containing the same data with a new shape.**
###Code
arr.reshape(6,5)
###Output
_____no_output_____
###Markdown
max,min,argmax,argmin- **These are useful methods for finding max or min values. Or to find their index locations using argmin or argmax**
###Code
ranarr
ranarr.max()
ranarr.argmax()
ranarr.min()
ranarr.argmin()
###Output
_____no_output_____
###Markdown
Shape- **to know the shape of array**
###Code
# Vector
arr.shape
# Notice the two sets of brackets
arr.reshape(1,30)
arr.reshape(1,30).shape
arr.reshape(30,1)
arr.reshape(30,1).shape
###Output
_____no_output_____
###Markdown
dtype- **You can also grab the data type of the object in the array**
###Code
arr.dtype
###Output
_____no_output_____
###Markdown
Accessing the element of an array by index:
###Code
el = array[1]
print(el)
###Output
0.0
###Markdown
Accessing multiple elements of an array by a list of indices:
###Code
array[[4, 2, 0]]
###Output
_____no_output_____
###Markdown
Assignment:
###Code
array[1] = 1
print(array)
###Output
[0. 1. 1. 1. 1.]
###Markdown
Multi-dimensional NumPy arrays Specify the shape with a tuple:
###Code
zeros = np.zeros((5, 2), dtype=np.float32)
zeros
numbers = [
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
]
numbers = np.array(numbers)
print(numbers[0, 1])
###Output
2
###Markdown
Assignment: use a tuple (row index, column index)
###Code
numbers[0, 1] = 10
numbers
numbers[0]
###Output
_____no_output_____
###Markdown
Slicing: getting a column:
###Code
numbers[:, 1]
###Output
_____no_output_____
###Markdown
Assigning a row:
###Code
numbers[1] = [1, 1, 1]
numbers
###Output
_____no_output_____
###Markdown
Assigning a column:
###Code
numbers[:, 2] = [9, 9, 9]
numbers
###Output
_____no_output_____
###Markdown
Element-wise operations
###Code
rng = np.arange(5)
rng
###Output
_____no_output_____
###Markdown
Every item in the array is multiplied by 2:
###Code
rng * 2
(rng - 1) * 3 / 2 + 1
###Output
_____no_output_____
###Markdown
Adding one array with another
###Code
np.random.seed(2)
noise = 0.01 * np.random.rand(5)
noise
numbers = np.arange(5)
numbers
result = numbers + noise
result
###Output
_____no_output_____
###Markdown
Rounding the numbers to 4th digit:
###Code
result.round(4)
###Output
_____no_output_____
###Markdown
Two ways to square each element:
- element-wise multiplication with itself
- the power operator (`**`)
###Code
np.random.seed(2)
pred = np.random.rand(3).round(2)
pred
square = pred * pred
square
square = pred ** 2
square
###Output
_____no_output_____
###Markdown
Other element-wise operations: exp log sqrt
###Code
np.exp(pred)
np.log(pred)
np.sqrt(pred)
###Output
_____no_output_____
###Markdown
Comparison operations
###Code
np.random.seed(2)
pred = np.random.rand(3).round(2)
pred
result = pred >= 0.5
result
np.random.seed(2)
pred1 = np.random.rand(3).round(2)
pred1
pred2 = np.random.rand(3).round(2)
pred2
pred1 >= pred2
###Output
_____no_output_____
###Markdown
Logical operations
###Code
pred1 = np.random.rand(3) >= 0.3
pred2 = np.random.rand(3) >= 0.4
pred1 & pred2
pred1 | pred2
###Output
_____no_output_____
###Markdown
Summarizing operations
- **Summarizing operations process an array and return a single number**
###Code
np.random.seed(2)
pred = np.random.rand(3).round(2)
pred_sum = pred.sum()
pred
pred_sum
print('min = %.2f' % pred.min())
print('mean = %.2f' % pred.mean())
print('max = %.2f' % pred.max())
print('std = %.2f' % pred.std())
###Output
min = 0.03
mean = 0.34
max = 0.55
std = 0.22
###Markdown
For a two-dimensional array it works in the same way:
###Code
np.random.seed(2)
matrix = np.random.rand(4, 3).round(2)
matrix
matrix.max()
###Output
_____no_output_____
###Markdown
But we can specify the axis along which we apply the summarizing operation:
- **axis=1 - apply it to each row**
- **axis=0 - apply it to each column**
###Code
matrix.max(axis=1)
matrix.max(axis=0)
matrix.sum(axis=1)
###Output
_____no_output_____
###Markdown
Sorting
###Code
np.random.seed(2)
pred = np.random.rand(4).round(2)
pred
np.sort(pred)
pred
###Output
_____no_output_____
###Markdown
Sorts in place:
###Code
pred.sort()
pred
###Output
_____no_output_____
###Markdown
Argsort - instead of sorting, return the indexes of the array in sorted order
###Code
np.random.seed(2)
pred = np.random.rand(4).round(2)
pred
idx = pred.argsort()
idx
pred[idx]
###Output
_____no_output_____
###Markdown
Putting multiple arrays together:
- **concatenate**
- **hstack: Stack arrays in sequence horizontally (column wise).**
- **vstack: Stack arrays in sequence vertically (row wise).**
- **column_stack: Stack 1-D arrays as columns into a 2-D array.**
###Code
vec = np.arange(3)
vec
mat = np.arange(6).reshape(3, 2)
mat
np.concatenate([vec, vec])
np.hstack([vec, vec])
np.hstack([mat, mat])
np.concatenate([mat, mat])
np.column_stack([vec, mat])
np.column_stack([vec, vec])
np.vstack([vec, vec])
np.vstack([mat, mat])
###Output
_____no_output_____
###Markdown
Transpose
###Code
mat.T
np.vstack([vec, mat.T])
###Output
_____no_output_____
###Markdown
Slicing- Taking a part of the array
###Code
mat = np.arange(15).reshape(5, 3)
mat
mat[:3]
mat[1:3, :2]
mat[:, :2]
mat[1:3, :2]
mat[[3, 0, 1]]
mat[:, 0] % 2 == 1
mat[mat[:, 0] % 2 == 1]
###Output
_____no_output_____ |
scr/TensorFlow.ipynb | ###Markdown
What's this TensorFlow business?For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, TensorFlow. What is it?TensorFlow is a system for executing computational graphs over Tensor objects, with native support for performing backpropogation for its Variables. In it, we work with Tensors which are n-dimensional arrays analogous to the numpy ndarray. Why?* Our code will now run on GPUs! Much faster training. Writing your own modules to run on GPUs is beyond the scope of this class, unfortunately.* We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand. * We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :) * We want you to be exposed to the sort of deep learning code you might run into in academia or industry. How will I learn TensorFlow?TensorFlow has many excellent tutorials available, including those from [Google themselves](https://www.tensorflow.org/get_started/get_started).Otherwise, this notebook will walk you through much of what you need to do to train models in TensorFlow. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here. Table of ContentsThis notebook has 5 parts. We will walk through TensorFlow at three different levels of abstraction, which should help you better understand it and prepare you for working on your project.1. Preparation: load the CIFAR-10 dataset.2. Barebone TensorFlow: we will work directly with low-level TensorFlow graphs. 3. Keras Model API: we will use `tf.keras.Model` to define arbitrary neural network architecture. 4. Keras Sequential API: we will use `tf.keras.Sequential` to define a linear feed-forward network very conveniently. 5. CIFAR-10 open-ended challenge: please implement your own network to get as high accuracy as possible on CIFAR-10. You can experiment with any layer, optimizer, hyperparameters or other advanced features. Here is a table of comparison:| API | Flexibility | Convenience ||---------------|-------------|-------------|| Barebone | High | Low || `tf.keras.Model` | High | Medium || `tf.keras.Sequential` | Low | High | Part I: PreparationFirst, we load the CIFAR-10 dataset. This might take a few minutes to download the first time you run it, but after that the files should be cached on disk and loading should be faster.For the purposes of this assignment we will still write our own code to preprocess the data and iterate through it in minibatches. The `tf.data` package in TensorFlow provides tools for automating this process, but working with this package adds extra complication and is beyond the scope of this notebook. However using `tf.data` can be much more efficient than the simple approach used in this notebook, so you should consider using it for your project.
###Code
import os
import tensorflow as tf
import numpy as np
import math
import timeit
import matplotlib.pyplot as plt
%matplotlib inline
def load_cifar10(num_training=49000, num_validation=1000, num_test=10000):
"""
Fetch the CIFAR-10 dataset from the web and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 dataset and use appropriate data types and shapes
cifar10 = tf.keras.datasets.cifar10.load_data()
(X_train, y_train), (X_test, y_test) = cifar10
X_train = np.asarray(X_train, dtype=np.float32)
y_train = np.asarray(y_train, dtype=np.int32).flatten()
X_test = np.asarray(X_test, dtype=np.float32)
y_test = np.asarray(y_test, dtype=np.int32).flatten()
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean pixel and divide by std
mean_pixel = X_train.mean(axis=(0, 1, 2), keepdims=True)
std_pixel = X_train.std(axis=(0, 1, 2), keepdims=True)
X_train = (X_train - mean_pixel) / std_pixel
X_val = (X_val - mean_pixel) / std_pixel
X_test = (X_test - mean_pixel) / std_pixel
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
NHW = (0, 1, 2)
X_train, y_train, X_val, y_val, X_test, y_test = load_cifar10()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape, y_train.dtype)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
###Output
Train data shape: (49000, 32, 32, 3)
Train labels shape: (49000,) int32
Validation data shape: (1000, 32, 32, 3)
Validation labels shape: (1000,)
Test data shape: (10000, 32, 32, 3)
Test labels shape: (10000,)
###Markdown
Preparation: Dataset object
For our own convenience we'll define a lightweight `Dataset` class which lets us iterate over data and labels. This is not the most flexible or most efficient way to iterate through data, but it will serve our purposes.
###Code
class Dataset(object):
def __init__(self, X, y, batch_size, shuffle=False):
"""
Construct a Dataset object to iterate over data X and labels y
Inputs:
- X: Numpy array of data, of any shape
- y: Numpy array of labels, of any shape but with y.shape[0] == X.shape[0]
- batch_size: Integer giving number of elements per minibatch
- shuffle: (optional) Boolean, whether to shuffle the data on each epoch
"""
assert X.shape[0] == y.shape[0], 'Got different numbers of data and labels'
self.X, self.y = X, y
self.batch_size, self.shuffle = batch_size, shuffle
def __iter__(self):
N, B = self.X.shape[0], self.batch_size
idxs = np.arange(N)
if self.shuffle:
np.random.shuffle(idxs)
return iter((self.X[i:i+B], self.y[i:i+B]) for i in range(0, N, B))
train_dset = Dataset(X_train, y_train, batch_size=64, shuffle=True)
val_dset = Dataset(X_val, y_val, batch_size=64, shuffle=False)
test_dset = Dataset(X_test, y_test, batch_size=64)
# We can iterate through a dataset like this:
for t, (x, y) in enumerate(train_dset):
print(t, x.shape, y.shape)
if t > 5: break
###Output
0 (64, 32, 32, 3) (64,)
1 (64, 32, 32, 3) (64,)
2 (64, 32, 32, 3) (64,)
3 (64, 32, 32, 3) (64,)
4 (64, 32, 32, 3) (64,)
5 (64, 32, 32, 3) (64,)
6 (64, 32, 32, 3) (64,)
###Markdown
You can optionally **use GPU by setting the flag to True below**. It's not necessary to use a GPU for this assignment; if you are working on Google Cloud then we recommend that you do not use a GPU, as it will be significantly more expensive.
###Code
# Set up some global variables
USE_GPU = False
if USE_GPU:
device = '/device:GPU:0'
else:
device = '/cpu:0'
# Constant to control how often we print when training models
print_every = 100
print('Using device: ', device)
###Output
Using device: /cpu:0
###Markdown
Part II: Barebone TensorFlowTensorFlow ships with various high-level APIs which make it very convenient to define and train neural networks; we will cover some of these constructs in Part III and Part IV of this notebook. In this section we will start by building a model with basic TensorFlow constructs to help you better understand what's going on under the hood of the higher-level APIs.TensorFlow is primarily a framework for working with **static computational graphs**. Nodes in the computational graph are Tensors which will hold n-dimensional arrays when the graph is run; edges in the graph represent functions that will operate on Tensors when the graph is run to actually perform useful computation.This means that a typical TensorFlow program is written in two distinct phases:1. Build a computational graph that describes the computation that you want to perform. This stage doesn't actually perform any computation; it just builds up a symbolic representation of your computation. This stage will typically define one or more `placeholder` objects that represent inputs to the computational graph.2. Run the computational graph many times. Each time the graph is run you will specify which parts of the graph you want to compute, and pass a `feed_dict` dictionary that will give concrete values to any `placeholder`s in the graph. TensorFlow warmup: Flatten FunctionWe can see this in action by defining a simple `flatten` function that will reshape image data for use in a fully-connected network.In TensorFlow, data for convolutional feature maps is typically stored in a Tensor of shape N x H x W x C where:- N is the number of datapoints (minibatch size)- H is the height of the feature map- W is the width of the feature map- C is the number of channels in the feature mapThis is the right way to represent the data when we are doing something like a 2D convolution, that needs spatial understanding of where the intermediate features are relative to each other. When we use fully connected affine layers to process the image, however, we want each datapoint to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data. So, we use a "flatten" operation to collapse the `H x W x C` values per representation into a single long vector. The flatten function below first reads in the value of N from a given batch of data, and then returns a "view" of that data. "View" is analogous to numpy's "reshape" method: it reshapes x's dimensions to be N x ??, where ?? is allowed to be anything (in this case, it will be H x W x C, but we don't need to specify that explicitly). **NOTE**: TensorFlow and PyTorch differ on the default Tensor layout; TensorFlow uses N x H x W x C but PyTorch uses N x C x H x W.
###Code
def flatten(x):
"""
Input:
- TensorFlow Tensor of shape (N, D1, ..., DM)
Output:
- TensorFlow Tensor of shape (N, D1 * ... * DM)
"""
N = tf.shape(x)[0]
return tf.reshape(x, (N, -1))
def test_flatten():
# Clear the current TensorFlow graph.
tf.reset_default_graph()
# Stage I: Define the TensorFlow graph describing our computation.
# In this case the computation is trivial: we just want to flatten
# a Tensor using the flatten function defined above.
# Our computation will have a single input, x. We don't know its
# value yet, so we define a placeholder which will hold the value
# when the graph is run. We then pass this placeholder Tensor to
# the flatten function; this gives us a new Tensor which will hold
# a flattened view of x when the graph is run. The tf.device
# context manager tells TensorFlow whether to place these Tensors
# on CPU or GPU.
with tf.device(device):
x = tf.placeholder(tf.float32)
x_flat = flatten(x)
# At this point we have just built the graph describing our computation,
# but we haven't actually computed anything yet. If we print x and x_flat
# we see that they don't hold any data; they are just TensorFlow Tensors
# representing values that will be computed when the graph is run.
print('x: ', type(x), x)
print('x_flat: ', type(x_flat), x_flat)
print()
# We need to use a TensorFlow Session object to actually run the graph.
with tf.Session() as sess:
# Construct concrete values of the input data x using numpy
x_np = np.arange(24).reshape((2, 3, 4))
print('x_np:\n', x_np, '\n')
# Run our computational graph to compute a concrete output value.
# The first argument to sess.run tells TensorFlow which Tensor
# we want it to compute the value of; the feed_dict specifies
# values to plug into all placeholder nodes in the graph. The
# resulting value of x_flat is returned from sess.run as a
# numpy array.
x_flat_np = sess.run(x_flat, feed_dict={x: x_np})
print('x_flat_np:\n', x_flat_np, '\n')
# We can reuse the same graph to perform the same computation
# with different input data
x_np = np.arange(12).reshape((2, 3, 2))
print('x_np:\n', x_np, '\n')
x_flat_np = sess.run(x_flat, feed_dict={x: x_np})
print('x_flat_np:\n', x_flat_np)
test_flatten()
###Output
x: <class 'tensorflow.python.framework.ops.Tensor'> Tensor("Placeholder:0", dtype=float32, device=/device:CPU:0)
x_flat: <class 'tensorflow.python.framework.ops.Tensor'> Tensor("Reshape:0", shape=(?, ?), dtype=float32, device=/device:CPU:0)
x_np:
[[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
[[12 13 14 15]
[16 17 18 19]
[20 21 22 23]]]
x_flat_np:
[[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11.]
[12. 13. 14. 15. 16. 17. 18. 19. 20. 21. 22. 23.]]
x_np:
[[[ 0 1]
[ 2 3]
[ 4 5]]
[[ 6 7]
[ 8 9]
[10 11]]]
x_flat_np:
[[ 0. 1. 2. 3. 4. 5.]
[ 6. 7. 8. 9. 10. 11.]]
###Markdown
Barebones TensorFlow: Two-Layer Network
We will now implement our first neural network with TensorFlow: a two-layer fully-connected ReLU network (a single hidden layer) with no biases, on the CIFAR10 dataset. For now we will use only low-level TensorFlow operators to define the network; later we will see how to use the higher-level abstractions provided by `tf.keras` to simplify the process.
We will define the forward pass of the network in the function `two_layer_fc`; this will accept TensorFlow Tensors for the inputs and weights of the network, and return a TensorFlow Tensor for the scores. It's important to keep in mind that calling the `two_layer_fc` function **does not** perform any computation; instead it just sets up the computational graph for the forward computation. To actually run the network we need to enter a TensorFlow Session and feed data to the computational graph.
After defining the network architecture in the `two_layer_fc` function, we will test the implementation by setting up and running a computational graph, feeding zeros to the network and checking the shape of the output. It's important that you read and understand this implementation.
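Written out, the forward pass that `two_layer_fc` implements is simply $ h = \mathrm{ReLU}(x W_1) $ followed by $ s = h W_2 $, where the flattened input $x$ has shape (N, D), $W_1$ has shape (D, H) and $W_2$ has shape (H, C); keeping these shapes in mind makes the `tf.zeros` sizes in the test function below easier to follow.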
###Code
def two_layer_fc(x, params):
"""
A fully-connected neural network; the architecture is:
fully-connected layer -> ReLU -> fully connected layer.
Note that we only need to define the forward pass here; TensorFlow will take
care of computing the gradients for us.
The input to the network will be a minibatch of data, of shape
(N, d1, ..., dM) where d1 * ... * dM = D. The hidden layer will have H units,
and the output layer will produce scores for C classes.
Inputs:
- x: A TensorFlow Tensor of shape (N, d1, ..., dM) giving a minibatch of
input data.
- params: A list [w1, w2] of TensorFlow Tensors giving weights for the
network, where w1 has shape (D, H) and w2 has shape (H, C).
Returns:
- scores: A TensorFlow Tensor of shape (N, C) giving classification scores
for the input data x.
"""
w1, w2 = params # Unpack the parameters
x = flatten(x) # Flatten the input; now x has shape (N, D)
h = tf.nn.relu(tf.matmul(x, w1)) # Hidden layer: h has shape (N, H)
scores = tf.matmul(h, w2) # Compute scores of shape (N, C)
return scores
def two_layer_fc_test():
# TensorFlow's default computational graph is essentially a hidden global
# variable. To avoid adding to this default graph when you rerun this cell,
# we clear the default graph before constructing the graph we care about.
tf.reset_default_graph()
hidden_layer_size = 42
# Scoping our computational graph setup code under a tf.device context
# manager lets us tell TensorFlow where we want these Tensors to be
# placed.
with tf.device(device):
# Set up a placehoder for the input of the network, and constant
# zero Tensors for the network weights. Here we declare w1 and w2
# using tf.zeros instead of tf.placeholder as we've seen before - this
# means that the values of w1 and w2 will be stored in the computational
# graph itself and will persist across multiple runs of the graph; in
# particular this means that we don't have to pass values for w1 and w2
# using a feed_dict when we eventually run the graph.
x = tf.placeholder(tf.float32)
w1 = tf.zeros((32 * 32 * 3, hidden_layer_size))
w2 = tf.zeros((hidden_layer_size, 10))
# Call our two_layer_fc function to set up the computational
# graph for the forward pass of the network.
scores = two_layer_fc(x, [w1, w2])
# Use numpy to create some concrete data that we will pass to the
# computational graph for the x placeholder.
x_np = np.zeros((64, 32, 32, 3))
with tf.Session() as sess:
# The calls to tf.zeros above do not actually instantiate the values
# for w1 and w2; the following line tells TensorFlow to instantiate
# the values of all Tensors (like w1 and w2) that live in the graph.
sess.run(tf.global_variables_initializer())
# Here we actually run the graph, using the feed_dict to pass the
# value to bind to the placeholder for x; we ask TensorFlow to compute
# the value of the scores Tensor, which it returns as a numpy array.
scores_np = sess.run(scores, feed_dict={x: x_np})
print(scores_np.shape)
two_layer_fc_test()
###Output
(64, 10)
###Markdown
Barebones TensorFlow: Three-Layer ConvNetHere you will complete the implementation of the function `three_layer_convnet` which will perform the forward pass of a three-layer convolutional network. The network should have the following architecture:1. A convolutional layer (with bias) with `channel_1` filters, each with shape `KW1 x KH1`, and zero-padding of two2. ReLU nonlinearity3. A convolutional layer (with bias) with `channel_2` filters, each with shape `KW2 x KH2`, and zero-padding of one4. ReLU nonlinearity5. Fully-connected layer with bias, producing scores for `C` classes.**HINT**: For convolutions: https://www.tensorflow.org/api_docs/python/tf/nn/conv2d; be careful with padding!**HINT**: For biases: https://www.tensorflow.org/performance/xla/broadcasting
###Code
def three_layer_convnet(x, params):
"""
A three-layer convolutional network with the architecture described above.
Inputs:
- x: A TensorFlow Tensor of shape (N, H, W, 3) giving a minibatch of images
- params: A list of TensorFlow Tensors giving the weights and biases for the
network; should contain the following:
- conv_w1: TensorFlow Tensor of shape (KH1, KW1, 3, channel_1) giving
weights for the first convolutional layer.
- conv_b1: TensorFlow Tensor of shape (channel_1,) giving biases for the
first convolutional layer.
- conv_w2: TensorFlow Tensor of shape (KH2, KW2, channel_1, channel_2)
giving weights for the second convolutional layer
- conv_b2: TensorFlow Tensor of shape (channel_2,) giving biases for the
second convolutional layer.
- fc_w: TensorFlow Tensor giving weights for the fully-connected layer.
Can you figure out what the shape should be?
- fc_b: TensorFlow Tensor giving biases for the fully-connected layer.
Can you figure out what the shape should be?
"""
conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b = params
scores = None
############################################################################
# TODO: Implement the forward pass for the three-layer ConvNet. #
############################################################################
x = tf.nn.conv2d(x, conv_w1, strides = [1,1,1,1], padding="SAME")
x = tf.nn.bias_add(x, conv_b1)
x = tf.nn.relu(x)
x = tf.nn.conv2d(x, conv_w2, strides = [1,1,1,1], padding="SAME")
x = tf.nn.bias_add(x, conv_b2)
x = tf.nn.relu(x)
x = flatten(x)
scores = tf.add(tf.matmul(x, fc_w), fc_b)
############################################################################
# END OF YOUR CODE #
############################################################################
return scores
###Output
_____no_output_____
###Markdown
After defining the forward pass of the three-layer ConvNet above, run the following cell to test your implementation. Like the two-layer network, we use the `three_layer_convnet` function to set up the computational graph, then run the graph on a batch of zeros just to make sure the function doesn't crash and that it produces outputs of the correct shape.When you run this function, `scores_np` should have shape `(64, 10)`.
###Code
def three_layer_convnet_test():
tf.reset_default_graph()
with tf.device(device):
x = tf.placeholder(tf.float32)
conv_w1 = tf.zeros((5, 5, 3, 6))
conv_b1 = tf.zeros((6,))
conv_w2 = tf.zeros((3, 3, 6, 9))
conv_b2 = tf.zeros((9,))
fc_w = tf.zeros((32 * 32 * 9, 10))
fc_b = tf.zeros((10,))
params = [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b]
scores = three_layer_convnet(x, params)
# Inputs to convolutional layers are 4-dimensional arrays with shape
# [batch_size, height, width, channels]
x_np = np.zeros((64, 32, 32, 3))
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
scores_np = sess.run(scores, feed_dict={x: x_np})
print('scores_np has shape: ', scores_np.shape)
with tf.device('/cpu:0'):
three_layer_convnet_test()
###Output
scores_np has shape: (64, 10)
###Markdown
Barebones TensorFlow: Training StepWe now define the `training_step` function which sets up the part of the computational graph that performs a single training step. This will take three basic steps:1. Compute the loss2. Compute the gradient of the loss with respect to all network weights3. Make a weight update step using (stochastic) gradient descent.Note that the step of updating the weights is itself an operation in the computational graph - the calls to `tf.assign_sub` in `training_step` return TensorFlow operations that mutate the weights when they are executed. There is an important bit of subtlety here - when we call `sess.run`, TensorFlow does not execute all operations in the computational graph; it only executes the minimal subset of the graph necessary to compute the outputs that we ask TensorFlow to produce. As a result, naively computing the loss would not cause the weight update operations to execute, since the operations needed to compute the loss do not depend on the output of the weight update. To fix this problem, we insert a **control dependency** into the graph, adding a duplicate `loss` node to the graph that does depend on the outputs of the weight update operations; this is the object that we actually return from the `training_step` function. As a result, asking TensorFlow to evaluate the value of the `loss` returned from `training_step` will also implicitly update the weights of the network using that minibatch of data.We need to use a few new TensorFlow functions to do all of this:- For computing the cross-entropy loss we'll use `tf.nn.sparse_softmax_cross_entropy_with_logits`: https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits- For averaging the loss across a minibatch of data we'll use `tf.reduce_mean`:https://www.tensorflow.org/api_docs/python/tf/reduce_mean- For computing gradients of the loss with respect to the weights we'll use `tf.gradients`: https://www.tensorflow.org/api_docs/python/tf/gradients- We'll mutate the weight values stored in a TensorFlow Tensor using `tf.assign_sub`: https://www.tensorflow.org/api_docs/python/tf/assign_sub- We'll add a control dependency to the graph using `tf.control_dependencies`: https://www.tensorflow.org/api_docs/python/tf/control_dependencies
###Code
def training_step(scores, y, params, learning_rate):
"""
Set up the part of the computational graph which makes a training step.
Inputs:
- scores: TensorFlow Tensor of shape (N, C) giving classification scores for
the model.
- y: TensorFlow Tensor of shape (N,) giving ground-truth labels for scores;
y[i] == c means that c is the correct class for scores[i].
- params: List of TensorFlow Tensors giving the weights of the model
- learning_rate: Python scalar giving the learning rate to use for gradient
descent step.
Returns:
- loss: A TensorFlow Tensor of shape () (scalar) giving the loss for this
batch of data; evaluating the loss also performs a gradient descent step
on params (see above).
"""
# First compute the loss; the first line gives losses for each example in
# the minibatch, and the second averages the losses across the batch
losses = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=scores)
loss = tf.reduce_mean(losses)
# Compute the gradient of the loss with respect to each parameter of the
# network. This is a very magical function call: TensorFlow internally
# traverses the computational graph starting at loss backward to each element
# of params, and uses backpropagation to figure out how to compute gradients;
# it then adds new operations to the computational graph which compute the
# requested gradients, and returns a list of TensorFlow Tensors that will
# contain the requested gradients when evaluated.
grad_params = tf.gradients(loss, params)
# Make a gradient descent step on all of the model parameters.
new_weights = []
for w, grad_w in zip(params, grad_params):
new_w = tf.assign_sub(w, learning_rate * grad_w)
new_weights.append(new_w)
# Insert a control dependency so that evaluating the loss causes a weight
# update to happen; see the discussion above.
with tf.control_dependencies(new_weights):
return tf.identity(loss)
###Output
_____no_output_____
###Markdown
Barebones TensorFlow: Training LoopNow we set up a basic training loop using low-level TensorFlow operations. We will train the model using stochastic gradient descent without momentum. The `training_step` function sets up the part of the computational graph that performs the training step, and the function `train_part2` iterates through the training data, making training steps on each minibatch, and periodically evaluates accuracy on the validation set.
###Code
def train_part2(model_fn, init_fn, learning_rate):
"""
Train a model on CIFAR-10.
Inputs:
- model_fn: A Python function that performs the forward pass of the model
using TensorFlow; it should have the following signature:
scores = model_fn(x, params) where x is a TensorFlow Tensor giving a
minibatch of image data, params is a list of TensorFlow Tensors holding
the model weights, and scores is a TensorFlow Tensor of shape (N, C)
giving scores for all elements of x.
- init_fn: A Python function that initializes the parameters of the model.
It should have the signature params = init_fn() where params is a list
of TensorFlow Tensors holding the (randomly initialized) weights of the
model.
- learning_rate: Python float giving the learning rate to use for SGD.
"""
# First clear the default graph
tf.reset_default_graph()
is_training = tf.placeholder(tf.bool, name='is_training')
# Set up the computational graph for performing forward and backward passes,
# and weight updates.
with tf.device(device):
# Set up placeholders for the data and labels
x = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int32, [None])
params = init_fn() # Initialize the model parameters
scores = model_fn(x, params) # Forward pass of the model
loss = training_step(scores, y, params, learning_rate)
# Now we actually run the graph many times using the training data
with tf.Session() as sess:
# Initialize variables that will live in the graph
sess.run(tf.global_variables_initializer())
for t, (x_np, y_np) in enumerate(train_dset):
# Run the graph on a batch of training data; recall that asking
# TensorFlow to evaluate loss will cause an SGD step to happen.
feed_dict = {x: x_np, y: y_np}
loss_np = sess.run(loss, feed_dict=feed_dict)
# Periodically print the loss and check accuracy on the val set
if t % print_every == 0:
print('Iteration %d, loss = %.4f' % (t, loss_np))
check_accuracy(sess, val_dset, x, scores, is_training)
###Output
_____no_output_____
###Markdown
Barebones TensorFlow: Check AccuracyWhen training the model we will use the following function to check the accuracy of our model on the training or validation sets. Note that this function accepts a TensorFlow Session object as one of its arguments; this is needed since the function must actually run the computational graph many times on the data that it loads from the dataset `dset`.Also note that we reuse the same computational graph both for taking training steps and for evaluating the model; however since the `check_accuracy` function never evaluates the `loss` value in the computational graph, the part of the graph that updates the weights does not execute on the validation data.
###Code
def check_accuracy(sess, dset, x, scores, is_training=None):
"""
Check accuracy on a classification model.
Inputs:
- sess: A TensorFlow Session that will be used to run the graph
- dset: A Dataset object on which to check accuracy
- x: A TensorFlow placeholder Tensor where input images should be fed
- scores: A TensorFlow Tensor representing the scores output from the
model; this is the Tensor we will ask TensorFlow to evaluate.
- is_training: Optional placeholder that is fed False (0) during evaluation.
Returns: The accuracy of the model (it is also printed)
"""
num_correct, num_samples = 0, 0
for x_batch, y_batch in dset:
feed_dict = {x: x_batch, is_training: 0}
scores_np = sess.run(scores, feed_dict=feed_dict)
y_pred = scores_np.argmax(axis=1)
num_samples += x_batch.shape[0]
num_correct += (y_pred == y_batch).sum()
acc = float(num_correct) / num_samples
print('Got %d / %d correct (%.2f%%)' % (num_correct, num_samples, 100 * acc))
return acc
###Output
_____no_output_____
###Markdown
Barebones TensorFlow: InitializationWe'll use the following utility method to initialize the weight matrices for our models using Kaiming's normalization method.[1] He et al, *Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification*, ICCV 2015, https://arxiv.org/abs/1502.01852
###Code
def kaiming_normal(shape):
if len(shape) == 2:
fan_in, fan_out = shape[0], shape[1]
elif len(shape) == 4:
fan_in, fan_out = np.prod(shape[:3]), shape[3]
return tf.random_normal(shape) * np.sqrt(2.0 / fan_in)
###Output
_____no_output_____
###Markdown
Barebones TensorFlow: Train a Two-Layer NetworkWe are finally ready to use all of the pieces defined above to train a two-layer fully-connected network on CIFAR-10.We just need to define a function to initialize the weights of the model, and call `train_part2`.Defining the weights of the network introduces another important piece of TensorFlow API: `tf.Variable`. A TensorFlow Variable is a Tensor whose value is stored in the graph and persists across runs of the computational graph; however unlike constants defined with `tf.zeros` or `tf.random_normal`, the values of a Variable can be mutated as the graph runs; these mutations will persist across graph runs. Learnable parameters of the network are usually stored in Variables.You don't need to tune any hyperparameters, but you should achieve accuracies above 40% after one epoch of training.
###Code
def two_layer_fc_init():
"""
Initialize the weights of a two-layer network, for use with the
two_layer_network function defined above.
Inputs: None
Returns: A list of:
- w1: TensorFlow Variable giving the weights for the first layer
- w2: TensorFlow Variable giving the weights for the second layer
"""
hidden_layer_size = 4000
w1 = tf.Variable(kaiming_normal((3 * 32 * 32, hidden_layer_size)))
w2 = tf.Variable(kaiming_normal((hidden_layer_size, 10)))
return [w1, w2]
learning_rate = 1e-2
train_part2(two_layer_fc, two_layer_fc_init, learning_rate)
###Output
Iteration 0, loss = 3.3969
Got 129 / 1000 correct (12.90%)
Iteration 100, loss = 1.7710
Got 384 / 1000 correct (38.40%)
Iteration 200, loss = 1.5113
Got 393 / 1000 correct (39.30%)
Iteration 300, loss = 1.9097
Got 376 / 1000 correct (37.60%)
Iteration 400, loss = 1.7774
Got 411 / 1000 correct (41.10%)
Iteration 500, loss = 1.8494
Got 431 / 1000 correct (43.10%)
Iteration 600, loss = 1.9620
Got 418 / 1000 correct (41.80%)
Iteration 700, loss = 1.9547
Got 450 / 1000 correct (45.00%)
###Markdown
Barebones TensorFlow: Train a three-layer ConvNetWe will now use TensorFlow to train a three-layer ConvNet on CIFAR-10.You need to implement the `three_layer_convnet_init` function. Recall that the architecture of the network is:1. Convolutional layer (with bias) with 32 5x5 filters, with zero-padding 22. ReLU3. Convolutional layer (with bias) with 16 3x3 filters, with zero-padding 14. ReLU5. Fully-connected layer (with bias) to compute scores for 10 classesYou don't need to do any hyperparameter tuning, but you should see accuracies above 43% after one epoch of training.
###Code
def three_layer_convnet_init():
"""
Initialize the weights of a Three-Layer ConvNet, for use with the
three_layer_convnet function defined above.
Inputs: None
Returns a list containing:
- conv_w1: TensorFlow Variable giving weights for the first conv layer
- conv_b1: TensorFlow Variable giving biases for the first conv layer
- conv_w2: TensorFlow Variable giving weights for the second conv layer
- conv_b2: TensorFlow Variable giving biases for the second conv layer
- fc_w: TensorFlow Variable giving weights for the fully-connected layer
- fc_b: TensorFlow Variable giving biases for the fully-connected layer
"""
params = []
############################################################################
# TODO: Initialize the parameters of the three-layer network. #
############################################################################
conv_w1 = tf.Variable(tf.random_normal((5, 5, 3, 32)))
conv_b1 = tf.Variable(tf.random_normal((32,)))
conv_w2 = tf.Variable(tf.random_normal((3, 3, 32, 16)))
conv_b2 = tf.Variable(tf.random_normal((16,)))
fc_w = tf.Variable(tf.random_normal((32 * 32 * 16, 10)))
fc_b = tf.Variable(tf.random_normal((10,)))
params = [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b]
############################################################################
# END OF YOUR CODE #
############################################################################
return params
learning_rate = 3e-3
train_part2(three_layer_convnet, three_layer_convnet_init, learning_rate)
###Output
Iteration 0, loss = 12597.2656
Got 120 / 1000 correct (12.00%)
Iteration 100, loss = 10.9905
Got 108 / 1000 correct (10.80%)
Iteration 200, loss = 4.0134
Got 107 / 1000 correct (10.70%)
Iteration 300, loss = 3.0172
Got 103 / 1000 correct (10.30%)
Iteration 400, loss = 3.2325
Got 105 / 1000 correct (10.50%)
Iteration 500, loss = 3.1852
Got 104 / 1000 correct (10.40%)
Iteration 600, loss = 2.4241
Got 105 / 1000 correct (10.50%)
Iteration 700, loss = 3.0222
Got 106 / 1000 correct (10.60%)
###Markdown
Part III: Keras Model APIImplementing a neural network using the low-level TensorFlow API is a good way to understand how TensorFlow works, but it's a little inconvenient - we had to manually keep track of all Tensors holding learnable parameters, and we had to use a control dependency to implement the gradient descent update step. This was fine for a small network, but could quickly become unwieldy for a large complex model.Fortunately TensorFlow provides higher-level packages such as `tf.keras` and `tf.layers` which make it easy to build models out of modular, object-oriented layers; `tf.train` allows you to easily train these models using a variety of different optimization algorithms.In this part of the notebook we will define neural network models using the `tf.keras.Model` API. To implement your own model, you need to do the following:1. Define a new class which subclasses `tf.keras.Model`. Give your class an intuitive name that describes it, like `TwoLayerFC` or `ThreeLayerConvNet`.2. In the initializer `__init__()` for your new class, define all the layers you need as class attributes. The `tf.layers` package provides many common neural-network layers, like `tf.layers.Dense` for fully-connected layers and `tf.layers.Conv2D` for convolutional layers. Under the hood, these layers will construct `Variable` Tensors for any learnable parameters. **Warning**: Don't forget to call `super().__init__()` as the first line in your initializer!3. Implement the `call()` method for your class; this implements the forward pass of your model, and defines the *connectivity* of your network. Layers defined in `__init__()` implement `__call__()` so they can be used as function objects that transform input Tensors into output Tensors. Don't define any new layers in `call()`; any layers you want to use in the forward pass should be defined in `__init__()`.After you define your `tf.keras.Model` subclass, you can instantiate it and use it like the model functions from Part II. Module API: Two-Layer NetworkHere is a concrete example of using the `tf.keras.Model` API to define a two-layer network. There are a few new bits of API to be aware of here:We use an `Initializer` object to set up the initial values of the learnable parameters of the layers; in particular `tf.variance_scaling_initializer` gives behavior similar to the Kaiming initialization method we used in Part II. You can read more about it here: https://www.tensorflow.org/api_docs/python/tf/variance_scaling_initializerWe construct `tf.layers.Dense` objects to represent the two fully-connected layers of the model. In addition to multiplying their input by a weight matrix and adding a bias vector, these layers can also apply a nonlinearity for you. For the first layer we specify a ReLU activation function by passing `activation=tf.nn.relu` to the constructor; the second layer does not apply any activation function.Unfortunately the `flatten` function we defined in Part II is not compatible with the `tf.keras.Model` API; fortunately we can use `tf.layers.flatten` to perform the same operation. The issue with our `flatten` function from Part II has to do with static vs dynamic shapes for Tensors, which is beyond the scope of this notebook; you can read more about the distinction [in the documentation](https://www.tensorflow.org/programmers_guide/faqtensor_shapes).
###Code
class TwoLayerFC(tf.keras.Model):
def __init__(self, hidden_size, num_classes):
super().__init__()
initializer = tf.variance_scaling_initializer(scale=2.0)
self.fc1 = tf.layers.Dense(hidden_size, activation=tf.nn.relu,
kernel_initializer=initializer)
self.fc2 = tf.layers.Dense(num_classes,
kernel_initializer=initializer)
def call(self, x, training=None):
x = tf.layers.flatten(x)
x = self.fc1(x)
x = self.fc2(x)
return x
def test_TwoLayerFC():
""" A small unit test to exercise the TwoLayerFC model above. """
tf.reset_default_graph()
input_size, hidden_size, num_classes = 50, 42, 10
# As usual in TensorFlow, we first need to define our computational graph.
# To this end we first construct a TwoLayerFC object, then use it to construct
# the scores Tensor.
model = TwoLayerFC(hidden_size, num_classes)
with tf.device(device):
x = tf.zeros((64, input_size))
scores = model(x)
# Now that our computational graph has been defined we can run the graph
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
scores_np = sess.run(scores)
print(scores_np.shape)
test_TwoLayerFC()
###Output
(64, 10)
###Markdown
Functional API: Two-Layer NetworkThe `tf.layers` package provides two different higher-level APIs for defining neural network models. In the example above we used the **object-oriented API**, where each layer of the neural network is represented as a Python object (like `tf.layers.Dense`). Here we showcase the **functional API**, where each layer is a Python function (like `tf.layers.dense`) which takes TensorFlow Tensors as inputs and returns TensorFlow Tensors as outputs, and which internally sets up Tensors in the computational graph to hold any learnable weights.To construct a network, one needs to pass the input tensor to the first layer, and construct the subsequent layers sequentially. Here's an example of how to construct the same two-layer network with the functional API.
###Code
def two_layer_fc_functional(inputs, hidden_size, num_classes):
initializer = tf.variance_scaling_initializer(scale=2.0)
flattened_inputs = tf.layers.flatten(inputs)
fc1_output = tf.layers.dense(flattened_inputs, hidden_size, activation=tf.nn.relu,
kernel_initializer=initializer)
scores = tf.layers.dense(fc1_output, num_classes,
kernel_initializer=initializer)
return scores
def test_two_layer_fc_functional():
""" A small unit test to exercise the TwoLayerFC model above. """
tf.reset_default_graph()
input_size, hidden_size, num_classes = 50, 42, 10
# As usual in TensorFlow, we first need to define our computational graph.
# To this end we first construct a two layer network graph by calling the
# two_layer_network() function. This function constructs the computation
# graph and outputs the score tensor.
with tf.device(device):
x = tf.zeros((64, input_size))
scores = two_layer_fc_functional(x, hidden_size, num_classes)
# Now that our computational graph has been defined we can run the graph
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
scores_np = sess.run(scores)
print(scores_np.shape)
test_two_layer_fc_functional()
###Output
(64, 10)
###Markdown
Keras Model API: Three-Layer ConvNetNow it's your turn to implement a three-layer ConvNet using the `tf.keras.Model` API. Your model should have the same architecture used in Part II:1. Convolutional layer with 5 x 5 kernels, with zero-padding of 22. ReLU nonlinearity3. Convolutional layer with 3 x 3 kernels, with zero-padding of 14. ReLU nonlinearity5. Fully-connected layer to give class scoresYou should initialize the weights of your network using the same initialization method as was used in the two-layer network above.**Hint**: Refer to the documentation for `tf.layers.Conv2D` and `tf.layers.Dense`:https://www.tensorflow.org/api_docs/python/tf/layers/Conv2Dhttps://www.tensorflow.org/api_docs/python/tf/layers/Dense
###Code
class ThreeLayerConvNet(tf.keras.Model):
def __init__(self, channel_1, channel_2, num_classes):
super().__init__()
########################################################################
# TODO: Implement the __init__ method for a three-layer ConvNet. You #
# should instantiate layer objects to be used in the forward pass. #
########################################################################
initializer = tf.variance_scaling_initializer(scale=2.0)
self.conv1 = tf.layers.Conv2D(filters=channel_1, kernel_size=5,
padding='same', activation='relu',
kernel_initializer=initializer)
self.conv2 = tf.layers.Conv2D(filters=channel_2, kernel_size=3,
padding='same', activation='relu',
kernel_initializer=initializer)
self.fc1 = tf.layers.Dense(num_classes, kernel_initializer=initializer)
########################################################################
# END OF YOUR CODE #
########################################################################
def call(self, x, training=None):
scores = None
########################################################################
# TODO: Implement the forward pass for a three-layer ConvNet. You #
# should use the layer objects defined in the __init__ method. #
########################################################################
x = self.conv1(x)
x = self.conv2(x)
x = tf.layers.flatten(x)
scores = self.fc1(x)
########################################################################
# END OF YOUR CODE #
########################################################################
return scores
###Output
_____no_output_____
###Markdown
Once you complete the implementation of the `ThreeLayerConvNet` above you can run the following to ensure that your implementation does not crash and produces outputs of the expected shape.
###Code
def test_ThreeLayerConvNet():
tf.reset_default_graph()
channel_1, channel_2, num_classes = 12, 8, 10
model = ThreeLayerConvNet(channel_1, channel_2, num_classes)
with tf.device(device):
x = tf.zeros((64, 3, 32, 32))
scores = model(x)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
scores_np = sess.run(scores)
print(scores_np.shape)
test_ThreeLayerConvNet()
###Output
(64, 10)
###Markdown
Keras Model API: Training LoopWe need to implement a slightly different training loop when using the `tf.keras.Model` API. Instead of computing gradients and updating the weights of the model manually, we use an `Optimizer` object from the `tf.train` package which takes care of these details for us. You can read more about `Optimizer`s here: https://www.tensorflow.org/api_docs/python/tf/train/Optimizer
###Code
def train_part34(model_init_fn, optimizer_init_fn, num_epochs=1):
"""
Simple training loop for use with models defined using tf.keras. It trains
a model for one epoch on the CIFAR-10 training set and periodically checks
accuracy on the CIFAR-10 validation set.
Inputs:
- model_init_fn: A function that takes no parameters; when called it
constructs the model we want to train: model = model_init_fn()
- optimizer_init_fn: A function which takes no parameters; when called it
constructs the Optimizer object we will use to optimize the model:
optimizer = optimizer_init_fn()
- num_epochs: The number of epochs to train for
Returns: Nothing, but prints progress during training
"""
losses = []
acc = []
tf.reset_default_graph()
with tf.device(device):
# Construct the computational graph we will use to train the model. We
# use the model_init_fn to construct the model, declare placeholders for
# the data and labels
x = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int32, [None])
# We need a placeholder to explicitly specify if the model is in the training
# phase or not. This is because a number of layers behave differently in
# training and in testing, e.g., dropout and batch normalization.
# We pass this variable to the computation graph through feed_dict as shown below.
is_training = tf.placeholder(tf.bool, name='is_training')
# Use the model function to build the forward pass.
scores = model_init_fn(x, is_training)
# Compute the loss like we did in Part II
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=scores)
loss = tf.reduce_mean(loss)
losses.append(loss)
# Use the optimizer_fn to construct an Optimizer, then use the optimizer
# to set up the training step. Asking TensorFlow to evaluate the
# train_op returned by optimizer.minimize(loss) will cause us to make a
# single update step using the current minibatch of data.
# Note that we use tf.control_dependencies to force the model to run
# the tf.GraphKeys.UPDATE_OPS at each training step. tf.GraphKeys.UPDATE_OPS
# holds the operators that update the states of the network.
# For example, the tf.layers.batch_normalization function adds the running mean
# and variance update operators to tf.GraphKeys.UPDATE_OPS.
optimizer = optimizer_init_fn()
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
train_op = optimizer.minimize(loss)
# Now we can run the computational graph many times to train the model.
# When we call sess.run we ask it to evaluate train_op, which causes the
# model to update.
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
t = 0
for epoch in range(num_epochs):
print('Starting epoch %d' % epoch)
for x_np, y_np in train_dset:
feed_dict = {x: x_np, y: y_np, is_training:1}
loss_np, _ = sess.run([loss, train_op], feed_dict=feed_dict)
if t % print_every == 0:
print('Iteration %d, loss = %.4f' % (t, loss_np))
check_accuracy(sess, val_dset, x, scores, is_training=is_training)
print()
t += 1
###Output
_____no_output_____
###Markdown
Keras Model API: Train a Two-Layer NetworkWe can now use the tools defined above to train a two-layer network on CIFAR-10. We define the `model_init_fn` and `optimizer_init_fn` that construct the model and optimizer respectively when called. Here we want to train the model using stochastic gradient descent with no momentum, so we construct a `tf.train.GradientDescentOptimizer` function; you can [read about it here](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer).You don't need to tune any hyperparameters here, but you should achieve accuracies above 40% after one epoch of training.
###Code
hidden_size, num_classes = 4000, 10
learning_rate = 1e-2
def model_init_fn(inputs, is_training):
return TwoLayerFC(hidden_size, num_classes)(inputs)
def optimizer_init_fn():
return tf.train.GradientDescentOptimizer(learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
###Output
Starting epoch 0
Iteration 0, loss = 2.9667
Got 110 / 1000 correct (11.00%)
Iteration 100, loss = 1.8058
Got 376 / 1000 correct (37.60%)
Iteration 200, loss = 1.4414
Got 374 / 1000 correct (37.40%)
Iteration 300, loss = 1.9085
Got 363 / 1000 correct (36.30%)
Iteration 400, loss = 1.7841
Got 416 / 1000 correct (41.60%)
Iteration 500, loss = 1.8047
Got 430 / 1000 correct (43.00%)
Iteration 600, loss = 1.8641
Got 410 / 1000 correct (41.00%)
Iteration 700, loss = 1.8969
Got 439 / 1000 correct (43.90%)
###Markdown
Keras Model API: Train a Two-Layer Network (functional API)Similarly, we train the two-layer network constructed using the functional API.
###Code
hidden_size, num_classes = 4000, 10
learning_rate = 1e-2
def model_init_fn(inputs, is_training):
return two_layer_fc_functional(inputs, hidden_size, num_classes)
def optimizer_init_fn():
return tf.train.GradientDescentOptimizer(learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
###Output
Starting epoch 0
Iteration 0, loss = 3.3552
Got 95 / 1000 correct (9.50%)
Iteration 100, loss = 1.9409
Got 350 / 1000 correct (35.00%)
Iteration 200, loss = 1.5143
Got 389 / 1000 correct (38.90%)
Iteration 300, loss = 1.8507
Got 355 / 1000 correct (35.50%)
Iteration 400, loss = 1.7691
Got 424 / 1000 correct (42.40%)
Iteration 500, loss = 1.8280
Got 421 / 1000 correct (42.10%)
Iteration 600, loss = 1.8360
Got 412 / 1000 correct (41.20%)
Iteration 700, loss = 2.0037
Got 438 / 1000 correct (43.80%)
###Markdown
Keras Model API: Train a Three-Layer ConvNetHere you should use the tools we've defined above to train a three-layer ConvNet on CIFAR-10. Your ConvNet should use 32 filters in the first convolutional layer and 16 filters in the second layer.To train the model you should use gradient descent with Nesterov momentum 0.9. **HINT**: https://www.tensorflow.org/api_docs/python/tf/train/MomentumOptimizerYou don't need to perform any hyperparameter tuning, but you should achieve accuracies above 45% after training for one epoch.
###Code
learning_rate = 3e-3
channel_1, channel_2, num_classes = 32, 16, 10
def model_init_fn(inputs, is_training):
############################################################################
# TODO: Complete the implementation of model_init_fn. #
############################################################################
model = ThreeLayerConvNet(channel_1, channel_2, num_classes)
############################################################################
# END OF YOUR CODE #
############################################################################
return model(inputs)
def optimizer_init_fn():
############################################################################
# TODO: Complete the implementation of optimizer_init_fn. #
############################################################################
optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate, momentum=0.9, use_nesterov=True)
############################################################################
# END OF YOUR CODE #
############################################################################
return optimizer
train_part34(model_init_fn, optimizer_init_fn)
###Output
Starting epoch 0
Iteration 0, loss = 3.2916
Got 79 / 1000 correct (7.90%)
Iteration 100, loss = 1.9118
Got 384 / 1000 correct (38.40%)
Iteration 200, loss = 1.3646
Got 421 / 1000 correct (42.10%)
Iteration 300, loss = 1.5375
Got 469 / 1000 correct (46.90%)
Iteration 400, loss = 1.5252
Got 477 / 1000 correct (47.70%)
Iteration 500, loss = 1.5717
Got 509 / 1000 correct (50.90%)
Iteration 600, loss = 1.4823
Got 519 / 1000 correct (51.90%)
Iteration 700, loss = 1.3657
Got 522 / 1000 correct (52.20%)
###Markdown
Part IV: Keras Sequential APIIn Part III we introduced the `tf.keras.Model` API, which allows you to define models with any number of learnable layers and with arbitrary connectivity between layers.However for many models you don't need such flexibility - a lot of models can be expressed as a sequential stack of layers, with the output of each layer fed to the next layer as input. If your model fits this pattern, then there is an even easier way to define your model: using `tf.keras.Sequential`. You don't need to write any custom classes; you simply call the `tf.keras.Sequential` constructor with a list containing a sequence of layer objects.One complication with `tf.keras.Sequential` is that you must define the shape of the input to the model by passing a value to the `input_shape` of the first layer in your model. Keras Sequential API: Two-Layer NetworkHere we rewrite the two-layer fully-connected network using `tf.keras.Sequential`, and train it using the training loop defined above.You don't need to perform any hyperparameter tuning here, but you should see accuracies above 40% after training for one epoch.
###Code
learning_rate = 1e-2
def model_init_fn(inputs, is_training):
input_shape = (32, 32, 3)
hidden_layer_size, num_classes = 4000, 10
initializer = tf.variance_scaling_initializer(scale=2.0)
layers = [
tf.layers.Flatten(input_shape=input_shape),
tf.layers.Dense(hidden_layer_size, activation=tf.nn.relu,
kernel_initializer=initializer),
tf.layers.Dense(num_classes, kernel_initializer=initializer),
]
model = tf.keras.Sequential(layers)
return model(inputs)
def optimizer_init_fn():
return tf.train.GradientDescentOptimizer(learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
###Output
Starting epoch 0
Iteration 0, loss = 3.3224
Got 125 / 1000 correct (12.50%)
Iteration 100, loss = 1.7925
Got 370 / 1000 correct (37.00%)
Iteration 200, loss = 1.4363
Got 371 / 1000 correct (37.10%)
Iteration 300, loss = 1.7298
Got 377 / 1000 correct (37.70%)
Iteration 400, loss = 1.7340
Got 418 / 1000 correct (41.80%)
Iteration 500, loss = 1.7553
Got 433 / 1000 correct (43.30%)
Iteration 600, loss = 1.7571
Got 434 / 1000 correct (43.40%)
Iteration 700, loss = 1.9348
Got 423 / 1000 correct (42.30%)
###Markdown
Keras Sequential API: Three-Layer ConvNetHere you should use `tf.keras.Sequential` to reimplement the same three-layer ConvNet architecture used in Part II and Part III. As a reminder, your model should have the following architecture:1. Convolutional layer with 16 5x5 kernels, using zero padding of 22. ReLU nonlinearity3. Convolutional layer with 32 3x3 kernels, using zero padding of 14. ReLU nonlinearity5. Fully-connected layer giving class scoresYou should initialize the weights of the model using a `tf.variance_scaling_initializer` as above.You should train the model using Nesterov momentum 0.9.You don't need to perform any hyperparameter search, but you should achieve accuracy above 45% after training for one epoch.
###Code
def model_init_fn(inputs, is_training):
model = None
############################################################################
# TODO: Construct a three-layer ConvNet using tf.keras.Sequential. #
############################################################################
input_shape = (32, 32, 32)
channel_1, channel_2, classes = 16, 32, 10
initializer = tf.variance_scaling_initializer(scale=2.0)
layers = [
tf.keras.layers.Conv2D(channel_1, 5, padding='same', activation='relu', kernel_initializer=initializer),
tf.keras.layers.Conv2D(channel_2, 3, padding='same', activation='relu', kernel_initializer=initializer),
tf.keras.layers.Flatten(input_shape=input_shape),
tf.keras.layers.Dense(classes, kernel_initializer=initializer),
]
model = tf.keras.Sequential(layers)
############################################################################
# END OF YOUR CODE #
############################################################################
return model(inputs)
learning_rate = 5e-4
def optimizer_init_fn():
optimizer = None
############################################################################
# TODO: Complete the implementation of optimizer_init_fn. #
############################################################################
optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate, momentum=0.9, use_nesterov=True)
############################################################################
# END OF YOUR CODE #
############################################################################
return optimizer
train_part34(model_init_fn, optimizer_init_fn)
###Output
Starting epoch 0
Iteration 0, loss = 3.3567
Got 103 / 1000 correct (10.30%)
Iteration 100, loss = 1.6687
Got 409 / 1000 correct (40.90%)
Iteration 200, loss = 1.4292
Got 462 / 1000 correct (46.20%)
Iteration 300, loss = 1.4880
Got 477 / 1000 correct (47.70%)
Iteration 400, loss = 1.4397
Got 493 / 1000 correct (49.30%)
Iteration 500, loss = 1.5528
Got 503 / 1000 correct (50.30%)
Iteration 600, loss = 1.5598
Got 511 / 1000 correct (51.10%)
Iteration 700, loss = 1.5416
Got 528 / 1000 correct (52.80%)
###Markdown
Part V: CIFAR-10 TrainingIn this section you can experiment with whatever ConvNet architecture you'd like on CIFAR-10.You should experiment with architectures, hyperparameters, loss functions, regularization, or anything else you can think of to train a model that achieves **at least 70%** accuracy on the **validation** set within 10 epochs. You can use the `check_accuracy` and `train` functions from above, or you can implement your own training loop.Describe what you did at the end of the notebook. Some things you can try:- **Filter size**: Above we used 5x5 and 3x3; is this optimal?- **Number of filters**: Above we used 16 and 32 filters. Would more or fewer do better?- **Pooling**: We didn't use any pooling above. Would this improve the model?- **Normalization**: Would your model be improved with batch normalization, layer normalization, group normalization, or some other normalization strategy?- **Network architecture**: The ConvNet above has only three layers of trainable parameters. Would a deeper model do better?- **Global average pooling**: Instead of flattening after the final convolutional layer, would global average pooling do better? This strategy is used for example in Google's Inception network and in Residual Networks.- **Regularization**: Would some kind of regularization improve performance? Maybe weight decay or dropout? WARNING: Batch Normalization / DropoutBatch Normalization and Dropout **WILL NOT WORK CORRECTLY** if you use the `train_part34()` function with the object-oriented `tf.keras.Model` or `tf.keras.Sequential` APIs; if you want to use these layers with this training loop then you **must use the tf.layers functional API**.We wrote `train_part34()` to explicitly demonstrate how TensorFlow works; however there are some subtleties that make it tough to handle the object-oriented batch normalization layer in a simple training loop. In practice both `tf.keras` and `tf` provide higher-level APIs which handle the training loop for you, such as [keras.fit](https://keras.io/models/sequential/) and [tf.Estimator](https://www.tensorflow.org/programmers_guide/estimators), both of which will properly handle batch normalization when using the object-oriented API. Tips for trainingFor each network architecture that you try, you should tune the learning rate and other hyperparameters. When doing this there are a couple important things to keep in mind:- If the parameters are working well, you should see improvement within a few hundred iterations- Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.- Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.- You should use the validation set for hyperparameter search, and save your test set for evaluating your architecture on the best parameters as selected by the validation set. Going above and beyondIf you are feeling adventurous there are many other features you can implement to try and improve your performance. 
You are **not required** to implement any of these, but don't miss the fun if you have time!- Alternative optimizers: you can try Adam, Adagrad, RMSprop, etc.- Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.- Model ensembles- Data augmentation- New Architectures - [ResNets](https://arxiv.org/abs/1512.03385) where the input from the previous layer is added to the output. - [DenseNets](https://arxiv.org/abs/1608.06993) where inputs into previous layers are concatenated together. - [This blog has an in-depth overview](https://chatbotslife.com/resnets-highwaynets-and-densenets-oh-my-9bb15918ee32) Have fun and happy training!
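To make the batch normalization warning above concrete, here is a minimal sketch (assuming the TF 1.x `tf.layers` functional API; the helper name `bn_model_init_fn` and the layer sizes are illustrative, not part of the assignment) of how batch normalization can be wired up so that the `is_training` placeholder created inside `train_part34()` actually controls it:

```python
def bn_model_init_fn(inputs, is_training):
    # Illustrative only: a single conv + batch-norm block, not a full CIFAR-10 model.
    initializer = tf.variance_scaling_initializer(scale=2.0)
    x = tf.layers.conv2d(inputs, filters=32, kernel_size=3, padding='same',
                         kernel_initializer=initializer)
    # Passing training=is_training (together with the tf.GraphKeys.UPDATE_OPS
    # control dependency already present in train_part34) keeps the running
    # mean/variance updates correct during training and freezes them at test time.
    x = tf.layers.batch_normalization(x, training=is_training)
    x = tf.nn.relu(x)
    x = tf.layers.flatten(x)
    return tf.layers.dense(x, 10, kernel_initializer=initializer)
```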
###Code
def model_init_fn(inputs, is_training):
############################################################################
# TODO: Construct a model that performs well on CIFAR-10 #
############################################################################
input_shape = (4, 4, 64)  # feature-map shape after the three 2x2 max-pools below
channel_1, channel_2 = 6, 16
fc1, fc2, classes = 120, 84, 10
initializer = tf.variance_scaling_initializer(scale=2.0)
layers = [
tf.keras.layers.Conv2D(32, 5, padding='same', activation='relu', kernel_initializer=initializer),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.MaxPool2D(strides=(2,2)),
tf.keras.layers.Conv2D(32, 5, padding='same', activation='relu', kernel_initializer=initializer),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.MaxPool2D(strides=(2,2)),
tf.keras.layers.Conv2D(64, 5, padding='same', activation='relu', kernel_initializer=initializer),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.MaxPool2D(strides=(2,2)),
tf.keras.layers.Flatten(input_shape=input_shape),
tf.keras.layers.Dense(fc1, kernel_initializer=initializer),
tf.keras.layers.Dense(fc2, kernel_initializer=initializer),
tf.keras.layers.Dense(classes, kernel_initializer=initializer),
]
net = tf.keras.Sequential(layers)
############################################################################
# END OF YOUR CODE #
############################################################################
return net(inputs)
def optimizer_init_fn():
############################################################################
# TODO: Construct an optimizer that performs well on CIFAR-10 #
############################################################################
optimizer = tf.train.AdamOptimizer()
############################################################################
# END OF YOUR CODE #
############################################################################
return optimizer
device = '/cpu:0'
print_every = 700
num_epochs = 10
train_part34(model_init_fn, optimizer_init_fn, num_epochs)
###Output
Starting epoch 0
Iteration 0, loss = 7.2850
Got 98 / 1000 correct (9.80%)
Iteration 700, loss = 1.3082
Got 592 / 1000 correct (59.20%)
Starting epoch 1
Iteration 1400, loss = 1.0440
Got 647 / 1000 correct (64.70%)
Starting epoch 2
Iteration 2100, loss = 0.7923
Got 656 / 1000 correct (65.60%)
Starting epoch 3
Iteration 2800, loss = 0.7238
Got 670 / 1000 correct (67.00%)
Starting epoch 4
Iteration 3500, loss = 0.7241
Got 685 / 1000 correct (68.50%)
Starting epoch 5
Iteration 4200, loss = 0.4730
Got 687 / 1000 correct (68.70%)
Starting epoch 6
Iteration 4900, loss = 0.5601
Got 673 / 1000 correct (67.30%)
Starting epoch 7
Iteration 5600, loss = 0.4840
Got 665 / 1000 correct (66.50%)
Starting epoch 8
Iteration 6300, loss = 0.5560
Got 677 / 1000 correct (67.70%)
Starting epoch 9
Iteration 7000, loss = 0.3689
Got 676 / 1000 correct (67.60%)
|
basics/8 Save and Load Data.ipynb | ###Markdown
Save/Load Model StructuresSaving and loading the entire model object this way serializes it with pickle.
###Code
torch.save(model, model_path)
model = torch.load(model_path)
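# Alternative pattern (sketch): persist only the parameters via a state_dict and
# re-instantiate the model class yourself before loading; this avoids pickling
# the whole model object:
#   torch.save(model.state_dict(), model_path)
#   model.load_state_dict(torch.load(model_path))
#   model.eval()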
###Output
_____no_output_____
###Markdown
ONNXExporting to ONNX allows the model to run on many different platforms and runtimes. An example input tensor is needed for the export because the exporter traces the model with it.
###Code
input_image = torch.ones((1, 3, 224, 224))
onnx.export(model, input_image, "model.onnx")
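# To run the exported model, one option is onnxruntime (sketch, assuming the
# onnxruntime package is installed):
#   import onnxruntime as ort
#   sess = ort.InferenceSession("model.onnx")
#   input_name = sess.get_inputs()[0].name
#   outputs = sess.run(None, {input_name: input_image.numpy()})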
###Output
_____no_output_____ |
C06_metodos_de_clase_y_estaticos.ipynb | ###Markdown
Class Methods and Static Methods. There are cases in which it is necessary to extend or restrict the scope of a method. Class methods and static methods are examples of this kind of case. Class methods.A class method can modify the state of a class by accessing that class's attributes, even when the method is invoked from an object. Instead of being defined with _self_ as the first parameter, _cls_ is used. Class methods are defined with the following syntax:```@classmethod def method_name(cls, parameters): ...```**Example:**
###Code
class PoblacionCensada():
'''Class capable of keeping track of the number of inhabitants across all of its instances.'''
poblacion = 0
'''Creates population censuses.'''
@classmethod
def opera_poblacion(cls, operador, cantidad):
'''Class method that keeps a running total of the population across all instances of the class.'''
cls.poblacion = eval(str(cls.poblacion) + operador + str(cantidad))
@classmethod
def despliega_total(cls):
'''Class method that displays the class attribute cls.poblacion.'''
return cls.poblacion
def __init__(self, nombre, numero=0):
print("Se ha creado la población {} con {} habitantes.".format(nombre, numero))
self.nombre = nombre
self.poblacion = numero
self.opera_poblacion('+', self.poblacion)
def __del__(self):
self.opera_poblacion('-', self.poblacion)
edomex = [PoblacionCensada("Tlalnepantla", 600000), PoblacionCensada("Toluca", 1000000),
PoblacionCensada("Valle de Chalco", 750000), PoblacionCensada('Valle de Bravo', 100000)]
edomex[0].despliega_total()
PoblacionCensada.poblacion
edomex[0].poblacion
del edomex[1]
edomex[0].despliega_total()
for entidad in edomex:
print(entidad.nombre, entidad.poblacion)
PoblacionCensada.poblacion
###Output
_____no_output_____
###Markdown
Static methods.Static methods are restricted in scope, so they do not have access to the object's attributes. They are defined exactly like a regular function, with no need for the initial _self_ parameter.Syntax:```@staticmethod def method_name(parameters): ...``` **Example:**
###Code
class Servidor:
'''Class that emulates a very basic server.'''
usuarios_activos = set(())
def __init__(self, dominio, lista):
self.lista_usuarios = lista
self.dominio = dominio
def conexion(self, usuario):
'''Connects a valid user to the server.'''
if usuario in self.lista_usuarios:
self.usuarios_activos.add(usuario)
else:
return False
@staticmethod
def ping(ip):
'''Returns a ping to the originating IP.'''
return (ip, "ping")
server = Servidor("demo.pythonista.mx", ["josech", "juan", "mglez", "jklx"])
server.ping("182.168.100.1")
server.lista_usuarios
server.conexion('juan')
server.usuarios_activos
###Output
_____no_output_____ |
notebooks/Evolving_Clustering_Experiment_1_Static_Dataset_1.ipynb | ###Markdown
Mount Results Directory
###Code
path = "/content/gdrive/My Drive/Evolving_Results/"
from google.colab import drive
drive.mount("/content/gdrive")
###Output
_____no_output_____
###Markdown
Install Libraries
###Code
#@title
!apt-get update
!apt-get install r-base
!pip install rpy2
!apt-get install libmagick++-dev
#!apt-get install r-cran-rjava
import os #importing os to set environment variable
def install_java():
!apt-get install -y openjdk-8-jdk-headless -qq > /dev/null #install openjdk
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" #set environment variable
os.environ["LD_LIBRARY_PATH"] = "/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server"
!java -version #check java version
install_java()
!R CMD javareconf
#!apt-get install r-cran-rjava
#!apt-get install libgdal-dev libproj-dev
!R -e 'install.packages(c("magick", "animation", "stream", "rJava", "streamMOA"))'
###Output
_____no_output_____
###Markdown
Install R Packages
###Code
# enables the %%R magic, not necessary if you've already done this
%load_ext rpy2.ipython
%%R
dyn.load("/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server/libjvm.so")
library("stream")
library("streamMOA")
###Output
_____no_output_____
###Markdown
Read Data Stream
###Code
%%R
experiment <- function(){
df <- read.csv("https://query.data.world/s/faarcpz24ukr2g53druxljp4nghhzn", header=TRUE);
nsamples <- nrow(df)
df <- df[sample(nsamples),]
stream <- DSD_Memory(df[,c("x", "y")], class=df[,"class"], k=max(df[,"class"]))
return (get_points(stream, n = nsamples, class = TRUE))
}
###Output
_____no_output_____
###Markdown
Run Benchmark Models Benchmark methods:* DenStream* CluStream* Stream KM++ Benchmark metrics:* cRand
###Code
# Experiment parameters
nclusters = 9
nsamples = 3000
train_size = 1800
window_size = 100
metric = "cRand"
trials = 30
%%R -i metric -i window_size -i trials -i path -i nclusters
alg_names <- c("DenStream", "Clustream", "StreamKM")
trials_df <- data.frame(matrix(ncol = length(alg_names), nrow = 0))
colnames(trials_df) <- alg_names
for (i in 1:(trials)){
algorithms <- list("DenStream" = DSC_DenStream(epsilon=0.01, mu=4, beta=0.4),
"Clustream" = DSC_CluStream(m = 10, horizon = 10, t = 1, k=NULL),
"StreamKM" = DSC_StreamKM(sizeCoreset = 100, numClusters = nclusters)
)
writeLines(sprintf("Trial: %d", i))
evaluation <- sapply(algorithms, FUN = function(alg) {
df <- read.csv("https://query.data.world/s/faarcpz24ukr2g53druxljp4nghhzn", header=TRUE);
nsamples <- nrow(df)
df <- df[sample(nsamples),]
stream <- DSD_Memory(df[,c("x", "y")], class=df[,"class"], k=max(df[,"class"]))
update(alg, stream, n=nsamples)
reset_stream(stream)
evaluate(alg, stream, measure = metric, n = nsamples, type = "macro", assign = "macro")
})
trials_df[nrow(trials_df) + 1,] = as.data.frame(evaluation)[,'evaluation']
}
write.csv(trials_df, paste(path,"results_DS3_benchmark.csv"))
###Output
_____no_output_____
###Markdown
Run Evolving Clustering* Convert to X, y format* Run prequential routine* Plot results
###Code
!pip install -U git+https://github.com/cseveriano/evolving_clustering
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from evolving import EvolvingClustering, load_dataset, Metrics, Benchmarks, util
from sklearn.metrics import adjusted_rand_score
import time
import rpy2.robjects as robjects
from rpy2.robjects import pandas2ri
pandas2ri.activate()
r = robjects.r
evol_trials_df = pd.DataFrame(columns=["microTEDAclus"])
for i in np.arange(trials):
named_tuple = time.localtime() # get struct_time
time_string = time.strftime("%m/%d/%Y, %H:%M:%S", named_tuple)
print("Trial: ",i ," at ",time_string)
stream_df = pandas2ri.ri2py_dataframe(r.experiment())
X = stream_df[['x', 'y']].values
y = stream_df['class'].values
evol_model = EvolvingClustering.EvolvingClustering(variance_limit=0.0006, debug=False)
evol_model.fit(X)
y_hat = evol_model.predict(X)
error = adjusted_rand_score(y, y_hat)
evol_trials_df = evol_trials_df.append({'microTEDAclus': error}, ignore_index=True)
evol_trials_df.to_csv(path+'results_DS3_evolving.csv', index=False)
###Output
_____no_output_____ |
notebooks/test_gpt2_ml.ipynb | ###Markdown
Test GPT2-MLReference: [You can now play with Chinese GPT2 in Keras (GPT2_ML)](https://kexue.fm/archives/7292/comment-page-1)
###Code
#! -*- coding: utf-8 -*-
# Basic test: Chinese GPT2 model
# Introduction: https://kexue.fm/archives/7292
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import numpy as np
from bert4keras.models import build_transformer_model
from bert4keras.tokenizers import Tokenizer
from bert4keras.snippets import AutoRegressiveDecoder
from bert4keras.snippets import uniout
config_path = 'config.json'
checkpoint_path = '../pretrained/model.ckpt-100000.data-00000-of-00001'
dict_path = 'vocab.txt'
tokenizer = Tokenizer(dict_path,
token_start=None,
token_end=None,
do_lower_case=True) # build the tokenizer
model = build_transformer_model(config_path=config_path,
checkpoint_path=checkpoint_path,
model='gpt2_ml') # build the model and load the weights
class ArticleCompletion(AutoRegressiveDecoder):
"""基于随机采样的文章续写
"""
@AutoRegressiveDecoder.set_rtype('probas')
def predict(self, inputs, output_ids, step):
token_ids = np.concatenate([inputs[0], output_ids], 1)
return model.predict(token_ids)[:, -1]
def generate(self, text, n=1, topk=5):
token_ids, _ = tokenizer.encode(text)
results = self.random_sample([token_ids], n, topk) # random sampling
return [text + tokenizer.decode(ids) for ids in results]
article_completion = ArticleCompletion(start_id=None,
end_id=511, # 511 is the token id of the Chinese full stop (。)
maxlen=256,
minlen=128)
print(article_completion.generate(u'今天天气不错'))  # prompt: 'The weather is nice today'
###Output
WARNING: Logging before flag parsing goes to stderr.
W1221 17:51:25.821680 13032 deprecation.py:323] From C:\Users\tsyo\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\compat\v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
|
validate_map/1_NASSvsCSDLvsCDL.ipynb | ###Markdown
Figures 4 & 5. Validate CSDL against NASS county statistics Here we create Figures 4 & 5 from the paper, which summarize the agreement between CSDL and NASS statistics at the county level. CDL and NASS agreement is also shown for comparison.
###Code
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from matplotlib.lines import Line2D
from sklearn import linear_model
from sklearn.metrics import r2_score, mean_squared_error

# if needed, change these to the directories containing your data and the name of your data file
nass_dir = '../data/nass_counties/'
cdl_dir = '../data/cdl_counties/'
csdl_dir = '../data/csdl_counties/'
###Output
_____no_output_____
###Markdown
Step 1. Load Datasets 1.1. NASS county-level statistics First, we load the NASS county-level statistics from 1999-2018.
###Code
nass_corn = pd.read_csv(os.path.join(nass_dir, 'NASS_cropAreaCorn_1999to2018_raw.csv'))
nass_soy = pd.read_csv(os.path.join(nass_dir, 'NASS_cropAreaSoy_1999to2018_raw.csv'))
cols = ['state_fips_code', 'county_code', 'year', 'state_alpha',
'class_desc', 'short_desc','statisticcat_desc', 'commodity_desc',
'util_practice_desc', 'Value']
nass_soy = nass_soy[cols]
nass_corn = nass_corn[cols]
nass = pd.concat([nass_corn, nass_soy])
print(nass.shape)
nass.head()
# Add the unique county FIPS code: stateFIPS+countyFIPS
nass['county_code'] = nass['county_code'].map(int).apply(lambda x: '{0:0>3}'.format(x))
nass['state_fips_code'] = nass['state_fips_code'].apply(lambda x: '{0:0>2}'.format(x))
nass['fips'] = (nass['state_fips_code'].map(str)+nass['county_code'].map(str)).map(int)
nass['commodity_desc'] = nass['commodity_desc'].str.title()
nass = nass.rename(columns={"commodity_desc":"crop", "state_alpha":"state", "Value":"Nass_Area_acres"})
nass['Nass_Area_acres'] = nass['Nass_Area_acres'].str.replace(',', '').astype('float')
nass["Nass_Area_m2"] = nass["Nass_Area_acres"] * 4046.86
nass = nass[['fips', 'year', 'state', 'crop', 'Nass_Area_m2']]
nass.head()
###Output
_____no_output_____
###Markdown
1.2. Aggregated CDL at county level Next, we load CDL aggregated to the county level across the 13 states from 1999-2018. This data was exported from Google Earth Engine. A few steps of reorganizing the dataframe allow us to compare it to the NASS data.
###Code
cdl = pd.DataFrame()
for filename in sorted(os.listdir(cdl_dir)):
if (filename.endswith('.csv')):
temp = pd.read_csv(os.path.join(cdl_dir, filename)).drop(['.geo','system:index'],axis=1)
cdl = pd.concat([temp, cdl], sort=True)
cdl = cdl[cdl['masterid'] != 0] # drop dummy feature
print(cdl.shape)
cdl.head()
# compute CDL coverage by county
classes = list(set(cdl.columns.tolist()) - set(['Year', 'area_m2', 'masterid', 'COUNTYFP', 'STATEFP']))
other = list(set(classes) - set(['1', '5']))
cdl['cdl_coverage'] = cdl[classes].sum(axis=1)
cdl['Other'] = cdl[other].sum(axis=1)
cdl = cdl.drop(other, axis=1)
cdl.head()
maxcoverage = cdl[cdl['Year'] == 2018][['masterid', 'cdl_coverage']].rename(
{'cdl_coverage': '2018cdl_coverage'}, axis=1)
cdl = cdl.merge(maxcoverage, on='masterid')
cdl['CDL_perccov'] = cdl['cdl_coverage'] / cdl['2018cdl_coverage'] * 100
cdl_key = {'1': "Corn", '5': "Soybeans", 'Year': "year"}
cdl = cdl.rename(cdl_key, axis=1)
cdl.head()
crops = ["Corn", "Soybeans", "Other"]
cdl = pd.melt(cdl, id_vars=['masterid', 'year', 'COUNTYFP', 'STATEFP', 'CDL_perccov'], value_vars=crops, value_name='CDL_Area_m2')
cdl = cdl.rename(columns={"variable": "crop", "masterid": "fips"})
cdl.head()
###Output
_____no_output_____
###Markdown
1.3. Aggregated CSDL at county level Lastly, we load CSDL aggregated to the county level, which was also exported from GEE.
###Code
csdl = pd.DataFrame()
for filename in sorted(os.listdir(csdl_dir)):
if (filename.endswith('.csv')):
temp = pd.read_csv(os.path.join(csdl_dir, filename))
csdl = pd.concat([temp, csdl], sort=True)
csdl = csdl.dropna(subset=['masterid'])
csdl[['COUNTYFP', 'STATEFP', 'masterid', 'year']] = csdl[['COUNTYFP', 'STATEFP', 'masterid', 'year']].astype(int)
csdl = csdl[['0', '1', '5', 'COUNTYFP', 'STATEFP', 'masterid', 'year']]
print(csdl.shape)
csdl.head()
csdl = csdl.merge(maxcoverage, on='masterid')
csdl['CSDL_coverage'] = csdl[['0','1','5']].sum(axis=1)
csdl['CSDL_perccov'] = csdl['CSDL_coverage'] / csdl['2018cdl_coverage']*100
# note that CDL covers lakes, rivers; CSDL does not
csdl_key = {'0':"Other",'1': "Corn", '5': "Soybeans"}
csdl = csdl.rename(csdl_key,axis=1)
csdl.head()
crops = ["Corn", "Soybeans", "Other"]
csdl = pd.melt(csdl, id_vars=['masterid', 'year', 'COUNTYFP', 'STATEFP', 'CSDL_perccov'], value_vars=crops, value_name='CSDL_Area_m2')
csdl = csdl.rename(columns={"variable": "crop", "masterid": "fips"})
csdl.head()
###Output
_____no_output_____
###Markdown
1.4. Join CDL, CSDL and NASS
###Code
df = nass.merge(cdl, on=['year', 'fips', 'crop'], how='left').merge(
csdl, on=['year', 'fips', 'crop', 'COUNTYFP', 'STATEFP'], how='left')
# convert m2 to km2: 1e6
df['Nass_Area_km2'] = df['Nass_Area_m2'] / 1e6
df['CDL_Area_km2'] = df['CDL_Area_m2'] / 1e6
df['CSDL_Area_km2'] = df['CSDL_Area_m2'] / 1e6
print(df.shape)
df.head()
###Output
(39986, 14)
###Markdown
Step 2. Generate Figure 4: comparing NASS-CSDL
###Code
recent_years = np.arange(2008, 2019)
past_years = np.arange(1999, 2008)
df['year_group'] = '1999-2007'
df.loc[df['year'].isin(recent_years),'year_group'] = '2008-2018'
df_sub = df[df['CDL_perccov'] > 95]
print(df.shape)
print(df_sub.shape)
color_mapCSDL = { "Corn": 'orange', "Soybeans": 'green'}
color_mapCDL = { "Corn": 'brown', "Soybeans": 'darkseagreen'}
fsize = 15
lm = linear_model.LinearRegression()
bbox_props = dict(boxstyle="round", fc="w", ec="0.5", alpha=0.0)
fig, ax = plt.subplots(nrows=2, ncols=4,figsize=(16,8))
for i,(id,group) in enumerate(df_sub[df_sub['year_group']=='1999-2007'].groupby(['crop'])):
ax[i,0].scatter(group['CDL_Area_km2'],group['Nass_Area_km2'], color = color_mapCDL[id], alpha=0.3, label=id)
ax[i,1].scatter(group['CSDL_Area_km2'],group['Nass_Area_km2'], color = color_mapCSDL[id], alpha=0.3, label=id)
group1= group.dropna(subset=['CDL_Area_km2','Nass_Area_km2'])
R2 = r2_score(group1['CDL_Area_km2'].values,group1['Nass_Area_km2'].values)
ax[i,0].text(0.05, 0.9, '$R^2$={0:.3f}'.format(R2), ha="left", va="center", size=fsize, bbox=bbox_props, transform=ax[i,0].transAxes)
group2= group.dropna(subset=['CSDL_Area_km2','Nass_Area_km2'])
R2 = r2_score(group2['CSDL_Area_km2'].values,group2['Nass_Area_km2'].values)
ax[i,1].text(0.05, 0.9, '$R^2$={0:.3f}'.format(R2), ha="left", va="center", size=fsize, bbox=bbox_props, transform=ax[i,1].transAxes)
lims = [0,2.6e+9/1e6] # km2
# now plot both limits against each other
ax[i,0].plot(lims, lims, 'k--', alpha=0.75, zorder=0)
ax[i,1].plot(lims, lims, 'k--', alpha=0.75, zorder=0)
ax[i,0].set_xlim(lims)
ax[i,1].set_xlim(lims)
ax[i,0].set_ylim(lims)
ax[i,1].set_ylim(lims)
ax[i,0].set_aspect('equal')
ax[i,1].set_aspect('equal')
ax[0,0].set_xticklabels('', rotation=90)
ax[0,1].set_xticklabels('', rotation=90)
ax[i,1].set_yticklabels('', rotation=90)
for i,(id,group) in enumerate(df_sub[df_sub['year_group']=='2008-2018'].groupby(['crop'])):
ax[i,2].scatter(group['CDL_Area_km2'],group['Nass_Area_km2'], color = color_mapCDL[id], alpha=0.3, label=id)
ax[i,3].scatter(group['CSDL_Area_km2'],group['Nass_Area_km2'], color = color_mapCSDL[id], alpha=0.3, label=id)
# set background to gray and alpha=0.2
ax[i,2].set_facecolor((0.5019607843137255, 0.5019607843137255, 0.5019607843137255, 0.2))
ax[i,3].set_facecolor((0.5019607843137255, 0.5019607843137255, 0.5019607843137255, 0.2))
group1= group.dropna(subset=['CDL_Area_km2','Nass_Area_km2'])
R2 = r2_score(group1['CDL_Area_km2'].values, group1['Nass_Area_km2'].values)
ax[i,2].text(0.05, 0.9, '$R^2$={0:.3f}'.format(R2), ha="left", va="center", size=fsize, bbox=bbox_props, transform=ax[i,2].transAxes)
group2= group.dropna(subset=['CSDL_Area_km2','Nass_Area_km2'])
R2 = r2_score(group2['CSDL_Area_km2'].values, group2['Nass_Area_km2'].values)
ax[i,3].text(0.05, 0.9, '$R^2$={0:.3f}'.format(R2), ha="left", va="center", size=fsize, bbox=bbox_props, transform=ax[i,3].transAxes)
lims = [0,2.6e+9/1e6] # area in km2
# now plot both limits against each other
ax[i,2].plot(lims, lims, 'k--', alpha=0.75, zorder=0)
ax[i,3].plot(lims, lims, 'k--', alpha=0.75, zorder=0)
ax[i,2].set_xlim(lims)
ax[i,3].set_xlim(lims)
ax[i,2].set_ylim(lims)
ax[i,3].set_ylim(lims)
ax[i,2].set_aspect('equal')
ax[i,3].set_aspect('equal')
ax[0,2].set_xticklabels('', rotation=90)
ax[0,3].set_xticklabels('', rotation=90)
ax[i,2].set_yticklabels('', rotation=90)
ax[i,3].set_yticklabels('', rotation=90)
ax[0,0].set_ylabel('NASS county area [$km^2$]',fontsize=fsize)
ax[1,0].set_ylabel('NASS county area [$km^2$]',fontsize=fsize)
ax[1,0].set_xlabel('CDL county area [$km^2$]',fontsize=fsize)
ax[1,1].set_xlabel('CSDL county area [$km^2$]',fontsize=fsize)
ax[1,2].set_xlabel('CDL county area [$km^2$]',fontsize=fsize)
ax[1,3].set_xlabel('CSDL county area [$km^2$]',fontsize=fsize)
ax[0,0].set_title('1999-2007',fontsize=fsize)
ax[0,1].set_title('1999-2007',fontsize=fsize)
ax[0,2].set_title('2008-2018',fontsize=fsize)
ax[0,3].set_title('2008-2018',fontsize=fsize)
# Create the legend
legend_elements = [ Line2D([0], [0], color='k', linewidth=1, linestyle='--', label='1:1'),
Line2D([0], [0], marker='o', color='w', markerfacecolor='brown', markersize=15, label='Corn CDL'),
Line2D([0], [0], marker='o', color='w', markerfacecolor='orange', markersize=15, label='Corn CSDL'),
Line2D([0], [0], marker='o', color='w', markerfacecolor='darkseagreen', markersize=15, label='Soybeans CDL'),
Line2D([0], [0], marker='o', color='w', markerfacecolor='green', markersize=15, label='Soybeans CSDL'),
mpatches.Patch(facecolor='gray', edgecolor='gray', alpha = 0.2, label='NASS informed by CDL')]
fig.legend(handles=legend_elements,
loc='lower center',
bbox_to_anchor=(0.5,-0.01),
fontsize = 'x-large',
ncol=7)
fig.tight_layout(rect=[0,0.05,1,1]) # legend on the bottom
###Output
_____no_output_____
###Markdown
Step 3. Generate Figure 5: comparing NASS and CSDL over time by state
###Code
def corrfun(df, col1, col2):
df = df.dropna(subset=[col2])
if df.shape[0] != 0:
r2 = r2_score(df[col1].values, df[col2].values)
mse = mean_squared_error(df[col1].values, df[col2].values)
else:
r2 = np.nan
mse = np.nan
return pd.Series({'R': r2, 'mse': mse, 'Ncounties': df.shape[0]}) # return R2 (coef of determination)
totArea = df.groupby(['year', 'state', 'crop'])[['Nass_Area_m2', 'CDL_Area_m2', 'CSDL_Area_m2']].sum().reset_index()
print(totArea.shape) # 20years * 13states * 2commodity = 520 rows
corr_cdl = df[df['CDL_perccov'] > 90].groupby(['state','year','crop']).apply(corrfun,'Nass_Area_m2','CDL_Area_m2').reset_index().rename(
{'R': 'R_NASS_CDL', 'mse': 'mse_NASS_CDL', 'Ncounties': 'Ncounties_CDL'}, axis=1)
corr_csdl = df.groupby(['state','year','crop']).apply(corrfun, 'Nass_Area_m2', 'CSDL_Area_m2').reset_index().rename(
{'R': 'R_NASS_CSDL', 'mse': 'mse_NASS_CSDL', 'Ncounties': 'Ncounties_CSDL'}, axis=1)
corr = corr_cdl.merge(corr_csdl, on=['state','year','crop'], how='outer')
corr = corr.merge(totArea, on=['state','year','crop'], how='left')
print(corr.shape)
abbr_to_state = {'IL':'Illinois', 'IA':'Iowa', 'IN':'Indiana', 'NE':'Nebraska', 'ND':'North Dakota',
'SD':'South Dakota', 'MN':'Minnesota', 'WI':'Wisconsin', 'MI':'Michigan',
'KS':'Kansas','KY':'Kentucky', 'OH':'Ohio', 'MO':'Missouri'}
corr['state_abbrs'] = corr['state']
corr['state_name'] = corr['state'].replace(abbr_to_state)
corr.head()
state_geosorted = ['North Dakota', 'Minnesota', 'Wisconsin', 'Michigan',
'South Dakota','Iowa', 'Illinois', 'Indiana',
'Nebraska', 'Missouri', 'Kentucky', 'Ohio',
'Kansas']
color_mapCSDL = { "Corn": 'orange', "Soybeans": 'green'}
color_mapCDL = { "Corn": 'brown', "Soybeans": 'darkseagreen'}
marker_map ={ "Corn": 'o', "Soybeans": '^'}
fsize = 15
bbox_props = dict(boxstyle="round", fc="w", ec="0.5", alpha=0.0)
fig, ax = plt.subplots(nrows=4, ncols=4,figsize=(16,12))
for i, id in enumerate(state_geosorted):
group = corr[corr['state_name']==id]
thisAx = ax[int(np.floor(i/4)), i%4]
thisAx.text(0.05, 0.1, id, ha="left", va="center", size=18, bbox=bbox_props, transform=thisAx.transAxes)
ylabels = np.arange(0, 12, 2)/10
thisAx.set_yticks(ylabels)
thisAx.set_yticklabels('', rotation=0)
xlabels = np.arange(1999, 2019, 2)
thisAx.set_xticks(xlabels)
thisAx.set_xticklabels('', rotation=90)
thisAx.fill_between(np.arange(2008, 2019), 0, 1.1, alpha = 0.2, color='gray')
crops = ['Soybeans','Corn']
for l,id2 in enumerate(crops):
group2 = group[group['crop']==id2]
group2 = group2.sort_values('year')
thisAx.plot(group2['year'],group2['R_NASS_CDL'],color=color_mapCDL[id2], alpha=1, marker=marker_map[id2])
thisAx.plot(group2['year'],group2['R_NASS_CSDL'],color=color_mapCSDL[id2], alpha=1, marker=marker_map[id2])
thisAx.set_xlim([1999,2018])
thisAx.set_ylim([0.0, 1.1])
thisAx.grid(True)
ax[0,0].set_ylabel('$R^{2}$',fontsize=fsize)
ax[1,0].set_ylabel('$R^{2}$',fontsize=fsize)
ax[2,0].set_ylabel('$R^{2}$',fontsize=fsize)
ax[3,0].set_ylabel('$R^{2}$',fontsize=fsize)
ax[0,0].set_yticklabels(ylabels, rotation=0,fontsize=fsize)
ax[1,0].set_yticklabels(ylabels, rotation=0,fontsize=fsize)
ax[2,0].set_yticklabels(ylabels, rotation=0,fontsize=fsize)
ax[3,0].set_yticklabels(ylabels, rotation=0,fontsize=fsize)
ax[3,0].set_xticklabels(xlabels, rotation=90,fontsize=fsize)
ax[-1,-1].axis('off')
ax[-1,-2].axis('off')
ax[-1,-3].axis('off')
# Create the legend manually
legend_elements = [ Line2D([0], [0], color='brown', linewidth=3, marker='o', linestyle='-', label='Corn CDL'),
Line2D([0], [0], color='orange', linewidth=3, marker='o', linestyle='-', label='Corn CSDL'),
Line2D([0], [0], color='darkseagreen', linewidth=3, marker='^', linestyle='-', label='Soybeans CDL'),
Line2D([0], [0], color='green', linewidth=3, marker='^', linestyle='-', label='Soybeans CSDL')]
fig.legend(handles=legend_elements,
loc="lower right", # Position of legend
bbox_to_anchor=(0.6, 0.05),
fontsize = 'xx-large')
bckground_patch = mpatches.Patch(color='gray', alpha = 0.2, label='Training years')
fig.legend(handles=[bckground_patch],
loc="lower right",
bbox_to_anchor=(0.8, 0.1),
fontsize = 'xx-large')
fig.tight_layout()
ax[2,1].set_xticklabels(xlabels, rotation=90,fontsize=fsize)
ax[2,2].set_xticklabels(xlabels, rotation=90,fontsize=fsize)
ax[2,3].set_xticklabels(xlabels, rotation=90,fontsize=fsize)
fig.show()
###Output
_____no_output_____ |
ml/perdiz-svm.ipynb | ###Markdown
Support Vector Machine: Predicting community membership of Perdiz arrow points
###Code
# load analysis packages
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score
from sklearn import svm
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.metrics import roc_auc_score, accuracy_score
from sklearn import metrics
# read data
data = pd.read_csv('perdizsite.csv')
data.head()
###Output
_____no_output_____
###Markdown
select features and response
###Code
# attributes for analysis
feature_cols = ['maxl', 'maxw', 'maxth', 'maxstl', 'maxstw']
X = data[feature_cols]
# cast from string to int
reg_num = {'north':0, 'south':1}
data['reg_num'] = data.region.map(reg_num)
data.head()
y = data.reg_num
###Output
_____no_output_____
###Markdown
ensure that features and responses are numeric
###Code
X.dtypes
y.dtypes
###Output
_____no_output_____
###Markdown
split data for train/test
###Code
# split data into train/test sets (75/25 split)
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size = 0.25,
random_state = 0)
print('X_train: ', X_train.shape)
print('X_test: ', X_test.shape)
print('y_train:', y_train.shape)
print('y_test: ', y_test.shape)
###Output
X_train: (50, 5)
X_test: (17, 5)
y_train: (50,)
y_test: (17,)
###Markdown
decrease the algorithm's sensitivity to differences in feature scale (and outliers) by standardizing the features
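Standardization rescales each feature to zero mean and unit variance using the *training-set* statistics, `z = (x - mean_train) / std_train`. That is why the cell below calls `fit_transform` on `X_train` but only `transform` on `X_test`: the test data must not leak into the scaling parameters. As a minimal sketch of what `StandardScaler` does under the hood (using the `X_train`/`X_test` split created above; the `*_manual` names are just for this illustration):

```
mu = X_train.mean(axis=0)            # per-feature means, estimated on the training set only
sigma = X_train.std(axis=0, ddof=0)  # population standard deviation (ddof=0), matching StandardScaler
X_train_manual = (X_train - mu) / sigma
X_test_manual = (X_test - mu) / sigma  # reuse the training statistics on the test set
```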
###Code
stdsc = StandardScaler()
X_train_std = stdsc.fit_transform(X_train)
X_test_std = stdsc.transform(X_test)
###Output
_____no_output_____
###Markdown
create SVM classifier with linear kernel
###Code
clf = svm.SVC(kernel = 'linear')
clf.fit(X_train_std, y_train)
###Output
_____no_output_____
###Markdown
grid search and nested cross validation of training dataset
###Code
# grid search
pipe_svc = Pipeline([('scl', StandardScaler()), ('clf', SVC(random_state = 0))])
param_range = [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]
param_grid = [{'clf__C': param_range,
'clf__kernel': ['linear']},
{'clf__C': param_range,
'clf__gamma': param_range,
'clf__kernel': ['rbf']}]
gs = GridSearchCV(estimator = pipe_svc,
param_grid = param_grid,
scoring = 'accuracy',
cv = 10,
n_jobs = 1)
gs = gs.fit(X_train_std, y_train)
print('Grid Search Best Score: ', gs.best_score_)
print('Grid Search Best Parameters: ', gs.best_params_)
# use the test dataset to estimate model performance
clf = gs.best_estimator_
clf.fit(X_train_std, y_train)
clf.score(X_test_std, y_test)
# nested cross validation
gs = GridSearchCV(estimator = pipe_svc,
param_grid = param_grid,
scoring = 'accuracy',
cv = 10,
n_jobs = 1)
scores = cross_val_score(gs, X_train_std, y_train,
scoring = 'accuracy',
cv = 10)
print('Cross Validation Scores: ', scores)
print('Cross Validation Mean Score: ', scores.mean())
###Output
Grid Search Best Score: 0.8600000000000001
Grid Search Best Parameters: {'clf__C': 10.0, 'clf__kernel': 'linear'}
Cross Validation Scores: [0.8 1. 0.4 1. 0.8 0.8 1. 1. 0.8 1. ]
Cross Validation Mean Score: 0.8600000000000001
###Markdown
make predictions + evaluate accuracy
###Code
y_pred = clf.predict(X_test_std)
print('Receiver Operator Curve Score: ', roc_auc_score(y_true = y_test,
y_score = y_pred))
print('Accuracy Score: ', accuracy_score(y_test, y_pred))
print('Precision: ', metrics.precision_score(y_test, y_pred))
print('Recall: ', metrics.recall_score(y_test, y_pred))
# plot ROC curve
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred)
plt.plot(fpr, tpr)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.title('ROC curve for Perdiz classifier')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.grid(True)
###Output
_____no_output_____ |
33_binning/2_binning_titanic.ipynb | ###Markdown
**Binning or Discretization of Continuous features** - **Binning or discretization** is the process of **transforming numerical variables into categorical counterparts.**- Numerical variables are often discretized before being used in modelling methods based on frequency tables. Statistical data binning is a way to group a range of more or less continuous values into a smaller number of "bins".- We will use the **Pandas `cut`** function. `pd.cut` divides the underlying data into bins defined by the **actual numeric edges** you pass in (or into equal-width intervals if you only pass a number of bins). If you instead want bins defined by **percentiles** of the distribution, so that each bin holds roughly the same number of rows, the related function is `pd.qcut`; a small sketch of the difference is shown below.
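A minimal sketch of that `cut` vs `qcut` difference on a small made-up Series (`ages` below is illustrative only, not the Titanic data):

```
import pandas as pd

ages = pd.Series([1, 4, 8, 15, 22, 35, 59, 70, 90])

# pd.cut: bins defined by numeric edges -> equal-width bins, counts may differ
print(pd.cut(ages, bins=3).value_counts())

# pd.qcut: bins defined by percentiles -> roughly equal counts, widths differ
print(pd.qcut(ages, q=3).value_counts())
```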
###Code
import pandas as pd
import matplotlib.pyplot as plt
from google.colab import drive
drive.mount('/content/drive')
df = pd.read_csv('/content/drive/MyDrive/CloudyML/titanic.zip')
df.head()
df.info()
bins = [0, 2, 5, 10, 14, 17, 30, 60, 100]
labels = ['infant', 'toddler', 'young child', 'older child', 'teenager', 'young adult', 'adult', 'elderly']
df['Age Category'] = pd.cut(df['Age'], bins, labels=labels)
df
df['Age Category'].value_counts()
plt.figure(figsize=(10,8))
df['Age Category'].value_counts().plot(kind='barh')
plt.figure(figsize=(15,11))
df['Age Category'].value_counts().plot(kind='pie')
###Output
_____no_output_____ |
05_Questions_Category/03_Questions_Classifications_RNN_2_Labels.ipynb | ###Markdown
Questions Type Categories 2 LabelsBased on the first notebook of this series, we are going to create an RNN model that classifies questions with two label sets, which means we will have two outputs for each question, which we will visualize.We are going to change from the Sequential to the Functional API; most of the code cells remain the same, and I will highlight where there are changes. Imports
###Code
import os
import numpy as np
import tensorflow as tf
from tensorflow import keras
import csv, json, time
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
tf.__version__
###Output
_____no_output_____
###Markdown
GPU
###Code
devices = tf.config.list_physical_devices("GPU")
try:
tf.config.experimental.set_visible_devices(devices[0], "GPU")
print("GPU set")
except RuntimeError as e:
print(e)
###Output
GPU set
###Markdown
SEED
###Code
SEED = 42
tf.random.set_seed(
SEED
)
np.random.seed(SEED)
###Output
_____no_output_____
###Markdown
Mounting the google drive
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
DataAgain, we are not going to create files as we did in the last notebook, because all the data splits were done in the first notebook; that is why we saved the csv files, which have the following file names:```train.csvtest.csvval.csv```
###Code
data_path = "/content/drive/MyDrive/NLP Data/questions-classification"
###Output
_____no_output_____
###Markdown
Loading the files
###Code
train_path = "train.csv"
val_path = "val.csv"
test_path = "test.csv"
train_dataframe = pd.read_csv(
os.path.join(data_path, train_path)
)
val_dataframe = pd.read_csv(
os.path.join(data_path, val_path)
)
test_dataframe = pd.read_csv(
os.path.join(data_path, test_path)
)
###Output
_____no_output_____
###Markdown
As I said, in this notebook we are going to predict two categories. This means we are interested in 3 columns for each set, which are:```1. Questions (feature)2. Category0 (label_1) (6 classes) 3. Category2 (label_2) (47 classes)```
###Code
train_dataframe.Category2.unique()
###Output
_____no_output_____
###Markdown
Question Classes
###Code
train_dataframe.Category0.unique()
###Output
_____no_output_____
###Markdown
We have `6` different categories for these questions which are:```categories_0 = ['ENTITY', 'DESCRIPTION', 'NUMERIC', 'HUMAN', 'LOCATION', 'ABBREVIATION']```We also have `47` classes of these questions which are:```categories_1 = ['dismed', 'count', 'ind', 'food', 'date', 'other', 'money', 'dist', 'period', 'gr', 'perc', 'sport', 'country', 'city', 'desc', 'exp', 'word', 'abb', 'cremat', 'animal', 'body', 'reason', 'def', 'manner', 'letter', 'termeq', 'substance', 'state', 'color', 'event', 'product', 'symbol', 'volsize', 'mount', 'weight', 'veh', 'techmeth', 'plant', 'title', 'code', 'ord', 'speed', 'lang', 'temp', 'instru', 'currency', 'religion']``` Features and Labels* We are going to one_hot encode all labels
###Code
train_features = train_dataframe.Questions.values
train_labels_0 = train_dataframe.Category0.values
train_labels_1 = train_dataframe.Category2.values
test_features = test_dataframe.Questions.values
test_labels_0 = test_dataframe.Category0.values
test_labels_1 = test_dataframe.Category2.values
val_features = val_dataframe.Questions.values
val_labels_0 = val_dataframe.Category0.values
val_labels_1 = val_dataframe.Category2.values
###Output
_____no_output_____
###Markdown
Label Encoding.We are going to encode the labels into a numerical representation using the `sklearn` `LabelEncoder` class. Since we have two labels we are going to create two separate `LabelEncoder`s
###Code
label_0_encoder = LabelEncoder()
label_0_encoder.fit(train_labels_0)
label_1_encoder = LabelEncoder()
label_1_encoder.fit(train_labels_1)
###Output
_____no_output_____
###Markdown
Joining the validation and training set
###Code
train_valid_features = np.concatenate([train_features, val_features], axis = 0)
train_valid_labels_0 = np.concatenate([train_labels_0, val_labels_0], axis = 0)
train_valid_labels_1 = np.concatenate([train_labels_1, val_labels_1], axis = 0)
train_valid_labels_1.shape, train_valid_labels_0.shape, train_valid_features.shape
train_labels_0 = label_0_encoder.transform(train_valid_labels_0).astype("int32")
train_labels_1 = label_1_encoder.transform(train_valid_labels_1).astype("int32")
test_labels_0 = label_0_encoder.transform(test_labels_0).astype("int32")
test_labels_1 = label_1_encoder.transform(test_labels_1).astype("int32")
class_names_label_0 = label_0_encoder.classes_
class_names_label_1 = label_1_encoder.classes_
class_names_label_0.shape, class_names_label_1.shape
###Output
_____no_output_____
###Markdown
Next we are going to `one_hot_encode` the labels.We are only going to `one_hot` encode the train labels. We will also do the same to the test set so that it can act as our validation set, since the dataset is very small. I'm going to use the `numpy` method `eye`. You could also use the `sklearn` ``OneHotEncoder`` class or the `tensorflow` method `one_hot` (a short sketch of both is shown below).**For this we will create two functions that one_hot_encode labels with different depths**
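For reference, here is a minimal sketch of those two alternatives (the `labels` array is made up; these calls are not used below, the `np.eye` helpers are):

```
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import OneHotEncoder

labels = np.array([2, 0, 5])

# tensorflow: an explicit depth gives a (3, 6) tensor
tf_one_hot = tf.one_hot(labels, depth=6)

# sklearn: list the categories explicitly so every class gets a column
encoder = OneHotEncoder(categories=[list(range(6))])
sk_one_hot = encoder.fit_transform(labels.reshape(-1, 1)).toarray()
```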
###Code
def one_hot_label_1(index, depth=6): # 6 classes
return np.eye(depth, dtype=np.float32)[index]
print(one_hot_label_1(2))
def one_hot_label_2(index, depth=47): # 47 classes
return np.eye(depth, dtype=np.float32)[index]
one_hot_label_2(2)
train_labels_0_one_hot = np.array(list(map(one_hot_label_1, train_labels_0)))
test_labels_0_one_hot = np.array(list(map(one_hot_label_1, test_labels_0)))
train_labels_1_one_hot = np.array(list(map(one_hot_label_2, train_labels_1)))
test_labels_1_one_hot = np.array(list(map(one_hot_label_2, test_labels_1)))
test_labels_0_one_hot[0]
###Output
_____no_output_____
###Markdown
Features For now we are done with the labels. In our case the feature is a question. All we need to do for this dataset is create a helper function that converts the text to lower case. After that we will prepare/preprocess the train and validation features for the model.
###Code
to_lower = lambda sent: sent.lower()
train_features = np.array(
list(
map(to_lower, train_features)
)
)
test_features= np.array(
list(
map(to_lower, test_features)
)
)
train_features[:2]
###Output
_____no_output_____
###Markdown
Preprocessing Features (Questions)1. Create a vocabulary2. Create a `stoi` (string-to-index) sequence from each sentence3. Pad the sentences to have the same size**Note** - During creation of the vocabulary we are going to use the `train` set. The model should not have any idea about the validation set, because we want the validation set to represent the test set as much as possible. Vocab size, (aka) the number of unique words.I'm going to use `nltk`'s `word_tokenize` to tokenize each sentence and then count the number of unique words in the train set.
###Code
from collections import Counter
from nltk.tokenize import word_tokenize
import nltk
nltk.download('punkt')
train_features[0]
counter = Counter()
for sent in list(train_valid_features):
words = word_tokenize(sent)
for word in words:
counter[word] += 1
counter.most_common(9)
vocab_size = len(counter)
vocab_size
###Output
_____no_output_____
###Markdown
We have `~8K` unique words in the train set. Next we are going to create word vectors. Word vectors
###Code
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
tokenizer = Tokenizer(num_words=vocab_size)
tokenizer.fit_on_texts(train_valid_features)
word_indices = tokenizer.word_index
word_indices_reversed = dict([
(v, k) for (k, v) in word_indices.items()
])
###Output
_____no_output_____
###Markdown
Helper functionsWe will create some helper functions that convert sequences to text and text to sequences. These functions will be used for inference later on.
###Code
def seq_to_text(sequences):
return " ".join(word_indices_reversed[i] for i in sequences )
def text_to_seq(sent):
words = word_tokenize(sent.lower())
sequences = []
for word in words:
try:
sequences.append(word_indices[word])
except:
sequences.append(0)
return sequences
###Output
_____no_output_____
###Markdown
Pretrained embedding weightsSince this model is using RNNs, we are going to use the pretrained `glove.6B` word embeddings. I've already uploaded these word embeddings to my Google Drive so that we can load them as follows
###Code
embedding_path = "/content/drive/MyDrive/NLP Data/glove.6B/glove.6B.100d.txt"
embedding_dict = dict()
with open(embedding_path, encoding="utf8") as glove:
for line in glove:
records = line.split();
word = records[0]
vectors = np.asarray(records[1: ], dtype=np.float32)
embedding_dict[word] = vectors
print(len(embedding_dict))
embedding_dict["what"].shape
###Output
400000
###Markdown
But wait? `400000` words? Where are they coming from?Okay, don't panic: next we will create an embedding matrix that suits our vocab_size. Embedding matrixWe will then create an embedding matrix that suits our data.
###Code
embedding_matrix = np.zeros((vocab_size, 100))
for word, index in word_indices.items():
    vector = embedding_dict.get(word)
    if vector is not None:
        try:
            embedding_matrix[index] = vector
        except IndexError:  # skip indices that fall outside the embedding matrix
            pass
len(embedding_matrix)
###Output
_____no_output_____
###Markdown
Now the `embedding_matrix` suits our data, which has `~8K` unique words. Creating Sequences
###Code
train_sequences = tokenizer.texts_to_sequences(train_valid_features)
test_sequences = tokenizer.texts_to_sequences(test_features)
test_features[0], test_sequences[0]
###Output
_____no_output_____
###Markdown
Testing our helper functions
###Code
seq_to_text(test_sequences[0])
text_to_seq("this is why the unknownnnn word ?")
###Output
_____no_output_____
###Markdown
Pad sequencesOur final step is to pad our sequences so that they all have the same length. We are going to do this on the train and test sequences (the test set is acting as our validation set).
###Code
max_words = 100
train_tokens_padded = pad_sequences(
train_sequences,
maxlen=max_words,
padding="post",
truncating="post"
)
test_tokens_padded = pad_sequences(
test_sequences,
maxlen=max_words,
padding="post",
truncating="post"
)
###Output
_____no_output_____
###Markdown
Building the model.Model Architecture.The model architecture will be plotted below.
###Code
forward_layer = keras.layers.GRU(
128, return_sequences=True, dropout=.5,
name="gru_forward_layer"
)
backward_layer = keras.layers.LSTM(
128, return_sequences=True, dropout=.5,
go_backwards=True, name="lstm_backward_layer"
)
input_layer = keras.layers.Input(shape=(100, ))
embedding_layer = keras.layers.Embedding(
vocab_size, 100,
input_length=max_words,
weights=[embedding_matrix],
trainable=True,
name = "embedding_layer"
)(input_layer)
bidirectional_layer = keras.layers.Bidirectional(
forward_layer,
backward_layer = backward_layer,
name= "bidirectional_layer"
)(embedding_layer)
gru_layer = keras.layers.GRU(
512, return_sequences=True,
dropout=.5,
name= "gru_layer"
)(bidirectional_layer)
lstm_layer = keras.layers.LSTM(
512, return_sequences=True,
dropout=.5,
name="lstm_layer"
)(gru_layer)
pooling_layer = keras.layers.GlobalAveragePooling1D(
name="average_pooling_layer"
)(lstm_layer)
# FC hidden layers (note: dense_1/dense_2 and their dropout layers below are defined but never connected to the outputs; only dense_3 -> dropout_3 feeds the two heads, as the model summary confirms)
dense_1 = keras.layers.Dense(64, activation='relu', name="dense_1")(pooling_layer)
dropout_1 = keras.layers.Dropout(rate= .5, name="dropout_layer_1")(dense_1)
dense_2 = keras.layers.Dense(512, activation='relu', name="dense_2")(dense_1)
dropout_2 = keras.layers.Dropout(rate= .5, name="dropout_layer_2")(dense_2)
dense_3 = keras.layers.Dense(128, activation='relu', name="dense_3")(pooling_layer)
dropout_3 = keras.layers.Dropout(rate= .5, name="dropout_layer_3")(dense_3)
output_1 = keras.layers.Dense(6, activation='softmax', name="output_1")(dropout_3)
output_2 = keras.layers.Dense(47, activation="softmax", name="output_2")(dropout_3)
question_category_model = keras.Model(inputs=input_layer, outputs=[output_1, output_2], name = "question_category_model")
question_category_model.summary()
###Output
Model: "question_category_model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_2 (InputLayer) [(None, 100)] 0
__________________________________________________________________________________________________
embedding_layer (Embedding) (None, 100, 100) 847500 input_2[0][0]
__________________________________________________________________________________________________
bidirectional_layer (Bidirectio (None, 100, 256) 205568 embedding_layer[0][0]
__________________________________________________________________________________________________
gru_layer (GRU) (None, 100, 512) 1182720 bidirectional_layer[0][0]
__________________________________________________________________________________________________
lstm_layer (LSTM) (None, 100, 512) 2099200 gru_layer[0][0]
__________________________________________________________________________________________________
average_pooling_layer (GlobalAv (None, 512) 0 lstm_layer[0][0]
__________________________________________________________________________________________________
dense_3 (Dense) (None, 128) 65664 average_pooling_layer[0][0]
__________________________________________________________________________________________________
dropout_layer_3 (Dropout) (None, 128) 0 dense_3[0][0]
__________________________________________________________________________________________________
output_1 (Dense) (None, 6) 774 dropout_layer_3[0][0]
__________________________________________________________________________________________________
output_2 (Dense) (None, 47) 6063 dropout_layer_3[0][0]
==================================================================================================
Total params: 4,407,489
Trainable params: 4,407,489
Non-trainable params: 0
__________________________________________________________________________________________________
###Markdown
Let's plot the model and see it's stucture
###Code
keras.utils.plot_model(question_category_model, dpi=64, show_shapes=True)
###Output
_____no_output_____
###Markdown
Compiling the modelWe are going to have one loss function per output head, so two losses in total (a sketch of how the two losses could be weighted differently is shown below).
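As a side note (not used in this notebook), `compile` also accepts a `loss_weights` argument when one head should contribute more to the total loss than the other. A minimal sketch with made-up weights, where `model` stands for any two-output model built like the one above:

```
model.compile(
    loss={"output_1": keras.losses.CategoricalCrossentropy(),
          "output_2": keras.losses.CategoricalCrossentropy()},
    loss_weights={"output_1": 1.0, "output_2": 2.0},  # hypothetical: weight the 47-class head more
    optimizer=keras.optimizers.Adam(1e-3),
    metrics=["accuracy"],
)
```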
###Code
early_stopping = keras.callbacks.EarlyStopping(
monitor='val_output_2_loss',
min_delta=0,
patience=5,
verbose=1,
mode='auto',
baseline=None,
restore_best_weights=False,
)
question_category_model.compile(
loss = {
"output_1" : keras.losses.CategoricalCrossentropy(from_logits=False),
"output_2" : keras.losses.CategoricalCrossentropy(from_logits=False)
},
optimizer = keras.optimizers.Adam(1e-3, 0.5),
metrics = ["accuracy"]
)
###Output
_____no_output_____
###Markdown
How long should we train the model?We are going to train for up to 50 epochs; the `EarlyStopping` callback defined with the compile step will stop training once the monitored validation loss (`val_output_2_loss`) stops decreasing for 5 epochs in a row.
###Code
EPOCHS = 50
history = question_category_model.fit(
train_tokens_padded,
y = [train_labels_0_one_hot, train_labels_1_one_hot],
validation_data = (
test_tokens_padded, [test_labels_0_one_hot, test_labels_1_one_hot],
),
verbose = 1,
epochs = EPOCHS,
batch_size = 64,
shuffle = True,
validation_batch_size = 16,
callbacks = [early_stopping]
)
###Output
Epoch 1/50
72/72 [==============================] - 24s 231ms/step - loss: 5.0517 - output_1_loss: 1.7390 - output_2_loss: 3.3127 - output_1_accuracy: 0.2359 - output_2_accuracy: 0.1625 - val_loss: 5.6584 - val_output_1_loss: 2.0756 - val_output_2_loss: 3.5827 - val_output_1_accuracy: 0.2268 - val_output_2_accuracy: 0.1604
Epoch 2/50
72/72 [==============================] - 15s 208ms/step - loss: 4.6434 - output_1_loss: 1.5959 - output_2_loss: 3.0475 - output_1_accuracy: 0.3014 - output_2_accuracy: 0.2024 - val_loss: 4.0568 - val_output_1_loss: 1.3027 - val_output_2_loss: 2.7542 - val_output_1_accuracy: 0.4204 - val_output_2_accuracy: 0.3093
Epoch 3/50
72/72 [==============================] - 15s 208ms/step - loss: 3.8797 - output_1_loss: 1.2496 - output_2_loss: 2.6301 - output_1_accuracy: 0.4927 - output_2_accuracy: 0.3451 - val_loss: 3.1518 - val_output_1_loss: 0.8737 - val_output_2_loss: 2.2781 - val_output_1_accuracy: 0.6690 - val_output_2_accuracy: 0.4273
Epoch 4/50
72/72 [==============================] - 15s 208ms/step - loss: 3.2263 - output_1_loss: 0.9484 - output_2_loss: 2.2779 - output_1_accuracy: 0.6427 - output_2_accuracy: 0.4154 - val_loss: 2.8593 - val_output_1_loss: 0.7750 - val_output_2_loss: 2.0844 - val_output_1_accuracy: 0.7205 - val_output_2_accuracy: 0.4456
Epoch 5/50
72/72 [==============================] - 15s 208ms/step - loss: 2.8750 - output_1_loss: 0.8192 - output_2_loss: 2.0558 - output_1_accuracy: 0.7041 - output_2_accuracy: 0.4584 - val_loss: 2.5435 - val_output_1_loss: 0.6724 - val_output_2_loss: 1.8711 - val_output_1_accuracy: 0.7663 - val_output_2_accuracy: 0.4960
Epoch 6/50
72/72 [==============================] - 15s 208ms/step - loss: 2.6385 - output_1_loss: 0.7287 - output_2_loss: 1.9098 - output_1_accuracy: 0.7513 - output_2_accuracy: 0.4837 - val_loss: 2.4429 - val_output_1_loss: 0.6373 - val_output_2_loss: 1.8057 - val_output_1_accuracy: 0.7743 - val_output_2_accuracy: 0.5223
Epoch 7/50
72/72 [==============================] - 15s 208ms/step - loss: 2.4520 - output_1_loss: 0.6488 - output_2_loss: 1.8031 - output_1_accuracy: 0.7790 - output_2_accuracy: 0.5169 - val_loss: 2.3175 - val_output_1_loss: 0.5912 - val_output_2_loss: 1.7263 - val_output_1_accuracy: 0.7927 - val_output_2_accuracy: 0.5281
Epoch 8/50
72/72 [==============================] - 15s 208ms/step - loss: 2.2242 - output_1_loss: 0.5518 - output_2_loss: 1.6724 - output_1_accuracy: 0.8137 - output_2_accuracy: 0.5429 - val_loss: 2.1097 - val_output_1_loss: 0.4926 - val_output_2_loss: 1.6171 - val_output_1_accuracy: 0.8247 - val_output_2_accuracy: 0.5487
Epoch 9/50
72/72 [==============================] - 15s 207ms/step - loss: 2.0795 - output_1_loss: 0.4994 - output_2_loss: 1.5801 - output_1_accuracy: 0.8480 - output_2_accuracy: 0.5737 - val_loss: 2.0985 - val_output_1_loss: 0.5188 - val_output_2_loss: 1.5797 - val_output_1_accuracy: 0.8156 - val_output_2_accuracy: 0.5498
Epoch 10/50
72/72 [==============================] - 15s 208ms/step - loss: 1.9201 - output_1_loss: 0.4461 - output_2_loss: 1.4740 - output_1_accuracy: 0.8644 - output_2_accuracy: 0.5901 - val_loss: 1.9583 - val_output_1_loss: 0.4665 - val_output_2_loss: 1.4918 - val_output_1_accuracy: 0.8282 - val_output_2_accuracy: 0.5934
Epoch 11/50
72/72 [==============================] - 15s 209ms/step - loss: 1.7778 - output_1_loss: 0.3951 - output_2_loss: 1.3828 - output_1_accuracy: 0.8786 - output_2_accuracy: 0.6143 - val_loss: 1.8288 - val_output_1_loss: 0.4351 - val_output_2_loss: 1.3937 - val_output_1_accuracy: 0.8408 - val_output_2_accuracy: 0.6105
Epoch 12/50
72/72 [==============================] - 15s 207ms/step - loss: 1.6430 - output_1_loss: 0.3438 - output_2_loss: 1.2992 - output_1_accuracy: 0.8943 - output_2_accuracy: 0.6386 - val_loss: 1.8546 - val_output_1_loss: 0.4561 - val_output_2_loss: 1.3985 - val_output_1_accuracy: 0.8362 - val_output_2_accuracy: 0.6014
Epoch 13/50
72/72 [==============================] - 15s 208ms/step - loss: 1.5627 - output_1_loss: 0.3334 - output_2_loss: 1.2293 - output_1_accuracy: 0.8956 - output_2_accuracy: 0.6556 - val_loss: 1.7780 - val_output_1_loss: 0.4127 - val_output_2_loss: 1.3653 - val_output_1_accuracy: 0.8499 - val_output_2_accuracy: 0.6231
Epoch 14/50
72/72 [==============================] - 15s 208ms/step - loss: 1.4423 - output_1_loss: 0.2898 - output_2_loss: 1.1525 - output_1_accuracy: 0.9100 - output_2_accuracy: 0.6715 - val_loss: 1.6923 - val_output_1_loss: 0.4145 - val_output_2_loss: 1.2778 - val_output_1_accuracy: 0.8706 - val_output_2_accuracy: 0.6564
Epoch 15/50
72/72 [==============================] - 15s 208ms/step - loss: 1.3884 - output_1_loss: 0.2886 - output_2_loss: 1.0998 - output_1_accuracy: 0.9155 - output_2_accuracy: 0.6866 - val_loss: 1.7102 - val_output_1_loss: 0.4041 - val_output_2_loss: 1.3061 - val_output_1_accuracy: 0.8683 - val_output_2_accuracy: 0.6380
Epoch 16/50
72/72 [==============================] - 15s 207ms/step - loss: 1.2861 - output_1_loss: 0.2523 - output_2_loss: 1.0337 - output_1_accuracy: 0.9212 - output_2_accuracy: 0.7023 - val_loss: 1.7649 - val_output_1_loss: 0.4315 - val_output_2_loss: 1.3335 - val_output_1_accuracy: 0.8580 - val_output_2_accuracy: 0.6403
Epoch 17/50
72/72 [==============================] - 15s 208ms/step - loss: 1.2385 - output_1_loss: 0.2263 - output_2_loss: 1.0122 - output_1_accuracy: 0.9295 - output_2_accuracy: 0.7102 - val_loss: 1.7124 - val_output_1_loss: 0.4400 - val_output_2_loss: 1.2724 - val_output_1_accuracy: 0.8625 - val_output_2_accuracy: 0.6632
Epoch 18/50
72/72 [==============================] - 15s 208ms/step - loss: 1.1642 - output_1_loss: 0.2187 - output_2_loss: 0.9455 - output_1_accuracy: 0.9314 - output_2_accuracy: 0.7312 - val_loss: 1.5949 - val_output_1_loss: 0.3877 - val_output_2_loss: 1.2072 - val_output_1_accuracy: 0.8729 - val_output_2_accuracy: 0.6586
Epoch 19/50
72/72 [==============================] - 15s 208ms/step - loss: 1.0533 - output_1_loss: 0.1718 - output_2_loss: 0.8815 - output_1_accuracy: 0.9445 - output_2_accuracy: 0.7419 - val_loss: 1.6185 - val_output_1_loss: 0.4350 - val_output_2_loss: 1.1836 - val_output_1_accuracy: 0.8809 - val_output_2_accuracy: 0.6976
Epoch 20/50
72/72 [==============================] - 15s 207ms/step - loss: 1.0745 - output_1_loss: 0.1955 - output_2_loss: 0.8790 - output_1_accuracy: 0.9439 - output_2_accuracy: 0.7486 - val_loss: 1.6212 - val_output_1_loss: 0.4248 - val_output_2_loss: 1.1964 - val_output_1_accuracy: 0.8797 - val_output_2_accuracy: 0.6873
Epoch 21/50
72/72 [==============================] - 15s 209ms/step - loss: 0.9585 - output_1_loss: 0.1548 - output_2_loss: 0.8037 - output_1_accuracy: 0.9524 - output_2_accuracy: 0.7733 - val_loss: 1.5891 - val_output_1_loss: 0.4281 - val_output_2_loss: 1.1610 - val_output_1_accuracy: 0.8832 - val_output_2_accuracy: 0.6987
Epoch 22/50
72/72 [==============================] - 15s 208ms/step - loss: 0.9314 - output_1_loss: 0.1589 - output_2_loss: 0.7725 - output_1_accuracy: 0.9489 - output_2_accuracy: 0.7759 - val_loss: 1.4745 - val_output_1_loss: 0.3770 - val_output_2_loss: 1.0976 - val_output_1_accuracy: 0.8866 - val_output_2_accuracy: 0.6999
Epoch 23/50
72/72 [==============================] - 15s 209ms/step - loss: 0.8608 - output_1_loss: 0.1467 - output_2_loss: 0.7142 - output_1_accuracy: 0.9568 - output_2_accuracy: 0.7893 - val_loss: 1.8682 - val_output_1_loss: 0.5555 - val_output_2_loss: 1.3126 - val_output_1_accuracy: 0.8786 - val_output_2_accuracy: 0.7113
Epoch 24/50
72/72 [==============================] - 15s 208ms/step - loss: 0.8417 - output_1_loss: 0.1412 - output_2_loss: 0.7005 - output_1_accuracy: 0.9563 - output_2_accuracy: 0.7978 - val_loss: 1.4798 - val_output_1_loss: 0.3860 - val_output_2_loss: 1.0938 - val_output_1_accuracy: 0.8969 - val_output_2_accuracy: 0.7228
Epoch 25/50
72/72 [==============================] - 15s 208ms/step - loss: 0.7927 - output_1_loss: 0.1281 - output_2_loss: 0.6646 - output_1_accuracy: 0.9605 - output_2_accuracy: 0.8065 - val_loss: 1.6014 - val_output_1_loss: 0.4578 - val_output_2_loss: 1.1436 - val_output_1_accuracy: 0.8832 - val_output_2_accuracy: 0.7228
Epoch 26/50
72/72 [==============================] - 15s 209ms/step - loss: 0.7577 - output_1_loss: 0.1203 - output_2_loss: 0.6375 - output_1_accuracy: 0.9616 - output_2_accuracy: 0.8091 - val_loss: 1.5944 - val_output_1_loss: 0.4535 - val_output_2_loss: 1.1409 - val_output_1_accuracy: 0.8900 - val_output_2_accuracy: 0.7274
Epoch 27/50
72/72 [==============================] - 15s 208ms/step - loss: 0.7471 - output_1_loss: 0.1228 - output_2_loss: 0.6242 - output_1_accuracy: 0.9624 - output_2_accuracy: 0.8238 - val_loss: 1.4716 - val_output_1_loss: 0.4045 - val_output_2_loss: 1.0671 - val_output_1_accuracy: 0.8946 - val_output_2_accuracy: 0.7308
Epoch 28/50
72/72 [==============================] - 15s 208ms/step - loss: 0.6818 - output_1_loss: 0.1085 - output_2_loss: 0.5733 - output_1_accuracy: 0.9648 - output_2_accuracy: 0.8310 - val_loss: 1.6697 - val_output_1_loss: 0.4836 - val_output_2_loss: 1.1861 - val_output_1_accuracy: 0.8992 - val_output_2_accuracy: 0.7457
Epoch 29/50
72/72 [==============================] - 15s 209ms/step - loss: 0.6727 - output_1_loss: 0.1061 - output_2_loss: 0.5667 - output_1_accuracy: 0.9670 - output_2_accuracy: 0.8401 - val_loss: 1.6513 - val_output_1_loss: 0.5015 - val_output_2_loss: 1.1499 - val_output_1_accuracy: 0.8751 - val_output_2_accuracy: 0.7320
Epoch 30/50
72/72 [==============================] - 15s 209ms/step - loss: 0.6136 - output_1_loss: 0.0938 - output_2_loss: 0.5197 - output_1_accuracy: 0.9725 - output_2_accuracy: 0.8541 - val_loss: 1.6285 - val_output_1_loss: 0.4854 - val_output_2_loss: 1.1431 - val_output_1_accuracy: 0.8912 - val_output_2_accuracy: 0.7480
Epoch 31/50
72/72 [==============================] - 15s 208ms/step - loss: 0.5629 - output_1_loss: 0.0893 - output_2_loss: 0.4737 - output_1_accuracy: 0.9736 - output_2_accuracy: 0.8629 - val_loss: 1.5678 - val_output_1_loss: 0.4773 - val_output_2_loss: 1.0905 - val_output_1_accuracy: 0.8981 - val_output_2_accuracy: 0.7491
Epoch 32/50
72/72 [==============================] - 15s 208ms/step - loss: 0.5418 - output_1_loss: 0.0827 - output_2_loss: 0.4590 - output_1_accuracy: 0.9731 - output_2_accuracy: 0.8635 - val_loss: 1.5225 - val_output_1_loss: 0.4550 - val_output_2_loss: 1.0674 - val_output_1_accuracy: 0.8969 - val_output_2_accuracy: 0.7606
Epoch 00032: early stopping
###Markdown
Plotting the model's training history.
###Code
import matplotlib.pyplot as plt
df = pd.DataFrame(history.history)
df.plot(title="Model training history")
plt.show()
###Output
_____no_output_____
###Markdown
Converting the test data to numeric sequences and then padding them.
###Code
def text_to_padded_sequences(sent):
tokens = text_to_seq(sent)
padded_tokens = pad_sequences([tokens], maxlen=max_words, padding="post", truncating="post")
return tf.squeeze(padded_tokens)
question_category_model.evaluate(
test_tokens_padded, [test_labels_0_one_hot, test_labels_1_one_hot],
verbose = 1,
batch_size = 32
)
###Output
28/28 [==============================] - 2s 63ms/step - loss: 1.5225 - output_1_loss: 0.4550 - output_2_loss: 1.0674 - output_1_accuracy: 0.8969 - output_2_accuracy: 0.7606
###Markdown
As we can see, we are able to get an accuracy of `~90%` on the first category and `~76%` on the second category of the test data. This could likely be improved with more training examples and by balancing the classes in the train set (for example via class weights). Model inference (making predictions)
###Code
from prettytable import PrettyTable
def tabulate(column_names, data, max_characters:int, question:str):
table = PrettyTable(column_names)
table.align[column_names[0]] = "l"
table.align[column_names[1]] = "l"
table.align[column_names[2]] = "l"
table.title = question
table._max_width = {column_names[0] :max_characters, column_names[1] :max_characters, column_names[2] :max_characters}
for row in data:
table.add_row(row)
print(table)
def predict(model, sent, real_label_1, real_label_2):
classes_1 = class_names_label_0
classes_2 = class_names_label_1
tokens = text_to_seq(sent)
padded_tokens = pad_sequences([tokens], maxlen=max_words,
padding="post", truncating="post")
probabilities_1, probabilities_2 = model.predict(padded_tokens)
prediction_1 = tf.argmax(probabilities_1, axis=1).numpy()[0]
class_name_1 = classes_1[prediction_1]
prediction_2 = tf.argmax(probabilities_2, axis=1).numpy()[0]
class_name_2 = classes_2[prediction_2]
table_headers =["KEY", "CATEGORY 1", "CATEGORY 2"]
table_data = [
["PREDICTED CLASS", prediction_1, prediction_2],
["PREDICTED CLASS NAME", class_name_1, class_name_2],
["REAL CLASS", real_label_1, real_label_2],
["REAL CLASS NAME", classes_1[real_label_1], classes_2[real_label_2]],
["CONFIDENCE OVER OTHER CLASSES", f'{probabilities_1[0][prediction_1] * 100:.2f}%', f'{probabilities_2[0][prediction_2] * 100:.2f}%'],
]
tabulate(table_headers, table_data, 50, sent)
###Output
_____no_output_____
###Markdown
Making predictions on the test data.
###Code
for label_1, label_2, sent in zip(test_labels_0[:10], test_labels_1, test_features):
predict(question_category_model, sent, real_label_1=label_1, real_label_2= label_2)
###Output
+---------------------------------------------------------+
| how many wings does a flea have ? |
+-------------------------------+------------+------------+
| KEY | CATEGORY 1 | CATEGORY 2 |
+-------------------------------+------------+------------+
| PREDICTED CLASS | 5 | 6 |
| PREDICTED CLASS NAME | NUMERIC | count |
| REAL CLASS | 5 | 6 |
| REAL CLASS NAME | NUMERIC | count |
| CONFIDENCE OVER OTHER CLASSES | 100.00% | 99.96% |
+-------------------------------+------------+------------+
+---------------------------------------------------------+
| who is samuel pickering ? |
+-------------------------------+------------+------------+
| KEY | CATEGORY 1 | CATEGORY 2 |
+-------------------------------+------------+------------+
| PREDICTED CLASS | 3 | 12 |
| PREDICTED CLASS NAME | HUMAN | desc |
| REAL CLASS | 3 | 12 |
| REAL CLASS NAME | HUMAN | desc |
| CONFIDENCE OVER OTHER CLASSES | 100.00% | 98.90% |
+-------------------------------+------------+------------+
+---------------------------------------------------------+
| how many watts make a kilowatt ? |
+-------------------------------+------------+------------+
| KEY | CATEGORY 1 | CATEGORY 2 |
+-------------------------------+------------+------------+
| PREDICTED CLASS | 5 | 6 |
| PREDICTED CLASS NAME | NUMERIC | count |
| REAL CLASS | 5 | 6 |
| REAL CLASS NAME | NUMERIC | count |
| CONFIDENCE OVER OTHER CLASSES | 100.00% | 99.96% |
+-------------------------------+------------+------------+
+--------------------------------------------------------------------------------+
| independent silversmith 's account for what percentage of silver production ? |
+--------------------------------------------+-----------------+-----------------+
| KEY | CATEGORY 1 | CATEGORY 2 |
+--------------------------------------------+-----------------+-----------------+
| PREDICTED CLASS | 5 | 29 |
| PREDICTED CLASS NAME | NUMERIC | period |
| REAL CLASS | 5 | 28 |
| REAL CLASS NAME | NUMERIC | perc |
| CONFIDENCE OVER OTHER CLASSES | 89.34% | 25.93% |
+--------------------------------------------+-----------------+-----------------+
+------------------------------------------------------------------------------------------+
| in the movie groundshog day what is the name of the character played by andie macdowell ? |
+--------------------------------------------------+-------------------+-------------------+
| KEY | CATEGORY 1 | CATEGORY 2 |
+--------------------------------------------------+-------------------+-------------------+
| PREDICTED CLASS | 2 | 8 |
| PREDICTED CLASS NAME | ENTITY | cremat |
| REAL CLASS | 3 | 19 |
| REAL CLASS NAME | HUMAN | ind |
| CONFIDENCE OVER OTHER CLASSES | 99.74% | 99.85% |
+--------------------------------------------------+-------------------+-------------------+
+---------------------------------------------------------+
| when did they canonize the bible ? |
+-------------------------------+------------+------------+
| KEY | CATEGORY 1 | CATEGORY 2 |
+-------------------------------+------------+------------+
| PREDICTED CLASS | 5 | 10 |
| PREDICTED CLASS NAME | NUMERIC | date |
| REAL CLASS | 5 | 10 |
| REAL CLASS NAME | NUMERIC | date |
| CONFIDENCE OVER OTHER CLASSES | 100.00% | 100.00% |
+-------------------------------+------------+------------+
+----------------------------------------------------------+
| what is the root of all evil ? |
+-------------------------------+-------------+------------+
| KEY | CATEGORY 1 | CATEGORY 2 |
+-------------------------------+-------------+------------+
| PREDICTED CLASS | 1 | 11 |
| PREDICTED CLASS NAME | DESCRIPTION | def |
| REAL CLASS | 1 | 12 |
| REAL CLASS NAME | DESCRIPTION | desc |
| CONFIDENCE OVER OTHER CLASSES | 99.75% | 98.15% |
+-------------------------------+-------------+------------+
+---------------------------------------------------------+
| what are the largest libraries in the us ? |
+-------------------------------+------------+------------+
| KEY | CATEGORY 1 | CATEGORY 2 |
+-------------------------------+------------+------------+
| PREDICTED CLASS | 2 | 27 |
| PREDICTED CLASS NAME | ENTITY | other |
| REAL CLASS | 4 | 27 |
| REAL CLASS NAME | LOCATION | other |
| CONFIDENCE OVER OTHER CLASSES | 92.42% | 91.97% |
+-------------------------------+------------+------------+
+---------------------------------------------------------+
| who is john macarthur , 1767-1834 ? |
+-------------------------------+------------+------------+
| KEY | CATEGORY 1 | CATEGORY 2 |
+-------------------------------+------------+------------+
| PREDICTED CLASS | 3 | 12 |
| PREDICTED CLASS NAME | HUMAN | desc |
| REAL CLASS | 3 | 12 |
| REAL CLASS NAME | HUMAN | desc |
| CONFIDENCE OVER OTHER CLASSES | 100.00% | 98.47% |
+-------------------------------+------------+------------+
+---------------------------------------------------------+
| where can i find full written draft of ctbt ? |
+-------------------------------+------------+------------+
| KEY | CATEGORY 1 | CATEGORY 2 |
+-------------------------------+------------+------------+
| PREDICTED CLASS | 4 | 27 |
| PREDICTED CLASS NAME | LOCATION | other |
| REAL CLASS | 4 | 27 |
| REAL CLASS NAME | LOCATION | other |
| CONFIDENCE OVER OTHER CLASSES | 99.99% | 99.85% |
+-------------------------------+------------+------------+
###Markdown
Confusion matrix.A sketch of how a confusion matrix for the first output could be computed is shown below. Next.We are going to use `Conv` nets to do this task.
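Below is a minimal sketch of how the confusion matrix for the first output (the 6 coarse categories) could be computed, using the trained `question_category_model`, `test_tokens_padded`, `test_labels_0` and `class_names_label_0` from above (`pred_0`, `cm` and `cm_df` are new names introduced here):

```
from sklearn.metrics import confusion_matrix

probs_0, probs_1 = question_category_model.predict(test_tokens_padded)
pred_0 = np.argmax(probs_0, axis=1)

cm = confusion_matrix(test_labels_0, pred_0, labels=np.arange(len(class_names_label_0)))
cm_df = pd.DataFrame(cm, index=class_names_label_0, columns=class_names_label_0)
print(cm_df)
```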
###Code
###Output
_____no_output_____ |
10-37 Data Types and Arithmetic.ipynb | ###Markdown
Data Types in R
###Code
print(class(4))
print(class(4L))
print(class(T))
print(class(1+4i))
print(class("Sample"))
print(class(charToRaw("Sample")))
is.numeric(2+3i) # use is.type() to check type
is.integer(9L)
is.logical(0)
as.integer(1.2) # use as.type() to convert
###Output
_____no_output_____
###Markdown
Arithmetic in R
###Code
sprintf("4 + 5 = %d", 4 + 5)
sprintf("4 - 5 = %d", 4 - 5)
sprintf("4 * 5 = %d", 4 * 5)
sprintf("4 / 5 = %1.3f", 4 / 5)
sprintf("43 %%%% 5 = %d", 43 %% 5) # sprintf needs % to escape each percent sign :)
sprintf("4^2 = %d", 4^2)
###Output
_____no_output_____ |
content/post/laplacian/index.ipynb | ###Markdown
---title: "EEG Current Source Density and the Surface Laplacian"summary: "How does it work? Why does it work? I try to figure out this and more."date: 2020-01-30source: jupyter--- In some of our [recent work](https://www.biorxiv.org/content/10.1101/782813v1),we use Surface Laplacians to estimate Current Source Density (CSD) from EEG data.This transforms the data from a measure of electric potentials at the scalpto an estimate of underlying current sources and sinks: the Current Source Density (CSD).The Surface Laplacian can also be thought of as a spatial high-pass filter applied to the data,which attenuates low-spatial-frequency signals that are broadly distributed across the scalp,but preserves high-spatial-frequency signals that are more localised.I've learned a lot about this approach from- Mike X Cohen's excellent book, [*Analying Neural Time Series Data*](http://mikexcohen.com/books) and the accompanying MATLAB code. I also recommend his [videos lectures](http://mikexcohen.com/lectures.html)- This [Python/MNE port of Cohen's code](https://github.com/alberto-ara/Surface-Laplacian)- Kayser & Tenke (2006). [*Principal components analysis of Laplacian waveforms as a generic method for identifying ERP generator patterns*](http://psychophysiology.cpmc.columbia.edu/pdf/kayser2005b.pdf). This is the method we use.- Another classic book, Nunez & Srinivasan (2006). [*Electric fields of the brain: the neurophysics of EEG*](https://brainmaster.com/software/pubs/brain/Nunez%202ed.pdf)However, despite these great resources, I wasn't happy with my intuitions about how this works.In this post, I work through a toy example that helped me make sense of just why and how this method works.> Disclaimer:> I've written this post mostly to help me understand something myself.> I'm not the expert here, and it may all be horribly wrong.> No warranties.
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
import seaborn as sns
sns.set(rc={'figure.figsize':(8, 6)},
font_scale=1.5)
sns.set_style('whitegrid')
###Output
_____no_output_____
###Markdown
The Surface LaplacianTo begin, let's pretend the world is two-dimensional,because it makes things simple.- A ***Laplacian*** is basically a spatial second-order derivative.- A derivative is a number that indicates how quickly something changes.- A spatial derivative indicates how much something changes from place to place. If we measure elevation above sea level at two points 10 m apart and find the first is at +5 m while the second is at +15, the spatial derivative is the slope between these two places. More formally, $\frac{\delta Elevation}{\delta Location}$ (change in elevation per change in location) = $\frac{10\ m}{10\ m} = 1$ meters per meter. - A second derivative indicates how quickly the first derivative changes. Imagine a third point another 10 m further along that is 30 m above sea level. Remember, we're pretending everything is two-dimensional, so all these points are along a line. The slope between points 1 and 2 is $\frac{10\ m}{10\ m} = 1$. The slope between points 2 and 3 is $\frac{15\ m}{10\ m} = 1.5$. The difference in slopes is $0.5$.Here's all of the above in plot form.
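In symbols, and jumping ahead to the two-dimensional case we end with, the Laplacian of a potential $V$ is the sum of its second spatial derivatives,

$$\nabla^2 V = \frac{\partial^2 V}{\partial x^2} + \frac{\partial^2 V}{\partial y^2}.$$

On measurements spaced a distance $h$ apart, the one-dimensional second derivative is approximated by the finite difference

$$\frac{\partial^2 V}{\partial x^2} \approx \frac{V_{i-1} - 2V_i + V_{i+1}}{h^2},$$

which is exactly what the repeated `np.diff` calls below compute, up to the constant factor $1/h^2$.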
###Code
delta_loc = 10
locs = np.array([0, 10, 20])
heights = np.array([5, 15, 30])
diff_height = np.diff(heights)
diff_diff_height = np.diff(diff_height)
diff_locs = locs[:-1] + .5*delta_loc # Keep the x-axis properly aligned
diff_diff_locs = locs[:-2] + 1*delta_loc
plt.plot(locs, heights, '-o', label='Elevation (m)')
plt.plot(diff_locs, diff_height, '-o', label=u'Slope (δ Elevation)')
plt.plot(diff_diff_locs, diff_diff_height, '-o', label=u'δ Slope')
plt.hlines(0, *plt.xlim(), linestyle='dashed')
plt.xlabel('Location (m)')
plt.legend()
plt.show()
###Output
_____no_output_____
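###Markdown
A quick aside (my own addition, not part of the original post): stacking `np.diff` twice, as the code above does, is just the usual finite-difference approximation of the one-dimensional Laplacian, $\frac{\delta^2 f}{\delta x^2} \approx \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}$. For the three elevation points the numerator is $30 - 2 \times 15 + 5 = 5$; the code in this post never divides by the spacing $h$, since only the shape of the result matters here.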
###Markdown
EEG Laplacians

So why does calculating the second spatial derivative of the EEG signal help us? To see, I simulate some one-dimensional data (pretend we only have electrodes on the midline) illustrating a common issue in EEG: our data contains a small, narrow negative component and a large, broad positive one.
###Code
electrode_spacing = .1
space = np.arange(0, 10, electrode_spacing)
comp1 = 2 * stats.norm.pdf(space, loc=4, scale=1)
comp2 = -.25 * stats.norm.pdf(space, loc=5, scale=.5)
def do_label():
plt.legend(loc='upper right')
plt.xlabel('Location')
plt.ylabel('')
fig_ground_truth = plt.figure()
for comp, label in zip([comp1, comp2], ['Positive', 'Negative']):
plt.plot(space, comp, label=label)
do_label()
###Output
_____no_output_____
###Markdown
Of course, in our recordings both of these components are superimposed, and the little one is obscured by the big one. Poor little component.
###Code
eeg_signal = comp1 + comp2
plt.plot(space, eeg_signal, label='EEG Signal')
do_label()
###Output
_____no_output_____
###Markdown
Where did our negative component go? It's still there, as a wiggle on the right-hand side of the positive component, but good luck detecting this in noisy data. Yes, I did tweak these parameters to make the effect particularly dramatic.

Next, we take the first spatial derivative of the EEG signal: how much it changes between electrodes. This doesn't tell us much by itself.
###Code
first_derivative = np.diff(eeg_signal)
space_d1 = space[:-1] + .5 * electrode_spacing # Keep locations lined up with differenced data
plt.plot(space_d1, first_derivative, label='First spatial derivative')
do_label()
###Output
_____no_output_____
###Markdown
Next, we take the second spatial derivative: the derivative of the derivative.
###Code
second_derivative = np.diff(first_derivative)
space_d2 = space[:-2] + .5 * electrode_spacing
plt.plot(space_d2, second_derivative, label='Second spatial derivative')
do_label()
###Output
_____no_output_____
###Markdown
...and multiply it by $-1$.
###Code
plt.plot(space_d2, -1*second_derivative, label='-1 × Second spatial derivative')
do_label()
###Output
_____no_output_____
###Markdown
There it is! Our positive component is narrower, but still has its peak at $x=4$. Our negative component can now be seen at $x=5$. For comparison, here are the original components again.
###Code
fig_ground_truth
###Output
_____no_output_____
###Markdown
By calculating the Laplacian, we've squeezed the components into a narrower spatial range, which makes it possible to see the otherwise hidden negative component. It looks like we've also introduced some other artifacts, like the negative dip at around $x=2$. However, this makes sense from an electrophysiological point of view: if there is a positive current source at $x=4$, there must be corresponding negative sinks around it. (I might be confusing sinks and sources here, but no matter.)

2D Laplacians

OK, this works with one-dimensional data, but EEG electrodes are located in three-dimensional space around the head. Let's at least see how it works in the two-dimensional case. We'll pretend the scalp is a flat square of 20 cm on each side.
###Code
X = Y = np.arange(-10, 10.01, .1)
XY = np.array([(x, y) for x in X for y in Y])
def show(X):
'''Plot an image from a 2D array'''
mx = np.abs(X).max()
im = plt.imshow(X, cmap='seismic', vmin=-mx, vmax=mx)
plt.colorbar(im)
locs = [(0, -4), (0, 2)] # Center of each component
peaks = [-1, 6] # Peak amplitudes
scales = [2, 8] # Standard deviations
V = np.zeros(XY.shape[0]) # Voltages
for loc, peak, scale in zip(locs, peaks, scales):
cov = np.eye(2) * scale
thisV = stats.multivariate_normal.pdf(XY, mean=loc, cov=cov)
gain = peak / thisV.max()
V += thisV * gain
wV = V.reshape(len(X), len(X)).T # Voltage as a 2D grid
show(wV)
###Output
_____no_output_____
###Markdown
Again, where did that little guy go? As before, we can look at the first derivatives, but again they don't tell us much.
###Code
def combine(dx, dy):
'''Combine two one-dimensional partial derivates to form a Laplacian
Drops the last value in each dimension so the shapes match.
Is this the right thing to do?
Wikipedia (https://en.wikipedia.org/wiki/Laplace_operator#Definition)
says "the Laplacian of f is the sum of all the unmixed second partial
derivatives in the Cartesian coordinates xi." I think this means yes.
'''
n = dx.shape[0]
assert(dy.shape[1]==n)
return dx[:n, :n] + dy[:n, :n]
dv_dx = np.diff(wV, axis=0)[:-1, :] # Drop the last value on the appropriate axis to keep things symmetrical
dv_dy = np.diff(wV, axis=1)[:, :-1]
gradient = combine(dv_dx, dv_dy)
show(gradient)
###Output
_____no_output_____
###Markdown
The negative second derivatives - the Laplacian - on the other hand...
###Code
ddv_ddx = np.diff(dv_dx, axis=0)[:-1, :]
ddv_ddy = np.diff(dv_dy, axis=1)[:, :-1]
laplacian = -combine(ddv_ddx, ddv_ddy)
show(laplacian)
###Output
_____no_output_____ |
Day1/2_evaluation_metrics.ipynb | ###Markdown
Metrics To Evaluate Machine Learning Algorithms in Python

The metrics that you choose to evaluate your machine learning algorithms are very important. The choice of metrics influences how the performance of machine learning algorithms is measured and compared. They influence how you weight the importance of different characteristics in the results and your ultimate choice of which algorithm to choose.

In this post, you will discover how to select and use different machine learning performance metrics in Python with scikit-learn. Let’s get started.

About the Recipes

Various machine learning evaluation metrics are demonstrated in this post using small code recipes in Python and scikit-learn. Each recipe is designed to be standalone so that you can copy-and-paste it into your project and use it immediately. Metrics are demonstrated for both classification and regression type machine learning problems.

For classification metrics, the Pima Indians onset of diabetes dataset is used as the demonstration. This is a binary classification problem where all of the input variables are numeric (update: download from here).

For regression metrics, the Boston House Price dataset is used as the demonstration. This is a regression problem where all of the input variables are also numeric (update: download data from here).

In each recipe, the dataset is downloaded directly from the UCI Machine Learning repository. All recipes evaluate the same algorithms: Logistic Regression for the classification problems and Linear Regression for the regression problems. A 10-fold cross-validation test harness is used to demonstrate each metric, because this is the most likely scenario in which you will be employing different algorithm evaluation metrics.

A caveat in these recipes is the cross_val_score function used to report the performance in each recipe. It does allow the use of different scoring metrics that will be discussed, but all scores are reported so that they can be sorted in ascending order (largest score is best). Some evaluation metrics (like mean squared error) are naturally descending scores (the smallest score is best) and as such are reported as negative by the cross_val_score() function. This is important to note, because some scores will be reported as negative even though, by definition, they can never be negative.

You can learn more about the machine learning algorithm performance metrics supported by scikit-learn on the page Model evaluation: quantifying the quality of predictions. Let’s get on with the evaluation metrics.

============================== Part A: Classification Metrics ==============================

Classification problems are perhaps the most common type of machine learning problem and as such there are a myriad of metrics that can be used to evaluate predictions for these problems. In this section we will review how to use the following metrics:

1. Classification Accuracy.
2. Logarithmic Loss.
3. Area Under ROC Curve.
4. Confusion Matrix.
5. Classification Report.

1. Classification Accuracy

Classification accuracy is the number of correct predictions made as a ratio of all predictions made. This is the most common evaluation metric for classification problems; it is also the most misused. It is really only suitable when there are an equal number of observations in each class (which is rarely the case) and all predictions and prediction errors are equally important, which is often not the case.

Below is an example of calculating classification accuracy.
###Code
# Cross Validation Classification Accuracy
import pandas
from sklearn import model_selection
from sklearn.linear_model import LogisticRegression
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = pandas.read_csv(url, names=names)
array = dataframe.values
X = array[:,0:8]
Y = array[:,8]
seed = 7
kfold = model_selection.KFold(n_splits=10, random_state=seed)
model = LogisticRegression()
scoring = 'accuracy'
results = model_selection.cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("Accuracy:", results.mean(), results.std())
###Output
Accuracy: 0.7695146958304853 0.04841051924567195
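###Markdown
An optional extra, not part of the original recipe: because roughly two thirds of the Pima samples belong to the negative class, a "model" that always predicts the majority class already scores around 0.65. Comparing against such a baseline (a minimal sketch below, using scikit-learn's DummyClassifier) helps put the 77% figure in context and illustrates why accuracy alone can mislead on imbalanced data.
###Code
# Hypothetical baseline for comparison: always predict the most frequent class
from sklearn.dummy import DummyClassifier
baseline = DummyClassifier(strategy='most_frequent')
baseline_results = model_selection.cross_val_score(baseline, X, Y, cv=kfold, scoring='accuracy')
print("Baseline accuracy:", baseline_results.mean())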
###Markdown
You can see that the ratio is reported. This can be converted into a percentage by multiplying the value by 100, giving an accuracy score of approximately 77%.

2. Logarithmic Loss

Logarithmic loss (or logloss) is a performance metric for evaluating the predictions of probabilities of membership to a given class. The scalar probability between 0 and 1 can be seen as a measure of confidence for a prediction by an algorithm. Predictions that are correct or incorrect are rewarded or punished proportionally to the confidence of the prediction. You can learn more about logarithmic loss in the Loss functions for classification Wikipedia article.

Below is an example of calculating logloss for Logistic Regression predictions on the Pima Indians onset of diabetes dataset.
###Code
# Cross Validation Classification LogLoss
import pandas
from sklearn import model_selection
from sklearn.linear_model import LogisticRegression
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = pandas.read_csv(url, names=names)
array = dataframe.values
X = array[:,0:8]
Y = array[:,8]
seed = 7
kfold = model_selection.KFold(n_splits=10, random_state=seed)
model = LogisticRegression()
scoring = 'neg_log_loss'
results = model_selection.cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("Logloss: ", results.mean(), results.std())
###Output
Logloss: -0.4925638827542425 0.04702914812116142
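###Markdown
To make the "punished proportionally to confidence" idea concrete, here is a small toy illustration of my own (not part of the original recipe) using sklearn.metrics.log_loss directly on hand-made probabilities.
###Code
# Toy example: confidently wrong predictions cost far more than cautious ones
from sklearn.metrics import log_loss
y_true = [1, 1, 0, 0]
cautious = [0.6, 0.6, 0.4, 0.4]          # mildly confident, all on the correct side
confident_wrong = [0.1, 0.9, 0.9, 0.1]   # two predictions are confidently wrong
print("cautious logloss:", log_loss(y_true, cautious))
print("confidently wrong logloss:", log_loss(y_true, confident_wrong))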
###Markdown
Smaller logloss is better, with 0 representing a perfect logloss. As mentioned above, the measure is inverted to be ascending when using the cross_val_score() function.

3. Area Under ROC Curve

Area Under ROC Curve (or AUC for short) is a performance metric for binary classification problems. The AUC represents a model’s ability to discriminate between positive and negative classes. An area of 1.0 represents a model that made all predictions perfectly. An area of 0.5 represents a model as good as random. Learn more about ROC here.

ROC can be broken down into sensitivity and specificity. A binary classification problem is really a trade-off between sensitivity and specificity. Sensitivity is the true positive rate, also called the recall. It is the number of instances from the positive (first) class that were actually predicted correctly. Specificity is also called the true negative rate. It is the number of instances from the negative (second) class that were actually predicted correctly. You can learn more about ROC on the Wikipedia page.

The example below provides a demonstration of calculating AUC.
###Code
# Cross Validation Classification ROC AUC
import pandas
from sklearn import model_selection
from sklearn.linear_model import LogisticRegression
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = pandas.read_csv(url, names=names)
array = dataframe.values
X = array[:,0:8]
Y = array[:,8]
seed = 7
kfold = model_selection.KFold(n_splits=10, random_state=seed)
model = LogisticRegression()
scoring = 'roc_auc'
results = model_selection.cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("AUC:",results.mean(), results.std())
###Output
AUC: 0.823716379293716 0.040723558409611726
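###Markdown
As an optional sketch (my own addition), the same AUC can be computed on a single hold-out split by scoring predicted probabilities with roc_auc_score; roc_curve exposes the underlying trade-off between the true positive rate and the false positive rate.
###Code
# Sketch: AUC from predicted probabilities on one train/test split
from sklearn.metrics import roc_auc_score, roc_curve
X_tr, X_te, Y_tr, Y_te = model_selection.train_test_split(X, Y, test_size=0.33, random_state=seed)
clf = LogisticRegression()
clf.fit(X_tr, Y_tr)
probs = clf.predict_proba(X_te)[:, 1]          # probability of the positive class
print("hold-out AUC:", roc_auc_score(Y_te, probs))
fpr, tpr, thresholds = roc_curve(Y_te, probs)  # points along the ROC curve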
###Markdown
You can see that the AUC is relatively close to 1 and greater than 0.5, suggesting some skill in the predictions.

4. Confusion Matrix

The confusion matrix is a handy presentation of the accuracy of a model with two or more classes. The table presents predictions on the x-axis and actual outcomes on the y-axis. The cells of the table are the number of predictions made by a machine learning algorithm. For example, a machine learning algorithm can predict 0 or 1 and each prediction may actually have been a 0 or 1. Predictions for 0 that were actually 0 appear in the cell for prediction=0 and actual=0, whereas predictions for 0 that were actually 1 appear in the cell for prediction=0 and actual=1. And so on. You can learn more about the Confusion Matrix in the Wikipedia article.

Below is an example of calculating a confusion matrix for a set of predictions by a model on a test set.
###Code
# Cross Validation Classification Confusion Matrix
import pandas
from sklearn import model_selection
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = pandas.read_csv(url, names=names)
array = dataframe.values
X = array[:,0:8]
Y = array[:,8]
test_size = 0.33
seed = 7
X_train, X_test, Y_train, Y_test = model_selection.train_test_split(X, Y, test_size=test_size, random_state=seed)
model = LogisticRegression()
model.fit(X_train, Y_train)
predicted = model.predict(X_test)
matrix = confusion_matrix(Y_test, predicted)
print(matrix)
###Output
[[141 21]
[ 41 51]]
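###Markdown
A small optional follow-up (my own addition): wrapping the matrix in a labelled pandas DataFrame, or unpacking it with ravel(), makes the cells easier to read.
###Code
# Label the confusion matrix and unpack its cells
labelled = pandas.DataFrame(matrix,
                            index=['actual 0', 'actual 1'],
                            columns=['predicted 0', 'predicted 1'])
print(labelled)
tn, fp, fn, tp = matrix.ravel()   # row-major order for the binary 2x2 case
print("TN, FP, FN, TP:", tn, fp, fn, tp)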
###Markdown
Although the array is printed without headings, you can see that the majority of the predictions fall on the diagonal line of the matrix (which are correct predictions).

5. Classification Report

Scikit-learn does provide a convenience report when working on classification problems to give you a quick idea of the accuracy of a model using a number of measures. The classification_report() function displays the precision, recall, f1-score and support for each class.

The example below demonstrates the report on the binary classification problem.
###Code
# Cross Validation Classification Report
import pandas
from sklearn import model_selection
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = pandas.read_csv(url, names=names)
array = dataframe.values
X = array[:,0:8]
Y = array[:,8]
test_size = 0.33
seed = 7
X_train, X_test, Y_train, Y_test = model_selection.train_test_split(X, Y, test_size=test_size, random_state=seed)
model = LogisticRegression()
model.fit(X_train, Y_train)
predicted = model.predict(X_test)
report = classification_report(Y_test, predicted)
print(report)
###Output
precision recall f1-score support
0.0 0.77 0.87 0.82 162
1.0 0.71 0.55 0.62 92
micro avg 0.76 0.76 0.76 254
macro avg 0.74 0.71 0.72 254
weighted avg 0.75 0.76 0.75 254
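###Markdown
If only a single figure is needed, the same numbers can be pulled out individually (an optional extra, not part of the original recipe); by default these functions report the positive (1.0) class.
###Code
# Individual precision, recall and F1 for the positive class
from sklearn.metrics import precision_score, recall_score, f1_score
print("precision:", precision_score(Y_test, predicted))
print("recall:", recall_score(Y_test, predicted))
print("f1:", f1_score(Y_test, predicted))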
###Markdown
============================== Part B: Regression Metrics ==============================

In this section we will review 3 of the most common metrics for evaluating predictions on regression machine learning problems:

1. Mean Absolute Error.
2. Mean Squared Error.
3. R^2.

1. Mean Absolute Error

The Mean Absolute Error (or MAE) is the average of the absolute differences between predictions and actual values. It gives an idea of how wrong the predictions were. The measure gives an idea of the magnitude of the error, but no idea of the direction (e.g. over- or under-predicting). You can learn more about Mean Absolute Error on Wikipedia.

The example below demonstrates calculating mean absolute error on the Boston house price dataset.
###Code
# Cross Validation Regression MAE
import pandas
from sklearn import model_selection
from sklearn.linear_model import LinearRegression
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.data"
names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
dataframe = pandas.read_csv(url, delim_whitespace=True, names=names)
array = dataframe.values
X = array[:,0:13]
Y = array[:,13]
seed = 7
kfold = model_selection.KFold(n_splits=10, random_state=seed)
model = LinearRegression()
scoring = 'neg_mean_absolute_error'
results = model_selection.cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("MAE: ", results.mean(), results.std())
###Output
MAE: -4.00494663532398 2.083599268709534
###Markdown
A value of 0 indicates no error, or perfect predictions. Like logloss, this metric is inverted by the cross_val_score() function.

2. Mean Squared Error

The Mean Squared Error (or MSE) is much like the mean absolute error in that it provides a gross idea of the magnitude of error. Taking the square root of the mean squared error converts the units back to the original units of the output variable and can be meaningful for description and presentation. This is called the Root Mean Squared Error (or RMSE). You can learn more about Mean Squared Error on Wikipedia.

The example below provides a demonstration of calculating mean squared error.
###Code
# Cross Validation Regression MSE
import pandas
from sklearn import model_selection
from sklearn.linear_model import LinearRegression
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.data"
names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
dataframe = pandas.read_csv(url, delim_whitespace=True, names=names)
array = dataframe.values
X = array[:,0:13]
Y = array[:,13]
seed = 7
kfold = model_selection.KFold(n_splits=10, random_state=seed)
model = LinearRegression()
scoring = 'neg_mean_squared_error'
results = model_selection.cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("MSE: ", results.mean(), results.std())
###Output
MSE: -34.705255944524744 45.573999200308585
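###Markdown
A minimal sketch of my own (not part of the original recipe): to get an RMSE in the original units of the target, negate the scores returned by cross_val_score() and take the square root.
###Code
# Turn the negated MSE scores back into an RMSE
import numpy as np
rmse_scores = np.sqrt(-results)   # results are reported as negative MSE values
print("RMSE:", rmse_scores.mean(), rmse_scores.std())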
###Markdown
This metric too is inverted so that the results are increasing. Remember to take the absolute value before taking the square root if you are interested in calculating the RMSE.

3. R^2 Metric

The R^2 (or R Squared) metric provides an indication of the goodness of fit of a set of predictions to the actual values. In statistical literature, this measure is called the coefficient of determination. This is a value between 0 and 1 for no fit and perfect fit respectively. You can learn more about the Coefficient of determination article on Wikipedia.

The example below provides a demonstration of calculating the mean R^2 for a set of predictions.
###Code
# Cross Validation Regression R^2
import pandas
from sklearn import model_selection
from sklearn.linear_model import LinearRegression
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.data"
names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
dataframe = pandas.read_csv(url, delim_whitespace=True, names=names)
array = dataframe.values
X = array[:,0:13]
Y = array[:,13]
seed = 7
kfold = model_selection.KFold(n_splits=10, random_state=seed)
model = LinearRegression()
scoring = 'r2'
results = model_selection.cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("R^2:", results.mean(), results.std())
###Output
R^2: 0.20252899006056518 0.5952960169512289
###Markdown
You can see that the predictions have a poor fit to the actual values, with a value close to zero and less than 0.5.

Summary

In this post, you discovered metrics that you can use to evaluate your machine learning algorithms.

You learned about 3 classification metrics: Accuracy, Logarithmic Loss, and Area Under ROC Curve. Also 2 convenience methods for classification prediction results: Confusion Matrix and Classification Report. And 3 regression metrics: Mean Absolute Error, Mean Squared Error, and R^2.

Do you have any questions about metrics for evaluating machine learning algorithms or this post? Ask your question in the comments and I will do my best to answer it.

============================== Part C: Clustering Metrics ==============================

This is advanced material; if you have time, please continue. You may also want to help your classmates.

A Quick Review on Clustering: Load Wisconsin Breast Cancer Dataset
###Code
import numpy as np
from sklearn.datasets import load_breast_cancer
# load data
data = load_breast_cancer()
X = data.data
y = data.target
print(X.shape, data.feature_names)
###Output
(569, 30) ['mean radius' 'mean texture' 'mean perimeter' 'mean area'
'mean smoothness' 'mean compactness' 'mean concavity'
'mean concave points' 'mean symmetry' 'mean fractal dimension'
'radius error' 'texture error' 'perimeter error' 'area error'
'smoothness error' 'compactness error' 'concavity error'
'concave points error' 'symmetry error' 'fractal dimension error'
'worst radius' 'worst texture' 'worst perimeter' 'worst area'
'worst smoothness' 'worst compactness' 'worst concavity'
'worst concave points' 'worst symmetry' 'worst fractal dimension']
###Markdown
Partition based Clustering Example
###Code
from sklearn.cluster import KMeans
km = KMeans(n_clusters=2, random_state=2)
km.fit(X)
labels = km.labels_
centers = km.cluster_centers_
print(labels[:10])
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
bc_pca = pca.fit_transform(X)
import matplotlib.pyplot as plt
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
fig.suptitle('Visualizing breast cancer clusters')
fig.subplots_adjust(top=0.85, wspace=0.5)
ax1.set_title('Actual Labels')
ax2.set_title('Clustered Labels')
for i in range(len(y)):
if y[i] == 0:
c1 = ax1.scatter(bc_pca[i,0], bc_pca[i,1],c='g', marker='.')
if y[i] == 1:
c2 = ax1.scatter(bc_pca[i,0], bc_pca[i,1],c='r', marker='.')
if labels[i] == 0:
c3 = ax2.scatter(bc_pca[i,0], bc_pca[i,1],c='g', marker='.')
if labels[i] == 1:
c4 = ax2.scatter(bc_pca[i,0], bc_pca[i,1],c='r', marker='.')
l1 = ax1.legend([c1, c2], ['0', '1'])
l2 = ax2.legend([c3, c4], ['0', '1'])
###Output
_____no_output_____
###Markdown
Hierarchical Clustering Example
###Code
from scipy.cluster.hierarchy import dendrogram, linkage
import numpy as np
np.set_printoptions(suppress=True)
Z = linkage(X, 'ward')
print(Z)
plt.figure(figsize=(8, 3))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('Data point')
plt.ylabel('Distance')
dendrogram(Z)
plt.axhline(y=10000, c='k', ls='--', lw=0.5)
plt.show()
from scipy.cluster.hierarchy import fcluster
max_dist = 10000
hc_labels = fcluster(Z, max_dist, criterion='distance')
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
fig.suptitle('Visualizing breast cancer clusters')
fig.subplots_adjust(top=0.85, wspace=0.5)
ax1.set_title('Actual Labels')
ax2.set_title('Hierarchical Clustered Labels')
for i in range(len(y)):
if y[i] == 0:
c1 = ax1.scatter(bc_pca[i,0], bc_pca[i,1],c='g', marker='.')
if y[i] == 1:
c2 = ax1.scatter(bc_pca[i,0], bc_pca[i,1],c='r', marker='.')
if hc_labels[i] == 1:
c3 = ax2.scatter(bc_pca[i,0], bc_pca[i,1],c='g', marker='.')
if hc_labels[i] == 2:
c4 = ax2.scatter(bc_pca[i,0], bc_pca[i,1],c='r', marker='.')
l1 = ax1.legend([c1, c2], ['0', '1'])
l2 = ax2.legend([c3, c4], ['1', '2'])
###Output
_____no_output_____
###Markdown
Clustering Model's Evaluation Metrics

Evaluating the performance of a clustering algorithm is not as trivial as counting the number of errors or the precision and recall of a supervised classification algorithm. In particular, any evaluation metric should not take the absolute values of the cluster labels into account, but rather whether this clustering defines separations of the data similar to some ground truth set of classes, or satisfies some assumption such that members belonging to the same class are more similar than members of different classes according to some similarity metric.

Build two clustering models on the breast cancer dataset.
###Code
km2 = KMeans(n_clusters=2, random_state=42).fit(X)
km2_labels = km2.labels_
km5 = KMeans(n_clusters=5, random_state=42).fit(X)
km5_labels = km5.labels_
###Output
_____no_output_____
###Markdown
Homogeneity, Completeness and V-measure

Given the knowledge of the ground truth class assignments of the samples, it is possible to define some intuitive metrics using conditional entropy analysis. In particular, Rosenberg and Hirschberg (2007) define the following desirable objectives, and a summary measure, for any cluster assignment:

- homogeneity: each cluster contains only members of a single class.
- completeness: all members of a given class are assigned to the same cluster.
- v-measure: the harmonic mean of homogeneity and completeness; it is actually equivalent to the normalized mutual information (NMI) with the aggregation function being the arithmetic mean [B2011].
###Code
from sklearn import datasets, metrics
km2_hcv = np.round(metrics.homogeneity_completeness_v_measure(y, km2_labels), 3)
km5_hcv = np.round(metrics.homogeneity_completeness_v_measure(y, km5_labels), 3)
print('Homogeneity, Completeness, V-measure metrics for num clusters=2: ', km2_hcv)
print('Homogeneity, Completeness, V-measure metrics for num clusters=5: ', km5_hcv)
###Output
Homogeneity, Completeness, V-measure metrics for num clusters=2: [0.422 0.517 0.465]
Homogeneity, Completeness, V-measure metrics for num clusters=5: [0.602 0.298 0.398]
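###Markdown
A toy illustration of my own to separate the two notions: splitting one true class across two clusters stays perfectly homogeneous but loses completeness, while a cluster that mixes members of both classes loses homogeneity.
###Code
# Hand-made labellings to contrast homogeneity and completeness
truth = [0, 0, 0, 0, 1, 1, 1, 1]
split_clusters = [0, 0, 1, 1, 2, 2, 2, 2]   # class 0 split over clusters 0 and 1
mixed_cluster = [0, 0, 0, 0, 0, 1, 1, 1]    # cluster 0 mixes members of both classes
print('split:', np.round(metrics.homogeneity_completeness_v_measure(truth, split_clusters), 3))
print('mixed:', np.round(metrics.homogeneity_completeness_v_measure(truth, mixed_cluster), 3))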
###Markdown
Advantages

Bounded scores: 0.0 is as bad as it can be, 1.0 is a perfect score. Intuitive interpretation: a clustering with a bad V-measure can be qualitatively analyzed in terms of homogeneity and completeness to better feel what ‘kind’ of mistakes is made by the assignment. No assumption is made on the cluster structure: the metrics can be used to compare clustering algorithms such as k-means, which assumes isotropic blob shapes, with the results of spectral clustering algorithms, which can find clusters with “folded” shapes.

Drawbacks

The previously introduced metrics are not normalized with regard to random labeling: this means that depending on the number of samples, clusters and ground truth classes, a completely random labeling will not always yield the same values for homogeneity, completeness and hence v-measure. In particular, random labeling won’t yield zero scores, especially when the number of clusters is large. This problem can safely be ignored when the number of samples is more than a thousand and the number of clusters is less than 10. For smaller sample sizes or a larger number of clusters it is safer to use an adjusted index such as the Adjusted Rand Index (ARI).

Silhouette Coefficient

If the ground truth labels are not known, evaluation must be performed using the model itself. The Silhouette Coefficient (sklearn.metrics.silhouette_score) is an example of such an evaluation, where a higher Silhouette Coefficient score relates to a model with better defined clusters. The Silhouette Coefficient is defined for each sample and is composed of two scores:

a: The mean distance between a sample and all other points in the same class.
b: The mean distance between a sample and all other points in the next nearest cluster.

The Silhouette Coefficient s for a single sample is then given as:

$$s = \frac{b - a}{\max(a, b)}$$

The Silhouette Coefficient for a set of samples is given as the mean of the Silhouette Coefficient for each sample.
###Code
from sklearn import metrics
km2_silc = metrics.silhouette_score(X, km2_labels, metric='euclidean')
km5_silc = metrics.silhouette_score(X, km5_labels, metric='euclidean')
print('Silhouette Coefficient for num clusters=2: ', km2_silc)
print('Silhouette Coefficient for num clusters=5: ', km5_silc)
###Output
Silhouette Coefficient for num clusters=2: 0.6972646156059464
Silhouette Coefficient for num clusters=5: 0.5102292997907839
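###Markdown
A common use of the Silhouette Coefficient (sketched below as my own addition) is to sweep the number of clusters and keep the count that scores highest.
###Code
# Sketch: compare silhouette scores for a range of cluster counts
for k in range(2, 7):
    km_k = KMeans(n_clusters=k, random_state=42).fit(X)
    print(k, round(metrics.silhouette_score(X, km_k.labels_, metric='euclidean'), 3))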
###Markdown
Advantages

The score is bounded between -1 for incorrect clustering and +1 for highly dense clustering. Scores around zero indicate overlapping clusters. The score is higher when clusters are dense and well separated, which relates to a standard concept of a cluster.

Drawbacks

The Silhouette Coefficient is generally higher for convex clusters than for other concepts of clusters, such as density-based clusters like those obtained through DBSCAN.

Calinski-Harabaz Index

If the ground truth labels are not known, the Calinski-Harabaz index (sklearn.metrics.calinski_harabaz_score) - also known as the Variance Ratio Criterion - can be used to evaluate the model, where a higher Calinski-Harabaz score relates to a model with better defined clusters. (In newer versions of scikit-learn this function is spelled calinski_harabasz_score.)

For $k$ clusters, the Calinski-Harabaz score $s$ is given as the ratio of the between-clusters dispersion mean and the within-cluster dispersion:

$$s(k) = \frac{\mathrm{tr}(B_k)}{\mathrm{tr}(W_k)} \times \frac{N - k}{k - 1}$$

where $B_k$ is the between-group dispersion matrix and $W_k$ is the within-cluster dispersion matrix, defined by:

$$W_k = \sum_{q=1}^{k} \sum_{x \in C_q} (x - c_q)(x - c_q)^T, \qquad B_k = \sum_{q=1}^{k} n_q (c_q - c)(c_q - c)^T$$

with $N$ the number of points in our data, $C_q$ the set of points in cluster $q$, $c_q$ the center of cluster $q$, $c$ the center of the whole dataset, and $n_q$ the number of points in cluster $q$.
###Code
km2_chi = metrics.calinski_harabaz_score(X, km2_labels)
km5_chi = metrics.calinski_harabaz_score(X, km5_labels)
print('Calinski-Harabaz Index for num clusters=2: ', km2_chi)
print('Calinski-Harabaz Index for num clusters=5: ', km5_chi)
###Output
Calinski-Harabaz Index for num clusters=2: 1300.2082268895424
Calinski-Harabaz Index for num clusters=5: 1621.0110530063253
###Markdown
Advantages

The score is higher when clusters are dense and well separated, which relates to a standard concept of a cluster. The score is fast to compute.

Drawbacks

The Calinski-Harabaz index is generally higher for convex clusters than for other concepts of clusters, such as density-based clusters like those obtained through DBSCAN.

============================== Part D: Model Tuning ==============================

This is even more advanced material. Proceed with caution.

Build and Evaluate Default Model

First build a default SVM model using C-Support Vector Classification. The implementation is based on libsvm. The fit time complexity is more than quadratic with the number of samples, which makes it hard to scale to datasets with more than a couple of 10000 samples. The multiclass support is handled according to a one-vs-one scheme. For details on the precise mathematical formulation of the provided kernel functions and how gamma, coef0 and degree affect each other, see the corresponding section in the narrative documentation: Kernel functions.
###Code
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
# prepare datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# build default SVM model
def_svc = SVC(random_state=42)
def_svc.fit(X_train, y_train)
# predict and evaluate performance
def_y_pred = def_svc.predict(X_test)
print('Default Model Stats:')
report = classification_report(y_test, def_y_pred)
print(report)
###Output
Default Model Stats:
precision recall f1-score support
0 0.00 0.00 0.00 63
1 0.63 1.00 0.77 108
micro avg 0.63 0.63 0.63 171
macro avg 0.32 0.50 0.39 171
weighted avg 0.40 0.63 0.49 171
###Markdown
Tune Model with Grid Search

We search through the "parameters" (called hyper-parameters here); we are interested in the kernel being either linear or rbf (radial basis function, which is Gaussian). When training an SVM with the Radial Basis Function (RBF) kernel, two parameters must be considered: C and gamma. The parameter C, common to all SVM kernels, trades off misclassification of training examples against simplicity of the decision surface. A low C makes the decision surface smooth, while a high C aims at classifying all training examples correctly. gamma defines how much influence a single training example has. The larger gamma is, the closer other examples must be to be affected.

Proper choice of C and gamma is critical to the SVM’s performance. One is advised to use sklearn.model_selection.GridSearchCV with C and gamma spaced exponentially far apart to choose good values.
###Code
from sklearn.model_selection import GridSearchCV
# setting the parameter grid
grid_parameters = {'kernel': ['linear', 'rbf'],
'gamma': [1e-3, 1e-4],
'C': [1, 10, 50, 100]}
# perform hyperparameter tuning
print("# Tuning hyper-parameters for accuracy\n")
clf = GridSearchCV(SVC(random_state=42), grid_parameters, cv=5, scoring='accuracy')
clf.fit(X_train, y_train)
# view accuracy scores for all the models
print("Grid scores for all the models based on CV:\n")
means = clf.cv_results_['mean_test_score']
stds = clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, clf.cv_results_['params']):
print("%0.5f (+/-%0.05f) for %r" % (mean, std * 2, params))
# check out best model performance
print("\nBest parameters set found on development set:", clf.best_params_)
print("Best model validation accuracy:", clf.best_score_)
gs_best = clf.best_estimator_
tuned_y_pred = gs_best.predict(X_test)
print('\n\nTuned Model Stats:')
report = classification_report(y_test, tuned_y_pred)
print(report)
###Output
Tuned Model Stats:
precision recall f1-score support
0 0.95 0.97 0.96 63
1 0.98 0.97 0.98 108
micro avg 0.97 0.97 0.97 171
macro avg 0.97 0.97 0.97 171
weighted avg 0.97 0.97 0.97 171
|
src/pre_processing/dataset_comparison.ipynb | ###Markdown
Data Description Trainsets
###Code
train_set1 = pd.read_csv('data/Fold1/train_cleaned.csv')
train_set2 = pd.read_csv('data/Fold2/train_cleaned.csv')
train_set3 = pd.read_csv('data/Fold3/train_cleaned.csv')
train_set4 = pd.read_csv('data/Fold4/train_cleaned.csv')
train_set5 = pd.read_csv('data/Fold5/train_cleaned.csv')
train_sets = [train_set1,train_set2,train_set3,train_set4,train_set5]
rel_values = pd.DataFrame()
for nb, dataset in enumerate(train_sets):
rel_values = rel_values.append(pd.DataFrame({'DataSet':'TrainSet_{}'.format(nb+1),
'rel_0': dataset.rel.value_counts()[0],
'rel_1': dataset.rel.value_counts()[1],
'rel_2': dataset.rel.value_counts()[2],
'rel_3': dataset.rel.value_counts()[3],
'rel_4': dataset.rel.value_counts()[4],
}, index=[0]), ignore_index=True)
rel_values
fig,ax = plt.subplots()
rel_values.plot(style='o', ax=ax)
plt.xlim(-.5,4.5)
group_labels = ['','TrainSet1','TrainSet2','TrainSet3','TrainSet4','TrainSet5','']
_=ax.set_xticklabels(group_labels)
_=plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
_=plt.title('Relevance Frequency across Datasets', size=14)
###Output
_____no_output_____
###Markdown
Moments for Relevance
###Code
moments = pd.DataFrame()
for nb, dataset in enumerate(train_sets):
moments = moments.append(pd.DataFrame({'DataSet':'TrainSet_{}'.format(nb+1),
'mean': dataset.rel.mean(),
'median': dataset.rel.median(),
'skewness': dataset.rel.skew(),
'kurtosis': dataset.rel.kurt(),
'var': dataset.rel.var(),
'std': dataset.rel.std()
}, index=[0]),ignore_index=True)
moments
histo = pd.DataFrame()
histo['Relevance'] = [0,1,2,3,4]
for nb, dataset in enumerate(train_sets):
histo['TrainSet_{}'.format(nb+1)] = [dataset.rel.value_counts()[x] for x in range(5)]
histo = histo.set_index('Relevance')
histo.plot(kind='bar', stacked=False)
plt.title("Relevance Distribution")
###Output
_____no_output_____
###Markdown
Kolmogorov-Smirnov statistic on 2 samples

Ref: https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.ks_2samp.html

This is a two-sided test for the null hypothesis that 2 independent samples are drawn from the same continuous distribution. If the p-value is high, then we cannot reject the hypothesis that the distributions of the two samples are the same.
###Code
for i,x in enumerate(train_sets):
pval_list = []
for j,y in enumerate(train_sets):
if i == j:
pass
else:
s,pval = ks_2samp(x['rel'],y['rel'])
pval_list.append(pval)
pval_avg = sum(pval_list)/len(pval_list)
if pval_avg > 0.05:
answer = "SAME"
else:
answer = "Different"
print("TrainSet"+str(i+1)+" vs All: KS-Test => AVG pvalue = "+str(pval_avg)+" "+str(answer))
small = pd.read_csv('data/Fold1/prediction_data/train_set_small_cleaned.csv')
large = pd.read_csv('data/Fold1/prediction_data/train_set_large_cleaned.csv')
used_sets = [small,large,train_set1]
used_sets_label = ['small','large','train_set1']
for nb, i in enumerate(used_sets):
print(used_sets_label[nb])
print((len(i)))
print((len(i)/len(train_set1))*100)
print(i.qid.nunique())
print(i.rel.value_counts(normalize=True))
print("\n")
print(small.rel.value_counts(normalize=True)[0])
new_sets = pd.DataFrame()
for nb in range(5):
    new_sets = new_sets.append(pd.DataFrame({
        'Relevance': nb,  # keep the relevance grade so it can serve as the index below
        'Small Set': small.rel.value_counts(normalize=True)[nb],
        'Large Set': large.rel.value_counts(normalize=True)[nb],
        'Fold_1 Set': train_set1.rel.value_counts(normalize=True)[nb]}, index=[0]), ignore_index=True)
new_sets = new_sets.set_index('Relevance')  # set_index returns a new frame, so assign it back
new_sets.plot(kind='bar', stacked=False)
plt.title("Relevance Distribution (Normalized)")
plt.xlabel("Relevance")
histo
###Output
_____no_output_____ |
notebooks/03. Model Selection.ipynb | ###Markdown
Model Selection

Objective

The purpose of this notebook is to test different models for the project.

Import libraries
###Code
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn import svm
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn import tree
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load dataset
###Code
#- Define data file
file ='../dataset/ObesityDataSet_raw_and_data_sinthetic.csv'
#- Load dataset to a pandas dataframe for analysis
ds = pd.read_csv(file)
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
# Transformation of binary data
ds["Gender"] = ds.Gender.apply(lambda s: 1 if s == "Female" else 0)
ds["family_history_with_overweight"] = ds.family_history_with_overweight.apply(lambda s: 1 if s == "yes" else 0)
ds["FAVC"] = ds.FAVC.apply(lambda s: 1 if s == "yes" else 0)
ds["SMOKE"] = ds.SMOKE.apply(lambda s: 1 if s == "yes" else 0)
ds["SCC"] = ds.SCC.apply(lambda s: 1 if s == "yes" else 0)
# One hot encoding for categorical data
CAEC_list = pd.get_dummies(ds.CAEC, prefix="CAEC")
ds.drop("CAEC", inplace=True, axis=1)
ds = ds.join(CAEC_list)
CALC_list = pd.get_dummies(ds.CALC, prefix="CALC")
ds.drop("CALC", inplace=True, axis=1)
ds = ds.join(CALC_list)
MTRANS_list = pd.get_dummies(ds.MTRANS, prefix="MTRANS")
ds.drop("MTRANS", inplace=True, axis=1)
ds = ds.join(MTRANS_list)
# Transformation of target feature through a dictionary
obesity = {"Insufficient_Weight":1, "Normal_Weight":2, "Overweight_Level_I":3, "Overweight_Level_II":4, "Obesity_Type_I":5, "Obesity_Type_II":6, "Obesity_Type_III":7}
ds["NObeyesdad"] = ds.NObeyesdad.map(obesity)
###Output
_____no_output_____
###Markdown
Obtain Train and Test datasets
###Code
# Obtain train and test datasets
X_train, X_test, y_train, y_test = train_test_split(ds.drop('NObeyesdad',axis=1),
ds['NObeyesdad'],
test_size=0.30,
random_state=0)
# Standard scaling
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
# Train model
log_model = LogisticRegression(max_iter=10000)
log_model.fit(X_train, y_train)
# Test model
y_pred = log_model.predict(X_test)
log_accuracy = accuracy_score (y_test, y_pred)
log_accuracy
# Evaluate model
print(classification_report(y_test, y_pred))
cm = confusion_matrix(y_test, y_pred)
cm
###Output
_____no_output_____
###Markdown
SVM
###Code
# Train model
svm_model = svm.SVC()
svm_model.fit(X_train, y_train)
# Test model
y_pred = svm_model.predict(X_test)
svm_accuracy = accuracy_score (y_test, y_pred)
svm_accuracy
# Evaluate model
print(classification_report(y_test, y_pred))
cm = confusion_matrix(y_test, y_pred)
cm
###Output
_____no_output_____
###Markdown
KNN
###Code
# Train model
knn_model = KNeighborsClassifier()
knn_model.fit(X_train, y_train)
# Test model
y_pred = knn_model.predict(X_test)
knn_accuracy = accuracy_score (y_test, y_pred)
knn_accuracy
# Evaluate model
print(classification_report(y_test, y_pred))
cm = confusion_matrix(y_test, y_pred)
cm
###Output
_____no_output_____
###Markdown
Gaussian Naive Bayes
###Code
# Train model
gnb_model = GaussianNB()
gnb_model.fit(X_train, y_train)
# Test model
y_pred = gnb_model.predict(X_test)
gnb_accuracy = accuracy_score (y_test, y_pred)
gnb_accuracy
# Evaluate model
print(classification_report(y_test, y_pred))
cm = confusion_matrix(y_test, y_pred)
cm
###Output
_____no_output_____
###Markdown
Decision Trees
###Code
# Train model
tree_model = tree.DecisionTreeClassifier()
tree_model.fit(X_train, y_train)
# Test model
y_pred = tree_model.predict(X_test)
tree_accuracy = accuracy_score (y_test, y_pred)
tree_accuracy
# Evaluate model
print(classification_report(y_test, y_pred))
cm = confusion_matrix(y_test, y_pred)
cm
###Output
_____no_output_____
###Markdown
Model Accuracy Comparison
###Code
print ("Logistic Regression: ", log_accuracy)
print ("SVM: ", svm_accuracy)
print ("KNN: ", knn_accuracy)
print ("Gaussian Naive Bayes: ", gnb_accuracy)
print ("Decision Trees: ", tree_accuracy)
###Output
Logistic Regression: 0.8911671924290221
SVM: 0.8564668769716088
KNN: 0.8123028391167192
Gaussian Naive Bayes: 0.5473186119873817
Decision Trees: 0.9321766561514195
|
prediction/transfer learning fine-tuning/function documentation generation/php/small_model.ipynb | ###Markdown
**Predict the documentation for PHP code using the CodeTrans transfer-learning fine-tuned model**

You can make free predictions online through this link (when using the online prediction, you need to parse and tokenize the code first).

**1. Load necessary libraries, including Hugging Face transformers**
###Code
!pip install -q transformers sentencepiece
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
###Output
_____no_output_____
###Markdown
**2. Load the summarization pipeline and load it into the GPU if available**
###Code
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
###Output
/usr/local/lib/python3.6/dist-packages/transformers/models/auto/modeling_auto.py:852: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.
FutureWarning,
###Markdown
**3. Give the code for summarization, then parse and tokenize it**
###Code
code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }" #@param {type:"raw"}
!pip install tree_sitter
!git clone https://github.com/tree-sitter/tree-sitter-php
from tree_sitter import Language, Parser
Language.build_library(
'build/my-languages.so',
['tree-sitter-php']
)
PHP_LANGUAGE = Language('build/my-languages.so', 'php')
parser = Parser()
parser.set_language(PHP_LANGUAGE)
def get_string_from_code(node, lines):
line_start = node.start_point[0]
line_end = node.end_point[0]
char_start = node.start_point[1]
char_end = node.end_point[1]
if line_start != line_end:
code_list.append(' '.join([lines[line_start][char_start:]] + lines[line_start+1:line_end] + [lines[line_end][:char_end]]))
else:
code_list.append(lines[line_start][char_start:char_end])
def my_traverse(node, code_list):
lines = code.split('\n')
if node.child_count == 0:
get_string_from_code(node, lines)
elif node.type == 'string':
get_string_from_code(node, lines)
else:
for n in node.children:
my_traverse(n, code_list)
return ' '.join(code_list)
tree = parser.parse(bytes(code, "utf8"))
code_list=[]
tokenized_code = my_traverse(tree.root_node, code_list)
print("Output after tokenization: " + tokenized_code)
###Output
Output after tokenization: public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }
###Markdown
**4. Make Prediction**
###Code
pipeline([tokenized_code])
###Output
Your max_length is set to 512, but you input_length is only 98. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=50)
|
01 python/Lecture_08.ipynb | ###Markdown
Parsing
###Code
!pip install requests
!pip install fake_useragent
from bs4 import BeautifulSoup
import pandas as pd
import requests
url = 'http://books.toscrape.com/catalogue/page-1.html'
response = requests.get(url)
response
import requests
url = 'http://setdekov.xtreemhost.com/?i=1'
response = requests.get(url)
response
response.content[:1000]
###Output
_____no_output_____
###Markdown
The variable `tree` now holds a tree of tags, which we can walk through quite freely.
###Code
from bs4 import BeautifulSoup
# parse the page into a tree
tree = BeautifulSoup(response.content, 'html.parser')
tree.html.head.title
###Output
_____no_output_____
###Markdown
From wherever we have wandered to, we can pull out the text using the `text` attribute.
###Code
tree.html.head.title.text
###Output
_____no_output_____
###Markdown
С текстом можно работать классическими питоновскими методами. Например, можно избавиться от лишних отступов.
###Code
tree.html.head.title.text.strip()
###Output
_____no_output_____
###Markdown
Moreover, if we know an element's address, we can find it directly. For example, this is how we can find where exactly the main information for each book sits in the page's code. You can see that it lives inside an `article` tag with the class `product_pod` (roughly speaking, in HTML a class defines the styling of the corresponding piece of the page). Let's pull the book info out of this tag.
###Code
books = tree.find_all('article', {'class' : 'product_pod'})
books[0]
books[0].find('p', {'class': 'price_color'}).text
price = books[0].find('p', {'class': 'price_color'}).text
float(price[1:])
books[0].h3.a.get('href')
###Output
_____no_output_____
###Markdown
pandas
###Code
def get_page(p):
    # build the URL
    url = 'http://books.toscrape.com/catalogue/page-{}.html'.format(p)
    # fetch it
    response = requests.get(url)
    # build the tree
    tree = BeautifulSoup(response.content, 'html.parser')
    # find everything interesting in it
books = tree.find_all('article', {'class' : 'product_pod'})
infa = [ ]
for book in books:
infa.append({'price': book.find('p', {'class': 'price_color'}).text,
'href': book.h3.a.get('href'),
'title': book.h3.a.get('title')})
return infa
###Output
_____no_output_____
###Markdown
All that's left is to loop over every page from page-1 to page-50, and the data is in our pocket.
###Code
infa = []
for p in range(1,51):
infa.extend(get_page(p))
df = pd.DataFrame(infa)
print(df.shape)
df.head()
def get_price(price):
return float(price[1:])
df['newprice'] = df['price'].apply(get_price)
df['price'].apply(get_price).hist()
df.head()
from fake_useragent import UserAgent
UserAgent().chrome
###Output
_____no_output_____
###Markdown
For example, https://knowyourmeme.com/ will not want to let Python in and will return a 403 error. This error is returned by a server that is up and able to handle requests, but refuses to do so for reasons of its own.
###Code
url = 'https://knowyourmeme.com/'
response = requests.get(url)
response
###Output
_____no_output_____
###Markdown
But if we generate a User-Agent, the server raises no objections.
###Code
response = requests.get(url, headers={'User-Agent': UserAgent().chrome})
response
response.content[:400]
###Output
_____no_output_____
###Markdown
In-class exercise: scrape the site http://quotes.toscrape.com/ and collect:

* quote author
* quote text
* tags
###Code
p = 1
url = 'http://quotes.toscrape.com/page/{}/'.format(p)
# fetch the page
response = requests.get(url)
# build the tree
tree = BeautifulSoup(response.content, 'html.parser')
tree
quotes = tree.find_all('div', {'class':'quote'})
quotes[0]
quotes[0].find('small', {'class':'author'}).text
quotes[0].find('span', {'class':'text'}).text
alltags = quotes[0].find('div', {'class':'tags'}).find_all('a', {'class':'tag'})
alltags
for i in alltags:
    print(i.text)
type(alltags[0])
infa = []
for one_quote in quotes:
    infa.append({'author': one_quote.find('small', {'class': 'author'}).text,
                 'text': one_quote.find('span', {'class': 'text'}).text,
                 'tags': [tag.text for tag in one_quote.find_all('a', {'class': 'tag'})]})
infa[0]
###Output
_____no_output_____ |
notebooks/Process Landsat.ipynb | ###Markdown
*Technology transfer for rapid family forest assessments and stewardship planning* Compile Landsat Data and Calculate Vegetation Indices This notebook facilitates the process of collecting 17 years of Landsat multispectral data (2002-2018) and calculating several vegetation indices from these images using the Google Earth Engine Python API. Indices include tasseled cap brightness, greenness, and wetness; NDVI; SAVI; and ENDVI. Images are then exported to Google Drive.
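###Markdown
For reference (this note is my own addition), the spectral indices computed later in the notebook follow their usual definitions, and the coefficients match the `indices` function below: $NDVI = \frac{NIR - Red}{NIR + Red}$, $SAVI = 1.5 \times \frac{NIR - Red}{NIR + Red + 0.5}$, and $ENDVI = \frac{(NIR + Green) - 2 \times Blue}{(NIR + Green) + 2 \times Blue}$.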
###Code
import ee
import IPython.display
import pprint
import datetime
import dateutil.parser
import ipywidgets
import numpy as np
import pandas as pd
import traitlets
import ipyleaflet
from geetools import batch
# Configure the pretty printing output.
pp = pprint.PrettyPrinter(depth=4)
##Initialize connection to server
ee.Initialize()
#specify coordinates for area of interest.
#this region covers western Oregon
aoi = ee.Geometry.Polygon([
[-124.6, 41.9], [-117.0, 41.9], [-117.0, 49.0],
[-124.6, 49.0], [-124.6, 41.9]])
#pull in Landsat 8 collection for 2013-2018
#filter to aoi
l8sr = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR').filter(ee.Filter.lt('CLOUD_COVER', 5))\
.select(['B2', 'B3', 'B4', 'B5', 'B6', 'B7'])\
.filterBounds(aoi)
#print number of images in Landsat 8 collection
pp.pprint(l8sr.size().getInfo())
#pull in Landsat 7 collection for 2012
#filter to aoi
l7sr = ee.ImageCollection('LANDSAT/LE07/C01/T1_SR').filter(ee.Filter.lt('CLOUD_COVER', 5))\
.select(['B1', 'B2', 'B3', 'B4', 'B5', 'B7'])\
.filterBounds(aoi)
#print number of images in Landsat 7 collection
pp.pprint(l7sr.size().getInfo())
#pull in Landsat 5 collection for 2002-2011
#filter to aoi
l5sr = ee.ImageCollection('LANDSAT/LT05/C01/T1_SR').filter(ee.Filter.lt('CLOUD_COVER', 5))\
.select(['B1', 'B2', 'B3', 'B4', 'B5', 'B7'])\
.filterBounds(aoi)
#print number of images in Landsat 5 collection
pp.pprint(l5sr.size().getInfo())
#create function to rename bands to common identifiers for Landsat 8
def renameLand8(image):
bands = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7',]
new_bands = ['blue', 'green', 'red', 'nir', 'swir1', 'swir2']
return image.select(bands).rename(new_bands)
#map function to landsat 8 collection
land8 = ee.ImageCollection(l8sr).map(renameLand8)
#create function to rename bands to common identifiers for Landsat 7
def renameLand7(image):
bands = ['B1', 'B2', 'B3', 'B4', 'B5', 'B7']
new_bands = ['blue', 'green', 'red', 'nir', 'swir1', 'swir2']
return image.select(bands).rename(new_bands)
#map function to landsat 7 collection
land7 = ee.ImageCollection(l7sr).map(renameLand7)
#function to rename bands to common identifiers for Landsat 5
def renameLand5(image):
bands = ['B1', 'B2', 'B3', 'B4', 'B5', 'B7']
new_bands = ['blue', 'green', 'red', 'nir', 'swir1', 'swir2']
return image.select(bands).rename(new_bands)
#map function to landsat 5 collection
land5 = ee.ImageCollection(l5sr).map(renameLand5)
#Landsat 8
#function to calculate tasseled cap indices - brightness, greenness, wetness
#add these bands to the image
def tasseled_cap_8(image):
blue = image.select("blue")
green = image.select("green")
red = image.select("red")
nir = image.select("nir")
swir1 = image.select("swir1")
swir2 = image.select("swir2")
#calculate tasseled cap transformations
bright = ((blue.multiply(0.03029)).add(green.multiply(0.02786)).add(red.multiply(0.04733))\
.add(nir.multiply(0.05599)).add(swir1.multiply(0.0508)).add(swir2.multiply(0.01872))).toFloat().rename('brightness')
green = ((blue.multiply(-0.02941)).add(green.multiply(-0.00243)).add(red.multiply(-0.05424))\
.add(nir.multiply(0.07276)).add(swir1.multiply(00.00713)).add(swir2.multiply(-0.01608))).toFloat().rename('greenness')
wet = ((blue.multiply(0.01511)).add(green.multiply(0.01973)).add(red.multiply(00.03283))\
.add(nir.multiply(0.03407)).add(swir1.multiply(-0.07117)).add(swir2.multiply(-0.04559))).toFloat().rename('wetness')
return image.addBands(bright).addBands(green).addBands(wet)
#map tasseled cap function to landsat 8 collection
land8_tc = ee.ImageCollection(land8).map(tasseled_cap_8)
#Landsat 7
#function to calculate tasseled cap indices - brightness, greenness, wetness
#add these bands to the image
def tasseled_cap_7(image):
blue = image.select("blue")
green = image.select("green")
red = image.select("red")
nir = image.select("nir")
swir1 = image.select("swir1")
swir2 = image.select("swir2")
#calculate tasseled cap transformations
bright = ((blue.multiply(0.03561)).add(green.multiply(0.03972)).add(red.multiply(0.03904))\
.add(nir.multiply(0.06966)).add(swir1.multiply(0.02286)).add(swir2.multiply(0.01596))).toFloat().rename('brightness')
green = ((blue.multiply(-0.03344)).add(green.multiply(-0.03544)).add(red.multiply(-0.02630))\
.add(nir.multiply(0.06966)).add(swir1.multiply(-0.00242)).add(swir2.multiply(-0.01608))).toFloat().rename('greenness')
wet = ((blue.multiply(0.02626)).add(green.multiply(0.02141)).add(red.multiply(0.00926))\
.add(nir.multiply(0.00656)).add(swir1.multiply(-00.07629)).add(swir2.multiply(-0.05388))).toFloat().rename('wetness')
return image.addBands(bright).addBands(green).addBands(wet)
#map tasseled cap function to landsat 7 collection
land7_tc = ee.ImageCollection(land7).map(tasseled_cap_7)
#Landsat 5
#function to calculate tasseled cap indices - brightness, greenness, wetness
#add these bands to the image
def tasseled_cap_5(image):
blue = image.select("blue")
green = image.select("green")
red = image.select("red")
nir = image.select("nir")
swir1 = image.select("swir1")
swir2 = image.select("swir2")
#calculate tasseled cap transformations
bright = ((blue.multiply(0.02043)).add(green.multiply(0.04158)).add(red.multiply(0.05524))\
.add(nir.multiply(0.05741)).add(swir1.multiply(0.03124)).add(swir2.multiply(0.02303))).toFloat().rename('brightness')
green = ((blue.multiply(-0.01603)).add(green.multiply(-0.02819)).add(red.multiply(-0.04934))\
.add(nir.multiply(0.07940)).add(swir1.multiply(-0.00002)).add(swir2.multiply(-0.01446))).toFloat().rename('greenness')
wet = ((blue.multiply(0.00315)).add(green.multiply(0.02021)).add(red.multiply(0.03102))\
.add(nir.multiply(0.01594)).add(swir1.multiply(-0.06806)).add(swir2.multiply(-0.06109))).toFloat().rename('wetness')
return image.addBands(bright).addBands(green).addBands(wet)
#map tasseled cap function to landsat 5 collection
land5_tc = ee.ImageCollection(land5).map(tasseled_cap_5)
#merge landsat 5,7,8 collections into a single collection
land_merge = ee.ImageCollection(land5_tc.merge(land7_tc.merge(land8_tc)));
#print number of images in entire landsat collection
pp.pprint(land_merge.size().getInfo())
#create a list of years, 2002 - 2018
years = ee.List.sequence(2002, 2018)
#function to create a single image for each year by taking the mean value for a given year
def make_time_series(year):
year_filter = ee.Filter.calendarRange(year, field='year')
month_filter = ee.Filter.calendarRange(6,9, field='month')
filtered = land_merge.filter(year_filter).filter(month_filter)
return filtered.mean().set('system:time_start', ee.Date.fromYMD(year, 1, 1).millis())
#map function to each year in the years list
time_series = ee.ImageCollection(years.map(make_time_series))
#function to add ndvi, savi, and endvi value bands to each image (year)
def indices(image):
red = image.select('red')
nir = image.select('nir')
green = image.select('green')
blue = image.select('blue')
ndvi = (nir.subtract(red)).divide(nir.add(red)).rename('ndvi')
savi = (nir.subtract(red).divide(nir.add(red).add(.5)).multiply(1.5)).rename('savi')
endvi = (nir.add(green).subtract(blue.multiply(2)).divide(nir.add(green).add(blue.multiply(2)))).rename('endvi')
    #add the index bands to the image
return image.addBands(ndvi).addBands(savi).addBands(endvi)
#map function to yearly time series of landsat data
land_mets = ee.ImageCollection(time_series.map(indices))
#count bands in each image to see if any images are missing bands
def count(image):
return image.set('count', image.bandNames().length())
nullimages = ee.ImageCollection(land_mets.map(count).filter(ee.Filter.eq('count', 12)))
#print number of images that have 12 bands, should equal 17 for 17 years.
pp.pprint(nullimages.size().getInfo())
#function to convert all bands to float values
#this is because all bands must have the same data type to export
def cast(image):
return image.toFloat()
#map float function to image collection
land_mets = ee.ImageCollection(land_mets.map(cast))
pp.pprint(land_mets.first().getInfo())
#load layer with inventory plot buffers for BLM/USFS/DNR
plots = ee.FeatureCollection('users/saraloreno/blm_usfs_wadnr_plot_footprints')
#add year as a date column to the feature collection
#calculate mean for each band within the buffer area
ft = ee.FeatureCollection(ee.List([]))
def fill(img, ini):
inift = ee.FeatureCollection(ini)
ft2 = img.reduceRegions(plots, ee.Reducer.mean(), scale=30)
date = img.date().format('YYYY-MM-DD')
ft3 = ft2.map(lambda f : f.set("date", date))
return inift.merge(ft3)
plotsMean = ee.FeatureCollection(land_mets.iterate(fill, ft))
#export mean values to a CSV table
task = ee.batch.Export.table.toDrive(collection=plotsMean, folder='Landsat', description='plotsMean', fileFormat='CSV')
ee.batch.data.startProcessing(task.id, task.config)
#upload boundaries for each acquisition year
area2004 = ee.FeatureCollection('users/saraloreno/2004_lidar')
area2010 = ee.FeatureCollection('users/saraloreno/2010_lidar2')
area2012 = ee.FeatureCollection('users/saraloreno/2012_lidar')
area2014 = ee.FeatureCollection('users/saraloreno/2014_lidar')
area2017 = ee.FeatureCollection('users/saraloreno/2017_lidar')
#filter land_mets to the year of acquisition for each lidar footprint and clip to boundaries
land2004 = ee.ImageCollection(land_mets).filterDate('2004-1-01', '2004-12-31').mean().toFloat().clip(area2004)
land2010 = ee.ImageCollection(land_mets).filterDate('2010-1-01', '2010-12-31').mean().toFloat().clip(area2010)
land2012 = ee.ImageCollection(land_mets).filterDate('2012-1-01', '2012-12-31').mean().toFloat().clip(area2012)
land2014 = ee.ImageCollection(land_mets).filterDate('2014-1-01', '2014-12-31').mean().toFloat().clip(area2014)
land2017 = ee.ImageCollection(land_mets).filterDate('2017-1-01', '2017-12-31').mean().toFloat().clip(area2017)
task2012 = ee.batch.Export.image.toDrive(land2012, folder='Landsat', description='land2012', scale=30,
datatype="float", maxPixels = 10000000000, region=[
[-122.3, 42.27], [-121.87, 42.27], [-121.87, 42.00], [-122.3, 42.00], [-122.3, 42.27]])
task2012.start()
task2010 = ee.batch.Export.image.toDrive(land2010, folder='Landsat', description='land2010', scale=30,
datatype="float", maxPixels = 10000000000, region=[
[-122.19, 43.07], [-120.81, 43.07], [-120.81, 42.13], [-122.19, 42.13], [-122.19, 43.07]])
task2010.start()
task2014 = ee.batch.Export.image.toDrive(land2014, folder='Landsat', description='land2014', scale=30,
datatype="float", maxPixels = 10000000000, region=[
[-123.25, 45.71], [-121.79, 45.71], [-121.79, 45.2], [-123.25, 45.2], [-123.25, 45.71]])
task2014.start()
task2004 = ee.batch.Export.image.toDrive(land2004, folder='Landsat', description='land2004', scale=30,
datatype="float", maxPixels = 10000000000, region=[
[-122.09, 42.76], [-120.94, 42.76], [-120.94, 42.33], [-122.09, 42.33], [-122.09, 42.76]])
task2004.start()
task2017 = ee.batch.Export.image.toDrive(land2017, folder='Landsat', description='land2017', scale=30,
datatype="float", maxPixels = 10000000000, region=[
[-122.85, 42.20], [-122.31, 42.20], [-122.31, 42.00], [-122.85, 42.00], [-122.85, 42.20]])
task2017.start()
#export each image to Google Drive as an individual image
#colList = land_mets.toList(land_mets.size())
#colSize = colList.size().getInfo()
#for i in range(colSize):
# img = ee.Image(colList.get(i))
# imgdate = ee.Date(img.get('system:time_start')).format('yyyy-MM-dd').getInfo()
# imgname = 'img-' + imgdate
# ee.batch.Export.image.toDrive(img, name=imgname, scale=30, region=[[-123.6, 42.0], [-119.9, 41.9],
# [-121.1, 45.6], [-123.8, 45.9], [-123.6, 42.0]], dataype='float',
# maxPixels = 10000000000).start()
#Display landsat 8 image
#thumbnail_url = sample_image.getThumbUrl({
# 'bands': 'wetness',
# 'min': -1,
# 'max': +1,
# 'palette': ['white', 'blue'],
# 'region': sample_image.geometry().bounds().getInfo()
#})
#IPython.display.HTML('Thumbnail URL: <a href={0}>{0}</a>'.format(thumbnail_url))
#IPython.display.Image(url=thumbnail_url)
#Display landsat 8 image
#thumbnail_url = sample_image.getThumbUrl({
# 'bands': 'savi',
# 'min': -1,
# 'max': 1,
# 'palette': ['blue', 'white', 'green'],
# 'region': sample_image.geometry().bounds().getInfo()
#})
#IPython.display.HTML('Thumbnail URL: <a href={0}>{0}</a>'.format(thumbnail_url))
#IPython.display.Image(url=thumbnail_url)
###Output
_____no_output_____
###Markdown
The following cells create a video of the landsat time series and export it to Google Drive.
###Code
#def image_viz(image):
# return image.visualize({'bands': ['blue', 'green', 'red'],
# 'region':[[-123.6, 42.0], [-119.9, 41.9], [-121.1, 45.6], [-123.8, 45.9], [-123.6, 42.0]]})
#images = land_mets.map(image_viz)
#def convertBit(image):
# return image.multiply(512).uint8()
#imageVideo = images.map(convertBit)
#ee.batch.Export.video.toDrive(imageVideo, description='image_yearly', dimensions = 720, folder = "Landsat",
# framesPerSecond = 2, region=([-123.6, 42.0], [-119.9, 41.9], [-121.1, 45.6],
# [-123.8, 45.9], [-123.6, 42.0]),
# maxFrames=10000).start()
###Output
_____no_output_____ |
my_notes/chapter09_notes.ipynb | ###Markdown
Activity 23
###Code
import threading
import queue
import cProfile
import itertools
import numpy as np
np.random.seed(0)
in_queue = queue.Queue()
out_queue = queue.Queue()
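# worker thread: pull array sizes from in_queue and push arrays of random numbers to out_queue until a 'STOP' sentinel is received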
def random_number_threading():
while True:
n = in_queue.get()
if n == 'STOP':
return
random_numbers = np.random.rand(n)
out_queue.put(random_numbers)
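# start the worker thread, feed it sizes 0..up_to-1, collect each batch and optionally print it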
def generate_random_numbers(show_output, up_to):
thread = threading.Thread(target=random_number_threading)
thread.start()
for i in range(up_to):
in_queue.put(i)
random_nums = out_queue.get()
if show_output:
print(random_nums)
in_queue.put('STOP')
thread.join()
generate_random_numbers(True, 10)
cProfile.run('generate_random_numbers(False, 2000)')
###Output
74058 function calls in 0.106 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.003 0.003 0.106 0.106 <ipython-input-21-b58bc638e4f8>:1(generate_random_numbers)
1 0.000 0.000 0.106 0.106 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 _weakrefset.py:38(_remove)
1 0.000 0.000 0.000 0.000 _weakrefset.py:81(add)
2001 0.004 0.000 0.014 0.000 queue.py:121(put)
2000 0.007 0.000 0.089 0.000 queue.py:153(get)
4000 0.001 0.000 0.002 0.000 queue.py:208(_qsize)
2001 0.001 0.000 0.001 0.000 queue.py:212(_put)
2000 0.001 0.000 0.001 0.000 queue.py:216(_get)
1 0.000 0.000 0.000 0.000 threading.py:1012(join)
1 0.000 0.000 0.000 0.000 threading.py:1050(_wait_for_tstate_lock)
2 0.000 0.000 0.000 0.000 threading.py:1116(daemon)
2 0.000 0.000 0.000 0.000 threading.py:1225(current_thread)
1 0.000 0.000 0.000 0.000 threading.py:216(__init__)
4002 0.001 0.000 0.002 0.000 threading.py:240(__enter__)
4002 0.001 0.000 0.002 0.000 threading.py:243(__exit__)
2001 0.001 0.000 0.001 0.000 threading.py:249(_release_save)
2001 0.001 0.000 0.001 0.000 threading.py:252(_acquire_restore)
6002 0.002 0.000 0.003 0.000 threading.py:255(_is_owned)
2001 0.006 0.000 0.074 0.000 threading.py:264(wait)
4001 0.006 0.000 0.011 0.000 threading.py:335(notify)
1 0.000 0.000 0.000 0.000 threading.py:499(__init__)
2 0.000 0.000 0.000 0.000 threading.py:507(is_set)
1 0.000 0.000 0.000 0.000 threading.py:534(wait)
1 0.000 0.000 0.000 0.000 threading.py:728(_newname)
1 0.000 0.000 0.000 0.000 threading.py:763(__init__)
1 0.000 0.000 0.000 0.000 threading.py:834(start)
1 0.000 0.000 0.000 0.000 threading.py:977(_stop)
2002 0.001 0.000 0.001 0.000 {built-in method _thread.allocate_lock}
2 0.000 0.000 0.000 0.000 {built-in method _thread.get_ident}
1 0.000 0.000 0.000 0.000 {built-in method _thread.start_new_thread}
1 0.000 0.000 0.106 0.106 {built-in method builtins.exec}
4000 0.000 0.000 0.000 0.000 {built-in method builtins.len}
4002 0.001 0.000 0.001 0.000 {method '__enter__' of '_thread.lock' objects}
4002 0.000 0.000 0.000 0.000 {method '__exit__' of '_thread.lock' objects}
12006 0.066 0.000 0.066 0.000 {method 'acquire' of '_thread.lock' objects}
1 0.000 0.000 0.000 0.000 {method 'add' of 'set' objects}
4002 0.000 0.000 0.000 0.000 {method 'append' of 'collections.deque' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
2 0.000 0.000 0.000 0.000 {method 'discard' of 'set' objects}
1 0.000 0.000 0.000 0.000 {method 'locked' of '_thread.lock' objects}
2000 0.000 0.000 0.000 0.000 {method 'popleft' of 'collections.deque' objects}
4003 0.003 0.000 0.003 0.000 {method 'release' of '_thread.lock' objects}
2001 0.000 0.000 0.000 0.000 {method 'remove' of 'collections.deque' objects}
|
idaes/examples/tutorials/Tutorial_3_Dynamic_Flowsheets.ipynb | ###Markdown
Tutorial 3 - Dynamic Flowsheets=============================Introduction------------The previous [tutorials](https://idaes-pse.readthedocs.io/en/latest/examples/index.html) have looked at developing flowsheets for steady-state processes, however it is often important to study how a process behaves under transient conditions. This tutorial will demonstrate how to create a model for a dynamic process using the IDAES modeling framework.For this tutorial, we will use a similar process to the previous tutorials, using two CSTRs to perform a simple chemical reaction. However, for this tutorial we will add a second feed stream and a mixer unit before the first reactor, to represent the mixing of two reactant streams as shown below.ethyl acetate + NaOH $\rightarrow$ sodium acetate + ethanolIn this tutorial, you will learn how to:* setup a dynamic flowsheet and define the time domain* add additional variables and constraints to a Unit Model* transform a time domain to automatically populate the time domain and define derivative terms* set initial conditions for a dynamic model* initialize and solve a dynamic model* introduce a step change to the model* plot profiles of key variables as a function of timeImporting Libraries----------------------------As before, the first step in creating our model is to import the necessary components from Python, Pyomo and IDAES. The components we will require are discussed below.Firstly, we need all the components from Pyomo that we have used in the previous tutorials (`ConcreteModel`, `SolverFactory` and `TransformationFactory` from `pyomo.environ` and `Arc` from `pyomo.network`). In addition to this, we are going to need the `Constraint` and `Var` components from `pyomo.environ` in order to add additional variables and constraints to our model.Import these components below:
###Code
from pyomo.environ import ConcreteModel, Constraint, Objective, SolverFactory, TransformationFactory, Var
from pyomo.network import Arc
###Output
_____no_output_____
###Markdown
Next, we will need the IDAES components we have used in the previous tutorials:`FlowsheetBlock` from `idaes.core``idaes.property_models.examples.saponification_thermo``idaes.property_models.examples.saponification_reactions``CSTR` from `idaes.unit_models`We will also need the [Mixer](https://idaes-pse.readthedocs.io/en/latest/models/mixer.html) model from the IDAES unit model library (`idaes.unit_models`)Import these components below:
###Code
from idaes.core import FlowsheetBlock
from idaes.unit_models import CSTR, Mixer
import idaes.property_models.examples.saponification_thermo as thermo_props
import idaes.property_models.examples.saponification_reactions as reaction_props
###Output
_____no_output_____
###Markdown
Finally, we are going to need a tool for plotting our results. For this tutorial, we will use the `pyplot` tool in `matplotlib` (part of most scientific Python distributions), which we will import as `plt` for convenience.If you do not have `matplotlib` installed in your Python distribution, see your Python package manager for details on how to install it.The code to import `pyplot` is given below.
###Code
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Creating Our Model-----------------------------Now that we have imported all the tools we will need, we can begin constructing our model. As before, the first step is to create a `ConcreteModel` to hold our flowsheet.
###Code
m = ConcreteModel()
###Output
_____no_output_____
###Markdown
Next, we add a `FlowsheetBlock` to our model. In previous tutorials, we have set the `dynamic` argument to be `False`, indicating a steady-state flowsheet. In this case however, we wish to create a dynamic flowsheet, so we set this argument to `True`.Additionally, for a dynamic model, we need to provide some information about the time domain for our model. At a minimum, we need to set the start and end times for our time domain, which for this case will be 0 and 20 seconds respectively. We can also specify specific time points that we wish to ensure are in our time domain, such as points where we know a step-change will occur. Later in this tutorial we will introduce a step-change at t = 1 second, so we will include this in our time domain now.Thus, we want to create a time domain that begins at t = 0, has a point at t = 1 and ends at t = 20. To do this, we use the `time_set` argument when creating our `FlowsheetBlock` and provide these values as a Python list (or equivalent) - i.e. `time_set: [0, 1, 20]` (note that the time values may be integers or floating point numbers at this stage). **Note:** If a `time_set` is not provided when creating a dynamic flowsheet, a default time domain of `[0, 1]` will be used.The code to do this is shown below, and more details on `Flowsheet` options and arguments are available [here](https://idaes-pse.readthedocs.io/en/latest/core/flowsheet_model.html):
###Code
m.fs = FlowsheetBlock(default={"dynamic": True, "time_set": [0, 1, 20]})
###Output
_____no_output_____
###Markdown
To see what happened in this step, let us `display` the time domain of our flowsheet.
###Code
m.fs.time.display()
###Output
time : Dim=0, Dimen=1, Size=3, Domain=None, Ordered=Sorted, Bounds=(0.0, 20.0)
[0.0, 1.0, 20.0]
###Markdown
We should see the following:```time : Dim=0, Dimen=1, Size=3, Domain=None, Ordered=Sorted, Bounds=(0.0, 20.0) [0.0, 1.0, 20.0]```This shows us that our `time` domain is a sorted Pyomo `Set` (`Ordered=Sorted`) with points at 0.0, 1.0 and 20.0 and bounds of 0.0 and 20.0. This is what we would expect, as we told the `FlowsheetBlock` to create a time domain with these points.The fact that the `Set` is sorted is important, as it indicates the points in the `Set` are in order, allowing us to find the next and previous points if required.Next we need to add the property parameter blocks to our flowsheet, which is done in the same way as for steady-state flowsheets. Remember to link the `ReactionParameterBlock` to the `PhysicalParameterBlock`.
###Code
m.fs.thermo_params = thermo_props.SaponificationParameterBlock()
m.fs.reaction_params = reaction_props.SaponificationReactionParameterBlock(
default={"property_package": m.fs.thermo_params})
###Output
_____no_output_____
###Markdown
Just as for steady-state models, the next step is to add the necessary unit models to our flowsheet. As we have declared our flowsheet to be a dynamic flowsheet, all unit models within it will automatically be assumed to be dynamic unit operations. However, there are some cases where we might want to place a steady-state model within an otherwise dynamic flowsheet (e.g. for a unit with a very fast response time relative to the rest of the process). The IDAES framework supports this by allowing us to specifically state that a unit operation is steady-state, overriding the automatic assumption of a dynamic unit. Note that the opposite is not true, and that we cannot place dynamic models within a steady-state flowsheet.In our flowsheet, the first unit operation is a `Mixer` unit. Normally we assume an ideal mixer, which has zero volume and thus no holdup or accumulation - i.e. it is effectively a steady-state unit operation (in more rigorous models we might use a tank model to represent a real mixer). Thus, let us add a `Mixer` unit to our flowsheet and declare it to be a steady-state unit operation. To do this, we simply pass the following argument to the unit model as we construct it: `"dynamic": False` to indicate that the model should be a steady-state model. A `Mixer` unit also requires a physical property package (but not a reaction package) to define the list of components that are present in the unit, so we also need to pass the `"property_package"` argument pointing to the appropriate `PhysicalParameterBlock`. More details on the `Mixer` model is available [here](https://idaes-pse.readthedocs.io/en/latest/models/mixer.html)
###Code
m.fs.mix = Mixer(default={"dynamic": False,
"property_package": m.fs.thermo_params})
###Output
_____no_output_____
###Markdown
Next, we need to add the two `CSTR` units to our flowsheet, which is done in the same way as in the previous tutorials.
###Code
m.fs.Tank1 = CSTR(default={"property_package": m.fs.thermo_params,
"reaction_package": m.fs.reaction_params,
"has_equilibrium_reactions": False,
"has_heat_of_reaction": True,
"has_heat_transfer": True,
"has_pressure_change": False})
m.fs.Tank2 = CSTR(default={"property_package": m.fs.thermo_params,
"reaction_package": m.fs.reaction_params,
"has_equilibrium_reactions": False,
"has_heat_of_reaction": True,
"has_heat_transfer": True,
"has_pressure_change": False})
###Output
_____no_output_____
###Markdown
Whilst the unit models alone are sufficient for a steady-state flowsheet, dynamic flowsheets require extra constraints to determine the flow of material in the process. In most cases, we need to write a set of pressure-flow constraints for each dynamic unit in the process to determine the rate at which material leaves the unit.For this tutorial, we will write a set of simple constraints for each CSTR with the form:$F = C \times \sqrt h$where $F$ is volumetric flowrate, $C$ is a flow coefficient and $h$ is the depth of fluid within the CSTR. Additionally, we will need a constraint that relates depth to the volume of the tank:$V = A \times h$where $V$ is volume and $A$ is the cross-section area of the tank.To do this, first we need to create some new variables in our CSTR units. The CSTRs already have variables for the volume and the volumetric flowrate, so we need to create new variables for the flow coefficient, depth and area. To do this, we create new Pyomo `Var` objects in each CSTR. To create a `Var` for area, we do the following:
###Code
m.fs.Tank1.area = Var(initialize=1.0,
doc="Cross-sectional area of tank [m^2]")
###Output
_____no_output_____
###Markdown
Here, we are creating a new `Var` object named `area` within the first CSTR (`m.fs.Tank1`). We have also given it an initial value of 1.0 using the `initialize` argument, and a documentation string to explain what the variable is (through the `doc` argument).In the case of `area`, we know that it will not vary with time and thus we created a single `Var` to represent the area at all points in time. However, we expect the depth and potentially the flow coefficient to vary with time as the amount of fluid in the reactor and the valve opening changes. For these variables, we need to create `Var` objects which are indexed by `time`. In Pyomo (and thus IDAES), this is done by providing the indexing set (or indexing sets) as arguments when creating the `Var` object (or any other component).For example, we create the flow coefficient as shown below:
###Code
m.fs.Tank1.flow_coeff = Var(m.fs.time,
initialize=5e-5,
doc="Tank outlet flow coefficient")
###Output
_____no_output_____
###Markdown
Now try creating a `Var` named `height` below to represent the depth of fluid in the reactor.
###Code
m.fs.Tank1.height = Var(m.fs.time,
initialize=1.0,
doc="Tank height [m]")
###Output
_____no_output_____
###Markdown
Now that we have the variables we need to represent the pressure driven flow, we need to write the `Constraints` which relate these. Let us start with the simpler of the two constraints, namely $V = A \times h$.To begin with, each of the `Constraints` we are going to construct need to be indexed by `time`, as they involve `Vars` that are indexed by time. To do this, we need to create a method or rule which tells Pyomo how to construct the `Constraint`. For this, we declare a method using the Python `def` keyword followed by a name for the method (which we will call `geometry`). This is followed by a pair of parentheses which contain the following arguments for our method:* the first argument is a placeholder for the model on which the `Constraint` will be constructed. This name can be anything you want (as it is just a placeholder), but within IDAES we tend to use either b (for "block") or m (for "model).* placeholders for each index that will be used in the `Constraint`. In our case our only index is `time` (which we will represent with t), but there can be as many indexes as required.The result is the following:```def geometry(m, t):```Note that the line of code ends with a `:`.Next, we need to add the body of our method, which needs to return an *expression* that describes the `Constraint` to be written at each combination of indices. More complex `Constraints` may involve `if/else` statements here, but in our case we wish to have the same form of the expression at all points in time, namely $V_t = A \times h_t$. To achieve this, we write the following:``` return m.volume[t] == m.area*m.height[t]```Firstly, note that this line of code is indented - this is Python notation to show that this line of code is part of the previously declared method. Next, the line begins with a `return` command, which indicates that the method will return the result of this line to whatever called it (which will be the `Constraint` constructor in our case). Finally, the rest of the line is the *expression* that we wish to return. You will see that we have used our placeholder for the model (`m`) when referencing the `Vars` in that model, and that we have included time indices on the appropriate `Vars` (using `[t]`).Lastly, we need to add the actual `Constraint` to our model. Similar to adding a `Var` to our CSTRs, we now add a `Constraint` to the CSTR by declaring a `Constraint` with the name `geometry` and passing the following arguments to it:* the first index (or indices) are the indexing sets for the `Constraint`* we set the rule for the `Constraint` using the `rule` argument, and point it to the method we just wrote.```m.fs.Tank1.geometry = Constraint(m.fs.time, rule=geometry)```The completed code is shown below for you to run.
###Code
def geometry(m, t):
return m.volume[t] == m.area*m.height[t]
m.fs.Tank1.geometry = Constraint(m.fs.time, rule=geometry)
###Output
_____no_output_____
###Markdown
Next, we need to add the `Constraint` for $F_t = C_t \times \sqrt h_t$. We use the same steps as above, but in this case we have one minor complication - what is the name of the variable for $F_t$? We created `Vars` for $C$ and $h$ ourselves, but $F$ is constructed as part of the `CSTR` model, so we need to go and find its name.To do this, we need to understand something about the structure of the `CSTR` model (and IDAES models in general). Unit models within IDAES (or at least the core model libraries) use a standardized structure based on the concept of [*control volumes*](https://idaes-pse.readthedocs.io/en/latest/core/control_volume.html) . Each unit model contains one or more control volumes, which represent distinct volumes of fluid over which material, energy and momentum balances will be written. Each control volume then contains a number of *state blocks* which contain the information on the state of the material at a specific time and location within the control volume (both extensive and intensive).The structure of unit models will be discussed more in later tutorials, but for now it is sufficient to know that our `CSTR` contains a single control volume (named `control_volume`), with two state blocks named `properties_in` and `properties_out` which are indexed by time.So, we want the volumetric flowrate of material leaving the CSTR. The IDAES documentation contains a list of [standard names](https://idaes-pse.readthedocs.io/en/latest/standards.htmlstandard-variable-names) for common properties, which shows us that volumetric flowrate should be named `flow_vol`, so the variable we want (at time $t$) should be called:```tank.control_volume.properties_out[t].flow_vol```Thus, the code to construct the pressure driven flow constraint is:
###Code
def outlet_flowrate(m, t):
return m.control_volume.properties_out[t].flow_vol == m.flow_coeff[t]*m.height[t]**0.5
m.fs.Tank1.outlet_flowrate = Constraint(m.fs.time, rule=outlet_flowrate)
###Output
_____no_output_____
###Markdown
We now need to do the same procedure for `Tank2`. First, try creating the three `Vars` (`area`, `height` and `flow_coeff`) required for the `Constraints` in `Tank2` (don't forget to change the name of the tank if you copy from the code above).
###Code
m.fs.Tank2.area = Var(initialize=1.0,
doc="Cross-sectional area of tank [m^2]")
m.fs.Tank2.flow_coeff = Var(m.fs.time,
initialize=5e-5,
doc="Tank outlet flow coefficient")
m.fs.Tank2.height = Var(m.fs.time,
initialize=1.0,
doc="Tank height [m]")
###Output
_____no_output_____
###Markdown
Next, we need to construct the `Constraints` for `Tank2`. One of the good things about the way Pyomo and Python work is that we can create general methods to do common tasks to prevent ourselves from having to repeat our code. If you look at the methods we wrote whilst creating the `Constraints` for `Tank1`, you might notice that the rules are in fact general - we used a placeholder `m` in place of specific unit names, so we can actually reuse the same methods for `Tank2`. All we need to do is create the `Constraint` objects on `Tank2` as we did before and pass the same rule:
###Code
m.fs.Tank2.geometry = Constraint(m.fs.time, rule=geometry)
m.fs.Tank2.outlet_flowrate = Constraint(m.fs.time, rule=outlet_flowrate)
###Output
_____no_output_____
###Markdown
Connecting the Flowsheet--------------------------------------Now that we have all the unit models and associated constraints in our flowsheet, the next step is to define the connectivity between units. This is mostly the same as for steady-state flowsheets with one additional step required.Firstly, we need to define the `Arcs` which connect our units together. We need to define the following `Arcs` for our flowsheet:* an Arc connecting the outlet of `mix` to the inlet of `Tank1`* an Arc connecting the outlet of `Tank1` to the inlet of `Tank2`Do this below:
###Code
m.fs.stream1 = Arc(source=m.fs.mix.outlet,
destination=m.fs.Tank1.inlet)
m.fs.stream2 = Arc(source=m.fs.Tank1.outlet,
destination=m.fs.Tank2.inlet)
###Output
_____no_output_____
###Markdown
Next, we need to perform an additional step that is not present when we are working with a steady-state flowsheet. When we created our `time` domain, we only populated it with three time points (0.0, 1.0 and 20.0) which is insufficient for accurately modeling our process. At this point we need to fill in the time domain with a sufficient number of points, and also to write constraints approximating the derivative terms within our models (i.e. accumulation terms). Thankfully, Pyomo offers a tool to automate this process for us through the `TransformationFactory`.We have already used the `TransformationFactory` before when expanding `Arcs` in the previous tutorials - the `TransformationFactory` is a general purpose interface within Pyomo for employing tools to transform your model from one form into another. In this case, we are going to transform our model in Differential Algebraic Equation form into a numerical approximation of the model.When working with a `time` domain, the most common numerical approximation for the temporal derivatives is done using a first order backwards finite difference method, which we will use here (Pyomo offers a number of different discretization schemes, which can be found [here](https://pyomo.readthedocs.io/en/stable/modeling_extensions/dae.html)). We also need to decide upon how many time points we will use in discretizing our model, which depends upon the response time of our process. In this case, we will use 200 discretization points.To transform our time domain, we apply the following transformation:
###Code
m.discretizer = TransformationFactory('dae.finite_difference')
m.discretizer.apply_to(m,
nfe=200,
wrt=m.fs.time,
scheme="BACKWARD")
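# Note: Pyomo offers other discretization schemes as well; a collocation-based alternative
# would look roughly like the sketch below (not applied here, and the nfe/ncp values are
# placeholders only):
# TransformationFactory('dae.collocation').apply_to(
#     m, nfe=100, ncp=3, wrt=m.fs.time, scheme='LAGRANGE-RADAU')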
###Output
_____no_output_____
###Markdown
First, we create a `TransformationFactory` object using the `'dae.finite_difference'` method, which allows us to apply a number of different finite difference methods to a model.We then apply this transformation to our model, and indicate the number of finite elements (`nfe`), the domain to be discretized (`m.fs.time`) and the discretization scheme to use (`"BACKWARD"` indicating a 1st order backwards finite difference).To see an indication of what this has done to our model, let us display the time domain again:
###Code
m.fs.time.display()
###Output
time : Dim=0, Dimen=1, Size=201, Domain=None, Ordered=Sorted, Bounds=(0.0, 20.0)
[0.0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 1.0, 1.074219, 1.148438, 1.296875, 1.445312, 1.519531, 1.59375, 1.667969, 1.742188, 1.890625, 2.039062, 2.113281, 2.1875, 2.261719, 2.335938, 2.484375, 2.632812, 2.707031, 2.78125, 2.855469, 2.929688, 3.078125, 3.226562, 3.300781, 3.375, 3.449219, 3.523438, 3.671875, 3.820312, 3.894531, 3.96875, 4.042969, 4.117188, 4.265625, 4.414062, 4.488281, 4.5625, 4.636719, 4.710938, 4.859375, 5.007812, 5.082031, 5.15625, 5.230469, 5.304688, 5.453125, 5.601562, 5.675781, 5.75, 5.824219, 5.898438, 6.046875, 6.195312, 6.269531, 6.34375, 6.417969, 6.492188, 6.640625, 6.789062, 6.863281, 6.9375, 7.011719, 7.085938, 7.234375, 7.382812, 7.457031, 7.53125, 7.605469, 7.679688, 7.828125, 7.976562, 8.050781, 8.125, 8.199219, 8.273438, 8.421875, 8.570312, 8.644531, 8.71875, 8.792969, 8.867188, 9.015625, 9.164062, 9.238281, 9.3125, 9.386719, 9.460938, 9.609375, 9.757812, 9.832031, 9.90625, 9.980469, 10.054688, 10.203125, 10.351562, 10.425781, 10.5, 10.574219, 10.648438, 10.796875, 10.945312, 11.019531, 11.09375, 11.167969, 11.242188, 11.390625, 11.539062, 11.613281, 11.6875, 11.761719, 11.835938, 11.984375, 12.132812, 12.207031, 12.28125, 12.355469, 12.429688, 12.578125, 12.726562, 12.800781, 12.875, 12.949219, 13.023438, 13.171875, 13.320312, 13.394531, 13.46875, 13.542969, 13.617188, 13.765625, 13.914062, 13.988281, 14.0625, 14.136719, 14.210938, 14.359375, 14.507812, 14.582031, 14.65625, 14.730469, 14.804688, 14.953125, 15.101562, 15.175781, 15.25, 15.324219, 15.398438, 15.546875, 15.695312, 15.769531, 15.84375, 15.917969, 15.992188, 16.140625, 16.289062, 16.363281, 16.4375, 16.511719, 16.585938, 16.734375, 16.882812, 16.957031, 17.03125, 17.105469, 17.179688, 17.328125, 17.476562, 17.550781, 17.625, 17.699219, 17.773438, 17.921875, 18.070312, 18.144531, 18.21875, 18.292969, 18.367188, 18.515625, 18.664062, 18.738281, 18.8125, 18.886719, 18.960938, 19.109375, 19.257812, 19.332031, 19.40625, 19.480469, 19.554688, 19.703125, 19.851562, 19.925781, 20.0]
###Markdown
Last time we displayed the `time` domain, it contained only three points. We can now see that the `time` domain contains many more points - 201 in fact as indicated by the `Size` attribute in the first line. As we told the transformation to use 200 finite elements, this is what we would expect ((number of elements plus 1) points).After we have transformed the `time` domain, we can then expand the `Arcs` in our model as we did in the previous tutorials. It is important to note that the two transformation steps must be performed in this order due to the way the Pyomo transformations work (it is hoped to address this in the future to make the steps order independent).
###Code
TransformationFactory("network.expand_arcs").apply_to(m)
###Output
_____no_output_____
###Markdown
Setting Inlet, Design and Initial Conditions------------------------------------------------------------------As before, the next step is to begin fixing those conditions we know in our process. However, there are some important differences between steady-state and dynamic models that you need to be aware of.Firstly, let us fix the inlet conditions for our model. In this tutorial, we have added a mixer before the first tank, so we need to set conditions for both streams. As our starting point, let us use the following conditions:**Inlet 1*** flow_vol = 0.5* conc_mol_comp["H2O"] = 55388.0* conc_mol_comp["NaOH"] = 100.0* conc_mol_comp["EthylAcetate"] = 0.0* conc_mol_comp["SodiumAcetate"] = 0.0* conc_mol_comp["Ethanol"] = 0.0* temperature = 303.15* pressure = 101325.0**Inlet 2*** flow_vol = 0.5* conc_mol_comp["H2O"] = 55388.0* conc_mol_comp["NaOH"] = 0.0* conc_mol_comp["EthylAcetate"] = 100.0* conc_mol_comp["SodiumAcetate"] = 0.0* conc_mol_comp["Ethanol"] = 0.0* temperature = 303.15* pressure = 101325.0So, we have one stream containing only water and NaOH, and one of only water and EthylAcetate.More importantly however, unlike in previous tutorials, we need to fix the feed conditions at *ALL* points in time. In the previous tutorials, we explicitly fixed the conditions at time 0 because that was the only point in the `time` domain - e.g. `m.fs.Tank1.inlet.conc_mol_comp[0, "H2O"].fix(55388.0)`There are many ways we could fix the conditions at all points in time, such as iterating over all points in the time domain. However, Pyomo offers tools which can do this very easily.If the variable has the same value at ALL points across all indices, we can use the following:`m.fs.mix.inlet_1.flow_vol.fix(0.5)`This fixes all values (at all combinations of indices) of the variable to the given valueFor cases where we have multiple indices and wish to fix the value at one value across the full range of one index, but different values in another, we can use *slice notation*. For example:`m.fs.mix.inlet_1.conc_mol_comp[:, "H2O"].fix(55388.0)`This fixed the value of the variable `m.fs.mix.inlet_1.conc_mol_comp` to 55388.0 at all points in the first index (`time`) where the second index is `"H2O"`.To demonstrate this, the following code fixes the inlet conditions for `inlet_1` in the mixer for our process.
###Code
m.fs.mix.inlet_1.flow_vol.fix(0.5)
m.fs.mix.inlet_1.conc_mol_comp[:, "H2O"].fix(55388.0)
m.fs.mix.inlet_1.conc_mol_comp[:, "NaOH"].fix(100.0)
m.fs.mix.inlet_1.conc_mol_comp[:, "EthylAcetate"].fix(0.0)
m.fs.mix.inlet_1.conc_mol_comp[:, "SodiumAcetate"].fix(0.0)
m.fs.mix.inlet_1.conc_mol_comp[:, "Ethanol"].fix(0.0)
m.fs.mix.inlet_1.temperature.fix(303.15)
m.fs.mix.inlet_1.pressure.fix(101325.0)
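# The slice notation above is equivalent to looping over the time domain explicitly,
# e.g. (an equivalent sketch, not re-run here):
# for t in m.fs.time:
#     m.fs.mix.inlet_1.conc_mol_comp[t, "H2O"].fix(55388.0)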
###Output
_____no_output_____
###Markdown
Try to set the conditions for `inlet_2` below.
###Code
m.fs.mix.inlet_2.flow_vol.fix(0.5)
m.fs.mix.inlet_2.conc_mol_comp[:, "H2O"].fix(55388.0)
m.fs.mix.inlet_2.conc_mol_comp[:, "NaOH"].fix(0.0)
m.fs.mix.inlet_2.conc_mol_comp[:, "EthylAcetate"].fix(100.0)
m.fs.mix.inlet_2.conc_mol_comp[:, "SodiumAcetate"].fix(0.0)
m.fs.mix.inlet_2.conc_mol_comp[:, "Ethanol"].fix(0.0)
m.fs.mix.inlet_2.temperature.fix(303.15)
m.fs.mix.inlet_2.pressure.fix(101325.0)
###Output
_____no_output_____
###Markdown
Next, we need to fix the design conditions for the unit operations in our process. The conditions we need to set are:* `area` of both tanks = 1* `flow_coeff` of both tanks = 0.5* `heat_duty` of both tanks = 0.0Fix these conditions for both `Tank1` and `Tank2` below:
###Code
m.fs.Tank1.area.fix(1.0)
m.fs.Tank1.flow_coeff.fix(0.5)
m.fs.Tank1.heat_duty.fix(0.0)
m.fs.Tank2.area.fix(1.0)
m.fs.Tank2.flow_coeff.fix(0.5)
m.fs.Tank2.heat_duty.fix(0.0)
###Output
_____no_output_____
###Markdown
In addition to feed and design conditions, dynamic process models also require us to define the initial conditions for the process. There are many potential initial conditions we may want to study, but the simplest of these is to assume the process exists at steady-state at the beginning of the simulation. To do this, we need to set all the temporal derivatives to zero at the first time point.Rather than having to do this manually, the IDAES framework provides a method to do this for us automatically:
###Code
m.fs.fix_initial_conditions()
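# If the initial conditions need to be freed again later, the matching method can be
# called (a sketch, not needed here):
# m.fs.unfix_initial_conditions()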
###Output
_____no_output_____
###Markdown
There is also an equivalent `unfix_initial_conditions` method which will unfix all temporal derivatives at the first point in the `time` domain if required. Documentation on these methods is available [here](https://idaes-pse.readthedocs.io/en/latest/core/process_block.html).Initializing the Model------------------------------At this point, our model should be fully constructed and defined, so we can start on the process of initializing and solving our model. The first unit in our flowsheet is the `Mixer`, which should be fully defined with all inlets fixed. Thus, we can call the `initialize` method of the mixer with no additional arguments.Call the mixer initialization method below:
###Code
m.fs.mix.initialize()
###Output
2020-04-18 18:12:36 - INFO - idaes.property_models.examples.saponification_thermo - fs.mix.mixed_state Initialization Complete.
2020-04-18 18:12:37 - INFO - idaes.property_models.examples.saponification_thermo - fs.mix.inlet_1_state State Released.
2020-04-18 18:12:37 - INFO - idaes.property_models.examples.saponification_thermo - fs.mix.inlet_2_state State Released.
###Markdown
Next, we need to initialize the first `CSTR`. In the previous tutorials, we simply used the inlet conditions to initialize subsequent units, but this is almost always a poor guess for initial conditions. Unit operations by definition represent a change in state between inlet and outlet, and conditions in dynamic models inherently vary with time. Thus initializing to a single state throughout the process is a poor choice.For this tutorial, we will demonstrate how to use the initialized outlet of a previous unit to initialize the next unit in a process. As our initial process is at steady-state, we will not consider variances in the state as a function of time. At present, passing values from the outlet of one unit to the inlet of another is a manual process, however tools are being developed to assist with this.In the previous tutorials, we initialized the second tank using the following code:```m.fs.Tank2.initialize(state_args={ "flow_vol": 1.0, "conc_mol_comp": {"H2O": 55388.0, "NaOH": 100.0, "EthylAcetate": 100.0, "SodiumAcetate": 0.0, "Ethanol": 0.0}, "temperature": 303.15, "pressure": 101325.0})```In this case, we were passing fixed values for each state variable to the initialization method to use as initial guesses. In this tutorial, we will replace these fixed values with values taken from the outlet of the previous unit. To get the value of a state in the outlet of another unit, we simply need to point to the specific unit and outlet, e.g. `m.fs.mix.outlet` and the variable of interest within that outlet, e.g. `flow_vol` and ask for the value of that variable using the `value` attribute.For example, the following line of code will get the value of the volumetric flowrate in the outlet of the mixer at time 0:`m.fs.mix.outlet.flow_vol[0].value`It is important to note that we need the `time` index when asking for a value. `flow_vol` itself does not have a `value`, rather it is a collection of values at each point in `time`. Thus, to get an actual value for the variable, we have to ask for it at a specific point in `time`. For this tutorial, we will use the value at `time=0`.Thus, to initialize `Tank1`, we use the following code:
###Code
m.fs.Tank1.initialize(state_args={
"flow_vol": m.fs.mix.outlet.flow_vol[0].value,
"conc_mol_comp": {"H2O": m.fs.mix.outlet.conc_mol_comp[0, "H2O"].value,
"NaOH": m.fs.mix.outlet.conc_mol_comp[0, "NaOH"].value,
"EthylAcetate": m.fs.mix.outlet.conc_mol_comp[0, "EthylAcetate"].value,
"SodiumAcetate": m.fs.mix.outlet.conc_mol_comp[0, "SodiumAcetate"].value,
"Ethanol": m.fs.mix.outlet.conc_mol_comp[0, "Ethanol"].value},
"temperature": m.fs.mix.outlet.temperature[0].value,
"pressure": m.fs.mix.outlet.pressure[0].value})
###Output
2020-04-18 18:12:37 - INFO - idaes.property_models.examples.saponification_thermo - fs.Tank1.control_volume.properties_out Initialization Complete.
2020-04-18 18:12:37 - INFO - idaes.property_models.examples.saponification_reactions - fs.Tank1.control_volume.reactions Initialization Complete.
2020-04-18 18:12:39 - INFO - idaes.property_models.examples.saponification_thermo - fs.Tank1.control_volume.properties_in State Released.
###Markdown
Now that `Tank1` has been initialized, we can repeat the process for `Tank2`. Try to do this below:
###Code
m.fs.Tank2.initialize(state_args={
"flow_vol": m.fs.Tank1.outlet.flow_vol[0].value,
"conc_mol_comp": {"H2O": m.fs.Tank1.outlet.conc_mol_comp[0, "H2O"].value,
"NaOH": m.fs.Tank1.outlet.conc_mol_comp[0, "NaOH"].value,
"EthylAcetate": m.fs.Tank1.outlet.conc_mol_comp[0, "EthylAcetate"].value,
"SodiumAcetate": m.fs.Tank1.outlet.conc_mol_comp[0, "SodiumAcetate"].value,
"Ethanol": m.fs.Tank1.outlet.conc_mol_comp[0, "Ethanol"].value},
"temperature": m.fs.Tank1.outlet.temperature[0].value,
"pressure": m.fs.Tank1.outlet.pressure[0].value})
###Output
2020-04-18 18:12:39 - INFO - idaes.property_models.examples.saponification_thermo - fs.Tank2.control_volume.properties_out Initialization Complete.
2020-04-18 18:12:39 - INFO - idaes.property_models.examples.saponification_reactions - fs.Tank2.control_volume.reactions Initialization Complete.
2020-04-18 18:12:41 - INFO - idaes.property_models.examples.saponification_thermo - fs.Tank2.control_volume.properties_in State Released.
###Markdown
Now that each unit model has been initialized, we need to solve the entire model to finish initializing the constraints that link the units together (the `Arcs`). Create a solver object below and use it to solve the flowsheet.
###Code
solver = SolverFactory('ipopt')
results = solver.solve(m)  # solve the full dynamic flowsheet; the results object is examined below
###Output
_____no_output_____
###Markdown
Adding a Disturbance--------------------------------Before we move on, we should make sure that the previous `solve` found a feasible solution to the problem. Let us print the `results` object and check the solver status.
###Code
results = solver.solve(m, tee=True)
print(results)
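# The same check can be done programmatically (a sketch using Pyomo's standard
# solver-status enums):
from pyomo.opt import SolverStatus, TerminationCondition
assert results.solver.status == SolverStatus.ok
assert results.solver.termination_condition == TerminationCondition.optimal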
###Output
Problem:
- Lower bound: -inf
Upper bound: inf
Number of objectives: 1
Number of constraints: 17274
Number of variables: 17274
Sense: unknown
Solver:
- Status: ok
Message: Ipopt 3.13.2\x3a Optimal Solution Found
Termination condition: optimal
Id: 0
Error rc: 0
Time: 0.5929579734802246
Solution:
- number of solutions: 0
number of solutions displayed: 0
###Markdown
Hopefully you will see that the solver found an optimal solution, and that there were 17274 variables and constraints in the problem. If the solver failed to find an optimal solution, or the number of variables and/or constraints is incorrect, you will need to go back and work out what is missing.However, the problem we just solved is rather boring - the process is at steady-state and there are no changes in operating conditions. To get some more interesting results, let us add a step change in one of the feed streams at `time=1.0` (this is why we added a point at `time=1.0` back when we created the time domain).To do this, we need a `for` loop which will iterate over time, and at any point where `time >= 1` we will change the concentration of EthylAcetate in `inlet_2` to the mixer to 90.0.This is shown below:
###Code
for t in m.fs.time:
if t >= 1.0:
m.fs.mix.inlet_2.conc_mol_comp[t, "EthylAcetate"].fix(90.0)
###Output
_____no_output_____
###Markdown
Now, let us re-solve the problem with the step change. To do this, we can reuse the solver we created earlier, and just apply it to the updated model.
###Code
results = solver.solve(m, tee=True)
###Output
Ipopt 3.13.2:
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************
This is Ipopt version 3.13.2, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).
Number of nonzeros in equality constraint Jacobian...: 48996
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 11055
Total number of variables............................: 17274
variables with only lower bounds: 6030
variables with lower and upper bounds: 2010
variables with only upper bounds: 0
Total number of equality constraints.................: 17274
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 0.0000000e+00 5.00e+00 1.00e+00 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 0.0000000e+00 7.45e-01 9.80e+00 -1.0 4.55e+05 - 5.96e-01 9.90e-01h 1
2 0.0000000e+00 5.48e-02 3.69e+02 -1.0 1.44e+05 - 6.21e-01 9.90e-01h 1
3 0.0000000e+00 2.19e-04 1.00e+03 -1.0 8.15e+03 - 9.90e-01 1.00e+00h 1
4 0.0000000e+00 4.77e-07 9.83e+04 -1.0 3.11e+01 - 9.90e-01 1.00e+00h 1
5 0.0000000e+00 6.26e-07 2.71e+02 -1.0 3.97e-04 - 9.97e-01 5.00e-01h 2
6 0.0000000e+00 4.84e-07 3.17e-08 -2.5 1.99e-04 - 1.00e+00 1.00e+00h 1
7 0.0000000e+00 4.77e-07 5.82e-11 -8.6 9.66e-08 - 1.00e+00 1.00e+00h 1
8 0.0000000e+00 4.77e-07 2.51e-14 -8.6 9.13e-08 - 1.00e+00 1.00e+00h 1
9 0.0000000e+00 4.92e-07 2.51e-14 -8.6 8.72e-08 - 1.00e+00 5.00e-01h 2
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
10 0.0000000e+00 4.92e-07 2.51e-14 -8.6 5.73e-08 - 1.00e+00 1.25e-01h 4
11 0.0000000e+00 4.92e-07 2.51e-14 -8.6 5.73e-08 - 1.00e+00 1.25e-01h 4
12 0.0000000e+00 4.92e-07 2.51e-14 -8.6 5.73e-08 - 1.00e+00 1.25e-01h 4
13 0.0000000e+00 4.92e-07 2.51e-14 -8.6 5.73e-08 - 1.00e+00 1.25e-01h 4
14 0.0000000e+00 4.92e-07 2.51e-14 -8.6 5.73e-08 - 1.00e+00 1.25e-01h 4
15 0.0000000e+00 4.92e-07 2.51e-14 -8.6 5.74e-08 - 1.00e+00 1.25e-01h 4
16 0.0000000e+00 4.92e-07 2.51e-14 -8.6 5.74e-08 - 1.00e+00 1.25e-01h 4
17 0.0000000e+00 4.92e-07 2.51e-14 -8.6 5.74e-08 - 1.00e+00 1.25e-01h 4
18 0.0000000e+00 4.92e-07 2.51e-14 -8.6 5.74e-08 - 1.00e+00 1.25e-01h 4
Number of Iterations....: 18
(scaled) (unscaled)
Objective...............: 0.0000000000000000e+00 0.0000000000000000e+00
Dual infeasibility......: 0.0000000000000000e+00 0.0000000000000000e+00
Constraint violation....: 2.9274451662786305e-07 4.9173831939697266e-07
Complementarity.........: 0.0000000000000000e+00 0.0000000000000000e+00
Overall NLP error.......: 2.9274451662786305e-07 4.9173831939697266e-07
Number of objective function evaluations = 59
Number of objective gradient evaluations = 19
Number of equality constraint evaluations = 59
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 19
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 18
Total CPU secs in IPOPT (w/o function evaluations) = 1.543
Total CPU secs in NLP function evaluations = 0.146
EXIT: Solved To Acceptable Level.
###Markdown
Once again, let us `print` the `results` to be sure the problem solved.
###Code
print(results)
###Output
Problem:
- Lower bound: -inf
Upper bound: inf
Number of objectives: 1
Number of constraints: 17274
Number of variables: 17274
Sense: unknown
Solver:
- Status: ok
Message: Ipopt 3.13.2\x3a Solved To Acceptable Level.
Termination condition: optimal
Id: 1
Error rc: 0
Time: 1.9330439567565918
Solution:
- number of solutions: 0
number of solutions displayed: 0
###Markdown
Once again, you should see an optimal solution was found, and that there were 17274 variables and constraints.Plotting the Results-----------------------------We now have some results to look at, but just printing the numbers to the screen will be difficult to interpret. Instead, let us plot the concentration profiles leaving both tanks as a function of time.To start with, let us plot the profiles leaving `Tank1`. To do this, we will use the `pyplot` library we imported at the beginning of the tutorial (which we called `plt`). To start with, we need to create a new `figure` object to hold our plot.
###Code
plt.figure("Tank 1 Outlet")
###Output
_____no_output_____
###Markdown
Next, we need to add curves to the plot for each concentration profile and add a legend and some axis labels so that other people can read our plot.To add a curve to a plot, we use the `plot` method which takes three arguments:* the x-axis values,* the y-axis values, and* a label for the curve.For the x-axis values we wish to use the `time` domain of our model, which we can use directly as `m.fs.time`. For the y-axis values, we need to use a slice of the `conc_mol_comp` variable at all points in `time` (`:`) and a specific component name. We also need to include the `.value` attribute in this case, as Pyomo `slicers` do not behave in the same way as the `time` domain.To plot all the concentration profiles for the outlet of `Tank1`, we do the following:
###Code
plt.plot(m.fs.time,
list(m.fs.Tank1.outlet.conc_mol_comp[:, "NaOH"].value),
label='NaOH')
plt.plot(m.fs.time,
list(m.fs.Tank1.outlet.conc_mol_comp[:, "EthylAcetate"].value),
label='EthylAcetate')
plt.plot(m.fs.time,
list(m.fs.Tank1.outlet.conc_mol_comp[:, "SodiumAcetate"].value),
label='SodiumAcetate')
plt.plot(m.fs.time,
list(m.fs.Tank1.outlet.conc_mol_comp[:, "Ethanol"].value),
label='Ethanol')
plt.legend()
plt.grid()
plt.xlabel("Time [s]")
plt.ylabel("Concentration [mol/m^3]")
###Output
_____no_output_____
###Markdown
Try to plot the concentration profiles leaving `Tank2` below.
###Code
plt.plot(m.fs.time,
list(m.fs.Tank2.outlet.conc_mol_comp[:, "NaOH"].value),
label='NaOH')
plt.plot(m.fs.time,
list(m.fs.Tank2.outlet.conc_mol_comp[:, "EthylAcetate"].value),
label='EthylAcetate')
plt.plot(m.fs.time,
list(m.fs.Tank2.outlet.conc_mol_comp[:, "SodiumAcetate"].value),
label='SodiumAcetate')
plt.plot(m.fs.time,
list(m.fs.Tank2.outlet.conc_mol_comp[:, "Ethanol"].value),
label='Ethanol')
plt.legend()
plt.grid()
plt.xlabel("Time [s]")
plt.ylabel("Concentration [mol/m^3]")
###Output
_____no_output_____ |
notebooks/spark_nlp.ipynb | ###Markdown
###Code
!java -version
!pip install virtualenv
!virtualenv spark_nlp
!source /content/spark_nlp/bin/activate; pip install spark-nlp==3.4.0 pyspark==3.1.2 numpy==1.21.5
%%file test.py
# Import Spark NLP
from sparknlp.base import *
from sparknlp.annotator import *
from sparknlp.pretrained import PretrainedPipeline
import sparknlp
# Start SparkSession with Spark NLP
# start() functions has 5 parameters: gpu, spark23, spark24, spark32, and memory
# sparknlp.start(gpu=True) will start the session with GPU support
# sparknlp.start(spark23=True) is when you have Apache Spark 2.3.x installed
# sparknlp.start(spark24=True) is when you have Apache Spark 2.4.x installed
# sparknlp.start(spark32=True) is when you have Apache Spark 3.2.x installed
# sparknlp.start(memory="16G") to change the default driver memory in SparkSession
spark = sparknlp.start()
# Download a pre-trained pipeline
pipeline = PretrainedPipeline('explain_document_dl', lang='en')
print(pipeline)
# Your testing dataset
text = """
The Mona Lisa is a 16th century oil painting created by Leonardo.
It's held at the Louvre in Paris.
"""
# Annotate your testing dataset
result = pipeline.annotate(text)
# What's in the pipeline
for k, v in result.items():
print(k)
print(v)
!source /content/spark_nlp/bin/activate; python test.py
###Output
_____no_output_____ |
EHR_Claims/Lasso/.ipynb_checkpoints/EHR_C_Death_SMOTE-checkpoint.ipynb | ###Markdown
Template LR
###Code
def lr(X_train, y_train):
from sklearn.linear_model import Lasso
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from imblearn.over_sampling import SMOTE
from sklearn.preprocessing import StandardScaler
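    # L1-penalized logistic regression; the regularization strength C is tuned by 5-fold cross-validated grid search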
model = LogisticRegression(penalty = 'l1', solver = 'liblinear')
param_grid = [
{'C' : np.logspace(-4, 4, 20)}
]
clf = GridSearchCV(model, param_grid, cv = 5, verbose = True, n_jobs = -1)
best_clf = clf.fit(X_train, y_train)
return best_clf
def train_scores(X_train,y_train):
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import fbeta_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import log_loss
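    # prints accuracy, F1, macro F2, ROC AUC and log loss on the training data; relies on the globally fitted GridSearchCV object best_clf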
pred = best_clf.predict(X_train)
actual = y_train
print(accuracy_score(actual,pred))
print(f1_score(actual,pred))
print(fbeta_score(actual,pred, average = 'macro', beta = 2))
print(roc_auc_score(actual, best_clf.decision_function(X_train)))
print(log_loss(actual,pred))
def test_scores(X_test,y_test):
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import fbeta_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import log_loss
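    # same metrics as train_scores, computed on the held-out data passed in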
pred = best_clf.predict(X_test)
actual = y_test
print(accuracy_score(actual,pred))
print(f1_score(actual,pred))
print(fbeta_score(actual,pred, average = 'macro', beta = 2))
print(roc_auc_score(actual, best_clf.decision_function(X_test)))
print(log_loss(actual,pred))
###Output
_____no_output_____
###Markdown
General Population
###Code
from imblearn.over_sampling import SMOTE
sm = SMOTE(random_state = 42)
co_train_gpop_sm,out_train_death_gpop_sm = sm.fit_resample(co_train_gpop,out_train_death_gpop)
best_clf = lr(co_train_gpop_sm, out_train_death_gpop_sm)
train_scores(co_train_gpop_sm, out_train_death_gpop_sm)
print()
train_scores(co_train_gpop, out_train_death_gpop)
print()
test_scores(co_validation_gpop, out_validation_death_gpop)
comb = []
for i in range(len(predictor_variable_claims)):
comb.append(predictor_variable_claims[i] + str(best_clf.best_estimator_.coef_[:,i:i+1]))
comb
###Output
Fitting 5 folds for each of 20 candidates, totalling 100 fits
###Markdown
High Continuity
###Code
from imblearn.over_sampling import SMOTE
sm = SMOTE(random_state = 42)
co_train_high_sm,out_train_death_high_sm = sm.fit_resample(co_train_high, out_train_death_high)
best_clf = lr(co_train_high_sm, out_train_death_high_sm)
train_scores(co_train_high_sm, out_train_death_high_sm)
print()
train_scores(co_train_high, out_train_death_high)
print()
test_scores(co_validation_high, out_validation_death_high)
comb = []
for i in range(len(predictor_variable_claims)):
comb.append(predictor_variable_claims[i] + str(best_clf.best_estimator_.coef_[:,i:i+1]))
comb
###Output
Fitting 5 folds for each of 20 candidates, totalling 100 fits
###Markdown
Low Continuity
###Code
from imblearn.over_sampling import SMOTE
sm = SMOTE(random_state = 42)
co_train_low_sm,out_train_death_low_sm = sm.fit_resample(co_train_low,out_train_death_low)
best_clf = lr(co_train_low_sm, out_train_death_low_sm)
train_scores(co_train_low_sm, out_train_death_low_sm)
print()
train_scores(co_train_low, out_train_death_low)
print()
test_scores(co_validation_low, out_validation_death_low)
comb = []
for i in range(len(predictor_variable_claims)):
comb.append(predictor_variable_claims[i] + str(best_clf.best_estimator_.coef_[:,i:i+1]))
comb
###Output
Fitting 5 folds for each of 20 candidates, totalling 100 fits
|
crawling/lol_crawling.ipynb | ###Markdown
 http://www.op.gg/ : estimating total LoL play time over the last 5 months - import the selenium package for crawling - import the time package to pause execution while pages load
###Code
from selenium import webdriver
import time
import pandas as pd  # pd.DataFrame is used below to collect the scraped results
# Define a function that takes a LoL summoner id as its argument
def lol_total_game_time(id):
    # Open the op.gg main page
url = "http://www.op.gg/"
driver = webdriver.Chrome()
driver.get(url)
    # Type the id into the search box on the op.gg main page
input_id = driver.find_element_by_css_selector("body > div.l-wrap.l-wrap--index > div.l-container > form > input")
input_id.send_keys(id)
    # Click the search button to submit the id
button = driver.find_element_by_css_selector("body > div.l-wrap.l-wrap--index > div.l-container > form > button.summoner-search-form__button > i")
button.click()
    # Wait for the page to load
time.sleep(2)
    # (Optional) click the update button to refresh the summoner's stats
# update_button = driver.find_element_by_css_selector("#SummonerRefreshButton")
# update_button.click()
# time.sleep(3)
    # (Optional) click the detail button, needed when scraping detailed match info
# detail_button = driver.find_element_by_css_selector("#SummonerLayoutContent > div.tabItem.Content.SummonerLayoutContent.summonerLayout-summary > div.RealContent > div > div.Content > div.GameItemList > div:nth-child(1) > div > div.Content > div.StatsButton > div > div > a")
# detail_button.click()
    # Collect each game's play time and date into a DataFrame
game_time = []
game_day = []
page_num = 3
while page_num:
try:
for i in range(1,21):
game_time.append(driver.find_element_by_css_selector("#SummonerLayoutContent > div.tabItem.Content.SummonerLayoutContent.summonerLayout-summary > div.RealContent > div > div.Content > div:nth-child("+ str(page_num) +") > div:nth-child(" + str(i) +") > div > div.Content > div.GameStats > div.GameLength").text)
game_day.append(driver.find_element_by_css_selector("#SummonerLayoutContent > div.tabItem.Content.SummonerLayoutContent.summonerLayout-summary > div.RealContent > div > div.Content > div:nth-child("+ str(page_num) +") > div:nth-child(" + str(i) +") > div > div.Content > div.GameStats > div.TimeStamp > span").get_attribute("title"))
except:
break
try:
driver.find_element_by_css_selector("#SummonerLayoutContent > div.tabItem.Content.SummonerLayoutContent.summonerLayout-summary > div.RealContent > div > div.Content > div.GameMoreButton.Box > a").click()
page_num += 2
time.sleep(3)
except:
break
df = pd.DataFrame(columns=["time", "day"])
df["time"] = game_time
df["day"] = game_day
return df
###Output
_____no_output_____
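The scraper above relies on fixed `time.sleep` pauses, which can be fragile when op.gg is slow. A minimal sketch of an explicit-wait alternative (optional; the 10-second timeout is an assumption, and the CSS selectors would be the same ones used above):
```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def wait_for(driver, css_selector, timeout=10):
    # Block until the element is present instead of sleeping a fixed number of seconds
    return WebDriverWait(driver, timeout).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, css_selector))
    )
```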
###Markdown
 Take a summoner id as input and store the results as a DataFrame
###Code
result = lol_total_game_time("갓분투")
result.shape
for i in range(len(result)):
result["time"][i] = result["time"][i].replace("m","")
result["time"][i] = result["time"][i].replace("s","")
ls = []
for i in range(len(result)):
m, s = result.time[i].split()
ls.append(int(m) * 60 + int(s))
result["second"] = ls
result.tail()
total_seconds = result.second.sum()
total_seconds # 초
total_minutes = total_seconds / 60
total_minutes # 분
total_hours = total_minutes / 60
total_hours # 시간
total_days = total_hours / 24
total_days # 일
0.67 * 24
###Output
_____no_output_____ |
NLP/Week_02/src/02_Twitter_bot_LSTM.ipynb | ###Markdown
 Created on Tue Jul 18 20:19:28 2017. Author: Rahul
###Code
verbose=1
from __future__ import print_function
import keras
from keras.callbacks import ModelCheckpoint
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
from keras.layers import LSTM
from keras.optimizers import RMSprop
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import pairwise_distances
import numpy as np
import twitter # pip install python-twitter
# Fetching tweets
def get_tweets():
    api = twitter.Api(consumer_key='d3t5F3wXmgiyUkbKNJzQo8CmT',
consumer_secret='Rklcd7zkTbmOw7X9DS9U5DPwZWjE74HbZhfVsovIblNYgLqgDj',
access_token_key='2sVJtXU60KhcEFuE3eXznF8Rn',
access_token_secret='l0ZmId6T4PHqK6VolwVQvVt7Io9F424AAF5puduXLKMvs7IG7q')
tweets = []
max_id = None
for _ in range(100):
tweets.extend(list(api.GetUserTimeline(screen_name='doctorsroom',
max_id=max_id,
count=200,
include_rts=False,
exclude_replies=True)))
max_id = tweets[-1].id
return [tweet.text for tweet in tweets]
#np.save(file='./doctorsroom_tweets.npy', arr=get_tweets())
saved_tweets = np.load(file='./doctorsroom_tweets.npy')  # cached tweets saved earlier; get_corpus() below reloads the same file
# Creating the corpus
CORPUS_LENGTH = None
def get_corpus(verbose=0):
tweets = np.load('./doctorsroom_tweets.npy')
tweets = [t for t in tweets if 'http' not in t]
tweets = [t for t in tweets if '>' not in t]
corpus = u' '.join(tweets)
global CORPUS_LENGTH
CORPUS_LENGTH = len(corpus)
if verbose:
print('Corpus Length:', CORPUS_LENGTH)
return corpus
corpus = get_corpus(verbose=verbose)
# Chracter index mapping
N_CHARS = None
def create_index_char_map(corpus, verbose=0):
chars = sorted(list(set(corpus)))
global N_CHARS
N_CHARS = len(chars)
if verbose:
print('No. of unique characters:', N_CHARS)
char_to_idx = {c: i for i, c in enumerate(chars)}
idx_to_char = {i: c for i, c in enumerate(chars)}
return chars, char_to_idx, idx_to_char
chars, char_to_idx, idx_to_char = create_index_char_map(corpus, verbose=verbose)
# Sequence creation
MAX_SEQ_LENGTH = 40
SEQ_STEP = 3
N_SEQS = None
def create_sequences(corpus, verbose=0):
sequences, next_chars = [], []
for i in range(0, CORPUS_LENGTH - MAX_SEQ_LENGTH, SEQ_STEP):
sequences.append(corpus[i:i + MAX_SEQ_LENGTH])
next_chars.append(corpus[i + MAX_SEQ_LENGTH])
global N_SEQS
N_SEQS = len(sequences)
if verbose:
print('No. of sequences:', len(sequences))
return np.array(sequences), np.array(next_chars)
sequences, next_chars = create_sequences(corpus, verbose=verbose)
# One hot encoding
def one_hot_encode(sequences, next_chars, char_to_idx):
X = np.zeros((N_SEQS, MAX_SEQ_LENGTH, N_CHARS), dtype=np.bool)
y = np.zeros((N_SEQS, N_CHARS), dtype=np.bool)
for i, sequence in enumerate(sequences):
for t, char in enumerate(sequence):
X[i, t, char_to_idx[char]] = 1
y[i, char_to_idx[next_chars[i]]] = 1
return X, y
X, y = one_hot_encode(sequences, next_chars, char_to_idx)
# Create an LSTM model
def build_model(hidden_layer_size=128, dropout=0.2, learning_rate=0.01, verbose=0):
model = Sequential()
model.add(LSTM(hidden_layer_size, return_sequences=True, input_shape=(MAX_SEQ_LENGTH, N_CHARS)))
model.add(Dropout(dropout))
model.add(LSTM(hidden_layer_size, return_sequences=False))
model.add(Dropout(dropout))
model.add(Dense(N_CHARS, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=RMSprop(lr=learning_rate))
if verbose:
print('Model Summary:')
model.summary()
return model
model = build_model(verbose=verbose)
# Training a model
def train_model(model, X, y, batch_size=128, nb_epoch=60, verbose=0):
checkpointer = ModelCheckpoint(filepath="weights.hdf5", monitor='loss', verbose=verbose, save_best_only=True, mode='min')
model.fit(X, y, batch_size=batch_size, epochs=nb_epoch, verbose=verbose, callbacks=[checkpointer])
train_model(model, X, y, verbose=verbose)
# Set random seed
np.random.seed(1337)
def sample(preds):
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / 0.2
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
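# Note: the hard-coded 0.2 above is a sampling "temperature"; lower values make the
# generated text more conservative, higher values more random. A small sketch of the
# same idea with the temperature exposed as a parameter (an optional variant, not used below):
def sample_with_temperature(preds, temperature=0.2):
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)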
# Generate a tweet
def generate_tweets(model, corpus, char_to_idx, idx_to_char, n_tweets=10, verbose=0):
model.load_weights('weights.hdf5')
tweets = []
spaces_in_corpus = np.array([idx for idx in range(CORPUS_LENGTH) if corpus[idx] == ' '])
for i in range(1, n_tweets + 1):
begin = np.random.choice(spaces_in_corpus)
tweet = u''
sequence = corpus[begin:begin + MAX_SEQ_LENGTH]
tweet += sequence
if verbose:
print('Tweet no. %03d' % i)
print('=' * 13)
print('Generating with seed:')
print(sequence)
print('_' * len(sequence))
for _ in range(100):
x = np.zeros((1, MAX_SEQ_LENGTH, N_CHARS))
for t, char in enumerate(sequence):
x[0, t, char_to_idx[char]] = 1.0
preds = model.predict(x, verbose=0)[0]
next_idx = sample(preds)
next_char = idx_to_char[next_idx]
tweet += next_char
sequence = sequence[1:] + next_char
if verbose:
print(tweet)
print()
tweets.append(tweet)
return tweets
tweets = generate_tweets(model, corpus, char_to_idx, idx_to_char, verbose=verbose)
# Evaluate the model
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(sequences)
Xval = vectorizer.transform(tweets)
print(pairwise_distances(Xval, Y=tfidf, metric='cosine').min(axis=1).mean())
###Output
Using TensorFlow backend.
|
notebooks/deep-learning-on-synthetic-data.ipynb | ###Markdown
Deep Learning on Synthetic Data 1. Make interactive visualization of the cup
###Code
import plotly.graph_objects as go
import plotly.io as pio
import plotly.figure_factory as ff
import trimesh
mesh = trimesh.load("../data/cup/cup_triangle.ply")
vertices = mesh.vertices
faces = mesh.faces
face_colors = mesh.visual.face_colors
x = vertices[:, 0]
y = vertices[:, 1]
z = vertices[:, 2]
i = faces[:, 0]
j = faces[:, 1]
k = faces[:, 2]
fig = go.Figure(ff.create_trisurf(x=x, y=y, z=z,
simplices=faces,
plot_edges=True,
edges_color="black",
colormap="rgb(200, 200, 200)",
show_colorbar=False).data)
buttons = [dict(label="w/ mesh", method="update", args=[dict(visible=[True, True])]),
dict(label="w/o mesh", method="update", args=[dict(visible=[True, False])]),
dict(label="wireframe", method="update", args=[dict(visible=[False, True])])]
camera = dict(eye=dict(x=2, y=2, z=2))
fig.update_layout(scene=dict(
xaxis=dict(visible=False),
yaxis=dict(visible=False),
zaxis=dict(visible=False),
aspectmode='data'),
height=500,
margin=dict(r=0, l=0, b=0, t=0, pad=0),
scene_dragmode="orbit",
scene_camera=camera,
updatemenus=[dict(buttons=buttons, x=0.1, y=1)],
showlegend=False)
# Save figure
pio.write_html(fig,
file='../_includes/figures/cup.html',
full_html=False,
include_plotlyjs='cdn')
###Output
_____no_output_____
###Markdown
2. Train MASK R-CNN from Detectron2 on cup data
###Code
import os
import random
import yaml
import cv2
import matplotlib.pyplot as plt
from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data.datasets import load_coco_json, register_coco_instances
from detectron2.engine import DefaultTrainer, DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer, ColorMode
# Setup
path_to_coco_json = "/path/to/blenderproc/coco_annotations.json"
path_to_images = "/path/to/blenderproc/images/coco_data"
path_to_config_yaml = "/path/to/detectron2/config/mask_rcnn_R_50_FPN_3x.yaml"
# Use this for training. Use the below two lines instead for inference if you want "cup" as label instead of "1".
register_coco_instances("cup", {}, path_to_coco_json, path_to_images)
# DatasetCatalog.register("cup", lambda: load_coco_json(path_to_coco_json, path_to_images))
# MetadataCatalog.get("cup").set(thing_classes=["cup"], json_file=path_to_coco_json, image_root=path_to_images)
# Config settings
cfg = get_cfg()
cfg.merge_from_file(path_to_config_yaml)
cfg.INPUT.MASK_FORMAT="bitmask" # Standard output format of BlenderProc
cfg.DATASETS.TRAIN = ("cup",)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 8
# initialize from model zoo
cfg.MODEL.WEIGHTS = "detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl"
cfg.SOLVER.IMS_PER_BATCH = 4
cfg.SOLVER.BASE_LR = 0.0025
cfg.SOLVER.MAX_ITER = 300
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
# Train model
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
# Load trained weights and run inference (on train data; just for visualization)
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # set the testing threshold for this model
cfg.DATASETS.TEST = ("cup",)
predictor = DefaultPredictor(cfg)
metadata = MetadataCatalog.get("cup")
dataset_dicts = DatasetCatalog.get("cup")
figure, axes = plt.subplots(1, 3, figsize=(16, 16), tight_layout=True)
axes = axes.tolist()
for d in random.sample(dataset_dicts, 3):
im = cv2.imread(d["file_name"])
outputs = predictor(im)
v = Visualizer(im[:, :, ::-1],
metadata=metadata,
instance_mode=ColorMode.IMAGE_BW # remove the colors of unsegmented pixels
)
v = v.draw_instance_predictions(outputs["instances"].to("cpu"))
axis = axes.pop()
axis.imshow(v.get_image())
axis.axis('off')
plt.show()
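# A rough sketch of quantitative evaluation with Detectron2's COCO evaluator
# (left commented out; it assumes a held-out split registered separately, e.g. "cup_val".
# Here only "cup" is registered, so running it on "cup" would evaluate on training data):
# from detectron2.evaluation import COCOEvaluator, inference_on_dataset
# from detectron2.data import build_detection_test_loader
# evaluator = COCOEvaluator("cup", output_dir=cfg.OUTPUT_DIR)
# val_loader = build_detection_test_loader(cfg, "cup")
# print(inference_on_dataset(predictor.model, val_loader, evaluator))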
###Output
_____no_output_____ |
master/Intro_to_Machine_learning_regression.ipynb | ###Markdown
 Log completion by ML regression
- Typical and useful Pandas
- Data exploration using Matplotlib
- Basic steps for data cleaning
  - **Exercise: Find problem in specific well log data.**
- Feature engineering
- Setup scikit-learn workflow
  - Making X and y
- Choosing a model
  - Classification vs Regression
- Evaluating model performance
- Parameter selection and tuning
  - GridSearch
- Add more data / remove data

More Pandas
---
Load Numpy, Pandas and Matplotlib
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Define the name of the file to be loaded and use Pandas to read it. Note that the name can be a PATH pointing at the file.
###Code
datafile = '../data/training_DataFrame.csv'
###Output
_____no_output_____
###Markdown
Pandas expects by default a column on the file to be an index for each row of values. For this example, column 1 (index = 0) is that column.
###Code
wells = pd.read_csv(datafile, index_col=0)
###Output
_____no_output_____
###Markdown
Data Exploration and cleaning Before feeding our machines with data to learn from, it's important to make sure that we feed them the best possible data. Pandas has a few methods to explore the contents of the data. The `head()` method shows the top rows of the DataFrame.
###Code
wells.head()
###Output
_____no_output_____
###Markdown
Another useful Pandas method is `describe()`, which compile useful statistics of each numeric column in the `DataFrame`.
###Code
wells.describe()
###Output
_____no_output_____
###Markdown
Note how the `count` row is not the same for all columns? This means that there are some values that Pandas doesn't think they are numbers! (Could be missing values or `NaN`s). There are many strategies to deal with missing data but for this excercise we're just going to ignore the rows that contain these bad values.
###Code
wells = wells.dropna()
wells.describe()
###Output
_____no_output_____
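If you want to see which columns contained the missing values, a quick check you could run before the `dropna()` call (a small optional sketch, not part of the original workflow) is:
```python
# Count NaNs per column; columns with non-zero counts explain the differing `count` rows in describe()
wells.isna().sum()
```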
###Markdown
Now every column in the `DataFrame` should contain the same number of elements and now we can focus on the statistics themselves. Look at each log property, do those `mean`, `min` and `max` look OK? `ILD` shouldn't have negative values. Let's take them out of our set:
###Code
wells = wells[wells.ILD > 0]
wells.describe()
###Output
_____no_output_____
###Markdown
Another typical first approach to explore the data is to study the distribution of values in the dataset...
###Code
ax = wells.hist(column="RHOB", figsize=(8,6), bins=20)
###Output
_____no_output_____
###Markdown
 Exercise: That distribution doesn't seem right. Can you exclude the `DataFrame` rows for which `RHOB` is lower than `1800`?
###Code
# Put your code here
#!--
wells = wells[wells.RHOB > 1800]
#--!
###Output
_____no_output_____
###Markdown
Exercise: Explore the rest of the `DataFrame`. Do all distributions look OK? Seaborn has a few tricks to display histograms better
###Code
import seaborn as sns
wells.ILD.values
sns.distplot(wells['ILD'])
###Output
_____no_output_____
###Markdown
Exercise: Calculate the `log` of ILD and store it in the `DataFrame`
###Code
# Put your code here
#!--
wells['log_ILD'] = np.log10(wells['ILD'])
axs = wells['log_ILD'].hist(bins=20)
#--!
wells = wells[wells.DPHI > 0]
sns.distplot(wells.DPHI)
###Output
_____no_output_____
###Markdown
Load testing data
###Code
w_train = wells.copy()
w_test = pd.read_csv('../data/testing_DataFrame.csv', index_col=0)
w_test_complete = pd.read_csv('../data/testing_DataFrame_complete.csv', index_col=0)
w_test.head()
w_test.describe()
w_test = w_test[w_test.DPHI > 0]
w_test_complete = w_test_complete[w_test_complete.DPHI > 0]
w_test.describe()
###Output
_____no_output_____
###Markdown
Let's start testing our training pipeline with a subset of wells. We can come back to this and change the number of wells we include, to see how it affects the result.
###Code
w_train = w_train[w_train.well_ID < 25]
# Make X and y
X = w_train[['Depth','GR','ILD','NPHI']].as_matrix()
y = w_train['RHOB'].values
X.shape
###Output
_____no_output_____
###Markdown
Set up the testing matrix of features we want to use to predict the missing `RHOB`
###Code
X_test = w_test[['Depth','GR','ILD','NPHI']].as_matrix()
###Output
_____no_output_____
###Markdown
We will display the predicted vs. true results for a test well
###Code
well_id = 81
###Output
_____no_output_____
###Markdown
Available scikit-learn models to choose from:http://scikit-learn.org/stable/supervised_learning.html Linear Regression A first simple approach is to apply a linear model
###Code
from sklearn import linear_model
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(X,y)
# Make predictions using the testing set
y_test_LR = regr.predict(X_test)
# add a new column to data frame that already exists
w_test_complete['RHOB_pred_LinReg'] = y_test_LR
my_well = w_test_complete[w_test_complete.well_ID==well_id]
plt.figure(figsize=(3,10))
plt.plot(my_well.RHOB, my_well.Depth, 'k')
plt.plot(my_well.RHOB_pred_LinReg, my_well.Depth,'r')
###Output
_____no_output_____
###Markdown
Exercise: Complete the following code to test the different classifiers similar to the Linear Regression case Decision Tree Regressor
###Code
from sklearn import tree
clf = tree.DecisionTreeRegressor()
#--!
clf = clf.fit(X, y)
y_test_DTR = clf.predict(X_test)
#--!
# add a new column to data frame that already exists and plot the results
#!--
w_test_complete['RHOB_pred_DTR'] = y_test_DTR
w_test_complete.head()
my_well = w_test_complete[w_test_complete.well_ID==well_id]
plt.figure(figsize=(3,10))
plt.plot(my_well.RHOB, my_well.Depth, 'k')
plt.plot(my_well.RHOB_pred_DTR, my_well.Depth,'r')
#--!
###Output
_____no_output_____
###Markdown
Nearest Neighbours
###Code
from sklearn.neighbors import KNeighborsRegressor
nbrs = KNeighborsRegressor()
#!--
nbrs.fit(X, y)
y_test_KNN = nbrs.predict(X_test)
#--!
# add a new column to data frame that already exists and plot the results
#!--
w_test_complete['RHOB_pred_KNN'] = y_test_KNN
my_well = w_test_complete[w_test_complete.well_ID==well_id]
plt.figure(figsize=(3,10))
plt.plot(my_well.RHOB, my_well.Depth, 'k')
plt.plot(my_well.RHOB_pred_KNN, my_well.Depth,'r')
#--!
###Output
_____no_output_____
###Markdown
Gradient Boosting Ensemble Regressor
###Code
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.ensemble import GradientBoostingRegressor
#!--
est = GradientBoostingRegressor(n_estimators=100, learning_rate=0.05,
max_depth=5, random_state=0, loss='ls')
est.fit(X, y)
y_test_GBT = est.predict(X_test)
w_test_complete['RHOB_pred_GBT'] = y_test_GBT
my_well = w_test_complete[w_test_complete.well_ID==well_id]
plt.figure(figsize=(3,10))
plt.plot(my_well.RHOB, my_well.Depth, 'k')
plt.plot(my_well.RHOB_pred_GBT, my_well.Depth,'r')
#--!
###Output
_____no_output_____
###Markdown
Evaluation Metrics Although it's good to see how the plots look, a more generalized way to determine how good a model is at predicting data http://scikit-learn.org/stable/model_selection.htmlmodel-selection "Learning the parameters of a prediction function and testing it on the same data is a methodological mistake: a model that would just repeat the labels of the samples that it has just seen would have a perfect score but would fail to predict anything useful on yet-unseen data. This situation is called overfitting. To avoid it, it is common practice when performing a (supervised) machine learning experiment to hold out part of the available data as a test set X_test, y_test. Note that the word “experiment” is not intended to denote academic use only, because even in commercial settings machine learning usually starts out experimentally."
###Code
from sklearn.model_selection import cross_val_score
scores = cross_val_score(est, X_test, w_test_complete.RHOB, cv=5, scoring='neg_mean_squared_error')
scores
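# cross_val_score returns the *negative* MSE for this scoring choice (larger is better),
# so a more readable summary is the cross-validated RMSE:
rmse_cv = np.sqrt(-scores)
print("CV RMSE per fold:", rmse_cv, "mean:", rmse_cv.mean())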
###Output
_____no_output_____
###Markdown
Regression metrics [TOP](top) http://scikit-learn.org/stable/modules/model_evaluation.htmlregression-metrics
###Code
from sklearn.metrics import explained_variance_score
print(explained_variance_score(my_well.RHOB, my_well.RHOB_pred_LinReg))
print(explained_variance_score(my_well.RHOB, my_well.RHOB_pred_DTR))
print(explained_variance_score(my_well.RHOB, my_well.RHOB_pred_KNN))
from sklearn.metrics import mean_squared_error
print(mean_squared_error(my_well.RHOB, my_well.RHOB_pred_LinReg))
print(mean_squared_error(my_well.RHOB, my_well.RHOB_pred_DTR))
print(mean_squared_error(my_well.RHOB, my_well.RHOB_pred_KNN))
###Output
52029.85435914759
22815.466421389483
12719.967630416686
###Markdown
Feature Engineering What can we do to help our classifier? Exercise: Create a function using `np.convolve` to smooth a log curve and return the smoothed version to add to the `DataFrame`
###Code
#!--
def smooth(y, box_len=10):
box = np.ones(box_len)/box_len
y_smooth = np.convolve(y, box, mode='same')
return y_smooth
#--!
w_train.columns
w_train["s_NPHI"] = smooth(w_train["NPHI"].values, box_len=50)
w_train["well_ID"].unique()
idx_test_well = 0
plt.plot(w_train[w_train.well_ID == idx_test_well]["NPHI"])
plt.plot(w_train[w_train.well_ID == idx_test_well]["s_NPHI"])
w_test["s_NPHI"] = smooth(w_test["NPHI"].values, box_len=50)
X_test = w_test[['Depth','GR','ILD','NPHI','s_NPHI']].as_matrix()
# s_NPHI will be the smoothed array!
X = w_train[['Depth','GR','ILD','NPHI','s_NPHI']].as_matrix()
#!--
est = GradientBoostingRegressor(n_estimators=100, learning_rate=0.05,
max_depth=5, random_state=0, loss='ls')
est.fit(X, y)
y_test_GBT = est.predict(X_test)
w_test_complete['RHOB_pred_GBT'] = y_test_GBT
my_well = w_test_complete[w_test_complete.well_ID==well_id]
plt.figure(figsize=(3,10))
plt.plot(my_well.RHOB, my_well.Depth, 'k')
plt.plot(my_well.RHOB_pred_GBT, my_well.Depth,'r')
#--!
print(mean_squared_error(my_well.RHOB, my_well.RHOB_pred_GBT))
from sklearn.metrics import mean_squared_error
print(mean_squared_error(my_well.RHOB, my_well.RHOB_pred_GBT))
###Output
8305.108866119877
|
Python_Exercise_1_(9_1_21).ipynb | ###Markdown
Matrix and Its Operations
###Code
#numpy
import numpy as np
A = np.array([[-5,0],[4,1]])
B = np.array([[6, -3],[2,3]])
print(A+B) #sum = A +B
import numpy as np
A = np.array([[-5,0],[4,1]])
B = np.array([[6, -3],[2,3]])
print(B-A) #difference1 = B - A
import numpy as np
A = np.array([[-5,0],[4,1]])
B = np.array([[6, -3],[2,3]])
print(A-B) #difference2 = A - B
###Output
[[-11 3]
[ 2 -2]]
|
01_mysteries_of_neural_networks/06_numpy_convolutional_neural_net/notebooks/SequentialFlatten+Dense.ipynb | ###Markdown
Sequential Flatten + Dense 00. Imports
###Code
import numpy as np
import seaborn as sn
import pandas as pd
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import fashion_mnist
from sklearn.metrics import confusion_matrix
import sys
sys.path.append("../")
from src.activation.relu import ReluLayer
from src.activation.softmax import SoftmaxLayer
from src.layers.dense import DenseLayer
from src.layers.flatten import FlattenLayer
from src.model.sequential import SequentialModel
from src.utils.core import convert_categorical2one_hot, convert_prob2categorical
from src.utils.metrics import softmax_accuracy
from src.optimizers.gradient_descent import GradientDescent
from src.optimizers.rms_prop import RMSProp
from src.optimizers.adam import Adam
###Output
_____no_output_____
###Markdown
01. Settings
###Code
# number of samples in the train data set
N_TRAIN_SAMPLES = 10000
# number of samples in the test data set
N_TEST_SAMPLES = 1000
# number of samples in the validation data set
N_VALID_SAMPLES = 1000
# number of classes
N_CLASSES = 10
# image size
IMAGE_SIZE = 28
###Output
_____no_output_____
###Markdown
02. Build data set
###Code
((trainX, trainY), (testX, testY)) = fashion_mnist.load_data()
print("trainX shape:", trainX.shape)
print("trainY shape:", trainY.shape)
print("testX shape:", testX.shape)
print("testY shape:", testY.shape)
X_train = trainX[:N_TRAIN_SAMPLES, :, :]
y_train = trainY[:N_TRAIN_SAMPLES]
X_test = trainX[N_TRAIN_SAMPLES:N_TRAIN_SAMPLES+N_TEST_SAMPLES, :, :]
y_test = trainY[N_TRAIN_SAMPLES:N_TRAIN_SAMPLES+N_TEST_SAMPLES]
X_valid = testX[:N_VALID_SAMPLES, :, :]
y_valid = testY[:N_VALID_SAMPLES]
###Output
_____no_output_____
###Markdown
**NOTE:** We need to change the data format to the shape supported by my implementation.
###Code
X_train = X_train / 255
y_train = convert_categorical2one_hot(y_train)
X_test = X_test / 255
y_test = convert_categorical2one_hot(y_test)
X_valid = X_valid / 255
y_valid = convert_categorical2one_hot(y_valid)
print("X_train shape:", X_train.shape)
print("y_train shape:", y_train.shape)
print("X_test shape:", X_test.shape)
print("y_test shape:", y_test.shape)
print("X_valid shape:", X_valid.shape)
print("y_valid shape:", y_valid.shape)
###Output
X_train shape: (10000, 28, 28)
y_train shape: (10000, 10)
X_test shape: (1000, 28, 28)
y_test shape: (1000, 10)
X_valid shape: (1000, 28, 28)
y_valid shape: (1000, 10)
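For reference, `convert_categorical2one_hot` presumably maps integer class labels to one-hot rows; a minimal sketch of that kind of conversion (an assumption about the helper in `src.utils.core`, which may differ in detail):
```python
import numpy as np

def one_hot(labels, n_classes=10):
    # labels: shape (N,) of integer class ids -> returns shape (N, n_classes)
    encoded = np.zeros((labels.shape[0], n_classes))
    encoded[np.arange(labels.shape[0]), labels] = 1
    return encoded
```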
###Markdown
03. Build model
###Code
layers = [
FlattenLayer(),
DenseLayer.initialize(units_prev=IMAGE_SIZE * IMAGE_SIZE, units_curr=1000),
ReluLayer(),
DenseLayer.initialize(units_prev=1000, units_curr=1000),
ReluLayer(),
DenseLayer.initialize(units_prev=1000, units_curr=500),
ReluLayer(),
DenseLayer.initialize(units_prev=500, units_curr=100),
ReluLayer(),
DenseLayer.initialize(units_prev=100, units_curr=25),
ReluLayer(),
DenseLayer.initialize(units_prev=25, units_curr=N_CLASSES),
SoftmaxLayer()
]
optimizer = Adam(lr=0.003)
model = SequentialModel(
layers=layers,
optimizer=optimizer
)
###Output
_____no_output_____
###Markdown
04. Train
###Code
model.train(
x_train=X_train,
y_train=y_train,
x_test=X_test,
y_test=y_test,
epochs=30,
verbose=True
)
###Output
Iter: 00001 - test loss: 0.94885 - test accuracy: 0.69100
Iter: 00002 - test loss: 0.64911 - test accuracy: 0.73500
Iter: 00003 - test loss: 0.50478 - test accuracy: 0.81500
Iter: 00004 - test loss: 0.49665 - test accuracy: 0.83100
Iter: 00005 - test loss: 0.44253 - test accuracy: 0.85500
Iter: 00006 - test loss: 0.40021 - test accuracy: 0.86900
Iter: 00007 - test loss: 0.42632 - test accuracy: 0.84700
Iter: 00008 - test loss: 0.39435 - test accuracy: 0.86000
Iter: 00009 - test loss: 0.41667 - test accuracy: 0.85600
Iter: 00010 - test loss: 0.39953 - test accuracy: 0.86900
Iter: 00011 - test loss: 0.48938 - test accuracy: 0.85800
Iter: 00012 - test loss: 0.40086 - test accuracy: 0.87200
Iter: 00013 - test loss: 0.46205 - test accuracy: 0.85200
Iter: 00014 - test loss: 0.60497 - test accuracy: 0.82800
Iter: 00015 - test loss: 0.55620 - test accuracy: 0.84300
Iter: 00016 - test loss: 0.53282 - test accuracy: 0.83900
Iter: 00017 - test loss: 0.52941 - test accuracy: 0.85300
Iter: 00018 - test loss: 0.51698 - test accuracy: 0.84100
Iter: 00019 - test loss: 0.52851 - test accuracy: 0.86000
Iter: 00020 - test loss: 0.50630 - test accuracy: 0.85700
Iter: 00021 - test loss: 0.56005 - test accuracy: 0.84700
Iter: 00022 - test loss: 0.48347 - test accuracy: 0.87100
Iter: 00023 - test loss: 0.45002 - test accuracy: 0.85600
Iter: 00024 - test loss: 0.46615 - test accuracy: 0.85900
Iter: 00025 - test loss: 0.44619 - test accuracy: 0.86500
Iter: 00026 - test loss: 0.43474 - test accuracy: 0.87400
Iter: 00027 - test loss: 0.55208 - test accuracy: 0.85600
Iter: 00028 - test loss: 0.40952 - test accuracy: 0.87800
Iter: 00029 - test loss: 0.82644 - test accuracy: 0.82200
Iter: 00030 - test loss: 0.50707 - test accuracy: 0.86900
###Markdown
05. Predict and examine results
###Code
y_hat = model.predict(X_valid)
acc = softmax_accuracy(y_hat, y_valid)
print("acc: ", acc)
y_hat = convert_prob2categorical(y_hat)
y_valid = convert_prob2categorical(y_valid)
df_cm = pd.DataFrame(
confusion_matrix(y_valid, y_hat),
range(10),
range(10)
)
plt.figure(figsize = (20,14))
sn.heatmap(df_cm, annot=True)
plt.show()
plt.figure(figsize=(16,10))
plt.title("Adam")
plt.plot(model.history["train_acc"], label="train")
plt.plot(model.history["test_acc"], label="test")
plt.legend(loc="upper left", prop={'size': 14})
plt.ylim(0., 1.)
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.show()
plt.figure(figsize=(16,10))
plt.title("Adam")
plt.plot(model.history["train_loss"], label="train")
plt.plot(model.history["test_loss"], label="test")
plt.legend(loc="upper left", prop={'size': 14})
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.show()
###Output
_____no_output_____ |
NoteBooks/Day 4.ipynb | ###Markdown
Question 47> **_Define a class named Circle which can be constructed by a radius. The Circle class has a method which can compute the area._**
###Code
class Circle():
def __init__ (self, radius):
self.radius = radius
self.PI = 22/7
def area(self):
'''
Calculates the area of the Circle
'''
return self.PI*self.radius**2
def perimeter(self):
'''
Calculates the perimeter of the circle
'''
return 2*self.PI*self.radius
r14 = Circle(14)
print(f'Area = {r14.area()}')
print(f'Perimeter = {r14.perimeter()}')
###Output
Area = 616.0
Perimeter = 88.0
###Markdown
Question 48> **_Define a class named Rectangle which can be constructed by a length and width. The Rectangle class has a method which can compute the area._**
###Code
class Rectangle():
def __init__ (self, l, w):
self.l = l
self.w = w
def area(self):
'''
Calculates the area of the Rectangle
'''
return self.l*self.w
def perimeter(self):
'''
Calculates the perimeter of the Rectangle
'''
return 2*(self.l+self.w)
l4w2 = Rectangle(4,2)
print(f'Area = {l4w2.area()}')
print(f'Perimeter = {l4w2.perimeter()}')
###Output
Area = 8
Perimeter = 12
###Markdown
Question 49> **_Define a class named Shape and its subclass Square. The Square class has an init function which takes a length as argument. Both classes have a area function which can print the area of the shape where Shape's area is 0 by default._**
###Code
class Shape():
def area(self):
'''
Calculates the area of the Shape
'''
return 0
def perimeter(self):
'''
Calculates the perimeter of the Shape
'''
return 0
class Square(Shape):
def __init__(self, l):
self.l = l
def area(self):
'''
Calculates the area of the square
'''
return self.l**2
def perimeter(self):
'''
Calculates the perimeter of the Shape
'''
return 4*self.l
sq5 = Square(5)
print(f'Area = {sq5.area()}')
print(f'Perimeter = {sq5.perimeter()}')
###Output
Area = 25
Perimeter = 20
###Markdown
Question 50> **_Please raise a RuntimeError exception._**
###Code
def zero(x):
if x == 0:
print('Zero')
else:
        raise RuntimeError('Argument has to be Zero')
try:
zero(10)
except RuntimeError as exc:
print (exc)
###Output
Argument has to be Zero
###Markdown
Question 51> ***Write a function to compute 5/0 and use try/except to catch the exceptions.***
###Code
try:
5/0
except Exception as exc:
print(exc)
###Output
division by zero
###Markdown
Question 52> ***Define a custom exception class which takes a string message as attribute.***
###Code
class MyError(Exception):
'''
My own exception class
Attributes:
msg -- explanation of the error
'''
def __init__(self, msg):
self.msg =msg
err = MyError('My Error')
err.msg
###Output
_____no_output_____
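The cell above only constructs the exception; a short usage sketch showing it raised and caught (hypothetical values, just to illustrate):
```python
def set_age(age):
    if age < 0:
        raise MyError('age must be non-negative')
    return age

try:
    set_age(-1)
except MyError as e:
    print(e.msg)
```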
###Markdown
Question 53> ***Assuming that we have some email addresses in the "[email protected]" format, please write program to print the user name of a given email address. Both user names and company names are composed of letters only.***
###Code
email = '[email protected]'
pos = email.index('@')
username = email[:pos]
company = email[pos+1:email.index('.')]
print (f'User Name: {username} and Company: {company}')
###Output
User Name: sandeep and Company: cts
###Markdown
Question 54> **_Assuming that we have some email addresses in the "[email protected]" format, please write program to print the company name of a given email address. Both user names and company names are composed of letters only._**> **_Example:> If the following email address is given as input to the program:_**```[email protected]```> **_Then, the output of the program should be:_**```google```
###Code
email = '[email protected]'
pos = email.index('@')
username = email[:pos]
company = email[pos+1:email.index('.')]
print (f'User Name: {username} and Company: {company}')
###Output
User Name: sandeep and Company: cts
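Since the question states that user and company names are composed of letters only, a regular-expression version is a natural alternative (a sketch, assuming the same `[email protected]` format):
```python
import re

email = '[email protected]'
match = re.match(r'([a-zA-Z]+)@([a-zA-Z]+)\.com', email)
if match:
    print(f'User Name: {match.group(1)} and Company: {match.group(2)}')
```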
###Markdown
Question 55> **_Write a program which accepts a sequence of words separated by whitespace as input to print the words composed of digits only._**> **_Example:> If the following words is given as input to the program:_**```2 cats and 3 dogs.```> **_Then, the output of the program should be:_**```['2', '3']```> **_In case of input data being supplied to the question, it should be assumed to be a console input._**
###Code
check_digit = lambda x: x.isdigit()
print(list(filter(check_digit, input('Enter words: ').split())))
###Output
Enter words: 2 cats and 3 dogs.
['2', '3']
###Markdown
Question 56> **_Print a unicode string "hello world"._**--- Hints> **_Use u'strings' format to define unicode string._**
###Code
unicode_string = u"Hello World!"
print(unicode_string)
###Output
Hello World!
###Markdown
Question 57> **_Write a program to read an ASCII string and to convert it to a unicode string encoded by utf-8._**--- Hints> **_Use unicode()/encode() function to convert._**
###Code
s = input('Enter String: ')
u = s.encode('utf-8')
print(u)
###Output
Enter String: Life is Good
b'Life is Good'
###Markdown
Question 58> **_Write a special comment to indicate a Python source code file is in unicode._**
###Code
# used in Python 2 but useless in Python 3
# Python 3 default encoding is utf-8
#Solution below for Python 2
# -*- coding: utf-8 -*-
###Output
_____no_output_____
###Markdown
Question 59> **_Write a program to compute 1/2+2/3+3/4+...+n/n+1 with a given n input by console (n>0)._**> **_Example:> If the following n is given as input to the program:_**```5```> **_Then, the output of the program should be:_**```3.55```> **_In case of input data being supplied to the question, it should be assumed to be a console input._**
###Code
def func(n):
if n == 0:
return 0
else:
return round(n/(n+1) + func(n-1), 2)
print(func(5))
###Output
3.55
###Markdown
 Question 60> **_Write a program to compute:_**```f(n)=f(n-1)+100 when n>0; f(0)=0```> **_with a given n input by console (n>0)._**> **_Example:> If the following n is given as input to the program:_**```5```> **_Then, the output of the program should be:_**```500```> **_In case of input data being supplied to the question, it should be assumed to be a console input._**
###Code
def func(n):
if n == 0:
return 0
else:
return 100 + func(n-1)
print(func(5))
###Output
500
###Markdown
 Question 61> **_The Fibonacci Sequence is computed based on the following formula:_**```f(n)=0 if n=0; f(n)=1 if n=1; f(n)=f(n-1)+f(n-2) if n>1```> **_Please write a program to compute the value of f(n) with a given n input by console._**> **_Example:> If the following n is given as input to the program:_**```7```> **_Then, the output of the program should be:_**```13```> **_In case of input data being supplied to the question, it should be assumed to be a console input._**---
###Code
def fib(n):
if n == 0:
return 0
elif n == 1:
return 1
else:
return fib(n-1) + fib(n-2)
n = int(input('Enter n: '))
print(fib(n))
###Output
Enter n: 7
13
###Markdown
 Question 62> **_The Fibonacci Sequence is computed based on the following formula:_**```f(n)=0 if n=0; f(n)=1 if n=1; f(n)=f(n-1)+f(n-2) if n>1```> **_Please write a program to compute the value of f(n) with a given n input by console._**> **_Example:> If the following n is given as input to the program:_**```7```> **_Then, the output of the program should be:_**```0,1,1,2,3,5,8,13```> **_In case of input data being supplied to the question, it should be assumed to be a console input._**
###Code
def fib(n):
if n < 2:
return n
else:
return fib(n-1) + fib(n-2)
n = int(input('Enter n: '))
fibs = [str(fib(i)) for i in range(n+1)]
print(','.join(fibs))
###Output
Enter n: 8
0,1,1,2,3,5,8,13,21
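The plain recursion above recomputes the same values many times (exponential in `n`). A memoised sketch that keeps the same interface (optional, not required by the exercise):
```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_fast(n):
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

print(','.join(str(fib_fast(i)) for i in range(9)))  # 0,1,1,2,3,5,8,13,21
```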
###Markdown
Question 63> **_Please write a program using generator to print the even numbers between 0 and n in comma separated form while n is input by console._**> **_Example:> If the following n is given as input to the program:_**```10```> **_Then, the output of the program should be:_**```0,2,4,6,8,10```> **_In case of input data being supplied to the question, it should be assumed to be a console input._**
###Code
def even_gen(n):
for i in range(0, n+1, 2):
yield i
even = [str(x) for x in even_gen(int(input('Enter n: ')))]
print(','.join(even))
###Output
Enter n: 10
0,2,4,6,8,10
###Markdown
Question 64> **_Please write a program using generator to print the numbers which can be divisible by 5 and 7 between 0 and n in comma separated form while n is input by console._**> **_Example:> If the following n is given as input to the program:_**```100```> **_Then, the output of the program should be:_**```0,35,70```> **_In case of input data being supplied to the question, it should be assumed to be a console input._**
###Code
def div_5_7(n):
for i in range(n+1):
if not i%7 and not i%5:
yield i
l = [str(i) for i in div_5_7(int(input('Enter n: ')))]
print(','.join(l))
###Output
Enter n: 100
0,35,70
|
W-stacking-improved Optimised - (no z-offseting) full image- W=4, x0=0.2.ipynb | ###Markdown
Reference:Ye, H. (2019). Accurate image reconstruction in radio interferometry (Doctoral thesis). https://doi.org/10.17863/CAM.39448Haoyang Ye, Stephen F Gull, Sze M Tan, Bojan Nikolic, Optimal gridding and degridding in radio interferometry imaging, Monthly Notices of the Royal Astronomical Society, Volume 491, Issue 1, January 2020, Pages 1146–1159, https://doi.org/10.1093/mnras/stz2970Github: https://github.com/zoeye859/Imaging-Tutorial
###Code
%matplotlib notebook
import numpy as np
from scipy.optimize import leastsq, brent
from scipy.linalg import solve_triangular
import matplotlib.pyplot as plt
import scipy.integrate as integrate
from time import process_time
from numpy.linalg import inv
np.set_printoptions(precision=6)
from Imaging_core_new import *
from Gridding_core import *
import pickle
with open("min_misfit_gridding_7_0p2.pkl", "rb") as pp:
opt_funcs = pickle.load(pp)
###Output
_____no_output_____
###Markdown
1. Read in the data
###Code
######### Read in visibilities ##########
data = np.genfromtxt('out_barray_6d.csv', delimiter = ',')
jj = complex(0,1)
u_original = data.T[0]
v_original = data.T[1]
w_original = -data.T[2]
V_original = data.T[3] + jj*data.T[4]
n_uv = len(u_original)
uv_max = max(np.sqrt(u_original**2+v_original**2))
V,u,v,w = Visibility_minusw(V_original,u_original,v_original,w_original)
#### Determine the pixel size ####
X_size = 900 # image size on x-axis
Y_size = 900 # image size on y-axis
X_min = -np.pi/60. #You can change X_min and X_max in order to change the pixel size.
X_max = np.pi/60.
X = np.linspace(X_min, X_max, num=X_size+1)[0:X_size]
Y_min = -np.pi/60. #You can change Y_min and Y_max in order to change the pixel size.
Y_max = np.pi/60.
Y = np.linspace(Y_min,Y_max,num=Y_size+1)[0:Y_size]
pixel_resol_x = 180. * 60. * 60. * (X_max - X_min) / np.pi / X_size
pixel_resol_y = 180. * 60. * 60. * (Y_max - Y_min) / np.pi / Y_size
print ("The pixel size on x-axis is ", pixel_resol_x, " arcsec")
###Output
The pixel size on x-axis is 23.999999999999996 arcsec
###Markdown
 2. Determine the number of w planes
Here is a short introduction to the original W-Stacking method (Offringa, 2014). Starting from
$V(u,v,w) = \int\int \frac{\text{d}l \text{d}m }{\sqrt{1-l^2-m^2}}I(l,m)\exp\left[-i2\pi\left(ul+vm+w\left(\sqrt{1-l^2-m^2}-1\right)\right)\right] $
The right-hand side can be separated into one part without any $w$-term and the rest which contains the $w$-term. For each given non-zero $w_i$, we have
$V(u,v,w_i) = \int \int \mathrm{d}l \mathrm{d}m \frac{I(l,m)}{\sqrt{1-l^2-m^2}} \exp{[-i2\pi(ul+vm)]}\exp{\Big\{-i2\pi\Big[w_i\Big(\sqrt{1-l^2-m^2}-1\Big)\Big]}\Big\}$
For each given non-zero $w_i$, this is essentially a two-dimensional Fourier transform. By inverting the transform, we have:
$\frac{I(l,m)}{\sqrt{1-l^2-m^2}} = \int \int \mathrm{d}u \mathrm{d}v V(u,v,w_i)\exp{[i2\pi(ul+vm)]} \exp{\Big\{i2\pi\Big[w_i\Big(\sqrt{1-l^2-m^2}-1\Big)\Big]}\Big\} $
If we integrate both sides along the $w$-axis from the minimum $w_{\rm min}$ to the maximum $w_{\rm max}$ on the right-hand side, the result is
$\frac{I(l,m)(w_{\rm max} - w_{\rm min})}{\sqrt{1-l^2-m^2}} = \int_{w_{\rm min}}^{w_{\rm max}} \mathrm{d}w \exp{\Big\{\mathrm{i}2\pi\Big[w\Big(\sqrt{1-l^2-m^2}-1\Big)\Big]}\Big\}\int \int \mathrm{d}u \mathrm{d}v V(u,v,w)\exp{[i2\pi(ul+vm)]}$
In practice, the values of $w$ are discretised into $N_w$ uniform samples along the $w$-axis, and the result can be written as:
$\frac{I(l,m)(w_{\rm max} - w_{\rm min})}{\sqrt{1-l^2-m^2}} = \sum_{n = 0}^{N_w-1}\exp{\Big\{i2\pi\Big[w_n\Big(\sqrt{1-l^2-m^2}-1\Big)\Big]}\Big\}\int \int \mathrm{d}u \mathrm{d}v V(u,v,w_n)\exp{[i2\pi(ul+vm)]} $
Following (Offringa, 2014), the separation of two subsequent $w$ values $\delta w$ should satisfy the criterion:
$|\delta w 2\pi(\sqrt{1-l^2-m^2}-1)| \ll 1$
If $\delta w$ is larger than this, more $w$ samples are needed. Hence, $N_w$ is determined by:
$N_w \geq 2\pi(w_{\rm max} - w_{\rm min})\max_{l,m}(1-\sqrt{1-l^2-m^2})$
2.1 Redetermine the number of w planes
In the normalised coordinate system plotted above, the field of view is confined in the region of $[-0.5,0.5]$. $x_0$ and $y_0$ represent how much of the image will be cropped after the FFT process (Ye, 2019 on optimal gridding and degridding). Here we have $x_0 = y_0$. The angular size of the map required for the FFT in $l$ is $l_{\rm range}/(2x_0)$, and is $N_x$ pixels. If $u$ is specified in wavelengths, it is multiplied by $l_{\rm range}/(2x_0)$ to convert to pixels, so that we redetermine the number of pixels as:
$N_x \geq (u_{\rm max}-u_{\rm min})l_{\rm range}/x_0$
We now consider the choice of the $w$ or $z$ grid. We set the phase centre $z=0$ at $n=0$ and $z=-x_0$ at $n_{\rm min}$. In our improved W-Stacking, we do not apply FFT in the $z$ direction, so there is no advantage in oversampling the beam considerably. Consequently,
$N_z \geq (w_{\rm max}-w_{\rm min})n_{\rm min}/x_0$
The number of $w$-planes using the least-misfit gridding function can therefore be determined as:
$N_{w'} \equiv N_z \geq\frac{\max_{l,m}(1-\sqrt{1-l^2-m^2})(w_{\rm max} - w_{\rm min})}{x_0} + W$
The additional $w$-planes enable visibilities located close to the top or bottom $w$-planes to be gridded to grids on both sides. The number of $w$-planes, $N_{w}$, determined by the original W-Stacking method is more than 1.5 times greater than the first part of $N_{w'}$. Thus, for given data, the improved W-Stacking method may require fewer $w$-planes than the original W-Stacking method. $x_0 = 0.25, W\geq 7$ is recommended so as to achieve the single precision limit in the image misfit level.
2.2 Determine w plane number Nw_2R
###Code
W = 4
M, x0, h = opt_funcs[W].M, opt_funcs[W].x0, opt_funcs[W].h
Nw_2R, w_values, dw = Wplanes(W, X_max, Y_max, w, x0)
###Output
We will have 40 w-planes
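For intuition, the number of planes returned by `Wplanes` should track the last inequality above. A rough cross-check computed directly from that formula (a sketch only; it assumes $l$ and $m$ can be approximated by the sine of the half field of view, and `Wplanes` itself may round or pad differently):
```python
import numpy as np

l_max, m_max = np.sin(X_max), np.sin(Y_max)      # assumption: l, m taken as sin of the half field of view
n_min = 1 - np.sqrt(1 - l_max**2 - m_max**2)     # max_{l,m} (1 - sqrt(1 - l^2 - m^2))
Nw_estimate = int(np.ceil((w.max() - w.min()) * n_min / x0)) + W
print(Nw_estimate)                               # should be in the same ballpark as the 40 planes reported above
```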
###Markdown
3 3D Gridding + Imaging + CorrectingTo know more about gridding, you can refer to https://github.com/zoeye859/Imaging-Tutorial Calculating gridding values for w respectively
###Code
im_size = 2250
ind = find_nearestw(w_values, w)
C_w = cal_grid_w(w, w_values, ind, dw, W, h, M, x0)
###Output
Elapsed time during the w gridding value calculation in seconds: 6.80206471
###Markdown
Gridding on w-axis
###Code
V_wgrid, u_wgrid, v_wgrid, beam_wgrid = grid_w(V, u, v, w, C_w, w_values, W, Nw_2R, ind)
###Output
Elapsed time during the w-gridding calculation in seconds: 0.6315521040000007
###Markdown
Imaging
###Code
def FFTnPShift_fullimage(V_grid, ww, X, Y, im_size, x0):
"""
FFT the gridded V_grid, and apply a phaseshift to it
Args:
V_grid (np.narray): gridded visibility on a certain w-plane
ww (np.narray): the value of the w-plane we are working on at the moment
im_size (int): the image size, it is to be noted that this is before the image cropping
x_0 (float): central 2*x_0*100% of the image will be retained
X (np.narray): X or l in radius on the image plane
Y (np.narray): Y or m in radius on the image plane
Returns:
I (np.narray): the FFT and phaseshifted image
"""
print ('FFTing...')
jj = complex(0,1)
I = np.fft.ifftshift(np.fft.ifftn(np.fft.ifftshift(V_grid)))
#I_cropped = image_crop(I, im_size, x0)
I_size = int(im_size)
I_FFTnPShift = np.zeros((I_size,I_size),dtype = np.complex_)
print ('Phaseshifting...')
#### Determine the pixel size ####
X_size = 2250# image size on x-axis
Y_size = 2250 # image size on y-axis
X_min = -np.pi/24 #You can change X_min and X_max in order to change the pixel size.
X_max = np.pi/24
X = np.linspace(X_min, X_max, num=X_size+1)[0:X_size]
Y_min = -np.pi/24 #You can change Y_min and Y_max in order to change the pixel size.
Y_max = np.pi/24
Y = np.linspace(Y_min,Y_max,num=Y_size+1)[0:Y_size]
for l_i in range(0,I_size):
for m_i in range(0,I_size):
#print (m_i, I_size)
ll = X[l_i]
mm = Y[m_i]
nn = np.sqrt(1 - ll**2 - mm**2)-1
I_FFTnPShift[l_i,m_i] = np.exp(jj*2*np.pi*ww*nn)*I[l_i,m_i]
return I_FFTnPShift
I_size = int(im_size)
I_image = np.zeros((I_size,I_size),dtype = np.complex_)
B_image = np.zeros((I_size,I_size),dtype = np.complex_)
t2_start = process_time()
for w_ind in range(Nw_2R):
print ('Gridding the ', w_ind, 'th level facet out of ',Nw_2R,' w facets.\n')
V_update = np.asarray(V_wgrid[w_ind])
u_update = np.asarray(u_wgrid[w_ind])
v_update = np.asarray(v_wgrid[w_ind])
beam_update = np.asarray(beam_wgrid[w_ind])
V_grid, B_grid = grid_uv(V_update, u_update, v_update, beam_update, W, im_size, X_max, X_min, Y_max, Y_min, h, M, x0)
print ('FFT the ', w_ind, 'th level facet out of ',Nw_2R,' w facets.\n')
I_image += FFTnPShift_fullimage(V_grid, w_values[w_ind], X, Y, im_size, x0)
B_image += FFTnPShift_fullimage(B_grid, w_values[w_ind], X, Y, im_size, x0)
B_grid = np.zeros((im_size,im_size),dtype = np.complex_)
V_grid = np.zeros((im_size,im_size),dtype = np.complex_)
t2_stop = process_time()
print("Elapsed time during imaging in seconds:", t2_stop-t2_start)
###Output
Gridding the 0 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.837794959
Elapsed time during the u/v gridding value calculation in seconds: 0.8408800169999999
FFT the 0 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 1 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 1.5852052990000018
Elapsed time during the u/v gridding value calculation in seconds: 1.6025700260000022
FFT the 1 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 2 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 2.305926268999997
Elapsed time during the u/v gridding value calculation in seconds: 2.3063081239999974
FFT the 2 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 3 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 2.8567345979999885
Elapsed time during the u/v gridding value calculation in seconds: 2.862190394999999
FFT the 3 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 4 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 2.526404977999988
Elapsed time during the u/v gridding value calculation in seconds: 2.5284883149999757
FFT the 4 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 5 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 2.1960780830000033
Elapsed time during the u/v gridding value calculation in seconds: 2.182166603000013
FFT the 5 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 6 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 1.8989128890000018
Elapsed time during the u/v gridding value calculation in seconds: 1.8949816609999743
FFT the 6 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 7 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 1.6926040820000026
Elapsed time during the u/v gridding value calculation in seconds: 1.666758504000029
FFT the 7 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 8 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 1.5196736010000222
Elapsed time during the u/v gridding value calculation in seconds: 1.498724033999963
FFT the 8 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 9 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 1.3161373290000142
Elapsed time during the u/v gridding value calculation in seconds: 1.310740965999969
FFT the 9 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 10 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 1.1652626079999777
Elapsed time during the u/v gridding value calculation in seconds: 1.1607664060000502
FFT the 10 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 11 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 1.0520644059999995
Elapsed time during the u/v gridding value calculation in seconds: 1.0485624290000146
FFT the 11 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 12 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.9474148369999966
Elapsed time during the u/v gridding value calculation in seconds: 0.9480688579999992
FFT the 12 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 13 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.8217844529999638
Elapsed time during the u/v gridding value calculation in seconds: 0.8169116810000219
FFT the 13 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 14 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.723647767999978
Elapsed time during the u/v gridding value calculation in seconds: 0.7310632730000179
FFT the 14 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 15 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.6247717930000363
Elapsed time during the u/v gridding value calculation in seconds: 0.6252756009999985
FFT the 15 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 16 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.5647411040000634
Elapsed time during the u/v gridding value calculation in seconds: 0.5544946349999691
FFT the 16 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 17 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.5131861339999659
Elapsed time during the u/v gridding value calculation in seconds: 0.5098896100000729
FFT the 17 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 18 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.42487816999994266
Elapsed time during the u/v gridding value calculation in seconds: 0.4271283970000468
FFT the 18 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 19 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.385028573999989
Elapsed time during the u/v gridding value calculation in seconds: 0.3843188739999732
FFT the 19 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 20 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.33592407699995874
Elapsed time during the u/v gridding value calculation in seconds: 0.3325197440000238
FFT the 20 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 21 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.2755747620000193
Elapsed time during the u/v gridding value calculation in seconds: 0.27034302699996715
FFT the 21 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 22 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.24303615899998476
Elapsed time during the u/v gridding value calculation in seconds: 0.2414146979999714
FFT the 22 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 23 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.17590093900003012
Elapsed time during the u/v gridding value calculation in seconds: 0.17646504399999685
FFT the 23 th level facet out of 40 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 24 th level facet out of 40 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.14518634599994584
Elapsed time during the u/v gridding value calculation in seconds: 0.14594693900005495
FFT the 24 th level facet out of 40 w facets.
FFTing...
###Markdown
Rescale and have a look
###Code
I_image_now = image_rescale(I_image,im_size, n_uv)
B_image_now = image_rescale(B_image,im_size, n_uv)
plt.figure()
plt.imshow(np.rot90(I_image_now.real,1), origin = 'lower')
plt.xlabel('Image Coordinates X')
plt.ylabel('Image Coordinates Y')
plt.show()
B_image_now[1125,1125]
###Output
_____no_output_____
###Markdown
Correcting functions h(x)h(y) on x and y axis W= 4, x0 = 0.2
###Code
def xy_correct_fullimage(I, opt_func, im_size, x0):
"""
Rescale the obtained image
Args:
W (int): support width of the gridding function
im_size (int): the image size, it is to be noted that this is before the image cropping
opt_func (np.ndarray): The vector of grid correction values sampled on [0,x0) to optimize
I (np.narray): summed up image
Return:
I_xycorrected (np.narray): corrected image on x,y axis
"""
I_size = int(im_size)
x = np.arange(-im_size/2, im_size/2)/im_size
h_map = get_grid_correction(opt_func, x)
I_xycorrected = np.zeros([I_size,I_size],dtype = np.complex_)
for i in range(0,I_size):
for j in range(0,I_size):
I_xycorrected[i,j] = I[i,j] * h_map[i] * h_map[j]
return I_xycorrected
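# The double loop above is O(N^2) in Python; an equivalent vectorised form (optional,
# same result up to floating point) multiplies by the outer product of the 1-D corrections:
def xy_correct_fullimage_fast(I, opt_func, im_size, x0):
    x = np.arange(-im_size/2, im_size/2) / im_size
    h_map = get_grid_correction(opt_func, x)
    return I * np.outer(h_map, h_map)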
I_xycorrected = xy_correct_fullimage(I_image_now, opt_funcs[W], im_size, x0)
B_xycorrected = xy_correct_fullimage(B_image_now, opt_funcs[W], im_size, x0)
plt.figure()
plt.imshow(np.rot90(I_xycorrected.real,1), origin = 'lower')
plt.xlabel('Image Coordinates X')
plt.ylabel('Image Coordinates Y')
plt.show()
B_xycorrected[1125,1125]
###Output
_____no_output_____
###Markdown
Correcting function on z axis
###Code
def z_correct_cal_fullimage(lut, X_min, X_max, Y_min, Y_max, dw, h, im_size, W, M, x0):
"""
Updated by Sze M. Tan
Return:
Cor_gridz (np.narray): correcting function on z-axis
"""
I_size = int(im_size)
nu, x = make_evaluation_grids(W, M, I_size)
gridder = calc_gridder(h, x0, nu, W, M)
grid_correction = gridder_to_grid_correction(gridder, nu, x, W)
xrange = X_max - X_min
yrange = Y_max - Y_min
ny = im_size
nx = im_size
l_map = np.linspace(X_min, X_max, nx+1)[:nx]/(2*x0)
m_map = np.linspace(Y_min, Y_max, ny+1)[:ny]/(2*x0)
ll, mm = np.meshgrid(l_map, m_map)
# Do not allow NaN or values outside the x0 for the optimal function
z = dw*(1. - np.sqrt(np.maximum(0.0, 1. - ll**2 - mm**2)))
z[z > x0] = x0
fmap = lut.interp(z)
#Cor_gridz = image_crop(fmap, im_size, x0)
return fmap
def z_correct_fullimage(I, Cor_gridz, im_size, x0):
"""
    Apply the z-axis correcting function to the image
    Args:
        I (np.ndarray): summed up image
        Cor_gridz (np.ndarray): correcting function on the z-axis, as computed by z_correct_cal_fullimage
        im_size (int): the image size, it is to be noted that this is before the image cropping
        x0 (float): kept for a consistent interface with the x/y correction (not used here)
    Return:
        I_zcorrected (np.ndarray): corrected image on the z-axis
"""
I_size = int(im_size)
I_zcorrected = np.zeros([I_size,I_size],dtype = np.complex_)
for i in range(0,I_size):
for j in range(0,I_size):
I_zcorrected[i,j] = I[i,j] * Cor_gridz[i,j]
return I_zcorrected
lut = setup_lookup_table(opt_funcs[W], 256, 7, x0)
Cor_gridz = z_correct_cal_fullimage(lut, X_min, X_max, Y_min, Y_max, dw, h, im_size, W, M, x0)
I_zcorrected = z_correct_fullimage(I_xycorrected, Cor_gridz, im_size, x0)
B_zcorrected = z_correct_fullimage(B_xycorrected, Cor_gridz, im_size, x0)
plt.figure()
plt.imshow(np.rot90(I_zcorrected.real,1), origin = 'lower')
plt.xlabel('Image Coordinates X')
plt.ylabel('Image Coordinates Y')
plt.show()
B_zcorrected[1125,1125]
###Output
_____no_output_____
###Markdown
4 DFT and FFT dirty image difference
###Code
I_DFT = np.loadtxt('I_DFT_900_out6db.csv', delimiter = ',')
I_dif = I_DFT - I_zcorrected[675:1575,675:1575].real
plt.figure()
plt.imshow(np.rot90(I_dif,1), origin = 'lower')
plt.colorbar()
plt.xlabel('Image Coordinates X')
plt.ylabel('Image Coordinates Y')
plt.show()
rms = RMS(I_dif, im_size, 0.5, x0=0.25)
print (rms)
np.savetxt('Difference_improved_withoutoffset_fullimage_W4_x02.csv',I_dif, delimiter=',')
###Output
_____no_output_____ |
Course 1 - Natural Language Processing with Classification and Vector Spaces/Week4/.ipynb_checkpoints/C1_W4_Assignment-checkpoint.ipynb | ###Markdown
Assignment 4 - Naive Machine Translation and LSHYou will now implement your first machine translation system and then youwill see how locality sensitive hashing works. Let's get started by importingthe required functions!If you are running this notebook in your local computer, don't forget todownload the twitter samples and stopwords from nltk.```nltk.download('stopwords')nltk.download('twitter_samples')``` **NOTE**: The `Exercise xx` numbers in this assignment **_are inconsistent_** with the `UNQ_Cx` numbers. This assignment covers the folowing topics:- [1. The word embeddings data for English and French words](1) - [1.1 Generate embedding and transform matrices](1-1) - [Exercise 1](ex-01)- [2. Translations](2) - [2.1 Translation as linear transformation of embeddings](2-1) - [Exercise 2](ex-02) - [Exercise 3](ex-03) - [Exercise 4](ex-04) - [2.2 Testing the translation](2-2) - [Exercise 5](ex-05) - [Exercise 6](ex-06) - [3. LSH and document search](3) - [3.1 Getting the document embeddings](3-1) - [Exercise 7](ex-07) - [Exercise 8](ex-08) - [3.2 Looking up the tweets](3-2) - [3.3 Finding the most similar tweets with LSH](3-3) - [3.4 Getting the hash number for a vector](3-4) - [Exercise 9](ex-09) - [3.5 Creating a hash table](3-5) - [Exercise 10](ex-10) - [3.6 Creating all hash tables](3-6) - [Exercise 11](ex-11)
###Code
import pdb
import pickle
import string
import time
import gensim
import matplotlib.pyplot as plt
import nltk
import numpy as np
import scipy
import sklearn
from gensim.models import KeyedVectors
from nltk.corpus import stopwords, twitter_samples
from nltk.tokenize import TweetTokenizer
from utils import (cosine_similarity, get_dict,
process_tweet)
from os import getcwd
# add folder, tmp2, from our local workspace containing pre-downloaded corpora files to nltk's data path
filePath = f"{getcwd()}/../tmp2/"
nltk.data.path.append(filePath)
###Output
_____no_output_____
###Markdown
1. The word embeddings data for English and French wordsWrite a program that translates English to French. The dataThe full dataset for English embeddings is about 3.64 gigabytes, and the Frenchembeddings are about 629 megabytes. To prevent the Coursera workspace fromcrashing, we've extracted a subset of the embeddings for the words that you'lluse in this assignment.If you want to run this on your local computer and use the full dataset,you can download the* English embeddings from Google code archive word2vec[look for GoogleNews-vectors-negative300.bin.gz](https://code.google.com/archive/p/word2vec/) * You'll need to unzip the file first.* and the French embeddings from[cross_lingual_text_classification](https://github.com/vjstark/crosslingual_text_classification). * in the terminal, type (in one line) `curl -o ./wiki.multi.fr.vec https://dl.fbaipublicfiles.com/arrival/vectors/wiki.multi.fr.vec`Then copy-paste the code below and run it. ```python Use this code to download and process the full dataset on your local computerfrom gensim.models import KeyedVectorsen_embeddings = KeyedVectors.load_word2vec_format('./GoogleNews-vectors-negative300.bin', binary = True)fr_embeddings = KeyedVectors.load_word2vec_format('./wiki.multi.fr.vec') loading the english to french dictionariesen_fr_train = get_dict('en-fr.train.txt')print('The length of the english to french training dictionary is', len(en_fr_train))en_fr_test = get_dict('en-fr.test.txt')print('The length of the english to french test dictionary is', len(en_fr_train))english_set = set(en_embeddings.vocab)french_set = set(fr_embeddings.vocab)en_embeddings_subset = {}fr_embeddings_subset = {}french_words = set(en_fr_train.values())for en_word in en_fr_train.keys(): fr_word = en_fr_train[en_word] if fr_word in french_set and en_word in english_set: en_embeddings_subset[en_word] = en_embeddings[en_word] fr_embeddings_subset[fr_word] = fr_embeddings[fr_word]for en_word in en_fr_test.keys(): fr_word = en_fr_test[en_word] if fr_word in french_set and en_word in english_set: en_embeddings_subset[en_word] = en_embeddings[en_word] fr_embeddings_subset[fr_word] = fr_embeddings[fr_word]pickle.dump( en_embeddings_subset, open( "en_embeddings.p", "wb" ) )pickle.dump( fr_embeddings_subset, open( "fr_embeddings.p", "wb" ) )``` The subset of dataTo do the assignment on the Coursera workspace, we'll use the subset of word embeddings.
###Code
en_embeddings_subset = pickle.load(open("en_embeddings.p", "rb"))
fr_embeddings_subset = pickle.load(open("fr_embeddings.p", "rb"))
###Output
_____no_output_____
###Markdown
Look at the data* en_embeddings_subset: the key is an English word, and the value is a 300 dimensional array, which is the embedding for that word.```'the': array([ 0.08007812, 0.10498047, 0.04980469, 0.0534668 , -0.06738281, ....```* fr_embeddings_subset: the key is a French word, and the value is a 300 dimensional array, which is the embedding for that word.```'la': array([-6.18250e-03, -9.43867e-04, -8.82648e-03, 3.24623e-02,...``` Load two dictionaries mapping the English to French words* A training dictionary* and a testing dictionary.
###Code
# loading the english to french dictionaries
en_fr_train = get_dict('en-fr.train.txt')
print('The length of the English to French training dictionary is', len(en_fr_train))
en_fr_test = get_dict('en-fr.test.txt')
print('The length of the English to French test dictionary is', len(en_fr_train))
###Output
The length of the English to French training dictionary is 5000
The length of the English to French test dictionary is 5000
###Markdown
Looking at the English French dictionary* `en_fr_train` is a dictionary where the key is the English word and the valueis the French translation of that English word.```{'the': 'la', 'and': 'et', 'was': 'était', 'for': 'pour',```* `en_fr_test` is similar to `en_fr_train`, but is a test set. We won't look at ituntil we get to testing. 1.1 Generate embedding and transform matrices Exercise 01: Translating English dictionary to French by using embeddingsYou will now implement a function `get_matrices`, which takes the loaded dataand returns matrices `X` and `Y`.Inputs:- `en_fr` : English to French dictionary- `en_embeddings` : English to embeddings dictionary- `fr_embeddings` : French to embeddings dictionaryReturns:- Matrix `X` and matrix `Y`, where each row in X is the word embedding for anenglish word, and the same row in Y is the word embedding for the Frenchversion of that English word. Figure 2 Use the `en_fr` dictionary to ensure that the ith row in the `X` matrixcorresponds to the ith row in the `Y` matrix. **Instructions**: Complete the function `get_matrices()`:* Iterate over English words in `en_fr` dictionary.* Check if the word have both English and French embedding. Hints Sets are useful data structures that can be used to check if an item is a member of a group. You can get words which are embedded into the language by using keys method. Keep vectors in `X` and `Y` sorted in list. You can use np.vstack() to merge them into the numpy matrix. numpy.vstack stacks the items in a list as rows in a matrix.
###Code
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def get_matrices(en_fr, french_vecs, english_vecs):
"""
Input:
en_fr: English to French dictionary
french_vecs: French words to their corresponding word embeddings.
english_vecs: English words to their corresponding word embeddings.
Output:
X: a matrix where the columns are the English embeddings.
        Y: a matrix where the columns correspond to the French embeddings.
R: the projection matrix that minimizes the F norm ||X R -Y||^2.
"""
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# X_l and Y_l are lists of the english and french word embeddings
X_l = list()
Y_l = list()
# get the english words (the keys in the dictionary) and store in a set()
english_set = set(english_vecs.keys())
# get the french words (keys in the dictionary) and store in a set()
french_set = set(french_vecs.keys())
# store the french words that are part of the english-french dictionary (these are the values of the dictionary)
french_words = set(en_fr.values())
# loop through all english, french word pairs in the english french dictionary
for en_word, fr_word in en_fr.items():
# check that the french word has an embedding and that the english word has an embedding
if fr_word in french_set and en_word in english_set:
# get the english embedding
en_vec = english_vecs[en_word]
# get the french embedding
fr_vec = french_vecs[fr_word]
# add the english embedding to the list
X_l.append(en_vec)
# add the french embedding to the list
Y_l.append(fr_vec)
# stack the vectors of X_l into a matrix X
X = np.stack(X_l)
# stack the vectors of Y_l into a matrix Y
Y = np.stack(Y_l)
### END CODE HERE ###
return X, Y
###Output
_____no_output_____
###Markdown
Now we will use function `get_matrices()` to obtain sets `X_train` and `Y_train`of English and French word embeddings into the corresponding vector space models.
###Code
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
# getting the training set:
X_train, Y_train = get_matrices(
en_fr_train, fr_embeddings_subset, en_embeddings_subset)
###Output
_____no_output_____
###Markdown
2. Translations Figure 1 Write a program that translates English words to French words using word embeddings and vector space models. 2.1 Translation as linear transformation of embeddingsGiven dictionaries of English and French word embeddings you will create a transformation matrix `R`* Given an English word embedding, $\mathbf{e}$, you can multiply $\mathbf{eR}$ to get a new word embedding $\mathbf{f}$. * Both $\mathbf{e}$ and $\mathbf{f}$ are [row vectors](https://en.wikipedia.org/wiki/Row_and_column_vectors).* You can then compute the nearest neighbors to `f` in the french embeddings and recommend the word that is most similar to the transformed word embedding. Describing translation as the minimization problemFind a matrix `R` that minimizes the following equation. $$\arg \min _{\mathbf{R}}\| \mathbf{X R} - \mathbf{Y}\|_{F}\tag{1} $$ Frobenius normThe Frobenius norm of a matrix $A$ (assuming it is of dimension $m,n$) is defined as the square root of the sum of the absolute squares of its elements:$$\|\mathbf{A}\|_{F} \equiv \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n}\left|a_{i j}\right|^{2}}\tag{2}$$ Actual loss functionIn the real world applications, the Frobenius norm loss:$$\| \mathbf{XR} - \mathbf{Y}\|_{F}$$is often replaced by it's squared value divided by $m$:$$ \frac{1}{m} \| \mathbf{X R} - \mathbf{Y} \|_{F}^{2}$$where $m$ is the number of examples (rows in $\mathbf{X}$).* The same R is found when using this loss function versus the original Frobenius norm.* The reason for taking the square is that it's easier to compute the gradient of the squared Frobenius.* The reason for dividing by $m$ is that we're more interested in the average loss per embedding than the loss for the entire training set. * The loss for all training set increases with more words (training examples), so taking the average helps us to track the average loss regardless of the size of the training set. [Optional] Detailed explanation why we use norm squared instead of the norm: Click for optional details The norm is always nonnegative (we're summing up absolute values), and so is the square. When we take the square of all non-negative (positive or zero) numbers, the order of the data is preserved. For example, if 3 > 2, 3^2 > 2^2 Using the norm or squared norm in gradient descent results in the same location of the minimum. Squaring cancels the square root in the Frobenius norm formula. Because of the chain rule, we would have to do more calculations if we had a square root in our expression for summation. Dividing the function value by the positive number doesn't change the optimum of the function, for the same reason as described above. We're interested in transforming English embedding into the French. Thus, it is more important to measure average loss per embedding than the loss for the entire dictionary (which increases as the number of words in the dictionary increases). Exercise 02: Implementing translation mechanism described in this section. Step 1: Computing the loss* The loss function will be squared Frobenoius norm of the difference betweenmatrix and its approximation, divided by the number of training examples $m$.* Its formula is:$$ L(X, Y, R)=\frac{1}{m}\sum_{i=1}^{m} \sum_{j=1}^{n}\left( a_{i j} \right)^{2}$$where $a_{i j}$ is value in $i$th row and $j$th column of the matrix $\mathbf{XR}-\mathbf{Y}$. 
Instructions: complete the `compute_loss()` function* Compute the approximation of `Y` by matrix multiplying `X` and `R`* Compute difference `XR - Y`* Compute the squared Frobenius norm of the difference and divide it by $m$. Hints Useful functions: Numpy dot , Numpy sum, Numpy square, Numpy norm Be careful about which operation is elementwise and which operation is a matrix multiplication. Try to use matrix operations instead of the numpy norm function. If you choose to use norm function, take care of extra arguments and that it's returning loss squared, and not the loss itself.
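Before implementing, a quick standalone sanity check of the loss formula on toy matrices (illustrative only, not part of the graded cell) confirms that the element-wise sum of squares and the squared Frobenius norm agree:
```python
import numpy as np

np.random.seed(1)
X_toy = np.random.rand(4, 3)
Y_toy = np.random.rand(4, 3)
R_toy = np.eye(3)

diff = X_toy @ R_toy - Y_toy
loss_elementwise = np.sum(diff ** 2) / X_toy.shape[0]                # sum of squared entries / m
loss_frobenius = np.linalg.norm(diff, 'fro') ** 2 / X_toy.shape[0]   # squared Frobenius norm / m
assert np.isclose(loss_elementwise, loss_frobenius)
```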
###Code
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def compute_loss(X, Y, R):
'''
Inputs:
X: a matrix of dimension (m,n) where the columns are the English embeddings.
        Y: a matrix of dimension (m,n) where the columns correspond to the French embeddings.
R: a matrix of dimension (n,n) - transformation matrix from English to French vector space embeddings.
Outputs:
        L: a scalar - the value of the loss function for given X, Y and R.
'''
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# m is the number of rows in X
m = len(X)
# diff is XR - Y
diff = np.matmul(X,R)-Y
# diff_squared is the element-wise square of the difference
diff_squared = diff ** 2
# sum_diff_squared is the sum of the squared elements
sum_diff_squared = np.sum(diff_squared)
    # loss is the sum_diff_squared divided by the number of examples (m)
loss = sum_diff_squared/m
### END CODE HERE ###
return loss
###Output
_____no_output_____
###Markdown
Exercise 03 Step 2: Computing the gradient of loss with respect to transform matrix R* Calculate the gradient of the loss with respect to transform matrix `R`.* The gradient is a matrix that encodes how much a small change in `R` affects the change in the loss function.* The gradient gives us the direction in which we should decrease `R` to minimize the loss.* $m$ is the number of training examples (number of rows in $X$).* The formula for the gradient of the loss function $L(X,Y,R)$ is:$$\frac{d}{dR}L(X,Y,R)=\frac{d}{dR}\Big(\frac{1}{m}\| X R -Y\|_{F}^{2}\Big) = \frac{2}{m}X^{T} (X R - Y)$$**Instructions**: Complete the `compute_gradient` function below. Hints Transposing in numpy Finding out the dimensions of matrices in numpy Remember to use numpy.dot for matrix multiplication
###Code
# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def compute_gradient(X, Y, R):
'''
Inputs:
X: a matrix of dimension (m,n) where the columns are the English embeddings.
        Y: a matrix of dimension (m,n) where the columns correspond to the French embeddings.
R: a matrix of dimension (n,n) - transformation matrix from English to French vector space embeddings.
Outputs:
g: a matrix of dimension (n,n) - gradient of the loss function L for given X, Y and R.
'''
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# m is the number of rows in X
m = len(X)
# gradient is X^T(XR - Y) * 2/m
gradient = 2/m*(np.dot(X.T,(np.matmul(X,R)-Y)))
### END CODE HERE ###
return gradient
###Output
_____no_output_____
###Markdown
Step 3: Finding the optimal R with gradient descent algorithm Gradient descent[Gradient descent](https://ml-cheatsheet.readthedocs.io/en/latest/gradient_descent.html) is an iterative algorithm which is used in searching for the optimum of the function. * Earlier, we've mentioned that the gradient of the loss with respect to the matrix encodes how much a tiny change in some coordinate of that matrix affect the change of loss function.* Gradient descent uses that information to iteratively change matrix `R` until we reach a point where the loss is minimized. Training with a fixed number of iterationsMost of the time we iterate for a fixed number of training steps rather than iterating until the loss falls below a threshold. OPTIONAL: explanation for fixed number of iterations click here for detailed discussion You cannot rely on training loss getting low -- what you really want is the validation loss to go down, or validation accuracy to go up. And indeed - in some cases people train until validation accuracy reaches a threshold, or -- commonly known as "early stopping" -- until the validation accuracy starts to go down, which is a sign of over-fitting. Why not always do "early stopping"? Well, mostly because well-regularized models on larger data-sets never stop improving. Especially in NLP, you can often continue training for months and the model will continue getting slightly and slightly better. This is also the reason why it's hard to just stop at a threshold -- unless there's an external customer setting the threshold, why stop, where do you put the threshold? Stopping after a certain number of steps has the advantage that you know how long your training will take - so you can keep some sanity and not train for months. You can then try to get the best performance within this time budget. Another advantage is that you can fix your learning rate schedule -- e.g., lower the learning rate at 10% before finish, and then again more at 1% before finishing. Such learning rate schedules help a lot, but are harder to do if you don't know how long you're training. Pseudocode:1. Calculate gradient $g$ of the loss with respect to the matrix $R$.2. Update $R$ with the formula:$$R_{\text{new}}= R_{\text{old}}-\alpha g$$Where $\alpha$ is the learning rate, which is a scalar. Learning rate* The learning rate or "step size" $\alpha$ is a coefficient which decides how much we want to change $R$ in each step.* If we change $R$ too much, we could skip the optimum by taking too large of a step.* If we make only small changes to $R$, we will need many steps to reach the optimum.* Learning rate $\alpha$ is used to control those changes.* Values of $\alpha$ are chosen depending on the problem, and we'll use `learning_rate`$=0.0003$ as the default value for our algorithm. Exercise 04 Instructions: Implement `align_embeddings()` Hints Use the 'compute_gradient()' function to get the gradient in each step
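To make the update rule concrete, here is a toy one-dimensional example (unrelated to the embedding matrices) that applies the same "new = old - alpha * gradient" step to f(r) = (r - 3)^2:
```python
r = 0.0
alpha = 0.1
for _ in range(100):
    grad = 2 * (r - 3)       # derivative of (r - 3)^2
    r = r - alpha * grad     # R_new = R_old - alpha * g
print(round(r, 4))           # converges towards the minimum at r = 3
```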
###Code
# UNQ_C5 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def align_embeddings(X, Y, train_steps=100, learning_rate=0.0003):
'''
Inputs:
X: a matrix of dimension (m,n) where the columns are the English embeddings.
        Y: a matrix of dimension (m,n) where the columns correspond to the French embeddings.
train_steps: positive int - describes how many steps will gradient descent algorithm do.
learning_rate: positive float - describes how big steps will gradient descent algorithm do.
Outputs:
R: a matrix of dimension (n,n) - the projection matrix that minimizes the F norm ||X R -Y||^2
'''
np.random.seed(129)
# the number of columns in X is the number of dimensions for a word vector (e.g. 300)
    # R is a square matrix with length equal to the number of dimensions in the word embedding
R = np.random.rand(X.shape[1], X.shape[1])
for i in range(train_steps):
if i % 25 == 0:
print(f"loss at iteration {i} is: {compute_loss(X, Y, R):.4f}")
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# use the function that you defined to compute the gradient
gradient = compute_gradient(X, Y, R)
# update R by subtracting the learning rate times gradient
R -= learning_rate*gradient
### END CODE HERE ###
return R
# UNQ_C6 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
# Testing your implementation.
np.random.seed(129)
m = 10
n = 5
X = np.random.rand(m, n)
Y = np.random.rand(m, n) * .1
R = align_embeddings(X, Y)
###Output
loss at iteration 0 is: 3.7242
loss at iteration 25 is: 3.6283
loss at iteration 50 is: 3.5350
loss at iteration 75 is: 3.4442
###Markdown
**Expected Output:**```loss at iteration 0 is: 3.7242loss at iteration 25 is: 3.6283loss at iteration 50 is: 3.5350loss at iteration 75 is: 3.4442``` Calculate transformation matrix RUsing those the training set, find the transformation matrix $\mathbf{R}$ by calling the function `align_embeddings()`.**NOTE:** The code cell below will take a few minutes to fully execute (~3 mins)
###Code
# UNQ_C7 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
R_train = align_embeddings(X_train, Y_train, train_steps=400, learning_rate=0.8)
###Output
loss at iteration 0 is: 963.0146
loss at iteration 25 is: 97.8292
loss at iteration 50 is: 26.8329
loss at iteration 75 is: 9.7893
loss at iteration 100 is: 4.3776
loss at iteration 125 is: 2.3281
loss at iteration 150 is: 1.4480
loss at iteration 175 is: 1.0338
loss at iteration 200 is: 0.8251
loss at iteration 225 is: 0.7145
loss at iteration 250 is: 0.6534
loss at iteration 275 is: 0.6185
loss at iteration 300 is: 0.5981
loss at iteration 325 is: 0.5858
loss at iteration 350 is: 0.5782
loss at iteration 375 is: 0.5735
###Markdown
Expected Output```loss at iteration 0 is: 963.0146loss at iteration 25 is: 97.8292loss at iteration 50 is: 26.8329loss at iteration 75 is: 9.7893loss at iteration 100 is: 4.3776loss at iteration 125 is: 2.3281loss at iteration 150 is: 1.4480loss at iteration 175 is: 1.0338loss at iteration 200 is: 0.8251loss at iteration 225 is: 0.7145loss at iteration 250 is: 0.6534loss at iteration 275 is: 0.6185loss at iteration 300 is: 0.5981loss at iteration 325 is: 0.5858loss at iteration 350 is: 0.5782loss at iteration 375 is: 0.5735``` 2.2 Testing the translation k-Nearest neighbors algorithm[k-Nearest neighbors algorithm](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) * k-NN is a method which takes a vector as input and finds the other vectors in the dataset that are closest to it. * The 'k' is the number of "nearest neighbors" to find (e.g. k=2 finds the closest two neighbors). Searching for the translation embeddingSince we're approximating the translation function from English to French embeddings by a linear transformation matrix $\mathbf{R}$, most of the time we won't get the exact embedding of a French word when we transform embedding $\mathbf{e}$ of some particular English word into the French embedding space. * This is where $k$-NN becomes really useful! By using $1$-NN with $\mathbf{eR}$ as input, we can search for an embedding $\mathbf{f}$ (as a row) in the matrix $\mathbf{Y}$ which is the closest to the transformed vector $\mathbf{eR}$ Cosine similarityCosine similarity between vectors $u$ and $v$ calculated as the cosine of the angle between them.The formula is $$\cos(u,v)=\frac{u\cdot v}{\left\|u\right\|\left\|v\right\|}$$* $\cos(u,v)$ = $1$ when $u$ and $v$ lie on the same line and have the same direction.* $\cos(u,v)$ is $-1$ when they have exactly opposite directions.* $\cos(u,v)$ is $0$ when the vectors are orthogonal (perpendicular) to each other. Note: Distance and similarity are pretty much opposite things.* We can obtain distance metric from cosine similarity, but the cosine similarity can't be used directly as the distance metric. * When the cosine similarity increases (towards $1$), the "distance" between the two vectors decreases (towards $0$). * We can define the cosine distance between $u$ and $v$ as$$d_{\text{cos}}(u,v)=1-\cos(u,v)$$ **Exercise 05**: Complete the function `nearest_neighbor()`Inputs:* Vector `v`,* A set of possible nearest neighbors `candidates`* `k` nearest neighbors to find.* The distance metric should be based on cosine similarity.* `cosine_similarity` function is already implemented and imported for you. It's arguments are two vectors and it returns the cosine of the angle between them.* Iterate over rows in `candidates`, and save the result of similarities between current row and vector `v` in a python list. Take care that similarities are in the same order as row vectors of `candidates`.* Now you can use [numpy argsort]( https://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.htmlnumpy.argsort) to sort the indices for the rows of `candidates`. Hints numpy.argsort sorts values from most negative to most positive (smallest to largest) The candidates that are nearest to 'v' should have the highest cosine similarity To get the last element of a list 'tmp', the notation is tmp[-1:]
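As a small illustration of the cosine similarity and cosine distance formulas above (toy vectors, separate from the graded function):
```python
import numpy as np

u = np.array([1.0, 0.0, 1.0])
v = np.array([2.0, 0.0, 1.0])
cos_uv = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))  # cosine similarity
d_cos = 1 - cos_uv                                               # cosine distance
print(cos_uv, d_cos)
```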
###Code
# UNQ_C8 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def nearest_neighbor(v, candidates, k=1):
"""
Input:
- v, the vector you are going find the nearest neighbor for
- candidates: a set of vectors where we will find the neighbors
- k: top k nearest neighbors to find
Output:
- k_idx: the indices of the top k closest vectors in sorted form
"""
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
similarity_l = []
# for each candidate vector...
for row in candidates:
# get the cosine similarity
cos_similarity = np.dot(v, row) / (np.linalg.norm(row)*np.linalg.norm(v))
# append the similarity to the list
similarity_l.append(cos_similarity)
# sort the similarity list and get the indices of the sorted list
sorted_ids = np.argsort(similarity_l)
# get the indices of the k most similar candidate vectors
k_idx = sorted_ids[-k:]
### END CODE HERE ###
return k_idx
# UNQ_C9 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
# Test your implementation:
v = np.array([1, 0, 1])
candidates = np.array([[1, 0, 5], [-2, 5, 3], [2, 0, 1], [6, -9, 5], [9, 9, 9]])
print(candidates[nearest_neighbor(v, candidates, 3)])
###Output
[[9 9 9]
[1 0 5]
[2 0 1]]
###Markdown
**Expected Output**:`[[9 9 9] [1 0 5] [2 0 1]]` Test your translation and compute its accuracy**Exercise 06**:Complete the function `test_vocabulary` which takes in Englishembedding matrix $X$, French embedding matrix $Y$ and the $R$matrix and returns the accuracy of translations from $X$ to $Y$ by $R$.* Iterate over transformed English word embeddings and check if theclosest French word vector belongs to French word that is the actualtranslation.* Obtain an index of the closest French embedding by using`nearest_neighbor` (with argument `k=1`), and compare it to the indexof the English embedding you have just transformed.* Keep track of the number of times you get the correct translation.* Calculate accuracy as $$\text{accuracy}=\frac{\#(\text{correct predictions})}{\#(\text{total predictions})}$$
###Code
# UNQ_C10 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def test_vocabulary(X, Y, R):
'''
Input:
X: a matrix where the columns are the English embeddings.
        Y: a matrix where the columns correspond to the French embeddings.
R: the transform matrix which translates word embeddings from
English to French word vector space.
Output:
        accuracy: the fraction of English words correctly translated to French
'''
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# The prediction is X times R
pred = np.matmul(X, R)
# initialize the number correct to zero
num_correct = 0
# loop through each row in pred (each transformed embedding)
for i in range(len(pred)):
# get the index of the nearest neighbor of pred at row 'i'; also pass in the candidates in Y
pred_idx = nearest_neighbor(pred[i], Y)
# if the index of the nearest neighbor equals the row of i... \
if pred_idx == i:
# increment the number correct by 1.
num_correct += 1
# accuracy is the number correct divided by the number of rows in 'pred' (also number of rows in X)
accuracy = num_correct/len(X)
### END CODE HERE ###
return accuracy
###Output
_____no_output_____
###Markdown
Let's see how is your translation mechanism working on the unseen data:
###Code
X_val, Y_val = get_matrices(en_fr_test, fr_embeddings_subset, en_embeddings_subset)
# UNQ_C11 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
acc = test_vocabulary(X_val, Y_val, R_train) # this might take a minute or two
print(f"accuracy on test set is {acc:.3f}")
###Output
accuracy on test set is 0.557
###Markdown
**Expected Output**:```0.557```You managed to translate words from one language to another languagewithout ever seing them with almost 56% accuracy by using some basiclinear algebra and learning a mapping of words from one language to another! 3. LSH and document searchIn this part of the assignment, you will implement a more efficient versionof k-nearest neighbors using locality sensitive hashing.You will then apply this to document search.* Process the tweets and represent each tweet as a vector (represent adocument with a vector embedding).* Use locality sensitive hashing and k nearest neighbors to find tweetsthat are similar to a given tweet.
###Code
# get the positive and negative tweets
all_positive_tweets = twitter_samples.strings('positive_tweets.json')
all_negative_tweets = twitter_samples.strings('negative_tweets.json')
all_tweets = all_positive_tweets + all_negative_tweets
###Output
_____no_output_____
###Markdown
3.1 Getting the document embeddings Bag-of-words (BOW) document modelsText documents are sequences of words.* The ordering of words makes a difference. For example, sentences "Apple pie isbetter than pepperoni pizza." and "Pepperoni pizza is better than apple pie"have opposite meanings due to the word ordering.* However, for some applications, ignoring the order of words can allowus to train an efficient and still effective model.* This approach is called Bag-of-words document model. Document embeddings* Document embedding is created by summing up the embeddings of all wordsin the document.* If we don't know the embedding of some word, we can ignore that word. **Exercise 07**:Complete the `get_document_embedding()` function.* The function `get_document_embedding()` encodes entire document as a "document" embedding.* It takes in a docoument (as a string) and a dictionary, `en_embeddings`* It processes the document, and looks up the corresponding embedding of each word.* It then sums them up and returns the sum of all word vectors of that processed tweet. Hints You can handle missing words easier by using the `get()` method of the python dictionary instead of the bracket notation (i.e. "[ ]"). See more about it here The default value for missing word should be the zero vector. Numpy will broadcast simple 0 scalar into a vector of zeros during the summation. Alternatively, skip the addition if a word is not in the dictonary. You can use your `process_tweet()` function which allows you to process the tweet. The function just takes in a tweet and returns a list of words.
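A minimal sketch of the summing idea with made-up 3-dimensional embeddings (illustrative only; the graded function below uses the real 300-dimensional subset):
```python
import numpy as np

toy_embeddings = {'happy': np.array([0.1, 0.2, 0.3]),
                  'day':   np.array([0.0, 0.1, 0.0])}
processed_doc = ['happy', 'day', 'unknownword']
doc_embedding = np.zeros(3)
for word in processed_doc:
    doc_embedding += toy_embeddings.get(word, 0)   # missing words contribute nothing
print(doc_embedding)                               # [0.1 0.3 0.3]
```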
###Code
# UNQ_C12 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def get_document_embedding(tweet, en_embeddings):
'''
Input:
- tweet: a string
- en_embeddings: a dictionary of word embeddings
Output:
- doc_embedding: sum of all word embeddings in the tweet
'''
doc_embedding = np.zeros(300)
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# process the document into a list of words (process the tweet)
processed_doc = process_tweet(tweet)
for word in processed_doc:
# add the word embedding to the running total for the document embedding
doc_embedding += en_embeddings[word] if word in en_embeddings else 0
### END CODE HERE ###
return doc_embedding
# UNQ_C13 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
# testing your function
custom_tweet = "RT @Twitter @chapagain Hello There! Have a great day. :) #good #morning http://chapagain.com.np"
tweet_embedding = get_document_embedding(custom_tweet, en_embeddings_subset)
tweet_embedding[-5:]
###Output
_____no_output_____
###Markdown
**Expected output**:```array([-0.00268555, -0.15378189, -0.55761719, -0.07216644, -0.32263184])``` Exercise 08 Store all document vectors into a dictionaryNow, let's store all the tweet embeddings into a dictionary.Implement `get_document_vecs()`
###Code
# UNQ_C14 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def get_document_vecs(all_docs, en_embeddings):
'''
Input:
- all_docs: list of strings - all tweets in our dataset.
- en_embeddings: dictionary with words as the keys and their embeddings as the values.
Output:
- document_vec_matrix: matrix of tweet embeddings.
- ind2Doc_dict: dictionary with indices of tweets in vecs as keys and their embeddings as the values.
'''
# the dictionary's key is an index (integer) that identifies a specific tweet
# the value is the document embedding for that document
ind2Doc_dict = {}
# this is list that will store the document vectors
document_vec_l = []
for i, doc in enumerate(all_docs):
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# get the document embedding of the tweet
doc_embedding = get_document_embedding(doc, en_embeddings)
# save the document embedding into the ind2Tweet dictionary at index i
ind2Doc_dict[i] = doc_embedding
# append the document embedding to the list of document vectors
document_vec_l.append(doc_embedding)
### END CODE HERE ###
# convert the list of document vectors into a 2D array (each row is a document vector)
document_vec_matrix = np.vstack(document_vec_l)
return document_vec_matrix, ind2Doc_dict
document_vecs, ind2Tweet = get_document_vecs(all_tweets, en_embeddings_subset)
# UNQ_C15 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
print(f"length of dictionary {len(ind2Tweet)}")
print(f"shape of document_vecs {document_vecs.shape}")
###Output
length of dictionary 10000
shape of document_vecs (10000, 300)
###Markdown
Expected Output```length of dictionary 10000shape of document_vecs (10000, 300)``` 3.2 Looking up the tweetsNow you have a vector of dimension (m,d) where `m` is the number of tweets(10,000) and `d` is the dimension of the embeddings (300). Now youwill input a tweet, and use cosine similarity to see which tweet in ourcorpus is similar to your tweet.
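The vectorised lookup used in the next cell can be written out explicitly as follows (an illustrative helper, assuming the rows of the matrix are document embeddings with non-zero norm):
```python
import numpy as np

def most_similar_row(matrix, vec):
    """Return the index of the row of `matrix` with the highest cosine similarity to `vec`."""
    sims = matrix @ vec / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(vec))
    return int(np.argmax(sims))
```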
###Code
my_tweet = 'i am sad'
process_tweet(my_tweet)
tweet_embedding = get_document_embedding(my_tweet, en_embeddings_subset)
# UNQ_C16 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
# this gives you a similar tweet as your input.
# this implementation is vectorized...
idx = np.argmax(cosine_similarity(document_vecs, tweet_embedding))
print(all_tweets[idx])
###Output
@zoeeylim sad sad sad kid :( it's ok I help you watch the match HAHAHAHAHA
###Markdown
Expected Output```@zoeeylim sad sad sad kid :( it's ok I help you watch the match HAHAHAHAHA``` 3.3 Finding the most similar tweets with LSHYou will now implement locality sensitive hashing (LSH) to identify the most similar tweet.* Instead of looking at all 10,000 vectors, you can just search a subset to findits nearest neighbors.Let's say your data points are plotted like this: Figure 3 You can divide the vector space into regions and search within one region for nearest neighbors of a given vector. Figure 4
###Code
N_VECS = len(all_tweets) # This many vectors.
N_DIMS = len(ind2Tweet[1]) # Vector dimensionality.
print(f"Number of vectors is {N_VECS} and each has {N_DIMS} dimensions.")
###Output
Number of vectors is 10000 and each has 300 dimensions.
###Markdown
Choosing the number of planes* Each plane divides the space to $2$ parts.* So $n$ planes divide the space into $2^{n}$ hash buckets.* We want to organize 10,000 document vectors into buckets so that every bucket has about $~16$ vectors.* For that we need $\frac{10000}{16}=625$ buckets.* We're interested in $n$, number of planes, so that $2^{n}= 625$. Now, we can calculate $n=\log_{2}625 = 9.29 \approx 10$.
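The same arithmetic as a quick check (illustrative, using the values stated above):
```python
import numpy as np

n_vectors = 10000          # number of document vectors
target_bucket_size = 16    # desired vectors per bucket
n_buckets = n_vectors / target_bucket_size       # 625 buckets
n_planes = int(np.ceil(np.log2(n_buckets)))      # log2(625) ~ 9.29 -> 10
print(n_buckets, n_planes)
```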
###Code
# The number of planes. We use log2(625) to have ~16 vectors/bucket.
N_PLANES = 10
# Number of times to repeat the hashing to improve the search.
N_UNIVERSES = 25
###Output
_____no_output_____
###Markdown
3.4 Getting the hash number for a vectorFor each vector, we need to get a unique number associated to that vector in order to assign it to a "hash bucket". Hyperlanes in vector spaces* In $3$-dimensional vector space, the hyperplane is a regular plane. In $2$ dimensional vector space, the hyperplane is a line.* Generally, the hyperplane is subspace which has dimension $1$ lower than the original vector space has.* A hyperplane is uniquely defined by its normal vector.* Normal vector $n$ of the plane $\pi$ is the vector to which all vectors in the plane $\pi$ are orthogonal (perpendicular in $3$ dimensional case). Using Hyperplanes to split the vector spaceWe can use a hyperplane to split the vector space into $2$ parts.* All vectors whose dot product with a plane's normal vector is positive are on one side of the plane.* All vectors whose dot product with the plane's normal vector is negative are on the other side of the plane. Encoding hash buckets* For a vector, we can take its dot product with all the planes, then encode this information to assign the vector to a single hash bucket.* When the vector is pointing to the opposite side of the hyperplane than normal, encode it by 0.* Otherwise, if the vector is on the same side as the normal vector, encode it by 1.* If you calculate the dot product with each plane in the same order for every vector, you've encoded each vector's unique hash ID as a binary number, like [0, 1, 1, ... 0]. Exercise 09: Implementing hash bucketsWe've initialized hash table `hashes` for you. It is list of `N_UNIVERSES` matrices, each describes its own hash table. Each matrix has `N_DIMS` rows and `N_PLANES` columns. Every column of that matrix is a `N_DIMS`-dimensional normal vector for each of `N_PLANES` hyperplanes which are used for creating buckets of the particular hash table.*Exercise*: Your task is to complete the function `hash_value_of_vector` which places vector `v` in the correct hash bucket.* First multiply your vector `v`, with a corresponding plane. This will give you a vector of dimension $(1,\text{N_planes})$.* You will then convert every element in that vector to 0 or 1.* You create a hash vector by doing the following: if the element is negative, it becomes a 0, otherwise you change it to a 1.* You then compute the unique number for the vector by iterating over `N_PLANES`* Then you multiply $2^i$ times the corresponding bit (0 or 1).* You will then store that sum in the variable `hash_value`.**Intructions:** Create a hash for the vector in the function below.Use this formula:$$ hash = \sum_{i=0}^{N-1} \left( 2^{i} \times h_{i} \right) $$ Create the sets of planes* Create multiple (25) sets of planes (the planes that divide up the region).* You can think of these as 25 separate ways of dividing up the vector space with a different set of planes.* Each element of this list contains a matrix with 300 rows (the word vector have 300 dimensions), and 10 columns (there are 10 planes in each "universe").
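A small worked example of the encoding described above (made-up signs, separate from the graded function):
```python
import numpy as np

signs = np.array([1, -1, 1, 1])            # signs of the dot products v . plane_i
bits = (signs > 0).astype(int)             # [1, 0, 1, 1]
hash_value = sum((2 ** i) * b for i, b in enumerate(bits))
print(hash_value)                          # 1*1 + 2*0 + 4*1 + 8*1 = 13
```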
###Code
np.random.seed(0)
planes_l = [np.random.normal(size=(N_DIMS, N_PLANES))
for _ in range(N_UNIVERSES)]
###Output
_____no_output_____
###Markdown
Hints numpy.squeeze() removes unused dimensions from an array; for instance, it converts a (10,1) 2D array into a (10,) 1D array
###Code
# UNQ_C17 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def hash_value_of_vector(v, planes):
"""Create a hash for a vector; hash_id says which random hash to use.
Input:
- v: vector of tweet. It's dimension is (1, N_DIMS)
- planes: matrix of dimension (N_DIMS, N_PLANES) - the set of planes that divide up the region
Output:
- res: a number which is used as a hash for your vector
"""
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# for the set of planes,
# calculate the dot product between the vector and the matrix containing the planes
# remember that planes has shape (300, 10)
# The dot product will have the shape (1,10)
dot_product = np.dot(v, planes)
# get the sign of the dot product (1,10) shaped vector
sign_of_dot_product = np.sign(dot_product)
    # set h to be false (equivalent to 0 when used in operations) if the sign is negative,
# and true (equivalent to 1) if the sign is positive (1,10) shaped vector
h = sign_of_dot_product > 0
# remove extra un-used dimensions (convert this from a 2D to a 1D array)
h = np.squeeze(h)
# initialize the hash value to 0
hash_value = 0
n_planes = planes.shape[1]
for i in range(n_planes):
# increment the hash value by 2^i * h_i
hash_value += 2**i*h[i]
### END CODE HERE ###
# cast hash_value as an integer
hash_value = int(hash_value)
return hash_value
# UNQ_C18 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
np.random.seed(0)
idx = 0
planes = planes_l[idx] # get one 'universe' of planes to test the function
vec = np.random.rand(1, 300)
print(f" The hash value for this vector,",
f"and the set of planes at index {idx},",
f"is {hash_value_of_vector(vec, planes)}")
###Output
The hash value for this vector, and the set of planes at index 0, is 768
###Markdown
Expected Output```The hash value for this vector, and the set of planes at index 0, is 768``` 3.5 Creating a hash table Exercise 10Given that you have a unique number for each vector (or tweet), You now want to create a hash table. You need a hash table, so that given a hash_id, you can quickly look up the corresponding vectors. This allows you to reduce your search by a significant amount of time. We have given you the `make_hash_table` function, which maps the tweet vectors to a bucket and stores the vector there. It returns the `hash_table` and the `id_table`. The `id_table` allows you know which vector in a certain bucket corresponds to what tweet. Hints a dictionary comprehension, similar to a list comprehension, looks like this: `{i:0 for i in range(10)}`, where the key is 'i' and the value is zero for all key-value pairs.
###Code
# UNQ_C19 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# This is the code used to create a hash table: feel free to read over it
def make_hash_table(vecs, planes):
"""
Input:
- vecs: list of vectors to be hashed.
- planes: the matrix of planes in a single "universe", with shape (embedding dimensions, number of planes).
Output:
- hash_table: dictionary - keys are hashes, values are lists of vectors (hash buckets)
- id_table: dictionary - keys are hashes, values are list of vectors id's
(it's used to know which tweet corresponds to the hashed vector)
"""
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# number of planes is the number of columns in the planes matrix
num_of_planes = planes.shape[1]
# number of buckets is 2^(number of planes)
num_buckets = 2 ** num_of_planes
# create the hash table as a dictionary.
# Keys are integers (0,1,2.. number of buckets)
# Values are empty lists
hash_table = {i:[] for i in range(num_buckets)}
# create the id table as a dictionary.
# Keys are integers (0,1,2... number of buckets)
# Values are empty lists
id_table = {i:[] for i in range(num_buckets)}
# for each vector in 'vecs'
for i, v in enumerate(vecs):
# calculate the hash value for the vector
h = hash_value_of_vector(v, planes)
# store the vector into hash_table at key h,
# by appending the vector v to the list at key h
hash_table[h].append(v)
# store the vector's index 'i' (each document is given a unique integer 0,1,2...)
# the key is the h, and the 'i' is appended to the list at key h
id_table[h].append(i)
### END CODE HERE ###
return hash_table, id_table
# UNQ_C20 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
np.random.seed(0)
planes = planes_l[0] # get one 'universe' of planes to test the function
vec = np.random.rand(1, 300)
tmp_hash_table, tmp_id_table = make_hash_table(document_vecs, planes)
print(f"The hash table at key 0 has {len(tmp_hash_table[0])} document vectors")
print(f"The id table at key 0 has {len(tmp_id_table[0])}")
print(f"The first 5 document indices stored at key 0 of are {tmp_id_table[0][0:5]}")
###Output
_____no_output_____
###Markdown
Expected output```The hash table at key 0 has 3 document vectorsThe id table at key 0 has 3The first 5 document indices stored at key 0 of are [3276, 3281, 3282]``` 3.6 Creating all hash tablesYou can now hash your vectors and store them in a hash table thatwould allow you to quickly look up and search for similar vectors.Run the cell below to create the hashes. By doing so, you end up havingseveral tables which have all the vectors. Given a vector, you thenidentify the buckets in all the tables. You can then iterate over thebuckets and consider much fewer vectors. The more buckets you use, themore accurate your lookup will be, but also the longer it will take.
###Code
# Creating the hashtables
hash_tables = []
id_tables = []
for universe_id in range(N_UNIVERSES): # there are 25 hashes
print('working on hash universe #:', universe_id)
planes = planes_l[universe_id]
hash_table, id_table = make_hash_table(document_vecs, planes)
hash_tables.append(hash_table)
id_tables.append(id_table)
###Output
_____no_output_____
###Markdown
Approximate K-NN Exercise 11Implement approximate K nearest neighbors using locality sensitive hashing,to search for documents that are similar to a given document at theindex `doc_id`. Inputs* `doc_id` is the index into the document list `all_tweets`.* `v` is the document vector for the tweet in `all_tweets` at index `doc_id`.* `planes_l` is the list of planes (the global variable created earlier).* `k` is the number of nearest neighbors to search for.* `num_universes_to_use`: to save time, we can use fewer than the totalnumber of available universes. By default, it's set to `N_UNIVERSES`,which is $25$ for this assignment.The `approximate_knn` function finds a subset of candidate vectors thatare in the same "hash bucket" as the input vector 'v'. Then it performsthe usual k-nearest neighbors search on this subset (instead of searchingthrough all 10,000 tweets). Hints There are many dictionaries used in this function. Try to print out planes_l, hash_tables, id_tables to understand how they are structured, what the keys represent, and what the values contain. To remove an item from a list, use `.remove()` To append to a list, use `.append()` To add to a set, use `.add()`
###Code
# UNQ_C21 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# This is the code used to do the fast nearest neighbor search. Feel free to go over it
def approximate_knn(doc_id, v, planes_l, k=1, num_universes_to_use=N_UNIVERSES):
"""Search for k-NN using hashes."""
assert num_universes_to_use <= N_UNIVERSES
# Vectors that will be checked as possible nearest neighbor
vecs_to_consider_l = list()
# list of document IDs
ids_to_consider_l = list()
# create a set for ids to consider, for faster checking if a document ID already exists in the set
ids_to_consider_set = set()
# loop through the universes of planes
for universe_id in range(num_universes_to_use):
# get the set of planes from the planes_l list, for this particular universe_id
planes = planes_l[universe_id]
# get the hash value of the vector for this set of planes
hash_value = hash_value_of_vector(v, planes)
# get the hash table for this particular universe_id
hash_table = hash_tables[universe_id]
# get the list of document vectors for this hash table, where the key is the hash_value
document_vectors_l = hash_table[hash_value]
# get the id_table for this particular universe_id
id_table = id_tables[universe_id]
# get the subset of documents to consider as nearest neighbors from this id_table dictionary
new_ids_to_consider = id_table[hash_value]
        ### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
        # remove the id of the document that we're searching
        if doc_id in new_ids_to_consider:
            new_ids_to_consider.remove(doc_id)
            print(f"removed doc_id {doc_id} of input vector from new_ids_to_search")
        # loop through the subset of document vectors to consider
        for i, new_id in enumerate(new_ids_to_consider):
            # if the document ID is not yet in the set ids_to_consider...
            if new_id not in ids_to_consider_set:
                # access document_vectors_l list at index i to get the embedding
                # then append it to the list of vectors to consider as possible nearest neighbors
                document_vector_at_i = document_vectors_l[i]
                vecs_to_consider_l.append(document_vector_at_i)
                # append the new_id (the index for the document) to the list of ids to consider
                ids_to_consider_l.append(new_id)
                # also add the new_id to the set of ids to consider
                # (use this to check if new_id is not already in the IDs to consider)
                ids_to_consider_set.add(new_id)
        ### END CODE HERE ###
# Now run k-NN on the smaller set of vecs-to-consider.
print("Fast considering %d vecs" % len(vecs_to_consider_l))
# convert the vecs to consider set to a list, then to a numpy array
vecs_to_consider_arr = np.array(vecs_to_consider_l)
# call nearest neighbors on the reduced list of candidate vectors
nearest_neighbor_idx_l = nearest_neighbor(v, vecs_to_consider_arr, k=k)
# Use the nearest neighbor index list as indices into the ids to consider
# create a list of nearest neighbors by the document ids
nearest_neighbor_ids = [ids_to_consider_l[idx]
for idx in nearest_neighbor_idx_l]
return nearest_neighbor_ids
#document_vecs, ind2Tweet
doc_id = 0
doc_to_search = all_tweets[doc_id]
vec_to_search = document_vecs[doc_id]
# UNQ_C22 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
# Sample
nearest_neighbor_ids = approximate_knn(
doc_id, vec_to_search, planes_l, k=3, num_universes_to_use=5)
print(f"Nearest neighbors for document {doc_id}")
print(f"Document contents: {doc_to_search}")
print("")
for neighbor_id in nearest_neighbor_ids:
print(f"Nearest neighbor at document id {neighbor_id}")
print(f"document contents: {all_tweets[neighbor_id]}")
###Output
_____no_output_____ |
CRM/Hubpsot/_Workflows/Workflow_Emailing_campaigns/Hubspot Connector.ipynb | ###Markdown
Hubspot Connector Step 1. Install pip package
###Code
!pip install requests
!pip install markdown2
###Output
Requirement already satisfied: requests in /home/ftp/.local/lib/python3.8/site-packages (2.23.0)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /home/ftp/.local/lib/python3.8/site-packages (from requests) (1.25.8)
Requirement already satisfied: chardet<4,>=3.0.2 in /opt/conda/lib/python3.8/site-packages (from requests) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.8/site-packages (from requests) (2020.6.20)
Requirement already satisfied: idna<3,>=2.5 in /home/ftp/.local/lib/python3.8/site-packages (from requests) (2.9)
Requirement already satisfied: markdown2 in /home/ftp/.local/lib/python3.8/site-packages (2.3.9)
###Markdown
Step 2. Import hubspot connector
###Code
import hubspot_connector
###Output
_____no_output_____
###Markdown
Step 3. Set variables - token - token for the HubSpot API- deal - deal ID- username - login username for Gmail; it is always "apikey" for SendGrid- password - login password for Gmail, or the API token for SendGrid- to - dataframe which contains an `email` column- subject - subject of the email- from_email - from email address- markdown_path - path to the markdown file which contains the email content- service - `gmail` or `sendgrid`
###Code
token = ""
deal = "1983759191"
df = hubspot_connector.connect(token, deal)
df
###Output
_____no_output_____
###Markdown
Configuring email variables
###Code
username = "apikey"
password ="-----------"
to = df
subject = "Test from Naas Awesome Notebooks"
mail_from = "[email protected]"
markdown_path = "Sales-emailing.md"
service = "sendgrid"
###Output
_____no_output_____
###Markdown
Sending emails
###Code
hubspot_connector.send_email_from_df(username,password,mail_from,to,subject,markdown_path,service)
###Output
_____no_output_____ |
src/3_load_and_dispatch_signals/.ipynb_checkpoints/extract_load_and_dispatch_signals-checkpoint.ipynb | ###Markdown
Extract Load and Dispatch SignalsData from AEMO's MMSDM database are used to construct historic load and dispatch signals. To illustrate how these signals can be constructed we extract data for one month as a sample. As the schema is the same for all MMSDM files, signals from different periods can be constructed by changing the csv file imported. Also, signals with longer time horizons can be constructed by chaining together data from multiple months. Procedure1. Import packages and declare paths to directories2. Import datasets3. Pivot dataframes extracting data from desired columns4. Save data to file Import packages
###Code
import os
import pandas as pd
###Output
_____no_output_____
###Markdown
Paths to directories
###Code
# Core data directory (common files)
data_dir = os.path.abspath(os.path.join(os.path.curdir, os.path.pardir, os.path.pardir, 'data'))
# MMSDM data directory
mmsdm_dir = os.path.join(data_dir, 'AEMO', 'MMSDM')
# Output directory
output_dir = os.path.abspath(os.path.join(os.path.curdir, 'output'))
###Output
_____no_output_____
###Markdown
Datasets MMSDMA summary of the tables used from AEMO's MMSDM database [1] is given below:| Table | Description || :----- | :----- ||DISPATCH_UNIT_SCADA | MW dispatch at 5 minute (dispatch) intervals for DUIDs within the NEM.||TRADINGREGIONSUM | Contains load in each NEM region at 30 minute (trading) intervals.| Unit DispatchParse and save unit dispatch data. Note that dispatch in MW is given at 5min intervals, and that the time resolution of demand data is 30min intervals, corresponding to the length of a trading period in the NEM. To align the time resolution of these signals unit dispatch data are aggregated, with mean power output over 30min intervals computed for each DUID.
###Code
# Unit dispatch data
df_DISPATCH_UNIT_SCADA = pd.read_csv(os.path.join(mmsdm_dir, 'PUBLIC_DVD_DISPATCH_UNIT_SCADA_201706010000.CSV'),
skiprows=1, skipfooter=1, engine='python')
# Convert to datetime objects
df_DISPATCH_UNIT_SCADA['SETTLEMENTDATE'] = pd.to_datetime(df_DISPATCH_UNIT_SCADA['SETTLEMENTDATE'])
# Pivot dataframe. Dates are the index values, columns are DUIDs, values are DUID dispatch levels
df_DISPATCH_UNIT_SCADA_piv = df_DISPATCH_UNIT_SCADA.pivot(index='SETTLEMENTDATE', columns='DUID', values='SCADAVALUE')
# To ensure the 30th minute interval is included during each trading interval the time index is offset
# by 1min. Once the groupby operation is performed this offset is removed.
df_DISPATCH_UNIT_SCADA_agg = df_DISPATCH_UNIT_SCADA_piv.groupby(pd.Grouper(freq='30Min', base=1, label='right')).mean()
df_DISPATCH_UNIT_SCADA_agg = df_DISPATCH_UNIT_SCADA_agg.set_index(df_DISPATCH_UNIT_SCADA_agg.index - pd.Timedelta(minutes=1))
df_DISPATCH_UNIT_SCADA_agg.to_csv(os.path.join(output_dir, 'signals_dispatch.csv'))
###Output
_____no_output_____
###Markdown
Regional LoadLoad in each NEM region is given at 30min intervals.
###Code
# Regional summary for each trading interval
df_TRADINGREGIONSUM = pd.read_csv(os.path.join(data_dir, 'AEMO', 'MMSDM', 'PUBLIC_DVD_TRADINGREGIONSUM_201706010000.CSV'),
skiprows=1, skipfooter=1, engine='python')
# Convert settlement date to datetime
df_TRADINGREGIONSUM['SETTLEMENTDATE'] = pd.to_datetime(df_TRADINGREGIONSUM['SETTLEMENTDATE'])
# Pivot dataframe. Index is timestamp, columns are NEM region IDs, values are total demand
df_TRADINGREGIONSUM_piv = df_TRADINGREGIONSUM.pivot(index='SETTLEMENTDATE', columns='REGIONID', values='TOTALDEMAND')
df_TRADINGREGIONSUM_piv.to_csv(os.path.join(output_dir, 'signals_regional_load.csv'))
###Output
_____no_output_____ |
Lessons&CourseWorks/3.ObjectTracking&Localization/2.RobotLocalization/4.SenseFunctionNormalized/1. Normalized Sense Function, exercise.ipynb | ###Markdown
Normalized Sense FunctionIn this notebook, let's go over the steps a robot takes to help localize itself from an initial, uniform distribution to sensing and updating that distribution and finally normalizing that distribution.1. The robot starts off knowing nothing; the robot is equally likely to be anywhere and so `p` is a uniform distribution.2. Then the robot senses a grid color: red or green, and updates this distribution `p` according to the values of pHit and pMiss.3. **We normalize `p` such that its components sum to 1.**
###Code
# importing resources
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
A helper function for visualizing a distribution.
###Code
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
###Output
_____no_output_____
###Markdown
QUIZ: Modify your code so that it normalizes the output for the sense function. This means that the entries in `q` should sum to one.Note that `pHit` refers to the probability that the robot correctly senses the color of the square it is on, so if a robot senses red *and* is on a red square, we'll multiply the current location probability (0.2) with pHit. Same goes for if a robot senses green *and* is on a green square.
###Code
# given initial variables
p=[0.2, 0.2, 0.2, 0.2, 0.2]
# the color of each grid cell in the 1D world
world=['green', 'red', 'red', 'green', 'green']
# Z, the sensor reading ('red' or 'green')
Z = 'red'
pHit = 0.6
pMiss = 0.2
## Complete this function
def sense(p, Z):
''' Takes in a current probability distribution, p, and a sensor reading, Z.
Returns a *normalized* distribution after the sensor measurement has been made, q.
This should be accurate whether Z is 'red' or 'green'. '''
q=[]
##TODO: normalize q
# loop through all grid cells
for i in range(len(p)):
# check if the sensor reading is equal to the color of the grid cell
# if so, hit = 1
# if not, hit = 0
hit = (Z == world[i])
q.append(p[i] * (hit * pHit + (1-hit) * pMiss))
#normalize
norm = np.sum(q)
q = q/norm
return q
q = sense(p,Z)
print(q)
display_map(q)
###Output
[ 0.11111111 0.33333333 0.33333333 0.11111111 0.11111111]
|
deprecated/text2vec.ipynb | ###Markdown
Opening
###Code
#getting texts (this is independent of the doc2vec); this will go in the Hazelnut file
#I need to import doc2text.py
import doc2text
import pandas as pd  # needed for pd.DataFrame below
dirpath = "/Users/braulioantonio/Documents/Git/QMIND-knowledge-graph-project/Development/datasets/qmind_onboarding/"
files_dictionary = doc2text.get_files(dirpath)
#files_dictionary
corpus_data = []
for key in files_dictionary.keys():
for file in files_dictionary[key]:
file_metadata = doc2text.metadata(dirpath + file)
corpus_data.append(doc2text.extract_text(dirpath+file, key) + file_metadata.all())
# Extract title of project
titles = [text.partition("\n")[0] for text in corpus_data]
# Delete title from each element in data
for i in range(len(corpus_data)):
corpus_data[i] = corpus_data[i].replace(titles[i], "")
# Create df
df = pd.DataFrame(list(zip(titles, corpus_data)),columns =['title', 'desc'])
# Create text column
df["text"] = df["title"] + " " + df["desc"]
# Display
#df
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
df["clean_text"] = df["text"].apply(preprocessing.stdtextpreprocessing)
df = df[["title", "clean_text"]]
# Display
#df
###Output
_____no_output_____
###Markdown
Training the Model
###Code
#building bag of words and tokens This will go in doc2vec
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
text_docs = [TaggedDocument(doc.split(' '), [i])
for i, doc in enumerate(df["clean_text"])]
# Display the tagged docs
#text_docs
#Source for Hyperparameter tuning
# https://medium.com/betacom/hyperparameters-tuning-tf-idf-and-doc2vec-models-73dd418b4d
# Instantiate model
model = Doc2Vec(vector_size=64, window=5, min_count=1, workers=8, epochs=40)
# Build vocab
model.build_vocab(text_docs)
# Train model
model.train(text_docs, total_examples=model.corpus_count, epochs=model.epochs)
# Generate vectors
#from gensim.test.utils import get_tmpfile
#hazelnut_model = get_tmpfile("my_doc2vec_model")
import os

model.save(os.getcwd() + '/hazelnut_model')
###Output
_____no_output_____
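###Markdown
As a quick sanity check, the persisted model can be reloaded and used to infer a vector for unseen text. This is only a minimal sketch: the `hazelnut_model` path matches the save call above, but the sample tokens are purely illustrative.
###Code
import os
from gensim.models.doc2vec import Doc2Vec

# reload the model persisted above and infer an embedding for a new, tokenized document
reloaded = Doc2Vec.load(os.path.join(os.getcwd(), 'hazelnut_model'))
sample_tokens = "knowledge graph project onboarding".split(' ')
vector = reloaded.infer_vector(sample_tokens)
print(vector.shape)  # (64,) given vector_size=64 above
###Output
_____no_output_____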
###Markdown
GloVe
###Code
import numpy as np
from scipy import spatial
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
import pandas as pd
"""GloVe class
- Needs to be instantiated with a file type of pretrained vectors
i.e.,
- glove.6B.50d.txt
- glove.6B.100d.txt
- glove.6B.200d.txt
- glove.6B.300d.txt
In this in main,
glove = GloVePrebuild("glove.6B.50d.txt")
OR:
Also takes in a pandas dataframe or a list object,
as seen in the constructor switch statement
etc.,
"""
class GloVe:
def __init__(self, Vectors):
super().__init__()
self.embeddings_dict = {}
if isinstance(Vectors, str):
# Vectors is a filepath
with open(Vectors, 'r', encoding="utf-8") as f:
for line in f:
values = line.split()
word = values[0]
vect = np.asarray(values[1:], "float32")
self.embeddings_dict[word] = vect
elif isinstance(Vectors, pd.DataFrame):
# Vectors is a dataframe
self.embeddings_dict = Vectors.to_dict()
elif isinstance(Vectors, list):
# Vectors is a list
for line in Vectors:
word = line[0]
vect = np.asarray(line[1:], "float32")
self.embeddings_dict[word] = vect
def find_closest_embeddings(self, index):
return sorted(self.embeddings_dict.keys(), key=lambda word: self.compute_word_distance(word, index))
def plot_all(self, amount = 1000):
tsne = TSNE(n_components=2, random_state=0)
words = list(self.embeddings_dict.keys())
vectors = [self.embeddings_dict[word] for word in words]
Y = tsne.fit_transform(vectors[:amount])
plt.scatter(Y[:, 0], Y[:, 1])
for label, x, y in zip(words, Y[:, 0], Y[:, 1]):
plt.annotate(label, xy=(x, y), xytext=(0, 0), textcoords="offset points")
plt.show()
def plot_word_neighbors(self, word, amount = 250):
tsne = TSNE(n_components=2, random_state=0)
words = self.find_closest_embeddings(word)[0:amount]
vectors = [self.embeddings_dict[word] for word in words]
Y = tsne.fit_transform(vectors[:amount+1])
plt.scatter(Y[:, 0], Y[:, 1])
for label, x, y in zip(words, Y[:, 0], Y[:, 1]):
plt.annotate(label, xy=(x, y), xytext=(0, 0), textcoords="offset points")
plt.show()
def compute_word_distance(self, word1, word2):
return spatial.distance.euclidean(self.embeddings_dict[word1], self.embeddings_dict[word2])
###Output
_____no_output_____
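###Markdown
A minimal usage sketch of the class above. It assumes the pretrained vector file (here `glove.6B.50d.txt`) has already been downloaded next to the notebook; the file is not part of this repo.
###Code
# instantiate from a pretrained GloVe text file and query it
glove = GloVe("glove.6B.50d.txt")
print(glove.compute_word_distance("king", "queen"))
print(glove.find_closest_embeddings("king")[:10])
###Output
_____no_output_____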
###Markdown
Applying the model
###Code
proj2vec = [model.infer_vector((df['clean_text'][i].split(' '))) for i in range(0,len(df['clean_text']))]
# Display
#proj2vec
df['proj2vec'] = np.array(proj2vec).tolist()
# Display
#df
import tensorflow_hub as hub
# Download Universal Sentence Encoder
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder-large/5")
df['use'] = np.array(embed(df['clean_text'])).tolist()
# Display
#df
# Visualize Vectors
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, learning_rate='auto',init='random')
#df['tsneP2V'] = tsne.fit_transform(X)
#Y = np.asarray( list(df['use']) )
#df['tsneUSE'] = TSNE(n_components=2, learning_rate='auto',init='random').fit_transform(Y)
X = np.asarray(list(df['proj2vec']))
df_subset = pd.DataFrame()
# fit/transform once per feature set so both columns come from the same embedding run
X_embedded = tsne.fit_transform(X)
df_subset['tsne-P2V-one'] = X_embedded[:, 0]
df_subset['tsne-P2V-two'] = X_embedded[:, 1]
Y = np.asarray(list(df['use']))
Y_embedded = tsne.fit_transform(Y)
df_subset['tsne-use-one'] = Y_embedded[:, 0]
df_subset['tsne-use-two'] = Y_embedded[:, 1]
#training Doc2vec
#plotting #Hazelnut
#TSNE(n_components=2, learning_rate='auto',init='random').fit_transform(X)
###Output
_____no_output_____
###Markdown
Visualization
###Code
#import matplotlib.pyplot as plt
"""#plt.figure(figsize=(16,10))
plt.scatter(
x="tsne-P2V-one", y="tsne-P2V-two",
data=df_subset,
)"""
"""#plt.figure(figsize=(16,10))
plt.scatter(
x="tsne-use-one", y="tsne-use-two",
data=df_subset,
)"""
###Output
_____no_output_____ |
2 Linear Regression with One Variable.ipynb | ###Markdown
Model Representation Recap- Linear Regression เป็นปัญหา supervised learning เพราะเรามี "right answer" เป็น train data- Regression Problem > predict real-valued output for *continuous data* CourseraTo establish notation for future use, we’ll use $x^{(i)}$ to denote the “input” variables (living area in this example), also called input features, and $y^{(i)}$ to denote the “output” or target variable that we are trying to predict (price). A pair ($x^{(i)},y^{(i)}$) is called a training example, and the dataset that we’ll be using to learn—a list of m training examples ($x^{(i)},y^{(i)}$); i=1,...,m — is called a training set. Note that the superscript “$(i)$” in the notation is simply an index into the training set, and has nothing to do with exponentiation. We will also use $X$ to denote the space of input values, and $Y$ to denote the space of output values. In this example, $X = Y = ℝ$.To describe the supervised learning problem slightly more formally, our goal is, given a training set, to learn a function $h : X → Y$ so that $h(x)$ is a “good” predictor for the corresponding value of $y$. For historical reasons, this function $h$ is called a hypothesis.$$h_{\theta}(x) = \theta_{0}+\theta_{1}x$$Linear regression with one variable = **Univariate linear regression.**One Variable ที่ว่าคือ $x$When the target variable that we’re trying to predict is continuous, such as in our housing example, we call the learning problem a regression problem. When $y$ can take on only a small number of discrete values (such as if, given the living area, we wanted to predict if a dwelling is a house or an apartment, say), we call it a classification problem. Cost Function (Mean squared error)We can measure the accuracy of our hypothesis function by using a cost function. This takes an average difference (actually a fancier version of an average) of all the results of the hypothesis with inputs from x's and the actual output y's.$$ J(\theta_{0},\theta_{1})=\frac{1}{2m}\sum_{i=1}^{m}(\hat{y_{i}}−y_{i})^{2} = \frac{1}{2m}\sum_{i=1}^{m}(h_{\theta}(x_{i})−y_{i})^{2} $$To break it apart, it is $\frac{1}{2}\bar{x}$ where $\bar{x}$ is the mean of the squares of $h_{\theta}(x_{i})−y_{i}$ , or the difference between the predicted value and the actual value.This function is otherwise called the "Squared error function", or "Mean squared error". The mean is halved ($\frac{1}{2}$) as a convenience for the computation of the gradient descent, as the derivative term of the square function will cancel out the $\frac{1}{2}$ term. The following image summarizes what the cost function does: Cost Function - Intuition IIf we try to think of it in visual terms, our training data set is scattered on the $x-y$ plane. We are trying to make a straight line (defined by $h_{\theta}(x)$) which passes through these scattered data points.Our objective is to get the best possible line. The best possible line will be such so that the average squared vertical distances of the scattered points from the line will be the least. Ideally, the line should pass through all the points of our training data set. In such a case, the value of $J(\theta_{0},\theta_{1})$ will be 0. The following example shows the ideal situation where we have a cost function of 0.When $\theta_{1} = 1$, we get a slope of 1 which goes through every single data point in our model. Conversely, when $\theta_{1} = 0.5$, we see the vertical distance from our fit to the data points increase.This increases our cost function to 0.58. 
Plotting several other points yields to the following graph:Thus as a goal, we should try to minimize the cost function. In this case, $\theta_{1}=1$ is our global minimum. Cost Function - Intuition IIA contour plot is a graph that contains many contour lines. A contour line of a two variable function has a constant value at all points of the same line. An example of such a graph is the one to the right below.Taking any color and going along the 'circle', one would expect to get the same value of the cost function. For example, the three green points found on the green line above have the same value for $J(\theta_{0},\theta_{1})$ and as a result, they are found along the same line. The circled $x$ displays the value of the cost function for the graph on the left when $\theta_{0} = 800$ and $\theta_{1} = -0.15$. Taking another $h(x)$ and plotting its contour plot, one gets the following graphs:When $\theta_{0} = 360$ and $\theta_{1} = 0$, the value of $J(\theta_{0},\theta_{1})$ in the contour plot gets closer to the center thus reducing the cost function error. Now giving our hypothesis function a slightly positive slope results in a better fit of the data.The graph above minimizes the cost function as much as possible and consequently, the result of $\theta_{1}$ and $\theta_{0}$ tend to be around 0.12 and 250 respectively. Plotting those values on our graph to the right seems to put our point in the center of the inner most 'circle'. Gradient DescentSo we have our hypothesis function and we have a way of measuring how well it fits into the data. Now we need to estimate the parameters in the hypothesis function. That's where gradient descent comes in.Imagine that we graph our hypothesis function based on its fields $\theta_{0}$ and $\theta_{1}$ (actually we are graphing the cost function as a function of the parameter estimates). We are not graphing $x$ and $y$ itself, but the parameter range of our hypothesis function and the cost resulting from selecting a particular set of parameters.We put $\theta_{0}$ on the $x$ axis and $\theta_{1}$ on the $y$ axis, with the cost function on the vertical $z$ axis. The points on our graph will be the result of the cost function using our hypothesis with those specific theta parameters. The graph below depicts such a setup.We will know that we have succeeded when our cost function is at the very bottom of the pits in our graph, i.e. when its value is the minimum. The red arrows show the minimum points in the graph.The way we do this is by taking the derivative (the tangential line to a function) of our cost function. The slope of the tangent is the derivative at that point and it will give us a direction to move towards. We make steps down the cost function in the direction with the steepest descent. **The size of each step is determined by the parameter $\alpha$, which is called the learning rate.**For example, the distance between each 'star' in the graph above represents a step determined by our parameter $\alpha$. A smaller $\alpha$ would result in a smaller step and a larger $\alpha$ results in a larger step. The direction in which the step is taken is determined by the partial derivative of $J(\theta_{0},\theta_{1})$. Depending on where one starts on the graph, one could end up at different points. 
The image above shows us two different starting points that end up in two different places.The gradient descent algorithm is:repeat until convergence:$$ \theta_{j} := \theta_{j} − \alpha \frac{∂J(\theta_{0},\theta_{1})}{∂\theta_{j}} $$where$j=0,1$ represents the feature index number.At each iteration $j$, one should simultaneously update the parameters $θ_{1},θ_{2},...,θ_{n}$. Updating a specific parameter prior to calculating another one on the $j(th)$ iteration would yield to a wrong implementation. Gradient Descent IntuitionIn this video we explored the scenario where we used one parameter $\theta_{1}$ and plotted its cost function to implement a gradient descent. Our formula for a single parameter was :Repeat until convergence:$$\theta_{1} := \theta_{1} − \alpha \frac{dJ(\theta_{1})}{d\theta_{1}}$$Regardless of the slope's sign for $\frac{dJ(\theta_{1})}{d\theta_{1}}$, $\theta_{1}$ eventually converges to its minimum value. The following graph shows that when the slope is negative, the value of $\theta_{1}$ increases and when it is positive, the value of $\theta_{1}$ decreases.On a side note, we should adjust our parameter $\alpha$ to ensure that the gradient descent algorithm converges in a reasonable time. Failure to converge or too much time to obtain the minimum value imply that our step size is wrong.How does gradient descent converge with a fixed step size $\alpha$?The intuition behind the convergence is that $\frac{dJ(\theta_{1})}{d\theta_{1}}$ approaches 0 as we approach the bottom of our convex function. At the minimum, the derivative will always be 0 and thus we get:$$ \theta_{1} := \theta_{1}−\alpha\cdot0 $$ Gradient Descent For Linear RegressionWhen specifically applied to the case of linear regression, a new form of the gradient descent equation can be derived. We can substitute our actual cost function and our actual hypothesis function and modify the equation to :repeat until convergence: {$$ \theta_{0} := \theta_{0}− \alpha\frac{1}{m}\sum_{i=1}^{m}(h_{\theta}(x_{i})−y_{i}) $$$$ \theta_{1} := \theta_{1}− \alpha\frac{1}{m}\sum_{i=1}^{m}((h_{\theta}(x_{i})−y_{i})x_{i}) $$}where $m$ is the size of the training set, $\theta_{0}$ a constant that will be changing simultaneously with $\theta_{1}$ and $x^{(i)}$,yiare values of the given training set (data).Note that we have separated out the two cases for $\theta_{j}$ into separate equations for $\theta_{0}$ and $\theta_{1}$; and that for $\theta_{1}$ we are multiplying $x_{i}$ at the end due to the derivative. The following is a derivation of $\frac{\partial J(\theta)}{\partial\theta_{j}}$ for a single example :> The point of all this is that if we start with a guess for our hypothesis and then repeatedly apply these gradient descent equations, our hypothesis will become more and more accurate.So, this is simply gradient descent on the original cost function J. **This method looks at every example in the entire training set on every step, and is called batch gradient descent.** **Note that, while gradient descent can be susceptible to local minima in general, the optimization problem we have posed here for linear regression has only one global, and no other local, optima; thus gradient descent always converges (assuming the learning rate α is not too large) to the global minimum. **Indeed, J is a convex quadratic function. Here is an example of gradient descent as it is run to minimize a quadratic function.The ellipses shown above are the contours of a quadratic function. 
Also shown is the trajectory taken by gradient descent, which was initialized at (48,30). The x's in the figure (joined by straight lines) mark the successive values of θ that gradient descent went through as it converged to its minimum. ================ Code Example ================ Example data for linear regression with one variable: there is one independent variable (x) and one dependent variable (y).
###Code
import pandas as pd
import matplotlib.pyplot as plt
# The file ex1data1.txt contains the dataset for our linear regression problem.
# The first column is the population of a city and the second column is the profit of a food truck in that city.
# A negative value for profit indicates a loss.
data1 = pd.read_csv('programing/machine-learning-ex1/ex1/ex1data1.txt',names=['Population','Profit of food truck'])
data1.head()
###Output
_____no_output_____
###Markdown
Before doing anything else, let's visualize the data to get a sense of what the relationship looks like.
###Code
x = data1.values[:,0]
y = data1.values[:,1]
plt.scatter(x, y)
plt.xlabel('Population of City in 10,000s')
plt.ylabel('Profit in $10,000s')
plt.show()
###Output
_____no_output_____
###Markdown
Judging from the plot, the relationship looks linear, and there is only one input, so the model should be a univariate linear regression in which **each point** ($x_{i},y_{i}$) of the model satisfies
$$\hat{y_{i}} = h_{\theta}(x_{i}) = \theta_{0}+\theta_{1}x_{i}$$
The next step is therefore to find the parameters $\theta_{0}$, $\theta_{1}$ with gradient descent (iterating over values of $\theta_{0}$, $\theta_{1}$ until the cost function is minimized). From the dataset, the values we have are $x = \begin{bmatrix} x_{1}\\x_{2}\\\vdots \\x_{m}\end{bmatrix}$ and $y = \begin{bmatrix} y_{1}\\y_{2}\\\vdots \\y_{m}\end{bmatrix}$, where $m$ is the number of points (rows) in the dataset. Since $\hat{y_{i}}$ is a function of $x_{i}$, we get
$$ \begin{bmatrix} \hat{y_{1}}\\\hat{y_{2}}\\\vdots \\\hat{y_{m}}\end{bmatrix} = \begin{bmatrix} \theta_{0}+\theta_{1}x_{1}\\\theta_{0}+\theta_{1}x_{2}\\\vdots \\\theta_{0}+\theta_{1}x_{m}\end{bmatrix}$$
Vectorizing this gives
$$ \begin{bmatrix} \hat{y_{1}}\\\hat{y_{2}}\\\vdots \\\hat{y_{m}}\end{bmatrix} = \begin{bmatrix} 1 & x_{1}\\1 & x_{2}\\\vdots & \vdots \\1 & x_{m}\end{bmatrix}\begin{bmatrix}\theta_{0}\\\theta_{1}\end{bmatrix}$$
$$y = X\theta$$
where $y \in \mathbb{R}^m$, $X \in \mathbb{R}^{m\times2}$ and $\theta\in\mathbb{R}^2$. Cost Function (Mean squared error)
$$ J(\theta_{0},\theta_{1})=\frac{1}{2m}\sum_{i=1}^{m}(\hat{y_{i}}−y_{i})^{2} = \frac{1}{2m}\sum_{i=1}^{m}(h_{\theta}(x_{i})−y_{i})^{2} $$
where $m$ is the total number of points (rows) in the dataset, so
$$ J(\theta_{0},\theta_{1})=\frac{1}{2m}\cdot \text{sum}\Bigg(\begin{bmatrix} (\hat{y_{1}} - y_1)^2 \\(\hat{y_{2}}-y_2)^2\\\vdots \\(\hat{y_{m}}-y_m)^2\end{bmatrix} \Bigg) $$
Writing this in terms of the dataset we have gives
$$ J(\theta_{0},\theta_{1}) = \frac{1}{2m}\cdot \text{sum}\Bigg(\text{square_each_row}\Bigg( \begin{bmatrix} 1 & x_{1}\\1 & x_{2}\\\vdots & \vdots \\1 & x_{m}\end{bmatrix}\begin{bmatrix}\theta_{0}\\\theta_{1}\end{bmatrix} - \begin{bmatrix} y_{1}\\y_{2}\\\vdots \\y_{m}\end{bmatrix}\Bigg)\Bigg)$$
$$ J(\theta) = \frac{1}{2m}(X\theta -y)^{T}(X\theta -y)$$
Notice that once we substitute $X$ and $y$ (the values from the dataset), the function $J(\theta)$ depends only on the value of $\theta$; that is exactly what gradient descent varies in order to find $\text{min}\big(J(\theta)\big)$. Writing a function that computes the cost gives:
###Code
import numpy as np
def computeCost(X,y,theta):
# COMPUTECOST Compute cost for linear regression
# J = COMPUTECOST(X, y, theta) computes the cost of using theta as the
# parameter for linear regression to fit the data points in X and y
X = np.array(X)
y = np.array(y)
theta = np.array(theta)
# Initialize some useful values
m = len(y) # number of training examples
J = 0
yhat = X.dot(theta)
deltaY = yhat-y
sqrDeltaY = deltaY*deltaY
J = sum(sqrDeltaY)/(2*m)
return J
###Output
_____no_output_____
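###Markdown
The same cost can also be computed in fully vectorized form, straight from $J(\theta) = \frac{1}{2m}(X\theta -y)^{T}(X\theta -y)$ derived above (an optional sketch, not required for the rest of the notebook):
###Code
def computeCostVectorized(X, y, theta):
    # J(theta) = 1/(2m) * (X*theta - y)^T (X*theta - y)
    X = np.array(X)
    y = np.array(y)
    theta = np.array(theta)
    m = len(y)
    residual = X.dot(theta) - y
    return residual.dot(residual) / (2 * m)
###Output
_____no_output_____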
###Markdown
Using the data in the file `ex1data1.txt`, setting $\theta = \begin{bmatrix} \theta_0 \\ \theta_1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$ gives the following value of the cost function (it should be about 32.07):
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
data1 = pd.read_csv('programing/machine-learning-ex1/ex1/ex1data1.txt',names=['Population','Profit of food truck'])
y = data1.values[:,1] # --> y variable
x = data1.values[:,0]
ones = np.ones(len(x))
X = np.stack((ones,x)).transpose() # --> X variable
theta = np.array([0,0]) # --> theta variable
J = computeCost(X,y,theta)
print(J)
###Output
32.0727338775
###Markdown
Gradient Descent
$$ \theta_{j} := \theta_{j} − \alpha \frac{∂J(\theta_{0},\theta_{1})}{∂\theta_{j}} $$
where $j=0,1$ represents the feature index number. For linear regression with one variable this becomes
$$ \theta_{0} := \theta_{0}− \alpha\frac{1}{m}\sum_{i=1}^{m}(h_{\theta}(x_{i})−y_{i}) $$
$$ \theta_{1} := \theta_{1}− \alpha\frac{1}{m}\sum_{i=1}^{m}((h_{\theta}(x_{i})−y_{i})x_{i}) $$
Gradient descent starts by evaluating $J(\theta)$ (the cost function) at some initial $\theta$ and then iterates, repeatedly adjusting $\theta$ until it finds the $\theta$ that minimizes $J(\theta)$. The gradient descent function therefore needs the following inputs: 1. the inputs used by `computeCost(X,y,theta)`, 2. $\alpha$ from the equations above, used to adjust $\theta$, 3. the number of iterations.
###Code
import numpy as np
def gradientDescent(X, y, theta, alpha, num_iters):
# GRADIENTDESCENT Performs gradient descent to learn theta
# theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by
# taking num_iters gradient steps with learning rate alpha
# Initialize some useful values
m = len(y) # number of training examples
J_history = np.zeros(num_iters)
n = len(theta)
for i in range(num_iters):
yhat = X.dot(theta)
deltaY = yhat-y
new_theta = np.zeros(n)
for j in range(n):
new_theta[j] = theta[j] - (alpha/m)*sum(deltaY*X[:,j])
J_history[i] = computeCost(X, y, theta)
theta = new_theta
return [theta, J_history]
###Output
_____no_output_____
###Markdown
Using the data in the file `ex1data1.txt`, starting gradient descent at $\begin{bmatrix} \theta_0 \\ \theta_1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$ with $\alpha = 0.01$ and 50 iterations gives:
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
data1 = pd.read_csv('programing/machine-learning-ex1/ex1/ex1data1.txt',names=['Population','Profit of food truck'])
y = data1.values[:,1] # --> y variable
x = data1.values[:,0]
ones = np.ones(len(x))
X = np.stack((ones,x)).transpose() # --> X variable
theta = np.array([0,0]) # --> theta variable
alpha = 0.01 # --> alpha variable
num_iters = 50
theta_with_J = gradientDescent(X,y,theta,alpha,num_iters)
###Output
_____no_output_____
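###Markdown
The learned parameters can be used to draw the fitted line over the data (a small optional sketch; with only 50 iterations from a zero start the fit will still be rough):
###Code
theta_final = theta_with_J[0]
plt.scatter(x, y)
plt.plot(x, X.dot(theta_final), color='red')  # h_theta(x) = theta0 + theta1*x
plt.xlabel('Population of City in 10,000s')
plt.ylabel('Profit in $10,000s')
plt.show()
###Output
_____no_output_____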
###Markdown
Let's check how the cost function values returned by gradient descent behave as the number of iterations increases.
###Code
import matplotlib.pyplot as plt
Jtheta = theta_with_J[1]
plt.plot(Jtheta)
plt.xlabel('Number of iterations')
plt.ylabel('Cost J')
plt.show()
###Output
_____no_output_____
###Markdown
Trying $\alpha = 0.001$ gives:
###Code
alpha = 0.001 # --> alpha variable
theta_with_J = gradientDescent(X,y,theta,alpha,num_iters)
Jtheta = theta_with_J[1]
plt.plot(Jtheta)
plt.xlabel('Number of iterations')
plt.ylabel('Cost J')
plt.show()
###Output
_____no_output_____
###Markdown
Trying $\alpha = 0.03$ gives:
###Code
alpha = 0.03 # --> alpha variable
theta_with_J = gradientDescent(X,y,theta,alpha,num_iters)
Jtheta = theta_with_J[1]
plt.plot(Jtheta)
plt.xlabel('Number of iterations')
plt.ylabel('Cost J')
plt.show()
###Output
_____no_output_____
###Markdown
If $\alpha$ is chosen too large, gradient descent diverges. Let's also draw a contour plot (the x and y axes are $\theta_0$ and $\theta_1$, the z axis is $J(\theta)$); here $\theta_0$ runs from -10 to 10 and $\theta_1$ from -1 to 4, in steps of 0.01:
###Code
import numpy as np
import matplotlib
import matplotlib.cm as cm
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
def contourOfCostFunc(theta0min,theta0max,theta1min,theta1max,step):
theta0 = np.arange(theta0min,theta0max,step)
theta1 = np.arange(theta1min,theta1max,step)
# initialize J vals to a matrix of 0's
J_vals = np.zeros([len(theta0),len(theta1)])
# Fill out J vals
for i in range(len(theta0)):
for j in range(len(theta1)):
thetas = np.array([theta0[i],theta1[j]])
J_vals[i,j] = computeCost(X, y, thetas)
CS = plt.contour(theta0,theta1, J_vals.transpose(), 25, linewidths=0.5, colors='k')
CS = plt.contourf(theta0, theta1, J_vals.transpose(), 25,vmax=abs(J_vals.transpose()).max(), vmin=-abs(J_vals.transpose()).max())
plt.colorbar() # draw colorbar
plt.title('Contour, Showing minimum')
plt.xlabel('Theta0')
plt.ylabel('Theta1')
plt.show()
contourOfCostFunc(-10,10,-1,4,0.01)
###Output
_____no_output_____ |
notebooks/exp183_analysis.ipynb | ###Markdown
Exp 183 analysis. See `./informercial/Makefile` for experimental details.
###Code
import os
import numpy as np
from IPython.display import Image
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set_style('ticks')
matplotlib.rcParams.update({'font.size': 16})
matplotlib.rc('axes', titlesize=16)
from infomercial.exp import meta_bandit
from infomercial.exp import epsilon_bandit
from infomercial.exp import beta_bandit
from infomercial.exp import softbeta_bandit
from infomercial.local_gym import bandit
from infomercial.exp.meta_bandit import load_checkpoint
import gym
def plot_beta(env_name, result):
"""Plots!"""
# episodes, actions, scores_E, scores_R, values_E, values_R, ties, policies
episodes = result["episodes"]
actions =result["actions"]
bests =result["p_bests"]
scores_R = result["scores_R"]
values_R = result["values_R"]
beta = result["beta"]
# -
env = gym.make(env_name)
best = env.best
print(f"Best arm: {best}, last arm: {actions[-1]}")
# Plotz
fig = plt.figure(figsize=(6, 14))
grid = plt.GridSpec(6, 1, wspace=0.3, hspace=0.8)
# Arm
plt.subplot(grid[0, 0])
plt.scatter(episodes, actions, color="black", alpha=.5, s=2, label="Bandit")
plt.plot(episodes, np.repeat(best, np.max(episodes)+1),
color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylim(-.1, np.max(actions)+1.1)
plt.ylabel("Arm choice")
plt.xlabel("Episode")
# score
plt.subplot(grid[1, 0])
plt.scatter(episodes, scores_R, color="grey", alpha=0.4, s=2, label="R")
plt.ylabel("Score")
plt.xlabel("Episode")
# plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Q
plt.subplot(grid[2, 0])
plt.scatter(episodes, values_R, color="grey", alpha=0.4, s=2, label="$Q_R$")
plt.ylabel("Value")
plt.xlabel("Episode")
# plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# best
plt.subplot(grid[3, 0])
plt.scatter(episodes, bests, color="red", alpha=.5, s=2)
plt.ylabel("p(best)")
plt.xlabel("Episode")
plt.ylim(0, 1)
def plot_critic(critic_name, env_name, result):
# -
env = gym.make(env_name)
best = env.best
# Data
critic = result[critic_name]
arms = list(critic.keys())
values = list(critic.values())
# Plotz
fig = plt.figure(figsize=(8, 3))
grid = plt.GridSpec(1, 1, wspace=0.3, hspace=0.8)
# Arm
plt.subplot(grid[0])
plt.scatter(arms, values, color="black", alpha=.5, s=30)
plt.plot([best]*10, np.linspace(min(values), max(values), 10), color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylabel("Value")
plt.xlabel("Arm")
###Output
_____no_output_____
###Markdown
Load and process data
###Code
data_path ="/Users/qualia/Code/infomercial/data/"
exp_name = "exp183"
sorted_params = load_checkpoint(os.path.join(data_path, f"{exp_name}_sorted.pkl"))
# print(sorted_params.keys())
best_params = sorted_params[0]
beta = best_params['beta']
sorted_params
print(best_params)
###Output
{'beta': 0.12542341214509153, 'lr_R': 0.17413938714012997, 'temp': 0.0811714128233918, 'total_R': 37218.0}
###Markdown
Performance of best parameters
###Code
env_name = 'BanditUniform121-v0'
num_episodes = 60500
# Run w/ best params
result = softbeta_bandit(
env_name=env_name,
num_episodes=num_episodes,
lr_R=best_params["lr_R"],
beta=best_params["beta"],
temp=best_params["temp"],
seed_value=2,
)
print(best_params)
plot_beta(env_name, result=result)
plot_critic('critic', env_name, result)
###Output
_____no_output_____
###Markdown
Sensitivity to parameter choices
###Code
total_Rs = []
betas = []
lrs_R = []
temps = []
trials = list(sorted_params.keys())
for t in trials:
total_Rs.append(sorted_params[t]['total_R'])
lrs_R.append(sorted_params[t]['lr_R'])
betas.append(sorted_params[t]['beta'])
temps.append(sorted_params[t]['temp'])
# Init plot
fig = plt.figure(figsize=(5, 18))
grid = plt.GridSpec(6, 1, wspace=0.3, hspace=0.8)
# Do plots:
# Arm
plt.subplot(grid[0, 0])
plt.scatter(trials, total_Rs, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("Sorted params")
plt.ylabel("total R")
_ = sns.despine()
plt.subplot(grid[1, 0])
plt.scatter(trials, lrs_R, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("Sorted params")
plt.ylabel("lr_R")
_ = sns.despine()
plt.subplot(grid[2, 0])
plt.scatter(lrs_R, total_Rs, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("lrs_R")
plt.ylabel("total_Rs")
_ = sns.despine()
plt.subplot(grid[3, 0])
plt.scatter(betas, total_Rs, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("beta")
plt.ylabel("total_Rs")
_ = sns.despine()
plt.subplot(grid[4, 0])
plt.scatter(temps, total_Rs, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("temp")
plt.ylabel("total_Rs")
_ = sns.despine()
###Output
_____no_output_____
###Markdown
Distributions of parameters
###Code
# Init plot
fig = plt.figure(figsize=(5, 6))
grid = plt.GridSpec(3, 1, wspace=0.3, hspace=0.8)
plt.subplot(grid[0, 0])
plt.hist(betas, color="black")
plt.xlabel("beta")
plt.ylabel("Count")
_ = sns.despine()
plt.subplot(grid[1, 0])
plt.hist(lrs_R, color="black")
plt.xlabel("lr_R")
plt.ylabel("Count")
_ = sns.despine()
###Output
_____no_output_____
###Markdown
Distribution of total reward
###Code
# Init plot
fig = plt.figure(figsize=(5, 2))
grid = plt.GridSpec(1, 1, wspace=0.3, hspace=0.8)
plt.subplot(grid[0, 0])
plt.hist(total_Rs, color="black", bins=50)
plt.xlabel("Total reward")
plt.ylabel("Count")
# plt.xlim(0, 10)
_ = sns.despine()
###Output
_____no_output_____ |
notebooks/data-science/1-2-matplotlib.ipynb | ###Markdown
Matplotlib Hello World* standard library for plotting* offers simple, procedural, state-based MATLAB-like way of plotting * intended for interactive plots and * simple cases of programmatic plot generation https://matplotlib.org/api/_as_gen/matplotlib.pyplot.html Things to note on matplotlib* all the commands in one cell will result in a single plot
###Code
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Plotting data as lines
###Code
# tricky API, but very flexible
plt.plot?
x = [1, 2, 3, 4]
y = [10, 20, 25, 30]
plt.plot(x, y)
# be careful when plotting from point to point, first parameter is all the xs and second all the ys, this is not pairs
plt.plot([-1, 1], [-1, 1], 'ro-')
###Output
_____no_output_____
###Markdown
What can you pass as data?* all plotting functions expect np.array* array-like data structures might work* safest to explicitely convert everything to np.array* Pandas Dataframe: df.values Plotting as scattered points
###Code
plt.scatter([1, 2, 3, 4], [10, 20, 25, 30], color='red')
x = np.random.normal(size=500)
y = np.random.normal(size=500)
plt.scatter(x, y);
###Output
_____no_output_____
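###Markdown
Tying back to the note above about array-like data: when plotting from a pandas DataFrame it is safest to hand matplotlib plain NumPy arrays, e.g. via `.values`. A tiny sketch with a made-up DataFrame:
###Code
import pandas as pd

df = pd.DataFrame({'x': [1, 2, 3, 4], 'y': [10, 20, 25, 30]})
# explicit conversion to np.array via .values (or .to_numpy())
plt.plot(df['x'].values, df['y'].values, 'go-')
###Output
_____no_output_____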
###Markdown
Exercise Create a scatter plot that has 4 data points* freely choose the four data points* they shall be displayed as triangles* zoom into the plot so that you only see the two bottom left values
###Code
# matplotlib.markers?
# plt.scatter?
# plt.xlim?
# plt.ylim?
###Output
_____no_output_____
###Markdown
STOP HERE...---...---...---...---...---...---...
###Code
plt.scatter([0.3, 3.8, 1.2, 2.5], [11, 25, 9, 26], marker='v')
plt.xlim(0, 2)
plt.ylim(0, 50)
# plt.yscale('log')
###Output
_____no_output_____ |
notebooks/Restaurant Data with Consumer Ratings y1s.ipynb | ###Markdown
Here is my PDP (partial dependence) plot, based on the example from the website.
###Code
from pdpbox.pdp import pdp_isolate, pdp_plot
import matplotlib.pyplot as plt
feature = 'height'
plt.rcParams['figure.dpi'] = 300
isolated = pdp_isolate(
model=pipeline1.best_estimator_,
dataset=X,
model_features=X.columns,
feature=feature
)
pdp_plot(isolated, feature_name=feature)
###Output
_____no_output_____ |
define_a_function.ipynb | ###Markdown
define_a
###Code
# default_exp define_a
# export
def double_it(it):
return it * 2
# export
def halve_it(it):
return it / 2
from fastcore.test import all_equal, test
test(["a", "b"], ["a", "b"], all_equal)
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_core.ipynb.
Converted define_a_function.ipynb.
Converted index.ipynb.
Converted use_a_function.ipynb.
|
course-exercises/week2-Linear-regression/Predicting-house-prices.ipynb | ###Markdown
Launch Turi Create
###Code
import turicreate
###Output
_____no_output_____
###Markdown
Load house sales data
###Code
sales = turicreate.SFrame('../../data/home_data.sframe')
sales
###Output
_____no_output_____
###Markdown
Explore
###Code
sales.show()
###Output
_____no_output_____
###Markdown
Take the sales data, select only the houses with the Seattle zip code (98039), and compute the average price
###Code
filtered = sales.filter_by('98039', 'zipcode')
filtered
average_price = filtered['price'].mean()
average_price
###Output
_____no_output_____
###Markdown
Select the houses that have ‘sqft_living’ higher than 2000 sqft but no larger than 4000 sqft
###Code
filtered_houses = sales[(sales['sqft_living'] > 2000) & (sales['sqft_living'] < 4000)]
###Output
_____no_output_____
###Markdown
What fraction of all the houses have ‘sqft_living’ in this range?
###Code
len(sales)
len(filtered_houses)
(len(filtered_houses) / len(sales)) * 100
###Output
_____no_output_____
###Markdown
Building a regression model with several more features
###Code
advanced_features = [
'bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode',
'condition', # condition of house
'grade', # measure of quality of construction
'waterfront', # waterfront property
'view', # type of view
'sqft_above', # square feet above ground
'sqft_basement', # square feet in basement
'yr_built', # the year built
'yr_renovated', # the year renovated
'lat', 'long', # the lat-long of the parcel
'sqft_living15', # average sq.ft. of 15 nearest neighbors
'sqft_lot15', # average lot size of 15 nearest neighbors
]
training_set, test_set = sales.random_split(.8,seed=0)
advanced_features_model = turicreate.linear_regression.create(training_set,target='price',features=advanced_features,validation_set=None)
###Output
_____no_output_____
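###Markdown
The evaluation below also references `my_features_model`, which was trained earlier in the course on a smaller feature set. If that cell is not available, a minimal sketch along these lines recreates it; note that this exact feature list is an assumption based on the course material:
###Code
my_features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode']
my_features_model = turicreate.linear_regression.create(training_set, target='price',
                                                        features=my_features, validation_set=None)
###Output
_____no_output_____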
###Markdown
Compute the RMSE (root mean squared error) on the test_data for the model using just my_features, and for the one using advanced_features.
###Code
print (my_features_model.evaluate(test_set))
print (advanced_features_model.evaluate(test_set))
###Output
{'max_error': 3152242.784868988, 'rmse': 180439.07296640595}
{'max_error': 3170363.181382781, 'rmse': 155269.6579279753}
###Markdown
What is the difference in RMSE between the model trained with my_features and the one trained with advanced_features?
###Code
180439.07296640595-155269.6579279753
###Output
_____no_output_____ |
notebooks/expansion.ipynb | ###Markdown
Automated Patent LandscapingThis notebook walks through the process of creating a patent landscape as described in the paper [Automated Patent Landscaping](AutomatedPatentLandscaping_2018Update.pdf) (Abood, Feltenberger 2016). The basic outline is:* load pre-trained word embeddings* load a seed set of patents and generate the positive and negative training data* train a deep neural network on the data* show how to do inference using the modelIf you haven't already, please make sure you've setup an environment following the instructions in the [README](README.md) so you have all the necessary dependencies.Copyright 2017 Google Inc.Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License. Basic Configuration
###Code
import tensorflow as tf
import pandas as pd
import os
#seed_name = 'hair_dryer'
#seed_name = 'video_codec'
seed_name = "office_furniture"
###Output
_____no_output_____
###Markdown
Download Embedding Model if Necessary. We provide a pre-trained word2vec word embedding model that we trained on 5.9 million patent abstracts. See also `word2vec.py` in this repo if you'd like to train your own (though gensim is likely an easier path). The code below will download the model from Google Cloud Storage (GCS) and store it on the local filesystem, or simply load it from local disk if it's already present.
###Code
from fiz_lernmodule.word2vec import Word2VecReader
w2v_loader = Word2VecReader(src_dir=src_dir)
w2v_model = w2v_loader.load_word_embeddings()
###Output
Load mappings from .\5.9m\vocab\vocab.csv
Load config from .\5.9m\vocab\config.csv
INFO:tensorflow:Restoring parameters from .\5.9m\checkpoints\5.9m_abstracts.ckpt-1325000
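###Markdown
If the checkpoint files are not present locally, they can be fetched from the `patent_landscapes` GCS bucket referenced below. A rough sketch using the `google-cloud-storage` client; the `models/5.9m/` prefix, the local target directory and anonymous read access are all assumptions, so adjust them to your setup:
###Code
from google.cloud import storage

def download_model_if_missing(local_dir='models/5.9m', prefix='models/5.9m'):
    """Download every blob under `prefix` from the patent_landscapes bucket into `local_dir`."""
    if os.path.exists(local_dir):
        return
    client = storage.Client.create_anonymous_client()
    bucket = client.bucket('patent_landscapes')
    for blob in bucket.list_blobs(prefix=prefix):
        target = os.path.join(local_dir, os.path.relpath(blob.name, prefix))
        os.makedirs(os.path.dirname(target), exist_ok=True)
        blob.download_to_filename(target)

# download_model_if_missing()
###Output
_____no_output_____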
###Markdown
Load Word2Vec Embeddings. This loads Word2Vec embeddings from a model trained on 5.9 million patent abstracts. Just as a demonstration, this also finds the k most similar words to a given word, ranked by closeness in the embedding space. Finally, we use tSNE to visualize the word closeness in 2-dimensional space. Note that the actual model files are fairly large (e.g., the 5.9m dataset is 760mb per checkpoint), so they are not stored in the Github repository. They're stored in the [patent_landscapes](https://console.cloud.google.com/storage/browser/patent_landscapes) Google Cloud Storage bucket under the models/ folder. If you'd like to use them, download the `models` folder and put it into the root repository folder (e.g., if you checked out this repository into the `patent-models` folder, then the `5.9m` model should be in `patent-models/models/5.9m`).
###Code
w2v_model.find_similar('codec', 10)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
w2v_model.visualize_embeddings("codec", 100)
###Output
_____no_output_____
###Markdown
Patent Landscape ExpansionThis section of the notebook creates an instance of the `PatentLandscapeExpander`, which accesses a BigQuery table of patent data to do the expansion of a provided seed set and produces each expansion level as well as the final training dataset as a Pandas dataframe. This does the actual expansion and displays the head of the final training data dataframe.
###Code
from fiz_lernmodule import data_reader
landscape_reader = data_reader.LandscapeDataReader(src_dir=src_dir)
training_data_full_df = landscape_reader.load_data(seed_name=seed_name)
training_df = training_data_full_df[
['publication_number', 'title_text', 'abstract_text', 'ExpansionLevel', 'refs', 'cpcs']]
training_data_full_df.head()
###Output
Loading landscape data from BigQuery.
###Markdown
Show some stats about the landscape training data
###Code
print('Seed/Positive examples:')
print(training_df[training_df.ExpansionLevel == 'Seed'].count())
print('\n\nAnti-Seed/Negative examples:')
print(training_df[training_df.ExpansionLevel == 'AntiSeed'].count())
###Output
Seed/Positive examples:
publication_number 1732
title_text 1732
abstract_text 1732
ExpansionLevel 1732
refs 1732
cpcs 1732
dtype: int64
Anti-Seed/Negative examples:
publication_number 9398
title_text 9398
abstract_text 9398
ExpansionLevel 9398
refs 9398
cpcs 9398
dtype: int64
###Markdown
Preparing / Transforming Training Data. The following takes the input landscape training dataframe and transforms it into a format readable by TensorFlow and Keras.
###Code
### Concatenating title and abstract, stemming and filtering out stopwords
from fiz_lernmodule.preprocessing import PreProcessor
pre = PreProcessor()
training_df["text_prep"] = training_df["title_text"].str.cat(training_df[["abstract_text"]], sep=" ")
#training_df['text_prep'] = training_df['text_prep'].map(lambda t: pre.preprocess_text(t))
training_df.head()
import fiz_lernmodule.data_preparation
data_prep = fiz_lernmodule.data_preparation.DataPreparation(training_df, w2v_model)
data_prep.prepare_data(percent_train = 0.8, text_column = "text_prep")
data_prep.show_instance_details(3)
###Output
Original:
piezoelectric film-attached substrate piezoelectric film element manufacturing provided piezoelectric film-attached substrate piezoelectric film thickness reflection spectrum relation light surface piezoelectric film irradiated irradiation light wavelength irradiation light reflected surface piezoelectric film light irradiation light transmitted piezoelectric film reflected surface electrode reflection spectrum point center part outer peripheral part piezoelectric film reflection spectrum maximum value minimum value reflectance maximum value 0 4
Tokenized:
piezoelectric film attached substrate piezoelectric film element manufacturing provided piezoelectric film attached substrate piezoelectric film thickness reflection spectrum relation light surface piezoelectric film irradiated irradiation light wavelength irradiation light reflected surface piezoelectric film light irradiation light transmitted piezoelectric film reflected surface electrode reflection spectrum point center part outer peripheral part piezoelectric film reflection spectrum maximum value minimum value reflectance maximum value _NUMBER_ _NUMBER_
TextAsEmbeddingIdx:
[1580, 131, 246, 76, 1580, 131, 73, 684, 36, 1580, 131, 246, 76, 1580, 131, 596, 1807, 1682, 974, 58, 34, 1580, 131, 3468, 2807, 58, 919, 2807, 58, 1230, 34, 1580, 131, 58, 2807, 58, 526, 1580, 131, 1230, 34, 148, 1807, 1682, 239, 407, 118, 150, 627, 118, 1580, 131, 1807, 1682, 791, 137, 1201, 137, 5145, 791, 137, 7, 7]
LabelAsIdx:
1
Label:
antiseed
###Markdown
Show some sample training data Train ModelThe following cells specify hyperparameters, the neural network architecture (using Keras) and actually trains and tests the model.The model is generally composed of:* sequential word embeddings from the patent Abstract of each training instance* references one-hot encoded into a fully-connected layer* CPC codes one-hot encoded into a fully-connected layer* the above three layers concatenated into a final fully-connected layer* a final, single sigmoid layer with the classification result Model Hyperparameters
###Code
nn_config = {
'batch_size': 64,
'dropout': 0.4,
'num_epochs': 4,
'lstm_size': 64
}
###Output
_____no_output_____
###Markdown
Build the Deep Neural Network
###Code
import fiz_lernmodule.model
model = fiz_lernmodule.model.LandscapingModel(data_prep, src_dir, seed_name, nn_config)
model.set_up_model_architecture()
###Output
Done building graph.
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
refs_input (InputLayer) (None, 50000) 0
__________________________________________________________________________________________________
dense_1 (Dense) (None, 256) 12800256 refs_input[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 256) 0 dense_1[0][0]
__________________________________________________________________________________________________
embed_input (InputLayer) (None, 214) 0
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 256) 1024 dropout_1[0][0]
__________________________________________________________________________________________________
embedding_1 (Embedding) (None, 214, 300) 33072000 embed_input[0][0]
__________________________________________________________________________________________________
elu_1 (ELU) (None, 256) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
LSTM_1 (LSTM) (None, 64) 93440 embedding_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 64) 16448 elu_1[0][0]
__________________________________________________________________________________________________
dense_4 (Dense) (None, 300) 19500 LSTM_1[0][0]
__________________________________________________________________________________________________
dropout_2 (Dropout) (None, 64) 0 dense_2[0][0]
__________________________________________________________________________________________________
dropout_4 (Dropout) (None, 300) 0 dense_4[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 64) 256 dropout_2[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 300) 1200 dropout_4[0][0]
__________________________________________________________________________________________________
elu_2 (ELU) (None, 64) 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
elu_4 (ELU) (None, 300) 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
concatenated_layer (Concatenate (None, 364) 0 elu_2[0][0]
elu_4[0][0]
__________________________________________________________________________________________________
dense_5 (Dense) (None, 64) 23360 concatenated_layer[0][0]
__________________________________________________________________________________________________
dropout_5 (Dropout) (None, 64) 0 dense_5[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 64) 256 dropout_5[0][0]
__________________________________________________________________________________________________
elu_5 (ELU) (None, 64) 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
dense_6 (Dense) (None, 1) 65 elu_5[0][0]
==================================================================================================
Total params: 46,027,805
Trainable params: 12,954,437
Non-trainable params: 33,073,368
__________________________________________________________________________________________________
None
###Markdown
Train / Fit the Network
###Code
model.train_or_load_model("load")
###Output
Model has not been trained yet.
Training model.
Train on 8745 samples, validate on 2187 samples
Epoch 1/4
8745/8745 [==============================] - 58s 7ms/step - loss: 0.1910 - acc: 0.9292 - precision: 0.9891 - recall: 0.9273 - fmeasure: 0.9527 - val_loss: 0.0389 - val_acc: 0.9913 - val_precision: 0.9946 - val_recall: 0.9950 - val_fmeasure: 0.9948
Epoch 2/4
8745/8745 [==============================] - 58s 7ms/step - loss: 0.0632 - acc: 0.9835 - precision: 0.9938 - recall: 0.9871 - fmeasure: 0.9903 - val_loss: 0.0288 - val_acc: 0.9945 - val_precision: 0.9957 - val_recall: 0.9979 - val_fmeasure: 0.9967
Epoch 3/4
8745/8745 [==============================] - 66s 7ms/step - loss: 0.0346 - acc: 0.9906 - precision: 0.9935 - recall: 0.9957 - fmeasure: 0.9945 - val_loss: 0.0264 - val_acc: 0.9945 - val_precision: 0.9968 - val_recall: 0.9968 - val_fmeasure: 0.9968
Epoch 4/4
8745/8745 [==============================] - 62s 7ms/step - loss: 0.0232 - acc: 0.9939 - precision: 0.9962 - recall: 0.9968 - fmeasure: 0.9965 - val_loss: 0.0300 - val_acc: 0.9936 - val_precision: 0.9926 - val_recall: 1.0000 - val_fmeasure: 0.9962
Saving model to .\data\video_codec\model.pb
Model persisted and ready for inference!
###Markdown
Evaluate the Model on the Test Set. Now it is getting interesting: we trained our model and want to evaluate whether it achieves an acceptable performance or not. Therefore, we will first check the size of our test set.
###Code
model.get_test_size()
###Output
_____no_output_____
###Markdown
Subsequently, we let our model predict the classes (Seed, AntiSeed) of these samples.
###Code
prediction = model.predict()
###Output
_____no_output_____
###Markdown
Confusion matrices are a handy tool for comparing a model's predictions with the actual labels of the dataset. Hence, we will use such a matrix to visualize our results.
###Code
from fiz_lernmodule.confusion import ConfusionMatrix
conf = ConfusionMatrix(model.data.testY, prediction)
conf.plot_matrix(labels=["Seed", "AntiSeed"])
###Output
_____no_output_____
###Markdown
The confusion matrix is the basis for calculating metrics like accuracy, recall, precision and F1 measure. Before we derive these metrics, we will sample a single example from our test set.
###Code
model.get_test_sample()
###Output
Index: 6765
Original:
secure over-the-air registration cordless telephones registration portable unit utilized communication system network controller data base storing portable identification numbers base station portable unit subscriber communicates network controller information set subscriber qualifying information portable identification number key code entered portable subscriber entered link identification number over-the-air registration memory portable unit registration steps portable unit sends base station request registration request registration link identification number over-the-air registration portable identification number base station receives request registration portable unit sends network controller notice request registration portable identification number network controller receives notice request registration base station determines portable identification number network controller data base subscriber approved registration network controller sends portable unit base station registration information signal network controller determines portable identification number over-the-air registration network controller data base subscriber approved registration registration information signal encrypted secret subscriber identification number encrypted key code unencrypted link identification number base station access
Tokenized:
secure over the air registration cordless telephones registration portable unit utilized communication system network controller data base storing portable identification numbers base station portable unit subscriber communicates network controller information set subscriber qualifying information portable identification number key code entered portable subscriber entered link identification number over the air registration memory portable unit registration steps portable unit sends base station request registration request registration link identification number over the air registration portable identification number base station receives request registration portable unit sends network controller notice request registration portable identification number network controller receives notice request registration base station determines portable identification number network controller data base subscriber approved registration network controller sends portable unit base station registration information signal network controller determines portable identification number over the air registration network controller data base subscriber approved registration registration information signal encrypted secret subscriber identification number encrypted key code unencrypted link identification number base station access
TextAsEmbeddingIdx:
[1193, 154, 0, 108, 2333, 8276, 8281, 2333, 996, 49, 1205, 163, 27, 145, 276, 29, 87, 638, 996, 921, 2105, 87, 368, 996, 49, 1566, 2061, 145, 276, 64, 143, 1566, 14122, 64, 996, 921, 174, 586, 443, 3339, 996, 1566, 3339, 649, 921, 174, 154, 0, 108, 2333, 100, 996, 49, 2333, 467, 996, 49, 2357, 87, 368, 502, 2333, 502, 2333, 649, 921, 174, 154, 0, 108, 2333, 996, 921, 174, 87, 368, 541, 502, 2333, 996, 49, 2357, 145, 276, 9619, 502, 2333, 996, 921, 174, 145, 276, 541, 9619, 502, 2333, 87, 368, 859, 996, 921, 174, 145, 276, 29, 87, 1566, 9893, 2333, 145, 276, 2357, 996, 49, 87, 368, 2333, 64, 32, 145, 276, 859, 996, 921, 174, 154, 0, 108, 2333, 145, 276, 29, 87, 1566, 9893, 2333, 2333, 64, 32, 2679, 5198, 1566, 921, 174, 2679, 586, 443, 15144, 649, 921, 174, 87, 368, 267]
LabelAsIdx:
1
Label:
antiseed
PredictionAsIdx:
1
Prediction:
antiseed
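###Markdown
The counts in the confusion matrix translate directly into the usual metrics. A small sketch (it assumes `prediction` holds hard class indices, as passed to `ConfusionMatrix` above; if it holds probabilities, threshold them at 0.5 first):
###Code
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = model.data.testY
y_pred = prediction
print('accuracy :', accuracy_score(y_true, y_pred))
print('precision:', precision_score(y_true, y_pred))
print('recall   :', recall_score(y_true, y_pred))
print('f1       :', f1_score(y_true, y_pred))
###Output
_____no_output_____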
###Markdown
Learning curve
###Code
from fiz_lernmodule import visualization_landscaping
if hasattr(model.tf_model, 'history'):
lcv = visualization_landscaping.LearningCurveVisualizer(model.tf_model)
lcv.plot_metrics('acc', 'precision', 'recall', 'fmeasure')
if hasattr(model.tf_model, 'history'):
lcv.plot_metrics('val_loss', 'loss')
###Output
_____no_output_____
###Markdown
Generate Document Embeddings. This tutorial referred to patent landscaping as the process of finding patents related to a certain topic. We tackled this task by using a deep neural network as a classifier to distinguish between two classes (seed and antiseed). The resulting classification for a given patent reflects its membership in the topic of the seed set. Nevertheless, this classification only assigns a label to a pre-processed patent. But wouldn't it be nice to actually visualize the patent landscape? The activations of our neural network are nothing more than continuous vector representations of our inputs (with fewer dimensions). We use the following cell to extract the activations of our model while predicting the test set. We'll consider two types of such document embeddings: (1) the final layer of our network (64-dimensional), and (2) the average of our pre-trained word embeddings (300-dimensional). Let's see how they differ. One could naively assume that the first option leads to a clearer distinction since it incorporates layers of non-linear transformation on top of the word embeddings, although the dimensionality of this vector is lower.
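As a point of comparison, the second option can be sketched as follows. This is only a rough sketch: the word-vector lookup is an assumed dict-like interface, not the actual `fiz_lernmodule` API, so adapt the lookup to whatever the loaded `w2v_model` exposes.
###Code
import numpy as np

def average_word_embedding(text, lookup, dim=300):
    """Average the embeddings of all known tokens in `text`; zero vector if none are known."""
    vectors = [lookup[token] for token in text.split(' ') if token in lookup]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)

# e.g. avg_embeddings = training_df['text_prep'].map(lambda t: average_word_embedding(t, word_vectors))
###Output
_____no_output_____
###Markdown
The cell below extracts the first option, the final-layer activations, for the test and train splits.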
###Code
final_layer_test = model.get_final_layer_as_document_embedding("test")
final_layer_test.shape
final_layer_train = model.get_final_layer_as_document_embedding("train")
final_layer_train.shape
final_layer = pd.concat([final_layer_train, final_layer_test])
with open(src_dir + "/data/" + seed_name + "/final_layer_embedding.pkl", 'wb') as outfile:
    pickle.dump(final_layer, outfile, protocol=pickle.HIGHEST_PROTOCOL)
###Output
_____no_output_____ |
answers embeddings.ipynb | ###Markdown
Generating wrong answers from a given true answerNone of the white papers I read proposed a way to generate multiple-choice questions. My idea is to use word embeddings to generate answers that are close to the correct answer and its context.
###Code
import gensim
#model = gensim.models.KeyedVectors.load_word2vec_format('data/embeddings/GoogleNews-vectors-negative300.bin', binary=True)
###Output
_____no_output_____
###Markdown
The word2vec dataset bricked my laptop (twice). It seems like a smaller pretrained embedding should suffice.
###Code
from gensim.test.utils import datapath, get_tmpfile
from gensim.models import KeyedVectors
glove_file = datapath('D:\ML\QG\QG\data\embeddings\glove.6B.300d.txt')
tmp_file = get_tmpfile("D:\ML\QG\QG\data\embeddings\word2vec-glove.6B.300d.txt")
# call glove2word2vec script
# default way (through CLI): python -m gensim.scripts.glove2word2vec --input <glove_file> --output <w2v_file>
from gensim.scripts.glove2word2vec import glove2word2vec
glove2word2vec(glove_file, tmp_file)
model = KeyedVectors.load_word2vec_format(tmp_file)
model.most_similar(positive=['koala'], topn=10)
###Output
_____no_output_____
###Markdown
It seems to be working fine. Though what is a probo? OK. After we have found a sentence worthy of being a question and the phrase that is going to be the answer, we can find similar phrases that could fit the sentence. Let's see what we can do with the following sentence. *__Oxygen__ is a chemical element with symbol O and atomic number 8.*
###Code
model.most_similar(positive=['oxygen'], topn=10)
###Output
_____no_output_____
###Markdown
That was easy. Let's try something more difficult. *the oldest portuguese university was first established in **lisbon** before moving to coimbra.*
###Code
model.most_similar(positive=['lisbon'], topn=10)
###Output
_____no_output_____
###Markdown
Seems like we are getting closer to football teams rather than cities that could have had the oldest university in the country. Let's add some more words from the sentence.
###Code
model.most_similar(positive=['lisbon', 'university'], topn=10)
###Output
_____no_output_____
###Markdown
The words now are getting too close to "university". It would be good if we could add more weight to the original answer. I could do this manually by taking the 20 embeddings closest to the original answer and checking whether they also appear in the embedding lists of the combined words. Alternatively, I could extract the original top 20 or even 50 embeddings, add features like co-occurrences with other words, and train a model...
###Code
model.most_similar(positive=['lisbon', 'coimbra'], topn=10)
###Output
_____no_output_____
###Markdown
Using another city really makes a difference and shows some good candidates. I think it'll be a good idea to use a word in the sentence that is closest to the answer. I suspect a couple more problems.
###Code
model.most_similar(positive=['write'], topn=10)
###Output
_____no_output_____
###Markdown
For our problem it would make more sense to work with the stems of the words, so after we gather the closest embeddings we should use stemming to remove the duplicates. Another problem would be answers that are not of the same part of speech. If the correct answer is a verb, the incorrect answers should also be verbs. With **write**, for example, *read*, *publish* and *tell* are good candidates, but *books* could easily be discarded for being a noun. Let's carry on. How about numbers?
###Code
model.most_similar(positive=['1944'], topn=10)
###Output
_____no_output_____
###Markdown
It seems like embeddings for numbers aren't that bad. I think they are better than random numbers or the nearest numbers, at least when an embedding for the number exists. What about names?
###Code
model.most_similar(positive=['bush'], topn=10)
model.most_similar(positive=['euclid'], topn=10)
model.most_similar(positive=['atanasov'], topn=10)
###Output
_____no_output_____
###Markdown
I expected it to be a lot worse. Names attached to a well-known post, like a president or a Greek mathematician, come up pretty easily. But obviously, with less known figures, like a general in a certain battle, it wouldn't work. In that case I think it would be better to train our own embeddings on a more specific dataset. A bigger problem is that some of the answers contain multiple words. Looking at the answers from the SQuAD dataset, most of them are only single words, and then there are those that are not: Some contain digits and some other words: *12 minutes after* *3 to 5 days* Though these could easily be handled by just messing with the digits alone. Some contain an additional describing word: *__chinese__ characters* *__german__ language* *have gained much __knowledge__* In those cases the answers could just be different names of languages and characters. But I think those additional words shouldn't even be in the answer. Some are just long names of institutions: *the william allen white school of journalism and mass communications* Which I guess are similar to regular names of people, for which some special care must be taken, because they rely heavily on context. And some are just... hard: *over 20 years 1/5 of the women changed their sexual identity at least once* Now let's apply some of the proposed techniques. I'll assume we have a sentence and a single word as an answer.
###Code
sentence = 'oxygen is a chemical element with symbol O and atomic number 8.'
answer = 'oxygen'
###Output
_____no_output_____
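###Markdown
Picking up the earlier point about answers that contain digits, here is a small, purely illustrative sketch (not part of the original notebook) of generating distractors for a numeric answer by perturbing the number itself; the spread used is an arbitrary assumption.
###Code
import random

def numeric_distractors(answer, how_many=3):
    """Generate plausible wrong numbers by shifting the true number up or down."""
    value = int(answer)
    # A spread of roughly 10% (but at least `how_many`) keeps the candidates believable.
    spread = max(how_many, int(abs(value) * 0.1))
    distractors = set()
    while len(distractors) < how_many:
        candidate = value + random.randint(-spread, spread)
        if candidate != value:
            distractors.add(candidate)
    return [str(d) for d in distractors]

numeric_distractors('1944')
###Output
_____no_output_____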
###Markdown
StemmingFirst we'll stem the sentence and answer, assuming it hasn't been done already.
###Code
from nltk.stem import WordNetLemmatizer
from nltk.stem import PorterStemmer
stemmer = PorterStemmer()
sentence = stemmer.stem(sentence)
answer = stemmer.stem(answer)
print(sentence)
print(answer)
#Just to check it's working
print(stemmer.stem('writing'))
print(stemmer.stem('Koala'))
print(stemmer.stem('lisbon'))
print(stemmer.stem('amsterdam'))
print(stemmer.stem('portugal'))
###Output
write
koala
lisbon
amsterdam
portug
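###Markdown
As an aside (added here, not in the original notebook): the cell above also imports `WordNetLemmatizer` without using it. Lemmatization is an alternative to stemming that maps words to dictionary forms and avoids truncated stems such as *portug*; a quick, hedged illustration:
###Code
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
# 'writing' lemmatized as a verb, 'countries' with the default noun tag
print(lemmatizer.lemmatize('writing', pos='v'))
print(lemmatizer.lemmatize('countries'))
###Output
_____no_output_____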
###Markdown
Part of speech
###Code
from nltk.corpus import wordnet as wn
words = ['write', 'oxygen', 'lisbon']
for w in words:
tmp = wn.synsets(w)[0].pos()
print (w, ":", tmp)
###Output
write : v
oxygen : n
lisbon : n
###Markdown
Removing stopwords from sentence
###Code
from nltk.corpus import stopwords
word_list = sentence.replace(answer, '').split()
word_list = sentence.replace('.', '').split()
filtered_words = [word for word in word_list if word not in stopwords.words('english')]
print(filtered_words)
###Output
['oxygen', 'chemical', 'element', 'symbol', 'atomic', 'number', '8']
###Markdown
Extracting closest embeddings
###Code
topEmbeddings = model.most_similar(positive=[answer], topn=30)
answerPartOfSpeech = wn.synsets(answer)[0].pos()
embeddings = []
for embeddingIndex in range(len(topEmbeddings)):
#Having a threshold. Word embedding shouldn't be further than 0.45
if topEmbeddings[embeddingIndex][1] > 0.45:
if wn.synsets(topEmbeddings[embeddingIndex][0])[0].pos() == answerPartOfSpeech:
word = stemmer.stem(topEmbeddings[embeddingIndex][0])
#Since we are stemming the embeddings, it's possible for a stem to appear more than once
if word not in embeddings:
embeddings.append(word)
print(embeddings)
#A list holding the occurrences for each stemmed word of the original answer in the embeddings for every other word in the sentence
embeddingsOccurences = [0] * len(embeddings)
for sentenceWordIndex in range(len(filtered_words)):
senteceWordEmbeddings = model.most_similar(positive=[answer, filtered_words[sentenceWordIndex]], topn=30)
stemmedEmbeddings = []
for embeddingIndex in range(len(senteceWordEmbeddings)):
#Having a threshold. Word embedding shouldn't be further than 0.45
if senteceWordEmbeddings[embeddingIndex][1] > 0.45:
word = stemmer.stem(senteceWordEmbeddings[embeddingIndex][0])
#Since we are stemming the embeddings, it's possible for a stem to appear more than once
if word not in stemmedEmbeddings:
stemmedEmbeddings.append(word)
for stemmedEmbeddingIndex in range(len(stemmedEmbeddings)):
#Checking if the embedding is also contained in the embedding of the answer
if stemmedEmbeddings[stemmedEmbeddingIndex] in embeddings:
embeddingIndex = embeddings.index(stemmedEmbeddings[stemmedEmbeddingIndex])
embeddingsOccurences[embeddingIndex]+=1
print(embeddingsOccurences)
combined = list(zip(embeddings, embeddingsOccurences))
sorted(combined, key=lambda x: x[1], reverse=True)
print(embeddings)
###Output
['hydrogen', 'nitrogen', 'helium', 'nutrient', 'breath', 'chlorin', 'monoxid', 'dioxid', 'ammonia', 'carbon', 'liquid', 'hemoglobin', 'tissu', 'vapor', 'respir', 'atom', 'molecul', 'oxid', 'hypoxia', 'sulfur', 'phosphoru', 'photosynthesi']
###Markdown
This seems like a much better ordering.
###Code
sentence = 'the oldest portuguese university was first established in lisbon before moving to coimbra.'
answer = 'lisbon'
#Stemming
stemmer = PorterStemmer()
sentence = stemmer.stem(sentence)
answer = stemmer.stem(answer)
#Removing stopwords, answer and punctuation from sentence
word_list = sentence.replace(answer, '').replace('.', '').split()
filtered_words = [word for word in word_list if word not in stopwords.words('english')]
#Getting what part of speech the answer is
answerPartOfSpeech = wn.synsets(answer)[0].pos()
##Extracting closest embeddings for the answer
topEmbeddings = model.most_similar(positive=[answer], topn=30)
embeddings = []
for embeddingIndex in range(len(topEmbeddings)):
#Having a threshold. Word embedding shouldn't be further than 0.45
if topEmbeddings[embeddingIndex][1] > 0.45:
word = stemmer.stem(topEmbeddings[embeddingIndex][0])
#Removing words that are not of the same part of speech
if wn.synsets(word) != [] and wn.synsets(word)[0].pos() == answerPartOfSpeech:
#Since we are stemming the embeddings, it's possible for a stem to appear more than once
if word not in embeddings:
embeddings.append(word)
#List of occurrences for each stemmed word of the original answer in the embeddings for every other word in the sentence
embeddingsOccurences = [0] * len(embeddings)
for sentenceWordIndex in range(len(filtered_words)):
senteceWordEmbeddings = model.most_similar(positive=[answer, filtered_words[sentenceWordIndex]], topn=30)
stemmedEmbeddings = []
for embeddingIndex in range(len(senteceWordEmbeddings)):
#Having a threshold. Word embedding shouldn't be further than 0.45
if senteceWordEmbeddings[embeddingIndex][1] > 0.45:
word = stemmer.stem(senteceWordEmbeddings[embeddingIndex][0])
#Since we are stemming the embeddings, it's possible for a stem to appear more than once
if word not in stemmedEmbeddings:
stemmedEmbeddings.append(word)
for stemmedEmbeddingIndex in range(len(stemmedEmbeddings)):
#Checking if the embedding is also contained in the embedding of the answer
if stemmedEmbeddings[stemmedEmbeddingIndex] in embeddings:
embeddingIndex = embeddings.index(stemmedEmbeddings[stemmedEmbeddingIndex])
embeddingsOccurences[embeddingIndex]+=1
combined = list(zip(embeddings, embeddingsOccurences))
bestEmbeddings = sorted(combined, key=lambda x: x[1], reverse=True)
print(bestEmbeddings)
print(embeddings)
###Output
[('madrid', 5), ('porto', 4), ('copenhagen', 4), ('oporto', 3), ('braga', 3), ('amsterdam', 2)]
['porto', 'copenhagen', 'madrid', 'oporto', 'amsterdam', 'braga']
|
maxsmi/results_analysis/results_plots.ipynb | ###Markdown
maxsmi Analysis of resultsThis notebook serves to analyse the results of the simulations run on the Curta cluster of the Freie Universität Berlin. 🚨 WARNINGThe notebook will not run unless all simulations have been stored in the `output` folder._Note_: a `figures` folder will be created in which the figures are saved.Simulations can be run using the following command:```(maxsmi) $ python maxsmi/full_workflow.py --task ESOL --aug-strategy-train augmentation_with_duplication --aug-nb-train 10 --aug-nb-test 10```📝 Have a look at the [README](https://github.com/t-kimber/maxsmi/blob/main/README.md) page for more details. GoalThe aim of this notebook is to compare the results on the test set for- all tasks (ESOL, FreeSolv, and lipophilicity),- all models (CONV1D, CONV2D and RNN),- all augmentation strategies: no augmentation, augmentation with, without, and with reduced duplication, as well as the estimated maximum,and to analyse the results.
###Code
import os
import numpy as np
import pandas as pd
from pathlib import Path
import matplotlib.pyplot as plt
import matplotlib.transforms as transforms
from maxsmi.utils_analysis import retrieve_metric
# Path to this notebook
HERE = Path(_dh[-1])
path_to_output = HERE.parents[0]
# Make a folder for output figures
os.makedirs(f"{HERE}/figures", exist_ok=True)
###Output
_____no_output_____
###Markdown
Plots Let's plot these results. Grid: augmentation numberThe models were run on a fine augmentation grid: from 1 to 20 with a step of 1, as well as a coarser grid: from 20 to 100 with a step of 10.
###Code
fine_grid = [elem for elem in range(1, 21, 1)]
coarse_grid = [elem for elem in range(10, 110, 10)]
temp_grid = [elem for elem in range(30, 110, 10)]
full_grid = fine_grid + temp_grid
###Output
_____no_output_____
###Markdown
Plot for best modelWhich model performs best?
###Code
def plot_metric_for_model(metric,
set_,
augmentation_strategy,
task="ESOL",
grid=full_grid):
"""
Plots the metric of interest on the set of interest for a given model.
Parameters
----------
metric : str
The metric of interest, such as the r2 score,
time or mean squared error.
set_ : str
The train set or test set.
augmentation_strategy : str
The augmentation strategy used.
task : str
The task considered.
grid : list
        The grid of augmentation numbers to retrieve.
Returns
-------
None
"""
models = ["CONV1D", "CONV2D", "RNN"]
x = [step for step in grid]
fig, ax = plt.subplots(1, 1)
for model in models:
legend_ = []
y_model = []
for augmentation_num in grid:
y = retrieve_metric(
path_to_output,
metric,
set_,
task,
augmentation_strategy,
augmentation_num,
augmentation_strategy,
augmentation_num,
model,
)
y_model.append(y)
ax.plot(x, y_model)
ax.set_title(f"Data: {task} \nStrategy: {augmentation_strategy}")
ax.set_xlabel("Number of augmentation")
if metric == "rmse":
ax.set_ylabel(f"RMSE ({set_})")
elif metric == "time":
ax.set_ylabel(f"{metric} ({set_}) [sec]")
else:
ax.set_ylabel(f"{metric} ({set_})")
legend_.append(model)
ax.legend(models)
plt.savefig(f"figures/{task}_best_ml_model.png",
dpi=1200,
facecolor='w',
edgecolor='w',
orientation='portrait',
format="png",
transparent=False,
bbox_inches=None,
pad_inches=0.1,)
plt.show()
plot_metric_for_model("rmse", "test",
"augmentation_with_reduced_duplication",
grid=full_grid)
###Output
_____no_output_____
###Markdown
Plot for best strategyWhich strategy performs best?
###Code
def plot_metric_for_strategy(metric,
set_,
task="ESOL",
model="CONV1D",
grid=full_grid):
"""
Plots the metric of interest on the set of interest
for a given model.
Parameters
----------
metric : str
The metric of interest, such as the r2 score,
time or mean squared error.
set_ : str
The train set or test set.
task : str
The task of interest, e.g. "ESOL".
model : str
The model to consider.
grid : list
        The grid of augmentation numbers to retrieve.
Returns
-------
None
"""
x = [step for step in grid]
fig, ax = plt.subplots(1, 1)
for augmentation_strategy in [
"augmentation_with_duplication",
"augmentation_without_duplication",
"augmentation_with_reduced_duplication"
]:
legend_ = []
y_strategy = []
for augmentation_num in grid:
y = retrieve_metric(
path_to_output,
metric,
set_,
task,
augmentation_strategy,
augmentation_num,
augmentation_strategy,
augmentation_num,
model,
)
y_strategy.append(y)
ax.plot(x, y_strategy)
if metric == "rmse":
ax.set_ylabel(f"RMSE ({set_})")
elif metric == "time":
ax.set_ylabel(f"{metric} ({set_}) [sec]")
else:
ax.set_ylabel(f"{metric} ({set_})")
legend_.append(augmentation_strategy)
ax.set_title(f"Data: {task}\nModel: {model}")
ax.set_xlabel("Number of augmentation")
# Retrieve no augmentation
y_no_augmentation = retrieve_metric(
path_to_output,
metric,
set_,
task,
"no_augmentation",
0,
"no_augmentation",
0,
model,
)
# Retrieve augmentation with maximum duplication
if task == "ESOL":
task = "ESOL_small"
y_est_max = retrieve_metric(
path_to_output,
metric,
set_,
task,
"augmentation_maximum_estimation",
10,
"augmentation_maximum_estimation",
10,
model,
)
if task == "ESOL_small":
task = "ESOL"
plt.axhline(y=y_est_max, color='r', linestyle='-')
trans = transforms.blended_transform_factory(
ax.get_yticklabels()[0].get_transform(),
ax.transData)
ax.text(0, y_est_max,
f"{y_est_max:.3f}",
transform=trans,
color='red',
ha="right",
va="center")
plt.axhline(y=y_no_augmentation,
color='black',
linewidth=0.7,
linestyle='dashed')
ax.legend([
"augmentation_with_duplication",
"augmentation_without_duplication",
"augmentation_with_reduced_duplication",
"augmentation_with_estimated_maximum",
"no_augmentation"
])
plt.savefig(f"figures/{task}_{model}_best_strategy.png",
dpi=1200,
facecolor='w',
edgecolor='w',
orientation='portrait',
format="png",
transparent=False,
bbox_inches=None,
pad_inches=0.1,)
plt.show()
plot_metric_for_strategy("rmse",
"test",
task="FreeSolv",
model="CONV1D",
grid=full_grid)
plot_metric_for_strategy("rmse",
"test",
task="ESOL",
model="RNN",
grid=full_grid)
###Output
_____no_output_____
###Markdown
Plot all metrics, all models and all augmentation strategies
###Code
def plot_all(metric, set_, grid, save_fig=False):
"""
Plots the metric of interest on the set of interest.
Parameters
----------
metric : str
The metric of interest, such as the r2 score,
time or mean squared error.
set_ : str
The train set or test set.
    grid : list
        The grid of augmentation numbers to retrieve.
save_fig : bool
Whether to save the figure or not.
Returns
-------
None
"""
tasks = ["ESOL", "FreeSolv", "lipophilicity"]
models = ["CONV1D", "CONV2D", "RNN"]
x = [step for step in grid]
fig, ax = plt.subplots(nrows=len(tasks),
ncols=len(models),
figsize=(20, 20))
for i, task in enumerate(tasks):
for j, model in enumerate(models):
legend_ = []
for augmentation_strategy in [
"augmentation_with_duplication",
"augmentation_without_duplication",
"augmentation_with_reduced_duplication"
]:
y_task_model_strategy = []
for augmentation_num in grid:
y = retrieve_metric(
path_to_output,
metric,
set_,
task,
augmentation_strategy,
augmentation_num,
augmentation_strategy,
augmentation_num,
model,
)
y_task_model_strategy.append(y)
ax[i, j].plot(x, y_task_model_strategy)
ax[i, j].set_title(f"Data: {task} \nModel: {model}")
ax[i, j].set_xlabel("Number of augmentation")
if metric == "rmse":
ax[i, j].set_ylabel(f"RMSE ({set_})")
elif metric == "time":
ax[i, j].set_ylabel(f"{metric} ({set_}) [sec]")
else:
ax[i, j].set_ylabel(f"{metric} ({set_})")
caption = f"{augmentation_strategy}"
legend_.append(caption)
ax[i, j].legend(legend_)
if save_fig:
plt.savefig(f"figures/all_combinations_{metric}_{set_}.png",
dpi=1200,
facecolor='w',
edgecolor='w',
orientation='portrait',
format="png",
transparent=False,
bbox_inches=None,
pad_inches=0.1,)
plt.show()
plot_all("rmse", "test", full_grid)
###Output
_____no_output_____
###Markdown
Given the plots above, we create a function to plot a single one.
###Code
def plot_single(metric, set_, grid, task, model):
"""
Plots the metric of interest on the set of interest.
Parameters
----------
metric : str
The metric of interest, such as the r2 score,
time or mean squared error.
set_ : str
The train set or test set.
task : str
The considered task.
model : str
The considered model.
Returns
-------
None
"""
x = [step for step in grid]
fig, ax = plt.subplots()
legend_ = []
for augmentation_strategy in [
"augmentation_with_duplication",
"augmentation_without_duplication",
"augmentation_with_reduced_duplication"
]:
y_task_model_strategy = []
for augmentation_num in grid:
y = retrieve_metric(
path_to_output,
metric,
set_,
task,
augmentation_strategy,
augmentation_num,
augmentation_strategy,
augmentation_num,
model,
)
y_task_model_strategy.append(y)
ax.plot(x, y_task_model_strategy)
ax.set_title(f"Data: {task} \nModel: {model}")
ax.set_xlabel("Number of augmentation")
ax.set_ylabel(f"{metric}")
if metric == "rmse":
ax.set_ylabel(f"RMSE ({set_})")
elif metric == "time":
ax.set_ylabel(f"{metric} ({set_}) [sec]")
else:
ax.set_ylabel(f"{metric} ({set_})")
caption = f"{augmentation_strategy}"
legend_.append(caption)
ax.legend(legend_)
plt.savefig(f"figures/{task}_{metric}_{model}_{set_}_3strategies.png",
dpi=1200,
facecolor='w',
edgecolor='w',
orientation='portrait',
format="png",
transparent=False,
bbox_inches=None,
pad_inches=0.1,)
plt.show()
plot_single("rmse", "test", full_grid, "FreeSolv", "CONV1D")
plot_single("rmse", "test", full_grid, "ESOL", "CONV1D")
plot_single("rmse", "test", full_grid, "lipophilicity", "CONV1D")
plot_single("time", "train", full_grid, "ESOL", "CONV1D")
###Output
_____no_output_____ |
webscraping-with-selenium.ipynb | ###Markdown
Requests and BeautifulSoup Installing the Packages
###Code
#!pip install selenium
###Output
_____no_output_____
###Markdown
>**Note**: - pip is a package manager for Python. This means it is a tool that lets you install and manage additional libraries and dependencies that are not distributed as part of the standard library. Make sure you do not forget the **"!"** (as above) before "pip install selenium", which means running it as a shell command rather than a notebook command (it is as if you opened a terminal and typed the command there without the **"!"**). We launch the Chrome browser when we use Selenium. We need to install a driver (locally on the system, and run this notebook locally only), which acts as a bridge between Selenium and Chrome (or whichever browser you use). For Chrome this is called the ChromeDriver, and for Firefox it is the geckodriver.
###Code
from selenium import webdriver
###Output
_____no_output_____
###Markdown
We need to tell Selenium which browser we are using, and Selenium will drive the web driver.
###Code
# If we need to run Selenium in headless mode, i.e. without seeing the GUI while it runs, use the code below
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium import webdriver
driverPath_1 = 'C:\Program Files\Google\Chrome\Application\chromedriver.exe'
driverPath_2 = 'C:\Program Files\Google\Chrome\Application\chrome.exe'
serviceConfig = Service(driverPath_1)
optionConfig = webdriver.ChromeOptions()
driver = webdriver.Chrome(service=serviceConfig, options=optionConfig)
###Output
_____no_output_____
###Markdown
When we run the code above, Chrome is automated; then, keeping that page open, we use the get method to load the URL of the required page.
###Code
items_url = "https://www.tayara.tn/"
driver.get(items_url)
###Output
_____no_output_____
###Markdown
>**Note**: if we run the code cell above without having run the previous cell immediately before it, the error message "**chrome not reachable**" will be displayed. We need to make sure the browser displays the complete page / all of its elements from top to bottom (end of page), just as we humans would, by scrolling down with x- and y-axis values. We then give a sleep time of 4 seconds so that we do not run into the problem of trying to read elements from a page that has not loaded yet. We use Selenium's execute script for this. We will also import the time module to slow things down a bit and be gentle on the website.
###Code
import time
time.sleep(2) # sleep for two seconds at the start of the page, then scroll to the bottom of the page
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
time.sleep(4) # sleep so that all elements can load
###Output
_____no_output_____
###Markdown
Understanding Selenium's basic methods for getting the required dataThe following techniques will help us find elements in a web page (these methods return a list): find_elements_by_name find_elements_by_xpath find_elements_by_link_text find_elements_by_partial_link_text find_elements_by_tag_name find_elements_by_class_name find_elements_by_css_selector Now let's write some Python code to scrape the listings from the web. We will use the CSS selector, first selecting the whole element and then addressing the individual values. Not every element will have all of the associated values, and some may have none at all. So we use each CSS selector inside a try-except block that handles the empty case (**Exception in Python**). We will build a list of dictionaries, and we can store this data in a JSON file using the [JSON](https://docs.python.org/3/library/json.html/) module and the [dump()](https://www.geeksforgeeks.org/json-dumps-in-python) method. To repeat the previous pattern, we will also keep it in a CSV file.
###Code
import time
import json
import os
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
def get_items(search, page):
search = search.replace(" ", "_")
url = "https://www.tayara.tn/ads/c/Le_plus_r%C3%A9cent?price={}&p={}".format(
search, page
)
driver.get(url)
    # sleep for two seconds, then scroll to the bottom of the page
time.sleep(2)
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
time.sleep(4) #sleep between loadings
items = driver.find_elements_by_css_selector(".mat-card")
output = []
for i in items:
try:
if len(i.find_element_by_css_selector(".title").text)>=1:
titre = i.find_element_by_css_selector(".title").text
except:
titre = None
try:
if len(i.find_element_by_css_selector(".price").text)>=1:
prix = i.find_element_by_css_selector(".price").text
except:
prix = None
try:
if len(i.find_element_by_css_selector(".mat-card-content a").text)>=1:
produit_url = i.find_element_by_css_selector(".mat-card-content a").get_attribute("href")
except:
produit_url = None
output_item = {"titre": titre, "prix": prix, "url": produit_url}
output.append(output_item)
return output
#elements-offer-price-normal__price
driverPath_1 = 'C:\Program Files\Google\Chrome\Application\chromedriver.exe'
serviceConfig = Service(driverPath_1)
optionConfig = webdriver.ChromeOptions()
driver = webdriver.Chrome(service=serviceConfig, options=optionConfig)
page = 1
search = "0,9999999" #since we would like to search for Inflight Items
all_items = []
#we will call the function get_items
while True:
print("getting page", page)
results = get_items(search, page)
all_items += results
if len(results) == 0 or page > 2:
break
page += 1
# save all the items to a json file
json.dump(all_items, open("product.json", "w"), indent=2)
driver.close()
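# (Added sketch, not part of the original notebook) The markdown above also mentions
# keeping the results in a CSV file; one minimal way to do that with the same all_items list:
import csv
with open("product.csv", "w", newline="", encoding="utf-8") as csv_file:
    writer = csv.DictWriter(csv_file, fieldnames=["titre", "prix", "url"])
    writer.writeheader()
    writer.writerows(all_items)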
results
###Output
_____no_output_____ |
NoteBooks/Curso de WebScraping/Unificado/DIY/Scraper_pag_12.ipynb | ###Markdown
Scraper página 12The purpose of this script is to scrape articles by section of the newspaper. I want to extract1. Title2. Volanta (kicker)3. Date... See the course.And store them in a data frame.For this I need to write a function that gets me the links of the sections,another that lets me extract what I need from the individual news articles,and another function that...
###Code
import requests as rq
from bs4 import BeautifulSoup as bs
import pandas as pd
import re
url ='https://www.pagina12.com.ar/'
response = rq.get(url)
soup = bs(response.text, 'lxml')
# Get the links of all the sections.
link_sections = []
sections = soup.find('div', attrs ={'class':'p12-dropdown-column'}).find_all('a')
for section in sections:
link_section = section.get('href')
link_sections.append(link_section)
link_sections
# For each news section I want to get every article link on the page, i.e. a list of the news items per section page.
url_section= 'https://www.pagina12.com.ar/secciones/el-pais'
response = rq.get(url_section)
soup = bs(response.text,'lxml')
links_news_for_section =[]
links_for_page = soup.find_all('div', attrs= {'class':'article-item__content'})
for i in links_for_page:
anchor = i.find_all('a')
links_news_for_section.append(anchor[0].get('href'))
links_news_for_section
# Scraper for an individual news article
url = 'https://www.pagina12.com.ar/319733-escandalo-en-mendoza-por-la-invitacion-de-la-hija-del-gobern'
response = rq.get(url)
soup = bs(response.text, 'lxml')
pattern ='?'
dic_news ={}
dic_news['title'] = soup.h1.get_text()
dic_news['body'] = soup.find('div', attrs = {'class','article-text'}).get_text()
dic_news['datetime'] = soup.find('div', attrs={'class':'time'}).find_all('span')[0].get('datetime')
img_string = soup.find('div', attrs={'class':'article-main-media-image'}).find_all('img')[1].get('data-src')
dic_news['img'] = re.sub('\?.+', "",img_string)
dic_news['section'] = soup.find('div', attrs = {'class':'suplement'}).get_text()
dic_news
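# (Added sketch, not in the original script) The introduction says the extracted fields
# should end up in a data frame; a minimal way to do that for this single article:
df_news = pd.DataFrame([dic_news])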
'''
CHALLENGE: We will use the blocks above to build a function (or functions) that performs the whole task!!
1. Create a function that lets me make a request and build a soup, because I will be making use of that.
'''
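# (Added sketch, not part of the original script) A minimal helper for point 1 of the
# challenge above: make a request and build a soup in one call.
def get_soup(url):
    """Fetch a URL and return its parsed BeautifulSoup, or None for a non-200 response."""
    response = rq.get(url)
    if response.status_code == 200:
        return bs(response.text, 'lxml')
    return None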
###Output
_____no_output_____ |
scripts/text_generation_script/Attention_text-generation_model.ipynb | ###Markdown
Here we are visualizing layer-wise attentions from a GPT-2 model trained on the `MedQuAD Cancer dataset`Reference:* This Jupyter notebook was rewritten using the [colab](https://colab.research.google.com/drive/1c9kBsbvSqpKkmd62u7nfqVhvWr0W8_LxscrollTo=XLu4wFlC0rK5) example notebook `head_view_gpt2.ipynb`**NOTE**: Currently, all paths for model load/save and data load/JSON save are hardcoded, so please make sure you change them according to your requirements --------- Load packages
###Code
import sys
!test -d bertviz_repo && echo "FYI: bertviz_repo directory already exists, to pull latest version uncomment this line: !rm -r bertviz_repo"
!echo y | rm -r bertviz_repo # Uncomment if you need a clean pull from repo
!test -d bertviz_repo || git clone https://github.com/jessevig/bertviz bertviz_repo
if not 'bertviz_repo' in sys.path:
sys.path += ['bertviz_repo']
from bertviz import head_view
from transformers import GPT2Tokenizer, GPT2Model
###Output
_____no_output_____
###Markdown
Supporting function for the visualization
###Code
def call_html():
import IPython
display(IPython.core.display.HTML('''
<script src="/static/components/requirejs/require.js"></script>
<script>
requirejs.config({
paths: {
base: '/static/base',
"d3": "https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.8/d3.min",
jquery: '//ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min',
},
});
</script>
'''))
###Output
_____no_output_____
###Markdown
Load the weights and run the visualization
###Code
model_version = './GPT2_text_generator'
model = GPT2Model.from_pretrained(model_version, output_attentions=True)
tokenizer = GPT2Tokenizer.from_pretrained(model_version)
text = "signs of ovarian germ cell tumor are swelling of the abdomen or vaginal bleeding after menopause."
inputs = tokenizer.encode_plus(text, return_tensors='pt', add_special_tokens=True)
input_ids = inputs['input_ids']
attention = model(input_ids)[-1]
input_id_list = input_ids[0].tolist() # Batch index 0
tokens = tokenizer.convert_ids_to_tokens(input_id_list)
call_html()
head_view(attention, tokens)
###Output
_____no_output_____ |
Heartfelt (GCP).ipynb | ###Markdown
Introduction **Professional Machine Learning Engineer Certification**The Professional Machine Learning Engineer Certification by Google Cloud was released to the public on October 15th, 2020. According to the official [examination page](https://cloud.google.com/certification/machine-learning-engineer):> A Professional Machine Learning Engineer designs, builds, and productionizes ML models to solve business challenges using Google Cloud technologies and knowledge of proven ML models and techniques. The ML Engineer is proficient in all aspects of model architecture, data pipeline interaction, and metrics interpretation and needs familiarity with application development, infrastructure management, data engineering, and security. In this session, we'll be looking at the following:1. The nature of an exam question and how it can be tackled.2. Building an appropriate solution with the identified tools.3. Deploying the produced solution on Google Cloud. The ProblemToday, we'll be building an AI-powered model capable of assigning one of five emojis to a given comment. The Problem StatementLet's set the stage with an exam-level question:*Heartfelt is a modern startup working on a new website comment system. It uses AI to generate one of five emotions from each comment (Happy, Funny, Scared, Angry, Sad). It then shows the number of each of the five reactions above the comment section.*``` 21341 😃 21 😂 0 😨 2 😠 5 😭_____________________________________________________Sathish This honestly made my day! 😃Pranay We really need more good news in the world, like this... 😃Maitreyi Awesome! 😃```*They have obtained and labeled a large number of comments from their existing non-AI based system and are looking to upgrade to the new solution within a year. Their in-house ML engineers have designed a proprietary model, and apart from using the cloud for training and deploying, they also want to be able to fine-tune it for accuracy.**What is the best way to build, tune, and deploy the model?* The Proposed Solution1. Files containing the labelled (Comment, Category) pairs are stored in a **Google Cloud Storage** bucket.2. **AI Platform Notebooks** will be used to access and process the data stored in the GCS bucket, rapidly prototype to find the best architecture, and train the model on a small portion of the dataset.3. **AI Platform Jobs** will be used to train the identified model architecture on the entirety of the dataset, performing hyperparameter tuning along the way. The trained model will be versioned and stored in a GCS bucket.4. **AI Platform Prediction** will be used to deploy the stored model. It allows for versioning, phased rollouts, and easy access to the model with REST API calls.5. **Google Cloud Storage** hosts the trained model, which is accessed by AI Platform Prediction. The Solution Imports
###Code
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import csv
import random
###Output
_____no_output_____
###Markdown
Obtaining and Processing the Data The dataset used here was entirely created by me. Let us take a moment to understand it.There are 200 sentence-label pairs, 40 in each of 5 categories- Happy (0), Funny (1), Scared (2), Angry (3), and Sad (4). The first thing we need to do is define a method that converts between these numbers and the emojis.
###Code
list_of_emojis = ['😃','😂','😨','😠','😭']
def label_to_emoji(label):
return list_of_emojis[label]
###Output
_____no_output_____
###Markdown
Now, we open the dataset and create two lists- one to store the strings (data) and the other to store the labels (labels).
###Code
# The next cell is only for use on GCP
import gcsfs
def prepare_data(filename):
data = []
labels = []
if filename.startswith('gs://'):
fs = gcsfs.GCSFileSystem(project='durable-will-291417')
with fs.open(filename, "rt", encoding="ascii") as my_dataset:
reader = csv.reader(my_dataset, delimiter=',')
for row in reader:
data.append(row[0])
labels.append(int(row[1]))
else:
with open(filename) as my_dataset:
reader = csv.reader(my_dataset, delimiter=',')
for row in reader:
data.append(row[0])
labels.append(int(row[1]))
return (data, labels)
# The next cell is only for use on Colab
# def prepare_data(filename):
# data = []
# labels = []
# with open(filename) as my_dataset:
# reader = csv.reader(my_dataset, delimiter=',')
# for row in reader:
# data.append(row[0])
# labels.append(int(row[1]))
# return (data, labels)
data, labels = prepare_data('heartfelt_dataset.csv')
###Output
_____no_output_____
###Markdown
The next block of code is used to shuffle these pairs while keeping each sentence aligned with its label. We first split the data into 2 buckets: 150 in Train and 50 in Test. We then print 10 examples of both train and test data, along with the emojified values of the labels that represent them.
###Code
def shuffle_data(data, labels):
X_train = []
Y_train = []
X_test = []
Y_test = []
list_to_shuffle = list(zip(data, labels))
random.shuffle(list_to_shuffle)
shuffled_data, shuffled_labels = zip(*list_to_shuffle)
X_train = shuffled_data[0:150]
Y_train = shuffled_labels[0:150]
X_test = shuffled_data[150:200]
Y_test = shuffled_labels[150:200]
return (X_train, Y_train, X_test, Y_test)
X_train, Y_train, X_test, Y_test = shuffle_data(data, labels)
print ("Examples of training data:\n")
for i in range(10):
print (" {} : {}".format(X_train[i], label_to_emoji(Y_train[i])))
print ('')
print ("Examples of testing data:\n")
for i in range(10):
print (" {} : {}".format(X_test[i], label_to_emoji(Y_test[i])))
###Output
Examples of training data:
Get the hell out of here! : 😠
Disgusting. : 😠
Yuck, he ate his own poop? : 😂
Hope this ends soon. : 😭
Is no one going to talk about how funny this was? : 😂
My blood is boiling right now. : 😠
Are you insane? : 😠
Comedians need to be paid more! : 😂
Must watch! : 😃
That's right, always treat others with respect. : 😃
Examples of testing data:
How pathetic. : 😨
Can we throw these people in prison? : 😠
It's like you were born for comedy. : 😂
Is there a reason you're still here? : 😠
She did not just say that! : 😠
My team lost today. : 😭
You need to be more careful! : 😭
Rolling on the floor here. : 😂
Those laughs are infectious! : 😂
Yes! : 😃
###Markdown
Sanity checks are essential. They help us ensure that the data was not corrupted during the ingestion and provide some insight into the data's nature. Let us look at how many samples we have in Train and Test, along with the lengths of the longest and shortest strings in the dataset.
###Code
print ("The training set contains {} examples with {} labels\n".format(len(X_train), len(Y_train)))
print ("The testing set contains {} examples with {} labels\n".format(len(X_test), len(Y_test)))
min_len = len(min(X_train, key=len).split())
print ("The length of the shortest sentence is {}\n".format(min_len))
max_len = len(max(X_train, key=len).split())
print ("The length of the longest sentence is {}\n".format(max_len))
###Output
The training set contains 150 examples with 150 labels
The testing set contains 50 examples with 50 labels
The length of the shortest sentence is 1
The length of the longest sentence is 12
###Markdown
Okay, perfect. Constructing the Model The *model* is the algorithm we develop to be able to solve an AI problem. Without getting too deep into the architecture here, the basic idea is that we're building a model capable of learning the relationship between *embedded* forms of the sentences and their corresponding emojis.
###Code
def Model():
model = keras.Sequential([keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=1)),
keras.layers.LSTM(units=128, return_sequences=True),
keras.layers.Dropout(rate=0.5),
keras.layers.LSTM(units=128, return_sequences=False),
keras.layers.Dropout(rate=0.5),
keras.layers.Dense(units=5),
keras.layers.Activation("softmax")])
model.compile(loss='categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), metrics=['accuracy'])
return model
model = Model()
###Output
_____no_output_____
###Markdown
Training Locally Before we can think about training this model on our data, there are some things we need to take care of. One-hot encoding involves modifying the labels, such that each label is a vector containing precisely one 1 and all other 0s, uniquely corresponding to the emoji it represents. The one-hot encoded forms are as follows:0 becomes [1, 0, 0, 0, 0] (Happy)1 becomes [0, 1, 0, 0, 0] (Funny)2 becomes [0, 0, 1, 0, 0] (Scared)3 becomes [0, 0, 0, 1, 0] (Angry)4 becomes [0, 0, 0, 0, 1] (Sad) The method below one-hot encodes the training and test labels and shows you a sample.
###Code
def convert_to_one_hot(labels):
return tf.one_hot(labels, 5)
Y_train_oh = convert_to_one_hot(labels=Y_train)
Y_test_oh = convert_to_one_hot(labels=Y_test)
pos_to_test = 75
print ("{} : {} (The numeric value of this is {})".format(X_train[pos_to_test], label_to_emoji(Y_train[pos_to_test]), Y_train[pos_to_test]))
print (Y_train_oh[pos_to_test])
###Output
I am genuinely terrified now. : 😨 (The numeric value of this is 2)
tf.Tensor([0. 0. 1. 0. 0.], shape=(5,), dtype=float32)
###Markdown
A sentence embedding is a vector representation of the sentence, where it's translated into a unique combination of N numbers (dimensions). Here, we are using N=128 dimensions. Apart from uniquely identifying a sentence, these numbers also carry relative semantic meaning, which allows the model to learn relationships between similar sentences. For example, if the model was trained on the sentence "I like puppies" and the word "adore" never appeared anywhere in the training set, it can still learn to infer that "I adore puppies" should produce the same emoji. As for these embeddings themselves, you can create your own from massively large datasets or use one that someone else created. Here, we're going with the latter. The embeddings used here come from TensorFlow Hub (more in the references section below). Look at how the sentence "I love you." becomes a vector of 128 numbers.
###Code
def load_hub_module():
embed = hub.load("https://tfhub.dev/google/nnlm-en-dim128-with-normalization/2")
return embed
embed = load_hub_module()
print ("I love you.\n\n{}".format(embed(["I love you."])))
###Output
I love you.
[[ 0.06720316 -0.05019467 0.19946459 0.12020276 0.14881054 -0.00456426
-0.05057551 -0.04841431 -0.19195782 -0.02451784 -0.12093704 -0.2852198
-0.00294066 -0.05971586 0.14719391 0.07701718 0.02095461 -0.06760143
0.15468313 0.01675761 -0.03760123 -0.05660797 -0.00736546 0.08649251
-0.08041847 -0.1563163 0.03294101 0.07168686 -0.11695939 -0.1823932
-0.08472857 -0.0987467 -0.03675023 -0.01732922 0.22283307 -0.05921975
-0.01329962 -0.05822768 0.06824754 0.17341426 -0.07891089 -0.23081318
-0.10878634 -0.30201977 0.0608683 0.08700971 -0.05933066 -0.13618678
-0.02547742 0.18084317 0.07031292 -0.11094392 0.04854394 0.15780668
-0.06359842 -0.09194843 0.02376433 -0.14330299 0.03715428 -0.06878028
-0.14734161 0.19417918 0.0823964 0.05661403 -0.05078657 -0.06345925
0.10933136 0.04545296 0.14868559 -0.03175765 -0.39383692 0.14416008
0.07267258 -0.11527298 0.09845851 0.01740749 0.08343427 -0.14131157
0.03251657 0.23407583 0.16797747 0.03399577 -0.057875 -0.10542976
-0.01186056 -0.05624991 0.01801045 -0.02148287 0.1361867 0.02048323
-0.10837713 0.00970503 0.06507829 -0.0664895 0.16917071 -0.03851476
0.13855007 0.06109535 -0.00296407 -0.01300511 0.09075141 -0.03050167
0.16900752 -0.12665273 0.08207355 -0.02063324 -0.16423716 0.12405533
-0.14883116 0.02858107 -0.11822073 -0.11193876 -0.02074423 0.03844061
-0.1346886 -0.05284637 -0.0035206 -0.14857209 -0.21723205 0.11980139
0.09087516 0.10499977 -0.0890507 0.29368913 0.17826816 -0.03604833
-0.05201548 0.09901257]]
###Markdown
Now for the fun part! We pass the embedded training (and testing) data, the one-hot encoded training (and testing) labels, and specify a few other things- like how long we want the training to take place and the number of examples to train with each time.
###Code
def train_model(model, embed, X_train, Y_train_oh, X_test, Y_test_oh, num_epochs=100, batch_size=10, shuffle=True):
return model.fit(embed(X_train),
Y_train_oh,
epochs = num_epochs,
batch_size = batch_size,
shuffle=shuffle,
validation_data=(embed(X_test), Y_test_oh))
history = train_model(model, embed, X_train, Y_train_oh, X_test, Y_test_oh)
###Output
Epoch 1/100
15/15 [==============================] - 1s 84ms/step - loss: 1.6105 - accuracy: 0.1800 - val_loss: 1.6114 - val_accuracy: 0.1400
Epoch 2/100
15/15 [==============================] - 0s 9ms/step - loss: 1.6049 - accuracy: 0.2933 - val_loss: 1.6124 - val_accuracy: 0.1800
Epoch 3/100
15/15 [==============================] - 0s 7ms/step - loss: 1.6000 - accuracy: 0.3067 - val_loss: 1.6134 - val_accuracy: 0.1800
Epoch 4/100
15/15 [==============================] - 0s 8ms/step - loss: 1.5969 - accuracy: 0.3200 - val_loss: 1.6149 - val_accuracy: 0.2600
Epoch 5/100
15/15 [==============================] - 0s 8ms/step - loss: 1.5884 - accuracy: 0.4733 - val_loss: 1.6140 - val_accuracy: 0.3200
Epoch 6/100
15/15 [==============================] - 0s 8ms/step - loss: 1.5757 - accuracy: 0.4267 - val_loss: 1.6117 - val_accuracy: 0.2800
Epoch 7/100
15/15 [==============================] - 0s 8ms/step - loss: 1.5535 - accuracy: 0.4333 - val_loss: 1.6043 - val_accuracy: 0.2800
Epoch 8/100
15/15 [==============================] - 0s 7ms/step - loss: 1.5228 - accuracy: 0.4667 - val_loss: 1.5946 - val_accuracy: 0.3000
Epoch 9/100
15/15 [==============================] - 0s 9ms/step - loss: 1.4723 - accuracy: 0.4867 - val_loss: 1.5762 - val_accuracy: 0.3200
Epoch 10/100
15/15 [==============================] - 0s 8ms/step - loss: 1.3920 - accuracy: 0.4667 - val_loss: 1.5541 - val_accuracy: 0.3200
Epoch 11/100
15/15 [==============================] - 0s 7ms/step - loss: 1.2867 - accuracy: 0.4667 - val_loss: 1.5368 - val_accuracy: 0.3400
Epoch 12/100
15/15 [==============================] - 0s 8ms/step - loss: 1.1834 - accuracy: 0.5467 - val_loss: 1.5361 - val_accuracy: 0.3800
Epoch 13/100
15/15 [==============================] - 0s 7ms/step - loss: 1.0884 - accuracy: 0.5933 - val_loss: 1.5053 - val_accuracy: 0.3400
Epoch 14/100
15/15 [==============================] - 0s 8ms/step - loss: 0.9699 - accuracy: 0.6600 - val_loss: 1.5256 - val_accuracy: 0.4200
Epoch 15/100
15/15 [==============================] - 0s 8ms/step - loss: 0.9099 - accuracy: 0.7133 - val_loss: 1.5484 - val_accuracy: 0.4200
Epoch 16/100
15/15 [==============================] - 0s 8ms/step - loss: 0.8214 - accuracy: 0.7200 - val_loss: 1.5328 - val_accuracy: 0.4200
Epoch 17/100
15/15 [==============================] - 0s 8ms/step - loss: 0.7049 - accuracy: 0.7867 - val_loss: 1.5873 - val_accuracy: 0.4000
Epoch 18/100
15/15 [==============================] - 0s 8ms/step - loss: 0.6944 - accuracy: 0.7867 - val_loss: 1.6383 - val_accuracy: 0.3800
Epoch 19/100
15/15 [==============================] - 0s 8ms/step - loss: 0.6500 - accuracy: 0.7667 - val_loss: 1.6919 - val_accuracy: 0.4000
Epoch 20/100
15/15 [==============================] - 0s 8ms/step - loss: 0.6262 - accuracy: 0.7867 - val_loss: 1.6960 - val_accuracy: 0.4400
Epoch 21/100
15/15 [==============================] - 0s 7ms/step - loss: 0.5873 - accuracy: 0.8000 - val_loss: 1.7493 - val_accuracy: 0.4000
Epoch 22/100
15/15 [==============================] - 0s 8ms/step - loss: 0.4598 - accuracy: 0.8667 - val_loss: 1.7248 - val_accuracy: 0.4600
Epoch 23/100
15/15 [==============================] - 0s 7ms/step - loss: 0.4636 - accuracy: 0.8533 - val_loss: 1.7595 - val_accuracy: 0.4800
Epoch 24/100
15/15 [==============================] - 0s 8ms/step - loss: 0.4201 - accuracy: 0.8667 - val_loss: 1.8540 - val_accuracy: 0.4400
Epoch 25/100
15/15 [==============================] - 0s 8ms/step - loss: 0.4369 - accuracy: 0.8667 - val_loss: 1.8754 - val_accuracy: 0.5000
Epoch 26/100
15/15 [==============================] - 0s 8ms/step - loss: 0.4108 - accuracy: 0.8667 - val_loss: 1.9044 - val_accuracy: 0.5000
Epoch 27/100
15/15 [==============================] - 0s 8ms/step - loss: 0.3492 - accuracy: 0.8800 - val_loss: 1.9960 - val_accuracy: 0.4600
Epoch 28/100
15/15 [==============================] - 0s 9ms/step - loss: 0.3303 - accuracy: 0.9067 - val_loss: 2.0492 - val_accuracy: 0.5000
Epoch 29/100
15/15 [==============================] - 0s 10ms/step - loss: 0.3039 - accuracy: 0.9133 - val_loss: 2.0592 - val_accuracy: 0.4600
Epoch 30/100
15/15 [==============================] - 0s 10ms/step - loss: 0.2769 - accuracy: 0.9000 - val_loss: 2.0965 - val_accuracy: 0.4800
Epoch 31/100
15/15 [==============================] - 0s 11ms/step - loss: 0.2566 - accuracy: 0.9333 - val_loss: 2.1565 - val_accuracy: 0.4800
Epoch 32/100
15/15 [==============================] - 0s 9ms/step - loss: 0.2995 - accuracy: 0.9200 - val_loss: 2.1998 - val_accuracy: 0.4800
Epoch 33/100
15/15 [==============================] - 0s 10ms/step - loss: 0.2518 - accuracy: 0.9467 - val_loss: 2.2709 - val_accuracy: 0.4600
Epoch 34/100
15/15 [==============================] - 0s 10ms/step - loss: 0.2127 - accuracy: 0.9733 - val_loss: 2.2817 - val_accuracy: 0.4600
Epoch 35/100
15/15 [==============================] - 0s 10ms/step - loss: 0.1763 - accuracy: 0.9667 - val_loss: 2.3662 - val_accuracy: 0.5000
Epoch 36/100
15/15 [==============================] - 0s 11ms/step - loss: 0.2508 - accuracy: 0.9333 - val_loss: 2.3590 - val_accuracy: 0.4600
Epoch 37/100
15/15 [==============================] - 0s 9ms/step - loss: 0.2153 - accuracy: 0.9400 - val_loss: 2.4232 - val_accuracy: 0.4600
Epoch 38/100
15/15 [==============================] - 0s 12ms/step - loss: 0.1465 - accuracy: 0.9733 - val_loss: 2.5625 - val_accuracy: 0.4600
Epoch 39/100
15/15 [==============================] - 0s 10ms/step - loss: 0.1848 - accuracy: 0.9467 - val_loss: 2.5554 - val_accuracy: 0.4600
Epoch 40/100
15/15 [==============================] - 0s 11ms/step - loss: 0.1970 - accuracy: 0.9133 - val_loss: 2.6434 - val_accuracy: 0.5000
Epoch 41/100
15/15 [==============================] - 0s 11ms/step - loss: 0.1632 - accuracy: 0.9667 - val_loss: 2.5945 - val_accuracy: 0.4600
Epoch 42/100
15/15 [==============================] - 0s 10ms/step - loss: 0.1006 - accuracy: 0.9933 - val_loss: 2.6683 - val_accuracy: 0.4600
Epoch 43/100
15/15 [==============================] - 0s 11ms/step - loss: 0.1491 - accuracy: 0.9667 - val_loss: 2.6889 - val_accuracy: 0.4600
Epoch 44/100
15/15 [==============================] - 0s 12ms/step - loss: 0.1141 - accuracy: 0.9867 - val_loss: 2.7530 - val_accuracy: 0.4600
Epoch 45/100
15/15 [==============================] - 0s 10ms/step - loss: 0.1244 - accuracy: 0.9600 - val_loss: 2.7795 - val_accuracy: 0.4600
Epoch 46/100
15/15 [==============================] - 0s 10ms/step - loss: 0.1418 - accuracy: 0.9600 - val_loss: 2.7948 - val_accuracy: 0.4600
Epoch 47/100
15/15 [==============================] - 0s 10ms/step - loss: 0.0888 - accuracy: 0.9800 - val_loss: 2.8432 - val_accuracy: 0.4600
Epoch 48/100
15/15 [==============================] - 0s 9ms/step - loss: 0.1148 - accuracy: 0.9667 - val_loss: 2.8415 - val_accuracy: 0.4600
Epoch 49/100
15/15 [==============================] - 0s 10ms/step - loss: 0.1005 - accuracy: 0.9733 - val_loss: 2.9067 - val_accuracy: 0.4600
Epoch 50/100
15/15 [==============================] - 0s 12ms/step - loss: 0.1298 - accuracy: 0.9600 - val_loss: 2.9379 - val_accuracy: 0.4600
Epoch 51/100
15/15 [==============================] - 0s 11ms/step - loss: 0.0890 - accuracy: 0.9733 - val_loss: 2.9504 - val_accuracy: 0.4600
Epoch 52/100
15/15 [==============================] - 0s 11ms/step - loss: 0.1166 - accuracy: 0.9733 - val_loss: 3.0125 - val_accuracy: 0.4800
Epoch 53/100
15/15 [==============================] - 0s 12ms/step - loss: 0.0704 - accuracy: 0.9867 - val_loss: 3.0926 - val_accuracy: 0.4800
Epoch 54/100
15/15 [==============================] - 0s 10ms/step - loss: 0.0707 - accuracy: 0.9867 - val_loss: 3.1087 - val_accuracy: 0.4800
Epoch 55/100
15/15 [==============================] - 0s 11ms/step - loss: 0.0593 - accuracy: 1.0000 - val_loss: 3.1966 - val_accuracy: 0.4600
Epoch 56/100
15/15 [==============================] - 0s 12ms/step - loss: 0.0747 - accuracy: 0.9800 - val_loss: 3.1760 - val_accuracy: 0.4800
Epoch 57/100
15/15 [==============================] - 0s 11ms/step - loss: 0.0502 - accuracy: 1.0000 - val_loss: 3.2182 - val_accuracy: 0.4800
Epoch 58/100
15/15 [==============================] - 0s 11ms/step - loss: 0.0695 - accuracy: 0.9867 - val_loss: 3.2735 - val_accuracy: 0.4600
Epoch 59/100
15/15 [==============================] - 0s 13ms/step - loss: 0.0337 - accuracy: 1.0000 - val_loss: 3.2925 - val_accuracy: 0.4600
Epoch 60/100
15/15 [==============================] - 0s 11ms/step - loss: 0.0534 - accuracy: 1.0000 - val_loss: 3.3463 - val_accuracy: 0.4600
Epoch 61/100
15/15 [==============================] - 0s 10ms/step - loss: 0.0453 - accuracy: 1.0000 - val_loss: 3.3970 - val_accuracy: 0.4600
Epoch 62/100
15/15 [==============================] - 0s 10ms/step - loss: 0.0492 - accuracy: 0.9867 - val_loss: 3.4143 - val_accuracy: 0.4800
Epoch 63/100
15/15 [==============================] - 0s 11ms/step - loss: 0.0477 - accuracy: 0.9867 - val_loss: 3.4328 - val_accuracy: 0.4800
Epoch 64/100
15/15 [==============================] - 0s 9ms/step - loss: 0.0381 - accuracy: 0.9933 - val_loss: 3.4760 - val_accuracy: 0.5000
Epoch 65/100
15/15 [==============================] - 0s 10ms/step - loss: 0.0387 - accuracy: 1.0000 - val_loss: 3.4956 - val_accuracy: 0.4800
Epoch 66/100
15/15 [==============================] - 0s 11ms/step - loss: 0.0489 - accuracy: 0.9933 - val_loss: 3.4930 - val_accuracy: 0.5000
Epoch 67/100
15/15 [==============================] - 0s 10ms/step - loss: 0.0262 - accuracy: 1.0000 - val_loss: 3.5097 - val_accuracy: 0.5000
Epoch 68/100
15/15 [==============================] - 0s 9ms/step - loss: 0.0240 - accuracy: 1.0000 - val_loss: 3.5555 - val_accuracy: 0.5000
Epoch 69/100
15/15 [==============================] - 0s 10ms/step - loss: 0.0403 - accuracy: 1.0000 - val_loss: 3.6135 - val_accuracy: 0.4600
Epoch 70/100
15/15 [==============================] - 0s 10ms/step - loss: 0.0441 - accuracy: 0.9933 - val_loss: 3.7086 - val_accuracy: 0.4800
Epoch 71/100
15/15 [==============================] - 0s 10ms/step - loss: 0.0408 - accuracy: 0.9933 - val_loss: 3.6738 - val_accuracy: 0.4800
Epoch 72/100
15/15 [==============================] - 0s 10ms/step - loss: 0.0443 - accuracy: 0.9933 - val_loss: 3.7009 - val_accuracy: 0.4800
Epoch 73/100
15/15 [==============================] - 0s 10ms/step - loss: 0.0409 - accuracy: 1.0000 - val_loss: 3.7593 - val_accuracy: 0.4800
Epoch 74/100
15/15 [==============================] - 0s 11ms/step - loss: 0.0362 - accuracy: 0.9933 - val_loss: 3.7390 - val_accuracy: 0.4800
Epoch 75/100
15/15 [==============================] - 0s 10ms/step - loss: 0.0421 - accuracy: 0.9933 - val_loss: 3.7122 - val_accuracy: 0.4800
Epoch 76/100
15/15 [==============================] - 0s 10ms/step - loss: 0.0421 - accuracy: 0.9867 - val_loss: 3.7604 - val_accuracy: 0.4800
Epoch 77/100
15/15 [==============================] - 0s 10ms/step - loss: 0.0265 - accuracy: 1.0000 - val_loss: 3.8484 - val_accuracy: 0.4800
Epoch 78/100
15/15 [==============================] - 0s 13ms/step - loss: 0.0322 - accuracy: 0.9933 - val_loss: 3.9391 - val_accuracy: 0.4800
Epoch 79/100
15/15 [==============================] - 0s 10ms/step - loss: 0.0356 - accuracy: 0.9933 - val_loss: 3.9315 - val_accuracy: 0.4400
Epoch 80/100
15/15 [==============================] - 0s 10ms/step - loss: 0.0244 - accuracy: 1.0000 - val_loss: 3.9255 - val_accuracy: 0.5000
Epoch 81/100
15/15 [==============================] - 0s 10ms/step - loss: 0.0210 - accuracy: 0.9933 - val_loss: 3.9484 - val_accuracy: 0.5000
Epoch 82/100
15/15 [==============================] - 0s 9ms/step - loss: 0.0346 - accuracy: 0.9933 - val_loss: 3.9873 - val_accuracy: 0.5000
Epoch 83/100
15/15 [==============================] - 0s 9ms/step - loss: 0.0332 - accuracy: 1.0000 - val_loss: 4.0705 - val_accuracy: 0.4400
Epoch 84/100
15/15 [==============================] - 0s 10ms/step - loss: 0.0359 - accuracy: 0.9867 - val_loss: 4.0176 - val_accuracy: 0.4800
Epoch 85/100
15/15 [==============================] - 0s 10ms/step - loss: 0.0245 - accuracy: 1.0000 - val_loss: 4.0475 - val_accuracy: 0.5000
Epoch 86/100
15/15 [==============================] - 0s 9ms/step - loss: 0.0273 - accuracy: 0.9933 - val_loss: 4.1013 - val_accuracy: 0.4800
Epoch 87/100
15/15 [==============================] - 0s 8ms/step - loss: 0.0180 - accuracy: 1.0000 - val_loss: 4.1562 - val_accuracy: 0.4800
Epoch 88/100
15/15 [==============================] - 0s 7ms/step - loss: 0.0280 - accuracy: 1.0000 - val_loss: 4.1338 - val_accuracy: 0.5000
Epoch 89/100
15/15 [==============================] - 0s 7ms/step - loss: 0.0318 - accuracy: 0.9933 - val_loss: 4.1288 - val_accuracy: 0.4800
Epoch 90/100
15/15 [==============================] - 0s 8ms/step - loss: 0.0137 - accuracy: 1.0000 - val_loss: 4.1548 - val_accuracy: 0.4800
Epoch 91/100
15/15 [==============================] - 0s 8ms/step - loss: 0.0146 - accuracy: 1.0000 - val_loss: 4.1824 - val_accuracy: 0.4800
Epoch 92/100
15/15 [==============================] - 0s 8ms/step - loss: 0.0259 - accuracy: 0.9933 - val_loss: 4.1917 - val_accuracy: 0.4800
Epoch 93/100
15/15 [==============================] - 0s 8ms/step - loss: 0.0237 - accuracy: 0.9933 - val_loss: 4.1992 - val_accuracy: 0.4800
Epoch 94/100
15/15 [==============================] - 0s 8ms/step - loss: 0.0787 - accuracy: 0.9867 - val_loss: 4.1721 - val_accuracy: 0.4800
Epoch 95/100
15/15 [==============================] - 0s 8ms/step - loss: 0.0355 - accuracy: 0.9933 - val_loss: 4.2055 - val_accuracy: 0.4800
Epoch 96/100
15/15 [==============================] - 0s 7ms/step - loss: 0.0153 - accuracy: 1.0000 - val_loss: 4.1983 - val_accuracy: 0.5000
Epoch 97/100
15/15 [==============================] - 0s 9ms/step - loss: 0.0179 - accuracy: 1.0000 - val_loss: 4.2252 - val_accuracy: 0.4800
Epoch 98/100
15/15 [==============================] - 0s 7ms/step - loss: 0.0188 - accuracy: 1.0000 - val_loss: 4.2519 - val_accuracy: 0.4800
Epoch 99/100
15/15 [==============================] - 0s 9ms/step - loss: 0.0152 - accuracy: 1.0000 - val_loss: 4.2572 - val_accuracy: 0.4800
Epoch 100/100
15/15 [==============================] - 0s 8ms/step - loss: 0.0153 - accuracy: 1.0000 - val_loss: 4.2599 - val_accuracy: 0.4800
###Markdown
If you don't understand this, don't worry. We're just having a look at how our model has been composed.
###Code
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lambda (Lambda) (10, 1, 128) 0
_________________________________________________________________
lstm (LSTM) (10, 1, 128) 131584
_________________________________________________________________
dropout (Dropout) (10, 1, 128) 0
_________________________________________________________________
lstm_1 (LSTM) (10, 128) 131584
_________________________________________________________________
dropout_1 (Dropout) (10, 128) 0
_________________________________________________________________
dense (Dense) (10, 5) 645
_________________________________________________________________
activation (Activation) (10, 5) 0
=================================================================
Total params: 263,813
Trainable params: 263,813
Non-trainable params: 0
_________________________________________________________________
###Markdown
The plot shown here indicates how the model did in both Training, as well as in Validation. As you can see, the Validation Accuracy stopped growing after about 40-50% (though the Train Accuracy went up to 100%). Can you think of why? Hint: Lack of data on our end.
###Code
accuracy_train = history.history['accuracy']
accuracy_val = history.history['val_accuracy']
loss_train = history.history['loss']
loss_val = history.history['val_loss']
epochs = range(1,101)
plt.plot(epochs, accuracy_train, 'g', label='Training Accuracy')
plt.plot(epochs, accuracy_val, 'b', label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.xlabel('Epochs')
plt.legend()
plt.show()
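# The loss curves extracted above (loss_train, loss_val) are not plotted yet; this
# quick sketch shows them as well, reusing the plt and epochs objects defined above.
plt.plot(epochs, loss_train, 'g', label='Training Loss')
plt.plot(epochs, loss_val, 'b', label='Validation Loss')
plt.title('Training and Validation Loss')
plt.xlabel('Epochs')
plt.legend()
plt.show()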
###Output
_____no_output_____
###Markdown
Predicting Locally And we're done! Time to see if our model works.
###Code
def predict(list_of_input_sentences, mode='deploy'):
'''
Input: A list of sentences and a mode. 'Test' mode is used to simply generate
and print the string-emoji pairs to make sure everything works properly.
'Deploy' mode is used when we want to return a prediction for one
sentence at a time.
Task: Use the model built to make emoji predictions with the given sentences.
'''
number_of_input_sentences = len(list_of_input_sentences)
embedded_list = embed(list_of_input_sentences)
prediction = model.predict(embedded_list)
if mode == 'test':
for i in range(number_of_input_sentences):
print ("{} : {}".format(list_of_input_sentences[i], label_to_emoji(tf.argmax(prediction[i]))))
elif mode == 'deploy':
return tf.argmax(prediction[0])
###Output
_____no_output_____
###Markdown
Here are some relatively straightforward sentences. If our model gets these too badly wrong, there might be a problem.
###Code
predict(["I feel like dancing today!",
"Now THAT was funny!",
"I am terrified.",
"This isn't fair!",
"That hurts!"], mode='test')
###Output
I feel like dancing today! : 😃
Now THAT was funny! : 😂
I am terrified. : 😨
This isn't fair! : 😠
That hurts! : 😭
###Markdown
Everything works fine. Now let's try to build out a mini-version of the comment system.
###Code
class Comments:
def __init__(self):
self.comments = []
self.generated_emojis = []
self.num_happy = 0
self.num_laugh = 0
self.num_scared=0
self.num_angry=0
self.num_sad = 0
def input_sentences(self, list_of_input_sentences):
'''
Input: A list of comments.
Task: Generates emojis from the comments and saves the comment-emoji pairs.
'''
for sentence in list_of_input_sentences:
self.comments.append(sentence)
result = predict([sentence])
self.generated_emojis.append(label_to_emoji(result))
if result == 0:
self.num_happy = self.num_happy + 1
elif result == 1:
self.num_laugh = self.num_laugh + 1
elif result == 2:
self.num_scared = self.num_scared + 1
elif result == 3:
self.num_angry = self.num_angry + 1
elif result == 4:
self.num_sad = self.num_sad + 1
def display_comments(self):
'''
Task: Displays the entire comment system.
'''
print("{} 😃 {} 😂 {} 😨 {} 😠 {} 😭".format(self.num_happy, self.num_laugh, self.num_scared, self.num_angry, self.num_sad))
print ("___________________________________\n\n")
comment_emoji_pairs = zip(self.comments, self.generated_emojis)
for comment, emoji in comment_emoji_pairs:
print ("{:<75}{:>1}\n".format(comment, emoji))
comment_system = Comments()
comment_system.input_sentences(["The darkness is closing all around us.",
"This is upsetting.",
"Your smile is beautiful!"])
comment_system.display_comments()
comment_system.input_sentences(["Are you mad?",
"Hilarious!"])
comment_system.display_comments()
###Output
2 😃 0 😂 2 😨 1 😠 0 😭
___________________________________
The darkness is closing all around us. 😨
This is upsetting. 😨
Your smile is beautiful! 😃
Are you mad? 😠
Hilarious! 😃
###Markdown
Training With AI Platform Jobs Now that we've built our model and are happy with its performance, let's talk about training on AI Platform Jobs. Our notebook was used for rapid prototyping, and while we can (and have) produced a working model here, it may be better to train on Jobs:
1. Jobs can be orchestrated within an AI pipeline.
2. They run on temporarily allotted hardware, and unlike notebooks that are expensive to keep running, you're only charged for a job while it is running.
3. You can use AI Platform's hyperparameter tuning service to find the best set of hyperparameters to use with your model.

The approach is as follows:
1. Train on a small dataset within your notebook (prototyping), and then with the entire dataset within Jobs. In this case, our dataset was small anyway, so this doesn't make a difference to us.
2. When satisfied with the model code, build a Python package containing a task.py file (used only to read the hyperparameters passed as command-line arguments) and a model.py file (which includes all the model code and program logic).
3. Train the model with Jobs and save the model in Google Cloud Storage.

First, we make a *train* folder containing a *trainer* package folder.
###Code
%mkdir train
%mkdir train/trainer
###Output
_____no_output_____
###Markdown
Let's start by writing an \_\_init\_\_.py file. This is required for the folder to be treated as a Python package.
###Code
%%writefile train/trainer/__init__.py
# This exists only as a formality
###Output
Writing train/trainer/__init__.py
###Markdown
Next, let's write the model.py file.
###Code
%%writefile train/trainer/model.py
import tensorflow as tf
import keras
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import csv
import random
import os
list_of_emojis = ['😃','😂','😨','😠','😭']
def label_to_emoji(label):
return list_of_emojis[label]
#-------------------------------------------------------------------------------
# This block is only for use on GCP
import gcsfs
def prepare_data(filename):
data = []
labels = []
if filename.startswith('gs://'):
fs = gcsfs.GCSFileSystem(project='durable-will-291417')
with fs.open(filename, "rt", encoding="ascii") as my_dataset:
reader = csv.reader(my_dataset, delimiter=',')
for row in reader:
data.append(row[0])
labels.append(int(row[1]))
else:
with open(filename) as my_dataset:
reader = csv.reader(my_dataset, delimiter=',')
for row in reader:
data.append(row[0])
labels.append(int(row[1]))
return (data, labels)
#-------------------------------------------------------------------------------
#-------------------------------------------------------------------------------
# This block is only for use on Colab
# def prepare_data(filename):
# data = []
# labels = []
# with open(filename) as my_dataset:
# reader = csv.reader(my_dataset, delimiter=',')
# for row in reader:
# data.append(row[0])
# labels.append(int(row[1]))
# return (data, labels)
#-------------------------------------------------------------------------------
def shuffle_data(data, labels):
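    # Shuffle the (sentence, label) pairs together, then split them into a
    # 150-example training set and a 50-example test set.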
X_train = []
Y_train = []
X_test = []
Y_test = []
list_to_shuffle = list(zip(data, labels))
random.shuffle(list_to_shuffle)
shuffled_data, shuffled_labels = zip(*list_to_shuffle)
X_train = shuffled_data[0:150]
Y_train = shuffled_labels[0:150]
X_test = shuffled_data[150:200]
Y_test = shuffled_labels[150:200]
return (X_train, Y_train, X_test, Y_test)
def Model():
model = keras.Sequential([keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=1)),
keras.layers.LSTM(units=128, return_sequences=True),
keras.layers.Dropout(rate=0.5),
keras.layers.LSTM(units=128, return_sequences=False),
keras.layers.Dropout(rate=0.5),
keras.layers.Dense(units=5),
keras.layers.Activation("softmax")])
model.compile(loss='categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), metrics=['accuracy'])
return model
def convert_to_one_hot(labels):
return tf.one_hot(labels, 5)
def load_hub_module():
embed = hub.load("https://tfhub.dev/google/nnlm-en-dim128-with-normalization/2")
return embed
def train_model(model,embed, X_train, Y_train_oh, X_test, Y_test_oh, num_epochs=100, batch_size=10, shuffle=True):
return model.fit(embed(X_train),
Y_train_oh,
epochs = num_epochs,
batch_size = batch_size,
shuffle=shuffle,
validation_data=(embed(X_test), Y_test_oh))
# A helper method that uses all the above methods to create a model and save it
# in a Cloud Bucket
def train_and_save_model(dataset_location, output_dir):
data, labels = prepare_data(dataset_location)
print ("Data loading successful.\n")
X_train, Y_train, X_test, Y_test = shuffle_data(data, labels)
print ("Data shuffling and splitting successful.\n")
model = Model()
print ("Model creation successful.\n")
embed = load_hub_module()
print ("Hub module loading successful.\n")
Y_train_oh = convert_to_one_hot(labels=Y_train)
Y_test_oh = convert_to_one_hot(labels=Y_test)
print ("One-hot encoding successful.\n")
print ("Model training initiated.\n")
history = train_model(model, embed, X_train, Y_train_oh, X_test, Y_test_oh)
print ("\nModel training completed.\n")
savedmodel_dir = os.path.join(output_dir, 'savedmodel')
tf.saved_model.save(model, savedmodel_dir)
print ("\nModel saving completed.\n")
###Output
Writing train/trainer/model.py
###Markdown
Lastly, let's write a task.py file that reads in command line arguments, trains the model, and then saves it.
###Code
%%writefile train/trainer/task.py
import argparse
from trainer import model
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
"--data-file",
help="Path to the dataset file."
)
parser.add_argument(
'--job-dir',
help='Location where the model should be saved.',
)
args = parser.parse_args()
hparams = args.__dict__
model.train_and_save_model(hparams['data_file'], hparams['job_dir'])
###Output
Writing train/trainer/task.py
###Markdown
With that done, let's move the dataset into a cloud bucket as Jobs cannot read from a locally stored file (make the file public as well). Then, let's make sure the above code works by running task.py with Jobs, locally.
###Code
# The next cell is only for use on GCP
%%bash
gsutil cp -r heartfelt_dataset.csv gs://gccd_heartfelt
gsutil acl ch -u AllUsers:R gs://gccd_heartfelt/heartfelt_dataset.csv
# The next cell is only for use on GCP
%%bash
gcloud ai-platform local train \
--job-dir '/home/jupyter/' \
--module-name trainer.task \
--package-path train/trainer/ \
-- \
--data-file='gs://gccd_heartfelt/heartfelt_dataset.csv' \
# The next cell is only for use on Colab
# %%bash
# cd train
# python -m trainer.task \
# --data-file='/content/heartfelt_dataset.csv' \
# --job-dir='/content'
###Output
_____no_output_____
###Markdown
Now that we're sure it works, we can use AI Platform Jobs to perform the training.
###Code
# The next cell is only for use on GCP
# Increment the job number every time you run to avoid conflicts
%%bash
JOB_NAME=heartfelt_training_job_07
gcloud ai-platform jobs submit training $JOB_NAME \
--job-dir gs://gccd_heartfelt/$JOB_NAME \
--runtime-version 2.3 \
--python-version 3.7 \
--module-name trainer.task \
--package-path train/trainer \
--region us-central1 \
-- \
--data-file='gs://gccd_heartfelt/heartfelt_dataset.csv' \
###Output
jobId: heartfelt_training_job_07
state: QUEUED
###Markdown
Predicting With AI Platform Now that our model has been trained and saved in a GCS bucket, we can make it accessible to the public (here, our company's website comment system). We will deploy the model to the AI Platform Prediction service.
###Code
# The next cell is only for use on GCP
%%bash
MODEL_NAME=heartfelt_comment_system
REGION=us-central1
gcloud ai-platform models create $MODEL_NAME --regions=$REGION
MODEL_PATH=gs://gccd_heartfelt/heartfelt_training_job_06/savedmodel/
gcloud ai-platform versions create v1 \
--model $MODEL_NAME \
--origin $MODEL_PATH \
--runtime-version 2.3 \
--python-version 3.7
# The next cell is only for use on GCP
import googleapiclient.discovery
PROJECT='durable-will-291417'
MODEL_NAME='heartfelt_comment_system'
VERSION_NAME='v1'
service = googleapiclient.discovery.build('ml', 'v1', cache_discovery=False)
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT, MODEL_NAME, VERSION_NAME)
list_of_inputs = ["What a beautiful day!", "Disgusting!", "Mommy, I'm scared."]
response = service.projects().predict(name=name,body={'instances': embed(list_of_inputs).numpy().tolist()}).execute()
if 'error' in response:
raise RuntimeError(response['error'])
else:
results = response['predictions']
for i, result in enumerate(results):
prediction = tf.argmax(result['activation'])
print ("{} : {}".format(list_of_inputs[i], label_to_emoji(prediction.numpy())))
###Output
What a beautiful day! : 😃
Disgusting! : 😠
Mommy, I'm scared. : 😨
|
02_usecases/archive/object_detection/wip/4-custom-celebrity-recognition.ipynb | ###Markdown
Custom Celebrity Recognition Using Amazon Rekognition ***This notebook provides a walkthrough of recognizing custom celebrities using Amazon Rekognition. You will first index faces of custom celebrities and then use SearchFaces API (https://docs.aws.amazon.com/rekognition/latest/dg/API_SearchFacesByImage.html and https://docs.aws.amazon.com/rekognition/latest/dg/API_StartFaceSearch.html) with sample image and video to detect custom celebrities.*** Initialize Stuff***
###Code
# initialise Notebook
import boto3
from IPython.display import HTML, display, Image as IImage
from PIL import Image, ImageDraw, ImageFont
import time
import os
from io import BytesIO
# Get current region to choose correct bucket
mySession = boto3.session.Session()
awsRegion = mySession.region_name
# Initialize clients
rekognition = boto3.client('rekognition')
dynamodb = boto3.client('dynamodb')
s3 = boto3.client('s3')
# S3 bucket that contains sample images and videos
# We are providing sample images and videos in this bucket so
# you do not have to manually download/upload test images and videos.
bucketName = "aws-workshops-" + awsRegion
# DynamoDB Table and Rekognition Collection names. We will be creating these in this module.
ddbTableName = "my-celebrities"
collectionId = "my-celebrities"
# Create temporary directory
# This directory is not needed to call Rekognition APIs.
# We will only use this directory to download images from S3 bucket and draw bounding boxes
!mkdir m2tmp
tempFolder = 'm2tmp/'
###Output
_____no_output_____
###Markdown
DynamoDB table to store custom celebrity metadata***In this step we will create a DynamoDB table to store custom celebrity metadata including id, name and url. You can store additional attributes for each celebrity if needed. List existing DynamoDB tables in your account
###Code
# List existing DynamoDB Tables
# Before creating DynamoDB table, let us first look at the list of existing DynamoDB tables in our account.
listTablesResponse = dynamodb.list_tables()
display(listTablesResponse["TableNames"])
###Output
_____no_output_____
###Markdown
Create new DynamoDB Table
###Code
# Create new DynamoDB Table
createTableResponse = dynamodb.create_table(
TableName=ddbTableName,
KeySchema=[
{
'AttributeName': 'id',
'KeyType': 'HASH' #Partition key
}
],
AttributeDefinitions=[
{
'AttributeName': 'id',
'AttributeType': 'S'
},
],
ProvisionedThroughput={
'ReadCapacityUnits': 10,
'WriteCapacityUnits': 10
}
)
display(createTableResponse)
###Output
_____no_output_____
###Markdown
List DynamoDB Tables in your account to see newly created table
###Code
# List DynamoDB Tables
# Let us look at list of our DynamoDB tables again to make sure that table we just created appears in the list.
listTablesResponse = dynamodb.list_tables()
display(listTablesResponse["TableNames"])
###Output
_____no_output_____
###Markdown
Rekognition Collection to store faces***In this step we will create a Rekognition Collection. Amazon Rekognition can store information about detected faces in server-side containers known as [collections](https://docs.aws.amazon.com/rekognition/latest/dg/collections.html). You can use the facial information that's stored in a collection to search for known faces in images, stored videos, and streaming videos. In this section you will learn how you can create and manage Rekognition Collections. List Rekognition Collections
###Code
# List Rekognition Collections
# Let us first see if we have already created any Rekognition collections in our account.
# If there is no existing Rekognition collection in your account, you will see an empty list,
# otherwise you will see a list with the names of Rekognition collections and their face model versions.
listCollectionsResponse = rekognition.list_collections()
display(listCollectionsResponse["CollectionIds"])
display(listCollectionsResponse["FaceModelVersions"])
###Output
_____no_output_____
###Markdown
Create new Rekognition collection
###Code
#cids = listCollectionsResponse["CollectionIds"]
#for cid in cids:
# rekognition.delete_collection(CollectionId=cid)
# Create Rekognition Collection
# Let us now create a new Rekognition collection that we will use to store faces of custom celebrities.
createCollectionResponse = rekognition.create_collection(
CollectionId=collectionId
)
display(createCollectionResponse)
###Output
_____no_output_____
###Markdown
List Rekognition collections to see newly created Rekognition collection
###Code
# List Rekognition Collections
# Let us make sure that the collection we just created now appears in the list of collections in our AWS account.
listCollectionsResponse = rekognition.list_collections()
display(listCollectionsResponse["CollectionIds"])
display(listCollectionsResponse["FaceModelVersions"])
###Output
_____no_output_____
###Markdown
View additional information about the collection we just created
###Code
# Describe Rekognition Collection
# You can use DescribeCollection to get information,
# such as the number of faces indexed into a collection
# and the version of the model used by the collection for face detection etc.
# https://docs.aws.amazon.com/rekognition/latest/dg/API_DescribeCollection.html
# Since we have not indexed any faces yet, you should see FaceCount: 0
describeCollectionResponse = rekognition.describe_collection(
CollectionId=collectionId
)
display(describeCollectionResponse)
###Output
_____no_output_____
###Markdown
Index Custom Celebrity Faces***In this step, you will index faces of custom celebrities in Rekognition collection and store their additional information in the DynamoDB table created in earlier steps.We will index multiple images for each celebrity. By indexing multiple faces we increase the likelihood of detecting celebrities when their face is at different angles, etc. We will use [IndexFaces](https://docs.aws.amazon.com/rekognition/latest/dg/API_IndexFaces.html) to detect faces in the input image and [add them](https://docs.aws.amazon.com/rekognition/latest/dg/add-faces-to-collection-procedure.html) to the specified collection.You can read more about some of the best practices around [indexing faces here in the blog](https://aws.amazon.com/blogs/machine-learning/save-time-and-money-by-filtering-faces-during-indexing-with-amazon-rekognition/). Define methods to add face to Rekognition collection and add related attributes to DynamoDB
###Code
# We will define a method to index a face along with the celebrity id
# https://docs.aws.amazon.com/rekognition/latest/dg/API_IndexFaces.html
def indexFace (bucketName, imageName, celebrityId):
indexFaceResponse = rekognition.index_faces(
CollectionId=collectionId,
Image={
'S3Object': {
'Bucket': bucketName,
'Name': imageName,
}
},
ExternalImageId=celebrityId,
DetectionAttributes=[
'DEFAULT' #'DEFAULT'|'ALL',
],
MaxFaces=1,
QualityFilter='AUTO' #NONE | AUTO | LOW | MEDIUM | HIGH
)
display(indexFaceResponse)
# We will define a method to write metadata (id, name, url) of celebrity to DynamoDB
def addCelebrityToDynamoDB(celebrityId, celebrityName, celebrityUrl):
ddbPutItemResponse = dynamodb.put_item(
Item={
'id': {'S': celebrityId},
'name': {'S': celebrityName},
'url': { 'S': celebrityUrl},
},
TableName=ddbTableName,
)
###Output
_____no_output_____
###Markdown
Index first celebrity
###Code
# Index Celebrity 1
celebrityId = "1"
celebrityName = "Antje Barth"
celebrityUrl = "https://datascienceonaws.com"
addCelebrityToDynamoDB(celebrityId, celebrityName, celebrityUrl)
###Output
_____no_output_____
###Markdown
Index face 1
###Code
display(IImage(url=s3.generate_presigned_url('get_object', Params={'Bucket': bucketName, 'Key': "content-moderation/media/antje.png"})))
# After you run this cell the biggest face from the image will be indexed.
# You will get a JSON response with a variety of information, but notice FaceId, ImageId and ExternalImageId
# Later when we will search celebrities, we will use this ExteralImageId to extract metadata from DynamoDB.
indexFace(bucketName, "content-moderation/media/antje.png", celebrityId)
###Output
_____no_output_____
###Markdown
Index face 2
###Code
display(IImage(url=s3.generate_presigned_url('get_object', Params={'Bucket': bucketName, 'Key': "content-moderation/media/chris02.png"})))
indexFace(bucketName, "content-moderation/media/chris02.png", celebrityId)
###Output
_____no_output_____
###Markdown
Index face 3
###Code
display(IImage(url=s3.generate_presigned_url('get_object', Params={'Bucket': bucketName, 'Key': "content-moderation/media/chris03.png"})))
indexFace(bucketName, "content-moderation/media/chris03.png", celebrityId)
# Describe Rekognition Collection
# With three faces indexed for celebrity 1, you should now see FaceCount: 3
describeCollectionResponse = rekognition.describe_collection(
CollectionId=collectionId
)
display("FaceCount: {0}".format(describeCollectionResponse["FaceCount"]))
###Output
_____no_output_____
###Markdown
Index second celebrity
###Code
# Index Celebrity 2
celebrityId = "2"
celebrityName = "Kashif Imran"
celebrityUrl = "http://aws.amazon.com"
addCelebrityToDynamoDB(celebrityId, celebrityName, celebrityUrl)
###Output
_____no_output_____
###Markdown
Index face 1
###Code
display(IImage(url=s3.generate_presigned_url('get_object', Params={'Bucket': bucketName, 'Key': "celebrity-rekognition/media/kashif01.jpg"})))
indexFace(bucketName, "celebrity-rekognition/media/kashif01.jpg", celebrityId)
###Output
_____no_output_____
###Markdown
Index face 2
###Code
display(IImage(url=s3.generate_presigned_url('get_object', Params={'Bucket': bucketName, 'Key': "celebrity-rekognition/media/kashif02.jpg"})))
indexFace(bucketName, "celebrity-rekognition/media/kashif02.jpg", celebrityId)
###Output
_____no_output_____
###Markdown
Index face 3
###Code
display(IImage(url=s3.generate_presigned_url('get_object', Params={'Bucket': bucketName, 'Key': "celebrity-rekognition/media/kashif03.jpg"})))
indexFace(bucketName, "celebrity-rekognition/media/kashif03.jpg", celebrityId)
# Describe Rekognition Collection
# You should now have FaceCount: 6 since we have indexed 3 faces for each of the 2 celebrities we indexed.
describeCollectionResponse = rekognition.describe_collection(
CollectionId=collectionId
)
display("FaceCount: {0}".format(describeCollectionResponse["FaceCount"]))
###Output
_____no_output_____
###Markdown
Recognize custom celebrities in image***Now let us try the image with custom celebrities and see if we can recognize people in that image.
###Code
imageName = "content-moderation/media/serverless-bytes.png"
display(IImage(url=s3.generate_presigned_url('get_object', Params={'Bucket': bucketName, 'Key': imageName})))
###Output
_____no_output_____
###Markdown
Call Rekognition to recognize custom celebrity in image by using face search
###Code
searchFacesResponse = rekognition.search_faces_by_image(
CollectionId=collectionId,
Image={
'S3Object': {
'Bucket': bucketName,
'Name': imageName,
}
},
MaxFaces=2,
FaceMatchThreshold=95
)
###Output
_____no_output_____
###Markdown
Review raw JSON response of search face by image API call
###Code
# You will see Rekognition response with SearchedFaceBoundingBox (which contains information about the biggest face
# in the image). Rekognition also returns FaceMatches, a list of matched faces. Each matched face has additional
# information including FaceId, ImageId and ExternalImageId. We will use ExternalImageId to extract information
# from DynamoDB about this celebrity.
display(searchFacesResponse)
###Output
_____no_output_____
###Markdown
Display image with bounding box around recognized custom celebrity
###Code
# Define functions to show image and bounded boxes around recognized celebrities
def displayWithBoundingBoxes (sourceImage, boxes):
    # light grey, lighter grey, light blue, green
colors = ((220,220,220),(230,230,230),(76,182,252),(52,194,123))
# Download image locally
imageLocation = tempFolder+os.path.basename(sourceImage)
s3.download_file(bucketName, sourceImage, imageLocation)
# Draws BB on Image
bbImage = Image.open(imageLocation)
draw = ImageDraw.Draw(bbImage)
width, height = bbImage.size
col = 0
maxcol = len(colors)
line= 3
for box in boxes:
x1 = int(box[1]['Left'] * width)
y1 = int(box[1]['Top'] * height)
x2 = int(box[1]['Left'] * width + box[1]['Width'] * width)
y2 = int(box[1]['Top'] * height + box[1]['Height'] * height)
draw.text((x1,y1),box[0],colors[col])
for l in range(line):
draw.rectangle((x1-l,y1-l,x2+l,y2+l),outline=colors[col])
col = (col+1)%maxcol
imageFormat = "PNG"
ext = sourceImage.lower()
if(ext.endswith('jpg') or ext.endswith('jpeg')):
imageFormat = 'JPEG'
bbImage.save(imageLocation,format=imageFormat)
display(bbImage)
def getDynamoDBItem(itemId):
ddbGetItemResponse = dynamodb.get_item(
Key={'id': {'S': itemId} },
TableName=ddbTableName
)
itemToReturn = ('', '', '')
if('Item' in ddbGetItemResponse):
itemToReturn = (ddbGetItemResponse['Item']['id']['S'],
ddbGetItemResponse['Item']['name']['S'],
ddbGetItemResponse['Item']['url']['S'])
return itemToReturn
# After your run this cell you should see one of the faces recognized using Amazon Rekognition.
# You only see one face recognized in this example because
# SearchFacesByImage, for a given input image, first detects the largest face in the image,
# and then searches the specified collection for matching faces.
# In next section we will use DetectFaces API call to first detect faces in the image and then
# use SearchFacesByImage for each detected face to get it recognized.
def displaySearchedFace(sfr):
boxes = []
if(len(sfr['FaceMatches']) > 0):
        bb = sfr['SearchedFaceBoundingBox']
eid = sfr['FaceMatches'][0]['Face']['ExternalImageId']
conf = sfr['FaceMatches'][0]['Similarity']
celeb = getDynamoDBItem(eid)
boxes.append(("{0}-{1}-{2}%".format(celeb[0], celeb[1], round(conf,2)), bb))
displayWithBoundingBoxes(imageName, boxes)
displaySearchedFace(searchFacesResponse)
###Output
_____no_output_____
###Markdown
Recognize all custom celebrities in image***Now let us try an image with custom celebrities and see if we can recognize all people in that image. To recognize all faces in the image, we will first call detect faces and then for each face using face search API to recognize each face in the image.
###Code
imageName = "content-moderation/media/serverless-bytes.png"
###Output
_____no_output_____
###Markdown
Define helper functions to detect faces, crop faces in the main image, and then recognize each face
###Code
def detectFaces():
detectFacesResponse = rekognition.detect_faces(
Image={
'S3Object': {
'Bucket': bucketName,
'Name': imageName
}
},
Attributes=['DEFAULT'])
return detectFacesResponse
def getFaceCrop(imageBinary, box, image_width, image_height):
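    # Crop the detected face from the full image, padding the Rekognition bounding
    # box by 25 pixels on each side and clamping the result to the image borders.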
x1 = int(box['Left'] * image_width)-25
y1 = int(box['Top'] * image_height)-25
x2 = int(box['Left'] * image_width + box['Width'] * image_width)+25
y2 = int(box['Top'] * image_height + box['Height'] * image_height)+25
if x1 < 0 : x1=0
if y1 < 0 : y1=0
    if x2 > image_width : x2=image_width
    if y2 > image_height : y2=image_height
coordinates = (x1,y1,x2,y2)
image_crop = imageBinary.crop(coordinates)
stream2 = BytesIO()
iformat = "JPEG"
if(imageName.lower().endswith("png")):
iformat = "PNG"
image_crop.save(stream2,format=iformat)
image_region_binary = stream2.getvalue()
stream2.close()
return image_region_binary
def recognizeFace(faceCrop):
searchFacesResponse = rekognition.search_faces_by_image(
CollectionId=collectionId,
Image={
'Bytes': faceCrop
},
MaxFaces=2,
FaceMatchThreshold=95
)
if(len(searchFacesResponse['FaceMatches']) > 0):
eid = searchFacesResponse['FaceMatches'][0]['Face']['ExternalImageId']
conf = searchFacesResponse['FaceMatches'][0]['Similarity']
celeb = getDynamoDBItem(eid)
return "{0}-{1}-{2}%".format(celeb[0], celeb[1], round(conf,2))
else:
return ""
def recognizeAllCustomCelebrities():
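    # Detect every face in the image, crop each detected face, try to recognize it
    # against the collection, and display the image with labeled bounding boxes.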
detectedFaces = detectFaces()
# Download image locally
imageLocation = tempFolder+os.path.basename(imageName)
s3.download_file(bucketName, imageName, imageLocation)
imageBinary = Image.open(imageLocation)
width, height = imageBinary.size
boxes = []
for detectedFace in detectedFaces['FaceDetails']:
faceCrop = getFaceCrop(imageBinary, detectedFace['BoundingBox'], width, height)
recognizedFace = recognizeFace(faceCrop)
if(recognizedFace):
boxes.append((recognizedFace, detectedFace['BoundingBox']))
else:
boxes.append(("Unrecognized Face", detectedFace['BoundingBox']))
displayWithBoundingBoxes(imageName, boxes)
recognizeAllCustomCelebrities()
###Output
_____no_output_____
###Markdown
Recognize custom celebrities in video***
###Code
videoName = "content-moderation/media/serverless-bytes.mov"
###Output
_____no_output_____
###Markdown
Start face search job to find faces in the video that match faces in our Rekognition collection
###Code
startFaceSearchResponse = rekognition.start_face_search(
Video={
'S3Object': {
'Bucket': bucketName,
'Name': videoName
}
},
FaceMatchThreshold=90,
CollectionId=collectionId,
)
faceSearchJobId = startFaceSearchResponse['JobId']
display("Job ID: {0}".format(faceSearchJobId))
###Output
_____no_output_____
###Markdown
Wait until the face search job is complete
###Code
getFaceSearch = rekognition.get_face_search(
JobId=faceSearchJobId,
SortBy='TIMESTAMP'
)
while(getFaceSearch['JobStatus'] == 'IN_PROGRESS'):
time.sleep(5)
print('.', end='')
getFaceSearch = rekognition.get_face_search(
JobId=faceSearchJobId,
SortBy='TIMESTAMP'
)
display(getFaceSearch['JobStatus'])
###Output
_____no_output_____
###Markdown
Review raw JSON response from Rekognition
###Code
display(getFaceSearch)
###Output
_____no_output_____
###Markdown
Show recognized custom celebrities in the video
###Code
theCelebs = {}
# Display timestamps and celebrites detected at that time
strDetail = "Celebrites detected in vidoe<br>=======================================<br>"
strOverall = "Celebrities in the overall video:<br>=======================================<br>"
# Faces detected in each frame
for person in getFaceSearch['Persons']:
if('FaceMatches' in person and len(person["FaceMatches"])> 0):
ts = person["Timestamp"]
theFaceMatches = {}
for fm in person["FaceMatches"]:
conf = fm["Similarity"]
eid = fm["Face"]["ExternalImageId"]
if(eid not in theFaceMatches):
theFaceMatches[eid] = (eid, ts, round(conf,2))
if(eid not in theCelebs):
theCelebs[eid] = (getDynamoDBItem(eid))
for theFaceMatch in theFaceMatches:
celeb = theCelebs[theFaceMatch]
fminfo = theFaceMatches[theFaceMatch]
strDetail = strDetail + "At {0} ms<br> {2} (ID:{1}) Conf: {4}%<br>".format(fminfo[1],
celeb[0], celeb[1], celeb[2], fminfo[2])
# Unique faces detected in video
for theCeleb in theCelebs:
tc = theCelebs[theCeleb]
strOverall = strOverall + "{1} (ID: {0})<br>".format(tc[0], tc[1], tc[2])
# Display results
display(HTML(strOverall))
###Output
_____no_output_____
###Markdown
Display video in player
###Code
# Display video in player
s3VideoUrl = s3.generate_presigned_url('get_object', Params={'Bucket': bucketName, 'Key': videoName})
videoTag = "<video controls='controls' autoplay width='640' height='360' name='Video' src='{0}'></video>".format(s3VideoUrl)
videoui = "<table><tr><td style='vertical-align: top'>{}</td><td>{}</td></tr></table>".format(videoTag, strDetail)
display(HTML(videoui))
###Output
_____no_output_____
###Markdown
Index additional faces of known celebrities to improve recognition of these celebrities You can further improve the performance of your solution by indexing faces of celebrities that the Rekognition celebrity API can already recognize in most of your media, but might not perform as well on in certain situations. Below we are indexing a few images of Jeremy Clarkson and Richard Hammond even though they are recognized well by Rekognition's celebrity API. We are using the same ID for them that the Rekognition Celebrity API returns, so we can detect when both the Celebrity API and the Face API recognize the same celebrity in a frame.
###Code
# Index Celebrity 3
celebrityId = "2mW0ey5n"
celebrityName = "Jeremy Clarkson"
celebrityUrl = "https://www.imdb.com/name/nm0165087/"
addCelebrityToDynamoDB(celebrityId, celebrityName, celebrityUrl)
indexFace(bucketName, "celebrity-rekognition/media/jc04.png", celebrityId)
indexFace(bucketName, "celebrity-rekognition/media/jc05.png", celebrityId)
# Index Celebrity 4
celebrityId = "4TK3NJ"
celebrityName = "Richard Hammond"
celebrityUrl = "https://www.imdb.com/name/nm1414369/"
addCelebrityToDynamoDB(celebrityId, celebrityName, celebrityUrl)
indexFace(bucketName, "celebrity-rekognition/media/rh01.png", celebrityId)
###Output
_____no_output_____ |
Interns/Olivia/Machine_learning_style_preprocessing_with_lightkurve.ipynb | ###Markdown
###Code
!pip install lightkurve
import lightkurve as lk
import numpy as np
lk.__version__
###Output
_____no_output_____
###Markdown
Let’s look at exoplanet host star KIC 757450, which has a period of $P = 8.88492$ d, $T_0 = 134.452$ d, and a transit duration of 2.078 hours.
###Code
lcfs = lk.search_lightcurvefile('KIC 757450', mission='Kepler').download_all()
period, t0, duration_hours = 8.88492, 134.452, 2.078
#We can stitch all of the quarters together in one fell swoop.
lc_raw = lcfs.PDCSAP_FLUX.stitch()
lc_raw.flux.shape
#Clean outliers, but only those that are above the mean level
#(e.g. attributable to stellar flares or cosmic rays).
lc_clean = lc_raw.remove_outliers(sigma=20, sigma_upper=4)
###Output
_____no_output_____
###Markdown
We have to mask the transit to avoid self-subtracting the genuine planet signal when we flatten the lightcurve. We use a small hack to find where the time series should be masked.
###Code
temp_fold = lc_clean.fold(period, t0=t0)
fractional_duration = (duration_hours / 24.0) / period
phase_mask = np.abs(temp_fold.phase) < (fractional_duration * 1.5)
transit_mask = np.in1d(lc_clean.time, temp_fold.time_original[phase_mask])
lc_flat, trend_lc = lc_clean.flatten(return_trend=True, mask=transit_mask)
lc_fold = lc_flat.fold(period, t0=t0)
lc_global = lc_fold.bin(bins=2001, method='median').normalize() - 1
lc_global = (lc_global / np.abs(lc_global.flux.min()) ) * 2.0 + 1
lc_global.flux.shape
lc_global.scatter()
phase_mask = (lc_fold.phase > -4*fractional_duration) & (lc_fold.phase < 4.0*fractional_duration)
lc_zoom = lc_fold[phase_mask]
#focus on one transit
lc_local = lc_zoom.bin(bins=201, method='median').normalize() -1
lc_local = (lc_local / np.abs(lc_local.flux.min()) ) * 2.0 + 1
lc_local.flux.shape
lc_local.scatter();
###Output
_____no_output_____ |
week5part3.ipynb | ###Markdown
As seen above we have a clear outlier that lies way outside SF, probably a typo. Hence we filter this datapoint out:
###Code
X = X[X['lon'] < -122]
X.plot(kind='scatter', x='lon', y='lat')
###Output
_____no_output_____
###Markdown
The points now all seem to be within SF borders
###Code
from sklearn.cluster import KMeans
#To work with out cluster we have to turn our panda dataframe into a numpy array,
np_X = np.array(X)
kmeans = KMeans(n_clusters=2)
kmeans.fit(np_X)
centroid = kmeans.cluster_centers_
labels = kmeans.labels_
print "The %s cluster centers are located at %s " %(len(centroid),centroid)
colors = ["g.","r.","c."]
for i in range(len(np_X)):
plt.plot(np_X[i][0],np_X[i][1],colors[labels[i]],markersize=10)
plt.scatter(centroid[:,0],centroid[:,1], marker = "x", s=150, linewidths = 5, zorder =10)
plt.show()
###Output
The 2 cluster centers are located at [[-122.4178032 37.78740516]
[-122.41826714 37.7605898 ]]
###Markdown
I will now look at the total error (the sum of distances from each point to its assigned cluster centre) in relation to the number of clusters, to find the ideal knee (elbow) point:
###Code
from sklearn.cluster import KMeans
#To work with out cluster we have to turn our panda dataframe into a numpy array,
np_X = X
kmeans = KMeans(n_clusters=2)
kmeans.fit(np_X)
centroid = kmeans.cluster_centers_
classified_data = kmeans.labels_
labels = kmeans.labels_
print "The %s cluster centers are located at %s " %(len(centroid),centroid)
classified_data
#copy dataframe (may be memory intensive but just for illustration)
df_processed = X.copy()
df_processed['Cluster Class'] = pd.Series(classified_data, index=df_processed.index)
df_processed.head()
centroid_df = DataFrame(centroid)
centroid_df.head()
df_processed.plot(kind='scatter', x='lon', y='lat',
c = 'Cluster Class', label='datapoints');
"""
import numpy
import pandas
from matplotlib import pyplot
import seaborn
seaborn.set(style='ticks')
numpy.random.seed(0)
N = 37
_genders= ['Female', 'Male', 'Non-binary', 'No Response']
df = pandas.DataFrame({
'Height (cm)': numpy.random.uniform(low=130, high=200, size=N),
'Weight (kg)': numpy.random.uniform(low=30, high=100, size=N),
'Gender': numpy.random.choice(_genders, size=N)
})
fg = seaborn.FacetGrid(data=df, hue='Gender', hue_order=_genders, aspect=1.61)
fg.map(pyplot.scatter, 'Weight (kg)', 'Height (cm)').add_legend()
########################################
import seaborn
seaborn.set(style='ticks')
fg = seaborn.FacetGrid(data=df_processed, hue='Cluster Class', hue_order=_classes, aspect=1.61)
fg.map(pyplot.scatter, 'Lat', 'Lon').add_legend()
"""
from scipy.spatial import distance
def dist_euc(lon,lat,centroid):
data_cord = [lon,lat]
return distance.euclidean(data_cord,centroid)
df_processed['distance'] = df_processed.apply(lambda row: dist_euc(row['lon'], row['lat'],centroid[row['Cluster Class']]), axis=1)
df_processed.head()
ksum = []
def get_ksum(k):
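    # For each number of clusters from 1 to k-1, fit KMeans and accumulate the
    # Euclidean distance of every point to its assigned cluster centroid.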
lonList = X['lon'].tolist()
latList = X['lat'].tolist()
for i in range(1,k):
kmeans = KMeans(n_clusters=i)
kmeans.fit(X)
centroid = kmeans.cluster_centers_
labels = kmeans.labels_
tmp_sum = 0
for index, row in enumerate(lonList):
tmp_sum += dist_euc(lonList[index], latList[index], centroid[labels[index]])
ksum.append(tmp_sum)
get_ksum(10)
print ksum
#I Transform my data into a Dataframe to do easy and pretty plotting :-)
ksum_df = DataFrame(ksum, index = range(1,10))
ksum_df.plot()
###Output
_____no_output_____
###Markdown
As seen, the error drops dramatically as we move from 1 to 2 clusters. It also drops rather significantly from 2 to 3, though not nearly as much as the previous drop. The optimal solution would hence be either 2 or 3 clusters. CSV exporter for D3 data
###Code
import csv
csv_file = df_processed[['lon','lat','Cluster Class']].values
csv_file
with open('datapoints.csv','wb') as f:
w = csv.writer(f)
w.writerows(csv_file)
df_csv.head()
df_csv = X.copy(deep = True)
centroid_list = []
for i in range(1,7):
kmeans = KMeans(n_clusters=i)
kmeans.fit(X)
centroid = kmeans.cluster_centers_
labels = kmeans.labels_
column = "k%s" %i
df_csv[column] = labels
centroid_not_np = centroid.tolist()
centroid_list.append(centroid_not_np)
df_csv.head()
centroid_list
df_csv.to_csv('csv_clusters.csv', index=False)
with open('centroids.csv','wb') as csvfile:
w = csv.writer(csvfile,quoting=csv.QUOTE_MINIMAL)
w.writerows(centroid_list)
###Output
_____no_output_____ |
iris_classification_validationsplit.ipynb | ###Markdown
Evaluation
###Code
model.evaluate(x_data, y_data)
# evaluate returns 2 values: loss and accuracy
from sklearn.metrics import classification_report, confusion_matrix
y_pred = model.predict(x_data)
y_pred.shape, y_pred[4]
y_data.shape, y_data[4]
import numpy as np
y_pred_argmax = np.argmax(y_pred, axis= 1) # row-wise (per sample)
y_pred_argmax.shape, y_pred_argmax[4]
# the shapes now match, so run the classification report
print(classification_report(y_data, y_pred_argmax))
# f1-score: the closer to 1, the better
# accuracy: should be above 0.5
confusion_matrix(y_data, y_pred_argmax)
import seaborn as sns
sns.heatmap( confusion_matrix(y_data, y_pred_argmax), annot= True )
# the diagonal cells should be highlighted (correct predictions)
###Output
_____no_output_____
###Markdown
Service
###Code
x_data[25], y_data[25]
pred = model.predict([[5. , 3. , 1.6, 0.2]])
pred
# looking at the 25th sample, the probability of the 1st class is 0.3302766, of the 2nd class 0.33684278, and so on
import numpy as np
np.argmax(pred)
from sklearn.metrics import roc_curve, auc
###Output
_____no_output_____ |
notebooks/bayes_model.ipynb | ###Markdown
Bayesian Modeling This notebook uses the MNIST dataset to sample two sets of image labels (digits 0 and 1) and classify them using a Gaussian Naive Bayes method.
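For reference, the decision rule implemented below is: for a test image with features $X=(x_1, x_2)$, pick the class $y$ that maximizes $P(y)\prod_i P(x_i \mid y)$, where each likelihood $P(x_i \mid y)$ is a Gaussian $\frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(x_i-\mu)^2}{2\sigma^2}\right)$ with $\mu$ and $\sigma$ estimated from the training features of class $y$, and $P(y)$ is the class prior (the fraction of training images in that class).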
###Code
import scipy.io
import numpy as np
# set the random seed
np.random.seed(42)
def generate_data():
"""This function generates the training and testing data for our model."""
# load the training mat files
file0 = scipy.io.loadmat('../data/train_0_img.mat')
file1 = scipy.io.loadmat('../data/train_1_img.mat')
# get the training data from the files
train0 = file0.get('target_img')
train1 = file1.get('target_img')
# reshape the training data
train0 = np.transpose(train0, axes=[2, 0, 1])
train1 = np.transpose(train1, axes=[2, 0, 1])
# shuffle the data
np.random.shuffle(train0)
np.random.shuffle(train1)
# create new training data arrays
new_train0 = train0[:5000]
new_train1 = train1[:5000]
# save the new training data arrays
scipy.io.savemat('../data/digit0_train.mat', {'target_img': new_train0})
scipy.io.savemat('../data/digit1_train.mat', {'target_img': new_train1})
# repeat the procedure for the testing data
file2 = scipy.io.loadmat('../data/test_0_img.mat')
file3 = scipy.io.loadmat('../data/test_1_img.mat')
# get the testing data from the files
test0 = file2.get('target_img')
test1 = file3.get('target_img')
# reshape the testing arrays
test0 = np.transpose(test0, axes=[2, 0, 1])
test1 = np.transpose(test1, axes=[2, 0, 1])
# save the testing data
scipy.io.savemat('../data/digit0_test.mat', {'target_img': test0})
scipy.io.savemat('../data/digit1_test.mat', {'target_img': test1})
# generate the training and testing data
generate_data()
# load the training and testing .mat files
file0 = scipy.io.loadmat('../data/digit0_train.mat')
file1 = scipy.io.loadmat('../data/digit1_train.mat')
file2 = scipy.io.loadmat('../data/digit0_test.mat')
file3 = scipy.io.loadmat('../data/digit1_test.mat')
# get the training and testing data
train0 = file0.get('target_img')
train1 = file1.get('target_img')
test0 = file2.get('target_img')
test1 = file3.get('target_img')
# data successfully loaded
print([len(train0), len(train1), len(test0), len(test1)])
print('Your trainset and testset are generated successfully!')
import matplotlib.pyplot as plt
%matplotlib inline
def visualize(image):
"""This function plots a given image."""
plt.imshow(image)
plt.show()
# examples
visualize(train0[0])
visualize(train1[0])
def generate_features(data: np.ndarray):
"""This function generates two features for a given data set."""
feature1 = np.array([np.mean(image) for image in data])
feature2 = np.array([np.std(image) for image in data])
return feature1, feature2
# test function
f01, f02 = generate_features(train0)
f11, f12 = generate_features(train1)
# visualize features
import seaborn as sns
from scipy.stats import norm
hist, axs = plt.subplots(1, 2, figsize=(15,5))
hist.suptitle('Feature Kernel Density Estimates');
sns.histplot(f01, kde=True, ax=axs[0], label='Image 0', color='blue');
sns.histplot(f11, kde=True, ax=axs[0], label='Image 1', color='red');
axs[0].set_xlabel('Average Brightness');
axs[0].legend();
sns.histplot(f02, kde=True, ax=axs[1], label='Image 0', color='blue');
sns.histplot(f12, kde=True, ax=axs[1], label='Image 1', color='red');
axs[1].set_xlabel('Image Brightness Standard Deviation');
axs[1].legend();
# combine feature arrays
f1, f2 = np.concatenate((f01, f11), axis=None), np.concatenate((f02, f12), axis=None)
import itertools
def generate_training_parameters(*args):
"""This function generates training parameters for the Bayesian Model."""
parameters = [[np.mean(feature), np.var(feature)] for feature in args]
parameters = np.array(list(itertools.chain.from_iterable(parameters)))
return parameters
# calculate training parameters
parameters = generate_training_parameters(f01, f02, f11, f12)
def naive_bayes(x, μ, σ):
"""This function computes the Naive Bayes prediction for a given sample."""
return (1 / (σ * np.sqrt(2 * np.pi))) * np.exp(-((x - μ)**2 / (2 * σ**2)))
def prior(sample, *args):
"""This function computes the prior probability for a given dataset."""
numerator = len(sample)
denominator = sum([len(data) for data in args])
return numerator / denominator
# calculate priors
p_y0 = prior(train0, train0, train1)
p_y1 = prior(train1, train0, train1)
# calculate testing features
t01, t02 = generate_features(test0)
t11, t12 = generate_features(test1)
def likelihood(data, μ, σ):
"""This function computes the likelihood: P(X|y)."""
return np.array([naive_bayes(x, μ, σ) for x in data])
def compute_probabilities(f1, f2, parameters):
"""This function computes the class probabilities."""
# unpack the training parameters
p1, p2, p3, p4, p5, p6, p7, p8 = parameters
# likelihood calculation: P(X|class 0)
p_x1_y0 = likelihood(f1, p1, np.sqrt(p2))
p_x2_y0 = likelihood(f2, p3, np.sqrt(p4))
p_x_y0 = p_x1_y0 * p_x2_y0
# likelihood calculation: P(X|class 1)
p_x1_y1 = likelihood(f1, p5, np.sqrt(p6))
p_x2_y1 = likelihood(f2, p7, np.sqrt(p8))
p_x_y1 = p_x1_y1 * p_x2_y1
return p_x_y0, p_x_y1
# compute class probabilities for 0 testing images
p_x_y0, p_x_y1 = compute_probabilities(t01, t02, parameters)
prob_y0 = p_x_y0 * p_y0
prob_y1 = p_x_y1 * p_y1
# predict class
prediction0 = [0 if p0 > p1 else 1 for p0, p1 in zip(prob_y0, prob_y1)]
# score 0 image classification
score0 = prediction0.count(0) / len(prediction0)
print('Testing accuracy for 0 labeled images: {}%'.format(100 * round(score0,4)))
###Output
Testing accuracy for 0 labeled images: 91.73%
###Markdown
Repeat the process for the 1 test images
###Code
# compute class probabilities
p_x_y0, p_x_y1 = compute_probabilities(t11, t12, parameters)
prob_y0 = p_x_y0 * p_y0
prob_y1 = p_x_y1 * p_y1
# predict class
prediction1 = [1 if p1 > p0 else 0 for p0, p1 in zip(prob_y0, prob_y1)]
# score 1 image classification
score1 = prediction1.count(1) / len(prediction1)
print('Testing accuracy for 1 labeled images: {}%'.format(100 * round(score1,4)))
###Output
Testing accuracy for 1 labeled images: 92.33%
|
examples/notebooks/general/make_subset_cube.ipynb | ###Markdown
Make a subset of a data cube This notebook shows how to view the metadata of an xcube dataset and how to make a subset xcube dataset for a particular region of interest. --- Importing necessary libraries and functions:
###Code
import shapely
import xarray as xr
from xcube.core.dsio import open_cube
from xcube.core.geom import clip_dataset_by_geometry
###Output
_____no_output_____
###Markdown
Load an xcube dataset containing Sea Surface Temperature data for the Southern North Sea from bucket:
###Code
cube = open_cube('https://s3.eu-central-1.amazonaws.com/xcube-examples/bc-sst-sns-l2c-2017_1x704x640.zarr', s3_kwargs=dict(anon=True) )
###Output
_____no_output_____
###Markdown
View the metadata of the cube:
###Code
cube
###Output
_____no_output_____
###Markdown
Print out the time stamps available in the cube:
###Code
# cube.time.values
###Output
_____no_output_____
###Markdown
Print out the number of time stamps:
###Code
cube.time.shape
###Output
_____no_output_____
###Markdown
View the metadata of a cubes variable:
###Code
cube.analysed_sst
###Output
_____no_output_____
###Markdown
View the shape and the chunking of a data cubes variable:
###Code
cube.analysed_sst.data
###Output
_____no_output_____
###Markdown
Plot a cubes variable for a specific time stamp:
###Code
cube.analysed_sst.isel(time=47).plot.imshow(figsize=(16,10), cmap='plasma')
###Output
_____no_output_____
###Markdown
Define an area to make a subset of a cube:
###Code
x1 = 0.0 # degree
y1 = 50.0 # degree
x2 = 5.0 # degree
y2 = 52.0 # degree
bbox = x1, y1, x2, y2
###Output
_____no_output_____
###Markdown
Convert bounding box into a shapely object:
###Code
bbox = shapely.geometry.box(*bbox)
###Output
_____no_output_____
###Markdown
Clip cube by using the bounding box:
###Code
subset_cube = clip_dataset_by_geometry(cube, bbox)
subset_cube.analysed_sst.isel(time=47).plot.imshow(figsize=(16,10), cmap='plasma')
###Output
_____no_output_____
###Markdown
The subset cube can be saved locally with:
###Code
# subset_cube.to_zarr('subset_cube_output_path.zarr')
###Output
_____no_output_____ |
notebooks/SPG-Bernstein.ipynb | ###Markdown
seeds = [8092, 4462, 8930, 4811, 568]
###Code
os.chdir('/home/matteo/potion/results')
###Output
_____no_output_____
###Markdown
CARTPOLE
###Code
nu.compare('cartpole',
['RSPG', 'GPOMDP', 'RSPG1', 'RSPG4'],
keys=['Perf'],
separate=False,
conf=None,
rows=None,
xkey='TotSamples')
nu.compare('cartpole',
['RSPG1'],
keys=['TotSamples'],
separate=False,
conf=.95,
rows=None)
nu.save_csv('cartpole', 'RSPG', 'Perf', xkey='TotSamples', path='/home/matteo/potion/reports', conf=None, rows=None, step=10)
nu.save_csv('cartpole', 'GPOMDP', 'Perf', xkey='TotSamples', path='/home/matteo/potion/reports', conf=None, rows=None, step=10)
nu.save_csv('cartpole', 'RSPG1', 'Perf', xkey='TotSamples', path='/home/matteo/potion/reports', conf=None, rows=None, step=10)
nu.save_csv('cartpole', 'RSPG4', 'Perf', xkey='TotSamples', path='/home/matteo/potion/reports', conf=None, rows=None, step=10)
nu.compare('cartpole',
['RSPG1'],
keys=['Target'],
separate=False,
conf=None,
rows=None,
xkey='Iterations')
nu.save_csv('cartpole', 'RSPG1', 'Target', xkey='Iterations', path='/home/matteo/potion/reports/target', conf=None, rows=None, step=10)
nu.save_csv('cartpole', 'RSPG1', 'Perf', xkey='Iterations', path='/home/matteo/potion/reports/target', conf=None, rows=None, step=10)
nu.compare('cartpole',
['RSPG1', 'RSPG', 'RSPG4'],
keys=['BatchSize'],
separate=False,
conf=None,
rows=None,
xkey='TotSamples')
nu.save_csv('cartpole', 'RSPG', 'BatchSize', xkey='TotSamples', path='/home/matteo/potion/reports', conf=None, rows=None, step=10)
nu.save_csv('cartpole', 'RSPG1', 'BatchSize', xkey='TotSamples', path='/home/matteo/potion/reports', conf=None, rows=None, step=10)
nu.save_csv('cartpole', 'RSPG4', 'BatchSize', xkey='TotSamples', path='/home/matteo/potion/reports', conf=None, rows=None, step=10)
###Output
334
137
668
|
lecture1/base.ipynb | ###Markdown
Basic usage of tkinter A base tkinter application.
###Code
import tkinter as tk # import the tkinter package.
app = tk.Tk() # The main program starts here.
app.mainloop() # Starts the application's main loop, waiting for mouse and keyboard events.
###Output
_____no_output_____
###Markdown
A minimal application with a $\textbf{Label}$.
###Code
import tkinter as tk
app = tk.Tk() # The main program starts here.
hello_label = tk.Label(app, text="hello world!") # create an Label.
hello_label.pack() # palce the label on the application
app.mainloop()
###Output
_____no_output_____
###Markdown
A minimal application with a "Quit" $\textbf{button}$.
###Code
import tkinter as tk
class Application(tk.Frame): # Your application class must inherit from Tkinter's Frame class.
def __init__(self, master=None):
tk.Frame.__init__(self, master) # Calls the constructor for the parent class, Frame.
self.grid() # Necessary to make the application actually appear on the screen.
self.createWidgets()
def createWidgets(self):
self.quitButton = tk.Button(self, text='Quit', command=self.quit)
self.quitButton.grid() # Places the button on the application.
app = Application()
app.master.title('Sample application')
app.mainloop()
'''
app = tk.Tk()
button = tk.Button(app, text='Quit',command=app.quit)
button.pack()
app.mainloop()
'''
###Output
_____no_output_____
###Markdown
Resizing the Quit button to $\textbf{fill}$ the window.
###Code
import tkinter as tk
class Application(tk.Frame): # Your application class must inherit from Tkinter's Frame class.
def __init__(self, master=None):
tk.Frame.__init__(self, master) # Calls the constructor for the parent class, Frame.
self.grid(sticky=tk.N+tk.S+tk.E+tk.W) # Necessary to make the application actually appear on the screen.
self.createWidgets()
def createWidgets(self):
top = self.winfo_toplevel() # The “top level window” is the outermost window on the screen. However, this window is not your Application window—it is the parent of the Application instance. To get the top-level window, call the .winfo_toplevel() method on any widget in your application.
top.rowconfigure(0, weight=1) # This line makes row 0 of the top level window's grid stretchable.
top.columnconfigure(0, weight=1) # This line makes column 0 of the top level window's grid stretchable.
self.rowconfigure(0, weight=1) # Makes row 0 of the Application widget's grid stretchable.
self.columnconfigure(0, weight=1) # Makes column 0 of the Application widget's grid stretchable.
self.quitButton = tk.Button(self, text='Quit', command=self.quit)
self.quitButton.grid(row=0, column=0, sticky=tk.N+tk.S+tk.E+tk.W) # Places the button on the application.
app = Application()
app.master.title('Sample application')
app.mainloop()
###Output
_____no_output_____ |
PythonStatus/practice_problems.ipynb | ###Markdown
Python Practice Problems List Comprehension Problems
###Code
def identity(nums):
"""Identity:
Given a list of numbers, write a list comprehension that produces a copy of the list.
"""
return [i for i in nums]
def squared(nums):
"""Squared:
Given a list of numbers, write a list comprehension that produces a list of the squares of each number.
"""
return [round(i*i, 2) for i in nums]
def odds(nums):
"""Odds:
Given a list of numbers, write a list comprehension that produces a list of only the odd numbers in that list.
"""
return [i for i in nums if i % 2 != 0]
def selective_stringify_nums(nums):
"""Selectively stringify nums:
Given a list of numbers, write a list comprehension that produces a list of strings of each number that is divisible by 5.
"""
return [str(i) for i in nums if i % 5 == 0]
def vowels(word):
"""Vowels:
Given a string representing a word, write a list comprehension that produces a list of all the vowels in that word.
"""
vowels = "aeiou"
return [i for i in word if i in vowels]
def disemvowel(sentence):
"""Disemvowel:
Given a sentence, return the sentence with all vowels removed.
"""
vowels = "aeiou"
return "".join([i for i in sentence if i.lower() not in vowels])
def encrypt_lol(sentence):
"""Encrypt lol:
    Given a sentence, return the sentence with all its letters transposed by 1 in the alphabet, but only if the letter is a-y.
    >>> encrypt_lol('the quick brown fox jumps over the lazy dog')
    'uif rvjdl cspxo gpy kvnqt pwfs uif mbzz eph'
    """
alpha = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
return "".join([alpha[alpha.index(i)+1] if (i in alpha and i.lower() != 'z') else i for i in sentence])
###Output
_____no_output_____
###Markdown
Testing List Comprehensions
###Code
import unittest
class TestListComprehensions(unittest.TestCase):
def test_identity(self):
self.assertListEqual(identity([1,2,3,4,5]), [1,2,3,4,5])
self.assertEqual(identity([3.9, 4.5, 10.2]), [3.9, 4.5, 10.2])
def test_squared(self):
self.assertListEqual(squared([1,2,-3,-4,5]), [1,4,9,16,25])
self.assertListEqual(squared([3.9, 4.5, 10.2]), [15.21, 20.25, 104.04])
def test_odds(self):
self.assertListEqual(odds([1,2,3,4,5]), [1,3,5])
self.assertListEqual(odds([2,4,6]), [])
self.assertListEqual(odds([-2,-4,-7]), [-7])
def test_selective_stringify_nums(self):
self.assertListEqual(selective_stringify_nums([25, 91, 22, -7, -20]), ['25', '-20'])
self.assertListEqual(selective_stringify_nums([2,4,6,8,12]), [])
self.assertListEqual(selective_stringify_nums([0, -5, 10, -15, 20]), ['0', '-5', '10', '-15', '20'])
def test_vowels(self):
self.assertListEqual(vowels('mathematics'), ['a', 'e', 'a', 'i'])
self.assertListEqual(vowels('pineapple'), ['i', 'e', 'a', 'e'])
self.assertListEqual(vowels('apartment'), ['a', 'a', 'e'])
def test_disemvowel(self):
self.assertEqual(disemvowel('the quick brown fox jumps over the lazy dog'), 'th qck brwn fx jmps vr th lzy dg')
self.assertEqual(disemvowel('I am testing out this sentence'), ' m tstng t ths sntnc')
self.assertEqual(disemvowel(' I really hope that the last one worked!'), ' rlly hp tht th lst n wrkd!')
def test_encrypt_lol(self):
self.assertEqual(encrypt_lol('the quick brown fox jumps over the lazy dog'), 'uif rvjdl cspxo gpy kvnqt pwfs uif mbzz eph')
self.assertEqual(encrypt_lol('This is another test'), 'Uijt jt bopuifs uftu')
self.assertEqual(encrypt_lol('I hope that the last test shifted... except for z'), 'J ipqf uibu uif mbtu uftu tijgufe... fydfqu gps z')
unittest.main(argv=[' '], exit=False)
###Output
.........
----------------------------------------------------------------------
Ran 9 tests in 0.029s
OK
###Markdown
Lambda Practice
###Code
'''
Write a Python program to create a lambda function that adds 15 to a given number passed in as an argument.
'''
add = lambda n: n + 15
'''
Create a lambda function that multiplies argument x with argument y.
'''
mult = lambda x, y: x * y
'''
Write a Python program to create a function that takes one argument,
and that argument will be multiplied with an unknown given number.
'''
def unknown_mult(n):
return lambda a: a * n
'''
Write a Python program to sort a list of tuples using Lambda.
'''
def sort_by_grade(classes):
return sorted(classes, key=lambda x: x[1] )
'''
Write a Python program to filter even numbers from a list of integers using Lambda.
'''
def filter_evens(nums):
return list(filter(lambda x: x%2==0, nums))
print(filter_evens([1,9,2,8,3,7,4,6,5]))
###Output
[2, 8, 4, 6]
###Markdown
Testing Lambdas
###Code
import unittest
class TestLambdas(unittest.TestCase):
def test_add(self):
self.assertEqual(add(5), 20)
self.assertEqual(add(-5), 10)
self.assertEqual(add(0), 15)
def test_mult(self):
self.assertEqual(mult(0, 20), 0)
self.assertEqual(mult(2, 5), 10)
self.assertEqual(mult(-5, 6), -30)
def test_unknown_mult(self):
mydouble = unknown_mult(2)
self.assertEqual(mydouble(0), 0)
self.assertEqual(mydouble(2), 4)
mytriple = unknown_mult(3)
self.assertEqual(mytriple(1), 3)
self.assertEqual(mytriple(3), 9)
myquint = unknown_mult(5)
self.assertEqual(myquint(4), 20)
self.assertEqual(myquint(2), 10)
def test_sort_by_grade(self):
self.assertListEqual(sort_by_grade([("math", 80), ("social studies", 94), ("science", 88), ("english", 77)]),
[("english", 77), ("math", 80), ("science", 88), ("social studies", 94)])
self.assertListEqual(sort_by_grade([("english", 88), ("science", 90), ("math", 97), ("social studies", 90)]),
[("english", 88), ("science", 90), ("social studies", 90), ("math", 97)])
def test_filter_evens(self):
self.assertListEqual(filter_evens([1,9,2,8,3,7,4,6,5]), [2,8,4,6])
self.assertListEqual(filter_evens([62,37,50,28,19,82,35,12,35]),[62,50,28,82,12])
self.assertListEqual(filter_evens([65,17,75,97,71,65,73,89,75]), [])
unittest.main(argv=[' '], exit=False)
###Output
.....
----------------------------------------------------------------------
Ran 5 tests in 0.004s
OK
|
Python files/Perishable_Classification.ipynb | ###Markdown
###Code
import pandas as pd
from google.colab import files
uploaded=files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
import io
df = pd.read_csv(io.BytesIO(uploaded['items.csv']))
df
input=df.drop('Perishable',axis='columns')
target=df['Perishable']
from sklearn import tree
model=tree.DecisionTreeClassifier()
#training
model.fit(input,target)
#testing
model.predict([[7,32]])
import matplotlib.pyplot as plt
from matplotlib import style
%matplotlib inline
plt.scatter(input['Temperature'],target)
plt.show()
###Output
_____no_output_____
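###Markdown
Alongside the scatter plots, it can also help to look at the rules the fitted decision tree actually learned. A minimal sketch (it assumes scikit-learn >= 0.21 for `plot_tree` and reuses the `model` and `input` defined above):
###Code
import matplotlib.pyplot as plt
# Visualise the fitted decision tree with its feature names and class labels.
plt.figure(figsize=(10, 6))
tree.plot_tree(model,
               feature_names=list(input.columns),
               class_names=[str(c) for c in model.classes_],
               filled=True)
plt.show()
###Output
_____no_output_____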
###Markdown
Irrespective of moisture content, the food would be much less perishable if stored at a lower temperature. A high storage temperature would mean a higher chance of perishability. For food stored at a moderate temperature, the prediction of perishability depends on the moisture content.
###Code
plt.scatter(input['Moisture'],target)
plt.show()
###Output
_____no_output_____ |
site/es/guide/keras.ipynb | ###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Keras Keras is a high-level API designed to build and train deep learning models. It is mainly used for rapid prototyping, research, and production, since it offers three main advantages: - *Ease of use* Keras exposes a consistent interface, optimized for common, real-world use cases. It also provides much friendlier feedback to the user when errors or execution problems occur. - *Modularity* Keras models are built by connecting basic building blocks, with very few restrictions. - *Ease of extension* It is possible to write and develop new building blocks to express non-standard ideas, including new layers, loss functions, or advanced models. Import tf.keras`tf.keras` is TensorFlow's implementation of the[Keras API specification](https://keras.io). This is a high-levelAPI to build and train models that includes first-class support forTensorFlow-specific functionality, such as [eager execution](eager_execution),`tf.data` pipelines, and [Estimators](./estimators.md).`tf.keras` makes TensorFlow easier to use without sacrificing flexibility andperformance.To get started, import `tf.keras` as part of your TensorFlow program setup:
###Code
!pip install pyyaml # Required to save models in YAML format
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow.keras import layers
print(tf.VERSION)
print(tf.keras.__version__)
###Output
_____no_output_____
###Markdown
`tf.keras` can run any Keras-compatible code, but keep in mind:* The `tf.keras` version in the latest TensorFlow release might not be the same as the latest `keras` version from PyPI. Check `tf.keras.__version__`.* When [saving a model's weights](weights_only), `tf.keras` defaults to the [checkpoint format](./checkpoints.md). Pass `save_format='h5'` to use HDF5. Build a simple model Sequential modelIn Keras, you assemble *layers* to build *models*. A model is (usually) a graphof layers. The most common type of model is a stack of layers: the`tf.keras.Sequential` model.To build a simple, fully-connected network (i.e. multi-layer perceptron):
###Code
model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))
###Output
_____no_output_____
###Markdown
Configure the layersThere are many `tf.keras.layers` available with some common constructorparameters:* `activation`: Set the activation function for the layer. This parameter is specified by the name of a built-in function or as a callable object. By default, no activation is applied.* `kernel_initializer` and `bias_initializer`: The initialization schemes that create the layer's weights (kernel and bias). This parameter is a name or a callable object. This defaults to the `"Glorot uniform"` initializer.* `kernel_regularizer` and `bias_regularizer`: The regularization schemes that apply the layer's weights (kernel and bias), such as L1 or L2 regularization. By default, no regularization is applied.The following instantiates `tf.keras.layers.Dense` layers using constructorarguments:
###Code
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.sigmoid)
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
# A linear layer with a bias vector initialized to 2.0s:
layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))
###Output
_____no_output_____
###Markdown
Train and evaluate Set up trainingAfter the model is constructed, configure its learning process by calling the`compile` method:
###Code
model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
`tf.keras.Model.compile` takes three important arguments:* `optimizer`: This object specifies the training procedure. Pass it optimizer instances from the `tf.train` module, such as `tf.train.AdamOptimizer`, `tf.train.RMSPropOptimizer`, or `tf.train.GradientDescentOptimizer`.* `loss`: The function to minimize during optimization. Common choices include mean square error (`mse`), `categorical_crossentropy`, and `binary_crossentropy`. Loss functions are specified by name or by passing a callable object from the `tf.keras.losses` module.* `metrics`: Used to monitor training. These are string names or callables from the `tf.keras.metrics` module.The following shows a few examples of configuring a model for training:
###Code
# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.train.AdamOptimizer(0.01),
loss='mse', # mean squared error
metrics=['mae']) # mean absolute error
# Configure a model for categorical classification.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.metrics.categorical_accuracy])
###Output
_____no_output_____
###Markdown
Input NumPy dataFor small datasets, use in-memory [NumPy](https://www.numpy.org/)arrays to train and evaluate a model. The model is "fit" to the training datausing the `fit` method:
###Code
import numpy as np
def random_one_hot_labels(shape):
n, n_class = shape
classes = np.random.randint(0, n_class, n)
labels = np.zeros((n, n_class))
labels[np.arange(n), classes] = 1
return labels
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
###Output
_____no_output_____
###Markdown
`tf.keras.Model.fit` takes three important arguments:* `epochs`: Training is structured into *epochs*. An epoch is one iteration over the entire input data (this is done in smaller batches).* `batch_size`: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size.* `validation_data`: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument—a tuple of inputs and labels—allows the model to display the loss and metrics in inference mode for the passed data, at the end of each epoch.Here's an example using `validation_data`:
###Code
import numpy as np
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
val_data = np.random.random((100, 32))
val_labels = random_one_hot_labels((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Input tf.data datasetsUse the [Datasets API](./datasets.md) to scale to large datasetsor multi-device training. Pass a `tf.data.Dataset` instance to the `fit`method:
###Code
# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
dataset = dataset.repeat()
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(dataset, epochs=10, steps_per_epoch=30)
###Output
_____no_output_____
###Markdown
Here, the `fit` method uses the `steps_per_epoch` argument—this is the number oftraining steps the model runs before it moves to the next epoch. Since the`Dataset` yields batches of data, this snippet does not require a `batch_size`.Datasets can also be used for validation:
###Code
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32).repeat()
model.fit(dataset, epochs=10, steps_per_epoch=30,
validation_data=val_dataset,
validation_steps=3)
###Output
_____no_output_____
###Markdown
Evaluate and predictThe `tf.keras.Model.evaluate` and `tf.keras.Model.predict` methods can use NumPydata and a `tf.data.Dataset`.To *evaluate* the inference-mode loss and metrics for the data provided:
###Code
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.evaluate(data, labels, batch_size=32)
model.evaluate(dataset, steps=30)
###Output
_____no_output_____
###Markdown
And to *predict* the output of the last layer in inference for the data provided,as a NumPy array:
###Code
result = model.predict(data, batch_size=32)
print(result.shape)
###Output
_____no_output_____
###Markdown
Build advanced models Functional API The `tf.keras.Sequential` model is a simple stack of layers that cannotrepresent arbitrary models. Use the[Keras functional API](https://keras.io/getting-started/functional-api-guide/)to build complex model topologies such as:* Multi-input models,* Multi-output models,* Models with shared layers (the same layer called several times),* Models with non-sequential data flows (e.g. residual connections).Building a model with the functional API works like this:1. A layer instance is callable and returns a tensor.2. Input tensors and output tensors are used to define a `tf.keras.Model` instance.3. This model is trained just like the `Sequential` model.The following example uses the functional API to build a simple, fully-connectednetwork:
###Code
inputs = tf.keras.Input(shape=(32,)) # Returns a placeholder tensor
# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
###Output
_____no_output_____
###Markdown
Instantiate the model given inputs and outputs.
###Code
model = tf.keras.Model(inputs=inputs, outputs=predictions)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Model subclassingBuild a fully-customizable model by subclassing `tf.keras.Model` and definingyour own forward pass. Create layers in the `__init__` method and set them asattributes of the class instance. Define the forward pass in the `call` method.Model subclassing is particularly useful when[eager execution](./eager.md) is enabled since the forward passcan be written imperatively.Key Point: Use the right API for the job. While model subclassing offersflexibility, it comes at a cost of greater complexity and more opportunities foruser errors. If possible, prefer the functional API.The following example shows a subclassed `tf.keras.Model` using a custom forwardpass:
###Code
class MyModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(MyModel, self).__init__(name='my_model')
self.num_classes = num_classes
# Define your layers here.
self.dense_1 = layers.Dense(32, activation='relu')
self.dense_2 = layers.Dense(num_classes, activation='sigmoid')
def call(self, inputs):
# Define your forward pass here,
# using layers you previously defined (in `__init__`).
x = self.dense_1(inputs)
return self.dense_2(x)
def compute_output_shape(self, input_shape):
# You need to override this function if you want to use the subclassed model
# as part of a functional-style model.
# Otherwise, this method is optional.
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.num_classes
return tf.TensorShape(shape)
###Output
_____no_output_____
###Markdown
Instantiate the new model class:
###Code
model = MyModel(num_classes=10)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Custom layersCreate a custom layer by subclassing `tf.keras.layers.Layer` and implementingthe following methods:* `build`: Create the weights of the layer. Add weights with the `add_weight` method.* `call`: Define the forward pass.* `compute_output_shape`: Specify how to compute the output shape of the layer given the input shape.* Optionally, a layer can be serialized by implementing the `get_config` method and the `from_config` class method.Here's an example of a custom layer that implements a `matmul` of an input witha kernel matrix:
###Code
class MyLayer(layers.Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(MyLayer, self).__init__(**kwargs)
def build(self, input_shape):
shape = tf.TensorShape((input_shape[1], self.output_dim))
# Create a trainable weight variable for this layer.
self.kernel = self.add_weight(name='kernel',
shape=shape,
initializer='uniform',
trainable=True)
# Make sure to call the `build` method at the end
super(MyLayer, self).build(input_shape)
def call(self, inputs):
return tf.matmul(inputs, self.kernel)
def compute_output_shape(self, input_shape):
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.output_dim
return tf.TensorShape(shape)
def get_config(self):
base_config = super(MyLayer, self).get_config()
base_config['output_dim'] = self.output_dim
return base_config
@classmethod
def from_config(cls, config):
return cls(**config)
###Output
_____no_output_____
###Markdown
Create a model using your custom layer:
###Code
model = tf.keras.Sequential([
MyLayer(10),
layers.Activation('softmax')])
# The compile step specifies the training configuration
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
CallbacksA callback is an object passed to a model to customize and extend its behaviorduring training. You can write your own custom callback, or use the built-in`tf.keras.callbacks` that include:* `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of your model at regular intervals.* `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning rate.* `tf.keras.callbacks.EarlyStopping`: Interrupt training when validation performance has stopped improving.* `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior using [TensorBoard](./summaries_and_tensorboard.md).To use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method:
###Code
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Save and restore Weights onlySave and load the weights of a model using `tf.keras.Model.save_weights`:
###Code
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')
# Restore the model's state,
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')
###Output
_____no_output_____
###Markdown
By default, this saves the model's weights in the[TensorFlow checkpoint](./checkpoints.md) file format. Weights canalso be saved to the Keras HDF5 format (the default for the multi-backendimplementation of Keras):
###Code
# Save weights to a HDF5 file
model.save_weights('my_model.h5', save_format='h5')
# Restore the model's state
model.load_weights('my_model.h5')
###Output
_____no_output_____
###Markdown
Configuration onlyA model's configuration can be saved—this serializes the model architecturewithout any weights. A saved configuration can recreate and initialize the samemodel, even without the code that defined the original model. Keras supportsJSON and YAML serialization formats:
###Code
# Serialize a model to JSON format
json_string = model.to_json()
json_string
import json
import pprint
pprint.pprint(json.loads(json_string))
###Output
_____no_output_____
###Markdown
Recreate the model (newly initialized) from the JSON:
###Code
fresh_model = tf.keras.models.model_from_json(json_string)
###Output
_____no_output_____
###Markdown
Serializing a model to YAML format requires that you install `pyyaml` *before you import TensorFlow*:
###Code
yaml_string = model.to_yaml()
print(yaml_string)
###Output
_____no_output_____
###Markdown
Recreate the model from the YAML:
###Code
fresh_model = tf.keras.models.model_from_yaml(yaml_string)
###Output
_____no_output_____
###Markdown
Caution: Subclassed models are not serializable because their architecture isdefined by the Python code in the body of the `call` method. Entire modelThe entire model can be saved to a file that contains the weight values, themodel's configuration, and even the optimizer's configuration. This allows youto checkpoint a model and resume training later—from the exact samestate—without access to the original code.
###Code
# Create a trivial model
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
# Recreate the exact same model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')
###Output
_____no_output_____
###Markdown
Eager execution[Eager execution](./eager.md) is an imperative programmingenvironment that evaluates operations immediately. This is not required forKeras, but is supported by `tf.keras` and useful for inspecting your program anddebugging.All of the `tf.keras` model-building APIs are compatible with eager execution.And while the `Sequential` and functional APIs can be used, eager executionespecially benefits *model subclassing* and building *custom layers*—the APIsthat require you to write the forward pass as code (instead of the APIs thatcreate models by assembling existing layers).See the [eager execution guide](./eager.mdbuild_a_model) forexamples of using Keras models with custom training loops and `tf.GradientTape`. Distribution EstimatorsThe [Estimators](./estimators.md) API is used for training modelsfor distributed environments. This targets industry use cases such asdistributed training on large datasets that can export a model for production.A `tf.keras.Model` can be trained with the `tf.estimator` API by converting themodel to an `tf.estimator.Estimator` object with`tf.keras.estimator.model_to_estimator`. See[Creating Estimators from Keras models](./estimators.mdcreating_estimators_from_keras_models).
###Code
model = tf.keras.Sequential([layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10,activation='softmax')])
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
estimator = tf.keras.estimator.model_to_estimator(model)
###Output
_____no_output_____
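###Markdown
As a minimal sketch of the custom training loop with `tf.GradientTape` mentioned in the Eager execution section above (an illustration only: it assumes eager execution was enabled at program start with `tf.enable_eager_execution()` and reuses the `data` and `labels` arrays from earlier):
###Code
# Sketch only -- requires eager execution to iterate the dataset in a Python loop.
loop_model = tf.keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=(32,)),
    layers.Dense(10, activation='softmax')])
optimizer = tf.train.AdamOptimizer(0.001)
dataset = tf.data.Dataset.from_tensor_slices(
    (data.astype('float32'), labels.astype('float32'))).batch(32)
for batch_x, batch_y in dataset:
    with tf.GradientTape() as tape:
        predictions = loop_model(batch_x)
        loss = tf.reduce_mean(
            tf.keras.losses.categorical_crossentropy(batch_y, predictions))
    # Compute gradients of the loss w.r.t. the model weights and apply them.
    grads = tape.gradient(loss, loop_model.trainable_weights)
    optimizer.apply_gradients(zip(grads, loop_model.trainable_weights))
###Output
_____no_output_____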
###Markdown
Note: Enable [eager execution](./eager.md) for debugging[Estimator input functions](./premade_estimators.mdcreate_input_functions)and inspecting data. Multiple GPUs`tf.keras` models can run on multiple GPUs using`tf.contrib.distribute.DistributionStrategy`. This API provides distributedtraining on multiple GPUs with almost no changes to existing code.Currently, `tf.contrib.distribute.MirroredStrategy` is the only supporteddistribution strategy. `MirroredStrategy` does in-graph replication withsynchronous training using all-reduce on a single machine. To use`DistributionStrategy` with Keras, convert the `tf.keras.Model` to a`tf.estimator.Estimator` with `tf.keras.estimator.model_to_estimator`, thentrain the estimatorThe following example distributes a `tf.keras.Model` across multiple GPUs on asingle machine.First, define a simple model:
###Code
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
model.add(layers.Dense(1, activation='sigmoid'))
optimizer = tf.train.GradientDescentOptimizer(0.2)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
###Output
_____no_output_____
###Markdown
Define an *input pipeline*. The `input_fn` returns a `tf.data.Dataset` objectused to distribute the data across multiple devices—with each device processinga slice of the input batch.
###Code
def input_fn():
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.repeat(10)
dataset = dataset.batch(32)
return dataset
###Output
_____no_output_____
###Markdown
Next, create a `tf.estimator.RunConfig` and set the `train_distribute` argumentto the `tf.contrib.distribute.MirroredStrategy` instance. When creating`MirroredStrategy`, you can specify a list of devices or set the `num_gpus`argument. The default uses all available GPUs, like the following:
###Code
strategy = tf.contrib.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)
###Output
_____no_output_____
###Markdown
Convert the Keras model to a `tf.estimator.Estimator` instance:
###Code
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model,
config=config,
model_dir='/tmp/model_dir')
###Output
_____no_output_____
###Markdown
Finally, train the `Estimator` instance by providing the `input_fn` and `steps`arguments:
###Code
keras_estimator.train(input_fn=input_fn, steps=10)
###Output
_____no_output_____ |
term-paper-GNN/Part2_karate_GCN.ipynb | ###Markdown
Check the types of the initialised arrays.
###Code
print(type(y))
print(type(train_mask))
print(type(val_mask))
print(type(test_mask))
print(N)
print(F)
print(n_classes)
###Output
<class 'numpy.ndarray'>
<class 'numpy.ndarray'>
<class 'numpy.ndarray'>
<class 'numpy.ndarray'>
34
34
4
###Markdown
Building the model is no different than building any Keras model, but we will need to provide multiple inputs to the GraphConv layers (namely A and X):
###Code
from spektral.layers import GraphConv
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dropout
# Model definition
X_in = Input(shape=(F, )) # Expected input: batches of F-dimensional node feature vectors (F = number of input features per node).
A_in = Input((N, ), sparse=True) # Expected input: batches of N-dimensional adjacency rows (N = number of nodes); passed as a sparse matrix.
graph_conv_1 = GraphConv(8, activation='relu')([X_in, A_in])
#dropout = Dropout(0.5)(graph_conv_1)
graph_conv_2 = GraphConv(n_classes, activation='softmax')([graph_conv_1, A_in])
# Build model
model = Model(inputs=[X_in, A_in], outputs=graph_conv_2)
###Output
_____no_output_____
###Markdown
An important thing to notice at this point is how we defined the Input layers of our model. Because the "elements" of our dataset are the nodes themselves, we are telling Keras to consider each node as a separate sample, so that the "batch" axis is implicitly defined as None. In other words, a sample of the node attributes will be _a row vector of shape (F, )_ and a sample of the adjacency matrix will be _one of its rows of shape (N, )_ (because for each node we only need its row of adjacencies, i.e. its connections to all other nodes). Keep this detail in mind for later. Before training the model, we have to pre-process the adjacency matrix to scale the weights of a node's connections according to its degree. In other words, the more a node is connected to others, the less relative importance those connections have. Most GNN layers available in Spektral require their own type of pre-processing in order to work correctly. You can find all necessary tools for pre-processing A in _spektral.utils_.
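Concretely, in the standard GCN formulation this pre-processing is the "renormalisation trick" $\hat{A} = \tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}$ with $\tilde{A} = A + I$ (self-loops added) and $\tilde{D}$ its degree matrix, which is what `localpooling_filter` is meant to compute here: each connection is rescaled by the degrees of the two nodes it joins.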
###Code
from spektral import utils
A = utils.localpooling_filter(A).astype('f4')
model.compile(optimizer='adam',
loss='categorical_crossentropy',
weighted_metrics=['acc'])
model.summary()
print(N, ' ', F, ' ', n_classes)
print(type(y))
print(type(train_mask))
from tensorflow.keras.callbacks import EarlyStopping
# Prepare data
validation_data = ([X, A], y, val_mask)
# Train model
model.fit([X, A],
y,
sample_weight=train_mask,
epochs=300,
batch_size=N, #batch size = no of nodes. Put all nodes into neural network at once.
validation_data=validation_data,
shuffle=False, # Shuffling data means shuffling the whole graph
callbacks=[
EarlyStopping(patience=10, restore_best_weights=True)
])
# Evaluate model
eval_results = model.evaluate([X, A],
y,
sample_weight=test_mask,
batch_size=N)
print('Done.\n'
'Test loss: {}\n'
'Test accuracy: {}'.format(*eval_results))
y_result = model.predict([X,A], batch_size=N)
y_group = []
for index, item in enumerate(y_result):
y_group.append(np.argmax(y_result[index]))
import networkx as nx
import matplotlib.pyplot as plt
def get_node_color(input):
if input==1:
return "red"
elif input==2:
return "green"
elif input==3:
return "blue"
elif input ==4:
return "yellow"
#drawing
size = float(len(ground_truth))
print(size)
labels_dict = dict(enumerate(y_group, 0))
pos = nx.spring_layout(zkc)
count = 0.
for com in set(y_group) :
count = count + 1.
list_nodes = [nodes for nodes in labels_dict
if labels_dict[nodes] == com]
    nx.draw_networkx_nodes(zkc, pos, list_nodes, node_size = 100,
                           node_color = get_node_color(count))  # pass the community counter (1-4) so it matches the colour mapping above
nx.draw_networkx_edges(zkc, pos, alpha=0.5)
nx.draw_networkx_labels(zkc, pos)
plt.title('Node prediction: GNN')
plt.show()
from sklearn import metrics
print(metrics.adjusted_rand_score(ground_truth, y_group))
print(metrics.adjusted_mutual_info_score(ground_truth, y_group))
print(metrics.accuracy_score(ground_truth, y_group))
print(ground_truth)
print(y_group)
np.argmax(y_result, axis=-1)
###Output
_____no_output_____ |
02 Pandas/01 Creating Reading & Writing.ipynb | ###Markdown
Creating Reading & Writing------ Tutorial--- **Getting started** To use pandas, you'll typically start with the following line of code.
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Creating dataThere are two core objects in pandas: the **DataFrame** and the **Series**. A **DataFrame** is a table. It contains an array of individual *entries*, each of which has a certain *value*. Each entry corresponds to a *row* and a *column*. The `pd.DataFrame()` constructor is used to generate DataFrame objects. The syntax for declaring a new one is a dictionary whose keys are the column names and whose values are lists of entries.
###Code
pd.DataFrame({'Yes': [50, 21], 'No': [131, 2]})
pd.DataFrame({'Bob': ['I liked it.', 'It was awful.'], 'Sue': ['Pretty good.', 'Bland.']})
###Output
_____no_output_____
###Markdown
The dictionary-list constructor assigns values to the *column labels*, but just uses an ascending count from 0 for the *row labels*. The list of row labels used in a DataFrame is known as an **Index**. We can assign values to it by using an `index` parameter in our constructor:
###Code
pd.DataFrame({'Bob': ['I liked it.', 'It was awful.'],
'Sue': ['Pretty good.', 'Bland.']},
index=['Product A', 'Product B'])
###Output
_____no_output_____
###Markdown
A **Series**, by contrast, is a sequence of data values. If a DataFrame is a table, a Series is a list. And in fact you can create one with nothing more than a list. A Series is, in essence, a single column of a DataFrame. So you can assign column values to the Series the same way as before, using an `index` parameter. However, a Series does not have a column name, it only has one overall `name`.
###Code
pd.Series([1, 2, 3, 4, 5])
pd.Series([30, 35, 40], index=['2015 Sales', '2016 Sales', '2017 Sales'], name='Product A')
# Dataset are available in google drive
from google.colab import drive
drive.mount('/content/gdrive')
# Mounded the drive now we can acces the data from its location
###Output
Mounted at /content/gdrive
###Markdown
Reading data filesData can be stored in any of a number of different forms and formats. By far the most basic of these is the humble CSV file. We can use the `pd.read_csv()` function to read the data into a DataFrame.
###Code
wine_reviews = pd.read_csv("/content/gdrive/MyDrive/Colab Notebooks/Kaggle_Courses/02 Pandas/winemag-data-130k-v2.csv")
wine_reviews.shape
wine_reviews.head()
###Output
_____no_output_____
###Markdown
The `pd.read_csv()` function is well-endowed, with over 30 optional parameters you can specify. For example, the CSV file above already contains an index column of its own; to make pandas use that column for the index (instead of creating a new one from scratch), we can specify an `index_col`.
###Code
wine_reviews = pd.read_csv("/content/gdrive/MyDrive/Colab Notebooks/Kaggle_Courses/02 Pandas/winemag-data-130k-v2.csv", index_col=0)
wine_reviews.head()
###Output
_____no_output_____
###Markdown
Exercise--- In the cell below, create a DataFrame `fruits` that looks like this:
###Code
# Your code goes here. Create a dataframe matching the above diagram and assign it to the variable fruits.
fruits = pd.DataFrame({'Apples': [30], 'Bananas': [21]})
fruits
###Output
_____no_output_____
###Markdown
Create a dataframe `fruit_sales` that matches the diagram below:
###Code
# Your code goes here. Create a dataframe matching the above diagram and assign it to the variable fruit_sales.
fruit_sales = pd.DataFrame({'Apples': [35,41],
'Bananas': [21, 34]},
index=['2017 Sales', '2018 Sales'])
fruit_sales
###Output
_____no_output_____
###Markdown
Create a variable `ingredients` with a Series that looks like:```Flour 4 cupsMilk 1 cupEggs 2 largeSpam 1 canName: Dinner, dtype: object```
###Code
wine_reviews = pd.read_csv("/content/gdrive/MyDrive/Colab Notebooks/Kaggle_Courses/02 Pandas/winemag-data-130k-v2.csv", index_col=0)
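# One possible solution for the `ingredients` Series described above (added as an example):
ingredients = pd.Series(['4 cups', '1 cup', '2 large', '1 can'],
                        index=['Flour', 'Milk', 'Eggs', 'Spam'],
                        name='Dinner')
ingredients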
###Output
_____no_output_____
###Markdown
Run the cell below to create and display a DataFrame called animals. In the cell below, write code to save this DataFrame to disk as a csv file with the name `cows_and_goats.csv`.
###Code
animals = pd.DataFrame({'Cows': [12, 20], 'Goats': [22, 19]}, index=['Year 1', 'Year 2'])
animals
# Your code goes here
animals.to_csv("cows_and_goats.csv")
###Output
_____no_output_____ |
ex23-CO2 and Global Temperature Anomaly.ipynb | ###Markdown
CO2 and Global Temperature Anomaly The rise of our planet's average surface temperature could largely be attributed to increased carbon dioxide and other human-made emissions into the atmosphere according to the IPCC AR5 report. We can easily check the latest CO2 and global temperature data in one figure based on publicly available datasets.CO2 data was downloaded from [esrl](https://www.esrl.noaa.gov/gmd/ccgg/trends/data.html), covering the period from Mar/1958 to Apr/2018. CO2 expressed as a mole fraction in dry air, micromol/mol, abbreviated as ppm.Global temperature anomaly (GTA, $^oC$) was downloaded from [ncdc](https://www.ncdc.noaa.gov/cag/global/time-series/globe/land_ocean/all/3/1958-2018). The data come from the Global Historical Climatology Network-Monthly (GHCN-M) data set and International Comprehensive Ocean-Atmosphere Data Set (ICOADS), which have data from 1880 to the present. These two datasets are blended into a single product to produce the combined global land and ocean temperature anomalies. The available timeseries of global-scale temperature anomalies are calculated with respect to the 20th century average. To match the period of CO2 data, only the period from Jan/1958 to Mar/2019 was used in this notebook. 1. Load all needed libraries
###Code
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
# Set some parameters to apply to all plots. These can be overridden
import matplotlib
# Plot size to 14" x 7"
matplotlib.rc('figure', figsize = (14, 7))
# Font size to 14
matplotlib.rc('font', size = 14)
# Do not display top and right frame lines
matplotlib.rc('axes.spines', top = False, right = False)
# Remove grid lines
matplotlib.rc('axes', grid = False)
# Set backgound color to white
matplotlib.rc('axes', facecolor = 'white')
###Output
matplotlib inline
###Markdown
2. Read Data 2.1 CO2
###Code
co2 = pd.read_csv('data\co2_mm_mlo.txt',
skiprows=72,
header=None,
comment = "#",
delim_whitespace = True,
names = ["year", "month", "decimal_date", "average", "interpolated", "trend", "days"],
na_values =[-99.99, -1])
co2['YM'] = co2['year']*100 + co2['month']
co2['YM'] = pd.to_datetime(co2['YM'], format='%Y%m', errors='ignore')
co2.set_index('YM', inplace=True)
co2.drop(["year", "month", "decimal_date", "average", "trend", "days"], axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
2.2 Global Temperature Anomalies
###Code
gta = pd.read_csv('data\gta_1958_2018.csv',
sep=",",
skiprows=5,
names = ["YM", "GTA"])
gta['YM'] = pd.to_datetime(gta['YM'], format='%Y%m', errors='ignore')
gta.set_index('YM', inplace=True)
###Output
_____no_output_____
###Markdown
2.3 Merge CO2 and GTA into one dataframe
###Code
co2gta = co2.join(gta)
###Output
_____no_output_____
###Markdown
3. Visualize CO2 and GTA in one figure 3.1 Define a function for a plot with two y axes
###Code
def lineplot2y(x_data, x_label, y1_data, y1_color, y1_label, y2_data, y2_color, y2_label, title):
# Each variable will actually have its own plot object but they
# will be displayed in just one plot
# Create the first plot object and draw the line
_, ax1 = plt.subplots()
ax1.plot(x_data, y1_data, color = y1_color)
# Label axes
ax1.set_ylabel(y1_label, color = y1_color)
ax1.set_xlabel(x_label)
ax1.set_title(title)
# Create the second plot object, telling matplotlib that the two
# objects have the same x-axis
ax2 = ax1.twinx()
ax2.plot(x_data, y2_data, color = y2_color)
ax2.set_ylabel(y2_label, color = y2_color)
# Show right frame line
ax2.spines['right'].set_visible(True)
###Output
_____no_output_____
###Markdown
3.2 Call the function to create plot
###Code
lineplot2y(x_data = co2gta.index
, x_label = 'Month-Year'
, y1_data = co2gta['interpolated']
, y1_color = '#539caf'
, y1_label = 'CO2(ppm)'
, y2_data = co2gta['GTA']
, y2_color = '#7663b0'
, y2_label = 'Global Temperature Anomaly ($^oC$)'
, title = 'CO2 vs. GTA')
###Output
_____no_output_____ |
Data_Cleaning/MLR.ipynb | ###Markdown
Plotting to see all the features
###Code
def plotting_3_chart(df, feature):
style.use('fivethirtyeight')
## Creating a customized chart. and giving in figsize and everything.
fig = plt.figure(constrained_layout=True, figsize=(10,5))
## creating a grid of 3 cols and 3 rows.
grid = gridspec.GridSpec(ncols=3, nrows=3, figure=fig)
#gs = fig3.add_gridspec(3, 3)
## Customizing the histogram grid.
ax1 = fig.add_subplot(grid[0, :2])
## Set the title.
ax1.set_title('Histogram')
## plot the histogram.
sns.distplot(df.loc[:,feature], norm_hist=True, ax = ax1)
# customizing the QQ_plot.
ax2 = fig.add_subplot(grid[1, :2])
## Set the title.
ax2.set_title('QQ_plot')
## Plotting the QQ_Plot.
stats.probplot(df.loc[:,feature], plot = ax2)
## Customizing the Box Plot.
ax3 = fig.add_subplot(grid[:, 2])
## Set title.
ax3.set_title('Box Plot')
## Plotting the box plot.
sns.boxplot(df.loc[:,feature], orient='v', ax = ax3 );
plotting_3_chart(df, 'life_expectancy')
plotting_3_chart(df, 'adult_mortality_rate')
###Output
_____no_output_____
###Markdown
These two charts above can tell us a lot about our target variable.- Our target variable, life_expectancy is NOT normally distributed.- Our target variable is left-skewed.
###Code
#skewness and kurtosis
print("Skewness: " + str(df['life_expectancy'].skew()))
print("Kurtosis: " + str(df['life_expectancy'].kurt()))
###Output
Skewness: -0.5823312800398773
Kurtosis: -0.5587841214904765
###Markdown
We can fix this by using different types of transformation (more on this later). However, before doing that, I want to find out the relationships among the target variable and other predictor variables. Let's find out.
###Code
## Getting the correlation of all the features with target variable.
(df.corr()**2)["life_expectancy"].sort_values(ascending = False)[1:]
plotting_df = df[['life_expectancy', 'log_GDP', 'HIV_death_rate', 'avg_immunity', 'health_expense']]
fig, ax = plt.subplots(figsize = (6,6))
sns.heatmap(plotting_df.corr(), annot = True, linewidths=0.5, fmt = ".3f", ax=ax, cmap="PiYG")
# fix for mpl bug that cuts off top/bottom of seaborn viz
b, t = plt.ylim() # discover the values for bottom and top
b += 0.5 # Add 0.5 to the bottom
t -= 0.5 # Subtract 0.5 from the top
plt.ylim(b, t) # update the ylim(bottom, top) values
plt.xticks(rotation=90)
plt.yticks(rotation=0)
plt.title('Pearson Correlation Map')
plt.savefig('corr_fig.png',
dpi=300, bbox_inches='tight')
plt.show()
co_rr_l = plotting_df.corr()
cmap=sns.diverging_palette(5, 250, as_cmap=True)
co_rr_l.style.background_gradient(cmap, axis=1)\
.set_properties(**{'max-width': '80px', 'font-size': '10pt'})\
.set_precision(2)
# plt.show()
# plt.savefig('corr_table.png',
# dpi=300, bbox_inches='tight')
axes = pd.plotting.scatter_matrix(plotting_df, alpha=1, figsize=(15, 15), diagonal='hist', grid=True)
corr = df.corr().to_numpy()  # pandas removed .as_matrix(); .to_numpy() is the current equivalent
# for i, j in zip(*plt.np.triu_indices_from(axes, k=1)):
# axes[i, j].annotate("%.3f" %corr[i,j], (0.8, 0.8), xycoords='axes fraction', ha='center', va='center')
for ax in axes.ravel():
ax.set_xlabel(ax.get_xlabel(), fontsize = 12, rotation = 0)
ax.set_ylabel(ax.get_ylabel(), fontsize = 12, rotation = 90)
plt.savefig('corr_fig2.png',
dpi=300, bbox_inches='tight')
plt.show()
def customized_scatterplot(y, x):
style.use('fivethirtyeight')
plt.subplots(figsize = (6,4))
sns.scatterplot(y = y, x = x);
customized_scatterplot(df.life_expectancy, df.adult_mortality_rate)
customized_scatterplot(df.life_expectancy, df.HDI)
customized_scatterplot(df.adult_mortality_rate, df.HDI)
customized_scatterplot(df.polio_immunity, df.hep_B_immunity)
###Output
_____no_output_____
###Markdown
Assumptions of Regression- Linearity ( Correct functional form )- Homoscedasticity ( Constant Error Variance )( vs Heteroscedasticity ).- Independence of Errors ( vs Autocorrelation )- Multivariate Normality ( Normality of Errors )- No or little Multicollinearity. Linearity ( Correct functional form ) Linear regression needs the relationship between each independent variable and the dependent variable to be linear. The linearity assumption can be tested with scatter plots. The following scatter plots, each with a fitted regression line, are used to check whether these relationships look approximately linear.
###Code
## Plot sizing.
fig, (ax1, ax2) = plt.subplots(figsize = (10,5), ncols=2, sharey=False)
## Scatter plotting for adult_mortality_rate and life_expectancy.
sns.scatterplot( x = df.adult_mortality_rate, y = df.life_expectancy, ax=ax1)
## Putting a regression line.
sns.regplot(x=df.adult_mortality_rate, y=df.life_expectancy, ax=ax1)
## Scatter plotting for HDI and life_expectancy.
sns.scatterplot(x = df.HDI,y = df.life_expectancy, ax=ax2)
## regression line for HDI and life_expectancy.
sns.regplot(x=df.HDI, y=df.life_expectancy, ax=ax2);
## Plot sizing.
fig, (ax1, ax2) = plt.subplots(figsize = (10,5), ncols=2, sharey=False)
## Scatter plotting for adult_mortality_rate and HDI.
sns.scatterplot( x = df.adult_mortality_rate, y = df.HDI, ax=ax1)
## Putting a regression line.
sns.regplot(x=df.adult_mortality_rate, y=df.HDI, ax=ax1)
## Scatter plotting for polio_immunity and hep_B_immunity.
sns.scatterplot(x = df.polio_immunity,y = df.hep_B_immunity, ax=ax2)
## regression line for polio_immunity and hep_B_immunity.
sns.regplot(x=df.polio_immunity, y=df.hep_B_immunity, ax=ax2);
# Checking for residuals
plt.subplots(figsize = (6,4))
sns.residplot(df.adult_mortality_rate, df.life_expectancy);
# Checking for residuals
plt.subplots(figsize = (6,4))
sns.residplot(df.HDI, df.life_expectancy);
# Checking for residuals
plt.subplots(figsize = (6,4))
sns.residplot(df.HDI, df.adult_mortality_rate);
# Checking for residuals
plt.subplots(figsize = (6,4))
sns.residplot(df.hep_B_immunity, df.polio_immunity);
###Output
_____no_output_____
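###Markdown
Multicollinearity, listed among the assumptions above, can be checked numerically with variance inflation factors. A minimal sketch (it assumes `statsmodels` is installed and that the predictor columns used in the correlation map are present in `df`):
###Code
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Compute a VIF for each candidate predictor; values above roughly 5-10 suggest strong multicollinearity.
predictors = df[['log_GDP', 'HIV_death_rate', 'avg_immunity', 'health_expense']]
X_vif = sm.add_constant(predictors)
vif = pd.Series([variance_inflation_factor(X_vif.values, i) for i in range(X_vif.shape[1])],
                index=X_vif.columns)
print(vif)
###Output
_____no_output_____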
###Markdown
Multivariate Normality ( Normality of Errors): The linear regression analysis requires the dependent variable to be multivariate normally distributed. A histogram or a Q-Q plot can check whether the target variable is normally distributed or not. A goodness-of-fit test, e.g., the Kolmogorov-Smirnov test, can also check for normality in the dependent variable. We already know that our target variable does not follow a normal distribution. Let's bring back the three charts to show our target variable.
###Code
plotting_3_chart(df, 'life_expectancy')
###Output
_____no_output_____
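###Markdown
The Kolmogorov-Smirnov test mentioned above can be run directly with SciPy. A minimal sketch (it standardises the sample first and reuses `df` from earlier):
###Code
from scipy import stats

# Compare the standardised target variable against a standard normal distribution.
le = df['life_expectancy']
stat_ks, p_ks = stats.kstest((le - le.mean()) / le.std(), 'norm')
print('KS Statistics=%.3f, p=%.3f' % (stat_ks, p_ks))
if p_ks > 0.05:
    print('Sample looks Gaussian (fail to reject H0)')
else:
    print('Sample does not look Gaussian (reject H0)')
###Output
_____no_output_____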
###Markdown
Now, let's make sure that the target variable follows a normal distribution.
###Code
plotting_df = df[['total_population', 'GDP', 'health_expense', 'HDI', 'HIV_death_rate',
'hep_B_immunity', 'polio_immunity', 'BCG_immunity','adult_mortality_rate', 'life_expectancy']]
data = plotting_df
data1 = plotting_df
data2 = plotting_df
data3 = plotting_df
data4 = plotting_df
data5 = plotting_df
## trainsforming target variable using numpy.log1p,
data['life_expectancy'] = boxcox(plotting_df.life_expectancy, -1.5)
## Plotting the newly transformed response variable
plotting_3_chart(data, 'life_expectancy')
# from scipy.stats import shapiro
# normality test
stat, p = shapiro(df.life_expectancy)
print('Statistics=%.3f, p=%.3f' % (stat, p))
# interpret
alpha = 0.05
if p > alpha:
print('Sample looks Gaussian (fail to reject H0)')
else:
print('Sample does not look Gaussian (reject H0)')
# from scipy.stats import anderson
# normality test
result = anderson(df.life_expectancy)
print('Statistic: %.3f' % result.statistic)
p = 0
for i in range(len(result.critical_values)):
sl, cv = result.significance_level[i], result.critical_values[i]
if result.statistic < result.critical_values[i]:
print('%.3f: %.3f, data looks normal (fail to reject H0)' % (sl, cv))
else:
print('%.3f: %.3f, data does not look normal (reject H0)' % (sl, cv))
# from scipy.stats import normaltest
# D’Agostino’s K^2 Test
# normality test
stat1, p1 = normaltest(df.life_expectancy)
print('Statistics=%.3f, p=%.3f' % (stat1, p1))
# interpret
alpha = 0.05
if p1 > alpha:
print('Sample looks Gaussian (fail to reject H0)')
else:
print('Sample does not look Gaussian (reject H0)')
## transforming target variable
# plotting_df = df[['total_population', 'GDP', 'health_expense', 'HDI', 'HIV_death_rate',
# 'hep_B_immunity', 'polio_immunity', 'BCG_immunity','adult_mortality_rate', 'life_expectancy']]
data1 = plotting_df
data1['adult_mortality_rate'] = boxcox(plotting_df.adult_mortality_rate, 1)
print('DATA1 : boxcox(NONE)')
print('\n\n')
print("Shapiro-Wilk Test:\n")
# normality test
stat, p0 = shapiro(data1.adult_mortality_rate)
print('Statistics=%.3f, p=%.5f' % (stat, p0))
# interpret
alphaa = 0.05
if p0 > alphaa:
print('Sample looks Gaussian (fail to reject H0)')
else:
print('Sample does not look Gaussian (reject H0)')
print('\n\n')
print("D’Agostino’s K^2 Test:\n")
# normality test
stat1, p1 = normaltest(data1.adult_mortality_rate)
print('Statistics=%.3f, p=%.5f' % (stat1, p1))
# interpret
if p1 > alphaa:
print('Sample looks Gaussian (fail to reject H0)')
else:
print('Sample does not look Gaussian (reject H0)')
print('\n\n')
print("Anderson-Darling Test:\n")
# normality test
result = anderson(data1.adult_mortality_rate)
print('Statistic: %.3f' % result.statistic)
p = 0
for i in range(len(result.critical_values)):
sl, cv = result.significance_level[i], result.critical_values[i]
if result.statistic < result.critical_values[i]:
print('%.3f: %.3f, data looks normal (fail to reject H0)' % (sl, cv))
else:
print('%.3f: %.3f, data does not look normal (reject H0)' % (sl, cv))
print('\n\n')
plotting_df = df[['total_population', 'GDP', 'health_expense', 'HDI', 'HIV_death_rate',
'hep_B_immunity', 'polio_immunity', 'BCG_immunity','adult_mortality_rate', 'life_expectancy']]
## Import necessary modules
from scipy.special import boxcox1p
from scipy.stats import boxcox_normmax
from scipy import stats
numeric_feats = plotting_df.dtypes[plotting_df.dtypes != "object"].index
skewed_feats = plotting_df[numeric_feats].apply(lambda x: stats.skew(x)).sort_values(ascending=False)
skewed_feats
sns.distplot(plotting_df.life_expectancy);
def fixing_skewness(df1):
"""
This function takes in a dataframe and return fixed skewed dataframe
"""
## Getting all the data that are not of "object" type.
numeric_feats = df1.dtypes[df1.dtypes != "object"].index
# Check the skew of all numerical features
skewed_feats = df1[numeric_feats].apply(lambda x: stats.skew(x)).sort_values(ascending=False)
high_skew = skewed_feats[abs(skewed_feats) > 0.5]
skewed_features = high_skew.index
for feat in skewed_features:
df1[feat] = boxcox1p(df1[feat], boxcox_normmax(df1[feat] + 1))
fixing_skewness(plotting_df)
sns.distplot(plotting_df.life_expectancy);
sns.distplot(plotting_df.adult_mortality_rate);
numeric_feats = plotting_df.dtypes[plotting_df.dtypes != "object"].index
skewed_feats = plotting_df[numeric_feats].apply(lambda x: stats.skew(x)).sort_values(ascending=False)
skewed_feats
###Output
_____no_output_____ |
_notebooks/2020-04-14-SARS-CoV2.ipynb | ###Markdown
"Genetic and antigenic mapping of SARS-CoV-2 variants."- toc: false- branch: master- badges: false- comments: false- hide: false The two figures above showing genetic distance map of SARS-CoV-2 variant strains in the Spike protein amino acid sequences and antigenic distance map of SARS-CoV-2 variant strains in the antigenic epitope amino acid sequences. The other two figures below showing genetic distance map of SARS-CoV-2 variant strains in the N-terminal domain (NTD) and receptor binding domain (RBD) amino acid sequences and antigenic distance map of SARS-CoV-2 variant strains in the NTD and RBD amino acid sequences. For genetic distance, the vertical and horizontal axes represent genetic distances measured by altered amino acid numbers (1 amino acid/1 A.A. = 1 amino acid alteration). For antigenic distance, the vertical and horizontal axes represent relative antigenic distances measured. (1 arbitrary unit/1 A.U. = 1-fold decrease in the neutralisation titre in log 2 scale). The two circles shows a two-fold and 16-fold decrease in the neutralisation titre respectively. Colour shows the genetic or antigenic distance compare to SARS-CoV-2 lineage A. Tips: * Mouse hover the points shows the Pangolin lineage of the amino acid sequence belongs to.* Drop down list can filter parent lineages of these points.* Zoom in or out can be controled by mouse wheel scrolling. Spike
###Code
#hide_input
import pandas as pd
import altair as alt
import numpy as np
df_gm = pd.read_csv("./Spike-GenetMap.csv")
input_dropdown_gm = alt.binding_select(options=np.insert(df_gm.Parent_lineage.unique(),0,None),labels=np.insert(df_gm.Parent_lineage.unique(),0,"ALL"))
selection_gm = alt.selection_single(fields=['Parent_lineage'], bind=input_dropdown_gm,name="Select")
color = alt.condition(selection_gm,
alt.Color('Distance:Q',scale=alt.Scale(scheme='turbo')),
alt.value('lightgray'))
gm = alt.Chart(df_gm,title="Spike - Genetic Map").mark_circle(size=100).encode(
x=alt.X('PC1',axis=alt.Axis(title='')),
y=alt.X('PC2',axis=alt.Axis(title='')),
color=color,
size=alt.Size('Distance:O',legend=None),
tooltip='name'
).add_selection(
selection_gm
).interactive().properties(
width=300,
height=280
)
theta = np.linspace(0, np.pi/2, 180)
x = np.cos(theta)
y = np.sin(theta)
x = np.append(x,-x)
y = np.append(y,y)
x2 = np.append(4*x,-x*4)
y2 = np.append(4*y,4*y)
cy = pd.DataFrame({'x':x,'y':y})
cy2 = pd.DataFrame({'x':x,'y':-y})
cy4 = pd.DataFrame({'x':x2,'y':y2})
cy42 = pd.DataFrame({'x':x2,'y':-y2})
cyp = alt.Chart(cy).mark_line().encode(x='x',y='y')
cyp2 = alt.Chart(cy2).mark_line().encode(x='x',y='y')
cyp4 = alt.Chart(cy4).mark_line().encode(x='x',y='y')
cyp42 = alt.Chart(cy42).mark_line().encode(x='x',y='y')
df = pd.read_csv("./Spike-AntiMap.csv")
input_dropdown = alt.binding_select(options=np.insert(df.Parent_lineage.unique(),0,None),labels=np.insert(df.Parent_lineage.unique(),0,"ALL"))
selection = alt.selection_single(fields=['Parent_lineage'], bind=input_dropdown,name="Select")
color = alt.condition(selection,
alt.Color('Distance:Q',scale=alt.Scale(scheme='turbo')),
alt.value('lightgray'))
base = alt.Chart(df,title='Spike - Antigenic Map').mark_circle(size=100).encode(
x=alt.X('PC1',axis=alt.Axis(title='')),
y=alt.X('PC2',axis=alt.Axis(title='')),
color=color,
size=alt.Size('Distance:O',legend=None),
tooltip='name'
).add_selection(
selection
).interactive().properties(
width=330,
height=280
)
alt.hconcat(gm,(cyp+cyp2+cyp4+cyp42+base))
###Output
_____no_output_____
###Markdown
NTD-RBD
###Code
#hide_input
import pandas as pd
import altair as alt
import numpy as np
df_gm = pd.read_csv("./NTD-RBD-GenetMap.csv")
input_dropdown_gm = alt.binding_select(options=np.insert(df_gm.Parent_lineage.unique(),0,None),labels=np.insert(df_gm.Parent_lineage.unique(),0,"ALL"))
selection_gm = alt.selection_single(fields=['Parent_lineage'], bind=input_dropdown_gm,name="Select")
color = alt.condition(selection_gm,
alt.Color('Distance:Q',scale=alt.Scale(scheme='turbo')),
alt.value('lightgray'))
gm = alt.Chart(df_gm,title="NTD-RBD-Genetic Map").mark_circle(size=100).encode(
x=alt.X('PC1',axis=alt.Axis(title='')),
y=alt.X('PC2',axis=alt.Axis(title='')),
color=color,
size=alt.Size('Distance:O',legend=None),
tooltip='name'
).add_selection(
selection_gm
).interactive().properties(
width=300,
height=280
)
theta = np.linspace(0, np.pi/2, 180)
x = np.cos(theta)
y = np.sin(theta)
x = np.append(x,-x)
y = np.append(y,y)
x2 = np.append(4*x,-x*4)
y2 = np.append(4*y,4*y)
cy = pd.DataFrame({'x':x,'y':y})
cy2 = pd.DataFrame({'x':x,'y':-y})
cy4 = pd.DataFrame({'x':x2,'y':y2})
cy42 = pd.DataFrame({'x':x2,'y':-y2})
cyp = alt.Chart(cy).mark_line().encode(x='x',y='y')
cyp2 = alt.Chart(cy2).mark_line().encode(x='x',y='y')
cyp4 = alt.Chart(cy4).mark_line().encode(x='x',y='y')
cyp42 = alt.Chart(cy42).mark_line().encode(x='x',y='y')
df = pd.read_csv("./NTD-RBD-AntiMap.csv")
input_dropdown = alt.binding_select(options=np.insert(df.Parent_lineage.unique(),0,None),labels=np.insert(df.Parent_lineage.unique(),0,"ALL"))
selection = alt.selection_single(fields=['Parent_lineage'], bind=input_dropdown,name="Select")
color = alt.condition(selection,
alt.Color('Distance:Q',scale=alt.Scale(scheme='turbo')),
alt.value('lightgray'))
base = alt.Chart(df,title='NTD-RBD-Antigenic Map').mark_circle(size=100).encode(
x=alt.X('PC1',axis=alt.Axis(title='')),
y=alt.Y('PC2',axis=alt.Axis(title='')),
color=color,
size=alt.Size('Distance:O',legend=None),
tooltip='name'
).add_selection(
selection
).interactive().properties(
width=320,
height=280
)
alt.hconcat(gm,(cyp+cyp2+cyp4+cyp42+base))
###Output
_____no_output_____ |
covid_19_samples (2).ipynb | ###Markdown
Importing all our important libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import datetime as dt
import time
import ipywidgets
#from IPython.display import display
data1 = pd.read_csv("StatewiseTestingDetails2.csv",dayfirst = True)
data1.tail()
###Output
_____no_output_____
###Markdown
Data Cleaning Process
###Code
#Since there are too many NaN values, we have to replace them with some value
data1.isnull().sum()
#since there are too many NaN values, which are highly unpredictable, we are going to work only on the total samples predictor
new_data = data1[["Date","State","TotalSamples"]]
#new_data.head()
new_data = new_data.rename(columns = {"Date":"date","State":"state","TotalSamples":"total_samples"})
new_data["date"] = pd.to_datetime(new_data["date"])
new_data.isnull().sum()
###Output
_____no_output_____
###Markdown
Data Visualisation through Graph
###Code
temp_state = new_data.state.unique()
new_temp_state = ipywidgets.Dropdown(
options=temp_state,
#options = temp_value
value=temp_state[0],
description='Select state:',
disabled=False,
)
def select (select):
temp2_state = new_data[new_data.state == new_temp_state.value]
#print("first",new_temp_state)
#print("second",temp2_state)
sns.set(rc = {'figure.figsize':(15,10)})
sns.scatterplot(x = "date", y = "total_samples", data = temp2_state, color = "g")
#data_select(data_select2)
#ipywidgets.interact(data_select, data_select = data_select2)
ipywidgets.interact(select, select = new_temp_state)
###Output
_____no_output_____
###Markdown
Here we are predicting total samples in future
###Code
select_date = ipywidgets.DatePicker(
description='Pick a Date',
disabled=False
)
#creating the predict_data function to return the predicted future cases depending on the user's selection
def predict_data(prediction):
from sklearn.model_selection import train_test_split
#print("first",new_temp_state)
temp_state_value = new_data[new_data.state == new_temp_state.value]
temp_state_value['date'] = temp_state_value['date'].map(dt.datetime.toordinal)
#print("second",temp_state_value['date'])
#temp_state_value.head()
x = temp_state_value['date']
#print(temp)
y = temp_state_value['total_samples']
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.3)
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor()
rf.fit(np.array(x_train).reshape(-1,1),np.array(y_train).reshape(-1,1))
#print(select_date.value)
x = None
choose_date = select_date.value
#print(choose_date)
choose_date2 = dt.date.today()
#choose_date = choose_date.toordinal()
choose_date2 = choose_date2.toordinal()
if choose_date == x:
result = rf.predict([[choose_date2]])
if choose_date != x:
choose_date = choose_date.toordinal()
result = rf.predict([[choose_date]])
#print(result)
return result
new_temp_state
ipywidgets.interact(predict_data, prediction = select_date)
###Output
_____no_output_____ |
Cholesky_Decomposition.ipynb | ###Markdown
[View in Colaboratory](https://colab.research.google.com/github/findingfoot/ML_practice-codes/blob/master/Cholesky_Decomposition.ipynb) This script will use TensorFlow's function, tf.cholesky(), to decompose our design matrix and solve for the parameter matrix from linear regression.For linear regression we are given the system $A \cdot x = y$. Here, $A$ is our design matrix, $x$ is our parameter matrix (of interest), and $y$ is our target matrix (dependent values).For a Cholesky decomposition to work, the matrix being factored must be square (symmetric positive-definite); it is then broken up into the product of a lower triangular matrix $L$ and its transpose $L^{T}$. With an overdetermined system $A$ is not square, so we factor the product $A^{T} \cdot A$ instead, giving:$$A^{T} \cdot A = L \cdot L^{T}$$For more information on the Cholesky decomposition and its uses, see the following wikipedia link: The Cholesky DecompositionGiven that $A^{T} \cdot A$ has a unique Cholesky decomposition, we can write our linear regression normal equations as:$$ L \cdot L^{T} \cdot x = A^{T} \cdot y $$Then we break apart the system as follows:$$L \cdot z = A^{T} \cdot y$$and$$L^{T} \cdot x = z$$The steps we will take to solve for $x$ are the following:1. Compute the Cholesky decomposition of $A^{T} \cdot A$, where $A^{T} \cdot A = L \cdot L^{T}$.2. Solve ($L \cdot z = A^{T} \cdot y$) for $z$.3. Finally, solve ($L^{T} \cdot x = z$) for $x$.
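As a quick cross-check of the three steps above (our own minimal sketch, not part of the original script), the same procedure can be reproduced with NumPy/SciPy on a small synthetic system; all variable names here are illustrative.
###Code
# Hedged NumPy/SciPy sketch of the Cholesky-based least-squares solve described above.
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
A = np.column_stack([np.linspace(0, 10, 100), np.ones(100)])   # design matrix with a bias column
y = A @ np.array([1.0, 0.5]) + rng.normal(0, 1, 100)           # synthetic targets

L = np.linalg.cholesky(A.T @ A)               # step 1: A^T A = L L^T, with L lower triangular
z = solve_triangular(L, A.T @ y, lower=True)  # step 2: solve L z = A^T y for z
x = solve_triangular(L.T, z, lower=False)     # step 3: solve L^T x = z for x
print(x)                                      # should be close to np.linalg.lstsq(A, y, rcond=None)[0]
###Output
_____no_output_____
###Markdown
The TensorFlow implementation of the same three steps follows.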
###Code
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.python.framework import ops
ops.reset_default_graph()
sess = tf.Session()
#Creating data
x_vals = np.linspace(0,10, 100)
y_vals = x_vals + np.random.normal(0, 1, 100)
#creating design matrix A
x_vals_column = np.transpose(np.matrix(x_vals))
ones_column = np.transpose(np.matrix(np.repeat(1, 100)))
A = np.column_stack((x_vals_column, ones_column))
# create a y matrix
y = np.transpose(np.matrix(y_vals))
# create tensors
A_tensor = tf.constant(A)
y_tensor = tf.constant(y)
at = tf.matmul(tf.transpose(A_tensor), A_tensor)
L = tf.cholesky(at)
aty = tf.matmul(tf.transpose(A_tensor), y)
answer1 = tf.matrix_solve(L, aty)
answer2 = tf.matrix_solve(tf.transpose(L), answer1)
final_answer = sess.run(answer2)
final_answer.shape
slope = final_answer[0][0]
print('slope :', slope)
intercept = final_answer[1][0]
print('intercept : ', intercept)
best_fit = []
for i in x_vals:
best_fit.append(slope*i+intercept)
plt.plot(x_vals, y_vals, '*', label = 'data')
plt.plot(x_vals, best_fit, '-', label = 'best fit line', linewidth = 4)
plt.legend(loc = 'best')
plt.show()
###Output
_____no_output_____ |
Pikachu augmented reality/Rendering_3D_Pikachu_Computer_Vision_Intro.ipynb | ###Markdown
Introduction to Computer Vision - UFMG- Hernane Braga Pereira - 2014112627 --- Problem statementThe project goal is to detect all targets in the video frames and project a 3D Pikachu on each target. The image below shows an example.--- Project resolution stepsThese were the steps toward the final solution:1. Camera calibration from video frames2. Creating functions to detect the targets in the scene3. Selecting the targets for rendering4. Rendering the 3D Pikachu on the target**Final result frame:** --- Camera calibrationThe first step is to calibrate the camera using a chessboard as a target. The Jean-Yves Bouguet Matlab/Octave toolbox was used to estimate the intrinsic parameters. Below we can see the intrinsic parameters estimated from 39 frames. ---**Loading Libraries**
###Code
#!/usr/bin/python3
import cv2
import numpy as np
from matplotlib import pyplot as plt
from OpenGL.GL import *
from OpenGL.GLUT import *
from OpenGL.GLU import *
from PIL import Image
from objloader import *
from datetime import datetime
TARGET = './media/alvo.jpg'
OBJECT = './media/Pikachu.obj'
VIDEO_INPUT = './media/tp2-icv-input.mp4'
cameraMatrix = np.array([[1194.30008, 0, 959.5],
[0, 1193.33981, 539.5],
[0, 0, 1]])
###Output
_____no_output_____
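###Markdown
The calibration itself was done with the Bouguet Matlab/Octave toolbox, as mentioned above. For reference only, here is a hedged OpenCV sketch of how a comparable intrinsic matrix could be estimated from chessboard frames; the board size and frame paths are illustrative assumptions, not the files actually used in this project.
###Code
# Hedged sketch: estimating camera intrinsics with OpenCV (not how cameraMatrix above was obtained).
import glob
board_size = (9, 6)                                   # inner-corner count of the chessboard (assumption)
objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
obj_points, img_points = [], []
for path in glob.glob('./media/calibration/*.jpg'):   # hypothetical calibration frames
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
if img_points:
    _, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, gray.shape[::-1], None, None)
    print(K)                                          # comparable in form to the hard-coded cameraMatrix
###Output
_____no_output_____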
###Markdown
--- Object detection at the sceneThe Pikachu is placed on this target whenever it appears in the frame: **Process steps:**1. Loading the target: `load_targets`2. Pre-processing the input video frame: `binarize_frame` Loading target and applying rotations: `load_targets`Before the target can be detected in the video, we need to load it and apply some pre-processing: binarize the image and save rotated copies of the target at 90°, 180°, and 270°. Later, we compare the target found in the scene against these rotations to recover the object's orientation.
###Code
def load_targets(filename):
""" Receive target file name, apply binarization and return a list of targets with rotation: 0°, 90°, 180° and 270°
"""
# Reading the target and applying binarization
target = cv2.imread(filename)
target = cv2.cvtColor(target, cv2.COLOR_BGR2GRAY)
#target = cv2.bilateralFilter(target, 11, 17, 17)
ret, target = cv2.threshold(target, 127, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
targets_list = []
# Applying rotations
for angle in [0, 270, 180, 90]:
M = cv2.getRotationMatrix2D((target.shape[1]/2, target.shape[0]/2), angle, 1)
targets_list.append(cv2.warpAffine(target, M, (target.shape[0], target.shape[1])))
return np.array(targets_list)
targets_list = load_targets(TARGET)
plt.subplots(1,4, figsize=(15,4))
plot_idx = 140
for i, target in enumerate(targets_list):
plot_idx +=1
rotation = 90 * i
plt.subplot(str(plot_idx))
plt.title(str(rotation)+'°'), plt.axis('off')
plt.imshow(target, cmap='gray')
###Output
_____no_output_____
###Markdown
--- Pre-processing video frame: `binarize_frame`To improve detection in the scene, the following steps are applied to each video frame:1. Convert the image to grayscale2. Apply a bilateral filter3. Binarize the image (only black and white pixels)4. Detect image edgesThe `binarize_frame` function performs the first three steps.**Below are the intermediate images of this pre-processing pipeline.**
###Code
img_org = cv2.imread('media/img_cena_1.jpg', cv2.IMREAD_GRAYSCALE)
gray = cv2.bilateralFilter(img_org, 11, 17, 17)
# Converting image to a binary image
_, threshold = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
edged = cv2.Canny(threshold, 30, 200)
plt.subplots(figsize=(15, 8))
plt.subplot(141), plt.imshow(img_org, cmap='gray'), plt.title('Grayscale image')
plt.subplot(142), plt.imshow(gray, cmap='gray'), plt.title('Filtered image')
plt.subplot(143), plt.imshow(threshold, cmap='gray'), plt.title('Binary image')
plt.subplot(144), plt.imshow(edged, cmap='gray'), plt.title('Canny image')
plt.show()
def binarize_frame(frame):
"""
Binarize video frame
Parameters
----------
frame : numpy.ndarray
Video frame
Return
-------
numpy.ndarray
Returns video frame binarized
"""
img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.bilateralFilter(img, 11, 17, 17)
_, threshold = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
return threshold
frame = cv2.imread('media/img_cena_1.jpg')
bin_img = binarize_frame(frame)
plt.title('binarize_frame output')
plt.imshow(bin_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
--- Detect the target contours at the scene: `get_contours`An edge-detection (Canny) transform is applied to the binarized image and its contours are extracted. Each contour is approximated with a tolerance proportional to its perimeter; if the approximation has exactly four points (a quadrilateral), it is kept as a candidate target contour.
###Code
def grab_contours(cnts):
# support function
# source: https://github.com/jrosebr1/imutils/blob/5aae9887df3dcada5f8d8fa6af0df2122ad7aaca/imutils/convenience.py
# if the length the contours tuple returned by cv2.findContours
# is '2' then we are using either OpenCV v2.4, v4-beta, or
# v4-official
if len(cnts) == 2:
cnts = cnts[0]
# if the length of the contours tuple is '3' then we are using
# either OpenCV v3, v4-pre, or v4-alpha
elif len(cnts) == 3:
cnts = cnts[1]
# otherwise OpenCV has changed their cv2.findContours return
# signature yet again and I have no idea WTH is going on
else:
raise Exception(("Contours tuple must have length 2 or 3, "
"otherwise OpenCV changed their cv2.findContours return "
"signature yet again. Refer to OpenCV's documentation "
"in that case"))
# return the actual contours array
return cnts
def get_contours(bin_img, contour_threshold_perct=0.015):
"""
Get list of possible contours containing targets on a binary image
Parameters
----------
bin_img : numpy.ndarray
Video frame binarized
contour_threshold_perct: int
percentual of approximation allowed
Return
-------
list
Returns list of possible target contours
"""
# Get edges
edged = cv2.Canny(bin_img, 30, 200)
cnts = cv2.findContours(edged.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:]
cnts = grab_contours(cnts)
cnts = sorted(cnts, key = cv2.contourArea, reverse = True)[:10]
screenCnt = None
# loop over our contours
total_found = []
for c in cnts:
# approximate the contour
peri = cv2.arcLength(c, True)
contour_threshold = peri*contour_threshold_perct
approx = cv2.approxPolyDP(c, contour_threshold, True)
# if our approximated contour has four points, then we can assume that we have found our screen
if len(approx) == 4:
screenCnt = approx
total_found.append(screenCnt)
return total_found
total_contours = get_contours(bin_img)
# Plot contours
contour_image = frame.copy()
cv2.drawContours(contour_image, total_contours, -1, (255, 255, 0), 4)
plt.subplots(figsize=(10,8))
plt.imshow(contour_image, cmap='gray'), plt.title('Contours detected by get_contours')
plt.show()
###Output
_____no_output_____
###Markdown
--- Perspective transformation in detected contours: `get_contour_perspective`A perspective transformation is applied to each detected contour. The `get_contour_perspective` function returns a dictionary with the warped (top-down) contour image and the corresponding points in the original image, ordered as follows in the perspective image:1. top left2. top right3. bottom right4. bottom left
###Code
def get_contour_perspective(screenCnt, bin_img):
"""
Get contours points, apply perspective transformation and binarization
Parameters
----------
screenCnt : numpy.ndarray
scene contour points
bin_img : numpy.ndarray
Video frame binarized
Return
-------
dictionary
Returns the perspective transformation of image contours binarized, rotation matrix and original image points
"""
# adapted from this great tutorial: https://www.pyimagesearch.com/2014/04/21/building-pokedex-python-finding-game-boy-screen-step-4-6/
orig = bin_img
pts = screenCnt.reshape(4, 2)
rect = np.zeros((4, 2), dtype = "float32")
# the top-left point has the smallest sum whereas the
# bottom-right has the largest sum
s = pts.sum(axis = 1)
rect[0] = pts[np.argmin(s)]
rect[2] = pts[np.argmax(s)]
diff = np.diff(pts, axis = 1)
rect[1] = pts[np.argmin(diff)]
rect[3] = pts[np.argmax(diff)]
# now that we have our rectangle of points, let's compute
# the width of our new image
(tl, tr, br, bl) = rect
widthA = np.sqrt(((br[0] - bl[0]) ** 2) + ((br[1] - bl[1]) ** 2))
widthB = np.sqrt(((tr[0] - tl[0]) ** 2) + ((tr[1] - tl[1]) ** 2))
# ...and now for the height of our new image
heightA = np.sqrt(((tr[0] - br[0]) ** 2) + ((tr[1] - br[1]) ** 2))
heightB = np.sqrt(((tl[0] - bl[0]) ** 2) + ((tl[1] - bl[1]) ** 2))
# take the maximum of the width and height values to reach
# our final dimensions
maxWidth = max(int(widthA), int(widthB))
maxHeight = max(int(heightA), int(heightB))
# construct our destination points which will be used to
# map the screen to a top-down, "birds eye" view
dst = np.array([
[0, 0],
[maxWidth - 1, 0],
[maxWidth - 1, maxHeight - 1],
[0, maxHeight - 1]], dtype = "float32")
# calculate the perspective transform matrix and warp
# the perspective to grab the screen
M = cv2.getPerspectiveTransform(rect, dst)
warped = cv2.warpPerspective(orig, M, (maxWidth, maxHeight))
ret, warped_bin = cv2.threshold(warped, 127, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
detection_dict = { "warped_bin_img": warped_bin,
"perspective_M": M,
"pts_img": rect,
"angle": 0
}
return detection_dict
###Output
_____no_output_____
###Markdown
**Function output: `get_contour_perspective`**
###Code
screenCnt = total_contours[0]
detection_dict = get_contour_perspective(screenCnt, bin_img)
pts, M, warped = detection_dict['pts_img'], detection_dict['perspective_M'], detection_dict['warped_bin_img']
plt.subplots(1, 2,figsize=(15,8))
plt.subplot(121), plt.title('Points on image')
plt.imshow(bin_img, cmap='gray')
for i, pt in enumerate(pts):
plt.plot(pt[0], pt[1], 'ro')
plt.subplot(122), plt.title('Warped image')
plt.imshow(warped, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Storing all detected images in a dictionary list: `get_detected_images`To compare all images, they are saved in a list
###Code
def get_detected_images(total_contours, bin_img):
"""
Get list of detected contours and binarized frame and return a list of dictionaries containing:
- original points on image, perspective matrix and warped binarized image
"""
lst_detected_contours = []
for i, screenCnt in enumerate(total_contours):
detection_dict = get_contour_perspective(screenCnt, bin_img)
lst_detected_contours.append(detection_dict)
return lst_detected_contours
lst_detected_contours = get_detected_images(total_contours, bin_img)
###Output
_____no_output_____
###Markdown
--- Selection of targets to be rendered Cross correlation comparison function: `correlation_coefficient`Compares two image patches and returns a similarity score between 0 and 1, where 0 is the minimum (no similarity) and 1 is the maximum.
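The score is the normalised cross-correlation of the two patches (standard formulation; the notation here is ours): $$r(p_1, p_2) = \frac{\operatorname{mean}\big((p_1-\bar{p}_1)\,(p_2-\bar{p}_2)\big)}{\sigma_{p_1}\,\sigma_{p_2}},$$ with the convention that $r = 0$ when either patch has zero variance, exactly as in the code below.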
###Code
def correlation_coefficient(patch1, patch2):
product = np.mean((patch1 - patch1.mean()) * (patch2 - patch2.mean()))
stds = patch1.std() * patch2.std()
if stds == 0:
return 0
else:
product /= stds
return product
###Output
_____no_output_____
###Markdown
Here we compare one warped image with the four target orientations. Before comparison, the images are resized to the same shape.
###Code
img1 = warped
plt.subplots(1,4,figsize=(15,8))
idx = 141
for target in targets_list:
dsize = (img1.shape[1], img1.shape[0])
img2 = cv2.resize(target, dsize)
dist = correlation_coefficient(img1, img2)
plt.subplot(str(idx))
plt.title('similarity: '+ str(round(dist,3))), plt.imshow(target, cmap='gray')
idx+=1
###Output
_____no_output_____
###Markdown
Selects images that are most similar to targets: `select_detected_images`The function `select_detected_images` receives the list of dictionaries for the images located in the scene. Each image is compared with all four possible orientations of the target using the similarity function, and images with a similarity score greater than 0.55 are selected. If more than three images are chosen, the same contour was probably detected more than once with slightly different points; in that case, the function `select_image_based_on_similarity` keeps only the detection with the greatest similarity.
###Code
def select_image_based_on_similarity(similarity_matrix):
""" Remove copied detected frames based on similarity """
selected_images = []
for j in np.arange(similarity_matrix.shape[1]):
index_lst, similarity_lst = [], []
for i in np.arange(similarity_matrix.shape[0]):
similarity = similarity_matrix[i, j]
if(similarity > 0):
index_lst.append(i)
similarity_lst.append(similarity)
# Compare 2 by 2 similarity and choose the best
if(len(similarity_lst) > 1):
for z in range(0, len(similarity_lst), 2):
a = similarity_lst[z]
b = similarity_lst[z+1]
if(a >= b): selected_images.append(index_lst[z])
else: selected_images.append(index_lst[z+1])
selected_images.sort()
return selected_images
def select_detected_images(lst_detected_contours, targets_list, threshold=0.55):
"""Receive list of dictionary images and target list. Return selected images for the 3 targets"""
similarity_matrix = np.empty((len(lst_detected_contours), 4))
# Calculating similarity matrix
for i, item in enumerate(lst_detected_contours):
warped_img = item['warped_bin_img']
for j, target in enumerate(targets_list):
shape = target.shape
dsize = (shape[1], shape[0])
warped_reshaped = cv2.resize(warped_img, dsize)
similarity_matrix[i,j] = correlation_coefficient(warped_reshaped, target)
if (similarity_matrix[i,j] < threshold): similarity_matrix[i,j] = 0
# Give rotation to image based on max similarity
rotation = np.argmax(similarity_matrix[i,:]) *90
item['angle'] = rotation
# Distinguisinh copied images based on similarity
selected_images_idx = select_image_based_on_similarity(similarity_matrix)
# Selecting final images
selected_images = []
aux = 0
for i, dict_img in enumerate(lst_detected_contours):
if(aux == len(selected_images_idx)): break
if(i == selected_images_idx[aux]):
aux+=1
selected_images.append(dict_img)
return selected_images
selected_images = select_detected_images(lst_detected_contours, targets_list)
###Output
_____no_output_____
###Markdown
The images with the points below are selected in the end:
###Code
plt.subplots(1, 1,figsize=(10,6))
pt1, pt2, pt3 = selected_images[0]['pts_img'], selected_images[1]['pts_img'], selected_images[2]['pts_img']
plt.title('Points on image')
plt.imshow(bin_img, cmap='gray')
for i, pt in enumerate(pt1): plt.plot(pt[0], pt[1], 'ro')
for i, pt in enumerate(pt2): plt.plot(pt[0], pt[1], 'go')
for i, pt in enumerate(pt3): plt.plot(pt[0], pt[1], 'bo')
###Output
_____no_output_____
###Markdown
--- Adjusting poses for object projection: `change_img_pts_to_obj_orientation`Finally, a function is defined to map the selected 2D points into the point order expected by OpenGL for the 3D object projection.**Order of points in the scene:**0. top left1. top right2. bottom right3. bottom left**Order of points of the 3D object:**0. bottom left1. bottom right2. top right3. top left
###Code
def change_img_pts_to_obj_orientation(imagePoints):
new_image_pts = imagePoints.copy()
new_image_pts[0,:] = imagePoints[3,:] # top left ---> bot left
new_image_pts[1,:] = imagePoints[2,:] # top rigth ---> bot rigth
new_image_pts[2,:] = imagePoints[1,:] # bot rigth ---> top rigth
new_image_pts[3,:] = imagePoints[0,:] # bot left ---> top left
return new_image_pts.astype(float)
old_imagePoints = selected_images[0]['pts_img']
imagePoints = change_img_pts_to_obj_orientation(old_imagePoints)
print('Original image points:\n', old_imagePoints)
print('\nImage points on object orientation:\n', imagePoints)
###Output
Original image points:
[[1254. 656.]
[1385. 752.]
[1300. 869.]
[1168. 765.]]
Image points on object orientation:
[[1168. 765.]
[1300. 869.]
[1385. 752.]
[1254. 656.]]
###Markdown
--- Rendering 3D object in the videoBelow, the OpenGL functions are defined and executed to render the Pikachu in the video. Important details: the video is in 1920x1080 resolution. For a reason still unknown, I was unable to adjust the texture for a smaller OpenCV window, so the OpenCV window is also 1920x1080. I tried loading the video frames at a lower resolution to work around this, but it was still impossible to render correctly in a smaller window. Another point about the resolution: the window size apparently causes the Pikachu on the top target to appear intermittently. I believe that target would be rendered correctly with a smaller texture window, since it was detected correctly throughout the video. When executing the cells below, the OpenGL window will open together with the video showing the detected outlines.
###Code
def object3D(imagePoints, angle, cameraMatrix, distCoeffs, obj):
objectPoints = np.array([[-1, -1, 1],
[ 1, -1, 1],
[ 1, 1, 1],
[-1, 1, 1]], dtype="float32")
_, rvecs, tvecs = cv2.solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs)
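    # solvePnP gives the target pose (rotation vector rvecs + translation tvecs) in the camera
    # frame; Rodrigues then converts the rotation vector into a 3x3 rotation matrix below.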
rotm = cv2.Rodrigues(rvecs)[0]
m = np.array([[rotm[0][0], rotm[0][1], rotm[0][2], tvecs[0]],
[rotm[1][0], rotm[1][1], rotm[1][2], tvecs[1]],
[rotm[2][0], rotm[2][1], rotm[2][2], tvecs[2]],
[ 0.0, 0.0, 0.0, 1.0]])
##opencv coordinate system to opengl coordinate system
flip_y_and_z_axis = np.array([[1, 0, 0, 0],
[0, -1, 0, 0],
[0, 0, -1, 0],
[0, 0, 0, 1]])
m = np.dot(flip_y_and_z_axis, m)
m = np.transpose(m)
glLoadMatrixd(m)
#glutSolidCube(2.0) show cube
glRotate(90, 1, 0, 0)
glRotate(180, 0, 1, 1)
glCallList(obj.gl_list) # pikachuuuu
def idleCallback():
"""
GLUT idle callback.
"""
glutPostRedisplay()
def displayCallback():
"""
GLUT display callback: draws the current video frame as the background and renders the 3D object on the detected targets.
"""
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
glLoadIdentity()
ret, frame_oginal = cap.read()
targets_list = load_targets(TARGET) # Loading target with different positions
frame = frame_oginal
if ret == True:
background = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
background = cv2.flip(background, 0)
height, width, channels = background.shape
background = np.frombuffer(background.tostring(), dtype=background.dtype, count=height * width * channels)
background.shape = (height, width, channels)
glEnable(GL_TEXTURE_2D)
glBindTexture(GL_TEXTURE_2D, background_id)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
glTexImage2D(GL_TEXTURE_2D, 0, 3, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, background)
glDepthMask(GL_FALSE)
glMatrixMode(GL_PROJECTION)
glPushMatrix()
glLoadIdentity()
gluOrtho2D(0, width, 0, height)
glMatrixMode(GL_MODELVIEW)
glPushMatrix()
glBegin(GL_QUADS)
glTexCoord2i(0, 0); glVertex2i(0, 0)
glTexCoord2i(1, 0); glVertex2i(width, 0)
glTexCoord2i(1, 1); glVertex2i(width, height)
glTexCoord2i(0, 1); glVertex2i(0, height)
glEnd()
glPopMatrix()
glMatrixMode(GL_PROJECTION)
glPopMatrix()
glMatrixMode(GL_MODELVIEW)
glDepthMask(GL_TRUE)
glDisable(GL_TEXTURE_2D)
# ====================================================== #
# Previously defined functions to get target on scene #
# ====================================================== #
bin_img = binarize_frame(frame) # binarize frame
total_contours = get_contours(bin_img, contour_threshold_perct=0.01) # list of possible target contours
# Plotting for debug
#cv2.drawContours(frame, total_contours, -1, (255, 255, 0), 4)
#cv2.imshow('debug contours', frame)
lst_detected_contours = get_detected_images(total_contours, bin_img) # detected contours
selected_images = select_detected_images(lst_detected_contours, targets_list, threshold=0.45) # selected targets on image
if(len(selected_images) > 0):
for i, item in enumerate(selected_images):
imagePoints = change_img_pts_to_obj_orientation(item['pts_img'])
angle = item['angle']
object3D(imagePoints, angle, cameraMatrix, distCoeffs, obj)
glutSwapBuffers()
# Uncomment for saving images and then make video
'''
now = datetime.now()
timestamp = datetime.timestamp(now)
data = glReadPixels(0, 0, 1920, 1080, GL_RGBA, GL_UNSIGNED_BYTE)
image = Image.frombytes("RGBA", (1920, 1080), data)
image = image.rotate(180)
image.save(f"img/pic_{timestamp}.png")
'''
else:
glutDestroyWindow(window)
def initOpenGL(cameraMatrix, dimensions):
(width_dim, height_dim) = dimensions
glClearColor(0.0, 0.0, 0.0, 0.0)
glClearDepth(1.0)
glDepthFunc(GL_LESS)
glEnable(GL_DEPTH_TEST)
glShadeModel(GL_SMOOTH)
lightAmbient = [1.0, 1.0, 1.0, 1.0]
lightDiffuse = [1.0, 1.0, 1.0, 1.0]
lightPosition = [1.0, 1.0, 1.0, 0.0]
glLightfv(GL_LIGHT0, GL_AMBIENT, lightAmbient)
glLightfv(GL_LIGHT0, GL_DIFFUSE, lightDiffuse)
glLightfv(GL_LIGHT0, GL_POSITION, lightPosition)
glEnable(GL_LIGHT0)
glEnable(GL_LIGHTING)
glMatrixMode(GL_PROJECTION)
glLoadIdentity()
fx = cameraMatrix[0,0]
fy = cameraMatrix[1,1]
fovy = 2*np.arctan(0.5*height_dim/fy)*180/np.pi
aspect = (width_dim*fy)/(height_dim*fx)
near = 0.01
far = 250.0
gluPerspective(fovy, aspect, near, far)
glMatrixMode(GL_MODELVIEW)
obj = OBJ("Pikachu.obj", swapyz=True)
glEnable(GL_TEXTURE_2D)
background_id = glGenTextures(1)
return obj, background_id
if __name__ == '__main__':
cameraMatrix = np.array([[1194.30008, 0, 959.5],
[0, 1193.33981, 539.5],
[0, 0, 1]])
distCoeffs = np.array([0., 0., 0., 0., 0.])
dimensions = (1920, 1080 )
cap = cv2.VideoCapture(VIDEO_INPUT)
if (cap.isOpened() == False):
print("Error opening video stream or file")
glutInit()
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE)
glutSetOption(GLUT_ACTION_ON_WINDOW_CLOSE, GLUT_ACTION_CONTINUE_EXECUTION)
glutInitWindowSize(*dimensions)
window = glutCreateWindow(b'Realidade Aumentada')
obj, background_id = initOpenGL(cameraMatrix, dimensions)
glutDisplayFunc(displayCallback)
glutIdleFunc(idleCallback)
glutMainLoop()
cap.release()
#uncomment for creating output video
'''
import cv2
import numpy as np
import glob
img_array = []
for filename in glob.glob('img/*.png'):
img = cv2.imread(filename)
height, width, layers = img.shape
size = (width,height)
img_array.append(img)
out = cv2.VideoWriter('project.avi',cv2.VideoWriter_fourcc(*'DIVX'), 15, size)
for i in range(len(img_array)):
out.write(img_array[i])
out.release()
'''
###Output
_____no_output_____ |
assignment 1/files/svm.ipynb | ###Markdown
Multiclass Support Vector Machine exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*In this exercise you will: - implement a fully-vectorized **loss function** for the SVM- implement the fully-vectorized expression for its **analytic gradient**- **check your implementation** using numerical gradient- use a validation set to **tune the learning rate and regularization** strength- **optimize** the loss function with **SGD**- **visualize** the final learned weights
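For reference, the multiclass SVM (hinge) loss to be implemented here is, for a single example $(x_i, y_i)$ with scores $s = x_i W$, $$L_i = \sum_{j \neq y_i} \max\big(0,\; s_j - s_{y_i} + \Delta\big), \qquad \Delta = 1,$$ and the full objective averages $L_i$ over the training examples and adds the regularization term $\lambda \sum_{k,l} W_{k,l}^2$.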
###Code
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the
# notebook rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
CIFAR-10 Data Loading and Preprocessing
###Code
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Split the data into train, val, and test sets. In addition we will
# create a small development set as a subset of the training data;
# we can use this for development so our code runs faster.
num_training = 49000
num_validation = 1000
num_test = 1000
num_dev = 500
# Our validation set will be num_validation points from the original
# training set.
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
# Our training set will be the first num_train points from the original
# training set.
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
# We will also make a development set, which is a small subset of
# the training set.
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
# We use the first num_test points of the original test set as our
# test set.
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
# As a sanity check, print out the shapes of the data
print 'Training data shape: ', X_train.shape
print 'Validation data shape: ', X_val.shape
print 'Test data shape: ', X_test.shape
print 'dev data shape: ', X_dev.shape
# Preprocessing: subtract the mean image
# first: compute the image mean based on the training data
mean_image = np.mean(X_train, axis=0)
print mean_image[:10] # print a few of the elements
plt.figure(figsize=(4,4))
plt.imshow(mean_image.reshape((32,32,3)).astype('uint8')) # visualize the mean image
plt.show()
# second: subtract the mean image from train and test data
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# third: append the bias dimension of ones (i.e. bias trick) so that our SVM
# only has to worry about optimizing a single weight matrix W.
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
print X_train.shape, X_val.shape, X_test.shape, X_dev.shape
###Output
(49000, 3073) (1000, 3073) (1000, 3073) (500, 3073)
###Markdown
SVM ClassifierYour code for this section will all be written inside **cs231n/classifiers/linear_svm.py**. As you can see, we have prefilled the function `compute_loss_naive` which uses for loops to evaluate the multiclass SVM loss function.
###Code
# Evaluate the naive implementation of the loss we provided for you:
from cs231n.classifiers.linear_svm import svm_loss_naive
import time
# generate a random SVM weight matrix of small numbers
W = np.random.randn(3073, 10) * 0.0001
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.00001)
print 'loss: %f' % (loss, )
###Output
loss: 9.363516
###Markdown
The `grad` returned from the function above is right now all zero. Derive and implement the gradient for the SVM cost function and implement it inline inside the function `svm_loss_naive`. You will find it helpful to interleave your new code inside the existing function.To check that you have implemented the gradient correctly, you can numerically estimate the gradient of the loss function and compare the numeric estimate to the gradient that you computed. We have provided code that does this for you:
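Concretely, the numerical check estimates each partial derivative with a finite difference of the form $$\frac{\partial L}{\partial W_{k,l}} \approx \frac{L(W + h\,e_{k,l}) - L(W - h\,e_{k,l})}{2h}$$ for a small step $h$ (here $e_{k,l}$ denotes the matrix that is 1 in entry $(k,l)$ and 0 elsewhere), and compares it with your analytic gradient at a few randomly chosen entries.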
###Code
# Once you've implemented the gradient, recompute it with the code below
# and gradient check it with the function we provided for you
# Compute the loss and its gradient at W.
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.0)
# Numerically compute the gradient along several randomly chosen dimensions, and
# compare them with your analytically computed gradient. The numbers should match
# almost exactly along all dimensions.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad)
# do the gradient check once again with regularization turned on
# you didn't forget the regularization gradient did you?
loss, grad = svm_loss_naive(W, X_dev, y_dev, 1e2)
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 1e2)[0]
grad_numerical = grad_check_sparse(f, W, grad)
###Output
numerical: 4.999563 analytic: 4.999563, relative error: 2.785926e-12
numerical: -5.369367 analytic: -5.369367, relative error: 7.610511e-11
numerical: -2.786107 analytic: -2.786107, relative error: 3.473293e-10
numerical: 11.253089 analytic: 11.247423, relative error: 2.517802e-04
numerical: -29.446560 analytic: -29.446560, relative error: 1.260426e-11
numerical: -2.847353 analytic: -2.847353, relative error: 1.386084e-10
numerical: 12.902175 analytic: 12.902175, relative error: 3.252051e-11
numerical: -4.628786 analytic: -4.628786, relative error: 2.864311e-11
numerical: 6.501490 analytic: 6.501490, relative error: 3.391053e-11
numerical: 5.710601 analytic: 5.710601, relative error: 1.119376e-11
numerical: -43.696100 analytic: -43.696100, relative error: 5.761276e-13
numerical: 19.376208 analytic: 19.376208, relative error: 6.916496e-12
numerical: 16.064548 analytic: 15.979319, relative error: 2.659778e-03
numerical: 5.417354 analytic: 5.366163, relative error: 4.747124e-03
numerical: 11.924040 analytic: 11.924040, relative error: 1.188140e-11
numerical: 11.993793 analytic: 11.993793, relative error: 1.438461e-11
numerical: -18.700982 analytic: -18.694448, relative error: 1.747185e-04
numerical: -52.179168 analytic: -52.179168, relative error: 4.053072e-12
numerical: 0.309777 analytic: 0.273440, relative error: 6.230434e-02
numerical: 3.909188 analytic: 3.909188, relative error: 3.134604e-11
###Markdown
Inline Question 1:It is possible that once in a while a dimension in the gradcheck will not match exactly. What could such a discrepancy be caused by? Is it a reason for concern? What is a simple example in one dimension where a gradient check could fail? *Hint: the SVM loss function is not strictly speaking differentiable***Your Answer:** *The gradient check does not compute the exact gradient of the SVM loss; it only computes a finite-difference approximation. A discrepancy can appear when the evaluation point lies close to one of the 'hinges' of the loss, so that the finite step crosses a kink: for example, a step of +0.001 may give a slope of 0 while a step of -0.001 gives -1. This is expected, because the numerical computation is only a finite approximation, and it is not a serious cause for concern.*
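A concrete one-dimensional illustration (our own example, using a centred difference): take $f(x)=\max(0,\,1-x)$ and check the gradient at $x=1$. The analytic (sub)gradient is $0$ or $-1$ depending on which side of the kink the implementation picks, while the numerical estimate is $\frac{f(1+h)-f(1-h)}{2h}=\frac{0-h}{2h}=-\tfrac{1}{2}$, so the two can legitimately disagree right at a hinge.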
###Code
# Next implement the function svm_loss_vectorized; for now only compute the loss;
# we will implement the gradient in a moment.
tic = time.time()
loss_naive, grad_naive = svm_loss_naive(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'Naive loss: %e computed in %fs' % (loss_naive, toc - tic)
from cs231n.classifiers.linear_svm import svm_loss_vectorized
tic = time.time()
loss_vectorized, _ = svm_loss_vectorized(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'Vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic)
# The losses should match but your vectorized implementation should be much faster.
print 'difference: %f' % (loss_naive - loss_vectorized)
# Complete the implementation of svm_loss_vectorized, and compute the gradient
# of the loss function in a vectorized way.
# The naive implementation and the vectorized implementation should match, but
# the vectorized version should still be much faster.
tic = time.time()
_, grad_naive = svm_loss_naive(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'Naive loss and gradient: computed in %fs' % (toc - tic)
tic = time.time()
_, grad_vectorized = svm_loss_vectorized(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'Vectorized loss and gradient: computed in %fs' % (toc - tic)
# The loss is a single number, so it is easy to compare the values computed
# by the two implementations. The gradient on the other hand is a matrix, so
# we use the Frobenius norm to compare them.
difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print 'difference: %f' % difference
###Output
Naive loss and gradient: computed in 0.371716s
Vectorized loss and gradient: computed in 0.012759s
difference: 0.000000
###Markdown
Stochastic Gradient DescentWe now have vectorized and efficient expressions for the loss and the gradient, and our gradient matches the numerical gradient. We are therefore ready to do SGD to minimize the loss.
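Before running the full training loop, here is a minimal sketch (ours, not the graded reference code in linear_classifier.py) of what a single SGD step looks like with the pieces built so far; the batch size and step size below are illustrative.
###Code
# Hedged sketch of one SGD step using the vectorized loss/gradient from above.
batch_size = 200
step_size = 1e-7
batch_idx = np.random.choice(X_train.shape[0], batch_size, replace=True)
X_batch, y_batch = X_train[batch_idx], y_train[batch_idx]
loss, grad = svm_loss_vectorized(W, X_batch, y_batch, 5e4)
W_step = W - step_size * grad   # move the weights against the gradient
print 'minibatch loss: %f' % loss
###Output
_____no_output_____
###Markdown
The cell below runs the full training loop implemented in LinearClassifier.train().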
###Code
# In the file linear_classifier.py, implement SGD in the function
# LinearClassifier.train() and then run it with the code below.
from cs231n.classifiers import LinearSVM
svm = LinearSVM()
tic = time.time()
loss_hist = svm.train(X_train, y_train, learning_rate=1e-7, reg=5e4, num_iters=1500, verbose=True)
toc = time.time()
print 'That took %fs' % (toc - tic)
# A useful debugging strategy is to plot the loss as a function of
# iteration number:
plt.plot(loss_hist)
plt.xlabel('Iteration number')
plt.ylabel('Loss value')
plt.show()
# Write the LinearSVM.predict function and evaluate the performance on both the
# training and validation set
y_train_pred = svm.predict(X_train)
print 'training accuracy: %f' % (np.mean(y_train == y_train_pred), )
y_val_pred = svm.predict(X_val)
print 'validation accuracy: %f' % (np.mean(y_val == y_val_pred), )
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of about 0.4 on the validation set.
learning_rates = [1e-7, 2.5e-6, 5e-5]
regularization_strengths = [5e4, 2.5e4, 1e5]
# results is dictionary mapping tuples of the form
# (learning_rate, regularization_strength) to tuples of the form
# (training_accuracy, validation_accuracy). The accuracy is simply the fraction
# of data points that are correctly classified.
results = {}
best_val = -1 # The highest validation accuracy that we have seen so far.
best_svm = None # The LinearSVM object that achieved the highest validation rate.
################################################################################
# TODO: #
# Write code that chooses the best hyperparameters by tuning on the validation #
# set. For each combination of hyperparameters, train a linear SVM on the #
# training set, compute its accuracy on the training and validation sets, and #
# store these numbers in the results dictionary. In addition, store the best #
# validation accuracy in best_val and the LinearSVM object that achieves this #
# accuracy in best_svm. #
# #
# Hint: You should use a small value for num_iters as you develop your #
# validation code so that the SVMs don't take much time to train; once you are #
# confident that your validation code works, you should rerun the validation #
# code with a larger value for num_iters. #
################################################################################
for learning_rate in learning_rates:
for reg_str in regularization_strengths:
svm = LinearSVM()
svm.train(X_train, y_train, learning_rate=learning_rate, reg=reg_str, num_iters=600, verbose=False)
y_train_pred = svm.predict(X_train)
train_acc = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val)
valid_acc = np.mean(y_val == y_val_pred)
print 'learning rate: %f, reg str: %f' % (learning_rate, reg_str)
print 'training accuracy: %f, validation accuracy: %f' % (train_acc, valid_acc)
if valid_acc >= best_val:
best_val = valid_acc
best_svm = svm
results[(learning_rate, reg_str)] = (train_acc, valid_acc)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy)
print 'best validation accuracy achieved during cross-validation: %f' % best_val
# Visualize the cross-validation results
import math
x_scatter = [math.log10(x[0]) for x in results]
y_scatter = [math.log10(x[1]) for x in results]
# plot training accuracy
marker_size = 100
colors = [results[x][0] for x in results]
plt.subplot(2, 1, 1)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 training accuracy')
# plot validation accuracy
colors = [results[x][1] for x in results] # default size of markers is 20
plt.subplot(2, 1, 2)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 validation accuracy')
plt.show()
# Evaluate the best svm on test set
y_test_pred = best_svm.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print 'linear SVM on raw pixels final test set accuracy: %f' % test_accuracy
# Visualize the learned weights for each class.
# Depending on your choice of learning rate and regularization strength, these may
# or may not be nice to look at.
w = best_svm.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in xrange(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
###Output
_____no_output_____ |
ETH & BTC Load Into SQL.ipynb | ###Markdown
Load Ethereum and Bitcoin Historical Data* This notebook should only be run **once** in order to load the cleaned, up-to-date (as of 8/14/20) 1-minute price history data for the Ethereum and Bitcoin cryptocurrencies* This notebook should be run only after creating the table schemas for the two cryptocurrencies in pgAdmin - the tables may be created in this notebook and loaded into the database directly from here at a later time, but for now that feature is unavailable* The table schema code will be included in this repo and the instructions to use it should be located in the README* A separate notebook will be created and used repeatedly to update the database incrementally as time goes on and the prices change and more data is generated Load Ethereum Data* The steps below outline how to initially load the Ethereum (ETH) data from the .csv located in the **Ethereum/IO/** folder
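For reference, here is a hedged sketch of what the pgAdmin table schema for the Ethereum table might look like, inferred only from the column names used later in this notebook; the schema file included in the repo is the authoritative version, and the data types below are assumptions.
###Code
# Hedged sketch (not executed against the database here): possible DDL for the "ethereum" table,
# inferred from the columns loaded below. The real schema lives in the repo's schema file.
ethereum_ddl = """
CREATE TABLE ethereum (
    unix_timestamp BIGINT,
    entry_date TIMESTAMPTZ,
    symbol VARCHAR(16),
    open_price NUMERIC,
    high_price NUMERIC,
    low_price NUMERIC,
    close_price NUMERIC,
    coin_volume NUMERIC
);
"""
print(ethereum_ddl)
###Output
_____no_output_____
###Markdown
The cells below build the SQLAlchemy connection string and load the Ethereum CSV into the table.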
###Code
#create the string to connect to the database - will be used with sqlalchemy!
import pandas as pd
from sqlalchemy import create_engine
import config  # local module that holds the database credentials (config.user, config.pw)
protocol = "postgres"
user = config.user
password = config.pw
location = "localhost"
port = "5432"
database = "crypto"
connection_string = f"{protocol}://{user}:{password}@{location}:{port}/{database}"
print(connection_string)
# load in Ethereum csv file for the notebook, to be loaded into SQL
eth_csv = './Ethereum/IO/ETH_1min.csv'
eth_df = pd.read_csv(eth_csv)
#convert the "Date" column to datetime objects with timezones, because it is read in as text
eth_df["Date"] = pd.to_datetime(eth_df["Date"], utc=True)
eth_df
#update the column names to match the schema of the database table
sql_columns = ["Unix_Timestamp", "Entry_Date", "Symbol", "Open_Price", "High_Price", "Low_Price", "Close_Price", "Coin_Volume"]
lowercase_sql_columns = [a.lower() for a in sql_columns]
eth_df.columns = lowercase_sql_columns
eth_df.head()
#setup the sqlalchemy engine
#create the engine to interact with the database with the connection string
engine = create_engine(connection_string)
#then load the dataframe into the SQL table!
#**********THIS WILL FAIL UPON RUNNING AS A DEFAULT - ONLY CHANGE THE "if_exists='fail'" PARAMETER BELOW TO 'append'
#**********IF LOADING DATA FOR THE FIRST TIME! OTHERWISE CHECK THE README FOR THE CORRECT NOTEBOOK TO UPDATE THE DATABASE!
eth_df.to_sql(name="ethereum", con=engine, index=False, if_exists="fail")
print("If you can see this, the table should have loaded successfully!")
#check that the table loaded correctly by reading it from sql and comparing it to the
#dataframe we inserted
check_df = pd.read_sql_table(table_name="ethereum", con=engine)
check_df
#output whether the data read matches the data written to the database!
#make sure both dataframes are sorted, and indexed correctly, or there may be issues - some data from the database was not
#matching due to having the order changed upon insertion somehow!
sorted_check_df = check_df.sort_values(by="unix_timestamp").reset_index(drop=True)
sorted_eth_df = eth_df.sort_values(by="unix_timestamp").reset_index(drop=True)
if(sorted_check_df.equals(sorted_eth_df)):
print("Good Job! You have successfully loaded the 'Ethereum' data!")
else:
print("It looks like the data you wrote to the database does not match the data read from the database.")
###Output
_____no_output_____
###Markdown
Load Bitcoin Data* The steps below outline how to initially load the Bitcoin (BTC) data from the .csv located in the **Bitcoin/IO/** folder* The steps outlined below are essentially the same as the steps for loading the Ethereum data above, with some table names changed, so if you got the Ethereum data loaded already, loading the Bitcoin data here should not be a problem!
###Code
#already connected to the database from when we loaded the Ethereum data above
#so the first step is to load the Bitcoin .csv file into the notebook
# load in Bitcoin csv file for the notebook, to be loaded into SQL
btc_csv = './Bitcoin/IO/coinbaseUSD_1-min_data.csv'
btc_df = pd.read_csv(btc_csv)
#convert the "Date" column to datetime objects with timezones, because it is read in as text
btc_df["Date"] = pd.to_datetime(btc_df["Date"], utc=True)
btc_df
#update the column names to match the schema of the database table
sql_columns = ["Unix_Timestamp", "Entry_Date", "Symbol", "Open_Price", "High_Price", "Low_Price", "Close_Price", "Coin_Volume"]
lowercase_sql_columns = [a.lower() for a in sql_columns]
btc_df.columns = lowercase_sql_columns
btc_df.head()
#load the dataframe into the SQL table!
#no need to create the engine, it should already exist from loading the Ethereum data
#**********THIS WILL FAIL UPON RUNNING AS A DEFAULT - ONLY CHANGE THE "if_exists='fail'" PARAMETER BELOW TO 'append'
#**********IF LOADING DATA FOR THE FIRST TIME! OTHERWISE CHECK THE README FOR THE CORRECT NOTEBOOK TO UPDATE THE DATABASE!
btc_df.to_sql(name="bitcoin", con=engine, index=False, if_exists="fail")
print("If you can see this, the table should have loaded successfully!")
#check that the table loaded correctly by reading it from sql and comparing it to the
#dataframe we inserted
check_btc_df = pd.read_sql_table(table_name="bitcoin", con=engine)
check_btc_df
#output whether the data read matches the data written to the database!
#make sure both dataframes are sorted, and indexed correctly, or there may be issues - the data from the database was not
#matching due to having the order changed upon insertion somehow!
sorted_check_btc_df = check_btc_df.sort_values(by="unix_timestamp").reset_index(drop=True)
sorted_btc_df = btc_df.sort_values(by="unix_timestamp").reset_index(drop=True)
if(sorted_check_btc_df.equals(sorted_btc_df)):
print("Good Job! You have successfully loaded the 'Bitcoin' data!")
else:
print("It looks like the data you wrote to the database does not match the data read from the database.")
###Output
_____no_output_____ |
utils/notebooks/self-supervised-neuroevolution/sb3_baselines.ipynb | ###Markdown
Acrobot-v1
###Code
env = gym.make('Acrobot-v1')
model = load('a2c', 'acrobot')
print('A2C ' + evaluate(env, model) )
model = load('dqn', 'acrobot')
print('DQN ' + evaluate(env, model, track_rewards=True) )
model = load('ppo', 'acrobot')
print('PPO ' + evaluate(env, model) )
model = load('qrdqn', 'acrobot')
print('QRDQN ' + evaluate(env, model) )
###Output
A2C -> -81.0±12.8
DQN -> -80.4±8.6
PPO -> -89.0±23.7
QRDQN -> -81.5±16.7
###Markdown
CartPole-v1
###Code
env = gym.make('CartPole-v1')
model = load('a2c', 'cart_pole')
print('A2C ' + evaluate(env, model) )
model = load('dqn', 'cart_pole')
print('DQN ' + evaluate(env, model) )
model = load('ppo', 'cart_pole')
print('PPO ' + evaluate(env, model, track_rewards=True) )
model = load('qrdqn', 'cart_pole')
print('QRDQN ' + evaluate(env, model) )
###Output
A2C -> 500.0±0.0
DQN -> 500.0±0.0
PPO -> 500.0±0.0
QRDQN -> 500.0±0.0
###Markdown
MountainCar-v0
###Code
env = gym.make('MountainCar-v0')
model = load('a2c', 'mountain_car')
print('A2C ' + evaluate(env, model) ) # -111.3 24.1
model = load('dqn', 'mountain_car')
print('DQN ' + evaluate(env, model, track_rewards=True) )
model = load('ppo', 'mountain_car')
print('PPO ' + evaluate(env, model) ) # -110.4 19.473
model = load('qrdqn', 'mountain_car')
print('QRDQN ' + evaluate(env, model) )
###Output
A2C -> -200.0±0.0
DQN -> -119.9±23.5
PPO -> -200.0±0.0
QRDQN -> -128.7±31.7
###Markdown
MountainCarContinuous-v0
###Code
env = gym.make('MountainCarContinuous-v0')
model = load('a2c', 'mountain_car_continuous')
print('A2C ' + evaluate(env, model) ) # 91.2 0.3
model = load('ddpg', 'mountain_car_continuous')
print('DDPG ' + evaluate(env, model) )
model = load('ppo', 'mountain_car_continuous')
print('PPO ' + evaluate(env, model) ) # 88.3 2.6
model = load('sac', 'mountain_car_continuous')
print('SAC ' + evaluate(env, model, track_rewards=True) )
model = load('td3', 'mountain_car_continuous')
print('TD3 ' + evaluate(env, model) )
model = load('tqc', 'mountain_car_continuous')
print('TQC ' + evaluate(env, model) )
###Output
A2C -> -99.9±0.0
DDPG -> 93.5±0.1
PPO -> -18.7±0.7
SAC -> 94.6±1.0
TD3 -> 93.4±0.1
TQC -> 83.9±30.9
###Markdown
Pendulum-v1
###Code
env = gym.make('Pendulum-v1')
model = load('a2c', 'pendulum')
print('A2C ' + evaluate(env, model) ) # -163.0 103.2
model = load('ddpg', 'pendulum')
print('DDPG ' + evaluate(env, model) )
model = load('ppo', 'pendulum')
print('PPO ' + evaluate(env, model) )
model = load('sac', 'pendulum')
print('SAC ' + evaluate(env, model) )
model = load('td3', 'pendulum')
print('TD3 ' + evaluate(env, model) )
model = load('tqc', 'pendulum')
print('TQC ' + evaluate(env, model, track_rewards=True) )
###Output
A2C -> -1593.8±21.3
DDPG -> -149.5±60.6
PPO -> -206.9±76.8
SAC -> -176.7±64.5
TD3 -> -154.1±64.4
TQC -> -150.6±61.2
###Markdown
LunarLander-v2
###Code
env = gym.make('LunarLander-v2')
model = load('a2c', 'lunar_lander')
print('A2C ' + evaluate(env, model) )
model = load('dqn', 'lunar_lander')
print('DQN ' + evaluate(env, model) )
model = load('ppo', 'lunar_lander')
print('PPO ' + evaluate(env, model, track_rewards=True) ) # 242.1 31.8
model = load('qrdqn', 'lunar_lander')
print('QRDQN ' + evaluate(env, model) )
###Output
A2C -> 150.8±132.3
DQN -> 115.0±103.1
PPO -> 142.7±21.0
QRDQN -> 156.4±133.1
###Markdown
LunarLanderContinuous-v2
###Code
env = gym.make('LunarLanderContinuous-v2')
model = load('a2c', 'lunar_lander_continuous')
print('A2C ' + evaluate(env, model) ) # 84.2 145.9
model = load('ddpg', 'lunar_lander_continuous')
print('DDPG ' + evaluate(env, model) )
model = load('ppo', 'lunar_lander_continuous')
print('PPO ' + evaluate(env, model) )
model = load('sac', 'lunar_lander_continuous')
print('SAC ' + evaluate(env, model, track_rewards=True) )
model = load('td3', 'lunar_lander_continuous')
print('TD3 ' + evaluate(env, model) )
model = load('tqc', 'lunar_lander_continuous')
print('TQC ' + evaluate(env, model) )
###Output
A2C -> -102.5±17.5
DDPG -> 194.4±147.7
PPO -> 128.7±41.4
SAC -> 269.7±20.4
TD3 -> 228.8±50.8
TQC -> 239.1±75.2
###Markdown
Swimmer-v3
###Code
env = gym.make('Swimmer-v3')
model = load('a2c', 'swimmer')
print('A2C ' + evaluate(env, model) )
# ValueError: Error: Unexpected observation shape (8,) for Box environment,
# please use (9,) or (n_env, 9) for the observation shape.
# model = load('ppo', 'swimmer')
# print('PPO ' + evaluate(env, model) ) # 281.6 9.7
model = load('sac', 'swimmer')
print('SAC ' + evaluate(env, model) )
model = load('td3', 'swimmer')
print('TD3 ' + evaluate(env, model, track_rewards=True) )
model = load('tqc', 'swimmer')
print('TQC ' + evaluate(env, model) )
###Output
A2C -> 122.9±5.7
SAC -> 334.6±2.8
TD3 -> 358.3±1.6
TQC -> 328.7±1.7
|
Readme/Session6_Assignment/GridSearch/Final_GridSearch_S_E_EVA5_S5_v6_FineTune_LR_scheduler_final_S6_L1_BN.ipynb | ###Markdown
FineTune_LR_scheduler - S5_v6 Target:1. Fine-tune the LR scheduler: keep LR = 0.1 as before but update StepSize = 12 and Gamma = 0.2 Results:1. Parameters: 7,612 2. Best Train Accuracy: 99.41 3. Best Test Accuracy: 99.49 Analysis:1. To find the best combination, StepSize = 12 and Gamma = 0.2, we ran many trials over these two values.2. The intuition behind these values: we observed that accuracy increases gradually until around 10 epochs and stalls from there, so we want to update the LR around epochs 10-12.3. We tried the StepSize and Gamma combinations (10, 0.1), (11, 0.1), (12, 0.1), but they did not reach the target accuracy consistently in the last few epochs.4. So we decided to keep the learning a little faster after 10-12 epochs by using Gamma = 0.2 (a milder LR drop) and tried the combinations (10, 0.2), (11, 0.2), (12, 0.2). Finally, StepSize = 12, Gamma = 0.2 gave the best consistency of >=99.4% in the last 3 epochs and hit a maximum of 99.49% with fewer than 8,000 parameters.
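To make the schedule concrete, here is a small stand-alone sketch (ours, not the notebook's training cell) of the StepLR configuration described above; the throwaway linear model exists only so the learning-rate drop at epoch 12 is visible.
###Code
# Hedged sketch of the LR schedule: base LR 0.1 with StepLR(step_size=12, gamma=0.2).
import torch.nn as nn
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR
_demo_model = nn.Linear(10, 10)                      # stand-in for the real network defined below
_demo_opt = optim.SGD(_demo_model.parameters(), lr=0.1)
_demo_sched = StepLR(_demo_opt, step_size=12, gamma=0.2)
for epoch in range(1, 16):
    _demo_opt.step()                                 # dummy optimiser step; real training comes later
    _demo_sched.step()
    print(epoch, _demo_opt.param_groups[0]['lr'])    # stays at 0.1 until epoch 12, then drops to 0.02
###Output
_____no_output_____
###Markdown
Import Libraries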
###Code
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.optim.lr_scheduler import StepLR
import matplotlib.pyplot as plt
import numpy as np
import random
import time
from google.colab import drive
drive.mount('/content/drive')
import logging
logger = logging.getLogger("")
#logging.basicConfig(level=logging.DEBUG)
filename = '/content/drive/My Drive/Final_GridSearch_S_E_EVA5_S5_v6_FineTune_LR_scheduler_final_S6_L1_BN_'+time.ctime().replace(' ','_')+'.txt'
logging.basicConfig(level = logging.DEBUG, filename = filename)
# logger.debug('Logging %s level', 'DEBUG')
# logger.info('Logging %s level', 'INFO')
# logger.warning('Logging %s level', 'WARN')
# logger.error('Logging %s level', 'ERROR')
# logger.critical('Logging %s level', 'CRITICAL')
time.ctime()
time.ctime().replace(' ','_')
###Output
_____no_output_____
###Markdown
Data Transformations

We first start by defining our data transformations. We need to think about what our data is and how we can augment it to correctly represent images the model might not otherwise see.
###Code
train_transforms = transforms.Compose([
transforms.RandomRotation((-7.0, 7.0), fill=(1,)),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
test_transforms = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
###Output
_____no_output_____
###Markdown
Dataset and Creating Train/Test Split
###Code
train = datasets.MNIST('./data', train=True, download=True, transform=train_transforms)
test = datasets.MNIST('./data', train=False, download=True, transform=test_transforms)
###Output
_____no_output_____
###Markdown
Dataloader Arguments & Test/Train Dataloaders
###Code
SEED = 1
# CUDA?
cuda = torch.cuda.is_available()
# logger.info("CUDA Available?", cuda)
# logger.info(f"CUDA Available? {cuda}")
# For reproducibility
torch.manual_seed(SEED)
if cuda:
torch.cuda.manual_seed(SEED)
# dataloader arguments - something you'll fetch these from cmdprmt
dataloader_args = dict(shuffle=True, batch_size=128, num_workers=4, pin_memory=True) if cuda else dict(shuffle=True, batch_size=64)
# train dataloader
train_loader = torch.utils.data.DataLoader(train, **dataloader_args)
# test dataloader
test_loader = torch.utils.data.DataLoader(test, **dataloader_args)
###Output
_____no_output_____
###Markdown
The model

Let's start with the model we first saw.
###Code
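# Note: BatchNorm below is a thin wrapper around nn.BatchNorm2d whose affine weight/bias can be frozen
# via requires_grad; GhostBatchNorm keeps separate running statistics for num_splits virtual ("ghost")
# batches and averages them when switching to eval mode. Net itself uses plain nn.BatchNorm2d layers.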
class BatchNorm(nn.BatchNorm2d):
def __init__(self, num_features, eps=1e-05, momentum=0.1, weight=True, bias=True):
super().__init__(num_features, eps=eps, momentum=momentum)
self.weight.data.fill_(1.0)
self.bias.data.fill_(0.0)
self.weight.requires_grad = weight
self.bias.requires_grad = bias
class GhostBatchNorm(BatchNorm):
def __init__(self, num_features, num_splits, **kw):
super().__init__(num_features, **kw)
self.num_splits = num_splits
self.register_buffer('running_mean', torch.zeros(num_features * self.num_splits))
self.register_buffer('running_var', torch.ones(num_features * self.num_splits))
def train(self, mode=True):
if (self.training is True) and (mode is False): # lazily collate stats when we are going to use them
self.running_mean = torch.mean(self.running_mean.view(self.num_splits, self.num_features), dim=0).repeat(
self.num_splits)
self.running_var = torch.mean(self.running_var.view(self.num_splits, self.num_features), dim=0).repeat(
self.num_splits)
return super().train(mode)
def forward(self, input):
N, C, H, W = input.shape
if self.training or not self.track_running_stats:
return F.batch_norm(
input.view(-1, C * self.num_splits, H, W), self.running_mean, self.running_var,
self.weight.repeat(self.num_splits), self.bias.repeat(self.num_splits),
True, self.momentum, self.eps).view(N, C, H, W)
else:
return F.batch_norm(
input, self.running_mean[:self.num_features], self.running_var[:self.num_features],
self.weight, self.bias, False, self.momentum, self.eps)
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# Input Block
self.convblock1 = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=8, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(8)
) # output_size = 26
# CONVOLUTION BLOCK 1
self.convblock2 = nn.Sequential(
nn.Conv2d(in_channels=8, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(16)
) # output_size = 24
# TRANSITION BLOCK 1
self.pool1 = nn.MaxPool2d(2, 2) # output_size = 12
self.convblock3 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=8, kernel_size=(1, 1), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(8)
) # output_size = 12
# CONVOLUTION BLOCK 2
self.convblock4 = nn.Sequential(
nn.Conv2d(in_channels=8, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(16)
) # output_size = 10
self.convblock5 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=32, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(32)
) # output_size = 8
# OUTPUT BLOCK
self.convblock6 = nn.Sequential(
nn.Conv2d(in_channels=32, out_channels=10, kernel_size=(1, 1), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(10)
) # output_size = 8
self.gap = nn.Sequential(
nn.AvgPool2d(kernel_size=8)
) # output_size = 1
def forward(self, x):
x = self.convblock1(x)
x = self.convblock2(x)
x = self.pool1(x)
x = self.convblock3(x)
x = self.convblock4(x)
x = self.convblock5(x)
x = self.convblock6(x)
x = self.gap(x)
x = x.view(-1, 10)
return F.log_softmax(x, dim=-1)
###Output
_____no_output_____
###Markdown
Model Params

We can't emphasize enough how important viewing the model summary is. Unfortunately, there is no built-in model visualizer, so we take help from the external torchsummary package.
###Code
!pip install torchsummary
from torchsummary import summary
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
# logger.info(device)
logger.info(f"Device : {device}")
model = Net().to(device)
summary(model, input_size=(1, 28, 28))
# for i in model.parameters():
# logger.info(i)
# break
###Output
_____no_output_____
###Markdown
Training and Testing

Looking at plain logs can be boring, so we'll introduce the **tqdm** progress bar to get nicer logs. Let's write the train and test functions.
###Code
def get_current_train_acc(model, train_loader):
model.eval()
train_loss = 0
correct = 0
with torch.no_grad():
for data, target in train_loader:
data, target = data.to(device), target.to(device)
output = model(data)
train_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
train_loss /= len(train_loader.dataset)
logger.info('\nTrain set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
train_loss, correct, len(train_loader.dataset),
100. * correct / len(train_loader.dataset)))
train_acc = 100. * correct / len(train_loader.dataset)
return train_acc, train_loss
from tqdm import tqdm
# train_losses = []
# test_losses = []
# train_acc = []
# test_acc = []
def train(model, device, train_loader, optimizer, lambda_l1=0, train_acc=[], train_losses=[]):
model.train()
pbar = tqdm(train_loader)
correct = 0
processed = 0
for batch_idx, (data, target) in enumerate(pbar):
# get samples
data, target = data.to(device), target.to(device)
# Init
optimizer.zero_grad()
# In PyTorch, we need to set the gradients to zero before starting to do backpropragation because PyTorch accumulates the gradients on subsequent backward passes.
# Because of this, when you start your training loop, ideally you should zero out the gradients so that you do the parameter update correctly.
# Predict
y_pred = model(data)
# Calculate loss
loss = F.nll_loss(y_pred, target)
#train_losses.append(loss)
# L1 regularisation
l1 = 0
for p in model.parameters():
l1 += p.abs().sum()
loss += lambda_l1 * l1
# Backpropagation
loss.backward()
optimizer.step()
# Update pbar-tqdm
pred = y_pred.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
processed += len(data)
pbar.set_description(desc= f'Loss={loss.item()} Batch_id={batch_idx} Current_train_batch_accuracy={100*correct/processed:0.2f}')
current_train_acc, current_train_loss = get_current_train_acc(model, train_loader)
train_acc.append(current_train_acc)
train_losses.append(current_train_loss)
return train_acc, train_losses
def test(model, device, test_loader, test_acc=[], test_losses=[]):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
test_losses.append(test_loss)
logger.info('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
test_acc.append(100. * correct / len(test_loader.dataset))
return test_acc, test_losses
###Output
_____no_output_____
###Markdown
Let's Train and test our model
###Code
# def save_best_model(epochs, model, device, train_loader, optimizer, lambda_l1=0.0, scheduler):
# for epoch in range(EPOCHS):
# logger.info(f" ***** EPOCH:{epoch} ***** ")
# train(model, device, train_loader, optimizer, epoch, lambda_l1)
# scheduler.step()
# test(model, device, test_loader)
def get_best_train_test_acc(train_acc=[], test_acc=[]):
"""
Example:
train_acc_1=[96.5,98.7,99.2,99.3];test_acc_1=[97.2,98.5, 99.25, 99.2]
assert get_best_train_test_acc(train_acc_1, test_acc_1)==(99.2, 99.25)
"""
tr_te_acc_pairs = list(zip(train_acc, test_acc))
tr_te_acc_pairs_original = tr_te_acc_pairs[:]
tr_te_acc_pairs.sort(key = lambda x: x[1], reverse=True)
for tr_acc, te_acc in tr_te_acc_pairs:
if tr_acc > te_acc and tr_acc - te_acc >= 1:
tr_te_acc_pairs.remove((tr_acc, te_acc))
return tr_te_acc_pairs[0], tr_te_acc_pairs_original.index(tr_te_acc_pairs[0])+1
def save_model(model, PATH='./test_model.pickle'):
"""
Save trained model at given PATH
"""
torch.save(model.state_dict(), PATH)
logger.info(f"Model saved at {PATH}")
train_acc_1=[96.5,98.7,99.2,99.3];test_acc_1=[97.2,98.5, 99.25, 99.2]
get_best_train_test_acc(train_acc_1, test_acc_1)
assert get_best_train_test_acc(train_acc_1, test_acc_1)==((99.2, 99.25),3)
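# fit_model: runs the train/evaluate loop, stepping the LR scheduler once per epoch, and returns the
# accuracy/loss histories (note: it iterates over the global EPOCHS rather than its epochs argument).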
def fit_model(epochs, model, device, train_loader, test_loader, optimizer, lambda_l1, scheduler):
train_acc = []
train_losses = []
test_acc = []
test_losses = []
for epoch in range(EPOCHS):
logger.info(f"[EPOCH:{epoch}]")
train_acc, train_losses = train(model, device, train_loader, optimizer, lambda_l1, train_acc, train_losses)
scheduler.step()
test_acc, test_losses = test(model, device, test_loader, test_acc, test_losses)
return train_acc, train_losses, test_acc, test_losses
# train_acc, train_losses, test_acc, test_losses = fit_model(epochs, model, device, train_loader, test_loader, optimizer, para, scheduler)
# (best_train_acc, best_test_acc), epoch = get_best_train_test_acc(train_acc, test_acc)
# logger.info(f"For L1 lambda parameter {para} Best train Accuracy {best_train_acc}% and Best Test Accuracy {best_test_acc}% at Epoch {epoch}")
# all_lambdal1_train_test_acc_from_best_epoch.append((best_train_acc, best_test_acc, para))
# temp_best_train_acc_list.append(best_train_acc)
# temp_best_test_acc_list.append(best_test_acc)
def best_tr_te_acc_from_epoch(epochs, model, device, train_loader, test_loader, optimizer, lambda_l1=0, lambda_l2=0, scheduler=None):
temp_best_train_acc_list = []
temp_best_test_acc_list = []
all_lambdal1_train_test_acc_from_best_epoch =[]
train_acc, train_losses, test_acc, test_losses = fit_model(epochs, model, device, train_loader, test_loader, optimizer, lambda_l1=lambda_l1, scheduler=scheduler)
(best_train_acc, best_test_acc), epoch = get_best_train_test_acc(train_acc, test_acc)
logger.info(f"\n===================> For L1 lambda parameter {lambda_l1}, For L2 lambda parameter {lambda_l2}, Best train Accuracy {best_train_acc}% and Best Test Accuracy {best_test_acc}% at Epoch {epoch} <===================\n")
all_lambdal1_train_test_acc_from_best_epoch.append((best_train_acc, best_test_acc, lambda_l1, lambda_l2 ))
temp_best_train_acc_list.append(best_train_acc)
temp_best_test_acc_list.append(best_test_acc)
return temp_best_train_acc_list, temp_best_test_acc_list, all_lambdal1_train_test_acc_from_best_epoch
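# my_grid_search: random search over L1 and/or L2 regularization strengths; each value sampled uniformly
# from the given range trains a fresh Net for EPOCHS epochs, and the best (train acc, test acc, lambda)
# combination found is returned.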
def my_grid_search(epochs, model, device, train_loader, test_loader, optimizer, scheduler, lambda_l1_range = [], lambda_l2_range = [], size = 20, l1_l2_trails=0):
best_lambdal1_train_acc = 0.0
best_lambdal1_test_acc = 0.0
best_lambdal1 = 0.0
all_lambdal1_train_test_acc_from_best_epoch = []
if lambda_l1_range and lambda_l2_range:
if lambda_l1_range[0]>lambda_l1_range[1] or lambda_l2_range[0]>lambda_l2_range[1]:
raise Exception("It should be => min<max")
options_l1 = np.random.uniform(low=lambda_l1_range[0], high=lambda_l1_range[1], size=size)
options_l2 = np.random.uniform(low=lambda_l2_range[0], high=lambda_l2_range[1], size=size)
for i in range(l1_l2_trails):
l1_value = random.choice(options_l1)
l2_value = random.choice(options_l2)
logger.info(f"\n L1&L2 Trail:{i+1} - Model is getting trained with L1 regularisation parameter {l1_value} and L2 regularisation parameter {l2_value}\n")
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=l2_value)
scheduler = StepLR(optimizer, step_size=12, gamma=0.2)
temp_best_train_acc_list, temp_best_test_acc_list, all_lambdal1_train_test_acc_from_best_epoch = best_tr_te_acc_from_epoch(epochs, model, device, train_loader, test_loader, optimizer, lambda_l1=l1_value, lambda_l2=l2_value, scheduler=scheduler)
(best_para_train_acc, best_para_test_acc), idx = get_best_train_test_acc(temp_best_train_acc_list, temp_best_test_acc_list)
idx -= 1
final_best_train_acc, final_best_test_acc, final_best_lambda_l1, final_best_lambda_l2 = all_lambdal1_train_test_acc_from_best_epoch[idx]
logger.info(f"\n===================> final_best_train_acc: {final_best_train_acc}, final_best_test_acc: {final_best_test_acc}, final_best_lambda_l1: {final_best_lambda_l1} , final_best_lambda_l2: {final_best_lambda_l2} <===================\n")
return final_best_train_acc, final_best_test_acc, final_best_lambda_l1, final_best_lambda_l2
elif lambda_l1_range:
if lambda_l1_range[0]>lambda_l1_range[1]:
raise Exception("It should be => lambda_l1_range[0]<lambda_l1_range[1]")
options = np.random.uniform(low=lambda_l1_range[0], high=lambda_l1_range[1], size=size)
for i, para in enumerate(options):
logger.info(f"\n L1 Trail:{i+1} - Model is getting trained with L1 regularisation parameter {para}\n")
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = StepLR(optimizer, step_size=12, gamma=0.2)
temp_best_train_acc_list, temp_best_test_acc_list, all_lambdal1_train_test_acc_from_best_epoch = best_tr_te_acc_from_epoch(epochs, model, device, train_loader, test_loader, optimizer, lambda_l1=para, lambda_l2=0, scheduler=scheduler)
(best_para_train_acc, best_para_test_acc), idx = get_best_train_test_acc(temp_best_train_acc_list, temp_best_test_acc_list)
idx -= 1
final_best_train_acc, final_best_test_acc, final_best_lambda_l1, final_best_lambda_l2 = all_lambdal1_train_test_acc_from_best_epoch[idx]
logger.info(f"\n===================> final_best_train_acc: {final_best_train_acc}, final_best_test_acc: {final_best_test_acc}, final_best_lambda_l1: {final_best_lambda_l1} <===================\n")
return final_best_train_acc, final_best_test_acc, final_best_lambda_l1, final_best_lambda_l2
elif lambda_l2_range:
if lambda_l2_range[0]>lambda_l2_range[1]:
raise Exception("It should be => lambda_l2_range[0]<lambda_l2_range[1]")
options = np.random.uniform(low=lambda_l2_range[0], high=lambda_l2_range[1], size=size)
for i, para in enumerate(options):
logger.info(f"\n L2 Trail:{i+1} - Model is getting trained with L2 regularisation parameter {para}\n")
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=para)
scheduler = StepLR(optimizer, step_size=12, gamma=0.2)
temp_best_train_acc_list, temp_best_test_acc_list, all_lambdal1_train_test_acc_from_best_epoch = best_tr_te_acc_from_epoch(epochs, model, device, train_loader, test_loader, optimizer=optimizer, lambda_l1=0, lambda_l2=para, scheduler=scheduler)
(best_para_train_acc, best_para_test_acc), idx = get_best_train_test_acc(temp_best_train_acc_list, temp_best_test_acc_list)
idx -= 1
final_best_train_acc, final_best_test_acc, final_best_lambda_l1, final_best_lambda_l2 = all_lambdal1_train_test_acc_from_best_epoch[idx]
logger.info(f"\n===================> final_best_train_acc: {final_best_train_acc}, final_best_test_acc: {final_best_test_acc}, final_best_lambda_l2: {final_best_lambda_l2} <===================\n")
return final_best_train_acc, final_best_test_acc, final_best_lambda_l1, final_best_lambda_l2
else:
raise Exception("Select at least one parameter to search its mathematical space")
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
EPOCHS = 15
scheduler = StepLR(optimizer, step_size=12, gamma=0.2)
# lambda_l1=0
l1_l2_trails=50
# para_grid_lambda = [[0,0.1],[0,0.01],[0,0.001],[0,0.0001]]
para_grid_lambda = [[0,0.0001], [0,0.001], [0,0.01],[0,0.1]]
# para_grid_lambda = [[0,0.0001], [0,0.1]]
results_lambda_l1 = []
results_lambda_l2 = []
results_lambda_l1_l2 = []
size = 20 # Number of random choices in the given range
## L1 regularisation hyper parameter search
for para_range in para_grid_lambda:
logger.info(f"\n===================> Started - Trail on L1 reg parameters range - {para_range}, Number of trails - {size}, Number of Epochs - {EPOCHS} <===================\n")
final_best_train_acc, final_best_test_acc, final_best_lambda_l1, final_best_lambda_l2 = my_grid_search(EPOCHS, model, device, train_loader, test_loader, optimizer, scheduler, lambda_l1_range = para_range, lambda_l2_range = [], size = size)
results_lambda_l1.append((final_best_train_acc, final_best_test_acc, final_best_lambda_l1, final_best_lambda_l2))
logger.info(f"\n===================> current results_lambda_l1 - {results_lambda_l1} <===================\n")
logger.info(f"\n===================> Completed - Trail on L1 reg parameters range - {para_range} <===================\n")
logger.info(f"\n===================> L1 - Results of Coarse/finer grid search in various ranges - {para_grid_lambda}<===================\n")
for final_best_train_acc, final_best_test_acc, final_best_lambda_l1, final_best_lambda_l2 in results_lambda_l1:
logger.info(f"L1 reg parameter: {final_best_lambda_l1}, L2 reg parameter: {final_best_lambda_l2}, Train_acc: {final_best_train_acc}, Test_acc: {final_best_test_acc}")
# ## L2 regularisation hyper parameter search
# for para_range in para_grid_lambda:
# logger.info(f"\n===================> Started - Trail on L2 reg parameters range - {para_range}, Number of trails - {size}, Number of Epochs - {EPOCHS}<===================\n")
# final_best_train_acc, final_best_test_acc, final_best_lambda_l1, final_best_lambda_l2 = my_grid_search(EPOCHS, model, device, train_loader, test_loader, optimizer, scheduler, lambda_l1_range = [], lambda_l2_range = para_range, size = size)
# results_lambda_l2.append((final_best_train_acc, final_best_test_acc, final_best_lambda_l1, final_best_lambda_l2))
# logger.info(f"\n===================> current results_lambda_l2 - {results_lambda_l2} <===================\n")
# logger.info(f"\n===================> Completed - Trail on L2 reg parameters range - {para_range} <===================\n")
# logger.info(f"\n===================> L2 - Results of Coarse/finer grid search in various ranges - {para_grid_lambda}<===================\n")
# for final_best_train_acc, final_best_test_acc, final_best_lambda_l1, final_best_lambda_l2 in results_lambda_l2:
# logger.info(f"L1 reg parameter: {final_best_lambda_l1}, L2 reg parameter: {final_best_lambda_l2}, Train_acc: {final_best_train_acc}, Test_acc: {final_best_test_acc}")
# ## L1&L2 regularisation hyper parameter search
# # l1 and l2 reg paras in same range given but can be given different ranges by writing little more sophisticated logic
# for para_range in para_grid_lambda:
# logger.info(f"\n===================> Started - Trail on L1 & L2 reg parameters range - {para_range}, Number of para_ranges - {size}, , Number of trails per para_range - {l1_l2_trails}, Number of Epochs - {EPOCHS}<===================\n")
# final_best_train_acc, final_best_test_acc, final_best_lambda_l1, final_best_lambda_l2 = my_grid_search(EPOCHS, model, device, train_loader, test_loader, optimizer, scheduler, lambda_l1_range = para_range, lambda_l2_range = para_range, size = size, l1_l2_trails=l1_l2_trails)
# results_lambda_l1_l2.append((final_best_train_acc, final_best_test_acc, final_best_lambda_l1, final_best_lambda_l2))
# logger.info(f"\n===================> current results_lambda_l1_l2 - {results_lambda_l1_l2} <===================\n")
# logger.info(f"\n===================> Completed - Trail on L1 and L2 reg parameters range - {para_range} <===================\n")
# logger.info(f"\n===================> L1 & L2 - Results of Coarse/finer grid search in various ranges - {para_grid_lambda} <===================\n")
# for final_best_train_acc, final_best_test_acc, final_best_lambda_l1, final_best_lambda_l2 in results_lambda_l1_l2:
# logger.info(f"L1 reg parameter: {final_best_lambda_l1}, L2 reg parameter: {final_best_lambda_l2}, Train_acc: {final_best_train_acc}, Test_acc: {final_best_test_acc}")
# save_model(model)
# ## Load the model
# model_test = Net().to(device)
# PATH='./test_model.pickle'
# model_test.load_state_dict(torch.load(PATH))
# test(model_test, device, test_loader)
# fig, axs = plt.subplots(2,2,figsize=(15,10))
# axs[0, 0].plot(train_losses)
# axs[0, 0].set_title("Training Loss")
# axs[1, 0].plot(train_acc)
# axs[1, 0].set_title("Training Accuracy")
# axs[0, 1].plot(test_losses)
# axs[0, 1].set_title("Test Loss")
# axs[1, 1].plot(test_acc)
# axs[1, 1].set_title("Test Accuracy")
# # function boundaries
# xmin, xmax, xstep = -4.5, 4.5, .9
# ymin, ymax, ystep = -4.5, 4.5, .9
# # Let's create some points
# x1, y1 = np.meshgrid(np.arange(xmin, xmax + xstep, xstep), np.arange(ymin, ymax + ystep, ystep))
import numpy as np
np.mean(np.random.uniform(low=0.0, high=10.0, size=20))
logger.info(f"\n===================> L1 - Results of Coarse/finer grid search in various ranges - {para_grid_lambda}<===================\n")
for final_best_train_acc, final_best_test_acc, final_best_lambda_l1, final_best_lambda_l2 in results_lambda_l1:
logger.info(f"L1 reg parameter: {final_best_lambda_l1}, L2 reg parameter: {final_best_lambda_l2}, Train_acc: {final_best_train_acc}, Test_acc: {final_best_test_acc}")
###Output
_____no_output_____ |
libro_optimizacion/temas/3.optimizacion_convexa/3.3/Ejemplos_problemas_UCO_e_intro_CIEO_y_PI.ipynb | ###Markdown
(EJUCOINTCIECOPI)= 3.3 Ejemplos de problemas UCO, introducción a *Constrained Inequality and Equality Optimization* (CIEO) y puntos interiores ```{admonition} Notas para contenedor de docker:Comando de docker para ejecución de la nota de forma local:nota: cambiar `` por la ruta de directorio que se desea mapear a `/datos` dentro del contenedor de docker y `` por la versión más actualizada que se presenta en la documentación.`docker run --rm -v :/datos --name jupyterlab_optimizacion -p 8888:8888 -d palmoreck/jupyterlab_optimizacion:`password para jupyterlab: `qwerty`Detener el contenedor de docker:`docker stop jupyterlab_optimizacion`Documentación de la imagen de docker `palmoreck/jupyterlab_optimizacion:` en [liga](https://github.com/palmoreck/dockerfiles/tree/master/jupyterlab/optimizacion).``` --- Nota generada a partir de [liga1](https://www.dropbox.com/s/6isby5h1e5f2yzs/4.2.Problemas_de_optimizacion_convexa.pdf?dl=0), [liga2](https://drive.google.com/file/d/1zCIHNAxe5Shc36Qo0XjehHgwrafKSJ_t/view), [liga3](https://drive.google.com/file/d/1oulU1QAKyLyYrkpJBLSPlbWnKFCWpllX/view), [liga4](https://drive.google.com/file/d/1RMwUXEN_SOHKue-J9Cx3Ldvj9bejLjiM/view). ```{admonition} Al final de esta nota la comunidad lectora::class: tip* Relacionará las definiciones revisadas en la nota sobre {ref}`definición de problemas de optimización, conjuntos y funciones convexas ` y los algoritmos en {ref}`algoritmos de descenso y búsqueda de línea en UCO ` con modelos ampliamente utilizados en *Machine Learning*.* Aprenderá las condiciones que puntos óptimos en un problema estándar de optimización general deben satisfacer, con nombre condiciones de Karush-Kuhn-Tucker de optimalidad, en una forma introductoria y la definición de la función Lagrangiana.* Aprenderá una idea general sobre los métodos de puntos interiores, en específico los de la clase primal-dual para resolver problemas de programación lineal. ``` Mínimos cuadrados Obsérvese que hay una gran cantidad de modelos por mínimos cuadrados, por ejemplo:* [Lineales](https://en.wikipedia.org/wiki/Linear_least_squares) u [ordinarios](https://en.wikipedia.org/wiki/Ordinary_least_squares) (nombre más usado en Estadística y Econometría).* [Generalizados](https://en.wikipedia.org/wiki/Generalized_least_squares), [ponderados](https://en.wikipedia.org/wiki/Weighted_least_squares).* [No lineales](https://en.wikipedia.org/wiki/Non-linear_least_squares).* [Totales](https://en.wikipedia.org/wiki/Total_least_squares) y [parciales](https://en.wikipedia.org/wiki/Partial_least_squares_regression).* [No negativos](https://en.wikipedia.org/wiki/Non-negative_least_squares).* [Rango reducido](https://epubs.siam.org/doi/abs/10.1137/1.9780898718867.ch7). Mínimos cuadrados lineales Se **asume** en esta sección que $A \in \mathbb{R}^{m \times n}$ con $m \geq n$ (más renglones que columnas en $A$). Cada uno de los modelos anteriores tienen diversas aplicaciones y propósitos. Los lineales son un caso particular del problema más general de **aproximación por normas**:$$\displaystyle \min_{x \in \mathbb{R}^n} ||Ax-b||$$donde: $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$ son datos del problema, $x \in \mathbb{R}^n$ es la variable de optimización y $|| \cdot||$ es una norma en $\mathbb{R}^m$. 
```{admonition} Definiciones$x^* = \text{argmin}_{x \in \mathbb{R}^n} ||Ax-b||$ se le nombra **solución aproximada** de $Ax \approx b$ en la norma $|| \cdot ||$.El vector: $r(x) = Ax -b$ se le nombra **residual** del problema.``` ```{admonition} ComentarioEl problema de aproximación por normas también se le nombra **problema de regresión**. En este contexto, las componentes de $x$ son nombradas variables regresoras, las columnas de $A$, $a_j$, es un vector de *features* o atributos y el vector $\displaystyle \sum_{j=1}^n x_j^*a_j$ con $x^*$ óptimo del problema es nombrado la **regresión de $b$ sobre las regresoras**, $b$ es la **respuesta.**``` Si en el problema de aproximación de normas anterior se utiliza la norma Euclidiana o norma $2$, $|| \cdot ||_2$, y se eleva al cuadrado la función objetivo se tiene:$$\displaystyle \min_{x \in \mathbb{R}^n} ||Ax-b||^2_2$$que es el modelo por mínimos cuadrados lineales cuyo objetivo es minimizar la suma de cuadrados de las componentes del residual $r(x)$. **A partir de aquí, la variable de optimización será $\beta$ y no $x$** de modo que el problema es: $$\displaystyle \min_{\beta \in \mathbb{R}^n} ||A\beta-y||_2^2$$ Supóngase que se han realizado mediciones de un fenómeno de interés en diferentes puntos $x_i$'s resultando en cantidades $y_i$'s $\forall i=0,1,\dots, m$ (se tienen $m+1$ puntos) y además las $y_i$'s contienen un ruido aleatorio causado por errores de medición: El objetivo de los mínimos cuadrados es construir una curva, $f(x|\beta)$ que "mejor" se ajuste a los datos $(x_i,y_i)$, $\forall i=0,1,\dots,m$. El término de "mejor" se refiere a que la suma: $$\displaystyle \sum_{i=0}^m (y_i -f(x_i|\beta))^2$$ sea lo "más pequeña posible", esto es, a que la suma de las distancias verticales entre $y_i$ y $f(x_i|\beta)$ $\forall i=0,1,\dots,m$ al cuadrado sea mínima. Por ejemplo: ```{admonition} Observación:class: tipLa notación $f(x|\beta)$ se utiliza para denotar que $\beta$ es un vector de parámetros a estimar, en específico $\beta_0, \beta_1, \dots \beta_n$, esto es: $n+1$ parámetros a estimar.``` Modelo en mínimos cuadrados lineales En los mínimos cuadrados lineales se asume un modelo: $$f(x|\beta) = \displaystyle \sum_{j=0}^n\beta_j\phi_j(x)$$con $\phi_j: \mathbb{R} \rightarrow \mathbb{R}$ funciones conocidas por lo que se tiene una gran flexibilidad para el proceso de ajuste. Con las funciones $\phi_j (\cdot)$ se construye a la matriz $A$. ```{admonition} Observación:class: tipSi $n=m$ entonces se tiene un problema de interpolación.``` Enfoque geométrico y algebraico para resolver el problema de mínimos cuadrados Si $m=3$ y $A \in \mathbb{R}^{3 \times 2}$ geométricamente el problema de **mínimos cuadrados lineales** se puede visualizar con el siguiente dibujo: En el dibujo anterior:* $r(\beta) = y-A\beta$,* el vector $y \in \mathbb{R}^m$ contiene las entradas $y_i$'s,* la matriz $A \in \mathbb{R}^{m \times n}$ contiene a las entradas $x_i$'s o funciones de éstas $\forall i=0,1,\dots,m$.Por el dibujo se tiene que cumplir que $A^Tr(\beta)=0$, esto es: las columnas de $A$ son ortogonales a $r(\beta)$. 
La condición anterior conduce a las **ecuaciones normales**: $$0=A^Tr(\beta)=A^T(y-A\beta)=A^Ty-A^TA\beta.$$ donde: $A$ se construye con las $\phi_j$'s evaluadas en los puntos $x_i$'s, el vector $\beta$ contiene a los parámetros $\beta_j$'s a estimar y el vector $y$, la variable **respuesta**, se construye con los puntos $y_i$'s: $$A = \left[\begin{array}{cccc}\phi_0(x_0) &\phi_1(x_0)&\dots&\phi_n(x_0)\\\phi_0(x_1) &\phi_1(x_1)&\dots&\phi_n(x_1)\\\vdots &\vdots& \vdots&\vdots\\\phi_0(x_n) &\phi_1(x_n)&\dots&\phi_n(x_n)\\\vdots &\vdots& \vdots&\vdots\\\phi_0(x_{m-1}) &\phi_1(x_{m-1})&\dots&\phi_n(x_{m-1})\\\phi_0(x_m) &\phi_1(x_m)&\dots&\phi_n(x_m)\end{array}\right] \in \mathbb{R}^{(m+1)x(n+1)},\beta=\left[\begin{array}{c}\beta_0\\\beta_1\\\vdots \\\beta_n\end{array}\right] \in \mathbb{R}^{n+1},y=\left[\begin{array}{c}y_0\\y_1\\\vdots \\y_m\end{array}\right] \in \mathbb{R}^{m + 1}$$ Finalmente, considerando la variable de optimización $\beta$ y al vector $y$ tenemos: $A^TA \beta = A^Ty$. ```{admonition} ComentarioSi $A$ es de $rank$ completo (tiene $n+1$ columnas linealmente independientes) una opción para resolver el sistema anterior es calculando la factorización $QR$ de $A$: $A = QR$ y entonces: $A^TA\beta = A^Ty$. Dado que $A=QR$ se tiene: $A^TA = (R^TQ^T)(QR)$ y $A^T = R^TQ^T$ por lo que:$$(R^TQ^T)(QR) \beta = R^TQ^T y$$y usando que $Q$ tiene columnas ortonormales:$$R^TR\beta = R^TQ^Ty$$Como $A$ tiene $n+1$ columnas linealmente independientes, la matriz $R$ es invertible por lo que $R^T$ también lo es y finalmente se tiene el **sistema de ecuaciones lineales** por resolver:$$R\beta = Q^Ty$$``` Enfoque utilizando directamente la función objetivo del problema de optimización La función objetivo en los mínimos cuadrados lineales puede escribirse de las siguientes formas: $$\begin{eqnarray}f_o(x_i|\beta)=\displaystyle \sum_{i=0}^{m} (y_i -f_o(x_i|\beta))^2 &=& \displaystyle \sum_{i=0}^{m} (y_i - A[i,:]^T\beta)^2 \\&=& ||y - A \beta||_2^2 \\&=& (y-A\beta)^T(y-A\beta) \\&=& y^Ty-2\beta^TA^Ty + \beta^TA^TA\beta\end{eqnarray}$$ con $A[i,:]$ $i$-ésimo renglón de $A$ visto como un vector en $\mathbb{R}^n$. Es común dividir por $2$ la función objetivo para finalmente tener el problema:$$\displaystyle \min_{\beta \in \mathbb{R}^n} \frac{1}{2}\beta^TA^TA\beta - \beta^TA^Ty + \frac{1}{2}y^Ty.$$ ```{admonition} Observación:class: tipEn cualquier reescritura de la función $f_o$, el problema de aproximación con normas, o bien en su caso particular de mínimos cuadrados, es un problema de **optimización convexa**.``` Ejemplo Planteamos un modelo del tipo: $f_o(x_i | \beta) = \beta_0\phi_0(x) + \beta_1 \phi_1(x) = \beta_0 + \beta_1 x$
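Antes del ejemplo con CVXPY, un esbozo mínimo (con una matriz $A$ y un vector $y$ hipotéticos) del enfoque vía la factorización QR descrito arriba, resolviendo el sistema $R\beta = Q^Ty$:

```python
# Esbozo mínimo: mínimos cuadrados lineales vía factorización QR, resolviendo R beta = Q^T y
import numpy as np
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])            # matriz de diseño hipotética: intercepto y una columna x
y = np.array([1.0, 0.5, 2.0])         # vector de respuesta hipotético
Q, R = np.linalg.qr(A)                # A = QR con Q de columnas ortonormales
beta = np.linalg.solve(R, Q.T @ y)    # R es triangular superior e invertible si A es de rank completo
print(beta)
print(np.linalg.norm(A @ beta - y))   # norma del residual
```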
###Code
import numpy as np
import matplotlib.pyplot as plt
import pprint
np.set_printoptions(precision=2, suppress=True)
np.random.seed(1989) #for reproducibility
mpoints = 20
x = np.random.randn(mpoints)
y = -3*x + np.random.normal(2,1,mpoints)
###Output
_____no_output_____
###Markdown
**Los datos ejemplo**
###Code
plt.plot(x,y, 'r*')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Puntos ejemplo')
plt.show()
###Output
_____no_output_____
###Markdown
Utilizamos el paquete [cvxpy](https://github.com/cvxgrp/cvxpy) para resolver el problema de mínimos cuadrados:
###Code
import cvxpy as cp
###Output
_____no_output_____
###Markdown
```{margin}Construimos a la matriz $A$.```
###Code
A=np.ones((mpoints,2)) #step 1 to build matrix A
A[:,1] = x #step 2 to build matrix A
###Output
_____no_output_____
###Markdown
```{margin}Definición de variables y función objetivo: $\frac{1}{2}\beta^TA^TA\beta - \beta^TA^Ty + \frac{1}{2}y^Ty$.```
###Code
n = 2 # number of variables
beta = cp.Variable(n) #optimization variable
fo_cvxpy = (1/2)*cp.quad_form(beta, A.T@A) - cp.sum(cp.multiply(A.T@y, beta)) + 1/2*y.dot(y) #objective function
prob = cp.Problem(cp.Minimize(fo_cvxpy))
print(prob.solve())
print("\nThe optimal value is", prob.value)
print("The optimal beta is")
print(beta.value)
print("The norm of the residual is ", cp.norm(A @ beta - y, p=2).value) #also works: cp.norm2(A @ beta - y).value
###Output
The optimal value is 10.21773841938797
The optimal beta is
[ 2.03 -2.65]
The norm of the residual is 4.520561562325624
###Markdown
El paquete *CVXPY* ya tiene una función para resolver el problema anterior, ver [least_squares](https://www.cvxpy.org/examples/basic/least_squares.html).
###Code
fo_cvxpy = 1/2*cp.sum_squares(A@beta -y)
prob = cp.Problem(cp.Minimize(fo_cvxpy))
print(prob.solve())
print("\nThe optimal value is", prob.value)
print("The optimal beta is")
print(beta.value)
print("The norm of the residual is ", cp.norm(A @ beta - y, p=2).value) #also works: cp.norm2(A @ beta - y).value
###Output
The optimal value is 10.217738419387942
The optimal beta is
[ 2.03 -2.65]
The norm of the residual is 4.520561562325624
###Markdown
Entonces el vector $\beta$ ajustado es: $\hat{\beta_0} \approx 2.03, \hat{\beta_1} \approx -2.65$ y por tanto el modelo es:$$f(x|\hat{\beta}) = 2.03 -2.65 x$$
###Code
y_hat_numpy = beta.value[0] + beta.value[1] * x
plt.plot(x, y_hat_numpy, "k-",
x, y, "r*")
plt.legend(["$f(x|\\hat{\\beta})$ = 2.03-2.65x","datos"], loc="best")
plt.show()
###Output
_____no_output_____
###Markdown
El problema de clasificación en dos clases $\mathcal{C}_0, \mathcal{C}_1$ Sean $\mathcal{C}_0 , \mathcal{C}_1$ dos clases ajenas y $x \in \mathbb{R}^n$. El problema de clasificación consiste en clasificar al vector $x$ en alguna de las dos clases anteriores de modo que se minimice el error de clasificación.Ejemplos de lo anterior los encontramos en medicina (persona enferma o no dada una serie de mediciones en sangre), finanzas (persona sujeta a un crédito bancario o no dado un historial crediticio) o clasificación de textos (*spam* o no *spam*). Regresión logística: clasificación en $\mathcal{C}_0, \mathcal{C}_1$ El modelo por regresión logística tiene por objetivo **modelar las probabilidades de pertenencia a cada una de las clases** $\mathcal{C}_0, \mathcal{C}_1$ dado el vector de *features* o atributos $x \in \mathbb{R}^n$: $p(\mathcal{C}_0|x) , p(\mathcal{C}_1|x)$. En la regresión logística se utiliza la función **[sigmoide](https://en.wikipedia.org/wiki/Sigmoid_function)** $\sigma:\mathbb{R} \rightarrow \mathbb{R}$:$$\sigma(t)=\frac{1}{1+\exp(-t)}$$para modelar ambas probabilidades ya que mapea todo el eje real al intervalo $[0,1]$. Además resulta ser una aproximación continua y diferenciable a la función de **[Heaviside](https://en.wikipedia.org/wiki/Heaviside_step_function)** $H:\mathbb{R} \rightarrow \mathbb{R}$$$H(t) = \begin{cases}1 & \text{si } t \geq 0,\\0 & \text{si } t <0\\\end{cases}$$
###Code
mpoints = 100
t = np.linspace(-10, 10, mpoints)
Heaviside = 1*(t>0)
import matplotlib.pyplot as plt
plt.plot(t, Heaviside, '.')
plt.show()
###Output
_____no_output_____
###Markdown
A continuación graficamos a la sigmoide $\sigma(ht)$ para distintos valores de $h \in \{-3, -1, -1/2, 1/2, 1, 3\}$:
###Code
sigmoid = lambda t_value: 1/(1+np.exp(-t_value))
h = np.array([-3, -1, -1/2, 1/2, 1, 3])
n = len(h)
sigmoids = np.zeros((mpoints, n))
for i in range(len(h)):
sigmoids[:,i] = sigmoid(h[i]*t)
plt.figure(figsize=(7,7))
plt.plot(t, sigmoids[:,0],
t, sigmoids[:,1],
t, sigmoids[:,2],
t, sigmoids[:,3],
t, sigmoids[:,4],
t, sigmoids[:,5])
l = ["h=-3", "h=-1", "h=-1/2", "h=1/2", "h=1", "h=3"]
plt.legend(l, bbox_to_anchor=(1,1))
plt.title("Diferentes funciones sigmoides $\sigma(ht)$")
plt.show()
###Output
_____no_output_____
###Markdown
Obsérvese la forma de cada curva al variar $h$ en la función $\sigma(ht)$. Una regla de clasificación podría ser clasificar como perteneciente a $\mathcal{C}_0$ si la probabilidad (modelada por la curva sigmoide) es menor a $0.25$ (**punto de corte**) y perteneciente a $\mathcal{C}_1$ si es mayor o igual a $0.25$. Para diferentes curvas sigmoides presentadas en la gráfica anterior obsérvese que al fijar el punto de corte y tomar un valor de $t$ en el eje horizontal, la pertenencia a alguna de las clases es menos sensible al variar $t$ que en otras curvas. Así, la función sigmoide permite modelar la probabilidad de pertenencia a la clase $\mathcal{C}_1:$$$p(\mathcal{C}_1| x)=\sigma(a)$$para alguna $a \in \mathbb{R}$. Con el [teorema de Bayes](https://en.wikipedia.org/wiki/Bayes%27_theorem) se obtiene el valor de $a$:$$\begin{eqnarray}p(\mathcal{C}_1|x) &=& \frac{p(x|\mathcal{C}_1)p(\mathcal{C}_1)}{p(x|\mathcal{C}_0)p(\mathcal{C}_0)+p(x|\mathcal{C}_1)p(\mathcal{C}_1)} \nonumber \\&=& \left ( 1+ \frac{p(x|\mathcal{C}_0)p(\mathcal{C}_0)}{p(x|\mathcal{C}_1)p(\mathcal{C}_1)} \right )^{-1} \nonumber\end{eqnarray}$$ Por lo tanto: $$\begin{eqnarray}a(x)&=&\log\left( \frac{p(x|\mathcal{C}_1)p(\mathcal{C}_1)}{p(x|\mathcal{C}_0)p(\mathcal{C}_0)} \right ) \nonumber\end{eqnarray}$$ ```{admonition} Comentarios* Algunas propiedades que tiene la función $\sigma(\cdot)$ se encuentran:$$\begin{eqnarray}\sigma (-t)&=&1-\sigma (t) \nonumber \\\frac{d\sigma (t)}{dt}&=&\sigma (t)(1-\sigma (t)) \nonumber\end{eqnarray}$$* En Estadística a la función:$$a=\log\left(\frac{\sigma}{1-\sigma}\right)$$se le conoce como [**logit**](https://en.wikipedia.org/wiki/Logit) y modela el log momio:$$\log \left(\frac{p(\mathcal{C}_1|x)}{p(\mathcal{C}_0|x)}\right)=\log \left(\frac{p(\mathcal{C}_1|x)}{1-p(\mathcal{C}_1|x)}\right)$$que tiene una interpretación directa en términos de las probabilidades de pertenencia a cada clase $\mathcal{C}_0,\mathcal{C}_1$.``` Modelo en regresión logística de dos clases De forma similar como en el modelo por mínimos cuadrados lineales se modeló a la variable respuesta $y$ con una función lineal en sus parámetros, en el modelo en regresión logística **con dos clases e intercepto** se propone una **función lineal** en un vector de parámetros $(\beta_0,\beta) \in \mathbb{R}^{n+1}$ definida por el logit: $$\beta^T x+\beta_0=a(x|\beta_0,\beta)=\log \left(\frac{p(\mathcal{C}_1|x)}{p(\mathcal{C}_0|x)}\right).$$Obsérvese que si $y$ es considerada como variable respuesta que está en función de $x \in \mathbb{R}^{n+1}$ dado el vector $(\beta_0, \beta)$ se tiene: $$p(\mathcal{C}_1 | x ) = y(x | \beta_0, \beta) = \frac{1}{1+ e^{-(\beta_0, \beta)^T x}}$$que se lee "la probabilidad de pertenencia a la clase $\mathcal{C}_1$ dado el vector de atributos $x$ es igual a $y$". 
```{admonition} Comentarios* El modelo con $2$ parámetros $\beta_0, \beta_1$ se ve como:$$p(\mathcal{C}_1 | x ) = y(x | \beta_0, \beta) = \frac{1}{1+ e^{-(\beta_0 + \beta_1x)}}$$con $x \in \mathbb{R}$.* El modelo puede extenderse utilizando $n+1$ funciones conocidas $\phi_j:\mathbb{R} \rightarrow \mathbb{R}$, $\phi_j(x)$ $j=0,\dots, n$ por lo que si $\phi(x)=(\phi_0(x),\phi_1(x),\dots,\phi_n(x))^T$ y $\beta_0 \in \mathbb{R}$, $\beta \in \mathbb{R}^n$, entonces se tiene el modelo por regresión logística:$$p(\mathcal{C}_1|\phi(x))=y(x|\beta_0, \beta)= \frac{1}{1+ e^{-(\beta_0, \beta)^T \phi(x)}}$$* La notación $y(x | \beta_0, \beta)$ se utiliza para denotar que $(\beta_0, \beta)$ es un vector de parámetros a estimar, en específico $\beta_0, \beta_1, \dots, \beta_n$, esto es: $n+1$ parámetros a estimar.* La variable de optimización es $(\beta_0, \beta) \in \mathbb{R}^{n+1}$.``` ¿Cómo se ajustan los parámetros del modelo por regresión logística de dos clases? Dados $(x_0,\hat{y}_0), \dots (x_m, \hat{y}_m)$ puntos se desean modelar $m+1$ probabilidades de pertenencias a las clases $\mathcal{C}_0, \mathcal{C}_1$ representadas con las etiquetas $\hat{y}_i \in \{0,1\} \forall i=0,1,\dots, m$. El número $0$ representa a la clase $\mathcal{C}_0$ y el $1$ a la clase $\mathcal{C}_1$. El vector $x_i \in \mathbb{R}^n$ .Cada probabilidad se modela como $y_0=y_0(x_0|\beta_0, \beta),y_1=y_1(x_1|\beta_0, \beta),\dots,y_m=y_m(x_n|\beta_0, \beta)$ utilizando:$$p(\mathcal{C}_1|x_i) = y_i(x_i|\beta_0,\beta) = \frac{1}{1+ e^{-(\beta_0 + \beta^T x_i)}} \forall i=0,1,\dots,m.$$ Los $n+1$ parámetros $\beta_0, \beta_1, \dots, \beta_n$ se ajustan **maximizando** la [función de verosimilitud](https://en.wikipedia.org/wiki/Likelihood_function):$$\mathcal{L}(\beta_0, \beta|x)=\displaystyle \prod_{i=0}^n y_i^{\hat{y}_i}(1-y_i)^{1-\hat{y}_i}$$donde: $\hat{y}_i \sim \text{Bernoulli}(y_i)$ y por tanto $\hat{y}_i \in \{0,1\}$: $\hat{y}_i = 1$ con probabilidad $y_i$ y $\hat{y}_i = 0$ con probabilidad $1-y_i$. 
Entonces se tiene el problema: $$\displaystyle \max_{(\beta_0, \beta) \in \mathbb{R}^{n+1}} \mathcal{L}(\beta_0, \beta|x)=\displaystyle \prod_{i=0}^n y_i^{\hat{y}_i}(1-y_i)^{1-\hat{y}_i}$$ Lo anterior es equivalente a maximizar la **log-verosimilitud**:$$\begin{eqnarray}\ell(\beta_0, \beta |x)&=&\log(\mathcal{L}(\beta_0, \beta| x)) \nonumber\\&=&\displaystyle \sum_{i=1}^m \hat{y}_i\log(y_i)+(1-\hat{y}_i)\log(1-y_i) \nonumber\\&=&\displaystyle \sum_{i=1}^m\hat{y}_i (\beta_0, \beta)^T x_i-\log(1+\exp((\beta_0, \beta)^Tx_i) \nonumber\end{eqnarray}$$o a minimizar la [**devianza**](https://en.wikipedia.org/wiki/Deviance_(statistics)): $$\begin{eqnarray}\displaystyle \min_{(\beta_0, \beta) \in \mathbb{R}^{n+1}}\mathcal{D}(\beta_0, \beta|x)&=&-2\ell(\beta_0, \beta|x) \nonumber \\&=&2\displaystyle \sum_{i=1}^m\log(1+\exp((\beta_0, \beta)^Tx_i))-\hat{y}_i(\beta_0, \beta)^Tx_i \nonumber\end{eqnarray}$$ ```{admonition} ComentarioLa devianza es una función convexa pues su Hessiana es:$$\nabla^2 D(\beta_0, \beta |x) = 2A^TPA$$con: $P$ una matriz diagonal con entradas $y_i(1-y_i)$ donde: $y_i$ está en función de $x_i$: $y_i(x_i|\beta_0,\beta) = \frac{1}{1+ e^{-(\beta_0 + \beta^T x_i)}} \forall i=0,1,\dots,m$ y la matriz A es:$$A = \left[\begin{array}{c}x_0\\x_1\\\vdots\\x_m\end{array}\right]=\left[\begin{array}{cccc}x_{01} & x_{02}&\dots& x_{0n}\\x_{11}& x_{12}&\dots& x_{1n}\\\vdots &\vdots& \vdots&\vdots\\x_{n1} &x_{n2}&\dots&x_{nn}\\\vdots &\vdots& \vdots&\vdots\\x_{m1} &x_{m2}&\dots&x_{mn}\end{array}\right] \in \mathbb{R}^{(m+1)x(n+1)}$$El valor $m$ representa el número de observaciones y el valor $n$ representa la dimensión del vector $\beta$.La expresión anterior de la Hessiana se obtiene a partir de la expresión del gradiente:$$\nabla D(\beta_0, \beta|x) = 2 \displaystyle \sum_{i=1}^m \left( y_i - \hat{y}_i \right )x_i = 2\sum_{i=1}^m \left( p(\mathcal{C}_1|x_i) - \hat{y}_i \right )x_i = 2A^T(p-\hat{y})$$donde:$$p=\left[\begin{array}{c}y_0(x_0|\beta_0,\beta)\\y_1(x_1|\beta_0,\beta)\\\vdots \\y_m(x_m|\beta_0,\beta)\end{array}\right]=\left[\begin{array}{c}p(\mathcal{C}_1|x_0)\\p(\mathcal{C}_1|x_1)\\\vdots \\p(\mathcal{C}_1|x_m)\end{array}\right] \in \mathbb{R}^{n+1},\hat{y}=\left[\begin{array}{c}\hat{y}_0\\\hat{y}_1\\\vdots \\\hat{y}_m\end{array}\right] \in \mathbb{R}^{m+1}$$Así, la Hessiana de la devianza es simétrica semidefinida positiva y por tanto es una función convexa.``` Ejemplo [Iris *dataset*](https://scikit-learn.org/stable/auto_examples/datasets/plot_iris_dataset.html) Utilizamos el conocido *dataset* de [*Iris*](https://en.wikipedia.org/wiki/Iris_flower_data_set) en el que se muestran **tres especies del género *Iris***. Las especies son: [*I. setosa*](https://en.wikipedia.org/wiki/Iris_setosa), [*I. virginica*](https://en.wikipedia.org/wiki/Iris_virginica) y [*I. versicolor*](https://en.wikipedia.org/wiki/Iris_versicolor):Imagen obtenida de [Iris Dataset](https://rpubs.com/AjinkyaUC/Iris_DataSet).
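Antes de pasar al ejemplo con el *dataset* Iris, un esbozo mínimo (con datos hipotéticos) que ilustra el gradiente $2A^T(p-\hat{y})$ y la Hessiana $2A^TPA$ de la devianza descritos en el comentario anterior:

```python
# Esbozo mínimo: gradiente y Hessiana de la devianza con datos hipotéticos
import numpy as np
rng = np.random.default_rng(0)
A = np.column_stack((np.ones(6), rng.normal(size=(6, 2))))  # intercepto + 2 atributos hipotéticos
y_hat = np.array([0, 0, 0, 1, 1, 1])                        # etiquetas hipotéticas en {0,1}
beta = np.zeros(3)                                          # punto hipotético donde se evalúan las derivadas
p = 1 / (1 + np.exp(-A @ beta))                             # p(C_1 | x_i) para cada renglón de A
grad = 2 * A.T @ (p - y_hat)
P = np.diag(p * (1 - p))
hess = 2 * A.T @ P @ A
print(grad)
print(np.linalg.eigvalsh(hess))                             # eigenvalores >= 0: semidefinida positiva
```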
###Code
!pip install --quiet sklearn
from sklearn import datasets
iris = datasets.load_iris()
data_iris = iris["data"]
print(data_iris[0:10, 0:10])
m,n = data_iris.shape
print("número de observaciones:%d, número de atributos: %d" % (m,n))
###Output
número de observaciones:150, número de atributos: 4
###Markdown
Columnas en este orden: `Sepal.Length`, `Sepal.Width`, `Petal.Length`, `Petal.Width`
###Code
print(iris["target_names"])
print(np.unique(iris["target"]))
data_iris_setosa_versicolor = data_iris[0:100].copy()
print(np.corrcoef(data_iris_setosa_versicolor, rowvar=False))
classes = iris["target"][0:100].copy()
###Output
_____no_output_____
###Markdown
La clase $\mathcal{C}_0$ es `setosa` y $\mathcal{C}_1$ es `versicolor` codificadas como $0, 1$ respectivamente. La función objetivo como se revisó en la sección anterior está dada por la expresión de la devianza: $$\mathcal{D}(\beta_0, \beta|x)=-2\ell(\beta_0, \beta|x) = 2\displaystyle \sum_{i=1}^m\log(1+\exp((\beta_0, \beta)^Tx_i))-\hat{y}_i(\beta_0, \beta)^Tx_i$$donde: $\hat{y}_i \in \{0,1\}$, $x_i$ $i$-ésimo renglón de matriz $A \in \mathbb{R}^{100 \times 4}$. **Añadimos la columna que indica uso de intercepto y por tanto de un modelo con $\beta_0$:**
###Code
m,n = data_iris_setosa_versicolor.shape
data_iris_setosa_versicolor = np.column_stack((np.ones((m,1)), data_iris_setosa_versicolor))
print(data_iris_setosa_versicolor[0:10, 0:10])
###Output
[[1. 5.1 3.5 1.4 0.2]
[1. 4.9 3. 1.4 0.2]
[1. 4.7 3.2 1.3 0.2]
[1. 4.6 3.1 1.5 0.2]
[1. 5. 3.6 1.4 0.2]
[1. 5.4 3.9 1.7 0.4]
[1. 4.6 3.4 1.4 0.3]
[1. 5. 3.4 1.5 0.2]
[1. 4.4 2.9 1.4 0.2]
[1. 4.9 3.1 1.5 0.1]]
###Markdown
**Función objetivo:** $$2\displaystyle \sum_{i=1}^m\log(1+\exp((\beta_0, \beta)^Tx_i))-\hat{y}_i(\beta_0, \beta)^Tx_i$$ ```{margin}Ver [cvxpy: logistic regression](https://www.cvxpy.org/examples/machine_learning/logistic_regression.html)```
###Code
n = n+1 #number of variables
beta = cp.Variable(n) #optimization variable
fo_cvxpy = 2*cp.sum(
cp.logistic(data_iris_setosa_versicolor @ beta) - cp.multiply(classes, data_iris_setosa_versicolor @ beta)
)
obj = cp.Minimize(fo_cvxpy)
prob = cp.Problem(obj)
print(prob.solve())
print("\nThe optimal value is", prob.value)
print("The optimal beta is")
print(beta.value)
###Output
The optimal value is 2.935191734155412e-07
The optimal beta is
[ 9.71 -3.97 -13.54 11.16 25.52]
###Markdown
Cálculo de probabilidades de pertenencia a las clases $\mathcal{C}_0 :$ `setosa`, $\mathcal{C}_1 :$ `versicolor` **Para individuo $i$ se tiene:** $$p(\mathcal{C}_1|x_i) = y_i(x_i|\beta_0,\beta) = \frac{1}{1+ e^{-(\beta_0 + \beta^T x_i)}} \forall i=0,1,\dots,m.$$ Por ejemplo, para el renglón $1$ de `data_iris_setosa_versicolor`, que sabemos que pertenece a la clase $\mathcal{C}_0$: `setosa`: **Estimación de la probabilidad de pertenencia a la clase $\mathcal{C}_1$, `versicolor`**:
###Code
linear_value = -data_iris_setosa_versicolor[0,:].dot(beta.value)
print(1/(1+np.exp(linear_value)))
###Output
7.002516747523349e-17
###Markdown
**Estimación de la probabilidad de pertenencia a la clase $\mathcal{C}_0$, `setosa`**:
###Code
print(np.exp(linear_value) / (1+np.exp(linear_value)))
###Output
0.9999999999999999
###Markdown
Por ejemplo, para el último renglón de `data_iris_setosa_versicolor`, que sabemos que pertenece a la clase $\mathcal{C}_1$, `versicolor`: **Estimación de la probabilidad de pertenencia a la clase $\mathcal{C}_1$, `versicolor`**:
###Code
linear_value = -data_iris_setosa_versicolor[m-1,:].dot(beta.value)
print(1/(1+np.exp(linear_value)))
###Output
0.9999999999993738
###Markdown
**Estimación de la probabilidad de pertenencia a la clase $\mathcal{C}_0$, `setosa`**:
###Code
print(np.exp(linear_value) / (1+np.exp(linear_value)))
###Output
6.261520333516334e-13
###Markdown
```{admonition} Ejercicio:class: tipRealiza la clasificación y cálculo de probabilidades anterior para las clases `virginica` y `versicolor`.``` (INTCIEO)= Introducción a *Constrained Inequality and Equality Optimization* (CIEO) Recuérdese que para *Unconstrained Optimization* (UO) se dieron **condiciones** que deben satisfacer puntos para ser óptimos en {ref}`sobre problemas de optimización `. En esta sección se darán ejemplos que ayudarán a describir condiciones que caracterizan las soluciones para un {ref}`problema estándar de optimización `: $$\displaystyle \min f_o(x)$$$$\text{sujeto a:}$$$$f_i(x) \leq 0, \quad \forall i=1,\dots,m$$$$h_i(x) = 0, \quad \forall i=1,\dots,p$$con $f_o$, $f_i: \mathbb{R}^n \rightarrow \mathbb{R}$ $\forall i=1,\dots,m$, $h_i: \mathbb{R}^n \rightarrow \mathbb{R}$, $\forall i=1,\dots,p$. $f_i$ son las **restricciones de desigualdad**, $h_i$ son las **restricciones de igualdad**. ```{admonition} DefiniciónEl problema anterior se le nombra **problema primal**.``` **En lo que continúa se asume que** $f_i$, $h_i$ son funciones de clase $\mathcal{C}^2$ en sus dominios respectivos. (EJ1INTROCIEO)= Ejemplo 1 Considérese el siguiente problema de optimización: $$\min x_1 + x_2$$$$\text{sujeto a:}$$$$x_1^2 + x_2^2 -2 = 0$$ En el cual:$$\nabla f_o(x) = \left [\begin{array}{c}1 \\1\end{array}\right ],\nabla h_1(x) =\left [\begin{array}{c}2x_1 \\2x_2\end{array}\right ]$$ de modo que al evaluar en diferentes puntos los gradientes anteriores se tiene una situación siguiente: Por el dibujo anterior se tiene que el conjunto de factibilidad para este problema es una circunferencia de radio $\sqrt{2}$ con centro en el origen. Se puede observar además que $x^* = \left [ \begin{array}{c}-1 \\ -1 \end{array} \right ]$ pues si estuviéramos en cualquier otro punto, por ejemplo en el punto $x = \left [ \begin{array}{c}\sqrt{2} \\0 \end{array} \right ]$ entonces cualquier movimiento en dirección en sentido de las manecillas del reloj reducirá el valor de $f_o$. También se observa en el dibujo anterior que en la solución $x^*$, se cumple que:$$\nabla f_o(x^*) = - \nu_1^*\nabla h_1(x^*),$$ esto es, son paralelos, de hecho, $\nu_1^* = \frac{1}{2}$. ```{margin}Recuérdese que si $x$ es factible entonces $h_1(x) = 0$ y si $\nabla f_o(x) \neq 0$ entonces no es óptimo.``` Usando el teorema de Taylor aplicado a $h_1$ y asumiendo que $x$ es un punto factible, $\nabla f_o(x) \neq 0$ y $\Delta x$ una dirección de descenso de longitud pequeña tal que mantiene factibilidad se tiene que: $$0 = h_1(x + \Delta x) \approx h_1(x) + \nabla h_1(x)^T \Delta x = \nabla h_1(x)^T \Delta x.$$ En resúmen, si el paso $\Delta x$ mantiene la factibilidad entonces:$$\nabla h_1(x)^T \Delta x = 0.$$ Además, como $\Delta x$ es dirección de descenso:$$\nabla f_o (x)^T \Delta x < 0.$$ Si $x$ no es mínimo local entonces existe tal dirección $\Delta x$, análogamente si no existe tal dirección entonces $x$ es un mínimo local. En el dibujo anterior se verifica visualmente esto pues si ambos gradientes no son paralelos entonces podemos elegir una dirección de descenso que satisfaga ambas condiciones anteriores. 
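La condición $\nabla f_o(x^*) = -\nu_1^* \nabla h_1(x^*)$ con $\nu_1^* = \frac{1}{2}$ se puede comprobar numéricamente con un esbozo mínimo:

```python
# Esbozo mínimo: en x* = (-1, -1) se cumple grad f_o(x*) + nu_1 * grad h_1(x*) = 0 con nu_1 = 1/2
import numpy as np
x_ast = np.array([-1.0, -1.0])
grad_fo = np.array([1.0, 1.0])
grad_h1 = 2 * x_ast                   # gradiente de h_1(x) = x_1^2 + x_2^2 - 2
nu_1 = 0.5
print(grad_fo + nu_1 * grad_h1)       # [0. 0.]
```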
```{admonition} Observación:class: tipObsérvese que si $x^*$ es mínimo local entonces $\nabla f_o(x^*) = 0$ (condición necesaria de primer orden) por lo que no existen direcciones de descenso.``` (FUNLAGRANGIANA)= La función Lagrangiana ```{admonition} DefiniciónLa **función Lagrangiana** asociada al problema de optimización (primal) se define como: $$\mathcal{L}: \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^p \rightarrow \mathbb{R}$$con:$$\mathcal{L}(x, \lambda , \nu) = f_o(x) + \displaystyle \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x)$$y $\text{dom} \mathcal{L} = \mathcal{D} \times \mathbb{R}^m \times \mathbb{R}^p$ donde: $\mathcal{D}$ es el dominio del problema de optimización. **En lo que continúa se asume la restricción** $\lambda_i \geq 0 \forall i=1,\dots, m$.``` ```{admonition} Comentarios* $\lambda _i$ se le nombra **multiplicador de Lagrange** asociado con la $i$-ésima restricción de desigualdad $f_i(x) \leq 0$. * $\nu_i$ se le nombra **multiplicador de Lagrange** asociado con la $i$-ésima restricción de igualdad $h_i(x)=0$.* Los vectores $\lambda = (\lambda_i)_{i=1}^m$ y $\nu = (\nu_i)_{i=1}^p \in \mathbb{R}^p$ se les nombran **variables duales** o **vectores de multiplicadores de Lagrange** asociados con el problema de optimización. El vector $x \in \mathcal{D}$ se le nombra **variable primal**.``` Para el ejemplo anterior se tiene:$$\mathcal{L}(x, \nu_1) = f_o(x) + \nu_1 h_1(x).$$Obsérvese que $\nabla_x \mathcal{L}(x, \nu_1) = \nabla f_o(x) + \nu_1 \nabla h_1(x)$ y en la solución $x^*$, existe $\nu_1^*$ tal que $\nabla_x \mathcal{L} (x^*, \nu_1^*)= 0$. ```{admonition} Observación:class: tipLa notación $\nabla_x g(x, y)$ hace referencia al gradiente de $g(x,y)$ sólo derivando respecto a $x$.``` ```{admonition} Comentario* Aunque la condición $$\nabla f_o(x^*) = - \nu_1^*\nabla h_1(x^*)$$es necesaria para $x^*$ óptimo, ésta no es suficiente pues se satisface en el punto $x = \left [ \begin{array}{c}1 \\ 1 \end{array} \right ]$ para el ejemplo anterior con $\nu_1 = -\frac{1}{2}$ pero este punto **maximiza** $f_o$ en la circunferencia.* También el signo de $\nu_1$ **no es relevante** en este ejemplo pues si se considera la restricción $2-x_1^2-x_2^2=0$, la solución **sigue** siendo $(-1, -1)^T$ con $\nu_1^*=-\frac{1}{2}$.``` Ejemplo 2 $$\min x_1 + x_2$$$$\text{sujeto a:}$$$$x_1^2 + x_2^2 -2 \leq 0$$ Para este ejemplo el conjunto de factibilidad es el interior y frontera del círculo: $$\nabla f_1(x) =\left [\begin{array}{c}2x_1 \\2x_2\end{array}\right ]$$ ```{margin}$f_1(x) = x_1^2 + x_2^2 -2$``` Obsérvese en el dibujo anterior que $-\nabla f_1(x)$ apunta al interior del conjunto de factibilidad.La solución de este problema sigue siendo $x^* = \left [ \begin{array}{c}-1 \\ -1 \end{array} \right ]$ con $\lambda_1^* = \frac{1}{2}$ al igual que en el ejemplo 1 con $x_1^2 + x_2^2 -2 = 0$. Sin embargo, la diferencia con el ejemplo anterior es que el signo $\lambda_1$ es importante como se describirá a continuación. Si $x$ no es óptimo entonces como en el ejemplo anterior podemos encontrar una dirección $\Delta x$ que satisfaga factibilidad, $f_1(x) \leq 0$, y reduzca $f_o$. 
```{margin}Recuérdese que si $x$ es factible entonces $f_1(x) \leq 0$ y si $\nabla f_o(x) \neq 0$ entonces no es óptimo.``` Usando el teorema de Taylor aplicado a $f_1$ y asumiendo que $x$ es un punto factible, $\nabla f_o(x) \neq 0$ y $\Delta x$ una dirección de descenso de longitud pequeña tal que mantiene factibilidad se tiene que: $$f_1(x) + \nabla f_1(x)^T \Delta x \approx f_1(x + \Delta x) \leq 0$$ En resúmen, si el paso $\Delta x$ mantiene la factibilidad entonces:$$f_1(x) + \nabla f_1(x)^T \Delta x \leq 0.$$ Además, como $\Delta x$ es dirección de descenso:$$\nabla f_o (x)^T \Delta x < 0.$$ Tenemos que analizar dos casos dependiendo si $f_1$ es o no activa en $x$ para la desigualdad $f_1(x) + \nabla f_1(x)^T \Delta x \leq 0$: ```{margin}Recuérdese que una restricción de desigualdad $f_1$ es activa en $x$ si $f_1(x) = 0$ e inactiva en $x$ si $f_1(x) < 0$.``` **Caso $f_1$ inactiva en $x$: $f_1(x) < 0$,** entonces $x$ está dentro del círculo: En este caso **cualquier** dirección $\Delta x$ cuya longitud sea suficientemente pequeña satisface $f_1(x) + \nabla f_1(x)^T \Delta x < 0$ si $\nabla f_o(x) \neq 0$ (por ejemplo tómese $\Delta x$ como $-\nabla f_o(x)$ normalizado y suficientemente pequeño). Si $\nabla f_o(x) = 0$ entonces $x$ es un punto crítico y no existen direcciones de descenso. ```{margin}Recuérdese que una restricción de desigualdad $f_1$ es activa en $x$ si $f_1(x) = 0$ e inactiva en $x$ si $f_1(x) < 0$.``` **Caso $f_1$ activa en $x$: $f_1(x) = 0$,** entonces $x$ está en la frontera del círculo: La condición que se debe satisfacer es: $$f_1(x) + \nabla f_1(x)^T \Delta x = \nabla f_1(x)^T \Delta x \leq 0.$$ la cual junto con la de descenso: $\nabla f_o(x) ^T \Delta x < 0$ definen un semi-espacio cerrado y uno abierto respectivamente: Si $\nabla f_o(x)$ y $\nabla f_1(x)$ son paralelos y apuntan en la misma dirección entonces la intersección entre estas dos regiones es vacía: En este caso $\nabla f_o(x)$ y $\nabla f_1(x)$ cumplen: $\nabla f_o(x) = -\lambda_1 \nabla f_1(x)$ para algún $\lambda_1 \geq 0$. Tal condición se da si $x$ **es mínimo**, esto es: $x=x^*$. En este caso el signo del multiplicador **sí es importante** pues si $\nabla f_o(x) = -\lambda_1 \nabla f_1(x)$ con $\lambda_1 \leq 0$ entonces $\nabla f_o(x)$ y $\nabla f_1(x)$ apuntarían en sentidos contrarios y por tanto el conjunto de direcciones que satisfacen:$$\nabla f_1(x)^T \Delta x \leq 0.$$ $$\nabla f_o(x) ^T \Delta x < 0$$construirían un semi-espacio abierto. Esto daría la posibilidad a tener una infinidad de direcciones de descenso: lo cual sería una contradicción si $x$ **es mínimo** pues se tendría $\nabla f_o(x) =0$ por condición necesaria de primer orden y no existiría $\Delta x$ tal que es dirección de descenso (recuérdese que hemos asumido $\nabla f_o(x) \neq 0$) Ambas condiciones para los casos anteriores se pueden obtener a partir de la función Lagrangiana: $$\mathcal{L}(x, \lambda_1) = f_o(x) + \lambda_1 f_1(x).$$Si no existe $\Delta x$ en un punto $x^*$ entonces:$$\nabla_x \mathcal{L}(x^*, \lambda_1^*) = \nabla f_o(x^*) + \lambda_1^* \nabla f_1(x^*) = 0$$ Y de acuerdo a los dos casos anteriores es importante el signo de $\lambda_1^*$. Además de la condición $\lambda_1^* \geq 0$ requerimos la condición con nombre **condición de complementariedad u holgura complementaria**: $$\lambda_1^* f_1(x^*) = 0$$ Así, para el caso en el que $f_1$ es inactiva en $x^*$ entonces por esta condición $\lambda_1^* = 0$ y por tanto $\nabla_x \mathcal{L}(x^*, \lambda_1^*) = \nabla f_o(x^*) = 0$. 
En el caso que $f_1$ es activa en $x^*$ entonces $\lambda_1^*$ puede tomar cualquier valor en $\mathbb{R}$ pero por los dos dibujos anteriores se debe cumplir que $\lambda_1^* \geq 0$ para consistencia con que $x^*$ es mínimo. ```{admonition} ComentarioLa condición de holgura complementaria indica que si $\lambda_1$ es positivo entonces $f_1$ es activa:$$\lambda_1 >0 \implies f_1(x) = 0$$o bien:$$f_1(x) <0 \implies \lambda_1 =0$$``` Ejemplo 3 $$\min x_1 + x_2$$$$\text{sujeto a:}$$$$x_1^2 + x_2^2 -2 \leq 0$$$$-x_2 \leq 0$$ Para este ejemplo el conjunto de factibilidad es el interior de la mitad superior del círculo (incluyendo su frontera): $$\nabla f_1(x) = \left [\begin{array}{c}2x_1 \\2x_2 \\\end{array}\right],\nabla f_2(x) = \left [\begin{array}{c}0 \\-1 \\\end{array}\right]$$ ```{margin}$f_1(x) = x_1^2 + x_2^2 -2$, $f_2(x) = -x_2$``` La solución para este ejemplo es $x^* = \left [ \begin{array}{c}-\sqrt{2} \\ 0 \end{array} \right ]$, un punto en el que ambas restricciones $f_1(x) = x_1^2 + x_2^2 -2$, $f_2(x) = -x_2$ son activas. Siguiendo con el desarrollo del ejemplo 2 de aproximación a primer orden con el teorema de Taylor se tiene que una dirección de descenso $\Delta x$ debe cumplir (considerando restricciones activas $f_1, f_2$): $$\nabla f_1(x)^T \Delta x \leq 0$$ $$\nabla f_2(x)^T \Delta x \leq 0$$ $$\nabla f_o(x)^T \Delta x < 0$$ **No existe** tal dirección $\Delta x$ en el mínimo $x^* = \left [ \begin{array}{c}-\sqrt{2} \\ 0 \end{array} \right ]$: En este caso la función Lagrangiana es: $\mathcal{L}(x, \lambda_1, \lambda_2) = f_o(x) + \lambda_1 f_1(x) + \lambda_2 f_2(x)$ y por el ejemplo 2 si no existe $\Delta x$ en un punto $x^*$ entonces:$$\nabla_x \mathcal{L}(x^*, \lambda^*) = 0$$$$\lambda^* \geq 0$$considerando $\lambda^*$ al vector de multiplicadores de Lagrange que contiene $\lambda_1^*, \lambda_2^*$ y la última desigualdad se refiere a que $\lambda_1^*, \lambda_2^* \geq 0$. Además la condición de holgura complementaria es:$$\lambda_1^*f_1(x^*) = 0$$$$\lambda_2^*f_2(x^*) = 0$$ Para el punto $x^* = \left [ \begin{array}{c}-\sqrt{2} \\ 0 \end{array} \right ]$ se tiene: ```{margin}$f_1(x) = x_1^2 + x_2^2 -2$, $f_2(x) = -x_2$``` $$\nabla f_o(x^*) = \left [\begin{array}{c}1 \\1 \\\end{array}\right],\nabla f_1(x^*) = \left [\begin{array}{c}-2\sqrt{2} \\0 \\\end{array}\right],\nabla f_2(x^*) = \left [\begin{array}{c}0 \\-1 \\\end{array}\right]$$ Y con $\lambda^* = \left [ \begin{array}{c} \frac{1}{2\sqrt{2}} \\ 1 \end{array} \right ]$ se cumple:$$\begin{eqnarray}\nabla_x \mathcal{L}(x^*, \lambda^*) &=& \nabla f_o(x^*) + \lambda^{*T} \left ( \nabla f_1(x^*) \quad \nabla f_2(x^*) \right )\nonumber \\&=& \nabla f_o(x^*) + \lambda_1^* \nabla f_1(x^*) + \lambda_2^* \nabla f_2(x^*) = 0 \nonumber\end{eqnarray}$$ Por tanto $x^*$ es mínimo local y no existe dirección de descenso $\Delta x$. 
Para un punto diferente a $x^*$ por ejemplo $x = \left [ \begin{array}{c}\sqrt{2} \\ 0 \end{array} \right ]$ ambas restricciones $f_1$ y $f_2$ son activas: $$\nabla f_o(x) = \left [\begin{array}{c}1 \\1 \\\end{array}\right],\nabla f_1(x) = \left [\begin{array}{c}2\sqrt{2} \\0 \\\end{array}\right],\nabla f_2(x) = \left [\begin{array}{c}0 \\-1 \\\end{array}\right]$$ ```{margin}$f_1(x) = x_1^2 + x_2^2 -2$, $f_2(x) = -x_2$``` Y el vector $\Delta x = \left [ \begin{array}{c}-1 \\ 0 \end{array} \right ]$ satisface las restricciones: $$\nabla f_1(x)^T \Delta x \leq 0$$ $$\nabla f_2(x)^T \Delta x \leq 0$$ $$\nabla f_o(x)^T \Delta x < 0$$ Revisando si tal punto satisface:$$\nabla_x \mathcal{L}(x, \lambda) = 0$$$$\lambda \geq 0$$$$\lambda_1f_1(x) = 0$$$$\lambda_2f_2(x) = 0$$ Si $\lambda = \left [ \begin{array}{c}\frac{-1}{2\sqrt{2}} \\ 1 \end{array} \right ]$ entonces $\nabla_x \mathcal{L}(x, \lambda) = 0$ pero $\lambda_1 <0$. Por lo tanto sí existe $\Delta x$ de descenso y $x$ no es mínimo. Para un punto diferente a $x^*$ en el interior del conjunto de factibilidad por ejemplo $x = \left [ \begin{array}{c}1 \\ 0 \end{array} \right ]$ sólo la restricción $f_2$ es activa: $$\nabla f_o(x) = \left [\begin{array}{c}1 \\1 \\\end{array}\right],\nabla f_1(x) = \left [\begin{array}{c}2\\0 \\\end{array}\right],\nabla f_2(x) = \left [\begin{array}{c}0 \\-1 \\\end{array}\right]$$ ```{margin}$f_1(x) = x_1^2 + x_2^2 -2$, $f_2(x) = -x_2$``` Dado que $f_1$ sólo restringe a estar en el interior del círculo, el vector $\Delta x$ en este caso debe cumplir con mantener la factibilidad dada por la restricción $f_2$ que representa la parte superior del círculo (incluyendo la frontera). Una dirección $\Delta x$ suficientemente pequeña cumplirá $f_1$. Entonces $\Delta x$ debe satisfacer:$$\nabla f_2(x)^T \Delta x \leq 0$$$$\nabla f_o(x)^T \Delta x < 0$$para ser de descenso. El vector $\Delta x = \left [ \begin{array}{c}-\frac{1}{2} \\ \frac{1}{4} \end{array} \right ]$ satisface lo anterior y por tanto es de descenso. Revisando si tal punto satisface:$$\nabla_x \mathcal{L}(x, \lambda) = 0$$$$\lambda \geq 0$$$$\lambda_1f_1(x) = 0$$$$\lambda_2f_2(x) = 0$$ Para este punto como $f_1$ es inactiva entonces $\lambda_1 = 0$ por holgura complementaria. Si deseamos que $\nabla_x \mathcal{L}(x, \lambda)=0$ entonces debemos encontrar $\lambda$ tal que:$$\begin{eqnarray}\nabla f_o(x) + \lambda_1 \nabla f_1(x) + \lambda_2 \nabla f_2(x) &=& \left [\begin{array}{c}1 \\1 \\\end{array}\right] + 0 \cdot\left [\begin{array}{c}2\\0 \\\end{array}\right] + \lambda_2 \left [\begin{array}{c}0 \\-1 \\\end{array}\right] \nonumber \\&=&\left [\begin{array}{c}1 \\1 - \lambda_2\end{array}\right ]=0\end{eqnarray}$$ No existe $\lambda_2$ y por tanto $\lambda$ que satisfaga la ecuación anterior. Por lo tanto sí existe $\Delta x$ de descenso y $x$ no es mínimo. 
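A quick numerical check of the conditions just derived for Example 3 (added here as an illustration; it assumes NumPy, which this notebook already uses in later cells): at $x^* = (-\sqrt{2}, 0)$ with $\lambda^* = (1/(2\sqrt{2}), 1)$ the gradient of the Lagrangian vanishes and both multipliers are nonnegative.

```python
# Stationarity and sign check for Example 3 (illustrative, not from the original text).
import numpy as np

grad_fo = lambda x: np.array([1.0, 1.0])            # f_o(x) = x_1 + x_2
grad_f1 = lambda x: np.array([2 * x[0], 2 * x[1]])  # f_1(x) = x_1^2 + x_2^2 - 2
grad_f2 = lambda x: np.array([0.0, -1.0])           # f_2(x) = -x_2

x_star = np.array([-np.sqrt(2), 0.0])
lam = np.array([1 / (2 * np.sqrt(2)), 1.0])

grad_L = grad_fo(x_star) + lam[0] * grad_f1(x_star) + lam[1] * grad_f2(x_star)
print(grad_L)       # approximately [0, 0]
print(lam >= 0)     # [ True  True ]
```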
(PRIMERAFORMULACIONCONDKKT)= ```{admonition} ComentarioEn resúmen de los 3 ejemplos anteriores: si $x^*$ es una solución local del {ref}`problema estándar de optimización ` entonces existen vectores multiplicadores de Lagrange $\nu^*, \lambda^*$ para las restricciones de igualdad y desigualdad respectivamente tales que:$$\nabla_x\mathcal{L}(x^*, \nu^*, \lambda^*) = 0$$$$h_i(x^*) = 0 \quad \forall i = 1, \dots, p$$$$f_i(x^*) \leq 0 \quad \forall i = 1, \dots, m$$$$\lambda_i^* \geq 0 \quad \forall i = 1, \dots, m$$$$\lambda_i^* f_i(x^*) = 0 \quad \forall i = 1, \dots, m$$**faltan considerar suposiciones importantes para tener completo el resultado** pero los ejemplos anteriores abren camino hacia las **condiciones de [Karush-Kuhn-Tucker](https://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions), *aka KKT*, de optimalidad**.``` ```{admonition} Observación:class: tipObsérvese que las condiciones KKT de optimalidad son condiciones **necesarias** e involucran información de primer orden.``` Máquina de Soporte Vectorial (SVM) para datos linealmente separables, ejemplo de *Constrained Inequality Convex Optimization* (CICO): Clasificador lineal Considérese dos conjuntos de puntos en $\mathbb{R}^n$ cuyos atributos o *features* están dados por $\{x_1, x_2, \dots, x_N\}$, $\{y_1, y_2, \dots, y_M\}$ y tales puntos tienen etiquetas $\{-1, 1\}$ respectivamente.El objetivo es encontrar una función $f: \mathbb{R}^n \rightarrow \mathbb{R}$ que sea negativa en el primer conjunto de puntos y positiva en el segundo conjunto de puntos:$$f(x_i) < 0 \quad \forall i = 1, \dots, N$$$$f(y_i) > 0 \quad \forall i = 1, \dots, M$$ Si existe tal función entonces $f$ o el conjunto $\{x : f(x) = 0\}$ separa o clasifica los $2$ conjuntos de puntos. ```{admonition} Observación:class: tipLa clasificación puede ser débil que significa: $f(x_i) \leq 0$, $f(y_i) \geq 0$.``` Para el caso en el que los datos son linealmente separables se busca una función afín de la forma $f(x) = a^Tx - b$ que clasifique los puntos, esto es:$$a^Tx_i - b < 0, \quad \forall i=1, \dots, N$$$$a^Ty_i - b > 0, \quad \forall i=1, \dots, M$$ Geométricamente se busca un hiperplano de dimensión $n-1$ que separe ambos conjuntos: En el dibujo tómese los puntos sin rellenar como los pertenecientes a una clase de $\{-1, 1\}$. Si $x, y$ son puntos en el hiperplano entonces $f(x) = f(y) = 0$ y además $a^T(x-y) = b - b = 0$, así $a$ es ortogonal a todo punto en el hiperplano y determina la orientación de este: Si $x$ es un punto en el hiperplano tenemos $f(x) = 0$ por lo tanto la distancia perpendicular del origen al hiperplano está dada por: $\frac{|a^Tx|}{||a||_2} = \frac{|b|}{||a||_2}$ y el parámetro $b$ determina la localización del hiperplano: Sean $x \in \mathbb{R}^n$ y $x_{\perp} \in \mathbb{R}^n$ la proyección ortogonal en el hiperplano: tenemos: $a^Tx = a^Tx_{\perp} + k||a||_2 = b + k ||a||_2$ por lo que $a^Tx - b = k||a||_2$ y: $$k = \frac{a^Tx-b}{||a||_2} = \frac{f(x)}{||a||_2},$$esto es, $|f(x)|$ da una medida de distancia perpendicular de $x$ al hiperplano. 
Modelo de SVM En este modelo se desea obtener:$$a^Tx_i - b \leq -1, \quad \forall i=1, \dots, N$$$$a^Ty_i - b \geq 1, \quad \forall i=1, \dots, M$$ ```{admonition} Observación:class: tipEl uso de $\{-1, 1\}$ ayuda a la escritura y formulación matemática del modelo.``` para clasificar a datos linealmente separables: Obsérvese que existen una infinidad de hiperplanos que separan a los datos anteriores: En la SVM buscamos un hiperplano que tenga la máxima separación de cada una de las distancias de los datos al mismo: Una distancia al hiperplano para los datos $x_i, \forall i=1, \dots, N$ y otra distancia para los datos $y_i \forall i=1, \dots, M$. Por la sección anterior, la distancia al hiperplano para cada conjunto de datos está dada por:$$\displaystyle \min_{i=1,\dots, N} \frac{|f(x_i)|}{||a||_2}$$$$\displaystyle \min_{i=1,\dots, M} \frac{|f(y_i)|}{||a||_2}$$ Entonces encontrar un hiperplano que tenga la máxima separación de cada una de las distancias de los datos al hiperplano se puede escribir como el problema:$$\displaystyle \max_{a,b} \left \{ \min_{i=1,\dots, N} \frac{|f(x_i)|}{||a||_2} + \min_{i=1,\dots, M} \frac{|f(y_i)|}{||a||_2} \right \}$$ Obsérvese que el cociente $\frac{|f(x_i)|}{||a||_2}$ es invariante ante reescalamientos por ejemplo: $$\frac{|ka^Tx - kb|}{||ka||_2} = \frac{a^Tx - b}{||a||_2} \forall k \neq 0$$ Por tanto, si los índices $i_1, i_2$ cumplen:$$i_1 = \text{argmin}_{i=1,\dots, N} \left \{\frac{|f(x_i)|}{||a||_2} \right \}$$$$i_2 = \text{argmin}_{i=1,\dots, M} \left \{\frac{|f(y_i)|}{||a||_2} \right \}$$ se puede hacer un reescalamiento para obtener:$$|f(x_{i_1})| = 1, \quad |f(x_i)| > 1 \quad \forall i=1, \dots, N, i \neq i_1$$$$|f(y_{i_2})| = 1, \quad |f(y_i)| > 1 \quad \forall i=1, \dots, M, i \neq i_2$$ Esto es, $|f(x_i)| \geq 1, \forall i=1, \dots, N$, $|f(y_i)| \geq 1, \forall i=1, \dots, M$. Problema de optimización en SVM El problema canónico o estándar de la máquina de soporte vectorial es: $$\displaystyle \max_{a,b} \frac{2}{||a||_2}$$$$\text{sujeto a:}$$$$f(x_i) \leq -1, \quad \forall i=1, \dots, N$$$$f(y_i) \geq 1, \quad \forall i=1, \dots, M$$ El cual es equivalente a: $$\displaystyle \min_{a,b} \frac{||a||^2_2}{2}$$$$\text{sujeto a:}$$$$a^Tx_i - b \leq -1, \quad \forall i=1, \dots, N$$$$a^Ty_i -b \geq 1, \quad \forall i=1, \dots, M$$ ```{admonition} Comentarios* Se ha quitado el valor absoluto $|f(x_i)|, |f(y_i)|$ pues $x_i, y_i$ se asumen cumplen $a^Tx_i-b \geq 1$, $a^Ty_i-b \leq -1$, esto es, $x_i, y_i$ son clasificados correctamente.* Este problema es de optimización convexa con restricciones de desigualdad.``` Vectores de soporte Al encontrar este hiperplano tenemos otros dos hiperplanos paralelos ortogonales al vector $a$ sin ningún dato entre ellos a una distancia de $\frac{1}{||a||_2}$: ```{admonition} DefiniciónAquellas restricciones activas son originadas por puntos con el nombre de vectores de soporte.``` ```{admonition} Comentarios* Al resolver el problema de optimización siempre existirán al menos $2$ restricciones activas pues siempre hay una distancia mínima para cada conjunto de puntos.* Las otras dos rectas forman lo que se conoce como margen.``` Ejemplo [Iris *dataset*](https://scikit-learn.org/stable/auto_examples/datasets/plot_iris_dataset.html) Utilizamos el conocido *dataset* de [*Iris*](https://en.wikipedia.org/wiki/Iris_flower_data_set) en el que se muestran **tres especies del género *Iris***. Las especies son: [*I. setosa*](https://en.wikipedia.org/wiki/Iris_setosa), [*I. 
virginica*](https://en.wikipedia.org/wiki/Iris_virginica) y [*I. versicolor*](https://en.wikipedia.org/wiki/Iris_versicolor):Imagen obtenida de [Iris Dataset](https://rpubs.com/AjinkyaUC/Iris_DataSet). La clase $\mathcal{C}_{-1}$ es `setosa` y $\mathcal{C}_1$ es `versicolor` codificadas como $-1, 1$ respectivamente.
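The cells below use `iris`, `data_iris`, `np`, `plt` and `cp` (cvxpy), which are defined earlier in the notebook; a minimal sketch of the assumed setup with scikit-learn would be:

```python
# Hypothetical setup cell (assumed from context, not shown in this excerpt).
import numpy as np
import matplotlib.pyplot as plt
import cvxpy as cp
from sklearn import datasets

iris = datasets.load_iris()
data_iris = iris["data"]   # 150 x 4 matrix: sepal length/width, petal length/width
# rows 0-49 are I. setosa (target 0) and rows 50-99 are I. versicolor (target 1)
```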
###Code
data_iris_setosa_versicolor = data_iris[0:100].copy()
classes = iris["target"][0:100].copy()
print(classes)
classes[0:50] = classes[0:50].copy()-1
print(classes)
###Output
[-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1]
###Markdown
Estandarizamos la matriz de datos:
###Code
data_iris_setosa_versicolor = (data_iris_setosa_versicolor-
data_iris_setosa_versicolor.mean(axis=0))/data_iris_setosa_versicolor.std(axis=0)
###Output
_____no_output_____
###Markdown
**Añadimos la columna que indica uso de intercepto:**
###Code
m,n = data_iris_setosa_versicolor.shape
data_iris_setosa_versicolor = np.column_stack((-1*np.ones((m,1)), data_iris_setosa_versicolor))
print(data_iris_setosa_versicolor[0:10, 0:10])
###Output
[[-1. -0.58 0.84 -1.01 -1.04]
[-1. -0.89 -0.21 -1.01 -1.04]
[-1. -1.21 0.21 -1.08 -1.04]
[-1. -1.36 0. -0.94 -1.04]
[-1. -0.74 1.05 -1.01 -1.04]
[-1. -0.11 1.68 -0.8 -0.69]
[-1. -1.36 0.63 -1.01 -0.86]
[-1. -0.74 0.63 -0.94 -1.04]
[-1. -1.68 -0.42 -1.01 -1.04]
[-1. -0.89 0. -0.94 -1.22]]
###Markdown
Visualizamos para algunos atributos: ```{margin}Obsérvese que considerando los atributos `Sepal.Length` y `Sepal.Width` se pueden separar la clase *I. setosa* codificada como $-1$ y la clase *I. versicolor* codificada como $1$ de forma lineal.```
###Code
plt.scatter(data_iris_setosa_versicolor[:, 1],
data_iris_setosa_versicolor[:, 2], c=classes, cmap=plt.cm.Set1,
edgecolor='k')
plt.xlabel("sepal length")
plt.ylabel("sepal width")
plt.show()
###Output
_____no_output_____
###Markdown
```{margin}Obsérvese que considerando los atributos `Petal.Length` y `Petal.Width` se pueden separar la clase *I. setosa* codificada como $-1$ y la clase *I. versicolor* codificada como $1$ de forma lineal.```
###Code
plt.scatter(data_iris_setosa_versicolor[:, 3],
data_iris_setosa_versicolor[:, 4], c=classes, cmap=plt.cm.Set1,
edgecolor='k')
plt.xlabel("petal length")
plt.ylabel("petal width")
plt.show()
###Output
_____no_output_____
###Markdown
**Función objetivo:**$$\frac{||a||^2_2}{2}$$**y restricciones:**$$a^Tx_i - b \leq -1, \quad \forall i=1, \dots, N$$$$a^Ty_i -b \geq 1, \quad \forall i=1, \dots, M$$ ```{margin}Ver [cvxpy: svm](https://www.cvxpy.org/examples/machine_learning/svm.html)``` ```{margin}El vector `vec` contiene al intercepto $b$ y al vector $a$.```
###Code
n = 5 #number of variables
vec = cp.Variable(n) #optimization variable
###Output
_____no_output_____
###Markdown
```{margin}`vec[0]` es $b$, `vec[1:n]` es $a$.```
###Code
fo_cvxpy = 1/2*cp.norm(vec[1:n],2)**2 #fo just includes a not intercept
constraints = [data_iris_setosa_versicolor[0:50,:]@vec <=-1,
data_iris_setosa_versicolor[50:100,:]@vec >= 1]
obj = cp.Minimize(fo_cvxpy)
prob = cp.Problem(obj, constraints)
print(prob.solve())
print("\nThe optimal value is", prob.value)
print("The optimal vector with intercept is")
print(vec.value)
###Output
The optimal value is 0.6190278040023801
The optimal vector with intercept is
[-0.24 0.27 -0.34 0.7 0.75]
###Markdown
Los primeros $50$ renglones pertenecen a la clase $\mathcal{C}_{-1}:$ *I. setosa* y los restantes $50$ renglones pertenecen a la clase $\mathcal{C}_{1}$: *I. versicolor*. Se clasifica de acuerdo al signo de: $a^Tx - b$:
###Code
print(np.sign(data_iris_setosa_versicolor[0:50,:]@vec.value))
print(np.sign(data_iris_setosa_versicolor[50:100,:]@vec.value))
###Output
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1.]
###Markdown
Visualización utilizando únicamente `Sepal.Length` y `Sepal.Width`:
###Code
x_min = np.min(data_iris_setosa_versicolor[:,1])
x_max = np.max(data_iris_setosa_versicolor[:,1])
y_min = np.min(data_iris_setosa_versicolor[:,2])
y_max = np.max(data_iris_setosa_versicolor[:,2])
x_plot = np.linspace(x_min, x_max, 100)
b, a = vec.value[0], vec.value[1:n]
y_plot = 1/a[1]*(-a[0]*x_plot + b)
y_plot_minus_1 = 1/a[1]*(-a[0]*x_plot + b -1)
y_plot_plus_1 = 1/a[1]*(-a[0]*x_plot + b + 1)
plt.scatter(data_iris_setosa_versicolor[:, 1],
data_iris_setosa_versicolor[:, 2], c=classes, cmap=plt.cm.Set1,
edgecolor='k')
plt.plot(x_plot,y_plot,'b',
x_plot,y_plot_minus_1, 'm',
x_plot, y_plot_plus_1,'r')
plt.xlabel("sepal length")
plt.ylabel("sepal width")
plt.show()
###Output
_____no_output_____
###Markdown
Visualización utilizando únicamente `Petal.Length` y `Petal.Width`:
###Code
x_min = np.min(data_iris_setosa_versicolor[:,3])
x_max = np.max(data_iris_setosa_versicolor[:,3])
y_min = np.min(data_iris_setosa_versicolor[:,4])
y_max = np.max(data_iris_setosa_versicolor[:,4])
x_plot = np.linspace(x_min, x_max, 100)
y_plot = 1/a[3]*(-a[2]*x_plot + b)
y_plot_minus_1 = 1/a[3]*(-a[2]*x_plot + b -1)
y_plot_plus_1 = 1/a[3]*(-a[2]*x_plot + b + 1)
plt.scatter(data_iris_setosa_versicolor[:, 3],
data_iris_setosa_versicolor[:, 4], c=classes, cmap=plt.cm.Set1,
edgecolor='k')
plt.plot(x_plot,y_plot,'b',
x_plot,y_plot_minus_1, 'm',
x_plot, y_plot_plus_1,'r')
plt.xlabel("petal length")
plt.ylabel("petal width")
plt.show()
###Output
_____no_output_____
###Markdown
```{admonition} Ejercicio:class: tipRealiza la clasificación anterior para las clases `virginica` y `versicolor`.``` Ajustando el modelo de la SVM sólo para dos atributos: `Sepal.Length` y `Sepal.Width`:
###Code
data_iris_setosa_versicolor = data_iris[0:100, 0:2].copy()
###Output
_____no_output_____
###Markdown
Aquí no estandarizamos.
###Code
data_iris_setosa_versicolor = np.column_stack((-1*np.ones((m,1)), data_iris_setosa_versicolor))
n = 3 #number of variables
vec = cp.Variable(n) #optimization variable
fo_cvxpy = 1/2*cp.norm(vec[1:n],2)**2 #fo just includes a not intercept
constraints = [data_iris_setosa_versicolor[0:50,:]@vec <=-1,
data_iris_setosa_versicolor[50:100,:]@vec >= 1]
obj = cp.Minimize(fo_cvxpy)
prob = cp.Problem(obj, constraints)
print(prob.solve())
print("\nThe optimal value is", prob.value)
print("The optimal vector with intercept is")
print(vec.value)
print(np.sign(data_iris_setosa_versicolor[0:50,:]@vec.value))
print(np.sign(data_iris_setosa_versicolor[50:100,:]@vec.value))
b, a = vec.value[0], vec.value[1:n]
print(b)
print(a)
x_min = np.min(data_iris_setosa_versicolor[:,1])
x_max = np.max(data_iris_setosa_versicolor[:,1])
y_min = np.min(data_iris_setosa_versicolor[:,2])
y_max = np.max(data_iris_setosa_versicolor[:,2])
x_plot = np.linspace(x_min, x_max, 100)
y_plot = 1/a[1]*(-a[0]*x_plot + b)
y_plot_minus_1 = 1/a[1]*(-a[0]*x_plot + b -1)
y_plot_plus_1 = 1/a[1]*(-a[0]*x_plot + b + 1)
plt.scatter(data_iris_setosa_versicolor[:, 1],
data_iris_setosa_versicolor[:, 2], c=classes, cmap=plt.cm.Set1,
edgecolor='k')
plt.plot(x_plot,y_plot,'b',
x_plot,y_plot_minus_1, 'm',
x_plot, y_plot_plus_1,'r')
plt.legend(["$17.32 + 6.32x_1 -5.26x_2=0$",
"$17.32 + 6.32x_1 -5.26x_2=-1$",
"$17.32 + 6.32x_1 -5.26x_2=1$"])
plt.xlabel("sepal length")
plt.ylabel("sepal width")
plt.show()
###Output
_____no_output_____ |
Autoencoding kernel convolution/06 VAE kernels.ipynb | ###Markdown
VAE kernelsCan we use a VAE as a kernel for a convolutional layer? The idea is that for each receptive field, the VAE would find a multivariate Gaussian distribution shared across all instances of the receptive field. The resulting system would have features, feature value ranges and distributions at each receptive field, similar to neurons with various feature value preferences for each receptive field in the cortex. For example, a neuron may have a preference for horizontal lines, upward movement, and slow movement. That would be equivalent to the VAE finding 3 features with specific values for a given receptive field. A neuron can have a preference for only one (or a few) values of a given feature, so a lot of neurons are needed to represent the domain of each feature with sufficient statistical redundancy. But a VAE's output for a given receptive field, e.g. 0.3 for feature 1, 0.9 for feature 2 and 0.5 for feature 3 with low, high and medium variance, respectively, can represent 1000s of feature-value preferences and stand in for 100s of neurons (because a neuron can be selective to a different value for each feature). The simplifying factor here is the Laplace assumption. Maybe using an order of magnitude more features would result in at least a mixture of Gaussians, because a feature would get duplicated and learn different modes of the distribution. Basics
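As a concrete sketch of this idea (hypothetical code added for illustration, not part of the original notebook), a trained `vae` could be slid over all receptive fields of a 2D image with `torch.nn.functional.unfold`, producing one feature map per latent dimension:

```python
# Hypothetical helper: apply a trained VAE as a "convolutional kernel" by encoding
# every patch_size x patch_size receptive field of a 2D float image (stride 1, no padding).
import torch
import torch.nn.functional as F

def vae_convolve(vae, image, patch_size=3):
    h, w = image.shape
    vae.eval()
    cols = F.unfold(image.view(1, 1, h, w), kernel_size=patch_size)        # (1, k*k, L)
    patches = cols.squeeze(0).t().reshape(-1, 1, 1, patch_size * patch_size)
    with torch.no_grad():
        mu, logvar = vae.encode(patches)      # per-patch feature means / log-variances
    out_h, out_w = h - patch_size + 1, w - patch_size + 1
    # one (out_h, out_w) map per latent feature
    return (mu.reshape(out_h, out_w, -1).permute(2, 0, 1),
            logvar.reshape(out_h, out_w, -1).permute(2, 0, 1))
```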
###Code
import logging
import torch
import torch.utils.data
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
device = "cuda" if torch.cuda.is_available() else "cpu"
logging.basicConfig(
level=logging.ERROR,
format='%(asctime)s.%(msecs)03d %(name)s:%(funcName)s %(levelname)s:%(message)s',
datefmt="%M:%S")
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from skimage.draw import line_aa
%matplotlib inline
plt.style.use('classic')
def show_image(image, vmin=None, vmax=None, title=None, print_values=False, figsize=(4, 4)):
#print("image ", image.shape)
image = image.cpu().numpy()
fig, ax1 = plt.subplots(figsize=figsize)
if title:
plt.title(title)
#i = image.reshape((height, width))
#print("i ", i.shape)
ax1.imshow(image, vmin=vmin, vmax=vmax, interpolation='none', cmap=plt.cm.plasma)
plt.show()
if print_values:
print(image)
###Output
_____no_output_____
###Markdown
Network
###Code
class VAE(nn.Module):
def __init__(self, input_width, input_height, feature_count):
super(VAE, self).__init__()
self.logger = logging.getLogger(self.__class__.__name__)
self.logger.setLevel(logging.WARN)
self.feature_count = feature_count
self.encoder = nn.Sequential(
nn.Linear(input_width * input_height , input_width * input_height * 2),
nn.BatchNorm2d(1),
nn.LeakyReLU(0.2, inplace=True),
nn.Linear(input_width * input_height * 2, input_width * input_height * 4),
nn.BatchNorm2d(1),
nn.LeakyReLU(0.2, inplace=True),
)
self.decoder = nn.Sequential(
nn.Linear(feature_count , input_width * input_height * 2),
nn.BatchNorm2d(1),
nn.LeakyReLU(0.2, inplace=True),
nn.Linear(input_width * input_height * 2, input_width * input_height),
nn.BatchNorm2d(1),
nn.Sigmoid(),
)
self.linear_mu = nn.Linear(input_width * input_height * 4, feature_count)
self.linear_sigma = nn.Linear(input_width * input_height * 4, feature_count)
self.lrelu = nn.LeakyReLU()
self.relu = nn.ReLU()
def encode(self, x):
self.logger.debug(f"x {x.shape}")
x = self.encoder(x)
return self.linear_mu(x), self.linear_sigma(x)
def decode(self, z):
z = z.view(-1, 1, 1, self.feature_count)
self.logger.debug(f"z {z.shape}")
return self.decoder(z)
def reparametrize(self, mu, logvar):
std = logvar.mul(0.5).exp_()
eps = torch.FloatTensor(std.size()).normal_().to(device)
eps = eps.mul(std).add_(mu)
eps = F.sigmoid(eps)
self.logger.debug(f"eps {eps.shape}")
return eps
def forward(self, x):
self.logger.debug(f"x {x.shape}")
mu, logvar = self.encode(x)
self.logger.debug(f"mu {mu.shape}")
self.logger.debug(f"logvar {logvar.shape}")
z = self.reparametrize(mu, logvar)
self.logger.debug(f"z {z.shape}")
decoded = self.decode(z)
self.logger.debug(f"decoded {decoded.shape}")
return decoded, mu, logvar, z
class Network(nn.Module):
def __init__(self, input_width, input_height, feature_count):
super(Network, self).__init__()
self.input_width = input_width
self.input_height = input_height
self.vae = VAE(input_width, input_height, feature_count)
def forward(self, x):
return self.vae(x)
def loss_function(self, recon_x, x, mu, logvar):
# print(recon_x.size(), x.size())
BCE = F.binary_cross_entropy(recon_x.view(-1, self.input_width * self.input_height), x.view(-1, self.input_width * self.input_height), size_average=True)
# see Appendix B from VAE paper:
# Kingma and Welling. Auto-Encoding Variational Bayes. ICLR, 2014
# https://arxiv.org/abs/1312.6114
# 0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
# return BCE + KLD
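        # Note: dividing BCE by 0.001 below (~1000x weight) and multiplying KLD by 3
        # re-balances the reconstruction term against the KL regularizer in this experiment.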
BCE /= 0.001
#print(BCE, KLD)
return BCE + 3 * KLD
def train(self, input, num_epochs=3000):
learning_rate = 10e-3
optimizer = torch.optim.Adam(self.parameters(), lr=learning_rate,
weight_decay=1e-5)
done = False
epoch = 0
while not done:
output, mu, logvar, z = self(input)
loss = self.loss_function(output, input, mu, logvar)
# ===================backward====================
optimizer.zero_grad()
loss.backward()
optimizer.step()
if epoch % int(num_epochs / 10) == 0:
print('epoch [{}/{}], loss:{:.4f}'
.format(epoch+1, num_epochs, loss.item()))
show_image(output[0, 0].view(self.input_height, self.input_width).detach(), title=f"output {0}", vmin=0, vmax=1)
if (loss.item() < 0.001 and epoch > 1500) or epoch > num_epochs:
done = True
epoch += 1
torch.save(self.state_dict(), self.save_path())
return output, mu, logvar, z
def save_path(self):
return f"network.pt"
###Output
_____no_output_____
###Markdown
Example
###Code
input_height = input_width = 3
input = [
[0, 0, 1,
0, 1, 0,
1, 0, 0],
[1, 0, 0,
0, 1, 0,
0, 0, 1],
[1, 0, 0,
1, 0, 0,
1, 0, 0],
[0, 0, 0,
0, 0, 0,
0, 0, 0],
[1, 1, 1,
1, 1, 1,
1, 1, 1],
[1, 1, 1,
1, 0, 0,
1, 0, 0],
[1, 0, 0,
1, 1, 1,
1, 0, 0],
[1, 0, 1,
0, 1, 0,
1, 0, 1],
[0, 1, 1,
0, 1, 1,
0, 1, 1],
[1, 1, 1,
0, 0, 0,
1, 1, 1],
[0, 0, 0,
0, 1, 0,
0, 0, 0],
[1, 1, 1,
1, 0, 1,
1, 1, 1],
]
feature_count = 4
input = torch.as_tensor(input).float().unsqueeze(dim=0).unsqueeze(dim=0).to(device)
image_count = input.shape[2]
for i in range(image_count):
show_image(input[0,0,i].view(input_height, input_width), vmin=0, vmax=1, title=f"input {i}")
network = Network(input_height, input_width, feature_count).to(device)
decoded, mu, logvar, z = network.train(input)
for i in range(image_count):
show_image(decoded[i,0,0].detach().view(input_height, input_width), vmin=0, vmax=1, title=f"output {i}")
# import os
# import glob
# files = glob.glob('./*.pt')
# for f in files:
# os.remove(f)
print(mu)
print(logvar.exp())
print(z)
decoded = network.vae.decode(torch.as_tensor(z[:,:,0:1,:]).to(device))
show_image(decoded[0,0,0].detach().view(input_height, input_width), vmin=0, vmax=1)
network.vae.reparametrize(mu, logvar)
###Output
_____no_output_____ |
docs/examples/introduction.ipynb | ###Markdown
Quick introduction to On The Fly python (*OTF*)> On the fly distributed workflows in ipythonI've been building infrastructure for quants and data-scientists since 2016. Over that time I observed that there's a big mismatch between how data-scientists work on a day to day basis and the interface provided by most workflow management systems. Typically data-scientists do *exploratory programming* in notebooks: they rely on the data staying in memory and use the cells in the notebook as a form of checkpointing system (if you make a mistake you can edit the cell in which the mistake was made and re-run the code from there).The transition to big data often involves throwing away the tools they are used to and dealing with errors that come from layers that are abstracted deep under the API they use (e.g.: serialisation problems). A lot of the pain comes from the fact that the systems they are transitioning to are designed for writing complex distributed computing workloads, not one-off computations or glue code between existing algorithms. What is *OTF**OTF* is a framework to write glue code. It provides the building blocks to edit functions and launch computations directly from notebooks. Under the hood*OTF* is a framework that does "continuation-based" workflow orchestration. Other workflow management systems (*wms*) either:+ Use a **static computation graph**: create a graph of computations and then run that graph (aka: Define then run)+ Are **tied to a machine**: The *wms* relies on having a node stay up for the whole duration of the workflow.*OTF* is unique in that it is able to save snapshots of the orchestration code when it's waiting on computations. As a result the orchestration code can be resumed on any machine after the results of an awaited computation are available. As we'll see, these snapshots also provide a great tool to debug runs, fix any issues we might have found with the code and resume our edited workflow from the middle.
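To make the "continuation" idea concrete, here is a toy, self-contained sketch (added for illustration only; it is not how OTF is implemented): a generator suspends every time it needs an external result, and a driver feeds results back in. Plain generators cannot be serialised between machines, which is exactly the gap OTF's picklable checkpoints fill.

```python
# Toy illustration of "suspend at each await, resume with the result".
def get_primes_workflow(candidates):
    primes = []
    for c in candidates:
        ok = yield ("is_prime", c)   # suspend; a driver supplies the result
        if ok:
            primes.append(c)
    return primes

def drive(workflow, handlers):
    """Run the workflow to completion, computing each requested task synchronously."""
    try:
        request = workflow.send(None)                      # start the generator
        while True:
            name, arg = request
            request = workflow.send(handlers[name](arg))   # resume with the result
    except StopIteration as stop:
        return stop.value

# drive(get_primes_workflow([4, 5, 6, 7]), {"is_prime": lambda n: n in (2, 3, 5, 7)})  -> [5, 7]
```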
###Code
import math
import otf
import pickle
from otf import local_scheduler
###Output
_____no_output_____
###Markdown
All the functions we are defining live in an environment. They only have access to the values defined in that environment:
###Code
e = otf.Environment(
math=otf.NamedReference(math),
local_scheduler=otf.NamedReference(local_scheduler),
)
@e.function
def is_prime(n: int) -> float:
"""check whether a number is prime
Taken from https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor-example
"""
if n < 2:
return False
if n == 2:
return True
if n % 2 == 0:
return False
sqrt_n = int(math.floor(math.sqrt(n)))
for i in range(3, sqrt_n + 1, 2):
if n % i == 0:
return False
return True
###Output
_____no_output_____
###Markdown
`is_prime` is a function. It can be called:
###Code
is_prime(5), is_prime(435345235)
###Output
_____no_output_____
###Markdown
But it can also be serialised in a format that is easy to introspect and is stable:
###Code
print(otf.dump_text(is_prime, indent=4, format=otf.EXECUTABLE))
###Output
import otf
_0 = otf.Function(
name='is_prime',
signature=['n'],
body=(
' """check whether a number is prime\n'
'\n'
' Taken from https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor-example\n'
' """\n'
' if n < 2:\n'
' return False\n'
' if n == 2:\n'
' return True\n'
' if n % 2 == 0:\n'
' return False\n'
' sqrt_n = int(math.floor(math.sqrt(n)))\n'
' for i in range(3, sqrt_n + 1, 2):\n'
' if n % i == 0:\n'
' return False\n'
' return True'
)
)
otf.Closure(
environment=otf.Environment(
math=otf.NamedReference('math'),
local_scheduler=otf.NamedReference(
'otf.local_scheduler'
),
is_prime=_0
),
target=_0
)
###Markdown
Defining a workflow
###Code
@e.workflow
async def get_primes(candidates: list[int]) -> list[int]:
primes = []
futures = [local_scheduler.defer(is_prime, x) for x in candidates]
while futures:
candidate = candidates.pop()
fut = futures.pop()
ok = await fut
if ok:
primes.append(candidate)
return primes
V = [
112272535095293,
112582705942171,
112272535095293,
115280095190773,
115797848077099,
1099726899285419,
]
with local_scheduler.Scheduler() as schd:
trace = schd.run(get_primes, V)
###Output
_____no_output_____
###Markdown
The scheduler returns a trace: it saved a checkpoint every time it hit an `await`.> *NOTE* This part uses IPyWidgets; it works fine in Jupyter, but some notebook renderers do not support it well.
###Code
trace
###Output
_____no_output_____
###Markdown
The trace has a linked list of all the checkpoints. The checkpoints contain all the information required to restart the computation:
###Code
print(otf.dump_text(trace.parent.suspension, indent=4))
###Output
otf.Suspension(
code=(
' primes = []\n'
' futures = [local_scheduler.defer(is_prime, x) for x in candidates]\n'
' while futures:\n'
' candidate = candidates.pop()\n'
' fut = futures.pop()\n'
' ok = await fut\n'
' if ok:\n'
' primes.append(candidate)\n'
' return primes'
),
position=1,
variables={
'candidate': 112_272_535_095_293,
'candidates': [],
'fut': otf.local_scheduler.future(
uid='62e6e282-e5c1-457c-b8c3-a967b36711eb:0',
task=otf.runtime.Task(
function=otf.Closure(
environment=otf.Environment(
math=otf.NamedReference('math'),
local_scheduler=otf.NamedReference(
'otf.local_scheduler'
),
is_prime=otf.Function(
name='is_prime',
signature=['n'],
body=(
' """check whether a number is prime\n'
'\n'
' Taken from https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor-example\n'
' """\n'
' if n < 2:\n'
' return False\n'
' if n == 2:\n'
' return True\n'
' if n % 2 == 0:\n'
' return False\n'
' sqrt_n = int(math.floor(math.sqrt(n)))\n'
' for i in range(3, sqrt_n + 1, 2):\n'
' if n % i == 0:\n'
' return False\n'
' return True'
)
)
),
target=ref(5)
),
args=[112_272_535_095_293]
)
),
'futures': [],
'ok': 1,
'primes': [
115_797_848_077_099,
115_280_095_190_773,
112_272_535_095_293,
112_582_705_942_171
]
},
environment=ref(23),
awaiting=ref(28)
)
###Markdown
And of course, we can also access the result of our computation:
###Code
trace.value
###Output
_____no_output_____ |
DistancesCaseStudy.ipynb | ###Markdown
Euclidean and Manhattan Distance CalculationsIn this short mini project you will see examples and comparisons of distance measures. Specifically, you'll visually compare the Euclidean distance to the Manhattan distance measure. The application of distance measures has a multitude of uses in data science and is the foundation of many algorithms you'll be using, such as Principal Component Analysis.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
plt.style.use('ggplot')
# Load Course Numerical Dataset
df = pd.read_csv('data/distance_dataset.csv',index_col=0)
df.head()
###Output
_____no_output_____
###Markdown
Euclidean DistanceLet's visualize the difference between the Euclidean and Manhattan distance.We use Pandas to load our dataset's .CSV file and NumPy to compute the __Euclidean distance__ to the point (Y=5, Z=5) that we choose as reference. Here we show the dataset projected onto the YZ plane, color coded by the Euclidean distance we just computed. As expected, points that lie at the same Euclidean distance define a regular 2D circle whose radius is that distance.Note that the __SciPy library__ comes with optimized functions written in C to compute distances (in the scipy.spatial.distance module) that are much faster than our (naive) implementation.
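As a quick, hypothetical sanity check (one could also time both calls to see the speed difference), SciPy's `cdist` gives the same distances as the manual formula used below:

```python
# Assumes df has already been loaded as above.
from scipy.spatial.distance import cdist
ref = np.array([[5.0, 5.0]])
dist_scipy = cdist(df[['Y', 'Z']].to_numpy(), ref, metric='euclidean').ravel()
dist_manual = np.sqrt((df.Y - 5)**2 + (df.Z - 5)**2)
print(np.allclose(dist_scipy, dist_manual))   # expected: True
```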
###Code
# In the Y-Z plane, we compute the distance to ref point (5,5)
distEuclid = np.sqrt((df.Z - 5)**2 + (df.Y - 5)**2)
###Output
_____no_output_____
###Markdown
**Create a distance to reference point (3,3) matrix similar to the above example.**
###Code
distEuclid_1 = np.sqrt((df.Z-3)**2 + (df.Y-3)**2)
###Output
_____no_output_____
###Markdown
**Replace the value set to 'c' in the plotting cell below with your own distance matrix and review the result to deepen your understanding of Euclidean distances. **
###Code
figEuclid = plt.figure(figsize=[10,8])
plt.scatter(df.Y - 3, df.Z-3, c=distEuclid_1, s=20)
plt.ylim([-2,6])
plt.xlim([-2,6])
plt.xlabel('Y - 3', size=14)
plt.ylabel('Z - 3', size=14)
plt.title('Euclidean Distance')
cb = plt.colorbar()
cb.set_label('Distance from (3,3)', size=14)
#figEuclid.savefig('plots/Euclidean.png')
###Output
_____no_output_____
###Markdown
Manhattan DistanceManhattan distance is simply the sum of absolute differences between the points' coordinates. This distance is also known as the taxicab or city block distance, as it measures distances along the coordinate axes, which creates "paths" that look like a cab's route on a grid-style city map.We display the dataset projected on the XZ plane here, color coded by the Manhattan distance to the (X=5, Z=5) reference point. We can see that points lying at the same Manhattan distance define a "circle" that looks like a Euclidean square rotated by 45 degrees.
###Code
# In the Y-Z plane, we compute the distance to ref point (5,5)
distManhattan = np.abs(df.X - 5) + np.abs(df.Z - 5)
###Output
_____no_output_____
###Markdown
**Create a Manhattan distance to reference point (4,4) matrix similar to the above example and replace the value for 'c' in the plotting cell to view the result.**
###Code
distManhattan_1= np.abs(df.X-4) + np.abs(df.Z-4)
figManhattan = plt.figure(figsize=[10,8])
plt.scatter(df.X - 4, df.Z-4, c=distManhattan_1, s=20)
plt.ylim([-4,6])
plt.xlim([-4,6])
plt.xlabel('X - 4', size=14)
plt.ylabel('Z - 4', size=14)
plt.title('Manhattan Distance')
cb = plt.colorbar()
cb.set_label('Distance from (4,4)', size=14)
#figManhattan.savefig('plots/Manhattan.png')
###Output
_____no_output_____
###Markdown
Now let's create distributions of these distance metrics and compare them. We leverage SciPy's distance functions (`scipy.spatial.distance.pdist`) to compute these pairwise distances, similar to how you manually computed them earlier in the exercise.
###Code
import scipy.spatial.distance as dist
mat = df[['X','Y','Z']].to_numpy()
DistEuclid = dist.pdist(mat,'euclidean')
DistManhattan = dist.pdist(mat, 'cityblock')
largeMat = np.random.random((10000,100))
###Output
_____no_output_____
###Markdown
**Plot histograms of each distance matrix for comparison.**
###Code
plt.figure(figsize=[10,8])
plt.hist(DistEuclid, bins=20)
plt.xlabel('Distance', size=14)
plt.ylabel('counts', size=14)
plt.title('Euclidean Distance')
plt.show();
plt.figure(figsize=[10,8])
plt.hist(DistManhattan, bins=20)
plt.xlabel('Distance', size=14)
plt.ylabel('Counts', size=14)
plt.title('Manhattan Distance')
plt.show();
###Output
_____no_output_____ |
test/.ipynb_checkpoints/Rev_EliminacionXBloques_SVD-checkpoint.ipynb | ###Markdown
Revisión de código Eliminación por bloques basada en solver linear que usa SVD (parte 1-Funcion bloques)*ver parte 2 al fondo* **Fecha:** 12 de Abril de 2020 **Responsable de revisión:** Javier Valencia**Código revisado**
###Code
bloques<-function(A,b,corte) {
#Función que genera los bloques de la matriz A y el vector respuesta b
#Args: A- matriz inicial(n*n)
# b- Solución de Ax = b , (nx1)
# corte - tamaño del bloque
#Returns: lista con la matriz A dividida en 4 bloques: A11,A12, A21, A22 y el vector b
# dividido en 2 bloques b1, b2
a11 <- A[c(1:corte),c(1:corte)]
a12 <- A[c(1:corte),c((corte+1):dim(A)[2])]
a21 <- A[c((corte+1):dim(A)[1]),c(1:corte)]
a22 <- A[c((corte+1):dim(A)[1]),c((corte+1):dim(A)[2])]
b1 <- b[1:(corte)]
b2 <- b[((corte)+1):length(b)]
list(A11 = a11,A12 =a12, A21=a21, A22=a22, b1 = b1, b2 = b2)
}
#Prueba
A = matrix(sample(0:50,7*7,replace=TRUE), c(7,7))
b = c(1:dim(A)[1])
A
b
bloques(A,b,4)
###Output
_____no_output_____
###Markdown
**1.Sobre la documentación del código/de la función**¿Se encuentran presentes en la implementación los siguientes elementos? Por favor, ingrese explicaciones detalladas.**a) Descripción concisa y breve de lo que hace el código/la función**Sí, la documentación de la función "bloques" explica lo que hace, en este caso divide una matriz cuadrada en 4 bloques**b) Descripción de sus argumentos de entrada, su significado y rango de valores que pueden tomar**Se sugiere detallar que el valor de corte debe estar en siguiente rango de valores "0<corte<n".Debe de estar entre 1 y n el valor del rango de la matriz**c) Descripción de los tipos de argumentos de entrada y de salida (por ejemplo, valores enteros, reales, strings, dataframe, matrices, etc)**Es completa la documentación, la funcion regresa una lista con las 4 partes de la matriz A y las 2 partes del vector b**d) Descripción de la salida de la función, su significado y valores/objetos que deben regresa**La documentación de salida esta completa, especifica los argumentos de salida de la función y su significado **2. Cumplimiento de objetivos del código/de la función**Por favor, ingrese explicaciones detalladas.**a) ¿El código cumple los objetivos para los que fue diseñado?**Sí, la función cumple su objetivo regresando una lista con las particiones de la matriz A y del vector b**b) ¿La salida de la función regresa una lista con 4 matrices y 2 vectores?**Sí **3. Pruebas**Ocupe la presente sección para hacer diseño de pruebas variando los parámetros que recibe el código la función en diferentes rangos para evaluae su comportamiento y/o detectar posibles fallos**Test 1 cambiando el parametro de corte para valores pares e impares****Objetivo del test:** Revisar si la función realiza su trabajo para cortes de numeros pares o impares.**Implementación del test:**
###Code
#Prueba para valor de parametro de corte
A = matrix(c(0:24), nrow=5,ncol=5)
b = c(1:dim(A)[1])
A
b
n<-1
#Probamos con un valor par corte=2n
corte<-2
cat("Prueba corte con valor par (2n):\ncorte=",corte)
l<-bloques(A,b,corte)
l
#Probamos con un valor impar corte=2n+1
corte<-3
cat("Prueba corte con valor impar (2n+1):\ncorte=",corte)
l<-bloques(A,b,corte)
l
###Output
_____no_output_____
###Markdown
**Test 2 valores que generan errores en la funcion**
###Code
#Probamos con un valor corte mas grande que el rango de la matriz
corte<-5
cat("Prueba corte con valor mas grande que el rango de la matriz(corte>n):\ncorte=",corte)
l<-bloques(A,b,corte)
l
#Probamos con un valor corte mas grande que el rango de la matriz
corte<-0
cat("Prueba corte con valor mas grande que el rango de la matriz(corte>n):\ncorte=",corte)
l<-bloques(A,b,corte)
l
#Probamos con un valor corte mas grande que el rango de la matriz
corte<--1
cat("Prueba corte con valor mas grande que el rango de la matriz(corte>n):\ncorte=",corte)
l<-bloques(A,b,corte)
l
###Output
Prueba corte con valor mas grande que el rango de la matriz(corte>n):
corte= -1
###Markdown
Principales hallazos del test: **Rango de error de parametro corte** * Hallazgo 1: La función no funciona si el valor de corte no esta en el rango 0<corte<n, n siendo el rango de la matriz A* Hallazgo 2: La función funciona para corte=0 **4. Resumen detallado de posibles puntos faltantes en implementación*** Añadir a la documentación de la función bloques el rango permitido para el parametro de corte.* Los parametros matriz A y vector b no causan errores debido a sus valores. Por lo que no se sugiere algun cambio referente a ellos**Sugerencias para resolver los puntos anteriores*** se sugiere poner en la documentación el rango permitido para el parametro de corte. Revisión de código Eliminación por bloques basada en solver linear que usa SVD (parte 2-funcion eliminacion_bloques)**Fecha:** 13 de Abril de 2020 **Responsable de revisión:** Javier Valencia**Código revisado: se revisa la función eliminacion_bloques, pero esta se apoya de las otras funciones en el codigo**
###Code
install.packages("matrixcalc")
library("matrixcalc")
#install.packages("tictoc")
devtools::install_github("collectivemedia/tictoc")
library(tictoc)
###Output
Installing package into ‘/usr/local/lib/R/site-library’
(as ‘lib’ is unspecified)
Downloading GitHub repo collectivemedia/tictoc@master
###Markdown
Código del cual depende nuestra función que ya fue revisado y corregido
###Code
indices <- function(n) {
# Crea una lista de tamaño (n-1)n/2 con pares de índices de la siguiente
# manera: (1,2),..,(1,n),(2,3),..,(2,n),...,(n-1,n)
# Args:
# n: número entero postivo
# se refiere al número de columnas
#Returns:
# lista con pares de índices
a <- NULL
b <- NULL
indices <- NULL
for (i in 1:(n-1)){
a <- append(a,rep(i,n-i))
b <- append(b,seq(i+1,n))
}
for(i in 1:round(n*(n-1)/2))
indices[[i]] <- list(c(a[i], b[i]))
indices
}
ortogonal <- function(u,v,TOL=10^-8){
# Verifica si dos vectores son ortogonales, arrojando un 1 si lo es, y un 0 si no lo es.
# Args:
# u, v como vectores de la misma dimensión.Y un valor real de tolerancia TOL(10^-8).
# Nota: Se sugiere una TOL mayor a 10^-32.
# Returns:
# Valor booleano 0 (no son ortongoales), 1 (son ortogonales)
nu = norm(u,type ="2")
nv = norm(v,type ="2")
if ( nu < TOL | nv < TOL){
ret<-0
} else{
if( (u%*%v) /(nu*nv) < TOL ){
ret<-1
}
else{
ret<-0
}
}
ret
}
signo<-function(x) {
# Indica el signo de un número x
# Args:
# x (numeric): número a revisar
# Returns:
# 1 si el número es positivo o cero
# -1 si el número es negativo
ifelse(x<0,-1,1)
}
solver <- function(U,S,V,b){
# Construye la solución de un sistema de ecuaciones a partir de matrices
# U, S, V, y vector b. Se asume que S es diagonal.
# Para ello resuelve S d = U^Tb, para construir x=Vd.
# Notas:
# 1) Se utilizó la función backsolve para resolver el sistema triangular.
# 2) Al ser S diagonal, es indistinto si usar un solver para matrices traingulares inferiores o superiores.
# Args:
# U (mxm),V(nxn), S(mxn) matriz diagonal y b (m) un vector.
# Returns:
# x vector (m)
d = backsolve(S, t(U)%*%b)
x = V%*%d
return(x)
}
svd_jacobi_aprox <- function(A,TOL,maxsweep){
#Función que devuelve una lista con las matrices U, S, V que aproximan númericamente a A
#por medio del método de One-sided Jacobi
# Args:
# A: matriz a la que se le calculará su SVD
# TOL (numeric): controla la convergencia del método
# maxsweep (numeric): número máximo de sweeps
# Returns:
# lista con matrices U, S, V
#dimensiones
n<-dim(A)[2] #numero de columnas
m<-dim(A)[1] #numero de filas
nmax<-n*(n-1)/2
#inicialza valores del ciclo
ak<-A
vk<-diag(n)
sig <- NULL
uk <- ak
num_col_ortogonal<-0
k<-0
while(k<=maxsweep & num_col_ortogonal<nmax){
num_col_ortogonal<-0
ind <- indices(n)
for(i in 1:nmax){
col_j<-ak[,ind[[i]][[1]][2]]
col_i<-ak[,ind[[i]][[1]][1]]
#comprueba ortogonalidad
if(ortogonal(col_i,col_j,TOL)==1){
num_col_ortogonal<-num_col_ortogonal+1
}
else{
#calcula coeficientes de la matriz
a<-col_i%*%col_i#sum(col_i*col_i) #cambio
b<- col_j%*%col_j #sum(col_j*col_j) #cambio
c<-col_i%*%col_j
#calcula la rotacion givens que diagonaliza
epsilon<-(b-a)/(2*c)
t<-signo(epsilon)/(abs(epsilon)+sqrt(1+epsilon**2))
cs<-1/sqrt(1+t**2)
sn<-cs*t
#actualiza las columnas de la matriz ak
temp<-ak[,ind[[i]][[1]][1]] #cambio
ak[,ind[[i]][[1]][1]]<-c(cs)*temp-c(sn)*ak[,ind[[i]][[1]][2]]#cambio
ak[,ind[[i]][[1]][2]]<-c(sn)*temp+c(cs)*ak[,ind[[i]][[1]][2]]#cambio
# for(l in seq(1,m)){ #cambio
# temp<-ak[l,ind[[i]][[1]][1]] #cambio
# ak[l,ind[[i]][[1]][1]]<-cs*temp-sn*ak[l,ind[[i]][[1]][2]] #cambio
# ak[l,ind[[i]][[1]][2]]<-sn*temp+cs*ak[l,ind[[i]][[1]][2]] #cambio
# }
#actualiza las columnas de la matriz vk
temp<-vk[,ind[[i]][[1]][1]] #cambio
vk[,ind[[i]][[1]][1]]<-c(cs)*temp-c(sn)*vk[,ind[[i]][[1]][2]] #cambio
vk[,ind[[i]][[1]][2]]<-c(sn)*temp+c(cs)*vk[,ind[[i]][[1]][2]] #cambio
# for(l in seq(1,n)){ #cambio
# temp<-vk[l,ind[[i]][[1]][1]] #cambio
# vk[l,ind[[i]][[1]][1]]<-cs*temp-sn*vk[l,ind[[i]][[1]][2]] #cambio
# vk[l,ind[[i]][[1]][2]]<-sn*temp+cs*vk[l,ind[[i]][[1]][2]] #cambio
#
# }
}
}
k<-k+1
}
#Obtener sigma
sig<-apply(ak, 2, function(x){norm(x,"2")}) #cambio
#for(i in 1:n){ #cambio
# sig<- append(sig,norm(ak[,i],type ="2")) #cambio
#} #cambio
#Obtener U
#nrow_ak<-m
#uk<-apply(ak, 2, function(x){x/norm(x,"2")})*(sig<TOL)
for(i in 1:n){
if (sig[i]<TOL){
uk[,i]<-0
} else{
uk[,i] <- ak[,i]/sig[i]
}
}
# Indices de sigma ordenada en forma decreciente para ordenar V,S,U
index <- order(sig,decreasing = TRUE)
vk <- vk[,index]
S <- diag(sig[index])
uk <- uk[,index]
list(S = S, U = uk, V= vk)
}
# n =10**3
# A = matrix(rnorm(n**2), ncol=n)
#
# descom <- svd_jacobi_aprox(A,10**-8,1)
#
# S<-descom$S
# U<-descom$U
# V<-descom$V
#
# A_svd <- U%*%S%*%t(V)
#
# norm(A-A_svd,"2")
sel_solver<-function(A,b,TOL=10**-8,maxsweep=20){
#Función resuelve un sistema de ecuaciones lineales (SEL) utilizando la descomposición SVD
#por medio del método de One-sided Jacobi
#El SEL es de la forma Ax=b
# Args:
# A (float): matriz de incógnitas del SEL
# b (float): vector de igualdada del sistema
# TOL (numeric): controla la convergencia del método
# maxsweep (int): número máximo de sweeps
#Returns: x (float): vector solución
svd<-svd_jacobi_aprox(A,TOL,maxsweep)
x<-solver(svd$U,svd$S,svd$V,b)
x
}
bloques<-function(A,b,corte) {
#Función que genera los bloques de la matriz A y el vector respuesta b
#Args: A- matriz inicial(n*n)
# b- Solución de Ax = b , (nx1)
# corte - tamaño del bloque
#Returns: lista con la matriz A dividida en 4 bloques: A11,A12, A21, A22 y el vector b
# dividido en 2 bloques b1, b2
a11 <- A[c(1:corte),c(1:corte)]
a12 <- A[c(1:corte),c((corte+1):dim(A)[2])]
a21 <- A[c((corte+1):dim(A)[1]),c(1:corte)]
a22 <- A[c((corte+1):dim(A)[1]),c((corte+1):dim(A)[2])]
b1 <- b[1:(corte)]
b2 <- b[((corte)+1):length(b)]
list(A11 = a11,A12 =a12, A21=a21, A22=a22, b1 = b1, b2 = b2)
}
###Output
_____no_output_____
###Markdown
función a revisar: eliminacion_bloques
###Code
eliminacion_bloques <- function(A,b,corte, TOL, maxsweep){
#Función que realiza el método de eliminación por bloques
#Args: A- matriz inicial(n*n)
# b- Solución de Ax = b , (nx1)
# corte - tamaño del bloque
# TOL (numeric): controla la convergencia del método
# maxsweep (numeric): número máximo de sweeps
#Returns: x: vector
bloq = bloques(A,b,corte)
if(is.singular.matrix(A,TOL) ==FALSE & is.singular.matrix(bloq$A11,TOL) == FALSE){
svd<-svd_jacobi_aprox(bloq$A11,TOL,maxsweep) #cambio
print("obtenemos y")
#y = sel_solver(bloq$A11,bloq$b1,TOL, maxsweep) #cambio
y <-solver(svd$U,svd$S,svd$V,bloq$b1) #cambio
#Y= bloq$A12 #cambio
#for(i in 1:dim(bloq$A12)[2]){ #cambio
# Y[,i] = sel_solver(bloq$A11,bloq$A12[,i],TOL, maxsweep) #cambio
#} #cambio
print("obtenemos Y mayuscula") #cambio
#Y = sel_solver(bloq$A11,bloq$A12,TOL, maxsweep) #cambio
Y <-solver(svd$U,svd$S,svd$V,bloq$A12) #cambio
print("obtenemos S")
S = bloq$A22 - bloq$A21%*%Y
print("obtenemos b_hat")
b_hat = bloq$b2-bloq$A21%*%y
print("obtenemos x_2")
x_2 = sel_solver(S,b_hat,TOL, maxsweep)
print("obtenemos b_hat2")
b_hat2 = bloq$b1 -bloq$A12%*%x_2
print("obtenemos x_1")
#x_1 = sel_solver(bloq$A11,b_hat2,TOL, maxsweep) #cambio
x_1 <- solver(svd$U,svd$S,svd$V,b_hat2) #cambio
print("obtenemos x")
x <- c(x_1,x_2)
x
}else{
print("Singularidad de las matrices involucradas; no hay solucion")
}
}
###Output
_____no_output_____
###Markdown
**1.Sobre la documentación del código/de la función**¿Se encuentran presentes en la implementación los siguientes elementos? Por favor, ingrese explicaciones detalladas.**a) Descripción concisa y breve de lo que hace el código/la función**La documentación inicial de la función "eliminacion_bloques" es detallada, describe los argumentos de entrada, de salida y el objetivo de la función.**b) Descripción de sus argumentos de entrada, su significado y rango de valores que pueden tomar**Los argumentos de entrada, y salida estan descritos pero no hay un rango de valores que podrían tomar en los parametros de "corte", "TOL" y "maxsweep"**c) Descripción de los tipos de argumentos de entrada y de salida (por ejemplo, valores enteros, reales, strings, dataframe, matrices, etc)**Los argumentos de salida estan correctamente tipificados, los argumentos de entrada como corte y maxsweep falta especificar que son enteros, para TOL sería un double**d) Descripción de la salida de la función, su significado y valores/objetos que deben regresa**La documentación de salida esta completa, especifica los argumentos de salida de la función y su significado **2. Cumplimiento de objetivos del código/de la función**El codigo cumple su objetivo al resolver un sistema de ecuaciones siempre y cuando los argumentos de entrada cumplan con los supuestos establecidos: A tiene que ser una matriz cuadrada y no singular. Por otro lado, si no se cumplen algunos de estos supuestos el codigo nos indica que el sistema de ecuaciones no tiene solución. El codigo deja de ser funcional para matrices de orden de $10^4$ en adelante, ya que el empo necesario para ejecutarlo se incrementa en gran medida. **a) ¿El código cumple los objetivos para los que fue diseñado?**Sí, la función cumple su objetivo regresa la solucion para un sistema de ecuaciones si este tiene solución y cumple con los supuestos mencionaodos en el punto aterior, de lo contrario nos ñindica que el sistema de ecuaciones no tiene solucion.**b) ¿La salida de la función regresa un vector x?**Sí, si el sistema tiene solucion **3. Pruebas**Ocupe la presente sección para hacer diseño de pruebas variando los parámetros que recibe el código la función en diferentes rangos para evaluae su comportamiento y/o detectar posibles fallos**Test 1 probando diferentes tamaños de Matrices****Objetivo del test:** Revisar si la función realiza su trabajo para matrices de diferentes tamaños y el tiempo que toma.Nota: Para implementar las pruebas usé la libreria (tictoc) para medir el tiempo **Implementación del test:**
###Code
#prueba con Matriz pequeña
print("Prueba con una matriz pequeña A (n*n), donde n=10^2:")
tic("Tiempo de prueba_matriz_chica")
n= 10**2
A = matrix(rnorm(n**2), ncol=n)
b = matrix(rnorm(n), ncol=1)
TOL = 10**-8
maxsweep= 1
corte = n/2
x<-eliminacion_bloques(A,b,corte, TOL, maxsweep)
norm(A%*%x-b,"2")/norm(b,"2")
toc()
###Output
[1] "Prueba con una matriz pequeña A (n*n), donde n=10^2:"
[1] "obtenemos y"
[1] "obtenemos Y mayuscula"
[1] "obtenemos S"
[1] "obtenemos b_hat"
[1] "obtenemos x_2"
[1] "obtenemos b_hat2"
[1] "obtenemos x_1"
[1] "obtenemos x"
###Markdown
**Prueba 2: Matriz de mediano tamaño** Se corrió el codigo en Google Colab con una matriz de $10^{3}X10^{3}$ debido a que el programa utiliza muchos recursos.Se puede reproducir este test haciendo click en la siguiente liga: [google colab](https://colab.research.google.com/drive/1uOUndB7HabictLo1p6Ak1WrU1INrC5Bq) Tiempo de prueba_matriz_mediana: 10 min aprox**Prueba 3: Matriz de tamaño grande**Se intentó correr el codigo desde Google Colab para una matriz de 10^4 pero el tiempo de ejecucion fue tan largo que me desconectó de la sesión que tenía Principales hallazos del test: **Tamaño de las matrices ** * Hallazgo 1: El codigo funciona adecuadamente para matrices chicas n=10^2, es lento pero funcional para matrices medianas n=10^3, y no es funcional para matrices grandes n=10^4 en adelante**Test 2 probando diferentes valores de los argumentos de entrada****Objetivo del test:** Revisar si la función realiza su trabajo variando los parametros de TOL, maxsweep y corte**Implementación del test:**
###Code
#prueba cambiando TOL
print("Prueba con una matriz pequeña A (n*n) y TOL muy chica")
tic("Tiempo de prueba")
n= 10**2
A = matrix(rnorm(n**2), ncol=n)
b = matrix(rnorm(n), ncol=1)
TOL = 10**-322
maxsweep= 1
corte = n/2
x<-eliminacion_bloques(A,b,corte, TOL, maxsweep)
norm(A%*%x-b,"2")/norm(b,"2")
toc()
#prueba cambiando maxsweep=50
print("Prueba con una matriz pequeña A (n*n) y max sweep grande=50")
tic("Tiempo de prueba_maxsweep=50")
n= 10**2
A = matrix(rnorm(n**2), ncol=n)
b = matrix(rnorm(n), ncol=1)
TOL = 10**-8
maxsweep= 50
corte = n/2
x<-eliminacion_bloques(A,b,corte, TOL, maxsweep)
norm(A%*%x-b,"2")/norm(b,"2")
toc()
#prueba cambiando maxsweep=20
print("Prueba con una matriz pequeña A (n*n) y max sweep =20")
tic("Tiempo de prueba_maxsweep=20")
n= 10**2
A = matrix(rnorm(n**2), ncol=n)
b = matrix(rnorm(n), ncol=1)
TOL = 10**-8
maxsweep= 20
corte = n/2
x<-eliminacion_bloques(A,b,corte, TOL, maxsweep)
norm(A%*%x-b,"2")/norm(b,"2")
toc()
#prueba cambiando corte n/4
print("Prueba con una matriz pequeña A (n*n) y corte=n/8")
tic("Tiempo de prueba_corte=n/8")
n= 10**2
A = matrix(rnorm(n**2), ncol=n)
b = matrix(rnorm(n), ncol=1)
TOL = 10**-8
maxsweep= 20
corte = n/4
x<-eliminacion_bloques(A,b,corte, TOL, maxsweep)
norm(A%*%x-b,"2")/norm(b,"2")
toc()
#prueba cambiando corte n/2
print("Prueba con una matriz pequeña A (n*n) y corte=n/8")
tic("Tiempo de prueba_corte=n/8")
n= 10**2
A = matrix(rnorm(n**2), ncol=n)
b = matrix(rnorm(n), ncol=1)
TOL = 10**-8
maxsweep= 20
corte = n/4
x<-eliminacion_bloques(A,b,corte, TOL, maxsweep)
norm(A%*%x-b,"2")/norm(b,"2")
toc()
###Output
[1] "Prueba con una matriz pequeña A (n*n) y corte=n/8"
[1] "obtenemos y"
[1] "obtenemos Y mayuscula"
[1] "obtenemos S"
[1] "obtenemos b_hat"
[1] "obtenemos x_2"
[1] "obtenemos b_hat2"
[1] "obtenemos x_1"
[1] "obtenemos x"
|
explore/marginal-gain/marginal-gain-multiple-genes-pca.ipynb | ###Markdown
How much predictiveness does gene expression add over covariates for multiple mutations (using PCA)? Some notes on the results displayed here:1) Uses multiple genes2) Analyzes training/testing AUROC for 3 data sets -covariates only -expression only -covariates+expression3) PCA performed instead of feature selection.
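The preprocessing below fixes `n_components = 65` by hand; a small hypothetical helper (not in the original notebook) for choosing that count from a target explained-variance fraction could look like:

```python
import numpy as np

def n_components_for(explained_variance_ratio, target=0.4):
    """Smallest number of principal components whose cumulative explained variance >= target."""
    k = int(np.searchsorted(np.cumsum(explained_variance_ratio), target) + 1)
    return min(k, len(explained_variance_ratio))

# e.g. n_components_for(PCA().fit(scaled_expression).explained_variance_ratio_, target=0.4)
```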
###Code
import seaborn as sns
import matplotlib.pyplot as plt
import os
import pandas as pd
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, Imputer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import roc_auc_score
from sklearn.decomposition import PCA
from matplotlib import gridspec
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SelectKBest
from statsmodels.robust.scale import mad
from IPython.core.debugger import Tracer
from IPython.display import display
import warnings
warnings.filterwarnings("ignore") # ignore deprecation warning for grid_scores_
%matplotlib inline
plt.style.use('seaborn-notebook')
path = os.path.join('..', '..', 'download', 'covariates.tsv')
covariates = pd.read_table(path, index_col=0)
# Select acronym_x and n_mutations_log1p covariates only
selected_cols = [col for col in covariates.columns if 'acronym_' in col]
selected_cols.append('n_mutations_log1p')
covariates = covariates[selected_cols]
path = os.path.join('..', '..', 'download', 'expression-matrix.tsv.bz2')
expression = pd.read_table(path, index_col=0)
path = os.path.join('..','..','download', 'mutation-matrix.tsv.bz2')
Y = pd.read_table(path, index_col=0)
# Pre-process expression data for use later
n_components = 65
scaled_expression = StandardScaler().fit_transform(expression)
pca = PCA(n_components).fit(scaled_expression)
explained_variance = pca.explained_variance_
expression_pca = pca.transform(scaled_expression)
expression_pca = pd.DataFrame(expression_pca)
expression_pca = expression_pca.set_index(expression.index.values)
print('fraction of variance explained: ' + str(pca.explained_variance_ratio_.sum()))
# Create combo data set (processed expression + covariates)
combined = pd.concat([covariates,expression_pca],axis=1)
combined.shape
genes_LungCancer = {
'207': 'AKT1',
'238': 'ALK',
'673': 'BRAF',
'4921':'DDR2',
'1956':'EGFR',
'2064':'ERBB2',
'3845':'KRAS',
'5604':'MAP2K1',
'4893':'NRAS',
'5290':'PIK3CA',
'5728':'PTEN',
'5979':'RET',
'6016':'RIT1',
'6098':'ROS1',
}
genes_TumorSuppressors = {
'324': 'APC',
'672': 'BRCA1',
'675': 'BRCA2',
'1029':'CDKN2A',
'1630':'DCC',
'4089':'SMAD4',
'4087':'SMAD2',
'4221':'MEN1',
'4763':'NF1',
'4771':'NF2',
'7157':'TP53',
'5728':'PTEN',
'5925':'RB1',
'7428':'VHL',
'7486':'WRN',
'7490':'WT1',
}
genes_Oncogenes = {
'5155':'PDGFB', #growth factor
'5159':'PDGFRB', #growth factor
'3791':'KDR', #receptor tyrosine kinases
'25':'ABL1', #Cytoplasmic tyrosine kinases
'6714':'SRC', #Cytoplasmic tyrosine kinases
'5894':'RAF1',#cytoplasmic serine kinases
'3265':'HRAS',#regulatory GTPases
'4609':'MYC',#Transcription factors
'2353':'FOS',#Transcription factors
}
mutations = {**genes_LungCancer, **genes_TumorSuppressors, **genes_Oncogenes}
# Define model
param_fixed = {
'loss': 'log',
'penalty': 'elasticnet',
}
param_grid = {
'classify__alpha': [10 ** x for x in range(-6, 1)],
'classify__l1_ratio': [0],
}
pipeline = Pipeline(steps=[
('standardize', StandardScaler()),
('classify', SGDClassifier(random_state=0, class_weight='balanced', loss=param_fixed['loss'],
penalty=param_fixed['penalty']))
])
pipeline = GridSearchCV(estimator=pipeline, param_grid=param_grid, scoring='roc_auc')
# Helper training/evaluation functions.
def train_and_evaluate(data, pipeline):
"""
Train each model using grid search, and output:
1) all_best_estimator_aurocs: contains aurocs for mean_cv, train, and test for chosen grid parameters
2) all_grid_aurocs: contains aurocs for each hyperparameter-fold combo in grid search
"""
all_best_estimator_aurocs = list()
all_grid_aurocs = pd.DataFrame()
for m in list(mutations):
best_estimator_aurocs, grid_aurocs = get_aurocs(data, Y[m], pipeline)
best_estimator_aurocs['symbol'] = mutations[m]
grid_aurocs['symbol'] = mutations[m]
all_grid_aurocs = all_grid_aurocs.append(grid_aurocs, ignore_index = True)
all_best_estimator_aurocs.append(best_estimator_aurocs)
all_best_estimator_aurocs = pd.DataFrame(all_best_estimator_aurocs)
return all_best_estimator_aurocs, all_grid_aurocs
def get_aurocs(X, y, pipeline):
"""
Fit the classifier for the given mutation (y) and output predictions for it
"""
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)
pipeline.fit(X=X_train, y=y_train)
y_pred_train = pipeline.decision_function(X_train)
y_pred_test = pipeline.decision_function(X_test)
grid_aurocs = grid_scores_to_df(pipeline.grid_scores_)
best_estimator_aurocs = pd.Series()
best_estimator_aurocs['mean_cv_auroc'] = grid_aurocs['fold_mean'].max()
best_estimator_aurocs['training_auroc'] = roc_auc_score(y_train, y_pred_train)
best_estimator_aurocs['testing_auroc'] = roc_auc_score(y_test, y_pred_test)
best_estimator_aurocs['best_alpha'] = pipeline.best_params_['classify__alpha']
best_estimator_aurocs['best_l1_ratio'] = pipeline.best_params_['classify__l1_ratio']
best_estimator_aurocs['n_positive_mutation'] = sum(y==1)
best_estimator_aurocs['n_negative_mutation'] = sum(y==0)
return best_estimator_aurocs, grid_aurocs
def grid_scores_to_df(grid_scores):
"""
Convert a sklearn.grid_search.GridSearchCV.grid_scores_ attribute to
a tidy pandas DataFrame where each row is a hyperparameter and each column a fold.
"""
rows = []
for grid_score in grid_scores:
row = np.concatenate(([grid_score.parameters['classify__alpha']],
[grid_score.parameters['classify__l1_ratio']],
grid_score.cv_validation_scores))
rows.append(row)
grid_aurocs = pd.DataFrame(rows,columns=['alpha', 'l1_ratio','fold_1','fold_2','fold_3'])
grid_aurocs['fold_mean'] = grid_aurocs.iloc[:,2:5].mean(axis=1)  # mean over the three fold columns
grid_aurocs['fold_std'] = grid_aurocs.iloc[:,2:5].std(axis=1)  # std over the three fold columns
return grid_aurocs
# Helper visualization functions.
def visualize_grid_aurocs(grid_aurocs, gene_type=None, ax=None):
"""
Visualize grid search results for each mutation-alpha parameter combo.
"""
if ax==None: f, ax = plt.subplots()
grid_aurocs_mat = pd.pivot_table(grid_aurocs, values='fold_mean', index='symbol', columns='alpha')
order = grid_aurocs['symbol'].unique()
grid_aurocs_mat = grid_aurocs_mat.reindex(grid_aurocs['symbol'].unique()) # sort so labels in original order
sns.heatmap(grid_aurocs_mat, annot=True, fmt='.2', ax=ax)
ax.set_ylabel('Symbol')
ax.set_xlabel('Regularization strength multiplier (alpha)')
plt.setp(ax.get_yticklabels(), rotation=0)
if gene_type != None: ax.set_title(gene_type, fontsize=15)
def visualize_best_estimator_aurocs(estimator_aurocs, gene_type=None, ax=None, training_data_type=None):
"""
Creates a bar plot of mean_cv_auroc, training_auroc, and testing_auroc for each gene in df
"""
plot_df = pd.melt(estimator_aurocs, id_vars='symbol', value_vars=['mean_cv_auroc', 'training_auroc',
'testing_auroc'], var_name='kind', value_name='aurocs')
ax = sns.barplot(y='symbol', x='aurocs', hue='kind', data=plot_df,ax=ax)
if training_data_type == 'marginal_gain': ax.set(xlabel='delta aurocs')
else: ax.set(xlabel='aurocs')
ax.legend(bbox_to_anchor=(.65, 1.1), loc=2, borderaxespad=0.)
plt.setp(ax.get_yticklabels(), rotation=0)
if gene_type != None: ax.set_title(gene_type, fontsize=15)
def visualize_categories(data, plot_type, training_data_type):
"""
Wrapper that separates genes into categories and calls visualization subroutines.
"""
f, (ax1, ax2, ax3) = plt.subplots(ncols=3, figsize=(20,10))
for gene_type in ['lung', 'suppressor', 'onco']:
if gene_type == 'lung':
gene_list = genes_LungCancer.values()
ax = ax1
elif gene_type == 'suppressor':
gene_list = genes_TumorSuppressors.values()
ax = ax2
elif gene_type == 'onco':
gene_list = genes_Oncogenes.values()
ax = ax3
plot_df = data[data['symbol'].isin(gene_list)]
plt.yticks(rotation=90)
if plot_type == 'best_estimator': visualize_best_estimator_aurocs(plot_df, gene_type, ax, training_data_type)
elif plot_type == 'grid': visualize_grid_aurocs(plot_df, gene_type, ax)
f.suptitle(training_data_type, fontsize=20)
%%time
# Train with covariates data
all_best_estimator_aurocs_covariates, all_grid_aurocs_covariates = train_and_evaluate(covariates, pipeline)
# Visualize covariates data
visualize_categories(all_best_estimator_aurocs_covariates, 'best_estimator', 'covariates')
visualize_categories(all_grid_aurocs_covariates, 'grid', 'covariates')
%%time
# Train expression data
all_best_estimator_aurocs_expression, all_grid_aurocs_expression = train_and_evaluate(expression_pca, pipeline)
# Visualize expression data
visualize_categories(all_best_estimator_aurocs_expression, 'best_estimator', 'expression')
visualize_categories(all_grid_aurocs_expression, 'grid', 'expression')
%%time
# Train with combined data
all_best_estimator_aurocs_combined, all_grid_aurocs_combined = train_and_evaluate(combined, pipeline)
# Visualize combined data
visualize_categories(all_best_estimator_aurocs_combined, 'best_estimator', 'combined')
visualize_categories(all_grid_aurocs_combined, 'grid', 'combined')
# Display difference in auroc between combined and covariates only
diff_aurocs = all_best_estimator_aurocs_combined.iloc[:,0:3] - all_best_estimator_aurocs_covariates.iloc[:,0:3]
diff_aurocs['symbol'] = all_best_estimator_aurocs_combined.iloc[:,-1]
visualize_categories(diff_aurocs, 'best_estimator', 'marginal gain')
# Summary stats on marginal gain.
def visualize_marginal_gain(data, title, ax):
marginal_gain_df = pd.DataFrame({'auroc_type': ['mean_cv', 'training', 'testing'], 'marginal_gain': data})
sns.barplot(x='auroc_type',y='marginal_gain',data=marginal_gain_df,ax=ax)
ax.set(ylabel='marginal_gain')
ax.set_title(title)
f, (ax1, ax2,ax3) = plt.subplots(ncols=3, figsize=(16,4))
t1, t2, t3 = diff_aurocs['mean_cv_auroc'].sum(), diff_aurocs['training_auroc'].sum(),diff_aurocs['testing_auroc'].sum()
a1,a2,a3 = diff_aurocs['mean_cv_auroc'].mean(), diff_aurocs['training_auroc'].mean(),diff_aurocs['testing_auroc'].mean()
m1,m2,m3 = diff_aurocs['mean_cv_auroc'].median(), diff_aurocs['training_auroc'].median(),diff_aurocs['testing_auroc'].median()
visualize_marginal_gain([t1,t2,t3], 'Total marginal gain', ax1)
visualize_marginal_gain([a1,a2,a3], 'Mean marginal gain',ax2)
visualize_marginal_gain([m1,m2,m3], 'Median marginal gain',ax3)
# Is there a relationship between marginal gain and number of mutations?
def visualize_marginal_vs_n_mutations(auroc_type,color,ax):
x = np.array(all_best_estimator_aurocs_combined['n_positive_mutation']); x = x.reshape(len(x),1)
y = np.array(diff_aurocs[auroc_type]); y = y.reshape(len(y),1)
ax.scatter(x=x, y=y,color=color)
ax.set(xlabel="number of mutations")
ax.set(ylabel="delta auroc (combined - covariates only)")
ax.set_title(auroc_type)
f, (ax1, ax2,ax3) = plt.subplots(ncols=3, figsize=(16,4))
visualize_marginal_vs_n_mutations('mean_cv_auroc','red',ax=ax1)
visualize_marginal_vs_n_mutations('training_auroc','green',ax=ax2)
visualize_marginal_vs_n_mutations('testing_auroc','blue',ax=ax3)
###Output
_____no_output_____ |
python_crash_course/python_cheat_sheet_2.ipynb | ###Markdown
Python cheat sheet - iterations

**Table of contents**
- [Functions](Functions)
- [Classes](Classes)
- [Exception handling](Exception-handling)

Functions

Specify optional parameters at the end, and give default values for optional parameters with the `= value` notation:

```python
def func_name(arg1, arg2=None):
    # operations
    return value
```
###Code
def func_add_numbers(num1, num2=10):
return (num1 + num2)
func_add_numbers(2)
func_add_numbers(2,34)
func_add_numbers()
###Output
_____no_output_____
###Markdown
Classes

Everything is an object in Python, including native types. You define class names with camel casing. You define the constructor with the special name `__init__()`. Private fields are denoted with the `_variable_name` convention and properties are decorated with the `@property` decorator. Fields and properties are accessed within the class using `self.name` notation, which helps differentiate a class field / property from a local variable or method argument of the same name. A simple class:

```python
class MyClass:
    _local_variables = "value"

    def __init__(self, args):
        # constructor statements; assign values to fields
        self._local_variables = args

    def func_1(self, args):
        # statements
        pass
```

You use the class by instantiating an object:

```python
obj1 = MyClass(args_defined_in_constructor)
```
###Code
# Define a class to hold a satellite or aerial imagery file. Its properties give information
# such as location of the ground, area, dimensions, spatial and spectral resolution etc.
class ImageryObject:
_default_gsd = 5.0
def __init__(self, file_path):
self._file_path = file_path
self._gps_location = (3,4)
@property
def bands(self):
#count number of bands
count = 3
return count
@property
def gsd(self):
# logic to calculate the ground sample distance
gsd = 10.0
return gsd
@property
def address(self):
# logic to reverse geocode the self._gps_location to get address
# reverse geocode self._gps_location
address = "123 XYZ Street"
return address
#class methods
def display(self):
#logic to display picture
print("image is displayed")
def shuffle_bands(self):
#logic to shift RGB combination
print("shifting pands")
self.display()
# class instantiation
img1 = ImageryObject("user\img\file.img") #pass value to constructor
img1.address
img1._default_gsd
img1._gps_location
img1.shuffle_bands()
# Get help on any object. Only public methods, properties are displayed.
# fields are private, properties are public. Class variables beginning with _ are private fields.
help(img1)
###Output
Help on ImageryObject in module __main__ object:
class ImageryObject(builtins.object)
| Methods defined here:
|
| __init__(self, file_path)
| Initialize self. See help(type(self)) for accurate signature.
|
| display(self)
| #class methods
|
| shuffle_bands(self)
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| address
|
| bands
|
| gsd
###Markdown
Exception handling

Exceptions are classes. You can define your own by inheriting from the `Exception` class.

```python
try:
    statements
except Exception_type1 as e1:
    handling_statements
except Exception_type2 as e2:
    specific_handling_statements
except Exception as generic_ex:
    generic_handling_statements
else:
    some_more_statements
finally:
    default_statements  # always executed
```
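Since exceptions are classes, defining your own is just subclassing `Exception`; a minimal sketch (the `InvalidImageError` name is made up for illustration):

```python
class InvalidImageError(Exception):
    """Raised when an imagery file cannot be read."""
    pass

try:
    raise InvalidImageError("file is corrupt")
except InvalidImageError as err:
    print("custom exception caught:", err)
```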
###Code
try:
img2 = ImageryObject("user\img\file2.img")
img2.display()
except:
print("something bad happened")
try:
img2 = ImageryObject("user\img\file2.img")
img2.display()
except:
print("something bad happened")
else:
print("else block")
finally:
print("finally block")
try:
img2 = ImageryObject()
img2.display()
except:
print("something bad happened")
else:
print("else block")
finally:
print("finally block")
try:
img2 = ImageryObject()
img2.display()
except Exception as ex:
print("something bad happened")
print("exactly what whent bad? : " + str(ex))
try:
img2 = ImageryObject('path')
img2.dddisplay()
except TypeError as terr:
print("looks like you forgot a parameter")
except Exception as ex:
print("nope, it went worng here: " + str(ex))
###Output
nope, it went wrong here: 'ImageryObject' object has no attribute 'dddisplay'
|
lectures/03_lecture/advertisinglarge.ipynb | ###Markdown
Advertising example

Taken from VMLS book, 12.4
###Code
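# NOTE (assumption): this cell relies on numpy, a timer and a Cholesky helper `llt`
# that are defined earlier in the lecture; minimal stand-ins are sketched here so
# the cell can be run on its own.
import numpy as np
from time import time

def llt(M):
    # lower-triangular Cholesky factor L such that M = L @ L.T
    return np.linalg.cholesky(M)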
m, n = 100000, 5000
A = 0.5 + np.random.rand(m, n) # Force all entries to be between 0.5 and 1.5
v_des = 1000 * np.ones(m)
t_start = time()
Apinv = np.linalg.pinv(A)
x = Apinv.dot(v_des)
print("Pseudoinverse elapsed time: %.5f sec" % (time() - t_start))
def forward_substitution(L, b):
n = L.shape[0]
x = np.zeros(n)
for i in range(n):
x[i] = (b[i] - L[i,:i] @ x[:i])/L[i, i]
return x
def backward_substitution(U, b):
n = U.shape[0]
x = np.zeros(n)
for i in reversed(range(n)):
x[i] = (b[i] - U[i,i+1:] @ x[i+1:])/U[i, i]
return x
t_start = time()
M = A.T.dot(A)
q = A.T.dot(v_des)
L = llt(M)
x = forward_substitution(L, q)
x = backward_substitution(L.T, x)
print("LL' elapsed time: %.5f sec" % (time() - t_start))
v_des = 500*np.ones(m)
t_start = time()
q = A.T.dot(v_des)
x = forward_substitution(L, q)
x = backward_substitution(L.T, x)
print("LL' elapsed time (without factor): %.5f sec" % (time() - t_start))
###Output
LL' elapsed time (without factor): 0.38672 sec
|
Support Vector Machine/support-vector-machines-svm-beginner-to-advance.ipynb | ###Markdown
Support Vector Machines (SVM) : Beginner To Advanced Table of Contents* [Introduction to Support Vector Machines](Introduction-to-Support-Vector-Machines)* [SVM TERMINOLOGIES](SVM-TERMINOLOGIES)* [SVM Objective](SVM-Objective)* [Mathematical formulation of SVM](Mathematical-formulation-of-SVM)* [Dual form of SVM](Dual-form-of-SVM)* [What is Kernel trick?](What-is-Kernel-trick?)* [Support Vector Regression (SVR)](Support-Vector-Regression-(SVR))* [TUNING PARAMETERS OF SVM](TUNING-PARAMETERS-OF-SVM)* [Data Preparation for SVM](Data-Preparation-for-SVM)* [PROS AND CONS ASSOCIATED WITH SVM](PROS-AND-CONS-ASSOCIATED-WITH-SVM)* [Exploratory data analysis ](Exploratory-data-analysis )* [Declare feature vector and target variable](Declare-feature-vector-and-target-variable)* [Split data into separate training and test set](Split-data-into-separate-training-and-test-set)* [Feature Scaling](Feature-Scaling)* [Run SVM with default hyperparameters](Run-SVM-with-default-hyperparameters)* [Run SVM with linear kernel](Run-SVM-with-linear-kernel)* [Run SVM with polynomial kernel ](Run-SVM-with-polynomial-kernel )* [Run SVM with sigmoid kernel ](Run-SVM-with-sigmoid-kernel )* [Confusion matrix ](Confusion-matrix )* [Classification Report](Classification-Report)* [ROC - AUC Curve](ROC-AUC-Curve)* [Results and conclusion](Results-and-conclusion)* [References](References) **I hope you find this kernel useful and your UPVOTES would be highly appreciated** Introduction to Support Vector Machines: Support Vector Machines (SVMs in short) are machine learning algorithms that are used for classification and regression purposes. SVMs are one of the powerful machine learning algorithms for classification, regression and outlier detection purposes. An SVM classifier builds a model that assigns new data points to one of the given categories. Thus, it can be viewed as a non-probabilistic binary linear classifier.The original SVM algorithm was developed by Vladimir N Vapnik and Alexey Ya. Chervonenkis in 1963. At that time, the algorithm was in early stages. The only possibility is to draw hyperplanes for linear classifier. In 1992, Bernhard E. Boser, Isabelle M Guyon and Vladimir N Vapnik suggested a way to create non-linear classifiers by applying the kernel trick to maximum-margin hyperplanes. The current standard was proposed by Corinna Cortes and Vapnik in 1993 and published in 1995.SVMs can be used for linear classification purposes. In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using the kernel trick. It enable us to implicitly map the inputs into high dimensional feature spaces. SVM TERMINOLOGIES HyperplaneA hyperplane is a decision boundary which separates between given set of data points having different class labels. The SVM classifier separates data points using a hyperplane with the maximum amount of margin. This hyperplane is known as the maximum margin hyperplane and the linear classifier it defines is known as the maximum margin classifier. Support VectorsSupport vectors are the sample data points, which are closest to the hyperplane. These data points will define the separating line or hyperplane better by calculating margins. MarginA margin is a separation gap between the two lines on the closest data points. It is calculated as the perpendicular distance from the line to support vectors or closest data points. 
In SVMs, we try to maximize this separation gap so that we get maximum margin.The following diagram illustrates these concepts visually.  SVM ObjectiveIn SVMs, our main objective is to select a hyperplane with the maximum possible margin between support vectors.SVM searches for the maximum margin hyperplane in the following 2 step process –1. Generate hyperplanes which segregates the classes in the best possible way. There are many hyperplanes that might classify the data. We should look for the best hyperplane that represents the largest separation, or margin, between the two classes.2. So, we choose the hyperplane so that distance from it to the support vectors on each side is maximized. If such a hyperplane exists, it is known as the **maximum margin hyperplane** and the linear classifier it defines is known as a **maximum margin classifier.**The following diagram illustrates the concept of maximum margin and maximum margin hyperplane in a clear manner.  Maximal-Margin ClassifierThe Maximal-Margin Classifier is a hypothetical classifier that best explains how SVM works in practice.The numeric input variables (x) in your data (the columns) form an n-dimensional space. For example, if you had two input variables, this would form a two-dimensional space.A hyperplane is a line that splits the input variable space. In SVM, a hyperplane is selected to best separate the points in the input variable space by their class, either class 0 or class 1. In two-dimensions you can visualize this as a line and let’s assume that all of our input points can be completely separated by this line. For example:B0 + (B1 * X1) + (B2 * X2) = 0Where the coefficients (B1 and B2) that determine the slope of the line and the intercept (B0) are found by the learning algorithm, and X1 and X2 are the two input variables.You can make classifications using this line. By plugging in input values into the line equation, you can calculate whether a new point is above or below the line.1. Above the line, the equation returns a value greater than 0 and the point belongs to the first class (class 0).2. Below the line, the equation returns a value less than 0 and the point belongs to the second class (class 1).3. A value close to the line returns a value close to zero and the point may be difficult to classify.4. If the magnitude of the value is large, the model may have more confidence in the prediction.The distance between the line and the closest data points is referred to as the margin. The best or optimal line that can separate the two classes is the line that as the largest margin. This is called the Maximal-Margin hyperplane.The margin is calculated as the perpendicular distance from the line to only the closest points. Only these points are relevant in defining the line and in the construction of the classifier. These points are called the support vectors. They support or define the hyperplane.The hyperplane is learned from training data using an optimization procedure that maximizes the margin. Mathematical formulation of SVM If the training data is linearly separable, we can select two parallel hyperplanes that separate the two classes of data, so that the distance between them is as large as possible. The region bounded by these two hyperplanes is called the "margin", and the maximum-margin hyperplane is the hyperplane that lies halfway between them. 
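As a small illustration of the sign-based decision rule described above, here is a hedged sketch in which the coefficients B0, B1 and B2 are made-up values rather than learned ones:

```python
import numpy as np

# Hypothetical hyperplane: B0 + B1*X1 + B2*X2 = 0 (coefficients are illustrative only)
B0, B1, B2 = -1.0, 2.0, 0.5

points = np.array([[1.0, 0.2],   # (X1, X2) inputs, also made up
                   [0.1, 0.3],
                   [0.5, -0.1]])

scores = B0 + B1 * points[:, 0] + B2 * points[:, 1]  # value of the line equation
classes = np.where(scores > 0, 0, 1)                 # > 0 -> class 0, < 0 -> class 1
print(scores)
print(classes)
```

Scores with a large magnitude correspond to points far from the line, matching the note above that the model can be more confident about those predictions.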
With a normalized or standardized dataset, these hyperplanes can be described by the following equations. Suppose there are $m$ dimensions; the hyperplane can then be written as

$$w^{T}x + b = \sum_{i=1}^{m} w_{i}x_{i} + b = 0$$

where $w = (w_{1}, w_{2}, \ldots, w_{m})$ is the weight vector, $b$ is the bias term (often written $w_{0}$) and $x$ holds the input variables.

Hard-Margin SVM. Assume three hyperplanes $(\pi, \pi_{+}, \pi_{-})$ such that $\pi_{+}$ is parallel to $\pi$ and passes through the support vectors on the positive side, while $\pi_{-}$ is parallel to $\pi$ and passes through the support vectors on the negative side:

$$\pi: w^{T}x + b = 0, \qquad \pi_{+}: w^{T}x + b = +1, \qquad \pi_{-}: w^{T}x + b = -1$$

For the point $X_1$, which lies on $\pi_{+}$, we have $y_{1}(w^{T}x_{1} + b) = 1$: the product of the actual label and the hyperplane expression equals 1, so the point is correctly classified in the positive region. For the point $X_3$, which lies beyond $\pi_{+}$, $y_{3}(w^{T}x_{3} + b) > 1$, so it is also correctly classified in the positive region. For the point $X_4$, which lies on $\pi_{-}$ in the negative region, $y_{4}(w^{T}x_{4} + b) = 1$, so it is correctly classified in the negative region, and for the point $X_6$, which lies beyond $\pi_{-}$, $y_{6}(w^{T}x_{6} + b) > 1$, again correctly classified in the negative region.

Now consider the points that violate the constraints. For point $X_7$, $y_{7}(w^{T}x_{7} + b) < 1$, which violates the constraint, so the point is misclassified; the same reasoning applies to point $X_8$. From these examples we can conclude that for any point $X_i$: if $y_{i}(w^{T}x_{i} + b) \ge 1$ then $X_i$ is correctly classified, otherwise it is incorrectly classified. Hence the hyperplane can only separate the data when the points are linearly separable; if an outlier is introduced, it can no longer separate them. This variant is called hard-margin SVM, since the constraints are strict and force every data point to be classified correctly.

Soft-Margin SVM. Real data are rarely exactly linearly separable, so we need an update that lets the model skip a few outliers and handle almost linearly separable points. For this we introduce a slack variable $\xi_{i}$ (xi) for each point and relax the constraint to

$$y_{i}(w^{T}x_{i} + b) \ge 1 - \xi_{i}, \qquad \xi_{i} \ge 0$$

If $\xi_{i} = 0$ the point is correctly classified with the required margin; if $\xi_{i} > 0$ the point violates the margin or is misclassified, so $\xi_{i}$ can be read as the error term associated with $x_{i}$. The average error is $\frac{1}{n}\sum_{i=1}^{n} \xi_{i}$, and the objective becomes

$$\min_{w,\, b,\, \xi} \; \frac{1}{2}\lVert w \rVert^{2} + C \cdot \frac{1}{n}\sum_{i=1}^{n} \xi_{i} \quad \text{subject to} \quad y_{i}(w^{T}x_{i} + b) \ge 1 - \xi_{i}, \; \xi_{i} \ge 0$$

This formulation is called the soft-margin technique. Note: the goal is to find the vector $w$ and the scalar $b$ such that the hyperplane they define maximizes the margin while minimizing the slack (loss) term, subject to the condition that the points are classified as correctly as possible.
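To see the constraint $y_i(w^T x_i + b) \ge 1$ and the slack $\xi_i$ in action, here is a minimal sketch with made-up weights, bias and labels (none of these numbers come from the notebook's data):

```python
import numpy as np

w = np.array([1.0, -1.0])    # hypothetical weight vector
b = -0.5                     # hypothetical bias
X = np.array([[2.0, 0.0],    # made-up training points
              [1.0, 0.4],
              [0.2, 0.1]])
y = np.array([1, 1, -1])     # labels encoded as +1 / -1

margins = y * (X @ w + b)            # y_i * (w^T x_i + b)
slack = np.maximum(0, 1 - margins)   # xi_i is zero only when the margin constraint holds
print(margins)
print(slack)
```

Points with zero slack satisfy the hard-margin constraint; positive slack is exactly the penalty that the soft-margin objective charges through the C term.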
Dual form of SVM:Now, let’s consider the case when our data set is not at all linearly separable. basically, we can separate each data point by projecting it into the higher dimension by adding relevant features to it. But with SVM there is a powerful way to achieve this task of projecting the data into a higher dimension. The above-discussed formulation was the **Primal form of SVM** . The alternative method is dual form of SVM which uses Lagrange’s multiplier to solve the constraints optimization problem. Note:If αi>0 then Xi is a Support vector and when αi=0 then Xi is not a support vector. Observation:1. To solve the actual problem we do not require the actual data point instead only the dot product between every pair of a vector may suffice.2. To calculate the “b” biased constant we only require dot product.3. The major advantage of dual form of SVM over Lagrange formulation is that it only depends on the α. What is Kernel trick? In practice, SVM algorithm is implemented using a kernel. It uses a technique called the kernel trick. In simple words, a kernel is just a function that maps the data to a higher dimension where data is separable. A kernel transforms a low-dimensional input data space into a higher dimensional space. So, it converts non-linear separable problems to linear separable problems by adding more dimensions to it. Thus, the kernel trick helps us to build a more accurate classifier. Hence, it is useful in non-linear separation problems.We can define a kernel function as follows- Kernel function In the context of SVMs, there are 4 popular kernels:-1. Linear kernel,2. Polynomial kernel,3. Radial Basis Function (RBF) kernel (also called Gaussian kernel) 4. Sigmoid kernel. These are described below - Linear kernel :In linear kernel, the kernel function takes the form of a linear function as follows-linear kernel : K(xi , xj ) = xiT xjLinear kernel is used when the data is linearly separable. It means that data can be separated using a single line. It is one of the most common kernels to be used. It is mostly used when there are large number of features in a dataset. Linear kernel is often used for text classification purposes.Training with a linear kernel is usually faster, because we only need to optimize the C regularization parameter. When training with other kernels, we also need to optimize the γ parameter. So, performing a grid search will usually take more time. Polynomial KernelPolynomial kernel represents the similarity of vectors (training samples) in a feature space over polynomials of the original variables. The polynomial kernel looks not only at the given features of input samples to determine their similarity, but also combinations of the input samples.For degree-d polynomials, the polynomial kernel is defined as follows – Polynomial kernel : Polynomial kernel is very popular in Natural Language Processing. The most common degree is d = 2 (quadratic), since larger degrees tend to overfit on NLP problems. It can be visualized with the following diagram.  Radial Basis Function KernelGaussian RBF(Radial Basis Function) is another popular Kernel method used in SVM models for more. RBF kernel is a function whose value depends on the distance from the origin or from some point. Gaussian Kernel is of the following format:- The following diagram demonstrates the SVM classification with rbf kernel SVM Classification with rbf kernel Sigmoid kernelSigmoid kernel has its origin in neural networks. We can use it as the proxy for neural networks. 
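The kernels listed above can be written out directly as small functions; this is an illustrative sketch (the constants c, d, gamma and alpha are arbitrary example values, and the sigmoid kernel discussed just below is included for completeness):

```python
import numpy as np

def linear_kernel(x, y):
    return x @ y                                    # K(x, y) = x^T y

def polynomial_kernel(x, y, c=1.0, d=2):
    return (x @ y + c) ** d                         # K(x, y) = (x^T y + c)^d

def rbf_kernel(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))    # K(x, y) = exp(-gamma * ||x - y||^2)

def sigmoid_kernel(x, y, alpha=0.1, c=0.0):
    return np.tanh(alpha * (x @ y) + c)             # K(x, y) = tanh(alpha * x^T y + c)

u = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])
print(linear_kernel(u, v), polynomial_kernel(u, v), rbf_kernel(u, v), sigmoid_kernel(u, v))
```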
Sigmoid kernel is given by the following equation –sigmoid kernel : k (x, y) = tanh(αxTy + c)Sigmoid kernel can be visualized with the following diagram- Sigmoid kernel Support Vector Regression (SVR) Support Vector Regression (SVR) uses the same principle as SVM, but for regression problems. Let’s spend a few minutes understanding the idea behind SVR The Idea Behind Support Vector RegressionThe problem of regression is to find a function that approximates mapping from an input domain to real numbers on the basis of a training sample. So let’s now dive deep and understand how SVR works actually.  Consider these two red lines as the decision boundary and the blue line as the hyperplane. Our **objective,** when we are moving on with SVR, is to basically consider the points that are within the decision boundary line. Our best fit line is the hyperplane that has a maximum number of points.The first thing that we’ll understand is what is the decision boundary (the danger red line above!). Consider these lines as being at any distance, say ‘a’, from the hyperplane. So, these are the lines that we draw at distance ‘+a’ and ‘-a’ from the hyperplane. This ‘a’ in the text is basically referred to as epsilon.  Assuming that the equation of the hyperplane is as follows:Y = wx+b (equation of hyperplane)Then the equations of decision boundary become:wx+b= +awx+b= -aThus, any hyperplane that satisfies our SVR should satisfy:-a < Y- wx+b < +a Note:Our main aim here is to decide a decision boundary at ‘a’ distance from the original hyperplane such that data points closest to the hyperplane or the support vectors are within that boundary line.Hence, we are going to take only those points that are within the decision boundary and have the least error rate, or are within the Margin of Tolerance. This gives us a better fitting model. TUNING PARAMETERS OF SVM GammaThe gamma parameter defines how far the influence of a single training example reaches, with low values meaning ‘far’ and high values meaning ‘close’. In other words, with low gamma, points far away from plausible seperation line are considered in calculation for the seperation line. Where as high gamma means the points close to plausible line are considered in calculation.  Regularization Parameter( C )(A regularization parameter that controls the trade off between the achieving a low training error and a low testing error that is the ability to generalize your classifier to unseen data.)The Regularization parameter (often termed as Penalty parameter C) tells the SVM optimization how much you want to avoid misclassifying each training example. For large values of C, the optimization will choose a smaller-margin hyperplane if that hyperplane does a better job of getting all the training points classified correctly.Conversely, a very small value of C will cause the optimizer to look for a larger-margin separating hyperplane, even if that hyperplane misclassifies more points. The images below (same as image 1 and image 2) are example of two different regularization parameter. Top one has some misclassification due to lower regularization value. Higher value leads to results like bottom one. Image 1 Image 2 How should you choose the value of C?There is no rule of thumb to choose a C value, it totally depends on your testing data. The only option I see is trying bunch of different values and choose the value which gives you lowest misclassification rate on testing data. 
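A minimal scikit-learn sketch of the SVR idea described above, where `epsilon` plays the role of the tube half-width 'a' and `C` penalizes points outside the tube (the toy data and parameter values are arbitrary, chosen only for illustration):

```python
import numpy as np
from sklearn.svm import SVR

# Toy 1-D regression data
rng = np.random.RandomState(0)
X = np.linspace(0, 5, 40).reshape(-1, 1)
y = 0.8 * X.ravel() + 0.3 + rng.normal(0, 0.1, 40)

svr = SVR(kernel='rbf', C=10.0, epsilon=0.1)  # epsilon-insensitive tube of half-width 0.1
svr.fit(X, y)
print(svr.predict([[2.5]]))
```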
I would suggest you to use gridsearchCV, in which you can directly give a list of different values parameter and it will tell you which value is best. Margin And finally last but very importrant characteristic of SVM classifier. SVM to core tries to achieve a good margin. A margin is a separation of line to the closest class points.A good margin is one where this separation is larger for both the classes. Images below gives to visual example of good and bad margin. A good margin allows the points to be in their respective classes without crossing to other class.  Data Preparation for SVMHow to best prepare your training data when learning an SVM model.1. Numerical Inputs: SVM assumes that your inputs are numeric. If you have categorical inputs you may need to covert them to binary dummy variables (one variable for each category).2. Binary Classification: Basic SVM as described in this post is intended for binary (two-class) classification problems. Although, extensions have been developed for regression and multi-class classification. PROS AND CONS ASSOCIATED WITH SVM Pros 1. SVM can be used for linearly separable as well as non-linearly separable data. Linearly separable data is the hard margin whereas non-linearly separable data poses a soft margin.2. Feature Mapping used to be quite a load on the computational complexity of the overall training performance of the model. However, with the help of Kernel Trick, SVM can carry out the feature mapping using the simple dot product.3. It uses a subset of training points in the decision function (called support vectors), so it is also memory efficient.4. SVM Classifiers offer good accuracy and perform faster prediction compared to Naïve Bayes algorithm.5. SVMs provide compliance to the semi-supervised learning models. It can be used in areas where the data is labeled as well as unlabeled. It only requires a condition to the minimization problem which is known as the Transductive SVM. Cons: 1. SVM doesn’t give the best performance for handling text structures as compared to other algorithms that are used in handling text data. This leads to loss of sequential information and thereby, leading to worse performance.2. It doesn’t perform well, when we have large data set because the required training time is higher3. It also doesn’t perform very well, when the data set has more noise i.e. target classes are overlapping4. SVM doesn’t directly provide probability estimates, these are calculated using an expensive five-fold cross-validation. 5. The choice of the kernel is perhaps the biggest limitation of the support vector machine. Considering so many kernels present, it becomes difficult to choose the right one for the data.
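Following the GridSearchCV suggestion above, a hedged sketch of searching over C, gamma and the kernel at the same time (the candidate values are examples, not recommendations):

```python
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

param_grid = {
    'C': [0.1, 1, 10, 100],
    'gamma': ['scale', 0.01, 0.1, 1],
    'kernel': ['rbf', 'linear'],
}
grid = GridSearchCV(SVC(), param_grid, cv=5, scoring='accuracy')
# grid.fit(X_train, y_train)                  # using the train split prepared later in this notebook
# print(grid.best_params_, grid.best_score_)
```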
###Code
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt # for data visualization
import seaborn as sns # for statistical data visualization
%matplotlib inline
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# Any results you write to the current directory are saved as output.
###Output
_____no_output_____
###Markdown
Import dataset
###Code
data = '../input/pulsar-stars/pulsar_stars.csv'
df = pd.read_csv(data)
###Output
_____no_output_____
###Markdown
Exploratory data analysis
###Code
df.shape
###Output
_____no_output_____
###Markdown
We can see that there are 17898 instances and 9 variables in the data set.
###Code
# let's preview the dataset
df.head()
###Output
_____no_output_____
###Markdown
We can see that there are 9 variables in the dataset. 8 are continuous variables and 1 is discrete variable. The discrete variable is target_class variable. It is also the target variable.
###Code
# Now, I will view the column names to check for leading and trailing spaces.
# view the column names of the dataframe
col_names = df.columns
col_names
# remove leading spaces from column names
df.columns = df.columns.str.strip()
# view column names again
df.columns
# rename column names because column name is very long
df.columns = ['IP Mean', 'IP Sd', 'IP Kurtosis', 'IP Skewness',
'DM-SNR Mean', 'DM-SNR Sd', 'DM-SNR Kurtosis', 'DM-SNR Skewness', 'target_class']
# view the renamed column names
df.columns
# check distribution of target_class column
df['target_class'].value_counts()
# view the percentage distribution of target_class column
df['target_class'].value_counts()/np.float(len(df))
###Output
_____no_output_____
###Markdown
We can see that percentage of observations of the class label 0 and 1 is 90.84% and 9.16%. So, this is a class imbalanced problem. I will deal with that in later section.
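One simple way to account for this imbalance (a sketch only; the runs below keep the default weighting) is to let the classifier reweight classes inversely to their frequencies:

```python
from sklearn.svm import SVC

# class_weight='balanced' weights each class by n_samples / (n_classes * class_count)
svc_balanced = SVC(kernel='rbf', C=1.0, class_weight='balanced')
# svc_balanced.fit(X_train, y_train)   # using the train split created later in the notebook
```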
###Code
# view summary of dataset
df.info()
# check for missing values in variables
df.isnull().sum()
###Output
_____no_output_____
###Markdown
We can see that there are no missing values in the dataset.
###Code
# Outliers in numerical variables
# view summary statistics in numerical variables
round(df.describe(),2)
###Output
_____no_output_____
###Markdown
On closer inspection, we can suspect that all the continuous variables may contain outliers.I will draw boxplots to visualise outliers in the above variables.
###Code
plt.figure(figsize=(24,20))
plt.subplot(4, 2, 1)
fig = df.boxplot(column='IP Mean')
fig.set_title('')
fig.set_ylabel('IP Mean')
plt.subplot(4, 2, 2)
fig = df.boxplot(column='IP Sd')
fig.set_title('')
fig.set_ylabel('IP Sd')
plt.subplot(4, 2, 3)
fig = df.boxplot(column='IP Kurtosis')
fig.set_title('')
fig.set_ylabel('IP Kurtosis')
plt.subplot(4, 2, 4)
fig = df.boxplot(column='IP Skewness')
fig.set_title('')
fig.set_ylabel('IP Skewness')
plt.subplot(4, 2, 5)
fig = df.boxplot(column='DM-SNR Mean')
fig.set_title('')
fig.set_ylabel('DM-SNR Mean')
plt.subplot(4, 2, 6)
fig = df.boxplot(column='DM-SNR Sd')
fig.set_title('')
fig.set_ylabel('DM-SNR Sd')
plt.subplot(4, 2, 7)
fig = df.boxplot(column='DM-SNR Kurtosis')
fig.set_title('')
fig.set_ylabel('DM-SNR Kurtosis')
plt.subplot(4, 2, 8)
fig = df.boxplot(column='DM-SNR Skewness')
fig.set_title('')
fig.set_ylabel('DM-SNR Skewness')
###Output
_____no_output_____
###Markdown
The above boxplots confirm that there are lot of outliers in these variables. Handle outliers with SVMsThere are 2 variants of SVMs. They are hard-margin variant of SVM and soft-margin variant of SVM.The hard-margin variant of SVM does not deal with outliers. In this case, we want to find the hyperplane with maximum margin such that every training point is correctly classified with margin at least 1. This technique does not handle outliers well.Another version of SVM is called soft-margin variant of SVM. In this case, we can have a few points incorrectly classified or classified with a margin less than 1. But for every such point, we have to pay a penalty in the form of C parameter, which controls the outliers. Low C implies we are allowing more outliers and high C implies less outliers.The message is that since the dataset contains outliers, so the value of C should be high while training the model. Check the distribution of variablesNow, I will plot the histograms to check distributions to find out if they are normal or skewed.
###Code
# plot histogram to check distribution
plt.figure(figsize=(24,20))
plt.subplot(4, 2, 1)
fig = df['IP Mean'].hist(bins=20)
fig.set_xlabel('IP Mean')
fig.set_ylabel('Number of pulsar stars')
plt.subplot(4, 2, 2)
fig = df['IP Sd'].hist(bins=20)
fig.set_xlabel('IP Sd')
fig.set_ylabel('Number of pulsar stars')
plt.subplot(4, 2, 3)
fig = df['IP Kurtosis'].hist(bins=20)
fig.set_xlabel('IP Kurtosis')
fig.set_ylabel('Number of pulsar stars')
plt.subplot(4, 2, 4)
fig = df['IP Skewness'].hist(bins=20)
fig.set_xlabel('IP Skewness')
fig.set_ylabel('Number of pulsar stars')
plt.subplot(4, 2, 5)
fig = df['DM-SNR Mean'].hist(bins=20)
fig.set_xlabel('DM-SNR Mean')
fig.set_ylabel('Number of pulsar stars')
plt.subplot(4, 2, 6)
fig = df['DM-SNR Sd'].hist(bins=20)
fig.set_xlabel('DM-SNR Sd')
fig.set_ylabel('Number of pulsar stars')
plt.subplot(4, 2, 7)
fig = df['DM-SNR Kurtosis'].hist(bins=20)
fig.set_xlabel('DM-SNR Kurtosis')
fig.set_ylabel('Number of pulsar stars')
plt.subplot(4, 2, 8)
fig = df['DM-SNR Skewness'].hist(bins=20)
fig.set_xlabel('DM-SNR Skewness')
fig.set_ylabel('Number of pulsar stars')
###Output
_____no_output_____
###Markdown
Declare feature vector and target variable
###Code
# Declare feature vector and target variable
X = df.drop(['target_class'], axis=1)
y = df['target_class']
###Output
_____no_output_____
###Markdown
Split data into separate training and test set
###Code
# split X and y into training and testing sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
# check the shape of X_train and X_test
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Feature Scaling
###Code
# Feature Scaling
cols = X_train.columns
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
X_train = pd.DataFrame(X_train, columns=[cols])
X_test = pd.DataFrame(X_test, columns=[cols])
X_train.describe()
###Output
_____no_output_____
###Markdown
Run SVM with default hyperparameters Default hyperparameter means C=1.0, kernel=rbf and gamma=auto among other parameters.
###Code
# import SVC classifier
from sklearn.svm import SVC
# import metrics to compute accuracy
from sklearn.metrics import accuracy_score
# instantiate classifier with default hyperparameters
svc=SVC()
# fit classifier to training set
svc.fit(X_train,y_train)
# make predictions on test set
y_pred=svc.predict(X_test)
# compute and print accuracy score
print('Model accuracy score with default hyperparameters: {0:0.4f}'. format(accuracy_score(y_test, y_pred)))
###Output
Model accuracy score with default hyperparameters: 0.9827
###Markdown
Run SVM with rbf kernel and C=100.0We have seen that there are outliers in our dataset. So, we should increase the value of C as higher C means fewer outliers. So, I will run SVM with kernel=rbf and C=100.0.
###Code
# instantiate classifier with rbf kernel and C=100
svc=SVC(C=100.0)
# fit classifier to training set
svc.fit(X_train,y_train)
# make predictions on test set
y_pred=svc.predict(X_test)
# compute and print accuracy score
print('Model accuracy score with rbf kernel and C=100.0 : {0:0.4f}'. format(accuracy_score(y_test, y_pred)))
###Output
Model accuracy score with rbf kernel and C=100.0 : 0.9832
###Markdown
We can see that we obtain a higher accuracy with C=100.0 as higher C means less outliers.Now, I will further increase the value of C=1000.0 and check accuracy.
###Code
# instantiate classifier with rbf kernel and C=1000
svc=SVC(C=1000.0)
# fit classifier to training set
svc.fit(X_train,y_train)
# make predictions on test set
y_pred=svc.predict(X_test)
# compute and print accuracy score
print('Model accuracy score with rbf kernel and C=1000.0 : {0:0.4f}'. format(accuracy_score(y_test, y_pred)))
###Output
Model accuracy score with rbf kernel and C=1000.0 : 0.9816
###Markdown
In this case, we can see that the accuracy had decreased with C=1000.0 Run SVM with linear kernel Run SVM with linear kernel and C=1.0
###Code
# instantiate classifier with linear kernel and C=1.0
linear_svc=SVC(kernel='linear', C=1.0)
# fit classifier to training set
linear_svc.fit(X_train,y_train)
# make predictions on test set
y_pred_test=linear_svc.predict(X_test)
# compute and print accuracy score
print('Model accuracy score with linear kernel and C=1.0 : {0:0.4f}'. format(accuracy_score(y_test, y_pred_test)))
# Run SVM with linear kernel and C=100.0
# instantiate classifier with linear kernel and C=100.0
linear_svc100=SVC(kernel='linear', C=100.0)
# fit classifier to training set
linear_svc100.fit(X_train, y_train)
# make predictions on test set
y_pred=linear_svc100.predict(X_test)
# compute and print accuracy score
print('Model accuracy score with linear kernel and C=100.0 : {0:0.4f}'. format(accuracy_score(y_test, y_pred)))
# Run SVM with linear kernel and C=1000.0
# instantiate classifier with linear kernel and C=1000.0
linear_svc1000=SVC(kernel='linear', C=1000.0)
# fit classifier to training set
linear_svc1000.fit(X_train, y_train)
# make predictions on test set
y_pred=linear_svc1000.predict(X_test)
# compute and print accuracy score
print('Model accuracy score with linear kernel and C=1000.0 : {0:0.4f}'. format(accuracy_score(y_test, y_pred)))
###Output
Model accuracy score with linear kernel and C=1000.0 : 0.9832
###Markdown
We can see that we can obtain higher accuracy with C=100.0 and C=1000.0 as compared to C=1.0.Here, y_test are the true class labels and y_pred are the predicted class labels in the test-set. Compare the train-set and test-set accuracyNow, I will compare the train-set and test-set accuracy to check for overfitting.
###Code
y_pred_train = linear_svc.predict(X_train)
y_pred_train
print('Training-set accuracy score: {0:0.4f}'. format(accuracy_score(y_train, y_pred_train)))
# print the scores on training and test set
print('Training set score: {:.4f}'.format(linear_svc.score(X_train, y_train)))
print('Test set score: {:.4f}'.format(linear_svc.score(X_test, y_test)))
###Output
Training set score: 0.9783
Test set score: 0.9830
###Markdown
The training-set accuracy score is 0.9783 while the test-set accuracy to be 0.9830. These two values are quite comparable. So, there is no question of overfitting. Compare model accuracy with null accuracySo, the model accuracy is 0.9832. But, we cannot say that our model is very good based on the above accuracy. We must compare it with the null accuracy. Null accuracy is the accuracy that could be achieved by always predicting the most frequent class.So, we should first check the class distribution in the test set.
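Null accuracy can also be computed generically, without hard-coding the class counts; a small sketch using the `y_test` series defined earlier:

```python
# accuracy of always predicting the most frequent class in the test set
null_accuracy = y_test.value_counts().max() / len(y_test)
print('Null accuracy score: {0:0.4f}'.format(null_accuracy))
```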
###Code
# check class distribution in test set
y_test.value_counts()
###Output
_____no_output_____
###Markdown
We can see that the occurences of most frequent class 0 is 3306. So, we can calculate null accuracy by dividing 3306 by total number of occurences.
###Code
# check null accuracy score
null_accuracy = (3306/(3306+274))
print('Null accuracy score: {0:0.4f}'. format(null_accuracy))
###Output
Null accuracy score: 0.9235
###Markdown
We can see that our model accuracy score is 0.9830 but null accuracy score is 0.9235. So, we can conclude that our SVM classifier is doing a very good job in predicting the class labels. Run SVM with polynomial kernel
###Code
# instantiate classifier with polynomial kernel and C=1.0
poly_svc=SVC(kernel='poly', C=1.0)
# fit classifier to training set
poly_svc.fit(X_train,y_train)
# make predictions on test set
y_pred=poly_svc.predict(X_test)
# compute and print accuracy score
print('Model accuracy score with polynomial kernel and C=1.0 : {0:0.4f}'. format(accuracy_score(y_test, y_pred)))
# instantiate classifier with polynomial kernel and C=100.0
poly_svc100=SVC(kernel='poly', C=100.0)
# fit classifier to training set
poly_svc100.fit(X_train, y_train)
# make predictions on test set
y_pred=poly_svc100.predict(X_test)
# compute and print accuracy score
print('Model accuracy score with polynomial kernel and C=100.0 : {0:0.4f}'. format(accuracy_score(y_test, y_pred)))
###Output
Model accuracy score with polynomial kernel and C=1.0 : 0.9824
###Markdown
Run SVM with sigmoid kernel
###Code
# instantiate classifier with sigmoid kernel and C=1.0
sigmoid_svc=SVC(kernel='sigmoid', C=1.0)
# fit classifier to training set
sigmoid_svc.fit(X_train,y_train)
# make predictions on test set
y_pred=sigmoid_svc.predict(X_test)
# compute and print accuracy score
print('Model accuracy score with sigmoid kernel and C=1.0 : {0:0.4f}'. format(accuracy_score(y_test, y_pred)))
# instantiate classifier with sigmoid kernel and C=100.0
sigmoid_svc100=SVC(kernel='sigmoid', C=100.0)
# fit classifier to training set
sigmoid_svc100.fit(X_train,y_train)
# make predictions on test set
y_pred=sigmoid_svc100.predict(X_test)
# compute and print accuracy score
print('Model accuracy score with sigmoid kernel and C=100.0 : {0:0.4f}'. format(accuracy_score(y_test, y_pred)))
###Output
Model accuracy score with sigmoid kernel and C=100.0 : 0.8855
###Markdown
Note: We get maximum accuracy with rbf and linear kernel with C=100.0. and the accuracy is 0.9832. Based on the above analysis we can conclude that our classification model accuracy is very good. Our model is doing a very good job in terms of predicting the class labels.But, this is not true. Here, we have an imbalanced dataset. The problem is that accuracy is an inadequate measure for quantifying predictive performance in the imbalanced dataset problem.So, we must explore alternative metrices that provide better guidance in selecting models. In particular, we would like to know the underlying distribution of values and the type of errors our classifer is making.One such metric to analyze the model performance in imbalanced classes problem is Confusion matrix. Confusion matrix A confusion matrix is a tool for summarizing the performance of a classification algorithm. A confusion matrix will give us a clear picture of classification model performance and the types of errors produced by the model. It gives us a summary of correct and incorrect predictions broken down by each category. The summary is represented in a tabular form.
###Code
# Print the Confusion Matrix and slice it into four pieces
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred_test)
print('Confusion matrix\n\n', cm)
print('\nTrue Positives(TP) = ', cm[0,0])
print('\nTrue Negatives(TN) = ', cm[1,1])
print('\nFalse Positives(FP) = ', cm[0,1])
print('\nFalse Negatives(FN) = ', cm[1,0])
###Output
Confusion matrix
[[3289 17]
[ 44 230]]
True Positives(TP) = 3289
True Negatives(TN) = 230
False Positives(FP) = 17
False Negatives(FN) = 44
###Markdown
The confusion matrix shows 3289 + 230 = 3519 correct predictions and 17 + 44 = 61 incorrect predictions.
###Code
# visualize confusion matrix with seaborn heatmap
cm_matrix = pd.DataFrame(data=cm, columns=['Actual Positive:1', 'Actual Negative:0'],
index=['Predict Positive:1', 'Predict Negative:0'])
sns.heatmap(cm_matrix, annot=True, fmt='d', cmap='YlGnBu')
###Output
_____no_output_____
###Markdown
Classification ReportClassification report is another way to evaluate the classification model performance. It displays the precision, recall, f1 and support scores for the model. I have described these terms in later.We can print a classification report as follows:-
###Code
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred_test))
# Classification accuracy
TP = cm[0,0]
TN = cm[1,1]
FP = cm[0,1]
FN = cm[1,0]
# print classification accuracy
classification_accuracy = (TP + TN) / float(TP + TN + FP + FN)
print('Classification accuracy : {0:0.4f}'.format(classification_accuracy))
# print classification error
classification_error = (FP + FN) / float(TP + TN + FP + FN)
print('Classification error : {0:0.4f}'.format(classification_error))
# print precision score
precision = TP / float(TP + FP)
print('Precision : {0:0.4f}'.format(precision))
# Recall
recall = TP / float(TP + FN)
print('Recall or Sensitivity : {0:0.4f}'.format(recall))
###Output
Recall or Sensitivity : 0.9868
###Markdown
ROC - AUC Curve ROC CurveAnother tool to measure the classification model performance visually is ROC Curve. ROC Curve stands for Receiver Operating Characteristic Curve. An ROC Curve is a plot which shows the performance of a classification model at various classification threshold levels.
###Code
# plot ROC Curve
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_test, y_pred_test)
plt.figure(figsize=(6,4))
plt.plot(fpr, tpr, linewidth=2)
plt.plot([0,1], [0,1], 'k--' )
plt.rcParams['font.size'] = 12
plt.title('ROC curve for Predicting a Pulsar Star classifier')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.show()
# compute ROC AUC
from sklearn.metrics import roc_auc_score
ROC_AUC = roc_auc_score(y_test, y_pred_test)
print('ROC AUC : {:.4f}'.format(ROC_AUC))
###Output
ROC AUC : 0.9171
###Markdown
ROC AUC is a single number summary of classifier performance. The higher the value, the better the classifier.ROC AUC of our model approaches towards 1. So, we can conclude that our classifier does a good job in classifying the pulsar star.
###Code
# calculate cross-validated ROC AUC
from sklearn.model_selection import cross_val_score
Cross_validated_ROC_AUC = cross_val_score(linear_svc, X_train, y_train, cv=10, scoring='roc_auc').mean()
print('Cross validated ROC AUC : {:.4f}'.format(Cross_validated_ROC_AUC))
###Output
Cross validated ROC AUC : 0.9756
|
AccessAPIsamples/nyse_list_extract.ipynb | ###Markdown
NYSE Companies List Extract Adapted from https://datahub.io/core/nyse-other-listings
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Datafile: https://datahub.io/core/nyse-other-listings/r/nyse-listed.csv
###Code
url = "https://pkgstore.datahub.io/core/nyse-other-listings/nyse-listed_csv/data/3c88fab8ec158c3cd55145243fe5fcdf/nyse-listed_csv.csv"
companies_list = pd.read_csv(url)
companies_list
###Output
_____no_output_____
###Markdown
API for access to data: https://datahub.io/core/nyse-other-listings (Python: `pip3 install datapackage`)
###Code
from datapackage import Package
package = Package('https://datahub.io/core/nyse-other-listings/datapackage.json')
###Output
_____no_output_____
###Markdown
Print list of all resources:
###Code
print(package.resource_names)
###Output
['validation_report', 'nyse-listed_csv', 'other-listed_csv', 'nyse-listed_json', 'other-listed_json', 'nyse-other-listings_zip', 'nyse-listed_csv_preview', 'other-listed_csv_preview', 'nyse-listed', 'other-listed']
###Markdown
Print processed tabular data (if any exists):

```python
for resource in package.resources:
    if resource.descriptor['datahub']['type'] == 'derived/csv':
        print(resource.read())
```
###Code
package.get_resource('nyse-listed_csv').read()
###Output
_____no_output_____ |
site/public/courses/DS-2.4/Notebooks/Advanced_Keras/Gradient_Keras_TF.ipynb | ###Markdown
Good resource: http://laid.delanover.com/gradients-in-tensorflow/
###Code
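# NOTE (assumption): this cell uses the Keras backend `K`, a `square` op and an input
# array `a` from an earlier cell that is not shown here; minimal stand-ins are sketched
# below so the snippet is self-contained.
import numpy as np
from keras import backend as K
square = K.square          # elementwise square, standing in for the original helper
a = np.array([3.0, 2.0])   # example input for f([a]) below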
i = K.placeholder(shape=(2,), name="input")
s = square(i)
grads = K.gradients(s, i)[0]
print(grads)
# iterate = K.function([input_img], [loss, grads])
f = K.function([i], [grads])
print(f([a]))
from keras import backend as K
m = K.placeholder(shape=(2,), name="input")
square_f = K.square(m)
grad = K.gradients([square_f], [m])[0]
f = K.function([m], [square_f, grad])
ival = np.array([3.0, 2.0])
print(f([ival]))
import scipy.optimize as so
def f(x):
return (x[0]*x[1]-1)**2+1, [(x[0]*x[1]-1)*x[1], (x[0]*x[1]-1)*x[0]]
g = np.array([0.1,0.1])
b=[(-10,10),(-10,10)]
so.fmin_tnc(f,g,bounds=b)
import scipy.optimize as so
def f(x): # The rosenbrock function
return .5*(1 - x[0])**2 + (x[1] - x[0]**2)**2
# Derivative of f
def fprime(x):
return np.array((-2*.5*(1 - x[0]) - 4*x[0]*(x[1] - x[0]**2), 2*(x[1] - x[0]**2)))
so.fmin_l_bfgs_b(f, [2, 2], fprime=fprime)
###Output
_____no_output_____
###Markdown
Tensorflow 2.0
###Code
import tensorflow as tf
import numpy as np  # np.array is used below in this cell
x = tf.placeholder(shape=(2,), dtype = tf.float64)
def f(x):
return x*x
def grad(x):
with tf.GradientTape() as t:
t.watch(x)
out = f(x)
return t.gradient(out, x)
a = np.array([3.0, 2.0])
# tf 2.0
# x = tf.convert_to_tensor(a)
# grad(x).numpy()
# tf 1.x
sess = tf.Session()
print(sess.run(grad(x), feed_dict={x: a}))
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Use the tape to compute the derivative of z with respect to the
# intermediate value y.
dz_dy = t.gradient(z, y)
sess = tf.Session()
print(sess.run(dz_dy))
# assert dz_dy.numpy() == 8.0
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Use the tape to compute the derivative of z with respect to the
# intermediate value y.
dz_dx = t.gradient(z, x)
sess = tf.Session()
print(sess.run(dz_dx))
###Output
[[8. 8.]
[8. 8.]]
###Markdown
This is what I like :)
###Code
import numpy as np
import tensorflow as tf
x = tf.placeholder(dtype=tf.float32, shape=(2,))
def f(x): # The rosenbrock function
return .5*(1 - x[0])**2 + (x[1] - x[0]**2)**2
# return .5*(1 - tf.gather(x, 0))**2 + (tf.gather(x, 1) - tf.gather(x, 0)**2)**2
# Derivative of f
def fprime(x):
with tf.GradientTape() as t:
t.watch(x)
out = f(x)
return t.gradient(out, x)
a = np.array([2.0, 2.0])
grad = fprime(x)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for _ in range(2000):
grad_evaluated = sess.run(grad, feed_dict={x: a})
a = a - 0.1*grad_evaluated
print(a)
# optimizer.apply_gradients([(grads, init_image)])
import numpy as np
import tensorflow as tf
# import tensorflow.contrib.eager as tfe
# tfe.enable_eager_execution()
# x = tf.constant([2.0, 3.0])
x = tf.Variable([2.0, 3.0], dtype=tf.float32)
def f(x): # The rosenbrock function
return .5*(1 - x[0])**2 + (x[1] - x[0]**2)**2
# return .5*(1 - tf.gather(x, 0))**2 + (tf.gather(x, 1) - tf.gather(x, 0)**2)**2
# Derivative of f
def fprime(x):
with tf.GradientTape() as t:
t.watch(x)
out = f(x)
return t.gradient(out, x)
a = np.array([2.0, 2.0])
# sess = tf.Session()
# print(sess.run(fprime(x), feed_dict={x: a}))
# optimizer = tf.train.AdamOptimizer()
optimizer = tf.train.GradientDescentOptimizer(0.5)
for i in range(10):
# print("Iteration: {}".format(i))
# grads = sess.run(fprime(x), feed_dict={x: a})
grads = fprime(x)
updates = optimizer.apply_gradients([(grads, x)])
print(updates)
import numpy as np
import tensorflow as tf
# import tensorflow.contrib.eager as tfe
# tfe.enable_eager_execution()
# x = tf.constant([2.0, 3.0])
x = tf.Variable([2.0, 3.0], dtype=tf.float32)
def f(x): # The rosenbrock function
return .5*(1 - x[0])**2 + (x[1] - x[0]**2)**2
# return .5*(1 - tf.gather(x, 0))**2 + (tf.gather(x, 1) - tf.gather(x, 0)**2)**2
# Derivative of f
def fprime(x):
with tf.GradientTape() as t:
t.watch(x)
out = f(x)
return t.gradient(out, x)
a = np.array([2.0, 2.0])
# print(sess.run(fprime(x), feed_dict={x: a}))
# optimizer = tf.train.AdamOptimizer()
optimizer = tf.train.GradientDescentOptimizer(0.5)
grads = fprime(x)
trainables = tf.trainable_variables()
updates = optimizer.apply_gradients(zip(grads, trainables))
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(10):
print(sess.run(updates, feed_dict={x: a}))
import numpy as np
import tensorflow as tf
# import tensorflow.contrib.eager as tfe
# tfe.enable_eager_execution()
x = tf.Variable([2.0, 2.0], dtype=tf.float32)
# def f(x): # The rosenbrock function
# return .5*(1 - x[0])**2 + (x[1] - x[0]**2)**2
# return .5*(1 - tf.gather(x, 0))**2 + (tf.gather(x, 1) - tf.gather(x, 0)**2)**2
# Derivative of f
# def fprime(x):
# with tf.GradientTape() as t:
# t.watch(x)
# out = f(x)
# return t.gradient(out, x)
# a = np.array([2.0, 2.0])
f = .5*(1 - x[0])**2 + (x[1] - x[0]**2)**2
optimizer = tf.train.GradientDescentOptimizer(0.1).minimize(f)
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# Start training
with tf.Session() as sess:
# Run the initializer
sess.run(init)
for _ in range(100):
sess.run([optimizer])
print(sess.run(x))
import numpy as np
import tensorflow as tf
# import tensorflow.contrib.eager as tfe
# tfe.enable_eager_execution()
x = tf.Variable([2.0, 2.0], dtype=tf.float32)
# def f(x): # The rosenbrock function
# return .5*(1 - x[0])**2 + (x[1] - x[0]**2)**2
# # Derivative of f
# def fprime(x):
# with tf.GradientTape() as t:
# t.watch(x)
# out = f(x)
# return t.gradient(out, x)
f = .5*(1 - x[0])**2 + (x[1] - x[0]**2)**2
optimizer = tf.train.GradientDescentOptimizer(0.1)
grads = optimizer.compute_gradients(f, var_list=tf.trainable_variables())
train_step = optimizer.apply_gradients(grads)
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# Start training
with tf.Session() as sess:
# Run the initializer
sess.run(init)
for _ in range(100):
sess.run([train_step])
print(sess.run(x))
import tensorflow as tf
x = tf.Variable(10.0, trainable=True)
f_x = 2 * x* x - 5 *x + 4
loss = f_x
opt = tf.train.GradientDescentOptimizer(0.1).minimize(f_x)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(100):
print(sess.run([x,loss]))
sess.run(opt)
###Output
[10.0, 154.0]
[6.5, 56.0]
[4.3999996, 20.719995]
[3.1399999, 8.019199]
[2.3839998, 3.4469109]
[1.9303999, 1.8008881]
[1.65824, 1.2083197]
[1.494944, 0.9949951]
[1.3969663, 0.9181981]
[1.3381798, 0.89055157]
[1.302908, 0.88059855]
[1.2817447, 0.8770151]
[1.2690468, 0.8757255]
[1.2614281, 0.87526155]
[1.2568569, 0.87509394]
[1.2541142, 0.87503386]
[1.2524685, 0.87501216]
[1.251481, 0.8750043]
[1.2508886, 0.87500143]
[1.2505331, 0.8750005]
[1.2503198, 0.875]
[1.2501919, 0.87500024]
[1.2501152, 0.87499976]
[1.2500691, 0.875]
[1.2500415, 0.875]
[1.2500249, 0.87500024]
[1.2500149, 0.87500024]
[1.250009, 0.875]
[1.2500054, 0.87500024]
[1.2500032, 0.875]
[1.2500019, 0.875]
[1.2500012, 0.87500024]
[1.2500007, 0.87499976]
[1.2500005, 0.875]
[1.2500002, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
[1.2500001, 0.87500024]
|
MOOCS/Cs224n_2019/a1/exploring_word_vectors.ipynb | ###Markdown
CS224N Assignment 1: Exploring Word Vectors (25 Points)Welcome to CS224n! Before you start, make sure you read the README.txt in the same directory as this notebook.
###Code
# All Import Statements Defined Here
# Note: Do not add to this list.
# All the dependencies you need, can be installed by running .
# ----------------
import sys
assert sys.version_info[0]==3
assert sys.version_info[1] >= 5
from gensim.models import KeyedVectors
from gensim.test.utils import datapath
import pprint
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [10, 5]
import nltk
nltk.download('reuters')
from nltk.corpus import reuters
import numpy as np
import random
import scipy as sp
from sklearn.decomposition import TruncatedSVD
from sklearn.decomposition import PCA
START_TOKEN = '<START>'
END_TOKEN = '<END>'
np.random.seed(0)
random.seed(0)
# ----------------
###Output
[nltk_data] Downloading package reuters to
[nltk_data] C:\Users\aladd\AppData\Roaming\nltk_data...
[nltk_data] Package reuters is already up-to-date!
###Markdown
Please Write Your SUNet ID Here: Word VectorsWord Vectors are often used as a fundamental component for downstream NLP tasks, e.g. question answering, text generation, translation, etc., so it is important to build some intuitions as to their strengths and weaknesses. Here, you will explore two types of word vectors: those derived from *co-occurrence matrices*, and those derived via *word2vec*. **Assignment Notes:** Please make sure to save the notebook as you go along. Submission Instructions are located at the bottom of the notebook.**Note on Terminology:** The terms "word vectors" and "word embeddings" are often used interchangeably. The term "embedding" refers to the fact that we are encoding aspects of a word's meaning in a lower dimensional space. As [Wikipedia](https://en.wikipedia.org/wiki/Word_embedding) states, "*conceptually it involves a mathematical embedding from a space with one dimension per word to a continuous vector space with a much lower dimension*". Part 1: Count-Based Word Vectors (10 points)Most word vector models start from the following idea:*You shall know a word by the company it keeps ([Firth, J. R. 1957:11](https://en.wikipedia.org/wiki/John_Rupert_Firth))*Many word vector implementations are driven by the idea that similar words, i.e., (near) synonyms, will be used in similar contexts. As a result, similar words will often be spoken or written along with a shared subset of words, i.e., contexts. By examining these contexts, we can try to develop embeddings for our words. With this intuition in mind, many "old school" approaches to constructing word vectors relied on word counts. Here we elaborate upon one of those strategies, *co-occurrence matrices* (for more information, see [here](http://web.stanford.edu/class/cs124/lec/vectorsemantics.video.pdf) or [here](https://medium.com/data-science-group-iitr/word-embedding-2d05d270b285)). Co-OccurrenceA co-occurrence matrix counts how often things co-occur in some environment. Given some word $w_i$ occurring in the document, we consider the *context window* surrounding $w_i$. Supposing our fixed window size is $n$, then this is the $n$ preceding and $n$ subsequent words in that document, i.e. words $w_{i-n} \dots w_{i-1}$ and $w_{i+1} \dots w_{i+n}$. We build a *co-occurrence matrix* $M$, which is a symmetric word-by-word matrix in which $M_{ij}$ is the number of times $w_j$ appears inside $w_i$'s window.**Example: Co-Occurrence with Fixed Window of n=1**:Document 1: "all that glitters is not gold"Document 2: "all is well that ends well"| * | START | all | that | glitters | is | not | gold | well | ends | END ||----------|-------|-----|------|----------|------|------|-------|------|------|-----|| START | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 || all | 2 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 || that | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 || glitters | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 || is | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 || not | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 || gold | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 || well | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 || ends | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 || END | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 |**Note:** In NLP, we often add START and END tokens to represent the beginning and end of sentences, paragraphs or documents. 
In this case we imagine START and END tokens encapsulating each document, e.g., "START All that glitters is not gold END", and include these tokens in our co-occurrence counts.The rows (or columns) of this matrix provide one type of word vectors (those based on word-word co-occurrence), but the vectors will be large in general (linear in the number of distinct words in a corpus). Thus, our next step is to run *dimensionality reduction*. In particular, we will run *SVD (Singular Value Decomposition)*, which is a kind of generalized *PCA (Principal Components Analysis)* to select the top $k$ principal components. Here's a visualization of dimensionality reduction with SVD. In this picture our co-occurrence matrix is $A$ with $n$ rows corresponding to $n$ words. We obtain a full matrix decomposition, with the singular values ordered in the diagonal $S$ matrix, and our new, shorter length-$k$ word vectors in $U_k$.This reduced-dimensionality co-occurrence representation preserves semantic relationships between words, e.g. *doctor* and *hospital* will be closer than *doctor* and *dog*. **Notes:** If you can barely remember what an eigenvalue is, here's [a slow, friendly introduction to SVD](https://davetang.org/file/Singular_Value_Decomposition_Tutorial.pdf). If you want to learn more thoroughly about PCA or SVD, feel free to check out lectures [7](https://web.stanford.edu/class/cs168/l/l7.pdf), [8](http://theory.stanford.edu/~tim/s15/l/l8.pdf), and [9](https://web.stanford.edu/class/cs168/l/l9.pdf) of CS168. These course notes provide a great high-level treatment of these general purpose algorithms. Though, for the purpose of this class, you only need to know how to extract the k-dimensional embeddings by utilizing pre-programmed implementations of these algorithms from the numpy, scipy, or sklearn python packages. In practice, it is challenging to apply full SVD to large corpora because of the memory needed to perform PCA or SVD. However, if you only want the top $k$ vector components for relatively small $k$ — known as *[Truncated SVD](https://en.wikipedia.org/wiki/Singular_value_decompositionTruncated_SVD)* — then there are reasonably scalable techniques to compute those iteratively. Plotting Co-Occurrence Word EmbeddingsHere, we will be using the Reuters (business and financial news) corpus. If you haven't run the import cell at the top of this page, please run it now (click it and press SHIFT-RETURN). The corpus consists of 10,788 news documents totaling 1.3 million words. These documents span 90 categories and are split into train and test. For more details, please see https://www.nltk.org/book/ch02.html. We provide a `read_corpus` function below that pulls out only articles from the "crude" (i.e. news articles about oil, gas, etc.) category. The function also adds START and END tokens to each of the documents, and lowercases words. You do **not** have to perform any other kind of pre-processing.
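As a side illustration (separate from the assignment code below), here is a minimal NumPy sketch of the reduction described above on a made-up matrix, keeping only the top-$k$ components $U_k S_k$; the assignment itself uses scikit-learn's `TruncatedSVD` rather than a full SVD:

```python
import numpy as np

A = np.array([[3., 1., 1.],
              [1., 3., 1.],
              [1., 1., 3.]])        # stand-in for a (tiny) co-occurrence matrix

U, S, Vt = np.linalg.svd(A)         # full SVD: A = U @ np.diag(S) @ Vt
k = 2
A_reduced = U[:, :k] * S[:k]        # k-dimensional "word vectors" (U_k * S_k)
print(A_reduced.shape)              # (3, 2)
```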
###Code
def read_corpus(category="crude"):
""" Read files from the specified Reuter's category.
Params:
category (string): category name
Return:
list of lists, with words from each of the processed files
"""
files = reuters.fileids(category)
return [[START_TOKEN] + [w.lower() for w in list(reuters.words(f))] + [END_TOKEN] for f in files]
###Output
_____no_output_____
###Markdown
Let's have a look at what these documents are like.
###Code
reuters_corpus = read_corpus()
pprint.pprint(reuters_corpus[:3], compact=True, width=100)
###Output
[['<START>', 'japan', 'to', 'revise', 'long', '-', 'term', 'energy', 'demand', 'downwards', 'the',
'ministry', 'of', 'international', 'trade', 'and', 'industry', '(', 'miti', ')', 'will', 'revise',
'its', 'long', '-', 'term', 'energy', 'supply', '/', 'demand', 'outlook', 'by', 'august', 'to',
'meet', 'a', 'forecast', 'downtrend', 'in', 'japanese', 'energy', 'demand', ',', 'ministry',
'officials', 'said', '.', 'miti', 'is', 'expected', 'to', 'lower', 'the', 'projection', 'for',
'primary', 'energy', 'supplies', 'in', 'the', 'year', '2000', 'to', '550', 'mln', 'kilolitres',
'(', 'kl', ')', 'from', '600', 'mln', ',', 'they', 'said', '.', 'the', 'decision', 'follows',
'the', 'emergence', 'of', 'structural', 'changes', 'in', 'japanese', 'industry', 'following',
'the', 'rise', 'in', 'the', 'value', 'of', 'the', 'yen', 'and', 'a', 'decline', 'in', 'domestic',
'electric', 'power', 'demand', '.', 'miti', 'is', 'planning', 'to', 'work', 'out', 'a', 'revised',
'energy', 'supply', '/', 'demand', 'outlook', 'through', 'deliberations', 'of', 'committee',
'meetings', 'of', 'the', 'agency', 'of', 'natural', 'resources', 'and', 'energy', ',', 'the',
'officials', 'said', '.', 'they', 'said', 'miti', 'will', 'also', 'review', 'the', 'breakdown',
'of', 'energy', 'supply', 'sources', ',', 'including', 'oil', ',', 'nuclear', ',', 'coal', 'and',
'natural', 'gas', '.', 'nuclear', 'energy', 'provided', 'the', 'bulk', 'of', 'japan', "'", 's',
'electric', 'power', 'in', 'the', 'fiscal', 'year', 'ended', 'march', '31', ',', 'supplying',
'an', 'estimated', '27', 'pct', 'on', 'a', 'kilowatt', '/', 'hour', 'basis', ',', 'followed',
'by', 'oil', '(', '23', 'pct', ')', 'and', 'liquefied', 'natural', 'gas', '(', '21', 'pct', '),',
'they', 'noted', '.', '<END>'],
['<START>', 'energy', '/', 'u', '.', 's', '.', 'petrochemical', 'industry', 'cheap', 'oil',
'feedstocks', ',', 'the', 'weakened', 'u', '.', 's', '.', 'dollar', 'and', 'a', 'plant',
'utilization', 'rate', 'approaching', '90', 'pct', 'will', 'propel', 'the', 'streamlined', 'u',
'.', 's', '.', 'petrochemical', 'industry', 'to', 'record', 'profits', 'this', 'year', ',',
'with', 'growth', 'expected', 'through', 'at', 'least', '1990', ',', 'major', 'company',
'executives', 'predicted', '.', 'this', 'bullish', 'outlook', 'for', 'chemical', 'manufacturing',
'and', 'an', 'industrywide', 'move', 'to', 'shed', 'unrelated', 'businesses', 'has', 'prompted',
'gaf', 'corp', '&', 'lt', ';', 'gaf', '>,', 'privately', '-', 'held', 'cain', 'chemical', 'inc',
',', 'and', 'other', 'firms', 'to', 'aggressively', 'seek', 'acquisitions', 'of', 'petrochemical',
'plants', '.', 'oil', 'companies', 'such', 'as', 'ashland', 'oil', 'inc', '&', 'lt', ';', 'ash',
'>,', 'the', 'kentucky', '-', 'based', 'oil', 'refiner', 'and', 'marketer', ',', 'are', 'also',
'shopping', 'for', 'money', '-', 'making', 'petrochemical', 'businesses', 'to', 'buy', '.', '"',
'i', 'see', 'us', 'poised', 'at', 'the', 'threshold', 'of', 'a', 'golden', 'period', ',"', 'said',
'paul', 'oreffice', ',', 'chairman', 'of', 'giant', 'dow', 'chemical', 'co', '&', 'lt', ';',
'dow', '>,', 'adding', ',', '"', 'there', "'", 's', 'no', 'major', 'plant', 'capacity', 'being',
'added', 'around', 'the', 'world', 'now', '.', 'the', 'whole', 'game', 'is', 'bringing', 'out',
'new', 'products', 'and', 'improving', 'the', 'old', 'ones', '."', 'analysts', 'say', 'the',
'chemical', 'industry', "'", 's', 'biggest', 'customers', ',', 'automobile', 'manufacturers',
'and', 'home', 'builders', 'that', 'use', 'a', 'lot', 'of', 'paints', 'and', 'plastics', ',',
'are', 'expected', 'to', 'buy', 'quantities', 'this', 'year', '.', 'u', '.', 's', '.',
'petrochemical', 'plants', 'are', 'currently', 'operating', 'at', 'about', '90', 'pct',
'capacity', ',', 'reflecting', 'tighter', 'supply', 'that', 'could', 'hike', 'product', 'prices',
'by', '30', 'to', '40', 'pct', 'this', 'year', ',', 'said', 'john', 'dosher', ',', 'managing',
'director', 'of', 'pace', 'consultants', 'inc', 'of', 'houston', '.', 'demand', 'for', 'some',
'products', 'such', 'as', 'styrene', 'could', 'push', 'profit', 'margins', 'up', 'by', 'as',
'much', 'as', '300', 'pct', ',', 'he', 'said', '.', 'oreffice', ',', 'speaking', 'at', 'a',
'meeting', 'of', 'chemical', 'engineers', 'in', 'houston', ',', 'said', 'dow', 'would', 'easily',
'top', 'the', '741', 'mln', 'dlrs', 'it', 'earned', 'last', 'year', 'and', 'predicted', 'it',
'would', 'have', 'the', 'best', 'year', 'in', 'its', 'history', '.', 'in', '1985', ',', 'when',
'oil', 'prices', 'were', 'still', 'above', '25', 'dlrs', 'a', 'barrel', 'and', 'chemical',
'exports', 'were', 'adversely', 'affected', 'by', 'the', 'strong', 'u', '.', 's', '.', 'dollar',
',', 'dow', 'had', 'profits', 'of', '58', 'mln', 'dlrs', '.', '"', 'i', 'believe', 'the',
'entire', 'chemical', 'industry', 'is', 'headed', 'for', 'a', 'record', 'year', 'or', 'close',
'to', 'it', ',"', 'oreffice', 'said', '.', 'gaf', 'chairman', 'samuel', 'heyman', 'estimated',
'that', 'the', 'u', '.', 's', '.', 'chemical', 'industry', 'would', 'report', 'a', '20', 'pct',
'gain', 'in', 'profits', 'during', '1987', '.', 'last', 'year', ',', 'the', 'domestic',
'industry', 'earned', 'a', 'total', 'of', '13', 'billion', 'dlrs', ',', 'a', '54', 'pct', 'leap',
'from', '1985', '.', 'the', 'turn', 'in', 'the', 'fortunes', 'of', 'the', 'once', '-', 'sickly',
'chemical', 'industry', 'has', 'been', 'brought', 'about', 'by', 'a', 'combination', 'of', 'luck',
'and', 'planning', ',', 'said', 'pace', "'", 's', 'john', 'dosher', '.', 'dosher', 'said', 'last',
'year', "'", 's', 'fall', 'in', 'oil', 'prices', 'made', 'feedstocks', 'dramatically', 'cheaper',
'and', 'at', 'the', 'same', 'time', 'the', 'american', 'dollar', 'was', 'weakening', 'against',
'foreign', 'currencies', '.', 'that', 'helped', 'boost', 'u', '.', 's', '.', 'chemical',
'exports', '.', 'also', 'helping', 'to', 'bring', 'supply', 'and', 'demand', 'into', 'balance',
'has', 'been', 'the', 'gradual', 'market', 'absorption', 'of', 'the', 'extra', 'chemical',
'manufacturing', 'capacity', 'created', 'by', 'middle', 'eastern', 'oil', 'producers', 'in',
'the', 'early', '1980s', '.', 'finally', ',', 'virtually', 'all', 'major', 'u', '.', 's', '.',
'chemical', 'manufacturers', 'have', 'embarked', 'on', 'an', 'extensive', 'corporate',
'restructuring', 'program', 'to', 'mothball', 'inefficient', 'plants', ',', 'trim', 'the',
'payroll', 'and', 'eliminate', 'unrelated', 'businesses', '.', 'the', 'restructuring', 'touched',
'off', 'a', 'flurry', 'of', 'friendly', 'and', 'hostile', 'takeover', 'attempts', '.', 'gaf', ',',
'which', 'made', 'an', 'unsuccessful', 'attempt', 'in', '1985', 'to', 'acquire', 'union',
'carbide', 'corp', '&', 'lt', ';', 'uk', '>,', 'recently', 'offered', 'three', 'billion', 'dlrs',
'for', 'borg', 'warner', 'corp', '&', 'lt', ';', 'bor', '>,', 'a', 'chicago', 'manufacturer',
'of', 'plastics', 'and', 'chemicals', '.', 'another', 'industry', 'powerhouse', ',', 'w', '.',
'r', '.', 'grace', '&', 'lt', ';', 'gra', '>', 'has', 'divested', 'its', 'retailing', ',',
'restaurant', 'and', 'fertilizer', 'businesses', 'to', 'raise', 'cash', 'for', 'chemical',
'acquisitions', '.', 'but', 'some', 'experts', 'worry', 'that', 'the', 'chemical', 'industry',
'may', 'be', 'headed', 'for', 'trouble', 'if', 'companies', 'continue', 'turning', 'their',
'back', 'on', 'the', 'manufacturing', 'of', 'staple', 'petrochemical', 'commodities', ',', 'such',
'as', 'ethylene', ',', 'in', 'favor', 'of', 'more', 'profitable', 'specialty', 'chemicals',
'that', 'are', 'custom', '-', 'designed', 'for', 'a', 'small', 'group', 'of', 'buyers', '.', '"',
'companies', 'like', 'dupont', '&', 'lt', ';', 'dd', '>', 'and', 'monsanto', 'co', '&', 'lt', ';',
'mtc', '>', 'spent', 'the', 'past', 'two', 'or', 'three', 'years', 'trying', 'to', 'get', 'out',
'of', 'the', 'commodity', 'chemical', 'business', 'in', 'reaction', 'to', 'how', 'badly', 'the',
'market', 'had', 'deteriorated', ',"', 'dosher', 'said', '.', '"', 'but', 'i', 'think', 'they',
'will', 'eventually', 'kill', 'the', 'margins', 'on', 'the', 'profitable', 'chemicals', 'in',
'the', 'niche', 'market', '."', 'some', 'top', 'chemical', 'executives', 'share', 'the',
'concern', '.', '"', 'the', 'challenge', 'for', 'our', 'industry', 'is', 'to', 'keep', 'from',
'getting', 'carried', 'away', 'and', 'repeating', 'past', 'mistakes', ',"', 'gaf', "'", 's',
'heyman', 'cautioned', '.', '"', 'the', 'shift', 'from', 'commodity', 'chemicals', 'may', 'be',
'ill', '-', 'advised', '.', 'specialty', 'businesses', 'do', 'not', 'stay', 'special', 'long',
'."', 'houston', '-', 'based', 'cain', 'chemical', ',', 'created', 'this', 'month', 'by', 'the',
'sterling', 'investment', 'banking', 'group', ',', 'believes', 'it', 'can', 'generate', '700',
'mln', 'dlrs', 'in', 'annual', 'sales', 'by', 'bucking', 'the', 'industry', 'trend', '.',
'chairman', 'gordon', 'cain', ',', 'who', 'previously', 'led', 'a', 'leveraged', 'buyout', 'of',
'dupont', "'", 's', 'conoco', 'inc', "'", 's', 'chemical', 'business', ',', 'has', 'spent', '1',
'.', '1', 'billion', 'dlrs', 'since', 'january', 'to', 'buy', 'seven', 'petrochemical', 'plants',
'along', 'the', 'texas', 'gulf', 'coast', '.', 'the', 'plants', 'produce', 'only', 'basic',
'commodity', 'petrochemicals', 'that', 'are', 'the', 'building', 'blocks', 'of', 'specialty',
'products', '.', '"', 'this', 'kind', 'of', 'commodity', 'chemical', 'business', 'will', 'never',
'be', 'a', 'glamorous', ',', 'high', '-', 'margin', 'business', ',"', 'cain', 'said', ',',
'adding', 'that', 'demand', 'is', 'expected', 'to', 'grow', 'by', 'about', 'three', 'pct',
'annually', '.', 'garo', 'armen', ',', 'an', 'analyst', 'with', 'dean', 'witter', 'reynolds', ',',
'said', 'chemical', 'makers', 'have', 'also', 'benefitted', 'by', 'increasing', 'demand', 'for',
'plastics', 'as', 'prices', 'become', 'more', 'competitive', 'with', 'aluminum', ',', 'wood',
'and', 'steel', 'products', '.', 'armen', 'estimated', 'the', 'upturn', 'in', 'the', 'chemical',
'business', 'could', 'last', 'as', 'long', 'as', 'four', 'or', 'five', 'years', ',', 'provided',
'the', 'u', '.', 's', '.', 'economy', 'continues', 'its', 'modest', 'rate', 'of', 'growth', '.',
'<END>'],
['<START>', 'turkey', 'calls', 'for', 'dialogue', 'to', 'solve', 'dispute', 'turkey', 'said',
'today', 'its', 'disputes', 'with', 'greece', ',', 'including', 'rights', 'on', 'the',
'continental', 'shelf', 'in', 'the', 'aegean', 'sea', ',', 'should', 'be', 'solved', 'through',
'negotiations', '.', 'a', 'foreign', 'ministry', 'statement', 'said', 'the', 'latest', 'crisis',
'between', 'the', 'two', 'nato', 'members', 'stemmed', 'from', 'the', 'continental', 'shelf',
'dispute', 'and', 'an', 'agreement', 'on', 'this', 'issue', 'would', 'effect', 'the', 'security',
',', 'economy', 'and', 'other', 'rights', 'of', 'both', 'countries', '.', '"', 'as', 'the',
'issue', 'is', 'basicly', 'political', ',', 'a', 'solution', 'can', 'only', 'be', 'found', 'by',
'bilateral', 'negotiations', ',"', 'the', 'statement', 'said', '.', 'greece', 'has', 'repeatedly',
'said', 'the', 'issue', 'was', 'legal', 'and', 'could', 'be', 'solved', 'at', 'the',
'international', 'court', 'of', 'justice', '.', 'the', 'two', 'countries', 'approached', 'armed',
'confrontation', 'last', 'month', 'after', 'greece', 'announced', 'it', 'planned', 'oil',
'exploration', 'work', 'in', 'the', 'aegean', 'and', 'turkey', 'said', 'it', 'would', 'also',
'search', 'for', 'oil', '.', 'a', 'face', '-', 'off', 'was', 'averted', 'when', 'turkey',
'confined', 'its', 'research', 'to', 'territorrial', 'waters', '.', '"', 'the', 'latest',
'crises', 'created', 'an', 'historic', 'opportunity', 'to', 'solve', 'the', 'disputes', 'between',
'the', 'two', 'countries', ',"', 'the', 'foreign', 'ministry', 'statement', 'said', '.', 'turkey',
"'", 's', 'ambassador', 'in', 'athens', ',', 'nazmi', 'akiman', ',', 'was', 'due', 'to', 'meet',
'prime', 'minister', 'andreas', 'papandreou', 'today', 'for', 'the', 'greek', 'reply', 'to', 'a',
'message', 'sent', 'last', 'week', 'by', 'turkish', 'prime', 'minister', 'turgut', 'ozal', '.',
'the', 'contents', 'of', 'the', 'message', 'were', 'not', 'disclosed', '.', '<END>']]
###Markdown
Question 1.1: Implement `distinct_words` [code] (2 points)Write a method to work out the distinct words (word types) that occur in the corpus. You can do this with `for` loops, but it's more efficient to do it with Python list comprehensions. In particular, [this](https://coderwall.com/p/rcmaea/flatten-a-list-of-lists-in-one-line-in-python) may be useful to flatten a list of lists. If you're not familiar with Python list comprehensions in general, here's [more information](https://python-3-patterns-idioms-test.readthedocs.io/en/latest/Comprehensions.html).You may find it useful to use [Python sets](https://www.w3schools.com/python/python_sets.asp) to remove duplicate words.
###Code
def distinct_words(corpus):
""" Determine a list of distinct words for the corpus.
Params:
corpus (list of list of strings): corpus of documents
Return:
corpus_words (list of strings): list of distinct words across the corpus, sorted (using python 'sorted' function)
num_corpus_words (integer): number of distinct words across the corpus
"""
# ------------------
# Write your implementation here.
corpus_words = sorted(list(set([word for sentence in corpus for word in sentence])))
num_corpus_words = len(corpus_words)
# ------------------
return sorted(corpus_words), num_corpus_words
# ---------------------
# Run this sanity check
# Note that this not an exhaustive check for correctness.
# ---------------------
# Define toy corpus
test_corpus = ["START All that glitters isn't gold END".split(" "), "START All's well that ends well END".split(" ")]
test_corpus_words, num_corpus_words = distinct_words(test_corpus)
# Correct answers
ans_test_corpus_words = sorted(list(set(["START", "All", "ends", "that", "gold", "All's", "glitters", "isn't", "well", "END"])))
ans_num_corpus_words = len(ans_test_corpus_words)
# Test correct number of words
assert(num_corpus_words == ans_num_corpus_words), "Incorrect number of distinct words. Correct: {}. Yours: {}".format(ans_num_corpus_words, num_corpus_words)
# Test correct words
assert (test_corpus_words == ans_test_corpus_words), "Incorrect corpus_words.\nCorrect: {}\nYours: {}".format(str(ans_test_corpus_words), str(test_corpus_words))
# Print Success
print ("-" * 80)
print("Passed All Tests!")
print ("-" * 80)
###Output
--------------------------------------------------------------------------------
Passed All Tests!
--------------------------------------------------------------------------------
###Markdown
Question 1.2: Implement `compute_co_occurrence_matrix` [code] (3 points)Write a method that constructs a co-occurrence matrix for a certain window-size $n$ (with a default of 4), considering words $n$ before and $n$ after the word in the center of the window. Here, we start to use `numpy (np)` to represent vectors, matrices, and tensors. If you're not familiar with NumPy, there's a NumPy tutorial in the second half of this cs231n [Python NumPy tutorial](http://cs231n.github.io/python-numpy-tutorial/).
###Code
def compute_co_occurrence_matrix(corpus, window_size=4):
""" Compute co-occurrence matrix for the given corpus and window_size (default of 4).
Note: Each word in a document should be at the center of a window. Words near edges will have a smaller
number of co-occurring words.
For example, if we take the document "START All that glitters is not gold END" with window size of 4,
"All" will co-occur with "START", "that", "glitters", "is", and "not".
Params:
corpus (list of list of strings): corpus of documents
window_size (int): size of context window
Return:
M (numpy matrix of shape (number of corpus words, number of corpus words)):
Co-occurence matrix of word counts.
The ordering of the words in the rows/columns should be the same as the ordering of the words given by the distinct_words function.
word2Ind (dict): dictionary that maps word to index (i.e. row/column number) for matrix M.
"""
words, num_words = distinct_words(corpus)
M = None
# ------------------
# Write your implementation here.
word2Ind = {word:idx for idx, word in enumerate(words)}
M = np.zeros((num_words, num_words))
for sentence in corpus:
for position, word in enumerate(sentence):
lower, upper = max(0, position - window_size), min(position + window_size, len(sentence) - 1)
nearby_words = sentence[lower : upper + 1]
nearby_words.remove(word)
for nearby_word in nearby_words:
M[word2Ind[word], word2Ind[nearby_word]] += 1
return M, word2Ind
# ---------------------
# Run this sanity check
# Note that this is not an exhaustive check for correctness.
# ---------------------
# Define toy corpus and get student's co-occurrence matrix
test_corpus = ["START All that glitters isn't gold END".split(" "), "START All's well that ends well END".split(" ")]
M_test, word2Ind_test = compute_co_occurrence_matrix(test_corpus, window_size=1)
# Correct M and word2Ind
M_test_ans = np.array(
[[0., 0., 0., 1., 0., 0., 0., 0., 1., 0.,],
[0., 0., 0., 1., 0., 0., 0., 0., 0., 1.,],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 1.,],
[1., 1., 0., 0., 0., 0., 0., 0., 0., 0.,],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 1.,],
[0., 0., 0., 0., 0., 0., 0., 1., 1., 0.,],
[0., 0., 1., 0., 0., 0., 0., 1., 0., 0.,],
[0., 0., 0., 0., 0., 1., 1., 0., 0., 0.,],
[1., 0., 0., 0., 1., 1., 0., 0., 0., 1.,],
[0., 1., 1., 0., 1., 0., 0., 0., 1., 0.,]]
)
word2Ind_ans = {'All': 0, "All's": 1, 'END': 2, 'START': 3, 'ends': 4, 'glitters': 5, 'gold': 6, "isn't": 7, 'that': 8, 'well': 9}
# Test correct word2Ind
assert (word2Ind_ans == word2Ind_test), "Your word2Ind is incorrect:\nCorrect: {}\nYours: {}".format(word2Ind_ans, word2Ind_test)
# Test correct M shape
assert (M_test.shape == M_test_ans.shape), "M matrix has incorrect shape.\nCorrect: {}\nYours: {}".format(M_test.shape, M_test_ans.shape)
# Test correct M values
for w1 in word2Ind_ans.keys():
idx1 = word2Ind_ans[w1]
for w2 in word2Ind_ans.keys():
idx2 = word2Ind_ans[w2]
student = M_test[idx1, idx2]
correct = M_test_ans[idx1, idx2]
if student != correct:
print("Correct M:")
print(M_test_ans)
print("Your M: ")
print(M_test)
raise AssertionError("Incorrect count at index ({}, {})=({}, {}) in matrix M. Yours has {} but should have {}.".format(idx1, idx2, w1, w2, student, correct))
# Print Success
print ("-" * 80)
print("Passed All Tests!")
print ("-" * 80)
###Output
--------------------------------------------------------------------------------
Passed All Tests!
--------------------------------------------------------------------------------
###Markdown
Question 1.3: Implement `reduce_to_k_dim` [code] (1 point)Construct a method that performs dimensionality reduction on the matrix to produce k-dimensional embeddings. Use SVD to take the top k components and produce a new matrix of k-dimensional embeddings. **Note:** All of numpy, scipy, and scikit-learn (`sklearn`) provide *some* implementation of SVD, but only scipy and sklearn provide an implementation of Truncated SVD, and only sklearn provides an efficient randomized algorithm for calculating large-scale Truncated SVD. So please use [sklearn.decomposition.TruncatedSVD](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html).
###Code
def reduce_to_k_dim(M, k=2):
""" Reduce a co-occurence count matrix of dimensionality (num_corpus_words, num_corpus_words)
to a matrix of dimensionality (num_corpus_words, k) using the following SVD function from Scikit-Learn:
- http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html
Params:
M (numpy matrix of shape (number of corpus words, number of corpus words)): co-occurence matrix of word counts
k (int): embedding size of each word after dimension reduction
Return:
M_reduced (numpy matrix of shape (number of corpus words, k)): matrix of k-dimensioal word embeddings.
In terms of the SVD from math class, this actually returns U * S
"""
n_iters = 10 # Use this parameter in your call to `TruncatedSVD`
M_reduced = None
print("Running Truncated SVD over %i words..." % (M.shape[0]))
# ------------------
# Write your implementation here.
svd = TruncatedSVD(n_components=k, algorithm='randomized', n_iter=n_iters, random_state=52, tol=0.0)
M_reduced = svd.fit_transform(M)
# ------------------
print("Done.")
return M_reduced
# ---------------------
# Run this sanity check
# Note that this not an exhaustive check for correctness
# In fact we only check that your M_reduced has the right dimensions.
# ---------------------
# Define toy corpus and run student code
test_corpus = ["START All that glitters isn't gold END".split(" "), "START All's well that ends well END".split(" ")]
M_test, word2Ind_test = compute_co_occurrence_matrix(test_corpus, window_size=1)
M_test_reduced = reduce_to_k_dim(M_test, k=2)
# Test proper dimensions
assert (M_test_reduced.shape[0] == 10), "M_reduced has {} rows; should have {}".format(M_test_reduced.shape[0], 10)
assert (M_test_reduced.shape[1] == 2), "M_reduced has {} columns; should have {}".format(M_test_reduced.shape[1], 2)
# Print Success
print ("-" * 80)
print("Passed All Tests!")
print ("-" * 80)
###Output
Running Truncated SVD over 10 words...
Done.
--------------------------------------------------------------------------------
Passed All Tests!
--------------------------------------------------------------------------------
###Markdown
Question 1.4: Implement `plot_embeddings` [code] (1 point)Here you will write a function to plot a set of 2D vectors in 2D space. For graphs, we will use Matplotlib (`plt`).For this example, you may find it useful to adapt [this code](https://www.pythonmembers.club/2018/05/08/matplotlib-scatter-plot-annotate-set-text-at-label-each-point/). In the future, a good way to make a plot is to look at [the Matplotlib gallery](https://matplotlib.org/gallery/index.html), find a plot that looks somewhat like what you want, and adapt the code they give.
###Code
def plot_embeddings(M_reduced, word2Ind, words):
""" Plot in a scatterplot the embeddings of the words specified in the list "words".
NOTE: do not plot all the words listed in M_reduced / word2Ind.
Include a label next to each point.
Params:
M_reduced (numpy matrix of shape (number of unique words in the corpus , k)): matrix of k-dimensioal word embeddings
word2Ind (dict): dictionary that maps word to indices for matrix M
words (list of strings): words whose embeddings we want to visualize
"""
# ------------------
# Write your implementation here.
for word in words:
coord = M_reduced[word2Ind[word]]
x = coord[0]
y = coord[1]
plt.scatter(x, y, marker='x', color='red')
plt.text(x, y, word, fontsize=9)
plt.show()
# ------------------
# ---------------------
# Run this sanity check
# Note that this not an exhaustive check for correctness.
# The plot produced should look like the "test solution plot" depicted below.
# ---------------------
print ("-" * 80)
print ("Outputted Plot:")
M_reduced_plot_test = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1], [0, 0]])
word2Ind_plot_test = {'test1': 0, 'test2': 1, 'test3': 2, 'test4': 3, 'test5': 4}
words = ['test1', 'test2', 'test3', 'test4', 'test5']
plot_embeddings(M_reduced_plot_test, word2Ind_plot_test, words)
print ("-" * 80)
###Output
--------------------------------------------------------------------------------
Outputted Plot:
###Markdown
**Test Plot Solution** Question 1.5: Co-Occurrence Plot Analysis [written] (3 points)Now we will put together all the parts you have written! We will compute the co-occurrence matrix with fixed window of 4, over the Reuters "crude" corpus. Then we will use TruncatedSVD to compute 2-dimensional embeddings of each word. TruncatedSVD returns U\*S, so we normalize the returned vectors, so that all the vectors will appear around the unit circle (therefore closeness is directional closeness). **Note**: The line of code below that does the normalizing uses the NumPy concept of *broadcasting*. If you don't know about broadcasting, check out[Computation on Arrays: Broadcasting by Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/02.05-computation-on-arrays-broadcasting.html).Run the below cell to produce the plot. It'll probably take a few seconds to run. What clusters together in 2-dimensional embedding space? What doesn't cluster together that you might think should have? **Note:** "bpd" stands for "barrels per day" and is a commonly used abbreviation in crude oil topic articles.
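As a quick aside, here is a tiny sketch (on a made-up 2x2 array) of the broadcasting trick used for the normalization below:

```python
import numpy as np

V = np.array([[3.0, 4.0],
              [1.0, 0.0]])
lengths = np.linalg.norm(V, axis=1)    # shape (2,): the length of each row
V_unit = V / lengths[:, np.newaxis]    # shape (2, 1) broadcasts across the columns
print(V_unit)                          # each row now has unit length
```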
###Code
# -----------------------------
# Run This Cell to Produce Your Plot
# ------------------------------
reuters_corpus = read_corpus()
M_co_occurrence, word2Ind_co_occurrence = compute_co_occurrence_matrix(reuters_corpus)
M_reduced_co_occurrence = reduce_to_k_dim(M_co_occurrence, k=2)
# Rescale (normalize) the rows to make them each of unit-length
M_lengths = np.linalg.norm(M_reduced_co_occurrence, axis=1)
M_normalized = M_reduced_co_occurrence / M_lengths[:, np.newaxis] # broadcasting
words = ['barrels', 'bpd', 'ecuador', 'energy', 'industry', 'kuwait', 'oil', 'output', 'petroleum', 'venezuela']
plot_embeddings(M_normalized, word2Ind_co_occurrence, words)
###Output
Running Truncated SVD over 8185 words...
Done.
###Markdown
Write your answer here. We would expect:- barrels, oil, petroleum, industry, bpd should be closer- The countries to be close to each other (which they are) and close to oil because all countries export oil Part 2: Prediction-Based Word Vectors (15 points)As discussed in class, more recently prediction-based word vectors have come into fashion, e.g. word2vec. Here, we shall explore the embeddings produced by word2vec. Please revisit the class notes and lecture slides for more details on the word2vec algorithm. If you're feeling adventurous, challenge yourself and try reading the [original paper](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf).Then run the following cells to load the word2vec vectors into memory. **Note**: This might take several minutes.
###Code
def load_word2vec():
""" Load Word2Vec Vectors
Return:
wv_from_bin: All 3 million embeddings, each lengh 300
"""
import gensim.downloader as api
wv_from_bin = api.load("word2vec-google-news-300")
vocab = list(wv_from_bin.vocab.keys())
print("Loaded vocab size %i" % len(vocab))
return wv_from_bin
# -----------------------------------
# Run Cell to Load Word Vectors
# Note: This may take several minutes
# -----------------------------------
wv_from_bin = load_word2vec()
###Output
[==================================================] 100.0% 1662.8/1662.8MB downloaded
###Markdown
**Note: If you are receiving out of memory issues on your local machine, try closing other applications to free more memory on your device. You may want to try restarting your machine so that you can free up extra memory. Then immediately run the jupyter notebook and see if you can load the word vectors properly. If you still have problems with loading the embeddings onto your local machine after this, please follow the Piazza instructions, as how to run remotely on Stanford Farmshare machines.** Reducing dimensionality of Word2Vec Word EmbeddingsLet's directly compare the word2vec embeddings to those of the co-occurrence matrix. Run the following cells to:1. Put the 3 million word2vec vectors into a matrix M2. Run reduce_to_k_dim (your Truncated SVD function) to reduce the vectors from 300-dimensional to 2-dimensional.
###Code
def get_matrix_of_vectors(wv_from_bin, required_words=['barrels', 'bpd', 'ecuador', 'energy', 'industry', 'kuwait', 'oil', 'output', 'petroleum', 'venezuela']):
""" Put the word2vec vectors into a matrix M.
Param:
wv_from_bin: KeyedVectors object; the 3 million word2vec vectors loaded from file
Return:
M: numpy matrix shape (num words, 300) containing the vectors
word2Ind: dictionary mapping each word to its row number in M
"""
import random
words = list(wv_from_bin.vocab.keys())
print("Shuffling words ...")
random.shuffle(words)
words = words[:10000]
print("Putting %i words into word2Ind and matrix M..." % len(words))
word2Ind = {}
M = []
curInd = 0
for w in words:
try:
M.append(wv_from_bin.word_vec(w))
word2Ind[w] = curInd
curInd += 1
except KeyError:
continue
for w in required_words:
try:
M.append(wv_from_bin.word_vec(w))
word2Ind[w] = curInd
curInd += 1
except KeyError:
continue
M = np.stack(M)
print("Done.")
return M, word2Ind
# -----------------------------------------------------------------
# Run Cell to Reduce 300-Dimensinal Word Embeddings to k Dimensions
# Note: This may take several minutes
# -----------------------------------------------------------------
M, word2Ind = get_matrix_of_vectors(wv_from_bin)
M_reduced = reduce_to_k_dim(M, k=2)
###Output
Shuffling words ...
Putting 10000 words into word2Ind and matrix M...
Done.
Running Truncated SVD over 10010 words...
Done.
###Markdown
Question 2.1: Word2Vec Plot Analysis [written] (4 points)Run the cell below to plot the 2D word2vec embeddings for `['barrels', 'bpd', 'ecuador', 'energy', 'industry', 'kuwait', 'oil', 'output', 'petroleum', 'venezuela']`.What clusters together in 2-dimensional embedding space? What doesn't cluster together that you might think should have? How is the plot different from the one generated earlier from the co-occurrence matrix?
###Code
words = ['barrels', 'bpd', 'ecuador', 'energy', 'industry', 'kuwait', 'oil', 'output', 'petroleum', 'venezuela']
plot_embeddings(M_reduced, word2Ind, words)
###Output
_____no_output_____
###Markdown
Write your answer here. Clusters together:Difficult to tell, but:- Energy, industry, oil, petroleum, barrels- Venezuela, Ecuador, Output- bpd is in the image in a separate cluster but should perhaps be closer to oil, petroleum etc.These are perhaps the two clusters I would recognize if I tried. Cosine SimilarityNow that we have word vectors, we need a way to quantify the similarity between individual words, according to these vectors. One such metric is cosine-similarity. We will be using this to find words that are "close" and "far" from one another.We can think of n-dimensional vectors as points in n-dimensional space. If we take this perspective, L1 and L2 Distances help quantify the amount of space "we must travel" to get between these two points. Another approach is to examine the angle between two vectors. From trigonometry we know that:Instead of computing the actual angle, we can leave the similarity in terms of $similarity = cos(\Theta)$. Formally the [Cosine Similarity](https://en.wikipedia.org/wiki/Cosine_similarity) $s$ between two vectors $p$ and $q$ is defined as:$$s = \frac{p \cdot q}{||p|| ||q||}, \textrm{ where } s \in [-1, 1] $$ Question 2.2: Polysemous Words (2 points) [code + written] Find a [polysemous](https://en.wikipedia.org/wiki/Polysemy) word (for example, "leaves" or "scoop") such that the top-10 most similar words (according to cosine similarity) contains related words from *both* meanings. For example, "leaves" has both "vanishes" and "stalks" in the top 10, and "scoop" has both "handed_waffle_cone" and "lowdown". You will probably need to try several polysemous words before you find one. Please state the polysemous word you discover and the multiple meanings that occur in the top 10. Why do you think many of the polysemous words you tried didn't work?**Note**: You should use the `wv_from_bin.most_similar(word)` function to get the top 10 similar words. This function ranks all other words in the vocabulary with respect to their cosine similarity to the given word. For further assistance please check the __[GenSim documentation](https://radimrehurek.com/gensim/models/keyedvectors.htmlgensim.models.keyedvectors.FastTextKeyedVectors.most_similar)__.
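Before trying the questions below, here is a quick sketch (plain NumPy, made-up vectors) of the cosine similarity defined above and the corresponding cosine distance used in Question 2.3:

```python
import numpy as np

def cosine_similarity(p, q):
    return np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))

p = np.array([1.0, 2.0, 3.0])
q = np.array([2.0, 4.0, 6.0])
print(cosine_similarity(p, q))      # ~1.0: same direction
print(1 - cosine_similarity(p, q))  # cosine distance: ~0.0
```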
###Code
# ------------------
# Write your polysemous word exploration code here.
#wv_from_bin.most_similar("pike")
wv_from_bin.most_similar("engaged")
# ------------------
###Output
_____no_output_____
###Markdown
Write your answer here.- A word's meaning depends on context; for example, 'pike' has multiple meanings (fish, spear), but the data probably depicts pike as a fish most of the time, judging from the synonyms generated Question 2.3: Synonyms & Antonyms (2 points) [code + written] When considering Cosine Similarity, it's often more convenient to think of Cosine Distance, which is simply 1 - Cosine Similarity.Find three words (w1,w2,w3) where w1 and w2 are synonyms and w1 and w3 are antonyms, but Cosine Distance(w1,w3) < Cosine Distance(w1,w2). For example, w1="happy" is closer to w3="sad" than to w2="cheerful". Once you have found your example, please give a possible explanation for why this counter-intuitive result may have happened.You should use the `wv_from_bin.distance(w1, w2)` function here in order to compute the cosine distance between two words. Please see the __[GenSim documentation](https://radimrehurek.com/gensim/models/keyedvectors.htmlgensim.models.keyedvectors.FastTextKeyedVectors.distance)__ for further assistance.
###Code
# ------------------
# Write your synonym & antonym exploration code here.
w1 = "fun"
w2 = "amusing"
w3 = "boring"
w1_w2_dist = wv_from_bin.distance(w1, w2)
w1_w3_dist = wv_from_bin.distance(w1, w3)
print("Synonyms {}, {} have cosine distance: {}".format(w1, w2, w1_w2_dist))
print("Antonyms {}, {} have cosine distance: {}".format(w1, w3, w1_w3_dist))
# ------------------
###Output
Synonyms fun, amusing have cosine distance: 0.5848543047904968
Antonyms fun, boring have cosine distance: 0.5349873304367065
###Markdown
Write your answer here.- Perhaps they are used in similar contexts, i.e something was fun or something was boring. Then the learning algorithm associates that in a certain context these words can be used and in this case fun, boring. Solving Analogies with Word VectorsWord2Vec vectors have been shown to *sometimes* exhibit the ability to solve analogies. As an example, for the analogy "man : king :: woman : x", what is x?In the cell below, we show you how to use word vectors to find x. The `most_similar` function finds words that are most similar to the words in the `positive` list and most dissimilar from the words in the `negative` list. The answer to the analogy will be the word ranked most similar (largest numerical value).**Note:** Further Documentation on the `most_similar` function can be found within the __[GenSim documentation](https://radimrehurek.com/gensim/models/keyedvectors.htmlgensim.models.keyedvectors.FastTextKeyedVectors.most_similar)__.
###Code
# Run this cell to answer the analogy -- man : king :: woman : x
pprint.pprint(wv_from_bin.most_similar(positive=['woman', 'king'], negative=['man']))
###Output
[('queen', 0.7118192911148071),
('monarch', 0.6189674139022827),
('princess', 0.5902431011199951),
('crown_prince', 0.5499460697174072),
('prince', 0.5377321243286133),
('kings', 0.5236844420433044),
('Queen_Consort', 0.5235945582389832),
('queens', 0.518113374710083),
('sultan', 0.5098593235015869),
('monarchy', 0.5087411999702454)]
###Markdown
Question 2.4: Finding Analogies [code + written] (2 Points)Find an example of analogy that holds according to these vectors (i.e. the intended word is ranked top). In your solution please state the full analogy in the form x:y :: a:b. If you believe the analogy is complicated, explain why the analogy holds in one or two sentences.**Note**: You may have to try many analogies to find one that works!
###Code
# ------------------
# Write your analogy exploration code here.
pprint.pprint(wv_from_bin.most_similar(positive=['human', 'banana'], negative=['monkey']))
# ------------------
###Output
[('papaya', 0.41025421023368835),
('potato', 0.405283659696579),
('mango', 0.38979673385620117),
('cassava', 0.3826466202735901),
('cashew', 0.3766744136810303),
('agricultural', 0.3756161034107208),
('sugar_cane', 0.37352174520492554),
('agro_ecosystems', 0.36790210008621216),
('avocado', 0.3663007915019989),
('muscovado', 0.36593759059906006)]
###Markdown
Write your answer here.- I chose a difficult analogy and I do not know the correct answer, but the result seems quite fair: monkey : banana :: human : potato Question 2.5: Incorrect Analogy [code + written] (1 point)Find an example of analogy that does *not* hold according to these vectors. In your solution, state the intended analogy in the form x:y :: a:b, and state the (incorrect) value of b according to the word vectors.
###Code
# ------------------
# Write your incorrect analogy exploration code here.
pprint.pprint(wv_from_bin.most_similar(positive=['light','long'], negative=['short']))
# ------------------
###Output
[('glow', 0.41996973752975464),
('lights', 0.4155903458595276),
('Incandescent_lights', 0.4097457230091095),
('yellowish_glow', 0.40891095995903015),
('dark', 0.4074743390083313),
('sensitive_photoreceptor_cells', 0.4029213488101959),
('injuries_Fuellhardt', 0.39579325914382935),
('sunlight', 0.3876835107803345),
('illumination', 0.3876822590827942),
('dim_glow', 0.38765883445739746)]
###Markdown
Write your answer here.- The intended analogy was short : long :: light : heavy, but the model returns 'glow' as b, and I am not sure why Question 2.6: Guided Analysis of Bias in Word Vectors [written] (1 point)It's important to be cognizant of the biases (gender, race, sexual orientation etc.) implicit to our word embeddings.Run the cell below, to examine (a) which terms are most similar to "woman" and "boss" and most dissimilar to "man", and (b) which terms are most similar to "man" and "boss" and most dissimilar to "woman". What do you find in the top 10?
###Code
# Run this cell
# Here `positive` indicates the list of words to be similar to and `negative` indicates the list of words to be
# most dissimilar from.
pprint.pprint(wv_from_bin.most_similar(positive=['woman', 'boss'], negative=['man']))
print()
pprint.pprint(wv_from_bin.most_similar(positive=['man', 'boss'], negative=['woman']))
###Output
[('bosses', 0.5522644519805908),
('manageress', 0.49151360988616943),
('exec', 0.459408164024353),
('Manageress', 0.45598435401916504),
('receptionist', 0.4474116861820221),
('Jane_Danson', 0.44480547308921814),
('Fiz_Jennie_McAlpine', 0.44275766611099243),
('Coronation_Street_actress', 0.44275569915771484),
('supremo', 0.4409852921962738),
('coworker', 0.4398624897003174)]
[('supremo', 0.6097397804260254),
('MOTHERWELL_boss', 0.5489562153816223),
('CARETAKER_boss', 0.5375303626060486),
('Bully_Wee_boss', 0.5333974361419678),
('YEOVIL_Town_boss', 0.5321705341339111),
('head_honcho', 0.5281980037689209),
('manager_Stan_Ternent', 0.525971531867981),
('Viv_Busby', 0.5256163477897644),
('striker_Gabby_Agbonlahor', 0.5250812768936157),
('BARNSLEY_boss', 0.5238943099975586)]
###Markdown
Write your answer here.- I guess the vectors suggest man : boss :: woman : receptionist, perhaps- In the other direction, woman : boss :: man : terms like 'caretaker_boss' Question 2.7: Independent Analysis of Bias in Word Vectors [code + written] (2 points)Use the `most_similar` function to find another case where some bias is exhibited by the vectors. Please briefly explain the example of bias that you discover.
###Code
# ------------------
# Write your bias exploration code here.
pprint.pprint(wv_from_bin.most_similar(positive=['white', 'good'], negative=['black']))
print()
pprint.pprint(wv_from_bin.most_similar(positive=['black', 'good'], negative=['white']))
# ------------------
###Output
[('nice', 0.6383840441703796),
('great', 0.6161424517631531),
('terrific', 0.614184558391571),
('bad', 0.5943924784660339),
('excellent', 0.5763711929321289),
('decent', 0.5739346146583557),
('fantastic', 0.5718001127243042),
('perfect', 0.5275565981864929),
('better', 0.5265002250671387),
('wonderful', 0.5130242109298706)]
[('bad', 0.6289401054382324),
('great', 0.624528169631958),
('decent', 0.5894373655319214),
('terrific', 0.5581986904144287),
('tough', 0.5361088514328003),
('nice', 0.525139331817627),
('terrible', 0.5203973650932312),
('excellent', 0.5200915336608887),
('fantastic', 0.5186727046966553),
('lousy', 0.5168720483779907)]
|
handsOn_lecture13_error-measures-and-ml-advice/handsOn13_error-measures-and-ml-advice.ipynb | ###Markdown
ROC CurvesRecall that an ROC curve takes the **ranking** of the examples from a binary classifier, and plots the **True positive rate** versus the **False positive rate**. Problem 1:Let's say the binary classifier produces scores $\text{scr}(\vec{x})$. We are going to prove the following fact:$$\text{Area under the ROC curve } \quad = \quad \mathop{\text{Pr}}_{\vec{x}_+ \sim \text{ Class+}, \; \vec{x}_- \sim \text{ Class-}} (\text{scr}(\vec{x}_+) \geq \text{scr}(\vec{x}_-))$$ Part a)Convince yourself of the following, that we can draw the ROC curve using the following procedure:1. Sort the $\vec{x}$'s by their score1. Draw a blank $1 \times 1$ square1. Put your pen on the lower lefthand corner1. For each of the $\vec{x}$'s in the sorted list: 1. Draw **up** a length equal to $1/\text{num pos examples}$ if the example is a **positive** example 1. Draw **right** a length equal to $1/\text{num neg examples}$ if the example is a **negative** example Part b)Now see if you can figure out WHY the statement we want to prove is true.*Hint*: Think about how to sample a positive example and a negative example, and how that relates to the shape of the ROC curve Problem 2: Learning curvesLet's use learning curves to explore the bias / variance tradeoff. Learning curves work by plotting the training and test accuracy as a function of the amount of data the model has access to. As we learned in lecture, this helps us examine a few things:- A large gap between training and test performance that doesn't converge with more data can indicate **high variance**; the model is overfitting the training data and not generalizing to data it hasn't seen yet (the test set)- Curves that converge but at a level of accuracy that is unacceptable can indicate **high bias**; while variance isn't a concern, if the model isn't performing as well as we think it should on either train or test set, it may be biased
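To make Part a) concrete, here is a sketch with made-up scores and labels (not from any real classifier) that draws the staircase ROC, computes its area, and compares it against the pairwise probability on the right-hand side of the claim; with no tied scores the two numbers agree:

```python
import numpy as np
import matplotlib.pyplot as plt

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.4])   # made-up classifier scores
labels = np.array([1,   1,   0,   1,   0,    0])     # 1 = positive, 0 = negative

# Staircase ROC: walk through the examples sorted by decreasing score.
order = np.argsort(-scores)
n_pos, n_neg = labels.sum(), len(labels) - labels.sum()
tpr, fpr = [0.0], [0.0]
for y in labels[order]:
    tpr.append(tpr[-1] + (y == 1) / n_pos)   # step up for a positive example
    fpr.append(fpr[-1] + (y == 0) / n_neg)   # step right for a negative example
auc = np.trapz(tpr, fpr)                     # area under the staircase

# Pairwise probability: P(scr(x+) >= scr(x-)) over all positive/negative pairs.
pos, neg = scores[labels == 1], scores[labels == 0]
pairwise = np.mean([p >= n for p in pos for n in neg])

print(auc, pairwise)   # both should be 8/9 ~ 0.889 here
plt.plot(fpr, tpr)
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.show()
```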
###Code
import matplotlib.pyplot as plt
from IPython.display import Image
%matplotlib inline
# image courtesy of Raschka, Sebastian. Python machine learning. Birmingham, UK: Packt Publishing, 2015. Print.
Image(filename='learning-curve.png', width=600)
###Output
_____no_output_____
###Markdown
Plotting our own learning curves

Let's plot a learning curve for a logistic regression classifier on the trusty [iris dataset](http://scikit-learn.org/stable/auto_examples/datasets/plot_iris_dataset.html).
###Code
from sklearn import datasets
import numpy as np
iris = datasets.load_iris()
X = iris.data
y = iris.target
# train_test_split and learning_curve moved to sklearn.model_selection in scikit-learn 0.18+
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.6, random_state=0)
import matplotlib.pyplot as plt
from sklearn.model_selection import learning_curve
def plot_learning_curve(model, X_train, y_train):
# code adapted into function from ch6 of Raschka, Sebastian. Python machine learning. Birmingham, UK: Packt Publishing, 2015. Print.
train_sizes, train_scores, test_scores =\
learning_curve(estimator=model,
X=X_train,
y=y_train,
train_sizes=np.linspace(0.1, 1.0, 10),
cv=10,
n_jobs=1)
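# summarize accuracy across the 10 CV folds: mean and standard deviation per training-set size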
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(train_sizes, train_mean,
color='blue', marker='o',
markersize=5, label='training accuracy')
plt.fill_between(train_sizes,
train_mean + train_std,
train_mean - train_std,
alpha=0.15, color='blue')
plt.plot(train_sizes, test_mean,
color='green', linestyle='--',
marker='s', markersize=5,
label='cross-validation accuracy')
plt.fill_between(train_sizes,
test_mean + test_std,
test_mean - test_std,
alpha=0.15, color='green')
plt.grid()
plt.xlabel('Number of training samples')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.tight_layout()
plt.show()
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
pipe_lr = Pipeline([('scl', StandardScaler()),
('clf', LogisticRegression(random_state=0))])
plot_learning_curve(pipe_lr, X_train, y_train)
###Output
_____no_output_____
###Markdown
Notice how we plot the standard deviation too; in addition to seeing whether the training and cross-validation accuracy converge, we can see how much variation exists across the 10 folds at each training-set size. That spread is itself a useful clue about whether the model suffers from high variance.

Also, a quick note on [pipelines](http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html): they are a handy way to package preprocessing steps into an object that adheres to the fit/transform/predict paradigm of scikit-learn. Most linear models require that the data is scaled, so we use the [StandardScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) pipeline step. Tree-based models do not require a scaled dataset.

**Q**: How would you characterize the bias and variance of the logistic regression model?

**Exercise**: Try adjusting the inverse regularization parameter `C` of LogisticRegression to see how it affects the learning curve. Can you reduce variance?
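One possible starting point — a sketch, assuming the `plot_learning_curve` helper and the training split defined above, with an arbitrary choice of `C` for illustration — is to rebuild the pipeline with stronger regularization and re-plot:
###Code
# Sketch: smaller C means stronger regularization, which shrinks the weights
# and typically narrows the train/validation gap (at some cost in training accuracy).
pipe_lr_strong = Pipeline([('scl', StandardScaler()),
                           ('clf', LogisticRegression(C=0.1, random_state=0))])
plot_learning_curve(pipe_lr_strong, X_train, y_train)
###Output
_____no_output_____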
###Code
# Your code goes here
###Output
_____no_output_____
###Markdown
**Exercise**: Try out a couple of different models (e.g. [decision trees](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html), [random forest](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) or [svm](http://scikit-learn.org/stable/modules/svm.html)). Can you find a model that strikes a good balance between bias and variance (e.g. high performance, a small gap between training and cross-validation accuracy, and low variance within the k folds)?
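One sketch of what such a comparison might look like — the random forest settings below are arbitrary choices for illustration, and tree-based models need no scaling step:
###Code
# Sketch: plot the learning curve for a small random forest on the same training split.
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators=100, max_depth=3, random_state=0)
plot_learning_curve(forest, X_train, y_train)
###Output
_____no_output_____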
###Code
# Your code goes here
###Output
_____no_output_____
###Markdown
Tuning an SVM classifier for an unbalanced dataset

Let's look at how tuning the class weights of an SVM can help us better fit an imbalanced dataset. We will create a dataset with 1000 samples with label `0` and 100 samples with label `1`.
###Code
rng = np.random.RandomState(0)
n_samples_1 = 1000
n_samples_2 = 100
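# stack 1000 widely spread points (class 0) with 100 tighter points shifted to (2, 2) (class 1)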
X_unbalanced = np.r_[1.5 * rng.randn(n_samples_1, 2),
0.5 * rng.randn(n_samples_2, 2) + [2, 2]]
y_unbalanced = [0] * (n_samples_1) + [1] * (n_samples_2)
from sklearn import svm
# fit the model and get the separating hyperplane
model = svm.SVC(kernel='linear', C=1.0)
model.fit(X_unbalanced, y_unbalanced)
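# the linear boundary satisfies w0*x + w1*y + b = 0, i.e. y = -(w0/w1)*x - b/w1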
w = model.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(-5, 5)
yy = a * xx - model.intercept_[0] / w[1]
# plot separating hyperplanes and samples
h0 = plt.plot(xx, yy, 'k-', label='no weights')
plt.scatter(X_unbalanced[:, 0], X_unbalanced[:, 1], c=y_unbalanced, cmap=plt.cm.Paired)
plt.axis('tight')
plt.show()
###Output
_____no_output_____
###Markdown
Notice how the separating hyperplane fails to capture a high percentage of the positive examples. Assuming we wish to capture more of the positive examples, we can use the `class_weight` parameter of [svm.SVC](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) to put more emphasis on the minority class.

**Exercise**: try increasing the weight for class label `1` and see how it improves performance by plotting an updated decision boundary.
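One hedged sketch of what that could look like — the weight of 10 below is an arbitrary choice, roughly mirroring the 10:1 class imbalance:
###Code
# Sketch: penalize mistakes on class 1 ten times more heavily, refit,
# and draw the weighted boundary next to the unweighted one.
weighted_model = svm.SVC(kernel='linear', C=1.0, class_weight={1: 10})
weighted_model.fit(X_unbalanced, y_unbalanced)
ww = weighted_model.coef_[0]
wa = -ww[0] / ww[1]
wyy = wa * xx - weighted_model.intercept_[0] / ww[1]
plt.plot(xx, yy, 'k-', label='no weights')
plt.plot(xx, wyy, 'k--', label='class 1 weighted x10')
plt.scatter(X_unbalanced[:, 0], X_unbalanced[:, 1], c=y_unbalanced, cmap=plt.cm.Paired)
plt.legend(loc='upper right')
plt.axis('tight')
plt.show()
###Output
_____no_output_____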
###Code
# Your code goes here
###Output
_____no_output_____ |