path | concatenated_notebook
---|---|
Advanced Algorithms/Greed Algorithms/Min_Operations.ipynb | ###Markdown
Min Operations Starting from the number `0`, find the minimum number of operations required to reach a given positive `target number`. You can only use the following two operations: 1. Add 1 2. Double the number Examples: 1. For `Target = 18`, `output = 6`, because it takes at least 6 steps shown below to reach the target * `start = 0` * `step 1 ==> 0 + 1 = 1` * `step 2 ==> 1 * 2 = 2` or 1 + 1 = 2 * `step 3 ==> 2 * 2 = 4` * `step 4 ==> 4 * 2 = 8` * `step 5 ==> 8 + 1 = 9` * `step 6 ==> 9 * 2 = 18` 2. For `Target = 69`, `output = 9`, because it takes at least 9 steps to reach `69` from `0` using the allowed operations * `start = 0` * `step 1 ==> 0 + 1 = 1` * `step 2 ==> 1 + 1 = 2` * `step 3 ==> 2 * 2 = 4` * `step 4 ==> 4 * 2 = 8` * `step 5 ==> 8 * 2 = 16` * `step 6 ==> 16 + 1 = 17` * `step 7 ==> 17 * 2 = 34` * `step 8 ==> 34 * 2 = 68` * `step 9 ==> 68 + 1 = 69`
###Code
# Your solution
def min_operations(target):
"""
Return number of steps taken to reach a target number
input: target number (as an integer)
output: number of steps (as an integer)
"""
num_steps = 0
while target != 0:
# start backwards from the target
# if target is odd --> subtract 1
# if target is even --> divide by 2
if target % 2 == 0:
target = target // 2
num_steps += 1
else:
target = target - 1
num_steps += 1
return num_steps
# Test Cases
def test_function(test_case):
target = test_case[0]
solution = test_case[1]
output = min_operations(target)
if output == solution:
print("Pass")
else:
print("Fail")
target = 18
solution = 6
test_case = [target, solution]
test_function(test_case)
target = 69
solution = 9
test_case = [target, solution]
test_function(test_case)
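# Optional sanity check (not in the original notebook): a small brute-force BFS
# over the two allowed operations (+1, *2) should agree with the greedy answer
# for small targets. This is only a verification sketch.
from collections import deque

def min_operations_bfs(target):
    # Breadth-first search from 0 over the allowed operations.
    seen = {0}
    queue = deque([(0, 0)])  # (value, steps)
    while queue:
        value, steps = queue.popleft()
        if value == target:
            return steps
        for nxt in (value + 1, value * 2):
            if nxt <= target and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, steps + 1))
    return None

for t in (18, 69):
    assert min_operations_bfs(t) == min_operations(t)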
###Output
_____no_output_____
###Markdown
Hide Solution
###Code
def min_operations(target):
"""
Return number of steps taken to reach a target number
input:- target number an integer
output:- number of steps an integer
"""
num_steps = 0
# start backwards from the target
# if target is odd --> subtract 1
# if target is even --> divide by 2
while target != 0:
if target % 2 == 0:
target = target // 2
else:
target = target - 1
num_steps += 1
return num_steps
###Output
_____no_output_____ |
Python for Data Science/Week 7 - Intro to Machine Learning/Clustering Model - Weather Data Clustering with k-Means.ipynb | ###Markdown
Clustering with k-Means Model We will use cluster analysis to generate a big-picture model of the weather at a local station using minute-granularity data. Goal: create 12 clusters of the observations.**NOTE:** The dataset is in a large CSV file called *minute_weather.csv*. The download link is: https://drive.google.com/open?id=0B8iiZ7pSaSFZb3ItQ1l4LWRMTjg Importing the libraries
###Code
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
import os
import utils
from sklearn import metrics
from scipy.spatial.distance import cdist
import pandas as pd
import numpy as np
from itertools import cycle,islice
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates
%matplotlib inline
###Output
_____no_output_____
###Markdown
Minute Weather Data DescriptionThe **minute weather dataset** comes from the same source as the daily weather dataset that we used in the decision tree based classifier notebook. The main difference between these two datasets is that the minute weather dataset contains raw sensor measurements captured at one-minute intervals, while the daily weather dataset contained processed and well-curated data. The data is in the file **minute_weather.csv**, which is a comma-separated file.As with the daily weather data, this data comes from a weather station located in San Diego, California. The weather station is equipped with sensors that capture weather-related measurements such as air temperature, air pressure, and relative humidity. Data was collected for a period of three years, from September 2011 to September 2014, to ensure that sufficient data for different seasons and weather conditions is captured.Each row in **minute_weather.csv** contains weather data captured for a one-minute interval. Each row, or sample, consists of the following variables:* **rowID:** unique number for each row (*Unit: NA*)* **hpwren_timestamp:** timestamp of measure (*Unit: year-month-day hour:minute:second*)* **air_pressure:** air pressure measured at the timestamp (*Unit: hectopascals*)* **air_temp:** air temperature measured at the timestamp (*Unit: degrees Fahrenheit*)* **avg_wind_direction:** wind direction averaged over the minute before the timestamp (*Unit: degrees, with 0 meaning coming from the North, and increasing clockwise*)* **avg_wind_speed:** wind speed averaged over the minute before the timestamp (*Unit: meters per second*)* **max_wind_direction:** highest wind direction in the minute before the timestamp (*Unit: degrees, with 0 being North and increasing clockwise*)* **max_wind_speed:** highest wind speed in the minute before the timestamp (*Unit: meters per second*)* **min_wind_direction:** smallest wind direction in the minute before the timestamp (*Unit: degrees, with 0 being North and increasing clockwise*)* **min_wind_speed:** smallest wind speed in the minute before the timestamp (*Unit: meters per second*)* **rain_accumulation:** amount of accumulated rain measured at the timestamp (*Unit: millimeters*)* **rain_duration:** length of time rain has fallen as measured at the timestamp (*Unit: seconds*)* **relative_humidity:** relative humidity measured at the timestamp (*Unit: percent*)
###Code
data = pd.read_csv('./weather/minute_weather.csv')
data.head()
# check missing data
total = data.isnull().sum().sort_values(ascending=False)
percent = (data.isnull().sum()/data.isnull().count()*100).sort_values(ascending=False)
dataMissing = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
dataMissing.head(15)
data.shape
###Output
_____no_output_____
###Markdown
Data SamplingGet every 10th row
###Code
dfTen = data[data['rowID'] % 10 == 0]
dfTen.shape
dfTen.head()
###Output
_____no_output_____
###Markdown
Statistics
###Code
dfTen.describe().transpose()
dfTen[dfTen['rain_accumulation'] == 0].shape
dfTen[dfTen['rain_duration'] == 0].shape
###Output
_____no_output_____
###Markdown
Dropping the rain_accumulation and rain_duration columns, then removing the remaining rows with missing values
###Code
del dfTen['rain_accumulation']
del dfTen['rain_duration']
print('Rows before: ' + str(dfTen.shape[0]))
dfTen = dfTen.dropna()
print('Rows after: ' + str(dfTen.shape[0]))
###Output
Rows before: 158726
Rows after: 158680
###Markdown
**Lost about 0.03% of the rows (46 of 158,726)**
###Code
dfTen.columns
###Output
_____no_output_____
###Markdown
Select features of interest for clustering
###Code
features = ['air_pressure', 'air_temp',
'avg_wind_direction', 'avg_wind_speed',
'max_wind_direction', 'max_wind_speed',
'relative_humidity'
]
df = dfTen[features]
df.head()
###Output
_____no_output_____
###Markdown
Scaling the features using StandardScaler
###Code
X = StandardScaler().fit_transform(df)
X
###Output
_____no_output_____
###Markdown
The Elbow Method
###Code
def elbowMethod(data,maxK):
distortions = []
K = range(1,maxK)
for k in K:
model = KMeans(n_clusters=k).fit(data)
model.fit(data)
distortions.append(sum(np.min(cdist(data,model.cluster_centers_,'euclidean'),axis=1)) / data.shape[0])
plt.plot(K,distortions,'bx-')
plt.xlabel('k')
plt.ylabel('Distortion')
plt.title('The Elbow Method showing the optimal k')
plt.show()
elbowMethod(X,20)
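# Note (not part of the original notebook): scikit-learn's KMeans already exposes
# the within-cluster sum of squares as `inertia_`, so an equivalent elbow curve
# can be drawn without scipy's cdist. A minimal alternative sketch:
def elbow_method_inertia(data, max_k):
    inertias = [KMeans(n_clusters=k).fit(data).inertia_ for k in range(1, max_k)]
    plt.plot(range(1, max_k), inertias, 'bx-')
    plt.xlabel('k')
    plt.ylabel('Inertia (within-cluster sum of squares)')
    plt.show()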
###Output
_____no_output_____
###Markdown
**k = 5 seems to be a good choice** Using k-Means Clustering **For k = 12**
###Code
kmeans12 = KMeans(n_clusters = 12)
model12 = kmeans12.fit(X)
centers12 = model12.cluster_centers_
print('model\n',model12)
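# Optional (not in the original notebook): inspect how many samples fall into
# each of the 12 clusters using the fitted labels.
print(np.bincount(model12.labels_))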
###Output
model
KMeans(algorithm='auto', copy_x=True, init='k-means++', max_iter=300,
n_clusters=12, n_init=10, n_jobs=1, precompute_distances='auto',
random_state=None, tol=0.0001, verbose=0)
###Markdown
**For k = 5**
###Code
kmeans5 = KMeans(n_clusters = 5)
model5 = kmeans5.fit(X)
centers5 = model5.cluster_centers_
print('model\n',model5)
###Output
model
KMeans(algorithm='auto', copy_x=True, init='k-means++', max_iter=300,
n_clusters=5, n_init=10, n_jobs=1, precompute_distances='auto',
random_state=None, tol=0.0001, verbose=0)
###Markdown
Plots
###Code
# Function that creates a DataFrame with a column for Cluster Number
def pd_centers(features,centers):
colNames = list(features)
colNames.append('prediction')
Z = [np.append(A,index) for index,A in enumerate(centers)]
P = pd.DataFrame(Z,columns=colNames)
P['prediction'] = P['prediction'].astype(int)
return P
# Function that creates Parallel Plots
def parallel_plot(data,k):
myColors = list(islice(cycle(['b','r','g','y','k']), None, len(data)))
plt.figure(figsize=(10,5)).gca().axes.set_ylim([-3,+3])
parallel_coordinates(data,'prediction',color = myColors,marker='o')
plt.title('For k = ' + str(k))
plot5 = pd_centers(features, centers5)
plot12 = pd_centers(features, centers12)
###Output
_____no_output_____
###Markdown
Dry Days
###Code
parallel_plot(plot5[plot5['relative_humidity'] < -0.5],5)
parallel_plot(plot12[plot12['relative_humidity'] < -0.5],12)
###Output
_____no_output_____
###Markdown
Warm Days
###Code
parallel_plot(plot5[plot5['air_temp'] > 0.5],5)
parallel_plot(plot12[plot12['air_temp'] > 0.5],12)
###Output
_____no_output_____
###Markdown
Cool Days
###Code
parallel_plot(plot5[(plot5['relative_humidity'] > 0.5) & (plot5['air_temp'] < 0.5)],5)
parallel_plot(plot12[(plot12['relative_humidity'] > 0.5) & (plot12['air_temp'] < 0.5)],12)
###Output
_____no_output_____ |
08_numpy_practice.ipynb | ###Markdown
Practice on Numpy and PandasHere are some tasks for you, to refresh what you saw in the notebooks before.They will help you strengthen the **learning objectives** from the notebook before. You should be able to- create numpy arrays- manipulate them with basic mathematics operators- extract rows, columns and items by indexing- use aggregation methods (like sum, min, max, std) on numpy arrays**Part 1**: Creating numpy arrays.- Please create a numpy array from the list:```Python my_list = [1, 2, 5, 6, 8] ```- Please create a numpy array containing the values from 1 to 10.- Please create a numpy array from 0 to 5 with a stepsize of 0.2- Create a numpy array with the shape 2,3 with random values
###Code
#1
import numpy as np
my_list_1 = np.arange(1,11)
my_list_1
#2
my_list_2 = np.arange(0,5, 0.2)
my_list_2
#3
my_list_3 = np.random.rand(2,3)
my_list_3
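# The first bullet of Part 1 (an array built from my_list) has no cell above;
# a possible solution, assuming the list from the task description:
my_list = [1, 2, 5, 6, 8]
my_list_0 = np.array(my_list)
my_list_0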
###Output
_____no_output_____
###Markdown
**Part 2**: Manipulate numpy arrays.1. Please create a numpy array with (1, 3, '4', 7, 12, '0'). Define the content as integers.2. Check the data type of the objects and the shape of the array3. Update the 4th value to 30.4. Reshape the array to a 2x3 matrix.5. Please add 8 to the first row and 12 to the second row.6. Multiply the first column with 2, the second with 3 and the third with 4.7. Please sum up all numbers in the first row, and all numbers in the second row.8. Similarly, search for the largest number for each column.9. Extract the number in the second column and the first row.10. Check at which index the value is exactly 48.
###Code
#1
my_list_4 = np.array([1, 3, '4', 7, 12, '0'], np.int32)
my_list_4
#2
print(type(my_list_4))
print(my_list_4.shape)
#3
my_list_4 = np.where(my_list_4 == 4, 30, my_list_4)
my_list_4
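# Note: np.where above replaces by *value* (the element equal to 4). If the task
# is read as updating the 4th *position*, direct indexing (my_list_4[3] = 30)
# would be an alternative; the rest of this notebook follows the value-based
# replacement shown here.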
#4
my_list_4 = np.array(my_list_4).reshape(2, 3)
print(my_list_4)
#5
my_list_4 = my_list_4 + [[8], [12]]
my_list_4
#6
my_list_4 = my_list_4 * [2, 3, 4]
my_list_4
#7
my_list_4 .sum(axis=1)
#8
my_list_4.max(axis=0)
#9
my_list_4[:1, 1]
#10
print(np.where(my_list_4 == 48))
###Output
(array([1]), array([2]))
|
getting_started/notebooks_managed_domains/Building_Your_First_Recommender_Video_On_Demand.ipynb | ###Markdown
Building Your First Video On Demand RecommenderThis notebook will walk you through the steps to build a Domain dataset group and a recommender that returns movie recommendations based on data collected from the MovieLens data set. The goal is to recommend movies that are relevant to a particular user.The data comes from the [MovieLens project](https://grouplens.org/datasets/movielens/). Follow the link to learn more about the data and potential uses. How to Use the NotebookThe code is broken up into cells like the one below. There's a triangular Run button at the top of this page that you can click to execute each cell and move onto the next, or you can press `Shift` + `Enter` while in the cell to execute it and move onto the next one.While a cell is executing you'll notice an `*` next to it; once the cell has finished executing, the `*` is replaced by a number indicating the order in which it completed.Simply follow the instructions below and execute the cells to get started with Amazon Personalize using use-case-optimized recommenders. ImportsPython ships with a broad collection of libraries and we need to import those as well as the ones installed to help us like [boto3](https://aws.amazon.com/sdk-for-python/) (the AWS SDK for Python) and [Pandas](https://pandas.pydata.org/)/[Numpy](https://numpy.org/), which are core data science tools.
###Code
# Imports
import boto3
import json
import numpy as np
import pandas as pd
import time
import datetime
###Output
_____no_output_____
###Markdown
Next you will want to validate that your environment can communicate successfully with Amazon Personalize, the lines below do just that.
###Code
# Configure the SDK to Personalize:
personalize = boto3.client('personalize')
personalize_runtime = boto3.client('personalize-runtime')
###Output
_____no_output_____
###Markdown
Configure the dataData is imported into Amazon Personalize through Amazon S3. Below we will specify a bucket that you have created within AWS for the purposes of this exercise. Update the `bucket` variable to the value that you created earlier in the CloudFormation steps; this should be in a text file from your earlier work. The `filename` does not need to be changed. Specify a Bucket and Data Output LocationUpdate the `bucket` name to a unique name.
###Code
bucket_name = "CHANGE_BUCKET_NAME" # replace with the name of your S3 bucket
filename = "movie-lens-100k.csv"
###Output
_____no_output_____
###Markdown
Download, Prepare, and Upload Training DataThe MovieLens data is not loaded locally yet; execute the lines below to download the latest copy and examine it quickly. Download and Explore the Dataset
###Code
!wget -N https://files.grouplens.org/datasets/movielens/ml-latest-small.zip
!unzip -o ml-latest-small.zip
!ls ml-latest-small
!pygmentize ml-latest-small/README.txt
interactions_data = pd.read_csv('./ml-latest-small/ratings.csv')
pd.set_option('display.max_rows', 5)
interactions_data
interactions_data.info()
###Output
_____no_output_____
###Markdown
Prepare the Data Interactions DataAs you can see, the data contains a UserID, ItemID, Rating, and Timestamp. We are now going to remove the items with low ratings and drop the Rating column before we build our model. We are also adding an EVENT_TYPE column to all interactions.
###Code
interactions_data = interactions_data[interactions_data['rating'] > 3] # Keep only movies rated higher than 3 out of 5.
interactions_data = interactions_data[['userId', 'movieId', 'timestamp']]
interactions_data.rename(columns = {'userId':'USER_ID', 'movieId':'ITEM_ID',
'timestamp':'TIMESTAMP'}, inplace = True)
interactions_data['EVENT_TYPE']='watch' #Adding an EVENT_TYPE column that has the event type "watch" for all movies
interactions_data.head()
###Output
_____no_output_____
###Markdown
Item MetadataOpen the item data file and take a look at the first rows.
###Code
items_data = pd.read_csv('./ml-latest-small/movies.csv')
items_data.head(5)
items_data.info()
items_data['year'] = items_data['title'].str.extract('.*\((.*)\).*',expand = False)
items_data.head(5)
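# Note (not in the original notebook): titles without a "(year)" suffix yield
# NaN in the new column; a quick count of such rows:
print(items_data['year'].isna().sum())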
###Output
_____no_output_____
###Markdown
Selecting a modern date as the creation timestamp for this example because the actual creation timestamp is unknown. In your use-case, please provide the appropriate creation timestamp.
###Code
ts= datetime.datetime(2022, 1, 1, 0, 0).strftime('%s')
print(ts)
items_data["CREATION_TIMESTAMP"] = ts
items_data
# removing the title
items_data.drop(columns="title", inplace = True)
# renaming the columns to match schema
items_data.rename(columns = { 'movieId':'ITEM_ID', 'genres':'GENRES',
'year':'YEAR'}, inplace = True)
items_data
###Output
_____no_output_____
###Markdown
User MetadataThe dataset does not have any user metadata, so we will create a fake metadata field.
###Code
# get user ids from the interaction dataset
user_ids = interactions_data['USER_ID'].unique()
user_data = pd.DataFrame()
user_data["USER_ID"]=user_ids
user_data
###Output
_____no_output_____
###Markdown
Adding MetadataThe current dataset does not contain additional user information. For this example, we'll randomly assign a gender to the users with equal probability of male and female.
###Code
possible_genders = ['female', 'male']
random = np.random.choice(possible_genders, len(user_data.index), p=[0.5, 0.5])
user_data["GENDER"] = random
user_data
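# Optional check (not in the original notebook): confirm the roughly 50/50
# gender split produced by the random assignment above.
print(user_data['GENDER'].value_counts())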
###Output
_____no_output_____
###Markdown
Configure an S3 bucket and an IAM roleSo far, we have downloaded, manipulated, and saved the data onto the Amazon EBS volume attached to the instance running this Jupyter notebook. However, Amazon Personalize will need an S3 bucket to act as the source of your data, as well as IAM roles for accessing that bucket. Let's set all of that up.The Amazon S3 bucket needs to be in the same region as the Amazon Personalize resources we have been creating so far. Simply define the region as a string below.
###Code
region = "eu-west-1" #Specify the region where your bucket will be domiciled
s3 = boto3.client('s3')
account_id = boto3.client('sts').get_caller_identity().get('Account')
bucket_name = account_id + "-" + region + "-" + "personalizemanagedretailers"
print('bucket_name:', bucket_name)
try:
if region == "us-east-1":
s3.create_bucket(Bucket=bucket_name)
else:
s3.create_bucket(
Bucket=bucket_name,
CreateBucketConfiguration={'LocationConstraint': region}
)
except:
print("Bucket already exists. Using bucket", bucket_name)
###Output
_____no_output_____
###Markdown
Upload data to S3Now that your Amazon S3 bucket has been created, upload the CSV files of our interactions, items, and users data.
###Code
interactions_filename = "interactions.csv"
interactions_data.to_csv(interactions_filename, index=False)
boto3.Session().resource('s3').Bucket(bucket_name).Object(interactions_filename).upload_file(interactions_filename)
items_filename = "items.csv"
items_data.to_csv(items_filename, index=False)
boto3.Session().resource('s3').Bucket(bucket_name).Object(items_filename).upload_file(items_filename)
user_filename = "users.csv"
user_data.to_csv(user_filename, index=False)
boto3.Session().resource('s3').Bucket(bucket_name).Object(user_filename).upload_file(user_filename)
###Output
_____no_output_____
###Markdown
Set the S3 bucket policyAmazon Personalize needs to be able to read the contents of your S3 bucket. So add a bucket policy which allows that.Note: Make sure the role you are using to run the code in this notebook has the necessary permissions to modify the S3 bucket policy.
###Code
s3 = boto3.client("s3")
policy = {
"Version": "2012-10-17",
"Id": "PersonalizeS3BucketAccessPolicy",
"Statement": [
{
"Sid": "PersonalizeS3BucketAccessPolicy",
"Effect": "Allow",
"Principal": {
"Service": "personalize.amazonaws.com"
},
"Action": [
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::{}".format(bucket_name),
"arn:aws:s3:::{}/*".format(bucket_name)
]
}
]
}
s3.put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(policy))
###Output
_____no_output_____
###Markdown
Create and Wait for Dataset GroupThe largest grouping in Personalize is a Dataset Group; it isolates your data, event trackers, solutions, and campaigns, grouping together everything that shares a common collection of data. Feel free to alter the name below if you'd like. Create Dataset Group
###Code
response = personalize.create_dataset_group(
name='personalize-video-on-demand',
domain='VIDEO_ON_DEMAND'
)
dataset_group_arn = response['datasetGroupArn']
print(json.dumps(response, indent=2))
###Output
_____no_output_____
###Markdown
Wait for Dataset Group to Have ACTIVE StatusBefore we can use the Dataset Group in any steps below, it must be active. Execute the cell below and wait for it to show ACTIVE.
###Code
max_time = time.time() + 3*60*60 # 3 hours
while time.time() < max_time:
describe_dataset_group_response = personalize.describe_dataset_group(
datasetGroupArn = dataset_group_arn
)
status = describe_dataset_group_response["datasetGroup"]["status"]
print("DatasetGroup: {}".format(status))
if status == "ACTIVE" or status == "CREATE FAILED":
break
time.sleep(60)
###Output
_____no_output_____
###Markdown
Create Interactions SchemaA core component of how Personalize understands your data comes from the Schema that is defined below. This configuration tells the service how to digest the data provided via your CSV file. Note the columns and types align to what was in the file you created above.
###Code
schema = {
"type": "record",
"name": "Interactions",
"namespace": "com.amazonaws.personalize.schema",
"fields": [
{
"name": "USER_ID",
"type": "string"
},
{
"name": "ITEM_ID",
"type": "string"
},
{
"name": "EVENT_TYPE",
"type": "string"
},
{
"name": "TIMESTAMP",
"type": "long"
}
],
"version": "1.0"
}
create_interactions_schema_response = personalize.create_schema(
name='personalize-demo-interactions-schema',
schema=json.dumps(schema),
domain='VIDEO_ON_DEMAND'
)
interactions_schema_arn = create_interactions_schema_response['schemaArn']
print(json.dumps(create_interactions_schema_response, indent=2))
###Output
_____no_output_____
###Markdown
Create Items (movies) schema
###Code
schema = {
"type": "record",
"name": "Items",
"namespace": "com.amazonaws.personalize.schema",
"fields": [
{
"name": "ITEM_ID",
"type": "string"
},
{
"name": "GENRES",
"type": [
"string"
],
"categorical": True
},
{
"name": "CREATION_TIMESTAMP",
"type": "long"
}
],
"version": "1.0"
}
create_items_schema_response = personalize.create_schema(
name='personalize-demo-items-schema',
schema=json.dumps(schema),
domain='VIDEO_ON_DEMAND'
)
items_schema_arn = create_items_schema_response['schemaArn']
print(json.dumps(create_items_schema_response, indent=2))
###Output
_____no_output_____
###Markdown
Create Users schema
###Code
schema = {
"type": "record",
"name": "Users",
"namespace": "com.amazonaws.personalize.schema",
"fields": [
{
"name": "USER_ID",
"type": "string"
},
{
"name": "GENDER",
"type": "string",
"categorical": True
}
],
"version": "1.0"
}
create_users_schema_response = personalize.create_schema(
name='personalize-demo-users-schema',
schema=json.dumps(schema),
domain='VIDEO_ON_DEMAND'
)
users_schema_arn = create_users_schema_response['schemaArn']
print(json.dumps(create_users_schema_response, indent=2))
###Output
_____no_output_____
###Markdown
Create DatasetsAfter the group, the next thing to create is the actual datasets. Create Interactions Dataset
###Code
dataset_type = "INTERACTIONS"
create_dataset_response = personalize.create_dataset(
name = "personalize-demo-interactions",
datasetType = dataset_type,
datasetGroupArn = dataset_group_arn,
schemaArn = interactions_schema_arn
)
interactions_dataset_arn = create_dataset_response['datasetArn']
print(json.dumps(create_dataset_response, indent=2))
###Output
_____no_output_____
###Markdown
Create Items Dataset
###Code
dataset_type = "ITEMS"
create_dataset_response = personalize.create_dataset(
name = "personalize-demo-items",
datasetType = dataset_type,
datasetGroupArn = dataset_group_arn,
schemaArn = items_schema_arn
)
items_dataset_arn = create_dataset_response['datasetArn']
print(json.dumps(create_dataset_response, indent=2))
###Output
_____no_output_____
###Markdown
Create Users Dataset
###Code
dataset_type = "USERS"
create_dataset_response = personalize.create_dataset(
name = "personalize-demo-users",
datasetType = dataset_type,
datasetGroupArn = dataset_group_arn,
schemaArn = users_schema_arn
)
users_dataset_arn = create_dataset_response['datasetArn']
print(json.dumps(create_dataset_response, indent=2))
###Output
_____no_output_____
###Markdown
Create Personalize RoleAmazon Personalize also needs the ability to assume roles in AWS in order to have the permissions to execute certain tasks; the lines below grant that.Note: Make sure the role you are using to run the code in this notebook has the necessary permissions to create a role.
###Code
iam = boto3.client("iam")
role_name = "PersonalizeRoleDemoRecommender"
assume_role_policy_document = {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "personalize.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
create_role_response = iam.create_role(
RoleName = role_name,
AssumeRolePolicyDocument = json.dumps(assume_role_policy_document)
)
# AmazonPersonalizeFullAccess provides access to any S3 bucket with a name that includes "personalize" or "Personalize"
# if you would like to use a bucket with a different name, please consider creating and attaching a new policy
# that provides read access to your bucket or attaching the AmazonS3ReadOnlyAccess policy to the role
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonPersonalizeFullAccess"
iam.attach_role_policy(
RoleName = role_name,
PolicyArn = policy_arn
)
# Now add S3 support
iam.attach_role_policy(
PolicyArn='arn:aws:iam::aws:policy/AmazonS3FullAccess',
RoleName=role_name
)
time.sleep(60) # wait for a minute to allow IAM role policy attachment to propagate
role_arn = create_role_response["Role"]["Arn"]
print(role_arn)
###Output
_____no_output_____
###Markdown
Import the dataEarlier you created the DatasetGroup and Datasets to house your information; now you will execute an import job that loads the data from S3 into Amazon Personalize for use in building your model. Create Interactions Dataset Import Job
###Code
create_interactions_dataset_import_job_response = personalize.create_dataset_import_job(
jobName = "personalize-demo-import-interactions",
datasetArn = interactions_dataset_arn,
dataSource = {
"dataLocation": "s3://{}/{}".format(bucket_name, interactions_filename)
},
roleArn = role_arn
)
dataset_interactions_import_job_arn = create_interactions_dataset_import_job_response['datasetImportJobArn']
print(json.dumps(create_interactions_dataset_import_job_response, indent=2))
###Output
_____no_output_____
###Markdown
Create Items Dataset Import Job
###Code
create_items_dataset_import_job_response = personalize.create_dataset_import_job(
jobName = "personalize-demo-import-items",
datasetArn = items_dataset_arn,
dataSource = {
"dataLocation": "s3://{}/{}".format(bucket_name, items_filename)
},
roleArn = role_arn
)
dataset_items_import_job_arn = create_items_dataset_import_job_response['datasetImportJobArn']
print(json.dumps(create_items_dataset_import_job_response, indent=2))
###Output
_____no_output_____
###Markdown
Create Users Dataset Import Job
###Code
create_users_dataset_import_job_response = personalize.create_dataset_import_job(
jobName = "personalize-demo-import-users",
datasetArn = users_dataset_arn,
dataSource = {
"dataLocation": "s3://{}/{}".format(bucket_name, user_filename)
},
roleArn = role_arn
)
dataset_users_import_job_arn = create_users_dataset_import_job_response['datasetImportJobArn']
print(json.dumps(create_users_dataset_import_job_response, indent=2))
###Output
_____no_output_____
###Markdown
Wait for Dataset Import Job to Have ACTIVE StatusIt can take a while before the import job completes; please wait until you see below that it is active.
###Code
max_time = time.time() + 3*60*60 # 3 hours
while time.time() < max_time:
describe_dataset_import_job_response = personalize.describe_dataset_import_job(
datasetImportJobArn = dataset_interactions_import_job_arn
)
status = describe_dataset_import_job_response["datasetImportJob"]['status']
print("DatasetImportJob: {}".format(status))
if status == "ACTIVE" or status == "CREATE FAILED":
break
time.sleep(60)
max_time = time.time() + 3*60*60 # 3 hours
while time.time() < max_time:
describe_dataset_import_job_response = personalize.describe_dataset_import_job(
datasetImportJobArn = dataset_items_import_job_arn
)
status = describe_dataset_import_job_response["datasetImportJob"]['status']
print("DatasetImportJob: {}".format(status))
if status == "ACTIVE" or status == "CREATE FAILED":
break
time.sleep(60)
max_time = time.time() + 3*60*60 # 3 hours
while time.time() < max_time:
describe_dataset_import_job_response = personalize.describe_dataset_import_job(
datasetImportJobArn = dataset_users_import_job_arn
)
status = describe_dataset_import_job_response["datasetImportJob"]['status']
print("DatasetImportJob: {}".format(status))
if status == "ACTIVE" or status == "CREATE FAILED":
break
time.sleep(60)
###Output
_____no_output_____
###Markdown
Choose recommender use casesEach domain has different use cases. When you create a recommender you create it for a specific use case, and each use case has different requirements for getting recommendations.
###Code
available_recipes = personalize.list_recipes(domain='VIDEO_ON_DEMAND') # See a list of recommenders for the domain.
print (available_recipes["recipes"])
###Output
_____no_output_____
###Markdown
We are going to create a recommender of the type "More like X". This type of recommender offers recommendations for videos that are similar to a video a user watched. With this use case, Amazon Personalize automatically filters videos the user watched based on the userId specified in the `get_recommendations` call. For better performance, record Click events in addition to the required Watch events.
###Code
create_recommender_response = personalize.create_recommender(
name = 'because_you_watched_x',
recipeArn = 'arn:aws:personalize:::recipe/aws-vod-more-like-x',
datasetGroupArn = dataset_group_arn
)
recommender_you_watched_x_arn = create_recommender_response["recommenderArn"]
print (json.dumps(create_recommender_response))
###Output
_____no_output_____
###Markdown
We are going to create a second recommender of the type "Top picks for you". This type of recommender offers personalized streaming content recommendations for a user that you specify. With this use case, Amazon Personalize automatically filters videos the user watched based on the userId that you specify and `Watch` events.[More use cases per domain](https://docs.aws.amazon.com/personalize/latest/dg/domain-use-cases.html)
###Code
create_recommender_response = personalize.create_recommender(
name = 'top_picks_for_you',
recipeArn = 'arn:aws:personalize:::recipe/aws-vod-top-picks',
datasetGroupArn = dataset_group_arn
)
recommender_top_picks_arn = create_recommender_response["recommenderArn"]
print (json.dumps(create_recommender_response))
###Output
_____no_output_____
###Markdown
We wait until the recommenders have finished creating and have status `ACTIVE`. We check the status periodically.
###Code
max_time = time.time() + 10*60*60 # 10 hours
while time.time() < max_time:
version_response = personalize.describe_recommender(
recommenderArn = recommender_you_watched_x_arn
)
status = version_response["recommender"]["status"]
if status == "ACTIVE":
print("Build succeeded for {}".format(recommender_you_watched_x_arn))
elif status == "CREATE FAILED":
print("Build failed for {}".format(recommender_you_watched_x_arn))
if status == "ACTIVE":
break
else:
print("The solution build is still in progress")
time.sleep(60)
while time.time() < max_time:
version_response = personalize.describe_recommender(
recommenderArn = recommender_top_picks_arn
)
status = version_response["recommender"]["status"]
if status == "ACTIVE":
print("Build succeeded for {}".format(recommender_top_picks_arn))
elif status == "CREATE FAILED":
print("Build failed for {}".format(recommender_top_picks_arn))
if status == "ACTIVE":
break
else:
print("The solution build is still in progress")
time.sleep(60)
###Output
_____no_output_____
###Markdown
Getting recommendations with a recommenderNow that the recommenders have been trained, let's have a look at the recommendations we can get for our users!
###Code
# reading the original data in order to have a dataframe that has both movie_ids
# and the corresponding titles to make our recommendations easier to read.
items_df = pd.read_csv('./ml-latest-small/movies.csv')
items_df.sample(10)
def get_movie_by_id(movie_id, movie_df):
"""
This takes in a movie_id from a recommendation in string format,
converts it to an int, and then does a lookup in a specified
dataframe.
A really broad try/except clause was added in case anything goes wrong.
Feel free to add more debugging or filtering here to improve results if
you hit an error.
"""
try:
return movie_df.loc[movie_df["movieId"]==int(movie_id)]['title'].values[0]
except:
print (movie_id)
return "Error obtaining title"
###Output
_____no_output_____
###Markdown
Let us get some 'Because you Watched X' recommendations:
###Code
# First pick a user
test_user_id = "1"
# Select a random item
test_item_id = "59315" #Iron Man 59315
# Get recommendations for the user for this item
get_recommendations_response = personalize_runtime.get_recommendations(
recommenderArn = recommender_you_watched_x_arn,
userId = test_user_id,
itemId = test_item_id,
numResults = 20
)
# Build a new dataframe for the recommendations
item_list = get_recommendations_response['itemList']
recommendation_list = []
for item in item_list:
movie = get_movie_by_id(item['itemId'], items_df)
recommendation_list.append(movie)
user_recommendations_df = pd.DataFrame(recommendation_list, columns = [get_movie_by_id(test_item_id, items_df)])
pd.options.display.max_rows =20
display(user_recommendations_df)
###Output
_____no_output_____
###Markdown
Get recommendations from the recommender returning "Top picks for you". We also look up our sample user's metadata; you can use this type of metadata to get insights about your users.
###Code
users_data_df = pd.read_csv('./users.csv')
def get_gender_by_id(user_id, user_df):
    """
    This takes in a user_id and then does a lookup in a specified
    dataframe.
    A really broad try/except clause was added in case anything goes wrong.
    Feel free to add more debugging or filtering here to improve results if
    you hit an error.
    """
    try:
        return user_df.loc[user_df["USER_ID"]==int(user_id)]['GENDER'].values[0]
    except:
        print (user_id)
        return "Error obtaining gender"
# First pick a user
test_user_id = "111" # samples users: 55, 75, 76, 111
# Get recommendations for the user
get_recommendations_response = personalize_runtime.get_recommendations(
recommenderArn = recommender_top_picks_arn,
userId = test_user_id,
numResults = 20
)
# Build a new dataframe for the recommendations
item_list = get_recommendations_response['itemList']
recommendation_list = []
for item in item_list:
movie = get_movie_by_id(item['itemId'], items_df)
recommendation_list.append(movie)
column_name = test_user_id+" ("+get_gender_by_id(test_user_id, users_data_df)+")"
user_recommendations_df = pd.DataFrame(recommendation_list, columns = [column_name])
pd.options.display.max_rows =20
display(user_recommendations_df)
###Output
_____no_output_____
###Markdown
ReviewUsing the code above you have successfully trained a deep learning model to generate movie recommendations based on prior user behavior. You have created two recommenders for two foundational use cases. Going forward, you can adapt this code to create other recommenders. Notes for the Next Notebook:There are a few values you will need for the next notebook; execute the cell below to store them so they can be used in the `Clean_Up_Resources.ipynb` notebook.This will overwrite any data stored for those variables and set them to the values specified in this notebook.
###Code
# store for cleanup
%store dataset_group_arn
%store role_name
%store interactions_schema_arn
%store items_schema_arn
%store users_schema_arn
%store region
###Output
_____no_output_____ |
Learning+Misc/Quantum-Katas/tutorials/ComplexArithmetic/.ipynb_checkpoints/ComplexArithmetic-checkpoint.ipynb | ###Markdown
Complex ArithmeticThis is a tutorial designed to introduce you to complex arithmetic.This topic isn't particularly expansive, but it's important to understand it to be able to work with quantum computing.This tutorial covers the following topics:* Imaginary and complex numbers* Basic complex arithmetic* Complex plane* Modulus operator* Imaginary exponents* Polar representationIf you need to look up some formulas quickly, you can find them in [this cheatsheet](https://github.com/microsoft/QuantumKatas/blob/master/quickref/qsharp-quick-reference.pdf).If you are curious to learn more, you can find more information at [Wikipedia](https://en.wikipedia.org/wiki/Complex_number). This notebook has several tasks that require you to write Python code to test your understanding of the concepts. If you are not familiar with Python, [here](https://docs.python.org/3/tutorial/index.html) is a good introductory tutorial for it. Let's start by importing some useful mathematical functions and constants, and setting up a few things necessary for testing the exercises. **Do not skip this step**.Click the cell with code below this block of text and press `Ctrl+Enter` (`⌘+Enter` on Mac).
###Code
# Run this cell using Ctrl+Enter (⌘+Enter on Mac).
from testing import exercise
from typing import Tuple
import math
Complex = Tuple[float, float]
Polar = Tuple[float, float]
###Output
Success!
###Markdown
Algebraic Perspective Imaginary numbersFor some purposes, real numbers aren't enough. Probably the most famous example is this equation:$$x^{2} = -1$$which has no solution for $x$ among real numbers. If, however, we abandon that constraint, we can do something interesting - we can define our own number. Let's say there exists some number that solves that equation. Let's call that number $i$.$$i^{2} = -1$$As we said before, $i$ can't be a real number. In that case, we'll call it an **imaginary unit**. However, there is no reason for us to define it as acting any different from any other number, other than the fact that $i^2 = -1$:$$i + i = 2i \\i - i = 0 \\-1 \cdot i = -i \\(-i)^{2} = -1$$We'll call the number $i$ and its real multiples **imaginary numbers**.> A good video introduction on imaginary numbers can be found [here](https://youtu.be/SP-YJe7Vldo). Exercise 1: Powers of $i$.**Input:** An even integer $n$.**Goal:** Return the $n$th power of $i$, or $i^n$.> Fill in the missing code (denoted by `...`) and run the cell below to test your work.
###Code
@exercise
def imaginary_power(n : int) -> int:
# If n is divisible by 4
if n % 4 == 0:
return 1
else:
return -1
###Output
Success!
###Markdown
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynbExercise-1:-Powers-of-$i$.).* Complex NumbersAdding imaginary numbers to each other is quite simple, but what happens when we add a real number to an imaginary number? The result of that addition will be partly real and partly imaginary, otherwise known as a **complex number**. A complex number is simply the real part and the imaginary part being treated as a single number. Complex numbers are generally written as the sum of their two parts: $a + bi$, where both $a$ and $b$ are real numbers. For example, $3 + 4i$, or $-5 - 7i$ are valid complex numbers. Note that purely real or purely imaginary numbers can also be written as complex numbers: $2$ is $2 + 0i$, and $-3i$ is $0 - 3i$.When performing operations on complex numbers, it is often helpful to treat them as polynomials in terms of $i$. Exercise 2: Complex addition.**Inputs:**1. A complex number $x = a + bi$, represented as a tuple `(a, b)`.2. A complex number $y = c + di$, represented as a tuple `(c, d)`.**Goal:** Return the sum of these two numbers $x + y = z = g + hi$, represented as a tuple `(g, h)`.> A tuple is a pair of numbers.> You can make a tuple by putting two numbers in parentheses like this: `(3, 4)`.> * You can access the $n$th element of tuple `x` like so: `x[n]`> * For this tutorial, complex numbers are represented as tuples where the first element is the real part, and the second element is the real coefficient of the imaginary part> * For example, $1 + 2i$ would be represented by a tuple `(1, 2)`, and $7 - 5i$ would be represented by `(7, -5)`.>> You can find more details about Python's tuple data type in the [official documentation](https://docs.python.org/3/library/stdtypes.htmltuples). Need a hint? Click here Remember, adding complex numbers is just like adding polynomials. Add components of the same type - add the real part to the real part, add the complex part to the complex part. A video explanation can be found here.
###Code
@exercise
def complex_add(x : Complex, y : Complex) -> Complex:
# You can extract elements from a tuple like this
a = x[0]
b = x[1]
c = y[0]
d = y[1]
# This creates a new variable and stores the real component into it
real = a + c
# Replace the ... with code to calculate the imaginary component
imaginary = b+d
# You can create a tuple like this
ans = (real, imaginary)
return ans
###Output
Success!
###Markdown
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynbExercise-2:-Complex-addition.).* Exercise 3: Complex multiplication.**Inputs:**1. A complex number $x = a + bi$, represented as a tuple `(a, b)`.2. A complex number $y = c + di$, represented as a tuple `(c, d)`.**Goal:** Return the product of these two numbers $x \cdot y = z = g + hi$, represented as a tuple `(g, h)`. Need a hint? Click here Remember, multiplying complex numbers is just like multiplying polynomials. Distribute one of the complex numbers: $$(a + bi)(c + di) = a(c + di) + bi(c + di)$$Then multiply through, and group the real and imaginary terms together. A video explanation can be found here.
###Code
@exercise
def complex_mult(x : Complex, y : Complex) -> Complex:
# Fill in your own code
return (x[0]*y[0] - x[1]*y[1], x[0]*y[1]+x[1]*y[0])
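# Optional cross-check (not part of the exercise): Python's built-in complex type
# should agree with the hand-rolled multiplication, e.g. (3 + 4i)(1 - 2i) = 11 - 2i.
z = (3 + 4j) * (1 - 2j)
assert complex_mult((3, 4), (1, -2)) == (z.real, z.imag)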
###Output
Success!
###Markdown
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynbExercise-3:-Complex-multiplication.).* Complex ConjugateBefore we discuss any other complex operations, we have to cover the **complex conjugate**. The conjugate is a simple operation: given a complex number $x = a + bi$, its complex conjugate is $\overline{x} = a - bi$.The conjugate allows us to do some interesting things. The first and probably most important is multiplying a complex number by its conjugate:$$x \cdot \overline{x} = (a + bi)(a - bi)$$Notice that the second expression is a difference of squares:$$(a + bi)(a - bi) = a^2 - (bi)^2 = a^2 - b^2i^2 = a^2 + b^2$$This means that a complex number multiplied by its conjugate always produces a non-negative real number.Another property of the conjugate is that it distributes over both complex addition and complex multiplication:$$\overline{x + y} = \overline{x} + \overline{y} \\\overline{x \cdot y} = \overline{x} \cdot \overline{y}$$ Exercise 4: Complex conjugate.**Input:** A complex number $x = a + bi$, represented as a tuple `(a, b)`.**Goal:** Return $\overline{x} = g + hi$, the complex conjugate of $x$, represented as a tuple `(g, h)`. Need a hint? Click here A video expanation can be found here.
###Code
@exercise
def conjugate(x : Complex) -> Complex:
return (x[0], -x[1])
###Output
Success!
###Markdown
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynbExercise-4:-Complex-conjugate.).* Complex DivisionThe next use for the conjugate is complex division. Let's take two complex numbers: $x = a + bi$ and $y = c + di \neq 0$ (not even complex numbers let you divide by $0$). What does $\frac{x}{y}$ mean?Let's expand $x$ and $y$ into their component forms:$$\frac{x}{y} = \frac{a + bi}{c + di}$$Unfortunately, it isn't very clear what it means to divide by a complex number. We need some way to move either all real parts or all imaginary parts into the numerator. And thanks to the conjugate, we can do just that. Using the fact that any number (except $0$) divided by itself equals $1$, and any number multiplied by $1$ equals itself, we get:$$\frac{x}{y} = \frac{x}{y} \cdot 1 = \frac{x}{y} \cdot \frac{\overline{y}}{\overline{y}} = \frac{x\overline{y}}{y\overline{y}} = \frac{(a + bi)(c - di)}{(c + di)(c - di)} = \frac{(a + bi)(c - di)}{c^2 + d^2}$$By doing this, we re-wrote our division problem to have a complex multiplication expression in the numerator, and a real number in the denominator. We already know how to multiply complex numbers, and dividing a complex number by a real number is as simple as dividing both parts of the complex number separately:$$\frac{a + bi}{r} = \frac{a}{r} + \frac{b}{r}i$$ Exercise 5: Complex division.**Inputs:**1. A complex number $x = a + bi$, represented as a tuple `(a, b)`.2. A complex number $y = c + di \neq 0$, represented as a tuple `(c, d)`.**Goal:** Return the result of the division $\frac{x}{y} = \frac{a + bi}{c + di} = g + hi$, represented as a tuple `(g, h)`. Need a hint? Click here A video explanation can be found here.
###Code
@exercise
def complex_div(x : Complex, y : Complex) -> Complex:
numer = complex_mult(x, (y[0], -y[1]))
denom = (y[0]**2 + y[1]**2)
return (numer[0]/denom, numer[1]/denom)
###Output
Success!
###Markdown
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynbExercise-5:-Complex-division.).* Geometric Perspective The Complex PlaneYou may recall that real numbers can be represented geometrically using the [number line](https://en.wikipedia.org/wiki/Number_line) - a line on which each point represents a real number. We can extend this representation to include imaginary and complex numbers, which gives rise to an entirely different number line: the imaginary number line, which only intersects with the real number line at $0$.A complex number has two components - a real component and an imaginary component. As you no doubt noticed from the exercises, these can be represented by two real numbers - the real component, and the real coefficient of the imaginary component. This allows us to map complex numbers onto a two-dimensional plane - the **complex plane**. The most common mapping is the obvious one: $a + bi$ can be represented by the point $(a, b)$ in the **Cartesian coordinate system**.This mapping allows us to apply complex arithmetic to geometry, and, more importantly, apply geometric concepts to complex numbers. Many properties of complex numbers become easier to understand when viewed through a geometric lens. ModulusOne such property is the **modulus** operator. This operator generalizes the **absolute value** operator on real numbers to the complex plane. Just like the absolute value of a number is its distance from $0$, the modulus of a complex number is its distance from $0 + 0i$. Using the distance formula, if $x = a + bi$, then:$$|x| = \sqrt{a^2 + b^2}$$There is also a slightly different, but algebraically equivalent definition:$$|x| = \sqrt{x \cdot \overline{x}}$$Like the conjugate, the modulus distributes over multiplication.$$|x \cdot y| = |x| \cdot |y|$$Unlike the conjugate, however, the modulus doesn't distribute over addition. Instead, the interaction of the two comes from the triangle inequality:$$|x + y| \leq |x| + |y|$$ Exercise 6: Modulus.**Input:** A complex number $x = a + bi$, represented as a tuple `(a, b)`.**Goal:** Return the modulus of this number, $|x|$.> Python's exponentiation operator is `**`, so $2^3$ is `2 ** 3` in Python.>> You will probably need some mathematical functions to solve the next few tasks. They are available in Python's math library. You can find the full list and detailed information in the [official documentation](https://docs.python.org/3/library/math.html). Need a hint? Click here In particular, you might be interested in Python's square root function. A video explanation can be found here.
###Code
@exercise
def modulus(x : Complex) -> float:
return (x[0]**2 + x[1]**2)**0.5
###Output
Success!
###Markdown
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynbExercise-6:-Modulus.).* Imaginary ExponentsThe next complex operation we're going to need is exponentiation. Raising an imaginary number to an integer power is a fairly simple task, but raising a number to an imaginary power, or raising an imaginary (or complex) number to a real power isn't quite as simple.Let's start with raising real numbers to imaginary powers. Specifically, let's start with a rather special real number - Euler's constant, $e$:$$e^{i\theta} = \cos \theta + i\sin \theta$$(Here and later in this tutorial $\theta$ is measured in radians.)Explaining why that happens is somewhat beyond the scope of this tutorial, as it requires some calculus, so we won't do that here. If you are curious, you can see [this video](https://youtu.be/v0YEaeIClKY) for a beautiful intuitive explanation, or [the Wikipedia article](https://en.wikipedia.org/wiki/Euler%27s_formulaProofs) for a more mathematically rigorous proof.Here are some examples of this formula in action:$$e^{i\pi/4} = \frac{1}{\sqrt{2}} + \frac{i}{\sqrt{2}} \\e^{i\pi/2} = i \\e^{i\pi} = -1 \\e^{2i\pi} = 1$$> One interesting consequence of this is Euler's Identity:>> $$e^{i\pi} + 1 = 0$$>> While this doesn't have any notable uses, it is still an interesting identity to consider, as it combines 5 fundamental constants of algebra into one expression.We can also calculate complex powers of $e$ as follows:$$e^{a + bi} = e^a \cdot e^{bi}$$Finally, using logarithms to express the base of the exponent as $r = e^{\ln r}$, we can use this to find complex powers of any positive real number. Exercise 7: Complex exponents.**Input:** A complex number $x = a + bi$, represented as a tuple `(a, b)`.**Goal:** Return the complex number $e^x = e^{a + bi} = g + hi$, represented as a tuple `(g, h)`.> Euler's constant $e$ is available in the [math library](https://docs.python.org/3/library/math.htmlmath.e),> as are [Python's trigonometric functions](https://docs.python.org/3/library/math.htmltrigonometric-functions).
###Code
@exercise
def complex_exp(x : Complex) -> Complex:
n = math.e**(x[0])
return (n*math.cos(x[1]), n*math.sin(x[1]))
###Output
Success!
###Markdown
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynbExercise-7:-Complex-exponents.).* Exercise 8*: Complex powers of real numbers.**Inputs:**1. A non-negative real number $r$.2. A complex number $x = a + bi$, represented as a tuple `(a, b)`.**Goal:** Return the complex number $r^x = r^{a + bi} = g + hi$, represented as a tuple `(g, h)`.> Remember, you can use functions you have defined previously Need a hint? Click here You can use the fact that $r = e^{\ln r}$ to convert exponent bases. Remember though, $\ln r$ is only defined for positive numbers - make sure to check for $r = 0$ separately!
###Code
@exercise
def complex_exp_real(r : float, x : Complex) -> Complex:
if r == 0: return (0, 0)
import numpy as np
n = r**x[0]
return (n*np.cos(x[1]*np.log(r)), n*np.sin(x[1]*np.log(r)))
###Output
Success!
###Markdown
Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynbExercise-8*:-Complex-powers-of-real-numbers.). Polar coordinatesConsider the expression $e^{i\theta} = \cos\theta + i\sin\theta$. Notice that if we map this number onto the complex plane, it will land on the **unit circle** around $0 + 0i$. This means that its modulus is always $1$. You can also verify this algebraically: $\cos^2\theta + \sin^2\theta = 1$.Using this fact we can represent complex numbers using **polar coordinates**. In a polar coordinate system, a point is represented by two numbers: its direction from origin, represented by an angle from the $x$ axis, and how far away it is in that direction.Another way to think about this is that we're taking a point that is $1$ unit away (which is on the unit circle) in the specified direction, and multiplying it by the desired distance. And to get the point on the unit circle, we can use $e^{i\theta}$.A complex number of the format $r \cdot e^{i\theta}$ will be represented by a point which is $r$ units away from the origin, in the direction specified by the angle $\theta$.Sometimes $\theta$ will be referred to as the number's **phase**. Exercise 9: Cartesian to polar conversion.**Input:** A complex number $x = a + bi$, represented as a tuple `(a, b)`.**Goal:** Return the polar representation of $x = re^{i\theta}$, i.e., the distance from origin $r$ and phase $\theta$ as a tuple `(r, θ)`.* $r$ should be non-negative: $r \geq 0$* $\theta$ should be between $-\pi$ and $\pi$: $-\pi < \theta \leq \pi$ Need a hint? Click here Python has a separate function for calculating $\theta$ for this purpose. A video explanation can be found here.
###Code
@exercise
def polar_convert(x : Complex) -> Polar:
    r = (x[0]**2 + x[1]**2)**0.5
    if r == 0: return (0, 0)
    theta = math.acos(x[0]/r)
    if x[1] < 0: return (r, -theta)
    return (r, theta)
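# Aside (not in the original notebook): the standard library's cmath module offers
# the same conversion (cmath.polar / cmath.rect) and can be used to cross-check.
import cmath
print(cmath.polar(complex(3, 4)))  # (5.0, ~0.9273)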
###Output
Success!
###Markdown
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynbExercise-9:-Cartesian-to-polar-conversion.).* Exercise 10: Polar to Cartesian conversion.**Input:** A complex number $x = re^{i\theta}$, represented in polar form as a tuple `(r, θ)`.**Goal:** Return the Cartesian representation of $x = a + bi$, represented as a tuple `(a, b)`.
###Code
@exercise
def cartesian_convert(x : Polar) -> Complex:
return (x[0]*math.cos(x[1]), x[0]*math.sin(x[1]))
###Output
Success!
###Markdown
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynbExercise-10:-Polar-to-Cartesian-conversion.).* Exercise 11: Polar multiplication.**Inputs:**1. A complex number $x = r_{1}e^{i\theta_1}$ represented in polar form as a tuple `(r1, θ1)`.2. A complex number $y = r_{2}e^{i\theta_2}$ represented in polar form as a tuple `(r2, θ2)`.**Goal:** Return the result of the multiplication $x \cdot y = z = r_3e^{i\theta_3}$, represented in polar form as a tuple `(r3, θ3)`.* $r_3$ should be non-negative: $r_3 \geq 0$* $\theta_3$ should be between $-\pi$ and $\pi$: $-\pi < \theta_3 \leq \pi$* Try to avoid converting the numbers into Cartesian form. Need a hint? Click here Remember, a number written in polar form already involves multiplication. What is $r_1e^{i\theta_1} \cdot r_2e^{i\theta_2}$? Need another hint? Click here Is your θ not coming out correctly? Remember you might have to check your boundaries and adjust it to be in the range requested.
###Code
@exercise
def polar_mult(x : Polar, y : Polar) -> Polar:
if x[1]+y[1] > math.pi: return (x[0]*y[0], -2*math.pi + x[1]+y[1])
if x[1]+y[1] < -math.pi: return (x[0]*y[0], 2*math.pi + x[1]+y[1])
else: return (x[0]*y[0], x[1]+y[1])
###Output
Success!
###Markdown
*Can't come up with a solution? See the explained solution in the [Complex Arithmetic Workbook](./Workbook_ComplexArithmetic.ipynbExercise-11:-Polar-multiplication.).* Exercise 12**: Arbitrary complex exponents.You now know enough about complex numbers to figure out how to raise a complex number to a complex power.**Inputs:**1. A complex number $x = a + bi$, represented as a tuple `(a, b)`.2. A complex number $y = c + di$, represented as a tuple `(c, d)`.**Goal:** Return the result of raising $x$ to the power of $y$: $x^y = (a + bi)^{c + di} = z = g + hi$, represented as a tuple `(g, h)`. Need a hint? Click here Convert $x$ to polar form, and raise the result to the power of $y$.
###Code
@exercise
def complex_exp_arbitrary(x : Complex, y : Complex) -> Complex:
(a, b) = x
(c, d) = y
# Convert x to polar form
r = math.sqrt(a ** 2 + b ** 2)
theta = math.atan2(b, a)
# Special case for r = 0
if (r == 0):
return (0, 0)
lnr = math.log(r)
exponent = math.exp(lnr * c - d * theta)
real = exponent * (math.cos(lnr * d + theta * c ))
imaginary = exponent * (math.sin(lnr * d + theta * c))
return (real, imaginary)
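# Optional cross-check (not part of the exercise): Python's built-in complex
# exponentiation should agree up to floating-point error.
print(complex(1, 2) ** complex(0.5, -1))
print(complex_exp_arbitrary((1, 2), (0.5, -1)))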
###Output
_____no_output_____ |
Number-Plate-Detection.ipynb | ###Markdown
Number Plate Detection using OpenCV- This project was built while learning OpenCV and can be developed further- Anyone can contribute to this project and use it as well Install the Dependencies
###Code
!pip install easyocr
!pip install imutils
import cv2
from matplotlib import pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Reading in Image, Grayscale and Blur
###Code
img = cv2.imread('image4.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
plt.imshow(cv2.cvtColor(gray, cv2.COLOR_BGR2RGB))
plt.show()
###Output
_____no_output_____
###Markdown
Applying a Filter and Finding Edges for Localization - Number plates are usually printed on flat, light-coloured surfaces of the car, so converting to grayscale and detecting edges makes it much easier to locate the plate and recognise its text.
###Code
bfilter = cv2.bilateralFilter(gray, 11, 17, 17) #Noise reduction
edged = cv2.Canny(bfilter, 30, 200) #Edge detection
plt.imshow(cv2.cvtColor(edged, cv2.COLOR_BGR2RGB))
plt.show()
###Output
_____no_output_____
###Markdown
Finding Contours and Applying a Mask
###Code
import imutils
keypoints = cv2.findContours(edged.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours = imutils.grab_contours(keypoints)
contours = sorted(contours, key=cv2.contourArea, reverse=True)[:10]
location = None
for contour in contours:
approx = cv2.approxPolyDP(contour, 10, True)
if len(approx) == 4:
location = approx
break
location
mask = np.zeros(gray.shape, np.uint8)
new_image = cv2.drawContours(mask, [location], 0,255, -1)
new_image = cv2.bitwise_and(img, img, mask=mask)
plt.imshow(cv2.cvtColor(new_image, cv2.COLOR_BGR2RGB))
(x,y) = np.where(mask==255)
(x1, y1) = (np.min(x), np.min(y))
(x2, y2) = (np.max(x), np.max(y))
cropped_image = gray[x1:x2+1, y1:y2+1]
plt.imshow(cv2.cvtColor(cropped_image, cv2.COLOR_BGR2RGB))
plt.show()
###Output
_____no_output_____
###Markdown
Using Easy OCR To Read Text
###Code
import easyocr
reader = easyocr.Reader(['en'])
result = reader.readtext(cropped_image)
result
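
# Illustrative addition: readtext returns (bounding box, text, confidence) tuples, so when
# several strings are detected it can be safer to keep the highest-confidence one.
best = max(result, key=lambda detection: detection[2]) if result else None
best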
###Output
_____no_output_____
###Markdown
Rendering the Result
###Code
text = result[0][-2]
font = cv2.FONT_HERSHEY_SIMPLEX
res = cv2.putText(img, text=text, org=(approx[0][0][0], approx[1][0][1]+60), fontFace=font, fontScale=1, color=(0,255,0), thickness=2, lineType=cv2.LINE_AA)
res = cv2.rectangle(img, tuple(approx[0][0]), tuple(approx[2][0]), (0,255,0),3)
plt.imshow(cv2.cvtColor(res, cv2.COLOR_BGR2RGB))
plt.show()
###Output
_____no_output_____ |
scraping/daumNews_scrap_baseline.ipynb | ###Markdown
Import Package
###Code
import csv
from socket import MSG_MCAST
from urllib.parse import quote, quote_plus
import requests
from bs4 import BeautifulSoup
import pandas as pd
import re
###Output
_____no_output_____
###Markdown
Set the date range
###Code
strDate = '20200901' # start date
endDate = '20200930' # end date
###Output
_____no_output_____
###Markdown
Save to CSV
###Code
filename = f"daum_news_{strDate}-{endDate}.csv"
f = open(filename, "w", encoding="utf-8-sig", newline="")
writer = csv.writer(f)
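
# Optional addition (sketch): write a header row so the CSV columns are labelled
writer.writerow(["index", "title", "published_at", "url", "body"])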
###Output
_____no_output_____
###Markdown
Specify the User-Agent
> [WhatismyUserAgent (Ctrl+click to check your own User-Agent)](https://www.whatismybrowser.com/detect/what-is-my-user-agent)
###Code
headers = {"User-Agent" : "your personal User-Agent string"}
###Output
_____no_output_____
###Markdown
Build the list of dates for the loop
###Code
dt_index = pd.date_range(start=strDate, end=endDate)
dt_list = dt_index.strftime("%Y%m%d").tolist()
###Output
_____no_output_____
###Markdown
Start scraping
###Code
for i in dt_list:
    print('date', i)
    page_url = f"https://news.daum.net/breakingnews/?page=10000&regDate={i}"
page =requests.get(page_url, headers=headers)
page_soup = BeautifulSoup(page.text, "lxml")
last_page = page_soup.find("em", attrs="num_page").get_text()
lastPage_num = re.sub(r'[^0-9]','',last_page)
# print(lastPage_num)
for j in range(1, int(lastPage_num)+1):
        main_url = f"https://news.daum.net/breakingnews/?page={j}&regDate={i}"  # list-page URL
res = requests.get(main_url, headers=headers)
if res.status_code == 200:
print(i, int(lastPage_num), '중' ,j,'page',round(j/int(lastPage_num)*100, 2),'%', main_url, 'status:',res.status_code)
soup = BeautifulSoup(res.text, "lxml") # soup으로 저장
main = soup.find("ul", attrs={"class":"list_news2 list_allnews"})
news = main.find_all("strong", attrs={"class":"tit_thumb"})
cnt = 0
for new in news:
                urls = new.select_one("a")["href"]  # URL of each article listed on the page
# print(urls)
                result = requests.get(urls, headers=headers) # fetch the individual article page
if result.status_code == 200:
news_soup = BeautifulSoup(result.text, "lxml")
                    # Save the article title, publication time, and body text
title = news_soup.find("h3", attrs={"tit_view"}).get_text().strip()
pubdate = news_soup.find("span", attrs={"num_date"}).get_text().strip()
text = news_soup.find("div", attrs={"news_view"}).get_text().strip()
cnt += 1
# print(j,'of',cnt,'번째 기사')
# print(i,j,'of',cnt,'번째 기사', urls,'status:', result.status_code)
writer.writerow([cnt, title, pubdate, urls, text])
else:
print(i,j,'of',cnt,'번째 기사','error_code :',result.status_code, urls)
pass
else:
print(i,'page : ',j,'error_code :',res.status_code, main_url)
pass
###Output
_____no_output_____ |
notebooks/dominate_builder.ipynb | ###Markdown
Resources for building my website.- [w3scools](https://www.w3schools.com/)- [w3schools on Bootstrap 4](https://www.w3schools.com/bootstrap4/default.asp)- [dominate docs](https://github.com/Knio/dominate/)- [draperjames.github.io](https://draperjames.github.io/)- [github projects through the API](https://api.github.com/users/draperjames/repos?per_page=100)- [nbviewer rendering of this page from the live github repo](http://nbviewer.jupyter.org/github/draperjames/draperjames.github.io/blob/master/dominate_builder.ipynb)
###Code
from dominate import tags
# Set the title of the site.
title = tags.title("James Draper : Computational Biologist")
# Generate the metadata.
meta_data = [title]
meta_data += [tags.meta(charset="utf-8")]
meta_data += [tags.meta(name="viewport", content="width=device-width, initial-scale=1")]
# Bootstrap CSS CDN
css = [tags.link(rel="stylesheet", href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css")]
# JQuery CDN
js = [tags.script(src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js")]
# Popper CDN (appended so the jQuery script above is not overwritten)
js += [tags.script(src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js")]
# BootStrapJS CDN
js += [tags.script(src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js")]
extenal_code = css + js
# Build the HTML head
head = tags.head(meta_data+extenal_code)
# Create a jumbtron wrapper function
# FIXME: Remove this for readibility
@tags.div(cls='jumbotron text-center')
def jumbotron_center(title=None, subtitle=None, **kwargs):
tags.h1('{}'.format(title))
if subtitle is not None:
tags.p(subtitle)
projects = dict()
projects["guipyter"] = "https://draperjames.github.io/guipyter/"
projects["pandomics"] = "https://draperjames.github.io/pandomics/"
projects["notebook for build this page"] = "http://nbviewer.jupyter.org/github/draperjames/draperjames.github.io/blob/master/dominate_builder.ipynb"
lnk_list = []
for k,v in projects.items():
lnk = tags.a(k, href=v)
lnk_list += [lnk, tags.br()]
# Container content added.
container = tags.div(tags.h3("My OSS Projects"), tags.p(lnk_list), cls="col-sm-4")
container = tags.div(container, cls="container")
# Build the body.
body = []
body += [jumbotron_center(title="James Draper", subtitle="Computational Biologist")]
body += [container]
body = tags.body(body)
# body.render()
page = tags.html(head, body)
def make_page(file_path=None, page=None):
with open(file_path,"w") as f:
f.write(page.render())
# Overwrite index.html with the generated page.
make_page("index.html", page)
###Output
_____no_output_____
###Markdown
--- On going development--- Investigating the github API.- https://developer.github.com/v3/repos/- https://api.github.com/users/draperjames/repos?per_page=100
###Code
import time
import json
import pandas as pd
from urllib import request
github_api = "https://api.github.com/users/draperjames/repos?per_page=100"
# Request to github API
page = request.urlopen(github_api)
# Format the JSON content.
jpage = json.load(page)
# Make into DataFrame for handling and data reduction.
jdf = pd.DataFrame(jpage)
jdfs = jdf[["name", "pushed_at", "updated_at", "created_at", "size", "fork", "forks_count"]]
# Format the dates as datetime objects.
dt_format = pd.DataFrame([pd.to_datetime(jdfs.pushed_at),
pd.to_datetime(jdfs.updated_at),
pd.to_datetime(jdfs.created_at)]).T
jdfs = pd.concat([jdfs["name"], dt_format, jdfs.iloc[:, 4:]], axis=1)
# Just my repos
jdfs.loc[~jdfs.fork]
###Output
_____no_output_____
###Markdown
Creating a projects heatmap- [Styling pandas DataFrame](https://pandas.pydata.org/pandas-docs/stable/style.html)
###Code
time.mktime(jdfs.pushed_at[0].timetuple())
# Convert all of the pushed_at dates to time
pt = jdfs.pushed_at.apply(lambda x: time.mktime(x.timetuple()))
# Normalizing to the earliest date.
npt = pt - pt.min()
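
# Sketch of the heatmap idea referenced above (illustrative; the column name "recency" is my own):
# colour the normalised push times with the pandas Styler linked in the markdown cell.
repo_heat = jdfs.loc[~jdfs.fork, ["name"]].assign(recency=npt)
repo_heat.style.background_gradient(subset=["recency"])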
list(filter(lambda x:"commit" in x, jdf.columns))
jdf.git_commits_url[10]
import guipyter
guipyter.notebook_tools.notebook_current_convert("html")
###Output
This notebook has been saved.
Converted notebook.
|
code/ch6-aggregating-grouping-merging/1-Aggregating-Grouping-Example.ipynb | ###Markdown
Examples of aggregating and grouping data
###Code
import pandas as pd
from pathlib import Path
input_file = Path.cwd() / 'data' / 'raw' / 'sample_sales.xlsx'
df = pd.read_excel(input_file)
df.head()
df['price'].agg(['mean', 'std', 'min', 'max'])
df.agg(['mean', 'max'])
agg_cols = {'quantity': 'sum',
'price': ['mean', 'std'],
'invoice': 'count',
'extended amount': 'sum'}
df.agg(agg_cols).fillna(0)
df.groupby(['product']).sum()
df.groupby(['product'])['quantity'].sum()
prod_cols = {'quantity': 'sum'}
df.groupby(['product']).agg(prod_cols)
prod_cols = {'quantity': ['sum', 'mean', 'std', 'max']}
df.groupby(['product']).agg(prod_cols)
df.groupby(['company', 'product']).agg(prod_cols).fillna(0)
df.groupby(['company', 'product']).agg(prod_cols).reset_index()
df.groupby(['company']).agg(invoice_total=('invoice', 'count'),
max_purchase=('extended amount', 'max'))
df.groupby(['company']).agg({'invoice': 'count',
'extended amount': 'max'})
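
# A common follow-on step (sketch, not from the original chapter): flatten the two-level
# columns that .agg with multiple functions produces, so the result is easier to use downstream.
multi = df.groupby(['product']).agg({'quantity': ['sum', 'mean']})
multi.columns = ['_'.join(col) for col in multi.columns]
multi.reset_index()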
###Output
_____no_output_____
###Markdown
Pivot table and crosstab
###Code
pd.pivot_table(df, index=['company'], columns=['product'],
values=['extended amount'],
aggfunc='sum',
margins=True,
fill_value=0)
pd.pivot_table(df, index=['company'], columns=['product'],
values=['extended amount'],
aggfunc=['sum', 'mean', 'max'],
margins=True,
fill_value=0)
pd.pivot_table(df, index=['company', 'product'],
values=['extended amount'],
aggfunc=['sum'],
margins=True,
fill_value=0)
pd.crosstab(df['company'], df['product'])
pd.crosstab(df['company'], df['product'], values=df['extended amount'], aggfunc='sum', normalize='index')
###Output
_____no_output_____ |
HW2/HW2 - 1 .ipynb | ###Markdown
Name: Yan Sun PID: A53240727 1. The code currently does not perform any train/test splits. Split the data into training, validation, and test sets, via 1/3, 1/3, 1/3 splits. Use the first third, second third, and last third of the data (respectively). After training on the training set, report the accuracy of the classifier on the validation and test sets (1 mark).
###Code
import numpy
import urllib
import scipy.optimize
import random
from math import exp
from math import log
def parseData(fname):
for l in urllib.urlopen(fname):
yield eval(l)
print("Reading data...")
data = list(parseData("file:beer_50000.json"))
print("done")
def feature(datum):
feat = [1, datum['review/taste'], datum['review/appearance'], \
datum['review/aroma'], datum['review/palate'], \
datum['review/overall']]
return feat
X = [feature(d) for d in data]
y = [d['beer/ABV'] >= 6.5 for d in data]
def inner(x,y):
return sum([x[i]*y[i] for i in range(len(x))])
def sigmoid(x):
return 1.0 / (1 + exp(-x))
length = int(len(data)/3)
X_train = X[:length]
y_train = y[:length]
X_validation = X[length:2*length]
y_validation = y[length:2*length]
X_test = X[2*length:]
y_test = y[2*length:]
##################################################
# Logistic regression by gradient ascent #
##################################################
# NEGATIVE Log-likelihood
def f(theta, X, y, lam):
loglikelihood = 0
for i in range(len(X)):
logit = inner(X[i], theta)
loglikelihood -= log(1 + exp(-logit))
if not y[i]:
loglikelihood -= logit
for k in range(len(theta)):
loglikelihood -= lam * theta[k]*theta[k]
# for debugging
# print("ll =" + str(loglikelihood))
return -loglikelihood
# NEGATIVE Derivative of log-likelihood
def fprime(theta, X, y, lam):
dl = [0]*len(theta)
for i in range(len(X)):
logit = inner(X[i], theta)
for k in range(len(theta)):
dl[k] += X[i][k] * (1 - sigmoid(logit))
if not y[i]:
dl[k] -= X[i][k]
for k in range(len(theta)):
dl[k] -= lam*2*theta[k]
return numpy.array([-x for x in dl])
##################################################
# Train #
##################################################
def train(lam):
theta,_,_ = scipy.optimize.fmin_l_bfgs_b(f, [0]*len(X[0]), \
fprime, pgtol = 10, args = (X_train, y_train, lam))
return theta
X_data = [X_train, X_validation, X_test]
y_data = [y_train, y_validation, y_test]
symbol = ['train', 'valid', 'test']
print 'λ\tDataset\t\tTruePositive\tFalsePositive\tTrueNegative\tFalseNegative\tAccuracy\tBER'
lam = 1.0
theta = train(lam)
#print theta
for i in range(3):
def TP(theta):
scores = [inner(theta,x) for x in X_data[i]]
predictions = [s > 0 for s in scores]
correct = [((a==1) and (b==1)) for (a,b) in zip(predictions,y_data[i])]
tp = sum(correct) * 1.0
return tp
def TN(theta):
scores = [inner(theta,x) for x in X_data[i]]
predictions = [s > 0 for s in scores]
correct = [((a==0) and (b==0)) for (a,b) in zip(predictions,y_data[i])]
tn = sum(correct) * 1.0
return tn
def FP(theta):
scores = [inner(theta,x) for x in X_data[i]]
predictions = [s > 0 for s in scores]
correct = [((a==1) and (b==0)) for (a,b) in zip(predictions,y_data[i])]
fp = sum(correct) * 1.0
return fp
def FN(theta):
scores = [inner(theta,x) for x in X_data[i]]
predictions = [s > 0 for s in scores]
correct = [((a==0) and (b==1)) for (a,b) in zip(predictions,y_data[i])]
fn = sum(correct) * 1.0
return fn
if i == 1 or i == 2 :
tp = TP(theta)
fp = FP(theta)
tn = TN(theta)
fn = FN(theta)
TPR = tp / (tp + fn)
TNR = tn / (tn + fp)
BER = 1 - 0.5 * (TPR + TNR)
accuracy = (tp+tn)/(tp+tn+fp+fn)
print str(lam)+'\t'+symbol[i]+'\t\t'+str(tp)+'\t\t'+str(fp)+'\t\t'+str(tn)+'\t\t'+\
str(fn)+'\t\t'+str(accuracy)+'\t'+str(BER)
# X_data, y_data can be changed to train, validation or test data
def TP(theta):
scores = [inner(theta,x) for x in X_data[i]]
predictions = [s > 0 for s in scores]
correct = [((a==1) and (b==1)) for (a,b) in zip(predictions,y_data[i])]
tp = sum(correct) * 1.0
return tp
def TN(theta):
scores = [inner(theta,x) for x in X_data[i]]
predictions = [s > 0 for s in scores]
correct = [((a==0) and (b==0)) for (a,b) in zip(predictions,y_data[i])]
tn = sum(correct) * 1.0
return tn
def FP(theta):
scores = [inner(theta,x) for x in X_data[i]]
predictions = [s > 0 for s in scores]
correct = [((a==1) and (b==0)) for (a,b) in zip(predictions,y_data[i])]
fp = sum(correct) * 1.0
return fp
def FN(theta):
scores = [inner(theta,x) for x in X_data[i]]
predictions = [s > 0 for s in scores]
correct = [((a==0) and (b==1)) for (a,b) in zip(predictions,y_data[i])]
fn = sum(correct) * 1.0
return fn
tp = TP(theta)
fp = FP(theta)
tn = TN(theta)
fn = FN(theta)
TPR = tp / (tp + fn)
TNR = tn / (tn + fp)
BER = 1 - 0.5 * (TPR + TNR)
accuracy = (tp+tn)/(tp+tn+fp+fn)
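
# Added: report the final metrics computed above for the currently selected dataset (index i)
print symbol[i] + '\taccuracy: ' + str(accuracy) + '\tBER: ' + str(BER)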
###Output
_____no_output_____ |
BO_trials/BO_log_scale.ipynb | ###Markdown
Bayesian Optimisation Verification
###Code
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from matplotlib.colors import LogNorm
from scipy.interpolate import interp1d
from scipy import interpolate
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from scipy import stats
from scipy.stats import norm
from sklearn.metrics.pairwise import euclidean_distances
from scipy.spatial.distance import cdist
from scipy.optimize import fsolve
import math
def warn(*args, **kwargs):
pass
import warnings
warnings.warn = warn
###Output
_____no_output_____
###Markdown
Trial on TiOx/SiOx Temperature vs. S10_HF
###Code
#import normal data sheet at 85 C (time:0~5000s)
address = 'data/degradation.xlsx'
x_normal = []
y_normal = []
df = pd.read_excel(address,sheet_name = 'normal data',usecols = [0],names = None,nrows = 5000)
df_85 = df.values.tolist()
df = pd.read_excel(address,sheet_name = 'normal data',usecols = [3],names = None,nrows = 5000)
df_85L = df.values.tolist()
#import smooth data sheet at 85 C (time:0~5000s)
address = 'data/degradation.xlsx'
df = pd.read_excel(address,sheet_name = 'smooth data',usecols = [0],names = None,nrows = 5000)
df_85s = df.values.tolist()
df = pd.read_excel(address,sheet_name = 'smooth data',usecols = [3],names = None,nrows = 5000)
df_85Ls = df.values.tolist()
#import normal data sheet at 120 C (time:0~5000s)
address = 'data/degradation.xlsx'
x_normal = []
y_normal = []
df = pd.read_excel(address,sheet_name = 'normal data',usecols = [0],names = None,nrows = 5000)
df_120 = df.values.tolist()
df = pd.read_excel(address,sheet_name = 'normal data',usecols = [3],names = None,nrows = 5000)
df_120L = df.values.tolist()
#import smooth data sheet at 120 C (time:0~5000s)
address = 'data/degradation.xlsx'
df = pd.read_excel(address,sheet_name = 'smooth data',usecols = [0],names = None,nrows = 5000)
df_120s = df.values.tolist()
df = pd.read_excel(address,sheet_name = 'smooth data',usecols = [3],names = None,nrows = 5000)
df_120Ls = df.values.tolist()
# Convert the 120 C normal data to arrays (the 7 training points are selected further below)
x_normal = np.array(df_120).T
y_normal = np.array(df_120L).T
x_normal = x_normal.reshape((5000))
y_normal = y_normal.reshape((5000))
def plot (X,X_,y_mean,y,y_cov,gp,kernel):
#plot function
plt.figure()
plt.plot(X_, y_mean, 'k', lw=3, zorder=9)
plt.fill_between(X_, y_mean - np.sqrt(np.diag(y_cov)),y_mean + np.sqrt(np.diag(y_cov)),alpha=0.5, color='k')
plt.scatter(X[:, 0], y, c='r', s=50, zorder=10, edgecolors=(0, 0, 0))
plt.tick_params(axis='y', colors = 'white')
plt.tick_params(axis='x', colors = 'white')
plt.ylabel('Lifetime',color = 'white')
plt.xlabel('Time',color = 'white')
plt.tight_layout()
# Preparing training set
# For log scaled plot
x_loop = np.array([1,10,32,100,316,1000,3162])
X = x_normal[x_loop].reshape(x_loop.size)
y = y_normal[x_loop]
X = X.reshape(x_loop.size,1)
X = np.log10(X)
MAX_x_value = np.log10(5000)
X_ = np.linspace(0,MAX_x_value, 5000)
# Kernel setting
length_scale_bounds_MAX = 0.5
length_scale_bounds_MIN = 1e-4
for length_scale_bounds_MAX in (0.3,0.5,0.7):
kernel = 1.0 * RBF(length_scale=20,length_scale_bounds=(length_scale_bounds_MIN, length_scale_bounds_MAX)) + WhiteKernel(noise_level=0.00000001)
gp = GaussianProcessRegressor(kernel=kernel,alpha=0.0).fit(X, y)
y_mean, y_cov = gp.predict(X_[:, np.newaxis], return_cov=True)
plot (X,X_,y_mean,y,y_cov,gp,kernel)
# Find the minimum value in the bound
# 5000 * 5000
# Find minimum value in the last row as the minimum value for the bound
def ucb(X , gp, dim, delta):
"""
Calculates the GP-UCB acquisition function values
Inputs: gp: The Gaussian process, also contains all data
x:The point at which to evaluate the acquisition function
Output: acq_value: The value of the aquisition function at point x
"""
mean, var = gp.predict(X[:, np.newaxis], return_cov=True)
#var.flags['WRITEABLE']=True
#var[var<1e-10]=0
mean = np.atleast_2d(mean).T
var = np.atleast_2d(var).T
beta = 2*np.log(np.power(5000,2.1)*np.square(math.pi)/(3*delta))
return mean - np.sqrt(beta)* np.sqrt(np.diag(var))
acp_value = ucb(X_, gp, 0.1, 5)
X_min = np.argmin(acp_value[-1])
print(acp_value[-1,X_min])
print(np.argmin(acp_value[-1]))
print(min(acp_value[-1]))
# Preparing training set
x_loop = np.array([1,10,32,100,316,1000,3162])
X = x_normal[x_loop].reshape(x_loop.size)
y = y_normal[x_loop]
X = X.reshape(x_loop.size,1)
X = np.log10(X)
MAX_x_value = np.log10(5000)
X_ = np.linspace(0,MAX_x_value, 5000)
# Kernel setting
length_scale_bounds_MAX = 0.4
length_scale_bounds_MIN = 1e-4
kernel = 1.0 * RBF(length_scale=20,length_scale_bounds=(length_scale_bounds_MIN, length_scale_bounds_MAX)) + WhiteKernel(noise_level=0.0001)
gp = GaussianProcessRegressor(kernel=kernel,alpha=0.0).fit(X, y)
y_mean, y_cov = gp.predict(X_[:, np.newaxis], return_cov=True)
acp_value = ucb(X_, gp, 0.1, 5)
ucb_y_min = acp_value[-1]
print (min(ucb_y_min))
X_min = np.argmin(acp_value[-1])
print(acp_value[-1,X_min])
print(np.argmin(acp_value[-1]))
print(min(acp_value[-1]))
plt.figure()
plt.plot(X_, y_mean, 'k', lw=3, zorder=9)
plt.plot(X_, ucb_y_min, 'x', lw=3, zorder=9)
# plt.fill_between(X_, y_mean, ucb_y_min,alpha=0.5, color='k')
plt.scatter(X[:, 0], y, c='r', s=50, zorder=10, edgecolors=(0, 0, 0))
plt.tick_params(axis='y', colors = 'white')
plt.tick_params(axis='x', colors = 'white')
plt.ylabel('Lifetime',color = 'white')
plt.xlabel('Time',color = 'white')
plt.tight_layout()
acp_value = ucb(X_, gp, 0.1, 5)
X_min = np.argmin(acp_value[-1])
print(acp_value[-1,X_min])
print(np.argmin(acp_value[-1]))
print(min(acp_value[-1]))
# Iterate i times with mins value point of each ucb bound
# Initiate with 7 data points, apply log transformation to them
x_loop = np.array([1,10,32,100,316,1000,3162])
X = x_normal[x_loop].reshape(x_loop.size)
Y = y_normal[x_loop]
X = X.reshape(x_loop.size,1)
X = np.log10(X)
MAX_x_value = np.log10(5000)
X_ = np.linspace(0,MAX_x_value, 5000)
# Kernel setting
length_scale_bounds_MAX = 0.5
length_scale_bounds_MIN = 1e-4
kernel = 1.0 * RBF(length_scale=20,length_scale_bounds=(length_scale_bounds_MIN, length_scale_bounds_MAX)) + WhiteKernel(noise_level=0.0001)
gp = GaussianProcessRegressor(kernel=kernel,alpha=0.0).fit(X, Y)
y_mean, y_cov = gp.predict(X_[:, np.newaxis], return_cov=True)
acp_value = ucb(X_, gp, 0.1, 5)
ucb_y_min = acp_value[-1]
plt.figure()
plt.plot(X_, y_mean, 'k', lw=3, zorder=9)
plt.fill_between(X_, y_mean, ucb_y_min,alpha=0.5, color='k')
plt.scatter(X[:, 0], Y, c='r', s=50, zorder=10, edgecolors=(0, 0, 0))
plt.tick_params(axis='y', colors = 'white')
plt.tick_params(axis='x', colors = 'white')
plt.ylabel('Lifetime',color = 'white')
plt.xlabel('Time',color = 'white')
plt.tight_layout()
# Change i to set extra data points
i=0
while i < 5 :
acp_value = ucb(X_, gp, 0.1, 5)
ucb_y_min = acp_value[-1]
index = np.argmin(acp_value[-1])
print(acp_value[-1,X_min])
print(min(acp_value[-1]))
# Protection to stop equal x value
while index in x_loop:
index = index - 50
x_loop = np.append(x_loop, index)
x_loop = np.sort(x_loop)
print (x_loop)
X = x_normal[x_loop].reshape(x_loop.size)
Y = y_normal[x_loop]
X = X.reshape(x_loop.size,1)
X = np.log10(X)
gp = GaussianProcessRegressor(kernel=kernel,alpha=0.0).fit(X, Y)
plt.plot(X_, y_mean, 'k', lw=3, zorder=9)
plt.fill_between(X_, y_mean, ucb_y_min,alpha=0.5, color='k')
plt.scatter(X[:, 0], Y, c='r', s=50, zorder=10, edgecolors=(0, 0, 0))
plt.tick_params(axis='y', colors = 'white')
plt.tick_params(axis='x', colors = 'white')
plt.ylabel('Lifetime',color = 'white')
plt.xlabel('Time',color = 'white')
plt.title('cycle %d'%(i), color = 'white')
plt.tight_layout()
plt.show()
i+=1
print('X:', X, '\nY:', Y)
s = interpolate.InterpolatedUnivariateSpline(x_loop,Y)
x_uni = np.arange(0,5000,1)
y_uni = s(x_uni)
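
# Added sketch: quantify how closely the fitted spline tracks the smoothed reference curve
# (df_120Ls is assumed to hold the smoothed lifetime data loaded above).
ref = np.array(df_120Ls).reshape(-1)[:5000]
print('Mean absolute error vs. smoothed data:', np.mean(np.abs(y_uni - ref)))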
# Plot figure
plt.plot(df_120s,df_120Ls,'-',color = 'gray')
plt.plot(x_uni,y_uni,'-',color = 'red')
plt.plot(x_loop, Y,'x',color = 'black')
plt.tick_params(axis='y', colors = 'white')
plt.tick_params(axis='x', colors = 'white')
plt.ylabel('Lifetime',color = 'white')
plt.xlabel('Time',color = 'white')
plt.title('cycle %d'%(i+1), color = 'white')
plt.show()
###Output
-1.0178925921332458
-1.0178925921332458
[ 1 10 32 100 316 618 1000 3162]
|
Pre-processamento.ipynb | ###Markdown
Text pre-processing and train/test data split Imports
###Code
import pandas as pd
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from matplotlib import pyplot as plt
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn import metrics
from sklearn.model_selection import train_test_split, StratifiedKFold, GridSearchCV
from sklearn.linear_model import LogisticRegression
###Output
_____no_output_____
###Markdown
Pre-processing imports
###Code
import re
import nltk
import matplotlib.pyplot as plt
from nltk.stem import SnowballStemmer
nltk.download('stopwords')
from sklearn import preprocessing, decomposition, model_selection, metrics, pipeline
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from nltk.corpus import stopwords
###Output
[nltk_data] Downloading package stopwords to
[nltk_data] C:\Users\Felipe\AppData\Roaming\nltk_data...
[nltk_data] Package stopwords is already up-to-date!
###Markdown
Loading the data
###Code
df = pd.read_csv('./data/imdb-reviews-pt-br.csv')
###Output
_____no_output_____
###Markdown
Data manipulation
###Code
del df['id']
del df['text_en']
df_transformed = df.copy()
df_transformed.sentiment = df_transformed['sentiment'].map({'pos': 1, 'neg': 0})
def corrigir_nomes(nome):
    # The first two patterns are regular expressions, so apply them with re.sub
    # (str.replace would only match them literally); the rest of the original chain is kept.
    nome = re.sub(r'[,;&!?/.:"]', ' ', nome)
    nome = re.sub(r'(\d|\W)+|\w*\d\w*', ' ', nome)
    nome = nome.replace('ç', 'c').replace('é', 'e').replace('ã', 'a').replace('ê', 'e').replace('í', 'i').replace('á', 'a').replace('ó', 'o').replace('ú', 'u').replace('â', 'a').replace('ô', 'o')
    nome = nome.replace(' o ', ' ').replace(' a ', ' ').replace(' os ', ' ').replace(' as ', ' ').replace(' um ', ' ').replace(' uma ', ' ').replace(' uns ', ' ').replace(' umas ', ' ').replace(' ao ', ' ').replace(' aos ', ' ').replace(' à ', ' ').replace(' às ', ' ').replace(' da ', ' ').replace(' das ', ' ').replace(' do ', ' ').replace(' dos ', ' ').replace(' na ', ' ').replace(' nas ', ' ').replace(' no ', ' ').replace(' num ', ' ').replace(' numas ', ' ')
    return nome
df_transformed['text_pt'] = df_transformed['text_pt'].apply(corrigir_nomes)
df_transformed['text_pt'] = df_transformed['text_pt'].dropna()
df_transformed['text_pt'] = df_transformed['text_pt'].str.lower()
sbs = SnowballStemmer("portuguese")
# Stem every token of every review (the original loop stemmed whole documents and kept only
# the last result); the column name is kept from the original notebook.
df_transformed['text_pt_without_stemming'] = df_transformed['text_pt'].apply(
    lambda x: ' '.join(sbs.stem(word) for word in x.split()))
df_transformed.head()
stop_words = set(stopwords.words("portuguese"))
print(len(stop_words))
df_transformed['text_pt_without_stopwords'] = df_transformed['text_pt'].apply(lambda x: ' '.join([word for word in x.split() if word not in (stop_words)]))
df_transformed.head()
###Output
c:\users\felipe\anaconda3\envs\checklist3\lib\site-packages\ipykernel_launcher.py:1: UserWarning: Pandas doesn't allow columns to be created via a new attribute name - see https://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute-access
"""Entry point for launching an IPython kernel.
###Markdown
Train/test split
###Code
df_train, df_test = train_test_split(
df_transformed,
test_size = 0.3,
random_state = 42
)
###Output
_____no_output_____
###Markdown
Saving the train and test data
###Code
df_train.to_csv('./data_train_test/df_train_imbd.csv')
df_test.to_csv('./data_train_test/df_test_imbd.csv')
###Output
_____no_output_____
###Markdown
Reading the train and test data
###Code
df_train = pd.read_csv('./data_train_test/df_train_imbd.csv')
df_test = pd.read_csv('./data_train_test/df_test_imbd.csv')
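
# Illustrative sketch (not in the original notebook): turn the cleaned text into features
# with the TfidfVectorizer imported earlier, fitting only on the training split.
vectorizer = TfidfVectorizer(max_features=5000)
X_train_tfidf = vectorizer.fit_transform(df_train['text_pt_without_stopwords'])
X_test_tfidf = vectorizer.transform(df_test['text_pt_without_stopwords'])
print(X_train_tfidf.shape, X_test_tfidf.shape)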
###Output
_____no_output_____ |
examproject/Group DYE Code - Examproject final.ipynb | ###Markdown
1. Human capital accumulation Consider a worker living in **two periods**, $t \in \{1,2\}$. In each period she decides whether to **work ($l_t = 1$) or not ($l_t = 0$)**. She can *not* borrow or save and thus **consumes all of her income** in each period. If she **works** her **consumption** becomes:$$c_t = w h_t l_t\,\,\text{if}\,\,l_t=1$$where $w$ is **the wage rate** and $h_t$ is her **human capital**. If she does **not work** her consumption becomes:$$c_t = b\,\,\text{if}\,\,l_t=0$$where $b$ is the **unemployment benefits**. Her **utility of consumption** is: $$ \frac{c_t^{1-\rho}}{1-\rho} $$Her **disutility of working** is:$$ \gamma l_t $$ From period 1 to period 2, she **accumulates human capital** according to:$$ h_2 = h_1 + l_1 + \begin{cases}0 & \text{with prob. }0.5 \\\Delta & \text{with prob. }0.5 \end{cases} \\$$where $\Delta$ is a **stochastic experience gain**. In the **second period** the worker thus solves:$$\begin{eqnarray*}v_{2}(h_{2}) & = &\max_{l_{2}} \frac{c_2^{1-\rho}}{1-\rho} - \gamma l_2\\ & \text{s.t.} & \\c_{2}& = & w h_2 l_2 \\l_{2}& \in &\{0,1\}\end{eqnarray*}$$ In the **first period** the worker thus solves:$$\begin{eqnarray*}v_{1}(h_{1}) &=& \max_{l_{1}} \frac{c_1^{1-\rho}}{1-\rho} - \gamma l_1 + \beta\mathbb{E}_{1}\left[v_2(h_2)\right]\\ & \text{s.t.} & \\c_1 &=& w h_1 l_1 \\h_2 &=& h_1 + l_1 + \begin{cases}0 & \text{with prob. }0.5\\\Delta & \text{with prob. }0.5 \end{cases}\\l_{1} &\in& \{0,1\}\\\end{eqnarray*}$$where $\beta$ is the **discount factor** and $\mathbb{E}_{1}\left[v_2(h_2)\right]$ is the **expected value of living in period two**. The **parameters** of the model are:
###Code
rho = 2
beta = 0.96
gamma = 0.1
w = 2
b = 1
Delta = 0.1
###Output
_____no_output_____
###Markdown
The **relevant levels of human capital** are:
###Code
h_vec = np.linspace(0.1,1.5,100)
# Define Labor as boolean
l = [0,1]
###Output
_____no_output_____
###Markdown
Question 1.1:*Solve the model in period 2 and illustrate the solution (including labor supply as a function of human capital).*
###Code
# i. Define utility in second period
def v2(l, h):
# a. Define c
if l == 0:
c = b
else:
c = w*h*l
# b. Define utility in second period
utility = c**(1-rho)/(1-rho) - gamma*l
return utility
# ii. Define function to solve for utility and labour for each level of human capital
def solve(period):
# a. Define period and utility
if period == 2:
utility = v2
elif period == 1:
utility = v1
# b. Define empty grids for h, v and l
h_vec = np.linspace(0.1,1.5,100)
v_vec = np.empty(100)
l_vec = np.empty(100)
# c. Solve for each h2
for i,h in enumerate(h_vec):
# d. Individual will only work if utillity when working > utility when not working
v_vec[i] = max(utility(l[0],h), utility(l[1],h))
l_vec[i] = utility(l[0],h) < utility(l[1],h)
# e. Return values
if period == 2:
v2_vec = v_vec
l2_vec = l_vec
return h_vec, v2_vec, l2_vec
if period == 1:
v1_vec = v_vec
l1_vec = l_vec
return h_vec, v1_vec, l1_vec
# iii. Extract solved values for period 2
h_vec,v2_vec,l2_vec = solve(2)
# iiii. Define fucntion that plots utility and labor function for a given period
def plot(period):
# a. Period
if period == 1:
l_vec = l1_vec
v_vec = v1_vec
elif period == 2:
l_vec = l2_vec
v_vec = v2_vec
# b. Set approprate style
plt.style.use("bmh")
# c. Define axis and plots
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
ax1.plot(h_vec,l_vec)
ax1.set_xlabel("$h_{}$".format(period))
ax1.set_ylabel('$l_{}$'.format(period))
ax1.set_title('$h_{}$ as a function of $l_{}$'.format(period, period))
ax2.plot(h_vec,v_vec)
ax2.set_xlabel('$h_{}$'.format(period))
ax2.set_ylabel('$v_{}$'.format(period))
ax2.set_title('$V_{}(h_{})$'.format(period, period))
# d. Define cut-off point for which the individual will choose to work (l=1)
cut_off = h_vec[np.where(l_vec==1)[0][0]]
# e. Plot vertical line at cutoff point and make adjustments
plt.axvline(x=cut_off, linestyle="--", ymin=0, ymax=0.85)
plt.subplots_adjust(bottom=0.15, wspace=.25)
plt.show()
# iiiii. Call plots for period 2
plot(2)
# iiiiii. Define function that calculates the condition for l = 1
def cutoff(period):
if period == 1:
l_vec = l1_vec
elif period == 2:
l_vec = l2_vec
cutoff = round(h_vec[np.where(l_vec==1)[0][0]], 2)
print("l = 1 in period {}, if and only if h ≥ {}".format(period, cutoff))
# iiiiiii. Condition for l=1, in period 2
cutoff(2)
###Output
l = 1 in period 2, if and only if h ≥ 0.57
###Markdown
Question 1.2:*Solve the model in period 1 and illustrate the solution (including labor supply as a function of human capital).*
###Code
# i. interpolar, to be used for calculation of v2
v2_intpol = interpolate.RegularGridInterpolator((h_vec,), v2_vec, bounds_error=False,fill_value=None)
# ii. Define utility for period 1
def v1(l1, h1, intpol = v2_intpol):
# a. Calculate expected value of 2, given v2 with and without stochastic gain
exp_v2 = 0.5*(v2_intpol([h1 + l1])[0] + v2_intpol([h1 + l1 + Delta])[0])
# b. Define c
if l1 == 0:
c = b
else:
c = w*h1*l1
# c. Define Utility
utility = c**(1-rho)/(1-rho) - gamma*l1 + beta*exp_v2
return utility
# iii. Extract solved values for period 1
h_vec,v1_vec,l1_vec = solve(1)
# iiii. Call plots for period 1
plot(1)
# iiiii. Condition for l=1, in period 1
cutoff(1)
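
# Added illustration for Question 1.3 below: at h1 = 0.4 the wage income w*h1 = 0.8 is below
# the benefit b = 1, yet period-1 utility from working still exceeds not working, because
# working raises human capital and hence expected period-2 income.
h_check = 0.4
print('work:', round(v1(1, h_check), 3), ' not work:', round(v1(0, h_check), 3))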
###Output
l = 1 in period 1, if and only if h ≥ 0.35
###Markdown
Question 1.3:*Will the worker never work if her potential wage income is lower than the unemployment benefits she can get? Explain and illustrate why or why not.* In the second (and last) period she will indeed never work when $w h_2 < b$: working then gives both lower consumption and the disutility $\gamma l_2$, which is consistent with the period-2 cutoff found above, where $w \cdot 0.57 \approx 1.14 > b = 1$. In the first period, however, she may still choose to work even when her wage income $w h_1$ is below the unemployment benefit $b$, because working raises her human capital and thereby her expected income in period 2. This is exactly what the period-1 solution shows: she works from $h_1 \geq 0.35$, where her wage income is only $w \cdot 0.35 = 0.7 < b = 1$. 2. AS-AD model Consider the following **AS-AD model**. The **goods market equilibrium** is given by$$ y_{t} = -\alpha r_{t} + v_{t} $$where $y_{t}$ is the **output gap**, $r_{t}$ is the **ex ante real interest** and $v_{t}$ is a **demand disturbance**. The central bank's **Taylor rule** is$$ i_{t} = \pi_{t+1}^{e} + h \pi_{t} + b y_{t}$$where $i_{t}$ is the **nominal interest rate**, $\pi_{t}$ is the **inflation gap**, and $\pi_{t+1}^{e}$ is the **expected inflation gap**. The **ex ante real interest rate** is given by $$ r_{t} = i_{t} - \pi_{t+1}^{e} $$ Together, the above implies that the **AD-curve** is$$ \pi_{t} = \frac{1}{h\alpha}\left[v_{t} - (1+b\alpha)y_{t}\right]$$ Further, assume that the **short-run supply curve (SRAS)** is given by$$ \pi_{t} = \pi_{t}^{e} + \gamma y_{t} + s_{t}$$where $s_t$ is a **supply disturbance**. **Inflation expectations are adaptive** and given by$$ \pi_{t}^{e} = \phi\pi_{t-1}^{e} + (1-\phi)\pi_{t-1}$$ Together, this implies that the **SRAS-curve** can also be written as$$ \pi_{t} = \pi_{t-1} + \gamma y_{t} - \phi\gamma y_{t-1} + s_{t} - \phi s_{t-1} $$ The **parameters** of the model are:
###Code
par = {}
par['alpha'] = 5.76
par['h'] = 0.5
par['b'] = 0.5
par['phi'] = 0
par['gamma'] = 0.075
#Define variables
# Define variables and parameters
alpha = sm.symbols("alpha")
b = sm.symbols("b")
gamma = sm.symbols("gamma")
h = sm.symbols("h")
phi = sm.symbols("phi")
pi = sm.symbols("pi_t")
pit = sm.symbols("pi_t-1")
pi_opt = sm.symbols("pi^opt")
s = sm.symbols("s_t")
st = sm.symbols("s_t-1")
v = sm.symbols("v_t")
y = sm.symbols("y_t")
yt = sm.symbols("y_t-1")
y_opt = sm.symbols("y^opt")
###Output
_____no_output_____
###Markdown
Question 2.1:*Use the ``sympy`` module to solve for the equilibrium values of output, $y_t$, and inflation, $\pi_t$, (where AD = SRAS) given the parameters ($\alpha$, $h$, $b$, $\alpha$, $\gamma$) and $y_{t-1}$ , $\pi_{t-1}$, $v_t$, $s_t$, and $s_{t-1}$.*
###Code
# i. Define agregate demand (AD) and aggregate supply (AS)
ad = sm.Function("ad")
sras = sm.Function("sras")
ad_func = 1/(h*alpha)*(v-(1+b*alpha)*y)
sras_func = pit + gamma*y - phi*gamma*yt + s - phi*st
ad = sm.Eq(pi, ad_func)
sras = sm.Eq(pi, sras_func)
###Output
_____no_output_____
###Markdown
We can now solve for $\pi^{opt}$ and $y^{opt}$ and the result is as follows
###Code
# ii. Solving the with the first order condition
foc = sm.solve([ad, sras], [y, pi])
# iii. Y in equilibrium
y_foc = sm.Eq(y_opt, foc[y])
y_foc
# iiii. Pi in equilibrium
pi_foc = sm.Eq(pi_opt, foc[pi])
pi_foc
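
# Illustrative check (added): evaluate the symbolic solution at the parameter values given
# above, for an assumed demand disturbance v_t = 0.1 and all lagged values set to zero.
par_subs = {alpha: par['alpha'], h: par['h'], b: par['b'], phi: par['phi'], gamma: par['gamma'],
            v: 0.1, s: 0, st: 0, yt: 0, pit: 0}
print(sm.N(foc[y].subs(par_subs)), sm.N(foc[pi].subs(par_subs)))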
###Output
_____no_output_____
###Markdown
3. Exchange economy Consider an **exchange economy** with1. 3 goods, $(x_1,x_2,x_3)$2. $N$ consumers indexed by \\( j \in \{1,2,\dots,N\} \\)3. Preferences are Cobb-Douglas with log-normally distributed coefficients $$ \begin{eqnarray*} u^{j}(x_{1},x_{2},x_{3}) &=& \left(x_{1}^{\beta_{1}^{j}}x_{2}^{\beta_{2}^{j}}x_{3}^{\beta_{3}^{j}}\right)^{\gamma}\\ & & \,\,\,\beta_{i}^{j}=\frac{\alpha_{i}^{j}}{\alpha_{1}^{j}+\alpha_{2}^{j}+\alpha_{3}^{j}} \\ & & \,\,\,\boldsymbol{\alpha}^{j}=(\alpha_{1}^{j},\alpha_{2}^{j},\alpha_{3}^{j}) \\ & & \,\,\,\log(\boldsymbol{\alpha}^j) \sim \mathcal{N}(\mu,\Sigma) \\ \end{eqnarray*} $$4. Endowments are exponentially distributed,$$\begin{eqnarray*}\boldsymbol{e}^{j} &=& (e_{1}^{j},e_{2}^{j},e_{3}^{j}) \\ & & e_i^j \sim f, f(z;\zeta) = 1/\zeta \exp(-z/\zeta)\end{eqnarray*}$$ Let $p_3 = 1$ be the **numeraire**. The implied **demand functions** are:$$\begin{eqnarray*}x_{i}^{\star j}(p_{1},p_{2},\boldsymbol{e}^{j})&=&\beta^{j}_i\frac{I^j}{p_{i}} \\\end{eqnarray*}$$where consumer $j$'s income is$$I^j = p_1 e_1^j + p_2 e_2^j +p_3 e_3^j$$ The **parameters** and **random preferences and endowments** are given by:
###Code
# a. parameters
N = 50000
mu = np.array([3,2,1])
Sigma = np.array([[0.25, 0, 0], [0, 0.25, 0], [0, 0, 0.25]])
gamma = 0.8
zeta = 1
# b. random draws
seed = 1986
np.random.seed(seed)
# preferences
alphas = np.exp(np.random.multivariate_normal(mu, Sigma, size=N))
betas = alphas/np.reshape(np.sum(alphas,axis=1),(N,1))
# endowments
e1 = np.random.exponential(zeta,size=N)
e2 = np.random.exponential(zeta,size=N)
e3 = np.random.exponential(zeta,size=N)
###Output
_____no_output_____
###Markdown
Question 3.1:*Plot the histograms of the budget shares for each good across agents.*
###Code
# i. Creating a list for our betas as b1, b2, b3
b1 = betas[:,0]
""" The above creates a list of the variable beta
"""
b2 = betas[:,1]
b3 = betas[:,2]
# ii. Plotting b1, b2, b3
fig=plt.figure(figsize=(20,10))
""" Initializing the figure and the size of it
"""
for i in [1,2,3]:
""" Looping over i to create a plot for each beta
"""
ax = fig.add_subplot(1,3,i)
ax.hist(globals()['b%s' % i],bins=100);
ax.set_xlabel('b%s' % i)
ax.set_ylabel('Consumers')
ax.set_title('Good '+str(i))
fig.tight_layout()
###Output
_____no_output_____
###Markdown
Consider the **excess demand functions:**$$ z_i(p_1,p_2) = \sum_{j=1}^N x_{i}^{\star j}(p_{1},p_{2},\boldsymbol{e}^{j}) - e_i^j$$ Question 3.2:*Plot the excess demand functions.*
###Code
# i. Creating the demand function for each good
def demand_1(p1,p2,e1,e2,e3,betas):
""" Defining af function to calculate the demand for good 1
"""
I = e1*p1 + e2*p2 + e3
""" Defining income as specified in the assignment
"""
return b1*I/p1
def demand_2(p1,p2,e1,e2,e3,betas):
I = e1*p1 + e2*p2 + e3
return b2*I/p2
def demand_3(p1,p2,e1,e2,e3,betas):
I = e1*p1 + e2*p2 + e3
return b3*I
# ii. Creating excess demand function
def excess_demand_1(p1,p2,e1,e2,e3,betas):
""" Defining the excess function
"""
excess_1 = np.sum(demand_1(p1,p2,e1,e2,e3,betas)) - np.sum(e1)
""" excess = demand - supply
"""
return excess_1
def excess_demand_2(p1,p2,e1,e2,e3,betas):
excess_2 = np.sum(demand_2(p1,p2,e1,e2,e3,betas)) - np.sum(e2)
return excess_2
def excess_demand_3(p1,p2,e1,e2,e3,betas):
excess_3 = np.sum(demand_3(p1,p2,e1,e2,e3,betas)) - np.sum(e3)
return excess_3
# iii.Plotting the excess demand function
p1_ = np.linspace(0.3,30,300)
p2_ = np.linspace(0.3,30,300)
""" Creating price vectors to be used for excess demand plot
"""
p1_grid,p2_grid = np.meshgrid(p1_,p2_,indexing='ij')
""" Creating a grid for p1 and p2
"""
excess_1_grid = np.empty((300,300))
""" Excess demand grid for plots
"""
excess_2_grid = np.empty((300,300))
excess_3_grid = np.empty((300,300))
for i,p1 in enumerate(p1_):
for j,p2 in enumerate(p2_):
""" Looping over both price sets to get excess demands for each good
"""
excess_1_grid[i,j] = excess_demand_1(p1,p2,e1,e2,e3,betas)
excess_2_grid[i,j] = excess_demand_2(p1,p2,e1,e2,e3,betas)
excess_3_grid[i,j] = excess_demand_3(p1,p2,e1,e2,e3,betas)
fig = plt.figure(figsize=(20,10))
ax = fig.add_subplot(1,2,1, projection='3d')
ax.plot_surface(p1_grid, p2_grid, excess_1_grid)
ax.invert_xaxis()
ax.set_title('Excess demand - good 1')
ax.set_xlabel("p1")
ax.set_ylabel("p2")
fig = plt.figure(figsize=(20,10))
ax = fig.add_subplot(1,2,2, projection='3d')
ax.plot_surface(p1_grid, p2_grid, excess_2_grid)
ax.invert_xaxis()
ax.set_title('Excess demand - good 2')
ax.set_xlabel("p1")
ax.set_ylabel("p2")
plt.show()
###Output
_____no_output_____
###Markdown
Question 3.3:*Find the Walras-equilibrium prices, $(p_1,p_2)$, where both excess demands are (approximately) zero, e.g. by using the following tâtonnement process:**1. Guess on $p_1 > 0$, $p_2 > 0$ and choose tolerance $\epsilon > 0$ and adjustment aggressivity parameter, $\kappa > 0$.**2. Calculate $z_1(p_1,p_2)$ and $z_2(p_1,p_2)$.**3. If $|z_1| < \epsilon$ and $|z_2| < \epsilon$ then stop.**4. Else set $p_1 = p_1 + \kappa \frac{z_1}{N}$ and $p_2 = p_2 + \kappa \frac{z_2}{N}$ and return to step 2.*
###Code
# i. Defining the Walras equilibrium function
def Walras_equilibrium(betas, p1, p2, e1, e2, e3, kappa=0.5, eps=1e-8, maxiter=50000):
t = 0
while True:
# a. step 1: excess demand
X1 = excess_demand_1(p1,p2,e1,e2,e3,betas)
X2 = excess_demand_2(p1,p2,e1,e2,e3,betas)
# b. step 2: stop
if np.abs(X1) < eps and np.abs(X2) < eps or t >= maxiter:
print(f'{t:3d}: p1 = {p1:12.8f} -> excess demand -> {X1:14.8f}')
print(f'{t:3d}: p2 = {p2:12.8f} -> excess demand -> {X2:14.8f}')
break
# c. step 3: update prices
p1 = p1 + kappa*X1/betas.size
p2 = p2 + kappa*X2/betas.size
# d. step 4: return
if t < 5 or t%2500 == 0:
print(f'{t:3d}: p1 = {p1:12.8f} -> excess demand -> {X1:14.8f}')
print(f'{t:3d}: p2 = {p2:12.8f} -> excess demand -> {X2:14.8f}')
elif t == 5:
print(" ...")
t += 1
return p1, p2
# ii. Setting initial values to find equilibrium price
p1 = 1.4
p2 = 1
kappa = 0.1
eps = 1e-8
# iii. Using our function from part i. and initial values from part ii. to find equlibrium prices
p1_eq,p2_eq = Walras_equilibrium(betas,p1,p2,e1,e2,e3,kappa=kappa,eps=eps)
print("Equilibrium prices (p1,p2) = (6.49, 2.61)")
###Output
0: p1 = 1.41865154 -> excess demand -> 27977.31151752
0: p2 = 0.99608594 -> excess demand -> -5871.09387963
1: p1 = 1.43684279 -> excess demand -> 27286.87384056
1: p2 = 0.99241373 -> excess demand -> -5508.31164171
2: p1 = 1.45459877 -> excess demand -> 26633.97026830
2: p2 = 0.98897615 -> excess demand -> -5156.36947140
3: p1 = 1.47194254 -> excess demand -> 26015.65971241
3: p2 = 0.98576609 -> excess demand -> -4815.08645945
4: p1 = 1.48889542 -> excess demand -> 25429.30968444
4: p2 = 0.98277657 -> excess demand -> -4484.28325466
...
2500: p1 = 5.93849485 -> excess demand -> 550.18919081
2500: p2 = 2.41083774 -> excess demand -> 205.44544903
5000: p1 = 6.37746401 -> excess demand -> 104.64732030
5000: p2 = 2.57468743 -> excess demand -> 39.04621324
7500: p1 = 6.46580779 -> excess demand -> 22.23130435
7500: p2 = 2.60764817 -> excess demand -> 8.29385995
10000: p1 = 6.48477769 -> excess demand -> 4.82462641
10000: p2 = 2.61472520 -> excess demand -> 1.79987803
12500: p1 = 6.48890390 -> excess demand -> 1.05180255
12500: p2 = 2.61626452 -> excess demand -> 0.39238367
15000: p1 = 6.48980388 -> excess demand -> 0.22952650
15000: p2 = 2.61660027 -> excess demand -> 0.08562665
17500: p1 = 6.49000030 -> excess demand -> 0.05009851
17500: p2 = 2.61667354 -> excess demand -> 0.01868963
20000: p1 = 6.49004317 -> excess demand -> 0.01093546
20000: p2 = 2.61668953 -> excess demand -> 0.00407956
22500: p1 = 6.49005253 -> excess demand -> 0.00238701
22500: p2 = 2.61669303 -> excess demand -> 0.00089049
25000: p1 = 6.49005458 -> excess demand -> 0.00052104
25000: p2 = 2.61669379 -> excess demand -> 0.00019438
27500: p1 = 6.49005502 -> excess demand -> 0.00011373
27500: p2 = 2.61669395 -> excess demand -> 0.00004243
30000: p1 = 6.49005512 -> excess demand -> 0.00002483
30000: p2 = 2.61669399 -> excess demand -> 0.00000926
32500: p1 = 6.49005514 -> excess demand -> 0.00000542
32500: p2 = 2.61669400 -> excess demand -> 0.00000202
35000: p1 = 6.49005514 -> excess demand -> 0.00000118
35000: p2 = 2.61669400 -> excess demand -> 0.00000044
37500: p1 = 6.49005515 -> excess demand -> 0.00000026
37500: p2 = 2.61669400 -> excess demand -> 0.00000010
40000: p1 = 6.49005515 -> excess demand -> 0.00000006
40000: p2 = 2.61669400 -> excess demand -> 0.00000002
42500: p1 = 6.49005515 -> excess demand -> 0.00000001
42500: p2 = 2.61669400 -> excess demand -> 0.00000000
42841: p1 = 6.49005515 -> excess demand -> 0.00000001
42841: p2 = 2.61669400 -> excess demand -> 0.00000000
Equilibrium prices (p1,p2) = (6.49, 2.61)
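###Markdown
A quick extra check (illustrative sketch, not required by the question): by Walras' law, when the markets for goods 1 and 2 clear, the market for the numeraire good 3 clears as well. This can be verified directly with the excess demand function defined above.
###Code
# Excess demand for good 3 at the equilibrium prices found above -- should be approximately zero
excess_demand_3(p1_eq, p2_eq, e1, e2, e3, betas)
###Output
_____no_output_____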
###Markdown
Question 3.4:*Plot the distribution of utility in the Walras-equilibrium and calculate its mean and variance.*
###Code
# i. Defining the utility function as u_func
def u_func(p1,p2,e1,e2,e3,betas,gamma):
""" Creating the utility function
"""
I = p1*e1 + p2*e2 + e3
demand1 = b1*I/p1
demand2 = b2*I/p2
demand3 = b3*I
""" Setting up inputs for the calculating utility
"""
    u = (demand1**b1 * demand2**b2 * demand3**b3)**gamma  # Cobb-Douglas utility is a product of the goods, as defined in the problem
return u
# ii. Create a vector of utilities
u_vec = u_func(p1_eq, p2_eq, e1,e2,e3, betas,gamma)
# iii. Plot the utility
fig=plt.figure(figsize=(15,10))
plt.hist(u_vec,bins=100);
plt.xlabel("Utility")
plt.ylabel('Consumers')
plt.title("Utility distribution")
# iiii. Calculate and print mean and variance of the utility
u_mean = np.mean(u_vec)
u_variance = np.var(u_vec)
""" Using the build-in numpy functions to calculate the mean and the variance
"""
print(f"Mean: {u_mean: .3f} and Variance: {u_variance: .3f}")
###Output
Mean: 2.376 and Variance: 0.208
|
Lab4_MongoDB/COMP6235 MongoDB Tutorial 1819.ipynb | ###Markdown
Introduction Using JupyterJupyter is a front end to the [IPython](https://ipython.org) interactive shell, and offers IDE like features. It is separated into two main types of cell: [Markdown](https://en.wikipedia.org/wiki/Markdown) cells (such as this one) which allows markdown or HTML code to be written, and code cells like the next one which can be run in real time.To execute code in a cell, press `Crtl` + `Enter`, click on the `[ > ]Run` button in the main menu, or press `Shift` + `Enter` if you wish to execute the code and then move on to a new cell (creating it if it does not already exist). SetupMongoDB is a NoSQL database, which has a core API in JavaScript, and a series of other APIs in different languages. The one we are going to use is the Python API, [PyMongo](https://api.mongodb.com/python/current/). MongoDB instance on your VM is already started by default.PyMongo is a package that contains tools to work with MongoDB from Python. We have installed it in the VM provided to you. This imports the `MongoClient` class from the pymongo module, which we will use to deal with all our connections from. We're connecting to our localhost, which is listening on port 27017. There are more options, the documentation for the formatting of the connection string is at https://docs.mongodb.com/manual/reference/connection-string/.
###Code
from pymongo import MongoClient
client = MongoClient('mongodb://localhost:27017')
###Output
_____no_output_____
###Markdown
If you do not get any errors, you have confirmed PyMongo library has been successfully installed&configured in the VM. We will now check to see whether the connection is correct. The following code calls a function which returns a list of all current databases. If your Mongo instance is still empty, it should be something like `['admin', 'local']`.
###Code
client.list_database_names()
###Output
_____no_output_____
###Markdown
Next, we are going to create a database object `db`, which is a property of the `client` object. MongoDB is schemaless, and so accessing a database like this will create the database if it does not already exist.A database can be accessed by using "dot" notation (i.e., `client.dbname`), or dictionary notation (i.e., `client['dbname']`). This also applies to making collectionsCreate a database called `test` in a variable called `db`. Using that variable, create a collection called `test_collection` with a variable called `collection` as follows. Run the code in the following cell (there should not be any output)
###Code
# Create database and collection objects for convenience
db = client.test
collection = db.test_collection
###Output
_____no_output_____
###Markdown
Using MongoDB Inserting dataMongoDB data is stored as BSON (binary JSON), which is essentially JSON with some additional optimisations, so the way to insert data is as a JSON object. For Python, you can use a `dict` or a `list` for this, and then call either `insert_one` or `insert_many` on the collection.
###Code
# Create an object and insert into the `test_collection`
single_obj = {'name': 'Amber', 'star_sign': 'Capricorn', 'favourite_song': 'The Load-Out'}
collection.insert_one(single_obj)
single_obj_2 = {'name': 'Huw', 'star_sign': 'Libra', 'favourite_song': 'The masses against the classes'}
collection.insert_one(single_obj_2)
single_obj_3 = {'name': 'Robert', 'star_sign': 'Leo', 'favourite_song': 'Bad day'}
collection.insert_one(single_obj_3)
###Output
_____no_output_____
###Markdown
We will look at querying data in more detail below, but for now, to see whether the object got successfully inserted into the collection, run the code below. This will always return the first instance which matches the query. You will notice that even though we didn't specify `_id` one got added already. This is a unique identifier for the document in the collection
###Code
collection.find_one() # returns a single document matching the query condition
collection.find_one({'name':'Robert'})
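
# A couple of extra query features (illustrative sketch): filters can use query operators
# such as $in, and a projection limits which fields come back.
collection.find_one({'star_sign': {'$in': ['Leo', 'Libra']}}, {'_id': 0, 'name': 1})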
###Output
_____no_output_____
###Markdown
Remember, for MongoDB, you do not have to specify a schema or create a collection, it will be created automatically. You don't need to keep to the same layout, but can have entirely different objects. Consider the following:
###Code
from datetime import datetime
obj1 = {'Meaning of life': 42}
obj2 = {'ABC': 'DEF', 'time': datetime.now()}
collection.insert_one(obj1)
collection.insert_one(obj2)
###Output
_____no_output_____
###Markdown
We can also use the `insert_many`, which accepts a list of dicts. In the cell below, create a list of dicts called `many_objects`, and call the `insert_many` function. The code below that will iterate over all the documents in the database.
###Code
# YOUR CODE HERE
collection.insert_many([{"age":x} for x in range(20,30)])
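# Sketch matching the instruction above: build the list as `many_objects` first, then insert it
# (the names and values here are illustrative).
many_objects = [{'name': n, 'score': s} for n, s in [('Ana', 10), ('Ben', 7), ('Cal', 9)]]
collection.insert_many(many_objects)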
#See what has been inserted into the collection
for doc in collection.find():
print(doc)
###Output
{u'favourite_song': u'The Load-Out', u'_id': ObjectId('5c0843e65ccae901c5ffb8c9'), u'name': u'Amber', u'star_sign': u'Capricorn'}
{u'favourite_song': u'The masses against the classes', u'_id': ObjectId('5c0843e65ccae901c5ffb8ca'), u'name': u'Huw', u'star_sign': u'Libra'}
{u'favourite_song': u'Bad day', u'_id': ObjectId('5c0843e65ccae901c5ffb8cb'), u'name': u'Robert', u'star_sign': u'Leo'}
{u'Meaning of life': 42, u'_id': ObjectId('5c0843ed5ccae901c5ffb8cc')}
{u'_id': ObjectId('5c0843ed5ccae901c5ffb8cd'), u'ABC': u'DEF', u'time': datetime.datetime(2018, 12, 5, 21, 32, 29, 444000)}
{u'age': 20, u'_id': ObjectId('5c0844065ccae901c5ffb8ce')}
{u'age': 21, u'_id': ObjectId('5c0844065ccae901c5ffb8cf')}
{u'age': 22, u'_id': ObjectId('5c0844065ccae901c5ffb8d0')}
{u'age': 23, u'_id': ObjectId('5c0844065ccae901c5ffb8d1')}
{u'age': 24, u'_id': ObjectId('5c0844065ccae901c5ffb8d2')}
{u'age': 25, u'_id': ObjectId('5c0844065ccae901c5ffb8d3')}
{u'age': 26, u'_id': ObjectId('5c0844065ccae901c5ffb8d4')}
{u'age': 27, u'_id': ObjectId('5c0844065ccae901c5ffb8d5')}
{u'age': 28, u'_id': ObjectId('5c0844065ccae901c5ffb8d6')}
{u'age': 29, u'_id': ObjectId('5c0844065ccae901c5ffb8d7')}
###Markdown
Importing and querying dataFor this part of the exercise, we will use a sample dataset provided by Mongo for a documentation tutorial. The following cell runs the `mongoimport` command, which is a Unix command which comes with Mongo for importing data. We will need to run a bash command in the next cell first. This uses the Jupyter \"magics\", and requires that the first line include `%%bash`. The code does the following:- Download the JSON file from the url, and save as ./primer-dataset.json- Import into the `test` database into the collection `restaurants` whilst dropping any collection which already exists from the file ./primer-dataset.json- Deletes the file. Click on the following cell, and execute it:
###Code
%%bash
# Use wget to download the data
wget https://raw.githubusercontent.com/mongodb/docs-assets/primer-dataset/primer-dataset.json
# mongoimport is the Mongo command to import data.
# It specifies the database, collection and format, and import file
# --drop means it's going to drop any collection with the same name which already exists
mongoimport --db test --collection restaurants --drop --file ./primer-dataset.json
# Delete the JSON file we just downloaded
rm ./primer-dataset.json
###Output
--2018-12-05 21:33:28-- https://raw.githubusercontent.com/mongodb/docs-assets/primer-dataset/primer-dataset.json
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.16.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.16.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11874761 (11M) [text/plain]
Saving to: ‘primer-dataset.json’
2018-12-05 21:34:34 (192 KB/s) - ‘primer-dataset.json’ saved [11874761/11874761]
2018-12-05T21:34:35.010+0000 connected to: localhost
2018-12-05T21:34:35.010+0000 dropping: test.restaurants
2018-12-05T21:34:37.969+0000 imported 25359 documents
###Markdown
Change the variable `collection` to refer to the new collection `restaurants`, and inspect the general format of the data by adding the code below to find the first record of the collection:
###Code
# YOUR CODE HERE
collection = db.restaurants
collection.find_one()
###Output
_____no_output_____
###Markdown
We saw the [`collection.find()`](https://api.mongodb.com/python/current/api/pymongo/collection.html#pymongo.collection.Collection.find) function earlier when we returned all the documents we had inserted into our `test` collection. Without any arguments, `find()` returns a cursor over all the available documents in the collection. To refine queries, however, the search can be filtered by adding a first parameter.

The `filter` parameter is a dict of the form `{key: value}`, which matches the documents where `key` = `value`. For example, to find all bakeries in the city, we would do the following query:

**WARNING** Unlike the Mongo command line interface, if you try to print the output of a `find()` query, it will continue to output results until it has finished. This can cause the browser to crash, particularly with a large result set. Using `find().count()` is a useful way of checking how many results a query will return, and `find_one()` lets you see the general structure of a result. If you use `find()`, make sure you either include the `limit` argument, or have a counter or other condition to break out of your printing loop!
###Code
collection.find({'cuisine': 'Bakery'}).limit(5)
###Output
_____no_output_____
###Markdown
Note that count() is deprecated in the MongoDB drivers compatible with the 4.0 features; as a result, we use count_documents() in the following exercises (for more information, check https://docs.mongodb.com/manual/reference/method/db.collection.count/ and http://api.mongodb.com/python/current/changelog.html).
###Code
collection.count_documents({'cuisine': 'Bakery'})
###Output
_____no_output_____
###Markdown
A filter can have as many conditions as you like, and will assume that you are using an AND condition, unless you specify otherwise (as below). In the cell below, write a query to return the number/count of all the establishments with a cuisine of `Hamburgers` in the borough of Manhattan.
###Code
# YOUR CODE HERE
# All establishments with:
# * a cuisine of 'Hamburgers'
# * in the borough of 'Manhattan'
collection.count_documents({'cuisine': 'Hamburgers', 'borough': 'Manhattan'})
collection.distinct('cuisine')
collection.distinct('coord')
collection.find({'street': 'Morris Park Ave'})
collection.count_documents({'street': "Morris"+"\000"+"Park"+"\000"+"Ave"})
from pprint import pprint
cursor=collection.find({'street': 'Morris Park Ave'})
for c in cursor:
pprint(c)
collection.count_documents({'borough':'Bronx'})
###Output
_____no_output_____
###Markdown
Sub-documents

A valid JSON style "document" can have another JSON document inside it. To access these, we use the "dot" notation. For example, to get all the restaurants in a certain zipcode, you would run code as follows:
###Code
from pprint import pprint
cursor = collection.find({'address.zipcode': '10462'}, limit=5)
for c in cursor:
pprint(c)
###Output
_____no_output_____
###Markdown
Operators

MongoDB has a series of [operators](https://docs.mongodb.com/manual/reference/operator/query/) which allow us to do more sophisticated filters on our queries. There are too many to go into individually, but we will look at a few important ones. The specific syntax varies depending on the operator, so it isn't possible to give a general rule, but we will go over a few examples here. Make sure you check the [documentation](https://docs.mongodb.com/manual/reference/operator/query/) for use on each one.

[\$or](https://docs.mongodb.com/manual/reference/operator/query/or/#op._S_or)

Performs a logical **OR** operation on all the key/value pairs in a list, as in the code below:
###Code
filter = {"$or": [{"cuisine": "Polynesian"}, {"cuisine": "Hawaiian"}]}
for f in collection.find(filter):
pprint(f)
###Output
_____no_output_____
###Markdown
[`$regex`](https://docs.mongodb.com/manual/reference/operator/query/regex/#op._S_regex)

The `$regex` operator searches for a regular expression on a particular field. Within the filter, the named field (a key) takes a dict as its value. For example, to search for all restaurants whose name starts with the word "Pretzel" you can do the following:
###Code
filter = {"name": {"$regex": '^Pretzel'}}
collection.count_documents(filter)
###Output
_____no_output_____
###Markdown
There are other ways to use regular expressions in PyMongo; for example, you can use Python's [`re`](https://docs.python.org/3/library/re.html) module. In the Mongo shell itself, the simplest syntax is to enclose the regular expression inside `/` characters, as in `{"name": /^Pretzel/}`, but that doesn't work properly in PyMongo.

Using the `$regex` operator, find all restaurants whose name ends in "Bar" in the borough of Brooklyn.

HINT: The regex character for the end of a string is `$`
###Code
# YOUR CODE HERE
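# One possible solution (a sketch, not the only way): '$' anchors the pattern
# at the end of the string, so 'Bar$' matches names ending in "Bar"
filter = {"name": {"$regex": "Bar$"}, "borough": "Brooklyn"}
collection.count_documents(filter)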
###Output
_____no_output_____
###Markdown
[`$gt`](https://docs.mongodb.com/manual/reference/operator/query/gt/#op._S_gt)

The `$gt` operator is a comparison between two values where one is greater than the other. For example, consider this code which finds restaurants that have had a score of more than 12:
###Code
filter = {'grades.score': {'$gt': 12}}
collection.count_documents(filter)
###Output
_____no_output_____
###Markdown
Using one of the other comparison operators, find all restaurants which had a grade awarded on 15 December 2012. You'll need to create a [`datetime`](https://docs.python.org/2/library/datetime.html#datetime-objects) object in Python.
###Code
# YOUR CODE HERE
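# One possible solution (a sketch): 'grades.date' holds full datetime values,
# so match the whole day with a half-open range
from datetime import datetime
filter = {'grades.date': {'$gte': datetime(2012, 12, 15),
                          '$lt': datetime(2012, 12, 16)}}
collection.count_documents(filter)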
###Output
_____no_output_____
###Markdown
Organising output

So far, we have seen two of the arguments to `find()` and related functions: the `filter`, which lets us select the criteria that documents must match, and the `limit`, which caps the number of results. You should read the documentation for the function fully in your own time, but for now we will go over two other arguments which are for organising output: field selection and sorting.

The field selection or `projection` argument is the argument after the \[optional\] filter, and is either:
* A list of fields to include (plus \_id)
* A dict of fields with True/False to include

For example, to display only the name of the restaurant:
###Code
filter = {'cuisine': 'Brazilian'}
fields = {'_id': False, 'name': True}
collection.find_one(filter, fields)
###Output
_____no_output_____
###Markdown
The sort argument is a list of (field name, direction) pairs, as shown in the code below. Sorting can be done either through the named `sort` parameter when calling `find()`, or through the cursor's own [`sort()`](https://api.mongodb.com/python/current/api/pymongo/cursor.html#pymongo.cursor.Cursor.sort) method.

For example, to sort in alphabetical order, consider the following code:
###Code
import pymongo
# The ASCENDING and DESCENDING constants have values of 1 (ASCENDING) and -1 (DESCENDING)
sort = [('name', pymongo.ASCENDING)]
for d in collection.find(filter, projection=fields, sort=sort):
pprint(d)
###Output
_____no_output_____
###Markdown
MongoDB Aggregation Framework

The most common usage for the aggregation framework is to perform group operations such as sum, count or average. The framework works as a pipeline, with a series of different stages where the data are transformed in each one.

At its simplest, this can be used to obtain output like min, max, count, avg on a collection as follows:
###Code
group = {
'$group': {
'_id': None,
'size': {'$sum': 1},
'min': {'$min': '$restaurant_id'},
'max': {'$max': '$restaurant_id'}
}
}
cursor = collection.aggregate([group])
for c in cursor:
print(c)
###Output
_____no_output_____
###Markdown
Note that it has an `_id: None` key/value pair in it. It is compulsory for a `$group` stage to have one, and it indicates what the documents are grouped by. In this case, we haven't grouped at all; however, it can also be used for more complex output where documents are grouped according to a field.

Aggregation example

Consider this example of finding the breakdown of how many of each type of restaurant there is in the Bronx. We would need to go through the following stages:
- Identify restaurants which are in the Bronx
- Group the restaurants by type to get the count
- Sort the results in a sensible way

The code to perform this query is below:
###Code
# Restrict the results to only establishments in the Bronx.
# '$match' indicates the stage in the pipeline, and the dictionary is the same as using with find()
match = {
"$match": {"borough": "Bronx"}
}
# $group indicates the stage in the pipeline
# _id is the field to perform the operation on (like SQL GROUP BY)
# count is the name of the field that the result will be in
# $sum is the counting operation, and the value 1 is how many to count each time
group = {
'$group': {'_id': '$cuisine', 'count': {'$sum': 1}}
}
# $sort indicates the position in the pipeline
# count is the field to sort by, and -1 means to sort in descending order
sort = {
'$sort': {'count': pymongo.DESCENDING}
}
cursor = collection.aggregate([match, group, sort])
for c in cursor:
print(c)
###Output
_____no_output_____
###Markdown
This is a simple query, which shows some of the basic stages of the aggregation pipeline. It can be improved as follows:
* We can change the name of the `_id` in the output back to `cuisine` using the `$project` stage
* We can change the order of the output to be sorted in alphabetical order as well
* We can limit the results to include only results with a count of 20 or more

Implement those stages in the cell below
###Code
# YOUR CODE HERE
pipeline = [match, group] #More code here...
cursor = collection.aggregate(pipeline)
for c in cursor:
print(c)
# YOUR CODE HERE FOR sort
pipeline.append(sort)
cursor = collection.aggregate(pipeline)
for c in cursor:
pprint(c)
# YOUR CODE HERE FOR `count` > 20
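# One way to finish the exercise (a sketch, not the only solution): keep only counts of
# 20 or more, rename '_id' back to 'cuisine', and sort by count then alphabetically
having = {'$match': {'count': {'$gte': 20}}}
project = {'$project': {'_id': False, 'cuisine': '$_id', 'count': True}}
sort = {'$sort': {'count': pymongo.DESCENDING, 'cuisine': pymongo.ASCENDING}}
pipeline = [match, group, having, project, sort]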
cursor = collection.aggregate(pipeline)
for c in cursor:
pprint(c)
###Output
_____no_output_____
###Markdown
Challenge

How would you work out the percentage of each type of cuisine out of all selected restaurants?
###Code
# YOUR CODE HERE...
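# One possible approach (a sketch): get the total for the borough first,
# then turn each group's count into a percentage on the client side
total = collection.count_documents({'borough': 'Bronx'})
by_count = {'$sort': {'count': pymongo.DESCENDING}}
for c in collection.aggregate([match, group, by_count]):
    print(c['_id'], round(100 * c['count'] / total, 2), '%')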
###Output
_____no_output_____ |
ML_HW#3/Question8.ipynb | ###Markdown
Dataset prepratation
###Code
from PIL import Image
def rgb_mean(name, label):
image = Image.open(name)
rChannel = 0
bChannel = 0
count = 0
for x in range(image.width):
for y in range(image.height):
count += 1
rChannel += image.getpixel((x,y))[0]
bChannel += image.getpixel((x,y))[2]
rChannel = rChannel / count
bChannel = bChannel / count
return { "red": rChannel, "blue": bChannel, "label": label }
PATH = './Q9_Dataset/Images/'
dataset = [rgb_mean(PATH + f"c{i}.jpg", 0) for i in range(1, 66)]
dataset.extend(rgb_mean(PATH + f"m{i}.jpg", 1) for i in range(1, 58))
import pandas as pd
dataset = pd.DataFrame(dataset)
dataset.head()
X, y = dataset.drop(columns=["label"]), dataset["label"]
X_chelsea = dataset[dataset['label'] == 0].drop(columns=["label"])
X_manchester = dataset[dataset['label'] == 1].drop(columns=["label"])
X_chelsea.head()
X_manchester.head()
###Output
_____no_output_____
###Markdown
Training Gaussian Mixture Models
###Code
from sklearn.mixture import GaussianMixture
GMM_chelsea = GaussianMixture(n_components=2)
GMM_chelsea.fit(X_chelsea)
from sklearn.mixture import GaussianMixture
GMM_manchester = GaussianMixture(n_components=2)
GMM_manchester.fit(X_manchester)
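# A minimal follow-up sketch (my addition, not part of the original exercise): compare each
# sample's log-likelihood under the two fitted mixtures and assign it to the more likely
# team (0 = Chelsea, 1 = Manchester), then check how well that matches the known labels
log_chelsea = GMM_chelsea.score_samples(X)
log_manchester = GMM_manchester.score_samples(X)
predicted = (log_manchester > log_chelsea).astype(int)
print("train accuracy:", (predicted == y).mean())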
import numpy as np
from matplotlib.patches import Ellipse
def draw_ellipse(position, covariance, ax=None, **kwargs):
ax = ax or plt.gca()
if covariance.shape == (2, 2):
U, s, Vt = np.linalg.svd(covariance)
angle = np.degrees(np.arctan2(U[1, 0], U[0, 0]))
width, height = 2 * np.sqrt(s)
else:
angle = 0
width, height = 2 * np.sqrt(covariance)
for nsig in range(1, 4):
ax.add_patch(Ellipse(position, nsig * width, nsig * height,
angle, **kwargs))
import matplotlib.pyplot as plt
ax = plt.gca()
ax.scatter(X.iloc[:, 0], X.iloc[:, 1], c=y, s=40, cmap='viridis')
w_factor_chelsea = 0.2 / GMM_chelsea.weights_.max()
w_factor_manchester = 0.2 / GMM_manchester.weights_.max()
for pos, covar, w in zip(GMM_chelsea.means_, GMM_chelsea.covariances_, GMM_chelsea.weights_):
draw_ellipse(pos, covar, alpha=w * w_factor_chelsea)
for pos, covar, w in zip(GMM_manchester.means_, GMM_manchester.covariances_, GMM_manchester.weights_):
draw_ellipse(pos, covar, alpha=w * w_factor_manchester)
n_components = np.arange(1, 31)
BIC = np.zeros(n_components.shape)
AIC = np.zeros(n_components.shape)
for i, n in enumerate(n_components):
GMM = GaussianMixture(n_components=n)
GMM.fit(X_chelsea)
AIC[i] = GMM.aic(X_chelsea)
BIC[i] = GMM.bic(X_chelsea)
plt.plot(n_components, AIC, 'b', label='aic')
plt.plot(n_components, BIC, 'r', label='bic')
plt.title("Chelsea")
plt.legend()
plt.show()
n_components = np.arange(1, 31)
BIC = np.zeros(n_components.shape)
AIC = np.zeros(n_components.shape)
for i, n in enumerate(n_components):
GMM = GaussianMixture(n_components=n)
GMM.fit(X_manchester)
AIC[i] = GMM.aic(X_manchester)
BIC[i] = GMM.bic(X_manchester)
plt.plot(n_components, AIC, 'b', label='aic')
plt.plot(n_components, BIC, 'r', label='bic')
plt.title("Manchester")
plt.legend()
plt.show()
values = (np.array(AIC) + np.array(BIC)) / 2
index_min = np.argmin(values)
print(index_min)
###Output
27
|
examples/notebooks/append_bands_example.ipynb | ###Markdown
In this example we initially create a single band image, then append two more.
###Code
# imports needed for this example to run on its own
# (assuming numpy and the kea package, which provides kea.open, are installed)
import numpy
import kea

# data dimensions
dims = (1000, 1000)
# random data
data = numpy.random.ranf(dims)
dtype = data.dtype.name
# define a single output band and add other bands later
kwargs = {'width': dims[1],
'height': dims[0],
'count': 1,
'compression': 4,
'chunks': (100, 100),
'blocksize': 100,
'dtype': dtype}
with kea.open('append-bands-example.kea', 'w', **kwargs) as src:
src.write(data, 1)
# random data
data = numpy.random.ranf(dims)
# add a new band to contain the segments data
src.add_image_band(dtype=dtype, chunks=kwargs['chunks'],
blocksize=kwargs['blocksize'], compression=6,
band_name='Add One')
# write the data
src.write(data, 2)
# random data
data = numpy.random.ranf(dims)
# add a new band to contain the segments data
src.add_image_band(dtype=dtype, chunks=kwargs['chunks'],
blocksize=kwargs['blocksize'], compression=1,
band_name='Then Another')
# write the data
src.write(data, 3)
###Output
_____no_output_____ |
L3/L3-A00354777.ipynb | ###Markdown
**Development Exercises – L3**

**Created by Ramses Alexander Coraspe Valdez**

**Created on March 2, 2020**

**Right - Bicep**
> **Right** - Are the results right?
> **B** - are all the boundary conditions correct?
> **I** - can you check the inverse relationships?
> **C** - can you cross-check results using other means?
> **E** - can you force error conditions to happen?
> **P** - are performance characteristics within bounds?

16. Given the math.ceil function
- Define a set of unit test cases that exercise the function

17. Given the math.factorial function
- Define a set of test cases that exercise the function

18. Given the math.pow function
- Define a set of test cases that exercise the function

20. Implement a class that manages a directory that is saved in a text file. The data saved includes:
- a. Name
- b. Email
- c. Age
- d. Country of Origin

The class should have capabilities to:
- Add new record
- Delete a record
- Look for a record by mail and age
- List on screen all records
###Code
import math
import unittest
import csv
class directory:
def __init__(self, n, e, a, c):
self.Name= n
self.Email = e
self.Age = a
self.Country= c
class ManageDirectory:
def __init__(self, filename):
self.myList = [];
self.myFile= filename;
def load_from_file(self):
li = [];
with open(self.myFile) as f:
reader = csv.reader(f)
data = [r for r in reader]
data.pop(0);
for r in data:
li.append(directory(r[0], r[1], r[2], r[3]));
self.myList = li;
def saveToCSVFile(self):
with open(self.myFile,'w',newline='') as csvfile:
writer = csv.writer(csvfile)
writer.writerow(['Name', 'Email', 'Age', 'Country'])
for r in self.myList:
writer.writerow([r.Name, r.Email, r.Age, r.Country])
def add(self, e):
if e is None:
return False
try:
self.myList.append(e);
            self.saveToCSVFile();
return True
except:
return False
def remove(self, i):
        if (i < 1):  # records are 1-indexed, so only indices below 1 are invalid
return False
try:
self.myList.pop(i-1);
            self.saveToCSVFile();
return True
except:
return False
def lookBy_email_age(self, e, a):
match=[];
for r in self.myList:
if((e in r.Email) and (a== r.Age)):
match.append(r)
return match;
def show_list(self):
return self.myList;
li = ManageDirectory('test.csv');
##if the file already contains data you can use this method
####################### li.load_from_file(); #######################
# If there is no data in the file or it does not exist you can add records to the file in this way
li.add(directory("Ram1", "[email protected]", "3", "Mexico"));
li.add(directory("Ram2", "[email protected]", "2", "Mexico"));
li.add(directory("Ram3", "[email protected]", "7", "Mexico"));
li.add(directory("Ram4", "[email protected]", "5", "Mexico"));
li.add(directory("Ram", "[email protected]", "5", "Mexico"));
l= li.show_list()
for row in l:
print(row.Name, row.Email, row.Age, row.Country)
print("----------Removing record 1--------------------")
li.remove(1)
l= li.show_list()
for row in l:
print(row.Name, row.Email, row.Age, row.Country)
print("----------Searching record by email and age--------------------")
res= li.lookBy_email_age('[email protected]', '5')
for row in res:
print(row.Name, row.Email, row.Age, row.Country)
print("----------Showing all the records--------------------")
l= li.show_list()
for row in l:
print(row.Name, row.Email, row.Age, row.Country)
class My_tests(unittest.TestCase):
li = ManageDirectory('test.csv');
def test_ceil_positive(self):
res= math.ceil(2.25);
self.assertEqual(3, res);
def test_ceil_negative(self):
res= math.ceil(-2.25);
self.assertEqual(-2, res);
def test_factorial_zero(self):
res = math.factorial(0)
self.assertEqual(1, res)
def test_factorial(self):
res = math.factorial(5)
self.assertEqual(120, res)
def test_pow(self):
res = math.pow(2,2)
self.assertEqual(4, res)
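    # A few extra boundary-condition cases (my additions as a sketch, following the BICEP
    # checklist above; expected values follow Python's math module behaviour)
    def test_pow_negative_exponent(self):
        res = math.pow(2, -1)
        self.assertEqual(0.5, res)
    def test_pow_zero_base(self):
        res = math.pow(0, 5)
        self.assertEqual(0, res)
    def test_factorial_negative_raises(self):
        with self.assertRaises(ValueError):
            math.factorial(-1)
    def test_ceil_integer(self):
        res = math.ceil(5)
        self.assertEqual(5, res)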
def test_adding_record(self):
res= li.add(directory('testname','[email protected]','20','Mex'));
self.assertTrue(res);
def test_adding_record2(self):
res= li.add(directory('testname2','[email protected]','25','USA'));
self.assertTrue(res);
def test_adding_record3(self):
res= li.add(directory('testname3','[email protected]','22','Arg'));
self.assertTrue(res);
def test_adding_record_fail(self):
res= li.add(None);
self.assertFalse(res);
def test_removing_record(self):
res= li.remove(2)
self.assertTrue(res);
def test_removing_record_fail(self):
res= li.remove(0)
self.assertFalse(res);
def test_searching_record(self):
res= li.lookBy_email_age('[email protected]', '22')
self.assertEqual(True, len(res)>0)
def test_searching_no_record(self):
res= li.lookBy_email_age('[email protected]', '22')
self.assertEqual(False, len(res)>0)
if __name__ == '__main__':
unittest.main(argv=['first-arg-is-ignored'], exit=False)
###Output
.............
----------------------------------------------------------------------
Ran 13 tests in 0.016s
OK
|
Automate_Mouse_Events_using_pynput.ipynb | ###Markdown
Automate Mouse Events using pynput Explained in Minutes 8 - ASA Learning
###Code
# Import pynput
from pynput.mouse import Button, Controller, Listener
asa_mouse= Controller()
# Get the current mouse position (x,y)
asa_mouse.position
# Move to your desired position using (x,y)
asa_mouse.position = (100,1100)
# Move mouse by pixels
asa_mouse.move(100,-100)
# Press and Release
asa_mouse.press(Button.right)
asa_mouse.release(Button.right)
# Perform Mouse clicks
asa_mouse.click(Button.left,1)
# Make your mouse click at desired position
asa_mouse.position = (561,521)
asa_mouse.click(Button.left,1)
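# An extra example (optional): scroll the wheel with Controller.scroll(dx, dy)
asa_mouse.scroll(0, 2)   # scroll up two steps
asa_mouse.scroll(0, -2)  # scroll back down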
# Function to get the position and button
def on_click(x,y,button,pressed):
if pressed:
print("Clicked at", x,",", y, button, pressed)
listener.stop()
# listens to your mouse clicks
with Listener (on_click=on_click) as listener:
listener.join()
###Output
Clicked at 565 , 521 Button.left True
|
ParmiSomigliAdUnSempliceEsempio.ipynb | ###Markdown
There are many different ways to approach solving a problem. Depending on your cunning, your algorithmic, mathematical and statistical knowledge, and your common sense, you can devise ever more refined strategies. Today, though, we are lazy. Suppose we are, absolutely, the laziest student who ever existed. And suppose the problem we have to solve is learning a fragment of Shakespeare's Hamlet. The fragment in question is a simple line from Hamlet:

Parmi somigli ad una donnola

Since we are monstrously lazy, we have been paired with a teacher, handsomely paid to help us carry out our task but unfortunately just as lazy. Making the effort to actually learn the assigned sentence is out of the question. Why not start by throwing out random sentences and seeing how it goes? After all, if monkeys pressing random keys on a typewriter can rewrite all of Shakespeare's works, why can't we manage a single sentence?

A normal person would probably put together words the author might have used, but we don't want to give our rival monkeys any handicap, so we start saying random letters. Specifically, we pick a number of random letters equal to the length of the sentence we want to learn and propose it to our teacher, who, as lazy as we are, merely tells us how many letters we got right.

What we do in practice is:
###Code
import random
import string
def random_char():
return random.choice(string.ascii_lowercase + ' ')
def genera_frase():
return [random_char() for n in range(0,len(amleto))]
amleto = list('parmi somigli ad una donnola')
print("target= '"+''.join(amleto)+"'")
frase = genera_frase()
print(str(frase)+" = '"+''.join(frase)+"'")
###Output
target= 'parmi somigli ad una donnola'
['c', 'f', 'x', 'x', 'z', 'g', 'w', 'y', 'i', 'u', 't', ' ', 'o', 'c', 'h', 'o', 'k', 'p', 'a', 'i', 'y', 'i', 'y', 'r', 'g', 'x', 's', 'm'] = 'cfxxzgwyiut ochokpaiyiyrgxsm'
###Markdown
And then propose our sentence to the teacher, who does nothing more than:
###Code
def valuta( candidato ):
azzeccate = 0
for (lettera1, lettera2) in zip(candidato, amleto):
if lettera1 == lettera2:
azzeccate = azzeccate + 1
return azzeccate
risposta = valuta(frase)
print(risposta)
###Output
0
###Markdown
We soon realize how much patience the monkeys must have: after 100000 attempts (we may be lazy, but we are also very tenacious) we have guessed just 8 letters out of 28:

"vfgoyflcmorpiisd untmsonqcji"
"parmi somigli ad una donnola"

Not exactly a stellar result. We have 28 letters to guess and 27 choices for each one ('abcdefghijklmnopqrstuvwxyz '), which means that our probability of getting the whole sentence right purely at random is one in $27^{28}$, that is, roughly 0.00000000000000000000000000000000000000008.

While we propose the 1000001st sentence, we think to ourselves that it would be nice not to have to start from scratch every time: now that we have a sentence with 8 correct letters, it would be nice to be able to improve it instead of throwing it away and starting over. If only our teacher were kind enough to tell us which letters we got right, not just how many, we would finish in no time.

Oh well, let's try anyway: instead of throwing our sentence away each time, let's keep it and change one letter, trying to get better and better results:
###Code
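# Quick check of the figure quoted above: the chance of guessing the whole
# sentence purely at random is one in 27**28
print(1 / 27**28)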
def altera(vecchia_frase):
posizione_da_cambiare = random.choice(range(0,len(vecchia_frase)))
lettera_da_cambiare = vecchia_frase[posizione_da_cambiare]
alternative = (string.ascii_lowercase + ' ').replace(lettera_da_cambiare,'')
nuova_frase = list(vecchia_frase)
nuova_frase[posizione_da_cambiare] = random.choice(alternative)
return nuova_frase
i=0
miglior_frase = [random_char() for n in range(0,len(amleto))]
miglior_risultato = valuta(miglior_frase)
while(miglior_risultato < len(amleto)):
frase = altera(miglior_frase)
risposta = valuta(frase)
i = i+1
if risposta > miglior_risultato:
miglior_risultato = risposta
miglior_frase = frase
print(str(i)+':\t"'+''.join(miglior_frase)+'"\t'+str(miglior_risultato))
###Output
38: "ixblkmshfresrvycedrmvdr rgnj" 2
71: "ixblkmshfresrvycedravdr rgnj" 3
99: "ixblkmshfresrvycednavdr rgnj" 4
140: "ixblkmshfrgsrvycednavdr rgnj" 5
143: "ixblkmshfrglrvycednavdr rgnj" 6
169: "ixblk shfrglrvycednavdr rgnj" 7
207: "ixblk shfrglr ycednavdr rgnj" 8
292: "iablk shfrglr ycednavdr rgnj" 9
370: "iablk shmrglr ycednavdr rgnj" 10
421: "iablk shmrglr ycednavdr ngnj" 11
488: "iablk shmiglr ycednavdr ngnj" 12
509: "iablk shmiglr ycednavdr nglj" 13
590: "pablk shmiglr ycednavdr nglj" 14
650: "pablk shmiglr yceunavdr nglj" 15
738: "pablk shmiglr yceunavdrnnglj" 16
796: "pablk somiglr yceunavdrnnglj" 17
815: "pablk somiglr yceunavdonnglj" 18
893: "parlk somiglr yceunavdonnglj" 19
1033: "parlk somiglr yceunavdonngla" 20
1237: "parlk somiglr yceuna donngla" 21
1280: "parlk somigli yceuna donngla" 22
1297: "parmk somigli yceuna donngla" 23
1358: "parmk somigli aceuna donngla" 24
1712: "parmk somigli ac una donngla" 25
1982: "parmk somigli ad una donngla" 26
2466: "parmk somigli ad una donnola" 27
2476: "parmi somigli ad una donnola" 28
###Markdown
With this simple change we are able to learn the correct sentence in just a few thousand attempts. Around three thousand attempts is definitely a much better result than before! Still, trying them one at a time is a bit tedious; why don't we try proposing more of them at the same time instead? Let's try proposing 100 in one go:
###Code
def migliore(candidati):
ordinati = sorted(candidati,key=lambda tup: tup[1], reverse=True)
return ordinati[0]
def genera_candidati(num_candidati):
candidati = []
for i in range(0,num_candidati):
tmp_frase = genera_frase()
tmp_risposta = valuta(tmp_frase)
candidati.append((tmp_frase,tmp_risposta))
return candidati
candidati = genera_candidati(100)
i=0
miglior_frase, miglior_risultato = migliore(candidati)
while(miglior_risultato < len(amleto)):
i = i+1
for n in range(0,len(candidati)):
frase,risposta = candidati[n]
nuova_frase = altera(frase)
nuova_risposta = valuta(nuova_frase)
if nuova_risposta > risposta:
candidati[n] = (nuova_frase,nuova_risposta)
if nuova_risposta > miglior_risultato:
miglior_risultato = nuova_risposta
miglior_frase = nuova_frase
print(str(i)+':\t"'+''.join(miglior_frase)+'"\t'+str(miglior_risultato))
###Output
15: "xorsdpsecxrln cpduuqbzqnhfie" 6
42: "xorsdpsocxrln cpduuqbzqnhfie" 7
65: "xorsdpsocxrln cp uuqbzqnhfie" 8
72: "xorsdpsocirln cp uuqbzqnhfie" 9
83: "xorsdpsocirln cp uuq zqnhfie" 10
107: "xorsdpsocirln cp uuq zqnhfle" 11
129: "xorsdpsocirln cp uua zqnhfle" 12
175: "xorsipsocirln cp uua zqnhfle" 13
204: "xorsipsocirln cp uua zqnnfle" 14
258: "xorsipsocirln cp uua zqnnole" 15
261: "xorsipsocigln cp uua zqnnole" 16
320: "porsipsocigln cp uua zqnnole" 17
429: "pavyi smmigli gs orj gonnoba" 18
438: "pavyi smmigli gs ora gonnoba" 19
439: "pavyi smmigli gs ura gonnoba" 20
566: "pavyi smmigli gs ura gonnola" 21
567: "paryi smmigli gs ura gonnola" 22
710: "padmi somrgli sm una donnnla" 23
754: "parjv somigli ao una dondola" 24
835: "padmi somigli sd una donnnla" 25
999: "padmi somigli sd una donnola" 26
1122: "par i somigli ad una donnola" 27
1400: "parmi somigli ad una donnola" 28
###Markdown
Wow, now I've managed to get down to around a thousand, but have I really gained anything in terms of the number of attempts needed? After all, I now make 100 attempts at a time, so why doesn't it take 100 times less? In fact, it is unfair to count only the number of times I hand my attempts to the teacher; I should instead count how many times the teacher evaluates my attempts! Let's take a look:
###Code
def valuta( candidato ):
global valutazioni
valutazioni = valutazioni + 1
azzeccate = 0
for (lettera1, lettera2) in zip(candidato, amleto):
if lettera1 == lettera2:
azzeccate = azzeccate + 1
return azzeccate
def prova_piu_frasi_insieme(num_frasi):
global valutazioni
valutazioni = 0
i=0
candidati = genera_candidati(num_frasi)
miglior_frase, miglior_risultato = migliore(candidati)
while(miglior_risultato < len(amleto)):
i = i+1
for n in range(0,len(candidati)):
frase,risposta = candidati[n]
nuova_frase = altera(frase)
nuova_risposta = valuta(nuova_frase)
if nuova_risposta > risposta:
candidati[n] = (nuova_frase,nuova_risposta)
if nuova_risposta > miglior_risultato:
miglior_risultato = nuova_risposta
miglior_frase = nuova_frase
print(str(i)+':\t"'+''.join(miglior_frase)+'"\t'+str(miglior_risultato))
print('Valutazioni totali: '+str(valutazioni))
prova_piu_frasi_insieme(100)
###Output
4: "ahmliqwqfiwdbpjvbunu hier va" 6
9: "arrso vyrghrpuntumta econolx" 7
10: "arrso vyrggrpuntumta econolx" 8
36: "arrso vyrggrpuatumta econolx" 9
69: "arrso vyrggrpuat mta econolx" 10
103: "arrso vyrggrpuad mta econolx" 11
150: "lsx i zflpgfuxme nnajdennola" 12
177: "psx i zflpgfuxme nnajdennola" 13
180: "psx i zflpgfuxmd nnajdennola" 14
270: "pzmmu soliglmpcg unqwxopnoha" 15
300: "pbrmgtsofeglcba anc ronno a" 16
349: "pbrmgtsofeglcba anc donno a" 17
366: "paoii svmiwbxbat uns doncola" 18
387: "paoii svmiwbxbat una doncola" 19
388: "paoii svmiwbx at una doncola" 20
419: "paoii svmiwbx at una donnola" 21
430: "paoii svmiwbi at una donnola" 22
577: "phrmi lomigle ad una sonlola" 23
654: "paomi svmigbi at una donnola" 24
820: "phrmi somigle ad una donlola" 25
958: "varmi somigli adluna donnola" 26
965: "varmi somigli ad una donnola" 27
1141: "parmi somigli ad una donnola" 28
Valutazioni totali: 114200
###Markdown
Damn! Yes, it's true that I now need about a third of the iterations, but that poor teacher of mine has to evaluate an absurd number of extra attempts, around 130000! Before, when I proposed one sentence at a time, about 3000 were enough. And since I have to wait for him to finish before I can try new sentences, I'm shooting myself in the foot!

Besides, what is this mess in my initial attempts? How can they change so much from one to the next? I only modify one letter at a time! Wait... oh right, since I have many sentences at the same time, some of them may have found a few correct letters but then been overtaken a few iterations later by others. After all, I only care about who the best candidate is: if some rival happens to find even one more letter, my poor champion is tossed into oblivion. But why do things look much more "uniform" towards the end, the way I expected them to? Hmm... it could be that at the beginning it was easy to improve, since most of the letters were wrong. Towards the end, instead, only one or two letters are wrong, so it is hard to make progress: you need the luck of hitting both the right position and the missing letter!

Maybe I had better take a look at what happens to my sentences, not just to the champion:
###Code
import pprint
pp = pprint.PrettyPrinter()
def stampa_candidati(candidati):
    # candidati -> arrays of chars; turn them into strings with ''.join(...)
# [' ', 'x', 'p', 'l', 'f', … ,'d', 'z', 'h', 'f'] -> ' xplfrvvjjvnmzkovohltroudzhf'
stringhe_e_valori = list(map(lambda x : (''.join(x[0]),x[1]), candidati))
    # for convenience, sort the strings by number of correct letters, in descending order
stringhe_ordinate = sorted(stringhe_e_valori,key=lambda tup: tup[1], reverse=True)
pp.pprint(stringhe_ordinate)
stampa_candidati(genera_candidati(10))
###Output
[('xktlybaunjdoptpnkanfavqepofo', 2),
('upimoitniasy stjxduttgqiidpl', 1),
('nzcwjefayqbknovijemanawszint', 1),
('rqusswassibffzvjhdw enipiv p', 1),
('zsqilwb jamhibkslzaefzpkeeoc', 1),
('hqxmlcvvthuwjcwsxhiqgp tixif', 1),
('s bccheqrzmsjopfvtglhmzkwhlu', 1),
('zmlkolrgesdgyywjkrdldosse dw', 0),
('xzijjgdzxmudsdusfgpilbgvfmaw', 0),
('qyqlyvydapsfahrjsvmybpfhaw l', 0)]
###Markdown
Let's see what happens to our candidates when I consider 10 of them at a time:
###Code
def prova_piu_frasi_insieme(num_frasi):
global valutazioni
valutazioni = 0
i=0
candidati = genera_candidati(num_frasi)
miglior_frase, miglior_risultato = migliore(candidati)
while(miglior_risultato < len(amleto)):
i = i+1
for n in range(0,len(candidati)):
frase,risposta = candidati[n]
nuova_frase = altera(frase)
nuova_risposta = valuta(nuova_frase)
if nuova_risposta > risposta:
candidati[n] = (nuova_frase,nuova_risposta)
if nuova_risposta > miglior_risultato:
miglior_risultato = nuova_risposta
miglior_frase = nuova_frase
print(str(i)+':\t"'+''.join(miglior_frase)+'"\t'+str(miglior_risultato))
stampa_candidati(candidati)
print('Valutazioni totali: '+str(valutazioni))
prova_piu_frasi_insieme(10)
###Output
3: "vaolovbodpnei vcjiijidcfaqib" 5
[('vaolovbodpnei vcjiijidcfaqib', 5),
(' pbczuemtkksjcvfwrvy kglalp', 2),
('bsdhrasaupoifdqpdwzr qnlczjl', 2),
('eocfi cznxdvexgbi lvscaaiqoq', 2),
('jkoamjeqgmfunjo cjnplagmyhq', 1),
('wmikitkmpkyyjsbtanmibpmgtqrk', 1),
('ogvktuph njgqftydseeawlnuhew', 1),
('kqwmzlaeuyrvhyhwgebte cmx gg', 1),
('tzwolmzhojqcpljqwpgkncfww dq', 0),
('ymkbbiewpulzfs jwfpskmtpiznu', 0)]
32: "varlovbodpnei vcjiijidcfaqib" 6
[('varlovbodpnei vcjiijidcfaqib', 6),
(' pbmzuemtkksjcvfwrvy knlalp', 4),
('pqwmzlaeuyrlhyhwgebte cmx gg', 3),
('bsdhrasaupgifdqpdwzr qnlczjl', 3),
('jkoamjeqgmfun o cjnplagmyhq', 2),
('wmikitkmpiyyjsbtanmibpmgtqrk', 2),
('ogvktuph njgqftydseeawonuhew', 2),
('eocfi cznxdvexgbi lvscaaiqoq', 2),
('ymrbbiewpulzfs jwfpskmtpiznu', 1),
('tzwolmzhojqcpljqwpgkncfww dq', 0)]
40: "varlovbodpnei vcjiijidcfaqlb" 7
[('varlovbodpnei vcjiijidcfaqlb', 7),
(' pbmzuemtkksjcvfwrvy knlalp', 4),
('pgvktuph njgqftydseeawonuhew', 3),
('pqwmzlaeuyrlhyhwgebte cmx gg', 3),
('bsdhrasaupgifdqpdwzr qnlczjl', 3),
('eocfi cznxdvexgbi nvscaaiqoq', 3),
('jkoamjeqgmfun o cjnplagmyhq', 2),
('wmikitkmpiyyjsbtanmibpmgtqrk', 2),
('ymrbbiewpulzfs jwfps mtpiznu', 2),
('tzwolmzhojqcpljqwpgkncfww dq', 0)]
43: "varlovbodpnei vcjiijidcfnqlb" 8
[('varlovbodpnei vcjiijidcfnqlb', 8),
(' pbmzuemtkksjcvfwrvy knlalp', 4),
('pgvktuph njgqftydseeawonuhew', 3),
('pqwmzlaeuyrlhyhwgebte cmx gg', 3),
('bsdhrasaupgifdqpdwzr qnlczjl', 3),
('eocfi cznxdvexgbi nvscaaiqoq', 3),
('jkoamjeqgmfun o cjnplagmyhq', 2),
('wmikitkmpiyyjsbtanmibpmgtqrk', 2),
('ymrbbiewpulzfs jwfps mtpiznu', 2),
('tzwolmzhojqcpljqwpgkncfww dq', 0)]
75: "varlovbodpnei vc iijidcfnqlb" 9
[('varlovbodpnei vc iijidcfnqlb', 9),
('pqwmzlaeuyrlhyhwgente cmxogg', 5),
('wmikitkmpiyyisbtannibpmgtqrk', 4),
('pgvktuph njgqftydseeawonuoew', 4),
(' pbmzuemtkksjcvfwrvy knlalp', 4),
('bsdhrasaupgifdqddwzr qnlczjl', 4),
('eocfi cznxdvixgbi nvscaaiqoq', 4),
('yarbbiewpulzfs jwfps mtpiznu', 3),
('jkoamjeqgmfun o cjnplagmyhq', 2),
('tzwolmzhojqcpljqwpgkncfww dq', 0)]
86: "varlovbodpnei vc iijidcfnolb" 10
[('varlovbodpnei vc iijidcfnolb', 10),
('pqwmzlseuyrlhyhwgente cmxogg', 6),
(' pbmzuemtkksj vfwrvy knlalp', 5),
('wmikitkmpiyyisbtannibpmgtqrk', 4),
('pgvktuph njgqftydseeawonuoew', 4),
('bsdhrasaupgifdqddwzr qnlczjl', 4),
('eocfi cznxdvixgbi nvscaaiqoq', 4),
('yarbbiewpulzfs jwfps mtpiznu', 3),
('jkoamjeqgmfun o cjnplagmyhq', 2),
('tzwolmzhojqcpljqwpgkncfww dq', 0)]
89: "varlovbodinei vc iijidcfnolb" 11
[('varlovbodinei vc iijidcfnolb', 11),
('pqwmzlseuyrlhyhwgente cmxogg', 6),
('wmikitkmpiyyisbtannibpogtqrk', 5),
(' pbmzuemtkksj vfwrvy knlalp', 5),
('bsdhrasampgifdqddwzr qnlczjl', 5),
('pgvktuph njgqftydseeawonuoew', 4),
('eocfi cznxdvixgbi nvscaaiqoq', 4),
('yarbbiewpulzfs jwfps mtpiznu', 3),
('jkoamjeqgmfun o cjnplagmyhq', 2),
('tzwolmzhojqcpljqwpgkncfwn dq', 1)]
129: "varlovbodinei vc iiaidcfnolb" 12
[('varlovbodinei vc iiaidcfnolb', 12),
('pqwmzlseuyrlhyhwgente cmxogg', 6),
('eocfi cznxdlixgbiunvscaaiqoq', 6),
('wmikitkmpiyyisbtannibpogtqrk', 5),
('pgvkiuph njgqftydseeawonuoew', 5),
(' pbmzuemtkksj vfwrvy knlalp', 5),
('bsdhrasampgifdqddwzr qnlczjl', 5),
('jkoamjeqgmfun o cnnplagmyhq', 3),
('yarbbiewpulzfs jwfps mtpiznu', 3),
('tzwolmzhojqcpljqwpgkncfwn dq', 1)]
164: "varlovbodigei vc iiaidcfnolb" 13
[('varlovbodigei vc iiaidcfnolb', 13),
('wmikitkmpigyisbtannibpogtqrk', 6),
('pqwmzlseuyrlhyhwgente cmxogg', 6),
('bsdhrasampgifdqddwnr qnlczjl', 6),
('eocfi cznxdlixgbiunvscaaiqoq', 6),
('pgvkiuph njgqftydseeawonuoew', 5),
(' pbmzuemtkksj vfwrvy knlalp', 5),
('jkoamjeqgmfun o cnnplagmyhq', 3),
('pzwolmzhojqcpljqwpgancfwn dq', 3),
('yarbbiewpulzfs jwfps mtpiznu', 3)]
165: "varlovbodigei vd iiaidcfnolb" 14
[('varlovbodigei vd iiaidcfnolb', 14),
('wmikitkmpigyisbtannibpogtqrk', 6),
('pqwmzlseuyrlhyhwgente cmxogg', 6),
('bsdhrasampgifdqddwnr qnlczjl', 6),
('eocfi cznxdlixgbiunvscaaiqoq', 6),
('pgvkiuph njgqftydseeawonuoew', 5),
(' pbmzuemtkksj vfwrvy knlalp', 5),
('jkoamjeqgmfun o cnnplagmyhq', 3),
('pzwolmzhojqcpljqwpgancfwn dq', 3),
('yarbbiewpulzfs jwfps mtpiznu', 3)]
205: "parlovbodigei vd iiaidcfnolb" 15
[('parlovbodigei vd iiaidcfnolb', 15),
('pgrki ph njgqftydseeawonuoew', 7),
(' pbmz emtkksj vfwrvy knlala', 7),
('eocfi cznxdlixgbiunv caaiqoq', 7),
('jkoamjeqgifui o unnplagmyhq', 6),
('wmikitkmpigyisbtannibpogtqrk', 6),
('pqwmzlseuyrlhyhwgente cmxogg', 6),
('bsdhrasampgifdqddwnr qnlczjl', 6),
('yarbbiswpulzfsajwfps mtpiznu', 5),
('pzwolmzhmjqcpljqwpgancfwn dq', 4)]
235: "parlovbodigei ad iiaidcfnolb" 16
[('parlovbodigei ad iiaidcfnolb', 16),
('jkoamjeqgigui od unnplagmyhq', 8),
('eorfi cznxdlixgbiunv caaiqoq', 8),
('wmrkitkmpigyisbtannibpogtqrk', 7),
('pgrki ph njgqftydseeawonuoew', 7),
(' pbmz emtkksj vfwrvy knlala', 7),
('yarbbiswpulzfsadwfps mtpizlu', 7),
('bsdhrasampgifdqddwnr dnlczjl', 7),
('pzwolmzhmiqcpljqwpga cfwn dq', 6),
('pqwmzlseuyrlhyhwgente cmxogg', 6)]
351: "parlovbodigei ad iiaidcfnola" 17
[('parlovbodigei ad iiaidcfnola', 17),
('jkramjsqgigui od unnplonmyha', 13),
('yarbb swpigzfsad fns mtpizlu', 12),
(' pbmz emtigsj vdwrvy knnala', 11),
('pzwol zomiqcpljd pga cfwnodq', 11),
('bsdhiasamigifdqddunr dnlczjl', 10),
('eormi cznxdlixgb unv caaiqoq', 10),
('wmrmitkmpigyisbtannibpogtork', 9),
('pqwmzlseuyrlhyhwgent cmnolg', 9),
('pgrki phmnjgqftydseeawonuoew', 8)]
448: "parlovbodigei ad inaidcfnola" 18
[('parlovbodigei ad inaidcfnola', 18),
('jkramjsqgigli od unnplonmoha', 15),
('parol zomiqcp jd pga cfwnolq', 15),
('badhi samiglfdqddunr dnlnzll', 15),
('yarbb swpigzfsad fns dtpizlu', 13),
(' pbmz smtigsj vdwrvy knnala', 12),
('wmrmitkmpigyisbtannibdognork', 11),
('eormi conxdlixgb unv caaiqoq', 11),
('pgrmi phmnjgqftydseeawonuoew', 9),
('pqwmzlseuyrlhyhwgent cmnolg', 9)]
490: "parlo bodigei ad inaidcfnola" 19
[('parlo bodigei ad inaidcfnola', 19),
('jkramjsqgigli od unn lonmola', 17),
('parol zomiqlp jd pga cfwnola', 17),
('badhi samiglfdqddunr dnlnzll', 15),
('pmrmitkopigyisbtannibdognork', 13),
(' pbmi smtigsj vdwrvy knnala', 13),
('yarbb swpigzfsad fns dtpizlu', 13),
('eormi conxdlixab unv caaiqoq', 12),
('pgrmi phmnjgqftydsee wonuoew', 10),
('pqwmilseuyrlhyhwgent cmnolg', 10)]
527: "parlo sodigei ad inaidcfnola" 20
[('parlo sodigei ad inaidcfnola', 20),
('parml somiqlp jd pga cfwnola', 19),
('pkramjsqgigli od unn lonmola', 18),
('badhi samiglfdqddunr dnlnzll', 15),
('yarbb swmigzfsad fns dtpizlu', 14),
('pmrmitkopigyisbtannibdognork', 13),
(' pbmi smtigsj vdwrvy knnala', 13),
('eormi conxdli ab unv caaiqoq', 13),
('pqwmilsemyrlhyhwgent cmnolg', 11),
('pgrmi phmnjgqftydsee wonuoew', 10)]
597: "parli sodigei ad inaidcfnola" 21
[('parli sodigei ad inaidcfnola', 21),
('pkramjsogigli od unn lonnola', 20),
('parml somiqlp ad pga cfwnola', 20),
('yarbb swmigzfsad fns dtniola', 17),
('badhi samiglfdqddunr dnlnzll', 15),
('pmrmitkopigyisbtanni dognork', 14),
('eormi conxdli ab unv caaiooq', 14),
(' pbmi smtigsj vdwrvy knnala', 13),
('pawmilsemyrlhyhwgent dcmnolg', 13),
('pgrmi phmnjgqftydsee wonuolw', 11)]
712: "pkrmmjsogigli od una lonnola" 22
[('pkrmmjsogigli od una lonnola', 22),
('parml somiqlp ad pna cfwnola', 21),
('parli sodigei ad inaidcfnola', 21),
('yarmb swmigzisad fna dtniola', 20),
('badhi somiglfdqddunr dnnnzll', 17),
('pmrmi kopigyisbdanni dognork', 16),
('parmilsemyrlh hdgent dcmnolg', 16),
('eormi sonxdli ab unv daaiooq', 16),
('pgrmi phmnjgqfty uee wonuola', 14),
(' abmi smtigsj vdwrvy knnala', 14)]
753: "parli somigei ad inaidofnola" 23
[('parli somigei ad inaidofnola', 23),
('pkrmmjsogigli od una lonnola', 22),
('parml somiqlp ad pna cfwnola', 21),
('yarmb swmigzisad fna dtnnola', 21),
('badhi somiglfdqddunr dnnnzla', 18),
('pmrmi kopigyisbdanni donnork', 17),
('parmilsemyrlh adgent dcmnolg', 17),
('eormi sonxdli ab unv daaiooq', 16),
('pgrmi phmnjgifty uee wonuola', 15),
(' abmi smtigsj vdwrvy knnala', 14)]
783: "parli somigei ad inaidonnola" 24
[('parli somigei ad inaidonnola', 24),
('pkrmijsogigli od una lonnola', 23),
('parml somiqli ad pna dfwnola', 23),
('yarmb swmigzisad fna dtnnola', 21),
('padhi somiglfdqddunr dnnnzla', 19),
('pmrmi kopigyisbdanni donnork', 17),
('parmilsemyrlh adgent dcmnolg', 17),
('pgrmi phmnjgifty uea wonuola', 16),
('eormi sonxdli ab unv daaiooq', 16),
(' abmi smtigsj vdwuvy knnala', 15)]
952: "parml somigli ad una dfwnola" 25
[('parml somigli ad una dfwnola', 25),
('pkrmijsogigli od una donnola', 24),
('parli somigei ad inaidonnola', 24),
('parmb swmigzisad fna dtnnola', 22),
('padhi somiglfdqdduna donnzla', 21),
('parmilsemyrlh ad ent domnolg', 19),
('eormi somxgli ab unv daaiolq', 19),
('parmi pomnjgifty uea wonuola', 18),
(' abmi sotigsj vdwuvy donnala', 18),
('pmrmi kopigyisbdanni donnork', 17)]
1096: "pkrmijsomigli ad una donnola" 26
[('pkrmijsomigli ad una donnola', 26),
('parml somigli ad una dfwnola', 25),
('parli somigei ad unaidonnola', 25),
('parmb swmigzi ad fna dtnnola', 23),
('parmilsemirli ad ent donnola', 23),
('padhi somiglfdqd una donnzla', 22),
('parmi pomijgi td uea wonuola', 21),
('eormi somigli ab una daaiolq', 21),
(' armi sotigsj vdwuny donnala', 20),
('pmrmi sopigyisbdanna donnork', 19)]
1256: "pkrmi somigli ad una donnola" 27
[('pkrmi somigli ad una donnola', 27),
('parmi somiglj ad uny donnala', 25),
('parml somigli ad una dfwnola', 25),
('parli somigei ad unaidonnola', 25),
('parmi somiglfdqd una donnzla', 24),
('parmb swmigzi ad fna dtnnola', 23),
('parmilsemirli ad ent donnola', 23),
('parmi pomiggi td uea wonuola', 22),
('eormi somigli ab una daaiola', 22),
('pmrmi sopigyisbd nna donnora', 21)]
1560: "parmi somigli ad una donnola" 28
[('parmi somigli ad una donnola', 28),
('pkrmi somigli ad una donnola', 27),
('parmi somiglj ad uny donnola', 26),
('parmi somigli ad una dfwnola', 26),
('parmi somiglidqd una donnzla', 25),
('parmi pomiggi td una wonnola', 24),
('parmb swmigzi ad fna donnola', 24),
('parmilsemirli ad ena donnola', 24),
('eormi somigli ad una daniola', 24),
('pmrmi sopigyisbd nna donnora', 21)]
Valutazioni totali: 15610
###Markdown
Eh sì è come pensavo, quello che era il mio campione ha fallito a trovare un miglioramento e quindi un suo rivale è passato in vantaggio!Ma perchè devono competere tra di loro! Io voglio solo trovare la frase che vuole l’insegnante, se collaborassero sarebbe molto più semplice...Posso far si che condividano le parti corrette che ognuno di loro ha trovato? Diamine se solo sapessi quali sono! Maledetto insegnante che mi dici solo quante lettere azzecco.Mmmm e se facessi mescolare tra di loro le varie frasi sperando che mettano insieme le parti corrette che hanno trovato? Potrei prendere due frasi a caso e costruirne una nuova prendendo lettere da una o l’altra. Sì, sembra una buona idea, alla fine se entrambe le frasi hanno trovato una lettera giusta non importa da chi pesco, la nuova frase avrà senz’altro quella lettera, almeno posso star tranquillo che non rovinerò le soluzioni che trovano!
###Code
def mescola(frase1, frase2):
nuova_frase = []
for i in range(0,len(frase1[0])):
if random.random() > 0.5:
nuova_frase.append(frase1[0][i])
else:
nuova_frase.append(frase2[0][i])
return (nuova_frase,valuta(nuova_frase))
test_frase1 , test_frase2 = genera_frase(), genera_frase()
print('frase1: "'+''.join(test_frase1)+'"')
print('frase2: "'+''.join(test_frase2)+'"')
print('mix: "'+''.join(mescola((test_frase1,1),(test_frase2,1))[0])+'"')
###Output
frase1: "pbvywapkfeq mwmnzfzhsmbleabw"
frase2: "wwayxzhgbsc oglqnaavj sov ax"
mix: "pwayxzhgfsc mgmqnaavj bovaax"
###Markdown
Ok, ma chi mescolo tra loro? Argh…Uff,riflettiamo, senz’altro voglio la frase migliore, alla fine è quella con più lettere giuste, ma con chi potrei mescolarla? La seconda migliore? E se frasi meno “buone” avessero comunque trovato parti che alla migliore mancano? Facciamo così le scelgo a caso, ma do priorità a quelle con valori più alti. Si, mi sembra sensato, ma come faccio a farlo? T_TMmmm.... è come se volessi girare una roulette, dove chi ha un fitness più elevato ha più “spicchi” proviamo a fare uno schizzo:Mmm una frase che ha 24 di valore, merita 24 spicchi, mentre una che ha 20 ne riceverà solo 20, quindi il totale degli spicchi è la somma di tutti i valori.
###Code
def genera_ruota(candidati):
totale = 0
ruota = []
for frase,valore in candidati:
totale = totale + valore
ruota.append((totale,frase,valore))
return ruota
def gira_ruota(wheel):
totale = wheel[-1][0]
pick = totale * random.random()
for (parziale,candidato,valore) in wheel:
if parziale >= pick:
return (candidato,valore)
return wheel[-1][1:]
candidati = genera_candidati(10)
wheel = genera_ruota(candidati)
pretty_wheel = list(map(lambda x:(x[0],''.join(x[1]),x[2]),wheel))
pp.pprint(pretty_wheel)
print("migliore='"+''.join(migliore(candidati)[0])+"'")
def prova_piu_frasi_e_mescola(num_frasi):
global valutazioni
valutazioni = 0
i=0
candidati = genera_candidati(num_frasi)
miglior_frase, miglior_risultato = migliore(candidati)
while(miglior_risultato < len(amleto)):
i = i+1
ruota = genera_ruota(candidati)
nuovi_candidati = []
for n in range(0,len(candidati)):
minitorneo = [gira_ruota(ruota),gira_ruota(ruota)]
nuova_frase = altera(mescola(minitorneo[0],minitorneo[1])[0])
nuova_risposta = valuta(nuova_frase)
minitorneo.append((nuova_frase,nuova_risposta))
vincitore,valore_vincitore = migliore(minitorneo)
nuovi_candidati.append((vincitore,valore_vincitore))
if valore_vincitore > miglior_risultato:
miglior_risultato = valore_vincitore
miglior_frase = vincitore
print(str(i)+':\t"'+''.join(miglior_frase)+'"\t'+str(miglior_risultato))
stampa_candidati(candidati)
candidati = nuovi_candidati
print('valutazioni: '+str(valutazioni))
prova_piu_frasi_e_mescola(10)
###Output
1: " jrirs odnngkae poatblnssll" 6
[('ujcdtqkomrrjouzoc kae orootb', 5),
('fwlmzamow kcshywssga mjtmhrz', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('r ycfj orsfmglkuysxaho sngnj', 3),
('rliqgpxfbvbg eyxksarkxmnnoht', 3),
('uoghodvxysdvt gmyqpuptsnvmlj', 3),
('vhrvdkmkyidjiupasamlgyfwvvgz', 3),
('ryewyoktojzcg jqgwzmffonmpzc', 3),
('idlhgjenmiyqzhomrgxhgduzlrts', 3),
('pziqcidf yyambkdeuwqnozkqajp', 3),
(' uamrs jodcnqkasachavblmcsdd', 3),
('xsylozeoypfqtupzkfnpbagztqil', 2),
('hdabh qzksnlbndkhagvppzsiptt', 2),
('raofobtqgyswetwiyneguyjenjui', 2),
('fhqpwkvzzbdnnjejwjnp bqqsgo ', 2),
('fooxkmlybjulepvlwsasphomcwyq', 2),
('zsknpqmteicrxnlmazyw vglhbyl', 2),
('hlqarnsskfayzdalffchlpvvmpni', 2),
('ekhdejydpsfii pbfazporjramt ', 2),
('kcnjcarmxtlhgsoueyxgydztcplu', 2),
('zzlvnmpjrtkaoqsoacnopdqjdxay', 2),
('ofsnmhbpivyosinaqcxljjaknon ', 2),
('xipvzubyjtegldvbiuey tzvzlod', 2),
('pznkrucyqevegxyaluevnqkiphwp', 2),
('j gwtjkmrxohsgjfufrq zybrkd', 2),
('yyorymhxutxprmn uixvgnxrzeu', 2),
('nyjzgpsjgwyxhqbue hedwk asr', 2),
('tzpzpzoclfylrehhbgmlj hqzqca', 2),
('sssnlugnqipdmfvuqeqvjumzmslo', 2),
('nyetntfgrqzrszfkluuz tbkgfzy', 2),
('idzbccogmcrqrlpexavaqticfgws', 2),
('dupmxfzxbkcbahfnavnkbbbwvzzy', 2),
('c wubvhohlnpgfovoijbzadgn fq', 2),
('lqu rfnpyeagcrsfkstn k eida', 1),
('fpfcgckzboxo irkaofblhfcfhh', 1),
('oiqpohiakbnty y oaxpbssop vj', 1),
('sl yllqiwczbzduwmwti qsecjpd', 1),
('vzwmrdzuczwnxjqrttksggiscyzm', 1),
('cmvomqwytnjrumpwgynxoebptyyh', 1),
('kozaorufcyzwxtyyiuiznjfcdiys', 1),
('lsjzbnrevrkvhdde foyybguhfdm', 1),
('nmdeku uqnptbwatyxgxztgjyaop', 1),
('iduxfwrhpogtelzfzwhbcjnkbcse', 1),
('gbdabkexhgwxdyzmzwu jfodsgbg', 1),
('trooqvvrokgsvkobcjzsmgnxl ed', 1),
('wxiiin qqbsxoqkkpjrfehuspjgn', 1),
('wzwxujmtiwusiisosia l ayrqcf', 1),
('tkrkwvgivpozplxcxolxkcsgqqdc', 1),
('phxigurqqqthqxpuxlhujgyrrxdt', 1),
('olrr fxjwvjqcyywfvrycggxgyat', 1),
('wlibyqqfdlablmbmmvnzrmujvjvl', 1),
('aaezutyaqotednyaxfobrf ucnyn', 1),
('lgbdqmnhjv cvhqkpm ofcwuopia', 1),
('zevxemrvpoctndslrcgbeo nhrmk', 1),
('cwpfayxsdfdxxzagufpinuyvtunu', 1),
('ugxysxbnngbotszxcqdoblxxpoil', 1),
('bxzlzqqairmfaanubvpzehtnahyl', 1),
('zegyoqjvmddbhplcxdllqfcuuifp', 1),
('hxxtniuiptuwdeozmz fideeaitp', 1),
('qglcxadmmnmduiparlf yefcdqyu', 1),
('xhngdryqziseadzgycbmtpyvrgjm', 1),
('sd yu mxdv goewkutqpvchg jue', 1),
('pdxnymkbb dqnlfveebddagxztzm', 1),
('idqoqmmalrkalbvfyxl rcsqnzjf', 1),
('ycmklrycghovdmozkjibqajlexsb', 0),
('kbwisfciajohpdsrcmbrxwusacgd', 0),
('hbdruivfhkhnxeffjlcjfkklddwo', 0),
('nccrsmilgtvcrdrrreadckplwwji', 0),
('ayc cbpesbjqupjunhghqkfr cge', 0),
('x afoxbqjnybyfbccckbgqtoyjvm', 0),
('hiduavargjvmklztftwsuwatmpos', 0),
('ulereudkodzdpgo uqcitemyonph', 0),
('fjccltytqainhi qywfdsbxmo zh', 0),
('uevcjqeqapwjrprxphh lpzpcl o', 0),
('tqvqmxwhtks gmjvsdtergqzjttr', 0),
('ivmfwilkoayoqptslfedxrgzmuef', 0),
('nqpqnzqbzohgtgkbkoopetlxv kf', 0),
('esjsxnevgpxphtpxcyutgcwmzjyq', 0),
('ifvyagzroah epfqljlwltfsucwg', 0),
('jwptqgx ytypezcqiptyhsvprkqu', 0),
(' cuwnrjxstzbryoyzgyocilmridv', 0),
('luhkruthk dzfinwppefgtaivudv', 0),
('ubfrduknprftsptost mrhikfpur', 0),
('ednzewkuytohueuhkcrsukitabor', 0),
(' loibyedzzcrymyxniqlacrbtnpx', 0),
('blljcwxddxbjjqcnbvgjqgsfo qm', 0),
('vbzywaapacpmqwnbtcpjyxcuwzzc', 0),
('tncgkhegnwhraairnwscyvjjxjqh', 0),
('ivpvglkbvqooroqzqhljkiqbbaat', 0),
('wsybfmi gycjsuk pps vxyelcwe', 0),
(' mawkpxbzub lwruzxkigfkgvzcj', 0),
('lsznwerbkwdfenyzigccspfcjbbg', 0),
('lwmsdfveaxumvxmcgbypxcsc zpo', 0),
('hgxbzvtbxkugwaweiqidxcquhsei', 0),
('uwmobeucoqtxbmnjbeqyznpyjmao', 0),
('epdbyllsrhxtpujowzc gqsfgkdj', 0),
('sjljjyrghqc zggsklm yqadqwu ', 0),
('dnzoasbeekktouxpuof gxcwoqwg', 0),
('uyzkghi hzcuncdjxqaeygh mmtu', 0)]
2: " hlibs opphikee pnabbynsolc" 7
[(' jrirs odnngkae poatblnssll', 6),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('khrvbjmkyidjiwpa enlbycwxvgz', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('fwlmzamow kcshywssga mjtmhrz', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('soghoivxpsuvd omyq fpdsnamlp', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('vhrvdzmcyidjiuhasarlg hwzqca', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('fwlmzamow kcshywssga mjtmhrz', 4),
('iziqcwdh ygtmbadzuhbconkqaje', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('fwlmzamow kcshywssga mjtmhrz', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('fwlmzamow kcshywssga mjtmhrz', 4),
('pzo cmhfutxpmmnd uixngnkraeu', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('fwlmzamow kcshywssga mjtmhrz', 4),
('r tcmj orvfmglkaqsxajj snonj', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('rzpcpz orsflgehlbgxajo szqna', 4),
(' uamrs jodcnqkasachavblmcsdd', 3),
('vhrvdkmkyidjiupasamlgyfwvvgz', 3),
('caooobwqgyswetpegynxuebpnjuh', 3),
('vhrvdkmkyidjiupasamlgyfwvvgz', 3),
('uoghodvxysdvt gmyqpuptsnvmlj', 3),
('r ycfj orsfmglkuysxaho sngnj', 3),
('idlhgjenmiyqzhomrgxhgduzlrts', 3),
('ryewyoktojzcg jqgwzmffonmpzc', 3),
('dxpmifwxqbsxohknpvnkebbsvjzn', 3),
('uoghodvxysdvt gmyqpuptsnvmlj', 3),
('hzjkgusjgeyxhqbaeu vedwkphwr', 3),
('idlhgjenmiyqzhomrgxhgduzlrts', 3),
('idlhgjenmiyqzhomrgxhgduzlrts', 3),
(' uamrs jodcnqkasachavblmcsdd', 3),
('aalzumpartkakqyaxfnbrd jcnan', 3),
('vhrvdkmkyidjiupasamlgyfwvvgz', 3),
('pziqcidf yyambkdeuwqnozkqajp', 3),
('oyqpnhfakqzts f luxz tsopyvy', 3),
('uoghodvxysdvt gmyqpuptsnvmlj', 3),
('fooxkmlybjulepvlwsasphomcwyq', 2),
('nyjzgpsjgwyxhqbue hedwk asr', 2),
('nyetntfgrqzrszfkluuz tbkgfzy', 2),
('xsylozeoypfqtupzkfnpbagztqil', 2),
('idzbccogmcrqrlpexavaqticfgws', 2),
('nyetntfgrqzrszfkluuz tbkgfzy', 2),
('xsylozeoypfqtupzkfnpbagztqil', 2),
('c wubvhohlnpgfovoijbzadgn fq', 2),
('nyjzgpsjgwyxhqbue hedwk asr', 2),
('kcnjcarmxtlhgsoueyxgydztcplu', 2),
('nyetntfgrqzrszfkluuz tbkgfzy', 2),
('kcnjcarmxtlhgsoueyxgydztcplu', 2),
('j gwtjkmrxohsgjfufrq zybrkd', 2),
('nyetntfgrqzrszfkluuz tbkgfzy', 2),
('hlqarnsskfayzdalffchlpvvmpni', 2),
('nyetntfgrqzrszfkluuz tbkgfzy', 2),
('xipvzubyjtegldvbiuey tzvzlod', 2),
('idzbccogmcrqrlpexavaqticfgws', 2),
('xsylozeoypfqtupzkfnpbagztqil', 2),
('dupmxfzxbkcbahfnavnkbbbwvzzy', 2),
('yyorymhxutxprmn uixvgnxrzeu', 2),
('hdabh qzksnlbndkhagvppzsiptt', 2),
('kcnjcarmxtlhgsoueyxgydztcplu', 2),
('raofobtqgyswetwiyneguyjenjui', 2),
('hdabh qzksnlbndkhagvppzsiptt', 2),
('tzpzpzoclfylrehhbgmlj hqzqca', 2),
('xipvzubyjtegldvbiuey tzvzlod', 2),
('raofobtqgyswetwiyneguyjenjui', 2),
('dupmxfzxbkcbahfnavnkbbbwvzzy', 2),
('dupmxfzxbkcbahfnavnkbbbwvzzy', 2),
('zzlvnmpjrtkaoqsoacnopdqjdxay', 2),
('zsknpqmteicrxnlmazyw vglhbyl', 2),
('xipvzubyjtegldvbiuey tzvzlod', 2),
('ofsnmhbpivyosinaqcxljjaknon ', 2),
('xsylozeoypfqtupzkfnpbagztqil', 2),
('kcnjcarmxtlhgsoueyxgydztcplu', 2),
('hdabh qzksnlbndkhagvppzsiptt', 2),
('qelcxadvmoctndsarlf eh ndryk', 2),
('lgbdqmnhjv cvhqkpm ofcwuopia', 1),
('kozaorufcyzwxtyyiuiznjfcdiys', 1),
('vzwmrdzuczwnxjqrttksggiscyzm', 1),
('phxigurqqqthqxpuxlhujgyrrxdt', 1),
('xhngdryqziseadzgycbmtpyvrgjm', 1)]
3: " jmdbq omppjikeo pnab orsolb" 9
[(' hlibs opphikee pnabbynsolc', 7),
('yqrmknfjodcngqae phuvblncvll', 6),
(' jrirs odnngkae poatblnssll', 6),
('vorvoivkyidji pasqmuptfnvvlz', 6),
('ujcdqnfomrrjguzocpouevonoolb', 6),
('ujctbjkbmyrjiweq enae yrooxb', 6),
(' jrirs odnngkae poatblnssll', 6),
('bjgdoikomruvd zmc kfe oraolb', 6),
('kjlttjeo yrjineo nue orootb', 6),
('ujodkqfoywrjgqzoc oae onsoll', 6),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('fwjmza ow kxshywssga dwkmarz', 5),
('oyltnhfakypti fqlenz ayopoxo', 5),
('yjrhbnf ywnhgwee pnutvynxklc', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('khrvbjmkyidjiwpa enlbycwxvgz', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('tzozcmhfltxlrmhd umxngnkzdea', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('yjshknv ysuvg om q utdqnsvlp', 5),
('vhrhanmcyidjguse aolt hwzvla', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('fogmzdmowsdct myqgaptstvhlz', 5),
('y rhkjfoysfzglseysoato nkvlj', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('khrvbjmkyidjiwpa enlbycwxvgz', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('hjrbknqzkpnlbqse pgutvznspll', 5),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('fwlmzamow kcshywssga mjtmhrz', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('fwlmzamow kcshywssga mjtmhrz', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('iziqcwdh ygtmbadzuhbconkqaje', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('fwlmzamow kcshywssga mjtmhrz', 4),
('r tcmj orvfmglkaqsxajj snonj', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('fwlmzamow kcshywssga mjtmhrz', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('rzpcpz orsflgehlbgxajo szqna', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('vhrvdzmcyidjiuhasarlg hwzqca', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('r tcmj orvfmglkaqsxajj snonj', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('fwlmzamow kcshywssga mjtmhrz', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('fwlmzamow kcshywssga mjtmhrz', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('usylodeoypfvtugmkfnupdsntmil', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('rzpcpz orsflgehlbgxajo szqna', 4),
('vhrzgpskywyxiupasakhedfwvagz', 4),
('vhrvdzmcyidjiuhasarlg hwzqca', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('fwlmzamow kcshywssga mjtmhrz', 4),
('iziqcwdh ygtmbadzuhbconkqaje', 4),
('yjrhknf ywnzgqse poutvqnsvll', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('rzpcpz orsflgehlbgxajo szqna', 4),
('khltbjev yphiweq enubayxxoxc', 4),
('uoghodvxysdvt gmyqpuptsnvmlj', 3),
('uoghodvxysdvt gmyqpuptsnvmlj', 3),
('xsybozooypfqrupekanaqaicfqxl', 3),
('r ycfj orsfmglkuysxaho sngnj', 3),
('vhrvdkmkyidjiupasamlgyfwvvgz', 3),
('ryewyoktojzcg jqgwzmffonmpzc', 3),
('idlhgjenmiyqzhomrgxhgduzlrts', 3),
('uoghodvxysdvt gmyqpuptsnvmlj', 3),
('tlpzrzscsfalzdalfgmllphqzqci', 3),
('xipvzubyjtegldvbiuey tzvzlod', 2),
('hdabh qzksnlbndkhagvppzsiptt', 2),
('hlqarnsskfayzdalffchlpvvmpni', 2),
('qelcxadvmoctndsarlf eh ndryk', 2),
('yyorymhxutxprmn uixvgnxrzeu', 2),
('zzlvnmpjrtkaoqsoacnopdqjdxay', 2)]
5: "yjstwnvoywrvi eo q atdonsolb" 10
[(' jmdbq omppjikeo pnab orsolb', 9),
(' hldrs omdnhi aec natb nsslc', 9),
(' jmdbq omppjikeo pnab orsolb', 9),
('qjlqcqdo rgjouao unab orootc', 9),
('kjrtknfv rrjiwzq enueyonxoll', 8),
('bjltojeomyuviwzn nfeaorxolc', 8),
('ujcmpqkomrrjnuhl gxae oyoqla', 8),
('bjltojeomyuviwzn nfeaorxolc', 8),
('ujctktfoywrjiqeocenae onsolb', 8),
(' hlibs opphikee pnabbynsolc', 7),
('bjgdrskomdnnd adc oftblnaslb', 7),
(' hlibs opphikee pnabbynsolc', 7),
('rzpmpz orsflgqhl gxavo yzqla', 7),
(' hlibs opphikee pnabbynsolc', 7),
('ujshknkomsujg om aedopovtb', 7),
('vjrhoiv yidzi se poqtvqnvvll', 7),
('ujrhknfoyrrjgqzf pouevonooll', 7),
('uhcdtqmokirjouza naeyowoogb', 7),
('vjrhoiv yidzi se poqtvqnvvll', 7),
(' hlibs opphikee pnabbynsolc', 7),
('yorvonvkyidni sa poutvqnvvlz', 7),
(' hlibs opphikee pnabbynsolc', 7),
('ujodkqkomwrjgqzoc kae onoolb', 7),
('bjgdrskomdnnd adc oftblnaslb', 7),
('ujrhknfoyrrjgqzf pouevonooll', 7),
(' hlibs opphikee pnabbynsolc', 7),
('uhcvwqmomrdjiupo ekabyorootz', 7),
('ujcmzakom rlouzoc kaemorootb', 7),
('hjrdh fzmrrloudk nve yrxplb', 7),
('bjgdrskomdnnd adc oftblnaslb', 7),
('ujshknkomsujg om aedopovtb', 7),
('bjgdrskomdnnd adc oftblnaslb', 7),
('ujcttqkbmrrjiuzq nab orooxb', 7),
('ujrhknfoyrrjgqzf pouevonooll', 7),
('vjrhoiv yidzi se poqtvqnvvll', 7),
('ujrhknfoyrrjgqzf pouevonooll', 7),
('vjrhoiv yidzi se poqtvqnvvll', 7),
(' hlibs opphikee pnabbynsolc', 7),
('i tqtqkomwfxgladquhae oroajb', 7),
('ujrdtq omynnokae kaeblrosll', 7),
('kjlttjeo yrjineo nue orootb', 6),
('vhrvcz c ydjmuaq unbbahdxaca', 6),
(' jrirs odnngkae poatblnssll', 6),
('hjrhh fzysnlbndk anvppysxplq', 6),
('ujimzakom kjouzoc ka motohtz', 6),
('ujldtqmom rlshzos ka otmhrz', 6),
('ujcttqeom yjoueoc na yrooxz', 6),
('kjlttjeo yrjineo nue orootb', 6),
('khcvbjkomidjiwpo klo owovtz', 6),
('yqrmknfjodcngqae phuvblncvll', 6),
('fhlmzjeowypcihew sgu mjxdoxc', 6),
(' jrirs odnngkae poatblnssll', 6),
('kjlttjeo yrjineo nue orootb', 6),
('yqrmknfjodcngqae phuvblncvll', 6),
('ujlmtamomrkjohyosska mttmorb', 6),
('ujcqcqkomygjobzokuhac oroatb', 6),
('ujimzakom kjouzoc ka motohtz', 6),
('ujctbjkbmyrjiweq enae yrooxb', 6),
('kjlttjeo yrjineo nue orootb', 6),
(' jrirs odnngkae poatblnssll', 6),
('ujchknf mrnzoqgecpkaevonsolb', 6),
(' jrirs odnngkae poatblnssll', 6),
('ujldtqmom rlshzos ka otmhrz', 6),
('vorvoivkyidji pasqmuptfnvvlz', 6),
(' jrirs odnngkae poatblnssll', 6),
('yqrmknfjodcngqae phuvblncvll', 6),
('ujodkqfoywrjgqzoc oae onsoll', 6),
('yqrmknfjodcngqae phuvblncvll', 6),
('yjchknkomsmvguoo qkutdqroolb', 6),
(' jrirs odnngkae poatblnssll', 6),
('ujodkqfoywrjgqzoc oae onsoll', 6),
('fhlmzamoy dlihpassnl mctmvgz', 6),
('yqrmknfjodcngqae phuvblncvll', 6),
(' jrirs odnngkae poatblnssll', 6),
('yqrmknfjodcngqae phuvblncvll', 6),
('kjlttjeo yrjineo nue orootb', 6),
('yqrmknfjodcngqae phuvblncvll', 6),
('khrvbjmkyidjiwpa enlbycwxvgz', 5),
('yhrhbjfvywnziweq eoubaynsvlq', 5),
('khrvbjmkyidjiwpa enlbycwxvgz', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('khrvbjmkyidjiwpa enlbycwxvgz', 5),
('yhrhbjfvywnziweq eoubaynsvlq', 5),
('yhrhbjfvywnziweq eoubaynsvlq', 5),
('oyltnhfakypti fqlenz ayopoxo', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('khltzaeow zcsheq snu aytmoxz', 5),
('yjrhbnf ywnhgwee pnutvynxklc', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('khrvbjmkyidjiwpa enlbycwxvgz', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('yjshknv ysuvg om q utdqnsvlp', 5),
('ujcdtqkomrrjouzoc kae orootb', 5),
('khltzaeow zcsheq snu aytmoxz', 5),
('yjrhbnf ywnhgwee pnutvynxklc', 5)]
6: " jlmpq omqrhi ae gnatbonsqla" 13
[('yjstwnvoywrvi eo q atdonsolb', 10),
(' hshkskompfji om abdopsolb', 10),
(' hldrs omdnhi aec natb nsslc', 9),
(' jmdbq omppjikeo pnab orsolb', 9),
(' jmdbq omppjikeo pnab orsolb', 9),
('ujrhoivoyixji ze pouevqnooll', 9),
('ujrhoivomikzi om qevqnovll', 9),
('ojrdtqfomrrjgqao kav lnoolb', 9),
(' hldrs omdnhi aec natb nsslc', 9),
('qjlqcqdo rgjouao unab orootc', 9),
(' hldrs omdnhi aec natb nsslc', 9),
(' hldrs omdnhi aec natb nsslc', 9),
(' jmdbq omppjikeo pnab orsolb', 9),
(' hldrs omdnhi aec natb nsslc', 9),
(' jmdbq omppjikeo pnab orsolb', 9),
(' jmdbq omppjikeo pnab orsolb', 9),
('ujctktfoywrjiqeocenae onsolb', 8),
('ujcmpqkomrrjnuhl gxae oyoqla', 8),
('kjrtknfv rrjiwzq enueyonxoll', 8),
('kjrtknfv rrjiwzq enueyonxoll', 8),
('yjrhbn cyydjguae unttaynxkla', 8),
('bgrdbskomwnni aqceoubbynaslq', 8),
('ujctktfoywrjiqeocenae onsolb', 8),
('uqrmknfoydrngqaf phuvyonovll', 8),
('hjlmt fzmskjohdx ska mtsxolb', 8),
('kjrtknfv rrjiwzq enueyonxoll', 8),
('kjrtknfv rrjiwzq enueyonxoll', 8),
('bjltojeomyuviwzn nfeaorxolc', 8),
('hjrdh fzmrrloudk nae yrxplb', 8),
('hjrmh fzmdrnoqak nuvblrxplh', 8),
('ujlmkakomsulghpa snl fotovgz', 8),
('uhcmzqkom rlcuzac naemowootb', 8),
('yjshbnk mwnhg gm natdopovlb', 8),
('bjltojeomyuviwzn nfeaorxolc', 8),
('uhsmbamoopkhokyo sna bttmolc', 8),
('ftlmbjmoyipcihea enl yjwdoxc', 8),
('ujcmpqkomrrjnuhl gxae oyoqla', 8),
('uhcvwqmomrdjiupo ekabyorootz', 7),
('ujrhknfoyrrjgqzf pouevonooll', 7),
('i tqtqkomwfxgladquhae oroajb', 7),
(' hlibs opphikee pnabbynsolc', 7),
('vhlvtzeo zdjiueq unub hdoota', 7),
('ujrhknfoyrrjgqzf pouevonooll', 7),
('bjgdrskomdnnd adc oftblnaslb', 7),
('ujshknkomsujg om aedopovtb', 7),
('bjgdrskomdnnd adc oftblnaslb', 7),
(' jrirs omrrjlkzec katbonssll', 7),
('ujodkqkomwrjgqzoc kae onoolb', 7),
(' hlibs opphikee pnabbynsolc', 7),
('ujrhknfoyrrjgqzf pouevonooll', 7),
('ujrdtq omynnokae kaeblrosll', 7),
('ujshknkomsujg om aedopovtb', 7),
('bjgdrskomdnnd adc oftblnaslb', 7),
('ujrdtq omynnokae kaeblrosll', 7),
('i tqtqkomwfxgladquhae oroajb', 7),
('kjrvb mzysnliwpa enlppcsuplq', 7),
('bjgdrskomdnnd adc oftblnaslb', 7),
('bjgdrskomdnnd adc oftblnaslb', 7),
('vjrhoiv yidzi se poqtvqnvvll', 7),
('ujrdtq omynnokae kaeblrosll', 7),
('yjshknk mruvo om atdorovlb', 7),
('vjrhoiv yidzi se poqtvqnvvll', 7),
('bjgdrskomdnnd adc oftblnaslb', 7),
('yorvonvkyidni sa poutvqnvvlz', 7),
(' hlibs opphikee pnabbynsolc', 7),
(' hlibs opphikee pnabbynsolc', 7),
(' hlibs opphikee pnabbynsolc', 7),
('ujcttqkbmrrjiuzq nab orooxb', 7),
('hjrdh fzmrrloudk nve yrxplb', 7),
('yarirs odcngqae phatblnssll', 7),
('bjgdrskomdnnd adc oftblnaslb', 7),
(' hlibs opphikee pnabbynsolc', 7),
(' hlibs opphikee pnabbynsolc', 7),
('ujodkqkomwrjgqzoc kae onoolb', 7),
('ujrhknfoyrrjgqzf pouevonooll', 7),
('uhcvwqmomrdjiupo ekabyorootz', 7),
('ujcttqkbmrrjiuzq nab orooxb', 7),
('ujchknf mrnzoqgecpkaevonsolb', 6),
('kjlttjeo yrjineo nue orootb', 6),
(' jrirs odnngkae poatblnssll', 6),
('ujimzakom kjouzoc ka motohtz', 6),
('yjchknkomsmvguoo qkutdqroolb', 6),
('ujchknf mrnzoqgecpkaevonsolb', 6),
('kjlttjeo yrjineo nue orootb', 6),
('yqrmknfjodcngqae phuvblncvll', 6),
('hjrhh fzysnlbndk anvppysxplq', 6),
('kjlttjeo yrjineo nue orootb', 6),
(' jrirs odnngkae poatblnssll', 6),
(' jrirs odnngkae poatblnssll', 6),
('kjlttjeo yrjineo nue orootb', 6),
('ujcqcqkomygjobzokuhac oroatb', 6),
('ujimzakom kjouzoc ka motohtz', 6),
('yqrmknfjodcngqae phuvblncvll', 6),
('kjlttjeo yrjineo nue orootb', 6),
('yqrmknfjodcngqae phuvblncvll', 6),
('kjlttjeo yrjineo nue orootb', 6),
('kjlttjeo yrjineo nue orootb', 6),
('ujodkqfoywrjgqzoc oae onsoll', 6),
('yhrhbjfvywnziweq eoubaynsvlq', 5),
('yhrhbjfvywnziweq eoubaynsvlq', 5)]
11: "hhlmt eomsrlg tdcunfb onoola" 14
[(' jlmpq omqrhi ae gnatbonsqla', 13),
('hhlmt eomskji dwcunabm nsola', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('hhlmt eomskji dwcunabm nsola', 13),
('hhlmt eomskji dwcunabm nsola', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('hhlmt eomskji dwcunabm nsola', 13),
('uhrhbokomrnhg ao natdonooll', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('ujtvoevomiwni ad u ae onvalb', 13),
('hhlmt eomskji dwcunabm nsola', 13),
('hhlmt eomskji dwcunabm nsola', 13),
(' hrdts omdrhi ae unat nsolo', 13),
(' qsmoikomikzi adcpnatbqnoolc', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('ujrubqkomrphg em pnatdonooll', 12),
('ujrhra omikli pe nh v nsvlc', 12),
('ojrdtnfomrrjiqaq anav onoolb', 12),
(' hrdrs omdnhigad natvqnoolc', 12),
('vjrvoqvo iwni sq uoae hnvola', 12),
('qjrhcivoyykji ao unpeaonxvla', 12),
('ohldrqkomdnhg am pnatdonsolc', 12),
('ujrubqkomrphg em pnatdonooll', 12),
('qjrhkqvomigui se uoabvopvotl', 12),
('vjmvlseomdnhikaq umat onsola', 12),
('ujrdksqomdgjigod unabvqpoolc', 12),
('ohldrqkomdnhg am pnatdonsolc', 12),
('qjrdtq omydhgkao unatbonxkla', 12),
('ohldrqkomdnhg am pnatdonsolc', 12),
('qjrhcivoyykji ao unpeaonxvla', 12),
('qjrhcivoyykji ao unpeaonxvla', 12),
('vjmvlseomdnhikaq umat onsola', 12),
('vjrvoqvo iwni sq uoae hnvola', 12),
('ujldenvomdzlgqad na voroolc', 12),
('ujrubqkomrphg em pnatdonooll', 12),
('uvrdtafomprlihae p ab onsolc', 12),
('ohldrqkomdnhg am pnatdonsolc', 12),
('vjmvlseomdnhikaq umat onsola', 12),
('bjrdcskomdnni ad nab qpoolc', 12),
('qjlqkskomikji ol unabhoroold', 12),
('ujrhtqfqmpnziqam pnatdonooll', 12),
('brgdbidomigzifodcpnae onooll', 12),
('vjrvoqvo iwni sq uoae hnvola', 12),
('ujrhra omikli pe nh v nsvlc', 12),
('u reonvomrrji zd knae orooll', 12),
('vjmvlseomdnhikaq umat onsola', 12),
('ujrdvsvomdnndgad nabvqnoolc', 11),
(' jritq omjpjikeo pnabbonsolg', 11),
('ujrdvsvomdnndgad nabvqnoolc', 11),
('ujrdvsvomdnndgad nabvqnoolc', 11),
(' hlmr omsnjihax naub sxolc', 11),
(' qrmoskomdcnd adcpoatbbnaolc', 11),
('qjsqkskomegji om unab opootc', 11),
('qjsqkskomegji om unab opootc', 11),
('bjgdtifomirziqao kqbbonoola', 11),
('qjsqkskomegji om unab opootc', 11),
('ujrdvsvomdnndgad nabvqnoolc', 11),
(' hlmr omsnjihax naub sxolc', 11),
('qjsqkskomegji om unab opootc', 11),
(' jritq omjpjikeo pnabbonsolg', 11),
('vjrhlqvo zdti se unae onowll', 11),
('uhrdrafomiklihaec atoonsvlc', 11),
('ujrdvsvomdnndgad nabvqnoolc', 11),
('qjrqcn oyydjguao unaraonxkla', 11),
('qjrqcn oyydjguao unaraonxkla', 11),
('ujlmenfomwrlgqzdrunf orooll', 11),
('bjldcikomikzi al nqb oroold', 11),
('qjsqkskomegji om unab opootc', 11),
('bjldcikomikzi al nqb oroold', 11),
('ujlmenfomwrlgqzdrunf orooll', 11),
(' hlmr omsnjihax naub sxolc', 11),
('ujlmenfomwrlgqzdrunf orooll', 11),
('qjsqkskomegji om unab opootc', 11),
('vhldrreomppjikao unab nsolb', 11),
('ujlmenfomwrlgqzdrunf orooll', 11),
(' jrhoqvomidhg ae naqvlnvwll', 11),
('ujlmenfomwrlgqzdrunf orooll', 11),
('uhrdrafomiklihaec atoonsvlc', 11),
('ujrdvsvomdnndgad nabvqnoolc', 11),
('usthrnvomwkxi ae nue onoolb', 11),
('qjrqcn oyydjguao unaraonxkla', 11),
('bjldcikomikzi al nqb oroold', 11),
(' jrdrivomikzi oez natbqnoslc', 10),
('ujrhoafomiklihpq u vokovll', 10),
('ujrhoafomiklihpq u vokovll', 10),
('vjrhoqvomidni se oaevlnvwll', 10),
(' ordbs mpphikee pnab onvols', 10),
('vhldrzeomddhi aecunab nsata', 10),
('ujrhoafomiklihpq u vokovll', 10),
(' jrdtqfomprziqao pkab orsolb', 10),
('brgdkikomikzi odc qebonooll', 10),
('vjrhoqvomidni se oaevlnvwll', 10),
('ujrhrngomrrji ze nue noolc', 10),
(' jrdtqfomprziqao pkab orsolb', 10),
(' ordbs mpphikee pnab onvols', 10),
('vjmvlqeo zpjikeq unab oroola', 10),
('brgdkikomikzi odc qebonooll', 10),
(' ordbs mpphikee pnab onvols', 10)]
12: "phrurq omsnhi ad unatxonooll" 16
[('hhlmt eomsrlg tdcunfb onoola', 14),
(' ksmoieomipji aqcpnab onoola', 14),
('uhrdtsvomunni ad unabvqnsolo', 14),
(' hlmr omsnhi adcunatx noolc', 14),
('uhrmbo omrnhi ac natdosxoll', 14),
('hhlmt eomskji dwcunabm nsola', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('hhlmt eomskji dwcunabm nsola', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('ujrhtqfqminei ae natdonvwll', 13),
('oyldcikomdnhi al pnatdonoolc', 13),
('hhlmt eomskji dwcunabm nsola', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
(' qsmoikomikzi adcpnatbqnoolc', 13),
('ojrdbifomir ifod anae onoolb', 13),
(' jrdbivomigzi ndcpnaebonosll', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('hhlmt eomskji dwcunabm nsola', 13),
(' qsmoikomikzi adcpnatbqnoolc', 13),
('hhlmt eomskji dwcunabm nsola', 13),
('ujtvoevomiwni ad u ae onvalb', 13),
('hhlmt eomskji dwcunabm nsola', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('hhlmt eomskji dwcunabm nsola', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('hhlmt eomskji dwcunabm nsola', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('uhrhbokomrnhg ao natdonooll', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('ohldcqqominhi am pnatdoroold', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('ujrhtqfqmpnziqam pnatdonooll', 12),
('ohldrqkomdnhg am pnatdonsolc', 12),
('vjrvoqvo iwni sq uoae hnvola', 12),
('bjrdcskomdnni ad nab qpoolc', 12),
('ujlmenfomiklghpd nu jorooll', 12),
(' orqbskomeghi o pnab onvots', 12),
('bjgdts omiphikae pnabbonotla', 12),
('ujrhra omikli pe nh v nsvlc', 12),
('ujrubqkomrphg em pnatdonooll', 12),
(' hrdrs omdnhigad natvqnoolc', 12),
('ohldrqkomdnhg am pnatdonsolc', 12),
('ujrhra omikli pe nh v nsvlc', 12),
('bjrdcskomdnni ad nab qpoolc', 12),
('qjrdtq omydhgkao unatbonxkla', 12),
('vjmvlseomdnhikaq umat onsola', 12),
('qjrhkqvomigui se uoabvopvotl', 12),
('u reonvomrrji zd knae orooll', 12),
('ujrhtqfqmpnziqam pnatdonooll', 12),
('bjrdcskomdnni ad nab qpoolc', 12),
('vjmvlseomdnhikaq umat onsola', 12),
('ujrubqkomrphg em pnatdonooll', 12),
('ohldrqkomdnhg am pnatdonsolc', 12),
('ujrubqkomrphg em pnatdonooll', 12),
('ojrdtnfomrrjiqaq anav onoolb', 12),
('ujldenvomdzlgqad na voroolc', 12),
('qjlqkskomikji ol unabhoroold', 12),
('bjrdcskomdnni ad nab qpoolc', 12),
('qjrhkqvomigui se uoabvopvotl', 12),
('vjrvoqvo iwni sq uoae hnvola', 12),
('ujrubqkomrphg em pnatdonooll', 12),
('qjrhkqvomigui se uoabvopvotl', 12),
('ujldenvomdzlgqad na voroolc', 12),
('uvrdtafomprlihae p ab onsolc', 12),
('qjrhkqvomigui se uoabvopvotl', 12),
('ojrdtnfomrrjiqaq anav onoolb', 12),
('vjmvlseomdnhikaq umat onsola', 12),
('vjrvoqvo iwni sq uoae hnvola', 12),
('qjsdkseomegjikoo unabdonootc', 12),
('ujrdksqomdgjigod unabvqpoolc', 12),
('ojrdtnfomrrjiqaq anav onoolb', 12),
('ujrhtqfqmpnziqam pnatdonooll', 12),
('uhrdraftmiklihadrunf oorsvll', 12),
('vjmvlseomdnhikaq umat onsola', 12),
('qjrhcivoyykji ao unpeaonxvla', 12),
('ujrhra omikli pe nh v nsvlc', 12),
('qjrqcn oyydjguao unaraonxkla', 11),
('bjgdtifomirziqao kqbbonoola', 11),
('bjldcikomikzi al nqb oroold', 11),
('ujlmenfomwrlgqzdrunf orooll', 11),
('ujlmenfomwrlgqzdrunf orooll', 11),
('ujrdvsvomdnndgad nabvqnoolc', 11),
('qjrqcn oyydjguao unaraonxkla', 11),
(' qrmoskomdcnd adcpoatbbnaolc', 11),
(' hlmr omsnjihax naub sxolc', 11),
('bordcsk mpphi od ab onvols', 11),
('qjsqkskomegji om unab opootc', 11),
(' qrmoskomdcnd adcpoatbbnaolc', 11),
('qjsqkskomegji om unab opootc', 11),
('ujlmenfomwrlgqzdrunf orooll', 11),
('uhrdrafomiklihaec atoonsvlc', 11),
('ujrdvsvomdnndgad nabvqnoolc', 11),
('ujrdvsvomdnndgad nabvqnoolc', 11),
('qjrqcn oyydjguao unaraonxkla', 11),
(' jritq omjpjikeo pnabbonsolg', 11),
('qjrqcn oyydjguao unaraonxkla', 11)]
13: "uhrmtovomunhi ad una vonsoll" 17
[('phrurq omsnhi ad unatxonooll', 16),
('qwrqonvoyiwji ao unar onxola', 15),
('ujrxtqkominei ad natdqnvoll', 15),
('uhrmb eomsnhg adcunabdunooll', 15),
('uhrdtsvomunni ad unabvqnsolo', 14),
('uhrdtsvomunni ad unabvqnsolo', 14),
(' ksmoieomipji aqcpnab onoola', 14),
('bhrmc komsnji dw unabm nsqla', 14),
('hhlmt eomsrlg tdcunfb onoola', 14),
('uhrmbo omrnhi ac natdosxoll', 14),
(' hlmr omsnhi adcunatx noolc', 14),
('hhlmt eomsrlg tdcunfb onoola', 14),
('ujrubnkomwplg ed unr onooll', 14),
('hhlmt eomsrlg tdcunfb onoola', 14),
('ujrdtseomprlihaq u al onsola', 14),
(' hlmr omsnhi adcunatx noolc', 14),
(' ksmoieomipji aqcpnab onoola', 14),
('uhrdtsvomunni ad unabvqnsolo', 14),
(' ksmoieomipji aqcpnab onoola', 14),
('hhlmo vomikni awcu aemovvola', 14),
('ujrdkseomdgjikao unabvonootc', 13),
('bjrdcskomenni ad nab qnoolc', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('ujrhtqfqminei ae natdonvwll', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('ujrhtqfqminei ae natdonvwll', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
(' qsmoikomikzi adcpnatbqnoolc', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('hhlmt eomskji dwcunabm nsola', 13),
('hhlmt eomskji dwcunabm nsola', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('hhlmt eomskji dwcunabm nsola', 13),
('uhrhrafqmikzihad unftdorsol ', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('hhlmt eomskji dwcunabm nsola', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('ujrhtqfqminei ae natdonvwll', 13),
('ojrdbifomir ifod anae onoolb', 13),
('qjrheqvomggui sd unabvopvotc', 13),
(' jrdbivomigzi ndcpnaebonosll', 13),
(' qsmoikomikzi adcpnatbqnoolc', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('hhlmt eomskji dwcunabm nsola', 13),
('ohldcqqominhi am pnatdoroold', 13),
('hhlmt eomskji dwcunabm nsola', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('hhlmt eomskji dwcunabm nsola', 13),
('uhrhbokomrnhg ao natdonooll', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('ujtvoevomiwni ad u ae onvalb', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('ojrdbifomir ifod anae onoolb', 13),
('hhlmt eomskji dwcunabm nsola', 13),
('ojrdbifomir ifod anae onoolb', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('ohldcqqominhi am pnatdoroold', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('ojrheifoywrjiqzd unp onxola', 13),
('mjrubqkomrpng ed pnabdonoolc', 13),
('ojrdbifomir ifod anae onoolb', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('bjrdcskomdnni ad nab qpoolc', 12),
('ojrdtnfomrrjiqaq anav onoolb', 12),
('ujldenvomdzlgqad na voroolc', 12),
('bjgdts omiphikae pnabbonotla', 12),
('qjlqkskomikji ol unabhoroold', 12),
('ujrubqkomrphg em pnatdonooll', 12),
('ujrubqkomrphg em pnatdonooll', 12),
('bjrdcskomdnni ad nab qpoolc', 12),
('vjrvoqvo iwni sq uoae hnvola', 12),
(' hrdrs omdnhigad natvqnoolc', 12),
('vjrvoqvo iwni sq uoae hnvola', 12),
('ohldrqkomdnhg am pnatdonsolc', 12),
('ojrdtnfomrrjiqaq anav onoolb', 12),
('ujldenvomdzlgqad na voroolc', 12),
('ohldrqkomdnhg am pnatdonsolc', 12),
('qjrhcivoyykji ao unpeaonxvla', 12),
('ojrdtnfomrrjiqaq anav onoolb', 12),
('ujrhtqfqmpnziqam pnatdonooll', 12),
('ujrhtqfqmpnziqam pnatdonooll', 12),
('ohldrqkomdnhg am pnatdonsolc', 12),
('bjrdcskomdnni ad nab qpoolc', 12),
('uhrdraftmiklihadrunf oorsvll', 12),
('qjrhkqvomigui se uoabvopvotl', 12),
('ujrhtqfqmpnziqam pnatdonooll', 12),
('vjmvlseomdnhikaq umat onsola', 12),
('vjrvoqvo iwni sq uoae hnvola', 12),
('vjmvlseomdnhikaq umat onsola', 12),
(' hrdrs omdnhigad natvqnoolc', 12),
('ujrhra omikli pe nh v nsvlc', 12),
('qjsdkseomegjikoo unabdonootc', 12),
('qjrqcn oyydjguao unaraonxkla', 11),
('qjsqkskomegji om unab opootc', 11),
('ujrdvsvomdnndgad nabvqnoolc', 11)]
14: "ujrmt omirli ad unab onsola" 20
[('uhrmtovomunhi ad una vonsoll', 17),
('omrmt fomirli odcunab onoolb', 17),
('utrmpqeomprli aq u at onsola', 16),
('okrdiieomip i od anab onoola', 16),
('uhrdtivomunni ad unabasnxola', 15),
('ujrxtqkominei ad natdqnvoll', 15),
('qjsdkinomigji aocunabdonoolc', 15),
('ujrxtqkominei ad natdqnvoll', 15),
('ujrxtqkominei ad natdqnvoll', 15),
('ujrxtqkominei ad natdqnvoll', 15),
('uhrmb eomsnhg adcunabdunooll', 15),
('utrdtieomipjihaq unal onoola', 15),
('uhsmbtkomiphi aq natdonooll', 15),
('qwrqonvoyiwji ao unar onxola', 15),
('uhszbskomegji am unatdopoolc', 14),
('ujrubnkomwplg ed unr onooll', 14),
(' hlmr omsnhi adcunatx noolc', 14),
('hhlmt eomsrlg tdcunfb onoola', 14),
('hhlmt eomsrlg tdcunfb onoola', 14),
('bhrmc komsnji dw unabm nsqla', 14),
('vjlmps omqrhi aq unatxonsqla', 14),
('ohrhb komdnhg ao natdonoolc', 14),
('hhrhe eoywrji zd unab onxolt', 14),
('uhrdtsvomunni ad unabvqnsolo', 14),
('vjlmpqeomqnhi ad umatbonsqla', 14),
('ujrdtseomprlihaq u al onsola', 14),
('hhlmt eomsrlg tdcunfb onoola', 14),
(' ksmoieomipji aqcpnab onoola', 14),
(' ksmoieomipji aqcpnab onoola', 14),
('fjrhpqfqmqrzi am pnatdonsola', 14),
('bhrmc komsnji dw unabm nsqla', 14),
('qjrdcqvomgghi sd unmtdopvolc', 14),
('hhlmt eomsrlg tdcunfb onoola', 14),
('hhlmtbkomdkji dw unabd nsola', 14),
('hhlmt eomsrlg tdcunfb onoola', 14),
(' jlmp omskhi ae unabmznsqla', 14),
('uhrdtsvomunni ad unabvqnsolo', 14),
(' jrhrq omikli pe gna boysvla', 14),
(' jlmcs omerhi ae gnaz onoola', 14),
('hhlmt eomsrlg tdcunfb onoola', 14),
('uhrdtsvomunni ad unabvqnsolo', 14),
(' ksmoieomipji aqcpnab onoola', 14),
('uhrdtsvomunni ad unabvqnsolo', 14),
('uhrmbo omrnhi ac natdosxoll', 14),
('hhlmt eomsrlg tdcunfb onoola', 14),
('uhrhbokomrnhg ao natdonooll', 13),
('hhlmt eomskji dwcunabm nsola', 13),
('ojrdbifomir ifod anae onoolb', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('hhlmt eomskji dwcunabm nsola', 13),
('uhrhbokomrnhg ao natdonooll', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('ojrdbifomir ifod anae onoolb', 13),
('ojrheifoywrjiqzd unp onxola', 13),
('uhrhbokomrnhg ao natdonooll', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('ojrdbifomir ifod anae onoolb', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('ohldcqqominhi am pnatdoroold', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('ohldcqqominhi am pnatdoroold', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('bjrdcskomenni ad nab qnoolc', 13),
('ojrheifoywrjiqzd unp onxola', 13),
('ujrdkseomdgjikao unabvonootc', 13),
('ujrhtqfqminei ae natdonvwll', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('ujrdtseompgziqam pnabdojoolc', 13),
(' qsmoikomikzi adcpnatbqnoolc', 13),
('qjrdtnvomyrji ao anasaonovla', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('uhrhbokomrnhg ao natdonooll', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('bjrdcskomenni ad nab qnoolc', 13),
('ujrhtqfqminei ae natdonvwll', 13),
('ujrhtqfqminei ae natdonvwll', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('hhlmt eomskji dwcunabm nsola', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('hhlmt eomskji dwcunabm nsola', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('uhrhbokomrnhg ao natdonooll', 13),
('ojrdbifomir ifod anae onoolb', 13),
('ojrdbifomir ifod anae onoolb', 13),
('uhrhbokomrnhg ao natdonooll', 13),
(' jlmpq omqrhi ae gnatbonsqla', 13),
('bjrdcskomdnni ad nab qpoolc', 12),
(' hrdrs omdnhigad natvqnoolc', 12),
('ujldenvomdzlgqad na voroolc', 12),
('bjrdcskomdnni ad nab qpoolc', 12),
('bjgdts omiphikae pnabbonotla', 12),
('ujrubqkomrphg em pnatdonooll', 12),
('ojrdtnfomrrjiqaq anav onoolb', 12),
|
examples/dask.ipynb | ###Markdown
Working with IPython and dask.distributed[dask.distributed](https://distributed.readthedocs.io) is a cool library for doing distributed execution. You should check it out, if you haven't already.Assuming you already have an IPython cluster running:
###Code
import ipyparallel as ipp
rc = ipp.Client()
rc.ids
###Output
_____no_output_____
###Markdown
You can turn your IPython cluster into a distributed cluster by calling `Client.become_dask()`:
###Code
executor = rc.become_dask(ncores=1)
executor
###Output
_____no_output_____
###Markdown
This will:1. start a Scheduler on the Hub2. start a Worker on each engine3. return an Executor, the distributed client APIBy default, distributed Workers will use threads to run on all cores of a machine. In this case, since I already have one *engine* per core,I tell distributed to run one core per Worker with `ncores=1`.We can now use our IPython cluster with distributed:
###Code
from distributed import progress
def square(x):
return x ** 2
def neg(x):
return -x
A = executor.map(square, range(1000))
B = executor.map(neg, A)
total = executor.submit(sum, B)
progress(total)
total.result()
###Output
_____no_output_____
###Markdown
I could also let distributed do its multithreading thing, and run one multi-threaded Worker per engine.First, I need to get a mapping of one engine per host:
###Code
import socket
engine_hosts = rc[:].apply_async(socket.gethostname).get_dict()
engine_hosts
###Output
_____no_output_____
###Markdown
I can reverse this mapping, to get a list of engines on each host:
###Code
host_engines = {}
for engine_id, host in engine_hosts.items():
if host not in host_engines:
host_engines[host] = []
host_engines[host].append(engine_id)
host_engines
###Output
_____no_output_____
###Markdown
Now I can get one engine per host:
###Code
one_engine_per_host = [ engines[0] for engines in host_engines.values()]
one_engine_per_host
###Output
_____no_output_____
###Markdown
*Here's a concise, but more opaque version that does the same thing:*
###Code
one_engine_per_host = list({host:eid for eid,host in engine_hosts.items()}.values())
one_engine_per_host
###Output
_____no_output_____
###Markdown
I can now stop the first distributed cluster, and start a new one on just these engines, letting distributed allocate threads:
###Code
rc.stop_distributed()
executor = rc.become_dask(one_engine_per_host)
executor
###Output
tornado.application - ERROR - Exception in callback <bound method Client._heartbeat of <Client: scheduler='tcp://192.168.1.19:63890' processes=24 cores=24>>
Traceback (most recent call last):
File "C:\Users\tedal\Anaconda3\lib\site-packages\tornado\ioloop.py", line 1229, in _run
return self.callback()
File "C:\Users\tedal\Anaconda3\lib\site-packages\distributed\client.py", line 900, in _heartbeat
self.scheduler_comm.send({'op': 'heartbeat-client'})
File "C:\Users\tedal\Anaconda3\lib\site-packages\distributed\batched.py", line 117, in send
raise CommClosedError
distributed.comm.core.CommClosedError
tornado.application - ERROR - Exception in callback <bound method Client._heartbeat of <Client: scheduler='tcp://192.168.1.19:63890' processes=24 cores=24>>
Traceback (most recent call last):
File "C:\Users\tedal\Anaconda3\lib\site-packages\tornado\ioloop.py", line 1229, in _run
return self.callback()
File "C:\Users\tedal\Anaconda3\lib\site-packages\distributed\client.py", line 900, in _heartbeat
self.scheduler_comm.send({'op': 'heartbeat-client'})
File "C:\Users\tedal\Anaconda3\lib\site-packages\distributed\batched.py", line 117, in send
raise CommClosedError
distributed.comm.core.CommClosedError
tornado.application - ERROR - Exception in callback <bound method Client._heartbeat of <Client: scheduler='tcp://192.168.1.19:63890' processes=24 cores=24>>
Traceback (most recent call last):
File "C:\Users\tedal\Anaconda3\lib\site-packages\tornado\ioloop.py", line 1229, in _run
return self.callback()
File "C:\Users\tedal\Anaconda3\lib\site-packages\distributed\client.py", line 900, in _heartbeat
self.scheduler_comm.send({'op': 'heartbeat-client'})
File "C:\Users\tedal\Anaconda3\lib\site-packages\distributed\batched.py", line 117, in send
raise CommClosedError
distributed.comm.core.CommClosedError
tornado.application - ERROR - Exception in callback <bound method Client._heartbeat of <Client: scheduler='tcp://192.168.1.19:63890' processes=24 cores=24>>
Traceback (most recent call last):
File "C:\Users\tedal\Anaconda3\lib\site-packages\tornado\ioloop.py", line 1229, in _run
return self.callback()
File "C:\Users\tedal\Anaconda3\lib\site-packages\distributed\client.py", line 900, in _heartbeat
self.scheduler_comm.send({'op': 'heartbeat-client'})
File "C:\Users\tedal\Anaconda3\lib\site-packages\distributed\batched.py", line 117, in send
raise CommClosedError
distributed.comm.core.CommClosedError
distributed.batched - INFO - Batched Comm Closed: in <closed TCP>: Stream is closed
tornado.application - ERROR - Exception in callback <bound method Client._heartbeat of <Client: scheduler='tcp://192.168.1.19:63890' processes=24 cores=24>>
Traceback (most recent call last):
File "C:\Users\tedal\Anaconda3\lib\site-packages\tornado\ioloop.py", line 1229, in _run
return self.callback()
File "C:\Users\tedal\Anaconda3\lib\site-packages\distributed\client.py", line 900, in _heartbeat
self.scheduler_comm.send({'op': 'heartbeat-client'})
File "C:\Users\tedal\Anaconda3\lib\site-packages\distributed\batched.py", line 117, in send
raise CommClosedError
distributed.comm.core.CommClosedError
###Markdown
And submit the same tasks again:
###Code
A = executor.map(square, range(100))
B = executor.map(neg, A)
total = executor.submit(sum, B)
progress(total)
###Output
_____no_output_____
###Markdown
Debugging distributed with IPython
###Code
rc.stop_distributed()
executor = rc.become_dask(one_engine_per_host)
executor
###Output
tornado.application - ERROR - Exception in callback <bound method Client._heartbeat of <Client: scheduler='tcp://192.168.1.19:50390' processes=1 cores=1>>
Traceback (most recent call last):
File "C:\Users\tedal\Anaconda3\lib\site-packages\tornado\ioloop.py", line 1229, in _run
return self.callback()
File "C:\Users\tedal\Anaconda3\lib\site-packages\distributed\client.py", line 900, in _heartbeat
self.scheduler_comm.send({'op': 'heartbeat-client'})
File "C:\Users\tedal\Anaconda3\lib\site-packages\distributed\batched.py", line 117, in send
raise CommClosedError
distributed.comm.core.CommClosedError
###Markdown
Let's set the %px magics to only run on our one engine per host:
###Code
view = rc[one_engine_per_host]
view.block = True
view.activate()
###Output
_____no_output_____
###Markdown
Let's submit some work that's going to fail somewhere in the middle:
###Code
from IPython.display import display
from distributed import progress
def shift5(x):
return x - 5
def inverse(x):
return 1 / x
shifted = executor.map(shift5, range(1, 10))
inverted = executor.map(inverse, shifted)
total = executor.submit(sum, inverted)
display(progress(total))
total.result()
###Output
_____no_output_____
###Markdown
We can see which task failed:
###Code
[ f for f in inverted if f.status == 'error' ]
###Output
_____no_output_____
###Markdown
When IPython starts a worker on each engine,it stores it in the `distributed_worker` variable in the engine's namespace.This lets us query the worker interactively.We can check out the current data resident on each worker:
###Code
%%px
dask_worker.data
###Output
_____no_output_____ |
week03_functions/seminar/seminar03.ipynb | ###Markdown
Seminar 03 - Functions form submissions with other solution attempts (access by @phystech.edu email):https://docs.google.com/spreadsheets/d/1cV-s5hKuQAAtk1jNDMiaTUvjjBZH_yohHJU05PKWYxw Task 1return min and max argument (or keyword argument)if 1 argument passed and it is iterable, return min and max of this argument
###Code
# decision from seminar
def minmax(*args, **kwargs):
if len(args) == 1 and not kwargs and hasattr(args[0], '__iter__'): # isinstance([], list)
return min(args[0]), max(args[0])
else:
all_values = [*args, *(kwargs.values())]
return min(all_values), max(all_values)
# correct decision that works for minmax(a=[1, 2, 3])
def minmax(*args, **kwargs):
all_values = [*args, *(kwargs.values())]
if len(all_values) == 1 and hasattr(all_values[0], '__iter__'):
collection = all_values[0]
else:
collection = all_values
return min(collection), max(collection)
minmax(1, a=4, b=5, c=6)
[
minmax([1, 2, 3]) == (1, 3),
minmax(1, 2, 3) == (1, 3),
minmax(1, 2, 3, a=4, b=5, c=6) == (1, 6),
minmax(1, a=4, b=5, c=6) == (1, 6),
minmax(1) == (1, 1),
minmax(a=[1, 2, 3]) == (1, 3),
]
min([1, 2, 3], 1) # raise error
###Output
_____no_output_____
###Markdown
Packing & unpacking refresh
###Code
def pack_args(*args, **kwargs):
return args, kwargs
pack_args(1, 2, 3, a=4, b=5, c=6)
def pack_args_2(x, y, *args, a=None, **kwargs):
return args, kwargs
pack_args_2(1, 2, 3, a=4, b=5, c=6)
###Output
_____no_output_____
###Markdown
Task 2return 3 minimal elements from given collection
###Code
from typing import Iterable
def min3(a: Iterable):
a = a.copy() # not to change given collection
return a.pop(a.index(min(a))), a.pop(a.index(min(a))), a.pop(a.index(min(a)))
min3([1, 1, 1, 1])
[
min3([5, 4, 3, 2, 1]) == (1, 2, 3),
min3([1, 1, 2, 3]) == (1, 1, 2),
min3([1, 1, 1, 1]) == (1, 1, 1),
]
a = [5, 4, 3, 2, 1]
min3(a)
a # should be [5, 4, 3, 2, 1]
###Output
_____no_output_____
###Markdown
Task 3Create a decorator to print function name and its arguments (including kwargs) before the call and print function result after call
###Code
import functools
def log(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
name = func.__name__
print(f'{name} function was called with {args} {kwargs}')
res = func(*args, **kwargs)
print(f'result is {res}')
return res
return wrapper
@log
def foo(bar):
pass
foo(1)
# 'foo' function was called with args=(1,), kwargs={}
# 'foo' result is None
@log
def pack_args(*args, **kwargs):
return args, kwargs
print(pack_args.__name__)
pack_args(1, 2, 3, a=4, b=5, c=6)
###Output
pack_args function was called with (1, 2, 3) {'a': 4, 'b': 5, 'c': 6}
result is ((1, 2, 3), {'a': 4, 'b': 5, 'c': 6})
###Markdown
Function is object in Python
###Code
def foo(bar):
pass
print(type(foo))
print(dir(type(foo)))
###Output
['__annotations__', '__call__', '__class__', '__closure__', '__code__', '__defaults__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__get__', '__getattribute__', '__globals__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__kwdefaults__', '__le__', '__lt__', '__module__', '__name__', '__ne__', '__new__', '__qualname__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__']
|
Didattica/RudimentiPython.ipynb | ###Markdown
Rudimenti di Python per Laboratorio IIn questa lezione sono forniti alcuni rudimenti di python che permettono di svolgere semplici operazioni di analisi dei dati. Non viene data alcuna introduzione al linguaggio di programmazione e molte definizioni saranno *operative*, cioè volte ad esemplificare come si può svolgere uno dei compiti richiesti utilizzando python come strumento piuttosto che capire come effettivamente funziona il linguaggio. 0 Installazione di Python[Python](https://www.python.org) è facilmente installabile su qualsiasi sistema operativo se non vi è già presente (OSX, Linux). 0.1 Installazione più Semplice (Windows, OSX, Linux)Il modo più semplice di utilizzare Python e numerosi pacchetti per l'analisi dei dati è installare [Anaconda Individual Edition](https://www.anaconda.com/products/individual). A seguito dell'installazione e dell'avvio del programma si possono scegliere varie suite di programmazione, come [Spyder](https://www.spyder-ide.org), [Jupyter](https://jupyter.org), o un semplice terminale si apre una shell di comando che può essere utilizzata per inserire i comandi da eseguire.Il mio suggerimento è di creare un [Jupyter Notebook](https://jupyter.org/try) per ciascuna delle esperienze che verranno effettuate. 0.2 Installazione per espertiQualora siate esperti nell'utilizzo del terminale, potreste installare una versione più "leggera" di python includendo solo le librerie suggerite per effettuare le analisi di questo laboratorio.Su sistemi **Windows** l'installazione di python più semplice è sempre quella con Anaconda. Per OSX e Linux, si può ricorrere ad un gestore di pacchetti come [Homebrew](https://brew.sh) per installare python (v3). Generalmente python è già installato nel sistema. Talvolta può accadere che la versione di python del sistema sia **python 2**, mentre quella più recente è la versione **3**. Si può verificare la presenza nel sistema di python3 digitando al prompt ```$ python3```a seguito del quale dovrebbe apparire qualcosa di simile```Python 3.9.0 (default, Dec 6 2020, 19:27:10) [Clang 12.0.0 (clang-1200.0.32.27)] on darwinType "help", "copyright", "credits" or "license" for more information.>>> ```Nel caso python3 non sia installato si può utilizzare **brew** per installarlo:```$ brew install python3```Una volta verificato che nel sistema è presente python3, la cosa più semplice è creare un ambiente virtuale contenente il software che vogliamo utilizzare (*jupyter*, *numpy*, *scipy*, *matplotlib*).```$ python3 -m venv laboratorioI$ source laboratorioI/bin/activate$ python -m pip install --upgrade pip$ export PKGS=(numpy scipy jupyter matplotlib)$ for PKG in $PKGS; do pip install $PKG; done$ deactivate```Questi comandi creano un ambiente virtuale di python3 con i pacchetti *numpy*, *scipy*, *jupyter* e *matplotlib* (e le loro dipendenze).Per attivare l'ambiente virtuale, basta aprire un terminale e avviare il comando `activate` presente in `laboratorioI/bin` (come fatto per installare i pacchetti).Per verificare che l'installazione sia andata a buon fine```laboratorioI $ python>>> import scipy>>>```Se il terminale non dà errori è tutto in ordine. 
Per uscire dalla shell di python si utilizzi la combinazione di tasti `CTRL-D` o si invii il comando `exit`.Per aprire una finestra del browser con jupyter notebook, basta inviare il seguente comando da shell```laboratorioI $ jupyter notebook```e si aprirà una pagina simile a questa.Il vantaggio di utilizzare *jupyter notebook* è che si possono alternare celle di testo a celle di codice eseguibile direttamente cliccando
###Code
import numpy as np
np.sqrt(2)
###Output
_____no_output_____
###Markdown
1 Utilizzo di BaseUna cella di comando può essere utilizzata per effettuare alcune operazioni di base, come operazioni aritmetiche o di definizione di variabili
###Code
2+3
a= 6
a = 4+5
###Output
_____no_output_____
###Markdown
Per stampare il valore di una variabile in un dato momento si può utilizzare il comando `print`
###Code
print(a)
print('a = {}'.format(a))
print("a = {}".format(a))
###Output
_____no_output_____
###Markdown
per maggiori dettagli sull'utilizzo della funzione `format` si può consultare la [documentazione](https://docs.python.org/3.4/library/string.htmlformatspec).È importante osservare che nel notebook le celle possono non essere eseguite in maniera sequenziale per cui talvolta può capitare di compiere errori di cambiamenti di valore di una variabile o non definizione della stessa, per cui si consiglia di cliccare su `Kernel->Restart & Run All` nella barra degli strumenti quando si abbia qualche dubbio. 1.1 Tipi di VariabileIn python (e nei linguaggi di programmazione in generale) le variabili possono essere di diversi tipi: - intere `int` - a virgola mobile `float` - stringhe `str` - booleane `bool`A seconda di come una variabile è definita il linguaggio di programmazione istruisce il calcolatore su quali sono le operazioni possibili.In python non c'è bisogno di informare il linguaggio del tipo di variabile, in quanto è in grado di determinarlo in fase di assegnazione del valore.
###Code
a = 1
b = 1.2
c = 'a'
d = True
print('a = {} è {}'.format(a,type(a)))# la funzione type restituisce il tipo di variabile
print('b = {} è {}'.format(b,type(b)))
print('c = {} è {}'.format(c,type(c)))
print('d = {} è {}'.format(d,type(d)))
###Output
_____no_output_____
###Markdown
1.2 Vettori (Liste) e DizionariPython permette di definire delle variabile vettore, cioè delle variabili che contengono una lista di valori, variabili o oggetti.Possono essere omogenee
###Code
A = [1,2,3]
print(A)
###Output
_____no_output_____
###Markdown
o eterogenee, cioè composte da elementi di tipo diverso
###Code
B = [1,2.3,'a',A]
print(B)
###Output
_____no_output_____
###Markdown
Gli elementi di un vettore possono essere richiamati specificando la posizione dell'elemento nel vettore (partendo da 0)
###Code
B[0]
###Output
_____no_output_____
###Markdown
Un particolare tipo di lista è il *dizionario*, cioè una lista nella quale ad ogni elemento ne è associato un altro
###Code
D={'a':1, 'b': 2.0, 1: 3}
print(D)
###Output
_____no_output_____
###Markdown
Per richiamare un elemento del dizionario si utilizza una sintassi simile a quella delle liste, dove si evidenzia però l'importanza dell'associazione tra i due elementi
###Code
print(D['a'])
print(D[1])
###Output
_____no_output_____
###Markdown
Per conoscere gli elementi che è possibile cercare nel dizionario si usa il comando `keys()` (la dimensione di un vettore è invece data dal comando `len()`
###Code
print(D.keys())
print(len(B))
###Output
_____no_output_____
###Markdown
1.3 Funzioni e LibrerieIl vantaggio di utilizzare python per il calcolo scientifico è che possiede molte librerie di funzioni scritte e verificate dalla comunità, per cui non si corre il rischio di "reinventare la ruota" ogni qual volta sia necessario scrivere una funzione per svolgere una determinata operazione. Ad esempio la libreria [numpy](https://numpy.org) contiene varie funzioni matematiche di utilizzo comune scritte con una sintassi che permette un'efficiente operazione tra vettori.
###Code
import numpy as np
print(np.sqrt(3))
print(np.sqrt(np.array([1,2,3])))
np.sqrt(np.array([1,2,3]))
print('{:.20f}'.format(np.sqrt(3)))
###Output
_____no_output_____
###Markdown
Nel primo caso ho inserito come argomento della funzione `np.sqrt` un numero `int` e la funzione ha restituito un `float`. Nel secondo caso ho fornito un `numpy.array` ed ho ottenuto un altro vettore con i risultati dell'operazione per ciascun elemento del primo.Si osservi come il *modulo* `numpy` venga caricato tramite la funzione di python `import` e le venga dato il nome `np` per brevità. Dopo questo comando ogni funzione contenuta nel modulo `numpy` può essere chiamata utilizzando la sintassi `modulo.funzione()`.Ovviamente non è necessario rinominare i moduli in fase di caricamento.
###Code
import scipy
type(scipy)
###Output
_____no_output_____
###Markdown
Per definire una funzione, si utilizza il comando `def`
###Code
def func(x):
y = x*x # notare l'indentazione
return y
###Output
_____no_output_____
###Markdown
si noti l'indentazione, è una caratteristica fondamentale del linguaggio e serve per separare i blocchi di codice che devono essere eseguiti in una funzione o in un ciclo
###Code
z = func(4)
print(z)
###Output
_____no_output_____
###Markdown
Si osservi che la variabile `x` è passata alla funzione in modo agnostico, cioè python non si preoccupa che l'operazione che vogliamo eseguire su di essa sia valida. Questo ha grandi vantaggi, ad esempio permette di utilizzare la stessa funzione con un argomento del tutto diverso, ad esempio un vettore numpy:
###Code
y = func(np.array([1,2,3]))
print(y)
print(type(y))
###Output
_____no_output_____
###Markdown
But it can also lead to errors if the function is used incorrectly, for example if we were to use a string as the argument
###Code
#y = func('a')
###Output
_____no_output_____
###Markdown
Fortunately, in this case the computer gave us a fairly clear error message, but sometimes that does not happen and you risk introducing a *bug* into the system. 1.4 *for* and *while* Loops: In programming it is useful to be able to perform repeated operations with just a few commands. For this purpose the *for* and *while* loops are used. The former lets a variable range over a given interval and executes the operations in a block of commands
###Code
for i in [0,1,2]:
print(i)
t = 0
for i in range(4): # range(n) is a handy function for defining a vector of integers between 0 and n-1
t += i
print('i = {}, t = {}'.format(i,t))
print(t)
for i in ['a',1,3.3]: print(i)
###Output
_____no_output_____
###Markdown
*while*, on the other hand, executes a block of commands for as long as a condition remains satisfied
###Code
t = 4
while t>0:
t = t-1
print(t)
###Output
_____no_output_____
###Markdown
1.5 A Practical Example: the Mean. Let us define a function that computes the mean of the elements of a vector
###Code
def media(x):
m = 0
for i in x:
        m += i # += increments the sum variable by i
    m /= len(x) # /= divides the variable m by len(x)
return m
media([1,2,3,1,2,4,2])
###Output
_____no_output_____
###Markdown
**Exercise**: Write a function that computes the standard deviation of the elements of a vector. 2 Useful Operations for the Laboratory: These lessons cannot cover every detail of such a complex programming language, so after the basics we will cover a few specific topics that are useful for the laboratory sessions: - Plotting data and functions on a graph - Interpolation - Drawing the fitted line on the plot - Computing the $\chi^2$. 2.1 Plotting data and functions on a graph: To plot data on a graph you can use the [matplotlib](https://matplotlib.org) library, which provides functions to build histograms and to draw functions. 2.1.1 Histograms: Suppose we want to draw a histogram from the following measurements | | | | | | | --- | --- | --- | --- | --- | | 3.10 | 2.99 | 2.93 | 3.12 | 3.04 | | 2.97 | 2.87 | 2.78 | 3.09 | 3.19 | | 3.03 | 3.11 | 2.87 | 2.98 | 2.89 | | 2.99 | 2.89 | 2.91 | 3.03 | 3.05 | First we build a vector with the measurements
###Code
x = np.array([3.10,2.99,2.93,3.12,3.04,
2.97,2.87,2.78,3.09,3.19,
3.03,3.11,2.87,2.98,2.89,
2.99,2.89,2.91,3.03,3.05])
###Output
_____no_output_____
###Markdown
We then load the `pyplot` module from the `matplotlib` module and call it `plt` for brevity. This module provides the [`hist`](https://matplotlib.org/3.3.3/api/_as_gen/matplotlib.pyplot.hist.html) function, which allows us to draw a histogram
###Code
from matplotlib import pyplot as plt
plt.hist(x)
###Output
_____no_output_____
###Markdown
By default, the function groups the elements of the vector into a histogram with 10 equal-width bins; you can choose to use fewer bins by adding the `bins` argument
###Code
plt.hist(x,bins=5)
###Output
_____no_output_____
###Markdown
Or even define bins with different widths...
###Code
plt.hist(x,bins=[2.7,2.9,3,3.2])
###Output
_____no_output_____
###Markdown
Suppose the data are distributed according to a Gaussian distribution; we can compute their mean and standard deviation either by writing a function or by using the ones provided by `numpy`
###Code
def media(x):
m = 0
for i in x:
        m += i # += increments the sum variable by i
    m /= len(x) # len() returns the number of elements of the vector and /= divides the variable m by len(x)
return m
print(media(x))
print(np.mean(x))
print('x = {0:.2f} ± {1:.2f}'.format(np.mean(x),np.std(x)))
print('x = {m:.2f} ± {s:.2f}'.format(m=np.mean(x),s=np.std(x))) # another formatting example; note that you can choose the position of each variable in the printed string
###Output
_____no_output_____
###Markdown
As expected, the function we defined and the `numpy` function give the same result. In the last print statement the `numpy.std` function was also used to compute the standard deviation of the data in the vector. Everything that follows the `#` character on a line is a comment: it is ignored by the computer, but it is useful to the programmer. **Exercise**: Compare the result of the standard-deviation function defined in section 1.5 with the analogous `numpy.std` function. Overlaying a function on the plot: To draw the Gaussian distribution corresponding to the data in the vector `x` we can use the `pyplot.plot` function together with the definition of the function to be drawn. For simplicity we build the frequency-density histogram, so that we do not need to change the normalization of the Gaussian.
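Before doing so, here is one possible sketch for the standard-deviation exercises above (an added illustration, not part of the original material); the name `dev_std` is our own, and it uses the same divide-by-N convention as the default of `np.std`, so the two printed values should agree.
###Code
# Added sketch: a possible standard-deviation function, compared with numpy.std
def dev_std(v):
    m = media(v) # reuse the mean function defined above
    s = 0
    for i in v:
        s += (i - m)**2 # accumulate the squared deviations from the mean
    return np.sqrt(s / len(v)) # divide by N, as np.std does by default
print('{:.4f} {:.4f}'.format(dev_std(x), np.std(x))) # the two values should agree
###Output
_____no_output_____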
###Code
# Define the function
def gaus(x,m,s):
    h = 1./s/np.sqrt(2)
    z = x-m
    return np.exp(-np.power(h*z, 2.)) *h / np.sqrt(np.pi)
# Define the number of bins, the minimum and maximum, and draw the histogram
num_bins = 5
xmin, xmax = np.floor(10.*min(x))/10, np.ceil(10.*max(x))/10 # choose min and max by rounding (note: floor and ceil return an integer, I want to round to 0.1, hence the multiplication and division by 10)
plt.hist(x, num_bins, range = [xmin, xmax], alpha=0.5, density=True, label='data')
# Plot beautification
xt = [round(xmin+0.1*i,1) for i in range(num_bins+1)] # choose the points where to place the tick marks
plt.xticks(xt, [str(i) for i in xt]) # write the tick labels on the x axis
plt.xlabel('$x_k$ (mm)') # x-axis title
plt.ylabel('Frequency Density') # y-axis title
# Draw the function
mean, sigma = np.mean(x),np.std(x) # compute the mean and standard deviation
t = np.linspace(xmin,xmax) # this defines a finely spaced vector of x values on which to evaluate the function to be drawn
plt.plot(t,gaus(t,mean, sigma),label=r"$G(x;\mu,\sigma)$")
plt.legend() # add a legend
###Output
_____no_output_____
###Markdown
Drawing a plot with error bars: In most of the lab sessions you will be asked to draw plots with error bars to represent the results of your measurements. Suppose I measured the descent time of a cart for different inclination angles of an inclined plane and obtained the following results | $\frac{1}{\sin\alpha}$ | $t^2$ $(s^2)$ ||---|---|| 2.00 | 0.18 ± 0.15 || 2.37 | 0.22 ± 0.15 || 2.92 | 0.36 ± 0.15 || 3.86 | 0.44 ± 0.15 || 5.76 | 0.66 ± 0.15 || 11.47 | 1.12 ± 0.15 | We create vectors with the obtained values:
###Code
X = np.array([11.47, 5.76, 3.86, 2.92, 2.37, 2.0])
Y = np.array([1.12, 0.66, 0.44, 0.36, 0.22, 0.18])
sy = 0.15
sY = np.array(sy*np.ones(len(Y)))
###Output
_____no_output_____
###Markdown
We can then use the `pyplot.errorbar` function to plot the values `Y` corresponding to `X` with the symmetric error bars `sY` (obviously the vectors must have the same size for the function to produce a result)
###Code
plt.errorbar(X,Y,sY, fmt='o', ls='none', label='data')
# beautification
plt.xlim(left=0)
plt.ylim(bottom=0)
plt.text(max(X), -0.1*(max(Y)-min(Y)+2*max(sY)), r'$\frac{1}{\sin\alpha}$')
plt.text(-0.2*(max(X)-min(X)), max(Y)+max(sY), r'$t^2 (s^2)$')
plt.legend()
###Output
_____no_output_____
###Markdown
2.2 Performing an interpolation: The simplest way to perform an interpolation is to write the necessary functions starting from the formulas discussed in class. In this example a straight line through the origin, $y = kx$, is used:$$k = \frac{\sum_{i=1}^N x_iy_i}{\sum_{i=1}^N x^2_i} \qquad \sigma_k^2 = \frac{\sigma_y^2}{\sum_{i=1}^N x^2_i}$$You can write a function that computes $k$ and $\sigma_k$ from the vectors `X`, `Y` and the uncertainty `sy`, or carry out the required operations directly in a cell (the advantage of a function is that it can be reused for several data sets without copying the operations each time).
###Code
def InterpolazioneLineareOrigine(x,y,sy):
'''
    Given two vectors x, y of equal size and the uncertainty sy, this fits the straight line through the origin with the least-squares method
    '''
    # check that x and y have the same, non-zero size
    if len(x) != len(y) or len(x) == 0:
        print('The input data are not valid')
        return 0
    if sy ==0 :
        print('The uncertainty cannot be 0')
        return 0
    # compute the sums
sumxy = 0
sumx2 = 0
for i in range(len(x)): # range(n) = [0,1,2,..,n-1]
sumxy += x[i]*y[i]
sumx2 += x[i]*x[i]
k = sumxy/sumx2
sk = sy/np.sqrt(sumx2)
return (k,sk)
###Output
_____no_output_____
###Markdown
**Exercise**: write a function that fits a generic straight line. **Exercise**: write a function that fits a generic straight line with varying (per-point) uncertainties. We now proceed with the interpolation.
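Before doing so, here is a minimal sketch for the two exercises above (an added illustration, not part of the original material); the name `weighted_line_fit` is our own, and it assumes the standard weighted least-squares formulas with weights $w_i = 1/\sigma_{y_i}^2$.
###Code
# Added sketch: weighted least-squares fit of a generic line y = a + b*x
# sy can be a single number or an array of per-point uncertainties
def weighted_line_fit(x, y, sy):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.ones(len(x)) / np.asarray(sy, dtype=float)**2 # weights w_i = 1/sigma_i^2
    S, Sx, Sy = np.sum(w), np.sum(w*x), np.sum(w*y)
    Sxx, Sxy = np.sum(w*x*x), np.sum(w*x*y)
    delta = S*Sxx - Sx*Sx
    a = (Sxx*Sy - Sx*Sxy) / delta # intercept
    b = (S*Sxy - Sx*Sy) / delta # slope
    sa, sb = np.sqrt(Sxx/delta), np.sqrt(S/delta) # uncertainties on intercept and slope
    return a, sa, b, sb
###Output
_____no_output_____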
###Code
res = InterpolazioneLineareOrigine(X,Y,sy)
print(res)
print('k = {:.2f} ± {:.2f}'.format(res[0],res[1]))
###Output
_____no_output_____
###Markdown
2.3 Drawing the fitted line on the data: We draw the fitted line on top of the data, similarly to what was done when drawing the Gaussian function
###Code
# Define the function I want to draw
def line(x,m,q=0):
    y = m*x+q
    return y
# Draw the plot with error bars
plt.errorbar(X,Y,sY, fmt='o', ls='none', label='data')
# beautification
plt.xlim(left=0)
plt.ylim(bottom=0)
plt.text(max(X), -0.1*(max(Y)-min(Y)+2*max(sY)), r'$\frac{1}{\sin\alpha}$')
plt.text(-0.2*(max(X)-min(X)), max(Y)+max(sY), r'$t^2 (s^2)$')
# Draw the function
xmin, xmax = 0, max(X)+0.1*(max(X)-min(X))
t = np.linspace(xmin,xmax) # this defines a finely spaced vector of x values on which to evaluate the function to be drawn
plt.plot(t,line(t,res[0]),label=r"$y = {:.2f}x$".format(res[0]))
plt.legend() # add a legend
###Output
_____no_output_____
###Markdown
2.4 Computing the $\chi^2$: To compute the $\chi^2$ you can again write a function (or carry out the operations in a cell):$$\chi^2_0 = \sum_{i=1}^N \left(\frac{y_i - k x_i}{\sigma_{y_i}}\right)^2= \frac{\sum_{i=1}^N \left(y_i - k x_i\right)^2}{\sigma_{y}^2}$$
###Code
def chisq(y,e,sy):
'''
    y: vector of the measurements
    e: vector of the expected values for the considered x values
    sy: uncertainty on the measurements
    '''
    if len(y)!=len(e) or len(y) == 0:
        print('The input data are not valid')
        return 0
    if sy ==0 :
        print('The uncertainty cannot be 0')
return 0
c2 = 0
for i in range(len(y)): c2 = c2 + (y[i]-e[i])*(y[i]-e[i])
c2 /= sy*sy
return c2
chi2v = chisq(Y,line(X,res[0]),sy)
print('chi2 = {:.2f}'.format(chi2v))
###Output
_____no_output_____
###Markdown
The $\chi^2$ test can be performed by computing the number of degrees of freedom of the problem (n-1 in this case) and using the `chi2` *class* of the `scipy.stats` *module*, which allows computing the cumulative distribution function (`cdf`) of the $\chi^2$ distribution with *d* degrees of freedom up to a given value (`chi2v` in our case)$$P_0 = P(\chi^2 \geq \chi^2_0) = \int_{\chi^2_0}^{+\infty}f(\chi^2;d)\mathrm{d}\chi^2 = 1- \int_{0}^{\chi^2_0}f(\chi^2;d)\mathrm{d}\chi^2$$
###Code
from scipy.stats import chi2
d = len(Y)-1
pchi2 = 1-chi2.cdf(chi2v,d)
print('P(chi2) = {:.1f}%'.format(100.*pchi2))
###Output
_____no_output_____ |
0-Beginning.ipynb | ###Markdown
[](https://notebooks.azure.com/import/gh/Alireza-Akhavan/class.vision) Check where the Python interpreter is located on your computer
###Code
import sys
print(sys.executable)
###Output
C:\Users\A s u s\Anaconda3\envs\virtual_platform\python.exe
###Markdown
Check the project's working directory
###Code
import os
print(os.getcwd())
###Output
E:\my cources\Vision\notebook
###Markdown
Test that the installed libraries import correctly; no errors should appear.
###Code
import cv2
import numpy as np
import matplotlib
###Output
_____no_output_____
###Markdown
Check the OpenCV version
###Code
print (cv2.__version__)
###Output
3.1.0
|
Basic Federated Classifier.ipynb | ###Markdown
Basic federated classifier with TensorFlow The code in this notebook is copyright 2018 coMind. Licensed under the Apache License, Version 2.0; you may not use this code except in compliance with the License. You may obtain a copy of the License. Join the conversation at Slack. This is a series of three tutorials and you are in the last one: * [Basic Classifier](https://github.com/coMindOrg/federated-averaging-tutorials/blob/master/Basic%20Classifier.ipynb)* [Basic Distributed Classifier](https://github.com/coMindOrg/federated-averaging-tutorials/blob/master/Basic%20Distributed%20Classifier.ipynb)* [Basic Federated Classifier](https://github.com/coMindOrg/federated-averaging-tutorials/blob/master/Basic%20Federated%20Classifier.ipynb) In this tutorial we will see how to train a model using federated averaging. To begin, a brief explanation of what it means to train using federated averaging as opposed to training with a SyncReplicasOptimizer. In the previous tutorial, we explained that with SyncReplicasOptimizer each worker generated a gradient for its weights and wrote it to the parameter server. The chief read those gradients (including its own), averaged them and updated the shared model. This time each worker will update its weights locally, as if it were the only one training. Every certain number of steps it will send its weights (not the gradients, but the weights themselves) to the parameter server. The chief will read the weights from there, average them and write them back to the parameter server so that all the workers can overwrite theirs. The entire first part of the code is the same as in the distributed classifier tutorial, with only two differences: - This time we also import __federated_averaging_optimizer__, the library with which we can federate the learning. - We also define the variable __INTERVAL_STEPS__, which sets how many steps each worker takes locally before writing its weights to the parameter server and overwriting them with the average computed by the chief.
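As a rough conceptual sketch (an added illustration, not the implementation used below), the averaging step simply replaces each worker's weights with the element-wise mean of all the workers' weights; the helper name `federated_average` is our own:
###Code
# Conceptual sketch of the federated averaging step (illustration only, not the tutorial's implementation)
import numpy as np

def federated_average(worker_weights):
    # worker_weights: one list of weight arrays per worker, all with matching shapes
    return [np.mean(np.stack(layer), axis=0) for layer in zip(*worker_weights)]

# e.g. two workers, each holding a weight matrix and a bias vector
w0 = [np.ones((2, 2)), np.zeros(2)]
w1 = [3 * np.ones((2, 2)), np.ones(2)]
averaged = federated_average([w0, w1]) # every worker would continue training from these averaged values
###Output
_____no_output_____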
###Code
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import os
import numpy as np
from time import time
import matplotlib.pyplot as plt
import federated_averaging_optimizer
flags = tf.app.flags
flags.DEFINE_integer("task_index", None,
"Worker task index, should be >= 0. task_index=0 is "
"the master worker task that performs the variable "
"initialization ")
flags.DEFINE_string("ps_hosts", "localhost:2222",
"Comma-separated list of hostname:port pairs")
flags.DEFINE_string("worker_hosts", "localhost:2223,localhost:2224",
"Comma-separated list of hostname:port pairs")
flags.DEFINE_string("job_name", None, "job name: worker or ps")
BATCH_SIZE = 32
EPOCHS = 5
INTERVAL_STEPS = 10
FLAGS = flags.FLAGS
if FLAGS.job_name is None or FLAGS.job_name == "":
raise ValueError("Must specify an explicit `job_name`")
if FLAGS.task_index is None or FLAGS.task_index == "":
raise ValueError("Must specify an explicit `task_index`")
if FLAGS.task_index == 0:
print('--- GPU Disabled ---')
os.environ['CUDA_VISIBLE_DEVICES'] = ''
#Construct the cluster and start the server
ps_spec = FLAGS.ps_hosts.split(",")
worker_spec = FLAGS.worker_hosts.split(",")
# Get the number of workers.
num_workers = len(worker_spec)
print('{} workers defined'.format(num_workers))
cluster = tf.train.ClusterSpec({"ps": ps_spec, "worker": worker_spec})
server = tf.train.Server(cluster, job_name=FLAGS.job_name, task_index=FLAGS.task_index)
if FLAGS.job_name == "ps":
print('--- Parameter Server Ready ---')
server.join()
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
print('Data loaded')
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
train_images = np.split(train_images, num_workers)[FLAGS.task_index]
train_labels = np.split(train_labels, num_workers)[FLAGS.task_index]
print('Local dataset size: {}'.format(train_images.shape[0]))
train_images = train_images / 255.0
test_images = test_images / 255.0
is_chief = (FLAGS.task_index == 0)
checkpoint_dir='logs_dir/federated_worker_{}/{}'.format(FLAGS.task_index, time())
print('Checkpoint directory: ' + checkpoint_dir)
worker_device = "/job:worker/task:%d" % FLAGS.task_index
print('Worker device: ' + worker_device + ' - is_chief: {}'.format(is_chief))
###Output
_____no_output_____
###Markdown
Here we begin the definition of the graph in the same way as it was done in the basic classifier; we explicitly place every operation in the local worker. The rest is fairly standard until we reach the definition of the optimizer.
###Code
with tf.device(worker_device):
global_step = tf.train.get_or_create_global_step()
with tf.name_scope('dataset'), tf.device('/cpu:0'):
images_placeholder = tf.placeholder(train_images.dtype, [None, train_images.shape[1], train_images.shape[2]], name='images_placeholder')
labels_placeholder = tf.placeholder(train_labels.dtype, [None], name='labels_placeholder')
batch_size = tf.placeholder(tf.int64, name='batch_size')
shuffle_size = tf.placeholder(tf.int64, name='shuffle_size')
dataset = tf.data.Dataset.from_tensor_slices((images_placeholder, labels_placeholder))
dataset = dataset.shuffle(shuffle_size, reshuffle_each_iteration=True)
dataset = dataset.repeat(EPOCHS)
dataset = dataset.batch(batch_size)
iterator = tf.data.Iterator.from_structure(dataset.output_types, dataset.output_shapes)
dataset_init_op = iterator.make_initializer(dataset, name='dataset_init')
X, y = iterator.get_next()
flatten_layer = tf.layers.flatten(X, name='flatten')
dense_layer = tf.layers.dense(flatten_layer, 128, activation=tf.nn.relu, name='relu')
predictions = tf.layers.dense(dense_layer, 10, activation=tf.nn.softmax, name='softmax')
summary_averages = tf.train.ExponentialMovingAverage(0.9)
with tf.name_scope('loss'):
loss = tf.reduce_mean(keras.losses.sparse_categorical_crossentropy(y, predictions))
loss_averages_op = summary_averages.apply([loss])
tf.summary.scalar('cross_entropy', summary_averages.average(loss))
with tf.name_scope('accuracy'):
with tf.name_scope('correct_prediction'):
correct_prediction = tf.equal(tf.argmax(predictions, 1), tf.cast(y, tf.int64))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name='accuracy_metric')
accuracy_averages_op = summary_averages.apply([accuracy])
tf.summary.scalar('accuracy', summary_averages.average(accuracy))
###Output
_____no_output_____
###Markdown
We used the __replica_device_setter__ in the distributed learning tutorial to automatically choose on which device to place each defined op. Here we create it just to pass it as an argument to the custom optimizer that we have created to contain the logic of the federated averaging. This custom optimizer will use the __replica_device_setter__ to place a copy of each trainable variable in the ps; these new variables will store the averaged values of all the local models. Once this optimizer has been defined, we create the training operation and, in the same way as we did with SyncReplicasOptimizer, a hook that will run inside the MonitoredTrainingSession, which handles the initialization.
###Code
with tf.name_scope('train'):
device_setter = tf.train.replica_device_setter(worker_device=worker_device, cluster=cluster)
optimizer = federated_averaging_optimizer.FederatedAveragingOptimizer(tf.train.AdamOptimizer(np.sqrt(num_workers) * 0.001), replicas_to_aggregate=num_workers, interval_steps=INTERVAL_STEPS, is_chief=is_chief, device_setter=device_setter)
with tf.control_dependencies([loss_averages_op, accuracy_averages_op]):
train_op = optimizer.minimize(loss, global_step=global_step)
model_average_hook = optimizer.make_session_run_hook()
###Output
_____no_output_____
###Markdown
We keep defining our hooks as usual.
###Code
n_batches = int(train_images.shape[0] / BATCH_SIZE)
last_step = int(n_batches * EPOCHS)
print('Graph definition finished')
sess_config = tf.ConfigProto(
allow_soft_placement=True,
log_device_placement=False,
operation_timeout_in_ms=20000,
device_filters=["/job:ps",
"/job:worker/task:%d" % FLAGS.task_index])
print('Training {} batches...'.format(last_step))
class _LoggerHook(tf.train.SessionRunHook):
def begin(self):
self._total_loss = 0
self._total_acc = 0
def before_run(self, run_context):
return tf.train.SessionRunArgs([loss, accuracy, global_step])
def after_run(self, run_context, run_values):
loss_value, acc_value, step_value = run_values.results
self._total_loss += loss_value
self._total_acc += acc_value
if (step_value + 1) % n_batches == 0 and not step_value == 0:
print("Epoch {}/{} - loss: {:.4f} - acc: {:.4f}".format(int(step_value / n_batches) + 1, EPOCHS, self._total_loss / n_batches, self._total_acc / n_batches))
self._total_loss = 0
self._total_acc = 0
class _InitHook(tf.train.SessionRunHook):
def after_create_session(self, session, coord):
session.run(dataset_init_op, feed_dict={images_placeholder: train_images, labels_placeholder: train_labels, batch_size: BATCH_SIZE, shuffle_size: train_images.shape[0]})
###Output
_____no_output_____
###Markdown
The shared variables generated within the custom optimizer get their initial values from their corresponding trainable variables in the local worker. Therefore their initialization ops will be unavailable outside of this session, even if we try to restore a saved checkpoint. We need to define a custom saver which ignores these shared variables. In this case, we only save the trainable_variables.
###Code
class _SaverHook(tf.train.SessionRunHook):
def begin(self):
self._saver = tf.train.Saver(tf.trainable_variables())
def before_run(self, run_context):
return tf.train.SessionRunArgs(global_step)
def after_run(self, run_context, run_values):
step_value = run_values.results
if step_value % n_batches == 0 and not step_value == 0:
self._saver.save(run_context.session, checkpoint_dir+'/model.ckpt', step_value)
def end(self, session):
self._saver.save(session, checkpoint_dir+'/model.ckpt', session.run(global_step))
###Output
_____no_output_____
###Markdown
The execution of the training session is standard. Notice the new hooks that we have added to the hook list. WARNING! Do not define a chief worker. We need each worker to initialize its local session and train on its own!
###Code
with tf.name_scope('monitored_session'):
with tf.train.MonitoredTrainingSession(
master=server.target,
checkpoint_dir=checkpoint_dir,
hooks=[_LoggerHook(), _InitHook(), _SaverHook(), model_average_hook],
config=sess_config,
stop_grace_period_secs=10,
save_checkpoint_secs=None) as mon_sess:
while not mon_sess.should_stop():
mon_sess.run(train_op)
###Output
_____no_output_____
###Markdown
Finally, we evaluate the model.
###Code
if is_chief:
print('--- Begin Evaluation ---')
tf.reset_default_graph()
with tf.Session() as sess:
ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
saver = tf.train.import_meta_graph(ckpt.model_checkpoint_path + '.meta', clear_devices=True)
saver.restore(sess, ckpt.model_checkpoint_path)
print('Model restored')
graph = tf.get_default_graph()
images_placeholder = graph.get_tensor_by_name('dataset/images_placeholder:0')
labels_placeholder = graph.get_tensor_by_name('dataset/labels_placeholder:0')
batch_size = graph.get_tensor_by_name('dataset/batch_size:0')
accuracy = graph.get_tensor_by_name('accuracy/accuracy_metric:0')
predictions = graph.get_tensor_by_name('softmax/BiasAdd:0')
dataset_init_op = graph.get_operation_by_name('dataset/dataset_init')
sess.run(dataset_init_op, feed_dict={images_placeholder: test_images, labels_placeholder: test_labels, batch_size: test_images.shape[0], shuffle_size: 1})
print('Test accuracy: {:4f}'.format(sess.run(accuracy)))
predicted = sess.run(predictions)
# Plot the first 25 test images, their predicted label, and the true label
# Color correct predictions in green, incorrect predictions in red
plt.figure(figsize=(10, 10))
for i in range(25):
plt.subplot(5, 5, i + 1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(test_images[i], cmap=plt.cm.binary)
predicted_label = np.argmax(predicted[i])
true_label = test_labels[i]
if predicted_label == true_label:
color = 'green'
else:
color = 'red'
plt.xlabel("{} ({})".format(class_names[predicted_label],
class_names[true_label]),
color=color)
plt.show(True)
###Output
_____no_output_____
notebooks/125_chapter4_fig.4.40.ipynb | ###Markdown
Plot fig 4.40 - now unused. Theme Song: Bubbles. Artist: Biffy Clyro. Album: Only Revolutions. Released: 2009. This is the difference between RCPs and SSPs for ERF.
###Code
import matplotlib.pyplot as pl
import numpy as np
import pandas as pd
pl.rcParams['figure.figsize'] = (18/2.54, 9/2.54)
pl.rcParams['font.size'] = 9
pl.rcParams['font.family'] = 'Arial'
pl.rcParams['ytick.direction'] = 'out'
pl.rcParams['ytick.minor.visible'] = True
pl.rcParams['ytick.major.right'] = True
pl.rcParams['ytick.right'] = True
pl.rcParams['xtick.major.bottom'] = False
pl.rcParams['axes.spines.bottom'] = False
pl.rcParams['axes.spines.top'] = False
erf = {}
scenarios = ['ssp126', 'ssp245', 'ssp585', 'rcp26', 'rcp45', 'rcp85']
for scenario in scenarios:
path = '../data_output/' + scenario[:3].upper() + 's/'
erf[scenario] = pd.read_csv(path + 'ERF_%s_1750-2500.csv' % scenario, index_col=0)
erf[scenario]['wmghg'] = erf[scenario]['co2'] + erf[scenario]['ch4'] + erf[scenario]['n2o'] + erf[scenario]['other_wmghg']
erf[scenario]['aerosol'] = erf[scenario]['aerosol-radiation_interactions'] + erf[scenario]['aerosol-cloud_interactions']
colors_ssp = {
'co2' : '#7f0089',
'wmghg' : '#7000c0',
'aerosol': '#66665f',
'total_anthropogenic': '#24937e'
}
colors_rcp = {
'co2' : '#f457ff',
'wmghg' : '#d08fff',
'aerosol': '#ccccc7',
'total_anthropogenic': '#9ce7d9'
}
labels = {
'co2': 'Carbon dioxide',
'wmghg': 'Greenhouse gases',
'aerosol': 'Aerosols',
'total_anthropogenic': 'Total'
}
sspid = {
'26' : '1',
'45' : '2',
'85' : '5'
}
fig, ax = pl.subplots(1, 3)
for iforc, forc in enumerate(['26', '45', '85']):
real_name = forc[0] + '.' + forc[1]
for ispec, specie in enumerate(['co2', 'wmghg', 'aerosol', 'total_anthropogenic']):
ax[iforc].bar(
ispec+0.3, erf['ssp'+sspid[forc]+forc].loc[2100,specie], width=0.4, color=colors_ssp[specie],
label=(labels[specie] if iforc==0 else '')
)
ax[iforc].bar(
ispec+0.7, erf['rcp'+forc].loc[2100,specie], width=0.4, color=colors_rcp[specie],
label=('2100' if (iforc==1 and ispec==3) else '')
)
ax[iforc].bar(ispec+0.3, erf['ssp'+sspid[forc]+forc].loc[2050,specie], width=0.4, color='None', hatch='/', lw=1, edgecolor='k')
ax[iforc].bar(
ispec+0.7, erf['rcp'+forc].loc[2050,specie], width=0.4, color='None', hatch='/', lw=1, edgecolor='k',
label=('2050' if (iforc==1 and ispec==3) else '')
)
ax[iforc].text(ispec+0.3, -2.2, 'SSP', ha='center', va='bottom', rotation=90)
ax[iforc].text(ispec+0.7, -2.2, 'RCP', ha='center', va='bottom', rotation=90)
ax[iforc].axhline(0, ls=':', color='k', lw=0.5)
ax[iforc].set_ylim(-2.2, 12)
ax[iforc].text(2, 11.4, '('+chr(iforc+97)+') SSP'+sspid[forc]+'-'+real_name+'/RCP'+real_name, ha='center', va='bottom')
ax[0].set_ylabel('W m$^{-2}$')
ax[1].set_yticklabels([])
ax[2].set_yticklabels([])
ax[0].legend(loc='upper left', bbox_to_anchor=[0,0.9])
ax[1].legend(loc='upper left', bbox_to_anchor=[0,0.9])
subplots_center = 0.5 * (ax[1].get_position().x0 + ax[1].get_position().x1)
pl.figtext(subplots_center, 0.95, 'Effective radiative forcing in SSP and RCP scenarios', ha='center', va='bottom', fontsize=11)
fig.tight_layout(rect=[0,0,1,0.97])
pl.savefig('../figures/fig4.40.png', dpi=300)
pl.savefig('../figures/fig4.40.pdf')
###Output
_____no_output_____ |
notebooks/.ipynb_checkpoints/WSTA_N6_probabilistic_parsing-checkpoint.ipynb | ###Markdown
Probabilistic CFG parsing This notebook demonstrates the process behind parsing with a probabilistic (weighted) context free grammar. The algorithm here is based on the CYK algorithm for PCFG parsing, as described in Figure 14.3 J&M2 and presented in the lecture. Note that NLTK implements a few probabilistic parsing methods internally, which are a little different to the algorithm presented in the lecture but are an excellent resource (see pointer at the bottom).
###Code
import nltk
import nltk.grammar
from collections import defaultdict
from IPython.core.display import display, HTML
###Output
_____no_output_____
###Markdown
As a warm-up let's start by creating a standard (unweighted) CFG grammar and parse a simple sentence.
###Code
groucho_grammar = nltk.CFG.fromstring("""
S -> NP VP
PP -> P NP
NP -> Det N | Det N PP | 'I'
VP -> V NP | VP PP
Det -> 'a' | 'an' | 'my' | 'the'
N -> 'elephant' | 'pajamas' | 'shot'
V -> 'shot'
P -> 'in'
""")
###Output
_____no_output_____
###Markdown
Our sentence is chosen to be ambiguous, note that there are two ways in which the following can be parsed.
###Code
sent = ['I', 'shot', 'an', 'elephant', 'in', 'my', 'pajamas']
parser = nltk.ChartParser(groucho_grammar)
trees = list(parser.parse(sent))
print(len(trees))
print(trees[0])
print(trees[1])
###Output
2
(S
(NP I)
(VP
(VP (V shot) (NP (Det an) (N elephant)))
(PP (P in) (NP (Det my) (N pajamas)))))
(S
(NP I)
(VP
(V shot)
(NP (Det an) (N elephant) (PP (P in) (NP (Det my) (N pajamas))))))
###Markdown
These trees can be visualised in a more comprehensible way
###Code
trees[0]
###Output
_____no_output_____
###Markdown
If you get a lot of warnings in the above, and no pretty tree, then that means you're missing the *ghostview* library. It comes standard with linux and mac, I think, but you may need to install it yourself on windows. You can obtain a copy of the GPL binary [from the ghostview website](http://www.ghostscript.com/download/gsdnld.html). If you're on a locked-down machine, and can't install, you may want to view the tree as follows:
###Code
# ASCII art
trees[0].pretty_print()
# launches a popup window
trees[0].draw()
###Output
_____no_output_____
###Markdown
And here's the other tree
###Code
trees[1]
###Output
_____no_output_____
###Markdown
Now we will move to a PCFG, where each rule has a weight denoted by a bracket [number] after each production. Here's a fairly simple (different) grammar, which we will use to develop and demonstrate the CYK parsing algorithm.
###Code
toy_pcfg2 = nltk.grammar.PCFG.fromstring("""
S -> NP VP [1.0]
VP -> V NP [.59]
VP -> V [.40]
VP -> VP PP [.01]
NP -> Det N [.41]
NP -> Name [.28]
NP -> NP PP [.31]
PP -> P NP [1.0]
V -> 'saw' [.21]
V -> 'ate' [.51]
V -> 'ran' [.28]
N -> 'boy' [.11]
N -> 'cookie' [.12]
N -> 'table' [.13]
N -> 'telescope' [.14]
N -> 'hill' [.5]
Name -> 'Jack' [.52]
Name -> 'Bob' [.48]
P -> 'with' [.61]
P -> 'under' [.39]
Det -> 'the' [.41]
Det -> 'a' [.31]
Det -> 'my' [.28]
""")
###Output
_____no_output_____
###Markdown
First we need to check that the grammar is in Chomsky normal form (CNF), as this is a condition for using the CYK parsing algorithm. Recall that CNF means that all rules are either X -> A B or X -> 'word', where capitals denote non-terminal symbols.
###Code
print(toy_pcfg2.is_chomsky_normal_form())
print(toy_pcfg2.is_flexible_chomsky_normal_form())
###Output
False
True
###Markdown
Nope. This is because of the unary productions X -> Y. Note that these can cause problems, especially when there is a loop (NP -> PP -> NP ...). This means the grammar can produce infinitely deep trees, and generally makes the algorithm more complex, as we would have to reduce the grammar to CNF by solving the total probability over any length chain. Note that as the probability of each production is bounded, $0 \le p \le 1$, long chains tend to be increasingly improbable. However note that our grammar has no unary cycles, so we can ignore this problem. *Do you have any ideas on how this might be handled properly, in general?* We can interact with the grammar in several ways. Have a little play below, e.g., to lookup a production and query its attributes.
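As an added aside (not part of the original notebook), one way to check a grammar for unary cycles is to build the graph of nonterminal-to-nonterminal unary productions and search it for a loop; `has_unary_cycle` below is our own illustrative helper:
###Code
# Added sketch: detect unary cycles (chains X -> Y -> ... -> X) in an NLTK grammar
from collections import defaultdict
from nltk.grammar import is_nonterminal

def has_unary_cycle(grammar):
    edges = defaultdict(set)
    for prod in grammar.productions():
        if len(prod.rhs()) == 1 and is_nonterminal(prod.rhs()[0]):
            edges[prod.lhs()].add(prod.rhs()[0])
    def reaches(start, target, seen):
        return any(nxt == target or (nxt not in seen and reaches(nxt, target, seen | {nxt}))
                   for nxt in edges[start])
    return any(reaches(nt, nt, {nt}) for nt in list(edges))

print(has_unary_cycle(toy_pcfg2)) # False: this grammar has unary productions but no cycles
###Output
_____no_output_____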
###Code
p = toy_pcfg2.productions(rhs='saw')[0]
print(p.prob())
print(p.lhs())
q = toy_pcfg2.productions(rhs=p.lhs())
print(q)
###Output
0.21
V
[VP -> V NP [0.59], VP -> V [0.4]]
###Markdown
Now for the CYK parsing algorithm. This operates by incrementally constructing a *chart (table)* using dynamic programming. The chart stores the partial results thus far, which encodes the best scoring tree structure for substrings of the input sentence. These partial results are extended incrementally to find the best scoring structure for the whole sentence.Recall that a sentence is indexed by the spaces between words, e.g., 0 this 1 is 2 an 3 example 4 such that words or strings of words can be represented as a pair of [start, end] indices. E.g., [2,3] = an; [1,4] = is an example. Each element in the chart is indexed as: table[i, j][non-terminal] which stores a probability for parsing the string [i, j]. The CYK algorithm builds these from smaller units by searching for all i < k < j and all productions such that previous analyses that cover [i,k] and [k,j] might be combined using a binary production to cover [i,j]. For this to be valid, the symbols for the two spans must match the RHS of a production in the grammar, and the resulting non-terminal is the LHS of the production. You'll need to read J&M2 and the lecture slides for more details, or tinker with the code below to understand the process. (E.g., by adding a display call into the j loop.)
###Code
def parse_CYK(words, grammar):
table = defaultdict(dict)
back = defaultdict(dict)
# righter-most index j
for j in range(1, len(words)+1):
# insert token preterminal rewrites, POS -> 'word'
token = words[j-1]
for prod in grammar.productions(rhs=token):
if len(prod.rhs()) == 1:
table[j-1,j][prod.lhs()] = prod.prob()
# deal with pesky unary productions
changes = True
cell = table[j-1,j]
while changes:
# repeat this loop until no rule changes; will infinitely
# loop for grammars with a unary cycle
changes = False
for non_term in list(cell.keys()):
prob = cell[non_term]
for prod in grammar.productions(rhs=non_term):
if len(prod.rhs()) == 1:
unary_prob = prod.prob() * prob
if unary_prob > cell.get(prod.lhs(), 0):
cell[prod.lhs()] = unary_prob
back[j-1,j][prod.lhs()] = (None, prod)
changes = True
# now look for larger productions that span [i, j]
# allowing i to move leftward over the input
for i in range(j-2, -1, -1):
cell = table[i,j]
# k is the split point, i < k < j
for k in range(i+1, j):
# find chart cells based on the split point
left_cell = table[i,k]
right_cell = table[k,j]
# find binary productions which handle a valid symbol A from left cell, X -> A B
for left_nt, left_prob in left_cell.items():
for prod in grammar.productions(rhs=left_nt):
if len(prod.rhs()) == 2:
# check if the left and right cells have a valid parse
right_prob = right_cell.get(prod.rhs()[1])
if left_prob != None and right_prob != None:
# score the partial parse
prob = prod.prob() * left_prob * right_prob
if prob > cell.get(prod.lhs(), 0.0):
# if it exceeds the current best analysis, update the cell
cell[prod.lhs()] = prob
# and store a record of how we got here
back[i,j][prod.lhs()] = (k, prod)
# display the table and back pointers
display_CYK_chart(words, table, back)
# have to build the tree from the back pointers
return build_tree(words, back, grammar.start(), i=0, j=len(words))
###Output
_____no_output_____
###Markdown
We now need to write the remaining functions. The first builds the tree from the back-pointers: it operates recursively, starting at the whole sentence headed by the start symbol 'S' (i.e., cell [0, length, S]), and then follows the back pointers to find where each span was split and what the child symbols were.
###Code
def build_tree(words, back, symbol, i, j):
backpointer = back[i, j].get(symbol)
if backpointer == None:
# X -> 'word' production
assert j == i+1
return nltk.tree.Tree(symbol, [words[i]])
else:
k, prod = back[i, j][symbol]
if k != None:
# X -> A B binary production
left_subtree = build_tree(words, back, prod.rhs()[0], i=i, j=k)
right_subtree = build_tree(words, back, prod.rhs()[1], i=k, j=j)
return nltk.tree.Tree(symbol, [left_subtree, right_subtree])
else:
# X -> A unary production
subtree = build_tree(words, back, prod.rhs()[0], i=i, j=j)
return nltk.tree.Tree(symbol, [subtree])
###Output
_____no_output_____
###Markdown
And some pretty printing code to view the chart.
###Code
def display_CYK_chart(words, table, back):
length = len(words)
    html = '<table>'
    html += '<tr>'
    for i in range(length):
        html += '<td><i>%s</i></td>' % words[i]
html += '</tr>'
for i in range(length):
html += '<tr>'
for j in range(1, length+1):
if j <= i:
html += '<td>'
else:
html += "<td bgcolor='lightcyan'>"
html += '[%d,%d]' % (i,j)
for symbol, prob in table.get((i,j), {}).items():
html += '<br><b>%s</b> [%.5f]' % (str(symbol), prob)
b = back[i,j].get(symbol)
if b != None:
html += '<br> (k=%s, %s)' % (b[0], str(b[1]))
html += '</td>'
html += '</tr>'
    html += '</table>'
    display(HTML(html))
###Output
_____no_output_____
###Markdown
Finally we are ready to parse! Let's set it loose on a simple sentence.
###Code
parse_CYK('Jack saw Bob with my cookie'.split(), toy_pcfg2)
###Output
_____no_output_____
###Markdown
Finally, feel free to investigate NLTK's grammar and parsing algorithms. These live in `nltk.grammar`, `nltk.tree` and `nltk.parse`. Here's a good entry point:
###Code
import nltk.grammar
nltk.grammar.pcfg_demo()
###Output
A PCFG production: NP -> NP PP [0.25]
pcfg_prod.lhs() => NP
pcfg_prod.rhs() => (NP, PP)
pcfg_prod.prob() => 0.25
A PCFG grammar: <Grammar with 23 productions>
grammar.start() => S
grammar.productions() => [S -> NP VP [1.0],
VP -> V NP [0.59],
VP -> V [0.4],
VP -> VP PP [0.01],
NP -> Det N [0.41],
NP -> Name [0.28],
NP -> NP PP [0.31],
PP -> P NP [1.0],
V -> 'saw' [0.21],
V -> 'ate' [0.51],
V -> 'ran' [0.28],
N -> 'boy' [0.11],
N -> 'cookie' [0.12],
N -> 'table' [0.13],
N -> 'telescope' [0.14],
N -> 'hill' [0.5],
Name -> 'Jack' [0.52],
Name -> 'Bob' [0.48],
P -> 'with' [0.61],
P -> 'under' [0.39],
Det -> 'the' [0.41],
Det -> 'a' [0.31],
Det -> 'my' [0.28]]
Induce PCFG grammar from treebank data:
Grammar with 53 productions (start state = S)
JJ -> 'nonexecutive' [0.5]
NP -> CD NNS [0.125]
NP-SBJ -> NNP NNP [0.5]
NP-SBJ -> NP NP-SBJ|<,-ADJP> [0.5]
NNP -> 'Mr.' [0.125]
NNS -> 'years' [1.0]
NP -> DT NP|<JJ-NN> [0.125]
NP-SBJ|<,-ADJP> -> , NP-SBJ|<ADJP-,> [1.0]
NP|<JJ-NN> -> JJ NN [1.0]
NN -> 'director' [0.25]
NN -> 'group' [0.25]
NNP -> 'Dutch' [0.125]
CD -> '61' [0.5]
. -> '.' [1.0]
PP-CLR -> IN NP [1.0]
IN -> 'as' [0.5]
NNP -> 'N.V.' [0.125]
, -> ',' [1.0]
NP-TMP -> NNP CD [1.0]
NP -> NN [0.125]
VBG -> 'publishing' [1.0]
NNP -> 'Elsevier' [0.125]
MD -> 'will' [1.0]
NP -> NNP NNP [0.25]
VB -> 'join' [1.0]
PP -> IN NP [1.0]
VP|<NP-PP-CLR> -> NP VP|<PP-CLR-NP-TMP> [1.0]
DT -> 'a' [0.333333]
NP|<,-NP> -> , NP [1.0]
CD -> '29' [0.5]
NP-PRD -> NP PP [1.0]
ADJP -> NP JJ [1.0]
VP -> VB VP|<NP-PP-CLR> [0.333333]
VP -> MD VP [0.333333]
NP|<VBG-NN> -> VBG NN [1.0]
S -> NP-SBJ S|<VP-.> [1.0]
NNP -> 'Pierre' [0.125]
NNP -> 'Nov.' [0.125]
VBZ -> 'is' [1.0]
NP-SBJ|<ADJP-,> -> ADJP , [1.0]
NP -> NP NP|<,-NP> [0.125]
NN -> 'board' [0.25]
VP|<PP-CLR-NP-TMP> -> PP-CLR NP-TMP [1.0]
VP -> VBZ NP-PRD [0.333333]
S|<VP-.> -> VP . [1.0]
IN -> 'of' [0.5]
NNP -> 'Vinken' [0.25]
NP|<NNP-VBG> -> NNP NP|<VBG-NN> [1.0]
JJ -> 'old' [0.5]
DT -> 'the' [0.666667]
NP -> DT NP|<NNP-VBG> [0.125]
NP -> DT NN [0.125]
NN -> 'chairman' [0.25]
Parse sentence using induced grammar:
['Pierre', 'Vinken', ',', '61', 'years', 'old', ',', 'will', 'join', 'the', 'board', 'as', 'a', 'nonexecutive', 'director', 'Nov.', '29', '.']
|[-] . . . . . . . . . . . . . . . . .| [0:1] 'Pierre' [1.0]
|. [-] . . . . . . . . . . . . . . . .| [1:2] 'Vinken' [1.0]
|. . [-] . . . . . . . . . . . . . . .| [2:3] ',' [1.0]
|. . . [-] . . . . . . . . . . . . . .| [3:4] '61' [1.0]
|. . . . [-] . . . . . . . . . . . . .| [4:5] 'years' [1.0]
|. . . . . [-] . . . . . . . . . . . .| [5:6] 'old' [1.0]
|. . . . . . [-] . . . . . . . . . . .| [6:7] ',' [1.0]
|. . . . . . . [-] . . . . . . . . . .| [7:8] 'will' [1.0]
|. . . . . . . . [-] . . . . . . . . .| [8:9] 'join' [1.0]
|. . . . . . . . . [-] . . . . . . . .| [9:10] 'the' [1.0]
|. . . . . . . . . . [-] . . . . . . .| [10:11] 'board' [1.0]
|. . . . . . . . . . . [-] . . . . . .| [11:12] 'as' [1.0]
|. . . . . . . . . . . . [-] . . . . .| [12:13] 'a' [1.0]
|. . . . . . . . . . . . . [-] . . . .| [13:14] 'nonexecutive' [1.0]
|. . . . . . . . . . . . . . [-] . . .| [14:15] 'director' [1.0]
|. . . . . . . . . . . . . . . [-] . .| [15:16] 'Nov.' [1.0]
|. . . . . . . . . . . . . . . . [-] .| [16:17] '29' [1.0]
|. . . . . . . . . . . . . . . . . [-]| [17:18] '.' [1.0]
|. . . . . . . . . . . . . . . . . [-]| [17:18] '.' [1.0]
|. . . . . . . . . . . . . . . . . [-]| [17:18] . -> '.' * [1.0]
|. . . . . . . . . . . . . . . . . > .| [17:17] . -> * '.' [1.0]
|. . . . . . . . . . . . . . . . [-] .| [16:17] '29' [1.0]
|. . . . . . . . . . . . . . . [-] . .| [15:16] 'Nov.' [1.0]
|. . . . . . . . . . . . . . [-] . . .| [14:15] 'director' [1.0]
|. . . . . . . . . . . . . [-] . . . .| [13:14] 'nonexecutive' [1.0]
|. . . . . . . . . . . . [-] . . . . .| [12:13] 'a' [1.0]
|. . . . . . . . . . . [-] . . . . . .| [11:12] 'as' [1.0]
|. . . . . . . . . . [-] . . . . . . .| [10:11] 'board' [1.0]
|. . . . . . . . . [-] . . . . . . . .| [9:10] 'the' [1.0]
|. . . . . . . . [-] . . . . . . . . .| [8:9] 'join' [1.0]
|. . . . . . . . [-] . . . . . . . . .| [8:9] VB -> 'join' * [1.0]
|. . . . . . . . > . . . . . . . . . .| [8:8] VB -> * 'join' [1.0]
|. . . . . . . [-] . . . . . . . . . .| [7:8] 'will' [1.0]
|. . . . . . . [-] . . . . . . . . . .| [7:8] MD -> 'will' * [1.0]
|. . . . . . . > . . . . . . . . . . .| [7:7] MD -> * 'will' [1.0]
|. . . . . . [-] . . . . . . . . . . .| [6:7] ',' [1.0]
|. . . . . . [-] . . . . . . . . . . .| [6:7] , -> ',' * [1.0]
|. . . . . . [-> . . . . . . . . . . .| [6:7] NP|<,-NP> -> , * NP [1.0]
|. . . . . . [-> . . . . . . . . . . .| [6:7] NP-SBJ|<,-ADJP> -> , * NP-SBJ|<ADJP-,> [1.0]
|. . . . . . > . . . . . . . . . . . .| [6:6] NP|<,-NP> -> * , NP [1.0]
|. . . . . . > . . . . . . . . . . . .| [6:6] NP-SBJ|<,-ADJP> -> * , NP-SBJ|<ADJP-,> [1.0]
|. . . . . . > . . . . . . . . . . . .| [6:6] , -> * ',' [1.0]
|. . . . . [-] . . . . . . . . . . . .| [5:6] 'old' [1.0]
|. . . . [-] . . . . . . . . . . . . .| [4:5] 'years' [1.0]
|. . . . [-] . . . . . . . . . . . . .| [4:5] NNS -> 'years' * [1.0]
|. . . . > . . . . . . . . . . . . . .| [4:4] NNS -> * 'years' [1.0]
|. . . [-] . . . . . . . . . . . . . .| [3:4] '61' [1.0]
|. . [-] . . . . . . . . . . . . . . .| [2:3] ',' [1.0]
|. . [-] . . . . . . . . . . . . . . .| [2:3] , -> ',' * [1.0]
|. . [-> . . . . . . . . . . . . . . .| [2:3] NP|<,-NP> -> , * NP [1.0]
|. . [-> . . . . . . . . . . . . . . .| [2:3] NP-SBJ|<,-ADJP> -> , * NP-SBJ|<ADJP-,> [1.0]
|. . > . . . . . . . . . . . . . . . .| [2:2] NP|<,-NP> -> * , NP [1.0]
|. . > . . . . . . . . . . . . . . . .| [2:2] NP-SBJ|<,-ADJP> -> * , NP-SBJ|<ADJP-,> [1.0]
|. . > . . . . . . . . . . . . . . . .| [2:2] , -> * ',' [1.0]
|. [-] . . . . . . . . . . . . . . . .| [1:2] 'Vinken' [1.0]
|[-] . . . . . . . . . . . . . . . . .| [0:1] 'Pierre' [1.0]
|. . . . . . . . . [-] . . . . . . . .| [9:10] DT -> 'the' * [0.6666666666666666]
|. . . . . . . . . > . . . . . . . . .| [9:9] DT -> * 'the' [0.6666666666666666]
|. . . [-] . . . . . . . . . . . . . .| [3:4] CD -> '61' * [0.5]
|. . . > . . . . . . . . . . . . . . .| [3:3] CD -> * '61' [0.5]
|. . . . . [-] . . . . . . . . . . . .| [5:6] JJ -> 'old' * [0.5]
|. . . . . > . . . . . . . . . . . . .| [5:5] NP|<JJ-NN> -> * JJ NN [1.0]
|. . . . . [-> . . . . . . . . . . . .| [5:6] NP|<JJ-NN> -> JJ * NN [0.5]
|. . . . . > . . . . . . . . . . . . .| [5:5] JJ -> * 'old' [0.5]
|. . . . . . . . . . . [-] . . . . . .| [11:12] IN -> 'as' * [0.5]
|. . . . . . . . . . . > . . . . . . .| [11:11] PP -> * IN NP [1.0]
|. . . . . . . . . . . > . . . . . . .| [11:11] PP-CLR -> * IN NP [1.0]
|. . . . . . . . . . . [-> . . . . . .| [11:12] PP -> IN * NP [0.5]
|. . . . . . . . . . . [-> . . . . . .| [11:12] PP-CLR -> IN * NP [0.5]
|. . . . . . . . . . . > . . . . . . .| [11:11] IN -> * 'as' [0.5]
|. . . . . . . . . . . . . [-] . . . .| [13:14] JJ -> 'nonexecutive' * [0.5]
|. . . . . . . . . . . . . > . . . . .| [13:13] NP|<JJ-NN> -> * JJ NN [1.0]
|. . . . . . . . . . . . . [-> . . . .| [13:14] NP|<JJ-NN> -> JJ * NN [0.5]
|. . . . . . . . . . . . . > . . . . .| [13:13] JJ -> * 'nonexecutive' [0.5]
|. . . . . . . . . . . . . . . . [-] .| [16:17] CD -> '29' * [0.5]
|. . . . . . . . . . . . . . . . > . .| [16:16] CD -> * '29' [0.5]
|. . . . . . . [-> . . . . . . . . . .| [7:8] VP -> MD * VP [0.3333333333333333]
|. . . . . . . > . . . . . . . . . . .| [7:7] VP -> * MD VP [0.3333333333333333]
|. . . . . . . . [-> . . . . . . . . .| [8:9] VP -> VB * VP|<NP-PP-CLR> [0.3333333333333333]
|. . . . . . . . > . . . . . . . . . .| [8:8] VP -> * VB VP|<NP-PP-CLR> [0.3333333333333333]
|. . . . . . . . . . . . [-] . . . . .| [12:13] DT -> 'a' * [0.3333333333333333]
|. . . . . . . . . . . . > . . . . . .| [12:12] DT -> * 'a' [0.3333333333333333]
|. [-] . . . . . . . . . . . . . . . .| [1:2] NNP -> 'Vinken' * [0.25]
|. > . . . . . . . . . . . . . . . . .| [1:1] NP|<NNP-VBG> -> * NNP NP|<VBG-NN> [1.0]
|. > . . . . . . . . . . . . . . . . .| [1:1] NP-TMP -> * NNP CD [1.0]
|. > . . . . . . . . . . . . . . . . .| [1:1] NP-SBJ -> * NNP NNP [0.5]
|. [-> . . . . . . . . . . . . . . . .| [1:2] NP|<NNP-VBG> -> NNP * NP|<VBG-NN> [0.25]
|. [-> . . . . . . . . . . . . . . . .| [1:2] NP-TMP -> NNP * CD [0.25]
|. > . . . . . . . . . . . . . . . . .| [1:1] NP -> * NNP NNP [0.25]
|. > . . . . . . . . . . . . . . . . .| [1:1] NNP -> * 'Vinken' [0.25]
|. . . . . . . . . . [-] . . . . . . .| [10:11] NN -> 'board' * [0.25]
|. . . . . . . . . . > . . . . . . . .| [10:10] NN -> * 'board' [0.25]
|. . . . . . . . . . . . . . [-] . . .| [14:15] NN -> 'director' * [0.25]
|. . . . . . . . . . . . . . > . . . .| [14:14] NN -> * 'director' [0.25]
|. . . . . . . . . . . . . . > . . . .| [14:14] NP -> * NN [0.125]
|. . . . . . . . . . > . . . . . . . .| [10:10] NP -> * NN [0.125]
|. [-> . . . . . . . . . . . . . . . .| [1:2] NP-SBJ -> NNP * NNP [0.125]
|. . . . . . . . . . . . > . . . . . .| [12:12] NP -> * DT NN [0.125]
|. . . . . . . . . . . . > . . . . . .| [12:12] NP -> * DT NP|<NNP-VBG> [0.125]
|. . . . . . . . . . . . > . . . . . .| [12:12] NP -> * DT NP|<JJ-NN> [0.125]
|. . . . . . . . . . . . . . . . > . .| [16:16] NP -> * CD NNS [0.125]
|. . . . . . . . . . . . . [---] . . .| [13:15] NP|<JJ-NN> -> JJ NN * [0.125]
|. . . > . . . . . . . . . . . . . . .| [3:3] NP -> * CD NNS [0.125]
|. . . . . . . . . > . . . . . . . . .| [9:9] NP -> * DT NN [0.125]
|. . . . . . . . . > . . . . . . . . .| [9:9] NP -> * DT NP|<NNP-VBG> [0.125]
|. . . . . . . . . > . . . . . . . . .| [9:9] NP -> * DT NP|<JJ-NN> [0.125]
|[-] . . . . . . . . . . . . . . . . .| [0:1] NNP -> 'Pierre' * [0.125]
|> . . . . . . . . . . . . . . . . . .| [0:0] NP|<NNP-VBG> -> * NNP NP|<VBG-NN> [1.0]
|> . . . . . . . . . . . . . . . . . .| [0:0] NP-TMP -> * NNP CD [1.0]
|> . . . . . . . . . . . . . . . . . .| [0:0] NP-SBJ -> * NNP NNP [0.5]
|> . . . . . . . . . . . . . . . . . .| [0:0] NP -> * NNP NNP [0.25]
|[-> . . . . . . . . . . . . . . . . .| [0:1] NP|<NNP-VBG> -> NNP * NP|<VBG-NN> [0.125]
|[-> . . . . . . . . . . . . . . . . .| [0:1] NP-TMP -> NNP * CD [0.125]
|> . . . . . . . . . . . . . . . . . .| [0:0] NNP -> * 'Pierre' [0.125]
|. . . . . . . . . . . . . . . [-] . .| [15:16] NNP -> 'Nov.' * [0.125]
|. . . . . . . . . . . . . . . > . . .| [15:15] NP|<NNP-VBG> -> * NNP NP|<VBG-NN> [1.0]
|. . . . . . . . . . . . . . . > . . .| [15:15] NP-TMP -> * NNP CD [1.0]
|. . . . . . . . . . . . . . . > . . .| [15:15] NP-SBJ -> * NNP NNP [0.5]
|. . . . . . . . . . . . . . . > . . .| [15:15] NP -> * NNP NNP [0.25]
|. . . . . . . . . . . . . . . [-> . .| [15:16] NP|<NNP-VBG> -> NNP * NP|<VBG-NN> [0.125]
|. . . . . . . . . . . . . . . [-> . .| [15:16] NP-TMP -> NNP * CD [0.125]
|. . . . . . . . . . . . . . . > . . .| [15:15] NNP -> * 'Nov.' [0.125]
|. . . . . . . . . [-> . . . . . . . .| [9:10] NP -> DT * NN [0.08333333333333333]
|. . . . . . . . . [-> . . . . . . . .| [9:10] NP -> DT * NP|<NNP-VBG> [0.08333333333333333]
|. . . . . . . . . [-> . . . . . . . .| [9:10] NP -> DT * NP|<JJ-NN> [0.08333333333333333]
|. . . . . . . . . . . . . . . [---] .| [15:17] NP-TMP -> NNP CD * [0.0625]
|. . . . . . . . . . . . . . . [-> . .| [15:16] NP-SBJ -> NNP * NNP [0.0625]
|[-> . . . . . . . . . . . . . . . . .| [0:1] NP-SBJ -> NNP * NNP [0.0625]
|. [-> . . . . . . . . . . . . . . . .| [1:2] NP -> NNP * NNP [0.0625]
|. . . . . . . . . . . . . . . . [-> .| [16:17] NP -> CD * NNS [0.0625]
|. . . [-> . . . . . . . . . . . . . .| [3:4] NP -> CD * NNS [0.0625]
|. . . [---] . . . . . . . . . . . . .| [3:5] NP -> CD NNS * [0.0625]
|. . . > . . . . . . . . . . . . . . .| [3:3] ADJP -> * NP JJ [1.0]
|. . . > . . . . . . . . . . . . . . .| [3:3] NP-PRD -> * NP PP [1.0]
|. . . > . . . . . . . . . . . . . . .| [3:3] VP|<NP-PP-CLR> -> * NP VP|<PP-CLR-NP-TMP> [1.0]
|. . . > . . . . . . . . . . . . . . .| [3:3] NP-SBJ -> * NP NP-SBJ|<,-ADJP> [0.5]
|. . . > . . . . . . . . . . . . . . .| [3:3] NP -> * NP NP|<,-NP> [0.125]
|. . . [---> . . . . . . . . . . . . .| [3:5] ADJP -> NP * JJ [0.0625]
|. . . [---> . . . . . . . . . . . . .| [3:5] NP-PRD -> NP * PP [0.0625]
|. . . [---> . . . . . . . . . . . . .| [3:5] VP|<NP-PP-CLR> -> NP * VP|<PP-CLR-NP-TMP> [0.0625]
|. . [-----] . . . . . . . . . . . . .| [2:5] NP|<,-NP> -> , NP * [0.0625]
|. . . . . . . . . . . . [-> . . . . .| [12:13] NP -> DT * NN [0.041666666666666664]
|. . . . . . . . . . . . [-> . . . . .| [12:13] NP -> DT * NP|<NNP-VBG> [0.041666666666666664]
|. . . . . . . . . . . . [-> . . . . .| [12:13] NP -> DT * NP|<JJ-NN> [0.041666666666666664]
|. . . [-----] . . . . . . . . . . . .| [3:6] ADJP -> NP JJ * [0.03125]
|. . . > . . . . . . . . . . . . . . .| [3:3] NP-SBJ|<ADJP-,> -> * ADJP , [1.0]
|. . . [-----> . . . . . . . . . . . .| [3:6] NP-SBJ|<ADJP-,> -> ADJP * , [0.03125]
|. . . [-------] . . . . . . . . . . .| [3:7] NP-SBJ|<ADJP-,> -> ADJP , * [0.03125]
|. . [---------] . . . . . . . . . . .| [2:7] NP-SBJ|<,-ADJP> -> , NP-SBJ|<ADJP-,> * [0.03125]
|. . . [---> . . . . . . . . . . . . .| [3:5] NP-SBJ -> NP * NP-SBJ|<,-ADJP> [0.03125]
|. . . . . . . . . . . . . . . [-> . .| [15:16] NP -> NNP * NNP [0.03125]
|[-> . . . . . . . . . . . . . . . . .| [0:1] NP -> NNP * NNP [0.03125]
|. . . . . . . . . . . . . . [-] . . .| [14:15] NP -> NN * [0.03125]
|. . . . . . . . . . . . . . > . . . .| [14:14] ADJP -> * NP JJ [1.0]
|. . . . . . . . . . . . . . > . . . .| [14:14] NP-PRD -> * NP PP [1.0]
|. . . . . . . . . . . . . . > . . . .| [14:14] VP|<NP-PP-CLR> -> * NP VP|<PP-CLR-NP-TMP> [1.0]
|. . . . . . . . . . . . . . > . . . .| [14:14] NP-SBJ -> * NP NP-SBJ|<,-ADJP> [0.5]
|. . . . . . . . . . . . . . > . . . .| [14:14] NP -> * NP NP|<,-NP> [0.125]
|. . . . . . . . . . . . . . [-> . . .| [14:15] ADJP -> NP * JJ [0.03125]
|. . . . . . . . . . . . . . [-> . . .| [14:15] NP-PRD -> NP * PP [0.03125]
|. . . . . . . . . . . . . . [-> . . .| [14:15] VP|<NP-PP-CLR> -> NP * VP|<PP-CLR-NP-TMP> [0.03125]
|. . . . . . . . . . [-] . . . . . . .| [10:11] NP -> NN * [0.03125]
|. . . . . . . . . . > . . . . . . . .| [10:10] ADJP -> * NP JJ [1.0]
|. . . . . . . . . . > . . . . . . . .| [10:10] NP-PRD -> * NP PP [1.0]
|. . . . . . . . . . > . . . . . . . .| [10:10] VP|<NP-PP-CLR> -> * NP VP|<PP-CLR-NP-TMP> [1.0]
|. . . . . . . . . . > . . . . . . . .| [10:10] NP-SBJ -> * NP NP-SBJ|<,-ADJP> [0.5]
|. . . . . . . . . . > . . . . . . . .| [10:10] NP -> * NP NP|<,-NP> [0.125]
|. . . . . . . . . . [-> . . . . . . .| [10:11] ADJP -> NP * JJ [0.03125]
|. . . . . . . . . . [-> . . . . . . .| [10:11] NP-PRD -> NP * PP [0.03125]
|. . . . . . . . . . [-> . . . . . . .| [10:11] VP|<NP-PP-CLR> -> NP * VP|<PP-CLR-NP-TMP> [0.03125]
|. . . . . . . . . [---] . . . . . . .| [9:11] NP -> DT NN * [0.020833333333333332]
|. . . . . . . . . > . . . . . . . . .| [9:9] ADJP -> * NP JJ [1.0]
|. . . . . . . . . > . . . . . . . . .| [9:9] NP-PRD -> * NP PP [1.0]
|. . . . . . . . . > . . . . . . . . .| [9:9] VP|<NP-PP-CLR> -> * NP VP|<PP-CLR-NP-TMP> [1.0]
|. . . . . . . . . > . . . . . . . . .| [9:9] NP-SBJ -> * NP NP-SBJ|<,-ADJP> [0.5]
|. . . . . . . . . > . . . . . . . . .| [9:9] NP -> * NP NP|<,-NP> [0.125]
|. . . . . . . . . [---> . . . . . . .| [9:11] ADJP -> NP * JJ [0.020833333333333332]
|. . . . . . . . . [---> . . . . . . .| [9:11] NP-PRD -> NP * PP [0.020833333333333332]
|. . . . . . . . . [---> . . . . . . .| [9:11] VP|<NP-PP-CLR> -> NP * VP|<PP-CLR-NP-TMP> [0.020833333333333332]
|. . . . . . . . . . [-> . . . . . . .| [10:11] NP-SBJ -> NP * NP-SBJ|<,-ADJP> [0.015625]
|. . . . . . . . . . . . . . [-> . . .| [14:15] NP-SBJ -> NP * NP-SBJ|<,-ADJP> [0.015625]
|[---] . . . . . . . . . . . . . . . .| [0:2] NP-SBJ -> NNP NNP * [0.015625]
|> . . . . . . . . . . . . . . . . . .| [0:0] S -> * NP-SBJ S|<VP-.> [1.0]
|[---> . . . . . . . . . . . . . . . .| [0:2] S -> NP-SBJ * S|<VP-.> [0.015625]
|. . . . . . . . . [---> . . . . . . .| [9:11] NP-SBJ -> NP * NP-SBJ|<,-ADJP> [0.010416666666666666]
|[---] . . . . . . . . . . . . . . . .| [0:2] NP -> NNP NNP * [0.0078125]
|> . . . . . . . . . . . . . . . . . .| [0:0] ADJP -> * NP JJ [1.0]
|> . . . . . . . . . . . . . . . . . .| [0:0] NP-PRD -> * NP PP [1.0]
|> . . . . . . . . . . . . . . . . . .| [0:0] VP|<NP-PP-CLR> -> * NP VP|<PP-CLR-NP-TMP> [1.0]
|> . . . . . . . . . . . . . . . . . .| [0:0] NP-SBJ -> * NP NP-SBJ|<,-ADJP> [0.5]
|> . . . . . . . . . . . . . . . . . .| [0:0] NP -> * NP NP|<,-NP> [0.125]
|[---> . . . . . . . . . . . . . . . .| [0:2] ADJP -> NP * JJ [0.0078125]
|[---> . . . . . . . . . . . . . . . .| [0:2] NP-PRD -> NP * PP [0.0078125]
|[---> . . . . . . . . . . . . . . . .| [0:2] VP|<NP-PP-CLR> -> NP * VP|<PP-CLR-NP-TMP> [0.0078125]
|. . . [---> . . . . . . . . . . . . .| [3:5] NP -> NP * NP|<,-NP> [0.0078125]
|. . . . . . . . . . . . [-----] . . .| [12:15] NP -> DT NP|<JJ-NN> * [0.005208333333333333]
|. . . . . . . . . . . . > . . . . . .| [12:12] ADJP -> * NP JJ [1.0]
|. . . . . . . . . . . . > . . . . . .| [12:12] NP-PRD -> * NP PP [1.0]
|. . . . . . . . . . . . > . . . . . .| [12:12] VP|<NP-PP-CLR> -> * NP VP|<PP-CLR-NP-TMP> [1.0]
|. . . . . . . . . . . . > . . . . . .| [12:12] NP-SBJ -> * NP NP-SBJ|<,-ADJP> [0.5]
|. . . . . . . . . . . . > . . . . . .| [12:12] NP -> * NP NP|<,-NP> [0.125]
|. . . . . . . . . . . . [-----> . . .| [12:15] ADJP -> NP * JJ [0.005208333333333333]
|. . . . . . . . . . . . [-----> . . .| [12:15] NP-PRD -> NP * PP [0.005208333333333333]
|. . . . . . . . . . . . [-----> . . .| [12:15] VP|<NP-PP-CLR> -> NP * VP|<PP-CLR-NP-TMP> [0.005208333333333333]
|[---> . . . . . . . . . . . . . . . .| [0:2] NP-SBJ -> NP * NP-SBJ|<,-ADJP> [0.00390625]
|. . . . . . . . . . [-> . . . . . . .| [10:11] NP -> NP * NP|<,-NP> [0.00390625]
|. . . . . . . . . . . . . . [-> . . .| [14:15] NP -> NP * NP|<,-NP> [0.00390625]
|. . . . . . . . . . . . [-----> . . .| [12:15] NP-SBJ -> NP * NP-SBJ|<,-ADJP> [0.0026041666666666665]
|. . . . . . . . . . . [-------] . . .| [11:15] PP -> IN NP * [0.0026041666666666665]
|. . . . . . . . . . . [-------] . . .| [11:15] PP-CLR -> IN NP * [0.0026041666666666665]
|. . . . . . . . . . . > . . . . . . .| [11:11] VP|<PP-CLR-NP-TMP> -> * PP-CLR NP-TMP [1.0]
|. . . . . . . . . . . [-------> . . .| [11:15] VP|<PP-CLR-NP-TMP> -> PP-CLR * NP-TMP [0.0026041666666666665]
|. . . . . . . . . [---> . . . . . . .| [9:11] NP -> NP * NP|<,-NP> [0.0026041666666666665]
|[---> . . . . . . . . . . . . . . . .| [0:2] NP -> NP * NP|<,-NP> [0.0009765625]
|. . . . . . . . . . . . [-----> . . .| [12:15] NP -> NP * NP|<,-NP> [0.0006510416666666666]
|. . . . . . . . . . . [-----------] .| [11:17] VP|<PP-CLR-NP-TMP> -> PP-CLR NP-TMP * [0.00016276041666666666]
|[-------------] . . . . . . . . . . .| [0:7] NP-SBJ -> NP NP-SBJ|<,-ADJP> * [0.0001220703125]
|[-------------> . . . . . . . . . . .| [0:7] S -> NP-SBJ * S|<VP-.> [0.0001220703125]
|. . . . . . . . . . [---------] . . .| [10:15] NP-PRD -> NP PP * [8.138020833333333e-05]
|[---------] . . . . . . . . . . . . .| [0:5] NP -> NP NP|<,-NP> * [6.103515625e-05]
|[---------> . . . . . . . . . . . . .| [0:5] ADJP -> NP * JJ [6.103515625e-05]
|[---------> . . . . . . . . . . . . .| [0:5] NP-PRD -> NP * PP [6.103515625e-05]
|[---------> . . . . . . . . . . . . .| [0:5] VP|<NP-PP-CLR> -> NP * VP|<PP-CLR-NP-TMP> [6.103515625e-05]
|. . . . . . . . . [-----------] . . .| [9:15] NP-PRD -> NP PP * [5.425347222222222e-05]
|[-----------] . . . . . . . . . . . .| [0:6] ADJP -> NP JJ * [3.0517578125e-05]
|> . . . . . . . . . . . . . . . . . .| [0:0] NP-SBJ|<ADJP-,> -> * ADJP , [1.0]
|[-----------> . . . . . . . . . . . .| [0:6] NP-SBJ|<ADJP-,> -> ADJP * , [3.0517578125e-05]
|[-------------] . . . . . . . . . . .| [0:7] NP-SBJ|<ADJP-,> -> ADJP , * [3.0517578125e-05]
|[---------> . . . . . . . . . . . . .| [0:5] NP-SBJ -> NP * NP-SBJ|<,-ADJP> [3.0517578125e-05]
|[---------> . . . . . . . . . . . . .| [0:5] NP -> NP * NP|<,-NP> [7.62939453125e-06]
|. . . . . . . . . . [-------------] .| [10:17] VP|<NP-PP-CLR> -> NP VP|<PP-CLR-NP-TMP> * [5.086263020833333e-06]
|. . . . . . . . . [---------------] .| [9:17] VP|<NP-PP-CLR> -> NP VP|<PP-CLR-NP-TMP> * [3.3908420138888887e-06]
|. . . . . . . . [-----------------] .| [8:17] VP -> VB VP|<NP-PP-CLR> * [1.1302806712962962e-06]
|. . . . . . . . > . . . . . . . . . .| [8:8] S|<VP-.> -> * VP . [1.0]
|. . . . . . . . [-----------------> .| [8:17] S|<VP-.> -> VP * . [1.1302806712962962e-06]
|. . . . . . . . [-------------------]| [8:18] S|<VP-.> -> VP . * [1.1302806712962962e-06]
|. . . . . . . [-------------------] .| [7:17] VP -> MD VP * [3.767602237654321e-07]
|. . . . . . . > . . . . . . . . . . .| [7:7] S|<VP-.> -> * VP . [1.0]
|. . . . . . . [-------------------> .| [7:17] S|<VP-.> -> VP * . [3.767602237654321e-07]
|. . . . . . . [---------------------]| [7:18] S|<VP-.> -> VP . * [3.767602237654321e-07]
|[===================================]| [0:18] S -> NP-SBJ S|<VP-.> * [4.599123825261622e-11]
(S
(NP-SBJ
(NP (NNP Pierre) (NNP Vinken))
(NP-SBJ|<,-ADJP>
(, ,)
(NP-SBJ|<ADJP-,>
(ADJP (NP (CD 61) (NNS years)) (JJ old))
(, ,))))
(S|<VP-.>
(VP
(MD will)
(VP
(VB join)
(VP|<NP-PP-CLR>
(NP (DT the) (NN board))
(VP|<PP-CLR-NP-TMP>
(PP-CLR
(IN as)
(NP
(DT a)
(NP|<JJ-NN>
(JJ nonexecutive)
(NN director))))
(NP-TMP (NNP Nov.) (CD 29))))))
(. .))) (p=4.59912e-11)
|
categorical-features/sklearn-onehot-encoding-mixedtype-df.ipynb | ###Markdown
OneHot Encoding in Scikit-Learn with DataFrames of Mixed Column Types Some Toydata - Imagine we have some dataset that consists of both numerical and categorical features. - And we just want to convert the categorical features into a onehot encoding (while leaving the numerical features untouched)
###Code
import pandas as pd
feature_1 = [
1.1, 2.1, 3.1, 4.2,
5.1, 6.1, 7.1, 8.1,
1.2, 2.1, 3.1, 4.1
]
feature_2 = [
'b', 'b', 'b', 'b',
'a', 'a', 'a', 'a',
'c', 'c', 'c', 'c'
]
df = pd.DataFrame({'numerical': feature_1, 'categorical': feature_2})
df
###Output
_____no_output_____
###Markdown
Onehot Encoding - We can use e.g. scikit-learn's [OneHotEncoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) to expand the categorical column into onehot-encoded ones - By default, the `OneHotEncoder` treats all columns as categorical and encodes them (this includes the numerical ones), which is not what we want if we have mixed-type datasets - We can use the [ColumnTransformer](https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.html) to select the specific columns we want to transform
###Code
import sklearn
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder(sparse=False, drop='first', dtype='float')
categorical_features = ['categorical']
col_transformer = ColumnTransformer(
transformers=[
('cat', ohe, categorical_features)],
# include the numerical column(s) via passthrough:
remainder='passthrough'
)
col_transformer.fit(df)
X_t = col_transformer.transform(df)
X_t
%load_ext watermark
%watermark --iversions
###Output
pandas : 1.4.0
sklearn: 1.0.2
|
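###Markdown
To see which output columns the transformed array corresponds to, we can ask the fitted transformer for its feature names. This is an added sketch, not part of the original notebook: it assumes the `col_transformer` fitted above and scikit-learn >= 1.0, where `ColumnTransformer.get_feature_names_out` is available.
###Code
import pandas as pd
# Onehot columns come first (prefixed with the 'cat' transformer name),
# the passed-through numerical column comes last (prefixed 'remainder')
feature_names = col_transformer.get_feature_names_out()
X_t_df = pd.DataFrame(X_t, columns=feature_names)
X_t_df.head()
###Output
_____no_output_____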
notebooks/30-PRMT-1985--a-high-proportion-of-pending-transfers-are-caused-by-a-small-group-of-practices.ipynb | ###Markdown
PRMT-1985 [HYPOTHESIS] A high proportion of pending transfers are caused by a small group of practices Hypothesis: We believe that a large proportion of pending transfers are caused by a small group of receiving practices, regardless of the sending supplier. We will know this to be true when we can see that the practices causing a high number of EMIS-EMIS pending transfers are the same as those causing TPP-EMIS or Vision-EMIS pending transfers (and the same would occur for each supplier as a receiver). Import and setup Data
###Code
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
# Using data generated from branch PRMT-1742-duplicates-analysis.
# This is needed to correctly handle duplicates.
# Once the upstream pipeline has a fix for duplicate EHRs, then we can go back to using the main output.
transfer_file_location = "s3://prm-gp2gp-data-sandbox-dev/transfers-duplicates-hypothesis/"
transfer_files = [
"9-2020-transfers.parquet",
"10-2020-transfers.parquet",
"11-2020-transfers.parquet",
"12-2020-transfers.parquet",
"1-2021-transfers.parquet",
"2-2021-transfers.parquet"
]
transfer_input_files = [transfer_file_location + f for f in transfer_files]
transfers_raw = pd.concat((
pd.read_parquet(f)
for f in transfer_input_files
))
# In the data from the PRMT-1742-duplicates-analysis branch, these columns have been added, but contain only empty values.
transfers_raw = transfers_raw.drop(["sending_supplier", "requesting_supplier"], axis=1)
# Given the findings in PRMT-1742 - many duplicate EHR errors are misclassified, the below reclassifies the relevant data
has_at_least_one_successful_integration_code = lambda errors: any((np.isnan(e) or e==15 for e in errors))
successful_transfers_bool = transfers_raw['request_completed_ack_codes'].apply(has_at_least_one_successful_integration_code)
transfers = transfers_raw.copy()
transfers.loc[successful_transfers_bool, "status"] = "INTEGRATED"
# Correctly interpret certain sender errors as failed.
# This is explained in PRMT-1974. Eventually this will be fixed upstream in the pipeline.
pending_sender_error_codes=[6,7,10,24,30,23,14,99]
transfers_with_pending_sender_code_bool=transfers['sender_error_code'].isin(pending_sender_error_codes)
transfers_with_pending_with_error_bool=transfers['status']=='PENDING_WITH_ERROR'
transfers_which_need_pending_to_failure_change_bool=transfers_with_pending_sender_code_bool & transfers_with_pending_with_error_bool
transfers.loc[transfers_which_need_pending_to_failure_change_bool,'status']='FAILED'
# Add integrated Late status
eight_days_in_seconds=8*24*60*60
transfers_after_sla_bool=transfers['sla_duration']>eight_days_in_seconds
transfers_with_integrated_bool=transfers['status']=='INTEGRATED'
transfers_integrated_late_bool=transfers_after_sla_bool & transfers_with_integrated_bool
transfers.loc[transfers_integrated_late_bool,'status']='INTEGRATED LATE'
# If the record integrated after 28 days, change the status back to pending.
# This is to handle each month consistently and to always reflect a transfer's status 28 days after it was made.
# TBD how this is handled upstream in the pipeline
twenty_eight_days_in_seconds=28*24*60*60
transfers_after_month_bool=transfers['sla_duration']>twenty_eight_days_in_seconds
transfers_pending_at_month_bool=transfers_after_month_bool & transfers_integrated_late_bool
transfers.loc[transfers_pending_at_month_bool,'status']='PENDING'
# A transfer counts as having an early error if a sender error code is present or any intermediate error codes were recorded
transfers_with_early_error_bool=(~transfers.loc[:,'sender_error_code'].isna()) | (transfers.loc[:,'intermediate_error_codes'].apply(len)>0)
transfers.loc[transfers_with_early_error_bool & transfers_pending_at_month_bool,'status']='PENDING_WITH_ERROR'
# Supplier name mapping
supplier_renaming = {
"EGTON MEDICAL INFORMATION SYSTEMS LTD (EMIS)":"EMIS",
"IN PRACTICE SYSTEMS LTD":"Vision",
"MICROTEST LTD":"Microtest",
"THE PHOENIX PARTNERSHIP":"TPP",
None: "Unknown"
}
asid_lookup_file = "s3://prm-gp2gp-data-sandbox-dev/asid-lookup/asidLookup-Mar-2021.csv.gz"
asid_lookup = pd.read_csv(asid_lookup_file)
lookup = asid_lookup[["ASID", "MName", "NACS","OrgName"]]
transfers = transfers.merge(lookup, left_on='requesting_practice_asid',right_on='ASID',how='left')
transfers = transfers.rename({'MName': 'requesting_supplier', 'ASID': 'requesting_supplier_asid', 'NACS': 'requesting_ods_code'}, axis=1)
transfers = transfers.merge(lookup, left_on='sending_practice_asid',right_on='ASID',how='left')
transfers = transfers.rename({'MName': 'sending_supplier', 'ASID': 'sending_supplier_asid', 'NACS': 'sending_ods_code'}, axis=1)
transfers["sending_supplier"] = transfers["sending_supplier"].replace(supplier_renaming.keys(), supplier_renaming.values())
transfers["requesting_supplier"] = transfers["requesting_supplier"].replace(supplier_renaming.keys(), supplier_renaming.values())
# For each requesting practice, what is the proportion in each status?
status_counts=transfers.pivot_table(index='requesting_practice_asid',columns='status',values='conversation_id',aggfunc='count').fillna(0)
status_counts_percentage=status_counts.div(status_counts.sum(axis=1),axis=0).multiply(100).round(2)
status_counts_percentage=status_counts_percentage[['INTEGRATED','INTEGRATED LATE','PENDING','PENDING_WITH_ERROR','FAILED']]
status_counts_percentage.columns=status_counts_percentage.columns.str.title().str.replace("_"," ") + ' %'
status_counts_percentage.head()
transfers_reduced_columns = transfers[["requesting_practice_asid","requesting_supplier","sending_supplier", "status"]].copy()
is_pending_transfers = transfers["status"] == "PENDING"
transfers_reduced_columns["is_pending"] = is_pending_transfers
transfers_reduced_columns = transfers_reduced_columns.drop("status", axis=1)
transfers_reduced_columns.head()
###Output
_____no_output_____
###Markdown
For all Pending, what is the distribution by number of Practices
###Code
def transfers_quantile_status_table(transfers_df,status,quantiles=5):
practice_status_table=pd.pivot_table(transfers_df,index='requesting_practice_asid',columns='status',values='conversation_id',aggfunc='count').fillna(0)
practice_status_table['TOTAL']=practice_status_table.sum(axis=1)
practice_profile_data=practice_status_table.sort_values(by=status,ascending=False)
cumulative_percentage=practice_profile_data[status].cumsum()/practice_profile_data[status].sum()
practice_profile_data['Percentile Group']=(100/quantiles)*np.ceil(cumulative_percentage*quantiles)
practice_profile_data=practice_profile_data.groupby('Percentile Group').agg({status:'sum','TOTAL':'sum','INTEGRATED':'count'}).astype(int)
practice_profile_data=practice_profile_data.rename({status:'Total ' + status,'TOTAL':'Total Transfers','INTEGRATED':'Total Practices'},axis=1)
practice_profile_data_percentages=(100*practice_profile_data/practice_profile_data.sum()).round(2)
practice_profile_data_percentages.columns= "% " + practice_profile_data_percentages.columns
return pd.concat([practice_profile_data,practice_profile_data_percentages],axis=1)
transfers_quantile_status_table(transfers,"PENDING")
practice_status_table=pd.pivot_table(transfers,index='requesting_practice_asid',columns='status',values='conversation_id',aggfunc='count').fillna(0).sort_values(by="PENDING",ascending=False)
ax=practice_status_table['PENDING'].cumsum().reset_index(drop=True).plot(figsize=(8,5))
ax.set_ylabel('Number of Pending without error Transfers')
ax.set_xlabel('Number of GP Practices')
ax.set_title('Cumulative graph of total pending transfers for GP Practices')
plt.gcf().savefig('Cumulative_pending_transfers.jpg')
###Output
_____no_output_____
###Markdown
For practices of each Supplier, does their "policy" towards pending transfers differ by the supplier they are requesting from?
###Code
suppliers_to_investigate = ["EMIS", "TPP", "Vision"]
pending_by_supplier_pathway=transfers_reduced_columns.pivot_table(index='requesting_supplier',columns='sending_supplier',values='is_pending',aggfunc='mean').multiply(100).round(2)
pending_by_supplier_pathway=pending_by_supplier_pathway.loc[suppliers_to_investigate,suppliers_to_investigate]
pending_by_supplier_pathway
###Output
_____no_output_____
###Markdown
Correlations of volume pending
###Code
pending_transfers_supplier_pathways_pivot = pd.pivot_table(transfers_reduced_columns, index=["requesting_supplier", "requesting_practice_asid"], columns="sending_supplier", values="is_pending", aggfunc="sum").fillna(0)
pending_transfers_as_list = [pending_transfers_supplier_pathways_pivot.loc[supplier].corr().stack().rename(supplier) for supplier in suppliers_to_investigate]
pending_transfers_volume_between_suppliers_correlation = pd.concat(pending_transfers_as_list, axis=1)
pending_transfers_volume_between_suppliers_correlation.loc[[("EMIS", "TPP"), ("EMIS", "Vision"), ("TPP", "Vision")]].round(2)
###Output
_____no_output_____
###Markdown
Correlations of % pending
###Code
pending_transfers_supplier_pathways_pivot = pd.pivot_table(transfers_reduced_columns, index=["requesting_supplier", "requesting_practice_asid"], columns="sending_supplier", values="is_pending", aggfunc="mean").fillna(0)
pending_transfers_as_list = [pending_transfers_supplier_pathways_pivot.loc[supplier].corr().stack().rename(supplier) for supplier in suppliers_to_investigate]
pending_transfers_percentage_between_suppliers_correlation = pd.concat(pending_transfers_as_list, axis=1)
pending_transfers_percentage_between_suppliers_correlation.loc[[("EMIS", "TPP"), ("EMIS", "Vision"), ("TPP", "Vision")]].round(2)
###Output
_____no_output_____
###Markdown
Create frame of all practices ranked by pendingOutput to Excel
###Code
# Total volume Pending by practice
pending_transfers_supplier_pathways_pivot = pd.pivot_table(transfers_reduced_columns, index=["requesting_supplier", "requesting_practice_asid"], columns="sending_supplier", values="is_pending", aggfunc="sum").fillna(0)
pending_transfers_for_supplier_pathways = pending_transfers_supplier_pathways_pivot.copy().astype(int)
pending_transfers_for_supplier_pathways = pending_transfers_for_supplier_pathways.loc[:, ["EMIS","TPP","Vision","Microtest","Unknown"]]
pending_transfers_for_supplier_pathways.insert(0, "Total Pending Transfers", pending_transfers_for_supplier_pathways.sum(axis=1))
# Percentage Pending by practice
pending_transfers_supplier_pathways_pivot_percentage = pd.pivot_table(transfers_reduced_columns, index=["requesting_supplier", "requesting_practice_asid"], columns="sending_supplier", values="is_pending", aggfunc="mean").fillna(0)
pending_transfers_supplier_pathways_percentage = pending_transfers_supplier_pathways_pivot_percentage.copy().round(4).multiply(100)
pending_transfers_supplier_pathways_percentage = pending_transfers_supplier_pathways_percentage.loc[:, ["EMIS","TPP","Vision","Microtest","Unknown"]]
pending_transfers_supplier_pathways_percentage.columns = pending_transfers_supplier_pathways_percentage.columns + " %"
# Join the two and clean up and re-organise the frame
complete_pending_transfers_for_supplier_pathways = pd.concat([pending_transfers_for_supplier_pathways, pending_transfers_supplier_pathways_percentage], axis=1)
complete_pending_transfers_for_supplier_pathways = complete_pending_transfers_for_supplier_pathways.sort_values(by="Total Pending Transfers", ascending=False)
complete_pending_transfers_for_supplier_pathways = asid_lookup[["ASID", "PostCode", "OrgName"]].merge(complete_pending_transfers_for_supplier_pathways.reset_index(), right_on="requesting_practice_asid", left_on="ASID", how="right")
complete_pending_transfers_for_supplier_pathways=complete_pending_transfers_for_supplier_pathways.drop('requesting_practice_asid',axis=1).set_index(['requesting_supplier','ASID'])#.insert(0,"Supplier",supplier)
# Add in the % in each status for each supplier
complete_pending_transfers_for_supplier_pathways=complete_pending_transfers_for_supplier_pathways.reset_index().merge(status_counts_percentage,left_on='ASID',right_index=True,how='left').set_index(['requesting_supplier','ASID'])
# Save to Excel
complete_pending_transfers_for_supplier_pathways.reset_index().to_excel('PRMT-1985 Pending Transfers all practices.xlsx')
complete_pending_transfers_for_supplier_pathways.loc['Vision'].head(20)
# View Emis practices
complete_pending_transfers_for_supplier_pathways.loc["EMIS"].head(10)
gants_hill = (transfers["requesting_practice_asid"] == "926102461049") & (transfers["status"] == "PENDING") & (transfers["sending_supplier"] == "Vision")
gants_hill = transfers.loc[gants_hill]
#gants_hill.head(20)
###Output
_____no_output_____ |
jypeter/CharClassificationDM.ipynb | ###Markdown
Character Classification - Data Mining Simple UI for creating data for character classification
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cv2
# Import Widgets
from ipywidgets import Button, Text, HBox, VBox
from IPython.display import display, clear_output
# Timestamp
import time
# For transforming letters encoding (removing accents)
import unicodedata as ud
# Creating folders
import os
# Import costume functions, corresponding to notebooks
from ocr import page, words, charSeg
from ocr.normalization import imageNorm
# Helper functions - ploting and resizing
from ocr.helpers import implt, resize, ratio
###Output
_____no_output_____
###Markdown
Global variables
###Code
IMG = "text"
###Output
_____no_output_____
###Markdown
Load image
###Code
image = cv2.cvtColor(cv2.imread("data/pagedet/%s.jpg" % IMG), cv2.COLOR_BGR2RGB)
implt(image)
# Crop image and get bounding boxes
crop = page.detection(image)
implt(crop)
bBoxes = words.detection(crop)
###Output
_____no_output_____
###Markdown
Simple UI using widgets
###Code
class Cycler:
""" Cycle through the chars, save data """
height = 60
def __init__(self, image, boxes, idx):
self.boxes = boxes # Array of bounding boxes
self.image = image # Whole image
self.index = idx # Index of current bounding box
self.charIdx = 0 # Position of the slider (current char index)
self.actual = image # Current image of word
self.char = None # Actual char image
self.nextImg()
def save(self, sender, val=1):
"""
Saving current char into dataset
Saving the letter both in EN and CZ dataset
"""
letter = sender.value
sender.value = ""
if self.char is None:
self.nextChar()
else:
if len(letter) == 1:
# Remove accents for EN dataset
letterEN = ud.normalize('NFKD', letter)[0]
# Saving both into cz and en dataset
dirCZ = "data/charclas/cz/" + letter
dirEN = "data/charclas/en/" + letterEN
# Create dir if it doesn't exist
os.makedirs(dirCZ, exist_ok=True)
os.makedirs(dirEN, exist_ok=True)
# Save the letter
cv2.imwrite("%s/%s.jpg" % (dirCZ, time.time()), self.char)
cv2.imwrite("%s/%s.jpg" % (dirEN, time.time()), self.char)
self.nextChar()
else:
print("Enter single letter.")
def nextChar(self, b=None):
""" Ploting next char from the word """
# Clearing jupyter output for new image
clear_output()
idx = self.charIdx
gaps = self.gaps
if idx < len(gaps) - 1:
# Cutting char image - for save function
self.char = self.actual[0:self.height, gaps[idx]:gaps[idx+1]]
implt(self.char, 'gray')
self.charIdx += 1
else:
self.nextImg()
def nextImg(self, b=None):
""" Getting next image from the array """
clear_output()
self.char = None
self.charIdx = 0
if self.index < len(self.boxes):
b = self.boxes[self.index]
x1, y1, x2, y2 = b
# Cutting out the word image and resizing to standard height
img = self.image[y1:y2, x1:x2]
img = resize(img, self.height, True)
implt(img, t='Index: ' + str(self.index))
self.index += 1
self.actual = imageNorm(img, self.height)
self.gaps = charSeg.segmentation(self.actual, debug=True)
return 0
else:
print("END")
return -1
# Last index
LAST_INDEX = 2
# Class cycling through text positions
cycler = Cycler(crop, bBoxes, LAST_INDEX)
# Create GUI
tSaver = Text(description="Save Char")
bNex = Button(description="Next Char")
bNexi = Button(description="Next Image")
tSaver.on_submit(cycler.save)
bNex.on_click(cycler.nextChar)
bNexi.on_click(cycler.nextImg)
VBox([tSaver, HBox([bNexi, bNex])])
### RULES FOR CREATING DATASET ###
#
# Label every possible letter, skip wrong detections
# Differentiate between uppercase and lowercase
# Use all czech letters except CH
# If there is an image without a char, put 0
#
###################################
### Space for Notes ###
#
# Create dataset paper
#
#######################
###Output
_____no_output_____ |
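###Markdown
A quick check (an added sketch, not part of the original notebook) of the accent stripping used in `Cycler.save()`: NFKD normalization decomposes an accented letter into its base letter plus a combining mark, so keeping only the first character of the result drops the accent.
###Code
import unicodedata as ud
print(ud.normalize('NFKD', 'č')[0]) # -> 'c'
print(ud.normalize('NFKD', 'ž')[0]) # -> 'z'
###Output
_____no_output_____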
Climate Exploration and App/climate_starter.ipynb | ###Markdown
Reflect Tables into SQLAlchemy ORM
###Code
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func, inspect
# create engine to hawaii.sqlite
engine = create_engine("sqlite:///../Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# View all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create inspector
inspector = inspect(engine)
# Inspect columns for Measurement
columns = inspector.get_columns('Measurement')
for column in columns:
print(column['name'],column['type'])
# Inspect columns for Station
columns = inspector.get_columns('Station')
for column in columns:
print(column['name'],column['type'])
# Create our session (link) from Python to the DB
session = Session(engine)
###Output
_____no_output_____
###Markdown
Exploratory Precipitation Analysis
###Code
# Find the most recent date in the data set.
most_recent_date = session.query(Measurement.date)[-1]
most_recent_date[0]
# Convert most_recent_date to be used for datatime
year = int(most_recent_date[0][:4])
month = int(most_recent_date[0][5:7])
day = int(most_recent_date[0][8:])
# Design a query to retrieve the last 12 months of precipitation data and plot the results.
# Starting from the most recent data point in the database.
recent_date = dt.date(year, month, day)
# Calculate the date one year from the last date in data set.
query_date_1year = recent_date - dt.timedelta(days=365)
# Perform a query to retrieve the data and precipitation scores
prcp_data = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date >= query_date_1year).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
prcp_df = pd.DataFrame(prcp_data, columns=['Date','Precipitation']).set_index('Date')
# Sort the dataframe by date
prcp_df_sort = prcp_df.sort_values(by=['Date'])
# Use Pandas Plotting with Matplotlib to plot the data
prcp_df_sort.plot(grid = True, ylabel= "Precipitation", title= f"Precipitation from {query_date_1year} to {most_recent_date[0]}", figsize = (10,5))
plt.tight_layout()
plt.savefig(f"../Images/Precipitation from {query_date_1year} to {most_recent_date[0]}.png")
plt.show()
# Use Pandas to calculate the summary statistics for the precipitation data
summary_statistics = prcp_df_sort.describe()
summary_statistics
###Output
_____no_output_____
###Markdown
Exploratory Station Analysis
###Code
# Design a query to calculate the total number stations in the dataset
total_station = session.query(Station).group_by(Station.station).count()
print(f"Total stations in the dataset: {total_station}")
# Design a query to find the most active stations (i.e. what stations have the most rows?)
# List the stations and the counts in descending order.
active_stations = session.query(Measurement.station, func.count(Measurement.station)).\
group_by(Measurement.station).\
order_by(func.count(Measurement.station).desc()).all()
active_stations
# Using the most active station id from the previous query, calculate the lowest, highest, and average temperature.
most_active_stations = active_stations[0][0]
prcp_most_active_stats = session.query(func.min(Measurement.tobs), func.max(Measurement.tobs), func.avg(Measurement.tobs)).\
filter(Measurement.station == most_active_stations).all()
print(f"Precipiation Statistics from the most active Station in Hawaii:\n\nLowest:\t {prcp_most_active_stats[0][0]}\n\
Max: \t {prcp_most_active_stats[0][1]}\nAverage: {round(prcp_most_active_stats[0][2],2)}")
# Using the most active station id
# Query the last 12 months of temperature observation data for this station
tmp_data = session.query(Measurement.date, Measurement.tobs).filter(Measurement.date >= query_date_1year).\
filter(Measurement.station == most_active_stations).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
tmp_df = pd.DataFrame(tmp_data, columns=['Date','Temperature']).set_index('Date')
# Plot the DataFrame as a histogram
tmp_df.plot(kind='hist', bins = 12, title = f"Temperature Frequency from {query_date_1year} to {most_recent_date[0]}", figsize = (10,5))
plt.xlabel('Temperature')
plt.tight_layout()
plt.savefig(f"../Images/Temperature Frequency from {query_date_1year} to {most_recent_date[0]}.png")
plt.show()
###Output
_____no_output_____
###Markdown
Close session
###Code
# Close Session
session.close()
###Output
_____no_output_____ |
extras/003-DataFrame-For-DS.ipynb | ###Markdown
DataFrames For Data Scientists 1. SparkContext() 2. Read/Write 3. Convert 4. Columns & Rows 5. DataFrame : RDD-like Operations 6. DataFrame : Action 7. DataFrame : Scientific Functions 8. DataFrame : Statistical Functions 9. DataFrame : Aggregate Functions 10. DataFrame : na 11. DataFrame : Joins, Set Operations 12. DataFrame : Tables & SQL 1. SparkContext()
###Code
import datetime
from pytz import timezone
print "Last run @%s" % (datetime.datetime.now(timezone('US/Pacific')))
from pyspark.context import SparkContext
print "Running Spark Version %s" % (sc.version)
from pyspark.conf import SparkConf
conf = SparkConf()
print conf.toDebugString()
import pyspark.sql  # needed so the fully-qualified SQLContext reference below resolves
sqlCxt = pyspark.sql.SQLContext(sc)
###Output
_____no_output_____
###Markdown
2. Read/Write
###Code
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('spark-csv/cars.csv')
df.coalesce(1).select('year', 'model').write.format('com.databricks.spark.csv').save('newcars.csv')
df.show()
df_cars = sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('car-data/car-milage.csv')
df_cars_x = sqlContext.read.load('cars_1.parquet')
df_cars_x.dtypes
df_cars.show(40)
df_cars.describe().show()
df_cars.describe(["mpg",'hp']).show()
df_cars.groupby("automatic").avg("mpg")
df_cars.na.drop('any').count()
df_cars.count()
df_cars.dtypes
df_2 = df_cars.select(df_cars.mpg.cast("double").alias('mpg'),df_cars.torque.cast("double").alias('torque'),
df_cars.automatic.cast("integer").alias('automatic'))
df_2.show(40)
df_2.dtypes
df_2.describe().show()
###Output
+-------+-----------------+-----------------+-------------------+
|summary| mpg| torque| automatic|
+-------+-----------------+-----------------+-------------------+
| count| 32| 30| 32|
| mean| 20.223125| 217.9| 0.71875|
| stddev|6.318289089312789|83.06970483918289|0.45680340939917435|
| min| 11.2| 81.0| 0|
| max| 36.5| 366.0| 1|
+-------+-----------------+-----------------+-------------------+
###Markdown
9. DataFrame : Aggregate Functions
###Code
df_2.groupby("automatic").avg("mpg","torque").show()
df_2.groupBy().avg("mpg","torque").show()
df_2.agg({"*":"count"}).show()
import pyspark.sql.functions as F
df_2.agg(F.min(df_2.mpg)).show()
import pyspark.sql.functions as F
df_2.agg(F.mean(df_2.mpg)).show()
gdf_2 = df_2.groupBy("automatic")
gdf_2.agg({'mpg':'min'}).collect()
gdf_2.agg({'mpg':'min'}).show()
df_cars_1 = df_cars.select(df_cars.mpg.cast("double").alias('mpg'),
df_cars.displacement.cast("double").alias('displacement'),
df_cars.hp.cast("integer").alias('hp'),
df_cars.torque.cast("integer").alias('torque'),
df_cars.CRatio.cast("float").alias('CRatio'),
df_cars.RARatio.cast("float").alias('RARatio'),
df_cars.CarbBarrells.cast("integer").alias('CarbBarrells'),
df_cars.NoOfSpeed.cast("integer").alias('NoOfSpeed'),
df_cars.length.cast("float").alias('length'),
df_cars.width.cast("float").alias('width'),
df_cars.weight.cast("integer").alias('weight'),
df_cars.automatic.cast("integer").alias('automatic'))
gdf_3 = df_cars_1.groupBy("automatic")
gdf_3.agg({'mpg':'mean'}).show()
df_cars_1.avg("mpg","torque").show()
df_cars_1.groupBy().avg("mpg","torque").show()
df_cars_1.groupby("automatic").avg("mpg","torque").show()
df_cars_1.groupby("automatic").avg("mpg","torque","hp","weight").show()
df_cars_1.printSchema()
df_cars_1.show(5)
df_cars_1.describe().show()
df_cars_1.groupBy().agg({"mpg":"mean"}).show()
df_cars_1.show(40)
###Output
+-----+------------+---+------+------+-------+------------+---------+------+-----+------+---------+
| mpg|displacement| hp|torque|CRatio|RARatio|CarbBarrells|NoOfSpeed|length|width|weight|automatic|
+-----+------------+---+------+------+-------+------------+---------+------+-----+------+---------+
| 18.9| 350.0|165| 260| 8.0| 2.56| 4| 3| 200.3| 69.9| 3910| 1|
| 17.0| 350.0|170| 275| 8.5| 2.56| 4| 3| 199.6| 72.9| 3860| 1|
| 20.0| 250.0|105| 185| 8.25| 2.73| 1| 3| 196.7| 72.2| 3510| 1|
|18.25| 351.0|143| 255| 8.0| 3.0| 2| 3| 199.9| 74.0| 3890| 1|
|20.07| 225.0| 95| 170| 8.4| 2.76| 1| 3| 194.1| 71.8| 3365| 0|
| 11.2| 440.0|215| 330| 8.2| 2.88| 4| 3| 184.5| 69.0| 4215| 1|
|22.12| 231.0|110| 175| 8.0| 2.56| 2| 3| 179.3| 65.4| 3020| 1|
|21.47| 262.0|110| 200| 8.5| 2.56| 2| 3| 179.3| 65.4| 3180| 1|
| 34.7| 89.7| 70| 81| 8.2| 3.9| 2| 4| 155.7| 64.0| 1905| 0|
| 30.4| 96.9| 75| 83| 9.0| 4.3| 2| 5| 165.2| 65.0| 2320| 0|
| 16.5| 350.0|155| 250| 8.5| 3.08| 4| 3| 195.4| 74.4| 3885| 1|
| 36.5| 85.3| 80| 83| 8.5| 3.89| 2| 4| 160.6| 62.2| 2009| 0|
| 21.5| 171.0|109| 146| 8.2| 3.22| 2| 4| 170.4| 66.9| 2655| 0|
| 19.7| 258.0|110| 195| 8.0| 3.08| 1| 3| 171.5| 77.0| 3375| 1|
| 20.3| 140.0| 83| 109| 8.4| 3.4| 2| 4| 168.8| 69.4| 2700| 0|
| 17.8| 302.0|129| 220| 8.0| 3.0| 2| 3| 199.9| 74.0| 3890| 1|
|14.39| 500.0|190| 360| 8.5| 2.73| 4| 3| 224.1| 79.8| 5290| 1|
|14.89| 440.0|215| 330| 8.2| 2.71| 4| 3| 231.0| 79.7| 5185| 1|
| 17.8| 350.0|155| 250| 8.5| 3.08| 4| 3| 196.7| 72.2| 3910| 1|
|16.41| 318.0|145| 255| 8.5| 2.45| 2| 3| 197.6| 71.0| 3660| 1|
|23.54| 231.0|110| 175| 8.0| 2.56| 2| 3| 179.3| 65.4| 3050| 1|
|21.47| 360.0|180| 290| 8.4| 2.45| 2| 3| 214.2| 76.3| 4250| 1|
|16.59| 400.0|185| null| 7.6| 3.08| 4| 3| 196.0| 73.0| 3850| 1|
| 31.9| 96.9| 75| 83| 9.0| 4.3| 2| 5| 165.2| 61.8| 2275| 0|
| 29.4| 140.0| 86| null| 8.0| 2.92| 2| 4| 176.4| 65.4| 2150| 0|
|13.27| 460.0|223| 366| 8.0| 3.0| 4| 3| 228.0| 79.8| 5430| 1|
| 23.9| 133.6| 96| 120| 8.4| 3.91| 2| 5| 171.5| 63.4| 2535| 0|
|19.73| 318.0|140| 255| 8.5| 2.71| 2| 3| 215.3| 76.3| 4370| 1|
| 13.9| 351.0|148| 243| 8.0| 3.25| 2| 3| 215.5| 78.5| 4540| 1|
|13.27| 351.0|148| 243| 8.0| 3.26| 2| 3| 216.1| 78.5| 4715| 1|
|13.77| 360.0|195| 295| 8.25| 3.15| 4| 3| 209.3| 77.4| 4215| 1|
| 16.5| 360.0|165| 255| 8.5| 2.73| 4| 3| 185.2| 69.0| 3660| 1|
+-----+------------+---+------+------+-------+------------+---------+------+-----+------+---------+
###Markdown
8. DataFrame : Statistical Functions
###Code
df_cars_1.corr('hp','weight')
df_cars_1.corr('RARatio','width')
df_cars_1.crosstab('automatic','NoOfSpeed').show()
df_cars_1.crosstab('NoOfSpeed','CarbBarrells').show()
df_cars_1.crosstab('automatic','CarbBarrells').show()
###Output
+----------------------+---+---+---+
|automatic_CarbBarrells| 1| 2| 4|
+----------------------+---+---+---+
| 1| 2| 10| 11|
| 0| 1| 8| 0|
+----------------------+---+---+---+
###Markdown
10. DataFrame : na
###Code
# We can see if a column has null values
df_cars_1.select(df_cars_1.torque.isNull()).show()
# We can filter null and non null rows
df_cars_1.filter(df_cars_1.torque.isNull()).show(40) # You can also use isNotNull
df_cars_1.na.drop().count()
df_cars_1.fillna(9999).show(50)
# This is not what we will do normally. Just to show the effect of fillna
# you can use df_cars_1.na.fill(9999)
# Let us try the interesting when syntax on the HP column
# 0-100=1,101-200=2,201-300=3,others=4
df_cars_1.select(df_cars_1.hp, F.when(df_cars_1.hp <= 100, 1).when(df_cars_1.hp <= 200, 2)
.when(df_cars_1.hp <= 300, 3).otherwise(4).alias("hpCode")).show(40)
df_cars_1.dtypes
df_cars_1.groupBy('CarbBarrells').count().show()
# If file exists, will give error
# java.lang.RuntimeException: path file:.. /cars_1.parquet already exists.
#
df_cars_1.repartition(1).write.save("cars_1.parquet", format="parquet")
# No error even if the file exists
df_cars_1.repartition(1).write.mode("overwrite").format("parquet").save("cars_1.parquet")
# Use repartition if you want all data in one (or more) file
# Appends to existing file
df_cars_1.repartition(1).write.mode("append").format("parquet").save("cars_1_a.parquet")
# Even with repartition, you will see more files as it is append
df_append = sqlContext.load("cars_1_a.parquet")
# sqlContext.load is deprecated
df_append.count()
#eventhough parquet is the default format, explicit format("parquet") is clearer
df_append = sqlContext.read.format("parquet").load("cars_1_a.parquet")
df_append.count()
# for reading parquet files read.parquet is more elegant
df_append = sqlContext.read.parquet("cars_1_a.parquet")
df_append.count()
# Let us read another file
df_orders = sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('NW/NW-Orders.csv')
df_orders.head()
df_orders.dtypes
from pyspark.sql.types import StringType, IntegerType,DateType
getYear = F.udf(lambda s: s[-2:], StringType()) #IntegerType())
from datetime import datetime
convertToDate = F.udf(lambda s: datetime.strptime(s, '%m/%d/%y'),DateType())
# You could register the function for sql as follows. We won't use this here
sqlContext.registerFunction("getYear", lambda s: s[-2:])
# let us add a year column
df_orders.select(df_orders['OrderID'],
df_orders['CustomerID'],
df_orders['EmpliyeeID'],
df_orders['OrderDate'],
df_orders['ShipCuntry'].alias('ShipCountry'),
getYear(df_orders['OrderDate'])).show()
# let us add a year column
# Need alias
df_orders_1 = df_orders.select(df_orders['OrderID'],
df_orders['CustomerID'],
df_orders['EmpliyeeID'],
convertToDate(df_orders['OrderDate']).alias('OrderDate'),
df_orders['ShipCuntry'].alias('ShipCountry'),
getYear(df_orders['OrderDate']).alias('Year'))
# df_orders_1 = df_orders_x.withColumn('Year',getYear(df_orders_x['OrderDate'])) # doesn't work. Gives error
df_orders_1.show(1)
df_orders_1.dtypes
df_orders_1.show()
df_orders_1.where(df_orders_1['ShipCountry'] == 'France').show()
df_orders_1.groupBy("CustomerID","Year").count().orderBy('count',ascending=False).show()
df_orders_1.groupBy("CustomerID","Year").count().orderBy('count',ascending=False).show()
# save by partition (year)
df_orders_1.write.mode("overwrite").partitionBy("Year").format("parquet").save("orders_1.parquet")
# load defaults to parquet
df_orders_2 = sqlContext.read.parquet("orders_1.parquet")
df_orders_2.explain(True)
df_orders_3 = df_orders_2.filter(df_orders_2.Year=='96')
df_orders_3.explain(True)
df_orders_3.count()
df_orders_3.explain(True)
df_orders_2.count()
df_orders_1.printSchema()
###Output
root
|-- OrderID: string (nullable = true)
|-- CustomerID: string (nullable = true)
|-- EmpliyeeID: string (nullable = true)
|-- OrderDate: date (nullable = true)
|-- ShipCountry: string (nullable = true)
|-- Year: string (nullable = true)
###Markdown
7. DataFrame : Scientific Functions
###Code
# import pyspark.sql.Row
df = sc.parallelize([10,100,1000]).map(lambda x: {"num":x}).toDF()
df.show()
import pyspark.sql.functions as F
df.select(F.log(df.num)).show()
df.select(F.log10(df.num)).show()
df = sc.parallelize([0,10,100,1000]).map(lambda x: {"num":x}).toDF()
df.show()
df.select(F.log(df.num)).show()
df.select(F.log1p(df.num)).show()
df_cars_1.select(df_cars_1['CarbBarrells'], F.sqrt(df_cars_1['mpg'])).show()
df = sc.parallelize([(3,4),(5,12),(7,24),(9,40),(11,60),(13,84)]).map(lambda x: {"a":x[0],"b":x[1]}).toDF()
df.show()
df.select(df['a'],df['b'],F.hypot(df['a'],df['b']).alias('hypot')).show()
###Output
+---+---+-----+
| a| b|hypot|
+---+---+-----+
| 3| 4| 5.0|
| 5| 12| 13.0|
| 7| 24| 25.0|
| 9| 40| 41.0|
| 11| 60| 61.0|
| 13| 84| 85.0|
+---+---+-----+
###Markdown
11. DataFrame : Joins, Set Operations
###Code
df_a = sc.parallelize( [{"X1":"A","X2":1},{"X1":"B","X2":2},{"X1":"C","X2":3}] ).toDF()
df_b = sc.parallelize( [{"X1":"A","X3":True},{"X1":"B","X3":False},{"X1":"D","X3":True}] ).toDF()
df_a.show()
df_b.show()
df_a.join(df_b, df_a['X1'] == df_b['X1'], 'inner')\
.select(df_a['X1'].alias('X1-a'),df_b['X1'].alias('X1-b'),'X2','X3').show()
df_a.join(df_b, df_a['X1'] == df_b['X1'], 'outer')\
.select(df_a['X1'].alias('X1-a'),df_b['X1'].alias('X1-b'),'X2','X3').show() # same as 'full' or 'fullouter'
# Spark doesn't merge the key columns and so need to alias the column names to distinguih between the columns
df_a.join(df_b, df_a['X1'] == df_b['X1'], 'left_outer')\
.select(df_a['X1'].alias('X1-a'),df_b['X1'].alias('X1-b'),'X2','X3').show() # same as 'left'
df_a.join(df_b, df_a['X1'] == df_b['X1'], 'right_outer')\
.select(df_a['X1'].alias('X1-a'),df_b['X1'].alias('X1-b'),'X2','X3').show() # same as 'right'
df_a.join(df_b, df_a['X1'] == df_b['X1'], 'right')\
.select(df_a['X1'].alias('X1-a'),df_b['X1'].alias('X1-b'),'X2','X3').show()
df_a.join(df_b, df_a['X1'] == df_b['X1'], 'full')\
.select(df_a['X1'].alias('X1-a'),df_b['X1'].alias('X1-b'),'X2','X3').show()# same as 'fullouter'
df_a.join(df_b, df_a['X1'] == df_b['X1'], 'leftsemi').show() # same as semijoin
#.select(df_a['X1'].alias('X1-a'),df_b['X1'].alias('X1-b'),'X2','X3').show()
#anti-join = df.subtract('leftsemi')
df_a.subtract(df_a.join(df_b, df_a['X1'] == df_b['X1'], 'leftsemi')).show()
#.select(df_a['X1'].alias('X1-a'),df_b['X1'].alias('X1-b'),'X2','X3').show()
c = [{"X1":"A","X2":1},{"X1":"B","X2":2},{"X1":"C","X2":3}]
d = [{"X1":"A","X2":1},{"X1":"B","X2":2},{"X1":"D","X2":4}]
df_c = sc.parallelize(c).toDF()
df_d = sc.parallelize(d).toDF()
df_c.show()
df_d.show()
df_c.intersect(df_d).show()
df_c.subtract(df_d).show()
df_d.subtract(df_c).show()
e = [{"X1":"A","X2":1},{"X1":"B","X2":2},{"X1":"C","X2":3}]
f = [{"X1":"D","X2":4},{"X1":"E","X2":5},{"X1":"F","X2":6}]
df_e = sc.parallelize(e).toDF()
df_f = sc.parallelize(f).toDF()
df_e.unionAll(df_f).show()
# df_a.join(df_b, df_a['X1'] == df_b['X1'], 'semijoin')\
# .select(df_a['X1'].alias('X1-a'),df_b['X1'].alias('X1-b'),'X2','X3').show()
# Gives error Unsupported join type 'semijoin'.
# Supported join types include: 'inner', 'outer', 'full', 'fullouter', 'leftouter', 'left', 'rightouter',
# 'right', 'leftsemi'
###Output
_____no_output_____
###Markdown
12. DataFrame : Tables & SQL
###Code
# SQL on tables
df_a.registerTempTable("tableA")
sqlContext.sql("select * from tableA").show()
df_b.registerTempTable("tableB")
sqlContext.sql("select * from tableB").show()
sqlContext.sql("select * from tableA JOIN tableB on tableA.X1 = tableB.X1").show()
sqlContext.sql("select * from tableA LEFT JOIN tableB on tableA.X1 = tableB.X1").show()
sqlContext.sql("select * from tableA FULL JOIN tableB on tableA.X1 = tableB.X1").show()
###Output
+----+----+----+-----+
| X1| X2| X1| X3|
+----+----+----+-----+
| A| 1| A| true|
| B| 2| B|false|
| C| 3|null| null|
|null|null| D| true|
+----+----+----+-----+
|
Toronto_Neighborhood_Clustering.ipynb | ###Markdown
Note: this notebook contains the week 3 project assignment of the Coursera course 'Applied Data Science Capstone'. 1. Scraping the Dataset of the Toronto Neighborhoods The dataset of the Toronto neighborhoods is not available in CSV form for manipulation, so its data has to be scraped from Wikipedia; here is a step-by-step guide to scraping it. The Beautiful Soup package is used for the scraping. Before getting started, let's import the following libraries
###Code
from bs4 import BeautifulSoup # for scraping data from wikipedia
import requests #calling request to url
import pandas as pd # handling data frame
import numpy as np
import json
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
import geocoder # getting latitude and longitude
from geopy.geocoders import Nominatim
from pandas.io.json import json_normalize
import matplotlib.cm as cm # ploting graphs
import matplotlib.colors as colors
from sklearn.cluster import KMeans #for implementing kmeans algorithm
import folium # ploting map
###Output
_____no_output_____
###Markdown
Requesting and parsing the Wikipedia page 'List of postal codes of Canada: M'
###Code
url='https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M'
source=requests.get(url).text
soup=BeautifulSoup(source,'lxml')
###Output
_____no_output_____
###Markdown
If you carefully inspect the HTML, all the table contents we intend to extract (i.e. the postal code, borough and neighborhood of Toronto) are under the class 'wikitable'.
###Code
my_table=soup.find('table',{'class':'wikitable'}) # find the table whose class contains the required data
###Output
_____no_output_____
###Markdown
After examining the page we can see that the required data lies under 'tbody' and its 'tr' rows, inside the 'td' elements. We create a list for storing our data.
###Code
data=[]
table_body=my_table.find('tbody')
sub_table=table_body.find_all('tr')
for row in sub_table:
sub=row.find_all('td') # data resides in the element of td
sub=[ele.text.strip() for ele in sub]
data.append(sub)
###Output
_____no_output_____
###Markdown
Transform the data into a Pandas DataFrame From the scraped rows we extract the required data (postal code, borough and neighborhood) and convert it into a Pandas DataFrame so we can work with it in Python.
###Code
arr=np.array(data)
arr_refine=[x for x in arr if x] #removing the empty list
postal_code=[index[0] for index in arr_refine] #getting the postalcode
borough=[index[1] for index in arr_refine] #getting the borough
neighbor=[index[2] for index in arr_refine] #getting the neighorhood
df = pd.DataFrame(list(zip(postal_code,borough,neighbor)), #creating a table of following columns
columns =['Postal Code','Borough','Neighborhood'])
df_refine=df[df.Borough !='Not assigned'].reset_index(drop=True) # as required ignore the Not assigned cells
toronto_data=df_refine.apply(lambda x: x.str.replace('/',',')) #as required add comma
toronto_data.head()
###Output
_____no_output_____
###Markdown
For calculating the number of rows in our data
###Code
toronto_data.shape[0]
###Output
_____no_output_____
###Markdown
2. Adding the Location(Latitude & Longitude) in Dataset Here the csv file data set location is used you can get it from Link :http://cocl.us/Geospatial_data
###Code
location=pd.read_csv('Geospatial_Coordinates.csv')
toronto_data_location=pd.merge(toronto_data,location, on=['Postal Code']) # Note: when merging, the key column name must be the same in both frames
toronto_data_location
###Output
_____no_output_____
###Markdown
3. Implement the Clustering In this section we explore the neighborhoods of Toronto using the Foursquare API and implement the k-means algorithm for clustering. Setting up the Foursquare API
###Code
CLIENT_ID='R2R05S3T5J04JUU4YNXM5KEELQIH23P3X4131WIP2FPSXWBO'
CLIENT_SECRET='2JHRTJ4IG2AFO4HS3KJLZCAHV5EGIZF1ZBVT55T33IBQOCVG'
VERSION='20180604'
radius=500
LIMIT=50
###Output
_____no_output_____
###Markdown
Defining the latitude and longitude for api setup
###Code
address = 'Toronto'
geolocator = Nominatim(user_agent="to_explorer")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('Latitude {},Longitude{}'.format(latitude,longitude))
n_name=toronto_data_location.loc[0,'Neighborhood']
n_lat=toronto_data_location.loc[0,'Latitude']
n_lng=toronto_data_location.loc[0,'Longitude']
###Output
_____no_output_____
###Markdown
Here we create a function that makes the call requests to the API and transforms the result data into a clean pandas DataFrame
###Code
def NearVenues(n_name, n_lat, n_lng):
venues_list=[]
for name, lat, lng in zip(n_name, n_lat, n_lng):
print(name)
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
results = requests.get(url).json()["response"]['groups'][0]['items']
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
###Output
_____no_output_____
###Markdown
Note that this cell will take some time to run: it prints each neighborhood name as it passes the latitude and longitude of every borough to the API. Calling the NearVenues function
###Code
toronto_venues = NearVenues(n_name=toronto_data_location['Neighborhood'],
n_lat=toronto_data_location['Latitude'],
n_lng=toronto_data_location['Longitude']
)
toronto_venues.head()
###Output
_____no_output_____
###Markdown
Analyzing the neighborhood venues
###Code
# To work with the data numerically, get_dummies converts the categorical venue data into one-hot (0/1) columns
toronto_refine = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
toronto_refine['Neighborhood'] = toronto_venues['Neighborhood']
fixed_columns = [toronto_refine.columns[-1]] + list(toronto_refine.columns[:-1])
toronto_refine = toronto_refine[fixed_columns]
toronto_refine.head()
toronto_group = toronto_refine.groupby('Neighborhood').mean().reset_index() # taking out mean
###Output
_____no_output_____
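###Markdown
The next cell relies on a helper called `return_most_common_venues` that is not defined anywhere in this excerpt. A minimal sketch of such a helper, assuming it takes one row of `toronto_group` (the neighborhood name followed by the mean category frequencies) and returns the names of the top `num_top_venues` categories:
###Code
def return_most_common_venues(row, num_top_venues):
    # drop the 'Neighborhood' label and keep only the category frequencies
    row_categories = row.iloc[1:]
    # sort the categories by frequency, highest first
    row_categories_sorted = row_categories.sort_values(ascending=False)
    # return the names of the top categories
    return row_categories_sorted.index.values[0:num_top_venues]
###Output
_____no_output_____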
###Markdown
Here we create a new data frame that displays the top 10 most common venues for each neighborhood
###Code
top_venues = 10
indicators = ['st', 'nd', 'rd']
columns = ['Neighborhood']
for ind in np.arange(top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = toronto_group['Neighborhood']
for ind in np.arange(toronto_group.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_group.iloc[ind, :], top_venues)
neighborhoods_venues_sorted.head()
###Output
_____no_output_____
###Markdown
Clustering the Neighborhoods We use the scikit-learn KMeans implementation to run the k-means algorithm
###Code
kclusters = 5
manhattan_grouped_clustering = toronto_group.drop('Neighborhood', 1)
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(manhattan_grouped_clustering)
kmeans.labels_[0:10]
###Output
_____no_output_____
###Markdown
Here we create a new data frame that adds the cluster label and the top 10 venues for each neighborhood
###Code
neighborhoods_venues_sorted.insert(0,'Cluster Labels', kmeans.labels_, allow_duplicates = False)
toronto_merged = toronto_data_location
toronto_merged = toronto_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'), on='Neighborhood')
toronto_merged.head()
###Output
_____no_output_____
###Markdown
Visualizing the resulting clusters on a map using the matplotlib and folium libraries
###Code
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_merged['Latitude'], toronto_merged['Longitude'], toronto_merged['Neighborhood'], toronto_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill=True,
fill_color=rainbow[cluster-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
###Output
_____no_output_____ |
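###Markdown
A common follow-up step, not included in this excerpt, is to inspect the venue profile of each cluster; a minimal sketch using the `toronto_merged` dataframe built above (cluster label 0 shown):
###Code
# list the boroughs, neighborhoods and top venues that fall into one cluster
cluster_0 = toronto_merged[toronto_merged['Cluster Labels'] == 0]
cluster_0[['Borough', 'Neighborhood',
           '1st Most Common Venue', '2nd Most Common Venue', '3rd Most Common Venue']].head()
###Output
_____no_output_____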
examples/02_creating_community_datasets.ipynb | ###Markdown
Working with `Community`s the `Community` object is geosnap's central data structure. A `Community` is a dataset that stores information about a collection of neighborhoods over several time periods, including each neighborhood's physical, socioeconomic, and demographic attributes and its demarcated boundaries. Under the hood, each `Community` is simply a long-form geopandas geodataframe with some associated metadata. If you're working with built-in data, you instantiate a `Community` by choosing the constructor for your dataset and passing either a boundary (geodataframe) or a selection filter that defines the study area. The selection filter can be either a `GeoDataFrame` boundary or a set of [FIPS](https://www.policymap.com/2012/08/tips-on-fips-a-quick-guide-to-geographic-place-codes-part-iii/) codes. Boundary queries are often more convenient but they are more expensive to compute and will take longer to construct. When constructing `Community`s from fips codes, the constructor has arguments for state, county, msa, or list of any arbitrary fips codes. If more than one of these arguments is passed, geosnap will use the union. This means that each level of the hierarchy is available for convenience but you are free to mix and match msas, states, counties, and even single tracts to create your study region of choice If you're working with your own data, you instantiate a `Community` by passing a list of geodataframes (or a single long-form).
###Code
from geosnap import Community
###Output
_____no_output_____
###Markdown
Create a `Community` from built-in census data The quickest and easiest method for getting started is to instantiate a Community using the built-in census data. To do so, you use the `Community.from_census` constructor:
###Code
# this will create a new community using data from Washington DC (which is fips code 11)
dc = Community.from_census(state_fips='11')
###Output
_____no_output_____
###Markdown
Note that when using `Community.from_census`, the resulting community has *unharmonized* tract boundaries, meaning that the tracts are different for each decade To access the underlying data from a `Community`, simply call its `gdf` attribute which returns a geodataframe
###Code
dc.gdf.head()
# create a little helper function for plotting a time-series
import matplotlib.pyplot as plt
def plot(community, column):
fig, axs = plt.subplots(1,3, figsize=(15,5))
axs=axs.flatten()
community.gdf[community.gdf.year==1990].dropna(subset=[column]).plot(column=column, scheme='quantiles', cmap='Blues', k=6, ax=axs[0])
axs[0].axis('off')
axs[0].set_title('1990')
community.gdf[community.gdf.year==2000].dropna(subset=[column]).plot(column=column, scheme='quantiles', cmap='Blues', k=6, ax=axs[1])
axs[1].axis('off')
axs[1].set_title('2000')
community.gdf[community.gdf.year==2010].dropna(subset=[column]).plot(column=column, scheme='quantiles', cmap='Blues', k=6, ax=axs[2])
axs[2].axis('off')
axs[2].set_title('2010')
plot(dc, 'p_nonhisp_white_persons')
###Output
_____no_output_____
###Markdown
Create a `Community` from Longitudinal Employment-Household Dynamics You can also create a `Community` from block-level LEHD census data. Unlike the decennial census, LEHD data are annual and contain information about the "workplace area characteristics" ("wac") and "residence area characteristics" ("rac") of employees. "wac" datasets contain information about where employees work, while "rac" datasets contain information about where they live. Apart from information about the race, skill level, and income of employees in each census block, LEHD data also count the number of workers in each NAICS 2-digit industry category.If you use the `Community.from_lodes` constructor, you can collect data from 2000 to 2015
###Code
delaware = Community.from_lodes(state_fips='10', years=[2014, 2015])
delaware.gdf.head()
delaware.gdf[delaware.gdf.year==2015].plot(column='total_employees', scheme='quantiles', k=9)
###Output
_____no_output_____
###Markdown
Create a `Community` from a longitudinal database To instantiate a `Community` from a longitudinal database, you must first register the database with geosnap using either `store_ltdb` or `store_ncdb`. Once the data are available in datasets, you can call `Community.from_ltdb` and `Community.from_ncdb` LTDB using fips codes I don't know the Riverside MSA fips code by heart, so I'll slice through the `msas` dataframe in the data store to find it
###Code
from geosnap import datasets
datasets.msas()[datasets.msas().name.str.startswith('Riverside')]
riverside = Community.from_ltdb(msa_fips='40140')
plot(riverside, 'p_poverty_rate')
###Output
_____no_output_____
###Markdown
Instead of passing a fips code, I could use the `boundary` argument to pass the riverside MSA as a geodataframe. This is more computationally expensive because it requires geometric operations, but is more flexible because it allows you to create communities that don't nest into fips hierarchies (like zip codes, census designated places, or non-US data) NCDB Using a boundary
###Code
# grab the boundary for Sacramento from libpysal's built-in examples
import geopandas as gpd
from libpysal.examples import load_example
sac = load_example("Sacramento1")
sac = gpd.read_file(sac.get_path("sacramentot2.shp"))
sacramento = Community.from_ncdb(boundary=sac)
plot(sacramento, 'median_household_income')
###Output
_____no_output_____
###Markdown
Create a `Community` from a list of geodataframes If you are working outside the US, or if you have data that aren't included in geosnap (like census blocks or zip codes) then you can still create a community using the `Community.from_geodataframes` constructor, which allows you to pass a list of geodataframes that will be concatenated into the single long-form gdf structure that geosnap's analytics expect.This constructor is typically used in cases where a researcher has several shapefiles for a study area, each of which pertains to a different time period. In such a case, the user would read each shapefile into a geodataframe and ensure that each has a "time" column that will differentiate each time period from one another in the long-form structure (e.g. if each shapefile is a different decade, then the 1990 shapefile should have a column called "year" in which every observation has a value of 1990). Then, these geodataframes simply need to be passed in a list to the `from_geodataframes` constructor Here, I'll use `cenpy` to grab population data from two different ACS vintages and combine them into a single community
###Code
from cenpy.products import ACS
chi13 = ACS(2013).from_place('chicago', variables='B00001_001E')
chi13['year'] = 2013
chi17 = ACS(2017).from_place('chicago', variables='B00001_001E')
chi17['year'] = 2017
chicago = Community.from_geodataframes([chi13, chi17])
fig, axs = plt.subplots(1,2, figsize=(12,5))
chicago.gdf[chicago.gdf.year==2013].dropna(subset=['B00001_001E']).plot(column='B00001_001E', cmap='Greens', scheme='quantiles', k=6, ax=axs[0])
axs[0].axis('off')
axs[0].set_title('2013')
chicago.gdf[chicago.gdf.year==2017].dropna(subset=['B00001_001E']).plot(column='B00001_001E', cmap='Greens', scheme='quantiles', k=6, ax=axs[1])
axs[1].axis('off')
axs[1].set_title('2017')
###Output
_____no_output_____ |
notebooks/S10C_Monte_Carlo_Integration.ipynb | ###Markdown
Numerical Evaluation of Integrals
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Integration problems are common in statistics whenever we are dealing with continuous distributions. For example the expectation of a function is an integration problem$$E[f(x)] = \int{f(x) \, p(x) \, dx}$$In Bayesian statistics, we need to solve the integration problem for the marginal likelihood or evidence$$p(X \mid \alpha) = \int{p(X \mid \theta) \, p(\theta \mid \alpha) d\theta}$$where $\alpha$ is a hyperparameter and $p(X \mid \alpha)$ appears in the denominator of Bayes theorem$$p(\theta | X) = \frac{p(X \mid \theta) \, p(\theta \mid \alpha)}{p(X \mid \alpha)}$$In general, there is no closed form solution to these integrals, and we have to approximate them numerically. The first step is to check if there is some **reparameterization** that will simplify the problem. Then, the general approaches to solving integration problems are1. Numerical quadrature2. Importance sampling, adaptive importance sampling and variance reduction techniques (Monte Carlo swindles)3. Markov Chain Monte Carlo4. Asymptotic approximations (Laplace method and its modern version in variational inference)This lecture will review the concepts for quadrature and Monte Carlo integration. Quadrature----You may recall from Calculus that integrals can be numerically evaluated using quadrature methods such as Trapezoid and Simpson's's rules. This is easy to do in Python, but has the drawback of the complexity growing as $O(n^d)$ where $d$ is the dimensionality of the data, and hence infeasible once $d$ grows beyond a modest number. Integrating functions
###Code
from scipy.integrate import quad
def f(x):
return x * np.cos(71*x) + np.sin(13*x)
x = np.linspace(0, 1, 100)
plt.plot(x, f(x))
pass
###Output
_____no_output_____
###Markdown
Exact solution
###Code
from sympy import sin, cos, symbols, integrate
x = symbols('x')
integrate(x * cos(71*x) + sin(13*x), (x, 0,1)).evalf(6)
###Output
_____no_output_____
###Markdown
Using quadrature
###Code
y, err = quad(f, 0, 1.0)
y
###Output
_____no_output_____
###Markdown
Multiple integrationFollowing the `scipy.integrate` [documentation](http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html), we integrate$$I=\int_{y=0}^{1/2}\int_{x=0}^{1-2y} x y \, dx\, dy$$
###Code
x, y = symbols('x y')
integrate(x*y, (x, 0, 1-2*y), (y, 0, 0.5))
from scipy.integrate import nquad
def f(x, y):
return x*y
def bounds_y():
return [0, 0.5]
def bounds_x(y):
return [0, 1-2*y]
y, err = nquad(f, [bounds_x, bounds_y])
y
###Output
_____no_output_____
###Markdown
Monte Carlo integrationThe basic idea of Monte Carlo integration is very simple and only requires elementary statistics. Suppose we want to find the value of $$I = \int_a^b f(x) dx$$in some region with volume $V$. Monte Carlo integration estimates this integral by estimating the fraction of random points that fall below $f(x)$ multiplied by $V$. In a statistical context, we use Monte Carlo integration to estimate the expectation$$E[g(X)] = \int_X g(x) p(x) dx$$with$$\bar{g_n} = \frac{1}{n} \sum_{i=1}^n g(x_i)$$where $x_i \sim p$ is a draw from the density $p$.We can estimate the Monte Carlo variance of the approximation as$$v_n = \frac{1}{n^2} \sum_{i=1}^n (g(x_i) - \bar{g_n})^2$$Also, from the Central Limit Theorem,$$\frac{\bar{g_n} - E[g(X)]}{\sqrt{v_n}} \sim \mathcal{N}(0, 1)$$The error of Monte Carlo integration decreases as $\mathcal{O}(n^{-1/2})$ and is independent of the dimensionality. Hence Monte Carlo integration generally beats numerical integration for moderate- and high-dimensional integration, since the cost of numerical integration (quadrature) grows as $\mathcal{O}(n^{d})$ with the dimension $d$. Even for low dimensional problems, Monte Carlo integration may have an advantage when the volume to be integrated is concentrated in a very small region and we can use information from the distribution to draw samples more often in the region of importance.An elementary, readable description of Monte Carlo integration and variance reduction techniques can be found [here](https://www.cs.dartmouth.edu/~wjarosz/publications/dissertation/appendixA.pdf). Intuition behind Monte Carlo integration We want to find some integral $$I = \int{f(x)} \, dx$$Consider the expectation of a function $g(x)$ with respect to some distribution $p(x)$. By definition, we have$$E[g(x)] = \int{g(x) \, p(x) \, dx}$$If we choose $g(x) = f(x)/p(x)$, then we have$$\begin{align}E[g(x)] &= \int{\frac{f(x)}{p(x)} \, p(x) \, dx} \\&= \int{f(x) dx} \\&= I\end{align}$$By the law of large numbers, the average converges on the expectation, so we have$$I \approx \bar{g_n} = \frac{1}{n} \sum_{i=1}^n g(x_i)$$If $f(x)$ is a proper integral (i.e. bounded), and $p(x)$ is the uniform distribution over the region of integration, then $g(x)$ is just $f(x)$ scaled by the volume of the region (on the unit interval, $g(x) = f(x)$), and this is known as ordinary Monte Carlo. If the integral of $f(x)$ is improper, then we need to use another distribution with the same support as $f(x)$.
###Code
from scipy import stats
x = np.linspace(-3,3,100)
dist = stats.norm(0,1)
a = -2
b = 0
plt.plot(x, dist.pdf(x))
plt.fill_between(np.linspace(a,b,100), dist.pdf(np.linspace(a,b,100)), alpha=0.5)
plt.text(b+0.1, 0.1, 'p=%.4f' % (dist.cdf(b) - dist.cdf(a)), fontsize=14)
pass
###Output
_____no_output_____
###Markdown
Using quadrature
###Code
y, err = quad(dist.pdf, a, b)
y
###Output
_____no_output_____
###Markdown
Simple Monte Carlo integration If we can sample directly from the target distribution $N(0,1)$
###Code
n = 10000
x = dist.rvs(n)
np.sum((a < x) & (x < b))/n
###Output
_____no_output_____
###Markdown
If we cannot sample directly from the target distribution $N(0,1)$, we can still evaluate it at any point. Recall that $g(x) = \frac{f(x)}{p(x)}$. Since $p(x)$ is $U(a, b)$, $p(x) = \frac{1}{b-a}$. So we want to calculate$$\frac{1}{n} \sum_{i=1}^n (b-a) f(x_i)$$
###Code
n = 10000
x = np.random.uniform(a, b, n)
np.mean((b-a)*dist.pdf(x))
###Output
_____no_output_____
###Markdown
Intuition for error rateWe will just work this out for a proper integral $f(x)$ defined in the unit cube and bounded by $|f(x)| \le 1$. Draw a random uniform vector $x$ in the unit cube. Then$$\begin{align}E[f(x_i)] &= \int{f(x) p(x) dx} = I \\\text{Var}[f(x_i)] &= \int{(f(x_i) - I )^2 p(x) \, dx} \\&= \int{f(x)^2 \, p(x) \, dx} - 2I \int{f(x) \, p(x) \, dx} + I^2 \int{p(x) \, dx} \\&= \int{f(x)^2 \, p(x) \, dx} - I^2 \\& \le \int{f(x)^2 \, p(x) \, dx} \\& \le \int{p(x) \, dx} = 1\end{align}$$Now consider summing over many such IID draws $x_1, x_2, \ldots, x_n$, with $S_n = f(x_1) + f(x_2) + \cdots + f(x_n)$. We have$$\begin{align}E[S_n] &= nI \\\text{Var}[S_n] & \le n\end{align}$$and as expected, we see that $I \approx S_n/n$. From Chebyshev's inequality,$$\begin{align}P \left( \left| \frac{S_n}{n} - I \right| \ge \epsilon \right) &= P \left( \left| S_n - nI \right| \ge n \epsilon \right) \le \frac{\text{Var}[S_n]}{n^2 \epsilon^2} \le \frac{1}{n \epsilon^2} = \delta\end{align}$$Suppose we want 1% accuracy and 99% confidence - i.e. set $\epsilon = \delta = 0.01$. The above inequality tells us that we can achieve this with just $n = 1/(\delta \epsilon^2) = 1,000,000$ samples, regardless of the data dimensionality. ExampleWe want to estimate the following integral $\int_0^1 e^x dx$.
###Code
x = np.linspace(0, 1, 100)
plt.plot(x, np.exp(x))
plt.xlim([0,1])
plt.ylim([0, np.exp(1)])
pass
###Output
_____no_output_____
###Markdown
Analytic solution
###Code
from sympy import symbols, integrate, exp
x = symbols('x')
expr = integrate(exp(x), (x,0,1))
expr.evalf()
###Output
_____no_output_____
###Markdown
Using quadrature
###Code
from scipy import integrate
y, err = integrate.quad(exp, 0, 1)
y
###Output
_____no_output_____
###Markdown
Monte Carlo integration
###Code
for n in 10**np.array([1,2,3,4,5,6,7,8]):
x = np.random.uniform(0, 1, n)
sol = np.mean(np.exp(x))
print('%10d %.6f' % (n, sol))
###Output
10 2.016472
100 1.717020
1000 1.709350
10000 1.719758
100000 1.716437
1000000 1.717601
10000000 1.718240
100000000 1.718152
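###Markdown
As a rough illustration of the error-rate discussion above (an added sketch, not part of the original notebook), the Monte Carlo standard error $\sqrt{v_n}$ for the $\int_0^1 e^x dx$ example can be compared with the actual error at $n = 10^6$ samples:
###Code
n = 1_000_000
x = np.random.uniform(0, 1, n)
g = np.exp(x)
estimate = g.mean()
std_error = g.std() / np.sqrt(n)       # square root of v_n defined earlier
true_value = np.exp(1) - 1
print('estimate       :', estimate)
print('standard error :', std_error)
print('actual error   :', abs(estimate - true_value))
###Output
_____no_output_____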
###Markdown
Monitoring variance in Monte Carlo integrationWe are often interested in knowing how many iterations it takes for Monte Carlo integration to "converge". To do this, we would like some estimate of the variance, and it is useful to inspect such plots. One simple way to get confidence intervals for the plot of Monte Carlo estimate against number of iterations is simply to do many such simulations.For the example, we will try to estimate the function (again)$$f(x) = x \cos 71 x + \sin 13x, \ \ 0 \le x \le 1$$
###Code
def f(x):
return x * np.cos(71*x) + np.sin(13*x)
x = np.linspace(0, 1, 100)
plt.plot(x, f(x))
pass
###Output
_____no_output_____
###Markdown
Single MC integration estimate
###Code
n = 100
x = f(np.random.random(n))
y = 1.0/n * np.sum(x)
y
###Output
_____no_output_____
###Markdown
Using multiple independent sequences to monitor convergenceWe vary the sample size from 1 to 100 and calculate the value of $y = \sum{x}/n$ for 1000 replicates. We then plot the 2.5th and 97.5th percentile of the 1000 values of $y$ to see how the variation in $y$ changes with sample size. The blue lines indicate the 2.5th and 97.5th percentiles, and the red line a sample path.
###Code
n = 100
reps = 1000
x = f(np.random.random((n, reps)))
y = 1/np.arange(1, n+1)[:, None] * np.cumsum(x, axis=0)
upper, lower = np.percentile(y, [2.5, 97.5], axis=1)
plt.plot(np.arange(1, n+1), y, c='grey', alpha=0.02)
plt.plot(np.arange(1, n+1), y[:, 0], c='red', linewidth=1);
plt.plot(np.arange(1, n+1), upper, 'b', np.arange(1, n+1), lower, 'b')
pass
###Output
_____no_output_____
###Markdown
Using bootstrap to monitor convergenceIf it is too expensive to do 1000 replicates, we can use a bootstrap instead.
###Code
xb = np.random.choice(x[:,0], (n, reps), replace=True)
yb = 1/np.arange(1, n+1)[:, None] * np.cumsum(xb, axis=0)
upper, lower = np.percentile(yb, [2.5, 97.5], axis=1)
plt.plot(np.arange(1, n+1)[:, None], yb, c='grey', alpha=0.02)
plt.plot(np.arange(1, n+1), yb[:, 0], c='red', linewidth=1)
plt.plot(np.arange(1, n+1), upper, 'b', np.arange(1, n+1), lower, 'b')
pass
###Output
_____no_output_____
###Markdown
Variance ReductionWith independent samples, the variance of the Monte Carlo estimate is $$\begin{align}\text{Var}[\bar{g_n}] &= \text{Var} \left[ \frac{1}{N}\sum_{i=1}^{N} \frac{f(x_i)}{p(x_i)} \right] \\&= \frac{1}{N^2} \sum_{i=1}^{N} \text{Var} \left[ \frac{f(x_i)}{p(x_i)} \right] \\&= \frac{1}{N^2} \sum_{i=1}^{N} \text{Var}[Y_i] \\&= \frac{1}{N} \text{Var}[Y_i]\end{align}$$where $Y_i = f(x_i)/p(x_i)$. In general, we want to make $\text{Var}[\bar{g_n}]$ as small as possible for the same number of samples. There are several variance reduction techniques (also colorfully known as Monte Carlo swindles) that have been described - we illustrate the change of variables and importance sampling techniques here. Change of variablesThe Cauchy distribution is given by $$f(x) = \frac{1}{\pi (1 + x^2)}, \ \ -\infty \lt x \lt \infty $$Suppose we want to integrate the tail probability $P(X > 3)$ using Monte Carlo. One way to do this is to draw many samples form a Cauchy distribution, and count how many of them are greater than 3, but this is extremely inefficient. Only 10% of samples will be used
###Code
import scipy.stats as stats
h_true = 1 - stats.cauchy().cdf(3)
h_true
n = 100
x = stats.cauchy().rvs(n)
h_mc = 1.0/n * np.sum(x > 3)
h_mc, np.abs(h_mc - h_true)/h_true
###Output
_____no_output_____
###Markdown
A change of variables lets us use 100% of drawsWe are trying to estimate the quantity$$\int_3^\infty \frac{1}{\pi (1 + x^2)} dx$$Using the substitution $y = 3/x$ (and a little algebra), we get$$\int_0^1 \frac{3}{\pi(9 + y^2)} dy$$Hence, a much more efficient MC estimator is $$\frac{1}{n} \sum_{i=1}^n \frac{3}{\pi(9 + y_i^2)}$$where $y_i \sim \mathcal{U}(0, 1)$.
###Code
y = stats.uniform().rvs(n)
h_cv = 1.0/n * np.sum(3.0/(np.pi * (9 + y**2)))
h_cv, np.abs(h_cv - h_true)/h_true
###Output
_____no_output_____
###Markdown
Importance samplingSuppose we want to evaluate$$I = \int{h(x)\,p(x) \, dx}$$where $h(x)$ is some function and $p(x)$ is the PDF of $y$. If it is hard to sample directly from $p$, we can introduce a new density function $q(x)$ that is easy to sample from, and write$$I = \int{h(x)\, p(x)\, dx} = \int{h(x)\, \frac{p(x)}{q(x)} \, q(x) \, dx}$$In other words, we sample from $h(y)$ where $y \sim q$ and weight it by the likelihood ratio $\frac{p(y)}{q(y)}$, estimating the integral as$$\frac{1}{n}\sum_{i=1}^n \frac{p(y_i)}{q(y_i)} h(y_i)$$Sometimes, even if we can sample from $p$ directly, it is more efficient to use another distribution. ExampleSuppose we want to estimate the tail probability of $\mathcal{N}(0, 1)$ for $P(X > 5)$. Regular MC integration using samples from $\mathcal{N}(0, 1)$ is hopeless since nearly all samples will be rejected. However, we can use the exponential density truncated at 5 as the importance function and use importance sampling. Note that $h$ here is simply the identify function.
###Code
x = np.linspace(4, 10, 100)
plt.plot(x, stats.expon(5).pdf(x))
plt.plot(x, stats.norm().pdf(x))
pass
###Output
_____no_output_____
###Markdown
Expected answerWe expect about 3 draws out of 10,000,000 from $\mathcal{N}(0, 1)$ to have a value greater than 5. Hence simply sampling from $\mathcal{N}(0, 1)$ is hopelessly inefficient for Monte Carlo integration.
###Code
%precision 10
v_true = 1 - stats.norm().cdf(5)
v_true
###Output
_____no_output_____
###Markdown
Using direct Monte Carlo integration
###Code
n = 10000
y = stats.norm().rvs(n)
v_mc = 1.0/n * np.sum(y > 5)
# estimate and relative error
v_mc, np.abs(v_mc - v_true)/v_true
###Output
_____no_output_____
###Markdown
Using importance sampling
###Code
n = 10000
y = stats.expon(loc=5).rvs(n)
v_is = 1.0/n * np.sum(stats.norm().pdf(y)/stats.expon(loc=5).pdf(y))
# estimate and relative error
v_is, np.abs(v_is- v_true)/v_true
###Output
_____no_output_____ |
intermediate/Deque.ipynb | ###Markdown
A **deque**, also known as **a double-ended queue**, is an ordered collection of items similar to the queue. It has two ends, a front and a rear, and the items remain positioned in the collection. What makes a deque different is the unrestrictive nature of adding and removing items. New items can be added at either the front or the rear. Likewise, existing items can be removed from either end. In a sense, this hybrid linear structure provides all the capabilities of stacks and queues in a single data structure. Figure 1 shows a deque of Python data objects.
###Code
class Deque:
def __init__(self):
self.items = []
def isEmpty(self):
return self.items == []
    def addFront(self, item):
        # the front of the deque is the end of the underlying list, so this is O(1)
        self.items.append(item)
    def addRear(self, item):
        # the rear of the deque is index 0 of the list, so insertion here is O(n)
        self.items.insert(0,item)
    def removeFront(self):
        return self.items.pop()
    def removeRear(self):
        return self.items.pop(0)
def size(self):
return len(self.items)
d=Deque()
print(f"is empty?: {d.isEmpty()}")
print(f"size: {d.size()}")
d.addRear(4)
print(f"size: {d.size()}")
d.addRear('dog')
print(f"size: {d.size()}")
d.addFront('cat')
print(f"size: {d.size()}")
d.addFront(True)
print(f"is empty?: {d.isEmpty()}")
print(f"size: {d.size()}")
d.addRear(8.4)
print(f"size: {d.size()}")
print(d.removeRear())
print(f"size: {d.size()}")
print(d.removeFront())
print(f"size: {d.size()}")
###Output
is empty?: True
size: 0
size: 1
size: 2
size: 3
is empty?: False
size: 4
size: 5
8.4
size: 4
True
size: 3
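###Markdown
For comparison (not part of the original notebook), Python's standard library provides `collections.deque`, which supports O(1) appends and pops at both ends, whereas the list-based class above pays O(n) for the operations at index 0 (`addRear`/`removeRear`). A short usage example:
###Code
from collections import deque

dq = deque()
dq.append('dog')        # add at the right end
dq.appendleft('cat')    # add at the left end
dq.append(4)
print(dq)               # deque(['cat', 'dog', 4])
print(dq.pop())         # removes from the right end -> 4
print(dq.popleft())     # removes from the left end  -> 'cat'
print(len(dq))          # 1
###Output
_____no_output_____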
|
Tipos_de_distribuciones.ipynb | ###Markdown
Statistics with Python GitHub repository: https://github.com/jorgemauricio/python_statistics Instructor: Jorge Mauricio Types of distributions
###Code
# libraries
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Uniform Distribution
###Code
valores = np.random.uniform(-100.0, 100.0, 100000)
plt.hist(valores, 50)
plt.show()
###Output
_____no_output_____
###Markdown
Normal or Gaussian
###Code
# libraries
from scipy.stats import norm
import matplotlib.pyplot as plt
x = np.arange(-3, 3, 0.001)
plt.plot(x, norm.pdf(x))
###Output
_____no_output_____
###Markdown
Generate some random numbers from a normal distribution, where * "mu" is the desired mean and * "sigma" is the standard deviation
###Code
mu = 5.0
sigma = 2.0
valores = np.random.normal(mu, sigma, 10000)
plt.hist(valores, 50)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential PDF / "Power Law"
###Code
from scipy.stats import expon
import matplotlib.pyplot as plt
x = np.arange(0, 10, 0.001)
plt.plot(x, expon.pdf(x))
###Output
_____no_output_____
###Markdown
Binomial Probability Mass Function
###Code
from scipy.stats import binom
import matplotlib.pyplot as plt
n, p = 10, 0.5
x = np.arange(0, 10, 0.001)
plt.plot(x, binom.pmf(x, n, p))
###Output
_____no_output_____
###Markdown
Poisson Probability Mass Function Example: the LNMySR portal gets 250 visits per day; what is the probability of getting 300 visits?
###Code
from scipy.stats import poisson
import matplotlib.pyplot as plt
mu = 250
x = np.arange(150, 350, 0.5)
plt.plot(x, poisson.pmf(x, mu))
###Output
_____no_output_____ |
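###Markdown
To actually answer the question posed above, a short added computation of the probability of exactly 300 visits, and of 300 or more, when the daily mean is 250:
###Code
from scipy.stats import poisson

mu = 250
print('P(X = 300) :', poisson.pmf(300, mu))
print('P(X >= 300):', 1 - poisson.cdf(299, mu))
###Output
_____no_output_____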
models/deprecated/3-2 (1). VGG16 Triplet KNN Model.ipynb | ###Markdown
Data Information
###Code
train_df = pd.read_csv('./data/triplet/train.csv')
val_df = pd.read_csv('./data/triplet/validation.csv')
test_df = pd.read_csv('./data/triplet/test.csv')
print('Train:\t\t', train_df.shape)
print('Validation:\t', val_df.shape)
print('Test:\t\t', test_df.shape)
print('\nTrain Landmarks:\t', len(train_df['landmark_id'].unique()))
print('Validation Landmarks:\t', len(val_df['landmark_id'].unique()))
print('Test Landmarks:\t\t', len(test_df['landmark_id'].unique()))
train_df.head()
###Output
_____no_output_____
###Markdown
Load Features and Labels
###Code
# Already normalized
train_feature = np.load('./data/triplet/train_triplet_vgg16(1)_features.npy')
val_feature = np.load('./data/triplet/validation_triplet_vgg16(1)_features.npy')
test_feature = np.load('./data/triplet/test_triplet_vgg16(1)_features.npy')
train_df = pd.read_csv('./data/triplet/train.csv')
val_df = pd.read_csv('./data/triplet/validation.csv')
test_df = pd.read_csv('./data/triplet/test.csv')
print('Train:\t\t', train_feature.shape, train_df.shape)
print('Validation:\t', val_feature.shape, val_df.shape)
print('Test:\t\t', test_feature.shape, test_df.shape)
# Helper function
def accuracy(true_label, prediction, top=1):
""" function to calculate the prediction accuracy """
prediction = prediction[:, :top]
count = 0
for i in range(len(true_label)):
if true_label[i] in prediction[i]:
count += 1
return count / len(true_label)
###Output
_____no_output_____
###Markdown
Implement KNN Model
###Code
# Merge train and validation features
train_val_feature = np.concatenate((train_feature, val_feature), axis=0)
train_val_df = pd.concat((train_df, val_df), axis=0)
train_val_df = train_val_df.reset_index(drop=True)
# Implement KNN model
knn = NearestNeighbors(n_neighbors=50, algorithm='auto', leaf_size=30,
metric='minkowski', p=2, n_jobs=-1)
knn.fit(train_val_feature)
# Search the first 50 neighbors
distance, neighbor_index = knn.kneighbors(test_feature, return_distance=True)
# Save the results
np.save('./result/knn_triplet_vgg16(1)_distance.npy', distance)
np.save('./result/knn_triplet_vgg16(1)_neighbor.npy', neighbor_index)
###Output
_____no_output_____
###Markdown
Search Neighbors
###Code
knn_distance = np.load('./result/knn_triplet_vgg16(1)_distance.npy')
knn_neighbor = np.load('./result/knn_triplet_vgg16(1)_neighbor.npy')
# Get the first 50 neighbors
predictions = []
for neighbors in knn_neighbor:
predictions.append(train_val_df.loc[neighbors]['landmark_id'].values)
predictions = np.array(predictions)
np.save('./result/knn_triplet_vgg16(1)_test_prediction.npy', predictions)
###Output
_____no_output_____
###Markdown
Compute Accuracy
###Code
print('Top 1 accuracy:\t', accuracy(test_df['landmark_id'].values, predictions, top=1))
print('Top 5 accuracy:\t', accuracy(test_df['landmark_id'].values, predictions, top=5))
print('Top 10 accuracy:\t', accuracy(test_df['landmark_id'].values, predictions, top=10))
print('Top 20 accuracy:\t', accuracy(test_df['landmark_id'].values, predictions, top=20))
knn_acc = []
for i in range(1, 51):
tmp_acc = accuracy(test_df['landmark_id'].values, predictions, top=i)
knn_acc.append(tmp_acc)
np.save('./result/knn_triplet_vgg16(1)_accuracy.npy', knn_acc)
###Output
_____no_output_____ |
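###Markdown
One extra sanity check (an added sketch, assuming "already normalized" above means unit L2 norm): for unit-norm vectors, Euclidean and cosine distances induce the same neighbor ordering, so the Minkowski p=2 search above is equivalent to a cosine-based search:
###Code
import numpy as np
from sklearn.neighbors import NearestNeighbors

sample = test_feature[:100]
idx_euc = NearestNeighbors(n_neighbors=5, metric='euclidean').fit(train_val_feature).kneighbors(sample, return_distance=False)
idx_cos = NearestNeighbors(n_neighbors=5, metric='cosine').fit(train_val_feature).kneighbors(sample, return_distance=False)
print('neighbor orderings match:', np.array_equal(idx_euc, idx_cos))
###Output
_____no_output_____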
python/signal/py-signal.ipynb | ###Markdown
Signal -- a way for processes to communicate with each other, implemented as a software interrupt. As soon as a process receives a signal it interrupts its normal flow of execution to handle the signal. Signal types: SIGHUP 1 A Terminal hangup or controlling process terminated; SIGINT 2 A Keyboard interrupt (e.g. the break key was pressed); SIGQUIT 3 C Quit key pressed on the keyboard; SIGILL 4 C Illegal instruction; SIGABRT 6 C Abort signal issued by abort(3); SIGFPE 8 C Floating-point exception; SIGKILL 9 AEF Kill signal; SIGSEGV 11 C Invalid memory reference; SIGPIPE 13 A Broken pipe: writing to a pipe with no readers; SIGALRM 14 A Timer signal from alarm(2); SIGTERM 15 A Termination signal; SIGUSR1 30,10,16 A User-defined signal 1; SIGUSR2 31,12,17 A User-defined signal 2; SIGCHLD 20,17,18 B Child process terminated; SIGCONT 19,18,25 Continue a (previously stopped) process; SIGSTOP 17,19,23 DEF Stop the process; SIGTSTP 18,20,24 D Stop typed at the controlling terminal (tty); SIGTTIN 21,21,26 D Background process attempting to read from the controlling terminal; SIGTTOU 22,22,27 D Background process attempting to write to the controlling terminal
###Code
#!/usr/bin/env python3
import sys
import time
import signal
def sigint_handler(signum, frame):
print("signal handler")
sys.exit()
if __name__ == '__main__':
signal.signal(signal.SIGINT, sigint_handler)
signal.signal(signal.SIGTERM, sigint_handler)
signal.signal(signal.SIGHUP, sigint_handler)
while (True):
print("Signal test")
time.sleep(1)
###Output
_____no_output_____ |
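###Markdown
To exercise the handlers you can send a signal to the running process, either from another shell (`kill -TERM <pid>`) or programmatically; a small added sketch, assuming the handler registration above has been executed:
###Code
import os
import signal
import time

pid = os.getpid()
print('sending SIGTERM to pid', pid)
time.sleep(1)
# the registered sigint_handler will run and call sys.exit()
os.kill(pid, signal.SIGTERM)
###Output
_____no_output_____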
CNN_PartB_(xception).ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount('/content/drive')
zip_path = "drive/MyDrive/nature_12K.zip"
!cp "{zip_path}" .
!unzip -q nature_12K.zip
import os
import glob
import numpy as np
import tensorflow as tf
from tensorflow import keras
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
from keras.layers import Dense, Flatten, Conv2D, MaxPooling2D, Activation, Dropout, BatchNormalization
!pip install wandb
import wandb
from keras.callbacks import EarlyStopping, ModelCheckpoint
from wandb.keras import WandbCallback
from keras.utils.vis_utils import plot_model
from PIL import Image
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
data_dir = "inaturalist_12K"
#data_augmentation = True
def data_preparation(data_dir , data_augmentation = True , batch_size = 250):
train_dir = os.path.join(data_dir, "train")
test_dir = os.path.join(data_dir, "val")
if data_augmentation == True:
train_datagen = ImageDataGenerator(rescale=1./255,
height_shift_range=0.2,
width_shift_range=0.2,
horizontal_flip=True,
zoom_range=0.2,
fill_mode="nearest",
validation_split = 0.1)
else:
train_datagen = ImageDataGenerator(rescale=1./255 ,validation_split = 0.1)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
train_dir,
target_size=(224, 224),
shuffle=True,
color_mode="rgb",
batch_size=batch_size,
class_mode='categorical',
subset = "training")
val_generator = train_datagen.flow_from_directory(
train_dir,
target_size=(224, 224),
shuffle=True,
color_mode="rgb",
batch_size=batch_size,
class_mode='categorical',
subset = "validation")
test_generator = test_datagen.flow_from_directory(test_dir, target_size=(224, 224), batch_size=batch_size)
return train_generator , val_generator, test_generator
def pretrain_model(pretrained_model_name, dropout, dense_layer, pre_layer_train=None):
'''
define a keras sequential model based on a pre-trained model intended to be fine tuned
'''
shape = (224 ,224, 3)
# add a pretrained model without the top dense layer
if pretrained_model_name == 'ResNet50':
pretrained_model = tf.keras.applications.ResNet50(include_top=False,input_shape=shape, weights='imagenet')
elif pretrained_model_name == 'InceptionResNetV2':
pretrained_model = tf.keras.applications.InceptionResNetV2(include_top=False,input_shape= shape, weights='imagenet')
elif pretrained_model_name == 'InceptionV3':
pretrained_model = tf.keras.applications.InceptionV3(include_top=False,input_shape= shape, weights='imagenet')
elif pretrained_model_name == 'Xception':
pretrained_model = tf.keras.applications.Xception(include_top=False,input_shape= shape, weights='imagenet')
else:
raise Exception('no pretrained model given')
#freeze all layers
for layer in pretrained_model.layers:
layer.trainable=False
#setting top layers as trainable
if pre_layer_train:
for layer in pretrained_model.layers[-pre_layer_train:]:
layer.trainable=True
model = tf.keras.models.Sequential()
model.add(pretrained_model) #adding pretrained model
model.add(Flatten()) # The flatten layer
model.add(Dense(dense_layer, activation= 'relu'))#adding a dense layer
model.add(Dropout(dropout)) # For dropout
model.add(Dense(10, activation="softmax"))#softmax layer
return model
pre_train_model = "InceptionV3" #you can change model for wandb sweeps
def train():
# Default values for hyper-parameters
config_defaults = {
"data_augmentation": True,
"batch_size": 250,
"dropout": 0.1,
"dense_layer": 256,
"learning_rate": 0.0001,
"epochs": 5,
"pre_layer_train": None,
}
# Initialize a new wandb run
wandb.init(config=config_defaults)
# Config is a variable that holds and saves hyperparameters and inputs
config = wandb.config
# Local variables, values obtained from wandb config
data_augmentation = config.data_augmentation
batch_size = config.batch_size
dropout = config.dropout
dense_layer = config.dense_layer
learning_rate = config.learning_rate
epochs = config.epochs
pre_layer_train = config.pre_layer_train
# Display the hyperparameters
run_name = "pre_train_mdl_{}_aug_{}_bs_{}_ep_{}_dropout_{}_dense_{}".format(pre_train_model, data_augmentation, batch_size,epochs, dropout, dense_layer)
print(run_name)
# Create the data generators
    train_generator , val_generator, test_generator = data_preparation(data_dir, data_augmentation=data_augmentation, batch_size=batch_size)
# Define the model
model = pretrain_model(pretrained_model_name = pre_train_model, dropout = dropout, dense_layer = dense_layer, pre_layer_train=pre_layer_train)
#model = define_model(pretrained_model_name=pre_train_model, activation_function_dense=activation_function_dense, fc_layer=fc_layer, dropout=dropout, pre_layer_train=pre_layer_train)
print(model.count_params())
model.compile(optimizer=Adam(learning_rate = learning_rate), loss = 'categorical_crossentropy', metrics = ['accuracy'])
# Early Stopping callback
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
# To save the model with best validation accuracy
checkpoint = ModelCheckpoint('best_model.h5', monitor='val_accuracy', mode='max', verbose=0, save_best_only=True)
history = model.fit(train_generator,
steps_per_epoch = train_generator.n//train_generator.batch_size,
validation_data = val_generator,
validation_steps = val_generator.n//val_generator.batch_size,
epochs=epochs, verbose = 2,
callbacks=[WandbCallback(), earlyStopping, checkpoint])
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epochs')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epochs')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
# Meaningful name for the run
wandb.run.name = run_name
wandb.run.save()
wandb.run.finish()
return history
# Sweep configuration
sweep_config = {
"name": "CNN_part B",
"metric": {
"name":"val_accuracy",
"goal": "maximize"
},
"method": "bayes",
"parameters": {
"data_augmentation": {
"values": [True, False]
},
"batch_size": {
"values": [128, 256]
},
"learning_rate": {
"values": [0.001, 0.0001]
},
"epochs": {
"values": [5, 10]
},
"dropout": {
"values": [0, 0.2, 0.1]
},
"dense_layer": {
"values": [128, 256, 512]
},
"pre_layer_train": {
"values": [None, 10, 20]
}
}
}
# Generates a sweep id
sweep_id = wandb.sweep(sweep_config, entity="moni6264", project="part_b_xception")
wandb.agent(sweep_id, train, count=3)
###Output
_____no_output_____ |
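###Markdown
A natural follow-up, not shown above, is to evaluate the checkpointed model on the held-out test split; a hedged sketch that assumes `best_model.h5` was written by the `ModelCheckpoint` callback during training:
###Code
from keras.models import load_model

# rebuild the generators and load the best checkpoint saved during training
_, _, test_generator = data_preparation(data_dir, data_augmentation=False, batch_size=128)
best_model = load_model('best_model.h5')
test_loss, test_acc = best_model.evaluate(test_generator, verbose=2)
print('test accuracy:', test_acc)
###Output
_____no_output_____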
Aula_06_DetectorCaracteristicas.ipynb | ###Markdown
Feature Detectors In this tutorial we will see how to use some feature detectors in OpenCV. We will start by importing libraries and reading the image. We will use the following [image](https://drive.google.com/file/d/1Nvpch-60Wre4TyoQWS-HjYP40cglj-Sj/view?usp=sharing). The image needs to be in grayscale and in `float32` format.
###Code
from google.colab import drive
drive.mount('/content/drive')
import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt
image = cv.imread("/content/drive/MyDrive/ColabFiles/chessboard.png")
gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
gray = np.float32(gray)
%matplotlib inline
plt.imshow(gray, cmap='gray', vmin=0, vmax=255)
plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
1 - Harris Detector OpenCV provides the `cornerHarris` method, which applies the Harris detector. The method takes the following parameters:* the source image* the neighborhood size to consider in the detection (*patch* size)* the *kernel* size for gradient estimation with the Sobel filter* the Harris detector constant ($α$ in the slides' formula)* the border handling (padding) mode (optional, mirroring by default)The method returns an image with the value of the response function for each pixel.
###Code
harris = cv.cornerHarris(gray,2,3,0.04)
# Dilate the response image to make the detected points easier to see
harris = cv.dilate(harris,None)
# mark the points on the original image
image[harris > 0.05*harris.max()] = [0,0,255]
plt.imshow(image)
plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
2 - Shi-Tomasi Detector The Shi-Tomasi (minimum eigenvalue) detector can be used through the `goodFeaturesToTrack` method, which requires the following parameters:* the source image (8-bit or float32)* the maximum number of points returned; if the value is $\leq 0$ there is no limit on the number of points returned* the quality level, which is multiplied by the quality measure of the best point found; all points with quality below this product are rejected* the minimum Euclidean distance between returned points* the region of interest where points should be detected (optional, by *default* the whole image)* the *kernel* size for gradient computation (optional, 3 by *default*)* a *flag* that, when true, uses the Harris detector instead (optional, false by default)* the Harris detector constant ($α$ in the slides' formula; optional, 0.04 by *default*)The method returns a vector of points. The image used can be found [here](https://drive.google.com/file/d/1hMrd5_vXvyUMBuM6w5LqGlcnhM1Hs-b5/view?usp=sharing).
###Code
image = cv.imread("/content/drive/MyDrive/ColabFiles/blox.jpg")
gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
corners = cv.goodFeaturesToTrack(gray,30,0.01,10)
corners = np.int0(corners)
# draw circles at the detected corners
for i in corners:
x,y = i.ravel()
cv.circle(image,(x,y),3,255,-1)
plt.imshow(image)
plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
3 - Exercises 1. Vary the Harris detector parameters and analyze the results
###Code
image = cv.imread("/content/drive/MyDrive/ColabFiles/chessboard.png")
gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
gray = np.float32(gray)
harris = cv.cornerHarris(gray,5,1,0.04)
# Dilate the response image to make the detected points easier to see
harris = cv.dilate(harris,None)
# mark the points on the original image
image[harris > 0.05*harris.max()] = [0,0,255]
plt.imshow(image)
plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
2. Use the `goodFeaturesToTrack` method to apply the Harris detector.
###Code
image = cv.imread("/content/drive/MyDrive/ColabFiles/blox.jpg")
gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
corners = cv.goodFeaturesToTrack(gray,30,0.01,10, useHarrisDetector = True)
corners = np.int0(corners)
# draw circles at the detected corners
for i in corners:
x,y = i.ravel()
cv.circle(image,(x,y),3,255,-1)
plt.imshow(image)
plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
3. Research and test other *feature* detection methods in OpenCV. [Canny](https://docs.opencv.org/4.x/dd/d1a/group__imgproc__feature.htmlga1d6bb77486c8f92d79c8793ad995d541)
###Code
image = cv.imread("/content/drive/MyDrive/ColabFiles/blox.jpg")
gray= cv.cvtColor(image,cv.COLOR_BGR2GRAY)
print("0.5, 0.2")
banana = cv.Canny(image, 0.5, 0.2)
plt.imshow(banana)
plt.xticks([]), plt.yticks([])
plt.show()
print("1, 255")
banana = cv.Canny(image, 1, 255)
plt.imshow(banana)
plt.xticks([]), plt.yticks([])
plt.show()
print("apertureSize = 3")
banana = cv.Canny(image, 1, 255, apertureSize = 3)
plt.imshow(banana)
plt.xticks([]), plt.yticks([])
plt.show()
print("L2gradient = True")
banana = cv.Canny(image, 1, 255, apertureSize = 3, L2gradient = True)
plt.imshow(banana)
plt.xticks([]), plt.yticks([])
plt.show()
###Output
0.5, 0.2
###Markdown
[Fast](https://docs.opencv.org/3.4/df/d0c/tutorial_py_fast.html)
###Code
image = cv.imread("/content/drive/MyDrive/ColabFiles/blox.jpg")
gray = cv.cvtColor(image,cv.COLOR_BGR2GRAY)
fast = cv.FastFeatureDetector_create()
kp = fast.detect(gray,None)
gray2 = cv.drawKeypoints(gray, kp, None, color=(255,0,0))
plt.imshow(gray2)
plt.xticks([]), plt.yticks([])
plt.show()
print("Disable nonmaxSuppression")
fast.setNonmaxSuppression(0)
kp = fast.detect(gray, None)
img3 = cv.drawKeypoints(image, kp, None, color=(255,0,0))
plt.imshow(img3)
plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
[Surf and sift](https://stackoverflow.com/questions/37984709/how-to-use-surf-in-python)
###Code
img = cv.imread("/content/drive/MyDrive/ColabFiles/blox.jpg")
gray = cv.cvtColor(image,cv.COLOR_BGR2GRAY)
kaze = cv.KAZE_create(1)
(kps, descs) = kaze.detectAndCompute(gray, None)
img_1 = cv.drawKeypoints(img, kps, None, color=(0,255,0), flags=0)
plt.imshow(img_1)
plt.xticks([]), plt.yticks([])
plt.show()
img = cv.imread("/content/drive/MyDrive/ColabFiles/blox.jpg")
gray = cv.cvtColor(image,cv.COLOR_BGR2GRAY)
akaze = cv.AKAZE_create()
(kps, descs) = akaze.detectAndCompute(gray, None)
img_1 = cv.drawKeypoints(img, kps, None, color=(0,255,0), flags=0)
plt.imshow(img_1)
plt.xticks([]), plt.yticks([])
plt.show()
img = cv.imread("/content/drive/MyDrive/ColabFiles/blox.jpg")
gray = cv.cvtColor(image,cv.COLOR_BGR2GRAY)
brisk = cv.BRISK_create()
(kps, descs) = brisk.detectAndCompute(gray, None)
img_1 = cv.drawKeypoints(img, kps, None, color=(0,255,0), flags=0)
plt.imshow(img_1)
plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
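###Markdown
Although the heading above mentions SURF and SIFT, the cells actually use KAZE, AKAZE and BRISK (SURF is patented and unavailable in most builds). SIFT itself should be available as `cv.SIFT_create()` in OpenCV 4.4 and later; a hedged example in the same style as the cells above:
###Code
img = cv.imread("/content/drive/MyDrive/ColabFiles/blox.jpg")
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

sift = cv.SIFT_create()
kps, descs = sift.detectAndCompute(gray, None)

img_1 = cv.drawKeypoints(img, kps, None, color=(0,255,0), flags=0)
plt.imshow(img_1)
plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____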
###Markdown
[BRIEF](https://docs.opencv.org/3.4/dc/d7d/tutorial_py_brief.html)
###Code
img = cv.imread("/content/drive/MyDrive/ColabFiles/blox.jpg")
gray = cv.cvtColor(image,cv.COLOR_BGR2GRAY)
# Initiate FAST detector
star = cv.xfeatures2d.StarDetector_create()
# Initiate BRIEF extractor
brief = cv.xfeatures2d.BriefDescriptorExtractor_create()
# find the keypoints with STAR
kps = star.detect(img,None)
# compute the descriptors with BRIEF
kps, des = brief.compute(img, kps)
img_1 = cv.drawKeypoints(img, kps, None, color=(0,255,0), flags=0)
plt.imshow(img_1)
plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
[ORB](https://docs.opencv.org/3.4/d1/d89/tutorial_py_orb.html)
###Code
img = cv.imread("/content/drive/MyDrive/ColabFiles/blox.jpg")
gray = cv.cvtColor(image,cv.COLOR_BGR2GRAY)
# Initiate ORB detector
orb = cv.ORB_create()
# find the keypoints with ORB
kps = orb.detect(img,None)
# compute the descriptors with ORB
kps, des = orb.compute(img, kps)
img_1 = cv.drawKeypoints(img, kps, None, color=(0,255,0), flags=0)
plt.imshow(img_1)
plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____ |
model-development - 1.ipynb | ###Markdown
Model Development In this project, we use a database that contains information about used cars. The objective of this project is to develop a model and try to predict the price value for a car. We also try different models and compare their results to see which model fits best and predicts most accurately.
###Code
%pip install sklearn
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load the data and store it in dataframe `df`:
###Code
# path of data
path = 'https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DA0101EN-SkillsNetwork/labs/Data%20files/automobileEDA.csv'
df = pd.read_csv(path)
df.head()
#Lets use linear regression to predict the price of cars
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm
#Using highway mpg to predict the price of cars
X = df[['highway-mpg']]
Y = df['price']
lm.fit(X,Y)
#Predicted price values
Yhat=lm.predict(X)
Yhat[0:5]
#The intercept and coeffecient of the linear function used to predict the price
print(lm.intercept_)
print(lm.coef_)
# Similarly we predict price using engine size data and compare the values
lm1 = LinearRegression()
lm1
X = df[['engine-size']]
Y=df[['price']]
lm1.fit(X,Y)
lm1
y=lm1.predict(X)
y[0:5]
#Similarly, the slope and intercept are:
print(lm1.intercept_)
print(lm1.coef_)
#We previously determined the factors that have a strong effect on the price of the car
#Instead of predicting price value with a single predictor variable we use multiple linear regression model to predict the price using multiple variables
Z = df[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']]
lm.fit(Z, df['price'])
print(lm.intercept_)
print(lm.coef_)
y=lm.predict(Z)
y[0:5]
#To choose the best model, lets visualise and choose the best one
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
Let's visualize **highway-mpg** as potential predictor variable of price:
###Code
#Let's visualize highway-mpg as potential predictor variable of price:
plt.figure(figsize=(12, 10))
sns.regplot(x="highway-mpg", y="price", data=df)
plt.ylim(0,)
###Output
_____no_output_____
###Markdown
We can see from this plot that price is negatively correlated to highway-mpg since the regression slope is negative.
###Code
#Let's compare the above plot with that of rpm
plt.figure(figsize=(12, 10))
sns.regplot(x="peak-rpm", y="price", data=df)
plt.ylim(0,)
###Output
_____no_output_____
###Markdown
Comparing the regression plot of "peak-rpm" and "highway-mpg", we see that the points for "highway-mpg" are much closer to the generated line and, on average, decrease. The points for "peak-rpm" have more spread around the predicted line and it is much harder to determine if the points are decreasing or increasing as the "peak-rpm" increases.
###Code
width = 12
height = 10
plt.figure(figsize=(width, height))
sns.residplot(df['highway-mpg'], df['price'])
plt.show()
###Output
C:\Users\LENOVO\anaconda3\lib\site-packages\seaborn\_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
###Markdown
We can see from this residual plot that the residuals are not randomly spread around the x-axis, leading us to believe that maybe a non-linear model is more appropriate for this data.
###Code
#To visualise the multiple linear regression model we use the distribution plot since neither scatter or residual plots are viable
plt.figure(figsize=(12, 10))
ax1 = sns.distplot(df['price'], hist=False, color="r", label="Actual Value")
sns.distplot(y, hist=False, color="b", label="Fitted Values", ax=ax1)
plt.title('Actual vs Fitted Values for Price')
plt.xlabel('Price (in dollars)')
plt.ylabel('Proportion of Cars')
plt.show()
plt.close()
###Output
C:\Users\LENOVO\anaconda3\lib\site-packages\seaborn\distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `kdeplot` (an axes-level function for kernel density plots).
warnings.warn(msg, FutureWarning)
C:\Users\LENOVO\anaconda3\lib\site-packages\seaborn\distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `kdeplot` (an axes-level function for kernel density plots).
warnings.warn(msg, FutureWarning)
###Markdown
We can see that the fitted values are reasonably close to the actual values since the two distributions overlap a bit. However, there is definitely some room for improvement.
###Code
#Trying out a polynomial model to see if highway-mpg gives a better result compared to the linear model
#Defining a function to plot the visualisation
def PlotPolly(model, independent_variable, dependent_variable, Name):
    x_new = np.linspace(15, 55, 100)
    y_new = model(x_new)
    plt.plot(independent_variable, dependent_variable, '.', x_new, y_new, '-')
plt.title('Polynomial Fit with Matplotlib for Price ~ Length')
ax = plt.gca()
ax.set_facecolor((0.898, 0.898, 0.898))
fig = plt.gcf()
plt.xlabel(Name)
plt.ylabel('Price of Cars')
plt.show()
plt.close()
x = df['highway-mpg']
y = df['price']
###Output
_____no_output_____
###Markdown
Let's fit the polynomial using the function polyfit, then use the function poly1d to display the polynomial function.
###Code
#Trying out a polynomial of the 3rd order
f = np.polyfit(x, y, 3)
p = np.poly1d(f)
print(p)
###Output
3 2
-1.557 x + 204.8 x - 8965 x + 1.379e+05
###Markdown
Let's plot the function:
###Code
PlotPolly(p, x, y, 'highway-mpg')
###Output
_____no_output_____
###Markdown
We can already see from plotting that this polynomial model performs better than the linear model. This is because the generated polynomial function "hits" more of the data points. Model 1: Simple Linear Regression
###Code
#To better identify which model gives us the best prediction of the price value we can use the R^2 and Mean squared error method
#We test the simple linear regression model first
#(note: X and Y still hold the engine-size and price values assigned above,
# so the R^2 and MSE computed below are for the engine-size model)
lm.fit(X, Y)
print('The R-square is: ', lm.score(X, Y))
#To get the mean squared error we import the following module
from sklearn.metrics import mean_squared_error
Yhat=lm.predict(X)
print('The output of the first four predicted value is: ', Yhat[0:4])
mse = mean_squared_error(df['price'], Yhat)
print('The mean square error of price and predicted value is: ', mse)
###Output
The output of the first four predicted value is: [[13728.4631336 ]
[13728.4631336 ]
[17399.38347881]
[10224.40280408]]
The mean square error of price and predicted value is: 15021126.02517414
###Markdown
Model 2: Multiple Linear Regression Let's calculate the R^2:
###Code
#Secondly, we test the multiple linear regression model
lm.fit(Z, df['price'])
print('The R-square is: ', lm.score(Z, df['price']))
Y_predict_multifit = lm.predict(Z)
print('The mean square error of price and predicted value using multifit is: ', \
mean_squared_error(df['price'], Y_predict_multifit))
###Output
The mean square error of price and predicted value using multifit is: 11980366.87072649
###Markdown
Model 3: Polynomial Fit Let's calculate the R^2. Let’s import the function r2\_score from the module metrics as we are using a different function.
###Code
#Thirdly, we will test the Polynomial regression model to see if its a better fit
from sklearn.metrics import r2_score
r_squared = r2_score(y, p(x))
print('The R-square value is: ', r_squared)
###Output
The R-square value is: 0.674194666390652
###Markdown
We can also calculate the MSE:
###Code
mean_squared_error(df['price'], p(x))
###Output
_____no_output_____
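###Markdown
The polynomial fit above is univariate (numpy `polyfit` on highway-mpg only). As an added sketch of one way to extend the same idea to several predictors, scikit-learn's `PolynomialFeatures` can be combined with a pipeline (second-order features by default):
###Code
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.pipeline import Pipeline

pipe = Pipeline([('scale', StandardScaler()),
                 ('polynomial', PolynomialFeatures(include_bias=False)),
                 ('model', LinearRegression())])
pipe.fit(Z, df['price'])
print('The R-square of the polynomial pipeline is: ', pipe.score(Z, df['price']))
###Output
_____no_output_____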
###Markdown
Determining the Best Model Fit Now that we have visualized the different models, and generated the R-squared and MSE values for the fits, we determine a good model fit by comparing it's R-squared value and MSE value.Let's take a look at the values for the different models.Simple Linear Regression: Using Highway-mpg as a Predictor Variable of Price. R-squared: 0.7609686443622008 MSE: 1.5 ×10^7Multiple Linear Regression: Using Horsepower, Curb-weight, Engine-size, and Highway-mpg as Predictor Variables of Price. R-squared: 0.8093562806577457 MSE: 1.2 ×10^7Polynomial Fit: Using Highway-mpg as a Predictor Variable of Price. R-squared: 0.674194666390652 MSE: 2.05 x 10^7 Simple Linear Regression Model (SLR) vs Multiple Linear Regression Model (MLR) Usually, the more variables you have, the better your model is at predicting, but this is not always true. Sometimes you may not have enough data, you may run into numerical problems, or many of the variables may not be useful and even act as noise. As a result, you should always check the MSE and R^2.In order to compare the results of the MLR vs SLR models, we look at a combination of both the R-squared and MSE to make the best conclusion about the fit of the model. MSE: The MSE of SLR is 1.5 x10^7 while MLR has an MSE of 1.2 x10^7. The MSE of MLR is smaller. R-squared: In this case, we can also see that there is a big difference between the R-squared of the SLR and the R-squared of the MLR. The R-squared for the SLR is small compared to the R-squared for the MLR .This R-squared in combination with the MSE show that MLR seems like the better model fit in this case compared to SLR. Simple Linear Model (SLR) vs. Polynomial Fit MSE: We can see that Polynomial Fit brought up the MSE, since this MSE is much bigger than the one from the SLR. R-squared: The R-squared for the Polynomial Fit is smaller than the R-squared for the SLR, so the Polynomial Fit also brought down the R-squared quite a bit.Since the Polynomial Fit resulted in a higher MSE and a lower R-squared, so we can conclude that this was a poorly fit model than the simple linear regression for predicting "price" with "highway-mpg" as a predictor variable. Multiple Linear Regression (MLR) vs. Polynomial Fit MSE: The MSE for the MLR is smaller than the MSE for the Polynomial Fit. R-squared: The R-squared for the MLR is also much larger than for the Polynomial Fit. Conclusion Comparing these three models, we conclude that the MLR model is the best model to be able to predict price from our dataset. This result makes sense since we have 27 variables in total and we know that more than one of those variables are potential predictors of the final car price. Training and Testing the Model We developed different models previously and compared them to find the model best suited to predict the price value from our dataset. In this section, we will train and test those models to make better predictions.
###Code
path = 'https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DA0101EN-SkillsNetwork/labs/Data%20files/module_5_auto.csv'
df = pd.read_csv(path)
df.to_csv('module_5_auto.csv')
#Get all numeric data
df=df._get_numeric_data()
df.head()
#We define the following functions to plot the necessary graphs required
def DistributionPlot(RedFunction, BlueFunction, RedName, BlueName, Title):
width = 12
height = 10
plt.figure(figsize=(width, height))
ax1 = sns.distplot(RedFunction, hist=False, color="r", label=RedName)
ax2 = sns.distplot(BlueFunction, hist=False, color="b", label=BlueName, ax=ax1)
plt.title(Title)
plt.xlabel('Price (in dollars)')
plt.ylabel('Proportion of Cars')
plt.show()
plt.close()
def PollyPlot(xtrain, xtest, y_train, y_test, lr,poly_transform):
width = 12
height = 10
plt.figure(figsize=(width, height))
xmax=max([xtrain.values.max(), xtest.values.max()])
xmin=min([xtrain.values.min(), xtest.values.min()])
x=np.arange(xmin, xmax, 0.1)
plt.plot(xtrain, y_train, 'ro', label='Training Data')
plt.plot(xtest, y_test, 'go', label='Test Data')
plt.plot(x, lr.predict(poly_transform.fit_transform(x.reshape(-1, 1))), label='Predicted Function')
plt.ylim([-10000, 60000])
plt.ylabel('Price')
plt.legend()
y_data = df['price']
#Dropping price data in dataframe x_data:
x_data=df.drop('price',axis=1)
#Now, we randomly split our data into training and testing data using the function train_test_split.
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.10, random_state=1)
print("number of test samples :", x_test.shape[0])
print("number of training samples:",x_train.shape[0])
#Let's import LinearRegression from the module linear_model.
from sklearn.linear_model import LinearRegression
lre=LinearRegression()
lre.fit(x_train[['horsepower']], y_train) #Fitting the model
lre.score(x_test[['horsepower']], y_test)
lre.score(x_train[['horsepower']], y_train) #We can see the R^2 is much smaller using the test data compared to the training data.
#Using cross_val_predict to predict the output
from sklearn.model_selection import cross_val_predict
yhat = cross_val_predict(lre,x_data[['horsepower']], y_data,cv=4)
yhat[0:5]
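# A minimal companion check (same estimator and data as above): cross_val_score
# returns the R^2 of each of the 4 folds rather than the out-of-fold predictions,
# and its mean gives a more robust estimate than a single train/test split.
from sklearn.model_selection import cross_val_score
Rcross = cross_val_score(lre, x_data[['horsepower']], y_data, cv=4)
print('Fold R^2 values:', Rcross, 'Mean:', Rcross.mean(), 'Std:', Rcross.std())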
###Output
_____no_output_____
###Markdown
Model Selection
###Code
#Let's create Multiple Linear Regression objects and train the model using 'horsepower', 'curb-weight', 'engine-size' and 'highway-mpg' as features.
lr = LinearRegression()
lr.fit(x_train[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']], y_train)
#Prediction using training data:
yhat_train = lr.predict(x_train[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']])
yhat_train[0:5]
#Prediction using test data:
yhat_test = lr.predict(x_test[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']])
yhat_test[0:5]
###Output
_____no_output_____
###Markdown
Let's perform some model evaluation using our training and testing data separately. First, we import the seaborn and matplotlib library for plotting.
###Code
#Let's evaluate the models created above
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
#Let's examine the distribution of the predicted values of the training data:
Title = 'Distribution Plot of Predicted Value Using Training Data vs Training Data Distribution'
DistributionPlot(y_train, yhat_train, "Actual Values (Train)", "Predicted Values (Train)", Title)
###Output
C:\Users\LENOVO\anaconda3\lib\site-packages\seaborn\distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `kdeplot` (an axes-level function for kernel density plots).
warnings.warn(msg, FutureWarning)
C:\Users\LENOVO\anaconda3\lib\site-packages\seaborn\distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `kdeplot` (an axes-level function for kernel density plots).
warnings.warn(msg, FutureWarning)
###Markdown
Figure 1: Plot of predicted values using the training data compared to the actual values of the training data.
###Code
#Let's examine the distribution of the predicted values of the test data:
Title='Distribution Plot of Predicted Value Using Test Data vs Data Distribution of Test Data'
DistributionPlot(y_test,yhat_test,"Actual Values (Test)","Predicted Values (Test)",Title)
###Output
C:\Users\LENOVO\anaconda3\lib\site-packages\seaborn\distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `kdeplot` (an axes-level function for kernel density plots).
warnings.warn(msg, FutureWarning)
C:\Users\LENOVO\anaconda3\lib\site-packages\seaborn\distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `kdeplot` (an axes-level function for kernel density plots).
warnings.warn(msg, FutureWarning)
###Markdown
Figure 2: Plot of predicted values using the test data compared to the actual values of the test data. Comparing Figure 1 and Figure 2, it is evident that the model fits the training data in Figure 1 much better than it fits the test data in Figure 2. The difference in Figure 2 is apparent in the range of 5,000 to 15,000, where the shape of the distribution is extremely different. Let's see if polynomial regression also exhibits a drop in prediction accuracy when analysing the test dataset.
###Code
from sklearn.preprocessing import PolynomialFeatures
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.45, random_state=0)
pr = PolynomialFeatures(degree=5)
x_train_pr = pr.fit_transform(x_train[['horsepower']])
x_test_pr = pr.fit_transform(x_test[['horsepower']])
pr
poly = LinearRegression()
poly.fit(x_train_pr, y_train)
yhat = poly.predict(x_test_pr)
yhat[0:5]
#We will use the function "PollyPlot" that we defined at the beginning
PollyPlot(x_train[['horsepower']], x_test[['horsepower']], y_train, y_test, poly,pr)
###Output
_____no_output_____
###Markdown
Figure 3: A polynomial regression model where red dots represent training data, green dots represent test data, and the blue line represents the model prediction.We see that the estimated function appears to track the data but around 200 horsepower, the function begins to diverge from the data points.
###Code
#R^2 of the training data:
poly.score(x_train_pr, y_train)
#R^2 of the test data:
poly.score(x_test_pr, y_test)
###Output
_____no_output_____
###Markdown
We see the R^2 for the training data is 0.5567 while the R^2 on the test data was -29.87. The lower the R^2, the worse the model. A negative R^2 is a sign of overfitting.
###Code
#Let's see how the R^2 changes on the test data for different order polynomials and then plot the results:
Rsqu_test = []
order = [1, 2, 3, 4]
for n in order:
pr = PolynomialFeatures(degree=n)
x_train_pr = pr.fit_transform(x_train[['horsepower']])
x_test_pr = pr.fit_transform(x_test[['horsepower']])
lr.fit(x_train_pr, y_train)
Rsqu_test.append(lr.score(x_test_pr, y_test))
plt.plot(order, Rsqu_test)
plt.xlabel('order')
plt.ylabel('R^2')
plt.title('R^2 Using Test Data')
plt.text(3, 0.74, 'Maximum R^2 ')
###Output
_____no_output_____ |
other scripts/fabio_notebook.ipynb | ###Markdown
Preprocessing
###Code
tracks['track_id_tmp'] = tracks['track_id']
tracks['track_id'] = tracks.index
playlists['playlist_id_tmp'] = playlists['playlist_id']
playlists['playlist_id'] = playlists.index
train['playlist_id_tmp'] = train['playlist_id']
train['track_id_tmp'] = train['track_id']
track_to_num = pd.Series(tracks.index)
track_to_num.index = tracks['track_id_tmp']
playlist_to_num = pd.Series(playlists.index)
playlist_to_num.index = playlists['playlist_id_tmp']
num_to_tracks = pd.Series(tracks['track_id_tmp'])
train['track_id'] = train['track_id'].apply(lambda x : track_to_num[x])
train['playlist_id'] = train['playlist_id'].apply(lambda x : playlist_to_num[x])
tracks.tags = tracks.tags.apply(lambda s: np.array(eval(s), dtype=int))
playlists.title = playlists.title.apply(lambda s: np.array(eval(s), dtype=int))
tracks.loc[0].tags
playlists.head()
train.head()
track_to_num.head()
playlist_to_num[:5]
num_to_tracks[:5]
target_playlists['playlist_id_tmp'] = target_playlists['playlist_id']
target_playlists['playlist_id'] = target_playlists['playlist_id'].apply(lambda x : playlist_to_num[x])
target_tracks['track_id_tmp'] = target_tracks['track_id']
target_tracks['track_id'] = target_tracks['track_id'].apply(lambda x : track_to_num[x])
target_tracks.head()
target_playlists.head()
playlist_tracks = pd.DataFrame(train['playlist_id'].drop_duplicates())
playlist_tracks.index = train['playlist_id'].unique()
playlist_tracks['track_ids'] = train.groupby('playlist_id').apply(lambda x : x['track_id'].values)
playlist_tracks = playlist_tracks.sort_values('playlist_id')
playlist_tracks.head()
track_playlists = pd.DataFrame(train['track_id'].drop_duplicates())
track_playlists.index = train['track_id'].unique()
track_playlists['playlist_ids'] = train.groupby('track_id').apply(lambda x : x['playlist_id'].values)
track_playlists = track_playlists.sort_values('track_id')
track_playlists.head()
def transform_album_1(alb):
ar = eval(alb)
if len(ar) == 0 or (len(ar) > 0 and ar[0] == None):
ar = [-1]
return ar[0]
def transform_album_2(alb):
global next_album_id
if alb == -1:
alb = next_album_id
next_album_id += 1
return alb
tracks.album = tracks.album.apply(lambda alb: transform_album_1(alb))
last_album = tracks.album.max()
next_album_id = last_album + 1
tracks.album = tracks.album.apply(lambda alb: transform_album_2(alb))
###Output
_____no_output_____
###Markdown
Clean data URM and SVD decomposition
###Code
# User Rating Matrix URM
def get_URM(tracks, playlists, playlist_tracks, track_playlists, normalized=False):
URM = lil_matrix((len(playlists), len(tracks)))
num_playlists = len(playlist_tracks)
i = 0
for row in track_playlists.itertuples():
track_id = row.track_id
#row.playlist_ids.sort()
nq = len(row.playlist_ids)
for pl_id in row.playlist_ids:
URM[pl_id,track_id] = math.log((num_playlists - nq + 0.5)/(nq + 0.5)) if normalized else 1
if i % 1000 == 0:
print(i)
i += 1
return URM
%%time
URM = get_URM(tracks, playlists, playlist_tracks, track_playlists, normalized=True)
URM = URM.tocsc()
%%time
U, S, V = svds(URM, k=200)
S = np.diag(S)
M2 = np.dot(S, V)
###Output
_____no_output_____
###Markdown
Normalized:
* k = 50 -> 0.012786324786324807
* k = 200 -> 0.024994972347913386
* k = 500 -> 0.0353849999999999

Tags
###Code
# Count distinct tags
tag_tracks = {}
for row in tracks.itertuples():
for tag in row.tags:
if tag in tag_tracks:
tag_tracks[tag].append(row.track_id)
else:
tag_tracks[tag] = [row.track_id]
# Item Tag Matrix ITM
def get_ITM(tracks, tag_tracks, normalized=False):
unique_tags = list(tag_tracks.keys())
ITM = lil_matrix((len(tracks), max(unique_tags)+1))
ITM_count = lil_matrix((len(tracks), max(unique_tags)+1))
num_tracks = len(tracks)
i = 0
for tag,track_ids in tag_tracks.items():
#row.playlist_ids.sort()
nq = len(track_ids)
for track_id in track_ids:
ITM[track_id,tag] = math.log((num_tracks - nq + 0.5)/(nq + 0.5)) if normalized else 1
ITM_count[track_id,tag] = 1
if i % 1000 == 0:
print(i)
i += 1
return ITM
ITM = get_ITM(tracks, tag_tracks, normalized=True)
"""def create_row(row_num, tags_concatenated):
tags_concatenated.sort()
d = np.array([])
r = np.array([])
c = np.array([])
for i,tag in enumerate(tags_concatenated):
if i > 0 and tags_concatenated[i-1] == tags_concatenated[i]:
d[-1] += 1
else:
d = np.append(d,1)
r = np.append(r,row_num)
c = np.append(c,tags_concatenated[i])
return d, (r, c)
"""
# User Tag Matrix UTM
def get_UTM(tracks, playlist_tracks, tag_tracks, OKAPI_K=1.7, OKAPI_B=0.75):
unique_tags = list(tag_tracks.keys())
i = 0
"""
d = np.array([])
r = np.array([])
c = np.array([])
for row in playlist_tracks.itertuples():
pl_id = row.playlist_id
tags_concatenated = np.array([])
for tr_id in row.track_ids:
tags = tracks.loc[tr_id].tags
tags_concatenated = np.concatenate((tags_concatenated, tags))
d1, (r1, c1) = create_row(row.playlist_id, tags_concatenated)
d = np.concatenate((d, d1))
r = np.concatenate((r, r1))
c = np.concatenate((c, c1))
i += 1
if i % 1000 == 0:
print(i)
UTM = coo_matrix(d, (r, c))
"""
UTM = lil_matrix((max(playlists.playlist_id)+1, max(unique_tags)+1))
for row in playlist_tracks.itertuples():
pl_id = row.playlist_id
for tr_id in row.track_ids:
for tag in tracks.loc[tr_id].tags:
UTM[pl_id,tag] += 1
i += 1
if i % 1000 == 0:
print(i)
avg_document_length = sum(list(map(lambda l: sum(l), UTM.data)))/len(UTM.data)
i = 0
for row in playlist_tracks.itertuples():
pl_id = row.playlist_id
tags = UTM.rows[pl_id]
data = UTM.data[pl_id]
for tag in tags:
fq = UTM[pl_id,tag]
UTM[pl_id,tag] = (fq*(OKAPI_K+1))/(fq + OKAPI_K*(1 - OKAPI_B + OKAPI_B * sum(data) / avg_document_length))
i += 1
if i % 1000 == 0:
print(i)
return UTM
UTM = get_UTM(tracks, playlist_tracks, tag_tracks)
UTM_csc = UTM.tocsc()
ITM_csr_transpose = ITM.tocsr().transpose()
###Output
_____no_output_____
###Markdown
Artists
###Code
unique_artists = tracks.artist_id.unique()
# Item Artist Matrix
def get_IAM(tracks, target_tracks, normalized=False):
unique_artists = tracks.artist_id.unique()
IAM = lil_matrix((len(tracks), max(unique_artists)+1))
tracks_filtered = tracks[tracks.track_id.isin(target_tracks.track_id)]
num_tracks = len(tracks)
i = 0
for row in tracks_filtered.itertuples():
nq = 1
IAM[row.track_id,row.artist_id] = math.log((num_tracks - nq + 0.5)/(nq + 0.5)) if normalized else 1
if i % 1000 == 0:
print(i)
i += 1
return IAM
IAM = get_IAM(tracks, target_tracks, normalized=True)
# User Artist Matrix UAM
def get_UAM(tracks, playlist_tracks, target_playlists, OKAPI_K=1.7, OKAPI_B=0.75):
unique_artists = tracks.artist_id.unique()
playlist_tracks_filtered = playlist_tracks[playlist_tracks.playlist_id.isin(target_playlists.playlist_id)]
i = 0
UAM = lil_matrix((max(playlists.playlist_id)+1, max(unique_artists)+1))
for row in playlist_tracks_filtered.itertuples():
pl_id = row.playlist_id
for tr_id in row.track_ids:
UAM[pl_id,tracks.loc[tr_id].artist_id] += 1
i += 1
if i % 1000 == 0:
print(i)
avg_document_length = functools.reduce(lambda acc,tr_ids: acc + len(tr_ids), playlist_tracks.track_ids, 0) / len(playlist_tracks)
#avg_document_length = sum(list(map(lambda l: sum(l), UAM.data)))/len(UAM.data)
i = 0
for row in playlist_tracks_filtered.itertuples():
pl_id = row.playlist_id
artists = UAM.rows[pl_id]
data = UAM.data[pl_id]
for artist in artists:
fq = UAM[pl_id,artist]
UAM[pl_id,artist] = (fq*(OKAPI_K+1))/(fq + OKAPI_K*(1 - OKAPI_B + OKAPI_B * sum(data) / avg_document_length))
i += 1
if i % 1000 == 0:
print(i)
return UAM
UAM = get_UAM(tracks, playlist_tracks, target_playlists, OKAPI_K=1.7, OKAPI_B=0.75)
UAM_csc = UAM.tocsc()
IAM_csr_transpose = IAM.tocsr().transpose()
###Output
_____no_output_____
###Markdown
Albums
###Code
unique_albums = tracks.album.unique()
unique_albums
# Item Album Matrix IAM_album
def get_IAM_album(tracks, target_tracks, normalized=False):
unique_albums = tracks.album.unique()
IAM_album = lil_matrix((len(tracks), max(unique_albums)+1))
tracks_filtered = tracks[tracks.track_id.isin(target_tracks.track_id)]
num_tracks = len(tracks)
i = 0
for row in tracks_filtered.itertuples():
nq = 1
IAM_album[row.track_id,row.album] = math.log((num_tracks - nq + 0.5)/(nq + 0.5)) if normalized else 1
if i % 1000 == 0:
print(i)
i += 1
return IAM_album
IAM_album = get_IAM_album(tracks, target_tracks, normalized=True)
# User Album Matrix UAM_album
def get_UAM_album(tracks, playlist_tracks, target_playlists, OKAPI_K=1.7, OKAPI_B=0.75):
unique_albums = tracks.album.unique()
playlist_tracks_filtered = playlist_tracks[playlist_tracks.playlist_id.isin(target_playlists.playlist_id)]
i = 0
UAM_album = lil_matrix((max(playlists.playlist_id)+1, max(unique_albums)+1))
for row in playlist_tracks_filtered.itertuples():
pl_id = row.playlist_id
for tr_id in row.track_ids:
UAM_album[pl_id,tracks.loc[tr_id].album] += 1
i += 1
if i % 1000 == 0:
print(i)
avg_document_length = functools.reduce(lambda acc,tr_ids: acc + len(tr_ids), playlist_tracks.track_ids, 0) / len(playlist_tracks)
#avg_document_length = sum(list(map(lambda l: sum(l), UAM_album.data)))/len(UAM_album.data)
i = 0
for row in playlist_tracks_filtered.itertuples():
pl_id = row.playlist_id
albums = UAM_album.rows[pl_id]
data = UAM_album.data[pl_id]
for album in albums:
fq = UAM_album[pl_id,album]
UAM_album[pl_id,album] = (fq*(OKAPI_K+1))/(fq + OKAPI_K*(1 - OKAPI_B + OKAPI_B * sum(data) / avg_document_length))
i += 1
if i % 1000 == 0:
print(i)
return UAM_album
UAM_album = get_UAM_album(tracks, playlist_tracks, target_playlists, OKAPI_K=1.7, OKAPI_B=0.75)
UAM_album_csc = UAM_album.tocsc()
IAM_album_csr_transpose = IAM_album.tocsr().transpose()
###Output
_____no_output_____
###Markdown
Playlist titles
###Code
def from_num_to_id(df, row_num, column = 'track_id'):
""" df must have a 'track_id' column """
return df.iloc[row_num][column]
def from_id_to_num(df, tr_id, column='track_id'):
""" df must have a 'track_id' column """
return np.where(df[column].values == tr_id)[0][0]
# Count distinct title tokens
token_playlists = {}
for row in playlists.itertuples():
for token in row.title:
if token in token_playlists:
token_playlists[token].append(row.playlist_id)
else:
token_playlists[token] = [row.playlist_id]
token_playlists_filtered = {}
for row in playlists[playlists.playlist_id.isin(target_playlists.playlist_id)].itertuples():
for token in row.title:
if token in token_playlists_filtered:
token_playlists_filtered[token].append(row.playlist_id)
else:
token_playlists_filtered[token] = [row.playlist_id]
# User Title Matrix UTM_title
def get_UTM_title(playlists, target_playlists, token_playlists, token_playlists_filtered, normalized=False):
unique_tokens = list(token_playlists.keys())
UTM_title = lil_matrix((len(target_playlists), max(unique_tokens)+1))
playlists_filtered = playlists[playlists.playlist_id.isin(target_playlists.playlist_id)]
num_playlists = len(playlists)
i = 0
for token,playlist_ids in token_playlists_filtered.items():
nq = len(token_playlists[token])
for playlist_id in playlist_ids:
UTM_title[from_id_to_num(target_playlists, playlist_id, column='playlist_id'),token] = math.log((num_playlists - nq + 0.5)/(nq + 0.5)) if normalized else 1
if i % 1000 == 0:
print(i)
i += 1
return UTM_title
UTM_title = get_UTM_title(playlists, target_playlists, token_playlists, token_playlists_filtered, normalized=True)
UTM_title
UTM_title = UTM_title.tocsr()
UTM_title_transpose = UTM_title.transpose().tocsc()
PS_title = np.dot(UTM_title, UTM_title_transpose)
PS_title = PS_title.todense()
PS_title.max()
# User Rating Matrix URM
def get_URM_target(tracks, playlists, playlist_tracks, track_playlists, target_playlists, target_tracks, normalized=False):
URM = lil_matrix((len(target_playlists), len(target_tracks)))
num_playlists = len(playlist_tracks)
i = 0
for row in track_playlists[track_playlists.track_id.isin(target_tracks.track_id)].itertuples():
track_id = row.track_id
#row.playlist_ids.sort()
nq = len(row.playlist_ids)
for pl_id in row.playlist_ids:
if pl_id in target_playlists.playlist_id:
URM[from_id_to_num(target_playlists, pl_id, column='playlist_id'),from_id_to_num(target_tracks, track_id, column='track_id')] = math.log((num_playlists - nq + 0.5)/(nq + 0.5)) if normalized else 1
if i % 1000 == 0:
print(i)
i += 1
return URM
URM_target = get_URM_target(tracks, playlists, playlist_tracks, track_playlists, target_playlists, target_tracks, normalized=False)
URM_target
URM_target = URM_target.tocsc()
PS_title_csr = csr_matrix(PS_title)
PS_title_csr
np.array(np.dot(PS_title_csr[0], URM_target).todense())[0]
###Output
_____no_output_____
###Markdown
Duration
###Code
def get_avg_duration(tr_ids):
sum_durations = 0
tracks_to_count = 0
for tr_id in tr_ids:
tr_dur = tracks.loc[tr_id].duration
if tr_dur >= 0:
sum_durations += tr_dur
tracks_to_count += 1
return 0 if tracks_to_count == 0 else sum_durations/tracks_to_count
MAX_DURATION = 400
tracks.duration = tracks.duration.apply(lambda dur: dur/1000) # to seconds
tracks.duration = tracks.duration.apply(lambda dur: min(dur, MAX_DURATION)) # clamp
playlist_tracks["avg_duration"] = playlist_tracks.track_ids.apply(lambda tr_ids: get_avg_duration(tr_ids))
DURATION_K = 400
###Output
_____no_output_____
###Markdown
Playcount
###Code
def get_avg_playcount(tr_ids):
sum_playcounts = 0
tracks_to_count = 0
for tr_id in tr_ids:
tr_plc = tracks.loc[tr_id].playcount
if tr_plc >= 0:
sum_playcounts += tr_plc
tracks_to_count += 1
return 0 if tracks_to_count == 0 else sum_playcounts/tracks_to_count
playlist_tracks["avg_playcount"] = playlist_tracks.track_ids.apply(lambda tr_ids: get_avg_playcount(tr_ids))
playlist_tracks.avg_playcount[:20]
PLAYCOUNT_K = 2500
###Output
_____no_output_____
###Markdown
Predictions
###Code
ALPHA = 1
BETA = 0.9
GAMMA = 0.5
def make_predictions(test=None, compute_MAP=False):
predictions = pd.DataFrame(target_playlists)
predictions.index = target_playlists['playlist_id']
predictions['track_ids'] = [np.array([]) for i in range(len(predictions))]
ttracks = set(target_tracks['track_id'].values)
test_good = get_playlist_track_list2(test)
test_good.index = test_good.playlist_id.apply(lambda pl_id: playlist_to_num[pl_id])
counter = 0
mean_ap = 0
for _,row in target_playlists.iterrows():
# Compute predictions for current playlist
pred = []
pl_id = row['playlist_id']
pl_tracks = set(playlist_tracks.loc[pl_id]['track_ids'])
simil = ALPHA * np.array(np.dot(UAM_album_csc[pl_id,:], IAM_album_csr_transpose).todense())[0]
simil += BETA * np.array(np.dot(UAM_csc[pl_id,:], IAM_csr_transpose).todense())[0]
#simil = np.array(np.dot(UAM_csc[pl_id,:], IAM_csr_transpose).todense())[0]
#simil = np.array(np.dot(PS_title_csr[from_id_to_num(target_playlists, pl_id, column='playlist_id'),:], URM_target).todense())[0]
simil += GAMMA * np.array(np.dot(UTM_csc[pl_id,:], ITM_csr_transpose).todense())[0]
#simil += DELTA * np.dot(U[pl_id,:], M2)
#simil = np.exp(-(np.abs(playlist_tracks.loc[pl_id].avg_duration - tracks.duration))/DURATION_K)
#simil = np.exp(-(np.abs(playlist_tracks.loc[pl_id].avg_playcount - tracks.playcount))/PLAYCOUNT_K).dropna()
sorted_ind = simil.argsort()
i = len(sorted_ind) - 1
c = 0
while i > 0 and c < 5:
#tr = from_num_to_id(target_tracks, sorted_ind[i], column='track_id')
tr = sorted_ind[i]
if (tr in ttracks) and (tr not in pl_tracks):
pred.append(num_to_tracks[tr])
c+=1
i-=1
predictions.loc[row['playlist_id']] = predictions.loc[row['playlist_id']].set_value('track_ids', np.array(pred))
# Update MAP
if compute_MAP:
correct = 0
ap = 0
for it, t in enumerate(pred):
if t in test_good.loc[pl_id]['track_ids']:
correct += 1
ap += correct / (it+1)
ap /= len(pred)
mean_ap += ap
counter += 1
if counter % 1000 == 0:
print(counter)
if compute_MAP:
print(mean_ap / counter)
predictions['playlist_id'] = predictions['playlist_id_tmp']
return predictions
#%%time
predictions = make_predictions(test=test, compute_MAP=True)
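# A minimal sketch (assumption: make_predictions reads the module-level ALPHA/BETA/GAMMA
# weights defined above): the hand-picked BETA could be scanned instead of set manually,
# each call printing the MAP obtained with that weight on the artist-similarity term.
for beta in [0.5, 0.75, 0.9, 1.0]:
    BETA = beta
    make_predictions(test=test, compute_MAP=True)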
###Output
1000
0.07674666666666677
2000
0.07922166666666648
3000
0.07679333333333296
###Markdown
Single ones:
* albums: 0.063
* artists: 0.054
* tags: 0.041
* URM: 0.035 with k = 400
* duration: 0.0002
* playcount: 0.0004
* playlist title similarity * URM not normalized: 0.004

albums + artists:
* BETA = 0.5: 0.074
* BETA = 0.75: 0.074
* BETA = 0.9: 0.075
* BETA = 1: 0.075

Chosen BETA: 0.9

albums + artists + tags:
* GAMMA = 0.8: 0.076
* GAMMA = 0.6:
###Code
list(map(lambda l: sum(l)/len(l) if len(l)>0 else 0, IAM_album.data[:100]))
###Output
_____no_output_____
###Markdown
SVD supervised
###Code
def from_num_to_id(df, row_num, column = 'track_id'):
""" df must have a 'track_id' column """
return df.iloc[row_num][column]
def from_id_to_num(df, tr_id, column='track_id'):
""" df must have a 'track_id' column """
return np.where(df[column].values == tr_id)[0][0]
def build_id_to_num_map(df, column):
a = pd.Series(np.arange(len(df)))
a.index = df[column]
return a
def build_num_to_id_map(df, column):
a = pd.Series(df[column])
a.index = np.arange(len(df))
return a
N_FEATURES = 5
N_EPOCHS = 5
userValue = np.zeros((URM.shape[0], N_FEATURES))
userValue += 0.1
itemValue = np.zeros((N_FEATURES,URM.shape[1]))
itemValue += 0.1
def predictRating(user, item, features):
return np.dot(userValue[user,:features+1], itemValue[:features+1,item])
lrate = 0.01
K = 0.02
def train_user(user, item, rating, feature):
err = (rating - predictRating(user, item, feature))
userValue[user,feature] += lrate * (err * itemValue[feature,item] - K*userValue[user,feature])
itemValue[feature,item] += lrate * (err * userValue[user,feature] - K*itemValue[feature, item])
URM = URM.tocoo()
%%time
for f in range(N_FEATURES):
for i in range(N_EPOCHS):
print("training feature {0}, stage {1}".format(f, i))
for r,c in zip(URM.row, URM.col):
train_user(r, c, 1, f)
userValue
itemValue
# URM is in COO format here, so read the stored values from URM.data instead of indexing
sum((v - predictRating(r, c, N_FEATURES-1))**2 for v, r, c in zip(URM.data, URM.row, URM.col))
###Output
_____no_output_____ |
prophet_prediction_usa.ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
data=pd.read_csv("/content/usa_formatted.csv")
data.info()
data['Last Update'] = pd.to_datetime(data['Last Update'])
usa=data[data['Country']=='US']
mchina=data[data['Country']=='Mainland China']
mchina=data[data['Country']=='China']
mchina.sort_values(['Last Update'])
hk=data[data['Country']=='Hong Kong']
hkgrouped_country=data.groupby("Country")[['Confirmed', 'Deaths', 'Recovered']]
grouped_country=data.groupby("Country")[['Confirmed', 'Deaths', 'Recovered']]
china=data[(data['Country']=='China') |( data['Country']=='Mainland China')|( data['Country']=='Hong Kong')]
china['Country'].replace('Mainland China','Chinese Sub',inplace=True)
china['Country'].replace('Hong Kong','Chinese Sub',inplace=True)
!pip install prophet
from fbprophet import Prophet
usa.columns=['ds','y','Deaths','Recovered']
usa=usa.drop(columns=['Recovered','Deaths'])
m = Prophet()
m.fit(usa)
future = m.make_future_dataframe(periods=7,include_history=True)
future.tail()
forecast = m.predict(future)
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
m.plot_components(forecast)
res=m.predict(future)
res
res.iloc[10]
res.iloc[10]
res.iloc[11]
m = Prophet()
m.fit(usa)
res=m.predict(future)
m.plot(res)
from fbprophet.plot import add_changepoints_to_plot
fig = m.plot(forecast)
a = add_changepoints_to_plot(fig.gca(), m, forecast)
m = Prophet(changepoint_prior_scale=0.5)
forecast = m.fit(usa).predict(future)
fig = m.plot(forecast)
m = Prophet(changepoint_prior_scale=0.001)
forecast = m.fit(usa).predict(future)
fig = m.plot(forecast)
###Output
INFO:fbprophet:Disabling yearly seasonality. Run prophet with yearly_seasonality=True to override this.
INFO:fbprophet:Disabling weekly seasonality. Run prophet with weekly_seasonality=True to override this.
INFO:fbprophet:n_changepoints greater than number of observations.Using 7.
###Markdown
Uncertainty in the trend The biggest source of uncertainty in the forecast is the potential for future trend changes. The time series we have seen already in this documentation show clear trend changes in the history. It's impossible to know for sure, so we do the most reasonable thing we can, and we assume that the future will see similar trend changes as the history. In particular, we assume that the average frequency and magnitude of trend changes in the future will be the same as that which we observe in the history. We project these trend changes forward and by computing their distribution we obtain uncertainty intervals.
###Code
forecast = Prophet(interval_width=0.95).fit(usa).predict(future)
###Output
INFO:fbprophet:Disabling yearly seasonality. Run prophet with yearly_seasonality=True to override this.
INFO:fbprophet:Disabling weekly seasonality. Run prophet with weekly_seasonality=True to override this.
INFO:fbprophet:n_changepoints greater than number of observations.Using 7.
###Markdown
Uncertainty in seasonality with full Bayesian sampling
###Code
m = Prophet(mcmc_samples=300)
forecast = m.fit(usa).predict(future)
fig = m.plot_components(forecast)
m.predictive_samples(future)
###Output
_____no_output_____
###Markdown
Fourier Order for Seasonalities Seasonalities are estimated using a partial Fourier sum (see also: Forecasting at Scale); a partial Fourier sum can approximate an arbitrary periodic signal. The number of terms in the partial sum (the order) is a parameter that determines how quickly the seasonality can change.
###Code
from fbprophet.plot import plot_yearly
m = Prophet(yearly_seasonality=20).fit(usa)
a = plot_yearly(m)
###Output
INFO:fbprophet:Disabling weekly seasonality. Run prophet with weekly_seasonality=True to override this.
INFO:fbprophet:n_changepoints greater than number of observations.Using 7.
###Markdown
Specifying Custom Seasonalities As an example, here we fit the data but replace the weekly seasonality with monthly seasonality. The monthly seasonality then will appear in the components plot:
###Code
m = Prophet(weekly_seasonality=False)
m.add_seasonality(name='monthly', period=30.5, fourier_order=5)
forecast = m.fit(usa).predict(future)
fig = m.plot_components(forecast)
###Output
INFO:fbprophet:Disabling yearly seasonality. Run prophet with yearly_seasonality=True to override this.
INFO:fbprophet:n_changepoints greater than number of observations.Using 7.
###Markdown
Sub-daily data
###Code
m = Prophet(changepoint_prior_scale=0.01).fit(usa)
future = m.make_future_dataframe(periods=50, freq='H')
fcst = m.predict(future)
fig = m.plot(fcst)
fig = m.plot_components(fcst)
###Output
_____no_output_____
###Markdown
Monthly data We can use Prophet to fit monthly data. However, the underlying model is continuous-time, which means that you can get strange results if you fit the model to monthly data and then ask for daily forecasts. This is the same issue from above where the dataset has regular gaps. When we fit the yearly seasonality, it only has data for the first of each month and the seasonality components for the remaining days are unidentifiable and overfit. This can be clearly seen by doing MCMC to see uncertainty in the seasonality:
###Code
m = Prophet(seasonality_mode='multiplicative').fit(usa)
future = m.make_future_dataframe(periods=3652)
fcst = m.predict(future)
fig = m.plot(fcst)
m = Prophet(seasonality_mode='multiplicative', mcmc_samples=300).fit(usa)
fcst = m.predict(future)
fig = m.plot_components(fcst)
###Output
INFO:fbprophet:Disabling yearly seasonality. Run prophet with yearly_seasonality=True to override this.
INFO:fbprophet:Disabling weekly seasonality. Run prophet with weekly_seasonality=True to override this.
INFO:fbprophet:n_changepoints greater than number of observations.Using 7.
WARNING:pystan:Rhat above 1.1 or below 0.9 indicates that the chains very likely have not mixed
WARNING:pystan:244 of 600 iterations ended with a divergence (40.7 %).
WARNING:pystan:Try running with adapt_delta larger than 0.8 to remove the divergences.
WARNING:pystan:236 of 600 iterations saturated the maximum tree depth of 10 (39.3 %)
WARNING:pystan:Run again with max_treedepth larger than 10 to avoid saturation
###Markdown
This is the same issue from above where the dataset has regular gaps. When we fit the yearly seasonality, it only has data for the first of each month and the seasonality components for the remaining days are unidentifiable and overfit. This can be clearly seen by doing MCMC to see uncertainty in the seasonality:
###Code
m = Prophet(seasonality_mode='multiplicative', mcmc_samples=300).fit(usa)
fcst = m.predict(future)
fig = m.plot_components(fcst)
###Output
INFO:fbprophet:Disabling yearly seasonality. Run prophet with yearly_seasonality=True to override this.
INFO:fbprophet:Disabling weekly seasonality. Run prophet with weekly_seasonality=True to override this.
INFO:fbprophet:n_changepoints greater than number of observations.Using 7.
WARNING:pystan:Rhat above 1.1 or below 0.9 indicates that the chains very likely have not mixed
WARNING:pystan:133 of 600 iterations ended with a divergence (22.2 %).
WARNING:pystan:Try running with adapt_delta larger than 0.8 to remove the divergences.
WARNING:pystan:297 of 600 iterations saturated the maximum tree depth of 10 (49.5 %)
WARNING:pystan:Run again with max_treedepth larger than 10 to avoid saturation
###Markdown
The seasonality has low uncertainty at the start of each month where there are data points, but has very high posterior variance in between.
###Code
future = m.make_future_dataframe(periods=5, freq='D')
fcst = m.predict(future)
fig = m.plot(fcst)
###Output
_____no_output_____
###Markdown
Diagnostics and Cross-Validation Prophet includes functionality for time series cross validation to measure forecast error using historical data. This is done by selecting cutoff points in the history, and for each of them fitting the model using data only up to that cutoff point. We can then compare the forecasted values to the actual values.
###Code
from fbprophet.diagnostics import cross_validation
df_cv = cross_validation(m, initial='8 days', period='1 days', horizon = '1 days')
df_cv.head()
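# A short follow-up using fbprophet's diagnostics module (imported above):
# performance_metrics summarizes the cross-validation frame into rolling error
# metrics such as MSE, RMSE, MAE and coverage of the uncertainty intervals.
from fbprophet.diagnostics import performance_metrics
df_p = performance_metrics(df_cv)
df_p.head()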
###Output
_____no_output_____ |
07_train/ml-containers/03_Run_ML_Training_SageMaker.ipynb | ###Markdown
Fine-Tuning a BERT Model and Creating a Text Classifier We have already performed the feature engineering to create BERT embeddings from the `review_body` text using the pre-trained BERT model, and split the dataset into train, validation and test files. To optimize for TensorFlow training, we saved the files in TFRecord format. Now, let's fine-tune the BERT model to our Customer Reviews Dataset and add a new classification layer to predict the `star_rating` for a given `review_body`.

As mentioned earlier, BERT's attention mechanism is called a Transformer. This is, not coincidentally, the name of the popular BERT Python library, "Transformers," maintained by a company called [HuggingFace](https://github.com/huggingface/transformers). We will use a variant of BERT called [DistilBERT](https://arxiv.org/pdf/1910.01108.pdf), which requires less memory and compute but maintains very good accuracy on our dataset.

DEMO 3: Run Model Training on Amazon SageMaker Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high quality models.
###Code
!pip install -q sagemaker==2.16.3
import boto3
import sagemaker
session = sagemaker.Session()
bucket = session.default_bucket()
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
###Output
_____no_output_____
###Markdown
Specify the Dataset in S3 We are using the train, validation, and test splits created in the previous section.
###Code
processed_train_data_s3_uri = "s3://fsx-container-demo/input/data/train"
!aws s3 ls $processed_train_data_s3_uri/
processed_validation_data_s3_uri = "s3://fsx-container-demo/input/data/validation"
!aws s3 ls $processed_validation_data_s3_uri/
processed_test_data_s3_uri = "s3://fsx-container-demo/input/data/test"
!aws s3 ls $processed_test_data_s3_uri/
###Output
_____no_output_____
###Markdown
Specify S3 `Distribution Strategy`
###Code
from sagemaker.inputs import TrainingInput
s3_input_train_data = TrainingInput(s3_data=processed_train_data_s3_uri, distribution="ShardedByS3Key")
s3_input_validation_data = TrainingInput(s3_data=processed_validation_data_s3_uri, distribution="ShardedByS3Key")
s3_input_test_data = TrainingInput(s3_data=processed_test_data_s3_uri, distribution="ShardedByS3Key")
print(s3_input_train_data.config)
print(s3_input_validation_data.config)
print(s3_input_test_data.config)
###Output
_____no_output_____
###Markdown
Setup Hyper-Parameters for Classification Layer
###Code
epochs = 3
learning_rate = 0.00001
epsilon = 0.00000001
train_batch_size = 128
validation_batch_size = 64
test_batch_size = 64
train_steps_per_epoch = 100
validation_steps = 10
test_steps = 10
use_xla = True
use_amp = True
max_seq_length = 64
freeze_bert_layer = True
run_validation = True
run_test = True
run_sample_predictions = True
###Output
_____no_output_____
###Markdown
Setup Metrics To Track Model Performance These sample log lines...

```
45/50 [=====>..] - ETA: 3s - loss: 0.425 - accuracy: 0.881
50/50 [=======>] - ETA: 0s - val_loss: 0.407 - val_accuracy: 0.885
```

...will produce the following 4 metrics in CloudWatch:

* `loss` = 0.425
* `accuracy` = 0.881
* `val_loss` = 0.407
* `val_accuracy` = 0.885
###Code
metrics_definitions = [
{"Name": "train:loss", "Regex": "loss: ([0-9\\.]+)"},
{"Name": "train:accuracy", "Regex": "accuracy: ([0-9\\.]+)"},
{"Name": "validation:loss", "Regex": "val_loss: ([0-9\\.]+)"},
{"Name": "validation:accuracy", "Regex": "val_accuracy: ([0-9\\.]+)"},
]
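# A quick sanity check (the sample line is taken from the log excerpt above) that the
# regular expressions really capture the numbers SageMaker will push to CloudWatch.
import re
sample_log_line = "45/50 [=====>..] - ETA: 3s - loss: 0.425 - accuracy: 0.881"
for metric in metrics_definitions:
    match = re.search(metric["Regex"], sample_log_line)
    if match:
        print(metric["Name"], "->", match.group(1))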
###Output
_____no_output_____
###Markdown
Setup Our BERT + TensorFlow Script to Run on SageMaker Prepare our TensorFlow model to run on the managed SageMaker service.
###Code
!pygmentize ./code/train.py
from sagemaker.tensorflow import TensorFlow
estimator = TensorFlow(
entry_point="train.py",
source_dir="code",
role=role,
instance_count=1,
instance_type="ml.c5.9xlarge",
use_spot_instances=True,
max_run=3600,
max_wait=3600,
volume_size=1024,
py_version="py3",
framework_version="2.1.0",
hyperparameters={
"epochs": epochs,
"learning_rate": learning_rate,
"epsilon": epsilon,
"train_batch_size": train_batch_size,
"validation_batch_size": validation_batch_size,
"test_batch_size": test_batch_size,
"train_steps_per_epoch": train_steps_per_epoch,
"validation_steps": validation_steps,
"test_steps": test_steps,
"use_xla": use_xla,
"use_amp": use_amp,
"max_seq_length": max_seq_length,
"freeze_bert_layer": freeze_bert_layer,
"run_validation": run_validation,
"run_test": run_test,
"run_sample_predictions": run_sample_predictions,
},
input_mode="Pipe",
metric_definitions=metrics_definitions,
)
###Output
_____no_output_____
###Markdown
Train the Model on SageMaker
###Code
estimator.fit(
inputs={"train": s3_input_train_data, "validation": s3_input_validation_data, "test": s3_input_test_data},
wait=False,
)
training_job_name = estimator.latest_training_job.name
print("Training Job Name: {}".format(training_job_name))
from IPython.core.display import display, HTML
display(
HTML(
'<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/jobs/{}">Training Job</a> After About 5 Minutes</b>'.format(
region, training_job_name
)
)
)
from IPython.core.display import display, HTML
display(
HTML(
'<b>Review <a target="blank" href="https://console.aws.amazon.com/cloudwatch/home?region={}#logStream:group=/aws/sagemaker/TrainingJobs;prefix={};streamFilter=typeLogStreamPrefix">CloudWatch Logs</a> After About 5 Minutes</b>'.format(
region, training_job_name
)
)
)
from IPython.core.display import display, HTML
display(
HTML(
'<b>Review <a target="blank" href="https://s3.console.aws.amazon.com/s3/buckets/{}/{}/?region={}&tab=overview">S3 Output Data</a> After The Training Job Has Completed</b>'.format(
bucket, training_job_name, region
)
)
)
%%time
estimator.latest_training_job.wait(logs=False)
###Output
_____no_output_____
###Markdown
Wait Until the ^^ Training Job ^^ Completes Above!
###Code
!aws s3 ls s3://$bucket/$training_job_name/output/
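# Once the job has finished, the trained model artifact location can also be read
# directly from the estimator object:
print(estimator.model_data)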
###Output
_____no_output_____ |
Week 5/Week5-Visualization/05b_Exploring Indicator's Across Countries.ipynb | ###Markdown
World Development Indicators Exploring Data Visualization Using Matplotlib
###Code
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
data = pd.read_csv('Indicators.csv')
data.shape
countries = data['CountryName'].unique().tolist()
indicators = data['IndicatorName'].unique().tolist()
###Output
_____no_output_____
###Markdown
This is a really large dataset, at least in terms of the number of rows. But with 6 columns, what does this hold?
###Code
data.head(2)
###Output
_____no_output_____
###Markdown
Looks like it has different indicators for different countries, with the year and value of each indicator. We already saw how the USA's per-capita CO2 production related to other countries; let's see if we can find some more indicators in common between countries. To have some fun, we've picked countries randomly, but we stored our random results so you can rerun it with the same answers.
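One way to make the random picks below reproducible across reruns (an alternative to storing the results, as described above) is to seed Python's `random` module before sampling; the seed value here is an arbitrary illustrative choice:

```python
random.seed(42)
```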
###Code
# Filter 1
# Picks years of choice
yearsFilter = [2010, 2011, 2012, 2013, 2014]
# Filter 2
# Pick 2 countries randomly
countryFilter = random.sample(countries, 2)
countryFilter
# Filter 3
# Pick 1 Indicator randomly
indicatorsFilter = random.sample(indicators, 1)
indicatorsFilter
###Output
_____no_output_____
###Markdown
Problem: We're missing data. Not all countries have all indicators for all years. To solve this, we'll need to find two countries and an indicator for which we have data over this time range.
###Code
filterMesh = (data['CountryName'] == countryFilter[0]) & (data['IndicatorName'].isin(indicatorsFilter)) & (data['Year'].isin(yearsFilter))
country1_data = data.loc[filterMesh]
len(country1_data)
filterMesh = (data['CountryName'] == countryFilter[1]) & (data['IndicatorName'].isin(indicatorsFilter)) & (data['Year'].isin(yearsFilter))
country2_data = data.loc[filterMesh]
len(country2_data)
###Output
_____no_output_____
###Markdown
So let's pick indicators and countries which have data over this time range. The code below will randomly pick countries and indicators until it finds two countries that have data for an indicator over this time frame. We used it to produce the fixed values you see later; feel free to play with this yourself!
###Code
filteredData1 = []
filteredData2 = []
'''
Plot:
countryFilter: pick two countries,
indicatorsFilter: pick an indicator,
yearsFilter: plot for years in yearsFilter
'''
# problem - not all countries have all indicators so if you go to visualize, it'll have missing data.
# randomly picking two indicators and countries, do these countries have valid data over those years.
# brings up the discussion of missing data/ missing fields
# until we find full data
while(len(filteredData1) < len(yearsFilter)-1):
# pick new indicator
indicatorsFilter = random.sample(indicators, 1)
countryFilter = random.sample(countries, 2)
# how many rows are there that have this country name, this indicator, and this year. Mesh gives bool vector
filterMesh = (data['CountryName'] == countryFilter[0]) & (data['IndicatorName'].isin(indicatorsFilter)) & (data['Year'].isin(yearsFilter))
# which rows have this condition to be true?
filteredData1 = data.loc[filterMesh]
filteredData1 = filteredData1[['CountryName','IndicatorName','Year','Value']]
# need to print this only when our while condition is true
if(len(filteredData1) < len(yearsFilter)-1):
print('Skipping ... %s since very few rows (%d) found' % (indicatorsFilter, len(filteredData1)))
# What did we pick eventually ?
indicatorsFilter
len(filteredData1)
'''
Country 2
'''
while(len(filteredData2) < len(filteredData1)-1):
filterMesh = (data['CountryName'] == countryFilter[1]) & (data['IndicatorName'].isin(indicatorsFilter)) & (data['Year'].isin(yearsFilter))
filteredData2 = data.loc[filterMesh]
filteredData2 = filteredData2[['CountryName','IndicatorName','Year','Value']]
#pick new indicator
old = countryFilter[1]
countryFilter[1] = random.sample(countries, 1)[0]
if(len(filteredData2) < len(filteredData1)-1):
print('Skipping ... %s, since very few rows (%d) found' % (old, len(filteredData2)))
if len(filteredData1) < len(filteredData2):
small = len(filteredData1)
else:
small = len(filteredData2)
filteredData1=filteredData1[0:small]
filteredData2=filteredData2[0:small]
filteredData1
filteredData2
###Output
_____no_output_____
###Markdown
Matplotlib: Additional Examples Example: Scatter Plot Now that we have the data for two countries for the same indicator, let's plot them using a scatterplot.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
fig, axis = plt.subplots()
# Grid lines, Xticks, Xlabel, Ylabel
axis.yaxis.grid(True)
axis.set_title(indicatorsFilter[0],fontsize=10)
axis.set_xlabel(filteredData1['CountryName'].iloc[0],fontsize=10)
axis.set_ylabel(filteredData2['CountryName'].iloc[0],fontsize=10)
X = filteredData1['Value']
Y = filteredData2['Value']
axis.scatter(X, Y)
###Output
_____no_output_____
###Markdown
Example: Line Plot Here we'll plot the indicator over time for each country.
###Code
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(20, 10))
ax.set_ylim(min(0,filteredData1['Value'].min()), 2*filteredData1['Value'].max())
ax.set_title('Indicator Name : ' + indicatorsFilter[0])
ax.plot(filteredData1['Year'], filteredData1['Value'] , 'r--', label=filteredData1['CountryName'].unique())
# Add the legend
legend = plt.legend(loc = 'upper center',
shadow=True,
prop={'weight':'roman','size':'xx-large'})
# Rectangle around the legend
frame = legend.get_frame()
frame.set_facecolor('.95')
plt.show()
###Output
_____no_output_____
###Markdown
Let's plot country 2
###Code
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(20, 10))
# Adjust the lower and upper limit to bring the graph at center
ax.set_ylim(min(0,filteredData2['Value'].min()), 2*filteredData2['Value'].max())
ax.set_title('Indicator Name : ' + indicatorsFilter[0])
ax.plot(filteredData2['Year'], filteredData2['Value'] ,
label=filteredData2['CountryName'].unique(),
color="purple", lw=1, ls='-',
marker='s', markersize=20,
markerfacecolor="yellow", markeredgewidth=4, markeredgecolor="blue")
# Add the legend
legend = plt.legend(loc = 'upper left',
shadow=True,
prop={'weight':'roman','size':'xx-large'})
# Rectangle around the legend
frame = legend.get_frame()
frame.set_facecolor('.95')
plt.show()
###Output
_____no_output_____
###Markdown
Example (random datasets)
###Code
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import matplotlib.pyplot as plt
import numpy as np
countof_angles = 36
countof_radii = 8
# array - radii
array_rad = np.linspace(0.125, 1.0, countof_radii)
# array - angles
array_ang = np.linspace(0, 2*np.pi, countof_angles, endpoint=False)
# repeat all angles per radius
array_ang = np.repeat(array_ang[...,np.newaxis], countof_radii, axis=1)
# from polar (radii, angles) coords to cartesian (x, y) coords
x = np.append(0, (array_rad*np.cos(array_ang)).flatten())
y = np.append(0, (array_rad*np.sin(array_ang)).flatten())
# saddle shaped surface
z = np.sin(-x*y)
fig = plt.figure(figsize=(20,10))
ax = fig.gca(projection='3d')
ax.plot_trisurf(x, y, z, cmap=cm.autumn, linewidth=0.2)
plt.show()
fig.savefig("vis_3d.png")
###Output
_____no_output_____
###Markdown
Example (random dataset)
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
n_points = 200
radius = 2 * np.random.rand(n_points)
angles = 2 * (np.pi) * np.random.rand(n_points)
area = 400 * (radius**2) * np.random.rand(n_points)
colors = angles
fig = plt.figure(figsize=(20,10))
ax = plt.subplot(111, polar=True)
c = plt.scatter(angles, radius, c=colors, s=area, cmap=plt.cm.hsv)
c.set_alpha(0.95)  # alpha must lie in [0, 1]
plt.show()
fig.savefig("vis_bubbleplot.png")
###Output
_____no_output_____
###Markdown
Example 4: Box Plots (random datasets)
###Code
np.random.seed(452)
# Three ararys of 100 points each
A1 = np.random.normal(0, 1, 100)
A2 = np.random.normal(0, 2, 100)
A3 = np.random.normal(0, 1.5, 100)
# Concatenate the three arrays
data = [ A1, A2, A3 ]
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 10))
# Box plot: Notch Shape
bplot1 = axes[1].boxplot(data,
notch=True,
vert=True, # vertical alignment
patch_artist=True) # color
# Box plot: Rectangular
bplot2 = axes[0].boxplot(data,
vert=True, # vertical alignment
patch_artist=True) # color
colors = ['tomato', 'darkorchid', 'lime']
# more colors here: http://matplotlib.org/examples/color/named_colors.html
for bplot in (bplot1, bplot2):
for patch, color in zip(bplot['boxes'], colors):
patch.set_facecolor(color)
# Grid lines, Xticks, Xlabel, Ylabel
for axis in axes:
axis.yaxis.grid(True)
axis.set_xticks([y for y in range(len(data))], )
axis.set_xlabel('Sample X-Label',fontsize=20)
axis.set_ylabel('Sample Y-Label',fontsize=20)
# Xtick labels
plt.setp(axes, xticks=[y for y in range(len(data))],
xticklabels=['X1', 'X2', 'X3'])
plt.show()
fig.savefig("vis_boxplot.png")
###Output
_____no_output_____ |
Functional_Thinking/Lab/28F-collections.ipynb | ###Markdown
`collections.namedtuple` `namedtuple()` is a factory function; that is, it generates a class, and not an instance of a class (an object). So there are two steps:
1. First, use `namedtuple()` to generate a `class`
2. Then create an instance of this `class`
###Code
from collections import namedtuple
Person = namedtuple('Person', ['name', 'age'])
jay_z = Person(name='Sean Carter', age=47)
print('%s is %s years old' % (jay_z.name, jay_z.age))
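# namedtuple instances are immutable, but _replace returns an updated copy and
# _asdict converts the record to a dict-like mapping:
older = jay_z._replace(age=48)
print(older.age)
print(jay_z._asdict())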
###Output
Sean Carter is 47 years old
###Markdown
`collections.OrderedDict` If your code requires that the order of elements in a `dict` is preserved, use `OrderedDict`, even if your version of Python (e.g. CPython 3.6) already does this! Otherwise, `OrderedDict` behaves exactly like a regular `dict`.
###Code
from collections import OrderedDict
d = OrderedDict([
('Lizard', 'Reptile'),
('Whale', 'Mammal')
])
for species, class_ in d.items():
print('%s is a %s' % (species, class_))
###Output
Lizard is a Reptile
Whale is a Mammal
###Markdown
`collections.defaultdict` Use `defaultdict` if you want non-existing keys to return a default value, rather than raise a `KeyError`. Otherwise, `defaultdict` behaves exactly like a regular `dict`.
###Code
from collections import defaultdict
favorite_programming_languages = {
'Claire' : 'Assembler',
'John' : 'Ruby',
'Sarah' : 'JavaScript'
}
d = defaultdict(lambda: 'Python')
d.update(favorite_programming_languages)
print(d['John'])
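# Looking up a name that is not in the dict now returns the default value instead
# of raising a KeyError ('Mary' is just an illustrative missing key):
print(d['Mary'])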
###Output
Ruby
|
JupyterNotebooks/Part 4 - Stacked Ensemble.ipynb | ###Markdown
Voting Classifier Ensemble
###Code
X=df[['didPurchase','rating']]
y=df['doRecommend']
clf1 = LogisticRegression(random_state=1)
clf2 = RandomForestClassifier(random_state=1)
clf3 = GaussianNB()
clf4 = KNeighborsClassifier(n_neighbors=10)
eclf = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3),('knn',clf4)], voting='hard')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=None)
eclf.fit(X_train, y_train)
eclf.score( X_test, y_test)
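# A possible variation on the ensemble above: 'soft' voting averages the base
# classifiers' predict_proba outputs instead of taking a majority vote; all four
# base estimators used here expose predict_proba, so it is a drop-in change.
eclf_soft = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3), ('knn', clf4)], voting='soft')
eclf_soft.fit(X_train, y_train)
eclf_soft.score(X_test, y_test)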
###Output
_____no_output_____
###Markdown
Voting classifier gives an accuracy of 95%
###Code
for clf, label in zip([clf1, clf2, clf3,clf4, eclf], ['Logistic Regression', 'Random Forest', 'naive Bayes', 'KNN','Ensemble']):
scores = cross_val_score(clf, X, y, cv=20, scoring='accuracy')
print("Accuracy: %0.3f (+/- %0.3f) [%s]" % (scores.mean(), scores.std(), label))
###Output
Accuracy: 0.944 (+/- 0.035) [Logistic Regression]
Accuracy: 0.954 (+/- 0.029) [Random Forest]
Accuracy: 0.948 (+/- 0.036) [naive Bayes]
Accuracy: 0.947 (+/- 0.037) [KNN]
Accuracy: 0.946 (+/- 0.037) [Ensemble]
###Markdown
*** Stacked Ensemble
###Code
training, valid, ytraining, yvalid = train_test_split(X_train, y_train, test_size=0.3, random_state=0)
clf2.fit(training,ytraining)
clf4.fit(training,ytraining)
preds1=clf2.predict(valid)
preds2=clf4.predict(valid)
test_preds1=clf2.predict(X_test)
test_preds2=clf4.predict(X_test)
stacked_predictions=np.column_stack((preds1,preds2))
stacked_test_predictions=np.column_stack((test_preds1,test_preds2))
meta_model=LinearRegression()
meta_model.fit(stacked_predictions,yvalid)
final_predictions=meta_model.predict(stacked_test_predictions)
count=[];
y_list=y_test.tolist()
for i in range(len(y_list)):
if (y_list[i]==np.round(final_predictions[i])):
count.append("Correct")
else:
count.append("Incorrect")
import seaborn as sns
sns.countplot(x=count)
accuracy_score(y_test,np.round(final_predictions))
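# A compact alternative sketch (assumes scikit-learn >= 0.22): StackingClassifier
# builds the same two-level model and generates the meta-model's training
# predictions with internal cross-validation, replacing the manual hold-out split above.
from sklearn.ensemble import StackingClassifier
stack = StackingClassifier(
    estimators=[('rf', RandomForestClassifier(random_state=1)),
                ('knn', KNeighborsClassifier(n_neighbors=10))],
    final_estimator=LogisticRegression(),
    cv=5)
stack.fit(X_train, y_train)
stack.score(X_test, y_test)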
###Output
_____no_output_____
###Markdown
*** Stacked Ensemble with Cross validation
###Code
def fit_stack(clf, training, valid, ytraining,X_test,stacked_predictions,stacked_test_predictions):
clf.fit(training,ytraining)
preds=clf.predict(valid)
test_preds=clf.predict(X_test)
stacked_predictions=np.append(stacked_predictions,preds)
stacked_test_predictions=np.append(stacked_test_predictions,test_preds)
return {'pred':stacked_predictions,'test':stacked_test_predictions}
def stacked():
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=None)
training, valid, ytraining, yvalid = train_test_split(X_train, y_train, test_size=0.3, random_state=None)
#kf = RepeatedKFold(n_splits=5, n_repeats=5, random_state=None)
#for train_index, test_index in kf.split(X):
# training, valid = X.iloc[train_index], X.iloc[test_index]
# ytraining, yvalid = y.iloc[train_index], y.iloc[test_index]
clf2.fit(training,ytraining)
clf4.fit(training,ytraining)
preds1=clf2.predict(valid)
preds2=clf4.predict(valid)
test_preds1=clf2.predict(X_test)
test_preds2=clf4.predict(X_test)
stacked_predictions=np.column_stack((preds1,preds2))
stacked_test_predictions=np.column_stack((test_preds1,test_preds2))
#print(stacked_predictions.shape)
#print(stacked_test_predictions.shape)
meta_model=KNeighborsClassifier(n_neighbors=10)
meta_model.fit(stacked_predictions,yvalid)
final_predictions=meta_model.predict(stacked_test_predictions)
return accuracy_score(y_test,final_predictions)
accuracies=[]
for i in range(10):
accuracies.append(stacked())
print("%0.3f" % accuracies[i])
print(accuracies)
mean_acc=sum(accuracies) / float(len(accuracies))
print("Mean %0.3f" % mean_acc)
###Output
0.957
0.954
0.956
0.958
0.957
0.958
0.954
0.957
0.957
0.932
[0.9566217548471903, 0.9542744472090512, 0.9563400779306136, 0.9580301394300738, 0.9567156471527158, 0.9575606779024459, 0.953898877986949, 0.9569973240692925, 0.9566217548471903, 0.9318341861884418]
Mean 0.954
###Markdown
Repeated KFold with X and y
###Code
accuracies=[]
def stacked2():
from sklearn.metrics import accuracy_score
#X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=None)
#training, valid, ytraining, yvalid = train_test_split(X_train, y_train, test_size=0.3, random_state=None)
kf = RepeatedKFold(n_splits=5, n_repeats=2, random_state=None)
for train_index, test_index in kf.split(X):
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
training, valid, ytraining, yvalid = train_test_split(X_train, y_train, test_size=0.3, random_state=None)
stacked_predictions=[]
stacked_test_predictions=[]
result=fit_stack(clf2, training, valid, ytraining,X_test,stacked_predictions,stacked_test_predictions)
stacked_predictions=result['pred']
stacked_test_predictions=result['test']
result=fit_stack(clf4, training, valid, ytraining,X_test,stacked_predictions,stacked_test_predictions)
stacked_predictions=result['pred']
stacked_test_predictions=result['test']
result=fit_stack(clf3, training, valid, ytraining,X_test,stacked_predictions,stacked_test_predictions)
stacked_predictions=result['pred']
stacked_test_predictions=result['test']
meta_model=KNeighborsClassifier(n_neighbors=10)
meta_model.fit(stacked_predictions.reshape(-1,3),yvalid)
final_predictions=meta_model.predict(stacked_test_predictions.reshape(-1,3))
acc=accuracy_score(y_test,final_predictions)
accuracies.append(acc)
print(accuracies)
mean_acc=sum(accuracies) / float(len(accuracies))
print("Mean %0.3f" % mean_acc)
stacked2()
###Output
[0.9321174565171467, 0.9322582916696007, 0.9308499401450602, 0.9314788732394366, 0.9322535211267605, 0.9307795225688332, 0.9312724456024224, 0.9318357862122386, 0.9361267605633803, 0.928943661971831]
Mean 0.932
###Markdown
Repeated KFold with X_train and y_train
###Code
accuracies=[]
def stacked1():
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=None)
kf = RepeatedKFold(n_splits=5, n_repeats=1, random_state=None)
for train_index, test_index in kf.split(X_train):
training, valid = X_train.iloc[train_index], X_train.iloc[test_index]
ytraining, yvalid = y_train.iloc[train_index], y_train.iloc[test_index]
stacked_predictions=[]
stacked_test_predictions=[]
result=fit_stack(clf2, training, valid, ytraining,X_test,stacked_predictions,stacked_test_predictions)
stacked_predictions=result['pred']
stacked_test_predictions=result['test']
result=fit_stack(clf4, training, valid, ytraining,X_test,stacked_predictions,stacked_test_predictions)
stacked_predictions=result['pred']
stacked_test_predictions=result['test']
result=fit_stack(clf3, training, valid, ytraining,X_test,stacked_predictions,stacked_test_predictions)
stacked_predictions=result['pred']
stacked_test_predictions=result['test']
meta_model=KNeighborsClassifier(n_neighbors=10)
meta_model.fit(stacked_predictions.reshape(-1,3),yvalid)
final_predictions=meta_model.predict(stacked_test_predictions.reshape(-1,3))
acc=accuracy_score(y_test,final_predictions)
accuracies.append(acc)
print(accuracies)
mean_acc=sum(accuracies) / float(len(accuracies))
print("Mean %0.3f" % mean_acc)
stacked1()
###Output
[0.932021970799493, 0.932021970799493, 0.932021970799493, 0.932021970799493, 0.932021970799493]
Mean 0.932
|
Chinese_rationality_analasis/chinese_sentiment_analysis.ipynb | ###Markdown
Tutorial for Chinese Sentiment analysis with hotel review data
Dependencies
Python 3.5, numpy, pickle, keras, tensorflow, [jieba](https://github.com/fxsjy/jieba)
Optional for plotting
pylab, scipy
###Code
from os import listdir
from os.path import isfile, join
import jieba
import codecs
from langconv import * # convert Traditional Chinese characters to Simplified Chinese characters
import pickle
import random
from keras.models import Sequential
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import GRU
from keras.preprocessing.text import Tokenizer
from keras.layers.core import Dense
from keras.utils import to_categorical
from keras.preprocessing.sequence import pad_sequences
from keras.callbacks import TensorBoard
###Output
_____no_output_____
###Markdown
Helper functions to pickle and load stuff
###Code
def __pickleStuff(filename, stuff):
save_stuff = open(filename, "wb")
pickle.dump(stuff, save_stuff)
save_stuff.close()
def __loadStuff(filename):
saved_stuff = open(filename,"rb")
stuff = pickle.load(saved_stuff)
saved_stuff.close()
return stuff
###Output
_____no_output_____
###Markdown
Get lists of files, positive and negative files
###Code
dataBaseDirPos = "./data/ChnSentiCorp_htl_ba_6000/pos/"
dataBaseDirNeg = "./data/ChnSentiCorp_htl_ba_6000/neg/"
positiveFiles = [dataBaseDirPos + f for f in listdir(dataBaseDirPos) if isfile(join(dataBaseDirPos, f))]
negativeFiles = [dataBaseDirNeg + f for f in listdir(dataBaseDirNeg) if isfile(join(dataBaseDirNeg, f))]
###Output
_____no_output_____
###Markdown
Show the number of samples
###Code
print(len(positiveFiles))
print(len(negativeFiles))
###Output
2916
3000
###Markdown
Have a look at what's in a file (one hotel review)
###Code
filename = positiveFiles[0]
with codecs.open(filename, "r", encoding="gb2312") as doc_file:
text=doc_file.read()
print(text)
###Output
如果是我一个人再到东莞出差的话,我还会住在东莞宾馆,但是如果我和我的老板一起来东莞的话,可能会住到索菲特去。
我喜欢东莞宾馆的环境和它的几个餐厅,身处闹市但是却很安静,很舒服,宾馆的服务也很到位。
对于普通的商务旅行者来说,东莞宾馆的标准间是最物超所值的。
###Markdown
Test removing stop words
A demo of what it looks like to tokenize a sentence and remove stop words.
###Code
filename = positiveFiles[110]
with codecs.open(filename, "r", encoding="gb2312") as doc_file:
text=doc_file.read()
text = text.replace("\n", "")
text = text.replace("\r", "")
print("==Orginal==:\n\r{}".format(text))
stopwords = [ line.rstrip() for line in codecs.open('./data/chinese_stop_words.txt',"r", encoding="utf-8") ]
seg_list = jieba.cut(text, cut_all=False)
final =[]
seg_list = list(seg_list)
for seg in seg_list:
if seg not in stopwords:
final.append(seg)
print("==Tokenized==\tToken count:{}\n\r{}".format(len(seg_list)," ".join(seg_list)))
print("==Stop Words Removed==\tToken count:{}\n\r{}".format(len(final)," ".join(final)))
###Output
==Original==:
别墅型的酒店,非常特别,离海边很近.消费很平价
==Tokenized== Token count:15
别墅 型 的 酒店 , 非常 特别 , 离 海边 很近 . 消费 很 平价
==Stop Words Removed== Token count:8
别墅 型 酒店 特别 海边 很近 消费 平价
###Markdown
Prepare "doucments", a list of tuplesSome files contain abnormal encoding characters which encoding GB2312 will complain about. Solution: read as bytes then decode as GB2312 line by line, skip lines with abnormal encodings. We also convert any traditional Chinese characters to simplified Chinese characters.
###Code
documents = []
for filename in positiveFiles:
text = ""
with codecs.open(filename, "rb") as doc_file:
for line in doc_file:
try:
line = line.decode("GB2312")
except:
continue
text+=Converter('zh-hans').convert(line)# Convert from traditional to simplified Chinese
text = text.replace("\n", "")
text = text.replace("\r", "")
documents.append((text, "pos"))
for filename in negativeFiles:
text = ""
with codecs.open(filename, "rb") as doc_file:
for line in doc_file:
try:
line = line.decode("GB2312")
except:
continue
text+=Converter('zh-hans').convert(line)# Convert from traditional to simplified Chinese
text = text.replace("\n", "")
text = text.replace("\r", "")
documents.append((text, "neg"))
###Output
_____no_output_____
###Markdown
Optional step to save/load the documents as a pickle file
###Code
# Uncomment those two lines to save/load the documents for later use since the step above takes a while
# __pickleStuff("./data/chinese_sentiment_corpus.p", documents)
# documents = __loadStuff("./data/chinese_sentiment_corpus.p")
print(len(documents))
print(documents[4000])
###Output
5916
('优点;有电梯(4层开封宾馆连电梯都没有),晚上好像没有小姐电话骚扰(开封宾馆骚扰到最后干脆和按摩小姐电话聊天起来了)缺点;房间象招待所,设备老化,能算3星?价格偏高,性价比不好!门口乱哄哄的,感觉不好!', 'neg')
###Markdown
Shuffle the data
###Code
random.shuffle(documents)
###Output
_____no_output_____
###Markdown
Prepare the input and output for the model
Each input (hotel review) will be a list of tokens, and the output will be one token ("pos" or "neg"). The stop words are not removed here since the dataset is relatively small and removing them does not save much training time.
###Code
# Tokenize only
totalX = []
totalY = [str(doc[1]) for doc in documents]
for doc in documents:
seg_list = jieba.cut(doc[0], cut_all=False)
seg_list = list(seg_list)
totalX.append(seg_list)
#Switch to below code to experiment with removing stop words
# Tokenize and remove stop words
# totalX = []
# totalY = [str(doc[1]) for doc in documents]
# stopwords = [ line.rstrip() for line in codecs.open('./data/chinese_stop_words.txt',"r", encoding="utf-8") ]
# for doc in documents:
# seg_list = jieba.cut(doc[0], cut_all=False)
# seg_list = list(seg_list)
# Uncomment below code to experiment with removing stop words
# final =[]
# for seg in seg_list:
# if seg not in stopwords:
# final.append(seg)
# totalX.append(final)
###Output
_____no_output_____
###Markdown
Visualize the distribution of sentence lengths
Decide the max input sequence length; here we cover up to 60% of the sentences. A longer input sequence takes more training time but could improve prediction accuracy.
###Code
import numpy as np
import scipy.stats as stats
import pylab as pl
h = sorted([len(sentence) for sentence in totalX])
maxLength = h[int(len(h) * 0.60)]
print("Max length is: ",h[len(h)-1])
print("60% cover length up to: ",maxLength)
h = h[:5000]
fit = stats.norm.pdf(h, np.mean(h), np.std(h)) #this is a fitting indeed
pl.plot(h,fit,'-o')
pl.hist(h,normed=True) #use this to draw histogram of your data
pl.show()
###Output
Max length is: 1804
60% cover length up to: 68
###Markdown
Words to number tokens, padding
Pad each input sequence to the max input length if it is shorter.
Save the input tokenizer, since we need to use the same tokenizer for our new prediction data.
###Code
totalX = [" ".join(wordslist) for wordslist in totalX] # Keras Tokenizer expect the words tokens to be seperated by space
input_tokenizer = Tokenizer(30000) # Initial vocab size
input_tokenizer.fit_on_texts(totalX)
vocab_size = len(input_tokenizer.word_index) + 1
print("input vocab_size:",vocab_size)
totalX = np.array(pad_sequences(input_tokenizer.texts_to_sequences(totalX), maxlen=maxLength))
__pickleStuff("./data/input_tokenizer_chinese.p", input_tokenizer)
###Output
input vocab_size: 22123
###Markdown
Output, array of 0s and 1s
###Code
target_tokenizer = Tokenizer(3)
target_tokenizer.fit_on_texts(totalY)
print("output vocab_size:",len(target_tokenizer.word_index) + 1)
totalY = np.array(target_tokenizer.texts_to_sequences(totalY)) -1
totalY = totalY.reshape(totalY.shape[0])
totalY[40:50]
###Output
_____no_output_____
###Markdown
Turn output 0s and 1s into categories (one-hot vectors)
###Code
totalY = to_categorical(totalY, num_classes=2)
totalY[40:50]
output_dimen = totalY.shape[1] # which is 2
###Output
_____no_output_____
###Markdown
Save meta data for later prediction
maxLength: the input sequence length
vocab_size: input vocab size
output_dimen: which is 2 in this example (pos or neg)
sentiment_tag: either ["neg","pos"] or ["pos","neg"], matching the target tokenizer
###Code
target_reverse_word_index = {v: k for k, v in list(target_tokenizer.word_index.items())}
sentiment_tag = [target_reverse_word_index[1],target_reverse_word_index[2]]
metaData = {"maxLength":maxLength,"vocab_size":vocab_size,"output_dimen":output_dimen,"sentiment_tag":sentiment_tag}
__pickleStuff("./data/meta_sentiment_chinese.p", metaData)
###Output
_____no_output_____
###Markdown
Build the Model, train and save it
The training progress is logged to TensorBoard; we can look at it by cd'ing into the directory "./Graph/sentiment_chinese" and running "python -m tensorflow.tensorboard --logdir=.".
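On more recent TensorFlow versions, TensorBoard can also be opened inline in the notebook via the bundled Jupyter extension (a sketch; the exact command depends on the installed TF/TensorBoard version):
```python
# Load the TensorBoard notebook extension and point it at the log directory.
%load_ext tensorboard
%tensorboard --logdir ./Graph/sentiment_chinese
```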
###Code
embedding_dim = 256
model = Sequential()
model.add(Embedding(vocab_size, embedding_dim,input_length = maxLength))
# Each input would have a size of (maxLength x 256) and each of these 256 sized vectors are fed into the GRU layer one at a time.
# All the intermediate outputs are collected and then passed on to the second GRU layer.
model.add(GRU(256, dropout=0.9, return_sequences=True))
# Using the intermediate outputs, we pass them to another GRU layer and collect the final output only this time
model.add(GRU(256, dropout=0.9))
# The output is then sent to a fully connected layer that would give us our final output_dim classes
model.add(Dense(output_dimen, activation='softmax'))
# We use the adam optimizer instead of standard SGD since it converges much faster
tbCallBack = TensorBoard(log_dir='./Graph/sentiment_chinese', histogram_freq=0,
write_graph=True, write_images=True)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
model.fit(totalX, totalY, validation_split=0.1, batch_size=32, epochs=20, verbose=1, callbacks=[tbCallBack])
model.save('./data/sentiment_chinese_model.HDF5')
print("Saved model!")
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 68, 256) 5663488
_________________________________________________________________
gru_1 (GRU) (None, 68, 256) 393984
_________________________________________________________________
gru_2 (GRU) (None, 256) 393984
_________________________________________________________________
dense_1 (Dense) (None, 2) 514
=================================================================
Total params: 6,451,970
Trainable params: 6,451,970
Non-trainable params: 0
_________________________________________________________________
Train on 5324 samples, validate on 592 samples
Epoch 1/20
5324/5324 [==============================] - 49s 9ms/step - loss: 0.6808 - acc: 0.5640 - val_loss: 0.5026 - val_acc: 0.7770
Epoch 2/20
5324/5324 [==============================] - 49s 9ms/step - loss: 0.5421 - acc: 0.7483 - val_loss: 0.4294 - val_acc: 0.8074
Epoch 3/20
5324/5324 [==============================] - 48s 9ms/step - loss: 0.4263 - acc: 0.8276 - val_loss: 0.4619 - val_acc: 0.8041
Epoch 4/20
5324/5324 [==============================] - 49s 9ms/step - loss: 0.3507 - acc: 0.8650 - val_loss: 0.3194 - val_acc: 0.8834
Epoch 5/20
5324/5324 [==============================] - 49s 9ms/step - loss: 0.2998 - acc: 0.8866 - val_loss: 0.2837 - val_acc: 0.8936
Epoch 6/20
5324/5324 [==============================] - 49s 9ms/step - loss: 0.2695 - acc: 0.9006 - val_loss: 0.2839 - val_acc: 0.8986
Epoch 7/20
5324/5324 [==============================] - 49s 9ms/step - loss: 0.2174 - acc: 0.9222 - val_loss: 0.2801 - val_acc: 0.8936
Epoch 8/20
5324/5324 [==============================] - 49s 9ms/step - loss: 0.1895 - acc: 0.9324 - val_loss: 0.2746 - val_acc: 0.8986
Epoch 9/20
5324/5324 [==============================] - 49s 9ms/step - loss: 0.1660 - acc: 0.9369 - val_loss: 0.2981 - val_acc: 0.8885
Epoch 10/20
5324/5324 [==============================] - 49s 9ms/step - loss: 0.1526 - acc: 0.9429 - val_loss: 0.3024 - val_acc: 0.9003
Epoch 11/20
5324/5324 [==============================] - 49s 9ms/step - loss: 0.1259 - acc: 0.9551 - val_loss: 0.3632 - val_acc: 0.9003
Epoch 12/20
5324/5324 [==============================] - 49s 9ms/step - loss: 0.1337 - acc: 0.9482 - val_loss: 0.3948 - val_acc: 0.8986
Epoch 13/20
5324/5324 [==============================] - 49s 9ms/step - loss: 0.1133 - acc: 0.9591 - val_loss: 0.3517 - val_acc: 0.8868
Epoch 14/20
5324/5324 [==============================] - 49s 9ms/step - loss: 0.0905 - acc: 0.9634 - val_loss: 0.3895 - val_acc: 0.8632
Epoch 15/20
5324/5324 [==============================] - 49s 9ms/step - loss: 0.0990 - acc: 0.9637 - val_loss: 0.3961 - val_acc: 0.8801
Epoch 16/20
5324/5324 [==============================] - 49s 9ms/step - loss: 0.0929 - acc: 0.9624 - val_loss: 0.3559 - val_acc: 0.8970
Epoch 17/20
5324/5324 [==============================] - 49s 9ms/step - loss: 0.0962 - acc: 0.9649 - val_loss: 0.4281 - val_acc: 0.8885
Epoch 18/20
5324/5324 [==============================] - 49s 9ms/step - loss: 0.0849 - acc: 0.9651 - val_loss: 0.4243 - val_acc: 0.8953
Epoch 19/20
5324/5324 [==============================] - 49s 9ms/step - loss: 0.0747 - acc: 0.9694 - val_loss: 0.4803 - val_acc: 0.8902
Epoch 20/20
5324/5324 [==============================] - 49s 9ms/step - loss: 0.0768 - acc: 0.9692 - val_loss: 0.4465 - val_acc: 0.9020
Saved model!
###Markdown
Below is the prediction code
Function to load the meta data and the model we just trained.
###Code
model = None
sentiment_tag = None
maxLength = None
def loadModel():
global model, sentiment_tag, maxLength
metaData = __loadStuff("./data/meta_sentiment_chinese.p")
maxLength = metaData.get("maxLength")
vocab_size = metaData.get("vocab_size")
output_dimen = metaData.get("output_dimen")
sentiment_tag = metaData.get("sentiment_tag")
embedding_dim = 256
if model is None:
model = Sequential()
model.add(Embedding(vocab_size, embedding_dim, input_length=maxLength))
# Each input would have a size of (maxLength x 256) and each of these 256 sized vectors are fed into the GRU layer one at a time.
# All the intermediate outputs are collected and then passed on to the second GRU layer.
model.add(GRU(256, dropout=0.9, return_sequences=True))
# Using the intermediate outputs, we pass them to another GRU layer and collect the final output only this time
model.add(GRU(256, dropout=0.9))
# The output is then sent to a fully connected layer that would give us our final output_dim classes
model.add(Dense(output_dimen, activation='softmax'))
# We use the adam optimizer instead of standard SGD since it converges much faster
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.load_weights('./data/sentiment_chinese_model.HDF5')
model.summary()
print("Model weights loaded!")
###Output
_____no_output_____
###Markdown
Functions to convert a sentence to model input and predict the result
###Code
def findFeatures(text):
text=Converter('zh-hans').convert(text)
text = text.replace("\n", "")
text = text.replace("\r", "")
seg_list = jieba.cut(text, cut_all=False)
seg_list = list(seg_list)
text = " ".join(seg_list)
textArray = [text]
input_tokenizer_load = __loadStuff("./data/input_tokenizer_chinese.p")
textArray = np.array(pad_sequences(input_tokenizer_load.texts_to_sequences(textArray), maxlen=maxLength))
return textArray
def predictResult(text):
if model is None:
print("Please run \"loadModel\" first.")
return None
features = findFeatures(text)
predicted = model.predict(features)[0] # we have only one sentence to predict, so take index 0
predicted = np.array(predicted)
probab = predicted.max()
predition = sentiment_tag[predicted.argmax()]
return predition, probab
###Output
_____no_output_____
###Markdown
Calling the load model function
###Code
loadModel()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_3 (Embedding) (None, 68, 256) 5663488
_________________________________________________________________
gru_5 (GRU) (None, 68, 256) 393984
_________________________________________________________________
gru_6 (GRU) (None, 256) 393984
_________________________________________________________________
dense_3 (Dense) (None, 2) 514
=================================================================
Total params: 6,451,970
Trainable params: 6,451,970
Non-trainable params: 0
_________________________________________________________________
Model weights loaded!
###Markdown
Try some new comments, feel free to try your own
The result tuple consists of the predicted sentiment and its likelihood.
###Code
predictResult("还好,床很大而且很干净,前台很友好,很满意,下次还来。")
predictResult("床上有污渍,房间太挤不透气,空调不怎么好用。")
predictResult("房间有点小但是设备还齐全,没有异味。")
predictResult("房间还算干净,一般般吧,短住还凑合。")
predictResult("开始不太满意,前台好说话换了一间,房间很干净没有异味。")
predictResult("你是个SB")
###Output
_____no_output_____ |
tf_keras/hyperparameter_tuning.ipynb | ###Markdown
Using Callbacks
Callbacks are an integral part of Keras. Callbacks are used to get performance information, log progress, halt in the event of errors, tune parameters, save model state (in case of a crash, etc.), and finish training once loss is minimized. The list goes on. Callbacks can be passed to the fit, evaluate and predict methods of keras.Model.
Goals
The overarching goal is to learn to use callbacks for some typical tasks. These include:
- Reporting about training progress.
- Stopping once training no longer reduces loss.
- Tuning hyperparameters.
- Implementing adaptive learning rate decay.
- Finding an optimal batch-size for training.
- Putting some of this into ```my_keras_utils.py``` so that they can be easily called and reused.
What's Here?
I continue working with MNIST data, which I began working with in [my first Keras models](first_model.ipynb). My **concrete objective** is to tune a model that does well on Kaggle: 97th percentile? That's tough, but I think I can make it work.
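As a concrete illustration, a timed progress callback along the lines of the `my_utils.TimedProgressUpdate` used below might look roughly like this (a sketch only; the real implementation in ```my_keras_utils.py``` may differ):
```python
from datetime import datetime, timedelta
from tensorflow import keras

class TimedProgressUpdate(keras.callbacks.Callback):
    """Print a short status line at most once every `interval_minutes`."""

    def __init__(self, interval_minutes=1.0):
        super().__init__()
        self.interval = timedelta(minutes=interval_minutes)

    def on_train_begin(self, logs=None):
        self.start = datetime.now()
        self.last_report = self.start
        print(f"Begin training of {self.model.name} at {self.start:%H:%M:%S}. "
              f"Progress updates every {self.interval.total_seconds():.1f} seconds.")

    def on_epoch_end(self, epoch, logs=None):
        now = datetime.now()
        if now - self.last_report >= self.interval:
            loss = (logs or {}).get("loss", float("nan"))
            print(f"Epoch {epoch + 1}: loss={loss:.4f}, "
                  f"elapsed {(now - self.start).total_seconds():.0f}s")
            self.last_report = now
```
Passing an instance of this in the `callbacks` list of `model.fit` (as done throughout this notebook) is all that is needed.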
###Code
import os
import numpy as np
from datetime import datetime, time, timedelta
import pandas as pd
import tensorflow as tf
import kerastuner as kt
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
import my_keras_utils as my_utils
tf.__version__
tf.config.experimental.list_physical_devices('GPU')
## Load our data.
## Since the load process is a little slow, the try-except allows us to re-run all
## cells without having to wait.
try:
## Raises NameError and loads data if X_train is not defined.
X_train.shape
except NameError:
((X_train, y_train), (X_dev, y_dev), (X_test, y_test)) = my_utils.load_kaggle_mnist()
X_train.shape
X_dev.shape
## Let's use the dropout model from my first_model notebook.
inputs = keras.Input(shape=(784))
x = layers.experimental.preprocessing.Rescaling(1./255)(inputs)
x = layers.Dropout(rate = .05)(x)
x = layers.Dense(100, activation='relu',)(x)
x = layers.Dropout(rate = .15)(x)
x = layers.Dense(100, activation='relu',)(x)
x = layers.Dropout(rate = .15)(x)
outputs = layers.Dense(10, activation='softmax')(x)
model_dropout = keras.Model(inputs=inputs, outputs=outputs, name='Dropout')
model_dropout.summary()
optimizer = keras.optimizers.Adam(.001)
model_dropout.compile(optimizer=optimizer,
loss="sparse_categorical_crossentropy",
metrics=[keras.metrics.SparseCategoricalAccuracy(name="acc")]
)
init_weights = model_dropout.get_weights()
early_stopping = keras.callbacks.EarlyStopping(
monitor='loss',
patience = 10,
restore_best_weights = True)
adaptive_lr = keras.callbacks.ReduceLROnPlateau(
monitor='loss',
patience= 4,
cooldown= 8,
factor=0.3,)
progress_update = my_utils.TimedProgressUpdate(1)
callbacks = [progress_update, early_stopping]
epochs = 0
initial_epoch = 0
batch_size = 128
if True: ## reinitialize
model_dropout.set_weights(init_weights)
increment_epochs = 50
epochs += increment_epochs
history1 = model_dropout.fit(X_train, y_train,
epochs=epochs,
initial_epoch = initial_epoch,
batch_size=batch_size,
validation_data=(X_dev, y_dev),
callbacks = [progress_update],
verbose = 0)
initial_epoch += increment_epochs
if True: ## reinitialize
model_dropout.set_weights(init_weights)
increment_epochs = 50
epochs += increment_epochs
history2 = model_dropout.fit(X_train, y_train,
epochs=epochs,
initial_epoch = initial_epoch,
batch_size=batch_size,
validation_data=(X_dev, y_dev),
callbacks = [adaptive_lr],
verbose =0)
initial_epoch += increment_epochs
def overlay_histories(histories, metric):
fig, ax = plt.subplots()
n = 0
for h in histories:
x = range(0,len(h.history[metric]))
y = np.array(h.history[metric])
label = 'history_' + str(n)
ax.plot(x,y, label=label)
n += 1
ax.legend()
overlay_histories([history1, history2], 'val_loss')
###Output
_____no_output_____
###Markdown
Tuning
Time to work with Keras Tuner.
Read the Docs
Reading the documentation was really helpful. Note that the search function _will_ use callbacks. So, instead of worrying about the ```max_epochs```, you can (if you have the resources and time--which _is_ a resource) just add a realistic ```EarlyStopping``` callback. Below, I use this to stop if training loss is not decreasing.
###Code
def model_builder(hp):
## Define the parameter search space.
hp_dropout_x1 = hp.Float('rate1', min_value = .05, max_value = .5, step=.01)
hp_dropout_w1 = hp.Float('rate2', min_value = .05, max_value = .5, step=.01)
hp_dropout_w2 = hp.Float('rate3', min_value = .05, max_value = .5, step=.01)
hp_l1_units = hp.Int('l1_units', min_value = 20, max_value = 200, step = 20)
hp_l2_units = hp.Int('l2_units', min_value = 20, max_value = 200, step = 20)
## ### I need to learn about the options here. 'Choice' means "here are your choices"
## ### 'Int' is a different option that searches an integer range by steps.
inputs = keras.Input(shape=(784))
x = layers.experimental.preprocessing.Rescaling(1./255)(inputs)
x = layers.Dropout(rate = hp_dropout_x1)(x)
x = layers.Dense(hp_l1_units, activation='relu',)(x)
x = layers.Dropout(rate = hp_dropout_w1)(x)
x = layers.Dense(hp_l2_units, activation='relu',)(x)
x = layers.Dropout(rate = hp_dropout_w2)(x)
outputs = layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer = keras.optimizers.Adam(.003),
loss = "sparse_categorical_crossentropy",
metrics=[keras.metrics.SparseCategoricalAccuracy(name="acc")]
)
return model
tuner = kt.Hyperband(model_builder,
objective = 'val_loss',
max_epochs = 120,
hyperband_iterations = 2,
factor = 3,
directory = 'ignored/kt_trials',
project_name = 'dropout_mnist',
)
tuner.search_space_summary()
## prevent bloated ipython output during training.
clear_output = my_utils.ClearTrainingOutput()
timed_update = my_utils.TimedProgressUpdate(3)
## stop when training loss is not happening.
train_loss_stopping = keras.callbacks.EarlyStopping(monitor='loss',
patience = 10,
restore_best_weights = False
)
adaptive_lr = keras.callbacks.ReduceLROnPlateau(
monitor='loss',
patience= 4,
cooldown= 8,
min_lr=.0003,
factor=0.334,)
tuner_callbacks = [adaptive_lr, train_loss_stopping, timed_update]
start = datetime.now()
tuner.search(X_train, y_train,
epochs=120,
batch_size=128,
validation_data=(X_dev, y_dev),
callbacks = tuner_callbacks,
verbose = 0
)
tuner.results_summary()
###Output
_____no_output_____
###Markdown
Ran the Hyperband, with the following params:
```
tuner = kt.Hyperband(model_builder,
                     objective = 'val_loss',
                     max_epochs = 200,
                     hyperband_iterations = 10,
                     factor = 3,
                     directory = 'ignored/kt_trials',
                     project_name = 'dropout_mnist')
tuner.search(X_train, y_train,
             epochs=50,
             batch_size=128,
             validation_data=(X_dev, y_dev),
             callbacks = tuner_callbacks,
             )
```
Which involved way too many iterations. I didn't notice the hyperband_iterations param (10!).
os.path.normpath
Per [Issue 198](https://github.com/keras-team/keras-tuner/issues/198) you may need to add os.path.normpath() to the directory keyword arg on Windows, and the path to the logs needs to be short. (E.g., you won't be able to use my_trials/this_type_of_trial/this_trial--too long. Try mt/t/this).
###Code
def rand_search_model_builder(hp):
## Define the hyperparameter search space.
hp_dropout_x1 = hp.Float('rate1', min_value = .1, max_value = .4, step=.01)
hp_dropout_w1 = hp.Float('rate2', min_value = .05, max_value = .4, step=.01)
hp_dropout_w2 = hp.Float('rate3', min_value = .05, max_value = .4, step=.01)
# Define the hypermodel
inputs = keras.Input(shape=(784))
x = layers.experimental.preprocessing.Rescaling(1./255)(inputs)
x = layers.Dropout(rate = hp_dropout_x1)(x)
x = layers.Dense(100, activation='relu',)(x)
x = layers.Dropout(rate = hp_dropout_w1)(x)
x = layers.Dense(100, activation='relu',)(x)
x = layers.Dropout(rate = hp_dropout_w2)(x)
outputs = layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer = keras.optimizers.Adam(.003),
loss = "sparse_categorical_crossentropy",
metrics=[keras.metrics.SparseCategoricalAccuracy(name="acc")]
)
return model
#kt.tuners.RandomSearch?
rand_tuner = kt.tuners.RandomSearch(rand_search_model_builder,
objective = 'val_loss',
max_trials = 25,
executions_per_trial=1,
directory = os.path.normpath('ignored/mnist'),
project_name = 'rs')
rand_tuner.search_space_summary()
adaptive_lr = keras.callbacks.ReduceLROnPlateau(
monitor='loss',
patience= 4,
cooldown= 8,
min_lr=.0003,
factor=0.334,)
progress_update = my_utils.TimedProgressUpdate(2)
rand_tuner.search(X_train, y_train,
epochs=120,
batch_size=128,
validation_data=(X_dev, y_dev),
callbacks = [adaptive_lr,
progress_update,
#clear_output
],
verbose = 0
)
def rand_search_model_builder_2(hp):
## Define the hyperparameter search space.
hp_dropout_x1 = hp.Float('rate1', min_value = .27, max_value = .38, step=.001)
hp_dropout_w1 = hp.Float('rate2', min_value = .05, max_value = .15, step=.001)
hp_dropout_w2 = hp.Float('rate3', min_value = .11, max_value = .21, step=.001)
# Define the hypermodel
inputs = keras.Input(shape=(784))
x = layers.experimental.preprocessing.Rescaling(1./255)(inputs)
x = layers.Dropout(rate = hp_dropout_x1)(x)
x = layers.Dense(100, activation='relu',)(x)
x = layers.Dropout(rate = hp_dropout_w1)(x)
x = layers.Dense(100, activation='relu',)(x)
x = layers.Dropout(rate = hp_dropout_w2)(x)
outputs = layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer = keras.optimizers.Adam(.003),
loss = "sparse_categorical_crossentropy",
metrics=[keras.metrics.SparseCategoricalAccuracy(name="acc")]
)
return model
#kt.tuners.RandomSearch?
new_tuner = kt.tuners.RandomSearch(rand_search_model_builder_2,
objective = 'val_loss',
max_trials = 25,
executions_per_trial=1,
directory = os.path.normpath('ignored/mnist'),
project_name = 'rs_2')
new_tuner.search_space_summary()
adaptive_lr = keras.callbacks.ReduceLROnPlateau(
monitor='loss',
patience= 4,
cooldown= 8,
min_lr=.0003,
factor=0.334,)
progress_update = my_utils.TimedProgressUpdate(.5)
new_tuner.search(X_train, y_train,
epochs=1,
batch_size=128,
validation_data=(X_dev, y_dev),
callbacks = [adaptive_lr,
progress_update,
#clear_output
],
verbose = 0
)
###Output
INFO:tensorflow:Reloading Oracle from existing project ignored\mnist\rs_2\oracle.json
INFO:tensorflow:Reloading Tuner from ignored\mnist\rs_2\tuner0.json
###Markdown
A dubious first run
I ran a random_search, lr = .001 and a 0-.5 range on the params; 50 epochs, 2 executions per trial. The results were okay, but it seemed like the number of epochs was too few. Also, the max_trials was too big. At least on the Surface, it took three minutes per trial. Ouch. 5 hours later...
I ran a second time
This time with more epochs (120), but 1 execution per trial and a slightly different band of parameters. There was a clear standout:
Trial ID: 39f9768e653c7179ddb0917bd84a4b9d
|-Score: 0.05155862867832184
|-Best step: 0
Hyperparameters:
|-rate1: 0.32999999999999985
|-rate2: 0.1
|-rate3: 0.16000000000000003
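If we want the winning hyperparameter values programmatically rather than reading them off the summary, something like the sketch below should work (assuming the Keras Tuner version used here exposes `get_best_hyperparameters`):
```python
best_hp = new_tuner.get_best_hyperparameters(num_trials=1)[0]
print(best_hp.values)  # dict of all tuned hyperparameter values
print(best_hp.get("rate1"), best_hp.get("rate2"), best_hp.get("rate3"))

# The same HyperParameters object can be fed back into the builder
# to construct a fresh, untrained model with the winning configuration.
best_model = rand_search_model_builder_2(best_hp)
```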
###Code
new_tuner.results_summary()
###Output
_____no_output_____
###Markdown
Bayesian Tuning
Let's see how well the Bayesian optimizer does with the rand_search_model_builder.
###Code
bayesian_tuner = kt.tuners.BayesianOptimization(
rand_search_model_builder,
objective='val_loss',
max_trials = 25,
directory = os.path.normpath('ignored/mnist'),
project_name = 'bayes'
)
bayesian_tuner.search(X_train, y_train,
epochs=120,
batch_size=128,
validation_data=(X_dev, y_dev),
callbacks = [adaptive_lr,
progress_update,
clear_output
],
verbose = 0
)
bayesian_tuner.results_summary()
###Output
_____no_output_____
###Markdown
Let's put it all together. The Bayesian tuner got pretty bad results compared with the other two. I'm not really sure what happened, except that it didn't seem exploratory enough. I should probably figure out how to get better Bayesian results. But for now, we have several high-scoring models. Let's extract three and cross-validate.
###Code
best = rand_tuner.get_best_models()[0]
config = best.get_config()
model_1 = best.from_config(config)
config
temp = rand_tuner.get_best_models(num_models=2)
model_2 = temp[0].from_config(temp[0].get_config())
model_3 = temp[1].from_config(temp[1].get_config())
def cross_validate(model, X, y, val_size = 2000, folds = 3):
init_weights = model.get_weights()
results = []
for fold in range(folds):
X_val = X[fold*val_size:(fold+1)*val_size,:]
X_train = np.concatenate((X[0:fold*val_size,:],X[(fold+1)*val_size:,:]))
y_val = y[fold*val_size:(fold+1)*val_size]
y_train = np.concatenate((y[0:fold*val_size],y[(fold+1)*val_size:]))
history = model.fit(X_train, y_train,
epochs=200,
batch_size=128,
validation_data=(X_val, y_val),
callbacks = [adaptive_lr,
progress_update,
clear_output
],
verbose = 0)
model.set_weights(init_weights)
results.append(history)
return results
results = []
for model in [model_1, model_2, model_3]:
model.compile(optimizer = keras.optimizers.Adam(.003),
loss = "sparse_categorical_crossentropy",
metrics=[keras.metrics.SparseCategoricalAccuracy(name="acc")]
)
histories = cross_validate(model, X_train, y_train)
results.append(histories)
for e in results[0]:
print(e.history['val_loss'][-1])
for e in results[1]:
print(e.history['val_loss'][-1])
for e in results[2]:
print(e.history['val_loss'][-1])
## Reset all the weights.
model_1.compile(optimizer = keras.optimizers.Adam(.003),
loss = "sparse_categorical_crossentropy",
metrics=[keras.metrics.SparseCategoricalAccuracy(name="acc")]
)
## Save so I can move to another computer
model_1.save('MNIST_model.hdf5')
model_2.save("mnist_2.hdf5")
model_3.save("mnist_3.hdf5")
model_1.fit(X_train, y_train,
epochs=200,
batch_size=128,
validation_data=(X_dev, y_dev),
callbacks = [adaptive_lr,
progress_update,
clear_output
],
verbose = 2)
model_1.save('MNIST_model.hdf5')
###Output
Begin training of functional_1 at 08:45:50. Progress updates every 120.0 seconds.
Epoch 1/200
297/297 - 1s - loss: 0.0311 - acc: 0.9896 - val_loss: 0.0906 - val_acc: 0.9785
Epoch 2/200
297/297 - 1s - loss: 0.0333 - acc: 0.9886 - val_loss: 0.0886 - val_acc: 0.9805
Epoch 3/200
297/297 - 1s - loss: 0.0343 - acc: 0.9886 - val_loss: 0.0873 - val_acc: 0.9815
Epoch 4/200
297/297 - 1s - loss: 0.0322 - acc: 0.9896 - val_loss: 0.0871 - val_acc: 0.9785
Epoch 5/200
297/297 - 1s - loss: 0.0342 - acc: 0.9884 - val_loss: 0.0875 - val_acc: 0.9800
Epoch 6/200
297/297 - 1s - loss: 0.0372 - acc: 0.9881 - val_loss: 0.0890 - val_acc: 0.9790
Epoch 7/200
297/297 - 1s - loss: 0.0342 - acc: 0.9890 - val_loss: 0.0885 - val_acc: 0.9795
Epoch 8/200
297/297 - 1s - loss: 0.0313 - acc: 0.9896 - val_loss: 0.0891 - val_acc: 0.9800
Epoch 9/200
297/297 - 1s - loss: 0.0318 - acc: 0.9886 - val_loss: 0.0875 - val_acc: 0.9800
Epoch 10/200
297/297 - 1s - loss: 0.0335 - acc: 0.9891 - val_loss: 0.0853 - val_acc: 0.9810
Epoch 11/200
297/297 - 1s - loss: 0.0321 - acc: 0.9896 - val_loss: 0.0863 - val_acc: 0.9805
Epoch 12/200
297/297 - 1s - loss: 0.0333 - acc: 0.9892 - val_loss: 0.0905 - val_acc: 0.9795
Epoch 13/200
297/297 - 1s - loss: 0.0333 - acc: 0.9890 - val_loss: 0.0915 - val_acc: 0.9805
Epoch 14/200
297/297 - 1s - loss: 0.0329 - acc: 0.9887 - val_loss: 0.0879 - val_acc: 0.9805
Epoch 15/200
297/297 - 1s - loss: 0.0325 - acc: 0.9888 - val_loss: 0.0895 - val_acc: 0.9815
Epoch 16/200
297/297 - 1s - loss: 0.0311 - acc: 0.9898 - val_loss: 0.0910 - val_acc: 0.9795
Epoch 17/200
297/297 - 1s - loss: 0.0326 - acc: 0.9892 - val_loss: 0.0876 - val_acc: 0.9805
Epoch 18/200
297/297 - 1s - loss: 0.0333 - acc: 0.9892 - val_loss: 0.0893 - val_acc: 0.9810
Epoch 19/200
297/297 - 1s - loss: 0.0340 - acc: 0.9886 - val_loss: 0.0900 - val_acc: 0.9810
Epoch 20/200
297/297 - 1s - loss: 0.0307 - acc: 0.9898 - val_loss: 0.0900 - val_acc: 0.9810
Epoch 21/200
297/297 - 1s - loss: 0.0325 - acc: 0.9888 - val_loss: 0.0899 - val_acc: 0.9795
Epoch 22/200
297/297 - 1s - loss: 0.0339 - acc: 0.9896 - val_loss: 0.0901 - val_acc: 0.9795
Epoch 23/200
297/297 - 1s - loss: 0.0319 - acc: 0.9897 - val_loss: 0.0877 - val_acc: 0.9795
Epoch 24/200
297/297 - 1s - loss: 0.0331 - acc: 0.9892 - val_loss: 0.0890 - val_acc: 0.9800
Epoch 25/200
297/297 - 1s - loss: 0.0320 - acc: 0.9893 - val_loss: 0.0889 - val_acc: 0.9800
Epoch 26/200
297/297 - 1s - loss: 0.0318 - acc: 0.9891 - val_loss: 0.0898 - val_acc: 0.9810
Epoch 27/200
297/297 - 1s - loss: 0.0313 - acc: 0.9891 - val_loss: 0.0890 - val_acc: 0.9805
Epoch 28/200
297/297 - 1s - loss: 0.0322 - acc: 0.9900 - val_loss: 0.0885 - val_acc: 0.9820
Epoch 29/200
297/297 - 1s - loss: 0.0322 - acc: 0.9893 - val_loss: 0.0905 - val_acc: 0.9795
Epoch 30/200
297/297 - 1s - loss: 0.0321 - acc: 0.9898 - val_loss: 0.0896 - val_acc: 0.9815
Epoch 31/200
297/297 - 1s - loss: 0.0331 - acc: 0.9890 - val_loss: 0.0904 - val_acc: 0.9820
Epoch 32/200
297/297 - 1s - loss: 0.0324 - acc: 0.9895 - val_loss: 0.0885 - val_acc: 0.9815
Epoch 33/200
297/297 - 1s - loss: 0.0308 - acc: 0.9899 - val_loss: 0.0912 - val_acc: 0.9800
Epoch 34/200
297/297 - 1s - loss: 0.0320 - acc: 0.9893 - val_loss: 0.0872 - val_acc: 0.9800
Epoch 35/200
297/297 - 1s - loss: 0.0307 - acc: 0.9897 - val_loss: 0.0880 - val_acc: 0.9830
Epoch 36/200
297/297 - 1s - loss: 0.0292 - acc: 0.9904 - val_loss: 0.0896 - val_acc: 0.9805
Epoch 37/200
297/297 - 1s - loss: 0.0321 - acc: 0.9895 - val_loss: 0.0934 - val_acc: 0.9790
Epoch 38/200
297/297 - 1s - loss: 0.0303 - acc: 0.9899 - val_loss: 0.0944 - val_acc: 0.9795
Epoch 39/200
297/297 - 1s - loss: 0.0304 - acc: 0.9902 - val_loss: 0.0933 - val_acc: 0.9810
Epoch 40/200
297/297 - 1s - loss: 0.0280 - acc: 0.9904 - val_loss: 0.0949 - val_acc: 0.9795
Epoch 41/200
297/297 - 1s - loss: 0.0325 - acc: 0.9893 - val_loss: 0.0923 - val_acc: 0.9800
Epoch 42/200
297/297 - 1s - loss: 0.0344 - acc: 0.9893 - val_loss: 0.0921 - val_acc: 0.9790
Epoch 43/200
297/297 - 1s - loss: 0.0307 - acc: 0.9896 - val_loss: 0.0918 - val_acc: 0.9810
Epoch 44/200
297/297 - 1s - loss: 0.0330 - acc: 0.9893 - val_loss: 0.0892 - val_acc: 0.9805
Epoch 45/200
297/297 - 1s - loss: 0.0328 - acc: 0.9899 - val_loss: 0.0878 - val_acc: 0.9815
Epoch 46/200
297/297 - 1s - loss: 0.0307 - acc: 0.9896 - val_loss: 0.0903 - val_acc: 0.9820
Epoch 47/200
297/297 - 1s - loss: 0.0311 - acc: 0.9899 - val_loss: 0.0914 - val_acc: 0.9820
Epoch 48/200
297/297 - 1s - loss: 0.0307 - acc: 0.9897 - val_loss: 0.0898 - val_acc: 0.9820
Epoch 49/200
297/297 - 1s - loss: 0.0293 - acc: 0.9910 - val_loss: 0.0878 - val_acc: 0.9820
Epoch 50/200
297/297 - 1s - loss: 0.0321 - acc: 0.9898 - val_loss: 0.0878 - val_acc: 0.9825
Epoch 51/200
297/297 - 1s - loss: 0.0320 - acc: 0.9898 - val_loss: 0.0900 - val_acc: 0.9815
Epoch 52/200
297/297 - 1s - loss: 0.0293 - acc: 0.9903 - val_loss: 0.0918 - val_acc: 0.9800
Epoch 53/200
297/297 - 1s - loss: 0.0332 - acc: 0.9895 - val_loss: 0.0920 - val_acc: 0.9800
Epoch 54/200
297/297 - 1s - loss: 0.0346 - acc: 0.9889 - val_loss: 0.0902 - val_acc: 0.9815
Epoch 55/200
297/297 - 1s - loss: 0.0319 - acc: 0.9895 - val_loss: 0.0911 - val_acc: 0.9815
Epoch 56/200
297/297 - 1s - loss: 0.0314 - acc: 0.9902 - val_loss: 0.0926 - val_acc: 0.9815
Epoch 57/200
297/297 - 1s - loss: 0.0311 - acc: 0.9900 - val_loss: 0.0895 - val_acc: 0.9820
Epoch 58/200
297/297 - 1s - loss: 0.0296 - acc: 0.9900 - val_loss: 0.0924 - val_acc: 0.9805
Epoch 59/200
297/297 - 1s - loss: 0.0300 - acc: 0.9903 - val_loss: 0.0923 - val_acc: 0.9815
Epoch 60/200
297/297 - 1s - loss: 0.0309 - acc: 0.9901 - val_loss: 0.0920 - val_acc: 0.9820
Epoch 61/200
297/297 - 1s - loss: 0.0301 - acc: 0.9899 - val_loss: 0.0906 - val_acc: 0.9805
Epoch 62/200
297/297 - 1s - loss: 0.0315 - acc: 0.9892 - val_loss: 0.0871 - val_acc: 0.9820
Epoch 63/200
297/297 - 1s - loss: 0.0317 - acc: 0.9898 - val_loss: 0.0927 - val_acc: 0.9805
Epoch 64/200
297/297 - 1s - loss: 0.0314 - acc: 0.9894 - val_loss: 0.0931 - val_acc: 0.9810
Epoch 65/200
297/297 - 1s - loss: 0.0306 - acc: 0.9896 - val_loss: 0.0933 - val_acc: 0.9795
Epoch 66/200
297/297 - 1s - loss: 0.0315 - acc: 0.9898 - val_loss: 0.0921 - val_acc: 0.9805
Epoch 67/200
297/297 - 1s - loss: 0.0308 - acc: 0.9904 - val_loss: 0.0910 - val_acc: 0.9820
Epoch 68/200
297/297 - 1s - loss: 0.0313 - acc: 0.9898 - val_loss: 0.0928 - val_acc: 0.9820
Epoch 69/200
297/297 - 1s - loss: 0.0300 - acc: 0.9905 - val_loss: 0.0899 - val_acc: 0.9820
Epoch 70/200
297/297 - 1s - loss: 0.0287 - acc: 0.9901 - val_loss: 0.0928 - val_acc: 0.9815
Epoch 71/200
297/297 - 1s - loss: 0.0328 - acc: 0.9898 - val_loss: 0.0917 - val_acc: 0.9800
Epoch 72/200
297/297 - 1s - loss: 0.0306 - acc: 0.9893 - val_loss: 0.0894 - val_acc: 0.9825
Epoch 73/200
297/297 - 1s - loss: 0.0302 - acc: 0.9898 - val_loss: 0.0926 - val_acc: 0.9825
Epoch 74/200
297/297 - 1s - loss: 0.0328 - acc: 0.9892 - val_loss: 0.0921 - val_acc: 0.9815
Epoch 75/200
297/297 - 1s - loss: 0.0316 - acc: 0.9893 - val_loss: 0.0917 - val_acc: 0.9815
Epoch 76/200
297/297 - 1s - loss: 0.0300 - acc: 0.9903 - val_loss: 0.0961 - val_acc: 0.9805
Epoch 77/200
297/297 - 1s - loss: 0.0287 - acc: 0.9902 - val_loss: 0.0956 - val_acc: 0.9805
Epoch 78/200
297/297 - 1s - loss: 0.0326 - acc: 0.9893 - val_loss: 0.0901 - val_acc: 0.9815
Epoch 79/200
297/297 - 1s - loss: 0.0337 - acc: 0.9888 - val_loss: 0.0907 - val_acc: 0.9815
Epoch 80/200
297/297 - 1s - loss: 0.0283 - acc: 0.9908 - val_loss: 0.0917 - val_acc: 0.9825
Epoch 81/200
297/297 - 1s - loss: 0.0320 - acc: 0.9899 - val_loss: 0.0906 - val_acc: 0.9810
Epoch 82/200
297/297 - 1s - loss: 0.0300 - acc: 0.9903 - val_loss: 0.0910 - val_acc: 0.9820
Epoch 83/200
297/297 - 1s - loss: 0.0304 - acc: 0.9899 - val_loss: 0.0917 - val_acc: 0.9820
Epoch 84/200
297/297 - 1s - loss: 0.0311 - acc: 0.9896 - val_loss: 0.0905 - val_acc: 0.9820
Epoch 85/200
297/297 - 1s - loss: 0.0318 - acc: 0.9902 - val_loss: 0.0908 - val_acc: 0.9830
Epoch 86/200
297/297 - 1s - loss: 0.0307 - acc: 0.9893 - val_loss: 0.0927 - val_acc: 0.9815
Epoch 87/200
297/297 - 1s - loss: 0.0305 - acc: 0.9894 - val_loss: 0.0944 - val_acc: 0.9810
Epoch 88/200
297/297 - 1s - loss: 0.0294 - acc: 0.9904 - val_loss: 0.0908 - val_acc: 0.9810
Epoch 89/200
297/297 - 1s - loss: 0.0290 - acc: 0.9911 - val_loss: 0.0904 - val_acc: 0.9820
Epoch 90/200
297/297 - 1s - loss: 0.0288 - acc: 0.9904 - val_loss: 0.0945 - val_acc: 0.9820
Epoch 91/200
297/297 - 1s - loss: 0.0308 - acc: 0.9894 - val_loss: 0.0965 - val_acc: 0.9820
Epoch 92/200
297/297 - 1s - loss: 0.0315 - acc: 0.9892 - val_loss: 0.0963 - val_acc: 0.9805
Epoch 93/200
297/297 - 1s - loss: 0.0322 - acc: 0.9894 - val_loss: 0.0949 - val_acc: 0.9815
Epoch 94/200
297/297 - 1s - loss: 0.0293 - acc: 0.9900 - val_loss: 0.0918 - val_acc: 0.9810
Epoch 95/200
297/297 - 1s - loss: 0.0318 - acc: 0.9894 - val_loss: 0.0892 - val_acc: 0.9835
Epoch 96/200
297/297 - 1s - loss: 0.0300 - acc: 0.9896 - val_loss: 0.0900 - val_acc: 0.9825
Epoch 97/200
297/297 - 1s - loss: 0.0315 - acc: 0.9897 - val_loss: 0.0915 - val_acc: 0.9805
Epoch 98/200
297/297 - 1s - loss: 0.0283 - acc: 0.9903 - val_loss: 0.0893 - val_acc: 0.9810
Epoch 99/200
297/297 - 1s - loss: 0.0303 - acc: 0.9897 - val_loss: 0.0917 - val_acc: 0.9820
Epoch 100/200
297/297 - 1s - loss: 0.0281 - acc: 0.9902 - val_loss: 0.0908 - val_acc: 0.9820
Epoch 101/200
297/297 - 1s - loss: 0.0300 - acc: 0.9900 - val_loss: 0.0903 - val_acc: 0.9830
Epoch 102/200
297/297 - 1s - loss: 0.0303 - acc: 0.9904 - val_loss: 0.0920 - val_acc: 0.9810
Epoch 103/200
297/297 - 1s - loss: 0.0282 - acc: 0.9910 - val_loss: 0.0894 - val_acc: 0.9820
Epoch 104/200
297/297 - 1s - loss: 0.0295 - acc: 0.9905 - val_loss: 0.0907 - val_acc: 0.9825
Epoch 105/200
297/297 - 1s - loss: 0.0324 - acc: 0.9892 - val_loss: 0.0903 - val_acc: 0.9825
Epoch 106/200
297/297 - 1s - loss: 0.0308 - acc: 0.9900 - val_loss: 0.0919 - val_acc: 0.9820
Epoch 107/200
297/297 - 1s - loss: 0.0321 - acc: 0.9898 - val_loss: 0.0911 - val_acc: 0.9845
Epoch 108/200
297/297 - 1s - loss: 0.0300 - acc: 0.9897 - val_loss: 0.0932 - val_acc: 0.9825
Epoch 109/200
297/297 - 1s - loss: 0.0309 - acc: 0.9895 - val_loss: 0.0920 - val_acc: 0.9825
Epoch 110/200
297/297 - 1s - loss: 0.0290 - acc: 0.9903 - val_loss: 0.0931 - val_acc: 0.9800
Epoch 111/200
297/297 - 1s - loss: 0.0294 - acc: 0.9897 - val_loss: 0.0961 - val_acc: 0.9800
Epoch 112/200
297/297 - 1s - loss: 0.0310 - acc: 0.9900 - val_loss: 0.0947 - val_acc: 0.9820
Epoch 113/200
297/297 - 1s - loss: 0.0316 - acc: 0.9897 - val_loss: 0.0924 - val_acc: 0.9810
Epoch 114/200
297/297 - 1s - loss: 0.0298 - acc: 0.9904 - val_loss: 0.0947 - val_acc: 0.9815
Epoch 115/200
297/297 - 1s - loss: 0.0269 - acc: 0.9906 - val_loss: 0.0936 - val_acc: 0.9815
Epoch 116/200
297/297 - 1s - loss: 0.0312 - acc: 0.9897 - val_loss: 0.0935 - val_acc: 0.9810
Epoch 117/200
297/297 - 1s - loss: 0.0295 - acc: 0.9903 - val_loss: 0.0939 - val_acc: 0.9810
Epoch 118/200
297/297 - 1s - loss: 0.0286 - acc: 0.9905 - val_loss: 0.0908 - val_acc: 0.9820
Epoch 119/200
297/297 - 1s - loss: 0.0309 - acc: 0.9901 - val_loss: 0.0928 - val_acc: 0.9815
Epoch 120/200
297/297 - 1s - loss: 0.0320 - acc: 0.9894 - val_loss: 0.0935 - val_acc: 0.9795
Epoch 121/200
297/297 - 1s - loss: 0.0306 - acc: 0.9901 - val_loss: 0.0951 - val_acc: 0.9805
Epoch 122/200
297/297 - 1s - loss: 0.0290 - acc: 0.9908 - val_loss: 0.0912 - val_acc: 0.9810
Epoch 123/200
297/297 - 1s - loss: 0.0275 - acc: 0.9911 - val_loss: 0.0899 - val_acc: 0.9810
Epoch 124/200
297/297 - 1s - loss: 0.0294 - acc: 0.9903 - val_loss: 0.0887 - val_acc: 0.9815
Epoch 125/200
297/297 - 1s - loss: 0.0299 - acc: 0.9901 - val_loss: 0.0901 - val_acc: 0.9810
Epoch 126/200
297/297 - 1s - loss: 0.0296 - acc: 0.9902 - val_loss: 0.0891 - val_acc: 0.9820
Epoch 127/200
297/297 - 1s - loss: 0.0306 - acc: 0.9900 - val_loss: 0.0858 - val_acc: 0.9815
Epoch 128/200
297/297 - 1s - loss: 0.0300 - acc: 0.9894 - val_loss: 0.0921 - val_acc: 0.9815
Epoch 129/200
297/297 - 1s - loss: 0.0266 - acc: 0.9909 - val_loss: 0.0940 - val_acc: 0.9805
Epoch 130/200
297/297 - 1s - loss: 0.0314 - acc: 0.9901 - val_loss: 0.0979 - val_acc: 0.9795
Epoch 131/200
297/297 - 1s - loss: 0.0293 - acc: 0.9908 - val_loss: 0.0941 - val_acc: 0.9810
Epoch 132/200
297/297 - 1s - loss: 0.0285 - acc: 0.9908 - val_loss: 0.0961 - val_acc: 0.9810
Epoch 133/200
297/297 - 1s - loss: 0.0289 - acc: 0.9902 - val_loss: 0.0923 - val_acc: 0.9820
Epoch 134/200
297/297 - 1s - loss: 0.0303 - acc: 0.9900 - val_loss: 0.0927 - val_acc: 0.9820
Epoch 135/200
297/297 - 1s - loss: 0.0303 - acc: 0.9905 - val_loss: 0.0913 - val_acc: 0.9825
Epoch 136/200
297/297 - 1s - loss: 0.0314 - acc: 0.9899 - val_loss: 0.0934 - val_acc: 0.9815
Epoch 137/200
297/297 - 1s - loss: 0.0302 - acc: 0.9897 - val_loss: 0.0885 - val_acc: 0.9825
Epoch 138/200
297/297 - 1s - loss: 0.0282 - acc: 0.9908 - val_loss: 0.0898 - val_acc: 0.9805
Epoch 139/200
297/297 - 1s - loss: 0.0291 - acc: 0.9901 - val_loss: 0.0901 - val_acc: 0.9800
Epoch 140/200
297/297 - 1s - loss: 0.0290 - acc: 0.9904 - val_loss: 0.0928 - val_acc: 0.9805
Epoch 141/200
297/297 - 1s - loss: 0.0279 - acc: 0.9905 - val_loss: 0.0933 - val_acc: 0.9795
Epoch 142/200
297/297 - 1s - loss: 0.0306 - acc: 0.9899 - val_loss: 0.0924 - val_acc: 0.9790
Epoch 143/200
297/297 - 1s - loss: 0.0281 - acc: 0.9910 - val_loss: 0.0878 - val_acc: 0.9800
Epoch 144/200
297/297 - 1s - loss: 0.0280 - acc: 0.9908 - val_loss: 0.0903 - val_acc: 0.9805
Epoch 145/200
297/297 - 1s - loss: 0.0280 - acc: 0.9906 - val_loss: 0.0898 - val_acc: 0.9815
Epoch 146/200
297/297 - 1s - loss: 0.0285 - acc: 0.9903 - val_loss: 0.0914 - val_acc: 0.9820
Epoch 147/200
297/297 - 1s - loss: 0.0301 - acc: 0.9900 - val_loss: 0.0926 - val_acc: 0.9820
Epoch 148/200
297/297 - 1s - loss: 0.0304 - acc: 0.9899 - val_loss: 0.0925 - val_acc: 0.9820
Epoch 149/200
297/297 - 1s - loss: 0.0298 - acc: 0.9894 - val_loss: 0.0933 - val_acc: 0.9800
Epoch 150/200
297/297 - 1s - loss: 0.0262 - acc: 0.9911 - val_loss: 0.0947 - val_acc: 0.9820
Epoch 151/200
297/297 - 1s - loss: 0.0299 - acc: 0.9902 - val_loss: 0.0944 - val_acc: 0.9805
Epoch 152/200
297/297 - 1s - loss: 0.0292 - acc: 0.9907 - val_loss: 0.0943 - val_acc: 0.9805
Epoch 153/200
297/297 - 1s - loss: 0.0294 - acc: 0.9905 - val_loss: 0.0920 - val_acc: 0.9795
Epoch 154/200
297/297 - 1s - loss: 0.0268 - acc: 0.9913 - val_loss: 0.0934 - val_acc: 0.9805
Epoch 155/200
297/297 - 1s - loss: 0.0273 - acc: 0.9906 - val_loss: 0.0972 - val_acc: 0.9810
Epoch 156/200
297/297 - 1s - loss: 0.0274 - acc: 0.9909 - val_loss: 0.0978 - val_acc: 0.9820
Epoch 157/200
297/297 - 1s - loss: 0.0305 - acc: 0.9902 - val_loss: 0.0968 - val_acc: 0.9800
Epoch 158/200
297/297 - 1s - loss: 0.0276 - acc: 0.9914 - val_loss: 0.0931 - val_acc: 0.9805
Epoch 159/200
297/297 - 1s - loss: 0.0288 - acc: 0.9901 - val_loss: 0.0925 - val_acc: 0.9820
Epoch 160/200
297/297 - 1s - loss: 0.0266 - acc: 0.9913 - val_loss: 0.0926 - val_acc: 0.9815
Epoch 161/200
297/297 - 1s - loss: 0.0285 - acc: 0.9908 - val_loss: 0.0922 - val_acc: 0.9820
Epoch 162/200
297/297 - 1s - loss: 0.0286 - acc: 0.9904 - val_loss: 0.0913 - val_acc: 0.9825
Epoch 163/200
297/297 - 1s - loss: 0.0292 - acc: 0.9903 - val_loss: 0.0915 - val_acc: 0.9820
Epoch 164/200
297/297 - 1s - loss: 0.0299 - acc: 0.9908 - val_loss: 0.0925 - val_acc: 0.9820
Epoch 165/200
|
docs/notebooks/Recovering weak signals.ipynb | ###Markdown
Recovering weak signals
###Code
import numpy as np
import corner
import pandas as pd
import matplotlib.pyplot as plt
import exoplanet as xo
import pymc3 as pm
import lightkurve as lk
%config IPython.matplotlib.backend = "retina"
from matplotlib import rcParams
rcParams["figure.dpi"] = 150
rcParams["savefig.dpi"] = 150
###Output
_____no_output_____
###Markdown
Sometimes a signal is too weak to be obvious in the `first_look`. There are a few ways to dig deeper. One of those is the time delay periodogram module, which essentially brute-forces a model over a range of orbital periods. The model likelihood can then be plotted against the orbital periods and should peak at the correct value. Let's try this out for a short-period binary, KIC 6780873.
###Code
lc = lk.search_lightcurvefile('KIC 6780873', mission='Kepler').download_all().PDCSAP_FLUX.stitch().remove_nans()
lc.plot();
from maelstrom import Maelstrom
ms = Maelstrom(lc.time, lc.flux, max_peaks=2, fmin=10)
ms.first_look(segment_size=5);
###Output
_____no_output_____
###Markdown
It does look like there's something there... but it's hard to tell. Let's create the periodogram searcher:
###Code
pg = ms.period_search()
###Output
_____no_output_____
###Markdown
We'll search between 5 and 15 days
###Code
periods = np.linspace(5,15,100)
res = pg.fit(periods)
pg.diagnose();
###Output
_____no_output_____
###Markdown
Cool, there's a peak at the orbital period of ~9 days! This is almost exactly where we know it to be. The pg.fit results are a list of optimisation result dicts. If we want to make these plots manually, we can do the following:
###Code
ys = np.array([[r[0] for r in row] for row in res])
sm = np.sum(ys, axis=0)
plt.plot(periods, sm)
period = periods[np.argmax(sm)]
plt.xlabel('Period [day]')
plt.ylabel('Model likelihood')
print(f"Expected orbital period at {period:.2f} days")
###Output
Expected orbital period at 9.14 days
|
malaria_cell_images.ipynb | ###Markdown
Malaria Cell Infection
This notebook uses the National Institutes of Health (NIH) Malaria Dataset.
URL to the dataset on Kaggle: https://www.kaggle.com/iarunava/cell-images-for-detecting-malaria
Getting the dataset
This will extract the data from `./drive/My Drive/malaria-cell-images.zip`. Before running this cell, save the dataset to your Drive using the `malaria-cell-images-to-drive` notebook.
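The copy step below assumes Google Drive is already mounted at `./drive`. On a fresh Colab runtime it can be mounted first with something like this (a setup sketch, not part of the original notebook):
```python
from google.colab import drive

# Mount Drive at /content/drive; with Colab's default working directory
# (/content), the relative path './drive/My Drive/...' used below then resolves.
drive.mount('/content/drive')
```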
###Code
# Skip copy and unzip if the extracted directory exists
![ -d './cell_images' ] || cp './drive/My Drive/malaria-cell-images.zip' './'
![ -d './cell_images' ] || unzip './malaria-cell-images.zip' > /dev/null
!find './cell_images' -type f -name '*.db' -delete
###Output
_____no_output_____
###Markdown
Some information about the images
###Code
!echo "NUMBER OF POSITIVE IMAGES: $(find ./cell_images/Parasitized -type f | wc -l)"
!echo "NUMBER OF NEGATIVE IMAGES: $(find ./cell_images/Uninfected -type f | wc -l)"
!echo " TOTAL: $(find ./cell_images -type f | wc -l)"
###Output
NUMBER OF POSITIVE IMAGES: 13779
NUMBER OF NEGATIVE IMAGES: 13779
TOTAL: 27558
###Markdown
Importing the modules
###Code
!pip install pytorch-ignite
import math
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import train_test_split
from tabulate import tabulate
from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import Accuracy, Loss, RunningAverage, ConfusionMatrix
from ignite.handlers import ModelCheckpoint, EarlyStopping
import torch
from torch import optim, nn
import torch.nn.functional as F
from torch.utils.data.sampler import RandomSampler, SubsetRandomSampler
from torchsummary import summary
from torchvision import datasets, models, transforms
###Output
_____no_output_____
###Markdown
Splitting the dataset
Since we'll not be doing model fine-tuning, the dataset will be split into *training*, with 80% of the images, and *test*, with the remaining 20%.
###Code
def split_images(imgs_root_path, splits_root_path):
if os.path.exists(splits_root_path):
shutil.rmtree(splits_root_path)
os.makedirs(splits_root_path)
fnames_map = dict()
img_classes = os.listdir(imgs_root_path)
datasets = ['train', 'test']
for img_class in img_classes:
img_class_root_path = os.path.join(imgs_root_path, img_class)
fnames = os.listdir(img_class_root_path)
(train_fnames, test_fnames) = train_test_split(
fnames, train_size=0.8, shuffle=True)
fnames_map[('train', img_class)] = train_fnames
fnames_map[('test', img_class)] = test_fnames
for ((dset, img_class), fnames) in fnames_map.items():
for fname in fnames:
src_img_path = os.path.join(imgs_root_path, img_class, fname)
dst_img_path = os.path.join(splits_root_path, dset, img_class, fname)
if not os.path.exists(os.path.dirname(dst_img_path)):
os.makedirs(os.path.dirname(dst_img_path))
shutil.copy(src_img_path, dst_img_path)
table_headers = datasets
table_data = list()
for c in img_classes:
row = [c]
for d in datasets:
abs_freq = len(fnames_map[(d, c)])
row.append(abs_freq)
table_data.append(row)
print('SPLITS ABS. FREQUENCY')
print(tabulate(tabular_data=table_data, headers=table_headers, tablefmt='github'))
%%time
split_images('./cell_images', 'datasets')
###Output
SPLITS ABS. FREQUENCY
| | train | test |
|-------------|---------|--------|
| Parasitized | 11023 | 2756 |
| Uninfected | 11023 | 2756 |
CPU times: user 1.5 s, sys: 2.68 s, total: 4.18 s
Wall time: 4.93 s
###Markdown
Helper functions
Defining the data loaders function
###Code
def create_transforms(img_res):
train_transforms = transforms.Compose(
[
            transforms.Resize(img_res),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(45),
transforms.ToTensor(),
transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
]
)
eval_transforms = transforms.Compose(
[
            transforms.Resize(img_res),
transforms.ToTensor(),
transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
]
)
return (train_transforms, eval_transforms)
def create_data_loaders(img_res):
(train_transforms, eval_transforms) = create_transforms(img_res)
train_dataset = datasets.ImageFolder('./datasets/train', transform=train_transforms)
test_dataset = datasets.ImageFolder('./datasets/test', transform=eval_transforms)
train_sampler = RandomSampler(train_dataset)
test_sampler = RandomSampler(test_dataset)
train_loader = torch.utils.data.DataLoader(
train_dataset,
batch_size=BATCH_SIZE,
sampler=train_sampler,
num_workers=NUM_WORKERS
)
test_loader = torch.utils.data.DataLoader(
test_dataset,
batch_size=BATCH_SIZE,
sampler=test_sampler,
num_workers=NUM_WORKERS
)
return (train_loader, test_loader)
###Output
_____no_output_____
###Markdown
Defining the training, validation and test procedure
###Code
def create_training_procedure(model, optimizer, criterion, device, train_loader,
test_loader):
trainer = create_supervised_trainer(model, optimizer, criterion, device=device)
metrics = {
'Accuracy': Accuracy(),
'Loss': Loss(criterion)
}
evaluator = create_supervised_evaluator(model, metrics=metrics, device=device)
train_history = {'accuracy': list(), 'loss': list()}
test_history = {'accuracy': list(), 'loss': list()}
last_epoch = list()
RunningAverage(output_transform=lambda x: x).attach(trainer, 'loss')
# The commented lines below add early stopping but for the purposes of this notebook
# we won't be using it
# def score_function(engine):
# val_loss = engine.state.metrics['Loss']
# return -val_loss
# handler = EarlyStopping(patience=10, score_function=score_function, trainer=trainer)
# evaluator.add_event_handler(Events.COMPLETED, handler)
def get_acc_and_loss(data_loader):
evaluator.run(data_loader)
metrics = evaluator.state.metrics
accuracy = metrics['Accuracy'] * 100
loss = metrics['Loss']
return (accuracy, loss)
@trainer.on(Events.EPOCH_COMPLETED)
def log_epoch_results(trainer):
(train_acc, train_loss) = get_acc_and_loss(train_loader)
train_history['accuracy'].append(train_acc)
train_history['loss'].append(train_loss)
(test_acc, test_loss) = get_acc_and_loss(test_loader)
test_history['accuracy'].append(test_acc)
test_history['loss'].append(test_loss)
print(
f'Epoch: {trainer.state.epoch:>2},',
f'Train Avg. Acc.: {train_acc:.2f},',
f'Train Avg. Loss: {train_loss:.2f} -',
f'Test Avg. Acc.: {test_acc:.2f},',
f'Test Avg. Loss: {test_loss:.2f}'
)
checkpointer = ModelCheckpoint(
'./saved_models', 'baseline-cnn', n_saved=2, create_dir=True,
save_as_state_dict=True, require_empty=False)
_ = trainer.add_event_handler(
Events.EPOCH_COMPLETED, checkpointer, {'baseline-cnn': model})
return (trainer, train_history, test_history)
###Output
_____no_output_____
###Markdown
Defining the experiment plots
###Code
def display_results(train_history, test_history, title):
plt.plot(train_history['accuracy'],label="Training Accuracy")
plt.plot(test_history['accuracy'],label="Test Accuracy")
plt.xlabel('No. of Epochs')
plt.ylabel('Accuracy')
plt.legend(frameon=False)
plt.title(title)
plt.show()
plt.plot(train_history['loss'],label="Training Loss")
plt.plot(test_history['loss'],label="Test Loss")
plt.xlabel('No. of Epochs')
plt.ylabel('Loss')
plt.legend(frameon=False)
plt.title(title)
plt.show()
###Output
_____no_output_____
###Markdown
Defining the calculation of convolution shape propagation
Using these helpers to calculate how the image shrinks through the convolutional layers.
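For reference, the helpers below implement the standard output-size formula for a convolution or pooling layer,
$$W_{\text{out}} = \left\lfloor \frac{W_{\text{in}} + 2P - K}{S} \right\rfloor + 1,$$
where $W_{\text{in}}$ is the input width, $K$ the kernel size, $P$ the padding and $S$ the stride.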
###Code
def filter_shape(width, kernel_size, padding=1, stride=1):
div = width + (2 * padding) - kernel_size
return math.floor(div / stride) + 1
def conv(width, kernel_size, padding=1, stride=1):
return filter_shape(width, kernel_size, padding=padding, stride=stride)
def pool(width, pool_size, stride=2):
return filter_shape(width, kernel_size=pool_size, padding=0, stride=stride)
###Output
_____no_output_____
###Markdown
Defining a rapid shape mismatch test
We'll run forward propagation on a single batch to check for input shape mismatch between the convolutional and fully-connected layers.
###Code
def test_forward_prop(model, train_loader, device):
"""
A simple shape mismatch test for the forward propagation
"""
(data, target) = next(iter(train_loader))
(data, target) = (data.to(device), target.to(device))
logits = model(data)
(batch_size, num_classes) = logits.shape
print(f'NO SHAPE MISMATCH ERRORS IT SEEMS...')
print(f'BATCH SIZE: {batch_size}, NUM. CLASSES: {num_classes}')
###Output
_____no_output_____
###Markdown
Experiment - Baseline CNN
URL: https://www.kaggle.com/pradheeprio/malaria-cell-image-detection-using-cnn-acc-95
Configuring the parameters
###Code
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
print(f'DEVICE USED: {DEVICE}')
RANDOM_SEED = 42
BATCH_SIZE = 64
NUM_EPOCHS = 30
LEARNING_RATE = 0.001
IMG_RES = (68, 68)
NUM_WORKERS = 2
NUM_CLASSES = 2
_ = torch.manual_seed(RANDOM_SEED)
###Output
DEVICE USED: cuda
###Markdown
Creating the train and test data loaders
###Code
(train_loader, test_loader) = create_data_loaders(IMG_RES)
###Output
_____no_output_____
###Markdown
Implementing the model
###Code
w = IMG_RES[0]
print(f'INPUT WIDTH: {w}', end='\n\n')
w = conv(width=w, kernel_size=3, padding=0, stride=1)
print(f'C1: {w}, in_ch: 3, out_ch: 16')
w = pool(width=w, pool_size=2)
print(f'P1: {w}', end='\n\n')
w = conv(width=w, kernel_size=3, padding=0, stride=1)
print(f'C2: {w}, in_ch: 16, out_ch: 64')
w = pool(width=w, pool_size=2)
print(f'P2: {w}', end='\n\n')
print(f'FLAT SHAPE: {w * w * 64}')
class BaselineCNN(nn.Module):
def __init__(self, num_classes):
super().__init__()
self.conv_block1 = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1, padding=0),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2),
nn.Dropout2d(p=0.2)
)
self.conv_block2 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=64, kernel_size=3, stride=1, padding=0),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2),
nn.Dropout2d(p=0.3)
)
self.dense_block = nn.Sequential(
nn.Linear(in_features=15 * 15 * 64, out_features=64),
nn.ReLU(),
nn.Dropout(p=0.5),
nn.Linear(in_features=64, out_features=num_classes)
)
def forward(self, X):
out = X
out = self.conv_block1(out)
out = self.conv_block2(out)
out = torch.flatten(out, 1)
out = self.dense_block(out)
return out
###Output
_____no_output_____
###Markdown
Testing the propagation with a single batch
###Code
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
print(f'DEVICE USED: {DEVICE}')
model = BaselineCNN(num_classes=2).to(DEVICE)
test_forward_prop(model, train_loader, DEVICE)
###Output
DEVICE USED: cuda
NO SHAPE MISMATCH ERRORS IT SEEMS...
BATCH SIZE: 64, NUM. CLASSES: 2
###Markdown
Printing the model's summary
###Code
model = BaselineCNN(num_classes=2).to(DEVICE)
summary(model, input_size=(3, 68, 68))
###Output
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 16, 66, 66] 448
ReLU-2 [-1, 16, 66, 66] 0
MaxPool2d-3 [-1, 16, 33, 33] 0
Dropout2d-4 [-1, 16, 33, 33] 0
Conv2d-5 [-1, 64, 31, 31] 9,280
ReLU-6 [-1, 64, 31, 31] 0
MaxPool2d-7 [-1, 64, 15, 15] 0
Dropout2d-8 [-1, 64, 15, 15] 0
Linear-9 [-1, 64] 921,664
ReLU-10 [-1, 64] 0
Dropout-11 [-1, 64] 0
Linear-12 [-1, 2] 130
================================================================
Total params: 931,522
Trainable params: 931,522
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.05
Forward/backward pass size (MB): 2.49
Params size (MB): 3.55
Estimated Total Size (MB): 6.10
----------------------------------------------------------------
###Markdown
Running the experiment The model is trained for 30 epochs with the Adam optimizer at the learning rate of 0.001 set in the configuration above.
###Code
%%time
model_name = 'Baseline CNN'
model = BaselineCNN(num_classes=NUM_CLASSES).to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
criterion = nn.CrossEntropyLoss()
(trainer, train_history, test_history) = create_training_procedure(
model, optimizer, criterion, DEVICE, train_loader, test_loader)
print(f'# MODEL: {model_name}, LR: {LEARNING_RATE}')
state = trainer.run(train_loader, max_epochs=NUM_EPOCHS)
display_results(train_history, test_history,
    f'{model_name} - lr: {LEARNING_RATE}')
###Output
# MODEL: Baseline CNN, LR: 0.001
Epoch: 1, Train Avg. Acc.: 91.79, Train Avg. Loss: 0.24 - Test Avg. Acc.: 92.63, Test Avg. Loss: 0.23
Epoch: 2, Train Avg. Acc.: 93.96, Train Avg. Loss: 0.18 - Test Avg. Acc.: 94.56, Test Avg. Loss: 0.17
Epoch: 3, Train Avg. Acc.: 95.03, Train Avg. Loss: 0.16 - Test Avg. Acc.: 95.46, Test Avg. Loss: 0.15
Epoch: 4, Train Avg. Acc.: 95.30, Train Avg. Loss: 0.15 - Test Avg. Acc.: 95.41, Test Avg. Loss: 0.14
Epoch: 5, Train Avg. Acc.: 95.42, Train Avg. Loss: 0.14 - Test Avg. Acc.: 95.59, Test Avg. Loss: 0.13
Epoch: 6, Train Avg. Acc.: 95.46, Train Avg. Loss: 0.14 - Test Avg. Acc.: 95.68, Test Avg. Loss: 0.13
Epoch: 7, Train Avg. Acc.: 95.34, Train Avg. Loss: 0.14 - Test Avg. Acc.: 95.32, Test Avg. Loss: 0.14
Epoch: 8, Train Avg. Acc.: 95.54, Train Avg. Loss: 0.13 - Test Avg. Acc.: 95.74, Test Avg. Loss: 0.13
Epoch: 9, Train Avg. Acc.: 95.46, Train Avg. Loss: 0.14 - Test Avg. Acc.: 95.41, Test Avg. Loss: 0.13
Epoch: 10, Train Avg. Acc.: 95.32, Train Avg. Loss: 0.14 - Test Avg. Acc.: 95.45, Test Avg. Loss: 0.13
Epoch: 11, Train Avg. Acc.: 95.55, Train Avg. Loss: 0.13 - Test Avg. Acc.: 95.54, Test Avg. Loss: 0.13
Epoch: 12, Train Avg. Acc.: 95.60, Train Avg. Loss: 0.13 - Test Avg. Acc.: 95.61, Test Avg. Loss: 0.13
Epoch: 13, Train Avg. Acc.: 95.55, Train Avg. Loss: 0.13 - Test Avg. Acc.: 95.65, Test Avg. Loss: 0.12
Epoch: 14, Train Avg. Acc.: 95.63, Train Avg. Loss: 0.13 - Test Avg. Acc.: 95.61, Test Avg. Loss: 0.13
Epoch: 15, Train Avg. Acc.: 95.56, Train Avg. Loss: 0.13 - Test Avg. Acc.: 95.77, Test Avg. Loss: 0.13
Epoch: 16, Train Avg. Acc.: 95.69, Train Avg. Loss: 0.13 - Test Avg. Acc.: 95.81, Test Avg. Loss: 0.12
Epoch: 17, Train Avg. Acc.: 95.48, Train Avg. Loss: 0.13 - Test Avg. Acc.: 95.39, Test Avg. Loss: 0.13
Epoch: 18, Train Avg. Acc.: 95.69, Train Avg. Loss: 0.13 - Test Avg. Acc.: 95.75, Test Avg. Loss: 0.13
Epoch: 19, Train Avg. Acc.: 95.77, Train Avg. Loss: 0.12 - Test Avg. Acc.: 95.79, Test Avg. Loss: 0.12
Epoch: 20, Train Avg. Acc.: 95.74, Train Avg. Loss: 0.13 - Test Avg. Acc.: 95.74, Test Avg. Loss: 0.12
Epoch: 21, Train Avg. Acc.: 95.75, Train Avg. Loss: 0.13 - Test Avg. Acc.: 95.72, Test Avg. Loss: 0.12
Epoch: 22, Train Avg. Acc.: 95.83, Train Avg. Loss: 0.12 - Test Avg. Acc.: 95.79, Test Avg. Loss: 0.12
Epoch: 23, Train Avg. Acc.: 95.83, Train Avg. Loss: 0.13 - Test Avg. Acc.: 95.72, Test Avg. Loss: 0.12
Epoch: 24, Train Avg. Acc.: 95.91, Train Avg. Loss: 0.12 - Test Avg. Acc.: 95.75, Test Avg. Loss: 0.12
Epoch: 25, Train Avg. Acc.: 95.85, Train Avg. Loss: 0.12 - Test Avg. Acc.: 95.92, Test Avg. Loss: 0.12
Epoch: 26, Train Avg. Acc.: 95.80, Train Avg. Loss: 0.12 - Test Avg. Acc.: 95.90, Test Avg. Loss: 0.12
Epoch: 27, Train Avg. Acc.: 95.77, Train Avg. Loss: 0.12 - Test Avg. Acc.: 95.85, Test Avg. Loss: 0.12
Epoch: 28, Train Avg. Acc.: 95.81, Train Avg. Loss: 0.12 - Test Avg. Acc.: 95.74, Test Avg. Loss: 0.12
Epoch: 29, Train Avg. Acc.: 95.78, Train Avg. Loss: 0.12 - Test Avg. Acc.: 95.70, Test Avg. Loss: 0.12
Epoch: 30, Train Avg. Acc.: 95.71, Train Avg. Loss: 0.12 - Test Avg. Acc.: 95.85, Test Avg. Loss: 0.12
###Markdown
Experiment - LeNet-5 URL: https://towardsdatascience.com/implementing-yann-lecuns-lenet-5-in-pytorch-5e05a0911320 Configuring the Parameters
###Code
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
print(f'DEVICE USED: {DEVICE}')
RANDOM_SEED = 42
BATCH_SIZE = 64
NUM_EPOCHS = 30
LEARNING_RATE = 0.001
IMG_RES = (32, 32)
NUM_WORKERS = 2
NUM_CLASSES = 2
_ = torch.manual_seed(RANDOM_SEED)
###Output
DEVICE USED: cuda
###Markdown
Creating the train and test data loaders
###Code
(train_loader, test_loader) = create_data_loaders(IMG_RES)
###Output
_____no_output_____
###Markdown
Implementing the model
###Code
w = IMG_RES[0]
print(f'INPUT WIDTH: {w}', end='\n\n')
w = conv(width=w, kernel_size=5, padding=0, stride=1)
print(f'C1: {w}, in_ch: 3, out_ch: 6')
w = pool(width=w, pool_size=2)
print(f'P1: {w}', end='\n\n')
w = conv(width=w, kernel_size=5, padding=0, stride=1)
print(f'C2: {w}, in_ch: 6, out_ch: 16')
w = pool(width=w, pool_size=2)
print(f'P2: {w}', end='\n\n')
w = conv(width=w, kernel_size=5, padding=0, stride=1)
print(f'C3: {w}, in_ch: 16, out_ch: 120')
print(f'FLATTENED SHAPE: {w * w * 120}')
class LeNet5(nn.Module):
def __init__(self, num_classes):
super().__init__()
self.conv_block1 = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5, stride=1, padding=0),
nn.Tanh(),
nn.AvgPool2d(kernel_size=2)
)
self.conv_block2 = nn.Sequential(
nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5, stride=1, padding=0),
nn.Tanh(),
nn.AvgPool2d(kernel_size=2)
)
self.conv_block3 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=120, kernel_size=5, stride=1, padding=0),
nn.Tanh()
)
self.dense_block = nn.Sequential(
nn.Linear(in_features=1 * 1 * 120, out_features=84),
nn.Tanh(),
nn.Linear(in_features=84, out_features=num_classes)
)
def forward(self, X):
out = X
out = self.conv_block1(out)
out = self.conv_block2(out)
out = self.conv_block3(out)
out = torch.flatten(out, 1)
out = self.dense_block(out)
return out
###Output
_____no_output_____
###Markdown
Testing the propagation with a single batch
###Code
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
print(f'DEVICE USED: {DEVICE}')
model = LeNet5(num_classes=2).to(DEVICE)
test_forward_prop(model, train_loader, DEVICE)
###Output
DEVICE USED: cuda
NO SHAPE MISMATCH ERRORS IT SEEMS...
BATCH SIZE: 64, NUM. CLASSES: 2
###Markdown
Printing the model's summary
###Code
model = LeNet5(num_classes=2).to(DEVICE)
summary(model, input_size=(3, 32, 32))
###Output
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 6, 28, 28] 456
Tanh-2 [-1, 6, 28, 28] 0
AvgPool2d-3 [-1, 6, 14, 14] 0
Conv2d-4 [-1, 16, 10, 10] 2,416
Tanh-5 [-1, 16, 10, 10] 0
AvgPool2d-6 [-1, 16, 5, 5] 0
Conv2d-7 [-1, 120, 1, 1] 48,120
Tanh-8 [-1, 120, 1, 1] 0
Linear-9 [-1, 84] 10,164
Tanh-10 [-1, 84] 0
Linear-11 [-1, 2] 170
================================================================
Total params: 61,326
Trainable params: 61,326
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.01
Forward/backward pass size (MB): 0.11
Params size (MB): 0.23
Estimated Total Size (MB): 0.36
----------------------------------------------------------------
###Markdown
Running the experiment The model is trained for 30 epochs with the Adam optimizer at the learning rate of 0.001 set in the configuration above.
###Code
%%time
model_name = 'LeNet-5'
model = LeNet5(num_classes=NUM_CLASSES).to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
criterion = nn.CrossEntropyLoss()
(trainer, train_history, test_history) = create_training_procedure(
model, optimizer, criterion, DEVICE, train_loader, test_loader)
print(f'# MODEL: {model_name}, LR: {LEARNING_RATE}')
state = trainer.run(train_loader, max_epochs=NUM_EPOCHS)
display_results(train_history, test_history,
    f'{model_name} - lr: {LEARNING_RATE}')
###Output
# MODEL: LeNet-5, LR: 0.001
Epoch: 1, Train Avg. Acc.: 67.85, Train Avg. Loss: 0.60 - Test Avg. Acc.: 67.13, Test Avg. Loss: 0.61
Epoch: 2, Train Avg. Acc.: 70.43, Train Avg. Loss: 0.58 - Test Avg. Acc.: 69.58, Test Avg. Loss: 0.58
Epoch: 3, Train Avg. Acc.: 70.55, Train Avg. Loss: 0.56 - Test Avg. Acc.: 70.28, Test Avg. Loss: 0.57
Epoch: 4, Train Avg. Acc.: 73.08, Train Avg. Loss: 0.54 - Test Avg. Acc.: 72.70, Test Avg. Loss: 0.55
Epoch: 5, Train Avg. Acc.: 79.79, Train Avg. Loss: 0.44 - Test Avg. Acc.: 78.86, Test Avg. Loss: 0.45
Epoch: 6, Train Avg. Acc.: 84.31, Train Avg. Loss: 0.36 - Test Avg. Acc.: 84.02, Test Avg. Loss: 0.36
Epoch: 7, Train Avg. Acc.: 84.04, Train Avg. Loss: 0.37 - Test Avg. Acc.: 82.51, Test Avg. Loss: 0.38
Epoch: 8, Train Avg. Acc.: 86.48, Train Avg. Loss: 0.31 - Test Avg. Acc.: 85.96, Test Avg. Loss: 0.32
Epoch: 9, Train Avg. Acc.: 88.03, Train Avg. Loss: 0.29 - Test Avg. Acc.: 87.34, Test Avg. Loss: 0.30
Epoch: 10, Train Avg. Acc.: 88.03, Train Avg. Loss: 0.29 - Test Avg. Acc.: 88.13, Test Avg. Loss: 0.30
Epoch: 11, Train Avg. Acc.: 89.19, Train Avg. Loss: 0.26 - Test Avg. Acc.: 88.68, Test Avg. Loss: 0.27
Epoch: 12, Train Avg. Acc.: 87.39, Train Avg. Loss: 0.30 - Test Avg. Acc.: 87.65, Test Avg. Loss: 0.31
Epoch: 13, Train Avg. Acc.: 89.53, Train Avg. Loss: 0.25 - Test Avg. Acc.: 89.50, Test Avg. Loss: 0.26
Epoch: 14, Train Avg. Acc.: 89.76, Train Avg. Loss: 0.25 - Test Avg. Acc.: 89.57, Test Avg. Loss: 0.26
Epoch: 15, Train Avg. Acc.: 89.93, Train Avg. Loss: 0.24 - Test Avg. Acc.: 89.91, Test Avg. Loss: 0.25
Epoch: 16, Train Avg. Acc.: 91.01, Train Avg. Loss: 0.23 - Test Avg. Acc.: 90.42, Test Avg. Loss: 0.24
Epoch: 17, Train Avg. Acc.: 90.79, Train Avg. Loss: 0.22 - Test Avg. Acc.: 91.11, Test Avg. Loss: 0.23
Epoch: 18, Train Avg. Acc.: 90.38, Train Avg. Loss: 0.24 - Test Avg. Acc.: 90.55, Test Avg. Loss: 0.25
Epoch: 19, Train Avg. Acc.: 91.00, Train Avg. Loss: 0.22 - Test Avg. Acc.: 91.04, Test Avg. Loss: 0.23
Epoch: 20, Train Avg. Acc.: 90.93, Train Avg. Loss: 0.23 - Test Avg. Acc.: 91.22, Test Avg. Loss: 0.24
Epoch: 21, Train Avg. Acc.: 91.57, Train Avg. Loss: 0.21 - Test Avg. Acc.: 91.42, Test Avg. Loss: 0.22
Epoch: 22, Train Avg. Acc.: 91.54, Train Avg. Loss: 0.21 - Test Avg. Acc.: 91.22, Test Avg. Loss: 0.22
Epoch: 23, Train Avg. Acc.: 91.86, Train Avg. Loss: 0.20 - Test Avg. Acc.: 92.04, Test Avg. Loss: 0.21
Epoch: 24, Train Avg. Acc.: 92.07, Train Avg. Loss: 0.20 - Test Avg. Acc.: 91.98, Test Avg. Loss: 0.21
Epoch: 25, Train Avg. Acc.: 92.14, Train Avg. Loss: 0.20 - Test Avg. Acc.: 92.27, Test Avg. Loss: 0.21
Epoch: 26, Train Avg. Acc.: 92.71, Train Avg. Loss: 0.19 - Test Avg. Acc.: 92.67, Test Avg. Loss: 0.19
Epoch: 27, Train Avg. Acc.: 92.43, Train Avg. Loss: 0.19 - Test Avg. Acc.: 92.20, Test Avg. Loss: 0.20
Epoch: 28, Train Avg. Acc.: 92.88, Train Avg. Loss: 0.18 - Test Avg. Acc.: 92.67, Test Avg. Loss: 0.20
Epoch: 29, Train Avg. Acc.: 92.70, Train Avg. Loss: 0.18 - Test Avg. Acc.: 92.36, Test Avg. Loss: 0.20
Epoch: 30, Train Avg. Acc.: 92.60, Train Avg. Loss: 0.19 - Test Avg. Acc.: 92.49, Test Avg. Loss: 0.19
###Markdown
Experiment - ResNet18 Configuring the parameters
###Code
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
print(f'DEVICE USED: {DEVICE}')
RANDOM_SEED = 42
BATCH_SIZE = 64
NUM_EPOCHS = 30
LEARNING_RATE = 0.001
IMG_RES = (244, 244)  # note: the data loaders resize to 244x244, while the summary below uses the standard 224x224 ResNet input
NUM_WORKERS = 2
NUM_CLASSES = 2
_ = torch.manual_seed(RANDOM_SEED)
###Output
DEVICE USED: cuda
###Markdown
Creating the train and test data loaders
###Code
(train_loader, test_loader) = create_data_loaders(IMG_RES)
###Output
_____no_output_____
###Markdown
Printing the model's summary
###Code
model = models.resnet18(pretrained=False, num_classes=2).to(DEVICE)
summary(model, input_size=(3, 224, 224))
###Output
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 64, 112, 112] 9,408
BatchNorm2d-2 [-1, 64, 112, 112] 128
ReLU-3 [-1, 64, 112, 112] 0
MaxPool2d-4 [-1, 64, 56, 56] 0
Conv2d-5 [-1, 64, 56, 56] 36,864
BatchNorm2d-6 [-1, 64, 56, 56] 128
ReLU-7 [-1, 64, 56, 56] 0
Conv2d-8 [-1, 64, 56, 56] 36,864
BatchNorm2d-9 [-1, 64, 56, 56] 128
ReLU-10 [-1, 64, 56, 56] 0
BasicBlock-11 [-1, 64, 56, 56] 0
Conv2d-12 [-1, 64, 56, 56] 36,864
BatchNorm2d-13 [-1, 64, 56, 56] 128
ReLU-14 [-1, 64, 56, 56] 0
Conv2d-15 [-1, 64, 56, 56] 36,864
BatchNorm2d-16 [-1, 64, 56, 56] 128
ReLU-17 [-1, 64, 56, 56] 0
BasicBlock-18 [-1, 64, 56, 56] 0
Conv2d-19 [-1, 128, 28, 28] 73,728
BatchNorm2d-20 [-1, 128, 28, 28] 256
ReLU-21 [-1, 128, 28, 28] 0
Conv2d-22 [-1, 128, 28, 28] 147,456
BatchNorm2d-23 [-1, 128, 28, 28] 256
Conv2d-24 [-1, 128, 28, 28] 8,192
BatchNorm2d-25 [-1, 128, 28, 28] 256
ReLU-26 [-1, 128, 28, 28] 0
BasicBlock-27 [-1, 128, 28, 28] 0
Conv2d-28 [-1, 128, 28, 28] 147,456
BatchNorm2d-29 [-1, 128, 28, 28] 256
ReLU-30 [-1, 128, 28, 28] 0
Conv2d-31 [-1, 128, 28, 28] 147,456
BatchNorm2d-32 [-1, 128, 28, 28] 256
ReLU-33 [-1, 128, 28, 28] 0
BasicBlock-34 [-1, 128, 28, 28] 0
Conv2d-35 [-1, 256, 14, 14] 294,912
BatchNorm2d-36 [-1, 256, 14, 14] 512
ReLU-37 [-1, 256, 14, 14] 0
Conv2d-38 [-1, 256, 14, 14] 589,824
BatchNorm2d-39 [-1, 256, 14, 14] 512
Conv2d-40 [-1, 256, 14, 14] 32,768
BatchNorm2d-41 [-1, 256, 14, 14] 512
ReLU-42 [-1, 256, 14, 14] 0
BasicBlock-43 [-1, 256, 14, 14] 0
Conv2d-44 [-1, 256, 14, 14] 589,824
BatchNorm2d-45 [-1, 256, 14, 14] 512
ReLU-46 [-1, 256, 14, 14] 0
Conv2d-47 [-1, 256, 14, 14] 589,824
BatchNorm2d-48 [-1, 256, 14, 14] 512
ReLU-49 [-1, 256, 14, 14] 0
BasicBlock-50 [-1, 256, 14, 14] 0
Conv2d-51 [-1, 512, 7, 7] 1,179,648
BatchNorm2d-52 [-1, 512, 7, 7] 1,024
ReLU-53 [-1, 512, 7, 7] 0
Conv2d-54 [-1, 512, 7, 7] 2,359,296
BatchNorm2d-55 [-1, 512, 7, 7] 1,024
Conv2d-56 [-1, 512, 7, 7] 131,072
BatchNorm2d-57 [-1, 512, 7, 7] 1,024
ReLU-58 [-1, 512, 7, 7] 0
BasicBlock-59 [-1, 512, 7, 7] 0
Conv2d-60 [-1, 512, 7, 7] 2,359,296
BatchNorm2d-61 [-1, 512, 7, 7] 1,024
ReLU-62 [-1, 512, 7, 7] 0
Conv2d-63 [-1, 512, 7, 7] 2,359,296
BatchNorm2d-64 [-1, 512, 7, 7] 1,024
ReLU-65 [-1, 512, 7, 7] 0
BasicBlock-66 [-1, 512, 7, 7] 0
AdaptiveAvgPool2d-67 [-1, 512, 1, 1] 0
Linear-68 [-1, 2] 1,026
================================================================
Total params: 11,177,538
Trainable params: 11,177,538
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 62.79
Params size (MB): 42.64
Estimated Total Size (MB): 106.00
----------------------------------------------------------------
###Markdown
Running the experiment The model is trained for 30 epochs with the Adam optimizer at the learning rate of 0.001 set in the configuration above.
###Code
%%time
model_name = 'ResNet18'
model = models.resnet18(pretrained=False, num_classes=NUM_CLASSES).to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
criterion = nn.CrossEntropyLoss()
(trainer, train_history, test_history) = create_training_procedure(
model, optimizer, criterion, DEVICE, train_loader, test_loader)
print(f'# MODEL: {model_name}, LR: {LEARNING_RATE}')
state = trainer.run(train_loader, max_epochs=NUM_EPOCHS)
display_results(train_history, test_history,
    f'{model_name} - lr: {LEARNING_RATE}')
###Output
# MODEL: ResNet18, LR: 0.001
Epoch: 1, Train Avg. Acc.: 95.53, Train Avg. Loss: 0.14 - Test Avg. Acc.: 95.74, Test Avg. Loss: 0.14
Epoch: 2, Train Avg. Acc.: 94.22, Train Avg. Loss: 0.17 - Test Avg. Acc.: 94.90, Test Avg. Loss: 0.15
Epoch: 3, Train Avg. Acc.: 95.80, Train Avg. Loss: 0.13 - Test Avg. Acc.: 96.28, Test Avg. Loss: 0.13
Epoch: 4, Train Avg. Acc.: 95.70, Train Avg. Loss: 0.14 - Test Avg. Acc.: 95.90, Test Avg. Loss: 0.14
Epoch: 5, Train Avg. Acc.: 95.64, Train Avg. Loss: 0.13 - Test Avg. Acc.: 96.39, Test Avg. Loss: 0.11
Epoch: 6, Train Avg. Acc.: 94.96, Train Avg. Loss: 0.14 - Test Avg. Acc.: 95.48, Test Avg. Loss: 0.13
Epoch: 7, Train Avg. Acc.: 96.20, Train Avg. Loss: 0.12 - Test Avg. Acc.: 96.59, Test Avg. Loss: 0.11
Epoch: 8, Train Avg. Acc.: 95.77, Train Avg. Loss: 0.12 - Test Avg. Acc.: 95.86, Test Avg. Loss: 0.12
Epoch: 9, Train Avg. Acc.: 96.11, Train Avg. Loss: 0.12 - Test Avg. Acc.: 96.39, Test Avg. Loss: 0.12
Epoch: 10, Train Avg. Acc.: 96.16, Train Avg. Loss: 0.13 - Test Avg. Acc.: 96.48, Test Avg. Loss: 0.12
Epoch: 11, Train Avg. Acc.: 96.20, Train Avg. Loss: 0.11 - Test Avg. Acc.: 96.34, Test Avg. Loss: 0.11
Epoch: 12, Train Avg. Acc.: 96.49, Train Avg. Loss: 0.10 - Test Avg. Acc.: 96.53, Test Avg. Loss: 0.10
Epoch: 13, Train Avg. Acc.: 96.06, Train Avg. Loss: 0.11 - Test Avg. Acc.: 96.26, Test Avg. Loss: 0.11
Epoch: 14, Train Avg. Acc.: 96.67, Train Avg. Loss: 0.10 - Test Avg. Acc.: 96.86, Test Avg. Loss: 0.10
Epoch: 15, Train Avg. Acc.: 96.38, Train Avg. Loss: 0.11 - Test Avg. Acc.: 96.75, Test Avg. Loss: 0.10
Epoch: 16, Train Avg. Acc.: 96.43, Train Avg. Loss: 0.10 - Test Avg. Acc.: 96.55, Test Avg. Loss: 0.10
Epoch: 17, Train Avg. Acc.: 96.52, Train Avg. Loss: 0.10 - Test Avg. Acc.: 96.57, Test Avg. Loss: 0.10
Epoch: 18, Train Avg. Acc.: 95.07, Train Avg. Loss: 0.13 - Test Avg. Acc.: 95.30, Test Avg. Loss: 0.12
Epoch: 19, Train Avg. Acc.: 96.84, Train Avg. Loss: 0.09 - Test Avg. Acc.: 96.88, Test Avg. Loss: 0.09
Epoch: 20, Train Avg. Acc.: 96.82, Train Avg. Loss: 0.09 - Test Avg. Acc.: 96.90, Test Avg. Loss: 0.09
Epoch: 21, Train Avg. Acc.: 96.81, Train Avg. Loss: 0.09 - Test Avg. Acc.: 97.10, Test Avg. Loss: 0.09
Epoch: 22, Train Avg. Acc.: 96.69, Train Avg. Loss: 0.09 - Test Avg. Acc.: 96.68, Test Avg. Loss: 0.09
Epoch: 23, Train Avg. Acc.: 96.87, Train Avg. Loss: 0.09 - Test Avg. Acc.: 96.93, Test Avg. Loss: 0.09
Epoch: 24, Train Avg. Acc.: 96.92, Train Avg. Loss: 0.09 - Test Avg. Acc.: 97.01, Test Avg. Loss: 0.08
Epoch: 25, Train Avg. Acc.: 97.09, Train Avg. Loss: 0.08 - Test Avg. Acc.: 96.95, Test Avg. Loss: 0.09
Epoch: 26, Train Avg. Acc.: 97.00, Train Avg. Loss: 0.08 - Test Avg. Acc.: 97.24, Test Avg. Loss: 0.08
Epoch: 27, Train Avg. Acc.: 97.02, Train Avg. Loss: 0.08 - Test Avg. Acc.: 96.84, Test Avg. Loss: 0.08
Epoch: 28, Train Avg. Acc.: 97.28, Train Avg. Loss: 0.08 - Test Avg. Acc.: 97.26, Test Avg. Loss: 0.08
Epoch: 29, Train Avg. Acc.: 97.28, Train Avg. Loss: 0.08 - Test Avg. Acc.: 97.21, Test Avg. Loss: 0.08
Epoch: 30, Train Avg. Acc.: 97.54, Train Avg. Loss: 0.07 - Test Avg. Acc.: 97.17, Test Avg. Loss: 0.08
|
notebooks/PrepareRecs/CombineSignals.ipynb | ###Markdown
Ensemble signals into a linear model
###Code
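# NOTE (assumption): this snippet references imports, configuration values and an
# `lm` helper that are defined outside the cells shown here. The definitions below
# are an illustrative sketch only: the names appear in the original code, but the
# concrete values and the exact `lm` implementation are assumptions.
import pickle
import numpy as np
import pandas as pd
import scipy.stats as st
import statsmodels.formula.api as smf

def lm(formula, data):
    # assumed: an R-style linear-model helper wrapping a statsmodels OLS fit
    return smf.ols(formula, data=data).fit()

delta_sources = ["user", "item"]   # hypothetical signal names
cross_validate = False             # assumed default
renormalize_variance_iters = 3     # assumed default
confidence_interval = 0.95         # assumed default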
def get_deltas(sources):
deltas = []
for source_filename in sources:
delta = pickle.load(open(source_filename, "rb"))
source = source_filename.split(".")[0].split("_")[0]
delta = delta.rename({x: x + f"_{source}" for x in delta.columns}, axis=1)
deltas.append(delta)
return pd.concat(deltas, axis=1)
def clean_data(df):
# fill missing data with reasonable defaults
delta_sources = [x.split("_")[-1] for x in df.columns if "delta_var" in x]
for source in delta_sources:
df.loc[lambda x: x[f"delta_var_{source}"] == np.inf, f"delta_{source}"] = np.nan
df.loc[
lambda x: x[f"delta_var_{source}"] == np.inf, f"delta_var_{source}"
] = np.nan
df[f"delta_{source}"] = df[f"delta_{source}"].fillna(0)
df[f"delta_var_{source}"] = df[f"delta_var_{source}"].fillna(df[f"delta_var_{source}"].quantile(0.8))
return df
if cross_validate:
train_df = get_deltas([f"{x}_loocv.pkl" for x in delta_sources])
else:
train_df = get_deltas([f"{x}.pkl" for x in delta_sources])
delta_corrs = train_df[[f"delta_{source}" for source in delta_sources]].corr()
labelled_data = pickle.load(open("user_anime_list.pkl", "rb"))
labelled_data = clean_data(labelled_data.merge(train_df, on="anime_id", how="left"))
# get model
delta_cols = [f"delta_{source}" for source in delta_sources]
formula = "score ~ " + " + ".join(delta_cols)
model = lm(formula, labelled_data)
print(model.summary())
df = clean_data(get_deltas([f"{x}.pkl" for x in delta_sources]))
blp = pickle.load(open("baseline_predictor.pkl", "rb"))
df["blp"] = blp["blp"]
df["score"] = model.predict(df) + df["blp"]
df["delta"] = df["score"] - df["blp"]
valid_baseline = ~df['blp'].isna()
df = df.loc[valid_baseline]
###Output
_____no_output_____
###Markdown
Compute Confidence Intervals
###Code
for _ in range(renormalize_variance_iters):
for source in delta_sources:
seen_shows = pickle.load(open("user_anime_list.pkl", "rb"))
seen_shows = seen_shows.set_index("anime_id")
seen_shows["delta"] = df[f"delta_{source}"]
single_delta_model = lm("score ~ delta + 0", seen_shows)
seen_shows["pred_score"] = single_delta_model.predict(df)
seen_shows["pred_std"] = np.sqrt(
(df[f"delta_var_{source}"] + df[f"delta_{source}"] ** 2)
* (
single_delta_model.bse["delta"] ** 2
+ single_delta_model.params["delta"] ** 2
)
- (df[f"delta_{source}"] ** 2 * single_delta_model.params["delta"] ** 2)
)
seen_shows = seen_shows.loc[lambda x: x["pred_std"] < np.inf]
std_mult = (
(seen_shows["pred_score"] - seen_shows["score"]) / seen_shows["pred_std"]
).std()
df[f"delta_var_{source}"] *= std_mult ** 2
# compute error bars
model_vars = pd.DataFrame()
for col in delta_cols:
source = col.split("_")[1]
model_vars[f"model_delta_var_{source}"] = (
(df[f"delta_var_{source}"] + df[f"delta_{source}"] ** 2)
* (model.bse[f"delta_{source}"] ** 2 + model.params[f"delta_{source}"] ** 2)
) - df[f"delta_{source}"] ** 2 * model.params[f"delta_{source}"] ** 2
model_stds = np.sqrt(model_vars)
delta_corrs = delta_corrs.loc[lambda x: (x.index.isin(delta_cols)), delta_cols]
delta_variance = np.sum(
(model_stds.values @ delta_corrs.values) * model_stds.values, axis=1
)
intercept_variance = 0
if "Intercept" in model.bse:
intercept_variance = model.bse["Intercept"] ** 2
df["std"] = np.sqrt(delta_variance + intercept_variance)
for _ in range(renormalize_variance_iters):
seen_shows = pickle.load(open("user_anime_list.pkl", "rb"))
seen_shows = seen_shows.set_index("anime_id")
seen_shows["score"] += df["blp"]
seen_shows["pred_score"] = df[f"score"]
seen_shows["pred_std"] = df["std"]
std_mult = (
(seen_shows["pred_score"] - seen_shows["score"]) / seen_shows["pred_std"]
).std()
df["std"] *= std_mult
zscore = st.norm.ppf(1 - (1 - confidence_interval) / 2)
df["score_lower_bound"] = df["score"] - df["std"] * zscore
df["score_upper_bound"] = df["score"] + df["std"] * zscore
###Output
_____no_output_____
###Markdown
Display Recommendations
###Code
anime = pd.read_csv("../../cleaned_data/anime.csv")
anime = anime[["anime_id", "title", "medium", "genres"]]
df = df.merge(anime, on="anime_id").set_index("anime_id")
# reorder the columns
cols = [
"title",
"medium",
"score",
"score_lower_bound",
"score_upper_bound",
"delta",
"std",
] + delta_cols
df = df[cols + [x for x in df.columns if x not in cols]]
related_series = pickle.load(open("../../processed_data/strict_relations_anime_graph.pkl", "rb"))
df = df.merge(related_series, on="anime_id").set_index("anime_id")
new_recs = df.loc[lambda x: ~x.index.isin(labelled_data.anime_id) & (x["medium"] == "tv")]
epsilon = 1e-6
min_bound = epsilon
if "Intercept" in model.params:
min_bound += model.params["Intercept"]
df.loc[lambda x: x["delta"] > min_bound].sort_values(
by="score_lower_bound", ascending=False
)[:20]
new_recs.loc[lambda x: (x["delta"] > min_bound)].sort_values(
by="score_lower_bound", ascending=False
).groupby("series_id").first().sort_values(by="score_lower_bound", ascending=False)[:50]
# Increased serendipity!
new_recs.loc[lambda x: (x["delta_user"] > 0)].sort_values(
by="score_lower_bound", ascending=False
).groupby("series_id").first().sort_values(by="score_lower_bound", ascending=False)[:50]
###Output
_____no_output_____ |
Machine Learning & Data Science Masterclass - JP/18-Naive-Bayes-and-NLP/01-Text-Classification - Flight tweets.ipynb | ###Markdown
NLP and Supervised Learning - Flight Tweets Sentiment Classification Classification of Text Data The Data Source: https://www.kaggle.com/crowdflower/twitter-airline-sentiment?select=Tweets.csv This data originally came from Crowdflower's Data for Everyone library. As the original source says, it is a sentiment analysis job about the problems of each major U.S. airline. Twitter data was scraped from February of 2015 and contributors were asked to first classify positive, negative, and neutral tweets, followed by categorizing negative reasons (such as "late flight" or "rude service"). Goal: Create a machine learning algorithm that can predict whether a tweet is positive, neutral, or negative. In the future we could use such an algorithm to automatically read and flag tweets for an airline, so that a customer service agent can reach out to the customer. Business Use Case: + The tweets are already labelled as negative, positive, or neutral, so we can use this information to build a classification model that takes in raw text and reports back the tweet sentiment. + This way, whenever the model detects a negative tweet we can alert customer service so they can reach out to the customer; there is no need for human staff to go through all the tweets.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('../Data/airline_tweets.csv')
df.head()
###Output
_____no_output_____
###Markdown
EDA
###Code
df.info()
df['airline_sentiment'].value_counts()
sns.countplot(data=df, x='airline_sentiment');
###Output
_____no_output_____
###Markdown
For our problem, we mainly want to catch `negative` tweets so that they can be handled properly. If the model confuses `neutral` and `positive` it doesn't really matter much, so the problem is effectively closer to binary classification than multi-class classification (an illustrative sketch of this appears after the next plot). In real-world use cases the requirements can differ, though: customer service may want to reach out to positive customers too.
###Code
plt.figure(figsize=(10,8))
sns.countplot(data=df, x='negativereason');
plt.xticks(rotation=90);
plt.figure(dpi=150)
sns.countplot(data=df, x='airline', hue='airline_sentiment');
###Output
_____no_output_____
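###Markdown
As an aside (not part of the original notebook), if we only care about catching negative tweets, the labels could be collapsed into a binary target. A minimal sketch, assuming the `df` loaded above:
###Code
# Hypothetical illustration only; the rest of the notebook keeps the three original classes
binary_target = df['airline_sentiment'].apply(
    lambda s: 'negative' if s == 'negative' else 'non-negative')
binary_target.value_counts()
###Output
_____no_output_____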
###Markdown
------- Features and Label For this, we use the raw tweet text for the analysis.
###Code
data = df[['airline_sentiment','text']]
data
X = data['text']
y = data['airline_sentiment']
###Output
_____no_output_____
###Markdown
----- Train test split
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=101)
###Output
_____no_output_____
###Markdown
----- Vectorization
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(stop_words = 'english')
X_train_tfidf = tfidf.fit_transform(X_train)
X_test_tfidf = tfidf.transform(X_test)
X_train_tfidf
###Output
_____no_output_____
###Markdown
**DO NOT USE .todense() for such a large sparse matrix!!!** ------ Model Comparisons - Naive Bayes, Logistic Regression, LinearSVC Naive Bayes
###Code
from sklearn.naive_bayes import MultinomialNB
nb = MultinomialNB()
nb.fit(X_train_tfidf, y_train)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
from sklearn.linear_model import LogisticRegression
log_model = LogisticRegression(max_iter=1000)
log_model.fit(X_train_tfidf, y_train)
###Output
_____no_output_____
###Markdown
SVM
###Code
from sklearn.svm import SVC, LinearSVC
rbf_svc = SVC()
rbf_svc.fit(X_train_tfidf, y_train)
linear_svc = LinearSVC()
linear_svc.fit(X_train_tfidf, y_train)
###Output
_____no_output_____
###Markdown
------- Performance Evaluation
###Code
from sklearn.metrics import classification_report, plot_confusion_matrix
# custom function
def report(model):
pred = model.predict(X_test_tfidf)
print(classification_report(y_test, pred))
plot_confusion_matrix(model, X_test_tfidf, y_test)
report(nb)
report(log_model)
report(rbf_svc)
report(linear_svc)
###Output
precision recall f1-score support
negative 0.82 0.89 0.86 1817
neutral 0.59 0.52 0.55 628
positive 0.76 0.64 0.69 483
accuracy 0.77 2928
macro avg 0.73 0.68 0.70 2928
weighted avg 0.76 0.77 0.77 2928
###Markdown
--------- Finalizing a Pipeline for Deployment on New Tweets If we were satisfied with a model's performance, we should set up a pipeline that can take in a raw tweet directly.
###Code
from sklearn.pipeline import Pipeline
pipe = Pipeline([
('tfidf', TfidfVectorizer()),
('scv', LinearSVC()),
])
# fit on entire data
pipe.fit(X, y)
new_tweet = ['I did enjoy the flight']
pipe.predict(new_tweet)
new_tweet = ['I did take from NY to LA']
pipe.predict(new_tweet)
new_tweet = ['it just ok flight']
pipe.predict(new_tweet)
new_tweet = ['so so flight']
pipe.predict(new_tweet)
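# (Illustrative aside, not part of the original notebook)
# For actual deployment, the fitted pipeline could be persisted and reloaded later,
# e.g. with joblib (the filename here is hypothetical):
import joblib
joblib.dump(pipe, 'tweet_sentiment_pipeline.joblib')
loaded_pipe = joblib.load('tweet_sentiment_pipeline.joblib')
loaded_pipe.predict(['what a great flight'])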
###Output
_____no_output_____ |
EDA/COVID Data Explore 13.ipynb | ###Markdown
Goal: I want to calculate mortality rates by ethnicity and then compare this to the correlation between overall mortality and several sociodemographic factors.
###Code
df = pd.read_csv('COVID_Cases_Restricted_Detailed_10312020.csv')
df.head()
#Drop 'Missing' and 'Unknown' from death_yn so that there are only cases where death = Yes or No. Drop 'Unknown' cases from race_ethnicity
df_oct = df[(df.death_yn != 'Missing') & (df.death_yn != 'Unknown') & (df.race_ethnicity_combined != 'Unknown')]
df_oct['race_ethnicity_combined'].value_counts()
#Abbreviate ethnicity_race names for simplicity
df_oct = df_oct.replace({'race_ethnicity_combined' : { 'White, Non-Hispanic' : 'W',
'Hispanic/Latino' : 'H/L',
'Black, Non-Hispanic' : 'B',
'Multiple/Other, Non-Hispanic ' : 'M/O',
'Asian, Non-Hispanic' : 'A',
'American Indian/Alaska Native, Non-Hispanic' : 'AI/AN',
'Native Hawaiian/Other Pacific Islander, Non-Hispanic' : 'NH/OPI'}})
df_oct['race_ethnicity_combined'].value_counts()
#Determine % Mortality Rate by Ethnicity
W = df_oct[df_oct.race_ethnicity_combined == "W"]
W_Mortality_Rate = float(len(W[W.death_yn == 'Yes'])) / len(W)
H = df_oct[df_oct.race_ethnicity_combined == "H/L"]
H_Mortality_Rate = float(len(H[H.death_yn == 'Yes'])) / len(H)
B = df_oct[df_oct.race_ethnicity_combined == "B"]
B_Mortality_Rate = float(len(B[B.death_yn == 'Yes'])) / len(B)
M = df_oct[df_oct.race_ethnicity_combined == "M/O"]
M_Mortality_Rate = float(len(M[M.death_yn == 'Yes'])) / len(M)
A = df_oct[df_oct.race_ethnicity_combined == "A"]
A_Mortality_Rate = float(len(A[A.death_yn == 'Yes'])) / len(A)
AI = df_oct[df_oct.race_ethnicity_combined == "AI/AN"]
AI_Mortality_Rate = float(len(AI[AI.death_yn == 'Yes'])) / len(AI)
NH = df_oct[df_oct.race_ethnicity_combined == "NH/OPI"]
NH_Mortality_Rate = float(len(NH[NH.death_yn == 'Yes'])) / len(NH)
df_Mrate = pd.DataFrame([('W', W_Mortality_Rate*100),
('H/L', H_Mortality_Rate*100),
('B', B_Mortality_Rate*100),
('M/O', M_Mortality_Rate*100),
('A' , A_Mortality_Rate*100),
('AI/AN', AI_Mortality_Rate*100),
('NH/OPI', NH_Mortality_Rate*100)],
columns=['Ethnicity', '% Mortality Rate'])
df_Mrate
###Output
_____no_output_____
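###Markdown
As a side note (not part of the original analysis), the per-ethnicity mortality rates above could be computed more concisely in a single pass with `groupby`. A sketch using the same `df_oct`:
###Code
# Hypothetical equivalent of the per-group calculation above (illustration only)
mortality_by_eth = (
    df_oct.assign(died=df_oct['death_yn'].eq('Yes'))
          .groupby('race_ethnicity_combined')['died']
          .mean()
          .mul(100)
          .rename('% Mortality Rate')
          .reset_index()
)
mortality_by_eth
###Output
_____no_output_____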
###Markdown
The next step is to attach selected sociodemographic factors to the CDC data so that the correlation with mortality can easily be calculated. The county FIPS code will be used to join the information by county.
###Code
df_oct.rename(columns={"county_fips_code": "FIPS"}, inplace=True)
print(df_oct.columns)
#Load the table with sociodemographic data by county and rename the columns I want to use for better readability
df_health = pd.read_excel('Health Factors by County 2020 County Health Rankings Rows.xls', sheet_name='Ranked Measure Data')
df_health.rename(columns={'Adult obesity - % Adults with Obesity':'% Obesity'
, 'Adult smoking - % Smokers':'% Smokers',
'Physical inactivity - % Physically Inactive':'% Phys. Inactive',
'Uninsured - % Uninsured': '% Uninsured',
'High school graduation - High School Graduation Rate':'% High School',
'Some college - % Some College':'% Some College',
'Unemployment - % Unemployed':'% Unemployed'}, inplace=True)
pd.set_option('display.max_columns', None)
df_health.head(3)
#Add the renamed columns on to the CDC data set based on the FIPS
df_oct = pd.merge(df_oct, df_health[['FIPS','% Obesity','% Smokers','% Phys. Inactive','% Uninsured','% High School','% Some College','% Unemployed']], on='FIPS', how='left')
df_oct.head(3)
#Map death_yn to numeric binaries
df_oct['death_yn'] = df_oct['death_yn'].map({'No': 0,'Yes': 1})
df_oct.head(100)
from scipy import stats
#df_oct.dtypes
#uniqueValues = df_oct['% High School'].unique()
#print(uniqueValues)
#Drop nan values from added columns
df_oct.dropna(subset = ['% Obesity', '% Smokers', '% Phys. Inactive','% Uninsured',
'% High School','% Some College','% Unemployed'], inplace=True)
#Calculate regression between death_yn and socioeconomic factors
lin_reg1 = stats.linregress(x=df_oct["death_yn"], y=df_oct["% Obesity"])
lin_reg2 = stats.linregress(x=df_oct["death_yn"], y=df_oct["% Smokers"])
lin_reg3 = stats.linregress(x=df_oct["death_yn"], y=df_oct["% Phys. Inactive"])
lin_reg4 = stats.linregress(x=df_oct["death_yn"], y=df_oct["% Uninsured"])
lin_reg5 = stats.linregress(x=df_oct["death_yn"], y=df_oct["% High School"])
lin_reg6 = stats.linregress(x=df_oct["death_yn"], y=df_oct["% Some College"])
lin_reg7 = stats.linregress(x=df_oct["death_yn"], y=df_oct["% Unemployed"])
print(lin_reg1[2],";",lin_reg2[2],";",lin_reg3[2],";",lin_reg4[2],";",lin_reg5[2],";",lin_reg6[2],";",lin_reg7[2])
###Output
-0.08459854226592793 ; -0.06398524017613383 ; 0.014410677238303301 ; -0.0713606770161389 ; -0.049987534422671315 ; 0.025242602884312675 ; 0.0473538502896234
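###Markdown
As an aside (not part of the original analysis), the same Pearson correlations can be read off in a single call; a sketch assuming the `df_oct` built above:
###Code
# Hypothetical one-call equivalent of the linregress r-values printed above
factor_cols = ['% Obesity', '% Smokers', '% Phys. Inactive', '% Uninsured',
               '% High School', '% Some College', '% Unemployed']
df_oct[['death_yn'] + factor_cols].corr()['death_yn'].drop('death_yn')
###Output
_____no_output_____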
|
image_classification/image_classification_cnn.ipynb | ###Markdown
Comments about Assignment 1 Adriano Mundo 10524163, Mario Sacaj 10521887 Assignment 1 for the ANN2DL course was to participate in a Kaggle competition whose objective was to develop a neural network architecture for image classification. Following what was shown in the practice classes, we developed a Convolutional Neural Network for the classification task. We started from the architecture provided by the professor, trying to understand what every piece of code does; we then reworked the architecture and, instead of using the class-based structure, decided to use the Sequential API, which made it easier to test the network by adding and removing layers. To divide our training set into training and validation subsets we used the ImageDataGenerator combined with a fixed SEED, to make the experiment reproducible, with an 80/20 split. Then we started experimenting with data augmentation to improve the network. Analysing our image set we noticed that the images are larger than expected, so we increased the image shape, and because of our limited amount of data we used the ImageDataGenerator to augment the training set, for example adding rotation, width and height shift ranges, brightness range, etc. To deal with overfitting, we added a Dropout layer with rate 0.5 between the two Dense layers. We then trained the network using ReLU as the activation function everywhere except the last layer, where we used softmax. The metric, as stated by the task, was accuracy, while the loss function and optimizer are categorical cross-entropy and Adam, respectively. We obtained 64% accuracy; from further experiments and research we can say that roughly 70% accuracy is achievable with this kind of architecture by tuning the parameters further. We decided not to use transfer learning because it is very easy to solve this task with a pre-trained network such as VGG and obtain close to 100% accuracy. As reported in the task assignment, we tried to understand the concepts, what makes sense to do and what does not, rather than focusing on obtaining the best possible result, since implementing that kind of architecture would have been straightforward.
###Code
import os
import tensorflow as tf
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from datetime import datetime
import json
# this let the experiment to be reproducible
SEED = 1234
tf.random.set_seed(1234)
cwd = os.getcwd()
#Dataset directory
dataset_dir = "/kaggle/input/ann-and-dl-image-classification/Classification_Dataset"
#Model weights filepath (set to None if you wanna train your model from scratch)
WEIGHTS = '/kaggle/input/model4me/model.h5'
weights = None  # set this to WEIGHTS above to load the saved model instead of training from scratch
#Create Dataset_split.json for competition purposes (set to None if you don't want to create it)
dataset_info = None
#Definition of results writedown function
def create_csv(results, results_dir=''):
csv_fname = 'results_'
csv_fname += datetime.now().strftime('%b%d_%H-%M-%S') + '.csv'
with open(os.path.join(results_dir, csv_fname), 'w') as f:
f.write('Id,Category\n')
for key, value in results.items():
f.write(key + ',' + str(value) + '\n')
# batch size
bs = 8
# img shape
img_h = 320
img_w = 320
# ImageDataGenerator
# -------------------------------------------------------
# data augmentation to improve performances
apply_data_augmentation = True
# create training/validation ImageDataGenerator object
if apply_data_augmentation:
train_data_gen = ImageDataGenerator(rotation_range=20,
width_shift_range=20,
height_shift_range=20,
zoom_range=0.3,
horizontal_flip=True,
vertical_flip=False,
brightness_range = [0.5, 1.5],
fill_mode='constant',
cval=0,
rescale=1./255,
validation_split = 0.2)
else:
train_data_gen = ImageDataGenerator(rescale=1./255, validation_split=0.2)
# create testing ImageDataGenerator object
test_data_gen = ImageDataGenerator(rescale=1./255)
# -------------------------------------------------------
# list of classes
num_classes = 20
decide_class_indices = True
if decide_class_indices:
classes = ['owl', # 0
'galaxy', # 1
'lightning', # 2
'wine-bottle', # 3
't-shirt', # 4
'waterfall', # 5
'sword', # 6
'school-bus', # 7
'calculator', # 8
'sheet-music', # 9
'airplanes', # 10
'lightbulb', # 11
'skyscraper', # 12
'mountain-bike', # 13
'fireworks', # 14
'computer-monitor', # 15
'bear', # 16
'grand-piano', # 17
'kangaroo', # 18
'laptop'] # 19
else:
classes = None
# Generators
training_dir = os.path.join(dataset_dir, 'training')
train_gen = train_data_gen.flow_from_directory(training_dir,
batch_size=bs,
color_mode='rgb',
classes=classes,
target_size=(img_h, img_w),
class_mode='categorical',
shuffle=True,
seed=SEED, subset='training')
valid_gen = train_data_gen.flow_from_directory(training_dir,
color_mode='rgb',
batch_size=bs,
classes=classes,
target_size=(img_h, img_w),
class_mode='categorical',
shuffle=False,
seed=SEED, subset='validation')
test_gen = test_data_gen.flow_from_directory(dataset_dir,
classes=['test'],
color_mode='rgb',
target_size=(img_h, img_w),
class_mode=None,
shuffle=False,
seed=SEED)
# -------------------------------------------------------
# Create Dataset objects
# -------------------------------------------------------
# training and repeat
#train_dataset = tf.data.Dataset.from_generator(lambda: train_gen,
# output_types=(tf.float32, tf.float32),
# output_shapes=([None, img_h, img_w, 3], [None, num_classes]))
#train_dataset = train_dataset.repeat()
# validation and repeat
#valid_dataset = tf.data.Dataset.from_generator(lambda: valid_gen,
#output_types=(tf.float32, tf.float32),
#output_shapes=([None, img_h, img_w, 3], [None, num_classes]))
#valid_dataset = valid_dataset.repeat()
# -------------------------------------------------------
# Neural Network Architecture with Sequential model
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(filters=8,kernel_size=(3, 3),strides=(1, 1), padding='same', activation = 'relu', input_shape=(img_h, img_w, 3)))
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2)))
model.add(tf.keras.layers.Conv2D(filters=16,kernel_size=(3, 3),strides=(1, 1),padding='same', activation = 'relu'))
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2)))
model.add(tf.keras.layers.Conv2D(filters=32,kernel_size=(3, 3),strides=(1, 1),padding='same', activation = 'relu'))
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2)))
model.add(tf.keras.layers.Conv2D(filters=64,kernel_size=(3, 3),strides=(1, 1),padding='same', activation = 'relu'))
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2)))
model.add(tf.keras.layers.Conv2D(filters=128,kernel_size=(3, 3),strides=(1, 1),padding='same', activation = 'relu'))
model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 2)))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(units=512, activation = 'relu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(units=num_classes, activation='softmax'))
# Optimization parameters
# -------------------------------------------------------
# loss
loss = tf.keras.losses.CategoricalCrossentropy()
# learning rate
lr = 1e-3
optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
# validation metrics
metrics = ['accuracy']
# compile Model
model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
# -------------------------------------------------------
# Optional model_weights loading from file
if weights is not None:
model.load_weights(weights)
# Training
if weights is None:
#Training with callbacks
#-------------------------------------------------------
callbacks = []
# Early Stopping
early_stop = True
if early_stop:
es_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10) #, restore_best_weights=True)
callbacks.append(es_callback)
model.fit_generator(train_gen, steps_per_epoch=len(train_gen), epochs=100, callbacks=callbacks, validation_data=valid_gen, validation_steps=len(valid_gen))
#Compute Predictions
test_gen.reset()
preds = model.predict_generator(test_gen)
preds_cls_idx = preds.argmax(axis=-1)
# Write down results file
image_filenames = test_gen.filenames
image_filenames
results = {}
for i in range(len(image_filenames)):
results[(image_filenames[i][5:])] = preds_cls_idx[i]
create_csv(results)
# Create dictionary for the .json file
if dataset_info is not None:
from collections import defaultdict
train_dict = defaultdict(list)
for train_image in train_gen.filenames:
clss = train_image.split('/')
key = clss[0]
val = clss[1]
train_dict[key].append(val)
valid_dict = defaultdict(list)
for valid_image in valid_gen.filenames:
clss = valid_image.split('/')
key = clss[0]
val = clss[1]
valid_dict[key].append(val)
master_dict = {}
master_dict['training'] = train_dict
master_dict['validation'] = valid_dict
with open('dataset_split.json', 'w') as json_file:
json.dump(master_dict, json_file)
###Output
_____no_output_____ |
docs/source/user_guide/clean/clean_au_tfn.ipynb | ###Markdown
Australian Tax File Numbers Introduction The function `clean_au_tfn()` cleans a column containing Australian Tax File Number (TFN) strings, and standardizes them in a given format. The function `validate_au_tfn()` validates either a single TFN string, a column of TFN strings or a DataFrame of TFN strings, returning `True` if the value is valid, and `False` otherwise. TFN strings can be converted to the following formats via the `output_format` parameter:* `compact`: only number strings without any separators or whitespace, like "123456782"* `standard`: TFN strings with proper whitespace in the proper places, like "123 456 782" Invalid parsing is handled with the `errors` parameter:* `coerce` (default): invalid parsing will be set to NaN* `ignore`: invalid parsing will return the input* `raise`: invalid parsing will raise an exception The following sections demonstrate the functionality of `clean_au_tfn()` and `validate_au_tfn()`. An example dataset containing TFN strings
###Code
import pandas as pd
import numpy as np
df = pd.DataFrame(
{
"tfn": [
"123 456 782",
"999 999 999",
"123456782",
"51 824 753 556",
"hello",
np.nan,
"NULL"
],
"address": [
"123 Pine Ave.",
"main st",
"1234 west main heights 57033",
"apt 1 789 s maple rd manhattan",
"robie house, 789 north main street",
"(staples center) 1111 S Figueroa St, Los Angeles",
"hello",
]
}
)
df
###Output
_____no_output_____
###Markdown
1. Default `clean_au_tfn` By default, `clean_au_tfn` will clean TFN strings and output them in the standard format with proper separators.
###Code
from dataprep.clean import clean_au_tfn
clean_au_tfn(df, column = "tfn")
###Output
_____no_output_____
###Markdown
2. Output formats This section demonstrates the `output_format` parameter. `standard` (default)
###Code
clean_au_tfn(df, column = "tfn", output_format="standard")
###Output
_____no_output_____
###Markdown
`compact`
###Code
clean_au_tfn(df, column = "tfn", output_format="compact")
###Output
_____no_output_____
###Markdown
3. `inplace` parameter This deletes the given column from the returned DataFrame. A new column containing cleaned TFN strings is added with a title in the format `"{original title}_clean"`.
###Code
clean_au_tfn(df, column="tfn", inplace=True)
###Output
_____no_output_____
###Markdown
4. `errors` parameter `coerce` (default)
###Code
clean_au_tfn(df, "tfn", errors="coerce")
###Output
_____no_output_____
###Markdown
`ignore`
###Code
clean_au_tfn(df, "tfn", errors="ignore")
###Output
_____no_output_____
###Markdown
5. `validate_au_tfn()` `validate_au_tfn()` returns `True` when the input is a valid TFN. Otherwise it returns `False`. The input of `validate_au_tfn()` can be a string, a Pandas Series, a Dask Series, a Pandas DataFrame or a Dask DataFrame. When the input is a string, a Pandas Series or a Dask Series, the user doesn't need to specify a column name to be validated. When the input is a Pandas DataFrame or a Dask DataFrame, the user can either specify a column name or not. If the user specifies the column name, `validate_au_tfn()` only returns the validation result for the specified column. If the user doesn't specify the column name, `validate_au_tfn()` returns the validation result for the whole DataFrame.
###Code
from dataprep.clean import validate_au_tfn
print(validate_au_tfn("123 456 782"))
print(validate_au_tfn("99 999 999"))
print(validate_au_tfn("123456782"))
print(validate_au_tfn("51 824 753 556"))
print(validate_au_tfn("hello"))
print(validate_au_tfn(np.nan))
print(validate_au_tfn("NULL"))
###Output
_____no_output_____
###Markdown
Series
###Code
validate_au_tfn(df["tfn"])
###Output
_____no_output_____
###Markdown
DataFrame + Specify Column
###Code
validate_au_tfn(df, column="tfn")
###Output
_____no_output_____
###Markdown
Only DataFrame
###Code
validate_au_tfn(df)
###Output
_____no_output_____ |
nb/prior_correct/mock_sfh_data.ipynb | ###Markdown
Mock SFH dataIn this notebook, I will generate mock data that I will use to demonstrate the prior correction method. The mock data will be generated using FSPS with $M_*=10^{10}M_\odot$, $Z = 0.02$, and with the following different SFHs: 1. constant2. falling3. rising4. burst5. quench
###Code
import os
import numpy as np
import fsps
from astropy.cosmology import Planck13
# --- plotting ---
from matplotlib import gridspec
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['text.usetex'] = True
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['axes.linewidth'] = 1.5
mpl.rcParams['axes.xmargin'] = 1
mpl.rcParams['xtick.labelsize'] = 'x-large'
mpl.rcParams['xtick.major.size'] = 5
mpl.rcParams['xtick.major.width'] = 1.5
mpl.rcParams['ytick.labelsize'] = 'x-large'
mpl.rcParams['ytick.major.size'] = 5
mpl.rcParams['ytick.major.width'] = 1.5
mpl.rcParams['legend.frameon'] = False
tage = Planck13.age(0.).value # Gyr age of z=0 galaxy
###Output
_____no_output_____
###Markdown
get SFHs
###Code
nlb = 1000 # look back time bins
tlb = np.linspace(0, tage, nlb) # look back time
# delta t bin widths
dt = np.zeros(nlb)
dt[1:-1] = 0.5 * (np.diff(tlb)[1:] + np.diff(tlb)[:-1])
dt[0] = 0.5 * (tlb[1] - tlb[0])
dt[-1] = 0.5 * (tlb[-1] - tlb[-2])
sfh_constant = np.ones(nlb)/np.trapz(np.ones(nlb), tlb)
sfh_falling = np.exp((tlb-tage)/1.4)/np.trapz(np.exp((tlb-tage)/1.4), tlb)
sfh_rising = np.exp((tage-tlb)/3.4)/np.trapz(np.exp((tage -tlb)/3.4), tlb)
sfh_burst = 0.8*(np.ones(nlb)/np.trapz(np.ones(nlb), tlb)) + 0.2 * (np.exp(-0.5*(tlb - 0.5)**2/0.2**2)/np.trapz(np.exp(-0.5*(tlb - 0.5)**2/0.2**2), tlb))
sfh_quench = np.concatenate([0.02*np.ones(nlb)[tlb < 1], np.ones(nlb)[tlb >= 1]])/np.trapz(np.concatenate([0.02*np.ones(nlb)[tlb < 1], np.ones(nlb)[tlb >= 1]]), tlb)
sfhs = [sfh_constant, sfh_falling, sfh_rising, sfh_burst, sfh_quench]
lbls = ['constant', 'falling', 'rising', 'burst', 'quench']
# check that the SFH are normalized properly
for sfh in sfhs:
assert np.isclose(np.trapz(sfh, tlb), 1.)
fig = plt.figure(figsize=(6,20))
for i, sfh, lbl in zip(range(5), sfhs, lbls):
sub = fig.add_subplot(5,1,i+1)
sub.plot(tlb, sfh, c='C0', label=lbl)
sub.set_xlim(0, tage)
if i == 2: sub.set_ylabel(r'SFH/$M_{\rm formed}$ [${\rm yr}^-1$]', fontsize=25)
sub.set_ylim(0, 0.7)
sub.text(0.5, 0.65, lbl, ha='left', va='top', fontsize=25)
sub.set_xlabel(r'$t_{\rm lookback}$ [Gyr]', fontsize=25)
###Output
_____no_output_____
###Markdown
construct SEDs using `fsps`First, initiate a `fsps.StellarPopulation` object
###Code
ssp = fsps.StellarPopulation(
zcontinuous=1, # interpolate metallicities
sfh=0, # tabulated SFH
dust_type=2, # calzetti(2000)
imf_type=1) # chabrier IMF
def build_SED(sfh):
''' construct SED given tabulated SFH using FSPS. All SEDs
    have Z = Zsol and dust2 = 0.3
'''
mtot = 0.
for i, tage in enumerate(tlb):
m = dt[i] * sfh[i] # mass formed in this bin
if m == 0 and i != 0: continue
ssp.params['logzsol'] = 0
ssp.params['dust2'] = 0.3 # tau_V (but Conroy defines tau differently than standard methods)
wave_rest, lum_i = ssp.get_spectrum(
tage=np.clip(tage, 1e-8, None),
peraa=True) # in units of Lsun/AA
# note that this spectrum is normalized such that the total formed
# mass = 1 Msun
if i == 0: lum_ssp = np.zeros(len(wave_rest))
lum_ssp += m * lum_i
mtot += m
assert np.isclose(mtot, 1.)
lum_ssp *= 10**10 # 10^10 Msun
return wave_rest, lum_ssp
sed_constant = build_SED(sfh_constant)
sed_falling = build_SED(sfh_falling)
sed_rising = build_SED(sfh_rising)
sed_burst = build_SED(sfh_burst)
sed_quench = build_SED(sfh_quench)
seds = [sed_constant, sed_falling, sed_rising, sed_burst, sed_quench]
fig = plt.figure(figsize=(15,20))
gs = gridspec.GridSpec(5, 2, width_ratios=[1,3])
for i, sfh, sed, lbl in zip(range(5), sfhs, seds, lbls):
sub = plt.subplot(gs[i,0])
sub.plot(tlb, sfh, c='C%i' % i, label=lbl)
if i == 4: sub.set_xlabel(r'$t_{\rm lookback}$ [Gyr]', fontsize=25)
sub.set_xlim(0, tage)
if i == 2: sub.set_ylabel(r'SFH/$M_{\rm formed}$ [${\rm yr}^-1$]', fontsize=25)
sub.set_ylim(0, 0.7)
sub.text(0.5, 0.65, lbl, ha='left', va='top', fontsize=25)
sub = plt.subplot(gs[i,1])
sub.plot(sed[0], sed[1], c='C%i' % i, label=lbl)
sub.set_xlim(2e3, 1e4)
sub.set_ylim(0, 3e6)
if i == 4: sub.set_xlabel('wavelength', fontsize=25)
if i == 2: sub.set_ylabel(r'flux [$L_\odot/\AA$]', fontsize=25)
###Output
_____no_output_____
###Markdown
save SFH, wavelengths, SEDs to file
###Code
np.save('/Users/chahah/data/provabgs/prior_correct/t.lookback.npy', tlb)
np.save('/Users/chahah/data/provabgs/prior_correct/wave.fsps.npy', sed_constant[0])
for i, sfh, sed, lbl in zip(range(5), sfhs, seds, lbls):
np.save('/Users/chahah/data/provabgs/prior_correct/sfh.%s.npy' % lbl, sfh)
np.save('/Users/chahah/data/provabgs/prior_correct/sed.%s.npy' % lbl, sed[1])
###Output
_____no_output_____ |
Machine_Learning_In_Cyber_Security/Binary_Classifier_poisoning_attack.ipynb | ###Markdown
In this example, I will add malicious (poisoned) data points to substantially change the decision boundary of the ML model
###Code
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
# creating a random dataset for a binary classifier
X , y = make_classification(n_samples=500 , n_features=2,n_informative=2,n_redundant=0,weights=[.5,.5],random_state=30)
## create the classifier and fit it on the first half of the data for training;
# the second half is kept for testing
clf = MLPClassifier(max_iter = 1000 , random_state = 123).fit(X[:250],y[:250])
import numpy as np
# now create a visualization of the classifier's decision function to analyse how it behaves
# build a regular mesh grid over the feature space
xx , yy = np.mgrid[-3:3:.01 , -3:3:.01]
grid = np.c_[xx.ravel() , yy.ravel()]
probs = clf.predict_proba(grid)[:,1].reshape(xx.shape)
# now generating the countour plot
import matplotlib.pyplot as plt
f , ax = plt.subplots(figsize=(12, 9))
# Plot the contour background
contour = ax.contourf(xx, yy, probs, 25, cmap="RdBu",
vmin=0, vmax=1)
ax_c = f.colorbar(contour)
ax_c.set_label("$P(y = 1)$")
ax_c.set_ticks([0, .25, .5, .75, 1])
# Plot the test set (latter half of X and y)
ax.scatter(X[250:,0], X[250:, 1], c=y[250:], s=50,
cmap="RdBu", vmin=-.2, vmax=1.2,
edgecolor="white", linewidth=1)
ax.set(aspect="equal",
xlim=(-3, 3), ylim=(-3, 3))
###Output
_____no_output_____
###Markdown
Now we try some malicious tricks: we inject the poisoned (chaff) points into the training data and observe the effect on the decision boundary.
###Code
# define a helper that plots the decision boundary, optionally overlaying the poisoned one
def plot_decision_boundary(X_orig, y_orig, probs_orig, chaff_X=None, chaff_y=None, probs_poisoned=None):
f , ax = plt.subplots(figsize=(12, 9))
ax.scatter(X_orig[250:,0], X_orig[250:, 1],
c=y_orig[250:], s=50, cmap="gray",
edgecolor="black", linewidth=1)
    # if chaff points and poisoned probabilities are all provided, overlay both boundaries
if all([(chaff_X is not None),
(chaff_y is not None),
(probs_poisoned is not None)]):
ax.scatter(chaff_X[:,0], chaff_X[:, 1],
c=chaff_y, s=50, cmap="gray",
marker="*", edgecolor="black", linewidth=1)
ax.contour(xx, yy, probs_orig, levels=[.5],
cmap="gray", vmin=0, vmax=.8)
ax.contour(xx, yy, probs_poisoned, levels=[.5],
cmap="gray")
else:
ax.contour(xx, yy, probs_orig, levels=[.5], cmap="gray")
ax.set(aspect="equal", xlim=(-3, 3), ylim=(-3, 3))
# add the selected chaff points (as poisoned samples are often called in the literature)
num_chaff = 100
chaff_X = np.array([np.linspace(-2, -1 , num_chaff),
np.linspace(0.1, 0.1, num_chaff)]).T
chaff_y = np.ones(num_chaff)
# plot the original (un-poisoned) decision boundary
plot_decision_boundary(X, y, probs)
# now update the already-trained model with the chaff points via partial_fit
clf.partial_fit(chaff_X, chaff_y)
probs_poisoned = clf.predict_proba(grid)[:, 1].reshape(xx.shape)
plot_decision_boundary(X, y, probs, chaff_X, chaff_y, probs_poisoned)
# by changing the number of malicious points and re-plotting,
# we can see how strongly the decision boundary shifts (see the sketch below).
## the end ####
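# (Illustrative sketch, not executed in the original notebook)
# One way to study the effect of the amount of chaff: retrain a fresh classifier
# for several chaff sizes and plot each poisoned boundary against the original.
for n in [10, 50, 100, 200]:
    clf_n = MLPClassifier(max_iter=1000, random_state=123).fit(X[:250], y[:250])
    chaff_X_n = np.array([np.linspace(-2, -1, n), np.linspace(0.1, 0.1, n)]).T
    chaff_y_n = np.ones(n)
    clf_n.partial_fit(chaff_X_n, chaff_y_n)
    probs_n = clf_n.predict_proba(grid)[:, 1].reshape(xx.shape)
    plot_decision_boundary(X, y, probs, chaff_X_n, chaff_y_n, probs_n)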
###Output
_____no_output_____ |
data_preprocessing_feature_scaling.ipynb | ###Markdown
Feature Scaling
###Code
#Importing packages
import numpy as np
import pandas as pd
dataset = pd.read_csv('/Users/swaruptripathy/Desktop/Data Science and AI/datasets/purchase_salary.csv')
dataset.head()
dataset.shape
X = dataset.iloc[:,2:4]
X.head()
#Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
std_X = sc.fit_transform(X)
std_X[0:5]
help(StandardScaler)
from sklearn.preprocessing import MinMaxScaler
MinMaxScalerX = MinMaxScaler()
MN_X = MinMaxScalerX.fit_transform(X)
MN_X[0:5]
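# --- added sketch: manual check of what the two scalers compute, assuming X is the
# two-column DataFrame selected above; the results should match std_X and MN_X ---
manual_std = (X - X.mean()) / X.std(ddof=0)  # StandardScaler divides by the population std (ddof=0)
manual_minmax = (X - X.min()) / (X.max() - X.min())
print(manual_std.head())
print(manual_minmax.head())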
###Output
_____no_output_____ |
Week_2_Multiple_Regression/assign-1_sklearn_mulp_regre.ipynb | ###Markdown
Regression Week 2: Multiple Regression (Interpretation) In this notebook, we will use data on house sales in King County, Seattle to predict prices using multiple regression. The goal of this notebook is to explore multiple regression and feature engineering. You will:* Use DataFrames to do some feature engineering* Use sklearn to compute the regression weights (coefficients/parameters)* Given the regression weights, predictors and outcome, write a function to compute the Residual Sum of Squares (RSS)* Look at coefficients and interpret their meanings* Evaluate multiple models via RSS Importing Libraries
###Code
import os
import zipfile
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
%matplotlib inline
###Output
_____no_output_____
###Markdown
Unzipping files with house sales dataDataset is from house sales in King County, the region where the city of Seattle, WA is located.
###Code
# Put files in current direction into a list
files_list = [f for f in os.listdir('.') if os.path.isfile(f)]
# Filenames of unzipped files
unzip_files = ['kc_house_train_data.csv','kc_house_test_data.csv', 'kc_house_data.csv']
# If upzipped file not in files_list, unzip the file
for filename in unzip_files:
if filename not in files_list:
zip_file = filename + '.zip'
unzipping = zipfile.ZipFile(zip_file)
unzipping.extractall()
unzipping.close
###Output
_____no_output_____
###Markdown
Loading Sales data, Sales Training data, and Sales Test data
###Code
# Dictionary with the correct dtypes for the DataFrame columns
dtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int,
'sqft_living15':float, 'grade':int, 'yr_renovated':int,
'price':float, 'bedrooms':float, 'zipcode':str,
'long':float, 'sqft_lot15':float, 'sqft_living':float,
'floors':str, 'condition':int, 'lat':float, 'date':str,
'sqft_basement':int, 'yr_built':int, 'id':str, 'sqft_lot':int, 'view':int}
# Loading sales data, sales training data, and test_data into DataFrames
sales = pd.read_csv('kc_house_data.csv', dtype = dtype_dict)
train_data = pd.read_csv('kc_house_train_data.csv', dtype = dtype_dict)
test_data = pd.read_csv('kc_house_test_data.csv', dtype = dtype_dict)
# Looking at head of training data DataFrame
train_data.head()
###Output
_____no_output_____
###Markdown
Learning a multiple regression model Now, learn a multiple regression model predicting 'price' based on the following features:example_features = ['sqft_living', 'bedrooms', 'bathrooms'] on training data. First, let's plot the data for these features.
###Code
plt.figure(figsize=(8,6))
plt.plot(train_data['sqft_living'], train_data['price'],'.')
plt.xlabel('Living Area (ft^2)', fontsize=16)
plt.ylabel('House Price ($)', fontsize=16)
plt.show()
plt.figure(figsize=(12,8))
plt.subplot(1, 2, 1)
plt.plot(train_data['bedrooms'], train_data['price'],'.')
plt.xlabel('# Bedrooms', fontsize=16)
plt.ylabel('House Price ($)', fontsize=16)
plt.subplot(1, 2, 2)
plt.plot(train_data['bathrooms'], train_data['price'],'.')
plt.xlabel('# Bathrooms', fontsize=16)
plt.ylabel('House Price ($)', fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
Now, creating a list of the features we are interested in, the feature matrix, and the output vector.
###Code
example_features = ['sqft_living', 'bedrooms', 'bathrooms']
X_multi_lin_reg = train_data[example_features]
y_multi_lin_reg = train_data['price']
###Output
_____no_output_____
###Markdown
Creating a Linear Regression Object for Sklearn library and using the feature matrix and output vector to perform linear regression.
###Code
example_model = LinearRegression()
example_model.fit(X_multi_lin_reg, y_multi_lin_reg)
###Output
_____no_output_____
###Markdown
Now that we have fitted the model we can extract the regression weights (coefficients):
###Code
# printing the intercept and coefficients
print example_model.intercept_
print example_model.coef_
# Putting the intercept and weights from the multiple linear regression into a Series
example_weight_summary = pd.Series( [example_model.intercept_] + list(example_model.coef_),
index = ['intercept'] + example_features )
print example_weight_summary
###Output
intercept 87912.865815
sqft_living 315.406691
bedrooms -65081.887116
bathrooms 6942.165986
dtype: float64
###Markdown
Making PredictionsRecall that once a model is built we can use the .predict() function to find the predicted values for data we pass. For example using the example model above:
###Code
example_predictions = example_model.predict(X_multi_lin_reg)
print example_predictions[0] # should be close to 271789.505878
###Output
271789.26538
###Markdown
Compute RSS Now that we can make predictions given the model, let's write a function to compute the RSS of the model.
###Code
def get_residual_sum_of_squares(model, data, outcome):
# - data holds the data points with the features (columns) we are interested in performing a linear regression fit
# - model holds the linear regression model obtained from fitting to the data
# - outcome is the y, the observed house price for each data point
# By using the model and applying predict on the data, we return a numpy array which holds
# the predicted outcome (house price) from the linear regression model
model_predictions = model.predict(data)
# Computing the residuals between the predicted house price and the actual house price for each data point
residuals = outcome - model_predictions
# To get RSS, square the residuals and add them up
RSS = sum(residuals*residuals)
return(RSS)
###Output
_____no_output_____
###Markdown
Create some new features Although we often think of multiple regression as including multiple different features (e.g. of bedrooms, squarefeet, and of bathrooms), we can also consider transformations of existing features e.g. the log of the squarefeet or even "interaction" features such as the product of bedrooms and bathrooms. Create the following 4 new features as column in both TEST and TRAIN data:* bedrooms_squared = bedrooms\*bedrooms* bed_bath_rooms = bedrooms\*bathrooms* log_sqft_living = log(sqft_living)* lat_plus_long = lat + long
###Code
# Creating new 'bedrooms_squared' feature
train_data['bedrooms_squared'] = train_data['bedrooms']*train_data['bedrooms']
test_data['bedrooms_squared'] = test_data['bedrooms']*test_data['bedrooms']
# Creating new 'bed_bath_rooms' feature
train_data['bed_bath_rooms'] = train_data['bedrooms']*train_data['bathrooms']
test_data['bed_bath_rooms'] = test_data['bedrooms']*test_data['bathrooms']
# Creating new 'log_sqft_living' feature
train_data['log_sqft_living'] = np.log(train_data['sqft_living'])
test_data['log_sqft_living'] = np.log(test_data['sqft_living'])
# Creating new 'lat_plus_long' feature
train_data['lat_plus_long'] = train_data['lat'] + train_data['long']
test_data['lat_plus_long'] = test_data['lat'] + test_data['long']
# Displaying head of train_data DataFrame and test_data DataFrame to verify that new features are present
train_data.head()
test_data.head()
###Output
_____no_output_____
###Markdown
* Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms.* bedrooms times bathrooms gives what's called an "interaction" feature. It is large when *both* of them are large.* Taking the log of squarefeet has the effect of bringing large values closer together and spreading out small values.* Adding latitude to longitude is totally non-sensical but we will do it anyway (you'll see why) **Quiz Question: What is the mean (arithmetic average) value of your 4 new features on TEST data? (round to 2 digits)**
###Code
print "Mean of Test data 'bedrooms_squared' feature: %.2f " % np.mean(test_data['bedrooms_squared'].values)
print "Mean of Test data 'bed_bath_rooms' feature: %.2f " % np.mean(test_data['bed_bath_rooms'].values)
print "Mean of Test data 'log_sqft_living' feature: %.2f " % np.mean(test_data['log_sqft_living'].values)
print "Mean of Test data 'lat_plus_long' feature: %.2f " % np.mean(test_data['lat_plus_long'].values)
###Output
Mean of Test data 'bedrooms_squared' feature: 12.45
Mean of Test data 'bed_bath_rooms' feature: 7.50
Mean of Test data 'log_sqft_living' feature: 7.55
Mean of Test data 'lat_plus_long' feature: -74.65
###Markdown
Learning Multiple Models Now we will learn the weights for three (nested) models for predicting house prices. The first model will have the fewest features the second model will add one more feature and the third will add a few more:* Model 1: squarefeet, bedrooms, bathrooms, latitude & longitude* Model 2: add bedrooms\*bathrooms* Model 3: Add log squarefeet, bedrooms squared, and the (nonsensical) latitude + longitude
###Code
model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long']
model_2_features = model_1_features + ['bed_bath_rooms']
model_3_features = model_2_features + ['bedrooms_squared', 'log_sqft_living', 'lat_plus_long']
###Output
_____no_output_____
###Markdown
Now that you have the features, learn the weights for the three different models for predicting target = 'price' and look at the value of the weights/coefficients:
###Code
# Creating a LinearRegression Object for Model 1 and learning the multiple linear regression model
model_1 = LinearRegression()
model_1.fit(train_data[model_1_features], train_data['price'])
# Creating a LinearRegression Object for Model 2 and learning the multiple linear regression model
model_2 = LinearRegression()
model_2.fit(train_data[model_2_features], train_data['price'])
# Creating a LinearRegression Object for Model 3 and learning the multiple linear regression model
model_3 = LinearRegression()
model_3.fit(train_data[model_3_features], train_data['price'])
###Output
_____no_output_____
###Markdown
** Now, Examine/extract each model's coefficients: **
###Code
# Putting the Model 1 intercept and weights from the multiple linear regression for the 3 models into a Series
model_1_summary = pd.Series( [model_1.intercept_] + list(model_1.coef_),
index = ['intercept'] + model_1_features , name='Model 1 Coefficients' )
print model_1_summary
# Putting the Model 2 intercept and weights from the multiple linear regression for the 3 models into a Series
model_2_summary = pd.Series( [model_2.intercept_] + list(model_2.coef_),
index = ['intercept'] + model_2_features , name='Model 2 Coefficients' )
print model_2_summary
# Putting the Model 3 intercept and weights from the multiple linear regression for the 3 models into a Series
model_3_summary = pd.Series( [model_3.intercept_] + list(model_3.coef_),
index = ['intercept'] + model_3_features , name='Model 3 Coefficients' )
print model_3_summary
###Output
intercept -62036084.986098
sqft_living 529.422820
bedrooms 34514.229578
bathrooms 67060.781319
lat 534085.610867
long -406750.710861
bed_bath_rooms -8570.504395
bedrooms_squared -6788.586670
log_sqft_living -561831.484076
lat_plus_long 127334.900006
Name: Model 3 Coefficients, dtype: float64
###Markdown
**Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 1?**
###Code
print "Positive: ", model_1_summary['bathrooms']
###Output
Positive: 15706.7420827
###Markdown
**Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 2?**
###Code
print "Negative: ", model_2_summary['bathrooms']
###Output
Negative: -71461.3082928
###Markdown
Think about what this means: In model 2, the new 'bed_bath_rooms' feature causes the house price to be over-estimated. Thus, the coefficient on 'bathrooms' turns negative to better agree with the observed house prices. Comparing multiple models Now that you've learned three models and extracted the model weights, we want to evaluate which model is best. First use your function from earlier to compute the RSS on TRAINING data for each of the three models.
###Code
# Compute the RSS on TRAINING data for each of the three models and record the values:
rss_model_1_train = get_residual_sum_of_squares(model_1, train_data[model_1_features], train_data['price'])
rss_model_2_train = get_residual_sum_of_squares(model_2, train_data[model_2_features], train_data['price'])
rss_model_3_train = get_residual_sum_of_squares(model_3, train_data[model_3_features], train_data['price'])
print "RSS for Model 1 Training Data: ", rss_model_1_train
print "RSS for Model 2 Training Data: ", rss_model_2_train
print "RSS for Model 3 Training Data: ", rss_model_3_train
###Output
RSS for Model 1 Training Data: 9.6787996305e+14
RSS for Model 2 Training Data: 9.58419635074e+14
RSS for Model 3 Training Data: 9.0343645505e+14
###Markdown
**Quiz Question: Which model (1, 2 or 3) has lowest RSS on TRAINING Data?** Is this what you expected? Model 3 has the lowest RSS on the Training Data. This is expected since Model 3 has the most features. Now compute the RSS on on TEST data for each of the three models.
###Code
# Compute the RSS on TESTING data for each of the three models and record the values:
rss_model_1_test = get_residual_sum_of_squares(model_1, test_data[model_1_features], test_data['price'])
rss_model_2_test = get_residual_sum_of_squares(model_2, test_data[model_2_features], test_data['price'])
rss_model_3_test = get_residual_sum_of_squares(model_3, test_data[model_3_features], test_data['price'])
print "RSS for Model 1 Test Data: ", rss_model_1_test
print "RSS for Model 2 Test Data: ", rss_model_2_test
print "RSS for Model 3 T Data: ", rss_model_3_test
###Output
RSS for Model 1 Test Data: 2.25500469795e+14
RSS for Model 2 Test Data: 2.23377462976e+14
RSS for Model 3 Test Data:  2.59236319207e+14
|
notebooks/03f - Exploratory Data Analysis WW2 - EDA-Airplanes.ipynb | ###Markdown
Exploratory Data Analysis in Action - EDA: Airplanes In this section we explore the [_Aerial Bombing Data Set_](https://www.kaggle.com/usaf/world-war-ii) and apply techniques referred to as __Exploratory Data Analysis__. **Import statements**
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
**Global settings**
###Code
pd.options.display.max_rows = 999
pd.options.display.max_columns = 100
#pd.set_option('display.max_colwidth', -1)
plt.rcParams["figure.figsize"] = [15,6]
###Output
_____no_output_____
###Markdown
**Load data set**
###Code
import pickle
gdf_europe = pickle.load( open( "../datasets/gdf_europe.p", "rb" ) )
europe = pickle.load(open( "../datasets/europe.p", "rb" ) )
germany = pickle.load(open("../datasets/germany.p", "rb"))
gdf_germany = pickle.load(open("../datasets/gdf_germany.p", "rb"))
###Output
_____no_output_____
###Markdown
Research questions __@Airplanes__- Q1: Which airplane types were most frequently engaged?- Q2: At what height do airplanes operate? At what height do the 10 most common airplane types operate?- Q3: Which type of airplane carried the heaviest bombs? Which were the 10 most dangerous airplane types with respect to carried explosives?- Q4: Which Allied Force used which airplane, when and where?
###Code
df_airpl = gdf_europe.copy()
df_airpl.columns
###Output
_____no_output_____
###Markdown
> **Q1: Which airplane types were most frequently engaged?**
###Code
## your code here
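# a possible sketch (not the official solution): count how many attacks each
# airplane type flew, using the 'Aircraft Series' column of the data set
airplane_counts = df_airpl['Aircraft Series'].value_counts()
print(airplane_counts.head(10))
airplane_counts.head(10).plot(kind='bar');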
###Output
_____no_output_____
###Markdown
> **Q2: At what height do airplanes operate? At what height do the 10 most common airplane types operate?**
###Code
## your code here
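# a possible sketch: the altitude column name below is an assumption and may need
# to be adapted to the actual columns of the data set
alt_col = 'Altitude (Hundreds of Feet)'
if alt_col in df_airpl.columns:
    print(df_airpl[alt_col].describe())
    top10 = df_airpl['Aircraft Series'].value_counts().head(10).index
    df_airpl[df_airpl['Aircraft Series'].isin(top10)].boxplot(column=alt_col, by='Aircraft Series', rot=90);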
###Output
_____no_output_____
###Markdown
> **Q3: Which type of airplane carried the heaviest bombs? Which were the 10 most dangerous airplane types with respect to carried explosives?**
###Code
## your code here
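# a possible sketch: the explosives column name below is an assumption and may
# need to be adapted to the actual data set
he_col = 'High Explosives Weight (Pounds)'
if he_col in df_airpl.columns:
    heaviest = df_airpl.groupby('Aircraft Series')[he_col].max().sort_values(ascending=False)
    print(heaviest.head(10))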
###Output
_____no_output_____
###Markdown
> **Q4: Which Allied Force used which airplane, when and where?** _This is certainly a big question. We suggest writing a function (or script) that plots, for each year, the Allied attacks over Europe for any specified airplane type; a rough sketch is included in the cell below._
###Code
df_airpl['Aircraft Series'].unique()
## your code here
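# a rough sketch of such a function; the 'Mission Date' column name and the fact
# that `europe` is a plottable GeoDataFrame are assumptions - the provided
# solution can be loaded in the next cell instead
def plot_airplane_attacks_by_year(df, airplane):
    sub = df[df['Aircraft Series'] == airplane]
    years = pd.to_datetime(sub['Mission Date'], errors='coerce').dt.year
    for year, grp in sub.groupby(years):
        ax = europe.plot(color='lightgrey', figsize=(8, 6))
        grp.plot(ax=ax, markersize=2, color='red')
        ax.set_title('{} attacks in {}'.format(airplane, int(year)))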
###Output
_____no_output_____
###Markdown
_Note: If you struggle, you may take a look at our implementation for this problem. Uncomment and run the cell below and apply the_ `plot_airplane_type_over_europe` _function._
###Code
# %load ../src/_solutions/eda_airplanes_q4.py
# plot_airplane_type_over_europe(df_airpl, airplane="B17", kdp=False);
###Output
_____no_output_____ |
notebooks/semisupervised/MNIST/learned-metric/not-augmented-Y/mnist-aug-64ex-learned-nothresh-Y-not-augmented.ipynb | ###Markdown
Choose GPU
###Code
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=3
import tensorflow as tf
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
if len(gpu_devices)>0:
tf.config.experimental.set_memory_growth(gpu_devices[0], True)
print(gpu_devices)
tf.keras.backend.clear_session()
###Output
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
###Markdown
Load packages
###Code
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tqdm.autonotebook import tqdm
from IPython import display
import pandas as pd
import umap
import copy
import os, tempfile
import tensorflow_addons as tfa
import pickle
###Output
/mnt/cube/tsainbur/conda_envs/tpy3/lib/python3.6/site-packages/tqdm/autonotebook/__init__.py:14: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)
" (e.g. in jupyter console)", TqdmExperimentalWarning)
###Markdown
parameters
###Code
dataset = "mnist"
labels_per_class = 64 # 'full'
n_latent_dims = 1024
confidence_threshold = 0.0 # minimum confidence to include in UMAP graph for learned metric
learned_metric = True # whether to use a learned metric, or Euclidean distance between datapoints
augmented = False #
min_dist= 0.001 # min_dist parameter for UMAP
negative_sample_rate = 5 # how many negative samples per positive sample
batch_size = 128 # batch size
optimizer = tf.keras.optimizers.Adam(1e-3) # the optimizer to train
optimizer = tfa.optimizers.MovingAverage(optimizer)
label_smoothing = 0.2 # how much label smoothing to apply to categorical crossentropy
max_umap_iterations = 500 # how many times, maximum, to recompute UMAP
max_epochs_per_graph = 10 # how many epochs maximum each graph trains for (without early stopping)
graph_patience = 10 # how many times without improvement to train a new graph
min_graph_delta = 0.0025 # minimum improvement on validation acc to consider an improvement for training
from datetime import datetime
datestring = datetime.now().strftime("%Y_%m_%d_%H_%M_%S_%f")
datestring = (
str(dataset)
+ "_"
+ str(confidence_threshold)
+ "_"
+ str(labels_per_class)
+ "____"
+ datestring
+ '_umap_augmented'
)
print(datestring)
###Output
mnist_0.0_64____2020_08_24_16_51_16_969542_umap_augmented
###Markdown
Load dataset
###Code
from tfumap.semisupervised_keras import load_dataset
(
X_train,
X_test,
X_labeled,
Y_labeled,
Y_masked,
X_valid,
Y_train,
Y_test,
Y_valid,
Y_valid_one_hot,
Y_labeled_one_hot,
num_classes,
dims
) = load_dataset(dataset, labels_per_class)
###Output
_____no_output_____
###Markdown
load architecture
###Code
from tfumap.semisupervised_keras import load_architecture
encoder, classifier, embedder = load_architecture(dataset, n_latent_dims)
###Output
_____no_output_____
###Markdown
load pretrained weights
###Code
from tfumap.semisupervised_keras import load_pretrained_weights
encoder, classifier = load_pretrained_weights(dataset, augmented, labels_per_class, encoder, classifier)
###Output
WARNING: Logging before flag parsing goes to stderr.
W0824 16:51:21.219846 139678025615104 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7f084ed0dda0> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7f084edcfda0>).
W0824 16:51:21.222154 139678025615104 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7f084edcc198> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7f084ed9ae10>).
W0824 16:51:21.334750 139678025615104 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7f084eac8198> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f084eafc438>).
W0824 16:51:21.339720 139678025615104 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f084eafc438> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7f084f0b1390>).
W0824 16:51:21.346334 139678025615104 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7f084eca8f60> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f084eca8978>).
W0824 16:51:21.351073 139678025615104 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f084eca8978> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7f084eca8828>).
W0824 16:51:21.357711 139678025615104 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7f084eb34f98> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f084f0ab278>).
W0824 16:51:21.362348 139678025615104 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f084f0ab278> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7f084f0ab400>).
W0824 16:51:21.381991 139678025615104 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7f084e996668> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f084e9965c0>).
W0824 16:51:21.389613 139678025615104 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f084e9965c0> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7f084e996f98>).
W0824 16:51:21.400824 139678025615104 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7f084eeb5710> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f084eeb5cc0>).
W0824 16:51:21.407382 139678025615104 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f084eeb5cc0> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7f084eeb5f60>).
W0824 16:51:21.414678 139678025615104 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7f084ee51a58> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f084ee51cf8>).
W0824 16:51:21.419428 139678025615104 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f084ee51cf8> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7f084ee51f98>).
W0824 16:51:21.431040 139678025615104 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7f084f367198> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f084f3677b8>).
W0824 16:51:21.435521 139678025615104 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f084f3677b8> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7f084f3679e8>).
###Markdown
compute pretrained accuracy
###Code
# test current acc
pretrained_predictions = classifier.predict(encoder.predict(X_test, verbose=True), verbose=True)
pretrained_predictions = np.argmax(pretrained_predictions, axis=1)
pretrained_acc = np.mean(pretrained_predictions == Y_test)
print('pretrained acc: {}'.format(pretrained_acc))
###Output
313/313 [==============================] - 2s 7ms/step
313/313 [==============================] - 0s 1ms/step
pretrained acc: 0.9787
###Markdown
get a, b parameters for embeddings
###Code
from tfumap.semisupervised_keras import find_a_b
a_param, b_param = find_a_b(min_dist=min_dist)
###Output
_____no_output_____
###Markdown
build network
###Code
from tfumap.semisupervised_keras import build_model
model = build_model(
batch_size=batch_size,
a_param=a_param,
b_param=b_param,
dims=dims,
encoder=encoder,
classifier=classifier,
negative_sample_rate=negative_sample_rate,
optimizer=optimizer,
label_smoothing=label_smoothing,
embedder = embedder,
)
###Output
_____no_output_____
###Markdown
build labeled iterator
###Code
from tfumap.semisupervised_keras import build_labeled_iterator
labeled_dataset = build_labeled_iterator(X_labeled, Y_labeled_one_hot, augmented, dims)
###Output
_____no_output_____
###Markdown
training
###Code
from livelossplot import PlotLossesKerasTF
from tfumap.semisupervised_keras import get_edge_dataset
from tfumap.semisupervised_keras import zip_datasets
###Output
_____no_output_____
###Markdown
callbacks
###Code
# plot losses callback
groups = {'acccuracy': ['classifier_accuracy', 'val_classifier_accuracy'], 'loss': ['classifier_loss', 'val_classifier_loss']}
plotlosses = PlotLossesKerasTF(groups=groups)
history_list = []
current_validation_acc = 0
batches_per_epoch = np.floor(len(X_train)/batch_size).astype(int)
epochs_since_last_improvement = 0
current_umap_iterations = 0
current_epoch = 0
from tfumap.paths import MODEL_DIR, ensure_dir
save_folder = MODEL_DIR / 'semisupervised-keras' / dataset / str(labels_per_class) / datestring
ensure_dir(save_folder / 'test_loss.npy')
for cui in tqdm(np.arange(current_epoch, max_umap_iterations)):
if len(history_list) > graph_patience+1:
previous_history = [np.mean(i.history['val_classifier_accuracy']) for i in history_list]
best_of_patience = np.max(previous_history[-graph_patience:])
best_of_previous = np.max(previous_history[:-graph_patience])
if (best_of_previous + min_graph_delta) > best_of_patience:
print('Early stopping')
break
# make dataset
edge_dataset = get_edge_dataset(
model,
augmented,
classifier,
encoder,
X_train,
Y_masked,
batch_size,
confidence_threshold,
labeled_dataset,
dims,
learned_metric = learned_metric
)
# zip dataset
zipped_ds = zip_datasets(labeled_dataset, edge_dataset, batch_size)
# train dataset
history = model.fit(
zipped_ds,
epochs= current_epoch + max_epochs_per_graph,
initial_epoch = current_epoch,
validation_data=(
(X_valid, tf.zeros_like(X_valid), tf.zeros_like(X_valid)),
{"classifier": Y_valid_one_hot},
),
callbacks = [plotlosses],
max_queue_size = 100,
steps_per_epoch = batches_per_epoch,
#verbose=0
)
current_epoch+=len(history.history['loss'])
history_list.append(history)
# save score
class_pred = classifier.predict(encoder.predict(X_test))
class_acc = np.mean(np.argmax(class_pred, axis=1) == Y_test)
np.save(save_folder / 'test_loss.npy', (np.nan, class_acc))
# save weights
encoder.save_weights((save_folder / "encoder").as_posix())
classifier.save_weights((save_folder / "classifier").as_posix())
# save history
with open(save_folder / 'history.pickle', 'wb') as file_pi:
pickle.dump([i.history for i in history_list], file_pi)
current_umap_iterations += 1
if len(history_list) > graph_patience+1:
previous_history = [np.mean(i.history['val_classifier_accuracy']) for i in history_list]
best_of_patience = np.max(previous_history[-graph_patience:])
best_of_previous = np.max(previous_history[:-graph_patience])
if (best_of_previous + min_graph_delta) > best_of_patience:
print('Early stopping')
#break
plt.plot(previous_history)
###Output
_____no_output_____
###Markdown
save embedding
###Code
z = encoder.predict(X_train)
reducer = umap.UMAP(verbose=True)
embedding = reducer.fit_transform(z.reshape(len(z), np.product(np.shape(z)[1:])))
plt.scatter(embedding[:, 0], embedding[:, 1], c=Y_train.flatten(), s= 1, alpha = 0.1, cmap = plt.cm.tab10)
np.save(save_folder / 'train_embedding.npy', embedding)
###Output
_____no_output_____ |
.ipynb_checkpoints/Calculo_emisiones-checkpoint.ipynb | ###Markdown
Module 2 In this notebook the emissions are calculated. The VKT from Nicolás's program are used and multiplied by the emission factors obtained from COPERT 5.5
###Code
import pandas as pd
pd.options.mode.chained_assignment = None # default='warn'
import numpy as np
df = pd.read_excel("Modulo 2/VKT.xlsx", index_col=None)
EF = pd.read_excel("EF.xlsx", index_col=None)
df_2 = df.iloc[:,2:]
df_2.drop("2013", inplace=True, axis=1)
df_2
EF_CO = EF.iloc[:,1:-10][EF["Pollutant"]=="CO"]
EF_CH4 = EF.iloc[:,1:-10][EF["Pollutant"]=="CH4"]
EF_NOx = EF.iloc[:,1:-10][EF["Pollutant"]=="NOx"]
EF_CO2 = EF.iloc[:,1:-10][EF["Pollutant"]=="CO2"]
EF_SOx = EF.iloc[:,1:-10][EF["Pollutant"]=="SOx"]
EF_PM = EF.iloc[:,1:-10][EF["Pollutant"]=="PM Exhaust"]
EF_CO
# leftover fragment of an earlier merge attempt, commented out because `merge` is not defined on its own:
# merge(df, left_on=["Modo", "Motorizacion", "Norma"], right_on=["Segment","Fuel","Euro Standard"], how="left")
dfm = df_2.merge(EF_CO, left_on=["Región", "Ámbito", "Categoría", "Motorizacion", "Norma"], right_on=["Region", "Ambito", "Modo", "Motorizacion", "Norma"], how="left")
dfm.columns
dfm
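# --- added sketch: the actual emission calculation multiplies activity (VKT) by the
# matching emission factor; the column names "VKT" and "EF" below are assumptions and
# must be replaced by the real columns of dfm ---
if "VKT" in dfm.columns and "EF" in dfm.columns:
    dfm["E_CO"] = dfm["VKT"] * dfm["EF"]
    print(dfm[["VKT", "EF", "E_CO"]].head())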
###Output
_____no_output_____ |
04 Python Teil 2/04 Funktionenneu.ipynb | ###Markdown
Functions Functions are bundled blocks of code. Using functions, we can avoid repeating code blocks that are used several times. Instead, we define a function once that contains these code blocks, and elsewhere we only need to (briefly) call the function without copying the lines of code it contains. Defining and calling a function We have already met a few functions that Python provides for us. The function we have probably used most often so far is the print function:
###Code
print("HALLO WELT")
###Output
HALLO WELT
###Markdown
P.S. You can find an overview of Python's built-in functions here: https://docs.python.org/3/library/functions.html More functions in Python You also already know the len function for lists. :-)
###Code
print(len(["Hallo", "Welt"]))
###Output
2
###Markdown
You can also apply the len function to strings.
###Code
print(len("Hallo"))
def multi_print():
print("Hallo Welt!")
print("Hallo Welt!")
###Output
_____no_output_____
###Markdown
If we want to use a function of our own, we first have to define it. Such a function definition has the general syntax: **def function_name():** **code** To execute a function that has been defined, we write: **function_name()**
###Code
multi_print()
###Output
Hallo Welt!
Hallo Welt!
###Markdown
Functions with one argument We can pass an **argument** to a function, i.e. a value that the code inside the function depends on: **def function_name(argument):** **code that works with the specific argument**
###Code
def multi_print2(name):
print(name)
print(name)
multi_print2("HALLO")
multi_print2("WELT")
###Output
HALLO
HALLO
WELT
WELT
###Markdown
Functions with several arguments A function may also take several arguments: **def function_name(argument1, argument2, ...):** **code that works with argument1, argument2, ...**
###Code
def multi_print(name, count):
for i in range(0, count):
print(name)
multi_print("Hallo!", 5)
###Output
Hallo!
Hallo!
Hallo!
Hallo!
Hallo!
###Markdown
You can think of such a parameter as a variable that belongs to the function. Avoid naming a function parameter the same as an already existing variable - risk of confusion! Functions and the scope of variables
###Code
name = "MARS"
planet = "MARS"
def hallo(name):
print("Hallo" + name)
hallo("Plotti")
print(name)
###Output
MARS
HalloPlotti
MARS
###Markdown
You can see that the value of the variable _name_ has no influence on the argument _name_ of the function! The variable _name_ outside the function is therefore a different variable from the variable _name_ inside the function. So be careful: this makes the code hard to follow! Exercise Write a function that computes the total price of the products in the shopping cart! Complete the function list_sum(), which receives a list of prices as a parameter. The function should then print the sum of the numbers in the list.
###Code
cart_prices = [20, 3.5, 6.49, 8.99, 9.99, 14.98]
def list_sum(preisliste):
sum = 0
# your code goes here
for preis in preisliste:
sum = sum + preis
print("Der Gesamtpreis ist: " + str(sum))
list_sum(cart_prices)
###Output
Der Gesamtpreis ist: 63.95
###Markdown
Functions within functions Functions can also be nested inside one another:
###Code
def mutti_print():
print("Hallo Mutti")
mutti_print()
def multi_print(text,anzahl):
for i in range(0,anzahl):
print(text)
multi_print("plotti",5)
def multi_print(text,anzahl):
for i in range(0,anzahl):
print(text)
def super_begruessung():
multi_print("Hallo!", 3)
multi_print("Welt!", 3)
super_begruessung()
###Output
Hallo!
Hallo!
Hallo!
Welt!
Welt!
Welt!
###Markdown
Exercise - write a function hallo_mutti: it prints "hallo mutti" - write a function hallo_vatti: it prints "hallo vatti" - write a function hallo_familie: it calls the functions hallo_mutti and hallo_vatti.
###Code
hallo_mutti()
def hallo_mutti(begruessung): #ändern
print(begruessung + " mutti") #ändern
def hallo_vatti(begruessung): #ändern
print(begruessung + " vatti") #ändern
def family_print(begruessung):
hallo_mutti(begruessung) #ändern
hallo_vatti(begruessung) #ändern
#Call the function
family_print("Ciao")
#the output should be:
#Ciao mutti
#Ciao vatti
def hallo_mutti(begruessung): #ändern
print(begruessung + " mutti") #ändern
hallo_mutti("hi")
###Output
hi mutti
###Markdown
Returning a value So far, we have used functions to execute a block of code that may depend on arguments. However, functions can also return values using the **return** statement.
###Code
def return_element(name):
x = name + " Nase"
return x
a = return_element("Hi")
print(a)
###Output
Hi Nase
###Markdown
We can then treat such functions with return like variables:
###Code
def return_with_exclamation(name):
return name + "!"
a = return_with_exclamation("Hi")
if a == "Hi!":
print("Right!")
else:
print("Wrong.")
def maximum(a, b):
if a < b:
return b
else:
return a
result = maximum(10, 14)
print(result)
###Output
14
###Markdown
Exercise In America it is customary to tip. You want to write a function to which you pass the price of the meal (price) and the increase in % (percent), and which gives you a table showing how much tip you could give. An example: tipp_generator(10,0.05) should print: * Price of the meal: 10 dollars * With 5% tip: 10.5 dollars * With 10% tip: 11 dollars * With 15% tip: 11.5 dollars
###Code
def print_percentages(price,percent,multiplikator):
print("Mit "+ str(percent*multiplikator*100) + "% ist der Preis: " + str(price*(1+percent*multiplikator)))
def tipp_generator(price,percent):
print("Preis des Essens:" + str(price))
for i in range(1,4):
print_percentages(price,percent,i)
tipp_generator(10,0.1)
###Output
Preis des Essens:10
Mit 10.0% ist der Preis: 11.0
Mit 20.0% ist der Preis: 12.0
Mit 30.000000000000004% ist der Preis: 13.0
###Markdown
Functions vs. methods Functions When they are called, functions stand "on their own", and what they operate on is passed, if necessary, as an argument in the parentheses after them:
###Code
liste = [1, 2, 3]
print(liste)
print(len(liste))
###Output
3
###Markdown
Methods Besides functions, we also already know commands that are attached to objects with a dot. A list is such an **object**. Every object has methods that we can make use of. However, we cannot apply these methods to an object of a different type (at least in most cases). Let's look at some useful methods of the list object. You already know two of them well:
###Code
# ein Element anhängen
liste.append(4)
print(liste)
# ein Element an einem bestimmten Index entfernen
liste.pop(2)
###Output
_____no_output_____
###Markdown
Exercise Python objects often come with built-in methods that are very practical. Find out for lists: - How can you reverse the elements of a list? - How can you find out at which position "Max" is? - How can you remove "Max" from the list? (A possible solution is sketched in the code cell below.)
###Code
liste = ["Max", "Moritz", "Klara", "Elena"]
###Output
_____no_output_____ |
comorbidity.ipynb | ###Markdown
COVID-19 OPEN RESEARCH DATASET
###Code
import os
import json
import pandas as pd
%cd '/home/myilmaz/devel/covid551982_1475446_bundle_archive/'
###Output
/home/myilmaz/devel/covid551982_1475446_bundle_archive
###Markdown
Papers researching chronic kidney disease as a comorbidity risk
###Code
kags=pd.DataFrame(None)
for i in os.listdir('Kaggle/target_tables/8_risk_factors/'):
kag=pd.read_csv('Kaggle/target_tables/8_risk_factors/'+i)
kags=kags.append(kag)
kags.head()
%ls document_parses
keep=['Epidemiology, clinical course, and outcomes of critically ill adults with COVID-19 in New York City: a prospective cohort study','Psychiatric Predictors of COVID-19 Outcomes in a Skilled Nursing Facility Cohort','COVID-19 in Iran, a comprehensive investigation from exposure to treatment outcomes']
arts=set(kags['Study'])-set(keep)
os.path.getsize('document_parses/pdf_json')/1000000
docs=[]
alltext=[]
for i in os.listdir('document_parses/pdf_json'):
save=0
savee=0
with open('document_parses/pdf_json/'+i) as json_file:
data = json.load(json_file)
if data['metadata']['title'] in list(arts):
print(data['metadata']['title'])
doc=[]
text=''
for c,j in enumerate(data['body_text']):
row=[i,data['metadata']['title'],data['body_text'][c]['section'],data['body_text'][c]['text']]
doc.append(row)
text+=data['body_text'][c]['text']
if doc not in docs:
docs.append(doc)
alltext.append(text)
else:
pass
jsons=[j[0] for i in docs for j in i]
titles=[j[1] for i in docs for j in i]
sections=[j[2] for i in docs for j in i]
text=[j[1]+'. '+j[2]+'. '+j[3] for i in docs for j in i]
creats=pd.DataFrame(None,columns=['jsons','titles','sections','text'])
creats=pd.DataFrame(None,columns=['jsons','titles','sections','text'])
creats.jsons=jsons
creats.titles=titles
creats.sections=sections
creats.text=text
docs=creats.copy(deep=True)
docs.drop_duplicates(keep='first',inplace=True)
# NUMBER OF UNIQUE DOCUMENTS IN THE DATA SET
docs['jsons'].nunique()
docs.to_csv('comorbids.csv')
docs=pd.read_csv('comorbids.csv')
import numpy as np
np.min([len(i) for i in alltext])
np.max([len(i) for i in alltext])
print('Average document length is {} words'.format(np.mean([len(i) for i in alltext])))
###Output
Average document length is 16896.051020408162 words
###Markdown
Use pretrained NER model to find Problems, Tests, and Treatments
###Code
import os
import pyspark.sql.functions as F
from pyspark.sql.functions import monotonically_increasing_id
import json
import os
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
from sparknlp.annotator import *
from sparknlp.base import *
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed -q spark-nlp==2.5
import sparknlp
print (sparknlp.version())
import json
import os
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
from sparknlp.annotator import *
from sparknlp.base import *
from pyspark.sql.functions import monotonically_increasing_id
import pyspark.sql.functions as F
import pyspark.sql.types as t
spark=sparknlp.start()
docs.fillna('',inplace=True)
docs.head(1)
sparkdocs=spark.createDataFrame(docs).toDF('index','docid','title','section','sectionNo','text')
document_assembler = DocumentAssembler() \
.setInputCol("text")\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(["document"]) \
.setOutputCol("sentence")
tokenizer = Tokenizer() \
.setInputCols(["sentence"]) \
.setOutputCol("token")
word_embeddings = WordEmbeddingsModel.load("/home/myilmaz/cache_pretrained/embeddings_clinical_en_2.4.0_2.4_1580237286004")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
clinical_ner = NerDLModel.load('/home/myilmaz/cache_pretrained/ner_clinical_en_2.4.0_2.4_1580237286004') \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")
nlpPipeline = Pipeline(stages=[document_assembler,sentence_detector,tokenizer,
word_embeddings,
clinical_ner,ner_converter
])
empty_data = spark.createDataFrame([[""]]).toDF("text")
model = nlpPipeline.fit(empty_data)
results=model.transform(sparkdocs)
results.columns
exploded = results.select('docid','section','sectionNo',F.explode(F.arrays_zip('token.metadata','token.result','ner.result')).alias("cols")) \
.select('docid','section',F.expr("cols['0'].sentence").alias("sentid"),
F.col('cols.1').alias("token"),F.col('cols.2').alias("label"))
###Output
_____no_output_____
###Markdown
Save annotated documents for further analysis
###Code
exploded.write.option("header", "true").csv("comorbids2.csv")
os.listdir('comorbids2.csv')[0]
import pyspark.sql.types as t
myschema = t.StructType(
[
t.StructField('docid', t.StringType(), True),
t.StructField('section', t.StringType(), True),
t.StructField('sentid', t.IntegerType(), True),
t.StructField('token', t.StringType(), True),
t.StructField('label', t.StringType(), True)
]
)
csvs=os.listdir('comorbids2.csv')
big=pd.DataFrame(None)
for i in csvs:
dfs=spark.read.csv('comorbids2.csv/'+i,sep=',',schema=myschema,header=True)
one=dfs.toPandas()
big=big.append(one)
big.head()
import numpy as np
tokens=[]
savei=''
save=0
for i,j in zip(big.token,big.label):
if j.split('-')[0]!='I':
if save<0:
tokens[save]=savei
tokens.append(np.nan)
savei=i
save=0
continue
else:
tokens.append(savei)
savei=i
save=0
continue
elif j.split('-')[0]=='I':
savei+=' '+i
save-=1
tokens.append(np.nan)
else:
tokens.append(np.nan)
if save<0:
tokens[save]=savei
tokens.append(np.nan)
else:
tokens.append(savei)
tokens=tokens[1:]
big['chunks']=tokens
bigdf=big[big['chunks'].notnull()]
bigdf=bigdf[bigdf['label']!='O']
bigdf['chunks'].value_counts()
###Output
_____no_output_____
###Markdown
Prep for visualizations
###Code
problems=bigdf[bigdf['label']=='B-PROBLEM']
tests=bigdf[bigdf['label']=='B-TEST']
#len(problems[problems['chunks']=='proteinuria'])
probhist=pd.DataFrame(problems['chunks'].value_counts())
probhist=probhist.rename(columns={'chunks':'counts'})
probhist2=probhist.iloc[0:100]
testhist=pd.DataFrame(tests['chunks'].value_counts())
testhist=testhist.rename(columns={'chunks':'counts'})
testhist2=testhist.iloc[0:100]
###Output
_____no_output_____
###Markdown
Look at most frequent "Test" entities
###Code
testhist2.head(40)
import matplotlib.pyplot as plt
import seaborn as sns
fig, ax = plt.subplots(figsize =(24,12))
chart=sns.barplot(testhist2.index,testhist2['counts'])
chart.set_xticklabels(testhist2.index,rotation=90)
plt.title('Test Entities',fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
The pretrained model is returning a lot of false positives for Test entities, but you can still see that kidney-related tests such as "creatinine" are well represented in the dataset. Look at most frequent "Problem" entities
###Code
probhist2.head(40)
import seaborn as sns
import seaborn as sns
fig, ax = plt.subplots(figsize =(24,12))
chart=sns.barplot(probhist2.index,probhist2['counts'])
chart.set_xticklabels(probhist2.index,rotation=90)
plt.title('Problem Entities',fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
You can see that kidney related problems such as "AKI" are well represented in the dataset. Find 'Test' entities near the most frequent kidney related 'Problem' entity
###Code
problems=pd.DataFrame(problems).reset_index(drop=True)
problems['sectionid']=problems.docid+'-'+problems.section
tests=pd.DataFrame(tests).reset_index(drop=True)
tests['sectionid']=tests.docid+'-'+tests.section
akis=pd.DataFrame(problems[problems['chunks']=='AKI']).reset_index(drop=True)
a=list(set(akis['sectionid']))
akitest=tests[tests['sectionid'].isin(a)]
akicount=pd.DataFrame(akitest.groupby(['chunks'])['label'].count()).reset_index()
akicount=akicount.sort_values(by='label',ascending=False).reset_index(drop=True)
akicount.columns=['chunk','counts']
akicount
import seaborn as sns
import seaborn as sns
fig, ax = plt.subplots(figsize =(24,12))
chart= sns.barplot(akicount['chunk'][0:50],akicount['counts'][0:50])
chart.set_xticklabels(akicount.chunk,rotation=90)
plt.title("Clinical Tests Near 'AKI'",fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
Our clinical tests NER is returning a lot of false positives but we still see that creatinine, CRP, and PCR tests are well represented in the dataset, appearing in the same section as "AKI". This tells me the information is probably not historical and I will have measurements that I can use for predictions as well as terms to use for topic modelling and text classification. Find 'Problem' entities near the most frequent kidney related 'Test' entity
###Code
creatins=pd.DataFrame(tests[tests['chunks']=='creatinine']).reset_index(drop=True)
b=list(set(creatins['sectionid']))
creatprob=problems[problems['sectionid'].isin(b)]
creatcount=pd.DataFrame(creatprob.groupby(['chunks'])['label'].count()).reset_index()
creatcount=creatcount.sort_values(by='label',ascending=False).reset_index(drop=True)
creatcount.columns=['chunk','counts']
creatcount
creatcounts=creatcount.iloc[0:50]
import seaborn as sns
import seaborn as sns
fig, ax = plt.subplots(figsize =(24,12))
chart= sns.barplot(creatcounts.chunk,creatcounts['counts'])
chart.set_xticklabels(creatcounts.chunk,rotation=90)
plt.title("Patient Problems Near 'Creatinine' Test",fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
AKI, hypertension, diabetes, and acute kidney injury are all well represented in the dataset, appearing in the same section as "creatinine" tests. This tells me the information is probably not historical and I will have measurements that I can use for predictions as well as terms to use for topic modelling and text classification. Frequency of 'patient' mentions in documents
###Code
patient=pd.DataFrame(big[(big['token'].str.lower()=='patient')|(big['token'].str.lower()=='patients')]).reset_index(drop=True)
patients=patient.groupby(['docid'])['token'].count()
patients=patients.reset_index()
patients=patients.rename(columns={'token':'counts'})
len(patients)
sns.boxplot(patients['counts'])
plt.title('Frequency of Patient Mentions in 1568 Documents',fontsize=14)
###Output
_____no_output_____
###Markdown
Frequency of 'case report' mentions in documents
###Code
case=pd.DataFrame(big[(big['section'].str.lower()=='case report')|(big['section']=='case study')|(big['chunks'].str.lower()=='case report')|(big['chunks'].str.lower()=='case study')|(big['section'].str.lower()=='case reports')|(big['section']=='case studies')|(big['chunks'].str.lower()=='case reports')|(big['chunks'].str.lower()=='case studies')]).reset_index(drop=True)
cases=case.groupby(['docid'])['section'].count()
cases=cases.reset_index()
cases=cases.rename(columns={'section':'counts'})
len(cases)
sns.boxplot(cases['counts'])
plt.title('Frequency of Case Report/Study Mentions in 78 Documents',fontsize=14)
###Output
_____no_output_____
###Markdown
78 documents refer to case reports a median of about 550 times (The average document length is about 30,000 words.) I think I will have enough patient data to attempt some predictions.
###Code
artlist=kag['Study']
pres=[]
doc=[]
for i in os.listdir('document_parses/pdf_json'):
with open('document_parses/pdf_json/'+i) as json_file:
data = json.load(json_file)
if data['metadata']['title'] in list(artlist):
for c,j in enumerate(data['body_text']):
row=[i,data['metadata']['title'],data['body_text'][c]['section'],data['body_text'][c]['text']]
doc.append(row)
pres.append(doc)
jsons=[j[0] for i in pres for j in i]
titles=[j[1] for i in pres for j in i]
sections=[j[2].lower() for i in pres for j in i]
text=[j[1].lower()+'. '+j[2].lower()+'. '+j[3].lower() for i in pres for j in i]
pres2=pd.DataFrame(None,columns=['jsons','titles','sections','text'])
pres2['jsons']=jsons
pres2['titles']=titles
pres2['section']=sections
pres2['text']=text
pres2.head(1)
case=pd.DataFrame(pres2[(pd.Series(pres2['section']).str.contains('case report'))|(pd.Series(pres2['section']).str.contains('case study'))|(pd.Series(pres2['text']).str.contains('case report'))|(pd.Series(pres2['text']).str.contains('case study'))|(pd.Series(pres2['section']).str.contains('case reports'))|(pd.Series(pres2['section']).str.contains('case studies'))|(pd.Series(pres2['text']).str.contains('case reports'))|(pd.Series(pres2['text']).str.contains('case studies'))]).reset_index(drop=True)
case.head()
len(case)
case['jsons'].nunique()
case['titles'].value_counts()
###Output
_____no_output_____ |
LS_DS_123_Lecture.ipynb | ###Markdown
Lambda School Data Science Module 123 Introduction to Bayesian Inference[XKCD 1132](https://www.xkcd.com/1132/) Bayes vs. Frequentist inference| Bayes | Frequentist ||:---|:---||uses probabilities for both hypotheses and data|never uses or gives the probability of a hypothesis (no prior or posterior).||depends on the prior and likelihood of observed data.|depends on the likelihood P(D | H)) for both observed and unobserved data.||requires one to know or construct a ‘subjective prior’.|does not require a prior.||dominated statistical practice before the 20th century.|dominated statistical practice during the 20th century.|https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading20.pdf Cookie Challengehttps://www.greenteapress.com/thinkbayes/thinkbayes.pdfpage20 Suppose there are two bowls of cookies. * Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies. * Bowl 2 contains 20 vanilla cookies and 20 chocolate cookies. What is the probability of randomly drawing a vanilla cookie?
###Code
# independent probability of getting a vanilla cookie
vanilla = 30+20
choco = 10+20
cookies=vanilla+choco
p_V=vanilla/cookies
print(p_V)
###Output
0.625
###Markdown
Given that I pick from bowl 1, what is the probability of randomly drawing a vanilla cookie? `p(vanilla|Bowl 1)`
###Code
# conditional probability
p_B1=1 # prior
p_VB1= 30/40
print(p_VB1)
###Output
0.75
###Markdown
Now suppose you choose one of the bowls at random and, without looking, select a cookie at random. The cookie is vanilla. What is the probability that it came from Bowl 1? `p(Bowl 1|vanilla)`
###Code
# conditional probability
p_B1= .5 # independent probability of bowl 1
p_VB1=.75 # cond prob
p_V = 50/80 # indep prob of vanilla
p_B1V = (p_B1*p_VB1)/p_V
print(p_B1V)
# confirm what we did
(.5*.75)/(50/80)
###Output
_____no_output_____
###Markdown
**Bayes' theorem:**$$p(H|D) = \frac{p(H) * p(D|H)}{p(D)}$$* H is the **hypothesis** * D is the observed **data** * p(H) is the probability of the hypothesis before we see the data, called the prior probability, or just **prior**. * p(D) is the **marginal probability**, likelihood considered independently * p(D|H) is the probability of the data under the hypothesis, called the **conditional probability**. * p(H|D) is what we want to compute, the probability of the hypothesis after we see the data, called the **posterior** probability. $$posterior = \frac{prior * conditional}{marginal}$$ The Monty Hall Challengehttps://math.ucsd.edu/~crypto/Monty/monty.html
###Code
# Let's recreate the game with code.
import random
def lets_make_a_deal(initial, secondary):
doors=['a','b','c']
second_choices=['KEEP', 'SWITCH']
car = random.choice(doors)
if initial== car and secondary=='KEEP':
result=1
elif initial!=car and secondary == 'SWITCH':
result=1
else:
result=0
# print(f'Car was behind door {car}. You initially picked {initial} and then you decided to {secondary}. Points={result}')
return result
# try it out!
lets_make_a_deal('c', 'SWITCH')
# Choose a door (A, B, C,) and a choice (KEEP, SWITCH)
# Play the game 10,000 times. Which strategy is better? (Note: comment out the 'print'!)
wins = 0
doors=['a', 'b', 'c']
for x in range(0,10000):
points=lets_make_a_deal(random.choice(doors), 'SWITCH')
wins=wins+points
print('total points if switch:', wins)
# Play the game 10,000 times. Which strategy is better? (Note: comment out the 'print'!)
wins = 0
doors=['a', 'b', 'c']
for x in range(0,10000):
points=lets_make_a_deal(random.choice(doors), 'KEEP')
wins=wins+points
print('total points if KEEP:', wins)
###Output
total points if KEEP: 3317
###Markdown
Suppose you pick door A. The host opens door B to reveal a goat. Should you switch to door C? * prior probability P(car is A) = 1/3* conditional probability P(host opens B | car is A) = 1/2 * marginal probability P(host opens B) = (1/2 * 1/3) + ( 0 * 1/3) + ( 1 * 1/3) = 1/2
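As a sanity check on the numbers above (my own substitution, using exact fractions rather than the rounded 0.33 used in the code below): $$P(\text{car is A}\mid\text{host opens B}) = \frac{P(\text{host opens B}\mid\text{car is A})\,P(\text{car is A})}{P(\text{host opens B})} = \frac{\tfrac{1}{2}\cdot\tfrac{1}{3}}{\tfrac{1}{2}} = \frac{1}{3}$$ so $P(\text{car is C}\mid\text{host opens B}) = 1 - \tfrac{1}{3} = \tfrac{2}{3}$, which is why switching wins.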
###Code
# define your prior, marginal, and conditional probabilities
prior = .33                              # P(car is A)
conditional = .5                         # P(host opens B | car is A)
marginal = (.5*.33)+(0*.33)+(1*.33)      # P(host opens B), by the law of total probability
# P(car is A | host opens B) = (prior * conditional) / marginal
posterior = (prior * conditional)/marginal
print(posterior)
# P(car is C | choice is A) = 1-P(car is A | choice is A)
round(1-posterior,3)
###Output
_____no_output_____
###Markdown
If we pick a door and then switch after seeing another door opens, the probability of selecting the right door increases from 1/3 to 2/3. It is, based on this information, always in our best interest to switch. Bayes' Theorem and the Bayesian mindset The Law of Total ProbabilityBy definition, the total probability of all outcomes (events) of some variable (event space) $A$ is 1. That is:$$P(A) = \sum_n P(A_n) = 1$$The law of total probability takes this further, considering two variables ($A$ and $B$) and relating their marginal probabilities and their conditional probabilities. * marginal probabilities (their likelihoods considered independently, without reference to one another)* conditional probabilities (their likelihoods considered jointly) A marginal probability is simply notated as e.g. $P(A)$, while a conditional probability is notated $P(A|B)$, which reads "probability of $A$ *given* $B$".The law of total probability states:$$P(A) = \sum_n P(A | B_n) P(B_n)$$In words - the total probability of $A$ is equal to the sum of the conditional probability of $A$ on any given event $B_n$ times the probability of that event $B_n$, and summed over all possible events in $B$. The Law of Conditional ProbabilityWhat's the probability of something conditioned on something else? To determine this we have to go back to set theory and think about the intersection of sets:The formula for actual calculation:$$P(A|B) = \frac{P(A \cap B)}{P(B)}$$Think of the overall rectangle as the whole probability space, $A$ as the left circle, $B$ as the right circle, and their intersection as the red area. Try to visualize the ratio being described in the above formula, and how it is different from just the $P(A)$ (not conditioned on $B$).We can see how this relates back to the law of total probability - multiply both sides by $P(B)$ and you get $P(A|B)P(B) = P(A \cap B)$ - replaced back into the law of total probability we get $P(A) = \sum_n P(A \cap B_n)$.This may not seem like an improvement at first, but try to relate it back to the above picture - if you think of sets as physical objects, we're saying that the total probability of $A$ given $B$ is all the little pieces of it intersected with $B$, added together. The conditional probability is then just that again, but divided by the probability of $B$ itself happening in the first place. Bayes Theorem$$P(A|B) = \frac{P(B|A)* P(A)}{P(B)}$$In words - the probability of $A$ conditioned on $B$ is the probability of $B$ conditioned on $A$, times the probability of $A$ and divided by the probability of $B$. Using Bayes Theorem Iteratively (repeated testing)There are many ways to apply Bayes' theorem, such as drug tests. You may think that a drug test that is 100% accurate for true positives (detecting somebody who is a user) is pretty good, but what if it also has 1% false positive rate (indicating somebody is a user when they're not)? And furthermore, the rate of drug use in the population at large (and thus our prior belief) is 1/200.What is the likelihood somebody really is a user if they test positive? Some may guess it's 99% - the difference between the true positives and the false positives. But we have a prior belief of the background/true rate of drug use. Sounds like a job for Bayes' theorem!In other words, the likelihood that somebody is a user given they tested positive on a drug test is only 33.2% - probably much lower than you'd guess. This is why, in practice, it's important to use repeated testing to confirm. 
If we have the same individual who tested positive the first time take the drug test a second time, then the posterior probability from the first test becomes our new prior during the second application. What is the probability that a person is a drug user after two positive drug tests in a row?Bayes' theorem has been relevant in court cases where proper consideration of evidence was important. Whether it's a drug test, breathalyzer, pregnancy test, doctor's diagnosis, or neutrino detector, we have to take into account **both** the false positive rate and our prior probability in order to calculate the correct conditional probability. * P(+|User) = 1 (True Positive Rate)* P(User) = 1/200 (Prior probability)* P(+|Non-user) = 0.01 (False Positive Rate)You only need the above three numbers in order to calculate Bayes' rule
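Writing the first application out by hand (my arithmetic, using the three numbers above; it should match the `posterior1` printed by the code below): $$P(\text{User}\mid +) = \frac{P(+\mid\text{User})\,P(\text{User})}{P(+\mid\text{User})\,P(\text{User}) + P(+\mid\text{Non-user})\,P(\text{Non-user})} = \frac{1\cdot\tfrac{1}{200}}{1\cdot\tfrac{1}{200} + 0.01\cdot\tfrac{199}{200}} \approx 0.334$$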
###Code
# prior belief
P_user=1/200
# complement of the prior belief
P_non_user=1-P_user
# true positive rate
P_pos_given_user= 1
# false positive rate
P_pos_given_nonuser=.01
# My first iteration of Bayes Rule (Bayes Theorem)
numerator = P_pos_given_user*P_user
denom = (P_pos_given_user * P_user) + (P_pos_given_nonuser*P_non_user)
posterior1=numerator/denom
print(posterior1)
# We have the same person take the drug test again, and they test positive again
# Now what is the likelihood that they are a drug user?
# The posterior probability from the first test becomes our prior for the second iteration.
P_user=posterior1
P_non_user=1-P_user
# run Bayes theorem
numerator = P_pos_given_user*P_user
denom = (P_pos_given_user * P_user) + (P_pos_given_nonuser*P_non_user)
posterior2=numerator/denom
print(posterior2)
# Now the third test
P_user=posterior2
P_non_user=1-P_user
# run Bayes theorem
numerator = P_pos_given_user*P_user
denom = (P_pos_given_user * P_user) + (P_pos_given_nonuser*P_non_user)
posterior3=numerator/denom
print(posterior3)
# Fourth test
P_user=posterior3
P_non_user=1-P_user
# run Bayes theorem
numerator = P_pos_given_user*P_user
denom = (P_pos_given_user * P_user) + (P_pos_given_nonuser*P_non_user)
posterior4=numerator/denom
print(posterior4)
# Turn all of that into a function
def prob_drug_use(prior, fpr, tpr, num_tests):
    posterior = prior
    for test in range(0, num_tests):
        # prior belief (updated after each positive test)
        P_user = posterior
        # complement of the prior belief
        P_non_user = 1 - P_user
        # true positive rate
        P_pos_given_user = tpr
        # false positive rate
        P_pos_given_nonuser = fpr
        # theorem
        numerator = P_pos_given_user * P_user
        denom = (P_pos_given_user * P_user) + (P_pos_given_nonuser * P_non_user)
        posterior = numerator / denom
    return posterior
# try it out
prob_drug_use(1/200, .01, 1, 1 )
###Output
_____no_output_____
###Markdown
Calculating Bayesian Confidence
###Code
# see https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.bayes_mvs.html
###Output
_____no_output_____
###Markdown
Example 1. Using random number generator
###Code
# import scipy and numpy
from scipy import stats
import numpy as np
# Set Random Seed for Reproducibility
np.random.seed(seed=42)
coinflips =np.random.binomial(n=1, p=.5, size=100)
print(coinflips[:10])
# FREQUENTIST APPROACH
# calculate a 95% confidence interval on either side of this.
coinflips.mean()
ci_freq=stats.t.interval(0.95, # confidence level (alpha)
len(coinflips), # length of sample
loc=np.mean(coinflips), # sample mean
scale = stats.sem(coinflips) # std error of the mean
)
print(ci_freq)
# BAYESIAN APPROACH
ci_bayes=stats.bayes_mvs(coinflips, alpha=.95)[0][1]
print(ci_bayes)
import seaborn as sns
import matplotlib.pyplot as plt
# plot on graph with kernel density estimate
sns.kdeplot(coinflips);
plt.axvline(x=ci_freq[0], color='red');
plt.axvline(x=ci_freq[1], color='red');
plt.axvline(x=ci_bayes[0], color='green');
plt.axvline(x=ci_bayes[1], color='green');
plt.axvline(x=np.mean(coinflips), color='k');
plt.xlim(.3703, .3705);
###Output
_____no_output_____
###Markdown
Example 2. Using pandas dataframe
###Code
# read in the adult workforce dataset
url='https://raw.githubusercontent.com/ryanleeallred/datasets/master/adult.csv'
import pandas as pd
df = pd.read_csv(url, na_values='?')
df.head(3)
# mean age
samp_mean=df['age'].mean()
# FREQUENTIST APPROACH
# calculate a 95% confidence interval on either side of this.
ci_freq=stats.t.interval(0.95, # confidence level (alpha)
df.shape[0], # length of sample
loc=samp_mean, # sample mean
scale = stats.sem(df['age']) # std error of the mean
)
print(ci_freq)
# BAYESIAN APPROACH
ci_bayes=stats.bayes_mvs(df['age'], alpha=.95)[0][1]
print(ci_bayes)
# plot on graph with kernel density estimate
sns.kdeplot(df['age']);
plt.axvline(x=ci_freq[0], color='red');
plt.axvline(x=ci_freq[1], color='red');
plt.axvline(x=ci_bayes[0], color='green');
plt.axvline(x=ci_bayes[1], color='green');
plt.axvline(x=samp_mean, color='k');
plt.xlim(38, 39);
###Output
_____no_output_____
###Markdown
Pregnancy Test Example The **prior** is our belief in the model given no additional information. This model could be as simple as a statistic, such as the mean we're measuring, or a complex regression. The **conditional probability** is the probability of the data we observed occurring given the model. The **marginal probability** of the data is the probability that our data are observed regardless of what model we choose or believe in. You divide the likelihood by this value to ensure that we are only talking about our model within the context of the data occurring. Imagine that we have a suspicion that someone is pregnant. We might use a pregnancy test to determine whether or not that's the case. Consider the following:1) Our hypothesis, $H$, is that this person is pregnant. 2) A positive result on the pregnancy test is denoted as $D$. 3) We'll consider some "information" about the world: $p(H) = 0.125$ **prior**: on average, 12.5 percent of women are pregnant (not accurate, but useful for our purposes here). **marginal**: $p(D) = 0.14$ — Fourteen percent of people who take the pregnancy test come back with a positive result. **likelihood**: $p(D|H) = 0.85$ — The likelihood states, "How likely would we get data that look like this if our hypothesis was true?" In other words, how accurate is our test? **Estimate how likely it is that a woman who got a positive result is actually pregnant using Bayes' theorem.** We've got a decent chance of the results being accurate, but not 100 percent — that means that our posterior probability of somebody being pregnant, ($H=1$), given the data that we've seen (a positive result on the test) is 0.759, i.e. roughly 76 percent.
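Substituting the three numbers above into Bayes' theorem (the same arithmetic the code cell below performs): $$p(H\mid D) = \frac{p(D\mid H)\,p(H)}{p(D)} = \frac{0.85 \times 0.125}{0.14} \approx 0.759$$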
###Code
# Define a function for bayes' theorem, assuming you know the conditional, prior, and marginal probabilities.
def bayes(conditional, prior, marginal):
return (conditional*prior)/marginal
posterior = bayes(.85, .125, .14)
print(posterior)
###Output
0.7589285714285713
|
blink/baselines/TF-IDF+SVM.ipynb | ###Markdown
Using processed data
###Code
# missing imports for this cell (other helpers such as preprocess and get_tf_idf_vector are defined elsewhere in the repo)
import pickle
import pandas as pd
with open('./data/processed/dictionary.pickle','rb') as f:
    dictionary = pickle.load(f)
article_df = pd.DataFrame.from_dict(dictionary)
def return_list_of_dict(filename):
train=[]
with open('./data/processed/'+filename,'r') as f:
for line in f:
train.append(json.loads(line))
return train
train = return_list_of_dict('train.jsonl')
val = return_list_of_dict('valid.jsonl')
test = return_list_of_dict('test.jsonl')
train_df = pd.DataFrame.from_dict(train)
val_df = pd.DataFrame.from_dict(val)
test_df = pd.DataFrame.from_dict(test)
train_df.head()
X_train = train_df['mention']
X_val = val_df['mention']
X_test = test_df['mention']
X_train = preprocess(X_train)
X_test = preprocess(X_test)
X_val = preprocess(X_val)
y_train = train_df['label_id']
y_val = val_df['label_id']
y_test = test_df['label_id']
# Doing PCA here using truncated SVD as PCA does not work for sparse matrices
vectorizer, X_train_tf_idf_vector = get_tf_idf_vector(X_train)
#truncatedSVD, X_train_svd = truncated_svd_on_tf_idf_vector(X_train_tf_idf_vector)
X_val_tf_idf_vector = get_tf_idf_vector_test(X_val,vectorizer)
#X_val_svd = truncated_svd_on_tf_idf_vector_test(X_val_tf_idf_vector,truncatedSVD)
X_test_tf_idf_vector = get_tf_idf_vector_test(X_test,vectorizer)
#X_test_svd = truncated_svd_on_tf_idf_vector_test(X_test_tf_idf_vector, truncatedSVD)
# Training SVM
clf = make_pipeline(StandardScaler(with_mean=False), SVC(gamma='auto',class_weight='balanced'))
clf.fit(X_train_tf_idf_vector, y_train)
#clf = make_pipeline(StandardScaler(), SVC(gamma='auto'),class_weight='balanced')
#clf.fit(X_train_svd, y_train)
y_test_pred = clf.predict(X_test_tf_idf_vector)  # predict on the held-out test set
sum(y_test_pred==y_test)/y_test.shape[0]*100
correct=[]
incorrect=[]
for i in range(len(y_test)):
if y_test[i]==y_test_pred[i]:
correct.append(y_test[i])
else:
incorrect.append([y_test[i],y_test_pred[i]])
correct= pd.DataFrame(correct)
y_test_pred_df=pd.DataFrame(np.array(y_test), columns=['true'])
y_test_pred_df['pred']=np.array(y_test_pred)
y_test_pred_df.to_csv('TFIDF_SVM_Predictions.csv')
y_test_pred_df = pd.read_csv('TFIDF_SVM_Predictions.csv',index_col=0)
y_test_pred_df
len(y_test_pred_df[y_test_pred_df['pred']==155])
y_test_correct = y_test_pred_df[y_test_pred_df['true']==y_test_pred_df['pred']]
y_test_correct.head()
y_test_incorrect = y_test_pred_df[y_test_pred_df['true']!=y_test_pred_df['pred']]
y_test_incorrect.head()
len(y_test_correct)
len(y_test_correct[y_test_correct['pred']==155])
len(y_test_incorrect)
len(y_test_incorrect[y_test_incorrect['pred']==155])
len(y_test_pred_df[y_test_pred_df['true']==155])
with open('./data/processed/dictionary.pickle','rb') as f:
dictionary = pickle.load(f)
article_df = pd.DataFrame.from_dict(dictionary)
y_test_correct=y_test_correct.merge(article_df,left_on='true', right_on='cui')
y_test_correct
y_test_incorrect=y_test_incorrect.merge(article_df,left_on='true', right_on='cui')
y_test_incorrect=y_test_incorrect.merge(article_df,left_on='pred', right_on='cui')
y_test_incorrect.head()
y_test_incorrect = y_test_incorrect.rename(columns={"description_x":"true_description","summary_x":"true_summary","description_y":"pred_description","summary_y":"pred_summary","title_x":"true_title","title_y":"pred_title"})
y_test_incorrect.head()
y_test_correct.to_csv('TF-IDF_SVM_CorrectPredictions.csv')
y_test_incorrect.to_csv('TF-IDF_SVM_IncorrectPredictions.csv')
len(y_test_correct)
y_test_pred_df = pd.read_csv('TFIDF_SVM_Predictions.csv')
f1_score(y_test_pred_df['true'],y_test_pred_df['pred'], average='macro')
f1_score(y_test_pred_df['true'],y_test_pred_df['pred'], average='micro')
###Output
_____no_output_____ |
_notebooks/2021_12_31_New_York_City_Taxi_Trip_Duration.ipynb | ###Markdown
"New York City Taxi Trip Durationd"> "[Kaggle]"- toc:true- branch: master- badges: true- comments: true- author: Hamel Husain & Jeremy Howard- categories: [Colab, Kaggle Code Review] https://www.kaggle.com/sheriytm/brewed-tpot-for-nyc-with-love-lb0-37 1.Library & Data Load
###Code
from google.colab import drive
drive.mount('/content/drive')
!cp /content/drive/MyDrive/Kaggle/kaggle.json /root/.kaggle/
#!kaggle competitions list
!kaggle competitions download -c nyc-taxi-trip-duration
!unzip "/content/train.zip" -d "/content"
!unzip "/content/test.zip" -d "/content"
!unzip "/content/sample_submission.zip" -d "/content"
!pip install haversine
!pip install tpot
# Import libraries
import os
mingw_path = 'g:/mingw64/bin'
os.environ['PATH'] = mingw_path + ';' + os.environ['PATH']
import xgboost as xgb
import numpy as np
import os
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
color = sns.color_palette()
%matplotlib inline
from haversine import haversine
import datetime as dt
from subprocess import check_output
import warnings
warnings.filterwarnings('ignore')
os.listdir()
trainDF = pd.read_csv('train.csv', nrows=50000)
# Load testing data as test
testDF = pd.read_csv('test.csv')
# Print size as well as the top 5 observation of training dataset
print('Size of the training set is: {} rows and {} columns'.format(*trainDF.shape))
print ("\n", trainDF.head(5))
trainDF.describe()
print("Train shape : ", trainDF.shape)
print("Test shape : ", testDF.shape)
dtypeDF = trainDF.dtypes.reset_index()
dtypeDF.columns = ["Count", "Column Type"]
dtypeDF.groupby("Column Type").aggregate('count').reset_index()
###Output
_____no_output_____
###Markdown
Plot the trip duration
###Code
plt.figure(figsize=(8,6))
plt.scatter(range(trainDF.shape[0]), np.sort(trainDF.trip_duration.values))
plt.xlabel('index', fontsize=12)
plt.ylabel('trip duration', fontsize=12)
plt.show()
th = trainDF.trip_duration.quantile(0.99)
tempDF = trainDF
tempDF = tempDF[tempDF['trip_duration'] < th]
plt.figure(figsize=(8,6))
plt.scatter(range(tempDF.shape[0]), np.sort(tempDF.trip_duration.values))
plt.xlabel('index', fontsize=12)
plt.ylabel('trip duration', fontsize=12)
plt.show()
del tempDF
###Output
_____no_output_____
###Markdown
Let's remove the outliers from the train data target
###Code
trainDF = trainDF[trainDF['trip_duration'] < th]
###Output
_____no_output_____
###Markdown
Number of variables with missing values
###Code
variables_missing_value = trainDF.isnull().sum()
variables_missing_value
variables_missing_value = testDF.isnull().sum()
variables_missing_value
###Output
_____no_output_____
###Markdown
2. Data Preprocessing
###Code
from sklearn.decomposition import PCA
from sklearn.cluster import MiniBatchKMeans
t0 = dt.datetime.now()
train = trainDF
test = testDF
# convert the pickup/dropoff datetime columns to datetime dtype
# add a pickup-date variable
# add check_trip_duration to train: dropoff time minus pickup time (in seconds)
# add new features via principal component analysis (PCA)
train['pickup_datetime'] = pd.to_datetime(train.pickup_datetime)
test['pickup_datetime'] = pd.to_datetime(test.pickup_datetime)
train.loc[:, 'pickup_date'] = train['pickup_datetime'].dt.date
test.loc[:, 'pickup_date'] = test['pickup_datetime'].dt.date
train['dropoff_datetime'] = pd.to_datetime(train.dropoff_datetime)
train['store_and_fwd_flag'] = 1 * (train.store_and_fwd_flag.values == 'Y')
test['store_and_fwd_flag'] = 1 * (test.store_and_fwd_flag.values == 'Y')
train['check_trip_duration'] = (train['dropoff_datetime'] - train['pickup_datetime']).map(lambda x: x.total_seconds())
duration_difference = train[np.abs(train['check_trip_duration'].values - train['trip_duration'].values) > 1]
print('Trip_duration and datetimes are ok.') if len(duration_difference[['pickup_datetime', 'dropoff_datetime', 'trip_duration', 'check_trip_duration']]) == 0 else print('Ooops.')
train['trip_duration'].describe()
train['log_trip_duration'] = np.log(train['trip_duration'].values + 1)
# Feature Extraction
coords = np.vstack((train[['pickup_latitude', 'pickup_longitude']].values,
train[['dropoff_latitude', 'dropoff_longitude']].values,
test[['pickup_latitude', 'pickup_longitude']].values,
test[['dropoff_latitude', 'dropoff_longitude']].values))
pca = PCA().fit(coords)
train['pickup_pca0'] = pca.transform(train[['pickup_latitude', 'pickup_longitude']])[:, 0]
train['pickup_pca1'] = pca.transform(train[['pickup_latitude', 'pickup_longitude']])[:, 1]
train['dropoff_pca0'] = pca.transform(train[['dropoff_latitude', 'dropoff_longitude']])[:, 0]
train['dropoff_pca1'] = pca.transform(train[['dropoff_latitude', 'dropoff_longitude']])[:, 1]
test['pickup_pca0'] = pca.transform(test[['pickup_latitude', 'pickup_longitude']])[:, 0]
test['pickup_pca1'] = pca.transform(test[['pickup_latitude', 'pickup_longitude']])[:, 1]
test['dropoff_pca0'] = pca.transform(test[['dropoff_latitude', 'dropoff_longitude']])[:, 0]
test['dropoff_pca1'] = pca.transform(test[['dropoff_latitude', 'dropoff_longitude']])[:, 1]
###Output
Trip_duration and datetimes are ok.
###Markdown
Distance Distance between the pickup and dropoff locations: straight-line (haversine) distance and Manhattan distance
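For reference, the great-circle (haversine) distance implemented by the helper below is, with $R \approx 6371$ km the mean Earth radius, $\varphi$ latitude and $\lambda$ longitude (in radians): $$d = 2R\,\arcsin\!\left(\sqrt{\sin^2\tfrac{\Delta\varphi}{2} + \cos\varphi_1\cos\varphi_2\,\sin^2\tfrac{\Delta\lambda}{2}}\right)$$ The "dummy" Manhattan distance below approximates the city-block distance as the sum of two haversine legs (one varying only longitude, one varying only latitude).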
###Code
def haversine_array(lat1, lng1, lat2, lng2):
lat1, lng1, lat2, lng2 = map(np.radians, (lat1, lng1, lat2, lng2))
AVG_EARTH_RADIUS = 6371 # in km
lat = lat2 - lat1
lng = lng2 - lng1
d = np.sin(lat * 0.5) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(lng * 0.5) ** 2
h = 2 * AVG_EARTH_RADIUS * np.arcsin(np.sqrt(d))
return h
def dummy_manhattan_distance(lat1, lng1, lat2, lng2):
a = haversine_array(lat1, lng1, lat1, lng2)
b = haversine_array(lat1, lng1, lat2, lng1)
return a + b
def bearing_array(lat1, lng1, lat2, lng2):
AVG_EARTH_RADIUS = 6371 # in km
lng_delta_rad = np.radians(lng2 - lng1)
lat1, lng1, lat2, lng2 = map(np.radians, (lat1, lng1, lat2, lng2))
y = np.sin(lng_delta_rad) * np.cos(lat2)
x = np.cos(lat1) * np.sin(lat2) - np.sin(lat1) * np.cos(lat2) * np.cos(lng_delta_rad)
return np.degrees(np.arctan2(y, x))
train.loc[:, 'distance_haversine'] = haversine_array(train['pickup_latitude'].values, train['pickup_longitude'].values, train['dropoff_latitude'].values, train['dropoff_longitude'].values)
train.loc[:, 'distance_dummy_manhattan'] = dummy_manhattan_distance(train['pickup_latitude'].values, train['pickup_longitude'].values, train['dropoff_latitude'].values, train['dropoff_longitude'].values)
train.loc[:, 'direction'] = bearing_array(train['pickup_latitude'].values, train['pickup_longitude'].values, train['dropoff_latitude'].values, train['dropoff_longitude'].values)
train.loc[:, 'pca_manhattan'] = np.abs(train['dropoff_pca1'] - train['pickup_pca1']) + np.abs(train['dropoff_pca0'] - train['pickup_pca0'])
test.loc[:, 'distance_haversine'] = haversine_array(test['pickup_latitude'].values, test['pickup_longitude'].values, test['dropoff_latitude'].values, test['dropoff_longitude'].values)
test.loc[:, 'distance_dummy_manhattan'] = dummy_manhattan_distance(test['pickup_latitude'].values, test['pickup_longitude'].values, test['dropoff_latitude'].values, test['dropoff_longitude'].values)
test.loc[:, 'direction'] = bearing_array(test['pickup_latitude'].values, test['pickup_longitude'].values, test['dropoff_latitude'].values, test['dropoff_longitude'].values)
test.loc[:, 'pca_manhattan'] = np.abs(test['dropoff_pca1'] - test['pickup_pca1']) + np.abs(test['dropoff_pca0'] - test['pickup_pca0'])
train.loc[:, 'center_latitude'] = (train['pickup_latitude'].values + train['dropoff_latitude'].values) / 2
train.loc[:, 'center_longitude'] = (train['pickup_longitude'].values + train['dropoff_longitude'].values) / 2
test.loc[:, 'center_latitude'] = (test['pickup_latitude'].values + test['dropoff_latitude'].values) / 2
test.loc[:, 'center_longitude'] = (test['pickup_longitude'].values + test['dropoff_longitude'].values) / 2
# Datetime features
# day of the week
# week of the year
# hour of the week (weekday * 24 + hour)
train.loc[:, 'pickup_weekday'] = train['pickup_datetime'].dt.weekday
train.loc[:, 'pickup_hour_weekofyear'] = train['pickup_datetime'].dt.weekofyear
train.loc[:, 'pickup_hour'] = train['pickup_datetime'].dt.hour
train.loc[:, 'pickup_minute'] = train['pickup_datetime'].dt.minute
train.loc[:, 'pickup_dt'] = (train['pickup_datetime'] - train['pickup_datetime'].min()).dt.total_seconds()
train.loc[:, 'pickup_week_hour'] = train['pickup_weekday'] * 24 + train['pickup_hour']
test.loc[:, 'pickup_weekday'] = test['pickup_datetime'].dt.weekday
test.loc[:, 'pickup_hour_weekofyear'] = test['pickup_datetime'].dt.weekofyear
test.loc[:, 'pickup_hour'] = test['pickup_datetime'].dt.hour
test.loc[:, 'pickup_minute'] = test['pickup_datetime'].dt.minute
test.loc[:, 'pickup_dt'] = (test['pickup_datetime'] - train['pickup_datetime'].min()).dt.total_seconds()
test.loc[:, 'pickup_week_hour'] = test['pickup_weekday'] * 24 + test['pickup_hour']
train.loc[:,'week_delta'] = train['pickup_datetime'].dt.weekday + \
((train['pickup_datetime'].dt.hour + (train['pickup_datetime'].dt.minute / 60.0)) / 24.0)
test.loc[:,'week_delta'] = test['pickup_datetime'].dt.weekday + \
((test['pickup_datetime'].dt.hour + (test['pickup_datetime'].dt.minute / 60.0)) / 24.0)
# Make time features cyclic
train.loc[:,'week_delta_sin'] = np.sin((train['week_delta'] / 7) * np.pi)**2
train.loc[:,'hour_sin'] = np.sin((train['pickup_hour'] / 24) * np.pi)**2
test.loc[:,'week_delta_sin'] = np.sin((test['week_delta'] / 7) * np.pi)**2
test.loc[:,'hour_sin'] = np.sin((test['pickup_hour'] / 24) * np.pi)**2
# Speed
train.loc[:, 'avg_speed_h'] = 1000 * train['distance_haversine'] / train['trip_duration']
train.loc[:, 'avg_speed_m'] = 1000 * train['distance_dummy_manhattan'] / train['trip_duration']
train.loc[:, 'pickup_lat_bin'] = np.round(train['pickup_latitude'], 3)
train.loc[:, 'pickup_long_bin'] = np.round(train['pickup_longitude'], 3)
# Average speed for regions
gby_cols = ['pickup_lat_bin', 'pickup_long_bin']
coord_speed = train.groupby(gby_cols).mean()[['avg_speed_h']].reset_index()
coord_count = train.groupby(gby_cols).count()[['id']].reset_index()
coord_stats = pd.merge(coord_speed, coord_count, on=gby_cols)
coord_stats = coord_stats[coord_stats['id'] > 100]
train.loc[:, 'pickup_lat_bin'] = np.round(train['pickup_latitude'], 2)
train.loc[:, 'pickup_long_bin'] = np.round(train['pickup_longitude'], 2)
train.loc[:, 'center_lat_bin'] = np.round(train['center_latitude'], 2)
train.loc[:, 'center_long_bin'] = np.round(train['center_longitude'], 2)
train.loc[:, 'pickup_dt_bin'] = (train['pickup_dt'] // (3 * 3600))
test.loc[:, 'pickup_lat_bin'] = np.round(test['pickup_latitude'], 2)
test.loc[:, 'pickup_long_bin'] = np.round(test['pickup_longitude'], 2)
test.loc[:, 'center_lat_bin'] = np.round(test['center_latitude'], 2)
test.loc[:, 'center_long_bin'] = np.round(test['center_longitude'], 2)
test.loc[:, 'pickup_dt_bin'] = (test['pickup_dt'] // (3 * 3600))
###Output
_____no_output_____
###Markdown
Clustering
###Code
sample_ind = np.random.permutation(len(coords))[:500000]
kmeans = MiniBatchKMeans(n_clusters=100, batch_size=10000).fit(coords[sample_ind])
train.loc[:, 'pickup_cluster'] = kmeans.predict(train[['pickup_latitude', 'pickup_longitude']])
train.loc[:, 'dropoff_cluster'] = kmeans.predict(train[['dropoff_latitude', 'dropoff_longitude']])
test.loc[:, 'pickup_cluster'] = kmeans.predict(test[['pickup_latitude', 'pickup_longitude']])
test.loc[:, 'dropoff_cluster'] = kmeans.predict(test[['dropoff_latitude', 'dropoff_longitude']])
t1 = dt.datetime.now()
print('Time till clustering: %i seconds' % (t1 - t0).seconds)
# Temporal and geospatial aggregation
for gby_col in ['pickup_hour', 'pickup_date', 'pickup_dt_bin',
'pickup_week_hour', 'pickup_cluster', 'dropoff_cluster']:
gby = train.groupby(gby_col).mean()[['avg_speed_h', 'avg_speed_m', 'log_trip_duration']]
gby.columns = ['%s_gby_%s' % (col, gby_col) for col in gby.columns]
train = pd.merge(train, gby, how='left', left_on=gby_col, right_index=True)
test = pd.merge(test, gby, how='left', left_on=gby_col, right_index=True)
for gby_cols in [['center_lat_bin', 'center_long_bin'],
['pickup_hour', 'center_lat_bin', 'center_long_bin'],
['pickup_hour', 'pickup_cluster'], ['pickup_hour', 'dropoff_cluster'],
['pickup_cluster', 'dropoff_cluster']]:
coord_speed = train.groupby(gby_cols).mean()[['avg_speed_h']].reset_index()
coord_count = train.groupby(gby_cols).count()[['id']].reset_index()
coord_stats = pd.merge(coord_speed, coord_count, on=gby_cols)
coord_stats = coord_stats[coord_stats['id'] > 100]
coord_stats.columns = gby_cols + ['avg_speed_h_%s' % '_'.join(gby_cols), 'cnt_%s' % '_'.join(gby_cols)]
train = pd.merge(train, coord_stats, how='left', on=gby_cols)
test = pd.merge(test, coord_stats, how='left', on=gby_cols)
group_freq = '60min'
df_all = pd.concat((train, test))[['id', 'pickup_datetime', 'pickup_cluster', 'dropoff_cluster']]
train.loc[:, 'pickup_datetime_group'] = train['pickup_datetime'].dt.round(group_freq)
test.loc[:, 'pickup_datetime_group'] = test['pickup_datetime'].dt.round(group_freq)
# Count trips over 60min
df_counts = df_all.set_index('pickup_datetime')[['id']].sort_index()
df_counts['count_60min'] = df_counts.isnull().rolling(group_freq).count()['id']
train = train.merge(df_counts, on='id', how='left')
test = test.merge(df_counts, on='id', how='left')
feature_names = list(train.columns)
print(np.setdiff1d(train.columns, test.columns))
do_not_use_for_training = ['id', 'log_trip_duration', 'pickup_datetime', 'dropoff_datetime', 'trip_duration', 'check_trip_duration',
'pickup_date', 'avg_speed_h', 'avg_speed_m', 'pickup_lat_bin', 'pickup_long_bin',
'center_lat_bin', 'center_long_bin', 'pickup_dt_bin', 'pickup_datetime_group']
feature_names = [f for f in train.columns if f not in do_not_use_for_training]
# print(feature_names)
print('We have %i features.' % len(feature_names))
train[feature_names].count()
#ytrain = np.log(train['trip_duration'].values + 1)
#ytrain = train[train['trip_duration'].notnull()]['trip_duration_log'].values
t1 = dt.datetime.now()
print('Feature extraction time: %i seconds' % (t1 - t0).seconds)
feature_stats = pd.DataFrame({'feature': feature_names})
feature_stats.loc[:, 'train_mean'] = np.nanmean(train[feature_names].values, axis=0).round(4)
feature_stats.loc[:, 'test_mean'] = np.nanmean(test[feature_names].values, axis=0).round(4)
feature_stats.loc[:, 'train_std'] = np.nanstd(train[feature_names].values, axis=0).round(4)
feature_stats.loc[:, 'test_std'] = np.nanstd(test[feature_names].values, axis=0).round(4)
feature_stats.loc[:, 'train_nan'] = np.mean(np.isnan(train[feature_names].values), axis=0).round(3)
feature_stats.loc[:, 'test_nan'] = np.mean(np.isnan(test[feature_names].values), axis=0).round(3)
feature_stats.loc[:, 'train_test_mean_diff'] = np.abs(feature_stats['train_mean'] - feature_stats['test_mean']) / np.abs(feature_stats['train_std'] + feature_stats['test_std']) * 2
feature_stats.loc[:, 'train_test_nan_diff'] = np.abs(feature_stats['train_nan'] - feature_stats['test_nan'])
feature_stats = feature_stats.sort_values(by='train_test_mean_diff')
feature_stats[['feature', 'train_test_mean_diff']].tail()
###Output
_____no_output_____
###Markdown
3. Modeling Think of TPOT as your Data Science Assistant. TPOT is a Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming. TPOT is built on top of scikit-learn, so all of the code it generates should look familiar, if you are familiar with scikit-learn, anyway. TPOT is still under active development, so check this repository regularly for updates.
###Code
from tpot import TPOTRegressor
auto_classifier = TPOTRegressor(generations=3, population_size=9, verbosity=2)
from sklearn.model_selection import train_test_split
# K Fold Cross Validation
from sklearn.model_selection import KFold
X = train[feature_names].values
y = np.log(train['trip_duration'].values + 1)
kf = KFold(n_splits=10)
kf.get_n_splits(X)
print(kf)
KFold(n_splits=10, random_state=None, shuffle=False)
for train_index, test_index in kf.split(X):
print("TRAIN:", train_index, "TEST:", test_index)
X_train, X_valid = X[train_index], X[test_index]
y_train, y_valid = y[train_index], y[test_index]
auto_classifier.fit(X_train, y_train)
# Now do the prediction
test_result = auto_classifier.predict(test[feature_names].values)
sub = pd.DataFrame()
sub['id'] = test['id']
sub['trip_duration'] = np.exp(test_result)
sub.to_csv('NYCTaxi_TpotModels.csv', index=False)
!kaggle competitions submit -c nyc-taxi-trip-duration -f NYCTaxi_TpotModels.csv -m "_1"
###Output
Successfully submitted to New York City Taxi Trip Duration |
building-analytics/.ipynb_checkpoints/test_clean_data-checkpoint.ipynb | ###Markdown
Test TS_Util()
###Code
import matplotlib
from matplotlib import style
%matplotlib inline
style.use('ggplot')
# This is to import custom-made modules
# This can be removed after making these modules a real library
import os, sys
lib_path = os.path.abspath(os.path.join('..', 'building-analytics')) # relative path of the source code in Box Folder
sys.path.append(lib_path)
from TS_Util_Clean_Data import *
# inputs
fileName = "test_marco.csv" # replace with other files used
folder = "../test1"
## call script
# instantiate class
TSU = TS_Util()
# load data
data= TSU.load_TS(fileName, folder)
data.head()
data= TSU.remove_start_NaN(data)
data.head()
# clean start-end
data= TSU.remove_end_NaN(data)
data.tail()
TSU._find_missing(data).head()
TSU.display_missing(data, how="all")
TSU.count_missing(data, how="number")
TSU.remove_missing(data,how="any").head()
TSU._find_outOfBound(data, 10, 300).head()
TSU.display_outOfBound(data, 10, 300)
TSU.count_outOfBound(data, 10, 300)
TSU.remove_outOfBound(data, 10, 350)
TSU.display_outliers(data,method="std",coeff=2, window=10)
TSU.display_outliers(data,method="rstd",coeff=1, window=10)
TSU.display_outliers(data,method="rmedian",coeff=1, window=10)
TSU.display_outliers(data,method="iqr",coeff=1, window=10)
TSU.display_outliers(data,method="qtl",coeff=1, window=10)
###Output
_____no_output_____ |
charts/5-Past_Rating_Square_area_chart.ipynb | ###Markdown
Table of Contents1 Plot square areas 5 - Past rating | Number of companies in line with B2DS scenario
###Code
import numpy as np
import plotly.graph_objects as go
width_all = np.sqrt(96)
width_all
width_B2DS = np.sqrt(11)
width_B2DS
###Output
_____no_output_____
###Markdown
Plot square areas
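The side lengths computed above make each square's area proportional to the number of companies it represents, i.e. side $= \sqrt{\text{count}}$ (my reading of the cells above): $$\sqrt{96} \approx 9.80, \qquad \sqrt{11} \approx 3.32$$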
###Code
square_All = go.Bar(
x=[width_all/2],
y=[10],
width=[width_all], # customize width here
customdata = ['There are 96 companies in the sample.'],
hovertemplate="%{customdata}",
#hovertext=['There are 96 companies in the sample.'],
name='All companies',
marker_color='#BFBFBD'
)
square_B2DS = go.Bar(
x=[width_all-width_B2DS/2],
y=[width_B2DS],
width=[width_B2DS], # customize width here
customdata = ['Only 11 companies have reduced their CO2 emissions and are in line with the B2DS scenario from the IEA.'],
hovertemplate="%{customdata}",
#hovertext=['There are 96 companies in the sample.'],
name='Companies with high CO2 reductions',
marker_color= "#FF5748"#"#1A2E40"
)
fig = go.Figure(data=[square_All,square_B2DS])
fig.update_layout(barmode='group')
fig.update_xaxes(range=[0, 10])
fig.update_yaxes(range=[0, 10])
fig.update_xaxes(showline=False, showticklabels=False)
fig.update_yaxes(showline=False, showticklabels=False)
fig['layout'].update(
#title = overall_title,
template = "simple_white",
showlegend=False,
hovermode='closest'
);
fig.update_yaxes(
scaleanchor = "x",
scaleratio = 1,
)
fig.show()
###Output
_____no_output_____ |
Code/Skewed_data.ipynb | ###Markdown
Processing for skewed data: Feature Selection via Shapley Values, Training Decision Tree Classifier, Training Vanilla NN, Training Multiple experts systems. Processing for data after outlier removal and Imputation: Feature Selection via Shapley Values, Training Decision Tree Classifier, Training Vanilla NN, Training Multiple experts systems.
###Code
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/MyDrive
!pip install shap
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
from mpl_toolkits.mplot3d import Axes3D
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import collections
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from xgboost.sklearn import XGBRegressor
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn import tree
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBClassifier
from sklearn.tree import DecisionTreeClassifier
import shap
from sklearn.metrics import mean_squared_error
%matplotlib inline
# import os
# os.listdir()
df=pd.read_csv('complete_data.csv')
df.head()
#SHAPLEY
y=df['output_nonad']
X=df.drop('output_nonad', 1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.25, random_state=42)
shap.initjs()
xgb_model = XGBRegressor(n_estimators=1000, max_depth=10, learning_rate=0.001, random_state=0)
xgb_model.fit(X_train, y_train)
y_predict = xgb_model.predict(X_test)
mean_squared_error(y_test, y_predict)**(0.5)
explainer = shap.TreeExplainer(xgb_model)
shap_values = explainer(X_train)
print(shap_values)
shap.plots.bar(shap_values,max_display=16)
cols=['width','ancurl*com','url*ads','height','ancurl*click','aratio','url*ad','ancurl*http+www','alt*click','alt*here+for','ancurl*ad','url*afn.org','alt*to','url*images+home','url*doubleclick.net','output_nonad']
cols_X=['width','ancurl*com','url*ads','height','ancurl*click','aratio','url*ad','ancurl*http+www','alt*click','alt*here+for','ancurl*ad','url*afn.org','alt*to','url*images+home','url*doubleclick.net']
datapd=df[['width','url*ads','height','ancurl*click','aratio','url*ad','ancurl*http+www','alt*here+for','ancurl*ad','url*afn.org','alt*to','url*images+home','output_nonad']]
# fig,ax=plt.subplots(nrows=1,ncols=3)
# fig.set_figheight(5)
# fig.set_figwidth(25)
# sns.distplot(datapd['aratio'],ax=ax[0])
# sns.distplot(datapd['height'],ax=ax[1])
# sns.distplot(datapd['width'],ax=ax[2])
columns_for_visualization = list()
columns_for_visualization=cols_X
corr=datapd[columns_for_visualization].corr()
fig, ax = plt.subplots(figsize=(15,15))
sns.heatmap(corr,annot=True,linewidths=.5, ax=ax)
da=datapd.values
X=da[:,:-1]
y=da[:,-1]
xtrain,xtest,ytrain,ytest=train_test_split(X,y,test_size=0.30,random_state=80)
def fit_models(classifiers,xtrain,ytrain):
"""This function fit multiple models by sklearn and return the dictionary with values as objects of models"""
models=collections.OrderedDict()
for constructor in classifiers:
obj=constructor()
obj.fit(xtrain,ytrain)
models[str(constructor).split(':')[0]]=obj
return models
def classification_multi_report(ytest,models_array):
"""This function generate classification accuracy report for given input model objects"""
for i in models_array:
print('__________________________________________________')
print('the model - '+str(i))
print(classification_report(ytest,models_array[i].predict(xtest)))
def cross_Fucntion(models,cv):
"""This function return cross validated accuray and the variance of given input model obejects"""
accuracy={}
for model in models:
cross_val_array=cross_val_score(models[model],xtrain,ytrain,scoring='accuracy',cv=cv)
accuracy[model]=[np.mean(cross_val_array),np.std(cross_val_array)]
return accuracy
def multi_grid_search(param_grid_array,estimator_list,x,y):
"""This function calculate the grid search parameters and accuracy for given input modles and return dictionary with each tupple containing accuracy and best parameters"""
d={}
count=0
for i in estimator_list:
gc=GridSearchCV(estimator=estimator_list[i],param_grid=param_grid_array[count],scoring ='accuracy',cv=5).fit(x,y)
d[i]=(gc.best_params_,gc.best_score_)
count+=1
return d
###Output
_____no_output_____
###Markdown
Classification Report>The classification report visualizer displays the precision, recall, F1, and support scores for the model. ... Visual classification reports are used to compare classification models to select models that are “redder”, e.g. have stronger classification metrics or that are more balanced.
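For reference, the per-class scores in the report are the standard definitions (not spelled out in the original notebook): $$\text{precision} = \frac{TP}{TP+FP}, \qquad \text{recall} = \frac{TP}{TP+FN}, \qquad F_1 = \frac{2\cdot\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}}$$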
###Code
classifiers=[ SVC,KNeighborsClassifier, RandomForestClassifier, GradientBoostingClassifier,LogisticRegression,XGBClassifier,DecisionTreeClassifier]
# classifiers=[DecisionTreeClassifier]
model_list=fit_models(classifiers,xtrain,ytrain)
classification_multi_report(ytest,model_list)
obj=cross_Fucntion(model_list,cv=20)
for model in obj:
print('the model -'+str(model)+'has \n || crosss validated accuracy as -> '+str(obj[model][0])+' | variance - '+str(obj[model][1])+' ||' )
print('______________________________________________________________________________________________________________')
param_grid_svm=[
{
'kernel':['linear'],'random_state':[0]
},
{
'kernel':['rbf'],'random_state':[0]
},
{
'kernel':['poly'],'degree':[1,2,3,4],'random_state':[0]
}
]
param_grid_knn=[
{
'n_neighbors':[1,5,11,15,20,25],
'p':[2]
}
]
param_grid_nb=[
{}
]
#RandomF SVC,KNeighborsClassifier, RandomForestClassifier, GradientBoostingClassifier,LogisticRegressio,XGB
param_grid_rf=[
{
'n_estimators': np.arange(100, 1000, 100),
# 'max_depth' : np.arange(5, 100,5)
}
]
param_grid_xgb=[
{
'n_estimators': np.arange(100,500, 100)
}
]
param_grid_lr=[
{}
]
param_grid_xgbx=[
{
'n_estimators': np.arange(100,500, 100)
}
]
param_grid_array=[param_grid_svm,param_grid_knn,param_grid_rf,param_grid_xgb,param_grid_lr,param_grid_xgbx]
multi_grid_search(param_grid_array,model_list,xtrain,ytrain)
###Output
_____no_output_____
###Markdown
Fitting model with best hyperparmater and cv score
###Code
classifier=RandomForestClassifier(n_estimators=800)
classifier.fit(xtrain,ytrain)
sns.heatmap(pd.crosstab(ytest,classifier.predict(xtest)),cmap='coolwarm')
plt.xlabel('predicted')
plt.ylabel('actual')
plt.show()
#custom NN
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import RMSprop,Adam
print(X.shape)
X_train1, X_test, y_train1, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X_train1, y_train1, test_size=0.125, random_state=1)
model=Sequential()
model.add(Dense(25,activation='relu',input_shape=(15,),kernel_initializer='he_normal'))
model.add(Dense(20,activation='sigmoid'))
# model.add(Dense(20,activation='sigmoid'))
# model.add(Dropout(0.2))
# model.add(Dense(20,activation='sigmoid'))
# model.add(Dropout(0.2))
# model.add(Dense(15,activation='sigmoid'))
model.add(Dense(5,activation='sigmoid'))
# model.add(Dropout(0.2))
# model.add(Dropout(0.2))
# model.add(Dropout(0.2))
# model.add(Dense(90,activation='relu'))
# model.add(Dropout(0.2))
# model.add(Dense(60,activation='relu'))
# model.add(Dropout(0.2))
# model.add(Dense(30,activation='relu'))
# model.add(Dropout(0.2))
model.add(Dense(1,activation='sigmoid'))
# model.add(DropOut(0.2))
model.summary()
print(X_train.shape)
model.compile(loss='binary_crossentropy',
optimizer=Adam(),
metrics=['accuracy','AUC'])
class_w={0.0:8,1.0:1}
history = model.fit(X_train1, y_train1,class_weight=class_w,
batch_size=32,
epochs=100,
verbose=1,
validation_data=(X_val, y_val))
score = model.evaluate(X_test,y_test, verbose=0)
print(score)
print('Test accuracy:', score[1])
model.save('model_weights_Skewed_2.h5')
%cd /content/drive/MyDrive
import numpy as np
import tensorflow as tf
import tensorflow.keras.backend as K
from tensorflow import keras
from tensorflow.keras import layers
import pandas as pd
from sklearn.model_selection import train_test_split
synth_df=pd.read_csv('complete_data.csv')
X=synth_df.values[:,:-1]
y=synth_df.values[:,-1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.20, random_state=42)
geo_X_train=X_train[:,0:4]
geo_X_test=X_test[:,0:4]
url_X_train=X_train[:,4:461]
url_X_test=X_test[:,4:461]
origurl_X_train=X_train[:,461:956]
origurl_X_test=X_test[:,461:956]
ancurl_X_train=X_train[:,956:1428]
ancurl_X_test=X_test[:,956:1428]
alt_X_train=X_train[:,1428:1539]
alt_X_test=X_test[:,1428:1539]
cap_X_train=X_train[:,1539:1558]
cap_X_test=X_test[:,1539:1558]
geo_input=keras.Input(shape=(4,),name='geo')
# geo_x=layers.Dense(16,activation='sigmoid')(geo_input)
geo_x=layers.Dense(8,activation='sigmoid')(geo_input)
geo_output=layers.Dense(1,activation='sigmoid')(geo_x)
url_input=keras.Input(shape=(457,),name='url')
# url_x=layers.Dense(256,activation='sigmoid')(url_input)
url_x=layers.Dense(20,activation='sigmoid')(url_input)
url_output=layers.Dense(1,activation='sigmoid')(url_x)
origurl_input=keras.Input(shape=(495,),name='origurl')
# origurl_x=layers.Dense(256,activation='sigmoid')(origurl_input)
origurl_x=layers.Dense(20,activation='sigmoid')(origurl_input)
origurl_output=layers.Dense(1,activation='sigmoid')(origurl_x)
ancurl_input=keras.Input(shape=(472,),name='ancurl')
# ancurl_x=layers.Dense(256,activation='sigmoid')(ancurl_input)
ancurl_x=layers.Dense(20,activation='sigmoid')(ancurl_input)
ancurl_output=layers.Dense(1,activation='sigmoid')(ancurl_x)
alt_input=keras.Input(shape=(111,),name='alt')
# alt_x=layers.Dense(64,activation='sigmoid')(alt_input)
alt_x=layers.Dense(20,activation='sigmoid')(alt_input)
alt_output=layers.Dense(1,activation='sigmoid')(alt_x)
cap_input=keras.Input(shape=(19,),name='cap')
# cap_x=layers.Dense(16,activation='sigmoid')(cap_input)
cap_x=layers.Dense(8,activation='sigmoid')(cap_input)
cap_output=layers.Dense(1,activation='sigmoid')(cap_x)
xf1=layers.concatenate([geo_output,url_output,origurl_output,ancurl_output,alt_output,cap_output])
# xf1=layers.Dense(4,activation='sigmoid')(xf1)
result=layers.Dense(1, name="ad/nonad")(xf1)
model=keras.Model(inputs=[geo_input,url_input,origurl_input,ancurl_input,alt_input,cap_input],outputs=[result])
keras.utils.plot_model(model, "multiModal_skewed_3.png", show_shapes=True)
model.summary()
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[
keras.losses.BinaryCrossentropy(from_logits=True),
# keras.losses.CategoricalCrossentropy(from_logits=True),
],
metrics=['accuracy'],
)
s='geo_X_test'
print(s.replace("geo","geo"),",")
print(s.replace("geo","url"),",",)
print(s.replace("geo","origurl"),",",)
print(s.replace("geo","ancurl"),",",)
print(s.replace("geo","alt"),",",)
print(s.replace("geo","cap"),"",)
model.fit(
{"geo": geo_X_train, "url": url_X_train ,
"origurl": origurl_X_train ,
"ancurl": ancurl_X_train ,
"alt": alt_X_train ,
"cap": cap_X_train },
{"ad/nonad": y_train},
epochs=200,
batch_size=64,
validation_split=0.125,
verbose=1,
)
test_score= model.predict([geo_X_test ,
url_X_test ,
origurl_X_test ,
ancurl_X_test ,
alt_X_test ,
cap_X_test ])
def custom_f1(y_true, y_pred):
    # confusion-matrix counts (expects y_true and y_pred as (n, 1) arrays of 0/1)
    tn = 0
    tp = 0
    fn = 0
    fp = 0
    for i in range(len(y_true)):
        # print(y_true[i][0],)
        if y_true[i][0] == 0 and y_pred[i][0] == 0:
            tn += 1
        if y_true[i][0] == 0 and y_pred[i][0] == 1:
            fp += 1
        if y_true[i][0] == 1 and y_pred[i][0] == 0:
            fn += 1
        if y_true[i][0] == 1 and y_pred[i][0] == 1:
            tp += 1
    print(tn, tp, fn, fp)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    print(recall)
    print(precision)
    print((tp + tn) / (tp + tn + fp + fn))  # accuracy
    return 2 * ((precision * recall) / (precision + recall))
print(custom_f1(y_test,test_score))
for i in range(len(test_score)):
if test_score[i]>=0:
test_score[i]=1
else:
test_score[i]=0
model.save('multimodal_skewed_6grps.h5')
y_test=np.reshape(y_test,(656,1))
%cd /content/drive/MyDrive
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import pandas as pd
from sklearn.model_selection import train_test_split
synth_df=pd.read_csv('complete_data.csv')
X=synth_df.values[:,:-1]
y=synth_df.values[:,-1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.20, random_state=42)
geo_X_train=X_train[:,0:4]
geo_X_test=X_test[:,0:4]
url_X_train=X_train[:,4:1428]
url_X_test=X_test[:,4:1428]
alt_X_train=X_train[:,1428:1558]
alt_X_test=X_test[:,1428:1558]
geo_input=keras.Input(shape=(4,),name='geo')
# geo_x=layers.Dense(16,activation='sigmoid')(geo_input)
geo_x=layers.Dense(8,activation='sigmoid')(geo_input)
geo_output=layers.Dense(1,activation='sigmoid')(geo_x)
url_input=keras.Input(shape=(1424,),name='url')
url_x=layers.Dense(320,activation='sigmoid')(url_input)
url_x=layers.Dense(64,activation='sigmoid')(url_x)
url_output=layers.Dense(1,activation='sigmoid')(url_x)
alt_input=keras.Input(shape=(130,),name='alt')
alt_x=layers.Dense(64,activation='sigmoid')(alt_input)
alt_x=layers.Dense(20,activation='sigmoid')(alt_x)
alt_output=layers.Dense(1,activation='sigmoid')(alt_x)
xf1=layers.concatenate([geo_output,url_output,alt_output])
xf1=layers.Dense(2,activation='sigmoid')(xf1)
result=layers.Dense(1, name="ad/nonad")(xf1)
model=keras.Model(inputs=[geo_input,url_input,alt_input],outputs=[result])
keras.utils.plot_model(model, "multi_modal_skewed_2.png", show_shapes=True)
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[
keras.losses.BinaryCrossentropy(from_logits=True),
# keras.losses.CategoricalCrossentropy(from_logits=True),
],
metrics=['accuracy'],
)
model.fit(
{"geo": geo_X_train, "url": url_X_train ,
"alt": alt_X_train ,
},
{"ad/nonad": y_train},
epochs=400,
batch_size=64,
validation_split=0.125,
verbose=2,
)
test_score= model.evaluate([geo_X_test ,
url_X_test ,
alt_X_test ],y_test,verbose=2)
y_pred= model.predict([geo_X_test ,
url_X_test ,
alt_X_test ])
for i in range(len(y_pred)):
if y_pred[i][0]>=0:
y_pred[i]=1
else:
y_pred[i]=0
def custom_f1(y_true, y_pred):
    # confusion-matrix counts (expects thresholded 0/1 predictions)
    tn = 0
    tp = 0
    fn = 0
    fp = 0
    for i in range(len(y_true)):
        # print(y_pred[i],)
        if y_true[i] == 0 and y_pred[i] == 0:
            tn += 1
        if y_true[i] == 0 and y_pred[i] == 1:
            fp += 1
        if y_true[i] == 1 and y_pred[i] == 0:
            fn += 1
        if y_true[i] == 1 and y_pred[i] == 1:
            tp += 1
    print(tn, tp, fn, fp)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    print(recall)
    print(precision)
    print((tp + tn) / (tp + tn + fp + fn))  # accuracy
    return 2 * ((precision * recall) / (precision + recall))
print(custom_f1(y_test,y_pred))
model.save('skewed_multi_2.h5')
###Output
_____no_output_____ |
Data_Science_Projects/LinearRegression/Linear Regression Project.ipynb | ###Markdown
Linear Regression ProjectThe company is trying to decide whether to focus their efforts on their mobile app experience or their website. Imports** Importing pandas, numpy, matplotlib,and seaborn. Then setting %matplotlib inline.**
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Get the DataI'll work with the Ecommerce Customers csv file from the company. It has Customer info, such as Email, Address, and their color Avatar. Then it also has numerical value columns:* Avg. Session Length: Average session of in-store style advice sessions.* Time on App: Average time spent on App in minutes* Time on Website: Average time spent on Website in minutes* Length of Membership: How many years the customer has been a member. ** Read in the Ecommerce Customers csv file as a DataFrame called customers.**
###Code
customers = pd.read_csv('Ecommerce Customers')
###Output
_____no_output_____
###Markdown
**Checking the head of customers, and checking out its info() and describe() methods.**
###Code
customers.head()
customers.describe()
customers.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 500 entries, 0 to 499
Data columns (total 8 columns):
Email 500 non-null object
Address 500 non-null object
Avatar 500 non-null object
Avg. Session Length 500 non-null float64
Time on App 500 non-null float64
Time on Website 500 non-null float64
Length of Membership 500 non-null float64
Yearly Amount Spent 500 non-null float64
dtypes: float64(5), object(3)
memory usage: 31.3+ KB
###Markdown
Exploratory Data Analysis**Let's explore the data!**For the rest of the exercise I'll only be using the numerical data of the csv file.___**Using seaborn to create a jointplot to compare the Time on Website and Yearly Amount Spent columns.**
###Code
sns.set_palette("GnBu_d")
sns.set_style('whitegrid')
sns.jointplot(x = 'Time on Website', y = 'Yearly Amount Spent', data = customers)
###Output
_____no_output_____
###Markdown
**Time on App column instead**
###Code
sns.jointplot(x = 'Time on App', y = 'Yearly Amount Spent', data = customers)
###Output
_____no_output_____
###Markdown
** Using jointplot to create a 2D hex bin plot comparing Time on App and Length of Membership.**
###Code
sns.jointplot(x = 'Time on App', y = 'Length of Membership', data = customers, kind = 'hex')
###Output
_____no_output_____
###Markdown
**Let's explore these types of relationships across the entire data set.**
###Code
sns.pairplot(data = customers)
###Output
_____no_output_____
###Markdown
**Based off this plot, what looks to be the most correlated feature with Yearly Amount Spent?**
###Code
customers.corr()
###Output
_____no_output_____
###Markdown
**Creating a linear model plot (using seaborn's lmplot) of Yearly Amount Spent vs. Length of Membership.**
###Code
sns.lmplot(x = 'Length of Membership', y = 'Yearly Amount Spent', data = customers)
###Output
_____no_output_____
###Markdown
Training and Testing DataNow that I've explored the data a bit, let's go ahead and split the data into training and testing sets.** Setting a variable X equal to the numerical features of the customers and a variable y equal to the "Yearly Amount Spent" column. **
###Code
customers.columns
X = customers[['Avg. Session Length', 'Time on App', 'Time on Website', 'Length of Membership']]
y = customers['Yearly Amount Spent']
###Output
_____no_output_____
###Markdown
** Using model_selection.train_test_split from sklearn to split the data into training and testing sets. Setting test_size=0.3 and random_state=101**
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 101)
###Output
_____no_output_____
###Markdown
Training the ModelNow its time to train the model on the training data!** Importing LinearRegression from sklearn.linear_model **
###Code
from sklearn.linear_model import LinearRegression
###Output
_____no_output_____
###Markdown
**Creating an instance of a LinearRegression() model named lm.**
###Code
lm = LinearRegression()
###Output
_____no_output_____
###Markdown
** Training/fitting lm on the training data.**
###Code
lm.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
**Printing out the coefficients of the model**
###Code
lm.coef_
###Output
_____no_output_____
###Markdown
Predicting Test DataNow that I have fit the model, let's evaluate its performance by predicting off the test values!** Using lm.predict() to predict off the X_test set of the data.**
###Code
prediction = lm.predict(X_test)
###Output
_____no_output_____
###Markdown
** Creating a scatterplot of the real test values versus the predicted values. **
###Code
plt.scatter(y_test, prediction)
plt.xlabel('Y Test (True Values)')
plt.ylabel('Predicted Values')
###Output
_____no_output_____
###Markdown
Evaluating the ModelLet's evaluate the model performance by calculating the residual sum of squares and the explained variance score (R^2).** Calculating the Mean Absolute Error, Mean Squared Error, and the Root Mean Squared Error. Refer to the lecture or to Wikipedia for the formulas**
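For convenience, the three metrics computed below are (standard definitions, with $y_i$ the true values, $\hat{y}_i$ the predictions and $n$ the number of test samples): $$\text{MAE} = \frac{1}{n}\sum_{i=1}^{n}|y_i-\hat{y}_i|, \qquad \text{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i-\hat{y}_i)^2, \qquad \text{RMSE} = \sqrt{\text{MSE}}$$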
###Code
from sklearn import metrics
print('MAE:', metrics.mean_absolute_error(y_test, prediction))
print('MSE:', metrics.mean_squared_error(y_test, prediction))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, prediction)))
metrics.explained_variance_score(y_test, prediction)
###Output
_____no_output_____
###Markdown
ResidualsExploring the residuals to make sure everything was okay with the data. **Ploting a histogram of the residuals and make sure it looks normally distributed. Using seaborn distplot.**
###Code
sns.distplot(y_test - prediction, bins = 50)
###Output
_____no_output_____
###Markdown
Interpreting the coefficients.** Recreating the dataframe below. **
###Code
cdf = pd.DataFrame(lm.coef_, index = X.columns, columns = ['Coeff'])
cdf
###Output
_____no_output_____ |
PySpark_Excercise.ipynb | ###Markdown
1. Setup
###Code
!pip install -r requirements.txt
###Output
Requirement already satisfied: absl-py==0.9.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 1)) (0.9.0)
Requirement already satisfied: appdirs==1.4.4 in c:\users\anany\appdata\roaming\python\python37\site-packages (from -r requirements.txt (line 2)) (1.4.4)
Requirement already satisfied: astor==0.8.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 3)) (0.8.0)
Requirement already satisfied: backcall==0.2.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 4)) (0.2.0)
Requirement already satisfied: beautifulsoup4==4.9.1 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 5)) (4.9.1)
Requirement already satisfied: blinker==1.4 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 6)) (1.4)
Requirement already satisfied: brotlipy==0.7.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 7)) (0.7.0)
Requirement already satisfied: bs4==0.0.1 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 8)) (0.0.1)
Requirement already satisfied: cachetools@ file:///tmp/build/80754af9/cachetools_1596822027882/work from file:///tmp/build/80754af9/cachetools_1596822027882/work in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 9)) (4.1.1)
Requirement already satisfied: certifi==2020.6.20 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 10)) (2020.6.20)
Requirement already satisfied: cffi==1.14.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 11)) (1.14.0)
Requirement already satisfied: chardet==3.0.4 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 12)) (3.0.4)
Requirement already satisfied: click==7.1.2 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 13)) (7.1.2)
Requirement already satisfied: colorama==0.4.3 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 14)) (0.4.3)
Requirement already satisfied: cryptography==2.9.2 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 15)) (2.9.2)
Requirement already satisfied: cycler==0.10.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 16)) (0.10.0)
Requirement already satisfied: decorator==4.4.2 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 17)) (4.4.2)
Requirement already satisfied: distlib==0.3.0 in c:\users\anany\appdata\roaming\python\python37\site-packages (from -r requirements.txt (line 18)) (0.3.0)
Requirement already satisfied: filelock==3.0.12 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 19)) (3.0.12)
Requirement already satisfied: gast==0.2.2 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 20)) (0.2.2)
Requirement already satisfied: google-auth@ file:///tmp/build/80754af9/google-auth_1596863485713/work from file:///tmp/build/80754af9/google-auth_1596863485713/work in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 21)) (1.20.1)
Requirement already satisfied: google-auth-oauthlib==0.4.1 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 22)) (0.4.1)
Requirement already satisfied: google-pasta==0.2.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 23)) (0.2.0)
Requirement already satisfied: grpcio==1.27.2 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 24)) (1.27.2)
Requirement already satisfied: h5py==2.10.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 25)) (2.10.0)
Requirement already satisfied: idna@ file:///tmp/build/80754af9/idna_1593446292537/work from file:///tmp/build/80754af9/idna_1593446292537/work in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 26)) (2.10)
Requirement already satisfied: importlib-metadata==1.7.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 27)) (1.7.0)
Requirement already satisfied: ipython@ file:///C:/ci/ipython_1596868620883/work from file:///C:/ci/ipython_1596868620883/work in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 28)) (7.17.0)
Requirement already satisfied: ipython-genutils==0.2.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 29)) (0.2.0)
Requirement already satisfied: jedi@ file:///C:/ci/jedi_1596472767194/work from file:///C:/ci/jedi_1596472767194/work in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 30)) (0.17.2)
Requirement already satisfied: joblib@ file:///tmp/build/80754af9/joblib_1594236160679/work from file:///tmp/build/80754af9/joblib_1594236160679/work in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 31)) (0.16.0)
Requirement already satisfied: Keras-Applications@ file:///tmp/build/80754af9/keras-applications_1594366238411/work from file:///tmp/build/80754af9/keras-applications_1594366238411/work in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 32)) (1.0.8)
Requirement already satisfied: Keras-Preprocessing==1.1.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 33)) (1.1.0)
Requirement already satisfied: kiwisolver==1.2.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 34)) (1.2.0)
Requirement already satisfied: Markdown==3.1.1 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 35)) (3.1.1)
Requirement already satisfied: matplotlib@ file:///C:/ci/matplotlib-base_1592846084747/work from file:///C:/ci/matplotlib-base_1592846084747/work in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 36)) (3.2.2)
Requirement already satisfied: mkl-fft==1.0.14 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 37)) (1.0.14)
Requirement already satisfied: mkl-random==1.0.4 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 38)) (1.0.4)
Requirement already satisfied: mkl-service==2.3.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 39)) (2.3.0)
Requirement already satisfied: numpy==1.17.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 40)) (1.17.0)
Requirement already satisfied: oauthlib==3.1.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 41)) (3.1.0)
Requirement already satisfied: opt-einsum==3.1.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 42)) (3.1.0)
Requirement already satisfied: pandas@ file:///C:/ci/pandas_1596824386248/work from file:///C:/ci/pandas_1596824386248/work in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 43)) (1.1.0)
Requirement already satisfied: parso==0.7.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 44)) (0.7.0)
Requirement already satisfied: pickleshare@ file:///C:/ci/pickleshare_1594374056827/work from file:///C:/ci/pickleshare_1594374056827/work in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 45)) (0.7.5)
Requirement already satisfied: prompt-toolkit==3.0.5 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 46)) (3.0.5)
Requirement already satisfied: protobuf==3.11.2 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 47)) (3.11.2)
Requirement already satisfied: py4j@ file:///tmp/build/80754af9/py4j_1593437928427/work from file:///tmp/build/80754af9/py4j_1593437928427/work in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 48)) (0.10.9)
Requirement already satisfied: pyarrow==0.15.1 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 49)) (0.15.1)
Requirement already satisfied: pyasn1==0.4.8 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 50)) (0.4.8)
Requirement already satisfied: pyasn1-modules==0.2.7 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 51)) (0.2.7)
Requirement already satisfied: pycparser@ file:///tmp/build/80754af9/pycparser_1594388511720/work from file:///tmp/build/80754af9/pycparser_1594388511720/work in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 52)) (2.20)
Requirement already satisfied: Pygments==2.6.1 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 53)) (2.6.1)
Requirement already satisfied: PyJWT==1.7.1 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 54)) (1.7.1)
Requirement already satisfied: pyOpenSSL@ file:///tmp/build/80754af9/pyopenssl_1594392929924/work from file:///tmp/build/80754af9/pyopenssl_1594392929924/work in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 55)) (19.1.0)
Requirement already satisfied: pyparsing==2.4.7 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 56)) (2.4.7)
Requirement already satisfied: pyreadline==2.1 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 57)) (2.1)
Requirement already satisfied: PySocks@ file:///C:/ci/pysocks_1594394709107/work from file:///C:/ci/pysocks_1594394709107/work in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 58)) (1.7.1)
Requirement already satisfied: pyspark@ file:///tmp/build/80754af9/pyspark_1593438048599/work from file:///tmp/build/80754af9/pyspark_1593438048599/work in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 59)) (3.0.0)
Requirement already satisfied: python-dateutil==2.8.1 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 60)) (2.8.1)
Requirement already satisfied: pytz==2020.1 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 61)) (2020.1)
Requirement already satisfied: requests@ file:///tmp/build/80754af9/requests_1592841827918/work from file:///tmp/build/80754af9/requests_1592841827918/work in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 62)) (2.24.0)
Requirement already satisfied: requests-oauthlib==1.3.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 63)) (1.3.0)
Requirement already satisfied: rsa@ file:///tmp/build/80754af9/rsa_1596998415516/work from file:///tmp/build/80754af9/rsa_1596998415516/work in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 64)) (4.6)
Requirement already satisfied: scikit-learn==0.23.2 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 65)) (0.23.2)
Requirement already satisfied: scipy==1.5.2 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 66)) (1.5.2)
Requirement already satisfied: six==1.15.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 67)) (1.15.0)
Requirement already satisfied: soupsieve==2.0.1 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 68)) (2.0.1)
Requirement already satisfied: tensorboard==2.2.1 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 69)) (2.2.1)
Requirement already satisfied: tensorboard-plugin-wit==1.6.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 70)) (1.6.0)
Requirement already satisfied: tensorflow==2.1.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 71)) (2.1.0)
Requirement already satisfied: tensorflow-estimator==2.1.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 72)) (2.1.0)
Requirement already satisfied: termcolor==1.1.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 73)) (1.1.0)
Requirement already satisfied: threadpoolctl@ file:///tmp/tmp9twdgx9k/threadpoolctl-2.1.0-py3-none-any.whl from file:///tmp/tmp9twdgx9k/threadpoolctl-2.1.0-py3-none-any.whl in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 74)) (2.1.0)
Requirement already satisfied: tornado==6.0.4 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 75)) (6.0.4)
Requirement already satisfied: traitlets==4.3.3 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 76)) (4.3.3)
Requirement already satisfied: urllib3@ file:///tmp/build/80754af9/urllib3_1597086586889/work from file:///tmp/build/80754af9/urllib3_1597086586889/work in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 77)) (1.25.10)
Requirement already satisfied: virtualenv==20.0.20 in c:\users\anany\appdata\roaming\python\python37\site-packages (from -r requirements.txt (line 78)) (20.0.20)
Requirement already satisfied: wcwidth@ file:///tmp/build/80754af9/wcwidth_1593447189090/work from file:///tmp/build/80754af9/wcwidth_1593447189090/work in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 79)) (0.2.5)
Requirement already satisfied: Werkzeug==0.16.1 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 80)) (0.16.1)
Requirement already satisfied: win-inet-pton==1.1.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 81)) (1.1.0)
Requirement already satisfied: wincertstore==0.2 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 82)) (0.2)
Requirement already satisfied: wrapt==1.12.1 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 83)) (1.12.1)
Requirement already satisfied: xgboost==1.1.1 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 84)) (1.1.1)
Requirement already satisfied: zipp==3.1.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from -r requirements.txt (line 85)) (3.1.0)
Requirement already satisfied: setuptools>=40.3.0 in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from google-auth@ file:///tmp/build/80754af9/google-auth_1596863485713/work->-r requirements.txt (line 21)) (49.3.1.post20200810)
Requirement already satisfied: wheel>=0.26; python_version >= "3" in c:\programdata\anaconda3\envs\zeta_pyspark\lib\site-packages (from tensorboard==2.2.1->-r requirements.txt (line 69)) (0.34.2)
###Markdown
1.1 Imports
###Code
# my general imports
import os
import wget
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import display
import time
# url data reader imports
from bs4 import BeautifulSoup
import requests
# pyspark imports
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext, SparkSession
from pyspark.sql.types import StructType, StructField, DoubleType, IntegerType, StringType
from pyspark.sql import SparkSession
from pyspark.ml.feature import StringIndexer, OneHotEncoder, MinMaxScaler, VectorAssembler
from pyspark.ml.classification import GBTClassifier
from pyspark.ml import Pipeline
from pyspark.ml.evaluation import BinaryClassificationEvaluator, MulticlassClassificationEvaluator
%matplotlib inline
PROJECT_ROOT_DIR = "."
PROJECT = "Income"
DATA_PATH = os.path.join(PROJECT_ROOT_DIR, "data", PROJECT)
os.makedirs(DATA_PATH, exist_ok=True)
OUT_PATH = os.path.join(PROJECT_ROOT_DIR, "output", PROJECT)
os.makedirs(OUT_PATH, exist_ok=True)
t0 = time.time()
###Output
_____no_output_____
###Markdown
Q1. PySpark Assessment 1.1 Setup a local environment to run PySpark Create Spark session
###Code
sc = SparkContext.getOrCreate(SparkConf().setMaster("local[*]"))
spark = SparkSession \
.builder \
.getOrCreate()
###Output
_____no_output_____
###Markdown
1.2 Downloading data
###Code
url = "http://archive.ics.uci.edu/ml/machine-learning-databases/census-income-mld/"
ext = 'gz'
def list_files(url, ext=''):
page = requests.get(url).text
# print(page)
soup = BeautifulSoup(page, 'html.parser')
return [url + node.get('href') for node in soup.find_all('a') if node.get('href').endswith(ext)]
for fileurl in list_files(url, ext):
# print(fileurl)
filename = fileurl.split('/')[-1]
# print(filename)
wget.download(fileurl, out=os.path.join(DATA_PATH, filename))
os.listdir('data')
###Output
_____no_output_____
###Markdown
1.2.1 Read the data
###Code
df = (
spark.read.format("csv").\
options(header='false', inferSchema='True').\
load(os.path.join(DATA_PATH, 'census-income.data.gz'))
)
df.show(1)
###Output
+---+----------------+---+---+--------------------+---+----------------+--------+--------------------+----------------+------+----------+-------+----------------+----------------+-------------------+----+----+----+---------+----------------+----------------+--------------------+--------------------+-------+----+----+----+--------------------+----+----+----------------+--------------+--------------+--------------+--------------------+----+----------------+----+----+----+---------+
|_c0| _c1|_c2|_c3| _c4|_c5| _c6| _c7| _c8| _c9| _c10| _c11| _c12| _c13| _c14| _c15|_c16|_c17|_c18| _c19| _c20| _c21| _c22| _c23| _c24|_c25|_c26|_c27| _c28|_c29|_c30| _c31| _c32| _c33| _c34| _c35|_c36| _c37|_c38|_c39|_c40| _c41|
+---+----------------+---+---+--------------------+---+----------------+--------+--------------------+----------------+------+----------+-------+----------------+----------------+-------------------+----+----+----+---------+----------------+----------------+--------------------+--------------------+-------+----+----+----+--------------------+----+----+----------------+--------------+--------------+--------------+--------------------+----+----------------+----+----+----+---------+
| 73| Not in universe|0.0|0.0| High school grad...|0.0| Not in universe| Widowed| Not in universe ...| Not in universe| White| All other| Female| Not in universe| Not in universe| Not in labor force| 0.0| 0.0| 0.0| Nonfiler| Not in universe| Not in universe| Other Rel 18+ ev...| Other relative o...|1700.09| ?| ?| ?| Not in universe ...| ?| 0.0| Not in universe| United-States| United-States| United-States| Native- Born in ...| 0.0| Not in universe| 2.0| 0.0|95.0| - 50000.|
+---+----------------+---+---+--------------------+---+----------------+--------+--------------------+----------------+------+----------+-------+----------------+----------------+-------------------+----+----+----+---------+----------------+----------------+--------------------+--------------------+-------+----+----+----+--------------------+----+----+----------------+--------------+--------------+--------------+--------------------+----+----------------+----+----+----+---------+
only showing top 1 row
###Markdown
1.2.2. Make correct column names I could not find where to get the column names easily... The `census-income.names` file reports that there should be 40 columns (0-39) and we have 42 (0-41). The first column is clearly the age. The last column in the dataset is a dummy variable for income below or above 50 000. The second-to-last column is clearly the year. Therefore, there must be an extra column among `_c1`--`_c39` that is not listed in the `census-income.names` file... I was able to identify that variable by comparing the variable descriptions and the column values in the cell output above. **This might not be possible if you have hundreds or thousands of features.** The 'extra' column is `_c24`, which is continuous. I suspect it is an income variable. I made a dictionary below for the correct column names.
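A quick programmatic cross-check is also possible - a sketch (with `raw_df` just an illustrative name) that re-reads the raw file and prints the inferred type and distinct count of each unnamed column, so the unexpected continuous column stands out:

```python
# sketch: inspect inferred dtypes and cardinalities of the raw _cN columns
raw_df = spark.read.csv(os.path.join(DATA_PATH, 'census-income.data.gz'),
                        header=False, inferSchema=True)
for name, dtype in raw_df.dtypes:
    print(name, dtype, raw_df.select(name).distinct().count())
```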
###Code
correct_columns = {
"_c0": "age",
"_c1": "class_of_worker",
"_c2": "detailed_industry_recode",
"_c3": "detailed_occupation_recode",
"_c4": "education",
"_c5": "wage_per_hour",
"_c6": "enroll_in_edu_inst_last_wk",
"_c7": "marital_stat",
"_c8": "major_industry_code",
"_c9": "major_occupation_code",
"_c10": "race",
"_c11": "hispanic_origin",
"_c12": "sex",
"_c13": "member_of_a_labor_union",
"_c14": "reason_for_unemployment",
"_c15": "full_or_part_time_employment_stat",
"_c16": "capital_gains",
"_c17": "capital_losses",
"_c18": "dividends_from_stocks",
"_c19": "tax_filer_stat",
"_c20": "region_of_previous_residence",
"_c21": "state_of_previous_residence",
"_c22": "detailed_household_and_family_stat",
"_c23": "detailed_household_summary_in_household",
"_c24": "UNKNOWN_VARIABLE",
"_c25": "migration_code-change_in_msa",
"_c26": "migration_code-change_in_reg",
"_c27": "migration_code-move_within_reg",
"_c28": "live_in_this_house_1_year_ago",
"_c29": "migration_prev_res_in_sunbelt",
"_c30": "num_persons_worked_for_employer",
"_c31": "family_members_under_18",
"_c32": "country_of_birth_father",
"_c33": "country_of_birth_mother",
"_c34": "country_of_birth_self",
"_c35": "citizenship",
"_c36": "own_business_or_self_employed",
"_c37": "fill_inc_questionnaire_for_veterans_admin",
"_c38": "veterans_benefits",
"_c39": "weeks_worked_in_year",
"_c40": "year",
"_c41": "income_binary",
}
my_cols = [val for val in correct_columns.values()]
df = (
spark.read.format("csv").\
options(header='false', inferSchema='True').\
load(os.path.join(DATA_PATH, 'census-income.data.gz')).\
toDF(*my_cols)
)
###Output
_____no_output_____
###Markdown
1.3 Print the data Schema, Summary, number of columns and number of rows Print schema
###Code
df.printSchema()
###Output
root
|-- age: integer (nullable = true)
|-- class_of_worker: string (nullable = true)
|-- detailed_industry_recode: double (nullable = true)
|-- detailed_occupation_recode: double (nullable = true)
|-- education: string (nullable = true)
|-- wage_per_hour: double (nullable = true)
|-- enroll_in_edu_inst_last_wk: string (nullable = true)
|-- marital_stat: string (nullable = true)
|-- major_industry_code: string (nullable = true)
|-- major_occupation_code: string (nullable = true)
|-- race: string (nullable = true)
|-- hispanic_origin: string (nullable = true)
|-- sex: string (nullable = true)
|-- member_of_a_labor_union: string (nullable = true)
|-- reason_for_unemployment: string (nullable = true)
|-- full_or_part_time_employment_stat: string (nullable = true)
|-- capital_gains: double (nullable = true)
|-- capital_losses: double (nullable = true)
|-- dividends_from_stocks: double (nullable = true)
|-- tax_filer_stat: string (nullable = true)
|-- region_of_previous_residence: string (nullable = true)
|-- state_of_previous_residence: string (nullable = true)
|-- detailed_household_and_family_stat: string (nullable = true)
|-- detailed_household_summary_in_household: string (nullable = true)
|-- UNKNOWN_VARIABLE: double (nullable = true)
|-- migration_code-change_in_msa: string (nullable = true)
|-- migration_code-change_in_reg: string (nullable = true)
|-- migration_code-move_within_reg: string (nullable = true)
|-- live_in_this_house_1_year_ago: string (nullable = true)
|-- migration_prev_res_in_sunbelt: string (nullable = true)
|-- num_persons_worked_for_employer: double (nullable = true)
|-- family_members_under_18: string (nullable = true)
|-- country_of_birth_father: string (nullable = true)
|-- country_of_birth_mother: string (nullable = true)
|-- country_of_birth_self: string (nullable = true)
|-- citizenship: string (nullable = true)
|-- own_business_or_self_employed: double (nullable = true)
|-- fill_inc_questionnaire_for_veterans_admin: string (nullable = true)
|-- veterans_benefits: double (nullable = true)
|-- weeks_worked_in_year: double (nullable = true)
|-- year: double (nullable = true)
|-- income_binary: string (nullable = true)
###Markdown
Print summary
###Code
df.summary()
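# note: .summary() returns a DataFrame; chaining .show() (e.g. df.summary().show()) would display the statistics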
###Output
_____no_output_____
###Markdown
Print number of rows
###Code
print(f"Number of rows: {df.count()}")
###Output
Number of rows: 199523
###Markdown
Print number of columns
###Code
print(f"Number of columns: {len(df.columns)}")
###Output
Number of columns: 42
###Markdown
1.4 Print a table that shows distinct values of all columns **WARNING:** *This task as it is written in the doc file seems rather strange as we have continuous variables.*
###Code
df.columns
tab_distinct = []
for c in df.columns:
tab_distinct.append([i[c] for i in df.select(c).distinct().collect()])
df_distinct = pd.DataFrame(tab_distinct).transpose().dropna(axis=0, how='all')
df_distinct.columns = df.columns
display(df_distinct)
df_distinct.iloc[99799]
###Output
_____no_output_____
###Markdown
Let's get rid of those continuous variables and reproduce our table.
###Code
df.dtypes
tab_distinct = []
selected_cols = []
for c_type in df.dtypes:
if c_type[1] == 'double':
continue
c = c_type[0]
selected_cols.append(c)
tab_distinct.append([i[c] for i in df.select(c).distinct().collect()])
df_distinct = pd.DataFrame(tab_distinct).transpose().dropna(axis=0, how='all')
df_distinct.columns = selected_cols
display(df_distinct)
df_distinct.iloc[90]
###Output
_____no_output_____
###Markdown
If we also drop integers, we can get a nice table.
###Code
tab_distinct = []
selected_cols = []
for c_type in df.dtypes:
if ('double' in c_type[1] or 'int' in c_type[1]):
continue
c = c_type[0]
selected_cols.append(c)
tab_distinct.append([i[c] for i in df.select(c).distinct().collect()])
df_distinct = pd.DataFrame(tab_distinct).transpose().dropna(axis=0, how='all')
df_distinct.columns = selected_cols
display(df_distinct)
df_distinct.iloc[50]
###Output
_____no_output_____
###Markdown
1.5 Perform exploratory data analysis and visualize the findings from the data I prefer using pandas.
###Code
pd_df = df.toPandas()
pd_df['income_binary'].unique()
pd_df.iloc[np.where(pd_df['income_binary']==' - 50000.')]['UNKNOWN_VARIABLE'].mean()
pd_df.iloc[np.where(pd_df['income_binary']==' 50000+.')]['UNKNOWN_VARIABLE'].mean()
###Output
_____no_output_____
###Markdown
Hm, `UNKNOWN_VARIABLE` does not seem to be an income variable... I was going to plot several graphs of income versus some typical characteristics from Labor Economics (e.g. the Mincer Equation), but it seems that the data does not have a continuous income variable. So, I decided to make several plots to see if the two groups (above 50k / below 50k) can be visually separated depending on age, wage, capital gains and losses, and dividends.
###Code
pd_df.iloc[0]
###Output
_____no_output_____
###Markdown
1.5.1 Continuous Variables
###Code
fig = plt.figure(figsize=(16,12))
ax = plt.subplot(2, 2, 1)
for label, pdf in pd_df.groupby(by='income_binary'):
pdf = pdf.iloc[np.where(pdf['wage_per_hour']!=0)]
plt.scatter(x=pdf['age'], y=pdf['wage_per_hour'], label=label)
plt.legend(loc=1)
plt.yscale('symlog')
plt.xlabel("Age, y")
plt.ylabel("Wage/hour")
ax = plt.subplot(2, 2, 2)
for label, pdf in pd_df.groupby(by='income_binary'):
# exclude observations with zeros in any of the variables
pdf = pdf.iloc[np.where(pdf['wage_per_hour'].mul(pdf['weeks_worked_in_year'])!=0)]
plt.scatter(x=pdf['weeks_worked_in_year'], y=pdf['wage_per_hour'], label=label)
plt.legend(loc=1)
plt.yscale('log')
plt.xlabel("Weeks worked in a year")
plt.ylabel("Wage/hour")
ax = plt.subplot(2, 2, 3)
for label, pdf in pd_df.groupby(by='income_binary'):
# exclude observations with zeros in any of the variables
pdf = pdf.iloc[np.where(pdf['capital_gains'].mul(pdf['dividends_from_stocks'], fill_value=0)!=0.)]
plt.scatter(x=pdf['capital_gains'], y=pdf['dividends_from_stocks'], label=label)
plt.legend(loc=1)
plt.yscale('symlog')
plt.xscale('symlog')
plt.xlabel("Capital gains")
plt.ylabel("Dividends from stocks")
ax = plt.subplot(2, 2, 4)
for label, pdf in pd_df.groupby(by='income_binary'):
# exclude observations with zeros in any of the variables
pdf = pdf.iloc[np.where(pdf['capital_losses'].mul(pdf['dividends_from_stocks'], fill_value=0)!=0.)]
plt.scatter(x=pdf['capital_losses'], y=pdf['dividends_from_stocks'], label=label)
plt.legend(loc=1)
plt.yscale('symlog')
plt.xscale('symlog')
plt.xlabel("Capital losses")
plt.ylabel("Dividends from stocks")
###Output
_____no_output_____
###Markdown
It seems that among the continuous variables, the following ones have the highest potential in explaining the above/below 50k classification. The first is total capital - as we can see from the two graphs at the bottom, * large capital gains, and * large capital losses, both seem to positively correlate with being classified 'above 50k'. Among the working population, the proportion of those classified 'above 50k' is highest among those who work full-time (52 weeks/year). 1.5.2 Discrete Variables I might check discrete variables later, after proper feature engineering (e.g., decoding string variables into categorical numeric ones). 1.6 Binary Target Variable Already exists: `income_binary` in the data we loaded (`df`). Later, I will transform it into a numerical binary variable. 1.7 Most Promising Features I already showed above that among continuous variables, the most promising are capital gains (and losses, too, as they both are good proxies for total capital) and working weeks (those who work 52 weeks/year have the highest share of those classified 'above 50k'). For other features, according to Labor Economics (Mincer Equation), the most influential features for working individuals are:* characteristics that determine human capital: - education, - age (inverted U-shape), - occupation, - employment status (full-time or partial employment, self-employed),* and the set of individual characteristics that affect personal tastes: - gender and ethnicity (due to various kinds of discrimination / selection effects), and - geographic location (state) and migration status. 1.8 When should the raw model dataset be randomly distributed to Train/Test? Clearly, the data set should be split into train/dev/test sets **before** any feature engineering (a minimal split sketch is shown at the end of this cell). This ensures we do not overfit on the test data because we selected features on data that already includes the test set. This way, when we deploy the model into production, our model performance indicators are as close as possible to the values we are going to see in production, because our model was trained (including the selection of the features) without using the test set. 1.9 Feature Engineering 1.9.1 Encode strings into numerical variables
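Sketch referenced in 1.8 above - a minimal upfront split of the raw data before any feature engineering (the fractions and seed are arbitrary, and `train_df`/`test_df` are just illustrative names):

```python
# sketch: hold out a test set before transforming/encoding anything
train_df, test_df = df.randomSplit([0.8, 0.2], seed=42)
```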
###Code
str_cols = [c_type[0] for c_type in df.dtypes if 'str' in c_type[1]]
str_cols
indexed = df  # note: this binds a reference to `df`, not an independent copy
# in case I want to use SQL queries for feature engineering without One-Hot encoded variables
indexed.createOrReplaceTempView("indexed")
cat_cols = []
for str_ in str_cols:
cat_ = 'catIndex_' + str_
indexer = StringIndexer(inputCol=str_, outputCol=cat_)
indexed = indexer.fit(indexed).transform(indexed)
cat_cols.append(cat_)
# indexed.show(1)
###Output
_____no_output_____
###Markdown
Let's make a list of indexers to later feed it into a pipeline
###Code
indexers = [StringIndexer(inputCol=str_, outputCol='catIndex_' + str_) for str_ in str_cols]
###Output
_____no_output_____
###Markdown
PySpark 3.0 allows you to transform multiple columns at a time.
###Code
cat_cols = ['catIndex_' + str_ for str_ in str_cols]
spark3_indexer = StringIndexer(inputCols=str_cols, outputCols=cat_cols)
###Output
_____no_output_____
###Markdown
1.9.2 One-Hot Encoding
###Code
vec_cols = ['catVec_' + str_ for str_ in str_cols]
encoder = OneHotEncoder(inputCols=cat_cols,
outputCols=vec_cols)
model = encoder.fit(indexed)
encoded = model.transform(indexed)
# encoded.show(1)
###Output
_____no_output_____
###Markdown
1.9.3 Assembling into Vectors
###Code
# keep numeric (non-string) columns and exclude the label-derived columns from the feature set
numeric_features = [col[0] for col in encoded.dtypes if 'string' not in col[1] and 'income_binary' not in col[0]]
assembler = VectorAssembler(
inputCols=numeric_features,
outputCol="features")
output = assembler.transform(encoded)
# output.show(1)
###Output
_____no_output_____
###Markdown
I want to use SQL queries for later analysis
###Code
output.createOrReplaceTempView("output")
spark.sql("SELECT catIndex_income_binary, count(catIndex_income_binary) from output group by catIndex_income_binary").show()
###Output
+----------------------+-----------------------------+
|catIndex_income_binary|count(catIndex_income_binary)|
+----------------------+-----------------------------+
| 0.0| 187141|
| 1.0| 12382|
+----------------------+-----------------------------+
###Markdown
1.9.4 Normalization I want to use `MinMaxScaler` for all variables (both continuous and dummy variables) to lie between 0 and 1
###Code
mmscaler = MinMaxScaler(inputCol="features", outputCol="scaled_features")
###Output
_____no_output_____
###Markdown
1.10 Choose a ML Algorithm I want to use Gradient-Boosted Tree Classifier. Main advantages:* high accuracy;* flexible;* works great with categorical and numerical values.
###Code
classifier = GBTClassifier(labelCol='catIndex_income_binary', featuresCol="scaled_features", )
###Output
_____no_output_____
###Markdown
1.10.2 Create a Pipeline
###Code
gbt_pipeline = Pipeline(stages=[spark3_indexer, encoder, assembler, mmscaler, classifier])
###Output
_____no_output_____
###Markdown
1.10.3 Feed All the Data
###Code
gbt_model = gbt_pipeline.fit(df)
prediction = gbt_model.transform(df)
prediction.select("prediction", "catIndex_income_binary", "scaled_features").show(5)
###Output
+----------+----------------------+--------------------+
|prediction|catIndex_income_binary| scaled_features|
+----------+----------------------+--------------------+
| 0.0| 0.0|(411,[0,7,10,12,1...|
| 0.0| 0.0|(411,[0,1,2,7,8,1...|
| 0.0| 0.0|(411,[0,7,10,12,1...|
| 0.0| 0.0|(411,[0,7,14,28,2...|
| 0.0| 0.0|(411,[0,7,14,28,2...|
+----------+----------------------+--------------------+
only showing top 5 rows
###Markdown
1.11 Model Performance Measurements 1.11.1 Train Data Area under ROC
###Code
evaluator = BinaryClassificationEvaluator(rawPredictionCol="rawPrediction", labelCol='catIndex_income_binary')
evaluator.evaluate(prediction)
###Output
_____no_output_____
###Markdown
1.11.2 Test Data
###Code
df_test = (
spark.read.format("csv").\
options(header='false', inferSchema='True').\
load(os.path.join(DATA_PATH, 'census-income.test.gz')).\
toDF(*my_cols)
)
###Output
_____no_output_____
###Markdown
Sanity check
###Code
df_test.createOrReplaceTempView("df_test")
spark.sql("SELECT income_binary, count(income_binary) from df_test group by income_binary").show()
test_prediction = gbt_model.transform(df_test)
evaluator.evaluate(test_prediction)
###Output
_____no_output_____
###Markdown
I am quite satisfied with the result. 1.12 Steps Taken to Avoid Model Overfitting None. It is possible to reduce the number of iterations and penalize tree size to prevent overfitting (a hedged configuration sketch is shown at the end of this cell). In theory, with other classification methods, I would use L1 or L2 regularization, depending on whether we want to select only the features that best explain the results or we want to use as many features as possible just to avoid a high-variance problem. For neural nets I would use dropout. 1.13 What changes do you propose to make this modeling process more effective, if you are to train this in a cluster with 1,000 nodes, 300 million rows and 5,000 features in AWS? 1. I would probably not use Pandas for visualization. 2. Perhaps I would consider adding an ETL stage, to store the data in `parquet` format. 3. Spark is designed to be scalable, so no changes are needed in the training/prediction part. *MAYBE* the initialization would be different - to run it on a cluster. I have never worked with AWS, so I don't know what else is needed. In Watson Studio (the only cloud service that I am familiar with) all you need is to select an appropriate environment - that's it.
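Sketch referenced in 1.12 above - a more conservative configuration of the same classifier (parameter values are arbitrary and untuned; `regularized_gbt` is just an illustrative name):

```python
# sketch: fewer/shallower trees and a smaller step size to limit overfitting
regularized_gbt = GBTClassifier(
    labelCol="catIndex_income_binary",
    featuresCol="scaled_features",
    maxIter=10,     # fewer boosting iterations
    maxDepth=3,     # shallower trees
    stepSize=0.05,  # smaller step size (shrinkage)
)
```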
###Code
spark.stop()
print(f"Elapsed time: {time.time() - t0}s")
sc.stop()
###Output
_____no_output_____ |
workshops/Medical_Imaging_AI/spleen_segmentation_3d_tutorial.ipynb | ###Markdown
Spleen 3D segmentation with MONAI This tutorial shows how to integrate MONAI into an existing PyTorch medical DL program, and easily use the features below: 1. Transforms for dictionary format data. 1. Load Nifti image with metadata. 1. Add channel dim to the data if no channel dimension. 1. Scale medical image intensity with expected range. 1. Crop out a batch of balanced images based on positive / negative label ratio. 1. Cache IO and transforms to accelerate training and validation. 1. 3D UNet model, Dice loss function, Mean Dice metric for 3D segmentation task. 1. Sliding window inference method. 1. Deterministic training for reproducibility. The Spleen dataset can be downloaded from http://medicaldecathlon.com/. Target: Spleen Modality: CT Size: 61 3D volumes (41 Training + 20 Testing) Source: Memorial Sloan Kettering Cancer Center Challenge: Large ranging foreground size Setup environment `!pip uninstall -y monai monai-weekly`
###Code
!pip install -q "monai[nibabel, skimage, pillow, tensorboard, gdown, ignite, torchvision, itk, tqdm, lmdb, psutil, openslide]==0.7.0"
#!pip list | grep monai
#!python -c "import monai" || pip install -q "monai-weekly[gdown, nibabel, tqdm]"
!python -c "import matplotlib" || pip install -q matplotlib
# !pip install -q pytorch-lightning==1.4.0
%matplotlib inline
###Output
_____no_output_____
###Markdown
Setup imports
###Code
from monai.utils import first, set_determinism
from monai.transforms import (
AsDiscrete,
AsDiscreted,
EnsureChannelFirstd,
Compose,
CropForegroundd,
LoadImaged,
Orientationd,
RandCropByPosNegLabeld,
ScaleIntensityRanged,
Spacingd,
EnsureTyped,
EnsureType,
Invertd,
)
from monai.handlers.utils import from_engine
from monai.networks.nets import UNet
from monai.networks.layers import Norm
from monai.metrics import DiceMetric
from monai.losses import DiceLoss
from monai.inferers import sliding_window_inference
from monai.data import CacheDataset, DataLoader, Dataset, decollate_batch
from monai.config import print_config
from monai.apps import download_and_extract
import torch
import matplotlib.pyplot as plt
import tempfile
import shutil
import os
import glob
###Output
_____no_output_____
###Markdown
Setup data directory You can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified, a temporary directory will be used.
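A sketch of that environment-variable approach (the cell below hardcodes the paths instead):

```python
# sketch: read MONAI_DATA_DIRECTORY if set, otherwise fall back to a temporary directory
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
```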
###Code
import pathlib, os, glob
model_dir = "10.192.21.7/models" ## a folder to keep all the models
pathlib.Path(model_dir).mkdir(parents=True, exist_ok=True)
# make sure that the folder is empty
files = glob.glob(model_dir + "/*")
for f in files:
os.remove(f)
directory = "10.192.21.7/datasets" ## input data folder
root_dir = tempfile.mkdtemp() if directory is None else directory
data_dir = os.path.join(root_dir, "Task09_Spleen")
###Output
_____no_output_____
###Markdown
Download dataset Downloads and extracts the dataset. The dataset comes from http://medicaldecathlon.com/.
###Code
resource = "https://msd-for-monai.s3-us-west-2.amazonaws.com/Task09_Spleen.tar"
md5 = "410d4a301da4e5b2f6f86ec3ddba524e"
compressed_file = os.path.join(root_dir, "Task09_Spleen.tar")
if not os.path.exists(data_dir):
download_and_extract(resource, compressed_file, root_dir, md5)
###Output
_____no_output_____
###Markdown
Set MSD Spleen dataset path
###Code
train_images = sorted(glob.glob(os.path.join(data_dir, "imagesTr", "*.nii.gz")))
train_labels = sorted(glob.glob(os.path.join(data_dir, "labelsTr", "*.nii.gz")))
data_dicts = [
{"image": image_name, "label": label_name}
for image_name, label_name in zip(train_images, train_labels)
]
train_files, val_files = (
data_dicts[:-9],
data_dicts[-9:],
) ## keep the last 9 files as validation files
###Output
_____no_output_____
###Markdown
Set deterministic training for reproducibility
###Code
set_determinism(seed=0)
###Output
_____no_output_____
###Markdown
Setup transforms for training and validationHere we use several transforms to augment the dataset:1. `LoadImaged` loads the spleen CT images and labels from NIfTI format files.1. `AddChanneld` as the original data doesn't have channel dim, add 1 dim to construct "channel first" shape.1. `Spacingd` adjusts the spacing by `pixdim=(1.5, 1.5, 2.)` based on the affine matrix.1. `Orientationd` unifies the data orientation based on the affine matrix.1. `ScaleIntensityRanged` extracts intensity range [-57, 164] and scales to [0, 1].1. `CropForegroundd` removes all zero borders to focus on the valid body area of the images and labels.1. `RandCropByPosNegLabeld` randomly crop patch samples from big image based on pos / neg ratio. The image centers of negative samples must be in valid body area.1. `RandAffined` efficiently performs `rotate`, `scale`, `shear`, `translate`, etc. together based on PyTorch affine transform.1. `EnsureTyped` converts the numpy array to PyTorch Tensor for further steps.
###Code
train_transforms = Compose(
[
LoadImaged(keys=["image", "label"]),
EnsureChannelFirstd(keys=["image", "label"]),
Spacingd(
keys=["image", "label"],
pixdim=(1.5, 1.5, 2.0),
mode=("bilinear", "nearest"),
),
Orientationd(keys=["image", "label"], axcodes="RAS"),
ScaleIntensityRanged(
keys=["image"],
a_min=-57,
a_max=164,
b_min=0.0,
b_max=1.0,
clip=True,
),
CropForegroundd(keys=["image", "label"], source_key="image"),
RandCropByPosNegLabeld(
keys=["image", "label"],
label_key="label",
spatial_size=(96, 96, 96),
pos=1,
neg=1,
num_samples=4,
image_key="image",
image_threshold=0,
),
# user can also add other random transforms
# RandAffined(
# keys=['image', 'label'],
# mode=('bilinear', 'nearest'),
# prob=1.0, spatial_size=(96, 96, 96),
# rotate_range=(0, 0, np.pi/15),
# scale_range=(0.1, 0.1, 0.1)),
EnsureTyped(keys=["image", "label"]),
]
)
val_transforms = Compose(
[
LoadImaged(keys=["image", "label"]),
EnsureChannelFirstd(keys=["image", "label"]),
Spacingd(
keys=["image", "label"],
pixdim=(1.5, 1.5, 2.0),
mode=("bilinear", "nearest"),
),
Orientationd(keys=["image", "label"], axcodes="RAS"),
ScaleIntensityRanged(
keys=["image"],
a_min=-57,
a_max=164,
b_min=0.0,
b_max=1.0,
clip=True,
),
CropForegroundd(keys=["image", "label"], source_key="image"),
EnsureTyped(keys=["image", "label"]),
]
)
###Output
_____no_output_____
###Markdown
Check transforms in DataLoader
###Code
check_ds = Dataset(data=val_files, transform=val_transforms)
check_loader = DataLoader(check_ds, batch_size=1)
check_data = first(check_loader)
image, label = (check_data["image"][0][0], check_data["label"][0][0])
print(f"image shape: {image.shape}, label shape: {label.shape}")
# plot the slice [:, :, 80]
plt.figure("check", (12, 6))
plt.subplot(1, 2, 1)
plt.title("image")
plt.imshow(image[:, :, 80], cmap="gray")
plt.subplot(1, 2, 2)
plt.title("label")
plt.imshow(label[:, :, 80])
plt.show()
###Output
_____no_output_____
###Markdown
Define CacheDataset and DataLoader for training and validation Here we use CacheDataset to accelerate the training and validation process; it's 10x faster than the regular Dataset. To achieve the best performance, set `cache_rate=1.0` to cache all the data; if memory is not enough, set a lower value. Users can also set `cache_num` instead of `cache_rate`; the minimum of the 2 settings will be used. And set `num_workers` to enable multi-threading during caching. If you want to try the regular Dataset, just switch to the commented code below.
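For example, a hedged variant of the dataset construction using `cache_num` (the values and the name `train_ds_alt` are purely illustrative):

```python
# sketch: cache at most 16 items using 4 caching workers
train_ds_alt = CacheDataset(data=train_files, transform=train_transforms,
                            cache_num=16, num_workers=4)
```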
###Code
train_ds = CacheDataset(
data=train_files, transform=train_transforms, cache_rate=1, num_workers=0
)
# train_ds = monai.data.Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld
# to generate 2 x 4 images for network training
train_loader = DataLoader(train_ds, batch_size=2, shuffle=True, num_workers=0)
val_ds = CacheDataset(
data=val_files, transform=val_transforms, cache_rate=1.0, num_workers=0
)
# val_ds = Dataset(data=val_files, transform=val_transforms)
val_loader = DataLoader(val_ds, batch_size=1, num_workers=0)
###Output
_____no_output_____
###Markdown
Create Model, Loss, Optimizer
###Code
# standard PyTorch program style: create UNet, DiceLoss and Adam optimizer
device = torch.device("cuda:0")
model = UNet(
spatial_dims=3,
in_channels=1,
out_channels=2,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
norm=Norm.BATCH,
).to(device)
loss_function = DiceLoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.Adam(model.parameters(), 1e-4)
dice_metric = DiceMetric(include_background=False, reduction="mean")
###Output
_____no_output_____
###Markdown
Execute a typical PyTorch training process
###Code
%%time
max_epochs = 20 # 600
val_interval = 2
best_metric = -1
best_metric_epoch = -1
epoch_loss_values = []
metric_values = []
post_pred = Compose(
[EnsureType(), AsDiscrete(argmax=True, to_onehot=True, num_classes=2)]
)
post_label = Compose([EnsureType(), AsDiscrete(to_onehot=True, num_classes=2)])
for epoch in range(max_epochs):
print("-" * 10)
print(f"epoch {epoch + 1}/{max_epochs}")
model.train()
epoch_loss = 0
step = 0
for batch_data in train_loader:
step += 1
inputs, labels = (
batch_data["image"].to(device),
batch_data["label"].to(device),
)
optimizer.zero_grad()
outputs = model(inputs)
loss = loss_function(outputs, labels)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
print(
f"{step}/{len(train_ds) // train_loader.batch_size}, "
f"train_loss: {loss.item():.4f}"
)
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
print(f"epoch {epoch + 1} average loss: {epoch_loss:.4f}")
if (epoch + 1) % val_interval == 0:
model.eval()
with torch.no_grad():
for val_data in val_loader:
val_inputs, val_labels = (
val_data["image"].to(device),
val_data["label"].to(device),
)
roi_size = (160, 160, 160)
sw_batch_size = 4
val_outputs = sliding_window_inference(
val_inputs, roi_size, sw_batch_size, model
)
val_outputs = [post_pred(i) for i in decollate_batch(val_outputs)]
val_labels = [post_label(i) for i in decollate_batch(val_labels)]
# compute metric for current iteration
dice_metric(y_pred=val_outputs, y=val_labels)
# aggregate the final mean dice result
metric = dice_metric.aggregate().item()
# reset the status for next validation round
dice_metric.reset()
metric_values.append(metric)
if metric > best_metric:
best_metric = metric
best_metric_epoch = epoch + 1
torch.save(model.state_dict(), model_dir + "/best_metric_model.pth")
print("saved new best metric model")
print(
f"current epoch: {epoch + 1} current mean dice: {metric:.4f}"
f"\nbest mean dice: {best_metric:.4f} "
f"at epoch: {best_metric_epoch}"
)
print(
f"train completed, best_metric: {best_metric:.4f} " f"at epoch: {best_metric_epoch}"
)
###Output
_____no_output_____
###Markdown
Plot the loss and metric
###Code
plt.figure("train", (12, 6))
plt.subplot(1, 2, 1)
plt.title("Epoch Average Loss")
x = [i + 1 for i in range(len(epoch_loss_values))]
y = epoch_loss_values
plt.xlabel("epoch")
plt.plot(x, y)
plt.subplot(1, 2, 2)
plt.title("Val Mean Dice")
x = [val_interval * (i + 1) for i in range(len(metric_values))]
y = metric_values
plt.xlabel("epoch")
plt.plot(x, y)
plt.show()
###Output
_____no_output_____
###Markdown
Check best model output with the input image and label
###Code
model.load_state_dict(torch.load(model_dir + "/best_metric_model.pth"))
model.eval()
with torch.no_grad():
for i, val_data in enumerate(val_loader):
roi_size = (160, 160, 160)
sw_batch_size = 4
val_outputs = sliding_window_inference(
val_data["image"].to(device), roi_size, sw_batch_size, model
)
# plot the slice [:, :, 80]
plt.figure("check", (18, 6))
plt.subplot(1, 3, 1)
plt.title(f"image {i}")
plt.imshow(val_data["image"][0, 0, :, :, 80], cmap="gray")
plt.subplot(1, 3, 2)
plt.title(f"label {i}")
plt.imshow(val_data["label"][0, 0, :, :, 80])
plt.subplot(1, 3, 3)
plt.title(f"output {i}")
plt.imshow(torch.argmax(val_outputs, dim=1).detach().cpu()[0, :, :, 80])
plt.show()
if i == 2:
break
###Output
_____no_output_____
###Markdown
Evaluation on original image spacings
###Code
val_org_transforms = Compose(
[
LoadImaged(keys=["image", "label"]),
EnsureChannelFirstd(keys=["image", "label"]),
Spacingd(keys=["image"], pixdim=(1.5, 1.5, 2.0), mode="bilinear"),
Orientationd(keys=["image"], axcodes="RAS"),
ScaleIntensityRanged(
keys=["image"],
a_min=-57,
a_max=164,
b_min=0.0,
b_max=1.0,
clip=True,
),
CropForegroundd(keys=["image"], source_key="image"),
EnsureTyped(keys=["image", "label"]),
]
)
val_org_ds = Dataset(data=val_files, transform=val_org_transforms)
val_org_loader = DataLoader(val_org_ds, batch_size=1, num_workers=4)
post_transforms = Compose(
[
EnsureTyped(keys="pred"),
Invertd(
keys="pred",
transform=val_org_transforms,
orig_keys="image",
meta_keys="pred_meta_dict",
orig_meta_keys="image_meta_dict",
meta_key_postfix="meta_dict",
nearest_interp=False,
to_tensor=True,
),
AsDiscreted(keys="pred", argmax=True, to_onehot=True, num_classes=2),
AsDiscreted(keys="label", to_onehot=True, num_classes=2),
]
)
###Output
_____no_output_____
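###Markdown
 The cell above only defines the dataset on original spacings and the inverse/post transforms; a minimal sketch of the corresponding evaluation loop (not executed here, reusing the model checkpoint, metric and loader objects defined earlier) could look like the following.
###Code
# sketch: sliding-window inference, invert transforms back to original spacing, then Dice
model.load_state_dict(torch.load(model_dir + "/best_metric_model.pth"))
model.eval()
with torch.no_grad():
    for val_data in val_org_loader:
        val_inputs = val_data["image"].to(device)
        val_data["pred"] = sliding_window_inference(val_inputs, (160, 160, 160), 4, model)
        val_data = [post_transforms(i) for i in decollate_batch(val_data)]
        val_outputs, val_labels = from_engine(["pred", "label"])(val_data)
        dice_metric(y_pred=val_outputs, y=val_labels)
    metric_org = dice_metric.aggregate().item()
    dice_metric.reset()
print("Metric on original image spacing:", metric_org)
###Output
_____no_output_____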
###Markdown
Cleanup data directory Remove the directory if a temporary one was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____ |
notebooks/minimalAnalysis_HPDSinputFormat.ipynb | ###Markdown
Test using fake data following HPDS format: - General age distribution - Age distribution by sex - Comorbidities per day. Shiny App demo: https://avillachlab.shinyapps.io/demo/
###Code
#load the libraries
library( "ggplot2" )
library( "plyr" )
library( "plotly" )
##### file modified
inputPath <- "./"
inputFile <- "Zak_table_modified.txt"
# read the file and rename columns
dataInput <- read.delim( paste0( inputPath, inputFile ), header = FALSE, sep = "\t", colClasses = "character")
colnames(dataInput) <- c("PATIENT_NUM", "CONCEPT_PATH", "NVAL_NUM", "TVAL_CHAR", "START_DATE", "AGE", "SEX")
#for the diagnostic, we select the last item in the concept path
dataInput$DIAGNOSE <- sapply(strsplit( as.character(dataInput$CONCEPT_PATH), "[\\]"),function(x)x[length(x)])
#we create one subset of the data for the diagnosis analysis
dataSelection <- dataInput[, c("PATIENT_NUM", "DIAGNOSE", "START_DATE")]
dataSelection$START_DATE <- as.Date(dataSelection$START_DATE, format = "%d-%b-%y")
head( dataSelection )
#we create one subset of the data for the general demographic analysis
demogData <- dataInput[ , c("PATIENT_NUM", "AGE", "SEX")]
demogData <- demogData[ ! duplicated( demogData ), ]
head(demogData)
demogData$AGE <- as.numeric( demogData$AGE )
demogData$SEX <- as.factor( demogData$SEX )
###Output
_____no_output_____
###Markdown
Boxplot for the age distribution by sex
###Code
ageBP <- ggplot(demogData, aes(x=SEX, y=AGE, fill=SEX))+
geom_boxplot()
ageBP + scale_fill_discrete(name="Sex",
breaks=c("F", "M"),
labels=c("Female", "Male"))
###Output
_____no_output_____
###Markdown
Age distribution- General age distribution- Age distribution according to the sex of the patient
###Code
#general barplot
ggplot(demogData, aes(x=AGE)) + geom_bar()
min(demogData$AGE)
max(demogData$AGE)
#estimate the mean age for males and females
mu <- ddply(demogData, "SEX", summarise, grp.mean=mean(AGE))
#histogram showing age according to sex
ggplot2::ggplot(demogData, aes(x=AGE, fill=SEX, color=SEX)) +
geom_histogram(position="identity", alpha = 0.5) +
geom_vline(data=mu, aes(xintercept=grp.mean, color=SEX),
linetype="dashed")
###Output
_____no_output_____
###Markdown
Prepare the data to estimate the comorbidities per day
###Code
head(dataSelection)
output <- data.frame()
for( i in 1:length(unique( dataSelection$PATIENT_NUM))){
selection <- dataSelection[ dataSelection$PATIENT_NUM == unique(dataSelection$PATIENT_NUM)[i], ]
selection <- selection[ order( selection$START_DATE , decreasing = TRUE), ]
for( j in 1:nrow(selection)){
if( j == 1){
selection$DAY[j] <- paste0("DAY ", j)
}else{
selection$DAY[j] <- paste0("DAY ", as.numeric(gsub("DAY ", "", selection$DAY[j-1])) + as.numeric(selection$START_DATE[j-1] - selection$START_DATE[j]) )
}
}
output <- rbind(output, selection)
}
head(output)
toplot <- as.data.frame( table( paste0( output$DIAGNOSE, "-", output$DAY)))
toplot$Prev <- (toplot$Freq/length(unique(output$PATIENT_NUM)))
toplot$DIAG <- sapply(strsplit( as.character(toplot$Var1), "-"), '[', 1)
toplot$DAY <- sapply(strsplit( as.character(toplot$Var1), "-"), '[', 2)
#barplot
p <- ggplot(data=toplot, aes(x=DAY, y=Freq, fill=DIAG)) +
geom_bar(stat="identity", color="black", position=position_dodge())+
theme(legend.position="bottom",
legend.direction = "vertical")
p
#htmp
htmp <- ggplot(toplot, aes(DAY, DIAG )) +
geom_tile(aes(fill = Freq), color = "white") +
scale_fill_gradient(low = "white", high = "steelblue") +
theme(legend.title = element_text(size = 10),
legend.text = element_text(size = 12),
plot.title = element_text(size=16),
axis.title=element_text(size=14,face="bold"),
axis.text.x = element_text(angle = 90, hjust = 1)) +
labs(fill = "Comorbidity frequency")
htmp
#alternatives using plotly
# library(plotly)
# toplot <- as.data.frame( table( paste0( output$DIAGNOSE, "-", output$DAY)))
# toplot$Prev <- (toplot$Freq/length(unique(output$PATIENT_NUM)))
# toplot$DIAG <- sapply(strsplit( as.character(toplot$Var1), "-"), '[', 1)
# toplot$DAY <- sapply(strsplit( as.character(toplot$Var1), "-"), '[', 2)
# p <- ggplot(data=toplot, aes(x=DAY, y=Freq, fill=DIAG)) +
# geom_bar(stat="identity", color="black", position=position_dodge())+
# theme_minimal()+ theme(legend.position="bottom")
# p
#
# plotly( p )
#
#
# htmp <- ggplot(toplot, aes(DAY, DIAG )) +
# geom_tile(aes(fill = Freq), color = "white") +
# scale_fill_gradient(low = "white", high = "steelblue") +
# theme(legend.title = element_text(size = 10),
# legend.text = element_text(size = 12),
# plot.title = element_text(size=16),
# axis.title=element_text(size=14,face="bold"),
# axis.text.x = element_text(angle = 90, hjust = 1)) +
# labs(fill = "Comorbidity frequency")
#
# htmp
#
# fig <- plot_ly(
# x = toplot$DAY, y = toplot$DIAG,
# z = toplot$Freq, type = "heatmap"
# )
#inputPath <- "./"
#inputFile <- "Zak_table.txt"
#library(plotly)
#theme_set(theme_bw())
#dataInput <- read.delim( paste0( inputPath, inputFile ), header = FALSE, sep = "\t", colClasses = "character")
#colnames(dataInput) <- c("PATIENT_NUM", "CONCEPT_PATH", "NVAL_NUM", "TVAL_CHAR", "START_DATE")
#dataInput$DIAGNOSE <- sapply(strsplit( as.character(dataInput$CONCEPT_PATH), "[\\]"),function(x)x[length(x)])
#dataSelection <- dataInput[, c("PATIENT_NUM", "DIAGNOSE", "START_DATE")]
#dataSelection$START_DATE <- as.Date(dataSelection$START_DATE, format = "%d-%b-%y")
#head(dataSelection)
#output <- data.frame()
#for( i in 1:length(unique( dataSelection$PATIENT_NUM))){
# selection <- dataSelection[ dataSelection$PATIENT_NUM == unique(dataSelection$PATIENT_NUM)[i], ]
# selection <- selection[ order( selection$START_DATE , decreasing = TRUE), ]
# for( j in 1:nrow(selection)){
# if( j == 1){
# selection$DAY[j] <- paste0("DAY ", j)
# }else{
# selection$DAY[j] <- paste0("DAY ", as.numeric(gsub("DAY ", "", selection$DAY[j-1])) + as.numeric(selection$START_DATE[j-1] - selection$START_DATE[j]) )
# }
# }
# output <- rbind(output, selection)
#}
# Simulate COVID status
#covid_stat_df <- data.frame(PATIENT_NUM = unique(dataSelection$PATIENT_NUM),
# covid_status = sample(c("Y", "N"),
# size = length(unique(dataSelection$PATIENT_NUM)),
# replace = T)
#                          )
#output <- merge(output, covid_stat_df)
#head(output)
# toplot <- as.data.frame( table( paste0( output$DIAGNOSE, "-", output$DAY)))
# toplot$Prev <- (toplot$Freq/length(unique(output$PATIENT_NUM)))
# toplot$DIAG <- sapply(strsplit( as.character(toplot$Var1), "-"), '[', 1)
# toplot$DAY <- sapply(strsplit( as.character(toplot$Var1), "-"), '[', 2)
#p <- ggplot(data=toplot, aes(x=DAY, y=Freq, fill=DIAG)) +
# geom_bar(stat="identity", color="black", position=position_dodge())+
# theme(legend.position="bottom",
# legend.direction = "vertical")
#p
#plotly( p )
#htmp <- ggplot(toplot, aes(DAY, DIAG )) +
# geom_tile(aes(fill = Freq), color = "white") +
# scale_fill_gradient(low = "white", high = "steelblue") +
# theme_bw() +
# theme(legend.title = element_text(size = 10),
# legend.text = element_text(size = 12),
# plot.title = element_text(size=16),
# axis.title=element_text(size=14,face="bold"),
# axis.text.x = element_text(angle = 90, hjust = 1)) +
# labs(fill = "Comorbidity frequency")
#htmp
#fig <- plot_ly(
# x = toplot$DAY, y = toplot$DIAG,
# z = toplot$Freq, type = "heatmap"
# )
###Output
_____no_output_____ |
notebook/Deterministic-Chain-Example.ipynb | ###Markdown
Deterministic Chain Example Self-contained example of the $\eta$-return mixture for prediction in a deterministic chain
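Reading from the `LambdaValue` learner below (where the mixing parameter is written $\lambda$), the value estimate is the mixture $V(s) = \psi_\lambda(s)^\top \big( (1-\lambda)\,\theta + \lambda\, w \big)$, where $\psi_\lambda$ is a successor representation learned with effective discount $\gamma\lambda$, $w$ is a learned one-step reward vector, and $\theta$ is a value head trained by bootstrapping on this mixed estimate; $\lambda = 0$ recovers plain model-free TD on $\theta$, while $\lambda = 1$ recovers the pure successor-feature value $\psi^\top w$.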
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Set up Environment MRP
###Code
class LinearChainMRP:
def __init__(self, n_states=8):
self.n_states = n_states
self.state = 0
def step(self):
reward = 0.0
done = False
if self.state == (self.n_states-1):
done = True
reward = 1.0
else:
self.state += 1
return self.state, reward, done
def reset(self):
self.state = 0
return self.state
###Output
_____no_output_____
###Markdown
Learners
###Code
def np_one_hot(dim, idx):
"""Create a one-hot vector"""
vec = np.zeros(dim)
vec[idx] = 1.0
return vec
class ValueFunctionBase:
def __init__(self, n_states=8, gamma=0.99, lr=0.1):
self.n_states = n_states
self.prev_state = None
self.gamma = gamma
self.lr = lr
def begin_episode(self, state):
self.prev_state = state
def step(self, state, reward, done):
pass
class MFValue(ValueFunctionBase):
def __init__(self, n_states=8, gamma=0.99, lr=0.1):
super().__init__(n_states, gamma, lr)
self.theta = np.zeros(n_states)
def step(self, state, reward, done):
# Target and update
prev_target = reward + ((1 - done) *
self.gamma * self.theta[state])
theta_delta = prev_target - self.theta[self.prev_state]
self.theta[self.prev_state] += self.lr * theta_delta
# Continue
self.prev_state = state
class SFValue(ValueFunctionBase):
def __init__(self, n_states=8, gamma=0.99, lr=0.1):
super().__init__(n_states, gamma, lr)
self.psi = np.identity(n_states)
self.w = np.zeros(n_states)
self.theta = np.zeros(n_states) # dummy variable to be combined
def step(self, state, reward, done):
        # SR learning
prev_phi = np_one_hot(self.n_states, self.prev_state)
target = prev_phi + ((1 - done) *
self.gamma * self.psi[state])
sf_delta = target - self.psi[self.prev_state]
self.psi[self.prev_state] += self.lr * sf_delta
# Reward learning
r_delta = reward - self.w[self.prev_state]
self.w[self.prev_state] += self.lr * r_delta
self.theta = self.psi @ self.w
# Continue
self.prev_state = state
class LambdaValue(ValueFunctionBase):
def __init__(self, n_states=8, gamma=0.99, lr=0.1, lamb=0.0):
super().__init__(n_states, gamma, lr)
self.lamb = lamb
self.theta = np.zeros(n_states)
self.psi = np.identity(n_states)
self.w = np.zeros(n_states)
def step(self, state, reward, done):
        # SR learning
prev_phi = np_one_hot(self.n_states, self.prev_state)
target = prev_phi + ((1 - done) *
self.gamma * self.lamb * self.psi[state])
sf_delta = target - self.psi[self.prev_state]
self.psi[self.prev_state] += self.lr * sf_delta
# Reward learning
r_delta = reward - self.w[self.prev_state]
self.w[self.prev_state] += self.lr * r_delta
# Value learning
v_vec = self.psi @ (
((1-self.lamb) * self.theta) + (self.lamb * self.w)
)
v_target = reward + ((1-done) * self.gamma * v_vec[state])
v_delta = v_target - self.theta[self.prev_state]
self.theta[self.prev_state] += self.lr * v_delta
# Continue
self.prev_state = state
###Output
_____no_output_____
###Markdown
Trainer
###Code
def train_value(env, learner, num_episodes=3):
"""
Method to run a learner in an environment
"""
out_dict = {
'theta': [],
'psi': [],
'w': [],
}
for episode_idx in range(num_episodes):
s = env.reset()
learner.begin_episode(s)
done = False
while not done:
s, reward, done = env.step()
learner.step(s, reward, done)
# store
if hasattr(learner, 'theta'):
out_dict['theta'].append(np.copy(learner.theta))
if hasattr(learner, 'psi'):
out_dict['psi'].append(np.copy(learner.psi))
if hasattr(learner, 'w'):
out_dict['w'].append(np.copy(learner.w))
return out_dict
###Output
_____no_output_____
###Markdown
Experiment Run experiment
###Code
Num_states = 16
Num_episodes = 20
Gamma = 0.9999
Lr = 1.0
def run_experiment():
sf_lambda = 0.5
# Agent learners
learner_dict = {
'mf': MFValue(n_states=Num_states, gamma=Gamma, lr=Lr),
'sf': SFValue(n_states=Num_states, gamma=Gamma, lr=Lr),
'lvf_2': LambdaValue(n_states=Num_states, gamma=Gamma, lr=Lr, lamb=0.2),
'lvf_5': LambdaValue(n_states=Num_states, gamma=Gamma, lr=Lr, lamb=0.5),
'lvf_7': LambdaValue(n_states=Num_states, gamma=Gamma, lr=Lr, lamb=0.7),
'lvf_9': LambdaValue(n_states=Num_states, gamma=Gamma, lr=Lr, lamb=0.9),
}
# Environment
env = LinearChainMRP(n_states=Num_states)
# Train
data_dict = {}
for k in learner_dict:
cur_learner = learner_dict[k]
cur_out = train_value(env, cur_learner,
num_episodes=Num_episodes)
data_dict[k] = cur_out
print(k, data_dict[k].keys())
return data_dict
exp_dict = run_experiment()
###Output
mf dict_keys(['theta', 'psi', 'w'])
sf dict_keys(['theta', 'psi', 'w'])
lvf_2 dict_keys(['theta', 'psi', 'w'])
lvf_5 dict_keys(['theta', 'psi', 'w'])
lvf_7 dict_keys(['theta', 'psi', 'w'])
lvf_9 dict_keys(['theta', 'psi', 'w'])
###Markdown
Visualize
###Code
from mpl_toolkits.axes_grid1 import make_axes_locatable
def script_plt_all_thetas(in_dict):
only_include = ['mf', 'lvf_7', 'sf']
title_list = ['$\eta$ = 0', '$\eta$ = 0.7', '$\eta$ = 1.0']
plt.figure(figsize=((10, 2.8)))
subplt_counter = 1
fig, axes = plt.subplots(nrows=1, ncols=len(only_include))
for idx, ax in enumerate(axes.flat):
cur_ax_key = only_include[idx]
im = ax.imshow(in_dict[cur_ax_key]['theta'],
cmap='magma',
vmin=0.0, vmax=1.0)
print(cur_ax_key)
ax.title.set_text(title_list[idx])
if subplt_counter == 1:
ax.set_ylabel('Episodes')
cur_yticks = np.arange(Num_episodes, step=4)
ax.set_yticks(cur_yticks)
else:
ax.set_ylabel('')
ax.set_yticks([])
ax.set_xlabel('States')
subplt_counter += 1
fig.subplots_adjust(right=0.8)
#cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
cbar_ax = fig.add_axes([ax.get_position().x1+0.01,ax.get_position().y0,0.02,ax.get_position().height])
fig.colorbar(im, cax=cbar_ax)
return fig
plt.rc('text', usetex=True)
plt.rc('font', **{'family': 'serif', 'serif': ['Times'], 'size': 15})
cur_prop_fig = script_plt_all_thetas(exp_dict)
# cur_prop_fig.savefig('./path_out_chain.pdf', bbox_inches="tight")
sns.color_palette("colorblind", 10)
def script_plt_all_errors(in_dict):
ag_k_list = ['mf', 'lvf_7', 'sf'] # learner keys
ag_k_float_lambda = [0.0, 0.7, 1.0] # NOTE: correspond to above
# Make true value vector
true_v = np.zeros(Num_states)
true_v[-1] = 1.0
for i in reversed(range((Num_states-1))):
true_v[i] = Gamma * true_v[i+1]
# Colors
c10pal = sns.color_palette("colorblind", 10)
cmapping_keylist = [0.0, 0.3, 0.5, 0.7, 0.9, 0.99, 1.0]
cmapping_colidxs = [7, 1, 2, 3, 4, 6, 9]
cmapping = {cmapping_keylist[i]: cmapping_colidxs[i]
for i in range(len(cmapping_keylist))}
# Plot error
plt.figure(figsize=(3.8,2.8))
lege_list = []
for i, ag_k in enumerate(ag_k_list):
delta_vec = in_dict[ag_k]['theta'] - true_v
err = np.linalg.norm(delta_vec, ord=np.inf,
axis=1)
# Current lambda and color
cur_lamb = ag_k_float_lambda[i]
cur_col = c10pal[cmapping[cur_lamb]]
#
plt.plot(err, color=cur_col)
print(ag_k, cur_lamb)
lege_list.append(ag_k)
# plt.title('Inf Norm Error')
lege_list = ['$\eta$ = 0 (MF)', '$\eta$ = 0.7', '$\eta$ = 1 (full SF)']
plt.legend(lege_list, loc='lower left', fontsize=13)
plt.ylabel('max\{abs state error\}')
plt.xlabel('Episodes')
cur_xticks = np.arange(Num_episodes, step=4)
plt.xticks(cur_xticks, cur_xticks)
plt.rc('text', usetex=True)
plt.rc('font', **{'family': 'serif', 'serif': ['Times'], 'size': 15})
script_plt_all_errors(exp_dict)
# plt.savefig('./path_out.pgf', bbox_inches="tight")
def script_get_vectors(in_dict, epis_idx):
param_k_list = ['theta', 'w', 'psi'] # parameters to see
ag_k_list = ['mf', 'sf', 'lvf_7'] # learner keys
for ag_k in in_dict:
# Check
if ag_k not in ag_k_list:
continue
for param_k in param_k_list:
# Check
if not param_k in in_dict[ag_k]:
continue
if len(in_dict[ag_k][param_k]) == 0:
continue
# Get the parameters
cur_param = in_dict[ag_k][param_k]
cur_epis_param = cur_param[epis_idx]
if len(np.shape(cur_epis_param)) == 1:
cur_epis_param = np.expand_dims(
cur_epis_param, 0)
elif len(np.shape(cur_epis_param)) == 2:
cur_epis_param = cur_epis_param[0:1, :]
else:
raise Exception('This is unexpected')
info_str = f'Agent: {ag_k}, Param: {param_k}'
if param_k == 'theta':
info_str += f', {cur_epis_param[0,0]}'
print(info_str)
plt.figure(figsize=(3,2))
plt.imshow(cur_epis_param,
cmap='cividis',
vmin=0.0, vmax=1.0)
plt.xticks([])
plt.yticks([])
plt.show()
script_get_vectors(exp_dict, 16)
###Output
Agent: mf, Param: theta, 0.9985010495451367
###Markdown
$\phi (S = 0)$ without SF
###Code
def script_plt_phi_s0():
phi_s0 = np.zeros((1, Num_states))
phi_s0[0,0] = 1.0
plt.figure(figsize=(3,2))
plt.imshow(phi_s0,
cmap='cividis',
vmin=0.0, vmax=1.0)
plt.xticks([])
plt.yticks([])
plt.show()
script_plt_phi_s0()
def script_plt_zero():
phi_s0 = np.zeros((1, Num_states))
plt.figure(figsize=(3,2))
plt.imshow(phi_s0,
cmap='cividis',
vmin=0.0, vmax=1.0)
plt.xticks([])
plt.yticks([])
plt.show()
script_plt_zero()
###Output
_____no_output_____ |
notebooks/DataTypes.ipynb | ###Markdown
Programming and Database Fundamentals for Data Scientists - EAS503 Dealing with data types`Python` has five standard Data Types:- Numbers- String- List- Tuple- DictionaryPython sets the variable type based on the value that is assigned to it. Python will change the variable type if the variable value is set to another value. For example:
###Code
height = 10.4
print(type(height))
height = 'tall'
print(type(height))
###Output
<class 'float'>
<class 'str'>
###Markdown
`Python` also allows operations on some types of variables with different data types. However, for others, trying to operate between two data types might lead to an error. For example:
###Code
height = 10.2
tall = True
s = '4.3'
height + tall
# Strongly typed language
height + s
###Output
_____no_output_____
###Markdown
Numbers
###Code
var = 382
###Output
_____no_output_____
###Markdown
Most of the time using the standard Python number type is fine. Python will automatically convert a number from one type to another if it needs to. But, under certain circumstances where a specific number type is needed (i.e. complex, hexadecimal), the type can be forced by using the additional syntax in the table below:| Type | Format | Description ||---------|----------------|-------------|| int |a = 10 |Signed Integer|| float |a = 45.67 |(.) Floating point real values|| complex |a = 3.14J |(J) Complex number with a real and an imaginary part.| Most of the time Python will do variable conversion automatically. You can also use Python conversion functions (int(), float(), complex()) to convert data from one type to another. IntegersIn `Python` (3), the largest allowed integer is not pre-determined, but depends on the available memory.
###Code
x = 9999999999999999999999999999999999999999999999999999
print(x)
print(type(x))
###Output
9999999999999999999999999999999999999999999999999999
<class 'int'>
###Markdown
FloatThe `float` type designates a number with a decimal point. In **scientific notation**, one can use the character `e` or `E` to denote a floating point number.
###Code
a = 4.2
print(type(a))
a = 2e3
print(a)
print(type(a))
a = 4.2e-2
print(a)
print(type(a))
###Output
0.042
<class 'float'>
###Markdown
_Storage_: A `float` is stored in a `double-precision` format (64 bits). Typically, the largest size floating point number is approximately $1.8 \times 10^{308}$. Smallest floating point number that one can use is approximately $5.0 \times 10^{-324}$.
###Code
largefloat = 1.79e308
print(largefloat)
largefloat = 1.8e308
print(largefloat)
smallfloat = 5.0e-324
print(smallfloat)
smallfloat = 1e-325
print(smallfloat)
import sys
aint = 1000
print(sys.getsizeof(aint))
afloat = 2.3707969876878768e-10
print(sys.getsizeof(afloat))
###Output
28
24
###Markdown
**Quick question** - Why bother with `int` at all? Why not use `float` for representing all numbers._Answer_ - Using `int` is more efficient, both in terms of storage and computation.In general, floating point arithmetic is much more expensive than integer. Moreover, `float` is actually an approximation while `int` is exact.
###Code
# using int
a = 1000000000000000000000000000
print(a == a+1)
# using float
a = 1000000000000000000000000000.0
print(a == a+1)
###Output
True
###Markdown
Sequence data structures in PythonWhile basic data types allow you to create one variable at a time, in most practical applications (especially for data science), one needs to manipulate a large collection of variables. Clearly, creating one variable for each data element is very inefficient. All modern programming languages, including `Python`, allows for creation of data types that are collections of variables. Two broad categories of such collections are defined in `Python` - sequences and non-sequences.Python defines several sequence data structures. While each data structure has its own characteristics, they all satisfy a core set of operations, that includes:|Operation| Description||------|------||`len(s)`| **Finding length**||`s1 + s2`| **Concatenation**||`s*4`| **Repetition**|| `x in s`| **Membership**||`for x in` s| **Iteration**| Python ListsOne of the most versatile data types in `Python`.
###Code
l = ['red','blue','green']
print(l)
print(type(l))
print(len(l))
## python indexing starts from 0
print(l[0])
print(l[1])
###Output
red
blue
###Markdown
`Python` lists also support reverse indexing, by using a negative index.
###Code
print(l[-2])
print(l[len(l)-1])
###Output
_____no_output_____
###Markdown
A `list` can contain elements of any data type and allows for a mix of data types.
###Code
l = [3,4,'garfield','chester',4.3]
print(l)
l1 = ['3', 4, 'five', [4.4, 2.3]]
print(l1)
###Output
['3', 4, 'five', [4.4, 2.3]]
###Markdown
ConcatenationThe operator `+` concatenates two lists and returns the concatenated list.
###Code
print(l + l1)
###Output
[3, 4, 'garfield', 'chester', 4.3, '3', 4, 'five', [4.4, 2.3]]
###Markdown
Append
###Code
l.extend(l1)
print(l)
l.append(98)
print(l)
###Output
[3, 4, 'garfield', 'chester', 4.3, '3', 4, 'five', [4.4, 2.3], 98]
###Markdown
Indexing
###Code
l = [7,4,3,9,8,5,6,3,1,3]
l[:]
l[::-1]
l
l[::-1]
l.reverse()
print(l)
###Output
[3, 1, 3, 6, 5, 8, 9, 3, 4, 7]
###Markdown
A third option for indexing allows us to choose the step size for the slicing.
###Code
# pick every second element
print(l[::2])
# print every third element within a range
print(l[2:18:3])
# reverse a list
print(l[::-1])
###Output
_____no_output_____
###Markdown
Replication
###Code
# replicating lists
print([1/6]*9)
print(['a']*8)
###Output
[0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666]
['a', 'a', 'a', 'a', 'a', 'a', 'a', 'a']
###Markdown
Iteration ```for varname in somelist:```
###Code
a = [0,1,2,3,4,5,6,7,8,9]
for i in a:
print(i)
print(i*4)
for i in range(10):
a[i] = 2*a[i]
print(a)
def fun(x):
return 7*x
# a pythonic way
b = [fun(i) for i in a]
print(b)
###Output
_____no_output_____
###Markdown
Memory storage of listsWhile `Python` gives us a simplified view of a `list`, the actual memory representation of a `list` is not as simple. A `list` consists of a main **pointer** (an address pointing to a memory location, typically using 8 bytes on a 64 bit computer) which references other pointers, which in turn point to the actual data items. Consider an empty `list`:
###Code
import sys
l = []
print(sys.getsizeof(l))
biglist = [4,5,8,9]
a = [biglist]
b = [biglist]
print(a)
print(b)
biglist[0] = 'number'
print(biglist)
b[0].append('anothernumber')
print(biglist)
a = [8,3,2,4]
b = a # memory reference
print(a)
print(b)
a[2] = -1
print(b)
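# (added note) b is not a copy: both names refer to the same list object,
# which is why the change made through a also shows up when printing b
print(id(a) == id(b))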
c = a.copy()
a[1] = 400
print(c)
###Output
[8, 3, -1, 4]
###Markdown
An empty `list` takes 64 bytes, this is for the meta data information (overhead). Now consider the size of the `list` after adding data.
###Code
l = []
print(sys.getsizeof(l))
l = ['v']
print(sys.getsizeof(l))
l = ['v','a']
print(sys.getsizeof(l))
l = ['v','a','r']
print(sys.getsizeof(l))
l = ['v','a','r','un']
print(sys.getsizeof(l))
l = ['v','a','r','un','This is a long string']
print(sys.getsizeof(l))
l = ['v','a','r','un','This is a long string',[8,16,24]]
print(sys.getsizeof(l))
###Output
64
72
80
88
96
104
112
###Markdown
Note that, regardless of what was actually added, the size of the `list` grows by 8 bytes with the number of elements, where each memory location uses up 8 bytes.This also reveals a shortcoming of the `getsizeof()` function. It only provides a "shallow" size of a collection, i.e., size of the head and the memory locations for the actual data. Copying a list variableSince a list does not directly contain the data, assigning another variable name to an existing list does not quite have the "desired" copying effect.
###Code
l = [4,5,21,12]
l1 = l
print(l)
print(l1)
l[2] = 12
print(l)
print(l1)
###Output
_____no_output_____
###Markdown
Looks like the list pointed to by `l1` has also changed! This is because the statement `l1 = l` does not create a new list. It just creates a new binding between the target (`l1`) and the actual list object. An explicit `copy()` is needed to create a new copy.
###Code
l1 = l.copy()
l[0] = -23
print(l)
print(l1)
l = ['b','u','f',[2,3,4]]
l1 = ['f','a','l','o']
l2 = l + l1
print(l2)
l[3][0] = -4
print(l)
print(l2)
###Output
_____no_output_____
###Markdown
List functions modify the original listThis is true for `append`, `extend`, `pop`, `insert`, and other supported functions for the `list` class.
###Code
print(l)
l.append(4)
print(l)
l2 = l + l1
###Output
_____no_output_____
###Markdown
However, binary operators such as `+` and `*` do not modify the existing object and return a new one.
###Code
print(l + [3,4,2])
print(l)
###Output
_____no_output_____
###Markdown
Understanding performance of python lists
###Code
# create a long list
l = []
i = 0
for i in range(10000000):
l.append(i)
i = i + 1
print(l)
del(l[1])
print(l)
l.remove(1)
print(l)
###Output
['b', 'f', [2, 3, 4]]
###Markdown
Access performanceWe first want to figure out how quickly can we access a particular value within a `list`. **lambda** functions - As an additional topic, we will also learn how to create quick, one-time functions in `Python` using the `lambda` keyword. We will refer to such functions as `lambda` functions. Basic syntax for a `lambda` function is:```lambda arguments : expression```The function can take any number of arguments as input (a comma separated "list"). But it can only have a single expression, whose value is returned as the return value of this function.For example:
###Code
double = lambda x: 2*x
print(double(4))
add = lambda x,y: x + y
print(add(8,12))
exponent = lambda x,y=10: pow(y,x)
print(exponent(2))
print(exponent(2,4))
###Output
8
20
100
16
###Markdown
Difference between `lambda` and `def`:There are two ways to create a function. Both have almost the same effect, but with some minor differences:1. `lambda` functions can have limited capabilities since all the logic has to be squeezed into a single expression2. `lambda` functions can be defined in places where `def` functions cannot be defined. This benefit will become apparent when we talk about "functional programming", e.g., `map`, `reduce` and `filter`.Anyways, getting back to studying the performance of lists.
###Code
list_access = lambda l,pos: l[pos]
timeit list_access(l,0)
l = list(range(1000000))
timeit list_access(l,0)
timeit list_access(l,500000)
###Output
100 ns ± 0.529 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
###Markdown
Looks like the access time is independent of where we want to access the data from._Next question is: does it depend on the size of the list?_
###Code
l1 = l[0:1000]
timeit list_access(l1,0)
l1 = l[0:10000]
timeit list_access(l1,0)
###Output
103 ns ± 1.07 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
###Markdown
In general, one can say that list access takes $O(1)$ (or constant) time. >The $O()$ or the `big-oh` notation is used in computer science to denote the worst case complexity (in terms of memory or speed) of an algorithm.>If an algorithm has $O(1)$ time complexity, it means that the time taken to run the algorithm, is a constant factor, and does not depend on the input size. Typically, this is the fastest algorithm you can design.>If an algorithm has $O(n)$ time complexity, where $n$ is the size of the input, it means that in worst case the algorithm will take $c\times n$ time, where $c$ is a constant factor. This means that the time taken by the algorithm grows linearly with the size of the input. This is also considered as an efficient algorithm.>However, a $O(n^2)$ algorithm is typically considered to be inefficient, since the time grows quadratically with the size of the data. Such (and higher complexity) algorithms should be avoided. >But for many problems, one might not find algorithms that are linear or sub-linear. For instance, the best sorting algorithm is $O(n\log{n})$. The best algorithm for multiplying two matrices is $O(n^{2.8})$ (the _Strassen_ algorithm).Coming back to lists. Let us now study the performance of list deletion.
###Code
# create a long list
l = []
i = 0
for i in range(10000000):
l.append(i)
i = i + 1
timeit del l[0]
# create a long list
l = []
i = 0
for i in range(10000000):
l.append(i)
i = i + 1
timeit del l[5000000]
# create a long list
l = []
i = 0
for i in range(10000000):
l.append(i)
i = i + 1
timeit del l[9000000]
###Output
357 µs ± 12.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
###Markdown
What we observe above is that the time taken to delete a value from a `list` depends on the location from where we want to delete the item. In fact, deleting elements towards the beginning of the list is significantly slower than deleting elements from the end of the list. Strings
###Code
# creating a new string
s1 = 'This is great'
s2 = "This is even greater"
s3 = '''This is the greatest'''
s4 = """This is also equally good"""
ss = '''This is a string with multiple lines
This is the second line of the string'''
print(ss)
ss = "My string has a \"quote\""
print(ss)
print(s1)
print(s2)
print(s3)
print(s4)
# we typically reserve three quotes for multi-line strings
# s5 = 'This is a multi line string.      <- a single-quoted string cannot span lines (SyntaxError)
# where we need multiple lines.'
s5 = '''This is a multi line string.
where we need multiple lines.'''
print(s5)
s5 = """This is also a multi line string.
This uses double quotes"""
print(s5)
l = [3,4,5]
print(l)
l[2] = 9
print(l)
s1[0] = 'G'
###Output
_____no_output_____
###Markdown
ImmutabilityA general concept. Objects of certain types in Python are immutable whereas others are mutable.
###Code
# for instance, lists
l = ['t','h','i','s']
print(l)
l[3] = 4
print(l)
l = ['1','2','3',4]
#for i in range(len(l)):
# l[i] = int(l[i])
l = [int(i) for i in l]
# now consider, strings
s = 'this'
print(s)
s[3] = 'n'
s = 'this'
s+=s
s = s + s
print(s)
# del will not work with strings, either
del(s[4])
s = ['this','is','a','list']
s*4
s = 'this'
for x in s:
print(x)
# you can, however, delete an entire string object
s = 'this'
print(s)
del s
print(s)
###Output
_____no_output_____
###Markdown
Accessing `str` elementsSimilar operations as for `list`
###Code
s = 'time flies liKe an arrow, fruit flies like a banana'
print(s[0:2])
print(s[::-1])
print(s[0:9:2])
t = s[::-1]
print(t)
# getting length of a string - using in-built Python function
print(len(s))
s1 = ''
print(len(s1))
# some useful methods defined for str
print(s.capitalize())
print(s.casefold())
print(s.count('e'))
print(s.center(30))
print(len(s.center(30)))
# Concatenating multiple strings - using the + operation
t = 'fruit flies like a banana'
print(s+t)
print(s.casefold()+', '+t)
###Output
_____no_output_____
###Markdown
Searching within a stringTwo methods are provided, `find` and `index`, both return the starting index of a substring. However, `index` throws an exception if the substring is not found. `find` returns -1. Both allow specifying start and end index to do the search.
###Code
print(s)
# searching in a string
print(s.find('flies'))
print(s.find('fly'))
print(s.index('flies'))
print(s.index('fly'))
print(s.find('li',10))
###Output
_____no_output_____
###Markdown
Regular Expressions in `Python`Regular expressions are used to identify whether a pattern exists in a given sequence of characters (string) or not. They help in manipulating textual data, which is often a pre-requisite for data science projects that involve text mining. For example:> Consider the problem of parsing a text file for user information. For example, you might be interested in checking if each line in a file is of following format or not:```firstname lastname phonenumber emailaddress```or```firstname phonenumber emailaddress```where _firstname_ and _lastname_ are plain strings of alphabets with first letter in upper case, _phonenumber_ is of format (xxx) xxx-xxxx (x - a digit), and _emailaddress_ is of the format [email protected]. Let the file contain following lines:```Varun Chandola (716) 645-4747 [email protected] Simpson (800) 555-7666 homer@simpsonBart (800) 111-7452 [email protected] potter (111) 222-3333 [email protected] 777 (777) 777-7777 [email protected]```You need to create a code that returns `True` if the line matches the above specified pattern and `False` otherwise.There is a first principle way of writing this. But you will soon realize that code will fast become cumbersome, error-prone, and large; in other words **plain ugly!!!*.Fortunately, the theoretical Computer Science community offers the notion of __regular expressions__ to find an easy solution to this problem. In `Python`, regular expressions are supported by the `re` module.
###Code
import re
###Output
_____no_output_____
###Markdown
Basic Patterns: Ordinary CharactersOrdinary characters are the simplest regular expressions. They match themselves exactly and do not have a special meaning in their regular expression syntax.Examples are 'A', 'a', 'X', '5'.Ordinary characters can be used to perform simple exact matches:
###Code
def re_match(pattern,query):
if re.match(pattern,query):
print("A Match!!!")
else:
print("Not a match!!!")
pattern=r'V'
query='Varun'
re_match(pattern,query)
s = r'This is a string.\n This is second line.'
print(s)
###Output
This is a string.\n This is second line.
###Markdown
The _pattern_ contains the string pattern that we are trying to find. As a general practice we add a `r` character at the beginning of the string, that indicates that it is a raw string. While in the above example, the pattern is just a literal string, in general it can be much more flexible and will be referred to as a __regular expression__ or a __regex__.> Both strings and regular expressions often contain the _backslash_ (`\`) character to indicate special treatment of a character. For example, one can denote a new line character using `\n`. By using a _raw string_, we are telling the `re` module to not process the backslash in the pattern string before actually passing it to the function.
###Code
print(r'Var\nun')
###Output
_____no_output_____
###Markdown
The `match()` function returns a match object if the pattern matches at the beginning of the query string, and `None` otherwise. While here we are using it to just check whether the query starts with the pattern, we can use the same returned object (in case of a match) for more detailed output. More on this later.
###Code
re_match(r'V','Chandola, Varun')
###Output
Not a match!!!
###Markdown
`Python` offers two different primitive operations based on regular expressions: `re.match()` checks for a match only at the beginning of the string, while `re.search()` checks for a match anywhere in the string
###Code
def re_search(pattern,query):
if re.search(pattern,query):
print("A Match!!!")
else:
print("Not a match!!!")
re_search(r'V','Chandola, Varun')
re_search(r'@', '[email protected]')
re_search(r'\d$','4I am 4 twenty four4')
re_search(r'^.','9 am 4 twenty four')
re_match(r'[^\w]','%%&*%*&%^*&%^')
###Output
A Match!!!
###Markdown
Wild Card Characters: Special CharactersSpecial characters are characters which do not match themselves as seen but actually have a special meaning when used in a regular expression.The most common wildcards are:Wild Card | Meaning | Effect----------|---------|-------------`.` | A period| Matches any single character except newline character.`\w`|Lowercase w| Matches any single letter, digit or underscore.`\W`|Uppercase w| Matches any character not part of \w (lowercase w).`\s`|Lowercase s| Matches a single whitespace character like: space, newline, tab, return.`\S`|Uppercase s| Matches any character not part of \s (lowercase s).`\t`|Lowercase t|Matches tab.`\n`|Lowercase n|Matches newline.`\r`|Lowercase r|Matches return.`\d`|Lowercase d|Matches decimal digit 0-9.`^` |Caret| Matches a pattern at the start of the string.`$`|Dollar|Matches a pattern at the end of string.`[abc]`| Or|Matches a or b or c.`[a-zA-Z0-9]`|Or|Matches any letter from (a to z) or (A to Z) or (0 to 9).`[^a]`|Else|Matches any character except `a`.`\A`|Uppercase a|Matches only at the start of the string.`\b`|Lowercase b|Matches only the beginning or end of the word.`\`|Backslash|If the character following the backslash is a recognized escape character, then the special meaning of the term is taken. For example, `\n` is considered as newline. However, if the character following the `\` is not a recognized escape character, then the `\` is treated like any other character and passed through
###Code
re_search(r'.','Anything goes')
re_search(r'\w','%@#@$##$')
re_search(r'\w','[email protected]')
re_search(r'\W','%@#@$##$')
re_search(r'\W','vsdfscom')
re_search(r'\s','Varun Chandola')
re_search(r'\s','VarunChandola')
re_search(r'\S','VarunChandola')
re_search(r'\t','Varun\tChandola')
re_search(r'\t','Varun Chandola')
re_search(r'\d','716-232-2323')
re_search(r'\d','[email protected]')
###Output
_____no_output_____
###Markdown
Searching for patterns in the beginning or end using `^` and `$`
###Code
re_search(r'^\d','4chan')
re_search(r'^\d','whois4chan')
re_search(r'^Var','Varun Chandola')
re_search(r'^Var','I am Varun Chandola')
re_search(r'\d$','chan4')
re_search(r'\d$','whois4chan')
re_search(r'ola$','Varun Chandola')
re_search(r'ola$','I am Chandola, Varun')
###Output
_____no_output_____
###Markdown
Searching for one of the possible patterns
###Code
re_search(r'[VvCc]','Varun')
re_search(r'[VvCc]','varun')
re_search(r'[VvCc]','Chandola')
re_search(r'[VvCc]','James Bond')
###Output
A Match!!!
A Match!!!
A Match!!!
Not a match!!!
###Markdown
Specifying ranges
###Code
re_search(r'[a-fA-F0-9]','a quick brown fox')
re_search(r'[a-fA-F0-9]','Stump')
re_search(r'[^a]','aaaaaa')
re_search(r'[^a]','This contains aaaaa')
###Output
Not a match!!!
A Match!!!
###Markdown
Now we can start combinging different wild cards to create more expressive regexes.
###Code
re_search(r'[VvCc]','Name is Varun')
re_search(r'^[VvCc]','I am Varun')
###Output
_____no_output_____
###Markdown
RepetitionsFinding repetitive patterns is a necessity in regular expression matching. For example, you might be interested in checking if a string consists of only digits between 0 and 4. This can be handled using one of the following repetition special characters:Character|Effect---------|------`+`|Check for **one or more** characters to its left`*`|Checks for **zero or more** characters to its left`?`|Checks for **exactly zero or one character** to its left{x}|Checks for **exactly x times** repeat occurence of a character to its left{x,}|Checks for **at least x times** repeat occurence of a character to its left{x,y}|Checks for **at least x and not more than y times** repeat occurence of a character to its left
###Code
re_search(r'Varun+Chandola','VarunChandola')
re_search(r'Varun+Chandola','Varun Chandola')
re_search(r'Varun+Chandola','VarunnnnnnnnChandola')
re_search(r'Varun+Chandola','VaruChandola')
re_search(r'Varun+Chandola','My name is VarunnnnnnChandola')
re_search(r'\d+8{3}\d+','238882324532')
'\s'
re_search(r'Varun\s?Chandola','Varun Chandolaishere')
re_search(r'Varun?Chandola','VarunnChandola is here')
re_search(r'\d{10,11}','My number is 166454747')
# a phone number matcher
# (716) 645-4747
re_search(r'\(\d{3}\)\s\d{3}-\d{4}','716-645-4747')
# a more relaxed phone number matcher
regex = r'(\d{1}\s)?\(\d{3}\)\s\d{3}-\d{4}'
re_search(regex,'1 (800) 444-4444')
re_search(regex,'(716) 645-4747')
re_search(regex,'1 800 645-4777')
re_search(regex,' (800) 645-4777')
regex = r'(ACGT)+'
reout = re.search(regex,'uuuACGTACGTACGTACGTnnn')
reout.group(1)
###Output
_____no_output_____
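###Markdown
As a small illustration (this pattern and the test lines are our own additions, not part of the original notes), the wildcards and repetitions above can be combined into a single regex for the user-information line format described earlier (`firstname lastname phonenumber emailaddress`, with the last name optional):
###Code
# illustrative sketch: capitalized name(s), a (xxx) xxx-xxxx phone number, then an email address
line_regex = r'^[A-Z][a-z]+( [A-Z][a-z]+)? \(\d{3}\) \d{3}-\d{4} [\w.-]+@[\w.-]+\.\w+$'
re_match(line_regex, 'Jane Doe (716) 555-1234 jane@example.com')   # matches
re_match(line_regex, 'Homer Simpson (800) 555-7666 homer@simpson') # fails: email has no .domain
###Output
_____no_output_____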
###Markdown
GroupsWe have been using the `re.search` and `re.match` functions to check if a pattern exists in a string or not. However, we can do much more than that.For example, if we want to check for multiple patterns within a string, then we can use the _grouping_ feature of regular expressions. Let us consider a code to extract the various components of an email address:```[email protected]```We can divide a regex into individual parts using parentheses. Each part is known as a _group_. The `re.search` and `re.match` can then return the groups for us to further process.
###Code
regex = r'.+@.+\..+'
reout = re.search(regex,'[email protected]')
print(reout.groups())
regex = r'(.+)@(.+)\.(.+)'
reout = re.findall(regex,'[email protected] [email protected] [email protected]')
#parts = reout.group(3)
#print(type(parts))
#print(parts)
# findall returns a list of (user, domain, tld) tuples rather than a match object
reout
###Output
_____no_output_____
###Markdown
You can also access the groups using the `group()` function.
###Code
# group() needs a match object, so run search on a single address first
reout = re.search(regex, '[email protected]')
# this gives the whole matched text
reout.group()
reout.group(1)
###Output
_____no_output_____
###Markdown
Greedy vs. non-greedy matching By default, a wild card within a regular expression is greedy: it matches as much of the string as it can. But sometimes you want it to match as little as possible instead, stopping at the first occurrence of the following pattern. This can be done by adding a `?` qualifier (also known as the *non-greedy modifier*) after the wildcard part.
###Code
heading = '<h1>TITLE</h1>'
re.search(r'<.*>',heading).group()
heading = '<h1>TITLE</h1>'
re.search(r'<.*?>',heading).group()
###Output
_____no_output_____
###Markdown
`re` moduleWe have already seen how `re.search()` and `re.match()` work in a vanilla form. But they offer much more. Additionally, the `re` module supports many more functionalities.- `re.search(pattern,string, flags=0)` - Scan through the given string/sequence looking for the first location where the regular expression produces a match. It returns a corresponding match object if found, else returns `None` if no position in the string matches the pattern.- `re.match(pattern,string, flags=0)` - Returns a corresponding match object if zero or more characters at the beginning of string match the pattern. Else it returns `None`, if the string does not match the given pattern.- `re.findall(pattern,string, flags=0)` - Finds all the possible matches in the entire sequence and returns them as a list of strings. Each returned string represents one match.
###Code
text = '''<tag1>This is the body of the first tag</tag1>
<tag2>This is the body of the second tag</tag2>
<tag3>This is the body of the third tag</tag3>
'''
reout = re.findall(r'(<.*>)(.*)(<.*>)',text)
reout[0]
strs = re.findall(r'<.*>',text)
print(strs[0])
reout = re.search(r'(<.*>)(.*)(<.*>)',strs[0])
reout.group(2)
###Output
<tag1>This is the body of the first tag</tag1>
###Markdown
- `sub(pattern, repl, string, count=0, flags=0)`This is the substitute function. It returns the string obtained by replacing or substituting the leftmost non-overlapping occurrences of pattern in string by the replacement repl. If the pattern is not found then the string is returned unchanged.
###Code
email_address = 'Please contact us at: [email protected]'
new_email_address = re.sub(r'([\w\.-]+)@([\w\.-]+)', r'[email protected]', email_address)
print(new_email_address)
regex = 'very complicated pattern '
re.search(regex,'lsdfhslkdhfks')
re.search(regex,'sdfnlskdhflkshdflk')
pattern = re.compile(regex)
pattern.search('slkdjflsdjfl')
pattern.search('slkdjflsdjlf')
pattern.search('pyoyoqijoioe')
###Output
_____no_output_____
###Markdown
`re.compile()`Compiles a regular expression pattern into a regular expression object. When you need to use an expression several times in a single program, using the `compile()` function to save the resulting regular expression object for reuse is more efficient. This is because the compiled versions of the most recent patterns passed to `compile()` and the module-level matching functions are cached.
###Code
regex = r'(.*)@(.*)\.(.*)'
pattern = re.compile(regex)
pattern.search('[email protected]').groups()
pattern.search('[email protected]').groups()
###Output
_____no_output_____
###Markdown
A small parsing exerciseExtract all image URLs within an HTML page.
###Code
f = open('people.html')
txt = f.read()
f.close()
txt
regex = r'<img.*src=(\".*\")>(.*)(</img>)'
#regex = r'<a.*href=(\".*\")>(.*)(</a>)'
pattern = re.compile(regex)
res = pattern.findall(txt)
for r in res:
print(r[0])
###Output
"img/chandola.png"
"img/suchismit.jpg"
"img/ducthanh.jpg"
"img/jialiang.png"
"img/niyaziso.jpg"
"img/arshad.jpg"
"img/yanboguo.png"
"img/unknown.jpg"
###Markdown
An expression's behavior can be modified by specifying a flags value. You can add a flag as an extra argument to the various functions. Some of the flags used are: IGNORECASE, DOTALL, MULTILINE, VERBOSE, etc. TuplesAnother immutable data structure that can be used to define an arbitrary sequence of objects
###Code
t = (3,5,2.1,2.7,"yes",[3,2,4])
print(type(t))
# accessing elements is very similar to list
print(t[0:2])
print(t[::-1])
# immutable
del(t[2])
t[2] = 8
t = 4,3,5,2
print(type(t))
# you can modify a mutable object (such as a list)
del(t[-1][0])
print(t)
# direct assignment between two tuples
a,b,c,d = 3,4,5,2
def func(a,b):
return a+b,a-b,a*b,a/b
sm,dif,prod,div = func(4,5)
t = 1,2,3,4
print(type(t))
t[0] = 4
codes = [234,137,471,286]
names = ['erie','ramsey','clinton','niagara']
# give me the name of the county with code 471
i = codes.index(471)
print(names[i])
codenames = [(234,'erie'),(137,'ramsey')]
###Output
_____no_output_____
###Markdown
DictionariesThe only built-in data structure for creating **maps**, a collection of `key-value` pairs.Dictionaries are:- Mutable- Unordered- Indexed (for fast lookups)Highly useful in almost every application, where quick access to different types of data is needed.
###Code
# creating an empty dictionary
d = {}
print(type(d))
# creating a dictionary with values
d = {'k1': 6,
'k2': 8,
'k3':12,
(4,4,3,2): {}}
print(d)
d['k6']
# accessing the value for a given key
print(d['k1'])
print(d.get('k1'))
# get number of key value pairs in the dictionary
len(d)
# adding values to a dictionary
d = {}
d["k1"] = 6
d["k2"] = 8
d["k4"] = 9
d["k3"] = 12
d["k2"] = 19
print(d)
keys = d.keys()
tuple(keys)
# accessing keys and values
keys = d.keys()
values = d.values()
print(keys)
print(values)
l_keys = list(keys)
print(l_keys)
l_values = list(values)
print(l_values)
# iterating over dictionary
for k in d.keys():
print(str(k)+": "+str(d[k]))
# simultaneously iterating over keys and values
for k,v in d.items():
print(str(k)+": "+str(v))
# checking for a key in dictionary
print('k4' in d)
print('k9' in d)
# accessing non-existent key value
a = d['k9']
# can use any immutable object as a key
d[9] = 49
print(d)
d[('n1',4,5)] = 56
print(d)
# but not a mutable object
d[[6,4,5]] = 63
# value can be of any type (mutable or immutable)
d[7] = d
print(d)
###Output
{'k1': 6, 'k2': 19, 'k4': 9, 'k3': 12, 9: 49, ('n1', 4, 5): 56, 7: {...}}
###Markdown
Hash Based IndexingPython dictionaries are implemented as a _hash table_. Each `key` is hashed using a hash-function (see `hash()` for a similar function in `Python`). The hash allows you to directly access the location containing the `value` in $O(1)$ time.
###Code
# All immutables types are hashable
print(hash(1))
print(hash('Varun'))
print(hash(('varun','anynumber')))
print(hash(('Varun','anynumber')))
help(hash)
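# (illustrative addition) a hash table turns the hash into a bucket index,
# which is what gives dictionaries their O(1) average lookup time, e.g. with 8 buckets:
n_buckets = 8
print(hash('Varun') % n_buckets)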
# All mutable types are not
print(hash(['varun',10]))
###Output
_____no_output_____
###Markdown
The "hashability" property determines if a type maybe used as a `key` in a dictionary. Clearly, a mutable type cannot be used as a key because its hash value can change over the course of the program. Why use Dictionaries?_Example_: Consider an application that requires the geographical coordinates for a US County name. We will build two mini applications: one that relies on lists, and one that uses dictionaries **Solution 1** Using list of tuples
###Code
import csv #we are going to use the csv module available in Python to read a csv file
def listcreate():
# reading the csv file with county data
data = []
# the open command uses the iso-8859-1 encoding instead of the usual utf-8 because the file contains accented characters
f = open('./us_counties.csv',encoding='iso-8859-1')
#with open('./us_counties.csv',encoding='utf-8') as f:
reader = csv.reader(f)
next(reader) # skip the header using the in-built function next which just skips one entry in an iterator
for row in reader:
#row will be a list consisting of all the tokens in a given row of the file
name = row[3]
lat = float(row[10])
lon = float(row[11])
data.append((name,lat,lon))
f.close()
return data
timeit listcreate()
data = listcreate()
###Output
_____no_output_____
###Markdown
Now let us say, we need the coordinates of a given county
###Code
countyname = 'Ponce Municipio'
# only way to find the coordinates will be to iterate through data
for d in data:
if d[0] == countyname:
print(d)
break
###Output
('Ponce Municipio', 18.001717, -66.606662)
###Markdown
**Solution 1 (a)** Using separate lists
###Code
def listcreate2():
# reading the csv file with county data
names = []
coordinates = []
# the open command uses the iso-8859-1 encoding instead of the usual utf-8 because the file contains accented characters
with open('./us_counties.csv',encoding='iso-8859-1') as f:
#with open('./us_counties.csv',encoding='utf-8') as f:
reader = csv.reader(f)
next(reader) # skip the header using the in-built function next which just skips one entry in an iterator
for row in reader:
#row will be a list consisting of all the tokens in a given row of the file
name = row[3]
lat = float(row[10])
lon = float(row[11])
names.append(name)
coordinates.append((lat,lon))
return names,coordinates
names,coordinates = listcreate2()
countyname = 'Ponce Municipio'
i = names.index(countyname)
print(coordinates[i])
###Output
(18.001717, -66.606662)
###Markdown
**Solution 2** Using dictionary
###Code
def dictcreate():
# reading the csv file with county data
coordinateMap = {} #initializing an empty dictionary
# the open command uses the iso-8859-1 encoding instead of the usual utf-8 because the file contains accented characters
with open('./us_counties.csv',encoding='iso-8859-1') as f:
#with open('./us_counties.csv',encoding='utf-8') as f:
reader = csv.reader(f)
next(reader) # skip the header using the in-built function next which just skips one entry in an iterator
for row in reader:
#row will be a list consisting of all the tokens in a given row of the file
name = row[3]
lat = float(row[10])
lon = float(row[11])
coordinateMap[name] = (lat,lon)
return coordinateMap
coordinateMap = dictcreate()
countyname = 'Ponce Municipio'
print(coordinateMap[countyname])
###Output
(18.001717, -66.606662)
###Markdown
Which approach is better? Clearly solution 1 requires extra lines for accessing data. Solution 1a is shorter; however, it requires two lookups: first finding the index of the name in the names list and then getting the corresponding coordinate entry.Which one is faster? Let us first investigate the creation of the data structures.
###Code
timeit listcreate()
timeit dictcreate()
###Output
6.95 ms ± 37.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
Both seem to be equal. In fact, creating lists might be faster than the dictionary. How about accessing?
###Code
countyname = 'Ponce Municipio'
def listaccess(names,coordinates,countyname):
i = names.index(countyname)
c = coordinates[i]
return c
timeit listaccess(names,coordinates,countyname)
def dictaccess(coordinateMap,countyname):
c = coordinateMap[countyname]
return c
timeit dictaccess(coordinateMap,countyname)
###Output
121 ns ± 2.06 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
###Markdown
Aha! It appears that using a `dict` is much better than using `list` in terms of accessing relevant values Space requirementOf course, another issue is the space needed to store each of the representations in the memory. Clearly, the `dict` data structure would be larger as it requires storing the hash values as well as the original data.
###Code
import sys
print(sys.getsizeof(coordinateMap))
print(sys.getsizeof(coordinates)+sys.getsizeof(names))
###Output
73824
53472
###Markdown
Note on order of keysThe keys are unordered collections, which means that when you iterate over them, you might not get the same order in which they were added to the dictionary. This is different from a `list`. (Since Python 3.7, dictionaries do preserve insertion order, but it is safer not to rely on any particular key order.) You can use the inbuilt function `sorted()` to sort the keys.
###Code
d = {'key1': -9, 'key2': 'value2','key0': 'value0','key9': 2,'key7': 'value7'}
for k in d.keys():
print(k+":"+str(d[k]))
for k in sorted(d.keys()):
print(k+":"+str(d[k]))
###Output
key0:value0
key1:-9
key2:value2
key7:value7
key9:2
###Markdown
Sets`Set` is an unordered collection with no duplicate elements. Basic uses include membership testing and eliminating duplicate entries. Set objects also support mathematical operations like union, intersection, difference, and symmetric difference.Curly braces or the `set()` function can be used to create sets. Note: to create an empty set you have to use set(), not {}; the latter creates an empty dictionary.
###Code
s = set({3,5,6,3,'sdf'})
print(type(s))
print(s)
s = set({})
print(s)
s = set({4,5})
print(type(s))
print(s)
s = {4,5}
print(type(s))
print(s)
###Output
_____no_output_____
###Markdown
You can convert any other Python collection to set using the `set()` function.
###Code
s = set([3,6,2,1,3,45,6,3,2,1])
print(type(s))
print(s)
s = set({'s':1,'t':2})
print(type(s))
print(s)
s = set((4,3,'f'))
print(type(s))
print(s)
s1 = set(['apples','oranges','bananas'])
s2 = set(['pineapples','avocados','mangoes','apples','bananas'])
print(list(s1.intersection(s2)))
print(s1.union(s2))
print(s1.difference(s2))
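# (added for completeness) symmetric difference: elements in exactly one of the two sets
print(s1.symmetric_difference(s2))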
###Output
_____no_output_____ |
notebooks/ImagePatches.ipynb | ###Markdown
Image PatchesIn this module, we will explore the topology of different collections of image patches capturing line segments, which, as we will show using persistent homology and projective coordinates, concentrate on the projective plane $RP^2$. Each image patch is a square $d \times d$ region of pixels. Each pixel can be thought of as a dimension, so each patch lives in $\mathbb{R}^{d \times d}$, and a collection of patches can be thought of as a Euclidean point cloud in $\mathbb{R}^{d \times d}$First, we perform all of the necessary library imports.
###Code
import numpy as np
%matplotlib notebook
import matplotlib.pyplot as plt
from matplotlib.offsetbox import OffsetImage, AnnotationBbox
from ripser import ripser
from persim import plot_diagrams as plot_dgms
from dreimac import ProjectiveCoords, get_stereo_proj_codim1
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
We now define a few functions which will help us to sample patches from an image and to plot a collection of patches
###Code
def getPatches(I, dim):
"""
Given an image I, return all of the dim x dim patches in I
:param I: An M x N image
:param d: The dimension of the square patches
:returns P: An (M-d+1)x(N-d+1)x(d^2) array of all patches
"""
#http://stackoverflow.com/questions/13682604/slicing-a-numpy-image-array-into-blocks
shape = np.array(I.shape*2)
strides = np.array(I.strides*2)
W = np.asarray(dim)
shape[I.ndim:] = W
shape[:I.ndim] -= W - 1
if np.any(shape < 1):
raise ValueError('Window size %i is too large for image'%dim)
P = np.lib.stride_tricks.as_strided(I, shape=shape, strides=strides)
P = np.reshape(P, [P.shape[0]*P.shape[1], dim*dim])
return P
def imscatter(X, P, dim, zoom=1):
"""
Plot patches in specified locations in R2
Parameters
----------
X : ndarray (N, 2)
The positions of each patch in R2
P : ndarray (N, dim*dim)
An array of all of the patches
dim : int
The dimension of each patch
"""
#https://stackoverflow.com/questions/22566284/matplotlib-how-to-plot-images-instead-of-points
ax = plt.gca()
for i in range(P.shape[0]):
patch = np.reshape(P[i, :], (dim, dim))
x, y = X[i, :]
im = OffsetImage(patch, zoom=zoom, cmap = 'gray')
ab = AnnotationBbox(im, (x, y), xycoords='data', frameon=False)
ax.add_artist(ab)
ax.update_datalim(X)
ax.autoscale()
ax.set_xticks([])
ax.set_yticks([])
def plotPatches(P, zoom = 1):
"""
Plot patches in a best fitting rectangular grid
"""
N = P.shape[0]
d = int(np.sqrt(P.shape[1]))
dgrid = int(np.ceil(np.sqrt(N)))
ex = np.arange(dgrid)
x, y = np.meshgrid(ex, ex)
X = np.zeros((N, 2))
X[:, 0] = x.flatten()[0:N]
X[:, 1] = y.flatten()[0:N]
imscatter(X, P, d, zoom)
###Output
_____no_output_____
###Markdown
Finally, we add a furthest points subsampling function which will help us to subsample image patches when displaying them
###Code
def getCSM(X, Y):
"""
Return the Euclidean cross-similarity matrix between the M points
in the Mxd matrix X and the N points in the Nxd matrix Y.
:param X: An Mxd matrix holding the coordinates of M points
:param Y: An Nxd matrix holding the coordinates of N points
:return D: An MxN Euclidean cross-similarity matrix
"""
C = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2*X.dot(Y.T)
C[C < 0] = 0
return np.sqrt(C)
def getGreedyPerm(X, M, Verbose = False):
"""
Purpose: Naive O(NM) algorithm to do the greedy permutation
:param X: Nxd array of Euclidean points
:param M: Number of points in returned permutation
:returns: (permutation (N-length array of indices), \
lambdas (N-length array of insertion radii))
"""
#By default, takes the first point in the list to be the
#first point in the permutation, but could be random
perm = np.zeros(M, dtype=np.int64)
lambdas = np.zeros(M)
ds = getCSM(X[0, :][None, :], X).flatten()
for i in range(1, M):
idx = np.argmax(ds)
perm[i] = idx
lambdas[i] = ds[idx]
ds = np.minimum(ds, getCSM(X[idx, :][None, :], X).flatten())
if Verbose:
interval = int(0.05*M)
if i%interval == 0:
print("Greedy perm %i%s done..."%(int(100.0*i/float(M)), "%"))
Y = X[perm, :]
return {'Y':Y, 'perm':perm, 'lambdas':lambdas}
###Output
_____no_output_____
###Markdown
Oriented Line SegmentsWe now examine the collection of patches which hold oriented, slightly blurry line segments that are varying distances from the center of the patch. First, let's start by setting up the patches. Below, the "dim" variable sets the patch resolution, and the "sigma" variable sets the blurriness (a larger sigma means blurrier line segments).
###Code
def getLinePatches(dim, NAngles, NOffsets, sigma):
"""
Sample a set of line segments, as witnessed by square patches
Parameters
----------
dim: int
Patches will be dim x dim
NAngles: int
Number of angles to sweep between 0 and pi
NOffsets: int
Number of offsets to sweep from the origin to the edge of the patch
sigma: float
The blur parameter. Higher sigma is more blur
"""
N = NAngles*NOffsets
P = np.zeros((N, dim*dim))
thetas = np.linspace(0, np.pi, NAngles+1)[0:NAngles]
#ps = np.linspace(-0.5*np.sqrt(2), 0.5*np.sqrt(2), NOffsets)
ps = np.linspace(-1, 1, NOffsets)
idx = 0
[Y, X] = np.meshgrid(np.linspace(-0.5, 0.5, dim), np.linspace(-0.5, 0.5, dim))
for i in range(NAngles):
c = np.cos(thetas[i])
s = np.sin(thetas[i])
for j in range(NOffsets):
patch = X*c + Y*s + ps[j]
patch = np.exp(-patch**2/sigma**2)
P[idx, :] = patch.flatten()
idx += 1
return P
P = getLinePatches(dim=10, NAngles = 16, NOffsets = 16, sigma=0.25)
plt.figure(figsize=(8, 8))
plotPatches(P, zoom=2)
ax = plt.gca()
ax.set_facecolor((0.7, 0.7, 0.7))
plt.show()
###Output
_____no_output_____
###Markdown
Now let's compute persistence diagrams for this collection of patches. This time, we will compute with both $\mathbb{Z}/2$ coefficients and $\mathbb{Z}/3$ coefficients up to H2.
###Code
dgmsz2 = ripser(P, coeff=2, maxdim=2)['dgms']
dgmsz3 = ripser(P, coeff=3, maxdim=2)['dgms']
plt.figure(figsize=(8, 4))
plt.subplot(121)
plot_dgms(dgmsz2)
plt.title("$\mathbb{Z}/2$")
plt.subplot(122)
plot_dgms(dgmsz3)
plt.title("$\mathbb{Z}/3$")
plt.show()
###Output
_____no_output_____
###Markdown
Notice how there is one higher persistence dot both for H1 and H2, which both go away when switching to $\mathbb{Z} / 3\mathbb{Z}$. This is the signature of the projective plane! To verify this, we will now look at these patches using "projective coordinates" (finding a map to $RP^2$).
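(For reference: with $\mathbb{Z}/2$ coefficients the projective plane has $H_1 \cong H_2 \cong \mathbb{Z}/2$, while with $\mathbb{Z}/3$ coefficients both of these groups vanish, which is exactly the disappearing-dot behavior in the diagrams above.)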
###Code
def plotProjBoundary():
t = np.linspace(0, 2*np.pi, 200)
plt.plot(np.cos(t), np.sin(t), 'c')
plt.axis('equal')
ax = plt.gca()
ax.arrow(-0.1, 1, 0.001, 0, head_width = 0.15, head_length = 0.2, fc = 'c', ec = 'c', width = 0)
ax.arrow(0.1, -1, -0.001, 0, head_width = 0.15, head_length = 0.2, fc = 'c', ec = 'c', width = 0)
ax.set_facecolor((0.35, 0.35, 0.35))
P = getLinePatches(dim=10, NAngles = 200, NOffsets = 200, sigma=0.25)
proj = ProjectiveCoords(P, n_landmarks=100)
h1 = proj.dgms_[1]
# Find the index with greatest persistence in H1 and use
# the cocycle corresponding to that
idx = np.argmax(h1[:, 1]-h1[:, 0])
print("Max persistence index {}, peristence {}".format(idx, h1[idx, 1]-h1[idx, 0]))
res = proj.get_coordinates(proj_dim=2, perc=0.9, cocycle_idx=[idx])
X = res['X']
idx = getGreedyPerm(X, 400)['perm']
SFinal = get_stereo_proj_codim1(X[idx, :])
P = P[idx, :]
plt.figure(figsize=(8, 8))
imscatter(SFinal, P, 10)
plotProjBoundary()
plt.show()
###Output
Max persistence index 0, persistence 2.4123587608337402
|
Seaborn Study/sources/6 热图HEATMAPPLOT.ipynb | ###Markdown
6 Heatmap plot A heatmap is a graphical representation of a matrix in which each individual value is encoded as a color. Heatmaps are very useful for showing a general view of numerical data; they are simple to make and do not require extracting specific data points. In seaborn we use the heatmap function to draw heatmaps, and we also use the clustermap function to draw a dendrogram together with a heatmap. The main contents of this chapter are:1. Basic Heatmap plot2. Customize seaborn heatmap3. Use normalization on heatmap4. Dendrogram with heatmap
###Code
# library: import the libraries we need
import seaborn as sns
import pandas as pd
import numpy as np
# make jupyter notebook display the output of every expression in a cell
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = 'all'
###Output
_____no_output_____
###Markdown
1. Basic Heatmap plot+ Basic Heatmap+ Correlation matrix heatmap+ Half heatmap of a correlation matrix+ Basic Heatmap of long format data
###Code
# Basic Heatmap
# Create a dataset (fake): a 5x5 matrix of random values
df = pd.DataFrame(np.random.random((5,5)), columns=["a","b","c","d","e"])
# show the data
df
# Default heatmap: just a visualization of this square matrix
p1 = sns.heatmap(df)
# Correlation matrix heatmap
# A common task is to check whether some variables are correlated: compute the correlation between each pair of variables and plot it as a heatmap to see which variables are related to each other.
# Create a dataset (fake)
df = pd.DataFrame(np.random.random((100,5)), columns=["a","b","c","d","e"])
df.head()
# Calculate correlation between each pair of variables
corr_matrix=df.corr()
# show the correlation matrix
corr_matrix
# plot it; cmap sets the color palette
sns.heatmap(corr_matrix, cmap='PuOr')
# Half heatmap of the correlation matrix
# Create a dataset (fake)
df = pd.DataFrame(np.random.random((100,5)), columns=["a","b","c","d","e"])
# Calculate correlation between each pair of variables
corr_matrix=df.corr()
# Can be great to plot only a half matrix: create a zero matrix of the same size as corr_matrix
mask = np.zeros_like(corr_matrix)
# np.triu_indices_from(mask) returns the indices of the upper triangle of the matrix
indices=np.triu_indices_from(mask)
# show the indices
indices
mask[np.triu_indices_from(mask)] = True
with sns.axes_style("white"):
    # cells selected by mask are hidden; square makes each cell square
    p2 = sns.heatmap(corr_matrix, mask=mask, square=True)
# Basic Heatmap of long format data
# create the people and feature lists
people=np.repeat(("A","B","C","D","E"),5)
feature=list(range(1,6))*5
value=np.random.random(25)
# create the data frame
df=pd.DataFrame({'feature': feature, 'people': people, 'value': value })
# plot it: build a pivot table first
df_wide=df.pivot_table( index='people', columns='feature', values='value' )
p2=sns.heatmap( df_wide, square=True)
###Output
_____no_output_____
###Markdown
2. Customize seaborn heatmap+ Annotate each cell with its value+ Custom grid lines+ Remove X or Y labels+ Hide a few axis labels to avoid overlapping+ Coordinate range setting of the color bar
###Code
# Create a dataset (fake)
df = pd.DataFrame(np.random.random((10,10)), columns=["a","b","c","d","e","f","g","h","i","j"])
# annot writes the value inside each cell; annot_kws["size"] sets the font size
sns.heatmap(df, annot=True, annot_kws={"size": 7});
# Custom grid lines
sns.heatmap(df, linewidths=2, linecolor='yellow');
# Remove X or Y labels
# xticklabels and yticklabels control the axis labels; cbar controls whether the color bar is shown
sns.heatmap(df, yticklabels=False, cbar=False);
# Hide a few axis labels to avoid overlapping
# an integer xticklabels only shows a label when its index is a multiple of that value
sns.heatmap(df, xticklabels=3);
# Coordinate range setting of the color bar
sns.heatmap(df, vmin=0, vmax=0.5);
###Output
_____no_output_____
###Markdown
3. Use normalization on heatmap+ Column normalization+ Row normalization
###Code
# Column normalization
# Sometimes one column has values far higher than the others, which pushes the heatmap colors to the two extremes, so the column data needs to be normalized
# Create a dataframe where the average value of the second column is higher:
df = pd.DataFrame(np.random.randn(10,10) * 4 + 3)
# make column 1 clearly larger than the other columns
df[1]=df[1]+40
# If we do a heatmap, we just observe that a column has higher values than others (no normalization)
sns.heatmap(df, cmap='viridis');
# Now if we normalize it by column
df_norm_col=(df-df.mean())/df.std()
sns.heatmap(df_norm_col, cmap='viridis');
# Row normalization
# The same principle used for column normalization applies to row normalization.
# Create a dataframe where the average value of the second row is higher
df = pd.DataFrame(np.random.randn(10,10) * 4 + 3)
df.iloc[2]=df.iloc[2]+40
# If we do a heatmap, we just observe that row 2 has higher values than the others
sns.heatmap(df, cmap='viridis');
# 1: subtract the row mean (row normalization)
df_norm_row=df.sub(df.mean(axis=1), axis=0)
# 2: divide by standard dev
df_norm_row=df_norm_row.div( df.std(axis=1), axis=0 )
# And see the result
sns.heatmap(df_norm_row, cmap='viridis');
###Output
_____no_output_____
###Markdown
4. Dendrogram with heatmap+ Dendrogram with heat map and coloured leaves+ Normalization of the dendrogram with heatmap+ Distance parameter of the dendrogram with heatmap+ Cluster method parameter of the dendrogram with heatmap+ Change color palette + Outlier handling. A dendrogram is the visual form of hierarchical clustering. The agglomerative algorithm computes the similarity between groups of data points, merges the two most similar data points (or clusters), and repeats this step iteratively. Put simply, the similarity between points is determined by the distance between them: the smaller the distance, the higher the similarity. The two closest data points or clusters are merged, which grows the cluster tree. In the dendrogram, the connecting lines represent the distance between the two groups being merged.
###Code
# Dendrogram with heat map and coloured leaves
from matplotlib import pyplot as plt
import pandas as pd
# Use the mtcars dataset: a few numeric variables describing the performance of several cars.
# Data set: download mtcars
url = 'https://python-graph-gallery.com/wp-content/uploads/mtcars.csv'
df = pd.read_csv(url)
df = df.set_index('model')
# the columns are the car performance variables, the index is the car model
df.head()
# Prepare a vector of color mapped to the 'cyl' column
# map the cylinder counts (6, 4, 8) to different colors
my_palette = dict(zip(df.cyl.unique(), ["orange","yellow","brown"]))
my_palette
# list the number of cylinders of each car
row_colors = df.cyl.map(my_palette)
row_colors
# metric: the distance measure, method: the linkage method used for clustering
# standard_scale: dimension to standardize (0: rows or 1: columns; subtract the minimum and divide each dimension by its maximum)
sns.clustermap(df, metric="correlation", method="single", cmap="Blues", standard_scale=1, row_colors=row_colors)
# Normalization of the dendrogram with heatmap
# Standardize or Normalize every column in the figure
# Standardize
sns.clustermap(df, standard_scale=1)
# Normalize (z-score)
sns.clustermap(df, z_score=1)
# Distance parameter of the dendrogram with heatmap
# correlation distance
sns.clustermap(df, metric="correlation", standard_scale=1)
# Euclidean distance
sns.clustermap(df, metric="euclidean", standard_scale=1)
# Cluster method parameter of the dendrogram with heatmap
# single-linkage algorithm
sns.clustermap(df, metric="euclidean", standard_scale=1, method="single")
# Ward linkage, usually recommended
sns.clustermap(df, metric="euclidean", standard_scale=1, method="ward")
# Change color palette
sns.clustermap(df, metric="euclidean", standard_scale=1, method="ward", cmap="mako")
sns.clustermap(df, metric="euclidean", standard_scale=1, method="ward", cmap="viridis")
# Outlier handling
# Ignore outliers
# Let's create an outlier in the dataset
df.iloc[15,5] = 1000
# robust=True ignores the outlier when computing the color range
sns.clustermap(df, robust=True)
# do not use it (the outlier dominates the color scale)
sns.clustermap(df, robust=False)
###Output
_____no_output_____ |
src/Feature Engineering.ipynb | ###Markdown
Power of Feature EngineeringCompare the performance of logistic regression to a DNN Classifier on a non-linear dataset. This is to show that similar accuracy to the DNN can be achieved by using logistic regression with transformations of the data. Prepare Data
###Code
import numpy as np
import pandas as pd
n_points = 2000
age = np.round(np.linspace(18,60,n_points),2) #age of employee
np.random.shuffle(age)
performance = np.linspace(-10,10,n_points) #performance score of employee
np.random.shuffle(performance)
noise = np.random.randn(n_points)
g = (0.5 * age) +2*(performance) + age**2 + 1000*age/performance -10000 + 1000*noise
y = [1 if y>=0 else 0 for y in g]
data = pd.DataFrame(data={'age':age,'performance':performance,'y':y})
print(sum(y))
data.head()
#data.to_csv('/Users/conorosully/Documents/git/deep-learning/data/performance.csv')
data = pd.read_csv('/Users/conorosully/Documents/git/deep-learning/data/performance.csv')
sum(data['y'])
import matplotlib.pyplot as plt
%matplotlib inline
plt.subplots(nrows=1, ncols=1,figsize=(15,10))
plt.scatter('age','performance',c='#ff2121',s=50,edgecolors='#000000',data=data[data.y == 1])
plt.scatter('age','performance',c='#2176ff',s=50,edgecolors='#000000',data=data[data.y == 0])
plt.ylabel("Performance Score",size=20)
plt.xlabel('Age',size=20)
plt.yticks(size=12)
plt.xticks(size=12)
plt.legend(['Promoted','Not Promoted'],loc =2,prop={"size":20})
plt.savefig('/Users/conorosully/Documents/git/deep-learning/figures/article_feature_eng/figure1.png',format='png')
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
from sklearn.model_selection import train_test_split
import sklearn.metrics as metric
import statsmodels.api as sm
x = data[['age','performance']]
y = data['y']
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.3, random_state = 101)
model = sm.Logit(y_train,x_train).fit() #fit logistic regression model
predictions = np.around(model.predict(x_test))
accuracy = metric.accuracy_score(y_test,predictions)
print(round(accuracy*100,2))
n_points = 1000000 #use many points to visualise the decision boundary
age_db = np.linspace(18,60,n_points)
np.random.shuffle(age_db)
performance_db= np.linspace(-10,10,n_points)
np.random.shuffle(performance_db)
data_db = pd.DataFrame({'age':age_db,'performance':performance_db})
#make predictions on the decision boundary points
predictions = model.predict(data_db)
y_db = [round(p) for p in predictions]
data_db['y'] = y_db
fig, ax = plt.subplots( nrows=1, ncols=1,figsize=(15,10))
#Plot decision boundary
plt.scatter('age','performance',c='#ffbdbd',s=1,data=data_db[data_db.y == 1])
plt.scatter('age','performance',c='#b0c4ff',s=1,data=data_db[data_db.y == 0])
#Plot employee data points
plt.scatter('age','performance',c='#ff2121',s=50,edgecolors='#000000',data=data[data.y == 1])
plt.scatter('age','performance',c='#2176ff',s=50,edgecolors='#000000',data=data[data.y == 0])
plt.ylabel("Performance Score",size=20)
plt.xlabel('Age',size=20)
plt.yticks(size=12)
plt.xticks(size=12)
plt.savefig('/Users/conorosully/Documents/git/deep-learning/figures/article_feature_eng/figure2.png',format='png')
###Output
_____no_output_____
###Markdown
Add transformations and interactions
###Code
data['age_sqrd'] = age**2
data['age_perf_ratio'] = age/performance
x = data[['age','performance','age_sqrd','age_perf_ratio']]
y = data['y']
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.3, random_state = 101)
model = sm.Logit(y_train,x_train).fit(disp=0) #fit new logistic regression model
predictions = np.around(model.predict(x_test))
metric.accuracy_score(y_test,predictions)
#Update decision boundary points
data_db.drop('y',axis=1,inplace=True)
data_db['age_sqrd'] = data_db['age']**2
data_db['age_perf_ratio'] = data_db['age']/data_db['performance']
#make predictions on the decision boundary points
predictions = model.predict(data_db)
y_db = [round(p) for p in predictions]
data_db['y'] = y_db
fig, ax = plt.subplots( nrows=1, ncols=1,figsize=(15,10))
#Plot decision boundary
plt.scatter('age','performance',c='#ffbdbd',s=1,data=data_db[data_db.y == 1])
plt.scatter('age','performance',c='#b0c4ff',s=1,data=data_db[data_db.y == 0])
#Plot employee data points
plt.scatter('age','performance',c='#ff2121',s=50,edgecolors='#000000',data=data[data.y == 1])
plt.scatter('age','performance',c='#2176ff',s=50,edgecolors='#000000',data=data[data.y == 0])
plt.ylabel("Performance Score",size=20)
plt.xlabel('Age',size=20)
plt.yticks(size=12)
plt.xticks(size=12)
plt.savefig('/Users/conorosully/Documents/git/deep-learning/figures/article_feature_eng/figureFinal.png',format='png')
###Output
_____no_output_____
###Markdown
DNN Classifier
###Code
x = data[['age','performance']]
y = data['y']
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.3, random_state = 101)
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(12, input_dim=2, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=100, batch_size=10) #fit ANN
accuracy = model.evaluate(x_test, y_test)
print(round(accuracy[1]*100,2))
#make predictions on the decision boundary points
predictions = model.predict(data_db[['age','performance']])
y_db = np.around(predictions )
data_db['y'] = y_db
fig, ax = plt.subplots( nrows=1, ncols=1,figsize=(15,10))
#Plot decision boundary
plt.scatter('age','performance',c='#ffbdbd',s=1,data=data_db[data_db.y == 1])
plt.scatter('age','performance',c='#b0c4ff',s=1,data=data_db[data_db.y == 0])
#Plot employee data points
plt.scatter('age','performance',c='#ff2121',s=50,edgecolors='#000000',data=data[data.y == 1])
plt.scatter('age','performance',c='#2176ff',s=50,edgecolors='#000000',data=data[data.y == 0])
plt.ylabel("Performance Score",size=20)
plt.xlabel('Age',size=20)
plt.yticks(size=12)
plt.xticks(size=12)
plt.savefig('/Users/conorosully/Documents/git/deep-learning/figures/article_feature_eng/figure_ann.png',format='png')
###Output
_____no_output_____ |
Data/Code/genre_split_test.ipynb | ###Markdown
Data cleaning: IMDB- title.crewThe following code merges director names in imdb.name.basics and director ids in imdb.title.crew. Name of clean datasets: "df_directors_wide" and "df_directors_long"
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 1: Cleaning imdb.title.crew
###Code
# Read original data
df_title_crew = pd.read_csv('Data/zippedData/imdb.title.crew.csv.gz')
df_title_crew.info()
df_title_crew.sample(5)
# Make a copy of the original dataset
df_crew= df_title_crew.copy()
# Delete 'writers'
df_crew.drop(['writers'], axis=1, inplace=True)
# Drop missing values in 'directors'
df_crew.dropna(axis=0, subset=['directors'], inplace=True)
# Drop duplicate in 'tconst', film id
df_crew.drop_duplicates(subset='tconst', inplace=True)
df_crew.head()
# Count how many directors in each cell.
# Create a new data frame with split directors columns
directors = df_crew['directors'].str.split(',', expand =True)
directors.info()
# Most films have only one or two directors.
# For this project, drop films with 4 or more directors
# Drop films with 4 or more directors
# Step 1 - split directors into director1, director2, director3, director4
# 'director4' has 4th and above directors
df_crew[['director1', 'director2', 'director3', 'director4']] = df_crew['directors'].str.split(',', n=3, expand =True)
# Step 2 - drop rows which 'director4' is not null.
df_crew.dropna(axis=0, subset=['director4'])
# Step 3 - drop 'directors' and 'director4' columns.
df_crew.drop(['directors', 'director4'], axis=1, inplace=True)
df_crew.sample(20)
###Output
_____no_output_____
###Markdown
Step 2: Cleaning imdb.name.basics
###Code
df_name = pd.read_csv('Data/zippedData/imdb.name.basics.csv.gz')
df_name.info()
df_name.sample(5)
# Keep 'nconst', 'primary_name'
df_name2 = df_name[['nconst', 'primary_name' ]]
df_name2
###Output
_____no_output_____
###Markdown
Step 3: Merge director ids (in df_crew) and director names (in df_name2)
###Code
# Merge names of directors for director1, and rename it director_name1
df_merge1 = df_crew.merge(df_name2, how='left', left_on='director1', right_on='nconst' )
# Rename
df_merge1.rename(columns={'primary_name':'director_name1'}, inplace=True)
# drop keys
df_merge1.drop(axis=1, columns='nconst', inplace=True)
df_merge1.head(2)
# Merge names of directors for director2, and rename it director_name2
df_merge2 = df_merge1.merge(df_name2, how='left', left_on='director2', right_on='nconst' )
# Rename
df_merge2.rename(columns={'primary_name':'director_name2'}, inplace=True)
# drop keys
df_merge2.drop(axis=1, columns='nconst', inplace=True)
df_merge2.head(2)
# Merge names of directors for director3, and rename it director_name3
df_merge3 = df_merge2.merge(df_name2, how='left', left_on='director3', right_on='nconst' )
# Rename
df_merge3.rename(columns={'primary_name':'director_name3'}, inplace=True)
# drop keys
df_merge3.drop(axis=1, columns='nconst', inplace=True)
df_merge3.head(2)
###Output
_____no_output_____
###Markdown
Clean dataset in a wide format
###Code
df_directors_wide = df_merge3
df_directors_wide.head(3)
###Output
_____no_output_____
###Markdown
Reshape from wide to long format
###Code
## reshape wide to long format
df_directors_long = pd.wide_to_long(df_directors_wide, ["director", "director_name"], i='tconst', j='n_th_director' )
# Drop NaN in 'director'
df_directors_long.dropna(axis=0, subset=['director'], inplace=True)
df_directors_long.reset_index(inplace=True)
# Delete 'n_th_director'
df_directors_long.drop('n_th_director', axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Clean dataset in a long format
###Code
df_directors_long.head()
#number of films shot by directors
df_directors_long['director_name'].value_counts()[:20]
###Output
_____no_output_____
###Markdown
Merging imdb.title.ratings
###Code
df_title_rating = pd.read_csv('Data/zippedData/imdb.title.ratings.csv.gz')
df_title_rating.info()
# drop 'numvotes'
df_title_rating.drop(axis=1, columns = 'numvotes', inplace=True)
# Merge imdb.title.ratings and emiko's df_directors_wide
df_directors_ratings = df_title_rating.merge(df_directors_wide, how='inner', on='tconst' )
df_directors_ratings.head(3)
###Output
_____no_output_____
###Markdown
Merge Piotr data to df_directors_ratings
###Code
df_title_basic = pd.read_csv('Data/zippedData/imdb.title.basics.csv.gz')
df_title_basic.info()
df_title_basic
df_title_basic.dropna(inplace = True)
df_title_basic
df_year = df_title_basic[df_title_basic["start_year"] >= 2010]
df_year = df_title_basic[df_title_basic["start_year"] >= 2010]
df_year = df_year[df_year["start_year"] <= 2021]
df_year.drop("primary_title", axis=1, inplace=True)
df_year
# Merge piotr's df_year and emiko's df_directors_wide
df_imdb = df_year.merge(df_directors_ratings, how='inner', on='tconst' )
df_imdb.head()
df_imdb.info()
df_num = pd.read_csv('Data/zippedData/tn.movie_budgets.csv.gz')
df_num.info()
def data_cleaning_money(column_name):
    # strip the literal "$" and "," characters, then convert to float
    df_num[column_name] = df_num[column_name].str.replace('$', '', regex=False).str.replace(',', '', regex=False).astype(float)
data_cleaning_money('production_budget')
data_cleaning_money('domestic_gross')
data_cleaning_money('worldwide_gross')
df_num['release_date'] = pd.to_datetime(df_num['release_date'])
df_num_cleaned = df_num.drop(columns = 'id')
df_num_good = df_num_cleaned[df_num_cleaned['domestic_gross'] != 0.0]
df_num_good = df_num_good[df_num_good['worldwide_gross'] != 0.0]
df_num_good = df_num_good[df_num_good['production_budget'] != 0.0]
df_num_good = df_num_good[df_num_good['domestic_gross'] != df_num_good['worldwide_gross']]
df_num_good['ROI'] = (df_num_good['worldwide_gross'] - df_num_good['production_budget']) / df_num_good['production_budget']
df_num_final = df_num_good.sort_values(by = 'ROI', ascending = False)
df_num_final.drop(df_num_final[df_num_final['release_date'] < pd.Timestamp(2010, 1, 1)].index, inplace = True)
df_num_final['profit'] = (df_num_final['worldwide_gross'] - df_num_final['production_budget'])
df_num_final['month'] = df_num_final['release_date'].dt.month
df_num_final
df_merge = df_num_final.merge(df_imdb, how = 'inner', left_on = 'movie', right_on = 'original_title')
df_merge
df_merge.drop('original_title', axis = 1, inplace = True)
df_duplicates = df_merge[df_merge.duplicated('movie')]
df_duplicates
df_merge.drop_duplicates(subset = ['movie'], inplace = True)
# pattern: assign(col=df.col.str.split(",")) turns a comma-separated column into lists,
# which .explode() then expands into one row per element (applied to 'genres' below)
df_split = df_merge.assign(Genre = df_merge.genres.str.split(',')).explode('Genre')
df_split.head()
df_split
df_genre_test = df_split.groupby(by='Genre').mean().reset_index()
df_genre_test
df_genre_test.plot.bar(x='Genre', y='ROI', figsize=(10,5), fontsize=12, legend=False)
plt.title('ROI by Genre', fontsize=20)
plt.xlabel('Genre', fontsize=16)
plt.ylabel('ROI (mean)', fontsize=16)
df_merge['genres'].value_counts()[:20]
df_merge.drop('director1', axis = 1, inplace = True)
df_merge.drop('director2', axis = 1, inplace = True)
df_merge.drop('director3', axis = 1, inplace = True)
df_merge.drop('tconst', axis = 1, inplace = True)
# Step 1 - split genres into genre1 and genre2.
df_merge[['genre1', 'genre2']] = df_merge['genres'].str.split(',', n=1, expand =True)
df_merge.head()
df_merge.drop('genres', axis=1, inplace=True)
import datetime
import calendar
datetime.datetime.now()
df_merge['month_name'] = df_merge['month'].apply(lambda x: calendar.month_name[x])
df_merge.head()
df_merge['month'].value_counts()
df_test = df_merge.groupby(by='genre1').mean()
df_test
df_test.reset_index(inplace=True)
fig_genre_budget, ax = plt.subplots()
plt.bar(df_test['genre1'], df_test['production_budget'])
plt.xticks(rotation = 90);
df_test.plot.scatter('genre1' , 'profit')
df_merge.groupby(by='month').mean().reset_index()
df_merge_month = df_merge.groupby(by='month_name').mean().reset_index()
df_merge_month = df_merge_month.sort_values(by = 'month')
df_merge_month
fig_month, ax = plt.subplots(figsize = (10,5))
import seaborn as sns
sns.set_style('darkgrid')
ax.bar(df_merge_month['month_name'], df_merge_month['ROI'])
plt.xticks(rotation = 90);
plt.title('Seasonality')
plt.xlabel('Release Month')
plt.ylabel('Average ROI')
plt.savefig('Images/Seasonality.png', bbox_inches = 'tight');
df_merge.head()
fig_runtime, ax = plt.subplots()
sns.set_style('darkgrid')
ax.scatter(df_merge['runtime_minutes'], df_merge['ROI'])
df_merge.sort_values(by = 'worldwide_gross', ascending = False).head(20)
df_merge_dir = df_merge.groupby(by='director_name1').sum().reset_index()
df_merge_dir.sort_values(by = 'profit', ascending = False).head(20)
###Output
_____no_output_____ |
Exercise 2/T3_probability.ipynb | ###Markdown
Exercise 2 - Probability**R code presented in this exercise is not required on homework or exams, it is only to show what is possible in R and to complement the exercise with nice graphs.** Adéla Vrtková, Michal Béreš, Martina Litschmannová In this exercise, we will go through an introduction to probability. We assume you are familiar with the terms: **definition of probability, conditional probability, total probability theorem, Bayes' theorem**. Auxiliary functions Total probability $P(A)=\sum_{i=1}^{n}P(B_i)P(A|B_i)$
###Code
# probability calculation P(A) - total probability theorem
total_probability = function(P_B, P_AB)
{ # we consider P_B as a vector of values P(B_i) and P_AB as a vector of values P(A|B_i)
P_A = 0
for (i in 1:length(P_B))
{
P_A = P_A + P_B[i]*P_AB[i]
}
return(P_A)
}
###Output
_____no_output_____
###Markdown
Bayes' theorem $P(B_k|A)=\frac{P(B_k)P(A|B_k)}{\sum_{i=1}^{n}P(B_i)P(A|B_i)}$
###Code
# calculation of conditional probability P(B_k|A) - Bayes' theorem
bayes = function(P_B, P_AB, k)
{ # we consider P_B as a vector of values P(B_i), P_AB as a vector of values P(A|B_i) and k as and index in P(B_k|A)
P_A = total_probability(P_B, P_AB)
P_BkA = P_B[k]*P_AB[k]/P_A
return(P_BkA)
}
###Output
_____no_output_____
###Markdown
**We will add functions from the last exercise for computing combinatorial selections, they are in the combinatorics script.R**
###Code
source('combinatorics.R')
###Output
_____no_output_____
###Markdown
Examples Example 1. Determine the probability that a number greater than 14 will fall on a 20-wall fair dice roll.
###Code
omega = 1:20
A = c(15,16,17,18,19,20)
# probability as a proportion favorable to all
length(A)/length(omega)
###Output
_____no_output_____
###Markdown
Example 2. Determine the probability that a number greater than 14 will fall on a 20-wall dice roll, if you know that even numbers fall twice as often as odd numbers.
###Code
# 10 odd and 10 even faces; even numbers fall twice as often, so 10*p_odd + 10*2*p_odd = 1
p_odd = 1/(20+10)
p_even = 2*p_odd
probability = rep(c(p_odd, p_even), 10)
probability
# probability is
sum(probability[15:20])
###Output
_____no_output_____
###Markdown
Example 3. Determine the probability that you will guess exactly 4 numbers in the lottery.(6 numbers out of 49 are drawn)
###Code
(combinations(6,4)*combinations(43,2))/combinations(49,6)
###Output
_____no_output_____
###Markdown
Example 4. From the alphabetical list of students enrolled in the exercise, the teacher selects the first 12 and offers them a bet: “If each of you was born in a different zodiac sign, I will give each of you CZK 100. However, if there are at least two students among you who were born in the same sign, each of you will give me 100 CZK. ”Is it worthwhile for students to accept a bet? How likely are students to win?
###Code
permutation(12)/r_permutation_repetition(12,12)
###Output
_____no_output_____
###Markdown
Example 5. Calculate the probability that an electric current will flow from point 1 to point 2 if part of the el. circuit, including the probability of failure of individual components is indicated in the following figure.(The failures of the individual components are independent of each other.) 
###Code
# divided into blocks I=(A, B) and II=(C, D, E)
PI = 1 - (1 - 0.1)*(1 - 0.3)
PI
PII = 0.2*0.3*0.2
PII
# result
(1 - PI)*(1-PII)
###Output
_____no_output_____
###Markdown
Example 6. The patient is suspected of having one of four mutually exclusive diseases - N1, N2, N3, N4 with a probability of occurrence of P(N1)=0.1; P(N2)=0.2; P(N3)=0.4; P(N4)=0.3. Laboratory test A is positive in the case of the first disease in 50% of cases, in the second disease in 75% of cases, in the third disease in 15% of cases and in the fourth in 20% of cases. What is the probability that the result of the laboratory test will be positive?
###Code
# total probability theorem
P_N = c(0.1,0.2,0.4,0.3) # P(N1), P(N2),...
P_PN = c(0.5,0.75,0.15,0.2) # P(P|N1), P(P|N2),...
P_P = total_probability(P_B = P_N, P_AB = P_PN) # P(P)
P_P
###Output
_____no_output_____
###Markdown
Example 7. Telegraphic characters consist of "dot" and "comma" signals. It is statistically found that 25% of "dot" messages and 20% of "comma" signals are distorted. It is also known that signals are used in a 3: 2 ratio. Determine the probability that the signal was received correctly if a "dot" signal was received.
###Code
# Bayes' theorem
P_O = c(0.6, 0.4) # P(O.), P(O-)
P_PO = c(0.75, 0.2) # P(P.|O.), P(P.|O-)
bayes(P_B = P_O, P_AB = P_PO, k = 1) # k=1 because correctly=O.
###Output
_____no_output_____
###Markdown
Example 8. 85% of green taxis and 15% of blue taxis run in one city. The witness of the traffic accident testified that the accident was caused by the driver of the blue taxi, who then left. Tests carried out under similar lighting conditions showed that the witness identified the color of the taxi well in 80% of cases and was wrong in 20% of cases. - What is the probability that the culprit of the accident actually drove a blue taxi? - Subsequently, another independent witness was found who also claims that the taxi was blue. What is the probability that the culprit of the accident actually drove a blue taxi now? - Does the probability that the perpetrator of the accident actually drove a blue taxi affect whether the two witnesses mentioned above testified gradually or simultaneously?
###Code
# a) again Bayes' theorem
P_B = c(0.85, 0.15) # P(Z), P(M)
P_SB = c(0.20, 0.80) # P(SM|Z), P(SM|M)
bayes(P_B = P_B, P_AB = P_SB, k = 2) # blue is second
# b) first option - second pass through Bayes
P_M = bayes(P_B = P_B, P_AB = P_SB, k = 2)
P_B = c(1 - P_M, P_M) # P(Z), P(M)
P_SB = c(0.20, 0.80) # P(S2M|Z), P(S2M|M)
bayes(P_B = P_B, P_AB = P_SB, k = 2)
# c) or answered at once
P_B = c(0.85, 0.15) # P(Z), P(M)
P_SB = c(0.20^2, 0.80^2) # P(S1M & S2M|Z), P(S1M & S2M|M)
bayes(P_B = P_B, P_AB = P_SB, k = 2)
###Output
_____no_output_____
###Markdown
Example 9. We need to find out the answer to a sensitive question. How to estimate what percentage of respondents will answer YES to the question and at the same time guarantee complete anonymity to all respondents? One of the solutions is the so-called double-anonymous survey:We will let the respondents throw the coin A and the coin B. - Those who got head on coin A will write the answer(YES/NO) to the sensitive question on teir card. - Those who got tail on coin A will write YES = if coin B landed on head or NO = if coin B landed on tail. How do we determine the proportion of students who answered YES to a sensitive question?Assume that respondents were asked if they were cheating on an exam. From the questionnaires, it was found that 120 respondents answered "YES" and 200 respondents answered "NO". What percentage of students cheated at the exam?
###Code
# total probability theorem
# P(YES)=P(A_YES) * P(YES|A_YES) + P(A_NO) * P(B_YES|A_NO)
# equation 120/320=0.5 * x + 0.5 * 0.5
(120/320-0.5^2)/0.5
###Output
_____no_output_____ |
LinearAlgebra/commonMethods.ipynb | ###Markdown
This is my notebook following the instructions from:https://web.stanford.edu/class/cs231a/section/section1.pdf
###Code
import numpy as np

v = np.array([[1],
[2],
[3]])
v.shape
v + v
v1 = np.array([1, 2, 3])
v2 = np.array([4, 5, 6])
v3 = np.array([7, 8, 9])
M = np.vstack([v1, v2, v3])
print (M)
# Matrix-vector product
print(M.dot(v))
# Inner product v.T.dot(v) (result is a 1x1 matrix)
print (v.T.dot(v))
# Elementwise Multiplication
np.multiply(M,v)
# Transposing
print (M.T)
print (M.T.shape, v.T.shape)
# Taking a determinant and its inverse
i_m = np.array([[3,0,2], [2,0,-2],[0,1,1]])
print (np.linalg.inv(i_m))
# If the determinant is 0, the matrix is not invertible
print (np.linalg.det(i_m))
# Eigenvalues and Eigenvectors
eigvals, eigvecs = np.linalg.eig(M)
print (eigvals)
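# Quick check (illustrative addition): for the first eigenpair, M @ v equals lambda * v
# up to floating-point error
print (M.dot(eigvecs[:, 0]))
print (eigvals[0] * eigvecs[:, 0])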
U, S, Vtranspose = np.linalg.svd(i_m)
U
S
Vtranspose
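# Reconstruction check (illustrative addition): U * diag(S) * V^T recovers i_m
print (U.dot(np.diag(S)).dot(Vtranspose))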
###Output
_____no_output_____ |
jupyter_notebooks/optimization/trust_region.ipynb | ###Markdown
Cauchy point
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import rosen, rosen_der, rosen_hess

fig, ax = plt.subplots(figsize=(9, 9))
T0 = np.linspace(-3.2, 4.8, 100)
T1 = np.linspace(-4, 4, 100)
T0, T1 = np.meshgrid(T0, T1) # pairwise combinations of theta values
Z = rosen([T0, T1]) # Rosenbrock function
contour_levels = np.concatenate([np.array([3, 10, 30, 100, 200, 300, 500]), np.arange(1000, 79000, 1000)])
contour_plot = ax.contour(T0, T1, Z, levels=contour_levels, cmap='coolwarm', alpha=0.5) # contour plot of Rosenbrock function
init_point = np.array((3, -1)) # center of the initial trust region
init_radius = 1.
phis = np.arange(0, 6.28, 0.01) # angles for coordinate evaluations
circle_coords = (np.cos(phis)*init_radius+init_point[0], np.sin(phis)*init_radius+init_point[1])
ax.plot(*circle_coords, color='saddlebrown') # trust region
ax.scatter(*init_point, color='black') # center of the trust region
ax.text(init_point[0]+0.1, init_point[1]+0.1, '$x_o$', fontsize=19)
# here goes Cauchy point algorithm
def is_pos_def(x):
'''Check if a matrix is positive definite'''
return np.all(np.linalg.eigvals(x) > 0)
def cauchy_point(init_point: np.ndarray,
init_radius: float,
) -> np.ndarray:
x = init_point
radius = init_radius
n_iter = 0
gain = 999
while n_iter <= 5 and gain > 0.001:
n_iter += 1
f = rosen(x)
f_derivative = rosen_der(x)
f_hessian = rosen_hess(x)
f_derivative_norm = np.linalg.norm(f_derivative)
p_s = -radius*f_derivative/f_derivative_norm
if is_pos_def(f_hessian):
tau = f_derivative_norm**3/(radius*(f_derivative @ f_hessian @ f_derivative))
if tau > 1:
tau = 1
else:
tau = 1
p_c = p_s*tau
phi_new = rosen(x) + np.dot(f_derivative, p_c) + (p_c @ f_hessian @ p_c)/2 # modeled new point value of a function
x_new = x + p_c
f_new = rosen(x_new) # actial new point value of a function
rho = (f - f_new)/(f - phi_new) # rate between actual and modeled reduction
if rho < 0.25:
radius = 0.25*np.linalg.norm(p_c)
x_new = x
elif rho > 0.75:
radius = min(2*radius, 1.1)
ax.plot(*zip(x, x_new)) # plot vector of change
ax.scatter(*x_new, color='dimgrey', alpha=0.8) # center of new trust region
draw_trust_region = plt.Circle((x_new[0], x_new[1]), radius=radius, fill=False, alpha=0.6) # contours of trust region
ax.add_artist(draw_trust_region)
x = x_new
gain = f - rosen(x_new)
return x
cauchy_point(np.array([3, -1]), 1)
plt.savefig('../../assets/images/optimization/cauchy_point.png', bbox_inches='tight');
###Output
_____no_output_____
###Markdown
Dogleg
###Code
fig, ax = plt.subplots(figsize=(10.5, 9))
ax.set_xlim(-1.5, 2)
ax.set_ylim(-1.5, 1.5)
trust_region_dogleg = plt.Circle((0, 0), radius=1, fill=False)
ax.add_artist(trust_region_dogleg)
ax.scatter(0, 0, color='black', zorder=20, alpha=0.4)
ax.text(-0.1, 0.1, '$x_o$', fontsize=16)
ax.quiver([0, 0, 0.7, 0], # x origin
[0, 0, 0.3, 0], # y origin
[1.5, 0.7, 0.8, 0.997], # x vector
[-0.3, 0.3, -0.6, 0.077], # y vector
color=['peru', 'olive', 'lightgrey', 'navy'], linewidth=0.6,
angles='xy', scale_units='xy', scale=1,
headwidth=2.4)
ax.text(0.25, 0.2, '$p^C$', fontsize=17)
ax.text(0.65, -0.36, '$p^b$', fontsize=17)
ax.text(0.65, 0.1, '$p^d$', fontsize=17)
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.savefig('../../assets/images/optimization/dogleg.png', bbox_inches='tight');
pB = np.array([1.5, -0.3])
pU = np.array([0.7, 0.3])
pB_pU = pB - pU
dot_pB_pU = np.dot(pB_pU, pB_pU)
dot_pU = np.dot(pU, pU)
dot_pU_pB_pU = np.dot(pU, pB_pU)
trust_radius = 1.  # assumed to match the trust-region circle of radius 1 drawn above
fact = dot_pU_pB_pU**2 - dot_pB_pU * (dot_pU - trust_radius**2)
tau = (-dot_pU_pB_pU + np.sqrt(fact)) / dot_pB_pU
#print(pU + tau * pB_pU)
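# Illustrative continuation: tau parametrises the second leg of the dogleg path,
# so the resulting step lands on the trust-region boundary (its norm equals trust_radius)
p_dogleg = pU + tau * pB_pU
print(p_dogleg, np.linalg.norm(p_dogleg))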
###Output
_____no_output_____ |
indexing_demo.ipynb | ###Markdown
###Code
import reprlib
from collections import defaultdict
from indexing import Preprocessor,Indexer
import re
# for testing
doc_map = defaultdict()
import jsonlines
def clean_str(text) :
text = (text.encode('ascii', 'ignore')).decode("utf-8")
text = re.sub("&.*?;", "", text)
text = re.sub("[\]\|\[\@\,\$\%\*\&\\\(\)\":]", "", text)
text = re.sub("-", " ", text)
text = re.sub("\.+", "", text)
text = re.sub("^\s+","" ,text)
text = re.sub("\.+", "", text)
text = text.lower()
return text
def document_generator(file):
with jsonlines.open(file) as reader:
for doc_id, obj in enumerate(reader):
item = {'doc_id': doc_id, 'url': obj['url'], 'title': clean_str(obj['title']), 'desc': clean_str(obj['desc'])}
yield item
p = Preprocessor()
indexer = Indexer(p)
inverted = {}
for doc in document_generator("crawlers/stack/data/solr.jsonl"):
doc_id = doc['doc_id']
# index both title and description
text = doc['title'] + " " +doc['desc']
doc_map[doc_id] = (doc['url'], text)
doc_index = indexer.inverted_index(text)
indexer.inverted_index_add(inverted,doc_id=doc_id,doc_index=doc_index)
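# Quick sanity check of the index size (illustrative addition)
print(len(doc_map), "documents indexed,", len(inverted), "distinct terms")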
# Print Inverted-Index (3 rows)
i = 0
for word, doc_locations in inverted.items():
print(word, reprlib.repr(doc_locations))
i += 1
if i > 3:
break
###Output
instal {0: [7, 72, 130, 261, 532, 816, ...], 115: [1911, 2734], 188: [831, 1578], 829: [6270, 11787], ...}
uniti {0: [15, 80, 99, 115, 178, 380, ...], 158: [5019], 365: [7622], 473: [3916], ...}
use {0: [28, 1049, 1343, 1611, 1759, 1880, ...], 1: [1041], 2: [112, 241, 377, 776, 812, 823, ...], 3: [1692, 3680, 3940], ...}
ubuntu {0: [34, 599, 624, 944, 1254], 1408: [1420, 3081], 1444: [2089, 4014, 6279], 1476: [5540, 7515], ...}
###Markdown
Sample querying
###Code
# Search something and print results
queries = ['dolby', 'vim emacs', 'github week']
for query in queries:
tokenized_query = ' '.join(p.tokenize_string(query))
result_docs = indexer.search(inverted, tokenized_query)
print(f"Search for '{query}': doc_ids={result_docs}")
for _, word in p.word_index(tokenized_query):
def extract_text(doc_id, position):
return doc_map[doc_id][1][position:position+50].replace("\n", ' ')
for doc_id in result_docs:
for position in inverted[word][doc_id]:
print(
f"\t - {extract_text(doc_id, position)}..."
f"\n\t -->{doc_map[doc_id][0]}"
f"\n"
)
print("\n")
###Output
Search for 'dolby': doc_ids={11074}
- dolby digital expires at midnight ...
-->/r/programming/comments/60b7kv/the_last_patent_on_ac3_dolby_digital_expires_at/
Search for 'vim emacs': doc_ids={3942, 2058, 2672, 3315, 3732, 4221, 1431, 2587, 2877}
- vim running inside gnome terminal showing the diff...
-->https://askubuntu.com/questions/48299/what-ides-are-available-for-ubuntu
- vim splits or extra tabs you like to install it in...
-->https://askubuntu.com/questions/48299/what-ides-are-available-for-ubuntu
- vim emacs nano gedit kate to name a few enable res...
-->https://askubuntu.com/questions/68918/how-do-i-restrict-my-kids-computing-time
- vim or emacs to write c code just try this on you...
-->https://askubuntu.com/questions/61408/what-is-a-command-to-compile-and-run-c-programs
- vim is amazing! vim is a highly configurable text ...
-->https://askubuntu.com/questions/10998/what-developer-text-editors-are-available-for-ubuntu
- vim is a highly configurable text editor built to ...
-->https://askubuntu.com/questions/10998/what-developer-text-editors-are-available-for-ubuntu
- vim was originally released for the amiga vim has ...
-->https://askubuntu.com/questions/10998/what-developer-text-editors-are-available-for-ubuntu
- vim has since been developed to be supporting ma...
-->https://askubuntu.com/questions/10998/what-developer-text-editors-are-available-for-ubuntu
- vim is free and open source software and is releas...
-->https://askubuntu.com/questions/10998/what-developer-text-editors-are-available-for-ubuntu
- vim text editor dpkg apt less vim style keys and s...
-->https://askubuntu.com/questions/162075/my-computer-boots-to-a-black-screen-what-options-do-i-have-to-fix-it
- vim style keys and searching like man and grep i'm...
-->https://askubuntu.com/questions/162075/my-computer-boots-to-a-black-screen-what-options-do-i-have-to-fix-it
- vim or pico/nano or check your email in mutt or pi...
-->https://askubuntu.com/questions/162075/my-computer-boots-to-a-black-screen-what-options-do-i-have-to-fix-it
- vim users like i that prefers to do everything wit...
-->https://stackoverflow.com/questions/161813/how-to-resolve-merge-conflicts-in-git
- vim called once installed you can run to check...
-->https://stackoverflow.com/questions/161813/how-to-resolve-merge-conflicts-in-git
- vim ~/profile in file put my_env_var=value save w...
-->https://stackoverflow.com/questions/135688/setting-environment-variables-on-os-x
- vim it gives you all the tools you need to do mass...
-->https://askubuntu.com/questions/10607/how-can-i-rename-many-files-at-once
- vim or emacs in a python ide or wingide which is a...
-->https://askubuntu.com/questions/6588/is-there-a-visual-studio-style-tool-ide
- emacs but for command line programming it is a kil...
-->https://askubuntu.com/questions/48299/what-ides-are-available-for-ubuntu
- emacs nano gedit kate to name a few enable restric...
-->https://askubuntu.com/questions/68918/how-do-i-restrict-my-kids-computing-time
- emacs to write c code just try this on your termi...
-->https://askubuntu.com/questions/61408/what-is-a-command-to-compile-and-run-c-programs
- emacs it has a solid python mode you don't need an...
-->https://askubuntu.com/questions/10998/what-developer-text-editors-are-available-for-ubuntu
- emacs tutorial it should be easily accessible from...
-->https://askubuntu.com/questions/10998/what-developer-text-editors-are-available-for-ubuntu
- emacs vim or pico/nano or check your email in mutt...
-->https://askubuntu.com/questions/162075/my-computer-boots-to-a-black-screen-what-options-do-i-have-to-fix-it
- emacs type this will open three buffers mine their...
-->https://stackoverflow.com/questions/161813/how-to-resolve-merge-conflicts-in-git
- emacs asks you if you want to save this buffer yes...
-->https://stackoverflow.com/questions/161813/how-to-resolve-merge-conflicts-in-git
- emacs lisp function although one can of course use...
-->https://stackoverflow.com/questions/135688/setting-environment-variables-on-os-x
- emacs lisp function note this solution is an amal...
-->https://stackoverflow.com/questions/135688/setting-environment-variables-on-os-x
- emacs i think nothing beats dired for this task ev...
-->https://askubuntu.com/questions/10607/how-can-i-rename-many-files-at-once
- emacs that often you may find dired a handy tool s...
-->https://askubuntu.com/questions/10607/how-can-i-rename-many-files-at-once
- emacs dired mode for a directory now enter edit di...
-->https://askubuntu.com/questions/10607/how-can-i-rename-many-files-at-once
- emacs uses a different syntax than pcre for exampl...
-->https://askubuntu.com/questions/10607/how-can-i-rename-many-files-at-once
- emacs in a python ide or wingide which is a commer...
-->https://askubuntu.com/questions/6588/is-there-a-visual-studio-style-tool-ide
Search for 'github week': doc_ids={4710, 9863, 2058, 5835, 3850, 10127, 2511, 4498, 4221, 3286, 2268, 3741}
- github repo another option you can try is it's ...
-->https://stackoverflow.com/questions/292926/robust-and-mature-html-parser-for-php
- github protest over chinese tech companies' 996 cu...
-->/r/programming/comments/b799yb/github_protest_over_chinese_tech_companies_996/
- github at the following location this is how to i...
-->https://askubuntu.com/questions/68918/how-do-i-restrict-my-kids-computing-time
- github repo i didn't know what that compile line m...
-->https://stackoverflow.com/questions/38922754/how-to-use-threetenabp-in-android-project
- github has an api and today i learned that sourcef...
-->https://askubuntu.com/questions/1056077/how-to-install-latest-hplip-on-my-ubuntu-to-support-my-hp-printer-and-or-scanner
- github ...
-->/r/programming/comments/henwet/the_uk_gov_just_spent_118_million_on_a_covid/
- github repo that houses the sources for the packag...
-->https://askubuntu.com/questions/548003/how-do-i-install-the-firefox-developer-edition
- github i found the this library makes it very s...
-->https://stackoverflow.com/questions/2900023/change-app-language-programmatically-in-android
- github page the localizationactivity extends appco...
-->https://stackoverflow.com/questions/2900023/change-app-language-programmatically-in-android
- github for practical tutorial check i've success...
-->https://stackoverflow.com/questions/161813/how-to-resolve-merge-conflicts-in-git
- github's native tool explains in detail but the b...
-->https://stackoverflow.com/questions/161813/how-to-resolve-merge-conflicts-in-git
- github repository named there is a branch ca...
-->https://askubuntu.com/questions/922085/i-need-rules-to-drop-some-malicious-apache-connections
- github here are over viewed few ways involved into...
-->https://askubuntu.com/questions/922085/i-need-rules-to-drop-some-malicious-apache-connections
- github which satisfies the likes of errors generat...
-->https://askubuntu.com/questions/432542/is-ffmpeg-missing-from-the-official-repositories-in-14-04
- github and extract it in a directory of your choic...
-->https://askubuntu.com/questions/775579/recovering-broken-or-deleted-ntfs-partitions
- week the pyquery parsing broke and the regex still...
-->https://stackoverflow.com/questions/292926/robust-and-mature-html-parser-for-php
- week ago i created a library named which allows ...
-->https://stackoverflow.com/questions/292926/robust-and-mature-html-parser-for-php
- week chinese tech companies really make their empl...
-->/r/programming/comments/b799yb/github_protest_over_chinese_tech_companies_996/
- week using the following abbreviations be careful ...
-->https://askubuntu.com/questions/68918/how-do-i-restrict-my-kids-computing-time
- week finally change the field used by the login ac...
-->https://askubuntu.com/questions/68918/how-do-i-restrict-my-kids-computing-time
- weeks ago when i started learning android i would ...
-->https://stackoverflow.com/questions/38922754/how-to-use-threetenabp-in-android-project
- weeks ago the latest hplip driver version availabl...
-->https://askubuntu.com/questions/1056077/how-to-install-latest-hplip-on-my-ubuntu-to-support-my-hp-printer-and-or-scanner
- week project thats failed to deliver heres the git...
-->/r/programming/comments/henwet/the_uk_gov_just_spent_118_million_on_a_covid/
- weeks before they reach the main firefox release c...
-->https://askubuntu.com/questions/548003/how-do-i-install-the-firefox-developer-edition
- weeks after they have stabilized in nightly builds...
-->https://askubuntu.com/questions/548003/how-do-i-install-the-firefox-developer-edition
- weeks the new way to do this is now using the met...
-->https://stackoverflow.com/questions/2900023/change-app-language-programmatically-in-android
- week period you may choose to merge/rebase on that...
-->https://stackoverflow.com/questions/161813/how-to-resolve-merge-conflicts-in-git
- week that way if you do find merge/rebase conflict...
-->https://stackoverflow.com/questions/161813/how-to-resolve-merge-conflicts-in-git
- weeks to merge everything together in one big lump...
-->https://stackoverflow.com/questions/161813/how-to-resolve-merge-conflicts-in-git
- week according to i created a github repos...
-->https://askubuntu.com/questions/922085/i-need-rules-to-drop-some-malicious-apache-connections
- weeks we will announce here when is stable and...
-->https://askubuntu.com/questions/432542/is-ffmpeg-missing-from-the-official-repositories-in-14-04
- weeks to install newest version ffmpeg 2811 this v...
-->https://askubuntu.com/questions/432542/is-ffmpeg-missing-from-the-official-repositories-in-14-04
- weeks ago i had a problem with my pc that my broth...
-->https://askubuntu.com/questions/775579/recovering-broken-or-deleted-ntfs-partitions
|
homeworks/08_homework/ORF307_HW8.ipynb | ###Markdown
ORF307 Homework 8 {-} Due: Friday, April 30, 2021 9:00 pm ET- Please export your code with output as pdf.- If there are any additional answers, please combine them as **ONE** pdf file before submitting to Gradescope. Q1 A small integer programming problem {-} Consider the following integer programming problem:$$\begin{array}{ll}\mbox{minimize} & -x_1 - 2x_2\\\mbox{subject to} &-3x_1 + 4x_2 \le 4\\&3x_1 + 2x_2 \le 11\\&2x_1 - x_2 \le 5\\&x_1, x_2 \ge 0\\&x_1, x_2 \in \mathbf{Z}.\end{array}$$Use a figure to answer the following questions:1. What is the optimal cost of the linear programming relaxation? What is the optimal cost of the integer programming problem?2. What is the convex hull of the set of all solutions to the integer programming problem?3. Solve the problem by branch and bound. You can solve the linear programming relaxations graphically. Show the resulting tree. \newpage Q2 Sudoku {-}Sudoku is a popular puzzle game. You have probably already seen it. The goal is to fill a 9-by-9 grid with integers from 1 to 9 so that each integer appears only once in each row, in each column and in each 3-by-3 subblock. The grid is partially populated with clues and your task is to fill in the rest of the grid.  Formulate the Sudoku problem as an integer linear program. Q3 Planning to move {-} It is the end of the semester and you might be planning to change accommodation. Moving is complicated and you want to use some integer programming tricks to make your life easier. You rented a truck with capacity $Q$ and you bought $m$ boxes. Each box $i$ has size $b_i$ for $i=1,\dots,m$. You have $n$ items to move of size $a_j$ for $j=1,\dots,n$. 1. Formulate your packing problem as an integer programming problem to determine if your move is doable with the truck and the boxes you have.2. Given the data below, is your move feasible? If so, what is the minimum number of boxes you need to use? Answer using a solver such as `GLPK_MI` or `GUROBI`. *Note*. If you find that `GLPK_MI` is taking too long to return a solution, please run it with the `tm_lim=30000` option which sets the time limit to 30 seconds: `problem.solve(solver=cp.GLPK_MI, tm_lim=30000)`. For this problem, the solution returned after 30 seconds is already optimal.
###Code
import numpy as np
n = 30 # Number of items
m = 10 # Number of boxes
Q = 100 # Truck capacity
# Item sizes
a = np.array([4, 3, 1, 2, 2,
5, 4, 2, 4, 2,
4, 1, 5, 2, 1,
3, 5, 3, 3, 3,
4, 5, 5, 2, 3,
3, 3, 2, 2, 2])
# Box sizes
b = np.array([7, 19, 10, 14, 10, 12, 13, 10, 15, 14])
###Output
_____no_output_____ |
talks/uc2017/ArcGIS Python API - Advanced Scripting/features/helper.ipynb | ###Markdown
turn back new york
###Code
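# Note (assumption): `flc` is a FeatureLayerCollection obtained earlier in this demo,
# e.g. from a hosted feature layer item; that setup cell is not shown here.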
fl0 = flc.layers[0]
fl0
fset = fl0.query()
feats = fset.features
fset.df
ny_feature = [f for f in feats if f.attributes['name']=='New York City'][0]
ny_feature.attributes
import copy
ny_edit = copy.deepcopy(ny_feature)
ny_edit.attributes['name'] = 'New York'
update_result = fl0.edit_features(updates=[ny_edit])
update_result
###Output
_____no_output_____
###Markdown
turn off edit capability
###Code
flc.manager.update_definition({'capabilities':'Query'})
flc.properties.capabilities
###Output
_____no_output_____ |
Exercises/Exercise-7-Dictionary_and_Set.ipynb | ###Markdown
Exercise 7Related Notes:- Fundamentals_3 Data Structures- Fundamentals_4 FunctionsFor questions 7.1, 7.2 and 7.6, you are to implement both :- an **iterative version**, and - a **recursive version** of the function specified.Please decide on your own function name. Exercise 7.1Write a function that takes in a positive integer, $n$, and returns `True` if $n$ is a prime number and `False` if it's not.**Example Interaction**>```>your_function(7)>True>your_function(9)>False>```
###Code
#YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Exercise 7.2Write a function that takes in a positive integer $n$, and prints outs the digits in English word form. For example, when given as input: `3214`, the function should print: `three two one four` in one line.**Example Interaction**>```>your_function(3214)>three two one four>```
###Code
#YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Exercise 7.3There are 5 usernames with their respective password. * user1: password1* user2: password2* user3: password3Implement a script such that:1. Use a suitable data structure to store usernames and passwords,2. User enters username and password3. Check user username and password * If username does not exists, print "User not found" * if username exists, but password doesn't match, print "Wrong password" * If both username and password match, print "You are in"**Example Interaction**>```>Enter username: user10>Enter password: pass10>User not found>>Enter username: user1>Enter password: pass1>Wrong password>>Enter username: user1>Enter password: password1>You are in>```
###Code
#YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Exercise 7.4The winning numbers of Toto this week are `7, 20, 29, 41, 47, 49`. Implement a script to help the user check the result. * Define a function `match_count(win_nums, your_nums)` which returns the number of matched numbers. It takes in 2 lists as parameters, `win_nums` and `your_nums`. The `win_nums` list contains the winning numbers, and `your_nums` contains the numbers entered by the user.* Ask the user to input a list of numbers separated by space ` `.You probably need to use the `str.split()` method for this question. Use `help(str.split)` or search online. **Example Interaction**>```>Enter your Toto numbers separated by space: >1 7 20 29 41 47 49 50>Count of matched numbers: 6>```
###Code
#YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Exercise 7.5 Greatest Common DivisorLet $a$ be an integer. A **divisor** of $a$, also called a **factor** of $a$ is an integer $r$ such that there exists an integer $m$ such that $mr=a$. For example, if $a=20$, then $2$ and $5$ are both divisors of $a$ as $2\times 10=20$ and $5 \times 4=20$.Write a function that takes in a positive integer $n$, and return a list of all positive integers that is the divisor of $n$.**Example Interaction**>```>your_function(20)>[1,2,4,5,10,20]>```Let $a$ and $b$ be positive integers. A **common divisor** of $a$ and $b$ is a positive integer $r$ such that there exists positive integers $m_1,m_2$ such that $a=m_1*r$ and $b=m_2*r$. The highest of such integer is called the **greatest of common divisor of $a$ and $b$**, denoted as $\gcd(a,b)$.Using the function you have defined previously, Write a function `gcd` that takes in 2 positive integer $a$ and $b$, and return the greatest common divisor of $a$ and $b$. **Example Interaction**>```>gcd(4,10)>2>```
###Code
#YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Exercise 7.6 Euclidean AlgorithmA more efficient method to compute the greatest common divisor of two integers $a$ and $b$ is by using the Euclidean algorithm. It is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number. For example,$$\begin{align*}\gcd(252,105)&=21\\ &=\gcd(252-2(105)=42,105)\\&=\gcd(42,105-2(42)=21)\\&=\gcd(42,21)\end{align*}$$In particular, assume $a>b$, then by Division Algorithm, there exists $q\in \mathbb{Z}$ such that $a=qb+r$, where $0\leq r<b$. Then, $$\begin{align*}\gcd(a,b)&=\gcd(a-qb,b)\\&=\gcd(r,b).\end{align*}$$Write a function that takes in 2 positive integer $a$ and $b$, and return the greatest common divisor of $a$ and $b$.**Example Interaction**>```>your_function(4,10)>2>```
###Code
#YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Exercise 7.7Write a function that takes in a list of positive integers, $N$, and returns the greatest common divisor (i.e., the highest common factor) of all integers in $N$.
###Code
#YOUR CODE HERE
###Output
_____no_output_____ |
matplotlib/gallery_jupyter/lines_bars_and_markers/multicolored_line.ipynb | ###Markdown
Multicolored linesThis example shows how to make a multi-colored line. In this example, the line is colored based on its derivative.
###Code
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
from matplotlib.colors import ListedColormap, BoundaryNorm
x = np.linspace(0, 3 * np.pi, 500)
y = np.sin(x)
dydx = np.cos(0.5 * (x[:-1] + x[1:])) # first derivative
# Create a set of line segments so that we can color them individually
# This creates the points as a N x 1 x 2 array so that we can stack points
# together easily to get the segments. The segments array for line collection
# needs to be (numlines) x (points per line) x 2 (for x and y)
points = np.array([x, y]).T.reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)
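# Quick shape check (illustrative addition): 500 points yield 499 segments,
# each holding two (x, y) endpoints
print(points.shape, segments.shape)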
fig, axs = plt.subplots(2, 1, sharex=True, sharey=True)
# Create a continuous norm to map from data points to colors
norm = plt.Normalize(dydx.min(), dydx.max())
lc = LineCollection(segments, cmap='viridis', norm=norm)
# Set the values used for colormapping
lc.set_array(dydx)
lc.set_linewidth(2)
line = axs[0].add_collection(lc)
fig.colorbar(line, ax=axs[0])
# Use a boundary norm instead
cmap = ListedColormap(['r', 'g', 'b'])
norm = BoundaryNorm([-1, -0.5, 0.5, 1], cmap.N)
lc = LineCollection(segments, cmap=cmap, norm=norm)
lc.set_array(dydx)
lc.set_linewidth(2)
line = axs[1].add_collection(lc)
fig.colorbar(line, ax=axs[1])
axs[0].set_xlim(x.min(), x.max())
axs[0].set_ylim(-1.1, 1.1)
plt.show()
###Output
_____no_output_____ |
articles/what-are-the-implications-of-decarbonisation-for-inequality/parser.ipynb | ###Markdown
Fig 1
###Code
# Assumed setup (not shown in this excerpt): helper names such as eco_git_path, vega_embed,
# colors, scale_lightness, LOCAL and local_suffix are defined in an earlier configuration cell.
import re

import pandas as pd
import altair as alt

df = pd.read_excel(
"raw/Econ_observatory_figures_(2).xlsx", sheet_name="Sheet2", usecols="A:H"
)
df["Unnamed: 0"] = df["Unnamed: 0"].ffill()
df.columns = ["cat", "subcat", "neg", "o", "pos", "hi", "hi_dis", "total"]
df = df.set_index(["cat", "subcat"])[["neg", "o", "pos", "total"]].stack().reset_index()
df.columns = ["cat", "subcat", "a", "value"]
f = "fig1_outcomes"
f1 = eco_git_path + f + ".csv"
df.to_csv("data/" + f + ".csv")
f += local_suffix
open("visualisation/" + f + ".html", "w").write(
vega_embed.replace(
"JSON_PATH", f1.replace("/data/", "/visualisation/").replace(".csv", ".json")
)
)
if LOCAL:
f1 = df
readme = "### " + f + '\n\n\n'
df.head()
base = (
alt.Chart(f1)
.encode(
x=alt.X(
"cat:N",
sort=[],
axis=alt.Axis(
grid=False,
titleAlign="center",
titleAnchor="middle",
labelAlign="left",
title="",
titleY=-15,
titleX=207,
labelFontSize=11,
labelPadding=-10,
labelColor=scale_lightness(colors["eco-gray"], 2),
labelFontWeight='bold',
titleColor=colors["eco-gray"],
tickColor=colors["eco-gray"],
domainColor=colors["eco-gray"],
tickCount=10,
orient="bottom",
labelAngle=-90,
zindex=10,
labelLimit=1000,
),
)
)
.transform_filter('datum.subcat=="Distributional outcomes"')
)
bars = (
base.mark_bar(opacity=0.8)
.encode(
y=alt.Y(
"value:Q",
stack=True,
sort=["neg", "m", "pos"],
axis=alt.Axis(
grid=False,
title="% of total evaluations",
titleX=-5,
titleY=-5,
titleBaseline="bottom",
titleAngle=0,
format=".0%",
titleAlign="left",
labelColor=colors["eco-gray"],
titleColor=colors["eco-gray"],
tickColor=colors["eco-gray"],
domainColor=colors["eco-gray"],
),
),
order=alt.Order("a:N", sort="ascending"),
color=alt.Color(
"a:N",
legend=None,
scale=alt.Scale(
range=[colors["eco-dot"], colors["eco-gray"], colors["eco-light-blue"]]
),
),
)
.transform_filter('datum.a!="total"')
)
line = (
base.mark_line(color=colors["eco-turquiose"])
.encode(
y=alt.Y(
"value:Q",
stack=True,
sort=["neg", "m", "pos"],
axis=alt.Axis(
grid=True,
gridColor=colors["eco-gray"],
gridOpacity=0.1,
title="Number of total evaluations",
titleX=5,
titleY=13,
titleBaseline="bottom",
titleAngle=0,
titleAlign="left",
labelColor=colors["eco-gray"],
titleColor=colors["eco-gray"],
tickColor=colors["eco-gray"],
domainColor=colors["eco-gray"],
),
),
x=alt.X(
"cat:N",
axis=alt.Axis(
grid=True,
title="",
labelFontSize=0,
gridColor=colors["eco-gray"],
gridOpacity=0.1,
tickColor=scale_lightness(colors["eco-gray"], 10),
domain=False,
orient="top",
),
),
)
.transform_filter('datum.a=="total"')
)
points = line.mark_point(color=colors["eco-turquiose"], fill=colors["eco-turquiose"])
title = alt.TitleParams(
"Percentage of impacts on distributional outcomes by policy instrument type",
subtitle=["positive impacts (blue), no impact (grey), negative impacts (red)"],
anchor="start",
align="left",
dx=5,
dy=-5,
fontSize=12,
subtitleFontSize=11,
subtitleFontStyle="italic",
)
labels1 = (
alt.Chart(
pd.DataFrame(
[
{"x": " ", "y": 0.1, "t": "negative"},
]
)
)
.mark_text(angle=270, color=colors["eco-dot"])
.encode(
x=alt.X("x:N", sort=[]),
y=alt.Y("y:Q", sort=[], stack=False),
text="t:N",
)
)
labels2 = (
alt.Chart(
pd.DataFrame(
[
{"x": " ", "y": 0.3, "t": "neutral"},
]
)
)
.mark_text(angle=270, color=colors["eco-gray"])
.encode(
x=alt.X("x:N", sort=[]),
y=alt.Y("y:Q", sort=[], stack=False),
text="t:N",
)
)
labels3 = (
alt.Chart(
pd.DataFrame(
[
{"x": " ", "y": 0.5, "t": "positive"},
]
)
)
.mark_text(angle=270, color=colors["eco-light-blue"])
.encode(
x=alt.X("x:N", sort=[]),
y=alt.Y("y:Q", sort=[], stack=False),
text="t:N",
)
)
labels0 = (
alt.Chart(
pd.DataFrame(
[
{"x": " ", "y": 0.5, "t": ""},
]
)
)
.mark_text(angle=270)
.encode(
x=alt.X("x:N", sort=[]),
y=alt.Y("y:Q", sort=[], stack=False),
text="t:N",
)
)
layer1 = (
alt.vconcat(
(bars + labels1 + labels2 + labels3).properties(height=250, width=400),
(line + points + labels0).properties(height=80, width=400),
spacing=5,
)
.configure_view(stroke=None)
.properties(title=title)
)
layer1.save("visualisation/" + f + ".json")
layer1.save("visualisation/" + f + ".png",scale_factor=2.0)
layer1.save("visualisation/" + f + ".svg")
open("README.md", "w").write(readme)
layer1
theme = "_dark"
line.encoding.x.axis.tickColor=colors["eco-background"]
bars.encoding.x.axis.labelColor=scale_lightness(colors["eco-gray"], 1.6)
layer1 = (
alt.vconcat(
(bars + labels1 + labels2 + labels3).properties(height=250, width=400),
(line + points + labels0).properties(height=80, width=400),
spacing=5,
)
.configure_view(stroke=None)
.properties(title=title)
)
layer1 = layer1.configure_axisYQuantitative(labelFontSize=12)
layer1 = layer1.configure_axisXQuantitative(labelFontSize=12)
layer1.config.font="Georgia"
layer1.config.background=colors["eco-background"]
layer1.config.view.stroke=None
layer1.title.fontSize = 14
layer1.title.subtitleFontSize = 12
layer1.title.dy -= 2
layer1.title.color = colors["eco-dot"]
layer1.title.subtitleColor = colors["eco-dot"]
layer1.save("visualisation/" + f + theme + ".json")
layer1.save("visualisation/" + f + theme + ".png",scale_factor=2.0)
layer1.save("visualisation/" + f + theme + ".svg")
readme = re.sub(f, f + theme, readme)
open("README.md", "a").write(readme)
layer1
###Output
WARN Domains that should be unioned has conflicting sort properties. Sort will be set to true.
WARN Domains that should be unioned has conflicting sort properties. Sort will be set to true.
WARN Domains that should be unioned has conflicting sort properties. Sort will be set to true.
WARN Domains that should be unioned has conflicting sort properties. Sort will be set to true.
###Markdown
Fig 2
###Code
# polar chart
df = pd.read_excel(
"raw/Econ_observatory_figures_(2).xlsx", sheet_name="Sheet2", usecols="A:H"
)
df["Unnamed: 0"] = df["Unnamed: 0"].ffill()
df.columns = ["cat", "subcat", "neg", "o", "pos", "hi", "hi_dis", "total"]
df = df.set_index(["cat", "subcat"])[["hi"]].stack().reset_index()
df.columns = ["cat", "subcat", "a", "value"]
df=df.drop('a',axis=1)
df['one']=1
f = "fig2_polar"
f2 = eco_git_path + f + ".csv"
df.to_csv("data/" + f + ".csv")
###Output
_____no_output_____ |
Communication _Infrastructure/BCID.ipynb | ###Markdown
Introduction : Business Communication Infrastructure Discovery
###Code
# Date of creation : 17 july 2021
# Version : 1.0
# Description : This notebook helps in the discovery of infrastructure that supports business communication
###Output
_____no_output_____
###Markdown
Importing packages
###Code
# Importing packages
import os
###Output
_____no_output_____
###Markdown
Getting the location of working directory
###Code
# Getting the location of working directory
def current_directory_location():
cwd=os.getcwd()
print("Location of current directory : ", cwd)
current_directory_location()
###Output
Location of current directory : /root/Desktop/offensivenotebook/Business Communication Infrastructure Discovery
###Markdown
Making directory name results to save the result
###Code
# Making directory name results
def directory_results():
try:
os.mkdir("Results")
except Exception as e:
print(e)
directory_results()
###Output
[Errno 17] File exists: 'Results'
###Markdown
Tools
###Code
# Tools and services relevant to business communication infrastructure discovery
tools = [
    "Citrix",
    "Outlook 365",
    "VPN Portal attack tools",
    "MX Records tools",
    "Lync & Skype",
    "Exchange services tool",
    "Mxtoolbox",
    "Lyncsmash",
    "ruler",
]
###Output
_____no_output_____ |
chapter/machine_learning/python_machine_learning_modules.ipynb | ###Markdown
Python provides many bindings for machine learning libraries, some specialized for technologies such as neural networks, and others geared towards novice users. For our discussion, we focus on the powerful and popular Scikit-learn module. Scikit-learn is distinguished by its consistent and sensible API, its wealth of machine learning algorithms, its clear documentation, and its readily available datasets that make it easy to follow along with the online documentation. Like Pandas, Scikit-learn relies on Numpy for numerical arrays. Since its release in 2007, Scikit-learn has become the most widely used, general-purpose, open-source machine learning module, popular in both industry and academia. As with all of the Python modules we use, Scikit-learn is available on all the major platforms. To get started, let's revisit the familiar ground of linear regression using Scikit-learn. First, let's create some data.
###Code
%matplotlib inline
import numpy as np
from matplotlib.pylab import subplots
from sklearn.linear_model import LinearRegression
X = np.arange(10) # create some data
Y = X+np.random.randn(10) # linear with noise
###Output
_____no_output_____
###Markdown
We next import and create an instance of the `LinearRegression` class from Scikit-learn.
###Code
from sklearn.linear_model import LinearRegression
lr=LinearRegression() # create model
###Output
_____no_output_____
###Markdown
Scikit-learn has a wonderfully consistent API. All Scikit-learn objects use the `fit` method to compute model parameters and the `predict` method to evaluate the model. For the `LinearRegression` instance, the `fit` method computes the coefficients of the linear fit. This method requires a matrix of inputs where the rows are the samples and the columns are the features. The *targets* of the regression are the `Y` values, which must be correspondingly shaped, as in the following,
###Code
X,Y = X.reshape((-1,1)), Y.reshape((-1,1))
lr.fit(X,Y)
lr.coef_
###Output
_____no_output_____
###Markdown
**Programming Tip.** The negative one in the `reshape((-1,1))` call above is for the truly lazy. Using a negative one tells Numpy to figure out what that dimension should be given the other dimension and the number of array elements. The `coef_` property of the linear regression object shows the estimated parameters for the fit. The convention is to denote estimated parameters with a trailing underscore. The model has a `score` method that computes the $R^2$ value for the regression. Recall from our statistics chapter ([ch:stats:sec:reg](ch:stats:sec:reg)) that the $R^2$ value is an indicator of the quality of the fit and varies between zero (bad fit) and one (perfect fit).
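As a quick aside, the negative-one inference described in the tip can be checked directly; this is an illustrative snippet, not part of the chapter's code:

```python
# Illustrative check of the "-1 means infer this dimension" behavior of reshape.
import numpy as np
a = np.arange(6)
print(a.reshape((-1, 1)).shape)  # (6, 1): NumPy infers 6 rows from the 6 elements
print(a.reshape((2, -1)).shape)  # (2, 3): the -1 is resolved to 3 columns
```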
###Code
lr.score(X,Y)
###Output
_____no_output_____
###Markdown
Now that we have this fitted, we can evaluate the fit using the `predict` method,
###Code
xi = np.linspace(0,10,15) # more points to draw
xi = xi.reshape((-1,1)) # reshape as columns
yp = lr.predict(xi)
###Output
_____no_output_____
###Markdown
The resulting fit is shown in [Figure](fig:python_machine_learning_modules_002)
###Code
fig,ax=subplots()
_=ax.plot(xi,yp,'-k',lw=3)
_=ax.plot(X,Y,'o',ms=20,color='gray')
_=ax.tick_params(labelsize='x-large')
_=ax.set_xlabel('$X$')
_=ax.set_ylabel('$Y$')
fig.tight_layout()
fig.savefig('fig-machine_learning/python_machine_learning_modules_002.pdf')
###Output
_____no_output_____
###Markdown
The Scikit-learn module can easily perform basic linear regression. The circles show the training data and the fitted line is shown in black. **Multilinear Regression.** The Scikit-learn module easily extends linear regression to multiple dimensions. For example, for multi-linear regression, $$y = \alpha_0 + \alpha_1 x_1 + \alpha_2 x_2 + \ldots + \alpha_n x_n$$ The problem is to find all of the $\alpha$ terms given the training set $\left\{x_1,x_2,\ldots,x_n,y\right\}$. We can create another example data set and see how this works,
###Code
X=np.random.randint(20,size=(10,2))
Y=X.dot([1,3])+1 + np.random.randn(X.shape[0])*20
ym=Y/Y.max() # scale for marker size
fig,ax=subplots()
_=ax.scatter(X[:,0],X[:,1],ym*400,color='gray',alpha=.7,label=r'$Y=f(X_1,X_2)$')
_=ax.set_xlabel(r'$X_1$',fontsize=22)
_=ax.set_ylabel(r'$X_2$',fontsize=22)
_=ax.set_title('Two dimensional regression',fontsize=20)
_=ax.legend(loc=4,fontsize=22)
fig.tight_layout()
fig.savefig('fig-machine_learning/python_machine_learning_modules_003.png')
###Output
_____no_output_____
###Markdown
Scikit-learn can easily perform multi-linear regression. The size of the circles indicates the value of the two-dimensional function of $(X_1,X_2)$. [Figure](fig:python_machine_learning_modules_003) shows the two-dimensional regression example, where the size of the circles is proportional to the targeted $Y$ value. Note that we salted the output with random noise just to keep things interesting. Nonetheless, the interface with Scikit-learn is the same,
###Code
lr=LinearRegression()
lr.fit(X,Y)
print(lr.coef_)
###Output
[0.42919895 0.70632676]
###Markdown
The `coef_` variable now has two terms in it, corresponding to the two input dimensions. Note that the constant offset is already built-in and is an option on the `LinearRegression` constructor. [Figure](fig:python_machine_learning_modules_004) shows how the regression performs.
###Code
_=lr.fit(X,Y)
yp=lr.predict(X)
ypm=yp/yp.max() # scale for marker size
_=ax.scatter(X[:,0],X[:,1],ypm*400,marker='x',color='k',lw=2,alpha=.7,label=r'$\hat{Y}$')
_=ax.legend(loc=0,fontsize=22)
_=fig.canvas.draw()
fig.savefig('fig-machine_learning/python_machine_learning_modules_004.png')
###Output
_____no_output_____
###Markdown
The predicted data is plotted in black. It overlays the training data, indicating a good fit. **Polynomial Regression.** We can extend this to include polynomial regression by using the `PolynomialFeatures` in the `preprocessing` sub-module. To keep it simple, let's go back to our one-dimensional example. First, let's create some synthetic data,
###Code
from sklearn.preprocessing import PolynomialFeatures
X = np.arange(10).reshape(-1,1) # create some data
Y = X+X**2+X**3+ np.random.randn(*X.shape)*80
###Output
_____no_output_____
###Markdown
Next, we have to create a transformation from `X` to a polynomial of `X`,
###Code
qfit = PolynomialFeatures(degree=2) # quadratic
Xq = qfit.fit_transform(X)
print(Xq)
###Output
[[ 1. 0. 0.]
[ 1. 1. 1.]
[ 1. 2. 4.]
[ 1. 3. 9.]
[ 1. 4. 16.]
[ 1. 5. 25.]
[ 1. 6. 36.]
[ 1. 7. 49.]
[ 1. 8. 64.]
[ 1. 9. 81.]]
###Markdown
Note there is an automatic constant term in the output $0^{th}$ column where `fit_transform` has mapped the single-column input into a set of columns representing the individual polynomial terms. The middle column has the linear term, and the last has the quadratic term. With these polynomial features stacked as columns of `Xq`, all we have to do is `fit` and `predict` again. The following draws a comparison between the linear regression and the quadratic regression (see [Figure](fig:python_machine_learning_modules_005)),
###Code
lr=LinearRegression() # create linear model
qr=LinearRegression() # create quadratic model
lr.fit(X,Y) # fit linear model
qr.fit(Xq,Y) # fit quadratic model
lp = lr.predict(xi)
qp = qr.predict(qfit.fit_transform(xi))
###Output
_____no_output_____
###Markdown
The title shows the $R^2$ score for the linear and quadratic regressions.
###Code
fig,ax=subplots()
_=ax.plot(xi,lp,'--k',lw=3,label='linear fit')
_=ax.plot(xi,qp,'-k',lw=3,label='quadratic fit')
_=ax.plot(X.flat,Y.flat,'o',color='gray',ms=10,label='training data')
_=ax.legend(loc=0)
_=ax.set_title('quad R2={:.3}; linear R2={:.3}'.format(lr.score(X,Y),qr.score(Xq,Y)))
_=ax.set_xlabel('$X$',fontsize=22)
_=ax.set_ylabel(r'$X+X^2+X^3+\epsilon$',fontsize=22)
fig.savefig('fig-machine_learning/python_machine_learning_modules_005.png')
###Output
_____no_output_____ |
examples/Python usage examples.ipynb | ###Markdown
Import itab module
###Code
import itab
###Output
_____no_output_____
###Markdown
Simple example The data file looks like this:
###Code
%%bash
head -n +4 simple.itab.tsv
tail -n +5 simple.itab.tsv | csvlook -t
###Output
#
# Simple example file
#
## schema=./simple.itab.schema.tsv
|----------------------+---------+---------|
| DATE | INTEGER | FLOAT |
|----------------------+---------+---------|
| 2015-05-22 10:39:27 | 23 | 3.4 |
| avui | 22.3 | 3.4e10 |
| 2015/05/22 10:39:27 | 4 | 2.4 |
| 2012-05-22 10:45:22 | 12 | 2.5 |
|----------------------+---------+---------|
###Markdown
The schema data file looks like this:
###Code
%%bash
head -n +3 simple.itab.schema.tsv
tail -n +4 simple.itab.schema.tsv | csvlook -t
###Output
#
# This is a simple example of an iTab schema file.
#
|----------+------------------------------+----------------|
| header | reader | validator |
|----------+------------------------------+----------------|
| DATE | date(x, '%Y-%m-%d %H:%M:%S') | x.month == 5 |
| INTEGER | int(x) | x > 10 |
| FLOAT | float(x) | 2.3 < x < 2.6 |
|----------+------------------------------+----------------|
###Markdown
Basic reader that returns each row as a list of parsed cell values
###Code
reader = itab.reader('simple.itab.tsv')
for row, errors in reader:
if len(errors) > 0:
print("\nLine {}. ERRORS: \n\t{}".format(reader.line_num, '\n\t'.join(errors)))
else:
print("\nLine {}. VALUES: \n\t{}".format(reader.line_num, '\t'.join(["{}".format(c) for c in row])))
###Output
Line 6. ERRORS:
Validation error at line 6 column 2: FLOAT. [value:'3.4' validator:'2.3 < x < 2.6']
Line 7. ERRORS:
Reading error at line 7 column 1: DATE. [value:'avui' reader:'date(x, '%Y-%m-%d %H:%M:%S')']
Reading error at line 7 column 2: INTEGER. [value:'22.3' reader:'int(x)']
Validation error at line 7 column 2: FLOAT. [value:'3.4e10' validator:'2.3 < x < 2.6']
Line 8. ERRORS:
Reading error at line 8 column 1: DATE. [value:'2015/05/22 10:39:27' reader:'date(x, '%Y-%m-%d %H:%M:%S')']
Validation error at line 8 column 1: INTEGER. [value:'4' validator:'x > 10']
Line 9. VALUES:
2012-05-22 10:45:22 12 2.5
###Markdown
Reader that returns each row as a dictionary
###Code
reader = itab.DictReader('simple.itab.tsv')
for row, errors in reader:
if len(errors) == 0:
print("\nLine {}. VALUES: \n\t{}".format(reader.line_num, row))
###Output
Line 9. VALUES:
{'FLOAT': 2.5, 'DATE': datetime.datetime(2012, 5, 22, 10, 45, 22), 'INTEGER': 12}
###Markdown
Pass the schema as a python dictionary
###Code
reader = itab.DictReader('simple.itab.tsv', schema={'fields': {'INTEGER': {'reader': 'int(x)'}}})
for row, errors in reader:
if len(errors) == 0:
print("\nLine {}. VALUES: \n\t{}".format(reader.line_num, row))
###Output
WARNING:root:Unknown header 'DATE'
WARNING:root:Unknown header 'FLOAT'
###Markdown
Advanced example The schema file looks like this:
###Code
%%bash
head -n +3 mutations.itab.schema.tsv
tail -n +4 mutations.itab.schema.tsv | csvlook -t
###Output
#
# Mutations file schema
#
|-------------+----------------+----------------------------------------------------|
| header | reader | validator |
|-------------+----------------+----------------------------------------------------|
| CHROMOSOME | str(x).upper() | x in ([str(c) for c in range(1,23)] + ['X', 'Y']) |
| POSITION | int(x) | x > 0 |
| REF | str(x).upper() | x in "ACTG" |
| ALT | str(x).upper() | x in "ACTG" |
| SAMPLE | str(x) | match("^CGP_donor_[0-9]{7}$", x) |
| TYPE | str(x).lower() | x in ['subs'] |
|-------------+----------------+----------------------------------------------------|
###Markdown
And the data file is a standard TSV file without any metadata:
###Code
%%bash
head mutations.itab.tsv | csvlook -t
###Output
|-------------+----------+-----+-----+-------------------+-------|
| CHROMOSOME | POSITION | REF | ALT | SAMPLE | TYPE |
|-------------+----------+-----+-----+-------------------+-------|
| 1 | 99150 | T | A | CGP_donor_1397260 | subs |
| 1 | 231793 | A | G | CGP_donor_1337223 | subs |
| 1 | 404447 | C | T | CGP_donor_1337236 | subs |
| 1 | 559388 | G | T | CGP_donor_1397282 | subs |
| 1 | 585741 | G | T | CGP_donor_1353434 | subs |
| 1 | 661926 | G | A | CGP_donor_1163904 | subs |
| 1 | 717900 | T | C | CGP_donor_1234124 | subs |
| 1 | 718896 | T | A | CGP_donor_1186990 | subs |
| 1 | 753461 | C | T | CGP_donor_1397086 | subs |
|-------------+----------+-----+-----+-------------------+-------|
###Markdown
In this example the tabulated file is a normal TSV file without any metadata. For this reason we have to provide a valid schema file path or URL when we open the reader. We will then load the result as a pandas dataframe.
###Code
import pandas as pd
def load_mutations(file):
reader = itab.DictReader(file, schema='mutations.itab.schema.tsv')
for ix, (row, errors) in enumerate(reader, start=1):
if len(errors) > 0:
# Manage here the errors of parsing and validation
print("\nLine {}. ERRORS: \n {}".format(reader.line_num, '\n\t'.join(errors)))
continue
yield row
data = pd.DataFrame.from_dict(load_mutations('mutations.itab.tsv'))
data.head()
data.dtypes
###Output
_____no_output_____ |
src/Tile-Coding/Tile_Coding.ipynb | ###Markdown
Tile Coding---Tile coding is an innovative way of discretizing a continuous space that enables better generalization compared to a single grid-based approach. The fundamental idea is to create several overlapping grids or _tilings_; then for any given sample value, you need only check which tiles it lies in. You can then encode the original continuous value by a vector of integer indices or bits that identifies each activated tile. 1. Import the Necessary Packages
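As a rough toy sketch of the idea (separate from the exercise below, with made-up numbers), two slightly offset 1-D tilings encode a continuous value as one tile index per tiling:

```python
# Toy 1-D sketch of tile coding; the exercise below builds the full version.
import numpy as np
low, high, bins = 0.0, 1.0, 4
offsets = [0.0, 0.1]                                  # each tiling is shifted slightly
grids = [np.linspace(low, high, bins + 1)[1:-1] + o for o in offsets]
x = 0.55
print([int(np.digitize(x, g)) for g in grids])        # [2, 1]: one active tile per tiling
```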
###Code
# Import common libraries
import sys
import gym
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Set plotting options
%matplotlib inline
plt.style.use('ggplot')
np.set_printoptions(precision=3, linewidth=120)
###Output
_____no_output_____
###Markdown
2. Specify the Environment, and Explore the State and Action SpacesWe'll use [OpenAI Gym](https://gym.openai.com/) environments to test and develop our algorithms. These simulate a variety of classic as well as contemporary reinforcement learning tasks. Let's begin with an environment that has a continuous state space, but a discrete action space.
###Code
# Create an environment
env = gym.make('Acrobot-v1')
env.seed(505);
# Explore state (observation) space
print("State space:", env.observation_space)
print("- low:", env.observation_space.low)
print("- high:", env.observation_space.high)
# Explore action space
print("Action space:", env.action_space)
###Output
_____no_output_____
###Markdown
Note that the state space is multi-dimensional, with most dimensions ranging from -1 to 1 (positions of the two joints), while the final two dimensions have a larger range. How do we discretize such a space using tiles? 3. TilingLet's first design a way to create a single tiling for a given state space. This is very similar to a uniform grid! The only difference is that you should include an offset for each dimension that shifts the split points.For instance, if `low = [-1.0, -5.0]`, `high = [1.0, 5.0]`, `bins = (10, 10)`, and `offsets = (-0.1, 0.5)`, then return a list of 2 NumPy arrays (2 dimensions) each containing the following split points (9 split points per dimension):```[array([-0.9, -0.7, -0.5, -0.3, -0.1, 0.1, 0.3, 0.5, 0.7]), array([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5, 4.5])]```Notice how the split points for the first dimension are offset by `-0.1`, and for the second dimension are offset by `+0.5`. This might mean that some of our tiles, especially along the perimeter, are partially outside the valid state space, but that is unavoidable and harmless.
###Code
def create_tiling_grid(low, high, bins=(10, 10), offsets=(0.0, 0.0)):
"""Define a uniformly-spaced grid that can be used for tile-coding a space.
Parameters
----------
low : array_like
Lower bounds for each dimension of the continuous space.
high : array_like
Upper bounds for each dimension of the continuous space.
bins : tuple
Number of bins or tiles along each corresponding dimension.
offsets : tuple
Split points for each dimension should be offset by these values.
Returns
-------
grid : list of array_like
A list of arrays containing split points for each dimension.
"""
# TODO: Implement this
grid = [np.linspace(low[dim],high[dim],bins[dim] + 1)[1:-1] + offsets[dim] for dim in range(len(bins))]
# grid = [grid[i][1:] if offsets[i] < 0 else grid[i][0:-1] for i in range(len(bins))]
return grid
low = [-1.0, -5.0]
high = [1.0, 5.0]
create_tiling_grid(low, high, bins=(10, 10), offsets=(-0.1, 0.5)) # [test]
###Output
_____no_output_____
###Markdown
You can now use this function to define a set of tilings that are a little offset from each other.
###Code
def create_tilings(low, high, tiling_specs):
"""Define multiple tilings using the provided specifications.
Parameters
----------
low : array_like
Lower bounds for each dimension of the continuous space.
high : array_like
Upper bounds for each dimension of the continuous space.
tiling_specs : list of tuples
A sequence of (bins, offsets) to be passed to create_tiling_grid().
Returns
-------
tilings : list
A list of tilings (grids), each produced by create_tiling_grid().
"""
# TODO: Implement this
tilings = [create_tiling_grid(low,high,bins,offsets) for bins,offsets in tiling_specs]
return tilings
# Tiling specs: [(<bins>, <offsets>), ...]
tiling_specs = [((10, 10), (-0.066, -0.33)),
((10, 10), (0.0, 0.0)),
((10, 10), (0.066, 0.33))]
tilings = create_tilings(low, high, tiling_specs)
###Output
_____no_output_____
###Markdown
It may be hard to gauge whether you are getting desired results or not. So let's try to visualize these tilings.
###Code
from matplotlib.lines import Line2D
def visualize_tilings(tilings):
"""Plot each tiling as a grid."""
prop_cycle = plt.rcParams['axes.prop_cycle']
colors = prop_cycle.by_key()['color']
linestyles = ['-', '--', ':']
legend_lines = []
fig, ax = plt.subplots(figsize=(10, 10))
for i, grid in enumerate(tilings):
for x in grid[0]:
l = ax.axvline(x=x, color=colors[i % len(colors)], linestyle=linestyles[i % len(linestyles)], label=i)
for y in grid[1]:
l = ax.axhline(y=y, color=colors[i % len(colors)], linestyle=linestyles[i % len(linestyles)])
legend_lines.append(l)
ax.grid('off')
ax.legend(legend_lines, ["Tiling #{}".format(t) for t in range(len(legend_lines))], facecolor='white', framealpha=0.9)
ax.set_title("Tilings")
return ax # return Axis object to draw on later, if needed
visualize_tilings(tilings);
###Output
_____no_output_____
###Markdown
Great! Now that we have a way to generate these tilings, we can next write our encoding function that will convert any given continuous state value to a discrete vector. 4. Tile EncodingImplement the following to produce a vector that contains the indices for each tile that the input state value belongs to. The shape of the vector can be the same as the arrangment of tiles you have, or it can be ultimately flattened for convenience.You can use the same `discretize()` function here from grid-based discretization, and simply call it for each tiling.
###Code
def discretize(sample, grid):
"""Discretize a sample as per given grid.
Parameters
----------
sample : array_like
A single sample from the (original) continuous space.
grid : list of array_like
A list of arrays containing split points for each dimension.
Returns
-------
discretized_sample : array_like
A sequence of integers with the same number of dimensions as sample.
"""
# TODO: Implement this
discretized_sample = tuple(np.digitize(s,g) for s,g in zip(sample,grid))
return discretized_sample
def tile_encode(sample, tilings, flatten=False):
"""Encode given sample using tile-coding.
Parameters
----------
sample : array_like
A single sample from the (original) continuous space.
tilings : list
A list of tilings (grids), each produced by create_tiling_grid().
flatten : bool
If true, flatten the resulting binary arrays into a single long vector.
Returns
-------
encoded_sample : list or array_like
A list of binary vectors, one for each tiling, or flattened into one.
"""
# TODO: Implement this
encoded_sample = [discretize(sample,grid) for grid in tilings]
return np.concatenate(encoded_sample) if flatten else encoded_sample
# Test with some sample values
samples = [(-1.2 , -5.1 ),
(-0.75, 3.25),
(-0.5 , 0.0 ),
( 0.25, -1.9 ),
( 0.15, -1.75),
( 0.75, 2.5 ),
( 0.7 , -3.7 ),
( 1.0 , 5.0 )]
encoded_samples = [tile_encode(sample, tilings) for sample in samples]
print("\nSamples:", repr(samples), sep="\n")
print("\nEncoded samples:", repr(encoded_samples), sep="\n")
###Output
_____no_output_____
###Markdown
Note that we did not flatten the encoding above, which is why each sample's representation is a pair of indices for each tiling. This makes it easy to visualize it using the tilings.
###Code
from matplotlib.patches import Rectangle
def visualize_encoded_samples(samples, encoded_samples, tilings, low=None, high=None):
"""Visualize samples by activating the respective tiles."""
samples = np.array(samples) # for ease of indexing
# Show tiling grids
ax = visualize_tilings(tilings)
# If bounds (low, high) are specified, use them to set axis limits
if low is not None and high is not None:
ax.set_xlim(low[0], high[0])
ax.set_ylim(low[1], high[1])
else:
# Pre-render (invisible) samples to automatically set reasonable axis limits, and use them as (low, high)
ax.plot(samples[:, 0], samples[:, 1], 'o', alpha=0.0)
low = [ax.get_xlim()[0], ax.get_ylim()[0]]
high = [ax.get_xlim()[1], ax.get_ylim()[1]]
# Map each encoded sample (which is really a list of indices) to the corresponding tiles it belongs to
tilings_extended = [np.hstack((np.array([low]).T, grid, np.array([high]).T)) for grid in tilings] # add low and high ends
tile_centers = [(grid_extended[:, 1:] + grid_extended[:, :-1]) / 2 for grid_extended in tilings_extended] # compute center of each tile
tile_toplefts = [grid_extended[:, :-1] for grid_extended in tilings_extended] # compute topleft of each tile
tile_bottomrights = [grid_extended[:, 1:] for grid_extended in tilings_extended] # compute bottomright of each tile
prop_cycle = plt.rcParams['axes.prop_cycle']
colors = prop_cycle.by_key()['color']
for sample, encoded_sample in zip(samples, encoded_samples):
for i, tile in enumerate(encoded_sample):
# Shade the entire tile with a rectangle
topleft = tile_toplefts[i][0][tile[0]], tile_toplefts[i][1][tile[1]]
bottomright = tile_bottomrights[i][0][tile[0]], tile_bottomrights[i][1][tile[1]]
ax.add_patch(Rectangle(topleft, bottomright[0] - topleft[0], bottomright[1] - topleft[1],
color=colors[i], alpha=0.33))
# In case sample is outside tile bounds, it may not have been highlighted properly
if any(sample < topleft) or any(sample > bottomright):
# So plot a point in the center of the tile and draw a connecting line
cx, cy = tile_centers[i][0][tile[0]], tile_centers[i][1][tile[1]]
ax.add_line(Line2D([sample[0], cx], [sample[1], cy], color=colors[i]))
ax.plot(cx, cy, 's', color=colors[i])
# Finally, plot original samples
ax.plot(samples[:, 0], samples[:, 1], 'o', color='r')
ax.margins(x=0, y=0) # remove unnecessary margins
ax.set_title("Tile-encoded samples")
return ax
visualize_encoded_samples(samples, encoded_samples, tilings);
###Output
_____no_output_____
###Markdown
Inspect the results and make sure you understand how the corresponding tiles are being chosen. Note that some samples may have one or more tiles in common. 5. Q-Table with Tile CodingThe next step is to design a special Q-table that is able to utilize this tile coding scheme. It should have the same kind of interface as a regular table, i.e. given a `<state, action>` pair, it should return a `<value>`. Similarly, it should also allow you to update the `<value>` for a given `<state, action>` pair (note that this should update all the tiles that `<state>` belongs to). The `<state>` supplied here is assumed to be from the original continuous state space, and `<action>` is discrete (an integer index). The Q-table should internally convert the `<state>` to its tile-coded representation when required.
###Code
class QTable:
"""Simple Q-table."""
def __init__(self, state_size, action_size):
"""Initialize Q-table.
Parameters
----------
state_size : tuple
Number of discrete values along each dimension of state space.
action_size : int
Number of discrete actions in action space.
"""
self.state_size = state_size
self.action_size = action_size
# TODO: Create Q-table, initialize all Q-values to zero
# Note: If state_size = (9, 9), action_size = 2, q_table.shape should be (9, 9, 2)
self.q_table = np.zeros(shape=self.state_size+(self.action_size,))
print("QTable(): size =", self.q_table.shape)
class TiledQTable:
"""Composite Q-table with an internal tile coding scheme."""
def __init__(self, low, high, tiling_specs, action_size):
"""Create tilings and initialize internal Q-table(s).
Parameters
----------
low : array_like
Lower bounds for each dimension of state space.
high : array_like
Upper bounds for each dimension of state space.
tiling_specs : list of tuples
A sequence of (bins, offsets) to be passed to create_tilings() along with low, high.
action_size : int
Number of discrete actions in action space.
"""
self.tilings = create_tilings(low, high, tiling_specs)
self.state_sizes = [tuple(len(splits)+1 for splits in tiling_grid) for tiling_grid in self.tilings]
self.action_size = action_size
self.q_tables = [QTable(state_size, self.action_size) for state_size in self.state_sizes]
print("TiledQTable(): no. of internal tables = ", len(self.q_tables))
def get(self, state, action):
"""Get Q-value for given <state, action> pair.
Parameters
----------
state : array_like
Vector representing the state in the original continuous space.
action : int
Index of desired action.
Returns
-------
value : float
Q-value of given <state, action> pair, averaged from all internal Q-tables.
"""
# TODO: Encode state to get tile indices
encoded_state = tile_encode(state,self.tilings)
# TODO: Retrieve q-value for each tiling, and return their average
value = 0.0
for idx,q_table in zip(encoded_state,self.q_tables):
value += q_table.q_table[tuple(idx+(action,))]
value /= len(self.tilings)
return value
def update(self, state, action, value, alpha=0.1):
"""Soft-update Q-value for given <state, action> pair to value.
Instead of overwriting Q(state, action) with value, perform soft-update:
Q(state, action) = alpha * value + (1.0 - alpha) * Q(state, action)
Parameters
----------
state : array_like
Vector representing the state in the original continuous space.
action : int
Index of desired action.
value : float
Desired Q-value for <state, action> pair.
alpha : float
Update factor to perform soft-update, in [0.0, 1.0] range.
"""
# TODO: Encode state to get tile indices
encoded_state = tile_encode(state,self.tilings)
# TODO: Update q-value for each tiling by update factor alpha
for idx,q_table in zip(encoded_state,self.q_tables):
q_table.q_table[tuple(idx+(action,))] += alpha*(value-q_table.q_table[tuple(idx+(action,))])
# Test with a sample Q-table
tq = TiledQTable(low, high, tiling_specs, 2)
s1 = 3; s2 = 4; a = 0; q = 1.0
print("[GET] Q({}, {}) = {}".format(samples[s1], a, tq.get(samples[s1], a))) # check value at sample = s1, action = a
print("[UPDATE] Q({}, {}) = {}".format(samples[s2], a, q)); tq.update(samples[s2], a, q) # update value for sample with some common tile(s)
print("[GET] Q({}, {}) = {}".format(samples[s1], a, tq.get(samples[s1], a))) # check value again, should be slightly updated
###Output
_____no_output_____
###Markdown
If you update the q-value for a particular state (say, `(0.25, -1.91)`) and action (say, `0`), then you should notice the q-value of a nearby state (e.g. `(0.15, -1.75)` and same action) has changed as well! This is how tile-coding is able to generalize values across the state space better than a single uniform grid. 6. Implement a Q-Learning Agent using Tile-CodingNow it's your turn to apply this discretization technique to design and test a complete learning agent!
###Code
class QLearningAgent:
"""Q-Learning agent that can act on a continuous state space by discretizing it."""
def __init__(self, env, tq, alpha=0.02, gamma=0.99,
epsilon=1.0, epsilon_decay_rate=0.9995, min_epsilon=.01, seed=0):
"""Initialize variables, create grid for discretization."""
# Environment info
self.env = env
self.tq = tq
self.state_sizes = tq.state_sizes # list of state sizes for each tiling
self.action_size = self.env.action_space.n # 1-dimensional discrete action space
self.seed = np.random.seed(seed)
print("Environment:", self.env)
print("State space sizes:", self.state_sizes)
print("Action space size:", self.action_size)
# Learning parameters
self.alpha = alpha # learning rate
self.gamma = gamma # discount factor
self.epsilon = self.initial_epsilon = epsilon # initial exploration rate
self.epsilon_decay_rate = epsilon_decay_rate # how quickly should we decrease epsilon
self.min_epsilon = min_epsilon
def reset_episode(self, state):
"""Reset variables for a new episode."""
# Gradually decrease exploration rate
self.epsilon *= self.epsilon_decay_rate
self.epsilon = max(self.epsilon, self.min_epsilon)
self.last_state = state
Q_s = [self.tq.get(state, action) for action in range(self.action_size)]
self.last_action = np.argmax(Q_s)
return self.last_action
def reset_exploration(self, epsilon=None):
"""Reset exploration rate used when training."""
self.epsilon = epsilon if epsilon is not None else self.initial_epsilon
def act(self, state, reward=None, done=None, mode='train'):
"""Pick next action and update internal Q table (when mode != 'test')."""
Q_s = [self.tq.get(state, action) for action in range(self.action_size)]
# Pick the best action from Q table
greedy_action = np.argmax(Q_s)
if mode == 'test':
# Test mode: Simply produce an action
action = greedy_action
else:
# Train mode (default): Update Q table, pick next action
# Note: We update the Q table entry for the *last* (state, action) pair with current state, reward
value = reward + self.gamma * max(Q_s)
self.tq.update(self.last_state, self.last_action, value, self.alpha)
# Exploration vs. exploitation
do_exploration = np.random.uniform(0, 1) < self.epsilon
if do_exploration:
# Pick a random action
action = np.random.randint(0, self.action_size)
else:
# Pick the greedy action
action = greedy_action
# Roll over current state, action for next step
self.last_state = state
self.last_action = action
return action
n_bins = 5
bins = tuple([n_bins]*env.observation_space.shape[0])
offset_pos = (env.observation_space.high - env.observation_space.low)/(3*n_bins)
tiling_specs = [(bins, -offset_pos),
(bins, tuple([0.0]*env.observation_space.shape[0])),
(bins, offset_pos)]
tq = TiledQTable(env.observation_space.low,
env.observation_space.high,
tiling_specs,
env.action_space.n)
agent = QLearningAgent(env, tq)
def run(agent, env, num_episodes=10000, mode='train',max_step = 1000):
"""Run agent in given reinforcement learning environment and return scores."""
scores = []
max_avg_score = -np.inf
for i_episode in range(1, num_episodes+1):
# Initialize episode
state = env.reset()
action = agent.reset_episode(state)
total_reward = 0
done = False
# Roll out
if max_step != None:
# Roll out for max_step steps
for i in range(max_step):
state, reward, _ , __ = env.step(action)
total_reward += reward
action = agent.act(state, reward, done, mode)
else:
# Roll out steps until done
while not done:
state, reward, done, _ = env.step(action)
total_reward += reward
action = agent.act(state, reward, done, mode)
# Save final score
scores.append(total_reward)
# Print episode stats
if mode == 'train':
if len(scores) > 100:
avg_score = np.mean(scores[-100:])
if avg_score > max_avg_score:
max_avg_score = avg_score
if i_episode % 100 == 0:
print("\rEpisode {}/{} | Max Average Score: {}".format(i_episode, num_episodes, max_avg_score), end="")
sys.stdout.flush()
return scores
scores = run(agent, env)
def plot_scores(scores, rolling_window=100):
"""Plot scores and optional rolling mean using specified window."""
plt.plot(scores); plt.title("Scores");
rolling_mean = pd.Series(scores).rolling(rolling_window).mean()
plt.plot(rolling_mean);
return rolling_mean
rolling_mean = plot_scores(scores)
###Output
_____no_output_____ |
notebooks/Map Outlist Rs's onto BGC2 Cats.ipynb | ###Markdown
Going to Map Rs's onto BGC2 Cats.
###Code
from os.path import join
from csv import DictReader, DictWriter
fieldnames = '#ID DescID M200b Vmax Vrms R200b Rs Np X Y Z VX VY VZ Parent_ID'.split(' ')
###Output
_____no_output_____
###Markdown
missed_keys = dict()
for cosmo_idx in xrange(40):
    halodir = '/u/ki/swmclau2/des/NewAemulusBoxes/Box%03d/halos/m200b/'%cosmo_idx
    halodir = '/home/users/swmclau2/scratch/NewTrainingBoxes/Box%03d/halos/m200b/'%cosmo_idx
    halodir = '/home/users/swmclau2/scratch/TrainingBoxes/Box000/halos/m200b/'
    halodir = '/home/users/swmclau2/scratch/TestBoxes/TestBox000-000/halos/m200b/'
    for snapshot_idx in xrange(10):
        outlist_fname = join(halodir, "out_%d.list"%snapshot_idx)
        bgc2list_fname = join(halodir, "outbgc2_%d.list"%snapshot_idx)
        bgc2list_fname2 = join(halodir, "outbgc2_rs_%d.list"%snapshot_idx)
        print cosmo_idx, snapshot_idx
        outlist_rs = dict()
        with open(outlist_fname) as csvfile:
            reader = DictReader(filter(lambda row: row[0]!='' or row[:3]=='ID', csvfile), delimiter = ' ')
            for row in reader:
                outlist_rs[row['ID']] = row['Rs']
        with open(bgc2list_fname) as oldfile, open(bgc2list_fname2, 'w') as newfile:
            reader = DictReader(filter(lambda row: row[0]!='' or row[:3]=='ID', oldfile), delimiter = ' ')
            writer = DictWriter(newfile, fieldnames, delimiter = ' ')
            writer.writeheader()
            for row in reader:
                try:
                    row['Rs'] = outlist_rs[row['ID']]
                except KeyError:
                    missed_keys[cosmo_idx] = row['ID']
                writer.writerow(row)
    break
###Code
missed_keys = dict()
cosmo_idx = 0
halodir = '/u/ki/swmclau2/des/NewAemulusBoxes/Box%03d/halos/m200b/'%cosmo_idx
#halodir = '/home/users/swmclau2/scratch/NewTrainingBoxes/Box%03d/halos/m200b/'%cosmo_idx
#halodir = '/home/users/swmclau2/scratch/TrainingBoxes/Box000/halos/m200b/'
#halodir = '/home/users/swmclau2/scratch/TestBoxes/TestBox000-000/halos/m200b/'
#for snapshot_idx in xrange(10):
snapshot_idx = 2
outlist_fname = join(halodir, "out_%d.list"%snapshot_idx)
bgc2list_fname = join(halodir, "outbgc2_%d.list"%snapshot_idx)
bgc2list_fname2 = join(halodir, "outbgc2_rs_%d.list"%snapshot_idx)
print (cosmo_idx, snapshot_idx),
outlist_idxs = set()
with open(outlist_fname) as csvfile:
reader = DictReader(filter(lambda row: row[0]!='#' or row[:3]=='#ID', csvfile), delimiter = ' ')
for row in reader:
#outlist_rs[row['#ID']] = row['Rs']
outlist_idxs.add(int(row['#ID']))
bgc2_idxs = set()
with open(bgc2list_fname) as oldfile, open(bgc2list_fname2, 'r') as newfile:
reader = DictReader(filter(lambda row: row[0]!='#' or row[:3]=='#ID', newfile), delimiter = ' ')
#writer = DictWriter(newfile, fieldnames, delimiter = ' ')
#writer.writeheader()
for row in reader:
bgc2_idxs.add(int(row['#ID']) )
#try:
# row['Rs'] = outlist_rs[row['#ID']]
#except KeyError:
# missed_keys[cosmo_idx] = row['#ID']
#writer.writerow(row)
print len(bgc2_idxs - outlist_idxs), len(outlist_idxs - bgc2_idxs)
print bgc2_idxs - outlist_idxs
###Output
set([8836805, 8836806, 8836807, 8836808, 8836809, 8836810, 8836811, 8836812, 8836813, 8836814, 8836815, 8836816, 8836817, 8836818, 8836819, 8836820, 8836821, 8836822, 8836823, 8836824, 8836825, 8836826, 8836827, 8836828, 8836829, 8836830, 8836831, 8836832, 8836833, 8836834, 8836835, 8836836, 8836837, 8836838, 8836839, 8836840, 8836841, 8836842, 8836843, 8836844, 8836845, 8836846, 8836847, 8836848, 8836849, 8836850, 8836851, 8836852, 8836853, 8836854])
###Markdown
(0, 0) 0 0
(0, 1) 528 0
(0, 2) 50 0
(0, 3) 2 0
(0, 4) 0 639
(0, 5) 0 794
(0, 6) 39 0
(0, 7) 2 0
(0, 8) 0 617
(0, 9) 0 636
###Code
bgc2list_rs = dict()
bgc2_zero_rs = set()
with open(bgc2list_fname2) as csvfile:
reader = DictReader(filter(lambda row: row[0]!='#' or row[:3]=='#ID', csvfile), delimiter = ' ')
for row in reader:
bgc2list_rs[int(row['#ID'])] = float(row['Rs'])
if float(row['Rs']) <= 0:
bgc2_zero_rs.add(int(row['#ID']) )
#print row
print len(bgc2_zero_rs)
outlist_rs = dict()
outlist_zero_rs = set()
with open(outlist_fname) as csvfile:
reader = DictReader(filter(lambda row: row[0]!='#' or row[:3]=='#ID', csvfile), delimiter = ' ')
for row in reader:
outlist_rs[int(row['#ID'])] = float(row['Rs'])
if float(row['Rs']) <= 0:
outlist_zero_rs.add(int(row['#ID']) )
#print row
print len(outlist_zero_rs)
###Output
31219
###Markdown
rs_file_keys = set(outlist_rs.keys())
###Code
outlist_counter = 0
bgc2_counter = 0
for key in outlist_zero_rs:
if key in outlist_idxs:
outlist_counter+=1
if key in bgc2_idxs:
bgc2_counter+=1
print outlist_counter, bgc2_counter
print_counter = 0
for idx, rs in outlist_rs.iteritems():
if rs == 0:
print idx,
print_counter+=1
if print_counter > 50:
break
outlist_counter = 0
bgc2_counter = 0
for key in bgc2_zero_rs:
if key in outlist_idxs:
outlist_counter+=1
if key in bgc2_idxs:
bgc2_counter+=1
print outlist_counter, bgc2_counter
#for key in zero_rs
#outlist_rs.values()
import numpy as np
rs_arr = np.array(list(outlist_rs.itervalues())).astype(float)
from matplotlib import pyplot as plt
%matplotlib inline
n_zero_rs = np.sum(rs_arr == 0)
print n_zero_rs, n_zero_rs*1.0/rs_arr.shape[0]
plt.hist(rs_arr)
###Output
_____no_output_____ |
Project_5 - Implement Route Planner/project_notebook.ipynb | ###Markdown
Implementing a Route PlannerIn this project you will use A\* search to implement a "Google-maps" style route planning algorithm. The Map
###Code
# Run this cell first!
from helpers import Map, load_map_10, load_map_40, show_map
import math
%load_ext autoreload
%autoreload 2
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Map Basics
###Code
map_10 = load_map_10()
show_map(map_10)
###Output
_____no_output_____
###Markdown
The map above (run the code cell if you don't see it) shows a disconnected network of 10 intersections. The two intersections on the left are connected to each other but they are not connected to the rest of the road network. This map is quite literal in its expression of distance and connectivity. On the graph above, the edge between 2 nodes(intersections) represents a literal straight road not just an abstract connection of 2 cities.These `Map` objects have two properties you will want to use to implement A\* search: `intersections` and `roads`**Intersections**The `intersections` are represented as a dictionary. In this example, there are 10 intersections, each identified by an x,y coordinate. The coordinates are listed below. You can hover over each dot in the map above to see the intersection number.
###Code
map_10.intersections
###Output
_____no_output_____
###Markdown
**Roads**The `roads` property is a list where `roads[i]` contains a list of the intersections that intersection `i` connects to.
###Code
# this shows that intersection 0 connects to intersections 7, 6, and 5
map_10.roads[0]
# This shows the full connectivity of the map
map_10.roads
# map_40 is a bigger map than map_10
map_40 = load_map_40()
show_map(map_40)
###Output
_____no_output_____
###Markdown
Advanced VisualizationsThe map above shows a network of roads which spans 40 different intersections (labeled 0 through 39). The `show_map` function which generated this map also takes a few optional parameters which might be useful for visualizing the output of the search algorithm you will write.* `start` - The "start" node for the search algorithm.* `goal` - The "goal" node.* `path` - An array of integers which corresponds to a valid sequence of intersection visits on the map.
###Code
# run this code, note the effect of including the optional
# parameters in the function call.
show_map(map_40, start=5, goal=34, path=[5,16,37,12,34])
###Output
_____no_output_____
###Markdown
The Algorithm Writing your algorithmThe algorithm written will be responsible for generating a `path` like the one passed into `show_map` above. In fact, when called with the same map, start and goal, as above your algorithm should produce the path `[5, 16, 37, 12, 34]`. However you must complete several methods before it will work.```bash> PathPlanner(map_40, 5, 34).path[5, 16, 37, 12, 34]``` Your TurnImplement the following functions to get your search algorithm running smoothly! Data StructuresThe next few functions require you to decide on data structures to use - lists, sets, dictionaries, etc. Make sure to think about what would work most efficiently for each of these. Some can be returned as just an empty data structure (see `create_closedSet()` for an example), while others should be initialized with one or more values within.
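As a small aside (not part of the project template), membership tests are much cheaper on a `set` than on a `list`, which matters when `run_search` repeatedly checks whether a neighbor is already in the closed set:

```python
# Illustrative timing only: set membership is roughly O(1), list membership is O(n).
import timeit
as_list = list(range(10000))
as_set = set(as_list)
print(timeit.timeit(lambda: 9999 in as_list, number=1000))  # scans the list each time
print(timeit.timeit(lambda: 9999 in as_set, number=1000))   # hash lookup, far faster
```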
###Code
# Do not change this cell
# When you write your methods correctly this cell will execute
# without problems
class PathPlanner():
"""Construct a PathPlanner Object"""
def __init__(self, M, start=None, goal=None):
""" """
self.map = M
self.start= start
self.goal = goal
self.closedSet = self.create_closedSet() if goal != None and start != None else None
self.openSet = self.create_openSet() if goal != None and start != None else None
self.cameFrom = self.create_cameFrom() if goal != None and start != None else None
self.gScore = self.create_gScore() if goal != None and start != None else None
self.fScore = self.create_fScore() if goal != None and start != None else None
self.path = self.run_search() if self.map and self.start != None and self.goal != None else None
def reconstruct_path(self, current):
""" Reconstructs path after search """
total_path = [current]
while current in self.cameFrom.keys():
current = self.cameFrom[current]
total_path.append(current)
return total_path
def _reset(self):
"""Private method used to reset the closedSet, openSet, cameFrom, gScore, fScore, and path attributes"""
self.closedSet = None
self.openSet = None
self.cameFrom = None
self.gScore = None
self.fScore = None
self.path = self.run_search() if self.map and self.start and self.goal else None
def run_search(self):
""" """
if self.map == None:
raise(ValueError, "Must create map before running search. Try running PathPlanner.set_map(start_node)")
if self.goal == None:
raise(ValueError, "Must create goal node before running search. Try running PathPlanner.set_goal(start_node)")
if self.start == None:
raise(ValueError, "Must create start node before running search. Try running PathPlanner.set_start(start_node)")
self.closedSet = self.closedSet if self.closedSet != None else self.create_closedSet()
self.openSet = self.openSet if self.openSet != None else self.create_openSet()
self.cameFrom = self.cameFrom if self.cameFrom != None else self.create_cameFrom()
self.gScore = self.gScore if self.gScore != None else self.create_gScore()
self.fScore = self.fScore if self.fScore != None else self.create_fScore()
while not self.is_open_empty():
current = self.get_current_node()
if current == self.goal:
self.path = [x for x in reversed(self.reconstruct_path(current))]
return self.path
else:
self.openSet.remove(current)
self.closedSet.add(current)
print("openSet: ", self.openSet)
for neighbor in self.get_neighbors(current):
if neighbor in self.closedSet:
continue # Ignore the neighbor which is already evaluated.
if not neighbor in self.openSet: # Discover a new node
self.openSet.add(neighbor)
# The distance from start to a neighbor
#the "dist_between" function may vary as per the solution requirements.
if self.get_tentative_gScore(current, neighbor) >= self.get_gScore(neighbor):
continue # This is not a better path.
# This path is the best until now. Record it!
self.record_best_path_to(current, neighbor)
print("No Path Found")
self.path = None
return False
def create_closedSet(self):
""" Creates and returns a data structure suitable to hold the set of nodes already evaluated"""
# EXAMPLE: return a data structure suitable to hold the set of nodes already evaluated
return set()
def create_openSet(self):
""" Creates and returns a data structure suitable to hold the set of currently discovered nodes
that are not evaluated yet. Initially, only the start node is known."""
if self.start == None:
raise(ValueError, "Must create start node before creating an open set. Try running PathPlanner.set_start(start_node)")
return set([self.start])
# TODO: return a data structure suitable to hold the set of currently discovered nodes
# that are not evaluated yet. Make sure to include the start node.
def create_cameFrom(self):
"""Creates and returns a data structure that shows which node can most efficiently be reached from another,
for each node."""
# TODO: return a data structure that shows which node can most efficiently be reached from another,
# for each node.
if self.start == None:
raise(ValueError, "Must create start node before creating an cameFrom dictionary. Try running PathPlanner.set_start(start_node)")
return dict()
def create_gScore(self):
"""Creates and returns a data structure that holds the cost of getting from the start node to that node,
for each node. The cost of going from start to start is zero."""
newGScore = { key: math.inf for key in self.map.intersections } # Init dict with infinity for each node
newGScore[self.start] = 0 # Start node gScore is zero
return newGScore
def create_fScore(self):
"""Creates and returns a data structure that holds the total cost of getting from the start node to the goal
by passing by that node, for each node. That value is partly known, partly heuristic.
For the first node, that value is completely heuristic."""
# TODO: return a data structure that holds the total cost of getting from the start node to the goal
# by passing by that node, for each node. That value is partly known, partly heuristic.
# For the first node, that value is completely heuristic. The rest of the node's value should be
# set to infinity.
newFScore = dict({ key: math.inf for key in self.map.intersections })
newFScore[self.start] = self.heuristic_cost_estimate(self.start)
return newFScore
###Output
_____no_output_____
###Markdown
Set certain variablesThe below functions help set certain variables if they weren't a part of initializating our `PathPlanner` class, or if they need to be changed for anothe reason.
###Code
def set_map(self, M):
"""Method used to set map attribute """
    self._reset()
self.start = None
self.goal = None
self.map = M
def set_start(self, start):
"""Method used to set start attribute """
    self._reset()
self.goal = None
self.start = start
###Output
_____no_output_____
###Markdown
PathPlanner classThe below class is already partly implemented for you - you will implement additional functions that will also get included within this class further below.Let's very briefly walk through each part below.`__init__` - We initialize our path planner with a map, M, and typically a start and goal node. If either of these are `None`, the rest of the variables here are also set to none. If you don't have both a start and a goal, there's no path to plan! The rest of these variables come from functions you will soon implement. - `closedSet` includes any explored/visited nodes. - `openSet` are any nodes on our frontier for potential future exploration. - `cameFrom` will hold the previous node that best reaches a given node- `gScore` is the `g` in our `f = g + h` equation, or the actual cost to reach our current node- `fScore` is the combination of `g` and `h`, i.e. the `gScore` plus a heuristic; total cost to reach the goal- `path` comes from the `run_search` function, which is already built for you.`reconstruct_path` - This function just rebuilds the path after search is run, going from the goal node backwards using each node's `cameFrom` information.`_reset` - Resets *most* of our initialized variables for PathPlanner. This *does not* reset the map, start or goal variables, for reasons which you may notice later, depending on your implementation.`run_search` - This does a lot of the legwork to run search once you've implemented everything else below. First, it checks whether the map, goal and start have been added to the class. Then, it will also check if the other variables, other than `path` are initialized (note that these are only needed to be re-run if the goal or start were not originally given when initializing the class, based on what we discussed above for `__init__`.From here, we use a function you will implement, `is_open_empty`, to check that there are still nodes to explore (you'll need to make sure to feed `openSet` the start node to make sure the algorithm doesn't immediately think there is nothing to open!). If we're at our goal, we reconstruct the path. If not, we move our current node from the frontier (`openSet`) and into explored (`closedSet`). Then, we check out the neighbors of the current node, check out their costs, and plan our next move.This is the main idea behind A*, but none of it is going to work until you implement all the relevant parts, which will be included below after the class code.
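A tiny numeric illustration of the `f = g + h` bookkeeping described above (hypothetical numbers, not taken from the map):

```python
# Hypothetical numbers, only to illustrate f = g + h.
g = 2.5    # gScore: actual cost accumulated from the start to this node
h = 1.3    # heuristic: straight-line estimate from this node to the goal
f = g + h  # fScore: estimated total cost of a path through this node
print(f)   # 3.8; run_search expands the frontier node with the smallest f
```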
###Code
def set_goal(self, goal):
"""Method used to set goal attribute """
    self._reset()
self.goal = goal
###Output
_____no_output_____
###Markdown
Get node informationThe below functions concern grabbing certain node information. In `is_open_empty`, you are checking whether there are still nodes on the frontier to explore. In `get_current_node()`, you'll want to come up with a way to find the lowest `fScore` of the nodes on the frontier. In `get_neighbors`, you'll need to gather information from the map to find the neighbors of the current node.
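One compact way to express "lowest `fScore` on the frontier" is `min` with a key function; the standalone sketch below uses made-up values and is not the required implementation:

```python
# Standalone sketch with made-up values, not the required implementation.
open_set = {3, 7, 12}
f_score = {3: 4.2, 7: 2.9, 12: 5.1}
current = min(open_set, key=lambda node: f_score[node])
print(current)  # 7, the node with the smallest f value
```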
###Code
def is_open_empty(self):
"""returns True if the open set is empty. False otherwise. """
return self.openSet == None or len(self.openSet) < 1
def get_current_node(self):
""" Returns the node in the open set with the lowest value of f(node)."""
smallestFScoreNode = None
smallestFScore = 0
for node in self.openSet:
if smallestFScoreNode == None: # First node always has the smallest distance
smallestFScoreNode = node
smallestFScore = self.calculate_fscore(node)
else:
newFScore = self.calculate_fscore(node)
if newFScore < smallestFScore: # Found a smallest FScore-Node
smallestFScore = newFScore
smallestFScoreNode = node
return smallestFScoreNode # get index of lowest entry and return node based on the info
def get_neighbors(self, node):
"""Returns the neighbors of a node"""
return self.map.roads[node]
###Output
_____no_output_____
###Markdown
Scores and CostsBelow, you'll get into the main part of the calculation for determining the best path - calculating the various parts of the `fScore`.
###Code
def get_gScore(self, node):
"""Returns the g Score of a node"""
return self.gScore[node] # Return gScore which was saved in gScore (dict) earlier
def distance(self, node_1, node_2):
""" Computes the Euclidean L2 Distance"""
# Simple Pythagoras => a2 + b2 = c2
node1 = self.map.intersections[node_1] # get Node 1 to access x,y easier
node2 = self.map.intersections[node_2] # get Node 2 to access x,y easier
# Calculate distance (Euclidean L2 Distance) between both given nodes using pythagoras
return math.sqrt(((node2[0] - node1[0]) ** 2) + ((node2[1] - node1[1]) ** 2))
def get_tentative_gScore(self, current, neighbor):
"""Returns the tentative g Score of a node"""
# TODO: Return the g Score of the current node
# plus distance from the current node to it's neighbors
return self.gScore[current] + self.distance(current, neighbor)
def heuristic_cost_estimate(self, node):
""" Returns the heuristic cost estimate of a node """
# in this case: heuristic is the Euclidean L2 distance from the node to the goal
return self.distance(node, self.goal)
def calculate_fscore(self, node):
"""Calculate the f score of a node. """
# TODO: Calculate and returns the f score of a node.
# REMEMBER F = G + H
return self.heuristic_cost_estimate(node) + self.gScore[node]
###Output
_____no_output_____
###Markdown
Recording the best pathNow that you've implemented the various functions on scoring, you can record the best path to a given neighbor node from the current node!
###Code
def record_best_path_to(self, current, neighbor):
"""Record the best path to a node """
# TODO: Record the best path to a node, by updating cameFrom, gScore, and fScore
self.cameFrom[neighbor] = current # save previous node for path-finding later
self.gScore[neighbor] = self.get_tentative_gScore(current, neighbor) # set neighbors gScore
self.fScore[neighbor] = self.calculate_fscore(neighbor) # set neighbors fScore
###Output
_____no_output_____
###Markdown
Associating your functions with the `PathPlanner` classTo check your implementations, we want to associate all of the above functions back to the `PathPlanner` class. Python makes this easy using the dot notation (i.e. `PathPlanner.myFunction`), and setting them equal to your function implementations. Run the below code cell for this to occur.*Note*: If you need to make further updates to your functions above, you'll need to re-run this code cell to associate the newly updated function back with the `PathPlanner` class again!
###Code
# Associates implemented functions with PathPlanner class
PathPlanner.create_closedSet = create_closedSet
PathPlanner.create_openSet = create_openSet
PathPlanner.create_cameFrom = create_cameFrom
PathPlanner.create_gScore = create_gScore
PathPlanner.create_fScore = create_fScore
PathPlanner.set_map = set_map
PathPlanner.set_start = set_start
PathPlanner.set_goal = set_goal
PathPlanner.is_open_empty = is_open_empty
PathPlanner.get_current_node = get_current_node
PathPlanner.get_neighbors = get_neighbors
PathPlanner.get_gScore = get_gScore
PathPlanner.distance = distance
PathPlanner.get_tentative_gScore = get_tentative_gScore
PathPlanner.heuristic_cost_estimate = heuristic_cost_estimate
PathPlanner.calculate_fscore = calculate_fscore
PathPlanner.record_best_path_to = record_best_path_to
###Output
_____no_output_____
###Markdown
Preliminary TestThe below is the first test case, just based off of one set of inputs. If some of the functions above aren't implemented yet, or are implemented incorrectly, you likely will get an error from running this cell. Try debugging the error to help you figure out what needs further revision!
###Code
planner = PathPlanner(map_40, 5, 34)
show_map(map_40, start=5, goal=34, path=planner.path)
path = planner.path
if path == [5, 16, 37, 12, 34]:
print("great! Your code works for these inputs!")
else:
print("something is off, your code produced the following:")
print(path)
###Output
openSet: set()
openSet: {32, 14}
openSet: {32, 14, 30}
openSet: {32, 14, 22, 29, 30}
###Markdown
VisualizeOnce the above code worked for you, let's visualize the results of your algorithm!
###Code
# Visualize your the result of the above test! You can also change start and goal here to check other paths
start = 5
goal = 34
show_map(map_40, start=start, goal=goal, path=PathPlanner(map_40, start, goal).path)
###Output
openSet: set()
openSet: {32, 14}
openSet: {32, 14, 30}
openSet: {32, 14, 22, 29, 30}
###Markdown
Testing your CodeIf the code below produces no errors, your algorithm is behaving correctly. You are almost ready to submit! Before you submit, go through the following submission checklist:**Submission Checklist**1. Does my code pass all tests?2. Does my code implement `A*` search and not some other search algorithm?3. Do I use an **admissible heuristic** to direct search efforts towards the goal?4. Do I use data structures which avoid unnecessarily slow lookups?When you can answer "yes" to all of these questions, and also have answered the written questions below, submit by pressing the Submit button in the lower right!
###Code
from test import test
test(PathPlanner)
###Output
openSet: set()
openSet: {32, 14}
openSet: {32, 14, 30}
openSet: {32, 14, 22, 29, 30}
openSet: set()
openSet: {30, 14}
openSet: {14}
openSet: {16}
openSet: {5}
openSet: {5}
openSet: {5, 22, 29}
openSet: {17, 5, 22, 28, 29, 31}
openSet: {0, 5, 22, 25, 26, 28, 29, 31}
openSet: {0, 1, 5, 15, 18, 22, 25, 26, 28, 29, 31}
All tests pass! Congratulations!
|
data_ml/notebooks/PreparePodCastDataset.ipynb | ###Markdown
Fetch Annotated data from Blob
###Code
# TODO: automate this part as well (a blob-download sketch follows this cell)
from pathlib import Path  # imports below are assumed for this notebook; `src` is the project package
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import src.dataloader, src.datautils, src.params
annotations_dir = "../data/AnnotationRound2/OS_7_05_2019_Curated_chunked_annotations"
positives_dir = "../data/AnnotationRound2/OS_7_05_2019_Curated_chunked"
negatives_dir = "../data/AnnotationRound2/OS_7_05_2019_Curated_chunked_negatives"
target_data_dir = "../data/DataArchives/Round2_OS_07_05"
###Output
_____no_output_____
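###Markdown
The TODO above asks for automating the fetch from blob storage. A rough sketch using the `azure-storage-blob` v12 SDK is below; the container URL, SAS token, and blob prefix are placeholders, so adjust them to wherever the annotation chunks actually live.
###Code
# Hypothetical sketch — container URL / SAS token / prefix are placeholders
from pathlib import Path
from azure.storage.blob import ContainerClient

def fetch_blob_folder(container_url, prefix, local_dir):
    """Download every blob whose name starts with `prefix` into `local_dir`."""
    local_dir = Path(local_dir)
    local_dir.mkdir(parents=True, exist_ok=True)
    container = ContainerClient.from_container_url(container_url)
    for blob in container.list_blobs(name_starts_with=prefix):
        target = local_dir / Path(blob.name).name
        with open(target, "wb") as f:
            f.write(container.download_blob(blob.name).readall())

# fetch_blob_folder("https://<account>.blob.core.windows.net/<container>?<sas>",
#                   "OS_7_05_2019_Curated_chunked_annotations/", annotations_dir)
###Output
_____no_output_____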
###Markdown
Parse and make dataset
###Code
# annotations, positive wavs, negative wavs, target directory
# parse JSON annotations to TSV, add annotations for negative chunks
# (optional) merge with a previous dataset
# (optional) split a held-out dev set (a manual split sketch follows this cell's output)
np.random.seed(10)
df, wavfile_map = src.datautils.make_dataset(annotations_dir, positives_dir, negatives_dir, target_data_dir,
data_source="Orcasound_PodCast_Round2", location="orcasound_lab")
df.head()
# inspect a few random annotations
row = df.iloc[np.random.randint(len(df))]; print(row)
af = src.dataloader.AudioFile(Path(positives_dir)/row["wav_filename"], src.params.SAMPLE_RATE)
start_idx = int(row["start_time_s"]*src.params.SAMPLE_RATE)
end_idx = int(start_idx + row["duration_s"]*src.params.SAMPLE_RATE)
_ = plt.imshow(af.get_window(start_idx, end_idx).T)
###Output
wav_filename 1562344334_0005.wav
start_time_s 21.3991
duration_s 2.60547
location orcasound_lab
date 2019-07-05
data_source Orcasound_PodCast_Round2
data_source_id 1562344334
Name: 344, dtype: object
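###Markdown
The `make_dataset` call above can optionally split out a held-out dev set. If you ever need to do it manually afterwards, one way is to split by source wav file so that no audio file leaks across splits. This is a sketch, assuming the generated `train.tsv` is tab-separated and has the `wav_filename` column shown above; the helper file names are made up for illustration.
###Code
# Manual held-out split by source wav file (sketch; column names as shown above)
import pandas as pd

tsv_path = Path(target_data_dir) / "train.tsv"
full_df = pd.read_csv(tsv_path, sep="\t")
files = full_df["wav_filename"].unique()
np.random.shuffle(files)
dev_files = set(files[: max(1, len(files) // 10)])  # hold out roughly 10% of files
dev_df = full_df[full_df["wav_filename"].isin(dev_files)]
train_df = full_df[~full_df["wav_filename"].isin(dev_files)]
train_df.to_csv(Path(target_data_dir) / "train_split.tsv", sep="\t", index=False)
dev_df.to_csv(Path(target_data_dir) / "dev_split.tsv", sep="\t", index=False)
print(len(train_df), "train rows,", len(dev_df), "dev rows")
###Output
_____no_output_____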
###Markdown
Load with Dataloader and add mean, invstd
###Code
wav_dataset = src.dataloader.AudioFileDataset(Path(target_data_dir)/"wav",Path(target_data_dir)/"train.tsv")
num_windows = len(wav_dataset); num_hrs = num_windows*src.params.WINDOW_S/3600
print("Dataset size: {} windows, {} hrs".format(num_windows, num_hrs))
labels = [l[2] for l in wav_dataset.windows]
_ = sns.countplot(labels)
src.datautils.compute_dataset_stats(target_data_dir, wav_dataset)
###Output
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1539/1539 [00:06<00:00, 227.38it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1539/1539 [00:06<00:00, 230.20it/s]
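###Markdown
For reference, the per-bin mean and inverse standard deviation that `compute_dataset_stats` presumably writes could be accumulated with plain NumPy roughly as below. This is only a sketch: it assumes each dataset item's first element is a `(time, mel_bins)` spectrogram array, and the real implementation in `src.datautils` may aggregate or store the statistics differently.
###Code
# Rough sketch of per-frequency-bin mean / inverse std over all windows
import numpy as np

def sketch_dataset_stats(dataset):
    total, total_sq, count = 0.0, 0.0, 0
    for i in range(len(dataset)):
        spec = np.asarray(dataset[i][0])      # assumed (time, mel_bins) layout
        total = total + spec.sum(axis=0)
        total_sq = total_sq + (spec ** 2).sum(axis=0)
        count += spec.shape[0]
    mean = total / count
    invstd = 1.0 / np.sqrt(total_sq / count - mean ** 2 + 1e-8)
    return mean, invstd

# mean, invstd = sketch_dataset_stats(wav_dataset)
# np.save(Path(target_data_dir) / "mean.npy", mean)
# np.save(Path(target_data_dir) / "invstd.npy", invstd)
###Output
_____no_output_____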
|
01_data-types.ipynb | ###Markdown
Data Types in Python

[**Reference**](https://realpython.com/python-data-types/)

Types in Python

* Numbers: integers (`int`), floating-point numbers [real numbers] (`float`), complex numbers (`complex`)
  * `float`
    * Decimal notation: `400`, `-400`
    * Scientific notation: `4e2`, `4e+2`, `4e-2`
* Characters: strings (`str`)
  * Double quotes (`" "`), single quotes (`' '`)
  * Escape sequences in strings: `\'`, `\"`, `\\`, `\n`
  * `format()`
  * Raw strings (`r' '`, `R' '`)
  * Triple-quoted strings (`''' '''`)
* Boolean: {`True`, `False`} (`bool`)
* Null: `NoneType`

(A short cell illustrating the string features above is added at the end of this notebook.)

_Strictly speaking, in Python, there are only class types. The `int` class, `float` class, `str` class, and so on._
###Code
type(1)
4e+34
type(1.0)
type(1.)
type(0.1)
type(.1)
type(1+2j)
type(1.1-2.5j)
type(j)  # raises NameError: a bare j is just an undefined name; the imaginary unit is written 1j
type(1j)
type('a')
type('string')
type(True)
type(False)
type(None)
###Output
_____no_output_____ |
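###Markdown
The outline above lists several string features (escape sequences, `format()`, raw strings, triple-quoted strings, scientific notation) that the `type()` calls don't exercise. A small illustrative cell, added here and not part of the original notebook, could look like:
###Code
print('It\'s a string')                     # escaped single quote
print("Line one\nLine two")                 # \n escape starts a new line
print(r"C:\new_folder\test")                # raw string: backslashes kept literally
print("{} + {} = {}".format(1, 2, 1 + 2))   # format()
print('''Triple-quoted strings
can span several lines''')
print(4e2, 4e+2, 4e-2)                      # floats in scientific notation
###Output
_____no_output_____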
Ten_Problems/Problem_07.ipynb | ###Markdown
Diffusion with Chemical Reaction in a One-Dimensional Slab

This is the seventh problem of the famous set of [Ten Problems in Chemical Engineering](https://www.polymath-software.com/ASEE/Tenprobs.pdf). Here, the goal is to solve second-order ordinary differential equations with two-point boundary conditions.

Jacob Albrecht, 2019

Problem Setup

For a first-order irreversible reaction ($A \rightarrow B$) occurring in a 1D domain, the concentration of species $A$ is governed by the differential equation:

$$\frac{d^2C_A}{dz^2} = \frac{k}{D_{AB}}C_A$$

At location $z = 0$, the boundary condition is:

$$C_A = C_{A0}$$

At the other edge of the domain, $z = L$, the condition is:

$$\frac{dC_A}{dz} = 0$$

Problem Tasks

a) Numerically solve the differential equation with the boundary conditions with

$C_{A0} = 0.2 kg\cdot mol/m^3$

$k = 10^{-3} s^{-1}$

$D_{AB} = 1.2\cdot10^{-9} m^2/s$

$L = 10^{-3} m$

This solution should utilize an ODE solver with a shooting technique and employ Newton's method or some other technique for converging on the boundary condition.

b) Compare the concentration profiles over the thickness as predicted by the numerical solution of (a) with the analytical solution of the differential equation.

Solutions

Solution to part a)

Rewriting the second-order differential equation and boundary conditions as a set of first-order equations will allow for the use of the scipy integration functions, following [the approach here](https://polymath-software.com/ASEE/Matlab/Matlab.pdf).

$$y_2 = \frac{dC_A}{dz}$$

At $z = 0$, $y_2$ will have an unknown value $\alpha$. Define new quantities $y_3$ and $y_4$ to incorporate the boundary conditions and to solve for $\alpha$ by the shooting technique:

$$y_3 = \frac{\partial C_A}{\partial \alpha}$$

$$y_4 = \frac{dy_3}{dz}$$

$$\frac{dy_4}{dz} = \frac{ky_3}{D_{AB}}$$

The initial conditions for $C_A$, $y_2$, $y_3$, and $y_4$ are $C_{A0}$, $\alpha$, $0$, and $1$, respectively.
###Code
import numpy as np
from scipy.optimize import newton
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
%matplotlib inline
# define function for solving the expanded ode:
def odefun(t,y):
dy = np.zeros(4)
dy[0] = y[1]
dy[1] = k*y[0]/Dab
dy[2] = y[3]
dy[3] = k*y[2]/Dab
return(dy)
# define function for optimizing α
def objfun(alpha):
y0 = np.array([CA0,alpha,0,1])
sol = solve_ivp(odefun,tspan,y0)
err = sol.y[1,-1]/sol.y[3,-1]
return(err)
# define constants:
L=1e-3
CA0 = 0.2
k = 1e-3  # s^-1, matching the problem statement above
Dab = 1.210e-9
alpha=0
# set integration range:
tspan = [0,L]
# run optimization
opt_α = newton(objfun,alpha)
# use optimal α to calculate profile:
y0 = np.array([CA0,opt_α,0,1])
t_eval = np.linspace(0,L,num=25)
numerical_sol = solve_ivp(odefun,tspan,y0,t_eval=t_eval)
err = numerical_sol.y[1,-1]/numerical_sol.y[3,-1]
print('Optimal \u03B1: {:.4}'.format(opt_α)) # unicode is ok with python
print('Final error: {:.2}'.format(err))
plt.plot(numerical_sol.t,numerical_sol.y[0,:],'o')
plt.xlabel('z')
plt.ylabel('$C_A$');
###Output
_____no_output_____
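###Markdown
As an optional cross-check (not part of the original assignment), the same two-point boundary value problem can be handed directly to `scipy.integrate.solve_bvp`, a collocation solver that avoids the shooting iteration. The sketch below reuses the constants and `numerical_sol` defined above and reports the largest difference between the two solutions.
###Code
from scipy.integrate import solve_bvp

def bvp_rhs(z_pts, y):
    # y[0] = C_A, y[1] = dC_A/dz
    return np.vstack([y[1], (k / Dab) * y[0]])

def bvp_bc(ya, yb):
    # C_A(0) = CA0 and dC_A/dz(L) = 0
    return np.array([ya[0] - CA0, yb[1]])

z_mesh = np.linspace(0, L, 50)
y_guess = np.zeros((2, z_mesh.size))
y_guess[0] = CA0  # rough initial guess
bvp_sol = solve_bvp(bvp_rhs, bvp_bc, z_mesh, y_guess)
max_diff = np.max(np.abs(bvp_sol.sol(numerical_sol.t)[0] - numerical_sol.y[0, :]))
print("solve_bvp vs shooting, max |difference in C_A|: {:.2e}".format(max_diff))
###Output
_____no_output_____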
###Markdown
Solution to part b)

Instead of using the analytical expression given in the problem statement, we will use `sympy` to calculate the analytical solution from the differential equation and boundary conditions.
###Code
from sympy import symbols, Function, Eq, dsolve, Derivative, lambdify
k, D_AB, z, L, C_A0 = symbols('k D_AB z L C_A0')
C = Function('C')
deq = Eq(Derivative(C(z), (z, 2)), k/D_AB*C(z))  # C_A'' = (k/D_AB) C_A
###Output
_____no_output_____
###Markdown
Inspect the differential equation from sympy:
###Code
deq
###Output
_____no_output_____
###Markdown
Solve the differential equation with the boundary conditions and view the output:
###Code
analytical_sol = dsolve(deq,ics={C(0):C_A0,
Derivative(C(L),L):0})
analytical_sol
###Output
_____no_output_____
###Markdown
Substitute in the constant values from the problem statement:
###Code
C = analytical_sol.subs({C_A0:0.2,L:1e-3,D_AB:1.210e-9,k:1e-3})
C
# convert the sympy expression to numpy for plotting
lam_C = lambdify(z,C.rhs, 'numpy')
lam_z = np.linspace(0,1e-3,num=100)
plt.plot(lam_z,lam_C(lam_z),label= 'Analytical')
plt.scatter(numerical_sol.t,numerical_sol.y[0,:],facecolors='none',edgecolors='b',label='Numeric')
plt.xlabel('z')
plt.ylabel('$C_A$')
plt.legend()
plt.show()
###Output
_____no_output_____
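###Markdown
Before concluding, a quick quantitative check (an addition, not in the original notebook) of how closely the two profiles agree at the numeric grid points:
###Code
max_abs_err = np.max(np.abs(lam_C(numerical_sol.t) - numerical_sol.y[0, :]))
print("max |analytical - numeric| on the grid: {:.3e}".format(max_abs_err))
###Output
_____no_output_____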
###Markdown
The analytical and numeric solutions match perfectly. Calculus FTW.

Reference

“The Use of Mathematical Software packages in Chemical Engineering”, Michael B. Cutlip, John J. Hwalek, Eric H. Nuttal, Mordechai Shacham, Workshop Material from Session 12, Chemical Engineering Summer School, Snowbird, Utah, Aug. 1997.
###Code
%load_ext watermark
%watermark -v -p scipy,matplotlib,numpy,sympy
###Output
CPython 3.7.3
IPython 7.6.1
scipy 1.3.0
matplotlib 3.1.0
numpy 1.16.4
sympy 1.4
|
baseline_lstm_medium.ipynb | ###Markdown
###Code
import pandas as pd
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM
from keras.optimizers import Adam
from sklearn.preprocessing import StandardScaler
from datetime import datetime,timedelta
data = pd.read_excel('/content/Event.xlsx')
df_medium = data[['Date','MEDIUM']]
df_medium.head()
med_data = df_medium['MEDIUM'].values
med_data_copy = med_data.copy()
med_for_scale = med_data.reshape(-1,1)
scaler = StandardScaler()
scaler = scaler.fit(med_for_scale)
df_med = scaler.transform(med_for_scale)
df_med
Train = []
Test = []
n_future = 1
n_past = 40
for i in range(n_past, len(df_med) - n_future + 1):  # slide the window one step at a time
Train.append(df_med[i-n_past:i,0:df_med.shape[1]])
Test.append(df_med[i + n_future - 1:i + n_future,0])
Train,Test = np.array(Train),np.array(Test)
'''
# Optional manual split (unused here; model.fit below uses validation_split instead):
size = len(Train)
training_size = int(0.8 * size)
x_train, x_val = Train[:training_size], Train[training_size:]
y_train, y_val = Test[:training_size], Test[training_size:]
'''
#Building the model
model = Sequential()
model.add(LSTM(64,activation = 'relu',input_shape = (Train.shape[1],Train.shape[2]),return_sequences=True))
model.add(LSTM(32,activation='relu',return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(16,activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(Test.shape[1]))
model.compile(optimizer='adam',loss = 'mse')
model.summary()
history = model.fit(Train,Test,epochs=15,batch_size=16,validation_split=0.25,verbose=2)
model.save('baseline.h5')
import matplotlib.pyplot as plt
plt.plot(history.history['loss'],label = 'training_loss')
plt.plot(history.history['val_loss'],label = 'validation_loss')
plt.legend()
train_data_dates = df_medium['Date']
n_future = 40
forecast_period_dates = pd.date_range(list(train_data_dates)[-1],periods=n_future,freq='1d').tolist()
forecast = model.predict(Train[-n_future:])  # one-step predictions for the last 40 known windows, used as a stand-in forecast
forecast_dates = []
for i in forecast_period_dates:
forecast_dates.append(i.date())
y_pred = scaler.inverse_transform(forecast)
y_pred
dates = pd.DataFrame(forecast_dates,columns=['Date'])
predi = pd.DataFrame(y_pred,columns=['Medium'])
predi['Medium'] = (predi['Medium']).astype(int)
med_predicted = pd.concat([dates,predi],axis = 1)
med_predicted.head()
med_predicted.to_excel('baseline_lstm.xlsx')
###Output
_____no_output_____ |
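###Markdown
The notebook above doesn't quantify accuracy. A quick sanity check (a sketch added here, reusing the variables above) scores the one-step predictions on the last `n_future` windows in the original units:
###Code
# Mean absolute error of the one-step predictions on the last windows
actuals = scaler.inverse_transform(Test[-n_future:])
mae = np.mean(np.abs(actuals - y_pred))
print("One-step MAE over the last {} windows: {:.2f}".format(n_future, mae))
###Output
_____no_output_____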