path (stringlengths 7–265) | concatenated_notebook (stringlengths 46–17M)
---|---
Pop_Segmentation_Exercise.ipynb | ###Markdown
# Population Segmentation with SageMaker

In this notebook, you'll employ two unsupervised learning algorithms to do **population segmentation**. Population segmentation aims to find natural groupings in population data that reveal some feature-level similarities between different regions in the US.

Using **principal component analysis** (PCA) you will reduce the dimensionality of the original census data. Then, you'll use **k-means clustering** to assign each US county to a particular cluster based on where a county lies in component space. How each cluster is arranged in component space can tell you which US counties are most similar and what demographic traits define that similarity; this information is most often used to inform targeted marketing campaigns that want to appeal to a specific group of people. This cluster information is also useful for learning more about a population by revealing patterns between regions that you otherwise may not have noticed.

## US Census Data

You'll be using data collected by the [US Census](https://en.wikipedia.org/wiki/United_States_Census), which aims to count the US population, recording demographic traits about labor, age, population, and so on, for each county in the US. The bulk of this notebook was taken from an existing SageMaker example notebook and [blog post](https://aws.amazon.com/blogs/machine-learning/analyze-us-census-data-for-population-segmentation-using-amazon-sagemaker/), and I've broken it down further into demonstrations and exercises for you to complete.

## Machine Learning Workflow

To implement population segmentation, you'll go through a number of steps:
* Data loading and exploration
* Data cleaning and pre-processing
* Dimensionality reduction with PCA
* Feature engineering and data transformation
* Clustering transformed data with k-means
* Extracting trained model attributes and visualizing k clusters

These tasks make up a complete machine learning workflow from data loading and cleaning to model deployment. Each exercise is designed to give you practice with part of the machine learning workflow, and to demonstrate how to use SageMaker tools, such as built-in data management with S3 and built-in algorithms.

---

First, import the relevant libraries into this SageMaker notebook.
###Code
# data managing and display libs
import pandas as pd
import numpy as np
import os
import io
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
# sagemaker libraries
import boto3
import sagemaker
###Output
_____no_output_____
###Markdown
## Loading the Data from Amazon S3

This particular dataset is already in an Amazon S3 bucket; you can load the data by pointing to this bucket and getting a data file by name.

> You can interact with S3 using a `boto3` client.
###Code
# boto3 client to get S3 data
s3_client = boto3.client('s3')
bucket_name='aws-ml-blog-sagemaker-census-segmentation'
###Output
_____no_output_____
###Markdown
Take a look at the contents of this bucket; get a list of objects that are contained within the bucket and print out the names of the objects. You should see that there is one file, 'Census_Data_for_SageMaker.csv'.
###Code
# get a list of objects in the bucket
obj_list=s3_client.list_objects(Bucket=bucket_name)
# print object(s) in S3 bucket
files=[]
for contents in obj_list['Contents']:
files.append(contents['Key'])
print(files)
# there is one file --> one key
file_name=files[0]
print(file_name)
###Output
Census_Data_for_SageMaker.csv
###Markdown
Retrieve the data file from the bucket with a call to `client.get_object()`.
###Code
# get an S3 object by passing in the bucket and file name
data_object = s3_client.get_object(Bucket=bucket_name, Key=file_name)
# what info does the object contain?
display(data_object)
# information is in the "Body" of the object
data_body = data_object["Body"].read()
print('Data type: ', type(data_body))
###Output
Data type: <class 'bytes'>
###Markdown
This is a `bytes` datatype, which you can read in using [io.BytesIO(file)](https://docs.python.org/3/library/io.html#binary-i-o).
###Code
pd.set_option('display.max_columns', 999)
# read in bytes data
data_stream = io.BytesIO(data_body)
# create a dataframe
counties_df = pd.read_csv(data_stream, header=0, delimiter=",")
counties_df.head()
###Output
_____no_output_____
###Markdown
## Exploratory Data Analysis (EDA)

Now that you've loaded in the data, it is time to clean it up, explore it, and pre-process it. Data exploration is one of the most important parts of the machine learning workflow because it allows you to notice any initial patterns in data distribution and features that may inform how you proceed with modeling and clustering the data.

### EXERCISE: Explore data & drop any incomplete rows of data

When you first explore the data, it is good to know what you are working with. How many data points and features are you starting with, and what kind of information can you get at a first glance? In this notebook, you're required to use complete data points to train a model. So, your first exercise will be to investigate the shape of this data and implement a simple data cleaning step: dropping any incomplete rows of data.

You should be able to answer the **question**: How many data points and features are in the original, provided dataset? (And how many points are left after dropping any incomplete rows?)
###Code
# print out stats about data
print("There are {} rows of data".format(len(counties_df)))
# drop any incomplete rows of data, and create a new df
clean_counties_df = counties_df.dropna()
print("After Dropping incomplete rows, there are {} rows of data".format(len(clean_counties_df)))
print('Dropped {} records'.format(len(counties_df) - len(clean_counties_df)))
###Output
There are 3220 rows of data
After Dropping incomplete rows, there are 3218 rows of data
Dropped 2 records
###Markdown
### EXERCISE: Create a new DataFrame, indexed by 'State-County'

Eventually, you'll want to feed these features into a machine learning model. Machine learning models need numerical data to learn from and not categorical data like strings (State, County). So, you'll reformat this data such that it is indexed by region and you'll also drop any features that are not useful for clustering.

To complete this task, perform the following steps, using your *clean* DataFrame, generated above:
1. Combine the descriptive columns, 'State' and 'County', into one, new categorical column, 'State-County'.
2. Index the data by this unique State-County name.
3. After doing this, drop the old State and County columns and the CensusId column, which does not give us any meaningful demographic information.

After completing this task, you should have a DataFrame with 'State-County' as the index, and 34 columns of numerical data for each county. You should get a resultant DataFrame that looks like the following (truncated for display purposes):

```
                 TotalPop    Men  Women  Hispanic ...
Alabama-Autauga     55221  26745  28476       2.6 ...
Alabama-Baldwin    195121  95314  99807       4.5 ...
Alabama-Barbour     26932  14497  12435       4.6 ...
...
```
###Code
# index data by 'State-County'
county_index = clean_counties_df['State']+'-'+clean_counties_df['County']
clean_counties_df.index = county_index
# drop the old State and County columns, and the CensusId column
# clean df should be modified or created anew
clean_counties_df = clean_counties_df.drop(columns = ['State', 'County', 'CensusId'])
###Output
_____no_output_____
###Markdown
Now, what features do you have to work with?
###Code
# features
features_list = clean_counties_df.columns.values
print('Features: \n', features_list)
###Output
Features:
['TotalPop' 'Men' 'Women' 'Hispanic' 'White' 'Black' 'Native' 'Asian'
'Pacific' 'Citizen' 'Income' 'IncomeErr' 'IncomePerCap' 'IncomePerCapErr'
'Poverty' 'ChildPoverty' 'Professional' 'Service' 'Office' 'Construction'
'Production' 'Drive' 'Carpool' 'Transit' 'Walk' 'OtherTransp'
'WorkAtHome' 'MeanCommute' 'Employed' 'PrivateWork' 'PublicWork'
'SelfEmployed' 'FamilyWork' 'Unemployment']
###Markdown
## Visualizing the Data

In general, you can see that features come in a variety of ranges, mostly percentages from 0-100, and counts that are integer values in a large range. Let's visualize the data in some of our feature columns and see what the distribution, over all counties, looks like.

The below cell displays **histograms**, which show the distribution of data points over discrete feature ranges. The x-axis represents the different bins; each bin is defined by a specific range of values that a feature can take, say between the values 0-5 and 5-10, and so on. The y-axis is the frequency of occurrence or the number of county data points that fall into each bin. I find it helpful to use the y-axis values for relative comparisons between different features.

Below, I'm plotting a histogram comparing methods of commuting to work over all of the counties. I just copied these feature names from the list of column names, printed above. I also know that all of these features are represented as percentages (%) in the original data, so the x-axes of these plots will be comparable.
###Code
# transportation (to work)
transport_list = ['Drive', 'Carpool', 'Transit', 'Walk', 'OtherTransp']
n_bins = 30 # can decrease to get a wider bin (or vice versa)
for column_name in transport_list:
ax=plt.subplots(figsize=(6,3))
# get data by column_name and display a histogram
ax = plt.hist(clean_counties_df[column_name], bins=n_bins)
title="Histogram of " + column_name
plt.title(title, fontsize=12)
plt.show()
###Output
_____no_output_____
###Markdown
### EXERCISE: Create histograms of your own

Commute transportation method is just one category of features. If you take a look at the 34 features, you can see data on profession, race, income, and more. Display a set of histograms that interest you!
###Code
clean_counties_df.head()
# create a list of features that you want to compare or examine
my_list = ['Hispanic', 'White', 'Black', 'Native', 'Asian', 'Pacific']
n_bins = 40 # define n_bins
# histogram creation code is similar to above
for column_name in my_list:
ax=plt.subplots(figsize=(6,3))
# get data by column_name and display a histogram
ax = plt.hist(clean_counties_df[column_name], bins=n_bins)
title="Histogram of " + column_name
plt.title(title, fontsize=12)
plt.show()
###Output
_____no_output_____
###Markdown
### EXERCISE: Normalize the data

You need to standardize the scale of the numerical columns in order to consistently compare the values of different features. You can use a [MinMaxScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) to transform the numerical values so that they all fall between 0 and 1.
###Code
# scale numerical features into a normalized range, 0-1
# store them in this dataframe
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
counties_scaled = pd.DataFrame(scaler.fit_transform(clean_counties_df), \
columns = clean_counties_df.columns, \
index = clean_counties_df.index)
counties_scaled.head()
###Output
_____no_output_____
###Markdown
---

# Data Modeling

Now, the data is ready to be fed into a machine learning model!

Each data point has 34 features, which means the data is 34-dimensional. Clustering algorithms rely on finding clusters in n-dimensional feature space. For higher dimensions, an algorithm like k-means has a difficult time figuring out which features are most important, and the result is often noisier clusters.

Some dimensions are not as important as others. For example, if every county in our dataset has the same rate of unemployment, then that particular feature doesn't give us any distinguishing information; it will not help to separate counties into different groups because its value doesn't *vary* between counties.

> Instead, we really want to find the features that help to separate and group data. We want to find features that cause the **most variance** in the dataset!

So, before I cluster this data, I'll want to take a dimensionality reduction step. My aim will be to form a smaller set of features that will better help to separate our data. The technique I'll use is called PCA, or **principal component analysis**.

## Dimensionality Reduction

PCA attempts to reduce the number of features within a dataset while retaining the "principal components", which are defined as *weighted* linear combinations of existing features that are designed to be linearly independent and account for the largest possible variability in the data! You can think of this method as taking many features and combining similar or redundant features together to form a new, smaller feature set.

We can reduce dimensionality with the built-in SageMaker model for PCA.

### Roles and Buckets

> To create a model, you'll first need to specify an IAM role, and to save the model attributes, you'll need to store them in an S3 bucket.

The `get_execution_role` function retrieves the IAM role you created at the time you created your notebook instance. Roles are essentially used to manage permissions and you can read more about that [in this documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). For now, know that we have a FullAccess notebook, which allowed us to access and download the census data stored in S3.

You must specify a bucket name for an S3 bucket in your account where you want SageMaker model parameters to be stored. Note that the bucket must be in the same region as this notebook. You can get a default S3 bucket, which is automatically created for you in your region, by storing the current SageMaker session and calling `session.default_bucket()`.
###Code
from sagemaker import get_execution_role
session = sagemaker.Session() # store the current SageMaker session
# get IAM role
role = get_execution_role()
print(role)
# get default bucket
bucket_name = session.default_bucket()
print(bucket_name)
print()
###Output
sagemaker-us-east-1-088450696203
###Markdown
## Define a PCA Model

To create a PCA model, I'll use the built-in SageMaker resource. A SageMaker estimator requires a number of parameters to be specified; these define the type of training instance to use and the model hyperparameters. A PCA model requires the following constructor arguments:
* role: The IAM role, which was specified above.
* train_instance_count: The number of training instances (typically 1).
* train_instance_type: The type of SageMaker instance for training.
* num_components: An integer that defines the number of PCA components to produce.
* sagemaker_session: The session used to train on SageMaker.

Documentation on the PCA model can be found [here](http://sagemaker.readthedocs.io/en/latest/pca.html).

Below, I first specify where to save the trained model artifacts, the `output_path`.
###Code
# define location to store model artifacts
prefix = 'counties'
output_path='s3://{}/{}/'.format(bucket_name, prefix)
print('Training artifacts will be uploaded to: {}'.format(output_path))
# define a PCA model
from sagemaker import PCA
# this is current features - 1
# you'll select only a portion of these to use, later
N_COMPONENTS=33
pca_SM = PCA(role=role,
train_instance_count=1,
train_instance_type='ml.c4.xlarge',
output_path=output_path, # specified, above
num_components=N_COMPONENTS,
sagemaker_session=session)
###Output
train_instance_count has been renamed in sagemaker>=2.
See: https://sagemaker.readthedocs.io/en/stable/v2.html for details.
train_instance_type has been renamed in sagemaker>=2.
See: https://sagemaker.readthedocs.io/en/stable/v2.html for details.
###Markdown
## Convert data into a RecordSet format

Next, prepare the data for a built-in model by converting the DataFrame to a numpy array of float values.

The *record_set* function in the SageMaker PCA model converts a numpy array into a **RecordSet** format, which is the required format for the training input data. This is a requirement for _all_ of SageMaker's built-in models. The use of this data type is one of the reasons that training models within Amazon SageMaker performs faster, especially for large datasets.
###Code
# convert df to np array
train_data_np = counties_scaled.values.astype('float32')
# convert to RecordSet format
formatted_train_data = pca_SM.record_set(train_data_np)
###Output
_____no_output_____
###Markdown
## Train the model

Call the fit function on the PCA model, passing in our formatted training data. This spins up a training instance to perform the training job.

Note that it takes the longest to launch the specified training instance; the fitting itself doesn't take much time.
###Code
%%time
# train the PCA mode on the formatted data
pca_SM.fit(formatted_train_data)
###Output
Defaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.
Defaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.
###Markdown
## Accessing the PCA Model Attributes

After the model is trained, we can access the underlying model parameters.

### Unzip the Model Details

Now that the training job is complete, you can find the job under **Jobs** in the **Training** subsection in the Amazon SageMaker console. You can find the job name listed in the training jobs. Use that job name in the following code to specify which model to examine.

Model artifacts are stored in S3 as a TAR file; a compressed file in the output path we specified + 'output/model.tar.gz'. The artifacts stored here can be used to deploy a trained model.
###Code
# Get the name of the training job, it's suggested that you copy-paste
# from the notebook or from a specific job in the AWS console
training_job_name='pca-2021-02-04-16-46-04-568'
# where the model is saved, by default
model_key = os.path.join(prefix, training_job_name, 'output/model.tar.gz')
print(model_key)
# download and unzip model
boto3.resource('s3').Bucket(bucket_name).download_file(model_key, 'model.tar.gz')
# unzipping as model_algo-1
os.system('tar -zxvf model.tar.gz')
os.system('unzip model_algo-1')
###Output
counties/pca-2021-02-04-16-46-04-568/output/model.tar.gz
###Markdown
### MXNet Array

Many of the Amazon SageMaker algorithms use MXNet for computational speed, including PCA, and so the model artifacts are stored as an array. After the model is unzipped and decompressed, we can load the array using MXNet.

You can take a look at the MXNet [documentation, here](https://aws.amazon.com/mxnet/).
###Code
import mxnet as mx
# loading the unzipped artifacts
pca_model_params = mx.ndarray.load('model_algo-1')
# what are the params
print(pca_model_params)
###Output
{'s':
[1.7896362e-02 3.0864021e-02 3.2130770e-02 3.5486195e-02 9.4831578e-02
1.2699370e-01 4.0288666e-01 1.4084760e+00 1.5100485e+00 1.5957943e+00
1.7783760e+00 2.1662524e+00 2.2966361e+00 2.3856051e+00 2.6954880e+00
2.8067985e+00 3.0175958e+00 3.3952675e+00 3.5731301e+00 3.6966958e+00
4.1890211e+00 4.3457499e+00 4.5410376e+00 5.0189657e+00 5.5786467e+00
5.9809699e+00 6.3925138e+00 7.6952214e+00 7.9913125e+00 1.0180052e+01
1.1718245e+01 1.3035975e+01 1.9592180e+01]
<NDArray 33 @cpu(0)>, 'v':
[[ 2.46869749e-03 2.56468095e-02 2.50773830e-03 ... -7.63925165e-02
1.59879066e-02 5.04589686e-03]
[-2.80601848e-02 -6.86634064e-01 -1.96283013e-02 ... -7.59587288e-02
1.57304872e-02 4.95312130e-03]
[ 3.25766727e-02 7.17300594e-01 2.40726061e-02 ... -7.68136829e-02
1.62378680e-02 5.13597298e-03]
...
[ 1.12151138e-01 -1.17030945e-02 -2.88011521e-01 ... 1.39890045e-01
-3.09406728e-01 -6.34506866e-02]
[ 2.99992133e-02 -3.13433539e-03 -7.63589665e-02 ... 4.17341813e-02
-7.06735924e-02 -1.42857227e-02]
[ 7.33537527e-05 3.01008171e-04 -8.00925500e-06 ... 6.97060227e-02
1.20169498e-01 2.33626723e-01]]
<NDArray 34x33 @cpu(0)>, 'mean':
[[0.00988273 0.00986636 0.00989863 0.11017046 0.7560245 0.10094159
0.0186819 0.02940491 0.0064698 0.01154038 0.31539047 0.1222766
0.3030056 0.08220861 0.256217 0.2964254 0.28914267 0.40191284
0.57868284 0.2854676 0.28294644 0.82774544 0.34378946 0.01576072
0.04649627 0.04115358 0.12442778 0.47014 0.00980645 0.7608103
0.19442631 0.21674445 0.0294168 0.22177474]]
<NDArray 1x34 @cpu(0)>}
###Markdown
## PCA Model Attributes

Three types of model attributes are contained within the PCA model.

* **mean**: The mean that was subtracted from a component in order to center it.
* **v**: The makeup of the principal components (same as 'components_' in an sklearn PCA model).
* **s**: The singular values of the components for the PCA transformation. This does not exactly give the % variance from the original feature space, but can give the % variance from the projected feature space.

We are only interested in v and s.

From s, we can get an approximation of the data variance that is covered in the first `n` principal components. The approximate explained variance is given by the formula: the sum of squared s values for all top n components over the sum of squared s values for _all_ components:

\begin{equation*}
\frac{\sum_{i=1}^{n} s_i^2}{\sum s^2}
\end{equation*}

From v, we can learn more about the combinations of original features that make up each principal component.
###Code
# get selected params
s=pd.DataFrame(pca_model_params['s'].asnumpy())
v=pd.DataFrame(pca_model_params['v'].asnumpy())
###Output
_____no_output_____
###Markdown
## Data Variance

Our current PCA model creates 33 principal components, but when we create new dimensionality-reduced training data, we'll only select a few, top n components to use. To decide how many top components to include, it's helpful to look at how much **data variance** the components capture. For our original, high-dimensional data, 34 features captured 100% of our data variance. If we discard some of these higher dimensions, we will lower the amount of variance we can capture.

### Tradeoff: dimensionality vs. data variance

As an illustrative example, say we have original data in three dimensions. So, three dimensions capture 100% of our data variance; these dimensions cover the entire spread of our data. The below images are taken from the PhD thesis, ["Approaches to analyse and interpret biological profile data"](https://publishup.uni-potsdam.de/opus4-ubp/frontdoor/index/index/docId/696) by Matthias Scholz, (2006, University of Potsdam, Germany).

Now, you may also note that most of this data seems related; it falls close to a 2D plane, and just by looking at the spread of the data, we can visualize that the original, three dimensions have some correlation. So, we can instead choose to create two new dimensions, made up of linear combinations of the original, three dimensions. These dimensions are represented by the two axes/lines, centered in the data.

If we project this in a new, 2D space, we can see that we still capture most of the original data variance using *just* two dimensions. There is a tradeoff between the amount of variance we can capture and the number of component-dimensions we use to represent our data.

When we select the top n components to use in a new data model, we'll typically want to include enough components to capture about 80-90% of the original data variance. In this project, we are looking at generalizing over a lot of data and we'll aim for about 80% coverage.

**Note**: The _top_ principal components, with the largest s values, are actually at the end of the s DataFrame. Let's print out the s values for the top n principal components.
###Code
# looking at top 5 components
n_principal_components = 5
start_idx = N_COMPONENTS - n_principal_components # 33-n
# print a selection of s
print(s.iloc[start_idx:, :])
###Output
0
28 7.991313
29 10.180052
30 11.718245
31 13.035975
32 19.592180
###Markdown
### EXERCISE: Calculate the explained variance

In creating new training data, you'll want to choose the top n principal components that account for at least 80% data variance.

Complete a function, `explained_variance`, that takes in the entire array `s` and a number of top principal components to consider. Then return the approximate, explained variance for those top n components.

For example, to calculate the explained variance for the top 5 components, calculate s squared for *each* of the top 5 components, add those up and normalize by the sum of *all* squared s values, according to this formula:

\begin{equation*}
\frac{\sum_{i=1}^{5} s_i^2}{\sum s^2}
\end{equation*}

> Using this function, you should be able to answer the **question**: What is the smallest number of principal components that captures at least 80% of the total variance in the dataset?
###Code
# Calculate the explained variance for the top n principal components
# you may assume you have access to the global var N_COMPONENTS
def explained_variance(s, n_top_components):
'''Calculates the approx. data variance that n_top_components captures.
:param s: A dataframe of singular values for top components;
the top value is in the last row.
:param n_top_components: An integer, the number of top components to use.
:return: The expected data variance covered by the n_top_components.'''
start_idx = N_COMPONENTS - n_top_components ## 33-3 = 30, for example
# calculate approx variance
exp_variance = np.square(s.iloc[start_idx:,:]).sum()/np.square(s).sum()
return exp_variance[0]
###Output
_____no_output_____
###Markdown
### Test Cell

Test out your own code by seeing how it responds to different inputs; does it return a reasonable value for the single top component? What about for the top 5 components?
###Code
# test cell
n_top_components = 1 # select a value for the number of top components
# calculate the explained variance
exp_variance = explained_variance(s, n_top_components)
print('Explained variance: ', exp_variance)
###Output
Explained variance: 0.32098714
###Markdown
As an example, you should see that the top principal component accounts for about 32% of our data variance! Next, you may be wondering what makes up this (and other components); what linear combination of features makes these components so influential in describing the spread of our data?

Below, let's take a look at our original features and use that as a reference.
###Code
# features
features_list = counties_scaled.columns.values
print('Features: \n', features_list)
###Output
Features:
['TotalPop' 'Men' 'Women' 'Hispanic' 'White' 'Black' 'Native' 'Asian'
'Pacific' 'Citizen' 'Income' 'IncomeErr' 'IncomePerCap' 'IncomePerCapErr'
'Poverty' 'ChildPoverty' 'Professional' 'Service' 'Office' 'Construction'
'Production' 'Drive' 'Carpool' 'Transit' 'Walk' 'OtherTransp'
'WorkAtHome' 'MeanCommute' 'Employed' 'PrivateWork' 'PublicWork'
'SelfEmployed' 'FamilyWork' 'Unemployment']
###Markdown
## Component Makeup

We can now examine the makeup of each PCA component based on **the weightings of the original features that are included in the component**. The following code shows the feature-level makeup of the first component.

Note that the components are again ordered from smallest to largest, and so I am getting the correct row by calling N_COMPONENTS-1 to get the top (first) component.
###Code
import seaborn as sns
def display_component(v, features_list, component_num, n_weights=10):
# get index of component (last row - component_num)
row_idx = N_COMPONENTS-component_num
# get the list of weights from a row in v, dataframe
v_1_row = v.iloc[:, row_idx]
v_1 = np.squeeze(v_1_row.values)
    # match weights to features in counties_scaled dataframe, using list comprehension
comps = pd.DataFrame(list(zip(v_1, features_list)),
columns=['weights', 'features'])
# we'll want to sort by the largest n_weights
# weights can be neg/pos and we'll sort by magnitude
comps['abs_weights']=comps['weights'].apply(lambda x: np.abs(x))
sorted_weight_data = comps.sort_values('abs_weights', ascending=False).head(n_weights)
# display using seaborn
ax=plt.subplots(figsize=(10,6))
ax=sns.barplot(data=sorted_weight_data,
x="weights",
y="features",
palette="Blues_d")
ax.set_title("PCA Component Makeup, Component #" + str(component_num))
plt.show()
# display makeup of first component
num=1
display_component(v, counties_scaled.columns.values, component_num=num, n_weights=10)
###Output
_____no_output_____
###Markdown
## Deploying the PCA Model

We can now deploy this model and use it to make "predictions". Instead of seeing what happens with some test data, we'll actually want to pass our training data into the deployed endpoint to create principal components for each data point.

Run the cell below to deploy/host this model on an instance_type that we specify.
###Code
%%time
# this takes a little while, around 7mins
pca_predictor = pca_SM.deploy(initial_instance_count=1,
instance_type='ml.t2.medium')
###Output
Defaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.
###Markdown
We can pass the original, numpy dataset to the model and transform the data using the model we created. Then we can take the largest n components to reduce the dimensionality of our data.
###Code
# pass np train data to the PCA model
train_pca = pca_predictor.predict(train_data_np)
# check out the first item in the produced training features
data_idx = 0
print(train_pca[data_idx])
###Output
label {
key: "projection"
value {
float32_tensor {
values: 0.0002009272575378418
values: 0.0002455431967973709
values: -0.0005782842636108398
values: -0.0007815659046173096
values: -0.00041911262087523937
values: -0.0005133943632245064
values: -0.0011316537857055664
values: 0.0017268601804971695
values: -0.005361668765544891
values: -0.009066537022590637
values: -0.008141040802001953
values: -0.004735097289085388
values: -0.00716288760304451
values: 0.0003725700080394745
values: -0.01208949089050293
values: 0.02134685218334198
values: 0.0009293854236602783
values: 0.002417147159576416
values: -0.0034637749195098877
values: 0.01794189214706421
values: -0.01639425754547119
values: 0.06260128319263458
values: 0.06637358665466309
values: 0.002479255199432373
values: 0.10011336207389832
values: -0.1136140376329422
values: 0.02589476853609085
values: 0.04045158624649048
values: -0.01082391943782568
values: 0.1204797774553299
values: -0.0883558839559555
values: 0.16052711009979248
values: -0.06027412414550781
}
}
}
###Markdown
### EXERCISE: Create a transformed DataFrame

For each of our data points, get the top n component values from the list of component data points, returned by our predictor above, and put those into a new DataFrame.

You should end up with a DataFrame that looks something like the following:

```
                      c_1       c_2       c_3       c_4       c_5 ...
Alabama-Autauga -0.060274  0.160527 -0.088356  0.120480 -0.010824 ...
Alabama-Baldwin -0.149684  0.185969 -0.145743 -0.023092 -0.068677 ...
Alabama-Barbour  0.506202  0.296662  0.146258  0.297829  0.093111 ...
...
```
###Code
# create dimensionality-reduced data
def create_transformed_df(train_pca, counties_scaled, n_top_components):
''' Return a dataframe of data points with component features.
The dataframe should be indexed by State-County and contain component values.
:param train_pca: A list of pca training data, returned by a PCA model.
:param counties_scaled: A dataframe of normalized, original features.
:param n_top_components: An integer, the number of top components to use.
:return: A dataframe, indexed by State-County, with n_top_component values as columns.
'''
# create a dataframe of component features, indexed by State-County
# create new dataframe to add data to
counties_transformed=pd.DataFrame()
# for each of our new, transformed data points
# append the component values to the dataframe
for data in train_pca:
# get component values for each data point
components=data.label['projection'].float32_tensor.values
counties_transformed=counties_transformed.append([list(components)])
# index by county, just like counties_scaled
counties_transformed.index=counties_scaled.index
# keep only the top n components
start_idx = N_COMPONENTS - n_top_components
counties_transformed = counties_transformed.iloc[:,start_idx:]
# reverse columns, component order
return counties_transformed.iloc[:, ::-1]
###Output
_____no_output_____
###Markdown
Now we can create a dataset where each county is described by the top n principal components that we analyzed earlier. Each of these components is a linear combination of the original feature space. We can interpret each of these components by analyzing the makeup of the component, shown previously.

### Define the `top_n` components to use in this transformed data

Your code should return data, indexed by 'State-County' and with as many columns as `top_n` components.

You can also choose to add descriptive column names for this data; names that correspond to the component number or feature-level makeup.
###Code
## Specify top n
top_n = 7
# call your function and create a new dataframe
counties_transformed = create_transformed_df(train_pca, counties_scaled, n_top_components=top_n)
## TODO: Add descriptive column names
counties_transformed.columns = ['c_'+str(num) for num in range(1, top_n+1)]
# print result
counties_transformed.head()
###Output
_____no_output_____
###Markdown
## Delete the Endpoint!

Now that we've deployed the model and created our new, transformed training data, we no longer need the PCA endpoint.

As a clean-up step, you should always delete your endpoints after you are done using them (and if you do not plan to deploy them to a website, for example).
###Code
# delete predictor endpoint
session.delete_endpoint(pca_predictor.endpoint)
###Output
The endpoint attribute has been renamed in sagemaker>=2.
See: https://sagemaker.readthedocs.io/en/stable/v2.html for details.
###Markdown
---

# Population Segmentation

Now, you'll use the unsupervised clustering algorithm, k-means, to segment counties using their PCA attributes, which are in the transformed DataFrame we just created. K-means is a clustering algorithm that identifies clusters of similar data points based on their component makeup. Since we have ~3000 counties and 34 attributes in the original dataset, the large feature space may have made it difficult to cluster the counties effectively. Instead, we have reduced the feature space to 7 PCA components, and we'll cluster on this transformed dataset.

### EXERCISE: Define a k-means model

Your task will be to instantiate a k-means model. A `KMeans` estimator requires a number of parameters to be instantiated, which allow us to specify the type of training instance to use, and the model hyperparameters. You can read about the required parameters in the [`KMeans` documentation](https://sagemaker.readthedocs.io/en/stable/kmeans.html); note that not all of the possible parameters are required.

### Choosing a "Good" K

One method for choosing a "good" k is to choose based on empirical data. A bad k would be one so *high* that only one or two very close data points are near it, and another bad k would be one so *low* that data points are really far away from the centers.

You want to select a k such that data points in a single cluster are close together but that there are enough clusters to effectively separate the data. You can approximate this separation by measuring how close your data points are to each cluster center; the average centroid distance between cluster points and a centroid. After trying several values for k, the centroid distance typically reaches some "elbow"; it stops decreasing at a sharp rate and this indicates a good value of k. The graph below indicates the average centroid distance for values of k between 5 and 12.

A distance elbow can be seen around k=8, where the distance starts to decrease at a slower rate. This indicates that there is enough separation to distinguish the data points in each cluster, but also that you included enough clusters so that the data points aren't *extremely* far away from each cluster.
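A minimal local sketch of this elbow calculation is below. It uses scikit-learn's `KMeans` on the transformed data purely for illustration (an assumption on my part; the notebook itself trains k-means with the SageMaker estimator below), and it assumes `counties_transformed` from the previous section is available:

```python
# Hypothetical elbow check with scikit-learn (not the SageMaker KMeans used below).
import numpy as np
from sklearn.cluster import KMeans

X = counties_transformed.values.astype('float32')

for k in range(5, 13):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    # average distance from each point to its assigned centroid
    dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    print(k, round(dists.mean(), 4))
```

Plotting these averages against k and looking for the bend reproduces the "elbow" reasoning described above.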
###Code
# define a KMeans estimator
k_clusters = 8 # number of clusters to train the algorithm with
kmeans_SM = sagemaker.KMeans(role=role,
instance_count=1,
instance_type='ml.c4.xlarge',
output_path=output_path,
sagemaker_session=session,
k=k_clusters)
###Output
_____no_output_____
###Markdown
### EXERCISE: Create formatted, k-means training data

Just as before, you should convert the `counties_transformed` df into a numpy array and then into a RecordSet. This is the required format for passing training data into a `KMeans` model.
###Code
# convert the transformed dataframe into record_set data
# convert df to np array
kmeans_transformed_data = counties_transformed.values.astype('float32')
# convert to RecordSet format
kmeans_formatted_train = kmeans_SM.record_set(kmeans_transformed_data)
###Output
_____no_output_____
###Markdown
### EXERCISE: Train the k-means model

Pass in the formatted training data and train the k-means model.
###Code
%%time
# train kmeans
kmeans_SM.fit(kmeans_formatted_train)
###Output
Defaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.
###Markdown
### EXERCISE: Deploy the k-means model

Deploy the trained model to create a `kmeans_predictor`.
###Code
%%time
# deploy the model to create a predictor
kmeans_predictor = kmeans_SM.deploy(initial_instance_count=1,
instance_type='ml.t2.medium')
###Output
Defaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.
###Markdown
### EXERCISE: Pass in the training data and assign predicted cluster labels

After deploying the model, you can pass in the k-means training data, as a numpy array, and get resultant, predicted cluster labels for each data point.
###Code
# get the predicted clusters for all the kmeans training data
cluster_info = kmeans_predictor.predict(kmeans_transformed_data)
###Output
_____no_output_____
###Markdown
## Exploring the resultant clusters

The resulting predictions should give you information about the cluster that each data point belongs to.

You should be able to answer the **question**: which cluster does a given data point belong to?
###Code
# print cluster info for first data point
data_idx = 0
print('County is: ', counties_transformed.index[data_idx])
print()
print(cluster_info[data_idx])
###Output
County is: Alabama-Autauga
label {
key: "closest_cluster"
value {
float32_tensor {
values: 3.0
}
}
}
label {
key: "distance_to_cluster"
value {
float32_tensor {
values: 0.28326112031936646
}
}
}
###Markdown
## Visualize the distribution of data over clusters

Get the cluster labels for each of our data points (counties) and visualize the distribution of points over each cluster.
###Code
# get all cluster labels
cluster_labels = [c.label['closest_cluster'].float32_tensor.values[0] for c in cluster_info]
# count up the points in each cluster
cluster_df = pd.DataFrame(cluster_labels)[0].value_counts()
print(cluster_df)
###Output
3.0 900
0.0 745
5.0 419
2.0 352
1.0 315
7.0 210
6.0 186
4.0 91
Name: 0, dtype: int64
###Markdown
Now, you may be wondering, what does each of these clusters tell us about these data points? To improve explainability, we need to access the underlying model to get the cluster centers. These centers will help describe which features characterize each cluster.

## Delete the Endpoint!

Now that you've deployed the k-means model and extracted the cluster labels for each data point, you no longer need the k-means endpoint.
###Code
# delete kmeans endpoint
session.delete_endpoint(kmeans_predictor.endpoint)
###Output
The endpoint attribute has been renamed in sagemaker>=2.
See: https://sagemaker.readthedocs.io/en/stable/v2.html for details.
###Markdown
---

# Model Attributes & Explainability

Explaining the result of the modeling is an important step in making use of our analysis. By combining PCA and k-means, and the information contained in the model attributes within a SageMaker trained model, you can learn about a population and remark on some patterns you've found, based on the data.

### EXERCISE: Access the k-means model attributes

Extract the k-means model attributes from where they are saved as a TAR file in an S3 bucket.

You'll need to access the model by the k-means training job name, and then unzip the file into `model_algo-1`. Then you can load that file using MXNet, as before.
###Code
# download and unzip the kmeans model file
# use the name model_algo-1
training_job_name='kmeans-2021-02-05-18-58-37-825'
# where the model is saved, by default
model_key = os.path.join(prefix, training_job_name, 'output/model.tar.gz')
print(model_key)
# download and unzip model
boto3.resource('s3').Bucket(bucket_name).download_file(model_key, 'model.tar.gz')
# unzipping as model_algo-1
os.system('tar -zxvf model.tar.gz')
os.system('unzip model_algo-1')
# get the trained kmeans params using mxnet
kmeans_model_params = mx.ndarray.load('model_algo-1')
print(kmeans_model_params)
###Output
[
[[-4.85139638e-02 7.61406198e-02 1.59808427e-01 -6.04455695e-02
-2.62346789e-02 7.62279481e-02 -2.76150256e-02]
[-3.02197069e-01 -3.67337257e-01 8.29321295e-02 7.69246295e-02
4.62787636e-02 -3.35046053e-02 1.11290626e-01]
[-1.24498643e-01 1.56188617e-02 -3.83366138e-01 9.09953788e-02
-7.62417680e-04 1.00822560e-01 -4.88333218e-03]
[-2.11222723e-01 8.09158236e-02 -2.41858587e-02 -7.52177015e-02
-3.10943145e-02 -4.42866385e-02 -7.37198163e-03]
[ 1.30836928e+00 -2.33184725e-01 -1.71644345e-01 -4.26719397e-01
-1.20495185e-01 1.12252936e-01 1.55296803e-01]
[ 3.40004891e-01 2.07678989e-01 5.48592843e-02 2.44637132e-01
6.52574301e-02 -5.18050715e-02 3.05690542e-02]
[ 3.42456162e-01 -2.27115169e-01 -1.14112429e-01 -2.14798003e-01
1.98427722e-01 -1.49064705e-01 -6.39920980e-02]
[ 1.14819348e-01 -3.31382632e-01 6.96039572e-02 1.23040855e-01
-1.01833351e-01 -3.42654996e-03 -9.12202224e-02]]
<NDArray 8x7 @cpu(0)>]
###Markdown
There is only 1 set of model parameters contained within the k-means model: the cluster centroid locations in PCA-transformed, component space.

* **centroids**: The location of the centers of each cluster in component space, identified by the k-means algorithm.
###Code
# get all the centroids
cluster_centroids=pd.DataFrame(kmeans_model_params[0].asnumpy())
cluster_centroids.columns=counties_transformed.columns
display(cluster_centroids)
###Output
_____no_output_____
###Markdown
## Visualizing Centroids in Component Space

You can't visualize 7-dimensional centroids in space, but you can plot a heatmap of the centroids and their location in the transformed feature space. This gives you insight into what characteristics define each cluster.

Often with unsupervised learning, results are hard to interpret. This is one way to make use of the results of PCA + clustering techniques, together. Since you were able to examine the makeup of each PCA component, you can understand what each centroid represents in terms of the PCA components.
###Code
# generate a heatmap in component space, using the seaborn library
plt.figure(figsize = (12,9))
ax = sns.heatmap(cluster_centroids.T, cmap = 'YlGnBu')
ax.set_xlabel("Cluster")
plt.yticks(fontsize = 16)
plt.xticks(fontsize = 16)
ax.set_title("Attribute Value by Centroid")
plt.show()
###Output
_____no_output_____
###Markdown
If you've forgotten what each component corresponds to at an original-feature-level, that's okay! You can use the previously defined `display_component` function to see the feature-level makeup.
###Code
# what do each of these components mean again?
# let's use the display function, from above
component_num=7
display_component(v, counties_scaled.columns.values, component_num=component_num)
###Output
_____no_output_____
###Markdown
## Natural Groupings

You can also map the cluster labels back to each individual county and examine which counties are naturally grouped together.
###Code
# add a 'labels' column to the dataframe
counties_transformed['labels']=list(map(int, cluster_labels))
# sort by cluster label 0-7
sorted_counties = counties_transformed.sort_values('labels', ascending=True)
# view some pts in cluster 0
sorted_counties.head(20)
###Output
_____no_output_____
###Markdown
You can also examine one of the clusters in more detail, like cluster 1, for example. A quick glance at the location of the centroid in component space (the heatmap) tells us that it has the highest value for the `comp_6` attribute. You can now see which counties fit that description.
###Code
# get all counties with label == 1
cluster=counties_transformed[counties_transformed['labels']==1]
cluster.head()
###Output
_____no_output_____ |
1_data_prep/CMIP_gridcell_temp_prep/CPU_ACCESS1-0.ipynb | ###Markdown
Step 0: load necessary libraries
###Code
import xarray as xr
import datetime
import pandas as pd
import numpy as np
import xesmf as xe
import time
import gc
import matplotlib.pyplot as plt
def regrid_data_2006(var, start_year, end_year, interval, height=True):
t0 = time.time()
print("******Start to process "+var+"******")
ds = []
# load the data
start_time = time.time()
for s_year in np.arange(start_year,end_year,interval):
#print(s_year)
e_year = s_year+interval-1
s_s_year = str(s_year)
s_e_year = str(e_year)
print(CMIP_dir+mod+"/"+var+"_day_"+mod+rcp+s_s_year+"0101-"+s_e_year+"1231.nc")
temp_ds = xr.open_dataset(CMIP_dir+mod+"/"+var+"_day_"+mod+rcp+s_s_year+"0101-"+s_e_year+"1231.nc")[var]
ds.append(temp_ds)
del temp_ds
gc.collect()
elapsed_time = time.time() - start_time
print("It takes elapsed_time", elapsed_time, "to load the data")
# merge the time series
print("*********Start to merge*********")
start_time = time.time()
ds_merge_ts = xr.merge(ds).sel(time=slice("2006-01-01", "2015-12-31"))
del ds
gc.collect()
elapsed_time = time.time() - start_time
print("It takes elapsed_time", elapsed_time, "to merge the time series")
# build the regridder
print("*********Start to build the regridder*********")
start_time = time.time()
regridder = xe.Regridder(ds_merge_ts, ds_out, 'patch', periodic=True, reuse_weights=True)
elapsed_time = time.time() - start_time
print("It takes elapsed_time", elapsed_time, "to build the regridder")
# regrid the layer
print("*********Start to regrid the layer*********")
start_time = time.time()
ds_merge_ts_reg = regridder(ds_merge_ts[var])
elapsed_time = time.time() - start_time
print("It takes elapsed_time", elapsed_time, "to regrid the layer")
# mask the layer
print("*********Start to mask the layer*********")
start_time = time.time()
ds_merge_ts_reg_mask = ds_merge_ts_reg.where(mask)
elapsed_time = time.time() - start_time
print("It takes elapsed_time", elapsed_time, "to mask the layer")
# plot the layer
print("*********Start to plot the layer*********")
start_time = time.time()
fig, ((ax1, ax2, ax3)) = plt.subplots(nrows=1, ncols=3,figsize=(18,3))
ds_merge_ts[var].loc["2015-12-31"].plot(ax=ax1,
vmax=ds_merge_ts[var].loc["2015-12-31"].max(),
vmin=ds_merge_ts[var].loc["2015-12-31"].min())
ds_merge_ts_reg.loc["2015-12-31"].plot(ax=ax2,
vmax=ds_merge_ts[var].loc["2015-12-31"].max(),
vmin=ds_merge_ts[var].loc["2015-12-31"].min())
ds_merge_ts_reg_mask.loc["2015-12-31"].plot(ax=ax3,
vmax=ds_merge_ts[var].loc["2015-12-31"].max(),
vmin=ds_merge_ts[var].loc["2015-12-31"].min())
plt.show()
elapsed_time = time.time() - start_time
print("It takes elapsed_time", elapsed_time, "to plot the layer")
elapsed_time = time.time() - t0
print("It takes elapsed_time", elapsed_time, "to deal with "+var+" in total")
print("******End "+var+"******")
print("\n")
if (height):
return ds_merge_ts_reg_mask.rename(var).drop("height")
else:
return ds_merge_ts_reg_mask.rename(var)
def regrid_data_2061(var, start_year, end_year, interval, height=True):
t0 = time.time()
print("******Start to process "+var+"******")
ds = []
# load the data
start_time = time.time()
for s_year in np.arange(start_year,end_year,interval):
#print(s_year)
e_year = s_year+interval-1
s_s_year = str(s_year)
s_e_year = str(e_year)
print(CMIP_dir+mod+"/"+var+"_day_"+mod+rcp+s_s_year+"0101-"+s_e_year+"1231.nc")
temp_ds = xr.open_dataset(CMIP_dir+mod+"/"+var+"_day_"+mod+rcp+s_s_year+"0101-"+s_e_year+"1231.nc")[var]
ds.append(temp_ds)
del temp_ds
gc.collect()
elapsed_time = time.time() - start_time
print("It takes elapsed_time", elapsed_time, "to load the data")
# merge the time series
print("*********Start to merge*********")
start_time = time.time()
ds_merge_ts = xr.merge(ds).sel(time=slice("2061-01-01", "2070-12-31"))
del ds
gc.collect()
elapsed_time = time.time() - start_time
print("It takes elapsed_time", elapsed_time, "to merge the time series")
# build the regridder
print("*********Start to build the regridder*********")
start_time = time.time()
regridder = xe.Regridder(ds_merge_ts, ds_out, 'patch', periodic=True, reuse_weights=True)
elapsed_time = time.time() - start_time
print("It takes elapsed_time", elapsed_time, "to build the regridder")
# regrid the layer
print("*********Start to regrid the layer*********")
start_time = time.time()
ds_merge_ts_reg = regridder(ds_merge_ts[var])
elapsed_time = time.time() - start_time
print("It takes elapsed_time", elapsed_time, "to regrid the layer")
# mask the layer
print("*********Start to mask the layer*********")
start_time = time.time()
ds_merge_ts_reg_mask = ds_merge_ts_reg.where(mask)
elapsed_time = time.time() - start_time
print("It takes elapsed_time", elapsed_time, "to mask the layer")
# plot the layer
print("*********Start to plot the layer*********")
start_time = time.time()
fig, ((ax1, ax2, ax3)) = plt.subplots(nrows=1, ncols=3,figsize=(18,3))
ds_merge_ts[var].loc["2070-12-31"].plot(ax=ax1,
vmax=ds_merge_ts[var].loc["2070-12-31"].max(),
vmin=ds_merge_ts[var].loc["2070-12-31"].min())
ds_merge_ts_reg.loc["2070-12-31"].plot(ax=ax2,
vmax=ds_merge_ts[var].loc["2070-12-31"].max(),
vmin=ds_merge_ts[var].loc["2070-12-31"].min())
ds_merge_ts_reg_mask.loc["2070-12-31"].plot(ax=ax3,
vmax=ds_merge_ts[var].loc["2070-12-31"].max(),
vmin=ds_merge_ts[var].loc["2070-12-31"].min())
plt.show()
elapsed_time = time.time() - start_time
print("It takes elapsed_time", elapsed_time, "to plot the layer")
elapsed_time = time.time() - t0
print("It takes elapsed_time", elapsed_time, "to deal with "+var+" in total")
print("******End "+var+"******")
print("\n")
if (height):
return ds_merge_ts_reg_mask.rename(var).drop("height")
else:
return ds_merge_ts_reg_mask.rename(var)
#########################################################################################################
def get_ds_2006(start_year, end_year, interval):
# define the variable list *****
var_ls_height = ["tasmax"]
#var_ls_no_height =["pr","prsn","rlds","rlus","rsds","rsus"]
# get a list of variable DataArray
temp_var = []
for var in var_ls_height:
temp_var.append(regrid_data_2006(var, start_year, end_year, interval, height=True))
#for var in var_ls_no_height:
#temp_var.append(regrid_data_2006(var, start_year, end_year, interval, height=False))
ds_merge = xr.merge(temp_var)
return ds_merge
def get_ds_2061(start_year, end_year, interval):
# define the variable list *****
var_ls_height = ["tasmax"]
#var_ls_no_height =["pr","prsn","rlds","rlus","rsds","rsus"]
# get a list of variable DataArray
temp_var = []
for var in var_ls_height:
temp_var.append(regrid_data_2061(var, start_year, end_year, interval, height=True))
#for var in var_ls_no_height:
#temp_var.append(regrid_data_2061(var, start_year, end_year, interval, height=False))
ds_merge = xr.merge(temp_var)
return ds_merge
def get_urban_df(ds):
start_time = time.time()
df_all = ds.to_dataframe()
df = df_all[~np.isnan(df_all["tasmax"])]
print("It takes elapsed_time", time.time()-start_time, "to convert to dataframe and get urban grid")
return df
###Output
_____no_output_____
###Markdown
Step 1: define the grid and mask
###Code
# define the model
mod = "ACCESS1-0"
rcp = "_rcp85_r1i1p1_"
# define the grid mask
CESM = xr.open_dataset("/glade/collections/cdg/data/cesmLE/CESM-CAM5-BGC-LE/lnd/proc/tseries/daily/TREFMXAV_U/b.e11.BRCP85C5CNBDRD.f09_g16.002.clm2.h1.TREFMXAV_U.20060101-20801231.nc")
grid = CESM["TREFMXAV_U"].loc["2006-01-02"]
mask = CESM["TREFMXAV_U"].loc["2006-01-02"].notnull().squeeze()
ds_out = xr.Dataset({'lat':(['lat'], grid["lat"].values),
'lon':(['lon'], grid["lon"].values)})
# define the load directory *****
CMIP_dir = "/glade/scratch/zhonghua/CMIP5_tasmax_nc/"
# define the save directory *****
CMIP_save_dir = "/glade/scratch/zhonghua/CMIP5_tasmax_csv/"
###Output
_____no_output_____
###Markdown
Step 2: 2006-2015
###Code
ds = get_ds_2006(2006, 2031, 25)
df = get_urban_df(ds)
start_time=time.time()
df.to_csv(CMIP_save_dir+mod+"/2006.csv")
print(time.time()-start_time)
###Output
It takes elapsed_time 5.176332473754883 to convert to dataframe and get urban grid
54.123281955718994
###Markdown
Step 3: 2061-2070
###Code
del ds, df
gc.collect()
ds = get_ds_2061(2056, 2081, 25)
df = get_urban_df(ds)
start_time=time.time()
df.to_csv(CMIP_save_dir+mod+"/2061.csv")
print(time.time()-start_time)
###Output
It takes elapsed_time 5.165327787399292 to convert to dataframe and get urban grid
53.563414573669434
|
PAT NH3 Syn.ipynb | ###Markdown
#### 0. Feed conditions

Steam.

| | | |
| - | - | - |
| $n_{H_2 O, 0}$ | 60 000 | kgmol/h |
| T | 500 | °C |
| p | 35 | bar |

Air.

| | | Units |
| - | - | - |
| $n_{Ar, 0}$ | 0.01 $\cdot$ 15 000 | kgmol/h |
| $n_{O_2, 0}$ | 0.21 $\cdot$ 15 000 | kgmol/h |
| $n_{N_2, 0}$ | 0.78 $\cdot$ 15 000 | kgmol/h |
| T | 20 | °C |
| p | 35 | bar |

Raw gas.

| | | Units |
| - | - | - |
| $n_{CH_4, 0}$ | 20 000 | kgmol/h |
| T | 20 | °C |
| p | 35 | bar |

#### 1. Steam reforming

* Preheater: from $T_{0}$ to the operating temperature $T$ = 995 °C: $\dot Q = \sum\limits_j^{\mathcal C} \dot n_{j, 1} \cdot \Delta H^\circ_j(T)- \sum\limits_j^{\mathcal C} \dot n_{j, 0} \cdot \Delta H^\circ_j(T_0) =\sum\limits_j^{\mathcal C}\dot n_{j, 0}\cdot(\Delta H^\circ_j(T)-\Delta H^\circ_j(T_0))$
* Reactor: considered (independent) reactions:

$2.0 CO+ 2.0 H_2\rightleftharpoons 1.0 CO_2+1.0 CH_4 \quad K(T)=7.39082e-05\\1.0 CO+ 3.0 H_2\rightleftharpoons 1.0 H_2O+1.0 CH_4 \quad K(T)=0.000123279\\1.5 H_2+ 0.5 N_2\rightleftharpoons 1.0 NH_3 \quad K(T)=0.000143712\\2.0 CO+ 4.0 H_2\rightleftharpoons 2.0 CH_4+1.0 O_2 \quad K(T)=3.46185e-23$

Isothermal reactor. Solution as in [A.1.1](A-1-1). $\Rightarrow \{n_{j,1}\}_j^{\mathcal C}, \{\xi_{i,1}\}_i^{\mathcal R}$

Total energy to be exchanged: $\dot Q = \sum\limits_j^{\mathcal C}\dot n_{j, 0}\cdot(\Delta H^\circ_j(T)-\Delta H^\circ_j(T_0))+ \sum\limits_{i}^{\mathcal R} \xi_{i,1}\cdot \underbrace{( \sum\limits_j^{\mathcal{C}}{\nu_{ij} \cdot \Delta H_j^{\circ}(T)})}_{\Delta_RH_i^{\circ}(T)}$

#### 2. Water-gas shift reaction (WGS)

* Precooler: from $T_{1}$ to the operating temperature $T$ = 210 °C: $\dot Q = \sum\limits_j^{\mathcal C}\dot n_{j, 1}\cdot(\Delta H^\circ_j(T)-\Delta H^\circ_j(T_1))$
* Reactor: considered (independent) reactions:

$1.0 H_2+ 1.0 CO_2\rightleftharpoons 1.0 CO+1.0 H_2O \quad K(T)=0.00533179$

Isothermal reactor. Solution as in [A.1.1](A-1-1). $\Rightarrow \{n_{j,2}\}_j^{\mathcal C}, \{\xi_{i,2}\}_i^{\mathcal R}$

Total energy to be exchanged: $\dot Q = \sum\limits_j^{\mathcal C}\dot n_{j, 1}\cdot(\Delta H^\circ_j(T)-\Delta H^\circ_j(T_1))+ \sum\limits_{i}^{\mathcal R} \xi_{i,2}\cdot \underbrace{( \sum\limits_j^{\mathcal{C}}{\nu_{ij} \cdot \Delta H_j^{\circ}(T)})}_{\Delta_RH_i^{\circ}(T)}$

#### 3. Drying (vapor-liquid separator)

* Precooler: from $T_2$ to the operating temperature $T$ = 20 °C: $\dot Q = \sum\limits_j^{\mathcal C}\dot n_{j, 2}\cdot(\Delta H^\circ_j(T)-\Delta H^\circ_j(T_2))$
* Isothermal flash. Solution as in [A.2.1](A-2-1). $\Rightarrow \{x_{j,3}\}_j^{\mathcal C}, \{y_{j,3}\}_j^{\mathcal C}, \dot n_{3, V}/\dot n_2=V/F\equiv\Psi$

Share of the condensation in the precooler duty: $\dot Q_V = \sum\limits_j^{\mathcal C}[(1-\Psi) \cdot n_{j,2} \cdot x_{j,2}] \cdot \Delta_V H_j^{\circ}(T)$

#### 4. CO2 removal

Simplified separation.

$j=CO_2 \rightarrow \quad \dot n_{CO2, 4} = 0.95 \cdot [\Psi \cdot \dot n_{CO_2, 2}]$

$j\neq CO_2 \rightarrow \quad \dot n_{j, 4} = [\Psi \cdot \dot n_{j, 2}]$

#### 5. Methanation

* Preheater: from $T_{4}$ = 20 °C to the operating temperature $T$ = 350 °C: $\dot Q = \sum\limits_j^{\mathcal C}\dot n_{j, 4}\cdot(\Delta H^\circ_j(T)-\Delta H^\circ_j(T_4))$
* Reactor: considered (independent) reactions:

$0.75 CO2+ 0.25 CH4\rightleftharpoons1.0 CO+0.5 H2O \quad K(T)=0.00409148\\0.5 CO2+ 1.0 H2O\rightleftharpoons0.5 CH4+1.0 O2 \quad K(T)=2.82336e-34\\0.5 H2O+ 0.25 CH4\rightleftharpoons1.0 H2+0.25 CO2 \quad K(T)=0.0848723$

Isothermal reactor. Solution as in [A.1.1](A-1-1).
$\Rightarrow \{\dot n_{j,5}\}_j^{\mathcal C}, \{\xi_{i,5}\}_i^{\mathcal R}$

Total energy to be exchanged: $\dot Q = \sum\limits_j^{\mathcal C}\dot n_{j, 4}\cdot(\Delta H^\circ_j(T)-\Delta H^\circ_j(T_4))+ \sum\limits_{i}^{\mathcal R} \xi_{i,5}\cdot \underbrace{( \sum\limits_j^{\mathcal{C}}{\nu_{ij} \cdot \Delta H_j^{\circ}(T)})}_{\Delta_RH_i^{\circ}(T)}$

#### 6. Compressor

* Adiabatic compression. Solution as in [A.3.1](A-3-1). $\Rightarrow T_{2,rev}, \dot W_{1-2,rev}, T_2, \dot W_{1-2}$
* Cooler: from $T_{2,Verdichter}$ back to the operating temperature $T_5$ = 350 °C: $\dot Q = \sum\limits_j^{\mathcal C}\dot n_{j, 5}\cdot(\Delta H^\circ_j(T_5)-\Delta H^\circ_j(T_{2,Verdichter}))$

#### 7. Recycle mixer

* Adiabatic mixing. Recycle ratio ${\mathcal R}$, outlet temperature $T_6$.

Component balance: $\dot n_{j,6} = \dot n_{j,5}+{\mathcal R}\cdot \dot n_{j, 8}$

Energy balance (objective function): $0=\sum\limits_j^{\mathcal C} \dot n_{j, 5} \cdot \Delta H^\circ_j(T_5)+ \sum\limits_j^{\mathcal C} {\mathcal R}\cdot \dot n_{j, 8} \cdot \Delta H^\circ_j(T_8)- \sum\limits_j^{\mathcal C} \dot n_{j, 6} \cdot \Delta H^\circ_j(T_6)$

Solution $\Rightarrow T_6$

#### 8. Ammonia synthesis

* Preheater: from $T_6$ to the ammonia-synthesis inlet temperature $T$ = 300 °C: $\dot Q = \sum\limits_j^{\mathcal C}\dot n_{j, 6}\cdot(\Delta H^\circ_j(T)-\Delta H^\circ_j(T_6))$
* Reactor: considered (independent) reactions:

$2.0 H_2O+ 1.0 CH_4+ 1.333 N_2\rightleftharpoons 1.0 CO2+2.667 NH_3 \quad K(T)=1.93108e-09\\1.0 H_2O+ 1.0 CH4+ 1.0 N_2\rightleftharpoons1.0 CO+2.0 NH_3 \quad K(T)=2.91601e-10\\2.0 H_2O+ 0.667 N_2\rightleftharpoons 1.333 NH_3+1.0 O2 \quad K(T)=1.47386e-41\\0.667 NH_3\rightleftharpoons 1.0 H_2+0.333 N_2 \quad K(T)=6.03022$

Adiabatic reactor. Solution as in [A.1.2](A-1-2). $\Rightarrow \{n_{j,7}\}_j^{\mathcal C}, \{\xi_{i,7}\}_i^{\mathcal R}, T_7$

#### 9. Condensation and product separation

* Precooler: from $T_7$ to the operating temperature $T$ = 21 °C: $\dot Q = \sum\limits_j^{\mathcal C}\dot n_{j, 7}\cdot(\Delta H^\circ_j(T)-\Delta H^\circ_j(T_7))$
* Isothermal flash. Solution as in [A.2.1](A-2-1). $\Rightarrow \{x_{j,8}\}_j^{\mathcal C}, \{y_{j,8}\}_j^{\mathcal C}, \dot n_{8, V}/\dot n_7=V/F\equiv\Psi$

Share of the condensation in the precooler duty: $\dot Q_V = \sum\limits_j^{\mathcal C}[(1-\Psi) \cdot n_{j,7}\cdot x_{j,8}] \cdot \Delta_V H_j^{\circ}(T)$

#### 10. Recycle stream

* Product stream: $(1-\mathcal R)\cdot \dot n_{j,8}$
* Recycle stream: $\mathcal R \cdot \dot n_{j,8}$

Send the recycle stream back into the mixer (step 7) and recompute steps 7-9 until the component balance no longer changes: $\dot n_{j,6} = \dot n_{j,5}+{\mathcal R}\cdot \dot n_{j, 8}$
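A minimal sketch of this recycle (tear-stream) iteration by successive substitution, assuming a hypothetical helper `reactor_and_flash` that implements steps 8-9 and returns stream 8 (the name and signature are illustrative, not from this notebook):

```python
import numpy as np

def solve_recycle(n_5, R, reactor_and_flash, tol=1e-6, max_iter=100):
    """Iterate the mixer balance n_6 = n_5 + R*n_8 until stream 8 stops changing."""
    n_8 = np.zeros_like(n_5, dtype=float)    # initial guess: no recycle
    for _ in range(max_iter):
        n_6 = n_5 + R * n_8                  # component balance of the mixer (step 7)
        n_8_new = reactor_and_flash(n_6)     # steps 8-9: adiabatic reactor + flash
        if np.max(np.abs(n_8_new - n_8)) < tol:
            return n_6, n_8_new
        n_8 = n_8_new
    return n_6, n_8
```

The product stream is then $(1-\mathcal R)\cdot \dot n_{j,8}$, exactly as in step 10 above.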
## A. Methods for solving the unit operations

### A.1 Equilibrium reactions

* Mole balances ($\mathcal C$) + equilibrium conditions ($\mathcal R$):

$\begin{array}{llll}\dot n_j &= \dot n_{j, 0} + \sum\limits_{i}^{\mathcal R}{\nu_{ij} \cdot \xi_i}& \quad &\mathcal{C}\\K_{i}(T)\cdot p^{-\sum\limits_{j}^{\mathcal C}{\nu_{ij}}} &= \prod\limits_{j}^{\mathcal C}{\left(\frac{\dot n_j}{\sum\limits_j^{\mathcal{C}}\dot n_j}\right)^{\nu_{ij}}} &\quad& \mathcal{R} \\&&& \overline{\mathcal{C}+\mathcal{R}}\\\end{array}$

* Energy balance (for the adiabatic case) (1):

$\begin{array}{lll}0 &= \dot Q & + \sum\limits_i(\dot n_i \Delta H_i^{\circ}(T))_{in}-\sum\limits_i(\dot n_i \Delta H_i^{\circ}(T))_{out}\\&= 0 &+ \sum\limits_i\dot n_{i,0} \cdot\Delta H_i^{\circ}(T_{in}) -\sum\limits_i\dot n_{i}\cdot\Delta H_i^{\circ}(T) \\\end{array}$

Equations: $\mathcal{C} + \mathcal{R} + 1$
Variables: $\mathcal{C}$ ($\dot n_j$), $\mathcal{R}$ ($\xi_i$), $T_2$

$\Rightarrow$ Substituting the mole balances into the equilibrium conditions:

Equations: $\mathcal{R} + 1$
Variables: $\mathcal{R}$ ($\xi_i$), $T_2$

__Strongly nonlinear system__, solved with the line-search method of [2].

#### A.1.1 Isothermal

##### A.1.1.1 Direct solution

f ($\mathcal{R}\times1$):

$0 = -K_{i}(T)\cdot p^{-\sum\limits_{j}^{\mathcal C}{\nu_{ij}}} + \prod\limits_{j}^{\mathcal C}{\left(\frac{\dot n_{j, 0} + \sum\limits_{i}^{\mathcal R}{\nu_{ij} \cdot \xi_i}}{\sum\limits_j^{\mathcal{C}}\dot n_{j, 0} + \sum\limits_{j}^{\mathcal{C}}\sum\limits_{i}^{\mathcal R}{\nu_{ij} \cdot \xi_i}}\right)^{\nu_{ij}}}$

jac ($\mathcal{R} \times \mathcal{R}$):

$\begin{array}{ll}{{\partial f_j}\over{\partial \xi_i}}=&\prod\limits_{k}^{\mathcal C}{\left(\frac{\dot n_{k, 0} + \sum\limits_{j}^{\mathcal R}{\nu_{jk} \cdot \xi_j}}{\sum\limits_k^{\mathcal{C}}\dot n_{k, 0} + \sum\limits_{k}^{\mathcal{C}}\sum\limits_{j}^{\mathcal R}{\nu_{jk} \cdot \xi_j}}\right)^{\nu_{ij}}} \cdot \left[\sum\limits_k^{\mathcal{C}}\frac{\nu_{i k} \nu_{j k}}{\dot n_{k, 0} + \sum\limits_{j}^{\mathcal R}{\nu_{jk} \cdot \xi_j}} -\frac{\left(\sum\limits_k^{\mathcal C}\nu_{ik}\right)\cdot\left(\sum\limits_k^{\mathcal C}\nu_{jk}\right)}{\sum\limits_k^{\mathcal{C}}\dot n_{k, 0} + \sum\limits_{k}^{\mathcal{C}}\sum\limits_{j}^{\mathcal R}{\nu_{jk} \cdot \xi_j}}\right]\\\end{array}$

##### A.1.1.2 Equilibrium distances

This formulation avoids division by zero. $\mathcal S$: substrates ($\nu_{ij}<0$), $\mathcal P$: products ($\nu_{ij}>0$), $\mathcal C$: components, $\mathcal C=\mathcal S+\mathcal P$.

f ($\mathcal{R}\times1$):

$\begin{array}{ll}0 &= \left[K_{i}(T)\cdot p^{-\sum\limits_{j}^{\mathcal C}{\nu_{ij}}}\right]\cdot n_T^{\sum\limits_{j}^{\mathcal C}{\nu_{ij}}}\cdot\prod\limits_{j}^{\mathcal S}n_j^{-\nu_{ij}} - \prod\limits_{j}^{\mathcal P}n_j^{\nu_{ij}}\\n_j &= \dot n_{j, 0} + \sum\limits_{i}^{\mathcal R}{\nu_{ij} \cdot \xi_i}\\n_T & =\sum\limits_j^{\mathcal{C}}\dot n_{j, 0} + \sum\limits_{j}^{\mathcal{C}}\sum\limits_{i}^{\mathcal R}{\nu_{ij} \cdot \xi_i}\\\end{array}$

jac ($\mathcal{R} \times \mathcal{R}$):

$\begin{array}{ll}{{\partial f_j}\over{\partial \xi_i}}=&-\left[K_{j}(T)\cdot p^{-\sum\limits_{k}^{\mathcal C}{\nu_{jk}}}\right]\cdot n_T^{\sum\limits_{k}^{\mathcal C}{\nu_{jk}}}\cdot\prod\limits_{k}^{\mathcal S}n_k^{-\nu_{jk}}\cdot\left[\left(\sum\limits_k^{\mathcal{S}}\frac{\nu_{i k} \nu_{j k}}{n_k}\right)- \frac{\left(\sum\limits_k{\nu_{jk}}\right)\cdot\left(\sum\limits_k{\nu_{ik}}\right)}{n_T}\right] \\&-\prod\limits_{k}^{\mathcal P}n_k^{+\nu_{jk}}\cdot \left(\sum\limits_k^{\mathcal{P}}\frac{\nu_{i k} \nu_{j k}}{n_k}\right)\\\end{array}$
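A minimal sketch of this equilibrium-distance formulation for a single isothermal reaction, solved here with a generic root finder (`scipy.optimize.root`) instead of the hand-coded Newton step with the analytical Jacobian above; the stoichiometry, feed and $K(T)$ are rounded placeholder numbers, not the flowsheet's values.

```python
import numpy as np
from scipy.optimize import root

# Residual of A.1.1.2: f_i = K_i * p^(-sum nu_i) * n_T^(sum nu_i) * prod_S n_j^(-nu_ij)
#                            - prod_P n_j^(+nu_ij)
nu = np.array([[-1.0, -3.0, 1.0, 1.0]])       # CO + 3 H2 <-> CH4 + H2O (single reaction)
K = np.array([1.2e-4])                        # equilibrium constant at T (placeholder)
n0 = np.array([10.0, 30.0, 1.0, 5.0])         # feed, kmol/h (placeholder)
p = 35.0                                      # bar, relative to p0 = 1 bar

def residual(xi):
    n = n0 + nu.T @ xi                        # mole balances
    n_t = n.sum()
    f = np.empty(len(K))
    for i in range(len(K)):
        s, pr = nu[i] < 0, nu[i] > 0          # substrate / product index sets
        f[i] = (K[i] * p ** (-nu[i].sum()) * n_t ** nu[i].sum()
                * np.prod(n[s] ** (-nu[i, s])) - np.prod(n[pr] ** nu[i, pr]))
    return f

sol = root(residual, x0=np.zeros(len(K)))     # extents of reaction xi
print(sol.x, n0 + nu.T @ sol.x)               # equilibrium mole flows
```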
#### A.1.2 Adiabatic

##### A.1.2.2 Equilibrium distances

f ($(\mathcal{R}+1)\times1$):

$\begin{array}{llll}f_i &= 0 =& \left[K_{i}(T)\cdot p^{-\sum\limits_{j}^{\mathcal C}{\nu_{ij}}}\right]\cdot n_T^{\sum\limits_{j}^{\mathcal C}{\nu_{ij}}}\cdot\prod\limits_{j}^{\mathcal S}n_j^{-\nu_{ij}} - \prod\limits_{j}^{\mathcal P}n_j^{\nu_{ij}}\\n_j &=& \dot n_{j, 0} + \sum\limits_{i}^{\mathcal R}{\nu_{ij} \cdot \xi_i}\\n_T &=&\sum\limits_j^{\mathcal{C}}\dot n_{j, 0} + \sum\limits_{j}^{\mathcal{C}}\sum\limits_{i}^{\mathcal R}{\nu_{ij} \cdot \xi_i}\\f_q &= 0 =& + \sum\limits_i^{\mathcal{C}}\dot n_{i,0} \cdot\Delta H_i^{\circ}(T_{in})-\sum\limits_j^{\mathcal{C}} (\dot n_{j, 0} + \sum\limits_{i}^{\mathcal R}{\nu_{ij} \cdot \xi_i})\cdot\Delta H_j^{\circ}(T)\\\end{array}$

jac ($(\mathcal{R}+1) \times (\mathcal{R}+1)$):

$\begin{array}{ll}{{\partial f_j}\over{\partial \xi_i}}=&-\left[K_{j}(T)\cdot p^{-\sum\limits_{k}^{\mathcal C}{\nu_{jk}}}\right]\cdot n_T^{\sum\limits_{k}^{\mathcal C}{\nu_{jk}}}\cdot\prod\limits_{k}^{\mathcal S}n_k^{-\nu_{jk}}\cdot\left[\left(\sum\limits_k^{\mathcal{S}}\frac{\nu_{i k} \nu_{j k}}{n_k}\right)- \frac{\left(\sum\limits_k{\nu_{jk}}\right)\cdot\left(\sum\limits_k{\nu_{ik}}\right)}{n_T}\right] \\&-\prod\limits_{k}^{\mathcal P}n_k^{+\nu_{jk}}\cdot \left(\sum\limits_k^{\mathcal{P}}\frac{\nu_{i k} \nu_{j k}}{n_k}\right)\\{{\partial f_j}\over{\partial T}}=&+\left[K_{j}(T)\cdot p^{-\sum\limits_{k}^{\mathcal C}{\nu_{jk}}}\right]\cdot n_T^{\sum\limits_{k}^{\mathcal C}{\nu_{jk}}}\cdot\prod\limits_{k}^{\mathcal S}n_k^{-\nu_{jk}} \cdot \left(\frac{\partial \ln K_j(T)}{\partial T}\right)\\\frac{d \ln K_j(T)}{d T}=& \frac{\Delta_R H_j^\circ(T)}{R T^2} \hspace{2cm} \text{(van 't Hoff equation)}\\{{\partial f_q}\over{\partial \xi_i}}=&-\sum\limits_k^{\mathcal{C}} \nu_{ik}\cdot\Delta H_k^{\circ}(T)\\{{\partial f_q}\over{\partial T}}=&- \sum\limits_k^{\mathcal{C}}(\dot n_{k, 0} + \sum\limits_{j}^{\mathcal R}{\nu_{jk} \cdot \xi_j})\cdot \left(\frac{\partial \Delta H_k^{\circ}(T)}{\partial T}\right)\\\frac{\partial \Delta H_k^{\circ}(T)}{\partial T}=& \left(\frac{Cp_k^{ig}(T)}{R}\right)\cdot R\\\end{array}$

### A.2 Flash evaporator

#### A.2.1 Isothermal flash (Rachford-Rice) [2]

Variables: ($N$) liquid mole fractions $\{x_i\}$, ($N$) vapour mole fractions $\{y_i\}$, (1) vapour/liquid split.
Equations: ($N$) mole balances, ($N$) isothermal equilibrium conditions, (1) constraint.

* Mole balances ($N$):

  $\dot F z_i = \dot V y_i + \dot L x_i$

* Isothermal vapour-liquid equilibrium conditions ($N$):

  $(\hat{\phi_i}^V P)\, y_i^V=(\hat{\phi_i}^L P)\, x_i^L \hspace{2cm} K_i\equiv{{y_i}\over{x_i}}={{\hat{\phi_i}^L}\over{\hat{\phi_i}^V}}$

* Constraint (1):

  $0 = \sum\limits_i y_i - \sum\limits_i x_i$

*The system reduces to a function of a single variable, $\psi\equiv V/F$:*

$\begin{array}{ll}z_i &= \psi y_i + (1-\psi) x_i \\&= \psi x_i K_i + (1-\psi) x_i \\x_i &={{z_i}\over{1+\psi(K_i-1)}} \hspace{3cm} y_i ={{z_i K_i}\over{1+\psi(K_i-1)}}\\ \Rightarrow 0 &= \sum\limits_i y_i - \sum\limits_i x_i = \sum\limits_i {{z_i(K_i-1)}\over{1+\psi(K_i-1)}}\\\end{array}$

(objective function)

Solved with the Rachford-Rice algorithm [5].
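A minimal sketch of this Rachford-Rice solution for a single flash, using a bracketing root finder; `z` and `K` are made-up placeholder values, whereas in the flowsheet the $K_i$ come from the fugacity-coefficient ratio $\hat\phi_i^L/\hat\phi_i^V$ at the given $T$ and $p$.

```python
import numpy as np
from scipy.optimize import brentq

z = np.array([0.6, 0.3, 0.1])      # feed mole fractions (placeholder)
K = np.array([3.0, 0.8, 0.01])     # K-values at T, p (placeholder)

def rachford_rice(psi, z, K):
    """Objective function sum_i z_i (K_i - 1) / (1 + psi (K_i - 1))."""
    return np.sum(z * (K - 1.0) / (1.0 + psi * (K - 1.0)))

# psi = V/F lies between the poles 1/(1 - K_max) and 1/(1 - K_min)
lo = 1.0 / (1.0 - K.max()) + 1e-10
hi = 1.0 / (1.0 - K.min()) - 1e-10
psi = brentq(rachford_rice, lo, hi, args=(z, K))

x = z / (1.0 + psi * (K - 1.0))    # liquid composition
y = K * x                          # vapour composition
print(psi, x.sum(), y.sum())       # both compositions sum to 1 at the solution
```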
### A.3 Compressor

#### A.3.1 Adiabatic compression

##### A.3.1.1 Reversible compression

Ideal gas: $dS = \frac{dQ_{rev}}{T}=\frac{Cp^{IG}}{T}dT-\frac{R}{P}dP=0$

$dW = dU = C_v\, dT$ (adiabatic, $dU = 0 + dW$; in practice, however, the following equation is used: $W=\Delta H=\int\limits_{T_1}^{T_2}Cp^{IG}dT$; compare [7] SVN eq. 7.22, p. 276 (288 pdf) with eq. 3.34, p. 78 (90 pdf)).

Temperature after reversible compression, $T_{2,rev}$:

$\int\limits_{T_1}^{T_{2,rev}}\frac{Cp^{IG}}{T}dT=R\cdot \ln \frac{P_2}{P_1}$

The temperature dependence of the heat capacity is lumped into mean values $\langle Cp^{IG}\rangle$, so that the equations can formally be solved in closed form:

$\left\langle Cp^{IG}\right\rangle_S\equiv \frac{1}{\ln\left(T_{2,rev}/T_1\right)} \int\limits_{T_1}^{T_{2,rev}}\frac{Cp^{IG}}{T}dT$

$\left\langle Cp^{IG}\right\rangle_H\equiv \frac{1}{T_{2,rev}-T_1} \int\limits_{T_1}^{T_{2,rev}}Cp^{IG}dT$

It then holds:

$\frac{T_{2,rev}}{T_1}=\left(\frac{P_2}{P_1}\right)^{R/\left\langle Cp^{IG}\right\rangle_S}$

As a function of the heat-capacity ratio ($\kappa$ in [6], $\gamma$ in [7]), $\kappa\equiv Cp^{IG} / Cv^{IG}$:

$\frac{R}{Cp^{IG}}=\frac{Cp^{IG}-Cv^{IG}}{Cp^{IG}}=\frac{\kappa-1}{\kappa} \Leftrightarrow\kappa=\frac{1}{1-R/Cp^{IG}}$

$\begin{array}{lr}(1) & \quad T_{2,rev}=T_1\cdot\left(\frac{P_2}{P_1}\right)^{\frac{\kappa-1}{\kappa}}\\\end{array}$

$W=\left\langle Cp^{IG}\right\rangle_H\cdot (T_{2,rev}-T_1)=\frac{\gamma\cdot R\cdot T_1}{\gamma-1}\cdot\left[\left(\frac{P_2}{P_1}\right)^{\frac{\gamma-1}{\gamma}}-1\right]$

Reversible compressor duty:

- Method 1 [6]:

  $\begin{array}{lr}(2) & \quad \begin{array}{ll}\dot W_{1-2, rev} &=\sum\limits_j^{\mathcal C}\dot n_{j, 2}\cdot\Delta H_j^{\circ}(T_{2, rev})-\sum\limits_j^{\mathcal C}\dot n_{j, 1}\cdot\Delta H_j^{\circ}(T_1)\\&=\sum\limits_j^{\mathcal C}\dot n_{j, 1}\cdot(\Delta H_j^{\circ}(T_{2,rev})-\Delta H_j^{\circ}(T_{1}))\\\end{array}\end{array}$

  (here $h(T, P)\rightsquigarrow \Delta H_j^{\circ}(T)$, i.e. the pressure dependence of the enthalpy is neglected, since ideal-gas behaviour is assumed and hence $H^R=0$)

- Method 2 [7]:

  $\begin{array}{lr}(2) & \quad \dot W_{1-2, rev}=\dot n_1 \cdot \frac{R\cdot T_1 \cdot \gamma}{\gamma-1}\cdot \left[\left(\frac{P_2}{P_1}\right)^{\frac{\gamma-1}{\gamma}}-1 \right]\end{array}$

##### A.3.1.2 Irreversible compression: efficiency

$\eta_{therm}=0.72$
$\eta_{mech}=1.0$

Actual work:

$\begin{array}{lr}(3) & \quad \dot W_{1-2} = \frac{\dot W_{1-2, rev}}{\eta_{therm}\cdot \eta_{mech}}\end{array}$

The actual outlet temperature is computed iteratively, since both $\Delta H_j^{\circ}(T_2)$ and $\left\langle Cp^{IG}\right\rangle_H$ depend on temperature.

- Method 1 [6], objective function:

  $\begin{array}{lr}(4) & \quad 0 = \sum\limits_j^{\mathcal C}\dot n_{j, 1}\cdot(\Delta H_j^{\circ}(T_{2})-\Delta H_j^{\circ}(T_{1}))-\frac{\dot W_{1-2, rev}}{\eta_{therm}\cdot \eta_{mech}}\end{array}$

- Method 2 [7], objective function:

  $\begin{array}{lr}(4) & \quad 0 = \left\langle Cp^{IG}\right\rangle_H\cdot (T_{2}-T_1)-\frac{\dot W_{1-2, rev}}{\eta_{therm}\cdot \eta_{mech}}\end{array}$

Summary:

* Parameters: $T_1, P_1, P_2, \{\dot n_{j,1}\}_j^{\mathcal C}, \eta_{therm}, \eta_{mech}$
* Variables: $T_{2,rev}, \dot W_{1-2,rev}, T_2, \dot W_{1-2}$ (4)
* Equations: (1)-(4), see above
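A minimal sketch of A.3.1 under the simplifying assumption of a single, constant ideal-gas heat capacity (placeholder value), so that $\kappa$ and $\langle Cp^{IG}\rangle_H$ become temperature independent; the notebook itself iterates on the temperature-dependent mean heat capacities and enthalpies.

```python
R = 8.314   # J/(mol K)
cp = 30.0   # J/(mol K), placeholder mixture heat capacity (assumed constant)
kappa = 1.0 / (1.0 - R / cp)                                       # kappa = Cp/Cv (ideal gas)

def adiabatic_compression(T1, p1, p2, n_dot_kmol_h, eta_therm=0.72, eta_mech=1.0):
    """Reversible/actual outlet temperature and power of an adiabatic compressor."""
    T2_rev = T1 * (p2 / p1) ** ((kappa - 1.0) / kappa)             # eq. (1)
    n_dot = n_dot_kmol_h * 1000.0 / 3600.0                         # kmol/h -> mol/s
    W_rev = n_dot * cp * (T2_rev - T1)                             # eq. (2), W (constant cp)
    W = W_rev / (eta_therm * eta_mech)                             # eq. (3)
    T2 = T1 + W / (n_dot * cp)                                     # eq. (4) with constant cp
    return T2_rev, W_rev, T2, W

# molar flow of the compressor feed as reported in the results below (kmol/h)
print(adiabatic_compression(T1=623.15, p1=35.0, p2=180.0, n_dot_kmol_h=82385.6))
```

With this crude constant-cp assumption the sketch only reproduces the order of magnitude of the compressor results printed below, which are based on $\gamma \approx 1.37$ and temperature-dependent enthalpies.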
## B. References

[1] Myers, Andrea K.; Myers, Alan L.: Numerical solution of chemical equilibria with simultaneous reactions. The Journal of Chemical Physics 84 (1986), No. 10, pp. 5787-5795.
[2] Dennis, J. E. Jr.; Schnabel, Robert B.: Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Philadelphia: SIAM, 1996.
[3] Barin, Ihsan: Thermochemical Data of Pure Substances. Weinheim, New York: VCH, 1993.
[4] VDI e.V.: VDI-Wärmeatlas. Wiesbaden: Springer Berlin Heidelberg, 2013.
[5] Henley, Ernest J.; Seader, J. D.; Roper, D. Keith: Separation Process Principles. New York: Wiley, 2011.
[6] Gmehling, Jürgen; Kolbe, Bärbel; Kleiber, Michael; Rarey, Jürgen: Chemical Thermodynamics for Process Simulation. New York: John Wiley & Sons, 2012.
[7] Smith, Joseph Mauk; Van Ness, Hendrick C.; Abbott, Michael M.: Introduction to Chemical Engineering Thermodynamics. New York: McGraw-Hill, 2005.

## 10. Results
###Code
import warnings
warnings.filterwarnings('ignore')  # suppress warnings from the numerical routines

# Importing the module runs the complete flowsheet calculation and prints the
# stream tables of every unit operation and recycle iteration shown below.
import bsp_pat_ue_03_2
###Output
========================================
Dampf-Reformierung: Vorwärmer + Reaktor
========================================
Vormärmer (auf T= 1268.15 °K), Q: 801968,17827302974183 kW
Berücksichtigte (unabhängige) Reaktionen:
2.0 CO+ 2.0 H2<<==>>1.0 CO2+1.0 CH4 K(T)=7.39082e-05
1.0000000000000002 CO+ 3.0 H2<<==>>1.0 H2O+1.0000000000000002 CH4 K(T)=0.000123279
1.5 H2+ 0.5 N2<<==>>1.0 NH3 K(T)=0.000143712
2.0 CO+ 4.0 H2<<==>>2.0 CH4+1.0 O2 K(T)=3.46185e-23
Isothermisch betriebener Reaktor, Q: 735341,15632314386312 kW
Totale zu tauschende Energie, Q: 1537309,3345961733721 kW
n_{CO}=13625,70494775651423 kmol/h
n_{H2}=57728,619700805276807 kmol/h
n_{CO2}=5809,5057644187072583 kmol/h
n_{H2O}=41055,276079679781105 kmol/h
n_{CH4}=564,78928782477976256 kmol/h
n_{NH3}=57,683762576922099186 kmol/h
n_{AR}=150 kmol/h
n_{O2}=0,0037218631478026509285 kmol/h
n_{N2}=11671,158118711538918 kmol/h
n_T=130662,74138363669044 kmol/h
y_{CO}=0,10428148685286121877
y_{H2}=0,44181393325668288918
y_{CO2}=0,044461838951943576104
y_{H2O}=0,31420798036938529796
y_{CH4}=0,004322496848328870149
y_{NH3}=0,0004414706286282312972
y_{AR}=0,0011479936698985024855
y_{O2}=2,8484502226039715266e-08
y_{N2}=0,089322770937769066513
T: 1268.15 °K
p: 35 bar
H: -72402776.528519 J/kmol
Gesamtfehler: 2.30692595962236e-08
========================================
========================================
Wassergashift: Vorkühler + Reaktor
========================================
Vorkühler (auf T= 483.15 °K), Q: -985366,13835413742345 kW
Berücksichtigte Reaktionen:
1.0 H2+ 1.0 CO2<<==>>1.0 CO+1.0 H2O K(T)=0.00533179
Isothermisch betriebener Reaktor, Q: -148279,44339609821327 kW
Totale zu tauschende Energie, Q: -1133645,581750235986 kW
n_{CO}=262,43623308968358288 kmol/h
n_{H2}=71091,888415472101769 kmol/h
n_{CO2}=19172,774479085539497 kmol/h
n_{H2O}=27692,007365012952505 kmol/h
n_{CH4}=564,78928782477976256 kmol/h
n_{NH3}=57,683762576922099186 kmol/h
n_{AR}=150 kmol/h
n_{O2}=0,0037218631478026509285 kmol/h
n_{N2}=11671,158118711538918 kmol/h
n_T=130662,74138363669044 kmol/h
y_{CO}=0,0020085008955930978153
y_{H2}=0,54408691921395102575
y_{CO2}=0,14673482490921169186
y_{H2O}=0,21193499441211716139
y_{CH4}=0,004322496848328870149
y_{NH3}=0,0004414706286282312972
y_{AR}=0,0011479936698985024855
y_{O2}=2,8484502226039715266e-08
y_{N2}=0,089322770937769066513
T: 483.15 °K
p: 35 bar
H: -103636807.370703 J/kmol
Gesamtfehler: [[6.70198588e-09]]
========================================
========================================
Trocknung: Vorkühler + Abscheider
========================================
Vorkühler (auf T= 293.15 °K), Q: -502447,49886184721254 kW
Dampfphase:
n_{CO}=262,4360199768953521 kmol/h
n_{H2}=71091,71959557615628 kmol/h
n_{CO2}=19160,517151931053377 kmol/h
n_{H2O}=75,316800224054787805 kmol/h
n_{CH4}=564,78556888775688094 kmol/h
n_{NH3}=53,07076388363465469 kmol/h
n_{AR}=149,99738216209547659 kmol/h
n_{O2}=0,0037217897537182276745 kmol/h
n_{N2}=11671,148868191954534 kmol/h
n_T=103028,99587262334535 kmol/h
y_{CO}=0,0025472054517747875872
y_{H2}=0,69001662098761695763
y_{CO2}=0,18597208474857959692
y_{H2O}=0,00073102527678111793759
y_{CH4}=0,0054818118346760267082
y_{NH3}=0,00051510512583656769669
y_{AR}=0,0014558754153826650342
y_{O2}=3,612371179788978006e-08
y_{N2}=0,11328023503843330511
H2/N2 Verhältnis: 6.09123578136566
Stoechiometrisches Verhältnis: 3
T: 293.15 °K
p: 35 bar
H: -74225864.611222 J/kmol
========================================
Flüssigphase (Abwasser):
n_{CO}=0,00021311278821939787513 kmol/h
n_{H2}=0,16881989595010482574 kmol/h
n_{CO2}=12,25732715447689003 kmol/h
n_{H2O}=27616,690564788892516 kmol/h
n_{CH4}=0,0037189370229437244617 kmol/h
n_{NH3}=4,6129986932874551542 kmol/h
n_{AR}=0,0026178379045153571739 kmol/h
n_{O2}=7,3394084423167842412e-08 kmol/h
n_{N2}=0,0092505195843291965901 kmol/h
n_T=27633,745511013305077 kmol/h
x_{CO}=7,7120485940729072454e-09
x_{H2}=6,1091934092345341172e-06
x_{CO2}=0,00044356372716337525786
x_{H2O}=0,99938282175663828433
x_{CH4}=1,3457954953746073194e-07
x_{NH3}=0,0001669335302882097151
x_{AR}=9,4733372406747012738e-08
x_{O2}=2,6559586138315086848e-12
x_{N2}=3,3475446101022086774e-07
T: 293.15 °K
p: 35 bar
H: -242027292.604286 J/kmol
========================================
========================================
CO2-Abschneider:
========================================
Abtrennung-Faktor (CO2): 0.05 - Produkt
n_{CO}=262,4360199768953521 kmol/h
n_{H2}=71091,71959557615628 kmol/h
n_{CO2}=958,02585759655278252 kmol/h
n_{H2O}=75,316800224054787805 kmol/h
n_{CH4}=564,78556888775688094 kmol/h
n_{NH3}=53,07076388363465469 kmol/h
n_{AR}=149,99738216209547659 kmol/h
n_{O2}=0,0037217897537182276745 kmol/h
n_{N2}=11671,148868191954534 kmol/h
n_T=84826,504578288833727 kmol/h
y_{CO}=0,0030937974077982380132
y_{H2}=0,83808380351171429812
y_{CO2}=0,011293944768316641417
y_{H2O}=0,000887892299682615572
y_{CH4}=0,0066581261563890088359
y_{NH3}=0,00062563893381524526578
y_{AR}=0,0017682843694644821871
y_{O2}=4,3875316709334403173e-08
y_{N2}=0,13758846867750293419
T: 293.15 °K
p: 35 bar
H: -5673653.276627 J/kmol
========================================
Abtrennung-Faktor (CO2): 0.95 - Abfluss
n_{CO}=0 kmol/h
n_{H2}=0 kmol/h
n_{CO2}=18202,491294334500708 kmol/h
n_{H2O}=0 kmol/h
n_{CH4}=0 kmol/h
n_{NH3}=0 kmol/h
n_{AR}=0 kmol/h
n_{O2}=0 kmol/h
n_{N2}=0 kmol/h
n_T=18202,491294334500708 kmol/h
y_{CO}=0
y_{H2}=0
y_{CO2}=1
y_{H2O}=0
y_{CH4}=0
y_{NH3}=0
y_{AR}=0
y_{O2}=0
y_{N2}=0
T: 293.15 °K
p: 35 bar
H: -393690072.811983 J/kmol
========================================
========================================
Methanisierung: Aufheizer + Reaktor
========================================
Aufheizer (auf T= 623.15 °K), Q: 229031,84859929344384 kW
Berücksichtigte (unabhängige) Reaktionen:
0.75 CO2+ 0.25 CH4<<==>>1.0 CO+0.5 H2O K(T)=0.00409148
0.5 CO2+ 1.0 H2O<<==>>0.5 CH4+1.0 O2 K(T)=2.82336e-34
0.5 H2O+ 0.25 CH4<<==>>1.0 H2+0.25 CO2 K(T)=0.0848723
Isothermisch betriebener Reaktor, Q: -63789,184448484782479 kW
Totale zu tauschende Energie, Q: 165242,66415080873412 kW
n_{CO}=1,8986244685947894457e-07 kmol/h
n_{H2}=66472,300662783483858 kmol/h
n_{CO2}=1,3353687245398760922e-07 kmol/h
n_{H2O}=2253,8119785166272777 kmol/h
n_{CH4}=1785,2474461378058095 kmol/h
n_{NH3}=53,07076388363465469 kmol/h
n_{AR}=149,99738216209547659 kmol/h
n_{O2}=0 kmol/h
n_{N2}=11671,148868191954534 kmol/h
n_T=82385,577101999006118 kmol/h
y_{CO}=2,3045592874152741876e-12
y_{H2}=0,80684390400623418049
y_{CO2}=1,6208768227557572574e-12
y_{H2O}=0,0273568755332775429
y_{CH4}=0,021669417256464037352
y_{NH3}=0,00064417542184512955206
y_{AR}=0,0018206752618408976029
y_{O2}=0
y_{N2}=0,1416649525164127843
T: 623.15 °K
p: 35 bar
H: 1378850.756326 J/kmol
Gesamtfehler: 2.7021508537823955e-09
========================================
========================================
Verdichtung: Adiabater Verdichter + Abkühler
========================================
Verdichten (auf p= 180.0 bar), W: 338973880,47486966848 W
gamma=Cp/Cv=: 1.36884
T: 1098.23 °K
n_{CO}=1,8986244685947894457e-07 kmol/h
n_{H2}=66472,300662783483858 kmol/h
n_{CO2}=1,3353687245398760922e-07 kmol/h
n_{H2O}=2253,8119785166272777 kmol/h
n_{CH4}=1785,2474461378058095 kmol/h
n_{NH3}=53,07076388363465469 kmol/h
n_{AR}=149,99738216209547659 kmol/h
n_{O2}=0 kmol/h
n_{N2}=11671,148868191954534 kmol/h
n_T=82385,577101999006118 kmol/h
y_{CO}=2,3045592874152741876e-12
y_{H2}=0,80684390400623418049
y_{CO2}=1,6208768227557572574e-12
y_{H2O}=0,0273568755332775429
y_{CH4}=0,021669417256464037352
y_{NH3}=0,00064417542184512955206
y_{AR}=0,0018206752618408976029
y_{O2}=0
y_{N2}=0,1416649525164127843
T: 1098.23 °K
p: 180 bar
H: 16222273.157505 J/kmol
========================================
Abkühlen (zurück auf T= 623.15 °K), Q: -339689,97796940658009 kW
n_{CO}=1,8986244685947894457e-07 kmol/h
n_{H2}=66472,300662783483858 kmol/h
n_{CO2}=1,3353687245398760922e-07 kmol/h
n_{H2O}=2253,8119785166272777 kmol/h
n_{CH4}=1785,2474461378058095 kmol/h
n_{NH3}=53,07076388363465469 kmol/h
n_{AR}=149,99738216209547659 kmol/h
n_{O2}=0 kmol/h
n_{N2}=11671,148868191954534 kmol/h
n_T=82385,577101999006118 kmol/h
y_{CO}=2,3045592874152741876e-12
y_{H2}=0,80684390400623418049
y_{CO2}=1,6208768227557572574e-12
y_{H2O}=0,0273568755332775429
y_{CH4}=0,021669417256464037352
y_{NH3}=0,00064417542184512955206
y_{AR}=0,0018206752618408976029
y_{O2}=0
y_{N2}=0,1416649525164127843
T: 623.15 °K
p: 180 bar
H: 1378850.756326 J/kmol
========================================
========================================
==============1. Iteration==============
========================================
========================================
Mischung: Rücklauf + Zulauf + Vorwärmer
========================================
Rücklaufverhältnis = nr/nv = 0
Temperatur der Mischung: 573.15 °K
Vorwärmer (auf T= 573.15 °K), Q: -34311,300301955074247 kW
n_{CO}=1,8986244685947894457e-07 kmol/h
n_{H2}=66472,300662783483858 kmol/h
n_{CO2}=1,3353687245398760922e-07 kmol/h
n_{H2O}=2253,8119785166272777 kmol/h
n_{CH4}=1785,2474461378058095 kmol/h
n_{NH3}=53,07076388363465469 kmol/h
n_{AR}=149,99738216209547659 kmol/h
n_{O2}=0 kmol/h
n_{N2}=11671,148868191954534 kmol/h
n_T=82385,577101999006118 kmol/h
y_{CO}=2,3045592874152741876e-12
y_{H2}=0,80684390400623418049
y_{CO2}=1,6208768227557572574e-12
y_{H2O}=0,0273568755332775429
y_{CH4}=0,021669417256464037352
y_{NH3}=0,00064417542184512955206
y_{AR}=0,0018206752618408976029
y_{O2}=0
y_{N2}=0,1416649525164127843
T: 573.15 °K
p: 180 bar
H: -120449.065706 J/kmol
========================================
========================================
Adiabate Ammoniaksynthese: Reaktor
========================================
Berücksichtigte (unabhängige) Reaktionen:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Totale zu tauschende Energie, Q: 1,6783343421088323604e-08 kW (adiabat)
n_{CO}=4,9311163842284875506e-05 kmol/h
n_{H2}=52252,767884164211864 kmol/h
n_{CO2}=1,0461943279528275523e-05 kmol/h
n_{H2O}=2253,8119087385130115 kmol/h
n_{CH4}=1785,2473866880977766 kmol/h
n_{NH3}=9532,7594087481684255 kmol/h
n_{AR}=149,99738216209547659 kmol/h
n_{O2}=-3,2428589527428488686e-25 kmol/h
n_{N2}=6931,3045457596872438 kmol/h
n_T=72905,888576033888967 kmol/h
y_{CO}=6,7636736627738974931e-10
y_{H2}=0,71671532855222741532
y_{CO2}=1,4349929043958452098e-10
y_{H2O}=0,030913989977476265952
y_{CH4}=0,024487012250406288427
y_{NH3}=0,13075431347094013113
y_{AR}=0,0020574110691437843233
y_{O2}=-4,448006897770481439e-30
y_{N2}=0,095071943859939347932
T: 775.61 °K
p: 180 bar
H: -136110.621286 J/kmol
Gesamtfehler: 30048.000289537442
========================================
========================================
Abkühler + Produkt-Abscheider
========================================
Vorkühler (auf T= 294.15 °K), Q: -357380,27685884822858 kW
Dampfphase (zum Splitter):
n_{CO}=4,9284479085610095742e-05 kmol/h
n_{H2}=52228,872303327967529 kmol/h
n_{CO2}=9,9646139881232100315e-06 kmol/h
n_{H2O}=19,215588359022060416 kmol/h
n_{CH4}=1780,5473221543518321 kmol/h
n_{NH3}=5243,6659823905183657 kmol/h
n_{AR}=149,63162467389122412 kmol/h
n_{O2}=-3,2345033444283020014e-25 kmol/h
n_{N2}=6927,8116071542744976 kmol/h
n_T=66349,744487309115357 kmol/h
y_{CO}=7,4279832519021539602e-10
y_{H2}=0,78717518361298910445
y_{CO2}=1,5018315540451831676e-10
y_{H2O}=0,00028961058563351351272
y_{CH4}=0,026835782651182363973
y_{NH3}=0,079030688400109863623
y_{AR}=0,0022551951905627294279
y_{O2}=-4,874929616056327291e-30
y_{N2}=0,10441353859272159421
H2/N2 Verhältnis: 7.539014520745886
Stoechiometrisches Verhältnis: 3
T: 294.15 °K
p: 180 bar
H: -5828576.498723 J/kmol
========================================
Produkt-Flüssigkeit:
n_{CO}=2,6684756674789672953e-08 kmol/h
n_{H2}=23,895580836239915357 kmol/h
n_{CO2}=4,9732929140506644442e-07 kmol/h
n_{H2O}=2234,5963203794917717 kmol/h
n_{CH4}=4,700064533746269646 kmol/h
n_{NH3}=4289,0934263576509693 kmol/h
n_{AR}=0,36575748820425107999 kmol/h
n_{O2}=-8,3556083145471092821e-28 kmol/h
n_{N2}=3,4929386054126601024 kmol/h
n_T=6556,1440887247599676 kmol/h
x_{CO}=4,0701906995328833203e-12
x_{H2}=0,0036447613918655485249
x_{CO2}=7,5856980116088516111e-11
x_{H2O}=0,34084002605921082107
x_{CH4}=0,00071689463709937329555
x_{NH3}=0,65420975675904824431
x_{AR}=5,5788506708771968656e-05
x_{O2}=-1,274469902996682822e-31
x_{N2}=0,00053277331320848654757
T: 294.15 °K
p: 180 bar
H: -112671547.675879 J/kmol
========================================
========================================
Abtrennung: Splitter
========================================
Rücklaufverhältnis = nr/nv = 0.8
Rücklaufstrom:
n_{CO}=3,942758326848808066e-05 kmol/h
n_{H2}=41783,097842662376934 kmol/h
n_{CO2}=7,9716911904985693804e-06 kmol/h
n_{H2O}=15,372470687217649754 kmol/h
n_{CH4}=1424,4378577234815566 kmol/h
n_{NH3}=4194,9327859124150564 kmol/h
n_{AR}=119,70529973911298782 kmol/h
n_{O2}=-2,5876026755426421522e-25 kmol/h
n_{N2}=5542,2492857234201438 kmol/h
n_T=53079,795589847301017 kmol/h
y_{CO}=7,4279832524504837164e-10
y_{H2}=0,7871751836710978445
y_{CO2}=1,5018315541560476002e-10
y_{H2O}=0,00028961058565489235326
y_{CH4}=0,026835782653163362671
y_{NH3}=0,079030688405943849695
y_{AR}=0,002255195190729206503
y_{O2}=-4,874929616416191947e-30
y_{N2}=0,10441353860042933144
T: 294.15 °K
p: 180 bar
H: -5828576.498723 J/kmol
========================================
Abgas:
n_{CO}=9,8568958171220167767e-06 kmol/h
n_{H2}=10445,774460665590595 kmol/h
n_{CO2}=1,9929227976246414981e-06 kmol/h
n_{H2O}=3,8431176718044115503 kmol/h
n_{CH4}=356,10946443087027546 kmol/h
n_{NH3}=1048,7331964781035367 kmol/h
n_{AR}=29,92632493477823985 kmol/h
n_{O2}=-6,4690066888566019366e-26 kmol/h
n_{N2}=1385,5623214308545812 kmol/h
n_T=13269,948897461821616 kmol/h
y_{CO}=7,4279832524504826824e-10
y_{H2}=0,78717518367109773347
y_{CO2}=1,5018315541560473417e-10
y_{H2O}=0,00028961058565489235326
y_{CH4}=0,026835782653163362671
y_{NH3}=0,079030688405943863573
y_{AR}=0,002255195190729206503
y_{O2}=-4,8749296164161912464e-30
y_{N2}=0,10441353860042931756
T: 294.15 °K
p: 180 bar
H: -5828576.498723 J/kmol
========================================
========================================
==============1. Iteration==============
========================================
========================================
Mischung: Rücklauf + Zulauf + Vorwärmer
========================================
Rücklaufverhältnis = nr/nv = 0.8
Temperatur der Mischung: 493.383 °K
Vorwärmer (auf T= 573.15 °K), Q: 90632,993452262599021 kW
n_{CO}=3,9617445715347561616e-05 kmol/h
n_{H2}=108255,39850544586079 kmol/h
n_{CO2}=8,1052280629525574396e-06 kmol/h
n_{H2O}=2269,1844492038444514 kmol/h
n_{CH4}=3209,6853038612875935 kmol/h
n_{NH3}=4248,0035497960498105 kmol/h
n_{AR}=269,7026819012084502 kmol/h
n_{O2}=0 kmol/h
n_{N2}=17213,398153915375588 kmol/h
n_T=135465,37269184630713 kmol/h
y_{CO}=2,924544105117435659e-10
y_{H2}=0,79913705144194224772
y_{CO2}=5,9832471589548967492e-11
y_{H2O}=0,016751029463195264008
y_{CH4}=0,023693769411926468571
y_{NH3}=0,031358593457379811686
y_{AR}=0,0019909344841557573454
y_{O2}=0
y_{N2}=0,1270686213891135119
T: 573.15 °K
p: 180 bar
H: 963320.293597 J/kmol
========================================
========================================
Adiabate Ammoniaksynthese: Reaktor
========================================
Berücksichtigte (unabhängige) Reaktionen:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Totale zu tauschende Energie, Q: -1,2842396895090740547e-07 kW (adiabat)
n_{CO}=2,1150315856807494816e-05 kmol/h
n_{H2}=86838,289606724589248 kmol/h
n_{CO2}=3,1784033967673741207e-06 kmol/h
n_{H2O}=2269,1844775246236168 kmol/h
n_{CH4}=3209,6853272552421004 kmol/h
n_{NH3}=18526,076098871104477 kmol/h
n_{AR}=269,7026819012084502 kmol/h
n_{O2}=-4,4191342478973300471e-19 kmol/h
n_{N2}=10074,361879377849618 kmol/h
n_T=121187,30009598335891 kmol/h
y_{CO}=1,745258442102094919e-10
y_{H2}=0,71656262279914240132
y_{CO2}=2,6227198677171612858e-11
y_{H2O}=0,018724606255996902038
y_{CH4}=0,026485327461813996247
y_{NH3}=0,15287143194210936481
y_{AR}=0,0022255028512690460014
y_{O2}=-3,6465324703143528155e-24
y_{N2}=0,083130508488915122456
T: 755.148 °K
p: 180 bar
H: 1076816.980739 J/kmol
Gesamtfehler: 32.003339510910074
========================================
========================================
Abkühler + Produkt-Abscheider
========================================
Vorkühler (auf T= 294.15 °K), Q: -573232,1886202635942 kW
Dampfphase (zum Splitter):
n_{CO}=2,11172823546106815e-05 kmol/h
n_{H2}=86741,255650312537909 kmol/h
n_{CO2}=2,9070683801424558513e-06 kmol/h
n_{H2O}=27,978542910683611211 kmol/h
n_{CH4}=3188,0934575651326668 kmol/h
n_{NH3}=9045,9538800427599199 kmol/h
n_{AR}=268,24503585462360888 kmol/h
n_{O2}=-4,3941743367455694422e-19 kmol/h
n_{N2}=10059,771749884566816 kmol/h
n_T=109331,29834059464338 kmol/h
y_{CO}=1,9314947023109755568e-10
y_{H2}=0,79337991009908737094
y_{CO2}=2,6589534965776237569e-11
y_{H2O}=0,00025590607021725353312
y_{CH4}=0,029159934125779506803
y_{NH3}=0,082738923045356627117
y_{AR}=0,0024535063602125857171
y_{O2}=-4,0191367004200782554e-24
y_{N2}=0,092011820058452167825
H2/N2 Verhältnis: 8.622586854548453
Stoechiometrisches Verhältnis: 3
T: 294.15 °K
p: 180 bar
H: -6164944.684297 J/kmol
========================================
Produkt-Flüssigkeit:
n_{CO}=3,3033502196814130419e-08 kmol/h
n_{H2}=97,033956412033319339 kmol/h
n_{CO2}=2,7133501662491789884e-07 kmol/h
n_{H2O}=2241,2059346139403715 kmol/h
n_{CH4}=21,591869690109440683 kmol/h
n_{NH3}=9480,1222188283427386 kmol/h
n_{AR}=1,457646046584895716 kmol/h
n_{O2}=-2,4959911151761037466e-21 kmol/h
n_{N2}=14,590129493283745532 kmol/h
n_T=11856,001755388662787 kmol/h
x_{CO}=2,7862261565746170308e-12
x_{H2}=0,0081843743306515923802
x_{CO2}=2,2885878585040438843e-11
x_{H2O}=0,18903556032559770683
x_{CH4}=0,0018211763240088914197
x_{NH3}=0,79960533207318129634
x_{AR}=0,00012294583595238983958
x_{O2}=-2,1052553526559514924e-25
x_{N2}=0,0012306112800209996049
T: 294.15 °K
p: 180 bar
H: -82724435.690720 J/kmol
========================================
========================================
Abtrennung: Splitter
========================================
Rücklaufverhältnis = nr/nv = 0.8
Rücklaufstrom:
n_{CO}=1,689382588368854791e-05 kmol/h
n_{H2}=69393,004520250033238 kmol/h
n_{CO2}=2,3256547041139645116e-06 kmol/h
n_{H2O}=22,382834328546891811 kmol/h
n_{CH4}=2550,4747660521065882 kmol/h
n_{NH3}=7236,7631040342075721 kmol/h
n_{AR}=214,59602868369887574 kmol/h
n_{O2}=-3,5153394693964557463e-19 kmol/h
n_{N2}=8047,8173999076534528 kmol/h
n_T=87465,038672475726344 kmol/h
y_{CO}=1,9314947023518377621e-10
y_{H2}=0,79337991011587172263
y_{CO2}=2,6589534966338752698e-11
y_{H2O}=0,00025590607022266738824
y_{CH4}=0,029159934126396405696
y_{NH3}=0,082738923047107004738
y_{AR}=0,0024535063602644912456
y_{O2}=-4,0191367005051061699e-24
y_{N2}=0,092011820060398735732
T: 294.15 °K
p: 180 bar
H: -6164944.684297 J/kmol
========================================
Abgas:
n_{CO}=4,2234564709221361306e-06 kmol/h
n_{H2}=17348,251130062504672 kmol/h
n_{CO2}=5,8141367602849102203e-07 kmol/h
n_{H2O}=5,5957085821367211764 kmol/h
n_{CH4}=637,61869151302641967 kmol/h
n_{NH3}=1809,1907760085514383 kmol/h
n_{AR}=53,649007170924704724 kmol/h
n_{O2}=-8,7883486734911369585e-20 kmol/h
n_{N2}=2011,9543499769129085 kmol/h
n_T=21866,25966811892431 kmol/h
y_{CO}=1,9314947023518380205e-10
y_{H2}=0,79337991011587183365
y_{CO2}=2,6589534966338755929e-11
y_{H2O}=0,00025590607022266738824
y_{CH4}=0,029159934126396405696
y_{NH3}=0,082738923047107018616
y_{AR}=0,0024535063602644912456
y_{O2}=-4,0191367005051061699e-24
y_{N2}=0,09201182006039874961
T: 294.15 °K
p: 180 bar
H: -6164944.684297 J/kmol
========================================
========================================
==============2. Iteration==============
========================================
========================================
Mischung: Rücklauf + Zulauf + Vorwärmer
========================================
Rücklaufverhältnis = nr/nv = 0.8
Temperatur der Mischung: 452.805 °K
Vorwärmer (auf T= 573.15 °K), Q: 172022,3876221824612 kW
n_{CO}=1,7083688330548025479e-05 kmol/h
n_{H2}=135865,3051830335171 kmol/h
n_{CO2}=2,4591915765679521473e-06 kmol/h
n_{H2O}=2276,1948128451740558 kmol/h
n_{CH4}=4335,7222121899121703 kmol/h
n_{NH3}=7289,8338679178423263 kmol/h
n_{AR}=364,59341084579438075 kmol/h
n_{O2}=-3,5153394693964557463e-19 kmol/h
n_{N2}=19718,966268099604349 kmol/h
n_T=169850,61577447471791 kmol/h
y_{CO}=1,0058066762166767386e-10
y_{H2}=0,79991058356499333826
y_{CO2}=1,4478555555154611035e-11
y_{H2O}=0,013401157260845460162
y_{CH4}=0,025526679384823801333
y_{NH3}=0,042919090017296039619
y_{AR}=0,002146553365045767315
y_{O2}=-2,0696654253312067846e-24
y_{N2}=0,11609593629193652731
T: 573.15 °K
p: 180 bar
H: 1140183.593852 J/kmol
========================================
========================================
Adiabate Ammoniaksynthese: Reaktor
========================================
Berücksichtigte (unabhängige) Reaktionen:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Totale zu tauschende Energie, Q: -9,9623998006184900849e-08 kW (adiabat)
n_{CO}=1,4596762642403195685e-05 kmol/h
n_{H2}=110131,96166707950761 kmol/h
n_{CO2}=1,8634573573710102045e-06 kmol/h
n_{H2O}=2276,194816523568079 kmol/h
n_{CH4}=4335,7222152725717024 kmol/h
n_{NH3}=24445,396205324708717 kmol/h
n_{AR}=364,59341084579438075 kmol/h
n_{O2}=-2,5895183035691419758e-18 kmol/h
n_{N2}=11141,185099396172518 kmol/h
n_T=152695,05343090253882 kmol/h
y_{CO}=9,5594207634293229889e-11
y_{H2}=0,72125428553529635778
y_{CO2}=1,2203783393770910208e-11
y_{H2O}=0,014906801270766706705
y_{CH4}=0,028394647487612098558
y_{NH3}=0,16009291497046904129
y_{AR}=0,0023877224746561937982
y_{O2}=-1,6958756982530209638e-23
y_{N2}=0,072963628153401668963
T: 746.185 °K
p: 180 bar
H: 1268285.259804 J/kmol
Gesamtfehler: 72564624.0506876
========================================
========================================
Abkühler + Produkt-Abscheider
========================================
Vorkühler (auf T= 294.15 °K), Q: -710375,69717367331032 kW
Dampfphase (zum Splitter):
n_{CO}=1,4566747799516768105e-05 kmol/h
n_{H2}=109976,06348856045224 kmol/h
n_{CO2}=1,6808641325580972884e-06 kmol/h
n_{H2O}=31,60401029358932945 kmol/h
n_{CH4}=4298,7067746588663795 kmol/h
n_{NH3}=11634,96973692753636 kmol/h
n_{AR}=362,17380333479661658 kmol/h
n_{O2}=-2,5716104901264467842e-18 kmol/h
n_{N2}=11119,956219679806964 kmol/h
n_T=137423,47404970266507 kmol/h
y_{CO}=1,059989779775381823e-10
y_{H2}=0,80027130916394229043
y_{CO2}=1,2231273762848431249e-11
y_{H2O}=0,00022997534090781721365
y_{CH4}=0,031280731362293180686
y_{NH3}=0,084665082274220329617
y_{AR}=0,0026354580673800015629
y_{O2}=-1,8713036530965553738e-23
y_{N2}=0,080917443664407395776
H2/N2 Verhältnis: 9.889972704562245
Stoechiometrisches Verhältnis: 3
T: 294.15 °K
p: 180 bar
H: -6406047.494243 J/kmol
========================================
Produkt-Flüssigkeit:
n_{CO}=3,0014842886429419445e-08 kmol/h
n_{H2}=155,89817851907213253 kmol/h
n_{CO2}=1,8259322481291310143e-07 kmol/h
n_{H2O}=2244,590806229977261 kmol/h
n_{CH4}=37,015440613705386852 kmol/h
n_{NH3}=12810,426468397168719 kmol/h
n_{AR}=2,4196075109977024375 kmol/h
n_{O2}=-1,7907813442695290946e-20 kmol/h
n_{N2}=21,228879716363966423 kmol/h
n_T=15271,579381199891941 kmol/h
x_{CO}=1,9654052891023917341e-12
x_{H2}=0,010208386090248268777
x_{CO2}=1,1956407406810613036e-11
x_{H2O}=0,14697830200635769726
x_{CH4}=0,0024238122130409217245
x_{NH3}=0,83884097051291350855
x_{AR}=0,00015843859045541969569
x_{O2}=-1,1726235379511322319e-24
x_{N2}=0,0013900906506202157648
T: 294.15 °K
p: 180 bar
H: -74401760.839235 J/kmol
========================================
========================================
Abtrennung: Splitter
========================================
Rücklaufverhältnis = nr/nv = 0.8
Rücklaufstrom:
n_{CO}=1,1653398239613415501e-05 kmol/h
n_{H2}=87980,850790848373435 kmol/h
n_{CO2}=1,3446913060464780001e-06 kmol/h
n_{H2O}=25,283208234871466402 kmol/h
n_{CH4}=3438,9654197270929217 kmol/h
n_{NH3}=9307,9757895420298155 kmol/h
n_{AR}=289,7390426678372819 kmol/h
n_{O2}=-2,0572883921011577355e-18 kmol/h
n_{N2}=8895,9649757438455708 kmol/h
n_T=109938,77923976213788 kmol/h
y_{CO}=1,0599897797845173867e-10
y_{H2}=0,80027130917083966199
y_{CO2}=1,2231273762953848309e-11
y_{H2O}=0,00022997534090979929785
y_{CH4}=0,031280731362562777531
y_{NH3}=0,084665082274950037577
y_{AR}=0,0026354580674027151647
y_{O2}=-1,871303653112683744e-23
y_{N2}=0,080917443665104796247
T: 294.15 °K
p: 180 bar
H: -6406047.494243 J/kmol
========================================
Abgas:
n_{CO}=2,9133495599033534516e-06 kmol/h
n_{H2}=21995,212697712086083 kmol/h
n_{CO2}=3,3617282651161939414e-07 kmol/h
n_{H2O}=6,3208020587178648242 kmol/h
n_{CH4}=859,74135493177300305 kmol/h
n_{NH3}=2326,9939473855065444 kmol/h
n_{AR}=72,434760666959306263 kmol/h
n_{O2}=-5,1432209802528924127e-19 kmol/h
n_{N2}=2223,991243935960938 kmol/h
n_T=27484,694809940527193 kmol/h
y_{CO}=1,0599897797845175159e-10
y_{H2}=0,80027130917083955097
y_{CO2}=1,2231273762953848309e-11
y_{H2O}=0,00022997534090979929785
y_{CH4}=0,031280731362562777531
y_{NH3}=0,0846650822749500237
y_{AR}=0,0026354580674027155984
y_{O2}=-1,8713036531126834502e-23
y_{N2}=0,080917443665104796247
T: 294.15 °K
p: 180 bar
H: -6406047.494243 J/kmol
========================================
========================================
==============3. Iteration==============
========================================
========================================
Mischung: Rücklauf + Zulauf + Vorwärmer
========================================
Rücklaufverhältnis = nr/nv = 0.8
Temperatur der Mischung: 434.152 °K
Vorwärmer (auf T= 573.15 °K), Q: 225422,02561355952639 kW
n_{CO}=1,1843260686472893069e-05 kmol/h
n_{H2}=154453,15145363184274 kmol/h
n_{CO2}=1,4782281785004656358e-06 kmol/h
n_{H2O}=2279,0951867514982041 kmol/h
n_{CH4}=5224,2128658648989585 kmol/h
n_{NH3}=9361,0465534256654792 kmol/h
n_{AR}=439,73642482993273006 kmol/h
n_{O2}=-2,0572883921011577355e-18 kmol/h
n_{N2}=20567,113843935800105 kmol/h
n_T=192324,35634176110034 kmol/h
y_{CO}=6,1579619512296064652e-11
y_{H2}=0,8030867976969490174
y_{CO2}=7,6861205029780435254e-12
y_{H2O}=0,011850268110095932977
y_{CH4}=0,02716355309975119689
y_{NH3}=0,048673224398011506742
y_{AR}=0,0022864312830379084421
y_{O2}=-1,0696972714393742298e-23
y_{N2}=0,10693972534288875842
T: 573.15 °K
p: 180 bar
H: 1148287.561898 J/kmol
========================================
========================================
Adiabate Ammoniaksynthese: Reaktor
========================================
Berücksichtigte (unabhängige) Reaktionen:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Totale zu tauschende Energie, Q: 1,7075008816189236689e-08 kW (adiabat)
n_{CO}=1,1415449827852350214e-05 kmol/h
n_{H2}=126168,53123252748628 kmol/h
n_{CO2}=1,3356175044593581862e-06 kmol/h
n_{H2O}=2279,0951874645306816 kmol/h
n_{CH4}=5224,2128664353203931 kmol/h
n_{NH3}=28217,460032925988344 kmol/h
n_{AR}=439,73642482993273006 kmol/h
n_{O2}=3,8057875042321502979e-18 kmol/h
n_{N2}=11138,907104185636854 kmol/h
n_T=173467,94286111995461 kmol/h
y_{CO}=6,5807258906573107749e-11
y_{H2}=0,72733053238279987696
y_{CO2}=7,6995062167115565759e-12
y_{H2O}=0,013138422868651848663
y_{CH4}=0,030116301492189096606
y_{NH3}=0,16266671275116895146
y_{AR}=0,0025349722696716924485
y_{O2}=2,1939428354662041352e-23
y_{N2}=0,064213058162011810159
T: 740.361 °K
p: 180 bar
H: 1273109.386061 J/kmol
Gesamtfehler: 16.000118169647454
========================================
========================================
Abkühler + Produkt-Abscheider
========================================
Vorkühler (auf T= 294.15 °K), Q: -797231,60372038092464 kW
Dampfphase (zum Splitter):
n_{CO}=1,1389328897908098577e-05 kmol/h
n_{H2}=125972,87193634154391 kmol/h
n_{CO2}=1,1981602538044159085e-06 kmol/h
n_{H2O}=33,570817361217230257 kmol/h
n_{CH4}=5175,3562652183400132 kmol/h
n_{NH3}=13349,187393695385254 kmol/h
n_{AR}=436,58221488704089097 kmol/h
n_{O2}=3,7773750469474539165e-18 kmol/h
n_{N2}=11115,294096451414589 kmol/h
n_T=156082,86273654244724 kmol/h
y_{CO}=7,2969759127505268517e-11
y_{H2}=0,80708970688196235432
y_{CO2}=7,6764369437357412387e-12
y_{H2O}=0,00021508330108997261612
y_{CH4}=0,033157748227059302693
y_{NH3}=0,085526284945972072538
y_{AR}=0,0027971181924082827962
y_{O2}=2,420108768310579895e-23
y_{N2}=0,071214058362860502283
H2/N2 Verhältnis: 11.333291844842748
Stoechiometrisches Verhältnis: 3
T: 294.15 °K
p: 180 bar
H: -6582610.063677 J/kmol
========================================
Produkt-Flüssigkeit:
n_{CO}=2,6120929944253579041e-08 kmol/h
n_{H2}=195,65929618595151851 kmol/h
n_{CO2}=1,3745725065494233064e-07 kmol/h
n_{H2O}=2245,5243701033132311 kmol/h
n_{CH4}=48,856601216979498759 kmol/h
n_{NH3}=14868,272639230604909 kmol/h
n_{AR}=3,1542099428918048964 kmol/h
n_{O2}=2,8412457284696194868e-20 kmol/h
n_{N2}=23,613007734223327105 kmol/h
n_T=17385,080124577543756 kmol/h
x_{CO}=1,5024912027412792547e-12
x_{H2}=0,011254437414033001408
x_{CO2}=7,906621636473879446e-12
x_{H2O}=0,12916387811696614096
x_{CH4}=0,0028102603422241294318
x_{NH3}=0,85523175813720853089
x_{AR}=0,00018143200494424182754
x_{O2}=1,6343011986795590085e-24
x_{N2}=0,0013582340470515031489
T: 294.15 °K
p: 180 bar
H: -70875803.514178 J/kmol
========================================
========================================
Abtrennung: Splitter
========================================
Rücklaufverhältnis = nr/nv = 0.8
Rücklaufstrom:
n_{CO}=9,1114631183264788617e-06 kmol/h
n_{H2}=100778,29754907323513 kmol/h
n_{CO2}=9,5852820304353276918e-07 kmol/h
n_{H2O}=26,856653888973784916 kmol/h
n_{CH4}=4140,2850121746723744 kmol/h
n_{NH3}=10679,349914956308567 kmol/h
n_{AR}=349,26577190963274688 kmol/h
n_{O2}=3,0219000375579632102e-18 kmol/h
n_{N2}=8892,2352771611331264 kmol/h
n_T=124866,29018923395779 kmol/h
y_{CO}=7,2969759128089115858e-11
y_{H2}=0,80708970688842007757
y_{CO2}=7,6764369437971626304e-12
y_{H2O}=0,00021508330109169354312
y_{CH4}=0,033157748227324604362
y_{NH3}=0,085526284946656386254
y_{AR}=0,0027971181924306628974
y_{O2}=2,4201087683299435196e-23
y_{N2}=0,071214058363430310372
T: 294.15 °K
p: 180 bar
H: -6582610.063677 J/kmol
========================================
Abgas:
n_{CO}=2,2778657795816192919e-06 kmol/h
n_{H2}=25194,574387268301507 kmol/h
n_{CO2}=2,3963205076088308642e-07 kmol/h
n_{H2O}=6,7141634722434444527 kmol/h
n_{CH4}=1035,0712530436678662 kmol/h
n_{NH3}=2669,8374787390766869 kmol/h
n_{AR}=87,31644297740817251 kmol/h
n_{O2}=7,5547500938949060996e-19 kmol/h
n_{N2}=2223,0588192902828268 kmol/h
n_T=31216,572547308478534 kmol/h
y_{CO}=7,2969759128089128783e-11
y_{H2}=0,80708970688842007757
y_{CO2}=7,6764369437971626304e-12
y_{H2O}=0,00021508330109169354312
y_{CH4}=0,033157748227324611301
y_{NH3}=0,085526284946656400132
y_{AR}=0,0027971181924306633311
y_{O2}=2,4201087683299438134e-23
y_{N2}=0,07121405836343032425
T: 294.15 °K
p: 180 bar
H: -6582610.063677 J/kmol
========================================
========================================
==============4. Iteration==============
========================================
========================================
Mischung: Rücklauf + Zulauf + Vorwärmer
========================================
Rücklaufverhältnis = nr/nv = 0.8
Temperatur der Mischung: 424.002 °K
Vorwärmer (auf T= 573.15 °K), Q: 260986,5522484894318 kW
n_{CO}=9,3013255651859581239e-06 kmol/h
n_{H2}=167250,59821185673354 kmol/h
n_{CO2}=1,0920650754975201931e-06 kmol/h
n_{H2O}=2280,6686324056008743 kmol/h
n_{CH4}=5925,5324583124775017 kmol/h
n_{NH3}=10732,42067883994423 kmol/h
n_{AR}=499,26315407172825189 kmol/h
n_{O2}=3,0219000375579632102e-18 kmol/h
n_{N2}=20563,384145353087661 kmol/h
n_T=207251,86729123297846 kmol/h
y_{CO}=4,4879332991078030407e-11
y_{H2}=0,80699199673233368291
y_{CO2}=5,2692653136048058756e-12
y_{H2O}=0,011004333337082923897
y_{CH4}=0,02859097259657421633
y_{NH3}=0,051784434172352275527
y_{AR}=0,0024089681825165765519
y_{O2}=1,4580809702966634234e-23
y_{N2}=0,099219294928991674798
T: 573.15 °K
p: 180 bar
H: 1115564.882480 J/kmol
========================================
========================================
Adiabate Ammoniaksynthese: Reaktor
========================================
Berücksichtigte (unabhängige) Reaktionen:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Totale zu tauschende Energie, Q: -1,3648986816406249735e-07 kW (adiabat)
n_{CO}=9,4259879808717291053e-06 kmol/h
n_{H2}=137487,35083117589238 kmol/h
n_{CO2}=1,0503081890168259733e-06 kmol/h
n_{H2O}=2280,6686323644521508 kmol/h
n_{CH4}=5925,5324582295725122 kmol/h
n_{NH3}=30574,58559943180444 kmol/h
n_{AR}=499,26315407172825189 kmol/h
n_{O2}=-1,2301734521638306625e-18 kmol/h
n_{N2}=10642,301685057156647 kmol/h
n_T=187409,70237080688821 kmol/h
y_{CO}=5,0296157891663297785e-11
y_{H2}=0,73361917281713007188
y_{CO2}=5,6043426553161967422e-12
y_{H2O}=0,012169426681292866421
y_{CH4}=0,031618066638328973239
y_{NH3}=0,16314302414790268769
y_{AR}=0,0026640197799572369261
y_{O2}=-6,564086259151205139e-24
y_{N2}=0,056786289879487708565
T: 735.959 °K
p: 180 bar
H: 1233676.282781 J/kmol
Gesamtfehler: 16.00754316297337
========================================
========================================
Abkühler + Produkt-Abscheider
========================================
Vorkühler (auf T= 294.15 °K), Q: -852278,56268348859157 kW
Dampfphase (zum Splitter):
n_{CO}=9,4035098746480817169e-06 kmol/h
n_{H2}=137266,83372996724211 kmol/h
n_{CO2}=9,4061983451885722642e-07 kmol/h
n_{H2O}=34,892867303371929211 kmol/h
n_{CH4}=5868,2079063213486734 kmol/h
n_{NH3}=14497,005679866177161 kmol/h
n_{AR}=495,58266714414287435 kmol/h
n_{O2}=-1,2207402592426677816e-18 kmol/h
n_{N2}=10618,793018300790209 kmol/h
n_T=168781,31587924718042 kmol/h
y_{CO}=5,5714163772041373182e-11
y_{H2}=0,81328216345252424269
y_{CO2}=5,5730092493336489365e-12
y_{H2O}=0,00020673418216503173291
y_{CH4}=0,034768113257647975667
y_{NH3}=0,085892242303037436013
y_{AR}=0,0029362412809596969386
y_{O2}=-7,232674143293282461e-24
y_{N2}=0,062914505452695182464
H2/N2 Verhältnis: 12.926783062198961
Stoechiometrisches Verhältnis: 3
T: 294.15 °K
p: 180 bar
H: -6718019.680016 J/kmol
========================================
Produkt-Flüssigkeit:
n_{CO}=2,2478106223647160044e-08 kmol/h
n_{H2}=220,51710120865763542 kmol/h
n_{CO2}=1,0968835449796886603e-07 kmol/h
n_{H2O}=2245,7757650610788005 kmol/h
n_{CH4}=57,324551908223369878 kmol/h
n_{NH3}=16077,57991956564365 kmol/h
n_{AR}=3,6804869275853624444 kmol/h
n_{O2}=-9,4331929211625844098e-21 kmol/h
n_{N2}=23,508666756364807071 kmol/h
n_T=18628,386491559718706 kmol/h
x_{CO}=1,2066587858161427348e-12
x_{H2}=0,0118376919722982029
x_{CO2}=5,8882369955814897974e-12
x_{H2O}=0,1205566443597057974
x_{CH4}=0,0030772687660966116865
x_{NH3}=0,86306884003403938621
x_{AR}=0,00019757411247484329801
x_{O2}=-5,0638808284680300864e-25
x_{N2}=0,0012619808360256441152
T: 294.15 °K
p: 180 bar
H: -69174429.007357 J/kmol
========================================
========================================
Abtrennung: Splitter
========================================
Rücklaufverhältnis = nr/nv = 0.8
Rücklaufstrom:
n_{CO}=7,5228078997184662206e-06 kmol/h
n_{H2}=109813,46698397380533 kmol/h
n_{CO2}=7,5249586761508582349e-07 kmol/h
n_{H2O}=27,914293842697546211 kmol/h
n_{CH4}=4694,5663250570787568 kmol/h
n_{NH3}=11597,604543892943184 kmol/h
n_{AR}=396,46613371531429948 kmol/h
n_{O2}=-9,765922073941342253e-19 kmol/h
n_{N2}=8495,0344146406332584 kmol/h
n_T=135025,05270339775598 kmol/h
y_{CO}=5,5714163772580869425e-11
y_{H2}=0,81328216346039960971
y_{CO2}=5,5730092493876152013e-12
y_{H2O}=0,0002067341821670336038
y_{CH4}=0,03476811325798464386
y_{NH3}=0,085892242303869159592
y_{AR}=0,0029362412809881294901
y_{O2}=-7,2326741433633184144e-24
y_{N2}=0,062914505453304417348
T: 294.15 °K
p: 180 bar
H: -6718019.680016 J/kmol
========================================
Abgas:
n_{CO}=1,8807019749296159199e-06 kmol/h
n_{H2}=27453,366745993444056 kmol/h
n_{CO2}=1,8812396690377140293e-07 kmol/h
n_{H2O}=6,9785734606743847763 kmol/h
n_{CH4}=1173,6415812642694618 kmol/h
n_{NH3}=2899,4011359732353412 kmol/h
n_{AR}=99,116533428828546448 kmol/h
n_{O2}=-2,4414805184853350818e-19 kmol/h
n_{N2}=2123,7586036601574051 kmol/h
n_T=33756,263175849431718 kmol/h
y_{CO}=5,5714163772580869425e-11
y_{H2}=0,81328216346039960971
y_{CO2}=5,5730092493876143935e-12
y_{H2O}=0,0002067341821670336038
y_{CH4}=0,034768113257984650799
y_{NH3}=0,085892242303869159592
y_{AR}=0,0029362412809881294901
y_{O2}=-7,2326741433633184144e-24
y_{N2}=0,062914505453304403471
T: 294.15 °K
p: 180 bar
H: -6718019.680016 J/kmol
========================================
========================================
==============5. Iteration==============
========================================
========================================
Mischung: Rücklauf + Zulauf + Vorwärmer
========================================
Rücklaufverhältnis = nr/nv = 0.8
Temperatur der Mischung: 417.892 °K
Vorwärmer (auf T= 573.15 °K), Q: 285245,74253748107003 kW
n_{CO}=7,7126703465779446357e-06 kmol/h
n_{H2}=176285,76764675727463 kmol/h
n_{CO2}=8,8603274006907345918e-07 kmol/h
n_{H2O}=2281,7262723593244118 kmol/h
n_{CH4}=6479,8137711948847937 kmol/h
n_{NH3}=11650,675307776578848 kmol/h
n_{AR}=546,4635158774096908 kmol/h
n_{O2}=-9,765922073941342253e-19 kmol/h
n_{N2}=20166,183282832585974 kmol/h
n_T=217410,62980539671844 kmol/h
y_{CO}=3,5475129957912000018e-11
y_{H2}=0,81084244962884233221
y_{CO2}=4,0753883140955775828e-12
y_{H2O}=0,010495007877037510965
y_{CH4}=0,029804493814285609038
y_{NH3}=0,05358834256726567602
y_{AR}=0,0025135087293870900238
y_{O2}=-4,4919248349001036526e-24
y_{N2}=0,092756197343631482943
T: 573.15 °K
p: 180 bar
H: 1073457.757262 J/kmol
========================================
========================================
Adiabate Ammoniaksynthese: Reaktor
========================================
Berücksichtigte (unabhängige) Reaktionen:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Totale zu tauschende Energie, Q: 8,7205039130316841028e-08 kW (adiabat)
n_{CO}=8,0137631826926585416e-06 kmol/h
n_{H2}=145694,88183044316247 kmol/h
n_{CO2}=8,6890627843357590429e-07 kmol/h
n_{H2O}=2281,7262720924850328 kmol/h
n_{CH4}=6479,8137709109178104 kmol/h
n_{NH3}=32044,599185875824332 kmol/h
n_{AR}=546,4635158774096908 kmol/h
n_{O2}=-2,3259542049764365434e-18 kmol/h
n_{N2}=9969,2213437829614122 kmol/h
n_T=197016,70592786543421 kmol/h
y_{CO}=4,0675551572904535411e-11
y_{H2}=0,73950521680017866633
y_{CO2}=4,4103177664117084943e-12
y_{H2O}=0,011581384742712644981
y_{CH4}=0,032889666591438188048
y_{NH3}=0,16264914711145586623
y_{AR}=0,0027736912628997497168
y_{O2}=-1,180587297925917054e-23
y_{N2}=0,050600893446229040784
T: 732.35 °K
p: 180 bar
H: 1184575.318001 J/kmol
Gesamtfehler: 16.003079270412798
========================================
========================================
Abkühler + Produkt-Abscheider
========================================
Vorkühler (auf T= 294.15 °K), Q: -887471,69218424276914 kW
Dampfphase (zum Splitter):
n_{CO}=7,9944344313467161274e-06 kmol/h
n_{H2}=145459,48058334793313 kmol/h
n_{CO2}=7,7824495928301980864e-07 kmol/h
n_{H2O}=35,902684810546439564 kmol/h
n_{CH4}=6416,6363819221378435 kmol/h
n_{NH3}=15284,905929073189327 kmol/h
n_{AR}=542,41755034371090005 kmol/h
n_{O2}=-2,3080459559910735086e-18 kmol/h
n_{N2}=9946,9495983218330366 kmol/h
n_T=177686,29273659206228 kmol/h
y_{CO}=4,4991846631255062298e-11
y_{H2}=0,818630848457099769
y_{CO2}=4,3798817978059898009e-12
y_{H2O}=0,0002020565810525114356
y_{CH4}=0,036112163088357379648
y_{NH3}=0,086021862989771866181
y_{AR}=0,0030526696346991279325
y_{O2}=-1,2989442913267568212e-23
y_{N2}=0,055980399191432266004
H2/N2 Verhältnis: 14.623526453566095
Stoechiometrisches Verhältnis: 3
T: 294.15 °K
p: 180 bar
H: -6823508.199507 J/kmol
========================================
Produkt-Flüssigkeit:
n_{CO}=1,9328751345942023852e-08 kmol/h
n_{H2}=235,40124709522913804 kmol/h
n_{CO2}=9,0661319150556347108e-08 kmol/h
n_{H2O}=2245,8235872819386714 kmol/h
n_{CH4}=63,177388988780435852 kmol/h
n_{NH3}=16759,693256802631367 kmol/h
n_{AR}=4,0459655336988644692 kmol/h
n_{O2}=-1,790824898536326352e-20 kmol/h
n_{N2}=22,27174546112967235 kmol/h
n_T=19330,413191273397388 kmol/h
x_{CO}=9,9991402957322677675e-13
x_{H2}=0,012177765926869895188
x_{CO2}=4,6900869764300326418e-12
x_{H2O}=0,11618083717245096531
x_{CH4}=0,0032682896308741274209
x_{NH3}=0,86701164078760151188
x_{AR}=0,0002093056932601381143
x_{O2}=-9,2642867017453019144e-25
x_{N2}=0,001152160858768708435
T: 294.15 °K
p: 180 bar
H: -68311705.897906 J/kmol
========================================
========================================
Abtrennung: Splitter
========================================
Rücklaufverhältnis = nr/nv = 0.8
Rücklaufstrom:
n_{CO}=6,3955475450773739183e-06 kmol/h
n_{H2}=116367,58446667836688 kmol/h
n_{CO2}=6,2259596742641582574e-07 kmol/h
n_{H2O}=28,72214784843714952 kmol/h
n_{CH4}=5133,3091055377108205 kmol/h
n_{NH3}=12227,924743258550734 kmol/h
n_{AR}=433,93404027496876552 kmol/h
n_{O2}=-1,846436764792858961e-18 kmol/h
n_{N2}=7957,559678657466975 kmol/h
n_T=142149,03418927366147 kmol/h
y_{CO}=4,4991846631624682785e-11
y_{H2}=0,81863084846382505599
y_{CO2}=4,3798817978419721575e-12
y_{H2O}=0,00020205658105417137623
y_{CH4}=0,036112163088654052057
y_{NH3}=0,08602186299047856477
y_{AR}=0,0030526696347242068298
y_{O2}=-1,2989442913374279589e-23
y_{N2}=0,055980399191892162014
T: 294.15 °K
p: 180 bar
H: -6823508.199507 J/kmol
========================================
Abgas:
n_{CO}=1,5988868862693430561e-06 kmol/h
n_{H2}=29091,896116669584444 kmol/h
n_{CO2}=1,556489918566039035e-07 kmol/h
n_{H2O}=7,1805369621092856036 kmol/h
n_{CH4}=1283,3272763844272504 kmol/h
n_{NH3}=3056,9811858146372288 kmol/h
n_{AR}=108,48351006874216296 kmol/h
n_{O2}=-4,6160919119821464394e-19 kmol/h
n_{N2}=1989,3899196643660616 kmol/h
n_T=35537,258547318408091 kmol/h
y_{CO}=4,4991846631624676323e-11
y_{H2}=0,81863084846382494497
y_{CO2}=4,3798817978419713497e-12
y_{H2O}=0,00020205658105417137623
y_{CH4}=0,036112163088654045118
y_{NH3}=0,08602186299047856477
y_{AR}=0,0030526696347242063961
y_{O2}=-1,2989442913374279589e-23
y_{N2}=0,055980399191892162014
T: 294.15 °K
p: 180 bar
H: -6823508.199507 J/kmol
========================================
========================================
==============6. Iteration==============
========================================
========================================
Mischung: Rücklauf + Zulauf + Vorwärmer
========================================
Rücklaufverhältnis = nr/nv = 0.8
Temperatur der Mischung: 413.935 °K
Vorwärmer (auf T= 573.15 °K), Q: 302291,0769322552369 kW
n_{CO}=6,5854099919368523335e-06 kmol/h
n_{H2}=182839,88512946185074 kmol/h
n_{CO2}=7,5613283988040346143e-07 kmol/h
n_{H2O}=2282,5341263650643668 kmol/h
n_{CH4}=6918,5565516755159479 kmol/h
n_{NH3}=12280,995507142186398 kmol/h
n_{AR}=583,93142243706427053 kmol/h
n_{O2}=-1,846436764792858961e-18 kmol/h
n_{N2}=19628,708546849422419 kmol/h
n_T=224534,61129127271124 kmol/h
y_{CO}=2,9329153105015379686e-11
y_{H2}=0,81430601757996556866
y_{CO2}=3,3675558326262957157e-12
y_{H2O}=0,010165622632691117405
y_{CH4}=0,030812873400175116306
y_{NH3}=0,054695333768435945576
y_{AR}=0,0026006298943354070816
y_{O2}=-8,2233948440029519129e-24
y_{N2}=0,087419522691699863559
T: 573.15 °K
p: 180 bar
H: 1032759.228422 J/kmol
========================================
========================================
Adiabate Ammoniaksynthese: Reaktor
========================================
Berücksichtigte (unabhängige) Reaktionen:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Totale zu tauschende Energie, Q: 7,2415669759114588536e-08 kW (adiabat)
n_{CO}=6,9559609032677684277e-06 kmol/h
n_{H2}=151808,07395492689102 kmol/h
n_{CO2}=7,4306458255561420917e-07 kmol/h
n_{H2O}=2282,5341260206496372 kmol/h
n_{CH4}=6918,5565513180335984 kmol/h
n_{NH3}=32968,869624205079162 kmol/h
n_{AR}=583,93142243706427053 kmol/h
n_{O2}=5,0059128762846944578e-14 kmol/h
n_{N2}=9284,7714883179760363 kmol/h
n_T=203846,73717492466676 kmol/h
y_{CO}=3,4123484141415168102e-11
y_{H2}=0,74471672227285912182
y_{CO2}=3,6452120492759063134e-12
y_{H2O}=0,011197305179635838601
y_{CH4}=0,033939991619199139095
y_{NH3}=0,16173361458277293878
y_{AR}=0,0028645610448794272748
y_{O2}=2,4557238176390469688e-19
y_{N2}=0,045547805262885029953
T: 729.292 °K
p: 180 bar
H: 1137571.271069 J/kmol
Total error: 0.589937322571693
========================================
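The reactor is solved adiabatically, which is why the exchanged energy Q is printed as roughly 1e-8 kW: the total enthalpy of the effluent equals that of the preheated mixture entering it. A rough check (not the simulator's own balance) with the rounded values printed above, multiplying the molar enthalpy H in J/kmol by the total flow n_T in kmol/h; the small residual only reflects the rounding of the printed figures:

```python
# Adiabatic reactor check: Q = h_out * n_out - h_in * n_in ~ 0.
# Inlet = preheated mixture (573.15 K), outlet = reactor effluent (729.292 K).
n_in,  h_in  = 224534.61, 1032759.228422   # kmol/h, J/kmol (printed above)
n_out, h_out = 203846.74, 1137571.271069

q_j_per_h = n_out * h_out - n_in * h_in
q_kw = q_j_per_h / 3.6e6                   # J/h -> kW
print(f"Q ~ {q_kw:.1f} kW")                # ~0 within the rounding of the log
```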
========================================
Cooler + product separator
========================================
Precooler (to T = 294.15 K), Q: -910357.71156675636303 kW
Vapor phase (to splitter):
n_{CO}=6,939234763811500325e-06 kmol/h
n_{H2}=151564,08318182994844 kmol/h
n_{CO2}=6,6616172403957892253e-07 kmol/h
n_{H2O}=36,728313742070874071 kmol/h
n_{CH4}=6851,4164488788337621 kmol/h
n_{NH3}=15842,975130997903761 kmol/h
n_{AR}=579,63638275600897032 kmol/h
n_{O2}=4,96762894351433701e-14 kmol/h
n_{N2}=9264,0933258398581529 kmol/h
n_T=184138,93279165000422 kmol/h
y_{CO}=3,7684777784597817469e-11
y_{H2}=0,82309634840720247162
y_{CO2}=3,6177125278936653767e-12
y_{H2O}=0,00019945979475879415158
y_{CH4}=0,03720786443661395354
y_{NH3}=0,08603816091828417334
y_{AR}=0,0031478209087179472025
y_{O2}=2,6977613414803829961e-19
y_{N2}=0,050310345483727793303
H2/N2 ratio: 16.360379569912155
Stoichiometric ratio: 3
T: 294.15 °K
p: 180 bar
H: -6905691.027152 J/kmol
========================================
Product liquid:
n_{CO}=1,6726139456268357447e-08 kmol/h
n_{H2}=243,99077309693231541 kmol/h
n_{CO2}=7,6902858516035366052e-08 kmol/h
n_{H2O}=2245,8058122785769228 kmol/h
n_{CH4}=67,140102439199381479 kmol/h
n_{NH3}=17125,894493207182677 kmol/h
n_{AR}=4,2950396810552611271 kmol/h
n_{O2}=3,8283932770358177511e-16 kmol/h
n_{N2}=20,678162478118956358 kmol/h
n_T=19707,804383274691645 kmol/h
x_{CO}=8,4870638719811508408e-13
x_{H2}=0,012380413788022532479
x_{CO2}=3,9021525192349204252e-12
x_{H2O}=0,11395515039623610454
x_{CH4}=0,0034067773933290608837
x_{NH3}=0,86899048527416289378
x_{AR}=0,00021793598099021318126
x_{O2}=1,9425772668115265425e-20
x_{N2}=0,0010492372502683498369
T: 294.15 °K
p: 180 bar
H: -67874767.751154 J/kmol
========================================
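The cooler + product separator is a pure phase split: every component entering with the reactor effluent leaves either in the vapor sent to the splitter or in the product liquid, so the two streams above must add up to the reactor outlet. A small check with the printed NH3 flows of this iteration (kmol/h), plus the fraction of the ammonia actually recovered in the liquid:

```python
# Component balance around the cooler + separator (NH3, kmol/h, printed above).
nh3_reactor_out = 32968.87
nh3_vapor       = 15842.98   # to the splitter, and partly back to the reactor
nh3_liquid      = 17125.89   # product liquid

balance_error = nh3_reactor_out - (nh3_vapor + nh3_liquid)
recovery = nh3_liquid / nh3_reactor_out

print(f"balance error: {balance_error:.2f} kmol/h")    # ~0
print(f"NH3 recovered in the liquid: {recovery:.1%}")  # ~52%
```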
========================================
Separation: splitter
========================================
Recycle ratio = nr/nv = 0.8
Recycle stream:
n_{CO}=5,5513878110492005988e-06 kmol/h
n_{H2}=121251,26654546397913 kmol/h
n_{CO2}=5,329293792316631592e-07 kmol/h
n_{H2O}=29,38265099365670352 kmol/h
n_{CH4}=5481,1331591030675554 kmol/h
n_{NH3}=12674,380104798325192 kmol/h
n_{AR}=463,70910620480725584 kmol/h
n_{O2}=3,9741031548114699867e-14 kmol/h
n_{N2}=7411,2746606718874318 kmol/h
n_T=147311,14623332003248 kmol/h
y_{CO}=3,7684777784951773223e-11
y_{H2}=0,82309634841493339863
y_{CO2}=3,6177125279276448092e-12
y_{H2O}=0,00019945979476066757162
y_{CH4}=0,037207864436963430932
y_{NH3}=0,086038160919092290801
y_{AR}=0,0031478209087475129621
y_{O2}=2,6977613415057219749e-19
y_{N2}=0,050310345484200331978
T: 294.15 °K
p: 180 bar
H: -6905691.027152 J/kmol
========================================
Purge gas:
n_{CO}=1,3878469527622997262e-06 kmol/h
n_{H2}=30312,816636365983868 kmol/h
n_{CO2}=1,3323234480791576333e-07 kmol/h
n_{H2O}=7,3456627484141723272 kmol/h
n_{CH4}=1370,2832897757664341 kmol/h
n_{NH3}=3168,5950261995803885 kmol/h
n_{AR}=115,92727655120178554 kmol/h
n_{O2}=9,9352578870286718112e-15 kmol/h
n_{N2}=1852,8186651679711758 kmol/h
n_T=36827,786558330000844 kmol/h
y_{CO}=3,768477778495176676e-11
y_{H2}=0,82309634841493317658
y_{CO2}=3,6177125279276448092e-12
y_{H2O}=0,00019945979476066754451
y_{CH4}=0,037207864436963423993
y_{NH3}=0,086038160919092276924
y_{AR}=0,0031478209087475129621
y_{O2}=2,6977613415057214934e-19
y_{N2}=0,050310345484200331978
T: 294.15 °K
p: 180 bar
H: -6905691.027152 J/kmol
========================================
========================================
==============7. Iteration==============
========================================
========================================
Mixing: recycle + feed + preheater
========================================
Recycle ratio = nr/nv = 0.8
Mixture temperature: 411.221 K
Preheater (to T = 573.15 K), Q: 314660.33277805539547 kW
n_{CO}=5,741250257908679014e-06 kmol/h
n_{H2}=187723,56720824746299 kmol/h
n_{CO2}=6,6646625168565068901e-07 kmol/h
n_{H2O}=2283,1946295102839031 kmol/h
n_{CH4}=7266,3806052408735923 kmol/h
n_{NH3}=12727,450868681957218 kmol/h
n_{AR}=613,706488366902704 kmol/h
n_{O2}=3,9741031548114699867e-14 kmol/h
n_{N2}=19082,423528863841057 kmol/h
n_T=229696,72333531905315 kmol/h
y_{CO}=2,4994915793933234322e-11
y_{H2}=0,817267066253279717
y_{CO2}=2,9015052631496218456e-12
y_{H2O}=0,0099400400508857025828
y_{CH4}=0,031634672448649453491
y_{NH3}=0,055409805955751463558
y_{AR}=0,0026718121158001597974
y_{O2}=1,7301523056599897693e-19
y_{N2}=0,083076603147737002053
T: 573.15 °K
p: 180 bar
H: 997355.770814 J/kmol
========================================
========================================
Adiabatic ammonia synthesis: reactor
========================================
Considered (independent) reactions:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Total energy to be exchanged, Q: 5.5779351128472228824e-09 kW (adiabatic)
n_{CO}=6,1482859178481210851e-06 kmol/h
n_{H2}=156472,96028372136061 kmol/h
n_{CO2}=6,5169576953008502089e-07 kmol/h
n_{H2O}=2283,1946291327890322 kmol/h
n_{CH4}=7266,3806048486085274 kmol/h
n_{NH3}=33561,18881914071244 kmol/h
n_{AR}=613,706488366902704 kmol/h
n_{O2}=-3,006133760808191181e-18 kmol/h
n_{N2}=8665,5545536344652646 kmol/h
n_T=208862,98538564477349 kmol/h
y_{CO}=2,9436934009613633379e-11
y_{H2}=0,74916558333593463725
y_{CO2}=3,1202070980973174102e-12
y_{H2O}=0,010931542632683800137
y_{CH4}=0,034790178793202432284
y_{NH3}=0,16068519157271121678
y_{AR}=0,0029383209630646357595
y_{O2}=-1,4392850677958393565e-23
y_{N2}=0,041489182669846351448
T: 726.69 °K
p: 180 bar
H: 1096840.362272 J/kmol
Total error: 96224.00437943684
========================================
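Although four formal reactions are listed for the reactor, the net change across it is essentially the ammonia synthesis reaction N2 + 3 H2 <<==>> 2 NH3; the CO, CO2, CH4 and H2O flows barely move. A quick check of that net stoichiometry with the printed inlet (preheated mixture) and outlet flows of this iteration, in kmol/h:

```python
# Net stoichiometry check over the reactor (iteration 7 flows, kmol/h).
inlet  = {"H2": 187723.567, "N2": 19082.424, "NH3": 12727.451}
outlet = {"H2": 156472.960, "N2": 8665.555,  "NH3": 33561.189}

d_h2  = inlet["H2"]  - outlet["H2"]     # H2 consumed
d_n2  = inlet["N2"]  - outlet["N2"]     # N2 consumed
d_nh3 = outlet["NH3"] - inlet["NH3"]    # NH3 produced

print(f"dH2/dN2  = {d_h2 / d_n2:.4f}")   # ~3.0
print(f"dNH3/dN2 = {d_nh3 / d_n2:.4f}")  # ~2.0
```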
========================================
Cooler + product separator
========================================
Precooler (to T = 294.15 K), Q: -925635.8663852653699 kW
Vapor phase (to splitter):
n_{CO}=6,133644040775956409e-06 kmol/h
n_{H2}=156224,19253459671745 kmol/h
n_{CO2}=5,8499310388878995672e-07 kmol/h
n_{H2O}=37,423740040224231507 kmol/h
n_{CH4}=7196,5864789821671366 kmol/h
n_{NH3}=16251,67375153015746 kmol/h
n_{AR}=609,24330763943464717 kmol/h
n_{O2}=-2,9834042582861380334e-18 kmol/h
n_{N2}=8646,4417651852181734 kmol/h
n_T=188965,56158469253569 kmol/h
y_{CO}=3,2459057562025827028e-11
y_{H2}=0,8267336715907581679
y_{CO2}=3,0957656991963635274e-12
y_{H2O}=0,00019804529315304946185
y_{CH4}=0,038084116590148153758
y_{NH3}=0,086003362809010802659
y_{AR}=0,0032240970392937349197
y_{O2}=-1,578808452312010402e-23
y_{N2}=0,045756706632644104926
H2/N2 ratio: 18.068032697985817
Stoichiometric ratio: 3
T: 294.15 °K
p: 180 bar
H: -6969375.590266 J/kmol
========================================
Product liquid:
n_{CO}=1,4641877072166044275e-08 kmol/h
n_{H2}=248,76774912464611589 kmol/h
n_{CO2}=6,6702665641295156809e-08 kmol/h
n_{H2O}=2245,7708890925632659 kmol/h
n_{CH4}=69,794125866440893446 kmol/h
n_{NH3}=17309,515067610547703 kmol/h
n_{AR}=4,4631807274679839992 kmol/h
n_{O2}=-2,2729502522052906873e-20 kmol/h
n_{N2}=19,11278844924636644 kmol/h
n_T=19897,423800952255988 kmol/h
x_{CO}=7,3586798069695869444e-13
x_{H2}=0,012502510457410905692
x_{CO2}=3,352326729055306108e-12
x_{H2O}=0,11286742001174868144
x_{CH4}=0,003507696602881628118
x_{NH3}=0,86993749755350391339
x_{AR}=0,00022430947707182083309
x_{O2}=-1,1423339398843317498e-24
x_{N2}=0,00096056598292109903522
T: 294.15 °K
p: 180 bar
H: -67662795.100688 J/kmol
========================================
========================================
Separation: splitter
========================================
Recycle ratio = nr/nv = 0.8
Recycle stream:
n_{CO}=4,9069152326207647884e-06 kmol/h
n_{H2}=124979,35402767737105 kmol/h
n_{CO2}=4,6799448311103199714e-07 kmol/h
n_{H2O}=29,938992032179392311 kmol/h
n_{CH4}=5757,2691831857337093 kmol/h
n_{NH3}=13001,33900122412706 kmol/h
n_{AR}=487,39464611154778595 kmol/h
n_{O2}=-2,3867234066289105037e-18 kmol/h
n_{N2}=6917,1534121481745387 kmol/h
n_T=151172,4492677540693 kmol/h
y_{CO}=3,2459057562332142349e-11
y_{H2}=0,82673367159856003816
y_{CO2}=3,0957656992255781895e-12
y_{H2O}=0,00019804529315491843666
y_{CH4}=0,038084116590507546829
y_{NH3}=0,086003362809822417323
y_{AR}=0,0032240970393241606684
y_{O2}=-1,5788084523269094991e-23
y_{N2}=0,045756706633075912294
T: 294.15 °K
p: 180 bar
H: -6969375.590266 J/kmol
========================================
Purge gas:
n_{CO}=1,2267288081551909853e-06 kmol/h
n_{H2}=31244,838506919335487 kmol/h
n_{CO2}=1,1699862077775797282e-07 kmol/h
n_{H2O}=7,484748008044844525 kmol/h
n_{CH4}=1439,3172957964331999 kmol/h
n_{NH3}=3250,3347503060308554 kmol/h
n_{AR}=121,84866152788691807 kmol/h
n_{O2}=-5,9668085165722743334e-19 kmol/h
n_{N2}=1729,2883530370431799 kmol/h
n_T=37793,112316938502772 kmol/h
y_{CO}=3,2459057562332148811e-11
y_{H2}=0,82673367159856014919
y_{CO2}=3,0957656992255789973e-12
y_{H2O}=0,00019804529315491843666
y_{CH4}=0,038084116590507560707
y_{NH3}=0,086003362809822417323
y_{AR}=0,0032240970393241611021
y_{O2}=-1,5788084523269094991e-23
y_{N2}=0,045756706633075919233
T: 294.15 °K
p: 180 bar
H: -6969375.590266 J/kmol
========================================
========================================
==============8. Iteration==============
========================================
========================================
Mixing: recycle + feed + preheater
========================================
Recycle ratio = nr/nv = 0.8
Mixture temperature: 409.269 K
Preheater (to T = 573.15 K), Q: 323920.9679593910696 kW
n_{CO}=5,0967776794802440506e-06 kmol/h
n_{H2}=191451,65469046085491 kmol/h
n_{CO2}=6,0153135556501957989e-07 kmol/h
n_{H2O}=2283,7509705488068903 kmol/h
n_{CH4}=7542,5166293235397461 kmol/h
n_{NH3}=13054,409765107762723 kmol/h
n_{AR}=637,39202827364317727 kmol/h
n_{O2}=-2,3867234066289101185e-18 kmol/h
n_{N2}=18588,302280340129073 kmol/h
n_T=233558,02636975303176 kmol/h
y_{CO}=2,1822318670441989557e-11
y_{H2}=0,819717727822240394
y_{CO2}=2,5755113832513570178e-12
y_{H2O}=0,0097780881524205422173
y_{CH4}=0,032293973136177919758
y_{NH3}=0,055893646508388100669
y_{AR}=0,0027290521254215768104
y_{O2}=-1,0218974032818780436e-23
y_{N2}=0,079587512230953699754
T: 573.15 °K
p: 180 bar
H: 968219.014429 J/kmol
========================================
========================================
Adiabatic ammonia synthesis: reactor
========================================
Considered (independent) reactions:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Total energy to be exchanged, Q: -4.5943790011935763167e-08 kW (adiabatic)
n_{CO}=5,5284276875683146783e-06 kmol/h
n_{H2}=160104,08192904430325 kmol/h
n_{CO2}=5,8376502017802110605e-07 kmol/h
n_{H2O}=2283,7509701526892059 kmol/h
n_{CH4}=7542,5166289096559922 kmol/h
n_{NH3}=33952,791606868042436 kmol/h
n_{AR}=637,39202827364317727 kmol/h
n_{O2}=1,8336469416135449617e-18 kmol/h
n_{N2}=8139,1113594599846692 kmol/h
n_T=212659,64452882052865 kmol/h
y_{CO}=2,5996599871204426875e-11
y_{H2}=0,752865369843813248
y_{CO2}=2,7450672245382544419e-12
y_{H2O}=0,010738995521283238241
y_{CH4}=0,035467550252052934545
y_{NH3}=0,15965789692772019981
y_{AR}=0,0029972401660215380707
y_{O2}=8,6224490108420254039e-24
y_{N2}=0,038272947260367208566
T: 724.491 °K
p: 180 bar
H: 1063367.347409 J/kmol
Total error: 0.16543579228791164
========================================
========================================
Cooler + product separator
========================================
Precooler (to T = 294.15 K), Q: -936193.62871653016191 kW
Vapor phase (to splitter):
n_{CO}=5,5154195207669665593e-06 kmol/h
n_{H2}=159852,75520118678105 kmol/h
n_{CO2}=5,2471080801774635695e-07 kmol/h
n_{H2O}=38,013850484397202933 kmol/h
n_{CH4}=7470,9459138335614625 kmol/h
n_{NH3}=16560,476408081020054 kmol/h
n_{AR}=632,81528700580122404 kmol/h
n_{O2}=1,8199586079764755726e-18 kmol/h
n_{N2}=8121,3750320819744957 kmol/h
n_T=192676,38169871366699 kmol/h
y_{CO}=2,8625301513814857381e-11
y_{H2}=0,82964374663048279235
y_{CO2}=2,7232751797957065516e-12
y_{H2O}=0,00019729377388605777205
y_{CH4}=0,03877458071351750496
y_{NH3}=0,085949695867888642464
y_{AR}=0,0032843428001961606257
y_{O2}=9,4456756552844894573e-24
y_{N2}=0,042150340173540676303
H2/N2 ratio: 19.68296680915711
Stoichiometric ratio: 3
T: 294.15 °K
p: 180 bar
H: -7018438.699071 J/kmol
========================================
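The H2/N2 ratio printed for the vapor phase is simply the ratio of the two molar flows; it is reported next to the 3:1 stoichiometric ratio because, as the successive iterations show, the loop gas becomes far more hydrogen-rich than the synthesis reaction consumes. Recomputing it from the flows printed above (kmol/h):

```python
# H2/N2 ratio of the vapor leaving the separator (iteration 8, kmol/h).
n_h2, n_n2 = 159852.76, 8121.38
ratio = n_h2 / n_n2
print(f"H2/N2 = {ratio:.3f} (stoichiometric ratio: 3)")  # ~19.68, as printed
```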
Product liquid:
n_{CO}=1,3008166801348694807e-08 kmol/h
n_{H2}=251,32672785752635036 kmol/h
n_{CO2}=5,9054212160274788802e-08 kmol/h
n_{H2O}=2245,7371196682929622 kmol/h
n_{CH4}=71,570715076093819107 kmol/h
n_{NH3}=17392,315198787029658 kmol/h
n_{AR}=4,5767412678421255379 kmol/h
n_{O2}=1,3688333637069777224e-20 kmol/h
n_{N2}=17,736327378009796973 kmol/h
n_T=19983,262830106854381 kmol/h
x_{CO}=6,5095309575255492369e-13
x_{H2}=0,012576861447322024851
x_{CO2}=2,9551836788388511516e-12
x_{H2O}=0,11238090290654433046
x_{CH4}=0,0035815329904269783967
x_{NH3}=0,87034411488179885819
x_{AR}=0,00022902872804886039229
x_{O2}=6,8498992154840431229e-25
x_{N2}=0,00088755913037640520556
T: 294.15 °K
p: 180 bar
H: -67569354.341583 J/kmol
========================================
========================================
Separation: splitter
========================================
Recycle ratio = nr/nv = 0.8
Recycle stream:
n_{CO}=4,4123356166135734168e-06 kmol/h
n_{H2}=127882,20416094941902 kmol/h
n_{CO2}=4,1976864641419706438e-07 kmol/h
n_{H2O}=30,411080387517767321 kmol/h
n_{CH4}=5976,75673106684917 kmol/h
n_{NH3}=13248,381126464817498 kmol/h
n_{AR}=506,25222960464100197 kmol/h
n_{O2}=1,4559668863811805351e-18 kmol/h
n_{N2}=6497,1000256655806879 kmol/h
n_T=154141,10535897093359 kmol/h
y_{CO}=2,862530151407648233e-11
y_{H2}=0,82964374663806539356
y_{CO2}=2,723275179820596287e-12
y_{H2O}=0,00019729377388786096289
y_{CH4}=0,038774580713871888149
y_{NH3}=0,085949695868674194643
y_{AR}=0,0032843428002261782807
y_{O2}=9,4456756553708207012e-24
y_{N2}=0,042150340173925916754
T: 294.15 °K
p: 180 bar
H: -7018438.699071 J/kmol
========================================
Purge gas:
n_{CO}=1,1030839041533931424e-06 kmol/h
n_{H2}=31970,551040237347479 kmol/h
n_{CO2}=1,0494216160354923963e-07 kmol/h
n_{H2O}=7,6027700968794391656 kmol/h
n_{CH4}=1494,1891827667120651 kmol/h
n_{NH3}=3312,0952816162034651 kmol/h
n_{AR}=126,56305740116020786 kmol/h
n_{O2}=3,6399172159529503749e-19 kmol/h
n_{N2}=1624,2750064163947172 kmol/h
n_T=38535,276339742726122 kmol/h
y_{CO}=2,862530151407648233e-11
y_{H2}=0,82964374663806528254
y_{CO2}=2,723275179820596287e-12
y_{H2O}=0,00019729377388786093579
y_{CH4}=0,038774580713871888149
y_{NH3}=0,085949695868674180765
y_{AR}=0,003284342800226177847
y_{O2}=9,4456756553708192318e-24
y_{N2}=0,042150340173925909815
T: 294.15 °K
p: 180 bar
H: -7018438.699071 J/kmol
========================================
========================================
==============9. Iteration==============
========================================
========================================
Mixing: recycle + feed + preheater
========================================
Recycle ratio = nr/nv = 0.8
Mixture temperature: 407.811 K
Preheater (to T = 573.15 K), Q: 331043.48180804186268 kW
n_{CO}=4,602198063473052679e-06 kmol/h
n_{H2}=194354,50482373291743 kmol/h
n_{CO2}=5,5330551886818470007e-07 kmol/h
n_{H2O}=2284,2230589041450912 kmol/h
n_{CH4}=7762,0041772046542974 kmol/h
n_{NH3}=13301,451890348453162 kmol/h
n_{AR}=656,24961176673650698 kmol/h
n_{O2}=1,4559668863811805351e-18 kmol/h
n_{N2}=18168,248893857533403 kmol/h
n_T=236526,68246096995426 kmol/h
y_{CO}=1,9457416032681538677e-11
y_{H2}=0,82170224010901593559
y_{CO2}=2,3392942948814555234e-12
y_{H2O}=0,0096573588871144474405
y_{CH4}=0,032816611201932736896
y_{NH3}=0,056236580803281546737
y_{AR}=0,0027745267677147856557
y_{O2}=6,1556136975008494262e-24
y_{N2}=0,076812682209143723355
T: 573.15 °K
p: 180 bar
H: 945026.787302 J/kmol
========================================
========================================
Adiabatic ammonia synthesis: reactor
========================================
Considered (independent) reactions:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Total energy to be exchanged, Q: 2.3015975952148437402e-07 kW (adiabatic)
n_{CO}=5,0524115276265334329e-06 kmol/h
n_{H2}=162972,7080634550075 kmol/h
n_{CO2}=5,3260063169812207087e-07 kmol/h
n_{H2O}=2284,2230584953413199 kmol/h
n_{CH4}=7762,004176775146334 kmol/h
n_{NH3}=34222,649731378929573 kmol/h
n_{AR}=656,24961176673650698 kmol/h
n_{O2}=-3,6952216952815242188e-30 kmol/h
n_{N2}=7707,6499733422961071 kmol/h
n_T=215605,4846207984956 kmol/h
y_{CO}=2,3433594634721782612e-11
y_{H2}=0,75588386979156541035
y_{CO2}=2,4702554883278900955e-12
y_{H2O}=0,010594457105359705168
y_{CH4}=0,036000958836584162626
y_{NH3}=0,15872810374730894623
y_{AR}=0,0030437519385044021526
y_{O2}=-1,7138811203159267858e-35
y_{N2}=0,035748858554773395302
T: 722.656 °K
p: 180 bar
H: 1036727.109384 J/kmol
Total error: 0.8285675048828125
========================================
========================================
Cooler + product separator
========================================
Precooler (to T = 294.15 K), Q: -943780.57869952567853 kW
Vapor phase (to splitter):
n_{CO}=5,040666855377232494e-06 kmol/h
n_{H2}=162720,05796840137918 kmol/h
n_{CO2}=4,7931532359349217128e-07 kmol/h
n_{H2O}=38,51258103323274895 kmol/h
n_{CH4}=7689,2318914787574613 kmol/h
n_{NH3}=16799,928501576741837 kmol/h
n_{AR}=651,59533972266683577 kmol/h
n_{O2}=-3,6679755973518900485e-30 kmol/h
n_{N2}=7691,0569008599504741 kmol/h
n_T=195590,38318859270657 kmol/h
y_{CO}=2,5771547522716858085e-11
y_{H2}=0,8319430399091632955
y_{CO2}=2,4506078252679214499e-12
y_{H2O}=0,00019690426699428499515
y_{CH4}=0,039312934337852545319
y_{NH3}=0,085893428028230314752
y_{AR}=0,0033314283099922939804
y_{O2}=-1,875335350093189406e-35
y_{N2}=0,039322265110416226852
H2/N2 ratio: 21.15704773295948
Stoichiometric ratio: 3
T: 294.15 °K
p: 180 bar
H: -7056077.601255 J/kmol
========================================
Product liquid:
n_{CO}=1,1744672249300588218e-08 kmol/h
n_{H2}=252,65009505365031828 kmol/h
n_{CO2}=5,32853081046298268e-08 kmol/h
n_{H2O}=2245,7104774621084289 kmol/h
n_{CH4}=72,772285296388488973 kmol/h
n_{NH3}=17422,721229802187736 kmol/h
n_{AR}=4,6542720440695886097 kmol/h
n_{O2}=-2,7246097929633956808e-32 kmol/h
n_{N2}=16,593072482346464369 kmol/h
n_T=20015,101432205781748 kmol/h
x_{CO}=5,8679054363672934594e-13
x_{H2}=0,012622973504878474971
x_{CO2}=2,6622552121391352588e-12
x_{H2O}=0,1122008042411880846
x_{CH4}=0,0036358689237409704334
x_{NH3}=0,87047878774783471467
x_{AR}=0,00023253801936750216508
x_{O2}=-1,3612770349603895919e-36
x_{N2}=0,00082902764894946889915
T: 294.15 °K
p: 180 bar
H: -67536058.461149 J/kmol
========================================
========================================
Separation: splitter
========================================
Recycle ratio = nr/nv = 0.8
Recycle stream:
n_{CO}=4,0325334843017866728e-06 kmol/h
n_{H2}=130176,04637472110335 kmol/h
n_{CO2}=3,834522588747937582e-07 kmol/h
n_{H2O}=30,810064826586195608 kmol/h
n_{CH4}=6151,385513183005969 kmol/h
n_{NH3}=13439,94280126139347 kmol/h
n_{AR}=521,27627177813349135 kmol/h
n_{O2}=-2,9343804778815120388e-30 kmol/h
n_{N2}=6152,8455206879598336 kmol/h
n_T=156472,30655087420018 kmol/h
y_{CO}=2,5771547522952119883e-11
y_{H2}=0,83194303991675788712
y_{CO2}=2,4506078252902920811e-12
y_{H2O}=0,00019690426699608246683
y_{CH4}=0,039312934338211417973
y_{NH3}=0,085893428029014409764
y_{AR}=0,0033314283100227054177
y_{O2}=-1,8753353501103087312e-35
y_{N2}=0,039322265110775182773
T: 294.15 °K
p: 180 bar
H: -7056077.601255 J/kmol
========================================
Purge gas:
n_{CO}=1,0081333710754462447e-06 kmol/h
n_{H2}=32544,011593680264923 kmol/h
n_{CO2}=9,586306471869841308e-08 kmol/h
n_{H2O}=7,7025162066465471256 kmol/h
n_{CH4}=1537,8463782957510375 kmol/h
n_{NH3}=3359,9857003153474579 kmol/h
n_{AR}=130,31906794453334442 kmol/h
n_{O2}=-7,3359511947037783454e-31 kmol/h
n_{N2}=1538,211380171989731 kmol/h
n_T=39118,076637718535494 kmol/h
y_{CO}=2,5771547522952116652e-11
y_{H2}=0,83194303991675788712
y_{CO2}=2,450607825290292485e-12
y_{H2O}=0,00019690426699608249393
y_{CH4}=0,039312934338211424912
y_{NH3}=0,085893428029014423641
y_{AR}=0,0033314283100227058514
y_{O2}=-1,8753353501103089985e-35
y_{N2}=0,03932226511077519665
T: 294.15 °K
p: 180 bar
H: -7056077.601255 J/kmol
========================================
========================================
==============10. Iteration=============
========================================
========================================
Mixing: recycle + feed + preheater
========================================
Recycle ratio = nr/nv = 0.8
Mixture temperature: 406.693 K
Preheater (to T = 573.15 K), Q: 336636.52769375318894 kW
n_{CO}=4,222395931161265935e-06 kmol/h
n_{H2}=196648,34703750457265 kmol/h
n_{CO2}=5,1698913132878139389e-07 kmol/h
n_{H2O}=2284,6220433432135906 kmol/h
n_{CH4}=7936,6329593208129154 kmol/h
n_{NH3}=13493,013565145029133 kmol/h
n_{AR}=671,27365394022899636 kmol/h
n_{O2}=0 kmol/h
n_{N2}=17823,994388879913458 kmol/h
n_T=238857,88365287316265 kmol/h
y_{CO}=1,7677440101987924541e-11
y_{H2}=0,8232859808943515656
y_{CO2}=2,1644214686257133059e-12
y_{H2O}=0,0095647755410217215333
y_{CH4}=0,033227427279959260986
y_{NH3}=0,056489714129570559042
y_{AR}=0,0028103474906265852977
y_{O2}=0
y_{N2}=0,074621754644628474074
T: 573.15 °K
p: 180 bar
H: 926945.236802 J/kmol
========================================
========================================
Adiabatic ammonia synthesis: reactor
========================================
Considered (independent) reactions:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Total energy to be exchanged, Q: 1.405758327907986155e-07 kW (adiabatic)
n_{CO}=4,6867916000030872059e-06 kmol/h
n_{H2}=165261,8177226120315 kmol/h
n_{CO2}=4,937499960393163275e-07 kmol/h
n_{H2O}=2284,6220429252957729 kmol/h
n_{CH4}=7936,6329588796561438 kmol/h
n_{NH3}=34417,366442606878991 kmol/h
n_{AR}=671,27365394022899636 kmol/h
n_{O2}=4,3182534299790744473e-19 kmol/h
n_{N2}=7361,8179501489867107 kmol/h
n_T=217933,53077629365725 kmol/h
y_{CO}=2,1505601195504085008e-11
y_{H2}=0,7583129458506843168
y_{CO2}=2,2655990304958863406e-12
y_{H2O}=0,01048311397877747797
y_{CH4}=0,036417677126639687313
y_{NH3}=0,15792598009131472714
y_{AR}=0,0030801761048385154808
y_{O2}=1,9814543519747375733e-24
y_{N2}=0,033780106823973823915
T: 721.143 °K
p: 180 bar
H: 1015943.607836 J/kmol
Total error: 0.5061044578347682
========================================
========================================
Cooler + product separator
========================================
Precooler (to T = 294.15 K), Q: -949444.75532089534681 kW
Vapor phase (to splitter):
n_{CO}=4,6760171862919918198e-06 kmol/h
n_{H2}=165008,50078244769247 kmol/h
n_{CO2}=4,4483471139045104343e-07 kmol/h
n_{H2O}=38,93041677242310783 kmol/h
n_{CH4}=7863,0315724923739253 kmol/h
n_{NH3}=16989,207137540863187 kmol/h
n_{AR}=666,56539110510095725 kmol/h
n_{O2}=4,2867650475871553328e-19 kmol/h
n_{N2}=7346,1447232955106301 kmol/h
n_T=197912,38002877478721 kmol/h
y_{CO}=2,3626703825046373484e-11
y_{H2}=0,8337452197632658768
y_{CO2}=2,2476345912356276268e-12
y_{H2O}=0,00019670531356559098107
y_{CH4}=0,039729862130293974731
y_{NH3}=0,085842063719903688446
y_{AR}=0,0033679822909622215976
y_{O2}=2,1659913578550743121e-24
y_{N2}=0,037118166747129678618
H2/N2 ratio: 22.46191805331385
Stoichiometric ratio: 3
T: 294.15 °K
p: 180 bar
H: -7084893.890067 J/kmol
========================================
Product liquid:
n_{CO}=1,0774413711095682201e-08 kmol/h
n_{H2}=253,31694016432354033 kmol/h
n_{CO2}=4,8915284648865257601e-08 kmol/h
n_{H2O}=2245,6916261528735959 kmol/h
n_{CH4}=73,601386387281422685 kmol/h
n_{NH3}=17428,159305065997614 kmol/h
n_{AR}=4,7082628351278401624 kmol/h
n_{O2}=3,1488382391919321447e-21 kmol/h
n_{N2}=15,673226853475188847 kmol/h
n_T=20021,150747518768185 kmol/h
x_{CO}=5,3815157020332511594e-13
x_{H2}=0,012652466553065873714
x_{CO2}=2,4431804779893208411e-12
x_{H2O}=0,11216596162091581335
x_{CH4}=0,0036761816202275018894
x_{NH3}=0,87048739237813987923
x_{AR}=0,00023516444658558971633
x_{O2}=1,5727558716186402543e-25
x_{N2}=0,00078283346709292610938
T: 294.15 °K
p: 180 bar
H: -67531046.798708 J/kmol
========================================
========================================
Separation: splitter
========================================
Recycle ratio = nr/nv = 0.8
Recycle stream:
n_{CO}=3,7408137490335933711e-06 kmol/h
n_{H2}=132006,8006259581598 kmol/h
n_{CO2}=3,5586776911236085592e-07 kmol/h
n_{H2O}=31,144333417938486974 kmol/h
n_{CH4}=6290,4252579938993222 kmol/h
n_{NH3}=13591,365710032690913 kmol/h
n_{AR}=533,2523128840807658 kmol/h
n_{O2}=3,4294120380697247477e-19 kmol/h
n_{N2}=5876,9157786364094136 kmol/h
n_T=158329,90402301988797 kmol/h
y_{CO}=2,362670382525912046e-11
y_{H2}=0,83374521977077331591
y_{CO2}=2,2476345912558664907e-12
y_{H2O}=0,00019670531356736218795
y_{CH4}=0,039729862130651723284
y_{NH3}=0,085842063720676653471
y_{AR}=0,0033679822909925484671
y_{O2}=2,1659913578745779674e-24
y_{N2}=0,037118166747463911259
T: 294.15 °K
p: 180 bar
H: -7084893.890067 J/kmol
========================================
Purge gas:
n_{CO}=9,3520343725839813102e-07 kmol/h
n_{H2}=33001,700156489532674 kmol/h
n_{CO2}=8,896694227809018751e-08 kmol/h
n_{H2O}=7,7860833544846199672 kmol/h
n_{CH4}=1572,6063144984746032 kmol/h
n_{NH3}=3397,8414275081718188 kmol/h
n_{AR}=133,31307822102016303 kmol/h
n_{O2}=8,5735300951743094618e-20 kmol/h
n_{N2}=1469,2289446591016713 kmol/h
n_T=39582,476005754964717 kmol/h
y_{CO}=2,362670382525912046e-11
y_{H2}=0,83374521977077331591
y_{CO2}=2,2476345912558664907e-12
y_{H2O}=0,00019670531356736218795
y_{CH4}=0,039729862130651723284
y_{NH3}=0,085842063720676639593
y_{AR}=0,0033679822909925484671
y_{O2}=2,1659913578745779674e-24
y_{N2}=0,03711816674746390432
T: 294.15 °K
p: 180 bar
H: -7084893.890067 J/kmol
========================================
========================================
==============11. Iteration=============
========================================
========================================
Mixing: recycle + feed + preheater
========================================
Recycle ratio = nr/nv = 0.8
Mixture temperature: 405.818 K
Preheater (to T = 573.15 K), Q: 341092.13239290780621 kW
n_{CO}=3,9306761958930722098e-06 kmol/h
n_{H2}=198479,10128874165821 kmol/h
n_{CO2}=4,8940464156634838573e-07 kmol/h
n_{H2O}=2284,9563119345657469 kmol/h
n_{CH4}=8075,6727041317044495 kmol/h
n_{NH3}=13644,436473916326577 kmol/h
n_{AR}=683,24969504617627081 kmol/h
n_{O2}=3,4294120380697242662e-19 kmol/h
n_{N2}=17548,0646468283594 kmol/h
n_T=240715,48112501885043 kmol/h
y_{CO}=1,632913752585618587e-11
y_{H2}=0,82453816580936400982
y_{CO2}=2,0331249127768786494e-12
y_{H2O}=0,0094923529689718722652
y_{CH4}=0,033548622076107745271
y_{NH3}=0,056682837390212979789
y_{AR}=0,0028384119370009328921
y_{O2}=1,4246744837689156657e-24
y_{N2}=0,072899609799980141789
T: 573.15 °K
p: 180 bar
H: 913021.967916 J/kmol
========================================
========================================
Adiabatic ammonia synthesis: reactor
========================================
Considered (independent) reactions:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Total energy to be exchanged, Q: 2.311452229817708297e-07 kW (adiabatic)
n_{CO}=4,4056476884703235056e-06 kmol/h
n_{H2}=167099,69669218262425 kmol/h
n_{CO2}=4,6406765313935609257e-07 kmol/h
n_{H2O}=2284,9563115102682787 kmol/h
n_{CH4}=8075,6727036820702779 kmol/h
n_{NH3}=34564,039539171382785 kmol/h
n_{AR}=683,24969504617627081 kmol/h
n_{O2}=-8,4325084099167720438e-19 kmol/h
n_{N2}=7088,2631142008340248 kmol/h
n_T=219795,87806066308985 kmol/h
y_{CO}=2,0044268925072272441e-11
y_{H2}=0,76024945584313252667
y_{CO2}=2,1113573977546320767e-12
y_{H2O}=0,010395810566018104035
y_{CH4}=0,036741693133358968582
y_{NH3}=0,15725517623051965543
y_{AR}=0,0031085646422250065406
y_{O2}=-3,8365179931124193617e-24
y_{N2}=0,03224929956259003222
T: 719.908 °K
p: 180 bar
H: 999921.036848 J/kmol
Total error: 0.8321386346126416
========================================
========================================
Cooler + product separator
========================================
Precooler (to T = 294.15 K), Q: -953811.89779997977894 kW
Vapor phase (to splitter):
n_{CO}=4,395616533017868629e-06 kmol/h
n_{H2}=166846,04416763101472 kmol/h
n_{CO2}=4,1847699018736354152e-07 kmol/h
n_{H2O}=39,277095712373409242 kmol/h
n_{CH4}=8001,4838439200666471 kmol/h
n_{NH3}=17140,7418526482179 kmol/h
n_{AR}=678,5028609389689791 kmol/h
n_{O2}=-8,3716018435247315051e-19 kmol/h
n_{N2}=7073,3169149937102702 kmol/h
n_T=199779,36674065844272 kmol/h
y_{CO}=2,2002354921288915435e-11
y_{H2}=0,83515153185323165452
y_{CO2}=2,0946957486697767689e-12
y_{H2O}=0,00019660236366154416887
y_{CH4}=0,040051602797575698578
y_{NH3}=0,085798359120564424152
y_{AR}=0,0033962609452739644759
y_{O2}=-4,1904236558707788787e-24
y_{N2}=0,035405642886596026819
H2/N2 ratio: 23.588091156209618
Stoichiometric ratio: 3
T: 294.15 °K
p: 180 bar
H: -7106956.520890 J/kmol
========================================
Product liquid:
n_{CO}=1,0031155452455816235e-08 kmol/h
n_{H2}=253,65252455163283685 kmol/h
n_{CO2}=4,5590662951992590751e-08 kmol/h
n_{H2O}=2245,6792157978957221 kmol/h
n_{CH4}=74,188859762002792309 kmol/h
n_{NH3}=17423,297686523161246 kmol/h
n_{AR}=4,7468341072075199705 kmol/h
n_{O2}=-6,0906566392040087355e-21 kmol/h
n_{N2}=14,946199207124799102 kmol/h
n_T=20016,511320004647132 kmol/h
x_{CO}=5,0114404518291959827e-13
x_{H2}=0,012672164520543302557
x_{CO2}=2,277652795096414455e-12
x_{H2O}=0,11219133944461427699
x_{CH4}=0,0037063831245418704663
x_{NH3}=0,87044627355593062745
x_{AR}=0,00023714592576828643277
x_{O2}=-3,0428162742145946216e-25
x_{N2}=0,00074669351564425449647
T: 294.15 °K
p: 180 bar
H: -67537556.228914 J/kmol
========================================
========================================
Separation: splitter
========================================
Recycle ratio = nr/nv = 0.8
Recycle stream:
n_{CO}=3,5164932264142947338e-06 kmol/h
n_{H2}=133476,83533410480595 kmol/h
n_{CO2}=3,3478159214989088616e-07 kmol/h
n_{H2O}=31,421676569898728815 kmol/h
n_{CH4}=6401,1870751360538634 kmol/h
n_{NH3}=13712,593482118574684 kmol/h
n_{AR}=542,80228875117518328 kmol/h
n_{O2}=-6,6972814748197855892e-19 kmol/h
n_{N2}=5658,6535319949689438 kmol/h
n_T=159823,49339252678328 kmol/h
y_{CO}=2,2002354921486925026e-11
y_{H2}=0,83515153186074753133
y_{CO2}=2,0946957486886282474e-12
y_{H2O}=0,0001966023636633134784
y_{CH4}=0,040051602797936146361
y_{NH3}=0,085798359121336570388
y_{AR}=0,0033962609453045290026
y_{O2}=-4,1904236559084909415e-24
y_{N2}=0,035405642886914667766
T: 294.15 °K
p: 180 bar
H: -7106956.520890 J/kmol
========================================
Purge gas:
n_{CO}=8,791233066035734717e-07 kmol/h
n_{H2}=33369,208833526194212 kmol/h
n_{CO2}=8,369539803747269507e-08 kmol/h
n_{H2O}=7,8554191424746804273 kmol/h
n_{CH4}=1600,2967687840130111 kmol/h
n_{NH3}=3428,1483705296427615 kmol/h
n_{AR}=135,7005721877937674 kmol/h
n_{O2}=-1,6743203687049459158e-19 kmol/h
n_{N2}=1414,6633829987417812 kmol/h
n_T=39955,873348131688545 kmol/h
y_{CO}=2,2002354921486925026e-11
y_{H2}=0,83515153186074753133
y_{CO2}=2,0946957486886278435e-12
y_{H2O}=0,0001966023636633134784
y_{CH4}=0,040051602797936139422
y_{NH3}=0,085798359121336570388
y_{AR}=0,0033962609453045290026
y_{O2}=-4,1904236559084902068e-24
y_{N2}=0,035405642886914660827
T: 294.15 °K
p: 180 bar
H: -7106956.520890 J/kmol
========================================
========================================
==============12. Iteration=============
========================================
========================================
Mixing: recycle + feed + preheater
========================================
Recycle ratio = nr/nv = 0.8
Mixture temperature: 405.124 K
Preheater (to T = 573.15 K), Q: 344673.2386872349889 kW
n_{CO}=3,7063556732737735725e-06 kmol/h
n_{H2}=199949,13599688827526 kmol/h
n_{CO2}=4,6831846460387846891e-07 kmol/h
n_{H2O}=2285,2336550865256868 kmol/h
n_{CH4}=8186,4345212738599002 kmol/h
n_{NH3}=13765,664246002210348 kmol/h
n_{AR}=692,79967091327068829 kmol/h
n_{O2}=-6,6972814748197855892e-19 kmol/h
n_{N2}=17329,802400186923478 kmol/h
n_T=242209,07049452571664 kmol/h
y_{CO}=1,5302299231430075458e-11
y_{H2}=0,8255229070845528927
y_{CO2}=1,9335298370440801484e-12
y_{H2O}=0,0094349631515479315258
y_{CH4}=0,033799041896157580589
y_{NH3}=0,056833809806942525711
y_{AR}=0,002860337432858150921
y_{O2}=-2,7650828522423785301e-24
y_{N2}=0,071548940610705172305
T: 573.15 °K
p: 180 bar
H: 902371.061364 J/kmol
========================================
========================================
Adiabatic ammonia synthesis: reactor
========================================
Considered (independent) reactions:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Total energy to be exchanged, Q: 1.2751897176106772489e-07 kW (adiabatic)
n_{CO}=4,1889673537791945963e-06 kmol/h
n_{H2}=168580,07216051142314 kmol/h
n_{CO2}=4,4126720598592136889e-07 kmol/h
n_{H2O}=2285,2336546580168033 kmol/h
n_{CH4}=8186,4345208182994611 kmol/h
n_{NH3}=34678,373471146529482 kmol/h
n_{AR}=692,79967091327068829 kmol/h
n_{O2}=-4,2289595193467221456e-19 kmol/h
n_{N2}=6873,4477876147630013 kmol/h
n_T=221296,36127029251656 kmol/h
y_{CO}=1,8929219304526963597e-11
y_{H2}=0,76178420283470849039
y_{CO2}=1,9940102198379814382e-12
y_{H2O}=0,010326575825920700918
y_{CH4}=0,036993082370746013798
y_{NH3}=0,15670557469668552897
y_{AR}=0,0031306419452016274498
y_{O2}=-1,9109936987086061622e-24
y_{N2}=0,031059922305814589155
T: 718.909 °K
p: 180 bar
H: 987645.954773 J/kmol
Total error: 16.006584435190376
========================================
========================================
Cooler + product separator
========================================
Precooler (to T = 294.15 K), Q: -957259.95258935564198 kW
Vapor phase (to splitter):
n_{CO}=4,1795059947480392556e-06 kmol/h
n_{H2}=168326,24207922889036 kmol/h
n_{CO2}=3,9821778482171633358e-07 kmol/h
n_{H2O}=39,562192558769822881 kmol/h
n_{CH4}=8111,8173293609625034 kmol/h
n_{NH3}=17262,963737028032483 kmol/h
n_{AR}=688,02452533092832709 kmol/h
n_{O2}=-4,1986557136028691643e-19 kmol/h
n_{N2}=6859,0708432411820468 kmol/h
n_T=201287,68071132648038 kmol/h
y_{CO}=2,0763843966710220042e-11
y_{H2}=0,83624711399570605952
y_{CO2}=1,9783514987650040015e-12
y_{H2O}=0,00019654552339522661009
y_{CH4}=0,040299621420560004237
y_{NH3}=0,085762644171117077829
y_{AR}=0,0034181154201460809024
y_{O2}=-2,0858980036572951523e-24
y_{N2}=0,034075959437463901325
H2/N2 ratio: 24.540676999289904
Stoichiometric ratio: 3
T: 294.15 °K
p: 180 bar
H: -7123876.205636 J/kmol
========================================
Product liquid:
n_{CO}=9,461359031155243039e-09 kmol/h
n_{H2}=253,83008128256253144 kmol/h
n_{CO2}=4,3049421164205035312e-08 kmol/h
n_{H2O}=2245,6714620992470373 kmol/h
n_{CH4}=74,617191457336218718 kmol/h
n_{NH3}=17415,409734118507913 kmol/h
n_{AR}=4,7751455823424224789 kmol/h
n_{O2}=-3,0303805743853018881e-21 kmol/h
n_{N2}=14,37694437357941446 kmol/h
n_T=20008,680558966079843 kmol/h
x_{CO}=4,7286271596553347535e-13
x_{H2}=0,012685997987581834959
x_{CO2}=2,1515372311121950177e-12
x_{H2O}=0,1122348600489460474
x_{CH4}=0,0037292409783891292477
x_{NH3}=0,87039271202060608523
x_{AR}=0,00023865369676406345857
x_{O2}=-1,5145329377042590079e-25
x_{N2}=0,00071853535431749425335
T: 294.15 °K
p: 180 bar
H: -67547331.429069 J/kmol
========================================
========================================
Separation: splitter
========================================
Recycle ratio = nr/nv = 0.8
Recycle stream:
n_{CO}=3,3436047957984316586e-06 kmol/h
n_{H2}=134660,99366338309483 kmol/h
n_{CO2}=3,1857422785737305628e-07 kmol/h
n_{H2O}=31,649754047015857594 kmol/h
n_{CH4}=6489,453863488771276 kmol/h
n_{NH3}=13810,37098962242635 kmol/h
n_{AR}=550,41962026474266167 kmol/h
n_{O2}=-3,3589245708822956203e-19 kmol/h
n_{N2}=5487,2566745929452736 kmol/h
n_T=161030,14456906117266 kmol/h
y_{CO}=2,0763843966894387282e-11
y_{H2}=0,8362471140031232375
y_{CO2}=1,9783514987825512972e-12
y_{H2O}=0,00019654552339696989877
y_{CH4}=0,040299621420917447479
y_{NH3}=0,085762644171877761012
y_{AR}=0,0034181154201763982309
y_{O2}=-2,0858980036757963313e-24
y_{N2}=0,034075959437766138727
T: 294.15 °K
p: 180 bar
H: -7123876.205636 J/kmol
========================================
Purge gas:
n_{CO}=8,359011989496077029e-07 kmol/h
n_{H2}=33665,248415845766431 kmol/h
n_{CO2}=7,96435569643432376e-08 kmol/h
n_{H2O}=7,9124385117539626222 kmol/h
n_{CH4}=1622,3634658721923643 kmol/h
n_{NH3}=3452,5927474056056781 kmol/h
n_{AR}=137,604905066185637 kmol/h
n_{O2}=-8,3973114272057366434e-20 kmol/h
n_{N2}=1371,814168648236091 kmol/h
n_T=40257,536142265285889 kmol/h
y_{CO}=2,0763843966894384051e-11
y_{H2}=0,83624711400312312648
y_{CO2}=1,9783514987825508933e-12
y_{H2O}=0,00019654552339696989877
y_{CH4}=0,040299621420917447479
y_{NH3}=0,085762644171877761012
y_{AR}=0,0034181154201763982309
y_{O2}=-2,0858980036757963313e-24
y_{N2}=0,034075959437766138727
T: 294.15 °K
p: 180 bar
H: -7123876.205636 J/kmol
========================================
========================================
==============13. Iteration=============
========================================
========================================
Mixing: recycle + feed + preheater
========================================
Recycle ratio = nr/nv = 0.8
Mixture temperature: 404.57 K
Preheater (to T = 573.15 K), Q: 347565.11902262433432 kW
n_{CO}=3,5334672426579104973e-06 kmol/h
n_{H2}=201133,29432616656413 kmol/h
n_{CO2}=4,5211110031136069197e-07 kmol/h
n_{H2O}=2285,4617325636427267 kmol/h
n_{CH4}=8274,7013096265764034 kmol/h
n_{NH3}=13863,441753506062014 kmol/h
n_{AR}=700,41700242683816668 kmol/h
n_{O2}=-3,3589245708822956203e-19 kmol/h
n_{N2}=17158,405542784897989 kmol/h
n_T=243415,72167106013512 kmol/h
y_{CO}=1,4516183336065949483e-11
y_{H2}=0,82629541323533761688
y_{CO2}=1,8573619534826965356e-12
y_{H2O}=0,0093891294977737783239
y_{CH4}=0,033994112018814442999
y_{NH3}=0,056953764770545207974
y_{AR}=0,0028774517833870514628
y_{O2}=-1,3799127467293910928e-24
y_{N2}=0,070490128677768451593
T: 573.15 °K
p: 180 bar
H: 894243.917356 J/kmol
========================================
========================================
Adiabatic ammonia synthesis: reactor
========================================
Considered (independent) reactions:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Total energy to be exchanged, Q: 3.5442776150173611062e-08 kW (adiabatic)
n_{CO}=4,0214383738623027449e-06 kmol/h
n_{H2}=169774,02918620113633 kmol/h
n_{CO2}=4,2366405267855111046e-07 kmol/h
n_{H2O}=2285,461732132565885 kmol/h
n_{CH4}=8274,7013091670523863 kmol/h
n_{NH3}=34769,618514383109869 kmol/h
n_{AR}=700,41700242683816668 kmol/h
n_{O2}=-3,610601326033017555e-18 kmol/h
n_{N2}=6705,3171623463776996 kmol/h
n_T=222509,54491110218805 kmol/h
y_{CO}=1,807310502328770733e-11
y_{H2}=0,7629966132645225052
y_{CO2}=1,9040264220926563118e-12
y_{H2O}=0,010271297498925998151
y_{CH4}=0,037188073493534812286
y_{NH3}=0,15626124500984617249
y_{AR}=0,0031478065478344817267
y_{O2}=-1,6226725588223814306e-23
y_{N2}=0,030134964165358886501
T: 718.104 °K
p: 180 bar
H: 978263.780011 J/kmol
Total error: 197928.00509823597
========================================
========================================
Cooler + product separator
========================================
Precooler (to T = 294.15 K), Q: -960024.55007687932812 kW
Vapor phase (to splitter):
n_{CO}=4,0124150411649144989e-06 kmol/h
n_{H2}=169520,09268558592885 kmol/h
n_{CO2}=3,8256688037646244533e-07 kmol/h
n_{H2O}=39,794926181034988133 kmol/h
n_{CH4}=8199,7634471292803937 kmol/h
n_{NH3}=17361,898658289355808 kmol/h
n_{AR}=695,62055486696829121 kmol/h
n_{O2}=-3,5848956465704239522e-18 kmol/h
n_{N2}=6691,3840894746899721 kmol/h
n_T=202508,55436592220212 kmol/h
y_{CO}=1,9813558265191670802e-11
y_{H2}=0,83710089786010721813
y_{CO2}=1,8891393579441284964e-12
y_{H2O}=0,00019650985265921565195
y_{CH4}=0,040490948506991045197
y_{NH3}=0,085734149416531013621
y_{AR}=0,0034350181257221420843
y_{O2}=-1,7702440559921043899e-23
y_{N2}=0,033042476207319676496
H2/N2 ratio: 25.334084909613097
Stoichiometric ratio: 3
T: 294.15 °K
p: 180 bar
H: -7136887.031260 J/kmol
========================================
Product liquid:
n_{CO}=9,0233326973876818749e-09 kmol/h
n_{H2}=253,93650061519855399 kmol/h
n_{CO2}=4,1097172302088671749e-08 kmol/h
n_{H2O}=2245,6668059515309324 kmol/h
n_{CH4}=74,937862037771594714 kmol/h
n_{NH3}=17407,719856093750423 kmol/h
n_{AR}=4,7964475598698586012 kmol/h
n_{O2}=-2,5705679462594376194e-20 kmol/h
n_{N2}=13,933072871687254946 kmol/h
n_T=20000,99054517993136 kmol/h
x_{CO}=4,5114429097020168214e-13
x_{H2}=0,012696196224114005002
x_{CO2}=2,0547568488163669659e-12
x_{H2O}=0,11227777949710601724
x_{CH4}=0,0037467075380741366245
x_{NH3}=0,87034288718611996227
x_{AR}=0,00023981050085848017455
x_{O2}=-1,2852203198067626606e-24
x_{N2}=0,00069661914201094336189
T: 294.15 °K
p: 180 bar
H: -67556729.210431 J/kmol
========================================
========================================
Separation: splitter
========================================
Recycle ratio = nr/nv = 0.8
Recycle stream:
n_{CO}=3,2099320329319320227e-06 kmol/h
n_{H2}=135616,07414846873144 kmol/h
n_{CO2}=3,0605350430116999862e-07 kmol/h
n_{H2O}=31,835940944827992638 kmol/h
n_{CH4}=6559,8107577034252245 kmol/h
n_{NH3}=13889,518926631484646 kmol/h
n_{AR}=556,49644389357467844 kmol/h
n_{O2}=-2,8679165172563392388e-18 kmol/h
n_{N2}=5353,1072715797527053 kmol/h
n_T=162006,84349273773842 kmol/h
y_{CO}=1,9813558265369346613e-11
y_{H2}=0,83710089786761365804
y_{CO2}=1,8891393579610691392e-12
y_{H2O}=0,00019650985266097785995
y_{CH4}=0,040490948507354143637
y_{NH3}=0,08573414941729981531
y_{AR}=0,0034350181257529447017
y_{O2}=-1,7702440560079788534e-23
y_{N2}=0,033042476207615981143
T: 294.15 °K
p: 180 bar
H: -7136887.031260 J/kmol
========================================
Purge gas:
n_{CO}=8,024830082329827939e-07 kmol/h
n_{H2}=33904,018537117175583 kmol/h
n_{CO2}=7,6513376075292486419e-08 kmol/h
n_{H2O}=7,958985236206995495 kmol/h
n_{CH4}=1639,9526894258558514 kmol/h
n_{NH3}=3472,3797316578702521 kmol/h
n_{AR}=139,12411097339364119 kmol/h
n_{O2}=-7,1697912931408461711e-19 kmol/h
n_{N2}=1338,2768178949377216 kmol/h
n_T=40501,710873184427328 kmol/h
y_{CO}=1,9813558265369343382e-11
y_{H2}=0,83710089786761365804
y_{CO2}=1,8891393579610691392e-12
y_{H2O}=0,00019650985266097780574
y_{CH4}=0,040490948507354136698
y_{NH3}=0,08573414941729981531
y_{AR}=0,0034350181257529447017
y_{O2}=-1,7702440560079788534e-23
y_{N2}=0,033042476207615974204
T: 294.15 °K
p: 180 bar
H: -7136887.031260 J/kmol
========================================
========================================
==============14. Iteration=============
========================================
========================================
Mixing: recycle + feed + preheater
========================================
Recycle ratio = nr/nv = 0.8
Mixture temperature: 404.126 K
Preheater (to T = 573.15 K), Q: 349904.90117685322184 kW
n_{CO}=3,3997944797914108613e-06 kmol/h
n_{H2}=202088,37481125220074 kmol/h
n_{CO2}=4,3959037675515758137e-07 kmol/h
n_{H2O}=2285,6479194614553307 kmol/h
n_{CH4}=8345,0582038412303518 kmol/h
n_{NH3}=13942,58969051512031 kmol/h
n_{AR}=706,49382605567018345 kmol/h
n_{O2}=-2,8679165172563392388e-18 kmol/h
n_{N2}=17024,25613977170724 kmol/h
n_T=244392,42059473678819 kmol/h
y_{CO}=1,3911210795809060053e-11
y_{H2}=0,82690115478812176164
y_{CO2}=1,7987070780894117923e-12
y_{H2O}=0,0093523682686200235864
y_{CH4}=0,034146141617376120359
y_{NH3}=0,057050008574674210271
y_{AR}=0,0028908172534008822613
y_{O2}=-1,1734883227054146473e-23
y_{N2}=0,069659509482097009547
T: 573.15 °K
p: 180 bar
H: 888041.121354 J/kmol
========================================
========================================
Adiabatic ammonia synthesis: reactor
========================================
Considered (independent) reactions:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Total energy to be exchanged, Q: -1.1681450737847222348e-08 kW (adiabatic)
n_{CO}=3,8914328152097463743e-06 kmol/h
n_{H2}=170737,07501900388161 kmol/h
n_{CO2}=4,1000808073188196812e-07 kmol/h
n_{H2O}=2285,6479190289815051 kmol/h
n_{CH4}=8345,0582033791743015 kmol/h
n_{NH3}=34843,456219585073995 kmol/h
n_{AR}=706,49382605567018345 kmol/h
n_{O2}=-2,3444595004723260923e-18 kmol/h
n_{N2}=6573,8228752367294874 kmol/h
n_T=223491,55406659096479 kmol/h
y_{CO}=1,7411990495400400702e-11
y_{H2}=0,76395314235513123169
y_{CO2}=1,8345573838093094503e-12
y_{H2O}=0,010226999085379108803
y_{CH4}=0,03733947906099709807
y_{NH3}=0,15590502453261928517
y_{AR}=0,0031611656601804516073
y_{O2}=-1,0490148096485905269e-23
y_{N2}=0,029414189286446189991
T: 717.459 °K
p: 180 bar
H: 971090.474277 J/kmol
Total error: 486552.0006133561
========================================
========================================
Cooler + product separator
========================================
Precooler (to T = 294.15 K), Q: -962260.39377604692709 kW
Vapor phase (to splitter):
n_{CO}=3,8827475163677572465e-06 kmol/h
n_{H2}=170483,06267257520813 kmol/h
n_{CO2}=3,7041836093940727435e-07 kmol/h
n_{H2O}=39,983823444532745839 kmol/h
n_{CH4}=8269,8750093039070634 kmol/h
n_{NH3}=17442,071526155974425 kmol/h
n_{AR}=701,68102375299918094 kmol/h
n_{O2}=-2,3278552062648315399e-18 kmol/h
n_{N2}=6560,2355925291349195 kmol/h
n_T=203496,90965201490326 kmol/h
y_{CO}=1,9080130125674643616e-11
y_{H2}=0,83776733004242698311
y_{CO2}=1,8202652883993480654e-12
y_{H2O}=0,00019648368868377047501
y_{CH4}=0,040638823574135833627
y_{NH3}=0,085711726806204049933
y_{AR}=0,0034481163617994456394
y_{O2}=-1,1439265639093721886e-23
y_{N2}=0,032237519496923841555
H2/N2 ratio: 25.987338452710922
Stoichiometric ratio: 3
T: 294.15 °K
p: 180 bar
H: -7146924.969283 J/kmol
========================================
Product liquid:
n_{CO}=8,6852988419888250734e-09 kmol/h
n_{H2}=254,01234642867152047 kmol/h
n_{CO2}=3,9589719792474693765e-08 kmol/h
n_{H2O}=2245,6640955844504788 kmol/h
n_{CH4}=75,183194075264978551 kmol/h
n_{NH3}=17401,384693429088657 kmol/h
n_{AR}=4,8128023026709563226 kmol/h
n_{O2}=-1,6604294207494269573e-20 kmol/h
n_{N2}=13,587282707594487974 kmol/h
n_T=19994,64441457601788 kmol/h
x_{CO}=4,3438126043623252094e-13
x_{H2}=0,01270401919558880946
x_{CO2}=1,9800161970977747278e-12
x_{H2O}=0,11231327995767549643
x_{CH4}=0,0037601665987761429336
x_{NH3}=0,87030228366173445487
x_{AR}=0,00024070457084996436286
x_{O2}=-8,3043708428733836473e-25
x_{N2}=0,00067954610380186093472
T: 294.15 °K
p: 180 bar
H: -67564456.466454 J/kmol
========================================
========================================
Separation: splitter
========================================
Recycle ratio = nr/nv = 0.8
Recycle stream:
n_{CO}=3,1061980130942060513e-06 kmol/h
n_{H2}=136386,45013806017232 kmol/h
n_{CO2}=2,9633468875152587242e-07 kmol/h
n_{H2O}=31,987058755626200934 kmol/h
n_{CH4}=6615,9000074431260146 kmol/h
n_{NH3}=13953,657220924780631 kmol/h
n_{AR}=561,34481900239939023 kmol/h
n_{O2}=-1,8622841650118651549e-18 kmol/h
n_{N2}=5248,188474023309027 kmol/h
n_T=162797,52772161195753 kmol/h
y_{CO}=1,9080130125844945887e-11
y_{H2}=0,83776733004990466824
y_{CO2}=1,8202652884155952174e-12
y_{H2O}=0,00019648368868552422623
y_{CH4}=0,040638823574498557367
y_{NH3}=0,085711726806969090742
y_{AR}=0,003448116361830222236
y_{O2}=-1,1439265639195823856e-23
y_{N2}=0,032237519497211583608
T: 294.15 °K
p: 180 bar
H: -7146924.969283 J/kmol
========================================
Purge gas:
n_{CO}=7,7654950327355130107e-07 kmol/h
n_{H2}=34096,612534515028528 kmol/h
n_{CO2}=7,4083672187881441635e-08 kmol/h
n_{H2O}=7,996764688906547569 kmol/h
n_{CH4}=1653,9750018607810489 kmol/h
n_{NH3}=3488,4143052311937936 kmol/h
n_{AR}=140,33620475059981914 kmol/h
n_{O2}=-4,6557104125296619242e-19 kmol/h
n_{N2}=1312,0471185058265746 kmol/h
n_T=40699,381930402974831 kmol/h
y_{CO}=1,9080130125844949118e-11
y_{H2}=0,83776733004990466824
y_{CO2}=1,8202652884155952174e-12
y_{H2O}=0,00019648368868552422623
y_{CH4}=0,040638823574498564306
y_{NH3}=0,085711726806969090742
y_{AR}=0,0034481163618302226696
y_{O2}=-1,1439265639195826795e-23
y_{N2}=0,032237519497211583608
T: 294.15 °K
p: 180 bar
H: -7146924.969283 J/kmol
========================================
========================================
==============15. Iteration=============
========================================
========================================
Mixing: recycle + feed + preheater
========================================
Recycle ratio = nr/nv = 0.8
Mixture temperature: 403.769 K
Preheater (to T = 573.15 K), Q: 351798.32765445054974 kW
n_{CO}=3,29606045995368489e-06 kmol/h
n_{H2}=202858,75080084364163 kmol/h
n_{CO2}=4,2987156120551340223e-07 kmol/h
n_{H2O}=2285,7990372722529173 kmol/h
n_{CH4}=8401,1474535809320514 kmol/h
n_{NH3}=14006,727984808414476 kmol/h
n_{AR}=711,34220116449489524 kmol/h
n_{O2}=-1,8622841650118651549e-18 kmol/h
n_{N2}=16919,337342215261742 kmol/h
n_T=245183,10482361091999 kmol/h
y_{CO}=1,3443260955215202061e-11
y_{H2}=0,82737654760830214862
y_{CO2}=1,7532674672456351484e-12
y_{H2O}=0,0093228244210248386453
y_{CH4}=0,034264789409632720463
y_{NH3}=0,05712762302641163914
y_{AR}=0,0029012692439646160271
y_{O2}=-7,595483246497043957e-24
y_{N2}=0,069006946275467606622
T: 573.15 °K
p: 180 bar
H: 883297.726400 J/kmol
========================================
========================================
Adiabatic ammonia synthesis: reactor
========================================
Considered (independent) reactions:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Total energy to be exchanged, Q: 2.1983888414171008459e-07 kW (adiabatic)
n_{CO}=3,7901569247595574839e-06 kmol/h
n_{H2}=171513,42152424855158 kmol/h
n_{CO2}=3,9936674143623614784e-07 kmol/h
n_{H2O}=2285,7990368391660922 kmol/h
n_{CH4}=8401,147453117340774 kmol/h
n_{NH3}=34903,614170112014108 kmol/h
n_{AR}=711,34220116449489524 kmol/h
n_{O2}=4,5973652571093378425e-19 kmol/h
n_{N2}=6470,8942495634628358 kmol/h
n_T=224286,21863923454657 kmol/h
y_{CO}=1,6898750836118215125e-11
y_{H2}=0,76470780311352393177
y_{CO2}=1,7806120405401254223e-12
y_{H2O}=0,010191437756217580982
y_{CH4}=0,037457261101854086305
y_{NH3}=0,15562085972948094814
y_{AR}=0,003171582299974891328
y_{O2}=2,0497760785312545928e-24
y_{N2}=0,0288510559802692447
T: 716.944 °K
p: 180 bar
H: 965595.123754 J/kmol
Total error: 32.00978460276329
========================================
========================================
Cooler + product separator
========================================
Precooler (to T = 294.15 K), Q: -964075.70391753234435 kW
Vapor phase (to splitter):
n_{CO}=3,781733623373493073e-06 kmol/h
n_{H2}=171259,34667662280845 kmol/h
n_{CO2}=3,6094678787923185025e-07 kmol/h
n_{H2O}=40,136478575268377256 kmol/h
n_{CH4}=8325,7735210986356833 kmol/h
n_{NH3}=17507,01529579423368 kmol/h
n_{AR}=706,51665315235140952 kmol/h
n_{O2}=4,5649412809181370347e-19 kmol/h
n_{N2}=6457,5766255545868262 kmol/h
n_T=204296,3652549405524 kmol/h
y_{CO}=1,8511017650848209373e-11
y_{H2}=0,83828875984547424061
y_{CO2}=1,766780272453247035e-12
y_{H2O}=0,00019646202967805027721
y_{CH4}=0,040753409931749799699
y_{NH3}=0,085694208378703257134
y_{AR}=0,0034582928200677580215
y_{O2}=2,2344701409926796697e-24
y_{N2}=0,031608866938310019312
H2/N2 ratio: 26.52068362594378
Stoichiometric ratio: 3
T: 294.15 °K
p: 180 bar
H: -7154696.296584 J/kmol
========================================
Product liquid:
n_{CO}=8,4233013860634597141e-09 kmol/h
n_{H2}=254,07484762572977388 kmol/h
n_{CO2}=3,8419953557004204948e-08 kmol/h
n_{H2O}=2245,6625582638998821 kmol/h
n_{CH4}=75,373932018706383928 kmol/h
n_{NH3}=17396,598874317798618 kmol/h
n_{AR}=4,8255480121434697338 kmol/h
n_{O2}=3,2423976191200093122e-21 kmol/h
n_{N2}=13,317624008875986519 kmol/h
n_T=19989,853384293997806 kmol/h
x_{CO}=4,213788479188291393e-13
x_{H2}=0,012710190656933892384
x_{CO2}=1,9219727544987297135e-12
x_{H2O}=0,11234012155629682916
x_{CH4}=0,0037706095486152138747
x_{NH3}=0,8702714595365907968
x_{AR}=0,00024139987028106230029
x_{O2}=1,6220217117009267963e-25
x_{N2}=0,00066621919419398826796
T: 294.15 °K
p: 180 bar
H: -67570313.227222 J/kmol
========================================
========================================
Separation: splitter
========================================
Recycle ratio = nr/nv = 0.8
Recycle stream:
n_{CO}=3,0253868986987949666e-06 kmol/h
n_{H2}=137007,47734129824676 kmol/h
n_{CO2}=2,8875743030338549079e-07 kmol/h
n_{H2O}=32,109182860214701805 kmol/h
n_{CH4}=6660,6188168789085466 kmol/h
n_{NH3}=14005,61223663538658 kmol/h
n_{AR}=565,21332252188108214 kmol/h
n_{O2}=3,6519530247345099166e-19 kmol/h
n_{N2}=5166,0613004436700066 kmol/h
n_T=163437,09220395248849 kmol/h
y_{CO}=1,8511017651509772917e-11
y_{H2}=0,83828875987543371995
y_{CO2}=1,7667802725163896328e-12
y_{H2O}=0,00019646202968507159759
y_{CH4}=0,040753409933206280469
y_{NH3}=0,085694208381765876736
y_{AR}=0,003458292820191353166
y_{O2}=2,2344701410725372458e-24
y_{N2}=0,031608866939439685118
T: 294.15 °K
p: 180 bar
H: -7154696.296584 J/kmol
========================================
Abgas:
n_{CO}=7,5634672467469842401e-07 kmol/h
n_{H2}=34251,869335324554413 kmol/h
n_{CO2}=7,2189357575846359462e-08 kmol/h
n_{H2O}=8,0272957150536736748 kmol/h
n_{CH4}=1665,1547042197266819 kmol/h
n_{NH3}=3501,4030591588457355 kmol/h
n_{AR}=141,30333063047024211 kmol/h
n_{O2}=9,1298825618362723842e-20 kmol/h
n_{N2}=1291,5153251109170469 kmol/h
n_T=40859,273050988114846 kmol/h
y_{CO}=1,8511017651509769686e-11
y_{H2}=0,83828875987543371995
y_{CO2}=1,7667802725163896328e-12
y_{H2O}=0,00019646202968507159759
y_{CH4}=0,040753409933206280469
y_{NH3}=0,085694208381765862859
y_{AR}=0,003458292820191353166
y_{O2}=2,2344701410725368785e-24
y_{N2}=0,031608866939439685118
T: 294.15 °K
p: 180 bar
H: -7154696.296584 J/kmol
========================================
========================================
==============16. Iteration=============
========================================
========================================
Mischung: Rücklauf + Zulauf + Vorwärmer
========================================
Rücklaufverhältnis = nr/nv = 0.8
Temperatur der Mischung: 403.482 °K
Vorwärmer (auf T= 573.15 °K), Q: 353329,34111563005717 kW
n_{CO}=3,2152493455582742288e-06 kmol/h
n_{H2}=203479,77800408171606 kmol/h
n_{CO2}=4,2229430275737307354e-07 kmol/h
n_{H2O}=2285,92116137684161 kmol/h
n_{CH4}=8445,8662630167145835 kmol/h
n_{NH3}=14058,683000519022244 kmol/h
n_{AR}=715,21070468397658715 kmol/h
n_{O2}=3,6519530247345099166e-19 kmol/h
n_{N2}=16837,21016863562545 kmol/h
n_T=245822,66930595148006 kmol/h
y_{CO}=1,307954776764939991e-11
y_{H2}=0,82775025825966563886
y_{CO2}=1,7178818534094777384e-12
y_{H2O}=0,009299065736414159275
y_{CH4}=0,034357556554334575671
y_{NH3}=0,057190343918288318037
y_{AR}=0,002909457889718965011
y_{O2}=1,4856046576360622131e-24
y_{N2}=0,068493317626780764185
T: 573.15 °K
p: 180 bar
H: 879659.657136 J/kmol
========================================
========================================
Adiabate Ammoniaksynthese: Reaktor
========================================
Berücksichtigte (unabhängige) Reaktionen:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Totale zu tauschende Energie, Q: -4,3449401855468753121e-08 kW (adiabat)
n_{CO}=3,7109652633843814455e-06 kmol/h
n_{H2}=172138,70189019705867 kmol/h
n_{CO2}=3,9104076172225228267e-07 kmol/h
n_{H2O}=2285,9211609436329127 kmol/h
n_{CH4}=8445,8662625522520102 kmol/h
n_{NH3}=34952,733744016884884 kmol/h
n_{AR}=715,21070468397658715 kmol/h
n_{O2}=-1,0835600487931166259e-18 kmol/h
n_{N2}=6390,1847968866959491 kmol/h
n_T=224928,618563382508 kmol/h
y_{CO}=1,6498413083609771134e-11
y_{H2}=0,76530369052033364596
y_{CO2}=1,7385104848810562221e-12
y_{H2O}=0,010162873784331203503
y_{CH4}=0,037549095870929811991
y_{NH3}=0,15539478243035390048
y_{AR}=0,0031797230128030053685
y_{O2}=-4,8173507476007584376e-24
y_{N2}=0,028409834363011522013
T: 716.532 °K
p: 180 bar
H: 961372.929682 J/kmol
Gesamtfehler: 24.000509833549668
========================================
========================================
Abkühler + Produkt-Abscheider
========================================
Vorkühler (auf T= 294.15 °K), Q: -965550,9740772616351 kW
Dampfphase (zum Splitter):
n_{CO}=3,7027459103350727072e-06 kmol/h
n_{H2}=171884,57154548866674 kmol/h
n_{CO2}=3,5353271346431567948e-07 kmol/h
n_{H2O}=40,259453805872155385 kmol/h
n_{CH4}=8370,3424003645141056 kmol/h
n_{NH3}=17559,561349268729828 kmol/h
n_{AR}=710,37512132732445025 kmol/h
n_{O2}=-1,0759435055152317885e-18 kmol/h
n_{N2}=6377,077902127809466 kmol/h
n_T=204942,18777643918293 kmol/h
y_{CO}=1,8067270337905619211e-11
y_{H2}=0,83869784647102585406
y_{CO2}=1,7250363006612854343e-12
y_{H2O}=0,00019644297858431575407
y_{CH4}=0,040842456552611337839
y_{NH3}=0,085680559669720268712
y_{AR}=0,0034662220063546942544
y_{O2}=-5,2499854576028440436e-24
y_{N2}=0,03111647226509282535
H2/N2 Verhältnis: 26.9535003623112
Stoechiometrisches Verhältnis: 3
T: 294.15 °K
p: 180 bar
H: -7160733.356603 J/kmol
========================================
Produkt-Flüssigkeit:
n_{CO}=8,2193530493090873582e-09 kmol/h
n_{H2}=254,13034470836564083 kmol/h
n_{CO2}=3,7508048257936523775e-08 kmol/h
n_{H2O}=2245,6617071377590946 kmol/h
n_{CH4}=75,523862187738416196 kmol/h
n_{NH3}=17393,172394748136867 kmol/h
n_{AR}=4,8355833566522443689 kmol/h
n_{O2}=-7,6165432778848750357e-21 kmol/h
n_{N2}=13,106894758885198726 kmol/h
n_T=19986,430786943263229 kmol/h
x_{CO}=4,1124666730298503014e-13
x_{H2}=0,012715143965090927244
x_{CO2}=1,876675664201156259e-12
x_{H2O}=0,11235931677469020162
x_{CH4}=0,0037787568486510191186
x_{NH3}=0,87024904980419559575
x_{AR}=0,00024194331694465647116
x_{O2}=-3,8108571570148154815e-25
x_{N2}=0,0006557896656763749629
T: 294.15 °K
p: 180 bar
H: -67574535.608773 J/kmol
========================================
========================================
Abtrennung: Splitter
========================================
Rücklaufverhältnis = nr/nv = 0.8
Rücklaufstrom:
n_{CO}=2,9621967282680585893e-06 kmol/h
n_{H2}=137507,65723639092175 kmol/h
n_{CO2}=2,8282617077145254359e-07 kmol/h
n_{H2O}=32,207563044697721466 kmol/h
n_{CH4}=6696,2739202916118302 kmol/h
n_{NH3}=14047,649079414986772 kmol/h
n_{AR}=568,3000970618595602 kmol/h
n_{O2}=-8,6075480441218535375e-19 kmol/h
n_{N2}=5101,6623217022488461 kmol/h
n_T=163953,7502211513638 kmol/h
y_{CO}=1,8067270338570830751e-11
y_{H2}=0,8386978465019052642
y_{CO2}=1,7250363007247986074e-12
y_{H2O}=0,00019644297859154849386
y_{CH4}=0,040842456554115093292
y_{NH3}=0,085680559672874911925
y_{AR}=0,0034662220064823152585
y_{O2}=-5,2499854577961408652e-24
y_{N2}=0,031116472266238488775
T: 294.15 °K
p: 180 bar
H: -7160733.356603 J/kmol
========================================
Abgas:
n_{CO}=7,4054918206701443556e-07 kmol/h
n_{H2}=34376,914309097723162 kmol/h
n_{CO2}=7,0706542692863109427e-08 kmol/h
n_{H2O}=8,0518907611744285902 kmol/h
n_{CH4}=1674,0684800729025028 kmol/h
n_{NH3}=3511,9122698537457836 kmol/h
n_{AR}=142,07502426546486163 kmol/h
n_{O2}=-2,1518870110304629029e-19 kmol/h
n_{N2}=1275,4155804255615294 kmol/h
n_T=40988,437555287833675 kmol/h
y_{CO}=1,806727033857082752e-11
y_{H2}=0,8386978465019052642
y_{CO2}=1,7250363007247982035e-12
y_{H2O}=0,00019644297859154846676
y_{CH4}=0,040842456554115086353
y_{NH3}=0,085680559672874898047
y_{AR}=0,0034662220064823152585
y_{O2}=-5,2499854577961401306e-24
y_{N2}=0,031116472266238481836
T: 294.15 °K
p: 180 bar
H: -7160733.356603 J/kmol
========================================
========================================
==============17. Iteration=============
========================================
========================================
Mischung: Rücklauf + Zulauf + Vorwärmer
========================================
Rücklaufverhältnis = nr/nv = 0.8
Temperatur der Mischung: 403.251 °K
Vorwärmer (auf T= 573.15 °K), Q: 354565,76147810852854 kW
n_{CO}=3,1520591751275370045e-06 kmol/h
n_{H2}=203979,95789917439106 kmol/h
n_{CO2}=4,1636304322544012633e-07 kmol/h
n_{H2O}=2286,019541561324786 kmol/h
n_{CH4}=8481,5213664294187765 kmol/h
n_{NH3}=14100,719843298620617 kmol/h
n_{AR}=718,29747922395506521 kmol/h
n_{O2}=-8,6075480441218535375e-19 kmol/h
n_{N2}=16772,81118989420429 kmol/h
n_T=246339,32732315035537 kmol/h
y_{CO}=1,2795598694611334505e-11
y_{H2}=0,82804463305037556697
y_{CO2}=1,6902012672919699043e-12
y_{H2O}=0,0092799617763123197489
y_{CH4}=0,034430236773778613579
y_{NH3}=0,057241042250639737055
y_{AR}=0,0029158863386912043346
y_{O2}=-3,4941834654076110671e-24
y_{N2}=0,068088239795716681835
T: 573.15 °K
p: 180 bar
H: 876859.861627 J/kmol
========================================
========================================
Adiabate Ammoniaksynthese: Reaktor
========================================
Berücksichtigte (unabhängige) Reaktionen:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Totale zu tauschende Energie, Q: 1,4509624905056423499e-07 kW (adiabat)
n_{CO}=3,6488260830218195381e-06 kmol/h
n_{H2}=172641,80605929280864 kmol/h
n_{CO2}=3,845027980951171087e-07 kmol/h
n_{H2O}=2286,0195411282784335 kmol/h
n_{CH4}=8481,5213659645105508 kmol/h
n_{NH3}=34992,821070794896514 kmol/h
n_{AR}=718,29747922395506521 kmol/h
n_{O2}=4,2222759643263669419e-19 kmol/h
n_{N2}=6326,7605761460645226 kmol/h
n_T=225447,22609658385045 kmol/h
y_{CO}=1,6184834678154904908e-11
y_{H2}=0,76577480702881373098
y_{CO2}=1,705511328537669754e-12
y_{H2O}=0,010139931995210819601
y_{CH4}=0,037620872577651245927
y_{NH3}=0,15521513250202342848
y_{AR}=0,0031861003200643915265
y_{O2}=1,8728444955528090543e-24
y_{N2}=0,028063155558346132568
T: 716.204 °K
p: 180 bar
H: 958118.102451 J/kmol
Gesamtfehler: 40.003409742501084
========================================
========================================
Abkühler + Produkt-Abscheider
========================================
Vorkühler (auf T= 294.15 °K), Q: -966749,0230725936126 kW
Dampfphase (zum Splitter):
n_{CO}=3,6407661427937780179e-06 kmol/h
n_{H2}=172387,62544919201173 kmol/h
n_{CO2}=3,4770858205055899892e-07 kmol/h
n_{H2O}=40,35829288597089004 kmol/h
n_{CH4}=8405,8788292657973216 kmol/h
n_{NH3}=17602,012629652559554 kmol/h
n_{AR}=713,4539427508685776 kmol/h
n_{O2}=4,1926757032885486064e-19 kmol/h
n_{N2}=6313,8187847239323673 kmol/h
n_T=205463,14793245962937 kmol/h
y_{CO}=1,7719801235856676457e-11
y_{H2}=0,83901968395502490861
y_{CO2}=1,6923160456577728735e-12
y_{H2O}=0,00019642594446044872245
y_{CH4}=0,04091185652286551816
y_{NH3}=0,085669925756104603476
y_{AR}=0,0034724180462693756029
y_{O2}=2,0405974235870291003e-24
y_{N2}=0,030729689718185061037
H2/N2 Verhältnis: 27.30322667262449
Stoechiometrisches Verhältnis: 3
T: 294.15 °K
p: 180 bar
H: -7165438.066737 J/kmol
========================================
Produkt-Flüssigkeit:
n_{CO}=8,0599402280421736856e-09 kmol/h
n_{H2}=254,18061010083670226 kmol/h
n_{CO2}=3,6794216044558142867e-08 kmol/h
n_{H2O}=2245,6612482423056463 kmol/h
n_{CH4}=75,642536698715403531 kmol/h
n_{NH3}=17390,80844114233696 kmol/h
n_{AR}=4,8435364730865000382 kmol/h
n_{O2}=2,9600261037817729884e-21 kmol/h
n_{N2}=12,94179142213140743 kmol/h
n_T=19984,078164124268369 kmol/h
x_{CO}=4,0331808977978166046e-13
x_{H2}=0,012719156125780702385
x_{CO2}=1,8411765484817358997e-12
x_{H2O}=0,11237252129766384101
x_{CH4}=0,0037851401554169555527
x_{NH3}=0,87023320790945790648
x_{AR}=0,00024236977233495760856
x_{O2}=1,4811922174335355129e-25
x_{N2}=0,00064760512448245547562
T: 294.15 °K
p: 180 bar
H: -67577478.952739 J/kmol
========================================
========================================
Abtrennung: Splitter
========================================
Rücklaufverhältnis = nr/nv = 0.8
Rücklaufstrom:
n_{CO}=2,9126129142350223296e-06 kmol/h
n_{H2}=137910,10035935364431 kmol/h
n_{CO2}=2,7816686564044724149e-07 kmol/h
n_{H2O}=32,286634308776719138 kmol/h
n_{CH4}=6724,7030634126385849 kmol/h
n_{NH3}=14081,610103722048734 kmol/h
n_{AR}=570,76315420069488482 kmol/h
n_{O2}=3,3541405626308390777e-19 kmol/h
n_{N2}=5051,0550277791462577 kmol/h
n_T=164370,51834596772096 kmol/h
y_{CO}=1,7719801236524324303e-11
y_{H2}=0,83901968398663751003
y_{CO2}=1,6923160457215358568e-12
y_{H2O}=0,0001964259444678496491
y_{CH4}=0,040911856524406993441
y_{NH3}=0,085669925759332465898
y_{AR}=0,0034724180464002088811
y_{O2}=2,0405974236639145124e-24
y_{N2}=0,030729689719342891813
T: 294.15 °K
p: 180 bar
H: -7165438.066737 J/kmol
========================================
Abgas:
n_{CO}=7,2815322855875537064e-07 kmol/h
n_{H2}=34477,525089838396525 kmol/h
n_{CO2}=6,9541716410111797137e-08 kmol/h
n_{H2O}=8,0716585771941762317 kmol/h
n_{CH4}=1681,1757658531591915 kmol/h
n_{NH3}=3520,4025259305112741 kmol/h
n_{AR}=142,69078855017369278 kmol/h
n_{O2}=8,3853514065770952869e-20 kmol/h
n_{N2}=1262,7637569447861097 kmol/h
n_T=41092,629586491915688 kmol/h
y_{CO}=1,7719801236524324303e-11
y_{H2}=0,83901968398663739901
y_{CO2}=1,6923160457215360587e-12
y_{H2O}=0,0001964259444678496491
y_{CH4}=0,040911856524406993441
y_{NH3}=0,085669925759332479775
y_{AR}=0,0034724180464002093148
y_{O2}=2,0405974236639148797e-24
y_{N2}=0,030729689719342891813
T: 294.15 °K
p: 180 bar
H: -7165438.066737 J/kmol
========================================
========================================
==============18. Iteration=============
========================================
========================================
Mischung: Rücklauf + Zulauf + Vorwärmer
========================================
Rücklaufverhältnis = nr/nv = 0.8
Temperatur der Mischung: 403.066 °K
Vorwärmer (auf T= 573.15 °K), Q: 355562,87200810742797 kW
n_{CO}=3,1024753610945011683e-06 kmol/h
n_{H2}=204382,40102213711361 kmol/h
n_{CO2}=4,1170373809443482424e-07 kmol/h
n_{H2O}=2286,0986128254039613 kmol/h
n_{CH4}=8509,9505095504446217 kmol/h
n_{NH3}=14134,680867605684398 kmol/h
n_{AR}=720,76053636279038983 kmol/h
n_{O2}=3,3541405626308390777e-19 kmol/h
n_{N2}=16722,203895971098973 kmol/h
n_T=246756,09544796668342 kmol/h
y_{CO}=1,2573044469123227957e-11
y_{H2}=0,82827701034536382885
y_{CO2}=1,6684643082352976928e-12
y_{H2O}=0,0092646084737042378593
y_{CH4}=0,034487296024445862619
y_{NH3}=0,057281992738397236042
y_{AR}=0,002920943189080314837
y_{O2}=1,3592939037804335513e-24
y_{N2}=0,067768149214767023358
T: 573.15 °K
p: 180 bar
H: 874697.684486 J/kmol
========================================
========================================
Adiabate Ammoniaksynthese: Reaktor
========================================
Berücksichtigte (unabhängige) Reaktionen:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Totale zu tauschende Energie, Q: 6,5066019694010421341e-08 kW (adiabat)
n_{CO}=3,5999149427434647797e-06 kmol/h
n_{H2}=173046,20670295078889 kmol/h
n_{CO2}=3,7935271444726385333e-07 kmol/h
n_{H2O}=2286,0986123926663822 kmol/h
n_{CH4}=8509,9505090853544971 kmol/h
n_{NH3}=35025,47708130517276 kmol/h
n_{AR}=720,76053636279038983 kmol/h
n_{O2}=3,1277028341138844169e-28 kmol/h
n_{N2}=6276,8057891213538824 kmol/h
n_T=225865,29923519739532 kmol/h
y_{CO}=1,5938326759060106568e-11
y_{H2}=0,76614782035532957849
y_{CO2}=1,6795528827659240825e-12
y_{H2O}=0,010121513221081884087
y_{CH4}=0,037677104618995940766
y_{NH3}=0,15507241351329736911
y_{AR}=0,0031911078806853379088
y_{O2}=1,3847646560603158492e-33
y_{N2}=0,027790040392991970791
T: 715.942 °K
p: 180 bar
H: 955600.466527 J/kmol
Gesamtfehler: 0.2342529296875
========================================
========================================
Abkühler + Produkt-Abscheider
========================================
Vorkühler (auf T= 294.15 °K), Q: -967720,42719452746678 kW
Dampfphase (zum Splitter):
n_{CO}=3,5919800679360487902e-06 kmol/h
n_{H2}=172791,9808745670889 kmol/h
n_{CO2}=3,4311931180523050154e-07 kmol/h
n_{H2O}=40,437602956895986495 kmol/h
n_{CH4}=8434,2136556486948393 kmol/h
n_{NH3}=17636,254775943227287 kmol/h
n_{AR}=715,91067247387968564 kmol/h
n_{O2}=3,1058224377996683993e-28 kmol/h
n_{N2}=6263,9937036229230216 kmol/h
n_T=205882,7912891478336 kmol/h
y_{CO}=1,7446723183159963487e-11
y_{H2}=0,83927354873120507683
y_{CO2}=1,6665759660804406236e-12
y_{H2O}=0,00019641079617261836461
y_{CH4}=0,040966093389900815058
y_{NH3}=0,085661626524661013682
y_{AR}=0,0034772730054982172419
y_{O2}=1,5085391150144488375e-33
y_{N2}=0,030425047495032987027
H2/N2 Verhältnis: 27.5849544316478
Stoechiometrisches Verhältnis: 3
T: 294.15 °K
p: 180 bar
H: -7169114.924407 J/kmol
========================================
Produkt-Flüssigkeit:
n_{CO}=7,9348748074160143698e-09 kmol/h
n_{H2}=254,2258283836622752 kmol/h
n_{CO2}=3,6233402642033312085e-08 kmol/h
n_{H2O}=2245,6610094357706657 kmol/h
n_{CH4}=75,736853436660240391 kmol/h
n_{NH3}=17389,222305361938197 kmol/h
n_{AR}=4,8498638889107832384 kmol/h
n_{O2}=2,1880396314215860032e-30 kmol/h
n_{N2}=12,812085498430816344 kmol/h
n_T=19982,507946049539896 kmol/h
x_{CO}=3,9709103742032340314e-13
x_{H2}=0,012722418485740936284
x_{CO2}=1,8132560114176170135e-12
x_{H2O}=0,11238133953894259565
x_{CH4}=0,0037901575553538082407
x_{NH3}=0,87022221430831137035
x_{AR}=0,00024270546539625803182
x_{O2}=1,0949774864071487382e-34
x_{N2}=0,00064116503984850336736
T: 294.15 °K
p: 180 bar
H: -67579482.052713 J/kmol
========================================
========================================
Abtrennung: Splitter
========================================
Rücklaufverhältnis = nr/nv = 0.8
Rücklaufstrom:
n_{CO}=2,873584054348839371e-06 kmol/h
n_{H2}=138233,58469965367112 kmol/h
n_{CO2}=2,74495449444184433e-07 kmol/h
n_{H2O}=32,350082365516790617 kmol/h
n_{CH4}=6747,3709245189565991 kmol/h
n_{NH3}=14109,003820754582193 kmol/h
n_{AR}=572,72853797910374851 kmol/h
n_{O2}=2,4846579502397349885e-28 kmol/h
n_{N2}=5011,1949628983393268 kmol/h
n_T=164706,23303131826106 kmol/h
y_{CO}=1,7446723183830199503e-11
y_{H2}=0,8392735487634466196
y_{CO2}=1,6665759661444639183e-12
y_{H2O}=0,00019641079618016367989
y_{CH4}=0,040966093391474570073
y_{NH3}=0,085661626527951797994
y_{AR}=0,0034772730056318004906
y_{O2}=1,5085391150724010476e-33
y_{N2}=0,030425047496201798602
T: 294.15 °K
p: 180 bar
H: -7169114.924407 J/kmol
========================================
Abgas:
n_{CO}=7,1839601358720963098e-07 kmol/h
n_{H2}=34558,396174913410505 kmol/h
n_{CO2}=6,8623862361046095014e-08 kmol/h
n_{H2O}=8,0875205913791958778 kmol/h
n_{CH4}=1686,842731129738695 kmol/h
n_{NH3}=3527,2509551886446388 kmol/h
n_{AR}=143,18213449477590871 kmol/h
n_{O2}=6,2116448755993352291e-29 kmol/h
n_{N2}=1252,7987407245843769 kmol/h
n_T=41176,558257829557988 kmol/h
y_{CO}=1,7446723183830196271e-11
y_{H2}=0,8392735487634466196
y_{CO2}=1,6665759661444639183e-12
y_{H2O}=0,00019641079618016365278
y_{CH4}=0,040966093391474563135
y_{NH3}=0,085661626527951797994
y_{AR}=0,0034772730056318004906
y_{O2}=1,5085391150724007055e-33
y_{N2}=0,030425047496201791664
T: 294.15 °K
p: 180 bar
H: -7169114.924407 J/kmol
========================================
========================================
==============19. Iteration=============
========================================
========================================
Mischung: Rücklauf + Zulauf + Vorwärmer
========================================
Rücklaufverhältnis = nr/nv = 0.8
Temperatur der Mischung: 402.917 °K
Vorwärmer (auf T= 573.15 °K), Q: 356365,88080250157509 kW
n_{CO}=3,0634465012083182096e-06 kmol/h
n_{H2}=204705,88536243716953 kmol/h
n_{CO2}=4,0803232189817201575e-07 kmol/h
n_{H2O}=2286,1620608821435781 kmol/h
n_{CH4}=8532,6183706567626359 kmol/h
n_{NH3}=14162,074584638217857 kmol/h
n_{AR}=722,72592014119925352 kmol/h
n_{O2}=0 kmol/h
n_{N2}=16682,343831090292952 kmol/h
n_T=247091,81013331722352 kmol/h
y_{CO}=1,2398009062119258884e-11
y_{H2}=0,82846082697758804958
y_{CO2}=1,6513389160005753285e-12
y_{H2O}=0,0092522777652915954433
y_{CH4}=0,034532178003200623972
y_{NH3}=0,057315030299859535956
y_{AR}=0,0029249286722666197925
y_{O2}=0
y_{N2}=0,067514758267744334752
T: 573.15 °K
p: 180 bar
H: 873022.350285 J/kmol
========================================
========================================
Adiabate Ammoniaksynthese: Reaktor
========================================
Berücksichtigte (unabhängige) Reaktionen:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Totale zu tauschende Energie, Q: -1,4971415201822918988e-07 kW (adiabat)
n_{CO}=3,5613107398672886736e-06 kmol/h
n_{H2}=173370,97316460881848 kmol/h
n_{CO2}=3,7528497193004883571e-07 kmol/h
n_{H2O}=2286,1620604497738896 kmol/h
n_{CH4}=8532,6183701916452264 kmol/h
n_{NH3}=35052,016050765516411 kmol/h
n_{AR}=722,72592014119925352 kmol/h
n_{O2}=-3,1386767211954549946e-18 kmol/h
n_{N2}=6237,373098026646403 kmol/h
n_T=226201,86866812015069 kmol/h
y_{CO}=1,5743949246910902634e-11
y_{H2}=0,76644359388107441422
y_{CO2}=1,6590710507376980053e-12
y_{H2O}=0,010106733750303341715
y_{CH4}=0,037721255003028156261
y_{NH3}=0,15495900302306203633
y_{AR}=0,0031950484069677223542
y_{O2}=-1,3875556111344387664e-23
y_{N2}=0,027574365918161456573
T: 715.733 °K
p: 180 bar
H: 953646.731959 J/kmol
Gesamtfehler: 64.00226894302365
========================================
========================================
Abkühler + Produkt-Abscheider
========================================
Vorkühler (auf T= 294.15 °K), Q: -968506,59023788652848 kW
Dampfphase (zum Splitter):
n_{CO}=3,5534743039281609232e-06 kmol/h
n_{H2}=173116,7073034660425 kmol/h
n_{CO2}=3,3949354277055314011e-07 kmol/h
n_{H2O}=40,501168459774909536 kmol/h
n_{CH4}=8456,8063979938178818 kmol/h
n_{NH3}=17663,834221735702158 kmol/h
n_{AR}=717,87101162610508709 kmol/h
n_{O2}=-3,1167564721802781268e-18 kmol/h
n_{N2}=6224,6631753571919035 kmol/h
n_T=206220,3832825316058 kmol/h
y_{CO}=1,7231440690909417075e-11
y_{H2}=0,83947427766893112366
y_{CO2}=1,6462656957250849071e-12
y_{H2O}=0,00019639750355185107869
y_{CH4}=0,04100858636305473176
y_{NH3}=0,085655132338913420043
y_{AR}=0,0034810865937271884968
y_{O2}=-1,5113716803583867962e-23
y_{N2}=0,030184519473937292122
H2/N2 Verhältnis: 27.811417650487098
Stoechiometrisches Verhältnis: 3
T: 294.15 °K
p: 180 bar
H: -7171995.655518 J/kmol
========================================
Produkt-Flüssigkeit:
n_{CO}=7,8364359391287926874e-09 kmol/h
n_{H2}=254,2658611427545452 kmol/h
n_{CO2}=3,5791429159495788247e-08 kmol/h
n_{H2O}=2245,6608919899981629 kmol/h
n_{CH4}=75,811972197828069397 kmol/h
n_{NH3}=17388,181829029806977 kmol/h
n_{AR}=4,8549085150942818956 kmol/h
n_{O2}=-2,1920249015176967146e-20 kmol/h
n_{N2}=12,709922669453932897 kmol/h
n_T=19981,485385588563076 kmol/h
x_{CO}=3,9218485468229874489e-13
x_{H2}=0,012725073053302714612
x_{CO2}=1,7912296550144680623e-12
x_{H2O}=0,11238708482171697045
x_{CH4}=0,0037941109364685418788
x_{NH3}=0,87021467626079773705
x_{AR}=0,00024297035097051831337
x_{O2}=-1,0970280037244483332e-24
x_{N2}=0,00063608497713275597828
T: 294.15 °K
p: 180 bar
H: -67580821.120599 J/kmol
========================================
========================================
Abtrennung: Splitter
========================================
Rücklaufverhältnis = nr/nv = 0.8
Rücklaufstrom:
n_{CO}=2,8427794431425287386e-06 kmol/h
n_{H2}=138493,36584277285147 kmol/h
n_{CO2}=2,7159483421644251209e-07 kmol/h
n_{H2O}=32,400934767819933313 kmol/h
n_{CH4}=6765,445118395055033 kmol/h
n_{NH3}=14131,067377388560999 kmol/h
n_{AR}=574,29680930088397872 kmol/h
n_{O2}=-2,4934051777442228866e-18 kmol/h
n_{N2}=4979,7305402857537047 kmol/h
n_T=164976,30662602529628 kmol/h
y_{CO}=1,7231440691581559483e-11
y_{H2}=0,8394742777016762636
y_{CO2}=1,6462656957893004567e-12
y_{H2O}=0,00019639750355951191571
y_{CH4}=0,041008586364654341094
y_{NH3}=0,085655132342254539091
y_{AR}=0,0034810865938629744105
y_{O2}=-1,5113716804173404828e-23
y_{N2}=0,030184519475114697518
T: 294.15 °K
p: 180 bar
H: -7171995.655518 J/kmol
========================================
Abgas:
n_{CO}=7,1069486078563197288e-07 kmol/h
n_{H2}=34623,34146069320559 kmol/h
n_{CO2}=6,7898708554110614787e-08 kmol/h
n_{H2O}=8,1002336919549815519 kmol/h
n_{CH4}=1691,3612795987633035 kmol/h
n_{NH3}=3532,7668443471393402 kmol/h
n_{AR}=143,57420232522096626 kmol/h
n_{O2}=-6,2335129443605552906e-19 kmol/h
n_{N2}=1244,9326350714379714 kmol/h
n_T=41244,076656506316795 kmol/h
y_{CO}=1,7231440691581559483e-11
y_{H2}=0,83947427770167615257
y_{CO2}=1,6462656957893004567e-12
y_{H2O}=0,00019639750355951188861
y_{CH4}=0,041008586364654341094
y_{NH3}=0,085655132342254539091
y_{AR}=0,0034810865938629744105
y_{O2}=-1,5113716804173404828e-23
y_{N2}=0,030184519475114690579
T: 294.15 °K
p: 180 bar
H: -7171995.655518 J/kmol
========================================
========================================
==============20. Iteration=============
========================================
========================================
Mischung: Rücklauf + Zulauf + Vorwärmer
========================================
Rücklaufverhältnis = nr/nv = 0.8
Temperatur der Mischung: 402.798 °K
Vorwärmer (auf T= 573.15 °K), Q: 357011,7524670867715 kW
n_{CO}=3,0326418900020080008e-06 kmol/h
n_{H2}=204965,66650555632077 kmol/h
n_{CO2}=4,0513170667043009484e-07 kmol/h
n_{H2O}=2286,2129132844470405 kmol/h
n_{CH4}=8550,6925645328610699 kmol/h
n_{NH3}=14184,138141272196663 kmol/h
n_{AR}=724,29419146297948373 kmol/h
n_{O2}=-2,4934051777442228866e-18 kmol/h
n_{N2}=16650,879408477707329 kmol/h
n_T=247361,88372802431695 kmol/h
y_{CO}=1,2259940150425170075e-11
y_{H2}=0,82860650726171358738
y_{CO2}=1,6378097569626957928e-12
y_{H2O}=0,0092423815618988009596
y_{CH4}=0,034567543049334119309
y_{NH3}=0,057341648306930469159
y_{AR}=0,00292807517693123966
y_{O2}=-1,0079989447710281897e-23
y_{N2}=0,067313844629293975719
T: 573.15 °K
p: 180 bar
H: 871720.276964 J/kmol
========================================
========================================
Adiabate Ammoniaksynthese: Reaktor
========================================
Berücksichtigte (unabhängige) Reaktionen:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Totale zu tauschende Energie, Q: 2,825631035698784632e-07 kW (adiabat)
n_{CO}=3,5307702739236095846e-06 kmol/h
n_{H2}=173631,57801701896824 kmol/h
n_{CO2}=3,7206481213577418978e-07 kmol/h
n_{H2O}=2286,2129128524525186 kmol/h
n_{CH4}=8550,6925640678000491 kmol/h
n_{NH3}=35073,530467871845758 kmol/h
n_{AR}=724,29419146297948373 kmol/h
n_{O2}=2,7113280379045867826e-18 kmol/h
n_{N2}=6206,1832451778855102 kmol/h
n_T=226472,491402354819 kmol/h
y_{CO}=1,5590283182122917586e-11
y_{H2}=0,76667844709025700922
y_{CO2}=1,64286978004210519e-12
y_{H2O}=0,010094881275407212654
y_{CH4}=0,037755987542330234075
y_{NH3}=0,1548688330785376599
y_{AR}=0,0031981552681212234088
y_{O2}=1,1971997221894805075e-23
y_{N2}=0,0274036957281132941
T: 715.567 °K
p: 180 bar
H: 952126.099111 J/kmol
Gesamtfehler: 40.01293231039046
========================================
========================================
Abkühler + Produkt-Abscheider
========================================
Vorkühler (auf T= 294.15 °K), Q: -969141,63597618637141 kW
Dampfphase (zum Splitter):
n_{CO}=3,5230115364946980579e-06 kmol/h
n_{H2}=173377,2773230112798 kmol/h
n_{CO2}=3,3662262138673198775e-07 kmol/h
n_{H2O}=40,552072691336356058 kmol/h
n_{CH4}=8474,8207075088557758 kmol/h
n_{NH3}=17686,017708994746499 kmol/h
n_{AR}=719,43525708512083838 kmol/h
n_{O2}=2,6924176773672085446e-18 kmol/h
n_{N2}=6193,5539826197500588 kmol/h
n_T=206491,6570557707455 kmol/h
y_{CO}=1,7061277857846231141e-11
y_{H2}=0,83963332847553662219
y_{CO2}=1,6301996224599216502e-12
y_{H2O}=0,00019638601030153311809
y_{CH4}=0,041041952144755679266
y_{NH3}=0,08565003526277907564
y_{AR}=0,0034840887390545044293
y_{O2}=1,3038869054809880943e-23
y_{N2}=0,029994209309397962954
H2/N2 Verhältnis: 27.993180944178373
Stoechiometrisches Verhältnis: 3
T: 294.15 °K
p: 180 bar
H: -7174257.470650 J/kmol
========================================
Produkt-Flüssigkeit:
n_{CO}=7,7587374289111113916e-09 kmol/h
n_{H2}=254,30069400768317678 kmol/h
n_{CO2}=3,5442190749042241731e-08 kmol/h
n_{H2O}=2245,6608401611174486 kmol/h
n_{CH4}=75,871856558942340598 kmol/h
n_{NH3}=17387,512758877084707 kmol/h
n_{AR}=4,8589343778586506772 kmol/h
n_{O2}=1,8910360537378608139e-20 kmol/h
n_{N2}=12,629262558134238148 kmol/h
n_T=19980,834346584018931 kmol/h
x_{CO}=3,8830898137161130803e-13
x_{H2}=0,012727230990477764877
x_{CO2}=1,7738093489355806355e-12
x_{H2O}=0,11239074415635448567
x_{CH4}=0,003797231650782989
x_{NH3}=0,87020954502555147858
x_{AR}=0,00024317975393614942969
x_{O2}=9,4642496990260370981e-25
x_{N2}=0,00063206882876973302394
T: 294.15 °K
p: 180 bar
H: -67581703.980588 J/kmol
========================================
========================================
Abtrennung: Splitter
========================================
Rücklaufverhältnis = nr/nv = 0.8
Rücklaufstrom:
n_{CO}=2,8184092291957589546e-06 kmol/h
n_{H2}=138701,82185840900638 kmol/h
n_{CO2}=2,6929809710938561138e-07 kmol/h
n_{H2O}=32,441658153069084847 kmol/h
n_{CH4}=6779,8565660070853482 kmol/h
n_{NH3}=14148,814167195796472 kmol/h
n_{AR}=575,54820566809678439 kmol/h
n_{O2}=2,1539341418937669898e-18 kmol/h
n_{N2}=4954,8431860957998651 kmol/h
n_T=165193,32564461653237 kmol/h
y_{CO}=1,7061277858519869584e-11
y_{H2}=0,83963332850868810375
y_{CO2}=1,6301996225242874494e-12
y_{H2O}=0,00019638601030928708809
y_{CH4}=0,041041952146376153854
y_{NH3}=0,085650035266160828851
y_{AR}=0,0034840887391920680009
y_{O2}=1,3038869055324700449e-23
y_{N2}=0,029994209310582234385
T: 294.15 °K
p: 180 bar
H: -7174257.470650 J/kmol
========================================
Abgas:
n_{CO}=7,04602307298939421e-07 kmol/h
n_{H2}=34675,455464602244319 kmol/h
n_{CO2}=6,7324524277346376375e-08 kmol/h
n_{H2O}=8,1104145382672694353 kmol/h
n_{CH4}=1694,9641415017708823 kmol/h
n_{NH3}=3537,2035417989482085 kmol/h
n_{AR}=143,88705141702413925 kmol/h
n_{O2}=5,3848353547344155485e-19 kmol/h
n_{N2}=1238,7107965239497389 kmol/h
n_T=41298,331411154125817 kmol/h
y_{CO}=1,7061277858519866352e-11
y_{H2}=0,83963332850868810375
y_{CO2}=1,6301996225242870455e-12
y_{H2O}=0,00019638601030928706098
y_{CH4}=0,041041952146376153854
y_{NH3}=0,085650035266160814973
y_{AR}=0,0034840887391920671336
y_{O2}=1,3038869055324698979e-23
y_{N2}=0,029994209310582230915
T: 294.15 °K
p: 180 bar
H: -7174257.470650 J/kmol
========================================
========================================
==============21. Iteration=============
========================================
========================================
Mischung: Rücklauf + Zulauf + Vorwärmer
========================================
Rücklaufverhältnis = nr/nv = 0.8
Temperatur der Mischung: 402.702 °K
Vorwärmer (auf T= 573.15 °K), Q: 357530,65626138664084 kW
n_{CO}=3,0082716760552377933e-06 kmol/h
n_{H2}=205174,12252119250479 kmol/h
n_{CO2}=4,0283496956337319413e-07 kmol/h
n_{H2O}=2286,2536366696963341 kmol/h
n_{CH4}=8565,1040121448913851 kmol/h
n_{NH3}=14201,884931079432135 kmol/h
n_{AR}=725,5455878301922894 kmol/h
n_{O2}=2,1539341418937669898e-18 kmol/h
n_{N2}=16625,992054287755309 kmol/h
n_T=247578,90274661555304 kmol/h
y_{CO}=1,2150759384914356863e-11
y_{H2}=0,82872215784548408646
y_{CO2}=1,627097321679522613e-12
y_{H2O}=0,0092344444995362175849
y_{CH4}=0,034595451862515280705
y_{NH3}=0,057363065970182203213
y_{AR}=0,0029305630640618493196
y_{O2}=8,6999906615557194874e-24
y_{N2}=0,067154316744442527076
T: 573.15 °K
p: 180 bar
H: 870705.560286 J/kmol
========================================
========================================
Adiabate Ammoniaksynthese: Reaktor
========================================
Berücksichtigte (unabhängige) Reaktionen:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Totale zu tauschende Energie, Q: -1,1975606282552084194e-07 kW (adiabat)
n_{CO}=3,5065615417277221685e-06 kmol/h
n_{H2}=173840,55341253406368 kmol/h
n_{CO2}=3,6951080068667893801e-07 kmol/h
n_{H2O}=2286,2536362380546962 kmol/h
n_{CH4}=8565,1040116799249517 kmol/h
n_{NH3}=35090,931004426107393 kmol/h
n_{AR}=725,5455878301922894 kmol/h
n_{O2}=-1,7028693931892898718e-18 kmol/h
n_{N2}=6181,4690176144167708 kmol/h
n_T=226689,85667419887614 kmol/h
y_{CO}=1,5468541879963295432e-11
y_{H2}=0,76686516089857337253
y_{CO2}=1,6300279426165233498e-12
y_{H2O}=0,010085381277221782889
y_{CH4}=0,03778335800877842543
y_{NH3}=0,15479709378818468091
y_{AR}=0,0032006089662536350154
y_{O2}=-7,5118905546650558874e-24
y_{N2}=0,027268397043889315567
T: 715.434 °K
p: 180 bar
H: 950939.448254 J/kmol
Gesamtfehler: 0.4310915855835322
========================================
========================================
Abkühler + Produkt-Abscheider
========================================
Vorkühler (auf T= 294.15 °K), Q: -969653,71292246296071 kW
Dampfphase (zum Splitter):
n_{CO}=3,4988642777572063556e-06 kmol/h
n_{H2}=173586,22288416561787 kmol/h
n_{CO2}=3,3434518177902823404e-07 kmol/h
n_{H2O}=40,59281337986641347 kmol/h
n_{CH4}=8489,1844049473802443 kmol/h
n_{NH3}=17703,840267885367211 kmol/h
n_{AR}=720,68343960060872178 kmol/h
n_{O2}=-1,6910052409411279614e-18 kmol/h
n_{N2}=6168,9035722231019463 kmol/h
n_T=206709,42738603512407 kmol/h
y_{CO}=1,6926486236565131976e-11
y_{H2}=0,8397595846128572683
y_{CO2}=1,6174646023343972778e-12
y_{H2O}=0,00019637620737269551778
y_{CH4}=0,04106820144567387143
y_{NH3}=0,085646022491845719138
y_{AR}=0,003486456562167517155
y_{O2}=-8,1805908044815280681e-24
y_{N2}=0,029843358621748052734
H2/N2 Verhältnis: 28.138910075653843
Stoechiometrisches Verhältnis: 3
T: 294.15 °K
p: 180 bar
H: -7176036.555218 J/kmol
========================================
Produkt-Flüssigkeit:
n_{CO}=7,6972639705150403429e-09 kmol/h
n_{H2}=254,33052836841554267 kmol/h
n_{CO2}=3,5165618907650730448e-08 kmol/h
n_{H2O}=2245,6608228581858384 kmol/h
n_{CH4}=75,919606732545418026 kmol/h
n_{NH3}=17387,090736540718353 kmol/h
n_{AR}=4,8621482295834397291 kmol/h
n_{O2}=-1,1864152248161480064e-20 kmol/h
n_{N2}=12,565445391314364443 kmol/h
n_T=19980,429288163628371 kmol/h
x_{CO}=3,852401699018308567e-13
x_{H2}=0,012728982185772026031
x_{CO2}=1,7600031718517156395e-12
x_{H2O}=0,11239302176119676802
x_{CH4}=0,0037996984783891956695
x_{NH3}=0,87020606479152684543
x_{AR}=0,00024334553384522646991
x_{O2}=-5,9378865598617269014e-25
x_{N2}=0,00062888765878172422485
T: 294.15 °K
p: 180 bar
H: -67582279.592574 J/kmol
========================================
========================================
Abtrennung: Splitter
========================================
Rücklaufverhältnis = nr/nv = 0.8
Rücklaufstrom:
n_{CO}=2,7990914222057650845e-06 kmol/h
n_{H2}=138868,9783073324943 kmol/h
n_{CO2}=2,674761454232226084e-07 kmol/h
n_{H2O}=32,474250703893133618 kmol/h
n_{CH4}=6791,3475239579047411 kmol/h
n_{NH3}=14163,072214308294861 kmol/h
n_{AR}=576,54675168048709111 kmol/h
n_{O2}=-1,3528041927529022921e-18 kmol/h
n_{N2}=4935,1228577784813751 kmol/h
n_T=165367,54190882813418 kmol/h
y_{CO}=1,6926486237238650865e-11
y_{H2}=0,83975958464627198374
y_{CO2}=1,6174646023987578264e-12
y_{H2O}=0,00019637620738050952547
y_{CH4}=0,041068201447308015639
y_{NH3}=0,085646022495253645856
y_{AR}=0,0034864565623062464608
y_{O2}=-8,1805908048070400862e-24
y_{N2}=0,029843358622935543811
T: 294.15 °K
p: 180 bar
H: -7176036.555218 J/kmol
========================================
Abgas:
n_{CO}=6,9977285555144116523e-07 kmol/h
n_{H2}=34717,244576833116298 kmol/h
n_{CO2}=6,6869036355805638866e-08 kmol/h
n_{H2O}=8,1185626759732798519 kmol/h
n_{CH4}=1697,8368809894757305 kmol/h
n_{NH3}=3540,7680535770728056 kmol/h
n_{AR}=144,13668792012174436 kmol/h
n_{O2}=-3,3820104818822547673e-19 kmol/h
n_{N2}=1233,7807144446201164 kmol/h
n_T=41341,88547720702627 kmol/h
y_{CO}=1,6926486237238650865e-11
y_{H2}=0,83975958464627187272
y_{CO2}=1,6174646023987576244e-12
y_{H2O}=0,00019637620738050947126
y_{CH4}=0,0410682014473080087
y_{NH3}=0,085646022495253645856
y_{AR}=0,0034864565623062460271
y_{O2}=-8,1805908048070386169e-24
y_{N2}=0,029843358622935543811
T: 294.15 °K
p: 180 bar
H: -7176036.555218 J/kmol
========================================
========================================
==============22. Iteration=============
========================================
========================================
Mischung: Rücklauf + Zulauf + Vorwärmer
========================================
Rücklaufverhältnis = nr/nv = 0.8
Temperatur der Mischung: 402.625 °K
Vorwärmer (auf T= 573.15 °K), Q: 357947,15296189428773 kW
n_{CO}=2,9889538690652443467e-06 kmol/h
n_{H2}=205341,2789701159636 kmol/h
n_{CO2}=4,0101301787721019115e-07 kmol/h
n_{H2O}=2286,286229220520454 kmol/h
n_{CH4}=8576,5949700957116875 kmol/h
n_{NH3}=14216,142978191930524 kmol/h
n_{AR}=726,54413384258259612 kmol/h
n_{O2}=-1,3528041927529022921e-18 kmol/h
n_{N2}=16606,271725970436819 kmol/h
n_T=247753,11901082715485 kmol/h
y_{CO}=1,2064243150596311695e-11
y_{H2}=0,82881410248216602632
y_{CO2}=1,6185992712353517673e-12
y_{H2O}=0,0092280825296920127726
y_{CH4}=0,034617505540751242199
y_{NH3}=0,057380278540794742159
y_{AR}=0,0029325327436577359325
y_{O2}=-5,4602912696077218944e-24
y_{N2}=0,067027498149255265725
T: 573.15 °K
p: 180 bar
H: 869912.923948 J/kmol
========================================
========================================
Adiabate Ammoniaksynthese: Reaktor
========================================
Berücksichtigte (unabhängige) Reaktionen:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Totale zu tauschende Energie, Q: 2,2158728705512153565e-07 kW (adiabat)
n_{CO}=3,4873403098421748435e-06 kmol/h
n_{H2}=174008,03084065107396 kmol/h
n_{CO2}=3,6748196310300093301e-07 kmol/h
n_{H2O}=2286,2862287891957749 kmol/h
n_{CH4}=8576,5949696308562125 kmol/h
n_{NH3}=35104,97506540921313 kmol/h
n_{AR}=726,54413384258259612 kmol/h
n_{O2}=4,3016425399394744548e-19 kmol/h
n_{N2}=6161,8556823617927876 kmol/h
n_T=226864,28692453957046 kmol/h
y_{CO}=1,5371922822749737751e-11
y_{H2}=0,76701376492338901869
y_{CO2}=1,6198316979932331236e-12
y_{H2O}=0,010077770546360472273
y_{CH4}=0,037804958576329976416
y_{NH3}=0,1547399793123276357
y_{AR}=0,003202549611011486913
y_{O2}=1,8961303245452212983e-24
y_{N2}=0,027160977013589503865
T: 715.328 °K
p: 180 bar
H: 950011.317773 J/kmol
Gesamtfehler: 8.039680946751458
========================================
========================================
Abkühler + Produkt-Abscheider
========================================
Vorkühler (auf T= 294.15 °K), Q: -970065,99101395125035 kW
Dampfphase (zum Splitter):
n_{CO}=3,4796917787130602244e-06 kmol/h
n_{H2}=173753,67509884663741 kmol/h
n_{CO2}=3,3253577075827484636e-07 kmol/h
n_{H2O}=40,625405850190411172 kmol/h
n_{CH4}=8500,6372946052051702 kmol/h
n_{NH3}=17718,145087608056201 kmol/h
n_{AR}=721,67942014305378962 kmol/h
n_{O2}=4,2716977266795259831e-19 kmol/h
n_{N2}=6149,3408198456954779 kmol/h
n_T=206884,10313071109704 kmol/h
y_{CO}=1,6819522263511213658e-11
y_{H2}=0,83985996247426086825
y_{CO2}=1,6073529368027048094e-12
y_{H2O}=0,00019636794337404505869
y_{CH4}=0,041088885833309315987
y_{NH3}=0,085642854229854445403
y_{AR}=0,0034883270835853233925
y_{O2}=2,064778135132674703e-24
y_{N2}=0,029723602377094860555
H2/N2 Verhältnis: 28.255658645247546
Stoechiometrisches Verhältnis: 3
T: 294.15 °K
p: 180 bar
H: -7177438.053169 J/kmol
========================================
Produkt-Flüssigkeit:
n_{CO}=7,6485311291143428085e-09 kmol/h
n_{H2}=254,35574180441236081 kmol/h
n_{CO2}=3,4946192344726106507e-08 kmol/h
n_{H2O}=2245,6608229390058113 kmol/h
n_{CH4}=75,957675025653387024 kmol/h
n_{NH3}=17386,829977801160567 kmol/h
n_{AR}=4,8647136995286821559 kmol/h
n_{O2}=2,9944813259948343746e-21 kmol/h
n_{N2}=12,514862516098434142 kmol/h
n_T=19980,183793828451599 kmol/h
x_{CO}=3,8280584459148722779e-13
x_{H2}=0,012730400507555637499
x_{CO2}=1,749042587387428379e-12
x_{H2O}=0,11239440272641253171
x_{CH4}=0,0038016504673320087627
x_{NH3}=0,87020370605348673632
x_{AR}=0,00024347692452413309712
x_{O2}=1,4987256164094670459e-25
x_{N2}=0,00062636373370897802017
T: 294.15 °K
p: 180 bar
H: -67582651.347907 J/kmol
========================================
========================================
Abtrennung: Splitter
========================================
Rücklaufverhältnis = nr/nv = 0.8
Rücklaufstrom:
n_{CO}=2,783753422970448349e-06 kmol/h
n_{H2}=139002,94007907732157 kmol/h
n_{CO2}=2,6602861660661991944e-07 kmol/h
n_{H2O}=32,50032468015233178 kmol/h
n_{CH4}=6800,5098356841645 kmol/h
n_{NH3}=14174,516070086447144 kmol/h
n_{AR}=577,34353611444294074 kmol/h
n_{O2}=3,4173581813436214606e-19 kmol/h
n_{N2}=4919,4726558765560185 kmol/h
n_T=165507,28250456883688 kmol/h
y_{CO}=1,6819522264185582346e-11
y_{H2}=0,83985996250793459872
y_{CO2}=1,6073529368671505802e-12
y_{H2O}=0,00019636794338191831803
y_{CH4}=0,041088885834956755116
y_{NH3}=0,085642854233288240318
y_{AR}=0,0034883270837251850391
y_{O2}=2,0647781352154607293e-24
y_{N2}=0,029723602378286608644
T: 294.15 °K
p: 180 bar
H: -7177438.053169 J/kmol
========================================
Abgas:
n_{CO}=6,9593835574261187548e-07 kmol/h
n_{H2}=34750,735019769315841 kmol/h
n_{CO2}=6,6507154151654966624e-08 kmol/h
n_{H2O}=8,1250811700380811686 kmol/h
n_{CH4}=1700,1274589210406702 kmol/h
n_{NH3}=3543,6290175216108764 kmol/h
n_{AR}=144,33588402861070676 kmol/h
n_{O2}=8,5433954533590500403e-20 kmol/h
n_{N2}=1229,8681639691385499 kmol/h
n_T=41376,820626142194669 kmol/h
y_{CO}=1,6819522264185582346e-11
y_{H2}=0,83985996250793459872
y_{CO2}=1,6073529368671507821e-12
y_{H2O}=0,00019636794338191831803
y_{CH4}=0,041088885834956755116
y_{NH3}=0,085642854233288254195
y_{AR}=0,0034883270837251854728
y_{O2}=2,0647781352154610967e-24
y_{N2}=0,029723602378286608644
T: 294.15 °K
p: 180 bar
H: -7177438.053169 J/kmol
========================================
========================================
==============23. Iteration=============
========================================
========================================
Mischung: Rücklauf + Zulauf + Vorwärmer
========================================
Rücklaufverhältnis = nr/nv = 0.8
Temperatur der Mischung: 402.564 °K
Vorwärmer (auf T= 573.15 °K), Q: 358281,18376050889492 kW
n_{CO}=2,9736158698299276112e-06 kmol/h
n_{H2}=205475,24074186081998 kmol/h
n_{CO2}=3,9956548906060750219e-07 kmol/h
n_{H2O}=2286,3123031967793395 kmol/h
n_{CH4}=8585,7572818219705368 kmol/h
n_{NH3}=14227,586833970082807 kmol/h
n_{AR}=727,34091827653844575 kmol/h
n_{O2}=3,4173581813436209791e-19 kmol/h
n_{N2}=16590,621524068508734 kmol/h
n_T=247892,85960656785755 kmol/h
y_{CO}=1,1995568870153701571e-11
y_{H2}=0,82888729053338494612
y_{CO2}=1,6118475122468639726e-12
y_{H2O}=0,0092229857157862405598
y_{CH4}=0,034634951952421999533
y_{NH3}=0,057394097016552896029
y_{AR}=0,0029340938639011437949
y_{O2}=1,3785625720592876405e-24
y_{N2}=0,066926580904345445155
T: 573.15 °K
p: 180 bar
H: 869292.522796 J/kmol
========================================
========================================
Adiabate Ammoniaksynthese: Reaktor
========================================
Berücksichtigte (unabhängige) Reaktionen:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Totale zu tauschende Energie, Q: 1,5964931911892360597e-07 kW (adiabat)
n_{CO}=3,472058354332368199e-06 kmol/h
n_{H2}=174142,18631617646315 kmol/h
n_{CO2}=3,6586824014094660138e-07 kmol/h
n_{H2O}=2286,3123027657311468 kmol/h
n_{CH4}=8585,7572813572242012 kmol/h
n_{NH3}=35116,28978533334157 kmol/h
n_{AR}=727,34091827653844575 kmol/h
n_{O2}=-4,8349016411663391109e-19 kmol/h
n_{N2}=6146,2700483868811716 kmol/h
n_T=227004,15665613411693 kmol/h
y_{CO}=1,5295131179434049519e-11
y_{H2}=0,7671321480688436667
y_{CO2}=1,6117248491408189436e-12
y_{H2O}=0,010071675939524916618
y_{CH4}=0,037822026732148912587
y_{NH3}=0,15469447917875572829
y_{AR}=0,0032040863435743795369
y_{O2}=-2,1298736165832626901e-24
y_{N2}=0,02707558372024548507
T: 715.243 °K
p: 180 bar
H: 949283.980017 J/kmol
Gesamtfehler: 24.00688074803128
========================================
========================================
Abkühler + Produkt-Abscheider
========================================
Vorkühler (auf T= 294.15 °K), Q: -970397,47681813372765 kW
Dampfphase (zum Splitter):
n_{CO}=3,4644485192188971934e-06 kmol/h
n_{H2}=173887,8094993524137 kmol/h
n_{CO2}=3,3109639864848355741e-07 kmol/h
n_{H2O}=40,651471682955836684 kmol/h
n_{CH4}=8509,7692677184932109 kmol/h
n_{NH3}=17729,617037066011108 kmol/h
n_{AR}=722,47415719184630234 kmol/h
n_{O2}=-4,8012673913786911887e-19 kmol/h
n_{N2}=6133,7953405792541162 kmol/h
n_T=207024,11677738654544 kmol/h
y_{CO}=1,673451660129297349e-11
y_{H2}=0,83993986883819027334
y_{CO2}=1,599313180459841775e-12
y_{H2O}=0,00019636104389243280548
y_{CH4}=0,041105207450424251225
y_{NH3}=0,085640346218289836733
y_{AR}=0,0034898067356065078093
y_{O2}=-2,3191826468932061754e-24
y_{N2}=0,029628409654936901235
H2/N2 Verhältnis: 28.34913782482528
Stoechiometrisches Verhältnis: 3
T: 294.15 °K
p: 180 bar
H: -7178543.485889 J/kmol
========================================
Produkt-Flüssigkeit:
n_{CO}=7,6098351134706631032e-09 kmol/h
n_{H2}=254,37681682403675154 kmol/h
n_{CO2}=3,4771841492463110146e-08 kmol/h
n_{H2O}=2245,6608310827746209 kmol/h
n_{CH4}=75,988013638731885635 kmol/h
n_{NH3}=17386,672748267337738 kmol/h
n_{AR}=4,8667610846922357837 kmol/h
n_{O2}=-3,3634249787648636958e-21 kmol/h
n_{N2}=12,474707807626469247 kmol/h
n_T=19980,039878747578769 kmol/h
x_{CO}=3,8087186826612861419e-13
x_{H2}=0,012731547007616403344
x_{CO2}=1,7403289341768694504e-12
x_{H2O}=0,1123952127047438132
x_{CH4}=0,0038031962964853605752
x_{NH3}=0,87020210475285686424
x_{AR}=0,00024358114980052892619
x_{O2}=-1,6833925260318668571e-25
x_{N2}=0,00062435850421439906179
T: 294.15 °K
p: 180 bar
H: -67582889.409388 J/kmol
========================================
========================================
Abtrennung: Splitter
========================================
Rücklaufverhältnis = nr/nv = 0.8
Rücklaufstrom:
n_{CO}=2,7715588153751179241e-06 kmol/h
n_{H2}=139110,24759948195424 kmol/h
n_{CO2}=2,6487711891878680358e-07 kmol/h
n_{H2O}=32,52117734636467361 kmol/h
n_{CH4}=6807,8154141747954782 kmol/h
n_{NH3}=14183,693629652811069 kmol/h
n_{AR}=577,97932575347704187 kmol/h
n_{O2}=-3,8410139131029528546e-19 kmol/h
n_{N2}=4907,0362724634032929 kmol/h
n_T=165619,29342190924217 kmol/h
y_{CO}=1,6734516601967807467e-11
y_{H2}=0,83993986887206162351
y_{CO2}=1,5993131805243354076e-12
y_{H2O}=0,00019636104390035122183
y_{CH4}=0,041105207452081855835
y_{NH3}=0,085640346221743365862
y_{AR}=0,003489806735747237685
y_{O2}=-2,3191826469867292406e-24
y_{N2}=0,029628409656131692029
T: 294.15 °K
p: 180 bar
H: -7178543.485889 J/kmol
========================================
Abgas:
n_{CO}=6,9288970384377926928e-07 kmol/h
n_{H2}=34777,561899870474008 kmol/h
n_{CO2}=6,6219279729696687659e-08 kmol/h
n_{H2O}=8,1302943365911666262 kmol/h
n_{CH4}=1701,9538535436984148 kmol/h
n_{NH3}=3545,9234074132018577 kmol/h
n_{AR}=144,49483143836923205 kmol/h
n_{O2}=-9,6025347827573797292e-20 kmol/h
n_{N2}=1226,7590681158505959 kmol/h
n_T=41404,823355477303267 kmol/h
y_{CO}=1,6734516601967807467e-11
y_{H2}=0,83993986887206151248
y_{CO2}=1,5993131805243354076e-12
y_{H2O}=0,00019636104390035122183
y_{CH4}=0,041105207452081848896
y_{NH3}=0,085640346221743365862
y_{AR}=0,003489806735747237685
y_{O2}=-2,3191826469867288733e-24
y_{N2}=0,029628409656131692029
T: 294.15 °K
p: 180 bar
H: -7178543.485889 J/kmol
========================================
========================================
==============24. Iteration=============
========================================
========================================
Mischung: Rücklauf + Zulauf + Vorwärmer
========================================
Rücklaufverhältnis = nr/nv = 0.8
Temperatur der Mischung: 402.515 °K
Vorwärmer (auf T= 573.15 °K), Q: 358548,89783455536235 kW
n_{CO}=2,9614212622345971863e-06 kmol/h
n_{H2}=205582,54826226542355 kmol/h
n_{CO2}=3,9841399137277443927e-07 kmol/h
n_{H2O}=2286,3331558629915889 kmol/h
n_{CH4}=8593,0628603126024245 kmol/h
n_{NH3}=14236,764393536446732 kmol/h
n_{AR}=727,97670791557254688 kmol/h
n_{O2}=-3,8410139131029528546e-19 kmol/h
n_{N2}=16578,185140655357827 kmol/h
n_T=248004,87052390826284 kmol/h
y_{CO}=1,1940980255664410069e-11
y_{H2}=0,82894560831798969058
y_{CO2}=1,6064764798010948336e-12
y_{H2O}=0,0092189042539089320616
y_{CH4}=0,034648766543011140506
y_{NH3}=0,057405180646095371744
y_{AR}=0,0029353323036669714749
y_{O2}=-1,5487655161726631399e-24
y_{N2}=0,06684620792178024018
T: 573.15 °K
p: 180 bar
H: 868806.113022 J/kmol
========================================
========================================
Adiabate Ammoniaksynthese: Reaktor
========================================
Berücksichtigte (unabhängige) Reaktionen:
1.9999999999999993 H2O+ 1.0 CH4+ 1.3333333333333333 N2<<==>>1.0 CO2+2.6666666666666665 NH3 K(T)=1.93108e-09
0.9999999999999994 H2O+ 1.0000000000000002 CH4+ 1.0 N2<<==>>1.0 CO+2.0 NH3 K(T)=2.91601e-10
1.9999999999999998 H2O+ 0.6666666666666666 N2<<==>>1.3333333333333333 NH3+1.0 O2 K(T)=1.47386e-41
0.6666666666666666 NH3<<==>>1.0 H2+0.3333333333333333 N2 K(T)=6.03022
Totale zu tauschende Energie, Q: -2,0099004109700519591e-07 kW (adiabat)
n_{CO}=3,4598948395627086811e-06 kmol/h
n_{H2}=174249,60715571115725 kmol/h
n_{CO2}=3,6458335187213294791e-07 kmol/h
n_{H2O}=2286,3331554321794101 kmol/h
n_{CH4}=8593,0628598479597713 kmol/h
n_{NH3}=35125,39179881268501 kmol/h
n_{AR}=727,97670791557254688 kmol/h
n_{O2}=-8,1985200881052273117e-19 kmol/h
n_{N2}=6133,8714380172377787 kmol/h
n_T=227116,24311956128804 kmol/h
y_{CO}=1,5234026382434076327e-11
y_{H2}=0,76722653017811925924
y_{CO2}=1,6052720266256102795e-12
y_{H2O}=0,010066797178520518405
y_{CH4}=0,037835527489437623117
y_{NH3}=0,1546582107750063273
y_{AR}=0,0032053044639891400437
y_{O2}=-3,6098343189787896001e-24
y_{N2}=0,027007629898087786441
T: 715.176 °K
p: 180 bar
H: 948713.066975 J/kmol
Gesamtfehler: 0.7235565187066545
========================================
========================================
Abkühler + Produkt-Abscheider
========================================
Vorkühler (auf T= 294.15 °K), Q: -970663,70300425111782 kW
Dampfphase (zum Splitter):
n_{CO}=3,4523157720605480345e-06 kmol/h
n_{H2}=173995,21287920404575 kmol/h
n_{CO2}=3,2995021562571398376e-07 kmol/h
n_{H2O}=40,672313109402068676 kmol/h
n_{CH4}=8517,0506782320371713 kmol/h
n_{NH3}=17738,810874727168994 kmol/h
n_{AR}=723,10831344447728952 kmol/h
n_{O2}=-8,1415173255166387477e-19 kmol/h
n_{N2}=6121,4286475679373325 kmol/h
n_T=207136,28371006727684 kmol/h
y_{CO}=1,6666880906065422887e-11
y_{H2}=0,84000354624349060639
y_{CO2}=1,5929136590776301017e-12
y_{H2O}=0,00019635532886495149236
y_{CH4}=0,041118101210159949122
y_{NH3}=0,085638356333735357606
y_{AR}=0,0034909785019959518014
y_{O2}=-3,9305124043755706862e-24
y_{N2}=0,029552662322976690906
H2/N2 Verhältnis: 28.423955075966276
Stoechiometrisches Verhältnis: 3
T: 294.15 °K
p: 180 bar
H: -7179416.296270 J/kmol
========================================
Produkt-Flüssigkeit:
n_{CO}=7,5790675021604150189e-09 kmol/h
n_{H2}=254,3942765071357428 kmol/h
n_{CO2}=3,4633136246418997237e-08 kmol/h
n_{H2O}=2245,6608423227780804 kmol/h
n_{CH4}=76,012181615922173705 kmol/h
n_{NH3}=17386,580924085519655 kmol/h
n_{AR}=4,8683944710953195312 kmol/h
n_{O2}=-5,7002762588589256161e-21 kmol/h
n_{N2}=12,442790449300511924 kmol/h
n_T=19979,959409493960266 kmol/h
x_{CO}=3,7933347861270364816e-13
x_{H2}=0,012732472143717730312
x_{CO2}=1,7333937247394858454e-12
x_{H2O}=0,11239566593908976999
x_{CH4}=0,0038044212247864577815
x_{NH3}=0,87020101367808655457
x_{AR}=0,00024366388206108809816
x_{O2}=-2,852996917773854379e-25
x_{N2}=0,00062276355019092539868
T: 294.15 °K
p: 180 bar
H: -67583040.612796 J/kmol
========================================
========================================
Abtrennung: Splitter
========================================
Rücklaufverhältnis = nr/nv = 0.8
Rücklaufstrom:
n_{CO}=2,761852617648438597e-06 kmol/h
n_{H2}=139196,1703033632366 kmol/h
n_{CO2}=2,639601725005711976e-07 kmol/h
n_{H2O}=32,537850487521659204 kmol/h
n_{CH4}=6813,6405425856301008 kmol/h
n_{NH3}=14191,048699781735195 kmol/h
n_{AR}=578,48665075558176341 kmol/h
n_{O2}=-6,5132138604133115759e-19 kmol/h
n_{N2}=4897,1429180543500479 kmol/h
n_T=165709,02696805389132 kmol/h
y_{CO}=1,6666880906740709228e-11
y_{H2}=0,84000354627752471526
y_{CO2}=1,5929136591421699804e-12
y_{H2O}=0,00019635532887290715106
y_{CH4}=0,041118101211825922037
y_{NH3}=0,085638356337205137625
y_{AR}=0,0034909785021373942147
y_{O2}=-3,9305124045348222527e-24
y_{N2}=0,029552662324174062969
T: 294.15 °K
p: 180 bar
H: -7179416.296270 J/kmol
========================================
Abgas:
n_{CO}=6,9046315441210943749e-07 kmol/h
n_{H2}=34799,042575840801874 kmol/h
n_{CO2}=6,5990043125142772929e-08 kmol/h
n_{H2O}=8,1344626218804130247 kmol/h
n_{CH4}=1703,4101356464070705 kmol/h
n_{NH3}=3547,7621749454328892 kmol/h
n_{AR}=144,62166268889541243 kmol/h
n_{O2}=-1,6283034651033274125e-19 kmol/h
n_{N2}=1224,2857295135872846 kmol/h
n_T=41427,256742013451003 kmol/h
y_{CO}=1,6666880906740709228e-11
y_{H2}=0,84000354627752482628
y_{CO2}=1,5929136591421697784e-12
y_{H2O}=0,00019635532887290715106
y_{CH4}=0,041118101211825922037
y_{NH3}=0,085638356337205151503
y_{AR}=0,0034909785021373946484
y_{O2}=-3,9305124045348222527e-24
y_{N2}=0,029552662324174069908
T: 294.15 °K
p: 180 bar
H: -7179416.296270 J/kmol
========================================
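The iteration blocks above all repeat the same stream bookkeeping: each mole fraction y_i is the component flow n_i divided by the total flow n_T, the reported "H2/N2 Verhältnis" is n_H2/n_N2, and the splitter returns a fixed fraction (Rücklaufverhältnis 0.8) of the separator vapour as recycle while purging the rest as Abgas. The short Python sketch below redoes that arithmetic with rounded values taken from one of the "Dampfphase (zum Splitter)" reports above; it is only an illustration of the bookkeeping, not the simulation code itself, and the variable names are purely illustrative.
# Illustrative only: reproduces the per-iteration stream bookkeeping printed above.
# Flows in kmol/h, rounded from one vapour-phase report; trace CO/CO2/O2 omitted.
vapour = {"H2": 171259.35, "N2": 6457.58, "NH3": 17507.02,
          "CH4": 8325.77, "H2O": 40.14, "AR": 706.52}

n_T = sum(vapour.values())                        # total molar flow n_T
y = {k: n / n_T for k, n in vapour.items()}       # mole fractions y_i = n_i / n_T
h2_n2 = vapour["H2"] / vapour["N2"]               # "H2/N2 Verhältnis" (about 26.5 here)

split = 0.8                                       # Rücklaufverhältnis nr/nv
recycle = {k: split * n for k, n in vapour.items()}         # Rücklaufstrom
purge = {k: (1 - split) * n for k, n in vapour.items()}     # Abgas

print(f"n_T = {n_T:.1f} kmol/h, y_NH3 = {y['NH3']:.4f}, H2/N2 = {h2_n2:.2f}")
print(f"recycle H2 = {recycle['H2']:.1f} kmol/h, purge H2 = {purge['H2']:.1f} kmol/h")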
|
object_detection_for_image_cropping/coordinates_display_test.ipynb | ###Markdown
Display box coordinates resulting from object detection or image augmentation on images---*Last Updated 1 June 2021* Use this notebook to verify the coordinates resulting from either object detection or image augmentation. __Coordinates from object detection__: Bounding boxes resulting from object detection were exported from [taxon]_generate_crops_yolo.ipynb or [taxon]_generate_crops_tf2.ipynb, and the crop coordinates were exported to [taxon]_crops_[yolo or tf2]_1000img_display_test.tsv. __Coordinates from pre-processing and augmentation__: Bounding boxes resulting from pre-processing and augmentation of EOL user-generated cropping coordinates and images were tidied and exported from [taxon]_preprocessing.ipynb to [taxon]_crops_train_aug_fin.tsv and [taxon]_crops_test_notaug_fin.tsv. If results aren't as expected, the data reformatting and tidying steps in [taxon]_preprocessing.ipynb can be adjusted and the results re-displayed. Installs & Imports---
###Code
# Mount Google Drive to import your file containing coordinates (object detection bounding boxes, cropping coordinates, etc.)
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
# For importing data and images
import os
import numpy as np
import pandas as pd
import urllib
from urllib.request import urlretrieve
from six.moves.urllib.request import urlopen
from six import BytesIO
from collections import defaultdict
from io import StringIO
from IPython.display import display
# For drawing on and displaying images
from PIL import Image
from PIL import ImageColor
from PIL import ImageDraw
from PIL import ImageFont
from PIL import ImageOps
import cv2
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'svg'
# For saving images
# Only needed if running "Save crop dimensions displayed on images to Google Drive" below;
# scipy is pinned to 1.1.0 so the scipy.misc image helpers (removed in newer SciPy releases) are still available
!pip install scipy==1.1.0
import scipy
from scipy import misc
###Output
_____no_output_____
###Markdown
Display crop dimensions on images---
###Code
# Define functions
# Read in the data file with coordinates (exported from the notebooks listed in the description above)
def read_datafile(fpath, sep="\t", header=0, disp_head=True):
"""
Defaults to tab-separated data files with header in row 0
"""
try:
df = pd.read_csv(fpath, sep=sep, header=header)
if disp_head:
print("Data header: \n", df.head())
except FileNotFoundError as e:
raise Exception("File not found: Enter the path to your file in form field and re-run").with_traceback(e.__traceback__)
return df
# Draw cropping box on image
def draw_box_on_image(df, img):
# Get box coordinates
xmin = df['xmin'][i].astype(int)
ymin = df['ymin'][i].astype(int)
xmax = df['xmin'][i].astype(int) + df['crop_width'][i].astype(int)
ymax = df['ymin'][i].astype(int) + df['crop_height'][i].astype(int)
boxcoords = [xmin, ymin, xmax, ymax]
# Set box/font color and size
maxdim = max(df['im_height'][i],df['im_width'][i])
fontScale = maxdim/600
box_col = (255, 0, 157)
# Add label to image
tag = df['class_name'][i]
image_wbox = cv2.putText(img, tag, (xmin+7, ymax-12), cv2.FONT_HERSHEY_SIMPLEX, fontScale, box_col, 2, cv2.LINE_AA)
    # Draw bounding box on image
image_wbox = cv2.rectangle(img, (xmin, ymax), (xmax, ymin), box_col, 5)
return image_wbox, boxcoords
# For uploading an image from url
# Modified from https://www.pyimagesearch.com/2015/03/02/convert-url-to-image-with-python-and-opencv/
def url_to_image(url):
resp = urllib.request.urlopen(url)
image = np.asarray(bytearray(resp.read()), dtype="uint8")
image = cv2.imdecode(image, cv2.IMREAD_COLOR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
im_h, im_w = image.shape[:2]
return image
# Display crop dimensions on images
# Import your file with cropping coordinates
# TO DO: Enter filepath in form field
fpath = "/content/drive/MyDrive/train/tf2/results/chiroptera_cropcoords_tf2_rcnn_concat_displaytest.tsv" #@param {type:"string"}
df = read_datafile(fpath)
# TO DO: Choose start index in cropping dataframe
start = 0 #@param {type:"integer"}
# TO DO: Choose how many images to display coords on (max. 50)
num_images = 5 #@param {type:"slider", min:0, max:50, step:5}
stop = start + num_images
# Display cropping dimensions on images
print("\nDisplaying cropping coordinates on images: \n")
for i, row in df.iloc[start:stop].iterrows():
# Read in image
url = df['eolMediaURL'][i]
img = url_to_image(url)
# Draw bounding box on image
image_wbox, boxcoords = draw_box_on_image(df, img)
# Plot cropping box on image
_, ax = plt.subplots(figsize=(10, 10))
ax.imshow(image_wbox)
# Display image URL and coordinates above image
# Helps with fine-tuning data transforms in post-processing steps above
plt.title('{}) {} \n xmin: {}, ymin: {}, xmax: {}, ymax: {}'.format(i+1, url, boxcoords[0], boxcoords[1], boxcoords[2], boxcoords[3]))
###Output
_____no_output_____
###Markdown
Save crop dimensions displayed on images to Google Drive---Useful if you want to share results with someone remotely. Saves images with detection boxes to Google Drive.
###Code
# Display crop dimensions on images & save results
# Import your file with cropping coordinates
# TO DO: Enter filepath in form field
fpath = "/content/drive/MyDrive/train/tf2/results/chiroptera_cropcoords_tf2_rcnn_concat_displaytest.tsv" #@param {type:"string"}
df = read_datafile(fpath)
# Path to folder for exporting images with bounding boxes
# TO DO: Enter path to where you want images saved to in form field
pathbase = "/content/drive/MyDrive/train/tf2/out/" #@param {type:"string"}
# TO DO: Choose start index in cropping dataframe
start = 0 #@param {type:"integer"}
# TO DO: Choose how many images to display coords on (max. 50)
num_images = 5 #@param {type:"slider", min:0, max:50, step:5}
stop = start + num_images
# Display cropping dimensions on images
print("\nDisplaying cropping coordinates on images: \n")
for i, row in df.iloc[start:stop].iterrows():
# Make output path
path = pathbase + str(df.dataObjectVersionID[i]) + '.jpg'
# Read in image
url = df['eolMediaURL'][i]
img = url_to_image(url)
# Draw bounding box on image
image_wbox, boxcoords = draw_box_on_image(df, img)
# Plot cropping box on image
_, ax = plt.subplots(figsize=(10, 10))
ax.imshow(image_wbox)
# Display image URL and coordinates above image
# Helps with fine-tuning data transforms in post-processing steps above
plt.title('{}) Image from: {} \n Saved to: {}'.format(i+1, url, path))
# Export image to Google Drive
misc.imsave(path, image_wbox)
###Output
_____no_output_____ |
02_Setup_Cognitive_Search/optional_notebooks/Generate_Labelled_Entities_Json.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Generate labels.json If you have entities in a CSV file, you can generate the required JSON format using the following code.
###Code
import csv
import json

data = []
keys = []
# Raw string so the backslashes in the Windows path are not treated as escape sequences
with open(r"C:\LabelSkill\LabelSkillEntities.csv") as csv_file:
    csv_data = csv.reader(csv_file)
    count = 1
    for row in csv_data:
        if count == 1:
            # First row holds the column names; reuse them as the JSON keys
            for y in row:
                keys.append(y)
        else:
            # Map every following row to a {column: value} object
            element = {}
            for y in range(len(row)):
                element[keys[y]] = row[y]
            data.append(element)
        count = count + 1
print(json.dumps(data))
###Output
[{"name": "Active Directory", "type": "Product"}, {"name": "Active Directory B2C", "type": "Product"}, {"name": "Active Directory Domain Services", "type": "Product"}, {"name": "Advisor", "type": "Product"}, {"name": "Analysis Services", "type": "Product"}, {"name": "Anomaly Detector", "type": "Product"}, {"name": "API Apps", "type": "Product"}, {"name": "API for FHIR", "type": "Product"}, {"name": "API Management", "type": "Product"}, {"name": "App Configuration", "type": "Product"}, {"name": "App Service", "type": "Product"}, {"name": "App Service - Mobile Apps", "type": "Product"}, {"name": "App Service - Web Apps", "type": "Product"}, {"name": "Application Gateway", "type": "Product"}, {"name": "Application Insights", "type": "Product"}, {"name": "Arc", "type": "Product"}, {"name": "Archive Storage", "type": "Product"}, {"name": "Artifacts", "type": "Product"}, {"name": "Automation", "type": "Product"}, {"name": "Avere vFXT", "type": "Product"}, {"name": "Azure China 21Vianet", "type": "Product"}, {"name": "Azure DevOps", "type": "Product"}, {"name": "Azure Germany", "type": "Product"}, {"name": "Azure NetApp Files", "type": "Product"}, {"name": "Azure Notebooks", "type": "Product"}, {"name": "Azure Role-based access control", "type": "Product"}, {"name": "Azure Stack", "type": "Product"}, {"name": "Azure US Government", "type": "Product"}, {"name": "Backup", "type": "Product"}, {"name": "Bastion", "type": "Product"}, {"name": "Batch", "type": "Product"}, {"name": "Bing Autosuggest", "type": "Product"}, {"name": "Bing Custom Search", "type": "Product"}, {"name": "Bing Entity Search", "type": "Product"}, {"name": "Bing Image Search", "type": "Product"}, {"name": "Bing News Search", "type": "Product"}, {"name": "Bing Spell Check", "type": "Product"}, {"name": "Bing Video Search", "type": "Product"}, {"name": "Bing Visual Search", "type": "Product"}, {"name": "Bing Web Search", "type": "Product"}, {"name": "Blob Storage", "type": "Product"}, {"name": "Blockchain Service", "type": "Product"}, {"name": "Blockchain Tokens", "type": "Product"}, {"name": "Blockchain Workbench", "type": "Product"}, {"name": "Blueprints", "type": "Product"}, {"name": "Boards", "type": "Product"}, {"name": "Bot Service", "type": "Product"}, {"name": "Cache for Redis", "type": "Product"}, {"name": "CLIs", "type": "Product"}, {"name": "Cloud Services", "type": "Product"}, {"name": "Cloud Shell", "type": "Product"}, {"name": "Cognitive Search", "type": "Product"}, {"name": "Cognitive Services", "type": "Product"}, {"name": "Computer Vision", "type": "Product"}, {"name": "Container Instances", "type": "Product"}, {"name": "Container Registry", "type": "Product"}, {"name": "Content Delivery Network", "type": "Product"}, {"name": "Content Moderator", "type": "Product"}, {"name": "Content Protection", "type": "Product"}, {"name": "Cosmos DB", "type": "Product"}, {"name": "Cost Management", "type": "Product"}, {"name": "CycleCloud", "type": "Product"}, {"name": "Data Box Family", "type": "Product"}, {"name": "Data Catalog", "type": "Product"}, {"name": "Data Explorer", "type": "Product"}, {"name": "Data Factory", "type": "Product"}, {"name": "Data Lake", "type": "Product"}, {"name": "Data Lake Analytics", "type": "Product"}, {"name": "Data Lake Storage", "type": "Product"}, {"name": "Data Science Virtual Machines", "type": "Product"}, {"name": "Data Share", "type": "Product"}, {"name": "Database for MariaDB", "type": "Product"}, {"name": "Database for MySQL", "type": "Product"}, {"name": "Database for PostgreSQL", 
"type": "Product"}, {"name": "Database Migration Service", "type": "Product"}, {"name": "Databricks", "type": "Product"}, {"name": "DDoS Protection", "type": "Product"}, {"name": "Dedicated Host", "type": "Product"}, {"name": "Dedicated HSM", "type": "Product"}, {"name": "Dev Spaces", "type": "Product"}, {"name": "Developer tool integrations", "type": "Product"}, {"name": "DevOps tool integrations", "type": "Product"}, {"name": "DevTest Labs", "type": "Product"}, {"name": "Digital Twins", "type": "Product"}, {"name": "Disk Storage", "type": "Product"}, {"name": "DNS", "type": "Product"}, {"name": "Encoding", "type": "Product"}, {"name": "Event Grid", "type": "Product"}, {"name": "Event Hubs", "type": "Product"}, {"name": "ExpressRoute", "type": "Product"}, {"name": "Face", "type": "Product"}, {"name": "File Storage", "type": "Product"}, {"name": "Files", "type": "Product"}, {"name": "Firewall", "type": "Product"}, {"name": "Firewall Manager", "type": "Product"}, {"name": "Form Recognizer", "type": "Product"}, {"name": "Front Door", "type": "Product"}, {"name": "Functions", "type": "Product"}, {"name": "FXT Edge Filler", "type": "Product"}, {"name": "HDInsight", "type": "Product"}, {"name": "HPC Cache", "type": "Product"}, {"name": "Immersive Reader", "type": "Product"}, {"name": "Information Protection", "type": "Product"}, {"name": "Ink Recognizer", "type": "Product"}, {"name": "Internet Analyzer", "type": "Product"}, {"name": "IoT", "type": "Product"}, {"name": "IoT Central", "type": "Product"}, {"name": "IoT Edge", "type": "Product"}, {"name": "IoT Hub", "type": "Product"}, {"name": "IoT Solution Accelerators", "type": "Product"}, {"name": "Key Vault", "type": "Product"}, {"name": "Kinect DK", "type": "Product"}, {"name": "Kubernetes Service", "type": "Product"}, {"name": "Lab Services", "type": "Product"}, {"name": "Language Understanding", "type": "Product"}, {"name": "Lighthouse", "type": "Product"}, {"name": "Linux Virtual Machines", "type": "Product"}, {"name": "Live and On-Demand Streaming", "type": "Product"}, {"name": "Load Balancer", "type": "Product"}, {"name": "Log Analytics", "type": "Product"}, {"name": "Logic Apps", "type": "Product"}, {"name": "Machine Learning", "type": "Product"}, {"name": "Machine Learning Studio", "type": "Product"}, {"name": "Managed Applications", "type": "Product"}, {"name": "Managed Disks", "type": "Product"}, {"name": "Maps", "type": "Product"}, {"name": "Media Analytics", "type": "Product"}, {"name": "Media Player", "type": "Product"}, {"name": "Media Services", "type": "Product"}, {"name": "Microsoft Azure portal", "type": "Product"}, {"name": "Microsoft Genomics", "type": "Product"}, {"name": "Migrate", "type": "Product"}, {"name": "Mobile Apps", "type": "Product"}, {"name": "Monitor", "type": "Product"}, {"name": "Network Watcher", "type": "Product"}, {"name": "Notification Hubs", "type": "Product"}, {"name": "Open Datasets", "type": "Product"}, {"name": "Personalizer", "type": "Product"}, {"name": "Pipelines", "type": "Product"}, {"name": "PlayFab", "type": "Product"}, {"name": "Policy", "type": "Product"}, {"name": "Power BI Embedded", "type": "Product"}, {"name": "Private Link", "type": "Product"}, {"name": "QnA Maker", "type": "Product"}, {"name": "Quantum", "type": "Product"}, {"name": "Queue Storage", "type": "Product"}, {"name": "R Server for HDInsight", "type": "Product"}, {"name": "Red Hat OpenShift", "type": "Product"}, {"name": "Remote Rendering", "type": "Product"}, {"name": "Repos", "type": "Product"}, {"name": "Resource Graph", 
"type": "Product"}, {"name": "Resource Manager", "type": "Product"}, {"name": "SAP HANA on Azure Large Instances", "type": "Product"}, {"name": "Scheduler", "type": "Product"}, {"name": "SDKs", "type": "Product"}, {"name": "Search", "type": "Product"}, {"name": "Security Center", "type": "Product"}, {"name": "Security Center for IoT", "type": "Product"}, {"name": "Sentinel", "type": "Product"}, {"name": "Service Bus", "type": "Product"}, {"name": "Service Fabric", "type": "Product"}, {"name": "Service Health", "type": "Product"}, {"name": "SignalR Service", "type": "Product"}, {"name": "Site Recovery", "type": "Product"}, {"name": "Spatial Anchors", "type": "Product"}, {"name": "Speaker Recognition", "type": "Product"}, {"name": "Speech to Text", "type": "Product"}, {"name": "Speech Translation", "type": "Product"}, {"name": "Sphere", "type": "Product"}, {"name": "Spring Cloud", "type": "Product"}, {"name": "SQL Database", "type": "Product"}, {"name": "SQL Database Edge", "type": "Product"}, {"name": "SQL Server on Virtual Machines", "type": "Product"}, {"name": "SQL Server Stretch Database", "type": "Product"}, {"name": "Stack Edge", "type": "Product"}, {"name": "Stack HCI", "type": "Product"}, {"name": "Storage", "type": "Product"}, {"name": "Storage Accounts", "type": "Product"}, {"name": "Storage Explorer", "type": "Product"}, {"name": "StorSimple", "type": "Product"}, {"name": "Stream Analytics", "type": "Product"}, {"name": "Synapse Analytics", "type": "Product"}, {"name": "Table Storage", "type": "Product"}, {"name": "Test Plans", "type": "Product"}, {"name": "Text Analytics", "type": "Product"}, {"name": "Text to Speech", "type": "Product"}, {"name": "Time Series Insights", "type": "Product"}, {"name": "Traffic Manager", "type": "Product"}, {"name": "Translator Speech", "type": "Product"}, {"name": "Translator Text", "type": "Product"}, {"name": "Video Indexer", "type": "Product"}, {"name": "Virtual Machine Scale Sets", "type": "Product"}, {"name": "Virtual Machines", "type": "Product"}, {"name": "Virtual Network", "type": "Product"}, {"name": "Virtual WAN", "type": "Product"}, {"name": "VMware Solution by CloudSimple", "type": "Product"}, {"name": "VPN Gateway", "type": "Product"}, {"name": "Web App for Containers", "type": "Product"}, {"name": "Web Application Firewall", "type": "Product"}, {"name": "Web Apps", "type": "Product"}, {"name": "Windows Virtual Machines", "type": "Product"}, {"name": "indexer", "type": "Feature"}, {"name": "cache", "type": "Feature"}, {"name": "analyzer", "type": "Feature"}, {"name": "scoring profile", "type": "Feature"}, {"name": "boosting", "type": "Feature"}, {"name": "knowledge store", "type": "Feature"}]
|
nb/00 Introduction.ipynb | ###Markdown
Introduction First upgrade the Raspberry Pi to the latest version```$ sudo apt-get update$ sudo apt-get upgrade```Then install JupyterLab. Create a directory called `git` to hold all repositories cloned from GitHub```$ mkdir git$ cd git```Clone the repository from GitHub```$ git clone https://github.com/Bugnon/oc-2018``` Plot
###Code
import matplotlib.pyplot as plt

plt.plot([1, 2, 3, 2, 1])
plt.show()
###Output
_____no_output_____ |
Neural Networks and Deep Learning/week 3/Planar_data_classification_with_onehidden_layer_v6c.ipynb | ###Markdown
Updates to Assignment If you were working on the older version:* Please click on the "Coursera" icon in the top right to open up the folder directory. * Navigate to the folder: Week 3/ Planar data classification with one hidden layer. You can see your prior work in version 6b: "Planar data classification with one hidden layer v6b.ipynb" List of bug fixes and enhancements* Clarifies that the classifier will learn to classify regions as either red or blue.* compute_cost function fixes np.squeeze by casting it as a float.* compute_cost instructions clarify the purpose of np.squeeze.* compute_cost clarifies that "parameters" parameter is not needed, but is kept in the function definition until the auto-grader is also updated.* nn_model removes extraction of parameter values, as the entire parameter dictionary is passed to the invoked functions. Planar data classification with one hidden layerWelcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression. **You will learn how to:**- Implement a 2-class classification neural network with a single hidden layer- Use units with a non-linear activation function, such as tanh - Compute the cross entropy loss - Implement forward and backward propagation 1 - Packages Let's first import all the packages that you will need during this assignment.- [numpy](https://www.numpy.org/) is the fundamental package for scientific computing with Python.- [sklearn](http://scikit-learn.org/stable/) provides simple and efficient tools for data mining and data analysis. - [matplotlib](http://matplotlib.org) is a library for plotting graphs in Python.- testCases provides some test examples to assess the correctness of your functions- planar_utils provide various useful functions used in this assignment
###Code
# Package imports
import numpy as np
import matplotlib.pyplot as plt
from testCases_v2 import *
import sklearn
import sklearn.datasets
import sklearn.linear_model
from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets
%matplotlib inline
np.random.seed(1) # set a seed so that the results are consistent
###Output
_____no_output_____
###Markdown
2 - Dataset First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables `X` and `Y`.
###Code
X, Y = load_planar_dataset()
###Output
_____no_output_____
###Markdown
Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data. In other words, we want the classifier to define regions as either red or blue.
###Code
# Visualize the data:
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
###Output
_____no_output_____
###Markdown
You have: - a numpy-array (matrix) X that contains your features (x1, x2) - a numpy-array (vector) Y that contains your labels (red:0, blue:1). Let's first get a better sense of what our data is like. **Exercise**: How many training examples do you have? In addition, what is the `shape` of the variables `X` and `Y`? **Hint**: How do you get the shape of a numpy array? [(help)](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html)
###Code
### START CODE HERE ### (≈ 3 lines of code)
shape_X = X.shape
shape_Y = Y.shape
m = X.shape[1] # training set size
### END CODE HERE ###
print ('The shape of X is: ' + str(shape_X))
print ('The shape of Y is: ' + str(shape_Y))
print ('I have m = %d training examples!' % (m))
###Output
The shape of X is: (2, 400)
The shape of Y is: (1, 400)
I have m = 400 training examples!
###Markdown
**Expected Output**: **shape of X** (2, 400) **shape of Y** (1, 400) **m** 400 3 - Simple Logistic Regression Before building a full neural network, let's first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset.
###Code
# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV();
clf.fit(X.T, Y.T);
###Output
_____no_output_____
###Markdown
You can now plot the decision boundary of these models. Run the code below.
###Code
# Plot the decision boundary for logistic regression
plot_decision_boundary(lambda x: clf.predict(x), X, Y)
plt.title("Logistic Regression")
# Print accuracy
LR_predictions = clf.predict(X.T)
print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +
'% ' + "(percentage of correctly labelled datapoints)")
###Output
Accuracy of logistic regression: 47 % (percentage of correctly labelled datapoints)
###Markdown
**Expected Output**: **Accuracy** 47% **Interpretation**: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now! 4 - Neural Network modelLogistic regression did not work well on the "flower dataset". You are going to train a Neural Network with a single hidden layer.**Here is our model**:**Mathematically**:For one example $x^{(i)}$:$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1]}\tag{1}$$ $$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2]}\tag{3}$$$$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{ [2] (i)})\tag{4}$$$$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{[2](i)} > 0.5 \\ 0 & \mbox{otherwise } \end{cases}\tag{5}$$Given the predictions on all the examples, you can also compute the cost $J$ as follows: $$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$**Reminder**: The general methodology to build a Neural Network is to: 1. Define the neural network structure ( of input units, of hidden units, etc). 2. Initialize the model's parameters 3. Loop: - Implement forward propagation - Compute loss - Implement backward propagation to get the gradients - Update parameters (gradient descent)You often build helper functions to compute steps 1-3 and then merge them into one function we call `nn_model()`. Once you've built `nn_model()` and learnt the right parameters, you can make predictions on new data. 4.1 - Defining the neural network structure **Exercise**: Define three variables: - n_x: the size of the input layer - n_h: the size of the hidden layer (set this to 4) - n_y: the size of the output layer**Hint**: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.
###Code
# GRADED FUNCTION: layer_sizes
def layer_sizes(X, Y):
"""
Arguments:
X -- input dataset of shape (input size, number of examples)
Y -- labels of shape (output size, number of examples)
Returns:
n_x -- the size of the input layer
n_h -- the size of the hidden layer
n_y -- the size of the output layer
"""
### START CODE HERE ### (≈ 3 lines of code)
n_x = X.shape[0] # size of input layer
n_h = 4
n_y = Y.shape[0] # size of output layer
### END CODE HERE ###
return (n_x, n_h, n_y)
X_assess, Y_assess = layer_sizes_test_case()
(n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess)
print("The size of the input layer is: n_x = " + str(n_x))
print("The size of the hidden layer is: n_h = " + str(n_h))
print("The size of the output layer is: n_y = " + str(n_y))
###Output
The size of the input layer is: n_x = 5
The size of the hidden layer is: n_h = 4
The size of the output layer is: n_y = 2
###Markdown
**Expected Output** (these are not the sizes you will use for your network, they are just used to assess the function you've just coded). **n_x** 5 **n_h** 4 **n_y** 2 4.2 - Initialize the model's parameters **Exercise**: Implement the function `initialize_parameters()`.**Instructions**:- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.- You will initialize the weights matrices with random values. - Use: `np.random.randn(a,b) * 0.01` to randomly initialize a matrix of shape (a,b).- You will initialize the bias vectors as zeros. - Use: `np.zeros((a,b))` to initialize a matrix of shape (a,b) with zeros.
###Code
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
params -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x)*0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h)*0.01
b2 = np.zeros((n_y, 1))
### END CODE HERE ###
assert (W1.shape == (n_h, n_x))
assert (b1.shape == (n_h, 1))
assert (W2.shape == (n_y, n_h))
assert (b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
n_x, n_h, n_y = initialize_parameters_test_case()
parameters = initialize_parameters(n_x, n_h, n_y)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[-0.00416758 -0.00056267]
[-0.02136196 0.01640271]
[-0.01793436 -0.00841747]
[ 0.00502881 -0.01245288]]
b1 = [[ 0.]
[ 0.]
[ 0.]
[ 0.]]
W2 = [[-0.01057952 -0.00909008 0.00551454 0.02292208]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[-0.00416758 -0.00056267] [-0.02136196 0.01640271] [-0.01793436 -0.00841747] [ 0.00502881 -0.01245288]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.01057952 -0.00909008 0.00551454 0.02292208]] **b2** [[ 0.]] 4.3 - The Loop **Question**: Implement `forward_propagation()`.**Instructions**:- Look above at the mathematical representation of your classifier.- You can use the function `sigmoid()`. It is built-in (imported) in the notebook.- You can use the function `np.tanh()`. It is part of the numpy library.- The steps you have to implement are: 1. Retrieve each parameter from the dictionary "parameters" (which is the output of `initialize_parameters()`) by using `parameters[".."]`. 2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).- Values needed in the backpropagation are stored in "`cache`". The `cache` will be given as an input to the backpropagation function.
###Code
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Argument:
X -- input data of size (n_x, m)
parameters -- python dictionary containing your parameters (output of initialization function)
Returns:
A2 -- The sigmoid output of the second activation
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
### END CODE HERE ###
# Implement Forward Propagation to calculate A2 (probabilities)
### START CODE HERE ### (≈ 4 lines of code)
Z1 = np.matmul(W1, X) + b1
A1 = np.tanh(Z1)
Z2 = np.matmul(W2, A1) + b2
A2 = sigmoid(Z2)
### END CODE HERE ###
assert(A2.shape == (1, X.shape[1]))
cache = {"Z1": Z1,
"A1": A1,
"Z2": Z2,
"A2": A2}
return A2, cache
X_assess, parameters = forward_propagation_test_case()
A2, cache = forward_propagation(X_assess, parameters)
# Note: we use the mean here just to make sure that your output matches ours.
print(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2']))
###Output
0.262818640198 0.091999045227 -1.30766601287 0.212877681719
###Markdown
**Expected Output**: 0.262818640198 0.091999045227 -1.30766601287 0.212877681719 Now that you have computed $A^{[2]}$ (in the Python variable "`A2`"), which contains $a^{[2](i)}$ for every example, you can compute the cost function as follows:$$J = - \frac{1}{m} \sum\limits_{i = 1}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$**Exercise**: Implement `compute_cost()` to compute the value of the cost $J$.**Instructions**:- There are many ways to implement the cross-entropy loss. To help you, we give you how we would have implemented$- \sum\limits_{i=0}^{m} y^{(i)}\log(a^{[2](i)})$:```pythonlogprobs = np.multiply(np.log(A2),Y)cost = - np.sum(logprobs) no need to use a for loop!```(you can use either `np.multiply()` and then `np.sum()` or directly `np.dot()`). Note that if you use `np.multiply` followed by `np.sum` the end result will be a type `float`, whereas if you use `np.dot`, the result will be a 2D numpy array. We can use `np.squeeze()` to remove redundant dimensions (in the case of single float, this will be reduced to a zero-dimension array). We can cast the array as a type `float` using `float()`.
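For example, a quick sketch with toy arrays (illustrative values, not the assignment's data) showing the difference between the two approaches:
```python
import numpy as np

Y  = np.array([[1, 0, 1]])          # toy labels
A2 = np.array([[0.8, 0.2, 0.9]])    # toy activations

s1 = np.sum(np.multiply(np.log(A2), Y))   # multiply + sum -> a plain scalar
s2 = np.dot(np.log(A2), Y.T)              # dot -> a (1, 1) numpy array
print(type(s1), s2.shape)
print(float(np.squeeze(s2)))              # squeeze + float recovers the same number as s1
```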
###Code
# GRADED FUNCTION: compute_cost
def compute_cost(A2, Y, parameters):
"""
Computes the cross-entropy cost given in equation (13)
Arguments:
A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
parameters -- python dictionary containing your parameters W1, b1, W2 and b2
[Note that the parameters argument is not used in this function,
but the auto-grader currently expects this parameter.
Future version of this notebook will fix both the notebook
and the auto-grader so that `parameters` is not needed.
For now, please include `parameters` in the function signature,
and also when invoking this function.]
Returns:
cost -- cross-entropy cost given equation (13)
"""
m = Y.shape[1] # number of example
# Compute the cross-entropy cost
### START CODE HERE ### (≈ 2 lines of code)
    logprobs = np.multiply(np.log(A2), Y) + np.multiply(np.log(1 - A2), 1 - Y)
    cost = - np.sum(logprobs) / m
### END CODE HERE ###
cost = float(np.squeeze(cost)) # makes sure cost is the dimension we expect.
# E.g., turns [[17]] into 17
assert(isinstance(cost, float))
return cost
A2, Y_assess, parameters = compute_cost_test_case()
print("cost = " + str(compute_cost(A2, Y_assess, parameters)))
###Output
cost = 0.6926858869721941
###Markdown
**Expected Output**: **cost** 0.693058761... Using the cache computed during forward propagation, you can now implement backward propagation.**Question**: Implement the function `backward_propagation()`.**Instructions**:Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation. <!--$\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$$\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T} $$\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$$\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $$\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T $$\frac{\partial \mathcal{J} _i }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$- Note that $*$ denotes elementwise multiplication.- The notation you will use is common in deep learning coding: - dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$ - db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$ - dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$ - db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$ !-->- Tips: - To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute $g^{[1]'}(Z^{[1]})$ using `(1 - np.power(A1, 2))`.
###Code
# GRADED FUNCTION: backward_propagation
def backward_propagation(parameters, cache, X, Y):
"""
Implement the backward propagation using the instructions above.
Arguments:
parameters -- python dictionary containing our parameters
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
X -- input data of shape (2, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
Returns:
grads -- python dictionary containing your gradients with respect to different parameters
"""
m = X.shape[1]
# First, retrieve W1 and W2 from the dictionary "parameters".
### START CODE HERE ### (≈ 2 lines of code)
W1 = parameters["W1"]
W2 = parameters["W2"]
### END CODE HERE ###
# Retrieve also A1 and A2 from dictionary "cache".
### START CODE HERE ### (≈ 2 lines of code)
    A1 = cache["A1"]
    A2 = cache["A2"]
### END CODE HERE ###
# Backward propagation: calculate dW1, db1, dW2, db2.
### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)
    dZ2 = A2 - Y
    dW2 = (1 / m) * np.dot(dZ2, A1.T)
    db2 = (1 / m) * np.sum(dZ2, axis=1, keepdims=True)
    dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1, 2))
    dW1 = (1 / m) * np.dot(dZ1, X.T)
    db1 = (1 / m) * np.sum(dZ1, axis=1, keepdims=True)
### END CODE HERE ###
grads = {"dW1": dW1,
"db1": db1,
"dW2": dW2,
"db2": db2}
return grads
parameters, cache, X_assess, Y_assess = backward_propagation_test_case()
grads = backward_propagation(parameters, cache, X_assess, Y_assess)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("db2 = "+ str(grads["db2"]))
###Output
_____no_output_____
###Markdown
**Expected output**: **dW1** [[ 0.00301023 -0.00747267] [ 0.00257968 -0.00641288] [-0.00156892 0.003893 ] [-0.00652037 0.01618243]] **db1** [[ 0.00176201] [ 0.00150995] [-0.00091736] [-0.00381422]] **dW2** [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]] **db2** [[-0.16655712]] **Question**: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).**General gradient descent rule**: $ \theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter.**Illustration**: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.
###Code
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate = 1.2):
"""
Updates parameters using the gradient descent update rule given above
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients
Returns:
parameters -- python dictionary containing your updated parameters
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
### END CODE HERE ###
# Retrieve each gradient from the dictionary "grads"
### START CODE HERE ### (≈ 4 lines of code)
    dW1 = grads["dW1"]
    db1 = grads["db1"]
    dW2 = grads["dW2"]
    db2 = grads["db2"]
## END CODE HERE ###
# Update rule for each parameter
### START CODE HERE ### (≈ 4 lines of code)
    W1 = W1 - learning_rate * dW1
    b1 = b1 - learning_rate * db1
    W2 = W2 - learning_rate * dW2
    b2 = b2 - learning_rate * db2
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
_____no_output_____
###Markdown
**Expected Output**: **W1** [[-0.00643025 0.01936718] [-0.02410458 0.03978052] [-0.01653973 -0.02096177] [ 0.01046864 -0.05990141]] **b1** [[ -1.02420756e-06] [ 1.27373948e-05] [ 8.32996807e-07] [ -3.20136836e-06]] **W2** [[-0.01041081 -0.04463285 0.01758031 0.04747113]] **b2** [[ 0.00010457]] 4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model() **Question**: Build your neural network model in `nn_model()`.**Instructions**: The neural network model has to use the previous functions in the right order.
###Code
# GRADED FUNCTION: nn_model
def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):
"""
Arguments:
X -- dataset of shape (2, number of examples)
Y -- labels of shape (1, number of examples)
n_h -- size of the hidden layer
num_iterations -- Number of iterations in gradient descent loop
print_cost -- if True, print the cost every 1000 iterations
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
np.random.seed(3)
n_x = layer_sizes(X, Y)[0]
n_y = layer_sizes(X, Y)[2]
# Initialize parameters
### START CODE HERE ### (≈ 1 line of code)
    parameters = initialize_parameters(n_x, n_h, n_y)
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
### START CODE HERE ### (≈ 4 lines of code)
        # Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
        A2, cache = forward_propagation(X, parameters)
        # Cost function. Inputs: "A2, Y, parameters". Outputs: "cost".
        cost = compute_cost(A2, Y, parameters)
        # Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
        grads = backward_propagation(parameters, cache, X, Y)
        # Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
        parameters = update_parameters(parameters, grads)
### END CODE HERE ###
# Print the cost every 1000 iterations
if print_cost and i % 1000 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
return parameters
X_assess, Y_assess = nn_model_test_case()
parameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=True)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
_____no_output_____
###Markdown
**Expected Output**: **cost after iteration 0** 0.692739 $\vdots$ $\vdots$ **W1** [[-0.65848169 1.21866811] [-0.76204273 1.39377573] [ 0.5792005 -1.10397703] [ 0.76773391 -1.41477129]] **b1** [[ 0.287592 ] [ 0.3511264 ] [-0.2431246 ] [-0.35772805]] **W2** [[-2.45566237 -3.27042274 2.00784958 3.36773273]] **b2** [[ 0.20459656]] 4.5 Predictions**Question**: Use your model to predict by building predict().Use forward propagation to predict results.**Reminder**: predictions = $y_{prediction} = \mathbb 1 \text{{activation > 0.5}} = \begin{cases} 1 & \text{if}\ activation > 0.5 \\ 0 & \text{otherwise} \end{cases}$ As an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: ```X_new = (X > threshold)```
###Code
# GRADED FUNCTION: predict
def predict(parameters, X):
"""
Using the learned parameters, predicts a class for each example in X
Arguments:
parameters -- python dictionary containing your parameters
X -- input data of size (n_x, m)
Returns
predictions -- vector of predictions of our model (red: 0 / blue: 1)
"""
# Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.
### START CODE HERE ### (≈ 2 lines of code)
    A2, cache = forward_propagation(X, parameters)
    predictions = (A2 > 0.5)
### END CODE HERE ###
return predictions
parameters, X_assess = predict_test_case()
predictions = predict(parameters, X_assess)
print("predictions mean = " + str(np.mean(predictions)))
###Output
_____no_output_____
###Markdown
**Expected Output**: **predictions mean** 0.666666666667 It is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.
###Code
# Build a model with a n_h-dimensional hidden layer
parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
plt.title("Decision Boundary for hidden layer size " + str(4))
###Output
_____no_output_____
###Markdown
**Expected Output**: **Cost after iteration 9000** 0.218607
###Code
# Print accuracy
predictions = predict(parameters, X)
print ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')
###Output
_____no_output_____
###Markdown
**Expected Output**: **Accuracy** 90% Accuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression. Now, let's try out several hidden layer sizes. 4.6 - Tuning hidden layer size (optional/ungraded exercise) Run the following code. It may take 1-2 minutes. You will observe different behaviors of the model for various hidden layer sizes.
###Code
# This may take about 2 minutes to run
plt.figure(figsize=(16, 32))
hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]
for i, n_h in enumerate(hidden_layer_sizes):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer of size %d' % n_h)
parameters = nn_model(X, Y, n_h, num_iterations = 5000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
predictions = predict(parameters, X)
accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)
print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
###Output
_____no_output_____
###Markdown
**Interpretation**:- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data. - The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fits the data well without also incurring noticeable overfitting.- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting. **Optional questions**:**Note**: Remember to submit the assignment by clicking the blue "Submit Assignment" button at the upper-right. Some optional/ungraded questions that you can explore if you wish: - What happens when you change the tanh activation for a sigmoid activation or a ReLU activation?- Play with the learning_rate. What happens?- What if we change the dataset? (See part 5 below!) **You've learnt to:**- Build a complete neural network with a hidden layer- Make a good use of a non-linear unit- Implemented forward propagation and backpropagation, and trained a neural network- See the impact of varying the hidden layer size, including overfitting. Nice work! 5) Performance on other datasets If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.
###Code
# Datasets
noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()
datasets = {"noisy_circles": noisy_circles,
"noisy_moons": noisy_moons,
"blobs": blobs,
"gaussian_quantiles": gaussian_quantiles}
### START CODE HERE ### (choose your dataset)
dataset = "noisy_moons"
### END CODE HERE ###
X, Y = datasets[dataset]
X, Y = X.T, Y.reshape(1, Y.shape[0])
# make blobs binary
if dataset == "blobs":
Y = Y%2
# Visualize the data
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
###Output
_____no_output_____ |
PolicyEvaluation_Rl.ipynb | ###Markdown
[View in Colaboratory](https://colab.research.google.com/github/Kristian209/Move37_rlhw/blob/master/PolicyEvaluation_Rl.ipynb)
###Code
import numpy as np
import pprint
import sys
import gym.spaces
if "../" not in sys.path:
sys.path.append("../")
from gridworld import GridworldEnv
pp = pprint.PrettyPrinter(indent=2)
env = GridworldEnv()
def policy_eval(policy, env, discount_factor=1.0, theta=0.00001):
    # Start with an all-zero value function
V = np.zeros(env.nS)
while True:
delta = 0
# For each state, perform a "full backup"
for s in range(env.nS):
v = 0
# Look at the possible next actions
for a, action_prob in enumerate(policy[s]):
# For each action, look at the possible next states...
for prob, next_state, reward, done in env.P[s][a]:
# Calculate the expected value. Ref: Sutton book eq. 4.6.
v += action_prob * prob * (reward + discount_factor * V[next_state])
# How much our value function changed (across any states)
delta = max(delta, np.abs(v - V[s]))
V[s] = v
# Stop evaluating once our value function change is below a threshold
if delta < theta:
break
return np.array(V)
random_policy = np.ones([env.nS, env.nA]) / env.nA
v = policy_eval(random_policy, env)
print("Value Function")
print(v)
print("")
print("Reshaped Grid Value Function")
print(v.reshape(env.shape))
print("")
###Output
Value Function
[ 0. -13.99993529 -19.99990698 -21.99989761 -13.99993529
-17.9999206 -19.99991379 -19.99991477 -19.99990698 -19.99991379
-17.99992725 -13.99994569 -21.99989761 -19.99991477 -13.99994569
0. ]
Reshaped Grid Value Function
[[ 0. -13.99993529 -19.99990698 -21.99989761]
[-13.99993529 -17.9999206 -19.99991379 -19.99991477]
[-19.99990698 -19.99991379 -17.99992725 -13.99994569]
[-21.99989761 -19.99991477 -13.99994569 0. ]]
|
facerec_segment/facerec_segmentation.ipynb | ###Markdown
READING
###Code
import pandas as pd
import pickle
from sklearn.preprocessing import MinMaxScaler, StandardScaler

#df_kept.to_csv('chosen_segments_'+str(cut)+'.csv')
cut=0.5
df_kept=pd.read_csv('chosen_segments_'+str(cut)+'.csv')
df_kept['unique_id']=df_kept['shot_id'].map(str)+'_'+df_kept['file'].map(str)
df_kept['unique_id']=df_kept['unique_id'].str.replace('.mp4','')
soap_opera_scale={'extramarital affair': 1/1.98, 'get divorced': 1/1.96,'illegitimate child': 1/1.45,'institutionalized for emotional problem': 1/1.43,'happily married': 1/4.05,'serious accident': 1/2.96,'murdered': 1/1.81,'attempt suicide': 1/1.26,'blackmailed': 1/1.86,'unfaithful spouse': 1/2.23,'sexually assaulted': 1/2.60,'abortion': 1/1.41}
#filtering ZeroShot based on facerec results
#classification_results_path='../other_experiments/Hugging_Face_classification/soap_opera_scale_Max_Jack_Tanya.csv'
#classification_results_path='../other_experiments/Hugging_Face_classification/events_ranking_Max_Jack_Tanya.csv'
#classification_results_path='../other_experiments/Hugging_Face_classification/life_scale_ranking_Max_Jack_Tanya.csv'
#classification_results_path2='../other_experiments/Hugging_Face_classification/life_scale_ranking_end_Max_Jack_Tanya.csv'
#classification_results_path='../other_experiments/Hugging_Face_classification/daily_life_Max_Jack_Tanya.csv'
#classification_results_path='../other_experiments/Hugging_Face_classification/reduced_life_scale_Max_Jack_Tanya.csv'
classification_results_path='../other_experiments/Hugging_Face_classification/reduced_life_scale_Max_Jack_Tanya _clean.csv'
hf_results=pd.read_csv(classification_results_path)
#hf_results2=pd.read_csv(classification_results_path2)
#hf_results= pd.merge(hf_results,hf_results2,on=['transcript','Unnamed: 0', 'Unnamed: 0.1','begin', 'end', 'filename','shot_id'])
#hf_results=hf_results.sort_values(by=['score'],ascending=False)
#hf_results['filename']=hf_results['filename_y'].str.replace('.xml','')
hf_results['filename']=hf_results['filename'].str.replace('.xml','')
#print(hf_results)
#print(hf_results.columns)
#print(hf_results.sort_values(by=['score'],ascending=True)[0:50])
#cols_to_keep=['index','begin', 'end', 'filename', 'shot_id', 'transcript','Marital separation','score']#,'blackmailed','institutionalized for emotional problem','illegitimate child','extramarital affair']
#cols_to_keep=['Unnamed: 0_x', 'Unnamed: 0.1_x','begin_x', 'end_x', 'filename_x', 'shot_id_x', 'transcript','score_x','Unnamed: 0_y', 'Unnamed: 0.1_y','begin_y', 'end_y', 'filename_y', 'shot_id_y', 'transcript_y','score_y']
cols_to_keep=['index','begin', 'end', 'filename', 'shot_id', 'transcript','score','Marital separation','New family member','Business readjustment', 'Money change', 'Work change', 'Arguing','Mortgage', 'Child leaving home', 'Trouble with in-laws', 'Achievement']
#min_max_scaler=MinMaxScaler()
min_max_scaler=StandardScaler()
#hf_results[['death relative', 'Divorce', 'Imprisonment', 'injury illness', 'Marriage','Fired', 'Marital reconciliation', 'Retirement', 'Pregnancy','Sexual difficulties']] = min_max_scaler.fit_transform(hf_results[['death relative', 'Divorce', 'Imprisonment', 'injury illness', 'Marriage','Fired', 'Marital reconciliation', 'Retirement', 'Pregnancy','Sexual difficulties']])
#print(hf_results)
#'Unnamed: 0_x','Unnamed: 0_y']
#hf_results[['Death', 'Divorce', 'Imprisonment', 'injury illness', 'Marriage','Fired', 'Marital reconciliation', 'Retirement', 'Pregnancy','Sexual difficulties']]=10*hf_results[['Death', 'Divorce', 'Imprisonment', 'injury illness', 'Marriage','Fired', 'Marital reconciliation', 'Retirement', 'Pregnancy','Sexual difficulties']]
hf_results['score_max'] = hf_results.drop(cols_to_keep, axis=1).max(axis=1)
hf_results['unique_id']=hf_results['shot_id'].map(str)+'_'+hf_results['filename'].map(str)
#print(hf_results['unique_id'])
hf_results=hf_results.sort_values(by=['score_max'],ascending=False)
#print(hf_results.loc[hf_results['shot_id']==47])
#df.loc[df['column_name'] == some_value]
#print(hf_results[0:20])
#print(hf_results.columns)
filtered_results = pd.merge(hf_results,df_kept,on='unique_id')
filtered_result_char=filtered_results[filtered_results.character == 'Tanya Branning']
#filtered_result_char=filtered_result_char[filtered_result_char.transcript.str.len()>90]
#filtered_result_char.to_csv('top_sentences_Tanya')
#print(filtered_result_char.loc[[1]])
print(filtered_result_char[0:20])
print(len('Cos he s taken this very hard.'))
segs = {}
for file in recs:
seg_start = 0
segs[file] = []
sorted_ids = list(recs[file])
if seg_start == 0:
seg_start = sorted_ids[0]
cur = sorted_ids[0]
for i, e in enumerate(sorted_ids[1:]):
if e == cur + 1:
cur += 1
elif i == len(recs) - 1 and e == cur + 1:
segs[file].append((seg_start, e))
else:
if cur - seg_start > (0 if keep == 'all' else (keep * 2 + 1)):
segs[file].append((seg_start, cur))
seg_start = e
cur = e
segs['5245830105934359183.mp4']
# if I keep everything
s = """5555360238519252381.mp4 100
5531550228324592939.mp4 128
5544620672795594434.mp4 31
5547193787702629969.mp4 96
5549784941472309008.mp4 100
5552368364300855101.mp4 108
5555325449284154780.mp4 89
5534228999422914578.mp4 129
5542003749222140011.mp4 87
5544574287152993687.mp4 100
5539381671692122744.mp4 117"""
# print(s)
print(sum([len(segs[ep]) for ep in segs]))
timed_segs = {}
for ep in segs:
timed_segs[ep] = []
print(ep)
for s, e in segs[ep]:
try:
timed_segs[ep].append((shots_starts[ep][s], shots_ends[ep][e]))
except Exception as ex:
print('@', ep, s, e)
print(str(ex))
timed_segs['5245830105934359183.mp4']
segs['5245830105934359183.mp4']
total_segments = 0
total_shots = 0
for ep in segs:
total_segments += len(segs[ep])
total_shots += sum([e - s for s, e in segs[ep]])
print('Shots:', total_shots)
print('Segments:', total_segments)
pickle.dump(timed_segs, open(f'segs/segments_add{add}_keep{keep}_cut{cut}.pickle', 'wb'))
###Output
_____no_output_____ |
Course I/Алгоритмы Python/Part1/лекции/jupyter/lec03_dict_set_gens/lec3p1_v2019s2_v4.ipynb | ###Markdown
Lecture 3 "Dictionaries, sets, tuples and generators", part 1 Dictionaries Financial University under the Government of the Russian Federation; lecturer: S.V. Makrushin Dictionaries Dictionaries are collections of objects that are accessed by key rather than by index. Any immutable object can be used as a key, for example a number, a string or a tuple. Dictionary values may be objects of arbitrary data types and may be nested to any depth. Elements of a dictionary are stored in arbitrary order. To retrieve an element, you must supply the key that was used when the value was stored. Creating a dictionary Creating a dictionary with the `dict()` constructor. Supported call formats: * `dict(key1=value1[, ..., keyN=valueN])` * `dict(dictionary)` * `dict(list of (key, value) tuples)` * `dict(list of [key, value] lists)`
###Code
dict() # an empty dictionary
# with this form the key names must be valid Python identifiers:
dict(a=1, b=2)
# error: 1a is not a valid identifier
dict(1a=1, b=2)
# creating a dictionary without explicitly calling the constructor:
d0 = {'a': 1, 'b': 2}
# creating a dictionary from another dictionary (copying a dictionary):
d0c = dict(d0)
# ... from a list of (key, value) tuples:
dict([('a', 1), ('b', 1)])
# ... from a list of [key, value] lists:
dict([['a', 1], ['b', 1]])
###Output
_____no_output_____
###Markdown
The `zip()` function can be used to combine two lists into a list of tuples
###Code
k = ['a', 'b', 'c']
v = [1, 2, 3]
lkv = list(zip(k, v))
lkv
dict(lkv)
###Output
_____no_output_____
###Markdown
In fact, the `dict()` constructor works not only with lists, but with any iterable objects:
###Code
# dict can be applied directly to zip:
dict(zip(k, v))
zip(k, v) # zip() returns an iterable object, not a list!
list(zip('hello', [1, 2])) # zip() stops as soon as one of the iterables is exhausted
###Output
_____no_output_____
###Markdown
Creating a dictionary by listing all of its elements inside curly braces. This is the most commonly used way of creating a dictionary. A colon is placed between a key and its value, and the "key/value" pairs are separated by commas.
###Code
d1 = {} # an empty dictionary
d1
d2 = {'a':1, 'b':2}
d2
d2 = {'1a':1, 'b':2}
d2
###Output
_____no_output_____
###Markdown
Creating a dictionary by filling it in element by element.
###Code
d3 = {} # create an empty dictionary
d3['a'] = 1 # add an element to the dictionary
d3['b'] = 2
d3
# reminder: only immutable objects (tuples, strings, numbers, etc.) can be used as dictionary keys
d3[d2] = 3 # error: a dictionary is mutable and cannot be a key
###Output
_____no_output_____
###Markdown
Creating a dictionary with the `dict.fromkeys(sequence[, value])` method. The method creates a new dictionary whose keys are the elements of the sequence. If the second parameter is not given, the value of every dictionary element will be `None`.
###Code
d4 = dict.fromkeys(['a', 'b'], 1)
d4
d5 = dict.fromkeys(['a', 'b'])
d5
###Output
_____no_output_____
###Markdown
Creating a shallow copy with the `dict()` function:
###Code
d6 = dict(d4)
d6
d6 is d4 # the dictionaries are different objects
###Output
_____no_output_____
###Markdown
Creating a shallow copy with the `copy()` method
###Code
d7 = d6.copy()
d7
d7 is d6 # the dictionaries are different objects
###Output
_____no_output_____
###Markdown
Creating a deep copy of a dictionary
###Code
import copy
d8 = copy.deepcopy(d6)
d8
###Output
_____no_output_____
###Markdown
--- Operations on dictionaries
###Code
d9 = {1: 'int', 'a': 'string', (1, 2): 'tuple'}
d9[1]
d9['a']
d9[(1, 2)]
d9['c'] # error! Accessing a non-existent element
###Output
_____no_output_____
###Markdown
You can check whether a key exists with the `in` operator:* if the key is found, the value `True` is returned* otherwise - `False`.
###Code
'a' in d9
'c' in d9
###Output
_____no_output_____
###Markdown
Determining the size of a dictionary:
###Code
len(d9)
###Output
_____no_output_____
###Markdown
The `get(key[, default])` method makes it possible to avoid an error when the requested key is missing:* if the key is present in the dictionary, the method returns the corresponding value* if the key is missing, `None` or the value given as the second parameter is returned.
###Code
print(d9.get('a'))
print(d9.get('c')) # accessing a non-existent element
d9.get('c', 'default')
d9.get('a', 'default')
d91 = dict(zip('abcd', range(4)))
d91
# an approach that does NOT handle missing keys:
d91['a'] = d91['a'] + 1
d91
# an approach that does NOT handle missing keys:
d91['e'] = d91['e'] + 1
d91
# an approach that DOES handle missing keys:
d91['a'] = d91.get('a', 0) + 1
d91
d91['e'] = d91.get('e', 0) + 1
d91
###Output
_____no_output_____
###Markdown
In addition, you can use the `setdefault(key[, default])` method:* if the key is present in the dictionary, the method returns the corresponding value* if the key is missing, it inserts a new element with the value given as the second parameter * if the second parameter is not given, the value of the new element will be `None`.
###Code
d10 = dict(a=1, b=2)
d10
print(d10.setdefault('a'))
d10
print(d10.setdefault('c'))
d10
d10.setdefault('d', 3)
d10
###Output
_____no_output_____
###Markdown
Since dictionaries are a mutable data type, we can change an element by key. If no element with the given key exists in the dictionary, it will be added.
###Code
d11 = dict(a=1, b=2)
d11
d11['a'] = 11 # changing a value by key
d11
d11['c'] = 13 # adding a new key/value pair
d11
###Output
_____no_output_____
###Markdown
The `len()` function returns the number of keys in a dictionary:
###Code
len(d11)
###Output
_____no_output_____
###Markdown
An element can be removed from a dictionary with the `del` operator:
###Code
d12 = dict(a=1, b=2)
del d12['b']
d12
###Output
_____no_output_____
###Markdown
Iterating over the elements of a dictionary:
###Code
d13 = dict(a=5, b=12, c=9, d=5)
# iterating over the dictionary keys in a for loop:
for k in d13:
    print(f'key: {k}')
# an INEFFICIENT way to iterate over keys and values in a for loop:
for k in d13:
    print(f'key: {k}, value: { d13[k]}')
# iterating over the dictionary keys in a for loop:
for k in d13.keys():
    print(f'key: {k}')
# iterating over the keys in sorted order in a for loop:
for k in sorted(d13.keys()):
    print(f'key: {k}, value: { d13[k]}')
# getting a list of the keys
list(d13)
# getting a list of the keys
list(d13.keys())
# iterating over the dictionary values in a for loop:
for v in d13.values():
    print(f'value: {v}')
# a dictionary may contain duplicate values!
# getting a list of the values:
list(d13.values())
# iterating over key-value pairs in a for loop:
for k, v in d13.items():
    print(f'key: {k}, value: {v}')
# getting a list of (key, value) tuples:
list(d13.items())
###Output
_____no_output_____
###Markdown
The `dict.items()`, `dict.keys()` and `dict.values()` methods return __dictionary views__. A __dictionary view__ is a read-only iterable object that holds the items, keys or values of the dictionary, depending on which view was requested. There are two differences between views and ordinary iterable objects: * if the dictionary from which the view was obtained changes, the view reflects those changes. * key and item views support some set-like operations. Suppose we have a dictionary view `v` and a set or another dictionary view `x`; the following operations are supported for such a pair: * `v & x` intersection * `v | x` union * `v - x` difference * `v ^ x` symmetric difference How can we iterate over a dictionary and delete elements from it along the way?
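For example, a short added illustration of the set-like view operations (the variable names here are arbitrary):
```python
d = dict(a=1, b=2, c=3)
keys = d.keys()            # a view, not a copy
d['d'] = 4
print(keys)                # the view already reflects the new key 'd'
print(keys & {'a', 'd'})   # intersection with a set
print(keys - {'a'})        # difference with a set
```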
###Code
d13c = dict(d13)
d13c
# attempt at the naive approach:
for k, v in d13c.items():
    if k > 'b':
        del d13c[k] # you must NOT delete items from a dictionary while iterating over one of its views!
# fixing the deletion problem:
d13c = dict(d13)
for k, v in list(d13c.items()): # first materialize the dict.items() view into a list
    if k > 'b':
        del d13c[k] # during deletion the view itself is no longer used, only its COPY stored in the list
d13c
###Output
_____no_output_____
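###Markdown
As a quick illustration of the set-style operations on key views listed above, here is a small sketch (the dictionaries `da` and `db` are made up for this example):
###Code
da = dict(a=1, b=2, c=3)
db = dict(b=20, c=30, d=40)
# key views support set operations directly:
print(da.keys() & db.keys()) # intersection -> {'b', 'c'}
print(da.keys() | db.keys()) # union -> {'a', 'b', 'c', 'd'}
print(da.keys() - db.keys()) # difference -> {'a'}
print(da.keys() ^ db.keys()) # symmetric difference -> {'a', 'd'}
###Output
_____no_output_____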
###Markdown
`pop(key[, default])` - removes the item with the given key and returns its value. If the key is missing, the value of the second parameter is returned. If the key is missing and no second parameter is given, a `KeyError` exception is raised.
###Code
d14 = dict(a=5, b=12, c=9, d=5)
d14.pop('a')
d14
###Output
_____no_output_____
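###Markdown
A small sketch of the default-value behaviour of `pop()` described above (the key `'x'` is deliberately absent):
###Code
d14.pop('x', 'missing') # the key is absent, so the default 'missing' is returned
d14 # the dictionary is unchanged
###Output
_____no_output_____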
###Markdown
`popitem()` - removes an arbitrary item and returns it as a (key, value) tuple. If the dictionary is empty, a `KeyError` exception is raised.
###Code
d15 = dict(a=5, b=12, c=9, d=5)
d15.popitem()
d15
###Output
_____no_output_____
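###Markdown
A short sketch of the `KeyError` mentioned above when `popitem()` is called on an empty dictionary:
###Code
try:
    dict().popitem()
except KeyError as e:
    print('KeyError:', e)
###Output
_____no_output_____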
###Markdown
`clear()` - removes all items from the dictionary. The method does not return a value.
###Code
d15.clear()
d15
###Output
_____no_output_____
###Markdown
`update()` - adds items to the dictionary. The method modifies the current dictionary in place and returns nothing. Call forms:* `update(key1=value1[, ..., keyN=valueN])`* `update(dictionary)`* `update(iterable_of_key_value_pairs)`* `update(list_of_key_value_tuples)`. If an item with the given key is already present in the dictionary, its value is overwritten.
###Code
d16 = dict(a=5, b=12, c=9, d=5)
d16.update(c=3, d=4)
d16
###Output
_____no_output_____
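###Markdown
The other call forms of `update()` listed above work the same way; a brief sketch using a throwaway dictionary `d17`, updated from another dictionary and from a list of key-value tuples:
###Code
d17 = dict(a=1, b=2)
d17.update({'b': 20, 'c': 3}) # update from another dictionary (overwrites 'b')
d17.update([('d', 4), ('e', 5)]) # update from an iterable of key-value pairs
d17
###Output
_____no_output_____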
###Markdown
---- Sets A set is an unordered collection of unique elements against which other elements can be tested to determine whether they belong to the set. A set may contain only elements of immutable types, such as numbers, strings, and tuples. A set can be declared with the `set()` constructor.
###Code
s1 = set()
s1
# convert a list to a set:
s2 = set([1, 2, 3, 4, 5])
s2
# convert a list to a set (duplicates are removed):
s3 = set([1, 2, 3, 1, 2, 3])
s3
# using any iterable instead of a list:
s21 = set(range(5))
s21
###Output
_____no_output_____
###Markdown
In Python 3 a set can also be created by listing its elements inside curly braces. Note that empty curly braces create a dictionary, not a set; to create an empty set, use the `set()` function.
###Code
{1, 2, 3, 1, 2, 3}
# iterating over the elements of a set in a for loop:
for v in s2:
print(v, end=' ')
len(s2)
###Output
_____no_output_____
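###Markdown
A one-line check of the point above that empty curly braces create a dictionary rather than a set:
###Code
type({}), type(set())
###Output
_____no_output_____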
###Markdown
Example: iterating over the unique values in a list (iteration order is not preserved):
###Code
# build a list with repeated elements:
import random
rl = []
for _ in range(15):
rl.append(random.randint(1, 15))
rl
# iterate over the unique values of the list; iteration order is not preserved:
for e in set(rl):
print(e)
s3 = {1, 2, 3, 4}
s4 = {2, 4, 5, 6}
# union of sets:
s3 | s4
# union of sets:
s3.union(s4)
# add the elements of set s4 to set s5 in place:
s5 = {7, 8, 9}
s5.update(s4) # equivalent to: s5 |= s4
s5
s51 = s5.copy()
s51 |= s4
s51
# & and intersection() - set intersection:
s4 & s3 # s4.intersection(s3)
###Output
_____no_output_____
###Markdown
`a &= b` and `a.intersection_update(b)` - set a keeps only the elements that exist in both set a and set b
###Code
s7 = {2, 4, 5, 6}
s7 &= s3 # equivalent to: s7.intersection_update(s3)
s7
# set difference:
s4 - s3 # s4.difference(s3)
# remove from set a the elements that are present in set b:
s6 = {2, 4, 5, 6}
s6 -= s3 # equivalent to: s6.difference_update(s3)
s6
###Output
_____no_output_____
###Markdown
`^` and `symmetric_difference()` - return the elements of both sets that are present in only one of the two sets.
###Code
{3, 4, 5, 6} ^ {5, 6, 7, 8}
###Output
_____no_output_____
###Markdown
`a ^= b` and `a.symmetric_difference_update(b)` - set a ends up containing all elements of both sets except the elements they have in common. Set membership and comparison operators: `in` - checks whether an element is present in the set:
###Code
s3
2 in s3
7 in s3
###Output
_____no_output_____
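###Markdown
A small sketch of the in-place form `a ^= b` (equivalently `a.symmetric_difference_update(b)`) described above, mirroring the earlier `^` example and using a throwaway set `sa`:
###Code
sa = {3, 4, 5, 6}
sa ^= {5, 6, 7, 8} # equivalent to: sa.symmetric_difference_update({5, 6, 7, 8})
sa
###Output
_____no_output_____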
###Markdown
`==` - equality check (the sets contain the same elements):
###Code
set([1, 2, 3]) == set([1, 2, 3])
###Output
_____no_output_____
###Markdown
* `a <= b` and `a.issubset(b)` - check whether every element of set a is also in set b* `a < b` - checks whether every element of set a is in set b, with the additional requirement that a is not equal to b* `a >= b` and `a.issuperset(b)` - check whether every element of set b is also in set a* `a > b` - checks whether every element of set b is in set a, with the additional requirement that a is not equal to b* `a.isdisjoint(b)` - returns `True` if the intersection of sets a and b is empty (i.e. the sets have no elements in common) `copy()` - creates a copy of the set. Note that the `=` operator merely assigns a reference to the same object; it does not copy it. `add()` - adds an element to the set:
###Code
s8 = {1, 2, 3}
s8.add(4)
s8
###Output
_____no_output_____
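###Markdown
A brief sketch of the comparison operators and `copy()` listed above (the sets `sb`, `sc`, `sd` are made up for illustration):
###Code
sb = {1, 2}
sc = {1, 2, 3}
print(sb <= sc, sb < sc) # sb.issubset(sc) -> True, and sb != sc so strict subset is also True
print(sc >= sb, sc > sb) # sc.issuperset(sb) -> True, and strict superset is also True
print(sb.isdisjoint({4, 5})) # no common elements -> True
sd = sc.copy() # a real copy, unlike plain assignment with =
sd is sc
###Output
_____no_output_____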
###Markdown
`remove()` - removes an element from the set. If the element is not found, a `KeyError` exception is raised
###Code
s9 = {1, 2, 3}
s9.remove(2)
s9
s9.remove(4) # raises KeyError because 4 is not in the set
###Output
_____no_output_____
###Markdown
`discard()` - removes an element from the set if it is present
###Code
s10 = {1, 2, 3}
s10.discard(2)
s10
s10.discard(4)
###Output
_____no_output_____
###Markdown
`pop()` - removes an arbitrary element from the set and returns it. If the set is empty, a `KeyError` exception is raised
###Code
s11 = {1, 2, 3}
s11.pop()
s11
###Output
_____no_output_____
###Markdown
`clear()` - removes all elements from the set
###Code
s11.clear()
s11
###Output
_____no_output_____
###Markdown
---- Tuples Tuples, like lists, are ordered sequences of elements. Tuples are similar to lists in many ways, but with one crucial difference: a tuple cannot be modified. You can think of a tuple as a "read-only" list. A tuple can be created with the `tuple([iterable])` function, which converts any sequence into a tuple. If no argument is given, an empty tuple is created.
###Code
tuple() # creating an empty tuple
tuple([1, 2, 3, 4]) # creating a tuple from a list
tuple(range(10))
###Output
_____no_output_____
###Markdown
Creating a tuple with a parenthesized expression:
###Code
t1 = () # an empty tuple
t2 = (2,) # a one-element tuple (note the trailing comma!)
t3 = (2, 5, 6)
t4 = (1, 'my string', True)
t5 = (3, (2, 1), 'Ivanov') # nested tuples
# creating a list from a tuple:
lst9 = list((1, 2, 3, 4))
lst9
###Output
_____no_output_____
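###Markdown
A quick sketch of the "read-only" nature of tuples mentioned above: assigning to an element raises a `TypeError` (the tuple `t6` is made up for this example):
###Code
t6 = (1, 2, 3)
try:
    t6[0] = 10
except TypeError as e:
    print('TypeError:', e)
###Output
_____no_output_____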
###Markdown
In many cases the parentheses can be omitted when creating a tuple:
###Code
a = 1
b = 2
ab = (a, b)
ab
ab2 = a, b
ab2
for x, y in ((1, 1), (2, 1), (3, 5)):
print(f'x = {x}, y = {y}')
# a function's return value can be a tuple:
def my_func(a):
return a, a + 1
my_func(1)
###Output
_____no_output_____
###Markdown
When a sequence appears on the right-hand side of an assignment (tuples, in this case) and a tuple of names appears on the left, we say that the sequence on the right is __unpacked__.
###Code
a, b, c = (10, 11, 12)
print(f'a = {a}, b = {b}, c = {c}')
a, b, c = range(10, 13)
print(f'a = {a}, b = {b}, c = {c}')
x = 10
y = 1
y, x = x, y
print(f'x = {x}, y = {y}')
###Output
x = 1, y = 10
###Markdown
Partial unpacking (with a starred name) is also possible:
###Code
a, b, c = range (10, 16) # raises ValueError: too many values to unpack (six values, three names)
print('a = {}, b = {}, c = {}'.format(a, b, c))
a, b, *c = range (10, 16)
print(f'a = {a}, b = {b}, c = {c}')
# the starred name may appear in any position, but at most once:
a, *b, c = range (10, 16)
print(f'a = {a}, b = {b}, c = {c}')
a, *b, c = range (10, 12)
print(f'a = {a}, b = {b}, c = {c}')
for a, *b in [(1, 2, 3), (4, 5, 6, 7), (9,)]:
print(f'a = {a}, b = {b}')
###Output
a = 1, b = [2, 3]
a = 4, b = [5, 6, 7]
a = 9, b = []
|
2. Regression/Assingments/week-4-ridge-regression-assignment-1-blank.ipynb | ###Markdown
Regression Week 4: Ridge Regression (interpretation) In this notebook, we will run ridge regression multiple times with different L2 penalties to see which one produces the best fit. We will revisit the example of polynomial regression as a means to see the effect of L2 regularization. In particular, we will:* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression* Use matplotlib to visualize polynomial regressions* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression, this time with L2 penalty* Use matplotlib to visualize polynomial regressions under L2 regularization* Choose best L2 penalty using cross-validation.* Assess the final fit using test data.We will continue to use the House data from previous notebooks. (In the next programming assignment for this module, you will implement your own ridge regression learning algorithm using gradient descent.) Fire up graphlab create
###Code
import graphlab
###Output
_____no_output_____
###Markdown
Polynomial regression, revisited We build on the material from Week 3, where we wrote the function to produce an SFrame with columns containing the powers of a given input. Copy and paste the function `polynomial_sframe` from Week 3:
###Code
def polynomial_sframe(feature, degree):
# assume that degree >= 1
# initialize the SFrame:
poly_sframe = graphlab.SFrame()
# and set poly_sframe['power_1'] equal to the passed feature
poly_sframe['power_1'] = feature
# first check if degree > 1
if degree > 1:
# then loop over the remaining degrees:
# range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree
for power in range(2, degree+1):
# first we'll give the column a name:
name = 'power_' + str(power)
# then assign poly_sframe[name] to the appropriate power of feature
poly_sframe[name] = feature.apply(lambda x: x**power)
return poly_sframe
###Output
_____no_output_____
###Markdown
Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
sales = graphlab.SFrame('kc_house_data.gl/')
###Output
_____no_output_____
###Markdown
As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
###Code
sales = sales.sort(['sqft_living','price'])
sales.head()
###Output
_____no_output_____
###Markdown
Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using `polynomial_sframe()` and fit a model with these features. When fitting the model, use an L2 penalty of `1e-5`:
###Code
l2_small_penalty = 1e-5
###Output
_____no_output_____
###Markdown
Note: When we have so many features and so few data points, the solution can become highly numerically unstable, which can sometimes lead to strange unpredictable results. Thus, rather than using no regularization, we will introduce a tiny amount of regularization (`l2_penalty=1e-5`) to make the solution numerically stable. (In lecture, we discussed the fact that regularization can also help with numerical stability, and here we are seeing a practical example.)With the L2 penalty specified above, fit the model and print out the learned weights.Hint: make sure to add 'price' column to the new SFrame before calling `graphlab.linear_regression.create()`. Also, make sure GraphLab Create doesn't create its own validation set by using the option `validation_set=None` in this call.
###Code
poly15_data = polynomial_sframe(sales['sqft_living'], 15)
my_features15 = poly15_data.column_names() # get the name of the features
poly15_data['price'] = sales['price'] # add price to the data since it's the target
model15 = graphlab.linear_regression.create(poly15_data, target='price', features=my_features15,
l2_penalty=l2_small_penalty, validation_set=None)
model15.get("coefficients")
###Output
_____no_output_____
###Markdown
***QUIZ QUESTION: What's the learned value for the coefficient of feature `power_1`?*** Observe overfitting Recall from Week 3 that the polynomial fit of degree 15 changed wildly whenever the data changed. In particular, when we split the sales data into four subsets and fit the model of degree 15, the result came out to be very different for each subset. The model had a *high variance*. We will see in a moment that ridge regression reduces such variance. But first, we must reproduce the experiment we did in Week 3. First, split the sales data into four subsets of roughly equal size and call them `set_1`, `set_2`, `set_3`, and `set_4`. Use the `.random_split` function and make sure you set `seed=0`.
###Code
(semi_split1, semi_split2) = sales.random_split(.5, seed=0)
(set_1, set_2) = semi_split1.random_split(0.5, seed=0)
(set_3, set_4) = semi_split2.random_split(0.5, seed=0)
###Output
_____no_output_____
###Markdown
Next, fit a 15th degree polynomial on `set_1`, `set_2`, `set_3`, and `set_4`, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.Hint: When calling `graphlab.linear_regression.create()`, use the same L2 penalty as before (i.e. `l2_small_penalty`). Also, make sure GraphLab Create doesn't create its own validation set by using the option `validation_set = None` in this call.
###Code
poly15_set_1 = polynomial_sframe(set_1['sqft_living'], 15)
my_features15_set_1 = poly15_set_1.column_names() # get the name of the features
poly15_set_1['price'] = set_1['price'] # add price to the data since it's the target
model15_set_1 = graphlab.linear_regression.create(poly15_set_1, target='price', features=my_features15_set_1,
l2_penalty=l2_small_penalty, validation_set=None)
model15_set_1.get("coefficients").print_rows(num_rows=16)
plt.plot(poly15_set_1['power_1'], poly15_set_1['price'], '.',
poly15_set_1['power_1'], model15_set_1.predict(poly15_set_1), '-')
poly15_set_2 = polynomial_sframe(set_2['sqft_living'], 15)
my_features15_set_2 = poly15_set_2.column_names() # get the name of the features
poly15_set_2['price'] = set_2['price'] # add price to the data since it's the target
model15_set_2 = graphlab.linear_regression.create(poly15_set_2, target='price', features=my_features15_set_2,
l2_penalty=l2_small_penalty, validation_set=None)
model15_set_2.get("coefficients").print_rows(num_rows=16)
plt.plot(poly15_set_2['power_1'], poly15_set_2['price'], '.',
poly15_set_2['power_1'], model15_set_2.predict(poly15_set_2), '-')
poly15_set_3 = polynomial_sframe(set_3['sqft_living'], 15)
my_features15_set_3 = poly15_set_3.column_names() # get the name of the features
poly15_set_3['price'] = set_3['price'] # add price to the data since it's the target
model15_set_3 = graphlab.linear_regression.create(poly15_set_3, target='price', features=my_features15_set_3,
l2_penalty=l2_small_penalty, validation_set=None)
model15_set_3.get("coefficients").print_rows(num_rows=16)
plt.plot(poly15_set_3['power_1'], poly15_set_3['price'], '.',
poly15_set_3['power_1'], model15_set_3.predict(poly15_set_3), '-')
poly15_set_4 = polynomial_sframe(set_4['sqft_living'], 15)
my_features15_set_4 = poly15_set_4.column_names() # get the name of the features
poly15_set_4['price'] = set_4['price'] # add price to the data since it's the target
model15_set_4 = graphlab.linear_regression.create(poly15_set_4, target='price', features=my_features15_set_4,
l2_penalty=l2_small_penalty, validation_set=None)
model15_set_4.get("coefficients").print_rows(num_rows=16)
plt.plot(poly15_set_4['power_1'], poly15_set_4['price'], '.',
poly15_set_4['power_1'], model15_set_4.predict(poly15_set_4), '-')
###Output
_____no_output_____
###Markdown
The four curves should differ from one another a lot, as should the coefficients you learned.***QUIZ QUESTION: For the models learned in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature `power_1`?*** (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.) Ridge regression comes to the rescue Generally, whenever we see weights change so much in response to changes in the data, we believe the variance of our estimate to be large. Ridge regression aims to address this issue by penalizing "large" weights. (Weights of `model15` looked quite small, but they are not that small because the 'sqft_living' input is on the order of thousands.)With the argument `l2_penalty=1e5`, fit a 15th-order polynomial model on `set_1`, `set_2`, `set_3`, and `set_4`. Other than the change in the `l2_penalty` parameter, the code should be the same as the experiment above. Also, make sure GraphLab Create doesn't create its own validation set by using the option `validation_set = None` in this call.
###Code
l2_penalty = 1e5
poly15_set_1 = polynomial_sframe(set_1['sqft_living'], 15)
my_features15_set_1 = poly15_set_1.column_names() # get the name of the features
poly15_set_1['price'] = set_1['price'] # add price to the data since it's the target
model15_set_1 = graphlab.linear_regression.create(poly15_set_1, target='price', features=my_features15_set_1,
l2_penalty=l2_penalty, validation_set=None)
model15_set_1.get("coefficients").print_rows(num_rows=16)
plt.plot(poly15_set_1['power_1'], poly15_set_1['price'], '.',
poly15_set_1['power_1'], model15_set_1.predict(poly15_set_1), '-')
poly15_set_2 = polynomial_sframe(set_2['sqft_living'], 15)
my_features15_set_2 = poly15_set_2.column_names() # get the name of the features
poly15_set_2['price'] = set_2['price'] # add price to the data since it's the target
model15_set_2 = graphlab.linear_regression.create(poly15_set_2, target='price', features=my_features15_set_2,
l2_penalty=l2_penalty, validation_set=None)
model15_set_2.get("coefficients").print_rows(num_rows=16)
plt.plot(poly15_set_2['power_1'], poly15_set_2['price'], '.',
poly15_set_2['power_1'], model15_set_2.predict(poly15_set_2), '-')
poly15_set_3 = polynomial_sframe(set_3['sqft_living'], 15)
my_features15_set_3 = poly15_set_3.column_names() # get the name of the features
poly15_set_3['price'] = set_3['price'] # add price to the data since it's the target
model15_set_3 = graphlab.linear_regression.create(poly15_set_3, target='price', features=my_features15_set_3,
l2_penalty=l2_penalty, validation_set=None)
model15_set_3.get("coefficients").print_rows(num_rows=16)
plt.plot(poly15_set_3['power_1'], poly15_set_3['price'], '.',
poly15_set_3['power_1'], model15_set_3.predict(poly15_set_3), '-')
poly15_set_4 = polynomial_sframe(set_4['sqft_living'], 15)
my_features15_set_4 = poly15_set_4.column_names() # get the name of the features
poly15_set_4['price'] = set_4['price'] # add price to the data since it's the target
model15_set_4 = graphlab.linear_regression.create(poly15_set_4, target='price', features=my_features15_set_4,
l2_penalty=l2_penalty, validation_set=None)
model15_set_4.get("coefficients").print_rows(num_rows=16)
plt.plot(poly15_set_4['power_1'], poly15_set_4['price'], '.',
poly15_set_4['power_1'], model15_set_4.predict(poly15_set_4), '-')
###Output
_____no_output_____
###Markdown
These curves should vary a lot less, now that you applied a high degree of regularization.***QUIZ QUESTION: For the models learned with the high level of regularization in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature `power_1`?*** (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.) Selecting an L2 penalty via cross-validation Just like the polynomial degree, the L2 penalty is a "magic" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage: it leaves fewer observations available for training. **Cross-validation** seeks to overcome this issue by using all of the training set in a smart way.We will implement a kind of cross-validation called **k-fold cross-validation**. The method gets its name because it involves dividing the training set into k segments of roughly equal size. Similar to the validation set method, we measure the validation error with one of the segments designated as the validation set. The major difference is that we repeat the process k times as follows: set aside segment 0 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set; set aside segment 1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set; ...; set aside segment k-1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set. After this process, we compute the average of the k validation errors, and use it as an estimate of the generalization error. Notice that all observations are used for both training and validation, as we iterate over segments of data. To estimate the generalization error well, it is crucial to shuffle the training data before dividing them into segments. GraphLab Create has a utility function for shuffling a given SFrame. We reserve 10% of the data as the test set and shuffle the remainder. (Make sure to use `seed=1` to get a consistent answer.)
###Code
(train_valid, test) = sales.random_split(.9, seed=1)
train_valid_shuffled = graphlab.toolkits.cross_validation.shuffle(train_valid, random_seed=1)
###Output
_____no_output_____
###Markdown
Once the data is shuffled, we divide it into equal segments. Each segment should receive `n/k` elements, where `n` is the number of observations in the training set and `k` is the number of segments. Since the segment 0 starts at index 0 and contains `n/k` elements, it ends at index `(n/k)-1`. The segment 1 starts where the segment 0 left off, at index `(n/k)`. With `n/k` elements, the segment 1 ends at index `(n*2/k)-1`. Continuing in this fashion, we deduce that the segment `i` starts at index `(n*i/k)` and ends at `(n*(i+1)/k)-1`. With this pattern in mind, we write a short loop that prints the starting and ending indices of each segment, just to make sure you are getting the splits right.
###Code
n = len(train_valid_shuffled)
k = 10 # 10-fold cross-validation
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k - 1
print i, (start, end)
###Output
0 (0, 1938)
1 (1939, 3878)
2 (3879, 5817)
3 (5818, 7757)
4 (7758, 9697)
5 (9698, 11636)
6 (11637, 13576)
7 (13577, 15515)
8 (15516, 17455)
9 (17456, 19395)
###Markdown
Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use colon in square brackets. For instance, the following cell extracts rows 0 to 9 of `train_valid_shuffled`. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
###Code
train_valid_shuffled[0:10] # rows 0 to 9
###Output
_____no_output_____
###Markdown
Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the `train_valid_shuffled` dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.Extract the fourth segment (segment 3) and assign it to a variable called `validation4`.
###Code
start = (n*3) / 10
end = (n*4) / 10 - 1
validation4 = train_valid_shuffled[start:end+1]
###Output
_____no_output_____
###Markdown
To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.
###Code
print int(round(validation4['price'].mean(), 0))
###Output
536234
###Markdown
After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0:start) and (end+1:n) of the data and paste them together. SFrame has an `append()` method that pastes together two disjoint sets of rows originating from a common dataset. For instance, the following cell pastes together the first and last two rows of the `train_valid_shuffled` dataframe.
###Code
n = len(train_valid_shuffled)
first_two = train_valid_shuffled[0:2]
last_two = train_valid_shuffled[n-2:n]
print first_two.append(last_two)
###Output
+------------+---------------------------+-----------+----------+-----------+
| id | date | price | bedrooms | bathrooms |
+------------+---------------------------+-----------+----------+-----------+
| 2780400035 | 2014-05-05 00:00:00+00:00 | 665000.0 | 4.0 | 2.5 |
| 1703050500 | 2015-03-21 00:00:00+00:00 | 645000.0 | 3.0 | 2.5 |
| 4139480190 | 2014-09-16 00:00:00+00:00 | 1153000.0 | 3.0 | 3.25 |
| 7237300290 | 2015-03-26 00:00:00+00:00 | 338000.0 | 5.0 | 2.5 |
+------------+---------------------------+-----------+----------+-----------+
+-------------+----------+--------+------------+------+-----------+-------+------------+
| sqft_living | sqft_lot | floors | waterfront | view | condition | grade | sqft_above |
+-------------+----------+--------+------------+------+-----------+-------+------------+
| 2800.0 | 5900 | 1 | 0 | 0 | 3 | 8 | 1660 |
| 2490.0 | 5978 | 2 | 0 | 0 | 3 | 9 | 2490 |
| 3780.0 | 10623 | 1 | 0 | 1 | 3 | 11 | 2650 |
| 2400.0 | 4496 | 2 | 0 | 0 | 3 | 7 | 2400 |
+-------------+----------+--------+------------+------+-----------+-------+------------+
+---------------+----------+--------------+---------+-------------+
| sqft_basement | yr_built | yr_renovated | zipcode | lat |
+---------------+----------+--------------+---------+-------------+
| 1140 | 1963 | 0 | 98115 | 47.68093246 |
| 0 | 2003 | 0 | 98074 | 47.62984888 |
| 1130 | 1999 | 0 | 98006 | 47.55061236 |
| 0 | 2004 | 0 | 98042 | 47.36923712 |
+---------------+----------+--------------+---------+-------------+
+---------------+---------------+-----+
| long | sqft_living15 | ... |
+---------------+---------------+-----+
| -122.28583258 | 2580.0 | ... |
| -122.02177564 | 2710.0 | ... |
| -122.10144844 | 3850.0 | ... |
| -122.12606473 | 1880.0 | ... |
+---------------+---------------+-----+
[4 rows x 21 columns]
###Markdown
Extract the remainder of the data after *excluding* fourth segment (segment 3) and assign the subset to `train4`.
###Code
first = train_valid_shuffled[:start]
last = train_valid_shuffled[end+1:]
train4 = first.append(last)
###Output
_____no_output_____
###Markdown
To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.
###Code
print int(round(train4['price'].mean(), 0))
###Output
539450
###Markdown
Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) `k`, (ii) `l2_penalty`, (iii) dataframe, (iv) name of output column (e.g. `price`) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.* For each i in [0, 1, ..., k-1]: * Compute starting and ending indices of segment i and call 'start' and 'end' * Form validation set by taking a slice (start:end+1) from the data. * Form training set by appending slice (end+1:n) to the end of slice (0:start). * Train a linear model using training set just formed, with a given l2_penalty * Compute validation error using validation set just formed
###Code
def k_fold_cross_validation(k, l2_penalty, data, output_name, features_list):
    n = len(data) # number of observations in the data passed in (avoids relying on the global n)
    val_error = 0
    for i in range(k):
        start = (n*i)/k
        end = (n*(i+1))/k - 1
        validation = data[start:end+1]
        first = data[:start]
        last = data[end+1:]
        train = first.append(last)
        model = graphlab.linear_regression.create(train, target=output_name, features=features_list,
                                                  l2_penalty=l2_penalty, validation_set=None, verbose=False)
        predictions = model.predict(validation)
        residuals = validation[output_name] - predictions
        rss = sum(residuals * residuals)
        val_error += rss
    return val_error / k
###Output
_____no_output_____
###Markdown
Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following:* We will again be aiming to fit a 15th-order polynomial model using the `sqft_living` input* For `l2_penalty` in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, you can use this Numpy function: `np.logspace(1, 7, num=13)`.) * Run 10-fold cross-validation with `l2_penalty`* Report which L2 penalty produced the lowest average validation error.Note: since the degree of the polynomial is now fixed to 15, to make things faster, you should generate polynomial features in advance and re-use them throughout the loop. Make sure to use `train_valid_shuffled` when generating polynomial features!
###Code
import numpy as np
poly15_data = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)
my_features15 = poly15_data.column_names() # get the name of the features
poly15_data['price'] = train_valid_shuffled['price'] # add price to the data since it's the target
l2_penalties = np.logspace(1, 7, num=13)
validation_errors = []
for l2_penalty in l2_penalties:
validation_errors.append(k_fold_cross_validation(10, l2_penalty, poly15_data, "price", my_features15))
for i in range(13):
print l2_penalties[i], validation_errors[i]
###Output
10.0 4.91826427769e+14
31.6227766017 2.87504229919e+14
100.0 1.60908965822e+14
316.227766017 1.22090967326e+14
1000.0 1.21192264451e+14
3162.27766017 1.2395000929e+14
10000.0 1.36837175248e+14
31622.7766017 1.71728094842e+14
100000.0 2.2936143126e+14
316227.766017 2.52940568729e+14
1000000.0 2.58682548441e+14
3162277.66017 2.62819399742e+14
10000000.0 2.64889015378e+14
###Markdown
***QUIZ QUESTIONS: What is the best value for the L2 penalty according to 10-fold validation?*** You may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.
###Code
# Plot the l2_penalty values in the x axis and the cross-validation error in the y axis.
# Using plt.xscale('log') will make your plot more intuitive.
plt.plot(l2_penalties, validation_errors)
plt.xscale('log')
###Output
_____no_output_____
###Markdown
Once you found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of `l2_penalty`. This way, your final model will be trained on the entire dataset.
###Code
poly15_data = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)
my_features15 = poly15_data.column_names() # get the name of the features
poly15_data['price'] = train_valid_shuffled['price'] # add price to the data since it's the target
l2_penalty_best = 1000.0
model = graphlab.linear_regression.create(poly15_data, target="price", features=my_features15,
l2_penalty=l2_penalty_best, validation_set=None)
###Output
_____no_output_____
###Markdown
***QUIZ QUESTION: Using the best L2 penalty found above, train a model using all training data. What is the RSS on the TEST data of the model you learn with this L2 penalty? ***
###Code
poly15_test = polynomial_sframe(test['sqft_living'], 15)
predictions = model.predict(poly15_test)
residuals = predictions - test["price"]
rss = sum(residuals * residuals)
print rss
###Output
1.28780855058e+14
|
Exercises and solutions/chap2_exercises_solutions.ipynb | ###Markdown
Solutions for chapter 2 exercises Set up
###Code
# Common libraries
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
from statsmodels.formula.api import ols
import matplotlib.pyplot as plt
import seaborn as sns
#Loading the data
dat_df = pd.read_csv("AirCnC_MnM_exercises_data.csv")
###Output
_____no_output_____
###Markdown
0. The problem as seen by the PM This preliminary section shows the calculations that the PM ran, for reference.
###Code
# The booking rate is lower for customers who have seen the ad
dat_df.groupby('ad').agg(bkg_rate = ('bkd', lambda x: np.mean(x)))
###Output
_____no_output_____
###Markdown
0.b. What is the booking rate for customers who have seen the ad, restricting to customers considering an M&M property? Customers who haven’t seen the ad, with the same restriction?
###Code
# This remains true even when restricting to customers considering an M&M property
dat_df[(dat_df['mm']==1)].groupby('ad').agg(bkg_rate = ('bkd', lambda x: np.mean(x)))
###Output
_____no_output_____
###Markdown
1. Understanding the behaviors 1.a. What are the behavioral categories for the variables in the data (Income, Ad, MM, Bkd)?Income is a personal characteristic.Ad is a business behavior.MM is a customer behavior. Bkd is a customer behavior. 1.b. What is (are) the goal(s) of the ad?The goals of the ad are - to increase the percentage of customers who consider an M&M property- to increase the percentage of customers who book an M&M property
###Code
# The ad indeed increases the probability that a customer will consider an M&M property
mod_mm = smf.logit('mm ~ ad', data = dat_df)
res_mm = mod_mm.fit()
res_mm.summary()
#The ad increases the probability that a customer will book an M&M property
dat_df['bkd_mm'] = dat_df['bkd'] * dat_df['mm'] # Equal to 1 if and only if a customer books an M&M property
mod_bkd_mm = smf.logit('bkd_mm ~ ad', data = dat_df)
res_bkd_mm = mod_bkd_mm.fit()
res_bkd_mm.summary()
###Output
Optimization terminated successfully.
Current function value: 0.280956
Iterations 6
###Markdown
2. Resolving the mystery 2.a. How does income affect these behaviors?
###Code
# Income increases the probability that a customer will consider an M&M property
mod_mm = smf.logit('mm ~ income + ad', data = dat_df)
res_mm = mod_mm.fit()
res_mm.summary()
# Income increases the probability that a customer will book an M&M property
mod_bkd_mm = smf.logit('bkd_mm ~ income + ad', data = dat_df)
res_bkd_mm = mod_bkd_mm.fit()
res_bkd_mm.summary()
###Output
Optimization terminated successfully.
Current function value: 0.064053
Iterations 9
###Markdown
2.b. What is the average income of customers considering an M&M property after seeing the ad? Without seeing the ad?
###Code
# Customers considering an M&M property after seeing the ad have a lower income than customers
# considering an M&M property without having seen the ad
dat_df.groupby(['ad', 'mm']).agg(avg_income = ('income', lambda x: np.mean(x)))
###Output
_____no_output_____ |
P2-Advanced-Lane-Lines/examples/example.ipynb | ###Markdown
Advanced Lane Finding ProjectThe goals / steps of this project are the following:* Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.* Apply a distortion correction to raw images.* Use color transforms, gradients, etc., to create a thresholded binary image.* Apply a perspective transform to rectify binary image ("birds-eye view").* Detect lane pixels and fit to find the lane boundary.* Determine the curvature of the lane and vehicle position with respect to center.* Warp the detected lane boundaries back onto the original image.* Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.--- First, I'll compute the camera calibration using chessboard images
###Code
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
%matplotlib qt
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
# Make a list of calibration images
images = glob.glob('../camera_cal/calibration*.jpg')
# Step through the list and search for chessboard corners
for fname in images:
img = cv2.imread(fname)
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (9,6),None)
# If found, add object points, image points
if ret == True:
objpoints.append(objp)
imgpoints.append(corners)
# Draw and display the corners
img = cv2.drawChessboardCorners(img, (9,6), corners, ret)
cv2.imshow('img',img)
cv2.waitKey(500)
cv2.destroyAllWindows()
###Output
_____no_output_____ |
AAAI/Learnability/CIN/MLP/MNIST/MNIST_CIN_2000_MLP_m_5.ipynb | ###Markdown
generating CIN train and test data
###Code
np.random.seed(0)
bg_idx = np.random.randint(0,47335,m-1)
fg_idx = np.random.randint(0,12665)
bg_idx, fg_idx
# for i in background_data[bg_idx]:
# imshow(i)
imshow(torch.sum(background_data[bg_idx], axis = 0))
imshow(foreground_data[fg_idx])
tr_data = ( torch.sum(background_data[bg_idx], axis = 0) + foreground_data[fg_idx] )/m
tr_data.shape
imshow(tr_data)
foreground_label[fg_idx]
train_images = [] # list of CIN train images: each is the average of m-1 background images and one foreground image
train_label = []  # label of a CIN image = foreground class present in it
for i in range(desired_num):
np.random.seed(i)
bg_idx = np.random.randint(0,47335,m-1)
fg_idx = np.random.randint(0,12665)
tr_data = ( torch.sum(background_data[bg_idx], axis = 0) + foreground_data[fg_idx] ) / m
label = (foreground_label[fg_idx].item())
train_images.append(tr_data)
train_label.append(label)
train_images = torch.stack(train_images)
train_images.shape, len(train_label)
imshow(train_images[0])
test_images = [] # list of test images: each is a single foreground image scaled by 1/m
test_label = []  # label of a test image = its foreground class
for i in range(10000):
np.random.seed(i)
fg_idx = np.random.randint(0,12665)
tr_data = ( foreground_data[fg_idx] ) / m
label = (foreground_label[fg_idx].item())
test_images.append(tr_data)
test_label.append(label)
test_images = torch.stack(test_images)
test_images.shape, len(test_label)
imshow(test_images[0])
torch.sum(torch.isnan(train_images)), torch.sum(torch.isnan(test_images))
np.unique(train_label), np.unique(test_label)
###Output
_____no_output_____
###Markdown
creating dataloader
###Code
class CIN_Dataset(Dataset):
"""CIN_Dataset dataset."""
def __init__(self, list_of_images, labels):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.image = list_of_images
self.label = labels
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.image[idx] , self.label[idx]
batch = 250
train_data = CIN_Dataset(train_images, train_label)
train_loader = DataLoader( train_data, batch_size= batch , shuffle=True)
test_data = CIN_Dataset( test_images , test_label)
test_loader = DataLoader( test_data, batch_size= batch , shuffle=False)
train_loader.dataset.image.shape, test_loader.dataset.image.shape
###Output
_____no_output_____
###Markdown
model
###Code
class Classification(nn.Module):
def __init__(self):
super(Classification, self).__init__()
self.fc1 = nn.Linear(28*28, 50)
self.fc2 = nn.Linear(50, 2)
torch.nn.init.xavier_normal_(self.fc1.weight)
torch.nn.init.zeros_(self.fc1.bias)
torch.nn.init.xavier_normal_(self.fc2.weight)
torch.nn.init.zeros_(self.fc2.bias)
def forward(self, x):
x = x.view(-1, 28*28)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
###Output
_____no_output_____
###Markdown
training
###Code
torch.manual_seed(12)
classify = Classification().double()
classify = classify.to("cuda")
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer_classify = optim.Adam(classify.parameters(), lr=0.001 ) #, momentum=0.9)
correct = 0
total = 0
count = 0
flag = 1
with torch.no_grad():
for data in train_loader:
inputs, labels = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d train images: %f %%' % ( desired_num , 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
correct = 0
total = 0
count = 0
flag = 1
with torch.no_grad():
for data in test_loader:
inputs, labels = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d test images: %f %%' % ( 10000 , 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
nos_epochs = 200
tr_loss = []
for epoch in range(nos_epochs): # loop over the dataset multiple times
epoch_loss = []
cnt=0
iteration = desired_num // batch
running_loss = 0
#training data set
for i, data in enumerate(train_loader):
inputs, labels = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
inputs = inputs.double()
# zero the parameter gradients
optimizer_classify.zero_grad()
outputs = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
# print(outputs)
# print(outputs.shape,labels.shape , torch.argmax(outputs, dim=1))
loss = criterion(outputs, labels)
loss.backward()
optimizer_classify.step()
running_loss += loss.item()
mini = 1
if cnt % mini == mini-1: # print every 40 mini-batches
# print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / mini))
epoch_loss.append(running_loss/mini)
running_loss = 0.0
cnt=cnt+1
tr_loss.append(np.mean(epoch_loss))
if(np.mean(epoch_loss) <= 0.001):
break;
else:
print('[Epoch : %d] loss: %.3f' %(epoch + 1, np.mean(epoch_loss) ))
print('Finished Training')
plt.plot(tr_loss)
correct = 0
total = 0
count = 0
flag = 1
with torch.no_grad():
for data in train_loader:
inputs, labels = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d train images: %f %%' % ( desired_num , 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
correct = 0
total = 0
count = 0
flag = 1
with torch.no_grad():
for data in test_loader:
inputs, labels = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d train images: %f %%' % ( 10000 , 100 * correct / total))
print("total correct", correct)
print("total test set images", total)
###Output
_____no_output_____ |
Introduction to Data Science - kNN.ipynb | ###Markdown
Introduction to Data Science - kNN Imports
###Code
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from sklearn import neighbors, datasets
from matplotlib.colors import ListedColormap
from pandas import DataFrame
%matplotlib inline
plt.style.use('dark_background')
from matplotlib import animation, rc
from IPython.display import HTML
###Output
_____no_output_____
###Markdown
Load Data
###Code
df = pd.read_table('data/anaconda.dat')
df['class'] = [0 if x == 'F' else 1 for x in df['gender']]
df.head()
df.tail()
###Output
_____no_output_____
###Markdown
Plot It
###Code
colors = ['red', 'blue']
ax = df.plot.scatter(x='snout_length', y='weight', c='class', cmap=ListedColormap(colors), \
figsize=(14, 8), grid = True, legend = True, title='Anaconda Snout-Vent Length vs Weight')
ax.set_xlabel('Snout-Vent Length')
ax.set_ylabel('Weight')
ax.get_figure().subplots_adjust(bottom=0.25)
plt.show()
###Output
_____no_output_____
###Markdown
Plotting Decision Boundaries
###Code
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
data = np.array([df['snout_length'], df['weight'], df['class']]).T
np.random.shuffle(data)
test_size = 4
X = data[:-test_size, 0:2]
y = np.array(data[:-test_size, 2])
h = 1
# Generate mesh values
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
# fig, ax = plt.subplots(nrows=2, ncols=4)
fig = plt.figure(figsize=(24, 44))
cnt = 1
for num_neigh in range(5, 35, 4):
clf = neighbors.KNeighborsClassifier(num_neigh, weights='uniform')
clf.fit(X, y)
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.subplot(4, 2, cnt)
# plt.figure(2, 4, cnt)
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold,
edgecolor='k', s=20)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("Anaconda classification (k = {})".format(num_neigh))
cnt += 1
plt.show()
###Output
_____no_output_____ |
Python399MoocCourse/class3/3.6.ipynb | ###Markdown
Indexing and sorting
###Code
import numpy as np

x = np.random.normal(0, 1, size=1000000)
np.min(x)
np.argmin(x) # index of the minimum value
np.random.shuffle(x)
x
np.sort(x)
np.argsort(x)
np.partition(x, 3)
###Output
_____no_output_____ |
Machine learning/Choosing the right estimator for a regression problem.ipynb | ###Markdown
Choosing the right estimator for our problem___ 1. Regression problem
###Code
# Load the libraries
from sklearn.datasets import load_boston
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.kernel_approximation import RBFSampler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import pandas as pd
import numpy as np
# Import the data
boston = load_boston()
boston_df = pd.DataFrame(boston['data'], columns=boston['feature_names'])
boston_df['TARGET'] = pd.Series(boston['target'])
boston_df.head()
# Number of samples
len(boston_df)
# Setup seed
np.random.seed(2)
# Create the data
X = boston_df.drop('TARGET', axis=1)
y = boston_df['TARGET']
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=.2)
# Try Ridge Regression model
rng = Ridge()
rng.fit(X_train,y_train)
rng.score(X_test, y_test)
# Try Random Forest Regressor
model = RandomForestRegressor()
model.fit(X_train, y_train)
model.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
2. Classification problem
###Code
# Load the data
heart = pd.read_csv('../Data/heart.csv')
heart.head()
# Number of samples
len(heart)
# Prepare data
X = heart.drop('target', axis=1)
y = heart['target']
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=.2,
random_state=2)
# Try Linear SVC
clf = LinearSVC(max_iter=50000, random_state=2)
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
###Output
C:\Users\olgar\anaconda3\lib\site-packages\sklearn\svm\_base.py:985: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
warnings.warn("Liblinear failed to converge, increase "
###Markdown
The Linear SVC is too slow if maximum iteration is increased.
###Code
# Try Random Forest Classifier
rfc = RandomForestClassifier(random_state=2)
rfc.fit(X_train, y_train)
rfc.score(X_test, y_test)
# Predict the target and compare to the true labels
y_pred = rfc.predict(X_test)
y_pred
# different way to calculate the score for the model
np.mean(y_pred == y_test)
# the same but using sklearn metrics
accuracy_score(y_test, y_pred)
# predict with predict_proba to get probability estimates
rfc.predict_proba(X_test)[:5], rfc.predict(X_test)[:5]
###Output
_____no_output_____ |
week4/day4/exercises/pandas/6_Stats/Wind_Stats/Solutions.ipynb | ###Markdown
Wind Statistics Introduction:The data have been modified to contain some missing values, identified by NaN. Using pandas should make this exercise easier, in particular for the bonus question.You should be able to perform all of these operations without using a for loop or other looping construct.1. The data in 'wind.data' has the following format:
###Code
"""
Yr Mo Dy RPT VAL ROS KIL SHA BIR DUB CLA MUL CLO BEL MAL
61 1 1 15.04 14.96 13.17 9.29 NaN 9.87 13.67 10.25 10.83 12.58 18.50 15.04
61 1 2 14.71 NaN 10.83 6.50 12.62 7.67 11.50 10.04 9.79 9.67 17.54 13.83
61 1 3 18.50 16.88 12.33 10.13 11.17 6.17 11.25 NaN 8.50 7.67 12.75 12.71
"""
###Output
_____no_output_____ |
reports/c12-deploy.ipynbc12-deploy_2021_10_13.ipynb | ###Markdown
0.0. Imports
###Code
import re
import sqlite3
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import plotly.express as px
from sqlalchemy import create_engine
from umap.umap_ import UMAP
from scipy.cluster import hierarchy as hc
from sklearn import cluster
from sklearn import metrics
from sklearn import preprocessing as pp
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn import ensemble as en
from sklearn.mixture import GaussianMixture as gm
###Output
_____no_output_____
###Markdown
0.1. Helper Functions
###Code
def descriptive_statistics(num_attr):
# Central Tendency: mean, median
c1 = pd.DataFrame(num_attr.apply(np.mean))
c2 = pd.DataFrame(num_attr.apply(np.median))
    # Dispersion: min, max, range, std, skew, kurtosis
d1 = pd.DataFrame(num_attr.apply(min))
d2 = pd.DataFrame(num_attr.apply(max))
d3 = pd.DataFrame(num_attr.apply(lambda x: x.max() - x.min()))
d4 = pd.DataFrame(num_attr.apply(lambda x: x.std()))
# Measures of Shape
s1 = pd.DataFrame(num_attr.apply(lambda x: x.skew()))
s2 = pd.DataFrame(num_attr.apply(lambda x: x.kurtosis()))
# concat
m = pd.concat([d1,d2,d3,c1,c2,d4,s1,s2], axis=1).reset_index()
m.columns = ['attributes', 'min', 'max', 'range', 'mean', 'median', 'std', 'skew', 'kurtosis']
return m
###Output
_____no_output_____
###Markdown
0.2. Load Data
###Code
path = '/home/cid/repos/clustering-high-value-customers-identification/'
df_raw = pd.read_csv(path + '/data/raw/Ecommerce.csv', encoding='latin1')
# drop extra column
df_raw = df_raw.drop('Unnamed: 8', axis=1)
###Output
_____no_output_____
###Markdown
1.0. Data Description
###Code
df1 = df_raw.copy()
###Output
_____no_output_____
###Markdown
1.1. Rename Columns
###Code
cols_new = ['invoice_no', 'stock_code', 'description', 'quantity', 'invoice_date', 'unit_price', 'customer_id', 'country']
df1.columns = cols_new
###Output
_____no_output_____
###Markdown
1.2. Data Dimensions
###Code
print('Number of Rows: {}'.format(df1.shape[0]))
print('Number of Columns: {}'.format(df1.shape[1]))
###Output
Number of Rows: 541909
Number of Columns: 8
###Markdown
1.3. Data Types
###Code
df1.dtypes
###Output
_____no_output_____
###Markdown
1.4. Check NA
###Code
df1.isna().sum()
###Output
_____no_output_____
###Markdown
1.5. Replace NA
###Code
df_missing = df1.loc[df1['customer_id'].isna(), :]
df_not_missing = df1.loc[~df1['customer_id'].isna(), :]
df_backup = pd.DataFrame(df_missing['invoice_no'].drop_duplicates())
df_backup['customer_id'] = np.arange(19000, 19000+len(df_backup), 1)
# merge
df1 = pd.merge(df1, df_backup, how='left', on='invoice_no' )
# coalesce
df1['customer_id'] = df1['customer_id_x'].combine_first(df1['customer_id_y'])
df1 = df1.drop(['customer_id_x', 'customer_id_y'], axis=1)
df1.isna().sum()
###Output
_____no_output_____
###Markdown
1.6. Change dtypes
###Code
df1.dtypes
df1['invoice_date'] = pd.to_datetime(df1['invoice_date'])
df1['customer_id'] = df1['customer_id'].astype(int)
###Output
_____no_output_____
###Markdown
1.7. Descriptive Statistics
###Code
num_att = df1.select_dtypes(include=['int64', 'float64'])
cat_att = df1.select_dtypes(include=['object'])
###Output
_____no_output_____
###Markdown
1.7.1. Numerical Attributes
###Code
descriptive_statistics(num_att)
###Output
_____no_output_____
###Markdown
1.7.2. Categorical Attributes
###Code
cat_att.describe(include=['O'])
###Output
_____no_output_____
###Markdown
2.0. Data Filtering
###Code
df2 = df1.copy()
###Output
_____no_output_____
###Markdown
2.1. Filter Columns
###Code
cols_drop = ['description']
df2 = df2.drop(cols_drop, axis=1)
###Output
_____no_output_____
###Markdown
2.2. Filter Rows
###Code
# Numerical Attributes
df2 = df2.loc[df2['unit_price'] >= 0.4, :]
# Categorical Attributes
df2 = df2.loc[~df2['stock_code'].isin(['POST', 'D', 'DOT', 'M', 'S', 'AMAZONFEE', 'm', 'DCGSSBOY', 'DCGSSGIRL', 'PADS', 'B', 'CRUK'] ), :]
# map
df2 = df2.loc[~df2['country'].isin(['European Community', 'Unspecified' ]), :]
# bad user
df2 = df2[~df2['customer_id'].isin( [16446] )]
# quantity
df2_returns = df2.loc[df2['quantity'] < 0, :]
df2_purchases = df2.loc[df2['quantity'] >= 0, :]
###Output
_____no_output_____
###Markdown
3.0. Feature Engineering
###Code
df3 = df2.copy()
###Output
_____no_output_____
###Markdown
3.1. Feature Creation
###Code
drop_cols = ['invoice_no', 'stock_code', 'quantity', 'invoice_date', 'unit_price', 'country']
df_ref = df3.drop(drop_cols, axis=1).drop_duplicates(ignore_index=True)
df2_purchases.loc[:, ['gross_revenue']] = (df2_purchases.loc[:, 'quantity'] * df2_purchases.loc[:, 'unit_price'])
###Output
/home/cid/.pyenv/versions/3.8.0/envs/clustering-high-value-customers-identification/lib/python3.8/site-packages/pandas/core/indexing.py:1773: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self._setitem_single_column(ilocs[0], value, pi)
###Markdown
3.1.1. Gross Revenue
###Code
df_monetary = df2_purchases.loc[:, ['customer_id', 'gross_revenue']].groupby('customer_id').sum().reset_index() # .rename(columns={'gross_revenue': 'monetary'})
df_ref = pd.merge(df_ref, df_monetary, how='left', on='customer_id')
df_ref.isna().sum()
###Output
_____no_output_____
###Markdown
3.1.2. Recency
###Code
df_recency = df2_purchases.loc[:, ['customer_id', 'invoice_date']].groupby('customer_id').max().reset_index()
df_recency['recency_days'] = (df_recency['invoice_date'].max() - df_recency['invoice_date']).dt.days
df_ref = pd.merge(df_ref, df_recency[['customer_id', 'recency_days']], how='left', on='customer_id')
df_ref.isna().sum()
###Output
_____no_output_____
###Markdown
3.1.3. Quantity of Purchases (Invoices)
###Code
df_freq = df2_purchases[['customer_id', 'invoice_no']].drop_duplicates().groupby('customer_id').count().reset_index().\
rename(columns={'invoice_no': 'qtde_invoices'})
df_ref = pd.merge(df_ref, df_freq, how='left', on='customer_id')
df_ref.isna().sum()
###Output
_____no_output_____
###Markdown
3.1.4. Total Quantity of Items Purchased
###Code
df_freq = (df2_purchases.loc[:, ['customer_id', 'quantity']].groupby('customer_id')
.sum()
.reset_index()
.rename(columns={'quantity': 'qtde_items'}))
df_ref = pd.merge(df_ref, df_freq, how='left', on='customer_id')
df_ref.isna().sum()
###Output
_____no_output_____
###Markdown
3.1.5. Quantity of products purchased
###Code
df_freq = ( df2_purchases.loc[:, ['customer_id', 'stock_code']].groupby('customer_id')
.count()
.reset_index()
.rename(columns={'stock_code': 'qtde_products'}))
df_ref = pd.merge(df_ref, df_freq, how='left', on='customer_id')
df_ref.isna().sum()
###Output
_____no_output_____
###Markdown
3.1.6. Average Ticket
###Code
df_avg_ticket = (df2_purchases.loc[:, ['customer_id','gross_revenue']].groupby('customer_id')
.mean()
.reset_index()
.rename(columns={'gross_revenue': 'avg_ticket'}))
df_ref = pd.merge(df_ref, df_avg_ticket, how='left', on='customer_id')
df_ref.isna().sum()
###Output
_____no_output_____
###Markdown
3.1.7. Average Recency Days
###Code
# df_aux = df2[['customer_id', 'invoice_date']].drop_duplicates().sort_values(['customer_id', 'invoice_date'], ascending=[False, False])
# df_aux['next_customer_id'] = df_aux['customer_id'].shift()
# df_aux['previus_date'] = df_aux['invoice_date'].shift()
# df_aux['avg_recency_days'] = df_aux.apply( lambda x: (x['invoice_date'] - x['previus_date']).days if x['customer_id'] == x['next_customer_id'] else np.nan, axis=1)
# df_aux['avg_recency_days'] = df_aux['avg_recency_days'] * -1
# df_aux = df_aux.drop(columns=['invoice_date', 'next_customer_id', 'previus_date'], axis=1).dropna()
# df_avg_recency_days = df_aux.groupby( 'customer_id' ).mean().reset_index()
# df_ref = pd.merge(df_ref, df_avg_recency_days, on='customer_id', how='left')
# df_ref.isna().sum()
###Output
_____no_output_____
###Markdown
3.1.8. Frequency Purchase
###Code
df_aux = (df2_purchases[['customer_id', 'invoice_no', 'invoice_date']].drop_duplicates()
.groupby('customer_id')
.agg( max_ = ('invoice_date', 'max'),
min_ = ('invoice_date', 'min'),
days = ('invoice_date', lambda x: ((x.max() - x.min()).days) + 1 ),
buy_ = ( 'invoice_no', 'count' ))).reset_index()
df_aux['frequency'] = df_aux[['buy_', 'days']].apply( lambda x: x['buy_'] / x['days'] if x['days'] != 0 else 0, axis=1 )
df_ref = pd.merge(df_ref, df_aux[['customer_id', 'frequency']], on='customer_id', how='left')
df_ref.isna().sum()
###Output
_____no_output_____
###Markdown
3.1.9. Number of Returns
###Code
df_returns = df2_returns[['quantity', 'customer_id']].groupby('customer_id').sum().reset_index().rename(columns={'quantity': 'qtde_returns'})
df_returns['qtde_returns'] = df_returns['qtde_returns'] * -1
df_ref = pd.merge(df_ref, df_returns, on='customer_id', how='left')
df_ref['qtde_returns'].fillna(0, inplace=True)
df_ref.isna().sum()
###Output
_____no_output_____
###Markdown
3.1.10. Basket Size
###Code
df_aux = (df2_purchases[['customer_id', 'invoice_no', 'quantity']].groupby('customer_id')
.agg( n_purchase=('invoice_no', 'nunique'),
n_products=('quantity', 'sum'))).reset_index()
df_aux['avg_basket_size'] = df_aux['n_products'] / df_aux['n_purchase']
df_ref = pd.merge(df_ref, df_aux[['avg_basket_size', 'customer_id']], on='customer_id', how='left')
df_ref.isna().sum()
###Output
_____no_output_____
###Markdown
3.1.11. Unique Basket Size
###Code
df_aux = (df2_purchases.loc[:, ['customer_id', 'invoice_no', 'stock_code']].groupby( 'customer_id' )
.agg(n_purchase=('invoice_no', 'nunique'),
n_products=('stock_code', 'nunique'))).reset_index()
df_aux['avg_unique_basket_size'] = df_aux['n_products'] / df_aux['n_purchase']
df_ref = pd.merge(df_ref, df_aux[['avg_unique_basket_size', 'customer_id']], on='customer_id', how='left')
df_ref.isna().sum()
###Output
_____no_output_____
###Markdown
4.0. EDA
###Code
df_ref = df_ref.dropna()
df4 = df_ref.copy()
###Output
_____no_output_____
###Markdown
4.3. Space Study
###Code
# select dataset
cols_selected = ['customer_id', 'gross_revenue', 'recency_days', 'qtde_products', 'frequency', 'qtde_returns']
df43 = df4[cols_selected].copy()
mms = pp.MinMaxScaler()
df43['gross_revenue'] = mms.fit_transform( df43[['gross_revenue']].values )
df43['recency_days'] = mms.fit_transform( df43[['recency_days']].values )
df43['qtde_products'] = mms.fit_transform( df43[['qtde_products']].values )
df43['frequency'] = mms.fit_transform( df43[['frequency']].values )
df43['qtde_returns'] = mms.fit_transform( df43[['qtde_returns']].values )
###Output
_____no_output_____
###Markdown
4.3.4. Tree-Based Embedding
###Code
X = df43.drop(columns=['customer_id', 'gross_revenue'])
y = df43['gross_revenue']
# model definition
rf_model = en.RandomForestRegressor( n_estimators=100, random_state=42 )
# model training
rf_model.fit( X, y)
# leaf
df_leaf = pd.DataFrame( rf_model.apply( X ) )
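# rf_model.apply(X) returns, for every sample, the index of the leaf it lands in
# for each tree; this leaf-index matrix is the space UMAP is run on
# ("tree-based embedding").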
# create dataframe tree
df_tree = pd.DataFrame()
# reduce dimensionality
reducer = UMAP(random_state=42)
embedding = reducer.fit_transform( df_leaf )
# embedding
df_tree['embedding_x'] = embedding[:, 0]
df_tree['embedding_y'] = embedding[:, 1]
###Output
/home/cid/.pyenv/versions/3.8.0/envs/clustering-high-value-customers-identification/lib/python3.8/site-packages/sklearn/manifold/_spectral_embedding.py:245: UserWarning: Graph is not fully connected, spectral embedding may not work as expected.
warnings.warn("Graph is not fully connected, spectral embedding"
###Markdown
5.0. Data Preparation
###Code
df5 = df_tree.copy()
# df5.to_csv( '../src/data/tree_based_embedding.csv', index=False )
###Output
_____no_output_____
###Markdown
7.0. Hyperpameter Fine Tuning
###Code
X = df5.copy()
X.head()
###Output
_____no_output_____
###Markdown
8.0. Model Training
###Code
# model definition
k = 8
gmm_model = gm(n_components=k, n_init=100, random_state=42)
# model training
gmm_model.fit(X)
# model predict
labels = gmm_model.predict(X)
###Output
_____no_output_____
###Markdown
8.2. Cluster Validation
###Code
print("SS value: {}".format(metrics.silhouette_score( X, labels, metric='euclidean' )))
###Output
SS value: 0.37607449293136597
###Markdown
9.0. Cluster Analysis
###Code
df92 = df4[cols_selected].copy()
df92['cluster'] = labels
# change dtypes
df92['recency_days'] = df92['recency_days'].astype(int)
df92['qtde_products'] = df92['qtde_products'].astype(int)
df92['qtde_returns'] = df92['qtde_returns'].astype(int)
###Output
_____no_output_____
###Markdown
9.2. Cluster Profile
###Code
# cluster - qt_users - per_user
df_cluster = df92[['customer_id', 'cluster']].groupby('cluster').count().reset_index().rename(columns={'customer_id': 'qt_users'})
df_cluster['per_user'] = 100 * (df_cluster['qt_users'] / df_cluster['qt_users'].sum())
# gross_revenue
monetary = df92[['gross_revenue', 'cluster']].groupby('cluster').mean().reset_index()
df_cluster = pd.merge(df_cluster, monetary, how='left', on='cluster')
# recency_days
recency_days = df92[['recency_days', 'cluster']].groupby('cluster').mean().reset_index()
df_cluster = pd.merge(df_cluster, recency_days, how='left', on='cluster')
# qtde_products
qtde_products = df92[['qtde_products', 'cluster']].groupby('cluster').mean().reset_index()
df_cluster = pd.merge(df_cluster, qtde_products, how='left', on='cluster')
# frequency
frequency = df92[['frequency', 'cluster']].groupby('cluster').mean().reset_index()
df_cluster = pd.merge(df_cluster, frequency, how='left', on='cluster')
# qtde_returns
qtde_returns = df92[['qtde_returns', 'cluster']].groupby('cluster').mean().reset_index()
df_cluster = pd.merge(df_cluster, qtde_returns, how='left', on='cluster')
df_cluster.sort_values('gross_revenue', ascending=False).style.highlight_max( color='lightgreen', axis=0 )
# 1 Cluster Insiders
# 5 Cluster More Products
# 4 Cluster Spend Money
# 2 Cluster Even More Products
# 6 Cluster Less Days
# 0 Cluster Less 1k
# 7 Cluster Stop Returners
# 3 Cluster More Buy
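# Minimal sketch (assumption: the cluster-number-to-name mapping noted above,
# e.g. cluster 1 = Insiders) of how one cluster's customers can be pulled out:
insiders_ids = df92.loc[df92['cluster'] == 1, 'customer_id']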
###Output
_____no_output_____
###Markdown
**Cluster 01 (Insider candidate)**- Number of customers: 468 (16% of customers)- Average gross revenue: 8836- Average recency: 21 days- Average number of products purchased: 424 products- Purchase frequency: 0.09 products/day- Average revenue: $8,836.13 10.0. EDA
###Code
df10 = df92.copy()
###Output
_____no_output_____
###Markdown
11.0. Deploy to Product
###Code
df11 = df10.copy()
###Output
_____no_output_____
###Markdown
11.1. Insert into SQLITE
###Code
# create table
query_create_table_insiders = """
CREATE TABLE insiders (
customer_id INTEGER,
gross_revenue REAL,
recency_days INTEGER,
qtde_products INTEGER,
frequency REAL,
qtde_returns INTEGER,
cluster INTEGER
)
"""
conn = sqlite3.connect( 'insiders_db.sqlite' )
conn.execute( query_create_table_insiders )
conn.commit()
conn.close()
# Drop Table
query_drop_table = """
DROP TABLE insiders
"""
# insert data
conn = create_engine( 'sqlite:///insiders_db.sqlite' )
df92.to_sql( 'insiders', con=conn, if_exists='append', index=False )
# consulting database
query = """
SELECT * FROM insiders
"""
conn = create_engine( 'sqlite:///insiders_db.sqlite' )
df = pd.read_sql_query( query, conn )
df.head()
###Output
_____no_output_____ |
Python_Data_Science_Handbook/03.02-Data-Indexing-and-Selection.ipynb | ###Markdown
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).**The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!*
###Code
%qtconsole --style solarized-dark
###Output
_____no_output_____
###Markdown
Data Indexing and Selection In [Chapter 2](02.00-Introduction-to-NumPy.ipynb), we looked in detail at methods and tools to access, set, and modify values in NumPy arrays.These included indexing (e.g., ``arr[2, 1]``), slicing (e.g., ``arr[:, 1:5]``), masking (e.g., ``arr[arr > 0]``), fancy indexing (e.g., ``arr[0, [1, 5]]``), and combinations thereof (e.g., ``arr[:, [1, 5]]``).Here we'll look at similar means of accessing and modifying values in Pandas ``Series`` and ``DataFrame`` objects.If you have used the NumPy patterns, the corresponding patterns in Pandas will feel very familiar, though there are a few quirks to be aware of.We'll start with the simple case of the one-dimensional ``Series`` object, and then move on to the more complicated two-dimesnional ``DataFrame`` object. Data Selection in SeriesAs we saw in the previous section, a ``Series`` object acts in many ways like a one-dimensional NumPy array, and in many ways like a standard Python dictionary.If we keep these two overlapping analogies in mind, it will help us to understand the patterns of data indexing and selection in these arrays. Series as dictionaryLike a dictionary, the ``Series`` object provides a mapping from a collection of keys to a collection of values:
###Code
import pandas as pd
data = pd.Series([0.25, 0.5, 0.75, 1.0],
index=['a', 'b', 'c', 'd'])
data
data['b']
###Output
_____no_output_____
###Markdown
We can also use dictionary-like Python expressions and methods to examine the keys/indices and values:
###Code
'a' in data
data.keys()
list(data.items())
###Output
_____no_output_____
###Markdown
``Series`` objects can even be modified with a dictionary-like syntax.Just as you can extend a dictionary by assigning to a new key, you can extend a ``Series`` by assigning to a new index value:
###Code
data['e'] = 1.25
data
###Output
_____no_output_____
###Markdown
This easy mutability of the objects is a convenient feature: under the hood, Pandas is making decisions about memory layout and data copying that might need to take place; the user generally does not need to worry about these issues. Series as one-dimensional array A ``Series`` builds on this dictionary-like interface and provides array-style item selection via the same basic mechanisms as NumPy arrays – that is, *slices*, *masking*, and *fancy indexing*.Examples of these are as follows:
###Code
# slicing by explicit index
data['a':'c']
# slicing by implicit integer index
data[0:2]
# masking
data[(data > 0.3) & (data < 0.8)]
# fancy indexing
data[['a', 'e']]
###Output
_____no_output_____
###Markdown
Among these, slicing may be the source of the most confusion.Notice that when slicing with an explicit index (i.e., ``data['a':'c']``), the final index is *included* in the slice, while when slicing with an implicit index (i.e., ``data[0:2]``), the final index is *excluded* from the slice. Indexers: loc, iloc, and ixThese slicing and indexing conventions can be a source of confusion.For example, if your ``Series`` has an explicit integer index, an indexing operation such as ``data[1]`` will use the explicit indices, while a slicing operation like ``data[1:3]`` will use the implicit Python-style index.
###Code
data = pd.Series(['a', 'b', 'c'], index=[1, 3, 5])
data
# explicit index when indexing
data[1]
# implicit index when slicing
data[1:3]
###Output
_____no_output_____
###Markdown
Because of this potential confusion in the case of integer indexes, Pandas provides some special *indexer* attributes that explicitly expose certain indexing schemes.These are not functional methods, but attributes that expose a particular slicing interface to the data in the ``Series``.First, the ``loc`` attribute allows indexing and slicing that always references the explicit index:
###Code
?data.loc
?data.iloc
data.loc[1]
data.loc[1:3]
###Output
_____no_output_____
###Markdown
The ``iloc`` attribute allows indexing and slicing that always references the implicit Python-style index:
###Code
data.iloc[1]
data.iloc[1:3]
###Output
_____no_output_____
###Markdown
A third indexing attribute, ``ix``, is a hybrid of the two, and for ``Series`` objects is equivalent to standard ``[]``-based indexing.The purpose of the ``ix`` indexer will become more apparent in the context of ``DataFrame`` objects, which we will discuss in a moment.One guiding principle of Python code is that "explicit is better than implicit."The explicit nature of ``loc`` and ``iloc`` make them very useful in maintaining clean and readable code; especially in the case of integer indexes, I recommend using these both to make code easier to read and understand, and to prevent subtle bugs due to the mixed indexing/slicing convention. Data Selection in DataFrameRecall that a ``DataFrame`` acts in many ways like a two-dimensional or structured array, and in other ways like a dictionary of ``Series`` structures sharing the same index.These analogies can be helpful to keep in mind as we explore data selection within this structure. DataFrame as a dictionaryThe first analogy we will consider is the ``DataFrame`` as a dictionary of related ``Series`` objects.Let's return to our example of areas and populations of states:
###Code
area = pd.Series({'California': 423967, 'Texas': 695662,
'New York': 141297, 'Florida': 170312,
'Illinois': 149995})
pop = pd.Series({'California': 38332521, 'Texas': 26448193,
'New York': 19651127, 'Florida': 19552860,
'Illinois': 12882135})
data = pd.DataFrame({'area':area, 'pop':pop})
data
###Output
_____no_output_____
###Markdown
The individual ``Series`` that make up the columns of the ``DataFrame`` can be accessed via dictionary-style indexing of the column name:
###Code
data['area']
###Output
_____no_output_____
###Markdown
Equivalently, we can use attribute-style access with column names that are strings:
###Code
data.area
###Output
_____no_output_____
###Markdown
This attribute-style column access actually accesses the exact same object as the dictionary-style access:
###Code
data.area is data['area']
###Output
_____no_output_____
###Markdown
Though this is a useful shorthand, keep in mind that it does not work for all cases!For example, if the column names are not strings, or if the column names conflict with methods of the ``DataFrame``, this attribute-style access is not possible.For example, the ``DataFrame`` has a ``pop()`` method, so ``data.pop`` will point to this rather than the ``"pop"`` column:
###Code
data.pop is data['pop']
###Output
_____no_output_____
###Markdown
In particular, you should avoid the temptation to try column assignment via attribute (i.e., use ``data['pop'] = z`` rather than ``data.pop = z``).Like with the ``Series`` objects discussed earlier, this dictionary-style syntax can also be used to modify the object, in this case adding a new column:
###Code
data['density'] = data['pop'] / data['area']
data
###Output
_____no_output_____
###Markdown
This shows a preview of the straightforward syntax of element-by-element arithmetic between ``Series`` objects; we'll dig into this further in [Operating on Data in Pandas](03.03-Operations-in-Pandas.ipynb). DataFrame as two-dimensional arrayAs mentioned previously, we can also view the ``DataFrame`` as an enhanced two-dimensional array.We can examine the raw underlying data array using the ``values`` attribute:
###Code
data.values
###Output
_____no_output_____
###Markdown
With this picture in mind, many familiar array-like observations can be done on the ``DataFrame`` itself.For example, we can transpose the full ``DataFrame`` to swap rows and columns:
###Code
data.T
###Output
_____no_output_____
###Markdown
When it comes to indexing of ``DataFrame`` objects, however, it is clear that the dictionary-style indexing of columns precludes our ability to simply treat it as a NumPy array.In particular, passing a single index to an array accesses a row:
###Code
data.values[0]
###Output
_____no_output_____
###Markdown
and passing a single "index" to a ``DataFrame`` accesses a column:
###Code
data['area']
###Output
_____no_output_____
###Markdown
Thus for array-style indexing, we need another convention.Here Pandas again uses the ``loc``, ``iloc``, and ``ix`` indexers mentioned earlier.Using the ``iloc`` indexer, we can index the underlying array as if it is a simple NumPy array (using the implicit Python-style index), but the ``DataFrame`` index and column labels are maintained in the result:
###Code
data.iloc[:3, :2]
###Output
_____no_output_____
###Markdown
Similarly, using the ``loc`` indexer we can index the underlying data in an array-like style but using the explicit index and column names:
###Code
data.loc[:'Illinois', :'pop']
###Output
_____no_output_____
###Markdown
The ``ix`` indexer allows a hybrid of these two approaches:
###Code
data.ix[:3, :'pop'] # Warning: Starting in 0.20.0, the .ix indexer is deprecated, in favor of the more strict .iloc and .loc indexers.
###Output
_____no_output_____
###Markdown
Keep in mind that for integer indices, the ``ix`` indexer is subject to the same potential sources of confusion as discussed for integer-indexed ``Series`` objects.Any of the familiar NumPy-style data access patterns can be used within these indexers.For example, in the ``loc`` indexer we can combine masking and fancy indexing as in the following:
###Code
data.loc[data.density > 100, ['pop', 'density']]
###Output
_____no_output_____
###Markdown
Any of these indexing conventions may also be used to set or modify values; this is done in the standard way that you might be accustomed to from working with NumPy:
###Code
data.iloc[0, 2] = 90
data
###Output
_____no_output_____
###Markdown
To build up your fluency in Pandas data manipulation, I suggest spending some time with a simple ``DataFrame`` and exploring the types of indexing, slicing, masking, and fancy indexing that are allowed by these various indexing approaches. Additional indexing conventionsThere are a couple extra indexing conventions that might seem at odds with the preceding discussion, but nevertheless can be very useful in practice.First, while *indexing* refers to columns, *slicing* refers to rows:
###Code
data['Florida':'Illinois']
###Output
_____no_output_____
###Markdown
Such slices can also refer to rows by number rather than by index:
###Code
data[1:3]
###Output
_____no_output_____
###Markdown
Similarly, direct masking operations are also interpreted row-wise rather than column-wise:
###Code
data[data.density > 100]
###Output
_____no_output_____ |
test_Azure_Face_API/test_01.ipynb | ###Markdown
Face Detection using Microsoft Face APIhttps://docs.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/howtoidentifyfacesinimage https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236
###Code
########### Python 2.7 #############
import httplib, urllib, base64
import json
headers = {
# Request headers
'Content-Type': 'application/json',
# NOTE: Replace the "Ocp-Apim-Subscription-Key" value with a valid subscription key.
'Ocp-Apim-Subscription-Key': '<copy_your_subscription_key_here>',
}
###Output
_____no_output_____
###Markdown
Get your new subscription key from Cognitive Services: https://portal.azure.com
###Code
params = urllib.urlencode({
# Request parameters
'returnFaceId': 'true',
'returnFaceLandmarks': 'false',
'returnFaceAttributes': 'age,gender,smile,glasses,emotion',
})
try:
# NOTE: You must use the same region in your REST call as you used to obtain your subscription keys.
# For example, if you obtained your subscription keys from westus, replace "westcentralus" in the
# URL below with "westus".
conn = httplib.HTTPSConnection('westeurope.api.cognitive.microsoft.com')
conn.request("POST", "/face/v1.0/detect?%s" % params, "{\"url\":\"https://lh5.googleusercontent.com/-ed6cmM0559o/AAAAAAAAAAI/AAAAAAABs_o/uLwscELIloI/photo.jpg\"}", headers)
response = conn.getresponse()
data = response.read()
print(data)
conn.close()
except Exception as e:
print("[Errno {0}] {1}".format(e.errno, e.strerror))
data = json.loads(data)
data
####################################
# ########### Python 3.2 #############
# import http.client, urllib.request, urllib.parse, urllib.error, base64
# headers = {
# # Request headers
# 'Content-Type': 'application/json',
# # NOTE: Replace the "Ocp-Apim-Subscription-Key" value with a valid subscription key.
# 'Ocp-Apim-Subscription-Key': '6726adbabb494773a28a7a5a21d5974a',
# }
# params = urllib.parse.urlencode({
# # Request parameters
# 'returnFaceId': 'true',
# 'returnFaceLandmarks': 'false',
# 'returnFaceAttributes': 'age,gender',
# })
# try:
# # NOTE: You must use the same location in your REST call as you used to obtain your subscription keys.
# # For example, if you obtained your subscription keys from westus, replace "westcentralus" in the
# # URL below with "westus".
# conn = http.client.HTTPSConnection('westcentralus.api.cognitive.microsoft.com')
# conn.request("POST", "/face/v1.0/detect?%s" % params, "{body}", headers)
# response = conn.getresponse()
# data = response.read()
# print(data)
# conn.close()
# except Exception as e:
# print("[Errno {0}] {1}".format(e.errno, e.strerror))
# ####################################
import numpy as np
import urllib
import cv2
# METHOD #1: OpenCV, NumPy, and urllib
def url_to_image(url):
# download the image, convert it to a NumPy array, and then read
# it into OpenCV format
resp = urllib.urlopen(url)
image = np.asarray(bytearray(resp.read()), dtype="uint8")
image = cv2.imdecode(image, cv2.IMREAD_COLOR)
# return the image
return image
urls = [
# "http://www.pyimagesearch.com/wp-content/uploads/2015/01/opencv_logo.png",
# "http://www.pyimagesearch.com/wp-content/uploads/2015/01/google_logo.png",
"https://lh5.googleusercontent.com/-ed6cmM0559o/AAAAAAAAAAI/AAAAAAABs_o/uLwscELIloI/photo.jpg",
]
# loop over the image URLs
for url in urls:
# download the image URL and display it
print "downloading %s" % (url)
image = url_to_image(url)
cv2.imshow("Image", image)
cv2.waitKey(0)
for face in data:
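    # faceRectangle holds the bounding box of the detected face in pixels:
    # left, top, width and height.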
x = face['faceRectangle']['left']
y = face['faceRectangle']['top']
w = face['faceRectangle']['width']
h = face['faceRectangle']['height']
cv2.rectangle(image,(x,y),(x+w,y+h),(0,255,0),2)
# cv2.imshow('Features', image)
cv2.imwrite("test_face_detection"+'.png', image)
###Output
_____no_output_____ |
old/Debugging_NaN_error.ipynb | ###Markdown
In this notebook we will try to debug the NaN error that occasionally occurs during training of the PINN. The error is hard to reproduce in a reliable and fast way, but luckily the code below does the trick. It is the code from example 5.1 with a sinusoidal load. The Adam training loop stops if the current loss is no better than the average loss of the last 100 epochs, and the L-BFGS loop stops if the loss does not improve for 100 epochs in a row. However, this also shows that the stability of the optimizers depends on many factors and is not really predictable. For example, changing the Adam stop criterion to stopping after 100 epochs without improvement makes the NaN error occur less frequently. Furthermore, as we saw in example 5.1 with a load on the right side of the bar, the error did not occur there at all.
###Code
from typing import Any, List, Tuple
import torch
# General parameters for modeling the problem
time_steps: int = 10
num_points: int = 112
displacement_step: float = 0.0
# Initial crack
crack: float = 0.0
crack_width: float = 0.025
# Material constants
young_modulus: float = 1
poisson_ratio: float = 0.3 # typical for metal
lame_lambda: float = 0
lame_mu: float = 0.5
G_c: float = 0.5*crack_width
# Stress degradation function
def g(x: torch.Tensor) -> torch.Tensor:
"""Stress degradation function.
"""
return (1-x)**2
# Finite differences
delta: float = 0.00001
# Debug mode. Asserts that all tensors have the right shape.
DEBUG: bool = True
import os
import numpy as np
# The Gauss points and weights are generated seperately for each region. They are generated on the reference interval [-1,1] and transposed to the actual region.
def generate_gauss_points(x1: float, x2: float, num_points: int) -> Tuple[np.ndarray, np.ndarray]:
points, weights = np.polynomial.legendre.leggauss(num_points)
points *= abs(x2 - x1)*0.5
points += (x2 + x1)*0.5
weights *= (x2 - x1)*0.5
return points, weights
lc, lc_weights = generate_gauss_points(-1.0, -2*crack_width, num_points)
c, c_weights = generate_gauss_points(-2*crack_width, 2*crack_width, num_points)
rc, rc_weights = generate_gauss_points(2*crack_width, 1.0, num_points)
gauss_points: np.ndarray = np.concatenate((lc, c, rc))
# As a convention we will mark variables of the type torch.Tensor with the suffix _t if its type could be uncertain.
gauss_points_t: torch.Tensor = torch.tensor(gauss_points, dtype=torch.float).unsqueeze(dim=-1) # shape [336,1]
gauss_weights: np.ndarray = np.concatenate((lc_weights, c_weights, rc_weights))
gauss_weights_t: torch.Tensor = torch.tensor(gauss_weights, dtype=torch.float).unsqueeze(dim=-1) # shape [336,1]
visualization_points = np.linspace(-1.0, 1.0, num=num_points*3)
visualization_points_t: torch.Tensor = torch.tensor(visualization_points, dtype=torch.float).unsqueeze(dim=-1)
import torch.nn as nn
# PINN architecture:
in_dim = 1
phase_field_dim = 1
dis_field_dim = 1
hidden_dim = 50
depth = 3
class PINN(nn.Module):
def __init__(self) -> None:
super().__init__()
self.hidden_dim = hidden_dim
self.depth = depth
self.in_dim = in_dim
self.out_dim = dis_field_dim + phase_field_dim
self.fcs = nn.ModuleList([nn.Linear(self.hidden_dim, self.hidden_dim) for i in range(depth-1)])
self.fcs.insert(0, nn.Linear(self.in_dim, self.hidden_dim))
self.fcs.append(nn.Linear(self.hidden_dim, self.out_dim))
# Xavier initialization:
for fc in self.fcs:
nn.init.xavier_uniform_(fc.weight)
self.activation = nn.Tanh()
self.sigmoid = nn.Sigmoid()
def forward(self, x: torch.Tensor) -> torch.Tensor:
for fc in self.fcs[:-1]:
x = self.activation(fc(x))
x = self.fcs[-1](x)
#x[:,-1] = self.sigmoid(x[:,-1])
return x
# Dirichlet boundary conditions
class PINN_BC(nn.Module):
def __init__(self, pinn: PINN=None):
super().__init__()
self.pinn = PINN()
def forward(self, x: torch.Tensor) -> torch.Tensor:
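        # Hard-enforce the Dirichlet boundary conditions u(-1) = u(1) = 0 by
        # multiplying the displacement output with (x - 1)(x + 1); the phase
        # field phi is passed through unchanged.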
ones = torch.ones(x.shape, device=device)
y = self.pinn(x)
u_hat, phi= y[...,:dis_field_dim], y[...,dis_field_dim:]
return torch.cat(((x-ones)*(x+ones)*u_hat, phi), dim=-1)
import math
import time
H_init: torch.Tensor = torch.Tensor([1000.0 if abs(x)<=crack_width else 0.0 for x in gauss_points]).unsqueeze(dim=-1)
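# History-field initialisation: a large value inside the initial crack region
# (|x| <= crack_width) drives the phase field towards 1 there.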
def jacobian(f, x: torch.Tensor, h: float, mode: str='forward', y_1: torch.Tensor=None):
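    # Finite-difference Jacobian of f at x with step size h ('forward',
    # 'backward' or 'central'); y_1 can be passed in to reuse an already
    # computed f(x) and save one evaluation.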
# Precompute this tensor for faster results:
h_t_in: torch.Tensor = torch.full_like(x, h, device=device)
if mode == 'forward':
if y_1 is None:
y_1: torch.Tensor = f(x) # shape [bs, out_dim]
y_2: torch.Tensor = f(x + h_t_in) # shape [bs, out_dim]
elif mode == 'backward':
if y_1 is None:
y_1: torch.Tensor = f(x - h_t_in) # shape [bs, out_dim]
y_2: torch.Tensor = f(x)# shape [bs, out_dim]
else:
y_2: torch.Tensor = y_1 # shape [bs, out_dim]
y_1: torch.Tensor = f(x - h_t_in) # shape [bs, out_dim]
elif mode == 'central':
y_1: torch.Tensor = f(x - 0.5*h_t_in) # shape [bs, out_dim]
y_2: torch.Tensor = f(x + 0.5*h_t_in) # shape [bs, out_dim]
else:
print("Please enter a valid differentiation mode")
return None
h_t_out: torch.Tensor = torch.full_like(y_1, h, device=device)
if DEBUG:
assert y_1.shape == y_2.shape
return (y_2 - y_1)/h_t_out # shape [bs, out_dim]
def f_e(phi: torch.Tensor, nabla_u: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
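    # Elastic strain energy density, split into a tensile part (psi_pos), which
    # is degraded by the phase field, and a compressive part (psi_neg), which is
    # not; psi_pos is also returned for use in the history field.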
eigenvalues: torch.Tensor = nabla_u
psi_pos: torch.Tensor = (lame_lambda/8 + lame_mu/4)*(eigenvalues + torch.abs(eigenvalues))**2
psi_neg: torch.Tensor = (lame_lambda/8 + lame_mu/4)*(eigenvalues - torch.abs(eigenvalues))**2
H: torch.Tensor = torch.maximum(psi_pos, H_init)
if DEBUG:
assert phi.shape == psi_pos.shape
assert phi.shape == g(phi).shape
assert psi_pos.shape == psi_neg.shape
return g(phi)*psi_pos + psi_neg, psi_pos
def f_c(phi: torch.Tensor, nabla_phi: torch.Tensor, psi_pos: torch.Tensor) -> torch.Tensor:
H: torch.Tensor = torch.maximum(psi_pos, H_init)
if DEBUG:
assert phi.shape == torch.abs(nabla_phi).shape
assert phi.shape == g(phi).shape
assert g(phi).shape == H.shape
return G_c/(2*crack_width)*(phi**2 + (crack_width**2)*torch.abs(nabla_phi)**2) + g(phi)*H
def external_work(x: torch.Tensor, u: torch.tensor) -> torch.Tensor:
return torch.sin(math.pi*x)*u
def total_energy(model: PINN_BC, x: torch.Tensor, verbose: bool=False, save_loss: bool=False, fe_list: List[float]=None, fc_list: List[float]=None) -> torch.Tensor:
t1: float = time.time()
outputs: torch.Tensor = model(x)
u: torch.Tensor = outputs[...,:dis_field_dim]
phi: torch.Tensor = outputs[...,dis_field_dim:]
t2: float = time.time()
nabla_pinn: torch.Tensor = jacobian(model, x, 0.0001, y_1=outputs)
t3: float = time.time()
nabla_u: torch.Tensor = nabla_pinn[:,:dis_field_dim]
nabla_phi: torch.Tensor = nabla_pinn[:,dis_field_dim:]
t4: float = time.time()
strain_energy, psi_pos = f_e(phi, nabla_u)
if DEBUG:
assert strain_energy.shape == gauss_weights_t.shape
strain_energy = torch.sum(strain_energy*gauss_weights_t)
t5: float = time.time()
if DEBUG:
assert f_c(phi, nabla_phi, psi_pos).shape == gauss_weights_t.shape
fracture_energy = torch.sum(f_c(phi, nabla_phi, psi_pos)*gauss_weights_t)
t6: float = time.time()
ext_work = torch.sum(external_work(gauss_points_t, u)*gauss_weights_t)
t7: float = time.time()
if verbose:
print(f'U: {t2-t1} jac: {t3-t2} nabla: {t4-t3} f_e: {t5-t4} f_c: {t6-t5} external work: {t7-t6}')
if save_loss:
fe_list.append(strain_energy.detach().to('cpu').item())
fc_list.append(fracture_energy.detach().to('cpu').item())
return torch.abs(strain_energy + fracture_energy - ext_work)
# Move the tensors to the GPU if available
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
gauss_points_t: torch.Tensor = gauss_points_t.to(device)
gauss_weights_t: torch.Tensor = gauss_weights_t.to(device)
H_init: torch.Tensor = H_init.to(device)
losses = []
fe_list: List[float] = []
fc_list: List[float] = []
def train(model: PINN_BC, optimizer, loss, x: torch.Tensor, epochs: int) -> None:
print('Training with the Adam optimizer')
for i in range(epochs):
optimizer.zero_grad()
loss_sum: torch.Tensor = loss(model, x, save_loss=True, fe_list=fe_list, fc_list=fc_list)
loss_sum.backward()
losses.append(loss_sum.detach().to("cpu").item())
if i%100 == 0:
print(f'Epoch {i} Loss {losses[-1]}')
optimizer.step()
# Stop criterion. If the mean of the last 100 epochs is smaller than the current one break
if len(losses) > 100 and sum(losses[-100:])/100 <= losses[-1]:
print('Stopping training, since the loss does not improve')
break
def train_lbfgs(model: PINN_BC, optimizer, loss, x: torch.Tensor, epochs: int) -> None:
print('Training with the L-BFGS optimizer')
min_loss: float = float('inf')
counter: int = 0
for i in range(epochs):
print(f'Epoch {i} ', end='')
def closure():
optimizer.zero_grad()
loss_sum: torch.Tensor = loss(model, x, save_loss=True, fe_list=fe_list, fc_list=fc_list)
loss_sum.backward()
losses.append(loss_sum.detach().to("cpu").item())
return loss_sum
        if np.isnan(losses[-1]):
            print('NaN')
            # `nan_handling` was called here but is not defined anywhere in this
            # notebook; stop instead of stepping further once the loss is NaN.
            break
        optimizer.step(closure)
        # Checkpoint after each epoch (the original call was missing the file
        # argument; the path below is an assumed choice).
        torch.save(model.state_dict(), f'./saved_models/{project_name}/model_last.pt')
if i%10 == 0:
print(f'Loss {losses[-1]}')
if losses[-1] < min_loss:
counter = 0
min_loss = losses[-1]
torch.save(model.state_dict(), f'./saved_models/{project_name}/model_best.pt')
else:
counter += 1
if counter == 100:
print('Stopping training, since the loss did not improve for 100 epochs')
break
project_name: str = 'sinus_work'
os.makedirs(f'./saved_models/{project_name}/', exist_ok=True)
intermediate_model_path: str = f'./saved_models/{project_name}/model_inter.pt'
model_path: str = f'./saved_models/{project_name}/model_final.pt'
# Train with ADAM
model: PINN_BC = PINN_BC().to(device)
optimizer = torch.optim.Adam(model.parameters())
train(model, optimizer, total_energy, gauss_points_t, 1500)
torch.save(model.state_dict(), intermediate_model_path)
# Train with L-BFGS
optimizer = torch.optim.LBFGS(model.parameters())
train_lbfgs(model, optimizer, total_energy, gauss_points_t, 100)
torch.save(model.state_dict(), model_path)
###Output
Training with the Adam optimizer
Epoch 0 Loss 5.974319934844971
Epoch 100 Loss 0.3969902992248535
Epoch 200 Loss 0.35138416290283203
Epoch 300 Loss 0.18402302265167236
Epoch 400 Loss 0.025734499096870422
Epoch 500 Loss 0.0009995698928833008
Stopping training, since the loss does not improve
Training with the L-BFGS optimizer
Epoch 0 Loss 0.0003642439842224121
Epoch 1 Epoch 2 Epoch 3 Epoch 4 Epoch 5 Epoch 6 Epoch 7 Epoch 8 Epoch 9 Epoch 10 Loss 2.9340386390686035e-05
Epoch 11 Epoch 12 Epoch 13 Epoch 14 Epoch 15 Epoch 16 Epoch 17 Epoch 18 Epoch 19 Epoch 20 Loss 0.000642240047454834
Epoch 21 Epoch 22 Epoch 23 Epoch 24 Epoch 25 Epoch 26 Epoch 27 Epoch 28 Epoch 29 Epoch 30 Loss 0.0005124509334564209
Epoch 31 Epoch 32 Epoch 33 Epoch 34 Epoch 35 Epoch 36 Epoch 37 Epoch 38 Epoch 39 Epoch 40 Loss 2.9474496841430664e-05
Epoch 41 Epoch 42 Epoch 43 Epoch 44 Epoch 45 Epoch 46 Epoch 47 Epoch 48 Epoch 49 Epoch 50 NaN
###Markdown
Different setup for the optimizers that runs without any errors (most of the time at least)
###Code
import math
import time
# Move the tensors to the GPU if available
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
gauss_points_t: torch.Tensor = gauss_points_t.to(device)
gauss_weights_t: torch.Tensor = gauss_weights_t.to(device)
H_init: torch.Tensor = H_init.to(device)
losses = []
fe_list: List[float] = []
fc_list: List[float] = []
def train(model: PINN_BC, optimizer, loss, x: torch.Tensor, epochs: int) -> None:
print('Training with the Adam optimizer')
min_loss: float = float('inf')
counter: int = 0
for i in range(epochs):
optimizer.zero_grad()
loss_sum: torch.Tensor = loss(model, x, save_loss=True, fe_list=fe_list, fc_list=fc_list)
loss_sum.backward()
losses.append(loss_sum.detach().to("cpu").item())
if i%100 == 0:
print(f'Epoch {i} Loss {losses[-1]}')
optimizer.step()
# Stop criterion.
if losses[-1] < min_loss:
counter = 0
min_loss = losses[-1]
torch.save(model.state_dict(), f'./saved_models/{project_name}/model_best.pt')
else:
counter += 1
if counter == 100:
print('Stopping training, since the loss did not improve for 100 epochs')
break
project_name: str = 'sinus_work'
os.makedirs(f'./saved_models/{project_name}/', exist_ok=True)
intermediate_model_path: str = f'./saved_models/{project_name}/model_inter.pt'
model_path: str = f'./saved_models/{project_name}/model_final.pt'
# Train with ADAM
model: PINN_BC = PINN_BC().to(device)
optimizer = torch.optim.Adam(model.parameters())
train(model, optimizer, total_energy, gauss_points_t, 1500)
torch.save(model.state_dict(), intermediate_model_path)
# Train with L-BFGS
optimizer = torch.optim.LBFGS(model.parameters())
train_lbfgs(model, optimizer, total_energy, gauss_points_t, 100)
torch.save(model.state_dict(), model_path)
import matplotlib.pyplot as plt
visualization_points_t: torch.Tensor = visualization_points_t.to(device)
best_model: str = f'./saved_models/{project_name}/model_best.pt'
model = PINN_BC().to(device)
model.load_state_dict(torch.load(best_model))
model.eval()
outputs = model(visualization_points_t)
dis_field = outputs[...,:dis_field_dim].squeeze().detach().to('cpu').numpy()
phase_field = outputs[...,dis_field_dim:].squeeze().detach().to('cpu').numpy()
plt.plot(visualization_points, dis_field)
plt.title('displacement field u')
plt.ylabel('u(x)')
plt.xlabel('x')
plt.show()
plt.plot(visualization_points, phase_field)
plt.title('Phase field')
plt.ylabel('phi(x)')
plt.xlabel('x')
plt.show()
###Output
_____no_output_____ |
prerequisites4 - pandas - quiz - assign.ipynb | ###Markdown
assign assign은 원본 데이터를 건드리지 않고 새로운 컬럼을 추가하는 함수에요. assign(column_name = lambda 함수)의 형식으로 주로 씁니다. lambda 함수가 받는 인자는 그 데이터 프레임 전체에요. 거기에 아무짓이나 해서 컬럼 하나로만 만들어주면 돼요.
###Code
movies.assign(year = lambda x: x['title'].map(lambda x: x[-5:-1])).head()
###Output
_____no_output_____
###Markdown
You can use it several times with method chaining.
###Code
(movies
.assign(year = lambda x: x['title'].map(lambda x: x[-5:-1]))
.assign(num_genres = lambda x: x['genres'].map(lambda x: len(x.split('|'))))
).head()
###Output
_____no_output_____
###Markdown
movies has not changed at all.
###Code
movies.head()
###Output
_____no_output_____
###Markdown
It is a good idea to create and keep a variable at each meaningful step.
###Code
movies_added = (movies
.assign(year = lambda x: x['title'].map(lambda x: x[-5:-1]))
.assign(num_genres = lambda x: x['genres'].map(lambda x: len(x.split('|'))))
)
movies_added.head()
###Output
_____no_output_____
###Markdown
If you reuse an existing column name, that column gets overwritten. But since this changes the data, use it only when you really need to. Let's convert the year column of movies_added from string to int.
###Code
(movies_added
.assign(year = lambda x: x['year'].astype(int))
).head()
###Output
_____no_output_____
###Markdown
Now we can compute the median of year.
###Code
movies_added['year'].median()
###Output
_____no_output_____
###Markdown
Q Q. In ratings, convert the rating column from the 1–5 scale to a 0–100 scale and put it in a column named 'rating_100'.> hint : (x.rating - 1) * 25
###Code
# A0 =
# YOUR CODE HERE
raise NotImplementedError()
assert 'rating_100' in A0.columns
assert A0.rating_100.max() == 100
assert A0.rating_100.min() == 0
assert A0.rating_100.mean() == 64.6
###Output
_____no_output_____ |
starter_code/.ipynb_checkpoints/model_4-checkpoint.ipynb | ###Markdown
Random Forest (RF) Model
###Code
# Update sklearn to prevent version mismatches
!pip install sklearn --upgrade
# Update sklearn to prevent version mismatches
!pip install tensorflow==2.2 --upgrade
!pip install keras --upgrade
# Install joblib. This will be used to save your model.
# Restart your kernel after installing
!pip install joblib
import pandas as pd
###Output
_____no_output_____
###Markdown
Read the CSV and Perform Basic Data Cleaning
###Code
df = pd.read_csv("exoplanet_data.csv")
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
df.head()
###Output
_____no_output_____
###Markdown
Select your features (columns)
###Code
# Set features. This will also be used as your x values.
#data = df.drop("koi_disposition", axis=1)
#selected_features = data.columns
selected_features = df[['koi_fpflag_nt','koi_fpflag_ss','koi_fpflag_co','koi_fpflag_ec',
'koi_period','koi_period_err1','koi_period_err2',
'koi_time0bk','koi_time0bk_err1','koi_time0bk_err2',
'koi_impact','koi_impact_err1','koi_impact_err2',
'koi_duration','koi_duration_err1','koi_duration_err2',
'koi_depth','koi_depth_err1','koi_depth_err2',
'koi_prad','koi_prad_err1','koi_prad_err2',
'koi_teq','koi_insol','koi_insol_err1','koi_insol_err2',
'koi_model_snr','koi_steff','koi_steff_err1','koi_steff_err2',
'koi_slogg','koi_slogg_err1','koi_slogg_err2',
'koi_srad','koi_srad_err1','koi_srad_err2',
'ra','dec','koi_kepmag']]
selected_features.head()
###Output
_____no_output_____
###Markdown
Create a Train Test SplitUse `koi_disposition` for the y values
###Code
# Define target dataframe, target_names array, and X and y variables
target = df["koi_disposition"]
target_names = ["Confirmed", "False Positive", "Candidate"]
X = selected_features
y = target
# Derive X and y training and testing variables
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, stratify=y)
X_train.head()
###Output
_____no_output_____
###Markdown
Pre-processingScale the data using the MinMaxScaler and perform some feature selection
###Code
# Scale your data
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
from tensorflow.keras.utils import to_categorical
X_scaler = MinMaxScaler().fit(X_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
# Label-encode data set and print the encoded_y_test
label_encoder = LabelEncoder()
label_encoder.fit(y_train)
encoded_y_train = label_encoder.transform(y_train)
encoded_y_test = label_encoder.transform(y_test)
y_train_categorical = to_categorical(encoded_y_train)
y_test_categorical = to_categorical(encoded_y_test)
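# Note: the one-hot (categorical) targets above are not used by the random
# forest below; the model is trained on the label-encoded classes instead.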
print(encoded_y_test)
###Output
[0 2 2 ... 2 2 1]
###Markdown
Train the Model
###Code
from sklearn.ensemble import RandomForestClassifier
model4 = RandomForestClassifier(n_estimators=200)
model4
# Fit the data and print Training Data Scores
model4.fit(X_train_scaled, encoded_y_train)
print(f"Training Data Score: {model4.score(X_train_scaled, encoded_y_train)}")
print(f"Testing Data Score: {model4.score(X_test_scaled, encoded_y_test)}")
###Output
Training Data Score: 1.0
Testing Data Score: 0.9004576659038902
###Markdown
Hyperparameter TuningUse `GridSearchCV` to tune the model's parameters
###Code
# Create the GridSearchCV model
from sklearn.model_selection import GridSearchCV
param_grid = {
'n_estimators': [200, 700]
}
grid4 = GridSearchCV(estimator=model4, param_grid=param_grid, verbose=3)
# Train the model with GridSearch
grid4.fit(X_train_scaled, encoded_y_train)
print(grid4.best_params_)
print(grid4.best_score_)
###Output
{'n_estimators': 700}
0.8895649437122959
###Markdown
Test RF Model
###Code
# Make predictions
predictions = grid4.predict(X_test_scaled)
predictions
# Calculate classification report
from sklearn.metrics import classification_report
print(classification_report(encoded_y_test, predictions,
target_names=target_names))
###Output
precision recall f1-score support
Confirmed 0.83 0.75 0.79 422
False Positive 0.81 0.84 0.83 450
Candidate 0.97 1.00 0.99 876
accuracy 0.90 1748
macro avg 0.87 0.86 0.87 1748
weighted avg 0.90 0.90 0.90 1748
###Markdown
Save the Model
###Code
# save your model by updating "your_name" with your name
# and "your_model" with your model variable
# be sure to turn this in to BCS
# if joblib fails to import, try running the command to install in terminal/git-bash
import joblib
filename = 'rf.sav'
joblib.dump(model4, filename)
###Output
_____no_output_____ |
regression/home-data-for-ml-course/Ex_LinReg.ipynb | ###Markdown
Information on Datahttps://www.kaggle.com/c/home-data-for-ml-course/data
###Code
# Custom Classes and Functions
def display_df_info(df_name, my_df, v=False):
"""Convenience function to display information about a dataframe"""
print("Data: {}".format(df_name))
print("Shape (rows, cols) = {}".format(my_df.shape))
print("First few rows...")
print(my_df.head())
# Optional: Display other optional information with the (v)erbose flag
if v:
print("Dataframe Info:")
print(my_df.info())
class GetAge(BaseEstimator, TransformerMixin):
    """Custom Transformer: Calculate age (years only) relative to the current year. Note that,
    as implemented below, the computed age is stored in a new 'Age' column and the original
    'YearBuilt' column is kept (its drop is commented out). When the transformer is used in a
    pipeline this is usually not an issue since column names are not used, but if the data from
    the pipeline is converted back to a DataFrame, the column names should be checked so that
    they reflect the actual content."""
def fit(self, X, y=None):
return self
def transform(self,X):
current_year = int(d.datetime.now().year)
"""TASK: Replace the 'YearBuilt' column values with the calculated age (subtract the
current year from the original values). -----------Done
"""
X['Age'] = current_year - X['YearBuilt']
# X.drop(['YearBuilt'], axis=1 , inplace=True)
return X
import os
os.getcwd()
def main():
# DATA INPUT
############
file_path = "train_csv_EDA.csv" #TASK: Modify to path of file ---- Done
input_data = pd.read_csv(file_path) # TASK: Read in the input csv file using pandas ---- Done
display_df_info("Raw Input", input_data)
# Seperate out the outcome variable from the loaded dataframe
output_var_name = 'SalePrice'
output_var = input_data[output_var_name]
input_data.drop(output_var_name, axis=1, inplace=True)
# DATA ENGINEERING / MODEL DEFINITION
#####################################
# Subsetting the columns: define features to keep
feature_names = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr',
'TotRmsAbvGrd', 'HouseStyle'] # TASK: Define the names of the columns to keep
features = input_data[feature_names]
display_df_info('Features before Transform', features, v=True)
# Create the pipeline ...
# 1. Pre-processing
# Define variables made up of lists. Each list is a set of columns that will go through the same data transformations.
# numerical_features = [col for col in features.columns if features.dtypes[col] != 'object'] # TASK: Define numerical column names
# categorical_features = [col for col in features.columns if col not in numerical_features] # TASK: Define categorical column names
categorical_features = features.select_dtypes(include="object").columns
numerical_features = features.select_dtypes(exclude="object").columns
"""TASK:
Define the data processing steps (transformers) to be applied to the numerical features in the dataset.
At a minimum, use 2 transformers: GetAge() and one other. Combine them using make_pipeline() or Pipeline()
"""
int_transformer = Pipeline(steps = [
('imputer', SimpleImputer(strategy = 'median')),
('scaler', StandardScaler())
])
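    # Note: the GetAge() transformer defined above is not part of this pipeline,
    # so the raw 'YearBuilt' values are imputed and scaled directly.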
cat_transformer = Pipeline(steps=[
('onehot', OneHotEncoder(handle_unknown='ignore'))
])
preprocess = ColumnTransformer(
transformers=[
('ints', int_transformer, numerical_features),
('cat', cat_transformer, categorical_features)
])
# preprocess = make_column_transformer(
# # ("""TASK: Define transformers""", numerical_features),
# (StandardScaler(), GetAge(), numerical_features),
# (OneHotEncoder(), categorical_features)
# )
# 2. Combine pre-processing with ML algorithm
# model = make_pipeline(
# preprocess,
# LinearRegression() # TASK : replace with ML algorithm from scikit ---Done
# )
reg = LinearRegression()
model = Pipeline(steps=[
('preprocessor', preprocess),
('regression', reg)
])
# TRAINING
##########
# Train/Test Split
"""TASK:
Split the data in test and train sets by completing the train_test_split function below.
Define a random_state value so that the experiment is repeatable.
"""
x_train, x_test, y_train, y_test = train_test_split(features, output_var, test_size=0.3, random_state=42) # TASK: Complete the code ---Done
# Train the pipeline
model.fit(x_train, y_train)
# Optional: Train with cross-validation and/or parameter grid search
# # Perform 10-fold CV
# cvscores_10 = cross_val_score(reg, features, output_var, cv = 10)
# print("CV score mean: {}".format(np.mean(cvscores_10)))
# SCORING/EVALUATION
####################
# Fit the model on the test data
pred_test = model.predict(x_test) # y_pred = predicted
# Display the results of the metrics
"""TASK: /Done
Calculate the RMSE and Coeff of Determination between the actual and predicted sale prices.
Name your variables rmse and r2 respectively.
"""
rmse = np.sqrt(mean_squared_error(y_test, pred_test)) # y_test = actual
# print("pred_test type: {}".format(type(pred_test)))
r2 = r2_score(y_test, pred_test)
print("Results on Test Data")
print("####################")
print("RMSE: {:.2f}".format(rmse))
print("R2 Score: {:.5f}".format(r2))
# Compare actual vs predicted values
"""TASK: /Done
Create a new dataframe which combines the actual and predicted Sale Prices from the test dataset. You
may also add columns with other information such as difference, abs diff, %tage difference etc.
Name your variable compare
"""
data = { 'actual': y_test, 'predicted':pred_test } # build dataset
compare = pd.DataFrame(data) # make it a new DataFrame
# add a column for difference, abs diff, %tage diff: diff, abs_diff, percentage_diff
compare['diff'] = compare['predicted'] - compare['actual']
compare['abs_diff'] = compare['diff'].abs()
compare['percentage_diff'] = (compare['diff'] / compare['predicted']) * 100
display_df_info('Actual vs Predicted Comparison', compare)
# Save the model
with open('my_model_lr.joblib', 'wb') as fo:
joblib.dump(model, fo)
if __name__ == '__main__':
main()
###Output
Data: Raw Input
Shape (rows, cols) = (1460, 72)
First few rows...
MSSubClass MSZoning LotFrontage LotArea Street LotShape LandContour \
0 60 RL 65.0 8450 Pave Reg Lvl
1 20 RL 80.0 9600 Pave Reg Lvl
2 60 RL 68.0 11250 Pave IR1 Lvl
3 70 RL 60.0 9550 Pave IR1 Lvl
4 60 RL 84.0 14260 Pave IR1 Lvl
Utilities LotConfig LandSlope ... EnclosedPorch 3SsnPorch ScreenPorch \
0 AllPub Inside Gtl ... 0 0 0
1 AllPub FR2 Gtl ... 0 0 0
2 AllPub Inside Gtl ... 0 0 0
3 AllPub Corner Gtl ... 272 0 0
4 AllPub FR2 Gtl ... 0 0 0
PoolArea MiscVal MoSold YrSold SaleType SaleCondition SalePrice
0 0 0 2 2008 WD Normal 208500
1 0 0 5 2007 WD Normal 181500
2 0 0 9 2008 WD Normal 223500
3 0 0 2 2006 WD Abnorml 140000
4 0 0 12 2008 WD Normal 250000
[5 rows x 72 columns]
Data: Features before Transform
Shape (rows, cols) = (1460, 8)
First few rows...
LotArea YearBuilt 1stFlrSF 2ndFlrSF FullBath BedroomAbvGr \
0 8450 2003 856 854 2 3
1 9600 1976 1262 0 2 3
2 11250 2001 920 866 2 3
3 9550 1915 961 756 1 3
4 14260 2000 1145 1053 2 4
TotRmsAbvGrd HouseStyle
0 8 2Story
1 6 1Story
2 6 2Story
3 7 2Story
4 9 2Story
Dataframe Info:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 8 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 LotArea 1460 non-null int64
1 YearBuilt 1460 non-null int64
2 1stFlrSF 1460 non-null int64
3 2ndFlrSF 1460 non-null int64
4 FullBath 1460 non-null int64
5 BedroomAbvGr 1460 non-null int64
6 TotRmsAbvGrd 1460 non-null int64
7 HouseStyle 1460 non-null object
dtypes: int64(7), object(1)
memory usage: 91.4+ KB
None
Results on Test Data
####################
RMSE: 42645.12
R2 Score: 0.73938
Data: Actual vs Predicted Comparison
Shape (rows, cols) = (438, 5)
First few rows...
actual predicted diff abs_diff percentage_diff
892 154500 133706.182295 -20793.817705 20793.817705 -15.551875
1105 325000 311498.947448 -13501.052552 13501.052552 -4.334221
413 115000 108758.777437 -6241.222563 6241.222563 -5.738592
522 159000 163646.770058 4646.770058 4646.770058 2.839512
1036 315500 247463.968793 -68036.031207 68036.031207 -27.493308
|
matrix2_day_5_hyperopt.ipynb | ###Markdown
Loading the data
###Code
df = pd.read_hdf("data/car.h5")
df.shape
###Output
_____no_output_____
###Markdown
Feature Engineering
###Code
SUFFIX_CAT = "__cat"
for feat in df.columns:
if isinstance(df[feat][0], list): continue
factorized_values = df[feat].factorize()[0]
if SUFFIX_CAT in feat:
df[feat] = factorized_values
else:
df[feat + SUFFIX_CAT] = factorized_values
df["param_rok-produkcji"] = df["param_rok-produkcji"].map(lambda x: -1 if str(x) == "None" else int(x))
df["param_moc_1"] = df["param_moc"].map(lambda x: "-1" if str(x) == "None" else x.replace(" ", ""))
df["param_moc_2"] = df["param_moc_1"].map(lambda x: int(x.replace("KM", "")))
df["param_pojemność-skokowa_1"] = df["param_pojemność-skokowa"].map(lambda x: "-1" if str(x) == "None" else x.replace(" ", ""))
df["param_pojemność-skokowa_2"] = df["param_pojemność-skokowa_1"].map(lambda x: int(x.replace("cm3", "")))
def run_model(model, feats):
X = df[feats].values
y = df["price_value"].values
scores = cross_val_score(model, X, y, cv = 3, scoring = "neg_mean_absolute_error")
return np.mean(scores), np.std(scores)
feats = ["param_napęd__cat",
"param_rok-produkcji",
"param_stan__cat",
"param_skrzynia-biegów__cat",
"param_faktura-vat__cat",
"param_moc_2",
"param_marka-pojazdu__cat",
"feature_kamera-cofania__cat",
"param_typ__cat",
"param_pojemność-skokowa_2",
"seller_name__cat",
"feature_wspomaganie-kierownicy__cat",
"param_model-pojazdu__cat",
"param_wersja__cat",
"param_kod-silnika__cat",
"feature_system-start-stop__cat",
"feature_asystent-pasa-ruchu__cat",
"feature_czujniki-parkowania-przednie__cat",
"feature_łopatki-zmiany-biegów__cat",
"feature_regulowane-zawieszenie__cat"]
xgb_params = {
"max_depth": 5,
"n_estimators": 50,
"learning_rate": 0.1,
"seed": 0
}
run_model(xgb.XGBRegressor(**xgb_params), feats)
###Output
[20:45:17] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[20:45:21] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[20:45:24] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
###Markdown
Hyperopt
###Code
def obj_func(params):
print("Training with params: ")
print(params)
mean_mae, score_std = run_model(xgb.XGBRegressor(**params), feats)
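    # hyperopt minimizes the returned "loss"; run_model reports a negative MAE
    # (neg_mean_absolute_error), so its absolute value is used as the loss.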
return {"loss": np.abs(mean_mae), "status": STATUS_OK}
# space
xgb_reg_params = {
    "learning_rate": hp.choice("learning_rate", np.arange(0.05, 0.31, 0.05)),
    "max_depth": hp.choice("max_depth", np.arange(5, 16, 1, dtype = int)),
    "subsample": hp.quniform("subsample", 0.5, 1, 0.05),
    "colsample_bytree": hp.quniform("colsample_bytree", 0.05, 1, 0.05),
    "objective": "reg:squarederror",
    "n_estimators": 100,
    "seed": 0
}
# run
best = fmin(obj_func, xgb_reg_params, algo = tpe.suggest, max_evals=25)
best
###Output
Training with params:
{'colsample_bytree': 0.6000000000000001, 'learning_rate': 0.15000000000000002, 'max_depth': 5, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.7000000000000001}
Training with params:
{'colsample_bytree': 0.1, 'learning_rate': 0.05, 'max_depth': 10, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.75}
Training with params:
{'colsample_bytree': 0.2, 'learning_rate': 0.15000000000000002, 'max_depth': 14, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9}
Training with params:
{'colsample_bytree': 0.9500000000000001, 'learning_rate': 0.2, 'max_depth': 10, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.55}
Training with params:
{'colsample_bytree': 0.45, 'learning_rate': 0.05, 'max_depth': 9, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.6000000000000001}
Training with params:
{'colsample_bytree': 0.8, 'learning_rate': 0.05, 'max_depth': 14, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.7000000000000001}
Training with params:
{'colsample_bytree': 0.55, 'learning_rate': 0.15000000000000002, 'max_depth': 10, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.6000000000000001}
Training with params:
{'colsample_bytree': 0.4, 'learning_rate': 0.15000000000000002, 'max_depth': 11, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.75}
Training with params:
{'colsample_bytree': 0.1, 'learning_rate': 0.2, 'max_depth': 6, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.75}
Training with params:
{'colsample_bytree': 0.8500000000000001, 'learning_rate': 0.3, 'max_depth': 7, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.55}
Training with params:
{'colsample_bytree': 0.7000000000000001, 'learning_rate': 0.15000000000000002, 'max_depth': 10, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 1.0}
Training with params:
{'colsample_bytree': 0.30000000000000004, 'learning_rate': 0.05, 'max_depth': 13, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.6000000000000001}
Training with params:
{'colsample_bytree': 0.8500000000000001, 'learning_rate': 0.05, 'max_depth': 10, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.65}
Training with params:
{'colsample_bytree': 0.15000000000000002, 'learning_rate': 0.2, 'max_depth': 5, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8}
Training with params:
{'colsample_bytree': 0.55, 'learning_rate': 0.15000000000000002, 'max_depth': 11, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.65}
Training with params:
{'colsample_bytree': 0.55, 'learning_rate': 0.05, 'max_depth': 13, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8}
Training with params:
{'colsample_bytree': 0.9500000000000001, 'learning_rate': 0.2, 'max_depth': 6, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9}
Training with params:
{'colsample_bytree': 0.65, 'learning_rate': 0.25, 'max_depth': 15, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.55}
Training with params:
{'colsample_bytree': 0.7000000000000001, 'learning_rate': 0.25, 'max_depth': 13, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.75}
Training with params:
{'colsample_bytree': 0.9500000000000001, 'learning_rate': 0.05, 'max_depth': 13, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8}
Training with params:
{'colsample_bytree': 0.75, 'learning_rate': 0.1, 'max_depth': 8, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 1.0}
Training with params:
{'colsample_bytree': 0.8, 'learning_rate': 0.1, 'max_depth': 14, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9500000000000001}
Training with params:
{'colsample_bytree': 0.7000000000000001, 'learning_rate': 0.1, 'max_depth': 14, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 1.0}
Training with params:
{'colsample_bytree': 0.8500000000000001, 'learning_rate': 0.1, 'max_depth': 14, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9500000000000001}
Training with params:
{'colsample_bytree': 0.45, 'learning_rate': 0.1, 'max_depth': 12, 'n_estimators:': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9}
100%|██████████| 25/25 [18:27<00:00, 56.63s/it, best loss: 7407.42165231573]
|
cmd/data_utils/1. split_train_val.ipynb | ###Markdown
DEFT 2021 : split train set into train and validation
###Code
import pandas as pd
import numpy as np
IN_DATASET_FILE = '../data/release/classes-train-v2.txt'
OUT_TRAIN_FILE = '../data/work/classes-train-train.txt'
OUT_VAL_FILE = '../data/work/classes-train-val.txt'
VAL_RATE = 0.1
###Output
_____no_output_____
###Markdown
Split DEFT training set into validation and training
###Code
orig_df = pd.read_csv(IN_DATASET_FILE, sep='\t', header=None, names=['file', 'label', 'desc'])
file_list = orig_df['file'].unique()
np.random.seed(42)
np.random.shuffle(file_list)
position = int(len(file_list) * VAL_RATE)
val_list = file_list[:position]
train_list = file_list[position:]
val_df = orig_df[orig_df['file'].isin(val_list)]
train_df = orig_df[orig_df['file'].isin(train_list)]
assert len(val_df) + len(train_df) == len(orig_df)
val_df.to_csv(OUT_VAL_FILE, sep="\t", header=None)
train_df.to_csv(OUT_TRAIN_FILE, sep="\t", header=None)
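# Added sanity check (assumes 'file' is the grouping key of the split):
# no document should appear in both splits, and the validation fraction should be close to VAL_RATE.
assert len(set(val_list) & set(train_list)) == 0
val_fraction = len(val_list) / len(file_list)  # should be close to VAL_RATE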
###Output
_____no_output_____ |
MachineLearning/Classification/Diabetes_Prediction.ipynb | ###Markdown
Batch Prediction
###Code
data = pd.read_csv("pima_indian_diabetes.csv")
data = data.sample(50)
pred_prob = model.predict_proba(data[features])
THRESHOLD = 0.65
predictions = np.where(pred_prob[:, 1] > THRESHOLD, 1, 0)
str(round(accuracy_score(data.Diabetes, predictions) * 100, 2)) + "%"
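# Added follow-up (assumes scikit-learn, which already provides accuracy_score above):
# a confusion matrix shows how the 0.65 threshold trades false negatives against false positives.
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(data.Diabetes, predictions)  # rows = true class, columns = predicted class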
###Output
_____no_output_____ |
examples/CMB_LSS_write.ipynb | ###Markdown
SACC file for CMB and LSS dataThis example shows how to use the different functionality in SACC to write data from LSS-like and CMB-like experiments.
###Code
import sacc
import pyccl as ccl
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Generate the dataWe will first use CCL to generate some data. This will include all the auto- and cross-correlations between a redshift bin with galaxy clustering and cosmic shear, one CMB lensing map and I/Q/U maps in a given frequency channel.
###Code
# Setup (cosmology, number of bins, number of bands, ell range etc.)
d_ell = 10
n_ell = 100
ells = (np.arange(n_ell) + 0.5) * d_ell
# 7 maps, indexed below as: density, shear_E, shear_B, kappa, T, E, B
n_maps = 3 + 3 + 1
# Cosmology
cosmo = ccl.Cosmology(Omega_c=0.25, Omega_b=0.05, h=0.72, n_s=0.96, A_s=2.1e-9)
###Output
_____no_output_____
###Markdown
LSS power spectraFirst we create the galaxy clustering (GC), weak lensing (WL) and CMB lensing tracers with CCL for which we will compute power spectra
###Code
# CCL tracers
z = np.linspace(0., 2., 1000)
nz = z**2 * np.exp(-(z / 0.25)**1.5)
bz = 1 + z
ndens = 10. # 10 gals per amin^2
# 3 tracers
gc = ccl.NumberCountsTracer(cosmo, False, (z,nz), (z, bz))
sh = ccl.WeakLensingTracer(cosmo, (z,nz))
ck = ccl.CMBLensingTracer(cosmo, 1100.)
# Noise power spectra
nl_gc = np.ones(n_ell) / (ndens * (60 * 180 / np.pi)**2)
nl_sh = np.ones(n_ell) * 0.28**2 / (ndens * (60 * 180 / np.pi)**2)
# Plot N(z)
plt.figure()
plt.plot(z, nz)
plt.show()
###Output
_____no_output_____
###Markdown
Frequency mapsNow we create some information for the frequency map. Let's put it at some low frequency so that it is dominated by synchrotron, and we can ignore its cross-correlation with the LSS tracers.
###Code
# Frequency bandpass
nu = np.linspace(1, 40, 2000)
bpass = np.exp(-((nu - 20.) / 10.)**8)
plt.figure()
plt.plot(nu,bpass)
# Beam
fwhm = 60.  # beam FWHM in arcmin
sigma = (fwhm / 2.355) * np.pi / 180 / 60
ell_beam = np.arange(3000)
beam = np.exp(-ell_beam * (ell_beam + 1) * sigma**2)
plt.figure()
plt.loglog(ell_beam, beam)
plt.ylim([1E-3,1.1])
# Noise power spectrum
sigma_T = 10. # 10. uK arcmin
nl_tt = np.ones(n_ell) * (sigma_T * np.pi / 180 / 60) **2
nl_pp = 2 * nl_tt
# Signal power spectrum
cl_syn_tt = 0.01 * (ells / 80.)**(-2.4)
cl_syn_ee = 0.1 * cl_syn_tt
cl_syn_bb = 0.5 * cl_syn_ee
cl_syn_eb = 0.5 * cl_syn_bb
plt.figure()
plt.plot(ells, cl_syn_tt)
plt.plot(ells, cl_syn_ee)
plt.plot(ells, cl_syn_bb)
plt.plot(ells, nl_tt)
plt.plot(ells, nl_pp)
plt.loglog()
plt.show()
###Output
_____no_output_____
###Markdown
Power spectraNow let us generate all non-zero power spectra.
###Code
# Compute power spectra
# We will assume that the cross-correlation between clustering, lensing,
# and CMB lensing with the frequency maps is zero.
cls = np.zeros([n_maps, n_maps, n_ell])
plt.figure()
# GC - GC
cls[0, 0, :] = ccl.angular_cl(cosmo, gc, gc, ells) + nl_gc
plt.plot(ells, cls[0, 0, :], label='GG')
# GC - WL (E-only, B is zero)
cls[0, 1, :] = ccl.angular_cl(cosmo, gc, sh, ells)
cls[1, 0, :] = cls[0, 1, :]
plt.plot(ells, cls[0, 1, :], label='GL')
# GC - CMBK
cls[0, 3, :] = ccl.angular_cl(cosmo, gc, ck, ells)
cls[3, 0, :] = cls[0, 3, :]
plt.plot(ells, cls[0, 3, :], label='GK')
# WL - WL
# EE
cls[1, 1, :] = ccl.angular_cl(cosmo, sh, sh, ells) + nl_sh
# BB
cls[2, 2, :] = nl_sh
plt.plot(ells, cls[1, 1, :], label='LL')
# WL - CMBK (E-only, B is zero)
cls[1, 3, :] = ccl.angular_cl(cosmo, sh, ck, ells)
cls[3, 1, :] = cls[1, 3, :]
plt.plot(ells, cls[1, 3, :], label='LK')
# CMBK - CMBK
cls[3, 3, :] = ccl.angular_cl(cosmo, ck, ck, ells)
plt.plot(ells, cls[3, 3, :], label='KK')
# T - T
cls[4, 4, :] = cl_syn_tt
# E - E
cls[5, 5, :] = cl_syn_ee
# E - B
cls[5, 6, :] = cl_syn_eb
cls[6, 5, :] = cls[5, 6, :]
# B - B
cls[6, 6, :] = cl_syn_bb
plt.loglog()
plt.legend(loc='lower left', ncol=2)
plt.show()
###Output
_____no_output_____
###Markdown
Bandpower window functionsFor simplicity let's just assume top-hat windows
###Code
n_ell_large = 3001
ells_large = np.arange(n_ell_large)
window_single = np.zeros([n_ell, n_ell_large])
for i in range(n_ell):
window_single[i, i * d_ell : (i + 1) * d_ell] = 1.
plt.figure()
for w in window_single:
plt.plot(ells_large, w)
plt.xlim([200,300])
plt.show()
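# Added sanity check: each top-hat window covers exactly d_ell multipoles.
assert np.all(window_single.sum(axis=1) == d_ell)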
###Output
_____no_output_____
###Markdown
CovarianceFinally, let's create a covariance matrix
###Code
fsky = 0.1
n_cross = (n_maps * (n_maps + 1)) // 2
covar = np.zeros([n_cross, n_ell, n_cross, n_ell])
id_i = 0
for i1 in range(n_maps):
for i2 in range(i1, n_maps):
id_j = 0
for j1 in range(n_maps):
for j2 in range(j1, n_maps):
cl_i1j1 = cls[i1, j1, :]
cl_i1j2 = cls[i1, j2, :]
cl_i2j1 = cls[i2, j1, :]
cl_i2j2 = cls[i2, j2, :]
# Knox formula
cov = (cl_i1j1 * cl_i2j2 + cl_i1j2 * cl_i2j1) / (d_ell * fsky * (2 * ells + 1))
covar[id_i, :, id_j, :] = np.diag(cov)
id_j += 1
id_i += 1
covar = covar.reshape([n_cross * n_ell, n_cross * n_ell])
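# Added sanity checks (not in the original example): the Knox covariance should be
# symmetric with a strictly positive diagonal.
assert np.allclose(covar, covar.T)
assert np.all(np.diag(covar) > 0)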
###Output
_____no_output_____
###Markdown
Create SACC file We start by creating an empty `Sacc` object.
###Code
s = sacc.Sacc()
###Output
_____no_output_____
###Markdown
TracersNow we add all maps as individual tracers.The GC and WL maps will be `NZ` tracers, the CMBK tracer will be a `Map` tracer, and the I/Q/U maps will be two `NuMap` tracers (one for temperature, another one for polarization).
###Code
# GC
s.add_tracer('NZ', 'gc', # Name
quantity='galaxy_density', # Quantity
spin=0, # Spin
z=z, # z
nz=nz) # nz
# WL
s.add_tracer('NZ', 'wl', # Name
quantity='galaxy_shear', # Quantity
spin=2, # Spin
z=z, # z
nz=nz, # nz
extra_columns={'error': 0.1*nz}, # You can include extra columns for the N(z)
sigma_g=0.28) # You can add any extra information as **kwargs
# CMBK
s.add_tracer('Map', 'ck', # Name
quantity='cmb_convergence', # Quantity
spin=0, # Spin
ell=ell_beam, beam=beam) # Beam
# T
s.add_tracer('NuMap', 'B20_T', # Name
quantity='cmb_temperature', # Quantity
spin=0, # Spin
nu=nu, bandpass=bpass, # Bandpass
bandpass_extra={'error': 0.01 * bpass}, # You can add some extra bandpass data.
ell=ell_beam, beam=beam, # Beam
beam_extra={'error': 0.01 * beam},
nu_unit='GHz', # Frequency units
map_unit='uK_RJ', # Map units
)
# Q/U
s.add_tracer('NuMap', 'B20_P', # Name
quantity='cmb_polarization', # Quantity
spin=2, # Spin
nu=nu, bandpass=bpass, # Bandpass
bandpass_extra={'error': 0.01 * bpass}, # You can add some extra bandpass data.
ell=ell_beam, beam=beam, # Beam
beam_extra={'error': 0.01 * beam},
nu_unit='GHz', # Frequency units
map_unit='uK_RJ', # Map units
)
###Output
_____no_output_____
###Markdown
Power spectraNow we add all power spectra one-by-one
###Code
# Create a SACC bandpower window object
wins = sacc.BandpowerWindow(ells_large, window_single.T)
# GC-GC
s.add_ell_cl('cl_00', # Data type
'gc', # 1st tracer's name
'gc', # 2nd tracer's name
ells, # Effective multipole
cls[0, 0, :], # Power spectrum values
window=wins, # Bandpower windows
)
# GC-WL
s.add_ell_cl('cl_0e', 'gc', 'wl', ells, cls[0, 1, :], window=wins)
s.add_ell_cl('cl_0b', 'gc', 'wl', ells, cls[0, 2, :], window=wins)
# GC-CMBK
s.add_ell_cl('cl_00', 'gc', 'ck', ells, cls[0, 3, :], window=wins)
# GC-T
s.add_ell_cl('cl_00', 'gc', 'B20_T', ells, cls[0, 4, :], window=wins)
# GC-P
s.add_ell_cl('cl_0e', 'gc', 'B20_P', ells, cls[0, 5, :], window=wins)
s.add_ell_cl('cl_0b', 'gc', 'B20_P', ells, cls[0, 6, :], window=wins)
# WL-WL
s.add_ell_cl('cl_ee', 'wl', 'wl', ells, cls[1, 1, :], window=wins)
s.add_ell_cl('cl_eb', 'wl', 'wl', ells, cls[1, 2, :], window=wins)
s.add_ell_cl('cl_bb', 'wl', 'wl', ells, cls[2, 2, :], window=wins)
# WL-CMBK
s.add_ell_cl('cl_0e', 'wl', 'ck', ells, cls[1, 3, :], window=wins)
s.add_ell_cl('cl_0b', 'wl', 'ck', ells, cls[2, 3, :], window=wins)
# WL-T
s.add_ell_cl('cl_0e', 'wl', 'B20_T', ells, cls[1, 4, :], window=wins)
s.add_ell_cl('cl_0b', 'wl', 'B20_T', ells, cls[2, 4, :], window=wins)
# WL-E/B
s.add_ell_cl('cl_ee', 'wl', 'B20_P', ells, cls[1, 5, :], window=wins)
s.add_ell_cl('cl_eb', 'wl', 'B20_P', ells, cls[1, 6, :], window=wins)
s.add_ell_cl('cl_be', 'wl', 'B20_P', ells, cls[2, 5, :], window=wins)
s.add_ell_cl('cl_bb', 'wl', 'B20_P', ells, cls[2, 6, :], window=wins)
# CMBK-CMBK
s.add_ell_cl('cl_00', 'ck', 'ck', ells, cls[3, 3, :], window=wins)
# CMBK-T
s.add_ell_cl('cl_00', 'ck', 'B20_T', ells, cls[3, 4, :], window=wins)
# CMBK-P
s.add_ell_cl('cl_0e', 'ck', 'B20_P', ells, cls[3, 5, :], window=wins)
s.add_ell_cl('cl_0b', 'ck', 'B20_P', ells, cls[3, 6, :], window=wins)
# T-T
s.add_ell_cl('cl_00', 'B20_T', 'B20_T', ells, cls[4, 4, :], window=wins)
# T-P
s.add_ell_cl('cl_0e', 'B20_T', 'B20_P', ells, cls[4, 5, :], window=wins)
s.add_ell_cl('cl_0b', 'B20_T', 'B20_P', ells, cls[4, 6, :], window=wins)
# P-P
s.add_ell_cl('cl_ee', 'B20_P', 'B20_P', ells, cls[5, 5, :], window=wins)
s.add_ell_cl('cl_eb', 'B20_P', 'B20_P', ells, cls[5, 6, :], window=wins)
s.add_ell_cl('cl_bb', 'B20_P', 'B20_P', ells, cls[6, 6, :], window=wins)
###Output
_____no_output_____
###Markdown
CovarianceFinally, add the covariance
###Code
s.add_covariance(covar)
###Output
_____no_output_____
###Markdown
WritingFinally, write it to file!
###Code
s.save_fits("cmblss.fits", overwrite=True)
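# Added round-trip check (assumption: this sacc version provides Sacc.load_fits and
# exposes tracers as a dict keyed by name).
s_check = sacc.Sacc.load_fits("cmblss.fits")
assert set(s_check.tracers.keys()) == {'gc', 'wl', 'ck', 'B20_T', 'B20_P'}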
###Output
_____no_output_____ |
Sessions/Session11/Day2/FindingSourcesSolutions.ipynb | ###Markdown
Background Subtraction and Source Detection**Version 0.1**By Yusra AlSayyad (Princeton University) Note: for portability, the examples in this notebook are one-dimensional and avoid using libraries. In practice on real astronomical images, I recommend using a library for astronomical image processing, e.g. AstroPy or the LSST Stack. Background EstimationA prerequisite to this notebook is the `introductionToBasicStellarPhotometry.ipynb` notebook. We're going to use the same single stellar simulation, but with increasingly complex backgrounds.First, setup the simulation and necessary imports
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from matplotlib.ticker import MultipleLocator
%matplotlib notebook
def pixel_plot(pix, counts, fig=None, ax=None):
'''Make a pixelated 1D plot'''
if fig is None and ax is None:
fig, ax = plt.subplots()
ax.step(pix, counts,
where='post')
ax.set_xlabel('pixel number')
ax.set_ylabel('relative counts')
ax.xaxis.set_minor_locator(MultipleLocator(1))
ax.xaxis.set_major_locator(MultipleLocator(5))
fig.tight_layout()
return fig, ax
# Define your PSF function phi()
# It is sufficient to copy and paste from
# your introductionToBasicStellarPhotometry noteboook
def phi(x, mu, fwhm):
"""Evalute the 1d PSF N(mu, sigma^2) along x
Parameters
----------
x : array-like of shape (n_pixels,)
detector pixel number
mu : float
mean position of the 1D star
fwhm : float
Full-width half-maximum of the stellar profile on the detector
Returns
-------
flux : array-like of shape (n_pixels,)
Flux in each pixel of the input array
"""
sigmaPerFwhm = 2*np.sqrt(2*np.log(2))
sigma = fwhm/sigmaPerFwhm
flux = norm.pdf(x, mu, sigma)
return flux
# Define your image simulation function to
# It is sufficient to copy and paste from
# your introductionToBasicStellarPhotometry noteboook
# Note that the background S should now be supplied as
# an array of length (S) or a constant.
def simulate(x, mu, fwhm, S, F):
"""simulate a noisy stellar signal
Parameters
----------
x : array-like
detector pixel number
mu : float
mean position of the 1D star
fwhm : float
Full-width half-maximum of the stellar profile on the detector
S : float or array-like of len(x)
Sky background for each pixel
F : float
Total stellar flux
Returns
-------
noisy_counts : array-like (same shape as x)
the (noisy) number of counts in each pixel
"""
signal = F * phi(x=x, mu=mu, fwhm=fwhm) + S
noise = np.random.normal(loc=0, scale=np.sqrt(signal))
noisy_counts = signal + noise
return noisy_counts
###Output
_____no_output_____
###Markdown
Problem 1) Simple 1-D Background Estimation Problem 1.1) Estimate the background as a constant offset (order = 0)For this problem we will use a simulated star with a constant background offset of $S=100$.Background estimation is typically done by inspecting the distribution of counts in pixel bins. First inspect the distribution of counts, and pick an estimator for the background that is robust to the star (i.e., one that reduces the bias from the star). Remember that we haven't done detection yet and don't know where the sources are.
###Code
# simulate the star
x = np.arange(100)
mu = 35
S = 100
fwhm = 5
F = 500
fig = plt.figure(figsize=(8,4))
ax = plt.subplot()
sim_star = simulate(x, mu=mu, fwhm=fwhm, S=S, F=F)
pixel_plot(x, sim_star, fig=fig, ax=ax)
# plot and inspect histogram
fig = plt.figure(figsize=(6,4))
plt.hist(sim_star, bins=20)
plt.xlabel('image counts')
plt.ylabel('num pixels')
S_estimate = np.median(sim_star)
plt.axvline(S_estimate, color='red')
plt.axvline(np.mean(sim_star), color='orange')
print('My background estimate = {:.4f}'.format(S_estimate))
print('The mean pixel count = {:.4f}'.format(np.mean(sim_star)))
# plot your background model over the "image"
fig, ax = pixel_plot(x, sim_star)
pixel_plot(x, np.repeat(S_estimate, len(x)), fig=fig, ax=ax)
###Output
_____no_output_____
###Markdown
Problem 1.2) Estimate the background as a ramp/line (order = 1)Now let's simulate a slightly more complicated background: a linear ramp, $y = 3x + 100$. First simulate the same star with the new background. Then we're going to fit it using the following steps:* Bin the image* Use your robust estimator to estimate the background value per bin center* Fit these bin centers with a model* A common simple model that astronomers use is a Chebyshev polynomial. Chebyshevs have some very nice properties that prevent ringing at the edges of the fit window. Another popular way to "model" the bin centers is non-parametrically via interpolation.
###Code
# Double check that your simulate function can take S optionally as array-like
# Create and plot the image with S = 3*x + 100
S = 3*x + 100
sim_star = simulate(x=x, mu=mu, fwhm=fwhm, S=S, F=F)
pixel_plot(x, sim_star)
# bin the image in 20-pixel bins
# complete
BIN_SIZE = 20
bins = np.arange(0, 100 + BIN_SIZE, BIN_SIZE)
bin_centers = 0.5 *(bins[0:-1] + bins[1:])
digitized = np.digitize(x, bins=bins)
bin_values = [np.median(sim_star[digitized == i]) for i in range(1, len(bins))]
# Fit the bin_values vs bin_centers with a 1st-order chebyshev polynomial
# Evaluate your model for the full image
# hint: look up np.polynomial.chebyshev.chebfit and np.polynomial.chebyshev.chebeval
coefficients = np.polynomial.chebyshev.chebfit(bin_centers, bin_values, 1)
bg = np.polynomial.chebyshev.chebval(x, coefficients)
# Replot the image:
fig, ax = pixel_plot(x, sim_star)
# binned values
ax.plot(bin_centers, bin_values, 'o')
# Overplot your background model:
ax.plot(x, bg, '-')
# Finally plot your background subtracted image:
fig, ax = pixel_plot(x, sim_star - bg)
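# Added check: away from the star the residuals should scatter around zero;
# the median residual should be small compared to the pixel noise (~sqrt(S)).
median_residual = np.median(sim_star - bg)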
###Output
_____no_output_____
###Markdown
Problem 1.3) Estimate a more realistic background (still in 1D)Now repeat the exercise in problem 1.2 with a more complex background.
###Code
SIGMA_PER_FWHM = 2*np.sqrt(2*np.log(2))
fwhm = 5
x = np.arange(100)
background = 1000*norm.pdf(x, 50, 18) + 100*norm.pdf(x, 20, fwhm/SIGMA_PER_FWHM) + 100*norm.pdf(x, 60, fwhm/SIGMA_PER_FWHM)
sim_star3 = simulate(x=x, mu=35, fwhm=fwhm, S=background, F=200)
fig, ax = pixel_plot(x, sim_star3)
###Output
_____no_output_____
###Markdown
1.3.1) Bin the image. Plot the bin centers. What bin size did you pick?
###Code
BIN_SIZE = 10
bins = np.arange(0, 100 + BIN_SIZE, BIN_SIZE)
bin_centers = 0.5 *(bins[0:-1] + bins[1:])
print(bin_centers)
digitized = np.digitize(x, bins=bins)
bin_values = [np.median(sim_star3[digitized == i]) for i in range(1, len(bins))]
# overplot the binned esimtates:
fig, ax = pixel_plot(x, sim_star3)
ax.plot(bin_centers, bin_values, 'o')
###Output
_____no_output_____
###Markdown
1.3.2) Spatially model the binned estimates (bin_values vs bin_centers) as a chebyshev polynomial.Evaluate your model on the image grid and overplot.(what degree/order did you pick?)
###Code
fig, ax = pixel_plot(x, sim_star3)
ax.plot(bin_centers, bin_values, 'o')
coefficients = np.polynomial.chebyshev.chebfit(bin_centers, bin_values, 2)
bg = np.polynomial.chebyshev.chebval(x, coefficients)
ax.plot(x, bg, '-', label='order=2')
coefficients = np.polynomial.chebyshev.chebfit(bin_centers, bin_values, 3)
bg = np.polynomial.chebyshev.chebval(x, coefficients)
ax.plot(x, bg, '-', label='order=3')
ax.legend()
###Output
_____no_output_____
###Markdown
1.3.3) Subtract off the model and plot the "background-subtracted image."
###Code
# Plot the background subtracted image
fig, ax = pixel_plot(x, sim_star3 - bg)
###Output
_____no_output_____
###Markdown
And now you can see that this problem is fairly unrealistic as far as background subtraction goes and should probably be treated with a deblender. Typically in images: * For Chebyshev polynomials we use bin sizes of at least 128 pixels and orders of no more than 6. The spatial scale is controlled by the order, and bin size is less important. In fact you could probably use a bin size of 1 if you really wanted to. * For interpolation, the spatial scale is controlled by the bin size. We usually choose bins >= 256 pixels. Problem 2) Finding SourcesNow that we have a background-subtracted image, let’s look for sources. In the lecture we focused on the matched filter interpretation. Here we will go into the hypothesis testing and maximum likelihood interpretations. Maximum likelihood interpretation: Assume that we know there is a point source somewhere in this image. We want to find the pixel that has the maximum likelihood of having a point source centered on it. Recall from session 10, the probability for an individual observation $I_i$ is:$$P(X_i) = \frac{1}{\sqrt{2\pi\sigma_i^2}} \exp{-\frac{(X_i - y_i)^2}{2\sigma_i^2}}$$Here: $X_i$ is the pixel value of pixel $i$ in the image and $y_i$ is the model prediction for that pixel. The model in this case is your `simulate()` function from the `IntroductionToBasicStellarPhotometry.ipynb` notebook: the PSF evaluated at a distance from the center multiplied by the flux amplitude: $F * \phi(x - x_{center}) + S$, where $F$ is the flux amplitude, $\phi$ is the PSF profile (a function of position), and $S$ is the background. Plug it in:$$P(X_i) = \frac{1}{\sqrt{2\pi\sigma_i^2}} \exp{-\frac{(X_i - (F * \phi_i(x_{center}) + S))^2}{2\sigma_i^2}}$$ Hypothesis test interpretation:If I were teaching source detection to my non-scientist, college stats 101 students, I'd frame the problem like this:Pretend you have an infinitely large population of pixels. Say I know definitively that this arbitrarily large population of pixels is drawn from $N(0,100)$ (i.e. a standard deviation of 10). I have another sample of 13 pixels. I want to test the hypothesis that those 13 pixels were drawn from the $N(0,100)$ population too. Test the hypothesis that your subsample of 13 pixels was drawn from the same population.* $H_0$: $\mu = 0$* $H_A$: $\mu > 0$$$z = \frac{\bar{x} - \mu}{\sigma / \sqrt{n}} $$$$z = \frac{\sum{x}/13 - 0}{10 /\sqrt{13}} $$OK, if this is coming back now, let's replace this with our *real* estimator for PSF flux, which is a *weighted mean* of the pixels where the weights are the PSF $\phi_i$. Whenever I forget the formulas for weighted means, I consult the [wikipedia page](https://en.wikipedia.org/wiki/Weighted_arithmetic_mean).Now tweak it for a weighted mean (PSF flux):$$ z = \frac{\sum{\phi_i x_i} - \mu} {\sqrt{ \sum{\phi_i^2 \sigma_i^2}}} $$Where the denominator is from the variance estimate of a weighted mean. For constant $\sigma$ it reduces to $\sigma_{\bar{x}}^2 = \sigma^2 \sum{\phi^2_i}$, and for a constant $\phi$ this reduces to $\sigma_{\bar{x}}^2 = \sigma^2 /n$, the denominator in the simple mean example above. Replace $\mu=0$ again. $$ z = \frac{\sum{\phi_i x_i}} {\sqrt{ \sum{\phi_i^2 \sigma_i^2}}} $$Our detection map is just the numerator for each pixel! We deal with the denominator later when choosing the threshold, but we could just as easily divide the whole image by the denominator and have a z-score image! 2.0) Plot the problem image
###Code
# set up simulation
x = np.arange(100)
mu = 35
S = 100
fwhm = 5
F = 300
fig = plt.figure(figsize=(8,4))
ax = plt.subplot()
sim_star = simulate(x, mu=mu, fwhm=fwhm, S=S, F=F)
# To simplify this pretend we know for sure the background = 100
# Plots the backround subtracted image
image = sim_star - 100
pixel_plot(x, image, fig=fig, ax=ax)
###Output
_____no_output_____
###Markdown
2.1) Make a kernel for the PSF.Properties of kernels: They're centered at x=0 (which also means that they have an odd number of pixels) and sum up to 1. You can use your `phi()`.
###Code
xx = np.arange(-7, 8)
kernel = phi(xx, mu=0, fwhm=5)
pixel_plot(xx, kernel)
print(xx)
print(kernel)
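# Added note: the sampled Gaussian sums to ~1 but not exactly (check kernel.sum());
# kernel = kernel / kernel.sum() is a common safeguard when exact normalization matters.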
###Output
[-7 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 7]
[0.00082002 0.00346709 0.01174297 0.03186112 0.06924917 0.12056981
0.16816398 0.18788746 0.16816398 0.12056981 0.06924917 0.03186112
0.01174297 0.00346709 0.00082002]
###Markdown
2.2) Correlate the image with the PSF kernel, and plot the result.What are the tradeoffs when choosing the size of your PSF kernel? What happens if it's too big? What happens if it's too small? **hint:** `scipy.signal.convolve`
###Code
import scipy.signal
size = len(kernel)//2
detection_image = scipy.signal.convolve(image, kernel, mode='same')
# mode='same' pads then clips the padding This is the same as:
# size = len(kernel)//2
# scipy.signal.convolve(image, kernel)[size:-size]
# Note: pay attention to how scipy.signal.convolve handles the edges.
pixel_plot(x, detection_image)
print(len(scipy.signal.convolve(image, kernel, mode='full')))
print(len(scipy.signal.convolve(image, kernel, mode='same')))
print(len(scipy.signal.convolve(image, kernel, mode='valid')))
###Output
114
100
86
###Markdown
**Answer to the question:** Bigger PSF kernels = more accurate convolution, more pixels lost on the edges, and more expensive computation. Smaller kernels don't get close enough to zero on the edges, and the resulting correlated image can look "boxy". 2.3) Detect pixels for which the null hypothesis that there's no source centered there is ruled out at the 5$\sigma$ level.
###Code
# Using a robust estimator for the detection image standard deviation,
# Compute the 5 sigma threshold
N_SIGMA = 5
q1, q2 = np.percentile(detection_image, (30.85, 69.15))
std = q2 - q1
threshold_value = std * N_SIGMA
print('5 sigma threshold value is = {:.4f}'.format(threshold_value))
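# Added note: the markdown above frames this as a z-score; dividing the detection
# image by its (robustly estimated) standard deviation makes that explicit.
z_image = detection_image / std
mask_z = z_image > N_SIGMA  # selects the same pixels as detection_image > threshold_value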
###Output
5 sigma threshold value is = 16.2267
###Markdown
The noise estimate is a little high, but not bad for the first iteration. In future iterations we will mask the footprints detected in the initial round before recomputing. In practice, we use a sigma-clipped rms as the estimate of the standard deviation, in which we iteratively clip outliers until we have what looks like a normal distribution and then compute a plain std (a minimal sketch of this is included at the end of the next code cell).
###Code
qq = scipy.stats.probplot(detection_image, dist="norm")
plt.plot(qq[0][0], qq[0][1])
plt.ylabel('value')
plt.xlabel('Normal quantiles')
# Just for fun to see what's going on:
fig, ax = pixel_plot(x, detection_image)
plt.axhline(threshold_value)
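# Added sketch (referenced in the markdown above): a minimal sigma-clipped std using
# plain numpy; in practice a library routine such as astropy.stats.sigma_clipped_stats
# (or the LSST Stack equivalent) would be used instead of this hand-rolled version.
def sigma_clipped_std(values, n_sigma=3., n_iter=5):
    """Iteratively drop points with |value - mean| > n_sigma*std, then return a plain std."""
    vals = np.asarray(values, dtype=float)
    for _ in range(n_iter):
        mean, std_ = vals.mean(), vals.std()
        keep = np.abs(vals - mean) < n_sigma * std_
        if keep.all():
            break
        vals = vals[keep]
    return vals.std()

sigma_clipped_detection_std = sigma_clipped_std(detection_image)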
###Output
_____no_output_____
###Markdown
2.4) Dilate footprint to provide a window or region for the point source.We will use this window to compute the centroid and total flux of the star in the next two lessons. In the meantime, compute the flux like we did in introductionToStellarPhotometry assuming the input center.
###Code
# complete
import scipy.ndimage  # explicit import; binary_dilation below lives in scipy.ndimage
growBy = fwhm
mask = detection_image > threshold_value
print(np.count_nonzero(mask))
dilated_mask = scipy.ndimage.binary_dilation(mask, iterations=growBy)
print(np.count_nonzero(dilated_mask))
fig, ax = pixel_plot(x[dilated_mask], image[dilated_mask])
# easy aperture flux:
np.sum(image[dilated_mask])
# Copied from solutions of previous notebook
from scipy.optimize import minimize
psf = phi(x[dilated_mask], mu=35, fwhm=5)
im = image[dilated_mask]
# minimize the square of the residuals to determine flux
def sum_res(A, flux=im, model=psf):
return sum((flux - A*model)**2)
sim_star = simulate(x, mu, fwhm, S, F)
psf_flux = minimize(sum_res, 300, args=(im, psf))
print("The PSF flux is {:.3f}".format(psf_flux.x[0]))
###Output
The PSF flux is 308.353
|
notebooks/1.simulations_to_generate_data.ipynb | ###Markdown
- Remember to have 6 colors selected already- Open the website floattheturn.com
###Code
import os
import sys
import ast
import math
import itertools
import functools
import subprocess
import numpy as np
import pandas as pd
from copy import deepcopy
from collections import defaultdict
# RFI
lj_rfi = "55+,AJo+,KQo,A9s+,K9s+,Q9s+,J9s+,T9s"
hj_rfi = "22+,AJo+,KJo+,QJo,A9s+,K9s+,Q9s+,J9s+,T9s"
co_rfi = "22+,ATo+,KTo+,QTo+,JTo,A5s+,K9s+,Q9s+,J9s+,T9s"
bn_rfi = "22+,A9o+,K9o+,Q9o+,J9o+,T9o,A2s+,K8s+,Q8s+,J8s+,T8s+,97s+,86s+,75s+,64s+,53s+,43s"
sb_rfi = "22+,A5o+,K9o+,Q9o+,J9o+,T9o,A2s+,K8s+,Q8s+,J8s+,T8s+,97s+,86s+,75s+,64s+,53s+,43s"
# vs RFI
vsRFI_bb_vs_lj__raise = "QQ+,AKo,AQs+"
vsRFI_bb_vs_hj__raise = "QQ+,AKo,AJs+,KQs"
vsRFI_bb_vs_co__raise = "JJ+,AKo,AJs+,KQs,QJs,JTs"
vsRFI_bb_vs_bn__raise = "TT+,AQo+,KQo,ATs+,KTs+,QTs+,JTs"
vsRFI_bb_vs_sb__raise = "99+,AQo+,KQo,A9s+,K9s+,Q9s+,J9s+,T9s"
vsRFI_bb_vs_lj__call = "JJ-22,AQo-AJo,KQo-KJo,QJo,AJs-A9s,KQs-K9s,QJs-Q9s,JTs-J9s,T9s"
vsRFI_bb_vs_hj__call = "JJ-22,AQo-ATo,KQo-KTo,QJo-QTo,JTo,ATs-A5s,KJs-K9s,QJs-Q9s,JTs-J9s,T9s"
vsRFI_bb_vs_co__call = "TT-22,AQo-A9o,KQo-K9o,QJo-Q9o,JTo-J9o,T9o,ATs-A2s,KJs-K8s,QTs-Q8s,J9s-J8s,T9s-T8s,98s-97s,87s-86s,76s-75s,65s-64s,54s-53s,43s"
vsRFI_bb_vs_bn__call = "99-22,AJo-A5o,KJo-K9o,QJo-Q9o,JTo-J9o,T9o,A9s-A2s,K9s-K8s,Q9s-Q8s,J9s-J8s,T9s-T8s,98s-97s,87s-86s,76s-75s,65s-64s,54s-53s,43s"
vsRFI_bb_vs_sb__call = "88-22,AJo-A8o,KJo-K8o,QJo-Q8o,JTo-J8o,T9o-T8o,98o,A8s-A2s,K8s-K4s,Q8s-Q4s,J8s-J4s,T8s-T4s,98s-94s,87s-84s,76s-74s,65s-64s,54s-53s,43s-42s,32s"
# vs RFI
vsRFI_sb_vs_bn__raise = lj_rfi
vsRFI_other__raise = "AA-QQ,AKo,AKs,A5s-A4s"
vsRFI_other__call = "JJ-77,AQo,AQs-ATs,KQs-KTs,QJs-QTs,JTs"
# RFIvs3B
RFIvs3B_other__raise = "AA-QQ,AKo,AKs,A5s-A4s"
RFIvs3B_other__call = "JJ-77,AQo,AQs-ATs,KQs-KTs,QJs-QTs,JTs"
# Cold 4B
cold4B = "AA-KK"
# Input: range
m = {
"A": 14, "K": 13, "Q": 12, "J": 11, "T": 10, "9": 9,
"8": 8, "7": 7, "6": 6, "5": 5, "4": 4, "3": 3, "2": 2
}
m2 = {14: 'A', 13: 'K', 12: 'Q', 11: 'J', 10: 'T',
9: '9', 8: '8', 7: '7', 6: '6', 5: '5',
4: '4', 3: '3', 2: '2'}
def range_to_hands(c_range="JJ+,AJs+,KQs,AKo"):
temp = c_range.split(",")
pps = []
pp = temp[0]
if "+" in pp:
for i in range(14,m[pp[0]]-1,-1):
pps.append([i, i])
elif "-" in pp:
for i in range(m[pp[0]],m[pp[-1]]-1,-1):
pps.append([i, i])
else:
pps.append([m[pp[0]], m[pp[0]]])
ss = []
temp_s = [x for x in temp if "s" in x]
for s in temp_s:
if "+" in s:
for i in range(m[s[0]]-1,m[s[1]]-1,-1):
ss.append([m[s[0]], i])
elif "-" in s:
for i in range(m[s[1]],m[s[5]]-1,-1):
ss.append([m[s[0]], i])
else:
ss.append([m[s[0]], m[s[1]]])
os = []
temp_o = [x for x in temp if "o" in x]
for o in temp_o:
if "+" in o:
for i in range(m[o[0]]-1,m[o[1]]-1,-1):
os.append([m[o[0]], i])
elif "-" in o:
for i in range(m[o[1]],m[o[5]]-1,-1):
os.append([m[o[0]], i])
else:
os.append([m[o[0]], m[o[1]]])
# Output: [[2,2]], [[14,13]], [[14,13]]
# PP, Suited, Offsuit
return pps, ss, os
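# Added illustrative check of the parser (the expected values follow directly from the code above).
_pps, _ss, _os = range_to_hands("QQ+,AKo,AQs+")
assert _pps == [[14, 14], [13, 13], [12, 12]]  # AA, KK, QQ
assert _ss == [[14, 13], [14, 12]]             # AKs, AQs
assert _os == [[14, 13]]                       # AKo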
cat1_rankings = ["set", "trips", "two pair", "overpair 9+", "any overpair", "TP J-kicker",
"TP K-kicker", "TP any kicker"]
cat2_nonpaired_rankings = ["top pair bad kicker", "middle pair", "bottom pair", "PP below middle pair",
"AJ high", "KQ high", "KJ high bdfd", "K8 high bdfd", ]
cat2_paired_rankings = ["Ace high", "PP below top card", "KQ high", "all"]
cat3_rankings = ["FD", "OESD", "Gutshot", "3 to a straight not all from low end",
"3 to a straight low end bdfd", "3 to a straight low end",
"5 cards within 7 values with bdfd", "Q- high bdfd",
"3 cards within 4 values as overcards", "A- bdfd"]
first_cat4_pp_rankings = ["JJ", "TT", "99", "88", "77", "66", "55", "44", "33", "22"]
def my_hands_cat1_level_x_and_above(x):
result = [[], [], []]
if x >= 1:
result[1] += my_hands_s_straight
result[2] += my_hands_o_straight
if x >= 2:
result[0] += my_hands_pp_sets
if x >= 3:
result[1] += my_hands_s_trips
result[2] += my_hands_o_trips
if x >= 4:
result[1] += my_hands_s_two_pair
result[2] += my_hands_o_two_pair
if x >= 5:
result[0] += my_hands_pp_overpair_9plus
if x >= 6:
result[0] += my_hands_pp_any_overpair
if x >= 7:
result[1] += my_hands_s_tp_k_kicker
result[2] += my_hands_o_tp_k_kicker
if x >= 8:
result[1] += my_hands_s_tp_j_kicker
result[2] += my_hands_o_tp_j_kicker
if x >= 9:
result[1] += my_hands_s_tp_any_kicker
result[2] += my_hands_o_tp_any_kicker
result[0].sort(reverse=True)
result[1].sort(reverse=True)
result[2].sort(reverse=True)
result[0] = list(k for k,_ in itertools.groupby(result[0]))
result[1] = list(k for k,_ in itertools.groupby(result[1]))
result[2] = list(k for k,_ in itertools.groupby(result[2]))
# Return result
my_hands_cat1 = result
return my_hands_cat1
# Performance improvement by filtering out cat1 from hands already, but would also need a copy of hands
def my_hands_cat2_level_x_and_above(x, my_hands_cat1, x_of_cat3):
result = [[], [], []]
if x >= 1:
# Cat 1
result[1] += my_hands_s_straight
result[2] += my_hands_o_straight
result[0] += my_hands_pp_sets
result[1] += my_hands_s_trips
result[2] += my_hands_o_trips
result[1] += my_hands_s_two_pair
result[2] += my_hands_o_two_pair
result[0] += my_hands_pp_overpair_9plus
result[0] += my_hands_pp_any_overpair
result[1] += my_hands_s_tp_k_kicker
result[2] += my_hands_o_tp_k_kicker
result[1] += my_hands_s_tp_j_kicker
result[2] += my_hands_o_tp_j_kicker
result[1] += my_hands_s_tp_any_kicker
result[2] += my_hands_o_tp_any_kicker
# Cat 2
if x >= 1 and x_of_cat3 <= 20:
result[1] += my_hands_s_tp_bad_kicker
result[2] += my_hands_o_tp_bad_kicker
if x >= 2 and x_of_cat3 <= 19:
result[1] += my_hands_s_middle_pair
result[2] += my_hands_o_middle_pair
if x >= 3 and x_of_cat3 <= 18:
result[0] += my_hands_pp_below_top_pair
if x >= 4 and x_of_cat3 <= 17:
result[1] += my_hands_s_bottom_pair
result[2] += my_hands_o_bottom_pair
if x >= 5 and x_of_cat3 <= 16:
result[1] += my_hands_s_aj_high
result[2] += my_hands_o_aj_high
if x >= 6:
result[0] += my_hands_pp_below_middle_pair
if x >= 7:
result[1] += my_hands_s_kq_high
result[2] += my_hands_o_kq_high
if x >= 8:
result[0] += my_hands_pp_below_bottom_pair
if x >= 9:
result[1] += my_hands_s_kj_high
result[2] += my_hands_o_kj_high
if x >= 10:
result[1] += my_hands_s_k8_high
result[2] += my_hands_o_k8_high
result[0].sort(reverse=True)
result[1].sort(reverse=True)
result[2].sort(reverse=True)
result[0] = list(k for k,_ in itertools.groupby(result[0]))
result[1] = list(k for k,_ in itertools.groupby(result[1]))
result[2] = list(k for k,_ in itertools.groupby(result[2]))
# Interim
cat1_unique_pp = [x for (x,y) in my_hands_cat1[0]]
cat1_unique_s = [x for (x,y) in my_hands_cat1[1]]
cat1_unique_o = [x for (x,y) in my_hands_cat1[2]]
# Remove cat1 from these cat2s
result[0] = [(x,y) for (x,y) in result[0] if x not in cat1_unique_pp]
result[1] = [(x,y) for (x,y) in result[1] if x not in cat1_unique_s]
result[2] = [(x,y) for (x,y) in result[2] if x not in cat1_unique_o]
# Return result
my_hands_cat2 = result
return my_hands_cat2
# Performance improvement by filtering out cat1+cat2 from hands already, but would also need a copy of hands
def my_hands_cat3_level_x_and_above(x, my_hands_cat1, my_hands_cat2):
bdfd_result = [[], [], []]
other_result = [[], [], []]
result = [[], [], []]
if x >= 1:
other_result[0] += my_hands_pp_fd
other_result[1] += my_hands_s_fd
other_result[2] += my_hands_o_fd
if x >= 2:
other_result[0] += my_hands_pp_oesd
other_result[1] += my_hands_s_oesd
other_result[2] += my_hands_o_oesd
if x >= 3:
other_result[0] += my_hands_pp_gutshot
other_result[1] += my_hands_s_gutshot
other_result[2] += my_hands_o_gutshot
if x >= 4:
other_result[1] += my_hands_s_3_to_straight_not_all_from_low_end
other_result[2] += my_hands_o_3_to_straight_not_all_from_low_end
if x >= 5:
bdfd_result[1] += my_hands_s_3_to_straight_low_end_bdfd
bdfd_result[2] += my_hands_o_3_to_straight_low_end_bdfd
if x >= 6:
other_result[1] += my_hands_s_3_to_straight_low_end
other_result[2] += my_hands_o_3_to_straight_low_end
if x >= 7:
bdfd_result[1] += my_hands_s_5_unique_cards_within_7_values_bdfd
bdfd_result[2] += my_hands_o_5_unique_cards_within_7_values_bdfd
if x >= 8:
bdfd_result[0] += my_hands_pp_q_minus_bdfd
bdfd_result[1] += my_hands_s_q_minus_bdfd
bdfd_result[2] += my_hands_o_q_minus_bdfd
if x >= 9:
other_result[1] += my_hands_s_lowest_card_is_one_of_3_cards_within_4_values_and_two_overcards
other_result[2] += my_hands_o_lowest_card_is_one_of_3_cards_within_4_values_and_two_overcards
if x >= 10:
bdfd_result[0] += my_hands_pp_a_minus_bdfd
bdfd_result[1] += my_hands_s_a_minus_bdfd
bdfd_result[2] += my_hands_o_a_minus_bdfd
# Remove duplicates within bdfd hands
bdfd_result[0].sort(reverse=True)
bdfd_result[1].sort(reverse=True)
bdfd_result[2].sort(reverse=True)
bdfd_result[0] = list(k for k,_ in itertools.groupby(bdfd_result[0]))
bdfd_result[1] = list(k for k,_ in itertools.groupby(bdfd_result[1]))
bdfd_result[2] = list(k for k,_ in itertools.groupby(bdfd_result[2]))
# Add all together
result[0] = bdfd_result[0] + other_result[0]
result[1] = bdfd_result[1] + other_result[1]
result[2] = bdfd_result[2] + other_result[2]
# Reduce with max combos number used and sort
groupby_dict = defaultdict(int)
for val in result[0]:
groupby_dict[tuple(val[0])] += val[1]
result[0] = [(sorted(list(x), reverse=True),min(y, 6)) for (x,y) in groupby_dict.items()]
groupby_dict = defaultdict(int)
for val in result[1]:
groupby_dict[tuple(val[0])] += val[1]
result[1] = [(sorted(list(x), reverse=True),min(y, 4)) for (x,y) in groupby_dict.items()]
groupby_dict = defaultdict(int)
for val in result[2]:
groupby_dict[tuple(val[0])] += val[1]
result[2] = [(sorted(list(x), reverse=True),min(y, 12)) for (x,y) in groupby_dict.items()]
# Interim
cat1_unique_pp = [x for (x,y) in my_hands_cat1[0]]
cat1_unique_s = [x for (x,y) in my_hands_cat1[1]]
cat1_unique_o = [x for (x,y) in my_hands_cat1[2]]
cat2_unique_pp = [x for (x,y) in my_hands_cat2[0]]
cat2_unique_s = [x for (x,y) in my_hands_cat2[1]]
cat2_unique_o = [x for (x,y) in my_hands_cat2[2]]
# Remove cat1 and cat2
result[0] = [(x,y) for (x,y) in result[0] if x not in cat1_unique_pp and x not in cat2_unique_pp]
result[1] = [(x,y) for (x,y) in result[1] if x not in cat1_unique_s and x not in cat2_unique_s]
result[2] = [(x,y) for (x,y) in result[2] if x not in cat1_unique_o and x not in cat2_unique_o]
# Add cat2 hands
if x >= 11:
result[1] += [(x,y) for (x,y) in my_hands_s_k8_high if x not in cat1_unique_s and x not in cat2_unique_s]
result[2] += [(x,y) for (x,y) in my_hands_o_k8_high if x not in cat1_unique_o and x not in cat2_unique_o]
if x >= 12:
result[1] += [(x,y) for (x,y) in my_hands_s_kj_high if x not in cat1_unique_s and x not in cat2_unique_s]
result[2] += [(x,y) for (x,y) in my_hands_o_kj_high if x not in cat1_unique_o and x not in cat2_unique_o]
if x >= 13:
result[0] += [(x,y) for (x,y) in my_hands_pp_below_bottom_pair if x not in cat1_unique_pp and x not in cat2_unique_pp]
if x >= 14:
result[1] += [(x,y) for (x,y) in my_hands_s_kq_high if x not in cat1_unique_s and x not in cat2_unique_s]
result[2] += [(x,y) for (x,y) in my_hands_o_kq_high if x not in cat1_unique_o and x not in cat2_unique_o]
if x >= 15:
result[0] += [(x,y) for (x,y) in my_hands_pp_below_middle_pair if x not in cat1_unique_pp and x not in cat2_unique_pp]
# Add cat4 hands
if x >= 16:
remaining_cat2_type_hands_pp = [x for (x,y) in my_hands_pp_below_top_pair]
remaining_cat2_type_hands_s = [x for (x,y) in my_hands_s_aj_high] + [x for (x,y) in my_hands_s_bottom_pair] + [x for (x,y) in my_hands_s_middle_pair] + [x for (x,y) in my_hands_s_tp_bad_kicker]
remaining_cat2_type_hands_o = [x for (x,y) in my_hands_o_aj_high] + [x for (x,y) in my_hands_o_bottom_pair] + [x for (x,y) in my_hands_o_middle_pair] + [x for (x,y) in my_hands_o_tp_bad_kicker]
result[0] += [(x, 6) for x in my_hands[0] if x not in cat1_unique_pp and x not in cat2_unique_pp and x not in remaining_cat2_type_hands_pp]
result[1] += [(x, 4) for x in my_hands[1] if x not in cat1_unique_s and x not in cat2_unique_s and x not in remaining_cat2_type_hands_s]
result[2] += [(x, 12) for x in my_hands[2] if x not in cat1_unique_o and x not in cat2_unique_o and x not in remaining_cat2_type_hands_o]
# Add cat2 hands with pairs
if x >= 17:
result[1] += [(x,y) for (x,y) in my_hands_s_aj_high if x not in cat1_unique_s and x not in cat2_unique_s]
result[2] += [(x,y) for (x,y) in my_hands_o_aj_high if x not in cat1_unique_o and x not in cat2_unique_o]
if x >= 18:
result[1] += [(x,y) for (x,y) in my_hands_s_bottom_pair if x not in cat1_unique_s and x not in cat2_unique_s]
result[2] += [(x,y) for (x,y) in my_hands_o_bottom_pair if x not in cat1_unique_o and x not in cat2_unique_o]
if x >= 19:
result[0] += [(x,y) for (x,y) in my_hands_pp_below_top_pair if x not in cat1_unique_pp and x not in cat2_unique_pp]
if x >= 20:
result[1] += [(x,y) for (x,y) in my_hands_s_middle_pair if x not in cat1_unique_s and x not in cat2_unique_s]
result[2] += [(x,y) for (x,y) in my_hands_o_middle_pair if x not in cat1_unique_o and x not in cat2_unique_o]
if x >= 21:
result[1] += [(x,y) for (x,y) in my_hands_s_tp_bad_kicker if x not in cat1_unique_s and x not in cat2_unique_s]
result[2] += [(x,y) for (x,y) in my_hands_o_tp_bad_kicker if x not in cat1_unique_o and x not in cat2_unique_o]
# Reduce with max combos number used and sort
groupby_dict = defaultdict(int)
for val in result[0]:
groupby_dict[tuple(val[0])] = max(groupby_dict[tuple(val[0])], val[1])
result[0] = [(sorted(list(x), reverse=True),min(y, 6)) for (x,y) in groupby_dict.items()]
groupby_dict = defaultdict(int)
for val in result[1]:
groupby_dict[tuple(val[0])] = max(groupby_dict[tuple(val[0])], val[1])
result[1] = [(sorted(list(x), reverse=True),min(y, 4)) for (x,y) in groupby_dict.items()]
groupby_dict = defaultdict(int)
for val in result[2]:
groupby_dict[tuple(val[0])] = max(groupby_dict[tuple(val[0])], val[1])
result[2] = [(sorted(list(x), reverse=True),min(y, 12)) for (x,y) in groupby_dict.items()]
# Return results
my_hands_cat3 = result
return my_hands_cat3
def opponents_hands_cat1_level_x_and_above(x):
result = [[], [], []]
if x >= 1:
result[1] += opponents_hands_s_straight
result[2] += opponents_hands_o_straight
if x >= 2:
result[0] += opponents_hands_pp_sets
if x >= 3:
result[1] += opponents_hands_s_trips
result[2] += opponents_hands_o_trips
if x >= 4:
result[1] += opponents_hands_s_two_pair
result[2] += opponents_hands_o_two_pair
if x >= 5:
result[0] += opponents_hands_pp_overpair_9plus
if x >= 6:
result[0] += opponents_hands_pp_any_overpair
if x >= 7:
result[1] += opponents_hands_s_tp_k_kicker
result[2] += opponents_hands_o_tp_k_kicker
if x >= 8:
result[1] += opponents_hands_s_tp_j_kicker
result[2] += opponents_hands_o_tp_j_kicker
if x >= 9:
result[1] += opponents_hands_s_tp_any_kicker
result[2] += opponents_hands_o_tp_any_kicker
result[0].sort(reverse=True)
result[1].sort(reverse=True)
result[2].sort(reverse=True)
result[0] = list(k for k,_ in itertools.groupby(result[0]))
result[1] = list(k for k,_ in itertools.groupby(result[1]))
result[2] = list(k for k,_ in itertools.groupby(result[2]))
# Return result
    opponents_hands_cat1 = result
return opponents_hands_cat1
# Performance improvement by filtering out cat1 from hands already, but would also need a copy of hands
def opponents_hands_cat2_level_x_and_above(x, opponents_hands_cat1):
result = [[], [], []]
if x >= 1:
# Cat 1
result[1] += opponents_hands_s_straight
result[2] += opponents_hands_o_straight
result[0] += opponents_hands_pp_sets
result[1] += opponents_hands_s_trips
result[2] += opponents_hands_o_trips
result[1] += opponents_hands_s_two_pair
result[2] += opponents_hands_o_two_pair
result[0] += opponents_hands_pp_overpair_9plus
result[0] += opponents_hands_pp_any_overpair
result[1] += opponents_hands_s_tp_k_kicker
result[2] += opponents_hands_o_tp_k_kicker
result[1] += opponents_hands_s_tp_j_kicker
result[2] += opponents_hands_o_tp_j_kicker
result[1] += opponents_hands_s_tp_any_kicker
result[2] += opponents_hands_o_tp_any_kicker
# Cat 2
result[1] += opponents_hands_s_tp_bad_kicker
result[2] += opponents_hands_o_tp_bad_kicker
if x >= 2:
result[1] += opponents_hands_s_middle_pair
result[2] += opponents_hands_o_middle_pair
if x >= 3:
result[0] += opponents_hands_pp_below_top_pair
if x >= 4:
        result[1] += opponents_hands_s_bottom_pair
result[2] += opponents_hands_o_bottom_pair
if x >= 5:
result[1] += opponents_hands_s_aj_high
result[2] += opponents_hands_o_aj_high
if x >= 6:
result[0] += opponents_hands_pp_below_middle_pair
if x >= 7:
result[1] += opponents_hands_s_kq_high
result[2] += opponents_hands_o_kq_high
if x >= 8:
result[0] += opponents_hands_pp_below_bottom_pair
if x >= 9:
result[1] += opponents_hands_s_kj_high
result[2] += opponents_hands_o_kj_high
if x >= 10:
result[1] += opponents_hands_s_k8_high
result[2] += opponents_hands_o_k8_high
result[0].sort(reverse=True)
result[1].sort(reverse=True)
result[2].sort(reverse=True)
result[0] = list(k for k,_ in itertools.groupby(result[0]))
result[1] = list(k for k,_ in itertools.groupby(result[1]))
result[2] = list(k for k,_ in itertools.groupby(result[2]))
# Interim
cat1_unique_pp = [x for (x,y) in opponents_hands_cat1[0]]
cat1_unique_s = [x for (x,y) in opponents_hands_cat1[1]]
cat1_unique_o = [x for (x,y) in opponents_hands_cat1[2]]
# Remove cat1 from these cat2s
result[0] = [(x,y) for (x,y) in result[0] if x not in cat1_unique_pp]
result[1] = [(x,y) for (x,y) in result[1] if x not in cat1_unique_s]
result[2] = [(x,y) for (x,y) in result[2] if x not in cat1_unique_o]
# Return result
opponents_hands_cat2 = result
return opponents_hands_cat2
# Performance improvement by filtering out cat1+cat2 from hands already, but would also need a copy of hands
def opponents_hands_cat3_level_x_and_above(x, opponents_hands_cat1, opponents_hands_cat2, skip_4_to_10_and_13_to_15=True):
bdfd_result = [[], [], []]
other_result = [[], [], []]
result = [[], [], []]
if x >= 1:
other_result[0] += opponents_hands_pp_fd
other_result[1] += opponents_hands_s_fd
other_result[2] += opponents_hands_o_fd
if x >= 2:
other_result[0] += opponents_hands_pp_oesd
other_result[1] += opponents_hands_s_oesd
other_result[2] += opponents_hands_o_oesd
if x >= 3:
other_result[0] += opponents_hands_pp_gutshot
other_result[1] += opponents_hands_s_gutshot
other_result[2] += opponents_hands_o_gutshot
if x >= 4 and not skip_4_to_10_and_13_to_15:
other_result[1] += opponents_hands_s_3_to_straight_not_all_from_low_end
other_result[2] += opponents_hands_o_3_to_straight_not_all_from_low_end
if x >= 5 and not skip_4_to_10_and_13_to_15:
bdfd_result[1] += opponents_hands_s_3_to_straight_low_end_bdfd
bdfd_result[2] += opponents_hands_o_3_to_straight_low_end_bdfd
if x >= 6 and not skip_4_to_10_and_13_to_15:
other_result[1] += opponents_hands_s_3_to_straight_low_end
other_result[2] += opponents_hands_o_3_to_straight_low_end
if x >= 7 and not skip_4_to_10_and_13_to_15:
bdfd_result[1] += opponents_hands_s_5_unique_cards_within_7_values_bdfd
bdfd_result[2] += opponents_hands_o_5_unique_cards_within_7_values_bdfd
if x >= 8 and not skip_4_to_10_and_13_to_15:
bdfd_result[0] += opponents_hands_pp_q_minus_bdfd
bdfd_result[1] += opponents_hands_s_q_minus_bdfd
bdfd_result[2] += opponents_hands_o_q_minus_bdfd
if x >= 9:
other_result[1] += opponents_hands_s_lowest_card_is_one_of_3_cards_within_4_values_and_two_overcards
other_result[2] += opponents_hands_o_lowest_card_is_one_of_3_cards_within_4_values_and_two_overcards
if x >= 10 and not skip_4_to_10_and_13_to_15:
bdfd_result[0] += opponents_hands_pp_a_minus_bdfd
bdfd_result[1] += opponents_hands_s_a_minus_bdfd
bdfd_result[2] += opponents_hands_o_a_minus_bdfd
# Remove duplicates within bdfd hands
bdfd_result[0].sort(reverse=True)
bdfd_result[1].sort(reverse=True)
bdfd_result[2].sort(reverse=True)
bdfd_result[0] = list(k for k,_ in itertools.groupby(bdfd_result[0]))
bdfd_result[1] = list(k for k,_ in itertools.groupby(bdfd_result[1]))
bdfd_result[2] = list(k for k,_ in itertools.groupby(bdfd_result[2]))
# Add all together
result[0] = bdfd_result[0] + other_result[0]
result[1] = bdfd_result[1] + other_result[1]
result[2] = bdfd_result[2] + other_result[2]
# Reduce with max combos number used and sort
groupby_dict = defaultdict(int)
for val in result[0]:
groupby_dict[tuple(val[0])] += val[1]
result[0] = [(sorted(list(x), reverse=True),min(y, 6)) for (x,y) in groupby_dict.items()]
groupby_dict = defaultdict(int)
for val in result[1]:
groupby_dict[tuple(val[0])] += val[1]
result[1] = [(sorted(list(x), reverse=True),min(y, 4)) for (x,y) in groupby_dict.items()]
groupby_dict = defaultdict(int)
for val in result[2]:
groupby_dict[tuple(val[0])] += val[1]
result[2] = [(sorted(list(x), reverse=True),min(y, 12)) for (x,y) in groupby_dict.items()]
# Interim
cat1_unique_pp = [x for (x,y) in opponents_hands_cat1[0]]
cat1_unique_s = [x for (x,y) in opponents_hands_cat1[1]]
cat1_unique_o = [x for (x,y) in opponents_hands_cat1[2]]
cat2_unique_pp = [x for (x,y) in opponents_hands_cat2[0]]
cat2_unique_s = [x for (x,y) in opponents_hands_cat2[1]]
cat2_unique_o = [x for (x,y) in opponents_hands_cat2[2]]
# Remove cat1 and cat2
result[0] = [(x,y) for (x,y) in result[0] if x not in cat1_unique_pp and x not in cat2_unique_pp]
result[1] = [(x,y) for (x,y) in result[1] if x not in cat1_unique_s and x not in cat2_unique_s]
result[2] = [(x,y) for (x,y) in result[2] if x not in cat1_unique_o and x not in cat2_unique_o]
# Add cat2 hands
if x >= 11 and not skip_4_to_10_and_13_to_15:
result[1] += [(x,y) for (x,y) in opponents_hands_s_k8_high if x not in cat1_unique_s and x not in cat2_unique_s]
result[2] += [(x,y) for (x,y) in opponents_hands_o_k8_high if x not in cat1_unique_o and x not in cat2_unique_o]
if x >= 12 and not skip_4_to_10_and_13_to_15:
result[1] += [(x,y) for (x,y) in opponents_hands_s_kj_high if x not in cat1_unique_s and x not in cat2_unique_s]
result[2] += [(x,y) for (x,y) in opponents_hands_o_kj_high if x not in cat1_unique_o and x not in cat2_unique_o]
if x >= 13 and not skip_4_to_10_and_13_to_15:
result[0] += [(x,y) for (x,y) in opponents_hands_pp_below_bottom_pair if x not in cat1_unique_pp and x not in cat2_unique_pp]
if x >= 14 and not skip_4_to_10_and_13_to_15:
result[1] += [(x,y) for (x,y) in opponents_hands_s_kq_high if x not in cat1_unique_s and x not in cat2_unique_s]
result[2] += [(x,y) for (x,y) in opponents_hands_o_kq_high if x not in cat1_unique_o and x not in cat2_unique_o]
if x >= 15 and not skip_4_to_10_and_13_to_15:
result[0] += [(x,y) for (x,y) in opponents_hands_pp_below_middle_pair if x not in cat1_unique_pp and x not in cat2_unique_pp]
# Add cat4 hands
if x >= 16:
remaining_cat2_type_hands_pp = [x for (x,y) in opponents_hands_pp_below_bottom_pair] + [x for (x,y) in opponents_hands_pp_below_middle_pair] + [x for (x,y) in opponents_hands_pp_below_top_pair]
remaining_cat2_type_hands_s = [x for (x,y) in opponents_hands_s_k8_high] + [x for (x,y) in opponents_hands_s_kj_high] + [x for (x,y) in opponents_hands_s_kq_high] + [x for (x,y) in opponents_hands_s_aj_high] + [x for (x,y) in opponents_hands_s_bottom_pair] + [x for (x,y) in opponents_hands_s_middle_pair] + [x for (x,y) in opponents_hands_s_tp_bad_kicker]
remaining_cat2_type_hands_o = [x for (x,y) in opponents_hands_o_k8_high] + [x for (x,y) in opponents_hands_o_kj_high] + [x for (x,y) in opponents_hands_o_kq_high] + [x for (x,y) in opponents_hands_o_aj_high] + [x for (x,y) in opponents_hands_o_bottom_pair] + [x for (x,y) in opponents_hands_o_middle_pair] + [x for (x,y) in opponents_hands_o_tp_bad_kicker]
result[0] += [(x, 6) for x in opponents_hands[0] if x not in cat1_unique_pp and x not in cat2_unique_pp and x not in remaining_cat2_type_hands_pp]
result[1] += [(x, 4) for x in opponents_hands[1] if x not in cat1_unique_s and x not in cat2_unique_s and x not in remaining_cat2_type_hands_s]
result[2] += [(x, 12) for x in opponents_hands[2] if x not in cat1_unique_o and x not in cat2_unique_o and x not in remaining_cat2_type_hands_o]
# Add cat2 hands with pairs
if x >= 17:
result[1] += [(x,y) for (x,y) in opponents_hands_s_aj_high if x not in cat1_unique_s and x not in cat2_unique_s]
result[2] += [(x,y) for (x,y) in opponents_hands_o_aj_high if x not in cat1_unique_o and x not in cat2_unique_o]
if x >= 18:
result[1] += [(x,y) for (x,y) in opponents_hands_s_bottom_pair if x not in cat1_unique_s and x not in cat2_unique_s]
result[2] += [(x,y) for (x,y) in opponents_hands_o_bottom_pair if x not in cat1_unique_o and x not in cat2_unique_o]
if x >= 19:
result[0] += [(x,y) for (x,y) in opponents_hands_pp_below_top_pair if x not in cat1_unique_pp and x not in cat2_unique_pp]
if x >= 20:
result[1] += [(x,y) for (x,y) in opponents_hands_s_middle_pair if x not in cat1_unique_s and x not in cat2_unique_s]
result[2] += [(x,y) for (x,y) in opponents_hands_o_middle_pair if x not in cat1_unique_o and x not in cat2_unique_o]
if x >= 21:
result[1] += [(x,y) for (x,y) in opponents_hands_s_tp_bad_kicker if x not in cat1_unique_s and x not in cat2_unique_s]
result[2] += [(x,y) for (x,y) in opponents_hands_o_tp_bad_kicker if x not in cat1_unique_o and x not in cat2_unique_o]
# Reduce with max combos number used and sort
groupby_dict = defaultdict(int)
for val in result[0]:
groupby_dict[tuple(val[0])] = max(groupby_dict[tuple(val[0])], val[1])
result[0] = [(sorted(list(x), reverse=True),min(y, 6)) for (x,y) in groupby_dict.items()]
groupby_dict = defaultdict(int)
for val in result[1]:
groupby_dict[tuple(val[0])] = max(groupby_dict[tuple(val[0])], val[1])
result[1] = [(sorted(list(x), reverse=True),min(y, 4)) for (x,y) in groupby_dict.items()]
groupby_dict = defaultdict(int)
for val in result[2]:
groupby_dict[tuple(val[0])] = max(groupby_dict[tuple(val[0])], val[1])
result[2] = [(sorted(list(x), reverse=True),min(y, 12)) for (x,y) in groupby_dict.items()]
# Return results
opponents_hands_cat3 = result
return opponents_hands_cat3
opponent_unraised_strategy = None # To be defined later; changes by flop
opponent_raised_strategy = {
'cat1': {1: 6, 2: 6, 3: 6, 4: 6, 5: 6, 6: 6, 7: 6},
'cat2': {1: 3, 2: 5, 3: 6, 4: 6, 5: 7, 6: 7, 7: 7},
'cat3': {1: 2, 2: 2, 3: 2, 4: 2, 5: 2, 6: 5, 7: 5},
}
opponent_reraised_strategy = {
'cat1': {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0},
'cat2': {1: 3, 2: 3, 3: 3, 4: 3, 5: 3, 6: 3, 7: 3},
'cat3': {1: 2, 2: 2, 3: 2, 4: 2, 5: 2, 6: 3, 7: 3},
}
opponent_strategy = None
def get_flop_type_number():
    # Bucket the (global) flop into 7 texture types:
    # 7 = paired board; 1 = two cards K or higher; 2 = K+ high card with a 9+ second card;
    # 3 = K+ high card; 4 = T+ high card with a 9+ second card; 5 = T+ high card; 6 = everything else.
    return \
        7 if flop[0] == flop[1] or flop[1] == flop[2] else \
        1 if flop[0] >= 13 and flop[1] >= 13 else \
        2 if flop[0] >= 13 and flop[1] >= 9 else \
        3 if flop[0] >= 13 else \
        4 if flop[0] >= 10 and flop[1] >= 9 else \
        5 if flop[0] >= 10 else \
        6
def get_opponent_situation(bets):
return \
"oop_open" if bets == 0 and my_position_ip == True else \
"oop_vs_cb" if bets == 1 and my_position_ip == True else \
"oop_vs_br" if bets >= 2 and my_position_ip == True else \
"ip_vs_c" if bets == 0 and my_position_ip == False else \
"ip_vs_b" if bets == 1 and my_position_ip == False else \
"ip_vs_cbr"
def combine_hands(hands1, hands2):
hands = [[], [], []]
for i in range(3):
hands[i] = hands1[i] + hands2[i]
return hands
def get_cat4_hands(all_hands_before_action_w_combos, cat1_hands_for_action, cat2_hands_for_action, cat3_hands_for_action):
hands = [[], [], []]
temp_cat3_hands = deepcopy(cat3_hands_for_action)
# Flip sign (for subtraction)
for i in range(3):
temp_cat3_hands[i] = [(x, -1*y) for (x, y) in temp_cat3_hands[i]]
# Combine (for subtraction)
result = combine_hands(all_hands_before_action_w_combos, temp_cat3_hands)
# Subtraction
for i in range(3):
groupby_dict = defaultdict(int)
for val in result[i]:
groupby_dict[tuple(val[0])] += val[1]
result[i] = [(sorted(list(x), reverse=True), max(0, min(y, 6 if i == 0 else 4 if i == 1 else 12))) for (x,y) in groupby_dict.items()]
result[i] = [(x,y) for (x,y) in result[i] if y != 0 and x not in [x for (x,y) in cat1_hands_for_action[i]] and x not in [x for (x,y) in cat2_hands_for_action[i]]]
return result
#####
all_flops = []
for rank1 in range(14,1,-1):
for rank2 in range(14,1,-1):
for rank3 in range(14,1,-1):
if rank1 >= rank2 and rank2 >= rank3:
all_flops.append([rank1, rank2, rank3])
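# Added sanity check: 13 ranks give C(15, 3) = 455 non-increasing rank triples.
assert len(all_flops) == 455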
# # Start from a spot
# flops = flops[406:]
range_names = ["lj_rfi",
"hj_rfi",
"co_rfi",
"bn_rfi",
"sb_rfi",
"vsRFI_bb_vs_lj__raise",
"vsRFI_bb_vs_hj__raise",
"vsRFI_bb_vs_co__raise",
"vsRFI_bb_vs_bn__raise",
"vsRFI_bb_vs_sb__raise",
"vsRFI_bb_vs_lj__call",
"vsRFI_bb_vs_hj__call",
"vsRFI_bb_vs_co__call",
"vsRFI_bb_vs_bn__call",
"vsRFI_bb_vs_sb__call",
"vsRFI_sb_vs_bn__raise",
"vsRFI_other__raise",
"vsRFI_other__call",
"RFIvs3B_other__raise",
"RFIvs3B_other__call",
"cold4B"]
my_ranges = [lj_rfi,
hj_rfi,
co_rfi,
bn_rfi,
sb_rfi,
vsRFI_bb_vs_lj__raise,
vsRFI_bb_vs_hj__raise,
vsRFI_bb_vs_co__raise,
vsRFI_bb_vs_bn__raise,
vsRFI_bb_vs_sb__raise,
vsRFI_bb_vs_lj__call,
vsRFI_bb_vs_hj__call,
vsRFI_bb_vs_co__call,
vsRFI_bb_vs_bn__call,
vsRFI_bb_vs_sb__call,
vsRFI_sb_vs_bn__raise,
vsRFI_other__raise,
vsRFI_other__call,
RFIvs3B_other__raise,
RFIvs3B_other__call,
cold4B]
range_names_print = ['LJ RFI', 'HJ RFI',
'CO RFI', 'BN RFI',
'SB RFI',
'Villain LJ RFI and You BB 3Bet',
'Villain HJ RFI and You BB 3Bet',
'Villain CO RFI and You BB 3Bet',
'Villain BN RFI and You BB 3Bet',
'Villain SB RFI and You BB 3Bet',
'Villain LJ RFI and You BB Call',
'Villain HJ RFI and You BB Call',
'CO RFI and You BB Call',
'Villain BN RFI and You BB Call',
'Villain SB RFI and You BB Call',
'Villain BN RFI and You SB 3Bet',
'Villain RFI and You 3Bet (Not BB)',
'Villain RFI and You Call (Not BB)',
'You RFI, get 3Bet and you 4Bet',
'You RFI, get 3Bet and you Call',
'You Cold 4Bet',
]
# Estimation:
my_position_ips = [False, False,
True, True,
False,
False,
False,
False,
False,
False,
False,
False,
False,
False,
False,
False,
True,
True,
True,
True,
False
]
my_pfrs = [True, True,
True, True,
True,
True,
True,
True,
True,
True,
False,
False,
False,
False,
False,
True,
True,
False,
True,
False,
True,
]
# Might add this later:
# opponents_ranges = [vsRFI_bb_vs_lj__raise, vsRFI_bb_vs_lj__call, vsRFI_hj_vs_lj__raise, vsRFI_co_vs_lj__raise, vsRFI_bn_vs_lj__raise, vsRFI_sb_vs_lj__raise, vsRFI_hj_vs_lj__call, vsRFI_co_vs_lj__call, vsRFI_bn_vs_lj__call, vsRFI_sb_vs_lj__call, vsRFI_bb_vs_hj__raise, vsRFI_bb_vs_hj__call, vsRFI_co_vs_hj__raise, vsRFI_bn_vs_hj__raise, vsRFI_sb_vs_hj__raise, vsRFI_co_vs_hj__call, vsRFI_bn_vs_hj__call, vsRFI_sb_vs_hj__call, vsRFI_bb_vs_co__raise, vsRFI_bb_vs_co__call, vsRFI_bn_vs_co__raise, vsRFI_sb_vs_co__raise, vsRFI_bn_vs_co__call, vsRFI_sb_vs_co__call, vsRFI_bb_vs_bn__raise, vsRFI_bb_vs_bn__call, vsRFI_sb_vs_bn__raise, vsRFI_bb_vs_sb__raise, vsRFI_bb_vs_sb__call, RFIvs3B_lj_vs_blinds_call, lj_rfi, RFIvs3B_lj_vs_hjco_call, RFIvs3B_lj_vs_hjco_call, RFIvs3B_lj_vs_bn_call, RFIvs3B_lj_vs_blinds_call, lj_rfi, lj_rfi, lj_rfi, lj_rfi, RFIvs3B_hj_vs_ahead_call, hj_rfi, RFIvs3B_hj_vs_ahead_call, RFIvs3B_hj_vs_ahead_call, RFIvs3B_hj_vs_ahead_call, hj_rfi, hj_rfi, hj_rfi, RFIvs3B_co_vs_blinds_call, co_rfi, RFIvs3B_co_vs_bn_call, RFIvs3B_co_vs_blinds_call, co_rfi, co_rfi, RFIvs3B_bnsb_vs_ahead_call, bn_rfi, RFIvs3B_bnsb_vs_ahead_call, RFIvs3B_bnsb_vs_ahead_call, sb_rfi]
import re
import pyperclip as clip
import pyautogui as g
from time import sleep
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error
from graphviz import Source
from sklearn.tree import export_graphviz
from sklearn import tree
try:
from PIL import Image
except ImportError:
import Image
import pytesseract
g.FAILSAFE = True
sleep(2)
g.PAUSE = 0.0
g.size()
g.position()
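# pyautogui helpers for driving the FloatTheTurn range-analyzer page; screen coordinates below are hard-coded for the author's display.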
def c():
g.mouseDown()
g.mouseUp()
def c2():
c()
c()
def rc():
g.click(button='right')
def select_all_and_copy():
rc()
g.keyDown('a')
g.keyUp('a')
g.keyDown('ctrl')
g.keyDown('c')
g.keyUp('c')
g.keyUp('ctrl')
# STEP 1: Open site and Set 4 colors manually
# import sys; import webbrowser; from pandas import read_csv; from time import sleep; webbrowser.get('open -a /Applications/Google\ Chrome.app %s').open("https://floattheturn.com/wp/tools/range-analyzer/")
def change_color(category=1):
y = 605 + 23*category
g.moveTo(1156, y)
c()
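# click_hand maps a hand to its cell in the 13x13 range matrix (about 29.9 px per cell, grid origin near x=765, y=485).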
def click_hand(hand1, hand2, type_):
if type_ == "suited" or type_ == "paired":
x = 765 + 29.9*(14-hand2)
y = 485 + 29.9*(14-hand1)
else:
x = 765 + 29.9*(14-hand1)
y = 485 + 29.9*(14-hand2)
g.moveTo(x, y)
c()
def clear_board():
change_color(7)
for i in range(0, 13):
for j in range(0, 13):
x = 765 + 29.9*i
y = 486 + 29.9*j
g.moveTo(x, y)
c()
def fill_in_dead_cards(flop):
# Only changing suit for paired cards, don't care that much
string_chars = []
suit_to_use_for_paired = ['s','h','c','d']
index = 0
for card in flop:
string_chars.append(m2[card])
if is_paired and paired_value == card:
string_chars.append(suit_to_use_for_paired[index])
string_chars.append(",")
index+=1
else:
string_chars.append("s,")
answer = "".join(string_chars)[:-2] + "c"
clip.copy(answer)
g.moveTo(1178, 807)
c()
g.hotkey('command', 'a')
g.hotkey('command', 'v')
# Change color names
def change_color_names():
g.moveTo(1246, 628)
c()
sleep(0.1)
g.hotkey('command', 'a')
clip.copy("Category 1")
g.hotkey('command', 'v')
g.moveTo(1246, 650)
c()
sleep(0.1)
g.hotkey('command', 'a')
clip.copy("Category 2")
g.hotkey('command', 'v')
g.moveTo(1246, 672)
c()
sleep(0.1)
g.hotkey('command', 'a')
clip.copy("Category 3")
g.hotkey('command', 'v')
g.moveTo(1246, 694)
c()
sleep(0.1)
g.hotkey('command', 'a')
clip.copy("Category 3 FD")
g.hotkey('command', 'v')
g.moveTo(1246, 716)
c()
sleep(0.1)
g.hotkey('command', 'a')
clip.copy("Category 3 BDFD")
g.hotkey('command', 'v')
g.moveTo(1246, 738)
c()
sleep(0.1)
g.hotkey('command', 'a')
clip.copy("Category 4")
g.hotkey('command', 'v')
def click_screen():
g.moveTo(1120, 716)
c()
sleep(0.1)
# Create 4 colors
# Name the 4 colors
# click on all the hands for your entire range
# click on cat1 hands
# click on cat2 hands
# click on cat3 hands
import random
rows = []
i = 0
j = 1
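# Monte Carlo data generation: sample random flop scenarios forever, bucket hero's range into cat1 (strong made hands),
# cat2 (marginal made hands), cat3 (draws/bluff candidates) and cat4 (the rest), then search the bluff cutoff (param2)
# whose cat3 combo count best matches 2x, 1x and 0.5x the cat1 combos; a CSV chunk is written every 10000 rows.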
while True:
i += 1
row = {}
ranks = list(range(2,15))
flop = list([x for x in sorted([random.choice(ranks), random.choice(ranks), random.choice(ranks)], reverse=True)])
turn = random.choice(ranks)
river = random.choice(ranks)
turn_suit = random.choice(['suited', 'offsuit', 'offsuit', 'offsuit'])
river_suit = random.choice(['suited', 'offsuit', 'offsuit', 'offsuit'])
flop_action = random.choice(["You bet and get raised", "You raise", "You bet", "You call", "Both check"])
turn_action = random.choice(["You bet and get raised", "You raise", "You bet", "You call", "Both check"])
river_action = random.choice(["You bet and get raised", "You raise", "You bet", "You call", "Both check"])
tones = random.choice([1] + [2]*13 + [3]*6)
# indices_to_randomly_choose_from = [0,1,2,3,4,5]*5 + [6,7,8,9,10,11,12,13,14,15,16]*3 + list(range(len(range_names)))
indices_to_randomly_choose_from = [0,1,2,3,4,10,11,12,13,14,17,19]
hand_index = random.choice(indices_to_randomly_choose_from)
# hand_index = 4
# Initialize variables using index
my_range = my_ranges[hand_index]
my_position_ip = my_position_ips[hand_index]
my_pfr = my_pfrs[hand_index]
# Helpers
board_type = "rainbow" if tones == 3 else "two-tone" if tones == 2 else "monotone" if tones == 1 else "Invalid"
is_paired = 1 if flop[0] == flop[1] or flop[1] == flop[2] else 0
paired_value = 0 if not is_paired else flop[0] if flop[0] == flop[1] else flop[1]
my_hands = range_to_hands(my_range)
is_invalid = False
if board_type == "monotone" and is_paired:
# print("Invalid")
is_invalid = True
if board_type == "two-tone" and flop[0] == flop[2]:
# print("Invalid")
is_invalid = True
# Print
# print(range_names_print[hand_index])
# print("Flop:", "".join([m2[x] for x in flop]))
# print(board_type)
# print("Flop Action:", flop_action)
# print("Turn:", m2[turn], turn_suit)
# print("Turn Action:", turn_action)
# print("River:", m2[river], river_suit)
# print("River Action:", river_action)
# My hands with combos
my_hands_with_combos = [[], [], []]
my_hands_with_combos[0] = [(x, 1) if x[0] == paired_value else (x, 6) if x[0] not in flop else (x, 3) for x in my_hands[0]]
my_hands_with_combos[1] = [(x, 2) if x[0] == paired_value or x[1] == paired_value else (x, 4) if x[0] not in flop and x[1] not in flop else (x, 3) if x[0] in flop and x[1] in flop else (x,3) for x in my_hands[1]]
my_hands_with_combos[2] = [(x, 6) if x[0] == paired_value or x[1] == paired_value else (x, 12) if x[0] not in flop and x[1] not in flop else (x, 6) if x[0] in flop and x[1] in flop else (x,9) for x in my_hands[2]]
# Cat1
#### Assuming no flushes (monotone boards) for simplicity
my_hands_s_straight = [] if is_paired else [(x, 4) for x in my_hands[1] if len(set(x + flop)) == 5 and (max(x + flop) - min(x + flop) == 4 or max([1 if y == 14 else y for y in (x + flop)]) - min([1 if y == 14 else y for y in (x + flop)]) == 4)]
my_hands_o_straight = [] if is_paired else [(x, 12) for x in my_hands[2] if len(set(x + flop)) == 5 and (max(x + flop) - min(x + flop) == 4 or max([1 if y == 14 else y for y in (x + flop)]) - min([1 if y == 14 else y for y in (x + flop)]) == 4)]
my_hands_pp_sets = [(x, 3) for x in my_hands[0] if x[0] in flop]
my_hands_s_trips = [] if not is_paired else [(x, 2) for x in my_hands[1] if x[0] == paired_value or x[1] == paired_value]
my_hands_o_trips = [] if not is_paired else [(x, 6) for x in my_hands[2] if x[0] == paired_value or x[1] == paired_value]
# 2 combos most times, not 3; 7 more often than 6
my_hands_s_two_pair = [] if is_paired else [(x, 2) for x in my_hands[1] if x[0] in flop and x[1] in flop]
my_hands_o_two_pair = [] if is_paired else [(x, 7) for x in my_hands[2] if x[0] in flop and x[1] in flop]
my_hands_pp_overpair_9plus = [(x, 6) for x in my_hands[0] if x[0] > flop[0] and x[0] >= 9]
my_hands_pp_any_overpair = [(x, 6) for x in my_hands[0] if x[0] > flop[0]]
my_hands_s_tp_k_kicker = [(x, 3) for x in my_hands[1] if (x[0] != paired_value and x[0] == flop[0] and x[1] >= 13 and x[1] not in flop) or (x[1] != paired_value and x[1] == flop[0] and x[0] >= 13 and x[0] not in flop)]
my_hands_o_tp_k_kicker = [(x, 9) for x in my_hands[2] if (x[0] != paired_value and x[0] == flop[0] and x[1] >= 13 and x[1] not in flop) or (x[1] != paired_value and x[1] == flop[0] and x[0] >= 13 and x[0] not in flop)]
my_hands_s_tp_j_kicker = [(x, 3) for x in my_hands[1] if (x[0] != paired_value and x[0] == flop[0] and x[1] >= 11 and x[1] <= 12 and x[1] not in flop) or (x[1] != paired_value and x[1] == flop[0] and x[0] >= 11 and x[0] <= 12 and x[0] not in flop)]
my_hands_o_tp_j_kicker = [(x, 9) for x in my_hands[2] if (x[0] != paired_value and x[0] == flop[0] and x[1] >= 11 and x[1] <= 12 and x[1] not in flop) or (x[1] != paired_value and x[1] == flop[0] and x[0] >= 11 and x[0] <= 12 and x[0] not in flop)]
my_hands_s_tp_any_kicker = [(x, 3) for x in my_hands[1] if (x[0] != paired_value and x[0] == flop[0] and x[1] <= 10 and x[1] not in flop) or (x[1] != paired_value and x[1] == flop[0] and x[0] <= 10 and x[0] not in flop)]
my_hands_o_tp_any_kicker = [(x, 9) for x in my_hands[2] if (x[0] != paired_value and x[0] == flop[0] and x[1] <= 10 and x[1] not in flop) or (x[1] != paired_value and x[1] == flop[0] and x[0] <= 10 and x[0] not in flop)]
# Cat2 (flushdraws with high card hand might actually be part of cat3, but saying the combos are part of cat2)
my_hands_s_tp_bad_kicker = [(x, 3) for x in my_hands[1] if (x[0] != paired_value and x[0] == flop[0] and x[1] <= 10 and x[1] not in flop) or (x[1] != paired_value and x[1] == flop[0] and x[0] <= 10 and x[0] not in flop)]
my_hands_o_tp_bad_kicker = [(x, 9) for x in my_hands[2] if (x[0] != paired_value and x[0] == flop[0] and x[1] <= 10 and x[1] not in flop) or (x[1] != paired_value and x[1] == flop[0] and x[0] <= 10 and x[0] not in flop)]
my_hands_s_middle_pair = [(x, 3) for x in my_hands[1] if (x[0] == flop[1] and x[1] not in flop) or (x[1] == flop[1] and x[0] not in flop)]
my_hands_o_middle_pair = [(x, 9) for x in my_hands[2] if (x[0] == flop[1] and x[1] not in flop) or (x[1] == flop[1] and x[0] not in flop)]
my_hands_s_bottom_pair = [(x, 3) for x in my_hands[1] if (x[0] == flop[2] and x[1] not in flop) or (x[1] == flop[2] and x[0] not in flop)]
my_hands_o_bottom_pair = [(x, 9) for x in my_hands[2] if (x[0] == flop[2] and x[1] not in flop) or (x[1] == flop[2] and x[0] not in flop)]
my_hands_pp_below_top_pair = [(x, 6) for x in my_hands[0] if x[0] < flop[0] and x[0] > flop[1]]
my_hands_s_aj_high = [(x, 4) for x in my_hands[1] if (x[0] not in flop and x[1] not in flop) and (x[0] == 14) and (x[1] > 10)]
my_hands_o_aj_high = [(x, 12) for x in my_hands[2] if (x[0] not in flop and x[1] not in flop) and (x[0] == 14) and (x[1] > 10)]
my_hands_pp_below_middle_pair = [(x, 6) for x in my_hands[0] if x[0] < flop[1] and x[0] > flop[2]]
my_hands_s_kq_high = [(x, 4) for x in my_hands[1] if (x[0] not in flop and x[1] not in flop) and ((x[0] == 13 and x[1] > 11) or (x[0] == 14))]
my_hands_o_kq_high = [(x, 12) for x in my_hands[2] if (x[0] not in flop and x[1] not in flop) and ((x[0] == 13 and x[1] > 11) or (x[0] == 14))]
my_hands_pp_below_bottom_pair = [(x, 6) for x in my_hands[0] if x[0] < flop[2]]
my_hands_s_kj_high = [(x, 4) for x in my_hands[1] if (x[0] not in flop and x[1] not in flop) and (x[0] == 13) and (x[1] == 11)]
my_hands_o_kj_high = [(x, 12) for x in my_hands[2] if (x[0] not in flop and x[1] not in flop) and (x[0] == 13) and (x[1] == 11)]
my_hands_s_k8_high = [(x, 4) for x in my_hands[1] if (x[0] not in flop and x[1] not in flop) and (x[0] == 13) and (x[1] < 11 and x[1] >= 8)]
my_hands_o_k8_high = [(x, 12) for x in my_hands[2] if (x[0] not in flop and x[1] not in flop) and (x[0] == 13) and (x[1] < 11 and x[1] >= 8)]
# Cat3 paired
#### bdfd/fd/no_fd combos should not be counted twice; combine them smartly and carefully.
#### Assuming no suited flushdraw possible with pair on two-tone for simplicity
#### Might say a straight is a gutshot but that is fine because future logic
## Include the combos for that hand
if board_type == "two-tone":
my_hands_pp_fd = []
my_hands_s_fd = [(x, 1) for x in my_hands[1] if x[0] not in flop and x[1] not in flop]
my_hands_o_fd = []
elif board_type == "rainbow":
my_hands_pp_fd = []
my_hands_s_fd = []
my_hands_o_fd = []
else:
my_hands_pp_fd = [(x, 3) for x in my_hands[0] if x[0] not in flop]
my_hands_s_fd = []
# If paired then ignore the flushdraw (anyway, just monotone; just makes things simpler)
my_hands_o_fd = [(x, 6) for x in my_hands[2] if x[0] not in flop and x[1] not in flop]
#### Also added double gutshots. Doing a bit of overcounting for oesd+pair, which is fine for estimation (should be 9).
my_hands_pp_oesd = [(x, 6) for x in my_hands[0] if (sorted(set(x + flop + [20,21,22]))[3] - min(x + flop) == 3) or (max(x + flop) - sorted(set(x + flop + [-20,-19,-18]))[-4] == 3) or (sorted(x+flop) == [3,4,5,7,14]) or (max(x+flop) - min(x+flop) == 6 and min(x+flop)+2 == sorted(x+flop)[1] and max(x+flop)-2 == sorted(x+flop)[-2])]
my_hands_s_oesd = [(x, 4) for x in my_hands[1] if (sorted(set(x + flop + [20,21,22]))[3] - min(x + flop) == 3) or (max(x + flop) - sorted(set(x + flop + [-20,-19,-18]))[-4] == 3) or (sorted(x+flop) == [3,4,5,7,14]) or (max(x+flop) - min(x+flop) == 6 and min(x+flop)+2 == sorted(x+flop)[1] and max(x+flop)-2 == sorted(x+flop)[-2])]
my_hands_o_oesd = [(x, 12) for x in my_hands[2] if (sorted(set(x + flop + [20,21,22]))[3] - min(x + flop) == 3) or (max(x + flop) - sorted(set(x + flop + [-20,-19,-18]))[-4] == 3) or (sorted(x+flop) == [3,4,5,7,14]) or (max(x+flop) - min(x+flop) == 6 and min(x+flop)+2 == sorted(x+flop)[1] and max(x+flop)-2 == sorted(x+flop)[-2])]
my_hands_pp_gutshot = [(x, 6) for x in my_hands[0] if (sorted(set(x + flop + [20,21,22]))[3] - min(x + flop) == 4) or (max(x + flop) - sorted(set(x + flop + [-20,-19,-18]))[-4] == 4) or (sorted(set([1 if y == 14 else y for y in (x + flop)] + [20,21,22]))[3] <= 5)]
my_hands_s_gutshot = [(x, 4) for x in my_hands[1] if (sorted(set(x + flop + [20,21,22]))[3] - min(x + flop) == 4) or (max(x + flop) - sorted(set(x + flop + [-20,-19,-18]))[-4] == 4) or (sorted(set([1 if y == 14 else y for y in (x + flop)] + [20,21,22]))[3] <= 5)]
my_hands_o_gutshot = [(x, 12) for x in my_hands[2] if (sorted(set(x + flop + [20,21,22]))[3] - min(x + flop) == 4) or (max(x + flop) - sorted(set(x + flop + [-20,-19,-18]))[-4] == 4) or (sorted(set([1 if y == 14 else y for y in (x + flop)] + [20,21,22]))[3] <= 5)]
#### Additional rule: 3 to a straight requires two cards from your hand not just one (that's how I want it to be)
my_hands_s_3_to_straight_not_all_from_low_end = [(x, 4) for x in my_hands[1] if (x[0] != 14) and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]+1 and x[1] == flop[0]-1) or (x[0] == flop[0]+2 and x[1] == flop[0]+1) or (x[0] == flop[1]+1 and x[1] == flop[1]-1) or (x[0] == flop[1]+2 and x[1] == flop[1]+1) or (x[0] == flop[2]+1 and x[1] == flop[2]-1) or (x[0] == flop[2]+2 and x[1] == flop[2]+1))]
my_hands_o_3_to_straight_not_all_from_low_end = [(x, 12) for x in my_hands[2] if (x[0] != 14) and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]+1 and x[1] == flop[0]-1) or (x[0] == flop[0]+2 and x[1] == flop[0]+1) or (x[0] == flop[1]+1 and x[1] == flop[1]-1) or (x[0] == flop[1]+2 and x[1] == flop[1]+1) or (x[0] == flop[2]+1 and x[1] == flop[2]-1) or (x[0] == flop[2]+2 and x[1] == flop[2]+1))]
if board_type == "two-tone":
my_hands_s_3_to_straight_low_end_bdfd = [(x, 1) for x in my_hands[1] if x[0] != 13 and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]-1 and x[1] == flop[0]-2) or (x[0] == flop[1]-1 and x[1] == flop[1]-2) or (x[0] == flop[2]-1 and x[1] == flop[2]-2))]
my_hands_o_3_to_straight_low_end_bdfd = [(x, 6) for x in my_hands[2] if x[0] != 13 and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]-1 and x[1] == flop[0]-2) or (x[0] == flop[1]-1 and x[1] == flop[1]-2) or (x[0] == flop[2]-1 and x[1] == flop[2]-2))]
elif board_type == "rainbow":
my_hands_s_3_to_straight_low_end_bdfd = [(x, 3) for x in my_hands[1] if x[0] != 13 and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]-1 and x[1] == flop[0]-2) or (x[0] == flop[1]-1 and x[1] == flop[1]-2) or (x[0] == flop[2]-1 and x[1] == flop[2]-2))]
my_hands_o_3_to_straight_low_end_bdfd = []
else:
my_hands_s_3_to_straight_low_end_bdfd = []
my_hands_o_3_to_straight_low_end_bdfd = []
if board_type == "two-tone":
my_hands_s_3_to_straight_low_end = [(x, 4) for x in my_hands[1] if x[0] != 13 and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]-1 and x[1] == flop[0]-2) or (x[0] == flop[1]-1 and x[1] == flop[1]-2) or (x[0] == flop[2]-1 and x[1] == flop[2]-2))]
my_hands_o_3_to_straight_low_end = [(x, 12) for x in my_hands[2] if x[0] != 13 and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]-1 and x[1] == flop[0]-2) or (x[0] == flop[1]-1 and x[1] == flop[1]-2) or (x[0] == flop[2]-1 and x[1] == flop[2]-2))]
elif board_type == "rainbow":
my_hands_s_3_to_straight_low_end = [(x, 4) for x in my_hands[1] if x[0] != 13 and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]-1 and x[1] == flop[0]-2) or (x[0] == flop[1]-1 and x[1] == flop[1]-2) or (x[0] == flop[2]-1 and x[1] == flop[2]-2))]
my_hands_o_3_to_straight_low_end = [(x, 12) for x in my_hands[2] if x[0] != 13 and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]-1 and x[1] == flop[0]-2) or (x[0] == flop[1]-1 and x[1] == flop[1]-2) or (x[0] == flop[2]-1 and x[1] == flop[2]-2))]
else:
my_hands_s_3_to_straight_low_end = [(x, 4) for x in my_hands[1] if x[0] != 13 and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]-1 and x[1] == flop[0]-2) or (x[0] == flop[1]-1 and x[1] == flop[1]-2) or (x[0] == flop[2]-1 and x[1] == flop[2]-2))]
my_hands_o_3_to_straight_low_end = [(x, 12) for x in my_hands[2] if x[0] != 13 and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]-1 and x[1] == flop[0]-2) or (x[0] == flop[1]-1 and x[1] == flop[1]-2) or (x[0] == flop[2]-1 and x[1] == flop[2]-2))]
if board_type == "two-tone":
my_hands_s_5_unique_cards_within_7_values_bdfd = [(x, 1) for x in my_hands[1] if (x[0] not in flop and x[1] not in flop) and max(flop+x) - min(flop+x) <= 7]
my_hands_o_5_unique_cards_within_7_values_bdfd = [(x, 6) for x in my_hands[2] if (x[0] not in flop and x[1] not in flop) and max(flop+x) - min(flop+x) <= 7]
elif board_type == "rainbow":
my_hands_s_5_unique_cards_within_7_values_bdfd = [(x, 3) for x in my_hands[1] if (x[0] not in flop and x[1] not in flop) and max(flop+x) - min(flop+x) <= 7]
my_hands_o_5_unique_cards_within_7_values_bdfd = []
else:
my_hands_s_5_unique_cards_within_7_values_bdfd = []
my_hands_o_5_unique_cards_within_7_values_bdfd = []
if board_type == "two-tone":
my_hands_pp_q_minus_bdfd = [(x, 3) for x in my_hands[0] if (x[0] not in flop) and x[0] <= 12]
my_hands_s_q_minus_bdfd = [(x, 1) for x in my_hands[1] if (x[0] not in flop and x[1] not in flop) and x[0] <= 12]
my_hands_o_q_minus_bdfd = [(x, 6) for x in my_hands[2] if (x[0] not in flop and x[1] not in flop) and x[0] <= 12]
elif board_type == "rainbow":
my_hands_pp_q_minus_bdfd = []
my_hands_s_q_minus_bdfd = [(x, 3) for x in my_hands[1] if (x[0] not in flop and x[1] not in flop) and x[0] <= 12]
my_hands_o_q_minus_bdfd = []
else:
my_hands_pp_q_minus_bdfd = []
my_hands_s_q_minus_bdfd = []
my_hands_o_q_minus_bdfd = []
#### 3 cards within 4 values with two overcards
my_hands_s_lowest_card_is_one_of_3_cards_within_4_values_and_two_overcards = [(x, 4) for x in my_hands[1] if x[1] > flop[0] and ((max(flop + [x[1]]) - sorted(set(flop + [x[1]] + [-20,-19,-18]))[-3] <= 3) or (max(flop + x) - sorted(set(flop + x + [-20,-19,-18]))[-3] <= 3))]
my_hands_o_lowest_card_is_one_of_3_cards_within_4_values_and_two_overcards = [(x, 12) for x in my_hands[2] if x[1] > flop[0] and ((max(flop + [x[1]]) - sorted(set(flop + [x[1]] + [-20,-19,-18]))[-3] <= 3) or (max(flop + x) - sorted(set(flop + x + [-20,-19,-18]))[-3] <= 3))]
if board_type == "two-tone":
my_hands_pp_a_minus_bdfd = [(x, 3) for x in my_hands[0] if (x[0] not in flop) and x[0] > 12 and x[0] <= 14]
my_hands_s_a_minus_bdfd = [(x, 1) for x in my_hands[1] if (x[0] not in flop and x[1] not in flop) and x[0] > 12 and x[0] <= 14]
my_hands_o_a_minus_bdfd = [(x, 6) for x in my_hands[2] if (x[0] not in flop and x[1] not in flop) and x[0] > 12 and x[0] <= 14]
elif board_type == "rainbow":
my_hands_pp_a_minus_bdfd = []
my_hands_s_a_minus_bdfd = [(x, 3) for x in my_hands[1] if (x[0] not in flop and x[1] not in flop) and x[0] > 12 and x[0] <= 14]
my_hands_o_a_minus_bdfd = []
else:
my_hands_pp_a_minus_bdfd = []
my_hands_s_a_minus_bdfd = []
my_hands_o_a_minus_bdfd = []
def count_hand_combos(hands):
return sum([y for (x,y) in hands[0]])+sum([y for (x,y) in hands[1]])+sum([y for (x,y) in hands[2]])
best_score1 = 100000
best_score2 = 100000
best_score3 = 100000
best_param1 = None
best_param2 = None
best_param3 = None
best_param4 = None
# Set param1 as a constant
if my_position_ip and (my_pfr or is_paired or max(flop) <= 9):
param1 = 7
else:
param1 = 5
for param1 in [param1]:
for param2 in [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21]:
# LOCK IN AT 8:
my_hands_cat1 = my_hands_cat1_level_x_and_above(8)
# RANGE FROM 5-10:
my_hands_cat2 = my_hands_cat2_level_x_and_above(param1, my_hands_cat1, param2)
# CHOICES 1, 2, 5, 8, 10, 16:
my_hands_cat3 = my_hands_cat3_level_x_and_above(param2, my_hands_cat1, my_hands_cat2)
my_hands_cat4 = get_cat4_hands(my_hands_with_combos, my_hands_cat1, my_hands_cat2, my_hands_cat3)
# USE CONSTRAINT OPTIMIZATION CODE OR BRUTE FORCE
cat1_combos = count_hand_combos(my_hands_cat1)
cat2_combos = count_hand_combos(my_hands_cat2)
cat3_combos = count_hand_combos(my_hands_cat3)
cat4_combos = count_hand_combos(my_hands_cat4)
# score = (3000 if cat3_combos == 0 else abs(cat1_combos*2/cat3_combos)**10.3) + (0 if cat4_combos == 0 and cat2_combos == 0 else 1000 if cat2_combos == 0 else (cat4_combos/cat2_combos)*2.0)
score1 = (3 if cat3_combos == 0 else abs((cat1_combos*2-cat3_combos)/cat3_combos)) # + (3 if cat4_combos == 0 else abs((cat4_combos-cat2_combos)/cat4_combos))*0.5
score2 = (3 if cat3_combos == 0 else abs((cat1_combos-cat3_combos)/cat3_combos))
score3 = (3 if cat3_combos == 0 else abs((cat1_combos/2-cat3_combos)/cat3_combos))
if score1 < best_score1:
best_score1 = score1
best_param1 = param1
best_param2 = param2
if score2 < best_score2:
best_score2 = score2
best_param3 = param2
if score3 < best_score3:
best_score3 = score3
best_param4 = param2
# my_hands_cat1 = my_hands_cat1_level_x_and_above(8)
# # RANGE FROM 5-10:
# my_hands_cat2 = my_hands_cat2_level_x_and_above(best_param1, my_hands_cat1, best_param2)
# # CHOICES 1, 2, 5, 8, 10, 16:
# my_hands_cat3 = my_hands_cat3_level_x_and_above(best_param2, my_hands_cat1, my_hands_cat2)
# my_hands_cat4 = get_cat4_hands(my_hands_with_combos, my_hands_cat1, my_hands_cat2, my_hands_cat3)
# cat1_combos = count_hand_combos(my_hands_cat1)
# cat2_combos = count_hand_combos(my_hands_cat2)
# cat3_combos = count_hand_combos(my_hands_cat3)
# cat4_combos = count_hand_combos(my_hands_cat4)
# print(cat1_combos, cat2_combos, cat3_combos, cat4_combos)
# print("cat3 combos needed:", cat1_combos*2 - cat3_combos)
# print("cat2 combos needed:", cat4_combos - cat2_combos)
# print("Weakest bluff to use:", best_param2)
row['bluff_code'] = best_param2
row['target1'] = best_param3
row['target2'] = best_param4
row['position'] = range_names_print[hand_index]
row['rank1'] = flop[0]
row['rank2'] = flop[1]
row['rank3'] = flop[2]
row['is_paired'] = 1 if is_paired else 0
row['my_position_ip'] = 1 if my_position_ip else 0
row['range'] = flop[0] - flop[2]
row['top_range'] = flop[0] - flop[1]
row['my_pfr'] = 1 if my_pfr else 0
row['tones'] = tones
row['is_capped'] = 1 if "Call" in range_names_print[hand_index] else 0
row['you_3b'] = 1 if "You 3Bet" in range_names_print[hand_index] or "You SB 3Bet" in range_names_print[hand_index] or "You BB 3Bet" in range_names_print[hand_index] else 0
row['you_rfi_and_no_raise'] = 1 if range_names_print[hand_index].endswith("RFI") else 0
row['is_bb'] = 1 if "You BB" in range_names_print[hand_index] else 0
row['wide_rfi'] = 1 if range_names_print[hand_index] in ['CO RFI', 'BN RFI', 'SB RFI'] else 0
rows.append(row)
if i%10000 == 0:
df = pd.DataFrame(rows)
df.to_csv("../data/raw/{}.csv".format(j), index=False)
j = j + 1
rows = []
range_names_print
my_hands_cat3
###Output
_____no_output_____
###Markdown
Opponent Range Guessing (ignore for now; might have bugs)
###Code
# opponents_range = "88-22,AJo-A8o,KJo-K8o,QJo-Q8o,JTo-J8o,T9o-T8o,98o,A8s-A2s,K8s-K4s,Q8s-Q4s,J8s-J4s,T8s-T4s,98s-94s,87s-84s,76s-74s,65s-64s,54s-53s,43s-42s,32s"
# opponents_hands = range_to_hands(opponents_range)
# opponents_position_ip = not my_position_ip
# opponents_pfr = not my_pfr
# # My hands with combos
# opponents_with_combos = [[], [], []]
# opponents_with_combos[0] = [(x, 6) if x[0] not in flop else (x, 3) for x in opponents_hands[0]]
# opponents_with_combos[1] = [(x, 4) if x[0] not in flop and x[1] not in flop else (x, 2) if x[0] in flop and x[1] in flop else (x,3) for x in opponents_hands[1]]
# opponents_with_combos[2] = [(x, 12) if x[0] not in flop and x[1] not in flop else (x, 7) if x[0] in flop and x[1] in flop else (x,9) for x in opponents_hands[2]]
# # Important note: lower ranked rules may include higher ranked hands
# # Also tp_j_kicker includes trips because it's okay that it does because of the theory:
# # actions taken by one hand are taken by all better hands within the cat
# # ! Just be careful that you remove cat1 hands from final cat2, same with cat3 with both cat1 & 2
# # Might want to QA with a hand matrix coloring compare with the existing matrix based on default rules
# # Cat1
# #### Assuming no flushes (monotone boards) for simplicity
# opponents_hands_s_straight = [] if is_paired else [(x, 4) for x in opponents_hands[1] if max(x + flop) - min(x + flop) == 4 or max([1 if y == 14 else y for y in (x + flop)]) - min([1 if y == 14 else y for y in (x + flop)]) == 4]
# opponents_hands_o_straight = [] if is_paired else [(x, 12) for x in opponents_hands[2] if max(x + flop) - min(x + flop) == 4 or max([1 if y == 14 else y for y in (x + flop)]) - min([1 if y == 14 else y for y in (x + flop)]) == 4]
# opponents_hands_pp_sets = [(x, 3) for x in opponents_hands[0] if x[0] in flop]
# opponents_hands_s_trips = [] if not is_paired else [(x, 2) for x in opponents_hands[1] if x[0] == paired_value or x[1] == paired_value]
# opponents_hands_o_trips = [] if not is_paired else [(x, 6) for x in opponents_hands[2] if x[0] == paired_value or x[1] == paired_value]
# # 2 combos most times, not 3; 7 more often than 6
# opponents_hands_s_two_pair = [] if is_paired else [(x, 2) for x in opponents_hands[1] if x[0] in flop and x[1] in flop]
# opponents_hands_o_two_pair = [] if is_paired else [(x, 7) for x in opponents_hands[2] if x[0] in flop and x[1] in flop]
# opponents_hands_pp_overpair_9plus = [(x, 6) for x in opponents_hands[0] if x[0] > flop[0] and x[0] >= 9]
# opponents_hands_pp_any_overpair = [(x, 6) for x in opponents_hands[0] if x[0] > flop[0]]
# opponents_hands_s_tp_k_kicker = [(x, 3) for x in opponents_hands[1] if (x[0] == flop[0] and x[1] >= 13) or (x[1] == flop[0] and x[0] >= 13)]
# opponents_hands_o_tp_k_kicker = [(x, 9) for x in opponents_hands[2] if (x[0] == flop[0] and x[1] >= 13) or (x[1] == flop[0] and x[0] >= 13)]
# opponents_hands_s_tp_j_kicker = [(x, 3) for x in opponents_hands[1] if (x[0] == flop[0] and x[1] >= 11 and x[1] <= 12 and x[1] not in flop) or (x[1] == flop[0] and x[0] >= 11 and x[0] <= 12 and x[0] not in flop)]
# opponents_hands_o_tp_j_kicker = [(x, 9) for x in opponents_hands[2] if (x[0] == flop[0] and x[1] >= 11 and x[1] <= 12 and x[1] not in flop) or (x[1] == flop[0] and x[0] >= 11 and x[0] <= 12 and x[0] not in flop)]
# opponents_hands_s_tp_any_kicker = [(x, 3) for x in opponents_hands[1] if (x[0] == flop[0] and x[1] <= 10 and x[1] not in flop) or (x[1] == flop[0] and x[0] <= 10 and x[0] not in flop)]
# opponents_hands_o_tp_any_kicker = [(x, 9) for x in opponents_hands[2] if (x[0] == flop[0] and x[1] <= 10 and x[1] not in flop) or (x[1] == flop[0] and x[0] <= 10 and x[0] not in flop)]
# # Cat2 (flushdraws with high card hand might actually be part of cat3, but saying the combos are part of cat2)
# opponents_hands_s_tp_bad_kicker = [(x, 3) for x in opponents_hands[1] if (x[0] == flop[0] and x[1] <= 10 and x[1] not in flop) or (x[1] == flop[0] and x[0] <= 10 and x[0] not in flop)]
# opponents_hands_o_tp_bad_kicker = [(x, 9) for x in opponents_hands[2] if (x[0] == flop[0] and x[1] <= 10 and x[1] not in flop) or (x[1] == flop[0] and x[0] <= 10 and x[0] not in flop)]
# opponents_hands_s_middle_pair = [(x, 3) for x in opponents_hands[1] if (x[0] == flop[1] and x[1] not in flop) or (x[1] == flop[1] and x[0] not in flop)]
# opponents_hands_o_middle_pair = [(x, 9) for x in opponents_hands[2] if (x[0] == flop[1] and x[1] not in flop) or (x[1] == flop[1] and x[0] not in flop)]
# opponents_hands_s_bottom_pair = [(x, 3) for x in opponents_hands[1] if (x[0] == flop[2] and x[1] not in flop) or (x[1] == flop[2] and x[0] not in flop)]
# opponents_hands_o_bottom_pair = [(x, 9) for x in opponents_hands[2] if (x[0] == flop[2] and x[1] not in flop) or (x[1] == flop[2] and x[0] not in flop)]
# opponents_hands_pp_below_top_pair = [(x, 6) for x in opponents_hands[0] if x[0] < flop[0] and x[0] > flop[1]]
# opponents_hands_s_aj_high = [(x, 4) for x in opponents_hands[1] if (x[0] not in flop and x[1] not in flop) and (x[0] == 14) and (x[1] > 10)]
# opponents_hands_o_aj_high = [(x, 12) for x in opponents_hands[2] if (x[0] not in flop and x[1] not in flop) and (x[0] == 14) and (x[1] > 10)]
# opponents_hands_pp_below_middle_pair = [(x, 6) for x in opponents_hands[0] if x[0] < flop[1] and x[0] > flop[2]]
# opponents_hands_s_kq_high = [(x, 4) for x in opponents_hands[1] if (x[0] not in flop and x[1] not in flop) and ((x[0] == 13 and x[1] > 11) or (x[0] == 14))]
# opponents_hands_o_kq_high = [(x, 12) for x in opponents_hands[2] if (x[0] not in flop and x[1] not in flop) and ((x[0] == 13 and x[1] > 11) or (x[0] == 14))]
# opponents_hands_pp_below_bottom_pair = [(x, 6) for x in opponents_hands[0] if x[0] < flop[2]]
# opponents_hands_s_kj_high = [(x, 4) for x in opponents_hands[1] if (x[0] not in flop and x[1] not in flop) and (x[0] == 13) and (x[1] == 11)]
# opponents_hands_o_kj_high = [(x, 12) for x in opponents_hands[2] if (x[0] not in flop and x[1] not in flop) and (x[0] == 13) and (x[1] == 11)]
# opponents_hands_s_k8_high = [(x, 4) for x in opponents_hands[1] if (x[0] not in flop and x[1] not in flop) and (x[0] == 13) and (x[1] < 11 and x[1] >= 8)]
# opponents_hands_o_k8_high = [(x, 12) for x in opponents_hands[2] if (x[0] not in flop and x[1] not in flop) and (x[0] == 13) and (x[1] < 11 and x[1] >= 8)]
# # Cat3 paired
# #### bdfd/fd/no_fd combos should not be counted twice; combine them smartly and carefully.
# #### Assuming no suited flushdraw possible with pair on two-tone for simplicity
# #### Might say a straight is a gutshot but that is fine because future logic
# ## Include the combos for that hand
# if board_type == "two-tone":
# opponents_hands_pp_fd = []
# opponents_hands_s_fd = [(x, 1) for x in opponents_hands[1] if x[0] not in flop and x[1] not in flop]
# opponents_hands_o_fd = []
# elif board_type == "rainbow":
# opponents_hands_pp_fd = []
# opponents_hands_s_fd = []
# opponents_hands_o_fd = []
# else:
# opponents_hands_pp_fd = [(x, 3) for x in opponents_hands[0] if x[0] not in flop]
# opponents_hands_s_fd = []
# # If paired then ignore the flushdraw (anyway, just monotone; just makes things simpler)
# opponents_hands_o_fd = [(x, 6) for x in opponents_hands[2] if x[0] not in flop and x[1] not in flop]
# #### Also added double gutshots. Doing a bit of overcounting for oesd+pair, which is fine for estimation (should be 9).
# opponents_hands_pp_oesd = [(x, 6) for x in opponents_hands[0] if (sorted(set(x + flop + [20,21,22]))[3] - min(x + flop) == 3) or (max(x + flop) - sorted(set(x + flop + [-20,-19,-18]))[-4] == 3) or (sorted(x+flop) == [3,4,5,7,14]) or (max(x+flop) - min(x+flop) == 6 and min(x+flop)+2 == sorted(x+flop)[1] and max(x+flop)-2 == sorted(x+flop)[-2])]
# opponents_hands_s_oesd = [(x, 4) for x in opponents_hands[1] if (sorted(set(x + flop + [20,21,22]))[3] - min(x + flop) == 3) or (max(x + flop) - sorted(set(x + flop + [-20,-19,-18]))[-4] == 3) or (sorted(x+flop) == [3,4,5,7,14]) or (max(x+flop) - min(x+flop) == 6 and min(x+flop)+2 == sorted(x+flop)[1] and max(x+flop)-2 == sorted(x+flop)[-2])]
# opponents_hands_o_oesd = [(x, 12) for x in opponents_hands[2] if (sorted(set(x + flop + [20,21,22]))[3] - min(x + flop) == 3) or (max(x + flop) - sorted(set(x + flop + [-20,-19,-18]))[-4] == 3) or (sorted(x+flop) == [3,4,5,7,14]) or (max(x+flop) - min(x+flop) == 6 and min(x+flop)+2 == sorted(x+flop)[1] and max(x+flop)-2 == sorted(x+flop)[-2])]
# opponents_hands_pp_gutshot = [(x, 6) for x in opponents_hands[0] if (sorted(set(x + flop + [20,21,22]))[3] - min(x + flop) == 4) or (max(x + flop) - sorted(set(x + flop + [-20,-19,-18]))[-4] == 4) or (sorted(set([1 if y == 14 else y for y in (x + flop)] + [20,21,22]))[3] <= 5)]
# opponents_hands_s_gutshot = [(x, 4) for x in opponents_hands[1] if (sorted(set(x + flop + [20,21,22]))[3] - min(x + flop) == 4) or (max(x + flop) - sorted(set(x + flop + [-20,-19,-18]))[-4] == 4) or (sorted(set([1 if y == 14 else y for y in (x + flop)] + [20,21,22]))[3] <= 5)]
# opponents_hands_o_gutshot = [(x, 12) for x in opponents_hands[2] if (sorted(set(x + flop + [20,21,22]))[3] - min(x + flop) == 4) or (max(x + flop) - sorted(set(x + flop + [-20,-19,-18]))[-4] == 4) or (sorted(set([1 if y == 14 else y for y in (x + flop)] + [20,21,22]))[3] <= 5)]
# #### Additional rule: 3 to a straight requires two cards from your hand not just one (that's how I want it to be)
# opponents_hands_s_3_to_straight_not_all_from_low_end = [(x, 4) for x in opponents_hands[1] if (x[0] != 14) and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]+1 and x[1] == flop[0]-1) or (x[0] == flop[0]+2 and x[1] == flop[0]+1) or (x[0] == flop[1]+1 and x[1] == flop[1]-1) or (x[0] == flop[1]+2 and x[1] == flop[1]+1) or (x[0] == flop[2]+1 and x[1] == flop[2]-1) or (x[0] == flop[2]+2 and x[1] == flop[2]+1))]
# opponents_hands_o_3_to_straight_not_all_from_low_end = [(x, 12) for x in opponents_hands[2] if (x[0] != 14) and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]+1 and x[1] == flop[0]-1) or (x[0] == flop[0]+2 and x[1] == flop[0]+1) or (x[0] == flop[1]+1 and x[1] == flop[1]-1) or (x[0] == flop[1]+2 and x[1] == flop[1]+1) or (x[0] == flop[2]+1 and x[1] == flop[2]-1) or (x[0] == flop[2]+2 and x[1] == flop[2]+1))]
# if board_type == "two-tone":
# opponents_hands_s_3_to_straight_low_end_bdfd = [(x, 1) for x in opponents_hands[1] if x[0] != 13 and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]-1 and x[1] == flop[0]-2) or (x[0] == flop[1]-1 and x[1] == flop[1]-2) or (x[0] == flop[2]-1 and x[1] == flop[2]-2))]
# opponents_hands_o_3_to_straight_low_end_bdfd = [(x, 6) for x in opponents_hands[2] if x[0] != 13 and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]-1 and x[1] == flop[0]-2) or (x[0] == flop[1]-1 and x[1] == flop[1]-2) or (x[0] == flop[2]-1 and x[1] == flop[2]-2))]
# elif board_type == "rainbow":
# opponents_hands_s_3_to_straight_low_end_bdfd = [(x, 3) for x in opponents_hands[1] if x[0] != 13 and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]-1 and x[1] == flop[0]-2) or (x[0] == flop[1]-1 and x[1] == flop[1]-2) or (x[0] == flop[2]-1 and x[1] == flop[2]-2))]
# opponents_hands_o_3_to_straight_low_end_bdfd = []
# else:
# opponents_hands_s_3_to_straight_low_end_bdfd = []
# opponents_hands_o_3_to_straight_low_end_bdfd = []
# if board_type == "two-tone":
# opponents_hands_s_3_to_straight_low_end = [(x, 4) for x in opponents_hands[1] if x[0] != 13 and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]-1 and x[1] == flop[0]-2) or (x[0] == flop[1]-1 and x[1] == flop[1]-2) or (x[0] == flop[2]-1 and x[1] == flop[2]-2))]
# opponents_hands_o_3_to_straight_low_end = [(x, 12) for x in opponents_hands[2] if x[0] != 13 and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]-1 and x[1] == flop[0]-2) or (x[0] == flop[1]-1 and x[1] == flop[1]-2) or (x[0] == flop[2]-1 and x[1] == flop[2]-2))]
# elif board_type == "rainbow":
# opponents_hands_s_3_to_straight_low_end = [(x, 4) for x in opponents_hands[1] if x[0] != 13 and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]-1 and x[1] == flop[0]-2) or (x[0] == flop[1]-1 and x[1] == flop[1]-2) or (x[0] == flop[2]-1 and x[1] == flop[2]-2))]
# opponents_hands_o_3_to_straight_low_end = [(x, 12) for x in opponents_hands[2] if x[0] != 13 and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]-1 and x[1] == flop[0]-2) or (x[0] == flop[1]-1 and x[1] == flop[1]-2) or (x[0] == flop[2]-1 and x[1] == flop[2]-2))]
# else:
# opponents_hands_s_3_to_straight_low_end = [(x, 4) for x in opponents_hands[1] if x[0] != 13 and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]-1 and x[1] == flop[0]-2) or (x[0] == flop[1]-1 and x[1] == flop[1]-2) or (x[0] == flop[2]-1 and x[1] == flop[2]-2))]
# opponents_hands_o_3_to_straight_low_end = [(x, 12) for x in opponents_hands[2] if x[0] != 13 and (x[0] not in flop and x[1] not in flop) and ((x[0] == flop[0]-1 and x[1] == flop[0]-2) or (x[0] == flop[1]-1 and x[1] == flop[1]-2) or (x[0] == flop[2]-1 and x[1] == flop[2]-2))]
# if board_type == "two-tone":
# opponents_hands_s_5_unique_cards_within_7_values_bdfd = [(x, 1) for x in opponents_hands[1] if (x[0] not in flop and x[1] not in flop) and max(flop+x) - min(flop+x) <= 7]
# opponents_hands_o_5_unique_cards_within_7_values_bdfd = [(x, 6) for x in opponents_hands[2] if (x[0] not in flop and x[1] not in flop) and max(flop+x) - min(flop+x) <= 7]
# elif board_type == "rainbow":
# opponents_hands_s_5_unique_cards_within_7_values_bdfd = [(x, 3) for x in opponents_hands[1] if (x[0] not in flop and x[1] not in flop) and max(flop+x) - min(flop+x) <= 7]
# opponents_hands_o_5_unique_cards_within_7_values_bdfd = []
# else:
# opponents_hands_s_5_unique_cards_within_7_values_bdfd = []
# opponents_hands_o_5_unique_cards_within_7_values_bdfd = []
# if board_type == "two-tone":
# opponents_hands_pp_q_minus_bdfd = [(x, 3) for x in opponents_hands[0] if (x[0] not in flop) and x[0] <= 12]
# opponents_hands_s_q_minus_bdfd = [(x, 1) for x in opponents_hands[1] if (x[0] not in flop and x[1] not in flop) and x[0] <= 12]
# opponents_hands_o_q_minus_bdfd = [(x, 6) for x in opponents_hands[2] if (x[0] not in flop and x[1] not in flop) and x[0] <= 12]
# elif board_type == "rainbow":
# opponents_hands_pp_q_minus_bdfd = []
# opponents_hands_s_q_minus_bdfd = [(x, 3) for x in opponents_hands[1] if (x[0] not in flop and x[1] not in flop) and x[0] <= 12]
# opponents_hands_o_q_minus_bdfd = []
# else:
# opponents_hands_pp_q_minus_bdfd = []
# opponents_hands_s_q_minus_bdfd = []
# opponents_hands_o_q_minus_bdfd = []
# #### 3 cards within 4 values with two overcards
# opponents_hands_s_lowest_card_is_one_of_3_cards_within_4_values_and_two_overcards = [(x, 4) for x in opponents_hands[1] if x[1] > flop[0] and ((max(flop + [x[1]]) - sorted(set(flop + [x[1]] + [-20,-19,-18]))[-3] <= 3) or (max(flop + x) - sorted(set(flop + x + [-20,-19,-18]))[-3] <= 3))]
# opponents_hands_o_lowest_card_is_one_of_3_cards_within_4_values_and_two_overcards = [(x, 12) for x in opponents_hands[2] if x[1] > flop[0] and ((max(flop + [x[1]]) - sorted(set(flop + [x[1]] + [-20,-19,-18]))[-3] <= 3) or (max(flop + x) - sorted(set(flop + x + [-20,-19,-18]))[-3] <= 3))]
# if board_type == "two-tone":
# opponents_hands_pp_a_minus_bdfd = [(x, 3) for x in opponents_hands[0] if (x[0] not in flop) and x[0] > 12 and x[0] <= 14]
# opponents_hands_s_a_minus_bdfd = [(x, 1) for x in opponents_hands[1] if (x[0] not in flop and x[1] not in flop) and x[0] > 12 and x[0] <= 14]
# opponents_hands_o_a_minus_bdfd = [(x, 6) for x in opponents_hands[2] if (x[0] not in flop and x[1] not in flop) and x[0] > 12 and x[0] <= 14]
# elif board_type == "rainbow":
# opponents_hands_pp_a_minus_bdfd = []
# opponents_hands_s_a_minus_bdfd = [(x, 3) for x in opponents_hands[1] if (x[0] not in flop and x[1] not in flop) and x[0] > 12 and x[0] <= 14]
# opponents_hands_o_a_minus_bdfd = []
# else:
# opponents_hands_pp_a_minus_bdfd = []
# opponents_hands_s_a_minus_bdfd = []
# opponents_hands_o_a_minus_bdfd = []
# def opponents_cat1_level_x_and_above(x):
# result = [[], [], []]
# if x >= 1:
# result[1] += opponents_hands_s_straight
# result[2] += opponents_hands_o_straight
# if x >= 2:
# result[0] += opponents_hands_pp_sets
# if x >= 3:
# result[1] += opponents_hands_s_trips
# result[2] += opponents_hands_o_trips
# if x >= 4:
# result[1] += opponents_hands_s_two_pair
# result[2] += opponents_hands_o_two_pair
# if x >= 5:
# result[0] += opponents_hands_pp_overpair_9plus
# if x >= 6:
# result[0] += opponents_hands_pp_any_overpair
# if x >= 7:
# result[1] += opponents_hands_s_tp_k_kicker
# result[2] += opponents_hands_o_tp_k_kicker
# if x >= 8:
# result[1] += opponents_hands_s_tp_j_kicker
# result[2] += opponents_hands_o_tp_j_kicker
# if x >= 9:
# result[1] += opponents_hands_s_tp_any_kicker
# result[2] += opponents_hands_o_tp_any_kicker
# result[0].sort(reverse=True)
# result[1].sort(reverse=True)
# result[2].sort(reverse=True)
# result[0] = list(k for k,_ in itertools.groupby(result[0]))
# result[1] = list(k for k,_ in itertools.groupby(result[1]))
# result[2] = list(k for k,_ in itertools.groupby(result[2]))
# # Return result
# opponents_hands_cat1 = result
# return opponents_hands_cat1
# # Performance improvement by filtering out cat1 from hands already, but would also need a copy of hands
# def opponents_cat2_level_x_and_above(x, opponents_hands_cat1):
# result = [[], [], []]
# if x >= 1:
# # Cat 1
# result[1] += opponents_hands_s_straight
# result[2] += opponents_hands_o_straight
# result[0] += opponents_hands_pp_sets
# result[1] += opponents_hands_s_trips
# result[2] += opponents_hands_o_trips
# result[1] += opponents_hands_s_two_pair
# result[2] += opponents_hands_o_two_pair
# result[0] += opponents_hands_pp_overpair_9plus
# result[0] += opponents_hands_pp_any_overpair
# result[1] += opponents_hands_s_tp_k_kicker
# result[2] += opponents_hands_o_tp_k_kicker
# result[1] += opponents_hands_s_tp_j_kicker
# result[2] += opponents_hands_o_tp_j_kicker
# result[1] += opponents_hands_s_tp_any_kicker
# result[2] += opponents_hands_o_tp_any_kicker
# # Cat 2
# result[1] += opponents_hands_s_tp_bad_kicker
# result[2] += opponents_hands_o_tp_bad_kicker
# if x >= 2:
# result[1] += opponents_hands_s_middle_pair
# result[2] += opponents_hands_o_middle_pair
# if x >= 3:
# result[0] += opponents_hands_pp_below_top_pair
# if x >= 4:
# result[1] += opponents_hands_s_bottom_pair
# result[2] += opponents_hands_o_bottom_pair
# if x >= 5:
# result[1] += opponents_hands_s_aj_high
# result[2] += opponents_hands_o_aj_high
# if x >= 6:
# result[0] += opponents_hands_pp_below_middle_pair
# if x >= 7:
# result[1] += opponents_hands_s_kq_high
# result[2] += opponents_hands_o_kq_high
# if x >= 8:
# result[0] += opponents_hands_pp_below_bottom_pair
# if x >= 9:
# result[1] += opponents_hands_s_kj_high
# result[2] += opponents_hands_o_kj_high
# if x >= 10:
# result[1] += opponents_hands_s_k8_high
# result[2] += opponents_hands_o_k8_high
# result[0].sort(reverse=True)
# result[1].sort(reverse=True)
# result[2].sort(reverse=True)
# result[0] = list(k for k,_ in itertools.groupby(result[0]))
# result[1] = list(k for k,_ in itertools.groupby(result[1]))
# result[2] = list(k for k,_ in itertools.groupby(result[2]))
# # Interim
# cat1_unique_pp = [x for (x,y) in opponents_hands_cat1[0]]
# cat1_unique_s = [x for (x,y) in opponents_hands_cat1[1]]
# cat1_unique_o = [x for (x,y) in opponents_hands_cat1[2]]
# # Remove cat1 from these cat2s
# result[0] = [(x,y) for (x,y) in result[0] if x not in cat1_unique_pp]
# result[1] = [(x,y) for (x,y) in result[1] if x not in cat1_unique_s]
# result[2] = [(x,y) for (x,y) in result[2] if x not in cat1_unique_o]
# # Return result
# opponents_hands_cat2 = result
# return opponents_hands_cat2
# # Performance improvement by filtering out cat1+cat2 from hands already, but would also need a copy of hands
# def opponents_cat3_level_x_and_above(x, opponents_hands_cat1, opponents_hands_cat2, skip_4_to_10_and_13_to_15=True):
# bdfd_result = [[], [], []]
# other_result = [[], [], []]
# result = [[], [], []]
# if x >= 1:
# other_result[0] += opponents_hands_pp_fd
# other_result[1] += opponents_hands_s_fd
# other_result[2] += opponents_hands_o_fd
# if x >= 2:
# other_result[0] += opponents_hands_pp_oesd
# other_result[1] += opponents_hands_s_oesd
# other_result[2] += opponents_hands_o_oesd
# if x >= 3:
# other_result[0] += opponents_hands_pp_gutshot
# other_result[1] += opponents_hands_s_gutshot
# other_result[2] += opponents_hands_o_gutshot
# if x >= 4 and not skip_4_to_10_and_13_to_15:
# other_result[1] += opponents_hands_s_3_to_straight_not_all_from_low_end
# other_result[2] += opponents_hands_o_3_to_straight_not_all_from_low_end
# if x >= 5 and not skip_4_to_10_and_13_to_15:
# bdfd_result[1] += opponents_hands_s_3_to_straight_low_end_bdfd
# bdfd_result[2] += opponents_hands_o_3_to_straight_low_end_bdfd
# if x >= 6 and not skip_4_to_10_and_13_to_15:
# other_result[1] += opponents_hands_s_3_to_straight_low_end
# other_result[2] += opponents_hands_o_3_to_straight_low_end
# if x >= 7 and not skip_4_to_10_and_13_to_15:
# bdfd_result[1] += opponents_hands_s_5_unique_cards_within_7_values_bdfd
# bdfd_result[2] += opponents_hands_o_5_unique_cards_within_7_values_bdfd
# if x >= 8 and not skip_4_to_10_and_13_to_15:
# bdfd_result[0] += opponents_hands_pp_q_minus_bdfd
# bdfd_result[1] += opponents_hands_s_q_minus_bdfd
# bdfd_result[2] += opponents_hands_o_q_minus_bdfd
# if x >= 9:
# other_result[1] += opponents_hands_s_lowest_card_is_one_of_3_cards_within_4_values_and_two_overcards
# other_result[2] += opponents_hands_o_lowest_card_is_one_of_3_cards_within_4_values_and_two_overcards
# if x >= 10 and not skip_4_to_10_and_13_to_15:
# bdfd_result[0] += opponents_hands_pp_a_minus_bdfd
# bdfd_result[1] += opponents_hands_s_a_minus_bdfd
# bdfd_result[2] += opponents_hands_o_a_minus_bdfd
# # Remove duplicates within bdfd hands
# bdfd_result[0].sort(reverse=True)
# bdfd_result[1].sort(reverse=True)
# bdfd_result[2].sort(reverse=True)
# bdfd_result[0] = list(k for k,_ in itertools.groupby(bdfd_result[0]))
# bdfd_result[1] = list(k for k,_ in itertools.groupby(bdfd_result[1]))
# bdfd_result[2] = list(k for k,_ in itertools.groupby(bdfd_result[2]))
# # Add all together
# result[0] = bdfd_result[0] + other_result[0]
# result[1] = bdfd_result[1] + other_result[1]
# result[2] = bdfd_result[2] + other_result[2]
# # Reduce with max combos number used and sort
# groupby_dict = defaultdict(int)
# for val in result[0]:
# groupby_dict[tuple(val[0])] += val[1]
# result[0] = [(sorted(list(x), reverse=True),min(y, 6)) for (x,y) in groupby_dict.items()]
# groupby_dict = defaultdict(int)
# for val in result[1]:
# groupby_dict[tuple(val[0])] += val[1]
# result[1] = [(sorted(list(x), reverse=True),min(y, 4)) for (x,y) in groupby_dict.items()]
# groupby_dict = defaultdict(int)
# for val in result[2]:
# groupby_dict[tuple(val[0])] += val[1]
# result[2] = [(sorted(list(x), reverse=True),min(y, 12)) for (x,y) in groupby_dict.items()]
# # Interim
# cat1_unique_pp = [x for (x,y) in opponents_hands_cat1[0]]
# cat1_unique_s = [x for (x,y) in opponents_hands_cat1[1]]
# cat1_unique_o = [x for (x,y) in opponents_hands_cat1[2]]
# cat2_unique_pp = [x for (x,y) in opponents_hands_cat2[0]]
# cat2_unique_s = [x for (x,y) in opponents_hands_cat2[1]]
# cat2_unique_o = [x for (x,y) in opponents_hands_cat2[2]]
# # Remove cat1 and cat2
# result[0] = [(x,y) for (x,y) in result[0] if x not in cat1_unique_pp and x not in cat2_unique_pp]
# result[1] = [(x,y) for (x,y) in result[1] if x not in cat1_unique_s and x not in cat2_unique_s]
# result[2] = [(x,y) for (x,y) in result[2] if x not in cat1_unique_o and x not in cat2_unique_o]
# # Add cat2 hands
# if x >= 11 and not skip_4_to_10_and_13_to_15:
# result[1] += [(x,y) for (x,y) in opponents_hands_s_k8_high if x not in cat1_unique_s and x not in cat2_unique_s]
# result[2] += [(x,y) for (x,y) in opponents_hands_o_k8_high if x not in cat1_unique_o and x not in cat2_unique_o]
# if x >= 12 and not skip_4_to_10_and_13_to_15:
# result[1] += [(x,y) for (x,y) in opponents_hands_s_kj_high if x not in cat1_unique_s and x not in cat2_unique_s]
# result[2] += [(x,y) for (x,y) in opponents_hands_o_kj_high if x not in cat1_unique_o and x not in cat2_unique_o]
# if x >= 13 and not skip_4_to_10_and_13_to_15:
# result[0] += [(x,y) for (x,y) in opponents_hands_pp_below_bottom_pair if x not in cat1_unique_pp and x not in cat2_unique_pp]
# if x >= 14 and not skip_4_to_10_and_13_to_15:
# result[1] += [(x,y) for (x,y) in opponents_hands_s_kq_high if x not in cat1_unique_s and x not in cat2_unique_s]
# result[2] += [(x,y) for (x,y) in opponents_hands_o_kq_high if x not in cat1_unique_o and x not in cat2_unique_o]
# if x >= 15 and not skip_4_to_10_and_13_to_15:
# result[0] += [(x,y) for (x,y) in opponents_hands_pp_below_middle_pair if x not in cat1_unique_pp and x not in cat2_unique_pp]
# # Add cat4 hands
# if x >= 16:
# remaining_cat2_type_hands_pp = [x for (x,y) in opponents_hands_pp_below_bottom_pair] + [x for (x,y) in opponents_hands_pp_below_middle_pair] + [x for (x,y) in opponents_hands_pp_below_top_pair]
# remaining_cat2_type_hands_s = [x for (x,y) in opponents_hands_s_k8_high] + [x for (x,y) in opponents_hands_s_kj_high] + [x for (x,y) in opponents_hands_s_kq_high] + [x for (x,y) in opponents_hands_s_aj_high] + [x for (x,y) in opponents_hands_s_bottom_pair] + [x for (x,y) in opponents_hands_s_middle_pair] + [x for (x,y) in opponents_hands_s_tp_bad_kicker]
# remaining_cat2_type_hands_o = [x for (x,y) in opponents_hands_o_k8_high] + [x for (x,y) in opponents_hands_o_kj_high] + [x for (x,y) in opponents_hands_o_kq_high] + [x for (x,y) in opponents_hands_o_aj_high] + [x for (x,y) in opponents_hands_o_bottom_pair] + [x for (x,y) in opponents_hands_o_middle_pair] + [x for (x,y) in opponents_hands_o_tp_bad_kicker]
# result[0] += [(x, 6) for x in opponents_hands[0] if x not in cat1_unique_pp and x not in cat2_unique_pp and x not in remaining_cat2_type_hands_pp]
# result[1] += [(x, 4) for x in opponents_hands[1] if x not in cat1_unique_s and x not in cat2_unique_s and x not in remaining_cat2_type_hands_s]
# result[2] += [(x, 12) for x in opponents_hands[2] if x not in cat1_unique_o and x not in cat2_unique_o and x not in remaining_cat2_type_hands_o]
# # Add cat2 hands with pairs
# if x >= 17:
# result[1] += [(x,y) for (x,y) in opponents_hands_s_aj_high if x not in cat1_unique_s and x not in cat2_unique_s]
# result[2] += [(x,y) for (x,y) in opponents_hands_o_aj_high if x not in cat1_unique_o and x not in cat2_unique_o]
# if x >= 18:
# result[1] += [(x,y) for (x,y) in opponents_hands_s_bottom_pair if x not in cat1_unique_s and x not in cat2_unique_s]
# result[2] += [(x,y) for (x,y) in opponents_hands_o_bottom_pair if x not in cat1_unique_o and x not in cat2_unique_o]
# if x >= 19:
# result[0] += [(x,y) for (x,y) in opponents_hands_pp_below_top_pair if x not in cat1_unique_pp and x not in cat2_unique_pp]
# if x >= 20:
# result[1] += [(x,y) for (x,y) in opponents_hands_s_middle_pair if x not in cat1_unique_s and x not in cat2_unique_s]
# result[2] += [(x,y) for (x,y) in opponents_hands_o_middle_pair if x not in cat1_unique_o and x not in cat2_unique_o]
# if x >= 21:
# result[1] += [(x,y) for (x,y) in opponents_hands_s_tp_bad_kicker if x not in cat1_unique_s and x not in cat2_unique_s]
# result[2] += [(x,y) for (x,y) in opponents_hands_o_tp_bad_kicker if x not in cat1_unique_o and x not in cat2_unique_o]
# # Reduce with max combos number used and sort
# groupby_dict = defaultdict(int)
# for val in result[0]:
# groupby_dict[tuple(val[0])] = max(groupby_dict[tuple(val[0])], val[1])
# result[0] = [(sorted(list(x), reverse=True),min(y, 6)) for (x,y) in groupby_dict.items()]
# groupby_dict = defaultdict(int)
# for val in result[1]:
# groupby_dict[tuple(val[0])] = max(groupby_dict[tuple(val[0])], val[1])
# result[1] = [(sorted(list(x), reverse=True),min(y, 4)) for (x,y) in groupby_dict.items()]
# groupby_dict = defaultdict(int)
# for val in result[2]:
# groupby_dict[tuple(val[0])] = max(groupby_dict[tuple(val[0])], val[1])
# result[2] = [(sorted(list(x), reverse=True),min(y, 12)) for (x,y) in groupby_dict.items()]
# # Return results
# opponents_hands_cat3 = result
# return opponents_hands_cat3
# best_param2 = 16 if opponents_position_ip and opponents_pfr else 2
# opponents_cat1 = opponents_cat1_level_x_and_above(8)
# # RANGE FROM 5-10:
# opponents_cat2 = opponents_cat2_level_x_and_above(7, opponents_cat1)
# # CHOICES 1, 2, 5, 8, 10, 16:
# opponents_cat3 = opponents_cat3_level_x_and_above(best_param2, opponents_cat1, opponents_cat2)
# opponents_cat4 = get_cat4_hands(opponents_with_combos, opponents_cat1, opponents_cat2, opponents_cat3)
# cat1_combos = count_hand_combos(opponents_cat1)
# cat2_combos = count_hand_combos(opponents_cat2)
# cat3_combos = count_hand_combos(opponents_cat3)
# cat4_combos = count_hand_combos(opponents_cat4)
# print(cat1_combos, cat2_combos, cat3_combos, cat4_combos)
# # print("cat3 combos needed:", cat1_combos*2 - cat3_combos)
# # print("cat2 combos needed:", cat4_combos - cat2_combos)
# print("Weakest bluff to use:", best_param2)
# import re
# import pyperclip as clip
# import pyautogui as g
# from time import sleep
# import pyperclip as clip
# import pandas as pd
# import numpy as np
# from sklearn.metrics import mean_squared_error
# from graphviz import Source
# from sklearn.tree import export_graphviz
# from sklearn import tree
# try:
# from PIL import Image
# except ImportError:
# import Image
# import pytesseract
# g.FAILSAFE = True
# sleep(2)
# g.PAUSE = 0.0
# g.size()
# g.position()
# def c():
# g.mouseDown()
# g.mouseUp()
# def c2():
# c()
# c()
# def rc():
# g.click(button='right')
# def select_all_and_copy():
# rc()
# g.keyDown('a')
# g.keyUp('a')
# g.keyDown('ctrl')
# g.keyDown('c')
# g.keyUp('c')
# g.keyUp('ctrl')
# # STEP 1: Open site and Set 4 colors manually
# # import sys; import webbrowser; from pandas import read_csv; from time import sleep; webbrowser.get('open -a /Applications/Google\ Chrome.app %s').open("https://floattheturn.com/wp/tools/range-analyzer/")
# def change_color(category=1):
# y = 605 + 23*category
# g.moveTo(1156, y)
# c()
# def click_hand(hand1, hand2, type_):
# if type_ == "suited" or type_ == "paired":
# x = 765 + 29.9*(14-hand2)
# y = 485 + 29.9*(14-hand1)
# else:
# x = 765 + 29.9*(14-hand1)
# y = 485 + 29.9*(14-hand2)
# g.moveTo(x, y)
# c()
# def clear_board():
# change_color(7)
# for i in range(0, 13):
# for j in range(0, 13):
# x = 765 + 29.9*i
# y = 486 + 29.9*j
# g.moveTo(x, y)
# c()
# def fill_in_dead_cards(flop):
# # Only changing suit for paired cards, don't care that much
# string_chars = []
# suit_to_use_for_paired = ['s','h','c','d']
# index = 0
# for card in flop:
# string_chars.append(m2[card])
# if is_paired and paired_value == card:
# string_chars.append(suit_to_use_for_paired[index])
# string_chars.append(",")
# index+=1
# else:
# string_chars.append("s,")
# answer = "".join(string_chars)[:-2] + "c"
# clip.copy(answer)
# g.moveTo(1178, 807)
# c()
# g.hotkey('command', 'a')
# g.hotkey('command', 'v')
# # Change color names
# def change_color_names():
# g.moveTo(1246, 628)
# c()
# sleep(0.1)
# g.hotkey('command', 'a')
# clip.copy("Category 1")
# g.hotkey('command', 'v')
# g.moveTo(1246, 650)
# c()
# sleep(0.1)
# g.hotkey('command', 'a')
# clip.copy("Category 2")
# g.hotkey('command', 'v')
# g.moveTo(1246, 672)
# c()
# sleep(0.1)
# g.hotkey('command', 'a')
# clip.copy("Category 3")
# g.hotkey('command', 'v')
# g.moveTo(1246, 694)
# c()
# sleep(0.1)
# g.hotkey('command', 'a')
# clip.copy("Category 3 FD")
# g.hotkey('command', 'v')
# g.moveTo(1246, 716)
# c()
# sleep(0.1)
# g.hotkey('command', 'a')
# clip.copy("Category 3 BDFD")
# g.hotkey('command', 'v')
# g.moveTo(1246, 738)
# c()
# sleep(0.1)
# g.hotkey('command', 'a')
# clip.copy("Category 4")
# g.hotkey('command', 'v')
# def click_screen():
# g.moveTo(1120, 716)
# c()
# sleep(0.1)
# # Create 4 colors
# # Name the 4 colors
# # click on all the hands for your entire range
# # click on cat1 hands
# # click on cat2 hands
# # click on cat3 hands
# click_screen()
# change_color_names()
# click_screen()
# fill_in_dead_cards(flop)
# # Fill hands
# click_screen()
# sleep(0.1)
# clear_board()
# sleep(0.1)
# change_color(6)
# sleep(0.1)
# [click_hand(x[0], x[1], "paired") for x in opponents_hands[0]]
# [click_hand(x[0], x[1], "suited") for x in opponents_hands[1]]
# [click_hand(x[0], x[1], "offsuit") for x in opponents_hands[2]]
# sleep(0.1)
# change_color(1)
# [click_hand(x[0][0], x[0][1], "paired") for x in opponents_cat1[0]]
# [click_hand(x[0][0], x[0][1], "suited") for x in opponents_cat1[1]]
# [click_hand(x[0][0], x[0][1], "offsuit") for x in opponents_cat1[2]]
# change_color(2)
# sleep(0.1)
# [click_hand(x[0][0], x[0][1], "paired") for x in opponents_cat2[0]]
# [click_hand(x[0][0], x[0][1], "suited") for x in opponents_cat2[1]]
# [click_hand(x[0][0], x[0][1], "offsuit") for x in opponents_cat2[2]]
# change_color(3)
# sleep(0.1)
# # Category 3 regular
# [click_hand(x[0][0], x[0][1], "paired") for x in opponents_cat3[0]]
# [click_hand(x[0][0], x[0][1], "suited") for x in opponents_cat3[1]]
# [click_hand(x[0][0], x[0][1], "offsuit") for x in opponents_cat3[2]]
# change_color(4)
# sleep(0.1)
# # Category 3 Flushdraw
# [click_hand(x[0][0], x[0][1], "paired") for x in opponents_cat3[0] if ((False) if board_type == "rainbow" else (False) if board_type == "two-tone" else (x[1] == 3))]
# [click_hand(x[0][0], x[0][1], "suited") for x in opponents_cat3[1] if ((False) if board_type == "rainbow" else (x[1] == 1) if board_type == "two-tone" else (False))]
# [click_hand(x[0][0], x[0][1], "offsuit") for x in opponents_cat3[2] if ((False) if board_type == "rainbow" else (False) if board_type == "two-tone" else (x[1] == 6))]
# change_color(5)
# sleep(0.1)
# # Category 3 BDFD
# [click_hand(x[0][0], x[0][1], "paired") for x in opponents_cat3[0] if ((False) if board_type == "rainbow" else (x[1] == 3) if board_type == "two-tone" else (False))]
# [click_hand(x[0][0], x[0][1], "suited") for x in opponents_cat3[1] if ((x[1] == 3) if board_type == "rainbow" else (x[1] == 2) if board_type == "two-tone" else (False))]
# [click_hand(x[0][0], x[0][1], "offsuit") for x in opponents_cat3[2] if ((False) if board_type == "rainbow" else (x[1] == 6) if board_type == "two-tone" else (False))]
# change_color(7)
my_hands_string = "ATs,KTs,QTs,JTs,ATo,KTo,QTo,JTo"
opponents_hands_string = "As2s,As3s,As4s,As5s,As6s,As7s,As8s,Ks4s,Ks5s,Ks6s,Ks7s,K8s,Qs4s,Qs5s,Qs6s,Qs7s,Q8s,Js4s,Js5s,Js6s,Js7s,J8s,T7s-T5s,94s,9s5s,9s6s,97s,98s,87s,8s7s,8s5s,74s,7s5s,7s6s,6s4s,6s5s,5s3s,4s2s,77,98o,T9o,J9o,Q9o,A9o"
final_flop_string = "Ts8s4c7sJc"
command = "source /Users/petermyers/Documents/pbots_calc-master/venv/bin/activate; /Users/petermyers/Documents/pbots_calc-master/python/calculator.sh {}:{} {}".format(my_hands_string, opponents_hands_string, final_flop_string)
process = subprocess.Popen(command,stdout=subprocess.PIPE, shell=True)
raw_equity = ast.literal_eval(process.communicate()[0].strip().decode("utf-8"))[0][1]
raw_equity
###Output
_____no_output_____ |
demo/spkDect_Sort_Demo.ipynb | ###Markdown
FPGA and Spiketag Configuration
###Code
from spiketag.base import ProbeFactory, bload
from spiketag.mvc import Sorter
from spiketag.fpga import xike_config
###Output
_____no_output_____
###Markdown
spiketag
###Code
nCh = 160
fs = 25000.
tetrodes = ProbeFactory.genTetrodeProbe(fs=fs, n_ch=nCh)
tetrodes.fromfile('./open-ephys-load/40 tetrode_channel_map')
tetrodes.ch_hash(14)
tetrodes.reorder_by_chip = True
tetrodes._nchips = 5
###Output
_____no_output_____
###Markdown
FPGA
###Code
config = xike_config(probe=tetrodes, offset_value=32, thres_value=-500)
config.ch_ugp[14]
###Output
_____no_output_____
###Markdown
Sorting
###Code
!cp spk.bin mua.bin.spk
%gui qt
sorter = Sorter('/disk0/testbench/mua.bin',
probe=tetrodes,
fet_method='pca',
clu_method='hdbscan',
n_jobs=24)
spks = np.fromfile('./spk.bin', dtype=np.int32).reshape(-1,2).T
spks
sorter.show_mua(chs=tetrodes[0], spks=spks)
###Output
_____no_output_____
###Markdown
Calculate and Download Threshold
###Code
thr = sorter.model.mua.get_threshold()
thr
config.thres[0:84] = thr[:84]
config.thres
!rm *.bin
###Output
_____no_output_____
###Markdown
Go Sorting Again
###Code
!cp spk.bin mua.bin.spk
sorter = Sorter('/disk0/testbench/mua.bin',
probe=tetrodes,
fet_method='pca',
clu_method='hdbscan',
n_jobs=24)
sorter.run()
###Output
_____no_output_____ |
Final_Final_Project_Part2_notebook1 (1).ipynb | ###Markdown
What is the "name" of the dataset? Who is the author?
> Google Play Store, by Lavanya Gupta

What is the name of the place you found it?
> Kaggle

Where can we obtain it? (i.e., website URL)
> https://www.kaggle.com/lava18/google-play-store-apps

What is the license of the dataset? What are you allowed to do with it?
> This work is licensed under the Creative Commons Attribution 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0/. Since it is hosted on Kaggle.com, an open-source dataset platform, we are allowed to modify it as required.

How big is it in file size and in number of items?
> 10.8k rows x 13 columns, 1329 KB

Why did we choose to explore this data set for Part 2?
This dataset stores the details of the applications on Google Play. It has 13 features for each application, which allows us to explore the applications in depth and derive significant trends from the visualizations:
> Top 15 most common app categories: social, news and magazines, photography, health and fitness, communication, sports, finance, lifestyle, productivity, personalization, medical, business, tools, game, family.
> Categories of the top 100 apps in the Google Play store: maps and navigation, entertainment, education, books and reference, health and fitness, sports, travel and local, shopping, news and magazines, personalization, photography, video players, productivity, social, family, tools, communication, game.
> Correlation between the variables reviews and installs (proportional, as expected).
> Heat map to demonstrate the correlation between the features.

Choosing this data set will allow us to play around with and explore app statistics and generate some insightful visualizations.
###Code
#importing libraries
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import bqplot
import traitlets
import ipywidgets
import math
import bqplot.pyplot
%matplotlib inline
df=pd.read_csv("E:/Priya/Spring 19/Data Viz/Final Project/googleplaystore.csv")
df
df.shape
#To check if duplicate rows are present
df.drop_duplicates().shape
#New dataframe without duplicates
df1=df.drop_duplicates()
df1
#Checking datatype of Rating column
df1["Rating"].dtype
df1[df1["Rating"]>5]
#Removing the above row
df1.drop([10472], inplace=True)
df1
df1.shape
size_modify=[]
for i in list(df1["Size"].index):
if((df1["Size"].loc[i])[-1]=='M'):
size_modify.append(float((df1["Size"].loc[i])[:-1])*1024)
elif((df1["Size"].loc[i])[-1]=='k'):
size_modify.append(float((df1["Size"].loc[i])[:-1]))
else:
size_modify.append(np.nan)
size_modify
#Size values converted to float values
df1["Size"]=size_modify
df1
inst_modify=[]
for i in list(df1["Installs"].index):
if((df1["Installs"].loc[i])[-1]=='+'):
inst_modify.append(float((df1["Installs"].loc[i])[:-1].replace(',','')))
else:
inst_modify.append(float(df1["Installs"].loc[i]))
inst_modify
#Installs values converted to float numbers
df1["Installs"]=inst_modify
df1
rev_modify=[]
for i in list(df1["Reviews"].index):
rev_modify.append(float(df1["Reviews"].loc[i]))
rev_modify
#Converted number of reviews from string to float
df1["Reviews"]=rev_modify
df1
price_modify=[]
for i in list(df1["Price"].index):
if((df1["Price"].loc[i])[0]=='$'):
price_modify.append(float((df1["Price"].loc[i])[1:]))
else:
price_modify.append(float(df1["Price"].loc[i]))
price_modify
#Remove $ sign from price
df1["Price"]=price_modify
df1
#This shows the apps are repeated in the App column
len(df1["App"].unique())
#dict is being used to see which apps are repeated
dict={}
for i in list(df1.index):
dict[df1["App"].loc[i]]=0
dict
for i in list(df1.index):
dict[df1["App"].loc[i]]+=1
dict
df1[df1["App"]=="Firefox Focus: The privacy browser"]
#The above table shows that there is repetition of apps, and the corresponding repeated rows differ in values in the Reviews column,
#Thus, the next step removes those repetitions.
df2=pd.merge(df1[["App", "Reviews"]], df1[['App','Category', 'Size', 'Rating','Installs', 'Type' ,'Price', 'Content Rating', 'Genres']].drop_duplicates(), left_index=True, right_index=True)
#Dropping one of the merged App columns
df2.drop(["App_y"], axis=1, inplace=True)
df2
#Using installs and reviews values to get an idea about the popularity of the apps
df2.sort_values(by=['Installs','Reviews'], ascending=False, inplace=True)
df2
#Number of apps per category
pd.value_counts(df2["Category"])
#Most popular categories
fig = plt.figure(figsize=(30,15))
ax1 = fig.add_subplot(2,3,1)
ax1.set_title('Top 15 most common app categories')
pd.value_counts(df2["Category"])[:15].plot(kind='barh')
#Categories of most popular apps. In this case, we are considering top 100 apps
fig = plt.figure(figsize=(30,15))
ax1 = fig.add_subplot(2,3,1)
ax1.set_title('Categories of most popular apps')
pd.value_counts(df2["Category"][:99]).plot(kind='barh')
#The most popular apps mostly belong to the GAME category, although the largest number of apps developed falls into the FAMILY category.
#Therefore, we are going to explore the GAME category in detail.
review=list(df2["Reviews"])
installs=list(df2["Installs"])
review
for i in range(len(review)):
if(review[i]<1):
review[i]=np.nan
for i in range(len(installs)):
if(installs[i]<1):
installs[i]=np.nan
review=np.log10(review)
installs=np.log10(installs)
fig1 = plt.figure(figsize=(30,10))
ax1 = fig1.add_subplot(2,3,1)
ax1.set_title('Correlation between reviews and installs')
plt.xlabel("Number of reviews")
plt.ylabel("Number of installs")
plt.scatter(review, installs)
#The correlation seems to be roughly proportional, in accordance with the expected results.
game_df = df2[df2['Category']=='GAME']
fig = plt.figure(figsize=(10,10))
sns.set(style="white",font_scale=2)
sns.heatmap(game_df.dropna()[['Rating','Reviews','Size','Installs','Price']].corr(), fmt='.2f',annot=True,linewidth=2);
#The above heatmap can be used to see the correlation between the various column labels. Only the correlation between installs and reviews
#stands out as significant among all.
df2
game_df
df3=game_df[game_df["Rating"]>=4][["App_x", "Rating", "Type", "Content Rating", "Genres"]]
df3
grouped=df3.groupby(["Genres", "Rating"])["Rating"].count()
grouped
genres=list(grouped.index.levels[0])
#Assigning an index value to every genre
dict={}
for i in range(len(genres)):
dict[i]=genres[i]
dict
arr=np.zeros((11, len(genres)))
for i in range(len(genres)):
temp=list(grouped[dict[i]].index)
for j in range(len(temp)):
element=grouped[dict[i]][temp[j]]
arr[round((temp[j]-4)*10),i]=element
arr[arr== 0] = np.nan
arr
#Plotting Heat Map
col_sc = bqplot.ColorScale(scheme = "Reds")
x_sc = bqplot.OrdinalScale()
y_sc = bqplot.OrdinalScale()
# create axis - for colors, x & y
c_ax = bqplot.ColorAxis(scale = col_sc,
orientation = 'vertical',
side = 'right')
x_ax = bqplot.Axis(scale = x_sc , label= "Game category genres" )
y_ax = bqplot.Axis(scale = y_sc,
orientation = 'vertical', label="Ratings from 4.0 (0) to 5.0 (10)")
mySelectedLabel = ipywidgets.Label()
#Writing Linking Function
def get_data_value(change):
i,j = heat_map.selected[0]
v = arr[i,j] # grab data value
mySelectedLabel.value = "Total number of games = " + str(v) # set our label
content=df3[(df3["Rating"]==float(i/10+4)) & (df3["Genres"]==dict[j])].groupby(["Content Rating"])["Content Rating"].count()
bar_vert.x=list(content.index)
bar_vert.y=list(content)
bar_vert.labels=list(content.index)
type_app=df3[(df3["Rating"]==float(i/10+4)) & (df3["Genres"]==dict[j])].groupby(["Type"])["Type"].count()
bar_horz.x=list(type_app.index)
bar_horz.y=list(type_app)
bar_horz.labels=list(type_app.index)
# regenerating our heatmap to use in our fig canvas
heat_map = bqplot.GridHeatMap(color = arr,
scales = {'color': col_sc,
'row': y_sc,
'column': x_sc},
interactions = {'click': 'select'},
anchor_style = {'fill':'blue'},
selected_style = {'opacity': 1.0},
unselected_style = {'opacity': 0.8})
fig = bqplot.Figure(marks = [heat_map],
axes = [c_ax, y_ax, x_ax], title="Total number of game apps for a particular rating and genre")
#Plotting bar graph for content rating
x_sc1 = bqplot.OrdinalScale()
y_sc1 = bqplot.LinearScale()
x_ax1 = bqplot.Axis(scale = x_sc1, label="Content rating" )
y_ax1 = bqplot.Axis(scale = y_sc1,
orientation = 'vertical', label="Number of apps")
bar_vert=bqplot.pyplot.bar(x=[], y=[], scales={'x': x_sc1, 'y': y_sc1}, labels=[])
fig1= bqplot.Figure(marks = [bar_vert], axes = [y_ax1, x_ax1], title="Game apps content ratings")
#Plotting bar graph for paid/free type
x_sc2 = bqplot.OrdinalScale()
y_sc2 = bqplot.LinearScale()
x_ax2 = bqplot.Axis(scale = x_sc2, label="Type (Free/Paid)")
y_ax2 = bqplot.Axis(scale = y_sc2,
orientation = 'vertical', label= "Number of apps")
bar_horz=bqplot.pyplot.bar(x=[], y=[], scales={'x': x_sc2, 'y': y_sc2}, labels=[])
fig2= bqplot.Figure(marks = [bar_horz], axes = [y_ax2, x_ax2], title="Game apps types (free or paid)")
# Implementing observe function
heat_map.observe(get_data_value, 'selected')
s="Game category genres:"
for i in dict:
s= s+ "\n"+ str(i)+":"+ str(dict[i])
ta1 = ipywidgets.Textarea(s)
ipywidgets.VBox([mySelectedLabel, ipywidgets.HBox([fig , ta1]), ipywidgets.HBox([fig1, fig2])])
###Output
_____no_output_____ |
user/notebooks/interface/interactive_plot.ipynb | ###Markdown
Please select data using the tickboxes to compare CH4 measurements from Bilsdale and Tacolneston. The date range covered by the plot may be changed using the slider below the figure. Please allow some time for the figure below to load, as the notebook is performing a search of the HUGS object store and downloading the requested data.
###Code
from HUGS.Interface import Interface
i = Interface()
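# NOTE: `results` and `ch4_data` are assumed to be defined by earlier search/retrieval
# cells of the notebook that are not shown in this excerpt.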
plot_box = i.plotting_interface(selected_results=results, data=ch4_data)
plot_box
###Output
_____no_output_____ |
notebooks/GraphClassificationStatistics.ipynb | ###Markdown
Graphs
- data/graph_classification/graph_mutag_with_node_num.pt
- data/graph_classification/graph_enzymes_with_node_num.pt
###Code
import torch  # needed for torch.load below
mutag_graphs = torch.load("../data/graph_classification/graph_mutag_with_node_num.pt")
mutag_graphs_train = mutag_graphs["train"]
mutag_graphs_train[0]
len(mutag_graphs_train) # number of graphs
def get_graphs_stats(graphs):
def _get_graph_statistics(graph):
num_edges = len(graph["adj"].coalesce().indices().T.tolist())
num_nodes = graph["node_features"].shape[0]
return num_nodes, num_edges
stats = [_get_graph_statistics(graph) for graph in graphs]
avg_num_nodes = round(sum([s[0] for s in stats]) / len(stats), 2)
avg_num_edges = round(sum([s[1] for s in stats]) / len(stats), 2)
node_feature_dim = graphs[0]["node_features"].shape[1]
print(f"Num graphs = {len(graphs)}")
print(f"Avg. num nodes = {avg_num_nodes}")
print(f"Avg. num edges = {avg_num_edges}")
    print(f"Node feature dim = {node_feature_dim}")
get_graphs_stats(mutag_graphs_train)
###Output
_____no_output_____ |
Trainer-Collaboratories/Fine_Tuning/DensNet201/Fine_tuning_DensNet201(GAP_256_0,15).ipynb | ###Markdown
**Import Google Drive**
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
**Import Library**
###Code
import glob
import numpy as np
import os
import shutil
np.random.seed(42)
from sklearn.preprocessing import LabelEncoder
import cv2
import tensorflow as tf
import keras
import shutil
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
###Output
Using TensorFlow backend.
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
**Load Data**
###Code
os.chdir('/content/drive/My Drive/Colab Notebooks/DATA RD/')
Train = glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Train/*')
Val=glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Validation/*')
Test=glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Test/*')
import matplotlib.image as mpimg
for ima in Train[600:601]:
img=mpimg.imread(ima)
imgplot = plt.imshow(img)
plt.show()
###Output
_____no_output_____
###Markdown
**Data Preparation**
###Code
nrows = 224
ncolumns = 224
channels = 3
def read_and_process_image(list_of_images):
X = [] # images
y = [] # labels
for image in list_of_images:
X.append(cv2.resize(cv2.imread(image, cv2.IMREAD_COLOR), (nrows,ncolumns), interpolation=cv2.INTER_CUBIC)) #Read the image
#get the labels
if 'Normal' in image:
y.append(0)
elif 'Mild' in image:
y.append(1)
elif 'Moderate' in image:
y.append(2)
elif 'Severe' in image:
y.append(3)
return X, y
X_train, y_train = read_and_process_image(Train)
X_val, y_val = read_and_process_image(Val)
X_test, y_test = read_and_process_image(Test)
import seaborn as sns
import gc
gc.collect()
#Convert list to numpy array
X_train = np.array(X_train)
y_train= np.array(y_train)
X_val = np.array(X_val)
y_val= np.array(y_val)
X_test = np.array(X_test)
y_test= np.array(y_test)
print('Train:',X_train.shape,y_train.shape)
print('Val:',X_val.shape,y_val.shape)
print('Test',X_test.shape,y_test.shape)
sns.countplot(y_train)
plt.title('Total Data Training')
sns.countplot(y_val)
plt.title('Total Data Validation')
sns.countplot(y_test)
plt.title('Total Data Test')
y_train_ohe = pd.get_dummies(y_train)
y_val_ohe=pd.get_dummies(y_val)
y_test_ohe=pd.get_dummies(y_test)
y_train_ohe.shape,y_val_ohe.shape,y_test_ohe.shape
###Output
_____no_output_____
###Markdown
**Model Parameters**
###Code
batch_size = 16
EPOCHS = 100
WARMUP_EPOCHS = 2
LEARNING_RATE = 0.001
WARMUP_LEARNING_RATE = 1e-3
HEIGHT = 224
WIDTH = 224
CANAL = 3
N_CLASSES = 4
ES_PATIENCE = 5
RLROP_PATIENCE = 3
DECAY_DROP = 0.5
###Output
_____no_output_____
###Markdown
**Data Generator**
###Code
train_datagen =tf.keras.preprocessing.image.ImageDataGenerator(
rotation_range=360,
horizontal_flip=True,
vertical_flip=True)
test_datagen=tf.keras.preprocessing.image.ImageDataGenerator()
train_generator = train_datagen.flow(X_train, y_train_ohe, batch_size=batch_size)
val_generator = test_datagen.flow(X_val, y_val_ohe, batch_size=batch_size)
test_generator = test_datagen.flow(X_test, y_test_ohe, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
**Define Model**
###Code
IMG_SHAPE = (224, 224, 3)
base_model =tf.keras.applications.DenseNet201(weights='imagenet',
include_top=False,
input_shape=IMG_SHAPE)
x =tf.keras.layers.GlobalAveragePooling2D()(base_model.output)
x =tf.keras.layers.Dropout(0.15)(x)
x =tf.keras.layers.Dense(256, activation='relu')(x)
x =tf.keras.layers.Dropout(0.15)(x)
final_output =tf.keras.layers.Dense(N_CLASSES, activation='softmax', name='final_output')(x)
model =tf.keras.models.Model(inputs=base_model.inputs,outputs=final_output)
###Output
_____no_output_____
###Markdown
**Train Top Layers**
###Code
for layer in model.layers:
layer.trainable = False
for i in range(-5, 0):
model.layers[i].trainable = True
metric_list = ["accuracy"]
optimizer =tf.keras.optimizers.Adam(lr=WARMUP_LEARNING_RATE)
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=metric_list)
model.summary()
import time
start = time.time()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = val_generator.n//val_generator.batch_size
history_warmup = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=val_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
verbose=1).history
end = time.time()
print('Training time:', end - start)
###Output
WARNING:tensorflow:From <ipython-input-17-42947d619a66>:13: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
Please use Model.fit, which supports generators.
Epoch 1/2
375/375 [==============================] - 64s 171ms/step - loss: 1.0917 - accuracy: 0.5598 - val_loss: 1.3269 - val_accuracy: 0.5578
Epoch 2/2
375/375 [==============================] - 62s 164ms/step - loss: 0.8629 - accuracy: 0.6222 - val_loss: 0.9756 - val_accuracy: 0.5968
Training time: 136.40506196022034
###Markdown
**Train Fine Tuning**
###Code
for layer in model.layers:
layer.trainable = True
es =tf.keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
rlrop =tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', mode='min', patience=RLROP_PATIENCE, factor=DECAY_DROP, min_lr=1e-6, verbose=1)
callback_list = [es]
optimizer =tf.keras.optimizers.Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=metric_list)
model.summary()
history_finetunning = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=val_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callback_list,
verbose=1).history
###Output
Epoch 1/100
375/375 [==============================] - 77s 205ms/step - loss: 0.7901 - accuracy: 0.6710 - val_loss: 5.0022 - val_accuracy: 0.4026
Epoch 2/100
375/375 [==============================] - 74s 198ms/step - loss: 0.6512 - accuracy: 0.7352 - val_loss: 1.0374 - val_accuracy: 0.6559
Epoch 3/100
375/375 [==============================] - 74s 198ms/step - loss: 0.5834 - accuracy: 0.7565 - val_loss: 1.1431 - val_accuracy: 0.6015
Epoch 4/100
375/375 [==============================] - 74s 198ms/step - loss: 0.5675 - accuracy: 0.7703 - val_loss: 0.7991 - val_accuracy: 0.7171
Epoch 5/100
375/375 [==============================] - 74s 197ms/step - loss: 0.5351 - accuracy: 0.7805 - val_loss: 1.8458 - val_accuracy: 0.4933
Epoch 6/100
375/375 [==============================] - 74s 197ms/step - loss: 0.5206 - accuracy: 0.7865 - val_loss: 1.5537 - val_accuracy: 0.5551
Epoch 7/100
375/375 [==============================] - 74s 198ms/step - loss: 0.5051 - accuracy: 0.7952 - val_loss: 0.8306 - val_accuracy: 0.7023
Epoch 8/100
375/375 [==============================] - 74s 198ms/step - loss: 0.5023 - accuracy: 0.8003 - val_loss: 0.7172 - val_accuracy: 0.7359
Epoch 9/100
375/375 [==============================] - 74s 197ms/step - loss: 0.4918 - accuracy: 0.7950 - val_loss: 0.9284 - val_accuracy: 0.6673
Epoch 10/100
375/375 [==============================] - 74s 197ms/step - loss: 0.4622 - accuracy: 0.8140 - val_loss: 2.0170 - val_accuracy: 0.5121
Epoch 11/100
375/375 [==============================] - 74s 197ms/step - loss: 0.4643 - accuracy: 0.8152 - val_loss: 1.3672 - val_accuracy: 0.6163
Epoch 12/100
375/375 [==============================] - 74s 197ms/step - loss: 0.4515 - accuracy: 0.8185 - val_loss: 1.5961 - val_accuracy: 0.5376
Epoch 13/100
375/375 [==============================] - ETA: 0s - loss: 0.4486 - accuracy: 0.8210Restoring model weights from the end of the best epoch.
375/375 [==============================] - 74s 198ms/step - loss: 0.4486 - accuracy: 0.8210 - val_loss: 2.4827 - val_accuracy: 0.5323
Epoch 00013: early stopping
###Markdown
**Model Graph**
###Code
history = {'loss': history_warmup['loss'] + history_finetunning['loss'],
'val_loss': history_warmup['val_loss'] + history_finetunning['val_loss'],
'acc': history_warmup['accuracy'] + history_finetunning['accuracy'],
'val_acc': history_warmup['val_accuracy'] + history_finetunning['val_accuracy']}
sns.set_style("whitegrid")
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 18))
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['acc'], label='Train accuracy')
ax2.plot(history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
###Output
_____no_output_____
###Markdown
**Evaluate Model**
###Code
loss_Val, acc_Val = model.evaluate(X_val, y_val_ohe,batch_size=1, verbose=1)
print("Validation: accuracy = %f ; loss_v = %f" % (acc_Val, loss_Val))
lastFullTrainPred = np.empty((0, N_CLASSES))
lastFullTrainLabels = np.empty((0, N_CLASSES))
lastFullValPred = np.empty((0, N_CLASSES))
lastFullValLabels = np.empty((0, N_CLASSES))
for i in range(STEP_SIZE_TRAIN+1):
im, lbl = next(train_generator)
scores = model.predict(im, batch_size=train_generator.batch_size)
lastFullTrainPred = np.append(lastFullTrainPred, scores, axis=0)
lastFullTrainLabels = np.append(lastFullTrainLabels, lbl, axis=0)
for i in range(STEP_SIZE_VALID+1):
im, lbl = next(val_generator)
scores = model.predict(im, batch_size=val_generator.batch_size)
lastFullValPred = np.append(lastFullValPred, scores, axis=0)
lastFullValLabels = np.append(lastFullValLabels, lbl, axis=0)
lastFullComPred = np.concatenate((lastFullTrainPred, lastFullValPred))
lastFullComLabels = np.concatenate((lastFullTrainLabels, lastFullValLabels))
complete_labels = [np.argmax(label) for label in lastFullComLabels]
train_preds = [np.argmax(pred) for pred in lastFullTrainPred]
train_labels = [np.argmax(label) for label in lastFullTrainLabels]
validation_preds = [np.argmax(pred) for pred in lastFullValPred]
validation_labels = [np.argmax(label) for label in lastFullValLabels]
fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe']
train_cnf_matrix = confusion_matrix(train_labels, train_preds)
validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)
train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train')
sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax2).set_title('Validation')
plt.show()
###Output
_____no_output_____ |
doc/ProjectsExercises/2020/hw4/ipynb/hw4_ASR.ipynb | ###Markdown
Second week, Data Analysis and Machine Learning, May 29, 2020. Regression examples, from linear regression, via decision trees and various forests, to neural networks.

The main aim of this project is to study some specific regression problems, starting with the regression algorithms studied in homework set 3 (exercise 2 in particular). We will include decision trees, random forests and eventually boosting methods and neural networks with tensorflow (feel free however to write your own code). The case we encounter here is the so-called Ising model for our training data and we will focus on supervised training. We will follow closely the recent article of Mehta et al, arXiv:1803.08823. This article stands out as an excellent review on machine learning (ML) algorithms. The added benefit is that each figure and model presented in this article is accompanied by its jupyter notebook. This means that we can start using these and compare with our own results. You can also look up the Regression slides for a discussion of the Ising model (scroll down to the end). Alternatively, you can replace the Ising model throughout the exercises with the nuclear binding energies. The choice is yours. Or if you have other data sets suitable for regression, feel free to use those. What follows here is however a discussion of the Ising model. The nuclear binding energies were discussed during the lectures. With the abovementioned configurations we will determine, using first various regression methods, the value of the coupling constant for the energy of the one-dimensional Ising model. We will mainly use scikit-learn or tensorflow or other Python packages such as keras. Feel free to use the notebooks to benchmark your code.

Part a): Producing the data for the one-dimensional Ising model

The model we will employ in our studies is the so-called Ising model. Together with models like the Potts model and similar so-called lattice models, the Ising model has been widely studied in mathematics (in statistics in particular), physics, life science, chemistry and even in the social sciences in order to model social behavior. It is a simple binary value system where the variables of the model (spins, often in physics) can take two values only, for example \( \pm 1 \) or \( 0 \) and \( 1 \). The system exhibits a phase transition in two or higher dimensions, and the first person to find the analytical expressions for various expectation values was the Norwegian chemist Lars Onsager (Nobel prize in chemistry) after a tour de force mathematics exercise. In our discussions here we will stay with a physicist's approach and call the variables spins. You could replace this with any other type of binary variable, ranging from two political parties to blue and red spheres. In its simplest form we define the energy of the system as

$$\begin{equation*} E=-J\sum_{\langle kl\rangle}^{N}s_ks_l,\end{equation*}$$

with \( s_k=\pm 1 \), where \( N \) is the total number of spins and \( J \) is a coupling constant expressing the strength of the interaction between neighboring spins. The symbol \( \langle kl\rangle \) indicates that we sum over nearest neighbors only. Notice that for \( J>0 \) it is energetically favorable for neighboring spins to be aligned. This feature leads to, at low enough temperatures, a cooperative phenomenon called spontaneous magnetization. That is, through interactions between nearest neighbors, a given magnetic moment can influence the alignment of spins that are separated from the given spin by a macroscopic distance.
These long range correlations between spins are associated with a long-range order in which the lattice has a net magnetization in the absence of a magnetic field. We start by considering the one-dimensional Ising model with nearest neighbor interactions. This model does not exhibit any phase transition. Consider the 1D Ising model with nearest-neighbor interactions

$$\begin{equation*} E[\hat{s}]=-J\sum_{j=1}^{N}s_{j}s_{j+1},\end{equation*}$$

on a chain of length \( N \) with so-called periodic boundary conditions and \( S_j=\pm 1 \) Ising spin variables. In one dimension, this model has no phase transition at finite temperature. In the Python code below we generate, with a coupling coefficient set to \( J=1 \), a large number of spin configurations, say \( 10000 \), as shown in the code below. It means that our data will be a set of \( i=1\ldots n \) points of the form \( \{(E[\boldsymbol{s}^i],\boldsymbol{s}^i)\} \). Our task is to find the value of \( J \) from the data set using linear regression. Here is the Python code you need to generate the training data; see also the notebook of Mehta et al.
###Code
import numpy as np
import scipy.sparse as sp
np.random.seed(12)
import warnings
#Comment this to turn on warnings
warnings.filterwarnings('ignore')
### define Ising model params
# system size
L=40
# create 10000 random Ising states
states=np.random.choice([-1, 1], size=(10000,L))
def ising_energies(states,L):
"""
This function calculates the energies of the states in the nn Ising Hamiltonian
"""
J=np.zeros((L,L),)
for i in range(L):
J[i,(i+1)%L]-=1.0
# compute energies
E = np.einsum('...i,ij,...j->...',states,J,states)
return E
# calculate Ising energies
energies=ising_energies(states,L)
states.shape, states
energies.shape, energies
import seaborn as sns
sns.distplot(energies);
###Output
_____no_output_____
###Markdown
We can now recast the problem as a linear regression model using our codes from homework set 3 (exercise 2 in particular). The way we are going to build our model mimics the way we could think of finding, say, the gravitational constant for the gravitational force between two planets. In the absence of any prior knowledge, one sensible choice is the all-to-all Ising model

$$E_\mathrm{model}[\boldsymbol{s}^i] = - \sum_{j=1}^N \sum_{k=1}^N J_{j,k}s_{j}^is_{k}^i.$$

Here \( i \) represents a particular spin configuration (one of the possible \( n \) configurations we generated with the code above). This model is uniquely defined by the non-local coupling strengths \( J_{jk} \) which we want to learn. The model is linear in \( \mathbf{J} \), which makes it possible to use linear regression. To apply linear regression, we recast this model in the form

$$E_\mathrm{model}^i \equiv \mathbf{X}^i \cdot \mathbf{J},$$

where the vectors \( \mathbf{X}^i \) represent all two-body interactions \( \{s_{j}^is_{k}^i \}_{j,k=1}^N \), and the index \( i \) runs over the samples in the data set. To make the analogy complete, we can also represent the dot product by a single index \( p = \{j,k\} \), i.e. \( \mathbf{X}^i \cdot \mathbf{J}=X^i_pJ_p \). Note that the regression model does not include the minus sign, so we expect to learn negative \( J \)'s.

Part b): Estimating the coupling constant of the one-dimensional Ising model using linear regression

We start with the one-dimensional Ising model and use the data we have generated with \( J=1 \) in the previous point. Use linear regression, Lasso and Ridge regression as done in homework 3. Make an analysis of the guessed coupling constant as a function of the hyperparameter \( \lambda \). You can compare your results with those of Mehta et al. Make sure it is the 1D data which is used. Discuss the methods and how they perform in computing the coupling constant \( J \), and include a bias-variance analysis using the bootstrap resampling method. Discuss also the mean squared error and the \( R2 \) score as measures to assess your model. Give a critical analysis of your results.

$$\mathbf{X^{i}} = [s_1, s_2, ..., s_{N^2}]$$
$$\mathbf{J} = [J_1, J_2, ..., J_{N^2}]^T$$

where the subscript is \( p = \{j, k\} \). This brings us to how the design matrix is going to be created:
###Code
energies.shape, states.shape
J_test = np.ones((L, 1))
X_test = states[0][np.newaxis, :]
J_test.shape, X_test.shape
X_test @ J_test
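# --- Illustrative sketch (added for clarity; not part of the original hand-in) ---
# The markdown above recasts E = X . J with X^i holding all two-body products
# s_j^i * s_k^i.  One way to build that design matrix is an outer product per
# sample, flattened to N*N columns (scikit-learn is assumed to be available):
X_design = np.einsum('...i,...j->...ij', states, states).reshape(states.shape[0], -1)
from sklearn.linear_model import LinearRegression
ols = LinearRegression(fit_intercept=False).fit(X_design, energies)
J_learned = ols.coef_.reshape(L, L)
# Since the columns for (j,k) and (k,j) are identical, least squares typically splits
# the weight symmetrically, so each nearest-neighbor pair should sum to about -1:
print(J_learned[0, 1] + J_learned[1, 0])
# Ridge or Lasso from sklearn.linear_model can be swapped in here to study the
# lambda dependence asked for in Part b).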
###Output
_____no_output_____ |
macroH2A_IPS_StemCells_2021.ipynb | ###Markdown
This notebook was implemented to
- compute the *correlation networks* for the three examined experimental conditions: **epi** (Episome), **m11** (macroH2A1.1), and **m12** (macroH2A1.2) (an illustrative sketch of this step is given right below);
- summarize the obtained networks and the counts of genes belonging to **NHEJ**, **DDR not NHEJ**, **Reprogramming**, and **Senescence** pathways, which we represented as bar plots;
- export the biggest m11 and m12 networks as SIF and attribute files for easy representation in Cytoscape (sizes **62** and **8**, respectively);
- finally, compute a one-way ANOVA among all conditions (m11, m12, episome) and select the significantly most varying genes to be represented by a heatmap.
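A minimal, purely illustrative sketch of the pairwise-correlation step is shown in the next cell. The actual computation is delegated to `range_calc_Cor` from the external `macroH2A_IPS_NAR_2021_workers` module, whose code is not part of this notebook; the function name, significance cutoff and return format below are therefore assumptions made only for illustration.
###Code
# Hypothetical stand-in for macroH2A_IPS_NAR_2021_workers.range_calc_Cor (assumption:
# the real implementation may use a different criterion and return format).
import numpy as np
from scipy.stats import pearsonr
def range_calc_cor_sketch(expr_df, rows, r_cutoff=0.99):
    # For each gene index in `rows`, correlate its expression vector with every other
    # gene; keep an (r, p-value) tuple for retained pairs and 0 otherwise.
    values = expr_df.to_numpy()
    out = np.zeros((len(rows), values.shape[0]), dtype=object)
    for n, i in enumerate(rows):
        for j in range(values.shape[0]):
            if i == j:
                continue
            r, p = pearsonr(values[i], values[j])
            if abs(r) >= r_cutoff:  # placeholder cutoff, not the published criterion
                out[n, j] = (r, p)
    return out
###Output
_____no_output_____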
###Code
import os.path
import math
import pandas as pd
import numpy as np
from IPython.display import display
from multiprocessing import Pool
from functools import partial
###Output
_____no_output_____
###Markdown
Importing gene expression data
###Code
if os.path.exists("./playgrounds/data/NAR_2021/CTL_vs_macroH2A11_epi.xlsx"):
m11_file = "./playgrounds/data/NAR_2021/CTL_vs_macroH2A11_epi.xlsx"
m11_file_huv = './playgrounds/data/NAR_2021/HUVEC_vs_macroH2A11_epi.xlsx'
m12_file = "./playgrounds/data/NAR_2021/CTL_vs_macroH2A12_epi.xlsx"
m11_vs_m12_file = "./playgrounds/data/NAR_2021/macroH2A11_epi_vs_macroH2A12_epi.xlsx"
gene_lists = pd.read_excel("./playgrounds/data/NAR_2021/gene_lists_17032021.xls", sheet_name="lists")
import macroH2A_IPS_NAR_2021_workers
elif not os.path.exists("./data/NAR_2021/CTL_vs_macroH2A11_epi.xlsx"):
## Conditional branch for Google Colab
!git clone https://github.com/mazzalab/playgrounds.git
m11_file = "./playgrounds/data/NAR_2021/CTL_vs_macroH2A11_epi.xlsx"
m11_file_huv = './playgrounds/data/NAR_2021/HUVEC_vs_macroH2A11_epi.xlsx'
m12_file = "./playgrounds/data/NAR_2021/CTL_vs_macroH2A12_epi.xlsx"
m11_vs_m12_file = "./playgrounds/data/NAR_2021/macroH2A11_epi_vs_macroH2A12_epi.xlsx"
gene_lists = pd.read_excel("./playgrounds/data/NAR_2021/gene_lists_17032021.xls", sheet_name="lists")
## Import the worker function for parallel computing
import playgrounds.macroH2A_IPS_NAR_2021_workers
## Install Pyntacle
!pip install pyntacle==1.3.2
else:
m11_file = "data/NAR_2021/CTL_vs_macroH2A11_epi.xlsx"
m11_file_huv = 'data/NAR_2021/HUVEC_vs_macroH2A11_epi.xlsx'
m12_file = "data/NAR_2021/CTL_vs_macroH2A12_epi.xlsx"
m11_vs_m12_file = "data/NAR_2021/macroH2A11_epi_vs_macroH2A12_epi.xlsx"
gene_lists = pd.read_excel("data/NAR_2021/gene_lists_17032021.xls", sheet_name="lists")
import macroH2A_IPS_NAR_2021_workers
m11_m12_epi = pd.read_excel(m11_file, sheet_name="CTL_vs_macroH2A11_epi",
index_col=None, na_values=['NA'], usecols = "A,C,J:R", engine='openpyxl')
m11_m12_epi = m11_m12_epi.loc[m11_m12_epi['gene_biotype'] == "protein_coding"]
m11_m12_huv = pd.read_excel(m11_file_huv, sheet_name="HUVEC_vs_macroH2A11_epi",
index_col=None, na_values=['NA'], usecols = "A,C,M:U", engine='openpyxl')
m11_m12_huv = m11_m12_huv.loc[m11_m12_huv['gene_biotype'] == "protein_coding"]
epi_vs_m11 = pd.read_excel(m11_file, sheet_name="CTL_vs_macroH2A11_epi",
index_col=None, na_values=['NA'], usecols = "A,E,H", engine='openpyxl')
epi_vs_m12 = pd.read_excel(m12_file, sheet_name="CTL_vs_macroH2A12_epi",
index_col=None, na_values=['NA'], usecols = "A,E,H", engine='openpyxl')
m11_vs_m12 = pd.read_excel(m11_vs_m12_file, sheet_name="macroH2A11+epi vs macroH2A12+ep",
index_col=None, na_values=['NA'], usecols = "A,E,H", engine='openpyxl')
display(m11_m12_epi.head())
###Output
_____no_output_____
###Markdown
Analysis of relationships macroH2A1.1
###Code
temp = m11_m12_epi.iloc[:,5:8]
temp.index = m11_m12_epi.iloc[:,0]
# remove genes with more than one zero
temp = temp[(temp == 0).sum(1) < 2]
num_processors = 5
#Create a pool of processors
p=Pool(processes = num_processors)
func = partial(macroH2A_IPS_NAR_2021_workers.range_calc_Cor, temp)
chunklen = math.ceil(temp.shape[0] / num_processors)
chunks = [range(0, temp.shape[0])[i * chunklen:(i + 1) * chunklen] for i in range(num_processors)]
#get them to work in parallel - highly intensive !!
output = p.map(func, [chunk for chunk in chunks])
# output = p.map(func, [i for i in range(0, temp.shape[0])])
p.close()
m11_cor = np.vstack(output)
print(m11_cor)
m11_cor_df = pd.DataFrame (m11_cor)
m11_cor_df.columns = temp.index
m11_cor_df = m11_cor_df.set_index(temp.index)
m11_cor_df.to_csv("m11_cor_df.txt", sep="\t", header=True, index=True)
###Output
_____no_output_____
###Markdown
macroH2A1.2
###Code
temp = m11_m12_huv.iloc[:,8:11]
temp.index = m11_m12_huv.iloc[:,0]
# remove genes more than one zero
temp = temp[(temp == 0).sum(1) < 2]
num_processors = 5
#Create a pool of processors
p=Pool(processes = num_processors)
func = partial(macroH2A_IPS_NAR_2021_workers.range_calc_Cor, temp)
chunklen = math.ceil(temp.shape[0] / num_processors)
chunks = [range(0, temp.shape[0])[i * chunklen:(i + 1) * chunklen] for i in range(num_processors)]
#get them to work in parallel - highly intensive !!
output = p.map(func, [chunk for chunk in chunks])
p.close()
m12_cor = np.vstack(output)
print(m12_cor)
m12_cor_df = pd.DataFrame (m12_cor)
m12_cor_df.columns = temp.index
m12_cor_df = m12_cor_df.set_index(temp.index)
m12_cor_df.to_csv("m12_cor_df.txt", sep="\t", header=True, index=True)
###Output
_____no_output_____
###Markdown
Episome
###Code
temp = m11_m12_epi.iloc[:,2:5]
temp.index = m11_m12_epi.iloc[:,0]
# remove genes that contain more than one zero
temp = temp[(temp == 0).sum(1) < 2]
temp.head()
num_processors = 5
#Create a pool of processors
p=Pool(processes = num_processors)
func = partial(macroH2A_IPS_NAR_2021_workers.range_calc_Cor, temp)
chunklen = math.ceil(temp.shape[0] / num_processors)
chunks = [range(0, temp.shape[0])[i * chunklen:(i + 1) * chunklen] for i in range(num_processors)]
#get them to work in parallel - highly intensive !!
output = p.map(func, [chunk for chunk in chunks])
p.close()
epi_cor = np.vstack(output)
print(epi_cor)
epi_cor_df = pd.DataFrame(epi_cor)
epi_cor_df.columns = temp.index
epi_cor_df = epi_cor_df.set_index(temp.index)
epi_cor_df.to_csv("epi_cor_df.txt", sep="\t", header=True, index=True)
###Output
_____no_output_____
###Markdown
Format and filter datasets
###Code
## Load from file if needed
# m11_cor_df = pd.read_csv("m11_cor_df.txt", sep="\t", dtype=object, index_col='Genes')
# m12_cor_df = pd.read_csv("m12_cor_df.txt", sep="\t", dtype=object, index_col='Genes')
# epi_cor_df = pd.read_csv("epi_cor_df.txt", sep="\t", dtype=object, index_col='Genes')
###Output
_____no_output_____
###Markdown
Episome
###Code
epi_cor_conn_df = epi_cor_df.reset_index().melt(id_vars='Genes').query('value != 0')
epi_cor_conn_df = epi_cor_conn_df[epi_cor_conn_df['value']!='0']
epi_cor_conn_df.set_index(epi_cor_conn_df['Genes'], inplace=True)
epi_cor_conn_df['value2'] = [eval(x) for x in epi_cor_conn_df['value'].to_numpy()]
epi_cor_conn_df[['b1', 'b2']] = pd.DataFrame(epi_cor_conn_df['value2'].tolist(), index=epi_cor_conn_df.index)
epi_cor_conn_df = epi_cor_conn_df.drop(['value', 'value2'], axis=1)
epi_cor_conn_df.columns = ["Gene Symbol A", "Gene Symbol B", "Corr", "p_value"]
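# `fdr` is provided by the external macroH2A_IPS_NAR_2021_workers module (code not shown);
# it is assumed to return FDR-adjusted p-values, roughly equivalent to
# statsmodels.stats.multitest.multipletests(p, method='fdr_bh')[1] (Benjamini-Hochberg).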
epi_cor_conn_df[['adj_p_value']] = macroH2A_IPS_NAR_2021_workers.fdr(epi_cor_conn_df['p_value'])
epi_cor_conn_df = epi_cor_conn_df[(epi_cor_conn_df['adj_p_value'] <0.01)]
epi_cor_conn_df.to_csv("epi_cor_conn_filtered.txt", sep="\t", header=True, index=True)
epi_cor_conn_df.head()
###Output
_____no_output_____
###Markdown
macroH2A1.1
###Code
m11_cor_conn_df = m11_cor_df.reset_index().melt(id_vars='Genes').query('value != 0')
m11_cor_conn_df = m11_cor_conn_df[m11_cor_conn_df['value']!='0']
m11_cor_conn_df.set_index(m11_cor_conn_df['Genes'], inplace=True)
m11_cor_conn_df['value2'] = [eval(x) for x in m11_cor_conn_df['value'].to_numpy()]
m11_cor_conn_df[['b1', 'b2']] = pd.DataFrame(m11_cor_conn_df['value2'].tolist(), index=m11_cor_conn_df.index)
m11_cor_conn_df = m11_cor_conn_df.drop(['value', 'value2'], axis=1)
m11_cor_conn_df.columns = ["Gene Symbol A", "Gene Symbol B", "Corr", "p_value"]
m11_cor_conn_df[['adj_p_value']] = macroH2A_IPS_NAR_2021_workers.fdr(m11_cor_conn_df['p_value'])
m11_cor_conn_df = m11_cor_conn_df[(m11_cor_conn_df['adj_p_value'] <0.01)]
m11_cor_conn_df.to_csv("m11_cor_conn_filtered.txt", sep="\t", header=True, index=True)
m11_cor_conn_df.head()
###Output
_____no_output_____
###Markdown
macroH2A1.2
###Code
m12_cor_conn_df = m12_cor_df.reset_index().melt(id_vars='Genes').query('value != 0')
m12_cor_conn_df = m12_cor_conn_df[m12_cor_conn_df['value']!='0']
m12_cor_conn_df.set_index(m12_cor_conn_df['Genes'], inplace=True)
m12_cor_conn_df['value2'] = [eval(x) for x in m12_cor_conn_df['value'].to_numpy()]
m12_cor_conn_df[['b1', 'b2']] = pd.DataFrame(m12_cor_conn_df['value2'].tolist(), index=m12_cor_conn_df.index)
m12_cor_conn_df = m12_cor_conn_df.drop(['value', 'value2'], axis=1)
m12_cor_conn_df.columns = ["Gene Symbol A", "Gene Symbol B", "Corr", "p_value"]
m12_cor_conn_df[['adj_p_value']] = macroH2A_IPS_NAR_2021_workers.fdr(m12_cor_conn_df['p_value'])
m12_cor_conn_df = m12_cor_conn_df[(m12_cor_conn_df['adj_p_value'] <0.01)]
m12_cor_conn_df.to_csv("m12_cor_conn_filtered.txt", sep="\t", header=True, index=True)
m12_cor_conn_df.head()
###Output
_____no_output_____
###Markdown
Plotting results
###Code
from pyntacle.io_stream.importer import PyntacleImporter
g_epi = PyntacleImporter.Sif("data/NAR_2021/epi_cor_conn_filtered_df.sif")
g_m11 = PyntacleImporter.Sif("data/NAR_2021/m11_cor_conn_filtered_df.sif")
g_m12 = PyntacleImporter.Sif("data/NAR_2021/m12_cor_conn_filtered_df.sif")
###Output
SIF from data/NAR_2021/epi_cor_conn_filtered_df.sif imported
SIF from data/NAR_2021/m11_cor_conn_filtered_df.sif imported
SIF from data/NAR_2021/m12_cor_conn_filtered_df.sif imported
###Markdown
Select components that contain at least one gene of the lists above
###Code
cc_epi = g_epi.clusters()
cc_epi = [c.vs["name"] for c in cc_epi.subgraphs() if c.vcount() >2 and
(set(c.vs["name"]).intersection(gene_lists['NHEJ'].tolist()) or
(set(c.vs["name"]).intersection(gene_lists['Senescence'].tolist())) or
(set(c.vs["name"]).intersection(gene_lists['Reprogramming'].tolist())) or
(set(c.vs["name"]).intersection(gene_lists['DDR_not_NHEJ'].tolist()))
)]
cc_m11 = g_m11.clusters()
cc_m11 = [c.vs["name"] for c in cc_m11.subgraphs() if c.vcount() >2 and
(set(c.vs["name"]).intersection(gene_lists['NHEJ'].tolist()) or
(set(c.vs["name"]).intersection(gene_lists['Senescence'].tolist())) or
(set(c.vs["name"]).intersection(gene_lists['Reprogramming'].tolist())) or
(set(c.vs["name"]).intersection(gene_lists['DDR_not_NHEJ'].tolist()))
)]
cc_m12 = g_m12.clusters()
cc_m12 = [c.vs["name"] for c in cc_m12.subgraphs() if c.vcount() >2 and
(set(c.vs["name"]).intersection(gene_lists['NHEJ'].tolist()) or
(set(c.vs["name"]).intersection(gene_lists['Senescence'].tolist())) or
(set(c.vs["name"]).intersection(gene_lists['Reprogramming'].tolist())) or
(set(c.vs["name"]).intersection(gene_lists['DDR_not_NHEJ'].tolist()))
)]
[len(c) for c in cc_m12]
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator
%matplotlib inline
plt.rcParams.update({'figure.figsize':(2.8,1.5), 'figure.dpi':300})
plt.rcParams.update({'font.size': 7})
kwargs_m11 = dict(alpha=0.5, bins=30)
kwargs_m12 = dict(alpha=0.5, bins=3)
kwargs_epi = dict(alpha=0.5, bins=3)
temp_cc_m11 = [len(c) for c in cc_m11]
temp_cc_m12 = [len(c) for c in cc_m12]
temp_cc_epi = [len(c) for c in cc_epi]
plt.hist(temp_cc_m11, **kwargs_m11, color='g', label='macroH2A1.1')
plt.hist(temp_cc_m12, **kwargs_m12, color='b', label='macroH2A1.2')
plt.hist(temp_cc_epi, **kwargs_epi, color='r', label='episome')
plt.gca().set(ylabel='frequency', xlabel='network size') # title='Frequency Histogram',
plt.xticks(np.arange(min(min(temp_cc_m11), min(temp_cc_epi), min(temp_cc_m12)),
max(max(temp_cc_m11), max(temp_cc_epi), max(temp_cc_m12))+2,
5))
plt.yticks(np.arange(0,20,2))
# plt.xlim(50,75)
plt.legend()
plt.savefig("FigureXX_A.svg")
###Output
_____no_output_____
###Markdown
Count genes in gene lists for the clusters of sizes > 2 of mH2A1.1 co-expression network
###Code
cc_m11 = g_m11.clusters()
cc_m11_ddr_not_nhej = [(set(c.vs["name"]).intersection(gene_lists['DDR_not_NHEJ'].tolist()), len(c.vs["name"]))
for c
in cc_m11.subgraphs() if c.vcount() >2 and set(c.vs["name"]).intersection(gene_lists['DDR_not_NHEJ'].tolist())]
cc_m11_nhej = [(set(c.vs["name"]).intersection(gene_lists['NHEJ'].tolist()), len(c.vs["name"]))
for c
in cc_m11.subgraphs() if c.vcount() >2 and set(c.vs["name"]).intersection(gene_lists['NHEJ'].tolist())]
cc_m11_reprogramming = [(set(c.vs["name"]).intersection(gene_lists['Reprogramming'].tolist()), len(c.vs["name"]))
for c
in cc_m11.subgraphs() if c.vcount() >2 and set(c.vs["name"]).intersection(gene_lists['Reprogramming'].tolist())]
cc_m11_senescence = [(set(c.vs["name"]).intersection(gene_lists['Senescence'].tolist()), len(c.vs["name"]))
for c
in cc_m11.subgraphs() if c.vcount() >2 and set(c.vs["name"]).intersection(gene_lists['Senescence'].tolist())]
from collections import Counter
z = [x[1] for x in cc_m11_ddr_not_nhej]
counter_cc_m11_ddr_not_nhej = Counter(z)
z = [x[1] for x in cc_m11_nhej]
counter_cc_m11_nhej = Counter(z)
z = [x[1] for x in cc_m11_reprogramming]
counter_cc_m11_reprogramming = Counter(z)
z = [x[1] for x in cc_m11_senescence]
counter_cc_m11_senescence = Counter(z)
labels = counter_cc_m11_ddr_not_nhej + counter_cc_m11_reprogramming + counter_cc_m11_nhej + counter_cc_m11_senescence
# labels.most_common()
labels = [x[0] for x in sorted(labels.items())]
zero_dict = dict(zip(labels, [0] * len(labels)))
dict_cc_m11_ddr_not_nhej = {**zero_dict, **counter_cc_m11_ddr_not_nhej}
dict_cc_m11_nhej = {**zero_dict, **counter_cc_m11_nhej}
dict_cc_m11_reprogramming = {**zero_dict, **counter_cc_m11_reprogramming}
dict_cc_m11_senescence = {**zero_dict, **counter_cc_m11_senescence}
counts_m11_ddr_not_nhej = list(dict_cc_m11_ddr_not_nhej.values())
counts_cc_m11_nhej = list(dict_cc_m11_nhej.values())
counts_cc_m11_reprogramming = list(dict_cc_m11_reprogramming.values())
counts_cc_m11_senescence = list(dict_cc_m11_senescence.values())
x = np.arange(len(labels)) # the label locations
width = 0.20
fig, ax = plt.subplots()
rects1 = ax.bar(x - width*2, counts_m11_ddr_not_nhej, width, label='DDR not NHEJ', align='edge', color='#d55e00')
rects2 = ax.bar(x - width, counts_cc_m11_nhej, width, label='NHEJ', align='edge', color='#009e73')
rects3 = ax.bar(x, counts_cc_m11_reprogramming, width, label='Reprogramming', align='edge', color='#0072b2')
rects4 = ax.bar(x + width, counts_cc_m11_senescence, width, label='Senescence', align='edge', color='#f0e442')
ax.set_ylabel('frequency')
ax.set_xlabel('network size (macroH2A1.1)')
# ax.set_title('Scores by group and gender')
ax.set_xticks(x)
ax.set_xticklabels(labels)
ax.yaxis.set_minor_locator(MultipleLocator(1))
ax.legend()
plt.savefig("FigureXX_B.svg")
###Output
_____no_output_____
###Markdown
Create network (sif) and attribute files for cluster (size 62) of macroH2A1.1
###Code
cc_m11 = g_m11.clusters()
cc_m11_62 = [c for c in cc_m11.subgraphs() if c.vcount() == 62][0]
cc_m11_62_nodes = pd.DataFrame(cc_m11_62.vs["name"])
cc_m11_62_nodes = cc_m11_62_nodes.rename(columns={0: 'Node'})
cc_m11_62_nodes[["NHEJ"]] = [c in gene_lists['NHEJ'].tolist() for c in cc_m11_62_nodes['Node']]
cc_m11_62_nodes[["DDR_not_NHEJ"]] = [c in gene_lists['DDR_not_NHEJ'].tolist() for c in cc_m11_62_nodes['Node']]
cc_m11_62_nodes[["Reprogramming"]] = [c in gene_lists['Reprogramming'].tolist() for c in cc_m11_62_nodes['Node']]
cc_m11_62_nodes[["Senescence"]] = [c in gene_lists['Senescence'].tolist() for c in cc_m11_62_nodes['Node']]
diff_expr = [x['Gene Symbol'] for index, x in epi_vs_m11.iterrows() if
x['pvalue']<0.05 and (x['log2FoldChange'] > 1 or x['log2FoldChange'] < -1)]
cc_m11_62_nodes[["DEG"]] = [c['Node'] in diff_expr for index2, c in cc_m11_62_nodes.iterrows()]
cc_m11_62_nodes.to_csv("g_m11_62_nodes_attributes.tsv", index = False, header=True, sep='\t')
cc_m11_62_nodes.head()
from pyntacle.io_stream.exporter import PyntacleExporter
PyntacleExporter.Sif(file='g_m11_62.sif', graph=cc_m11_62, header=True)
###Output
Graph successfully exported to SIF at full path:
/scratch/tom/playgrounds/g_m11_62.sif
###Markdown
Create network (sif) and attribute files for cluster (size 10) of macroH2A1.2
###Code
cc_m12 = g_m12.clusters()
cc_m12_10 = [c for c in cc_m12.subgraphs() if c.vcount() == 10][0]
cc_m12_10_nodes = pd.DataFrame(cc_m12_10.vs["name"])
cc_m12_10_nodes = cc_m12_10_nodes.rename(columns={0: 'Node'})
cc_m12_10_nodes[["NHEJ"]] = [c in gene_lists['NHEJ'].tolist() for c in cc_m12_10_nodes['Node']]
cc_m12_10_nodes[["DDR_not_NHEJ"]] = [c in gene_lists['DDR_not_NHEJ'].tolist() for c in cc_m12_10_nodes['Node']]
cc_m12_10_nodes[["Reprogramming"]] = [c in gene_lists['Reprogramming'].tolist() for c in cc_m12_10_nodes['Node']]
cc_m12_10_nodes[["Senescence"]] = [c in gene_lists['Senescence'].tolist() for c in cc_m12_10_nodes['Node']]
diff_expr = [x['Gene Symbol'] for index, x in epi_vs_m12.iterrows() if
x['pvalue']<0.05 and (x['log2FoldChange'] > 1 or x['log2FoldChange'] < -1)]
cc_m12_10_nodes[["DEG"]] = [c['Node'] in diff_expr for index2, c in cc_m12_10_nodes.iterrows()]
cc_m12_10_nodes.to_csv("cc_m12_10_nodes_attributes.tsv", index = False, header=True, sep='\t')
cc_m12_10_nodes.head()
from pyntacle.io_stream.exporter import PyntacleExporter
PyntacleExporter.Sif(file='g_m12_10.sif', graph=cc_m12_10, header=True)
###Output
Graph successfully exported to SIF at full path:
/scratch/tom/playgrounds/g_m12_10.sif
###Markdown
Select largest components
###Code
cl = g_epi.components()
cl_sizes = cl.sizes()
giant_component_index = cl_sizes.index(max(cl_sizes))
lc_epi_names = [g_epi.vs[cl[x]]["name"] for x in set(cl.membership) if giant_component_index == x][0]
cl = g_m11.components()
cl_sizes = cl.sizes()
giant_component_index = cl_sizes.index(max(cl_sizes))
lc_m11_names = [g_m11.vs[cl[x]]["name"] for x in set(cl.membership) if x == giant_component_index][0]
cl = g_m12.components()
cl_sizes = cl.sizes()
giant_component_index = cl_sizes.index(max(cl_sizes))
lc_m12_names = [g_m12.vs[cl[x]]["name"] for x in set(cl.membership) if giant_component_index == x][0]
## m11
cc_m11_nodes = pd.DataFrame(index=lc_m11_names, columns=["Node"])
cc_m11_nodes["Node"] = lc_m11_names
cc_m11_nodes[["NHEJ"]] = [c in gene_lists['NHEJ'].tolist() for c in cc_m11_nodes['Node']]
cc_m11_nodes[["DDR_not_NHEJ"]] = [c in gene_lists['DDR_not_NHEJ'].tolist() for c in cc_m11_nodes['Node']]
cc_m11_nodes[["Reprogramming"]] = [c in gene_lists['Reprogramming'].tolist() for c in cc_m11_nodes['Node']]
cc_m11_nodes[["Senescence"]] = [c in gene_lists['Senescence'].tolist() for c in cc_m11_nodes['Node']]
diff_expr = [x['Gene Symbol'] for index, x in epi_vs_m11.iterrows() if
x['pvalue']<0.05 and (x['log2FoldChange'] > 1 or x['log2FoldChange'] < -1)]
cc_m11_nodes[["DEG"]] = [c['Node'] in diff_expr for index2, c in cc_m11_nodes.iterrows()]
cc_m11_nodes.to_csv("cc_m11_nodes_attributes.tsv", index = False, header=True, sep='\t')
## m12
cc_m12_nodes = pd.DataFrame(index=lc_m12_names, columns=["Node"])
cc_m12_nodes["Node"] = lc_m12_names
cc_m12_nodes[["NHEJ"]] = [c in gene_lists['NHEJ'].tolist() for c in cc_m12_nodes['Node']]
cc_m12_nodes[["DDR_not_NHEJ"]] = [c in gene_lists['DDR_not_NHEJ'].tolist() for c in cc_m12_nodes['Node']]
cc_m12_nodes[["Reprogramming"]] = [c in gene_lists['Reprogramming'].tolist() for c in cc_m12_nodes['Node']]
cc_m12_nodes[["Senescence"]] = [c in gene_lists['Senescence'].tolist() for c in cc_m12_nodes['Node']]
diff_expr = [x['Gene Symbol'] for index, x in epi_vs_m12.iterrows() if
x['pvalue']<0.05 and (x['log2FoldChange'] > 1 or x['log2FoldChange'] < -1)]
cc_m12_nodes[["DEG"]] = [c['Node'] in diff_expr for index2, c in cc_m12_nodes.iterrows()]
cc_m12_nodes.to_csv("cc_m12_nodes_attributes.tsv", index = False, header=True, sep='\t')
###Output
_____no_output_____
###Markdown
Compute ANOVA among all conditions (m11, m12, episome)
###Code
m11_m12_epi.head()
import scipy.stats as stats
temp = pd.DataFrame(index = m11_m12_epi.index, columns = ['fvalue', 'pvalue'])
for idx, row in m11_m12_epi.iterrows():
    # three conditions with three replicates each: columns 2-4, 5-7 and 8-10
    # (8:11 fixes the apparent off-by-one in the original row[7:11] slice)
    temp.at[idx,'fvalue'], temp.at[idx,'pvalue'] = stats.f_oneway(row[2:5], row[5:8], row[8:11])
m11_m12_epi_anova = pd.concat([m11_m12_epi, temp], axis=1)
m11_m12_epi_anova[['adj_pvalue']] = macroH2A_IPS_NAR_2021_workers.fdr(m11_m12_epi_anova['pvalue'])
m11_m12_epi_anova.head()
###Output
/home/t.mazza/miniconda3/envs/playgrounds/lib/python3.7/site-packages/scipy/stats/stats.py:3709: F_onewayConstantInputWarning: Each of the input arrays is constant;the F statistic is not defined or infinite
warnings.warn(F_onewayConstantInputWarning())
###Markdown
Print data for Heatmap
###Code
temp4hm = m11_m12_epi_anova[m11_m12_epi_anova['pvalue']<= 0.05]
temp4hm = temp4hm.drop(['gene_biotype', 'fvalue', 'pvalue', 'adj_pvalue'], axis=1)
temp4hm.head()
temp4hm.to_csv("m11_m12_epi_4hm.txt", sep="\t", header=True, index=False)
###Output
_____no_output_____
###Markdown
Compute ANOVA among all conditions (m11, m12, HUVEC)
###Code
temp = pd.DataFrame(index = m11_m12_huv.index, columns = ['fvalue', 'pvalue'])
for idx, row in m11_m12_huv.iterrows():
    # three conditions with three replicates each: columns 2-4, 5-7 and 8-10
    # (8:11 fixes the apparent off-by-one in the original row[7:11] slice)
    temp.at[idx,'fvalue'], temp.at[idx,'pvalue'] = stats.f_oneway(row[2:5], row[5:8], row[8:11])
m11_m12_huv_anova = pd.concat([m11_m12_huv, temp], axis=1)
m11_m12_huv_anova[['adj_pvalue']] = macroH2A_IPS_NAR_2021_workers.fdr(m11_m12_huv_anova['pvalue'])
del temp
m11_m12_huv_anova.head()
###Output
/home/t.mazza/miniconda3/envs/playgrounds/lib/python3.7/site-packages/scipy/stats/stats.py:3709: F_onewayConstantInputWarning: Each of the input arrays is constant;the F statistic is not defined or infinite
warnings.warn(F_onewayConstantInputWarning())
###Markdown
Print data for heatmap (m11, m12, HUVEC)
###Code
temp4hm = m11_m12_huv_anova[m11_m12_huv_anova['adj_pvalue']<= 0.05]
temp4hm = temp4hm.drop(['gene_biotype', 'fvalue', 'pvalue', 'adj_pvalue'], axis=1)
temp4hm.head()
temp4hm.to_csv("m11_m12_huv_4hm.txt", sep="\t", header=True, index=False)
del temp4hm
###Output
_____no_output_____
###Markdown
Print system and required packages information
###Code
%load_ext watermark
%watermark -v -m -p numpy,pandas,matplotlib,sklearn,traitlets,IPython,ipywidgets,openpyxl,pyntacle
# date
print(" ")
%watermark -u -n -t -z
###Output
CPython 3.7.6
IPython 7.12.0
numpy 1.18.1
pandas 1.1.4
matplotlib 3.1.3
sklearn 0.24.0
traitlets 4.3.3
IPython 7.12.0
ipywidgets 7.5.1
openpyxl 3.0.3
pyntacle 1.3.2
compiler : MSC v.1916 64 bit (AMD64)
system : Windows
release : 10
machine : AMD64
processor : Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
CPU cores : 8
interpreter: 64bit
last updated: Mon Mar 08 2021 17:36:33 W. Europe Standard Time
|
supplementary_data/Case2_Analysis.ipynb | ###Markdown
Average performance of best performing hyperparameters in Case 2
###Code
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.font_manager as font_manager
from matplotlib.pyplot import gca
from matplotlib.ticker import FormatStrFormatter
import seaborn as sns
plt.style.use('seaborn')
seed_list = [0, 1, 2, 3, 4]
n_steps = 250
fig = plt.figure(figsize=(5, 3.5))
########### On policy results ###############################
algo_list = [ "PPO", "TRPO"]
batch_list_best = [16000, 16000 ]
color = ['green', 'purple']
for i, algo in enumerate(algo_list):
batch = batch_list_best[i]
df_list = []
for j, seed in enumerate(seed_list):
fname = ("./Case2/log_"+str(algo)+"_3d-orientation-v2_id-"+str(batch)+"-0.98_"+str(seed)+"_tanh_64_64/monitor.csv")
df = pd.read_csv(fname, skiprows=1)
df['time'] = np.cumsum(df['l'].values)
df_list.append(df)
df_all = pd.concat(df_list).sort_values('time')
timesteps = df_all['time'].rolling(n_steps,min_periods=1).mean()
smooth_path = df_all['r'].rolling(n_steps,min_periods=1).mean()/1000
path_deviation = df_all['r'].rolling(n_steps,min_periods=1).std()/1000
under_line = (smooth_path-path_deviation)
over_line = (smooth_path+path_deviation)
plt.plot(timesteps, smooth_path, c=color[i], label=str(algo), linewidth = 0.5, zorder=2)
art = plt.fill_between(timesteps, under_line, over_line, color=color[i], alpha=.2, edgecolor=None, zorder=1, rasterized=True)
art.set_edgecolor(None)
########### Off policy results ###############################
algo_list = ["DDPG","SAC", "TD3"]
batch_list_best = [2000000, 2000000,2000000]
color = ['darkorange', 'red', 'blue']
for i, algo in enumerate(algo_list):
batch = batch_list_best[i]
df_list = []
for j, seed in enumerate(seed_list):
fname = ("./Case2/log_"+str(algo)+"_3d-orientation-v4_id-"+str(batch)+"-0.98_"+str(seed)+"_tanh_64_64/monitor.csv")
df = pd.read_csv(fname, skiprows=1)
df['time'] = np.cumsum(df['l'].values)
df_list.append(df)
df_all = pd.concat(df_list).sort_values('time')
timesteps = df_all['time'].rolling(n_steps,min_periods=1).mean()
smooth_path = df_all['r'].rolling(n_steps,min_periods=1).mean()/1000
path_deviation = df_all['r'].rolling(n_steps,min_periods=1).std()/1000
under_line = (smooth_path-path_deviation)
over_line = (smooth_path+path_deviation)
plt.plot(timesteps, smooth_path, c=color[i], label=str(algo), linewidth = 0.5, zorder=2)
art = plt.fill_between(timesteps, under_line, over_line, color=color[i], alpha=.2, edgecolor=None, zorder=1, rasterized=True)
art.set_edgecolor(None)
font = font_manager.FontProperties(family='Times New Roman', weight='normal', style='normal', size=12)
leg = plt.legend(ncol=1, loc="upper left", frameon = True, framealpha=1.0, labelspacing=0.2, borderpad = 0.2, prop=font)
leg.get_frame().set_edgecolor('k')
leg.get_frame().set_facecolor('w')
plt.ylabel('Episode Score (thousands)', fontsize = 12)
plt.yticks([-2, -1, 0, 1, 2, 3, 4, 5], fontsize =12, fontname='Times New Roman')
gca().yaxis.set_major_formatter(FormatStrFormatter('%g'))
plt.ylim([-1.5, 4.5])
plt.xlabel('Timesteps (millions)', fontsize = 12)
plt.ticklabel_format(style='sci', axis='x', scilimits=(6,6))
plt.xticks(np.linspace(0,10e6, 11), fontsize =12, fontname='Times New Roman')
plt.setp(plt.gca().get_xaxis().get_offset_text(), visible=False)
plt.xlim([-0.2e6, 10.2e6])
plt.tick_params(axis='both', which='major', pad=1)
gca().grid(linewidth=0.5)
plt.tight_layout()
plt.savefig('Case2_learning-curve.png', format='png', bbox_inches='tight',dpi=300)
# plt.show()
###Output
_____no_output_____
###Markdown
Average results for different batchsizes (On Policy Algorithm)
###Code
seed_list = [0, 1, 2, 3, 4]
n_steps=250
fig = plt.figure(figsize=(6.5,3))
f=0
algo_list = [ "PPO", "TRPO"]
batch_list = [1000, 4000, 16000, 64000, 128000]
color = ['red', 'blue', 'green', 'orange', 'purple', '#1b9e77', '#66a61e', 'slateblue']
label = ['(a)','(b)']
for k, algo in enumerate(algo_list):
plt.subplot(1,2,f+1)
f+=1
for i, batch in enumerate(batch_list):
df_list = []
for j, seed in enumerate(seed_list):
fname = ("./Case2/log_"+str(algo)+"_3d-orientation-v2_id-"+str(batch)+"-0.98_"+str(seed)+"_tanh_64_64/monitor.csv")
df = pd.read_csv(fname, skiprows=1)
df['time'] = np.cumsum(df['l'].values)
df_list.append(df)
df_all = pd.concat(df_list).sort_values('time')
timesteps = df_all['time'].rolling(n_steps,min_periods=1).mean()
smooth_path = df_all['r'].rolling(n_steps,min_periods=1).mean()/1000
path_deviation = df_all['r'].rolling(n_steps,min_periods=1).std()/1000
under_line = (smooth_path-path_deviation)
over_line = (smooth_path+path_deviation)
plt.plot(timesteps, smooth_path, c=color[i], label=str(batch), linewidth = 0.5, zorder=2)
art = plt.fill_between(timesteps, under_line, over_line, color=color[i], alpha=.2, edgecolor=None, zorder=1,rasterized=True)
art.set_edgecolor(None)
plt.setp(plt.gca().get_xaxis().get_offset_text(), visible=False)
plt.xticks(np.linspace(0,10e6, 6), fontsize =10, fontname='Times New Roman')
label[k]=algo
plt.xlabel('Timesteps (millions)\n'+label[k], fontsize = 10, fontname='Times New Roman',linespacing = 1.75)
plt.xlim([-0.2e6, 10.2e6])
plt.setp(plt.gca().get_yaxis().get_offset_text(), visible=False)
plt.yticks([-2.0,-1, 0,1,2,3,4, 5], fontsize =10, fontname='Times New Roman')
plt.ylabel('Episode Score (thousands)', fontsize = 10, fontname='Times New Roman')
plt.ylim([-2.3, 4.2])
gca().grid(linewidth=1.0)
plt.tick_params(axis='both', which='major', pad=2)
plt.ticklabel_format(style='sci', axis='x', scilimits=(6,6))
gca().yaxis.set_major_formatter(FormatStrFormatter('%g'))
if k == 0:
font = font_manager.FontProperties(family='Times New Roman', weight='normal' ,style='normal', size=10)
leg = plt.legend(ncol=2, loc="upper left", frameon = True, framealpha=0.7, labelspacing=0.2, borderpad = 0.2, prop=font, columnspacing=1)
leg.get_frame().set_linewidth(0.0)
plt.tight_layout()
plt.savefig('Case2_onpolicy.png', dpi=300,bbox_inches='tight')
# plt.show()
###Output
_____no_output_____
###Markdown
Average results for different replay buffer sizes (Off Policy Algorithm)
###Code
seed_list = [0, 1, 2, 3, 4]
n_steps=250
color = ['red', 'blue', 'green', 'orange', 'purple', '#1b9e77', '#66a61e', 'darkgrey']
label = ['(a)','(b)','(c)']
fig = plt.figure(figsize=(10,3))
f=0
algo_list = ["SAC", "TD3","DDPG"]
batch_list = [100000, 200000, 500000, 1000000, 2000000]
for k, algo in enumerate(algo_list):
plt.subplot(1,3,f+1)
f+=1
for i, batch in enumerate(batch_list):
df_list = []
for j, seed in enumerate(seed_list):
fname = ("./Case2/log_"+str(algo)+"_3d-orientation-v3_id-"+str(batch)+"-0.98_"+str(seed)+"_tanh_64_64/monitor.csv")
df = pd.read_csv(fname, skiprows=1)
df['time'] = np.cumsum(df['l'].values)
df_list.append(df)
df_all = pd.concat(df_list).sort_values('time')
timesteps = df_all['time'].rolling(n_steps,min_periods=1).mean()
smooth_path = df_all['r'].rolling(n_steps,min_periods=1).mean()/1000
path_deviation = df_all['r'].rolling(n_steps,min_periods=1).std()/1000
under_line = (smooth_path-path_deviation)
over_line = (smooth_path+path_deviation)
plt.plot(timesteps, smooth_path, c=color[i], label=str(batch), linewidth = 0.5, zorder=2)
art = plt.fill_between(timesteps, under_line, over_line, color=color[i], alpha=.2, edgecolor=None, zorder=1,rasterized=True)
art.set_edgecolor(None)
plt.setp(plt.gca().get_xaxis().get_offset_text(), visible=False)
plt.xticks(np.linspace(0,5e6, 6), fontsize =10, fontname='Times New Roman')
label[k]=algo
plt.xlabel('Timesteps (millions)\n'+label[k], fontsize = 10, fontname='Times New Roman',linespacing = 1.75)
plt.xlim([-0.2e6, 5.2e6])
plt.setp(plt.gca().get_yaxis().get_offset_text(), visible=False)
plt.yticks([-2.0,-1, 0,1,2,3,4, 5], fontsize =10, fontname='Times New Roman')
plt.ylabel('Episode Score (thousands)', fontsize = 10, fontname='Times New Roman')
plt.ylim([-2.3, 4.2])
gca().grid(linewidth=1.0)
plt.tick_params(axis='both', which='major', pad=2)
plt.ticklabel_format(style='sci', axis='x', scilimits=(6,6))
gca().yaxis.set_major_formatter(FormatStrFormatter('%g'))
if k == 1:
font = font_manager.FontProperties(family='Times New Roman',weight='normal',style='normal', size=10)
leg = plt.legend(ncol=2, loc="upper left", frameon = True, framealpha=0.7, labelspacing=0.2,
borderpad = 0.2, prop=font, columnspacing=1, bbox_to_anchor=(0.0, 1.01))
leg.get_frame().set_linewidth(0.0)
plt.tight_layout()
plt.savefig('Case2_offpolicy.png', dpi=300,bbox_inches='tight')
# plt.show()
###Output
_____no_output_____ |
San Francisco AirBnB.ipynb | ###Markdown
San Francisco AirBnB data analysis
1. Business Understanding
This notebook explores the following high-level business questions using data from http://insideairbnb.com/get-the-data.html :
1) What months are busiest in San Francisco? What are the price variations? This helps us understand the peak months and the corresponding prices.
2) What types of rooms are available? What is the price difference according to room type? This helps in understanding the housing options and their prices.
3) Which neighbourhood has the most listings? Is price related to the number of listings in that neighbourhood? This helps in understanding the neighbourhoods and their prices.
Data Understanding
###Code
# Importing the libraries
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import warnings
import calendar
warnings.filterwarnings("ignore")
%matplotlib inline
#Ingest calendar,listing and review data
sf_calendar = pd.read_csv('calendar.csv',parse_dates=['date'])
sf_listings = pd.read_csv('listings.csv')
sf_reviews = pd.read_csv('reviews.csv',parse_dates=['date'])
sf_calendar.head()
#change true and false as 1 and 0 respectively for considering only available==1 entries
replace_map = {'available':{'f': 0,'t' : 1}}
sf_calendar.replace(replace_map, inplace=True)
sf_calendar.head()
sf_listings.head()
sf_reviews.head()
print(sf_calendar.describe())
print(sf_listings.info())
###Output
listing_id available minimum_nights maximum_nights
count 3.114335e+06 3.114335e+06 3.114309e+06 3.114309e+06
mean 2.138650e+07 4.472932e-01 1.174129e+04 1.019015e+06
std 1.283859e+07 4.972143e-01 1.082530e+06 4.649850e+07
min 9.580000e+02 0.000000e+00 1.000000e+00 1.000000e+00
25% 9.760014e+06 0.000000e+00 2.000000e+00 2.900000e+01
50% 2.259754e+07 0.000000e+00 4.000000e+00 3.600000e+02
75% 3.270356e+07 1.000000e+00 3.000000e+01 1.125000e+03
max 4.056928e+07 1.000000e+00 1.000000e+08 2.147484e+09
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 8533 entries, 0 to 8532
Data columns (total 16 columns):
id 8533 non-null int64
name 8533 non-null object
host_id 8533 non-null int64
host_name 8479 non-null object
neighbourhood_group 0 non-null float64
neighbourhood 8533 non-null object
latitude 8533 non-null float64
longitude 8533 non-null float64
room_type 8533 non-null object
price 8533 non-null int64
minimum_nights 8533 non-null int64
number_of_reviews 8533 non-null int64
last_review 6642 non-null object
reviews_per_month 6642 non-null float64
calculated_host_listings_count 8533 non-null int64
availability_365 8533 non-null int64
dtypes: float64(4), int64(7), object(5)
memory usage: 1.0+ MB
None
###Markdown
Data Preparation In this section, we drop 2 columns: a) 'neighbourhood_group' from sf_listings as it has only NaN values, and b) 'adjusted_price' from sf_calendar as it has the same values as 'price'.
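A quick sanity check of point b), as a minimal sketch that assumes both columns are still present in `sf_calendar` at this point:
###Code
# Returns True when 'adjusted_price' carries no information beyond 'price'
# (Series.equals also treats NaN in matching positions as equal)
sf_calendar['price'].equals(sf_calendar['adjusted_price'])
###Output
_____no_output_____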
###Code
# remove redundant columns like neigbourhood_group from listings and adjusted_price from calendar
sf_listings = sf_listings.drop(columns = ['neighbourhood_group'])
sf_calendar = sf_calendar.drop(['adjusted_price'],axis=1)
#check the sizes of the dataset
print('Listings :{}'.format(sf_listings.shape))
print('Reviews :{}'.format(sf_reviews.shape))
print('Calendar :{}'.format(sf_calendar.shape))
###Output
Listings :(8533, 15)
Reviews :(382156, 2)
Calendar :(3114335, 6)
###Markdown
1. What months are busiest in San Francisco? What are the price variations? Busiest Months in San Francisco
###Code
# make new dataframe for number of reviews
rev_freq = pd.DataFrame(sf_reviews['date'].value_counts().values,
index=sf_reviews['date'].value_counts().index,
columns=['Number of reviews'])
# resample data grouping by year
rev_freq_year = rev_freq.resample('A').sum()
# Print values
rev_freq_year
#Finding out the earliest and latest review date
sf_reviews['StartDate'] = pd.to_datetime(sf_reviews['date'])
least_recent_date = sf_reviews['StartDate'].min()
recent_date = sf_reviews['StartDate'].max()
print(least_recent_date)
print(recent_date)
# Select the year 2018
rev_freq_2018 = rev_freq.loc['2018']
# The review sum per month
rev_2018_month = rev_freq_2018.resample('M').sum()
rev_2018_month['% rev'] = (rev_2018_month['Number of reviews']*100)/rev_2018_month['Number of reviews'].sum()
# Print values
rev_2018_month
def bar_plot(X,Y,x_label,y_label,title) :
"""
Description: This function can be used to plot bar plots.
Arguments:
X = values to be plotted on x-axis
Y = values to be plotted on y-axis
x_label = Label to be placed near x-axis
y_label = Label to be placed near y-axis
title = title of plot
Returns:
None
"""
# Bar plot
fig1 = plt.figure(figsize=(10, 5))
ax = fig1.add_subplot(1, 1, 1, aspect='auto')
sns.barplot(x=X, y=Y)
# Set axis label properties
ax.set_xlabel(x_label, weight='normal', size=20)
ax.set_ylabel(y_label, weight='normal', size=20)
plt.title(title, fontsize=20)
# Set tick label properties
ax.tick_params('x', labelsize=15, rotation=35)
ax.tick_params('y', labelsize=15)
plt.show()
bar_plot(rev_2018_month.index.month_name(),rev_2018_month['% rev'],'Month','Reviews percentage %',
'Reviews percentage per month in 2018')
# Select the year 2019
rev_freq_2019 = rev_freq.loc['2019']
# The review sum per month
rev_2019_month = rev_freq_2019.resample('M').sum()
rev_2019_month['% rev'] = (rev_2019_month['Number of reviews']*100)/rev_2019_month['Number of reviews'].sum()
# Print values
rev_2019_month
bar_plot(rev_2019_month.index.month_name(),rev_2019_month['% rev'],'Month','Reviews percentage %',
'Reviews percentage per month in 2019')
###Output
_____no_output_____
###Markdown
Result : August, September and October are the busiest months. Price Variations
###Code
#Remove all rows which have 0 for availibility as they cannot influence price
sf_calendar_copy = sf_calendar.copy()
sf_calendar_copy = sf_calendar_copy[sf_calendar_copy['available']==1]
# Number of price listings per year
freq_cal = pd.DataFrame(sf_calendar_copy['date'].value_counts().values,
index=sf_calendar_copy['date'].value_counts().index,
columns=['Frequency of Price'])
freq_year_cal = freq_cal.resample('A').sum()
# Print values
freq_year_cal
#start and end date of entries in calendar.csv
sf_calendar_copy['StartDate'] = pd.to_datetime(sf_calendar_copy['date'])
least_recent_date = sf_calendar_copy['StartDate'].min()
recent_date = sf_calendar_copy['StartDate'].max()
print(least_recent_date)
print(recent_date)
# Index data by date
sf_calendar_copy.index = sf_calendar_copy['date']
# Get data for 2020
sf_calendar_2020 = sf_calendar_copy.loc['2020']
# Percentage of missing values
sf_calendar_2020.isnull().mean()
# Drop rows with missing price values as there are 98 missing values
sf_calendar_2020_c = sf_calendar_2020.dropna()
# Preprocess the price variable
sf_calendar_2020_c['price'] = sf_calendar_2020_c['price'].apply(
lambda x: float(x[1:].replace(',', '')))
sf_calendar_2020_c = sf_calendar_2020_c[['price','available']]
# Print the first five rows
sf_calendar_2020_c.head()
# The price
print("Price min : ", sf_calendar_2020_c['price'].min())
print("Price max : ", sf_calendar_2020_c['price'].max())
print("Price mean : ", sf_calendar_2020_c['price'].mean())
# Resemple data by month
df_2020_month = sf_calendar_2020_c.resample('M').mean()
# difference between the price mean per month and the price mean
df_2020_month['diff mean'] = df_2020_month['price'] \
- sf_calendar_2020_c['price'].mean()
# Print data
df_2020_month
bar_plot(df_2020_month.index.month_name(),df_2020_month['diff mean'],'Month','Price difference [$]',
'Price difference from average in 2020')
###Output
_____no_output_____
###Markdown
Result : September, October and November are the most expensive months. December, January and February are the least expensive. 2. What types of rooms are available? What is the price difference according to room type?
###Code
#availability per room type
sf_listings.room_type.value_counts()
sf_houses = sf_listings.groupby(['room_type']).mean()['availability_365'].sort_values()
print(sf_houses)
#plot availability vs room_type
sf_houses.plot(kind="bar");
plt.xlabel("Type of room")
plt.ylabel("No. of available days")
plt.title("Availibility according to type of house");
#checking if room_type has missing values
sf_listings.isnull().mean()
#average price per room type
sf_houses_price = sf_listings.groupby(['room_type']).mean()['price'].sort_values()
print(sf_houses_price)
sf_houses_price.plot(kind="bar");
plt.xlabel("Type of room")
plt.ylabel("Price in $")
plt.title("Price according to type of house");
###Output
room_type
Shared room 90.453925
Private room 140.253286
Hotel room 235.292208
Entire home/apt 269.363197
Name: price, dtype: float64
###Markdown
Result : Shared rooms have the highest availability and the lowest price. 3. Which neighbourhood has the most listings? Is price related to the number of listings in that neighbourhood?
###Code
#Number of listings per neighborhood
sf_listings['neighbourhood'].value_counts()
###Output
_____no_output_____
###Markdown
Which neighbourhood has the most availability?
###Code
#available days per neighborhood
sf_listings_avail = sf_listings.groupby(['neighbourhood']).mean()['availability_365'].sort_values()
sf_listings_avail
###Output
_____no_output_____
###Markdown
Price variability with neighbourhood
###Code
#average price per neighborhood
sf_listings.groupby(['neighbourhood']).mean()['price'].sort_values()
# difference between the price mean per month and the price mean
sf_listings['diff_mean'] = sf_listings['price'] - sf_listings['price'].mean()
sf_listings_sorted = sf_listings.groupby(['neighbourhood']).mean()['diff_mean'].sort_values()
c = sf_listings_sorted.to_frame()
print(c)
# Plot the price difference
fig2 = plt.figure(figsize=(32,10))
ax = fig2.add_subplot(1, 1, 1, aspect='auto')
sns.barplot(x = sf_listings['neighbourhood'], y = sf_listings['diff_mean'],order = c.index)
# Set axis label properties
ax.set_xlabel('Neighbourhood', weight='normal', size=20)
ax.set_ylabel('Price difference [$]', weight='normal', size=20)
plt.title('Price difference from average wrt neighbourhood', fontsize=30)
# Set tick label properties
ax.tick_params('x', labelsize=15, rotation=90)
ax.tick_params('y', labelsize=15)
plt.show()
###Output
diff_mean
neighbourhood
Treasure Island/YBI -117.912516
Presidio -110.412516
Excelsior -95.550082
Bayview -93.810746
Crocker Amazon -91.902312
Ocean View -90.509738
Outer Sunset -72.910949
Lakeshore -67.344720
Visitacion Valley -61.951990
Outer Mission -56.945849
Outer Richmond -36.832729
West of Twin Peaks -35.815294
Nob Hill -29.662516
Chinatown -29.311797
Downtown/Civic Center -28.614881
Bernal Heights -24.161908
Diamond Heights -8.588987
Parkside -8.173079
Inner Richmond -5.848639
Mission -2.361819
Haight Ashbury -0.824138
Noe Valley 15.507127
Western Addition 17.725579
Financial District 17.953338
North Beach 21.287484
Glen Park 25.860211
Potrero Hill 30.443484
South of Market 36.977630
Castro/Upper Market 38.604469
Twin Peaks 40.877339
Seacliff 60.542029
Inner Sunset 92.815554
Marina 98.030783
Golden Gate Park 101.387484
Pacific Heights 106.022266
Russian Hill 108.444627
Presidio Heights 224.239658
|
adt_model/.ipynb_checkpoints/XGB-checkpoint.ipynb | ###Markdown
Define Functions
###Code
# Performce cross validation using xgboost
def xgboostcv(X, y, fold, n_estimators, lr, depth, n_jobs, gamma, min_cw, subsample, colsample):
uid = np.unique(fold)
model_pred = np.zeros(X.shape[0])
model_valid_loss = np.zeros(len(uid))
model_train_loss = np.zeros(len(uid))
for i in uid:
x_valid = X[fold==i]
x_train = X[fold!=i]
y_valid = y[fold==i]
y_train = y[fold!=i]
model = XGBRegressor(n_estimators=n_estimators, learning_rate=lr,
max_depth = depth, n_jobs = n_jobs,
gamma = gamma, min_child_weight = min_cw,
subsample = subsample, colsample_bytree = colsample, random_state=1234)
model.fit(x_train, y_train)
pred = model.predict(x_valid)
model_pred[fold==i] = pred
model_valid_loss[uid==i] = mean_squared_error(y_valid, pred)
model_train_loss[uid==i] = mean_squared_error(y_train, model.predict(x_train))
return {'pred':model_pred, 'valid_loss':model_valid_loss, 'train_loss':model_train_loss}
# Compute MSE for xgboost cross validation
def xgboostcv_mse(n, p, depth, g, min_cw, subsample, colsample):
model_cv = xgboostcv(X_train, y_train, fold_train,
int(n)*100, 10**p, int(depth), n_nodes,
10**g, min_cw, subsample, colsample)
MSE = mean_squared_error(y_train, model_cv['pred'])
return -MSE
# Display model performance metrics for each cv iteration
def cv_performance(model, y, fold):
uid = np.unique(fold)
pred = np.round(model['pred'])
y = y.reshape(-1)
model_valid_mse = np.zeros(len(uid))
model_valid_mae = np.zeros(len(uid))
model_valid_r2 = np.zeros(len(uid))
for i in uid:
pred_i = pred[fold==i]
y_i = y[fold==i]
model_valid_mse[uid==i] = mean_squared_error(y_i, pred_i)
model_valid_mae[uid==i] = np.abs(pred_i-y_i).mean()
model_valid_r2[uid==i] = r2_score(y_i, pred_i)
results = pd.DataFrame(0, index=uid,
columns=['valid_mse', 'valid_mae', 'valid_r2',
'valid_loss', 'train_loss'])
results['valid_mse'] = model_valid_mse
results['valid_mae'] = model_valid_mae
results['valid_r2'] = model_valid_r2
results['valid_loss'] = model['valid_loss']
results['train_loss'] = model['train_loss']
print(results)
# Display overall model performance metrics
def cv_overall_performance(y, y_pred):
overall_MSE = mean_squared_error(y, y_pred)
overall_MAE = (np.abs(y_pred-y)).mean()
overall_RMSE = np.sqrt(np.square(y_pred-y).mean())
overall_R2 = r2_score(y, y_pred)
print("XGB overall MSE: %0.4f" %overall_MSE)
print("XGB overall MAE: %0.4f" %overall_MAE)
print("XGB overall RMSE: %0.4f" %overall_RMSE)
print("XGB overall R^2: %0.4f" %overall_R2)
# Plot variable importance
def plot_importance(model, columns):
importances = pd.Series(model.feature_importances_, index = columns).sort_values(ascending=False)
n = len(columns)
plt.figure(figsize=(10,15))
plt.barh(np.arange(n)+0.5, importances)
plt.yticks(np.arange(0.5,n+0.5), importances.index)
plt.tick_params(axis='both', which='major', labelsize=22)
plt.ylim([0,n])
plt.gca().invert_yaxis()
plt.savefig('variable_importance.png', dpi = 150)
# Save xgboost model
def save(obj, path):
pkl_fl = open(path, 'wb')
pickle.dump(obj, pkl_fl)
pkl_fl.close()
# Load xgboost model
def load(path):
f = open(path, 'rb')
obj = pickle.load(f)
f.close()
return(obj)
###Output
_____no_output_____
###Markdown
Parameter Values
###Code
# Set a few values
validation_only = False # Whether test model with test data
n_nodes = 96 # Number of computing nodes used for hyperparameter tuning
trained = False # If a trained model exits
cols_drop = ['Temp', 'WindSp', 'Precip', 'Snow', 'StationId', 'NumberOfLanes', 'Dir', 'FC'] # Columns to be dropped
if trained:
params = load('params.dat')
xgb_cv = load('xgb_cv.dat')
xgb = load('xgb.dat')
###Output
_____no_output_____
###Markdown
Read Data
###Code
if validation_only:
raw_data_train = pd.read_csv("final_train_data_adt.csv")
data = raw_data_train.drop(cols_drop, axis=1)
if 'Dir' in data.columns:
data[['Dir']] = data[['Dir']].astype('category')
one_hot = pd.get_dummies(data[['Dir']])
data = data.drop(['Dir'], axis = 1)
data = data.join(one_hot)
if 'FC' in data.columns:
data[['FC']] = data[['FC']].astype('category')
one_hot = pd.get_dummies(data[['FC']])
data = data.drop(['FC'], axis = 1)
data = data.join(one_hot)
week_dict = {"DayOfWeek": {'Monday': 1, 'Tuesday': 2, 'Wednesday': 3, 'Thursday': 4,
'Friday': 5, 'Saturday': 6, 'Sunday': 7}}
data = data.replace(week_dict)
X = data.drop(['Volume', 'fold'], axis=1)
X_col = X.columns
y = data[['Volume']]
fold_train = data[['fold']].values.reshape(-1)
X_train = X.values
y_train = y.values
else:
raw_data_train = pd.read_csv("final_train_data_adt.csv")
raw_data_test = pd.read_csv("final_test_data_adt.csv")
raw_data_test1 = pd.DataFrame(np.concatenate((raw_data_test.values, np.zeros(raw_data_test.shape[0]).reshape(-1, 1)), axis=1),
columns = raw_data_test.columns.append(pd.Index(['fold'])))
raw_data = pd.DataFrame(np.concatenate((raw_data_train.values, raw_data_test1.values), axis=0),
columns = raw_data_train.columns)
data = raw_data.drop(cols_drop, axis=1)
if 'Dir' in data.columns:
data[['Dir']] = data[['Dir']].astype('category')
one_hot = pd.get_dummies(data[['Dir']])
data = data.drop(['Dir'], axis = 1)
data = data.join(one_hot)
if 'FC' in data.columns:
data[['FC']] = data[['FC']].astype('category')
one_hot = pd.get_dummies(data[['FC']])
data = data.drop(['FC'], axis = 1)
data = data.join(one_hot)
week_dict = {"DayOfWeek": {'Monday': 1, 'Tuesday': 2, 'Wednesday': 3, 'Thursday': 4,
'Friday': 5, 'Saturday': 6, 'Sunday': 7}}
data = data.replace(week_dict)
X = data.drop(['Volume'], axis=1)
y = data[['Volume']]
X_train = X.loc[X.fold!=0, :]
fold_train = X_train[['fold']].values.reshape(-1)
X_col = X_train.drop(['fold'], axis = 1).columns
X_train = X_train.drop(['fold'], axis = 1).values
y_train = y.loc[X.fold!=0, :].values
X_test = X.loc[X.fold==0, :]
X_test = X_test.drop(['fold'], axis = 1).values
y_test = y.loc[X.fold==0, :].values
X_col
# Explain variable names
X_name_dict = {'Temp': 'Temperature', 'WindSp': 'Wind Speed', 'Precip': 'Precipitation', 'Snow': 'Snow',
'Long': 'Longitude', 'Lat': 'Latitude', 'NumberOfLanes': 'Number of Lanes', 'SpeedLimit': 'Speed Limit',
'FRC': 'TomTom FRC', 'DayOfWeek': 'Day of Week', 'Month': 'Month', 'Hour': 'Hour',
'AvgSp': 'Average Speed', 'ProbeCount': 'Probe Count', 'Dir_E': 'Direction(East)',
'Dir_N': 'Direction(North)', 'Dir_S': 'Direction(South)', 'Dir_W': 'Direction(West)',
'FC_3R': 'FHWA FC(3R)', 'FC_3U': 'FHWA FC(3U)', 'FC_4R': 'FHWA FC(4R)', 'FC_4U': 'FHWA FC(4U)',
'FC_5R': 'FHWA FC(5R)', 'FC_5U': 'FHWA FC(5U)', 'FC_7R': 'FHWA FC(7R)', 'FC_7U': 'FHWA FC(7U)'}
data.head()
X_train.shape
if validation_only == False:
print(X_test.shape)
###Output
_____no_output_____
###Markdown
Cross Validation & Hyperparameter Optimization
###Code
# Set hyperparameter ranges for Bayesian optimization
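# Note on the encoding used by xgboostcv_mse above:
#   n     -> n_estimators  = int(n) * 100
#   p     -> learning_rate = 10 ** p   (searched on a log10 scale)
#   depth -> max_depth     = int(depth)
#   g     -> gamma         = 10 ** g   (searched on a log10 scale)
#   min_cw, subsample and colsample are passed through unchanged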
xgboostBO = BayesianOptimization(xgboostcv_mse,
{'n': (1, 10),
'p': (-4, 0),
'depth': (2, 10),
'g': (-3, 0),
'min_cw': (1, 10),
'subsample': (0.5, 1),
'colsample': (0.5, 1)
})
# Use Bayesian optimization to tune hyperparameters
import time
start_time = time.time()
xgboostBO.maximize(init_points=10, n_iter = 50)
print('-'*53)
print('Final Results')
print('XGBOOST: %f' % xgboostBO.max['target'])
print("--- %s seconds ---" % (time.time() - start_time))
# Save the hyperparameters the yield the highest model performance
params = xgboostBO.max['params']
save(params, 'params.dat')
params
# Perform cross validation using the optimal hyperparameters
xgb_cv = xgboostcv(X_train, y_train, fold_train, int(params['n'])*100,
10**params['p'], int(params['depth']), n_nodes,
10**params['g'], params['min_cw'], params['subsample'], params['colsample'])
# Display cv results for each iteration
cv_performance(xgb_cv, y_train, fold_train)
# Display overall cv results
cv_pred = xgb_cv['pred']
cv_pred[cv_pred<0] = 0
cv_overall_performance(y_train.reshape(-1), cv_pred)
# Save the cv results
save(xgb_cv, 'xgb_cv.dat')
###Output
_____no_output_____
###Markdown
Model Test
###Code
# Build an xgboost model using all the training data with the optimal hyperparameters
xgb = XGBRegressor(n_estimators=int(params['n'])*100, learning_rate=10**params['p'], max_depth = int(params['depth']),
n_jobs = n_nodes, gamma = 10**params['g'], min_child_weight = params['min_cw'],
subsample = params['subsample'], colsample_bytree = params['colsample'], random_state=1234)
xgb.fit(X_train, y_train)
# Test the trained model with test data
if validation_only == False:
y_pred = xgb.predict(X_test)
y_pred[y_pred<0] = 0
cv_overall_performance(y_test.reshape(-1), y_pred)
# Plot variable importance
col_names = [X_name_dict[i] for i in X_col]
plot_importance(xgb, col_names)
# Save the trained xgboost model
save(xgb, 'xgb.dat')
# Produce cross validation estimates or estimates for test data
train_data_pred = pd.DataFrame(np.concatenate((raw_data_train.values, cv_pred.reshape(-1, 1)), axis=1),
columns = raw_data_train.columns.append(pd.Index(['PredVolume'])))
train_data_pred.to_csv('train_data_adt_pred.csv', index = False)
if validation_only == False:
test_data_pred = pd.DataFrame(np.concatenate((raw_data_test.values, y_pred.reshape(-1, 1)), axis=1),
columns = raw_data_test.columns.append(pd.Index(['PredVolume'])))
test_data_pred.to_csv('test_data_adt_pred.csv', index = False)
###Output
_____no_output_____
###Markdown
Plot Estimations vs. Observations
###Code
# Prepare data to plot estimated and observed values
if validation_only:
if trained:
plot_df = pd.read_csv("train_data_pred.csv")
else:
plot_df = train_data_pred
else:
if trained:
plot_df = pd.read_csv("test_data_pred.csv")
else:
plot_df = test_data_pred
plot_df = plot_df.sort_values(by=['StationId', 'Date', 'Dir', 'Hour'])
plot_df = plot_df.set_index(pd.Index(range(plot_df.shape[0])))
# Define a function to plot estimated and observed values for a day
def plot_daily_estimate(frc):
indices = plot_df.index[(plot_df.FRC == frc) & (plot_df.Hour == 0)].tolist()
from_index = np.random.choice(indices, 1)[0]
to_index = from_index + 23
plot_df_sub = plot_df.loc[from_index:to_index, :]
time = pd.date_range(plot_df_sub.Date.iloc[0] + ' 00:00:00', periods=24, freq='H')
plt.figure(figsize=(20,10))
plt.plot(time, plot_df_sub.PredVolume, 'b-', label='XGBoost', lw=2)
plt.plot(time, plot_df_sub.Volume, 'r--', label='Observed', lw=3)
plt.tick_params(axis='both', which='major', labelsize=24)
plt.ylabel('Volume (vehs/hr)', fontsize=24)
plt.xlabel("Time", fontsize=24)
plt.legend(loc='upper left', shadow=True, fontsize=24)
plt.title('Station ID: {0}, MAE={1}, FRC = {2}'.format(
plot_df_sub.StationId.iloc[0],
round(np.abs(plot_df_sub.PredVolume-plot_df_sub.Volume).mean()),
plot_df_sub.FRC.iloc[0]), fontsize=40)
plt.savefig('frc_{0}.png'.format(frc), dpi = 150)
return(plot_df_sub)
# Define a function to plot estimated and observed values for a week
def plot_weekly_estimate(frc):
indices = plot_df.index[(plot_df.FRC == frc) & (plot_df.Hour == 0) & (plot_df.DayOfWeek == 'Monday')].tolist()
from_index = np.random.choice(indices, 1)[0]
to_index = from_index + 24*7-1
plot_df_sub = plot_df.loc[from_index:to_index, :]
time = pd.date_range(plot_df_sub.Date.iloc[0] + ' 00:00:00', periods=24*7, freq='H')
plt.figure(figsize=(20,10))
plt.plot(time, plot_df_sub.PredVolume, 'b-', label='XGBoost', lw=2)
plt.plot(time, plot_df_sub.Volume, 'r--', label='Observed', lw=3)
plt.tick_params(axis='both', which='major', labelsize=24)
plt.ylabel('Volume (vehs/hr)', fontsize=24)
plt.xlabel("Time", fontsize=24)
plt.legend(loc='upper left', shadow=True, fontsize=24)
plt.title('Station ID: {0}, MAE={1}, FRC = {2}'.format(
plot_df_sub.StationId.iloc[0],
round(np.abs(plot_df_sub.PredVolume-plot_df_sub.Volume).mean()),
plot_df_sub.FRC.iloc[0]), fontsize=40)
plt.savefig('frc_{0}.png'.format(frc), dpi = 150)
return(plot_df_sub)
# Plot estimated and observed values for a day
frc4_daily_plot = plot_daily_estimate(4)
save(frc4_daily_plot, 'frc4_daily_plot.dat')
# Plot estimated and observed values for a week
frc3_weekly_plot = plot_weekly_estimate(3)
save(frc3_weekly_plot, 'frc3_weekly_plot.dat')
###Output
_____no_output_____ |
groking-leetcode/5-cyclic-sort.ipynb | ###Markdown
Cyclic sort
Cyclic sort is an important technique. It can be used to efficiently handle an unsorted array whose element values are 1-n (where n is the length of the array). Since every element falls within the range given by the array's length, once the array is sorted there is a one-to-one correspondence between each element's value and its index. For example, the array `[3, 1, 2, 4, 5]` becomes `[1, 2, 3, 4, 5]` after sorting:
```
arr[0] = 1
arr[1] = 2
arr[2] = 3
arr[3] = 4
arr[4] = 5
```
From the sorted array we can see that `value == index + 1`, i.e. every element sits in its **correct** position. If an element is not in its correct position, that element may be a duplicate, and the corresponding index may point at a missing number. Cyclic sort can therefore solve LeetCode problems such as finding the missing or duplicated numbers of an array.
So what is cyclic sort? The idea is to iterate over the array and check whether the element at the current index is in its correct position: if it is, move on with the iteration; if it is not, swap it with the element at the position where it belongs. In short, there are two steps:
1. Iterate over the array.
2. Check whether the current element is at its correct index; if so, keep iterating, otherwise swap it with the element at its correct index.
The pseudocode is as follows:
```
i = 0
while i < len(nums):
    j = get_next_index(nums[i])
    if nums[i] != nums[j]:
        nums[i], nums[j] = nums[j], nums[i]
    else:
        i += 1
```
After this treatment the array is "sorted", in the sense that most elements end up at their correct indices while a few do not; those few may be duplicates. This is a bit abstract, so let's work through real problems.
Cyclic sort
> Given an array whose element values are 1 - n, where n is the length of the array and every element is unique, sort it in place with O(N) complexity.
>
> Example 1:
>
> Input: [3, 1, 5, 4, 2]
>
> Output: [1, 2, 3, 4, 5]
>
> Example 2:
>
> Input: [2, 6, 4, 3, 1, 5]
>
> Output: [1, 2, 3, 4, 5, 6]
>
> Example 3:
>
> Input: [1, 5, 6, 4, 3, 2]
>
> Output: [1, 2, 3, 4, 5, 6]
Applying the cyclic sort template above, the following code is easy to write:
###Code
class Solution:
def cyclicSort(self, nums):
i = 0
while i < len(nums):
j = nums[i] - 1
if nums[i] != nums[j]:
nums[i], nums[j] = nums[j], nums[i]
else:
i += 1
return nums
nums = [2, 6, 4, 3, 1, 5]
ret = Solution().cyclicSort(nums)
print(ret)
###Output
[1, 2, 3, 4, 5, 6]
###Markdown
The process described above works as follows: while iterating over the array, the correct index of the current element is its value - 1. If the current value is not at its correct position, we swap, and repeat until its index and value match. During this process `i` does not advance, so it looks as if there were an inner loop. Does that make the time complexity `O(N*N)`? Actually no: every swap puts one element into its correct position, so when the iteration later reaches that element no further swap is needed. Handling the current element requires at most N-1 swaps in the worst case, after which the remaining elements only need the normal iteration. The overall complexity is O(N) + O(N-1), i.e. linear. Since the sort is done in place, the space complexity is O(1).
Duplicate number
Building on cyclic sort: if the array's elements are not unique and contain duplicates, one class of problems asks us to find those duplicated numbers.
[287. Find the Duplicate Number](https://leetcode-cn.com/problems/find-the-duplicate-number/)
> Given an array nums containing n + 1 integers whose values are all between 1 and n (inclusive), at least one duplicate number must exist. Assuming there is only one duplicate number, find it.
>
> Example 1:
>
> Input: [1,3,4,2,2]
>
> Output: 2
>
> Example 2:
>
> Input: [3,1,3,4,2]
>
> Output: 3
>
> Notes:
>
> 1. You must not modify the array (assume the array is read only).
> 2. You may only use constant, O(1) extra space.
> 3. Your runtime complexity should be less than O(n^2).
> 4. There is only one duplicate number in the array, but it could be repeated more than once.
The statement guarantees that the input has exactly one duplicated element and satisfies the cyclic sort precondition (the elements are 1->n), so we can apply the cyclic sort technique: after sorting, some element is not at its correct index, and that element is the duplicate. Strictly speaking this does not satisfy the problem constraints, because the array must not be modified and cyclic sort does modify it; it is used here only to show that cyclic sort can find the duplicate number.
###Code
class Solution:
def findDuplicate(self, nums):
i = 0
while i < len(nums):
j = nums[i] - 1
if nums[i] != nums[j]:
nums[i], nums[j] = nums[j], nums[i]
else:
i += 1
for i in range(len(nums)):
if nums[i] != i + 1:
return nums[i]
return -1
nums = [1, 4, 4, 3, 2]
nums = [2, 1, 3, 3, 5, 4]
nums = [2, 4, 1, 4, 4]
ret = Solution().findDuplicate(nums)
print(ret)
###Output
4
###Markdown
The time complexity is O(N) and the space complexity is O(1). As said above, since the array must not be modified, the proper solution uses the fast and slow pointer technique.
Missing number
When an element is duplicated, the duplicate necessarily occupies the slot where some other element should sit, and that other element is the missing number. This gives another class of problems:
[268. Missing Number](https://leetcode-cn.com/problems/missing-number)
> Given an array containing n distinct numbers taken from 0, 1, 2, ..., n, find the one number in 0 .. n that is missing from the sequence.
>
> Example 1:
>
> Input: [3,0,1]
>
> Output: 2
>
> Example 2:
>
> Input: [9,6,4,2,3,5,7,0,1]
>
> Output: 8
>
> Note: Your algorithm should run in linear time. Could you implement it using only constant extra space?
###Code
class Solution:
def missingNumber(self, nums: List[int]) -> int:
i = 0
while i < len(nums):
j = nums[i] - 1
if nums[i] > 0 and nums[i] != nums[j]:
nums[i], nums[j] = nums[j], nums[i]
else:
i += 1
for i in range(len(nums)):
if nums[i] == 0:
return i + 1
return 0
###Output
_____no_output_____
###Markdown
The key point of this problem is that the array contains 0, and 0 is out of range for this kind of cyclic sort; out-of-range elements can simply be ignored. At the end we just look for the elements that are not at their correct index. A similar problem follows:
[448. Find All Numbers Disappeared in an Array](https://leetcode-cn.com/problems/find-all-numbers-disappeared-in-an-array)
> Given an array of integers where 1 ≤ a[i] ≤ n (n = size of the array), some elements appear twice and others appear once.
> Find all the elements of [1, n] that do not appear in this array.
> Could you do it without extra space and in O(n) runtime? You may assume the returned list does not count as extra space.
>
> Example:
>
> Input:
>
> [4,3,2,7,8,2,3,1]
>
> Output:
>
> [5,6]
Use the cyclic sort technique to sort the array, then iterate over it again and find the numbers that are not at their correct index. Each such index + 1 is a missing number:
###Code
from typing import *
# 448
class Solution:
def findDisappearedNumbers(self, nums: List[int]) -> List[int]:
i = 0
while i < len(nums):
j = nums[i] - 1
if nums[i] != nums[j]:
nums[i], nums[j] = nums[j], nums[i]
else:
i += 1
return [i + 1 for i in range(len(nums)) if nums[i] != i + 1]
nums = [4, 3, 2, 7, 8, 2, 3, 1]
ret = Solution().findDisappearedNumbers(nums)
print(ret)
###Output
[5, 6]
###Markdown
From the two problems above we can see that cyclic sort is usually used to find two kinds of numbers:
1. Duplicates: the numbers that are not at their correct index.
2. Missing numbers: the index of such a misplaced number, plus 1.
In both cases the key is to find the numbers that are not in their correct position, together with their indices. The next problem combines the two.
[645. Set Mismatch](https://leetcode-cn.com/problems/set-mismatch)
> The set S originally contains the integers from 1 to n. Unfortunately, due to a data error, one of the numbers in the set got duplicated to the value of another number in the set, so the set lost one integer and gained a duplicate.
>
> Given an array nums representing the data status of set S after the error, your task is to first find the number that occurs twice and then find the number that is missing, and return them as an array.
>
> Example 1:
>
> Input: nums = [1,2,2,4]
>
> Output: [2,3]
>
> Note:
>
> The length of the given array is in the range [2, 10000].
>
> The given array is unsorted.
###Code
class Solution:
def findErrorNums(self, nums: List[int]) -> List[int]:
i = 0
while i < len(nums):
j = nums[i] - 1
if nums[i] != nums[j]:
nums[i], nums[j] = nums[j], nums[i]
else:
i += 1
for i in range(len(nums)):
if nums[i] != i + 1:
return [nums[i], i+1]
return []
nums = [1,2,2,4]
ret = Solution().findErrorNums(nums)
print(ret)
###Output
[2, 3]
###Markdown
`[nums[i], i+1]` is exactly the pair of **duplicate** number and **missing** number. The time complexity is still linear, and no extra space is used.
Handling out-of-range values
An important precondition of cyclic sort is that the array elements lie in the range 1 - n. As in problem 268 above, in some problems the array elements can fall outside that range. For such problems it is still worth checking whether cyclic sort applies; a useful tip is to give it a try whenever you need to find missing or duplicated numbers. For example:
[41. First Missing Positive](https://leetcode-cn.com/problems/first-missing-positive/)
> Given an unsorted integer array, find the smallest missing positive integer.
>
> Example 1:
>
> Input: [1,2,0]
>
> Output: 3
>
> Example 2:
>
> Input: [3,4,-1,1]
>
> Output: 2
>
> Example 3:
>
> Input: [7,8,9,11,12]
>
> Output: 1
>
> Note: Your algorithm should run in O(n) time and use constant extra space.
To find the smallest missing positive integer, follow the cyclic sort approach and simply leave the out-of-range elements where they are. Once the "sorting" is done, the first number in a wrong position gives the answer. The code is as follows:
###Code
# 41
class Solution:
def firstMissingPositive(self, nums: List[int]) -> int:
if not nums:
return 1
i = 0
while i < len(nums):
j = nums[i] - 1
if 0 < nums[i] and j < len(nums) and nums[i] != nums[j]:
nums[i], nums[j] = nums[j], nums[i]
else:
                # out of range, or already in its correct position
i += 1
for i in range(len(nums)):
if i != nums[i] - 1:
return i + 1
        # all numbers are in their correct positions
return len(nums) + 1
nums = [7,8,9,11,12]
print(nums)
###Output
[7, 8, 9, 11, 12]
###Markdown
Although this is a Hard problem, the cyclic sort technique keeps the solution idea quite clear. There is also a variant of this problem: instead of finding the smallest missing positive integer, find the first k missing positive numbers. Note that for an array that, once "sorted", has all its positive integers in the correct positions, the returned values have to go beyond the array, i.e. array length + 1 and onwards. The code is as follows:
###Code
class Solution:
def findFirstKMissingPositive(self, nums, k):
missingNumbers = []
i = 0
while i < len(nums):
j = nums[i] - 1
if 0 < nums[i] and j < len(nums) and nums[i] != nums[j]:
nums[i], nums[j] = nums[j], nums[i]
else:
i += 1
extra_nums = set()
for i in range(len(nums)):
if nums[i] != i + 1 and len(missingNumbers) < k:
missingNumbers.append(i+1)
extra_nums.add(nums[i])
i = 1
while len(missingNumbers) < k:
cur_num = len(nums) + i
if cur_num not in extra_nums:
missingNumbers.append(cur_num)
i += 1
return missingNumbers
nums = [3, -1, 4, 5, 5]
k = 3
nums = [2, 3, 4]
k = 3
ret = Solution().findFirstKMissingPositive(nums, k)
print(ret)
###Output
[1, 5, 6]
###Markdown
Couples holding hands
The essential trick of cyclic sort is **swapping**: after each swap some elements end up in their correct positions. There is one more LeetCode problem that is handled by swapping elements; although it is tagged cyclic sort, it is really a greedy algorithm, but borrowing the element-swapping trick of cyclic sort makes it very convenient to solve.
[765. Couples Holding Hands](https://leetcode-cn.com/problems/couples-holding-hands)
> N couples sit in 2N seats arranged in a row and want to hold each other's hands. Compute the minimum number of swaps so that every couple is sitting side by side. A swap consists of choosing any two people, who then stand up and switch seats.
>
> The people and seats are represented by integers from 0 to 2N-1; the couples are numbered in order, the first couple being (0, 1), the second (2, 3), and so on, with the last couple being (2N-2, 2N-1).
>
> The couples' initial seating row[i] is the value of the person who is initially sitting in the i-th seat.
>
> Example 1:
>
> Input: row = [0, 2, 1, 3]
>
> Output: 1
>
> Explanation: We only need to swap row[1] and row[2].
>
> Example 2:
>
> Input: row = [3, 2, 0, 1]
>
> Output: 0
>
> Explanation: No swap is needed; all couples can already hold hands.
>
> Note: len(row) is even and in the range [4, 60]. row is guaranteed to be a permutation of 0...len(row)-1.
From the statement, whether two values form a couple follows directly from the definition:
```
2N-2 and 2N-1, with N = (index // 2) + 1
```
For adjacent numbers like these, however, it is more convenient to use the XOR operator `^`. The solution idea is a greedy strategy: assume the current element is in the right place, then look for its partner among the remaining elements; once the partner is found, that couple is settled and we move on to the next pair. If the pair is already a couple, just skip it. The code is as follows:
###Code
class Solution:
def minSwapsCouples(self, row: List[int]) -> int:
ret = 0
i = 0
while i < len(row):
if not self.isCouple(row[i], row[i + 1]):
j = i + 1
while j < len(row):
if self.isCouple(row[i], row[j]):
row[i + 1], row[j] = row[j], row[i + 1]
ret += 1
break
j += 1
i += 2
return ret
def isCouple(self, i, j):
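        # x ^ 1 flips the lowest bit, mapping 2k -> 2k+1 and 2k+1 -> 2k, i.e. to x's partner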
return i ^ 1 == j
###Output
_____no_output_____ |
Web_Scraping/Scraping SS.COM.ipynb | ###Markdown
Scraping SS.COM Exercise for web scraping. In this exercise we have to build a pandas DataFrame of all wood-plot sale listings from every Latvian region, and save the DataFrame to an Excel file.
###Code
import requests
from bs4 import BeautifulSoup as soup
import pandas as pd
url = "https://www.ss.com/lv/real-estate/wood/"
req = requests.get(url)
req.status_code
# 'lxml' is sometimes a better parser than the default python parser 'html.parser', but I am going to use html.parser, it is easier to remember.
# soup_obj = soup(req.text, 'lxml')
soup_obj = soup(req.text, 'html.parser')
soup_obj.find('body')
# or
# soup_obj.body
# it is the same output
# We need to find all urls for all regions first.
# We found that our desired 'a' tags have class 'a_category', so we are going to search all 'a' tags with this category.
regions = soup_obj.find_all('a', {'class': 'a_category'})
# or
# regions = soup_obj.find_all('a', class_ = 'a_category')
# we also can search using 'td' width parameter as it is uniquely 25% for our specific 3 'td' tags. But after this we still need to find all 'a' tags, so it is longer process.
# regions = soup_obj.find_all('td', {'width': '25%'})
len(regions)
regions[0]
# Now we can find out region urls.
# 'href' basicaly has url string which we have to add to base url.
regions[0]['href']
baseurl = 'https://www.ss.com'
# My initial variant: Returns list with regions strings.
# regionlist = []
# for i in regions:
# split_url = i['href'].split('/')
# regionlist.append(split_url[-2])
# regionlist
# urllist = [url + region + '/' for region in regionlist]
# urllist
# Better variant: One liner.
urllist = [baseurl + region['href'] for region in regions]
urllist
# Adding 'sell/' string at the end, so we get only sellers not buyers.
sellurls = [el + 'sell/' for el in urllist]
sellurls
riga_region1 = requests.get(sellurls[0])
riga_region1.status_code
# Find out how many tables there is in this riga region page.
riga_soup_obj = soup(riga_region1.text, 'html.parser')
riga_page1 = riga_soup_obj.find_all('table')
len(riga_page1)
# Find out which table is the one we are looking for. In this case it is the table with index 8.
riga_page1[8]
# make data frame from requested html text.
riga_df_list1 = pd.read_html(riga_region1.text)
riga_df_list1
riga_df1 = riga_df_list1[8]
riga_df1
# Next we should slice the area from table with necessary data and add right column names in right places.
riga_one_page_table1 = riga_df1.iloc[1:, 2:]
riga_one_page_table1
riga_one_page_table1.columns = ['Sludinājuma apraksts', 'Pagasts', 'Platība', 'Cena']
riga_one_page_table1
# regionlist = []
# for i in regions:
# split_url = i['href'].split('/')
# regionlist.append(split_url[-2])
# print(regionlist)
['https://www.ss.com/lv/real-estate/wood/riga-region/sell/',
'https://www.ss.com/lv/real-estate/wood/aizkraukle-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/aluksne-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/balvi-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/bauska-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/cesis-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/daugavpils-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/dobele-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/gulbene-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/jekabpils-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/jelgava-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/kraslava-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/kuldiga-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/liepaja-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/limbadzi-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/ludza-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/madona-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/ogre-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/preili-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/rezekne-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/saldus-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/talsi-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/tukums-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/valka-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/valmiera-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/ventspils-and-reg/sell/',
'https://www.ss.com/lv/real-estate/wood/other/sell/']
# 3 pages
a = 'https://www.ss.com/lv/real-estate/wood/riga-region/sell/'
# 1 page
b = 'https://www.ss.com/lv/real-estate/wood/aizkraukle-and-reg/sell/'
# 0 pages
c = 'https://www.ss.com/lv/real-estate/wood/dobele-and-reg/sell/'
# print(sellurls[1])
riga_region2 = requests.get('https://www.ss.com/lv/real-estate/wood/aizkraukle-and-reg/sell/')
# print(sellurls[0])
riga_df_list2 = pd.read_html(riga_region2.text)
print(len(riga_df_list2))
riga_df2 = riga_df_list2[6]
riga_df2
# riga_soup_obj = soup(riga_region2.text, 'html.parser')
# riga_page1 = riga_soup_obj.find_all('table')
# riga_page_option1 = riga_soup_obj.find_all('option', {'value': f'/lv/real-estate/wood/riga-region/sell/'})
# print('-------------------', riga_page_option1)
# print(riga_df2)
# riga_one_page_table2 = riga_df2.iloc[1:, 2:]
# riga_one_page_table2.columns = ['Sludinājuma apraksts', 'Pagasts', 'Platība', 'Cena']
# riga_one_page_table2
# Check if region has multiple pages. If yes, then find how many.
riga_region = requests.get('https://www.ss.com/lv/real-estate/wood/riga-region/buy/')
soup_obj = soup(riga_region.text, 'html.parser')
pages_div = soup_obj.find_all('div', {'class': 'td2'})
print('pages_div================', pages_div, '\n')
finding_number_of_pages = pages_div[0].find('a')
print('finding_number_of_pages==============', finding_number_of_pages, '\n')
listy = str(finding_number_of_pages).split('.html')
print('listy===============', listy, '\n')
listnr = listy[0].split('page')
print('listnr===============', listnr, '\n')
number_of_pages = listnr[-1]  # the last chunk after 'page' holds the page number (as a string)
number_of_pages
# Now we must loop over all region URLs, parse each region's listings table
# and combine everything into a single DataFrame (sketch continued below)
all_tables = []
for region_nr in range(len(sellurls)):
    riga_region = requests.get(sellurls[region_nr])
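    # A possible completion sketch: assumes the listings table sits at table index 8
    # on every region page, as it does on the Riga page inspected above.
    try:
        region_df = pd.read_html(riga_region.text)[8].iloc[1:, 2:]
    except (IndexError, ValueError):
        continue  # region without listings, or with a different page layout
    region_df.columns = ['Sludinājuma apraksts', 'Pagasts', 'Platība', 'Cena']
    all_tables.append(region_df)

# Combine all regions and save to an Excel file ('ss_com_woods.xlsx' is an example name;
# DataFrame.to_excel needs openpyxl or xlsxwriter installed)
all_woods_df = pd.concat(all_tables, ignore_index=True)
all_woods_df.to_excel('ss_com_woods.xlsx', index=False)
all_woods_df.head()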
###Output
_____no_output_____ |
notebooks/create_loss_images.ipynb | ###Markdown
functions_0to1
###Code
functions_0to1 = {
"log_loss": log_loss,
"sigmoid_focal_crossentropy": sigmoid_focal_crossentropy,
}
# y_preds = jnp.linspace(start=-1, stop=2, num=1000)
y_preds = list(jnp.linspace(start=0, stop=1, num=1000))
y_true0 = {}
y_true1 = {}
for function_name, function in functions_0to1.items():
y_true0[function_name] = [float(function(y_true=jnp.array([0.]), y_pred=jnp.array([y_pred]))) for y_pred in y_preds]
y_true1[function_name] = [float(function(y_true=jnp.array([1.]), y_pred=jnp.array([y_pred]))) for y_pred in y_preds]
df_y_true0 = pd.DataFrame({
"y_pred": y_preds,
"log_loss": y_true0["log_loss"],
"sigmoid_focal_crossentropy": y_true0["sigmoid_focal_crossentropy"],
})
df_y_true0.hvplot.line(x="y_pred", y=["log_loss", "sigmoid_focal_crossentropy"],
ylabel="loss value").opts(title="Comparing loss functions when y_true=0")
df_y_true1 = pd.DataFrame({
"y_pred": y_preds,
"log_loss": y_true1["log_loss"],
"sigmoid_focal_crossentropy": y_true1["sigmoid_focal_crossentropy"],
})
df_y_true1.hvplot.line(x="y_pred", y=["log_loss", "sigmoid_focal_crossentropy"],
ylabel="loss value").opts(title="Comparing loss functions when y_true=1")
###Output
_____no_output_____
###Markdown
Single loss functions loss_log
###Code
from jax_toolkit.losses.classification import log_loss
y_preds = list(jnp.linspace(start=0, stop=1, num=1000))
y_true0 = [float(log_loss(y_true=jnp.array([0.]), y_pred=jnp.array([y_pred]))) for y_pred in y_preds]
y_true1 = [float(log_loss(y_true=jnp.array([1.]), y_pred=jnp.array([y_pred]))) for y_pred in y_preds]
df = pd.DataFrame({
"y_pred": y_preds,
"y_true=0": y_true0,
"y_true=1": y_true1,
})
df.hvplot.line(x="y_pred", y=["y_true=0", "y_true=1"],
ylabel="loss value").opts(title="log_loss")
###Output
_____no_output_____
###Markdown
sigmoid_focal_crossentropy
###Code
from jax_toolkit.losses.classification import sigmoid_focal_crossentropy
# y_preds = jnp.linspace(start=-1, stop=2, num=1000)
y_preds = list(jnp.linspace(start=0, stop=1, num=1000))
y_true0 = [float(sigmoid_focal_crossentropy(y_true=jnp.array([0.]), y_pred=jnp.array([y_pred]))) for y_pred in y_preds]
y_true1 = [float(sigmoid_focal_crossentropy(y_true=jnp.array([1.]), y_pred=jnp.array([y_pred]))) for y_pred in y_preds]
df = pd.DataFrame({
"y_pred": y_preds,
"y_true=0": y_true0,
"y_true=1": y_true1,
})
df.hvplot.line(x="y_pred", y=["y_true=0", "y_true=1"],
ylabel="loss value").opts(title="sigmoid_focal_crossentropy")
###Output
_____no_output_____
###Markdown
squared_hinge
###Code
from jax_toolkit.losses.classification import squared_hinge
# y_preds = jnp.linspace(start=-1, stop=2, num=1000)
y_preds = list(jnp.linspace(start=-2, stop=2, num=1000))
y_true0 = [float(squared_hinge(y_true=jnp.array([-1.]), y_pred=jnp.array([y_pred]))) for y_pred in y_preds]
y_true1 = [float(squared_hinge(y_true=jnp.array([1.]), y_pred=jnp.array([y_pred]))) for y_pred in y_preds]
df = pd.DataFrame({
"y_pred": y_preds,
"y_true=-1": y_true0,
"y_true=1": y_true1,
})
df.hvplot.line(x="y_pred", y=["y_true=-1", "y_true=1"],
ylabel="loss value").opts(title="squared_hinge")
###Output
_____no_output_____ |
era5/era5-explorer-simple.ipynb | ###Markdown
ERA5 Explorer
###Code
from datetime import datetime
import os
import os.path
from pathlib import Path
import subprocess
import warnings
from IPython.display import clear_output, display, HTML, Markdown
import ipywidgets as widgets
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import xarray as xr
from ddsapi import Client
%matplotlib inline
# Request-related options
# dataset_options = [
# ('ERA-5 Single Levels', 'era5-single-levels')
# ]
product_type_options = [
('Reanalysis', 'reanalysis')
]
variable_options = [
('2 m Dew Point Temperature', '2_metre_dewpoint_temperature'),
('Air Pressure at Mean Sea Level', 'air_pressure_at_mean_sea_level'),
('LWE Thickness of Snowfall Amount', 'lwe_thickness_of_snowfall_amount'),
('Surface Air Pressure', 'surface_air_pressure'),
('Surface Short Wave Flux', 'surface_downwelling_shortwave_flux_in_air'),
('Surface Thermal Radiation', 'surface_thermal_radiation_downwards'),
('2 m Temperature', '2_metre_temperature'),
('Total Precipitation', 'total_precipitation'),
('10 m Wind U-Component', '10_metre_u_wind_component'),
('10 m Wind V-Component', '10_metre_v_wind_component')
]
# latitude_bounds = {
# 'min': -90,
# 'max': 90
# }
# longitude_bounds = {
# 'min': 0,
# 'max': 360
# }
# start_date = {'year': 1979, 'month': 1, 'day': 1}
# stop_date = {'year': 2020, 'month': 12, 'day': 31}
start_date = {
'year': 2020,
'month': 1,
'day': 1
}
stop_date = {
'year': 2020,
'month': 1,
'day': 5
}
# Auxiliary data
_DIMENSIONS = {'time', 'latitude', 'longitude'}
data = {'dset': None, 'fpath': None, 'is_updated': False}
# Input and output widgets
out = widgets.Output()
intro = widgets.HTML(
value='<b>Please, choose request-related options:</b>'
)
dds_api_key = widgets.Password(
description='API Key:',
placeholder='Please, enter DDS API key',
layout=widgets.Layout(width='auto', height='auto')
)
# datasets = widgets.Dropdown(
# options=dataset_options,
# index=0,
# disabled=False,
# description='Dataset:',
# layout=widgets.Layout(width='auto', height='auto')
# )
product_types = widgets.Dropdown(
options=product_type_options,
index=0,
disabled=False,
description='Product Type:',
layout=widgets.Layout(width='auto', height='auto')
)
variables = widgets.SelectMultiple(
options=variable_options,
index=(0,),
disabled=False,
description='Variables:',
rows=len(variable_options),
layout=widgets.Layout(width='auto', height='auto')
)
area_grid = widgets.GridspecLayout(
n_rows=6,
n_columns=3,
grid_gap='0px',
width='auto',
height='auto'
)
area_grid[0, 0] = widgets.HTML(
value='Area bounds:',
layout=widgets.Layout(width='auto', height='auto')
)
area_grid[0, 1] = widgets.HTML(
value='North',
layout=widgets.Layout(width='50%', height='auto')
)
area_grid[1, 1] = area_north = widgets.Text(
value='48.0',
placeholder='',
description='',
disabled=False,
layout=widgets.Layout(width='50%', height='auto')
)
area_grid[2, 0] = widgets.HTML(
value='West',
layout=widgets.Layout(width='50%', height='auto')
)
area_grid[2, 2] = widgets.HTML(
value='East',
layout=widgets.Layout(width='50%', height='auto')
)
area_grid[3, 0] = area_west = widgets.Text(
value='5.0',
placeholder='',
description='',
disabled=False,
layout=widgets.Layout(width='50%', height='auto')
)
area_grid[3, 2] = area_east = widgets.Text(
value='18.0',
placeholder='',
description='',
disabled=False,
layout=widgets.Layout(width='50%', height='auto')
)
area_grid[4, 1] = widgets.HTML(
value='South',
layout=widgets.Layout(width='50%', height='auto')
)
area_grid[5, 1] = area_south = widgets.Text(
value='35.0',
placeholder='',
description='',
disabled=False,
layout=widgets.Layout(width='50%', height='auto')
)
# latitudes = widgets.SelectionRangeSlider(
# options=[
# str(i)
# for i in range(latitude_bounds['min'], latitude_bounds['max'] + 1)
# ],
# index=(125, 140),
# disabled=False,
# description='Latitude:',
# orientation='horizontal',
# readout=True,
# continuous_update=False,
# layout=widgets.Layout(width='auto', height='auto')
# )
# longitudes = widgets.SelectionRangeSlider(
# options=[
# str(i)
# for i in range(longitude_bounds['min'], longitude_bounds['max'] + 1)
# ],
# index=(5, 18),
# disabled=False,
# description='Longitude:',
# orientation='horizontal',
# readout=True,
# continuous_update=False,
# layout=widgets.Layout(width='auto', height='auto')
# )
start_datetime = widgets.DatePicker(
value=datetime(**start_date),
disabled=False,
description='Start Date:',
layout=widgets.Layout(width='auto', height='auto')
)
stop_datetime = widgets.DatePicker(
value=datetime(**stop_date),
disabled=False,
description='Stop Date:',
layout=widgets.Layout(width='auto', height='auto')
)
run = widgets.Button(
description='Retrieve',
tooltip='Retrieve',
icon='download',
disabled=False,
button_style='',
layout=widgets.Layout(width='auto', height='auto')
)
time_indices = widgets.IntSlider(
value=0,
min=0,
max=1,
step=1,
description='Plot Time:',
disabled=True,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d',
layout=widgets.Layout(width='auto', height='auto')
)
selected_variables = widgets.Dropdown(
options=[],
# index=0,
disabled=True,
description='Plot Variable:',
layout=widgets.Layout(width='auto', height='auto')
)
link = widgets.HTML(
value='Link currently not available',
# placeholder='Link currently not available',
description='Link:',
layout=widgets.Layout(width='auto', height='auto')
)
in_box = widgets.VBox(
children=[
intro,
dds_api_key,
# datasets,
product_types,
variables,
area_grid,
# latitudes,
# longitudes,
start_datetime,
stop_datetime,
run
],
layout=widgets.Layout(width='auto', height='auto')
)
out_box = widgets.VBox(
children=[
out,
time_indices,
selected_variables,
link
],
layout=widgets.Layout(width='auto', height='auto')
)
form = widgets.TwoByTwoLayout(
top_left=in_box,
top_right=out_box,
grid_gap='0px'
)
# Required functions
def plot(time_index=0, variable=None):
if not variable:
variable = selected_variables.options[0]
darr = data['dset'][variable].isel(indexers={'time': time_index})
darr.plot.pcolormesh()
plt.show()
def display_widgets():
display(form)
def retrieve():
api_key = dds_api_key.value
dataset = 'era5-single-levels' # datasets.value
request = {
'product_type': product_types.value,
'variable': variables.value,
'area': {
'south': float(area_south.value),
'north': float(area_north.value),
'west': float(area_west.value),
'east': float(area_east.value)
# 'south': float(latitudes.value[0]),
# 'north': float(latitudes.value[1]),
# 'west': float(longitudes.value[0]),
# 'east': float(longitudes.value[1])
},
'time': {
'start': start_datetime.value.isoformat(),
'stop': stop_datetime.value.isoformat()
},
'format': 'netcdf'
}
data['fpath'] = file_path = f"{dataset}-{request['product_type']}.nc"
client = (
Client(quiet=True, url='https://ddsapi.cmcc.it/v1', key=api_key)
if api_key else
Client()
)
# display(HTML(f'Retrieve initialized at {datetime.now().time()}.'))
client.retrieve(dataset, request, file_path)
# display(HTML(f'Retrieve completed at {datetime.now().time()}.'))
# !clamscan -r --bell -i {os.getcwd()}
# scan_result = subprocess.run(
# ['clamscan', '-r', '--bell', '-i', os.getcwd()],
# capture_output=True
# )
# print(scan_result.stdout.decode('UTF-8'))
# if err := scan_result.stderr.decode('UTF-8'):
# warnings.warn(
# message=f'scan not completed:\n{err}',
# category=RuntimeWarning
# )
def reset():
time_indices.value = 0
time_indices.max = 1
time_indices.disabled = True
selected_variables.options = []
selected_variables.disabled = True
def update_data():
# NOTE: This is an embarrassingly dirty hack to avoid the problem due to
# which `xarray.open_dataset` returns previously loaded (possibly cached)
# dataset.
with xr.open_dataset(data['fpath'], cache=False) as dset:
pass
with xr.open_dataset(data['fpath'], cache=False) as dset:
data['dset'] = dset
vars_ = sorted(list(set(dset.variables.keys()) - _DIMENSIONS))
time_indices.value = 0
time_indices.max = dset.coords['time'].size - 1
time_indices.disabled = False
selected_variables.options = vars_
selected_variables.disabled = False
def update_link():
if data['fpath']:
path = os.path.join(*Path(os.getcwd()).parts[3:], data['fpath'])
path = f'http://localhost:8888/lab/tree/{path}'
link.value = f'<a href="{path}">Download data</a>'
def update_out(time_index, variable):
print('update_out', 1, data['is_updated'])
if data['is_updated']:
with out:
clear_output()
plot(time_index, variable)
def run_on_click():
data['is_updated'] = False
reset()
retrieve()
update_data()
update_link()
out.clear_output()
data['is_updated'] = True
display_plot()
@out.capture(clear_output=True)
def display_plot(time_index=0, variable=None):
plot(time_index, variable)
# Additional functionality related to widgets
interact = widgets.interactive(
update_out,
time_index=time_indices,
variable=selected_variables
)
run.on_click(lambda run: run_on_click())
# Running
display_widgets()
###Output
_____no_output_____ |
week2/day3/exercises/Practice_4_For_functions.ipynb | ###Markdown
Python | day 4 | for loop, functions (feat. basics & if/else) Exercise 0.1 - Python Basics
1. What do we use to make a line break in a print? And for a tab stop? Have a look ---> http://elclubdelautodidacta.es/wp/2012/04/python-capitulo-31-exprimiendo-la-funcion-print/
2. Make a converter from dollars to euros. You'll have to use input.
3. Declare two strings, one will be your first name and the other your last name. Declare your age in another variable. Print a sentence which includes those variables, using ```f{}```. Wtf is that? Check this out **-->** https://realpython.com/python-string-formatting/
4. Given the list `[4,7, -3]`, calculate its maximum, minimum and sum.
5. Round `38.38276252728` to 5 decimal places.
6. Make the phrase `"Born to be wild"` uppercase, then lowercase, divide it by spaces, and finally, replace `"wild"` with `"Geek."`
7. Create a program where two inputs are collected, and the output of the program is a boolean which tells the user whether those inputs are the same or not.
Exercise 0.2 - If/else
1. Create a decision tree using if/else statements to determine the price of the movie ticket. If the client's age is between 5 and 15 years, both included, the price will be 5; if she/he is retired and the movie is one of the `peliculas_discount`, the price is 4. In any other case, it will be 7 euros. You should create the list of `peliculas_discount` with your favourite movies.
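For instance, a minimal sketch for exercises 2 and 3 of section 0.1 (the exchange rate and the personal data are placeholder values):
###Code
# Exercise 0.1.2 -- dollars-to-euros converter (0.92 is only a placeholder rate)
# dollars = float(input("Amount in dollars: "))
# print(f"{dollars * 0.92:.2f} euros")
# Exercise 0.1.3 -- f-string with first name, last name and age (placeholder values)
first_name, last_name, age = "Ada", "Lovelace", 36
print(f"My name is {first_name} {last_name} and I am {age} years old.")
###Output
_____no_output_____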
###Code
x = "SI"
print(x.upper())
peliculas_discount = ["Infiltrados", "El Padrino", "American Ganster"]
edad_cliente = int(input("¿Qué edad tienes?"))
pelicula_a_ver = input("¿Qué película vas a ver?")
retirado = input("¿Estás retirado?")
retirado = retirado.upper()
# Option 2: build the boolean from the string answer
if retirado == "SI" or retirado == "YES":
retirado = True
else:
retirado = False
# The pricing conditions
#if edad_cliente >= 5 and edad_cliente <= 15:
if (5 <= edad_cliente <= 15):
print("El precio son 5€")
elif retirado and pelicula_a_ver in peliculas_discount:
print("El precio son 4€")
else:
print("El precio son 7€")
# The pricing conditions
#if edad_cliente >= 5 and edad_cliente <= 15:
if (5 <= edad_cliente <= 15):
print("El precio son 5€")
elif retirado:
if pelicula_a_ver in peliculas_discount:
print("EL precio son 4€")
else:
print("El precio son 7€")
else:
print("El precio son 7€")
# Option 1: build the boolean from the string answer
retirado = (retirado == "SI") or (retirado == "YES")
# Option 2: build the boolean from the string answer
if retirado == "SI":
retirado = True
else:
retirado = False
# Option 1: derive the boolean "retirado" (retired) from the age
retirado = (edad_cliente >= 67)
# Option 2: derive the boolean "retirado" (retired) from the age
if edad_cliente >= 67:
retirado = True
else:
retirado = False
# Option 3: derive the boolean "retirado" (retired) from the age
retirado = False
if edad_cliente >= 67:
retirado = True
lista = ['María Cagigas','Kapil Dadlani','Daniel del Valle','María del Mar Delgado Domínguez','Estela Falgas','Alfonso Garcia Mateo-Sagasta','Javier Gil Antuñano Foncillas','Juan Guerrero Enriquez','Antonio Leal','Miguel Merry del Val','Miguel','Marta Miñana','Roberto Molleda','Javier Olcoz','Ariadna Puigventos','Leonardo Sánchez Soler','Anais villegas', "Xeles"]
import random
alumno_aleatorio = random.choice(lista)
print(alumno_aleatorio)
import random
lista_stu = ['María Cagigas','Kapil Dadlani','Daniel del Valle','María del Mar Delgado Domínguez','Estela Falgas','Alfonso Garcia Mateo-Sagasta','Javier Gil Antuñano Foncillas','Juan Guerrero Enriquez','Antonio Leal','Miguel Merry del Val','Miguel','Marta Miñana','Roberto Molleda','Javier Olcoz','Ariadna Puigventos','Leonardo Sánchez Soler','Anais villegas', "Xeles"]
def random_student(lista_alumnos):
return random.choice(lista_alumnos)
s = random_student(lista_alumnos=lista_stu)
print(s)
c = 6
d = 2
if c != 6:
print("Algo")
elif d != 2:
print("Prueba suerte")
elif d == 6:
print(6)
elif c == 6:
print("C es 6")
if d == 2:
print(2)
else:
print("Else")
###Output
C es 6
2
###Markdown
Exercise 1 - For

If you find a way of solving any of the questions below without using a for loop, don't hesitate to do it.

1. Print the numbers between 0 and 70 that are multiples of 3 and 5.
###Code
3 % 3 == 0
###Output
_____no_output_____
###Markdown
list(range(start, stop, step))
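A quick illustration of the three-argument form (the numbers below are arbitrary):
```python
# range(start, stop, step): start is included, stop is excluded
print(list(range(0, 71, 5)))   # 0, 5, 10, ..., 70
print(list(range(10, 0, -2)))  # 10, 8, 6, 4, 2
```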
###Code
# def rango(*args, **kwargs): ...
# def rango(start=0, stop, inc=1): ...  # invalid signature: a non-default parameter cannot follow a default one
print(list(range(0, 71, 1)))
print(list(range(3)))
print(list(range(2, 71, 1)))
list(range(3))
lista_a = ["Mallorca", "Mérida", "Madrid"]
lista_b = ["España", "Francia", "Italia"]
#for i in [0,1,2]: # the for loop takes the VALUE of each element in the collection
for i in range(len(lista_a)):
print("Valor de i:", i)
print("lista_a[" + str(i) + "]-->", lista_a[i])
print("lista_b[" + str(i) + "]-->",lista_b[i])
print("----------------")
lista_a = ["Mallorca", "Mérida", "Madrid"]
lista_b = ["España", "Francia", "Italia"]
contador = 0
for i in ["hola", "andrés", "jaime"]: # the for loop takes the VALUE of each element in the collection
print("Valor de contador:", contador)
print("lista_a[" + str(contador) + "]-->", lista_a[contador])
print("lista_b[" + str(contador) + "]-->",lista_b[contador])
print("----------------")
contador += 1
lista_a = ["Mallorca", "Mérida", "Madrid"]
lista_b = ["España", "Francia", "Italia"]
lista_c = lista_a + lista_b
lista_c
list(range(len(lista_c)))
#for i in [0, 1, 2, 3, 4, 5]:
for i in range(len(lista_c)):
print(lista_c[i])
lista_a
lista_b
#for i in range(len(lista_a)):
for i in [0, 1, 2]:
if i == 0: # La primera iteración
print(lista_a[i], ":", lista_b[i+1])
elif i == 1: # La segunda iteración
print(lista_a[i], ":", lista_b[i-1])
else: # Es la tercera iteración
print(lista_a[i], ":", lista_b[i])
lista_a = ["Mallorca", "Mérida", "Madrid"]
lista_b = ["España", "Francia", "Italia"]
#for i in range(len(lista_a)):
for i in [0, 1, 2]:
if i == 0: # La primera iteración
print(lista_a[i], ":", lista_b[i+1])
elif i == 1: # La segunda iteración
print(lista_a[i], ":", lista_b[i-1])
elif i == 2: # La tercera iteración
print(lista_a[i], ":", lista_b[i])
print(lista_a)
print(lista_b)
lista_d = lista_b + ["Cuenca"]
lista_d
#for i in range(len(lista_a)):
for i in [0, 1 ,2]:
print(lista_a[i])
print(lista_d[i+1])
rango = list(range(71))
for x in rango:
#if (x % 3 == 0) and (x % 5 == 0):
if (x % 3 == 0) or (x % 5 == 0):
print(x)
# acumulador / contador
lista_a = ['Mallorca', 'Mérida', 'Madrid']
lista_b = ['España', 'Francia', 'Italia']
acum = 0
# for i in [0, 1, 2]:
for i in range(len(lista_a)):
print(lista_a[acum])
acum += 1
print(acum)
# acumulador / contador
lista_a = ['Mallorca', 'Mérida', 'Madrid']
lista_b = ['España', 'Francia', 'Italia']
acum = 0
while acum < len(lista_a):
print(lista_a[acum])
acum += 1
print(acum)
lista_a = ['Mallorca', 'Mérida', 'Madrid']
lista_b = ['España', 'Francia', 'Italia']
acum = 0
try:
while True:
print(lista_a[acum])
acum += 1
except:
print("Acum vale 3 o más")
#print(lista_a[acum])
print(acum)
lista_a = ['Mallorca', 'Mérida', 'Madrid']
lista_b = ['España', 'Francia', 'Italia']
acum = 0
for elem in lista_a:
print(elem)
print(lista_b[acum])
acum = acum + 1
###Output
Mallorca
España
Mérida
Francia
Madrid
Italia
###Markdown
2. Print the following pattern:
```python
1
22
333
4444
55555
666666
7777777
88888888
999999999
```
###Code
for i in range(1, 10):
r = str(i) * i
print(r)
for i in range(1, 10):
s = ""
for j in range(0, i):
s = s + str(i)
print(s)
"2" * 3
###Output
_____no_output_____
###Markdown
3. Given two lists of the same length, create a third list which contains the element-wise sum of the two lists.
```python
# Example:
a = [1,2]
b = [4,5]
new_list = [5,7]
```
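Besides the index-based solutions worked through below, a compact alternative sketch uses `zip`:
```python
a = [1, 2]
b = [4, 5]
# zip pairs the elements of both lists position by position
new_list = [x + y for x, y in zip(a, b)]
print(new_list)  # [5, 7]
```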
###Code
a = [6, 9, 7]
b = [9, 3, 6]
for x, y in enumerate(a):
for z, w in enumerate(b):
if x == z: # Las posiciones que se están recorriendo sean las mismas
if x == 0:
p1 = y + w
elif x == 1:
p2 = y + w
else:
p3 = y + w
new_list = [p1, p2, p3]
new_list
a = [6, 9, 7]
b = [9, 3, 6]
acum = 0
for elem in a:
print(a[acum])
print(b[acum])
print("--------")
acum += 1
a = [6, 9, 7]
b = [9, 3, 6]
new_list = []
# for acum in [0, 1, 2]:
for acum in range(len(a)):
print(a[acum])
print(b[acum])
new_list.append(a[acum] + b[acum])
print(new_list)
print("--------")
a = [6, 9, 7]
b = [9, 3, 6]
new_list = []
# for acum in [0, 1, 2]:
for acum in range(len(a)):
new_list.append(a[acum] + b[acum])
print(new_list)
s = random_student(lista_alumnos=lista_stu)
print(s)
###Output
Kapil Dadlani
###Markdown
4. Get the first and second best scores from the list. The list contains duplicates.
###Code
scores = [86,86,85,85,85,83,23,45,84,1,2,0]
scores.sort()
print(scores)
scores.reverse()
print(scores)
list(range(len(scores)))
print(scores[0])
for i in range(len(scores)):
if scores[i] == scores[0]:
continue
else:
print(scores[i])
break
scores = [86,86,85,85,85,83,23,45,84,1,2,0]
print(set(scores))
print(id(23))
print(id(22))
print(id(24))
print(id(86))
for x in scores:
print(hash(x))
s = list(set(scores))
s
s = list(set(scores))
s.sort()
s.reverse()
print(s[0])
print(s[1])
for i in range(2):
print(s[i])
acum = 0
while acum < 2:
print(s[acum])
acum += 1
l = [2, 3, 5, 8]
for i in range(len(l)):
if i == 2:
pass
else:
print(l[i])
print("IT")
l = [1515, 30000, -35]
l = list(set(l))
l
l = [1515, 30000, -35, "z", "a", "h"]
l = list(set(l))
l
# Differences between break, pass and continue
# Continue: jumps to the next iteration
for i in range(5):
if i == 1:
continue
print(i)
# Break: stops the loop COMPLETELY
for i in range(5):
if i == 1:
break
print(i)
# Pass: does nothing, execution continues as normal
for i in range(5):
if i == 1:
pass
print(i)
# Why pass?
def get_data():
""" Para obtener datos TODO(@gabvaztor) """
data = []
return data
def clean_data():
pass
def draw_data():
pass
get_data()
print("Ho\nla")
print(r"Ho\nla")
name = "Bob"
'Hello, %s' % name
errno = 51531351
'Hey %s, there is a 0x%x error!' % (name, errno)
name = "Bob"
'Hello, {}'.format(name)
a = 5
b = 10
f'Five plus ten is {a + b} and not {2 * (a + b)}.'
###Output
_____no_output_____
###Markdown
5. From the given list: a) Create separate lists of strings and numbers. b) Sort the strings' list in ascending order c) Sort the strings' list in descending order d) Sort the number's list from lowest to highest e) Sort the number's list from highest to lowest
###Code
gadgets = ["Mobile", "Laptop", 100, "Camera", 310.28, "Speakers", 27.00, "Television", 1000, "Laptop Case", "Camera Lens"]
strings = [s for s in gadgets if type(s) == str]
strings
numbers = []
strings = []
for x in gadgets:
if type(x) == int or type(x) == float:
numbers.append(x)
elif type(x) == str:
strings.append(x)
print(numbers)
print("--------")
print(strings)
numbers = []
strings = []
for x in gadgets:
if type(x) == int or type(x) == float:
numbers.append(x)
else:
strings.append(x)
print(numbers)
print("--------")
print(strings)
for x in gadgets:
if type(x) == int or type(x) == float:
print(x)
print("---------")
for x in gadgets:
if type(x) == str:
print(x)
strings
# Sort in ascending order
strings.sort()
strings
strings.reverse()
strings
numbers = [k for k in gadgets if type(k) == int or type(k) == float]
numbers
numbers.sort()
numbers
numbers.sort(reverse=True)
numbers
strings.sort(reverse=False)
strings
def sort(lista, reverse=False):
lista.sort()
if reverse:
lista.reverse()
return lista
l = [2, 6, 1, 0]
sort(lista=l, reverse=True)
numbers
print(gadgets)
###Output
['Mobile', 'Laptop', 100, 'Camera', 310.28, 'Speakers', 27.0, 'Television', 1000, 'Laptop Case', 'Camera Lens']
###Markdown
6. Make a list of ten aliens, each of which is one color: 'red', 'green', or 'blue'. - You can shorten this to 'r', 'g', and 'b' if you want, but if you choose this option you have to include a comment explaining what r, g, and b stand for. - Red aliens are worth 5 points, green aliens are worth 10 points, and blue aliens are worth 20 points. - Use a for loop to determine the number of points a player would earn for destroying all of the aliens in your list.
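One compact way to do the scoring (a sketch with an arbitrary list of ten aliens) is to map each colour to its point value with a dictionary:
```python
# r = red (5 points), g = green (10 points), b = blue (20 points)
points = {"r": 5, "g": 10, "b": 20}
aliens = ["r", "g", "b", "r", "r", "b", "g", "r", "b", "r"]
total = 0
for alien in aliens:
    total += points[alien]
print(total)
```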
###Code
lista_aliens = ["r", "g", "b"]
l_aliens = []
for i in range(10):
if i % 2 == 0: # i es par
l_aliens.append("r")
elif i % 3 == 0: # i es divisible entre 3
l_aliens.append("g")
else:
l_aliens.append("b")
l_aliens
acum = 0
for alien in l_aliens:
if alien == "r":
# 5 puntos
acum = acum + 5
elif alien == "g":
# 10 puntos
acum = acum + 10
else:
# 20 puntos
acum = acum + 20
print(acum)
l_aliens
# THIS DOES NOT UPDATE ACUM
acum = 0
for alien in l_aliens:
if alien == "r":
# 5 puntos
print(acum + 5)
elif alien == "g":
# 10 puntos
print(acum + 10)
else:
# 20 puntos
print(acum + 20)
print(acum)
###Output
5
20
5
10
5
20
5
20
5
10
0
###Markdown
Exercise 2 - Functions

1. Write a function that sums two given numbers.
###Code
a = 2
b = 4
def suma_dos_numeros(a, b):
print(a + b)
suma_dos_numeros(a=6, b=15)
def suma_dos_numeros(a, b):
print(a + b)
r = suma_dos_numeros(a=6, b=15)
print(r)
a = 2
b = 4
def suma_dos_numeros():
print(a + b)
suma_dos_numeros()
a = 2
b = 4
def suma_dos_numeros():
x = 7
y = 3
print(a + b)
suma_dos_numeros()
print(x + y)
###Output
6
|
notebooks/supp_3_gtr_orgs.ipynb | ###Markdown
GtR organisations and sectorsHere we take a list of GtR organisations matched with Companies House and extract their SIC codes and sizes.We will extract the SIC codes using a 4-digit sic code - Nesta segment lookup Preamble
###Code
%run notebook_preamble.ipy
###Output
_____no_output_____
###Markdown
Load data
###Code
# This is the GtR-CH match
gtr_ch_matched = pd.read_csv('../data/external/gtr_house_metadata.csv',dtype={'company_number':str})
gtr_ch_matched.columns
# This is the 4-digit sic - industry segment lookup
industry_lookup = pd.read_csv('../data/external/industry_cluster_lookup_feb_2017.csv',
dtype={'sic_4':str})
# And this is the Companies House data
ch = pd.read_csv('../data/external/ch_data/BasicCompanyDataAsOneFile-2019-07-01.csv',dtype={'CompanyNumber':str})
ch.columns = [x.lower().strip() for x in ch.columns]
ch = ch[['companyname','companynumber','companystatus','regaddress.postcode','siccode.sictext_1']]
###Output
_____no_output_____
###Markdown
Merge
###Code
gtr_ch_merged = pd.merge(gtr_ch_matched,ch,left_on='company_number',right_on='companynumber')
###Output
_____no_output_____
###Markdown
Some missing companies
###Code
missing_numbers = set(gtr_ch_matched['company_number'])-set(ch['companynumber'])
len(missing_numbers)
###Output
_____no_output_____
###Markdown
Check with Alex what these could be
###Code
# Extract four-digit SIC codes from the five-digit codes
gtr_ch_merged['sic'] = [x.split(' ')[0] for x in gtr_ch_merged['siccode.sictext_1']]
#Deal with the presence of SIC codes with different lengths
gtr_ch_merged['sic_4'] = [x if len(x)==4 else x[:-1] if len(x)==5 else x+'0' for x in gtr_ch_merged['sic']]
merged_w_segments = pd.merge(gtr_ch_merged,industry_lookup,left_on='sic_4',right_on='sic_4')
merged_w_segments.head()
#There are a few sic codes missing from our lookup - unclear why
missing_sics = set(gtr_ch_merged['sic_4'])-set(industry_lookup['sic_4'])
###Output
_____no_output_____
###Markdown
Read the link table matching projects to organisations
###Code
link = pd.read_csv('../data/external/gtr_link_table.csv')
link.columns
project_org_match = pd.merge(merged_w_segments,link,left_on='id',right_on='id')
# Finally - turn this into a project - org lookup
project_sectors = pd.concat([project_org_match.groupby('project_id')[var].apply(lambda x: list(x)) for var in
['companyname','cluster','sic_4','descr']],axis=1)
project_sectors.to_csv(f'../data/processed/{today_str}_gtr_organisations_industries_2.csv',compression='zip')
project_sectors
comp_names = [x[0].lower() for x in project_sectors['companyname']]
has_bbc = [x for x in comp_names if any(val in x for val in ['bbc','broadcasting'])]
has_bbc
###Output
_____no_output_____ |
notebooks/05b-Lx_saturation-one_null_distr.ipynb | ###Markdown
Saturation analyses on the Lx

Saturation analyses using recall values generated in 06-recall_one_null_distr, where a single null distribution was used. Conclusion: using a sampled null distribution per gene vs a single null distribution for all genes generally makes no difference to the recall scores, and the saturation analysis results were similar.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
dir_Lx = './tmp/'
recall_cutoff = 0.95
model_name = 'topfeat'
def getPct(x, model_name, cutoff=0.95):
# get the fraction of targets with recall > cutoff
# given as is refering to Lx
df_results = pd.read_csv('%s/model_results_L%s_reg_rf_boruta_%s.csv' % (dir_Lx, x, model_name))
df_results = df_results.loc[df_results.model == model_name,:].copy()
n_total = df_results.shape[0]
n_pass = sum(df_results.corr_test_recall_1null > cutoff)
return n_pass/n_total, np.nanmean(df_results.corr_test)
###Output
_____no_output_____
###Markdown
based on reduced model
###Code
df_stats = {'Lx':[], 'recall_pct':[], 'mean_corr':[]}
for x in [25,75,100,200,300]:
df_stats['Lx'].append(x)
recall_pct, mean_corr = getPct(x, model_name, recall_cutoff)
df_stats['recall_pct'].append(recall_pct)
df_stats['mean_corr'].append(mean_corr)
df_stats = pd.DataFrame(df_stats)
df_stats
ax = sns.scatterplot(df_stats.Lx, df_stats.recall_pct, s=100)
ax.set(xlabel='Lx', ylabel='%% targets with recall > %s' % recall_cutoff)
ax = sns.scatterplot(df_stats.Lx, df_stats.mean_corr, s=100)
ax.set(xlabel='Lx', ylabel='Mean correlation (rho)')
###Output
_____no_output_____ |
Part 3_Regression & Clustering.ipynb | ###Markdown
Python group project - ipynb n°3 - Simple and Multiple Regression & Clustering

Note: For the regression part, we are going to use another dataset: crime_rate.xlsx. However interesting the Overwatch analysis was, analysing the features and target of the crime rate in the US provides more relevant information, highlighted by simple and multiple regression analysis.

1. Simple Regression: Is there a correlation between the Crime Rate and the number of School Droppers?

In this section, the objective is to analyse the correlation between 2 variables: the Crime Rate in the US and the number of School Droppers. The purpose of this analysis is to determine the linear relationship between those 2 variables. We assume that:
- Crime_Rate : Independent Variable (Feature)
- Droppers_Rate : Dependent Variable (Target)
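In standard textbook notation (a generic formulation, not specific to this notebook), the model fitted below is

$$\hat{y} = \beta_0 + \beta_1 x$$

where $x$ is the reported crime rate, $\hat{y}$ the predicted droppers rate, and $\beta_0$, $\beta_1$ the intercept and slope estimated by least squares.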
###Code
# 0. Import necessary modules #
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
# 1. Import the dataset from Excel #
df= pd.read_excel('crime_rate.xlsx')
# 2. Print the first 5 rows of the dataset #
df.head()
# 3. Set Indexes #
df=df.set_index('City')
df.head()
###Output
_____no_output_____
###Markdown
We can see that the Crime Rate and the Droppers Rate have significantly different scales. In order to obtain relevant results, we are going to multiply the Droppers values by 100.
###Code
# 4. Scale the dataset #
df['Droppers_rate'] = df['Droppers_rate']*100
df.head()
# 5. Define the variables: Reported Crime Rate (independant) and School Droppers (dependant) #
x= df['Reported_Crime_Rate']
y= df['Droppers_rate']
# 6. Plot the variables in a graph to observe theirs variations #
plt.plot(x, c='blue') #The x axis (Reported Crime Rate) is the blue line
plt.plot(y, c='red') #The y axis (School Droppers Rate) is in red
plt.show()
# 7. Convert variables into Numpy arrays #
x=df['Reported_Crime_Rate'].values.reshape(-1,1)
y=df['Droppers_rate'].values.reshape(-1,1)
# 8. Create objects for the classes #
reg= LinearRegression()
reg.fit(x,y)
y_pred=reg.predict(x)
# 9. Plot the linear regression #
plt.scatter(x,y, c='blue')
plt.plot(x,y_pred, c='red')
plt.show()
# 10. Compute R^2 and RMSE #
R_squared = r2_score(y,y_pred)
RMSE = np.sqrt(mean_squared_error(y,y_pred))
print('R-Squared is equal to: ', R_squared)#The closer towards 1, the better the fit
print('Root Mean Squared Error is equal to: ', RMSE) #The lower that value is, the better the fit
###Output
R-Squared is equal to: 0.10401829368952742
Root Mean Squared Error is equal to: 564.45645958214
###Markdown
To conclude n°1: our R² is only about 0.10, which means the fitted line explains roughly 10% of the variance in the droppers rate. Given that R² is so low, the Crime Rate and the Droppers Rate do not appear to be strongly correlated (there is no strong linear influence of the number of people who dropped school on the crime rate).

2. Multiple Regression

The objective of this part is the same as in the first one, but this time there are 3 variables in the equation:
- Crime_Rate : Independent Variable 1 (Feature)
- Violent_Crime_Rate : Independent Variable 2 (Feature)
- Annual_Police_Funds : Dependent Variable (Target)

Thus, we will determine the correlation between the funds allocated to the Police department and the Crime and Violent Crime rates in the city.
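In the same notation as above, the multiple regression model is

$$\hat{y} = \beta_0 + \beta_1 x_1 + \beta_2 x_2$$

where $x_1$ is the reported crime rate, $x_2$ the reported violent crime rate, and $\hat{y}$ the predicted annual police funds.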
###Code
# 0. Import necessary modules #
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
#The dataset remain the same and is already load from part 1
# 1. Print the first 5 rows #
df.head()
# 2. Define variables: Reported Crime Rate and Reported Violent Crime (independent) and Annual Police Funds (dependent)
X= df[['Reported_Crime_Rate','Reported_Violent_Crime']]
Y= df['Annual_Police_Funds']
# 3. Initiate the test #
X_train, X_test, Y_train, Y_test= train_test_split(X,Y,test_size=0.25, random_state=0)
# 4. Create objects for the classes and make them fit with the training data #
reg= LinearRegression()
reg.fit(X_train,Y_train)
# 5. Compute the coefficients and the intercept #
coef= reg.coef_
inter= reg.intercept_
print('Coefficients: \n', coef)
print('Intercept: \n', inter)
# 6. Make predictions on the testing data #
Y_pred = reg.predict(X_test)
print(X_test)
print(Y_pred)
# 7. Compute R^2 and RMSE #
R_Squared = r2_score(Y_test,Y_pred)
RMSE = np.sqrt(mean_squared_error(Y_test,Y_pred))
print('R-Squared is equal to: ', R_Squared) #The closer towards 1, the better the fit
print('Root Mean Squared Error is equal to: ', RMSE) #The lower that value is, the better the fit
###Output
R-Squared is equal to: 0.10401829368952742
Root Mean Squared Error is equal to: 14.658558304015223
###Markdown
To conclude n°2: once again, we can see that R² is low (only 10%), meaning that the fitted model does not accurately fit the distribution of our data. Thus, the correlation between our variables is not well defined.

3. Clustering

For this part, we are going to use our initial dataset on Overwatch statistics. The purpose is to gather the data into different groups to analyse the behaviour of 2 variables considered together. The variables analysed are:
- Pick Rate
- % of On Fire realized
###Code
# 0. Import Librairies #
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler
# 1. Read in a CSV file #
df = pd.read_csv("overbuff.csv", encoding = "latin1")
# 2. Clean the table: we keep only Rank, On Fire and Pick Rate #
df0 = df[['Pick_rate','On_fire','Rank',]]
df0.head()
###Output
_____no_output_____
###Markdown
We are going to focus our analysis on the players ranked Bronze only in order to provide a relevant analysis.
###Code
# 3. Sort the dataset by keeping only the players ranked Bronze #
df1 = df0[df0['Rank']== 'Bronze'].drop('Rank', axis=1)
#We clean our dataset in order to keep only the column analysed
df1.head()
# 4. Determine the accurate number of clusters of the dataset #
mms = MinMaxScaler()
mms.fit(df1)
data_transformed = mms.transform(df1)
SSD =[] #Create an array for the Sum of Squared Distances
K = range(1,15) #We assume that the number of cluster is between 1 and 15
for k in K:
km = KMeans(n_clusters=k)
km = km.fit(data_transformed)
SSD.append(km.inertia_)
print(SSD)
# 5. Plot the Sum of Squared Distance to determine visually the optimal number of clusters #
plt.plot(K, SSD, 'bx-', c='green')
plt.xlabel('k')
plt.ylabel('Sum of Squared Distances')
plt.title('Optimal Number k of Clusters')
plt.show()
###Output
_____no_output_____
###Markdown
We can clearly see that the curve flattens out around k = 5. We will therefore cluster our data into 5 clusters.
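For reference, the quantity on the y-axis (scikit-learn's `inertia_`) is the within-cluster sum of squared distances

$$SSD = \sum_{k=1}^{K}\sum_{x_i \in C_k} \lVert x_i - \mu_k \rVert^2$$

where $\mu_k$ is the centroid of cluster $C_k$; the 'elbow' is the point where adding more clusters stops reducing it substantially.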
###Code
# 6. Define the Variables #
x = df1['Pick_rate'].values
y = df1['On_fire'].values
kmeans = KMeans(n_clusters=5).fit(df1)
centroids = kmeans.cluster_centers_
kmeans.labels_.astype(float)
# 7. Plot the Cluster on a Scatter Plot #
plt.scatter(x,y, c= kmeans.labels_.astype(float), s=40, alpha=0.5)
plt.scatter(centroids[:,0],centroids[:,1],c='black', s=80, alpha=0.8)
plt.xlabel('Pick Rate')
plt.ylabel('On Fire')
###Output
_____no_output_____ |
tf-cool/02_Tf_String/01_TF_String.ipynb | ###Markdown
`tf.strings`These are operations to work with string tensors.* [Docs](https://www.tensorflow.org/api_docs/python/tf/strings/as_string)In this notebook we are going to cover the most useful and basic string functions from the `tf.strings` module.
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
1. ``as_string(...)``Converts each entry in the given tensor to strings.* The input must be of type int, float, bool, ...
###Code
A = tf.Variable([4, 5, 6, 7])
A_string = tf.strings.as_string(A,)
A_string
# 2. ``format(...)``
# Formats a string template using a list of tensors, filling the "{}" placeholders
# with pretty-printed tensor values.
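# A minimal illustrative sketch (the tensor values below are arbitrary):
tf.strings.format("the tensor is: {}", tf.constant([1, 2, 3]))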
###Output
_____no_output_____
###Markdown
3. ``join(...)``Perform element-wise concatenation of a list of string tensors.
###Code
tf.strings.join(["This", "is", "a", "boy"], separator=" ")
###Output
_____no_output_____
###Markdown
4. ``length(...)``Returns the length of each string.
###Code
A = tf.Variable(["This is a ", "This is a ", "This is a "])
B = tf.Variable(["boy", "girl", "apple"])
tf.strings.length(A), tf.strings.length(B)
###Output
_____no_output_____
###Markdown
5. ``lower(...)``Converts the string to lowercase.
###Code
A = tf.Variable(["This is a ", "This is a ", "This is a "])
tf.strings.lower(A)
###Output
_____no_output_____
###Markdown
6. ``ngrams(...)``Create a tensor of n-grams based on data.
###Code
A = tf.Variable([str("The weather was very cold that noone could go out for a date.").split()])
tf.strings.ngrams(A, 3)
###Output
_____no_output_____
###Markdown
7. ``reduce_join(...)``Join the strings in one long string along an axis.
###Code
tf.strings.reduce_join([
["this", "is", "a", "boy."],
["His", "name", "is", "jonh"]
], separator=" ").numpy()
###Output
_____no_output_____
###Markdown
8. ``regex_full_match(...)``Matches the given list of strings against a regular expression and returns True for each element that fully matches.
###Code
tf.strings.regex_full_match(["This", "is", "me", "1234"], pattern=r'^[a-z]+')
###Output
_____no_output_____
###Markdown
9. ``regex_replace(...)`` Replace elements of input matching regex pattern with rewrite.
###Code
tf.strings.regex_replace(["This", "is", "me", "1234"], r'^[a-z]+', "1234")
###Output
_____no_output_____
###Markdown
10. ``strip(...)``- Removes leading and trailing whitespace. This method is similar to the Python string method `strip()`
###Code
tf.strings.strip(["This ", " is ", " me ", " 1234"])
###Output
_____no_output_____
###Markdown
11. ``split(...)``Split elements of input based on sep into a RaggedTensor.
###Code
tf.strings.split(["This is a long string which will be spitted by a space."], sep=" ")
###Output
_____no_output_____
###Markdown
12. ``substr(...)``Extract a string based on index and length from another string.
###Code
tf.strings.substr(["hello world"], pos=0, len=5).numpy()
###Output
_____no_output_____
###Markdown
13. ``upper(...)``Convert the given string to uppercase characters.
###Code
tf.strings.upper(["Hello world"]).numpy()
###Output
_____no_output_____
###Markdown
14. ``to_number(...)``Convert a given string to a number.
###Code
tf.strings.to_number([str(i) for i in range(10)], out_type="int32").numpy()
###Output
_____no_output_____ |
lesson05/.ipynb_checkpoints/Optional_Part1_galaxySimulations_3d_lesson05-checkpoint.ipynb | ###Markdown
Galaxy simulationsNow we'll use some initial conditions from the ```Gadget-2``` particle simulation code to do our own simulations of galaxies! We'll start with some low resolution data.There are only 2 types of particles - particle 1 is a star particle and particle 2 is a dark matter particle.
###Code
# usual things:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
We'll read data in using np.genfromtxt. This is the "opposite" function of the np.savetxt we used before to save text files.Note we can use it to even read from the web! Or you can download these galaxy snapshots from the Day 5 webpage as well.
###Code
# this is a 100 particle (in stars and DM) initial conditions
# each row is a particle
# Ptype is the particle type (dark matter or stars)
# and then x,y,z give coordinates of each particle
# vx,vy,vz give each velocity component
names = ('Ptype', 'x', 'y', 'z', 'vx', 'vy', 'vz')
formats= ('f8', 'f8', 'f8','f8', 'f8','f8', 'f8')
galaxy_data = np.genfromtxt("https://jnaiman.github.io/csci-p-14110/lesson04/galaxySnaps/snap_001_fac1n3.txt",
delimiter=",",
dtype={'names':names,
'formats':formats})
###Output
_____no_output_____
###Markdown
Now let's take a quick peak at what the array looks like:
###Code
galaxy_data
###Output
_____no_output_____
###Markdown
We'll convert this data into a form that our hermite solver knows how to use with the `convert_galaxy_data.py` library. Make sure this is in the same folder as this ipynb notebook!
###Code
# convert galaxy data
from convert_galaxy_data import convert_galaxy_data
masses, pos, vel = convert_galaxy_data(galaxy_data)
###Output
_____no_output_____
###Markdown
What do they look like? Here are the masses (in grams):
###Code
masses
###Output
_____no_output_____
###Markdown
We'll use a slightly different version of the `do_hermite` function: the `do_hermite_galaxies` that is tailored to doing galaxy simulations:
###Code
# import the galaxy library
from hermite_library import do_hermite_galaxies
# note: this will likely take a good long while to run
# note that tfinal has changed - this is because our scales are very different!
# time is in seconds; 3.15e7 s is roughly one year, so tfinal corresponds to about 10^7 years
r_h, v_h, t_h, e_h = do_hermite_galaxies(masses, pos, vel, tfinal=3.15e7*1e7, Nsteps = 100)
###Output
_____no_output_____
###Markdown
Finally, plot!
###Code
# let's plot in multi-d
fig, ax = plt.subplots(1, 4, figsize = (10*2, 10))
fig.suptitle('Coordinates Plot')
# for plots 0->2
ax[0].set_xlabel('x in kpc')
ax[0].set_ylabel('y in kpc')
ax[1].set_xlabel('x in kpc')
ax[1].set_ylabel('z in kpc')
ax[2].set_xlabel('y in kpc')
ax[2].set_ylabel('z in kpc')
# plot the Hermite solution for all particles, x-y projection
for i in range(len(masses)):
ax[0].plot(r_h[i,0,:], r_h[i,1,:], lw=3)
for i in range(len(masses)):
ax[1].plot(r_h[i,0,:], r_h[i,2,:], lw=3)
for i in range(len(masses)):
ax[2].plot(r_h[i,1,:], r_h[i,2,:], lw=3)
ax[3].set_xlabel('Time in years')
ax[3].set_ylabel('Energy')
# re-norm energy
ax[3].plot(t_h, e_h)
plt.show()
###Output
_____no_output_____ |
notebooks/Paramer Server.ipynb | ###Markdown
Parameter Server Mode
###Code
import os
import sagemaker
from sagemaker import get_execution_role
import time
sagemaker_session = sagemaker.Session()
role = get_execution_role()
region = sagemaker_session.boto_session.region_name
training_data_uri = 's3://sagemaker-sample-data-{}/tensorflow/mnist'.format(region)
from sagemaker.tensorflow import TensorFlow
train_instance_count = 2
#train_instance_type = 'ml.p2.xlarge' # GPU
train_instance_type='ml.m5.4xlarge'
mnist_estimator = TensorFlow(entry_point='modi_mnist.py',
role=role,
train_instance_count=train_instance_count,
train_instance_type=train_instance_type,
framework_version='1.12',
py_version = 'py3',
distributions = {'parameter_server': {'enabled': True}})
%%time
print("train_instance_type - ",train_instance_type )
mnist_estimator.fit(training_data_uri)
###Output
_____no_output_____ |
src/Plotings.ipynb | ###Markdown
This script is used to generate the plots related to this analysis.
###Code
# import packages
import numpy as np
import pandas as pd
import os
import datetime
# define the repository path
from common_settings import obspath, outpath, events_name, \
obs_events, day_load_flow, hour_load_flow, conct_name, modpath
import matplotlib.pyplot as plt
import seaborn as sns
# define the repository path
from common_settings import obspath, outpath, events_name, \
obs_events, day_load_flow, hour_load_flow, conct_name, modpath, mod_load_flow
from utils.plotting import cq_line_plot
from utils.concentration import cumulative_lq, excel_save
from utils.signatures import update_cumul_df, load_flow_loc
###Output
_____no_output_____
###Markdown
plot the time series of discharge, concentration and C-Q
###Code
# plot the time series of discharge, concentration and C-Q
const_name = 'cq-NO3'
cq = pd.read_csv(outpath + const_name + '.csv')
cq.Time = pd.to_datetime(cq.Time)
for i in range(cq.shape[0]):
cq.loc[i, 'index'] = cq.Time[i].strftime("%Y-%m-%d")
cq.set_index('Time', inplace=True, drop=False)
cols = cq.columns
fig, axes = plt.subplots(4, 4, figsize=(20, 18))
k=0
for ii in obs_events.index[59:]:
row, col = np.floor_divide(k, 4), np.mod(k, 4)
start, end = obs_events.loc[ii, ['start', 'end']].values
start = pd.to_datetime(start + ' 00:00:00')
end = pd.to_datetime(end + ' 23:00:00')
ylabel_str = 'Flow volume (ML)'
cq_slice = cq.loc[start:end, :]
ax_temp = cq_slice.plot(kind='scatter', x=cols[2], y=cols[1], ax=axes[row, col], legend=True)
ax_temp.legend([obs_events.loc[ii, 'start']])
k += 1
plt.savefig(f'{outpath}figs/obs_NO3_flow_scatter.png', format='png', dpi=400, layout='tight')
# Plot
sns.set_style('white')
ylabel_str = 'Discharge (ML)'
fig, axes = plt.subplots(2, figsize=(8, 10))
sns.scatterplot(data = cq, x = cols[2], y = cols[1], ax=axes[0])
cq_line_plot(cq, ylabel_str, cols, logy=False, ax=axes[1])
# plt.savefig('../../output/figs/cq-' + const_name + '-test.png', format='png', dpi=300)
# Plot the data of different small time periods
time_period = ['2019-12-01', '2020-02-01']
cq_sliced = cq.loc[time_period[0]:time_period[1], :]
cq_line_plot(cq_sliced, ylabel_str, cols, logy=True)
# plt.savefig('../../output/figs/cq-' + const_name + 'mobilization-period2.png', format='png', dpi=300)
###Output
_____no_output_____
###Markdown
The double-mass plot of flow and loads
###Code
def double_mass_line(df, xycols, fs, xlabel, ylabel, color=None, legd=None, ls=None, ax=None):
ax = df.plot(x=xycols[0], y= xycols[1], ax=ax, ls=ls, color=color)
ax.set_xlabel(xlabel, fontsize=fs)
ax.set_ylabel(ylabel, fontsize=fs)
return ax
fn_day = 'obs_year_cumulative_ratio_day'
df_day = pd.read_excel(f'{outpath}{fn_day}.xlsx', sheet_name=[f'obs_year_{i}' for i in range(9)]);
fn_hour = 'obs_year_cumulative_ratio_hour'
df_hour = pd.read_excel(f'{outpath}{fn_hour}.xlsx', None);
fn_mod = 'mod_year_cumulative_ratio_day'
df_mod = pd.read_excel(f'{outpath}{fn_mod}.xlsx', None);
# read inputs (*cumulative_ratio*.xlsx)
xylabel = ['cumul_flow_ratio', 'cumul_load_ratio']
xlabel='Normalized cumulative flow volume'
ylabel='Normalized cumulative mass'
fs=16; fs_legd = 12
sns.set_style('whitegrid')
sns.color_palette("tab10")
# sns.set_style("ticks", {"xtick.major.size": 14, "ytick.major.size": 14})
plt.figure(figsize=(10, 8))
ax = plt.subplot(111)
legd = []
colors = ['C0', 'C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8', 'C9']
ii = 0
for _, val in df_day.items():
legd.append(str(val.values[0][0])[0:4])
double_mass_line(val, xylabel, fs, xlabel, ylabel, colors[ii], legd, ax=ax)
ii += 1
# for _, val in df_hour.items():
# legd.append(str(val.values[0][0])[0:4])
# double_mass_line(val, xylabel, fs, xlabel, ylabel, legd, ax=ax, ls='-.')
ii = 0
for _, val in df_mod.items():
legd.append(str(val.values[0][0])[0:4])
double_mass_line(val, xylabel, fs, xlabel, ylabel, colors[ii], legd, ax=ax, ls='--')
ii += 1
ax.legend(legd, fontsize=fs_legd);
ax.plot(ax.get_xlim(), ax.get_xlim(), c=".3")
ax.set_yticklabels(np.round(ax.get_yticks(), 2), size = 16);
ax.set_xticklabels(np.round(ax.get_xticks(), 2), size = 16);
# plt.savefig(f'{outpath}figs/{fn_day}_hour.png', format='png', dpi=400)
###Output
F:\Anaconda\envs\oed\lib\site-packages\ipykernel_launcher.py:23: UserWarning: FixedFormatter should only be used together with FixedLocator
F:\Anaconda\envs\oed\lib\site-packages\ipykernel_launcher.py:24: UserWarning: FixedFormatter should only be used together with FixedLocator
###Markdown
Plot Event Mean Concentration (EMC)
###Code
# import EMC and tranform into a matrix
emc = pd.read_csv(f'{outpath}obs_storm_event.csv', index_col='ID')
# emc = pd.read_csv(f'{outpath}mod_NO3_flow.csv', index_col='ID')
emc_matrix = {year: [] for year in range(2009, 2020)}
# emc_matrix = {year: [] for year in range(2009, 2014)}
emc.start = pd.to_datetime(emc.start)
for ii in emc.index:
if emc.start[ii].month < 7:
emc_matrix[emc.start[ii].year - 1].append(emc.loc[ii, 'event_load_coefficients'])
else:
emc_matrix[emc.start[ii].year].append(emc.loc[ii, 'event_load_coefficients'])
# convert dict into a dataframe
df_fillna = pd.DataFrame(index=np.arange(1, 13), columns=emc_matrix.keys())
for key, val in emc_matrix.items():
df_fillna.loc[0:len(val), key] = val
df_fillna.fillna(0, inplace=True)
# create a mask
mask = np.zeros_like(df_fillna.values)
mask[df_fillna == 0] = True
# sns.set_context({"figure.figsize":(17,5)})
sns.axes_style("white")
fig,axes = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(12, 5))
cbar_ax = fig.add_axes([.92, .2, .03, .45])
ax = sns.heatmap(data=np.array(df_fillna.values), annot=True, mask=mask, vmin=0, vmax=1,
xticklabels=df_fillna.columns, yticklabels= df_fillna.index,cmap="RdBu_r", ax=axes[0], cbar=0)
ax1 = sns.heatmap(data=np.array(mod_df_fillna.values), annot=True, mask=mod_mask, vmin=0, vmax=1,
xticklabels=mod_df_fillna.columns, yticklabels= mod_df_fillna.index,cmap="RdBu_r", ax=axes[1], cbar_ax=cbar_ax)
ax.set_xlabel('Year', fontsize=16)
ax.set_ylabel('Events', fontsize=16)
ax1.set_xlabel('Year', fontsize=16)
plt.xticks(rotation=90, fontsize=14)
plt.yticks(fontsize=14);
plt.savefig(f'{outpath}figs/obs_mod_load_coefficients_common.png', format='png', dpi=400)
###Output
_____no_output_____
###Markdown
The seasonal correlation between concentration and discharge
###Code
time_ranges = [[f'{year}/7/1', f'{year}/10/1', f'{year+1}/1/1', f'{year+1}/4/1', f'{year+1}/7/1'] for year in range(2009, 2018)]
df_ratio = pd.DataFrame(index=[str(year) for year in range(2009, 2018)], columns = [1, 2, 3, 4])
cols = mod_load_flow.columns
# day_load_flow[cols[0]] = mod_load_flow[cols[0]] * 1000
mod_load_flow.tail()
_, axes = plt.subplots(1, 4, figsize=(24, 7), sharex=True, sharey=True)
for tt in time_ranges:
for ii in range(len(tt) -1):
start = pd.to_datetime(tt[ii])
end = pd.to_datetime(tt[ii + 1]) - datetime.timedelta(days=1)
df = load_flow_loc([start, end], mod_load_flow, timestep ='d')
df_sum = df.sum(axis=0)
sns.scatterplot(data=df, x=cols[2], y=cols[0], ax=axes[ii])
for ax in axes:
ax.set(xscale="log", yscale="log")
ax.set_xticklabels(ax.get_xticks(), fontsize=12, rotation=30)
ax.set_yticklabels(ax.get_yticks(), fontsize=12)
ax.set_xlabel('Flow (ML)', fontsize=12)
ax.set_xlim(0.1, 1e5)
ax.set_ylim(0.001, 1e4)
axes[0].set_ylabel('Load (kg)', fontsize=14); #Load (kg) Concentration (mg/l)
plt.legend(df_ratio.index);
plt.savefig(f'{outpath}figs/mod_season_load_flow.png', format='png', dpi=400, layout='tight')
###Output
F:\Anaconda\envs\oed\lib\site-packages\ipykernel_launcher.py:11: UserWarning: FixedFormatter should only be used together with FixedLocator
# This is added back by InteractiveShellApp.init_path()
F:\Anaconda\envs\oed\lib\site-packages\ipykernel_launcher.py:12: UserWarning: FixedFormatter should only be used together with FixedLocator
if sys.path[0] == '':
F:\Anaconda\envs\oed\lib\site-packages\ipykernel_launcher.py:18: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "layout" which is no longer supported as of 3.3 and will become an error two minor releases later
###Markdown
Plot month summarized metrics
###Code
df_month = pd.read_csv(f'{outpath}mod_obs_month.csv', index_col = 'Month')
sns.set_style('white')
ax = df_month.plot(kind='bar', figsize=(14, 7), xlabel='Time', ylabel='DIN (kg)', fontsize=14)
ax.set_xlabel(ax.get_xlabel(), fontsize=14);
ax.set_ylabel(ax.get_ylabel(), fontsize=14);
plt.xticks([i for i in range(0,df_month.shape[0], 6)], df_month.index[::6], rotation=45)
plt.legend(df_month.columns, fontsize=14);
plt.savefig(f'{outpath}figs/mod_obs_month.png', format='png', dpi=400, layout='tight')
###Output
F:\Anaconda\envs\oed\lib\site-packages\ipykernel_launcher.py:7: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "layout" which is no longer supported as of 3.3 and will become an error two minor releases later
import sys
###Markdown
Plot long term C-Q relationship
###Code
time_ranges = ['2009/7/1', '2014/6/30']
cols = day_load_flow.columns
day_load_flow[cols[0]] = day_load_flow[cols[0]] * 1000
_, axes = plt.subplots(1, 3, figsize=(20, 6), sharex=False, sharey=False)
start, end = pd.to_datetime(time_ranges)
# plot low-freq-obs in axes[0]
df_obs = load_flow_loc([start, end], day_load_flow, timestep ='d')
cols = day_load_flow.columns
sns.scatterplot(data=df_obs, x=cols[2], y=cols[3], ax=axes[0])
# plot high-freq-obs in axes[1]
df_obs_hour = load_flow_loc(['2018/7/1', '2020/6/30'], hour_load_flow, timestep ='h')
cols = hour_load_flow.columns
sns.scatterplot(data=df_obs_hour, x=cols[0], y=cols[1], ax=axes[1])
# plot mod in axes[1]
df_mod = load_flow_loc([start, end], mod_load_flow, timestep ='d')
cols = mod_load_flow.columns
sns.scatterplot(data=df_mod, x=cols[2], y=cols[1], ax=axes[2])
for ax in axes:
# ax.set(xscale="log", yscale="log")
ax.set_xticklabels(ax.get_xticks(), fontsize=12, rotation=30)
ax.set_yticklabels(ax.get_yticks(), fontsize=12)
ax.set_xlabel('Flow (ML)', fontsize=12)
axes[0].set_ylabel('Concentration (mg/L)', fontsize=14); #Load (kg) Concentration (mg/l)
# plt.savefig(f'{outpath}figs/obs_mod_longterm_load_flow.png', format='png', dpi=400, layout='tight')
###Output
F:\Anaconda\envs\oed\lib\site-packages\ipykernel_launcher.py:19: UserWarning: FixedFormatter should only be used together with FixedLocator
F:\Anaconda\envs\oed\lib\site-packages\ipykernel_launcher.py:20: UserWarning: FixedFormatter should only be used together with FixedLocator
###Markdown
Calculate the cumulative frequency distributions (CFD) of dissolved concentrations. Mod results
###Code
sns.set_style('whitegrid')
fig, axes = plt.subplots(1, 2, figsize=(12, 5), sharex=True, sharey=True)
# Obs results
cols = day_load_flow.columns
time_ranges = [[f'{year}/7/1', f'{year+1}/6/30'] for year in range(2009, 2018)]
for tt in time_ranges:
df = load_flow_loc(tt, day_load_flow, timestep='d')
ax = sns.histplot(data = df, x=cols[-1],bins=100,cumulative=True, stat='probability', fill=False, element='poly', alpha=0.7, ax=axes[0]);
df_temp = load_flow_loc([time_ranges[0][0], time_ranges[-1][1]], day_load_flow, timestep='d')
sns.histplot(data = df_temp, x=cols[-1],bins=100,cumulative=True, stat='probability', fill=False, element='poly', alpha=0.7, ax=axes[0]);
lgd = [*[year for year in range(2009, 2014)], 'all']
axes[0].set_xlabel('Concentration (mg/l)', fontsize=14);
axes[0].set_ylabel('Probability', fontsize=14);
axes[0].set_xticklabels(axes[1].get_xticks(), fontsize=14);
# axes[1].set_xlim(0, 10);
axes[0].set_yticklabels(np.round(axes[1].get_yticks(), 2), fontsize=14);
# mod results
cols = mod_load_flow.columns
time_ranges = [[f'{year}/7/1', f'{year+1}/6/30'] for year in range(2009, 2018)]
for tt in time_ranges:
df = load_flow_loc(tt, mod_load_flow, timestep='d')
sns.histplot(data = df, x=cols[1],bins=100,cumulative=True, stat='probability', fill=False, element='poly', alpha=0.7, ax=axes[1]);
df_temp = load_flow_loc([time_ranges[0][0], time_ranges[-1][1]], mod_load_flow, timestep='d')
sns.histplot(data = df_temp, x=cols[1],bins=100,cumulative=True, stat='probability', fill=False, element='poly', alpha=0.7, ax=axes[1]);
lgd = [*[year for year in range(2009, 2018)], 'all']
axes[1].set_xlabel('Concentration (mg/l)', fontsize=14);
axes[1].set_ylabel('Probability', fontsize=14);
axes[1].set_xticklabels(axes[0].get_xticks(), fontsize=14);
# axes[0].set_xlim(0, 10);
axes[1].set_yticklabels(np.round(axes[0].get_yticks(), 2), fontsize=14);
plt.legend(lgd, bbox_to_anchor=(0.92, 0.85), fontsize=14)
plt.savefig(f'{outpath}figs/obs_mod_concentration_cum_freq.png', format='png', dpi=400)
###Output
F:\Anaconda\envs\oed\lib\site-packages\ipykernel_launcher.py:14: UserWarning: FixedFormatter should only be used together with FixedLocator
F:\Anaconda\envs\oed\lib\site-packages\ipykernel_launcher.py:16: UserWarning: FixedFormatter should only be used together with FixedLocator
app.launch_new_instance()
F:\Anaconda\envs\oed\lib\site-packages\ipykernel_launcher.py:31: UserWarning: FixedFormatter should only be used together with FixedLocator
F:\Anaconda\envs\oed\lib\site-packages\ipykernel_launcher.py:33: UserWarning: FixedFormatter should only be used together with FixedLocator
###Markdown
Plot long term C distribution
###Code
time_ranges = ['2009/7/1', '2018/6/30']
cols = day_load_flow.columns
# day_load_flow[cols[0]] = day_load_flow[cols[0]] * 1000
day_load_flow.head()
start, end = pd.to_datetime(time_ranges)
# plot obs in axes[0]
df_obs = load_flow_loc([start, end], day_load_flow, timestep ='d')
df_mod = load_flow_loc([start, end], mod_load_flow, timestep ='d')
df_plot = pd.concat([df_obs.filter(regex='Con', axis=1), df_mod.filter(regex='Co', axis=1)], axis=1)
ax = df_plot.boxplot(figsize = (8, 6), fontsize=14)
ax.set_xticklabels(['Obs', 'Mod'], fontsize=14)
ax.set_ylabel('Concentration (mg/L)', fontsize=14)
plt.savefig(f'{outpath}figs/obs_mod_concentration_boxplot.png', format='png', dpi=400)
###Output
_____no_output_____
###Markdown
Plot Delivery Ratio Surface
###Code
df = pd.read_csv(outpath+'DeliveryRatioSurface.csv', index_col='month')
cols = df.columns
plt.rcParams.update({'font.size': 14})
ax = sns.lineplot(x=df.index, y=cols[-1], data=df);
ax.axhline(y=25, ls='--', color='grey',lw=1.5)
ax.set_ylabel('Delivery Ratio Surface')
plt.savefig(outpath+'figs/DeliveryRatioSurface.png', format='png', dpi=300)
###Output
_____no_output_____ |
CSW/NGDC_Geoportal_filter_test.ipynb | ###Markdown
Test NGDC Geoportal bbox, start, stop filters
###Code
from owslib.csw import CatalogueServiceWeb
from owslib.fes import SortBy, SortProperty
from owslib import fes
import datetime as dt
csw = CatalogueServiceWeb('http://www.ngdc.noaa.gov/geoportal/csw',timeout=60) # NGDC Geoportal
#csw = CatalogueServiceWeb('http://catalog.data.gov/csw-all',timeout=60)
csw.get_operation_by_name('GetRecords').constraints
# adjust to match MaxRecordDefault of CSW, if would be cleaner if we pick this up Capabilities XML
# this issue will allow for this: https://github.com/geopython/OWSLib/issues/211
pagesize = 10
sort_property = 'dc:title' # a supported queryable of the CSW
sort_order = 'ASC' # should be 'ASC' or 'DESC'
sortby = SortBy([SortProperty(sort_property, sort_order)])
foo=sortby.properties
# hopefully something like this will be implemented in fes soon
def dateRange(start_date='1900-01-01',stop_date='2100-01-01',constraint='overlaps'):
if constraint == 'overlaps':
start = fes.PropertyIsLessThanOrEqualTo(propertyname='apiso:TempExtent_begin', literal=stop_date)
stop = fes.PropertyIsGreaterThanOrEqualTo(propertyname='apiso:TempExtent_end', literal=start_date)
elif constraint == 'within':
start = fes.PropertyIsGreaterThanOrEqualTo(propertyname='apiso:TempExtent_begin', literal=start_date)
stop = fes.PropertyIsLessThanOrEqualTo(propertyname='apiso:TempExtent_end', literal=stop_date)
return start,stop
val = 'salinity'
box=[-72.0, 41.0, -69.0, 43.0] # gulf of maine
# specific specific times (UTC) ...
# hurricane sandy
jd_start = dt.datetime(2012,10,26)
jd_stop = dt.datetime(2012,11,2)
# 2014 feb 10-15 storm
jd_start = dt.datetime(2014,2,10)
jd_stop = dt.datetime(2014,2,15)
# 2014 recent
jd_start = dt.datetime(2014,3,8)
jd_stop = dt.datetime(2014,3,11)
# 2014 recent
jd_start = dt.datetime(1988,1,1)
jd_stop = dt.datetime(2012,3,1)
# 2011
#jd_start = dt.datetime(2013,4,20)
#jd_stop = dt.datetime(2013,4,24)
# ... or relative to now
#jd_now = dt.datetime.utcnow()
#jd_start = jd_now - dt.timedelta(days=3)
#jd_stop = jd_now + dt.timedelta(days=3)
start_date = jd_start.strftime('%Y-%m-%d %H:00')
stop_date = jd_stop.strftime('%Y-%m-%d %H:00')
jd_start = dt.datetime.strptime(start_date,'%Y-%m-%d %H:%M')
jd_stop = dt.datetime.strptime(stop_date,'%Y-%m-%d %H:%M')
print(start_date, 'to', stop_date)
start,stop = dateRange(start_date,stop_date)
filter1 = fes.PropertyIsLike(propertyname='apiso:AnyText',literal=('*%s*' % val),
escapeChar='\\',wildCard='*',singleChar='?')
bbox = fes.BBox(box,crs='urn:ogc:def:crs:OGC:1.3:CRS84')
#filter_list = [fes.And([ bbox, filter1, start,stop]) ]
filter_list = [fes.And([ bbox, filter1]) ]
# you should be okay from here
startposition = 0
maxrecords = 20
while True:
    print('getting records %d to %d' % (startposition, startposition + pagesize))
csw.getrecords2(constraints=filter_list,
startposition=startposition, maxrecords=pagesize, sortby=sortby)
# print csw.request
    for rec, item in csw.records.items():
        print(item.title)
if csw.results['nextrecord'] == 0:
break
startposition += pagesize
if startposition >= maxrecords:
break
filter_list = [fes.And([ bbox, filter1]) ]
csw.getrecords2(constraints=filter_list)
csw.results['matches']
filter_list = [fes.And([ bbox, filter1, start,stop]) ]
csw.getrecords2(constraints=filter_list)
csw.results['matches']
filter_list = [filter1]
csw.getrecords2(constraints=filter_list)
csw.results['matches']
###Output
_____no_output_____ |
python_module_tutorials/SymPy.ipynb | ###Markdown
SymPyThis notebook shows some of SymPy's main features. Consult the official documentation at http://docs.sympy.org/latest/index.html to obtain the whole module reference.SymPy is a Computer Algebra System (CAS) for Python. In other words, you can perform symbolic mathematic operations with it and embed these operations into your Python programs. SymPy is part of the SciPy project. Usually, Sympy is bundled with common Python distribution, such as Anaconda (https://www.continuum.io/anaconda). If it is not bundled with your distribution and you use pip, you can install SymPy with the command "pip install sympy".Most of the shown examples are derived from the free book "Scipy lectures" (http://www.scipy-lectures.org).Once you have installed SymPy on your device, you can load it into your Python script:
###Code
# This notebook was tested with Python 3.5 and SymPy 1.0
# Only methods that are used in this notebook are imported. To import all functions, write "from sympy import *".
# Note that some methods, such as "sin", have homonymous counterparts in other packages, such as NumPy.
# If you use other packages, use "import sympy (as ...)" and "sympy." before every SymPy method to avoid conflicts.
from sympy import * #e.g. Derivative, diff, dsolve, Eq, Function, init_printing, integrate, lambdify, Matrix, sin, solve, sqrt, ...
from sympy.abc import x, y, z # Can be anything from a to z. Represents variables.
from sympy.solvers import linsolve # To solve linear equation systems.
###Output
_____no_output_____
###Markdown
1. Symbolic representation with SymPySymPy works with symbolic mathematic operations. You can store mathematic formulas in variables without the necessity to compute them directly:
###Code
function = sqrt(sin(x)) + 23 # "sqrt" and "sin" were imported from sympy. "x" was imported from sympy.abc.
function
###Output
_____no_output_____
###Markdown
2. Usage in interactive sessionsIn interactive sessions, like this Jupyter notebook, you can also print your expressions as more readable LaTeX or MathJax expressions (depending on your device) after you have initialized this option with "init_printing()":(More information at http://docs.sympy.org/latest/tutorial/printing.html)
###Code
init_printing()
function
oo #Infinite
###Output
_____no_output_____
###Markdown
3. Simplify or expand mathematical expressions
###Code
expand((x + y)**3)
expand(x + y, complex=True)
expand(cos(x + y), trig=True)
simplify((x + x*y) / x)
###Output
_____no_output_____
###Markdown
4. Limits of functions
###Code
# Limit of sin(x)/x as x->0
limit(sin(x)/x, x, 0)
# Using oo (which stands for inifinte)
# Limit of x as x->oo
limit(x, x, oo)
limit(x**2, x, -oo)
###Output
_____no_output_____
###Markdown
5. Differentiation and IntegrationYou can perform calculus operations on your functions, such as differentiation and integration:(More information at http://docs.sympy.org/latest/tutorial/calculus.html)
###Code
function2 = x**2
# First derivative
function2.diff()
# Second derivative using alternative diff()
diff(function2, x, 2)
function3 = sin(x) * 2
# Integration without limits (indefinite)
integrate(function3, x)
# Integration with limits (definite)
integrate(function3, (x, 0, 2))
# Improper integration (using oo as infinite)
integrate(exp(-x**2), (x, -oo, oo))
###Output
_____no_output_____
###Markdown
6. Series expansionYou can find out more about the series module at http://docs.sympy.org/latest/modules/series/index.html
###Code
# Get the Taylor series at 0 until x^15...
series(cos(x), x, 0, 15)
# ...or at 3 until the error is automatically assumed as small enough
series(cos(x), x, 3)
# Get the Fourier series from -Pi to +Pi
s = fourier_series(x**2, (x, -pi, pi))
s
# Returns the 4 first elements of the fourier series
s.truncate(n=4)
###Output
_____no_output_____
###Markdown
7. Defining and solving equations/inequalitiesAside from defining functions, you can also define linear and non-linear equations (with "Eq()") and solve them:(More information at http://docs.sympy.org/latest/modules/solvers/solvers.html)
###Code
equation = Eq(x**2, 4)
solve(equation)
###Output
_____no_output_____
###Markdown
You can access the left hand side and the right hand side of the equation with the following attributes:
###Code
equation.lhs
equation.rhs
###Output
_____no_output_____
###Markdown
SymPy can also solve inequalities,...
###Code
inequality = x**2 < 10
solve(inequality)
###Output
_____no_output_____
###Markdown
...linear equation systems...
###Code
# Solves the following system of two equations:
# x + y + z -1 = 0
# x + 2y + 3z -1 = 0
# Alternative method: linsolve([x + y + z - 1, x + 2*y + 3*z - 1], (x, y, z))
linsolve(Matrix(([1, 1, 1, 1], [1, 2, 3, 1])), (x, y, z))
###Output
_____no_output_____
###Markdown
...and systems of non-linear equations numerically (here with `nsolve` and an initial guess)
###Code
x1 = Symbol('x1')
x2 = Symbol('x2')
f1 = 3 * x1**2 - 2 * x2**2 - 1
f2 = x1**2 - 2 * x1 + x2**2 + 2 * x2 - 8
print(nsolve((f1, f2), (x1, x2), (-1, 1)))
###Output
Matrix([[-1.19287309935246], [1.27844411169911]])
###Markdown
8. Ordinary Differential Equations (ODE)ODEs can be solved in the following way (More information at http://docs.sympy.org/latest/modules/solvers/ode.html):
###Code
f = Function("f")
diff_equation = Eq(f(x).diff()/f(x), 3/x) # In other words: y'/y = 3/x
diff_result = dsolve(diff_equation, f(x))
diff_result
###Output
_____no_output_____
###Markdown
It is also possible to classify ODEs with SymPy:
###Code
classify_ode(diff_equation, f(x))
###Output
_____no_output_____
###Markdown
For Partial Differential Equations, look here: http://docs.sympy.org/latest/modules/solvers/pde.html 9. Conversion of SymPy representations into Python codeYou can also convert SymPy expressions to Python functions to use them directly in your programs:
###Code
result_function = diff_result.rhs
result_function
python_function = lambdify(('C1', x), result_function)
python_function(1, 2) # C1=1, x=2.
###Output
_____no_output_____ |
prac11/prac.ipynb | ###Markdown
10. Goodness-of-fit tests. The $\chi^2$ test (histograms)Partition the set of values of $\xi$ into intervals $\bigtriangleup_1, \ldots, \bigtriangleup_r$, where $\bigtriangleup_i = (a_{i-1}, a_i], i = 1, \ldots, r$$p_i = P\{\xi \in \bigtriangleup_i|H_0\} = F_0(a_i) - F_0(a_{i-1})$$n_i = num\{X_j\in \bigtriangleup_i\}$ Test statistic$\chi^2(X_{[n]}) = \sum_{i=1}^{r}\frac{n}{p_i}\left(\frac{n_i}{n} - p_i\right)^2 = \sum_{i=1}^{r}\frac{(n_i- p_in)^2}{np_i}$ Pearson's theorem If $H_0$ is true, then $$\chi^2(X_{[n]}) \xrightarrow[n \to \infty]{d} \chi^2(r-1)$$[Proof for two intervals](https://tvims.nsu.ru/chernova/ms/lec/node46.html)[7 proofs](https://arxiv.org/pdf/1808.09171.pdf) The Kolmogorov test (empirical distribution function)[wiki](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test)Let $F_0(x)$ be continuous on $\mathbb{R}$Kolmogorov statistic $D_{n}(X_{[n]}) = sup_{x\in \mathbb{R}}|F^*_n(x) - F_0(x)|$If $H_0$ is true, then $D_n(X_{[n]}) \xrightarrow[n \to \infty]{a.s.} 0$If $H_1$ is true, then $D_n(X_{[n]}) \xrightarrow[n \to \infty]{a.s.} sup_{x\in \mathbb{R}}|G(x) - F_0(x)| > 0$ Kolmogorov's theorem If $H_0$ is true and $F_0$ is continuous, then$P\{\sqrt{n}D_n(X_{[n]}) < z\} \xrightarrow[n \to \infty]{} K(z) = 1 + 2\sum_{m=1}^{\infty}(-1)^{m}e^{-2m^2z^2}$Critical region$V_k = \{\sqrt{n}D_n(X_{[n]}) > d_{1 - \alpha}\}$ Constructing the empirical and theoretical distribution functions In class 1. [ref](https://stepik.org/lesson/26280/step/6?unit=8173)The following sample is given | $i$ | 0 | 1 | 2 | $\geq$ 3 ||------|------|------|----|-----------||$n_i$ | 70 | 78 | 34 | 18 |Test the hypothesis that $\xi \sim Pois(1)$ 2. [ref](https://stepik.org/lesson/26280/step/10?unit=8173)In a service center the time between repair requests was measured. The sample is given in hours.$X_{[n]}:$ 0.25, 1.48, 0.32, 0.17, 1.66, 0.29, 0.02, 1.31, 0.12, 3.09Check that $\xi \sim F_0 = Exp(1)$ Homework 1. (1.5) [ref](https://stepik.org/lesson/26280/step/4?unit=8173)Using the sample0.29 0.01 0.50 0.21 0.65 0.34 0.75 0.07 0.07 0.25 1.26 0.11 0.22 0.95 0.63 0.93 0.73 0.37 0.80 1.10researcher Vasily tests the hypothesis that the population follows an exponential distribution with parameter $\lambda=2$. * Find the probability that the population falls into the interval $(0.2, 0.5]$ given that the null hypothesis is true.* Split the range into 3 regions (check that $Np_i \geq 5$)* Find the sample value of the test statistic and make a statistical decision. Significance level 0.01 2. (1.5) [ref](https://stepik.org/lesson/26280/step/13?unit=8173)Use the following data on equipment failures over $10000$ hours of operation to test the hypothesis that the number of failures has a Poisson distribution:| $i$ | 0 | 1 | 2 | $\geq$ 3 ||------|------|------|----|-----------||$n_i$ | 427 | 235 | 72 | 23 |* Test the hypothesis with the chi-squared test at significance level $0.01$. Find the sample value of the test statistic* Find the estimate of the unknown parameter $\lambda$ by the method of maximum likelihood.* Find the expected number of devices that had $3$ failures over $10000$ hours of operation. As the estimate of $\lambda$, take the value found in the previous part (use the value rounded to 1 decimal place). 3. (2) [ref](https://stepik.org/lesson/26280/step/11?thread=solutions&unit=8173)Using the Kolmogorov test, check the hypothesis that the volume of shampoo in a package follows a normal distribution with mean 
450 and variance 16:451 450 444 454 447Find the value of the Kolmogorov statistic $\sqrt{n}D_n$
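For the in-class Kolmogorov exercise, the statistic can be cross-checked with `scipy` (a minimal sketch, not part of the original practice sheet; it assumes the Exp(1) null, which is `scipy.stats.kstest`'s default `loc=0, scale=1` exponential):

```python
import numpy as np
from scipy import stats

# sample of inter-arrival times (hours) from the in-class problem
x = np.array([0.25, 1.48, 0.32, 0.17, 1.66, 0.29, 0.02, 1.31, 0.12, 3.09])

# D_n = sup_x |F*_n(x) - F_0(x)| for F_0 = Exp(1)
D_n, p_value = stats.kstest(x, 'expon')
print(np.sqrt(len(x)) * D_n, p_value)  # sqrt(n) * D_n and the asymptotic p-value
```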
###Code
import math
n=[427,235,72,23]
N=sum(n)
p=[0.59**i*math.exp(-0.59)/math.factorial(i) for i in range(len(n)-1)]
p.append(1-sum(p))
chi=sum([((n[i]-N*p[i])**2)/(N*p[i]) for i in range(len(n))])
round(chi,2)
###Output
_____no_output_____ |
courses/C2_ Build Your Model/SOLUTIONS/DSE C2 S6_ Product Quality Case Study Part 2.ipynb | ###Markdown
DSE Course 2, Session 6: Product Quality Case Study Part 2**Instructor**: Wesley Beckner**Contact**: [email protected] this session we will continue with our wine quality prediction model from Course 1. We will apply strategies to improve our model in light of colinearity between features and feature skewness.--- Contents* 6.0 [Preparing Environment and Importing Data](x.0) * 6.0.1 [Import Packages](x.0.1) * 6.0.2 [Load Dataset](x.0.2)* 6.1 [Feature Engineering](x.1) * 6.1.1 [Feature Skewness](x.1.1) * 6.1.2 [Feature Colinearity](x.1.2) * 6.1.3 [Feature Normalization](x.1.3) * 6.1.4 [Feature Selection](x.1.4) * 6.1.5 [Dimensionality Reduction](x.1.5)* 6.2 [Modeling](x.2) * 6.2.1 [Models from Course 1](x.2.1) * 6.2.2 [Random Forests](x.2.2) * 6.2.3 [Support Vector Machine](x.2.3) * 6.2.4 [Gradient Boosting](x.2.4) * 6.2.5 [AdaBoost](x.2.5) * 6.2.6 [XGBoost](x.2.6) --- 6.0 Preparing Environment and Importing Data[back to top](top) 6.0.1 Import Packages[back to top](top)Load libraries which will be needed in this Notebook
###Code
# Pandas library for the pandas dataframes
from copy import copy
import pandas as pd
import numpy as np
import datetime
import matplotlib.pyplot as plt
from ipywidgets import interact, widgets
import seaborn as sns
import plotly.express as px
import random
import scipy.stats as stats
from scipy.stats import skew, norm, probplot, boxcox, f_oneway
from patsy import dmatrices
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.formula.api import ols
import statsmodels.api as sm
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import LabelEncoder, StandardScaler, PowerTransformer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.metrics import mean_squared_error, r2_score
from sklearn import metrics
from xgboost import XGBClassifier
from sklearn.decomposition import PCA
###Output
/usr/local/lib/python3.7/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning:
pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
###Markdown
6.0.2 Load Dataset[back to top](top)
###Code
df = pd.read_csv("https://raw.githubusercontent.com/wesleybeckner/"\
"ds_for_engineers/main/data/wine_quality/winequalityN.csv")
# for some packages we will want to remove spaces in the header names
df.columns = df.columns.str.replace(' ', '_')
# the quality label will be a made up target for us to predict
df['quality_label'] = df['quality'].apply(lambda x: 'low' if x <=5 else
'med' if x <= 7 else 'high')
class_tp = {'red': 0, 'white': 1}
y_tp = df['type'].map(class_tp)
df['type_encoding'] = y_tp
class_ql = {'low':0, 'med': 1, 'high': 2}
y_ql = df['quality_label'].map(class_ql)
df['quality_encoding'] = y_ql
features = list(df.columns[1:-1].values)
# features.remove('type_encoding')
features.remove('quality_label')
features.remove('quality')
###Output
_____no_output_____
###Markdown
6.0.2 Base Model[back to top](top)
###Code
df_base = df.dropna()
X_train, X_test, y_train, y_test = train_test_split(df_base.loc[:, features +
['type_encoding']], df_base['quality_label'],
test_size=0.20, random_state=42)
model = LogisticRegression(penalty='l2',
tol=.001,
C=.003,
class_weight='balanced',
solver='sag',
max_iter=1e6)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('Accuracy: {:2.2%} '.format(metrics.accuracy_score
(y_test, y_pred)))
print('Precision: {:2.2%} '.format(metrics.precision_score
(y_test, y_pred, average='weighted')))
print('Recall: {:2.2%} '.format(metrics.recall_score
(y_test, y_pred, average='weighted')))
print('F1 Score: {:2.2%} '.format(metrics.f1_score
(y_test, y_pred, average='weighted')))
###Output
Accuracy: 41.14%
Precision: 58.69%
Recall: 41.14%
F1 Score: 42.86%
###Markdown
6.1 Feature Engineering[back to top](top) 6.1.1 Missing Data[back to top](top)
###Code
imp = SimpleImputer(strategy='mean')
X2 = imp.fit_transform(df[features])
df_impute = pd.DataFrame(X2, columns=features)
print(df_impute.shape)
df_impute.head()
###Output
(6497, 12)
###Markdown
6.1.1 Feature Skewness[back to top](top)
###Code
df_impute.skew()
df_impute.kurtosis()
###Output
_____no_output_____
###Markdown
6.1.2 Feature Colinearity[back to top](top)
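As a reminder (added here for context), the variance inflation factor of feature $j$ is computed from the $R^2$ of regressing that feature on all of the others:

$$\mathrm{VIF}_j = \frac{1}{1 - R_j^2}$$

so a VIF much larger than roughly 5–10 means the feature is close to a linear combination of the remaining features.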
###Code
def VIF(df, features):
# add intercept for OLS in statmodels
X = df[features].assign(constant=1)
# Calculate VIF Factors
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(X.values, i) for i in
range(X.shape[1])]
vif["features"] = X.columns
return vif.iloc[:-1].sort_values("VIF Factor") # here I've omitted the intercept
VIF(df_impute, features)
###Output
_____no_output_____
###Markdown
6.1.3 Feature Normalization[back to top](top) We can first try the standard scaler in sklearn, to scale to 0 mean and unit variance:
###Code
scaler = StandardScaler()
normed = scaler.fit_transform(df_impute)
df_normed = pd.DataFrame(normed, columns=features)
df_normed.head()
###Output
_____no_output_____
###Markdown
We note that this doesn't affect skew or kurtosis:
###Code
pd.DataFrame([df_normed.kurtosis(), df[features].kurtosis(),
df_normed.skew(), df[features].skew()],
index=pd.MultiIndex(levels=[['normalized', 'original'],
['kurtosis', 'skew']],
codes=[[0,1,0,1],[0,0,1,1]]))
###Output
_____no_output_____
###Markdown
We can use, instead, the box-cox transformation method to remove skew and kurtosis from the data:
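For reference, the Box-Cox transform of a strictly positive value $x$ is

$$x^{(\lambda)} = \begin{cases} \frac{x^{\lambda} - 1}{\lambda}, & \lambda \neq 0 \\ \ln x, & \lambda = 0 \end{cases}$$

with $\lambda$ fitted per column to make the transformed values as close to Gaussian as possible, which is why the cell below keeps only strictly positive columns.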
###Code
pt_scaler = PowerTransformer(method='box-cox', standardize=False)
# box-cox only works with strictly positive data
box_features = [i for i,j in (df_impute[features] > 0).all().items() if j]
normed = pt_scaler.fit_transform(df_impute[box_features])
df_box = pd.DataFrame(normed, columns=box_features)
df_box.head()
pd.DataFrame([df_box.kurtosis(), df[features].kurtosis(),
df_box.skew(), df[features].skew()],
index=pd.MultiIndex(levels=[['boxcox', 'original'],['kurtosis', 'skew']],
codes=[[0,1,0,1],[0,0,1,1]]))
###Output
_____no_output_____
###Markdown
6.1.4 Feature Selection[back to top](top)One feature we might consider removing is density, since it has such a high VIF. 6.1.5 Dimensionality Reduction[back to top](top) We choose a meaningful number of principal components
###Code
comp = 3
pca = PCA(n_components=comp)
X_pca = pca.fit_transform(pd.merge(df_impute.loc[:,features],df['type_encoding'], left_index=True,
right_index=True))
tot = sum(pca.explained_variance_)
var_exp = [(i / tot)*100 for i in sorted(pca.explained_variance_, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
fig, ax = plt.subplots(1, 1, figsize=(10,10))
ax.bar(range(comp), var_exp, alpha=0.5, align='center',
label='Individual')
ax.step(range(comp), cum_var_exp, where='mid',
label='Cumulative')
ax.set_ylabel('Explained variance ratio')
ax.set_xlabel('Principal components')
ax.legend()
###Output
_____no_output_____
###Markdown
6.2 Modeling[back to top](top) 6.2.1 Models from Course 1[back to top](top)
###Code
# create train/test indices so that we can apply these to the different datasets
Xid_train, Xid_test, yid_train, yid_test = train_test_split(
range(df.shape[0]), range(df.shape[0]), random_state=42)
# and define the function to standardize our model testing
def train_test_model(model, Xid_train, Xid_test, yid_train, yid_test, X, y,
verbose=True):
X_train = X.iloc[Xid_train]
X_test = X.iloc[Xid_test]
y_train = y.iloc[yid_train]
y_test = y.iloc[yid_test]
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
acc = metrics.accuracy_score(y_test, y_pred)
precision = metrics.precision_score(y_test, y_pred, average='weighted')
recall = metrics.recall_score(y_test, y_pred, average='weighted')
f1 = metrics.f1_score(y_test, y_pred, average='weighted')
if verbose:
print('Accuracy: {:2.2%} '.format(acc))
print('Precision: {:2.2%} '.format(precision))
print('Recall: {:2.2%} '.format(recall))
print('F1 Score: {:2.2%} '.format(f1))
return acc, precision, recall, f1
###Output
_____no_output_____
###Markdown
6.2.1.1 With Imputed Data
###Code
model = LogisticRegression(penalty='l2',
tol=.001,
C=.003,
class_weight='balanced',
solver='sag',
max_iter=1e6)
X_imp = pd.merge(df_impute.loc[:,features],df['type_encoding'], left_index=True,
right_index=True)
acc, pre, rec, f1 = train_test_model(model, Xid_train, Xid_test, yid_train, yid_test, X_imp, y_ql)
###Output
Accuracy: 40.18%
Precision: 58.55%
Recall: 40.18%
F1 Score: 41.15%
###Markdown
6.2.1.2 With Box-Cox Transformed Data
###Code
X_box = pd.merge(df_box.loc[:,box_features],df_impute[['type_encoding',
'citric_acid']], left_index=True, right_index=True)
acc, pre, rec, f1 = train_test_model(model, Xid_train, Xid_test, yid_train, yid_test, X_box, y_ql)
###Output
Accuracy: 40.06%
Precision: 62.69%
Recall: 40.06%
F1 Score: 43.33%
###Markdown
6.2.1.3 With Zero-Mean and Unit Variance Data
###Code
X_norm = pd.merge(df_normed.loc[:,features],df['type_encoding'], left_index=True,
right_index=True)
acc, pre, rec, f1 = train_test_model(model, Xid_train, Xid_test, yid_train, yid_test, X_norm, y_ql)
###Output
Accuracy: 49.17%
Precision: 67.63%
Recall: 49.17%
F1 Score: 51.53%
###Markdown
6.2.1.4 Normed and Without Colinear Features
###Code
non_density = copy(features)
non_density.remove('density')
X_norm2 = pd.merge(df_normed.loc[:,non_density],df['type_encoding'],
left_index=True, right_index=True)
acc, pre, rec, f1 = train_test_model(model, Xid_train, Xid_test, yid_train, yid_test, X_norm2, y_ql)
###Output
Accuracy: 48.98%
Precision: 67.93%
Recall: 48.98%
F1 Score: 51.42%
###Markdown
6.2.1.4 First Three Principal Components
###Code
X_pca = pd.DataFrame(X_pca)
acc, pre, rec, f1 = train_test_model(model, Xid_train, Xid_test, yid_train, yid_test, X_pca, y_ql)
###Output
Accuracy: 32.00%
Precision: 51.65%
Recall: 32.00%
F1 Score: 35.64%
###Markdown
6.2.2 Random Forests[back to top](top)
###Code
model = RandomForestClassifier()
data = [X_norm2, X_norm, X_box, X_imp, X_pca]
data_names = ['Normed, Non-Colinear', 'Normed', 'Box-Cox', 'Imputed', 'PCA']
performance = []
for data, name in zip(data,data_names):
acc, pre, rec, f1 = train_test_model(model, Xid_train, Xid_test, yid_train,
                                       yid_test, data, y_ql, False)  # evaluate on the dataset for this iteration
performance.append([acc, pre, rec, f1])
pd.DataFrame(performance, columns = ['Accuracy', 'Precision', 'Recall', 'F1'],
index=data_names)
###Output
_____no_output_____
###Markdown
6.2.3 Support Vector Machine[back to top](top)
###Code
model = SVC()
data = [X_norm2, X_norm, X_box, X_imp, X_pca]
data_names = ['Normed, Non-Colinear', 'Normed', 'Box-Cox', 'Imputed', 'PCA']
performance = []
for data, name in zip(data,data_names):
acc, pre, rec, f1 = train_test_model(model, Xid_train, Xid_test, yid_train,
                                       yid_test, data, y_ql, False)  # evaluate on the dataset for this iteration
performance.append([acc, pre, rec, f1])
pd.DataFrame(performance, columns = ['Accuracy', 'Precision', 'Recall', 'F1'],
index=data_names)
###Output
/usr/local/lib/python3.7/dist-packages/sklearn/metrics/_classification.py:1272: UndefinedMetricWarning:
Precision is ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
/usr/local/lib/python3.7/dist-packages/sklearn/metrics/_classification.py:1272: UndefinedMetricWarning:
Precision is ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
/usr/local/lib/python3.7/dist-packages/sklearn/metrics/_classification.py:1272: UndefinedMetricWarning:
Precision is ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
/usr/local/lib/python3.7/dist-packages/sklearn/metrics/_classification.py:1272: UndefinedMetricWarning:
Precision is ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
/usr/local/lib/python3.7/dist-packages/sklearn/metrics/_classification.py:1272: UndefinedMetricWarning:
Precision is ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
###Markdown
6.2.4 Gradient Boosting[back to top](top)
###Code
model = GradientBoostingClassifier()
data = [X_norm2, X_norm, X_box, X_imp, X_pca]
data_names = ['Normed, Non-Colinear', 'Normed', 'Box-Cox', 'Imputed', 'PCA']
performance = []
for data, name in zip(data,data_names):
acc, pre, rec, f1 = train_test_model(model, Xid_train, Xid_test, yid_train,
                                       yid_test, data, y_ql, False)  # evaluate on the dataset for this iteration
performance.append([acc, pre, rec, f1])
pd.DataFrame(performance, columns = ['Accuracy', 'Precision', 'Recall', 'F1'],
index=data_names)
###Output
_____no_output_____
###Markdown
6.2.5 AdaBoost[back to top](top)
###Code
model = AdaBoostClassifier()
data = [X_norm2, X_norm, X_box, X_imp, X_pca]
data_names = ['Normed, Non-Colinear', 'Normed', 'Box-Cox', 'Imputed', 'PCA']
performance = []
for data, name in zip(data,data_names):
acc, pre, rec, f1 = train_test_model(model, Xid_train, Xid_test, yid_train,
                                       yid_test, data, y_ql, False)  # evaluate on the dataset for this iteration
performance.append([acc, pre, rec, f1])
pd.DataFrame(performance, columns = ['Accuracy', 'Precision', 'Recall', 'F1'],
index=data_names)
###Output
_____no_output_____
###Markdown
6.2.6 XGBoost[back to top](top)We can think of XGBoost as regularized Gradient Boosting
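More precisely, XGBoost minimizes a boosting objective with an explicit complexity penalty on each tree (this is the formulation from the XGBoost paper, added here for context):

$$\mathcal{L} = \sum_i l(y_i, \hat{y}_i) + \sum_k \Omega(f_k), \qquad \Omega(f) = \gamma T + \tfrac{1}{2}\lambda \lVert w \rVert^2$$

where $T$ is the number of leaves and $w$ the vector of leaf weights.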
###Code
model = XGBClassifier()
data = [X_norm2, X_norm, X_box, X_imp, X_pca]
data_names = ['Normed, Non-Colinear', 'Normed', 'Box-Cox', 'Imputed', 'PCA']
performance = []
for data, name in zip(data,data_names):
acc, pre, rec, f1 = train_test_model(model, Xid_train, Xid_test, yid_train,
                                       yid_test, data, y_ql, False)  # evaluate on the dataset for this iteration
performance.append([acc, pre, rec, f1])
pd.DataFrame(performance, columns = ['Accuracy', 'Precision', 'Recall', 'F1'],
index=data_names)
###Output
_____no_output_____
###Markdown
6.3 Random Search
###Code
from sklearn.model_selection import RandomizedSearchCV
# Number of trees in random forest
n_estimators = [int(x) for x in np.linspace(start = 200, stop = 1000, num = 10)]
# Number of features to consider at every split
max_features = ['auto']
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(10, 110, num = 22)]
max_depth.append(None)
# Minimum number of samples required to split a node
min_samples_split = [3, 6]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 4]
# Create the random grid
random_grid = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf}
model = RandomForestClassifier()
grid = RandomizedSearchCV(estimator = model,
param_distributions = random_grid,
n_iter = 50,
cv = 5,
verbose=2,
random_state=42,
n_jobs = -1)
grid.fit(X_imp, y_ql)
display(grid.best_params_)
###Output
Fitting 5 folds for each of 50 candidates, totalling 250 fits
|
stats/01_bifurcation.ipynb | ###Markdown
12.1. Plotting the bifurcation diagram of a chaotic dynamical system
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def logistic(r, x):
return r * x * (1 - x)
x = np.linspace(0, 1)
fig, ax = plt.subplots(1, 1)
ax.plot(x, logistic(2, x), 'k')
def plot_system(r, x0, n, ax=None):
# Plot the function and the
# y=x diagonal line.
t = np.linspace(0, 1)
ax.plot(t, logistic(r, t), 'k', lw=2)
ax.plot([0, 1], [0, 1], 'k', lw=2)
# Recursively apply y=f(x) and plot two lines:
# (x, x) -> (x, y)
# (x, y) -> (y, y)
x = x0
for i in range(n):
y = logistic(r, x)
# Plot the two lines.
ax.plot([x, x], [x, y], 'k', lw=1)
ax.plot([x, y], [y, y], 'k', lw=1)
# Plot the positions with increasing
# opacity.
ax.plot([x], [y], 'ok', ms=10,
alpha=(i + 1) / n)
x = y
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_title(f"$r={r:.1f}, \, x_0={x0:.1f}$")
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6),
sharey=True)
plot_system(2.5, .1, 10, ax=ax1)
plot_system(3.5, .1, 10, ax=ax2)
n = 10000
r = np.linspace(2.5, 4.0, n)
iterations = 1000
last = 100
x = 1e-5 * np.ones(n)
lyapunov = np.zeros(n)
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 9),
sharex=True)
for i in range(iterations):
x = logistic(r, x)
# We compute the partial sum of the
# Lyapunov exponent.
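    # For the logistic map f(x) = r*x*(1-x) we have f'(x) = r - 2*r*x, and the
    # Lyapunov exponent is the long-run average of log|f'(x_i)| along the orbit.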
lyapunov += np.log(abs(r - 2 * r * x))
# We display the bifurcation diagram.
if i >= (iterations - last):
ax1.plot(r, x, ',k', alpha=.25)
ax1.set_xlim(2.5, 4)
ax1.set_title("Bifurcation diagram")
# We display the Lyapunov exponent.
# Horizontal line.
ax2.axhline(0, color='k', lw=.5, alpha=.5)
# Negative Lyapunov exponent.
ax2.plot(r[lyapunov < 0],
lyapunov[lyapunov < 0] / iterations,
'.k', alpha=.5, ms=.5)
# Positive Lyapunov exponent.
ax2.plot(r[lyapunov >= 0],
lyapunov[lyapunov >= 0] / iterations,
'.r', alpha=.5, ms=.5)
ax2.set_xlim(2.5, 4)
ax2.set_ylim(-2, 1)
ax2.set_title("Lyapunov exponent")
plt.tight_layout()
###Output
_____no_output_____ |
torch/PYTORCH_NOTEBOOKS/03-CNN-Convolutional-Neural-Networks/02-CIFAR-CNN-Code-Along.ipynb | ###Markdown
Copyright 2019. Created by Jose Marcial Portilla. CIFAR Code Along with CNNThe CIFAR-10 dataset is similar to MNIST, except that instead of one color channel (grayscale) there are three channels (RGB).Where an MNIST image has a size of (1,28,28), CIFAR images are (3,32,32). There are 10 categories an image may fall under:0. airplane1. automobile2. bird3. cat4. deer5. dog6. frog7. horse8. ship9. truckAs with the previous code along, make sure to watch the theory lectures! You'll want to be comfortable with:* convolutional layers* filters/kernels* pooling* depth, stride and zero-padding Perform standard imports
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from torchvision.utils import make_grid
import numpy as np
import pandas as pd
import seaborn as sn # for heatmaps
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load the CIFAR-10 datasetPyTorch makes the CIFAR-10 train and test datasets available through torchvision. The first time they're called, the datasets will be downloaded onto your computer to the path specified. From that point, torchvision will always look for a local copy before attempting another download.The set contains 50,000 train and 10,000 test images.Refer to the previous section for explanations of transformations, batch sizes and DataLoader.
###Code
transform = transforms.ToTensor()
train_data = datasets.CIFAR10(root='../Data', train=True, download=True, transform=transform)
test_data = datasets.CIFAR10(root='../Data', train=False, download=True, transform=transform)
train_data
test_data
###Output
_____no_output_____
###Markdown
Create loaders
###Code
torch.manual_seed(101) # for reproducible results
train_loader = DataLoader(train_data, batch_size=10, shuffle=True)
test_loader = DataLoader(test_data, batch_size=10, shuffle=False)
###Output
_____no_output_____
###Markdown
Define strings for labelsWe can call the labels whatever we want, so long as they appear in the order of 'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'. Here we're using 5-character labels padded with spaces so that our reports line up later.
###Code
class_names = ['plane', ' car', ' bird', ' cat', ' deer', ' dog', ' frog', 'horse', ' ship', 'truck']
###Output
_____no_output_____
###Markdown
We don't want to use the variable name "class" here, as it would overwrite Python's built-in keyword. View a batch of images
###Code
np.set_printoptions(formatter=dict(int=lambda x: f'{x:5}')) # to widen the printed array
# Grab the first batch of 10 images
for images,labels in train_loader:
break
# Print the labels
print('Label:', labels.numpy())
print('Class: ', *np.array([class_names[i] for i in labels]))
# Print the images
im = make_grid(images, nrow=5) # the default nrow is 8
plt.figure(figsize=(10,4))
plt.imshow(np.transpose(im.numpy(), (1, 2, 0)));
###Output
Label: [ 3 2 0 4 9 5 1 2 4 8]
Class: cat bird plane deer truck dog car bird deer ship
###Markdown
Define the modelIn the previous section we used two convolutional layers and two pooling layers before feeding data through a fully connected hidden layer to our output. The model follows CONV/RELU/POOL/CONV/RELU/POOL/FC/RELU/FC. We'll use the same format here.The only changes are:* take in 3-channel images instead of 1-channel* adjust the size of the fully connected inputOur first convolutional layer will have 3 input channels, 6 output channels, a kernel size of 3 (resulting in a 3x3 filter), and a stride length of 1 pixel.These are passed in as nn.Conv2d(3,6,3,1)
###Code
class ConvolutionalNetwork(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 6, 3, 1) # changed from (1, 6, 5, 1)
self.conv2 = nn.Conv2d(6, 16, 3, 1)
self.fc1 = nn.Linear(6*6*16, 120) # changed from (4*4*16) to fit 32x32 images with 3x3 filters
self.fc2 = nn.Linear(120,84)
self.fc3 = nn.Linear(84, 10)
def forward(self, X):
X = F.relu(self.conv1(X))
X = F.max_pool2d(X, 2, 2)
X = F.relu(self.conv2(X))
X = F.max_pool2d(X, 2, 2)
X = X.view(-1, 6*6*16)
X = F.relu(self.fc1(X))
X = F.relu(self.fc2(X))
X = self.fc3(X)
return F.log_softmax(X, dim=1)
###Output
_____no_output_____
###Markdown
Why (6x6x16) instead of (5x5x16)?With MNIST the kernels and pooling layers resulted in $\;(((28−2)/2)−2)/2=5.5 \;$ which rounds down to 5 pixels per side.With CIFAR the result is $\;(((32-2)/2)-2)/2 = 6.5\;$ which rounds down to 6 pixels per side.
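A quick way to verify this arithmetic is to push a dummy CIFAR-sized batch through the same layer settings and inspect the shapes (a small sketch, not part of the original lecture code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.rand(1, 3, 32, 32)                                # one fake CIFAR image
x = F.max_pool2d(F.relu(nn.Conv2d(3, 6, 3, 1)(x)), 2, 2)
print(x.shape)                                              # torch.Size([1, 6, 15, 15])
x = F.max_pool2d(F.relu(nn.Conv2d(6, 16, 3, 1)(x)), 2, 2)
print(x.shape)                                              # torch.Size([1, 16, 6, 6]) -> 6*6*16 = 576
```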
###Code
torch.manual_seed(101)
model = ConvolutionalNetwork()
model
###Output
_____no_output_____
###Markdown
Including the bias terms for each layer, the total number of parameters being trained is:$\quad\begin{split}(3\times6\times3\times3)+6+(6\times16\times3\times3)+16+(576\times120)+120+(120\times84)+84+(84\times10)+10 &=\\162+6+864+16+69120+120+10080+84+840+10 &= 81,302\end{split}$
###Code
def count_parameters(model):
params = [p.numel() for p in model.parameters() if p.requires_grad]
for item in params:
print(f'{item:>6}')
print(f'______\n{sum(params):>6}')
count_parameters(model)
###Output
162
6
864
16
69120
120
10080
84
840
10
______
81302
###Markdown
Define loss function & optimizer
###Code
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
###Output
_____no_output_____
###Markdown
Train the modelThis time we'll feed the data directly into the model without flattening it first.OPTIONAL: In the event that training takes too long, you can interrupt the kernel, skip ahead to the bottom of the notebook, and load a trained version of the model that's been saved in this folder.
###Code
import time
start_time = time.time()
epochs = 10
train_losses = []
test_losses = []
train_correct = []
test_correct = []
for i in range(epochs):
trn_corr = 0
tst_corr = 0
# Run the training batches
for b, (X_train, y_train) in enumerate(train_loader):
b+=1
# Apply the model
y_pred = model(X_train)
loss = criterion(y_pred, y_train)
# Tally the number of correct predictions
predicted = torch.max(y_pred.data, 1)[1]
batch_corr = (predicted == y_train).sum()
trn_corr += batch_corr
# Update parameters
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Print interim results
if b%1000 == 0:
print(f'epoch: {i:2} batch: {b:4} [{10*b:6}/50000] loss: {loss.item():10.8f} \
accuracy: {trn_corr.item()*100/(10*b):7.3f}%')
train_losses.append(loss)
train_correct.append(trn_corr)
# Run the testing batches
with torch.no_grad():
for b, (X_test, y_test) in enumerate(test_loader):
# Apply the model
y_val = model(X_test)
# Tally the number of correct predictions
predicted = torch.max(y_val.data, 1)[1]
tst_corr += (predicted == y_test).sum()
loss = criterion(y_val, y_test)
test_losses.append(loss)
test_correct.append(tst_corr)
print(f'\nDuration: {time.time() - start_time:.0f} seconds') # print the time elapsed
###Output
epoch: 0 batch: 1000 [ 10000/50000] loss: 1.72811735 accuracy: 26.660%
epoch: 0 batch: 2000 [ 20000/50000] loss: 1.90085292 accuracy: 32.790%
epoch: 0 batch: 3000 [ 30000/50000] loss: 1.77626872 accuracy: 36.507%
epoch: 0 batch: 4000 [ 40000/50000] loss: 1.32156026 accuracy: 38.925%
epoch: 0 batch: 5000 [ 50000/50000] loss: 1.37019920 accuracy: 40.922%
epoch: 1 batch: 1000 [ 10000/50000] loss: 1.18819773 accuracy: 51.520%
epoch: 1 batch: 2000 [ 20000/50000] loss: 1.21327436 accuracy: 51.725%
epoch: 1 batch: 3000 [ 30000/50000] loss: 1.15835631 accuracy: 52.283%
epoch: 1 batch: 4000 [ 40000/50000] loss: 1.28492486 accuracy: 52.587%
epoch: 1 batch: 5000 [ 50000/50000] loss: 2.28428698 accuracy: 52.930%
epoch: 2 batch: 1000 [ 10000/50000] loss: 1.22954726 accuracy: 56.750%
epoch: 2 batch: 2000 [ 20000/50000] loss: 1.51806808 accuracy: 56.725%
epoch: 2 batch: 3000 [ 30000/50000] loss: 0.82857972 accuracy: 56.847%
epoch: 2 batch: 4000 [ 40000/50000] loss: 0.99008143 accuracy: 57.108%
epoch: 2 batch: 5000 [ 50000/50000] loss: 0.74985492 accuracy: 57.280%
epoch: 3 batch: 1000 [ 10000/50000] loss: 0.73941267 accuracy: 60.430%
epoch: 3 batch: 2000 [ 20000/50000] loss: 0.85795957 accuracy: 60.200%
epoch: 3 batch: 3000 [ 30000/50000] loss: 0.99735087 accuracy: 59.877%
epoch: 3 batch: 4000 [ 40000/50000] loss: 1.01958919 accuracy: 59.965%
epoch: 3 batch: 5000 [ 50000/50000] loss: 0.76560205 accuracy: 59.992%
epoch: 4 batch: 1000 [ 10000/50000] loss: 1.01610267 accuracy: 62.400%
epoch: 4 batch: 2000 [ 20000/50000] loss: 0.91081637 accuracy: 62.405%
epoch: 4 batch: 3000 [ 30000/50000] loss: 1.82826269 accuracy: 62.287%
epoch: 4 batch: 4000 [ 40000/50000] loss: 0.83664912 accuracy: 62.260%
epoch: 4 batch: 5000 [ 50000/50000] loss: 1.08910477 accuracy: 62.156%
epoch: 5 batch: 1000 [ 10000/50000] loss: 0.74583805 accuracy: 64.360%
epoch: 5 batch: 2000 [ 20000/50000] loss: 0.55392635 accuracy: 64.360%
epoch: 5 batch: 3000 [ 30000/50000] loss: 1.75524867 accuracy: 63.840%
epoch: 5 batch: 4000 [ 40000/50000] loss: 1.33982396 accuracy: 63.767%
epoch: 5 batch: 5000 [ 50000/50000] loss: 1.00686800 accuracy: 63.760%
epoch: 6 batch: 1000 [ 10000/50000] loss: 1.14186704 accuracy: 65.820%
epoch: 6 batch: 2000 [ 20000/50000] loss: 1.05023360 accuracy: 65.320%
epoch: 6 batch: 3000 [ 30000/50000] loss: 1.02424061 accuracy: 65.353%
epoch: 6 batch: 4000 [ 40000/50000] loss: 0.92525661 accuracy: 65.438%
epoch: 6 batch: 5000 [ 50000/50000] loss: 1.16625285 accuracy: 65.342%
epoch: 7 batch: 1000 [ 10000/50000] loss: 1.45434511 accuracy: 66.870%
epoch: 7 batch: 2000 [ 20000/50000] loss: 1.03906512 accuracy: 66.835%
epoch: 7 batch: 3000 [ 30000/50000] loss: 1.39975834 accuracy: 66.730%
epoch: 7 batch: 4000 [ 40000/50000] loss: 0.61725640 accuracy: 66.528%
epoch: 7 batch: 5000 [ 50000/50000] loss: 0.52788514 accuracy: 66.414%
epoch: 8 batch: 1000 [ 10000/50000] loss: 1.14143419 accuracy: 69.010%
epoch: 8 batch: 2000 [ 20000/50000] loss: 1.21063972 accuracy: 68.010%
epoch: 8 batch: 3000 [ 30000/50000] loss: 1.08417308 accuracy: 67.817%
epoch: 8 batch: 4000 [ 40000/50000] loss: 0.89879084 accuracy: 67.935%
epoch: 8 batch: 5000 [ 50000/50000] loss: 1.01553142 accuracy: 67.740%
epoch: 9 batch: 1000 [ 10000/50000] loss: 0.66314524 accuracy: 69.630%
epoch: 9 batch: 2000 [ 20000/50000] loss: 1.15445566 accuracy: 69.450%
epoch: 9 batch: 3000 [ 30000/50000] loss: 0.69587100 accuracy: 69.300%
epoch: 9 batch: 4000 [ 40000/50000] loss: 0.72007024 accuracy: 68.873%
epoch: 9 batch: 5000 [ 50000/50000] loss: 0.96967870 accuracy: 68.824%
Duration: 1207 seconds
###Markdown
Optional: Save the modelThis will save your trained model, without overwriting the saved model we have provided called CIFAR10-CNN-Model-master.pt
###Code
torch.save(model.state_dict(), 'CIFAR10-CNN-Model.pt')
###Output
_____no_output_____
###Markdown
Plot the loss and accuracy comparisons
###Code
plt.plot(train_losses, label='training loss')
plt.plot(test_losses, label='validation loss')
plt.title('Loss at the end of each epoch')
plt.legend();
plt.plot([t/500 for t in train_correct], label='training accuracy')
plt.plot([t/100 for t in test_correct], label='validation accuracy')
plt.title('Accuracy at the end of each epoch')
plt.legend();
###Output
_____no_output_____
###Markdown
Evaluate Test Data
###Code
print(test_correct) # contains the results of all 10 epochs
print()
print(f'Test accuracy: {test_correct[-1].item()*100/10000:.3f}%') # print the most recent result as a percent
###Output
[tensor(4940), tensor(5519), tensor(5685), tensor(5812), tensor(5930), tensor(6048), tensor(5941), tensor(6166), tensor(6035), tensor(6105)]
Test accuracy: 61.050%
###Markdown
This is not as impressive as with MNIST, which makes sense. We would have to adjust our parameters to obtain better results.Still, it's much better than the 10% we'd get with random chance! Display the confusion matrixIn order to map predictions against ground truth, we need to run the entire test set through the model.Also, since our model was not as accurate as with MNIST, we'll use a heatmap to better display the results.
###Code
# Create a loader for the entire the test set
test_load_all = DataLoader(test_data, batch_size=10000, shuffle=False)
with torch.no_grad():
correct = 0
for X_test, y_test in test_load_all:
y_val = model(X_test)
predicted = torch.max(y_val,1)[1]
correct += (predicted == y_test).sum()
arr = confusion_matrix(y_test.view(-1), predicted.view(-1))
df_cm = pd.DataFrame(arr, class_names, class_names)
plt.figure(figsize = (9,6))
sn.heatmap(df_cm, annot=True, fmt="d", cmap='BuGn')
plt.xlabel("prediction")
plt.ylabel("label (ground truth)")
plt.show();
###Output
_____no_output_____
###Markdown
For more info on the above chart, visit the docs on scikit-learn's confusion_matrix, seaborn heatmaps, and matplotlib colormaps. Examine the missesWe can track the index positions of "missed" predictions, and extract the corresponding image and label. We'll do this in batches to save screen space.
###Code
misses = np.array([])
for i in range(len(predicted.view(-1))):
if predicted[i] != y_test[i]:
misses = np.append(misses,i).astype('int64')
# Display the number of misses
len(misses)
# Display the first 8 index positions
misses[:8]
# Set up an iterator to feed batched rows
r = 8 # row size
row = iter(np.array_split(misses,len(misses)//r+1))
###Output
_____no_output_____
###Markdown
Now that everything is set up, run and re-run the cell below to view all of the missed predictions.Use Ctrl+Enter to remain on the cell between runs. You'll see a StopIteration once all the misses have been seen.
###Code
np.set_printoptions(formatter=dict(int=lambda x: f'{x:5}')) # to widen the printed array
nextrow = next(row)
lbls = y_test.index_select(0,torch.tensor(nextrow)).numpy()
gues = predicted.index_select(0,torch.tensor(nextrow)).numpy()
print("Index:", nextrow)
print("Label:", lbls)
print("Class: ", *np.array([class_names[i] for i in lbls]))
print()
print("Guess:", gues)
print("Class: ", *np.array([class_names[i] for i in gues]))
images = X_test.index_select(0,torch.tensor(nextrow))
im = make_grid(images, nrow=r)
plt.figure(figsize=(8,4))
plt.imshow(np.transpose(im.numpy(), (1, 2, 0)));
###Output
Index: [ 3 4 6 7 8 17 20 21]
Label: [ 0 6 1 6 3 7 7 0]
Class: plane frog car frog cat horse horse plane
Guess: [ 8 4 2 4 5 3 2 2]
Class: ship deer bird deer dog cat bird bird
###Markdown
Optional: Load a Saved ModelIn the event that training the ConvolutionalNetwork takes too long, you can load a trained version by running the following code: `model2 = ConvolutionalNetwork()`, then `model2.load_state_dict(torch.load('CIFAR10-CNN-Model-master.pt'))`, then `model2.eval()`
###Code
# Instantiate the model and load saved parameters
model2 = ConvolutionalNetwork()
model2.load_state_dict(torch.load('CIFAR10-CNN-Model-master.pt'))
model2.eval()
# Evaluate the saved model against the test set
test_load_all = DataLoader(test_data, batch_size=10000, shuffle=False)
with torch.no_grad():
correct = 0
for X_test, y_test in test_load_all:
y_val = model2(X_test)
predicted = torch.max(y_val,1)[1]
correct += (predicted == y_test).sum()
print(f'Test accuracy: {correct.item()}/{len(test_data)} = {correct.item()*100/(len(test_data)):7.3f}%')
# Display the confusion matrix as a heatmap
arr = confusion_matrix(y_test.view(-1), predicted.view(-1))
df_cm = pd.DataFrame(arr, class_names, class_names)
plt.figure(figsize = (9,6))
sn.heatmap(df_cm, annot=True, fmt="d", cmap='BuGn')
plt.xlabel("prediction")
plt.ylabel("label (ground truth)")
plt.show();
###Output
_____no_output_____ |
tesis.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount('/content/gdrive')
%cd /content/gdrive/My\ Drive/TESIS1/Colabs/
!ls
!git clone https://github.com/FranPerez98/unet_tesis.git
%cd unet_tesis/
%rm data/membrane/test/*_predict.png
!git init
!git config --global user.email '[email protected]'
!git config --global user.name 'FranPerez98'
!git add -A
!git commit -m 'hola'
!git remote add origin https://<FranPerez98>:<pucp0600@>[email protected]/<FranPerez98>/unet_tesis.git
###Output
/bin/bash: FranPerez98: No such file or directory
|
_notebooks/KovaaksData.ipynb | ###Markdown
"Understanding The Data -- Kovaaks Aim Trainer"> "A Deep Dive into the automatically saved csvs from Kovaaks Aim Trainer"- toc: true- branch: master- badges: true- comments: true- author: Josh Prier- categories: [Kovaaks, Understanding The Data, Data Science] KovaaksKovaaks is a "game" that allows to directly practice mouse control in 3d fps games. There are hundreds of different mini-games to practice with, each having a different focus. This [guide](https://www.dropbox.com/s/vaba3potfhf9jy1/KovaaK%20aim%20workout%20routines.pdf?dl=0) is best for getting perspective or understanding how and why to use Kovaaks File NamesKovaaks has a great feature in which it saves every mini-game's stats to a csv. The format of the csv's names is as follows``` - - YYYY.MM.DD-HH.MM.SS Stats.csv```Example:```Tile Frenzy - Challenge - 2020.12.14-08.46.00 Stats.csv```Some of the scenario names also have dashes so just checking against the first dash will not work```Tile Frenzy - Strafing - 03 - Challenge - 2020.12.14-08.34.31 Stats.csv``` The DataEach file has 4 parts:* List of all Kills* Weapon, shots, hits, damage done, damage possible* Overall Stats and info * Info about settings (input lag, fps, sens, FOV, etc)
###Code
#collapse-hide
import pandas as pd
import matplotlib.pyplot as plt
from urllib.request import urlopen
from io import StringIO
import plotly.express as px
from IPython.display import HTML
###Output
_____no_output_____
###Markdown
Each part of the data has different formats and headers.Here are the Headers/keys in python
###Code
#collapse-show
keys_kills=["Date","Kill #","Timestamp","Bot","Weapon","TTK","Shots","Hits","Accuracy","Damage Done","Damage Possible","Efficiency","Cheated"]
keys_weapon=["Date","Weapon","Shots","Hits","Damage Done","Damage Possible"]
keys_info=["Date","Kills","Deaths","Fight Time","Avg TTK","Damage Done","Damage Taken","Midairs","Midaired","Directs","Directed","Distance Traveled","Score","Scenario","Hash","Game Version","Challenge Start","Input Lag","Max FPS (config)","Sens Scale","Horiz Sens","Vert Sens","FOV","Hide Gun","Crosshair","Crosshair Scale","Crosshair Color","Resolution","Avg FPS","Resolution Scale"]
keys_info_no_colon=["Resolution","Avg FPS","Resolution Scale"]
#collapse-hide
#HELPERS
def split_format_file(section, output, date):
split_section = section.split('\n')
# if output == "":
# output = split_section[0]
# TODO: Add date to each line
for i in range(len(split_section[1:])):
if split_section[i+1][-1] == ',':
split_section[i+1] = split_section[i+1][:-1]
split_section[i+1] = date + "," + split_section[i+1]
section = '\n'.join(split_section[1:])
output = output + '\n' + section
return output
def format_info(info, output, date):
info_lines = info.split('\n')
data = []
for key in keys_info:
if key == "Date":
found_key = True
data.append(date)
else:
found_key = False
for line in info_lines:
if any(key in line for key in keys_info_no_colon):
split_line = line.split(',')
if len(split_line) > 1:
if split_line[0] == key:
found_key = True
data.append(split_line[1])
else:
split_line = line.split(':', 1)
if len(split_line) > 1:
if split_line[0] == key:
found_key = True
data.append(split_line[1][1:])
if not found_key:
data.append('')
output = output + '\n' + ','.join(data)
return output
#hide_output
# Current online directory for my stats
stat_dir = "https://jprier.github.io/stats/"
stat_filenames_url = "https://jprier.github.io/stats/filenames.txt"
stat_filenames = urlopen(stat_filenames_url).read().decode('utf-8').split('\n')
kills = ','.join(keys_kills)
weapon = ','.join(keys_weapon)
info = ','.join(keys_info)
for filename in stat_filenames:
# TODO: parse filename for challenge name and date
try:
filename = filename.replace(' ', '%20')
file = urlopen(stat_dir + filename).read().decode('utf-8').split('\n\n')
if len(file) > 1:
date = filename.split('%20')[-2]
# TODO: Add challenge name and date to each as columns
kills = split_format_file(file[0], kills, date)
# file[1] --> df_weapon
weapon = split_format_file(file[1], weapon, date)
# file[2,3] --> df_info
info = format_info(file[2]+"\n"+file[3], info, date)
except Exception as err:
print(err)
df_kills = pd.read_csv(StringIO(kills), sep=",")
df_weapons = pd.read_csv(StringIO(weapon), sep=",")
df_info = pd.read_csv(StringIO(info), sep=",")
df_kills["Date"] = pd.to_datetime(df_kills.Date, format='%Y.%m.%d-%H.%M.%S')#df_kills["Date"].dt.strftime("%Y.%d.%m-%H.%M.%S")
df_weapons["Date"] = pd.to_datetime(df_weapons.Date, format='%Y.%m.%d-%H.%M.%S')#df_weapons["Date"].dt.strftime("%Y.%d.%m-%H.%M.%S")
df_info["Date"] = pd.to_datetime(df_info.Date, format='%Y.%m.%d-%H.%M.%S')#df_info["Date"].dt.strftime("%Y.%d.%m-%H.%M.%S")
#hide
with pd.option_context('display.max_rows', 10, 'display.max_columns', None):
display(df_info)
df_info["dates"] = df_info["Date"]
df_info.set_index('Date', inplace=True)
with pd.option_context('display.max_rows', 10, 'display.max_columns', None):
display(df_info)
###Output
_____no_output_____
###Markdown
Visualizing the Data
###Code
#hide_output
scenarios = df_info['Scenario'].unique()
scenario, scenarios = scenarios[0], scenarios[1:]
df_info_max = df_info.loc[df_info['Scenario'] == scenario].resample('D')['Score'].agg(['max'])
df_info_max['Scenario'] = scenario
for scenario in scenarios:
df_info_max_scenario = df_info.loc[df_info['Scenario'] == scenario].resample('D')['Score'].agg(['max'])
df_info_max_scenario = df_info_max_scenario[df_info_max_scenario['max'].notna()]
if df_info_max_scenario.size > 3:
df_info_max_scenario['Scenario'] = scenario
df_info_max = df_info_max.append(df_info_max_scenario)
with pd.option_context('display.max_rows', 10, 'display.max_columns', None):
display(df_info_max)
fig = px.line(df_info_max, x=df_info_max.index, y="max", color='Scenario')
fig1 = px.scatter(df_info, x=df_info.index, y="Score", trendline='lowess', color='Scenario')
#hide_input
# fig.show()
HTML(fig.to_html(include_plotlyjs='cdn'))
#hide_input
# fig1.show()
HTML(fig1.to_html(include_plotlyjs='cdn'))
###Output
_____no_output_____ |
Module 5/.ipynb_checkpoints/Ex 5.2 SARSA Agent-completed-checkpoint.ipynb | ###Markdown
DAT257x: Reinforcement Learning Explained Lab 5: Temporal Difference Learning Exercise 5.2: SARSA Agent
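For reference (not part of the original lab text), the on-policy TD update implemented below is

$$Q(s,a) \leftarrow Q(s,a) + \alpha\left[r + \gamma Q(s',a') - Q(s,a)\right]$$

where $a'$ is the action actually chosen in $s'$ by the $\epsilon$-greedy policy.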
###Code
import numpy as np
import sys
if "../" not in sys.path:
sys.path.append("../")
from lib.envs.simple_rooms import SimpleRoomsEnv
from lib.envs.windy_gridworld import WindyGridworldEnv
from lib.envs.cliff_walking import CliffWalkingEnv
from lib.simulation import Experiment
class Agent(object):
def __init__(self, actions):
self.actions = actions
self.num_actions = len(actions)
def act(self, state):
raise NotImplementedError
import math
class SarsaAgent(Agent):
def __init__(self, actions, epsilon=0.01, alpha=0.5, gamma=1):
super(SarsaAgent, self).__init__(actions)
## TODO 1
## Initialize empty dictionary here
## In addition, initialize the value of epsilon, alpha and gamma
self.epsilon = epsilon
self.alpha = alpha
self.gamma = gamma
self.Q_s_a = {}
def stateToString(self, state):
mystring = ""
if np.isscalar(state):
mystring = str(state)
else:
for digit in state:
mystring += str(digit)
return mystring
def act(self, state):
stateStr = self.stateToString(state)
action = np.random.randint(0, self.num_actions)
for a in self.actions:
try:
if math.isnan(self.Q_s_a[stateStr][a]):
self.Q_s_a[stateStr][a] = 0
except KeyError:
try:
self.Q_s_a[stateStr][a] = 0
except KeyError:
self.Q_s_a[stateStr] = {}
self.Q_s_a[stateStr][a] = 0
## TODO 2
## Implement epsilon greedy policy here
choice = None
if self.epsilon == 0:
choice = 0
elif self.epsilon == 1:
choice = 1
else:
choice = np.random.binomial(1, self.epsilon)
try:
if choice == 0:
#action = np.random.choice(self.num_actions)
Q_s_as = list(self.Q_s_a[stateStr].values())
if Q_s_as.count(max(Q_s_as)) > 1:
best = [i for i in range(self.num_actions) if Q_s_as[i] == max(Q_s_as)]
action = np.random.choice(best)
# elif len(set(self.Q_s_a[stateStr].values())) < 3:
# action = np.random.randint(0, self.num_actions)
else:
action = max(self.Q_s_a[stateStr], key = self.Q_s_a[stateStr].get)
except KeyError as KeyErr:
action = np.random.randint(0, self.num_actions)
return action
def learn(self, state1, action1, reward, state2, action2):
state1Str = self.stateToString(state1)
state2Str = self.stateToString(state2)
## TODO 3
## Implement the sarsa update here
try:
td_target = reward + self.gamma * self.Q_s_a[state2Str][action2]
except KeyError as KeyErr:
self.Q_s_a[state2Str] = {}
self.Q_s_a[state2Str][action2] = 0
td_target = reward + self.gamma * self.Q_s_a[state2Str][action2]
td_delta = td_target - self.Q_s_a[state1Str][action1]
self.Q_s_a[state1Str][action1] = self.Q_s_a[state1Str][action1] + self.alpha * td_delta
"""
try:
td_delta = td_target - self.Q_s_a[state1Str][action1]
except KeyError as KeyErr:
td_delta = td_target
try:
self.Q_s_a[state1Str][action1] = self.Q_s_a[state1Str][action1] + self.alpha * td_delta
except KeyError as KeyErr:
self.Q_s_a[state1Str] = {}
self.Q_s_a[state1Str][action1] = self.alpha * td_delta
"""
"""
SARSA Update
Q(s,a) <- Q(s,a) + alpha * (reward + gamma * Q(s',a') - Q(s,a))
or
Q(s,a) <- Q(s,a) + alpha * (td_target - Q(s,a))
or
Q(s,a) <- Q(s,a) + alpha * td_delta
"""
pass
interactive = True
%matplotlib nbagg
env = SimpleRoomsEnv()
agent = SarsaAgent(range(env.action_space.n))
experiment = Experiment(env, agent)
experiment.run_sarsa(10, interactive)
interactive = False
%matplotlib inline
env = SimpleRoomsEnv()
agent = SarsaAgent(range(env.action_space.n))
experiment = Experiment(env, agent)
experiment.run_sarsa(50, interactive)
interactive = True
%matplotlib nbagg
env = CliffWalkingEnv()
agent = SarsaAgent(range(env.action_space.n))
experiment = Experiment(env, agent)
experiment.run_sarsa(10, interactive)
interactive = False
%matplotlib inline
env = CliffWalkingEnv()
agent = SarsaAgent(range(env.action_space.n))
experiment = Experiment(env, agent)
experiment.run_sarsa(100, interactive)
interactive = False
%matplotlib inline
env = WindyGridworldEnv()
agent = SarsaAgent(range(env.action_space.n))
experiment = Experiment(env, agent)
experiment.run_sarsa(50, interactive)
interactive = False
%matplotlib inline
env = CliffWalkingEnv()
agent = SarsaAgent(range(env.action_space.n))
experiment = Experiment(env, agent)
experiment.run_sarsa(1000, interactive)
###Output
_____no_output_____ |
Image-segmentation/OxfordPets/ResNet34-v4/ResNet34-v4.ipynb | ###Markdown
Image Segmentation U-Net+ [https://ithelp.ithome.com.tw/articles/10240314](https://ithelp.ithome.com.tw/articles/10240314)+ [https://www.kaggle.com/tikutiku/hubmap-tilespadded-inference-v2](https://www.kaggle.com/tikutiku/hubmap-tilespadded-inference-v2) In the v4 version only the Decoder is changed: the original upsampling is replaced with transposed convolution (see the short shape-comparison sketch below). Load data
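Both ops double the spatial resolution, but `Conv2DTranspose` learns its kernel while `UpSampling2D` has no trainable parameters. A minimal comparison sketch (not part of the original notebook):

```python
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal((1, 20, 20, 64))
up = layers.UpSampling2D(2)(x)                                     # (1, 40, 40, 64), no trainable weights
tr = layers.Conv2DTranspose(64, 3, strides=2, padding='same')(x)   # (1, 40, 40, 64), learnable kernel
print(up.shape, tr.shape)
```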
###Code
from google.colab import drive
drive.mount('/content/drive')
!tar -xf "/content/drive/MyDrive/Colab Notebooks/annotations.tar.gz" -C /content
!tar -xf "/content/drive/MyDrive/Colab Notebooks/images.tar.gz" -C /content
import os
# https://stackoverflow.com/questions/60058588/tesnorflow-2-0-tf-random-set-seed-not-working-since-i-am-getting-different-resul
SEED = 2021
os.environ['PYTHONHASHSEED'] = str(SEED)
import numpy
import math
import random
VERSION = 'ResNet34UNet-V4'
DATA_ROOT_DIR = '/content/'
SEED = 2021
IMG_SIZE = (160, 160)
NUM_CLASSES = 4
BATCH_SIZE = 24
EPOCHES = 50
import tensorflow.keras as keras
from keras import layers, models
import tensorflow as tf
def reset_random_seed():
os.environ['PYTHONHASHSEED'] = str(SEED)
numpy.random.seed(SEED)
random.seed(SEED)
tf.random.set_seed(SEED)
reset_random_seed()
files = []
for file in os.listdir(DATA_ROOT_DIR + 'images'):
# file = Abyssinian_1.jpg
if file.endswith('jpg'):
fn = file.split('.')[0]
if os.path.isfile(DATA_ROOT_DIR + 'annotations/trimaps/' + fn + '.png'):
files.append(fn)
files = sorted(files)
from IPython.display import display
from tensorflow.keras.preprocessing.image import load_img
from PIL import ImageOps
import PIL
# ImageOps.autocontrast() method maximizes (normalize) image contrast.
# This function calculates a histogram of the input image,
# removes cutoff percent of the lightest and darkest pixels from the histogram,
# and remaps the image so that the darkest pixel becomes black (0),
# and the lightest becomes white (255).
img = PIL.ImageOps.autocontrast(load_img(DATA_ROOT_DIR + 'annotations/trimaps/' + files[0] + '.png'))
display(img)
###Output
_____no_output_____
###Markdown
Data Generator
###Code
# reference: https://www.tensorflow.org/api_docs/python/tf/keras/utils/Sequence
class OxfordPets(keras.utils.Sequence):
def __init__(self, files):
self.files = files
def __len__(self):
return math.ceil(len(self.files) / BATCH_SIZE)
def __getitem__(self, index):
x, y = [], []
for i in range(index * BATCH_SIZE, min((index+1) * BATCH_SIZE, len(self.files))):
# target size in load_img
# (img_height, img_width)
x.append(numpy.array(load_img(DATA_ROOT_DIR + 'images/' + self.files[i] + '.jpg', target_size = IMG_SIZE), dtype='float32'))
y.append(numpy.array(load_img(DATA_ROOT_DIR + 'annotations/trimaps/' + self.files[i] + '.png', target_size = IMG_SIZE, color_mode="grayscale"), dtype='uint8'))
return numpy.array(x), numpy.array(y)
###Output
_____no_output_____
###Markdown
Build model
###Code
class ResBlock(keras.Model):
def __init__(self, channels, strides=1):
super().__init__()
self.conv1 = layers.Conv2D(
channels, 3, strides=strides, padding='same', use_bias=False)
self.bn1 = layers.BatchNormalization()
self.conv2 = layers.Conv2D(
channels, 3, strides=1, padding='same', use_bias=False)
self.bn2 = layers.BatchNormalization()
if strides != 1:
self.shortcut = keras.Sequential([
layers.Conv2D(channels, 1, strides,
padding='same', use_bias=False)
])
else:
self.shortcut = keras.Sequential()
def call(self, inputs):
x = self.conv1(inputs)
x = self.bn1(x)
x = tf.nn.relu(x)
x = self.conv2(x)
x = self.bn2(x)
shortcut = self.shortcut(inputs)
return tf.nn.relu(tf.add(x, shortcut))
def get_config(self):
return {}
class Encoder(keras.Model):
def __init__(self, channels, repeat, strides):
super().__init__()
self.resBlocks = keras.Sequential()
self.resBlocks.add(ResBlock(channels, strides))
for _ in range(1, repeat):
self.resBlocks.add(ResBlock(channels, strides=1))
def call(self, inputs):
return self.resBlocks(inputs)
def get_config(self):
return {}
class ChannelAttention(keras.Model):
def __init__(self, reduction):
super().__init__()
self.globalMaxPool = layers.GlobalMaxPooling2D(keepdims=True)
self.globalAvgPool = layers.GlobalAveragePooling2D(keepdims=True)
self.reduction = reduction
def build(self, input_shape):
self.fc = keras.Sequential([
layers.Conv2D(input_shape[-1]//self.reduction, 3, padding='same'),
layers.ReLU(),
layers.Conv2D(input_shape[-1], 1, padding='valid')
])
def call(self, inputs):
x1 = self.globalMaxPool(inputs)
x2 = self.globalAvgPool(inputs)
x1 = self.fc(x1)
x2 = self.fc(x2)
x = tf.nn.sigmoid(layers.add([x1, x2]))
return x
class SpatialAttention(keras.Model):
def __init__(self):
super().__init__()
self.conv3x3 = layers.Conv2D(1, 3, padding='same')
def call(self, inputs):
# https://github.com/kobiso/CBAM-tensorflow/blob/master/attention_module.py#L95
x1 = tf.math.reduce_max(inputs, axis=3, keepdims=True)
x2 = tf.math.reduce_mean(inputs, axis=3, keepdims=True)
x = tf.concat([x1, x2], 3)
x = self.conv3x3(x)
x = tf.nn.sigmoid(x)
return x
class CBAM(keras.Model):
def __init__(self, reduction):
super().__init__()
self.channelAttention = ChannelAttention(reduction)
self.spaialAttention = SpatialAttention()
def call(self, inputs):
x = inputs * self.channelAttention(inputs)
x = x * self.spaialAttention(x)
return x
class DecoderV4(keras.Model):
def __init__(self, channels):
super().__init__()
self.bn1 = layers.BatchNormalization()
self.bn2 = layers.BatchNormalization()
self.bn3 = layers.BatchNormalization()
self.upsample = layers.Conv2DTranspose(channels, 3, strides=2, padding='same', use_bias=False)
self.conv3x3 = layers.Conv2D(channels, 3, strides=1, padding='same', use_bias=False)
self.conv1x1 = layers.Conv2D(channels, 1, use_bias=False)
self.cbam = CBAM(reduction=16)
def call(self, inputs):
x = self.bn1(inputs)
x = tf.nn.relu(x)
        # Transposed convolution (deconvolution)
x = self.upsample(x)
x = self.bn2(x)
x = tf.nn.relu(x)
x = self.conv3x3(x)
x = self.bn3(x)
x = self.cbam(x)
shortcut = self.conv1x1(self.upsample(inputs))
x += shortcut
return x
def get_config(self):
return {}
def ResNet34UNetV4(input_shape, num_classes):
inputs = keras.Input(shape=input_shape)
# Encode by ResNet34
x = layers.Conv2D(64, 7, strides=2, padding='same', use_bias=False)(inputs)
x = layers.BatchNormalization()(x)
x = tf.nn.relu(x)
x0 = layers.MaxPooling2D(3, strides=2, padding='same')(x)
# ResNet34
x1 = Encoder(64, 3, strides=1)(x0)
x2 = Encoder(128, 4, strides=2)(x1)
x3 = Encoder(256, 6, strides=2)(x2)
x4 = Encoder(512, 3, strides=2)(x3)
# Center Block
y5 = layers.Conv2D(512, 3, padding='same', use_bias=False)(x4)
# Decode
y4 = DecoderV4(64)(layers.Concatenate(axis=3)([x4, y5]))
y3 = DecoderV4(64)(layers.Concatenate(axis=3)([x3, y4]))
y2 = DecoderV4(64)(layers.Concatenate(axis=3)([x2, y3]))
y1 = DecoderV4(64)(layers.Concatenate(axis=3)([x1, y2]))
y0 = DecoderV4(64)(y1)
# Hypercolumn
y4 = layers.UpSampling2D(16, interpolation='bilinear')(y4)
y3 = layers.UpSampling2D(8, interpolation='bilinear')(y3)
y2 = layers.UpSampling2D(4, interpolation='bilinear')(y2)
y1 = layers.UpSampling2D(2, interpolation='bilinear')(y1)
hypercolumn = layers.Concatenate(axis=3)([y0, y1, y2, y3, y4])
# Final conv
outputs = keras.Sequential([
layers.Conv2D(64, 3, padding='same', use_bias=False),
layers.ELU(),
layers.Conv2D(num_classes, 1, use_bias=False)
])(hypercolumn)
outputs = tf.nn.softmax(outputs)
return keras.Model(inputs, outputs)
###Output
_____no_output_____
###Markdown
Start Training!
###Code
keras.backend.clear_session()
reset_random_seed()
m = ResNet34UNetV4(IMG_SIZE+(3,), NUM_CLASSES)
m.summary()
keras.utils.plot_model(m, show_shapes=True)
files = sorted(files)
rng = numpy.random.default_rng(SEED)
rng.shuffle(files)
middle = math.ceil(len(files) * 0.8)
train = OxfordPets(files[:middle])
test = OxfordPets(files[middle:])
# Make sure the order of files is the same on every training run
print(files[:10])
keras.backend.clear_session()
reset_random_seed()
callbacks = [
keras.callbacks.ModelCheckpoint(DATA_ROOT_DIR + VERSION + ".h5", monitor='val_loss', save_best_only=True, save_weights_only=True)
]
m.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=['accuracy'])
history = m.fit(train, validation_data=test, epochs=EPOCHES, callbacks=callbacks)
###Output
Epoch 1/50
247/247 [==============================] - 166s 565ms/step - loss: 0.8708 - accuracy: 0.7161 - val_loss: 0.6474 - val_accuracy: 0.7460
Epoch 2/50
247/247 [==============================] - 138s 559ms/step - loss: 0.5136 - accuracy: 0.7966 - val_loss: 0.6201 - val_accuracy: 0.7708
Epoch 3/50
247/247 [==============================] - 137s 556ms/step - loss: 0.4333 - accuracy: 0.8302 - val_loss: 0.4031 - val_accuracy: 0.8435
Epoch 4/50
247/247 [==============================] - 138s 557ms/step - loss: 0.3904 - accuracy: 0.8483 - val_loss: 0.4942 - val_accuracy: 0.8066
Epoch 5/50
247/247 [==============================] - 138s 558ms/step - loss: 0.3590 - accuracy: 0.8609 - val_loss: 0.4159 - val_accuracy: 0.8402
Epoch 6/50
247/247 [==============================] - 137s 556ms/step - loss: 0.3352 - accuracy: 0.8703 - val_loss: 0.7973 - val_accuracy: 0.7682
Epoch 7/50
247/247 [==============================] - 137s 555ms/step - loss: 0.3142 - accuracy: 0.8781 - val_loss: 0.8439 - val_accuracy: 0.7476
Epoch 8/50
247/247 [==============================] - 137s 556ms/step - loss: 0.2995 - accuracy: 0.8839 - val_loss: 0.3656 - val_accuracy: 0.8588
Epoch 9/50
247/247 [==============================] - 137s 556ms/step - loss: 0.2766 - accuracy: 0.8923 - val_loss: 0.5750 - val_accuracy: 0.7990
Epoch 10/50
247/247 [==============================] - 138s 558ms/step - loss: 0.2689 - accuracy: 0.8953 - val_loss: 0.3774 - val_accuracy: 0.8600
Epoch 11/50
247/247 [==============================] - 137s 556ms/step - loss: 0.2448 - accuracy: 0.9041 - val_loss: 0.5033 - val_accuracy: 0.8376
Epoch 12/50
247/247 [==============================] - 137s 555ms/step - loss: 0.2374 - accuracy: 0.9067 - val_loss: 0.3679 - val_accuracy: 0.8643
Epoch 13/50
247/247 [==============================] - 138s 557ms/step - loss: 0.2250 - accuracy: 0.9112 - val_loss: 0.3520 - val_accuracy: 0.8737
Epoch 14/50
247/247 [==============================] - 138s 557ms/step - loss: 0.2084 - accuracy: 0.9173 - val_loss: 0.3384 - val_accuracy: 0.8810
Epoch 15/50
247/247 [==============================] - 138s 556ms/step - loss: 0.2164 - accuracy: 0.9142 - val_loss: 0.3418 - val_accuracy: 0.8810
Epoch 16/50
247/247 [==============================] - 138s 557ms/step - loss: 0.2032 - accuracy: 0.9189 - val_loss: 0.4243 - val_accuracy: 0.8511
Epoch 17/50
247/247 [==============================] - 137s 556ms/step - loss: 0.1927 - accuracy: 0.9228 - val_loss: 0.3317 - val_accuracy: 0.8819
Epoch 18/50
247/247 [==============================] - 138s 557ms/step - loss: 0.1930 - accuracy: 0.9227 - val_loss: 0.3527 - val_accuracy: 0.8789
Epoch 19/50
247/247 [==============================] - 138s 557ms/step - loss: 0.1713 - accuracy: 0.9306 - val_loss: 0.3730 - val_accuracy: 0.8729
Epoch 20/50
247/247 [==============================] - 137s 556ms/step - loss: 0.1608 - accuracy: 0.9344 - val_loss: 0.3212 - val_accuracy: 0.8934
Epoch 21/50
247/247 [==============================] - 137s 556ms/step - loss: 0.1571 - accuracy: 0.9359 - val_loss: 0.5124 - val_accuracy: 0.8621
Epoch 22/50
247/247 [==============================] - 138s 556ms/step - loss: 0.1580 - accuracy: 0.9355 - val_loss: 0.4492 - val_accuracy: 0.8603
Epoch 23/50
247/247 [==============================] - 137s 555ms/step - loss: 0.1633 - accuracy: 0.9331 - val_loss: 0.4170 - val_accuracy: 0.8788
Epoch 24/50
247/247 [==============================] - 138s 557ms/step - loss: 0.1702 - accuracy: 0.9307 - val_loss: 0.3855 - val_accuracy: 0.8848
Epoch 25/50
247/247 [==============================] - 137s 556ms/step - loss: 0.1535 - accuracy: 0.9369 - val_loss: 0.3852 - val_accuracy: 0.8747
Epoch 26/50
247/247 [==============================] - 138s 558ms/step - loss: 0.1529 - accuracy: 0.9371 - val_loss: 0.3917 - val_accuracy: 0.8889
Epoch 27/50
247/247 [==============================] - 138s 557ms/step - loss: 0.1294 - accuracy: 0.9459 - val_loss: 0.3610 - val_accuracy: 0.8926
Epoch 28/50
247/247 [==============================] - 138s 557ms/step - loss: 0.1178 - accuracy: 0.9501 - val_loss: 0.3781 - val_accuracy: 0.8981
Epoch 29/50
247/247 [==============================] - 138s 559ms/step - loss: 0.1152 - accuracy: 0.9513 - val_loss: 0.3947 - val_accuracy: 0.9003
Epoch 30/50
247/247 [==============================] - 138s 556ms/step - loss: 0.1125 - accuracy: 0.9524 - val_loss: 0.4426 - val_accuracy: 0.8859
Epoch 31/50
247/247 [==============================] - 138s 557ms/step - loss: 0.1123 - accuracy: 0.9524 - val_loss: 0.3981 - val_accuracy: 0.8980
Epoch 32/50
247/247 [==============================] - 138s 557ms/step - loss: 0.1028 - accuracy: 0.9561 - val_loss: 0.4096 - val_accuracy: 0.8956
Epoch 33/50
247/247 [==============================] - 138s 557ms/step - loss: 0.1272 - accuracy: 0.9475 - val_loss: 0.6545 - val_accuracy: 0.8146
Epoch 34/50
247/247 [==============================] - 137s 556ms/step - loss: 0.1922 - accuracy: 0.9232 - val_loss: 0.3564 - val_accuracy: 0.8911
Epoch 35/50
247/247 [==============================] - 137s 556ms/step - loss: 0.1216 - accuracy: 0.9488 - val_loss: 0.3669 - val_accuracy: 0.8990
Epoch 36/50
247/247 [==============================] - 137s 556ms/step - loss: 0.1018 - accuracy: 0.9567 - val_loss: 0.4074 - val_accuracy: 0.9003
Epoch 37/50
247/247 [==============================] - 138s 557ms/step - loss: 0.0915 - accuracy: 0.9607 - val_loss: 0.4186 - val_accuracy: 0.9014
Epoch 38/50
247/247 [==============================] - 138s 557ms/step - loss: 0.0861 - accuracy: 0.9629 - val_loss: 0.4500 - val_accuracy: 0.8994
Epoch 39/50
247/247 [==============================] - 137s 556ms/step - loss: 0.0839 - accuracy: 0.9640 - val_loss: 0.4503 - val_accuracy: 0.8985
Epoch 40/50
247/247 [==============================] - 138s 557ms/step - loss: 0.0818 - accuracy: 0.9648 - val_loss: 0.4977 - val_accuracy: 0.8980
Epoch 41/50
247/247 [==============================] - 138s 559ms/step - loss: 0.0790 - accuracy: 0.9660 - val_loss: 0.5005 - val_accuracy: 0.8978
Epoch 42/50
247/247 [==============================] - 138s 557ms/step - loss: 0.0775 - accuracy: 0.9666 - val_loss: 0.5119 - val_accuracy: 0.8987
Epoch 43/50
247/247 [==============================] - 137s 554ms/step - loss: 0.0761 - accuracy: 0.9673 - val_loss: 0.5335 - val_accuracy: 0.8979
Epoch 44/50
247/247 [==============================] - 137s 556ms/step - loss: 0.1036 - accuracy: 0.9571 - val_loss: 2.0554 - val_accuracy: 0.7759
Epoch 45/50
247/247 [==============================] - 137s 556ms/step - loss: 0.1771 - accuracy: 0.9292 - val_loss: 0.3852 - val_accuracy: 0.8856
Epoch 46/50
247/247 [==============================] - 138s 557ms/step - loss: 0.1049 - accuracy: 0.9556 - val_loss: 0.3739 - val_accuracy: 0.8970
Epoch 47/50
247/247 [==============================] - 137s 554ms/step - loss: 0.0826 - accuracy: 0.9645 - val_loss: 0.5067 - val_accuracy: 0.9004
Epoch 48/50
247/247 [==============================] - 138s 556ms/step - loss: 0.0713 - accuracy: 0.9692 - val_loss: 0.5100 - val_accuracy: 0.9005
Epoch 49/50
247/247 [==============================] - 137s 554ms/step - loss: 0.0669 - accuracy: 0.9711 - val_loss: 0.5402 - val_accuracy: 0.9009
Epoch 50/50
247/247 [==============================] - 137s 556ms/step - loss: 0.0635 - accuracy: 0.9726 - val_loss: 0.5471 - val_accuracy: 0.8999
###Markdown
Evaluate
###Code
import matplotlib.pyplot as plt
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label = 'val_loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.ylim([0, 2])
plt.legend(loc='upper right')
print(min(history.history['val_loss']))
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0, 1])
plt.legend(loc='lower right')
print(max(history.history['val_accuracy']))
import json
with open(DATA_ROOT_DIR + VERSION + '.json', 'w') as f:
data = {
'loss': history.history['loss'],
'val_loss': history.history['val_loss']
}
json.dump(data, f)
###Output
_____no_output_____
###Markdown
Show result
###Code
def mask_to_img(predict):
mask = numpy.argmax(predict, axis=-1)
# numpy.expand_dims() expand the shape of an array.
# Insert a new axis that will appear at the axis position in the expanded
# array shape.
mask = numpy.expand_dims(mask, axis = -1)
return PIL.ImageOps.autocontrast(keras.preprocessing.image.array_to_img(mask))
demo_data = OxfordPets(files[:20])
demo_res = m.predict(demo_data)
plt.figure(figsize=(8, 10))
for i in range(20):
plt.subplot(5, 4, i+1)
plt.xticks([])
plt.yticks([])
plt.imshow(keras.preprocessing.image.array_to_img(demo_data.__getitem__(0)[0][i]))
plt.imshow(mask_to_img(demo_res[i]), cmap='jet', alpha=0.3)
plt.show()
plt.close()
import shutil
val_loss = min(history.history['val_loss'])
shutil.copy(DATA_ROOT_DIR + VERSION + '.json', '/content/drive/MyDrive/Models/%s-%.4f.json' % (VERSION, val_loss))
shutil.copy(DATA_ROOT_DIR + VERSION + '.h5', '/content/drive/MyDrive/Models/%s-%.4f.h5' % (VERSION, val_loss))
###Output
_____no_output_____ |
GuicoRenzJulius.ipynb | ###Markdown
Classes with multiple objects
###Code
class Birds:
def __init__ (self, bird_name):
self.bird_name = bird_name
def flying_birds (self):
print (f"{self.bird_name} flies above the clouds")
def non_flying_birds (self):
print (f"{self.bird_name} is the national bird of the Philippines")
vulture = Birds ("Griffin Vulture")
crane = Birds ("Common Crane")
emu = Birds ("Philippine Eagle")
vulture.flying_birds()
crane.flying_birds()
emu.non_flying_birds()
###Output
Griffin Vulture flies above the clouds
Common Crane flies above the clouds
Philippine Eagle is the national bird of the Philippines
###Markdown
Encapsulation
###Code
class tol:
def __init__ (self, a, b):
self.a = a
self.b = b
def add (self):
return self.a + self.b
tol_object = tol (4, 8)
tol_object.a = 7
tol_object.add()
class counter:
def __init__(self):
self.current = 0
def increment(self):
self.current += 1
def value(self):
return self.current
def reset(self):
self.current = 0
counter = counter()
counter.increment()
counter.increment()
counter.increment()
print (counter.value())
print (counter.reset())
###Output
3
None
###Markdown
Inheritance
###Code
class Person:
def __init__ (self,fname,sname):
self.fname = fname
self.sname = sname
def printname (self):
print(self.fname, self.sname)
x = Person("Taylor","Smith")
x.printname()
class Teacher(Person):
pass
x = Teacher("George","Marquez")
x.printname()
###Output
Taylor Smith
George Marquez
###Markdown
Polymorphism
###Code
class RegularPolygon:
def __init__ (self, side):
self._side = side
class Square (RegularPolygon):
def area (self):
return self._side * self._side
class EquilateralTriangle (RegularPolygon):
def area (self):
return self._side * self._side * 0.433
obj1 = Square(4)
obj2 = EquilateralTriangle(3)
print (obj1.area())
print (obj2.area())
###Output
16
3.897
###Markdown
Class with Multiple Objects Application* Create a Python program that displays the names of 3 students (Student 1, Student 2, Student 3) and their grades.* Create a class named "Person" with attributes - std1, std2, std3, pre, mid, fin* Compute the average grade of each term using the Grade() method* Information about students' grades must be hidden from others - one simple way to hide them is sketched below
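Before the full program, here is a minimal sketch (the class and attribute names are illustrative, not the ones required by the assignment) of how Python's double-underscore name mangling can keep the grade attributes hidden from outside callers:
```python
# Minimal sketch (illustrative names): hiding grades with name mangling
class StudentRecord:
    def __init__(self, name, prelim, midterm, finals):
        self.name = name
        self.__prelim = prelim      # stored as _StudentRecord__prelim, hidden from casual access
        self.__midterm = midterm
        self.__finals = finals

    def Grade(self):
        # only the class itself touches the hidden attributes
        print(self.name, "'s weighted grades:",
              self.__prelim * 0.30, self.__midterm * 0.30, self.__finals * 0.40)

s = StudentRecord("Student 1", 90, 85, 88)
s.Grade()
# s.__prelim  # would raise AttributeError - the attribute is hidden from callers
```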
###Code
import random
class Person:
def __init__ (self, student, pl, mt, fnls):
self.student = student
self.pl = pl *0.30
self.mt = mt *0.30
self.fnls = fnls *0.40
def Grade (self):
print (self.student, "'s average grade is", self.pl, "in prelims")
print (self.student, "'s average grade is", self.mt, "in midterms")
print (self.student, "'s average grade is", self.fnls, "in finals")
std1 = Person ("Maurice", random.randint(70,99), random.randint(70,99), random.randint(70,99))
std2 = Person ("Layla", random.randint(70,99), random.randint(70,99), random.randint(70,99))
std3 = Person ("Jinkee", random.randint(70,99), random.randint(70,99), random.randint(70,99))
x = input(str("Choose a student to view his/her grade: "))
if x == "Person1":
std1.Grade()
else:
if x == "Person2":
std2.Grade()
else:
if x == "Person3":
std3.Grade()
else:
print("Student not found.")
###Output
Choose a student to view his/her grade: Person1
Maurice 's average grade is 28.5 in prelims
Maurice 's average grade is 28.5 in midterms
Maurice 's average grade is 35.2 in finals
|
04-generative-adversarial-networks.ipynb | ###Markdown
**Generative Adversarial Networks (GANs)** --- 0. Recommended Reading + Notebook Configuration **Required Reading**1. [Goodfellow et al 2014](https://arxiv.org/pdf/1406.2661.pdf)2. [Style GAN](https://arxiv.org/pdf/1812.04948.pdf)**Additional Reading**1. [DCGAN](https://arxiv.org/pdf/1511.06434.pdf)2. [Wasserstein GAN](https://arxiv.org/pdf/1701.07875.pdf)3. [Improved Wasserstein GAN](https://arxiv.org/pdf/1704.00028.pdf)4. [BigGAN](https://arxiv.org/pdf/1809.11096.pdf)5. [Progressive GAN](https://arxiv.org/pdf/1710.10196.pdf)6. [CycleGAN](https://arxiv.org/pdf/1703.10593.pdf)7. [SRGAN](https://arxiv.org/pdf/1609.04802.pdf)
###Code
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
###Output
_____no_output_____
###Markdown
1. Ian Goodfellow Invents GANs --- In 2014, graduate student Ian Goodfellow published this paper where he showed that, under a certain set of assumptions, if you solved the equation below, you could create fake data that was completely indistinguishable from real data.  Now, as is the case for many other interesting equations, we humans have no idea how to actually solve this equation in all but the most trivial cases. However, we have found some algorithms that allow us to compute approximate solutions. Goodfellow published one such algorithm in his paper, and using this approach, generated fake images that look…kinda crappy:  So why are we talking about a 2014 paper that presents an equation for generating fake data that no one knows how to solve, and an algorithm that allows us to compute poor approximate solutions? Well, if we jump forward 5 years to 2019, Ian Goodfellow is now a director of machine learning at Apple, and the approximate solutions to his equation have gotten better. A lot better. Which of these three images do you think is fake?   --- The answer is…all three. None of these are images of people. Further, clever variants of Goodfellow's idea have been used to do all kinds of other cool stuff, such as: - Fill in missing parts of images- Transform low-resolution images into realistic high-resolution versions (super resolution)- Generate captions from images- Generate images from captions- Compose interesting music- Synthesize raw audio- Synthetically age faces in images- Make fake videos of real people dancing- Generate training data for other machine learning models- Predict depth maps from images- Find anomalies in medical images- Compute the mapping from the style of one video to another Pretty crazy, right? Goodfellow called his approach a generative adversarial network, or GAN. Now, when I first learned about GANs, a couple of questions came to mind. First, how the heck do these things actually work? How exactly does solving Goodfellow's equation allow us to create such realistic fake data? How did GANs get so much better from 2014 to 2019? And secondly, what does all of this mean? What does it mean that, for arguably the first time in human history, we can create completely photorealistic images of people that don't exist? Are there limitations to GANs, and if so, what are they? I find that when trying to understand a new technology, there's really no substitute for building it. So that's what we're going to do here. Let's start by having a closer look at Goodfellow's equation. Goodfellow's Idea---  Generative Adversarial Networks are composed of two separate neural networks: a Generator (G) and a Discriminator (D). The Discriminator's job is to determine whether its input is real or fake. A good discriminator will return a value of 1 for real input data, and a value of 0 for fake input data. In practice, discriminators are exactly like the neural networks we've used to solve classification problems! The Generator's job is to transform random vectors into realistic fake data. The way GANs are trained is a bit different from the neural networks we've seen so far in this course. Instead of optimizing a single objective function, here we have **two networks with competing goals**. Goodfellow's equation above is a proposed way to express the competition mathematically. Goodfellow's equation makes use of the expected value $(\mathbb{E})$. Here, the expected value really just means to average over a bunch of examples.
Notice that if you remove the expected values in the equation above, we're left with something very similar to **Binary Cross Entropy**, with the minus sign removed. Thinking through each term above, we can see that $V(D, G)$ will be maximized when $D(x)=1$ and $D(G(z))=0$, meaning our discriminator perfectly classifies our real and fake images. Alternatively, our generator is doing well when $D(G(z))=1$ (and has no control over the $D(x)$ term), so a good generator will minimize $V(D, G)$. What Goodfellow proposes with $\min_G\max_D$ is that we find a Discriminator that maximizes V, while finding a Generator that minimizes V. If this sounds a little conflicting, it is! A more natural way to think about what Goodfellow is describing here is as **a game.** We have two players, $D$ and $G$. Each player can change its own parameters, let's call these $\theta^{(D)}$ and $\theta^{(G)}$, but has no control over the other player's parameters. The solution to the optimization problems we've seen so far is a local minimum, but the solution to this type of game is known as a **Nash Equilibrium**. What might our **Nash Equilibrium** look like in the parameter space of our models? We'll explore this idea shortly, but I encourage you to give it a little thought first - it's pretty cool! For lots more information on the GAN formulation check out [Ian Goodfellow's 2016 NIPS Tutorial](https://arxiv.org/pdf/1701.00160.pdf). 2. Let's Build the World's Simplest GAN--- Let's start really simple. Instead of creating full fake images, let's start with grayscale images that are only 1x1 pixels - so just one number! I know this seems trivial, but it will give us a sense for how the whole GAN works, and then we can expand up to full images. Instead of actually importing images, we'll make some "real data" by sampling from a Gaussian distribution.
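As a quick numerical sanity check (a sketch added here, not part of the original notebook), we can evaluate the bracketed quantity on a made-up batch of discriminator outputs and confirm that it is exactly the negative of binary cross entropy with targets of 1 for real data and 0 for fake data:
```python
# Sketch: V(D, G) on a toy batch is just negative binary cross entropy
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d_real = torch.rand(8) * 0.98 + 0.01   # pretend values of D(x), kept strictly inside (0, 1)
d_fake = torch.rand(8) * 0.98 + 0.01   # pretend values of D(G(z))

V = torch.log(d_real).mean() + torch.log(1 - d_fake).mean()    # batch-average value function
bce = F.binary_cross_entropy(d_real, torch.ones(8)) + \
      F.binary_cross_entropy(d_fake, torch.zeros(8))            # BCE with targets 1 (real), 0 (fake)

print(V.item(), -bce.item())   # the two numbers agree up to floating point error
```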
###Code
%pylab inline
import numpy as np
import torch
import torch.nn as nn
from tqdm import tqdm
from ipywidgets import interact
import matplotlib.gridspec as gridspec
#Set plotting style and minimum font size
from jupyterthemes import jtplot; jtplot.style()
plt.rcParams['figure.figsize'], plt.rcParams['lines.linewidth'] = (10,6), 2
for k in plt.rcParams:
if 'size' in k:
try: plt.rcParams[k] = 18
except:pass
n = 32 #How many examples do we want?
x = torch.randn(n) + 3 #"Real" Data for now, in practice this will be our real data.
x
fig = figure(0, (12, 4)); ax=fig.add_subplot(111)
scatter(x.numpy(), np.zeros(len(x)), marker='x', c='xkcd:purple'); xlim([0, 6]); grid(0)
ax.get_yaxis().set_ticks([0]), ax.get_yaxis().set_ticklabels(['Real Data $x$']);
###Output
_____no_output_____
###Markdown
Now, let's create some fake data. We'll start by sampling our random noise vector $z$ from a Gaussian distribution. Something we haven't discussed yet is that **we get to choose the size of z**. In this case, since we're trying to build the world's simplest GAN, we'll start with a z dimension of 1. Just as with our real data, we'll sample a batch of size $n$. It's worth noting that z is often called a **latent variable** in the context of GANs, and when we make changes to our random vector $z$, this is often referred to as working in the "latent space". This interpretation will become more clear as we get deeper into GANs.
###Code
z = torch.randn(n) #32 examples of noise z.
z
###Output
_____no_output_____
###Markdown
Now, to create fake data, we need a generator function. In the spirit of keeping things simple, our generator function will just have one single parameter, we'll call it $b_G$, that we add to our input data. So $G(z)=z+b_G$. Generators in successful GANs are typically neural networks, but the only strict requirement here is that $G$ is differentiable.
###Code
class Generator(nn.Module):
def __init__(self):
super().__init__()
self.b = torch.tensor([0.0])
def forward(self, x): return x+self.b
G = Generator() #We can now instantiate our Generator and create some fake data!
G(z)
def vis_data(ax, G, x, z):
scatter(x.numpy(), np.ones(len(x)), marker='x', c='xkcd:purple');
scatter(G(z).numpy(), np.zeros(len(x)), marker='x', c='xkcd:sea blue'); xlim([-3, 5]); ylim([-1,2]); grid(0)
ax.get_yaxis().set_ticks([0, 1]), ax.get_yaxis().set_ticklabels(['Fake Data $G(z)$', 'Real Data $x$']);
fig = figure(0, (12, 3)); ax=fig.add_subplot(111);
vis_data(ax, G, x, z)
###Output
_____no_output_____
###Markdown
Alright, so we now have some real data $x$, and some fake data $G(z)$. Based on the plot above, how do you think our generator is doing? How good is our fake data? Let's see how changing our Generator parameter $b_G$ changes our fake data:
###Code
def play_with_generator(b=0):
G.b.set_(torch.tensor(b,dtype=torch.float));
fig = figure(0, (12, 3)); ax=fig.add_subplot(111)
vis_data(ax, G, x, z)
interact(play_with_generator, b = (-4.0, 4.0))
###Output
_____no_output_____
###Markdown
Following the GAN approach here, we should use a discriminator function to separate our real from fake examples. Again, instead of using a full-blown neural network here, we'll start with a really simple function:
###Code
class Discriminator(nn.Module):
def __init__(self):
super().__init__()
self.b = torch.tensor([1.0])
def forward(self, x): return torch.exp(-(x - self.b)**2)
D = Discriminator()
def vis_discriminator(ax, D):
xs = torch.linspace(-5, 5, 100)
ax.plot(xs.numpy(), D(xs).numpy(), c='xkcd:goldenrod')
title('$D$')
def play_with_discriminator(b=0):
D.b.set_(torch.tensor(b,dtype=torch.float));
fig = figure(0, (12, 6)); ax=fig.add_subplot(111)
vis_discriminator(ax, D);
interact(play_with_discriminator, b = (-4.0, 4.0))
###Output
_____no_output_____
###Markdown
As you can see, our discriminator is a really simple bell-shaped curve with one parameter that shifts our curve to the left or right: $D(x)=e^{-(x-b_D)^2}$. Remember that our discriminator is supposed to return the probability that an input example $x$ is real. In the case of our super simple discriminator, the closer our input $x$ is to our single discriminator parameter $b_D$, the higher the returned probability of x being real is, reaching a maximum value of 1 when $x=b_D$. Now that we've chosen functions for our discriminator and generator, let's think about how the whole system works. $$ D(x)=e^{-(x-b_D)^2} $$$$ G(z)=z+b_G$$ $$\min_G \max_D V(D, G)=\mathbb{E}_{x\sim p_{data}(x)}[\log D(x)]+ \mathbb{E}_{z\sim p_z(z)}[\log(1 - D(G(z)))]$$
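As a tiny sanity check (not in the original notebook), we can evaluate the bell-shaped discriminator `D` defined above at a few points and see that inputs near $b_D$ get probabilities close to 1, while far-away inputs get probabilities close to 0:
```python
# Sketch: the bell-curve discriminator assigns high probability near b_D
with torch.no_grad():
    D.b.set_(torch.tensor([1.0]))                 # put the peak at b_D = 1
    test_points = torch.tensor([1.0, 0.5, 3.0])   # on the peak, nearby, far away
    print(D(test_points))                         # roughly 1.00, 0.78, 0.02
```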
###Code
def probe_value_function(D, G, x, z):
bs = torch.linspace(-1, 7, 64)
V = torch.zeros((len(bs), len(bs)))
for i, bG in enumerate(bs):
G.b.set_(bG)
for j, bD in enumerate(bs):
D.b.set_(bD)
V[i, j] = torch.mean(D(x)) + torch.mean(1-D(G(z))) #Ignore logs in value function for now
return meshgrid(bs, bs), V
def vis_value_function(ax, grids, V, bd=None, bg=None):
f=contourf(*grids, V); grid(0); colorbar();
if (bd, bg) is not (None, None): ax.scatter(bd, bg, color=(1, 0, 1), s=50)
title('$V(D, G)$'); xlabel('$b_D$'); ylabel('$b_G$');
grids, V = probe_value_function(D, G, x, z)
vis_value_function(ax, grids, V)
def play_with_gan(bd=0, bg=0):
G.b.set_(torch.tensor(bg,dtype=torch.float));
D.b.set_(torch.tensor(bd,dtype=torch.float));
gs = gridspec.GridSpec(3, 2)
fig = figure(0, (18, 6)); ax=fig.add_subplot(gs[:2, 0]); vis_discriminator(ax, D)
ax=fig.add_subplot(gs[2, 0]); vis_data(ax, G, x, z)
ax=fig.add_subplot(gs[:, 1]); vis_value_function(ax, grids, V, bd, bg)
interact(play_with_gan, bd=(-4.0, 4.0), bg=(-4.0, 4.0));
###Output
_____no_output_____
###Markdown
$$\min_G \max_D V(D, G)= \frac{1}{m} \sum_{i=1}^{m} D(x_i)+ \frac{1}{m} \sum_{i=1}^{m} \big( 1 - D(G(z_i)) \big)$$ Let's try to get a sense for Goodfellow's equation using our interactive plot. Note that we've removed the logs from Goodfellow's equation to make the visual more clear - the logs are mostly here for computational reasons - the logs make computing these values on a computer easier, but do not change the solution to the equation. Remember that Goodfellow's equation is asking us to do two things at once: 1. Find the generator $G$ that minimizes $V$. 2. Find the discriminator $D$ that maximizes $V$. In Goodfellow's original formulation, $G$ and $D$ are expressed as arbitrary functions. However, in practice (as Goodfellow goes on to lay out in his paper), we fix $D$ and $G$ to be some specific functions (like a specific neural network), and search for the parameters of our functions, instead of searching across all functions (because we have no tractable way of doing this!). For us, since we've chosen really simple functions that only have one parameter each, we only have two parameters to tune! Notice how changing each of our variables changes our plot above, especially the value function (the magenta point) in the value function plot to the right. Let's try to achieve what Goodfellow's equation is asking us to do manually. Let's start with the $\max_D$ part: if we adjust our Discriminator parameter $b_D$, notice that as we increase $b_D$, our discriminator curve moves more on top of our data, increasing the value of $\frac{1}{m} \sum_{i=1}^{m} D(x_i)$, while moving away from our fake data, decreasing the value of $\frac{1}{m} \sum_{i=1}^{m} D(G(z_i))$. The net effect here is to increase our value function. We can see this clearly on our heatmap plot, reaching a maximum around $b_D=3$. Alright, so we've done half of what Goodfellow has asked us to do! For at least this one Generator we've found $\max_D V(D, G)$ by setting $b_D$ to approximately 3. Now, of course, we're only half way there! Let's now consider the $\min_G$ part of Goodfellow's equation. This part asks us to find the value of $b_G$ that *minimizes* V. If we start fiddling with $b_G$, notice that increasing $b_G$ brings our value function down. This is because we're moving more of our fake data beneath the large valued parts of our discriminator, increasing the value of $\frac{1}{m} \sum_{i=1}^{m} D(G(z_i))$, decreasing $V$, and "fooling" our discriminator with more realistic fake data! $V$ reaches a minimum when $b_G$ is around 3, by no coincidence, right when our fake data ends up directly below our discriminator curve! OK, so with our "brute force" method, we've solved Goodfellow's equation! We've found a Discriminator configuration that maximizes $V$, while simultaneously finding a Generator configuration that minimizes $V$. Pretty cool, right? This type of solution is called a **Nash Equilibrium**, because we've found a solution where if either player **unilaterally** changes their strategy, they will be worse off. In the parameter space of our value function, Nash equilibria look like saddle points! So while in most machine learning problems we're looking for local (or global :)) minima, with GANs we're looking for something different. We're looking for points in the parameter space where our functions are **in balance**. We're looking for Nash Equilibria. Now, a good follow-up question is...why? Why do we care about finding Nash Equilibria in the parameter space of this value function?
Well, as we mentioned at the beginning of this lecture, it turns out that if you're able to find this solution, to solve Goodfellow's equation, **your fake data will be indistinguishable from your real data**. That is, as Goodfellow goes on to prove in his paper, if you find this solution, the fake data that comes out of your Generator will have the same exact probability distribution as your real data. This is pretty boring in one dimension as in the example above, but in the case of high-dimensional data, such as images, this result is remarkable. Now, before we get too excited here, there are a couple of caveats. For one, Goodfellow's proof only holds up for the set of continuous functions, not for parameterized functions like neural networks. But more importantly, **we haven't figured out how to find these Nash Equilibria in the general case!** In our toy example above, we found the solution by hand - a strategy that definitely won't work in many dimensions over 2! Fortunately, Goodfellow also presents a computationally feasible strategy for finding these Nash Equilibria:  Let's spend some time breaking down Goodfellow's algorithm. First, in the algorithm description, notice that Goodfellow is proposing we use stochastic gradient descent to find a solution to our equation, just as we do when training supervised neural networks. So the idea is we'll take lots of small steps along the gradients of our Discriminator and Generator (with respect to their parameters). Specifically, we start by sampling a "minibatch of noise", and a minibatch from the data generating distribution $p_{data}(x)$. In practice, this just means that we need to take a random sample of examples from our dataset, just as we do when training a supervised neural network. After sampling our data, we compute the gradient of our value function with respect to our discriminator parameters, and use this gradient to update our discriminator parameters. Next, we sample another "random minibatch", and compute the gradient of our value function again, but this time with respect to our Generator parameters. Notice that Goodfellow has omitted the $\log D(x^{(i)})$ term from the value function when computing the gradient of our generator - hopefully this makes sense - that term's gradient would be $0$, because changing our generator parameters has no impact on our real data $x$. Let's implement Goodfellow's algorithm in PyTorch!
###Code
class Generator(nn.Module):
def __init__(self):
super().__init__()
self.b = nn.Parameter(torch.tensor([0.0])) #Making this an nn.Parameter now, gradient will be computed
def forward(self, x): return x+self.b
class Discriminator(nn.Module):
def __init__(self):
super().__init__()
self.b = nn.Parameter(torch.tensor([1.0]))
def forward(self, x): return torch.exp(-(x - self.b)**2)
num_steps=512; lr=0.1 #Let's run goodfellow algorithm for num_steps at a learning rate of lr
D=Discriminator(); G=Generator()
n = 32 #How many examples do we want?
x = torch.randn(n) + 3 #"Real" Data for now, in practice this will be our real data.
z = torch.randn(n) #Note that goodfellow's algorithm says we should re-sample this each time, for now, we'll just sampel once.
params = {'G':[],'D':[]} #Keep track of parameters while training
for i in tqdm(range(num_steps)):
#Train Discriminator
V=torch.mean(D(x)) + torch.mean(1-D(G(z))) #Leave logs out for now.
V.backward() #Perform backprop
with torch.no_grad():
D.b+=D.b.grad*lr #Gradient Ascent
params['D'].append(D.b.item()) #Log Paremeters
D.zero_grad(); G.zero_grad();
#Train Generator
V=torch.mean(1-D(G(z))) #Leave logs out for now.
V.backward() #Perform backprop
with torch.no_grad():
G.b-=G.b.grad*lr #Gradient Descent
params['G'].append(G.b.item()) #Log Paremeters
D.zero_grad(); G.zero_grad();
with torch.no_grad():
grids, V = probe_value_function(D, G, x, z)
gs = gridspec.GridSpec(3, 2)
fig = figure(0, (18, 6)); ax=fig.add_subplot(gs[:2, 0]);
with torch.no_grad():
D.b.set_(torch.tensor(params['D'][i]))
vis_discriminator(ax, D)
ax=fig.add_subplot(gs[2, 0]);
with torch.no_grad():
G.b.set_(torch.tensor(params['G'][i]))
vis_data(ax, G, x, z)
ax=fig.add_subplot(gs[:, 1]); vis_value_function(ax, grids, V)
plot(params['D'][:i+1], params['G'][:i+1], color=(1,0,1), linewidth=3)
scatter(params['D'][i], params['G'][i], color=(1,0,1), s=50)
###Output
_____no_output_____
###Markdown
Alright, as you can see above, using Goodfellow's algorithm, we found the Nash Equilibrium in our value function! Further, our fake data distribution has been shifted by our generator to be close to our real data. Let's watch Goodfellow's algorithm in action:   And of course, in 3D :)  Pretty cool, right!? We've built a really simple GAN. As you can see, using Goodfellow's Gradient Descent algorithm, we have found our Nash equilibrium, and our fake data distribution looks pretty close to our real data! Now, this is of course just a really simple 1-dimensional example - we could have solved this problem with much simpler methods! But what's really remarkable is how well **Goodfellow's approach scales to higher dimensional data.** We'll explore this next.  3. A Dive into Higher Dimensions--- Alright, so we've seen how to apply Goodfellow's algorithm in one dimension - now our job is to apply Goodfellow's approach to much more interesting, higher dimensional data. Let's start with a relatively simple dataset, the MNIST handwritten digit dataset. 3.1 The Data
###Code
%pylab inline
#Let's automate some data loading with fastai vision (a torchvision replacement)
from fastai.vision import *
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') #Do we have a GPU?
defaults.device = device
print(device)
###Output
cpu
###Markdown
We'll use the super handy [ImageDataBunch](https://docs.fast.ai/vision.data.htmlImageDataBunch) from fastai to handle our data. We'll also go ahead and choose a batch size, and normalize our data:
###Code
batch_size, im_size, channels = 128, 32, 1
path = untar_data(URLs.MNIST)
tfms = ([*rand_pad(padding=3, size=im_size, mode='zeros')], [])
data = ImageList.from_folder(path, convert_mode='L').split_none() \
.label_empty() \
.transform(tfms, size = im_size) \
.databunch(bs=batch_size)
data.show_batch(figsize=(8, 8))
x, y=data.one_batch()
hist(x.numpy().ravel(),100);
###Output
_____no_output_____
###Markdown
3.2 Model Architecture As you may have guessed, our simple 1-parameter models from our Toy GAN really aren't going to cut it here. We need a more sophisticated generator and discriminator to make fake handwritten digits. We'll use an architecture similar to the architecture Goodfellow used in his original work.
###Code
import torch
import torch.nn as nn
###Output
_____no_output_____
###Markdown
3.2.1 Generator Our generator may look different from the neural networks you may have seen before. The reason for this is that, unlike most neural networks that map from a high-dimensional to a low-dimensional space (e.g. 224x224x3 color images to 10 discrete classes), our Generator needs to map from a low-dimensional random noise vector to a higher dimensional image. Goodfellow used random noise vectors of length 100 (the size we use in the code below), and for the most part, researchers continue to use noise vectors around this size. The challenge here is that we would like to map from smaller to bigger images, and would still like to use convolutions for their speed, efficiency, and proven effectiveness, but the traditional convolution gives us no means of producing a larger image than we started with. Fortunately, someone much smarter than me has already thought of this, and developed a method for doing exactly this: **transposed** or **fractionally strided** convolutions. There's a [terrific paper](https://github.com/vdumoulin/conv_arithmetic) on this by Vincent Dumoulin and Francesco Visin; I've borrowed an animation from their work below. No padding, no strides, transposed Arbitrary padding, no strides, transposed Half padding, no strides, transposed Full padding, no strides, transposed No padding, strides, transposed Padding, strides, transposed Padding, strides, transposed (odd)
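To make the upsampling behaviour concrete, here is a small shape-check sketch (an illustration added here, separate from the generator defined below): with `kernel_size=4, stride=1, padding=0` a transposed convolution turns a 1x1 input into a 4x4 output, and with `kernel_size=4, stride=2, padding=1` it doubles the spatial size.
```python
# Sketch: how ConvTranspose2d grows spatial dimensions
import torch
import torch.nn as nn

z_like = torch.randn(1, 100, 1, 1)                                    # a z-shaped input
up1 = nn.ConvTranspose2d(100, 64, kernel_size=4, stride=1, padding=0)
up2 = nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1)
print(up1(z_like).shape)         # torch.Size([1, 64, 4, 4])  -> 1x1 becomes 4x4
print(up2(up1(z_like)).shape)    # torch.Size([1, 32, 8, 8])  -> 4x4 becomes 8x8
```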
###Code
def conv_trans(ni, nf, ks = 4, stride = 2, padding = 1):
return nn.Sequential(
nn.ConvTranspose2d(ni, nf, kernel_size=ks, bias=False, stride=stride, padding = padding),
nn.ReLU(inplace = True))
G = nn.Sequential(
conv_trans(100, 512, ks=4, stride=1, padding=0),
conv_trans(512, 256),
conv_trans(256, 128),
nn.ConvTranspose2d(128, channels, 4, stride=2, padding=1),
nn.Sigmoid()).to(device)
z = torch.randn(batch_size, 100, 1, 1)
fake = G(z.to(device))
imshow(fake[0, 0].cpu().detach().numpy()); grid(0)
fake.shape
###Output
_____no_output_____
###Markdown
So there you have it, a fake image! Doesn't look too convincing yet, but of course, we haven't trained our GAN yet! 3.2.2 Discriminator Our discriminator will be much closer to the deep models we've seen before; we'll use 4 convolutional layers.
###Code
def conv(ni, nf, ks=4, stride=2, padding=1):
return nn.Sequential(
nn.Conv2d(ni, nf, kernel_size=ks, bias=False, stride=stride, padding=padding),
nn.ReLU(inplace = True))
D = nn.Sequential(
conv(channels, 128),
conv(128, 256),
conv(256, 512),
nn.Conv2d(512, 1, 4, stride=1, padding=0),
Flatten(),
nn.Sigmoid()).to(device)
D(fake).shape
###Output
_____no_output_____
###Markdown
3.3 Training Loop
###Code
from torch import optim
from tqdm import tqdm
from IPython import display
import matplotlib.gridspec as gridspec
def show_progress():
'''Visualization method to see how were doing'''
clf(); fig = figure(0, (24, 12)); gs = gridspec.GridSpec(6, 12)
with torch.no_grad(): fake = G(z_fixed)
for j in range(30):
fig.add_subplot(gs[(j//6), j%6])
imshow(fake[j, 0].detach().cpu().numpy(), cmap='gray'); axis('off')
ax=fig.add_subplot(gs[5, :4]); hist(fake.detach().cpu().numpy().ravel(), 100, facecolor='xkcd:crimson')
ax.get_yaxis().set_ticks([]); xlabel('$G(z)$'); xlim([-1, 1])
ax=fig.add_subplot(gs[5, 4:7]); hist(x.cpu().numpy().ravel(), 100, facecolor='xkcd:purple')
ax.get_yaxis().set_ticks([]); xlabel('$x$')
fig.add_subplot(gs[:,7:])
plot(losses[0], 'xkcd:goldenrod'); plot(losses[1], color='xkcd:sea blue'); legend(['Discriminator', 'Generator'],loc=1);
grid(1); title('Epoch = ' + str(epoch)); ylabel('loss'); xlabel('iteration');
display.clear_output(wait=True); display.display(gcf())
###Output
_____no_output_____
###Markdown
Unlike our toy example, instead of regular SGD we're going to use the **Adam Optimizer**. We'll use a separate PyTorch optimizer for our discriminator and generator:
###Code
optD = optim.Adam(D.parameters(), lr=2e-4, betas = (0.5, 0.999))
optG = optim.Adam(G.parameters(), lr=2e-4, betas = (0.5, 0.999))
###Output
_____no_output_____
###Markdown
Now, we need to make a few more changes to our Toy GAN. First, we need to spend a little time on the **Value Function**. In practice, our Toy GAN will struggle to learn on real data because of the way our value function behaves when our generator creates very bad fake data. Goodfellow points this out in his original publication. The idea here is that the Generator loss function saturates when $D(G(z))$ is close to zero, and does not provide much gradient to learn from. Goodfellow proposes a heuristic solution, replacing $\log(1-D(G(z)))$ with $-\log(D(G(z)))$. Both curves monotonically decrease as $D(G(z))$ increases, but $-\log(D(G(z)))$ gives us a much stronger gradient when $D(G(z))$ is close to zero (helping our generator learn more quickly when its predictions are very different from real data).  Finally, before we train our GAN, we need to be a little careful about how we implement our loss function. Instead of directly implementing our loss function explicitly in PyTorch as `torch.mean(torch.log(D(x))) + torch.mean(torch.log(1 - D(G(z))))`, we'll use PyTorch's [`nn.BCELoss()`](https://pytorch.org/docs/stable/nn.htmlbceloss). `nn.BCELoss` is more stable computationally, and in my experience can really make the difference between a model that converges and one that doesn't. To use `nn.BCELoss`, we'll break up the cost function into two parts, one for our real data and one for our fake data, and add these two parts together before taking our gradient. This is where libraries like PyTorch really shine: just as a result of us adding the two parts of our loss function together, PyTorch knows how to backpropagate errors to each portion of our loss function! It's worth taking some time walking through the training code below to make sure each step makes sense. We'll visualize our GAN performance as we train. Measuring and visualizing GAN performance can be difficult, because there's no unique measure of how realistic our fake data is (if there was, we could just optimize this directly!). For this reason, it's become popular to visualize generated images during training to get an idea of how our GAN is doing. We'll do this for a fixed set of latent vectors, `z_fixed`; visualizing the output of our generator from a fixed set of inputs will make our training animation easier to follow. We'll also visualize the distribution of a batch of our training and synthetic data, and finally, we'll plot our discriminator and generator loss functions. Note that in Goodfellow's original GAN formulation, our Discriminator and Generator loss were the same quantity (the value function)! But our modified formulation means that our Generator loss is a bit different from our Discriminator loss. However, our losses still quantify opposing goals, so a lower discriminator loss will correspond to a higher generator loss and vice versa.
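One way to convince yourself that the `nn.BCELoss` formulation matches the heuristic above (a small check added here, not part of the training code): with targets of all ones, `nn.BCELoss` applied to $D(G(z))$ reduces to exactly $-\log(D(G(z)))$ averaged over the batch.
```python
# Sketch: BCELoss with all-ones targets equals the -log(D(G(z))) generator heuristic
import torch
import torch.nn as nn

d_of_gz = torch.tensor([0.1, 0.4, 0.9])              # pretend discriminator scores on fake images
bce = nn.BCELoss()(d_of_gz, torch.ones(3))           # what the generator update below minimizes
manual = -torch.log(d_of_gz).mean()                  # the heuristic loss written out by hand
print(bce.item(), manual.item())                     # identical values
```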
###Code
criterion = nn.BCELoss()
zero_labels = torch.zeros(batch_size).to(device)
ones_labels = torch.ones(batch_size).to(device)
epochs, viz_freq = 1, 50
z_fixed = torch.randn(batch_size, 100, 1, 1).to(device) #Let's use a fixed latent vector z for viz
losses = [[],[]] #Keep track of losses
for epoch in range(epochs):
for i, (x,y) in enumerate(tqdm(data.train_dl)):
#Train Discriminator
requires_grad(G, False); #Speeds up training a smidge
z = torch.randn(batch_size, 100, 1, 1).to(device)
l_fake = criterion(D(G(z)).view(-1), zero_labels)
l_real = criterion(D(x).view(-1), ones_labels)
loss = l_fake + l_real
loss.backward(); losses[0].append(loss.item())
optD.step(); G.zero_grad(); D.zero_grad();
#Train Generator
requires_grad(G, True);
z = torch.randn(batch_size, 100, 1, 1).to(device)
loss = criterion(D(G(z)).view(-1), ones_labels)
loss.backward(); losses[1].append(loss.item())
optG.step(); G.zero_grad(); D.zero_grad();
if i%viz_freq == 0: show_progress()
###Output
_____no_output_____
###Markdown
This can be pretty slow on a CPU machine, so here's an animation of our GAN training:  Note that the 30 images in our animation are supposed to show our fake data! As you likely noticed, these look nothing like real handwritten digits. Our GAN is not creating realistic fake data. Notice that our Discriminator and Generator stop learning after a little while, and our generated images just look black! So, what's going on here? Well, it turns out that GANs, especially the early variants, are notoriously difficult to train. While Goodfellow's paper provides some convergence guarantees for GANs in theory, these are only true in the space of truly continuous functions, not parameterized functions like our Neural Networks. Fortunately, since the publication of Goodfellow's idea, a large amount of effort has gone into creating more stable variants of GANs. One of the first improvements, DCGAN, was published in 2015 by [Alec Radford, Luke Metz, and Soumith Chintala.](https://arxiv.org/abs/1511.06434) 4. DCGAN to the Rescue---  In late 2015, Radford, Metz, and Chintala published what is now known as DCGAN, one of the first GANs that could be trained by a mere mortal. Perhaps their most valuable contribution is the above set of guidelines for training GANs. Like a lot of deep learning research, these GAN recommendations appear to be largely based on empirical investigations - really "tricks of the trade" that work well in practice. Let's apply these architecture guidelines to our GAN! We'll also borrow the generator architecture that the DCGAN authors use for the LSUN Bedroom dataset:  First, we'll change our image size to 64x64 to match the architecture used in the DCGAN paper; this is really easy with fastai. We'll also normalize our data to be between -1 and 1. We're doing this because, following the DCGAN paper, we're going to use a Tanh function on the output of our generator, making our synthetic image values fall between -1 and 1 (we used a sigmoid function in our first generator).
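One DCGAN detail the cells below do not apply is explicit weight initialization; the paper recommends drawing weights from a zero-centered Normal with standard deviation 0.02. The helper below is only a hedged sketch of that idea (the BatchNorm choice is a common companion convention, not something this notebook uses), and could optionally be applied with `G.apply(dcgan_init)` and `D.apply(dcgan_init)` once the models are built.
```python
# Optional sketch: DCGAN-style weight initialization (not used by the cells below)
import torch.nn as nn

def dcgan_init(m):
    if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
        nn.init.normal_(m.weight, 0.0, 0.02)     # paper: zero-centered Normal, std 0.02
    elif isinstance(m, nn.BatchNorm2d):
        nn.init.normal_(m.weight, 1.0, 0.02)     # common choice for the BatchNorm scale
        nn.init.zeros_(m.bias)
```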
###Code
batch_size, im_size, channels = 128, 64, 1
path = untar_data(URLs.MNIST)
tfms = ([*rand_pad(padding=3, size=im_size, mode='zeros')], [])
data = ImageList.from_folder(path, convert_mode='L').split_none() \
.label_empty() \
.transform(tfms, size = im_size) \
.databunch(bs=batch_size) \
.normalize((0.5, 0.5))
###Output
_____no_output_____
###Markdown
Next, we'll add Batch Normalization to our Generator and Discriminator. [Batch Normalization](https://arxiv.org/pdf/1502.03167.pdf) is a simple but very effective technique for normalizing our data as it passes through our network, speeding up training, and sometimes making training more stable and improving generalization. Since its introduction in 2015, Batch Normalization has become common practice in deep models.
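A quick illustration of what Batch Normalization does to activations (a standalone sketch, unrelated to the models below): each channel is standardized using batch statistics, so before the learned scale and shift kick in, the output has roughly zero mean and unit variance per channel.
```python
# Sketch: BatchNorm2d standardizes each channel over the batch
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(3)                        # 3 feature channels
feats = torch.randn(16, 3, 8, 8) * 5 + 2      # activations with mean ~2 and std ~5
with torch.no_grad():
    out = bn(feats)                           # training mode: uses batch statistics
    per_channel = out.permute(1, 0, 2, 3).reshape(3, -1)
    print(per_channel.mean(dim=1))            # per-channel means, all close to 0
    print(per_channel.std(dim=1))             # per-channel stds, all close to 1
```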
###Code
def conv_trans(ni, nf, ks = 4, stride = 2, padding = 1):
return nn.Sequential(
nn.ConvTranspose2d(ni, nf, kernel_size=ks, bias=False, stride=stride, padding = padding),
nn.BatchNorm2d(nf),
nn.ReLU(inplace = True))
G = nn.Sequential(
conv_trans(100, 1024, ks=4, stride=1, padding=0),
conv_trans(1024, 512),
conv_trans(512, 256),
conv_trans(256, 128),
nn.ConvTranspose2d(128, channels, 4, stride=2, padding=1),
nn.Tanh()).to(device)
def conv(ni, nf, ks=4, stride=2, padding=1):
return nn.Sequential(
nn.Conv2d(ni, nf, kernel_size=ks, bias=False, stride=stride, padding=padding),
nn.BatchNorm2d(nf),
nn.LeakyReLU(0.2, inplace = True))
D = nn.Sequential(
conv(channels, 128),
conv(128, 256),
conv(256, 512),
conv(512, 1024),
nn.Conv2d(1024, 1, 4, stride=1, padding=0),
Flatten(),
nn.Sigmoid()).to(device)
###Output
_____no_output_____
###Markdown
Alright, after all that work, we're ready to train our DCGAN.
###Code
zero_labels = torch.zeros(batch_size).to(device)
ones_labels = torch.ones(batch_size).to(device)
losses = [[],[]]
epochs, viz_freq, count = 1, 25, 0
z_fixed = torch.randn(batch_size, 100, 1, 1).to(device)
for epoch in range(epochs):
for i, (x,y) in enumerate(tqdm(data.train_dl)):
#Train Discriminator
requires_grad(G, False); #Speeds up training a smidge
z = torch.randn(batch_size, 100, 1, 1).to(device)
l_fake = criterion(D(G(z)).view(-1), zero_labels)
l_real = criterion(D(x).view(-1), ones_labels)
loss = l_fake + l_real
loss.backward(); losses[0].append(loss.item())
optD.step(); G.zero_grad(); D.zero_grad();
#Train Generator
requires_grad(G, True);
z = torch.randn(batch_size, 100, 1, 1).to(device)
loss = criterion(D(G(z)).view(-1), ones_labels)
loss.backward(); losses[1].append(loss.item())
optG.step(); G.zero_grad(); D.zero_grad();
if i%viz_freq == 0: show_progress()
count+=1
###Output
0%| | 0/546 [00:00<?, ?it/s][A
###Markdown
--- And a faster animation of the same model:  Not bad, right?! I think the beginning of this training is really, really cool, so let's watch it a bit slower.  As you can see, for the first 100 iterations or so, our discriminator completely "wins". During this time, our generator hops around randomly until, around iteration 100, it finds some simple examples that fool the discriminator! From there, the generator and discriminator battle back and forth, making our synthetic images better and better in the process. Finally, here's the same architecture trained on the LSUN bedroom dataset:
###Code
from IPython.display import YouTubeVideo
YouTubeVideo('xxXbJ8-XmHU', width=1000, height=562)
###Output
_____no_output_____
###Markdown
6. GANs Grow Up (Sort of)--- 1. [DCGAN](https://arxiv.org/pdf/1511.06434.pdf)2. [Wasserstein GAN](https://arxiv.org/pdf/1701.07875.pdf)3. [Improved Wasserstein GAN](https://arxiv.org/pdf/1704.00028.pdf)4. [BigGAN](https://arxiv.org/pdf/1809.11096.pdf)5. [Progressive GAN](https://arxiv.org/pdf/1710.10196.pdf)6. [Style GAN](https://arxiv.org/pdf/1812.04948.pdf) 7. StyleGAN Insanity - As of 2019, some of the most photo-realistic fake images out there are being generated by [Style GAN](https://arxiv.org/pdf/1812.04948.pdf). - Style GAN uses a different formulation for the random noise vector, allowing noise/randomness to be injected at different "scales". - Let's explore one **fascinating** way to think about what GANs are doing, by exploring the mapping between the latent space and the image space of a trained generator. - We'll use StyleGAN for this, but we could use other GANs. - There are a bunch of dependencies, so we'll do this in a separate repo~~~git clone https://github.com/stephencwelch/stylegan-encoder.git~~~- Once you have your repo + environment set up, check out `Play_with_latent_directions.ipynb`!- Note that you will need a GPU for this to work! - If you just want a quick taste of latent-space exploration without that setup, a small sketch using the DCGAN generator from this notebook follows below.
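As a lightweight stand-in for the StyleGAN notebook (a sketch that reuses the DCGAN generator `G` and `device` defined earlier in this notebook, not StyleGAN itself), we can walk a straight line between two latent vectors and watch the generated image morph along the way:
```python
# Sketch: linear interpolation in the latent space of the DCGAN generator trained above
import torch
import matplotlib.pyplot as plt

z_a, z_b = torch.randn(2, 1, 100, 1, 1).to(device)   # two random latent vectors
steps = 8
plt.figure(figsize=(16, 2))
with torch.no_grad():
    for i, t in enumerate(torch.linspace(0, 1, steps).tolist()):
        z = (1 - t) * z_a + t * z_b                   # walk the line between z_a and z_b
        img = G(z)[0, 0].cpu().numpy()
        plt.subplot(1, steps, i + 1)
        plt.imshow(img, cmap='gray'); plt.axis('off')
plt.show()
```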
###Code
from IPython.display import YouTubeVideo
YouTubeVideo('kSLJriaOumA', width=1000, height=562)
###Output
_____no_output_____ |
Code/faers_select_patients.ipynb | ###Markdown
Select patients from FAERS
###Code
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
import feather
from utils import Utils
from database import Database
from drug import Drug
u = Utils()
db = Database('Mimir from Munnin')
np.random.seed(u.RANDOM_STATE)
from IPython.display import display
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
def set_style():
plt.style.use(['seaborn-white', 'seaborn-paper'])
matplotlib.rc("font", family="Arial")
set_style()
###Output
_____no_output_____
###Markdown
Find valid patients
###Code
# get patients that have sex
q_patients_w_sex = "SELECT safetyreportid, patient_sex, patient_custom_master_age FROM effect_openfda_19q2.patient WHERE patient_sex='Female' OR patient_sex='Male'"
res_patients_w_sex = db.run_query(q_patients_w_sex)
# make dataframe
df_patients = pd.DataFrame(res_patients_w_sex, columns=['PID','Sex','Age'])
# replace 'Male' 'Female' with M, F
df_patients = df_patients.replace('Female','F').replace('Male','M')
# age as int, replace missing with '-1'
df_patients = df_patients.replace('','-1').astype({'Age': 'float'}).astype({'Age': 'int'})
# age missing
age_missing = df_patients.query('Age == -1').count()[0]/len(df_patients)
print("Age Missing: ", 100*age_missing, "%")
# age distribution
plt.hist(df_patients.Age, bins=10000)
plt.title('Age Distribution')
x=150000
plt.ylim(0,x)
plt.xlim(0,110)
plt.vlines(18,0,x,'r')
plt.vlines(85,0,x,'r')
plt.show()
# impute missing age with mean
age_mean = df_patients.query('Age != -1').get('Age').mean()
print("Mean Age before imputing: ", age_mean)
df_patients = df_patients.replace(-1,int(age_mean))
print("Mean Age after imputing: ", df_patients.get('Age').mean())
# remove age below 18 and above 85
print("Underage/Overage removed: ", (len(df_patients.query('Age<18'))+len(df_patients.query('Age>85')))/len(df_patients), "%")
df_patients = df_patients.query('Age>=18 and Age<=85')
#db.run_query("DROP TABLE IF EXISTS user_pc2800.patient")
q_create_valid_patients = "CREATE TABLE user_pc2800.patient (PID VARCHAR(255) NOT NULL, PRIMARY KEY (PID), FOREIGN KEY (PID) REFERENCES effect_openfda_19q2.patient(safetyreportid) );"
db.run_query(q_create_valid_patients)
db.close_connection()
win = 100000
for x in range(0, int(len(valid_PID)/win)+1):
q_insert_valid_patients = "INSERT INTO user_pc2800.patient (PID) VALUES "
for PID in valid_PID[x*win:x*win+win]:
q_insert_valid_patients += "('"+PID+"'), "
q_insert_valid_patients = "".join(list(q_insert_valid_patients)[:-2])
res_insert_valid_patients = db.run_query(q_insert_valid_patients)
u.save_df(df_patients, 'df_patients')
valid_PID = df_patients.get('PID').to_numpy()
u.save_np(valid_PID, "valid_PID")
###Output
_____no_output_____
###Markdown
Load valid patients
###Code
df_patients = u.load_df('df_patients')
valid_PID = u.load_np("valid_PID")
print("Female patients: ", 100*df_patients.query('Sex=="F"').shape[0]/df_patients.shape[0], "%")
###Output
_____no_output_____ |
examples/baccus_example_nb.ipynb | ###Markdown
Generate data
###Code
#2d gaussian:
mean = [0., 0.]
cov = [[0.25, 0.02], [0.02, 0.04]] # diagonal covariance
x1, y1 = np.random.multivariate_normal(mean, cov, 500).T
sx1 = np.ones(500)*0.3
sy1 = np.ones(500)*0.2
#2d gaussian:
mean = [-1., 1]
cov = [[0.04, -0.005], [-0.005, 0.01]] # diagonal covariance
x2, y2 = np.random.multivariate_normal(mean, cov, 200).T
sx2 = np.ones(200)*0.2
sy2 = np.ones(200)*0.05
#2d gaussian:
mean = [2., -2]
cov = [[0.04, -0.005], [-0.005, 0.01]] # diagonal covariance
x3, y3 = np.random.multivariate_normal(mean, cov, 300).T
sx3 = np.ones(300)*0.02
sy3 = np.ones(300)*0.05
data1 = np.stack((x1,y1,sx1,sy1),axis=1)
data2 = np.stack((x2,y2,sx2,sy2),axis=1)
data3 = np.stack((x3,y3,sx3,sy3),axis=1)
DATA = [data1,data2,data3]
###Output
_____no_output_____
###Markdown
Plot data
###Code
plt.scatter(x1,y1,color='r',label='Data 1')
plt.scatter(x2,y2,color='g',label='Data 2')
plt.scatter(x3,y3,color='k',label='Data 3')
plt.legend(fontsize=15)
plt.xlabel(r'$x$',fontsize=24)
plt.ylabel(r'$y$',fontsize=24)
plt.tick_params(axis='both',width=1,length=8,labelsize=15)
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Define lkl function
###Code
def lkl_norescale(theta,DATA):
x = theta[0]
y = theta[1]
count = 0
npar = 2
log_lkl = 0
for i in range(0,len(DATA)):
shift_x = theta[npar+count]
count += 1
shift_y = theta[npar+count]
count += 1
xpar = x+shift_x
ypar = y+shift_y
dat = DATA[i]
xdat = dat[:,0]
ydat = dat[:,1]
sx = dat[:,2]
sy = dat[:,3]
log_lkl -= 0.5*(np.sum(((xdat-xpar)/sx)**2.) + np.sum(((ydat-ypar)/sy)**2.))
return log_lkl
###Output
_____no_output_____
###Markdown
Prepare to call BACCUS
###Code
#bounds for the priors of the model parameters
prior_bounds_model = [(-10.,10.),(-10.,10.)]
#bounds for the priors of the shifts
prior_bounds_shifts = [(0,-8.,8.),(1,-8.,8.)]
#bounds for the priors of the variances of the shifts
prior_bounds_var = [(0,0,8),(1,0,8)]
#which shifts in each data set
kind_shifts = [(0,1),(0,1),(0,1)]
#prior for the variances of the shifts - a lognormal
b = -2
xi = 10
y = T.dscalar('y')
s = -0.5*((T.log(y)-b)/2./xi)**2.
logprior_sigma2 = theano.function([y],s)
prior_var = []
for ivar in range(0,2):
prior_var += [logprior_sigma2]
###Output
_____no_output_____
###Markdown
Plot prior on variances of the shifts
###Code
xx = np.logspace(-8,np.log10(8),1000)
logp = np.zeros(len(xx))
for i in range(0,len(xx)):
logp[i] = np.exp(logprior_sigma2(xx[i]))
plt.semilogx(xx,logp)
plt.show()
want_rescaling = False
model = BACCUS(prior_bounds_model=prior_bounds_model,prior_bounds_shifts=prior_bounds_shifts,prior_bounds_var=prior_bounds_var,
lkl = lkl_norescale, kind_shifts = kind_shifts,prior_var=prior_var,want_rescaling=want_rescaling)
#Set initial position, steps and walkers
nwalkers = 250
steps = 4000
ndata = len(DATA)
pos = []
for i in range(0,nwalkers):
#Model parameters
pos += [np.array([normal(0,1),normal(0,1)])]
#Rescaling parameters, if wanted
if want_rescaling:
for j in range(0,ndata):
pos[i] = np.append(pos[i],normal(1,0.2))
#shift_hyperparams
for j in range(0,ndata):
pos[i] = np.append(pos[i],normal(0.,1.))
pos[i] = np.append(pos[i],normal(0.,1.))
#var for shifts
pos[i] = np.append(pos[i],normal(1.,0.2))
pos[i] = np.append(pos[i],normal(1.,0.2))
#correlation of shifts
pos[i] = np.append(pos[i],normal(0.,0.2))
###Output
_____no_output_____
###Markdown
RUN the CHAIN
###Code
model.run_chain(DATA, stepsize=2., pos=pos, nwalkers=nwalkers, steps=steps)
###Output
Total steps = 1000000
Running MCMC...
Done.
###Markdown
Get the Results
###Code
model.get_results(want_plots=True)
import emcee
print emcee.__version__
###Output
2.1.0
###Markdown
PLOT RESULTS
###Code
chain = np.loadtxt('output_chain.txt')
columns = chain.shape[1]
steps=steps
nwalkers=nwalkers
burn_len = int(0.3*steps)
chain_burned = np.zeros([(steps - burn_len)*nwalkers, columns])
for i in range (0,nwalkers):
slic = i*steps
walk = chain[(i*steps + burn_len):((i+1)*steps),:]
chain_burned[(i*(steps-burn_len)):((i+1)*(steps-burn_len)), :] = walk
def convert_to_varev(logL):
"""
Given a grid of log-likelihood values, convert them to cumulative
standard deviation. This is useful for drawing contours from a
grid of likelihoods.
"""
sigma = np.exp(logL)
shape = sigma.shape
sigma = sigma.ravel()
# obtain the indices to sort and unsort the flattened array
i_sort = np.argsort(sigma)[::-1]
i_unsort = np.argsort(i_sort)
sigma_cumsum = sigma[i_sort].cumsum()
sigma_cumsum /= sigma_cumsum[-1]
return sigma_cumsum[i_unsort].reshape(shape)
H,x, y = np.histogram2d(chain_burned[:,1],chain_burned[:,2],bins=(16,16),weights=np.ones(len(chain_burned[:,0])),normed=True)
x22 = 0.5*(x[1:]+x[:-1])
y22 = 0.5*(y[1:]+y[:-1])
H[H==0] = 1e-16
H = convert_to_varev(np.log(H))
f, axarr = plt.subplots(nrows=1,ncols=1, figsize=(14,10))
axarr.contourf(x22,y22,H.T,zorder = 2,levels=[0.,0.683,0.954],cmap=plt.cm.Blues_r,alpha=0.7)
axarr.contour(x22,y22,H.T,levels=[0.,0.683,0.954],colors='b',linewidth = 2.,zorder=2)
axarr.scatter(x1,y1,color='r')
axarr.scatter(x2,y2,color='g')
axarr.scatter(x3,y3,color='k')
axarr.tick_params(axis='both',width=1,length=8,labelsize=15)
axarr.set_xlabel(r'$x$',fontsize=24)
axarr.set_ylabel(r'$y$',fontsize=24)
axarr.set_xlim(-8,8)
axarr.set_ylim(-8,8)
axarr.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Likelihood with rescaling
###Code
def lkl_rescale(theta,DATA):
x = theta[0]
y = theta[1]
nrescaling = len(DATA)
count = 0
npar = 2
log_lkl = 0
for i in range(0,len(DATA)):
alpha = theta[npar+i]
shift_x = theta[npar+nrescaling+count]
count += 1
shift_y = theta[npar+nrescaling+count]
count += 1
xpar = x+shift_x
ypar = y+shift_y
dat = DATA[i]
xdat = dat[:,0]
ydat = dat[:,1]
sx = dat[:,2]
sy = dat[:,3]
log_lkl -= 0.5*alpha*(np.sum(((xdat-xpar)/sx)**2.) + np.sum(((ydat-ypar)/sy)**2.))
return log_lkl
###Output
_____no_output_____
###Markdown
Redo the same with rescaling
###Code
want_rescaling = True
model_res = BACCUS(prior_bounds_model=prior_bounds_model,prior_bounds_shifts=prior_bounds_shifts,prior_bounds_var=prior_bounds_var,
lkl = lkl_rescale, kind_shifts = kind_shifts,prior_var=prior_var,want_rescaling=want_rescaling)
#Set initial position, steps and walkers
nwalkers = 200
steps = 3000
ndata = len(DATA)
pos = []
for i in range(0,nwalkers):
#Model parameters
pos += [np.array([normal(0,1),normal(0,1)])]
#Rescaling parameters, if wanted
if want_rescaling:
for j in range(0,ndata):
pos[i] = np.append(pos[i],normal(1,0.2))
#shift_hyperparams
for j in range(0,ndata):
pos[i] = np.append(pos[i],normal(0.,1.))
pos[i] = np.append(pos[i],normal(0.,1.))
#var for shifts
pos[i] = np.append(pos[i],normal(1.,0.2))
pos[i] = np.append(pos[i],normal(1.,0.2))
#correlation of shifts
pos[i] = np.append(pos[i],normal(0.,0.2))
model_res.run_chain(DATA, stepsize=2., pos=pos, nwalkers=nwalkers, steps=steps)
model_res.get_results(root='output_rs_')
chain_rs = np.loadtxt('output_rs_chain.txt')
columns = chain_rs.shape[1]
steps=steps
nwalkers=nwalkers
burn_len = int(0.3*steps)
chain_rs_burned = np.zeros([(steps - burn_len)*nwalkers, columns])
for i in range (0,nwalkers):
slic = i*steps
walk = chain_rs[(i*steps + burn_len):((i+1)*steps),:]
chain_rs_burned[(i*(steps-burn_len)):((i+1)*(steps-burn_len)), :] = walk
H,x, y = np.histogram2d(chain_rs_burned[:,1],chain_rs_burned[:,2],bins=(16,16),weights=np.ones(len(chain_rs_burned[:,0])),normed=True)
x22 = 0.5*(x[1:]+x[:-1])
y22 = 0.5*(y[1:]+y[:-1])
H[H==0] = 1e-16
H = convert_to_varev(np.log(H))
f, axarr = plt.subplots(nrows=1,ncols=1, figsize=(14,10))
axarr.contourf(x22,y22,H.T,zorder = 2,levels=[0.,0.683,0.954],cmap=plt.cm.Blues_r,alpha=0.7)
axarr.contour(x22,y22,H.T,levels=[0.,0.683,0.954],colors='b',linewidth = 2.,zorder=2)
axarr.scatter(x1,y1,color='r')
axarr.scatter(x2,y2,color='g')
axarr.scatter(x3,y3,color='k')
axarr.tick_params(axis='both',width=1,length=8,labelsize=15)
axarr.set_xlabel(r'$x$',fontsize=24)
axarr.set_ylabel(r'$y$',fontsize=24)
#axarr.set_xlim(0.5,1.3)
#axarr.set_ylim(0.3,1.3)
axarr.grid()
plt.show()
###Output
_____no_output_____ |
ML_ToyExamples/KNN_Vectorisation.ipynb | ###Markdown
k-nearest neighbours and vectorisation Meichen Lu ([email protected]) 14th April 2018. Code taken from [Vectorization](http://cs229.stanford.edu/syllabus.html). Note that the equations for dists_1 and dists_2 were wrong in the original code.
###Code
import numpy as np
from sklearn.datasets import fetch_mldata
import time
import matplotlib.pyplot as plt
%matplotlib inline
mnist = fetch_mldata('MNIST original')
X = mnist.data.astype(float)
Y = mnist.target.astype(float)
mask = np.random.permutation(range(np.shape(X)[0]))
num_train = 10000
num_test = 500
K = 10
X_train = X[mask[:num_train]]
Y_train = Y[mask[:num_train]]
X_mean = np.mean(X_train,axis = 0)
X_train = (X_train-X_mean)/255
X_test = X[mask[num_train:num_train+num_test]]
X_test = (X_test - X_mean)/255
Y_test = Y[mask[num_train:num_train+num_test]]
print('X_train',X_train.shape)
print('Y_train',Y_train.shape)
print('X_test',X_test.shape)
print('Y_test',Y_test.shape)
ex_image = (np.reshape(X_train[10,:]*255 + X_mean, (28, 28))).astype(np.uint8)
plt.imshow(ex_image, interpolation='nearest')
# Version 1 (Naive implementation using two for loops)
start = time.time()
dists_1 = np.zeros((num_test,num_train))
for i in range(num_test):
for j in range(num_train):
dists_1[i,j] = np.sqrt(np.sum(np.square(X_test[i,:]-X_train[j,:])))
stop = time.time()
time_taken = stop-start
print('Time taken with two for loops: {}s'.format(time_taken))
# Version 2(Somewhat better implementation using one for loop)
start = time.time()
dists_2 = np.zeros((num_test,num_train))
for i in range(num_test):
dists_2[i,:] = np.sqrt(np.sum(np.square(X_test[i,:]-X_train),axis = 1))
stop = time.time()
time_taken = stop-start
print('Time taken with just one for loop: {}s'.format(time_taken))
# Version 3 (Fully vectorized implementation with no for loop)
start = time.time()
dists_3 = np.zeros((num_test,num_train))
A = np.sum(np.square(X_test),axis = 1)
B = np.sum(np.square(X_train),axis = 1)
C = np.dot(X_test,X_train.T)
dists_3 = np.sqrt(A[:,np.newaxis]+B[np.newaxis,:]-2*C)
stop = time.time()
time_taken = stop-start
print('Time taken with no for loops: {}s'.format(time_taken))
# Prediction accuracy
sorted_dist_indices = np.argsort(dists_3,axis = 1)
closest_k = Y_train[sorted_dist_indices][:,:K].astype(int)
Y_pred = np.zeros_like(Y_test)
for i in range(num_test):
Y_pred[i] = np.argmax(np.bincount(closest_k[i,:]))
accuracy = (np.where(Y_test-Y_pred == 0)[0].size)/float(num_test)
print('Prediction accuracy: {}%'.format(accuracy*100))
###Output
Prediction accuracy: 94.39999999999999%
###Markdown
Explanation of the fully vectorised implementation1. The distance between 2 vectors $x_1$ and $x_2$ is $$\Vert x_1 - x_2 \Vert = \sqrt{ \sum_i x_{1,i}^2 +\sum_i x_{2,i}^2 - 2\sum_i x_{1,i}x_{2,i}} $$2. A, B, C represent the three terms on the right-hand side3. The newaxis expands the dimension, making A a column vector of shape (500,1) and B a row vector of shape (1,10000)4. When adding and subtracting these vectors, numpy automatically replicates the rows and columns so that the matrices have the same shape (in MATLAB, one needs to use the repmat function to repeat the row/column vectors).
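A quick sanity check, assuming `dists_1`, `dists_2` and `dists_3` from the cells above are still in memory: all three implementations should agree up to floating-point round-off.

```python
# The three distance matrices should be numerically identical.
print(np.allclose(dists_1, dists_2), np.allclose(dists_2, dists_3))
```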
###Code
np.shape(C)
###Output
_____no_output_____ |
code/3_cond_GAN/test_post_analysis.ipynb | ###Markdown
post analysis pandas testDec 4, 2020
###Code
import numpy as np
import pandas as pd
import argparse
import subprocess as sp
import os
import glob
import sys
import time
from pandarallel import pandarallel
sys.path.append('/global/u1/v/vpa/project/jpt_notebooks/Cosmology/Cosmo_GAN/repositories/lbann_cosmogan/3_analysis')
from modules_image_analysis import *
def parse_args():
"""Parse command line arguments."""
parser = argparse.ArgumentParser(description="Analyze output data from LBANN run")
add_arg = parser.add_argument
add_arg('--val_data','-v', type=str, default='/global/cfs/cdirs/m3363/vayyar/cosmogan_data/raw_data/128_square/dataset_2_smoothing_200k/norm_1_train_val.npy',help='The .npy file with input data to compare with')
add_arg('--folder','-f', type=str,help='The full path of the folder containing the data to analyze.')
add_arg('--cores','-c', type=int, default=64,help='Number of cores to use for parallelization')
    add_arg('--bins_type','-bin', type=str, default='uneven',help='Type of histogram bins to use: uneven or even')
return parser.parse_args()
### Transformation functions for image pixel values
def f_transform(x):
return 2.*x/(x + 4. + 1e-8) - 1.
def f_invtransform(s):
return 4.*(1. + s)/(1. - s + 1e-8)
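# Quick sanity check (hypothetical value): the two transforms above should invert each other,
# up to the small 1e-8 regularisers.
assert abs(f_invtransform(f_transform(10.0)) - 10.0) < 1e-5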
# ### Modules for Extraction
def f_get_sorted_df(main_dir,label):
'''
Module to create Dataframe with filenames for each epoch and step
Sorts by step and epoch
'''
def f_get_info_from_fname(fname):
''' Read file and return dictionary with epoch, step'''
dict1={}
dict1['epoch']=np.int32(fname.split('epoch-')[-1].split('_')[0])
dict1['step']=np.int64(fname.split('step-')[-1].split('.')[0])
return dict1
t1=time.time()
### get list of file names
fldr_loc=main_dir+'/images/'
files_arr,img_arr=np.array([]),np.array([])
files=glob.glob(fldr_loc+'*gen_img_label-{0}_epoch*_step*.npy'.format(label))
files_arr=np.append(files_arr,files)
img_arr=np.append(img_arr,['train'] *len(files))
print('Number of files',len(files_arr))
if len(files_arr)<1: print('No files'); raise SystemExit
### Create dataframe
df_files=pd.DataFrame()
df_files['img_type']=np.array(img_arr)
df_files['fname']=np.array(files_arr).astype(str)
# Create list of dictionaries
dict1=df_files.apply(lambda row : f_get_info_from_fname(row.fname),axis=1)
keys=dict1[0].keys() # Extract keys of dictionary
# print(keys)
# ### Convert list of dicts to dict of lists
dict_list={key:[k[key] for k in dict1] for key in keys}
# ### Add columns to Dataframe
for key in dict_list.keys():
df_files[key]=dict_list[key]
df_files=df_files.sort_values(by=['img_type','epoch','step']).reset_index(drop=True) ### sort df by epoch and step
t2=time.time()
print("time for sorting",t2-t1)
return df_files[['epoch','step','img_type','fname']]
def f_compute_hist_spect(sample,bins):
''' Compute pixel intensity histograms and radial spectrum for 2D arrays
Input : Image arrays and bins
Output: dictionary with 5 arrays : Histogram values, errors and bin centers, Spectrum values and errors.
'''
### Compute pixel histogram for row
gen_hist,gen_err,hist_bins=f_batch_histogram(sample,bins=bins,norm=True,hist_range=None)
### Compute spectrum for row
spec,spec_sdev=f_compute_spectrum(sample,plot=False)
dict1={'hist_val':gen_hist,'hist_err':gen_err,'hist_bin_centers':hist_bins,'spec_val':spec,'spec_sdev':spec_sdev }
return dict1
def f_get_images(fname,img_type):
'''
Extract image using file name
'''
fname,key=fname,img_type
a1=np.load(fname)
samples=a1[:]
return samples
def f_high_pixel(images,cutoff=0.9966):
'''
Get number of images with a pixel about max cut-off value
'''
max_arr=np.amax(images,axis=(1,2))
num_large=max_arr[max_arr>cutoff].shape[0]
return num_large
def f_compute_chisqr(dict_val,dict_sample):
'''
Compute chi-square values for sample w.r.t input images
Input: 2 dictionaries with 4 keys for histogram and spectrum values and errors
'''
### !!Both pixel histograms MUST have same bins and normalization!
### Compute chi-sqr
### Used in keras code : np.sum(np.divide(np.power(valhist - samphist, 2.0), valhist))
### chi_sqr :: sum((Obs-Val)^2/(Val))
chisqr_dict={}
try:
val_dr=dict_val['hist_val'].copy()
val_dr[val_dr<=0.]=1.0 ### Avoiding division by zero for zero bins
sq_diff=(dict_val['hist_val']-dict_sample['hist_val'])**2
size=len(dict_val['hist_val'])
l1,l2=int(size*0.3),int(size*0.7)
keys=['chi_1a','chi_1b','chi_1c','chi_1']
for (key,start,end) in zip(keys,[0,l1,l2,0],[l1,l2,None,None]): # 4 lists : small, medium, large pixel values and full
chisqr_dict.update({key:np.sum(np.divide(sq_diff[start:end],val_dr[start:end]))})
idx=None # Choosing the number of histograms to use. Eg : -5 to skip last 5 bins
# chisqr_dict.update({'chi_sqr1':})
chisqr_dict.update({'chi_2':np.sum(np.divide(sq_diff[:idx],1.0))}) ## chi-sqr without denominator division
chisqr_dict.update({'chi_imgvar':np.sum(dict_sample['hist_err'][:idx])/np.sum(dict_val['hist_err'][:idx])}) ## measures total spread in histograms wrt to input data
idx=64
spec_diff=(dict_val['spec_val']-dict_sample['spec_val'])**2
### computing the spectral loss chi-square
chisqr_dict.update({'chi_spec1':np.sum(spec_diff[:idx]/dict_sample['spec_val'][:idx]**2)})
### computing the spectral loss chi-square
chisqr_dict.update({'chi_spec2':np.sum(spec_diff[:idx]/dict_sample['spec_sdev'][:idx]**2)})
spec_loss=1.0*np.log(np.mean((dict_val['spec_val'][:idx]-dict_sample['spec_val'][:idx])**2))+1.0*np.log(np.mean((dict_val['spec_sdev'][:idx]-dict_sample['spec_sdev'][:idx])**2))
chisqr_dict.update({'chi_spec3':spec_loss})
except Exception as e:
print(e)
keys=['chi_1a','chi_1b','chi_1c','chi_1','chi_2','chi_imgvar','chi_spec1','chi_spec2']
chisqr_dict=dict.fromkeys(keys,np.nan)
pass
return chisqr_dict
def f_get_computed_dict(fname,img_type,bins,dict_val):
'''
'''
### Get images from file
images=f_get_images(fname,img_type)
### Compute number of images with high pixel values
high_pixel=f_high_pixel(images,cutoff=0.9898) # pixels over 780
very_high_pixel=f_high_pixel(images,cutoff=0.9973) # pixels over 3000
### Compute spectrum and histograms
dict_sample=f_compute_hist_spect(images,bins) ## list of 5 numpy arrays
### Compute chi squares
dict_chisqrs=f_compute_chisqr(dict_val,dict_sample)
dict1={}
dict1.update(dict_chisqrs)
dict1.update({'num_imgs':images.shape[0],'num_large':high_pixel,'num_vlarge':very_high_pixel})
dict1.update(dict_sample)
return dict1
## Extract image data
# args=parse_args()
args=argparse.Namespace()
args.folder='/global/cfs/cdirs/m3363/vayyar/cosmogan_data/results_from_other_code/pytorch/results/128sq/20201202_094018_cgan_model1/'
args.bins_type='uneven'
args.cores=1
args.val_data='/global/cfs/cdirs/m3363/vayyar/cosmogan_data/raw_data/128_square/dataset_5_4univ_cgan/'
print(args)
fldr_name=args.folder
main_dir=fldr_name
if main_dir.endswith('/'): main_dir=main_dir[:-1]
assert os.path.exists(main_dir), "Directory doesn't exist"
print("Analyzing data in",main_dir)
num_cores=args.cores
## Define bin-edges for histogram
if args.bins_type=='uneven':
bins=np.concatenate([np.array([-0.5]),np.arange(0.5,20.5,1),np.arange(20.5,100.5,5),np.arange(100.5,1000.5,50),np.array([2000])])
else : bins=np.arange(0,1510,10)
print("Bins",bins)
transform=False ## Images are in transformed space (-1,1), convert bins to the same space
if not transform: bins=f_transform(bins) ### scale to (-1,1)
print(bins)
sigma_list=[0.5,0.65,0.8,1.1];label_list=[0,1,2,3];
label_list[-2:]   # quick look at the last two labels
for count,(sigma,label) in enumerate(zip(sigma_list,label_list)):
### Extract validation data
fname=args.val_data+'norm_1_sig_%s_train_val.npy'%(sigma)
print("Using validation data from ",fname)
s_val=np.load(fname,mmap_mode='r')[:8000][:,0,:,:]
print(s_val.shape)
### Get dataframe with file names, sorted by epoch and step
df_files=f_get_sorted_df(main_dir,label).head(20)
### Compute
t1=time.time()
### Compute histogram and spectrum of raw data
dict_val=f_compute_hist_spect(s_val,bins)
### Parallel CPU test
# ##Using pandarallel : https://stackoverflow.com/questions/26784164/pandas-multiprocessing-apply
df=df_files.copy()
pandarallel.initialize(progress_bar=True)
# pandarallel.initialize(nb_workers=num_cores,progress_bar=True)
t2=time.time()
dict1=df.parallel_apply(lambda row: f_get_computed_dict(fname=row.fname,img_type='train_gen',bins=bins,dict_val=dict_val),axis=1)
keys=dict1[0].keys()
### Convert list of dicts to dict of lists
dict_list={key:[k[key] for k in dict1] for key in keys}
### Add columns to Dataframe
for key in dict_list.keys():
df[key]=dict_list[key]
t3=time.time()
print("Time ",t3-t2)
display(df.head(5))
### Save to file
# fname='/df_processed_{0}.pkle'.format(label)
# df.to_pickle(main_dir+fname)
# print("Saved file at ",main_dir+fname)
def f_batch_histogram(img_arr,bins,norm,hist_range):
''' Compute histogram statistics for a batch of images'''
## Extracting the range. This is important to ensure that the different histograms are compared correctly
if hist_range==None : ulim,llim=np.max(img_arr),np.min(img_arr)
else: ulim,llim=hist_range[1],hist_range[0]
# print(ulim,llim)
### array of histogram of each image
hist_arr=np.array([np.histogram(arr.flatten(), bins=bins, range=(llim,ulim), density=norm) for arr in img_arr]) ## range is important
hist=np.stack(hist_arr[:,0]) # First element is histogram array
# print(hist.shape)
bin_list=np.stack(hist_arr[:,1]) # Second element is bin value
### Compute statistics over histograms of individual images
mean,err=np.mean(hist,axis=0),np.std(hist,axis=0)/np.sqrt(hist.shape[0])
bin_edges=bin_list[0]
centers = (bin_edges[:-1] + bin_edges[1:]) / 2
return mean,err,centers
fname='/global/cfs/cdirs/m3363/vayyar/cosmogan_data/results_from_other_code/pytorch/results/128sq/20201202_094018_cgan_model1/images/gen_img_label-0_epoch-8_step-35650.npy'
a1=np.load(fname)
print(a1.shape)
###Output
(64, 128, 128)
|
plotting-vectors.ipynb | ###Markdown
Extract data/text5-AC-MODEL-4000.tar.gz and run the following section to load the model
###Code
# Imports needed below (load_model is assumed to be provided by the project code that accompanies the extracted model)
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from scipy.spatial.distance import euclidean
import matplotlib.pyplot as plt

model = load_model()
words = []
vectors = []
for i in model:
if model[i].size > 0:
words.append(i)
vectors.append(model[i])
pca = PCA(n_components=2).fit_transform(vectors)
###Output
_____no_output_____
###Markdown
scatter
###Code
plt.scatter(pca[:,0],pca[:,1],linewidths=1,color='blue')
for i,w in enumerate(words):
plt.annotate(w,xy=(pca[i][0],pca[i][1]))
plt.show()
# returns a list of similar words to the given word. Sorted by their similarity
def most_similar_words(word, nofw:"Number of most similar words"):
sample = model[word]
sims = {}
for i in model:
if i != word:
sims[euclidean(sample, model[i])] = i
return [sims[x] for x in sorted(sims.keys())[:nofw]] + [word]
def tsne_plot3(model, words):
labels = []
tokens = []
for word in words:
if len(model[word]) != 0:
tokens.append(model[word])
labels.append(word)
# labels = labels[:200]
# tokens = tokens[:200]
tsne_model = TSNE(perplexity=40, n_components=2, init='pca', n_iter=2500, random_state=23)
new_values = tsne_model.fit_transform(tokens)
x = []
y = []
for value in new_values:
x.append(value[0])
y.append(value[1])
plt.figure(figsize=(16, 16))
for i in range(len(x)):
plt.scatter(x[i],y[i])
plt.annotate(labels[i],
xy=(x[i], y[i]),
xytext=(5, 2),
textcoords='offset points',
ha='right',
va='bottom')
plt.show()
tsne_plot3(model, most_similar_words('august', 10))
tsne_plot3(model, most_similar_words('two', 10))
model['there']
###Output
_____no_output_____ |
numpy-labs/nb10-part3.ipynb | ###Markdown
Part 3: Sparse matrix storageThis part is about sparse matrix storage in Numpy/Scipy. Start by running the following code cell to get some of the key modules you'll need.
###Code
import numpy as np
import pandas as pd
from random import sample # Used to generate a random sample
from IPython.display import display
###Output
_____no_output_____
###Markdown
Sample dataFor this part, you'll need to download the dataset below. It's a list of pairs of strings. The strings, it turns out, correspond to anonymized Yelp! user IDs; a pair $(a, b)$ exists if user $a$ is friends on Yelp! with user $b$. **Exercise 0** (ungraded). Verify that you can obtain the dataset and take a peek by running the two code cells that follow.
###Code
import requests
import os
import hashlib
import io
def is_vocareum():
return os.path.exists('.voc')
if is_vocareum():
local_filename = '../resource/asnlib/publicdata/UserEdges-1M.csv'
else:
local_filename = 'UserEdges-1M.csv'
url_suffix = 'UserEdges-1M.csv'
url = 'https://cse6040.gatech.edu/datasets/{}'.format(url_suffix)
if os.path.exists(local_filename):
    print("[{}]\n==> '{}' is already available.".format(url, local_filename))
else:
    print("[{}] Downloading...".format(url))
    r = requests.get(url)
    with open(local_filename, 'w', encoding=r.encoding) as f:
        f.write(r.text)
checksum = '4668034bbcd2fa120915ea2d15eafa8d'
with io.open(local_filename, 'r', encoding='utf-8', errors='replace') as f:
body = f.read()
body_checksum = hashlib.md5(body.encode('utf-8')).hexdigest()
assert body_checksum == checksum, \
"Downloaded file '{}' has incorrect checksum: '{}' instead of '{}'".format(local_filename,
body_checksum,
checksum)
print("==> Checksum test passes: {}".format(checksum))
print("==> '{}' is ready!\n".format(local_filename))
print("(Auxiliary files appear to be ready.)")
# Peek at the data:
edges_raw = pd.read_csv(local_filename)
display(edges_raw.head ())
print("...\n`edges_raw` has {} entries.".format(len(edges_raw)))
###Output
_____no_output_____
###Markdown
Evidently, this dataframe has one million entries. **Exercise 1** (ungraded). Explain what the following code cell does.
###Code
edges_raw_trans = pd.DataFrame({'Source': edges_raw['Target'],
'Target': edges_raw['Source']})
edges_raw_symm = pd.concat([edges_raw, edges_raw_trans])
edges = edges_raw_symm.drop_duplicates()
V_names = set(edges['Source'])
V_names.update(set(edges['Target']))
num_edges = len(edges)
num_verts = len(V_names)
print("==> |V| == {}, |E| == {}".format(num_verts, num_edges))
###Output
==> |V| == 107456, |E| == 882640
###Markdown
**Answer.** Give this question some thought before peeking at our suggested answer, which follows.Recall that the input dataframe, `edges_raw`, has a row $(a, b)$ if $a$ and $b$ are friends. But here is what is unclear at the outset: if $(a, b)$ is an entry in this table, is $(b, a)$ also an entry? The code in the above cell effectively figures that out, by computing a dataframe, `edges`, that contains both $(a, b)$ and $(b, a)$, with no additional duplicates, i.e., no copies of $(a, b)$.It also uses sets to construct a set, `V_names`, that consists of all the names. Evidently, the dataset consists of 107,456 unique names and 441,320 unique pairs, or 882,640 pairs when you "symmetrize" to ensure that both $(a, b)$ and $(b, a)$ appear. GraphsOne way a computer scientist thinks of this collection of pairs is as a _graph_: https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)The names or user IDs are _nodes_ or _vertices_ of this graph; the pairs are _edges_, or arrows that connect vertices. That's why the final output objects are named `V_names` (for vertex names) and `edges` (for the vertex-to-vertex relationships). The process or calculation to ensure that both $(a, b)$ and $(b, a)$ are contained in `edges` is sometimes referred to as _symmetrizing_ the graph: it ensures that if an edge $a \rightarrow b$ exists, then so does $b \rightarrow a$. If that's true for all edges, then the graph is _undirected_. The Wikipedia page linked to above explains these terms with some examples and helpful pictures, so take a moment to review that material before moving on.We'll also refer to this collection of vertices and edges as the _connectivity graph_. Sparse matrix storage: Baseline methodsLet's start by reminding ourselves how our previous method for storing sparse matrices, based on nested default dictionaries, works and performs.
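To make the symmetrization step concrete, here is the same pattern on a tiny, made-up pair list (the names are hypothetical):

```python
toy = pd.DataFrame({'Source': ['a', 'a', 'b'], 'Target': ['b', 'c', 'a']})
toy_trans = pd.DataFrame({'Source': toy['Target'], 'Target': toy['Source']})
toy_symm = pd.concat([toy, toy_trans]).drop_duplicates()
print(toy_symm)  # four rows: (a,b), (a,c), (b,a), (c,a) -- every edge now appears once in each direction
```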
###Code
def sparse_matrix(base_type=float):
"""Returns a sparse matrix using nested default dictionaries."""
from collections import defaultdict
return defaultdict(lambda: defaultdict (base_type))
def dense_vector(init, base_type=float):
"""
Returns a dense vector, either of a given length
and initialized to 0 values or using a given list
of initial values.
"""
# Case 1: `init` is a list of initial values for the vector entries
if type(init) is list:
initial_values = init
return [base_type(x) for x in initial_values]
# Else, case 2: `init` is a vector length.
assert type(init) is int
return [base_type(0)] * init
###Output
_____no_output_____
###Markdown
**Exercise 2** (3 points). Implement a function to compute $y \leftarrow A x$. Assume that the keys of the sparse matrix data structure are integers in the interval $[0, s)$ where $s$ is the number of rows or columns as appropriate.
###Code
def spmv(A, x, num_rows=None):
if num_rows is None:
num_rows = max(A.keys()) + 1
y = dense_vector(num_rows)
# Recall: y = A*x is, conceptually,
# for all i, y[i] == sum over all j of (A[i, j] * x[j])
### BEGIN SOLUTION
for i, row_i in A.items():
s = 0.
for j, a_ij in row_i.items():
s += a_ij * x[j]
y[i] = s
### END SOLUTION
return y
# Test cell: `spmv_baseline_test`
# / 0. -2.5 1.2 \ / 1. \ / -1.4 \
# | 0.1 1. 0. | * | 2. | = | 2.1 |
# \ 6. -1. 0. / \ 3. / \ 4.0 /
A = sparse_matrix ()
A[0][1] = -2.5
A[0][2] = 1.2
A[1][0] = 0.1
A[1][1] = 1.
A[2][0] = 6.
A[2][1] = -1.
x = dense_vector ([1, 2, 3])
y0 = dense_vector ([-1.4, 2.1, 4.0])
# Try your code:
y = spmv(A, x)
max_abs_residual = max([abs(a-b) for a, b in zip(y, y0)])
print ("==> A:", A)
print ("==> x:", x)
print ("==> True solution, y0:", y0)
print ("==> Your solution, y:", y)
print ("==> Residual (infinity norm):", max_abs_residual)
assert max_abs_residual <= 1e-14
print ("\n(Passed.)")
###Output
==> A: defaultdict(<function sparse_matrix.<locals>.<lambda> at 0x7ff565ee4158>, {0: defaultdict(<class 'float'>, {1: -2.5, 2: 1.2}), 1: defaultdict(<class 'float'>, {0: 0.1, 1: 1.0}), 2: defaultdict(<class 'float'>, {0: 6.0, 1: -1.0})})
==> x: [1.0, 2.0, 3.0]
==> True solution, y0: [-1.4, 2.1, 4.0]
==> Your solution, y: [-1.4000000000000004, 2.1, 4.0]
==> Residual (infinity norm): 4.440892098500626e-16
(Passed.)
###Markdown
Next, let's convert the `edges` input into a sparse matrix representing its connectivity graph. To do so, we'll first want to map names to integers.
###Code
id2name = {} # id2name[id] == name
name2id = {} # name2id[name] == id
for k, v in enumerate (V_names):
# for debugging
if k <= 5: print ("Name %s -> Vertex id %d" % (v, k))
if k == 6: print ("...")
id2name[k] = v
name2id[v] = k
###Output
Name rAM73vehyZsaPiKiS-zwCQ -> Vertex id 0
Name SQEQywFouGdaRfIIYAcN_w -> Vertex id 1
Name 63UaVdLeICrUy-mI97LpUA -> Vertex id 2
Name ClUp_hPfVAKnBPaBPP4ekQ -> Vertex id 3
Name edu9iSPlTwYIhhkvvA7sRA -> Vertex id 4
Name oJds8x4VdVEGXwYN-qa3Yw -> Vertex id 5
...
###Markdown
**Exercise 3** (3 points). Given `id2name` and `name2id` as computed above, convert `edges` into a sparse matrix, `G`, where there is an entry `G[s][t] == 1.0` wherever an edge `(s, t)` exists.**Note** - This step might take time for the kernel to process as there are 1 million rows
###Code
G = sparse_matrix()
### BEGIN SOLUTION
for i in range(len(edges)): # edges is the table above
s = edges['Source'].iloc[i]
t = edges['Target'].iloc[i]
s_id = name2id[s]
t_id = name2id[t]
G[s_id][t_id] = 1.0
### END SOLUTION
# Test cell: `edges2spmat1_test`
G_rows_nnz = [len(row_i) for row_i in G.values()]
print ("G has {} vertices and {} edges.".format(len(G.keys()), sum(G_rows_nnz)))
assert len(G.keys()) == num_verts
assert sum(G_rows_nnz) == num_edges
# Check a random sample
for k in sample(range(num_edges), 1000):
i = name2id[edges['Source'].iloc[k]]
j = name2id[edges['Target'].iloc[k]]
assert i in G
assert j in G[i]
assert G[i][j] == 1.0
print ("\n(Passed.)")
###Output
G has 107456 vertices and 882640 edges.
(Passed.)
###Markdown
**Exercise 4** (3 points). In the above, we asked you to construct `G` using integer keys. However, since we are, after all, using default dictionaries, we could also use the vertex _names_ as keys. Construct a new sparse matrix, `H`, which uses the vertex names as keys instead of integers.
###Code
H = sparse_matrix()
### BEGIN SOLUTION
for i in range(len(edges)): # edges is the table above
s = edges['Source'].iloc[i]
t = edges['Target'].iloc[i]
H[s][t] = 1.0
### END SOLUTION
# Test cell: `create_H_test`
H_rows_nnz = [len(h) for h in H.values()]
print("`H` has {} vertices and {} edges.".format(len(H.keys()), sum(H_rows_nnz)))
assert len(H.keys()) == num_verts
assert sum(H_rows_nnz) == num_edges
# Check a random sample
for i in sample(G.keys(), 100):
i_name = id2name[i]
assert i_name in H
assert len(G[i]) == len(H[i_name])
print ("\n(Passed.)")
###Output
`H` has 107456 vertices and 882640 edges.
(Passed.)
###Markdown
**Exercise 5** (3 points). Implement a sparse matrix-vector multiply for matrices with named keys. In this case, it will be convenient to have vectors that also have named keys; assume we use dictionaries to hold these vectors as suggested in the code skeleton, below.**Hint** - To help you understand more about the exercise, go back to **Exercise 2** and see what we did there. There is only one technical change between Ex2 and Ex5
###Code
def vector_keyed(keys=None, values=0, base_type=float):
if keys is not None:
if type(values) is not list:
values = [base_type(values)] * len(keys)
else:
values = [base_type(v) for v in values]
x = dict(zip(keys, values))
else:
x = {}
return x
def spmv_keyed(A, x):
"""Performs a sparse matrix-vector multiply for keyed matrices and vectors."""
assert type(x) is dict
y = vector_keyed(keys=A.keys(), values=0.0)
### BEGIN SOLUTION
for i, A_i in A.items():
for j, a_ij in A_i.items():
y[i] += a_ij * x[j]
### END SOLUTION
return y
# Test cell: `spmv_keyed_test`
# 'row': / 0. -2.5 1.2 \ / 1. \ / -1.4 \
# 'your': | 0.1 1. 0. | * | 2. | = | 2.1 |
# 'boat': \ 6. -1. 0. / \ 3. / \ 4.0 /
KEYS = ['row', 'your', 'boat']
A_keyed = sparse_matrix ()
A_keyed['row']['your'] = -2.5
A_keyed['row']['boat'] = 1.2
A_keyed['your']['row'] = 0.1
A_keyed['your']['your'] = 1.
A_keyed['boat']['row'] = 6.
A_keyed['boat']['your'] = -1.
x_keyed = vector_keyed (KEYS, [1, 2, 3])
y0_keyed = vector_keyed (KEYS, [-1.4, 2.1, 4.0])
# Try your code:
y_keyed = spmv_keyed (A_keyed, x_keyed)
# Measure the residual:
residuals = [(y_keyed[k] - y0_keyed[k]) for k in KEYS]
max_abs_residual = max ([abs (r) for r in residuals])
print ("==> A_keyed:", A_keyed)
print ("==> x_keyed:", x_keyed)
print ("==> True solution, y0_keyed:", y0_keyed)
print ("==> Your solution:", y_keyed)
print ("==> Residual (infinity norm):", max_abs_residual)
assert max_abs_residual <= 1e-14
print ("\n(Passed.)")
###Output
==> A_keyed: defaultdict(<function sparse_matrix.<locals>.<lambda> at 0x7ff565ee41e0>, {'row': defaultdict(<class 'float'>, {'your': -2.5, 'boat': 1.2}), 'your': defaultdict(<class 'float'>, {'row': 0.1, 'your': 1.0}), 'boat': defaultdict(<class 'float'>, {'row': 6.0, 'your': -1.0})})
==> x_keyed: {'row': 1.0, 'your': 2.0, 'boat': 3.0}
==> True solution, y0_keyed: {'row': -1.4, 'your': 2.1, 'boat': 4.0}
==> Your solution: {'row': -1.4000000000000004, 'your': 2.1, 'boat': 4.0}
==> Residual (infinity norm): 4.440892098500626e-16
(Passed.)
###Markdown
Let's benchmark `spmv()` against `spmv_keyed()` on the full data set. Do they perform differently?
###Code
x = dense_vector ([1.] * num_verts)
%timeit spmv (G, x)
x_keyed = vector_keyed (keys=[v for v in V_names], values=1.)
%timeit spmv_keyed (H, x_keyed)
###Output
175 ms ± 1.4 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
392 ms ± 5.69 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Alternative formats: Take a look at the following slides: [link](https://www.dropbox.com/s/4fwq21dy60g4w4u/cse6040-matrix-storage-notes.pdf?dl=0). These slides cover the basics of two list-based sparse matrix formats known as _coordinate format_ (COO) and _compressed sparse row_ (CSR). We will also discuss them briefly below. Coordinate Format (COO)In this format we store three lists, one each for rows, columns and the elements of the matrix. Look at the below picture to understand how these lists are formed.  **Exercise 6** (3 points). Convert the `edges[:]` data into a coordinate (COO) data structure in native Python using three lists, `coo_rows[:]`, `coo_cols[:]`, and `coo_vals[:]`, to store the row indices, column indices, and matrix values, respectively. Use integer indices and set all values to 1.**Hint** - Think of what rows, columns and values mean conceptually when you relate it with our dataset of edges
###Code
### BEGIN SOLUTION
coo_rows = [name2id[s] for s in edges['Source']]
coo_cols = [name2id[t] for t in edges['Target']]
coo_vals = [1.0]*len(edges)
### END SOLUTION
# Test cell: `create_coo_test`
assert len (coo_rows) == num_edges
assert len (coo_cols) == num_edges
assert len (coo_vals) == num_edges
assert all ([v == 1. for v in coo_vals])
# Randomly check a bunch of values
coo_zip = zip (coo_rows, coo_cols, coo_vals)
for i, j, a_ij in sample (list (coo_zip), 1000):
assert (i in G) and j in G[i]
print ("\n(Passed.)")
###Output
(Passed.)
###Markdown
**Exercise 7** (3 points). Implement a sparse matrix-vector multiply routine for COO implementation.
###Code
def spmv_coo(R, C, V, x, num_rows=None):
"""
Returns y = A*x, where A has 'm' rows and is stored in
COO format by the array triples, (R, C, V).
"""
assert type(x) is list
assert type(R) is list
assert type(C) is list
assert type(V) is list
assert len(R) == len(C) == len(V)
if num_rows is None:
num_rows = max(R) + 1
y = dense_vector(num_rows)
### BEGIN SOLUTION
for i, j, a_ij in zip(R, C, V):
y[i] += a_ij * x[j]
### END SOLUTION
return y
# Test cell: `spmv_coo_test`
# / 0. -2.5 1.2 \ / 1. \ / -1.4 \
# | 0.1 1. 0. | * | 2. | = | 2.1 |
# \ 6. -1. 0. / \ 3. / \ 4.0 /
A_coo_rows = [0, 0, 1, 1, 2, 2]
A_coo_cols = [1, 2, 0, 1, 0, 1]
A_coo_vals = [-2.5, 1.2, 0.1, 1., 6., -1.]
x = dense_vector([1, 2, 3])
y0 = dense_vector([-1.4, 2.1, 4.0])
# Try your code:
y_coo = spmv_coo(A_coo_rows, A_coo_cols, A_coo_vals, x)
max_abs_residual = max ([abs(a-b) for a, b in zip(y_coo, y0)])
print("==> A_coo:", list(zip(A_coo_rows, A_coo_cols, A_coo_vals)))
print("==> x:", x)
print("==> True solution, y0:", y0)
print("==> Your solution:", y_coo)
print("==> Residual (infinity norm):", max_abs_residual)
assert max_abs_residual <= 1e-15
print("\n(Passed.)")
###Output
==> A_coo: [(0, 1, -2.5), (0, 2, 1.2), (1, 0, 0.1), (1, 1, 1.0), (2, 0, 6.0), (2, 1, -1.0)]
==> x: [1.0, 2.0, 3.0]
==> True solution, y0: [-1.4, 2.1, 4.0]
==> Your solution: [-1.4000000000000004, 2.1, 4.0]
==> Residual (infinity norm): 4.440892098500626e-16
(Passed.)
###Markdown
Let's see how fast this is...
###Code
x = dense_vector([1.] * num_verts)
%timeit spmv_coo(coo_rows, coo_cols, coo_vals, x)
###Output
237 ms ± 31 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Compressed Sparse Row FormatThis is similar to the COO format except that it is much more compact and takes up less storage. Look at the picture below to understand more about this representation. **Exercise 8** (3 points). Now create a CSR data structure, again using native Python lists. Name your output CSR lists `csr_ptrs`, `csr_inds`, and `csr_vals`.It's easiest to start with the COO representation. We've given you some starter code. Unlike most of the exercises, instead of creating a function, you have to compute csr_ptrs here
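As a reference for what the three arrays should look like, here is the hypothetical 3x3 matrix from the COO sketch above converted by Scipy (used only for illustration; your solution must still build plain Python lists):

```python
import scipy.sparse
toy_csr = scipy.sparse.csr_matrix(([5., 7., 2.], ([0, 1, 2], [0, 2, 1])), shape=(3, 3))
print(toy_csr.indptr)   # [0 1 2 3] -- row i's nonzeros occupy positions indptr[i]:indptr[i+1]
print(toy_csr.indices)  # [0 2 1]   -- column index of each stored nonzero
print(toy_csr.data)     # [5. 7. 2.]
```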
###Code
from operator import itemgetter
C = sorted(zip(coo_rows, coo_cols, coo_vals), key=itemgetter(0))
nnz = len(C)
assert nnz >= 1
assert (C[-1][0] + 1) == num_verts # Why?
csr_inds = [j for _, j, _ in C]
csr_vals = [a_ij for _, _, a_ij in C]
# Your task: Compute `csr_ptrs`
### BEGIN SOLUTION
C_rows = [i for i, _, _ in C] # sorted rows
csr_ptrs = [0] * (num_verts + 1)
i_cur = -1 # a known, invalid row index
for k in range(nnz):
if C_rows[k] != i_cur:
i_cur = C_rows[k]
csr_ptrs[i_cur] = k
from itertools import accumulate
csr_ptrs = list(accumulate(csr_ptrs, max))
csr_ptrs[-1] = nnz
### END SOLUTION
# Test cell: `create_csr_test`
assert type(csr_ptrs) is list, "`csr_ptrs` is not a list."
assert type(csr_inds) is list, "`csr_inds` is not a list."
assert type(csr_vals) is list, "`csr_vals` is not a list."
assert len(csr_ptrs) == (num_verts + 1), "`csr_ptrs` has {} values instead of {}".format(len(csr_ptrs), num_verts+1)
assert len(csr_inds) == num_edges, "`csr_inds` has {} values instead of {}".format(len(csr_inds), num_edges)
assert len(csr_vals) == num_edges, "`csr_vals` has {} values instead of {}".format(len(csr_vals), num_edges)
assert csr_ptrs[num_verts] == num_edges, "`csr_ptrs[{}]` == {} instead of {}".format(num_verts, csr_ptrs[num_verts], num_edges)
# Check some random entries
for i in sample(range(num_verts), 10000):
assert i in G
a, b = csr_ptrs[i], csr_ptrs[i+1]
msg_prefix = "Row {} should have these nonzeros: {}".format(i, G[i])
assert (b-a) == len(G[i]), "{}, which is {} nonzeros; instead, it has just {}.".format(msg_prefix, len(G[i]), b-a)
assert all([(j in G[i]) for j in csr_inds[a:b]]), "{}. However, it may have missing or incorrect column indices: csr_inds[{}:{}] == {}".format(msg_prefix, a, b, csr_inds[a:b])
assert all([(j in csr_inds[a:b] for j in G[i].keys())]), "{}. However, it may have missing or incorrect column indices: csr_inds[{}:{}] == {}".format(msg_prefix, a, b, csr_inds[a:b])
print ("\n(Passed.)")
###Output
(Passed.)
###Markdown
**Exercise 9** (3 points). Now implement a CSR-based sparse matrix-vector multiply.
###Code
def spmv_csr(ptr, ind, val, x, num_rows=None):
assert type(ptr) == list
assert type(ind) == list
assert type(val) == list
assert type(x) == list
if num_rows is None: num_rows = len(ptr) - 1
assert len(ptr) >= (num_rows+1) # Why?
assert len(ind) >= ptr[num_rows] # Why?
assert len(val) >= ptr[num_rows] # Why?
y = dense_vector(num_rows)
### BEGIN SOLUTION
for i in range(num_rows):
for k in range(ptr[i], ptr[i+1]):
y[i] += val[k] * x[ind[k]]
### END SOLUTION
return y
# Test cell: `spmv_csr_test`
# / 0. -2.5 1.2 \ / 1. \ / -1.4 \
# | 0.1 1. 0. | * | 2. | = | 2.1 |
# \ 6. -1. 0. / \ 3. / \ 4.0 /
A_csr_ptrs = [ 0, 2, 4, 6]
A_csr_cols = [ 1, 2, 0, 1, 0, 1]
A_csr_vals = [-2.5, 1.2, 0.1, 1., 6., -1.]
x = dense_vector([1, 2, 3])
y0 = dense_vector([-1.4, 2.1, 4.0])
# Try your code:
y_csr = spmv_csr(A_csr_ptrs, A_csr_cols, A_csr_vals, x)
max_abs_residual = max([abs(a-b) for a, b in zip(y_csr, y0)])
print ("==> A_csr_ptrs:", A_csr_ptrs)
print ("==> A_csr_{cols, vals}:", list(zip(A_csr_cols, A_csr_vals)))
print ("==> x:", x)
print ("==> True solution, y0:", y0)
print ("==> Your solution:", y_csr)
print ("==> Residual (infinity norm):", max_abs_residual)
assert max_abs_residual <= 1e-14
print ("\n(Passed.)")
x = dense_vector([1.] * num_verts)
%timeit spmv_csr(csr_ptrs, csr_inds, csr_vals, x)
###Output
216 ms ± 4.49 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Using Scipy's implementationsWhat you should have noticed is that the list-based COO and CSR formats do not really lead to sparse matrix-vector multiply implementations that are much faster than the dictionary-based methods. Let's instead try Scipy's native COO and CSR implementations.
###Code
import numpy as np
import scipy.sparse as sp
A_coo_sp = sp.coo_matrix((coo_vals, (coo_rows, coo_cols)))
A_csr_sp = A_coo_sp.tocsr() # Alternatively: sp.csr_matrix((val, ind, ptr))
x_sp = np.ones(num_verts)
print ("\n==> COO in Scipy:")
%timeit A_coo_sp.dot (x_sp)
print ("\n==> CSR in Scipy:")
%timeit A_csr_sp.dot (x_sp)
###Output
==> COO in Scipy:
3.43 ms ± 86.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
==> CSR in Scipy:
2.1 ms ± 45 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
|
jupyternb/.ipynb_checkpoints/slides02_EPMWebApiIntro-Copy1-checkpoint.ipynb | ###Markdown
EPM Web API (Python) - first steps... Typical operations1. **Opening a connection** (= session) with an **EPM Server** (= data server)2. **Creating** the variables of interest * define the variables of interest (and, optionally, their properties) * create the Python-side counterpart of each variable of interest (mirroring the **EPM Server**)3. **Reading** scalar values * define the variable of interest * execute the read4. **Reading** historical values * define the variable of interest * define the period of interest * define the aggregation (= processing) type * execute the read5. **Reading** annotations * define the variable of interest * define the period of interest * execute the read6. **Writing** scalar values * define the variable of interest * define value, timestamp and quality * execute the write7. **Writing** historical values * define the variable of interest * define the arrays of values, timestamps and qualities * execute the write8. **Writing** annotations * define the variable of interest * define message and timestamp * execute the write9. **CRUD** (Create, Read, Update and Delete) of variables - outside the scope of this short course10. **Closing the connection** (= session) with an **EPM Server** (= data server) 1. **Opening a connection** (= session) with an **EPM Server** (= data server)
###Code
# In the Jupyter environment - render plots inline using MATPLOTLIB
%matplotlib inline
# Import the main modules used in process-data analyses
# Tip:
# Always try to handle exceptions, to make the code easier to maintain and to use by others!
try:
import numpy as np
import datetime
import pytz
import matplotlib.pyplot as plt
import pandas as pd
    # Import the module used to connect to the EPM Server (via EPM Web Server)
    import epmwebapi as epm
    print('Modules imported successfully!')
except ImportError as error:
    print('Import error!')
    print(error.__class__.__name__ + ': ' + str(error))
except Exception as exception:
    print(exception.__class__.__name__ + ': ' + str(exception))
# ATTENTION:
# For production use, it is recommended to read the user/password information from environment variables.
# For this short course the input and getpass functions are used instead
import getpass
user = input('EPM user:')
password = getpass.getpass("EPM password:")
epm_auth = 'http://epmtr.elipse.com.br/auth' # 'http://localhost:44333'
epm_web = 'http://epmtr.elipse.com.br'# 'http://localhost:44332'
# Create a connection, providing the EPM Webserver addresses (Authentication Port and WEB API Port), user and password.
try:
    epmConn = epm.EpmConnection(epm_auth, epm_web, user, password)
    # The most recommended (easiest) way to use the print command is with an f-string
    print(f'Connection to {epm_web} created successfully for user {user}.')
except:
    print(f'Failed to establish a connection to {epm_web} for user {user}.')
###Output
EPM user:truser
EPM password:········
Connection to http://epmtr.elipse.com.br created successfully for user truser.
###Markdown
2. **Creating** the variables of interest
###Code
# TIP:
# Define a list with the variables of interest to make them easier to use/reuse
bvList = ['LIC101', 'FIC101']
bvDic = epmConn.getDataObjects(bvList)
bv_LIC101 = bvDic[bvList[0]]
bv_FIC101 = bvDic[bvList[1]]
print('Tank level: {}'.format(bv_LIC101)) # another way to use the print command is with the format method
print('Tank valve: {}'.format(bv_FIC101))
###Output
Tank level: <epmwebapi.epmdataobject.EpmDataObject object at 0x000002880D00B9E8>
Tank valve: <epmwebapi.epmdataobject.EpmDataObject object at 0x000002880D012940>
###Markdown
Tip Filters can be used to search for the variables of interest and create them in the Python environment. *Example*: you can search for all variables whose engineering unit is °C. Examples of filter usage can be found on Elipse Software's GitHub:[Elipse Software/epmwebapi/exemplos/Quickstart.ipynb](https://nbviewer.jupyter.org/github/elipsesoftware/epmwebapi/blob/master/exemplos/Quickstart.ipynb) Using the *dir* function to list methods and properties, plus a regex filter over the strings to drop the "private" ones
###Code
import re
# TIP: https://regex101.com/
regex = re.compile(r'[^\b_]') # <=> do not match names starting with _
all_meth_props = dir(bv_LIC101)
meth_props = list(filter(regex.match, all_meth_props))
print(*meth_props, sep=" | ")
###Output
clamping | deleteAnnotations | description | domain | enumProperties | eu | highLimit | historyDelete | historyReadAggregate | historyReadRaw | historyUpdate | lowLimit | name | path | read | readAnnotations | readAttributes | write | writeAnnotation
###Markdown
By default, *name* is the only property read from the EPM Server when the *Basic Variable* is created in the Python environment.
###Code
# By default, "name" is the only property needed to instantiate a Basic Variable
print(f'Variable: {bv_LIC101.name}')
print(f'Description: {bv_LIC101.description}')
print(f'Engineering Unit (= E.U.): {bv_LIC101.eu}')
###Output
Variable: LIC101
Description: None
Engineering Unit (= E.U.): None
###Markdown
When the values of these properties are needed, you must either: (i) create the variable in the Python environment requesting that these values be read at creation time; or (ii) read these properties on demand after the variable has been created.
###Code
# Example of fetching the properties of the bv_LIC101 variable (created previously)
bv_LIC101.readAttributes()
print(f'Variable: {bv_LIC101.name}')
print(f'Description: {bv_LIC101.description}')
print(f'Engineering Unit (= E.U.): {bv_LIC101.eu}')
###Output
Variable: LIC101
Description: Nível do tanque
Engineering Unit (= E.U.): {'displayName': 'm', 'description': 'meter'}
###Markdown
3. **Reading** scalar values Reads the last value received by the EPM Server through the real-time path: (i) via a *Communication Interface*; (ii) via a write with the *write* method. Values inserted into the database through: (i) a backup restore; (ii) the EPM Tag Port; (iii) the historyUpdate function; can only be accessed through history queries, such as historyReadRaw and historyReadAggregate.
###Code
# Bring into the Python environment a variable that receives data through the real-time path (rand01)
rand01 = epmConn.getDataObjects(['rand01'])['rand01']
last_value = rand01.read()
print(f'Last value received through the real-time path: {last_value.value}')
print(f'Last timestamp associated with the value: {last_value.timestamp}')
print(f'Last quality associated with the value: {last_value.statusCode} - 0 corresponds to "Good" in the OPC UA standard')
# Note:
# In the OPC DA Classic standard, 192 corresponds to the "Good" quality
###Output
Last value received through the real-time path: 35.50395584106445
Last timestamp associated with the value: 2019-08-01T14:44:01.82-03:00
Last quality associated with the value: 0 - 0 corresponds to "Good" in the OPC UA standard
###Markdown
4. **Reading** historical values Raw values - exactly as they were stored Aggregations - OPC UA standard Querying the "raw" values (= raw data)
###Code
# Querying the "raw" values (= raw data)
# Values "must" be given either consistently with the Timezone or in UTC (Coordinated Universal Time)
ini_date = datetime.datetime(2014, 3, 4, 2, tzinfo=pytz.utc)
end_date = datetime.datetime(2014, 3, 4, 4, tzinfo=pytz.utc)
query_period = epm.QueryPeriod(ini_date, end_date)
data = bv_LIC101.historyReadRaw(query_period)
plt.plot(data['Value'], color='#00e6e6') # plot of the data only
plt.xlabel("auto-linear")
plt.ylabel(bv_LIC101.name + '[' + bv_LIC101.eu['displayName'] + ']')
# Extra notes:
# Resizing and colouring the figure after it has been created
fig = plt.gcf()
ax = plt.gca()
fig.set_size_inches(18, 10)
fig.patch.set_facecolor('#525454')
ax.set_facecolor('#323434')
#fig.savefig(bv_LIC101.name + '.png', dpi=100) # save the image to a file
# ATTENTION!
# The returned Timestamps are always in UTC!
print(data[0])
###Output
(1.6148503, datetime.datetime(2014, 3, 4, 2, 0, tzinfo=datetime.timezone.utc), 0)
###Markdown
Golden rules when working with Timestamps* Always use "offset-aware" datetime objects (offset relative to UTC).* Always store datetimes in UTC and convert to local time zones only when interacting with users.* Always use the international standard ISO 8601 as the input/output representation of date-times. ISO 8601 examples* Date: 2019-07-16* Date and time, separated, in UTC: 2019-07-16 21:02Z* Date and time, combined, in UTC: 2019-07-16T21:02Z* Date with week number: 2019-W29-2* Ordinal date: 2019-197
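A minimal sketch of these rules (the timezone name below is only an example):

```python
import datetime
import pytz

# Store in UTC...
event_utc = datetime.datetime(2019, 7, 16, 21, 2, tzinfo=pytz.utc)
# ...and convert only when presenting the value to a user.
event_local = event_utc.astimezone(pytz.timezone('America/Sao_Paulo'))
print(event_utc.isoformat())    # 2019-07-16T21:02:00+00:00
print(event_local.isoformat())  # 2019-07-16T18:02:00-03:00
```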
###Code
# Nota:
# 'UTC' is Coordinated Universal Time. It is a successor to, but distinct from, Greenwich Mean Time (GMT) and the various definitions of Universal Time. UTC is now the worldwide standard for regulating clocks and time measurement.
# All other timezones are defined relative to UTC, and include offsets like UTC+0800 - hours to add or
# subtract from UTC to derive the local time. No daylight saving time occurs in UTC, making it a useful
# timezone to perform date arithmetic without worrying about the confusion and ambiguities caused by
# daylight saving time transitions, your country changing its timezone, or mobile computers that roam
# through multiple timezones.
#
# ref.: http://pytz.sourceforge.net/
#
# List the timezones starting with 'America/S' -> to show Sao_Paulo
print(list(filter(lambda s: s.find('America/S')+1, pytz.common_timezones)))
print(50 * '#')
# ATENÇÃO!
# Wrong value returned from offset for timezones in Brazil
# https://github.com/sdispater/pendulum/issues/319
#
tz_sp = pytz.timezone('America/Sao_Paulo')
ini_date1 = datetime.datetime(2014, 3, 3, 23, tzinfo=tz_sp)
end_date1 = datetime.datetime(2014, 3, 4, 1, tzinfo=tz_sp)
query_period1 = epm.QueryPeriod(ini_date1,end_date1)
data1 = bv_LIC101.historyReadRaw(query_period1)
# To apply the OFFSET manually, it first has to be identified (or given by hand)
now_naive = datetime.datetime.now() # just to show that a NOW method exists!
now_aware = datetime.datetime.utcnow() # used to identify the OFFSET (note: utcnow() actually returns a naive datetime)
localize = pytz.utc.localize(now_aware) # just to show that a localize method exists!
now_sp = now_aware.astimezone(tz_sp) # used to identify the OFFSET
print(f'Date-time now, naive, without Timezone information: {now_naive}')
print(f'Date-time now from utcnow (still naive): {now_aware}')
print(f'UTC localization - Timezone: {localize}')
print(f'São Paulo localization - Timezone: {now_sp}')
print(50 * '#')
tz_offset = now_sp.tzinfo.utcoffset(now_aware).seconds/(3600) - 24 # is_dst=False -> no daylight saving time!
tz_ok = datetime.timezone(datetime.timedelta(hours=tz_offset))
# tz_ok = -3 # São Paulo (Brasília) - Brazil - without daylight saving time!
ini_date2 = datetime.datetime(2014, 3, 3, 23, tzinfo=tz_ok)
end_date2 = datetime.datetime(2014, 3, 4, 1, tzinfo=tz_ok)
query_period2 = epm.QueryPeriod(ini_date2,end_date2)
data_ok = bv_LIC101.historyReadRaw(query_period2)
# Printing the final result!
print('Problematic Timestamp: {}'.format(data1[0]))
print('Timestamp OK: {}'.format(data_ok[0]))
# Plotting with the time axis (abscissa) in local date-time values
def utc2tz(dt, tz):
    """ Converts a UTC date-time to the given timezone - removing tzinfo
    """
    return dt.astimezone(tz).replace(tzinfo=None)
local_timestamp = list(map(utc2tz, data['Timestamp'], [tz_sp]*len(data)))
# There seems to be an issue when using timezone-aware datetimes here, hence the removal of tzinfo above
#local_timestamp = data['Timestamp']
# C:\Anaconda3\lib\site-packages\pandas\plotting\_converter.py in __init__(self, locator, tz, defaultfmt)
# AttributeError: 'datetime.timezone' object has no attribute '_utcoffset'
# https://github.com/matplotlib/matplotlib/issues/12310
# https://github.com/pandas-dev/pandas/issues/22859
# Pandas converter, use to extract date str for axis labels, fails when index is timezone aware.
# This happends only if matplotlib>=3.0.0 version is installed. I reported the bug first to
# matplotlib (matplotlib/matplotlib#12310), but the close it as they believe that the bug is in pandas
# converter.
# Creating the figure before plotting
# matplotlib.pyplot.subplots(nrows=1, ncols=1, sharex=False, sharey=False, squeeze=True, subplot_kw=None, gridspec_kw=None, **fig_kw)
fig, ax = plt.subplots()
fig.set_size_inches(18, 10)
fig.patch.set_facecolor('#525454')
ax.plot(local_timestamp, data['Value'], color='#00e6e6')
ax.set_xlabel("Time")
ax.set_ylabel(bv_LIC101.name + '[' + bv_LIC101.eu['displayName'] + ']')
ax.grid(True)
ax.set_facecolor('#323434')
###Output
C:\Anaconda3\lib\site-packages\pandas\plotting\_converter.py:129: FutureWarning: Using an implicitly registered datetime converter for a matplotlib plotting method. The converter was registered by pandas on import. Future versions of pandas will require you to explicitly register matplotlib converters.
To register the converters:
>>> from pandas.plotting import register_matplotlib_converters
>>> register_matplotlib_converters()
warnings.warn(msg, FutureWarning)
###Markdown
Querying with aggregations (= data processing - server side)
###Code
# Query using aggregations (= data processing - server side)
# As with the RAW queries, the values "must" be given either consistently with the Timezone or in UTC (Coordinated Universal Time)
ini_date = datetime.datetime(2014, 3, 4, 2, tzinfo=pytz.utc)
end_date = datetime.datetime(2014, 3, 4, 4, tzinfo=pytz.utc)
query_period = epm.QueryPeriod(ini_date, end_date)
process_interval = datetime.timedelta(minutes=10)
# Tip: press TAB after "epm.AggregateType." to see the available options (intellisense)
aggregate_details = epm.AggregateDetails(process_interval, epm.AggregateType.TimeAverage)
# Executing the aggregated query of the time-weighted average (every 10 minutes)
data = bv_LIC101.historyReadAggregate(aggregate_details, query_period)
# Showing a step plot, using the previous value as the reference (default)
t = np.arange(0, len(data))*10
plt.step(t, data['Value'], where='pre', color='#00e6e6') # where is 'pre' by default!
plt.xlabel("Delta Time (min)")
plt.ylabel(bv_LIC101.name + '[' + bv_LIC101.eu['displayName'] + ']')
# Extra notes:
# Resizing and colouring the figure after it has been created
fig = plt.gcf()
ax = plt.gca()
fig.set_size_inches(18, 10)
fig.patch.set_facecolor('#525454')
ax.set_facecolor('#323434')
#fig.savefig(bv_LIC101.name + '.png', dpi=100) # save the image to a file
###Output
_____no_output_____
###Markdown
5. **Reading** annotations An annotation is a message created by an analyst that is associated with a variable at a given *Timestamp*.* An annotation on a TAG is uniquely identified by a triple: Message, *Timestamp* and User* More than one user can create an annotation on the same TAG at the same *Timestamp** NO user can EDIT an annotation made by someone else!!! *(intellectual property)** ONLY Administrators can DELETE annotations (from any user)!
###Code
# All annotations in the requested period are returned as a list
annotations = bv_LIC101.readAnnotations(ini_date,end_date)
for annotation in annotations:
    print(f'ANNOTATION: {annotation}\n')
###Output
_____no_output_____
###Markdown
Tip See an example of annotation usage on Elipse Software's GitHub: [Elipse Software/epmwebapi/exemplos/sample01.ipynb](https://nbviewer.jupyter.org/github/elipsesoftware/epmwebapi/blob/master/exemplos/sample01.ipynb) 6. **Writing** scalar values 7. **Writing** historical values 8. **Writing** annotations 9. **CRUD** (Create, Read, Update and Delete) of variables - outside the scope of this short course 10. **Closing the connection** (= session) with an **EPM Server** (= data server)
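Steps 6 through 8 are not implemented in this copy of the notebook. Purely as an illustration, a scalar write could look like the sketch below; the `write(value, timestamp, quality)` signature is an assumption here and must be checked against the epmwebapi documentation, so the calls are left commented out to avoid writing to the demo server by accident.

```python
# ASSUMPTION: write(value, timestamp, quality) -- verify against the epmwebapi docs/examples before use.
# new_value = 1.23
# new_timestamp = datetime.datetime.now(tz=pytz.utc)
# bv_LIC101.write(new_value, new_timestamp, 0)  # 0 = "Good" quality in the OPC UA standard
```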
###Code
# The connection established with the EPM Server must ALWAYS be closed, since this ends the session and
# releases the EPM Client license so that, eventually, others can use it.
epmConn.close()
###Output
_____no_output_____ |
intro-to-dl-master/week4/Autoencoders-task.ipynb | ###Markdown
Denoising Autoencoders And Where To Find ThemToday we're going to train deep autoencoders and apply them to faces and similar images search.Our new test subjects are human faces from the [lfw dataset](http://vis-www.cs.umass.edu/lfw/). Import stuff
###Code
import sys
sys.path.append("..")
import grading
import tensorflow as tf
import keras, keras.layers as L, keras.backend as K
import numpy as np
from sklearn.model_selection import train_test_split
from lfw_dataset import load_lfw_dataset
%matplotlib inline
import matplotlib.pyplot as plt
import download_utils
import keras_utils
import numpy as np
from keras_utils import reset_tf_session
# !!! remember to clear session/graph if you rebuild your graph to avoid out-of-memory errors !!!
###Output
_____no_output_____
###Markdown
Load datasetDataset was downloaded for you. Relevant links (just in case):- http://www.cs.columbia.edu/CAVE/databases/pubfig/download/lfw_attributes.txt- http://vis-www.cs.umass.edu/lfw/lfw-deepfunneled.tgz- http://vis-www.cs.umass.edu/lfw/lfw.tgz
###Code
# we downloaded them for you, just link them here
download_utils.link_week_4_resources()
# load images
X, attr = load_lfw_dataset(use_raw=True, dimx=32, dimy=32)
IMG_SHAPE = X.shape[1:]
# center images
X = X.astype('float32') / 255.0 - 0.5
# split
X_train, X_test = train_test_split(X, test_size=0.1, random_state=42)
def show_image(x):
plt.imshow(np.clip(x + 0.5, 0, 1))
plt.title('sample images')
for i in range(6):
plt.subplot(2,3,i+1)
show_image(X[i])
print("X shape:", X.shape)
print("attr shape:", attr.shape)
# try to free memory
del X
import gc
gc.collect()
###Output
_____no_output_____
###Markdown
Autoencoder architectureLet's design the autoencoder as two sequential keras models: the encoder and decoder respectively.We will then use the symbolic API to apply and train these models. First step: PCAPrincipal Component Analysis is a popular dimensionality reduction method. Under the hood, PCA attempts to decompose the object-feature matrix $X$ into two smaller matrices: $W$ and $\hat W$, minimizing the _mean squared error_:$$\|(X W) \hat{W} - X\|^2_2 \to_{W, \hat{W}} \min$$- $X \in \mathbb{R}^{n \times m}$ - object matrix (**centered**);- $W \in \mathbb{R}^{m \times d}$ - matrix of direct transformation;- $\hat{W} \in \mathbb{R}^{d \times m}$ - matrix of reverse transformation;- $n$ samples, $m$ original dimensions and $d$ target dimensions;In geometric terms, we want to find the $d$ axes along which most of the variance occurs. The "natural" axes, if you wish.PCA can also be seen as a special case of an autoencoder.* __Encoder__: X -> Dense(d units) -> code* __Decoder__: code -> Dense(m units) -> XWhere Dense is a fully-connected layer with linear activation: $f(X) = W \cdot X + \vec b $Note: the bias term in those layers is responsible for "centering" the matrix, i.e. subtracting the mean.
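As a small numeric sketch of this objective (independent of Keras, on made-up data): for a centered matrix, the optimal direct transform is given by the top $d$ right singular vectors, and the reverse transform is simply its transpose.

```python
rng = np.random.RandomState(0)
X_toy = rng.normal(size=(100, 5))
X_c = X_toy - X_toy.mean(axis=0)                   # center the data
U, s, Vt = np.linalg.svd(X_c, full_matrices=False)
W = Vt[:2].T                                       # direct transform, d = 2
X_rec = (X_c @ W) @ W.T                            # reverse transform = W^T for orthonormal W
print(np.mean((X_rec - X_c) ** 2))                 # the minimal achievable MSE for d = 2
```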
###Code
def build_pca_autoencoder(img_shape, code_size):
"""
Here we define a simple linear autoencoder as described above.
We also flatten and un-flatten data to be compatible with image shapes
"""
encoder = keras.models.Sequential()
encoder.add(L.InputLayer(img_shape))
encoder.add(L.Flatten()) #flatten image to vector
encoder.add(L.Dense(code_size)) #actual encoder
decoder = keras.models.Sequential()
decoder.add(L.InputLayer((code_size,)))
decoder.add(L.Dense(np.prod(img_shape))) #actual decoder, height*width*3 units
decoder.add(L.Reshape(img_shape)) #un-flatten
return encoder,decoder
###Output
_____no_output_____
###Markdown
Meld them together into one model:
###Code
s = reset_tf_session()
encoder, decoder = build_pca_autoencoder(IMG_SHAPE, code_size=32)
inp = L.Input(IMG_SHAPE)
code = encoder(inp)
reconstruction = decoder(code)
autoencoder = keras.models.Model(inputs=inp, outputs=reconstruction)
autoencoder.compile(optimizer='adamax', loss='mse')
autoencoder.fit(x=X_train, y=X_train, epochs=15,
validation_data=[X_test, X_test],
# callbacks=[keras_utils.TqdmProgressCallback()],
verbose=True)
def visualize(img,encoder,decoder):
"""Draws original, encoded and decoded images"""
code = encoder.predict(img[None])[0] # img[None] is the same as img[np.newaxis, :]
reco = decoder.predict(code[None])[0]
plt.subplot(1,3,1)
plt.title("Original")
show_image(img)
plt.subplot(1,3,2)
plt.title("Code")
plt.imshow(code.reshape([code.shape[-1]//2,-1]))
plt.subplot(1,3,3)
plt.title("Reconstructed")
show_image(reco)
plt.show()
score = autoencoder.evaluate(X_test,X_test,verbose=0)
print("PCA MSE:", score)
for i in range(5):
img = X_test[i]
visualize(img,encoder,decoder)
###Output
_____no_output_____
###Markdown
Going deeper: convolutional autoencoderPCA is neat but surely we can do better. This time we want you to build a deep convolutional autoencoder by... stacking more layers. EncoderThe **encoder** part is pretty standard, we stack convolutional and pooling layers and finish with a dense layer to get the representation of desirable size (`code_size`).We recommend to use `activation='elu'` for all convolutional and dense layers.We recommend to repeat (conv, pool) 4 times with kernel size (3, 3), `padding='same'` and the following numbers of output channels: `32, 64, 128, 256`.Remember to flatten (`L.Flatten()`) output before adding the last dense layer! DecoderFor **decoder** we will use so-called "transpose convolution". Traditional convolutional layer takes a patch of an image and produces a number (patch -> number). In "transpose convolution" we want to take a number and produce a patch of an image (number -> patch). We need this layer to "undo" convolutions in encoder. We had a glimpse of it during week 3 (watch [this video](https://www.coursera.org/learn/intro-to-deep-learning/lecture/auRqf/a-glimpse-of-other-computer-vision-tasks) starting at 5:41).Here's how "transpose convolution" works:In this example we use a stride of 2 to produce 4x4 output, this way we "undo" pooling as well. Another way to think about it: we "undo" convolution with stride 2 (which is similar to conv + pool).You can add "transpose convolution" layer in Keras like this:```pythonL.Conv2DTranspose(filters=?, kernel_size=(3, 3), strides=2, activation='elu', padding='same')```Our decoder starts with a dense layer to "undo" the last layer of encoder. Remember to reshape its output to "undo" `L.Flatten()` in encoder.Now we're ready to undo (conv, pool) pairs. For this we need to stack 4 `L.Conv2DTranspose` layers with the following numbers of output channels: `128, 64, 32, 3`. Each of these layers will learn to "undo" (conv, pool) pair in encoder. For the last `L.Conv2DTranspose` layer use `activation=None` because that is our final image.
###Code
# Let's play around with transpose convolution on examples first
def test_conv2d_transpose(img_size, filter_size):
print("Transpose convolution test for img_size={}, filter_size={}:".format(img_size, filter_size))
x = (np.arange(img_size ** 2, dtype=np.float32) + 1).reshape((1, img_size, img_size, 1))
f = (np.ones(filter_size ** 2, dtype=np.float32)).reshape((filter_size, filter_size, 1, 1))
s = reset_tf_session()
conv = tf.nn.conv2d_transpose(x, f,
output_shape=(1, img_size * 2, img_size * 2, 1),
strides=[1, 2, 2, 1],
padding='SAME')
result = s.run(conv)
print("input:")
print(x[0, :, :, 0])
print("filter:")
print(f[:, :, 0, 0])
print("output:")
print(result[0, :, :, 0])
s.close()
test_conv2d_transpose(img_size=2, filter_size=2)
test_conv2d_transpose(img_size=2, filter_size=3)
test_conv2d_transpose(img_size=4, filter_size=2)
test_conv2d_transpose(img_size=4, filter_size=3)
def build_deep_autoencoder(img_shape, code_size):
"""PCA's deeper brother. See instructions above. Use `code_size` in layer definitions."""
H,W,C = img_shape
# encoder
encoder = keras.models.Sequential()
encoder.add(L.InputLayer(img_shape))
### YOUR CODE HERE: define encoder as per instructions above ###
encoder.add(L.Conv2D(filters=32, kernel_size=(3, 3), strides=1, padding="same", activation='elu'))
encoder.add(L.MaxPooling2D((2, 2)))
encoder.add(L.Conv2D(filters=64, kernel_size=(3, 3), strides=1, padding="same", activation='elu'))
encoder.add(L.MaxPooling2D((2, 2)))
encoder.add(L.Conv2D(filters=128, kernel_size=(3, 3), strides=1, padding="same", activation='elu'))
encoder.add(L.MaxPooling2D((2, 2)))
encoder.add(L.Conv2D(filters=256, kernel_size=(3, 3), strides=1, padding="same", activation='elu'))
encoder.add(L.MaxPooling2D((2, 2)))
encoder.add(L.Flatten()) # flatten
encoder.add(L.Dense(code_size)) # actual encoder
# decoder
decoder = keras.models.Sequential()
decoder.add(L.InputLayer((code_size,)))
### YOUR CODE HERE: define decoder as per instructions above ###
    decoder.add(L.Dense(2*2*256))  # undo the encoder's final dense layer
decoder.add(L.Reshape((2,2,256))) #un-flatten
decoder.add(L.Conv2DTranspose(filters=128, kernel_size=(3, 3), strides=2, activation='elu', padding='same'))
decoder.add(L.Conv2DTranspose(filters=64, kernel_size=(3, 3), strides=2, activation='elu', padding='same'))
decoder.add(L.Conv2DTranspose(filters=32, kernel_size=(3, 3), strides=2, activation='elu', padding='same'))
decoder.add(L.Conv2DTranspose(filters=3, kernel_size=(3, 3), strides=2, activation=None, padding='same')) # final image
return encoder, decoder
# Check autoencoder shapes along different code_sizes
get_dim = lambda layer: np.prod(layer.output_shape[1:])
for code_size in [1,8,32,128,512]:
s = reset_tf_session()
encoder, decoder = build_deep_autoencoder(IMG_SHAPE, code_size=code_size)
print("Testing code size %i" % code_size)
assert encoder.output_shape[1:]==(code_size,),"encoder must output a code of required size"
assert decoder.output_shape[1:]==IMG_SHAPE, "decoder must output an image of valid shape"
assert len(encoder.trainable_weights)>=6, "encoder must contain at least 3 layers"
assert len(decoder.trainable_weights)>=6, "decoder must contain at least 3 layers"
for layer in encoder.layers + decoder.layers:
assert get_dim(layer) >= code_size, "Encoder layer %s is smaller than bottleneck (%i units)"%(layer.name,get_dim(layer))
print("All tests passed!")
s = reset_tf_session()
# Look at encoder and decoder shapes.
# Total number of trainable parameters of encoder and decoder should be close.
s = reset_tf_session()
encoder, decoder = build_deep_autoencoder(IMG_SHAPE, code_size=32)
encoder.summary()
decoder.summary()
###Output
_____no_output_____
###Markdown
Convolutional autoencoder training. This will take **1 hour**. You're aiming at ~0.0056 validation MSE and ~0.0054 training MSE.
###Code
s = reset_tf_session()
encoder, decoder = build_deep_autoencoder(IMG_SHAPE, code_size=32)
inp = L.Input(IMG_SHAPE)
code = encoder(inp)
reconstruction = decoder(code)
autoencoder = keras.models.Model(inputs=inp, outputs=reconstruction)
autoencoder.compile(optimizer="adamax", loss='mse')
# we will save model checkpoints here to continue training in case of kernel death
model_filename = 'autoencoder.{0:03d}.hdf5'
last_finished_epoch = None
#### uncomment below to continue training from model checkpoint
#### fill `last_finished_epoch` with your latest finished epoch
# from keras.models import load_model
# s = reset_tf_session()
# last_finished_epoch = 4
# autoencoder = load_model(model_filename.format(last_finished_epoch))
# encoder = autoencoder.layers[1]
# decoder = autoencoder.layers[2]
autoencoder.fit(x=X_train, y=X_train, epochs=25,
validation_data=[X_test, X_test],
callbacks=[keras_utils.ModelSaveCallback(model_filename)],
# keras_utils.TqdmProgressCallback()],
verbose=True,
initial_epoch=last_finished_epoch or 0)
reconstruction_mse = autoencoder.evaluate(X_test, X_test, verbose=0)
print("Convolutional autoencoder MSE:", reconstruction_mse)
for i in range(5):
img = X_test[i]
visualize(img,encoder,decoder)
# save trained weights
encoder.save_weights("encoder.h5")
decoder.save_weights("decoder.h5")
# restore trained weights
s = reset_tf_session()
encoder, decoder = build_deep_autoencoder(IMG_SHAPE, code_size=32)
encoder.load_weights("encoder.h5")
decoder.load_weights("decoder.h5")
inp = L.Input(IMG_SHAPE)
code = encoder(inp)
reconstruction = decoder(code)
autoencoder = keras.models.Model(inputs=inp, outputs=reconstruction)
autoencoder.compile(optimizer="adamax", loss='mse')
print(autoencoder.evaluate(X_test, X_test, verbose=0))
print(reconstruction_mse)
###Output
_____no_output_____
###Markdown
Submit to Coursera
###Code
from submit import submit_autoencoder
submission = build_deep_autoencoder(IMG_SHAPE, code_size=71)
# token expires every 30 min
COURSERA_TOKEN = ### YOUR TOKEN HERE
COURSERA_EMAIL = ### YOUR EMAIL HERE
submit_autoencoder(submission, reconstruction_mse, COURSERA_EMAIL, COURSERA_TOKEN)
###Output
_____no_output_____
###Markdown
Optional: Denoising AutoencoderThis part is **optional**; it shows you one useful application of autoencoders: denoising. You can run this code and make sure denoising works :) Let's now turn our model into a denoising autoencoder:We'll keep the model architecture, but change the way it is trained. In particular, we'll corrupt its input data randomly with noise before each epoch.There are many strategies for introducing noise: adding Gaussian white noise, occluding with random black rectangles, etc. We will add Gaussian white noise.
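One detail worth understanding before running the cell below: because the added noise is zero-mean and independent of the data, variances add, so the corrupted images should have a standard deviation of about sqrt(std(X)^2 + sigma^2). That identity is exactly what the noise tests below rely on; a minimal sketch:

```python
import numpy as np

# For independent X and N: Var(X + N) = Var(X) + Var(N),
# hence std(X + noise) ~= sqrt(std(X)**2 + sigma**2).
X = np.random.rand(100000)                       # stand-in for pixel values in [0, 1]
sigma = 0.5
noisy = X + np.random.normal(0, sigma, X.shape)
print(noisy.std(), np.sqrt(X.std() ** 2 + sigma ** 2))   # the two values should be close
```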
###Code
def apply_gaussian_noise(X,sigma=0.1):
"""
    adds zero-mean Gaussian noise with standard deviation sigma
:param X: image tensor of shape [batch,height,width,3]
Returns X + noise.
"""
noise = np.random.normal(0, sigma, X.shape) ### YOUR CODE HERE ###
return X + noise
# noise tests
theoretical_std = (X_train[:100].std()**2 + 0.5**2)**.5
our_std = apply_gaussian_noise(X_train[:100],sigma=0.5).std()
assert abs(theoretical_std - our_std) < 0.01, "Standard deviation does not match its required value. Make sure you use sigma as std."
assert abs(apply_gaussian_noise(X_train[:100],sigma=0.5).mean() - X_train[:100].mean()) < 0.01, "Mean has changed. Please add zero-mean noise"
# test different noise scales
plt.subplot(1,4,1)
show_image(X_train[0])
plt.subplot(1,4,2)
show_image(apply_gaussian_noise(X_train[:1],sigma=0.01)[0])
plt.subplot(1,4,3)
show_image(apply_gaussian_noise(X_train[:1],sigma=0.1)[0])
plt.subplot(1,4,4)
show_image(apply_gaussian_noise(X_train[:1],sigma=0.5)[0])
###Output
_____no_output_____
###Markdown
Training will take **1 hour**.
###Code
s = reset_tf_session()
# we use bigger code size here for better quality
encoder, decoder = build_deep_autoencoder(IMG_SHAPE, code_size=512)
assert encoder.output_shape[1:]==(512,), "encoder must output a code of required size"
inp = L.Input(IMG_SHAPE)
code = encoder(inp)
reconstruction = decoder(code)
autoencoder = keras.models.Model(inp, reconstruction)
autoencoder.compile('adamax', 'mse')
for i in range(25):
print("Epoch %i/25, Generating corrupted samples..."%(i+1))
X_train_noise = apply_gaussian_noise(X_train)
X_test_noise = apply_gaussian_noise(X_test)
# we continue to train our model with new noise-augmented data
autoencoder.fit(x=X_train_noise, y=X_train, epochs=1,
validation_data=[X_test_noise, X_test],
# callbacks=[keras_utils.TqdmProgressCallback()],
verbose=True)
X_test_noise = apply_gaussian_noise(X_test)
denoising_mse = autoencoder.evaluate(X_test_noise, X_test, verbose=0)
print("Denoising MSE:", denoising_mse)
for i in range(5):
img = X_test_noise[i]
visualize(img,encoder,decoder)
###Output
_____no_output_____
###Markdown
Optional: Image retrieval with autoencodersSo we've just trained a network that converts an image into itself imperfectly. This task is not that useful in and of itself, but it has a number of awesome side-effects. Let's see them in action.The first thing we can do is image retrieval, a.k.a. image search. We will give it an image and find similar images in latent space:To speed up the retrieval process, one should use Locality Sensitive Hashing on top of the encoded vectors. This [technique](https://erikbern.com/2015/07/04/benchmark-of-approximate-nearest-neighbor-libraries.html) can narrow down the potential nearest neighbours of our image in latent space (encoder code). We will calculate nearest neighbours in a brute-force way for simplicity.
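If retrieval speed ever becomes a problem, an approximate index is an easy swap. A minimal sketch with the `annoy` library (an optional aside: it is not used or required anywhere in this notebook, assumes `pip install annoy`, and reuses the `codes` matrix computed in the next cell):

```python
# Approximate nearest neighbours over the encoder codes with annoy,
# as an alternative to the brute-force sklearn search used below.
from annoy import AnnoyIndex

index = AnnoyIndex(codes.shape[1], "euclidean")
for i, code in enumerate(codes):
    index.add_item(i, code)
index.build(10)                                      # more trees -> better recall, slower build
similar_ids = index.get_nns_by_vector(codes[0], 5)   # ids of 5 approximate nearest neighbours
```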
###Code
# restore trained encoder weights
s = reset_tf_session()
encoder, decoder = build_deep_autoencoder(IMG_SHAPE, code_size=32)
encoder.load_weights("encoder.h5")
images = X_train
codes = encoder.predict(images) ### YOUR CODE HERE: encode all images ###
assert len(codes) == len(images)
from sklearn.neighbors import NearestNeighbors  # sklearn.neighbors.unsupervised was removed in newer scikit-learn versions
nei_clf = NearestNeighbors(metric="euclidean")
nei_clf.fit(codes)
def get_similar(image, n_neighbors=5):
    assert image.ndim==3, "image must be a single [height, width, 3] image, not a batch"
code = encoder.predict(image[None])
(distances,),(idx,) = nei_clf.kneighbors(code,n_neighbors=n_neighbors)
return distances,images[idx]
def show_similar(image):
distances,neighbors = get_similar(image,n_neighbors=3)
plt.figure(figsize=[8,7])
plt.subplot(1,4,1)
show_image(image)
plt.title("Original image")
for i in range(3):
plt.subplot(1,4,i+2)
show_image(neighbors[i])
plt.title("Dist=%.3f"%distances[i])
plt.show()
###Output
_____no_output_____
###Markdown
Cherry-picked examples:
###Code
# smiles
show_similar(X_test[247])
# ethnicity
show_similar(X_test[56])
# glasses
show_similar(X_test[63])
###Output
_____no_output_____
###Markdown
Optional: Cheap image morphing We can take linear combinations of image codes to produce new images with the decoder.
###Code
# restore trained encoder weights
s = reset_tf_session()
encoder, decoder = build_deep_autoencoder(IMG_SHAPE, code_size=32)
encoder.load_weights("encoder.h5")
decoder.load_weights("decoder.h5")
for _ in range(5):
image1,image2 = X_test[np.random.randint(0,len(X_test),size=2)]
code1, code2 = encoder.predict(np.stack([image1, image2]))
plt.figure(figsize=[10,4])
for i,a in enumerate(np.linspace(0,1,num=7)):
output_code = code1*(1-a) + code2*(a)
output_image = decoder.predict(output_code[None])[0]
plt.subplot(1,7,i+1)
show_image(output_image)
plt.title("a=%.2f"%a)
plt.show()
###Output
_____no_output_____ |
Notebooks/Day3.ipynb | ###Markdown
Day 3, Binary Diagnostic Use the binary numbers in your diagnostic report to calculate the gamma rate and epsilon rate, then multiply them together. You need to use the binary numbers in the diagnostic report to generate two new binary numbers (called the gamma rate and the epsilon rate). The power consumption can then be found by multiplying the gamma rate by the epsilon rate.
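As a quick check against the sample report used in the next cell: the most common bits per column give a gamma rate of `10110` = 22 and the least common bits give an epsilon rate of `01001` = 9, so the expected power consumption is 22 * 9 = 198.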
###Code
let divide source = ((0,0), source) ||> List.fold (fun (x,y) char -> match char with '1' -> (x + 1, y) | '0' -> (x, y + 1) | _ -> (x,y))
let significate (x,y) = if x >= y then ['1';'0'] else ['0';'1']
let toNumber str = System.Convert.ToInt32(str, 2)
let toString source = source |> Array.ofSeq |> (fun s -> new System.String(s))
["00100";"11110";"10110";"10111";"10101";"01111";"00111";"11100";"10000";"11001";"00010";"01010"]
|> List.map (fun l -> l.ToCharArray() |> Array.toList)
|> List.transpose
|> List.map divide //Item1: 1's count, Item2: 0's count
|> List.map significate // 0: mcb, 1: lcb
|> List.transpose // 0: Gamma, 1: epsilon
|> List.map (fun l -> l |> toString |> toNumber)
|> List.fold (*) 1
System.IO.File.ReadLines("../Data/Day3.txt") |> Seq.toList
|> List.map (fun l -> l.ToCharArray() |> Array.toList)
|> List.transpose
|> List.map divide //Item1: 1's count, Item2: 0's count
|> List.map significate // 0: mcb, 1: lcb
|> List.transpose // 0: Gamma, 1: epsilon
|> List.map (fun l -> l |> toString |> toNumber)
|> List.fold (*) 1
###Output
_____no_output_____
###Markdown
Use the binary numbers in your diagnostic report to calculate the oxygen generator rating and CO2 scrubber rating, then multiply them together.>To find the life support rating from the sample, multiply the oxygen generator rating (23) by the CO2 scrubber rating (10) to get 230.
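Worked on the same sample: the oxygen filter keeps the most common bit per position, narrowing the 12 readings to 7, 4, 3, 2 and finally `10111` = 23, while the CO2 filter keeps the least common bit, narrowing them to 5, 2 and finally `01010` = 10, giving 23 * 10 = 230.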
###Code
// let expandBits = ["00100";"11110";"10110";"10111";"10101";"01111";"00111";"11100";"10000";"11001";"00010";"01010"] |> List.map (fun l -> l.ToCharArray() |> Array.toList)
let expandBits = System.IO.File.ReadLines("../Data/Day3.txt") |> Seq.toList |> List.map (fun l -> l.ToCharArray() |> Array.toList)
let inline mcb l = l |> List.head
let inline lcb l = l |> List.tail |> List.head
let rec reduceBits op i bits =
if Seq.length bits = 1 then bits
else
let rb = bits |> List.transpose |> List.item i |> divide |> significate |> op
bits |> List.where (fun l -> l[i] = rb) |> reduceBits op (i+1)
let inline reduceBy op = reduceBits op 0
let o2 = expandBits |> reduceBy mcb |> List.map (fun l -> l |> toString |> toNumber) |> List.head
let co2 = expandBits |> reduceBy lcb |> List.map (fun l -> l |> toString |> toNumber) |> List.head
o2 * co2
###Output
_____no_output_____ |
lessons/python/While loop.ipynb | ###Markdown
The While Loop
###Code
x=0
while x<5:
x+=1
print(x)
###Output
1
2
3
4
5
|
Demo/Cascade_Tabnet_mmdet_v2_cpu_only_demo.ipynb | ###Markdown
Prepare environment
###Code
!pip install mmdet==2.3.0 requests
!pip install torch==1.5.1+cpu torchvision==0.6.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
!pip install mmcv-full==1.0.5+torch1.5.0+cpu -f https://openmmlab.oss-accelerate.aliyuncs.com/mmcv/dist/index.html
###Output
Requirement already satisfied: mmdet==2.3.0 in /usr/local/lib/python3.6/dist-packages (2.3.0)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (2.23.0)
Requirement already satisfied: terminaltables in /usr/local/lib/python3.6/dist-packages (from mmdet==2.3.0) (3.1.0)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from mmdet==2.3.0) (1.15.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from mmdet==2.3.0) (1.18.5)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from mmdet==2.3.0) (3.2.2)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests) (2020.6.20)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests) (2.10)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->mmdet==2.3.0) (0.10.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->mmdet==2.3.0) (2.8.1)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->mmdet==2.3.0) (2.4.7)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->mmdet==2.3.0) (1.2.0)
Looking in links: https://download.pytorch.org/whl/torch_stable.html
Requirement already satisfied: torch==1.5.1+cpu in /usr/local/lib/python3.6/dist-packages (1.5.1+cpu)
Requirement already satisfied: torchvision==0.6.1+cpu in /usr/local/lib/python3.6/dist-packages (0.6.1+cpu)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from torch==1.5.1+cpu) (0.16.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch==1.5.1+cpu) (1.18.5)
Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision==0.6.1+cpu) (7.0.0)
Looking in links: https://openmmlab.oss-accelerate.aliyuncs.com/mmcv/dist/index.html
Collecting mmcv-full==1.0.5+torch1.5.0+cpu
Using cached https://openmmlab.oss-accelerate.aliyuncs.com/mmcv/dist/1.0.5/torch1.5.0/cpu/mmcv_full-1.0.5%2Btorch1.5.0%2Bcpu-cp36-cp36m-manylinux1_x86_64.whl
Requirement already satisfied: addict in /usr/local/lib/python3.6/dist-packages (from mmcv-full==1.0.5+torch1.5.0+cpu) (2.2.1)
Requirement already satisfied: yapf in /usr/local/lib/python3.6/dist-packages (from mmcv-full==1.0.5+torch1.5.0+cpu) (0.30.0)
Requirement already satisfied: opencv-python>=3 in /usr/local/lib/python3.6/dist-packages (from mmcv-full==1.0.5+torch1.5.0+cpu) (4.1.2.30)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from mmcv-full==1.0.5+torch1.5.0+cpu) (3.13)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from mmcv-full==1.0.5+torch1.5.0+cpu) (1.18.5)
Installing collected packages: mmcv-full
Found existing installation: mmcv-full 1.0.5
Uninstalling mmcv-full-1.0.5:
Successfully uninstalled mmcv-full-1.0.5
Successfully installed mmcv-full-1.0.5
###Markdown
Download prerequisites
###Code
import requests

conf_url = 'https://raw.githubusercontent.com/iiLaurens/CascadeTabNet/mmdet2x/Config/cascade_mask_rcnn_hrnetv2p_w32_20e.py'
with open('cascade_mask_rcnn_hrnetv2p_w32_20e.py', 'wb') as f:
f.write(requests.get(conf_url).content)
checkpoint_url = 'https://github.com/iiLaurens/CascadeTabNet/releases/download/v1.0.0/ICDAR.19.Track.B2.Modern.table.structure.recognition.v2.pth'
with open('ICDAR.19.Track.B2.Modern.table.structure.recognition.v2.pth', 'wb') as f:
f.write(requests.get(checkpoint_url).content)
demo_img = 'https://github.com/iiLaurens/CascadeTabNet/raw/mmdet2x/Demo/demo.png'
with open('demo.png', 'wb') as f:
f.write(requests.get(demo_img).content)
###Output
_____no_output_____
###Markdown
Run inference on demo image
###Code
from mmdet.apis import init_detector, inference_detector, show_result_pyplot
import mmcv
# Load model
config_file = 'cascade_mask_rcnn_hrnetv2p_w32_20e.py'
checkpoint_file = 'ICDAR.19.Track.B2.Modern.table.structure.recognition.v2.pth'
model = init_detector(config_file, checkpoint_file, device='cpu')
# Test a single image
img = "demo.png"
# Run Inference
result = inference_detector(model, img)
# Visualization results
show_result_pyplot(model, img, result, score_thr=0.85)
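
# Optional sketch (not part of the original demo): pull the raw detections out of `result`.
# For a Cascade Mask R-CNN model, inference_detector returns a (bbox_result, segm_result)
# tuple, where bbox_result holds one (N, 5) array per class with rows [x1, y1, x2, y2, score].
bbox_result, segm_result = result if isinstance(result, tuple) else (result, None)
for class_name, bboxes in zip(model.CLASSES, bbox_result):
    for *box, score in bboxes.tolist():
        if score >= 0.85:  # same threshold as the visualization above
            print(class_name, [round(v, 1) for v in box], round(score, 3))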
###Output
/usr/local/lib/python3.6/dist-packages/mmdet/apis/inference.py:109: UserWarning: We set use_torchvision=True in CPU mode.
warnings.warn('We set use_torchvision=True in CPU mode.')
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2973: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
|
experiments/bert_noisy_networks_large.ipynb | ###Markdown
Textworld starting kit notebookModel: *Bert-DQN with replay memory*When running for the first time: 1. Run the first 2 code cells (with the pip installations) 2. Restart the runtime 3. Continue with the next cellsThis is done because there is a conflict between the dependencies of **textworld** and **colab**, which require different versions of **prompt-toolkit**
###Code
!pip install textworld
!pip install prompt-toolkit==1.0.16
!pip install pytorch_pretrained_bert
import os
import random
import logging
import yaml
import copy
import spacy
import numpy as np
import glob
from tqdm import tqdm
from typing import List, Dict, Any
from collections import namedtuple
import pandas as pd
import torch
import torch.nn.functional as F
import math
import gym
import textworld.gym
from textworld import EnvInfos
from pytorch_pretrained_bert import BertTokenizer, BertModel
torch.cuda.is_available()
torch.cuda.empty_cache()
###Output
_____no_output_____
###Markdown
Generic functions
###Code
def to_np(x):
if isinstance(x, np.ndarray):
return x
return x.data.cpu().numpy()
def to_pt(np_matrix, enable_cuda=False, type='long'):
if type == 'long':
if enable_cuda:
return torch.autograd.Variable(torch.from_numpy(np_matrix).type(torch.LongTensor).cuda())
else:
return torch.autograd.Variable(torch.from_numpy(np_matrix).type(torch.LongTensor))
elif type == 'float':
if enable_cuda:
return torch.autograd.Variable(torch.from_numpy(np_matrix).type(torch.FloatTensor).cuda())
else:
return torch.autograd.Variable(torch.from_numpy(np_matrix).type(torch.FloatTensor))
def _words_to_ids(words, word2id):
ids = []
for word in words:
try:
ids.append(word2id[word])
except KeyError:
ids.append(1)
return ids
def preproc(s, str_type='None', tokenizer=None, lower_case=True):
if s is None:
return ["nothing"]
s = s.replace("\n", ' ')
if s.strip() == "":
return ["nothing"]
if str_type == 'feedback':
if "$$$$$$$" in s:
s = ""
if "-=" in s:
s = s.split("-=")[0]
s = s.strip()
if len(s) == 0:
return ["nothing"]
tokens = [t.text for t in tokenizer(s)]
if lower_case:
tokens = [t.lower() for t in tokens]
return tokens
def max_len(list_of_list):
return max(map(len, list_of_list))
def pad_sequences(sequences, maxlen=None, dtype='int32', value=0.):
'''
Partially borrowed from Keras
# Arguments
sequences: list of lists where each element is a sequence
maxlen: int, maximum length
dtype: type to cast the resulting sequence.
value: float, value to pad the sequences to the desired value.
# Returns
x: numpy array with dimensions (number_of_sequences, maxlen)
'''
lengths = [len(s) for s in sequences]
nb_samples = len(sequences)
if maxlen is None:
maxlen = np.max(lengths)
# take the sample shape from the first non empty sequence
# checking for consistency in the main loop below.
sample_shape = tuple()
for s in sequences:
if len(s) > 0:
sample_shape = np.asarray(s).shape[1:]
break
x = (np.ones((nb_samples, maxlen) + sample_shape) * value).astype(dtype)
for idx, s in enumerate(sequences):
if len(s) == 0:
continue # empty list was found
# pre truncating
trunc = s[-maxlen:]
# check `trunc` has expected shape
trunc = np.asarray(trunc, dtype=dtype)
if trunc.shape[1:] != sample_shape:
raise ValueError('Shape of sample %s of sequence at position %s is different from expected shape %s' %
(trunc.shape[1:], idx, sample_shape))
# post padding
x[idx, :len(trunc)] = trunc
return x
###Output
_____no_output_____
###Markdown
Layers
###Code
def masked_mean(x, m=None, dim=-1):
"""
mean pooling when there're paddings
input: tensor: batch x time x h
mask: batch x time
output: tensor: batch x h
"""
if m is None:
return torch.mean(x, dim=dim)
mask_sum = torch.sum(m, dim=-1) # batch
res = torch.sum(x, dim=1) # batch x h
mean = res / (mask_sum.unsqueeze(-1) + 1e-6)
del mask_sum
del res
return mean
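
# Note on masked_mean: it assumes the padded time steps of `x` are already zeroed out;
# it sums over *all* time steps and only divides by the number of unmasked ones.
# e.g. x = [[[1, 1], [3, 3], [0, 0]]], m = [[1, 1, 0]]
#      -> torch.sum(x, dim=1) = [[4, 4]], mask_sum = 2, so masked_mean(x, m) = [[2, 2]]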
class Embedding(torch.nn.Module):
'''
inputs: x: batch x seq (x is post-padded by 0s)
outputs:embedding: batch x seq x emb
mask: batch x seq
'''
def __init__(self, embedding_size, vocab_size, enable_cuda=False):
super(Embedding, self).__init__()
self.embedding_size = embedding_size
self.vocab_size = vocab_size
self.enable_cuda = enable_cuda
self.embedding_layer = torch.nn.Embedding(self.vocab_size, self.embedding_size, padding_idx=0)
def compute_mask(self, x):
mask = torch.ne(x, 0).float()
if self.enable_cuda:
mask = mask.cuda()
return mask
def forward(self, x):
embeddings = self.embedding_layer(x) # batch x time x emb
mask = self.compute_mask(x) # batch x time
return embeddings, mask
class FastUniLSTM(torch.nn.Module):
"""
Adapted from https://github.com/facebookresearch/DrQA/
now supports: different rnn size for each layer
all zero rows in batch (from time distributed layer, by reshaping certain dimension)
"""
def __init__(self, ninp, nhids, dropout_between_rnn_layers=0.):
super(FastUniLSTM, self).__init__()
self.ninp = ninp
self.nhids = nhids
self.nlayers = len(self.nhids)
self.dropout_between_rnn_layers = dropout_between_rnn_layers
self.stack_rnns()
def stack_rnns(self):
rnns = [torch.nn.LSTM(self.ninp if i == 0 else self.nhids[i - 1],
self.nhids[i],
num_layers=1,
bidirectional=False) for i in range(self.nlayers)]
self.rnns = torch.nn.ModuleList(rnns)
def forward(self, x, mask):
def pad_(tensor, n):
if n > 0:
zero_pad = torch.autograd.Variable(torch.zeros((n,) + tensor.size()[1:]))
if x.is_cuda:
zero_pad = zero_pad.cuda()
tensor = torch.cat([tensor, zero_pad])
return tensor
"""
inputs: x: batch x time x inp
mask: batch x time
output: encoding: batch x time x hidden[-1]
"""
# Compute sorted sequence lengths
batch_size = x.size(0)
lengths = mask.data.eq(1).long().sum(1) # .squeeze()
_, idx_sort = torch.sort(lengths, dim=0, descending=True)
_, idx_unsort = torch.sort(idx_sort, dim=0)
lengths = list(lengths[idx_sort])
idx_sort = torch.autograd.Variable(idx_sort)
idx_unsort = torch.autograd.Variable(idx_unsort)
# Sort x
x = x.index_select(0, idx_sort)
# remove non-zero rows, and remember how many zeros
n_nonzero = np.count_nonzero(lengths)
n_zero = batch_size - n_nonzero
if n_zero != 0:
lengths = lengths[:n_nonzero]
x = x[:n_nonzero]
# Transpose batch and sequence dims
x = x.transpose(0, 1)
# Pack it up
rnn_input = torch.nn.utils.rnn.pack_padded_sequence(x, lengths)
# Encode all layers
outputs = [rnn_input]
for i in range(self.nlayers):
rnn_input = outputs[-1]
# dropout between rnn layers
if self.dropout_between_rnn_layers > 0:
dropout_input = F.dropout(rnn_input.data,
p=self.dropout_between_rnn_layers,
training=self.training)
rnn_input = torch.nn.utils.rnn.PackedSequence(dropout_input,
rnn_input.batch_sizes)
seq, last = self.rnns[i](rnn_input)
outputs.append(seq)
if i == self.nlayers - 1:
# last layer
last_state = last[0] # (num_layers * num_directions, batch, hidden_size)
last_state = last_state[0] # batch x hidden_size
# Unpack everything
for i, o in enumerate(outputs[1:], 1):
outputs[i] = torch.nn.utils.rnn.pad_packed_sequence(o)[0]
output = outputs[-1]
# Transpose and unsort
output = output.transpose(0, 1) # batch x time x enc
# re-padding
output = pad_(output, n_zero)
last_state = pad_(last_state, n_zero)
output = output.index_select(0, idx_unsort)
last_state = last_state.index_select(0, idx_unsort)
# Pad up to original batch sequence length
if output.size(1) != mask.size(1):
padding = torch.zeros(output.size(0),
mask.size(1) - output.size(1),
output.size(2)).type(output.data.type())
output = torch.cat([output, torch.autograd.Variable(padding)], 1)
output = output.contiguous() * mask.unsqueeze(-1)
return output, last_state, mask
###Output
_____no_output_____
###Markdown
Noisy nets
###Code
class NoisyLinear(torch.nn.Module):
def __init__(self, in_features, out_features, std_init=0.4):
super(NoisyLinear, self).__init__()
self.in_features = in_features
self.out_features = out_features
self.std_init = std_init
self.weight_mu = torch.nn.Parameter(torch.empty(out_features, in_features))
self.weight_sigma = torch.nn.Parameter(torch.empty(out_features, in_features))
self.register_buffer('weight_epsilon', torch.empty(out_features, in_features))
self.bias_mu = torch.nn.Parameter(torch.empty(out_features))
self.bias_sigma = torch.nn.Parameter(torch.empty(out_features))
self.register_buffer('bias_epsilon', torch.empty(out_features))
self.reset_parameters()
self.sample_noise()
def reset_parameters(self):
mu_range = 1.0 / math.sqrt(self.in_features)
self.weight_mu.data.uniform_(-mu_range, mu_range)
self.weight_sigma.data.fill_(self.std_init / math.sqrt(self.in_features))
self.bias_mu.data.uniform_(-mu_range, mu_range)
self.bias_sigma.data.fill_(self.std_init / math.sqrt(self.out_features))
def _scale_noise(self, size):
x = torch.randn(size)
return x.sign().mul_(x.abs().sqrt_())
def sample_noise(self):
epsilon_in = self._scale_noise(self.in_features)
epsilon_out = self._scale_noise(self.out_features)
self.weight_epsilon.copy_(epsilon_out.ger(epsilon_in))
self.bias_epsilon.copy_(epsilon_out)
def forward(self, inp):
if self.training:
return F.linear(inp, self.weight_mu + self.weight_sigma * self.weight_epsilon, self.bias_mu + self.bias_sigma * self.bias_epsilon)
else:
return F.linear(inp, self.weight_mu, self.bias_mu)
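
# Quick sanity check for NoisyLinear (a small addition, not used by the agent below):
# in train mode the layer uses weight_mu + weight_sigma * weight_epsilon, so resampling
# the noise changes the output; in eval mode it falls back to the deterministic mu weights.
_noisy_layer = NoisyLinear(4, 2)
_x = torch.randn(1, 4)
_noisy_layer.train()
_y1 = _noisy_layer(_x)
_noisy_layer.sample_noise()
_y2 = _noisy_layer(_x)
_noisy_layer.eval()
print("train outputs differ after resampling:", not torch.allclose(_y1, _y2))
print("eval output is deterministic:", torch.allclose(_noisy_layer(_x), _noisy_layer(_x)))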
###Output
_____no_output_____
###Markdown
Model
###Code
example_input = '-= Pantry =- \
You\'ve just sauntered into a pantry. You try to gain information on your surroundings by using a technique you call "looking."\
You see a shelf. The shelf is wooden. On the shelf you can make out a black pepper and an orange bell pepper. I mean, just wow! Isn\'t TextWorld just the best?\
There is an open frosted-glass door leading south.\
You are carrying nothing.\
Recipe #1\
---------\
Gather all following ingredients and follow the directions to prepare this tasty meal.\
Ingredients:\
black pepper\
red apple\
water\
Directions:\
prepare meal\
________ ________ __ __ ________ \
| \| \| \ | \| \ \
\$$$$$$$$| $$$$$$$$| $$ | $$ \$$$$$$$$ \
| $$ | $$__ \$$\/ $$ | $$ \
| $$ | $$ \ >$$ $$ | $$ \
| $$ | $$$$$ / $$$$\ | $$ \
| $$ | $$_____ | $$ \$$\ | $$ \
| $$ | $$ \| $$ | $$ | $$ \
\$$ \$$$$$$$$ \$$ \$$ \$$ \
__ __ ______ _______ __ _______ \
| \ _ | \ / \ | \ | \ | \ \
| $$ / \ | $$| $$$$$$\| $$$$$$$\| $$ | $$$$$$$\ \
| $$/ $\| $$| $$ | $$| $$__| $$| $$ | $$ | $$\
| $$ $$$\ $$| $$ | $$| $$ $$| $$ | $$ | $$\
| $$ $$\$$\$$| $$ | $$| $$$$$$$\| $$ | $$ | $$\
| $$$$ \$$$$| $$__/ $$| $$ | $$| $$_____ | $$__/ $$\
| $$$ \$$$ \$$ $$| $$ | $$| $$ \| $$ $$\
\$$ \$$ \$$$$$$ \$$ \$$ \$$$$$$$$ \$$$$$$$ \
You are hungry! Let\'s cook a delicious meal. Check the cookbook in the kitchen for the recipe. Once done, enjoy your meal!'
def preproc_example(s):
s = s.replace('$', '')
s = s.replace('#', '')
s = s.replace('\n', ' ')
s = s.replace(' ', ' ')
s = s.replace('_', '')
s = s.replace('|', '')
s = s.replace('\\', '')
s = s.replace('/', '')
s = s.replace('-', '')
s = s.replace('=', '')
return s
preproc_example(example_input)
# example_input.replace('$', '')
def convert_examples_to_features(sequences, tokenizer):
"""Loads a data file into a list of `InputFeature`s."""
batch_tokens = []
batch_input_ids = []
batch_input_masks = []
for example in sequences:
_example = preproc_example(example)
# print(_example)
tokens = tokenizer.tokenize(_example)
if len(tokens) > 512:
tokens = tokens[:512]
batch_tokens.append(tokens)
del _example
del tokens
max_length = max([len(x) for x in batch_tokens])
# print('bert_max_seqence', max_length)
for tokens in batch_tokens:
input_ids = tokenizer.convert_tokens_to_ids(tokens)
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
input_mask = [1] * len(input_ids)
# Zero-pad up to the sequence length.
while len(input_ids) < max_length:
input_ids.append(0)
input_mask.append(0)
batch_input_ids.append(input_ids)
batch_input_masks.append(input_mask)
del input_ids
del input_mask
return batch_tokens, batch_input_ids, batch_input_masks
def freeze_layer(layer):
for param in layer.parameters():
param.requires_grad = False
mlogger = logging.getLogger(__name__)
class Bert_DQN(torch.nn.Module):
model_name = 'bert_dqn'
def __init__(self, model_config, word_vocab, generate_length=5, enable_cuda=False):
super(Bert_DQN, self).__init__()
self.model_config = model_config
self.enable_cuda = enable_cuda
self.word_vocab_size = len(word_vocab)
self.id2word = word_vocab
self.generate_length = generate_length
self.read_config()
# print(enable_cuda)
self.device = torch.device("cuda" if enable_cuda else "cpu")
self.tokenizer = BertTokenizer.from_pretrained(self.bert_model, do_lower_case=True)
self._def_layers()
self.init_weights()
self.print_parameters()
def print_parameters(self):
amount = 0
for p in self.parameters():
amount += np.prod(p.size())
print("total number of parameters: %s" % (amount))
parameters = filter(lambda p: p.requires_grad, self.parameters())
amount = 0
for p in parameters:
amount += np.prod(p.size())
print("number of trainable parameters: %s" % (amount))
def read_config(self):
# model config
# self.embedding_size = self.model_config['embedding_size']
# self.encoder_rnn_hidden_size = self.model_config['encoder_rnn_hidden_size']
# self.action_scorer_hidden_dim = self.model_config['action_scorer_hidden_dim']
# self.dropout_between_rnn_layers = self.model_config['dropout_between_rnn_layers']
self.bert_model = self.model_config['bert_model']
self.layer_index = self.model_config['layer_index']
self.action_scorer_hidden_dim = self.model_config['action_scorer_hidden_dim']
self.train_bert = self.model_config['train_bert']
def _def_layers(self):
# word embeddings
# self.word_embedding = Embedding(embedding_size=self.embedding_size,
# vocab_size=self.word_vocab_size,
# enable_cuda=self.enable_cuda)
# # lstm encoder
# self.encoder = FastUniLSTM(ninp=self.embedding_size,
# nhids=self.encoder_rnn_hidden_size,
# dropout_between_rnn_layers=self.dropout_
self.encoder = BertModel.from_pretrained(self.bert_model).to(self.device)
if not self.train_bert:
freeze_layer(self.encoder)
        # hidden size is 768 for BERT-base models; BERT-large models use 1024
bert_embeddings = 768
self.action_scorer_shared = torch.nn.Linear(bert_embeddings, self.action_scorer_hidden_dim)
action_scorers = []
for _ in range(self.generate_length):
action_scorers.append( NoisyLinear(self.action_scorer_hidden_dim,
self.word_vocab_size,
std_init=self.model_config['noisy_std']))
self.action_scorers = torch.nn.ModuleList(action_scorers)
self.fake_recurrent_mask = None
def init_weights(self):
torch.nn.init.xavier_uniform_(self.action_scorer_shared.weight.data)
for i in range(len(self.action_scorers)):
self.action_scorers[i].sample_noise()
self.action_scorer_shared.bias.data.fill_(0)
def representation_generator(self, ids, mask):
ids = ids.to(self.device)
mask = mask.to(self.device)
layers, _ = self.encoder(ids, attention_mask=mask)
# encoding_sequence = layers[self.layer_index]
# print('layer length: ', len(layers))
encoding_sequence = layers[-2].type(torch.FloatTensor)
encoding_sequence = encoding_sequence.to(self.device)
# print('encoding_sequence: ', type(encoding_sequence))
# print('encoding_sequence: ', encoding_sequence)
mask = mask.type(torch.FloatTensor).to(self.device)
# print('mask: ', type(mask))
# print('mask: ', mask)
# embeddings, mask = self.word_embedding.forward(_input_words) # batch x time x emb
# encoding_sequence, _, _ = self.encoder.forward(embeddings, mask) # batch x time x h
res_mean = masked_mean(encoding_sequence, mask) # batch x h
del layers
del encoding_sequence
del mask
return res_mean
def action_scorer(self, state_representation):
hidden = self.action_scorer_shared.forward(state_representation) # batch x hid
hidden = F.relu(hidden) # batch x hid
action_ranks = []
for i in range(len(self.action_scorers)):
action_ranks.append(self.action_scorers[i].forward(hidden)) # batch x n_vocab
del hidden
return action_ranks
###Output
_____no_output_____
###Markdown
Agent
###Code
# a snapshot of state to be stored in replay memory
Transition = namedtuple('Transition', ('bert_ids', 'bert_masks',
'word_indices',
'reward', 'mask', 'done',
'next_bert_ids', 'next_bert_masks',
'next_word_masks'))
class HistoryScoreCache(object):
def __init__(self, capacity=1):
self.capacity = capacity
self.reset()
def push(self, stuff):
"""stuff is float."""
if len(self.memory) < self.capacity:
self.memory.append(stuff)
else:
self.memory = self.memory[1:] + [stuff]
def get_avg(self):
return np.mean(np.array(self.memory))
def reset(self):
self.memory = []
def __len__(self):
return len(self.memory)
class PrioritizedReplayMemory(object):
def __init__(self, capacity=100000, priority_fraction=0.0):
# prioritized replay memory
self.priority_fraction = priority_fraction
self.alpha_capacity = int(capacity * priority_fraction)
self.beta_capacity = capacity - self.alpha_capacity
self.alpha_memory, self.beta_memory = [], []
self.alpha_position, self.beta_position = 0, 0
def push(self, is_prior=False, *args):
"""Saves a transition."""
if self.priority_fraction == 0.0:
is_prior = False
if is_prior:
if len(self.alpha_memory) < self.alpha_capacity:
self.alpha_memory.append(None)
self.alpha_memory[self.alpha_position] = Transition(*args)
self.alpha_position = (self.alpha_position + 1) % self.alpha_capacity
else:
if len(self.beta_memory) < self.beta_capacity:
self.beta_memory.append(None)
self.beta_memory[self.beta_position] = Transition(*args)
self.beta_position = (self.beta_position + 1) % self.beta_capacity
def sample(self, batch_size):
if self.priority_fraction == 0.0:
from_beta = min(batch_size, len(self.beta_memory))
res = random.sample(self.beta_memory, from_beta)
else:
from_alpha = min(int(self.priority_fraction * batch_size), len(self.alpha_memory))
from_beta = min(batch_size - int(self.priority_fraction * batch_size), len(self.beta_memory))
res = random.sample(self.alpha_memory, from_alpha) + random.sample(self.beta_memory, from_beta)
random.shuffle(res)
return res
def __len__(self):
return len(self.alpha_memory) + len(self.beta_memory)
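
# Usage sketch (comments only): the agent below stores one Transition per game step with
#     replay_memory.push(is_prior, *(bert_ids, bert_masks, chosen_indices, reward, mask,
#                                    done, next_bert_ids, next_bert_masks, next_word_masks)),
# routing positive-reward transitions to the prioritized "alpha" pool when is_prior is True;
# a mixed batch of Transitions can then be drawn with replay_memory.sample(batch_size).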
class CustomAgent:
def __init__(self):
"""
Arguments:
word_vocab: List of words supported.
"""
self.mode = "train"
with open("./vocab.txt") as f:
self.word_vocab = f.read().split("\n")
with open("config.yaml") as reader:
self.config = yaml.safe_load(reader)
self.word2id = {}
for i, w in enumerate(self.word_vocab):
self.word2id[w] = i
self.EOS_id = self.word2id["</S>"]
self.batch_size = self.config['training']['batch_size']
self.max_nb_steps_per_episode = self.config['training']['max_nb_steps_per_episode']
self.nb_epochs = self.config['training']['nb_epochs']
# Set the random seed manually for reproducibility.
np.random.seed(self.config['general']['random_seed'])
torch.manual_seed(self.config['general']['random_seed'])
if torch.cuda.is_available():
if not self.config['general']['use_cuda']:
print("WARNING: CUDA device detected but 'use_cuda: false' found in config.yaml")
self.use_cuda = False
else:
torch.backends.cudnn.deterministic = True
torch.cuda.manual_seed(self.config['general']['random_seed'])
self.use_cuda = True
else:
self.use_cuda = False
self.model = Bert_DQN(model_config=self.config["model"],
word_vocab=self.word_vocab,
enable_cuda=self.use_cuda)
self.experiment_tag = self.config['checkpoint']['experiment_tag']
self.model_checkpoint_path = self.config['checkpoint']['model_checkpoint_path']
self.save_frequency = self.config['checkpoint']['save_frequency']
if self.config['checkpoint']['load_pretrained']:
self.load_pretrained_model(self.model_checkpoint_path + '/' + self.config['checkpoint']['pretrained_experiment_tag'] + '.pt')
if self.use_cuda:
self.model.cuda()
self.replay_batch_size = self.config['general']['replay_batch_size']
self.replay_memory = PrioritizedReplayMemory(self.config['general']['replay_memory_capacity'],
priority_fraction=self.config['general']['replay_memory_priority_fraction'])
# optimizer
parameters = filter(lambda p: p.requires_grad, self.model.parameters())
self.optimizer = torch.optim.Adam(parameters, lr=self.config['training']['optimizer']['learning_rate'])
# epsilon greedy
self.epsilon_anneal_episodes = self.config['general']['epsilon_anneal_episodes']
self.epsilon_anneal_from = self.config['general']['epsilon_anneal_from']
self.epsilon_anneal_to = self.config['general']['epsilon_anneal_to']
self.epsilon = self.epsilon_anneal_from
self.update_per_k_game_steps = self.config['general']['update_per_k_game_steps']
self.clip_grad_norm = self.config['training']['optimizer']['clip_grad_norm']
self.nlp = spacy.load('en', disable=['ner', 'parser', 'tagger'])
self.preposition_map = {"take": "from",
"chop": "with",
"slice": "with",
"dice": "with",
"cook": "with",
"insert": "into",
"put": "on"}
self.single_word_verbs = set(["inventory", "look"])
self.discount_gamma = self.config['general']['discount_gamma']
self.current_episode = 0
self.current_step = 0
self._epsiode_has_started = False
self.history_avg_scores = HistoryScoreCache(capacity=1000)
self.best_avg_score_so_far = 0.0
self.loss = []
self.state = ''
def train(self, imitate=False):
"""
        Tell the agent that it's the training phase.
"""
self.mode = "train"
self.imitate = imitate
self.wt_index = 0
# print(self.wt_index)
self.model.train()
def eval(self):
"""
        Tell the agent that it's the evaluation phase.
"""
self.mode = "eval"
self.model.eval()
def _start_episode(self, obs: List[str], infos: Dict[str, List[Any]]) -> None:
"""
Prepare the agent for the upcoming episode.
Arguments:
obs: Initial feedback for each game.
infos: Additional information for each game.
"""
self.init(obs, infos)
self._epsiode_has_started = True
def _end_episode(self, obs: List[str], scores: List[int], infos: Dict[str, List[Any]]) -> None:
"""
Tell the agent the episode has terminated.
Arguments:
obs: Previous command's feedback for each game.
score: The score obtained so far for each game.
infos: Additional information for each game.
"""
self.finish()
self._epsiode_has_started = False
def load_pretrained_model(self, load_from):
"""
Load pretrained checkpoint from file.
Arguments:
load_from: File name of the pretrained model checkpoint.
"""
print("loading model from %s\n" % (load_from))
try:
if self.use_cuda:
state_dict = torch.load(load_from)
else:
state_dict = torch.load(load_from, map_location='cpu')
self.model.load_state_dict(state_dict)
except:
print("Failed to load checkpoint...")
def select_additional_infos(self) -> EnvInfos:
"""
Returns what additional information should be made available at each game step.
Requested information will be included within the `infos` dictionary
passed to `CustomAgent.act()`. To request specific information, create a
:py:class:`textworld.EnvInfos <textworld.envs.wrappers.filter.EnvInfos>`
and set the appropriate attributes to `True`. The possible choices are:
* `description`: text description of the current room, i.e. output of the `look` command;
* `inventory`: text listing of the player's inventory, i.e. output of the `inventory` command;
* `max_score`: maximum reachable score of the game;
* `objective`: objective of the game described in text;
* `entities`: names of all entities in the game;
        * `verbs`: verbs understood by the game;
        * `command_templates`: templates for commands understood by the game;
* `admissible_commands`: all commands relevant to the current state;
In addition to the standard information, game specific information
can be requested by appending corresponding strings to the `extras`
attribute. For this competition, the possible extras are:
* `'recipe'`: description of the cookbook;
* `'walkthrough'`: one possible solution to the game (not guaranteed to be optimal);
Example:
Here is an example of how to request information and retrieve it.
>>> from textworld import EnvInfos
>>> request_infos = EnvInfos(description=True, inventory=True, extras=["recipe"])
...
>>> env = gym.make(env_id)
>>> ob, infos = env.reset()
>>> print(infos["description"])
>>> print(infos["inventory"])
>>> print(infos["extra.recipe"])
Notes:
The following information *won't* be available at test time:
* 'walkthrough'
"""
request_infos = EnvInfos()
request_infos.description = True
request_infos.inventory = True
request_infos.entities = True
request_infos.verbs = True
request_infos.max_score = True
request_infos.has_won = True
request_infos.has_lost = True
request_infos.extras = ["recipe", "walkthrough"]
return request_infos
def init(self, obs: List[str], infos: Dict[str, List[Any]]):
"""
Prepare the agent for the upcoming games.
Arguments:
obs: Previous command's feedback for each game.
infos: Additional information for each game.
"""
# reset agent, get vocabulary masks for verbs / adjectives / nouns
self.scores = []
self.dones = []
self.prev_actions = ["" for _ in range(len(obs))]
# get word masks
batch_size = len(infos["verbs"])
verbs_word_list = infos["verbs"]
noun_word_list, adj_word_list = [], []
for entities in infos["entities"]:
tmp_nouns, tmp_adjs = [], []
for name in entities:
split = name.split()
tmp_nouns.append(split[-1])
if len(split) > 1:
tmp_adjs += split[:-1]
noun_word_list.append(list(set(tmp_nouns)))
adj_word_list.append(list(set(tmp_adjs)))
verb_mask = np.zeros((batch_size, len(self.word_vocab)), dtype="float32")
noun_mask = np.zeros((batch_size, len(self.word_vocab)), dtype="float32")
adj_mask = np.zeros((batch_size, len(self.word_vocab)), dtype="float32")
for i in range(batch_size):
for w in verbs_word_list[i]:
if w in self.word2id:
verb_mask[i][self.word2id[w]] = 1.0
for w in noun_word_list[i]:
if w in self.word2id:
noun_mask[i][self.word2id[w]] = 1.0
for w in adj_word_list[i]:
if w in self.word2id:
adj_mask[i][self.word2id[w]] = 1.0
second_noun_mask = copy.copy(noun_mask)
second_adj_mask = copy.copy(adj_mask)
second_noun_mask[:, self.EOS_id] = 1.0
adj_mask[:, self.EOS_id] = 1.0
second_adj_mask[:, self.EOS_id] = 1.0
self.word_masks_np = [verb_mask, adj_mask, noun_mask, second_adj_mask, second_noun_mask]
self.cache_chosen_indices = None
self.current_step = 0
def get_game_step_info(self, obs: List[str], infos: Dict[str, List[Any]]):
"""
        Get all the available information and concatenate it into a single tensor for
        a neural model. We use post-padding here; all information is tokenized here.
Arguments:
obs: Previous command's feedback for each game.
infos: Additional information for each game.
"""
# print('inventory: ', len(infos['inventory']))
# print('obs: ', len(obs))
# print('recipees: ', len(infos['extra.recipe']))
# print('descriptions: ', len(infos['description']))
#-----------------------------------------------------------------------------------------------
# inventory_token_list = [preproc(item, tokenizer=self.nlp) for item in infos["inventory"]]
# inventory_id_list = [_words_to_ids(tokens, self.word2id) for tokens in inventory_token_list]
# feedback_token_list = [preproc(item, str_type='feedback', tokenizer=self.nlp) for item in obs]
# feedback_id_list = [_words_to_ids(tokens, self.word2id) for tokens in feedback_token_list]
# quest_token_list = [preproc(item, tokenizer=self.nlp) for item in infos["extra.recipe"]]
# quest_id_list = [_words_to_ids(tokens, self.word2id) for tokens in quest_token_list]
# prev_action_token_list = [preproc(item, tokenizer=self.nlp) for item in self.prev_actions]
# prev_action_id_list = [_words_to_ids(tokens, self.word2id) for tokens in prev_action_token_list]
# description_token_list = [preproc(item, tokenizer=self.nlp) for item in infos["description"]]
# for i, d in enumerate(description_token_list):
# if len(d) == 0:
# description_token_list[i] = ["end"] # if empty description, insert word "end"
# description_id_list = [_words_to_ids(tokens, self.word2id) for tokens in description_token_list]
# description_id_list = [_d + _i + _q + _f + _pa for (_d, _i, _q, _f, _pa) in zip(description_id_list, inventory_id_list, quest_id_list, feedback_id_list, prev_action_id_list)]
# input_description = pad_sequences(description_id_list, maxlen=max_len(description_id_list)).astype('int32')
# input_description = to_pt(input_description, self.use_cuda)
#-----------------------------------------------------------------------------------------------
sep = ' [SEP] '
description_text_list = [_d + sep + _i + sep + _q + sep + _f + sep + _pa for (_d, _i, _q, _f, _pa)
in zip(infos['description'], infos['inventory'], infos['extra.recipe'], obs, self.prev_actions)]
_, bert_ids, bert_mask = convert_examples_to_features(description_text_list, self.model.tokenizer)
# del inventory_token_list
# del inventory_id_list
# del feedback_token_list
# del feedback_id_list
# del quest_token_list
# del quest_id_list
# del prev_action_token_list
# del prev_action_id_list
# del description_token_list
# del description_id_list
del description_text_list
return bert_ids, bert_mask
def word_ids_to_commands(self, verb, adj, noun, adj_2, noun_2):
"""
Turn the 5 indices into actual command strings.
Arguments:
verb: Index of the guessing verb in vocabulary
adj: Index of the guessing adjective in vocabulary
noun: Index of the guessing noun in vocabulary
adj_2: Index of the second guessing adjective in vocabulary
noun_2: Index of the second guessing noun in vocabulary
"""
# turns 5 indices into actual command strings
if self.word_vocab[verb] in self.single_word_verbs:
return self.word_vocab[verb]
if adj == self.EOS_id:
res = self.word_vocab[verb] + " " + self.word_vocab[noun]
else:
res = self.word_vocab[verb] + " " + self.word_vocab[adj] + " " + self.word_vocab[noun]
if self.word_vocab[verb] not in self.preposition_map:
return res
if noun_2 == self.EOS_id:
return res
prep = self.preposition_map[self.word_vocab[verb]]
if adj_2 == self.EOS_id:
res = res + " " + prep + " " + self.word_vocab[noun_2]
else:
res = res + " " + prep + " " + self.word_vocab[adj_2] + " " + self.word_vocab[noun_2]
return res
def get_wordid_from_vocab(self, word):
if word in self.word2id.keys():
return self.word2id[word]
else:
return self.EOS_id
def command_to_word_ids(self, cmd, batch_size):
verb_id=self.EOS_id
first_adj=self.EOS_id
first_noun=self.EOS_id
second_adj=self.EOS_id
second_noun=self.EOS_id
# print('cmd_to_ids')
# print(cmd.split())
ids = _words_to_ids(cmd.split(), self.word2id)
# print(ids)
for ind, i in enumerate(ids):
if self.word_masks_np[0][0][i]==1.0:
verb = ind
verb_id = i
nouns=[]
for ind, i in enumerate(ids):
if self.word_masks_np[2][0][i]==1.0:
nouns.append((ind,i))
if len(nouns) > 0:
if nouns[0][0] != verb - 1:
adj_ids = ids[verb + 1: nouns[0][0]]
adj=''
adj= ' '.join([self.word_vocab[x] for x in adj_ids])
# print(adj)
first_adj=self.get_wordid_from_vocab(adj)
# print(nouns)
first_noun=nouns[0][1]
if len(nouns) > 1:
if nouns[1][0] != nouns[0][0] - 1:
adj_ids = ids[nouns[0][0]: nouns[1][0]]
adj= ' '.join([self.word_vocab[x] for x in adj_ids])
second_adj=self.get_wordid_from_vocab(adj)
second_noun=nouns[1][1]
list_ids = [verb_id, first_adj, first_noun, second_adj, second_noun]
return [to_pt(np.array([[x]]*batch_size), self.use_cuda) for x in list_ids]
def get_chosen_strings(self, chosen_indices):
"""
Turns list of word indices into actual command strings.
Arguments:
chosen_indices: Word indices chosen by model.
"""
chosen_indices_np = [to_np(item)[:, 0] for item in chosen_indices]
res_str = []
batch_size = chosen_indices_np[0].shape[0]
for i in range(batch_size):
verb, adj, noun, adj_2, noun_2 = chosen_indices_np[0][i],\
chosen_indices_np[1][i],\
chosen_indices_np[2][i],\
chosen_indices_np[3][i],\
chosen_indices_np[4][i]
res_str.append(self.word_ids_to_commands(verb, adj, noun, adj_2, noun_2))
del verb
del adj
del noun
del adj_2
del noun_2
del chosen_indices_np
return res_str
def choose_random_command(self, word_ranks, word_masks_np):
"""
Generate a command randomly, for epsilon greedy.
Arguments:
word_ranks: Q values for each word by model.action_scorer.
word_masks_np: Vocabulary masks for words depending on their type (verb, adj, noun).
"""
batch_size = word_ranks[0].size(0)
word_ranks_np = [to_np(item) for item in word_ranks] # list of batch x n_vocab
word_ranks_np = [r * m for r, m in zip(word_ranks_np, word_masks_np)] # list of batch x n_vocab
word_indices = []
for i in range(len(word_ranks_np)):
indices = []
for j in range(batch_size):
msk = word_masks_np[i][j] # vocab
indices.append(np.random.choice(len(msk), p=msk / np.sum(msk, -1)))
del msk
word_indices.append(np.array(indices))
del indices
# word_indices: list of batch
word_qvalues = [[] for _ in word_masks_np]
for i in range(batch_size):
for j in range(len(word_qvalues)):
word_qvalues[j].append(word_ranks[j][i][word_indices[j][i]])
word_qvalues = [torch.stack(item) for item in word_qvalues]
word_indices = [to_pt(item, self.use_cuda) for item in word_indices]
word_indices = [item.unsqueeze(-1) for item in word_indices] # list of batch x 1
del word_ranks_np
return word_qvalues, word_indices
# def choose_random_command(self, word_ranks, word_masks_np):
# batch_size = word_ranks[0].size(0)
# word_ranks_np = [to_np(item) for item in word_ranks] # list of batch x n_vocab
# word_ranks_np = [r - np.min(r) for r in word_ranks_np] # minus the min value, so that all values are non-negative
# kinda_epsilon = 0.1
# random_ranks = np.random.normal(0, kinda_epsilon, word_ranks_np[0].shape)
# word_ranks_np = [r + random_ranks for r in word_ranks_np] # add noise
# word_ranks_np = [r * m for r, m in zip(word_ranks_np, word_masks_np)] # list of batch x n_vocab
# word_indices = [np.argmax(item, -1) for item in word_ranks_np] # list of batch
# word_qvalues = [[] for _ in word_masks_np]
# for i in range(batch_size):
# for j in range(len(word_qvalues)):
# word_qvalues[j].append(word_ranks[j][i][word_indices[j][i]])
# word_qvalues = [torch.stack(item) for item in word_qvalues]
# word_indices = [to_pt(item, self.use_cuda) for item in word_indices]
# word_indices = [item.unsqueeze(-1) for item in word_indices] # list of batch x 1
# return word_qvalues, word_indices
def choose_maxQ_command(self, word_ranks, word_masks_np):
"""
Generate a command by maximum q values, for epsilon greedy.
Arguments:
word_ranks: Q values for each word by model.action_scorer.
word_masks_np: Vocabulary masks for words depending on their type (verb, adj, noun).
"""
batch_size = word_ranks[0].size(0)
word_ranks_np = [to_np(item) for item in word_ranks] # list of batch x n_vocab
word_ranks_np = [r - np.min(r) for r in word_ranks_np] # minus the min value, so that all values are non-negative
word_ranks_np = [r * m for r, m in zip(word_ranks_np, word_masks_np)] # list of batch x n_vocab
word_indices = [np.argmax(item, -1) for item in word_ranks_np] # list of batch
word_qvalues = [[] for _ in word_masks_np]
for i in range(batch_size):
for j in range(len(word_qvalues)):
word_qvalues[j].append(word_ranks[j][i][word_indices[j][i]])
word_qvalues = [torch.stack(item) for item in word_qvalues]
word_indices = [to_pt(item, self.use_cuda) for item in word_indices]
word_indices = [item.unsqueeze(-1) for item in word_indices] # list of batch x 1
del word_ranks_np
return word_qvalues, word_indices
# torch.Size([16, 227])
# torch.Size([16, 227])
# torch.Size([16, 768])
# 5
# torch.Size([16, 20200])
# 16
# 5
# torch.Size([1, 20200])
def get_ranks(self, bert_ids, bert_masks):
"""
Given input description tensor, call model forward, to get Q values of words.
Arguments:
input_description: Input tensors, which include all the information chosen in
select_additional_infos() concatenated together.
"""
# word_ranks_arr = []
# for x in range(len(bert_ids)):
# bert_ids_single = torch.tensor([bert_ids[x]], dtype=torch.long)
# bert_masks_single = torch.tensor([bert_masks[x]], dtype=torch.long)
# state_representation_single = self.model.representation_generator(bert_ids_single, bert_masks_single)
# del bert_ids_single
# del bert_masks_single
# word_ranks_arr.append(self.model.action_scorer(state_representation_single))
# # print(len(word_ranks_arr))
# # print(len(word_ranks_arr[0]))
# # print(word_ranks_arr[0][0].shape)
# del state_representation_single
# word_ranks = word_ranks_arr[0]
# for x in range(len(word_ranks_arr) - 1):
# for y in range(len(word_ranks_arr[x + 1])):
# word_ranks[y] = torch.cat((word_ranks[y], word_ranks_arr[x + 1][y]), dim=0)
# del word_ranks_arr
bert_ids = torch.tensor([x for x in bert_ids], dtype=torch.long)
bert_masks = torch.tensor([x for x in bert_masks], dtype=torch.long)
# print(bert_ids.shape)
# print(bert_masks.shape)
state_representation = self.model.representation_generator(bert_ids, bert_masks)
# print(state_representation.shape)
del bert_ids
del bert_masks
word_ranks = self.model.action_scorer(state_representation) # each element in list has batch x n_vocab size
# print(len(word_ranks))
# print(word_ranks[0].shape)
del state_representation
return word_ranks
def act_eval(self, obs: List[str], scores: List[int], dones: List[bool], infos: Dict[str, List[Any]]) -> List[str]:
"""
Acts upon the current list of observations, during evaluation.
One text command must be returned for each observation.
Arguments:
obs: Previous command's feedback for each game.
score: The score obtained so far for each game (at previous step).
done: Whether a game is finished (at previous step).
infos: Additional information for each game.
Returns:
Text commands to be performed (one per observation).
Notes:
Commands returned for games marked as `done` have no effect.
            The states for finished games are simply copied over until all
games are done, in which case `CustomAgent.finish()` is called
instead.
"""
if self.current_step > 0:
# append scores / dones from previous step into memory
self.scores.append(scores)
self.dones.append(dones)
if all(dones):
self._end_episode(obs, scores, infos)
return # Nothing to return.
bert_ids, bert_masks = self.get_game_step_info(obs, infos)
word_ranks = self.get_ranks(bert_ids, bert_masks) # list of batch x vocab
del bert_ids
del bert_masks
_, word_indices_maxq = self.choose_maxQ_command(word_ranks, self.word_masks_np)
chosen_indices = word_indices_maxq
chosen_indices = [item.detach() for item in chosen_indices]
chosen_strings = self.get_chosen_strings(chosen_indices)
self.prev_actions = chosen_strings
self.current_step += 1
return chosen_strings
def act(self, obs: List[str], scores: List[int], dones: List[bool], infos: Dict[str, List[Any]]) -> List[str]:
"""
Acts upon the current list of observations.
One text command must be returned for each observation.
Arguments:
obs: Previous command's feedback for each game.
score: The score obtained so far for each game (at previous step).
done: Whether a game is finished (at previous step).
infos: Additional information for each game.
Returns:
Text commands to be performed (one per observation).
Notes:
Commands returned for games marked as `done` have no effect.
            The states for finished games are simply copied over until all
games are done, in which case `CustomAgent.finish()` is called
instead.
"""
if not self._epsiode_has_started:
self._start_episode(obs, infos)
if self.mode == "eval":
return self.act_eval(obs, scores, dones, infos)
if self.current_step > 0:
# append scores / dones from previous step into memory
self.scores.append(scores)
self.dones.append(dones)
# compute previous step's rewards and masks
rewards_np, rewards, mask_np, mask = self.compute_reward()
# Sample for noisy nets
for i in range(len(self.model.action_scorers)):
self.model.action_scorers[i].sample_noise()
bert_ids, bert_masks = self.get_game_step_info(obs, infos)
# generate commands for one game step, epsilon greedy is applied, i.e.,
# there is epsilon of chance to generate random commands
if self.imitate:
# print('imitate')
correct_cmd=infos['extra.walkthrough'][0][self.wt_index]
# print(correct_cmd)
if self.wt_index != len(infos['extra.walkthrough'][0]) - 1:
self.wt_index+=1
chosen_indices = self.command_to_word_ids(correct_cmd, len(bert_ids))
else:
word_ranks = self.get_ranks(bert_ids, bert_masks) # list of batch x vocab
_, word_indices_maxq = self.choose_maxQ_command(word_ranks, self.word_masks_np)
_, word_indices_random = self.choose_random_command(word_ranks, self.word_masks_np)
            # random number for epsilon-greedy update
rand_num = np.random.uniform(low=0.0, high=1.0, size=(len(bert_ids), 1))
less_than_epsilon = (rand_num < self.epsilon).astype("float32") # batch
greater_than_epsilon = 1.0 - less_than_epsilon
less_than_epsilon = to_pt(less_than_epsilon, self.use_cuda, type='float')
greater_than_epsilon = to_pt(greater_than_epsilon, self.use_cuda, type='float')
less_than_epsilon, greater_than_epsilon = less_than_epsilon.long(), greater_than_epsilon.long()
# print('Random_step: ',less_than_epsilon.tolist())
chosen_indices = [
less_than_epsilon * idx_random + greater_than_epsilon * idx_maxq
for idx_random, idx_maxq in zip(word_indices_random, word_indices_maxq)
]
chosen_indices = [item.detach() for item in chosen_indices]
chosen_strings = self.get_chosen_strings(chosen_indices)
self.prev_actions = chosen_strings
# push info from previous game step into replay memory
if self.current_step > 0:
for b in range(len(obs)):
if mask_np[b] == 0:
continue
is_prior = rewards_np[b] > 0.0
self.replay_memory.push(is_prior,*(self.cache_bert_ids[b],
self.cache_bert_masks[b],
[item[b] for item in self.cache_chosen_indices],
rewards[b],
mask[b],
dones[b],
bert_ids[b],
bert_masks[b],
[item[b] for item in self.word_masks_np]))
# cache new info in current game step into caches
self.cache_chosen_indices = chosen_indices
self.cache_bert_ids = bert_ids
self.cache_bert_masks = bert_masks
# update neural model by replaying snapshots in replay memory
#fix update
if self.current_step > 0 and self.current_step % self.update_per_k_game_steps == 0:
loss = self.update()
if loss is not None:
# Backpropagate
self.loss.append(to_np(loss).mean())
self.optimizer.zero_grad()
loss.backward(retain_graph=True)
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
torch.nn.utils.clip_grad_norm_(self.model.parameters(), self.clip_grad_norm)
self.optimizer.step() # apply gradients
self.current_step += 1
if all(dones):
self._end_episode(obs, scores, infos)
return # Nothing to return.
return chosen_strings
def compute_reward(self):
"""
Compute rewards by agent. Note this is different from what the training/evaluation
scripts do. Agent keeps track of scores and other game information for training purpose.
"""
# mask = 1 if game is not finished or just finished at current step
if len(self.dones) == 1:
# it's not possible to finish a game at 0th step
mask = [1.0 for _ in self.dones[-1]]
else:
assert len(self.dones) > 1
mask = [1.0 if not self.dones[-2][i] else 0.0 for i in range(len(self.dones[-1]))]
mask = np.array(mask, dtype='float32')
mask_pt = to_pt(mask, self.use_cuda, type='float')
        # rewards returned by the game engine are always the accumulated value the
        # agent has received, so the reward it gets in the current game step
        # is the new value minus the value at the previous step.
rewards = np.array(self.scores[-1], dtype='float32') # batch
if len(self.scores) > 1:
prev_rewards = np.array(self.scores[-2], dtype='float32')
rewards = rewards - prev_rewards
rewards_pt = to_pt(rewards, self.use_cuda, type='float')
return rewards, rewards_pt, mask, mask_pt
def update(self):
"""
Update neural model in agent. In this example we follow algorithm
of updating model in dqn with replay memory.
"""
if len(self.replay_memory) < self.replay_batch_size:
return None
transitions = self.replay_memory.sample(self.replay_batch_size)
batch = Transition(*zip(*transitions))
del transitions
#pyt bert_ids and bert_masks
# observation_id_list = pad_sequences(batch.observation_id_list, maxlen=max_len(batch.observation_id_list)).astype('int32')
# print(observation_id_list)
# input_observation = to_pt(observation_id_list, self.use_cuda)
# bert_ids = torch.tensor(batch.bert_ids, dtype=torch.long).to(self.model.device)
# bert_masks = torch.tensor(batch.bert_masks, dtype=torch.long).to(self.model.device)
bert_ids = pad_sequences(batch.bert_ids, maxlen=max_len(batch.bert_ids)).astype('int32')
bert_masks = pad_sequences(batch.bert_masks, maxlen=max_len(batch.bert_masks)).astype('int32')
# next_observation_id_list = pad_sequences(batch.next_observation_id_list, maxlen=max_len(batch.next_observation_id_list)).astype('int32')
# next_input_observation = to_pt(next_observation_id_list, self.use_cuda)
# next_bert_ids = torch.tensor(batch.next_bert_ids, dtype=torch.long).to(self.model.device)
# next_bert_masks = torch.tensor(batch.next_bert_masks, dtype=torch.long).to(self.model.device)
next_bert_ids = pad_sequences(batch.next_bert_ids, maxlen=max_len(batch.next_bert_ids)).astype('int32')
next_bert_masks = pad_sequences(batch.next_bert_masks, maxlen=max_len(batch.next_bert_masks)).astype('int32')
chosen_indices = list(list(zip(*batch.word_indices)))
chosen_indices = [torch.stack(item, 0) for item in chosen_indices] # list of batch x 1
word_ranks = self.get_ranks(bert_ids, bert_masks) # list of batch x vocab
del bert_ids
del bert_masks
word_qvalues = [w_rank.gather(1, idx).squeeze(-1) for w_rank, idx in zip(word_ranks, chosen_indices)] # list of batch
del chosen_indices
del word_ranks
q_value = torch.mean(torch.stack(word_qvalues, -1), -1) # batch
del word_qvalues
next_word_ranks = self.get_ranks(next_bert_ids, next_bert_masks) # batch x n_verb, batch x n_noun, batchx n_second_noun
del next_bert_ids
del next_bert_masks
next_word_masks = list(list(zip(*batch.next_word_masks)))
next_word_masks = [np.stack(item, 0) for item in next_word_masks]
next_word_qvalues, _ = self.choose_maxQ_command(next_word_ranks, next_word_masks)
del next_word_masks
del next_word_ranks
next_q_value = torch.mean(torch.stack(next_word_qvalues, -1), -1) # batch
next_q_value = next_q_value.detach()
rewards = torch.stack(batch.reward) # batch
not_done = 1.0 - np.array(batch.done, dtype='float32') # batch
not_done = to_pt(not_done, self.use_cuda, type='float')
rewards = rewards + not_done * next_q_value * self.discount_gamma # batch
del not_done
mask = torch.stack(batch.mask) # batch
loss = F.smooth_l1_loss(q_value * mask, rewards * mask)
del q_value
del mask
del rewards
del batch
return loss
def finish(self) -> None:
"""
All games in the batch are finished. One can choose to save checkpoints,
evaluate on validation set, or do parameter annealing here.
"""
# Game has finished (either win, lose, or exhausted all the given steps).
self.final_rewards = np.array(self.scores[-1], dtype='float32') # batch
dones = []
for d in self.dones:
d = np.array([float(dd) for dd in d], dtype='float32')
dones.append(d)
dones = np.array(dones)
step_used = 1.0 - dones
self.step_used_before_done = np.sum(step_used, 0) # batch
self.history_avg_scores.push(np.mean(self.final_rewards))
# save checkpoint
# print(self.mode)
# print(self.current_episode)
# print(self.save_frequency)
if self.mode == "train" and self.current_episode % self.save_frequency == 0:
avg_score = self.history_avg_scores.get_avg()
if avg_score > self.best_avg_score_so_far:
self.best_avg_score_so_far = avg_score
save_to = self.model_checkpoint_path + '/' + self.experiment_tag + "_episode_" + str(self.current_episode) + ".pt"
if not os.path.isdir(self.model_checkpoint_path):
os.mkdir(self.model_checkpoint_path)
torch.save(self.model.state_dict(), save_to)
print("\n========= saved checkpoint =========")
self.current_episode += 1
# annealing
if self.current_episode < self.epsilon_anneal_episodes:
self.epsilon -= (self.epsilon_anneal_from - self.epsilon_anneal_to) / float(self.epsilon_anneal_episodes)
def get_mean_loss(self):
mean_loss = 0.
if len(self.loss) != 0:
mean_loss = sum(self.loss) / len(self.loss)
self.loss = []
return mean_loss
###Output
_____no_output_____
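###Markdown
To illustrate the per-sample epsilon-greedy mixing used in `act()` above (the `less_than_epsilon * idx_random + greater_than_epsilon * idx_maxq` line), here is a small standalone sketch with toy tensors; all values are made up for illustration only.
###Code
import numpy as np
import torch

# toy batch of 4 games: for each sample, pick the "random" action with probability epsilon,
# otherwise the greedy (max-Q) action -- same masking trick as in CustomAgent.act()
epsilon = 0.3
idx_random = torch.tensor([[5], [7], [2], [9]])   # made-up "random" word indices, batch x 1
idx_maxq   = torch.tensor([[1], [1], [3], [0]])   # made-up "argmax Q" word indices, batch x 1

rand_num = np.random.uniform(low=0.0, high=1.0, size=(4, 1))
less_than_epsilon = torch.tensor((rand_num < epsilon).astype("float32")).long()
greater_than_epsilon = (1 - less_than_epsilon).long()

# element-wise selection: random index where the draw fell below epsilon, greedy index otherwise
chosen = less_than_epsilon * idx_random + greater_than_epsilon * idx_maxq
print(chosen.squeeze(-1))
###Output
_____no_output_____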
###Markdown
Setup configs. Vocab: upload the vocab.txt file.
###Code
from google.colab import files
if not os.path.isfile('./vocab.txt'):
uploaded = files.upload()
# Upload vocab.txt
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(name=fn, length=len(uploaded[fn])))
else:
print("Vocab already uploaded!")
!head vocab.txt
###Output
!
"
#
$
%
&
'
'a
'd
'll
###Markdown
Config
###Code
#enough memory first try 3 games : epochs > 8
# with open('./config.yaml', 'w') as config:
# config.write("""general:
# discount_gamma: 0.5
# random_seed: 42
# use_cuda: True # disable this when running on machine without cuda
# # replay memory
# replay_memory_capacity: 100000 # adjust this depending on your RAM size
# replay_memory_priority_fraction: 0.25
# update_per_k_game_steps: 20
# replay_batch_size: 4
# # epsilon greedy
# epsilon_anneal_episodes: 300 # -1 if not annealing
# epsilon_anneal_from: 1.0
# epsilon_anneal_to: 0.2
# checkpoint:
# experiment_tag: 'starting-kit'
# model_checkpoint_path: './saved_models'
# load_pretrained: False # during test, enable this so that the agent load your pretrained model
# pretrained_experiment_tag: 'starting-kit'
# save_frequency: 100
# training:
# batch_size: 1
# nb_epochs: 100
# max_nb_steps_per_episode: 100 # after this many steps, a game is terminated
# optimizer:
# step_rule: 'adam' # adam
# learning_rate: 0.001
# clip_grad_norm: 5
# model:
# embedding_size: 32
# encoder_rnn_hidden_size: [32]
# action_scorer_hidden_dim: 16
# dropout_between_rnn_layers: 0.
# bert_model: 'bert-base-uncased'
# layer_index: 11
# """)
# second try 3 games 7-8 epochs max
# with open('./config.yaml', 'w') as config:
# config.write("""general:
# discount_gamma: 0.7
# random_seed: 42
# use_cuda: True # disable this when running on machine without cuda
# # replay memory
# replay_memory_capacity: 100000 # adjust this depending on your RAM size
# replay_memory_priority_fraction: 0.25
# update_per_k_game_steps: 4
# replay_batch_size: 4
# # epsilon greedy
# epsilon_anneal_episodes: 60 # -1 if not annealing
# epsilon_anneal_from: 1.0
# epsilon_anneal_to: 0.2
# checkpoint:
# experiment_tag: 'starting-kit'
# model_checkpoint_path: '/gdrive/My Drive/saved_models'
# load_pretrained: False # during test, enable this so that the agent load your pretrained model
# pretrained_experiment_tag: 'starting-kit'
# save_frequency: 200
# training:
# batch_size: 1
# nb_epochs: 100
# max_nb_steps_per_episode: 100 # after this many steps, a game is terminated
# optimizer:
# step_rule: 'adam' # adam
# learning_rate: 0.001
# clip_grad_norm: 5
# model:
# embedding_size: 64
# encoder_rnn_hidden_size: [64]
# action_scorer_hidden_dim: 32
# dropout_between_rnn_layers: 0.
# bert_model: 'bert-base-uncased'
# layer_index: 11
# """)
# 6 games more than 20 epochs
# with open('./config.yaml', 'w') as config:
# config.write("""general:
# discount_gamma: 0.7
# random_seed: 42
# use_cuda: True # disable this when running on machine without cuda
# # replay memory
# replay_memory_capacity: 100000 # adjust this depending on your RAM size
# replay_memory_priority_fraction: 0.25
# update_per_k_game_steps: 4
# replay_batch_size: 4
# # epsilon greedy
# epsilon_anneal_episodes: 60 # -1 if not annealing
# epsilon_anneal_from: 1.0
# epsilon_anneal_to: 0.2
# checkpoint:
# experiment_tag: 'starting-kit'
# model_checkpoint_path: '/gdrive/My Drive/saved_models'
# load_pretrained: False # during test, enable this so that the agent load your pretrained model
# pretrained_experiment_tag: 'starting-kit'
# save_frequency: 200
# training:
# batch_size: 1
# nb_epochs: 100
# max_nb_steps_per_episode: 100 # after this many steps, a game is terminated
# optimizer:
# step_rule: 'adam' # adam
# learning_rate: 0.001
# clip_grad_norm: 5
# model:
# embedding_size: 64
# encoder_rnn_hidden_size: [64]
# action_scorer_hidden_dim: 32
# dropout_between_rnn_layers: 0.
# bert_model: 'bert-base-uncased'
# layer_index: 11
# """)
with open('./config.yaml', 'w') as config:
config.write("""general:
discount_gamma: 0.7
random_seed: 42
use_cuda: True # disable this when running on machine without cuda
# replay memory
replay_memory_capacity: 100000 # adjust this depending on your RAM size
replay_memory_priority_fraction: 0.5
update_per_k_game_steps: 8
replay_batch_size: 24
# epsilon greedy
epsilon_anneal_episodes: 200 # -1 if not annealing
epsilon_anneal_from: 1
epsilon_anneal_to: 0.2
checkpoint:
experiment_tag: 'bert-dqn-noisy-networks-large'
model_checkpoint_path: '/gdrive/My Drive/TextWorld/trained_models'
load_pretrained: True # during test, enable this so that the agent load your pretrained model
pretrained_experiment_tag: 'bert-dqn-noisy-networks-large_episode_500'
save_frequency: 100
training:
batch_size: 10
nb_epochs: 100
max_nb_steps_per_episode: 100 # after this many steps, a game is terminated
optimizer:
step_rule: 'adam' # adam
learning_rate: 0.001
clip_grad_norm: 5
model:
noisy_std: 0.3
embedding_size: 192
encoder_rnn_hidden_size: [384]
action_scorer_hidden_dim: 128
dropout_between_rnn_layers: 0.
bert_model: 'bert-base-uncased'
train_bert: False
layer_index: 11
""")
###Output
_____no_output_____
###Markdown
Mount drive to load games. The notebook takes sample games from Google Drive (requires authentication). To train the agent with games, upload an archive with them to Google Drive and fix the path to the archive inside Drive below.
###Code
from google.colab import drive
drive.mount('/gdrive')
# home_dir = '/gdrive/My Drive/Masters/TextWorld/'
# path_to_sample_games = home_dir + 'sample_games'
path_to_sample_games = '/gdrive/My Drive/TextWorld/sample_games'
def select_additional_infos() -> EnvInfos:
"""
Returns what additional information should be made available at each game step.
Requested information will be included within the `infos` dictionary
passed to `CustomAgent.act()`. To request specific information, create a
:py:class:`textworld.EnvInfos <textworld.envs.wrappers.filter.EnvInfos>`
and set the appropriate attributes to `True`. The possible choices are:
* `description`: text description of the current room, i.e. output of the `look` command;
* `inventory`: text listing of the player's inventory, i.e. output of the `inventory` command;
* `max_score`: maximum reachable score of the game;
* `objective`: objective of the game described in text;
* `entities`: names of all entities in the game;
        * `verbs`: verbs understood by the game;
        * `command_templates`: templates for commands understood by the game;
* `admissible_commands`: all commands relevant to the current state;
In addition to the standard information, game specific information
can be requested by appending corresponding strings to the `extras`
attribute. For this competition, the possible extras are:
* `'recipe'`: description of the cookbook;
        * `'walkthrough'`: one possible solution to the game (not guaranteed to be optimal);
Example:
Here is an example of how to request information and retrieve it.
>>> from textworld import EnvInfos
>>> request_infos = EnvInfos(description=True, inventory=True, extras=["recipe"])
...
>>> env = gym.make(env_id)
>>> ob, infos = env.reset()
>>> print(infos["description"])
>>> print(infos["inventory"])
>>> print(infos["extra.recipe"])
Notes:
The following information *won't* be available at test time:
* 'walkthrough'
"""
request_infos = EnvInfos()
request_infos.description = True
request_infos.inventory = True
request_infos.entities = True
request_infos.verbs = True
request_infos.max_score = True
request_infos.has_won = True
request_infos.has_lost = True
request_infos.extras = ["recipe", "walkthrough"]
return request_infos
def gather_entites(game_files):
requested_infos = select_additional_infos()
_validate_requested_infos(requested_infos)
env_id = textworld.gym.register_games(game_files, requested_infos,
max_episode_steps=1,
name="training")
env_id = textworld.gym.make_batch(env_id, batch_size=1, parallel=True)
env = gym.make(env_id)
game_range = range(len(game_files))
entities = set()
verbs = set()
for game_no in game_range:
obs, infos = env.reset()
env.skip()
entities |= { a for i in infos['entities'] for a in i }
verbs |= { a for i in infos['verbs'] for a in i }
return list(entities), list(verbs)
def make_vocab(games):
entities, verbs = gather_entites(games)
with open("./vocab.txt") as f:
word_vocab = f.read().split("\n")
#########################
batch_size = len(verbs)
verbs_word_list = verbs
noun_word_list, adj_word_list = [], []
tmp_nouns, tmp_adjs = [], []
for name in entities:
split = name.split()
tmp_nouns.append(split[-1])
if len(split) > 1:
tmp_adjs.append(" ".join(split[:-1]))
noun_word_list = list(set(tmp_nouns))
adj_word_list = list(set(tmp_adjs))
word2id = { word: idx for idx, word in enumerate(word_vocab) }
verb_mask = np.zeros((len(word_vocab),), dtype="float32")
noun_mask = np.zeros((len(word_vocab),), dtype="float32")
adj_mask = np.zeros((len(word_vocab),), dtype="float32")
for w in verbs_word_list:
if w in word2id:
verb_mask[word2id[w]] = 1.0
for w in noun_word_list:
if w in word2id:
noun_mask[word2id[w]] = 1.0
for w in adj_word_list:
if w in word2id:
adj_mask[word2id[w]] = 1.0
second_noun_mask = copy.copy(noun_mask)
second_adj_mask = copy.copy(adj_mask)
    # NOTE: the masks above are 1-D arrays of shape (len(word_vocab),), so they are indexed directly;
    # self.EOS_id appears to be carried over from the agent class -- when using this helper standalone,
    # the EOS token id has to be supplied from the vocabulary.
    second_noun_mask[self.EOS_id] = 1.0
    adj_mask[self.EOS_id] = 1.0
    second_adj_mask[self.EOS_id] = 1.0
word_masks_np = [verb_mask, adj_mask, noun_mask, second_adj_mask, second_noun_mask]
return word_vocab, word2id, word_masks_np
# List of additional information available during evaluation.
AVAILABLE_INFORMATION = EnvInfos(
description=True, inventory=True,
max_score=True, objective=True, entities=True, verbs=True,
command_templates=True, admissible_commands=True,
has_won=True, has_lost=True,
extras=["recipe"]
)
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
def _validate_requested_infos(infos: EnvInfos):
msg = "The following information cannot be requested: {}"
for key in infos.basics:
if not getattr(AVAILABLE_INFORMATION, key):
raise ValueError(msg.format(key))
for key in infos.extras:
if key not in AVAILABLE_INFORMATION.extras:
raise ValueError(msg.format(key))
def get_index(game_no, stats):
return "G{}_{}".format(game_no, stats)
def get_game_id(game_info):
return hash((tuple(game_info['entities'][0]), game_info['extra.recipe'][0]))
def make_stats(count_games):
stats_cols = [ "scores", "steps", "loss", "max_score", "outcomes", "eps", "state"]
stats = {}
for col in stats_cols:
stats[col] = [0] * count_games
return stats
def save_to_csv(epoch_no, stats_df, score_mean, states, loss, eps, col):
filename = '/gdrive/My Drive/stats/TextWorld/game_' + str(col) + '.csv'
log_df = pd.DataFrame(columns=['epoch','score', 'steps', 'loss', 'eps', 'state'])
log_df.loc[0,'epoch'] = epoch_no
log_df.loc[0,'score'] = score_mean
log_df.loc[0,'steps'] = stats_df[get_index(col, 'st')]['avr']
log_df.loc[0,'loss'] = loss[col]
log_df.loc[0,'eps'] = eps[col]
log_df.loc[0, 'state'] = states[col]
if not os.path.isfile(filename):
log_df.to_csv(filename, header=['epoch','score', 'steps', 'loss', 'eps', 'state'])
else: # else it exists so append without writing the header
log_df.to_csv(filename, mode='a', header=False)
def print_epoch_stats(epoch_no, stats):
print("\n\nEpoch: {:3d}".format(epoch_no))
steps, scores, loss, states = stats["steps"], stats["scores"], stats["loss"], stats["state"]
max_scores, outcomes = stats["max_score"], stats["outcomes"]
games_cnt, parallel_cnt = len(steps), len(steps[0])
columns = [ get_index(col, st) for col in range(games_cnt) for st in ['st', 'sc']]
stats_df = pd.DataFrame(index=list(range(parallel_cnt)) + ["avr", "loss"], columns=columns)
for col in range(games_cnt):
for row in range(parallel_cnt):
outcome = outcomes[col][row]
outcome = outcome > 0 and "W" or outcome < 0 and "L" or ""
stats_df[get_index(col, 'st')][row] = steps[col][row]
stats_df[get_index(col, 'sc')][row] = outcome + " " + str(scores[col][row])
score_mean = np.mean(scores[col])
stats_df[get_index(col, 'sc')]['avr'] = "{}/{}".format(score_mean, max_scores[col])
stats_df[get_index(col, 'st')]['avr'] = stats_df[get_index(col, 'st')].mean()
stats_df[get_index(col, 'sc')]['loss'] = "{:.5f}".format(loss[col])
stats_df[get_index(col, 'st')]['loss'] = states[col]
save_to_csv(epoch_no, stats_df, score_mean, states, loss, stats['eps'], col)
print(stats_df)
def train(game_files):
requested_infos = select_additional_infos()
agent = CustomAgent()
env_id = textworld.gym.register_games(game_files, requested_infos,
max_episode_steps=agent.max_nb_steps_per_episode,
name="training")
env_id = textworld.gym.make_batch(env_id, batch_size=agent.batch_size, parallel=True)
print("ENVID: {}".format(env_id))
print("Making {} parallel environments to train on them\n".format(agent.batch_size))
env = gym.make(env_id)
count_games = len(game_files)
games_ids = {}
for epoch_no in range(1, agent.nb_epochs + 1):
stats = make_stats(count_games)
idx = 0
for game_no in tqdm(range(count_games)):
obs, infos = env.reset()
game_id = get_game_id(infos)
if epoch_no == 1:
games_ids[game_id] = idx
idx += 1
real_id = games_ids[game_id]
stats["max_score"][real_id] = infos['max_score'][0]
imitate = random.random() > 1.0
agent.train(imitate)
scores = [0] * len(obs)
dones = [False] * len(obs)
steps = [0] * len(obs)
while not all(dones):
# Increase step counts.
steps = [step + int(not done) for step, done in zip(steps, dones)]
commands = agent.act(obs, scores, dones, infos)
obs, scores, dones, infos = env.step(commands)
# Let the agent knows the game is done.
agent.act(obs, scores, dones, infos)
stats["scores"][real_id] = scores
stats["steps"][real_id] = steps
stats["eps"][real_id] = agent.epsilon
stats["loss"][real_id] = agent.get_mean_loss()
stats["state"][real_id] = ["imitate" if imitate else "agent"]
stats["outcomes"][real_id] = [ w-l for w, l in zip(infos['has_won'], infos['has_lost'])]
print_epoch_stats(epoch_no, stats)
torch.save(agent.model, './agent_model.pt')
return
# game_dir = path_to_sample_games
# games = []
# if os.path.isdir(game_dir):
# games += glob.glob(os.path.join(game_dir, "*.ulx"))
# print("{} games found for training.".format(len(games)))
# if len(games) != 0:
# train(games)
###Output
_____no_output_____
###Markdown
Eval
###Code
# COMMENT TRAIN
# FIX LOADING MODEL PATH
# FIX BATCH_SIZE TO BE 10
def eval_games(game_files):
requested_infos = select_additional_infos()
agent = CustomAgent()
env_id = textworld.gym.register_games(game_files, requested_infos,
max_episode_steps=agent.max_nb_steps_per_episode,
name="eval")
env_id = textworld.gym.make_batch(env_id, batch_size=10, parallel=True)
print("ENVID: {}".format(env_id))
print("Making {} parallel environments to eval on them\n".format(agent.batch_size))
env = gym.make(env_id)
count_games = len(game_files)
games_ids = {}
stats = make_stats(count_games)
score_sum = 0
steps_sum = 0
steps_length = count_games*10
for game_no in tqdm(range(count_games)):
obs, infos = env.reset()
agent.eval()
scores = [0] * len(obs)
dones = [False] * len(obs)
steps = [0] * len(obs)
while not all(dones):
# Increase step counts.
steps = [step + int(not done) for step, done in zip(steps, dones)]
commands = agent.act(obs, scores, dones, infos)
obs, scores, dones, infos = env.step(commands)
# Let the agent knows the game is done.
agent.act(obs, scores, dones, infos)
score_sum += sum(scores)
steps_sum += sum(steps)
print('Max score: ', score_sum)
print('Mean steps: ', steps_sum / steps_length)
game_dir = path_to_sample_games
games = []
if os.path.isdir(game_dir):
games += glob.glob(os.path.join(game_dir, "*.ulx"))
    print("{} games found for evaluation.".format(len(games)))
if len(games) != 0:
eval_games(games)
###Output
_____no_output_____ |
lista1/.ipynb_checkpoints/lista1-checkpoint.ipynb | ###Markdown
Initial concepts for problem list 1 Intermediate Value Theorem If $f(x)$ is a continuous function and $f(a)f(b)<0$, then there exists $z \in [a,b]$ such that $f(z)=0$, that is, there is a root. __Important__: if the theorem does not hold for a given exercise, it does not mean the function has no root, since the theorem is a **sufficient** but not a **necessary** condition for applying numerical methods for solving non-linear equations. Once an interval $[a, b]$ containing $z$ has been obtained, we can choose a numerical method to compute approximations $x_1, x_2, x_3, \dots, x_k$ that converge to $z$.
###Code
# In this cell we configure the variables used throughout the whole notebook...
import numpy as np
# To use any of the methods, just assign the desired function to the variable 'f' below.
# As an example, f is set here to x**3 - 2*x - 1 (exercise 1.1 of list 1 uses x*(np.e**x) - 4).
f = lambda x: x**3 - 2*x - 1
df = lambda x: 3*x**2 - 2 # set f'(x) on this line in order to use Newton's method!
###Output
_____no_output_____
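###Markdown
Before choosing a method we can quickly verify the Intermediate Value Theorem hypothesis for the interval $[0, 2]$ used in the bisection call below; this is a small sketch using the `f` defined above (the interval is an assumption matching that call).
###Code
# quick check of the IVT hypothesis f(a)*f(b) < 0 on the interval [0, 2]
a, b = 0, 2
print("f(a) =", f(a), " f(b) =", f(b))
print("f(a)*f(b) < 0:", f(a) * f(b) < 0)  # True -> there is at least one root in [a, b]
###Output
_____no_output_____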
###Markdown
Bisection Method When is it valid to use the method? It works **only** in cases where there is **one and only one** root of the function inside the interval $[a,b]$
###Code
"""
Function responsible for computing the bisection method.
Parameters: [a, b] -> interval on which the method operates
            funcao -> f(x)
            erro   -> desired precision level.
"""
def metodoBisseccao(a, b, funcao, erro):
print(">>> Intervalo atual: ", "a: ", a, " b: ", b)
    x = (a + b) / 2 # compute the midpoint.
print(">>> Ponto medio:", x)
if (abs(a - x) <= erro):
return x
verificacao = funcao(a)*funcao(x)
print(">>> f(a)*f(x): ", verificacao)
    # If f(a)*f(x) > 0, then the root lies in [x, b]
if (verificacao > 0):
print('>>> Proxima iteracao: [x, b]')
return metodoBisseccao(x, b, funcao, erro)
else:
print('>>> Proxima iteracao: [a, x]')
return metodoBisseccao(a, x, funcao, erro)
metodoBisseccao(0, 2, f, 0.01)
###Output
>>> Intervalo atual: a: 0 b: 2
>>> Ponto medio: 1.0
>>> f(a)*f(x): 5.12687268616382
>>> Proxima iteracao: [x, b]
>>> Intervalo atual: a: 1.0 b: 2
>>> Ponto medio: 1.5
>>> f(a)*f(x): -3.4895207948093603
>>> Proxima iteracao: [a, x]
>>> Intervalo atual: a: 1.0 b: 1.5
>>> Ponto medio: 1.25
>>> f(a)*f(x): -0.46517230569722967
>>> Proxima iteracao: [a, x]
>>> Intervalo atual: a: 1.0 b: 1.25
>>> Ponto medio: 1.125
>>> f(a)*f(x): 0.6854065401758516
>>> Proxima iteracao: [x, b]
>>> Intervalo atual: a: 1.125 b: 1.25
>>> Ponto medio: 1.1875
>>> f(a)*f(x): 0.056864567762418626
>>> Proxima iteracao: [x, b]
>>> Intervalo atual: a: 1.1875 b: 1.25
>>> Ponto medio: 1.21875
>>> f(a)*f(x): -0.013077172044994142
>>> Proxima iteracao: [a, x]
>>> Intervalo atual: a: 1.1875 b: 1.21875
>>> Ponto medio: 1.203125
>>> f(a)*f(x): -0.0007462821456601511
>>> Proxima iteracao: [a, x]
>>> Intervalo atual: a: 1.1875 b: 1.203125
>>> Ponto medio: 1.1953125
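###Markdown
Since bisection halves the interval at every step, the standard error bound $|z - x_n| \leqslant \frac{b-a}{2^n}$ gives the number of iterations needed for a tolerance $\varepsilon$: $n \geqslant \log_2\frac{b-a}{\varepsilon}$. A small check for the call above, assuming this standard bound (note that the stopping rule in the code, $|a - x| \leqslant$ erro, is slightly different):
###Code
import numpy as np

# minimum number of bisection steps so that (b - a) / 2**n <= eps
a, b, eps = 0, 2, 0.01
n_min = int(np.ceil(np.log2((b - a) / eps)))
print("iterations needed:", n_min)  # ceil(log2(200)) = 8
###Output
_____no_output_____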
###Markdown
Fixed-Point Method We have a non-linear equation $f(x)$ whose roots we want to approximate. Starting from $f(x)=0$, we rewrite it in the form $x=g(x)$ (called the iteration function), which is then used in the iterative search for solutions. Method formula $x_{n+1} = g(x_{n})$ When is it valid to use the method? The method is valid in the following cases: - $g([a,b]) \subset [a,b]$ - $\max_{x \in [a,b]}|g'(x)| \leqslant L < 1$
###Code
"""
Function responsible for computing the Fixed-Point method.
Parameters: x0     -> starting point
            funcao -> iteration function g(x)
            erro   -> desired precision level.
"""
def metodoPontoFixo(x0, funcao, erro):
xnext = funcao(x0) # x = g(x)
print(">>> xn+1 = g(xn): ", xnext)
    # If we have already found a sufficiently precise solution
if (abs(x0 - xnext) <= erro):
return xnext
else:
        return metodoPontoFixo(xnext, funcao, erro) # recursive call of the function...
metodoPontoFixo(0, f, 0.01)
###Output
>>> x = g(x): -0.5773502691896257
>>> x = g(x): -0.4325829068124507
>>> x = g(x): -0.4650559315428768
>>> x = g(x): -0.45756601476601655
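###Markdown
The validity conditions can be checked numerically before iterating. As an illustration (the choice of $g$ and of the interval are assumptions for this sketch, not necessarily what was used above), rewrite $x^3 - 2x - 1 = 0$ as $x = g(x) = \frac{x^3 - 1}{2}$ and check both conditions on $[-0.7, 0]$:
###Code
import numpy as np

# illustrative iteration function for x**3 - 2*x - 1 = 0 (an assumed choice for this sketch)
g = lambda x: (x**3 - 1) / 2
dg = lambda x: 3 * x**2 / 2

a, b = -0.7, 0.0
xs = np.linspace(a, b, 1001)

# condition 1: g([a, b]) must stay inside [a, b]
print("g([a,b]) subset of [a,b]:", g(xs).min() >= a and g(xs).max() <= b)
# condition 2: max |g'(x)| on [a, b] must be strictly less than 1
print("max |g'(x)| on [a,b]:", np.abs(dg(xs)).max())
###Output
_____no_output_____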
###Markdown
Error at the n-th iteration Starting from the idea that $|z - x_{n+1}| \leqslant L|z-x_{n}|$, we get $|z-x_{n+1}| \leqslant \frac{L|x_{n+1}-x_{n}|}{1-L}$. Order of convergence $\rho = \frac{1}{n}$ Order-of-convergence theorem If $g'(z) = 0$, then $\rho \geqslant 2$ (at least quadratic). Newton's Method The formula of Newton's method is defined by the following recurrence: $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$ Criteria for the method Criterion 1 (essentially the fixed-point criterion) 1. $f(a)f(b) < 0$ 2. $f'(x) \neq 0, \forall x \in [a,b]$ 3. $f''(x) \neq 0, \forall x \in [a,b]$ 4. $|\frac{f(a)}{f'(a)}| < b-a$ If item 4 holds, Newton's method converges to the root $z$ of $f(x)=0$ for every $x_0 \in [a, b]$. Criterion 2 If only conditions 1, 2 and 3 of the previous criterion are satisfied (basically the fixed-point criterion minus the last condition), then $x_0$ must be chosen such that $f(x_0)f''(x_0) \geqslant 0$; then the method converges to the root $z$. Error formula Much like the error formula of the fixed-point method, Newton's method has an error given by the relation $|z-x_{n+1}| \leqslant k_{n}|z-x_n|^2$ where $k_n = \max_{x \in [a, b]} \frac{|f''(x)|}{2|f'(x_n)|}$
###Code
"""
Function responsible for computing Newton's method.
Parameters: x0   -> starting point
            f    -> function whose root is sought
            df   -> derivative of f
            erro -> desired precision level.
"""
def metodoNewton(x0, f, df, erro):
xnext = x0 - (f(x0)/df(x0))
print(">>> Iteração atual", " xn: ", x0, " xn+1: ", xnext)
if (abs(x0 - xnext) <= erro):
return xnext
else:
return metodoNewton(xnext, f, df, erro)
metodoNewton(10, f, df, 0.000001)
###Output
>>> Iteração atual xn: 10 xn+1: 6.714765100671141
>>> Iteração atual xn: 6.714765100671141 xn+1: 4.551196436801771
>>> Iteração atual xn: 4.551196436801771 xn+1: 3.1516607597637125
>>> Iteração atual xn: 3.1516607597637125 xn+1: 2.288244613012434
>>> Iteração atual xn: 2.288244613012434 xn+1: 1.8210126475281583
>>> Iteração atual xn: 1.8210126475281583 xn+1: 1.6452998528602982
>>> Iteração atual xn: 1.6452998528602982 xn+1: 1.6186301644991354
>>> Iteração atual xn: 1.6186301644991354 xn+1: 1.6180342832426726
>>> Iteração atual xn: 1.6180342832426726 xn+1: 1.6180339887499668
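###Markdown
As a quick illustration of Criterion 2 for the `f` defined above (the interval $[1.5, 2]$ around the positive root near 1.618 is an assumption for this sketch): conditions 1-3 can be checked directly, and $x_0 = 2$ satisfies $f(x_0)f''(x_0) \geqslant 0$.
###Code
# checking Criterion 2 on [1.5, 2] for f(x) = x**3 - 2*x - 1 (illustrative choice of interval)
ddf = lambda x: 6 * x  # f''(x)

a, b = 1.5, 2.0
print("f(a)*f(b) < 0:", f(a) * f(b) < 0)            # condition 1
print("f'(a), f'(b):", df(a), df(b))                # condition 2: f'(x) = 3x**2 - 2 > 0 on [1.5, 2]
print("f''(a), f''(b):", ddf(a), ddf(b))            # condition 3: f''(x) = 6x > 0 on [1.5, 2]
x0 = 2.0
print("f(x0)*f''(x0) >= 0:", f(x0) * ddf(x0) >= 0)  # Criterion 2 choice of the starting point
###Output
_____no_output_____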
|
data/preprocess_data.ipynb | ###Markdown
Process person data: auxiliary functions
###Code
# imports used throughout this notebook (assumed here, since the original import cell is not shown)
import json
import pandas as pd

# We need this function to replace all occurrences of {'-self-closing': 'true'} by None.
def remove_self_closing(elem):
if type(elem) != dict and type(elem) != list:
return
if type(elem) == list:
for idx, value in enumerate(elem):
if value == {'-self-closing': 'true'}:
                elem[idx] = None  # fix: use the list index here, not the (undefined) dict key
elif type(value) == dict or type(value) == list:
remove_self_closing(value)
if type(elem) == dict:
for (key, value) in elem.items():
if value == {'-self-closing': 'true'}:
elem[key] = None
elif type(value) == dict or type(value) == list:
remove_self_closing(value)
###Output
_____no_output_____
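###Markdown
A tiny usage example of the helper above on made-up nested data (the field names are illustrative only):
###Code
# illustrative input: a nested structure mixing dicts and lists (made-up example data)
example = {
    "NAME": [{"NACHNAME": "Muster", "ANREDE_TITEL": {"-self-closing": "true"}}],
    "WAHLPERIODEN": {"WAHLPERIODE": [{"MDBWP_BIS": {"-self-closing": "true"}}, {"WP": "19"}]},
}
remove_self_closing(example)
print(example)
# every {'-self-closing': 'true'} value has been replaced by None, at any nesting depth
###Output
_____no_output_____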
###Markdown
Import converted/MDB_STAMMDATEN.json, reorganize the data and remove parts that are not interesting for us
###Code
with open("__converted/MDB_STAMMDATEN.json", "r") as f:
data = json.load(f)
persons_raw = data["DOCUMENT"]["MDB"]
persons = []
retired = []
for p in persons_raw:
person = {}
name = p["NAMEN"]["NAME"]
name = name[-1] if type(name) == list else name
bio = p["BIOGRAFISCHE_ANGABEN"]
wp_raw = p["WAHLPERIODEN"]["WAHLPERIODE"]
wp_raw = [wp_raw] if type(wp_raw) != list else wp_raw
# We don't need the data if the person is/was not a MP in the current term
if "19" not in [wp["WP"] for wp in wp_raw]:
continue
person["nachname"] = name["NACHNAME"]
person["vorname"] = name["VORNAME"]
person["geburtsdatum"] = bio["GEBURTSDATUM"]
person["geburtsort"] = bio["GEBURTSORT"]
person["sterbedatum"] = bio["STERBEDATUM"]
person["geschlecht"] = bio["GESCHLECHT"]
person["familienstand"] = bio["FAMILIENSTAND"]
person["religion"] = bio["RELIGION"]
person["beruf"] = bio["BERUF"]
person["anrede_titel"] = name["ANREDE_TITEL"]
person["akad_titel"] = name["AKAD_TITEL"]
person["vita"] = bio["VITA_KURZ"]
person["partei"] = bio["PARTEI_KURZ"]
person["partei_id"] = person["partei"]
if bio["PARTEI_KURZ"] == "CDU" or bio["PARTEI_KURZ"] == "CSU":
person["partei_id"] = "CDU/CSU"
elif bio["PARTEI_KURZ"] == "Plos":
person["partei"] = "fraktionslos"
person["partei_id"] = "fraktionslos"
elif bio["PARTEI_KURZ"] == "BÜNDNIS 90/DIE GRÜNEN":
person["partei"] = "GRÜNE"
person["partei_id"] = "GRÜNE"
wahlperioden = []
active = True
for wp in wp_raw:
if wp["WP"] == "19" and wp["MDBWP_BIS"] != {'-self-closing': 'true'}:
active = False
break
wahlperiode = {}
wahlperiode["wp"] = wp["WP"]
wahlperiode["md_von"] = wp["MDBWP_VON"]
wahlperiode["md_bis"] = wp["MDBWP_BIS"]
wahlperiode["liste"] = wp["LISTE"]
wahlperiode["mandatsart"] = wp["MANDATSART"]
wahlperiode["wkr_land"] = wp["WKR_LAND"]
wahlperiode["wkr_name"] = wp["WKR_NAME"]
wahlperiode["wkr_nummer"] = wp["WKR_NUMMER"]
institutionen = wp["INSTITUTIONEN"]["INSTITUTION"]
institutionen = [institutionen] if type(institutionen) != list else institutionen
wahlperiode["institutionen"] = institutionen
wahlperioden.append(wahlperiode)
person["wahlperioden"] = wahlperioden
remove_self_closing(person)
# we only want persons that are currently in the parliament
if not active:
retired.append((person["nachname"], person["vorname"]))
continue
persons.append(person)
###Output
_____no_output_____
###Markdown
Import and map image URLs
###Code
with open("img/urls.json", "r") as f:
urls = json.load(f)
# Apply some corrections to the names to match the names from stammdaten.
names = [(p["nachname"], p["vorname"]) for p in persons]
CORRECTIONS = {
("Altenkamp", "Norbert"): ("Altenkamp", "Norbert Maria"),
("in der Beek", "Olaf"): ("In der Beek", "Olaf"),
("Mackensen-Geis", "Isabel"): ("Mackensen", "Isabel"),
("Michelbach", "(Univ Kyiv) Hans"): ("Michelbach", "Hans"),
("Merkel", "Angela Dorothea"): ("Merkel", "Angela")
}
def check_names(urls):
l = [(url["name"].split(", ")[0], url["name"].split(", ")[1]) for url in urls]
for name in names:
assert name in l, name
def correct_names(urls):
for url in urls:
vorname = url["name"].split(", ")[1]
nachname = url["name"].split(", ")[0]
nachname = nachname.split(" (")[0]
vorname = vorname.replace("Dr. ", "")
vorname = vorname.replace("h. c. ", "")
vorname = vorname.replace("Prof. ", "")
vorname = vorname.replace(" von der", "")
vorname = vorname.replace(" von", "")
vorname = vorname.replace(" de", "")
vorname = vorname.replace(" Graf", "")
vorname = vorname.replace(" Freiherr", "")
if (nachname, vorname) in CORRECTIONS:
nachname, vorname = CORRECTIONS[(nachname, vorname)]
url["name"] = nachname + ", " + vorname
correct_names(urls)
check_names(urls)
d = {url["name"]: url for url in urls}
for person in persons:
person["img"] = d[f"{person['nachname']}, {person['vorname']}"]["img"]
###Output
_____no_output_____
###Markdown
Export data to final/stammdaten.json
###Code
with open("___final/stammdaten.json", "w") as outfile:
json.dump(persons, outfile, indent = 4, ensure_ascii = False)
###Output
_____no_output_____
###Markdown
Other stuff
###Code
# Make sure there are no duplicate names (because we want to use them as id)
for i in range(len(persons)):
for j in range(i+1, len(persons)):
p1 = persons[i]
p2 = persons[j]
if p1["nachname"] == p2["nachname"] and p1["vorname"] == p2["vorname"]:
print(p1["nachname"], p1["vorname"])
###Output
_____no_output_____
###Markdown
Process vote data
###Code
CORRECTIONS = {
("Özoguz", "Aydan"): ("Özoğuz", "Aydan"),
("Dagdelen", "Sevim"): ("Dağdelen", "Sevim")
}
# Remove some columns we don't use, check if all names match a name in stammdaten.json and
# correct them if necessary. Return the result as a list of JSON records (one per MP vote).
def process_vote_data(filename, vote_nr):
# Read csv
csv_data = pd.read_csv("__converted/votes/" + filename, sep=";")
csv_data = csv_data[["Fraktion/Gruppe", "Name", "Vorname", "ja", "nein", "Enthaltung", "ungültig", "nichtabgegeben"]]
# some format changes
csv_data.columns = ["partei_id", "name", "vorname", "ja", "nein", "Enthaltung", "ungültig", "nichtabgegeben"]
csv_data["vote"] = "nö"
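    # vote encoding used below: 0 = ja (yes), 1 = nein (no), 2 = Enthaltung (abstention),
    # 3 = ungültig (invalid), 4 = nichtabgegeben (not cast); "nö" is only a placeholder default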
for idx, row in csv_data.iterrows():
csv_data.at[idx, "name"] = row["name"].split(" (")[0]
if row["partei_id"] == "Fraktionslos":
csv_data.at[idx, "partei_id"] = "fraktionslos"
if row["partei_id"] == "BÜ90/GR":
csv_data.at[idx, "partei_id"] = "GRÜNE"
if row["ja"] == 1:
csv_data.at[idx, "vote"] = 0
elif row["nein"] == 1:
csv_data.at[idx, "vote"] = 1
elif row["Enthaltung"] == 1:
csv_data.at[idx, "vote"] = 2
elif row["ungültig"] == 1:
csv_data.at[idx, "vote"] = 3
elif row["nichtabgegeben"] == 1:
csv_data.at[idx, "vote"] = 4
csv_data = csv_data[["partei_id", "name", "vorname", "vote"]]
# Apply corrections for names that differ from stammdaten.json
names = [(p["nachname"], p["vorname"]) for p in persons]
for idx, row in csv_data[["name", "vorname"]].iterrows():
name = (row["name"], row["vorname"])
if name in CORRECTIONS:
csv_data.at[idx, "name"] = CORRECTIONS[name][0]
csv_data.at[idx, "vorname"] = CORRECTIONS[name][1]
# Check if there are more names that require manual intervention
for _, row in csv_data[["name", "vorname"]].iterrows():
nachname = row["name"]
vorname = row["vorname"]
#if (nachname, vorname) not in retired: TODO
#assert (nachname, vorname) in names, (nachname, vorname)
return json.loads(csv_data.to_json(orient = "records"))
files = [
("20210624_1_xls-data.csv", "20210624_1"),
("20210624_2_xls-data.csv", "20210624_2"),
("20210624_4_xls-data.csv", "20210624_4"),
("20210610_4_xls-data.csv", "20210610_4"),
("20210325_2_xls-data.csv", "20210325_2"),
("20210225_2_xls-data.csv", "20210225_2"),
("20191220_2_xls-data.csv", "20191220_2")
]
with open("__converted/votes/meta.json", "r") as f:
data = json.load(f)
for file, vote_nr in files:
data[vote_nr]["votes"] = process_vote_data(file, vote_nr)
l = [d for (_, d) in data.items()]
with open("___final/votes.json", "w") as outfile:
json.dump(l, outfile, indent = 4, ensure_ascii = False)
###Output
_____no_output_____ |
4. Analysis/Churn_Test.ipynb | ###Markdown
Please refer to Train.ipynb for details about the data. Exploratory Data Analysis of Test Data
###Code
import pandas as pd
import numpy as np
from sklearn import preprocessing
import matplotlib.pyplot as plt
plt.rc("font", size=14)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
import seaborn as sns
sns.set(style="white")
sns.set(style="whitegrid", color_codes=True)
test=pd.read_csv("Churn_Test.csv")
test.head()
###Output
_____no_output_____
###Markdown
We want to check whether our data has null values, so we visualize the missing values with a heatmap.
###Code
sns.heatmap(test.isnull(),yticklabels=False,cbar=False,cmap="viridis")
###Output
_____no_output_____
###Markdown
We also want a basic overview of the number of people who have exited the bank and the number who haven't. By gender:
###Code
sns.set_style("whitegrid")
sns.countplot(x="Exited",hue="Gender",data=test, palette="RdBu_r")
###Output
_____no_output_____
###Markdown
By Geography
###Code
sns.countplot(x="Exited",hue="Geography",data=test)
###Output
_____no_output_____
###Markdown
By whether or not they have a credit card
###Code
sns.countplot(x="Exited",hue="HasCrCard",data=test)
###Output
_____no_output_____
###Markdown
We also want to know the age distribution of our customers.
###Code
sns.distplot(test["Age"],kde=False,bins=30)
test.info()
###Output
_____no_output_____
###Markdown
We want to introduce dummy variables to replace our text fields
###Code
gender=pd.get_dummies(test["Gender"],drop_first=True)
geography=pd.get_dummies(test["Geography"],drop_first=True)
test=pd.concat([test,gender,geography],axis=1)
test.head()
X=test.drop("Exited",axis=1)
y=test["Exited"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
from sklearn.linear_model import LogisticRegression
logmodel=LogisticRegression()
###Output
_____no_output_____
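###Markdown
The classifier is instantiated above, but the training step is not shown in this excerpt. Below is a minimal sketch of how it could be fitted and evaluated; it assumes only the numeric columns are used (the text columns are dropped from `test` in the cells further down), and the `X_num`/`Xn_*` names and the metrics import are additions for this sketch, not part of the original notebook.
###Code
from sklearn.metrics import classification_report

# a sketch, not the notebook's original training cell: keep only numeric features before fitting
X_num = X.select_dtypes(include="number").drop(columns=["RowNumber", "CustomerId"], errors="ignore")
Xn_train, Xn_test, yn_train, yn_test = train_test_split(X_num, y, test_size=0.33, random_state=42)

# default solver may warn about convergence on unscaled features; scaling would help
logmodel.fit(Xn_train, yn_train)
predictions = logmodel.predict(Xn_test)
print(classification_report(yn_test, predictions))
###Output
_____no_output_____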
###Markdown
Let's drop all columns that do not have much impact on our dependent variable
###Code
test.drop(["Geography"],axis=1,inplace=True)
test.head()
test.drop(["RowNumber","CustomerId","Surname","Gender"],axis=1,inplace=True)
test.head()
test.head()
test.drop(["HasCrCard","CreditScore","EstimatedSalary","Bolga"],axis=1,inplace=True)
test.head()
###Output
_____no_output_____ |
Meduza_Breakthrough_infections.ipynb | ###Markdown
v.1 - 2021-07-14
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
Calculations
###Code
# constants
ExcessMortalityApr20toApr21 = 43263
MortalityOfficialAprtoJul = 4800
PopulationMoscow = 12651000
FullyVaccinated = 1800000
SemiVaccinated = 3200000-FullyVaccinated
VaccinatedGroups = {'FullyVaccinated':FullyVaccinated,
'SemiVaccinated' :SemiVaccinated}
# HIGH and LOW refers to scenario type, not arithmetic value.
MortalityUndercountCoeff = {'low':1.5 , 'mean':1.75 , 'high':2 }
IFR = {'high':0.005 , 'mean':0.0066 , 'low':0.0084 }
# asymptomatics share is 0.33- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7839426/
# so OR_Symptomatic for NonImmune should be 1-0.33=0.66
OR_Symptomatic = {'FullyVaccinated':{'high':0.33,'mean':0.2, 'low':0.12},
'SemiVaccinated' :{'high':0.7, 'mean':0.69,'low':0.64},
'Convalescent' :{'high':0.2, 'mean':0.2, 'low':0.2},
'NonImmune' :{'high':0.66, 'mean':0.66,'low':0.66}}
HospitalizationOfficialJuntoJul9 = 62127
OR_Hospitalization = {'FullyVaccinated':{'high':0.08, 'mean':0.06, 'low':0.04},
'SemiVaccinated' :{'high':0.29, 'mean':0.25, 'low':0.06},
'Convalescent' :{'high':0.04, 'mean':0.04, 'low':0.04},} # MOCK VALUES ON CONVALESCENT!
# Functions
## Symptomatic Infections
def get_ExpectedSymptomaticNumber(N, Group, Scenario='mean'):
EstimatedShareOfGroupInN =get_GroupNumber(Group, Scenario)/PopulationMoscow*N
return EstimatedShareOfGroupInN*get_SymptomaticShareInGroup(Group, Scenario)
def get_SymptomaticShareInGroup(Group, Scenario='mean'):
return get_SymptomaticInGroup(Group, Scenario)/get_GroupNumber(Group, Scenario)
def get_SymptomaticInGroup(Group, Scenario='mean'):
    # group share × odds of being symptomatic × estimated infections over the last 45 days (from deaths)
SymptomaticInGroup = (get_GroupNumber(Group, Scenario)/PopulationMoscow)\
*OR_Symptomatic[Group][Scenario]\
*get_TrueInfectionsNumber45d(Scenario)
return SymptomaticInGroup
## Hospitalizations
def get_ExpectedHospitalizedNumber(N, Group, Scenario='mean'):
EstimatedShareOfGroupInN =get_GroupNumber(Group, Scenario)/PopulationMoscow*N
return EstimatedShareOfGroupInN*get_HospitalizedShareInGroup(Group, Scenario)
def get_HospitalizedShareInGroup(Group, Scenario='mean'):
return get_HospitalizedInGroup(Group, Scenario)/get_GroupNumber(Group, Scenario)
def get_HospitalizedInGroup(Group, Scenario='mean'):
    # group share × odds of hospitalization × officially reported hospitalizations
HospitalizedInGroup = (get_GroupNumber(Group, Scenario)/PopulationMoscow)\
*OR_Hospitalization[Group][Scenario]\
*HospitalizationOfficialJuntoJul9
return HospitalizedInGroup
## Other Functions
def get_TrueInfectionsNumber45d(Scenario):
TrueRecentMortality = MortalityOfficialAprtoJul * MortalityUndercountCoeff[Scenario]
TrueInfectionsNumber45d = TrueRecentMortality/IFR[Scenario]
return TrueInfectionsNumber45d
def get_GroupNumber(Group, Scenario='mean'):
if Group not in OR_Symptomatic.keys():
raise 'Incorrect Group!'
elif Group in VaccinatedGroups.keys():
return VaccinatedGroups[Group]
elif Group == 'Convalescent':
return get_ConvalescentNumber(Scenario)
elif Group == 'NonImmune':
return PopulationMoscow-get_ConvalescentNumber(Scenario)-FullyVaccinated-SemiVaccinated
def get_ConvalescentNumber(Scenario):
return (ExcessMortalityApr20toApr21\
+\
MortalityOfficialAprtoJul*MortalityUndercountCoeff[Scenario])\
/IFR[Scenario]
###Output
_____no_output_____
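###Markdown
A quick sanity check of the helpers with the constants above, for the 'mean' scenario (these calls just evaluate the formulas already defined; no new data is introduced):
###Code
# worked example for the 'mean' scenario: 4800 * 1.75 / 0.0066 ≈ 1.27 million infections over ~45 days
print("True infections (45d, mean):", round(get_TrueInfectionsNumber45d("mean")))
print("Convalescent (mean):", round(get_ConvalescentNumber("mean")))
print("Symptomatic among FullyVaccinated (mean):", round(get_SymptomaticInGroup("FullyVaccinated", "mean")))
###Output
_____no_output_____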
###Markdown
DataFrames
###Code
# df_FullyVaccinated
N_min = 1
N_max = 1000
df_FullyVaccinated = pd.DataFrame(
{'low':[get_ExpectedSymptomaticNumber(i, 'FullyVaccinated', 'low') for i in range(N_min,N_max)],
'mean':[get_ExpectedSymptomaticNumber(i, 'FullyVaccinated', 'mean') for i in range(N_min,N_max)],
'high':[get_ExpectedSymptomaticNumber(i, 'FullyVaccinated', 'high') for i in range(N_min,N_max)]
},
index=list(range(N_min,N_max))
)
# df_SemiVaccinated
N_min = 1
N_max = 1000
df_SemiVaccinated = pd.DataFrame(
{'low':[get_ExpectedSymptomaticNumber(i, 'SemiVaccinated', 'low') for i in range(N_min,N_max)],
'mean':[get_ExpectedSymptomaticNumber(i, 'SemiVaccinated', 'mean') for i in range(N_min,N_max)],
'high':[get_ExpectedSymptomaticNumber(i, 'SemiVaccinated', 'high') for i in range(N_min,N_max)]
},
index=list(range(N_min,N_max))
)
# df_Convalescent
N_min = 1
N_max = 1000
df_Convalescent = pd.DataFrame(
{'low':[get_ExpectedSymptomaticNumber(i, 'Convalescent', 'low') for i in range(N_min,N_max)],
'mean':[get_ExpectedSymptomaticNumber(i, 'Convalescent', 'mean') for i in range(N_min,N_max)],
'high':[get_ExpectedSymptomaticNumber(i, 'Convalescent', 'high') for i in range(N_min,N_max)]
},
index=list(range(N_min,N_max))
)
# df_FullyVaccinated_Hospitalizied
N_min = 1
N_max = 1000
df_FullyVaccinated_Hospitalizied = pd.DataFrame(
{'low': [get_ExpectedHospitalizedNumber(i, 'FullyVaccinated', 'low') for i in range(N_min,N_max)],
'mean':[get_ExpectedHospitalizedNumber(i, 'FullyVaccinated', 'mean') for i in range(N_min,N_max)],
'high':[get_ExpectedHospitalizedNumber(i, 'FullyVaccinated', 'high') for i in range(N_min,N_max)]
},
index=list(range(N_min,N_max))
)
# df_SemiVaccinated_Hospitalizied
N_min = 1
N_max = 1000
df_SemiVaccinated_Hospitalizied = pd.DataFrame(
{'low':[get_ExpectedHospitalizedNumber(i, 'SemiVaccinated', 'low') for i in range(N_min,N_max)],
'mean':[get_ExpectedHospitalizedNumber(i, 'SemiVaccinated', 'mean') for i in range(N_min,N_max)],
'high':[get_ExpectedHospitalizedNumber(i, 'SemiVaccinated', 'high') for i in range(N_min,N_max)]
},
index=list(range(N_min,N_max))
)
# df_Convalescent_Hospitalized
N_min = 1
N_max = 1000
df_Convalescent_Hospitalized = pd.DataFrame(
{'low':[get_ExpectedHospitalizedNumber(i, 'Convalescent', 'low') for i in range(N_min,N_max)],
'mean':[get_ExpectedHospitalizedNumber(i, 'Convalescent', 'mean') for i in range(N_min,N_max)],
'high':[get_ExpectedHospitalizedNumber(i, 'Convalescent', 'high') for i in range(N_min,N_max)]
},
index=list(range(N_min,N_max))
)
###Output
_____no_output_____
###Markdown
Figs
###Code
fig, axs = plt.subplots(1,3, figsize=(20,5))
# ax.set_xscale('log')
axs[0].set_ylim(0,10)
# ax.set_facecolor('black')
axs[0].grid(color='gray')
# plt.plot(df_FullyVaccinated)
# , c='white', marker= 'o', alpha=1, linestyle='None')
axs[0].plot(df_FullyVaccinated)
# ==
# ax.set_xscale('log')
axs[1].set_ylim(0,10)
# ax.set_facecolor('black')
axs[1].grid(color='gray')
# plt.plot(df_FullyVaccinated)
# , c='white', marker= 'o', alpha=1, linestyle='None')
axs[1].plot(df_SemiVaccinated)
# ==
# ax.set_xscale('log')
axs[2].set_ylim(0,10)
# ax.set_facecolor('black')
axs[2].grid(color='gray')
# plt.plot(df_FullyVaccinated)
# , c='white', marker= 'o', alpha=1, linestyle='None')
axs[2].plot(df_Convalescent)
# ==
axs[0].axhline(y=1, color='black', linestyle='--')
axs[0].set_title('Полностью вакцинированные', c='black', pad=-20)
axs[1].axhline(y=1, color='black', linestyle='--')
axs[1].set_title('Вакцинированные одной дозой', c='black', pad=-20)
axs[2].axhline(y=1, color='black', linestyle='--')
axs[2].set_title('Переболевшие', c='black', pad=-20)
plt.savefig('pic_1_All_Symptomatic.svg')
fig, axs = plt.subplots(1,2, figsize=(2/3*20,5))
# ax.set_xscale('log')
axs[0].set_ylim(0,2)
# ax.set_facecolor('black')
axs[0].grid(color='gray')
# plt.plot(df_FullyVaccinated_Hospitalized)
# , c='white', marker= 'o', alpha=1, linestyle='None')
axs[0].plot(df_FullyVaccinated_Hospitalizied)
# ==
# ax.set_xscale('log')
axs[1].set_ylim(0,2)
# ax.set_facecolor('black')
axs[1].grid(color='gray')
# plt.plot(df_FullyVaccinated_Hospitalized)
# , c='white', marker= 'o', alpha=1, linestyle='None')
axs[1].plot(df_SemiVaccinated_Hospitalizied)
# ==
# # ax.set_xscale('log')
# axs[2].set_ylim(0,2)
# # ax.set_facecolor('black')
# axs[2].grid(color='gray')
# # plt.plot(df_FullyVaccinated_Hospitalized)
# # , c='white', marker= 'o', alpha=1, linestyle='None')
# axs[2].plot(df_Convalescent_Hospitalized)
# ==
axs[0].axhline(y=1, color='black', linestyle='--')
axs[0].set_title('Полностью вакцинированные', c='black', pad=-20)
axs[1].axhline(y=1, color='black', linestyle='--')
axs[1].set_title('Вакцинированные одной дозой', c='black', pad=-20)
# axs[2].axhline(y=1, color='black', linestyle='--')
# axs[2].set_title('Переболевшие', c='black', pad=-20)
plt.savefig('pic_1_All_Hospitalized.svg')
###Output
_____no_output_____ |
notebooks/Final Project Team 3.ipynb | ###Markdown
Data from the Social Security Administration is separated into many `.txt` files so we needed to combine them into a single `.csv` file. We only took data from 1910 to 2010 in ten year intervals to match the age correction data that we had.
###Code
# imports used in this notebook (assumed here; the original import cell is not shown in this excerpt)
import csv
import numpy as np
import matplotlib.pyplot as plt
from datascience import Table

write_csv = open('../data/data1910-2010.csv', 'w')
SA_writer = csv.writer( write_csv, delimiter = ',', quotechar = '', quoting = csv.QUOTE_NONE)
#change these to change the range of years to pull data from
start = 1910
end = 2010
interval = 10
SA_writer.writerow(['year', 'first_name', 'gender', 'count'])
for year in range(start, end + interval, interval):
f = open('../data/yob'+str(year)+'.txt', 'r')
read_csv = csv.reader(f, delimiter = ',')
for line in read_csv:
if line[0].isspace() == False:
SA_writer.writerow([year, str(line[0]), str(line[1]), int(line[2])])
f.close()
###Output
_____no_output_____
###Markdown
Next we created a table out of the joined data and then split it into separate tables for men and women. Then we found the total instances of each unique name and plotted the twenty highest totals as a baseline to compare against after making various adjustments.
###Code
first_names = Table.read_table('../data/data1910-2010.csv')
male_first = first_names.where('gender', 'M')
female_first = first_names.where('gender', 'F')
raw_total = first_names.select(['first_name', 'count']).group('first_name', collect = sum)
raw_sorted = raw_total.sort('count sum', descending = False)
first_20names = raw_sorted.take[raw_sorted.num_rows-20 : raw_sorted.num_rows]
plt.figure(1, figsize = (10,8))
plt.barh(first_20names.column('first_name'), first_20names.column('count sum'))
plt.title('First 20 Most Common First Names, no corrections')
plt.show()
###Output
_____no_output_____
###Markdown
Our raw data is tallying how many babies were given a particular name from year to year. We're trying to look at the most popular name in 2013. We need to account for people dying by reducing their contribution to the total. To do this we tried to use the following table provided by the authors.
###Code
alive = Table.read_table('../data/aging-curve.csv')
alive.show(5)
###Output
_____no_output_____
###Markdown
Initially we thought that the Male/Female columns were telling us how many people born in a certain decade were still alive in 2013. By using that column and finding the total number of baby names from each year, we turned the number of people alive into a percentage.
###Code
#split them into male and female tables
male_alive = alive.select(['Decade', 'Age', 'Male'])
female_alive = alive.select(['Decade', 'Age', 'Female'])
# takes a table and returns total names in each year (use this as total people born that year)
def sum_by_year(table):
total = []
for year in range(1910, 2020, 10):
year_total = 0
year_total = table.where('year', year).column('count').sum()
total.append(year_total)
return total
male_total_by_year = sum_by_year(male_first)
female_total_by_year = sum_by_year(female_first)
#exclude 1900 row
male_alive = male_alive.exclude(0)
female_alive = female_alive.exclude(0)
#add year total column
male_alive = male_alive.with_column('total', male_total_by_year)
female_alive = female_alive.with_column('total', female_total_by_year)
#take 'number still alive' column and divide by 'total by year' column
def find_percent(table, column):
count = table.column(column)
total = table.column('total')
percent = count/total
return table.with_column('percent_alive', percent)
male_percent = find_percent(male_alive, 'Male')
female_percent = find_percent(female_alive, 'Female')
female_percent
###Output
_____no_output_____
###Markdown
The percentages that we got didn't look right; only 5.2% of people born in 1990 were still alive. We tried getting actuarial data from the SSA, but we could only find webpages with tables. Since we didn't want to input the data by hand, we abandoned that idea. So the next idea was to use the numbers in the Male.1 and Female.1 columns as the percent of people born that decade who are still alive.
###Code
#2nd attempt to correct for deaths
#people still alive in 2013
alive = Table.read_table('../data/aging-curve.csv')
alive = alive.exclude(0) #dropping 1900 row
#take the 'Male.1' and 'Female.1' columns (hopefully they mean percentage of people still alive from that year)
correction_m = alive.column('Male.1')
correction_f = alive.column('Female.1')
#function that applies the correction values to a table
def age_correct(table, value):
corrected_counts = []
index = 0
for year in range(1910, 2020, 10):
counts = table.where('year', year).column('count') #take rows from 'count' column where the year is the same
counts = np.round(counts * value[index]) #the part where the correction is applied
index += 1 #index for the list of corrections, stops incrementing when you hit the year 2010
corrected_counts.extend(list(counts))
return corrected_counts
corrected_m = age_correct(male_first, correction_m)
male_first = male_first.with_column('age_corrected', corrected_m)
corrected_f = age_correct(female_first, correction_f)
female_first = female_first.with_column('age_corrected', corrected_f)
female_first.show(5)
###Output
_____no_output_____
###Markdown
So there were 22,848 Marys born in 1910 and probably about 73 still around in 2013. To check whether this was reasonable, we summed the age_corrected column for men and women where the year was 1910 and added the two together.
###Code
female_1910_sum = female_first.where('year', 1910).column('age_corrected').sum()
male_1910_sum = male_first.where('year', 1910).column('age_corrected').sum()
female_1910_sum + male_1910_sum
###Output
_____no_output_____
###Markdown
The 2010 US Census shows there were about 55k people 100 years old and older still alive; we're 'a bit' off unless the 'and older' crowd can make up the difference.  Moving on...we applied the age correction to the data.
###Code
male_total = male_first.select(['first_name', 'age_corrected']).group('first_name', collect = sum)
female_total = female_first.select(['first_name', 'age_corrected']).group('first_name', collect = sum)
joined_table = male_total.append(female_total)
sorted_first = joined_table.sort('age_corrected sum', descending = True)
#only used to plot top 20 first names
top20 = joined_table.sort('age_corrected sum', descending = False)
top20 = top20.take[joined_table.num_rows-20 : joined_table.num_rows]
sorted_first
#need this number later when we find the probability of each first name
total_names = joined_table.column('age_corrected sum').sum()
###Output
_____no_output_____
###Markdown
The 20 names with the highest totals after the age adjustment. Michael jumped from 4th to 1st and some re-ordering at the bottom.
###Code
plt.figure(1, figsize = (10,8))
plt.barh(top20.column('first_name'), top20.column('age_corrected sum'))
plt.title('First 20 Most Common First Names, age corrected')
plt.show()
###Output
_____no_output_____
###Markdown
The authors also accounted for names not registered with the SSA, by weighting foreign names based on their ethnicity's percentage of the total population. They only adjusted Hispanic names because they were the most numerous foreign-born population. We couldn't figure out how to do this with the data from the SSA, because the ethnicity of the name isn't flagged. Doing this manually would require us to go through every name for every year and judge whether a particular name sounded Hispanic or not, so we skipped this step. Now, looking at the last names data provided by the authors, they did provide information about the probable race of a person with a particular last name.
###Code
surnames = Table.read_table('../data/surnames.csv')
surnames.sort('pcthispanic', descending = True)
###Output
_____no_output_____
###Markdown
We had state-by-state total and Hispanic population data, so our plan was to find the percentage of the U.S. population that is Hispanic and apply it to the count of a last name where `'pcthispanic' >= 80`, as an initial guess for a threshold.
###Code
state_pops = Table.read_table('../data/state-pop.csv')
total = list(state_pops.column('totalPop'))
hisp = list(state_pops.column('hispPop'))
#the numbers in the table have commas, preventing type-casting and summing
#this function takes a column, removes commas, then sums the column
def sum_columns(lists):
column_total = 0
for i in range(len(lists)):
column_total += int(lists[i].replace(',', ''))
return column_total
#finding the percent Hispanic population in the U.S.
total_pop = sum_columns(total)
hisp_pop = sum_columns(hisp)
per_hisp = hisp_pop / total_pop
per_hisp
###Output
_____no_output_____
###Markdown
To work with the surnames table, NaN values need to be removed and the 'numbers' need to be converted to actual numbers. This next section takes a long time and eats memory; the output will be saved to `../output/surnames_no_nan.csv` and then loaded in a later cell.
###Code
#Modified Chris Pyles' code to remove NAN from daredevil lab; original copyright Christopher Pyles <[email protected]>
def remove_nan(t):
def checkNotnan(val):
if (val!=val) or (val=='nan') or (val=='NAN') or (val=='NaN') or (val=='(S)'):
return False
return True
labels = t.labels
for label in labels:
t = t.where(label, checkNotnan)
return t
surnames = surnames.drop(['pctwhite', 'pctblack', 'pctapi', 'pctaian', 'pct2prace']) #drop these so we don't lose rows with high pcthispanic just because that row contains a NAN
surnames = remove_nan(surnames)
surnames.to_csv('../output/surnames_no_nan.csv')#save to file so we can continue from here later
#loading the NAN removed table
lastnames = Table.read_table('../output/surnames_no_nan.csv')
#functions that convert table entries to numbers
def to_int(entry):
if type(entry) == int:
return entry
return int(entry)
def to_float(entry):
if type(entry) == float:
return entry
return float(entry)
count = lastnames.apply(to_int, 'count')
lastnames = lastnames.drop('count')
lastnames = lastnames.with_columns('count', count)
pcthisp = lastnames.apply(to_float, 'pcthispanic')
lastnames = lastnames.drop('pcthispanic')
lastnames = lastnames.with_columns('pcthispanic', pcthisp)
###Output
_____no_output_____
###Markdown
This part increases the count of a last name by 16% if `'pcthispanic'` is 80 or greater.
###Code
threshold = 80 #value in 'pcthispanic' column has to be >= this for their count to be adjusted
def adj_hisp(entry):
return entry * (1 + per_hisp)
#pull out separate columns from table and join them as a matrix
pcthisp = list(lastnames.column('pcthispanic'))
count = list(lastnames.column('count'))
stack = np.stack((pcthisp, count), axis = -1)
#perform the adjustment
for row in stack:
if row[0] >= threshold:
row[1] = adj_hisp(row[1])
#make a list with the adjusted values
adjusted_count = []
for i in range(len(stack)):
adjusted_count.append(int(stack[i,1]))
lastname = lastnames.with_column('adjusted_count', adjusted_count)
lastname
###Output
_____no_output_____
###Markdown
Next the authors found the probability of having a certain first or last name by dividing each count by the total population. We also kept only the first 100 names from each table after sorting by probability.
###Code
def first_to_percent(entry):
return entry / total_names
def last_to_percent(entry):
return entry / total_pop
first_prob = sorted_first.apply(first_to_percent, 'age_corrected sum')
firstname = sorted_first.with_column('probability', first_prob)
last_prob = lastname.apply(last_to_percent, 'adjusted_count')
lastname = lastname.with_column('probability', last_prob)
firstname
first_100 = firstname.take[:100]
first_100.to_csv('../output/first_100.csv')
sort_last = lastname.sort('probability', descending = True)
last_100 = sort_last.take[:100]
last_100.to_csv('../output/last_100.csv')
###Output
_____no_output_____
###Markdown
Now we make two tables with just names and probabilities, convert them to arrays, and combine them into another array whose elements are the probability of a first name times the probability of a last name. Then we make a table out of the probability array.
###Code
#un-comment below to load output of previous cells and continue from here
# first_100 = Table.read_table('../output/first_100.csv')
# last_100 = Table.read_table('../output/last_100.csv')
lastnames = list(last_100.column('name'))
last_prob = list(last_100.column('probability'))
first_names = list(first_100.column('first_name'))
first_prob = list(first_100.column('probability'))
#this part converts last name strings to lowercase and then capitalizes the first letter
last_names = []
for element in lastnames:
lower = element.lower()
proper = lower.capitalize()
last_names.append(proper)
name_array = np.zeros([100, 100])
for first in range(len(first_prob)):
for last in range(len(last_prob)):
name_array[first, last] = first_prob[first] * last_prob[last]
#original copyright Christopher Pyles <[email protected]>
import pandas as pd
df = pd.DataFrame(name_array)
string_labels = list(map(str, df.columns))
df = df.rename({i:j for i, j in zip(df.columns, string_labels)}, axis=1)
name_table = Table.from_df(df)
column_labels = name_table.labels
name_table = name_table.relabel(column_labels, last_names)
name_table = name_table.with_column('First Names', first_names)
name_table = name_table.move_to_start('First Names')
name_table
###Output
_____no_output_____
###Markdown
This table says you have a `9.7e-04` chance of being named Michael Smith, a `7.6e-04` chance of being named Michael Johnson, etc. It assumes first and last names are independent variables, so the probability of having a certain first and last name combination is just the product of the individual probabilities. But names are not random combinations of possible names. The authors tried to correct for this by using a matrix from a study by another researcher ([Lee Hartman](http://mypage.siu.edu/lhartman/johnsmith.html)). This is their correction matrix.
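A compact way to write the naive estimate and the correction applied further below (using the adjustment percentages $a_{fl}$ from the matrix) is roughly $$P(\text{first } f,\ \text{last } l) \approx P(f)\,P(l), \qquad P_{\text{adj}}(f,l) \approx P(f)\,P(l)\left(1 + \frac{a_{fl}}{100}\right),$$ with the final table also scaling these values by $10^6$ to match the order of magnitude of the authors' table.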
###Code
adjustments = Table.read_table('../data/adjustments.csv')
adjustments
###Output
_____no_output_____
###Markdown
The ethnicity adjustment table only has certain first name + last name combinations; in order to use it, we need to make our table match those names.
###Code
#selecting the last names in the correction matrix; happily the order of last names match
last_names_to_keep = ['Miller' , 'Anderson', 'Martin', 'Smith' , 'Thompson',
'Wilson' , 'Moore' , 'White' , 'Taylor' , 'Davis' ,
'Johnson', 'Brown' , 'Jones' , 'Thomas' , 'Williams',
'Jackson', 'Lee' , 'Garcia', 'Martinez', 'Rodriguez']
reduced = name_table.select(last_names_to_keep)
reduced = reduced.with_column('First Names', first_names)
reduced = reduced.move_to_start('First Names')
#selecting the first names in the correction matrix
def drop_first_names(entry):
to_keep = ['John' , 'Michael', 'James' , 'Robert', 'David' ,
'Mary' , 'William', 'Richard', 'Thomas', 'Jennifer',
'Patricia', 'Joseph' , 'Linda' , 'Maria' , 'Charles' ,
'Barbara' , 'Mark' , 'Daniel' , 'Susan' , 'Elizabeth']
for name in to_keep:
if entry == name:
return True
return False
#the order of first names in the correction matrix
first_names_order = [2, 0, 1, 3, 4, 6, 5, 8, 9, 12, 15, 7, 13, 19, 11, 17, 14, 10, 18, 16]
matching_size = reduced.where('First Names', drop_first_names)
matching = matching_size.take(first_names_order) #matching the order of rows in the correction matrix
#building the final adjusted table of first and last names
result = Table.empty()
first_names_to_keep = ['John' , 'Michael', 'James' , 'Robert', 'David' ,
'Mary' , 'William', 'Richard', 'Thomas', 'Jennifer',
'Patricia', 'Joseph' , 'Linda' , 'Maria' , 'Charles' ,
'Barbara' , 'Mark' , 'Daniel' , 'Susan' , 'Elizabeth']
i = 1
for label in last_names_to_keep:
a = matching.column(i)
b = 1 + adjustments.column(i)/100
c = np.round(a*b*1e6, 1) #multiplied by 1e6 to match order of magnitude of author's final table
result = result.with_columns(label, c)
i += 1
result = result.with_column('First Names', first_names_to_keep)
result = result.move_to_start('First Names')
###Output
_____no_output_____
###Markdown
Finally, we tried to reorder the table, yet again, to match the authors' final table, but some of the names in their table are not in the `adjustments` matrix. We were not sure why they dropped the names they did, where these new names with no values came from, nor why they added them in the first place.
###Code
result.to_csv('../output/result.csv')
result.show()
###Output
_____no_output_____ |
In Class Projects/Project 2 - Chapter 2 - Working With Lists.ipynb | ###Markdown
Chapter 2: Working With Lists. Much of the remainder of this book is dedicated to using data structures to produce analysis that is elegant and efficient. To use the words of economics, you are making a long-term investment in your human capital by working through these exercises. Once you have invested in these fixed costs, you can work with data at low marginal cost. If you are familiar with other programming languages, you may be accustomed to working with arrays. An array must be cast to house particular data types (_float_, _int_, _string_, etc.). By default, Python works with dynamic lists instead of arrays. Dynamic lists are not cast as a particular type. Working with Lists|New Concepts | Description|| --- | --- || Dynamic List | A dynamic list is encapsulated by brackets _([])_. A list is mutable. Elements can be added to or deleted from a list on the fly.|| List Concatenation | Two lists can be joined together in the same manner that strings are concatenated. || List Indexing | Lists are indexed with the first element being indexed as zero and the last element as the length of (the number of elements in) the list less one. Indexes are called using brackets – i.e., _lst[0]_ calls the 0th element in the list. |In later chapters, we will combine lists with dictionaries to build essential data structures. We will also work with more efficient and convenient data structures using the numpy and pandas libraries. Below we make our first lists. One will be empty. Another will contain integers. Another will have floats. Another strings. Another will mix these:
###Code
#lists.py
empty_list = []
int_list = [1,2,3,4,5]
float_list = [1.0,2.0,3.0,4.0,5.0]
string_list = ["Many words", "impoverished meaning"]
mixed_list = [1,2.0, "Mix it up"]
print(empty_list)
print(int_list)
print(float_list)
print(string_list)
print(mixed_list)
###Output
[]
[1, 2, 3, 4, 5]
[1.0, 2.0, 3.0, 4.0, 5.0]
['Many words', 'impoverished meaning']
[1, 2.0, 'Mix it up']
###Markdown
Often we will want to transform lists. In the following example, we will concatenate two lists, which means we will join the lists together:
###Code
#concatenateLists
list1 = [5, 4, 9, 10, 3, 5]
list2 = [6, 3, 2, 1, 5, 3]
join_lists = list1 + list2
print("list1:", list1)
print("list2:", list2)
print(join_lists)
###Output
list1: [5, 4, 9, 10, 3, 5]
list2: [6, 3, 2, 1, 5, 3]
[5, 4, 9, 10, 3, 5, 6, 3, 2, 1, 5, 3]
###Markdown
We have joined the lists together to make one long list. We can already observe one way in which Python will be useful for helping us to organize data. If we were doing this in a spreadsheet, we would have to identify the row and column values of the elements, copy and paste the desired values into new rows, or enter formulas into cells. Python accomplishes this for us with much less work. For a list of numbers, we will usually perform some arithmetic operation or categorize the values in order to identify meaningful subsets within the data. This requires accessing the elements, which Python allows us to do efficiently. For Loops and _range()_| New Concepts | Description || --- | --- || _list(obj)_ | List transforms an iterable object, such as a tuple or set, into a dynamic list. || _range(j, k, l)_ | Identifies a range of integers from _j_ to _k–1_ separated by some interval _l_. ||_len(obj)_ | Measures the length of an iterable object. |We can use a for loop to execute this task more efficiently. As we saw in the last chapter, the for loop executes a block of code for each element in a series: for element in list. Often, this list is a range of numbers that represent the indices of a dynamic list. For this purpose we call:
###Code
for i in range(j, k, l):
<execute script>
###Output
_____no_output_____
###Markdown
The for loop cycles through all integers of interval _l_ between _j_ and *k - 1*, executing a script for each value. This script may explicitly use the value _i_. If you do not specify a starting value, _j_, the range function assumes that you are counting from _0_ up to (but not including) the single value you passed. Likewise, if you do not specify an interval, _l_, range assumes that this interval is _1_. Thus, _for i in range(k)_ is interpreted as _for i in range(0, k, 1)_. We will again use the loop in its simplest form, cycling through numbers from _0_ to *(k – 1)*, where the length of the list is the value _k_. These cases are illustrated below in _range.py_.
###Code
#range.py
list1 = list(range(9))
list2 = list(range(-9,9))
list3 = list(range(-9,9,3))
print(list1)
print(list2)
print(list3)
###Output
_____no_output_____
###Markdown
The for loop will automatically identify the elements contained in _range()_ without requiring you to call _list()_. This is illustrated below in _forLoopAndRange.py_.
###Code
#forLoopAndRange.py
for i in range(10):
print(i)
###Output
_____no_output_____
###Markdown
Having printed *i* for all *i in range(0, 10, 1)*, we produce the set of integers from 0 to 9. If we were only printing index numbers from a range, for loops would not be very useful. For loops can be used to produce a wide variety of outputs. Often, you will call a for loop to cycle through the index of a particular array. Since arrays are indexed starting with 0 and for loops also assume 0 as an initial value, cycling through a list with a for loop is straightforward. For a list named *A*, just use the command:
###Code
for i in range(len(A)):
<execute script>
###Output
_____no_output_____
###Markdown
This command will call all integers between 0 and 1 less than the length of _A_. In other words, it will call all indices associated with _A_.
###Code
#copyListElementsForLoop.py
list1 = [5, 4, 9, 10, 3, 5]
list2 = [6, 3, 2, 1, 5, 3]
print("list1 elements:", list1[0], list1[1], list1[2], list1[3], list1[4])
print("list2 elements:", list2[0], list2[1], list2[2], list2[3], list2[4])
list3 = []
j = len(list1)
for i in range(j):
list3.append(list1[i])
k = len(list2)
for i in range(k):
list3.append(list2[i])
print("list3 elements:", list3)
###Output
_____no_output_____
###Markdown
Creating a New List with Values from Other Lists| New Concepts | Description || --- | --- || List Methods i.e., _.append()_, _.insert()_ | The list methods append and insert increase the length of a list by adding an element to the list. || If Statements | An if statement executes the block of code contained in it if the conditions stipulated by the if statement are met (they return True). || Else Statement | In the case that the conditions stipulated by an if statement are not met, an else statement executes an alternate block of code. | | Operator i.e., ==, !=, <, >, <=, >= | The operator indicates the condition relating two variables that is to be tested. |We can extend the exercise by summing the ith elements in each list. In the exercise below, _list3_ is the sum of the ith elements from _list1_ and _list2_.
###Code
#addListElements.py
list1 = [5, 4, 9, 10, 3, 5]
list2 = [6, 3, 2, 1, 5, 3]
print("list1 elements:", list1[0], list1[1], list1[2], list1[3], list1[4])
print("list2 elements:", list2[0], list2[1], list2[2], list2[3], list2[4])
list3 = []
j = len(list1)
for i in range(j):
list3.append(list1[i] + list2[i])
print("list3:", list3)
###Output
_____no_output_____
###Markdown
In the last exercise, we created an empty list, _list3_. We could not fill the list by calling an element in it directly, as no elements yet exist in the list. Instead, we use the append method that is owned by the list object. Alternatively, we can use the insert method. It takes the form _list.insert(index, object)_. This is shown in a later example. We appended the summed values of the first two lists in the order that the elements are ranked. We could have summed them in the opposite order by summing element 5, then 4, ..., then 0.
###Code
#addListElements.py
list1 = [5, 4, 9, 10, 3, 5]
list2 = [6, 3, 2, 1, 5, 3]
print("list1 elements:", list1[0], list1[1], list1[2], list1[3], list1[4])
print("list2 elements:", list2[0], list2[1], list2[2], list2[3], list2[4])
list3 = []
j = len(list1)
for i in range(j):
list3.insert(0,list1[i] + list2[i])
print("list3:", list3)
###Output
_____no_output_____
###Markdown
In the next exercise we will use a function that we have not used before. We will check the length of each list whose elements are summed. We want to make sure that if we call an index from one list, it exists in the other. We do not want to call a list index if it does not exist; that would produce an error. We can check if a statement is true using an if statement. As with the for loop, the if statement is followed by a colon. This tells the program that the execution below or in front of the if statement depends upon the truth of the condition specified. The code that follows below an if statement must be indented, as this identifies which block of code is subject to the statement.
###Code
if True:
print("execute script")
###Output
execute script
###Markdown
If the statement returns _True_, then the commands that follow the if-statement will be executed. Though not stated explicitly, we can think of the program as passing over the if statement to the remainder of the script:
###Code
if True:
print("execute script")
else:
pass
###Output
execute script
###Markdown
If the statement returns _False_, then the program will continue reading the script.
###Code
if False:
print("execute script")
else:
print("statement isn't True")
###Output
statement isn't True
###Markdown
Since the condition is _False_, the if block is skipped and only the else block's message is printed. We will want to check if the lengths of two different lists are the same. To check that a variable has a stipulated value, we use two equals signs. Using _==_ allows the program to compare two values rather than setting the value of the variable on the left, as would occur with only one equals sign. Following the if statement is a for loop. If the lengths of _list1_ and _list2_ are equal, the program will set the ith element of _list3_ equal to the sum of the ith elements from _list1_ and _list2_. In this example, the for loop will cycle through index values 0, 1, 2, 3, 4, and 5. We can take advantage of the for loop to use _.insert()_ in a manner that replicates the effect of our use of _append()_. We will insert the sum of the ith elements of _list1_ and _list2_ at the ith element of _list3_.
###Code
#addListElements3.py
list1 = [5, 4, 9, 10, 3, 5]
list2 = [6, 3, 2, 1, 5, 3]
print("list1 elements:", list1[0], list1[1], list1[2], list1[3], list1[4])
print("list2 elements:", list2[0], list2[1], list2[2], list2[3], list2[4])
list3 = []
j = len(list1)
if j == len(list2):
for i in range(0, len(list2)):
list3.insert(i,list1[i] + list2[i])
print("list3:", list3)
###Output
list1 elements: 5 4 9 10 3
list2 elements: 6 3 2 1 5
list3: [11, 7, 11, 11, 8, 8]
###Markdown
The if condition may be followed by an else statement. This tells the program to run a different command if the condition of the if statement is not met. In this case, we want the program to tell us why the condition was not met. In other cases, you may want to create other if statements to create a tree of possible outcomes. Below we use an if-else statement to identify when lists are not the same length. We remove the last element from _list2_ to create lists of different lengths:
###Code
#addListElements4.py
list1 = [5, 4, 9, 10, 3, 5]
list2 = [6, 3, 2, 1, 5]
print("list1 elements:", list1[0], list1[1], list1[2], list1[3], list1[4])
print("list2 elements:", list2[0], list2[1], list2[2], list2[3], list2[4])
list3 = []
j = len(list1)
if j == len(list2):
for i in range(0, len(list2)):
list3.insert(i,list1[i] + list2[i])
else:
print("Lists are not the same length, cannot perform element-wise operations.")
print("list3:", list3)
###Output
list1 elements: 5 4 9 10 3
list2 elements: 6 3 2 1 5
Lists are not the same length, cannot perform element-wise operations.
list3: []
###Markdown
Since the condition passed to the if statement was false, no values were appended to *list3*. Removing List Elements| New Concepts | Description || --- | --- || _del_ | The command del is used to delete an element from a list. ||List Methods i.e., _.pop()_, _.remove()_, _.append()_ | Lists contain methods that can be used to modify the list. These include _.pop()_, which removes an element (the last one by default) and returns it so that it can be saved as a separate object. Another method, _.remove()_, deletes an explicitly identified element. _.append(x)_ adds an additional element at the end of the list. | Perhaps you want to remove an element from a list. There are a few means of accomplishing this. Which one you choose depends on the ends desired.
###Code
#deleteListElements.py
list1 = ["red", "blue", "orange", "black", "white", "golden"]
list2 = ["nose", "ice", "fire", "cat", "mouse", "dog"]
print("lists before deletion: ")
for i in range(len(list1)):
print(list1[i],"\t", list2[i])
del list1[0]
del list2[5]
print()
print("lists after deletion: ")
for i in range(len(list1)):
print(list1[i], "\t",list2[i])
###Output
lists before deletion:
red nose
blue ice
orange fire
black cat
white mouse
golden dog
lists after deletion:
blue nose
orange ice
black fire
white cat
golden mouse
###Markdown
We have deleted _"red"_ from _list1_ and _"dog"_ from _list2_. By printing the elements of each list once before and once after one element is deleted from each, we can note the difference in the lists over time. What if we knew that we wanted to remove the elements but did not want to check what index each element is associated with? We can use the remove function owned by each list. We will tell _list1_ to remove _"red"_ and _list2_ to remove _"dog"_.
###Code
#removeListElements.py
list1 = ["red", "blue", "orange", "black", "white", "golden"]
list2 = ["nose", "ice", "fire", "cat", "mouse", "dog"]
print("lists before deletion: ")
for i in range(len(list1)):
print(list1[i],"\t", list2[i])
list1.remove("red")
list2.remove("dog")
print()
print("lists after deletion: ")
for i in range(len(list1)):
print(list1[i], "\t",list2[i])
###Output
lists before deletion:
red nose
blue ice
orange fire
black cat
white mouse
golden dog
lists after deletion:
blue nose
orange ice
black fire
white cat
golden mouse
###Markdown
We have achieved the same result using a different means. What if we wanted to keep track of the element that we removed? Before deleting or removing the element, we could assign the value to a different object. Let's do this before using the remove function:
###Code
#removeAndSaveListElementsPop.py
#define list1 and list2
list1 = ["red", "blue", "orange", "black", "white", "golden"]
list2 = ["nose", "ice", "fire", "cat", "mouse", "dog"]
#identify what is printed in for loop
print("lists before deletion: ")
if len(list1) == len(list2):
# use for loop to print lists in parallel
for i in range(len(list1)):
print(list1[i],"\t", list2[i])
# remove list elements and save them as variables '_res"
list1_res = "red"
list2_res = "dog"
list1.remove(list1_res)
list2.remove(list2_res)
print()
# print lists again as in lines 8-11
print("lists after deletion: ")
if len(list1) == len(list2):
for i in range(len(list1)):
print(list1[i], "\t",list2[i])
print()
print("Res1", "\tRes2")
print(list1_res, "\t" + (list2_res))
###Output
lists before deletion:
red nose
blue ice
orange fire
black cat
white mouse
golden dog
lists after deletion:
blue nose
orange ice
black fire
white cat
golden mouse
Res1 Res2
red dog
###Markdown
An easier way to accomplish this is to use *.pop()*, another method owned by each list.
###Code
#removeListElementsPop.py
#define list1 and list2
list1 = ["red", "blue", "orange", "black", "white", "golden"]
list2 = ["nose", "ice", "fire", "cat", "mouse", "dog"]
#identify what is printed in for loop
print("lists before deletion: ")
# use for loop to print lists in parallel
for i in range(len(list1)):
print(list1[i],"\t", list2[i])
# remove list elements and save them as variables '_res"
list1_res = list1.pop(0)
list2_res = list2.pop(5)
print()
# print lists again as in lines 8-11
print("lists after deletion: ")
for i in range(len(list1)):
print(list1[i], "\t",list2[i])
print()
print("Res1", "\tRes2")
print(list1_res, "\t" + (list2_res))
###Output
lists before deletion:
red nose
blue ice
orange fire
black cat
white mouse
golden dog
lists after deletion:
blue nose
orange ice
black fire
white cat
golden mouse
Res1 Res2
red dog
###Markdown
More with For Loops When you loop through element values, it is not necessary that these are consecutive. You may skip values at some interval. The next example returns to the earlier _addListElements.py_ examples. This time, we pass the number 2 as the third argument to _range()_. Now range will count by twos from _0_ to _j – 1_. This will make _list3_ shorter than before.
###Code
#addListElements5.py
list1 = [5, 4, 9, 10, 3, 5]
list2 = [6, 3, 2, 1, 5, 3]
print("list1 elements:", list1[0], list1[1], list1[2], list1[3], list1[4])
print("list2 elements:", list2[0], list2[1], list2[2], list2[3], list2[4])
list3 = []
j = len(list1)
if j == len(list2):
for i in range(0, j, 2):
list3.append(list1[i] + list2[i])
else:
print("Lists are not the same length, cannot perform element-wise operations.")
print("list3:", list3)
###Output
list1 elements: 5 4 9 10 3
list2 elements: 6 3 2 1 5
list3: [11, 11, 8]
###Markdown
We entered the sum of elements 0, 2, and 4 from _list1_ and _list2_ into *list3*. Since these were appended to *list3*, they are indexed as *list3[0]*, *list3[1]*, and *list3[2]*. For loops in Python can call, in sequence, the elements of objects that are iterable. These include lists, strings, keys and values from dictionaries, as well as the range function we have already used. You may use a for loop that calls each element in the list without identifying the index of each element.
###Code
obj = ["A", "few", "words", "to", "print"]
for x in obj:
print(x)
###Output
A
few
words
to
print
###Markdown
Each $x$ called is an element from _obj_. Where before we passed _len(list1)_ to the for loop, we now pass _list1_ itself to the for loop and append each element $x$ to _list2_.
###Code
#forLoopWithoutIndexer.py
list1 = ["red", "blue", "orange", "black", "white", "golden"]
list2 = []
for x in list1:
list2.append(x)
print("list1\t", "list2")
k = len(list1)
j = len(list2)
if len(list1) == len(list2):
for i in range(0, len(list1)):
print(list1[i], "\t", list2[i])
###Output
list1 list2
red red
blue blue
orange orange
black black
white white
golden golden
###Markdown
Sorting Lists, Errors, and Exceptions| New Concepts | Description || --- | --- || _sorted()_ | The function sorted() sorts a list in order of numerical or alphabetical value. || passing errors i.e., _try_ and _except_ | A try statement will pass over an error if one is generated by the code in the try block. In the case that an error is passed, code from the except block will be called. This should typically identify the type of error that was passed. |We can sort lists using the _sorted()_ function, which orders a list either numerically or alphabetically. We reuse lists from the last examples to show this.
###Code
#sorting.py
list1 = [5, 4, 9, 10, 3, 5]
list2 = ["red", "blue", "orange", "black", "white", "golden"]
print("list1:", list1)
print("list2:", list2)
sorted_list1 = sorted(list1)
sorted_list2 = sorted(list2)
print("sortedList1:", sorted_list1)
print("sortedList2:", sorted_list2)
###Output
list1: [5, 4, 9, 10, 3, 5]
list2: ['red', 'blue', 'orange', 'black', 'white', 'golden']
sortedList1: [3, 4, 5, 5, 9, 10]
sortedList2: ['black', 'blue', 'golden', 'orange', 'red', 'white']
###Markdown
What happens if we try to sort a list that has both strings and integers? You might expect that Python would sort integers and then strings or vice versa. If you try this, you will raise an error:
###Code
#sortingError.py
list1 = [5, 4, 9, 10, 3, 5]
list2 = ["red", "blue", "orange", "black", "white", "golden"]
list3 = list1 + list2
print("list1:", list1)
print("list2:", list2)
print("list3:", list3)
sorted_list1 = sorted(list1)
sorted_list2 = sorted(list2)
print("sortedList1:", sorted_list1)
print("sortedList2:", sorted_list2)
sorted_list3 = sorted(list3)
print("sortedList3:", sorted_list3)
print("Execution complete!")
###Output
list1: [5, 4, 9, 10, 3, 5]
list2: ['red', 'blue', 'orange', 'black', 'white', 'golden']
list3: [5, 4, 9, 10, 3, 5, 'red', 'blue', 'orange', 'black', 'white', 'golden']
sortedList1: [3, 4, 5, 5, 9, 10]
sortedList2: ['black', 'blue', 'golden', 'orange', 'red', 'white']
###Markdown
The script returns an error. If this error is raised during execution, it will interrupt the program. One way to deal with this is to ask Python to try to execute some script and to execute some other command if an error would normally be raised:
###Code
#sortingError.py
list1 = [5, 4, 9, 10, 3, 5]
list2 = ["red", "blue", "orange", "black", "white", "golden"]
list3 = list1 + list2
print("list1:", list1)
print("list2:", list2)
print("list3:", list3)
sorted_list1 = sorted(list1)
sorted_list2 = sorted(list2)
print("sortedList1:", sorted_list1)
print("sortedList2:", sorted_list2)
try:
sorted_list3 = sorted(list3)
print("sortedList3:", sorted_list3)
except:
print("TypeError: unorderable types: str() < int() "
"ignoring error")
print("Execution complete!")
###Output
list1: [5, 4, 9, 10, 3, 5]
list2: ['red', 'blue', 'orange', 'black', 'white', 'golden']
list3: [5, 4, 9, 10, 3, 5, 'red', 'blue', 'orange', 'black', 'white', 'golden']
sortedList1: [3, 4, 5, 5, 9, 10]
sortedList2: ['black', 'blue', 'golden', 'orange', 'red', 'white']
TypeError: unorderable types: str() < int() ignoring error
Execution complete!
###Markdown
We successfully avoided the error and instead called an alternate operation defined under except. The use for this will become more obvious as we move along. We will use except blocks from time to time and note the reason when we do. Slicing a List| New Concepts | Description || --- | --- || slice i.e., _list\[a:b\]_|A slice of a list is a copy of a portion (or all) of a list from index a to b – 1.|Sometimes, we may want to access several elements at once. Python allows us to do this with a slice. Technically, when you call a list in its entirety, you take a slice that includes the entire list. We can do this explicitly like this:
###Code
#fullSlice.py
some_list = [3, 1, 5, 6, 1]
print(some_list[:])
###Output
[3, 1, 5, 6, 1]
###Markdown
Using *some_list\[:\]* is equivalent to creating a slice using *some_list\[min_index: list_length\]* where *min_index = 0* and *list_length = len(some_list)*:
###Code
#fullSlice2.py
some_list = [3, 1, 5, 6, 1]
min_index = 0
max_index = len(some_list)
print("minimum:", min_index)
print("maximum:", max_index)
print("Full list using slice", some_list[min_index:max_index])
print("Full list without slice", some_list)
###Output
minimum: 0
maximum: 5
Full list using slice [3, 1, 5, 6, 1]
Full list without slice [3, 1, 5, 6, 1]
###Markdown
This is not very useful unless we use it to take a smaller subsection of a list. Below, we create a new list that is a subset of the original list. As you might expect by now, *full_list\[7\]* calls the 8th element. Since indexing begins with the 0th element, this element is actually counted as the 7th element. Also, similar to the command *for i in range(3, 7)*, the slice calls elements 3, 4, 5, and 6:
###Code
#partialSlice.py
min_index = 3
max_index = 7
full_list = [1, 2, 3, 4, 5, 6, 7, 8, 9]
partial_list = full_list[min_index:max_index]
print("Full List:", full_list)
print("Partial List:", partial_list)
print("full_list[7]:", full_list[7])
###Output
Full List: [1, 2, 3, 4, 5, 6, 7, 8, 9]
Partial List: [4, 5, 6, 7]
full_list[7]: 8
###Markdown
Nested For Loops| New Concepts | Description || --- | --- || Nested For Loops | A for loop may contain other for loops. They are useful for multidimensional data structures. |Creative use of for loops can save the programmer a lot of work. While you should be careful not to create so many layers of for loops and if statements that the code is difficult to interpret (“Flat is better than nested”), you should be comfortable with the structure of nested for loops and, eventually, their use in structures like dictionaries and generators. A useful way to become acquainted with the power of multiple for loops is to identify the result of each iteration of the nested for loops. In the code below, the first for loop will count from 0 to 4. For each value of *i*, the second for loop will cycle through values 0 to 4 for _j_.
###Code
#nestedForLoop.py
print("i", "j")
for i in range(5):
for j in range(5):
print(i, j)
###Output
i j
0 0
0 1
0 2
0 3
0 4
1 0
1 1
1 2
1 3
1 4
2 0
2 1
2 2
2 3
2 4
3 0
3 1
3 2
3 3
3 4
4 0
4 1
4 2
4 3
4 4
###Markdown
Often, we will want to employ values generated by for loops in a manner other than printing the values generated directly by the for loops. We may, for example, want to create a new value constructed from _i _ and _j _. Below, this value is constructed as the sum of _i _ and _j _.
###Code
#nestedForLoop.py
print("i", "j", "i+j")
for i in range(5):
for j in range(5):
val = i + j
print(i, j, val)
###Output
i j i+j
0 0 0
0 1 1
0 2 2
0 3 3
0 4 4
1 0 1
1 1 2
1 2 3
1 3 4
1 4 5
2 0 2
2 1 3
2 2 4
2 3 5
2 4 6
3 0 3
3 1 4
3 2 5
3 3 6
3 4 7
4 0 4
4 1 5
4 2 6
4 3 7
4 4 8
###Markdown
If we interpret the results as a table, we can better understand the intuition of for loops. Lighter shading indicates lower values of _i_ with shading growing darker as the value of _i_ increases.| | | | |__j__| | | | --- | --- | --- | --- | --- | --- | --- || | | __0__ | __1__ |__2__|__3__ | __4__ || | __0__ | 0 | 1 | 2 | 3 | 4 || | __1__ | 1 | 2 | 3 | 4 | 5 || __i__ | __2__ | 2 | 3 | 4 | 5 | 6 || | __3__ | 3 | 4 | 5 | 6 | 7 || | __4__ | 4 | 5 | 6 | 7 | 8 | Lists, Lists, and More Lists| New Concepts | Description || --- | --- || _min(lst)_ | The function _min()_ returns the lowest value from a list of values passed to it. || _max(lst)_ | The function _max()_ returns the highest value from a list of values passed to it. || generators i.e., _(val for val in lst)_ |Generators use an inline for loop to create an iterable data structure. |Lists have some convenient features. You can find the maximum and minimum values in a list with the _min()_ and _max()_ functions:
###Code
# minMaxFunctions.py
list1 = [20, 30, 40, 50]
max_list_value = max(list1)
min_list_value = min(list1)
print("maximum:", max_list_value, "minimum:", min_list_value)
###Output
maximum: 50 minimum: 20
###Markdown
We could have used a for loop to find these values. The program below performs the same task:
###Code
#minMaxFuntionsByHand.py
list1 = [20, 30, 40, 50]
# initial smallest value is infinite
# will be replaced if a value from the list is lower
min_list_val = float("inf")
# initial largest values is negative infinite
# will be replaced if a value from the list is higher
max_list_val = float("-inf")
for x in list1:
if x < min_list_val:
min_list_val = x
if x > max_list_val:
max_list_val = x
print("maximum:", max_list_val, "minimum:", min_list_val)
###Output
maximum: 50 minimum: 20
###Markdown
We chose to make the starting value of *min_list_val* large and positive and the starting value of *max_list_val* large and negative. The for loop cycles through the list and assigns the value, _x_, to *min_list_val* if the value is less than the current value assigned to *min_list_val*, and to *max_list_val* if the value is greater than the current value assigned to *max_list_val*. Earlier in the chapter, we constructed lists using the _list()_ function and by generating lists and setting values with _.append()_ and _.insert()_. We may also use a generator to create a list. Generators are convenient as they provide a compact means of creating a list that is easier to interpret. They follow the same format as the _list()_ function.
###Code
#listFromGenerator.py
generator = (i for i in range(20))
print(generator)
list1 = list(generator)
print(list1)
list2 = [2 * i for i in range(20)]
print(list2)
###Output
<generator object <genexpr> at 0x00000230D200C4A0>
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38]
|
src/16_EDA_features_nb14.ipynb | ###Markdown
Introduction- EDA of the features obtained in nb14.- In nb14 the features were built with per-molecule features in mind, so this notebook checks how they turned out.- The data used is the train data built in nb14; only 1000 samples were loaded. Import everything I need :)
###Code
import pandas as pd
import numpy as np
from plotly.offline import init_notebook_mode, iplot
import plotly.graph_objs as go
pd.set_option('display.max_columns', 100)
###Output
_____no_output_____
###Markdown
Preparation
###Code
path = './data_for_eda/nb14_X_sample.csv'
X = pd.read_csv(path)
X = X.drop(['Unnamed: 0'], axis=1)
X.head(3)
###Output
_____no_output_____
###Markdown
---> There are some NaN values... (`molecule_atom_index_1_dixt_std`)
###Code
_idx = X.isnull().sum(axis=0)!=0
X.isnull().sum(axis=0)[_idx]
###Output
_____no_output_____
###Markdown
---> A processing mistake in nb14 generated a lot of NaNs; nb14 will need to be edited. ---> Even with NaNs present, LightGBM still runs, but those rows are ignored, which looks like a problem. - Ways to solve the above problem: 1. Impute the NaNs with some value (e.g., 0). 1. Delete that `column`, i.e., the feature. Method 1- Impute the NaNs- Since these are std-type features, replace them with 0.
###Code
X1 = X.fillna(0.0)
X1.isnull().sum(axis=0)
###Output
_____no_output_____
###Markdown
---> The `NaN` values are gone! ---**plot**
###Code
feat = 'molecule_type_dist_std_diff'
trace0 = go.Scatter(y=X[feat], mode='markers', marker=dict(size=7))
trace1 = go.Scatter(y=X1[feat], mode='markers', marker=dict(size=3))
data = [trace0, trace1]
iplot(data)
###Output
_____no_output_____
###Markdown
Method2
###Code
_idx = X.isnull().sum(axis=0)!=0
feat_null_list = X.isnull().sum(axis=0)[_idx].index
feat_null_list
print('削除前', X.shape)
X_except_null_feat = X.drop(feat_null_list, axis=1)
print('削除後', X_except_null_feat.shape)
X_except_null_feat.isnull().sum(axis=0)
###Output
_____no_output_____ |
notebooks/2017-11-21(A mechanism for charging neurons).ipynb | ###Markdown
A mechanism for charging neurons. The question of how long the cue has to be sustained in order for the sequence to bootstrap itself is answered in this notebook.
###Code
import sys
sys.path.append('../')
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from mpl_toolkits.axes_grid1 import make_axes_locatable
%matplotlib inline
plt.rcParams["figure.figsize"] = [16,9]
sns.set(font_scale=3.0)
from network import run_network_recall, train_network, run_network_recall_limit
from connectivity import designed_matrix_sequences, designed_matrix_sequences_local
from analysis import get_recall_duration_for_pattern, get_recall_duration_sequence
###Output
_____no_output_____
###Markdown
The equation and two examples
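A brief sketch of where the formula in the next cell comes from (assuming the input current charges exponentially toward its asymptotic value $A$ with time constant $\tau_z$): the current follows $I(t) = A\,(1 - e^{-t/\tau_z})$, and setting $I(T_{\text{charge}}) = \theta$ at the threshold gives $$T_{\text{charge}} = \tau_z \ln\!\left(\frac{A}{A - \theta}\right),$$ which is finite only when $A > \theta$.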
###Code
A_list = [1, 2, 5, 10]
threshold = 0.5
tau_z = 0.050
tau_z_vector = np.arange(0.050, 1.050, 0.050)
for A in A_list:
T_charge = tau_z_vector * np.log(A / (A - threshold))
plt.plot(tau_z_vector, T_charge, '*-', markersize=13, label='A = ' + str(A))
plt.plot(tau_z_vector, tau_z_vector, ls='--', color='gray', label='identity')
plt.xlabel('tau_z')
plt.ylabel('Necessary time for charge')
plt.legend()
plt.axhline(0, ls='--', color='black');
###Output
_____no_output_____
###Markdown
The fact that the identity line lies above all the curves means that a cue of duration tau_z will be enough for all the sequences to kick-start themselves if we make T_cue = tau_z. Two examples: the first example does not work because the self-excitation value is too small; the second one corrects for that and successfully works.
###Code
N = 10
sequences = [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]]
self_excitation = 1
transition = 0.6
inhibition = 3.0
G = 200.0
tau_m = 0.010
T = 10.0
I_cue = 0
T_cue = 0.100
dt = 0.001
threshold = 0.5
tau_z = 0.250
w = designed_matrix_sequences(N, sequences, self_excitation=self_excitation, transition=transition,
inhbition=inhibition)
dic = run_network_recall(N, w, G, threshold, tau_m, tau_z, T, dt, I_cue, T_cue)
pattern = 2
x_history = dic['x']
duration = get_recall_duration_for_pattern(x_history, pattern, dt)
print('duration', duration)
time = np.arange(0, T, dt)
x_history = dic['x']
z_history = dic['z']
current_history = dic['current']
fig = plt.figure(figsize=(16, 12))
ax1 = fig.add_subplot(311)
ax2 = fig.add_subplot(312)
ax3 = fig.add_subplot(313)
patterns = [0, 1, 2]
# patterns = sequence
for pattern in patterns:
ax1.plot(time, x_history[:, pattern], label='x' + str(pattern))
ax2.plot(time, current_history[:, pattern], label='current' + str(pattern))
ax3.plot(time, z_history[:, pattern], label='z ' + str(pattern))
ax1.axhline(0, ls='--', color='black')
ax1.legend();
ax1.set_ylim([-0.1, 1.1])
ax3.set_ylim([-0.1, 1.1])
ax2.axhline(threshold, ls='--', color='black')
ax2.legend();
ax3.axhline(0, ls='--', color='black')
ax3.legend();
###Output
duration nan
###Markdown
This does not converge; it is in the A = 1 regime, where the current is not sustained long enough.
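As a sanity check (assuming the self-excitation weight plays the role of $A$ in the charging-time formula above): with $A = 1$, $\theta = 0.5$ and $\tau_z = 0.25$ s, $T_{\text{charge}} = 0.25\,\ln(2) \approx 0.17$ s, which is longer than the cue duration $T_{\text{cue}} = 0.1$ s used here. With $A = 2$ in the next example, $T_{\text{charge}} = 0.25\,\ln(2/1.5) \approx 0.07$ s, which fits inside the cue.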
###Code
N = 10
sequences = [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]]
self_excitation = 2
transition = 0.6
inhibition = 3.0
G = 200.0
tau_m = 0.010
T = 10.0
I_cue = 0
T_cue = 0.100
dt = 0.001
threshold = 0.5
tau_z = 0.250
w = designed_matrix_sequences(N, sequences, self_excitation=self_excitation, transition=transition,
inhbition=inhibition)
dic = run_network_recall(N, w, G, threshold, tau_m, tau_z, T, dt, I_cue, T_cue)
pattern = 2
x_history = dic['x']
duration = get_recall_duration_for_pattern(x_history, pattern, dt)
print('duration', duration)
time = np.arange(0, T, dt)
x_history = dic['x']
z_history = dic['z']
current_history = dic['current']
fig = plt.figure(figsize=(16, 12))
ax1 = fig.add_subplot(311)
ax2 = fig.add_subplot(312)
ax3 = fig.add_subplot(313)
patterns = [0, 1, 2]
# patterns = sequence
for pattern in patterns:
ax1.plot(time, x_history[:, pattern], label='x' + str(pattern))
ax2.plot(time, current_history[:, pattern], label='current' + str(pattern))
ax3.plot(time, z_history[:, pattern], label='z ' + str(pattern))
ax1.axhline(0, ls='--', color='black')
ax1.legend();
ax1.set_ylim([-0.1, 1.1])
ax3.set_ylim([-0.1, 1.1])
ax2.axhline(threshold, ls='--', color='black')
ax2.legend();
ax3.axhline(0, ls='--', color='black')
ax3.legend();
###Output
../network.py:5: RuntimeWarning: overflow encountered in exp
return 1.0/(1 + np.exp(-G * x))
|
Exploratory Analysis - Haberman cancer survival.ipynb | ###Markdown
**Introduction** I will be performing exploratory data analysis on this dataset. The dataset was obtained from a study conducted from 1958 to 1970 at the University of Chicago's Billings Hospital on the survival of patients who had undergone surgery for breast cancer. Objective of this project - To find out whether there is any correlation between survival rate and age- Did the cancer patients have a better survival chance as the years progressed?- Did the number of positive axillary nodes affect survival rates? First, let's import the required packages
###Code
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os # accessing directory structure
import matplotlib.pyplot as plt # plotting
import seaborn as sns
###Output
_____no_output_____
###Markdown
Next, we will import the csv file containing the Haberman Cancer Survival Dataset.
###Code
haberman = pd.read_csv('../input/haberman.csv/haberman.csv')
###Output
_____no_output_____
###Markdown
Get a quick view of the first 5 rows of the dataset.
###Code
haberman.head()
###Output
_____no_output_____
###Markdown
Check to see if there are any unknown values (i.e., NaN, NULL, etc.) in each column.
###Code
haberman.isnull().sum()
###Output
_____no_output_____
###Markdown
Change the values in status: 1 to survived and 2 to dead for better readability. Achieved through [pandas.DataFrame.replace()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.replace.html)
###Code
haberman['status'] = haberman.status.replace([1,2], ['survived','dead'])
###Output
_____no_output_____
###Markdown
Probability Density Function (PDF). The PDF describes the relative likelihood for a random variable to take on a given value. Here, the PDF is a great way for us to see the proportion of data points in each variable. For instance, we would like to see the age distribution of patients in the dataset who survived. Does age have an effect on survival chance? Let's see! First, we get an overview of the dataset using [pairplot](https://seaborn.pydata.org/generated/seaborn.pairplot.html).
###Code
g = sns.pairplot(haberman, hue = 'status')
g.map_diag(sns.distplot)
g.map_offdiag(plt.scatter)
g.add_legend()
g.fig.suptitle('FacetGrid plot', fontsize = 20)
g.fig.subplots_adjust(top=0.9);
###Output
_____no_output_____
###Markdown
**Summary from diagram** What we can see from here:- We only need to look at the histograms for the variables that are linked to survival rates (age, nodes, and year).- For age, we observed that people from 30 to 40 had better survival rates. People from 50 to 60 had a lower chance of survival, while those above 60 had an equal chance.- For year, we only observed a higher survival rate between 1960 and 1963. This does not tell us much, since there could be changes in how the data were collected, or there were simply fewer breast cancer patients admitted to the University of Chicago's Billings Hospital. - For nodes, we see better survival rates for patients that have 2 or fewer positive axillary nodes.
###Code
gg = sns.boxplot(x='status',y='nodes', data=haberman)
gg.set_yscale('log')
###Output
_____no_output_____
###Markdown
Boxplot. A boxplot is also a great way to represent the distribution of quantitative data. What we see here is that people who survived have a range of 0-7 nodes, while those that died have 0-25 nodes. Finding the correlation between age and survival rate - I will use [Pandas dataframe.corr()](https://www.geeksforgeeks.org/python-pandas-dataframe-corr/) for this purpose. Correlation between age and survival rate
###Code
age_corr = haberman
age_corr_dead = age_corr[age_corr['status'] == 'dead'].groupby(['age']).size().reset_index(name='count')
age_corr_dead.corr()
sns.regplot(x = 'age', y = 'count', data = age_corr_dead).set_title("Age vs Death count")
age_corr_survived = age_corr[age_corr['status'] == 'survived'].groupby(['age']).size().reset_index(name='count')
age_corr_survived.corr()
sns.regplot(x = 'age', y = 'count', data = age_corr_survived).set_title('Age vs Survived count')
###Output
_____no_output_____
###Markdown
For the correlation coefficient, a positive 1 means a perfect positive correlation, which means both variables move in the same direction: if one goes up, the other will go up. On the contrary, a negative 1 means the relationship between the two variables is negative 100% of the time. - For this study, we observed a weak negative correlation between age and survived counts and between age and death counts. We see a negative correlation coefficient of approximately -0.281 when comparing age with the number of patients that survived. Looking at the square of the correlation coefficient, R-squared, which represents the degree to which the variance of one variable is related to the variance of the second variable and is typically expressed in percentage terms, we observed a value of approximately 0.0790. This value shows that 7.9% of the variation in survived counts can be explained by age. A value of approximately 7% is observed when comparing death counts and age.- Hence, we can say that age has essentially no correlation with survival rates.
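As a minimal sketch (not part of the original analysis), the kind of r and R-squared values quoted here can be computed directly with numpy from the `age`/`count` tables built above:
###Code
# hypothetical check: Pearson r between age and survived count, and its square (R-squared)
r = np.corrcoef(age_corr_survived['age'], age_corr_survived['count'])[0, 1]
r_squared = r ** 2
print('r:', round(r, 3), 'R-squared:', round(r_squared, 3))
###Output
_____no_output_____
###Markdown
The same numbers can be read off the `.corr()` tables above; the sketch only makes the step from r to R-squared explicit.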
###Code
year_corr = haberman
year_corr_dead = year_corr[year_corr['status'] == 'dead'].groupby(['year']).size().reset_index(name='count')
year_corr_dead.corr()
sns.regplot(x = 'year', y = 'count', data = year_corr_dead).set_title('Year vs death count')
year_corr_survived = year_corr[year_corr['status'] == 'survived'].groupby(['year']).size().reset_index(name='count')
year_corr_survived.corr()
sns.regplot(x = 'year', y = 'count', data = year_corr_survived).set_title('Year vs Survived count')
###Output
_____no_output_____
###Markdown
As we compare year with survival rates, we observed a negative correlation once again, similar to what was seen for age. R-squared values of 0.144 and 0.406 were observed for death and survived counts. This means that roughly 14% of the variation in death counts was related to the year, while approximately 41% of the variation in survived counts was related to the year.- Compared to age, we see higher negative correlation values here. Is this something that we can rely on? Well, we did observe more data points when comparing age with survival rate, and the small number of data points here could explain the higher negative correlation values.
###Code
node_corr = haberman
node_corr_dead = node_corr[node_corr['status'] == 'dead'].groupby(['nodes']).size().reset_index(name = 'count')
node_corr_dead.corr()
sns.regplot(x = 'nodes', y = 'count', data = node_corr_dead).set_title('No of positive axillary nodes vs Death count')
node_corr_survived = node_corr[node_corr['status'] == 'survived'].groupby(['nodes']).size().reset_index(name ='count')
node_corr_survived.corr()
sns.regplot(x = 'nodes', y = 'count', data =node_corr_survived).set_title('No of positive axillary nodes vs Survived patients')
###Output
_____no_output_____ |
Data Preprocessing/time_series_features.ipynb | ###Markdown
Data Imports
###Code
# imports needed by this notebook (pandas plus the holiday / business-day helpers used below)
import os
import glob
from datetime import time
import pandas as pd
from pandas.tseries.holiday import USFederalHolidayCalendar
from pandas.tseries.offsets import CustomBusinessDay
'''Run this cell to produce a concatenation of csvs in the directory in reverse order'''
# all_files = glob.glob(os.path.join('', '*.csv'))
# dtypes = {'user_type':str, 'bike_share_for_all_trip':str, 'rental_access_method':str}
# data = pd.concat((pd.read_csv(f, dtype=dtypes).iloc[::-1].reset_index(drop=True) for f in all_files), ignore_index=True)
data = pd.read_csv('aggregated_dataset_updated.csv')
data.head()
###Output
C:\Users\Newton\anaconda3\lib\site-packages\IPython\core\interactiveshell.py:3444: DtypeWarning: Columns (2) have mixed types.Specify dtype option on import or set low_memory=False.
exec(code_obj, self.user_global_ns, self.user_ns)
###Markdown
Producing Time Series Data
###Code
data['date_hour'] = pd.to_datetime(data['datetime'], format='%Y-%m-%d_%H')
us_bd = CustomBusinessDay(calendar=USFederalHolidayCalendar())
data['Business Day'] = (data['date_hour'].dt.date.isin(pd.date_range(start='2018-01-01',end='2020-01-01', freq=us_bd).date))*1
data['Holiday'] = (data['date_hour'].dt.date.isin(USFederalHolidayCalendar().holidays(start='2018-01-01',end='2020-01-01').date))*1
day_name = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday']
day_name = dict(enumerate(day_name))
month_name = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November']
month_name = dict(enumerate(month_name))
season_name = ['Spring', 'Summer', 'Fall']
season_name = dict(enumerate(season_name))
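# Note: each categorical group above intentionally has one missing level (Sunday, December, winter);
# presumably the omitted level acts as the baseline so the one-hot columns built below avoid the dummy-variable trap.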
for i in range(3):
data[season_name[i]] = (data['date_hour'].dt.month%12 // 3 + 1 == i)*1
for i in range(6):
data[day_name[i]] = (data['date_hour'].dt.dayofweek == i)*1
for i in range(11):
data[month_name[i]] = (data['date_hour'].dt.month == i+1)*1
data['Hour'] = data['date_hour'].dt.hour
data['Days Passed'] = (data['date_hour'] - pd.to_datetime('2018-01-01')).dt.days
data['Commute In'] = ((data['date_hour'].dt.time >= time(8,0)) & (data['date_hour'].dt.time <= time(9,0)) & (data['Business Day']))*1
data['Commute Out'] = ((data['date_hour'].dt.time >= time(16,0)) & (data['date_hour'].dt.time <= time(18,0)) & (data['Business Day']))*1
data.drop(columns='date_hour',inplace=True)
data.to_csv('aggregated_dataset_updated_with_time.csv')
###Output
_____no_output_____ |
WebScraping/Web-scraping-NHL-Playoff-Pace.ipynb | ###Markdown
A Glimpse Into the Future: Interactive Textbooks. Jupyter Notebooks are easy to maintain and keep current and evergreen. Multiple Jupyter Notebooks can be combined to create interactive textbooks. For example, see the [Interactive Textbook](https://ds8.gitbooks.io/textbook/content/) for _The Foundations of Data Science_ class at UC Berkeley: https://ds8.gitbooks.io/textbook/content/ NHL Data. This notebook highlights how to use and work with open data using Jupyter notebooks, in comparison to a more traditional approach of using standard desktop tools to perform an open data assignment. The goal of the exercise is to use current National Hockey League (NHL) results to determine whether a team is on pace for making the playoffs. Traditional approach. Tool 1: Traditionally, students would have had to go to a particular website to access the data: http://www.hockey-reference.com/teams/CGY/2018_games.html Tool 2: From there, they would have to manually copy and paste the data into a tool such as Microsoft Excel. Tool 3: ...that is then copied and pasted into Microsoft Word in order to write up a final report. In total, that means the students would need to use the following tools:- a web browser- Microsoft Excel or something like it- Microsoft Word or something similar. The final product is usually a static snapshot in time. Jupyter notebooks approach. Using Jupyter notebooks, the entire analysis can be done in one tool, requiring only a web browser. The end product is an interactive notebook that combines live code with the explanatory narrative for how the analysis was conducted, which can be interpreted by anyone.
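As a rough sketch of the arithmetic used in the code below: a team earns 2 points for a win (W) and 1 point for an overtime loss (OL), so points = 2·W + OL, and the playoff-pace line assumes roughly 96 points over an 82-game season, i.e. pace after GP games = GP × 96/82.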
###Code
import urllib.request
import pandas as pd
from bs4 import BeautifulSoup
from argparse import ArgumentParser
import numpy as np
# Query the hockey-reference website for data
html1 = urllib.request.urlopen("https://www.hockey-reference.com/teams/CGY/2017_games.html").read()
html2 = urllib.request.urlopen("https://www.hockey-reference.com/teams/VAN/2017_games.html").read()
soup1 = BeautifulSoup(html1,"html5lib")
soup2 = BeautifulSoup(html2,"html5lib")
table1 = soup1.find_all('table')[0]
table_body1 = table1.find('tbody')
rows1 = table_body1.find_all('tr')
table2 = soup2.find_all('table')[0]
table_body2 = table2.find('tbody')
rows2 = table_body2.find_all('tr')
column_headers = [ch.getText() for ch in table1.find_all('tr')[0].find_all('th')]
#print(column_headers)
team1_data = [[td1.getText() for td1 in rows1[i].find_all(['th','td'])]
for i in range(len(rows1))]
team2_data = [[td2.getText() for td2 in rows2[i].find_all(['th','td'])]
for i in range(len(rows2))]
df1 = pd.DataFrame(team1_data, columns=column_headers)
df2 = pd.DataFrame(team2_data, columns=column_headers)
df1 = df1.drop(df1.index[[20,41,62,83]])
df2 = df2.drop(df2.index[[20,41,62,83]])
# Extracted and cleaned data from the hockey-reference website
print(df1)
cols = ['GP','W', 'OL']
df1_clean = df1[cols].apply(pd.to_numeric, errors='coerce')
df2_clean = df2[cols].apply(pd.to_numeric, errors='coerce')
df1_clean['Playoff_Pace']=df1_clean['GP']*96/82
df1_clean['CGY_Points']=df1_clean['W']*2 + df1_clean['OL']
df2_clean['VAN_Points']=df2_clean['W']*2 + df2_clean['OL']
df1_clean['VAN_Points']=df2_clean['VAN_Points']
# Data analysis of my two favorite hockey teams
df_combined=df1_clean
df_combined=df_combined.drop(['W','OL'],axis=1)
print(df_combined)
import matplotlib.pyplot as plt
# Calgary Flames Points Pace
plt.plot( 'GP', 'Playoff_Pace', data=df_combined, marker='', color='black', linewidth=4);
plt.plot( 'GP', 'CGY_Points', data=df_combined, marker='', color='red', linewidth=4, linestyle='dashed', label="CGY");
plt.xlabel('GP');
plt.ylabel('Points');
# Calgary Flames and Vancouver Canucks Points Pace
plt.plot( 'GP', 'Playoff_Pace', data=df_combined, marker='', color='black', linewidth=4);
plt.plot( 'GP', 'CGY_Points', data=df_combined, marker='', color='red', linewidth=4, linestyle='dashed', label="CGY");
plt.plot( 'GP', 'VAN_Points', data=df_combined, marker='', color='blue', linewidth=4, linestyle='dashed', label="VAN");
plt.xlabel('GP');
plt.ylabel('Points')
plt.legend();
###Output
_____no_output_____ |
notebooks/Clean_gold/Clean datasets.ipynb | ###Markdown
WL
###Code
# imports needed by this notebook; read_lines is assumed to be a small project helper
# that returns the lines of a text file as a list of strings
import pickle
import pandas as pd
from tqdm import tqdm
from sklearn.metrics.pairwise import cosine_similarity
text_src = read_lines("../../data_parallel/wi+locness/train_src")
text_tgt = read_lines("../../data_parallel/wi+locness/train_tgt")
with open("../Clustering/data/wl_train_src_embed.pickle", "rb") as f:
vectors_src = pickle.load(f)
with open("../Clustering/data/wl_train_tgt_embed.pickle", "rb") as f:
vectors_tgt = pickle.load(f)
cos_sim_wl = []
for i in tqdm(range(len(vectors_src))):
cos_sim_wl.append(cosine_similarity([vectors_src[i]],[vectors_tgt[i]])[0,0])
# class CleanDataset:
# def __init(self, text_src, text_tgt):
# pass
# def
def show_statistic(df, len_before=0, show_rate=True):
diff_len = len_before - len(df)
print("Count of rows: ", len(df))
if diff_len > 0:
print("Diff count: ", diff_len)
if show_rate==True:
print("Count of rows with change: ", df['have_change'].sum())
print("Rate of rows with change: ", round(df['have_change'].mean(),2))
print()
def basic_clean(df, show = True, min_cos_src_tgt_sim = 0):
len_start = len(df)
print("Initial statistics")
df['have_change'] = (df['text_src']!=df['text_tgt']).astype(int)
show_statistic(df, len_start)
print("Drop dulicates")
len_before = len(df)
df = df.drop_duplicates(subset=['text_src', 'text_tgt'])
show_statistic(df, len_before, show_rate=True)
print("Drop where len less 5 and less then 3 token")
len_before = len(df)
df = df[df["text_tgt"].str.len() > 5]
df = df[df['text_tgt'].apply(lambda x: len(x.split(" ")) > 3)]
show_statistic(df, len_before, show_rate=True)
print("Drop where start from non-capital")
len_before = len(df)
df = df[df["text_tgt"].apply(lambda x: x[0] == x[0].upper())]
show_statistic(df, len_before, show_rate=True)
print("Drop where for one src more than one target")
len_before = len(df)
val_count = df.text_src.value_counts()
index_to_delete = []
for text_src in val_count[val_count > 1].index:
sub_df = df[df.text_src == text_src]
res_ind = sub_df[sub_df["cos_src_tgt"] > 0.999].index
if len(res_ind):
index_to_delete.extend(res_ind)
df = df[~df.index.isin(index_to_delete)]
show_statistic(df, len_before, show_rate=True)
print("Drop where cosine similarity between src and tgt is less than ", min_cos_src_tgt_sim)
len_before = len(df)
df = df[df["cos_src_tgt"] > min_cos_src_tgt_sim]
show_statistic(df, len_before, show_rate=True)
print('Final rate of cleaned data: ', round(len(df)/len_start,2))
return df
df = pd.DataFrame({"text_src": text_src, "text_tgt": text_tgt, "cos_src_tgt": cos_sim_wl})
#df["cos_src_tgt"].describe()
clean_wl = basic_clean(df, show = True, min_cos_src_tgt_sim=0.75)
df = df.drop_duplicates(subset=['text_src', 'text_tgt'])
len(df)
df[df["text_tgt"].str.len() <= 5]
df = df[df["text_tgt"].str.len() > 5]
len(df)
df[df["text_tgt"].apply(lambda x: x[0] != x[0].upper())]
df = df[df["text_tgt"].apply(lambda x: x[0] == x[0].upper())]
len(df)
df["cos_src_tgt"].plot.hist()
df_sort = df.sort_values("cos_src_tgt")
df_sort[df_sort["cos_src_tgt"] <= 0.9]
df.loc[2943,'text_src']
df.loc[2943,'text_tgt']
df = df[df["cos_src_tgt"] > 0.75]
32418/34308
len(df)
df = df[df['text_tgt'].apply(lambda x: len(x.split(" ")) > 3)]
len(df)
df.head(2)
val_count = df.text_src.value_counts()
len(val_count[val_count > 1])
val_count[val_count > 1]
df[df.text_src == "But lot of peoples saying that , the zoo can protected endagered spices against illegal poachers ."]
df.loc[9569].values
df.loc[9570].values
df[df.text_src == "Dear Sir / Madam"]
df[df.text_src == "Dear sir or madam ,"]
df[df.text_src == "I love it ."]
df[df.text_src == "In my opinion people should bulid some kind of wildlife parks . This solution will allow"]
df.loc[9573].values
index_to_delete = []
for text_src in val_count[val_count > 1].index:
sub_df = df[df.text_src == text_src]
res_ind = sub_df[sub_df["cos_src_tgt"] > 0.999].index
if len(res_ind):
index_to_delete.extend(res_ind)
index_to_delete
len(df)
df = df[~df.index.isin(index_to_delete)]
len(df)
31491/34308
from sklearn.model_selection import train_test_split
train, dev = train_test_split(df, test_size=0.02, random_state=4)
train_src = train.text_src.values
train_tgt = train.text_tgt.values
dev_src = dev.text_src.values
dev_tgt = dev.text_tgt.values
path_save = "../../data_parallel/clean_wl/"
write_lines(path_save+"train_src", train_src, mode='w')
write_lines(path_save+"train_tgt", train_tgt, mode='w')
write_lines(path_save+"dev_src", dev_src, mode='w')
write_lines(path_save+"dev_tgt", dev_tgt, mode='w')
# with open("../Checkpoint_exp/wl_cos.pickle", "rb") as f:
# cos_check_wl = pickle.load(f)
###Output
_____no_output_____
###Markdown
Nucle
###Code
text_src = read_lines("../../data_parallel/nucle/nucle_src")
text_tgt = read_lines("../../data_parallel/nucle/nucle_tgt")
with open("../Clustering/data/nucle_train_src_embed.pickle", "rb") as f:
vectors_src = pickle.load(f)
with open("../Clustering/data/nucle_train_tgt_embed.pickle", "rb") as f:
vectors_tgt = pickle.load(f)
cos_sim = []
for i in tqdm(range(len(vectors_src))):
cos_sim.append(cosine_similarity([vectors_src[i]],[vectors_tgt[i]])[0,0])
with open("../Checkpoint_exp/nucle_cos.pickle", "rb") as f:
cos_check = pickle.load(f)
df = pd.DataFrame({"text_src": text_src, "text_tgt": text_tgt, "cos_src_tgt": cos_sim, "cos_check": cos_check})
len(df)
basic_clean_nucle = basic_clean(df, show = True, min_cos_src_tgt_sim=0.935)
# df = df.drop_duplicates(subset=['text_src', 'text_tgt'])
# df = df[df['cos_src_tgt'] >= 0.935]
# df = df[df['cos_check'] >= 0.96]
#df[df['text_src']!=df['text_tgt']]
df['cos_check'].plot.hist()
df_sort = df.sort_values('cos_check')
df_sort[df_sort['cos_check']<=0.97]
df = df.sort_values('cos_src_tgt')
df['cos_src_tgt'].plot.hist()
df[df['cos_src_tgt'] <=0.935].sort_values("cos_src_tgt")
###Output
_____no_output_____ |
croatia-nordics-and-some.ipynb | ###Markdown
Please read the [README](https://github.com/ibudiselic/covid/blob/master/README.md) file in this repository.
###Code
# Interactive plots make it difficult to update this automatically from the command line.
# This can be switched to `True` in Binder to get interactive plots, though.
INTERACTIVE_PLOTS = False
# Load all the data and functionality.
%run lib.ipynb
countries_to_plot = ['Croatia', 'Sweden', 'Norway', 'Finland', 'Denmark', 'Iceland', 'Belgium', 'United Kingdom', 'Italy']
%%javascript
IPython.OutputArea.auto_scroll_threshold = 9999;
analyze_countries(absolute_date_comparison_start_date='2020-03-01')
###Output
_____no_output_____ |
projects/02 models/boston-house-price-prediction.ipynb | ###Markdown
Boston house price prediction The problem that we are going to solve here is: given a set of features that describe a house in Boston, our machine learning model must predict the house price. To train our machine learning model with Boston housing data, we will be using scikit-learn's boston dataset. In this dataset, each row describes a Boston town or suburb. There are 506 rows and 13 attributes (features) plus a target column (price). https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.names
###Code
# Importing the libraries
import pandas as pd
import numpy as np
from sklearn import metrics
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# Importing the Boston Housing dataset
from sklearn.datasets import load_boston
boston = load_boston()
# Initializing the dataframe
data = pd.DataFrame(boston.data)
# See head of the dataset
data.head()
#Adding the feature names to the dataframe
data.columns = boston.feature_names
data.head()
###Output
_____no_output_____
###Markdown
CRIM per capita crime rate by town ZN proportion of residential land zoned for lots over 25,000 sq.ft. INDUS proportion of non-retail business acres per town CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise) NOX nitric oxides concentration (parts per 10 million) RM average number of rooms per dwelling AGE proportion of owner-occupied units built prior to 1940 DIS weighted distances to five Boston employment centres RAD index of accessibility to radial highways TAX full-value property-tax rate per 10,000usd PTRATIO pupil-teacher ratio by town B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town LSTAT % lower status of the population Each record in the database describes a Boston suburb or town.
###Code
#Adding target variable to dataframe
data['PRICE'] = boston.target
# Median value of owner-occupied homes in $1000s
#Check the shape of dataframe
data.shape
data.columns
data.dtypes
# Identifying the unique number of values in the dataset
data.nunique()
# Check for missing values
data.isnull().sum()
# See rows with missing values
data[data.isnull().any(axis=1)]
# Viewing the data statistics
data.describe()
# Finding out the correlation between the features
corr = data.corr()
corr.shape
# Plotting the heatmap of correlation between features
plt.figure(figsize=(20,20))
sns.heatmap(corr, cbar=True, square= True, fmt='.1f', annot=True, annot_kws={'size':15}, cmap='Greens')
# Spliting target variable and independent variables
X = data.drop(['PRICE'], axis = 1)
y = data['PRICE']
# Splitting to training and testing data
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.3, random_state = 4)
###Output
_____no_output_____
###Markdown
Linear regression Training the model
###Code
# Import library for Linear Regression
from sklearn.linear_model import LinearRegression
# Create a Linear regressor
lm = LinearRegression()
# Train the model using the training sets
lm.fit(X_train, y_train)
# Value of y intercept
lm.intercept_
#Converting the coefficient values to a dataframe
coeffcients = pd.DataFrame([X_train.columns,lm.coef_]).T
coeffcients = coeffcients.rename(columns={0: 'Attribute', 1: 'Coefficients'})
coeffcients
###Output
_____no_output_____
###Markdown
Model Evaluation
###Code
# Model prediction on train data
y_pred = lm.predict(X_train)
# Model Evaluation
print('R^2:',metrics.r2_score(y_train, y_pred))
print('Adjusted R^2:',1 - (1-metrics.r2_score(y_train, y_pred))*(len(y_train)-1)/(len(y_train)-X_train.shape[1]-1))
print('MAE:',metrics.mean_absolute_error(y_train, y_pred))
print('MSE:',metrics.mean_squared_error(y_train, y_pred))
print('RMSE:',np.sqrt(metrics.mean_squared_error(y_train, y_pred)))
###Output
R^2: 0.7465991966746854
Adjusted R^2: 0.736910342429894
MAE: 3.0898610949711305
MSE: 19.073688703469035
RMSE: 4.367343437774162
###Markdown
R^2 : It is a measure of the linear relationship between X and Y. It is interpreted as the proportion of the variance in the dependent variable that is predictable from the independent variable. Adjusted R^2 : The adjusted R-squared compares the explanatory power of regression models that contain different numbers of predictors. MAE : It is the mean of the absolute value of the errors. It measures the difference between two continuous variables, here the actual and predicted values of y. MSE : The mean square error (MSE) is just like the MAE, but squares the differences before summing them instead of using the absolute value. RMSE : The root mean square error (RMSE) is the square root of the MSE, which expresses the error in the same units as the target variable.
###Code
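# --- Added illustrative sketch (not part of the original notebook) ---
# The metrics described above can also be computed by hand with NumPy; this
# should reproduce the sklearn.metrics values printed earlier. It assumes
# y_train, y_pred and X_train from the cells above.
errors = y_train - y_pred
mae_manual = np.mean(np.abs(errors))                # MAE
mse_manual = np.mean(errors ** 2)                   # MSE
rmse_manual = np.sqrt(mse_manual)                   # RMSE = sqrt(MSE)
ss_res = np.sum(errors ** 2)                        # residual sum of squares
ss_tot = np.sum((y_train - np.mean(y_train)) ** 2)  # total sum of squares
r2_manual = 1 - ss_res / ss_tot                     # R^2
n_obs, n_feat = X_train.shape
adj_r2_manual = 1 - (1 - r2_manual) * (n_obs - 1) / (n_obs - n_feat - 1)  # Adjusted R^2
print('Manual R^2:', r2_manual, 'Adjusted R^2:', adj_r2_manual)
print('Manual MAE:', mae_manual, 'MSE:', mse_manual, 'RMSE:', rmse_manual)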
# Visualizing the differences between actual prices and predicted values
plt.scatter(y_train, y_pred)
plt.xlabel("Prices")
plt.ylabel("Predicted prices")
plt.title("Prices vs Predicted prices")
plt.show()
# Checking residuals
plt.scatter(y_pred,y_train-y_pred)
plt.title("Predicted vs residuals")
plt.xlabel("Predicted")
plt.ylabel("Residuals")
plt.show()
###Output
_____no_output_____
###Markdown
There is no pattern visible in this plot and the values are distributed evenly around zero, so the linearity assumption is satisfied.
###Code
# Checking Normality of errors
sns.distplot(y_train-y_pred)
plt.title("Histogram of Residuals")
plt.xlabel("Residuals")
plt.ylabel("Frequency")
plt.show()
###Output
/usr/local/lib/python3.7/dist-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
###Markdown
Here the residuals are approximately normally distributed, so the normality assumption is satisfied. For test data
###Code
# Predicting Test data with the model
y_test_pred = lm.predict(X_test)
# Model Evaluation
acc_linreg = metrics.r2_score(y_test, y_test_pred)
print('R^2:', acc_linreg)
print('Adjusted R^2:',1 - (1-metrics.r2_score(y_test, y_test_pred))*(len(y_test)-1)/(len(y_test)-X_test.shape[1]-1))
print('MAE:',metrics.mean_absolute_error(y_test, y_test_pred))
print('MSE:',metrics.mean_squared_error(y_test, y_test_pred))
print('RMSE:',np.sqrt(metrics.mean_squared_error(y_test, y_test_pred)))
###Output
R^2: 0.7121818377409193
Adjusted R^2: 0.6850685326005711
MAE: 3.859005592370744
MSE: 30.05399330712416
RMSE: 5.482152251362977
###Markdown
Here the model evaluation scores almost match those of the training data, so the model is not overfitting. Random Forest Regressor Train the model
###Code
# Import Random Forest Regressor
from sklearn.ensemble import RandomForestRegressor
# Create a Random Forest Regressor
reg = RandomForestRegressor(n_estimators=200, max_depth=5)
# Train the model using the training sets
reg.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Model Evaluation
###Code
# Model prediction on train data
y_pred = reg.predict(X_train)
# Model Evaluation
print('R^2:',metrics.r2_score(y_train, y_pred))
print('Adjusted R^2:',1 - (1-metrics.r2_score(y_train, y_pred))*(len(y_train)-1)/(len(y_train)-X_train.shape[1]-1))
print('MAE:',metrics.mean_absolute_error(y_train, y_pred))
print('MSE:',metrics.mean_squared_error(y_train, y_pred))
print('RMSE:',np.sqrt(metrics.mean_squared_error(y_train, y_pred)))
# Visualizing the differences between actual prices and predicted values
plt.scatter(y_train, y_pred)
plt.xlabel("Prices")
plt.ylabel("Predicted prices")
plt.title("Prices vs Predicted prices")
plt.show()
# Checking residuals
plt.scatter(y_pred,y_train-y_pred)
plt.title("Predicted vs residuals")
plt.xlabel("Predicted")
plt.ylabel("Residuals")
plt.show()
###Output
_____no_output_____
###Markdown
For test data
###Code
# Predicting Test data with the model
y_test_pred = reg.predict(X_test)
# Model Evaluation
acc_rf = metrics.r2_score(y_test, y_test_pred)
print('R^2:', acc_rf)
print('Adjusted R^2:',1 - (1-metrics.r2_score(y_test, y_test_pred))*(len(y_test)-1)/(len(y_test)-X_test.shape[1]-1))
print('MAE:',metrics.mean_absolute_error(y_test, y_test_pred))
print('MSE:',metrics.mean_squared_error(y_test, y_test_pred))
print('RMSE:',np.sqrt(metrics.mean_squared_error(y_test, y_test_pred)))
###Output
R^2: 0.8085864829773283
Adjusted R^2: 0.7905547748520041
MAE: 2.818610620624688
MSE: 19.987413283231646
RMSE: 4.470728495808222
###Markdown
XGBoost Regressor Training the model
###Code
# Import XGBoost Regressor
from xgboost import XGBRegressor
#Create a XGBoost Regressor
reg = XGBRegressor()
# Train the model using the training sets
reg.fit(X_train, y_train)
###Output
[20:29:47] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
###Markdown
max_depth (int) – Maximum tree depth for base learners.learning_rate (float) – Boosting learning rate (xgb’s “eta”)n_estimators (int) – Number of boosted trees to fit.gamma (float) – Minimum loss reduction required to make a further partition on a leaf node of the tree.min_child_weight (int) – Minimum sum of instance weight(hessian) needed in a child.subsample (float) – Subsample ratio of the training instance.colsample_bytree (float) – Subsample ratio of columns when constructing each tree.objective (string or callable) – Specify the learning task and the corresponding learning objective or a custom objective function to be used (see note below).nthread (int) – Number of parallel threads used to run xgboost. (Deprecated, please use n_jobs)scale_pos_weight (float) – Balancing of positive and negative weights. Model Evaluation
###Code
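# --- Added illustrative sketch (not part of the original notebook) ---
# The XGBoost hyperparameters described above could be passed explicitly when
# creating the regressor; the values below are arbitrary examples, not tuned settings.
reg_example = XGBRegressor(max_depth=4, learning_rate=0.1, n_estimators=300,
                           min_child_weight=1, gamma=0, subsample=0.8,
                           colsample_bytree=0.8, n_jobs=4)
# reg_example would be trained the same way, e.g. reg_example.fit(X_train, y_train)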
# Model prediction on train data
y_pred = reg.predict(X_train)
# Model Evaluation
print('R^2:',metrics.r2_score(y_train, y_pred))
print('Adjusted R^2:',1 - (1-metrics.r2_score(y_train, y_pred))*(len(y_train)-1)/(len(y_train)-X_train.shape[1]-1))
print('MAE:',metrics.mean_absolute_error(y_train, y_pred))
print('MSE:',metrics.mean_squared_error(y_train, y_pred))
print('RMSE:',np.sqrt(metrics.mean_squared_error(y_train, y_pred)))
# Visualizing the differences between actual prices and predicted values
plt.scatter(y_train, y_pred)
plt.xlabel("Prices")
plt.ylabel("Predicted prices")
plt.title("Prices vs Predicted prices")
plt.show()
# Checking residuals
plt.scatter(y_pred,y_train-y_pred)
plt.title("Predicted vs residuals")
plt.xlabel("Predicted")
plt.ylabel("Residuals")
plt.show()
###Output
_____no_output_____
###Markdown
For test data
###Code
#Predicting Test data with the model
y_test_pred = reg.predict(X_test)
# Model Evaluation
acc_xgb = metrics.r2_score(y_test, y_test_pred)
print('R^2:', acc_xgb)
print('Adjusted R^2:',1 - (1-metrics.r2_score(y_test, y_test_pred))*(len(y_test)-1)/(len(y_test)-X_test.shape[1]-1))
print('MAE:',metrics.mean_absolute_error(y_test, y_test_pred))
print('MSE:',metrics.mean_squared_error(y_test, y_test_pred))
print('RMSE:',np.sqrt(metrics.mean_squared_error(y_test, y_test_pred)))
###Output
R^2: 0.8494894736313225
Adjusted R^2: 0.8353109457849979
MAE: 2.4509708843733136
MSE: 15.716320042597493
RMSE: 3.9643814199188117
###Markdown
SVM Regressor
###Code
# Creating scaled set to be used in model to improve our results
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
Train the model
###Code
# Import SVM Regressor
from sklearn import svm
# Create a SVM Regressor
reg = svm.SVR(kernel='linear', C=1)
# Train the model using the training sets
reg.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
C : float, optional (default=1.0): The penalty parameter of the error term. It controls the trade-off between a smooth decision boundary and classifying the training points correctly. kernel : string, optional (default='rbf'): The kernel parameter selects the type of hyperplane used to separate the data. It must be one of 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed' or a callable. degree : int, optional (default=3): Degree of the polynomial kernel function ('poly'). Ignored by all other kernels. gamma : float, optional (default='auto'): It is for non-linear hyperplanes. The higher the gamma value, the more closely the model tries to fit the training data set. The current default is 'auto', which uses 1 / n_features. coef0 : float, optional (default=0.0): Independent term in the kernel function. It is only significant in 'poly' and 'sigmoid'. shrinking : boolean, optional (default=True): Whether to use the shrinking heuristic. Model Evaluation
###Code
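# --- Added illustrative sketch (not part of the original notebook) ---
# The SVR parameters described above could be set explicitly; the values below
# are arbitrary examples (an RBF kernel with a larger C and an explicit gamma).
reg_rbf_example = svm.SVR(kernel='rbf', C=10.0, gamma=0.1)
# It would be trained the same way, e.g. reg_rbf_example.fit(X_train, y_train)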
# Model prediction on train data
y_pred = reg.predict(X_train)
# Model Evaluation
print('R^2:',metrics.r2_score(y_train, y_pred))
print('Adjusted R^2:',1 - (1-metrics.r2_score(y_train, y_pred))*(len(y_train)-1)/(len(y_train)-X_train.shape[1]-1))
print('MAE:',metrics.mean_absolute_error(y_train, y_pred))
print('MSE:',metrics.mean_squared_error(y_train, y_pred))
print('RMSE:',np.sqrt(metrics.mean_squared_error(y_train, y_pred)))
# Visualizing the differences between actual prices and predicted values
plt.scatter(y_train, y_pred)
plt.xlabel("Prices")
plt.ylabel("Predicted prices")
plt.title("Prices vs Predicted prices")
plt.show()
# Checking residuals
plt.scatter(y_pred,y_train-y_pred)
plt.title("Predicted vs residuals")
plt.xlabel("Predicted")
plt.ylabel("Residuals")
plt.show()
###Output
_____no_output_____
###Markdown
For test data
###Code
# Predicting Test data with the model
y_test_pred = reg.predict(X_test)
# Model Evaluation
acc_svm = metrics.r2_score(y_test, y_test_pred)
print('R^2:', acc_svm)
print('Adjusted R^2:',1 - (1-metrics.r2_score(y_test, y_test_pred))*(len(y_test)-1)/(len(y_test)-X_test.shape[1]-1))
print('MAE:',metrics.mean_absolute_error(y_test, y_test_pred))
print('MSE:',metrics.mean_squared_error(y_test, y_test_pred))
print('RMSE:',np.sqrt(metrics.mean_squared_error(y_test, y_test_pred)))
###Output
R^2: 0.7025265959053506
Adjusted R^2: 0.6745037389978836
MAE: 3.525916725344025
MSE: 31.062194357493038
RMSE: 5.573346782454241
###Markdown
Evaluation and comparision of all the models
###Code
models = pd.DataFrame({
'Model': ['Linear Regression', 'Random Forest', 'XGBoost', 'Support Vector Machines'],
'R-squared Score': [acc_linreg*100, acc_rf*100, acc_xgb*100, acc_svm*100]})
models.sort_values(by='R-squared Score', ascending=False)
###Output
_____no_output_____ |
W2/W2D3/tutorials/tutorial2.ipynb | ###Markdown
Tutorial 2: Effects of Input Correlation**Week 2, Day 3: Biological Neuron Models****By Neuromatch Academy**__Content creators:__ Qinglong Gu, Songtin Li, John Murray, Richard Naud, Arvind Kumar__Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Matthew Krause, Spiros Chavlis, Michael Waskom **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial Objectives*Estimated timing of tutorial: 50 minutes*In this tutorial, we will use the leaky integrate-and-fire (LIF) neuron model (see Tutorial 1) to study how they **transform input correlations to output properties** (transfer of correlations). In particular, we are going to write a few lines of code to:- inject correlated GWN in a pair of neurons- measure correlations between the spiking activity of the two neurons- study how the transfer of correlation depends on the statistics of the input, i.e. mean and standard deviation. Tutorial slides These are the slides for the videos in all tutorials today
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in all tutorials today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/8djsm/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
--- Setup
###Code
# Imports
import matplotlib.pyplot as plt
import numpy as np
import time
###Output
_____no_output_____
###Markdown
Figure Settings
###Code
# @title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
# use NMA plot style
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
my_layout = widgets.Layout()
###Output
_____no_output_____
###Markdown
Plotting Functions
###Code
# @title Plotting Functions
def example_plot_myCC():
pars = default_pars(T=50000, dt=.1)
c = np.arange(10) * 0.1
r12 = np.zeros(10)
for i in range(10):
I1gL, I2gL = correlate_input(pars, mu=20.0, sig=7.5, c=c[i])
r12[i] = my_CC(I1gL, I2gL)
plt.figure()
plt.plot(c, r12, 'bo', alpha=0.7, label='Simulation', zorder=2)
plt.plot([-0.05, 0.95], [-0.05, 0.95], 'k--', label='y=x',
dashes=(2, 2), zorder=1)
plt.xlabel('True CC')
plt.ylabel('Sample CC')
plt.legend(loc='best')
# the function plot the raster of the Poisson spike train
def my_raster_Poisson(range_t, spike_train, n):
"""
Generates poisson trains
Args:
range_t : time sequence
spike_train : binary spike trains, with shape (N, Lt)
n : number of Poisson trains plot
Returns:
Raster plot of the spike train
"""
# find the number of all the spike trains
N = spike_train.shape[0]
  # n should be smaller than N:
if n > N:
print('The number n exceeds the size of spike trains')
print('The number n is set to be the size of spike trains')
n = N
  # plot raster
i = 0
while i < n:
if spike_train[i, :].sum() > 0.:
t_sp = range_t[spike_train[i, :] > 0.5] # spike times
plt.plot(t_sp, i * np.ones(len(t_sp)), 'k|', ms=10, markeredgewidth=2)
i += 1
plt.xlim([range_t[0], range_t[-1]])
plt.ylim([-0.5, n + 0.5])
plt.xlabel('Time (ms)', fontsize=12)
plt.ylabel('Neuron ID', fontsize=12)
def plot_c_r_LIF(c, r, mycolor, mylabel):
z = np.polyfit(c, r, deg=1)
c_range = np.array([c.min() - 0.05, c.max() + 0.05])
plt.plot(c, r, 'o', color=mycolor, alpha=0.7, label=mylabel, zorder=2)
plt.plot(c_range, z[0] * c_range + z[1], color=mycolor, zorder=1)
###Output
_____no_output_____
###Markdown
Helper Functions
###Code
# @title Helper Functions
def default_pars(**kwargs):
pars = {}
### typical neuron parameters###
pars['V_th'] = -55. # spike threshold [mV]
pars['V_reset'] = -75. # reset potential [mV]
pars['tau_m'] = 10. # membrane time constant [ms]
pars['g_L'] = 10. # leak conductance [nS]
pars['V_init'] = -75. # initial potential [mV]
pars['V_L'] = -75. # leak reversal potential [mV]
pars['tref'] = 2. # refractory time (ms)
### simulation parameters ###
pars['T'] = 400. # Total duration of simulation [ms]
pars['dt'] = .1 # Simulation time step [ms]
### external parameters if any ###
for k in kwargs:
pars[k] = kwargs[k]
pars['range_t'] = np.arange(0, pars['T'], pars['dt']) # Vector of discretized
# time points [ms]
return pars
def run_LIF(pars, Iinj):
"""
Simulate the LIF dynamics with external input current
Args:
pars : parameter dictionary
Iinj : input current [pA]. The injected current here can be a value or an array
Returns:
rec_spikes : spike times
    rec_v : membrane potential
"""
# Set parameters
V_th, V_reset = pars['V_th'], pars['V_reset']
tau_m, g_L = pars['tau_m'], pars['g_L']
V_init, V_L = pars['V_init'], pars['V_L']
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
tref = pars['tref']
# Initialize voltage and current
v = np.zeros(Lt)
v[0] = V_init
Iinj = Iinj * np.ones(Lt)
tr = 0.
# simulate the LIF dynamics
rec_spikes = [] # record spike times
for it in range(Lt - 1):
if tr > 0:
v[it] = V_reset
tr = tr - 1
elif v[it] >= V_th: # reset voltage and record spike event
rec_spikes.append(it)
v[it] = V_reset
tr = tref / dt
# calculate the increment of the membrane potential
dv = (-(v[it] - V_L) + Iinj[it] / g_L) * (dt / tau_m)
# update the membrane potential
v[it + 1] = v[it] + dv
rec_spikes = np.array(rec_spikes) * dt
return v, rec_spikes
def my_GWN(pars, sig, myseed=False):
"""
Function that calculates Gaussian white noise inputs
Args:
pars : parameter dictionary
    sig : noise amplitude (standard deviation)
myseed : random seed. int or boolean
the same seed will give the same random number sequence
Returns:
I : Gaussian white noise input
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# Set random seed. You can fix the seed of the random number generator so
  # that the results are reliable; however, when you want to generate multiple
  # realizations, make sure that you change the seed for each new realization
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# generate GWN
  # the sqrt(tau_m / dt) factor keeps the effect of the noise on the membrane potential independent of the time step dt
I_GWN = sig * np.random.randn(Lt) * np.sqrt(pars['tau_m'] / dt)
return I_GWN
def LIF_output_cc(pars, mu, sig, c, bin_size, n_trials=20):
""" Simulates two LIF neurons with correlated input and computes output correlation
Args:
pars : parameter dictionary
mu : noise baseline (mean)
    sig : noise amplitude (standard deviation)
c : correlation coefficient ~[0, 1]
bin_size : bin size used for time series
n_trials : total simulation trials
Returns:
r : output corr. coe.
sp_rate : spike rate
sp1 : spike times of neuron 1 in the last trial
sp2 : spike times of neuron 2 in the last trial
"""
r12 = np.zeros(n_trials)
sp_rate = np.zeros(n_trials)
for i_trial in range(n_trials):
I1gL, I2gL = correlate_input(pars, mu, sig, c)
_, sp1 = run_LIF(pars, pars['g_L'] * I1gL)
_, sp2 = run_LIF(pars, pars['g_L'] * I2gL)
my_bin = np.arange(0, pars['T'], bin_size)
sp1_count, _ = np.histogram(sp1, bins=my_bin)
sp2_count, _ = np.histogram(sp2, bins=my_bin)
r12[i_trial] = my_CC(sp1_count[::20], sp2_count[::20])
sp_rate[i_trial] = len(sp1) / pars['T'] * 1000.
return r12.mean(), sp_rate.mean(), sp1, sp2
###Output
_____no_output_____
###Markdown
The helper function contains the:- Parameter dictionary: `default_pars( **kwargs)` from Tutorial 1- LIF simulator: `run_LIF` from Tutorial 1- Gaussian white noise generator: `my_GWN(pars, sig, myseed=False)` from Tutorial 1- Poisson type spike train generator: `Poisson_generator(pars, rate, n, myseed=False)`- Two LIF neurons with correlated inputs simulator: `LIF_output_cc(pars, mu, sig, c, bin_size, n_trials=20)` --- Section 1: Correlations (Synchrony)Correlation or synchrony in neuronal activity can be described for any readout of brain activity. Here, we are concerned with **the spiking activity of neurons**. In the simplest way, correlation/synchrony refers to coincident spiking of neurons, i.e., when two neurons spike together, they are firing in **synchrony** or are **correlated**. Neurons can be synchronous in their instantaneous activity, i.e., they spike together with some probability. However, it is also possible that spiking of a neuron at time $t$ is correlated with the spikes of another neuron with a delay (**time-delayed synchrony**). Origin of synchronous neuronal activity:- Common inputs, i.e., two neurons are receiving input from the same sources. The degree of correlation of the shared inputs is proportional to their output correlation.- Pooling from the same sources. Neurons do not share the same input neurons but are **receiving inputs from neurons which themselves are correlated**.- Neurons are connected to each other (uni- or bi-directionally): This will only give rise to **time-delayed synchrony**. Neurons could also be connected via gap-junctions.- Neurons have similar parameters and initial conditions. Implications of synchronyWhen neurons spike together, they can have a stronger impact on downstream neurons. Synapses in the brain are sensitive to the temporal correlations (i.e., delay) between pre- and postsynaptic activity, and this, in turn, can lead to **the formation of functional neuronal networks** - the basis of unsupervised learning (we will study some of these concepts in a forthcoming tutorial).Synchrony implies a reduction in the dimensionality of the system. In addition, correlations, in many cases, can impair the decoding of neuronal activity. Video 1: Input & output correlations
###Code
# @title Video 1: Input & output correlations
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Bh411o7eV", width=730, height=410, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="nsAYFBcAkes", width=730, height=410, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
A simple model to study the emergence of correlations is to inject common inputs to a pair of neurons and measure the output correlation as **a function of the fraction(c) of common inputs**. Here, we are going to investigate the transfer of correlations by computing the correlation coefficient of spike trains recorded from two unconnected LIF neurons, which received correlated inputs.The input current to LIF neuron $i$ $(i=1,2)$ is:\begin{equation}\frac{I_i}{g_L} =\mu_i + \sigma_i (\sqrt{1-c}\xi_i + \sqrt{c}\xi_c) \quad (1)\end{equation}where $\mu_i$ is the temporal average of the current. The Gaussian white noise $\xi_i$ is independent for each neuron, while $\xi_c$ is common to all neurons. The variable $c$ ($0\le c\le1$) controls the fraction of common and independent inputs. $\sigma_i$ shows the variance of the total input.So, first, we will generate correlated inputs. Execute this cell to get a function `correlate_input` for generating correlated GWN inputs
###Code
# @markdown Execute this cell to get a function `correlate_input` for generating correlated GWN inputs
def correlate_input(pars, mu=20., sig=7.5, c=0.3):
"""
Args:
pars : parameter dictionary
mu : noise baseline (mean)
    sig : noise amplitude (standard deviation)
    c : correlation coefficient ~[0, 1]
Returns:
I1gL, I2gL : two correlated inputs with corr. coe. c
"""
  # generate Gaussian white noise xi_1, xi_2, xi_c
xi_1 = my_GWN(pars, sig)
xi_2 = my_GWN(pars, sig)
xi_c = my_GWN(pars, sig)
# Generate two correlated inputs by Equation. (1)
I1gL = mu + np.sqrt(1. - c) * xi_1 + np.sqrt(c) * xi_c
I2gL = mu + np.sqrt(1. - c) * xi_2 + np.sqrt(c) * xi_c
return I1gL, I2gL
help(correlate_input)
###Output
Help on function correlate_input in module __main__:
correlate_input(pars, mu=20.0, sig=7.5, c=0.3)
Args:
pars : parameter dictionary
mu : noise baseline (mean)
sig : noise amplitute (standard deviation)
c. : correlation coefficient ~[0, 1]
Returns:
I1gL, I2gL : two correlated inputs with corr. coe. c
###Markdown
Coding Exercise 1A: Compute the correlationThe _sample correlation coefficient_ between two input currents $I_i$ and $I_j$ is defined as the sample covariance of $I_i$ and $I_j$ divided by the square root of the sample variance of $I_i$ multiplied with the square root of the sample variance of $I_j$. In equation form: \begin{align}r_{ij} &= \frac{cov(I_i, I_j)}{\sqrt{var(I_i)} \sqrt{var(I_j)}}\\cov(I_i, I_j) &= \sum_{k=1}^L (I_i^k -\bar{I}_i)(I_j^k -\bar{I}_j) \\var(I_i) &= \sum_{k=1}^L (I_i^k -\bar{I}_i)^2\end{align}where $\bar{I}_i$ is the sample mean, k is the time bin, and L is the length of $I$. This means that $I_i^k$ is current i at time $k\cdot dt$. Note that **the equations above are not accurate for sample covariances and variances as they should be additionally divided by L-1** - we have dropped this term because it cancels out in the sample correlation coefficient formula.The _sample correlation coefficient_ may also be referred to as the **sample Pearson correlation coefficient**. Here, is a beautiful paper that explains multiple ways to calculate and understand correlations [Rodgers and Nicewander 1988](https://www.stat.berkeley.edu/~rabbee/correlation.pdf).In this exercise, we will create a function, `my_CC` to compute the sample correlation coefficient between two time series. Note that while we introduced this computation here in the context of input currents, the sample correlation coefficient is used to compute the correlation between any two time series - we will use it later on binned spike trains. We then check our method is accurate by generating currents with a certain correlation (using `correlate_input`), computing the correlation coefficient using `my_CC`, and plotting the true vs sample correlation coefficients.
###Code
def my_CC(i, j):
"""
Args:
i, j : two time series with the same length
Returns:
rij : correlation coefficient
"""
########################################################################
## TODO for students: compute rxy, then remove the NotImplementedError #
# Tip1: array([a1, a2, a3])*array([b1, b2, b3]) = array([a1*b1, a2*b2, a3*b3])
# Tip2: np.sum(array([a1, a2, a3])) = a1+a2+a3
# Tip3: square root, np.sqrt()
# Fill out function and remove
# raise NotImplementedError("Student exercise: compute the sample correlation coefficient")
########################################################################
# Calculate the covariance of i and j
cov = ((i - i.mean()) * (j - j.mean())).sum()
# Calculate the variance of i
var_i = ((i - i.mean()) * (i - i.mean())).sum()
# Calculate the variance of j
var_j = ((j - j.mean()) * (j - j.mean())).sum()
# Calculate the correlation coefficient
rij = cov / np.sqrt(var_i*var_j)
return rij
example_plot_myCC()
###Output
_____no_output_____
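###Markdown
As an added cross-check (not part of the original exercise), `my_CC` should agree with NumPy's built-in `np.corrcoef` on any pair of series; the short cell below compares the two on synthetic data.
###Code
# Compare my_CC with np.corrcoef on two correlated random series
a = np.random.randn(1000)
b = 0.5 * a + np.random.randn(1000)
print(f"my_CC: {my_CC(a, b):.4f}, np.corrcoef: {np.corrcoef(a, b)[0, 1]:.4f}")
###Output
_____no_output_____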
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_BiologicalNeuronModels/solutions/W2D3_Tutorial2_Solution_313f41e4.py)*Example output:* The sample correlation coefficients (computed using `my_CC`) match the ground truth correlation coefficient! In the next exercise, we will use the Poisson distribution to model spike trains. Remember that you have seen the Poisson distribution used in this way in the [pre-reqs math day on Statistics](https://compneuro.neuromatch.io/tutorials/W0D5_Statistics/student/W0D5_Tutorial1.html#section-2-2-poisson-distribution). Remember that a Poisson spike train has the following properties:- The ratio of the mean and variance of spike count is 1- Inter-spike-intervals are exponentially distributed- Spike times are irregular, i.e., $CV_{\rm ISI}=1$- Adjacent spike intervals are independent of each other. In the following cell, we provide a helper function `Poisson_generator` and then use it to produce a Poisson spike train. Execute this cell to get helper function `Poisson_generator`
###Code
# @markdown Execute this cell to get helper function `Poisson_generator`
def Poisson_generator(pars, rate, n, myseed=False):
"""
Generates poisson trains
Args:
pars : parameter dictionary
    rate : firing rate of the Poisson trains [Hz]
n : number of Poisson trains
myseed : random seed. int or boolean
Returns:
pre_spike_train : spike train matrix, ith row represents whether
there is a spike in ith spike train over time
(1 if spike, 0 otherwise)
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# set random seed
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# generate uniformly distributed random variables
u_rand = np.random.rand(n, Lt)
# generate Poisson train
poisson_train = 1. * (u_rand < rate * (dt / 1000.))
return poisson_train
help(Poisson_generator)
###Output
Help on function Poisson_generator in module __main__:
Poisson_generator(pars, rate, n, myseed=False)
Generates poisson trains
Args:
pars : parameter dictionary
rate : noise amplitute [Hz]
n : number of Poisson trains
myseed : random seed. int or boolean
Returns:
pre_spike_train : spike train matrix, ith row represents whether
there is a spike in ith spike train over time
(1 if spike, 0 otherwise)
###Markdown
Execute this cell to visualize Poisson spike train
###Code
# @markdown Execute this cell to visualize Poisson spike train
pars = default_pars()
pre_spike_train = Poisson_generator(pars, rate=10, n=100, myseed=2020)
my_raster_Poisson(pars['range_t'], pre_spike_train, 100)
###Output
_____no_output_____
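###Markdown
As a quick, added sanity check of the properties listed above (not part of the original tutorial), the cell below estimates the Fano factor (variance/mean of the spike counts) and the coefficient of variation of the inter-spike intervals from the Poisson trains generated in the previous cell; both should be close to 1.
###Code
# Fano factor: variance / mean of the spike counts across the generated trains
spike_counts = pre_spike_train.sum(axis=1)
fano_factor = spike_counts.var() / spike_counts.mean()

# CV of the inter-spike intervals, pooled over all trains with >= 2 spikes
isi_list = []
for train in pre_spike_train:
  t_sp = pars['range_t'][train > 0.5]
  if len(t_sp) > 1:
    isi_list.append(np.diff(t_sp))
isis = np.concatenate(isi_list)
cv_isi = isis.std() / isis.mean()

print(f"Fano factor = {fano_factor:.2f}, CV_ISI = {cv_isi:.2f}")
###Output
_____no_output_____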
###Markdown
Coding Exercise 1B: Measure the correlation between spike trainsAfter recording the spike times of the two neurons, how can we estimate their correlation coefficient? In order to find this, we need to bin the spike times and obtain two time series. Each data point in the time series is the number of spikes in the corresponding time bin. You can use `np.histogram()` to bin the spike times.Complete the code below to bin the spike times and calculate the correlation coefficient for two Poisson spike trains. Note that `c` here is the ground-truth correlation coefficient that we define. Execute this cell to get a function for generating correlated Poisson inputs (`generate_corr_Poisson`)
###Code
# @markdown Execute this cell to get a function for generating correlated Poisson inputs (`generate_corr_Poisson`)
def generate_corr_Poisson(pars, poi_rate, c, myseed=False):
"""
function to generate correlated Poisson type spike trains
Args:
pars : parameter dictionary
poi_rate : rate of the Poisson train
c. : correlation coefficient ~[0, 1]
Returns:
sp1, sp2 : two correlated spike time trains with corr. coe. c
"""
range_t = pars['range_t']
mother_rate = poi_rate / c
mother_spike_train = Poisson_generator(pars, rate=mother_rate,
n=1, myseed=myseed)[0]
sp_mother = range_t[mother_spike_train > 0]
L_sp_mother = len(sp_mother)
sp_mother_id = np.arange(L_sp_mother)
L_sp_corr = int(L_sp_mother * c)
np.random.shuffle(sp_mother_id)
sp1 = np.sort(sp_mother[sp_mother_id[:L_sp_corr]])
np.random.shuffle(sp_mother_id)
sp2 = np.sort(sp_mother[sp_mother_id[:L_sp_corr]])
return sp1, sp2
print(help(generate_corr_Poisson))
def corr_coeff_pairs(pars, rate, c, trials, bins):
"""
Calculate the correlation coefficient of two spike trains, for different
realizations
Args:
pars : parameter dictionary
rate : rate of poisson inputs
c : correlation coefficient ~ [0, 1]
trials : number of realizations
bins : vector with bins for time discretization
Returns:
r12 : correlation coefficient of a pair of inputs
"""
  r12 = np.zeros(trials)
  for i in range(trials):
##############################################################
## TODO for students
# Note that you can run multiple realizations and compute their r_12(diff_trials)
# with the defined function above. The average r_12 over trials can get close to c.
# Note: change seed to generate different input per trial
# Fill out function and remove
raise NotImplementedError("Student exercise: compute the correlation coefficient")
##############################################################
# Generate correlated Poisson inputs
    sp1, sp2 = generate_corr_Poisson(pars, rate, c, myseed=2020+i)
# Bin the spike times of the first input
    sp1_count, _ = np.histogram(sp1, bins=bins)
# Bin the spike times of the second input
    sp2_count, _ = np.histogram(sp2, bins=bins)
# Calculate the correlation coefficient
    r12[i] = my_CC(sp1_count, sp2_count)
return r12
poi_rate = 20.
c = 0.2 # set true correlation
pars = default_pars(T=10000)
# bin the spike time
bin_size = 20 # [ms]
my_bin = np.arange(0, pars['T'], bin_size)
n_trials = 100 # 100 realizations
r12 = corr_coeff_pairs(pars, rate=poi_rate, c=c, trials=n_trials, bins=my_bin)
print(f'True corr coe = {c:.3f}')
print(f'Simu corr coe = {r12.mean():.3f}')
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_BiologicalNeuronModels/solutions/W2D3_Tutorial2_Solution_e5eaac3e.py) Sample output```True corr coe = 0.200Simu corr coe = 0.197``` --- Section 2: Investigate the effect of input correlation on the output correlation*Estimated timing to here from start of tutorial: 20 min*Now let's combine the aforementioned two procedures. We first generate the correlated inputs. Then we inject the correlated inputs $I_1, I_2$ into a pair of neurons and record their output spike times. We continue measuring the correlation between the output and investigate the relationship between the input correlation and the output correlation. In the following, you will inject correlated GWN in two neurons. You need to define the mean (`gwn_mean`), standard deviation (`gwn_std`), and input correlations (`c_in`).We will simulate $10$ trials to get a better estimate of the output correlation. Change the values in the following cell for the above variables (and then run the next cell) to explore how they impact the output correlation.
###Code
# Play around with these parameters
pars = default_pars(T=80000, dt=1.) # get the parameters
c_in = 0.3 # set input correlation value
gwn_mean = 10.
gwn_std = 10.
###Output
_____no_output_____
###Markdown
Do not forget to execute this cell to simulate the LIF
###Code
# @title
# @markdown Do not forget to execute this cell to simulate the LIF
bin_size = 10. # ms
starttime = time.perf_counter() # time clock
r12_ss, sp_ss, sp1, sp2 = LIF_output_cc(pars, mu=gwn_mean, sig=gwn_std, c=c_in,
bin_size=bin_size, n_trials=10)
# just the time counter
endtime = time.perf_counter()
timecost = (endtime - starttime) / 60.
print(f"Simulation time = {timecost:.2f} min")
print(f"Input correlation = {c_in}")
print(f"Output correlation = {r12_ss}")
plt.figure(figsize=(12, 6))
plt.plot(sp1, np.ones(len(sp1)) * 1, '|', ms=20, label='neuron 1')
plt.plot(sp2, np.ones(len(sp2)) * 1.1, '|', ms=20, label='neuron 2')
plt.xlabel('time (ms)')
plt.ylabel('neuron id.')
plt.xlim(1000, 8000)
plt.ylim(0.9, 1.2)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Think! 2: Input and Output Correlations- Is the output correlation always smaller than the input correlation? If yes, why?- Should there be a systematic relationship between input and output correlations? You will explore these questions in the next figure, but try to develop your own intuitions first! Let's vary `c_in` and plot the relationship between `c_in` and the output correlation. This might take some time depending on the number of trials. Don't forget to execute this cell!
###Code
#@title
#@markdown Don't forget to execute this cell!
pars = default_pars(T=80000, dt=1.) # get the parameters
bin_size = 10.
c_in = np.arange(0, 1.0, 0.1) # set the range for input CC
r12_ss = np.zeros(len(c_in)) # small mu, small sigma
starttime = time.perf_counter() # time clock
for ic in range(len(c_in)):
r12_ss[ic], sp_ss, sp1, sp2 = LIF_output_cc(pars, mu=10.0, sig=10.,
c=c_in[ic], bin_size=bin_size,
n_trials=10)
endtime = time.perf_counter()
timecost = (endtime - starttime) / 60.
print(f"Simulation time = {timecost:.2f} min")
plt.figure(figsize=(7, 6))
plot_c_r_LIF(c_in, r12_ss, mycolor='b', mylabel='Output CC')
plt.plot([c_in.min() - 0.05, c_in.max() + 0.05],
[c_in.min() - 0.05, c_in.max() + 0.05],
'k--', dashes=(2, 2), label='y=x')
plt.xlabel('Input CC')
plt.ylabel('Output CC')
plt.legend(loc='best', fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_BiologicalNeuronModels/solutions/W2D3_Tutorial2_Solution_71e76f4d.py) --- Section 3: Correlation transfer function*Estimated timing to here from start of tutorial: 30 min*The above plot of input correlation vs. output correlation is called the __correlation transfer function__ of the neurons. Section 3.1: How do the mean and standard deviation of the Gaussian white noise (GWN) affect the correlation transfer function?The correlations transfer function appears to be linear. The above can be taken as the input/output transfer function of LIF neurons for correlations, instead of the transfer function for input/output firing rates as we had discussed in the previous tutorial (i.e., F-I curve).What would you expect to happen to the slope of the correlation transfer function if you vary the mean and/or the standard deviation of the GWN of the inputs ? Execute this cell to visualize correlation transfer functions
###Code
# @markdown Execute this cell to visualize correlation transfer functions
pars = default_pars(T=80000, dt=1.) # get the parameters
no_trial = 10
bin_size = 10.
c_in = np.arange(0., 1., 0.2) # set the range for input CC
r12_ss = np.zeros(len(c_in)) # small mu, small sigma
r12_ls = np.zeros(len(c_in)) # large mu, small sigma
r12_sl = np.zeros(len(c_in)) # small mu, large sigma
starttime = time.perf_counter() # time clock
for ic in range(len(c_in)):
r12_ss[ic], sp_ss, sp1, sp2 = LIF_output_cc(pars, mu=10.0, sig=10.,
c=c_in[ic], bin_size=bin_size,
n_trials=no_trial)
r12_ls[ic], sp_ls, sp1, sp2 = LIF_output_cc(pars, mu=18.0, sig=10.,
c=c_in[ic], bin_size=bin_size,
n_trials=no_trial)
r12_sl[ic], sp_sl, sp1, sp2 = LIF_output_cc(pars, mu=10.0, sig=20.,
c=c_in[ic], bin_size=bin_size,
n_trials=no_trial)
endtime = time.perf_counter()
timecost = (endtime - starttime) / 60.
print(f"Simulation time = {timecost:.2f} min")
plt.figure(figsize=(7, 6))
plot_c_r_LIF(c_in, r12_ss, mycolor='b', mylabel=r'Small $\mu$, small $\sigma$')
plot_c_r_LIF(c_in, r12_ls, mycolor='y', mylabel=r'Large $\mu$, small $\sigma$')
plot_c_r_LIF(c_in, r12_sl, mycolor='r', mylabel=r'Small $\mu$, large $\sigma$')
plt.plot([c_in.min() - 0.05, c_in.max() + 0.05],
[c_in.min() - 0.05, c_in.max() + 0.05],
'k--', dashes=(2, 2), label='y=x')
plt.xlabel('Input CC')
plt.ylabel('Output CC')
plt.legend(loc='best', fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Think! 3.1: GWN and the Correlation Transfer FunctionWhy do both the mean and the standard deviation of the GWN affect the slope of the correlation transfer function? [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_BiologicalNeuronModels/solutions/W2D3_Tutorial2_Solution_2deb4ccb.py) Section 3.2: What is the rationale behind varying $\mu$ and $\sigma$?The mean and the variance of the synaptic current depends on the spike rate of a Poisson process. We can use something called [Campbell's theorem](https://en.wikipedia.org/wiki/Campbell%27s_theorem_(probability)) to estimate the mean and the variance of the synaptic current:\begin{align}\mu_{\rm syn} = \lambda J \int P(t) \\\sigma_{\rm syn} = \lambda J \int P(t)^2 dt\\\end{align}where $\lambda$ is the firing rate of the Poisson input, $J$ the amplitude of the postsynaptic current and $P(t)$ is the shape of the postsynaptic current as a function of time. Therefore, when we varied $\mu$ and/or $\sigma$ of the GWN, we mimicked a change in the input firing rate. Note that, if we change the firing rate, both $\mu$ and $\sigma$ will change simultaneously, not independently. Here, since we observe an effect of $\mu$ and $\sigma$ on correlation transfer, this implies that the input rate has an impact on the correlation transfer function. Think!: Correlations and Network Activity- What are the factors that would make output correlations smaller than input correlations? (Notice that the colored lines are below the black dashed line)- What does the fact that output correlations are smaller mean for the correlations throughout a network?- Here we have studied the transfer of correlations by injecting GWN. But in the previous tutorial, we mentioned that GWN is unphysiological. Indeed, neurons receive colored noise (i.e., Shot noise or OU process). How do these results obtained from injection of GWN apply to the case where correlated spiking inputs are injected in the two LIFs? Will the results be the same or different?Reference- De La Rocha, Jaime, et al. "Correlation between neural spike trains increases with firing rate." Nature (2007) (https://www.nature.com/articles/nature06028/)- Bujan AF, Aertsen A, Kumar A. Role of input correlations in shaping the variability and noise correlations of evoked activity in the neocortex. Journal of Neuroscience. 2015 Jun 3;35(22):8611-25. (https://www.jneurosci.org/content/35/22/8611) [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_BiologicalNeuronModels/solutions/W2D3_Tutorial2_Solution_be65591c.py) --- Summary*Estimated timing of tutorial: 50 minutes*In this tutorial, we studied how the input correlation of two LIF neurons is mapped to their output correlation. Specifically, we:- injected correlated GWN in a pair of neurons,- measured correlations between the spiking activity of the two neurons, and- studied how the transfer of correlation depends on the statistics of the input, i.e., mean and standard deviation.Here, we were concerned with zero time lag correlation. For this reason, we restricted estimation of correlation to instantaneous correlations. If you are interested in time-lagged correlation, then we should estimate the cross-correlogram of the spike trains and find out the dominant peak and area under the peak to get an estimate of output correlations. 
We leave this as a future to-do for you if you are interested.If you have time, check out the bonus video to think about responses of ensembles of neurons to time-varying input. --- Bonus --- Bonus Section 1: Ensemble ResponseFinally, there is a short BONUS lecture video on the firing response of an ensemble of neurons to time-varying input. There are no associated coding exercises - just enjoy. Video 2: Response of ensemble of neurons to time-varying input
###Code
# @title Video 2: Response of ensemble of neurons to time-varying input
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV18K4y1x7Pt", width=730, height=410, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="78_dWa4VOIo", width=730, height=410, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____ |
DL/week05/.ipynb_checkpoints/Practical_5-using-a-pretrained-convnet_img_size_50-checkpoint.ipynb | ###Markdown
Deep Learning Practical 5 - Using a pre-trained convnet AY2020/21 Semester
###Code
from tensorflow import keras
print('keras: ', keras.__version__)
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
import tensorflow as tf
physical_devices = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
###Output
keras: 2.2.4-tf
###Markdown
ObjectivesAfter completing this practical exercise, students should be able to utilize a pre-trained convnet through:1. [Feature extraction without and with data augmentation](fea)2. [Fine Tuning](fine)3. [Exercise- utilize another pre-trained model](exc) 1. Feature extraction Feature extraction consists of using the representations learned by a previous network to extract interesting features from new samples. These features are then run through a new classifier, which is trained from scratch. In our case, we use the convolutional base of the VGG16 network, trained on ImageNet, to extract interesting features from our cat and dog images, and then training a cat vs. dog classifier on top of these features.The VGG16 model, among others, comes pre-packaged with Keras. You can import it from the `tensorflow.keras.applications` module. Here's the list of image classification models (all pre-trained on the ImageNet dataset) that are available as part of `tensorflow.keras.applications`:* Xception* InceptionV3* ResNet50* VGG16* VGG19* MobileNetLet's instantiate the VGG16 model:
###Code
from tensorflow.keras.applications import VGG16
img_size = 50
conv_base = VGG16(weights='imagenet',
include_top=False,
input_shape=(img_size, img_size, 3))
###Output
_____no_output_____
###Markdown
We passed three arguments to the constructor:* `weights`, to specify which weight checkpoint to initialize the model from* `include_top`, which refers to including or not the densely-connected classifier on top of the network. By default, this densely-connected classifier would correspond to the 1000 classes from ImageNet. Since we intend to use our own densely-connected classifier (with only two classes, cat and dog), we don't need to include it.* `input_shape`, the shape of the image tensors that we will feed to the network. This argument is purely optional: if we don't pass it, then the network will be able to process inputs of any size.Here's the detail of the architecture of the VGG16 convolutional base: it's very similar to the simple convnets that you are already familiar with.
###Code
conv_base.summary()
###Output
Model: "vgg16"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 50, 50, 3)] 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 50, 50, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 50, 50, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 25, 25, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 25, 25, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 25, 25, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 12, 12, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 12, 12, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 12, 12, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 12, 12, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 6, 6, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 6, 6, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 6, 6, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 6, 6, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 3, 3, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 3, 3, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 3, 3, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 3, 3, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 1, 1, 512) 0
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0
_________________________________________________________________
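###Markdown
A quick aside on the `input_shape` argument discussed above (a minimal sketch, not part of the original exercise): if we omit it, the convolutional base is built with unspecified spatial dimensions and can process images of any size. We pass `weights=None` here only to skip downloading the checkpoint for this check.
###Code
flexible_base = VGG16(weights=None, include_top=False)
# (None, None, None, 3): batch size and image height/width are left unspecified
print(flexible_base.input_shape)
###Output
_____no_output_____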
###Markdown
The final feature map has shape `(1, 1, 512)`. That's the feature on top of which we will stick a densely-connected classifier.At this point, there are two ways we could proceed: * Feature extraction without data augmentation:Running the convolutional base over our dataset, recording its output to a Numpy array on disk, then using this data as input to a standalone densely-connected classifier. This solution is very fast and cheap to run, because it only requires running the convolutional base once for every input image, and the convolutional base is by far the most expensive part of the pipeline. However, for the exact same reason, this technique would not allow us to leverage data augmentation at all.* Feature extraction with data augmentation:Extending the model we have (`conv_base`) by adding `Dense` layers on top, and running the whole thing end-to-end on the input data. This allows us to use data augmentation, because every input image is going through the convolutional base every time it is seen by the model. However, for this same reason, this technique is far more expensive than the first one.We will cover both techniques. 1.1 Feature extraction without data augmentationWe will record the output of `conv_base` on our data and use these outputs as inputs to a new model.We will start by simply running instances of the previously-introduced `ImageDataGenerator` to extract images as Numpy arrays as well as their labels. We will extract features from these images simply by calling the `predict` method of the `conv_base` model.
###Code
import os
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator
#update base_dir to path where you saved the cats_and_dogs_small dataset
base_dir = 'C:/School/NP/Np_class/DL/week04/cats_and_dogs_small'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')
datagen = ImageDataGenerator(rescale=1./255)
batch_size = 20
def extract_features(directory, sample_count):
features = np.zeros(shape=(sample_count, 1, 1, 512))
labels = np.zeros(shape=(sample_count))
generator = datagen.flow_from_directory(
directory,
target_size=(img_size, img_size),
batch_size=batch_size,
class_mode='binary')
i = 0
for inputs_batch, labels_batch in generator:
features_batch = conv_base.predict(inputs_batch)
features[i * batch_size : (i + 1) * batch_size] = features_batch
labels[i * batch_size : (i + 1) * batch_size] = labels_batch
i += 1
if i * batch_size >= sample_count:
# Note that since generators yield data indefinitely in a loop,
# we must `break` after every image has been seen once.
break
return features, labels
train_features, train_labels = extract_features(train_dir, 2000)
validation_features, validation_labels = extract_features(validation_dir, 1000)
test_features, test_labels = extract_features(test_dir, 1000)
print(train_features.shape)
print(train_labels.shape)
###Output
(2000, 1, 1, 512)
(2000,)
###Markdown
The extracted features are currently of shape `(samples, 1, 1, 512)`. We will feed them to a densely-connected classifier, so first we must flatten them to `(samples, 1 * 1 * 512)`:
###Code
train_features = np.reshape(train_features, (2000, 1 * 1 * 512))
validation_features = np.reshape(validation_features, (1000, 1 * 1 * 512))
test_features = np.reshape(test_features, (1000, 1 * 1 * 512))
###Output
_____no_output_____
###Markdown
At this point, we can define our densely-connected classifier (note the use of dropout for regularization), and train it on the data and labels that we just recorded:
###Code
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras import optimizers
model = models.Sequential()
model.add(layers.Dense(256, activation='relu', input_shape=(1 * 1 * 512,)))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer=optimizers.RMSprop(lr=2e-5),
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(train_features, train_labels,
epochs=30,
batch_size=20,
validation_data=(validation_features, validation_labels))
###Output
Train on 2000 samples, validate on 1000 samples
Epoch 1/30
2000/2000 [==============================] - 1s 377us/sample - loss: 0.7994 - acc: 0.5120 - val_loss: 0.6859 - val_acc: 0.5570
Epoch 2/30
2000/2000 [==============================] - 0s 174us/sample - loss: 0.7350 - acc: 0.5215 - val_loss: 0.6654 - val_acc: 0.5910
Epoch 3/30
2000/2000 [==============================] - 0s 155us/sample - loss: 0.7131 - acc: 0.5475 - val_loss: 0.6534 - val_acc: 0.6180
Epoch 4/30
2000/2000 [==============================] - 0s 162us/sample - loss: 0.7073 - acc: 0.5610 - val_loss: 0.6429 - val_acc: 0.6390
Epoch 5/30
2000/2000 [==============================] - 0s 166us/sample - loss: 0.6730 - acc: 0.5885 - val_loss: 0.6346 - val_acc: 0.6530
Epoch 6/30
2000/2000 [==============================] - 0s 163us/sample - loss: 0.6646 - acc: 0.5950 - val_loss: 0.6270 - val_acc: 0.6640
Epoch 7/30
2000/2000 [==============================] - 0s 157us/sample - loss: 0.6564 - acc: 0.6250 - val_loss: 0.6205 - val_acc: 0.6720
Epoch 8/30
2000/2000 [==============================] - 0s 160us/sample - loss: 0.6527 - acc: 0.6190 - val_loss: 0.6147 - val_acc: 0.6770
Epoch 9/30
2000/2000 [==============================] - 0s 163us/sample - loss: 0.6351 - acc: 0.6210 - val_loss: 0.6094 - val_acc: 0.6850
Epoch 10/30
2000/2000 [==============================] - 0s 167us/sample - loss: 0.6218 - acc: 0.6555 - val_loss: 0.6046 - val_acc: 0.6820
Epoch 11/30
2000/2000 [==============================] - 0s 161us/sample - loss: 0.6389 - acc: 0.6345 - val_loss: 0.5999 - val_acc: 0.6860
Epoch 12/30
2000/2000 [==============================] - 0s 161us/sample - loss: 0.6111 - acc: 0.6665 - val_loss: 0.5960 - val_acc: 0.6890
Epoch 13/30
2000/2000 [==============================] - 0s 160us/sample - loss: 0.6153 - acc: 0.6445 - val_loss: 0.5916 - val_acc: 0.6900
Epoch 14/30
2000/2000 [==============================] - 0s 159us/sample - loss: 0.6064 - acc: 0.6750 - val_loss: 0.5877 - val_acc: 0.6940
Epoch 15/30
2000/2000 [==============================] - 0s 154us/sample - loss: 0.6099 - acc: 0.6700 - val_loss: 0.5843 - val_acc: 0.6950
Epoch 16/30
2000/2000 [==============================] - 0s 159us/sample - loss: 0.6104 - acc: 0.6585 - val_loss: 0.5808 - val_acc: 0.6980
Epoch 17/30
2000/2000 [==============================] - 0s 166us/sample - loss: 0.6012 - acc: 0.6745 - val_loss: 0.5777 - val_acc: 0.7020
Epoch 18/30
2000/2000 [==============================] - 0s 158us/sample - loss: 0.5987 - acc: 0.6815 - val_loss: 0.5748 - val_acc: 0.6970
Epoch 19/30
2000/2000 [==============================] - 0s 167us/sample - loss: 0.5808 - acc: 0.6940 - val_loss: 0.5719 - val_acc: 0.7010
Epoch 20/30
2000/2000 [==============================] - 0s 162us/sample - loss: 0.5864 - acc: 0.6750 - val_loss: 0.5694 - val_acc: 0.7050
Epoch 21/30
2000/2000 [==============================] - 0s 156us/sample - loss: 0.5794 - acc: 0.6950 - val_loss: 0.5668 - val_acc: 0.7040
Epoch 22/30
2000/2000 [==============================] - 0s 157us/sample - loss: 0.5695 - acc: 0.7105 - val_loss: 0.5639 - val_acc: 0.7050
Epoch 23/30
2000/2000 [==============================] - 0s 167us/sample - loss: 0.5672 - acc: 0.7075 - val_loss: 0.5622 - val_acc: 0.7150
Epoch 24/30
2000/2000 [==============================] - 0s 164us/sample - loss: 0.5707 - acc: 0.7110 - val_loss: 0.5596 - val_acc: 0.7110
Epoch 25/30
2000/2000 [==============================] - 0s 158us/sample - loss: 0.5705 - acc: 0.7170 - val_loss: 0.5572 - val_acc: 0.7110
Epoch 26/30
2000/2000 [==============================] - 0s 194us/sample - loss: 0.5606 - acc: 0.7140 - val_loss: 0.5551 - val_acc: 0.7120
Epoch 27/30
2000/2000 [==============================] - 0s 171us/sample - loss: 0.5685 - acc: 0.7020 - val_loss: 0.5535 - val_acc: 0.7170
Epoch 28/30
2000/2000 [==============================] - 0s 172us/sample - loss: 0.5601 - acc: 0.7110 - val_loss: 0.5515 - val_acc: 0.7150
Epoch 29/30
2000/2000 [==============================] - 0s 157us/sample - loss: 0.5535 - acc: 0.7095 - val_loss: 0.5499 - val_acc: 0.7180
Epoch 30/30
2000/2000 [==============================] - 0s 159us/sample - loss: 0.5538 - acc: 0.7120 - val_loss: 0.5482 - val_acc: 0.7180
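###Markdown
As an optional extra (a small sketch, not in the original exercise), the same standalone classifier can be evaluated on the pre-extracted and flattened test features:
###Code
test_loss, test_acc = model.evaluate(test_features, test_labels, verbose=0)
print('test acc:', test_acc)
###Output
_____no_output_____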
###Markdown
Training is very fast, since we only have to deal with two `Dense` layers -- an epoch takes less than one second even on CPU.Let's take a look at the loss and accuracy curves during training:
###Code
import matplotlib.pyplot as plt
%matplotlib inline
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
We reach a validation accuracy of about 72%, much better than what we could achieve previously with our small model trained from scratch. However, our plots also indicate that we are overfitting almost from the start -- despite using dropout with a fairly large rate. This is because this technique does not leverage data augmentation, which is essential to preventing overfitting with small image datasets. 1.2 Feature extraction with data augmentationNow, let's review the second technique we mentioned for doing feature extraction, which is much slower and more expensive, but which allows us to leverage data augmentation during training: extending the `conv_base` model and running it end-to-end on the inputs. Note that this technique is very computationally expensive and the program below may take a very long time to complete.Because models behave just like layers, you can add a model (like our `conv_base`) to a `Sequential` model just like you would add a layer. So you can do the following:
###Code
from tensorflow.keras import models
from tensorflow.keras import layers
model = models.Sequential()
model.add(conv_base) #VGG16 pretrained
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
This is what our model looks like now:
###Code
model.summary()
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
vgg16 (Model) (None, 1, 1, 512) 14714688
_________________________________________________________________
flatten (Flatten) (None, 512) 0
_________________________________________________________________
dense_2 (Dense) (None, 256) 131328
_________________________________________________________________
dense_3 (Dense) (None, 1) 257
=================================================================
Total params: 14,846,273
Trainable params: 14,846,273
Non-trainable params: 0
_________________________________________________________________
###Markdown
As you can see, the convolutional base of VGG16 has 14,714,688 parameters, which is very large. The classifier we are adding on top has only about 130 thousand parameters (131,585, as the parameter counts in the summary above show).Before we compile and train our model, a very important thing to do is to freeze the convolutional base. "Freezing" a layer or set of layers means preventing their weights from getting updated during training. If we don't do this, then the representations that were previously learned by the convolutional base would get modified during training. Since the `Dense` layers on top are randomly initialized, very large weight updates would be propagated through the network, effectively destroying the representations previously learned.In Keras, freezing a network is done by setting its `trainable` attribute to `False`:
###Code
conv_base.trainable = False
model.summary()
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
vgg16 (Model) (None, 1, 1, 512) 14714688
_________________________________________________________________
flatten (Flatten) (None, 512) 0
_________________________________________________________________
dense_2 (Dense) (None, 256) 131328
_________________________________________________________________
dense_3 (Dense) (None, 1) 257
=================================================================
Total params: 14,846,273
Trainable params: 131,585
Non-trainable params: 14,714,688
_________________________________________________________________
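###Markdown
A quick sanity check (sketch, not in the original exercise): after freezing `conv_base`, only the kernel and bias of each of the two `Dense` layers should remain in `model.trainable_weights`.
###Code
print('trainable weight tensors:', len(model.trainable_weights))  # expected: 4
for w in model.trainable_weights:
    print(w.name, tuple(w.shape))
###Output
_____no_output_____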
###Markdown
With this setup, only the weights from the two `Dense` layers that we added will be trained. Note that in order for these changes to take effect, we must first compile the model. If you ever modify weight trainability after compilation, you should then re-compile the model, or these changes would be ignored.Now we can start training our model, with the same data augmentation configuration that we used in our previous example:
###Code
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 50x50
target_size=(img_size, img_size),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(img_size, img_size),
batch_size=20,
class_mode='binary')
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=2e-5),
metrics=['acc'])
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50,
verbose=1)
model.save('cats_and_dogs_small_3.h5')
###Output
_____no_output_____
###Markdown
Let's plot our results again:
###Code
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
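###Markdown
Optionally (a small sketch), we can also print the best validation accuracy reached during this run instead of reading it off the plot:
###Code
print('best val acc:', max(history.history['val_acc']))
###Output
_____no_output_____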
###Markdown
As you can see, we reach a validation accuracy of about 72%. This is much better than our small convnet trained from scratch. 2. Fine-tuning Another widely used technique for model reuse, complementary to feature extraction, is _fine-tuning_. Fine-tuning consists of unfreezing a few of the top layers of a frozen model base used for feature extraction, and jointly training both the newly added part of the model (in our case, the fully-connected classifier) and these top layers. This is called "fine-tuning" because it slightly adjusts the more abstract representations of the model being reused, in order to make them more relevant for the problem at hand. We have stated before that it was necessary to freeze the convolutional base of VGG16 in order to be able to train a randomly initialized classifier on top. For the same reason, it is only possible to fine-tune the top layers of the convolutional base once the classifier on top has already been trained. If the classifier wasn't already trained, then the error signal propagating through the network during training would be too large, and the representations previously learned by the layers being fine-tuned would be destroyed. Thus the steps for fine-tuning a network are as follows:* 1) Add your custom network on top of an already trained base network.* 2) Freeze the base network.* 3) Train the part you added.* 4) Unfreeze some layers in the base network.* 5) Jointly train both these layers and the part you added.We have already completed the first 3 steps when doing feature extraction. Let's proceed with the 4th step: we will unfreeze our `conv_base`, and then freeze individual layers inside of it.As a reminder, this is what our convolutional base looks like:
###Code
conv_base.summary()
###Output
Model: "vgg16"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 50, 50, 3)] 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 50, 50, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 50, 50, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 25, 25, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 25, 25, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 25, 25, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 12, 12, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 12, 12, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 12, 12, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 12, 12, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 6, 6, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 6, 6, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 6, 6, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 6, 6, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 3, 3, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 3, 3, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 3, 3, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 3, 3, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 1, 1, 512) 0
=================================================================
Total params: 14,714,688
Trainable params: 0
Non-trainable params: 14,714,688
_________________________________________________________________
###Markdown
We will fine-tune the last 3 convolutional layers, which means that all layers up until `block4_pool` should be frozen, and the layers `block5_conv1`, `block5_conv2` and `block5_conv3` should be trainable.Why not fine-tune more layers? Why not fine-tune the entire convolutional base? We could. However, we need to consider that:* Earlier layers in the convolutional base encode more generic, reusable features, while layers higher up encode more specialized features. It is more useful to fine-tune the more specialized features, as these are the ones that need to be repurposed on our new problem. There are fast-diminishing returns in fine-tuning lower layers.* The more parameters we are training, the more we are at risk of overfitting. The convolutional base has 15M parameters, so it would be risky to attempt to train it on our small dataset.Thus, in our situation, it is a good strategy to only fine-tune the top 2 to 3 layers in the convolutional base.Let's set this up, starting from where we left off in the previous example:
###Code
conv_base.trainable = True
set_trainable = False
for layer in conv_base.layers:
if layer.name == 'block5_conv1':
        set_trainable = True # after block5_conv1, set_trainable becomes True
if set_trainable:
layer.trainable = True
else:
layer.trainable = False
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-5),
metrics=['acc'])
model.summary()
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
vgg16 (Model) (None, 1, 1, 512) 14714688
_________________________________________________________________
flatten (Flatten) (None, 512) 0
_________________________________________________________________
dense_2 (Dense) (None, 256) 131328
_________________________________________________________________
dense_3 (Dense) (None, 1) 257
=================================================================
Total params: 14,846,273
Trainable params: 7,211,009
Non-trainable params: 7,635,264
_________________________________________________________________
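###Markdown
As an optional check (sketch), we can list which layers of the convolutional base will actually be updated during fine-tuning:
###Code
for layer in conv_base.layers:
    if layer.trainable:
        print(layer.name)
# expected: block5_conv1, block5_conv2, block5_conv3 (plus block5_pool, which has no weights)
###Output
_____no_output_____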
###Markdown
Now we can start fine-tuning our network. We will do this with the RMSprop optimizer, using a very low learning rate. The reason for using a low learning rate is that we want to limit the magnitude of the modifications we make to the representations of the 3 layers that we are fine-tuning. Updates that are too large may harm these representations.Now let's proceed with fine-tuning:
###Code
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-5),
metrics=['acc'])
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=100,
validation_data=validation_generator,
validation_steps=50)
#model.save('cats_and_dogs_small_4.h5')
###Output
_____no_output_____
###Markdown
Let's plot our results using the same plotting code as before:
###Code
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The model accuracy has now improved further, to around 80%.We can now finally evaluate this model on the test data:
###Code
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(img_size, img_size),
batch_size=20,
class_mode='binary')
test_loss, test_acc = model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)
###Output
Found 1000 images belonging to 2 classes.
WARNING:tensorflow:From <ipython-input-22-045f6020024c>:7: Model.evaluate_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
Please use Model.evaluate, which supports generators.
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
test acc: 0.79
###Markdown
Here we get a test accuracy of about 79%. 3. Exercise - utilize another pretrained model Please utilize another pretrained model, e.g. (refer to https://keras.io/applications/ for details)* Xception* InceptionV3* ResNet50* VGG19* MobileNet* DenseNet* NASNet(**Note**: Different models have different requirements on minimum image size, so you may need to increase `img_size` accordingly)Implement the feature extraction, feature extraction with data augmentation, and fine-tuning you learned through this practical. Observe the training and validation accuracy curves. Provide your code and comments in the boxes below.
###Code
# Task 1: Feature Extraction without data augmentation
from tensorflow.keras.applications import NASNetMobile
img_size = 224
conv_base = NASNetMobile(include_top=False,
input_shape=(img_size, img_size, 3),
weights="imagenet")
conv_base.summary()
# Task 2: Feature Extraction with data augmentation
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras import optimizers
import matplotlib.pyplot as plt
import os
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator
base_dir = 'C:/School/NP/Np_class/DL/week04/cats_and_dogs_small'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')
model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
conv_base.trainable = False
model.summary()
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
    # All images will be resized to img_size x img_size (224x224 here)
target_size=(img_size, img_size),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(img_size, img_size),
batch_size=20,
class_mode='binary')
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=2e-5),
metrics=['acc'])
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50,
verbose=1)
%matplotlib inline
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(img_size, img_size),
batch_size=20,
class_mode='binary')
test_loss, test_acc = model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)
# Task 3: Fine tuning
conv_base.trainable = True
set_trainable = False
for layer in conv_base.layers:
if layer.name == 'normal_conv_1_12':
set_trainable = True
if set_trainable:
layer.trainable = True
else:
layer.trainable = False
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-5),
metrics=['acc'])
model.summary()
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-5),
metrics=['acc'])
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=100,
validation_data=validation_generator,
validation_steps=50)
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(img_size, img_size),
batch_size=20,
class_mode='binary')
test_loss, test_acc = model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)
###Output
Found 1000 images belonging to 2 classes.
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
test acc: 0.972
|
MarginalPrior.ipynb | ###Markdown
*In the [MutualInclinations](https://github.com/iancze/hierarchical-mutual-inclinations/blob/master/MutualInclinations.ipynb) notebook, we built a hierarchical Bayesian model to infer the distribution of mutual inclinations using a sample of circumbinary disks. At one point, we mentioned that instead of sampling in the azimuthal angle $\phi$, we could actually analytically marginalize it out of our prior. This notebook details the math of the marginalization, setting this up in PyMC3, and sampling it.* Ok, now that we've defined $\theta$ and $\phi$ in the disk frame and specified how to convert these back to the observer frame, let's get back to the hierarchical model. A mutual inclination prior $p(\theta)$ specifies the angle between the disk and binary unit angular momentum vectors, but not the azimuthal orientation of them ($\phi$). So, given some mutual inclination $\theta$, the prior on the position of the binary unit vector on the unit sphere looks like a ring centered on the disk unit vector. This means that in the frame of the disk, the prior on the binary unit vector location is separable, $$p(\theta, \phi) = p(\theta) p(\phi)$$ where $p(\phi)$ is uniform from $[0, 2\pi]$. At any point along the ring $\phi | \theta$ we can use the relationship $i_\star = \cos^{-1} Z$ to calculate $i_\star$. Writing this all out, we have a functional relationship$$\cos i_\star = f_\star(\cos i_\mathrm{disk}, \theta, \phi) = -\sqrt{1 - \cos^2 i_\mathrm{disk}} \sin \theta \sin \phi + \cos i_\mathrm{disk} \cos \theta$$Framing this differently, we can say that if we know $i_\mathrm{disk}$, $\theta$, and $\phi$, we essentially have a $\delta$-function prior on $\cos i_\star$$$p(\cos i_\star | \cos i_\mathrm{disk}, \theta, \phi) = \delta(\cos i_\star - f_\star(\cos i_\mathrm{disk}, \theta, \phi))$$As we mentioned, because we don't have any constraint on $\Delta \Omega$, we don't have any constraint on $\phi$ either (note that $\Delta \Omega \ne \phi$, but these quantities are related through the rotation matrices in a one-to-one manner once $\cos i_\mathrm{disk}$, $\theta$ and $\cos i_\star$ are specified). My mother told me never to sample tomorrow what I could analytically marginalize today (or was it never put off until tomorrow what I can do today?). So, we'd like to rearrange the math a little so we can derive a prior on $\cos i_\star$ that has been marginalized over our ignorance on $\phi$.We'll start by writing the joint distribution $$p(\cos i_\star, \phi | \cos i_\mathrm{disk}, \theta) = p(\cos i_\star | \cos i_\mathrm{disk}, \theta, \phi) p(\phi)$$$$p(\cos i_\star, \phi | \cos i_\mathrm{disk}, \theta) = \delta(\cos i_\star - f_\star(\cos i_\mathrm{disk}, \theta, \phi)) p(\phi)$$To derive the marginal distribution we'll integrate over $\phi$,$$p(\cos i_\star | \cos i_\mathrm{disk}, \theta) = \int_0^{2\pi} p(\cos i_\star, \phi | \cos i_\mathrm{disk}, \theta) \, \mathrm{d}\phi$$$$p(\cos i_\star | \cos i_\mathrm{disk}, \theta) = \int_0^{2\pi} \delta(\cos i_\star - f_\star(\cos i_\mathrm{disk}, \theta, \phi)) \, p(\phi) \, \mathrm{d}\phi$$ Let$$g(\phi) = \cos i_\star - f_\star(\cos i_\mathrm{disk}, \theta, \phi)$$$$g(\phi) = \cos i_\star + \sqrt{1 - \cos^2 i_\mathrm{disk}} \sin \theta \sin \phi - \cos i_\mathrm{disk} \cos \theta$$$$p(\cos i_\star | \cos i_\mathrm{disk}, \theta) = \frac{1}{2\pi} \int_0^{2\pi} \delta(g(\phi)) \, \mathrm{d}\phi$$To evaluate this integral, we'll use a neat trick of $\delta$ function composition to rewrite the argument into something more friendly to integrate.
The $\delta$ function composition rule states$$\delta(g(\phi)) = \sum_i \frac{\delta(\phi - \phi_i)}{|g^\prime (\phi_i)|}$$where $\phi_i$ are the roots of the equation. This part makes sense if you think about $\delta(g(\phi))$ as having a $\delta$ function pop up everywhere $g(\phi) = 0$ (i.e., the roots of the equation). The denominator of this equation, the slope of $g(\phi)$ evaluated at those roots, comes about from the more familiar $\delta$ function scaling relation, $$\delta(\alpha \phi) = \frac{\delta(\phi)}{|\alpha|}$$Basically, the $\delta$ functions will be wider in regions where $g(\phi)$ crosses $g = 0$ slowly and narrower in regions where $g(\phi)$ crosses $g = 0$ in a hurry. Thinking back to our original diagram (figure forthcoming), where we are tracing a ring on the surface of a sphere, we can make our lives simpler by reconsidering the limits of integration to cover only one side of the circle. Since our rotation convention has $\phi = 0$ starting at the 9 o'clock position, and we want to integrate from top to bottom, this means from $-\pi/2$ to $\pi/2$.$$p(\cos i_\star | \cos i_\mathrm{disk}, \theta) = \frac{1}{2\pi} \int_0^{2\pi} \delta(g(\phi)) \, \mathrm{d}\phi = \frac{1}{\pi} \int_{-\pi/2}^{\pi/2} \delta(g(\phi)) \, \mathrm{d}\phi$$The nice part about this change is that there is only one root in this domain, meaning that given $\cos i_\star$, $\cos i_\mathrm{disk}$, and $\theta$, there is exactly one $\phi$ in this range (the other is on the other half of the circle). Solving $g(\phi) = 0$ for the root, we have$$\sin \phi_0 = \frac{\cos i_\mathrm{disk} \cos \theta - \cos i_\star}{\sqrt{1 - \cos^2 i_\mathrm{disk}} \sin \theta}$$$$\phi_0 = \sin^{-1} \left [ \frac{\cos i_\mathrm{disk} \cos \theta - \cos i_\star}{\sqrt{1 - \cos^2 i_\mathrm{disk}} \sin \theta} \right ]$$The derivative of $g(\phi)$ is$$\frac{\mathrm{d} g}{\mathrm{d}\phi} = \sqrt{1 - \cos^2 i_\mathrm{disk}} \sin \theta \cos \phi$$Evaluating the derivative at the root, we have$$\left | \frac{\mathrm{d} g}{\mathrm{d}\phi} (\phi_i) \right | = \left | \sqrt{1 - \cos^2 i_\mathrm{disk}} \sin \theta \sqrt{1 - \left ( \frac{\cos i_\mathrm{disk} \cos \theta - \cos i_\star}{\sqrt{1 - \cos^2 i_\mathrm{disk}} \sin \theta} \right)^2} \; \right |$$Using these relationships, we have rewritten$$\frac{1}{\pi} \int_{-\pi/2}^{\pi/2} \delta(g(\phi)) \, \mathrm{d}\phi = \frac{1}{\pi} \left ( \left | \frac{\mathrm{d} g}{\mathrm{d}\phi} (\phi_i) \right | \right )^{-1} \int_{-\pi/2}^{\pi/2} \delta(\phi - \phi_0) \, \mathrm{d}\phi$$$$\frac{1}{\pi} \int_{-\pi/2}^{\pi/2} \delta(g(\phi)) \, \mathrm{d}\phi = \frac{1}{\pi} \left ( \left | \frac{\mathrm{d} g}{\mathrm{d}\phi} (\phi_i) \right | \right )^{-1} $$$$p(\cos i_\star | \cos i_\mathrm{disk}, \theta) = \frac{1}{\pi} \left | \sqrt{1 - \cos^2 i_\mathrm{disk}} \sin \theta \sqrt{1 - \left ( \frac{\cos i_\mathrm{disk} \cos \theta - \cos i_\star}{\sqrt{1 - \cos^2 i_\mathrm{disk}} \sin \theta} \right)^2} \; \right |^{-1} $$$$p(\cos i_\star | \cos i_\mathrm{disk}, \theta) = \frac{1}{\pi} \left | \sin i_\mathrm{disk} \sin \theta \sqrt{1 - \left ( \frac{\cos i_\mathrm{disk} \cos \theta - \cos i_\star}{\sin i_\mathrm{disk} \sin \theta} \right)^2} \; \right |^{-1} $$This doesn't look particularly pretty, but we can verify that we have this prior correct by generating random samples from $p(\phi | \cos i_\mathrm{disk}, \theta)$, calculating $\cos i_\star$ from them and then plotting the $p(\cos i_\star | \cos i_\mathrm{disk}, \theta)$ histogram, i.e., the marginal prior. 
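*(Added sketch, not part of the original derivation.)* The check described above takes only a few lines of numpy: for an arbitrary choice of $\cos i_\mathrm{disk}$ and $\theta$, sample $\phi$ uniformly, map to $\cos i_\star$, and overlay the closed-form marginal density.
###Code
# Monte Carlo check of the marginalized prior (the values of i_disk and theta are arbitrary)
import numpy as np
import matplotlib.pyplot as plt

deg = np.pi / 180.0
cos_i_disk, theta = np.cos(40.0 * deg), 20.0 * deg
sin_i_disk = np.sqrt(1.0 - cos_i_disk**2)

phis = np.random.uniform(0.0, 2.0 * np.pi, size=200000)
cos_i_star = -sin_i_disk * np.sin(theta) * np.sin(phis) + cos_i_disk * np.cos(theta)

grid = np.linspace(cos_i_star.min() + 1e-4, cos_i_star.max() - 1e-4, 500)
u0 = (cos_i_disk * np.cos(theta) - grid) / (sin_i_disk * np.sin(theta))
density = 1.0 / (np.pi * sin_i_disk * np.sin(theta) * np.sqrt(1.0 - u0**2))

plt.hist(cos_i_star, bins=100, density=True)
plt.plot(grid, density)
plt.xlabel(r"$\cos i_\star$")
###Output
_____no_output_____
###Markdown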
$u$-substitutionThis can also be calculated (slightly more easily) with $u$ substitution. We need to calculate the integral$$\frac{1}{\pi} \int_{-\pi/2}^{\pi/2} \delta(g(\phi))\, \mathrm{d}\phi$$Let $u = \sin \phi$, then $\mathrm{d}u = \cos \phi \, \mathrm{d} \phi$. So we have$$\mathrm{d}\phi = \frac{\mathrm{d}u}{\cos \phi} = \frac{\mathrm{d}u}{\sqrt{1 - u^2}}$$$$g(u) = \cos i_\star + \sin i_\mathrm{disk} \sin \theta \, u - \cos i_\mathrm{disk} \cos \theta$$$$g^\prime(u) = \sin i_\mathrm{disk} \sin \theta$$Solving for the $g(u) = 0$ root,$$u_0 = \frac{\cos i_\mathrm{disk} \cos \theta - \cos i_\star}{\sin i_\mathrm{disk} \sin \theta}$$and using the composition rule,$$\delta(g(u)) = \frac{\delta(u - u_0)}{|\sin i_\mathrm{disk} \sin \theta | }$$the integral becomes$$\frac{1}{\pi} \int_{-\pi/2}^{\pi/2} \delta(g(\phi))\, \mathrm{d}\phi = \frac{1}{\pi} \frac{1}{|\sin i_\mathrm{disk} \sin \theta |} \int_{-1}^1 \frac{\delta(u - u_0)}{\sqrt{1 - u^2}}\, \mathrm{d}u$$Now this is a $\delta$ function integral that is a bit easier to comprehend. It just picks up the value of the function $1 / \sqrt{1 - u^2}$ where $u = u_0$, so the final solution is $$\frac{1}{\pi} \frac{1}{|\sin i_\mathrm{disk} \sin \theta|} \frac{1}{\sqrt{1 - u_0^2}}$$which is the same thing as before. Now that we've specified this prior, we can write the full posterior with the rest of the terms.$$p_k(\cos i_\mathrm{disk}, \cos i_\star| \cos \hat{i}_\mathrm{disk}, \cos \hat{i}_\star, \alpha, \beta) = {\cal N}(\cos \hat{i}_\mathrm{disk}, \cos \hat{i}_\star | \cos i_\mathrm{disk}, \cos i_\star, \boldsymbol{\Sigma}) p(\cos i_\star | \cos i_\mathrm{disk}, \theta) p(\cos i_\mathrm{disk}) p(\theta | \alpha, \beta)$$This posterior looks a little bit different than before. For this single disk $k$, it looks quite simple, since it only has two parameters, $\cos i_\mathrm{disk}$ and $\cos i_\star$. Looking more closely, we see that this whole posterior is conditional on the prior for the mutual inclinations, which we've specified with a $\beta$ distribution $p_\beta$, parameterized by $\alpha$ and $\beta$. With the full sample of disks, we will seek to infer $\alpha$ and $\beta$ as well. This is the hierarchical nature of the problem. Just trying to experiment with implementing the marginal prior.
It has the form$$p(\cos i_\star | \cos i_\mathrm{disk}, \theta) = \frac{1}{\pi} \left | \sin i_\mathrm{disk} \sin \theta \sqrt{1 - \left ( \frac{\cos i_\mathrm{disk} \cos \theta - \cos i_\star}{\sin i_\mathrm{disk} \sin \theta} \right)^2} \; \right |^{-1} $$Since we know that $i_\mathrm{disk}$ and $\theta$ will always be in the range $[0, \pi]$, the absolute value bars are not necessary.$$p(\cos i_\star | \cos i_\mathrm{disk}, \theta) = \frac{1}{\pi} \left ( \sin i_\mathrm{disk} \sin \theta \sqrt{1 - \left ( \frac{\cos i_\mathrm{disk} \cos \theta - \cos i_\star}{\sin i_\mathrm{disk} \sin \theta} \right)^2} \; \right )^{-1} $$and (dropping constants)$$\ln p(\cos i_\star | \cos i_\mathrm{disk}, \theta) = - \ln \sin i_\mathrm{disk} - \ln \sin(\theta) - \frac{1}{2} \ln \left [ 1 - \left ( \frac{\cos i_\mathrm{disk} \cos \theta - \cos i_\star}{\sin i_\mathrm{disk} \sin \theta} \right )^2\right]$$We can calculate the domain of this distribution by realizing $$1 - \left ( \frac{\cos i_\mathrm{disk} \cos \theta - \cos i_\star}{\sin i_\mathrm{disk} \sin \theta} \right )^2 > 0$$or $$-1 < \left (\frac{\cos i_\mathrm{disk} \cos \theta - \cos i_\star}{\sin i_\mathrm{disk} \sin \theta} \right ) < 1$$or$$\cos i_\mathrm{disk} \cos \theta - \sin i_\mathrm{disk} \sin \theta < \cos i_\star < \sin i_\mathrm{disk} \sin \theta + \cos i_\mathrm{disk} \cos \theta$$ Let's make this a bit more involved by introducing a transformation with a variable $y$ that always ranges over $[0, 1]$ and is rescaled to $\cos i_\star$. Then we evaluate this marginal prior on $\cos i_\star$.Intercept, evaluated at $y = 0$:$$\cos i_\star = \cos i_\mathrm{disk} \cos \theta - \sin i_\mathrm{disk} \sin \theta$$Slope: the value at $y = 1$ divided by the value at $y = 0$:$$m = \frac{\sin i_\mathrm{disk} \sin \theta + \cos i_\mathrm{disk} \cos \theta}{\cos i_\mathrm{disk} \cos \theta - \sin i_\mathrm{disk} \sin \theta}$$So the relationship becomes $$\cos i_\star = \cos i_\mathrm{disk} \cos \theta - \sin i_\mathrm{disk} \sin \theta + \left [ (\sin i_\mathrm{disk} \sin \theta + \cos i_\mathrm{disk} \cos \theta) - (\cos i_\mathrm{disk} \cos \theta - \sin i_\mathrm{disk} \sin \theta) \right ] y$$Turns out this linear scaling doesn't work so great for the sampling. I'm not exactly sure why, but I have a feeling it has to do with the fact that all of the probability is at the tails. This may still be true, but at least right now the immediate problem seems to be that I'm evaluating `NaN`.We could try a different parameterization.
###Code
# code here
%matplotlib inline
%config InlineBackend.figure_format = "retina"
from matplotlib import rcParams
rcParams["savefig.dpi"] = 100
rcParams["figure.dpi"] = 100
rcParams["font.size"] = 20
# disable tex formatting (for Linux)
rcParams["text.usetex"] = False
import matplotlib.pyplot as plt
import numpy as np
import theano.tensor as tt
import pymc3 as pm
deg = np.pi/180.0
def get_limits(cos_i_disk, theta):
'''
theta in radians
'''
sin_i_disk = np.sqrt(1 - cos_i_disk**2)
low = cos_i_disk * np.cos(theta) - sin_i_disk * np.sin(theta)
high = sin_i_disk * np.sin(theta) + cos_i_disk * np.cos(theta)
return low, high
def p_marg(cos_i_star, cos_i_disk, theta):
'''
theta in radians
'''
sin_i_disk = np.sqrt(1 - cos_i_disk**2)
return 1 / (np.pi * np.sqrt(1 - cos_i_disk**2) * np.sin(theta) * \
np.sqrt(1 - ((cos_i_disk * np.cos(theta) - cos_i_star) / \
(np.sqrt(1 - cos_i_disk**2) * np.sin(theta)))**2))
# plot the marginal prior for fixed value of cos_i_disk, for many values of theta
cos_i_disk = np.cos(40.0 * deg)
fig, ax = plt.subplots(nrows=1)
ax.axvspan(cos_i_disk - 0.22, cos_i_disk + 0.02, color="0.7")
for theta in np.array([5.0, 10.0, 20.0, 30.0, 40.0]) * deg:
low, high = get_limits(cos_i_disk, theta)
cos_i_stars = np.linspace(low+0.1*deg, high - 0.1*deg, num=100)
p = p_marg(cos_i_stars, cos_i_disk, theta)
ax.plot(cos_i_stars, p)
ax.axvline(cos_i_disk, color="0.3")
# choose a cos i_disk, and theta value
cos_i_disk = np.cos(40.0 * deg)
theta = 5.0 * deg
def lnp(cos_i_star): #, cos_i_disk, theta):
sin_i_disk = tt.sqrt(1 - cos_i_disk**2)
# limits on where function is defined
low = cos_i_disk - sin_i_disk * np.sin(theta)
high = sin_i_disk * np.sin(theta) + cos_i_disk * np.cos(theta)
# specify the bounds via returning -inf
return tt.switch(tt.and_(tt.lt(low, cos_i_star), tt.lt(cos_i_star, high)), -tt.log(sin_i_disk) - tt.log(tt.sin(theta)) - \
0.5 * tt.log(1 - ((cos_i_disk * tt.cos(theta) - cos_i_star)/(sin_i_disk * tt.sin(theta)))**2), -1.0)
# for now these are just fixed
cos_i_disk = np.cos(40.0 * deg)
theta = 5.0 * deg
# low, high = get_limits(cos_i_disk, theta)
with pm.Model() as model:
# Now, we will use cos_i_disk and theta to calculate the prior on cos_i_star
# sin_i_disk = tt.sqrt(1.0 - cos_i_disk**2)
# see https://discourse.pymc.io/t/first-attempt-at-a-hierarchical-model-with-custom-likelihood/1928
# for conditional prior help
# the domain of this dist is between
# def lnp(cos_i_star):
# return -tt.log(sin_i_disk) - tt.log(tt.sin(theta)) - \
# 0.5 * tt.log(1 - ((cos_i_disk * tt.cos(theta) - cos_i_star)/(sin_i_disk * tt.sin(theta)))**2)
# the disk inclination is probably a good place to start
cos_i_star = pm.DensityDist("cos_i_star", lnp, testval=cos_i_disk)
p1 = tt.printing.Print('cos_i_star')(cos_i_star)
###Output
cos_i_star __str__ = 0.7660444431189781
###Markdown
$$\cos i_\star = \cos i_\mathrm{disk} - \sin i_\mathrm{disk} \sin \theta + \left [ (\sin i_\mathrm{disk} \sin \theta + \cos i_\mathrm{disk} \cos \theta) - (\cos i_\mathrm{disk} - \sin i_\mathrm{disk} \sin \theta) \right ] y$$ $$\ln p(\cos i_\star | \cos i_\mathrm{disk}, \theta) = - \ln \sin i_\mathrm{disk} - \ln \sin(\theta) - \frac{1}{2} \ln \left [ 1 - \left ( \frac{\cos i_\mathrm{disk} \cos \theta - \cos i_\star}{\sin i_\mathrm{disk} \sin \theta} \right )^2\right]$$
###Code
# instantiate a PyMC3 model class
with pm.Model() as marginal_model:
mu = pm.Uniform("mu", lower=0.0, upper=1.0)
var = pm.Uniform("var", lower=0.0, upper=(mu * (1 - mu)))
# convert these back to alpha and beta and put priors on the fact that
# alpha and beta should be greater than 1.0
# intermediate mutual inclination variable
x = pm.Beta("x", mu=mu, sd=tt.sqrt(var), shape=N_systems)
# Convert from the intermediate variable to the actual mutual inclination angle we want
cos_theta = pm.Deterministic("cos_theta", 1 - 2 * x)
# Enforce the geometrical prior on i_disk, as before
# Testval tells the chain to start in the center of the posterior.
cos_i_disk = pm.Uniform("cosIdisk", lower=-1.0, upper=1.0, shape=N_systems, \
testval=np.cos(sample["i_disk"].data * deg))
# Now, we will use cos_i_disk, theta, and phi to calculate cos_i_star
sin_i_disk = tt.sqrt(1.0 - cos_i_disk**2)
sin_theta = tt.sqrt(1.0 - cos_theta**2)
# intermediate stellar cos variable
y = pm.Uniform("y", lower=0.0, upper=1.0, shape=N_systems)
# transform to cos_i_star
cos_i_star = pm.Deterministic("cosIstar", cos_i_disk - sin_i_disk * sin_theta + \
((sin_i_disk * sin_theta + cos_i_disk * cos_theta) - \
(cos_i_disk - sin_i_disk * sin_theta)) * y)
# calculate the potential from the prior
pm.Potential("cos_i_prior", -pm.math.log(sin_i_disk) - pm.math.log(sin_theta) - \
0.5 * pm.math.log(1 - ((cos_i_disk * cos_theta - cos_i_star)/(sin_i_disk * sin_theta))**2))
# Finally, we define the likelihood by conditioning on the observations using a Normal
obs_disk = pm.Normal("obs_disk", mu=cos_i_disk, sd=sd_disk, observed=np.cos(sample["i_disk"].data * deg))
obs_star = pm.Normal("obs_star", mu=cos_i_star, sd=sd_star, observed=np.cos(sample["i_star"].data * deg))
# choose a cos i_disk, and theta value
cos_i_disk = np.cos(40.0 * deg)
theta = 5.0 * deg
phis = np.random.uniform(0, 2 * np.pi, size=100000)
cos_i_stars = -np.sqrt(1 - cos_i_disk**2) * np.sin(theta) * np.sin(phis) + cos_i_disk * np.cos(theta)
fig, ax = plt.subplots(nrows=4, figsize=(6, 10))
# res = plt.hist(cos_i_stars, bins=40, density=True)
# heights, bins, patches = res
# ax[0].plot(bins[1:-1], p_marg(bins[1:-1], cos_i_disk, theta)) # it will blow up at the edges
ax[0].hist(-np.sqrt(1 - cos_i_disk**2) * np.sin(5.0 * deg) * np.sin(phis) + \
cos_i_disk * np.cos(5.0 * deg), bins=40, density=True)
ax[1].hist(-np.sqrt(1 - cos_i_disk**2) * np.sin(10.0 * deg) * np.sin(phis) + \
cos_i_disk * np.cos(10.0 * deg), bins=40, density=True)
ax[2].hist(-np.sqrt(1 - cos_i_disk**2) * np.sin(20.0 * deg) * np.sin(phis) + \
cos_i_disk * np.cos(20.0 * deg), bins=40, density=True)
ax[3].hist(-np.sqrt(1 - cos_i_disk**2) * np.sin(30.0 * deg) * np.sin(phis) + \
cos_i_disk * np.cos(30.0 * deg), bins=40, density=True)
for a in ax:
a.axvline(cos_i_disk, color="0.3")
a.set_xlabel(r"$\cos i_\star$");
###Output
_____no_output_____
###Markdown
Notes on this are here: https://docs.pymc.io/Probability_Distributions.html
###Code
# This snippet follows the custom-distribution example from the PyMC3 docs linked above;
# here `log` stands for pm.math.log, and `λ`, `failure`, and `t` are placeholders that
# would need to be defined before this cell can run.
def logp(failure, value):
    return (failure * log(λ) - λ * value).sum()
exp_surv = pm.DensityDist('exp_surv', logp, observed={'failure':failure, 'value':t})
###Output
_____no_output_____ |
notebooks/Archive/20210304-mdl-daen690-IntermediateDataset-Patients.ipynb | ###Markdown
Merge and Prepare Intermediate Datasets -- PatientsThis notebook is the data engineering for creating the intermediate dataset for the "Patients" dataset. The script reads in the raw data provided by our project partner, drops records where FRDPersonnelStartDate is NULL, removes remaining duplicates, adds a tenure-by-month attribute, converts categorical data into numeric, and adds new numeric columns utilizing the "One Hot Encoding" method, then outputs the dataframe to a CSV file. I will document and comment the notebook later. Enjoy!
###Code
# Function to identify and print easy-to-understand variable types
def get_var_category(series):
unique_count = series.nunique(dropna=False)
total_count = len(series)
if pd.api.types.is_numeric_dtype(series):
return 'Numerical'
elif pd.api.types.is_datetime64_dtype(series):
return 'Date'
elif unique_count == total_count:
return 'Text (Unique)'
else:
return 'Categorical'
def print_categories(df):
for column_name in df.columns:
print(column_name, ": ", get_var_category(df[column_name]))
import numpy as np
import pandas as pd
# Setup HTML display
from IPython.core.display import display, HTML
# Notebook cell width adjustment
display(HTML('<style>.container { width:80% !important; }</style>'))
dfPatients = pd.read_excel(r'./data/20210225-ems-raw-v04.xlsx',
sheet_name='Patients',
na_values=['NA'])
dfPatients.shape
print_categories(dfPatients)
dfPatients = dfPatients.drop(dfPatients[(dfPatients.FRDPersonnelStartDate.isnull())].index)
dfPatients.shape
# Drop duplicate records
dfPatients_dedup = pd.DataFrame.drop_duplicates(dfPatients)
dfPatients_dedup.shape
# diff 543774 - 543760 = 14
# Add TenureMonths, which is the count of months working at the time of the dispatch call
dfPatients_dedup['TenureMonths'] = ((dfPatients_dedup.loc[:, 'DispatchTime'].dt.date - \
dfPatients_dedup.loc[:, 'FRDPersonnelStartDate'].dt.date) / \
np.timedelta64(1, 'M')).astype(int)
dfPatients_dedup.shape
# the error is just a warning; need to figure out why it appears even though we use .loc
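# (Added suggestion, not in the original script): the warning usually disappears if the
# deduplicated frame is made an explicit copy first, e.g.
#   dfPatients_dedup = pd.DataFrame.drop_duplicates(dfPatients).copy()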
dfPatients_dedup.head()
dfPatients_dedup['Shift'].value_counts()
# First just going to add a factorized column starting with 1 as opposed to the normal Python 0
dfPatients_dedup['ShiftCode'] = pd.factorize(dfPatients_dedup['Shift'])[0] + 1
dfPatients_dedup.head()
dfPatients_dedup['ShiftCode'].value_counts()
# For fun and experimentation, going to use the "One Hot Encoding" method via get_dummies to create a 1/0 indicator column for each shift (A, B, and C)
dingdongs = pd.get_dummies(dfPatients_dedup['Shift'])
dingdongs.head()
dingdongs.value_counts()
dingdongs.rename(columns={'A - Shift': 'Shift_A', 'B - Shift': 'Shift_B', 'C - Shift': 'Shift_C'}, inplace=True)
dingdongs.value_counts()
dfPatients_dedup = pd.concat([dfPatients_dedup, dingdongs], axis=1)
dfPatients_dedup.head()
dfPatients_dedup['UnitId'].value_counts()
dfPatients_dedup['UnitIdCode'] = pd.factorize(dfPatients_dedup['UnitId'])[0] + 1
dfPatients_dedup['UnitIdCode'].value_counts()
dfPatients_dedup['PatientOutcome'].value_counts()
dfPatients_dedup['PatientOutcomeCode'] = pd.factorize(dfPatients_dedup['PatientOutcome'])[0] + 1
dfPatients_dedup['PatientOutcomeCode'].value_counts()
dfPatients_dedup['PatientGender'].value_counts()
dfPatients_dedup['FRDPersonnelGender'].value_counts()
dfPatients_dedup['ProviderGenderCode'] = pd.factorize(dfPatients_dedup['FRDPersonnelGender'])[0] + 1
dfPatients_dedup['ProviderGenderCode'].value_counts()
dfPatients_dedup.head()
print_categories(dfPatients_dedup)
dfPatients_dedup.to_csv(r'./data/dfPatients_dedup.csv', index = True)
###Output
_____no_output_____ |
Applied_Alg_sem_7_Interpolation.ipynb | ###Markdown
Session 7. Applied Algebra and Numerical Methods. Chebyshev Polynomials
###Code
!python -m pip install sympy --upgrade
!python -m pip install numpy --upgrade
import numpy as np
import sympy
from sympy import S
from sympy.functions.special.polynomials import chebyshevt, chebyshevu
from numpy.polynomial.chebyshev import chebinterpolate
import matplotlib.pyplot as plt
import pandas as pd
from google.colab import files
%matplotlib inline
import sympy
import numpy
sympy.__version__, numpy.__version__
###Output
_____no_output_____
###Markdown
scipy.linalg.norm: https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.norm.html
numpy.linalg.norm: https://numpy.org/doc/stable/reference/generated/numpy.linalg.norm.html
Chebyshev polynomial of the first kind: $T_0(x) = 1$, $T_1(x) = x$, $T_n(x) = 2xT_{n - 1}(x) - T_{n - 2}(x)$, $n\ge 2$. Example 1. Let's plot the graphs of the Chebyshev polynomials of the first kind for $n = 4, 5$.
###Code
x = S('x')
def T(n, x):
if n == 0:
return 1
if n == 1:
return x
return sympy.expand(sympy.simplify(2*x*T(n - 1, x) - T(n - 2, x)))
display(T(4, x), T(5, x))
p = sympy.plot(T(4, x), (x, -1, 1), line_color='green', show=False)
p.extend(sympy.plot(T(5, x), (x, -1, 1), line_color='red', show=False))
p.show()
###Output
_____no_output_____
###Markdown
Example 2.
$$
T_n(x) = \frac{(x + \sqrt{x^2 - 1})^n + (x - \sqrt{x^2 - 1})^n}{2}, \quad |x| \ge 1.
$$
Let's build the Chebyshev polynomial of the first kind of order 15 using the formula above and using the recursive formula from Example 1.
On a grid of $x$ values from 2 to 3 with step 0.001, compute the norms of the differences between the Chebyshev polynomial values obtained in the two ways.
###Code
def T_new(n, x):
return ((x + sympy.sqrt(x**2 - 1))**n + (x - sympy.sqrt(x**2 - 1))**n)/2
X = np.linspace(2, 3, 1001)
Y1 = sympy.lambdify(x, T(15, x))(X)
Y2 = sympy.lambdify(x, T_new(15, x))(X)
print(*[np.linalg.norm(Y1 - Y2, k) for k in [1, 2, 3]])
plt.plot(X, Y1, 'g-', label=sympy.latex(sympy.Eq(S('T15(x)'), T(15, x)), mode='inline'))
plt.plot(X, Y2, 'r:', label=sympy.latex(sympy.Eq(S('newT15(x)'), T_new(15, x)), mode='inline'))
plt.legend()
###Output
0.020938336849212646 0.0013388702364580465 0.0006147238041184818
###Markdown
Example 3
Chebyshev polynomial of the second kind:
$$
U_n(x) = \frac{1}{n + 1}T'_{n + 1}(x), \quad n \ge 0.
$$
$$
U_n(x) = \frac{(x + \sqrt{x^2 - 1})^{n + 1} - (x - \sqrt{x^2 - 1})^{n + 1}}{2\sqrt{x^2 - 1}}, \quad |x| \ge 1.
$$
Let's build the Chebyshev polynomial of degree 7 in two ways and compare the norms of the differences on a grid as in Example 2.
###Code
def U(n, x):
return sympy.expand(sympy.simplify(T(n + 1, x).diff(x)/(n + 1)))
def U_new(n, x):
return sympy.expand(sympy.simplify(sympy.expand(((x + sympy.sqrt(x**2 - 1))**(n + 1) - (x - sympy.sqrt(x**2 - 1))**(n + 1)))/(2*sympy.sqrt(x**2 - 1))))
def U1(n, x):
return sympy.expand(sympy.simplify(T_new(n + 1, x).diff(x)/(n + 1)))
display(U(7, x), U_new(7, x), U1(7, x))
###Output
_____no_output_____
###Markdown
Chebyshev polynomials in Sympy.
The module sympy.functions.special.polynomials provides chebyshevt and chebyshevu,
which return the Chebyshev polynomial of the first and second kind respectively; the arguments are the order of the polynomial and the variable.
Example 4.
Let's build the Chebyshev polynomials of the first and second kind of order 5:
###Code
display(chebyshevt(5, x), chebyshevu(5, x))
###Output
_____no_output_____
###Markdown
Example 5
The Chebyshev norm (maximum of the absolute value on a given interval):
$$
|f|_0 = \max_{[-1, 1]}|f(x)|.
$$
The area under the graph of the function on a given interval:
$$
|f|_1 = \int_{-1}^1|f(x)|\,dx.
$$
Compute the Chebyshev norm and the area under the graph for $f(x) = \frac{x^3}{e^x}$ on the interval [-1, 1].
###Code
x = S('x')
def f5(x):
return x**3/sympy.exp(x)
# note: this takes the maximum of f itself; the Chebyshev norm defined above uses |f(x)|
f5_norm0 = sympy.calculus.util.maximum(f5(x), x, domain=sympy.Interval(-1, 1))
f5_norm1 = sympy.Abs(f5(x)).integrate((x, -1, 1))
display('|f5|0 = {0} = {1}'.format(f5_norm0, round(f5_norm0, 3)),
        '|f5|1 = {0} = {1}'.format(f5_norm1, round(f5_norm1, 3)))
###Output
_____no_output_____
###Markdown
Example 6
The best approximation of a function $f(x)$ by a polynomial of degree $\le n$:
$$
\tilde{f}(x) = \sum_{i=0}^n \frac{\langle f, T_i\rangle}{\langle T_i, T_i\rangle}T_i(x),
$$
where $T_i(x)$ is the Chebyshev polynomial of the first kind of degree $i$,
$$
\langle f, g\rangle = \int_{-1}^1 \frac{f(x)g(x)}{\sqrt{1 - x^2}}\,dx, \quad
|f| = \sqrt{\langle f, f\rangle} = \sqrt{\int_{-1}^1 \frac{f^2(x)}{\sqrt{1 - x^2}}\,dx}
$$
Let's build the polynomial of degree at most 3 that best approximates $x^5 - 1$ on [-1, 1].
###Code
x = S('x')
def f6(x):
return x**5 - 1
def dot_prod_cheb(f, g, x):
return (f*g/sympy.sqrt(1 - x**2)).integrate((x, -1, 1))
def f_cheb(f, n, x):
res = 0
for k in range(n + 1):
cheb_k = chebyshevt(k, x)
coef = dot_prod_cheb(cheb_k, f, x)/dot_prod_cheb(cheb_k, cheb_k, x)
print(coef)
res += coef*cheb_k
return res
res6 = f_cheb(f6(x), 3, x)
sympy.plot((res6, (x, -1, 1)), (f6(x), (x, -1, 1)))
display(res6)
###Output
-1
5/8
0
5/16
###Markdown
Best approximation of a function $f(x)$ by a Chebyshev polynomial, Numpy.
https://numpy.org/doc/stable/reference/generated/numpy.polynomial.chebyshev.Chebyshev.html
To build the Chebyshev polynomial we use chebinterpolate from
numpy.polynomial.chebyshev
The first (required) argument of chebinterpolate is the function name, the second is the order of the polynomial.
chebinterpolate returns the coefficients $a_i$ of the Chebyshev polynomials, in order of increasing degree:
$$
f(x) = a_0T_0 + a_1T_1 + ... + a_nT_n.
$$
Example 7
Let's build the Chebyshev polynomial for Example 6 using chebinterpolate.
###Code
res7 = chebinterpolate(f6, 3)
res7
###Output
_____no_output_____
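###Markdown
As a side note (a small sketch, not part of the original worksheet): a coefficient vector in this ordering can be evaluated directly with `numpy.polynomial.chebyshev.chebval`, without assembling a symbolic polynomial first.
###Code
from numpy.polynomial.chebyshev import chebval
X7 = np.linspace(-1, 1, 5)
print(chebval(X7, res7))  # evaluates sum_k res7[k] * T_k(X7)
print(f6(X7))             # the target x**5 - 1, for comparison
###Output
_____no_output_____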
###Markdown
Let's represent the function $f_6$ in terms of Chebyshev polynomials:
###Code
res7poly = res7[0]
for k in range(1, len(res7)):
res7poly += res7[k]*chebyshevt(k, x)
display(res7poly)
###Output
_____no_output_____
###Markdown
Let us plot on one figure the function $f_6$ and its representations by Chebyshev polynomials obtained with Sympy and Numpy:
###Code
p = sympy.plot(res7poly, (x, -1, 1), line_color='green', show=False)
p.extend(sympy.plot(f6(x), (x, -1, 1), line_color='black', show=False))
p.extend(sympy.plot(res6, (x, -1, 1), line_color='red', show=False))
p.show()
###Output
_____no_output_____
###Markdown
The representations of $f_6$ obtained with Sympy and Numpy differ slightly.
Let us compare the norms of the differences between the function $f_6$ and its expansions:
###Code
for item in (res7poly, res6):
f7_norm0 = sympy.calculus.util.maximum(f6(x) - item, x, domain=sympy.Interval(-1, 1))
f7_norm1 = sympy.Abs(f6(x) - item).integrate((x, -1, 1))
display('|f7|0 = {0}'.format(round(f7_norm0, 3)),
'|f7|1 = {0}'.format(round(f7_norm1, 3)))
###Output
_____no_output_____
###Markdown
As can be seen, the Numpy expansion has the smaller maximum absolute deviation from the function on [-1, 1], while the Sympy expansion has the smaller area between the graphs of the function and its expansion.
Let us see how Numpy and Sympy expand $x^5 - 1$ over Chebyshev polynomials into a polynomial of degree at most 5 (the result should be exactly $x^5 - 1$).
###Code
res10= chebinterpolate(lambda x: x**5 - 1, 5)
res10poly = res10[0]
for k in range(1, len(res10)):
res10poly += res10[k]*chebyshevt(k, x)
res10poly
x = S('x')
def f0(x):
return x**5 - 1
res = f_cheb(f0(x), 5, x)
sympy.plot((res, (x, -1, 1)), (f0(x), (x, -1, 1)))
display(res)
###Output
-1
5/8
0
5/16
0
1/16
###Markdown
Addendum on Chebyshev polynomials
###Code
x = S('x')
def cos_n(n, x):
return(chebyshevt(n, sympy.cos(x)))
for k in range(5):
display(cos_n(k, x))
###Output
_____no_output_____ |
project/Blessing_Bassey_CTC.ipynb | ###Markdown
Part 1 : contrastive predictive coding
Contrastive Predictive Coding (CPC) is a method of unsupervised training for speech models. The idea behind it is pretty simple:
1. The raw audio wave is passed through a convolutional network: the ```encoder```
2. Then, the encoder's output is given to a recurrent network: the ```context```
3. A third party network, the ```prediction_network```, will try to predict the future embeddings of the encoder using the output of the context network.
In order to avoid a collapse to trivial solutions, the prediction_network doesn't try to reconstruct the future features. Instead, using the context output $c_t$ at time $t$, it is trained to discriminate the real encoder representation $g_{t+k}$ at time $t+k$ from several other features $(g_n)_n$ taken elsewhere in the batch. Thus the loss becomes:
\\[ \mathcal{L}_c = - \frac{1}{K} \sum_{k=1}^K \text{Cross_entropy}(\phi_k(c_t), g_{t+k}) \\]
Or:
\\[ \mathcal{L}_c = - \frac{1}{K} \sum_{k=1}^K \log \frac{ \exp\left(\phi_k(c_t)^\top g_{t+k}\right) }{ \sum_{\mathbf{n}\in\mathcal{N}_t} \exp\left(\phi_k(c_t)^\top g_n\right)} \\]
Where:
* $\phi_k$ is the prediction network for the kth timestep
* $\mathcal{N}_t$ is the set of all negative examples sampled for timestep $t$
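To make the objective concrete before building the real modules, here is a toy, self-contained sketch of the classification step for a single offset $k$ (the dimensions and the random stand-in for the negative samples are our own choices, not the repository's implementation):
```python
import torch
import torch.nn.functional as F

B, T, H, n_neg = 4, 10, 256, 8          # toy batch, time, hidden and negative-set sizes
c = torch.randn(B, T, H)                # context output c_t
g = torch.randn(B, T, H)                # encoder output g_t
phi_k = torch.nn.Linear(H, H, bias=False)

pred = phi_k(c[:, :-1])                 # phi_k(c_t), predicting step t+1
pos = g[:, 1:].unsqueeze(1)             # the true g_{t+1}
neg = torch.randn(B, n_neg, T - 1, H)   # stand-in for features sampled elsewhere in the batch

candidates = torch.cat([pos, neg], dim=1)            # B x (1 + n_neg) x (T-1) x H
scores = (candidates * pred.unsqueeze(1)).sum(-1)    # dot products with the prediction
labels = torch.zeros(B, T - 1, dtype=torch.long)     # the positive is always at index 0
loss_k = F.cross_entropy(scores.permute(0, 2, 1).reshape(-1, 1 + n_neg),
                         labels.reshape(-1))
```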
###Code
%cd /content/CPC_audio
from cpc.model import CPCEncoder, CPCAR
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
DIM_ENCODER=256
DIM_CONTEXT=256
KEEP_HIDDEN_VECTOR=False
N_LEVELS_CONTEXT=1
CONTEXT_RNN="LSTM"
N_PREDICTIONS=12
LEARNING_RATE=2e-4
N_NEGATIVE_SAMPLE =128
%cd ../../
encoder = CPCEncoder(DIM_ENCODER).to(device)
context = CPCAR(DIM_ENCODER, DIM_CONTEXT, KEEP_HIDDEN_VECTOR, 1, mode=CONTEXT_RNN).to(device)
!pip install soundfile
# Several functions that will be necessary to load the data later
from cpc.dataset import findAllSeqs, AudioBatchData, parseSeqLabels
SIZE_WINDOW = 20480
BATCH_SIZE=8
def load_dataset(path_dataset, file_extension='.wav', phone_label_dict=None):
data_list, speakers = findAllSeqs(path_dataset, extension=file_extension)
dataset = AudioBatchData(path_dataset, SIZE_WINDOW, data_list, phone_label_dict, len(speakers))
return dataset
###Output
_____no_output_____
###Markdown
Now build a new class, ```CPCModel```, which will wrap the encoder and the context network and return both of their outputs.
###Code
class CPCModel(torch.nn.Module):
def __init__(self,
encoder,
AR):
super(CPCModel, self).__init__()
self.gEncoder = encoder
self.gAR = AR
def forward(self, batch_data):
encoder_output = self.gEncoder(batch_data)
#print(encoder_output.shape)
# The output of the encoder data does not have the good format
# indeed it is Batch_size x Hidden_size x temp size
# while the context requires Batch_size x temp size x Hidden_size
# thus you need to permute
context_input = encoder_output.permute(0, 2, 1)
context_output = self.gAR(context_input)
#print(context_output.shape)
return context_output, encoder_output
###Output
_____no_output_____
###Markdown
Exercise 2 : CPC loss
We will define a class ```CPCCriterion``` which will hold the prediction networks $\phi_k$ defined above and perform the classification loss $\mathcal{L}_c$.
a) In this exercise, the $\phi_k$ will be a linear transform, ie:
\\[ \phi_k(c_t) = \mathbf{A}_k c_t\\]
Using the class [torch.nn.Linear](https://pytorch.org/docs/stable/nn.htmltorch.nn.Linear), define the transformations $\phi_k$ in the code below and complete the function ```get_prediction_k``` which computes $\phi_k(c_t)$ for a given batch of vectors $c_t$.
b) Using both ```get_prediction_k``` and ```sample_negatives``` defined below, write the forward function which will take as input two batches of features $c_t$ and $g_t$ and output the classification loss $\mathcal{L}_c$ and the average accuracy over all predictions.
###Code
# Exercice 2: write the CPC loss
# a) Write the negative sampling (with some help)
# ERRATUM: it's really hard, the sampling will be provided
class CPCCriterion(torch.nn.Module):
def __init__(self,
K,
dim_context,
dim_encoder,
n_negative):
super(CPCCriterion, self).__init__()
self.K_ = K
self.dim_context = dim_context
self.dim_encoder = dim_encoder
self.n_negative = n_negative
self.predictors = torch.nn.ModuleList()
for k in range(self.K_):
# TO COMPLETE !
            # An affine transformation in pytorch is equivalent to a nn.Linear layer.
            # To get a purely linear transformation you must set bias=False.
            # Input dimension of the layer  = dimension of the context (phi_k acts on c_t),
            # output dimension of the layer = dimension of the encoder (to compare with g_{t+k}).
self.predictors.append(torch.nn.Linear(dim_context, dim_encoder, bias=False))
def get_prediction_k(self, context_data):
#TO COMPLETE !
output = []
# For each time step k
for k in range(self.K_):
# We need to compute phi_k = A_k * c_t
phi_k = self.predictors[k](context_data)
output.append(phi_k)
return output
def sample_negatives(self, encoded_data):
r"""
Sample some negative examples in the given encoded data.
Input:
- encoded_data size: B x T x H
Returns
- outputs of size B x (n_negative + 1) x (T - K_) x H
outputs[:, 0, :, :] contains the positive example
outputs[:, 1:, :, :] contains negative example sampled in the batch
- labels, long tensor of size B x (T - K_)
Since the positive example is always at coordinates 0 for all sequences
in the batch and all timestep in the sequence, labels is just a tensor
full of zeros !
"""
batch_size, time_size, dim_encoded = encoded_data.size()
window_size = time_size - self.K_
outputs = []
neg_ext = encoded_data.contiguous().view(-1, dim_encoded)
n_elem_sampled = self.n_negative * window_size * batch_size
# Draw nNegativeExt * batchSize negative samples anywhere in the batch
batch_idx = torch.randint(low=0, high=batch_size,
size=(n_elem_sampled, ),
device=encoded_data.device)
seq_idx = torch.randint(low=1, high=time_size,
size=(n_elem_sampled, ),
device=encoded_data.device)
base_idx = torch.arange(0, window_size, device=encoded_data.device)
base_idx = base_idx.view(1, 1, window_size)
base_idx = base_idx.expand(1, self.n_negative, window_size)
base_idx = base_idx.expand(batch_size, self.n_negative, window_size)
seq_idx += base_idx.contiguous().view(-1)
seq_idx = torch.remainder(seq_idx, time_size)
ext_idx = seq_idx + batch_idx * time_size
neg_ext = neg_ext[ext_idx].view(batch_size, self.n_negative,
window_size, dim_encoded)
label_loss = torch.zeros((batch_size, window_size),
dtype=torch.long,
device=encoded_data.device)
for k in range(1, self.K_ + 1):
# Positive samples
if k < self.K_:
pos_seq = encoded_data[:, k:-(self.K_-k)]
else:
pos_seq = encoded_data[:, k:]
pos_seq = pos_seq.view(batch_size, 1, pos_seq.size(1), dim_encoded)
full_seq = torch.cat((pos_seq, neg_ext), dim=1)
outputs.append(full_seq)
return outputs, label_loss
def forward(self, encoded_data, context_data):
# TO COMPLETE:
# Perform the full cpc criterion
# Returns 2 values:
# - the average classification loss avg_loss
        # - the average classification accuracy avg_acc
# Reminder : The permuation !
encoded_data = encoded_data.permute(0, 2, 1)
# First we need to sample the negative examples
negative_samples, labels = self.sample_negatives(encoded_data)
# Then we must compute phi_k
phi_k = self.get_prediction_k(context_data)
# Finally we must get the dot product between phi_k and negative_samples
# for each k
#The total loss is the average of all losses
avg_loss = 0
# Average acuracy
avg_acc = 0
for k in range(self.K_):
B, N_sampled, S_small, H = negative_samples[k].size()
B, S, H = phi_k[k].size()
# As told before S = S_small + K. For segments too far in the sequence
            # there are no positive examples anyway, so we must shorten phi_k
phi = phi_k[k][:, :S_small]
# Now the dot product
# You have several ways to do that, let's do the simple but non optimal
# one
# pytorch has a matrix product function https://pytorch.org/docs/stable/torch.html#torch.bmm
# But it takes only 3D tensors of the same batch size !
# To begin negative_samples is a 4D tensor !
# We want to compute the dot product for each features, of each sequence
# of the batch. Thus we are trying to compute a dot product for all
# B* N_sampled * S_small 1D vector of negative_samples[k]
# Or, a 1D tensor of size H is also a matrix of size 1 x H
# Then, we must view it as a 3D tensor of size (B* N_sampled * S_small, 1, H)
negative_sample_k = negative_samples[k].view(B* N_sampled* S_small, 1, H)
# But now phi and negative_sample_k no longer have the same batch size !
# No worries, we can expand phi so that each sequence of the batch
# is repeated N_sampled times
phi = phi.view(B, 1,S_small, H).expand(B, N_sampled, S_small, H)
# And now we can view it as a 3D tensor
phi = phi.contiguous().view(B * N_sampled * S_small, H, 1)
# We can finally get the dot product !
scores = torch.bmm(negative_sample_k, phi)
# Dot_product has a size (B * N_sampled * S_small , 1, 1)
# Let's reorder it a bit
scores = scores.reshape(B, N_sampled, S_small)
# For each elements of the sequence, and each elements sampled, it gives
# a floating score stating the likelihood of this element being the
# true one.
# Now the classification loss, we need to use the Cross Entropy loss
# https://pytorch.org/docs/master/generated/torch.nn.CrossEntropyLoss.html
# For each time-step of each sequence of the batch
# we have N_sampled possible predictions.
# Looking at the documentation of torch.nn.CrossEntropyLoss
# we can see that this loss expect a tensor of size M x C where
# - M is the number of elements with a classification score
# - C is the number of possible classes
# There are N_sampled candidates for each predictions so
# C = N_sampled
# Each timestep of each sequence of the batch has a prediction so
# M = B * S_small
# Thus we need an input vector of size B * S_small, N_sampled
# To begin, we need to permute the axis
scores = scores.permute(0, 2, 1) # Now it has size B , S_small, N_sampled
# Then we can cast it into a 2D tensor
scores = scores.reshape(B * S_small, N_sampled)
# Same thing for the labels
labels = labels.reshape(B * S_small)
# Finally we can get the classification loss
loss_criterion = torch.nn.CrossEntropyLoss()
loss_k = loss_criterion(scores, labels)
avg_loss+= loss_k
            # And for the accuracy
# The prediction for each elements is the sample with the highest score
# Thus the tensors of all predictions is the tensors of the index of the
# maximal score for each time-step of each sequence of the batch
predictions = torch.argmax(scores, 1)
acc_k = (labels == predictions).sum() / (B * S_small)
avg_acc += acc_k
# Normalization
avg_loss = avg_loss / self.K_
avg_acc = avg_acc / self.K_
return avg_loss , avg_acc
!ls
from google.colab import drive
drive.mount('/content/drive')
data_dir = '/content/drive/My Drive/PidginSpeech'
train_dir = data_dir + '/records/train'
test_dir = data_dir + '/records/test'
val_dir = data_dir + '/records/val'
# !unzip /content/drive/"My Drive"/part2.zip
#!ls
###Output
_____no_output_____
###Markdown
Don't forget to test !
###Code
audio = torchaudio.load(train_dir + "/200624-232607_pcm_9fa_elicit_5.wav")
audio = audio[0].view(1, 1, -1)
cpc_model = CPCModel(encoder, context).to(device)
cpc_criterion = CPCCriterion(N_PREDICTIONS, DIM_CONTEXT,
DIM_ENCODER, N_NEGATIVE_SAMPLE).to(device)
context_output, encoder_output = cpc_model(audio.to(device))
loss, avg = cpc_criterion(encoder_output,context_output)
###Output
/pytorch/aten/src/ATen/native/BinaryOps.cpp:81: UserWarning: Integer division of tensors using div or / is deprecated, and in a future release div will perform true division as in Python 3. Use true_divide or floor_divide (// in Python) instead.
###Markdown
Exercise 3 : Full training loop!
You have the model, you have the criterion. All you need now are a data loader and an optimizer to run your training loop.
We will use an Adam optimizer:
###Code
parameters = list(cpc_criterion.parameters()) + list(cpc_model.parameters())
optimizer = torch.optim.Adam(parameters, lr=LEARNING_RATE)
###Output
_____no_output_____
###Markdown
And as far as the data loader is concerned, we will rely on the data loader provided by the CPC_audio library.
###Code
dataset_train = load_dataset(train_dir)
dataset_val = load_dataset(val_dir)
data_loader_train = dataset_train.getDataLoader(BATCH_SIZE, "speaker", True)
data_loader_val = dataset_val.getDataLoader(BATCH_SIZE, "sequence", False)
###Output
1it [00:00, 129.25it/s]
###Markdown
Now that everything is ready, complete and test the ```train_step``` function below which trains the model for one epoch.
###Code
def train_step(data_loader,
cpc_model,
cpc_criterion,
optimizer):
avg_loss = 0
avg_acc = 0
n_items = 0
for step, data in enumerate(data_loader):
x,y = data
bs = len(x)
optimizer.zero_grad()
context_output, encoder_output = cpc_model(x.to(device))
loss , acc = cpc_criterion(encoder_output, context_output)
        loss.backward()
        optimizer.step()
n_items+=bs
avg_loss+=loss.item()*bs
avg_acc +=acc.item()*bs
avg_loss/=n_items
avg_acc/=n_items
return avg_loss, avg_acc
###Output
_____no_output_____
###Markdown
Exercise 4 : Validation loop
Now complete the validation loop.
###Code
def validation_step(data_loader,
cpc_model,
cpc_criterion):
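    # Note: run() below wraps this function in torch.no_grad() and switches the
    # model and the criterion to eval mode, so no gradients are tracked here.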
avg_loss = 0
avg_acc = 0
n_items = 0
for step, data in enumerate(data_loader):
x,y = data
bs = len(x)
context_output, encoder_output = cpc_model(x.to(device))
loss , acc = cpc_criterion(encoder_output, context_output)
n_items+=bs
avg_loss+=loss.item()*bs
avg_acc+=acc.item()*bs
avg_loss/=n_items
avg_acc/=n_items
return avg_loss, avg_acc
###Output
_____no_output_____
###Markdown
Exercise 5: Run everything
###Code
def run(train_loader,
val_loader,
cpc_model,
cpc_criterion,
optimizer,
n_epochs):
for epoch in range(n_epochs):
print(f"Running epoch {epoch+1} / {n_epochs}")
avg_loss_train, avg_acc_train = train_step(train_loader, cpc_model, cpc_criterion, optimizer)
print("----------------------")
print(f"Training dataset")
print(f"- average loss : {avg_loss_train}")
        print(f"- average accuracy : {avg_acc_train}")
print("----------------------")
with torch.no_grad():
cpc_model.eval()
cpc_criterion.eval()
avg_loss_val, avg_acc_val = validation_step(val_loader, cpc_model, cpc_criterion)
print(f"Validation dataset")
print(f"- average loss : {avg_loss_val}")
            print(f"- average accuracy : {avg_acc_val}")
print("----------------------")
print()
cpc_model.train()
cpc_criterion.train()
#run(data_loader_train, data_loader_val, cpc_model,cpc_criterion,optimizer,10)
###Output
_____no_output_____
###Markdown
Once everything is done, clear the memory.
###Code
del dataset_train
del dataset_val
del cpc_model
del context
del encoder
###Output
_____no_output_____
###Markdown
Part 2 : Fine tuning
###Code
%cd /content/CPC_audio
from cpc.model import CPCEncoder, CPCAR
DIM_ENCODER=256
DIM_CONTEXT=256
KEEP_HIDDEN_VECTOR=False
N_LEVELS_CONTEXT=1
CONTEXT_RNN="LSTM"
N_PREDICTIONS=12
LEARNING_RATE=2e-4
N_NEGATIVE_SAMPLE =128
%cd ../../
encoder = CPCEncoder(DIM_ENCODER)
context = CPCAR(DIM_ENCODER, DIM_CONTEXT, KEEP_HIDDEN_VECTOR, 1, mode=CONTEXT_RNN)
###Output
_____no_output_____
###Markdown
Exercise 1 : Phone separability with aligned phonemes.
One option to evaluate the quality of the features trained with CPC is to check if they can be used to recognize phonemes. To do so, we can fine-tune a pre-trained model using a limited amount of labelled speech data.
We are going to start with a simple evaluation setting where we have the phone labels for each timestep corresponding to a CPC feature.
We will work with a model already pre-trained on English data. As far as the fine-tuning dataset is concerned, we will use a 1h subset of [librispeech-100](http://www.openslr.org/12/).
###Code
!mkdir checkpoint_data
!wget https://dl.fbaipublicfiles.com/librilight/CPC_checkpoints/not_hub/2levels_6k_top_ctc/checkpoint_30.pt -P checkpoint_data
!wget https://dl.fbaipublicfiles.com/librilight/CPC_checkpoints/not_hub/2levels_6k_top_ctc/checkpoint_logs.json -P checkpoint_data
!wget https://dl.fbaipublicfiles.com/librilight/CPC_checkpoints/not_hub/2levels_6k_top_ctc/checkpoint_args.json -P checkpoint_data
!ls checkpoint_data
#!ls
# %cd /content/CPC_audio
from cpc.dataset import parseSeqLabels
from cpc.feature_loader import loadModel, getCheckpointData
import os
checkpoint_path = 'checkpoint_data/checkpoint_30.pt'
cpc_model, HIDDEN_CONTEXT_MODEL, HIDDEN_ENCODER_MODEL = loadModel([checkpoint_path])
cpc_model = cpc_model.cuda()
label_dict, N_PHONES = parseSeqLabels(data_dir+"/chars.txt")
print(label_dict)
dataset_train = load_dataset(train_dir, file_extension='.wav', phone_label_dict=label_dict)
dataset_val = load_dataset(val_dir, file_extension='.wav', phone_label_dict=label_dict)
data_loader_train = dataset_train.getDataLoader(BATCH_SIZE, "speaker", True)
data_loader_val = dataset_val.getDataLoader(BATCH_SIZE, "sequence", False)
###Output
Loading checkpoint checkpoint_data/checkpoint_30.pt
Loading the state dict at checkpoint_data/checkpoint_30.pt
###Markdown
Then we will use a simple linear classifier to recognize the phonemes from the features produced by ```cpc_model```.
a) Build the phone classifier
Design a class of linear classifiers, ```PhoneClassifier```, that will take as input a batch of sequences of CPC features and output a score vector for each phoneme.
###Code
class PhoneClassifier(torch.nn.Module):
def __init__(self,
input_dim : int,
n_phones : int):
super(PhoneClassifier, self).__init__()
self.linear = torch.nn.Linear(input_dim, n_phones)
def forward(self, x):
return self.linear(x)
###Output
_____no_output_____
###Markdown
Our phone classifier will then be:
###Code
phone_classifier = PhoneClassifier(HIDDEN_CONTEXT_MODEL, N_PHONES).to(device)
###Output
_____no_output_____
###Markdown
b - What would be the correct loss criterion for this task ?
###Code
loss_criterion = torch.nn.CrossEntropyLoss()
###Output
_____no_output_____
###Markdown
To perform the fine-tuning, we will also need an optimization function.
We will use an [Adam optimizer](https://pytorch.org/docs/stable/optim.htmltorch.optim.Adam).
###Code
parameters = list(phone_classifier.parameters()) + list(cpc_model.parameters())
LEARNING_RATE = 2e-4
optimizer = torch.optim.Adam(parameters, lr=LEARNING_RATE)
###Output
_____no_output_____
###Markdown
You might also want to perform this training while freezing the weights of the ```cpc_model```. Indeed, if the pre-training was good enough, then ```cpc_model``` phonemes representation should be linearly separable. In this case the optimizer should be defined like this:
###Code
optimizer_frozen = torch.optim.Adam(list(phone_classifier.parameters()), lr=LEARNING_RATE)
###Output
_____no_output_____
###Markdown
c- Now let's build a training loop. Complete the function ```train_one_epoch``` below.
###Code
def train_one_epoch(cpc_model,
phone_classifier,
loss_criterion,
data_loader,
optimizer):
cpc_model.train()
loss_criterion.train()
avg_loss = 0
avg_accuracy = 0
n_items = 0
for step, full_data in enumerate(data_loader):
# Each batch is represented by a Tuple of vectors:
# sequence of size : N x 1 x T
# label of size : N x T
#
# With :
# - N number of sequence in the batch
# - T size of each sequence
sequence, label = full_data
bs = len(sequence)
seq_len = label.size(1)
optimizer.zero_grad()
context_out, enc_out, _ = cpc_model(sequence.to(device),label.to(device))
scores = phone_classifier(context_out)
scores = scores.permute(0,2,1)
loss = loss_criterion(scores,label.to(device))
loss.backward()
optimizer.step()
avg_loss+=loss.item()*bs
n_items+=bs
correct_labels = scores.argmax(1)
avg_accuracy += ((label==correct_labels.cpu()).float()).mean(1).sum().item()
avg_loss/=n_items
avg_accuracy/=n_items
return avg_loss, avg_accuracy
###Output
_____no_output_____
###Markdown
Don't forget to test it !
###Code
avg_loss, avg_accuracy = train_one_epoch(cpc_model, phone_classifier, loss_criterion, data_loader_train, optimizer_frozen)
avg_loss, avg_accuracy
###Output
_____no_output_____
###Markdown
d- Build the validation loop
###Code
def validation_step(cpc_model,
phone_classifier,
loss_criterion,
data_loader):
cpc_model.eval()
phone_classifier.eval()
avg_loss = 0
avg_accuracy = 0
n_items = 0
with torch.no_grad():
for step, full_data in enumerate(data_loader):
# Each batch is represented by a Tuple of vectors:
# sequence of size : N x 1 x T
# label of size : N x T
#
# With :
# - N number of sequence in the batch
# - T size of each sequence
sequence, label = full_data
bs = len(sequence)
seq_len = label.size(1)
context_out, enc_out, _ = cpc_model(sequence.to(device),label.to(device))
scores = phone_classifier(context_out)
scores = scores.permute(0,2,1)
loss = loss_criterion(scores,label.to(device))
avg_loss+=loss.item()*bs
n_items+=bs
correct_labels = scores.argmax(1)
avg_accuracy += ((label==correct_labels.cpu()).float()).mean(1).sum().item()
avg_loss/=n_items
avg_accuracy/=n_items
return avg_loss, avg_accuracy
###Output
_____no_output_____
###Markdown
e- Run everything
Test this function with both ```optimizer``` and ```optimizer_frozen```.
###Code
def run(cpc_model,
phone_classifier,
loss_criterion,
data_loader_train,
data_loader_val,
optimizer,
n_epoch):
for epoch in range(n_epoch):
print(f"Running epoch {epoch + 1} / {n_epoch}")
loss_train, acc_train = train_one_epoch(cpc_model, phone_classifier, loss_criterion, data_loader_train, optimizer)
print("-------------------")
print(f"Training dataset :")
print(f"Average loss : {loss_train}. Average accuracy {acc_train}")
print("-------------------")
print("Validation dataset")
loss_val, acc_val = validation_step(cpc_model, phone_classifier, loss_criterion, data_loader_val)
print(f"Average loss : {loss_val}. Average accuracy {acc_val}")
print("-------------------")
print()
run(cpc_model,phone_classifier,loss_criterion,data_loader_train,data_loader_val,optimizer_frozen,n_epoch=10)
###Output
Running epoch 1 / 10
-------------------
Training dataset :
Average loss : 4.101503980570826. Average accuracy 0.053205818965517244
-------------------
Validation dataset
Average loss : 4.014748641422817. Average accuracy 0.052734375
-------------------
Running epoch 2 / 10
-------------------
Training dataset :
Average loss : 3.9334722469592918. Average accuracy 0.057314116379310345
-------------------
Validation dataset
Average loss : 3.868110248020717. Average accuracy 0.05482700892857143
-------------------
Running epoch 3 / 10
-------------------
Training dataset :
Average loss : 3.792913264241712. Average accuracy 0.05862742456896552
-------------------
Validation dataset
Average loss : 3.7432546615600586. Average accuracy 0.057896205357142856
-------------------
Running epoch 4 / 10
-------------------
Training dataset :
Average loss : 3.6782213572798104. Average accuracy 0.06041217672413793
-------------------
Validation dataset
Average loss : 3.6436939920697893. Average accuracy 0.06263950892857142
-------------------
Running epoch 5 / 10
-------------------
Training dataset :
Average loss : 3.5832488783474625. Average accuracy 0.0755994073275862
-------------------
Validation dataset
Average loss : 3.5544798033578053. Average accuracy 0.09612165178571429
-------------------
Running epoch 6 / 10
-------------------
Training dataset :
Average loss : 3.5063610570184114. Average accuracy 0.1287378771551724
-------------------
Validation dataset
Average loss : 3.491436038698469. Average accuracy 0.14634486607142858
-------------------
Running epoch 7 / 10
-------------------
Training dataset :
Average loss : 3.4407443095897805. Average accuracy 0.16500538793103448
-------------------
Validation dataset
Average loss : 3.4290012632097517. Average accuracy 0.16643415178571427
-------------------
Running epoch 8 / 10
-------------------
Training dataset :
Average loss : 3.388541813554435. Average accuracy 0.17349137931034483
-------------------
Validation dataset
Average loss : 3.3842152186802457. Average accuracy 0.17327008928571427
-------------------
Running epoch 9 / 10
-------------------
Training dataset :
Average loss : 3.343233009864544. Average accuracy 0.1781721443965517
-------------------
Validation dataset
Average loss : 3.344113860811506. Average accuracy 0.17633928571428573
-------------------
Running epoch 10 / 10
-------------------
Training dataset :
Average loss : 3.3107476809929155. Average accuracy 0.17881196120689655
-------------------
Validation dataset
Average loss : 3.3056671960013255. Average accuracy 0.17606026785714285
-------------------
###Markdown
Exercise 2 : Phone separability without alignment (PER)
Aligned data are very practical, but in real life they are rarely available. That's why in this exercise we will consider fine-tuning with non-aligned phonemes.
The model, the optimizer and the phone classifier will stay the same. However, we will replace our phone criterion with a [CTC loss](https://pytorch.org/docs/master/generated/torch.nn.CTCLoss.html).
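As a reminder of the tensor shapes ```torch.nn.CTCLoss``` expects, here is a self-contained toy example (the sizes are arbitrary choices of our own; index 0 plays the role of the blank label):
```python
import torch
import torch.nn.functional as F

T, N, C, S = 50, 4, 40, 12                               # frames, batch, classes, target length
log_probs = F.log_softmax(torch.randn(T, N, C), dim=2)   # CTC wants (T, N, C) log-probabilities
targets = torch.randint(1, C, (N, S), dtype=torch.long)  # 0 is reserved for the blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

loss = torch.nn.CTCLoss()(log_probs, targets, input_lengths, target_lengths)
print(loss)
```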
###Code
loss_ctc = torch.nn.CTCLoss()
###Output
_____no_output_____
###Markdown
Besides, we will use a slightly different dataset class.
###Code
from cpc.eval.common_voices_eval import SingleSequenceDataset, parseSeqLabels, findAllSeqs
path_train_data_per = test_dir
path_val_data_per = val_dir
path_phone_data_per = data_dir + '/chars.txt'
BATCH_SIZE=8
phone_labels, N_PHONES = parseSeqLabels(path_phone_data_per)
data_train_per, _ = findAllSeqs(path_train_data_per, extension='.wav')
dataset_train_non_aligned = SingleSequenceDataset(path_train_data_per, data_train_per, phone_labels)
data_loader_train = torch.utils.data.DataLoader(dataset_train_non_aligned, batch_size=BATCH_SIZE,
shuffle=True)
data_val_per, _ = findAllSeqs(path_val_data_per, extension='.wav')
dataset_val_non_aligned = SingleSequenceDataset(path_val_data_per, data_val_per, phone_labels)
data_loader_val = torch.utils.data.DataLoader(dataset_val_non_aligned, batch_size=BATCH_SIZE,
shuffle=True)
###Output
1it [00:00, 147.06it/s]
###Markdown
a- Training
Since the phonemes are not aligned, there is no simple direct way to get the classification accuracy of a model. Write and test the three functions ```train_one_epoch_ctc```, ```validation_step_ctc``` and ```run_ctc``` as before, but without considering the average accuracy of the model.
###Code
from cpc.feature_loader import loadModel
checkpoint_path = 'checkpoint_data/checkpoint_30.pt'
cpc_model, HIDDEN_CONTEXT_MODEL, HIDDEN_ENCODER_MODEL = loadModel([checkpoint_path])
cpc_model = cpc_model.cuda()
phone_classifier = PhoneClassifier(HIDDEN_CONTEXT_MODEL, N_PHONES).to(device)
parameters = list(phone_classifier.parameters()) + list(cpc_model.parameters())
LEARNING_RATE = 2e-4
optimizer = torch.optim.Adam(parameters, lr=LEARNING_RATE)
optimizer_frozen = torch.optim.Adam(list(phone_classifier.parameters()), lr=LEARNING_RATE)
import torch.nn.functional as F
def train_one_epoch_ctc(cpc_model,
phone_classifier,
loss_criterion,
data_loader,
optimizer):
cpc_model.train()
loss_criterion.train()
avg_loss = 0
avg_accuracy = 0
n_items = 0
for step, full_data in enumerate(data_loader):
x, x_len, y, y_len = full_data
x_batch_len = x.shape[-1]
x, y = x.to(device), y.to(device)
bs=x.size(0)
optimizer.zero_grad()
context_out, enc_out, _ = cpc_model(x.to(device),y.to(device))
scores = phone_classifier(context_out)
scores = scores.permute(1,0,2)
scores = F.log_softmax(scores,2)
yhat_len = torch.tensor([int(scores.shape[0]*x_len[i]/x_batch_len) for i in range(scores.shape[1])]) # this is an approximation, should be good enough
loss = loss_criterion(scores,y.to(device),yhat_len,y_len)
loss.backward()
optimizer.step()
avg_loss+=loss.item()*bs
n_items+=bs
avg_loss/=n_items
return avg_loss
def validation_step(cpc_model,
phone_classifier,
loss_criterion,
data_loader):
cpc_model.eval()
phone_classifier.eval()
avg_loss = 0
avg_accuracy = 0
n_items = 0
with torch.no_grad():
for step, full_data in enumerate(data_loader):
x, x_len, y, y_len = full_data
x_batch_len = x.shape[-1]
x, y = x.to(device), y.to(device)
bs=x.size(0)
context_out, enc_out, _ = cpc_model(x.to(device),y.to(device))
scores = phone_classifier(context_out)
scores = scores.permute(1,0,2)
scores = F.log_softmax(scores,2)
yhat_len = torch.tensor([int(scores.shape[0]*x_len[i]/x_batch_len) for i in range(scores.shape[1])]) # this is an approximation, should be good enough
loss = loss_criterion(scores,y.to(device),yhat_len,y_len)
avg_loss+=loss.item()*bs
n_items+=bs
avg_loss/=n_items
return avg_loss
def run_ctc(cpc_model,
phone_classifier,
loss_criterion,
data_loader_train,
data_loader_val,
optimizer,
n_epoch):
for epoch in range(n_epoch):
print(f"Running epoch {epoch + 1} / {n_epoch}")
loss_train = train_one_epoch_ctc(cpc_model, phone_classifier, loss_criterion, data_loader_train, optimizer)
print("-------------------")
print(f"Training dataset :")
print(f"Average loss : {loss_train}.")
print("-------------------")
print("Validation dataset")
loss_val = validation_step(cpc_model, phone_classifier, loss_criterion, data_loader_val)
print(f"Average loss : {loss_val}")
print("-------------------")
print()
run_ctc(cpc_model,phone_classifier,loss_ctc,data_loader_train,data_loader_val,optimizer_frozen,n_epoch=10)
###Output
Running epoch 1 / 10
-------------------
Training dataset :
Average loss : 37.65364074707031.
-------------------
Validation dataset
Average loss : 33.79923832670171
-------------------
Running epoch 2 / 10
-------------------
Training dataset :
Average loss : 31.858370150549938.
-------------------
Validation dataset
Average loss : 28.104759378636135
-------------------
Running epoch 3 / 10
-------------------
Training dataset :
Average loss : 25.867438332509185.
-------------------
Validation dataset
Average loss : 22.35495628194606
-------------------
Running epoch 4 / 10
-------------------
Training dataset :
Average loss : 20.166560383166296.
-------------------
Validation dataset
Average loss : 17.26606093061731
-------------------
Running epoch 5 / 10
-------------------
Training dataset :
Average loss : 15.55047143515894.
-------------------
Validation dataset
Average loss : 13.454567706331293
-------------------
Running epoch 6 / 10
-------------------
Training dataset :
Average loss : 12.28218351784399.
-------------------
Validation dataset
Average loss : 10.879266191036143
-------------------
Running epoch 7 / 10
-------------------
Training dataset :
Average loss : 10.105640831640212.
-------------------
Validation dataset
Average loss : 9.164311855397326
-------------------
Running epoch 8 / 10
-------------------
Training dataset :
Average loss : 8.656418905419818.
-------------------
Validation dataset
Average loss : 8.004818571374772
-------------------
Running epoch 9 / 10
-------------------
Training dataset :
Average loss : 7.662299115779036.
-------------------
Validation dataset
Average loss : 7.190797704331418
-------------------
Running epoch 10 / 10
-------------------
Training dataset :
Average loss : 6.950564457198321.
-------------------
Validation dataset
Average loss : 6.598749140475658
-------------------
###Markdown
b- Evaluation: the Phone Error Rate (PER)
In order to compute the similarity between two sequences, we can use the [Levenshtein distance](https://en.wikipedia.org/wiki/Levenshtein_distance). This distance estimates the minimum number of insertions, deletions and substitutions needed to move from one sequence to the other. If we normalize this distance by the number of characters in the reference sequence we get the Phone Error Rate (PER).
This value can be interpreted as:
\\[ PER = \frac{S + D + I}{N} \\]
Where:
* N is the number of characters in the reference
* S is the number of substitutions
* I is the number of insertions
* D is the number of deletions
for the best possible alignment of the two sequences.
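For instance, with the reference sequence $0\,1\,1\,2\,0\,2\,2$ and the prediction $1\,1\,2\,2\,0\,0$ (the test case used further below), one optimal alignment deletes the leading $0$ and substitutes the last three symbols, so $S = 3$, $D = 1$, $I = 0$, $N = 7$ and
\\[ PER = \frac{3 + 1 + 0}{7} = \frac{4}{7}. \\]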
###Code
import numpy as np
def get_PER_sequence(ref_seq, target_seq):
# re = g.split()
# h = h.split()
n = len(ref_seq)
m = len(target_seq)
D = np.zeros((n+1,m+1))
for i in range(1,n+1):
D[i,0] = D[i-1,0]+1
for j in range(1,m+1):
D[0,j] = D[0,j-1]+1
### TODO compute the alignment
for i in range(1,n+1):
for j in range(1,m+1):
D[i,j] = min(
D[i-1,j]+1,
D[i-1,j-1]+1,
D[i,j-1]+1,
D[i-1,j-1]+ 0 if ref_seq[i-1]==target_seq[j-1] else float("inf")
)
return D[n,m]/len(ref_seq)
#return PER
###Output
_____no_output_____
###Markdown
You can test your function below:
###Code
ref_seq = [0, 1, 1, 2, 0, 2, 2]
pred_seq = [1, 1, 2, 2, 0, 0]
expected_PER = 4. / 7.
print(get_PER_sequence(ref_seq, pred_seq) == expected_PER)
###Output
True
###Markdown
c- Evaluating the PER of your model on the test dataset
Evaluate the PER on the validation dataset. Please notice that you should usually use a separate dataset, called the dev dataset, to perform this operation. However, for the sake of simplicity we will work with validation data in this exercise.
###Code
import progressbar
from multiprocessing import Pool
def cut_data(seq, sizeSeq):
maxSeq = sizeSeq.max()
return seq[:, :maxSeq]
def prepare_data(data):
seq, sizeSeq, phone, sizePhone = data
seq = seq.cuda()
phone = phone.cuda()
sizeSeq = sizeSeq.cuda().view(-1)
sizePhone = sizePhone.cuda().view(-1)
seq = cut_data(seq.permute(0, 2, 1), sizeSeq).permute(0, 2, 1)
return seq, sizeSeq, phone, sizePhone
def get_per(test_dataloader,
cpc_model,
phone_classifier):
downsampling_factor = 160
cpc_model.eval()
phone_classifier.eval()
avgPER = 0
nItems = 0
print("Starting the PER computation through beam search")
bar = progressbar.ProgressBar(maxval=len(test_dataloader))
bar.start()
for index, data in enumerate(test_dataloader):
bar.update(index)
with torch.no_grad():
seq, sizeSeq, phone, sizePhone = prepare_data(data)
c_feature, _, _ = cpc_model(seq.to(device),phone.to(device))
sizeSeq = sizeSeq / downsampling_factor
predictions = torch.nn.functional.softmax(
phone_classifier(c_feature), dim=2).cpu()
phone = phone.cpu()
sizeSeq = sizeSeq.cpu()
sizePhone = sizePhone.cpu()
bs = c_feature.size(0)
data_per = [(predictions[b].argmax(1), phone[b]) for b in range(bs)]
# data_per = [(predictions[b], sizeSeq[b], phone[b], sizePhone[b],
# "criterion.module.BLANK_LABEL") for b in range(bs)]
with Pool(bs) as p:
poolData = p.starmap(get_PER_sequence, data_per)
avgPER += sum([x for x in poolData])
nItems += len(poolData)
bar.finish()
avgPER /= nItems
print(f"Average PER {avgPER}")
return avgPER
get_per(data_loader_val,cpc_model,phone_classifier)
###Output
_____no_output_____
###Markdown
Exercise 3 : Character error rate (CER)
The Character Error Rate (CER) is an evaluation metric similar to the PER but with characters instead of phonemes. Using the following data, run the functions you defined previously to estimate the CER of your model after fine-tuning.
###Code
# Load a dataset labelled with the letters of each sequence.
# %cd /content/CPC_audio
from cpc.eval.common_voices_eval import SingleSequenceDataset, parseSeqLabels, findAllSeqs
path_train_data_cer = train_dir
path_val_data_cer = val_dir
path_letter_data_cer = data_dir +'/chars.txt'
BATCH_SIZE=8
letters_labels, N_LETTERS = parseSeqLabels(path_letter_data_cer)
data_train_cer, _ = findAllSeqs(path_train_data_cer, extension='.wav')
dataset_train_non_aligned = SingleSequenceDataset(path_train_data_cer, data_train_cer, letters_labels)
data_val_cer, _ = findAllSeqs(path_val_data_cer, extension='.wav')
dataset_val_non_aligned = SingleSequenceDataset(path_val_data_cer, data_val_cer, letters_labels)
# The data loader will generate a tuple of tensors data, labels for each batch
# data : size N x T1 x 1 : the audio sequence
# label : size N x T2 the sequence of letters corresponding to the audio data
# IMPORTANT NOTE: just like the PER the CER is computed with non-aligned phone data.
data_loader_train_letters = torch.utils.data.DataLoader(dataset_train_non_aligned, batch_size=BATCH_SIZE,
shuffle=True)
data_loader_val_letters = torch.utils.data.DataLoader(dataset_val_non_aligned, batch_size=BATCH_SIZE,
shuffle=True)
from cpc.feature_loader import loadModel
checkpoint_path = 'checkpoint_data/checkpoint_30.pt'
cpc_model, HIDDEN_CONTEXT_MODEL, HIDDEN_ENCODER_MODEL = loadModel([checkpoint_path])
cpc_model = cpc_model.cuda()
character_classifier = PhoneClassifier(HIDDEN_CONTEXT_MODEL, N_LETTERS).to(device)
parameters = list(character_classifier.parameters()) + list(cpc_model.parameters())
LEARNING_RATE = 2e-4
optimizer = torch.optim.Adam(parameters, lr=LEARNING_RATE)
optimizer_frozen = torch.optim.Adam(list(character_classifier.parameters()), lr=LEARNING_RATE)
loss_ctc = torch.nn.CTCLoss()
run_ctc(cpc_model,character_classifier,loss_ctc,data_loader_train_letters,data_loader_val_letters,optimizer_frozen,n_epoch=10)
get_per(data_loader_val_letters,cpc_model,character_classifier)
###Output
_____no_output_____ |
notebooks/zipcodes_get_geos.ipynb | ###Markdown
README - zipcodes_get_geos.ipynb
This notebook will extract the zipcodes from ../downloads/zip_codes/zipcodes.txt and produce a YAML file containing the ZIP Code along with the lat/lon geo-coordinates for the ZIP Code. This YAML file will be named "zipcodes_city_state_.yml" where city and state values are taken from the config.yml file in ../source. If the zipcodes.txt file is present per above and the location_info section of the config.yml is as desired, this notebook should run as is.
**Note:** please be sure to update the city and state names in the [config.yml](../source/config.yml) file prior to executing this notebook.
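A small sketch of reading the generated file back (the file name and path below are an example of our own; substitute the city, state and downloads path actually set in config.yml):
```python
import yaml

# Hypothetical output name for city="austin", state="tx"; adjust to your config.yml values.
with open("../downloads/zip_codes/zipcodes_austin_tx.yml", "r") as fh:
    zipcodes = yaml.safe_load(fh)["zipcodes"]

for zipcode, geo in zipcodes.items():
    print(zipcode, geo["lat"], geo["lon"])
```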
###Code
import json
import os
import sqlite3
import sys
from datetime import datetime
import logzero
import numpy as np
import pandas as pd
import yaml
from logzero import logger
sys.path.append("../source")
import queries
log_path = "logs/"
log_file = "zipcode_geos.log"
logzero.logfile(log_path + log_file, maxBytes=1e5, backupCount=5, disableStderrLogger=True)
logger.info(f"{log_path}, {log_file}")
logger.info(sys.path)
try:
with open("../source/config.yml", "r") as config_in:
configs = yaml.load(config_in, Loader=yaml.SafeLoader)
logger.info(configs)
except:
logger.error(f"config file open failure.")
exit(1)
data_path = configs["file_paths"]["downloads_path_zips"]
data_file = configs["file_names"]["data_zipcodes"]
db_path = configs["file_paths"]["db_path"]
db_file = configs["file_names"]["db_file_gzc"]
city = configs["location_info"]["city"]
state = configs["location_info"]["state"]
loc_name = city + "_" + state
logger.info(f"{data_path}, {data_file}")
logger.info(f"{db_path}, {db_file}")
logger.info(f"{city}, {state}")
zip_list = []
try:
with open(data_path + data_file, "r") as fh:
lines = fh.readlines()
for line in lines:
zip_list.append(line.strip())
except:
print(
f"File read error. Is the file zipcodes.txt present in {configs['file_paths']['downloads_path_zips']}"
)
logger.info(
f"File read error. Is the file zipcodes.txt present in {configs['file_paths']['downloads_path_zips']}"
)
logger.info(f"{zip_list}")
# establish db connection and cursor
conn = sqlite3.connect(db_path + db_file)
cursor = conn.cursor()
zipcodes = {}
for zip in zip_list:
cursor.execute(queries.select_zipcode_geo, {"zipcode": zip})
zip_data = cursor.fetchall()
zipcodes[zip_data[0][0]] = {"lat": zip_data[0][1], "lon": zip_data[0][2]}
logger.info(f"{zipcodes}\n")
try:
with open(data_path + "zipcodes_" + loc_name + ".yml", "w") as yml_out:
yaml.dump({"zipcodes": zipcodes}, yml_out)
logger.info(f"yaml dump successful\n")
os.rename(data_path + data_file, data_path + "zipcodes_" + loc_name + ".txt")
logger.info(f"zip_codes.txt renamed to {'zipcodes_' + loc_name + '.txt'}")
except:
logger.error("yaml file open error")
conn.close()
###Output
_____no_output_____ |
tutorials/network_x_tutorial.ipynb | ###Markdown
Tutorial
This guide can help you start working with NetworkX.
Creating a graph
Create an empty graph with no nodes and no edges.
###Code
import networkx as nx
G = nx.Graph()
###Output
_____no_output_____
###Markdown
By definition, a `Graph` is a collection of nodes (vertices) along with identified pairs of nodes (called edges, links, etc). In NetworkX, nodes can be any [hashable](https://docs.python.org/3/glossary.htmlterm-hashable) object e.g., a text string, an image, an XML object, another Graph, a customized node object, etc.
Nodes
The graph `G` can be grown in several ways. NetworkX includes many graph generator functions and facilities to read and write graphs in many formats. To get started though we'll look at simple manipulations. You can add one node at a time,
###Code
G.add_node(1)
###Output
_____no_output_____
###Markdown
or add nodes from any [iterable](https://docs.python.org/3/glossary.htmlterm-iterable) container, such as a list
###Code
G.add_nodes_from([2, 3])
###Output
_____no_output_____
###Markdown
You can also add nodes along with node attributes if your container yields 2-tuples of the form `(node, node_attribute_dict)`:
```
>>> G.add_nodes_from([
...     (4, {"color": "red"}),
...     (5, {"color": "green"}),
... ])
```
Node attributes are discussed further below.
Nodes from one graph can be incorporated into another:
###Code
H = nx.path_graph(10)
G.add_nodes_from(H)
###Output
_____no_output_____
###Markdown
`G` now contains the nodes of `H` as nodes of `G`.
In contrast, you could use the graph `H` as a node in `G`.
###Code
G.add_node(H)
###Output
_____no_output_____
###Markdown
The graph `G` now contains `H` as a node. This flexibility is very powerful as it allows graphs of graphs, graphs of files, graphs of functions and much more. It is worth thinking about how to structure your application so that the nodes are useful entities. Of course you can always use a unique identifier in `G` and have a separate dictionary keyed by identifier to the node information if you prefer.
Edges
`G` can also be grown by adding one edge at a time,
###Code
G.add_edge(1, 2)
e = (2, 3)
G.add_edge(*e) # unpack edge tuple*
###Output
_____no_output_____
###Markdown
by adding a list of edges,
###Code
G.add_edges_from([(1, 2), (1, 3)])
###Output
_____no_output_____
###Markdown
or by adding any ebunch of edges. An *ebunch* is any iterable container of edge-tuples. An edge-tuple can be a 2-tuple of nodes or a 3-tuple with 2 nodes followed by an edge attribute dictionary, e.g., `(2, 3, {'weight': 3.1415})`. Edge attributes are discussed further below.
###Code
G.add_edges_from(H.edges)
###Output
_____no_output_____
###Markdown
There are no complaints when adding existing nodes or edges. For example, after removing all nodes and edges,
###Code
G.clear()
###Output
_____no_output_____
###Markdown
we add new nodes/edges and NetworkX quietly ignores any that are already present.
###Code
G.add_edges_from([(1, 2), (1, 3)])
G.add_node(1)
G.add_edge(1, 2)
G.add_node("spam") # adds node "spam"
G.add_nodes_from("spam") # adds 4 nodes: 's', 'p', 'a', 'm'
G.add_edge(3, 'm')
###Output
_____no_output_____
###Markdown
At this stage the graph `G` consists of 8 nodes and 3 edges, as can be seen by:
###Code
G.number_of_nodes()
G.number_of_edges()
DG = nx.DiGraph()
DG.add_edge(2, 1) # adds the nodes in order 2, 1
DG.add_edge(1, 3)
DG.add_edge(2, 4)
DG.add_edge(1, 2)
assert list(DG.successors(2)) == [1, 4]
assert list(DG.edges) == [(2, 1), (2, 4), (1, 3), (1, 2)]
###Output
_____no_output_____
###Markdown
Examining elements of a graph
We can examine the nodes and edges. Four basic graph properties facilitate reporting: `G.nodes`, `G.edges`, `G.adj` and `G.degree`. These are set-like views of the nodes, edges, neighbors (adjacencies), and degrees of nodes in a graph. They offer a continually updated read-only view into the graph structure. They are also dict-like in that you can look up node and edge data attributes via the views and iterate with data attributes using methods `.items()`, `.data('span')`.
If you want a specific container type instead of a view, you can specify one. Here we use lists, though sets, dicts, tuples and other containers may be better in other contexts.
###Code
list(G.nodes)
list(G.edges)
list(G.adj[1]) # or list(G.neighbors(1))
G.degree[1] # the number of edges incident to 1
###Output
_____no_output_____
###Markdown
One can specify to report the edges and degree from a subset of all nodes using an nbunch. An *nbunch* is any of: `None` (meaning all nodes), a node, or an iterable container of nodes that is not itself a node in the graph.
###Code
G.edges([2, 'm'])
G.degree([2, 3])
###Output
_____no_output_____
###Markdown
Removing elements from a graph
One can remove nodes and edges from the graph in a similar fashion to adding. Use methods `Graph.remove_node()`, `Graph.remove_nodes_from()`, `Graph.remove_edge()` and `Graph.remove_edges_from()`, e.g.
###Code
G.remove_node(2)
G.remove_nodes_from("spam")
list(G.nodes)
G.remove_edge(1, 3)
###Output
_____no_output_____
###Markdown
Using the graph constructors
Graph objects do not have to be built up incrementally - data specifying graph structure can be passed directly to the constructors of the various graph classes. When creating a graph structure by instantiating one of the graph classes you can specify data in several formats.
###Code
G.add_edge(1, 2)
H = nx.DiGraph(G) # create a DiGraph using the connections from G
list(H.edges())
edgelist = [(0, 1), (1, 2), (2, 3)]
H = nx.Graph(edgelist)
###Output
_____no_output_____
###Markdown
What to use as nodes and edges
You might notice that nodes and edges are not specified as NetworkX objects. This leaves you free to use meaningful items as nodes and edges. The most common choices are numbers or strings, but a node can be any hashable object (except `None`), and an edge can be associated with any object `x` using `G.add_edge(n1, n2, object=x)`.
As an example, `n1` and `n2` could be protein objects from the RCSB Protein Data Bank, and `x` could refer to an XML record of publications detailing experimental observations of their interaction.
We have found this power quite useful, but its abuse can lead to surprising behavior unless one is familiar with Python. If in doubt, consider using `convert_node_labels_to_integers()` to obtain a more traditional graph with integer labels.
Accessing edges and neighbors
In addition to the views `Graph.edges`, and `Graph.adj`, access to edges and neighbors is possible using subscript notation.
###Code
G = nx.Graph([(1, 2, {"color": "yellow"})])
G[1] # same as G.adj[1]
G[1][2]
G.edges[1, 2]
###Output
_____no_output_____
###Markdown
You can get/set the attributes of an edge using subscript notation if the edge already exists.
###Code
G.add_edge(1, 3)
G[1][3]['color'] = "blue"
G.edges[1, 2]['color'] = "red"
G.edges[1, 2]
###Output
_____no_output_____
###Markdown
Fast examination of all (node, adjacency) pairs is achieved using `G.adjacency()`, or `G.adj.items()`. Note that for undirected graphs, adjacency iteration sees each edge twice.
###Code
FG = nx.Graph()
FG.add_weighted_edges_from([(1, 2, 0.125), (1, 3, 0.75), (2, 4, 1.2), (3, 4, 0.375)])
for n, nbrs in FG.adj.items():
for nbr, eattr in nbrs.items():
wt = eattr['weight']
if wt < 0.5: print(f"({n}, {nbr}, {wt:.3})")
###Output
(1, 2, 0.125)
(2, 1, 0.125)
(3, 4, 0.375)
(4, 3, 0.375)
###Markdown
Convenient access to all edges is achieved with the edges property.
###Code
for (u, v, wt) in FG.edges.data('weight'):
if wt < 0.5:
print(f"({u}, {v}, {wt:.3})")
###Output
(1, 2, 0.125)
(3, 4, 0.375)
###Markdown
Adding attributes to graphs, nodes, and edges
Attributes such as weights, labels, colors, or whatever Python object you like, can be attached to graphs, nodes, or edges.
Each graph, node, and edge can hold key/value attribute pairs in an associated attribute dictionary (the keys must be hashable). By default these are empty, but attributes can be added or changed using `add_edge`, `add_node` or direct manipulation of the attribute dictionaries named `G.graph`, `G.nodes`, and `G.edges` for a graph `G`.
Graph attributes
Assign graph attributes when creating a new graph
###Code
G = nx.Graph(day="Friday")
G.graph
###Output
_____no_output_____
###Markdown
Or you can modify attributes later
###Code
G.graph['day'] = "Monday"
G.graph
###Output
_____no_output_____
###Markdown
Node attributes
Add node attributes using `add_node()`, `add_nodes_from()`, or `G.nodes`
###Code
G.add_node(1, time='5pm')
G.add_nodes_from([3], time='2pm')
G.nodes[1]
G.nodes[1]['room'] = 714
G.nodes.data()
###Output
_____no_output_____
###Markdown
Note that adding a node to `G.nodes` does not add it to the graph, use `G.add_node()` to add new nodes. Similarly for edges.
Edge Attributes
Add/change edge attributes using `add_edge()`, `add_edges_from()`, or subscript notation.
###Code
G.add_edge(1, 2, weight=4.7 )
G.add_edges_from([(3, 4), (4, 5)], color='red')
G.add_edges_from([(1, 2, {'color': 'blue'}), (2, 3, {'weight': 8})])
G[1][2]['weight'] = 4.7
G.edges[3, 4]['weight'] = 4.2
###Output
_____no_output_____
###Markdown
The special attribute `weight` should be numeric as it is used by algorithms requiring weighted edges.
Directed graphs
The `DiGraph` class provides additional methods and properties specific to directed edges, e.g., `DiGraph.out_edges`, `DiGraph.in_degree`, `DiGraph.predecessors()`, `DiGraph.successors()` etc. To allow algorithms to work with both classes easily, the directed version of `neighbors()` is equivalent to `successors()` while `degree` reports the sum of `in_degree` and `out_degree` even though that may feel inconsistent at times.
###Code
DG = nx.DiGraph()
DG.add_weighted_edges_from([(1, 2, 0.5), (3, 1, 0.75)])
DG.out_degree(1, weight='weight')
DG.degree(1, weight='weight')
list(DG.successors(1))
list(DG.neighbors(1))
###Output
_____no_output_____
###Markdown
Some algorithms work only for directed graphs and others are not well defined for directed graphs. Indeed the tendency to lump directed and undirected graphs together is dangerous. If you want to treat a directed graph as undirected for some measurement you should probably convert it using `Graph.to_undirected()` or with
###Code
H = nx.Graph(G) # create an undirected graph H from a directed graph G
###Output
_____no_output_____
###Markdown
Multigraphs
NetworkX provides classes for graphs which allow multiple edges between any pair of nodes. The `MultiGraph` and `MultiDiGraph` classes allow you to add the same edge twice, possibly with different edge data. This can be powerful for some applications, but many algorithms are not well defined on such graphs. Where results are well defined, e.g., `MultiGraph.degree()`, we provide the function. Otherwise you should convert to a standard graph in a way that makes the measurement well defined.
###Code
MG = nx.MultiGraph()
MG.add_weighted_edges_from([(1, 2, 0.5), (1, 2, 0.75), (2, 3, 0.5)])
dict(MG.degree(weight='weight'))
GG = nx.Graph()
for n, nbrs in MG.adjacency():
for nbr, edict in nbrs.items():
minvalue = min([d['weight'] for d in edict.values()])
GG.add_edge(n, nbr, weight = minvalue)
nx.shortest_path(GG, 1, 3)
###Output
_____no_output_____
###Markdown
Graph generators and graph operations
In addition to constructing graphs node-by-node or edge-by-edge, they can also be generated by
1. Applying classic graph operations (a small sketch follows after the next code cell)
1. Using a call to one of the classic small graphs, e.g.,
1. Using a (constructive) generator for a classic graph, e.g.,
like so:
###Code
K_5 = nx.complete_graph(5)
K_3_5 = nx.complete_bipartite_graph(3, 5)
barbell = nx.barbell_graph(10, 10)
lollipop = nx.lollipop_graph(10, 20)
###Output
_____no_output_____
###Markdown
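As a brief sketch of the "classic graph operations" mentioned above (these are standard NetworkX functions; the toy graphs are our own):
```python
G1 = nx.path_graph(4)
G2 = nx.cycle_graph(4)

H_sub = G1.subgraph([0, 1, 2])        # induced subgraph on a subset of nodes
H_comp = nx.complement(G1)            # graph complement
H_union = nx.disjoint_union(G1, G2)   # union on disjoint (relabelled) node sets
H_compose = nx.compose(G1, G2)        # combine graphs that share node names
```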
1. Using a stochastic graph generator, e.g., like so:
###Code
er = nx.erdos_renyi_graph(100, 0.15)
ws = nx.watts_strogatz_graph(30, 3, 0.1)
ba = nx.barabasi_albert_graph(100, 5)
red = nx.random_lobster(100, 0.9, 0.9)
###Output
_____no_output_____
###Markdown
1. Reading a graph stored in a file using common graph formats, such as edge lists, adjacency lists, GML, GraphML, pickle, LEDA and others.
###Code
nx.write_gml(red, "path.to.file")
mygraph = nx.read_gml("path.to.file")
###Output
_____no_output_____
###Markdown
For details on graph formats see Reading and writing graphs and for graph generator functions see Graph generators.
Analyzing graphs
The structure of `G` can be analyzed using various graph-theoretic functions such as:
###Code
G = nx.Graph()
G.add_edges_from([(1, 2), (1, 3)])
G.add_node("spam") # adds node "spam"
list(nx.connected_components(G))
sorted(d for n, d in G.degree())
nx.clustering(G)
###Output
_____no_output_____
###Markdown
Some functions with large output iterate over (node, value) 2-tuples. These are easily stored in a [dict](https://docs.python.org/3/library/stdtypes.htmldict) structure if you desire.
###Code
sp = dict(nx.all_pairs_shortest_path(G))
sp[3]
###Output
_____no_output_____
###Markdown
See Algorithms for details on the graph algorithms supported.
Drawing graphs
NetworkX is not primarily a graph drawing package but basic drawing with Matplotlib as well as an interface to use the open source Graphviz software package are included. These are part of the `networkx.drawing` module and will be imported if possible.
First import Matplotlib's plot interface (pylab works too)
###Code
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
To test if the import of `networkx.drawing` was successful draw `G` using one of
###Code
G = nx.petersen_graph()
subax1 = plt.subplot(121)
nx.draw(G, with_labels=True, font_weight='bold')
subax2 = plt.subplot(122)
nx.draw_shell(G, nlist=[range(5, 10), range(5)], with_labels=True, font_weight='bold')
###Output
_____no_output_____
###Markdown
when drawing to an interactive display. Note that you may need to issue a Matplotlib
###Code
plt.show()
###Output
_____no_output_____
###Markdown
command if you are not using matplotlib in interactive mode (see [this Matplotlib FAQ](https://matplotlib.org/stable/faq/installing_faq.html)).
###Code
options = {
'node_color': 'black',
'node_size': 100,
'width': 3,
}
subax1 = plt.subplot(221)
nx.draw_random(G, **options)
subax2 = plt.subplot(222)
nx.draw_circular(G, **options)
subax3 = plt.subplot(223)
nx.draw_spectral(G, **options)
subax4 = plt.subplot(224)
nx.draw_shell(G, nlist=[range(5,10), range(5)], **options)
###Output
_____no_output_____
###Markdown
You can find additional options via `draw_networkx()` and layouts via `layout`.
You can use multiple shells with `draw_shell()`.
###Code
G = nx.dodecahedral_graph()
shells = [[2, 3, 4, 5, 6], [8, 1, 0, 19, 18, 17, 16, 15, 14, 7], [9, 10, 11, 12, 13]]
nx.draw_shell(G, nlist=shells, **options)
###Output
_____no_output_____
###Markdown
To save drawings to a file, use, for example
###Code
nx.draw(G)
plt.savefig("path.png")
###Output
_____no_output_____
###Markdown
writes to the file `path.png` in the local directory. If Graphviz and PyGraphviz or pydot are available on your system, you can also use `nx_agraph.graphviz_layout(G)` or `nx_pydot.graphviz_layout(G)` to get the node positions, or write the graph in dot format for further processing.
###Code
from networkx.drawing.nx_pydot import write_dot
pos = nx.nx_agraph.graphviz_layout(G)
nx.draw(G, pos=pos)
write_dot(G, 'file.dot')
###Output
_____no_output_____ |
File 1b - MPS sample states.ipynb | ###Markdown
Session 3: MPS sample states
###Code
import numpy as np
from mps.state import *
###Output
_____no_output_____
###Markdown
Simple states
We build a function that creates a simple product state in MPS form.
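Throughout this notebook it is handy to be able to turn a small MPS back into a dense state vector. Here is a self-contained helper of our own (it is not part of `mps.state`); it only assumes the (left, physical, right) index order used in the tensors below:
```python
import numpy as np

# Contract a list of MPS tensors A[k], each with indices (left, physical, right),
# into the dense state vector. Only meant for small sanity checks.
def mps_to_vector(tensors):
    vec = np.ones((1, 1))                        # trivial open left boundary
    for A in tensors:
        vec = np.tensordot(vec, A, axes=(1, 0))  # (merged physical, d, D_right)
        vec = vec.reshape(-1, A.shape[2])        # fold the new physical index in
    return vec.reshape(-1)                       # the final right dimension is 1

up = np.array([1.0, 0.0]).reshape(1, 2, 1)
print(mps_to_vector([up, up, up]))               # |000> -> [1, 0, 0, 0, 0, 0, 0, 0]
```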
###Code
# file: mps/state.py
def product(vectors, length=None):
#
# If `length` is `None`, `vectors` will be a list of complex vectors
# representing the elements of the product state.
#
# If `length` is an integer, `vectors` is a single complex vector and
# it is repeated `length` times to build a product state.
#
def to_tensor(v):
v = np.array(v)
return np.reshape(v, (1, v.size, 1))
if length is not None:
return MPS([to_tensor(vectors)] * length)
else:
return MPS([to_tensor(v) for v in vectors])
###Output
_____no_output_____
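###Markdown
A quick usage sketch of `product`; the `MPS` methods `size`, `dimension()` and `tovector()` are the same ones exercised in the tests at the end of this notebook.
###Code
ψ = product(np.array([1.0, 0.0]), length=3)   # |000> as a 3-site MPS
print(ψ.size, ψ.dimension())                  # 3 sites, 2**3 = 8 dimensional
print(ψ.tovector())                           # should be the unit vector with a 1 in position 0
###Output
_____no_output_____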
###Markdown
Qubit entangled states The GHZ state is the Schrödinger cat state$$\frac{1}{\sqrt{2}}(|0\rangle^{\otimes N} + |1\rangle^{\otimes N}).$$
###Code
# file: mps/state.py
def GHZ(n):
"""Return a GHZ state with `n` qubits in MPS form."""
a = np.zeros((2, 2, 2))
b = a.copy()
a[0, 0, 0] = a[0, 1, 1] = 1.0/np.sqrt(2.0)
b[0, 0, 0] = 1.0
b[1, 1, 1] = 1.0
data = [a]+[b] * (n-1)
data[0] = a[0:1, :, :]
b = data[n-1]
data[n-1] = (b[:, :, 1:2] + b[:, :, 0:1])
return MPS(data)
###Output
_____no_output_____
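###Markdown
A small check of the construction above; the expected vector is the same one used in the tests below.
###Code
print(GHZ(3).tovector())   # should equal (|000> + |111>)/sqrt(2): amplitudes 1/sqrt(2) at positions 0 and 7
###Output
_____no_output_____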
###Markdown
The W state of one excitation is a delocalized spin-wave over N qubits $$\frac{1}{\sqrt{N}} \sum_i \sigma^+_i |0\rangle^{\otimes N}$$
###Code
# file: mps/state.py
def W(n):
"""Return a W with one excitation over `n` qubits."""
a = np.zeros((2, 2, 2))
a[0, 0, 0] = 1.0
a[0, 1, 1] = 1.0/np.sqrt(n)
a[1, 0, 1] = 1.0
data = [a] * n
data[0] = a[0:1, :, :]
data[n-1] = data[n-1][:, :, 1:2]
return MPS(data)
###Output
_____no_output_____
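###Markdown
Again a quick check against the explicit vector, mirroring the tests at the end of the notebook.
###Code
print(W(3).tovector())   # should equal (|001> + |010> + |100>)/sqrt(3)
###Output
_____no_output_____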
###Markdown
This is a particular case of a spin-wave$$\sum_i \psi_i \sigma^+_i |0\rangle^{\otimes N}$$
###Code
# file: mps/state.py
def wavepacket(ψ):
#
# Create an MPS for a spin 1/2 system with the given amplitude
# of the excited state on each site. In other words, we create
#
# \sum_i Ψ[i] σ^+ |0000...>
#
# The MPS is created with a single tensor: A(i,s,j)
# The input index "i" can take two values, [0,1]. If it is '0'
# it means we have not applied any σ^+ anywhere else, and we can
# excite a spin here. Therefore, we have two possible values:
#
# A(0,0,0) = 1.0
# A(0,1,1) = ψ[n] (n=given site)
#
# If i=1, then we cannot excite any further spin and
# A(1,0,1) = 1.0
#
# All other elements are zero. Of course, we have to impose
# boundary conditions that the first site only has A(0,s,j)
# and the last site only has A(i,s,1) (at least one spin has
# been excited)
#
ψ = np.array(ψ)
data = [0] * ψ.size
for n in range(0, ψ.size):
B = np.zeros((2, 2, 2), dtype=ψ.dtype)
B[0, 0, 0] = B[1, 0, 1] = 1.0
B[0, 1, 1] = ψ[n]
data[n] = B
data[0] = data[0][0:1, :, :]
data[-1] = data[-1][:, :, 1:]
return MPS(data)
###Output
_____no_output_____
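###Markdown
A minimal sketch of how `wavepacket` is meant to be used, for two sites. The explicit vector is built assuming the same site-0-leftmost Kronecker ordering that the product-state tests below use for `tovector()`.
###Code
ψ = np.array([0.6, 0.8])                              # excitation amplitudes, already normalized
e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # ground and excited single-qubit states
exact = ψ[0]*np.kron(e1, e0) + ψ[1]*np.kron(e0, e1)   # ψ[0]|10> + ψ[1]|01>
print(np.allclose(wavepacket(ψ).tovector(), exact))
###Output
_____no_output_____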
###Markdown
Create the graph states for qubits in 1D.
###Code
# file: mps/state.py
def graph(n, mps=True):
"""Create a one-dimensional graph state of `n` qubits."""
# Choose entangled pair state as : |00>+|11>
# Apply Hadamard H on the left virtual spins (which are the right spins of the entangled bond pairs)
assert n > 1
H = np.array([[1, 1], [1, -1]])
# which gives |0>x(|0>+|1>)+|1>x(|0>-|1>) = |00>+|01>+|10>-|11>
# Project as |0><00| + |1><11|
# We get the following MPS projectors:
A0 = np.dot(np.array([[1, 0], [0, 0]]), H)
A1 = np.dot(np.array([[0, 0], [0, 1]]), H)
AA = np.array([A0, A1])
AA = np.swapaxes(AA, 0, 1)
data = [AA]*n
data[0] = np.dot(np.array([[[1, 0], [0, 1]]]), H)
data[-1] = np.swapaxes(np.array([[[1, 0], [0, 1]]]), 0, 2) / np.sqrt(2**n)
return MPS(data)
###Output
_____no_output_____
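###Markdown
For two qubits the construction above reduces to the familiar graph state (|00> + |01> + |10> - |11>)/2, which is the same vector checked in the tests below.
###Code
print(graph(2).tovector())   # should be [1, 1, 1, -1]/2
###Output
_____no_output_____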
###Markdown
Qutrit states
###Code
# file: mps/state.py
# open boundary conditions
# free virtual spins at both ends are taken to be zero
def AKLT(n, mps=True):
"""Return an AKL state with `n` spin-1 particles."""
assert n > 1
# Choose entangled pair state as : |00>+|11>
# Apply i * Pauli Y matrix on the left virtual spins (which are the right spins of the entangled bond pairs)
iY = np.array([[0, 1], [-1, 0]])
# which gives -|01>+|10>
# Project as |-1><00| +|0> (<01|+ <10|)/ \sqrt(2)+ |1><11|
# We get the following MPS projectors:
A0 = np.dot(np.array([[1, 0], [0, 0]]), iY)
A1 = np.dot(np.array([[0, 1], [1, 0]]), iY)
A2 = np.dot(np.array([[0, 0], [0, 1]]), iY)
AA = np.array([A0, A1, A2]) / np.sqrt(2)
AA = np.swapaxes(AA, 0, 1)
data = [AA]*n
data[-1] = np.array([[[1, 0], [0, 1], [0, 0]]])
data[0] = np.array(np.einsum('ijk,kl->ijl',
data[-1], iY))/np.sqrt(2)
data[-1] = np.swapaxes(data[-1], 0, 2)
return MPS(data)
###Output
_____no_output_____
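###Markdown
A quick sanity check for two spin-1 sites: the state should be the singlet-like combination (|01> - |10>)/sqrt(2) in the local basis {0, 1, 2}, as verified again in the tests below.
###Code
print(AKLT(2).tovector())   # nonzero entries: +1/sqrt(2) at index 1 and -1/sqrt(2) at index 3
###Output
_____no_output_____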
###Markdown
Generic
###Code
# file: mps/state.py
def random(d, N, D=1):
"""Create a random state with 'N' elements of dimension 'd' and bond
dimension 'D'."""
mps = [1]*N
DR = 1
for i in range(N):
DL = DR
if N > 60 and i != N-1:
DR = D
else:
DR = np.min([DR*d, D, d**(N-i-1)])
mps[i] = np.random.rand(DL, d, DR)
return MPS(mps)
# file: mps/state.py
def gaussian(n, x0, w0, k0, mps=True):
#
    # Return a Gaussian wavepacket over `n` sites, centered at `x0`,
    # with width `w0` and wavevector `k0`, in MPS form
#
xx = np.arange(n, dtype=complex)
coefs = np.exp(-(xx-x0)**2 / w0**2 + 1j * k0*xx, dtype=complex)
return wavepacket(coefs / np.linalg.norm(coefs))
###Output
_____no_output_____
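###Markdown
A short usage sketch for the two generic constructors; the bond dimension and wavepacket parameters are arbitrary example values.
###Code
ξ = random(d=2, N=10, D=5)                   # random 10-site MPS of qubits with bond dimension up to 5
φ = gaussian(20, x0=10.0, w0=3.0, k0=0.5)    # Gaussian wavepacket on 20 sites
print(ξ.size, φ.size)
print(np.linalg.norm(φ.tovector()))          # should be 1 up to rounding, since the coefficients are normalized
###Output
_____no_output_____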
###Markdown
---- Tests
###Code
# file: mps/test/test_sample_states.py
import numpy as np
import unittest
from mps.tools import *
from mps.state import *
class TestSampleStates(unittest.TestCase):
def test_product_state(self):
a = np.array([1.0, 7.0])
b = np.array([0.0, 1.0, 3.0])
c = np.array([3.0, 5.0])
# Test a product state MPS of size 3
state1 = product(a, length=3)
tensor1 = np.reshape(a, (1, 2, 1))
#
# Test whether the MPS has the right size and dimension
self.assertEqual(state1.size, 3)
self.assertEqual(state1.dimension(), 8)
#
# Verify that it has the same data as input
self.assertTrue(np.array_equal(state1[0], tensor1))
self.assertTrue(np.array_equal(state1[1], tensor1))
self.assertTrue(np.array_equal(state1[2], tensor1))
#
# Verify that they produce the same wavefunction as directly
# creating the vector
state1ψ = np.kron(a, np.kron(a, a))
self.assertTrue(np.array_equal(state1.tovector(), state1ψ))
# Test a product state with different physical dimensions on
# even and odd sites.
state2 = product([a, b, c])
tensor2a = np.reshape(a, (1, 2, 1))
tensor2b = np.reshape(b, (1, 3, 1))
tensor2c = np.reshape(c, (1, 2, 1))
#
# Test whether the MPS has the right size and dimension
self.assertEqual(state2.size, 3)
self.assertEqual(state2.dimension(), 2*3*2)
#
# Verify that it has the same data as input
self.assertTrue(np.array_equal(state2[0], tensor2a))
self.assertTrue(np.array_equal(state2[1], tensor2b))
self.assertTrue(np.array_equal(state2[2], tensor2c))
#
# Verify that they produce the same wavefunction as directly
# creating the vector
state2ψ = np.kron(a, np.kron(b, c))
self.assertTrue(np.array_equal(state2.tovector(), state2ψ))
def test_GHZ(self):
ghz1 = np.array([1.0, 1.0])/np.sqrt(2.0)
mps1 = GHZ(1)
self.assertTrue(np.array_equal(mps1.tovector(), ghz1))
ghz2 = np.array([1.0, 0.0, 0.0, 1.0])/np.sqrt(2.0)
mps2 = GHZ(2)
self.assertTrue(np.array_equal(mps2.tovector(), ghz2))
ghz3 = np.array([1.0, 0, 0, 0, 0, 0, 0, 1.0])/np.sqrt(2.0)
mps3 = GHZ(3)
self.assertTrue(np.array_equal(mps3.tovector(), ghz3))
for i in range(1, 2):
Ψ = GHZ(i)
self.assertEqual(Ψ.size, i)
self.assertEqual(Ψ.dimension(), 2**i)
def test_W(self):
W1 = np.array([0, 1.0])
mps1 = W(1)
self.assertTrue(np.array_equal(mps1.tovector(), W1))
W2 = np.array([0, 1, 1, 0])/np.sqrt(2.0)
mps2 = W(2)
self.assertTrue(np.array_equal(mps2.tovector(), W2))
W3 = np.array([0, 1, 1, 0, 1, 0, 0, 0])/np.sqrt(3.0)
mps3 = W(3)
self.assertTrue(np.array_equal(mps3.tovector(), W3))
for i in range(1, 2):
Ψ = W(i)
self.assertEqual(Ψ.size, i)
self.assertEqual(Ψ.dimension(), 2**i)
def test_AKLT(self):
AKLT2 = np.zeros(3**2)
AKLT2[1] = 1
AKLT2[3] = -1
AKLT2 = AKLT2/np.sqrt(2)
self.assertTrue(np.array_equal(AKLT(2).tovector(), AKLT2))
AKLT3 = np.zeros(3**3)
AKLT3[4] = 1
AKLT3[6] = -1
AKLT3[10] = -1
AKLT3[12] = 1
AKLT3 = AKLT3/(np.sqrt(2)**2)
self.assertTrue(np.array_equal(AKLT(3).tovector(), AKLT3))
for i in range(2, 5):
Ψ = AKLT(i)
self.assertEqual(Ψ.size, i)
self.assertEqual(Ψ.dimension(), 3**i)
def test_graph(self):
GR = np.ones(2**2)/np.sqrt(2**2)
GR[-1] = -GR[-1]
self.assertTrue(np.array_equal(graph(2).tovector(), GR))
GR = np.ones(2**3)/np.sqrt(2**3)
GR[3] = -GR[3]
GR[-2] = -GR[-2]
self.assertTrue(np.array_equal(graph(3).tovector(), GR))
for i in range(1, 2):
Ψ = W(i)
self.assertEqual(Ψ.size, i)
self.assertEqual(Ψ.dimension(), 2**i)
suite1 = unittest.TestLoader().loadTestsFromNames(['__main__.TestSampleStates'])
unittest.TextTestRunner(verbosity=2).run(suite1);
###Output
_____no_output_____ |