path | concatenated_notebook
---|---|
Tutorial#3-1 Keras and Tensorflow Introduction.ipynb | ###Markdown
Installing TensorFlow. NOTE: it will take some time!
###Code
%pip install --upgrade pip
%pip install tensorflow==2.5.0
###Output
Requirement already satisfied: pip in c:\users\peese\anaconda3\lib\site-packages (21.1.3)
Note: you may need to restart the kernel to use updated packages.
Collecting tensorflow==2.5.0
Using cached tensorflow-2.5.0-cp38-cp38-win_amd64.whl (422.6 MB)
Requirement already satisfied: protobuf>=3.9.2 in c:\users\peese\anaconda3\lib\site-packages (from tensorflow==2.5.0) (3.17.3)
Collecting keras-nightly~=2.5.0.dev
Using cached keras_nightly-2.5.0.dev2021032900-py2.py3-none-any.whl (1.2 MB)
Requirement already satisfied: wrapt~=1.12.1 in c:\users\peese\anaconda3\lib\site-packages (from tensorflow==2.5.0) (1.12.1)
Collecting flatbuffers~=1.12.0
Using cached flatbuffers-1.12-py2.py3-none-any.whl (15 kB)
Collecting termcolor~=1.1.0
Using cached termcolor-1.1.0-py3-none-any.whl
Collecting astunparse~=1.6.3
Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Requirement already satisfied: wheel~=0.35 in c:\users\peese\anaconda3\lib\site-packages (from tensorflow==2.5.0) (0.36.2)
Collecting absl-py~=0.10
Using cached absl_py-0.13.0-py3-none-any.whl (132 kB)
Collecting h5py~=3.1.0
Using cached h5py-3.1.0-cp38-cp38-win_amd64.whl (2.7 MB)
Collecting google-pasta~=0.2
Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB)
Collecting grpcio~=1.34.0
Using cached grpcio-1.34.1-cp38-cp38-win_amd64.whl (2.9 MB)
Collecting tensorflow-estimator<2.6.0,>=2.5.0rc0
Using cached tensorflow_estimator-2.5.0-py2.py3-none-any.whl (462 kB)
Collecting keras-preprocessing~=1.1.2
Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB)
Requirement already satisfied: typing-extensions~=3.7.4 in c:\users\peese\anaconda3\lib\site-packages (from tensorflow==2.5.0) (3.7.4.3)
Collecting tensorboard~=2.5
Using cached tensorboard-2.5.0-py3-none-any.whl (6.0 MB)
Requirement already satisfied: numpy~=1.19.2 in c:\users\peese\anaconda3\lib\site-packages (from tensorflow==2.5.0) (1.19.5)
Collecting gast==0.4.0
Using cached gast-0.4.0-py3-none-any.whl (9.8 kB)
Collecting opt-einsum~=3.3.0
Using cached opt_einsum-3.3.0-py3-none-any.whl (65 kB)
Requirement already satisfied: six~=1.15.0 in c:\users\peese\anaconda3\lib\site-packages (from tensorflow==2.5.0) (1.15.0)
Note: you may need to restart the kernel to use updated packages.
Collecting markdown>=2.6.8
Using cached Markdown-3.3.4-py3-none-any.whl (97 kB)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in c:\users\peese\anaconda3\lib\site-packages (from tensorboard~=2.5->tensorflow==2.5.0) (0.6.1)
Collecting google-auth-oauthlib<0.5,>=0.4.1
Using cached google_auth_oauthlib-0.4.4-py2.py3-none-any.whl (18 kB)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in c:\users\peese\anaconda3\lib\site-packages (from tensorboard~=2.5->tensorflow==2.5.0) (1.8.0)
Requirement already satisfied: setuptools>=41.0.0 in c:\users\peese\anaconda3\lib\site-packages (from tensorboard~=2.5->tensorflow==2.5.0) (52.0.0.post20210125)
Requirement already satisfied: google-auth<2,>=1.6.3 in c:\users\peese\anaconda3\lib\site-packages (from tensorboard~=2.5->tensorflow==2.5.0) (1.32.1)
Requirement already satisfied: requests<3,>=2.21.0 in c:\users\peese\anaconda3\lib\site-packages (from tensorboard~=2.5->tensorflow==2.5.0) (2.25.1)
Requirement already satisfied: werkzeug>=0.11.15 in c:\users\peese\anaconda3\lib\site-packages (from tensorboard~=2.5->tensorflow==2.5.0) (1.0.1)
Requirement already satisfied: rsa<5,>=3.1.4 in c:\users\peese\anaconda3\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard~=2.5->tensorflow==2.5.0) (4.7.2)
Requirement already satisfied: pyasn1-modules>=0.2.1 in c:\users\peese\anaconda3\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard~=2.5->tensorflow==2.5.0) (0.2.8)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in c:\users\peese\anaconda3\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard~=2.5->tensorflow==2.5.0) (4.2.2)
Requirement already satisfied: requests-oauthlib>=0.7.0 in c:\users\peese\anaconda3\lib\site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.5->tensorflow==2.5.0) (1.3.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in c:\users\peese\anaconda3\lib\site-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard~=2.5->tensorflow==2.5.0) (0.4.8)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\peese\anaconda3\lib\site-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow==2.5.0) (2020.12.5)
Requirement already satisfied: chardet<5,>=3.0.2 in c:\users\peese\anaconda3\lib\site-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow==2.5.0) (4.0.0)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\peese\anaconda3\lib\site-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow==2.5.0) (1.26.4)
Requirement already satisfied: idna<3,>=2.5 in c:\users\peese\anaconda3\lib\site-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow==2.5.0) (2.10)
Requirement already satisfied: oauthlib>=3.0.0 in c:\users\peese\anaconda3\lib\site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.5->tensorflow==2.5.0) (3.1.1)
Installing collected packages: markdown, grpcio, google-auth-oauthlib, absl-py, termcolor, tensorflow-estimator, tensorboard, opt-einsum, keras-preprocessing, keras-nightly, h5py, google-pasta, gast, flatbuffers, astunparse, tensorflow
Attempting uninstall: h5py
Found existing installation: h5py 2.10.0
Uninstalling h5py-2.10.0:
Successfully uninstalled h5py-2.10.0
Successfully installed absl-py-0.13.0 astunparse-1.6.3 flatbuffers-1.12 gast-0.4.0 google-auth-oauthlib-0.4.4 google-pasta-0.2.0 grpcio-1.34.1 h5py-3.1.0 keras-nightly-2.5.0.dev2021032900 keras-preprocessing-1.1.2 markdown-3.3.4 opt-einsum-3.3.0 tensorboard-2.5.0 tensorflow-2.5.0 tensorflow-estimator-2.5.0 termcolor-1.1.0
###Markdown
If you see the message below, please restart the kernel from the panel above (Kernel > Restart): 'Note: you may need to restart the kernel to use updated packages.' Let's check if you have everything!
###Code
import tensorflow as tf
print(tf.__version__)
reachout='Please repeat the steps above. If it still does not work, reach out to me ([email protected])'
try:
import tensorflow
print('tensorflow is all good!')
except:
print("An exception occurred in tensorflow installation."+reachout)
try:
import keras
print('keras is all good!')
except:
print("An exception occurred in keras installation."+reachout)
###Output
tensorflow is all good!
keras is all good!
###Markdown
Now let's explore TensorFlow! As its name suggests, TensorFlow stores constants as tensor objects. Let's create our first constant!
###Code
import tensorflow as tf
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
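# Note: tensorflow.compat.v1 together with disable_v2_behavior() (above) is what lets the
# TF1-style Session examples below run even though TensorFlow 2.x is installed.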
myfirstconst = tf.constant('Hello World')
myfirstconst
x = tf.constant(130.272)
x
###Output
_____no_output_____
###Markdown
TF Sessions. Let's create a TensorFlow Session. It can be thought of as a class for running TensorFlow operations. The session encapsulates the environment in which operations take place. Let's do a quick example:
###Code
a = tf.constant(1)
b = tf.constant(5)
with tf.Session() as Session:
print('TF simple Operations')
print('Multiply',Session.run(a*b))
print('Divide',Session.run(a/b))
print('Add',Session.run(a+b))
print('Subtract',Session.run(b-a))
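# Note: a and b are integer tensors, but '/' performs true division,
# which is why 'Divide' prints 0.2 rather than 0.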
###Output
TF simple Operations
Multiply 5
Divide 0.2
Add 6
Subtract 4
###Markdown
Now let's multiply two matrices
###Code
import numpy as np
m = np.array([[1.0,2.0]])
n = np.array([[3.0],[4.0]])
multi = tf.matmul(m,n)
multi
with tf.Session() as Session:
res = Session.run(multi)
print(res)
###Output
[[11.]]
###Markdown
TF Variables. Sometimes you want to define a variable resulting from operations. **tf.Variable is ideal for this case!** Let's see how to use it!
###Code
#We have to start a session!
sess = tf.InteractiveSession()
atensor = tf.random_uniform((2,2),0,1)
atensor
var = tf.Variable(initial_value=atensor)
var
try:
with tf.Session() as Session:
res = Session.run(var)
print(res)
except:
print("error!")
initialize = tf.global_variables_initializer()
initialize.run()
var.eval()
sess.run(var)
###Output
_____no_output_____
###Markdown
Now let's custom-build our first neural network!
###Code
xd = np.linspace(0,10,100) + np.random.uniform(-3,.5,100)
yd = np.linspace(0,10,100) + np.random.uniform(-.5,2,100)
import matplotlib.pyplot as plt
plt.plot(xd,yd,'o')
###Output
_____no_output_____
###Markdown
Let's define our variables here: $y=m*x+b$
###Code
#Let's initialize with a guess
m = tf.Variable(1.0)
b = tf.Variable(0.1)
#Let's build our objective function!
#initialize error
e=0
for x,y in zip(xd,yd):
#our model
y_pred = m*x + b
# our error
e += (y-y_pred)**2
## tensorflow optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.0001)
## we want to minimize error
training = optimizer.minimize(e)
## initialize our variables with tensorflow
initalize = tf.global_variables_initializer()
#start the session and train for 100 epochs!
with tf.Session() as sess:
sess.run(initalize)
epochs = 100
for i in range(epochs):
sess.run(training)
# Get results
mf, bf = sess.run([m,b])
print("The slope is {} and the intercept is {}".format(mf, bf))
#Let's evaluate our results
x_v = np.linspace(-3,11,300)
y_v = mf*x_v + bf
plt.plot(x_v,y_v,'r')
plt.plot(xd,yd,'o')
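# Optional sanity check (a sketch): compare the gradient-descent fit with numpy's
# closed-form least-squares fit; the two should be close once training has converged.
m_ls, b_ls = np.polyfit(xd, yd, 1)
print("np.polyfit slope {:.3f}, intercept {:.3f}".format(m_ls, b_ls))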
###Output
_____no_output_____ |
investigate-dataset.ipynb | ###Markdown
> **Tip**: Welcome to the Investigate a Dataset project! You will find tips in quoted sections like this to help organize your approach to your investigation. Before submitting your project, it will be a good idea to go back through your report and remove these sections to make the presentation of your work as tidy as possible. First things first, you might want to double-click this Markdown cell and change the title so that it reflects your dataset and investigation.
Project: Investigate a Dataset (Replace this with something more specific!)
Table of Contents: Introduction, Data Wrangling, Exploratory Data Analysis, Conclusions
Introduction
> **Tip**: In this section of the report, provide a brief introduction to the dataset you've selected for analysis. At the end of this section, describe the questions that you plan on exploring over the course of the report. Try to build your report around the analysis of at least one dependent variable and three independent variables.
> If you haven't yet selected and downloaded your data, make sure you do that first before coming back here. If you're not sure what questions to ask right now, then make sure you familiarize yourself with the variables and the dataset context for ideas of what to explore.
Dataset Description
This data set ('tmdb-movies.csv') contains information about 10,000 movies collected from The Movie Database (TMDb), including user ratings and revenue.
Questions
As checked in the cells below, after studying the data, there are a few key aspects which I would like to explore in this project:
* Which type of movie genre has the highest and the lowest vote_average?
* What is the trend of the budget for making a movie over time? Let's see this via the 'budget_adj' field.
* Which actors have the highest summed revenue for their movies?
* Which directors have the highest summed revenue for their movies?
* What is the correlation between the popularity of a movie and its vote average?
* What is the correlation of vote_average with respect to runtime, vote_count, revenue_adj or budget_adj?
###Code
# Use this cell to set up import statements for all of the packages that you
# plan to use.
# Remember to include a 'magic word' so that your visualizations are plotted
# inline with the notebook. See this page for more:
# http://ipython.readthedocs.io/en/stable/interactive/magics.html
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# visualizations are plotted inline with the notebook
%matplotlib inline
###Output
_____no_output_____
###Markdown
Data Wrangling
> **Tip**: In this section of the report, you will load in the data, check for cleanliness, and then trim and clean your dataset for analysis. Make sure that you document your steps carefully and justify your cleaning decisions.
General Properties
###Code
# Load data into DataFrame
df = pd.read_csv('tmdb-movies.csv')
# Print out a few lines from the loaded DataFrame.
df.head()
# Check the size of the data and number of Columns per record.
df.shape
###Output
_____no_output_____
###Markdown
Above, we see that the total number of records (rows) is 10866.
###Code
#Perform operations to inspect data types and look for instances of missing or possibly errant data.
df.dtypes
###Output
_____no_output_____
###Markdown
Values like popularity, budget, revenue, runtime, vote_count, vote_average, budget_adj and revenue_adj are already in integer or float form, which in turn will help us in the analysis. One value which I would like to change is 'release_date', which is currently a string; let's change this to a datetime object, which I'll do in the data cleaning process below. Let's also see the number of NaN entries in the dataframe:
###Code
# For below solution, I used this "https://stackoverflow.com/questions/39421433/efficient-way-to-find-null-values-in-a-dataframe"
# StackOverflow page for assistance.
def list_null_count(df):
null_counts = df.isnull().sum()
print ('Total Null entries with respect to each column:')
return null_counts.sort_values(ascending=False)
list_null_count(df)
###Output
Total Null entries with respect to each column:
###Markdown
From the above data, we can see that there are some columns which have a significant count of missing values, but all of the columns holding int or float data have values in them. In the cell below, let's count the number of unique values in each column:
###Code
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.nunique.html
df.nunique()
###Output
_____no_output_____
###Markdown
In the above cell's output we can see that there is a lot of unique data; one thing we notice is that we have data for a total of 56 unique years. Additionally, the total number of records is 10866, but there are only 10865 unique 'id' values, which means there is one record which is repeated. We will remove that in our data cleaning process.
###Code
# Show description of Data:
df.describe()
###Output
_____no_output_____
###Markdown
* Runtime varies from 0 to 900
* vote_count varies from 10 to 9767
* Overall, all the movies are from the years 1960 to 2015
> **Tip**: You should _not_ perform too many operations in each cell. Create cells freely to explore your data. One option that you can take with this project is to do a lot of explorations in an initial notebook. These don't have to be organized, but make sure you use enough comments to understand the purpose of each code cell. Then, after you're done with your analysis, create a duplicate notebook where you will trim the excess and organize your steps so that you have a flowing, cohesive report.
> **Tip**: Make sure that you keep your reader informed on the steps that you are taking in your investigation. Follow every code cell, or every set of related code cells, with a markdown cell to describe to the reader what was found in the preceding cell(s). Try to make it so that the reader can then understand what they will be seeing in the following cell(s).
Data Cleaning (Replace this with more specific notes!)
To clean the data we will:
* Remove duplicate data
* Change 'release_date' from string to a datetime object
* Replace NaN in genres, cast, director, and tagline with an empty string
* Separate multiple values separated by pipe (|) characters into Python lists
Remove Duplicate Data
As we have checked above, there was one row which was a duplicate, so let's first remove that:
###Code
# After discussing the structure of the data and any problems that need to be
# cleaned, perform those cleaning steps in the second part of this section.
#Step 1: show count of duplicate rows (https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.duplicated.html):
print ("Total duplicate rows: {}".format(sum(df.duplicated())))
###Output
Total duplicate rows: 1
###Markdown
Now lets delete this duplicate row:
###Code
# Step 2: Remove duplicate: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html
df = df.drop_duplicates()
print ("Total duplicate rows after drop_duplicates: {}".format(sum(df.duplicated())))
###Output
Total duplicate rows after drop_duplicates: 0
###Markdown
Let's fix the data type by conversion:
###Code
df['release_date'] = pd.to_datetime(df['release_date'])
# Lets check the data types to confirm that the date type is changed
df.dtypes
###Output
_____no_output_____
###Markdown
"release_date" column is changed from "Object" to "datetime64[ns]", Now, lets replace NaN to Empty strings:
###Code
# check Nan info: genres, tagline and cast
df.info()
# Lets now clear the NaNs in the columns we are interested in:
#https://stackoverflow.com/questions/26837998/pandas-replace-nan-with-blank-empty-string
def replace_NaN_by_empty_string(data, lable):
data[lable] = data[lable].fillna('')
return data
df = replace_NaN_by_empty_string(df, 'genres')
df = replace_NaN_by_empty_string(df, 'tagline')
df = replace_NaN_by_empty_string(df, 'cast')
# check Nan info
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 10865 entries, 0 to 10865
Data columns (total 21 columns):
id 10865 non-null int64
imdb_id 10855 non-null object
popularity 10865 non-null float64
budget 10865 non-null int64
revenue 10865 non-null int64
original_title 10865 non-null object
cast 10865 non-null object
homepage 2936 non-null object
director 10821 non-null object
tagline 10865 non-null object
keywords 9372 non-null object
overview 10861 non-null object
runtime 10865 non-null int64
genres 10865 non-null object
production_companies 9835 non-null object
release_date 10865 non-null datetime64[ns]
vote_count 10865 non-null int64
vote_average 10865 non-null float64
release_year 10865 non-null int64
budget_adj 10865 non-null float64
revenue_adj 10865 non-null float64
dtypes: datetime64[ns](1), float64(4), int64(6), object(10)
memory usage: 1.8+ MB
###Markdown
We can see that the NaNs in genres, tagline and cast have been removed.
Exploratory Data Analysis
> **Tip**: Now that you've trimmed and cleaned your data, you're ready to move on to exploration. Compute statistics and create visualizations with the goal of addressing the research questions that you posed in the Introduction section. It is recommended that you be systematic with your approach. Look at one variable at a time, and then follow it up by looking at relationships between variables.
Question: Which type of movie genre has the highest and the lowest vote_average?
###Code
# Let's assume that the first genre in the list is the most relevant. Assuming that, let's create a new column 'releavent_genre',
# which will contain the first genre:
def split_and_take_first_item(x):
return x.split('|')[0]
df['releavent_genre'] = df['genres'].apply(split_and_take_first_item)
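# The cleaning plan also mentioned splitting the pipe-separated values into Python lists;
# one possible way (a sketch, kept as an extra column since the analysis below only uses the first genre):
df['genres_list'] = df['genres'].str.split('|')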
#Let's check if the new column is added:
df.head()
###Output
_____no_output_____
###Markdown
Based on the new column, now let's see which genre has the highest and lowest 'vote_average':
###Code
releavent_genre_sorted_for_mean = df.groupby('releavent_genre')['vote_average'].mean().sort_values(ascending=False)
releavent_genre_sorted_for_mean
# Let's plot a bar graph of this data:
releavent_genre_sorted_for_mean.plot(kind='bar')
###Output
_____no_output_____
###Markdown
We can clearly see that:
* The genre with the highest vote_average is "Documentary", followed by "Music".
* The genre with the lowest vote_average is "Horror", followed by "Thriller".
Research Question: What is the trend of the budget for making a movie over time? Let's see this via the 'budget_adj' field.
###Code
# Let's first create a dataframe from the given df containing only the rows where budget_adj is not equal to 0
df_budget_adj = df[df.budget_adj != 0]
df_budget_adj.shape
# Total Movies count
df_budget_adj.groupby('release_year')['budget_adj'].count().plot()
# Mean adjusted budget over the years
df_budget_adj.groupby('release_year')['budget_adj'].mean().plot()
# Sum of budget put into movies every year.
df_budget_adj.groupby('release_year')['budget_adj'].sum().plot()
###Output
_____no_output_____
###Markdown
We can see in the above graphs that the mean adjusted budget followed different trends at different times. To understand this, when we look at the total number of movies made in each particular year, we see that the number of movies made every year keeps increasing over time, which in turn affects the mean of the adjusted budget over the years. So finally we check the total sum put into the making of movies each year and, as I had thought, it is increasing rapidly over time. We can say that the mean adjusted budget has both risen and fallen over the years because more and more movies are being made each coming year. Since 2000, the number of movies per year has increased significantly, so the mean has shifted and the average adjusted budget has been pulled down. Research Question: Which actors have the highest summed revenue for their movies?
###Code
# Similar to what we have done for genre, we'll take the first actor/actress from the cast, assuming they are
# the lead character in the movie, so let's create a new column with the most relevant actor or cast member
df['releavent_cast'] = df['cast'].apply(split_and_take_first_item)
df.head()
# Since we now have a new column which gives the relevant/first cast member, we will check the top 10 cast members
# appearing in the biggest revenue-collecting movies.
# We first group on "releavent_cast" and on revenue_adj column we sum them, sort them and get first 10 casts
releavent_cast_sorted_for_sum = df.groupby('releavent_cast')['revenue_adj'].sum().sort_values(ascending=False)[:10]
releavent_cast_sorted_for_sum
# Plot the values on bar graph
releavent_cast_sorted_for_sum.plot(kind='bar')
###Output
_____no_output_____
###Markdown
We can see that Tom Cruise and Tom Hanks are the two cast members whose movies have earned the highest revenue, followed by Harrison Ford. Research Question: Which directors have the highest summed revenue for their movies?
###Code
director_df_sum_revenue = df.groupby('director')['revenue_adj'].sum().sort_values(ascending=False)[:10]
director_df_sum_revenue
director_df_sum_revenue.plot(kind='bar')
###Output
_____no_output_____
###Markdown
With the above graph, we can say that movies directed by Steven Spielberg generate far more revenue than those by other directors, followed by James Cameron. Research Question: Which directors have the highest ratio of revenue generated to budget for their movies?
###Code
director_df_sum_revenue_all = df.groupby('director')['revenue_adj'].sum()
director_df_sum_budget_all = df.groupby('director')['budget_adj'].sum()
# Calculate Efficiency:
director_efficiency = director_df_sum_revenue_all / director_df_sum_budget_all
# Show the first 10 records (not yet sorted)
print (director_efficiency[:10])
len(director_efficiency)
# 1. Replace Inifnity values to NaN
# 2. Drop NaN
director_efficiency = director_efficiency.replace([np.inf, -np.inf], np.nan).dropna()
director_efficiency = director_efficiency.sort_values(ascending=False)
director_efficiency[:10]
director_efficiency[:3].plot(kind='bar')
###Output
_____no_output_____ |
Tensorflow_Eager_Exec_1_x_2_0ipynb.ipynb | ###Markdown
TensorFlow 1.x version: TensorFlow Eager mode is not available
###Code
import numpy as np
import tensorflow as tf
###Output
_____no_output_____
###Markdown
What version do we have now?
###Code
print(tf.__version__)
###Output
1.13.1
###Markdown
Eager execution is not enabled (by default)
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
tf.matmul: Multiply Two Matrices Using TensorFlow MatMul
###Code
x = [[2., 2.],
[1., 0.]]
m = tf.matmul(x, x)
m
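# Without eager execution this is only a symbolic Tensor in the graph;
# a session (e.g. tf.Session().run(m)) would be needed to get the actual values.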
###Output
_____no_output_____
###Markdown
TensorFlow 2.0 installation. Some things may not work properly yet, so keep testing.
###Code
!pip install --upgrade tensorflow-gpu==2.0.0-alpha0
import numpy as np
import tensorflow as tf
tf.executing_eagerly()
tf.__version__
x = [[2., 2.],
[1., 0.]]
m = tf.matmul(x, x)
m
m.numpy()
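# With eager execution enabled, m is a concrete EagerTensor and
# .numpy() returns its value immediately: [[6., 4.], [2., 2.]]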
###Output
_____no_output_____ |
Python-RF-Intro.ipynb | ###Markdown
Introduction to Python
* General-purpose programming language
* More readable than other languages
* Multi-paradigm language: object-oriented, functional
* ZEN OF PYTHON
###Code
import this
###Output
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
###Markdown
Installation
- https://www.python.org/downloads/
- python -V
- pip -V
###Code
import sys
print(sys.version_info)
###Output
sys.version_info(major=3, minor=5, micro=3, releaselevel='final', serial=0)
###Markdown
Data Types
Some immutable types: int, float, long, complex, str, bytes, tuple, frozen set, Boolean, array
Some mutable types: byte array, list, set, dict
Python Lists and operations
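To make the distinction concrete, here is a minimal sketch (plain Python, nothing beyond standard built-in behaviour is assumed): an immutable object such as a tuple rejects item assignment, while a mutable list accepts it.
```python
point = (1, 2)            # tuple: immutable
try:
    point[0] = 10         # item assignment on a tuple raises TypeError
except TypeError as err:
    print('tuples are immutable:', err)

values = [1, 2]           # list: mutable
values[0] = 10            # in-place modification is fine
values.append(3)
print('lists are mutable:', values)
```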
###Code
# STATIC LISTS
lst1 = [1, 2, 3, 4] # integer item types
lst2 = ['a', 'v', 'i'] # string item types
lst3 = ['m', 4, lst1] # heteregeneous item types
print(lst1)
print(lst2)
print(lst3)
# DYNAMIC LISTS
name = 'Avi Mehenwal' # string type
dlst = list() # empty list
for i in name:
dlst.append(i) # list.append() operation
print(len(dlst), 'ORIGINAL :', dlst)
dlst.reverse()
print(len(dlst), 'REVERSED :', dlst)
dlst.sort()
print(len(dlst), 'SORTED :', dlst)
print('POP', dlst.pop(), ' ', len(dlst), dlst)
print(len(dlst), 'ORIGINAL :', dlst)
###Output
12 ORIGINAL : ['A', 'v', 'i', ' ', 'M', 'e', 'h', 'e', 'n', 'w', 'a', 'l']
12 REVERSED : ['l', 'a', 'w', 'n', 'e', 'h', 'e', 'M', ' ', 'i', 'v', 'A']
12 SORTED : [' ', 'A', 'M', 'a', 'e', 'e', 'h', 'i', 'l', 'n', 'v', 'w']
POP w 11 [' ', 'A', 'M', 'a', 'e', 'e', 'h', 'i', 'l', 'n', 'v']
11 ORIGINAL : [' ', 'A', 'M', 'a', 'e', 'e', 'h', 'i', 'l', 'n', 'v']
###Markdown
Python Dictionaries: key/value pairs, looked up via hash values
###Code
# Static
name = dict()
name = {
"first" : "avi",
"surname" : "mehenwal"
}
print(name)
# Dynamic
name["middle"] = "kumar"
print(name)
###Output
{'surname': 'mehenwal', 'first': 'avi'}
{'middle': 'kumar', 'surname': 'mehenwal', 'first': 'avi'}
|
Data Science/Python/07_Practice+Exercise+2.ipynb | ###Markdown
Practice Exercise 2. This practice is continued from the Cricket example that you have seen as a part of this session. Now, you are provided with 2 lists that contain the data of the players. They are asked to play one match each and the data is collected. The first list contains the player IDs and the second list consists of tuples where the first element is the runs scored, the second is the wickets taken and the third is the number of catches taken. As a part of this exercise, solve the questions that are provided below.
###Code
player = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 700, 701, 702, 703, 704, 705, 706, 707, 708, 709, 710, 711, 712, 713, 714, 715, 716, 717, 718, 719, 720, 721, 722, 723, 724, 725, 726, 727, 728, 729, 730, 
731, 732, 733, 734, 735, 736, 737, 738, 739, 740, 741, 742, 743, 744, 745, 746, 747, 748, 749, 750, 751, 752, 753, 754, 755, 756, 757, 758, 759, 760, 761, 762, 763, 764, 765, 766, 767, 768, 769, 770, 771, 772, 773, 774, 775, 776, 777, 778, 779, 780, 781, 782, 783, 784, 785, 786, 787, 788, 789, 790, 791, 792, 793, 794, 795, 796, 797, 798, 799, 800, 801, 802, 803, 804, 805, 806, 807, 808, 809, 810, 811, 812, 813, 814, 815, 816, 817, 818, 819, 820, 821, 822, 823, 824, 825, 826, 827, 828, 829, 830, 831, 832, 833, 834, 835, 836, 837, 838, 839, 840, 841, 842, 843, 844, 845, 846, 847, 848, 849, 850, 851, 852, 853, 854, 855, 856, 857, 858, 859, 860, 861, 862, 863, 864, 865, 866, 867, 868, 869, 870, 871, 872, 873, 874, 875, 876, 877, 878, 879, 880, 881, 882, 883, 884, 885, 886, 887, 888, 889, 890, 891, 892, 893, 894, 895, 896, 897, 898, 899, 900, 901, 902, 903, 904, 905, 906, 907, 908, 909, 910, 911, 912, 913, 914, 915, 916, 917, 918, 919, 920, 921, 922, 923, 924, 925, 926, 927, 928, 929, 930, 931, 932, 933, 934, 935, 936, 937, 938, 939, 940, 941, 942, 943, 944, 945, 946, 947, 948, 949, 950, 951, 952, 953, 954, 955, 956, 957, 958, 959, 960, 961, 962, 963, 964, 965, 966, 967, 968, 969, 970, 971, 972, 973, 974, 975, 976, 977, 978, 979, 980, 981, 982, 983, 984, 985, 986, 987, 988, 989, 990, 991, 992, 993, 994, 995, 996, 997, 998, 999, 1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009, 1010, 1011, 1012, 1013, 1014, 1015, 1016, 1017, 1018, 1019, 1020, 1021, 1022, 1023, 1024, 1025, 1026, 1027, 1028, 1029, 1030, 1031, 1032, 1033, 1034, 1035, 1036, 1037, 1038, 1039, 1040, 1041, 1042, 1043, 1044, 1045, 1046, 1047, 1048, 1049, 1050, 1051, 1052, 1053, 1054, 1055, 1056, 1057, 1058, 1059, 1060, 1061, 1062, 1063, 1064, 1065, 1066, 1067, 1068, 1069, 1070, 1071, 1072, 1073, 1074, 1075, 1076, 1077, 1078, 1079, 1080, 1081, 1082, 1083, 1084, 1085, 1086, 1087, 1088, 1089, 1090, 1091, 1092, 1093, 1094, 1095, 1096, 1097, 1098, 1099, 1100, 1101, 1102, 1103, 1104, 1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114, 1115, 1116, 1117, 1118, 1119, 1120, 1121, 1122, 1123, 1124, 1125, 1126, 1127, 1128, 1129, 1130, 1131, 1132, 1133, 1134, 1135, 1136, 1137, 1138, 1139, 1140, 1141, 1142, 1143, 1144, 1145, 1146, 1147, 1148, 1149, 1150, 1151, 1152, 1153, 1154, 1155, 1156, 1157, 1158, 1159, 1160, 1161, 1162, 1163, 1164, 1165, 1166, 1167, 1168, 1169, 1170, 1171, 1172, 1173, 1174, 1175, 1176, 1177, 1178, 1179, 1180, 1181, 1182, 1183, 1184, 1185, 1186, 1187, 1188, 1189, 1190, 1191, 1192, 1193, 1194, 1195, 1196, 1197, 1198, 1199, 1200, 1201, 1202, 1203, 1204, 1205, 1206, 1207, 1208, 1209, 1210, 1211, 1212, 1213, 1214, 1215, 1216, 1217, 1218, 1219, 1220, 1221, 1222, 1223, 1224, 1225, 1226, 1227, 1228, 1229, 1230, 1231, 1232, 1233, 1234, 1235, 1236, 1237, 1238, 1239, 1240, 1241, 1242, 1243, 1244, 1245, 1246, 1247, 1248, 1249, 1250, 1251, 1252, 1253, 1254, 1255, 1256, 1257, 1258, 1259, 1260, 1261, 1262, 1263, 1264, 1265, 1266, 1267, 1268, 1269, 1270, 1271, 1272, 1273, 1274, 1275, 1276, 1277, 1278, 1279, 1280, 1281, 1282, 1283, 1284, 1285, 1286, 1287, 1288, 1289, 1290, 1291, 1292, 1293, 1294, 1295, 1296, 1297, 1298, 1299, 1300, 1301, 1302, 1303, 1304, 1305, 1306, 1307, 1308, 1309, 1310, 1311, 1312, 1313, 1314, 1315, 1316, 1317, 1318, 1319, 1320, 1321, 1322, 1323, 1324, 1325, 1326, 1327, 1328, 1329, 1330, 1331, 1332, 1333, 1334, 1335, 1336, 1337, 1338, 1339, 1340, 1341, 1342, 1343, 1344, 1345, 1346, 1347, 1348, 1349, 1350, 1351, 1352, 1353, 1354, 1355, 1356, 1357, 1358, 1359, 1360, 1361, 1362, 1363, 1364, 1365, 1366, 1367, 
1368, 1369, 1370, 1371, 1372, 1373, 1374, 1375, 1376, 1377, 1378, 1379, 1380, 1381, 1382, 1383, 1384, 1385, 1386, 1387, 1388, 1389, 1390, 1391, 1392, 1393, 1394, 1395, 1396, 1397, 1398, 1399, 1400, 1401, 1402, 1403, 1404, 1405, 1406, 1407, 1408, 1409, 1410, 1411, 1412, 1413, 1414, 1415, 1416, 1417, 1418, 1419, 1420, 1421, 1422, 1423, 1424, 1425, 1426, 1427, 1428, 1429, 1430, 1431, 1432, 1433, 1434, 1435, 1436, 1437, 1438, 1439, 1440, 1441, 1442, 1443, 1444, 1445, 1446, 1447, 1448, 1449, 1450, 1451, 1452, 1453, 1454, 1455, 1456, 1457, 1458, 1459, 1460, 1461, 1462, 1463, 1464, 1465, 1466, 1467, 1468, 1469, 1470, 1471, 1472, 1473, 1474, 1475, 1476, 1477, 1478, 1479, 1480, 1481, 1482, 1483, 1484, 1485, 1486, 1487, 1488, 1489, 1490, 1491, 1492, 1493, 1494, 1495, 1496, 1497, 1498, 1499, 1500, 1501, 1502]
score = [(46, 1, 0), (19, 0, 1), (35, 1, 0), (25, 2, 1), (0, 3, 0), (20, 0, 2), (34, 2, 0), (39, 1, 0), (6, 3, 0), (0, 1, 0), (69, 0, 0), (9, 2, 0), (18, 0, 2), (46, 0, 1), (11, 0, 2), (25, 1, 0), (34, 0, 3), (47, 1, 0), (2, 3, 0), (8, 0, 2), (2, 0, 0), (27, 2, 0), (42, 1, 0), (35, 1, 0), (34, 1, 0), (61, 0, 0), (62, 0, 0), (43, 0, 0), (1, 2, 0), (32, 1, 2), (35, 0, 1), (39, 0, 3), (37, 1, 0), (39, 0, 0), (82, 0, 3), (74, 1, 0), (33, 2, 3), (71, 1, 0), (7, 2, 0), (42, 1, 0), (78, 0, 0), (27, 1, 0), (50, 0, 1), (6, 4, 2), (59, 0, 1), (4, 4, 0), (8, 2, 2), (15, 1, 1), (33, 1, 0), (68, 1, 0), (34, 1, 2), (17, 2, 0), (83, 1, 0), (31, 2, 1), (17, 2, 3), (91, 0, 0), (67, 0, 0), (76, 0, 0), (22, 2, 0), (31, 0, 1), (27, 0, 1), (26, 2, 0), (9, 3, 1), (43, 1, 1), (6, 1, 2), (52, 0, 0), (48, 0, 0), (26, 0, 1), (50, 0, 0), (46, 0, 2), (47, 1, 0), (21, 3, 3), (10, 2, 0), (33, 0, 1), (48, 1, 0), (7, 1, 1), (42, 0, 1), (1, 2, 2), (82, 0, 1), (24, 0, 0), (28, 1, 0), (0, 0, 0), (14, 0, 3), (80, 0, 3), (38, 0, 0), (16, 0, 3), (14, 2, 0), (26, 1, 0), (17, 2, 2), (24, 2, 0), (42, 0, 0), (42, 0, 1), (47, 1, 0), (8, 1, 1), (7, 2, 1), (16, 3, 1), (46, 0, 0), (61, 0, 0), (67, 0, 1), (36, 0, 0), (15, 1, 0), (48, 0, 1), (75, 1, 1), (13, 0, 0), (32, 0, 1), (72, 1, 2), (45, 1, 0), (85, 0, 1), (5, 0, 0), (88, 0, 0), (12, 2, 0), (60, 1, 0), (42, 0, 3), (15, 3, 0), (12, 0, 1), (46, 0, 1), (1, 1, 2), (35, 2, 0), (49, 0, 1), (24, 0, 0), (33, 0, 0), (18, 1, 2), (39, 0, 1), (26, 2, 1), (35, 0, 0), (42, 0, 0), (15, 3, 2), (19, 1, 0), (14, 2, 0), (32, 0, 2), (19, 1, 0), (44, 1, 1), (48, 0, 0), (84, 0, 0), (17, 3, 0), (14, 2, 1), (14, 0, 1), (37, 0, 1), (22, 2, 0), (26, 0, 0), (26, 1, 0), (15, 2, 0), (32, 1, 1), (41, 0, 0), (10, 4, 1), (89, 0, 0), (30, 2, 0), (10, 1, 1), (19, 1, 0), (32, 1, 0), (77, 0, 0), (78, 1, 0), (63, 0, 0), (79, 0, 2), (35, 2, 1), (87, 0, 0), (28, 2, 0), (11, 2, 1), (5, 1, 1), (40, 0, 2), (80, 0, 1), (72, 1, 2), (21, 1, 0), (2, 1, 3), (22, 0, 0), (15, 2, 2), (11, 4, 0), (28, 0, 0), (85, 1, 1), (39, 0, 1), (10, 0, 1), (79, 0, 0), (42, 0, 1), (21, 1, 1), (15, 0, 0), (71, 0, 1), (20, 0, 0), (29, 1, 0), (5, 0, 1), (11, 2, 0), (7, 3, 1), (38, 0, 0), (49, 0, 3), (10, 3, 3), (80, 1, 0), (54, 0, 0), (18, 2, 2), (47, 0, 0), (32, 1, 3), (69, 0, 0), (48, 0, 2), (13, 1, 1), (89, 1, 0), (22, 2, 0), (0, 4, 0), (27, 0, 0), (19, 3, 0), (0, 2, 3), (11, 1, 2), (48, 1, 2), (15, 3, 0), (34, 1, 0), (41, 0, 2), (4, 3, 0), (17, 0, 0), (38, 0, 0), (32, 1, 0), (39, 1, 1), (29, 1, 0), (38, 0, 1), (70, 0, 1), (21, 3, 0), (5, 3, 0), (19, 2, 0), (0, 1, 0), (29, 2, 2), (21, 1, 0), (61, 0, 2), (90, 0, 0), (12, 2, 1), (45, 0, 2), (45, 0, 2), (51, 0, 3), (9, 1, 0), (50, 0, 1), (28, 0, 0), (41, 0, 0), (2, 4, 0), (79, 1, 1), (47, 1, 1), (30, 1, 0), (13, 3, 2), (31, 0, 1), (46, 1, 0), (44, 1, 2), (34, 0, 1), (28, 1, 0), (10, 1, 0), (5, 4, 1), (78, 1, 0), (31, 0, 0), (44, 0, 0), (46, 1, 2), (5, 0, 0), (25, 2, 0), (62, 0, 0), (33, 1, 0), (10, 3, 0), (65, 0, 2), (30, 1, 3), (7, 1, 0), (37, 1, 0), (41, 0, 0), (24, 0, 0), (70, 1, 0), (4, 1, 2), (20, 2, 2), (82, 0, 1), (1, 0, 0), (38, 1, 1), (64, 1, 0), (32, 1, 1), (17, 2, 2), (17, 2, 1), (86, 0, 2), (7, 3, 0), (4, 0, 1), (72, 0, 0), (1, 1, 2), (35, 1, 1), (43, 0, 2), (48, 0, 0), (20, 1, 1), (40, 1, 0), (69, 0, 1), (52, 1, 1), (78, 0, 2), (0, 4, 0), (19, 2, 2), (25, 2, 0), (44, 1, 3), (43, 1, 3), (37, 1, 1), (23, 2, 0), (60, 1, 0), (47, 0, 0), (27, 2, 2), (16, 2, 0), (8, 4, 0), (39, 1, 0), (1, 2, 2), (10, 2, 0), (25, 0, 2), (11, 2, 0), (9, 2, 0), (58, 1, 1), (31, 1, 0), (49, 0, 1), (35, 2, 0), 
(48, 1, 1), (5, 1, 2), (24, 1, 0), (22, 3, 1), (31, 2, 0), (50, 1, 2), (18, 1, 0), (44, 1, 0), (1, 1, 2), (43, 1, 1), (13, 0, 0), (82, 0, 0), (1, 0, 2), (79, 1, 1), (82, 1, 0), (65, 0, 0), (42, 1, 0), (34, 0, 1), (0, 2, 2), (8, 1, 1), (37, 0, 0), (40, 1, 0), (44, 1, 0), (11, 1, 0), (37, 3, 0), (10, 3, 1), (3, 2, 0), (0, 1, 0), (41, 1, 1), (14, 1, 0), (5, 1, 0), (76, 1, 1), (4, 2, 1), (28, 2, 0), (11, 0, 1), (39, 1, 0), (13, 2, 0), (44, 2, 0), (74, 0, 2), (12, 1, 1), (24, 2, 1), (42, 0, 0), (37, 0, 2), (40, 1, 0), (30, 2, 0), (11, 3, 0), (5, 2, 0), (13, 1, 1), (67, 0, 2), (46, 1, 1), (26, 2, 1), (81, 0, 0), (68, 1, 2), (69, 0, 2), (89, 0, 1), (71, 0, 0), (20, 2, 2), (7, 2, 0), (76, 0, 2), (14, 2, 0), (65, 0, 0), (37, 1, 2), (15, 2, 0), (25, 1, 0), (22, 3, 0), (37, 0, 0), (76, 0, 3), (33, 0, 0), (23, 2, 0), (62, 0, 0), (7, 0, 0), (86, 0, 0), (40, 1, 1), (32, 2, 2), (21, 0, 1), (41, 1, 0), (79, 0, 2), (25, 2, 0), (76, 0, 1), (62, 1, 1), (24, 0, 2), (64, 0, 1), (34, 0, 0), (34, 2, 1), (41, 1, 1), (27, 2, 1), (36, 1, 0), (17, 1, 0), (19, 1, 0), (80, 0, 0), (78, 0, 0), (18, 2, 1), (19, 2, 2), (28, 0, 1), (39, 0, 0), (11, 2, 1), (4, 2, 0), (27, 2, 0), (67, 1, 0), (46, 0, 0), (49, 0, 0), (44, 1, 0), (19, 1, 0), (43, 1, 0), (39, 0, 0), (30, 2, 1), (34, 1, 0), (42, 1, 0), (55, 1, 0), (55, 0, 1), (72, 1, 1), (48, 1, 0), (33, 0, 0), (7, 1, 0), (21, 1, 2), (33, 2, 1), (4, 1, 2), (89, 0, 0), (2, 1, 0), (34, 1, 0), (23, 1, 2), (28, 2, 0), (85, 0, 1), (76, 0, 1), (48, 0, 0), (27, 2, 0), (44, 1, 2), (45, 0, 0), (42, 0, 0), (20, 2, 0), (64, 1, 0), (63, 0, 0), (23, 2, 1), (81, 0, 1), (54, 0, 0), (54, 1, 0), (64, 0, 2), (38, 1, 0), (59, 0, 1), (30, 0, 0), (82, 0, 0), (13, 2, 0), (80, 0, 0), (74, 0, 2), (44, 1, 0), (62, 0, 0), (6, 3, 0), (89, 0, 0), (23, 2, 0), (10, 3, 1), (4, 0, 0), (2, 2, 0), (86, 0, 0), (46, 1, 1), (39, 0, 2), (49, 0, 1), (47, 1, 0), (1, 2, 0), (20, 2, 0), (4, 1, 2), (29, 0, 1), (52, 0, 0), (45, 0, 0), (22, 1, 0), (57, 0, 2), (20, 0, 0), (7, 1, 3), (19, 2, 1), (24, 1, 1), (13, 3, 1), (16, 1, 0), (4, 2, 2), (33, 2, 2), (56, 0, 2), (77, 1, 1), (35, 2, 1), (89, 1, 2), (8, 3, 1), (30, 1, 1), (2, 4, 0), (16, 2, 1), (35, 0, 0), (15, 1, 2), (56, 0, 2), (3, 4, 0), (83, 0, 1), (28, 2, 3), (25, 1, 0), (41, 0, 0), (69, 0, 0), (49, 0, 0), (3, 0, 0), (74, 0, 0), (73, 1, 1), (11, 2, 0), (3, 4, 0), (1, 3, 0), (49, 0, 0), (79, 0, 0), (25, 0, 0), (85, 0, 1), (39, 0, 0), (5, 1, 1), (30, 2, 0), (3, 0, 0), (42, 0, 2), (19, 2, 1), (37, 1, 0), (48, 1, 0), (10, 2, 0), (23, 2, 3), (47, 0, 0), (32, 0, 2), (30, 1, 1), (23, 2, 1), (47, 1, 0), (16, 2, 0), (0, 2, 1), (65, 0, 3), (8, 2, 0), (27, 1, 2), (27, 1, 1), (25, 2, 0), (28, 1, 2), (47, 0, 0), (11, 0, 0), (15, 2, 1), (49, 0, 1), (20, 2, 1), (18, 2, 0), (45, 0, 2), (21, 0, 3), (27, 2, 0), (45, 1, 2), (26, 2, 1), (66, 1, 3), (43, 1, 2), (67, 1, 1), (28, 0, 0), (77, 1, 1), (42, 0, 0), (11, 0, 2), (25, 1, 1), (14, 2, 1), (25, 1, 0), (23, 1, 0), (3, 0, 0), (71, 1, 1), (94, 0, 0), (52, 1, 0), (33, 1, 0), (8, 2, 0), (44, 0, 1), (40, 0, 0), (5, 3, 0), (12, 2, 0), (26, 0, 0), (23, 3, 1), (8, 1, 0), (51, 1, 0), (29, 1, 2), (1, 4, 2), (77, 1, 1), (0, 0, 0), (33, 0, 2), (89, 1, 3), (22, 0, 2), (55, 0, 2), (30, 0, 1), (28, 0, 3), (68, 1, 2), (48, 0, 0), (30, 1, 2), (21, 1, 1), (32, 2, 1), (7, 2, 0), (45, 0, 2), (10, 4, 0), (46, 1, 0), (44, 1, 0), (2, 3, 1), (27, 1, 0), (55, 1, 1), (39, 0, 0), (19, 0, 2), (27, 1, 1), (78, 0, 0), (80, 0, 0), (22, 2, 2), (27, 2, 0), (53, 0, 3), (42, 0, 2), (41, 1, 0), (4, 3, 0), (29, 0, 1), (59, 0, 1), (3, 0, 2), (7, 2, 1), (13, 1, 1), (10, 4, 
0), (17, 3, 0), (1, 4, 0), (26, 1, 2), (87, 0, 0), (23, 0, 1), (45, 0, 0), (26, 2, 0), (26, 1, 0), (54, 0, 0), (43, 0, 0), (25, 0, 0), (16, 1, 1), (21, 2, 2), (40, 0, 1), (81, 0, 0), (16, 1, 1), (19, 2, 1), (83, 0, 0), (4, 0, 0), (19, 1, 0), (21, 1, 0), (7, 1, 0), (44, 0, 1), (8, 2, 0), (40, 0, 0), (47, 0, 0), (17, 2, 0), (20, 0, 2), (32, 2, 1), (10, 1, 2), (19, 1, 0), (14, 0, 3), (23, 0, 0), (66, 1, 1), (73, 1, 2), (48, 1, 0), (62, 1, 1), (9, 1, 0), (22, 0, 0), (87, 0, 0), (15, 2, 0), (21, 2, 0), (48, 1, 0), (11, 0, 0), (17, 2, 0), (3, 4, 1), (19, 0, 0), (62, 1, 2), (35, 0, 0), (8, 3, 0), (16, 2, 0), (5, 0, 0), (35, 0, 0), (2, 3, 0), (9, 3, 2), (34, 0, 0), (10, 1, 0), (17, 2, 0), (14, 1, 0), (11, 2, 1), (6, 2, 2), (23, 0, 0), (22, 3, 2), (12, 2, 0), (19, 0, 1), (68, 0, 0), (22, 2, 1), (40, 1, 0), (29, 2, 2), (17, 2, 0), (40, 1, 1), (26, 1, 0), (38, 1, 1), (2, 2, 2), (9, 2, 2), (34, 0, 1), (56, 0, 0), (20, 3, 0), (2, 0, 0), (46, 0, 2), (38, 0, 0), (38, 1, 0), (29, 0, 0), (25, 0, 1), (79, 1, 1), (0, 4, 1), (72, 1, 0), (1, 0, 0), (37, 0, 0), (25, 1, 0), (47, 1, 1), (5, 1, 1), (29, 2, 2), (14, 2, 2), (26, 2, 0), (47, 1, 1), (21, 1, 1), (87, 0, 0), (31, 1, 0), (19, 1, 0), (25, 1, 1), (34, 1, 0), (12, 1, 1), (42, 1, 0), (7, 0, 2), (8, 2, 0), (89, 0, 3), (79, 0, 0), (14, 2, 0), (38, 0, 0), (42, 1, 0), (26, 2, 1), (45, 0, 0), (16, 1, 1), (60, 0, 0), (8, 1, 0), (4, 1, 0), (29, 1, 2), (1, 2, 0), (71, 1, 0), (60, 0, 1), (82, 0, 0), (39, 1, 1), (20, 2, 1), (29, 1, 2), (82, 1, 0), (34, 1, 3), (44, 1, 1), (24, 0, 0), (5, 0, 0), (0, 2, 1), (24, 1, 1), (7, 0, 2), (28, 2, 0), (46, 0, 0), (9, 0, 1), (45, 1, 1), (16, 0, 0), (12, 3, 1), (80, 0, 1), (19, 2, 1), (21, 2, 1), (43, 0, 1), (43, 0, 2), (83, 0, 1), (3, 2, 0), (39, 1, 1), (34, 1, 2), (7, 4, 1), (29, 0, 3), (74, 0, 1), (65, 0, 1), (37, 1, 3), (49, 0, 0), (38, 1, 2), (25, 1, 0), (25, 1, 0), (35, 2, 0), (34, 2, 0), (52, 0, 2), (19, 3, 0), (45, 1, 1), (21, 1, 0), (23, 2, 0), (17, 1, 0), (13, 1, 1), (24, 1, 0), (69, 1, 0), (0, 2, 0), (15, 0, 1), (49, 0, 1), (40, 1, 0), (1, 0, 0), (4, 4, 0), (16, 1, 0), (2, 4, 2), (6, 4, 1), (61, 0, 2), (27, 2, 0), (23, 2, 2), (44, 0, 2), (32, 1, 2), (1, 4, 0), (65, 0, 0), (20, 2, 0), (23, 0, 0), (28, 2, 0), (47, 1, 1), (36, 0, 0), (2, 0, 0), (48, 1, 3), (14, 1, 1), (21, 2, 0), (35, 0, 0), (28, 0, 0), (35, 2, 2), (11, 3, 1), (17, 0, 1), (25, 1, 0), (13, 0, 3), (0, 3, 2), (19, 2, 0), (43, 1, 0), (42, 0, 1), (58, 0, 1), (40, 1, 2), (37, 1, 1), (57, 0, 2), (27, 1, 0), (33, 1, 1), (22, 0, 0), (37, 0, 0), (39, 0, 2), (21, 1, 0), (49, 0, 0), (78, 0, 0), (77, 0, 0), (29, 0, 0), (2, 2, 0), (40, 0, 2), (1, 1, 1), (15, 3, 0), (69, 1, 1), (24, 0, 3), (29, 1, 1), (77, 0, 0), (30, 2, 1), (31, 0, 0), (45, 0, 0), (1, 0, 0), (40, 0, 0), (1, 0, 1), (35, 0, 0), (56, 1, 2), (88, 0, 0), (29, 2, 0), (34, 2, 1), (24, 0, 1), (47, 1, 2), (71, 0, 1), (11, 0, 1), (22, 2, 0), (9, 0, 0), (2, 0, 1), (15, 3, 1), (58, 0, 0), (16, 0, 0), (46, 1, 0), (11, 0, 2), (88, 0, 2), (20, 0, 1), (47, 1, 0), (19, 2, 1), (24, 1, 0), (31, 0, 2), (0, 4, 0), (46, 0, 2), (2, 3, 1), (33, 2, 0), (11, 4, 0), (42, 1, 1), (35, 2, 0), (23, 1, 1), (47, 0, 1), (75, 0, 3), (30, 2, 0), (12, 0, 2), (11, 1, 1), (32, 2, 0), (7, 3, 1), (41, 1, 1), (3, 1, 0), (36, 0, 0), (17, 3, 0), (16, 1, 0), (26, 1, 0), (8, 2, 0), (7, 2, 0), (19, 0, 1), (9, 2, 2), (22, 2, 0), (5, 0, 0), (15, 2, 2), (45, 0, 2), (39, 0, 0), (2, 3, 0), (43, 0, 0), (44, 0, 3), (20, 2, 0), (32, 2, 1), (43, 0, 0), (80, 1, 0), (47, 0, 1), (62, 0, 1), (0, 1, 0), (20, 0, 1), (28, 2, 0), (27, 1, 0), (24, 2, 0), (77, 1, 1), 
(23, 1, 3), (49, 1, 1), (47, 0, 2), (82, 0, 0), (72, 0, 2), (9, 0, 0), (52, 1, 0), (50, 3, 1), (15, 2, 0), (20, 3, 1), (11, 1, 0), (48, 0, 0), (0, 2, 1), (14, 2, 1), (20, 0, 2), (20, 2, 2), (70, 0, 0), (77, 0, 0), (89, 0, 0), (16, 2, 1), (30, 1, 1), (6, 1, 1), (26, 2, 1), (46, 0, 0), (48, 1, 1), (20, 2, 2), (13, 1, 1), (13, 1, 2), (6, 4, 2), (38, 1, 1), (5, 2, 0), (3, 3, 0), (32, 2, 0), (22, 3, 0), (71, 0, 0), (33, 0, 2), (48, 0, 0), (35, 0, 0), (32, 1, 0), (4, 3, 0), (37, 0, 0), (30, 1, 0), (78, 0, 0), (49, 0, 1), (0, 0, 1), (24, 2, 1), (48, 1, 1), (35, 0, 1), (6, 2, 0), (17, 0, 0), (42, 0, 0), (45, 0, 1), (70, 0, 0), (30, 0, 0), (42, 1, 0), (43, 0, 0), (46, 0, 1), (24, 1, 1), (3, 1, 0), (29, 1, 1), (14, 2, 0), (41, 0, 2), (80, 0, 1), (31, 2, 0), (20, 2, 0), (29, 0, 0), (4, 1, 2), (39, 1, 0), (17, 1, 0), (28, 2, 1), (83, 0, 1), (12, 2, 1), (27, 2, 0), (70, 0, 1), (28, 0, 1), (5, 2, 0), (45, 1, 3), (12, 1, 2), (40, 0, 1), (54, 2, 1), (0, 3, 0), (7, 1, 1), (49, 0, 0), (8, 3, 1), (23, 3, 1), (1, 2, 0), (54, 0, 2), (39, 0, 0), (4, 3, 1), (38, 1, 1), (23, 1, 0), (45, 1, 1), (23, 0, 0), (48, 1, 0), (49, 1, 0), (27, 2, 0), (45, 0, 0), (17, 0, 1), (10, 0, 0), (27, 1, 1), (37, 0, 0), (27, 0, 1), (84, 1, 3), (37, 0, 0), (43, 0, 0), (72, 0, 0), (3, 4, 0), (41, 0, 0), (24, 1, 0), (9, 1, 1), (48, 1, 2), (37, 1, 1), (27, 2, 0), (65, 1, 0), (18, 0, 1), (14, 1, 0), (28, 0, 0), (2, 3, 1), (43, 0, 1), (37, 1, 0), (34, 0, 1), (24, 0, 1), (15, 0, 2), (20, 1, 1), (41, 0, 1), (5, 0, 1), (31, 0, 2), (43, 0, 1), (1, 1, 2), (7, 2, 2), (62, 0, 0), (2, 0, 0), (24, 2, 0), (30, 0, 2), (22, 2, 0), (50, 0, 2), (11, 2, 3), (31, 2, 0), (8, 0, 2), (41, 0, 0), (42, 1, 0), (24, 1, 1), (19, 0, 2), (43, 0, 0), (81, 0, 1), (48, 1, 1), (10, 2, 0), (21, 2, 1), (35, 2, 1), (28, 1, 0), (37, 1, 0), (26, 1, 0), (9, 0, 2), (55, 0, 0), (31, 0, 0), (14, 2, 0), (32, 0, 0), (85, 0, 2), (46, 1, 0), (28, 0, 0), (52, 1, 0), (76, 0, 1), (9, 2, 0), (3, 4, 3), (39, 1, 1), (2, 4, 0), (33, 2, 0), (78, 1, 2), (58, 0, 1), (23, 3, 0), (34, 1, 1), (15, 0, 0), (48, 1, 0), (76, 1, 0), (28, 2, 0), (0, 2, 3), (33, 0, 0), (15, 2, 0), (3, 2, 0), (14, 1, 1), (29, 0, 0), (87, 0, 0), (9, 2, 1), (9, 1, 0), (44, 1, 1), (37, 1, 0), (17, 2, 1), (7, 0, 0), (9, 0, 0), (39, 0, 2), (39, 0, 0), (5, 3, 0), (45, 0, 0), (30, 0, 0), (19, 2, 0), (47, 0, 1), (32, 0, 2), (12, 3, 1), (33, 1, 1), (64, 0, 2), (45, 1, 2), (31, 2, 0), (18, 2, 0), (28, 1, 0), (39, 0, 0), (18, 2, 0), (9, 0, 0), (46, 0, 0), (11, 0, 2), (49, 0, 2), (16, 0, 0), (27, 0, 2), (47, 1, 1), (65, 0, 3), (36, 1, 0), (24, 2, 0), (41, 0, 1), (44, 1, 0), (33, 2, 0), (27, 1, 0), (14, 0, 2), (41, 0, 1), (62, 0, 0), (36, 0, 2), (21, 2, 0), (79, 0, 1), (11, 2, 0), (42, 0, 0), (4, 1, 0), (28, 2, 0), (20, 0, 1), (44, 1, 2), (36, 1, 2), (84, 0, 2), (41, 0, 1), (38, 1, 0), (34, 0, 0), (23, 2, 0), (23, 2, 0), (44, 1, 1), (2, 1, 1), (23, 1, 1), (55, 0, 2), (44, 1, 1), (43, 0, 1), (30, 2, 0), (43, 0, 0), (39, 0, 0), (82, 0, 0), (31, 1, 0), (14, 0, 0), (40, 0, 2), (35, 1, 0), (23, 0, 2), (2, 2, 0), (12, 1, 2), (77, 0, 2), (59, 0, 0), (28, 1, 0), (54, 0, 0), (41, 0, 0), (45, 1, 0), (28, 0, 1), (32, 1, 3), (27, 2, 2), (84, 1, 0), (49, 0, 3), (4, 3, 2), (44, 0, 2), (43, 0, 0), (12, 3, 1), (47, 0, 1), (40, 0, 0), (24, 1, 1), (0, 1, 2), (39, 4, 0), (17, 0, 0), (60, 0, 0), (32, 0, 1), (3, 0, 2), (6, 2, 1), (30, 0, 1), (26, 0, 1), (17, 3, 1), (58, 0, 0), (32, 0, 0), (13, 2, 1), (85, 1, 0), (34, 2, 0), (10, 1, 1), (82, 0, 0), (15, 0, 0), (78, 0, 1), (45, 0, 0), (25, 2, 3), (14, 0, 0), (40, 1, 1), (19, 1, 0), (5, 1, 1), (15, 2, 0), 
(9, 3, 0), (23, 2, 0), (22, 2, 0), (48, 1, 0), (0, 4, 0), (37, 1, 1), (45, 1, 0), (15, 0, 0), (42, 0, 1), (55, 0, 1), (43, 1, 1), (3, 2, 1), (32, 2, 0), (21, 2, 0), (82, 0, 0), (22, 1, 0), (79, 1, 0), (2, 4, 0), (88, 1, 1), (32, 1, 0), (32, 0, 2), (14, 2, 0), (6, 3, 2), (2, 2, 1), (49, 0, 1), (49, 0, 2), (17, 0, 2), (33, 1, 3), (23, 1, 1), (33, 0, 2), (15, 2, 0), (33, 2, 0), (34, 2, 0), (48, 1, 0), (46, 0, 0), (18, 2, 0), (28, 1, 0), (2, 2, 0), (32, 1, 0), (9, 0, 0), (70, 0, 0), (17, 2, 0), (32, 2, 2), (79, 1, 0), (78, 0, 0), (36, 0, 0), (40, 0, 1), (64, 1, 3), (25, 0, 0), (23, 3, 1), (65, 0, 3), (3, 2, 0), (21, 2, 1), (12, 1, 2), (12, 2, 0), (3, 1, 0), (34, 1, 0), (70, 1, 2), (30, 0, 0), (4, 2, 0), (23, 1, 3), (6, 4, 0), (4, 4, 0), (22, 3, 0), (9, 0, 0), (89, 0, 0), (25, 2, 1), (39, 1, 1), (46, 1, 1), (74, 1, 0), (12, 2, 0), (3, 0, 0), (24, 0, 0), (44, 0, 0), (7, 3, 3), (11, 1, 0), (27, 0, 0), (6, 4, 2), (71, 1, 1), (8, 0, 1), (33, 0, 0), (1, 2, 1), (49, 0, 2), (35, 2, 0), (39, 0, 1), (5, 2, 0), (25, 0, 0), (54, 0, 1), (35, 1, 1), (5, 1, 2), (46, 0, 0), (12, 2, 0), (25, 0, 0), (84, 0, 0), (9, 2, 3), (19, 1, 1), (24, 2, 2), (38, 0, 1), (15, 1, 2), (66, 0, 1), (14, 3, 1), (64, 0, 0), (8, 4, 0), (87, 0, 0), (2, 1, 1), (47, 1, 0), (33, 2, 2), (35, 0, 1), (45, 1, 3), (31, 1, 0), (46, 0, 0), (2, 3, 0), (36, 1, 1), (14, 3, 3), (79, 0, 0), (4, 4, 1), (29, 0, 2), (14, 2, 0), (49, 0, 0), (20, 1, 0), (14, 0, 0), (12, 2, 3), (47, 0, 0), (20, 3, 2), (36, 0, 3), (36, 0, 0), (41, 1, 0), (8, 4, 0), (39, 1, 1), (32, 0, 1), (3, 2, 2), (35, 1, 0), (13, 2, 0), (10, 0, 0), (35, 1, 0), (77, 0, 0), (48, 0, 1), (40, 0, 1), (32, 1, 0), (12, 2, 0), (43, 0, 0), (30, 1, 0), (20, 0, 1), (22, 1, 1), (1, 2, 1), (5, 1, 0), (36, 2, 0), (76, 0, 1), (8, 1, 0), (30, 2, 0), (9, 0, 1), (34, 2, 0), (13, 2, 1), (39, 0, 0), (26, 0, 0), (32, 0, 0), (27, 1, 1), (68, 0, 1), (6, 4, 0), (36, 0, 1), (16, 2, 0), (58, 0, 0), (40, 0, 1), (23, 0, 1), (16, 3, 0), (43, 0, 0), (39, 1, 1), (26, 0, 0), (48, 0, 2), (53, 0, 0), (37, 1, 1), (47, 0, 1), (34, 2, 2), (17, 0, 0), (25, 1, 3), (60, 0, 0), (42, 1, 0), (13, 2, 0), (32, 2, 3), (14, 3, 0), (31, 1, 1), (10, 2, 2), (75, 0, 0), (40, 0, 1), (33, 0, 1), (85, 1, 0), (30, 0, 2), (44, 1, 0), (34, 1, 1), (43, 1, 0), (14, 2, 0), (49, 1, 1), (53, 0, 0), (88, 1, 0), (30, 2, 1), (17, 0, 1), (5, 2, 1), (12, 2, 1), (24, 0, 1), (9, 1, 0), (44, 1, 0), (34, 0, 0), (33, 2, 3), (41, 0, 0), (8, 4, 0), (48, 1, 0), (82, 0, 0), (33, 0, 0), (8, 3, 0), (15, 1, 1), (23, 3, 0), (29, 0, 2), (35, 0, 1), (43, 1, 1), (29, 1, 2), (4, 2, 2), (1, 0, 1), (20, 3, 0), (53, 1, 0), (30, 2, 0), (81, 0, 2), (18, 1, 2), (27, 2, 2), (42, 1, 0), (60, 0, 0), (82, 0, 0), (37, 0, 0), (52, 0, 0), (30, 2, 1), (69, 1, 0), (30, 2, 1), (2, 4, 0), (60, 0, 0), (36, 1, 0), (55, 1, 1), (80, 1, 0), (14, 2, 2), (33, 2, 0), (35, 1, 0), (21, 2, 0), (8, 2, 0), (43, 1, 0), (19, 2, 1), (25, 1, 0), (30, 2, 1), (17, 2, 2), (24, 0, 0), (3, 2, 0), (20, 3, 1), (18, 0, 0), (10, 4, 0), (33, 0, 0), (26, 1, 0), (1, 1, 0), (40, 1, 2), (13, 1, 1), (0, 4, 0), (24, 0, 0), (34, 0, 0), (33, 1, 0), (33, 0, 2), (0, 2, 0), (54, 1, 0), (13, 2, 0), (24, 0, 1), (46, 0, 0), (45, 0, 0), (38, 1, 0), (26, 1, 0), (44, 0, 2), (9, 0, 0), (20, 0, 1), (64, 0, 0), (3, 2, 0), (17, 1, 2), (39, 1, 0), (13, 1, 3), (14, 1, 1), (46, 1, 0), (1, 1, 2), (33, 0, 0), (39, 0, 0), (41, 1, 0), (24, 2, 0), (38, 1, 2), (58, 1, 0), (9, 2, 3), (48, 1, 0), (63, 0, 0), (26, 2, 0), (48, 0, 1), (21, 1, 2), (10, 3, 3), (6, 0, 0), (33, 2, 0), (3, 0, 3), (0, 4, 0), (84, 0, 1), (19, 0, 0), (41, 1, 1), (21, 
1, 1), (45, 0, 3), (12, 3, 0)]
###Output
_____no_output_____
###Markdown
What are the dimensions of the array created using the list 'score'?
- 1
- 2
- 3
- 4
###Code
# Type your code here
import numpy as np
np.array(score).ndim
###Output
_____no_output_____
###Markdown
How many players scored zero runs in their innings?
- 20
- 22
- 24
- 26
###Code
# Type your code here
score = np.array(score)
score[score[:,0] == 0].shape
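# shape is (rows, columns), so shape[0] is the count; equivalently: np.sum(score[:, 0] == 0)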
###Output
_____no_output_____
###Markdown
What is the maximum number of wickets taken by a player?
- 4
- 6
- 3
- 5
###Code
# Type your code here
max(score[:,1])
###Output
_____no_output_____
###Markdown
Which player scored the maximum runs?
- 193
- 219
- 548
- 1021
###Code
# Type your code here
np.array(player)[score[:,0] == np.max(score[:,0])]
###Output
_____no_output_____
###Markdown
You are asked to check for all-rounders within the given set of players. How many all-rounders are present in the provided list? An all-rounder is someone who is good at both batting and bowling. Check if the individual has taken 2 or more wickets and scored more than 35 runs in their innings.
- 5
- 6
- 7
- 8
###Code
# Type your code here
# An all-rounder here: 2 or more wickets and more than 35 runs in the innings
np.sum((score[:, 1] >= 2) & (score[:, 0] > 35))
###Output
_____no_output_____ |
3-McNulty_Project/SQL_Challenge/09_part_ii_sql_baseball.ipynb | ###Markdown
In the terminal:
- ssh -i ~/.ssh/id_rsa [email protected]
- \connect baseball (connect to the baseball database)
- \d (show tables)
- DROP TABLE master; (remove a table)
###Code
#import sqlalchemy
from sqlalchemy import create_engine
import pandas as pd
from sshtunnel import SSHTunnelForwarder
AWS_IP_ADDRESS = '54.188.60.161'
AWS_USERNAME = 'dana'
SSH_KEY_PATH = '/Users/dana/.ssh/id_rsa'
server = SSHTunnelForwarder(
AWS_IP_ADDRESS,
ssh_username=AWS_USERNAME,
ssh_pkey=SSH_KEY_PATH,
remote_bind_address=('localhost', 5432),
)
server.start()
print(server.is_active, server.is_alive, server.local_bind_port)
# Postgres username, password, and database name
POSTGRES_IP_ADDRESS = 'localhost' ## This is localhost because SSH tunnel is active
POSTGRES_PORT = str(server.local_bind_port)
POSTGRES_USERNAME = 'dana' ## CHANGE THIS TO YOUR POSTGRES USERNAME
POSTGRES_PASSWORD = 'dana' ## CHANGE THIS TO YOUR POSTGRES PASSWORD
POSTGRES_DBNAME = 'baseball'
# A long string that contains the necessary Postgres login information
postgres_str = ('postgresql://{username}:{password}@{ipaddress}:{port}/{dbname}'
.format(username=POSTGRES_USERNAME,
password=POSTGRES_PASSWORD,
ipaddress=POSTGRES_IP_ADDRESS,
port=POSTGRES_PORT,
dbname=POSTGRES_DBNAME))
# Create the connection
cnx = create_engine(postgres_str)
###Output
_____no_output_____
###Markdown
1. What was the total spent on salaries by each team, each year?
###Code
sql_query = '''SELECT teamid, yearid, SUM(salary) as total
FROM salaries
GROUP BY teamid, yearid
ORDER BY teamid;'''
pd.read_sql_query(sql_query, cnx)
###Output
_____no_output_____
###Markdown
2. What is the first and last year played for each player? Hint: Create a new table from 'Fielding.csv'. On AWS in psql, create a new table called Fielding and populate it with data:
```sql
CREATE TABLE IF NOT EXISTS Fielding (
  playerID varchar(20) NOT NULL,
  yearID int NOT NULL,
  stint int NOT NULL,
  teamID text DEFAULT NULL,
  lgID text DEFAULT NULL,
  POS text DEFAULT NULL,
  G int DEFAULT NULL,
  GS double precision DEFAULT NULL,
  InnOuts double precision DEFAULT NULL,
  PO double precision DEFAULT NULL,
  A double precision DEFAULT NULL,
  E double precision DEFAULT NULL,
  DP double precision DEFAULT NULL,
  PB double precision DEFAULT NULL,
  WP double precision DEFAULT NULL,
  SB double precision DEFAULT NULL,
  CS double precision DEFAULT NULL,
  ZR double precision DEFAULT NULL,
  PRIMARY KEY (playerID, yearID, POS, stint));
```
then load the data into the table:
```sql
COPY Fielding FROM '/home/dana/baseballdata/Fielding.csv' DELIMITER ',' CSV HEADER;
```
###Code
sql_query = '''SELECT playerid, min(yearid), max(yearid)
FROM Fielding
GROUP BY playerid;'''
pd.read_sql_query(sql_query, cnx)
###Output
_____no_output_____
###Markdown
3. Who has played the most all star games?
###Code
sql_query = '''SELECT playerID, COUNT(yearID) AS num
FROM AllstarFull
GROUP BY playerID
ORDER BY num DESC
LIMIT 1;'''
pd.read_sql_query(sql_query, cnx)
###Output
_____no_output_____
###Markdown
4. Which school has generated the most distinct players? Hint: Create a new table from 'CollegePlaying.csv'. On AWS in psql, create a new table called SchoolsPlayers and populate with data```sql CREATE TABLE IF NOT EXISTS SchoolsPlayers ( playerID varchar(20) NOT NULL, schoolID varchar(20) NOT NULL, yearMin int NOT NULL, yearMax int NOT NULL, PRIMARY KEY (playerID, schoolID));```then load the data into the table```sqlCOPY SchoolsPlayers FROM '/home/dana/baseballdata/SchoolsPlayers.csv' DELIMITER ',' CSV HEADER;```
###Code
sql_query = '''SELECT schoolID, COUNT(playerID) AS num
FROM SchoolsPlayers
GROUP BY schoolID
ORDER BY num DESC
LIMIT 1;'''
pd.read_sql_query(sql_query, cnx)
###Output
_____no_output_____
###Markdown
5. Which players have the longest career? Assume that the debut and finalGame columns comprise the start and end, respectively, of a player's career. Hint: Create a new table from 'Master.csv'. Also note that strings can be converted to dates using the DATE function and can then be subtracted from each other yielding their difference in days. On AWS in psql, create a new table called Master and populate with data```sql CREATE TABLE IF NOT EXISTS Master ( playerID varchar(20) NOT NULL, nameFirst varchar(20), nameLast varchar(20), debut varchar(20), finalGame varchar(20), PRIMARY KEY (playerID));```then load the data into the table```sqlCOPY Master FROM PROGRAM 'cut -d "," -f 1,14,15,21,22 /home/dana/baseballdata/Master.csv' DELIMITER ',' CSV HEADER;```
###Code
sql_query = '''SELECT nameFirst, nameLast, DATE(finalGame)-DATE(debut) AS diff
FROM Master
WHERE debut IS NOT NULL
GROUP BY playerID
ORDER BY diff DESC
LIMIT 1;'''
pd.read_sql_query(sql_query, cnx)
###Output
_____no_output_____
###Markdown
6. What is the distribution of debut months? Hint: Look at the DATE and EXTRACT functions.
###Code
sql_query = '''SELECT EXTRACT(MONTH from DATE(debut)) as m, COUNT(*) as num_debuts
FROM Master
WHERE debut IS NOT NULL
GROUP BY m
ORDER BY m;'''
pd.read_sql_query(sql_query, cnx)
###Output
_____no_output_____
###Markdown
7. What is the effect of table join order on mean salary for the players listed in the main (master) table? Hint: Perform two different queries, one that joins on playerID in the salary table and other that joins on the same column in the master table. You will have to use left joins for each since right joins are not currently supported with SQLalchemy.
###Code
sql_query = '''SELECT AVG(salaries.salary)
FROM Master
LEFT JOIN salaries ON Master.playerID=salaries.playerID;'''
pd.read_sql_query(sql_query, cnx)
sql_query = '''SELECT AVG(salaries.salary)
FROM salaries
LEFT JOIN Master ON Master.playerID=salaries.playerID;'''
pd.read_sql_query(sql_query, cnx)
###Output
_____no_output_____
###Markdown
JOIN order has no effect on the mean salary here: AVG() skips the NULL salaries introduced when Master is left-joined to salaries, so both queries average over the same set of non-NULL salary values.
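A quick sanity check of this claim (a sketch that reuses the `cnx` engine created above): count the non-NULL salary values produced by each join order. COUNT(column) skips NULLs just like AVG(), so equal counts mean both averages are taken over the same rows.
###Code
# Sketch: compare the number of non-NULL salary rows from each join order
count_master_first = '''SELECT COUNT(salaries.salary)
FROM Master
LEFT JOIN salaries ON Master.playerID=salaries.playerID;'''
count_salaries_first = '''SELECT COUNT(salaries.salary)
FROM salaries
LEFT JOIN Master ON Master.playerID=salaries.playerID;'''
# Equal counts confirm that AVG() is computed over the same salary values
print(pd.read_sql_query(count_master_first, cnx))
print(pd.read_sql_query(count_salaries_first, cnx))
###Output
_____no_output_____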
###Code
# close connection
server.close()
###Output
_____no_output_____ |
notebooks/interviews/Template.ipynb | ###Markdown
Behavioral
###Code
The 3 Dos are:
- Do your research
- Do emphasize impact
- Do show multiple attributes of yourself
The 3 Don’ts are:
- Don't be blameful
- Don't talk about work-life balance, perks, or compensation
- Don't ask offensive questions
Why The S.T.A.R Method Does Not Work in Data Science Interviews and What to Do Instead
https://towardsdatascience.com/why-th...
###Output
_____no_output_____
###Markdown
Projects
###Code
3 Tips:
- Package up Your Project
- Remove Useless Details
- Engage the Interviewer
Goal
More Context (if required)
Impact
Challenges
Technical
Non-technical
Interesting Findings
###Output
_____no_output_____
###Markdown
Statistics
###Code
three areas of knowledge you need for statistical interviews:
- Probability
- Hypothesis Testing
- Regression
types of questions you will be asked:
- Conceptual Questions
- Questions Involving Calculations
- Coding Questions
5 statistical concepts:
Power
Type 1 error
Type 2 error
Confidence Interval
p-value
Hypothesis testing problems
Hypothesis testing basics
Hypothesis testing + A/B testing
Hypothesis testing + SQL
Hypothesis testing + A/B testing
Which hypothesis test to use?
What is the null hypothesis?
Is the result statistically significant?
Is the result practically significant?
###Output
_____no_output_____ |
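###Markdown
A minimal sketch of working through the hypothesis-testing checklist above for an A/B test, assuming a two-proportion z-test and made-up conversion counts (all numbers below are illustrative, not taken from a real question).
###Code
import numpy as np
from scipy.stats import norm

# Hypothetical A/B test data (illustrative numbers only)
conversions = np.array([45, 60])    # conversions: control, treatment
visitors = np.array([1000, 1000])   # sample sizes: control, treatment

# Null hypothesis: both variants share the same conversion rate
p_pool = conversions.sum() / visitors.sum()
se = np.sqrt(p_pool * (1 - p_pool) * (1 / visitors[0] + 1 / visitors[1]))
z = (conversions[1] / visitors[1] - conversions[0] / visitors[0]) / se
p_value = 2 * (1 - norm.cdf(abs(z)))   # two-sided p-value

print(f"z = {z:.3f}, p-value = {p_value:.4f}")
print("Reject H0 at alpha = 0.05" if p_value < 0.05 else "Fail to reject H0")
# Practical significance: report the observed lift, not just the p-value
print(f"Observed lift: {conversions[1]/visitors[1] - conversions[0]/visitors[0]:.3%}")
###Output
_____no_output_____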
Dataset_creation/Check_genius_lyrics.ipynb | ###Markdown
Initialize the framework Mount Google Drive to load the downloaded dataset of genius lyrics
###Code
# Mount drive
from google.colab import drive
drive.mount('/content/drive')
#genius_path = '/content/drive/MyDrive/DM project - NLP lyrics generation/Dataset creation notebooks/genius raw/genius_lyrics_200-300.csv'
genius_path = '/content/drive/MyDrive/DM project - NLP lyrics generation/genius_lyrics.csv'
###Output
Mounted at /content/drive
###Markdown
Read the dataset Read the Genius dataset to display lyrics
###Code
import pandas as pd
pd.set_option("display.max_columns", None)
pd.set_option("display.max_rows", None)
pd.set_option("display.max_colwidth", None)
# Read CSV file and get columns of interest
data = pd.read_csv(genius_path)
#print("Genius dataset:")
#print(data)
# Remove missing values
data = data.dropna()
#data['lyrics'].apply(lambda x: print("Lyrics:\n\n%s\n\n%s\n" %(x, "*"*50)))
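# Print the last 50 characters of each lyric (followed by a separator) to spot-check how lyrics end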
data['lyrics'].apply(lambda x: print("Lyrics ending:\n\n%s\n\n%s\n" %(x[len(x)-50:], "*"*50)))
print("All lyrics printed. Bye!")
###Output
Output streaming truncated to the last 5000 lines.
**************************************************
Lyrics ending:
now, everything we speak is the truth
Terror Squad
**************************************************
Lyrics ending:
er's daughter anymore, mommy dear
Ooh, mommy, dear
**************************************************
Lyrics ending:
see ya
Baby, I'ma knock you down, yeah
Hmm, huh...
**************************************************
Lyrics ending:
all mine?
Honey, I love you, you're mine all mine
**************************************************
Lyrics ending:
h
Aaaah - Take my words and find your kingdom now!
**************************************************
Lyrics ending:
t's gonna be alright
Everything's gonna be alright
**************************************************
Lyrics ending:
me in...
Said how do I get to Heaven from here...
**************************************************
Lyrics ending:
ere I will go
Sooner or later
It's there I will go
**************************************************
Lyrics ending:
Hmm, If you wanna sing a song
You just sing it...
**************************************************
Lyrics ending:
now
Let it go now
If you don't know I'll tell you
**************************************************
Lyrics ending:
nowhere
We finally made it here
All that for this
**************************************************
Lyrics ending:
n
Hmm, what's going down
I think it's time we stop
**************************************************
Lyrics ending:
ow, darling
Or you will never
You'll never know...
**************************************************
Lyrics ending:
a be your mason baby
I wanna build a life with you
**************************************************
Lyrics ending:
I say amen, amen for my friends
I said amen, amen
**************************************************
Lyrics ending:
ay, I'm never coming
No, I am never coming home...
**************************************************
Lyrics ending:
titch you back up
I'll stitch you back up like new
**************************************************
Lyrics ending:
min' they're hummin'
They're hummin' Arlene
Arlene
**************************************************
Lyrics ending:
s where the story ends
Here's where the story ends
**************************************************
Lyrics ending:
know
You're that I'm missing
I only sing, is shine
**************************************************
Lyrics ending:
for
Cause this ain't over til the whiskey's gone!
**************************************************
Lyrics ending:
d enough for me good enough for me and Bobby McGee
**************************************************
Lyrics ending:
And hope you'll hear it someday, someday, someday!
**************************************************
Lyrics ending:
come around
Lonely won't come around
Yeah, hmm...
**************************************************
Lyrics ending:
d on to something
We gotta hold on to something...
**************************************************
Lyrics ending:
on, movin' on
Whoa, I'm movin' on, movin' on, yeah
**************************************************
Lyrics ending:
a my way
Yeah, yeah, yeah, yeah
Hmm, yeah, yeah...
**************************************************
Lyrics ending:
ows
Home is the place, where the green glass grows
**************************************************
Lyrics ending:
t look at yourself honey
Mhm, look at yourself, oh
**************************************************
Lyrics ending:
g but love
Bringing peace when the push turn shove
**************************************************
Lyrics ending:
t funky, get funky
Get funky, get funky, get funky
**************************************************
Lyrics ending:
Trick take a flick of the click here to face y'all
**************************************************
Lyrics ending:
for mine
I’m down for yours too now relax, unwind
**************************************************
Lyrics ending:
los tres y ahora...
Para dentro... con la gente
**************************************************
Lyrics ending:
d
Tell me...
- repeat 4X
(*music fades 'til end*)
**************************************************
Lyrics ending:
at
So many morals to keep
So many mama's will weep
**************************************************
Lyrics ending:
g
Please feel free to correct any mistakes I made*
**************************************************
Lyrics ending:
and brace yourself
Here Comes The Horns... (x 3)
**************************************************
Lyrics ending:
By merts to get me high by friends we get by, (2X)
**************************************************
Lyrics ending:
RRRR* you best to duck run and hide
You know, yeah
**************************************************
Lyrics ending:
he horns
Here comes the horns
Here comes the horns
**************************************************
Lyrics ending:
hug
Cause this one's for all the past times I dug
**************************************************
Lyrics ending:
line it up so I can feed my greed
I’m addicted...
**************************************************
Lyrics ending:
diga
Conozco a los dos
Conozco a los dos
Ives raps
**************************************************
Lyrics ending:
e from the hood
Real good like you know you should
**************************************************
Lyrics ending:
try
Workin' hard to make a difference before I die
**************************************************
Lyrics ending:
ow
I can't fight it yo I think it's time we go...
**************************************************
Lyrics ending:
t was English
No, huh-uh, that's... that's Spanish
**************************************************
Lyrics ending:
In fact tell you somethin' fool you ain't nothin'
**************************************************
Lyrics ending:
r pounds
Street doctors, daily we gotta make bread
**************************************************
Lyrics ending:
y, hold your breath, passion that'll make it last
**************************************************
Lyrics ending:
an todo Sur Califa I figure
My DNA is everlasting
**************************************************
Lyrics ending:
one
If you want some get some bad enough take some
**************************************************
Lyrics ending:
as nicely stacked hats off to all them cats
Chorus
**************************************************
Lyrics ending:
rue
But the son of Virgin Mary coming down for you
**************************************************
Lyrics ending:
hout Mexicans
It wouldn't be L.A. without Mexicans
**************************************************
Lyrics ending:
lture
It's all just a part of the hip-hop culture
**************************************************
Lyrics ending:
y mission back on the road and freak the next jam
**************************************************
Lyrics ending:
doche
Ridin most high to light up the noche
Chorus
**************************************************
Lyrics ending:
spliff lit
Everybody spread love nobody talk shit
**************************************************
Lyrics ending:
t it's in the house, to rock this shit
Chorus (x2)
**************************************************
Lyrics ending:
rums flower crushed, rushed and smell sweet
Chorus
**************************************************
Lyrics ending:
he lifestyle you know you need to get right
Chorus
**************************************************
Lyrics ending:
n time
Station Thirteen SAP mode volume above nine
**************************************************
Lyrics ending:
(in bed)
I’m inside you
You’re inside me (in bed)
**************************************************
Lyrics ending:
here does that leave me?
Where does that leave me?
**************************************************
Lyrics ending:
d meaningless
Meaningless
Meaningless
Meaningless!
**************************************************
Lyrics ending:
homeless
My heart is homeless
My heart is homeless
**************************************************
Lyrics ending:
s cold
That’s all I feel is cold
That’s all I feel
**************************************************
Lyrics ending:
one to call your own (hey girl)
Hey girl
Hey girl
**************************************************
Lyrics ending:
hard)
Praying for heaven
(I never prayed so hard)
**************************************************
Lyrics ending:
re full of foxy
You’re outer limit rad
Space Ghost
**************************************************
Lyrics ending:
zy
Call me mad
Call me mad
Call me mad
Call me mad
**************************************************
Lyrics ending:
Amerinoid
Amerinoid
Amerinoid
Amerinoid
Amerinoid
**************************************************
Lyrics ending:
, I would never, never ever write a song
About you
**************************************************
Lyrics ending:
in
Give up, give up, give in
And shoulder the sky
**************************************************
Lyrics ending:
boyfriend was a teenage hustler, amazing but true
**************************************************
Lyrics ending:
t hope
I can’t help but hope
I can’t help but hope
**************************************************
Lyrics ending:
ts out
It's over
The night's just about to explode
**************************************************
Lyrics ending:
loosing my mind
Am I Loosing my mind
Over you girl
**************************************************
Lyrics ending:
ody lose control
Hold your breath, don't let me go
**************************************************
Lyrics ending:
Something good will come
Something good will come
**************************************************
Lyrics ending:
tell your boyfriend about it
(you spin me around)
**************************************************
Lyrics ending:
I think I'm better alone, I think I'm better alone
**************************************************
Lyrics ending:
all asleep and get carried away
Carry on, carry on
**************************************************
Lyrics ending:
asted thoughts and the pain in the back of my mind
**************************************************
Lyrics ending:
isco girls
Late nights
Late nights and disco girls
**************************************************
Lyrics ending:
t should feel this pain
We'll start all over again
**************************************************
Lyrics ending:
't see in the dark
But I'm sure I'm getting closer
**************************************************
Lyrics ending:
got you where I want you
Got you where I want you
**************************************************
Lyrics ending:
with you
She's so huge
She's so huge
She's so huge
**************************************************
Lyrics ending:
like 2 be
You
Ooooooooooooooooooooooooooooooooooh
**************************************************
Lyrics ending:
uva time
Helluva helluva time
Helluva helluva time
**************************************************
Lyrics ending:
hem all
The cruelest them all
You're beautifuuuuul
**************************************************
Lyrics ending:
roove is in the middllle of your mind
Of your mind
**************************************************
Lyrics ending:
i'll call you when i get there
I'll take you there
**************************************************
Lyrics ending:
Lord
But there was no chance for me pretty boy boy
**************************************************
Lyrics ending:
u do the things you do
What you dooooooooooooooooo
**************************************************
Lyrics ending:
wanna
Someone left me in a pinball machineeeeeeee
**************************************************
Lyrics ending:
it up
Pick it up
Pick it up
Pick it up
Pick it up
**************************************************
Lyrics ending:
eeeeeee
I want a bluuuuuue liiiiiiiiiiiiiiiiiifeee
**************************************************
Lyrics ending:
razee
Dog ear, dog eat, dog eat, dog eat dog world
**************************************************
Lyrics ending:
you're goooone
They'll be sorry when you're gooone
**************************************************
Lyrics ending:
you want
I know what you want
I know what you want
**************************************************
Lyrics ending:
fraid
I see
Your face
And i know all of your fears
**************************************************
Lyrics ending:
ve you
Maaaaaartin
Maaaartin
Suuuuuuuuuperflllyyyy
**************************************************
Lyrics ending:
my
I must be losin' my
I think I'm losin' my mind
**************************************************
Lyrics ending:
sky
Helicopters in the sky
Helicopters in the sky
**************************************************
Lyrics ending:
ooohhhh mother
Oooooohhh mother
Oooooooohhh mother
**************************************************
Lyrics ending:
you will
Staay if you will
Staaaaaay if yoou will
**************************************************
Lyrics ending:
ani man says
Pakistani man says
Pakistani man says
**************************************************
Lyrics ending:
dreamin' bout youuu)
(Dreamin' bout something...)
**************************************************
Lyrics ending:
ll it to the family
Yeah
The family
Yeaaaaaaaaaaaa
**************************************************
Lyrics ending:
ready to fly
Out of the way im ready to flyyyyyyyy
**************************************************
Lyrics ending:
gone
She knows someday that ill be goneeeeeeeeeeee
**************************************************
Lyrics ending:
ing here
Oooooooh standing here
Standing hereeeeee
**************************************************
Lyrics ending:
listening
(no one's listening)
No one's listening
**************************************************
Lyrics ending:
theree? are you thereee?)
Annabelle
You're fading
**************************************************
Lyrics ending:
uuuuuuuuu
**************************************************
Lyrics ending:
little one
Sleep well (Sleep well), my little one
**************************************************
Lyrics ending:
e cabin
My tеddy bear and me
My teddy bеar and me
**************************************************
Lyrics ending:
ossibility to overcome Evil
So listen what Man did
**************************************************
Lyrics ending:
You've got to chase the early rising sun
Next day
**************************************************
Lyrics ending:
cy with our fear?!)
Where is mercy with our fear?!
**************************************************
Lyrics ending:
the game's return
'Cause tomorrow may not ever be
**************************************************
Lyrics ending:
ond in the night
Always there to make things right
**************************************************
Lyrics ending:
ld
That's full of fears and tears and "progresses"
**************************************************
Lyrics ending:
She's a girl full of plastic, she's a plastic girl
**************************************************
Lyrics ending:
ow will we stand the fire tomorrow
(Instrumental)
**************************************************
Lyrics ending:
fear my tear as blind as bride
Tearin' out of side
**************************************************
Lyrics ending:
possibility to survive
You play with all our life
**************************************************
Lyrics ending:
Proclaim my duty
To be prepared for final mistake
**************************************************
Lyrics ending:
away our fears
And find a way to bridge the years
**************************************************
Lyrics ending:
ord of creation
Here comes the master of sensation
**************************************************
Lyrics ending:
We will do more, we will do more about our get on
**************************************************
Lyrics ending:
ur breath
And know that I
Won't stay alone anymore
**************************************************
Lyrics ending:
freedom
In a land of freedom
In a land of freedom
**************************************************
Lyrics ending:
ness shines
Till the end of our time
From this day
**************************************************
Lyrics ending:
self
That can give some sense to your life in time
**************************************************
Lyrics ending:
die above
Nothing could touch me - Jeanne was love
**************************************************
Lyrics ending:
my memory
I wish she were with me
I need her here
**************************************************
Lyrics ending:
mental source
Of living and dying, of every course
**************************************************
Lyrics ending:
r there?
But they're old-fashioned clothes to wear
**************************************************
Lyrics ending:
t the ever blowin' wind
Here I come!
Start to run!
**************************************************
Lyrics ending:
e drives me on
For you're the only one I can trust
**************************************************
Lyrics ending:
from where we stand
Mind vibration
Child Migration
**************************************************
Lyrics ending:
the everlasting future
Future
Future
Future
Future
**************************************************
Lyrics ending:
nbow
Like an eagle you can glide
High --- HIGH!!!!
**************************************************
Lyrics ending:
answers to questions are gained
In joy and in pain
**************************************************
Lyrics ending:
ours sent within us
Our endless caress for freedom
**************************************************
Lyrics ending:
and handle with
No one but their selves in the way
**************************************************
Lyrics ending:
will arise
Mighty echoes will arise
Mighty Echoes
**************************************************
Lyrics ending:
your spirit still cries
Torn between fire and ice
**************************************************
Lyrics ending:
ck has come now
His final run of luck has come now
**************************************************
Lyrics ending:
light
Open our eyes
Make us wise
Pilot to paradise
**************************************************
Lyrics ending:
ou don't need nobody else
I want you all to myself
**************************************************
Lyrics ending:
nye, I'm Bound 2
Shawty, if you down, I'm down too
**************************************************
Lyrics ending:
get this understood, right now we're misunderstood
**************************************************
Lyrics ending:
in my face
Remy
Yeah, I'm lit off that Remy, yeah
**************************************************
Lyrics ending:
You would leave me alone
You would leave me alone
**************************************************
Lyrics ending:
h (yeah)
Yeah yeah (yeah)
Yeah yeah (yeah)
Oh yeah
**************************************************
Lyrics ending:
e like I need you
Girl I just wanna make you smile
**************************************************
Lyrics ending:
eek
Yeah, shawty on fleek
Shawty on fleek
Oh ah ah
**************************************************
Lyrics ending:
ng like lovin' you, lo-, lo-
Lovin' you
Lovin' you
**************************************************
Lyrics ending:
es
Shawty feeling my vibes, I be like there she go
**************************************************
Lyrics ending:
'Cause I told my bitch that I love her but I lied
**************************************************
Lyrics ending:
go get the steppin'
'Cause I can't take it no, no
**************************************************
Lyrics ending:
all the way 100
Baby, I ain't got no time for you
**************************************************
Lyrics ending:
feel like I just got to ask you
Do you notice me?
**************************************************
Lyrics ending:
ride or die
You my ride or die
You my ride or die
**************************************************
Lyrics ending:
He can't take you out, that shit is a dub for you
**************************************************
Lyrics ending:
, I like (What?)
I like girls who like girls (Hey)
**************************************************
Lyrics ending:
ed something different but I don't know what it is
**************************************************
Lyrics ending:
it, yeah, ayy
When you making 'em all jealous, ayy
**************************************************
Lyrics ending:
want, yeah
'Cause I don't wanna be a playa no more
**************************************************
Lyrics ending:
believe me
Know, that I'll always be there for you
**************************************************
Lyrics ending:
ou just need to relax
Girl, you just need to relax
**************************************************
Lyrics ending:
d?
In a long time haven't seen her
I been thinkin'
**************************************************
Lyrics ending:
with me? (yeah)
We could just stay lowkey (yeah)
**************************************************
Lyrics ending:
-oh-oh-oh
She want drip like mine
Woah-oh-oh-oh-oh
**************************************************
Lyrics ending:
hen, I said "let's ride" and you hopped in my ride
**************************************************
Lyrics ending:
y ain’t duplicate me
So all they can do is hate me
**************************************************
Lyrics ending:
end zone
With the friend zone
With the friend zone
**************************************************
Lyrics ending:
w or never, ooh
Now or never, now or never
Oh yeah
**************************************************
Lyrics ending:
ike 80 on a chain
Blowing like money ain't a thang
**************************************************
Lyrics ending:
s like you
People write songs about girls like you
**************************************************
Lyrics ending:
I know what you like, I know what you like
From me
**************************************************
Lyrics ending:
u wrong, yeah
Yeah, I want you but I took too long
**************************************************
Lyrics ending:
ou
The way you left me girl, what was a thug to do
**************************************************
Lyrics ending:
h
It's Fucked Up Cause I Only Trust You Bitch
DAMN
**************************************************
Lyrics ending:
She's the one for me, me oh (Mary Moon, Mary Moon)
**************************************************
Lyrics ending:
en though you're gone...Ooh, Marguerite Heurtin...
**************************************************
Lyrics ending:
ur love is killing me...Your love is killing me...
**************************************************
Lyrics ending:
er hear me or ever did
Oh, Molly, I still want you
**************************************************
Lyrics ending:
mily...
The perfect family...The perfect family...
**************************************************
Lyrics ending:
got to stop
This sentimental crap has got to stop
**************************************************
Lyrics ending:
y, well, that's a different story
Yeah, yeah, yeah
**************************************************
Lyrics ending:
ath
Take the oath, take the oath, take the oath...
**************************************************
Lyrics ending:
l over now, and I need you love...Like A Shadow...
**************************************************
Lyrics ending:
win, won't lose
I don't need anyone...just you...
**************************************************
Lyrics ending:
n arrow that he narrowly escaped, oh yeah, oh yeah
**************************************************
Lyrics ending:
y! Hey! (BECAUSE WE WANT TO! BECAUSE WE WANT TO!)
**************************************************
Lyrics ending:
ney to the bee, that's you for me
Yeah, you for me
**************************************************
Lyrics ending:
shining light)
And that's the day and night, babe
**************************************************
Lyrics ending:
e with me
(I know she wants, I know she wants you)
**************************************************
Lyrics ending:
hone
Cos I can do betta by myself
Better by myself
**************************************************
Lyrics ending:
st thinking
We can spend some time
Repeat and fade
**************************************************
Lyrics ending:
a plan
(I'm on my way)
Baby I can
(I'm on my way)
**************************************************
Lyrics ending:
'm saying that I'm sorry now
Saying I'm sorry now
**************************************************
Lyrics ending:
oor
I'm saying bye bye
Bye bye, bye bye baby
Gone!
**************************************************
Lyrics ending:
t
Your first love is someone that you never forget
**************************************************
Lyrics ending:
in me
Are you gonna walk the walk of life with me?
**************************************************
Lyrics ending:
holdin' on
I'm gonna be your number one
Number one
**************************************************
Lyrics ending:
I believe it's real
I believe it's real
It's real
**************************************************
Lyrics ending:
ybody swinging on the party line
So get ringing it
**************************************************
Lyrics ending:
ver been this satisfied
It's something deep inside
**************************************************
Lyrics ending:
urs
Always give my love to you
Baby all night long
**************************************************
Lyrics ending:
Baby can't you see
That no-one else
Will do for me
**************************************************
Lyrics ending:
save me from tears
I'll give it to someone special
**************************************************
Lyrics ending:
on't you give my love a try
You'll be safe with me
**************************************************
Lyrics ending:
tell who is at my door
Ring my bell, ring my bell
**************************************************
Lyrics ending:
et your mind on me the love groove
The love groove
**************************************************
Lyrics ending:
will be thinking of you
I will be dreaming of you
**************************************************
Lyrics ending:
op the world misfocusing on me
Oh I, oh I, oh I...
**************************************************
Lyrics ending:
tting you free
Livin out my fantasy
Because of you
**************************************************
Lyrics ending:
it on
Down down down down
Bring it on, bring it on
**************************************************
Lyrics ending:
you go
I'm going to find you
I'm going to find you
**************************************************
Lyrics ending:
doubt
Hoping we can work it out
What game is this?
**************************************************
Lyrics ending:
l love
Run that by me one more time
You don't feel
**************************************************
Lyrics ending:
You don't have to worry, at all
Baby just phone me
**************************************************
Lyrics ending:
w for sure
The hardest things to keep are promises
**************************************************
Lyrics ending:
Caress the gold
**************************************************
Lyrics ending:
your life
What's missing in your life is me
I know
**************************************************
Lyrics ending:
s two baby
To make a dream come true
Just take two
**************************************************
Lyrics ending:
urning back now
Wa-ha-hau..
Young hearts beat fast
**************************************************
Lyrics ending:
we had was so short
I can't stand to turn the page
**************************************************
Lyrics ending:
r, under
Oh, only you can send me under
Yeah
Under
**************************************************
Lyrics ending:
h
When you're in love
In love, in love, love, woah
**************************************************
Lyrics ending:
kissing you, kissing you
Kissing you, kissing you
**************************************************
Lyrics ending:
can murder
Only you can send me under
Under, under
**************************************************
Lyrics ending:
me
No matter how hard the time
I keep my head high
**************************************************
Lyrics ending:
en the things I seen
If you seen the things I seen
**************************************************
Lyrics ending:
p the heat
Now you got solid gold (Solid gold now)
**************************************************
Lyrics ending:
can't stop
I can't stop till it's over, can't stop
**************************************************
Lyrics ending:
a dangerous game
Watching you now
Checkin' me out
**************************************************
Lyrics ending:
u see me don't just stand there
Step into my world
**************************************************
Lyrics ending:
e this?
I used to be happy
Happy without you
(You)
**************************************************
Lyrics ending:
ss
Hopeless
Mama won't you wake me from this dream
**************************************************
Lyrics ending:
y you deserve her
Gotta shoot her down
Miss misery
**************************************************
Lyrics ending:
a girl
They can’t deliver love
In this man’s world
**************************************************
Lyrics ending:
r love for me, you never lose your love
Dear Mama
**************************************************
Lyrics ending:
our means
Cause two point four ain't enough for me
**************************************************
Lyrics ending:
there's nothing left to say or do
I hold on to you
**************************************************
Lyrics ending:
ou hold me
My darlin' just hold me
Hold me tonight
**************************************************
Lyrics ending:
u let me
Won't you let me
Cause I love to love you
**************************************************
Lyrics ending:
ch
'Cause I'm a b-b-b-b-bad, b-b-bad bitch
(Bitch)
**************************************************
Lyrics ending:
e you when you’re old (I love you when you’re old)
**************************************************
Lyrics ending:
ouch yourself
If you feel big, then touch yourself
**************************************************
Lyrics ending:
you just love someone
Until they're over the edge?
**************************************************
Lyrics ending:
rever like I was before
Like I was before you came
**************************************************
Lyrics ending:
ll looking for the high
We're all chasing paradise
**************************************************
Lyrics ending:
s world, in your eyes
It's hard to be a good woman
**************************************************
Lyrics ending:
street light
You never seem to find your way home
**************************************************
Lyrics ending:
avy
Let's go I'm ready
Get heavy heavy heavy heavy
**************************************************
Lyrics ending:
ve you
I said I love you
Baby wanna take you there
**************************************************
Lyrics ending:
hopeless
Mama won't you wake me from this dream...
**************************************************
Lyrics ending:
h, like I'm a star
So high, like I'm a star
Azucar
**************************************************
Lyrics ending:
ht up, light up
Breathe deep and put it in the sky
**************************************************
Lyrics ending:
rerrr
Errr rerr rer rer rer rer rer rerr rerrr
Ohh
**************************************************
Lyrics ending:
4X*}
Now I'm back, let me hit it (Oh) {*4X*}
- 2X
**************************************************
Lyrics ending:
)
Is this an illusion that I have in my heart
Amor
**************************************************
Lyrics ending:
scrumptious, ew, she dumb
Repeat Break
Go-go girl
**************************************************
Lyrics ending:
igh like I'm a star
So high like I'm a star
Azucar
**************************************************
Lyrics ending:
w key, I'ma keep it low key)
Gotta keep it low key
**************************************************
Lyrics ending:
ll night..."
"F-f-f-Fi-Fi-Fingazz on the track..."
**************************************************
Lyrics ending:
se this, this is where you supposed to be
So, just
**************************************************
Lyrics ending:
an muggin'
Mean muggin' (Now everybody gon' do it)
**************************************************
Lyrics ending:
e get back to a shawty going front of me, hey, hey
**************************************************
Lyrics ending:
got no shame (no shame)
No shame
No shame
No shame
**************************************************
Lyrics ending:
mama
What is it, what, what
What is it, what is it
**************************************************
Lyrics ending:
no brakes
Okay!
Man you wrecked!
I told you fool
**************************************************
Lyrics ending:
ight, oh
Baby Bash
Keith Sweat
Felli Fel
Mmm, yeah
**************************************************
Lyrics ending:
h-nah-nah, nah)
Some na na (Nah, nah-nah-nah, nah)
**************************************************
Lyrics ending:
low
Yeah I'm your shorty boo, but you already know
**************************************************
Lyrics ending:
fore)
Una cancion, cancion, cancion, de amor, amor
**************************************************
Lyrics ending:
in it
And this is MigElegante
And I'm out
Yahtzee!
**************************************************
Lyrics ending:
n
Got lost from your man, you don't wanna be found
**************************************************
Lyrics ending:
m homies, catchin' cases
Repeat Hook
Repeat Chorus
**************************************************
Lyrics ending:
hew
What it do, what it do, it's ya smokin' nephew
**************************************************
Lyrics ending:
ho
Oh she make me feel like oh wo oh oh oh
Whistle
**************************************************
Lyrics ending:
rol...)
It's outta control (It's outta control...)
**************************************************
Lyrics ending:
, what
What is it
What is it
What, what
What is it
**************************************************
Lyrics ending:
e and you to hook up
Let’s get this thing together
**************************************************
Lyrics ending:
we do
Everybody keep sellin' milo
Amor y fiesta x4
**************************************************
Lyrics ending:
and Happy P love the
And shout out she love the...
**************************************************
Lyrics ending:
ad
Money, money, money, money
Y'all ready know
**************************************************
Lyrics ending:
at Valley Jo/Vallejo
H-Town, Texas, Cali, bro
Yee!
**************************************************
Lyrics ending:
t down
Said I gotta get down
Says I gotta get down
**************************************************
Lyrics ending:
hhh) x3
Now im back let me hit it (oooohhh) x3
x2
**************************************************
Lyrics ending:
go
And keep goin'
Repeat Chorus
That's how we go
**************************************************
Lyrics ending:
when the police try to get me
Repeat Chorus Twice
**************************************************
Lyrics ending:
ight, minds wide, magic imagery, oh-ho...
Oh-ho...
**************************************************
Lyrics ending:
n' badges!"
(Gunfire & yelling)
(Laughing to fade)
**************************************************
Lyrics ending:
crazy
(Crazy, crazy, crazy)
This is so fucking bad
**************************************************
Lyrics ending:
today
"Sodom and Gomorrah? This is London, guv."
**************************************************
Lyrics ending:
at
That's true, you know?
Wadda dem-dem-dem.......
**************************************************
Lyrics ending:
- part two
I'm gonna take you to part two part two
**************************************************
Lyrics ending:
ry beatbox, let's party right now
Party right now!
**************************************************
Lyrics ending:
e not. Remember this...do not try to imitate him."
**************************************************
Lyrics ending:
America
Slipping into hell
"I got you"
"I'm sorry"
**************************************************
Lyrics ending:
and roll alright by me
I turned out a punk
CHORUS
**************************************************
Lyrics ending:
ll day, I know
And everybody needs, need a holiday
**************************************************
Lyrics ending:
t go on like this
No, I can't go on like this
Yes?
**************************************************
Lyrics ending:
state
The things that made this country great
Sony
**************************************************
Lyrics ending:
e bought it and I'm in labour"
"We are here today"
**************************************************
Lyrics ending:
nowadays can offend
Never gonna fall in love again
**************************************************
Lyrics ending:
...
Operator!! Hey, come on now! Operator!.....
**************************************************
Lyrics ending:
at
Love boat, sub attack, beware of sudden impact
**************************************************
Lyrics ending:
't you know you're driving me crazy all night long
**************************************************
Lyrics ending:
l everybody the news
And with the winter coming...
**************************************************
Lyrics ending:
There ain't no getting away
From how I feel today
**************************************************
Lyrics ending:
his about big game, don`t you know the lions tame?
**************************************************
Lyrics ending:
!
Sometimes from London town
Just play that music!
**************************************************
Lyrics ending:
ng bandit's gonna sweat
"Socrates
Goooaaaallll!!!"
**************************************************
Lyrics ending:
e
With entertainment provided by the cabbing crew"
**************************************************
Lyrics ending:
ket '58
The rock around was busting out interstate
**************************************************
Lyrics ending:
idn't have a rush like that since the Burgly riots
**************************************************
Lyrics ending:
t, ticket, ticket
Tell them
Ticket, ticket, ticket
**************************************************
Lyrics ending:
fine
No one gets ten out of ten
Lucky if it's nine
**************************************************
Lyrics ending:
and say goodnight
You play it cool she wonders why
**************************************************
Lyrics ending:
g
Help! Help! Help! Help! Help! Help! Help! Help!
**************************************************
Lyrics ending:
ahs of the milk bar, ohohoh
Hellraisers to the end
**************************************************
Lyrics ending:
pagne!
Champagne!
Champagne!
Champagne!
Champagne!
**************************************************
Lyrics ending:
stars
I think I'll write my own
So Mr. Walker said
**************************************************
Lyrics ending:
s night could last all day
Order you and take away
**************************************************
Lyrics ending:
ride
B.A.D. in the night time ride
BAD
BAD
BAD
BAD
**************************************************
Lyrics ending:
n again...
And back to stardust
We return again...
**************************************************
Lyrics ending:
e me 'cuz I know just what this world is all about
**************************************************
Lyrics ending:
ng
I would gladly sip my champagne from your shoes
**************************************************
Lyrics ending:
't find comfort in the fact that it could be worse
**************************************************
Lyrics ending:
to be free
Girls just want
A sweet, a sweet melody
**************************************************
Lyrics ending:
a stranger to yourself, be a stranger to yourself
**************************************************
Lyrics ending:
l be
Do you really love me?
Do you really love me?
**************************************************
Lyrics ending:
u love me, baby?
Do you love me 'cause I love you?
**************************************************
Lyrics ending:
be afraid
Don't worry
Your beauty shall never fade
**************************************************
Lyrics ending:
ulness must leave the room
If I ever wish you well
**************************************************
Lyrics ending:
loser
And the thought that we've never been closer
**************************************************
Lyrics ending:
hing means anything
Nothing means anything anymore
**************************************************
Lyrics ending:
since you've kissed me
Afloating away we shall be
**************************************************
Lyrics ending:
can cripple
Of Thanksgiving waves that can cripple
**************************************************
Lyrics ending:
wn
I love the unknown
He said he loves the unknown
**************************************************
Lyrics ending:
lungs
And the breath that I crave from your lungs
**************************************************
Lyrics ending:
Heaven and nature sing
And Heaven and nature sing
**************************************************
Lyrics ending:
ferent plan
Now that I'm saved I wish I was damned
**************************************************
Lyrics ending:
the moment whеn the dying ends
When thе dying ends
**************************************************
Lyrics ending:
ou how my heart sings
Whenever I look in your eyes
**************************************************
Lyrics ending:
they crash the party in your mind?
How dare they?
**************************************************
Lyrics ending:
young
But I love having fun
Takе me and you’ll see
**************************************************
Lyrics ending:
e another tree
Make another tree
Make another tree
**************************************************
Lyrics ending:
ng
Return to see
The not everything is everlasting
**************************************************
Lyrics ending:
k to reality
Drag me back to insanity
I am falling
**************************************************
Lyrics ending:
my mental walls
Can you find me, I am hidden away
**************************************************
Lyrics ending:
ientation
The Realization
I can not open this door
**************************************************
Lyrics ending:
t I am lost behind these mental walls
To die alone
**************************************************
Lyrics ending:
the darkness that calls
There is no way to forgive
**************************************************
Lyrics ending:
t like a casualty
But you ain't gettin my sympathy
**************************************************
Lyrics ending:
in the night
Just like fairy gifts gone in the sky
**************************************************
Lyrics ending:
built to do
Oh Geno, woah Geno
Oh Geno, woah Geno
**************************************************
Lyrics ending:
and explain
But you'd never see in a million years
**************************************************
Lyrics ending:
that's my story
The strongest thing I've ever seen
**************************************************
Lyrics ending:
it
Shut your fucking mouth 'til you know the truth
**************************************************
Lyrics ending:
ould you please tell me when my light turns green?
**************************************************
Lyrics ending:
h, ah-ah) Somebody help me
(Ooh-ooh, ah-ah) Oh-ooh
**************************************************
Lyrics ending:
thank you
Yes, yes, yes
More please and thank you
**************************************************
Lyrics ending:
oo, much too, much too long
Seven days without you
**************************************************
Lyrics ending:
m running
I'm burning
I wouldn't sell you anything
**************************************************
Lyrics ending:
ture don't you?
Ahh
Ooh, uh-ahh
Yeah, yeah, yeah!
**************************************************
Lyrics ending:
t's make this precious, (I think we probably will)
**************************************************
Lyrics ending:
n one of 'those' things?
Yeah, one of those things
**************************************************
Lyrics ending:
feel
Pretend you don't hear
Don't come any closer
**************************************************
Lyrics ending:
me at someone
Show me someone
Who feels like I see
**************************************************
Lyrics ending:
learn today?
I'll hear all you say
I won't go away
**************************************************
Lyrics ending:
illing thing
My national pride is a personal pride
**************************************************
Lyrics ending:
ve
You were asking me... I care, Baby. (Anyway...)
**************************************************
Lyrics ending:
es
(That's all there ever is) Oh yeah, yeah, yeah?
**************************************************
Lyrics ending:
lone
Open to suggestions, is that the way you feel
**************************************************
Lyrics ending:
I'll never stop saying your name
Here is a protest
**************************************************
Lyrics ending:
heaven
I'm in heaven
I'm in heaven when you smile
**************************************************
Lyrics ending:
he's still crying. I won't smile while he's there
**************************************************
Lyrics ending:
ce
Safe now cause your head is in the sand
Keep it
**************************************************
Lyrics ending:
it
Shut your fucking mouth til you know the truth
**************************************************
Lyrics ending:
mes I almost envy the need, but dont see the prize
**************************************************
Lyrics ending:
go for me
What don't go for me
Will not go for me
**************************************************
Lyrics ending:
tú mé san damsa fión
Anduici tú mé san damsa fión
**************************************************
Lyrics ending:
ere singin' let's get this straight from the start
**************************************************
Lyrics ending:
l deal with it myself
Ooh yeah, ooh yeah, ooh yeah
**************************************************
Lyrics ending:
m now
Let me look and see
How they?ve grown up now
**************************************************
Lyrics ending:
all the time I loved you so
I love you, I love you
**************************************************
Lyrics ending:
heaven
I'm in heaven
I'm in heaven when you smile
**************************************************
Lyrics ending:
and affection!
Brick by brick, tearing them down!
**************************************************
Lyrics ending:
e to look like this
I'll never ask again I promise
**************************************************
Lyrics ending:
r, it's over
You let your heart on my pillow, noo!
**************************************************
Lyrics ending:
tle scars
With a broken heart
And she's got it all
**************************************************
Lyrics ending:
ou're gonna crash and burn when you hit the bottom
**************************************************
Lyrics ending:
I'm still living with the ghost of you
Whoa, whoa
**************************************************
Lyrics ending:
u're all I needed, nothing less
You're so far away
**************************************************
Lyrics ending:
ht
I'm alright this time
Ye, I'm alright this time
**************************************************
Lyrics ending:
to live for
And no where to run
Your coming undone
**************************************************
Lyrics ending:
ide she's crying, follow your heart that's beating
**************************************************
Lyrics ending:
you don t stand a chance if you re messin with me
**************************************************
Lyrics ending:
want
And I'm caught in between
What's left of me..
**************************************************
Lyrics ending:
ut had it with you
I've just about had it with you
**************************************************
Lyrics ending:
Get back to where we fell...
I'm never letting go
**************************************************
Lyrics ending:
don't want to lose you, again
Not again, not again
**************************************************
Lyrics ending:
e'd stick together
But always doesn't last forever
**************************************************
Lyrics ending:
Oh, lets get the meaning, of what's been happening
**************************************************
Lyrics ending:
ty, tonight
Come on, let's take this city, tonight
**************************************************
Lyrics ending:
is last forever!
Never stay...
Run away!
Ohhhhh...
**************************************************
Lyrics ending:
o break!
I am not afraid
Cause I'm about to break!
**************************************************
Lyrics ending:
y heart that you can tell
Your perfectly worthless
**************************************************
Lyrics ending:
without you ever here
Tell me you're never wrong
**************************************************
Lyrics ending:
t give it up
Whoa just give it up, just give it up
**************************************************
Lyrics ending:
-oh, wo-oh, wo-wo-oh
Wo-oh, wo-oh, wo-oh, wo-wo-oh
**************************************************
Lyrics ending:
we've come
'Cuz baby, look how far that we've come
**************************************************
Lyrics ending:
vice, human sacrifice
We know what's good for you
**************************************************
Lyrics ending:
pa pa, mow mow
Hi yo silver, away...
Elvira
Elvira
**************************************************
Lyrics ending:
like always he's still one hundred per cent, baby
**************************************************
Lyrics ending:
s my reason for living...
There goes my everything
**************************************************
Lyrics ending:
rself away
Sweetheart don't throw yourself away...
**************************************************
Lyrics ending:
t talkin', slow walkin'
Good lookin' Mohair Sam...
**************************************************
Lyrics ending:
ecome... our sanctuary...
Our sanctuary..
Oooh....
**************************************************
Lyrics ending:
tasies
Give us back our fairytales, our fairytales
**************************************************
Lyrics ending:
my guardian angel
My guardian angel
Guardian Angel
**************************************************
Lyrics ending:
ay
But if you try, true friends will stay with you
**************************************************
Lyrics ending:
's goodbye until then goodbye my love
Fade away...
**************************************************
Lyrics ending:
seen...
They've seen the death of a cold heart...
**************************************************
Lyrics ending:
e prevailed
So we shall so the light forever again
**************************************************
Lyrics ending:
t little lies
Show me your eyes
Your angel eyes...
**************************************************
Lyrics ending:
her deadly grasp
When you'll feel her deadly grasp
**************************************************
Lyrics ending:
ows of your mind
Here are the gates beyond reality
**************************************************
Lyrics ending:
ream
I want to forget all these things I have seen
**************************************************
Lyrics ending:
die
It won't happen don't be sad
Magic fountain...
**************************************************
Lyrics ending:
e light
But you're still unaware
I sell my soul...
**************************************************
Lyrics ending:
re, a bloodbath
Revenge is what she wants, no less
**************************************************
Lyrics ending:
I know this is my fall
Because of
Three wishes...
**************************************************
Lyrics ending:
ss
I'll get out of this mess, I promise
OH, OH, OH
**************************************************
Lyrics ending:
to the diviner
For the truth you are searching for
**************************************************
Lyrics ending:
reedom
Gave us a kingdom
Our march for freedom, oh
**************************************************
Lyrics ending:
ing at the moon
...at the moon
Howling at the moon
**************************************************
Lyrics ending:
the burden
And the pain it brings
Broken wings...
**************************************************
Lyrics ending:
he field
Keepers of the field
Keepers of the field
**************************************************
Lyrics ending:
etting you free
Come inside, now the path is clear
**************************************************
Lyrics ending:
the forest
Forest..
The Lady..
Lady of the forest!
**************************************************
Lyrics ending:
g of these wandering souls
For whom the bell tolls
**************************************************
Lyrics ending:
wist in my sobriety
More than twist in my sobriety
**************************************************
Lyrics ending:
's nothing like it seem
All illusions, only dreams
**************************************************
Lyrics ending:
And my soul is free
The evil spirits is now theirs
**************************************************
Lyrics ending:
s of thy tree
And trails it's blossoms in the dust
**************************************************
Lyrics ending:
face
Demon's corpse dissolves in a blaze of flames
**************************************************
Lyrics ending:
t's your crucible
The crucible
The scourge of hell
**************************************************
Lyrics ending:
l the anger starts to rave
My hopes have been vast
**************************************************
Lyrics ending:
f time killed the flames
Blew out all the was mine
**************************************************
Lyrics ending:
My existence
My life
And my heart
Spirit
Darkness
**************************************************
Lyrics ending:
gh my veins
Elated music supports the corpse dance
**************************************************
Lyrics ending:
d
Flee my row, in unease
Filth no gain, is my doom
**************************************************
Lyrics ending:
kin will disappear
My beauty returns with my force
**************************************************
Lyrics ending:
oughts are calling
Ideas just strange devices (x2)
**************************************************
Lyrics ending:
irls
If he is not the Redeemer
Because it stays...
**************************************************
Lyrics ending:
, curse, cursed you are
No love left you can share
**************************************************
Lyrics ending:
veless, a prison, just arisen
Men remain bloodless
**************************************************
Lyrics ending:
d the wardens
Deep below, insurrection is sounding
**************************************************
Lyrics ending:
any slaugther will there pave
The way to this gate
**************************************************
Lyrics ending:
otions
Fate light as a feather
Immortal corrosions
**************************************************
Lyrics ending:
ery aim is it worth, the final goal is an illusion
**************************************************
Lyrics ending:
s
The fate was sealed
The prophecy I had to see...
**************************************************
Lyrics ending:
the hopes weird host
Unimportance will be realized
**************************************************
Lyrics ending:
this the price to pay
Overwhelming lust all I feel
**************************************************
Lyrics ending:
am no toy
My victims I will harm
Now I know it all
**************************************************
Lyrics ending:
my name
Will you all my name?
I say :
La la la...
**************************************************
Lyrics ending:
pact now it's broken
Supreme the nous, the remedy
**************************************************
Lyrics ending:
sun
Realize the menace, you won't be able to shun
**************************************************
Lyrics ending:
es
And my soul is free
The evil spirit is now ours
**************************************************
Lyrics ending:
cruel deed
Dragged down by fate, fallen into blade
**************************************************
Lyrics ending:
cateur
The maschinenmensch promotes men to creator
**************************************************
Lyrics ending:
ods
Bow down before me, mankind
We resist all odds
**************************************************
Lyrics ending:
ost
A machine, industrination
Ultimate domination!
**************************************************
Lyrics ending:
ty again begins to cleave
Evolution finally slowed
**************************************************
Lyrics ending:
truth is away
Mankind had to comply, facts astray
**************************************************
Lyrics ending:
ückt von dem Leiden
In graue Schleier sich kleiden
**************************************************
Lyrics ending:
turn around
Just walk on... and feel the wind blow
**************************************************
Lyrics ending:
e my conscience, and I fight. To keep them sane...
**************************************************
Lyrics ending:
ahh ahhh
And in the morning
Radagacuca (repeating)
**************************************************
Lyrics ending:
s innocent as the next will appear on your horizon
**************************************************
Lyrics ending:
er be alone
Hope you're happy off you go
Not alone
**************************************************
Lyrics ending:
y, it takes me back to you
It takes me back to you
**************************************************
Lyrics ending:
na see our dreams become actual
Things we live off
**************************************************
Lyrics ending:
for you even if we may seek a separate life
Oh-oh
**************************************************
Lyrics ending:
ace
In this quiet place I can give you all my time
**************************************************
Lyrics ending:
ow anything
But wherever
You are now
I’ll carry on
**************************************************
Lyrics ending:
y head, I'm in my head)
I'll make it all come true
**************************************************
Lyrics ending:
I never know
No I never know
No I never know, why
**************************************************
Lyrics ending:
e to come along and, oh I'd like to come along and
**************************************************
Lyrics ending:
een thinking to myself that this is who i gotta be
**************************************************
Lyrics ending:
by your side my dear
We will have nothing to fear
**************************************************
Lyrics ending:
u dream or do you take
Said do you dream or do you
**************************************************
Lyrics ending:
dream
Summer is like a dream
It's all just a dream
**************************************************
Lyrics ending:
on every sign so when I look into your eyes I know
**************************************************
Lyrics ending:
n times come my way
I'm takin em
So great when the
**************************************************
Lyrics ending:
w where we were going
I wonder where it went wrong
**************************************************
Lyrics ending:
e, in love, oh in love
I can't deny it I'm in love
**************************************************
Lyrics ending:
are the only one that take my mind somewhere else
**************************************************
Lyrics ending:
might be crazy
But I think that it's worth my time
**************************************************
Lyrics ending:
I tripped and instead I found a new place to begin
**************************************************
Lyrics ending:
which I never seem to find
No I am never satisfied
**************************************************
Lyrics ending:
ch me go
I can't afford
Something with you
Yeah...
**************************************************
Lyrics ending:
o navigate
Through this blue world
This blue world
**************************************************
Lyrics ending:
beginning
Yeah it's always the beginning it’s true
**************************************************
Lyrics ending:
you
Maybe in the future I could be the one for you
**************************************************
Lyrics ending:
me feel great
Your love is all that makes me feel
**************************************************
Lyrics ending:
it's not about the how but getting out of the door
**************************************************
Lyrics ending:
er
I was a daydreamer
But now I’m just lost in you
**************************************************
Lyrics ending:
, yes, for my sake I will learn from your mistakes
**************************************************
Lyrics ending:
reaking(even)
All that you can do is just find out
**************************************************
Lyrics ending:
id those flowers wilt how did that life pass me by
**************************************************
Lyrics ending:
ng now I know its the same one that makes you rise
**************************************************
Lyrics ending:
smokescreen
Everyday I see you but I'd rather not
**************************************************
Lyrics ending:
't nothing to get if you can't let it go
Let it go
**************************************************
Lyrics ending:
't let go of that love
You have always been enough
**************************************************
Lyrics ending:
y head In my head
And I won't forget what she said
**************************************************
Lyrics ending:
n some forgotten morning
Until then I have no home
**************************************************
Lyrics ending:
y and Mexico
But my baby she don't love me no more
**************************************************
Lyrics ending:
gged mile
And you know
It sure was worth the while
**************************************************
Lyrics ending:
and treat me right
Make me believe were not alone
**************************************************
Lyrics ending:
ld as a stone
Waiting for our love to take us home
**************************************************
Lyrics ending:
t used to be
Will be nothing but a memory, Anjolie
**************************************************
Lyrics ending:
down in my sin
In the gutter drinking Dixie again
**************************************************
Lyrics ending:
ooting stars
The woman I love is the woman you are
**************************************************
Lyrics ending:
re down
Playing hearts ain't the only game in town
**************************************************
Lyrics ending:
akes it hard for me to say, Desiree
Won't you stay
**************************************************
Lyrics ending:
for my pillow, let the moonlight fall on my spread
**************************************************
Lyrics ending:
this twisted big resting breathing ground anymore
**************************************************
Lyrics ending:
her love but all you know how to percive is hunger
**************************************************
Lyrics ending:
n't even go there
Passed out at the end of the bar
**************************************************
Lyrics ending:
e the difference between
The butcher and the beast
**************************************************
Lyrics ending:
s
Pocket full of nothin' and a mouth full of moans
**************************************************
Lyrics ending:
ou in a lie
Why do all the good things have to die
**************************************************
Lyrics ending:
ie come back home to me,
I love you can't you see
**************************************************
Lyrics ending:
What bad love'll do to good people like you and me
**************************************************
Lyrics ending:
your time, your love, your space, your energy (oh)
**************************************************
Lyrics ending:
energy on WTME (No, we don't)
We're gonna move on
**************************************************
Lyrics ending:
ould you leave?
Baby please answer these questions
**************************************************
Lyrics ending:
m good without you
I'm good, I'm good (*I'm good*)
**************************************************
Lyrics ending:
nd
It's the end
Take the hint
Baby, as if
Ha ha ha
**************************************************
Lyrics ending:
And I do give him the love that he wants
And I do
**************************************************
Lyrics ending:
e with me
In my heart
When the last teardrop falls
**************************************************
Lyrics ending:
fans, god!)
No, you got fans in New York, trust me
**************************************************
Lyrics ending:
l be right there
I'll be there
I'll be right there
**************************************************
Lyrics ending:
-da-d-da
D-d, da-da
(Far, farther than a dream...)
**************************************************
Lyrics ending:
e is there wherever you are
It's all inside of you
**************************************************
Lyrics ending:
ike that
No, you can't have it back silly rabbit!
**************************************************
Lyrics ending:
ights off
Boy you ugly so whoo won't you disappear
**************************************************
Lyrics ending:
your time, your love, your space, your energy (oh)
**************************************************
Lyrics ending:
Wha what?)
Duh duh da da duh da da (Haha, yeah uh)
**************************************************
Lyrics ending:
orld
I know God got somethin' special, like we did
**************************************************
Lyrics ending:
w your love,your special touch
You gave me truusst
**************************************************
Lyrics ending:
rries it's gonna be alright
(It's gon' be alright)
**************************************************
Lyrics ending:
go
Release me
Let me go
Release me
Release me
Yea
**************************************************
Lyrics ending:
you will be right next to me
Right here next to me
**************************************************
Lyrics ending:
and want to wil' out then Blaque out, Blaque Out!
**************************************************
Lyrics ending:
)
Time after time
Time after time...
(until fades)
**************************************************
Lyrics ending:
And I do give him the love that he wants
And I do
**************************************************
Lyrics ending:
Gon' Blaque Out
(Brandi)
Gonna Blaque Out
We out!
**************************************************
Lyrics ending:
ugh
I need you to fall through
(Oooh I kinda hope)
**************************************************
Lyrics ending:
ever cop nothin'
Been hustlin' a long time
Yeah...
**************************************************
Lyrics ending:
prepare for the ultimate entertainment experience
**************************************************
Lyrics ending:
I'm good without you
I'm good, I'm good, I'm good
**************************************************
Lyrics ending:
Back)
Fall Back (Fall Back)
Fall Back (Fall Back)
**************************************************
Lyrics ending:
imme your time, your love, your space, your energy
**************************************************
Lyrics ending:
make my temperature rise up
And I can't get enough
**************************************************
Lyrics ending:
eaningless
What do you think you were going to get
**************************************************
Lyrics ending:
up before you keep sayin (Whatcha sayin boo)
Oh oh
**************************************************
Lyrics ending:
, thinkin about it
Thinkin about, thinkin about it
**************************************************
Lyrics ending:
he case (Uh huh)
Two thousand and three!
Come on!
**************************************************
Lyrics ending:
the only way to live
This is the only way to live
**************************************************
Lyrics ending:
ur front doors
Fresh kills, casualties of sex wars
**************************************************
Lyrics ending:
And try to go back to the summer of the purple man
**************************************************
Lyrics ending:
t believe
That I can't leave
When I'm still in you
**************************************************
Lyrics ending:
pigeon and rip the tapestry of lies you weave
Hey!
**************************************************
Lyrics ending:
ght
Cause my head can't wait for your love tonight
**************************************************
Lyrics ending:
u again
Please
Please don't make me feel you again
**************************************************
Lyrics ending:
to be
May we meet again some day on Vega System 3
**************************************************
Lyrics ending:
um
The best years of our lives are yet to come
x2
**************************************************
Lyrics ending:
found
In my eyes
In my eyes
In my eyes
In my eyes
**************************************************
Lyrics ending:
ou got a lot of guitars, but you don't play guitar
**************************************************
Lyrics ending:
just insecurity
Because I secretly want to be her
**************************************************
Lyrics ending:
Before this love gets uglier
Than a wart on a toad
**************************************************
Lyrics ending:
n Tribecca
That's gonna be a national monument too
**************************************************
Lyrics ending:
lay me down to die
My heart is fading
And so am I
**************************************************
Lyrics ending:
those pumps
Yeah
Walk on me with those pumps
Pumps
**************************************************
Lyrics ending:
that
Do you like that? I think you like that
x2
**************************************************
Lyrics ending:
times
And raise the roof with my signature rhymes
**************************************************
Lyrics ending:
tem overload
Inside my system overload
Sweet Jesus
**************************************************
Lyrics ending:
h, huah
Huah, huah
Chinese drug dealers
Huah, huah
**************************************************
Lyrics ending:
stuck forks in our eyes
Such a horrible commotion
**************************************************
Lyrics ending:
alm tan
If sees you reading the Quran
Military man
**************************************************
Lyrics ending:
of the crime
Of choppin' up the teacher
It was him
**************************************************
Lyrics ending:
y nobody eats parsley
I'm turning my tables on you
**************************************************
Lyrics ending:
illabong, who'll come a-waltzing Matilda with me?"
**************************************************
Lyrics ending:
s?
Did the pipes play 'The Flowers o' the Forest'?
**************************************************
Lyrics ending:
first tears trickling forth
Goodbye my nancy o
Cho
**************************************************
Lyrics ending:
all cast nets in the sea
Copyright Eric Bogle
BAZ
**************************************************
Lyrics ending:
nearly over now, and now I'm easy
And now I'm easy
**************************************************
Lyrics ending:
m on my sleeve, his head hung low
As he if he knew
**************************************************
Lyrics ending:
k their bright treasures from the corners of earth
**************************************************
Lyrics ending:
what we use
When we got sod all to say. (cosumel)
**************************************************
Lyrics ending:
time he's here to stay
Words and music: eric bogle
**************************************************
Lyrics ending:
our national song, "Advance Australia", backwards!
**************************************************
Lyrics ending:
always be our shelter
May we always live in peace
**************************************************
Lyrics ending:
there must a way, there must be reason for it all
**************************************************
Lyrics ending:
g your spirit home
Ends with singing of SHOSHOLOSA
**************************************************
Lyrics ending:
give an old man's tears, & thank you for the years
**************************************************
Lyrics ending:
e have no Chihuahuas, we have no Chihuahuas today"
**************************************************
Lyrics ending:
red and squashed and soggy
He's nobody's moggy now
**************************************************
Lyrics ending:
ye decide ta arise
Yer mammy will be here waitin'
**************************************************
Lyrics ending:
in my heart
Your light's still shining in my heart
**************************************************
Lyrics ending:
ing better?
Surely there must be something better?
**************************************************
Lyrics ending:
e
What they touch they bastardise
Hard, hard times
**************************************************
Lyrics ending:
s why every road I travel leads always home to you
**************************************************
Lyrics ending:
red and squashed and soggy
He's nobody's moggy now
**************************************************
Lyrics ending:
the end
I loved Roy Rogers 'cause he was my friend
**************************************************
Lyrics ending:
eyes say what I need to know
M y Iady from Bendigo
**************************************************
Lyrics ending:
If we let them do that - what kind of men are we?
**************************************************
Lyrics ending:
t bear the guiIt when the fauIt is yours and m ine
**************************************************
Lyrics ending:
all of his kind he'll fall
Before the whaler's gun
**************************************************
Lyrics ending:
mate and grab your plate, let's have a bar-b-que!
**************************************************
Lyrics ending:
ilda once more
We'll go Waltzing Matilda once more
**************************************************
Lyrics ending:
ilk and satin
Ye were my bonnie belle o' Broughton
**************************************************
Lyrics ending:
of pride and pIace
Reaches for the vision spIendid
**************************************************
Lyrics ending:
re?
Where are you when we need you, Christy Moore?
**************************************************
Lyrics ending:
t you, my name's Dan
And I'm an honest working man
**************************************************
Lyrics ending:
home
May you find some kind of peace, welcome home
**************************************************
Lyrics ending:
ng arms
The Family of the man with two strong arms
**************************************************
Lyrics ending:
Everytime I think of you
Everytime I think of you
**************************************************
Lyrics ending:
this love could be your mistake)
(Isn't it time?)
**************************************************
Lyrics ending:
my feet again
Here I am
I'm back on my feet again
**************************************************
Lyrics ending:
baby
No no no no
No no no no
Baby please don't go
**************************************************
Lyrics ending:
g
What else can I say
Oooh I'm falling
Fading away
**************************************************
Lyrics ending:
your eyes
Gazing in your eyes
Gazing in your eyes
**************************************************
Lyrics ending:
Oh I really wanna fuck you
Oh
Midnight rendezvous
**************************************************
Lyrics ending:
e your love (Give me your love)
Give me everything
**************************************************
Lyrics ending:
oll
We're gonna rip it up
We're fallin' head first
**************************************************
Lyrics ending:
he's my baby
She's my girl
She's my
She's my world
**************************************************
Lyrics ending:
n and walk away
We're gonna turn our backs on love
**************************************************
Lyrics ending:
g silver dreams
Silver dreams
Silver dreams
Ahh...
**************************************************
Lyrics ending:
again
Baby
You can fall in love
Fall in love again
**************************************************
Lyrics ending:
ce of the cake
For a slice of the cake
Oh, oh, oh!
**************************************************
Lyrics ending:
tery
Love is just a mystery
Love is just a mystery
**************************************************
Lyrics ending:
n, down
Down, down
Down, down
Down,down
Down, down
**************************************************
Lyrics ending:
I mean
(Fifteen sixteen seventeen)
Sweet seventeen
**************************************************
Lyrics ending:
ove me
Love how you love me
Babe, love you so much
**************************************************
Lyrics ending:
ows
Oh, I've got love that grows
I've got the love
**************************************************
Lyrics ending:
e in love oh
(I believe in love)
I believe in love
**************************************************
Lyrics ending:
on angels singin'
Oh it didn't sound like you Lord
**************************************************
Lyrics ending:
everyone
I'm looking for love
But love never comes
**************************************************
Lyrics ending:
e your tears away
It's all that you can do
Oh yeah
**************************************************
Lyrics ending:
ico
In Mexico
Mexico
In Mexico
In Mexico
In Mexico
**************************************************
Lyrics ending:
baby
And if you could
And if you could see me fly
**************************************************
Lyrics ending:
me)
Anytime you want my love
(Anytime anytime)
Ooh
**************************************************
Lyrics ending:
ound
Turnin' 'round
Spinning 'round
Turnin' 'round
**************************************************
Lyrics ending:
t would
Oh it just don't mean that much anymore oh
**************************************************
Lyrics ending:
ve don't prove that I'm right, no no no no
Oh yeah
**************************************************
Lyrics ending:
e deadlines
Now you talk about me in the headlines
**************************************************
Lyrics ending:
re you'll see
You're like me
We're all Union Jacks
**************************************************
Lyrics ending:
I'm fallin' on a whiskey sea
So come on rescue me
**************************************************
|
Spark_MovieRecommendationSystem.ipynb | ###Markdown
Spark HW2 Movie Recommendation In this notebook, we will use an Alternating Least Squares (ALS) algorithm with Spark APIs to predict the ratings for the movies in the [MovieLens small dataset](https://grouplens.org/datasets/movielens/latest/).
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import math
import os
os.environ["PYSPARK_PYTHON"] = "python3"
###Output
_____no_output_____
###Markdown
Part 1: Data ETL and Data Exploration
###Code
from pyspark.sql.functions import col
from pyspark.sql import SparkSession
spark = SparkSession \
.builder \
.appName("moive analysis") \
.config("spark.some.config.option", "some-value") \
.getOrCreate()
movies = spark.read.load("/FileStore/tables/movies.csv", format='csv', header = True)
ratings = spark.read.load("/FileStore/tables/ratings.csv", format='csv', header = True)
links = spark.read.load("/FileStore/tables/links.csv", format='csv', header = True)
tags = spark.read.load("/FileStore/tables/tags.csv", format='csv', header = True)
movies.show(5)
ratings.show(5)
tmp1 = ratings.groupBy("userID").count().toPandas()['count'].min()
tmp2 = ratings.groupBy("movieId").count().toPandas()['count'].min()
print('For the users that rated movies and the movies that were rated:')
print('Minimum number of ratings per user is {}'.format(tmp1))
print('Minimum number of ratings per movie is {}'.format(tmp2))
tmp1 = sum(ratings.groupBy("movieId").count().toPandas()['count'] == 1)
tmp2 = ratings.select('movieId').distinct().count()
print('{} out of {} movies are rated by only one user'.format(tmp1, tmp2))
###Output
_____no_output_____
###Markdown
Part 2: Spark SQL and OLAP Q1: The number of Users
###Code
q1_result = ratings.select('userId').distinct().count()
print('Number of users is {}.'.format(q1_result))
###Output
_____no_output_____
###Markdown
Q2: The number of Movies
###Code
q2_result = movies.select('movieId').distinct().count()
print('The number of movies is {}.'.format(q2_result))
###Output
_____no_output_____
###Markdown
Q3: How many movies are rated by users? List movies not rated before
###Code
q3_result_1 = ratings.select('movieId').distinct().count()
print('The number of movies that have been rated is {}.'.format(q3_result_1))
movies.createOrReplaceTempView('movies')
ratings.createOrReplaceTempView('ratings')
q3_result_2 = spark.sql("SELECT movieId, title FROM movies WHERE movieId NOT IN (SELECT DISTINCT movieId FROM ratings)")
display(q3_result_2)
###Output
_____no_output_____
###Markdown
Q4: List Movie Genres
###Code
# Data Frame based
display(movies.select('genres').where(col('genres').contains('(no genres listed)') == False).distinct().orderBy("genres", ascending=False))
# RDD function
genres = set(movies.select('genres').where(col('genres').contains('(no genres listed)') == False).distinct().rdd.flatMap(lambda x: x).flatMap(lambda x: x.split('|')).collect())
print(genres)
###Output
_____no_output_____
###Markdown
Q5: Movie for Each Category
###Code
d = {}
for genre in genres:
d[genre] = movies.where(col('genres').contains(genre)).select('title')
d['Animation'].show()
###Output
_____no_output_____
###Markdown
Part 3: Spark ALS-based approach for training the model We will use an RDD-based API from [pyspark.mllib](https://spark.apache.org/docs/2.1.1/mllib-collaborative-filtering.html) to predict the ratings, so let's reload "ratings.csv" using ``sc.textFile`` and then convert it to the form of (user, item, rating) tuples.
###Code
from pyspark.mllib.recommendation import ALS
movie_rating = sc.textFile("/FileStore/tables/ratings.csv")
header = movie_rating.take(1)[0]
rating_data = movie_rating.filter(lambda line: line!=header).map(lambda line: line.split(",")).map(lambda tokens: (tokens[0],tokens[1],tokens[2])).cache()
# check three rows
rating_data.take(3)
###Output
_____no_output_____
###Markdown
Now we split the data into training/validation/test sets using a 6/2/2 ratio (Spark's `randomSplit` normalizes the weights, so this corresponds to 60%/20%/20%).
###Code
train, validation, test = rating_data.randomSplit([6,2,2],seed = 7856)
train.cache()
validation.cache()
test.cache()
###Output
_____no_output_____
###Markdown
ALS Model Selection and Evaluation With the ALS model, we can use a grid search over the number of latent factors (rank) and the regularization parameter to find the optimal hyperparameters.
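For reference, the models are compared with the root-mean-square error (RMSE) on the validation set, computed in the code below as $$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{(u,i)}\left(r_{ui} - \hat{r}_{ui}\right)^{2}},$$ where $r_{ui}$ is the observed rating of user $u$ for movie $i$, $\hat{r}_{ui}$ is the model prediction, and $n$ is the number of rated (user, movie) pairs in the validation set.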
###Code
def train_ALS(train_data, validation_data, num_iters, reg_param, ranks):
min_error = float('inf')
best_rank = -1
best_regularization = 0
best_model = None
for rank in ranks:
for reg in reg_param:
# the approach to train ALS model
model = ALS.train(train_data, rank = rank, iterations= num_iters, lambda_= reg)
# make prediction
predict = model.predictAll(validation_data.map(lambda x: (x[0], x[1]))).map(lambda x: ((x[0],x[1]),x[2]))
# get the rating result
rating = validation_data.map(lambda x: ((int(x[0]), int(x[1])), float(x[2]))).join(predict)
# get the RMSE
error = np.sqrt(rating.map(lambda x: (x[1][0] - x[1][1]) ** 2).mean())
print ('{} latent factors and regularization = {}: validation RMSE is {}'.format(rank, reg, error))
if error < min_error:
min_error = error
best_rank = rank
best_regularization = reg
best_model = model
print ('\nThe best model has {} latent factors and regularization = {}'.format(best_rank, best_regularization))
return best_model
num_iterations = 10
ranks = [10, 12, 14]
reg_params = [0.05, 0.1, 0.2]
import time
start_time = time.time()
final_model = train_ALS(train, validation, num_iterations, reg_params, ranks)
print ('Total Runtime: {:.2f} seconds'.format(time.time() - start_time))
iter_array = [1, 2, 5, 10]
# function to plot the learning curve
def plot_learning_curve(iter_array, train_data, validation_data, reg, rank):
val_err = []
train_err = []
for iter in iter_array:
        model = ALS.train(train_data, rank = rank, iterations = iter, lambda_ = reg)
# make prediction
predict_val = model.predictAll(validation_data.map(lambda x: (x[0], x[1]))).map(lambda x: ((x[0],x[1]),x[2]))
predict_train = model.predictAll(train_data.map(lambda x: (x[0], x[1]))).map(lambda x: ((x[0],x[1]),x[2]))
# get the rating result
rating_val = validation_data.map(lambda x: ((int(x[0]), int(x[1])), float(x[2]))).join(predict_val)
rating_train = train_data.map(lambda x: ((int(x[0]), int(x[1])), float(x[2]))).join(predict_train)
# get the RMSE
error_val = np.sqrt(rating_val.map(lambda x: (x[1][0] - x[1][1]) ** 2).mean())
error_train = np.sqrt(rating_train.map(lambda x: (x[1][0] - x[1][1]) ** 2).mean())
val_err.append(error_val)
train_err.append(error_train)
plt.figure(figsize = (8, 6))
plt.plot(iter_array, val_err, label = 'val_error')
plt.plot(iter_array, train_err, label = 'train_error')
plt.legend()
display()
plot_learning_curve(iter_array, train, validation, 0.05, 14)
###Output
_____no_output_____
###Markdown
Model testing on the test data Finally, we make a prediction with the selected model and check the test error.
###Code
model = ALS.train(train, rank = 8, iterations = 14, lambda_ = 0.05)
predict_test = model.predictAll(test.map(lambda x: (x[0], x[1]))).map(lambda x: ((x[0],x[1]),x[2]))
rating_test = test.map(lambda x: ((int(x[0]), int(x[1])), float(x[2]))).join(predict_test)
error_test = np.sqrt(rating_test.map(lambda x: (x[1][0] - x[1][1]) ** 2).mean())
print("The rmse of test data on best model is {}.".format(error_test))
###Output
_____no_output_____
###Markdown
For each user, recommend movies
###Code
movie_feature = model.productFeatures()
movie_feature_res = spark.createDataFrame(movie_feature, ['movieId', 'feature'])
user_feature = model.userFeatures()
user_feature_res = spark.createDataFrame(user_feature, ['userId', 'feature'])
movie_feature_res.show()
user_feature_res.show()
def movieName(movieIds):
    # map a list of movie ids to (movieId, title) pairs
    name = movies.where(col('movieId').isin(movieIds)).select('title').toPandas()['title'].tolist()
    return list(zip(movieIds, name))
def recommendedMovie(userId, num):
    # top-`num` movie recommendations for a user from the trained ALS model
    recommendedMovies = pd.DataFrame(model.recommendProducts(user = userId, num = num))['product'].tolist()
    return movieName(recommendedMovies)
def ratedMovie(userId, num):
    # the `num` movies the user has rated highest, together with the ratings
    ratings = spark.read.load("/FileStore/tables/ratings.csv", format='csv', header = True)
    ratedMovies_rating = ratings.where(col('userId') == userId).select('movieId','rating').orderBy('rating',ascending = False).limit(num).toPandas()
    ratedMovies = ratedMovies_rating['movieId'].tolist()
    rating_values = ratedMovies_rating['rating'].tolist()
    return list(zip(movieName(ratedMovies), rating_values))
def similarity(movieId_1, movieId_2, userId):
    # collect the latent feature vectors of the two movies and the user
    movieFeatures = movie_feature_res.where(col('movieId').isin(movieId_1, movieId_2)).select('feature').collect()
    movie_top_1 = np.array(movieFeatures[0][0])
    movie_top_2 = np.array(movieFeatures[1][0])
    user = np.array(user_feature_res.where(col('userId') == userId).select('feature').collect()[0][0])
    # dot products of the latent vectors as (unnormalized) similarity scores
    movie_sim = np.dot(movie_top_1, movie_top_2)
    user_movie_1 = np.dot(movie_top_1, user)
    user_movie_2 = np.dot(movie_top_2, user)
    print('Movie similarity between the two movies is {}.'.format(str(movie_sim)))
    print('Similarity between movie 1 and the user is {}.'.format(str(user_movie_1)))
    print('Similarity between movie 2 and the user is {}.'.format(str(user_movie_2)))
userId = 328
num = 5
print('The user likes the following movies. \n')
print(ratedMovie(userId, num))
print("The folling movies are recommended to this user.")
print(recommendedMovie(userId, num))
movieId_1 = 1235
movieId_2 = 4
similarity(movieId_1, movieId_2, userId)
###Output
_____no_output_____ |
pgdrive/examples/Basic PGDrive Usages.ipynb | ###Markdown
Quick Start Tutorial of the basic functionality of PGDrive Welcome to PGDrive! PGDrive v0.1.1 supports two running modes: 1. **With rendering functionality**: PGDrive installs and runs easily on a personal computer, but may need special treatment on headless machines and cloud servers. 2. **Without rendering functionality**: PGDrive installs and runs easily on any machine. In this Colab notebook, we demonstrate PGDrive in this mode. In this tutorial, we will navigate you through the installation and some basic functionality of PGDrive! Installation You can install PGDrive easily.
###Code
#@title Install PGDrive
%pip install pgdrive==0.1.1
###Output
_____no_output_____
###Markdown
Basic Functionality
###Code
#@title A minimalist example of using PGDrive with LiDAR observation
from pgdrive import PGDriveEnv
import gym
env = gym.make("PGDrive-v0")
# env = PGDriveEnv(dict(environment_num=100)) # Or you can also choose to create env from class.
print("\nThe action space: {}".format(env.action_space))
print("\nThe observation space: {}\n".format(env.observation_space))
print("Starting the environment ...\n")
ep_reward = 0.0
obs = env.reset()
for i in range(1000):
obs, reward, done, info = env.step(env.action_space.sample())
ep_reward += reward
if done:
print("\nThe episode reward: ", ep_reward)
break
print("\nThe observation shape: {}.".format(obs.shape))
print("\nThe returned reward: {}.".format(reward))
print("\nThe returned information: {}.".format(info))
env.close()
print("\nPGDrive successfully run!")
# @title You can also using an expert to drive
from pgdrive import PGDriveEnv
from pgdrive.examples import expert
env = PGDriveEnv() # You can also choose to create env from class.
print("\nThe action space: {}".format(env.action_space))
print("\nThe observation space: {}\n".format(env.observation_space))
print("Starting the environment ...\n")
ep_reward = 0.0
obs = env.reset()
for i in range(1000):
obs, reward, done, info = env.step(expert(obs))
ep_reward += reward
if done:
print("\nEpisode reward: ", ep_reward)
break
print("\nThe returned reward: {}.".format(reward))
print("\nThe returned information: {}".format(info))
env.close()
print("\nPGDrive successfully run!")
###Output
_____no_output_____
###Markdown
Map Generation
###Code
# @title Draw the generated maps in top-down view
import random
import matplotlib.pyplot as plt
from pgdrive import PGDriveEnv
env = PGDriveEnv(config=dict(
environment_num=100,
map=7,
start_seed=random.randint(0, 1000)
))
fig, axs = plt.subplots(4, 4, figsize=(10, 10), dpi=200)
for i in range(4):
for j in range(4):
env.reset()
m = env.get_map()
ax = axs[i][j]
ax.imshow(m, cmap="bone")
ax.set_xticks([])
ax.set_yticks([])
fig.suptitle("Bird's-eye view of generated maps")
plt.show()
env.close()
# @title Draw the generated maps in top-down view with fixed block sequence
# @markdown You can also specify the road block sequence then randomize the block parameters.
# @markdown Please refer to [documentation](https://pgdrive.readthedocs.io/en/latest/env_config.html#map-config) for the meaning of the map string.
import random
import matplotlib.pyplot as plt
from pgdrive import PGDriveEnv
env = PGDriveEnv(config=dict(
environment_num=100,
map="CrTRXOS",
start_seed=random.randint(0, 1000)
))
fig, axs = plt.subplots(4, 4, figsize=(10, 10), dpi=200)
for i in range(4):
for j in range(4):
env.reset()
m = env.get_map()
ax = axs[i][j]
ax.imshow(m, cmap="bone")
ax.set_xticks([])
ax.set_yticks([])
fig.suptitle("Bird's-eye view of generated maps")
plt.show()
env.close()
###Output
_____no_output_____ |
001-Jupyter/003-JupyterWebApplications/ipywidgets/0_overview_of_all_widgets.ipynb | ###Markdown
Widget List
###Code
import ipywidgets as widgets
###Output
_____no_output_____
###Markdown
Numeric widgets There are many widgets distributed with ipywidgets that are designed to display numeric values. Widgets exist for displaying integers and floats, both bounded and unbounded. The integer widgets share a similar naming scheme to their floating point counterparts. By replacing `Float` with `Int` in the widget name, you can find the Integer equivalent. IntSlider
###Code
widgets.IntSlider(
value=7,
min=0,
max=10,
step=1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d'
)
###Output
_____no_output_____
###Markdown
FloatSlider
###Code
widgets.FloatSlider(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
###Output
_____no_output_____
###Markdown
Sliders can also be **displayed vertically**.
###Code
widgets.FloatSlider(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='vertical',
readout=True,
readout_format='.1f',
)
###Output
_____no_output_____
###Markdown
FloatLogSlider The `FloatLogSlider` has a log scale, which makes it easy to have a slider that covers a wide range of positive magnitudes. The `min` and `max` refer to the minimum and maximum exponents of the `base`, and the `value` refers to the actual value of the slider.
###Code
widgets.FloatLogSlider(
value=10,
base=10,
    min=-10, # min exponent of base
    max=10, # max exponent of base
step=0.2, # exponent step
description='Log Slider'
)
###Output
_____no_output_____
###Markdown
IntRangeSlider
###Code
widgets.IntRangeSlider(
value=[5, 7],
min=0,
max=10,
step=1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d',
)
###Output
_____no_output_____
###Markdown
FloatRangeSlider
###Code
widgets.FloatRangeSlider(
value=[5, 7.5],
min=0,
max=10.0,
step=0.1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
###Output
_____no_output_____
###Markdown
IntProgress
###Code
widgets.IntProgress(
value=7,
min=0,
max=10,
step=1,
description='Loading:',
bar_style='', # 'success', 'info', 'warning', 'danger' or ''
orientation='horizontal'
)
###Output
_____no_output_____
###Markdown
FloatProgress
###Code
widgets.FloatProgress(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Loading:',
bar_style='info',
orientation='horizontal'
)
###Output
_____no_output_____
###Markdown
The numerical text boxes that impose some limit on the data (range, integer-only) impose that restriction when the user presses enter. BoundedIntText
###Code
widgets.BoundedIntText(
value=7,
min=0,
max=10,
step=1,
description='Text:',
disabled=False
)
###Output
_____no_output_____
###Markdown
BoundedFloatText
###Code
widgets.BoundedFloatText(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Text:',
disabled=False
)
###Output
_____no_output_____
###Markdown
IntText
###Code
widgets.IntText(
value=7,
description='Any:',
disabled=False
)
###Output
_____no_output_____
###Markdown
FloatText
###Code
widgets.FloatText(
value=7.5,
description='Any:',
disabled=False
)
###Output
_____no_output_____
###Markdown
Boolean widgets There are three widgets that are designed to display a boolean value. ToggleButton
###Code
widgets.ToggleButton(
value=False,
description='Click me',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Description',
icon='check'
)
###Output
_____no_output_____
###Markdown
Checkbox
###Code
widgets.Checkbox(
value=False,
description='Check me',
disabled=False
)
###Output
_____no_output_____
###Markdown
Valid The `Valid` widget provides a read-only indicator.
###Code
widgets.Valid(
value=False,
description='Valid!',
)
###Output
_____no_output_____
###Markdown
Selection widgets There are several widgets that can be used to display single selection lists, and two that can be used to select multiple values. All inherit from the same base class. You can specify the **enumeration of selectable options by passing a list** (options are either (label, value) pairs, or simply values for which the labels are derived by calling `str`). Dropdown
###Code
widgets.Dropdown(
options=['1', '2', '3'],
value='2',
description='Number:',
disabled=False,
)
###Output
_____no_output_____
###Markdown
The following is also valid, displaying the words `'One', 'Two', 'Three'` as the dropdown choices but returning the values `1, 2, 3`.
###Code
widgets.Dropdown(
options=[('One', 1), ('Two', 2), ('Three', 3)],
value=2,
description='Number:',
)
###Output
_____no_output_____
###Markdown
RadioButtons
###Code
widgets.RadioButtons(
options=['pepperoni', 'pineapple', 'anchovies'],
# value='pineapple',
description='Pizza topping:',
disabled=False
)
###Output
_____no_output_____
###Markdown
Select
###Code
widgets.Select(
options=['Linux', 'Windows', 'OSX'],
value='OSX',
# rows=10,
description='OS:',
disabled=False
)
###Output
_____no_output_____
###Markdown
SelectionSlider
###Code
widgets.SelectionSlider(
options=['scrambled', 'sunny side up', 'poached', 'over easy'],
value='sunny side up',
description='I like my eggs ...',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True
)
###Output
_____no_output_____
###Markdown
SelectionRangeSlider The value, index, and label keys are 2-tuples of the min and max values selected. The options must be nonempty.
###Code
import datetime
dates = [datetime.date(2015,i,1) for i in range(1,13)]
options = [(i.strftime('%b'), i) for i in dates]
widgets.SelectionRangeSlider(
options=options,
index=(0,11),
description='Months (2015)',
disabled=False
)
###Output
_____no_output_____
###Markdown
ToggleButtons
###Code
widgets.ToggleButtons(
options=['Slow', 'Regular', 'Fast'],
description='Speed:',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltips=['Description of slow', 'Description of regular', 'Description of fast'],
# icons=['check'] * 3
)
###Output
_____no_output_____
###Markdown
SelectMultiple Multiple values can be selected with shift and/or ctrl (or command) pressed and mouse clicks or arrow keys.
###Code
widgets.SelectMultiple(
options=['Apples', 'Oranges', 'Pears'],
value=['Oranges'],
#rows=10,
description='Fruits',
disabled=False
)
###Output
_____no_output_____
###Markdown
String widgets There are several widgets that can be used to display a string value. The `Text`, `Textarea`, and `Combobox` widgets accept input. The `HTML` and `HTMLMath` widgets display a string as HTML (`HTMLMath` also renders math). The `Label` widget can be used to construct a custom control label. Text
###Code
widgets.Text(
value='Hello World',
placeholder='Type something',
description='String:',
disabled=False
)
###Output
_____no_output_____
###Markdown
Textarea
###Code
widgets.Textarea(
value='Hello World',
placeholder='Type something',
description='String:',
disabled=False
)
###Output
_____no_output_____
###Markdown
Combobox
###Code
widgets.Combobox(
# value='John',
placeholder='Choose Someone',
options=['Paul', 'John', 'George', 'Ringo'],
description='Combobox:',
ensure_option=True,
disabled=False
)
###Output
_____no_output_____
###Markdown
Label The `Label` widget is useful if you need to build a custom description next to a control using similar styling to the built-in control descriptions.
###Code
widgets.HBox([widgets.Label(value="The $m$ in $E=mc^2$:"), widgets.FloatSlider()])
###Output
_____no_output_____
###Markdown
HTML
###Code
widgets.HTML(
value="Hello <b>World</b>",
placeholder='Some HTML',
description='Some HTML',
)
###Output
_____no_output_____
###Markdown
HTML Math
###Code
widgets.HTMLMath(
value=r"Some math and <i>HTML</i>: \(x^2\) and $$\frac{x+1}{x-1}$$",
placeholder='Some HTML',
description='Some HTML',
)
###Output
_____no_output_____
###Markdown
Image
###Code
file = open("../images/WidgetArch.png", "rb")
image = file.read()
widgets.Image(
value=image,
format='png',
width=300,
height=400,
)
###Output
_____no_output_____
###Markdown
Button
###Code
widgets.Button(
description='Click me',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Click me',
icon='check'
)
###Output
_____no_output_____
###Markdown
Output The `Output` widget can capture and display stdout, stderr and [rich output generated by IPython](http://ipython.readthedocs.io/en/stable/api/generated/IPython.display.html#module-IPython.display). For detailed documentation, see the [output widget examples](https://ipywidgets.readthedocs.io/en/latest/examples/Output%20Widget.html); a minimal usage sketch is also appended to the code cell below. Play (Animation) widget The `Play` widget is useful for performing animations by iterating over a sequence of integers at a certain speed. The value of the slider below is linked to the player.
###Code
play = widgets.Play(
# interval=10,
value=50,
min=0,
max=100,
step=1,
description="Press play",
disabled=False
)
slider = widgets.IntSlider()
widgets.jslink((play, 'value'), (slider, 'value'))
widgets.HBox([play, slider])
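# A minimal sketch of the `Output` widget described above (the variable name
# `out` and the printed text are our own, not part of the original examples):
out = widgets.Output(layout={'border': '1px solid black'})
with out:
    # anything printed or displayed inside this context is captured
    # and rendered inside the Output widget
    print('Hello from inside the Output widget')
out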
###Output
_____no_output_____
###Markdown
Date picker The date picker widget works in Chrome, Firefox and IE Edge, but does not currently work in Safari because it does not support the HTML date input field.
###Code
widgets.DatePicker(
description='Pick a Date',
disabled=False
)
###Output
_____no_output_____
###Markdown
Color picker
###Code
widgets.ColorPicker(
concise=False,
description='Pick a color',
value='blue',
disabled=False
)
###Output
_____no_output_____
###Markdown
File Upload The `FileUpload` widget allows uploading any type of file(s) as bytes.
###Code
widgets.FileUpload(
accept='', # Accepted file extension e.g. '.txt', '.pdf', 'image/*', 'image/*,.pdf'
multiple=False # True to accept multiple files upload else False
)
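# A hedged sketch of reading the uploaded data (assumes ipywidgets 7.x, where
# `FileUpload.value` is a dict mapping each filename to a dict holding the raw
# bytes under 'content'; ipywidgets 8 exposes a tuple of dicts instead):
def print_uploaded_sizes(uploader):
    for name, item in uploader.value.items():
        print(name, len(item['content']), 'bytes')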
###Output
_____no_output_____
###Markdown
Controller The `Controller` allows a game controller to be used as an input device.
###Code
widgets.Controller(
index=0,
)
###Output
_____no_output_____
###Markdown
Container/Layout widgets These widgets are used to hold other widgets, called children. Each has a `children` property that may be set either when the widget is created or later. Box
###Code
items = [widgets.Label(str(i)) for i in range(4)]
widgets.Box(items)
###Output
_____no_output_____
###Markdown
HBox
###Code
items = [widgets.Label(str(i)) for i in range(4)]
widgets.HBox(items)
###Output
_____no_output_____
###Markdown
VBox
###Code
items = [widgets.Label(str(i)) for i in range(4)]
left_box = widgets.VBox([items[0], items[1]])
right_box = widgets.VBox([items[2], items[3]])
widgets.HBox([left_box, right_box])
###Output
_____no_output_____
###Markdown
GridBox This box uses the HTML Grid specification to lay out its children in a two-dimensional grid. The example below lays out the 8 items in 3 columns and as many rows as needed to accommodate them.
###Code
items = [widgets.Label(str(i)) for i in range(8)]
widgets.GridBox(items, layout=widgets.Layout(grid_template_columns="repeat(3, 100px)"))
###Output
_____no_output_____
###Markdown
Accordion
###Code
accordion = widgets.Accordion(children=[widgets.IntSlider(), widgets.Text()])
accordion.set_title(0, 'Slider')
accordion.set_title(1, 'Text')
accordion
###Output
_____no_output_____
###Markdown
Tabs In this example the children are set after the tab is created. Titles for the tabs are set in the same way they are for `Accordion`.
###Code
tab_contents = ['P0', 'P1', 'P2', 'P3', 'P4']
children = [widgets.Text(description=name) for name in tab_contents]
tab = widgets.Tab()
tab.children = children
for i in range(len(children)):
tab.set_title(i, str(i))
tab
###Output
_____no_output_____
###Markdown
Accordion and Tab use `selected_index`, not value Unlike the rest of the widgets discussed earlier, the container widgets `Accordion` and `Tab` update their `selected_index` attribute when the user changes which accordion or tab is selected. That means that you can both see what the user is doing *and* programmatically set what the user sees by setting the value of `selected_index`. Setting `selected_index = None` closes all of the accordions or deselects all tabs. In the cells below try displaying or setting the `selected_index` of the `tab` and/or `accordion`.
###Code
tab.selected_index = 3
accordion.selected_index = None
###Output
_____no_output_____
###Markdown
Nesting tabs and accordions Tabs and accordions can be nested as deeply as you want. If you have a few minutes, try nesting a few accordions or putting an accordion inside a tab or a tab inside an accordion. The example below makes a couple of tabs with an accordion as the child of one of them.
###Code
tab_nest = widgets.Tab()
tab_nest.children = [accordion, accordion]
tab_nest.set_title(0, 'An accordion')
tab_nest.set_title(1, 'Copy of the accordion')
tab_nest
###Output
_____no_output_____ |
sklearn/sklearn learning/demonstration/auto_examples_jupyter/cluster/plot_digits_linkage.ipynb | ###Markdown
Various Agglomerative Clustering on a 2D embedding of digits An illustration of various linkage options for agglomerative clustering on a 2D embedding of the digits dataset. The goal of this example is to show intuitively how the metrics behave, and not to find good clusters for the digits. This is why the example works on a 2D embedding. What this example shows us is the "rich get richer" behavior of agglomerative clustering, which tends to create uneven cluster sizes. This behavior is pronounced for the average linkage strategy, which ends up with a couple of singleton clusters, while in the case of single linkage we get a single central cluster with all other clusters being drawn from noise points around the fringes.
###Code
# Authors: Gael Varoquaux
# License: BSD 3 clause (C) INRIA 2014
print(__doc__)
from time import time
import numpy as np
from scipy import ndimage
from matplotlib import pyplot as plt
from sklearn import manifold, datasets
X, y = datasets.load_digits(return_X_y=True)
n_samples, n_features = X.shape
np.random.seed(0)
def nudge_images(X, y):
# Having a larger dataset shows more clearly the behavior of the
# methods, but we multiply the size of the dataset only by 2, as the
# cost of the hierarchical clustering methods are strongly
# super-linear in n_samples
shift = lambda x: ndimage.shift(x.reshape((8, 8)),
.3 * np.random.normal(size=2),
mode='constant',
).ravel()
X = np.concatenate([X, np.apply_along_axis(shift, 1, X)])
Y = np.concatenate([y, y], axis=0)
return X, Y
X, y = nudge_images(X, y)
#----------------------------------------------------------------------
# Visualize the clustering
def plot_clustering(X_red, labels, title=None):
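    # rescale the embedding to the unit square so the digit labels are plotted on a common scale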
x_min, x_max = np.min(X_red, axis=0), np.max(X_red, axis=0)
X_red = (X_red - x_min) / (x_max - x_min)
plt.figure(figsize=(6, 4))
for i in range(X_red.shape[0]):
plt.text(X_red[i, 0], X_red[i, 1], str(y[i]),
color=plt.cm.nipy_spectral(labels[i] / 10.),
fontdict={'weight': 'bold', 'size': 9})
plt.xticks([])
plt.yticks([])
if title is not None:
plt.title(title, size=17)
plt.axis('off')
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
#----------------------------------------------------------------------
# 2D embedding of the digits dataset
print("Computing embedding")
X_red = manifold.SpectralEmbedding(n_components=2).fit_transform(X)
print("Done.")
from sklearn.cluster import AgglomerativeClustering
for linkage in ('ward', 'average', 'complete', 'single'):
clustering = AgglomerativeClustering(linkage=linkage, n_clusters=10)
t0 = time()
clustering.fit(X_red)
print("%s :\t%.2fs" % (linkage, time() - t0))
plot_clustering(X_red, clustering.labels_, "%s linkage" % linkage)
plt.show()
###Output
_____no_output_____ |
demo_independent_nonlin_paper.ipynb | ###Markdown
Cosinor analysis
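The models fitted below are multicomponent cosinor models of (approximately) the form $$y(t) = M + \sum_{k=1}^{N} A_k \cos\!\left(\frac{2\pi k t}{P} + \phi_k\right) + \varepsilon(t),$$ where $M$ is the MESOR, $A_k$ and $\phi_k$ are the amplitude and acrophase of the $k$-th component, $P = 24$ h is the period and $N \in \{1,2,3\}$ is the number of components; see the CosinorPy documentation for the exact parametrization.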
###Code
df_results = cosinor.fit_group(df, n_components = [1,2,3], period=24, plot=False, plot_phase=False, lin_comp=False)
df_best_models = cosinor.get_best_models(df, df_results, n_components = [1,2,3])
df_best_models.to_csv(os.path.join("supp_tables2","supp_table_1.csv"), index=False)
###Output
_____no_output_____
###Markdown
... and plot these models
###Code
cosinor.plot_df_models(df, df_best_models, plot_phase=False,folder="img\\nonlin_basic_models")
###Output
_____no_output_____
###Markdown
We can analyse these models in more detail using bootstrap analysis.
###Code
df_results_extended = cosinor.analyse_best_models(df, df_best_models)
df_results_extended.to_csv(os.path.join("supp_tables2","supp_table_2.csv"), index=False)
df_out = df_results_extended[['test', 'amplitude', 'q(amplitude)', 'acrophase', 'q(acrophase)']].round(3)
f = open('table_cosinor_bootstrap.txt', 'w')
f.write(df_out.to_latex(index=False))
f.close()
df_out
###Output
_____no_output_____
###Markdown
Obviously, some of these fits could be improved by introducing a linear component and/or an amplification coefficient. Generalized cosinor1 analysis
###Code
df_results = cosinor_nonlin.fit_generalized_cosinor_group(df, period = 24, plot=True, folder="img\\nonlin_gen1_models")
df_results.to_csv(os.path.join("supp_tables2","supp_table_3.csv"), index=False)
df_out = df_results[['test', 'amplitude', 'q(amplitude)', 'acrophase', 'q(acrophase)', 'amplification', 'q(amplification)', 'lin_comp', 'q(lin_comp)']].round(3)
f = open('table_gen_cosinor1.txt', 'w')
f.write(df_out.to_latex(index=False))
f.close()
df_out
###Output
_____no_output_____
###Markdown
Generalized multicomponent cosinor analysis. A better fit would be obtained in some cases (e.g., test7 and test8) if a multicomponent cosinor model were used.
###Code
df_best_models = cosinor_nonlin.fit_generalized_cosinor_n_comp_group_best(df, period=24, n_components = [1,2,3], plot=True, folder="img\\nonlin_gen_models")
df_best_models.to_csv(os.path.join("supp_tables2","supp_table_4.csv"), index=False)
df_best_models[['test', 'n_components']]
###Output
_____no_output_____
###Markdown
However, the significance of the amplitudes and acrophases being different from zero now needs to be evaluated using a bootstrap. We can do this using the best models for each dataset:
###Code
df_bootstrap = cosinor_nonlin.bootstrap_generalized_cosinor_n_comp_group_best(df, df_best_models, bootstrap_size=100)
df_bootstrap.to_csv(os.path.join("supp_tables2","supp_table_5.csv"), index=False)
df_out = df_bootstrap[['test', 'n_components','amplitude', 'q(amplitude)', 'acrophase', 'q(acrophase)', 'amplification', 'q(amplification)', 'lin_comp', 'q(lin_comp)']].round(3)
f = open('table_gen_cosinor_bootstrap.txt', 'w')
f.write(df_out.to_latex(index=False))
f.close()
df_out
###Output
_____no_output_____
###Markdown
Comparison using generalized multicomponent cosinor analysis. This analysis relies on bootstrapping, as in the basic multicomponent cosinor analysis. However, it is not necessary to run the bootstrap again, since we can reuse the results produced in the previous steps. Namely, we will use the confidence intervals of the basic bootstrap analyses.
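As a generic illustration of this idea only (not CosinorPy's internal implementation): given bootstrap estimates and standard errors of a parameter in two models, their difference can be assessed with a simple z-test. The numbers below are placeholders, not results from this notebook.

```python
# Generic sketch only; the estimates/standard errors are placeholder values.
import numpy as np
from scipy import stats

def compare_estimates(est1, se1, est2, se2):
    diff = est1 - est2
    se_diff = np.sqrt(se1**2 + se2**2)          # assumes independent estimates
    z = diff / se_diff
    p = 2 * (1 - stats.norm.cdf(abs(z)))        # two-sided p-value
    return diff, p

print(compare_estimates(1.10, 0.05, 0.95, 0.06))
```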
###Code
pairs = [("sym", "sym_lin_comp"),("asym", "asym_lin_comp"), ("sym_damped", "sym_forced"), ("asym_damped", "asym_forced")]
###Output
_____no_output_____
###Markdown
Then, we can run the analysis. To reduce the computing time, we can specify the bootstrap results obtained earlier (`df_bootstrap_single` parameter).
###Code
df_bootstrap_compare = cosinor_nonlin.compare_pairs_n_comp_bootstrap_group(df, pairs, df_best_models=df_best_models, df_bootstrap_single=df_bootstrap, plot=True, folder="img\\nonlin_gen_compare")
df_bootstrap.to_csv(os.path.join("supp_tables2","supp_table_6.csv"), index=False)
df_out = df_bootstrap_compare[['test', 'n_components1', 'n_components2', 'd_amplitude', 'q(d_amplitude)', 'd_acrophase', 'q(d_acrophase)', 'd_amplification', 'q(d_amplification)', 'd_lin_comp', 'q(d_lin_comp)']].round(3)
f = open('table_gen_cosinor_bootstrap_compare.txt', 'w')
f.write(df_out.to_latex(index=False))
f.close()
df_out
###Output
_____no_output_____ |
code/babyIAXO.ipynb | ###Markdown
babyIAXO. Same as the IAXO results, except that it has only 1 bore and the length is 10 metres rather than 20.
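For orientation, an assumption stated here (the standard helioscope scaling, which appears to be what the rescaling cells near the end of this notebook use): the expected photon count scales as $N_\gamma \propto g_{a\gamma}^{4}\, B^{2} L^{2}\, N_{\rm bores}\, t$, and coherence of the axion-photon conversion limits the mass reach through the magnet length, so at fixed $N_\gamma$ the sensitivities rescale as

$$ \frac{g'_{a\gamma}}{g_{a\gamma}} = \left(\frac{B^{2} L^{2}\, N_{\rm bores}\, t}{B'^{2} L'^{2}\, N'_{\rm bores}\, t'}\right)^{1/4}, \qquad \frac{m'_{a}}{m_{a}} = \sqrt{\frac{L}{L'}} . $$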
###Code
from numpy import *
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import AxionFuncs
import Like
gname = 'Photon'
# Number of mass points for discovery limits
n_DL = 1000
m_DL_vals = logspace(log10(1e-3),log10(2e-1),n_DL)
# Number of mass points for data table
nm = 1000
m_vals = logspace(-4.0,2e0,nm)
# Energy range for binned data
E_max = 20.0 # Max energy (20 keV for photon, 10 keV for electron)
nE_bins = 300 # Number of bins (needs to be >100 to get good results)
E0 = 1.0e-3*10.0 # Range of energy resolutions for the plot
# Generate IAXO limit
# I've turned off energy res here just for speed
E_bins,R1_tab,R0 = AxionFuncs.BinnedPhotonNumberTable(m_vals,E0,E_max,nE_bins,coupling=gname,nfine=10,res_on=False)
DLIAXO = Like.MassDiscoveryLimit_Simple(m_vals,R1_tab,R0,m_DL_vals)
IAXO = Like.ConstantObsNumberLine(5,m_DL_vals,m_vals,R1_tab)
# Generate babyIAXO limit
E_bins,R1_tab,R0 = AxionFuncs.BinnedPhotonNumberTable(m_vals,E0,E_max,nE_bins,coupling=gname,nfine=10,res_on=False,\
Length=10.0,N_bores=1,Exposure=1.0) # <- babyIAXO params (only need to set)
DL = Like.MassDiscoveryLimit_Simple(m_vals,R1_tab,R0,m_DL_vals)
babyIAXO = Like.ConstantObsNumberLine(5,m_DL_vals,m_vals,R1_tab)
print 'Done.'
# Set various plotting style things
plt.rcParams['axes.linewidth'] = 2.5
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
fig = plt.figure(figsize=(14.3,13))
ax = fig.add_subplot(111)
# Limits for y-axis
ymin = 1e-12
ymax = 1e-9
# blue
col = flipud(cm.Blues(linspace(0,1,10)))
# Plot IAXO limit
plt.fill_between(m_DL_vals,IAXO,y2=ymax,edgecolor="goldenrod",facecolor="Orange",alpha=0.1,linewidth=5,label=r'IAXO sens.',zorder=0)
plt.loglog(m_DL_vals,DLIAXO,'-',linewidth=5,color="goldenrod",alpha=0.5,label=r'IAXO mass')
# Plot babyIAXO limit
plt.fill_between(m_DL_vals,babyIAXO,y2=ymax,edgecolor=col[6,:],facecolor=col[5,:],linewidth=5,label=r'babyIAXO sens.',zorder=0)
plt.loglog(m_DL_vals,DL,linewidth=5,color=col[0,:],label=r'babyIAXO mass')
# Testing result makes sense by rescaling babyIAXO to recover IAXO
#mass_rescale = sqrt(20.0/10.0) # rescaled by length
#coupling_rescale = (((20/10)**2)*(8/1)*(1.5/1))**0.25 # rescaled by number of bores, exposure and length
#plt.loglog(m_DL_vals/mass_rescale,babyIAXO/coupling_rescale,'r:')
#plt.loglog(m_DL_vals/mass_rescale,DL/coupling_rescale,'r:')
# Plot constant event numbers lines
for Ngamma in [10,100,1000]:
Nline = Like.ConstantObsNumberLine(Ngamma,m_DL_vals,m_vals,R1_tab)
plt.loglog(m_DL_vals,Nline,'k-',zorder=0)
plt.text(1.1e-3,Nline[0]*1.01,r'$N_\gamma$ = '+str(Ngamma),fontsize=20)
# HB limit
HB_col = [0.0, 0.66, 0.42]
HBmin = 6.7e-11
plt.fill_between([1e-3,1e0],[HBmin,HBmin],y2=ymax,edgecolor=None,facecolor=HB_col)
plt.text(2.5e-2,7e-11,r'{\bf Horizontal Branch}',fontsize=25,color='w')
# CAST limit
CAST_col = [0.5, 0.0, 0.13]
CAST = (babyIAXO/babyIAXO[0])*6.7e-11
plt.fill_between(m_DL_vals,CAST,y2=ymax,edgecolor=None,facecolor=CAST_col)
plt.text(5e-3,3e-10,r'{\bf CAST}',fontsize=30,color='w')
# Plot g \propto m^(-1.74) line
#plt.plot(m_DL_vals,1.8e-15*m_DL_vals**-1.74,'k--',linewidth=3)
#plt.text(1.8e-2,2.1e-12,r'$\propto m_a^{-1.74}$',fontsize=30,rotation=-50)
# Style
plt.xlim([m_DL_vals[0],m_DL_vals[-1]])
plt.ylim([ymin,ymax])
plt.xticks(fontsize=35)
plt.yticks(fontsize=35)
ax.tick_params(which='major',direction='in',width=2,length=10,right=True,top=True)
ax.tick_params(which='minor',direction='in',width=1,length=7,right=True,top=True)
ax.tick_params(axis='x', which='major', pad=10)
plt.xlabel(r"$m_a$ [eV]",fontsize=45)
plt.ylabel(r"$|g_{a\gamma}|$ [GeV$^{-1}$]",fontsize=45)
# Legend
leg = plt.legend(fontsize=30,frameon=False,loc="lower right")
plt.setp(leg.get_title(),fontsize=30)
# Show and save
plt.show()
fig.savefig('../plots/MassDiscoveryLimit_babyIAXO.pdf',bbox_inches='tight')
fig.savefig('../plots/plots_png/MassDiscoveryLimit_babyIAXO.png',bbox_inches='tight') # Save for preview in README
# Save data for other plots
savetxt("../my_data/MassDiscoveryLimit_babyIAXO.txt",vstack((m_DL_vals,DL)))
# Scaling for TASTE https://arxiv.org/abs/1706.09378
# B = 3.5
# L = 12
# Exposure = 1.5
# Nbores = 1
mass_rescale = sqrt(20.0/12.0) # rescaled by length
coupling_rescale = (((2.5/3.5)**2.0*(20/12)**2)*(8/1)*(1.5/1.5))**0.25 # rescaled by number of bores, exposure and length
print mass_rescale,coupling_rescale
###Output
1.29099444874 1.42137436628
|
09DecisionTree/05CART-and-Decision-Tree-Hyperparameters.ipynb | ###Markdown
CART and Decision Tree Hyperparameters
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
X, y = datasets.make_moons(noise=0.25, random_state=666)
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
from sklearn.tree import DecisionTreeClassifier
dt_clf = DecisionTreeClassifier()
dt_clf.fit(X, y)
def plot_decision_boundary(model, axis):
x0, x1 = np.meshgrid(
np.linspace(axis[0], axis[1], int((axis[1]-axis[0])*100)).reshape(-1, 1),
np.linspace(axis[2], axis[3], int((axis[3]-axis[2])*100)).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_predict = model.predict(X_new)
zz = y_predict.reshape(x0.shape)
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#EF9A9A','#FFF59D','#90CAF9'])
plt.contourf(x0, x1, zz, linewidth=5, cmap=custom_cmap)
plot_decision_boundary(dt_clf, axis=[-1.5, 2.5, -1.0, 1.5])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
dt_clf2 = DecisionTreeClassifier(max_depth=2)
dt_clf2.fit(X, y)
plot_decision_boundary(dt_clf2, axis=[-1.5, 2.5, -1.0, 1.5])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
dt_clf3 = DecisionTreeClassifier(min_samples_split=10)
dt_clf3.fit(X, y)
plot_decision_boundary(dt_clf3, axis=[-1.5, 2.5, -1.0, 1.5])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
dt_clf4 = DecisionTreeClassifier(min_samples_leaf=6)
dt_clf4.fit(X, y)
plot_decision_boundary(dt_clf4, axis=[-1.5, 2.5, -1.0, 1.5])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
dt_clf5 = DecisionTreeClassifier(max_leaf_nodes=4)
dt_clf5.fit(X, y)
plot_decision_boundary(dt_clf5, axis=[-1.5, 2.5, -1.0, 1.5])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
###Output
D:\software\Anaconda\path\lib\site-packages\matplotlib\contour.py:1004: UserWarning: The following kwargs were not used by contour: 'linewidth'
s)
|
notebooks/Client_Example.ipynb | ###Markdown
Client Post Request API Functions --- Imports
###Code
import json
import pandas as pd
from datetime import datetime
from bokeh.plotting import figure, show
from bokeh.layouts import layout
from bokeh.io import output_notebook
from statsmodels.tsa.holtwinters import ExponentialSmoothing as HWES
from matplotlib import pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose
from sklearn.metrics import mean_squared_error
import numpy as np
from itertools import combinations
import requests
import logging
from random import random
import seaborn as sn
###Output
_____no_output_____
###Markdown
Forecast Simple Example with custom time series (String data)
###Code
url = 'http://H O S T/time_series/forecast'
payload = dict(start=8, end=14, forecast_range=2, period=[2,3,4,6,8,12,16], data = ['2','4','6','8','10','12','14','16','18','20','22','24','26','28','30'],locale = 'None',api_key='XXXX')
data = payload
res = requests.post(url, json = data)
print(res.text)
###Output
_____no_output_____
###Markdown
Decompose with custom time series (String data)
###Code
url = 'http://H O S T/time_series/decompose'
payload = dict(data = ['2','4','6','8','10','12','14','16','18','20','22','24','26','28'],model = 'multiplicative', period = 2,locale = 'None',api_key = "XXXX")
data = payload
res = requests.post(url, json = data)
print(res.text)
###Output
_____no_output_____
###Markdown
Forecast Simple Example with custom time series (float data)
###Code
url = 'http://H O S T/time_series/forecast'
payload = dict(start=8, end=9, forecast_range = 3, period = [2],data = [2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32],locale = 'None',api_key="XXXX")
data = payload
res = requests.post(url, json = data)
print(res.text)
###Output
_____no_output_____
###Markdown
Forecast Simple Example with custom time series (String data (German locale))
###Code
url = 'http://H O S T/time_series/forecast'
payload = dict(start=8, end=9, forecast_range = 3, period = [2], data = ['2,1','4,2','6,1','8,2','10,1','12,1','14,2','16,1','18,2','20,1','22,2','24,1','26,2','28,1','30,2'],locale = 'de_DE.utf8',api_key="XXXX")
data = payload
res = requests.post(url, json = data)
print(res.text)
###Output
_____no_output_____
###Markdown
Example Catalog
###Code
url = 'http://H O S T/time_series/catalog'
res = requests.get(url)
print(res.text)
url = 'http://H O S T/time_series/forecast'
payload = dict(start=100, end=106, forecast_range = 2, period = [256,7,14,2,4,300], data = 'EU- Corn Future_0011B1.txt',locale = 'None',api_key="XXXX")
data = payload
res = requests.post(url, json = data)
print(res.text)
###Output
_____no_output_____
###Markdown
API 'correlate()' call example using custom data
###Code
# Create Data
# -----------
time_series_1 = [i*2 for i in range(1,31)]
time_series_2 = [i*3 for i in range(1,31)]
time_series_3 = [random() for i in range(1,31)]
data = [time_series_1,time_series_2,time_series_3]
# API call
# -------
url = 'http://H O S T/time_series/correlate'
payload = dict(data=data, start=0, window_size=3, step_size=3 ,steps=3 ,correlation_method='pearson', locale='None',api_key="XXXX")
data = payload
res = requests.post(url, json = data)
# Results Visualisation
# --------------------
data = res.json()
data = data['correlations']
for i in range(0,len(data)):
pd.DataFrame(data[i])
sn.heatmap(pd.DataFrame(data[i],columns=['A','B','C']), annot=True)
plt.show()
###Output
_____no_output_____
###Markdown
API 'correlate()' call example using sftp data
###Code
# Call catalog to see the available data on the sftp
# --------------------------------------------------
url = 'http://H O S T/time_series/catalog'
res = requests.get(url)
print(res.text)
# Select data
# -----------
data = ["EU- Corn Future_0011B1.txt", "EU- Milling Wheat Future_0011B2.txt","GB- Robusta Coffee Fut. (409)_001071.txt"]
# API call
# -------
url = 'http://H O S T/time_series/correlate'
payload = dict(data=data, start=0, window_size=3, step_size=100 ,steps=3 ,correlation_method='pearson', locale='None',api_key = 'XXXX')
data = payload
res = requests.post(url, json = data)
# Results Visualisation
# --------------------
data = res.json()
data = data['correlations']
for i in range(0,len(data)):
pd.DataFrame(data[i])
sn.heatmap(pd.DataFrame(data[i],columns=['A','B','C']), annot=True)
plt.show()
###Output
_____no_output_____ |
Operacionales/Temp/demo.ipynb | ###Markdown
Fundamentals of Operating Systems. Python in Jupyter Notebook. Created by Giancarlo Ortiz for the SO-1 course. Operating Systems: this refers to an evaluation of the profitability and stability of a project to determine its viability. Agenda: 1. Profitability 2. Stability 3. Devaluation 4. Inflation
###Code
# Importar módulos al cuaderno de Jupyter
import numpy_financial as npf
import pylab as pl
###Output
_____no_output_____
###Markdown
1.1 Profitability---The capacity to generate income and sustain growth in both the short and the long term. The degree of profitability of a company is generally based on the income statement, which reports on the results of the company's operations.\begin{equation*}ROE = \frac{\text{Net Income}}{\text{Equity}}\end{equation*}>There are companies that follow a cost-leadership strategy and base their profitability on high turnover with a low margin; that is, they sell a large quantity but with a small margin on each sale. Other companies instead base their profitability on high margins but low turnover.\begin{equation*}ROE = \frac{\text{Profit}}{\text{Sales}} \cdot \frac{\text{Sales}}{\text{Assets}} \cdot \frac{\text{Assets}}{\text{Equity}}\end{equation*}\begin{equation*}ROE = M \cdot R \cdot A\end{equation*}>Where:> * $\color{a78a4d}{ROE}$ = _Return on equity_ or [Rentabilidad financiera][111]> * $\color{a78a4d}{M}$ = _Profit margin_ or [Margen de beneficio][112]> * $\color{a78a4d}{R}$ = _Asset turnover_ or [Rotación de activos][113]> * $\color{a78a4d}{A}$ = _Leverage_ or [Apalancamiento][114][111]:https://es.wikipedia.org/wiki/Rentabilidad_financiera[112]:https://es.wikipedia.org/wiki/Margen_de_beneficio[113]:https://es.wikipedia.org/wiki/Rotaci%C3%B3n_de_activos[114]:https://es.wikipedia.org/wiki/Apalancamiento Example: Profitability ---Linda Castillo wants to accelerate __Estrella__, a startup with a business model in the telecommunications segment that she has already validated with the market, and for this she needs US$50,000 to strengthen the operation. She and her co-founders gather US$10,000 from their savings at the day's [TRM][115] exchange rate, and an investment fund proposes to leverage the project with the difference in exchange for 25% of the net profit. If at the end of the first year a profit of US$8,270 has been obtained: * What is the return for the founding partners? * What is the return for the investment fund? [115]:https://www.banrep.gov.co/es/estadisticas/trm
###Code
# Data
inversion = 50_000
capital_socios = 10_000
beneficio = 8270
# Calculations
capital_fondo = inversion - capital_socios
beneficio_socios = 75*beneficio/100
beneficio_fondo = 25*beneficio/100
roe = beneficio/inversion
roe_socios = beneficio_socios/capital_socios
roe_fondo = beneficio_fondo/capital_fondo
# Output
print(f"-"*70)
print(f"| Detalle |" + "Socios".center(16) + "|" +"Fondo".center(16) + "|" + "Total".center(18) + "|")
print(f"-"*70)
print(f"| Inversión | US$ {capital_socios:10,.2f} | US$ {capital_fondo:10,.2f} | US$ {inversion:12,.2f} |")
print(f"| Beneficio | US$ {beneficio_socios:10,.2f} | US$ {beneficio_fondo:10,.2f} | US$ {beneficio:12,.2f} |")
print(f"| Rentabilidad | {100*roe_socios:13,.2f}% | {100*roe_fondo:13,.2f}% | {100*roe:15,.2f}% |")
print(f"-"*70)
###Output
----------------------------------------------------------------------
| Detalle | Socios | Fondo | Total |
----------------------------------------------------------------------
| Inversión | US$ 10,000.00 | US$ 40,000.00 | US$ 50,000.00 |
| Beneficio | US$ 6,202.50 | US$ 2,067.50 | US$ 8,270.00 |
| Rentabilidad | 62.02% | 5.17% | 16.54% |
----------------------------------------------------------------------
|
tools/Load_Images_To_TensorFlow.ipynb | ###Markdown
Ensure that you have the same tf and tfds versions as below, or else it might not work
###Code
tf.__version__,tfds.__version__
###Output
_____no_output_____
###Markdown
You need to download gs://cbis-ddsm-tf/curated_breast_imaging_ddsm into ~/tensorflow_datasets/ so that the path is ~/tensorflow_datasets/curated_breast_imaging_ddsm. This requires you to download the source data manually into download_config.manual_dir (defaults to ~/tensorflow_datasets/downloads/manual/):
###Code
(ds_train,ds_test,ds_valid),info = tfds.load('curated_breast_imaging_ddsm/patches', split=['train','test','validation'], shuffle_files=True,
with_info=True)
print(info.features["label"].num_classes)
print(info.features["label"].names)
print(info.features["label"].int2str(1)) # Human readable
print(info.features.shape)
print(info.features.dtype)
fig = tfds.show_examples(ds_train, info)
###Output
_____no_output_____
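If the data has to live somewhere other than the default location, a sketch (assumed standard `tfds` usage, not taken from this notebook) of pointing the load above at a custom `manual_dir` would look like this:

```python
# Sketch: assumes the manually downloaded data was placed in the directory given to manual_dir.
import tensorflow_datasets as tfds

download_config = tfds.download.DownloadConfig(
    manual_dir='~/tensorflow_datasets/downloads/manual/')
ds_train, info = tfds.load(
    'curated_breast_imaging_ddsm/patches', split='train', with_info=True,
    download_and_prepare_kwargs={'download_config': download_config})
```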
###Markdown
Prepare train_dataset to feed into a NN model
###Code
normalization_layer = tf.keras.layers.experimental.preprocessing.Rescaling(1./255)
def load_image_train(datapoint):
input_image = tf.image.resize(datapoint['image'], (224, 224))
input_image = tf.image.grayscale_to_rgb(input_image) # if using pretrained models
input_image = normalization_layer(input_image)
if tf.random.uniform(()) > 0.5:
input_image = tf.image.flip_left_right(input_image)
return input_image,datapoint['label']
def load_image_test(datapoint):
input_image = tf.image.resize(datapoint['image'], (224, 224))
input_image = tf.image.grayscale_to_rgb(input_image) # if using pretrained models
input_image = normalization_layer(input_image)
return input_image,datapoint['label']
TRAIN_LENGTH = info.splits['train'].num_examples
BATCH_SIZE = 100
BUFFER_SIZE = 1000
STEPS_PER_EPOCH = 10 #TRAIN_LENGTH // BATCH_SIZE
train_dataset = ds_train.map(load_image_train, num_parallel_calls=tf.data.AUTOTUNE)
train_dataset = train_dataset.cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE).repeat()
train_dataset = train_dataset.prefetch(buffer_size=tf.data.AUTOTUNE)
valid_dataset = ds_valid.map(load_image_test, num_parallel_calls=tf.data.AUTOTUNE)
valid_dataset = valid_dataset.batch(BATCH_SIZE).cache()
valid_dataset = valid_dataset.prefetch(buffer_size=tf.data.AUTOTUNE)
for im,lab in train_dataset.take(1):
break
###Output
_____no_output_____
###Markdown
Train a model
###Code
feature_extractor_model = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor_layer = hub.KerasLayer(
feature_extractor_model, input_shape=(224,224,3), trainable=False)
num_classes = 5
model = tf.keras.Sequential([
feature_extractor_layer,
tf.keras.layers.Dense(num_classes)
])
model.summary()
model.compile(
optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=tf.keras.metrics.SparseCategoricalAccuracy())
class CollectBatchStats(tf.keras.callbacks.Callback):
def __init__(self):
self.batch_losses = []
self.batch_acc = []
def on_train_batch_end(self, batch, logs=None):
self.batch_losses.append(logs['loss'])
self.batch_acc.append(logs['sparse_categorical_accuracy'])
self.model.reset_metrics()
batch_stats_callback = CollectBatchStats()
history = model.fit(train_dataset,
                    epochs=1,
steps_per_epoch=STEPS_PER_EPOCH,
validation_data = valid_dataset,
callbacks=[batch_stats_callback])
plt.figure()
plt.ylabel("Loss")
plt.xlabel("Training Steps")
plt.ylim([0,2])
plt.plot(batch_stats_callback.batch_losses)
plt.figure()
plt.ylabel("Accuracy")
plt.xlabel("Training Steps")
plt.ylim([0,1])
plt.plot(batch_stats_callback.batch_acc)
###Output
_____no_output_____ |
Jupyter_Notes/Lecture04_Sec1-4_AlgebraProperty_Sec1-5_SpecMat.ipynb | ###Markdown
Section 1.4 $\quad$ Algebraic Properties of Matrix Operations. 1. Properties of Matrix Addition. Let $A$, $B$, and $C$ be $m\times n$ matrices. - $A+B = $ - $A+(B+C)= $ - There is a unique $m\times n$ **zero matrix**, denoted by $O$, such that - For each $m\times n$ matrix $A$, there is a unique $m\times n$ matrix $D$ such that 2. Properties of Matrix Multiplication. Let $A$, $B$, and $C$ be matrices of appropriate sizes. - $A(BC) = $ - $(A+B)C = $ - $C(A+B) = $ **Remark:** Example 1. Let $$ A = \left[ \begin{array}{ccc} 2 & 2 & 3 \\ 3 & -1 & 2 \\ \end{array} \right],~~~~ B = \left[ \begin{array}{ccc} 0 & 0 & 1\\ 2 & 3 & -1\\ \end{array} \right],~~~\text{and}~~~ C = \left[ \begin{array}{cc} 1 & 0 \\ 2 & 2 \\ 3 &-1 \\ \end{array} \right]$$ Compute $(A+B)C$ and $AC+BC$.
###Code
from numpy import *
A = array([[2, 2, 3], [3, -1, 2]]);
B = array([[0, 0, 1], [2, 3, -1]]);
C = array([[1, 0], [2, 2], [3, -1]]);
dot(A + B, C)
from numpy import *
A = array([[2, 2, 3], [3, -1, 2]]);
B = array([[0, 0, 1], [2, 3, -1]]);
C = array([[1, 0], [2, 2], [3, -1]]);
dot(A, C) + dot(B, C)
###Output
_____no_output_____
###Markdown
3. Properties of Scalar Multiplication. Let $r$ and $s$ be real numbers. Let $A$ and $B$ be matrices of appropriate sizes. - $r(sA) =$ - $(r+s)A = $ - $r(A+B) = $ - $A(rB) = $ 4. Properties of Transpose. Let $r$ and $s$ be real numbers. Let $A$ and $B$ be matrices of appropriate sizes. - $(A^T)^T =$ - $(A+B)^T = $ - $(AB)^T = $ - $(rA)^T = $ **Questions** - Does $A^2 = O $ imply $A = O$? - Does $AB = AC$ imply $B = C$? Section 1.5 $\quad$ Special Types of Matrices. Diagonal Matrices - An $n\times n$ matrix $A = [a_{ij}]$ is called a $\underline{\hspace{1.5in}}$ if $\underline{\hspace{1.5in}}$. **Question:** Is the zero matrix $O$ a diagonal matrix? - If the diagonal elements of a diagonal matrix are equal, we call it a $\underline{\hspace{1.5in}}$. - If the diagonal elements of a diagonal matrix are all equal to $1$, we call it a $\underline{\hspace{1.5in}}$ and write it as $\underline{\hspace{1in}}$. **Property:** $A I_n = $ $\hspace{1.5in}$ $I_m A = $ Symmetric Matrices - An $n\times n$ matrix $A = [a_{ij}]$ is called $\underline{\hspace{1.5in}}$ if $\underline{\hspace{1.5in}}$. It is called $\underline{\hspace{2in}}$ if $\underline{\hspace{1.5in}}$. - A matrix $A$ with real entries is called $\underline{\hspace{1.5in}}$ if $\underline{\hspace{1.5in}}$. - A matrix $A$ with real entries is called $\underline{\hspace{2in}}$ if $\underline{\hspace{1.5in}}$. **Property:** Every square matrix can be decomposed as the sum of a symmetric matrix and a skew symmetric matrix. **Proof:** Nonsingular Matrices. An $n\times n$ matrix $A$ is called $\underline{\hspace{1.5in}}$ or $\underline{\hspace{1.5in}}$ if $\underline{\hspace{2in}}$. Such a matrix $B$ is called an $\underline{\hspace{1.5in}}$ of $A$ and denoted by $\underline{\hspace{1in}}$. If $A$ is not invertible, we call it $\underline{\hspace{1.5in}}$ or $\underline{\hspace{1.5in}}$. >**Theorem** The inverse of a matrix, if $\underline{\hspace{2in}}$ **Proof:** Example 2. Let $$ A = \left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right],~~~~~~~ B = \left[ \begin{array}{cc} 1 & 2 \\ 2 & 4 \\ \end{array} \right]$$ Find the inverse $A^{-1}$ and $B^{-1}$, if they exist.
###Code
from numpy import *
A = array([[1, 2], [3, 4]]);
linalg.inv(A)
from numpy import *
B = array([[1, 2], [2, 4]]);
linalg.inv(B)
###Output
_____no_output_____ |
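As a quick addendum to the two questions posed in the notes above: neither implication holds. The matrices below are example choices for illustration, checked numerically in the same style as the other cells.

```python
from numpy import *
A = array([[0, 1], [0, 0]])
print(dot(A, A))                 # A^2 = O even though A != O

A = array([[1, 0], [0, 0]])
B = array([[1, 2], [3, 4]])
C = array([[1, 2], [5, 6]])
print(dot(A, B), dot(A, C))      # AB = AC even though B != C
```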
python_helpers/python_helpers/Decypher_hints.ipynb | ###Markdown
Hints from the orgs
Fri, 09 July 08:49 UTC: ta.jvfblvbi Te6XDacTWXn
Thu, 08 July 16:30 UTC: vbtib.lafvj KE7ZFnZQ59D
Wed, 07 July 15:36 UTC: We strongly recommend participants this year to wear waterproof clothing and leave their phone in a locker.
###Code
pairs = [
["vbtib.lafvj", "KE7ZFnZQ59D"],
["ta.jvfblvbi", "Te6XDacTWXn"],
]
print("Hint length:", len(pairs[0][0]))
def str_ord(s):
return [ord(c) for c in s]
for p in pairs:
print(str_ord(p[0]), str_ord(p[1]))
for p in pairs:
print("".join(sorted(p[0])), "".join(sorted(p[1])))
for p in pairs:
print("".join([chr(ord(c1) ^ ord(c2)) for c1, c2 in zip(p[0], p[1])]))
for c in ['0', '9', 'A', 'Z', 'a', 'z']:
print(c, ord(c))
def get_bin(c):
return '{0:08b}'.format(ord(c))
print(get_bin('.'))
print(get_bin('n'))
s1, s2 = pairs[0]  # compare one hint/answer pair character by character
for c1, c2 in zip(s1, s2):
    print(get_bin(c1), get_bin(c2))
import base64
[int(b) for b in base64.b64decode(pairs[0][1] + '=')]
[int(b) for b in base64.b64decode(pairs[1][1] + '=')]
###Output
_____no_output_____ |
ml_association_rule.ipynb | ###Markdown
Load Data
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
groceries = pd.read_csv("data/groceries.csv")
pd.set_option('display.max_rows', 100)
display(groceries)
###Output
_____no_output_____
###Markdown
Clean
###Code
purchases_frequency_df = groceries["Items"].value_counts()
purchases_frequency_df = pd.DataFrame(pd.concat([pd.Series(purchases_frequency_df.index),
pd.Series(purchases_frequency_df.values)], axis=1))
purchases_frequency_df.columns = ["Items Purchased", "Frequency"]
purchases_df = pd.DataFrame(groceries["Items"])
purchases_df.columns = ["Items Purchased"]
groceries = groceries.drop(columns="Items")
groceries = groceries.fillna("none")
###Output
_____no_output_____
###Markdown
EDA
###Code
unique_items = []
items_frequency = []
for col in groceries:
col_items = groceries[col].unique()
all_items = groceries[col]
for itm in col_items:
unique_items.append(itm)
for all_itms in all_items:
items_frequency.append(all_itms)
unique_items = list(set(unique_items))
unique_items.remove("none")
print("Unique Items:", len(unique_items))
items_frequency = pd.Series([val for val in items_frequency if val != "none"])
items_frequency = items_frequency.value_counts()
items_frequency_df = pd.DataFrame(pd.concat([pd.Series(items_frequency.index), pd.Series(items_frequency.values)], axis=1))
items_frequency_df.columns = ["Item", "Frequency"]
display(items_frequency)
print(f"Total # of items purchased: {items_frequency.sum()}")
print(f"{items_frequency.index[0]} was purchased the most ({items_frequency.max()} purchases)")
print()
print(f"Mean # of times each item was purchased: {items_frequency.mean().round(0)}")
print(f"Stdandard Deviation of # of times each item was purchased: {items_frequency.std().round(0)}")
print()
print(f"Mean # of items purchased per customer: {purchases_df.mean()}")
print(f"Standard Deviation of # of items purchased per customer: {purchases_df.std()}")
print()
print(f"Min # of items purchased per customer: {purchases_df.min()}")
print(f"Max # of items purchased per customer: {purchases_df.max()}")
plt.figure(figsize=(14,8))
plt.bar(items_frequency[:30].index, items_frequency[:30], color="teal")
plt.xticks(rotation=75)
plt.xlabel("Item")
plt.ylabel("Frequency")
plt.title("Item Frequency (30 most frequent items)")
plt.show()
display(items_frequency_df)
plt.figure(figsize=(14,8))
plt.bar(purchases_frequency_df["Items Purchased"], purchases_frequency_df["Frequency"])
plt.xlabel("Items")
plt.ylabel("Frequency")
plt.title("Number of Items Purchased per Transaction Frequency")
plt.show()
###Output
_____no_output_____
###Markdown
Preprocess
###Code
groceries_list = groceries.values.tolist()
cleaned_list = []
for row in groceries_list:
new_row = [val for val in row if val != "none"]
cleaned_list.append(new_row)
from mlxtend.preprocessing import TransactionEncoder
enc = TransactionEncoder()
enc.fit(cleaned_list)
enc_groceries = pd.DataFrame(enc.transform(cleaned_list), columns=enc.columns_)
display(enc_groceries)
###Output
_____no_output_____
###Markdown
Model
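For reference, the thresholds used in the cell below rely on the standard itemset/rule metrics (textbook definitions, restated here): `min_support=0.01` keeps itemsets appearing in at least 1% of transactions, and `min_threshold=2.25` on `lift` keeps rules whose antecedent and consequent co-occur at least 2.25 times more often than expected under independence.

$$ \mathrm{support}(X) = \frac{|\{t \in T : X \subseteq t\}|}{|T|}, \qquad \mathrm{confidence}(X \Rightarrow Y) = \frac{\mathrm{support}(X \cup Y)}{\mathrm{support}(X)}, \qquad \mathrm{lift}(X \Rightarrow Y) = \frac{\mathrm{confidence}(X \Rightarrow Y)}{\mathrm{support}(Y)} . $$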
###Code
from mlxtend.frequent_patterns import apriori, association_rules
itemset = apriori(enc_groceries, min_support=0.01, use_colnames=True)
display(itemset)
rules = association_rules(itemset, metric="lift", min_threshold=2.25)
display(rules.head(10))
###Output
_____no_output_____ |
3_gabor/03_model_fitting__SNPE.ipynb | ###Markdown
Inference for the Gabor-GLM with SNPE: learning receptive field parameters from the inputs (white-noise videos) and outputs (spike trains) of linear-nonlinear neuron models with parameterized linear filters. - We fit a mixture-density network with convolutional layers to directly obtain posterior estimates from spike-triggered averages (STAs). - Two-stage fitting procedure: a first round identifies the rough region in parameter space by fitting a Gaussian posterior approximation; a second round identifies the exact posterior shape within that region by fitting an 8-component mixture of Gaussians. - This notebook imports a custom CDELFI which does custom initialization of the mixture components for the second round.
###Code
%%capture
%matplotlib inline
use_gpu = True
if use_gpu:
import os
os.environ['THEANO_FLAGS'] = "device=cuda0"
import theano
theano.config.floatX='float32'
import matplotlib.pyplot as plt
import numpy as np
import lasagne.nonlinearities as lnl
import dill as pickle
import delfi.distribution as dd
import delfi.generator as dg
import delfi.inference as infer
from support_files.CDELFI import CDELFI
import delfi.utils.io as io
from delfi.utils.viz import plot_pdf
from utils import get_maprf_prior_01, setup_sim, setup_sampler, \
get_data_o, quick_plot, contour_draws
from model.gabor_rf import maprf as model
from model.gabor_stats import maprfStats
seed = 42
def plot_hist_marginals(data, weights=None, lims=None, gt=None, upper=False, rasterized=False):
"""
Plots marginal histograms and pairwise scatter plots of a dataset.
"""
data = np.asarray(data)
n_bins = int(np.sqrt(data.shape[0]))
if data.ndim == 1:
fig, ax = plt.subplots(1, 1)
ax.hist(data, weights=weights, bins=n_bins, normed=True, rasterized=rasterized)
ax.set_ylim([0.0, ax.get_ylim()[1]])
ax.tick_params(axis='y', which='both', left=False, right=False, labelleft=False)
if lims is not None: ax.set_xlim(lims)
if gt is not None: ax.vlines(gt, 0, ax.get_ylim()[1], color='r')
else:
n_dim = data.shape[1]
fig = plt.figure()
if weights is None:
col = 'k'
vmin, vmax = None, None
else:
col = weights
vmin, vmax = 0., np.max(weights)
if lims is not None:
lims = np.asarray(lims)
lims = np.tile(lims, [n_dim, 1]) if lims.ndim == 1 else lims
for i in range(n_dim):
for j in range(i, n_dim) if upper else range(i + 1):
ax = fig.add_subplot(n_dim, n_dim, i * n_dim + j + 1)
if i == j:
ax.hist(data[:, i], weights=weights, bins=n_bins, normed=True, rasterized=rasterized)
ax.set_ylim([0.0, ax.get_ylim()[1]])
ax.tick_params(axis='y', which='both', left=False, right=False, labelleft=False)
if i < n_dim - 1 and not upper: ax.tick_params(axis='x', which='both', labelbottom=False)
if lims is not None: ax.set_xlim(lims[i])
if gt is not None: ax.vlines(gt[i], 0, ax.get_ylim()[1], color='r')
else:
ax.scatter(data[:, j], data[:, i], c=col, s=3, marker='o', vmin=vmin, vmax=vmax, cmap='binary', edgecolors='none', rasterized=rasterized)
if i < n_dim - 1: ax.tick_params(axis='x', which='both', labelbottom=False)
if j > 0: ax.tick_params(axis='y', which='both', labelleft=False)
if j == n_dim - 1: ax.tick_params(axis='y', which='both', labelright=True)
if lims is not None:
ax.set_xlim(lims[j])
ax.set_ylim(lims[i])
if gt is not None: ax.scatter(gt[j], gt[i], c='r', s=20, marker='o', edgecolors='none')
return fig
# observation, models
reload_obs_stats = False
if reload_obs_stats:
gtd = np.load('results/SNPE/toycell_6/ground_truth_data.npy', allow_pickle=True)[()]
obs_stats = gtd['obs_stats']
sim_info = np.load('results/sim_info.npy', allow_pickle=True)[()]
d, params_ls = sim_info['d'], sim_info['params_ls']
p = get_maprf_prior_01(params_ls)
import delfi.generator as dg
g = dg.Default(model=None, prior=p[0], summary=None)
else:
# result dirs
!mkdir -p results/
!mkdir -p results/SNPE/
!mkdir -p results/SNPE/toycell_6/
# training data and true parameters, data, statistics
idx_cell = 6 # load toy cell number 6 (cosine-shaped RF with 1Hz firing rate)
filename = 'results/toy_cells/toy_cell_' + str(idx_cell) + '.npy'
g, prior, d = setup_sim(seed, path='')
obs_stats, pars_true = get_data_o(filename, g, seed)
rf = g.model.params_to_rf(pars_true)[0]
# plot ground-truth receptive field
plt.imshow(rf, interpolation='None')
plt.show()
obs_stats, obs_stats[0,-1] # summary statistics: (STA , spike count (over 5 minutes simulation) )
np.save('results/SNPE/toycell_6/ground_truth_data',
{'obs_stats' : obs_stats, 'pars_true' : pars_true, 'rf' : rf})
# visualize RFs defined by prior-drawn parameters theta
contour_draws(g.prior, g, obs_stats, d=d)
print(obs_stats)
# network architecture: 9 layer network [5x conv, 3x fully conn., 1x MoG]
filter_sizes=[3,3,3,3,2] # 5 conv ReLU layers
n_filters=(16,16,32,32,32) # 16 to 32 filters
pool_sizes=[1,2,2,2,2] # pooling layers
n_hiddens=[50,50] # 2 fully connected layers per MAF
actfun=lnl.rectify # using ReLU's for fully connected layers
# N = 10k for first round
n_train = 10000
n_rounds = 2
# number of Gaussian components for final-round posterior estimate
# feature for CNN architectures: passing a value directly to the hidden layers (bypassing the conv layers).
# In this case, we pass the number of spikes (single number) directly, which allows to normalize the STAs
# and hence help out the conv layers. Without that extra input, we couldn't recover the RF gain anymore!
n_inputs_hidden = 1
# some learning-schedule parameters
lr_decay = 0.999 # learning-rate decay over epochs
epochs = 500 # number of epochs
minibatch=100 # minibatch-size for stochastic gradient descent
svi=False # whether to regularize the network weight. Large N should make this do very little anyways
reg_lambda=0.0 # regularization strength (not used if svi=False)
pilot_samples=1000 # z-scoring only applies to extra inputs (here: firing rate) directly fed to fully connected layers
prior_norm = False # normalizes prior scales to mean zero and unit variances.
# Helpful if parameter have vastly different scales.
init_norm = False # normalizes network intitialization. Not yet support for conv- and ReLU- layers
rank = None # no rank constraint on covariance matrices of posterior
n_mades = 5
act_fun = 'tanh'
mode = 'random'
rng = np.random
rng.seed(seed)
batch_norm= False
val_frac = 0.02
assert (n_train * val_frac) % minibatch == 0 # cannot deal with incomplete minibatches right now....
obs_stats[0,-1]
inf = infer.APT(
generator=g,
obs=obs_stats,
prior_norm=prior_norm, # PRIOR NORMALIZATION OFF
pilot_samples=pilot_samples,
seed=seed,
svi=False,
n_hiddens=n_hiddens,
n_filters=n_filters,
density='maf',
n_mades=n_mades,
maf_actfun=act_fun,
maf_mode=mode,
batch_norm=batch_norm,
n_inputs = d*d,
input_shape = (1,d,d),
n_bypass=1,
filter_sizes=filter_sizes,
pool_sizes=pool_sizes,
actfun=actfun,
verbose=True)
inf.network.aps[1].dtype
# print parameter numbers per layer (just weights, not biases)
def get_shape(i):
return inf.network.aps[i].get_value().shape
print([get_shape(i) for i in range(1,17,2)])
print([np.prod(get_shape(i)) for i in range(1,17,2)])
#run SNPE-C
print('fitting model with SNPC-C')
log, trn_data, posteriors = inf.run(
n_train=n_train,
epochs=epochs,
proposal='atomic',
n_atoms = minibatch - 1,
moo='resample',
n_rounds=n_rounds,
train_on_all=False,
minibatch=minibatch,
val_frac=val_frac,
silent_fail=False,
verbose=True,
print_each_epoch=True)
for r in range(len(posteriors)):
posterior = posteriors[r]
post_draws = posterior.gen(1000)
plot_prior = dd.TransformedNormal(m=g.prior.m, S = g.prior.S,
flags=[0,0,2,1,2,1,1,2,2],
lower=[0,0,0,0,0,0,0,-1,-1], upper=[0,0,np.pi,0,2*np.pi,0,0,1,1])
post_draws_trans = plot_prior._f(post_draws)
fig = plot_hist_marginals(data=post_draws_trans, weights=None,
lims=[[-1.5,1.5], [-1.1,1.1], [0,np.pi], [0, 2.5], [0,2*np.pi], [0,2], [0,4], [-1,1], [-1,1]],
gt=None, upper=True, rasterized=False)
fig.set_figwidth(16)
fig.set_figheight(16)
fig.show()
###Output
_____no_output_____
###Markdown
plot posteriors in original space (back-transformed). Fitting Gaussians on log-transformed (frequency, ratio, width) and logit-transformed (phase, angle, location) parameters gives log- resp. logit-normal marginals on the original parameters. The 9-dimensional joint distribution of all parameters can be transformed analytically.
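For reference, the back-transformation is the standard change of variables: if $z \sim \mathcal{N}(\mu, \sigma^2)$ and $\theta = e^{z}$, the marginal of $\theta$ is log-normal,

$$ p(\theta) = \frac{1}{\theta \sqrt{2\pi\sigma^{2}}} \exp\!\left(-\frac{(\log\theta - \mu)^{2}}{2\sigma^{2}}\right), \qquad \theta > 0, $$

and the logit-transformed parameters follow analogously, with the sigmoid output rescaled to the stated bounds.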
###Code
plot_prior = dd.TransformedNormal(m=g.prior.m, S = g.prior.S,
flags=[0,0,2,1,2,1,1,2,2],
lower=[0,0,0,0,0,0,0,-1,-1], upper=[0,0,np.pi,0,2*np.pi,0,0,1,1])
plot_post = dd.mixture.TransformedGaussianMixture.MoTG(
ms= [posterior.xs[i].m for i in range(posterior.n_components)],
Ss =[posterior.xs[i].S for i in range(posterior.n_components)],
a = posterior.a,
flags=[0,0,2,1,2,1,1,2,2],
lower=[0,0,0,0,0,0,0,-1,-1], upper=[0,0,np.pi,0,2*np.pi,0,0,1,1])
lims = np.array([[-2, -1.5, .001, 0, .001, 0, 0, -.999, -.999],
[ 2, 1.5, .999*np.pi, 3, 1.999*np.pi, 3, 3, .999, .999]]).T
fig, _ = plot_pdf(plot_post, pdf2=plot_prior, lims=lims, gt=plot_post._f(pars_true.reshape(1,-1)).reshape(-1),
figsize=(16,16), resolution=100,
labels_params=['bias', 'gain', 'logit phase', 'log freq', 'logit angle', 'log ratio', 'log width', 'xo', 'yo'])
###Output
_____no_output_____
###Markdown
show contours of posterior draws
###Code
lvls=[0.5, 0.5]
p = posterior
n_draws = 10
plt.figure(figsize=(6,6))
plt.imshow(obs_stats[0,:-1].reshape(d,d), interpolation='None', cmap='gray')
for i in range(n_draws):
rfm = g.model.params_to_rf(p.gen().reshape(-1))[0]
plt.contour(rfm, levels=[lvls[0]*rfm.min(), lvls[1]*rfm.max()])
#print(rfm.min(), rfm.max())
#plt.hold(True)
plt.title('RF posterior draws')
rfm = g.model.params_to_rf(pars_true.reshape(-1))[0]
plt.contour(rfm, levels=[lvls[0]*rfm.min(), lvls[1]*rfm.max()], colors='r')
plt.show()
###Output
_____no_output_____
###Markdown
store final results
###Code
round_ = 2
filename1 = 'results/SNPE/toycell_6/maprf_100k_prior01_run_1_round' + str(round_) + '_param9_nosvi_CDELFI.pkl'
filename2 = 'results/SNPE/toycell_6/maprf_100k_prior01_run_1_round' + str(round_) + '_param9_nosvi_CDELFI_res.pkl'
filename4 = 'results/SNPE/toycell_6/maprf_100k_prior01_run_1_round' + str(round_) + '_param9_nosvi_CDELFI_net_only.pkl'
io.save_pkl((log2, trn_data2, posteriors2),filename1)
net = inf.network
data = {'network.spec_dict' : net.spec_dict,
'network.params_dict' : net.params_dict }
io.save_pkl(data, filename4)
# key results for figure 3 in paper
np.save('results/SNPE/toycell_6/maprf_100k_prior01_run_1_round' + str(round_) + '_param9_nosvi_CDELFI_posterior',
{'posterior' : posteriors2[-1],
'proposal' : inf.generator.proposal,
'prior' : inf.generator.prior})
round_=2
p=np.load('results/SNPE/toycell_6/maprf_100k_prior01_run_1_round' + str(round_) + '_param9_nosvi_CDELFI_posterior.npy', allow_pickle=True)[()]
###Output
_____no_output_____ |
examples/seismic/tutorials/08_DRP_schemes.ipynb | ###Markdown
Custom finite difference coefficients in Devito. Introduction: When taking the numerical derivative of a function in Devito, the default behaviour is for 'standard' finite difference weights (obtained via a Taylor series expansion about the point of differentiation) to be applied. Consider the following example for some field $u(\mathbf{x},t)$, where $\mathbf{x}=(x,y)$. Let us define a computational domain/grid and differentiate our field with respect to $x$.
###Code
import numpy as np
from devito import Grid, TimeFunction
# Create our grid (computational domain)
Lx = 10
Ly = Lx
Nx = 11
Ny = Nx
dx = Lx/(Nx-1)
dy = dx
grid = Grid(shape=(Nx,Ny), extent=(Lx,Ly))
# Define u(x,y,t) on this grid
u = TimeFunction(name='u', grid=grid, time_order=2, space_order=2)
###Output
_____no_output_____
###Markdown
Now, let's look at the output of $\partial u/\partial x$:
###Code
print(u.dx)
###Output
-0.5*u(t, x - h_x, y)/h_x + 0.5*u(t, x + h_x, y)/h_x
###Markdown
By default the 'standard' Taylor series expansion result, where `h_x` represents the $x$-direction grid spacing, is returned. However, there may be instances when a user wishes to use 'non-standard' weights, for example when implementing a dispersion-relation-preserving (DRP) scheme. See e.g. [1] Christopher K.W. Tam, Jay C. Webb (1993). ”Dispersion-Relation-Preserving Finite Difference Schemes for Computational Acoustics.” **J. Comput. Phys.**, 107(2), 262--281. https://doi.org/10.1006/jcph.1993.1142 for further details. The use of such modified weights is facilitated in Devito via the 'symbolic' finite difference coefficients functionality. Let us start by re-defining the function $u(\mathbf{x},t)$ in the following manner:
###Code
u = TimeFunction(name='u', grid=grid, time_order=2, space_order=2, coefficients='symbolic')
###Output
_____no_output_____
###Markdown
Note the addition of the `coefficients='symbolic'` keyword. Now, when printing $\partial u/\partial x$ we obtain:
###Code
print(u.dx)
###Output
W(x, 1, u(t, x, y), x)*u(t, x, y) + W(x - h_x, 1, u(t, x, y), x)*u(t, x - h_x, y) + W(x + h_x, 1, u(t, x, y), x)*u(t, x + h_x, y)
###Markdown
Owing to the addition of the `coefficients='symbolic'` keyword the weights have been replaced by sympy functions. Now, take for example the weight `W(x - h_x, 1, u(t, x, y), x)`, the notation is as follows:* The first `x - h_x` refers to the spatial location of the weight w.r.t. the evaluation point `x`.* The `1` refers to the order of the derivative.* `u(t, x, y)` refers to the function with which the weight is associated.* Finally, the `x` refers to the dimension along which the derivative is being taken.Symbolic coefficients can then be manipulated using the Devito 'Coefficient' and 'Substitutions' objects. First, let us consider an example where we wish to replace the coefficients with a set of constants throughout the entire computational domain.
###Code
from devito import Coefficient, Substitutions # Import the Devito Coefficient and Substitutions objects
# Grab the grid spatial dimensions: Note x[0] will correspond to the x-direction and x[1] to y-direction
x = grid.dimensions
# Form a Coefficient object and then a replacement rules object (to pass to a Devito equation):
u_x_coeffs = Coefficient(1, u, x[0], np.array([-0.6, 0.1, 0.6]))
coeffs = Substitutions(u_x_coeffs)
###Output
_____no_output_____
###Markdown
Devito Coefficient objects take arguments in the following order: 1. Derivative order (in the above example this is the first derivative) 2. Function to which the coefficients 'belong' (in the above example this is the time function `u`) 3. Dimension along which the coefficients will be applied (in the above example this is the x-direction) 4. Coefficient data. Since, in the above example, the coefficients have been supplied as a 1-d numpy array, replacement will occur at the equation level. (Note that other options are in development and will be the subject of future notebooks.) Now, let's form a Devito equation, pass it the Substitutions object, and take a look at the output:
###Code
from devito import Eq
eq = Eq(u.dt+u.dx, coefficients=coeffs)
print(eq)
###Output
Eq(0.1*u(t, x, y) - 0.6*u(t, x - h_x, y) + 0.6*u(t, x + h_x, y) - u(t - dt, x, y)/(2*dt) + u(t + dt, x, y)/(2*dt), 0)
###Markdown
We see that in the above equation the standard weights for the first derivative of `u` in the $x$-direction have now been replaced with our user-defined weights. Note that since no replacement rules were defined for the time derivative (`u.dt`), standard weights have replaced the symbolic weights. Now, let us consider a more complete example. Example: Finite difference modeling for a large velocity-contrast acoustic wave model. It is advised to read through the 'Introduction to seismic modelling' notebook located in devito/examples/seismic/tutorials/01_modelling.ipynb before proceeding with this example, since much introductory material will be omitted here. The example now considered is based on an example introduced in [2] Yang Liu (2013). ”Globally optimal finite-difference schemes based on least squares.” **GEOPHYSICS**, 78(4), 113--132. https://doi.org/10.1006/jcph.1993.1142. See figure 18 of [2] for further details. Note that here we will simply use Devito to 'reproduce' the simulations leading to two results presented in the aforementioned figure. No analysis of the results will be carried out. The domain under consideration has a spatial extent of $2km \times 2km$ and, letting $x$ be the horizontal coordinate and $z$ the depth, a velocity profile such that $v_1(x,z)=1500ms^{-1}$ for $z\leq1200m$ and $v_2(x,z)=4000ms^{-1}$ for $z>1200m$.
###Code
from examples.seismic import Model, plot_velocity
%matplotlib inline
# Define a physical size
Lx = 2000
Lz = Lx
h = 10
Nx = int(Lx/h)+1
Nz = Nx
shape = (Nx, Nz) # Number of grid point
spacing = (h, h) # Grid spacing in m. The domain size is now 2km by 2km
origin = (0., 0.)
# Define a velocity profile. The velocity is in km/s
v = np.empty(shape, dtype=np.float32)
v[:, :121] = 1.5
v[:, 121:] = 4.0
# With the velocity and model size defined, we can create the seismic model that
# encapsulates these properties. We also define the size of the absorbing layer as 10 grid points
nbpml = 10
model = Model(vp=v, origin=origin, shape=shape, spacing=spacing,
space_order=20, nbpml=nbpml)
plot_velocity(model)
###Output
_____no_output_____
###Markdown
The seismic wave source term will be modelled as a Ricker Wavelet with a peak-frequency of $25$Hz located at $(1000m,800m)$. Before applying the DRP scheme, we begin by generating a 'reference' solution using a spatially high-order standard finite difference scheme and time step well below the model's critical time-step. The scheme will be 2nd order in time.
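For reference (the standard definition of the wavelet, not something re-derived in this notebook), a Ricker wavelet with peak frequency $f_0$ centred at $t_0$ is

$$ r(t) = \left(1 - 2\pi^{2} f_{0}^{2} (t - t_{0})^{2}\right) e^{-\pi^{2} f_{0}^{2} (t - t_{0})^{2}} . $$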
###Code
from examples.seismic import TimeAxis
t0 = 0. # Simulation starts a t=0
tn = 500. # Simulation lasts 0.5 seconds (500 ms)
dt = 0.2 # Time step of 0.2ms
time_range = TimeAxis(start=t0, stop=tn, step=dt)
#NBVAL_IGNORE_OUTPUT
from examples.seismic import RickerSource
f0 = 0.025 # Source peak frequency is 25Hz (0.025 kHz)
src = RickerSource(name='src', grid=model.grid, f0=f0,
npoint=1, time_range=time_range)
# First, position source centrally in all dimensions, then set depth
src.coordinates.data[0, :] = np.array(model.domain_size) * .5
src.coordinates.data[0, -1] = 800. # Depth is 800m
# We can plot the time signature to see the wavelet
src.show()
###Output
_____no_output_____
###Markdown
Now let us define our wavefield and PDE:
###Code
# Define the wavefield with the size of the model and the time dimension
u = TimeFunction(name="u", grid=model.grid, time_order=2, space_order=20)
# We can now write the PDE
pde = model.m * u.dt2 - u.laplace + model.damp * u.dt
# This discrete PDE can be solved in a time-marching way updating u(t+dt) from the previous time step
# Devito as a shortcut for u(t+dt) which is u.forward. We can then rewrite the PDE as
# a time marching updating equation known as a stencil using customized SymPy functions
from devito import solve
stencil = Eq(u.forward, solve(pde, u.forward))
# Finally we define the source injection and receiver read function to generate the corresponding code
src_term = src.inject(field=u.forward, expr=src * dt**2 / model.m)
###Output
_____no_output_____
###Markdown
Now, let's create the operator and execute the time marching scheme:
###Code
#NBVAL_IGNORE_OUTPUT
from devito import Operator
op = Operator([stencil] + src_term, subs=model.spacing_map)
#NBVAL_IGNORE_OUTPUT
op(time=time_range.num-1, dt=dt)
###Output
Operator `Kernel` run in 9.62 s
###Markdown
And plot the result:
###Code
#import matplotlib
import matplotlib.pyplot as plt
from matplotlib import cm
Lx = 2000
Lz = 2000
abs_lay = nbpml*h
dx = h
dz = dx
X, Z = np.mgrid[-abs_lay: Lx+abs_lay+1e-10: dx, -abs_lay: Lz+abs_lay+1e-10: dz]
levels = 100
fig = plt.figure(figsize=(14, 7))
ax1 = fig.add_subplot(111)
cont = ax1.contourf(X,Z,u.data[0,:,:], levels, cmap=cm.binary)
fig.colorbar(cont)
ax1.axis([0, Lx, 0, Lz])
ax1.set_xlabel('$x$')
ax1.set_ylabel('$z$')
ax1.set_title('$u(x,z,500)$')
plt.gca().invert_yaxis()
plt.show()
###Output
_____no_output_____
###Markdown
We will now reimplement the above model applying the DRP scheme presented in [2]. First, since we wish to apply different custom FD coefficients in the upper and lower layers, we need to define these two 'subdomains' using the Devito `SubDomain` functionality:
###Code
from devito import SubDomain
# Define our 'upper' and 'lower' SubDomains:
class Upper(SubDomain):
name = 'upper'
def define(self, dimensions):
x, z = dimensions
# We want our upper layer to span the entire x-dimension and all
# but the bottom 80 (+boundary layer) cells in the z-direction, which is achieved via
# the following notation:
return {x: x, z: ('left', 80+nbpml)}
class Lower(SubDomain):
name = 'lower'
def define(self, dimensions):
x, z = dimensions
# We want our lower layer to span the entire x-dimension and all
# but the top 121 (+boundary layer) cells in the z-direction.
return {x: x, z: ('right', 121+nbpml)}
# Create these subdomains:
ur = Upper()
lr = Lower()
###Output
_____no_output_____
###Markdown
We now create our model incorporating these subdomains:
###Code
# Our scheme will now be 10th order (or less) in space.
order = 10
# Create our model passing it our 'upper' and 'lower' subdomains:
model = Model(vp=v, origin=origin, shape=shape, spacing=spacing,
space_order=order, nbpml=nbpml, subdomains=(ur,lr))
###Output
_____no_output_____
###Markdown
And re-define model related objects. Note that now our wave-field will be defined with `coefficients='symbolic'`.
###Code
t0 = 0.  # Simulation starts at t=0
tn = 500.  # Simulation lasts 0.5 seconds (500 ms)
dt = 1.0 # Time step of 1.0ms
time_range = TimeAxis(start=t0, stop=tn, step=dt)
f0 = 0.025 # Source peak frequency is 25Hz (0.025 kHz)
src = RickerSource(name='src', grid=model.grid, f0=f0,
npoint=1, time_range=time_range)
src.coordinates.data[0, :] = np.array(model.domain_size) * .5
src.coordinates.data[0, -1] = 800. # Depth is 800m
# New wave-field
u_DRP = TimeFunction(name="u_DRP", grid=model.grid, time_order=2, space_order=order, coefficients='symbolic')
###Output
_____no_output_____
###Markdown
We now create a stencil for each of our 'Upper' and 'Lower' subdomains defining different custom FD weights within each of these subdomains.
###Code
# The underlying pde is the same in both subdomains
pde_DRP = model.m * u_DRP.dt2 - u_DRP.laplace + model.damp * u_DRP.dt
# Define our custom FD coefficients:
x, z = model.grid.dimensions
# Upper layer
weights_u = np.array([ 2.00462e-03, -1.63274e-02, 7.72781e-02,
-3.15476e-01, 1.77768e+00, -3.05033e+00,
1.77768e+00, -3.15476e-01, 7.72781e-02,
-1.63274e-02, 2.00462e-03])
# Lower layer
weights_l = np.array([ 0. , 0. , 0.0274017,
-0.223818, 1.64875 , -2.90467,
1.64875 , -0.223818, 0.0274017,
0. , 0. ])
# Create the Devito Coefficient objects:
ux_u_coeffs = Coefficient(2, u_DRP, x, weights_u/x.spacing**2)
uz_u_coeffs = Coefficient(2, u_DRP, z, weights_u/z.spacing**2)
ux_l_coeffs = Coefficient(2, u_DRP, x, weights_l/x.spacing**2)
uz_l_coeffs = Coefficient(2, u_DRP, z, weights_l/z.spacing**2)
# And the replacement rules:
coeffs_u = Substitutions(ux_u_coeffs,uz_u_coeffs)
coeffs_l = Substitutions(ux_l_coeffs,uz_l_coeffs)
# Create a stencil for each subdomain:
stencil_u = Eq(u_DRP.forward, solve(pde_DRP, u_DRP.forward), subdomain = model.grid.subdomains['upper'],
coefficients=coeffs_u)
stencil_l = Eq(u_DRP.forward, solve(pde_DRP, u_DRP.forward), subdomain = model.grid.subdomains['lower'],
coefficients=coeffs_l)
# Source term:
src_term = src.inject(field=u_DRP.forward, expr=src * dt**2 / model.m)
# Create the operator, incoporating both upper and lower stencils:
op = Operator([stencil_u, stencil_l] + src_term, subs=model.spacing_map)
###Output
_____no_output_____
###Markdown
And now execute the operator:
###Code
op(time=time_range.num-1, dt=dt)
###Output
Operator `Kernel` run in 0.65 s
###Markdown
And plot the new results:
###Code
fig = plt.figure(figsize=(14, 7))
ax1 = fig.add_subplot(111)
cont = ax1.contourf(X,Z,u_DRP.data[0,:,:], levels, cmap=cm.binary)
fig.colorbar(cont)
ax1.axis([0, Lx, 0, Lz])
ax1.set_xlabel('$x$')
ax1.set_ylabel('$z$')
ax1.set_title('$u_{DRP}(x,z,500)$')
plt.gca().invert_yaxis()
plt.show()
###Output
_____no_output_____
###Markdown
Finally, for comparison, let's plot the difference between the standard 20th order and optimized 10th order models:
###Code
fig = plt.figure(figsize=(14, 7))
ax1 = fig.add_subplot(111)
cont = ax1.contourf(X,Z,abs(u_DRP.data[0,:,:]-u.data[0,:,:]), levels, cmap=cm.binary)
fig.colorbar(cont)
ax1.axis([0, Lx, 0, Lz])
ax1.set_xlabel('$x$')
ax1.set_ylabel('$z$')
plt.gca().invert_yaxis()
plt.show()
###Output
_____no_output_____ |
.ipynb_checkpoints/HyperFoods-checkpoint.ipynb | ###Markdown
HyperFoods: Recipe Retrieval w/ a Higher Number of Anti-Cancer Molecules. Each recipe had all of its ingredients concatenated in a single string. The ingredient vocabulary of the dataset was used to filter what were and what weren't ingredient names in each string. Finally, the sum of the number of anti-cancer molecules present in each recipe was calculated using the table food_compound.csv. A DataFrame object was created so that it not only shows the ID of each recipe, but also its number of anti-cancer molecules, along with a URL to the recipe's location online (a small sketch of this counting procedure is given below, before the imports). Importing Modules. Importing libraries installed using PyPI and functions from scripts created for this project.
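A minimal sketch of the counting procedure described above; the vocabulary, the per-ingredient molecule counts and the parsing of recipe strings are invented stand-ins, not the project's actual data or code.

```python
# Sketch only: vocabulary and molecule counts below are stand-ins for the
# dataset's ingredient vocabulary and the food_compound.csv table.
vocabulary = {"garlic", "broccoli", "green tea"}
molecules_per_ingredient = {"garlic": 12, "broccoli": 9, "green tea": 15}

def count_anticancer_molecules(recipe_string):
    # keep only tokens that are known ingredient names, then sum their molecule counts
    tokens = [t.strip() for t in recipe_string.split(",")]
    ingredients = [t for t in tokens if t in vocabulary]
    return sum(molecules_per_ingredient.get(t, 0) for t in ingredients)

print(count_anticancer_molecules("garlic, sugar, broccoli"))  # -> 21
```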
###Code
# ---------------------------- Data Management ----------------------------
# pandas is an open source library providing high-performance, easy-to-use data structures and data
# analysis tools for the Python programming language.
import pandas
# ---------------------------- Scientific Operations ----------------------------
# NumPy is the fundamental package for scientific computing with Python. It contains among other things: a powerful
# N-dimensional array object, sophisticated (broadcasting) functions, tools for integrating C/C++ and Fortran code,
# useful linear algebra, Fourier transform, and random number capabilities.
import numpy
# ---------------------------- Write & Read JSON Files ----------------------------
# Python has a built-in package which can be used to work with JSON data.
import json
# ---------------------------- Pickling ----------------------------
# The pickle module implements binary protocols for serializing and de-serializing a Python object structure. “Pickling”
# is the process whereby a Python object hierarchy is converted into a byte stream, and “unpickling” is the inverse
# operation, whereby a byte stream (from a binary file or bytes-like object) is converted back into an object hierarchy.
import pickle
# ------------------------------------- Word2Vec -------------------------------------
# Word2Vec is a group of related models that are used to produce word embeddings. These models are shallow, two-layer neural
# networks that are trained to reconstruct linguistic contexts of words. Word2vec takes as its input a large corpus of
# text and produces a vector space, typically of several hundred dimensions, with each unique word in the corpus being
# assigned a corresponding vector in the space. Word vectors are positioned in the vector space such that words that
# share common contexts in the corpus are located close to one another in the space.
# Gensim is a Python library for topic modelling, document indexing and similarity retrieval with large corpora. Target
# audience is the natural language processing (NLP) and information retrieval (IR) community.
import gensim
from gensim.models import Word2Vec
# -------------------------- Dimensionality Reduction Tools --------------------------
# Scikit-learn (also known as sklearn) is a free software machine learning library for the
# Python programming language.It features various classification, regression and clustering algorithms including
# support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with
# the Python numerical and scientific libraries NumPy and SciPy.
# Principal component analysis (PCA) - Linear dimensionality reduction using Singular Value Decomposition of the data to
# project it to a lower dimensional space. The input data is centered but not scaled for each feature before applying
# the SVD.
# t-distributed Stochastic Neighbor Embedding (t-SNE) - It is a tool to visualize high-dimensional data. It converts
# similarities between data points to joint probabilities and tries to minimize the Kullback-Leibler divergence between
# the joint probabilities of the low-dimensional embedding and the high-dimensional data. t-SNE has a cost function that
# is not convex, i.e. with different initializations we can get different results.
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
# ------------------------------ Check File Existance -------------------------------
# The main purpose of the OS module is to interact with the operating system. Its primary use consists in
# creating folders, removing folders, moving folders, and sometimes changing the working directory.
import os
from os import path
# ------------------------ Designed Visualization Functions -------------------------
# Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats
# and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython
# shells, the Jupyter notebook, web application servers, and four graphical user interface toolkits.
# Plotly's Python graphing library makes interactive, publication-quality graphs. You can use it to make line plots,
# scatter plots, area charts, bar charts, error bars, box plots, histograms, heatmaps, subplots, multiple-axes, polar
# charts, and bubble charts.
# Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing
# attractive and informative statistical graphics.
from algorithms.view.matplotlib_designed import matplotlib_function
from algorithms.view.plotly_designed import plotly_function
from algorithms.view.seaborn_designed import seaborn_function
# ------------------------ Retrieving Ingredients, Units and Quantities -------------------------
from algorithms.parsing.ingredient_quantities import ingredient_quantities
# ------------------------ Create Distance Matrix -------------------------
# SciPy is a free and open-source Python library used for scientific and technical computing. SciPy contains modules for
# optimization, linear algebra, integration, interpolation, special functions, FFT, signal and image processing, ODE
# solvers and other tasks common in science and engineering.
# distance_matrix returns the matrix of all pair-wise distances.
from scipy.spatial import distance_matrix
# ------------------------ Unsupervised Learning -------------------------
#
from clustering.infomapAlgorithm import infomap_function # Infomap algorithm detects communities in large networks with the map equation framework.
from sklearn.cluster import DBSCAN # DBSCAN
from sklearn.cluster import MeanShift # Meanshift
import community # Louvain
# ------------------------ Supervised Learning -------------------------
from sklearn import svm
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import LeaveOneOut
# ------------------------ Jupyter Notebook Widgets -------------------------
# Interactive HTML widgets for Jupyter notebooks and the IPython kernel.
import ipywidgets as w
from IPython.core.display import display
from IPython.display import Image
# ------------------------ IoU Score -------------------------
# The Jaccard index, also known as Intersection over Union and the Jaccard similarity coefficient (originally given the
# French name coefficient de communauté by Paul Jaccard), is a statistic used for gauging the similarity and diversity
# of sample sets. The Jaccard coefficient measures similarity between finite sample sets, and is defined as the size of
# the intersection divided by the size of the union of the sample sets.
# Function implemented during this project.
from benchmark.iou_designed import iou_function
# ------------------------ F1 Score -------------------------
# The F1 score can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its best
# value at 1 and worst score at 0. The relative contribution of precision and recall to the F1 score are equal. The
# formula for the F1 score is: F1 = 2 * (precision * recall) / (precision + recall)
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.metrics import f1_score
# ------------------------ API Requests -------------------------
# The requests library is the de facto standard for making HTTP requests in Python. It abstracts the complexities of
# making requests behind a beautiful, simple API so that you can focus on interacting with services and consuming data
# in your application.
import requests
# ------------------------ RegEx -------------------------
# A RegEx, or Regular Expression, is a sequence of characters that forms a search pattern.
# RegEx can be used to check if a string contains the specified search pattern.
# Python has a built-in package called re, which can be used to work with Regular Expressions.
import re
# ------------------------ Inflect -------------------------
# Correctly generate plurals, singular nouns, ordinals, indefinite articles; convert numbers to words.
import inflect
# ------------------------ Parse URLs -------------------------
# This module defines a standard interface to break Uniform Resource Locator (URL) strings up in components (addressing
# scheme, network location, path etc.), to combine the components back into a URL string, and to convert a “relative URL”
# to an absolute URL given a “base URL.”
from urllib.parse import urlparse
# ------------------------ Embedding HTML -------------------------
# Public API for display tools in IPython.
from IPython.display import HTML
# ------------------------ Creating Graph -------------------------
# NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of
# complex networks.
import networkx
# ------------------------ Language Detectors -------------------------
# TextBlob requires an API connection to Google's translation tool (low limit on the number of requests). langdetect is an offline detector.
from textblob import TextBlob
from langdetect import detect
# ------------------------ Punctuation -------------------------
# In Python, string.punctuation gives the full set of punctuation characters: !"#$%&'()*+, -./:;<=>?@[\]^_`{|}~
import string
# ------------------------ CSV Reader -------------------------
# CSV (Comma Separated Values) format is the most common import and export format for spreadsheets and databases.
import csv
# ------------------------ Natural Language Processing -------------------------
#
import nltk
#nltk.download()
from nltk.corpus import stopwords, wordnet
import webcolors
from nltk.corpus import wordnet
###Output
_____no_output_____
###Markdown
Recipe1M+ Dataset
###Code
# ---------------------------- Importing Recipe1M+ Dataset ----------------------------
f = open('./data/recipe1M+/layer1.json')
recipes_data = (json.load(f))#[0:100000] # Regular computer able to read Recipe1M+ full dataset.
f.close()
id_ingredients = {}
id_url = {}  # Built here because it is used below and again in the scoring section.
for recipe in recipes_data:
id_ingredients[recipe["id"]] = []
id_url[recipe["id"]] = recipe["url"]
for index, ingredient in enumerate(recipe["ingredients"]):
id_ingredients[recipe["id"]].append({"id": index, "ingredient": (ingredient["text"]).lower()})
# ---------------------------- Details Recipe1M+ ----------------------------
# Online websites parsed to retrieve recipes.
recipe_databases = []
for key, value in id_url.items():
parsed_uri = urlparse(value)
result = '{uri.scheme}://{uri.netloc}'.format(uri=parsed_uri)
recipe_databases.append(result)
list(set(recipe_databases)) # The common approach to get a unique collection of items is to use a set. Sets are
# unordered collections of distinct objects. To create a set from any iterable, you can simply pass it to the built-in
# set() function. If you later need a real list again, you can similarly pass the set to the list() function.
with open('./data/allRecipeDatabases.txt', 'w') as f:
for item in list(set(recipe_databases)):
f.write("%s\n" % item)
###Output
_____no_output_____
###Markdown
Recipe1M+ Dataset Errors Corrected
###Code
# ---------------------------- Deleting Empty Instructions and Ingredients ----------------------------
modified_recipes_data = recipes_data
for key, recipe in enumerate(recipes_data):
for key2, ingredient in enumerate(recipe["ingredients"]):
if not ingredient["text"].translate({ord(ii): None for ii in (string.punctuation + "0123456789")}):
modified_recipes_data[key]["ingredients"].remove(recipes_data[key]["ingredients"][key2])
for key3, instruction in enumerate(recipe["instructions"]):
if not instruction["text"].translate({ord(ii): None for ii in (string.punctuation + "0123456789")}):
modified_recipes_data[key]["instructions"].remove(recipes_data[key]["instructions"][key3])
# ---------------------------- Deleting Empty Recipes ----------------------------
modified_modified_recipes_data = modified_recipes_data
for key, recipe in enumerate(modified_recipes_data):
if recipe["ingredients"] or recipe["instructions"]:
continue
else:
print("error")
print(recipe)
modified_modified_recipes_data.remove(modified_recipes_data[key])
# ---------------------------- Removing Double Spaces within Recipes ----------------------------
modified_modified_modified_recipes_data = recipes_data
for key, recipe in enumerate(recipes_data):
for key2, ingredient in enumerate(recipe["ingredients"]):
if " " in ingredient["text"]:
#modified_modified_modified_recipes_data[key]["ingredients"].replace(" ", " ")
print("error")
for key3, instruction in enumerate(recipe["instructions"]):
if " " in instruction["text"]:
#modified_modified_modified_recipes_data[key]["instructions"].replace(" ", " ")
print("error")
# ---------------------------- Deleting Non-English Recipes ----------------------------
true_recipes_positions = []
for key, recipe in enumerate(recipes_data):
joint_ingredients = ""
step1 = True
step2 = False
key2 = 0
while key2 < len(recipe["ingredients"]):
#b = TextBlob(modified_recipes_data[key]["instructions"][0]["text"])
#print(detect(ingredient["text"] + "a"))
#joint_ingredients = joint_ingredients + " " + ingredient["text"]
#print(joint_ingredients)
if step1 and len(recipe["ingredients"][key2]["text"].split(" ")) > 1 and detect(recipe["ingredients"][key2]["text"] + "a") == "en":
#if b.detect_language() == "en":
#print("en")
true_recipes_positions.append(key)
break
elif step1 and key2 == len(recipe["ingredients"]) - 1:
step2 = True
step1 = False
key2 = -1
if step2 and key2 == len(recipe["ingredients"]) - 1 and TextBlob(recipe["ingredients"][key2]["text"]).detect_language() == "en":
true_recipes_positions.append(key)
print(str(key) + "normal")
break
elif step2 and key2 == len(recipe["ingredients"]) - 1:
print(str(key) + "error")
key2 = key2 + 1
#print(recipes_data[399])
#print(true_recipes_positions)
print(recipes_data[1351])
print(recipes_data[1424])
print(recipes_data[1935])
print(recipes_data[2180])
print(recipes_data[2459])
print(recipes_data[3481])
for key, recipe in enumerate(recipes_data):
if key == 1351 or key == 1424 or key == 2180 or key == 2459:
print(recipe)
print(true_recipes_positions)
# ---------------------------- Correcting Fractions in Food.com ----------------------------
relative_units = {"cup": 240, "cups": 240, "c.": 240, "tablespoon": 15, "tablespoons": 15, "bar": 150, "bars": 150, "lump": 5, "lumps": 5, "piece": 25, "pieces": 25, "portion": 100, "portions": 100, "slice": 10, "slices": 10, "teaspoon": 5, "teaspoons": 5, "tbls": 15, "tsp": 5, "jar": 250, "jars": 250, "pinch": 1, "pinches": 1, "dash": 1, "can": 330, "box": 250, "boxes": 250, "small": 250, "medium": 500, "large": 750, "big": 750, "sprig": 0.1, "sprigs": 0.1, "bunch": 100, "bunches": 100, "leaves": 0.1, "packs": 100, "packages": 100, "pck": 100, "pcks": 100, "stalk": 0.1}
modified_modified_modified_recipes_data = modified_modified_recipes_data
for key, recipe in enumerate(modified_modified_recipes_data):
if (".food.com" or "/food.com") in recipe["url"]:
for key2, ingredient in enumerate(recipe["ingredients"]):
fraction_boolean = re.search(r"[1-5][2-9]", ingredient["text"])
if fraction_boolean:
number = fraction_boolean.group()
split_ingredient_list = (ingredient["text"].split(" "))
for index, token in enumerate(split_ingredient_list):
if index == len(split_ingredient_list) - 1: break
if token == number and split_ingredient_list[index + 1] in list(relative_units.keys()):
# Rewrite e.g. "12" as the intended fraction "1/2" directly in the token list.
split_ingredient_list[index] = token[0] + "/" + token[1]
modified_modified_modified_recipes_data[key]["ingredients"][key2]["text"] = " ".join(split_ingredient_list)
# ---------------------------- Exporting Corrected Recipe Dataset ----------------------------
with open('./data/recipe1M+/noEmptyIngredientsOrInstructions/noEmpty(IngredientOrInstruction)Recipes/modified_modified_recipes_data.json', 'w') as json_file:
json.dump(modified_modified_recipes_data, json_file)
###Output
_____no_output_____
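###Markdown
A toy illustration of the Food.com fraction fix above, on a single hypothetical ingredient line (a sketch; it assumes the relative_units dictionary defined in the cell above):
###Code
line = "12 cup sugar"  # hypothetical scraped line where "12" actually means "1/2"
tokens = line.split(" ")
match = re.search(r"[1-5][2-9]", line)
if match:
    for i, token in enumerate(tokens[:-1]):
        if token == match.group() and tokens[i + 1] in relative_units:
            tokens[i] = token[0] + "/" + token[1]  # rebuild the fraction
print(" ".join(tokens))  # expected: 1/2 cup sugar
###Output
_____no_output_____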
###Markdown
Natural Language Processing
Creating Units Vocabulary
###Code
p = inflect.engine()
with open('./vocabulary/ingr_vocab.pkl', 'rb') as f: # Includes every ingredient present in the dataset.
ingredients_list = pickle.load(f)
f = open('./data/recipe1M+/layer11.json')
original_recipes_data = (json.load(f))#[0:100000]
f.close()
units_list_temp = set()
def get_units(ingredient_text_input, number_input):
split_ingredient_list2 = ingredient_text_input.replace("/", " ").replace("-", " ").translate({ord(ii): None for ii in string.punctuation.replace(".", "")}).lower().split(" ")
print(split_ingredient_list2)
for number_input_it in number_input:
for iji in range(len(split_ingredient_list2) - 1):
if split_ingredient_list2[iji] == number_input_it and re.search(r"[0-9]", split_ingredient_list2[iji + 1]) is None and re.search(r".\b", split_ingredient_list2[iji + 1]) is None:
units_list_temp.add(split_ingredient_list2[iji + 1])
break
for original_recipes_data_it in original_recipes_data:
for ingredient_it in original_recipes_data_it["ingredients"]:
# search_number = re.search(r"\d", ingredient_text)
number_array = re.findall(r"\d", ingredient_it["text"])
if number_array:
# search_number.group() # [0-9]|[0-9][0-9]|[0-9][0-9][0-9]|[0-9][0-9][0-9][0-9]
get_units(ingredient_it["text"], number_array)
units_list = list(units_list_temp)
units_list.sort()
print(units_list)
# Save a dictionary into a txt file.
with open('./vocabulary/units_list.txt', 'w') as f:
for item in units_list:
if item != "<end>" and item != "<pad>":
f.write("%s\n" % item)
#for jj, ingredients_list_it in enumerate(ingredients_list):
#if predicted_unit in ingredients_list_it or predicted_unit in p.plural(ingredients_list_it):
#break
#elif jj == len(ingredients_list) - 1:
hey = [0, 4, 1, 4, 9]
print(set(hey))
print(0 in set(hey))
for e in set(hey):
print(e)
p = inflect.engine()
with open('./vocabulary/ingr_vocab.pkl', 'rb') as f: # Includes every ingredient present in the dataset.
ingredients_list = pickle.load(f)
lineList = [line.rstrip('\n') for line in open('./vocabulary/units_list.txt')]
print(lineList)
final_units = []
for unit in lineList:
for index, ingredients_list_it in enumerate(ingredients_list):
if unit == ingredients_list_it or unit == p.plural(ingredients_list_it):
break
elif index == len(ingredients_list) - 1:
final_units.append(unit)
print(len(final_units))
# Save a dictionary into a txt file.
with open('./vocabulary/units_list_final.txt', 'w') as f:
for item in final_units:
if item != "<end>" and item != "<pad>":
f.write("%s\n" % item)
food = wordnet.synset('food.n.02')
print("red" in webcolors.CSS3_NAMES_TO_HEX)
with open("./vocabulary/units_list_final - cópia.txt") as f:
content = f.readlines()
# you may also want to remove whitespace characters like `\n` at the end of each line
lines = [x.strip() for x in content]
filtered_stopwords = [word for word in lines if word not in stopwords.words('english')]
filtered_verbs_adjectives_adverbs = []
for w in filtered_stopwords:
if wordnet.synsets(w) and wordnet.synsets(w)[0].pos() != "v" and wordnet.synsets(w)[0].pos() != "a" and wordnet.synsets(w)[0].pos() != "r" and w not in webcolors.CSS3_NAMES_TO_HEX and w not in list(set([w for s in food.closure(lambda s:s.hyponyms()) for w in s.lemma_names()])):
filtered_verbs_adjectives_adverbs.append(w)
elif wordnet.synsets(w) == []:
filtered_verbs_adjectives_adverbs.append(w)
print(filtered_stopwords)
print(len(lines))
print(len(filtered_stopwords))
print(len(filtered_verbs_adjectives_adverbs))
# Save a dictionary into a txt file.
with open('./vocabulary/units_list_final_filtered.txt', 'w') as f:
for item in filtered_verbs_adjectives_adverbs:
if item != "<end>" and item != "<pad>":
f.write("%s\n" % item)
food = wordnet.synset('food.n.02')
len(list(set([w for s in food.closure(lambda s:s.hyponyms()) for w in s.lemma_names()])))
list(set([w for s in food.closure(lambda s:s.hyponyms()) for w in s.lemma_names()]))
###Output
_____no_output_____
###Markdown
Retrieving Ingredients, Units and Quantities from Recipe1M+
###Code
# ---------------------------- Creating Vocabulary to Import Units ----------------------------
absolute_units = {"litre": 1000, "litres": 1000, "ounce": 28, "ounces": 28, "gram": 1, "grams": 1, "grm": 1, "kg": 1000, "kilograms": 1000, "ml": 1, "millilitres": 1, "oz": 28, "l": 1000, "g": 1, "lbs": 454, "pint": 568, "pints": 568, "lb": 454, "gallon": 4546, "gal": 4546, "quart": 1137, "quarts": 1137}
relative_units = {"cup": 240, "cups": 240, "c.": 240, "tablespoon": 15, "tablespoons": 15, "bar": 150, "bars": 150, "lump": 5, "lumps": 5, "piece": 25, "pieces": 25, "portion": 100, "portions": 100, "slice": 10, "slices": 10, "teaspoon": 5, "teaspoons": 5, "tbls": 15, "tsp": 5, "jar": 250, "jars": 250, "pinch": 1, "pinches": 1, "dash": 1, "can": 330, "box": 250, "boxes": 250, "small": 250, "medium": 500, "large": 750, "big": 750, "sprig": 0.1, "sprigs": 0.1, "bunch": 100, "bunches": 100, "leaves": 0.1, "packs": 100, "packages": 100, "pck": 100, "pcks": 100, "stalk": 0.1}
# ---------------------------- Save a dictionary into a txt file ----------------------------
with open('./vocabulary/absolute_units.json', 'w') as json_file:
json.dump(absolute_units, json_file)
with open('./vocabulary/relative_units.json', 'w') as json_file:
json.dump(relative_units, json_file)
# ---------------------------- Importing and Exporting as Text File Ingredient's Vocabulary ----------------------------
# Reading ingredients vocabulary.
# with open('./vocabulary/instr_vocab.pkl', 'rb') as f: # Includes every ingredient, cooking vocabulary and punctuation signals necessary to describe a recipe in the dataset.
with open('./vocabulary/ingr_vocab.pkl', 'rb') as f: # Includes every ingredient present in the dataset.
ingredients_list = pickle.load(f) # Using vocabulary ingredients to retrieve the ones present in the recipes.
# Save a dictionary into a txt file.
with open('./vocabulary/ingr_vocab.txt', 'w') as f:
for item in ingredients_list:
if item != "<end>" and item != "<pad>":
f.write("%s\n" % item)
# ---------------------------- Importing Ingredients, Units and Quantities ----------------------------
relative_units.update(absolute_units)
units_list_dict = relative_units
ingrs_quants_units_final = {}
for recipe in recipes_data:
ingrs_quants_units_final[recipe["id"]] = ingredient_quantities(recipe, ingredients_list, units_list_dict)
# Exporting data for testing
#with open('./data/test/new_id_ingredients_tokenized_position.json', 'w') as json_file:
#json.dump(new_id_ingredients_tokenized_position, json_file)
#with open('./data/test/id_ingredients.json', 'w') as json_file:
#json.dump(id_ingredients, json_file)
new_id_ingredients_tokenized = {}
for key, value in ingrs_quants_units_final.items():
new_id_ingredients_tokenized[key] = []
for value2 in value:
new_id_ingredients_tokenized[key].append(value2["ingredient"])
print(new_id_ingredients_tokenized)
###Output
{'000018c8a5': ['penne', 'cheese', 'cheese', 'gruyere', 'chili', 'butter', 'stick', 'flour', 'milk', 'cheese', 'cheese', 'salt', 'chili', 'garlic'], '000033e39b': ['macaroni', 'cheese', 'celery', 'pepper', 'greens', 'pimentos', 'mayonnaise', 'salad dressing', 'vinegar', 'salt', 'dill'], '000035f7ed': ['tomato', 'salt', 'onion', 'pepper', 'greens', 'pepper', 'pepper', 'cucumber', 'oil', 'olive', 'basil'], '00003a70b1': ['milk', 'water', 'butter', 'potato', 'corn', 'cheese', 'onion'], '00004320bb': ['gelatin', 'watermelon', 'water', 'cool whip', 'watermelon', 'cracker'], '0000631d90': ['coconut', 'beef', 'garlic', 'salt', 'pepper', 'juice', 'lemon', 'soy sauce', 'cornstarch', 'pineapple', 'liquid', 'orange', 'liquid', 'nuts', 'cashews'], '000075604a': ['chicken', 'tea', 'kombu', 'pepper'], '00007bfd16': ['rhubarb', 'rhubarb', 'sugar', 'gelatin', 'strawberry', 'strawberries', 'cake', 'water', 'butter', 'margarine'], '000095fc1d': ['vanilla', 'fat', 'yogurt', 'strawberry', 'strawberries', 'fat'], '0000973574': ['flour', 'cinnamon', 'baking soda', 'salt', 'baking powder', 'egg', 'sugar', 'oil', 'vegetables', 'vanilla', 'zucchini', 'walnuts'], '0000a4bcf6': ['onion', 'greens', 'pepper', 'salmon', 'fillets', 'steak', 'oil', 'olive', 'ginger', 'rice', 'greens', 'mixed salad green', 'carrot', 'cherries', 'tomato', 'fat', 'italian dressing', 'cheese', 'banana', 'kiwi', 'vanilla', 'yogurt', 'cinnamon', 'sugar'], '0000b1e2b5': ['seeds', 'fennel', 'tenderloin', 'pork', 'fennel', 'oil', 'olive', 'garlic', 'clove', 'wine', 'chicken', 'broth', 'butter', 'juice', 'lemon'], '0000c79afb': ['wine', 'roses', 'brandy', 'liqueur', 'orange', 'grand marnier', 'juice', 'cranberries', 'orange', 'lemon', 'sprite', 'ice'], '0000ed95f8': ['butter', 'sugar', 'egg', 'juice', 'orange', 'orange', 'flour', 'baking soda', 'salt', 'juice', 'pineapple', 'pecans'], '00010379bf': ['cake', 'flour', 'baking powder', 'sugar', 'seeds', 'sesame', 'water', 'oil', 'vegetables', 'sugar', 'candy', 'candies', 'sugar', 'candy', 'candies', 'soy sauce', 'candy', 'candies', 'water', 'candy', 'candies'], '000106ec3c': ['tomato', 'corn', 'cheese', 'medium cheddar', 'potato', 'onion', 'hamburger'], '00010c7867': ['beef', 'oats', 'juice', 'tomato', 'egg', 'salt', 'pepper', 'chili', 'onion', 'butter', 'flour', 'milk', 'cheese', 'corn', 'pepper', 'greens'], '00010d44c7': ['broccoli', 'rice', 'cheese', 'cheese', 'egg', 'butter', 'milk', 'onion', 'garlic', 'basil', 'oregano', 'salt', 'pepper'], '00011e0b2c': ['marinade', 'beef', 'steak', 'sirloin', 'asparagus', 'flour', 'tortilla'], '00011fc1f9': ['lentils', 'onion', 'tomato', 'carrot', 'celery', 'oil', 'vegetables', 'olive', 'garlic', 'juice', 'ginger', 'vinaigrette', 'broth', 'vegetables', 'pepper', 'chili', 'berbere', 'water', 'salt', 'pepper'], '000128a538': ['vanilla', 'milk', 'blueberries', 'oats', 'coconut', 'pecans', 'vanilla', 'yogurt'], '00013266c9': ['cracker', 'saltine', 'butter', 'sugar', 'vanilla', 'extract', 'nuts'], '00015b5a39': ['potato', 'beef', 'broth', 'water', 'wine', 'teriyaki sauce', 'garlic', 'cream'], '00016355e6': ['vanilla', 'cookie', 'wafers', 'banana', 'coconut', 'milk', 'fat', 'vanilla', 'extract', 'stevia', 'ice', 'cream'], '0001678f7a': ['rice', 'fillets', 'halibut', 'foie gras', 'chives', 'salt', 'pepper', 'chives', 'oil', 'truffle', 'foie gras', 'egg', 'cream', 'oil', 'truffle', 'chicken', 'chives', 'salt', 'pepper', 'spaghetti', 'squash', 'butter', 'honey', 'ginger', 'salt', 'pepper'], '00016d71a4': ['wafers', 'vanilla', 'ice', 'cream', 'lemonade', 'chips', 
'chocolate', 'butter', 'margarine'], '00018371f2': ['juice', 'raspberries', 'pectin', 'certo', 'sugar'], '0001960f61': ['cinnamon', 'bread', 'raisins', 'cheese', 'cream', 'apple', 'egg', 'cream', 'butter', 'syrup'], '00019675ca': ['crabmeat', 'cream', 'cheese', 'cream', 'cheese', 'bacon', 'cilantro', 'seasoning', 'garlic', 'onion', 'pepper', 'salt'], '0001a2f336': ['sausage', 'cheese', 'seasoning', 'salt', 'garlic', 'spaghetti', 'jar', 'oil', 'olive', 'pizza dough'], '0001bdeec0': ['greens', 'leek', 'bacon', 'butter', 'salt', 'pepper', 'potato', 'garlic', 'clove', 'thyme', 'egg', 'cheese', 'nutmeg'], '0001cba765': ['juice', 'orange', 'tequila', 'triple sec', 'juice', 'cranberries', 'orange', 'sugar'], '0001d356b6': ['sugar', 'butter', 'stick', 'egg', 'lemon', 'juice', 'lemon', 'flour', 'baking powder', 'baking soda', 'salt', 'buttermilk', 'blueberries', 'sugar', 'butter', 'stick', 'cheese', 'cream', 'sugar', 'vanilla', 'extract', 'salt'], '0001d6acb7': ['butter', 'flour', 'milk', 'cheese', 'chile', 'poblano chiles', 'salt', 'pepper', 'cilantro', 'mustard', 'mustard', 'grain', 'honey', 'water', 'sugar', 'yeast', 'butter', 'salt', 'flour', 'oil', 'vegetables', 'water', 'baking soda', 'water', 'egg', 'salt'], '0001d81db6': ['avocado', 'garlic', 'clove', 'cheese', 'cream', 'juice', 'lime', 'cilantro', 'cream', 'jalapeno', 'tomato'], '000238353f': ['sausage', 'pork', 'meat', 'onion', 'garlic', 'clove', 'thyme', 'marmalade', 'puff pastry', 'egg', 'seeds', 'poppy seed', 'salt', 'pepper', 'oil', 'butter'], '0002491373': ['cooking spray', 'pepper', 'greens', 'pepper', 'pepper', 'onion', 'garlic', 'clove', 'eggplant', 'pepper', 'jalapeno', 'cilantro', 'capers', 'currants', 'nuts', 'vinegar', 'wine', 'salt', 'pepper'], '00025af750': ['pepper', 'banana', 'tomato', 'pepper', 'scallion', 'cucumber', 'basil', 'tarragon', 'stock', 'vegetables', 'salt', 'pepper', 'worcestershire sauce', 'hot sauce', 'vinegar', 'wine', 'vodka'], '00027b61de': ['lobster', 'butter', 'juice', 'lemon', 'salt', 'garlic'], '00029df38f': ['egg', 'salt', 'sugar', 'flour', 'cinnamon', 'vanilla', 'extract', 'pecans'], '00029f71f7': ['flour', 'wheat', 'oats', 'sugar', 'parsley', 'apple', 'carrot', 'egg', 'oil', 'vegetables', 'water'], '0002a82634': ['popcorn', 'fudge', 'caramel'], '0002e15d76': ['onion', 'oil', 'ketchup', 'water', 'vinegar', 'cider', 'sugar', 'worcestershire sauce', 'honey', 'mustard', 'paprika', 'salt', 'pepper', 'pepper', 'lemon', 'sausage', 'hot dog'], '0002ed1338': ['sugar', 'pumpkin', 'spices', 'cinnamon', 'biscuit', 'egg', 'water', 'sugar', 'syrup'], '0003132d05': ['chives', 'potato', 'garlic', 'clove', 'butter', 'milk', 'cream', 'cheese', 'salt', 'salt', 'onion', 'garlic', 'pepper'], '000320b7ce': ['flour', 'baking powder', 'baking soda', 'salt', 'cinnamon', 'ginger', 'sugar', 'butter', 'banana', 'egg', 'juice', 'lime', 'lime', 'coconut'], '000328f1ed': ['penne', 'sugar', 'peas', 'cream', 'trout', 'lemon', 'dill', 'salt', 'pepper'], '00032d5bcd': ['water', 'juice', 'orange', 'sugar', 'fruit', 'dates', 'cinnamon', 'stick', 'cheese', 'cream', 'philadelphia', 'fat', 'milk', 'honey', 'almonds'], '00033f624d': ['butter', 'egg', 'cheese', 'colby', 'pepper', 'salt', 'curry', 'flour', 'fat', 'bacon', 'ham', 'cheese'], '00034ad6cc': ['pasta', 'fettuccine', 'butter', 'garlic', 'clove', 'cream', 'egg', 'cheese', 'parsley']}
###Markdown
Retrieving Cooking Processes from Recipe1M+
Ingredients -> Vector (Word2Vec)
Converting ingredients into 50-dimensional vectors to facilitate comparison and clustering of ingredients and recipes.
###Code
# Ingredients are converted into vectors and, by averaging the ones belonging to the same recipe, a vector for the
# recipe is obtained.
if not path.exists("./trained_models/model.bin"):
corpus = new_id_ingredients_tokenized.values()
model = Word2Vec(corpus, min_count=1, size=50, workers=3, window=10, sg=0)
words = list(model.wv.vocab)
# By default, the model is saved in a binary format to save space.
model.wv.save_word2vec_format('./trained_models/model.bin')
# Save the learned model in ASCII format and review the contents
model.wv.save_word2vec_format('./trained_models/model.txt', binary=False)
else:
model = gensim.models.KeyedVectors.load_word2vec_format('./trained_models/model.bin', binary=True) # The saved vectors can be loaded again with KeyedVectors.load_word2vec_format().
###Output
_____no_output_____
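###Markdown
A quick sanity check on the learned embedding (a sketch; it assumes the `model` object from the cell above and that 'garlic' is in the ingredient vocabulary):
###Code
vectors = getattr(model, 'wv', model)  # trained Word2Vec exposes .wv; loaded KeyedVectors is used directly
print(vectors.most_similar('garlic', topn=5))
###Output
_____no_output_____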
###Markdown
Ingredients -> Vector (Every vector component corresponds to a word)
Recipes -> Vector (Word2Vec)
Representing recipes in vectorized form by taking the average of the vectors of the ingredients present in each recipe.
###Code
new_id_ingredients_tokenized_keys = new_id_ingredients_tokenized.keys()
id_ingreVectorized = {}
id_recipe = {}
for recipe_id in new_id_ingredients_tokenized_keys:
id_ingreVectorized[recipe_id] = []
for recipe_ingr in new_id_ingredients_tokenized[recipe_id]:
id_ingreVectorized[recipe_id].append(model[recipe_ingr])
id_recipe[recipe_id] = sum(id_ingreVectorized[recipe_id])/len(new_id_ingredients_tokenized[recipe_id])
###Output
_____no_output_____
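###Markdown
A quick look at what the averaged recipe vectors enable: cosine similarity between the first two recipes (a sketch; it assumes id_recipe from the cell above):
###Code
from sklearn.metrics.pairwise import cosine_similarity
first_two = list(id_recipe.keys())[:2]
similarity = cosine_similarity([id_recipe[first_two[0]]], [id_recipe[first_two[1]]])[0, 0]
print(first_two, similarity)
###Output
_____no_output_____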
###Markdown
Recipes -> Vector (Every vector component corresponds to a word)
Dimensionality Reduction (Ingredients)
PCA and T-SNE are used to decrease the dimensionality (50) of the vectors representing ingredients, so that they can be plotted in a visualizable way.
###Code
X_ingredients = model[model.wv.vocab]
print(X_ingredients)
# ---------------------------- PCA ----------------------------
X_ingredients_embedded1 = PCA(n_components=2).fit_transform(X_ingredients)
# ---------------------------- T-SNE ----------------------------
X_ingredients_embedded2 = TSNE(n_components=2).fit_transform(X_ingredients)
###Output
_____no_output_____
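###Markdown
To quantify how much information the 2-D projection retains, the explained variance ratio of the PCA can be inspected (a sketch; it refits PCA on the same ingredient vectors as above):
###Code
pca = PCA(n_components=2).fit(X_ingredients)
print(pca.explained_variance_ratio_, pca.explained_variance_ratio_.sum())
###Output
_____no_output_____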
###Markdown
Clustering Ingredients
Finding groups of ingredients that most often co-occur in the same recipes.
###Code
# ---------------------------- Build Distance Dataframe & Networkx Graph ----------------------------
data = list(X_ingredients_embedded1) # list(X_ingredients_embedded1) / model[model.wv.vocab]
ctys = list(model.wv.vocab)
df = pandas.DataFrame(data, index=ctys)
distances = (pandas.DataFrame(distance_matrix(df.values, df.values), index=df.index, columns=df.index)).rdiv(1) # Element-wise reciprocal of the pairwise distance matrix between ingredient vectors, i.e. a similarity/weight matrix (the diagonal becomes inf).
# G = networkx.from_pandas_adjacency(distances) # Creating networkx graph from pandas dataframe.
X = numpy.array(df.values) # Creating numpy array from pandas dataframe.
# ---------------------------- Clustering ----------------------------
# Mean Shift
# ingredientModule = MeanShift().fit(X).labels_
# Density-Based Spatial Clustering of Applications with Noise (DBSCAN)
# ingredientModule = DBSCAN(eps=0.3, min_samples=2).fit(X).labels_ # Noisy samples are given the label -1.
# Louvain
# ingredientModule = list((community.best_partition(G)).values())
# Infomap
ingredientModule = infomap_function(distances, ctys)
###Output
_____no_output_____
###Markdown
Number of Times Ingredients are used in Recipes
Retrieving how often different ingredients are used across the recipe dataset.
###Code
ingredients_count = {}
for ingredient in ingredients_list:
if "_" in ingredient:
ingredients_count[ingredient.replace("_", " ")] = 0
continue
ingredients_count[ingredient] = 0 # In case there is no _
for recipe in recipes_data:
for recipe_standardized in ingrs_quants_units_final[recipe["id"]]:
ingredients_count[recipe_standardized["ingredient"]] = ingredients_count[recipe_standardized["ingredient"]] + recipe_standardized["quantity"]
# -------------------------------
ingredientSize = {}
markerSizeConstant = 1
for ingredient_vocabulary in list(model.wv.vocab):
ingredientSize[ingredient_vocabulary] = markerSizeConstant*ingredients_count[ingredient_vocabulary]
ingredientSize = list(ingredientSize.values())
print(ingredientSize)
###Output
_____no_output_____
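###Markdown
A quick sanity check on the counts above: the ten most heavily used ingredients by accumulated quantity (a sketch; it assumes ingredients_count from the cell above):
###Code
top_ten = sorted(ingredients_count.items(), key=lambda kv: kv[1], reverse=True)[:10]
for name, amount in top_ten:
    print(name, amount)
###Output
_____no_output_____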
###Markdown
PCA & T-SNE Visualization (Ingredients)Although some informamation was inevitably lost, a pair of the most variable components was used. Size of each marker is proportional to the number of times the ingredient is used in the recipe dataset. Markers with a similar color group ingredients that are usually used together in the recipe dataset.
###Code
# ---------------------------- Matplotlib ----------------------------
matplotlib_function(X_ingredients_embedded1, X_ingredients_embedded2, list(model.wv.vocab), ingredientModule, ingredientSize, "Ingredients")
# ---------------------------- Plotly ----------------------------
plotly_function(X_ingredients_embedded1, X_ingredients_embedded2, list(model.wv.vocab), ingredientModule, ingredientSize, "true", "Ingredients")
# Toggle Button for Labels
toggle = w.ToggleButton(description='No Labels')
out = w.Output(layout=w.Layout(border = '1px solid black'))
def fun(obj):
with out:
if obj['new']:
plotly_function(X_ingredients_embedded1, X_ingredients_embedded2, list(model.wv.vocab), ingredientModule, ingredientSize, "false")
else:
plotly_function(X_ingredients_embedded1, X_ingredients_embedded2, list(model.wv.vocab), ingredientModule, ingredientSize, "true")
toggle.observe(fun, 'value')
display(toggle)
display(out)
# (Run in localhost to visualize it)
# ---------------------------- Seaborn ----------------------------
seaborn_function(X_ingredients_embedded1, X_ingredients_embedded2, list(model.wv.vocab), ingredientModule, ingredientSize)
###Output
_____no_output_____
###Markdown
Dimensionality Reduction (Recipes)
PCA and T-SNE are used to decrease the dimensionality (50) of the vectors representing recipes, so that they can be plotted in a visualizable way. Although some information was inevitably lost, a pair of the most variable components was used.
###Code
# ---------------------------- PCA ----------------------------
X_recipes_embedded1 = PCA(n_components=2).fit_transform(list(id_recipe.values()))
# ---------------------------- T-SNE ----------------------------
X_recipes_embedded2 = TSNE(n_components=2).fit_transform(list(id_recipe.values()))
###Output
_____no_output_____
###Markdown
Clustering Recipes
Finding groups of recipes that most closely correspond to different types of cuisine.
###Code
# ---------------------------- Build Distance Dataframe & Networkx Graph ----------------------------
data = list(X_recipes_embedded1) # list(X_recipes_embedded1) / id_recipe.values()
ctys = id_recipe.keys()
df = pandas.DataFrame(data, index=ctys)
distances = (pandas.DataFrame(distance_matrix(df.values, df.values), index=df.index, columns=df.index)).rdiv(1)
# G = networkx.from_pandas_adjacency(distances) # Creating networkx graph from pandas dataframe.
X = numpy.array(df.values) # Creating numpy array from pandas dataframe.
# ---------------------------- Clustering ----------------------------
# Mean Shift
recipeModules = MeanShift().fit(X).labels_
# Density-Based Spatial Clustering of Applications with Noise (DBSCAN)
# recipeModules = DBSCAN(eps=0.3, min_samples=2).fit(X).labels_ # Noisy samples are given the label -1.
# Louvain
# recipeModules = list((community.best_partition(G)).values())
# Infomap
# recipeModules = infomap_function(1./distances, ctys)
###Output
_____no_output_____
###Markdown
Number of Ingredients in each Recipe
Calculated so that the size of each recipe marker could be proportional to the number of ingredients present.
###Code
numberIngredients = []
markerSizeConstant = 1
for key, value in new_id_ingredients_tokenized.items():
numberIngredients.append(markerSizeConstant*len(value))
print(numberIngredients)
###Output
_____no_output_____
###Markdown
PCA & T-SNE VisualizationSize of each marker is proportional to the number of ingredients a given recipe contains. Markers with a similar color group recipes that contain the higher number of common ingredients.
###Code
# ---------------------------- Matplotlib ----------------------------
matplotlib_function(X_recipes_embedded1, X_recipes_embedded2, list(id_recipe.keys()), recipeModules, numberIngredients, "Recipes")
# ---------------------------- Plotly ----------------------------
plotly_function(X_recipes_embedded1, X_recipes_embedded2, list(id_recipe.keys()), recipeModules, numberIngredients, "true", "Recipes")
toggle = w.ToggleButton(description='No Labels')
out = w.Output(layout=w.Layout(border = '1px solid black'))
def fun(obj):
with out:
if obj['new']:
plotly_function(X_recipes_embedded1, X_recipes_embedded2, list(id_recipe.keys()), recipeModules, numberIngredients, "false")
else:
plotly_function(X_recipes_embedded1, X_recipes_embedded2, list(id_recipe.keys()), recipeModules, numberIngredients, "true")
toggle.observe(fun, 'value')
display(toggle)
display(out)
# (Run in localhost to be able to visualize it)
# ---------------------------- Seaborn ----------------------------
seaborn_function(X_recipes_embedded1, X_recipes_embedded2, list(id_recipe.keys()), recipeModules, numberIngredients)
###Output
_____no_output_____
###Markdown
Importing Anticancer Ingredients
Getting the anticancer ingredients and the number of anticancer molecules each one contains, with further data processing to facilitate analysis.
###Code
ac_data = pandas.read_csv("./data/food_compound.csv", delimiter = ",")
ac_data.head()
# Selecting Useful Anti-Cancer Ingredients Columns
ac_data_mod = ac_data[['Common Name', 'Number of CBMs']]
ac_data_mod
# Dropping Nan Rows from Anti-Cancer Ingredients Table
ac_data_mod.replace("", numpy.nan)
ac_data_mod = ac_data_mod.dropna()
ac_data_mod
# Converting DataFrame to Dictionary
ingredient_anticancer = {}
for index, row in ac_data_mod.iterrows():
ingredient_anticancer[row['Common Name'].lower()] = row['Number of CBMs']
###Output
_____no_output_____
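###Markdown
The ingredients richest in cancer-beating molecules can be listed directly from the table (a sketch; it assumes ac_data_mod from the cell above):
###Code
ac_data_mod.nlargest(10, 'Number of CBMs')
###Output
_____no_output_____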
###Markdown
Recipes -> Score
Calculating the score of each recipe taking into account the number of cancer-beating molecules. Data Source: Veselkov, K., Gonzalez, G., Aljifri, S. et al. HyperFoods: Machine intelligent mapping of cancer-beating molecules in foods. Sci Rep 9, 9237 (2019) doi:10.1038/s41598-019-45349-y
###Code
recipe_cancerscore = {}
recipe_weight = {}
for key, value in ingrs_quants_units_final.items():
recipe_weight[key] = 0
for recipe_standardized in value:
recipe_weight[key] = recipe_weight[key] + recipe_standardized["quantity (ml)"]
recipe_weight
# ----------------------
recipe_cancerscore = {}
ingredient_anticancer_keys = list(ingredient_anticancer.keys())
for key, value in ingrs_quants_units_final.items():
recipe_cancerscore[key] = 0
for recipe_standardized in value:
for ingredient_anticancer_iterable in ingredient_anticancer_keys:
if recipe_standardized["ingredient"] in ingredient_anticancer_iterable:
recipe_cancerscore[key] = recipe_cancerscore[key] + ingredient_anticancer[ingredient_anticancer_iterable]*(recipe_standardized["quantity (ml)"])/(recipe_weight[key])
break
###Output
_____no_output_____
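###Markdown
A toy illustration of the quantity-weighted score computed above, using hypothetical numbers (200 ml of an ingredient with 40 CBMs and 50 ml of an ingredient with 10 CBMs):
###Code
toy_recipe = [{"ingredient": "a", "quantity (ml)": 200, "cbms": 40},
              {"ingredient": "b", "quantity (ml)": 50, "cbms": 10}]
total_volume = sum(item["quantity (ml)"] for item in toy_recipe)
toy_score = sum(item["cbms"] * item["quantity (ml)"] / total_volume for item in toy_recipe)
print(toy_score)  # (40*200 + 10*50) / 250 = 34.0
###Output
_____no_output_____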
###Markdown
Best Recipes in Decreasing Order
Printing, in decreasing order, the recipes with the highest number of cancer-beating molecules.
###Code
res1 = pandas.DataFrame.from_dict(recipe_cancerscore, orient='index', columns=['Anticancer Molecules/Number Ingredients'])
res2 = pandas.DataFrame.from_dict(id_url, orient='index', columns=['Recipe URL'])
pandas.set_option('display.max_colwidth', 1000)
pandas.concat([res1, res2], axis=1).reindex(res1.index).sort_values(by=['Anticancer Molecules/Number Ingredients'], ascending=False).head()
# Creating a dataframe object from listoftuples
# pandas.DataFrame(recipe_cancerscore_dataframe)
###Output
_____no_output_____
###Markdown
Recipes -> Nutritional Information
Retrieving nutritional information for each ingredient present in the recipe dataset. The overall recipe score will be calculated taking into account not only the number of cancer-beating molecules, but also the nutritional content. Data Source: U.S. Department of Agriculture, Agricultural Research Service. FoodData Central, 2019. fdc.nal.usda.gov.
###Code
with open('./vocabulary/ingr_vocab.pkl', 'rb') as f: # Includes every ingredient present in the dataset.
ingredients_list = pickle.load(f)[1:-1]
print(len(ingredients_list))
# -------------------------------- Extracting Ingredients
new_ingredients_list = [] # List of ingredients from the vocabulary with spaces instead of underscores.
for i in range(0, len(ingredients_list)):
if "_" in ingredients_list[i]:
new_ingredients_list.append(ingredients_list[i].replace("_", " "))
continue
new_ingredients_list.append(ingredients_list[i]) # In case there is no _
print(len(new_ingredients_list))
# ---------------------------- Get FoodData Central IDs for Each Ingredient from Vocab ----------------------------
if os.path.exists('./vocabulary/ingredient_fdcIds.json'):
f = open('./vocabulary/ingredient_fdcIds.json')
ingredient_fdcIds = (json.load(f))# [0:100]
f.close()
else:
API_Key = "BslmyYzNnRTysPWT3DDQfNv5lrmfgbmYby3SVsHw"
URL = "https://api.nal.usda.gov/fdc/v1/search?api_key=" + API_Key
ingredient_fdcIds = {}
for value in new_ingredients_list:
ingredient_fdcIds[value] = {}
ingredient_fdcIds[value]["fdcIds"] = []
ingredient_fdcIds[value]["descriptions"] = []
# ------------------------------------------ ADDING RAW
PARAMS2 = {'generalSearchInput': value + " raw"}
r2 = requests.get(url = URL, params = PARAMS2)
data2 = r2.json()
raw = False
if "foods" in data2 and value + " raw" in (data2["foods"][0]["description"]).lower().replace(",", ""):
raw_id = data2["foods"][0]["fdcId"]
raw_description = data2["foods"][0]["description"]
ingredient_fdcIds[value]["fdcIds"].append(raw_id)
ingredient_fdcIds[value]["descriptions"].append(raw_description)
raw = True
# id_nutritionalInfo[value] = []
# for i in range(len(value)):
# Defining a params dict for the parameters to be sent to the API
PARAMS = {'generalSearchInput': value}
# Sending get request and saving the response as response object
r = requests.get(url = URL, params = PARAMS)
# Extracting data in json format
data = r.json()
if "foods" in data:
numberMatches = len(data["foods"])
if numberMatches > 10 and raw == True:
numberMatches = 9
elif numberMatches > 10 and raw == False:
numberMatches = 10
for i in range(numberMatches):
ingredient_fdcIds[value]["fdcIds"].append(data["foods"][i]["fdcId"])
ingredient_fdcIds[value]["descriptions"].append(data["foods"][i]["description"])
#print(ingredient_fdcIds)
# ---------------------------- Get All Nutritional Info from Vocab ----------------------------
if os.path.exists('./vocabulary/ingredient_nutritionalInfo.json'):
f = open('./vocabulary/ingredient_nutritionalInfo.json')
ingredient_nutritionalInfo = (json.load(f))# [0:100]
f.close()
else:
API_Key = "BslmyYzNnRTysPWT3DDQfNv5lrmfgbmYby3SVsHw"
ingredient_nutritionalInfo = {}
for key, value in ingredient_fdcIds.items():
if value["fdcIds"]:
URL = "https://api.nal.usda.gov/fdc/v1/" + str(value["fdcIds"][0]) + "?api_key=" + API_Key
# Sending get request and saving the response as response object
r = requests.get(url = URL)
ingredient_nutritionalInfo[key] = {}
ingredient_nutritionalInfo[key]["fdcId"] = value["fdcIds"][0]
ingredient_nutritionalInfo[key]["description"] = value["descriptions"][0]
ingredient_nutritionalInfo[key]["nutrients"] = {}
for foodNutrient in r.json()["foodNutrients"]:
if "amount" in foodNutrient.keys():
ingredient_nutritionalInfo[key]["nutrients"][foodNutrient["nutrient"]["name"]] = [foodNutrient["amount"], foodNutrient["nutrient"]["unitName"]]
else:
ingredient_nutritionalInfo[key]["nutrients"][foodNutrient["nutrient"]["name"]] = "NA"
else:
ingredient_nutritionalInfo[key] = {}
# ---------------------------- Correcting Units in JSON with Nutritional Info ----------------------------
if os.path.exists('./vocabulary/ingredient_nutritionalInfo_corrected.json'):
f = open('./vocabulary/ingredient_nutritionalInfo_corrected.json')
ingredient_nutritionalInfo_modified = (json.load(f))# [0:100]
f.close()
else:
ingredient_nutritionalInfo_modified = ingredient_nutritionalInfo
for nutrient, dictionary in ingredient_nutritionalInfo.items():
if "nutrients" in dictionary:
for molecule, quantity in dictionary["nutrients"].items():
if quantity != "NA":
if quantity[1] == "mg":
ingredient_nutritionalInfo_modified[nutrient]["nutrients"][molecule][0] = quantity[0]/1000
ingredient_nutritionalInfo_modified[nutrient]["nutrients"][molecule][1] = 'g'
elif quantity[1] == "\u00b5g":
ingredient_nutritionalInfo_modified[nutrient]["nutrients"][molecule][0] = quantity[0]/1000000
ingredient_nutritionalInfo_modified[nutrient]["nutrients"][molecule][1] = 'g'
elif quantity[1] == "kJ":
ingredient_nutritionalInfo_modified[nutrient]["nutrients"][molecule][0] = quantity[0]/4.182
ingredient_nutritionalInfo_modified[nutrient]["nutrients"][molecule][1] = 'kcal'
elif quantity[1] == "IU":
if "Vitamin A" in molecule:
ingredient_nutritionalInfo_modified[nutrient]["nutrients"][molecule][0] = quantity[0]*0.45/1000000
ingredient_nutritionalInfo_modified[nutrient]["nutrients"][molecule][1] = 'g'
elif "Vitamin C" in molecule:
ingredient_nutritionalInfo_modified[nutrient]["nutrients"][molecule][0] = quantity[0]*50/1000000
ingredient_nutritionalInfo_modified[nutrient]["nutrients"][molecule][1] = 'g'
elif "Vitamin D" in molecule:
ingredient_nutritionalInfo_modified[nutrient]["nutrients"][molecule][0] = quantity[0]*40/1000000
ingredient_nutritionalInfo_modified[nutrient]["nutrients"][molecule][1] = 'g'
elif "Vitamin E" in molecule:
ingredient_nutritionalInfo_modified[nutrient]["nutrients"][molecule][0] = quantity[0]*0.8/1000
ingredient_nutritionalInfo_modified[nutrient]["nutrients"][molecule][1] = 'g'
# ---------------------------- Get Medium Sizes for each Ingredient in Vocab ----------------------------
f = open('./vocabulary/ingredient_fdcIds.json')
ingredient_fdcIds = (json.load(f))#[0:10]
f.close()
API_Key = "BslmyYzNnRTysPWT3DDQfNv5lrmfgbmYby3SVsHw"
ingredient_mediumSize = {}
for key, value in ingredient_fdcIds.items():
aux = True
for id_key, fdcId in enumerate(value["fdcIds"][0:5]):
if not aux:
break
URL = "https://api.nal.usda.gov/fdc/v1/" + str(fdcId) + "?api_key=" + API_Key
# Sending get request and saving the response as response object
r = requests.get(url = URL)
foodPortions = r.json()["foodPortions"]
i = 0
first_cycle = True
second_cycle = False
third_cycle = False
while i < len(foodPortions):
if "portionDescription" in foodPortions[i]:
if "medium" in foodPortions[i]["portionDescription"] and first_cycle:
ingredient_mediumSize[key] = {"fdcId": fdcId, "description": value["descriptions"][id_key], "weight": foodPortions[i]["gramWeight"]}
aux = False
break
elif i == len(foodPortions) - 1 and first_cycle:
i = -1
first_cycle = False
second_cycle = True
third_cycle = False
elif "Quantity not specified" in foodPortions[i]["portionDescription"] and second_cycle:
ingredient_mediumSize[key] = {"fdcId": fdcId, "description": value["descriptions"][id_key], "weight": foodPortions[i]["gramWeight"]}
aux = False
#print("Quantity not specified" + key)
break
elif i == len(foodPortions) - 1 and second_cycle:
i = -1
first_cycle = False
second_cycle = False
third_cycle = True
elif key in foodPortions[i]["portionDescription"] and third_cycle:
ingredient_mediumSize[key] = {"fdcId": fdcId, "description": value["descriptions"][id_key], "weight": foodPortions[i]["gramWeight"]}
aux = False
#print(key)
break
elif i == len(foodPortions) - 1 and third_cycle:
i = -1
ingredient_mediumSize[key] = {"fdcId": "NA", "description": "NA", "weight": "NA"}
first_cycle = False
second_cycle = False
third_cycle = False
break
else:
break
i = i + 1
#print(ingredient_mediumSize)
# ---------------------------- Save JSON File with Nutritional Info ----------------------------
with open('./vocabulary/ingredient_nutritionalInfo_corrected.json', 'w') as json_file:
json.dump(ingredient_nutritionalInfo_modified, json_file)
###Output
_____no_output_____
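###Markdown
Quick inspection of the corrected nutritional table (a sketch; it assumes the ingredient_nutritionalInfo_modified dictionary built above and that "tomato" is in the vocabulary):
###Code
example = ingredient_nutritionalInfo_modified.get("tomato", {})
print(example.get("description"))
for nutrient_name, amount in list(example.get("nutrients", {}).items())[:5]:
    print(nutrient_name, amount)
###Output
_____no_output_____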
###Markdown
Recipes -> Cuisines
Importing Kaggle and Nature Dataset
###Code
#data = pandas.read_csv("./data/jaan/kaggle_and_nature.csv", skiprows=5)
#pandas.read_table('./data/jaan/kaggle_and_nature.csv')
#data.head()
id_ingredients_cuisine = []
cuisines = []
with open('./data/jaan/kaggle_and_nature.csv', newline='') as csv_file:
recipe_reader = csv.reader(csv_file, delimiter='\t')
i = 0
for row in recipe_reader:
# Each row holds the cuisine name followed by the comma-separated ingredient list.
id_ingredients_cuisine.append({"id": i, "ingredients": [ingredient.replace("_", " ") for ingredient in row[0].split(",")[1:]], "cuisine": row[0].split(",")[0]})
cuisines.append(row[0].split(",")[0])
i = i + 1
print(len(cuisines))
###Output
96250
###Markdown
Creating Synonymous Vocabulary
###Code
# ---------------------------- Importing Recipe1M+ Vocabulary ----------------------------
with open('./vocabulary/ingr_vocab.pkl', 'rb') as f: # Includes every ingredient present in the dataset.
ingredients_list = pickle.load(f)
#print(len(ingredients_list))
# ---------------------------- Creating Vocabulary to Kaggle and Nature Dataset----------------------------
vocabulary = set()
for recipe in id_ingredients_cuisine:
for ingredient in recipe["ingredients"]:
vocabulary.add(ingredient.replace(" ", "_"))
#print(vocabulary)
print(len(vocabulary))
print(len(ingredients_list))
synonymous = {}
for ingredient2 in list(vocabulary):
synonymous[ingredient2] = "new"
aux = 0
for ingredient2 in list(vocabulary):
for ingredient1 in ingredients_list:
if ingredient1 == ingredient2:
#print(ingredient2 + " " + ingredient1)
synonymous[ingredient2] = ingredient1
break
elif ingredient1 in ingredient2:
synonymous[ingredient2] = ingredient1
if synonymous[ingredient2] == "new":
aux = aux + 1
print(len(synonymous))
new_id_ingredients_cuisine = id_ingredients_cuisine
for key1, recipe in enumerate(id_ingredients_cuisine):
for key2, ingredient in enumerate(recipe["ingredients"]):
if synonymous[id_ingredients_cuisine[key1]["ingredients"][key2].replace(" ", "_")] == "new":
new_id_ingredients_cuisine[key1]["ingredients"].remove(id_ingredients_cuisine[key1]["ingredients"][key2])
continue
new_id_ingredients_cuisine[key1]["ingredients"][key2] = synonymous[id_ingredients_cuisine[key1]["ingredients"][key2].replace(" ", "_")]
if len(id_ingredients_cuisine[key1]["ingredients"]) < 2:
new_id_ingredients_cuisine.remove(id_ingredients_cuisine[key1])
#print(len(synonymous))
###Output
881
1488
881
###Markdown
Ingredients and Recipes to Vectors
###Code
# ---------------------------- Converting Ingredients to Vectors ----------------------------
#ingredients = set()
#for key, recipe in enumerate(new_id_ingredients_cuisine):
#for key2, ingredient in enumerate(recipe["ingredients"]):
#ingredients.add(recipe["ingredients"][key2])
#ingredient_list = ingredients
ingredient_list = ingredients_list
print(len(ingredient_list))
ingredient_vector = {}
for key, value in enumerate(ingredient_list):
ingredient_vector[value] = [0] * len(ingredient_list)
ingredient_vector[value][key] = 1
#print(ingredient_vector["cinnamon"])
# ---------------------------- Converting Recipes to Vectors ----------------------------
id_ingredients_cuisine_vectorized = {}
# print(len(id_ingredients_cuisine))
for key1, recipe in enumerate(new_id_ingredients_cuisine[0:20000]):
id_ingredients_cuisine_vectorized[key1] = []
for ingredient in recipe["ingredients"]:
id_ingredients_cuisine_vectorized[key1].append(ingredient_vector[ingredient])
id_ingredients_cuisine_vectorized[key1] = numpy.sum(numpy.array(id_ingredients_cuisine_vectorized[key1]), 0)
#print(id_ingredients_cuisine_vectorized)
###Output
1488
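###Markdown
An equivalent, more compact way to build the recipe vectors above is scikit-learn's MultiLabelBinarizer, already imported earlier (a sketch; note that it produces binary presence indicators with columns ordered by mlb.classes_, rather than the ingredient counts obtained by summing one-hot vectors, and it assumes new_id_ingredients_cuisine from the cells above):
###Code
mlb = MultiLabelBinarizer()
X_binary = mlb.fit_transform([recipe["ingredients"] for recipe in new_id_ingredients_cuisine[0:20000]])
print(X_binary.shape, len(mlb.classes_))
###Output
_____no_output_____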
###Markdown
Support Vector Classifier (Linear)
###Code
# ---------------------------- Importing Data ----------------------------
X = list(id_ingredients_cuisine_vectorized.values())
y = cuisines[0:20000]
#for vector in list(id_ingredients_cuisine_vectorized.values()):
#print(len(vector))
# ---------------------------- Creating Training & Testing Sets ----------------------------
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
#print(X_train[0:10])
#print(y_train[0:10])
clf = svm.LinearSVC(max_iter = 5000)
clf.fit(X_train, y_train)
# ---------------------------- Save Model ----------------------------
#filename = './trained_models/finalized_model2.sav'
#pickle.dump(clf, open(filename, 'wb'))
# ---------------------------- Load Model ----------------------------
#loaded_model = pickle.load(open(filename, 'rb'))
# result = loaded_model.score(X_test, Y_test)
#print(id_ingredients_cuisine_vectorized["10"])
#print(clf.predict([id_ingredients_cuisine_vectorized[430]]))
###Output
_____no_output_____
###Markdown
Random Forest Classifier
###Code
# ---------------------------- Importing Data ----------------------------
X = list(id_ingredients_cuisine_vectorized.values())
y = cuisines[0:20000]
# ---------------------------- Creating Training & Testing Sets ----------------------------
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier()
clf.fit(X_train, y_train)
# ---------------------------- Save Model ----------------------------
filename = './trained_models/randomForestClassifier.sav'
pickle.dump(clf, open(filename, 'wb'))
# ---------------------------- Load Model ----------------------------
#loaded_model = pickle.load(open(filename, 'rb'))
# result = loaded_model.score(X_test, Y_test)
#print(id_ingredients_cuisine_vectorized["10"])
#print(clf.predict([id_ingredients_cuisine_vectorized[430]]))
#loaded_model = pickle.load(open(filename, 'rb'))
print(clf.predict([id_ingredients_cuisine_vectorized[430]]))
###Output
_____no_output_____
###Markdown
Validating Model
###Code
# Upsides: intuitive and easy to perform.
# Downsides: drastically reduce the number of samples which can be used for learning the model, and the results can depend on a particular random choice for the pair of (train, validation) sets.
print(clf.score(X_test, y_test))
###Output
_____no_output_____
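###Markdown
A more detailed breakdown per cuisine than the single accuracy number above (a sketch; it assumes the most recently fitted clf, X_test and y_test from the cells above):
###Code
from sklearn.metrics import classification_report
print(classification_report(y_test, clf.predict(X_test)))
###Output
_____no_output_____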
###Markdown
Stratified K-Fold Cross Validation
###Code
cv = StratifiedKFold(n_splits=5)
scores = cross_val_score(clf, X_test, y_test, cv=cv)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
###Output
Accuracy: 0.80 (+/- 0.01)
###Markdown
Leave One Out Cross Validation (LOOCV)
###Code
# LOO is more computationally expensive than k-fold cross validation.
cv = LeaveOneOut()
scores = cross_val_score(clf, X_test, y_test, cv=cv)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
###Output
_____no_output_____
###Markdown
Adding Cuisine to Recipe1M+ Database
###Code
# ---------------------------- Importing Dataset ----------------------------
f = open('./data/recipe1M+/layer11.json')
recipes_data = (json.load(f))#[0:100000]
f.close()
# ---------------------------- Converting Ingredients to Vectors ----------------------------
modified_recipes_data = {}
#print(new_id_ingredients_tokenized)
for key1, list_ingredients in new_id_ingredients_tokenized.items():
modified_recipes_data[key1] = []
for key2, ingredient in enumerate(list_ingredients):
modified_recipes_data[key1].append(ingredient_vector[ingredient.replace(" ", "_")])
# ---------------------------- Converting Recipes to Vectors ----------------------------
id_ingredients_cuisine_vectorized = {}
cuisines_recipe1m = []
for key1, recipe in modified_recipes_data.items():
id_ingredients_cuisine_vectorized[key1] = numpy.sum(numpy.array(modified_recipes_data[key1]), 0)
cuisines_recipe1m.append((clf.predict([id_ingredients_cuisine_vectorized[key1]]))[0])
# ---------------------------- Adding Cuisines to Recipe1M+ Dataset ----------------------------
modified_modified_recipes_data = recipes_data
for key, recipe in enumerate(recipes_data):
modified_modified_recipes_data[key]["cuisine"] = cuisines_recipe1m[key]
# ---------------------------- Generating New Recipe1M+ w/ Cuisines File ----------------------------
with open('./data/layer11_modified_cuisines.txt', 'w') as file:
file.write(str(modified_modified_recipes_data))
###Output
_____no_output_____
###Markdown
Dimensionality Reduction
###Code
X_ingredients = list(id_ingredients_cuisine_vectorized.values())
#print(X_ingredients)
# ---------------------------- PCA ----------------------------
X_ingredients_embedded1 = PCA(n_components=2).fit_transform(X_ingredients)
# ---------------------------- T-SNE ----------------------------
# X_ingredients_embedded2 = TSNE(n_components=2).fit_transform(X_ingredients)
###Output
_____no_output_____
###Markdown
Calculating Amount of Ingredients & Identifying Recipes' Cuisines
###Code
#recipeModules = [0] * len(list(id_ingredients_cuisine_vectorized.keys()))
cuisine_number = {}
cuisine_numberized = []
index = 0
cuisine_number["African"] = 0
for key, cuisine in enumerate(cuisines):
if cuisine not in list(cuisine_number.keys()):
index = index + 1
cuisine_number[cuisine] = index
for key, cuisine in enumerate(cuisines):
cuisine_numberized.append(cuisine_number[cuisine])
recipeModules = cuisine_numberized
print(recipeModules)
numberIngredients = [5] * len(list(id_ingredients_cuisine_vectorized.keys()))
###Output
_____no_output_____
###Markdown
PCA & T-SNE Visualization
###Code
# ---------------------------- Matplotlib ----------------------------
matplotlib_function(X_ingredients_embedded1, X_ingredients_embedded1, list(id_ingredients_cuisine_vectorized.keys()), recipeModules, numberIngredients, "Recipes")
###Output
_____no_output_____
###Markdown
Benchmark Facebook Recipe Retrieval Algorithm
A dictionary object (id_url.json) was created that matches recipe IDs (layer1.json) with the URLs of the images available in layer2.json. While some recipes do not contain images, others contain more than one. This matching between the two files is possible because layer2.json also contains the recipe ID present in layer1.json.
Then, by adapting Facebook's algorithm and its repository, the recipe retrieval algorithm converts the JSON file id_url.json into an array of URL strings. Along with this, it creates a parallel array of recipe ID strings, so that each position of this array corresponds to an image URL at the same position of the previous one.
Finally, Facebook's algorithm was run and the ingredient list for each image URL was obtained. The number of correct elements over the total number of elements in the ground-truth recipe gives the accuracy of the algorithm. The ingredients present in each ground-truth recipe were retrieved using the method above - "Recipe Retrieval w/ Higher Number Anti-Cancer Molecules".
Writing Input File w/ Images to Facebook's Algorithm
A JSON file (id_url.json) was created to be used as input to Facebook's recipe retrieval algorithm, so that it can generate a prediction of the ingredients present in every recipe of the dataset (with at least one image available). Ground-truth ingredients for each recipe can be found in layer1.json; the respective images are in layer2.json. Both files are in the data directory.
###Code
ids = []
for recipe in recipes_data:
ids.append(recipe["id"])
f = open('./data/recipe1M+/layer2.json')
recipes_images_data = (json.load(f))# [0:100]
f.close()
id_images = {}
for recipe in recipes_data:
    id_images[recipe["id"]] = []
    for recipe_image in recipes_images_data:
        # Match the two files on the shared recipe ID before collecting the image URLs.
        if recipe["id"] == recipe_image["id"]:
            for image in recipe_image["images"]:
                id_images[recipe["id"]].append(image["url"])
# Writing text file with IDs of each recipe and respective URLs for 1 or more online images.
with open('./data/id_url.json', 'w') as json_file:
json.dump(id_images, json_file)
###Output
_____no_output_____
###Markdown
Executing Inverse Cooking AlgorithmRecipe Generation from Food Images. https://github.com/facebookresearch/inversecooking
###Code
'''
from demo import demo_func
f = open('./data/recipe1M+/id_url.json')
id_url = (json.load(f))# [0:100]
f.close()
urls_output = []
ids_output = []
for id, urls in id_url.items():
for url in urls:
urls_output.append(url)
if url:
ids_output.append(id)
print(id_url)
print(urls_output)
print(ids_output)
demo_func(urls_output, ids_output)
'''
###Output
_____no_output_____
###Markdown
Comparing Ingredient Prediction w/ Ground TruthIoU and F1 scores are used to compare the prediction of the ingredients made by the Facebook's algorithm with the ones presentin the dataset. First, a JSON file with the prediction for each recipe is read. Then, the 2 scores are calculated. Finally, a comparison between the benchmark performed by the algorithm's team and ours is made.
###Code
f = open('./data/id_predictedIngredients.json')
id_predictedIngredients = (json.load(f))# [0:100]
f.close()
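# Note: `iou_function` used below is assumed to be defined earlier in the notebook.
# For reference only, a minimal sketch of such a Jaccard/IoU helper over two ingredient
# lists could look like the following (hypothetical, not used in the computation below):
def iou_sketch_example(ground_truth, predicted):
    a, b = set(ground_truth), set(predicted)
    return len(a & b) / len(a | b) if (a | b) else 0.0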
# ---------------------------- Intersection over Union (IoU) Score / Jaccard Index ----------------------------
iou_list = []
recipe_ids = list(id_predictedIngredients.keys())
for key, value in id_predictedIngredients.items():
iou_list.append(iou_function(new_id_ingredients_tokenized[key], value))
iou = sum(iou_list)/len(iou_list)
# ---------------------------- F1 Score ----------------------------
f1_list = []
for key, value in id_predictedIngredients.items():
y_true = [new_id_ingredients_tokenized[key]]
y_pred = [value]
binarizer = MultiLabelBinarizer()
# In this case, I am considering only the given labels.
binarizer.fit(y_true)
f1_list.append(f1_score(binarizer.transform(y_true), binarizer.transform(y_pred), average='macro'))
f1 = sum(f1_list)/len(f1_list)
# Benchmark Tests Comparison
benchmark = {'Method': ["Ours", "Facebook Group"],
'IoU': [iou, 0.3252],
'F1': [f1, 0.4908]
}
df = pandas.DataFrame(benchmark, columns = ['Method', 'IoU', 'F1'])
print(df)
# Data obtained by the Facebook Research group comparing how their algorithm, a retrieval system and a human perform when
# predicting the ingredients present in the food.
Image("img/iou&f1.png")
###Output
_____no_output_____
###Markdown
Annotations
List Jupyter running sessions:
```console
jupyter notebook list
```
Exit Jupyter notebooks:
```
jupyter notebook stop (8889)
```
Plot using Matplotlib: https://medium.com/incedge/data-visualization-using-matplotlib-50ffc12f6af2
Add large files to a GitHub repo: https://git-lfs.github.com/
Removing a large file from a commit:
https://help.github.com/en/github/authenticating-to-github/removing-sensitive-data-from-a-repository
https://rtyley.github.io/bfg-repo-cleaner/
https://towardsdatascience.com/uploading-large-files-to-github-dbef518fa1a
`$ bfg --delete-files YOUR-FILE-WITH-SENSITIVE-DATA`
bfg is an alias for: `java -jar bfg.jar`
Initialize a GitHub repo: `git init`, then `git remote add origin https://gitlab.com/Harmelodic/MyNewProject.git`
###Code
HTML('<iframe src=http://fperez.org/papers/ipython07_pe-gr_cise.pdf width=700 height=350></iframe>')
# embedding projector
###Output
_____no_output_____ |
Week_5_practical_Shedd.ipynb | ###Markdown
Question 0
###Code
list_na <- c(1, NA, 2, 3, 2, 2, NA)
replace_na_mean <- function(x)
{
  replace(x, is.na(x), mean(x, na.rm = TRUE))
}
replace_na_mean(c(1, NA, 2, 3, 2, 2, NA))
###Output
_____no_output_____
###Markdown
Question 1
###Code
get_all_perms <- function(){
  rep(list(0:9), 4) %>%
expand.grid() %>%
nrow()
}
get_all_perms()
###Output
_____no_output_____
###Markdown
Question 2
###Code
get_all_perms <- function(size){
list(0:9)%>%
rep(size) %>%
expand.grid() %>%
nrow()
}
get_all_perms(size = 4)
get_all_perms(size = 3)
###Output
_____no_output_____
###Markdown
Question 3
###Code
fish_samples = c(0:340)
probs = mapply(dbinom, fish_samples, size=340, prob=0.43)
sum(probs)
max(probs)
fish_samples[which.max(probs)]
ggplot() +
geom_bar(aes(x=fish_samples, y=probs), stat = "identity")+
xlim(100,200)
###Output
Warning message:
"Removed 240 rows containing missing values (position_stack)."Warning message:
"Removed 2 rows containing missing values (geom_bar)."
###Markdown
Question 4
###Code
x_values = seq(8,12, 0.05)
probs_1 = mapply(dnorm, x_values, mean = 10, sd = 0.5)
probs_2 = mapply(dnorm, x_values, mean = 10.2, sd = 0.5)
ggplot() +
geom_line(aes(x=x_values, y=probs_1, color = "red"), size = 2) +
geom_line(aes(x=x_values, y=probs_2, color = "blue"), size = 2) +
scale_color_manual(labels = c("probs_1", "probs_2"), values = c("blue", "red"))
set.seed(42)
x_sample <- sample(probs_1, 40)
y_sample <- sample(probs_2,40)
t.test(x_sample,y_sample)
##I would definitely not feel comfortable saying these two sample sets are statistically different.
###Output
_____no_output_____
###Markdown
Question 5
###Code
x_values_challenge = seq(8, 12, 0.1)
probs_3 = mapply(dnorm, x_values_challenge, mean = 10, sd = 0.5)
ggplot() +
geom_line(aes(x=x_values_challenge, y=probs_3), size = 2)
probs_3[x_values_challenge == 8]
probs_3[x_values_challenge == 9]
probs_3[x_values_challenge == 10]
probs_3[x_values_challenge == 11]
probs_3[x_values_challenge == 12]
###Output
_____no_output_____ |
Hands-on lab/artifacts/ProductSeasonality_sklearn.ipynb | ###Markdown
Train a classifier to determine product seasonality
###Code
#import necessary libraries
from azureml.core import Workspace, Dataset
from azureml.data.datapath import DataPath
from sklearn.preprocessing import StandardScaler, MinMaxScaler, Normalizer
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Establish workspace from the environment and retrieve the defined AggregatedProductSeasonality dataset
###Code
ws = Workspace.from_config()
# Load data from registered dataset
dataset = Dataset.get_by_name(ws, name='AggregatedProductSeasonality')
prod_df = dataset.to_pandas_dataframe()
# Pivot the data frame to make daily sale items counts columns.
prod_prep_df = prod_df.set_index(['ProductId', 'Seasonality','TransactionDateId'])['TransactionItemsCount'].unstack()
prod_prep_df = prod_prep_df.rename_axis(None, axis=1).reset_index()
prod_prep_df = prod_prep_df.fillna(0)
###Output
_____no_output_____
###Markdown
Isolate features and prediction classes. Standardize the features by removing the mean and scaling to unit variance.
###Code
X = prod_prep_df.iloc[:, 2:].values
y = prod_prep_df['Seasonality'].values
X_scale = StandardScaler().fit_transform(X)
# Perform dimensionality reduction using Principal Components Analysis and two target components.
pca = PCA(n_components=2)
principal_components = pca.fit_transform(X_scale)
principal_components = MinMaxScaler().fit_transform(principal_components)
pca_df = pd.DataFrame(data = principal_components, columns = ['pc1', 'pc2'])
pca_df = pd.concat([pca_df, prod_prep_df[['Seasonality']]], axis = 1)
###Output
_____no_output_____
###Markdown
Visualize the products data mapped to the two principal components
Display the products data frame in two dimensions (mapped to the two principal components).
Note the clear separation of clusters.
###Code
fig = plt.figure(figsize = (6,6))
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('Principal Component 1', fontsize = 15)
ax.set_ylabel('Principal Component 2', fontsize = 15)
ax.set_title('2 component PCA', fontsize = 20)
targets = [1, 2, 3]
colors = ['r', 'g', 'b']
for target, color in zip(targets,colors):
indicesToKeep = pca_df['Seasonality'] == target
ax.scatter(pca_df.loc[indicesToKeep, 'pc1']
, pca_df.loc[indicesToKeep, 'pc2']
, c = color
, s = 1)
ax.legend(['All Season Products', 'Summer Products', 'Winter Products'])
ax.plot([-0.05, 1.05], [0.77, 1.0], linestyle=':', linewidth=1, color='y')
ax.plot([-0.05, 1.05], [0.37, 0.6], linestyle=':', linewidth=1, color='y')
ax.grid()
plt.show()
plt.close()
# Redo the Principal Components Analysis, this time with twenty dimensions.
def col_name(x):
return f'f{x:02}'
pca = PCA(n_components=20)
principal_components = pca.fit_transform(X_scale)
principal_components = MinMaxScaler().fit_transform(principal_components)
X = pd.DataFrame(data = principal_components, columns = list(map(col_name, np.arange(0, 20))))
pca_df = pd.concat([X, prod_prep_df[['ProductId']]], axis = 1)
pca_automl_df = pd.concat([X, prod_prep_df[['Seasonality']]], axis = 1)
X = X[:4500]
y = prod_prep_df['Seasonality'][:4500]
pca_automl_df = pca_automl_df[:4500]
###Output
_____no_output_____
###Markdown
Register the PCA dataframe as a dataset with AML Studio
###Code
# register the pca_automl_df dataset with azure machine learning workspace for automl use in the next task
# due to the distributed nature, we must first persist the data to storage to be read by a registered dataset
local_path = 'pca.parquet'
pca_automl_df.to_parquet(local_path)
pca_datastore = ws.get_default_datastore()
pca_datastore.upload_files(files=['pca.parquet'], target_path='data', overwrite=True)
pca_ds = Dataset.Tabular.from_parquet_files(pca_datastore.path('data/pca.parquet'))
pca_ds = pca_ds.register(workspace=ws, name='pcadata', description='data for automl')
###Output
_____no_output_____
###Markdown
Train ensemble of trees classifier (using XGBoost)
###Code
# Split into test and training data sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=123)
#train
model = XGBClassifier()
model.fit(X_train, y_train)
# Perform predictions with the newly trained model
y_pred = model.predict(X_test)
# Calculate the accuracy of the model using test data
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
###Output
_____no_output_____ |
TPSA applied.ipynb | ###Markdown
Truncated Power Series and Differential Algebra applied...The present document demonstrates how "Truncated Power Series" and "Differential Algebra" techniques can help to check particle tracking codes for symplecticity. $-$ Adrian Oeftiger, January 2018
###Code
import numpy as np
from pprint import pprint
from matplotlib import pyplot as plt
%matplotlib inline
# comment this if you don't want to plot with the seaborn library
import seaborn as sns
sns.set_context('talk', font_scale=1.4, rc={'lines.linewidth': 3})
sns.set_style('whitegrid', {'grid.linestyle': ':', 'axes.edgecolor': '0.5',
'axes.linewidth': 1.2, 'legend.frameon': True})
import libTPSA
# see also libsymple.py
###Output
_____no_output_____
###Markdown
Example for 2D phase space in $x$ and $p_x$: the first entry in the series is 0-order, the second entry is the differential $dx$ and the third is $dp_x$:
###Code
# position at 2, infinitesimal dx
x = libTPSA.TPS([2, 1, 0])
# momentum at 0, infinitesimal dp_x
xp = libTPSA.TPS([0, 0, 1])
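# Quick peek at the TPS contents, using the libTPSA attributes employed later in this
# notebook: .real holds the zeroth-order value and .diff the first-order differentials.
print(x.real, x.diff)    # expected: 2 and the unit differential (1, 0)
print(xp.real, xp.diff)  # expected: 0 and the unit differential (0, 1)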
###Output
_____no_output_____
###Markdown
The initial differentials make up a unit Jacobian:
###Code
jacobian = [x.diff, xp.diff]
pprint (jacobian, width=10)
print ("\nJacobian determinant: {}".format(
np.linalg.det(jacobian)))
###Output
[(1, 0),
(0, 1)]
Jacobian determinant: 1.0
###Markdown
Take the harmonic oscillator as a simple example, e.g. $\mathcal{H}=\frac{1}{2}\left(p_x^2 + x^2\right)$ such that the equations of motion are simply$$\partial \mathcal{H} / \partial p_x = p_x$$ and $$\partial \mathcal{H} / \partial x = x.$$
###Code
def H_p(xp): return xp # momentum derivative
def H_x(x): return x # spatial derivative
###Output
_____no_output_____
###Markdown
Symplectic ExampleLet's integrate these equations of motion with some symplectic leap frogging:
###Code
timestep = 0.01
for step in range(42):
xp = xp - H_x(x) * timestep / 2.
x = x + H_p(xp) * timestep
xp = xp - H_x(x) * timestep / 2.
###Output
_____no_output_____
###Markdown
By now, the tracking has changed the numbers (0-order of the TPS) as expected:
###Code
x.real, xp.real
###Output
_____no_output_____
###Markdown
The Jacobian matrix entries changed along the tracking as well. As we started with a unit Jacobian matrix, we can now see what the tracking did in terms of the differential flow: let's check the Jacobian determinant...
###Code
jacobian = [x.diff, xp.diff]
pprint (jacobian, width=10)
print ("\nJacobian determinant: {}".format(
np.linalg.det(jacobian)))
###Output
[(0.91308822672208945,
0.40776714810377629),
(-0.40775695392507377,
0.91308822672208845)]
Jacobian determinant: 1.0
###Markdown
Voila, our leap frogging is indeed symplectic. The Jacobian determinant wasn't changed and remained unity, i.e. phase space volume is preserved during the tracking. During the tracking we can even plot the 0-order of the TPS to see our harmonic oscillator:
###Code
n_steps = 600
timestep = 0.01
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 2))
plt.subplots_adjust(wspace=0.3)
ax1.set_title('phase space')
ax1.set_xlabel('$x$')
ax1.set_ylabel('$p_x$')
ax2.set_title('oscillation')
ax2.set_xlabel('step')
ax2.set_ylabel('$x$')
cm = plt.get_cmap('viridis')
cgen = (cm(float(i) / (n_steps - 1)) for i in range(n_steps))
for step in range(n_steps):
xp = xp - H_x(x) * timestep / 2.
x = x + H_p(xp) * timestep
xp = xp - H_x(x) * timestep / 2.
color = next(cgen)
ax1.scatter(x.real, xp.real, color=color)
ax2.scatter(step, xp.real, color=color)
###Output
_____no_output_____
###Markdown
Non-symplectic ExampleThis time we will do 2nd order Runge-Kutta showing that it is not symplectic. We start from the same set up:
###Code
# position at 2, infinitesimal dx
x = libTPSA.TPS([2, 1, 0])
# momentum at 0, infinitesimal dp_x
xp = libTPSA.TPS([0, 0, 1])
###Output
_____no_output_____
###Markdown
The Runge-Kutta tracking:
###Code
n_steps = 200
timestep = 0.3
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 2))
plt.subplots_adjust(wspace=0.3)
ax1.set_title('phase space')
ax1.set_xlabel('$x$')
ax1.set_ylabel('$p_x$')
ax2.set_title('oscillation')
ax2.set_xlabel('step')
ax2.set_ylabel('$x$')
cm = plt.get_cmap('viridis')
cgen = (cm(float(i) / (n_steps - 1)) for i in range(n_steps))
for step in range(n_steps):
x_1 = timestep * H_p(xp)
xp_1 = -timestep * H_x(x)
x_2 = timestep * H_p(xp + 0.5*xp_1)
xp_2 = -timestep * H_x(x + 0.5*x_1)
x = x + x_2
xp = xp + xp_2
color = next(cgen)
ax1.scatter(x.real, xp.real, color=color)
ax2.scatter(step, xp.real, color=color)
###Output
_____no_output_____
###Markdown
Clearly the energy of the oscillator grows over time due to the non-symplecticity of the integration method. Let's check the Jacobian to verify this:
###Code
jacobian = [x.diff, xp.diff]
pprint (jacobian, width=10)
print ("\nJacobian determinant: {}".format(
np.linalg.det(jacobian)))
###Output
[(-0.46102158829335288,
-1.1340845391251824),
(1.1340845391251824,
-0.46102158829335288)]
Jacobian determinant: 1.49868864676
###Markdown
Of course, a time step this large shows the non-symplecticity much more clearly. However, even if we stick to the small time step from the symplectic example, we will still see the Jacobian determinant depart from unity, although the artificial heating is not visible by eye:
###Code
# position at 2, infinitesimal dx
x = libTPSA.TPS([2, 1, 0])
# momentum at 0, infinitesimal dp_x
xp = libTPSA.TPS([0, 0, 1])
n_steps = 600
timestep = 0.01
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 2))
plt.subplots_adjust(wspace=0.3)
ax1.set_title('phase space')
ax1.set_xlabel('$x$')
ax1.set_ylabel('$p_x$')
ax2.set_title('oscillation')
ax2.set_xlabel('step')
ax2.set_ylabel('$x$')
cm = plt.get_cmap('viridis')
cgen = (cm(float(i) / (n_steps - 1)) for i in range(n_steps))
for step in range(n_steps):
x_1 = timestep * H_p(xp)
xp_1 = -timestep * H_x(x)
x_2 = timestep * H_p(xp + 0.5*xp_1)
xp_2 = -timestep * H_x(x + 0.5*x_1)
x = x + x_2
xp = xp + xp_2
color = next(cgen)
ax1.scatter(x.real, xp.real, color=color)
ax2.scatter(step, xp.real, color=color)
jacobian = [x.diff, xp.diff]
pprint (jacobian, width=10)
print ("\nJacobian determinant: {}".format(
np.linalg.det(jacobian)))
###Output
[(0.96019894271023742,
-0.27931969214373153),
(0.27931969214373153,
0.96019894271023742)]
Jacobian determinant: 1.0000015
|
notebooks/Data - t20a.ipynb | ###Markdown
Treatment T20
###Code
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import display
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
import seaborn as sns
import imblearn
TREATMENT = "t20a"
export_folder = f"../data/output/diagrams/{TREATMENT}"
os.makedirs(export_folder, exist_ok=True)
# Read and sanitize the data
df = pd.read_csv(f"../data/{TREATMENT}/export/result__{TREATMENT}_prop.csv")
df_full = df.copy()
# drop_cols = ["worker_id", "resp_worker_id", "prop_worker_id", "updated", "status", "job_id", "status", "timestamp", "rowid", "offer_dss", "offer", "offer_final", "completion_code"]
drop_cols = ["worker_id", "resp_worker_id", "prop_worker_id", "updated", "status", "job_id", "status", "timestamp", "rowid", "offer_dss", "offer", "offer_final", "completion_code", "prop_time_spent"]
df = df[[col for col in df.columns if col not in drop_cols]]
df = df.dropna()
cols = [col for col in df.columns if col != "min_offer"] + ["min_offer"]
df_full[["ai_offer", "min_offer"]].describe()
###Output
_____no_output_____
###Markdown
**Correlation to the target value**
**Responder's min_offer / Proposer's offer and final_offer distribution**
###Code
bins = list(range(0, 105, 5))
f, axes = plt.subplots(1, 2, figsize=(8,4))
# ax = sns.distplot(df["min_offer_final"], hist=True, kde=False, axlabel="Responder minimum offer", bins=bins, label="Responder", ax=axes[0])
ax = sns.distplot(df["min_offer_final"], hist=True, kde=False, axlabel="minimum offer", bins=bins, label="Responder + DSS info", ax=axes[0])
_ = ax.set_ylabel("Frequency")
ax.legend(loc='best')
ax = sns.distplot(df_full["ai_offer"], hist=True, kde=False, axlabel="offer", bins=bins, label="Autonomous DSS", ax=axes[1])
_ = ax.set_ylabel("Frequency")
ax.legend(loc='center right')
plt.tight_layout()
ax.figure.savefig(os.path.join(export_folder, "min_offer_offer.pdf"))
bins = list(range(-100, 105, 5))
plt.figure(figsize=(8,4))
offer_min_offer_diff = df_full["ai_offer"] - df_full["min_offer"]
ax = sns.distplot(offer_min_offer_diff, hist=True, kde=False, axlabel="offer - minimum offer", bins=bins, label="Responder")
_ = ax.set_ylabel("Frequency")
# offer_min_offer_diff = df_full["offer_final"] - df_full["min_offer_final"]
# ax = sns.distplot(offer_min_offer_diff, hist=True, kde=False, axlabel="offer - minimum offer", bins=bins, label="Responder + DSS info", ax=ax)
# plt.legend()
plt.tight_layout()
ax.figure.savefig(os.path.join(export_folder, "offer-min_offer.pdf"))
from core.models.metrics import cross_compute, avg_gain_ratio, gain_mean, rejection_ratio, loss_sum, MAX_GAIN
def get_infos(min_offer, offer, metrics=None, do_cross_compute=False):
if metrics is None:
metrics = [avg_gain_ratio, gain_mean, rejection_ratio, loss_sum]
#df = pd.DataFrame()
infos = dict()
for idx, metric in enumerate(metrics):
if do_cross_compute:
infos[metric.__name__] = cross_compute(min_offer, offer, metric)
else:
infos[metric.__name__] = metric(min_offer, offer)
return infos
###Output
_____no_output_____
###Markdown
**Proposer's performance**
###Code
df_infos = pd.DataFrame()
#Human (fixed-matching) performance t00
df_infos = df_infos.append(get_infos(df_full['min_offer'], df_full['ai_offer']), ignore_index=True)
#Human (cross-matched) average performance t00
df_infos = df_infos.append(get_infos(df_full['min_offer'], df_full['offer'], do_cross_compute=True), ignore_index=True)
#Human + DSS (fixed-matching) performance t00
df_infos = df_infos.append(get_infos(df_full['min_offer'], df_full['offer_final']), ignore_index=True)
#Human + DSS(cross-matched) average performance t00
df_infos = df_infos.append(get_infos(df_full['min_offer'], df_full['offer_final'], do_cross_compute=True), ignore_index=True)
#Top-model (fixed 50% prediction) average performance t00
fixed_offer = MAX_GAIN // 2
df_infos = df_infos.append(get_infos(df_full['min_offer'], [fixed_offer], do_cross_compute=True), ignore_index=True)
df_infos.index = ["Proposer", "Proposer (cross matched)", "Proposer + DSS", "Proposer + DSS (cross matched)", "AI-System"]
df_infos = df_infos.loc[["Proposer", "Proposer + DSS", "AI-System"]]
df_infos
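# woa: presumably "weight of advice" - |final offer - initial offer| / |AI advice - initial offer|,
# with NaN/inf entries removed, clipped to [0, 1], and averaged over participants.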
def woa(offer_final, offer, ai_offer):
res = (abs(offer_final - offer) ) / (abs(ai_offer - offer ))
res = res[np.invert(np.isnan(res) | np.isinf(res))]
res = np.clip(res, 0, 1)
return abs(res).mean()
def get_resp_variation(df_full):
df_full = df_full.copy()[df_full["min_offer"]>0]
return 100 * ((df_full["min_offer"] - df_full["min_offer_dss"]) / df_full["min_offer"]).mean()
def get_rel_gain(df_infos):
acc = df_infos['avg_gain_ratio']['Proposer']
acc_dss = df_infos['avg_gain_ratio']['Proposer + DSS']
return 100 * abs(acc - acc_dss) / acc
def get_dss_usage(df_full):
return 100 * (df_full.ai_nb_calls > 0).mean()
print("rel_gain: ", round(get_rel_gain(df_infos), 2), "%")
print("dss_usage: ", round(get_dss_usage(df_full), 2), "%")
print("rel_min_offer_variation: ", round(get_resp_variation(df_full), 2), "%")
###Output
rel_gain: 9.21 %
dss_usage: 0.0 %
rel_min_offer_variation: 0.0 %
|
kv_laskuharjoitukset/kierros2/Kvanttilaskenta, kierros 2.ipynb | ###Markdown
$$\newcommand{\ket}[1]{\left|#1\right\rangle}$$$$\newcommand{\bra}[1]{\left\langle#1\right|}$$
Exercise 1
In a quantum computer's memory, information is represented with qubits. As a recap of the previous round, the basis states of a qubit are $\ket 0$ and $\ket{1}$, and they are written as column vectors
$$\begin{align}\ket 0 &= \pmatrix {1 \\ 0}, \\\ket 1 &= \pmatrix {0 \\ 1}.\end{align}$$
The matrix representation of the NOT gate is
$$X=\pmatrix {0 & 1 \\ 1 & 0}$$
On round 1 we calculated that when the $X$ gate operates on a qubit, the state of the qubit is flipped (bit-flip gate), i.e.
$$X\ket 0 = \ket 1 \\X\ket 1 = \ket 0$$
You may submit parts a) and b) to Classroom in any format you like.
a) Show, using the matrix representation, that
$$\begin{align}XX\ket 0 &= \ket 0 \quad\text{and} \\XX\ket 1 &= \ket 1\end{align}$$
(tutorial video on how the NOT gate works)
b) Show by computing the matrix product that $XX=I$, where $I$ is the identity matrix. This means that $X$ is its own inverse matrix.
c) In the following example, Qiskit is used to create a quantum circuit with one qubit, which is operated on by an $X$ gate.
###Code
# Import the qiskit library
from qiskit import *
# Create a quantum circuit in the variable circ1 with one qubit (q in the diagram).
circ1 = QuantumCircuit(1)
# Apply an X gate to qubit 0, i.e. the only qubit of the circuit.
circ1.x(0)
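# Aside (not part of the original exercise): parts a) and b) can be sanity-checked
# numerically with numpy by verifying that the product XX equals the identity matrix.
import numpy as np
X_matrix = np.array([[0, 1],
                     [1, 0]])
print(np.allclose(X_matrix @ X_matrix, np.eye(2)))  # expected: True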
# Draw the created circuit.
# The parameter output="mpl" renders the drawing graphically.
circ1.draw(output="mpl")
###Output
_____no_output_____
###Markdown
Following the example above, write Python code that creates a quantum circuit with two qubits. Then add gates to the circuit so that two $X$ operations are applied to the first qubit and one $X$ operation to the second. Finally, draw the circuit you created.
###Code
# Write your code here
###Output
_____no_output_____
###Markdown
Exercise 2
The Hadamard gate, or $H$ gate, takes the qubit states $\ket 0$ and $\ket{1}$ into superpositions as follows:
$$H\ket 0 = \frac{1}{\sqrt 2} (\ket 0 + \ket 1) = \ket + \\H\ket 1 = \frac{1}{\sqrt 2} (\ket 0 - \ket 1) = \ket -$$
Note that these particular states are also sometimes denoted $\ket +$ and $\ket -$.
The matrix representation of the $H$ gate is
$$H = \frac{1}{\sqrt 2} \pmatrix {1 & 1 \\ 1 & -1}$$
a) Show, using the matrix representation, that
$$\begin{align}HH\ket 0 &= \ket 0 \quad\text{and} \\HH\ket 1 &= \ket 1.\end{align}$$
b) Show by computing the matrix product that $HH=I$, where $I$ is the identity matrix. This again means that $H$ is its own inverse matrix.
(tutorial video on how the H gate works)
c) In the following example, Qiskit is used to create a quantum circuit with one qubit and an $H$ gate.
###Code
from qiskit import *
# Create a one-qubit quantum circuit as in the previous example.
circ2 = QuantumCircuit(1)
# Apply an H gate to qubit 0.
circ2.h(0)
# Draw the created circuit.
circ2.draw(output="mpl")
###Output
_____no_output_____
###Markdown
Following the example above, write Python code that creates a quantum circuit with two qubits. Then add gates to the circuit so that one $H$ operation is applied to the first qubit and one $X$ operation to the second. Draw this circuit as well.
###Code
# Write your code here
###Output
_____no_output_____
###Markdown
Exercise 3
The qubit states $\ket +$ and $\ket -$ were defined as
$$ \ket + = \frac{1}{\sqrt 2} (\ket 0 + \ket 1), \\ \ket - = \frac{1}{\sqrt 2} (\ket 0 - \ket 1). $$
The matrix representation of the $Z$ gate (the so-called phase-flip gate) is
$$Z= \pmatrix {1 & 0 \\ 0 & -1}$$
Show that the following equations hold by computing the matrix products:
$$\begin{align}Z\ket + &= \ket - \quad\text{and}\\Z\ket - &= \ket +\end{align}$$
Exercise 4
Exercises 1 - 3 dealt with a single-qubit system. In the next two exercises we look at the controlled NOT gate ($\mathit{CNOT}$), which is an operation between two qubits. In a two-qubit quantum circuit the state of the system is expressed with the following basis vectors:
$$\ket{00},\ \ket{01},\ \ket{10},\ \ket{11}, $$
where
$$\ket{00}=\pmatrix{1 \\ 0 \\ 0 \\0},\ \ket{01}=\pmatrix{0 \\ 1 \\ 0 \\0},\ \ket{10}=\pmatrix{0 \\ 0 \\ 1 \\0},\ \ket{11}=\pmatrix{0 \\ 0 \\ 0 \\1}.$$
When the $\mathit{CNOT}$ gate operates, the first qubit is the so-called control qubit and the second one is the target qubit. $\mathit{CNOT}$ acts on the state of the whole system as follows:
$$\ket{00} \rightarrow \ket{00} \\\ket{01} \rightarrow \ket{01} \\\ket{10} \rightarrow \ket{11} \\\ket{11} \rightarrow \ket{10} \\$$
The result means that if the first, i.e. control, qubit is $0$, the state of the target qubit does not change. If the control qubit is $1$, the state of the target qubit is flipped. The matrix representation of the $\mathit{CNOT}$ gate is
$$\mathit{CNOT} = \pmatrix {1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} $$
Note that although the matrix is denoted with several letters, it is not a product of several matrices.
Circuit symbol, where the qubit drawn on top is the control and the one below is the target:
a) Compute the product of the $\mathit{CNOT}$ matrix and the column vector describing the two-qubit state, and show that
$$\ket{01} \rightarrow \ket{01} \quad\text{i.e. compute the product } \mathit{CNOT} \ket{01} \\\ket{10} \rightarrow \ket{11} \quad\text{i.e. compute the product } \mathit{CNOT} \ket{10} \\$$
(tutorial video on how the CNOT gate works)
b) Below is an example in which Qiskit creates a quantum circuit with two qubits. The first qubit is first operated on by a Hadamard gate $H$, after which a $\mathit{CNOT}$ operation is applied between the two qubits.
###Code
from qiskit import *
circ3 = QuantumCircuit(2)
circ3.h(0)
circ3.cx(0,1)
# Draw the created circuit.
circ3.draw(output="mpl")
###Output
_____no_output_____
###Markdown
Note that in the figure, time, i.e. the order of execution, advances from left to right. Write code below that creates a three-qubit quantum circuit. An $H$ gate operates on the first qubit. After that, the circuit has a $\mathit{CNOT}$ gate between qubits 0 and 1 such that qubit 0 is the control qubit and qubit 1 the target. Also add a $\mathit{CNOT}$ gate between qubits 1 and 2, with qubit 1 as the control.
###Code
# Write your code here
###Output
_____no_output_____
###Markdown
Exercise 5
In the next example the quantum circuit is extended with a two-qubit quantum register and a two-bit classical register. The quantum circuit is created by passing the registers as parameters to the QuantumCircuit() command. When the $H$ gate operates on the system, the state of qubit 0 becomes a superposition of the states $\ket 0$ and $\ket 1$. When the state of the circuit is measured, the system collapses to a particular well-defined state, whose value is stored in the classical register. In a quantum mechanical system every possible measurement outcome has a certain probability, and the probabilities of all the different outcomes sum to 1.
###Code
from qiskit import *
# Create a 2-qubit quantum register for the circuit.
qr = QuantumRegister(2)
# Create a two-bit classical register for storing the measurement results.
cr = ClassicalRegister(2)
# Now create the quantum circuit using the registers that were just created.
mycircuit = QuantumCircuit(qr, cr)
# The H gate operates on qubit 0.
mycircuit.h(qr[0])
# Tell the circuit to measure both qubits and store the results in the classical register.
mycircuit.measure(qr, cr)
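# Optional illustration (assumes the Aer simulator that ships with the qiskit meta-package
# is installed): running the measured circuit many times should give roughly 50/50 counts
# for the outcomes '00' and '01', reflecting the superposition created on qubit 0.
from qiskit import Aer, execute
backend = Aer.get_backend('qasm_simulator')
counts = execute(mycircuit, backend, shots=1024).result().get_counts()
print(counts)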
mycircuit.draw(output='mpl')
###Output
_____no_output_____
###Markdown
a) Explain the operating principle of the quantum circuit defined above. How does the circuit work?
Additional material: Measuring a qubit?
Write your answer here.
b) Try defining the circuit with Python code.
###Code
# Write your code here
###Output
_____no_output_____ |
H&M Dataset (EDA)/references/h-m-how-to-calculate-map-12.ipynb | ###Markdown
Thank you for visiting
In this notebook, I want to show an example of how to calculate MAP@12.
If you like this notebook, please upvote 😉
Version update (Feb 15, 2022)
- I received a message from @hervind and @t88take pointing out my mistake about the competition metric.
- So, I fixed this notebook.
- Thank you.
Version update (Feb 16, 2022)
- I fixed this notebook again. 😂
- I saw some discussions and learned from previously released notebooks.
- If this is useful for you, I'm happy too.
###Code
import numpy as np
import pandas as pd
import gc
import os
import time
import random
from tqdm.auto import tqdm
def visualize_df(df):
print(df.shape)
display(df.head())
transactions_train = pd.read_csv('../input/h-and-m-personalized-fashion-recommendations/transactions_train.csv')
visualize_df(transactions_train)
sub = pd.read_csv('../input/h-and-m-personalized-fashion-recommendations/sample_submission.csv')
del sub['prediction']; gc.collect()
visualize_df(sub)
# transactions_train['t_dat'].unique()[-7:]
# array(['2020-09-16', '2020-09-17', '2020-09-18', '2020-09-19',
# '2020-09-20', '2020-09-21', '2020-09-22'], dtype=object)
val_start_date = '2020-09-16'
train_data = transactions_train.query(f"t_dat < '{val_start_date}'").reset_index(drop=True)
valid_data = transactions_train.query(f"t_dat >= '{val_start_date}'").reset_index(drop=True)
visualize_df(train_data)
visualize_df(valid_data)
train_unq = train_data.groupby('customer_id')['article_id'].apply(list).reset_index()
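# Join each customer's article_ids into a space-separated string, re-adding the leading '0'
# that the 10-digit article ids lose when they are read in as integers.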
train_unq['valid_pred'] = train_unq['article_id'].map(lambda x: '0'+' 0'.join(str(x)[1:-1].split(', ')))
visualize_df(train_unq)
valid_unq = valid_data.groupby('customer_id')['article_id'].apply(list).reset_index()
valid_unq['valid_true'] = valid_unq['article_id'].map(lambda x: '0'+' 0'.join(str(x)[1:-1].split(', ')))
visualize_df(valid_unq)
merged = pd.merge(sub, train_unq, on='customer_id', how='left').fillna('')
merged = pd.merge(merged, valid_unq, on='customer_id', how='left').fillna('')
del merged['article_id_x'], merged['article_id_y']; gc.collect()
merged.head()
# https://www.kaggle.com/c/h-and-m-personalized-fashion-recommendations/discussion/306007
# https://github.com/benhamner/Metrics/blob/master/Python/ml_metrics/average_precision.py
def apk(actual, predicted, k=10):
"""
Computes the average precision at k.
    This function computes the average precision at k between two lists of
items.
Parameters
----------
actual : list
A list of elements that are to be predicted (order doesn't matter)
predicted : list
A list of predicted elements (order does matter)
k : int, optional
The maximum number of predicted elements
Returns
-------
score : double
The average precision at k over the input lists
"""
if len(predicted)>k:
predicted = predicted[:k]
score = 0.0
num_hits = 0.0
for i,p in enumerate(predicted):
if p in actual and p not in predicted[:i]:
num_hits += 1.0
score += num_hits / (i+1.0)
if not actual:
return 0.0
return score / min(len(actual), k)
def mapk(actual, predicted, k=10):
"""
Computes the mean average precision at k.
    This function computes the mean average precision at k between two lists
of lists of items.
Parameters
----------
actual : list
A list of lists of elements that are to be predicted
(order doesn't matter in the lists)
predicted : list
A list of lists of predicted elements
(order matters in the lists)
k : int, optional
The maximum number of predicted elements
Returns
-------
score : double
The mean average precision at k over the input lists
"""
return np.mean([apk(a,p,k) for a,p in zip(actual, predicted)])
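# Tiny illustrative check on made-up lists (not competition data): the only relevant item
# sits at rank 2 of the prediction, so AP@12 should be 0.5.
print(apk(['a'], ['b', 'a', 'c'], k=12))  # 0.5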
tqdm.pandas()
mapk(merged['valid_true'].map(lambda x: x.split()), merged['valid_pred'].map(lambda x: x.split()), k=12)
###Output
_____no_output_____
###Markdown
CV is 0.000147 Actual LB score is 0.001XX
###Code
sub = merged[['customer_id', 'valid_pred']].copy()
sub.columns = ['customer_id', 'prediction']
print(sub.shape)
sub.to_csv('submission.csv', index=False)
###Output
(1371980, 2)
|
examples/tutorial/jupyter/execution/omnisci_on_native/local/exercise_1.ipynb | ###Markdown
Scale your pandas workflows by changing one line of code
Exercise 1: How to use Modin
**GOAL**: Learn how to import Modin to accelerate and scale pandas workflows.
Modin is a drop-in replacement for pandas that distributes the computation across all of the cores in your machine or in a cluster. In practical terms, this means that you can continue using the same pandas scripts as before and expect the behavior and results to be the same. The only thing that needs to change is the import statement. Normally, you would change:
```python
import pandas as pd
```
to:
```python
import modin.pandas as pd
```
Changing this line of code will allow you to use all of the cores in your machine to do computation on your data. One of the major performance bottlenecks of pandas is that it only uses a single core for any given computation. Modin exposes an API that is identical to pandas, allowing you to continue interacting with your data as you would with pandas. There are no additional commands required to use Modin locally. Partitioning, scheduling, data transfer, and other related concerns are all handled by Modin under the hood.
pandas on a multicore laptop
Modin on a multicore laptop
Concept for exercise: Dataframe constructor
Often when playing around in pandas, it is useful to create a DataFrame with the constructor. That is where we will start.
```python
import numpy as np
import pandas as pd

frame_data = np.random.randint(0, 100, size=(2**10, 2**5))
df = pd.DataFrame(frame_data)
```
When creating a dataframe from a non-distributed object, it will take extra time to partition the data for Modin. When this is happening, you will see this message:
```
UserWarning: Distributing object. This may take some time.
```
Modin uses Ray as an execution engine by default. Since this notebook is related to OmniSci, let's run the examples on the OmniSci engine. To do this, we need to activate OmniSci either via the Modin config or a Modin environment variable. See more in the [OmniSci usage](https://github.com/modin-project/modin/blob/master/docs/development/using_omnisci.rst) section.
###Code
import modin.config as cfg
cfg.StorageFormat.put('omnisci')
# Note: Importing notebooks dependencies. Do not change this code!
import numpy as np
import pandas
import sys
import modin
pandas.__version__
modin.__version__
# Implement your answer here. You are also free to play with the size
# and shape of the DataFrame, but beware of exceeding your memory!
import pandas as pd
frame_data = np.random.randint(0, 100, size=(2**10, 2**5))
df = pd.DataFrame(frame_data)
# ***** Do not change the code below! It verifies that
# ***** the exercise has been done correctly. *****
try:
assert df is not None
assert frame_data is not None
assert isinstance(frame_data, np.ndarray)
except:
raise AssertionError("Don't change too much of the original code!")
assert "modin.pandas" in sys.modules, "Not quite correct. Remember the single line of code change (See above)"
import modin.pandas
assert pd == modin.pandas, "Remember the single line of code change (See above)"
assert hasattr(df, "_query_compiler"), "Make sure that `df` is a modin.pandas DataFrame."
print("Success! You only need to change one line of code!")
###Output
_____no_output_____
###Markdown
Now that we have created a toy example for playing around with the DataFrame, let's print it out in different ways.
Concept for exercise: Data Interaction and Printing
When interacting with data, it is very important to look at different parts of the data (e.g. `df.head()`). Here we will show that you can print the modin.pandas DataFrame in the same ways you would pandas.
###Code
# When working with non-string column labels it could happen that some backend logic would try to insert a column
# with a string name to the frame, so we do add_prefix()
df = df.add_prefix("col")
# Print the first 10 lines.
df.head(10)
df.count()
###Output
_____no_output_____ |
ep4.ipynb | ###Markdown
---
Solution
This part of the notebook contains the solution for EP4.
Note that there are repeated imports in multiple code cells. This was done in order to keep each section individually runnable, as some training parts take a long time to run. The only mandatory section is the first one, as it separates the training data and creates the variables `D_X_train`, `D_X_val`, `D_y_train` and `D_y_val`. After running the first section's code, you should be able to run the other sections independently.
0. Lib versions
###Code
print('numpy version:', np.__version__)
import matplotlib
print('matplotlib version:', matplotlib.__version__)
del matplotlib
import tensorflow
print('tensorflow version: ', tensorflow.__version__)
del tensorflow
import sklearn
print('scikit-learn version:', sklearn.__version__)
del sklearn
###Output
numpy version: 1.19.5
matplotlib version: 3.4.1
tensorflow version: 2.5.0
scikit-learn version: 0.24.2
###Markdown
1. Dataset preparation
###Code
from sklearn.model_selection import train_test_split
D_X_train, D_X_val, D_y_train, D_y_val = train_test_split(X_train, y_train, test_size=0.3, random_state=42, stratify=y_train)
print(D_X_train.shape)
print(D_X_val.shape)
print(D_y_train.shape)
print(D_y_val.shape)
def print_distribution(y_train, y_test, train_label, test_label):
"""
Plots distribution of train and test answers
:param y_train: answers for train set
:type y_train: np.ndarray(shape=(M,))
:param y_test: answers for test set
:type y_test: np.ndarray(shape=(N,))
:param train_label: label shown for train set
:type train_label: str
:param test_label: label shown for test set
:type test_label: str
:return: nothing
:rtype: None
"""
labels = ["%s"%i for i in range(10)]
unique, counts = np.unique(y_train, return_counts=True)
uniquet, countst = np.unique(y_test, return_counts=True)
fig, ax = plt.subplots()
rects1 = ax.bar(unique - 0.2, counts, 0.25, label=train_label)
rects2 = ax.bar(unique + 0.2, countst, 0.25, label=test_label)
ax.legend()
ax.set_xticks(unique)
ax.set_xticklabels(labels)
plt.title('MNIST classes')
plt.xlabel('Class')
plt.ylabel('Frequency')
plt.show()
print_distribution(D_y_train, D_y_val, 'Train', 'Validation')
###Output
_____no_output_____
###Markdown
2. Training, evaluating and selecting models
2.1 Logistic regression
To train and evaluate a model using Logistic Regression, the method [LogisticRegressionCV](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegressionCV.html) was used. It implements Logistic Regression along with the cross-validation technique. It also applies regularization by default.
The hyperparameter is `C`, which is the inverse of the regularization strength (which we've seen in class as $\lambda$). The values chosen to be tested were `[0.01, 0.1, 1, 10, 100]`.
The cross-validation technique applied is a Stratified K-fold with 5 folds.
The number of iterations was increased due to multiple non-convergence warnings with the default value (`100` iterations).
After fitting the data, **the chosen `C` value was `10`**, with an **accuracy of approx. 91.73%**.
The decisions for this section were based on the following articles from the scikit-learn documentation: [about Logistic Regression](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression), [LogisticRegression method docs](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html), [LogisticRegressionCV method docs](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegressionCV.html), [the whole section about model selection](https://scikit-learn.org/stable/model_selection.html) and usage examples such as [this one](https://scikit-learn.org/stable/auto_examples/linear_model/plot_sparse_logistic_regression_mnist.html).
###Code
from sklearn.linear_model import LogisticRegressionCV
import warnings
warnings.filterwarnings('ignore') # ignores ConvergenceWarning
logistic_regression_model = LogisticRegressionCV(
Cs=[0.01, 0.1, 1, 10, 100],
cv=5,
max_iter=1000,
random_state=42
).fit(D_X_train, D_y_train)
warnings.filterwarnings('default') # reestablish warnings
print('Score for Logistic Regression model:', logistic_regression_model.score(D_X_train, D_y_train))
print('Chosen C parameter:', logistic_regression_model.C_)
###Output
Score for Logistic Regression model: 0.9172857142857143
Chosen C parameter: [10. 10. 10. 10. 10. 10. 10. 10. 10. 10.]
###Markdown
2.2 Neural network
To train and evaluate a model using Neural Networks, the method [MLPClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html) was used, along with [HalvingRandomSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.HalvingRandomSearchCV.html). The former implements a multi-layer perceptron neural network (with regularization). The latter receives the defined neural network object (it could be any other scikit-learn estimator) and trains it with multiple combinations of the given hyperparameters, in order to find the combination with the highest score (in this case, accuracy). This particular model selector randomly chooses the next combination to test and tries to eliminate some of the possibilities. Although it makes the training part faster, there might be an eliminated combination that performs better than the chosen one.
The hyperparameters are:
- `activation` (default value: 'relu'): activation function for the hidden layer;
- `alpha` (default value: 0.0001): L2 penalty (regularization term) parameter;
- `hidden_layer_sizes` (default value: (100,)): the ith element represents the number of neurons in the ith hidden layer;
- `learning_rate_init` (default value: 0.001): the initial learning rate used. It controls the step size in updating the weights;
- `max_iter` (default value: 200): maximum number of iterations. The solver iterates until convergence.
The values were based on examples such as [1](https://johdev.com/jupyter/2020/03/02/Sklearn_MLP_for_MNIST.html) and [2](https://nasirml.wordpress.com/2017/12/16/multi-layer-perceptron-in-tensorflow-part-2-mnist/).
The cross-validation technique applied is a Stratified K-fold with 5 folds.
After fitting the data, the chosen values were **`{'max_iter': 200, 'learning_rate_init': 0.01, 'hidden_layer_sizes': (196,), 'alpha': 0.001, 'activation': 'relu'}`**, with an **accuracy of approx. 96.67%**.
Other material used: documentation about [neural networks](https://scikit-learn.org/stable/modules/neural_networks_supervised.html) and [grid search](https://scikit-learn.org/stable/modules/grid_search.html).
###Code
from sklearn.neural_network import MLPClassifier
from sklearn.experimental import enable_halving_search_cv # noqa
from sklearn.model_selection import HalvingRandomSearchCV
import warnings
warnings.filterwarnings('ignore') # ignores ConvergenceWarning
param_grid = {
'hidden_layer_sizes': [(10,), (100,), (196,), (196, 98)],
'activation': ['tanh', 'relu'],
'alpha': [1e-3, 1e-4, 1e-5],
'learning_rate_init': [0.1, 0.01, 0.001],
'max_iter': [10, 50, 100, 200],
}
base_neural_network = MLPClassifier(early_stopping=True, random_state=42)
sh = HalvingRandomSearchCV(base_neural_network, param_grid, random_state=42).fit(D_X_train, D_y_train)
warnings.filterwarnings('default') # reestablish warnings
print('Score for Neural Network model', sh.best_score_)
print('Selected params:', sh.best_params_)
###Output
Score for Neural Network model 0.9667489711934157
Selected params: {'max_iter': 200, 'learning_rate_init': 0.01, 'hidden_layer_sizes': (196,), 'alpha': 0.001, 'activation': 'relu'}
###Markdown
2.3 SVM
To train and evaluate a model using SVM, the method [SVC](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) was used, along with [HalvingGridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.HalvingGridSearchCV.html). The former implements C-Support Vector Classification (with regularization). The latter works like the halving search used in the section above, but it exhaustively evaluates the whole parameter grid instead of sampling candidates at random.
The hyperparameters are:
- `C` (default value: 1.0): regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. The penalty is a squared L2 penalty;
- `gamma` (default value: 'scale'): kernel coefficient;
- `decision_function_shape` (default value: 'ovr'): whether to return a one-vs-rest ('ovr') decision function of shape (n_samples, n_classes) as all other classifiers, or the original one-vs-one ('ovo') decision function of libsvm, which has shape (n_samples, n_classes * (n_classes - 1) / 2).
The cross-validation technique applied is a Stratified K-fold with 5 folds.
After fitting the data, the chosen values were **`{'C': 10, 'decision_function_shape': 'ovo', 'gamma': 'scale'}`**, with an **accuracy of approx. 97.59%**.
Other material used: documentation about [SVM](https://scikit-learn.org/stable/modules/svm.html).
###Code
from sklearn.svm import SVC
from sklearn.experimental import enable_halving_search_cv # noqa
from sklearn.model_selection import HalvingGridSearchCV
param_grid = {
'C': [0.01, 0.1, 1, 10, 100],
'gamma': ['scale', 'auto', 1, 0.5, 0.01, 0.001],
'decision_function_shape': ['ovo', 'ovr'],
}
base_svm = SVC(random_state=42)
sh = HalvingGridSearchCV(base_svm, param_grid).fit(D_X_train, D_y_train)
print('Score for SVM model', sh.best_score_)
print('Selected params:', sh.best_params_)
###Output
Score for SVM model 0.9758961533881149
Selected params: {'C': 10, 'decision_function_shape': 'ovo', 'gamma': 'scale'}
###Markdown
3. Choosing a final model
In this section, models will be created with the hyperparameters selected in the previous section. Then, they will be trained with the whole `D_train` set, tested with `D_val`, and compared.
The analysis and some code are based on [this example from the scikit-learn docs](https://scikit-learn.org/stable/auto_examples/classification/plot_digits_classification.html).
3.1 Logistic Regression Model
###Code
from sklearn.linear_model import LogisticRegression
logistic_regression_model = LogisticRegression(
C=10,
max_iter=1000,
random_state=42).fit(D_X_train, D_y_train)
from sklearn.metrics import classification_report
predictions_logistic_regression = logistic_regression_model.predict(D_X_val)
print(classification_report(D_y_val, predictions_logistic_regression, digits=4))
from sklearn.metrics import r2_score
r2_score(D_y_val, predictions_logistic_regression)
from sklearn.metrics import plot_confusion_matrix
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10, 10))
plot_confusion_matrix(logistic_regression_model, D_X_val, D_y_val, normalize='pred', ax=ax)
plt.plot()
###Output
_____no_output_____
###Markdown
3.2 Neural Network Model
###Code
from sklearn.neural_network import MLPClassifier
neural_network_model = MLPClassifier(
activation='relu',
alpha=0.001,
early_stopping=True,
hidden_layer_sizes=(196,),
learning_rate_init=0.01,
random_state=42).fit(D_X_train, D_y_train)
from sklearn.metrics import classification_report
predictions_neural_network = neural_network_model.predict(D_X_val)
print(classification_report(D_y_val, predictions_neural_network, digits=4))
from sklearn.metrics import r2_score
r2_score(D_y_val, predictions_neural_network)
from sklearn.metrics import plot_confusion_matrix
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10, 10))
plot_confusion_matrix(neural_network_model, D_X_val, D_y_val, normalize='pred', ax=ax)
plt.plot()
###Output
_____no_output_____
###Markdown
3.3 Training the SVM Model
###Code
from sklearn.svm import SVC
svm_model = SVC(
C=10,
decision_function_shape='ovo',
random_state=42).fit(D_X_train, D_y_train)
from sklearn.metrics import classification_report
predictions_svm = svm_model.predict(D_X_val)
print(classification_report(D_y_val, predictions_svm, digits=4))
from sklearn.metrics import r2_score
r2_score(D_y_val, predictions_svm)
from sklearn.metrics import plot_confusion_matrix
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10, 10))
plot_confusion_matrix(svm_model, D_X_val, D_y_val, normalize='pred', ax=ax)
plt.plot()
###Output
_____no_output_____
###Markdown
3.4 Choosing a model
Based on the results shown, the **SVM** is the chosen model.
The linear model (Logistic Regression) presents the lowest accuracy, with an average of about _91%_. Both the Neural Network and the SVM models perform better, with _97%_ and _98%_ average accuracy respectively. Therefore, the SVM model wins with a slight advantage over the Neural Network model.
Also, all the models performed similarly to their training scores, which shows that the regularization techniques helped keep overfitting under control.
As a curiosity: all models had their lowest accuracy when trying to predict the digit nine.
4. Error estimation
In this section, the SVM model will be trained twice: once with only $D_{train}$ and then with the whole training set. Each time, its performance will be analysed using the testing set.
4.1 With only $D_{train}$ data
###Code
from sklearn.svm import SVC
svm_model = SVC(
C=10,
decision_function_shape='ovo',
random_state=42)
svm_model.fit(D_X_train, D_y_train)
test_1_predictions = svm_model.predict(X_test)
from sklearn.metrics import classification_report
print(classification_report(y_test, test_1_predictions, digits=4))
from sklearn.metrics import r2_score
r2_score(y_test, test_1_predictions)
from sklearn.metrics import plot_confusion_matrix
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10, 10))
plot_confusion_matrix(svm_model, X_test, y_test, normalize='pred', ax=ax)
plt.plot()
###Output
_____no_output_____
###Markdown
4.2 With all training data
###Code
svm_model.fit(X_train, y_train)
test_2_predictions = svm_model.predict(X_test)
from sklearn.metrics import classification_report
print(classification_report(y_test, test_2_predictions, digits=4))
from sklearn.metrics import r2_score
r2_score(y_test, test_2_predictions)
from sklearn.metrics import plot_confusion_matrix
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10, 10))
plot_confusion_matrix(svm_model, X_test, y_test, normalize='pred', ax=ax)
plt.plot()
###Output
_____no_output_____ |
examples/bigquery-auditlog-anomaly-detection/audit_log_anomaly_detection.ipynb | ###Markdown
BigQuery Audit Log Anomaly Detection
This example uses BigQuery Data Access Audit logs (cloudaudit_googleapis_com_data_access_*) to identify outlier and anomalous usage within your BigQuery data environment. It uses estimated cost (calculated at 5 USD per TB) and total tables processed per job as the metrics to identify anomalies. This example showcases two methods for identifying anomalies:
1. Outliers in groups: this method looks for a data point that differs significantly from the others. This means that it identifies users who use BigQuery much more or much less than others.
2. Time series analysis: looking for outliers in periodic trends by inspecting the audit logs chronologically. This method relies on the underlying assumption that BigQuery usage has trends.
Other possible metrics include:
* 'runtimeMs',
* 'runtimeSecs',
* 'lagtimeMs',
* 'lagtimeSecs',
* 'totalLoadOutputBytes',
* 'totalSlotMs',
* 'avgSlotsMS',
* 'totalTablesProcessed',
* 'totalViewsProcessed',
* 'totalProcessedBytes',
* 'totalBilledBytes',
* 'querylength',
* 'estimatedCostUsd'
Setup
This cell downloads all requirements and creates required views for analysis.
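As a rough sketch (an assumption of this walkthrough, not an official pricing formula), the estimated cost metric can be thought of as the billed bytes converted to TB and multiplied by the 5 USD/TB rate; here 1 TB is taken as 2^40 bytes, so adjust to your own contract pricing if needed:
```python
# Illustrative only: estimated on-demand query cost from billed bytes.
def estimated_cost_usd(total_billed_bytes, usd_per_tb=5.0):
    return total_billed_bytes / 2**40 * usd_per_tb
```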
###Code
! echo "Installing Dependencies..."
! pip install -r requirements.txt || echo 'Error installing other dependencies'
! jupyter nbextension enable --py widgetsnbextension
! jupyter serverextension enable voila --sys-prefix
from google.cloud import bigquery
import os
from dotenv import load_dotenv
import viewsFactory as Vf
from google.cloud.exceptions import NotFound
from IPython.display import display, Markdown, Latex
import ipywidgets as widgets
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import chart_studio.plotly as py
import plotly.graph_objs as go
import plotly.express as px
from plotly.offline import iplot, init_notebook_mode
import cufflinks as cf
import plotly.io as pio
cf.go_offline()
init_notebook_mode(connected='true')
from ipywidgets import HBox, VBox, IntSlider, interactive_output, FloatSlider, interact, interact_manual
from traitlets import directional_link
jupyter = 'plotly_mimetype+notebook_connected'
github = 'svg'
# Comment out the Github line and Uncomment the jupyter line when running in jupyter to see interactive graphs
pio.renderers.default = github
# pio.renderers.default = jupyter
###Output
_____no_output_____
###Markdown
Importing environment variables
###Code
%reload_ext autoreload
%autoreload 2
load_dotenv("var.env")
project_id = os.environ.get("project_id")
data_project_id = os.environ.get("data_project_id")
dataset_id = os.environ.get("dataset_id")
audit_log_table = os.environ.get("audit_log_table")
location = os.environ.get("location")
audit_log_partitioned = (os.environ.get("audit_log_partitioned") == "True")
## Destination View
destination_project_id = os.environ.get("destination_project_id")
destination_dataset_id = os.environ.get("destination_dataset_id")
summary_table_name = os.environ.get("summary_table_name")
def query_to_df(query):
"""Run query string in BigQuery and return pandas dataframe from query result"""
query_job = client.query(
query,
# Location must match that of the dataset(s) referenced in the query.
location=location,
) # API request - starts the query
df = query_job.to_dataframe()
return df
###Output
_____no_output_____
###Markdown
Creating summary view of Audit Log table
###Code
# %%capture
client = bigquery.Client(location=location, project=project_id)
print("Client creating using default project: {}".format(client.project))
view = Vf.CreateView(project_id, data_project_id, dataset_id, audit_log_table, location, audit_log_partitioned,
destination_project_id, destination_dataset_id, summary_table_name)
view.create_all_job_view()
###Output
_____no_output_____
###Markdown
Understanding the data - distributions and statistics This section shows an example of a function that helps in understanding the data for different metrics. Change the metric variable `metric_understanding` if you want to explore a different metric. Note: since the values are log-scaled, the query below will fail with a ln(0) error for metrics that contain zero values; remove the `LOG` in the query if that happens.
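As a minimal local illustration of the same log-bucketing idea (it uses synthetic values, not audit-log data), the short sketch below shows why a log scale is convenient for right-skewed metrics such as per-job cost.
###Code
# Minimal local illustration of log-scale bucketing on synthetic, right-skewed values
# (illustrative only; not real audit-log data).
import numpy as np
rng = np.random.default_rng(0)
synthetic_cost = rng.lognormal(mean=-2.0, sigma=1.5, size=10_000)  # skewed, like per-job cost
log_values = np.log(synthetic_cost[synthetic_cost > 0])  # LOG() is undefined at 0, hence the filter
counts, edges = np.histogram(log_values, bins=30)
for lo, hi, c in list(zip(edges[:-1], edges[1:], counts))[:5]:
    print(f"{lo:6.2f} to {hi:6.2f}: {c}")
###Output
_____no_output_____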
###Code
# Distribution and Statistic plots Controls
metric_understanding = 'estimatedCostUsd'
buck_query = """
SELECT
MAX(LOG({metric})) AS maxvalue,
MIN(LOG({metric})) AS minvalue
FROM
`{project_id}.{destination_dataset_id}.{summary_table_name}`
WHERE
{metric} IS NOT NULL
AND {metric} <> 0
""".format(project_id = destination_project_id,
destination_dataset_id = destination_dataset_id ,
summary_table_name = summary_table_name,
metric = metric_understanding)
buk_df = query_to_df(buck_query)
buckets = []
val = buk_df.minvalue[0]
for r in range(300+1):
buckets.append(val)
val += (abs(buk_df.maxvalue[0] - buk_df.minvalue[0]))/300
dist_metric_query = """
WITH
help AS (
SELECT
bucket,
CONCAT(IFNULL(ranges[SAFE_OFFSET(bucket - 1)],
0), '-', ranges[SAFE_OFFSET(bucket)]) AS prange,
COUNT(*) AS countmetric,
MAX(IFNULL(ranges[SAFE_OFFSET(bucket - 1)],
0)) AS startRange,
MAX(ranges[SAFE_OFFSET(bucket)]) AS endRange
FROM
`{project_id}.{destination_dataset_id}.{summary_table_name}`,
UNNEST([STRUCT({buckets} AS ranges)]),
UNNEST([RANGE_BUCKET(LOG({metric}),
ranges)]) bucket
WHERE
{metric} IS NOT NULL
AND {metric} <> 0
GROUP BY
1,
2
ORDER BY
bucket )
SELECT
*
FROM
help
WHERE
prange IS NOT NULL
""".format(project_id = destination_project_id,
destination_dataset_id = destination_dataset_id ,
summary_table_name = summary_table_name,
metric = metric_understanding,
buckets = buckets
)
dist_metric_df = query_to_df(dist_metric_query)
fig = go.Figure(data=[go.Bar(
x=dist_metric_df['startRange'] + 0.5*(dist_metric_df['endRange'] - dist_metric_df['startRange']),
y=dist_metric_df['countmetric'],
width=dist_metric_df['endRange'] - dist_metric_df['startRange'],
customdata = dist_metric_df.to_numpy(),
hovertemplate='<b>Range: %{customdata[1]}</b><br>Count: %{customdata[2]:.3f}<br>',
)])
fig.update_layout(title_text='Log scale distribution of {} per job'.format(metric_understanding),
xaxis_title_text=metric_understanding, # xaxis label
yaxis_title_text='Count',)
fig.show()
stat_query = """
WITH
mode AS (
SELECT
{metric} AS modevalue,
FROM
`{proj}.{ds}.{summ}`
WHERE
{metric} <> 0
AND {metric} IS NOT NULL
GROUP BY
{metric}
ORDER BY
COUNT(*) DESC
LIMIT
1 )
SELECT
COUNT(NULLIF(main.{metric},
0)) AS totalcount,
AVG({metric}) AS average,
MAX({metric}) AS maxvalue,
MIN({metric}) AS minvalue,
MAX({metric}) - MIN({metric}) AS rangevalues,
MAX(modevalue) AS modevalue,
MAX(medianvalue) AS medianvalue,
FROM
`{proj}.{ds}.{summ}` main,
mode,
(
SELECT
PERCENTILE_CONT({metric},
0.5) OVER() AS medianvalue
FROM
`{proj}.{ds}.{summ}`
LIMIT
1 )
WHERE
{metric} <> 0
AND {metric} IS NOT NULL
""".format(metric=metric_understanding,
proj=destination_project_id,
ds=destination_dataset_id,
summ=summary_table_name)
stats_df = query_to_df(stat_query)
stats_df = stats_df.T.round(5)
fig = go.Figure(data=[go.Table(header=dict(values=['Statistic', 'Value']),
cells=dict(values=[stats_df.index, stats_df.iloc[:, 0]]))
])
display(Markdown('<p> <b> Descriptive Statistics for {}:</b></p>'.format(metric_understanding)))
fig.show()
###Output
_____no_output_____
###Markdown
1. The distribution plot helps you understand the variability of your BigQuery usage for the metric under investigation. Metrics with longer tails in the distribution, or with spikes at high or low values, can indicate a need for further analysis. 2. The descriptive statistics overview gives a quick picture of the environment. For example, an abnormally high maximum value can flag a good candidate for further evaluation. *** 1. Outliers in groups Generate insights into usage patterns across different groups (one of: 'principalEmail', 'eventName', 'projectId', 'dayOfWeek', 'hourOfDay'). This provides information on anomalous usage and pinpoints areas for strategic optimization.
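To make the outlier rule concrete before running it in SQL, the short sketch below applies the same z-score test to a few synthetic per-user averages (the e-mail addresses and values are made up for illustration).
###Code
# Local illustration of the z-score rule that the SQL below applies in BigQuery.
# The e-mail addresses and values are synthetic.
import pandas as pd
toy = pd.DataFrame({
    "principalEmail": ["a@x.com", "b@x.com", "c@x.com", "d@x.com", "e@x.com"],
    "avg_estimatedCostUsd": [0.12, 0.09, 0.11, 0.10, 1.40],  # one unusually heavy user
})
mean, sd = toy["avg_estimatedCostUsd"].mean(), toy["avg_estimatedCostUsd"].std()
toy["zscore"] = (toy["avg_estimatedCostUsd"] - mean) / sd
toy["outlier"] = toy["zscore"].abs() > 1.5  # same role as the sigma threshold below
print(toy.sort_values("zscore", ascending=False))
###Output
_____no_output_____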
###Code
# Outlier Group Analysis controls
metric_outlier = 'estimatedCostUsd' # The query metric to investigate.
group = "principalEmail" # Group to identify outliers in
sigma = 1.5 # Value of SD from mean to be considered an outlier.
metric_query = """ WITH
get_metrics AS (
SELECT
{group} AS groupedby,
AVG({metric}) AS aggaverage,
AVG(AVG({metric})) OVER () AS mean,
STDDEV(AVG({metric})) OVER () AS sd
FROM
`{project_id}.{destination_dataset_id}.{summary_table_name}`
GROUP BY
groupedby)
SELECT
groupedby,
aggaverage,
(aggaverage - mean)/sd AS zscore,
IF
(ABS((aggaverage - mean)/sd) > {out},
1,
0) AS outlier
FROM
get_metrics
ORDER BY
zscore DESC
""".format(metric = metric_outlier,
group = group,
project_id = destination_project_id,
destination_dataset_id = destination_dataset_id ,
summary_table_name = summary_table_name,
out = sigma)
metric_df = query_to_df(metric_query)
fig = go.Figure()
fig.add_trace(go.Box(
x = metric_df["aggaverage"],
pointpos = -2,
opacity = 1,
customdata = metric_df.to_numpy(),
name = group,
hovertemplate='<b>%{customdata[0]}</b><br><br>Aggregated Average: %{customdata[1]:.3f}<br>SD from Mean: %{customdata[2]:.3f}',
boxpoints = "all",
selectedpoints = metric_df[metric_df['outlier']==1].index,
selected = dict(marker = dict( color = "#DB4437", opacity = 1)),
text=metric_df["aggaverage"],
))
fig.update_layout(
title="Average {metric} grouped by {group} with red points as outliers > {sd} sd from mean".format(metric=metric_outlier, group=group, sd=sigma),
xaxis_title= metric_outlier)
biggest = metric_df[metric_df['outlier']==1]
fig.show()
display(Markdown('<h2> Summary </h2>'))
display(Markdown('<p> Hover over the datapoints in the plot to see the group labels. </p>'))
if biggest.empty:
display(Markdown('<h4> There are no outliers greater than your selection of {sigma} SD from the mean. </h4>'.format(sigma=sigma)))
else:
display(Markdown('<h4> The biggest outlier found is: <b>{} </b> with an average of {:10.2f} which is {:10.4f} SD from the mean. </h4>'.format(metric_df[metric_df['outlier']==1].iloc[0].groupedby,
metric_df[metric_df['outlier']==1].iloc[0].aggaverage,
metric_df[metric_df['outlier']==1].iloc[0].zscore)))
###Output
_____no_output_____
###Markdown
The outlier highlighted in red represents an entity within the group that could require further optimization. The boxplot makes it easy to spot an individual with a much higher average per query than the other members of the group. *** 2. Time Series Analysis This analysis uses 'totalTablesProcessed' as the metric and proceeds as follows: 1. Calculate the average totalTablesProcessed per query per hour. 2. Carry out an STL decomposition to split the time series into seasonality and trend, which are used to calculate an estimate: `estimation = trend + seasonal` 3. Determine outliers from the difference between the estimated (reconstructed) time series and the actual values (the residual). Any residual above a user-defined sigma threshold is flagged as an outlier in the plot below.
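The sketch below applies the same residual-threshold idea to a synthetic hourly series with a daily cycle and two injected spikes, so the mechanics can be seen end to end before running it on the audit logs (the series is made up for illustration).
###Code
# Residual-threshold sketch on a synthetic hourly series with a daily cycle
# and two injected spikes (illustrative only).
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
rng = np.random.default_rng(1)
idx = pd.date_range("2021-01-01", periods=24 * 14, freq="H")
values = 10 + 3 * np.sin(2 * np.pi * np.arange(len(idx)) / 24) + rng.normal(0, 0.5, len(idx))
values[100] += 8   # injected anomalies
values[250] += 10
series = pd.Series(values, index=idx)
res = STL(series, period=24).fit()
resid = res.resid
threshold = resid.mean() + 3 * resid.std()
print(series[resid > threshold])  # should flag the two injected spikes
###Output
_____no_output_____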
###Code
from statsmodels.tsa.seasonal import STL
from plotly.subplots import make_subplots
from scipy import fftpack
from traitlets import directional_link
import datetime
# Time Series Controls
metric_time = 'totalTablesProcessed' # The query metric to investigate.
sigma_time = 3 # Value of SD from mean to be considered an outlier.
time = 'HOUR' # The time interval to aggregate on. For example, selecting HOUR will generate an hourly average of the selected metric.
period = 24 # Taking each day as one cyclic period.
# Query for STL decomposition
stl_query = """
SELECT
AVG({metric}) AS avgmetric,
TIMESTAMP_TRUNC(createTime, {time}) AS time
FROM
`{project_id}.{destination_dataset_id}.{summary_table_name}`
GROUP BY
2
ORDER BY
2
""".format(project_id = destination_project_id,
destination_dataset_id = destination_dataset_id ,
summary_table_name = summary_table_name,
metric = metric_time,
time = time)
stl_o_df = query_to_df(stl_query)
#STL DECOMPOSITION
stl_df = stl_o_df.reset_index()
stl_df = stl_df.set_index('time').drop(columns = ['index'])
stl = STL(stl_df, period=period)
result = stl.fit()
seasonal, trend, resid = result.seasonal, result.trend, result.resid
estimated = trend + seasonal
resid_mu = resid.mean()
resid_dev = resid.std()
# Plots for STL
fig_2 = make_subplots(rows=4, cols=1, row_heights=[15 for x in range(4)],
specs=[[{'type':'xy'}], [{'type':'xy'}], [{'type':'xy'}], [{'type':'xy'}]],
horizontal_spacing = 0.2,
subplot_titles=['Original Series', 'Trend','Seasonal','Residual'])
fig_2.add_trace(go.Scatter(x=stl_df.index, y=stl_df['avgmetric'], name='original'), 1,1)
fig_2.add_trace(go.Scatter(x=trend.index, y=trend, name='Trend'), 2,1)
fig_2.add_trace(go.Scatter(x=seasonal.index, y=seasonal, name='Seasonal'), 3,1)
fig_2.add_trace(go.Scatter(x=resid.index, y=resid, name='Resid'), 4,1)
fig_2.update_layout(
autosize=True,
width=1000,
height=800,
title='STL Decomposition')
fig_2.show()
upper = resid_mu + sigma_time*resid_dev
anomalies = stl_df[(resid > upper)]
fig = go.Figure()
fig.add_trace(go.Scatter(x=stl_df.index, y=stl_df['avgmetric'], name='original'))
fig.add_trace(go.Scatter(x=estimated.index, y=estimated, name='estimation'))
fig.add_trace(go.Scatter(x=anomalies.index, y=anomalies.avgmetric,
mode='markers+text',
marker=dict(color='red'),
marker_symbol='x',
marker_line_width=2,
marker_size=10,
name='outliers',
textposition="top center",
text=np.round(anomalies.avgmetric.values, 2)))
display(Markdown('<p> <b> Outliers are detected if their residual (estimation - actual) is more than {sigma} from the residual mean </b></p>'.format(sigma=sigma_time)))
fig.update_layout(
title="Detected outliers with residual > {sigma} SD from mean".format(sigma=sigma_time),
xaxis_title="Creation Time",
yaxis_title=metric_time
)
fig.show()
anomalies = anomalies.round(2)
fig = go.Figure(data=[go.Table(header=dict(values=['Time', 'Average {}'.format(metric_time)]),
cells=dict(values=[anomalies.index, anomalies.avgmetric] ))
])
display(Markdown('<h2> Summary </h2>'))
if anomalies.empty:
display(Markdown('<h4> There are no outliers greater than your selection of {sigma} SD from the mean. </h4>'.format(sigma=sigma_time)))
else:
display(Markdown('<p> <b> There are potential outliers in the following {}(s): </b></p>'.format(time)))
fig.show()
###Output
_____no_output_____ |
d2l-en/mxnet/chapter_convolutional-modern/alexnet.ipynb | ###Markdown
Deep Convolutional Neural Networks (AlexNet):label:`sec_alexnet`Although CNNs were well knownin the computer vision and machine learning communitiesfollowing the introduction of LeNet,they did not immediately dominate the field.Although LeNet achieved good results on early small datasets,the performance and feasibility of training CNNson larger, more realistic datasets had yet to be established.In fact, for much of the intervening time between the early 1990sand the watershed results of 2012,neural networks were often surpassed by other machine learning methods,such as support vector machines.For computer vision, this comparison is perhaps not fair.That is although the inputs to convolutional networksconsist of raw or lightly-processed (e.g., by centering) pixel values, practitioners would never feed raw pixels into traditional models.Instead, typical computer vision pipelinesconsisted of manually engineering feature extraction pipelines.Rather than *learn the features*, the features were *crafted*.Most of the progress came from having more clever ideas for features,and the learning algorithm was often relegated to an afterthought.Although some neural network accelerators were available in the 1990s,they were not yet sufficiently powerful to makedeep multichannel, multilayer CNNswith a large number of parameters.Moreover, datasets were still relatively small.Added to these obstacles, key tricks for training neural networksincluding parameter initialization heuristics,clever variants of stochastic gradient descent,non-squashing activation functions,and effective regularization techniques were still missing.Thus, rather than training *end-to-end* (pixel to classification) systems,classical pipelines looked more like this:1. Obtain an interesting dataset. In early days, these datasets required expensive sensors (at the time, 1 megapixel images were state-of-the-art).2. Preprocess the dataset with hand-crafted features based on some knowledge of optics, geometry, other analytic tools, and occasionally on the serendipitous discoveries of lucky graduate students.3. Feed the data through a standard set of feature extractors such as the SIFT (scale-invariant feature transform) :cite:`Lowe.2004`, the SURF (speeded up robust features) :cite:`Bay.Tuytelaars.Van-Gool.2006`, or any number of other hand-tuned pipelines.4. Dump the resulting representations into your favorite classifier, likely a linear model or kernel method, to train a classifier.If you spoke to machine learning researchers,they believed that machine learning was both important and beautiful.Elegant theories proved the properties of various classifiers.The field of machine learning was thriving, rigorous, and eminently useful. However, if you spoke to a computer vision researcher,you would hear a very different story.The dirty truth of image recognition, they would tell you,is that features, not learning algorithms, drove progress.Computer vision researchers justifiably believedthat a slightly bigger or cleaner datasetor a slightly improved feature-extraction pipelinemattered far more to the final accuracy than any learning algorithm. 
Learning RepresentationsAnother way to cast the state of affairs is thatthe most important part of the pipeline was the representation.And up until 2012 the representation was calculated mechanically.In fact, engineering a new set of feature functions, improving results, and writing up the method was a prominent genre of paper.SIFT :cite:`Lowe.2004`,SURF :cite:`Bay.Tuytelaars.Van-Gool.2006`,HOG (histograms of oriented gradient) :cite:`Dalal.Triggs.2005`,[bags of visual words](https://en.wikipedia.org/wiki/Bag-of-words_model_in_computer_vision)and similar feature extractors ruled the roost.Another group of researchers,including Yann LeCun, Geoff Hinton, Yoshua Bengio,Andrew Ng, Shun-ichi Amari, and Juergen Schmidhuber,had different plans.They believed that features themselves ought to be learned.Moreover, they believed that to be reasonably complex,the features ought to be hierarchically composedwith multiple jointly learned layers, each with learnable parameters.In the case of an image, the lowest layers might cometo detect edges, colors, and textures.Indeed,Alex Krizhevsky, Ilya Sutskever, and Geoff Hintonproposed a new variant of a CNN,*AlexNet*,that achieved excellent performance in the 2012 ImageNet challenge.AlexNet was named after Alex Krizhevsky,the first author of the breakthrough ImageNet classification paper :cite:`Krizhevsky.Sutskever.Hinton.2012`.Interestingly in the lowest layers of the network,the model learned feature extractors that resembled some traditional filters.:numref:`fig_filters` is reproduced from the AlexNet paper :cite:`Krizhevsky.Sutskever.Hinton.2012`and describes lower-level image descriptors.:width:`400px`:label:`fig_filters`Higher layers in the network might build upon these representationsto represent larger structures, like eyes, noses, blades of grass, and so on.Even higher layers might represent whole objectslike people, airplanes, dogs, or frisbees.Ultimately, the final hidden state learns a compact representationof the image that summarizes its contentssuch that data belonging to different categories can be easily separated.While the ultimate breakthrough for many-layered CNNscame in 2012, a core group of researchers had dedicated themselvesto this idea, attempting to learn hierarchical representations of visual datafor many years.The ultimate breakthrough in 2012 can be attributed to two key factors. 
Missing Ingredient: DataDeep models with many layers require large amounts of datain order to enter the regimewhere they significantly outperform traditional methodsbased on convex optimizations (e.g., linear and kernel methods).However, given the limited storage capacity of computers,the relative expense of sensors,and the comparatively tighter research budgets in the 1990s,most research relied on tiny datasets.Numerous papers addressed the UCI collection of datasets,many of which contained only hundreds or (a few) thousands of imagescaptured in unnatural settings with low resolution.In 2009, the ImageNet dataset was released,challenging researchers to learn models from 1 million examples,1000 each from 1000 distinct categories of objects.The researchers, led by Fei-Fei Li, who introduced this datasetleveraged Google Image Search to prefilter large candidate setsfor each category and employedthe Amazon Mechanical Turk crowdsourcing pipelineto confirm for each image whether it belonged to the associated category.This scale was unprecedented.The associated competition, dubbed the ImageNet Challengepushed computer vision and machine learning research forward,challenging researchers to identify which models performed bestat a greater scale than academics had previously considered. Missing Ingredient: HardwareDeep learning models are voracious consumers of compute cycles.Training can take hundreds of epochs, and each iterationrequires passing data through many layers of computationally-expensivelinear algebra operations.This is one of the main reasons why in the 1990s and early 2000s,simple algorithms based on the more-efficiently optimizedconvex objectives were preferred.*Graphical processing units* (GPUs) proved to be a game changerin making deep learning feasible.These chips had long been developed for acceleratinggraphics processing to benefit computer games.In particular, they were optimized for high throughput $4 \times 4$ matrix-vector products, which are needed for many computer graphics tasks.Fortunately, this math is strikingly similarto that required to calculate convolutional layers.Around that time, NVIDIA and ATI had begun optimizing GPUsfor general computing operations,going as far as to market them as *general-purpose GPUs* (GPGPU).To provide some intuition, consider the cores of a modern microprocessor(CPU).Each of the cores is fairly powerful running at a high clock frequencyand sporting large caches (up to several megabytes of L3).Each core is well-suited to executing a wide range of instructions,with branch predictors, a deep pipeline, and other bells and whistlesthat enable it to run a large variety of programs.This apparent strength, however, is also its Achilles heel:general-purpose cores are very expensive to build.They require lots of chip area,a sophisticated support structure(memory interfaces, caching logic between cores,high-speed interconnects, and so on),and they are comparatively bad at any single task.Modern laptops have up to 4 cores,and even high-end servers rarely exceed 64 cores,simply because it is not cost effective.By comparison, GPUs consist of $100 \sim 1000$ small processing elements(the details differ somewhat between NVIDIA, ATI, ARM and other chip vendors),often grouped into larger groups (NVIDIA calls them warps).While each core is relatively weak,sometimes even running at sub-1GHz clock frequency,it is the total number of such cores that makes GPUs orders of magnitude faster than CPUs.For instance, NVIDIA's recent Volta generation offers up to 
120 TFlops per chip for specialized instructions(and up to 24 TFlops for more general-purpose ones),while floating point performance of CPUs has not exceeded 1 TFlop to date.The reason for why this is possible is actually quite simple:first, power consumption tends to grow *quadratically* with clock frequency.Hence, for the power budget of a CPU core that runs 4 times faster (a typical number),you can use 16 GPU cores at $1/4$ the speed,which yields $16 \times 1/4 = 4$ times the performance.Furthermore, GPU cores are much simpler(in fact, for a long time they were not even *able*to execute general-purpose code),which makes them more energy efficient.Last, many operations in deep learning require high memory bandwidth.Again, GPUs shine here with buses that are at least 10 times as wide as many CPUs.Back to 2012. A major breakthrough camewhen Alex Krizhevsky and Ilya Sutskeverimplemented a deep CNNthat could run on GPU hardware.They realized that the computational bottlenecks in CNNs,convolutions and matrix multiplications,are all operations that could be parallelized in hardware.Using two NVIDIA GTX 580s with 3GB of memory,they implemented fast convolutions.The code [cuda-convnet](https://code.google.com/archive/p/cuda-convnet/)was good enough that for several yearsit was the industry standard and poweredthe first couple years of the deep learning boom. AlexNetAlexNet, which employed an 8-layer CNN,won the ImageNet Large Scale Visual Recognition Challenge 2012by a phenomenally large margin.This network showed, for the first time,that the features obtained by learning can transcend manually-designed features, breaking the previous paradigm in computer vision.The architectures of AlexNet and LeNet are very similar,as :numref:`fig_alexnet` illustrates.Note that we provide a slightly streamlined version of AlexNetremoving some of the design quirks that were needed in 2012to make the model fit on two small GPUs.:label:`fig_alexnet`The design philosophies of AlexNet and LeNet are very similar,but there are also significant differences.First, AlexNet is much deeper than the comparatively small LeNet5.AlexNet consists of eight layers: five convolutional layers,two fully-connected hidden layers, and one fully-connected output layer. Second, AlexNet used the ReLU instead of the sigmoidas its activation function.Let us delve into the details below. 
ArchitectureIn AlexNet's first layer, the convolution window shape is $11\times11$.Since most images in ImageNet are more than ten times higher and widerthan the MNIST images,objects in ImageNet data tend to occupy more pixels.Consequently, a larger convolution window is needed to capture the object.The convolution window shape in the second layeris reduced to $5\times5$, followed by $3\times3$.In addition, after the first, second, and fifth convolutional layers,the network adds maximum pooling layerswith a window shape of $3\times3$ and a stride of 2.Moreover, AlexNet has ten times more convolution channels than LeNet.After the last convolutional layer there are two fully-connected layerswith 4096 outputs.These two huge fully-connected layers produce model parameters of nearly 1 GB.Due to the limited memory in early GPUs,the original AlexNet used a dual data stream design,so that each of their two GPUs could be responsiblefor storing and computing only its half of the model.Fortunately, GPU memory is comparatively abundant now,so we rarely need to break up models across GPUs these days(our version of the AlexNet model deviatesfrom the original paper in this aspect). Activation FunctionsBesides, AlexNet changed the sigmoid activation function to a simpler ReLU activation function. On one hand, the computation of the ReLU activation function is simpler. For example, it does not have the exponentiation operation found in the sigmoid activation function. On the other hand, the ReLU activation function makes model training easier when using different parameter initialization methods. This is because, when the output of the sigmoid activation function is very close to 0 or 1, the gradient of these regions is almost 0, so that backpropagation cannot continue to update some of the model parameters. In contrast, the gradient of the ReLU activation function in the positive interval is always 1. Therefore, if the model parameters are not properly initialized, the sigmoid function may obtain a gradient of almost 0 in the positive interval, so that the model cannot be effectively trained. Capacity Control and PreprocessingAlexNet controls the model complexity of the fully-connected layerby dropout (:numref:`sec_dropout`),while LeNet only uses weight decay.To augment the data even further, the training loop of AlexNetadded a great deal of image augmentation,such as flipping, clipping, and color changes.This makes the model more robust and the larger sample size effectively reduces overfitting.We will discuss data augmentation in greater detail in :numref:`sec_image_augmentation`.
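As a quick numerical sketch of the activation-gradient argument above, the cell below compares the two gradients on a few sample inputs: the sigmoid gradient nearly vanishes for inputs of large magnitude, while the ReLU gradient is exactly 1 on the positive interval.
###Code
# Numerical illustration of the activation-gradient argument above (illustrative only).
import numpy as np
x = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
sigmoid = 1 / (1 + np.exp(-x))
sigmoid_grad = sigmoid * (1 - sigmoid)   # nearly 0 for large |x|
relu_grad = (x > 0).astype(x.dtype)      # exactly 1 on the positive interval
print("sigmoid grad:", np.round(sigmoid_grad, 4))
print("relu grad:   ", relu_grad)
###Output
_____no_output_____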
###Code
from d2l import mxnet as d2l
from mxnet import np, npx
from mxnet.gluon import nn
npx.set_np()
net = nn.Sequential()
# Here, we use a larger 11 x 11 window to capture objects. At the same time,
# we use a stride of 4 to greatly reduce the height and width of the output.
# Here, the number of output channels is much larger than that in LeNet
net.add(nn.Conv2D(96, kernel_size=11, strides=4, activation='relu'),
nn.MaxPool2D(pool_size=3, strides=2),
# Make the convolution window smaller, set padding to 2 for consistent
# height and width across the input and output, and increase the
# number of output channels
nn.Conv2D(256, kernel_size=5, padding=2, activation='relu'),
nn.MaxPool2D(pool_size=3, strides=2),
# Use three successive convolutional layers and a smaller convolution
# window. Except for the final convolutional layer, the number of
# output channels is further increased. Pooling layers are not used to
# reduce the height and width of input after the first two
# convolutional layers
nn.Conv2D(384, kernel_size=3, padding=1, activation='relu'),
nn.Conv2D(384, kernel_size=3, padding=1, activation='relu'),
nn.Conv2D(256, kernel_size=3, padding=1, activation='relu'),
nn.MaxPool2D(pool_size=3, strides=2),
# Here, the number of outputs of the fully-connected layer is several
# times larger than that in LeNet. Use the dropout layer to mitigate
# overfitting
nn.Dense(4096, activation='relu'), nn.Dropout(0.5),
nn.Dense(4096, activation='relu'), nn.Dropout(0.5),
# Output layer. Since we are using Fashion-MNIST, the number of
# classes is 10, instead of 1000 as in the paper
nn.Dense(10))
###Output
_____no_output_____
###Markdown
We construct a single-channel data example with both height and width of 224 to observe the output shape of each layer. It matches the AlexNet architecture in :numref:`fig_alexnet`.
###Code
X = np.random.uniform(size=(1, 1, 224, 224))
net.initialize()
for layer in net:
X = layer(X)
print(layer.name, 'output shape:\t', X.shape)
###Output
conv0 output shape: (1, 96, 54, 54)
pool0 output shape: (1, 96, 26, 26)
conv1 output shape: (1, 256, 26, 26)
pool1 output shape: (1, 256, 12, 12)
conv2 output shape: (1, 384, 12, 12)
conv3 output shape: (1, 384, 12, 12)
conv4 output shape: (1, 256, 12, 12)
pool2 output shape: (1, 256, 5, 5)
dense0 output shape: (1, 4096)
dropout0 output shape: (1, 4096)
dense1 output shape: (1, 4096)
dropout1 output shape: (1, 4096)
dense2 output shape: (1, 10)
###Markdown
Reading the DatasetAlthough AlexNet is trained on ImageNet in the paper, we use Fashion-MNIST heresince training an ImageNet model to convergence could take hours or dayseven on a modern GPU.One of the problems with applying AlexNet directly on Fashion-MNISTis that its images have lower resolution ($28 \times 28$ pixels)than ImageNet images.To make things work, we upsample them to $224 \times 224$(generally not a smart practice,but we do it here to be faithful to the AlexNet architecture).We perform this resizing with the `resize` argument in the `d2l.load_data_fashion_mnist` function.
###Code
batch_size = 128
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=224)
###Output
_____no_output_____
###Markdown
TrainingNow, we can start training AlexNet.Compared with LeNet in :numref:`sec_lenet`,the main change here is the use of a smaller learning rateand much slower training due to the deeper and wider network,the higher image resolution, and the more costly convolutions.
###Code
lr, num_epochs = 0.01, 10
d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr)
###Output
loss 0.336, train acc 0.878, test acc 0.882
4107.6 examples/sec on gpu(0)
|
tutorials/Simple Plot.ipynb | ###Markdown
Plotting pandapipes Networks This tutorial shows how to plot a pandapipes network. Simple Plotting The simple plot function allows you to plot networks and get a quick visualisation, without needing a deep understanding of the plotting module. First of all, a simple network with genuine geodata is created. To get a better understanding of creating networks, follow the Creating a simple network tutorial.
###Code
try: # import pandapipes
import pandapipes as pp
except ImportError: # add pandapipes to system path, if it is not found
import sys
sys.path.insert(0, "..")
import pandapipes as pp
# create an empty network
net = pp.create_empty_network(fluid="lgas")
# create network elements, such as junctions, external grid, pipes, valves, sinks and sources
junction1 = pp.create_junction(net, pn_bar=1.05, tfluid_k=293.15, name="Connection to External Grid", geodata=(0, 0))
junction2 = pp.create_junction(net, pn_bar=1.05, tfluid_k=293.15, name="Junction 2", geodata=(2, 0))
junction3 = pp.create_junction(net, pn_bar=1.05, tfluid_k=293.15, name="Junction 3", geodata=(7, 4))
junction4 = pp.create_junction(net, pn_bar=1.05, tfluid_k=293.15, name="Junction 4", geodata=(7, -4))
junction5 = pp.create_junction(net, pn_bar=1.05, tfluid_k=293.15, name="Junction 5", geodata=(5, 3))
junction6 = pp.create_junction(net, pn_bar=1.05, tfluid_k=293.15, name="Junction 6", geodata=(5, -3))
ext_grid = pp.create_ext_grid(net, junction=junction1, p_bar=1.1, t_k=293.15, name="Grid Connection")
pipe1 = pp.create_pipe_from_parameters(net, from_junction=junction1, to_junction=junction2, length_km=10, diameter_m=0.3, name="Pipe 1", geodata=[(0, 0), (2, 0)])
pipe2 = pp.create_pipe_from_parameters(net, from_junction=junction2, to_junction=junction3, length_km=2, diameter_m=0.3, name="Pipe 2", geodata=[(2, 0), (2, 4), (7, 4)])
pipe3 = pp.create_pipe_from_parameters(net, from_junction=junction2, to_junction=junction4, length_km=2.5, diameter_m=0.3, name="Pipe 3", geodata=[(2, 0), (2, -4), (7, -4)])
pipe4 = pp.create_pipe_from_parameters(net, from_junction=junction3, to_junction=junction5, length_km=1, diameter_m=0.3, name="Pipe 4", geodata=[(7, 4), (7, 3), (5, 3)])
pipe5 = pp.create_pipe_from_parameters(net, from_junction=junction4, to_junction=junction6, length_km=1, diameter_m=0.3, name="Pipe 5", geodata=[(7, -4), (7, -3), (5, -3)])
valve = pp.create_valve(net, from_junction=junction5, to_junction=junction6, diameter_m=0.05, opened=True)
sink = pp.create_sink(net, junction=junction4, mdot_kg_per_s=0.545, name="Sink 1")
source = pp.create_source(net, junction=junction3, mdot_kg_per_s=0.234)
pp.pipeflow(net)
net.res_junction
###Output
_____no_output_____
###Markdown
The simple network contains the most common elements that are supported by the pandapipes format. In comparison to the above image, the simple plot function shows the network as follows.
###Code
# import the plotting module
import pandapipes.plotting as plot
# plot network
plot.simple_plot(net, plot_sinks=True, plot_sources=True, sink_size=4.0, source_size=4.0)
###Output
_____no_output_____
###Markdown
Plotting Collections Within the simple plot function, simple collections are generated automatically - for example, a collection for all junctions, which are then plotted as red circles. However, users can also define their own collections and plot them together, which allows for easy design modifications. What is a collection? A collection consists of an assemblage of different information about patch type, colour, size and others. Patches Patches are pre-designed symbols. There are individual patches for valves and sources, or for symbols in the shape of a circle, rectangle, etc. Why use collections? It is easier for the plotting module to sort certain elements into collections. This makes the plotting itself faster by reducing the time and effort needed for calculations. Additionally, you can control the layout of your plot individually by creating your own additional collections. Additional collections If you want to mark some elements of your network differently, you can add them to an individual collection. For example, you can add all junctions with a sink connection to a collection called *junction_sink_collection* and configure it as an orange circle. By using these functions you can easily organize the plot or create an individual one. Element sizes The size of the elements corresponds to the size and type of the plot. The size can be chosen manually or fetched with the function *get_collection_sizes*. To point out different elements, you can create additional collections for these elements, as shown in the sketch and the cell below.
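As a small sketch (assuming the network `net` and the `plot` import from the cells above), the junction indices for such collections can also be read programmatically from the component tables instead of being hardcoded.
###Code
# Sketch: pick junction indices programmatically from the component tables
# instead of hardcoding them (assumes `net` and `plot` from the cells above).
sink_junctions = net.sink.junction.tolist()
source_junctions = net.source.junction.tolist()
print("junctions with sinks:", sink_junctions)
print("junctions with sources:", source_junctions)
# such a collection can be passed to plot.draw_collections like the ones created below
sink_collection_auto = plot.create_junction_collection(
    net, junctions=sink_junctions, patch_type="circle", size=0.1, color="orange", zorder=200)
###Output
_____no_output_____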
###Code
# create additional junction collections for junctions with sink connections and junctions with valve connections
junction_sink_collection = plot.create_junction_collection(net, junctions=[3], patch_type="circle", size=0.1, color="orange", zorder=200)
junction_source_collection = plot.create_junction_collection(net, junctions=[2], patch_type="circle", size=0.1, color="green", zorder=200)
junction_valve_collection = plot.create_junction_collection(net, junctions=[4, 5], patch_type="rect",size=0.1, color="red", zorder=200)
# create additional pipe collection
pipe_collection = plot.create_pipe_collection(net, pipes=[3,4], linewidths=5., zorder=100)
###Output
_____no_output_____
###Markdown
Now, it is possible to plot only the collections you designed individually.
###Code
# plot collections of junctions and pipes
plot.draw_collections([junction_sink_collection, junction_source_collection, junction_valve_collection, pipe_collection], figsize=(8,6))
###Output
_____no_output_____
###Markdown
If you want to plot your network including the additional collections, you need to add them to the simple collections, which are created automatically by the simple plot function.
###Code
# create a list of simple collections
simple_collections = plot.create_simple_collections(net, as_dict=False)
# add additional collections to the list
simple_collections.append([junction_sink_collection, junction_source_collection, junction_valve_collection, pipe_collection])
# plot list of all collections
plot.draw_collections(simple_collections)
###Output
_____no_output_____ |
YOLO v5/YOLOv5x6_map0.65/YOLOv5x6_map0.65.ipynb | ###Markdown
Importing Libraries
###Code
# For working with images and displaying them
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cv2
import random
# For File Handling
import os
import shutil as sh
from shutil import copyfile
# For Splitting dataset
from sklearn.model_selection import train_test_split
# For creating a progress bar
from tqdm.auto import tqdm
# For integrating YOLO with WandB.ai
import wandb
from kaggle_secrets import UserSecretsClient
user_secrets = UserSecretsClient()
import gc
# For YOLO model saving
import yaml
###Output
_____no_output_____
###Markdown
Installing Yolov5
###Code
# Download YOLOv5
!git clone https://github.com/ultralytics/yolov5 # clone repo
%cd yolov5
# Install dependencies
%pip install -qr requirements.txt # install dependencies
%cd ../
import torch
print(f"Setup complete. Using torch {torch.__version__} ({torch.cuda.get_device_properties(0).name if torch.cuda.is_available() else 'CPU'})")
###Output
Cloning into 'yolov5'...
remote: Enumerating objects: 12390, done.[K
remote: Counting objects: 100% (7/7), done.[K
remote: Compressing objects: 100% (6/6), done.[K
remote: Total 12390 (delta 1), reused 7 (delta 1), pack-reused 12383[K
Receiving objects: 100% (12390/12390), 11.56 MiB | 30.44 MiB/s, done.
Resolving deltas: 100% (8620/8620), done.
/kaggle/working/yolov5
[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv[0m
Note: you may need to restart the kernel to use updated packages.
/kaggle/working
Setup complete. Using torch 1.7.1+cu110 (Tesla P100-PCIE-16GB)
###Markdown
Setting Up WandB
###Code
# Install W&B
!pip install -q --upgrade wandb
# Login
personal_key_for_api = user_secrets.get_secret("wandb")
! wandb login $personal_key_for_api
###Output
[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv[0m
[34m[1mwandb[0m: Appending key for api.wandb.ai to your netrc file: /root/.netrc
###Markdown
Combine category folders
###Code
combined_path = "combi"
train_label_path = combined_path + "/labels/train/"
train_image_path = combined_path + "/images/train/"
val_label_path = combined_path + "/labels/val/"
val_image_path = combined_path + "/images/val/"
os.mkdir(combined_path)
os.mkdir(combined_path+"/labels")
os.mkdir(combined_path+"/images")
os.mkdir(train_image_path)
os.mkdir(train_label_path)
os.mkdir(val_label_path)
os.mkdir(val_image_path)
input_images_path = "../input/african-wildlife/"
dirnames = next(os.walk(input_images_path), (None, None, []))[1]
for dir in dirnames:
print(dir)
filenames = next(os.walk(input_images_path + dir), (None, None, []))[2] # [] if no file
for file in filenames:
if ".txt" in file:
if(random.uniform(0, 1) > .33):
try:
sh.copy(input_images_path + dir + "/" + file, train_label_path + file)
sh.copy(input_images_path + dir + "/" + file.replace('.txt', '.jpg'), train_image_path + file.replace('.txt', '.jpg'))
except:
if os.path.exists(train_label_path + file):
os.remove(train_label_path + file)
if os.path.exists(train_image_path + file.replace('.txt', '.jpg')):
os.remove(train_image_path + file.replace('.txt', '.jpg'))
else:
try:
sh.copy(input_images_path + dir + "/" + file, val_label_path + file)
sh.copy(input_images_path + dir + "/" + file.replace('.txt', '.jpg'), val_image_path + file.replace('.txt', '.jpg'))
except:
if os.path.exists(val_label_path + file):
os.remove(val_label_path + file)
if os.path.exists(val_image_path + file.replace('.txt', '.jpg')):
os.remove(val_image_path + file.replace('.txt', '.jpg'))
###Output
buffalo
elephant
zebra
rhino
###Markdown
YAML File
###Code
# Create .yaml file
data_yaml = dict(
train = '../combi/images/train',
val = '../combi/images/val',
nc = 4,
names = ['buffalo', 'elephant', 'rhino', 'zebra']
)
# Note that I am creating the file in the yolov5/data/ directory.
with open('yolov5/data/data.yaml', 'w') as outfile:
yaml.dump(data_yaml, outfile, default_flow_style=True)
%cat yolov5/data/data.yaml
###Output
{names: [buffalo, elephant, rhino, zebra], nc: 4, train: ../combi/images/train, val: ../combi/images/val}
###Markdown
Checking the number of Workers
###Code
import multiprocessing
multiprocessing.cpu_count()
###Output
_____no_output_____
###Markdown
Training YOLOv5x6 Model
###Code
%cd yolov5/
!python train.py --img 256 --batch 2 --epochs 50 --data data.yaml --weights yolov5x6.pt --workers 2 --cache --project African_Wildlife
###Output
/kaggle/working/yolov5
[34m[1mwandb[0m: Currently logged in as: [33mvermaayush680[0m (use `wandb login --relogin` to force relogin)
2022-04-01 15:22:21.938636: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-04-01 15:22:27.036295: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[34m[1mwandb[0m: Tracking run with wandb version 0.12.11
[34m[1mwandb[0m: Run data is saved locally in [35m[1m/kaggle/working/yolov5/wandb/run-20220401_152225-bcybknf5[0m
[34m[1mwandb[0m: Run [1m`wandb offline`[0m to turn off syncing.
[34m[1mwandb[0m: Syncing run [33mlilac-disco-5[0m
[34m[1mwandb[0m: ⭐️ View project at [34m[4mhttps://wandb.ai/vermaayush680/African_Wildlife[0m
[34m[1mwandb[0m: 🚀 View run at [34m[4mhttps://wandb.ai/vermaayush680/African_Wildlife/runs/bcybknf5[0m
Downloading https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5x6.pt to yolov5x6.pt...
100%|████████████████████████████████████████| 270M/270M [00:03<00:00, 72.4MB/s]
[34m[1mtrain: [0mScanning '/kaggle/working/yolov5/../combi/labels/train' images and labels[0m
[34m[1mtrain: [0mCaching images (0.0GB ram): 100%|██████████| 375/375 [00:04<00:00, 87.39i[0m
[34m[1mval: [0mScanning '/kaggle/working/yolov5/../combi/labels/val' images and labels...3[0m
[34m[1mval: [0mCaching images (0.0GB ram): 100%|██████████| 305/305 [00:05<00:00, 59.90it/[0m
0/49 2.92G 0.07669 0.02246 0.03423 3 256: 100%|███
Class Images Labels P R [email protected] mAP@
1/49 3.31G 0.06961 0.02053 0.0261 4 256: 100%|███
Class Images Labels P R [email protected] mAP@
2/49 3.31G 0.06909 0.01757 0.02231 5 256: 100%|███
Class Images Labels P R [email protected] mAP@
3/49 3.31G 0.0646 0.01611 0.0189 2 256: 100%|███
Class Images Labels P R [email protected] mAP@
4/49 3.31G 0.05555 0.01617 0.01831 5 256: 100%|███
Class Images Labels P R [email protected] mAP@
5/49 3.31G 0.05118 0.01601 0.01813 4 256: 100%|███
Class Images Labels P R [email protected] mAP@
6/49 3.31G 0.04533 0.01536 0.01503 2 256: 100%|███
Class Images Labels P R [email protected] mAP@
7/49 3.31G 0.04168 0.01521 0.01424 3 256: 100%|███
Class Images Labels P R [email protected] mAP@
8/49 3.31G 0.03912 0.01421 0.01083 5 256: 100%|███
Class Images Labels P R [email protected] mAP@
9/49 3.31G 0.03288 0.01406 0.01136 2 256: 100%|███
Class Images Labels P R [email protected] mAP@
10/49 3.31G 0.03392 0.01344 0.01192 3 256: 100%|███
Class Images Labels P R [email protected] mAP@
11/49 3.31G 0.03071 0.01378 0.01085 2 256: 100%|███
Class Images Labels P R [email protected] mAP@
12/49 3.31G 0.03132 0.0131 0.009895 3 256: 100%|███
Class Images Labels P R [email protected] mAP@
13/49 3.31G 0.02924 0.01315 0.008889 4 256: 100%|███
Class Images Labels P R [email protected] mAP@
14/49 3.31G 0.03105 0.01309 0.008978 3 256: 100%|███
Class Images Labels P R [email protected] mAP@
15/49 3.31G 0.02989 0.01296 0.008637 3 256: 100%|███
Class Images Labels P R [email protected] mAP@
16/49 3.31G 0.02863 0.01233 0.008419 5 256: 100%|███
Class Images Labels P R [email protected] mAP@
17/49 3.31G 0.02702 0.01273 0.008406 8 256: 100%|███
Class Images Labels P R [email protected] mAP@
18/49 3.31G 0.02524 0.01234 0.008438 11 256: 100%|███
Class Images Labels P R [email protected] mAP@
19/49 3.31G 0.02633 0.01181 0.007339 6 256: 100%|███
Class Images Labels P R [email protected] mAP@
20/49 3.31G 0.02381 0.01189 0.006705 2 256: 100%|███
Class Images Labels P R [email protected] mAP@
21/49 3.31G 0.02361 0.01186 0.007775 5 256: 100%|███
Class Images Labels P R [email protected] mAP@
22/49 3.31G 0.02262 0.01191 0.007164 3 256: 100%|███
Class Images Labels P R [email protected] mAP@
23/49 3.31G 0.02255 0.01187 0.007304 2 256: 100%|███
Class Images Labels P R [email protected] mAP@
24/49 3.31G 0.02138 0.01163 0.006276 1 256: 100%|███
Class Images Labels P R [email protected] mAP@
25/49 3.31G 0.02016 0.01146 0.006052 8 256: 100%|███
Class Images Labels P R [email protected] mAP@
26/49 3.31G 0.02034 0.01103 0.005666 3 256: 100%|███
Class Images Labels P R [email protected] mAP@
27/49 3.31G 0.01983 0.01086 0.006084 6 256: 100%|███
Class Images Labels P R [email protected] mAP@
28/49 3.31G 0.02032 0.01079 0.007079 2 256: 100%|███
Class Images Labels P R [email protected] mAP@
29/49 3.31G 0.01919 0.01111 0.006255 3 256: 100%|███
Class Images Labels P R [email protected] mAP@
30/49 3.31G 0.01947 0.01088 0.006124 6 256: 100%|███
Class Images Labels P R [email protected] mAP@
31/49 3.31G 0.01792 0.01079 0.005958 2 256: 100%|███
Class Images Labels P R [email protected] mAP@
32/49 3.31G 0.01782 0.009994 0.00506 4 256: 100%|███
Class Images Labels P R [email protected] mAP@
33/49 3.31G 0.01722 0.01065 0.004836 5 256: 100%|███
Class Images Labels P R [email protected] mAP@
34/49 3.31G 0.01628 0.01069 0.005541 8 256: 100%|███
Class Images Labels P R [email protected] mAP@
35/49 3.31G 0.01637 0.01001 0.005382 2 256: 100%|███
Class Images Labels P R [email protected] mAP@
36/49 3.31G 0.01608 0.009687 0.005114 2 256: 100%|███
Class Images Labels P R [email protected] mAP@
37/49 3.31G 0.01652 0.01027 0.005425 5 256: 100%|███
Class Images Labels P R [email protected] mAP@
38/49 3.31G 0.0156 0.01026 0.004248 5 256: 100%|███
Class Images Labels P R [email protected] mAP@
39/49 3.31G 0.01516 0.009576 0.00478 5 256: 100%|███
Class Images Labels P R [email protected] mAP@
40/49 3.31G 0.01471 0.009462 0.004461 2 256: 100%|███
Class Images Labels P R [email protected] mAP@
41/49 3.31G 0.01336 0.009666 0.003799 5 256: 100%|███
Class Images Labels P R [email protected] mAP@
42/49 3.31G 0.0127 0.009268 0.004757 3 256: 100%|███
Class Images Labels P R [email protected] mAP@
43/49 3.31G 0.01235 0.009614 0.004434 5 256: 100%|███
Class Images Labels P R [email protected] mAP@
44/49 3.31G 0.01247 0.009428 0.00424 4 256: 100%|███
Class Images Labels P R [email protected] mAP@
45/49 3.31G 0.01216 0.009249 0.004476 2 256: 100%|███
Class Images Labels P R [email protected] mAP@
46/49 3.31G 0.01179 0.009474 0.003286 4 256: 100%|███
Class Images Labels P R [email protected] mAP@
47/49 3.31G 0.01198 0.009155 0.003872 5 256: 100%|███
Class Images Labels P R [email protected] mAP@
48/49 3.31G 0.01132 0.008807 0.003504 4 256: 100%|███
Class Images Labels P R [email protected] mAP@
49/49 3.31G 0.0112 0.009174 0.003763 6 256: 100%|███
Class Images Labels P R [email protected] mAP@
Class Images Labels P R [email protected] mAP@
[34m[1mwandb[0m: Waiting for W&B process to finish... [32m(success).[0m
[34m[1mwandb[0m:
[34m[1mwandb[0m:
[34m[1mwandb[0m: Run history:
[34m[1mwandb[0m: metrics/mAP_0.5 ▁▁▂▂▅▄▆▆▆▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇███████████
[34m[1mwandb[0m: metrics/mAP_0.5:0.95 ▁▁▁▁▃▃▄▅▅▅▆▅▅▆▆▆▆▆▆▇▇▇▇▇▇▇▇▇▇▇▇▇████████
[34m[1mwandb[0m: metrics/precision ▁▁▄▂▇▇▆▆▇▇▇█▇▇█████████████████▅████▆▆▆▆
[34m[1mwandb[0m: metrics/recall ▄▃▁▁▃▃▅▅▅▅▆▅▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆▆█▆▆▆▆████
[34m[1mwandb[0m: train/box_loss █▇▇▇▅▅▄▄▃▃▃▃▃▃▃▃▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁
[34m[1mwandb[0m: train/cls_loss █▆▅▅▄▄▃▃▃▃▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁
[34m[1mwandb[0m: train/obj_loss █▇▅▅▅▄▄▄▃▃▃▃▃▃▃▃▂▂▂▂▂▂▂▂▂▂▁▂▁▁▂▂▁▁▁▁▁▁▁▁
[34m[1mwandb[0m: val/box_loss █▇██▄▄▄▃▃▃▂▃▂▂▂▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
[34m[1mwandb[0m: val/cls_loss █▃▃▃▂▂▂▂▂▂▂▂▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
[34m[1mwandb[0m: val/obj_loss █▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
[34m[1mwandb[0m: x/lr0 ▃▆████▇▇▇▇▇▆▆▆▆▆▅▅▅▅▅▅▄▄▄▄▄▃▃▃▃▃▂▂▂▂▂▁▁▁
[34m[1mwandb[0m: x/lr1 ▃▆████▇▇▇▇▇▆▆▆▆▆▅▅▅▅▅▅▄▄▄▄▄▃▃▃▃▃▂▂▂▂▂▁▁▁
[34m[1mwandb[0m: x/lr2 █▅▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
[34m[1mwandb[0m:
[34m[1mwandb[0m: Run summary:
[34m[1mwandb[0m: best/epoch 47
[34m[1mwandb[0m: best/mAP_0.5 0.82041
[34m[1mwandb[0m: best/mAP_0.5:0.95 0.65079
[34m[1mwandb[0m: best/precision 0.65095
[34m[1mwandb[0m: best/recall 0.78863
[34m[1mwandb[0m: metrics/mAP_0.5 0.82019
[34m[1mwandb[0m: metrics/mAP_0.5:0.95 0.65058
[34m[1mwandb[0m: metrics/precision 0.65046
[34m[1mwandb[0m: metrics/recall 0.78863
[34m[1mwandb[0m: train/box_loss 0.0112
[34m[1mwandb[0m: train/cls_loss 0.00376
[34m[1mwandb[0m: train/obj_loss 0.00917
[34m[1mwandb[0m: val/box_loss 0.01584
[34m[1mwandb[0m: val/cls_loss 0.01066
[34m[1mwandb[0m: val/obj_loss 0.00485
[34m[1mwandb[0m: x/lr0 0.0005
[34m[1mwandb[0m: x/lr1 0.0005
[34m[1mwandb[0m: x/lr2 0.0005
[34m[1mwandb[0m:
[34m[1mwandb[0m: Synced [33mlilac-disco-5[0m: [34m[4mhttps://wandb.ai/vermaayush680/African_Wildlife/runs/bcybknf5[0m
[34m[1mwandb[0m: Synced 5 W&B file(s), 337 media file(s), 1 artifact file(s) and 0 other file(s)
[34m[1mwandb[0m: Find logs at: [35m[1m./wandb/run-20220401_152225-bcybknf5/logs[0m
Exception ignored in: <function _MultiProcessingDataLoaderIter.__del__ at 0x7fb3a38258c0>
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1203, in __del__
File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1144, in _shutdown_workers
AttributeError: 'NoneType' object has no attribute 'python_exit_status'
Exception ignored in: <function _MultiProcessingDataLoaderIter.__del__ at 0x7fb3a38258c0>
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1203, in __del__
File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1144, in _shutdown_workers
AttributeError: 'NoneType' object has no attribute 'python_exit_status'
|
2S2018/IA898A_Ex05_FernandoFurusato.ipynb | ###Markdown
Ex05 - Sharpening filters 1. Unsharp mask A widely used filter for sharpening an image is the *unsharp mask*. It enhances edges by computing the difference between the original image and a smoothed version of it obtained with a Gaussian filter. To achieve the edge enhancement, do the following:- First compute the *unsharp mask* ($df$)- Compute a weighted combination of the original image and the difference image: $$((1-k)*f + k*df)$$ where $f$ is the image, $df$ is the *unsharp mask* and $k$ is the weighting factor - Change the weighting factor $k$ and observe the effect on the final image
###Code
import sys, os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
import numpy as np
import scipy.signal as sc
import scipy.ndimage as sn
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
from math import sqrt, pi
def border_highlight(img, df, k):
return(((1 - k) * img) + (k * df))
def gauss_kernel(kernel_size=7, sigma=1.0, mean=0):
    # Sample a 1D Gaussian centred on `mean` and build the 2D kernel as the outer product.
    x = np.linspace(mean-1, mean+1, kernel_size)
    g_x = 1/(sqrt(2*pi)*sigma) * np.exp(-((x - mean)**2 / (2. * sigma**2)))
    g_y = g_x.reshape(len(g_x), 1)
    g_xy = g_x * g_y
    g_xy = g_xy / g_xy.sum()  # normalize so the smoothed image keeps the original brightness
    # plt.plot(x, g_x)
    # plt.show()
    return g_xy
###Output
_____no_output_____
###Markdown
Using a larger image, I noticed that the weighting factor and the Gaussian filter must be larger for the differences to be visible. In this case, the image is relatively small.
###Code
f_ex1 = np.array(mpimg.imread('../data/retina.tif')).astype('float32')
gauss_kernel1_ex1 = gauss_kernel(3, 1, 0)
f_ex1_filtered = sc.convolve2d(f_ex1, gauss_kernel1_ex1, mode='same')
# f_ex1_filtered = ia.normalize(sn.gaussian_filter(f_ex1, 1))
n = 0
m = 200
df_ex1 = f_ex1 - f_ex1_filtered
fig1_ex1 = plt.figure(figsize=(14,14))
fig1_ex1.add_subplot(131).axis('off')
plt.title('Original')
plt.imshow(f_ex1[n:m,n:m], cmap='gray')
fig1_ex1.add_subplot(132).axis('off')
plt.title('Gaussian filtered')
plt.imshow(f_ex1_filtered[n:m,n:m], cmap='gray')
fig1_ex1.add_subplot(133).axis('off')
plt.title('Original - filtered')
plt.imshow(df_ex1[n:m,n:m], cmap='gray')
plt.show()
fig2_ex1 = plt.figure(figsize=(14,14))
fig2_ex1.add_subplot(131).axis('off')
plt.title('Weighting 0.5')
plt.imshow((ia.normalize(border_highlight(f_ex1, df_ex1, .5))[n:m,n:m]), cmap='gray')
fig2_ex1.add_subplot(132).axis('off')
plt.title('Weighting 1')
plt.imshow(border_highlight(f_ex1, df_ex1, 1)[n:m,n:m], cmap='gray')
fig2_ex1.add_subplot(133).axis('off')
plt.title('Weighting 2')
plt.imshow(border_highlight(f_ex1, df_ex1, 2)[n:m,n:m], cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
2. Sobel filter There are several filters that aim to enhance the edges of an image. One of the best known is the Sobel operator, composed of a vertical mask (Sv) and a horizontal mask (Sh).
###Code
import numpy as np
Sv = np.array([[1,0,-1],[2,0,-2],[1,0,-1]])
print('Sv =\n',Sv)
Sh = np.array([[1,2,1],[0,0,0],[-1,-2,-1]])
print('Sh =\n',Sh)
###Output
_____no_output_____
###Markdown
2.1 Implement the Sobel magnitude operator for an image. The MagSobel function to be implemented takes the input image as its parameter and must follow the equation:$$MagSobel = \sqrt{f_h^2 + f_v^2}$$where $f_h$ is the input image convolved with the horizontal Sobel operator and $f_v$ is the input image convolved with the vertical Sobel operator. A few points require care:- All operations must be done in floating point, and the final values will be greater than 255. The function that computes the Sobel gradient magnitude therefore follows the equation above. - Remember that to visualize the image it must first be normalized, for example with ianormalize. - Additionally, since the Sobel mask is 3x3, the resulting image will be 2x2 pixels larger in height and width than the original, since the output of a linear convolution is the sum of the sizes in each dimension, minus 1.
###Code
def MagSobel(img):
Sv = np.array([[1,0,-1],[2,0,-2],[1,0,-1]])
Sh = np.array([[1,2,1],[0,0,0],[-1,-2,-1]])
fh = sc.convolve2d(img, Sh, mode='same').astype('float')
fv = sc.convolve2d(img, Sv, mode='same').astype('float')
mag_sobel = np.sqrt((fv**2)+(fh**2))
return mag_sobel, fh, fv
f = mpimg.imread('../data/retina.tif')
sobel = MagSobel(f)
fig = plt.figure(figsize=[5, 5])
plt.imshow(ia.normalize(sobel[0]), cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
2.2 Implement a function that returns the Sobel edge angle After implementing the function that returns the Sobel edge angle:- compute the histogram of the distribution of this angle, only for edge-magnitude values above a threshold $T$;- visualize the angle image using a circular color table for the angles and a different color for the locations where the magnitude is below $T$. Plot the color table used together with the figure;
###Code
def sobel_angle(fh, fv):
return(np.arctan2(fv, fh))
###Output
_____no_output_____
###Markdown
**Fernando:** I printed the values below only to make sure of the range of angles produced by the angles mod pi operation, so that we only have positive angles, from 0 to 180 degrees. We know that using abs would not really work, because in our case -pi/4 should be equivalent to 3/4 * pi, for example.
###Code
f = mpimg.imread('../data/retina.tif')
# f = mpimg.imread('../data/cameraman.tif')
sobel = MagSobel(f)
angles = (sobel_angle(sobel[1], sobel[2]))
print(angles.max())
print(angles.min())
print(sobel[0].max())
angles = angles % (pi)
print(angles.max())
print(angles.min())
###Output
_____no_output_____
###Markdown
**Fernando:** From the histogram of the normalized magnitude, we can see that most of the magnitudes are concentrated at values of 50 and below. I experimented starting from 50, and the image became more interesting from 130 onwards.
###Code
plt.plot(np.histogram(sobel[0], bins=1070)[0])
plt.show()
sobel_normalized = ia.normalize(sobel[0])
plt.plot(ia.histogram(sobel_normalized))
plt.show()
###Output
_____no_output_____
###Markdown
Eu tinha feito primeiro a circunferência abaixo, a partir do exemplo no ia.sobel
###Code
import matplotlib.colors as colors
f2_ex2 = ia.circle([200,300], 90, [100,150])
sobel2_ex2 = MagSobel(f2_ex2)
f2_angles = sobel_angle(sobel2_ex2[1], sobel2_ex2[2])
f2_angles = f2_angles % pi
f2_mag_angles = np.select([sobel2_ex2[0] > 1], [f2_angles])
f2_angles_color = np.ones((f2_angles.shape[0], f2_angles.shape[1], 3))
f2_angles_color[:,:,0] = ia.normalize(f2_mag_angles, [0, 1])
f2_angles_color[:,:,1] = ia.normalize(f2_mag_angles, [0, 1])
f2_angles_color[:,:,2] = ia.normalize(f2_mag_angles, [0, 1])
f2_angles_color[:,:,1][f2_angles_color[:,:,0] != 0] = 1
f2_angles_color[:,:,2][f2_angles_color[:,:,0] != 0] = 1
print(f2_mag_angles.min())
plt.imshow(f2_mag_angles, cmap='gray')
plt.colorbar()
plt.show()
plt.imshow(colors.hsv_to_rgb(f2_angles_color), cmap='hsv', vmax=180)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
**Fernando:** I don't think this was quite the color table the professor wanted, but I spent a lot of time on the other exercises, so I believe I will leave it this way.
###Code
mag_angles = np.select([sobel[0] > 130], [angles])
plt.figure(figsize=[7,7])
plt.imshow(ia.normalize(mag_angles, [0, 180]), cmap='gray', vmax=180)
plt.colorbar()
plt.show()
colored_angles = np.ones((mag_angles.shape[0], mag_angles.shape[1], 3))
colored_angles[:,:,0] = ia.normalize(mag_angles, [0, 1])
colored_angles[:,:,1] = ia.normalize(mag_angles, [0, 1])
colored_angles[:,:,2] = ia.normalize(mag_angles, [0, 1])
colored_angles[:,:,1][colored_angles[:,:,0] != 0] = 1
colored_angles[:,:,2][colored_angles[:,:,0] != 0] = 1
plt.figure(figsize=[7,7])
plt.imshow(colors.hsv_to_rgb(colored_angles), cmap='hsv', vmax=180)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
3. Properties of convolution. Carry out experiments to demonstrate the properties of convolution:- Commutative $$f(x,y)*g(x,y) = g(x,y)*f(x,y)$$- Associative $$f(x,y)*[g(x,y)*h(x,y)] = [f(x,y)*g(x,y)]*h(x,y)$$Create an example that demonstrates how to use the associative property to filter an image with a band-pass filter (hint: use a low-pass mask and a high-pass mask; a sketch of this is added at the end of the associativity cell below). **Fernando**: I performed the commutative convolution with ia.conv and scipy.signal.convolve. I did this because I know that numpy.convolve swaps which array is convolved over which, depending on their sizes, but I was not sure about scipy. I took the difference between the results in the two cases, and it was zero.
###Code
kernel_ex3 = gauss_kernel(7)
f_filtered_ex3_1 = ia.conv(f, kernel_ex3)
f_filtered_ex3_2 = ia.conv(kernel_ex3, f)
fig_ex3 = plt.figure(figsize=[10, 10])
fig_ex3.add_subplot(121).axis('off')
plt.imshow(f_filtered_ex3_1, cmap='gray')
fig_ex3.add_subplot(122).axis('off')
plt.imshow(f_filtered_ex3_2, cmap='gray')
plt.show()
print(np.sum(f_filtered_ex3_1 - f_filtered_ex3_2))
f_filtered_ex3_1 = sc.convolve(f, kernel_ex3)
f_filtered_ex3_2 = sc.convolve(kernel_ex3, f)
fig_ex3 = plt.figure(figsize=[10, 10])
fig_ex3.add_subplot(121).axis('off')
plt.imshow(f_filtered_ex3_1, cmap='gray')
fig_ex3.add_subplot(122).axis('off')
plt.imshow(f_filtered_ex3_2, cmap='gray')
plt.show()
print(np.sum(f_filtered_ex3_1 - f_filtered_ex3_2))
###Output
_____no_output_____
###Markdown
- Associative $$f(x,y)*[g(x,y)*h(x,y)] = [f(x,y)*g(x,y)]*h(x,y)$$
###Code
f_xy = f.copy()
# Laplacian (high-pass) mask
g_xy = np.array([[0, 0, -1, 0, 0],
[0, -1, -2, -1, 0],
[-1, -2, 16, -2, -1],
[0, -1, -2, -1, 0],
[0, 0, -1, 0, 0]])
h_xy = np.ones((5, 5)) / (5 * 5)  # 5x5 averaging (low-pass) mask
# f(x, y) * [g(x, y) * h(x, y)]
term1 = sc.convolve2d(g_xy, h_xy, mode='full')
result1 = sc.convolve2d(term1, f_xy, mode='full')
# g(x, y) * [f(x, y) * h(x, y)]
term2 = sc.convolve2d(f_xy, h_xy, mode='full')
result2 = sc.convolve2d(term2, g_xy, mode='full')
# h(x, y) * [f(x, y) * g(x, y)]
term3 = sc.convolve2d(f_xy, g_xy, mode='full')
result3 = sc.convolve2d(term3, h_xy, mode='full')
fig_ex3_3 = plt.figure(figsize=[12, 12])
fig_ex3_3.add_subplot(131)
plt.imshow(result1, cmap='gray')
fig_ex3_3.add_subplot(132)
plt.imshow(result2, cmap='gray')
fig_ex3_3.add_subplot(133)
plt.imshow(result3, cmap='gray')
plt.show()
print(np.sum(result1-result2))
print(np.sum(result1-result3))
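# Band-pass filtering via the associative property (the hint from exercise 3), as a sketch:
# the high-pass (Laplacian) mask g_xy and the low-pass (averaging) mask h_xy are first combined
# into one small band-pass mask (term1 above), which is then applied to the image with a single convolution.
f_band_pass = sc.convolve2d(f_xy, term1, mode='same')
plt.figure(figsize=[6, 6])
plt.imshow(f_band_pass, cmap='gray')
plt.title('Band-pass filtering using the associative property')
plt.show()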
###Output
_____no_output_____ |
user-story-2-software-citations/py-software-citations.ipynb | ###Markdown
 | [FREYA](https://www.project-freya.eu/en) WP2 [User Story 2](https://github.com/datacite/freya/issues/63) | As a software author, I want to be able to see the citations of my software aggregated across all versions, so that I see a complete picture of reuse. :------------- | :------------- | :-------------Software development process involves versioned releases. Consequently, different software versions may be used for scientific discovery and thus referenced in publications. In order to quantify impact of a software, its author must be able to capture the reuse of the software across all its versions.This notebook uses the [DataCite GraphQL API](https://api.datacite.org/graphql) to retrieve metadata about software titled: [Calculation Package: Inverting topography for landscape evolution model process representation](https://doi.org/10.5281/zenodo.2799488), including all its versions, so that its overall reuse can be quantified.**Goal**: By the end of this notebook, for a given software you should be able to display:- Counts of citations, views and downloads metrics, aggregated across all versions of the software- An interactive stacked bar plot showing how the metric counts of each version contribute to the corresponding aggregated metric counts, e.g. Install libraries and prepare GraphQL client
###Code
%%capture
# Install required Python packages
!pip install gql requests numpy plotly
# Prepare the GraphQL client
import requests
from IPython.display import display, Markdown
from gql import gql, Client
from gql.transport.requests import RequestsHTTPTransport
_transport = RequestsHTTPTransport(
url='https://api.datacite.org/graphql',
use_json=True,
)
client = Client(
transport=_transport,
fetch_schema_from_transport=True,
)
###Output
_____no_output_____
###Markdown
Define and run GraphQL queryDefine the GraphQL query to retrieve metadata for the software titled: [Calculation Package: Inverting topography for landscape evolution model process representation](https://doi.org/10.5281/zenodo.2799488) using its DOI.
###Code
# Generate the GraphQL query to retrieve the required software's metadata
query_params = {
"softwareId" : "https://doi.org/10.5281/zenodo.2799488"
}
query = gql("""query getSoftware($softwareId: ID!)
{
software(id: $softwareId) {
id
titles {
title
}
publicationYear
citations {
nodes {
id
titles {
title
}
}
}
version
versionCount
versionOfCount
citationCount
downloadCount
viewCount
versions {
nodes {
id
version
publicationYear
titles {
title
}
citations {
nodes {
id
titles {
title
}
}
}
version
versionCount
versionOfCount
citationCount
downloadCount
viewCount
}
}
}
}
""")
###Output
_____no_output_____
###Markdown
Run the above query via the GraphQL client
###Code
import json
data = client.execute(query, variable_values=json.dumps(query_params))
###Output
_____no_output_____
###Markdown
Display total software metricsDisplay total number of citations, views and downloads across all versions of software: [Calculation Package: Inverting topography for landscape evolution model process representation](https://doi.org/10.5281/zenodo.2799488).
###Code
# Get the total count per metric, aggregated for across all versions of the software
software = data['software']
# Initialise metric counts
metricCounts = {}
for metric in ['citationCount', 'viewCount', 'downloadCount']:
metricCounts[metric] = 0
# Aggregate metric counts across all the version
for node in software['versions']['nodes']:
for metric in metricCounts:
metricCounts[metric] += node[metric]
# Display the aggregated metric counts
tableBody=""
for metric in metricCounts:
tableBody += "%s | **%s**\n" % (metric, str(metricCounts[metric]))
if tableBody:
display(Markdown("Aggregated metric counts for software: [Calculation Package: Inverting topography for landscape evolution model process representation](https://doi.org/10.5281/zenodo.2799488) across all its versions:"))
display(Markdown("|Metric | Aggregated Count|\n|---|---|\n%s" % tableBody))
###Output
_____no_output_____
###Markdown
Plot metric counts per software versionPlot stacked bar plot showing how the individual versions of software: [Calculation Package: Inverting topography for landscape evolution model process representation](https://doi.org/10.5281/zenodo.2799488) contribute their metric counts to the corresponding aggregated total.
###Code
import plotly.io as pio
import plotly.express as px
from IPython.display import IFrame
import pandas as pd
# Adapted from: https://stackoverflow.com/questions/58766305/is-there-any-way-to-implement-stacked-or-grouped-bar-charts-in-plotly-express
def px_stacked_bar(df, color_name='Metric', y_name='Metrics', **pxargs):
idx_col = df.index.name
m = pd.melt(df.reset_index(), id_vars=idx_col, var_name=color_name, value_name=y_name)
# For Plotly colour sequences see: https://plotly.com/python/discrete-color/
return px.bar(m, x=idx_col, y=y_name, color=color_name, **pxargs,
color_discrete_sequence=px.colors.qualitative.Pastel1)
# Collect metric counts
software = data['software']
version = software['version']
# Initialise dicts for the stacked bar plot
labels = {0: 'All Software Versions'}
citationCounts = {}
viewCounts = {}
downloadCounts = {}
# Collect software/version labels
versionCnt = 1
for node in software['versions']['nodes']:
version = software['version']
labels[versionCnt] = '%s (%s)' % (version, node['publicationYear'])
versionCnt += 1
# Initialise aggregated metric counts (key: 0)
citationCounts[0] = 0
viewCounts[0] = 0
downloadCounts[0] = 0
# Populate metric counts for individual versions (key: versionCnt) and add them to the aggregated counts (key: 0)
versionCnt = 1
for node in software['versions']['nodes']:
citationCounts[0] += node['citationCount']
viewCounts[0] += node['viewCount']
downloadCounts[0] += node['downloadCount']
citationCounts[versionCnt] = node['citationCount']
viewCounts[versionCnt] = node['viewCount']
downloadCounts[versionCnt] = node['downloadCount']
versionCnt += 1
# Create stacked bar plot
df = pd.DataFrame({'Software/Versions': labels,
'Citations': citationCounts,
'Views': viewCounts,
'Downloads': downloadCounts})
fig = px_stacked_bar(df.set_index('Software/Versions'), y_name = "Counts")
# Set plot background to transparent
fig.update_layout({
'plot_bgcolor': 'rgba(0, 0, 0, 0)',
'paper_bgcolor': 'rgba(0, 0, 0, 0)'
})
# Write interactive plot out to html file
pio.write_html(fig, file='out.html')
# Display plot from the saved html file
display(Markdown("Citations, views and downloads counts for software: [Calculation Package: Inverting topography for landscape evolution model process representation](https://doi.org/10.5281/zenodo.2799488) across all its versions, shown as stacked bar plot:"))
IFrame(src="./out.html", width=500, height=500)
###Output
_____no_output_____ |
notebooks/iomega-3-classical-spectra-similarities.ipynb | ###Markdown
Iomega workflow Calculate classical spectra similarity scoresCalculate all-vs-all similarity matrices for the data subset "Unique InchiKeys" (>12,000 spectra).
###Code
import os
import sys
import time
#path_data = os.path.join(os.path.dirname(os.getcwd()), 'data')
path_data = 'C:\\OneDrive - Netherlands eScience Center\\Project_Wageningen_iOMEGA\\matchms\\data\\'
path_root = os.path.join(os.path.dirname(os.getcwd()))
sys.path.insert(0, path_root)
###Output
_____no_output_____
###Markdown
Import pre-processed data subset "Unique InchiKeys"
###Code
from matchms.importing import load_from_json
filename = os.path.join(path_data,'gnps_positive_ionmode_unique_inchikey_cleaned_by_matchms_and_lookups.json')
spectrums = load_from_json(filename)
print("number of spectra:", len(spectrums))
###Output
number of spectra: 13717
###Markdown
Post-process spectra+ Normalize spectrum+ Remove peaks outside the m/z range from 0 to 1000.0+ Discard spectra with fewer than 10 remaining peaks (to make it consistent with the later spec2vec analysis)+ Remove peaks with relative intensity lower than 0.01
###Code
from matchms.filtering import normalize_intensities
from matchms.filtering import require_minimum_number_of_peaks
from matchms.filtering import select_by_mz
from matchms.filtering import select_by_relative_intensity
def post_process(s):
s = normalize_intensities(s)
s = select_by_mz(s, mz_from=0, mz_to=1000)
s = require_minimum_number_of_peaks(s, n_required=10)
s = select_by_relative_intensity(s, intensity_from=0.01, intensity_to=1.0)
return s
# apply filters to the data
spectrums = [post_process(s) for s in spectrums]
# omit spectrums that didn't qualify for analysis
spectrums = [s for s in spectrums if s is not None]
print("Remaining number of spectra:", len(spectrums))
###Output
Remaining number of spectra: 12797
###Markdown
Display number of peaks per spectrum
###Code
import numpy as np
from matplotlib import pyplot as plt
number_of_peaks = [len(spec.peaks) for spec in spectrums]
plt.figure(figsize=(12,7))
hist = plt.hist(number_of_peaks, np.arange(0,2000,20))
plt.xlabel("number of peaks in spectrum")
plt.ylabel("number of spectra in respective bin")
###Output
_____no_output_____
###Markdown
Calculate similarity score matrices+ Similarities between all possible pairs of spectra will be calculated. This will give a similarity score matrix of size 12,797 x 12,797.+ Careful: for the dataset used here, calculating the all-vs-all similarity score matrix will take a while (a few hours). Calculate cosine similarity scores+ here using ``tolerance = 0.005``, ``mz_power = 0.0``, ``intensity_power = 1.0``+ ``safety_points=10`` is optional; it simply makes sure that the intermediate results are occasionally saved (10x during the process).
###Code
import numpy as np
from matchms.similarity import CosineGreedy
# Define similarity measure
similarity_measure = CosineGreedy(tolerance=0.005, mz_power=0, intensity_power=1.0)
filename = os.path.join(path_data, "similarities_cosine_tol0005_201207.npy")
tstart = time.time()
similarity_matrix = similarity_measure.matrix(spectrums, spectrums, is_symmetric=True)
tend = time.time()
# Save results and print computation time
np.save(filename, similarity_matrix)
print(f"Calculated {similarity_matrix.shape[0]}x{similarity_matrix.shape[1]} scores in {tend-tstart} s.")
print(f"Calculated {similarity_matrix.shape[0]}x{similarity_matrix.shape[1]} scores in {(tend-tstart)/60} min.")
from matchms.similarity import CosineGreedy
from custom_functions.similarity_matrix import all_vs_all_similarity_matrix
# Define similarity measure
similarity_measure = CosineGreedy(tolerance=0.005)
filename = os.path.join(path_data, "similarities_cosine_tol0005_200708.npy")
similarities, num_matches = all_vs_all_similarity_matrix(spectrums, similarity_measure,
filename, safety_points=10)
###Output
About 99.990% of similarity scores calculated.
###Markdown
Compare calculation time with Spec2vec- run on the same filtered data
###Code
import gensim
path_models = os.path.join(path_data, "trained_models")
model_file = os.path.join(path_models, "spec2vec_UniqueInchikeys_ratio05_filtered_iter_50.model")
# Load pretrained model
model = gensim.models.Word2Vec.load(model_file)
from spec2vec import Spec2Vec
from spec2vec import SpectrumDocument
tstart = time.time()
documents = [SpectrumDocument(s, n_decimals=2) for s in spectrums]
tend = time.time()
print(f"Time to create {len(documents)} documents: {tend-tstart} s.")
spec2vec_similarity = Spec2Vec(model, intensity_weighting_power=0.5, allowed_missing_percentage=20.0)
tstart = time.time()
similarity_matrix = spec2vec_similarity.matrix(documents, documents, is_symmetric=True)
tend = time.time()
print(f"Calculated {similarity_matrix.shape[0]}x{similarity_matrix.shape[1]} scores in {tend-tstart} s.")
print(f"Calculated {similarity_matrix.shape[0]}x{similarity_matrix.shape[1]} scores in {(tend-tstart)/60} min.")
###Output
Found 6 word(s) missing in the model. Weighted missing percentage not covered by the given model is 2.30%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.18%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.18%.
Found 6 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.81%.
Found 17 word(s) missing in the model. Weighted missing percentage not covered by the given model is 1.75%.
Found 3 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.44%.
Found 6 word(s) missing in the model. Weighted missing percentage not covered by the given model is 1.11%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.33%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.11%.
Found 6 word(s) missing in the model. Weighted missing percentage not covered by the given model is 1.25%.
Found 3 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.61%.
Found 2 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.19%.
Found 3 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.47%.
Found 2 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.63%.
Found 6 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.47%.
Found 2 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.80%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.17%.
Found 4 word(s) missing in the model. Weighted missing percentage not covered by the given model is 1.21%.
Found 43 word(s) missing in the model. Weighted missing percentage not covered by the given model is 1.98%.
Found 342 word(s) missing in the model. Weighted missing percentage not covered by the given model is 5.34%.
Found 219 word(s) missing in the model. Weighted missing percentage not covered by the given model is 5.30%.
Found 307 word(s) missing in the model. Weighted missing percentage not covered by the given model is 4.89%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.22%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.08%.
Found 3 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.41%.
Found 5 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.18%.
Found 3 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.24%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.12%.
Found 2 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.21%.
Found 4 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.74%.
Found 7 word(s) missing in the model. Weighted missing percentage not covered by the given model is 1.26%.
Found 14 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.81%.
Found 25 word(s) missing in the model. Weighted missing percentage not covered by the given model is 1.30%.
Found 19 word(s) missing in the model. Weighted missing percentage not covered by the given model is 1.36%.
Found 2 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.26%.
Found 21 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.68%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.12%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.16%.
Found 306 word(s) missing in the model. Weighted missing percentage not covered by the given model is 8.12%.
Found 13 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.76%.
Found 3 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.15%.
Found 5 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.44%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.17%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.18%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.17%.
Found 2156 word(s) missing in the model. Weighted missing percentage not covered by the given model is 9.37%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.20%.
Found 33 word(s) missing in the model. Weighted missing percentage not covered by the given model is 1.19%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.16%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.14%.
Found 12 word(s) missing in the model. Weighted missing percentage not covered by the given model is 1.17%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.19%.
Found 2 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.25%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.20%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.24%.
Found 731 word(s) missing in the model. Weighted missing percentage not covered by the given model is 5.67%.
Found 301 word(s) missing in the model. Weighted missing percentage not covered by the given model is 5.21%.
Found 2 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.31%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.12%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.21%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.33%.
Found 4 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.33%.
Found 2 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.15%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.29%.
Found 2 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.18%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.17%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.11%.
Found 5 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.26%.
Found 6 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.50%.
Found 3 word(s) missing in the model. Weighted missing percentage not covered by the given model is 1.22%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.23%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.14%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.41%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.21%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.13%.
Found 2 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.38%.
Found 1 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.32%.
Found 3 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.47%.
Found 3 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.47%.
Found 5 word(s) missing in the model. Weighted missing percentage not covered by the given model is 0.84%.
###Markdown
Calculate cosine similarity scores (alternative parameters)+ here using ``tolerance = 0.005``, ``mz_power = 0.0``, ``intensity_power = 0.33``+ ``safety_points=10`` is optional; it simply makes sure that the intermediate results are occasionally saved (10x during the process). --> Took about 8 hours
###Code
from matchms.similarity import CosineGreedy
from custom_functions.similarity_matrix import all_vs_all_similarity_matrix
# Define similarity measure
similarity_measure = CosineGreedy(tolerance=0.005, mz_power=0.0, intensity_power=0.33)
filename = os.path.join(path_data, "similarities_cosine_tol0005_intpow033_200716.npy")
similarities, num_matches = all_vs_all_similarity_matrix(spectrums, similarity_measure,
filename, safety_points=10)
###Output
About 99.990% of similarity scores calculated.
###Markdown
Calculate cosine similarity scores (NIST settings)+ here using ``tolerance = 0.005``, ``mz_power = 3.0``, ``intensity_power = 0.6``+ ``safety_points=10`` is optional; it simply makes sure that the intermediate results are occasionally saved (10x during the process). Computation time: about 6 hours (run on: Intel i7-8550U)
###Code
from matchms.similarity import CosineGreedy
from custom_functions.similarity_matrix import all_vs_all_similarity_matrix
# Define similarity measure
similarity_measure = CosineGreedy(tolerance=0.005, mz_power=3.0, intensity_power=0.6)
filename = os.path.join(path_data, "similarities_cosine_tol0005_NIST_200716.npy")
similarities, num_matches = all_vs_all_similarity_matrix(spectrums, similarity_measure,
filename, safety_points=10)
###Output
About 99.990% of similarity scores calculated.
###Markdown
Calculate cosine similarity scores (MassBank settings)+ here using ``tolerance = 0.005``, ``mz_power = 2.0``, ``intensity_power = 0.5``+ ``safety_points=10`` is optional; it simply makes sure that the intermediate results are occasionally saved (10x during the process). Computation time: about 6 hours (run on: Intel i7-8550U)
###Code
from matchms.similarity import CosineGreedy
from custom_functions.similarity_matrix import all_vs_all_similarity_matrix
# Define similarity measure
similarity_measure = CosineGreedy(tolerance=0.005, mz_power=2.0, intensity_power=0.5)
filename = os.path.join(path_data, "similarities_cosine_tol0005_MassBank_200716.npy")
similarities, num_matches = all_vs_all_similarity_matrix(spectrums, similarity_measure,
filename, safety_points=10)
###Output
About 99.990% of similarity scores calculated.
###Markdown
Calculate modified cosine similarity scores+ here using ``tolerance = 0.005``, ``mz_power = 0.0``, ``intensity_power = 1.0``
###Code
from matchms.similarity import ModifiedCosine
# Define similarity measure
similarity_measure = ModifiedCosine(tolerance=0.005, mz_power=0, intensity_power=1.0)
filename = os.path.join(path_data, "similarities_mod_cosine_tol0005_201202.npy")
tstart = time.time()
similarity_matrix = similarity_measure.matrix(spectrums, spectrums, is_symmetric=True)
tend = time.time()
np.save(filename, similarity_matrix)
print(f"Calculated {similarity_matrix.shape[0]}x{similarity_matrix.shape[1]} scores in {tend-tstart} s.")
print(f"Calculated {similarity_matrix.shape[0]}x{similarity_matrix.shape[1]} scores in {(tend-tstart)/60} min.")
print(f"Matrix with {similarity_matrix.shape[0]}x{similarity_matrix.shape[1]} scores")
num_of_calculated_scores = (similarity_matrix.shape[0] ** 2 ) / 2 + similarity_matrix.shape[0]/2
print(f"Corresponding to {num_of_calculated_scores} unique scores.")
print(f"On average it took {1000 * (tend-tstart) / num_of_calculated_scores} ms per score.")
similarity_matrix[10, :25]
a, b = zip(*similarity_matrix[:5, :5])
a
###Output
_____no_output_____ |
Pandas - Data Analysis with Pandas and Python - BP/05_DataFrames_3_Extracting_Data.ipynb | ###Markdown
1) Intro to DataFrame III + Import Dataset
###Code
import pandas as pd
bond = pd.read_csv('Data/jamesbond.csv')
bond.head(3)
###Output
_____no_output_____
###Markdown
---- 2) The `set_index` and `reset_index` Methods using `index_col`
###Code
bond = pd.read_csv('Data/jamesbond.csv', index_col=['Film'])
bond.head(3)
###Output
_____no_output_____
###Markdown
using `set_index` for single level
###Code
bond = pd.read_csv('Data/jamesbond.csv')
bond.head(3)
bond = bond.set_index('Film')
bond.head(3)
###Output
_____no_output_____
###Markdown
using `.reset_index()`+ `drop=False`: don't drop the column which was formerly the index
###Code
bond.reset_index()
bond.reset_index(drop=False) # Film column is still back in df
bond.reset_index(drop=True) # now Flim column is gone
###Output
_____no_output_____
###Markdown
--------- Let's say we want to replace the current index of `Film` with the `Year` column
###Code
bond.set_index('Year') # if we do it like this, the original Film index will be lost, so we need to avoid this
###Output
_____no_output_____
###Markdown
to avoid the above scenario, we need to do it as below:+ reset the index+ then set the preferred column as the index
###Code
bond = bond.reset_index()
bond = bond.set_index('Year')
bond.head(3)
###Output
_____no_output_____
###Markdown
------- 3) Retrieve Rows by Index Label with `.loc[]` Accessor
###Code
bond = pd.read_csv('Data/jamesbond.csv', index_col=['Film'])
bond = bond.sort_index()
bond.head(3)
###Output
_____no_output_____
###Markdown
TIP: sorting the index beforehand will boost pandas' search performance. It is the same principle as a dictionary: it is easier to find the meaning of a keyword when the keywords are sorted beforehand, compared to being randomly jumbled. ------ The `.loc[]` accessor accesses rows by `index label`+ a likely reason we use square brackets [] is that, in pandas, indexing to access values almost always uses []. That is probably why the pandas developers designed .loc with [] square brackets.
###Code
bond.loc['Goldfinger']
bond.loc['GoldenEye']
# bond.loc['blah']
###Output
_____no_output_____
###Markdown
If there are more than one row for that Index label, it returns as DF instead of series
###Code
bond.loc['Casino Royale']
bond.loc['Diamonds Are Forever': 'From Russia With Love'] # with .loc label-based slicing in pandas, both the start and stop bounds are inclusive
bond.loc['Diamonds Are Forever': 'From Russia With Love': 2] # step of 2, i.e., take every second row
bond.loc['GoldenEye': ]
bond.loc[ : 'Skyfall' ]
bond.loc[['Skyfall', 'Goldfinger', 'Octopussy']] # order are kept in the same way
bond.loc[['Octopussy', 'Die Another Day']]
# bond.loc[['Octopussy', 'Die Another Day', 'Blah']] # this will result in KeyError
###Output
_____no_output_____
###Markdown
to avoid getting such a KeyError, we should always check whether the key exists first
###Code
'Skyfall' in bond.index
'Blah' in bond.index
if 'Skyfall' in bond.index:
print(bond.loc['Skyfall'])
###Output
Year 2012
Actor Daniel Craig
Director Sam Mendes
Box Office 943.5
Budget 170.2
Bond Actor Salary 14.5
Name: Skyfall, dtype: object
###Markdown
------- 4) Retrieve Rows by Index Position with `iloc` Accessor
###Code
bond = pd.read_csv('Data/jamesbond.csv')
bond.head(3)
bond.iloc[0]
bond.iloc[15]
bond.iloc[[10, 20, 25]]
# bond.iloc[100]
bond.iloc[15: 20] # ending bound is exclusive
bond.iloc[20:]
bond.iloc[: 5]
bond = bond.set_index('Film')
bond = bond.sort_index()
bond.head(3)
bond.loc['A View to a Kill']
bond.iloc[0]
bond.iloc[15]
bond.iloc[10:16]
# bond.iloc[[10, 20, 30]]
###Output
_____no_output_____
###Markdown
------- 5) Second Arguments to `loc` and `iloc` Accessors
###Code
bond = pd.read_csv('Data/jamesbond.csv', index_col=['Film'])
bond = bond.sort_index()
bond.head(3)
###Output
_____no_output_____
###Markdown
Getting row and column values (intersection of values)+ mix and match of slicing
###Code
bond.loc['Moonraker', 'Actor']
bond.loc['Moonraker', 'Director']
bond.loc['Moonraker', ['Director', 'Box Office']]
bond.loc[['Moonraker', 'A View to a Kill'], ['Director', 'Box Office']]
bond.loc['Moonraker', 'Director': 'Budget']
bond.loc['Moonraker': 'Thunderball', 'Director': 'Budget']
bond.loc['Moonraker': , 'Director': ]
bond.loc[: 'Moonraker', : 'Budget']
bond.iloc[14]
bond.iloc[14, 2]
bond.iloc[14, 2:5]
bond.iloc[[14, 17], [2, 4]]
bond.iloc[: 15, : 4]
bond.iloc[7: , [0, 5]]
###Output
_____no_output_____
###Markdown
---- 6) Set New Value for a Specific Cell
###Code
bond = pd.read_csv('Data/jamesbond.csv', index_col=['Film'])
bond = bond.sort_index()
bond.head(3)
###Output
_____no_output_____
###Markdown
we can directly assign a new value
###Code
bond.loc['Dr. No', 'Actor'] = 'Sir Sean Connery'
bond.loc['Dr. No', 'Actor']
bond.loc['Dr. No', ['Box Office', 'Budget', 'Bond Actor Salary']] = [4480000, 7000000, 6000000]
bond.loc['Dr. No', ['Box Office', 'Budget', 'Bond Actor Salary']]
###Output
_____no_output_____
###Markdown
----- 7) Set Multiple Values in DataFrame
###Code
bond = pd.read_csv('Data/jamesbond.csv', index_col=['Film'])
bond = bond.sort_index()
bond.head(3)
actor_is_sean_connery = bond['Actor'] == 'Sean Connery'
# bond[actor_is_sean_connery]['Actor'] = 'Sir Sean Connery' # incorrect way to do that, will result in warning
###Output
_____no_output_____
###Markdown
using `.loc` and passing a boolean `Series` to make changes directly
###Code
# this keeps a reference to the original dataframe
# this is a subset of the original data frame, so any change we make here is applied directly to the original one
bond.loc[actor_is_sean_connery, 'Actor'] = 'Sir Sean Connery'
bond[actor_is_sean_connery]
###Output
_____no_output_____
###Markdown
------ 8) Rename Index Labels or Columns in a `DataFrame`
###Code
bond = pd.read_csv('Data/jamesbond.csv', index_col=['Film'])
bond = bond.sort_index()
bond.head(3)
###Output
_____no_output_____
###Markdown
8.1) Renaming Index Labels Option 1) using `mapper` parameter+ provide `dictionary with {current_name: new_name}`+ mapper needs to be combined with `axis` parameter
###Code
bond.rename(mapper={'GoldenEye': 'Golden Eye',
'The World Is Not Enough': 'Best Bond Movie Ever'})
bond.rename(mapper={'GoldenEye': 'Golden Eye',
'The World Is Not Enough': 'Best Bond Movie Ever'}, axis=0)
bond.rename(mapper={'GoldenEye': 'Golden Eye',
'The World Is Not Enough': 'Best Bond Movie Ever'}, axis='rows')
bond.rename(mapper={'GoldenEye': 'Golden Eye',
'The World Is Not Enough': 'Best Bond Movie Ever'}, axis='index')
###Output
_____no_output_____
###Markdown
Option 2) using `index` parameter (Preferred Approach)+ when using index, there is no need to specify axis because both are basically the same. If both are used, it will cause an error.
###Code
bond.rename(index={'GoldenEye': 'Golden Eye',
'The World Is Not Enough': 'Best Bond Movie Ever'})
bond = bond.rename(index={'GoldenEye': 'Golden Eye',
'The World Is Not Enough': 'Best Bond Movie Ever'})
bond
###Output
_____no_output_____
###Markdown
8.2) Renaming Columns
###Code
bond.head(1)
###Output
_____no_output_____
###Markdown
Option 1) using `mapper` parameter with `axis`
###Code
bond.rename(mapper={
'Year': 'Released Date',
'Box Office': 'Revenue'
}, axis=1)
bond.rename(mapper={
'Year': 'Released Date',
'Box Office': 'Revenue'
}, axis='columns')
###Output
_____no_output_____
###Markdown
Option 2) using `columns` parameter (Preferred Method)
###Code
bond.rename(columns={
'Year': 'Released Date',
'Box Office': 'Revenue'
})
bond = bond.rename(columns={
'Year': 'Released Date',
'Box Office': 'Revenue'
})
bond.head(1)
###Output
_____no_output_____
###Markdown
Option 3) changing column names using a `list` of values+ in this approach, we need to pass all the column names regardless of whether we want to change them or not
###Code
bond.columns
bond.columns = ['Release Date', 'Actor', 'Director', 'Gross', 'Cost', 'Bond Actor Salary']
bond.head(1)
###Output
_____no_output_____
###Markdown
-------- 9) Delete Rows or Columns from a DataFrame
###Code
bond = pd.read_csv('Data/jamesbond.csv', index_col=['Film'])
bond = bond.sort_index()
bond.head(3)
###Output
_____no_output_____
###Markdown
9.1) Dropping Row/s using `.drop()` Method
###Code
bond.drop('A View to a Kill')
bond.drop('Casino Royale')
bond.drop(['A View to a Kill', 'Die Another Day', 'Goldfinger'])
###Output
_____no_output_____
###Markdown
9.1.2) Dropping Column/s using `.drop()` Method
###Code
bond.drop('Year', axis=1)
bond.drop('Year', axis='columns')
bond.drop(['Actor', 'Budget', 'Year'], axis=1)
bond.head(1)
###Output
_____no_output_____
###Markdown
---------- 9.2) Popping and Deleting columns using the `.pop()` Method+ removes the column from the original dataframe+ also returns it
###Code
actor = bond.pop('Actor')
actor
bond.head(1) # now 'Actor' column is removed
###Output
_____no_output_____
###Markdown
----------
###Code
bond = pd.read_csv('Data/jamesbond.csv', index_col=['Film'])
bond = bond.sort_index()
bond.head(3)
###Output
_____no_output_____
###Markdown
9.3) directly deleting columns using `del` keyword
###Code
del bond['Director']
bond.head(1) # now Director column is removed
del bond['Year']
bond.head(1)
###Output
_____no_output_____
###Markdown
-------- 10) Create Random Sample using `.sample()` Method+ `n`: number of samples+ `frac`: fraction (percentage) of rows to sample
###Code
bond = pd.read_csv('Data/jamesbond.csv', index_col=['Film'])
bond = bond.sort_index()
bond.head(3)
bond.sample() # this will return random one sample row
bond.sample(n=5)
bond.sample(n=5, axis=0)
bond.sample(n=5, axis='index')
bond.shape
26 * .25 # 6 rows is 25% of original 26 rows
bond.sample(frac = .25)
bond.sample(frac = .25, axis=0)
bond.sample(n = 3, axis=1) # 3 random columns
bond.sample(n=3, axis='columns')
###Output
_____no_output_____
###Markdown
------- 11) The `.nsmallest()` and `.nlargest()` Methods. NOTE: These methods are **very efficient** for retrieving the largest/smallest rows of a very large dataframe.
###Code
bond = pd.read_csv('Data/jamesbond.csv', index_col=['Film'])
bond = bond.sort_index()
bond.head(3)
###Output
_____no_output_____
###Markdown
Compared to `sort_values`, `nlargest` is much faster on very large datasets (a rough timing sketch is included at the end of the cell below).
###Code
bond.sort_values('Box Office', ascending=False).head(3)
bond.nlargest(3, columns=['Box Office', 'Budget'])
bond.nsmallest(2, columns='Box Office')
bond.nlargest(3, columns='Budget')
bond.nsmallest(n=6, columns='Bond Actor Salary')
bond['Box Office'].nlargest(8)
bond['Year'].nsmallest(2)
# 3 smallest Box Office moves starring Sean Connery
bond[bond['Actor'] == 'Sean Connery'].nsmallest(3, columns='Box Office')
# Top 3 Box Office Movies starring Sean Connery
bond[bond['Actor'] == 'Sean Connery'].nlargest(3, columns='Box Office')
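# Optional rough timing sketch (illustrative only): compare a full sort_values() + head()
# against nlargest() on a larger random DataFrame with one million rows.
import time
import numpy as np
big = pd.DataFrame({'value': np.random.rand(1_000_000)})
start = time.perf_counter()
big.sort_values('value', ascending=False).head(3)
t_sort = time.perf_counter() - start
start = time.perf_counter()
big.nlargest(3, columns='value')
t_nlargest = time.perf_counter() - start
print(f'sort_values + head: {t_sort:.4f}s | nlargest: {t_nlargest:.4f}s')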
###Output
_____no_output_____
###Markdown
----- 12) Filtering with `where` Method
###Code
bond = pd.read_csv('Data/jamesbond.csv', index_col=['Film'])
bond = bond.sort_index()
bond.head(3)
is_sean_connery = bond['Actor'] == 'Sean Connery'
bond[is_sean_connery]
###Output
_____no_output_____
###Markdown
by using `where`, we get a view of the whole data frame: rows which didn't fulfill the condition are also returned, but with NaN in their columns. It is useful if you don't want just the subset that fulfills the condition, but the whole data frame.
###Code
bond.where(is_sean_connery)
bond.where(bond['Box Office'] > 800)
is_box_office_more_than_800 = bond['Box Office'] > 800
bond.where(is_sean_connery & is_box_office_more_than_800)
###Output
_____no_output_____
###Markdown
------------ 13) The `.query()` Method NOTE: it only works if column names **don't have any spaces in them**
###Code
bond = pd.read_csv('Data/jamesbond.csv', index_col=['Film'])
bond = bond.sort_index()
bond.head(3)
###Output
_____no_output_____
###Markdown
Removing Space in Column Names
###Code
bond.columns
new_column_names = [col_name.replace(' ', '_') for col_name in bond.columns]
new_column_names
bond.columns = new_column_names
bond.head(1)
###Output
_____no_output_____
###Markdown
Querying on a DataFrame+ we write the query as a literal string
###Code
bond.query('Actor == "Sean Connery"')
bond.query('Director == "Terence Young"')
bond.query('Actor != "Roger Moore"')
bond.query('Box_Office > 600')
###Output
_____no_output_____
###Markdown
`and` / `or` can be used here
###Code
bond.query('Actor == "Roger Moore" and Director == "John Glen"')
###Output
_____no_output_____
###Markdown
using `in` and `not in`
###Code
bond.query('Actor in ["Timothy Dalton", "George Lazenby"]')
bond.query('Actor not in ["Roger Moore", "Sean Connery"]')
###Output
_____no_output_____
###Markdown
---- 14) A Review of the `.apply()` Method on Single Column
###Code
bond = pd.read_csv('Data/jamesbond.csv', index_col=['Film'])
bond = bond.sort_index()
bond.head(3)
###Output
_____no_output_____
###Markdown
We want to add `millions` as suffix to the last 3 columns
###Code
def convert_to_string_add_millions(number):
return str(number) + ' millions!'
bond['Box Office'] = bond['Box Office'].apply(convert_to_string_add_millions)
bond['Budget'] = bond['Budget'].apply(convert_to_string_add_millions)
###Output
_____no_output_____
###Markdown
Instead of applying the function to each column one by one as above, there is a more elegant way to do this.
###Code
columns = ['Box Office', 'Budget', 'Bond Actor Salary']
for col in columns:
bond[col] = bond[col].apply(convert_to_string_add_millions)
bond.head(3)
###Output
_____no_output_____
###Markdown
--------- 15) The `.apply()` Method with Row values
###Code
bond = pd.read_csv('Data/jamesbond.csv', index_col=['Film'])
bond = bond.sort_index()
bond.head(3)
###Output
_____no_output_____
###Markdown
We want to assign 3 different types of classifications for Movies
###Code
def good_movie(row):
    # access the row values by column label instead of position (more robust)
    actor = row['Actor']
    budget = row['Budget']
if actor == 'Pierce Brosnan':
return 'The Best'
elif actor == 'Roger Moore' and budget > 40:
return 'Enjoyable'
else:
return 'I have no clue.'
bond['classification'] = bond.apply(good_movie, axis='columns') # every row, we are moving left to right to check the conditions
bond.head()
###Output
_____no_output_____
###Markdown
------ 16) The `.copy()` Method
###Code
bond = pd.read_csv('Data/jamesbond.csv', index_col=['Film'])
bond = bond.sort_index()
bond.head(3)
directors = bond['Director']
directors.head(3)
directors['A View to a Kill'] = 'Mr. John Glen'
directors.head(3)
###Output
_____no_output_____
###Markdown
we can see the change is also reflected in the original Data Frame
###Code
bond.head(3)
###Output
_____no_output_____
###Markdown
What if we don't want the original Data Frame to be affected? We can use `.copy()`
###Code
directors = bond['Director'].copy()
directors.head(3)
directors['A View to a Kill'] = 'Mr. John Glen'
directors.head(3)
bond.head(3)
###Output
_____no_output_____ |
04-suggested-answers.ipynb | ###Markdown
SQL Introduction > Joining Tables: In-Class Exercise Reference Solutions (郭耀仁)
###Code
import sqlite3
import pandas as pd
from sqlFrameCheck import checkAnsQuery
from test_queries.test_queries_03 import extract_test_queries as etq
conn_twelection = sqlite3.connect('twelection.db')
conn_nba = sqlite3.connect('nba.db')
###Output
_____no_output_____
###Markdown
In-class exercise: Use `UNION` to vertically combine the vote counts of the three candidate tickets from `presidential2016` and `presidential2020`, creating a `year` variable to distinguish them, together with `number`, `candidates`, and `total_votes`.
###Code
ans_query = """
SELECT 2016 AS year,
number,
candidates,
SUM(votes) AS total_votes
FROM presidential2016
GROUP BY number
UNION
SELECT 2020 AS year,
number,
candidates,
SUM(votes) AS total_votes
FROM presidential2020
GROUP BY number;
"""
# Run the query to preview the result
pd.read_sql(ans_query, conn_twelection)
###Output
_____no_output_____
###Markdown
Compare against the test data
###Code
caq = checkAnsQuery(etq('0316'), ans_query, conn_twelection)
caq.run_test()
###Output
測資比對正確!
###Markdown
In-class exercise: Use `UNION` to vertically combine the vote shares of the three candidate tickets from `presidential2016` and `presidential2020`, creating a `year` variable to distinguish them, together with `number`, `candidates`, and `votes_percentage`.
###Code
ans_query = """
SELECT 2016 AS year,
number,
candidates,
CAST(SUM(votes) AS REAL) / CAST((SELECT SUM(votes) FROM presidential2016) AS REAL) AS votes_percentage
FROM presidential2016
GROUP BY number
UNION
SELECT 2020 AS year,
number,
candidates,
CAST(SUM(votes) AS REAL) / CAST((SELECT SUM(votes) FROM presidential2020) AS REAL) AS votes_percentage
FROM presidential2020
GROUP BY number;
"""
# Run the query to preview the result
pd.read_sql(ans_query, conn_twelection)
###Output
_____no_output_____
###Markdown
Compare against the test data
###Code
caq = checkAnsQuery(etq('0317'), ans_query, conn_twelection)
caq.run_test()
###Output
測資比對正確!
###Markdown
In-class exercise: Query `nba.db` for the career points per game (`ppg`), rebounds per game (`rpg`), and assists per game (`apg`) of the current Los Angeles Lakers roster; select `fullName`, `firstName`, `lastName`, `ppg`, `rpg`, and `apg`, sorted by `firstName` in ascending order.
###Code
ans_query = """
SELECT teams.fullName,
players.firstName,
players.lastName,
careerSummaries.ppg,
careerSummaries.rpg,
careerSummaries.apg
FROM players
JOIN teams
ON players.teamId = teams.teamId
JOIN careerSummaries
ON players.personId = careerSummaries.personId
WHERE teams.nickname = 'Lakers'
ORDER BY players.firstName;
"""
# Run the query to preview the result
pd.read_sql(ans_query, conn_nba)
###Output
_____no_output_____
###Markdown
Compare against the test data
###Code
caq = checkAnsQuery(etq('0320'), ans_query, conn_nba)
caq.run_test()
###Output
測資比對正確!
###Markdown
In-class exercise: Using `presidential2020`, compute the vote counts of the 韓國瑜/張善政 (Han Kuo-yu/Chang San-cheng) and 蔡英文/賴清德 (Tsai Ing-wen/Lai Ching-te) tickets in each of Taipei City's 12 districts; select the three variables `town`, `Kuo_Cheng`, and `Ing_Te`.
###Code
ans_query = """
SELECT ing_te.town,
kuo_cheng.Kuo_Cheng,
ing_te.Ing_Te
FROM (SELECT town,
SUM(votes) AS Kuo_Cheng
FROM presidential2020
WHERE county = '臺北市' AND
number = 2
GROUP BY town
) AS kuo_cheng
LEFT JOIN (SELECT town,
SUM(votes) AS Ing_Te
FROM presidential2020
WHERE county = '臺北市' AND
number = 3
GROUP BY town
) AS ing_te
ON kuo_cheng.town = ing_te.town;
"""
# Run the query to preview the result
pd.read_sql(ans_query, conn_twelection)
###Output
_____no_output_____
###Markdown
Compare against the test data
###Code
caq = checkAnsQuery(etq('0318'), ans_query, conn_twelection)
caq.run_test()
###Output
測資比對正確!
###Markdown
In-class exercise: Using `presidential2020`, compute the vote counts of the 韓國瑜/張善政 (Han Kuo-yu/Chang San-cheng) and 蔡英文/賴清德 (Tsai Ing-wen/Lai Ching-te) tickets in each of Taipei City's 12 districts; select the three variables `town`, `Kuo_Cheng`, and `Ing_Te`, and find the districts in which the 韓國瑜/張善政 ticket received more votes.
###Code
ans_query = """
SELECT ing_te.town,
kuo_cheng.Kuo_Cheng,
ing_te.Ing_Te
FROM (SELECT town,
SUM(votes) AS Kuo_Cheng
FROM presidential2020
WHERE county = '臺北市' AND
number = 2
GROUP BY town
) AS kuo_cheng
LEFT JOIN (SELECT town,
SUM(votes) AS Ing_Te
FROM presidential2020
WHERE county = '臺北市' AND
number = 3
GROUP BY town
) AS ing_te
ON kuo_cheng.town = ing_te.town
WHERE kuo_cheng.Kuo_Cheng > ing_te.Ing_Te;
"""
# Run the query to preview the result
pd.read_sql(ans_query, conn_twelection)
###Output
_____no_output_____
###Markdown
測資比對
###Code
caq = checkAnsQuery(etq('0319'), ans_query, conn_twelection)
caq.run_test()
###Output
測資比對正確!
|
lab_3/lab_03.ipynb | ###Markdown
Lab 03 - Convolutional Neural Networks (CNNs). Machine Learning, University of St. Gallen, Spring Term 2022. The lab environment is based on Jupyter Notebooks (https://jupyter.org), which allow us to perform a variety of statistical evaluations and data analyses. In this lab, we will learn how to enhance vanilla Artificial Neural Networks (ANNs) using `PyTorch` to classify even more complex images. To do so, we use a special type of deep neural network referred to as **Convolutional Neural Networks (CNNs)**. CNNs are able to take advantage of the hierarchical pattern in data and assemble more complex patterns from smaller and simpler ones. As a result, CNNs are capable of learning a set of discriminative features ('patterns') and subsequently utilizing the learned patterns to classify the content of an image. We will again use the functionality of the `PyTorch` library to implement and train a CNN-based neural network. The network will be trained on a set of tiny images to learn a model of the image content. Upon successful training, we will utilize the learned CNN model to classify so far unseen tiny images into distinct categories such as aeroplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. The figure below illustrates a high-level view on the machine learning process we aim to establish in this lab. (Image of the CNN architecture created via http://alexlenail.me/) As always, please don't hesitate to ask all your questions either during the lab, post them in our CANVAS (StudyNet) forum (https://learning.unisg.ch), or send us an email (using the course email). 1. Lab Objectives: After today's lab, you should be able to:> 1. Understand the basic concepts, intuitions and major building blocks of **Convolutional Neural Networks (CNNs)**.> 2. Know how to **implement and train a CNN** to learn a model of tiny image data.> 3. Understand how to apply such a learned model to **classify images** based on their content into distinct categories.> 4. Know how to **interpret and visualize** the model's classification results. 2. Setup of the Jupyter Notebook Environment Similar to the previous labs, we need to import a couple of Python libraries that allow for data analysis and data visualization. We will mostly use the `PyTorch`, `Numpy`, `Sklearn`, `Matplotlib`, `Seaborn` and a few utility libraries throughout this lab:
###Code
# import standard python libraries
import os, urllib, io
from datetime import datetime
import numpy as np
###Output
_____no_output_____
###Markdown
Import Python machine / deep learning libraries:
###Code
# import the PyTorch deep learning library
import torch, torchvision
import torch.nn.functional as F
from torch import nn, optim
from torch.autograd import Variable
###Output
_____no_output_____
###Markdown
Import the sklearn classification metrics:
###Code
# import sklearn classification evaluation library
from sklearn import metrics
from sklearn.metrics import classification_report, confusion_matrix
###Output
_____no_output_____
###Markdown
Import Python plotting libraries:
###Code
# import matplotlib, seaborn, and PIL data visualization libary
import matplotlib.pyplot as plt
import seaborn as sns
from PIL import Image
###Output
_____no_output_____
###Markdown
Enable notebook matplotlib inline plotting:
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
Create notebook folder structure to store the data as well as the trained neural network models:
###Code
# create the data sub-directory
data_directory = './data_cifar10'
if not os.path.exists(data_directory): os.makedirs(data_directory)
# create the models sub-directory
models_directory = './models_cifar10'
if not os.path.exists(models_directory): os.makedirs(models_directory)
###Output
_____no_output_____
###Markdown
Set a random `seed` value to obtain reproducible results:
###Code
# init deterministic seed
seed_value = 1234
np.random.seed(seed_value) # set numpy seed
torch.manual_seed(seed_value) # set pytorch seed CPU
###Output
_____no_output_____
###Markdown
3. Dataset Download and Data Assessment The **CIFAR-10 database** (**C**anadian **I**nstitute **F**or **A**dvanced **R**esearch) is a collection of images that are commonly used to train machine learning and computer vision algorithms. The database is widely used to conduct computer vision research using machine learning and deep learning methods: (Source: https://www.kaggle.com/c/cifar-10) Further details on the dataset can be obtained via: *Krizhevsky, A., 2009. "Learning Multiple Layers of Features from Tiny Images", ( https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf )."* The CIFAR-10 database contains **60,000 color images** (50,000 training images and 10,000 validation images). The size of each image is 32 by 32 pixels. The collection of images encompasses 10 different classes that represent airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. Let's define the distinct classes for further analytics:
###Code
cifar10_classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
###Output
_____no_output_____
###Markdown
The dataset thus contains 6,000 images for each of the ten classes. CIFAR-10 is a straightforward dataset that can be used to teach a computer how to recognize objects in images. Let's download, transform and inspect the training images of the dataset. To do so, we first define the directory in which we aim to store the training data:
###Code
train_path = './data/train_cifar10'
###Output
_____no_output_____
###Markdown
Now, let's download the training data accordingly:
###Code
# define pytorch transformation into tensor format
transf = torchvision.transforms.Compose([torchvision.transforms.ToTensor(), torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# download and transform training images
cifar10_train_data = torchvision.datasets.CIFAR10(root=train_path, train=True, transform=transf, download=True)
###Output
_____no_output_____
###Markdown
Verify the volume of training images downloaded:
###Code
# get the length of the training data
len(cifar10_train_data)
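# Optional check (a minimal sketch): count the training images per class. CIFAR-10 ships
# 5,000 training and 1,000 evaluation images per class, i.e., 6,000 images per class in total.
train_targets = np.array(cifar10_train_data.targets)
for class_id, class_name in enumerate(cifar10_classes):
    print('{}: {} training images'.format(class_name, int(np.sum(train_targets == class_id))))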
###Output
_____no_output_____
###Markdown
Furthermore, let's investigate a couple of the training images:
###Code
# set (random) image id
image_id = 1800
# retrieve image exhibiting the image id
cifar10_train_data[image_id]
###Output
_____no_output_____
###Markdown
Ok, that doesn't seem easily interpretable ;) Let's first separate the image from its label information:
###Code
cifar10_train_image, cifar10_train_label = cifar10_train_data[image_id]
###Output
_____no_output_____
###Markdown
Great, now we are able to visually inspect our sample image:
###Code
# define tensor to image transformation
trans = torchvision.transforms.ToPILImage()
# set image plot title
plt.title('Example: {}, Label: "{}"'.format(str(image_id), str(cifar10_classes[cifar10_train_label])))
# un-normalize cifar 10 image sample
cifar10_train_image_plot = cifar10_train_image / 2.0 + 0.5
# plot 10 image sample
plt.imshow(trans(cifar10_train_image_plot))
###Output
_____no_output_____
###Markdown
Fantastic, right? Let's now decide on where we want to store the evaluation data:
###Code
eval_path = './data/eval_cifar10'
###Output
_____no_output_____
###Markdown
And download the evaluation data accordingly:
###Code
# define pytorch transformation into tensor format
transf = torchvision.transforms.Compose([torchvision.transforms.ToTensor(), torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# download and transform validation images
cifar10_eval_data = torchvision.datasets.CIFAR10(root=eval_path, train=False, transform=transf, download=True)
###Output
_____no_output_____
###Markdown
Verify the volume of validation images downloaded:
###Code
# get the length of the evaluation data
len(cifar10_eval_data)
###Output
_____no_output_____
###Markdown
4. Neural Network Implementation In this section, we will implement the architecture of the **neural network** we aim to utilize to learn a model that is capable of classifying the 32x32 pixel CIFAR-10 images according to the objects contained in each image. However, before we start the implementation, let's briefly revisit the process to be established. The following cartoon provides a birds-eye view: Our CNN, which we name 'CIFAR10Net', consists of two **convolutional layers** and three **fully-connected layers**. In general, convolutional layers are specifically designed to learn a set of **high-level features** ("patterns") in the processed images, e.g., tiny edges and shapes. The fully-connected layers utilize the learned features to learn **non-linear feature combinations** that allow for highly accurate classification of the image content into the different image classes of the CIFAR-10 dataset, such as birds, aeroplanes, and horses. Let's implement the network architecture and subsequently have a more in-depth look into its architectural details:
###Code
# implement the CIFAR10Net network architecture
class CIFAR10Net(nn.Module):
# define the class constructor
def __init__(self):
# call super class constructor
super(CIFAR10Net, self).__init__()
# specify convolution layer 1
self.conv1 = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5, stride=1, padding=0)
# define max-pooling layer 1
self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
# specify convolution layer 2
self.conv2 = nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5, stride=1, padding=0)
# define max-pooling layer 2
self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
# specify fc layer 1 - in 16 * 5 * 5, out 120
self.linear1 = nn.Linear(16 * 5 * 5, 120, bias=True) # the linearity W*x+b
self.relu1 = nn.ReLU(inplace=True) # the non-linearity
# specify fc layer 2 - in 120, out 84
self.linear2 = nn.Linear(120, 84, bias=True) # the linearity W*x+b
        self.relu2 = nn.ReLU(inplace=True) # the non-linearity
# specify fc layer 3 - in 84, out 10
self.linear3 = nn.Linear(84, 10) # the linearity W*x+b
# add a softmax to the last layer
self.logsoftmax = nn.LogSoftmax(dim=1) # the softmax
# define network forward pass
def forward(self, images):
# high-level feature learning via convolutional layers
# define conv layer 1 forward pass
x = self.pool1(self.relu1(self.conv1(images)))
# define conv layer 2 forward pass
x = self.pool2(self.relu2(self.conv2(x)))
# feature flattening
# reshape image pixels
x = x.view(-1, 16 * 5 * 5)
# combination of feature learning via non-linear layers
# define fc layer 1 forward pass
x = self.relu1(self.linear1(x))
# define fc layer 2 forward pass
x = self.relu2(self.linear2(x))
# define layer 3 forward pass
x = self.logsoftmax(self.linear3(x))
# return forward pass result
return x
###Output
_____no_output_____
###Markdown
 You may have noticed that we applied two more layers (compared to the MNIST example described in the last lab) before the fully-connected layers. These layers are referred to as **convolutional** layers and are usually comprised of three operations: (1) **convolution**, (2) **non-linearity**, and (3) **max-pooling**. Those operations are usually executed in sequential order during the forward pass through a convolutional layer. In the following, we will have a detailed look into the functionality and number of parameters of each layer. We will start with providing images of 3x32x32 dimensions to the network, i.e., the three channels (red, green, blue) of an image, each of size 32x32 pixels. 4.1. High-Level Feature Learning by Convolutional Layers Let's first have a look into the convolutional layers of the network as illustrated in the following: **First Convolutional Layer**: The first convolutional layer expects three input channels and will convolve six filters each of size 3x5x5. Let's briefly revisit how we can perform a convolution operation on a given image. For that, we need to define a kernel, which is a matrix of size 5x5, for example. To perform the convolution operation, we slide the kernel along the image horizontally and vertically and obtain the dot product of the kernel and the pixel values of the image inside the kernel (the 'receptive field' of the kernel). The following illustration shows an example of a discrete convolution: The left grid is called the input (an image or feature map). The middle grid, referred to as the kernel, slides across the input feature map (or image). At each location, the product between each element of the kernel and the input element it overlaps is computed, and the results are summed up to obtain the output in the current location. In general, a discrete convolution is mathematically expressed by: $y(m, n) = x(m, n) * h(m, n) = \sum^{m}_{i=0} \sum^{n}_{j=0} x(i, j) \, h(m-i, n-j)$, where $x$ denotes the input image or feature map, $h$ the applied kernel, and $y$ the output. When performing the convolution operation, the 'stride' defines the number of pixels the kernel is moved at a time when sliding it over the input, while 'padding' adds pixels around the input image (or feature map), e.g., to ensure that the output has the same shape as the input. Let's have a look at another animated example: (Source: https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53) In our implementation, padding is set to 0 and stride is set to 1. As a result, the output size of the convolutional layer becomes 6x28x28, because (32 - 5) + 1 = 28. This layer exhibits ((5 x 5 x 3) + 1) x 6 = 456 parameters. **First Max-Pooling Layer:** The max-pooling process is a sample-based discretization operation. The objective is to down-sample an input representation (image, hidden-layer output matrix, etc.), reducing its dimensionality and allowing for assumptions to be made about the features contained in the binned sub-regions. To conduct such an operation, we again need to define a kernel. Max-pooling kernels are usually tiny matrices, e.g., of size 2x2. To perform the max-pooling operation, we slide the kernel along the image horizontally and vertically (similarly to a convolution) and compute the maximum pixel value of the image (or feature map) inside the kernel (the receptive field of the kernel). 
 The following illustration shows an example of a max-pooling operation: The left grid is called the input (an image or feature map). The middle grid, referred to as the kernel, slides across the input feature map (or image). We use a stride of 2, meaning the step distance for stepping over our input will be 2 pixels, so the pooled regions won't overlap. At each location, the maximum value of the input region covered by the kernel is computed and written to the output at the current location. In our implementation, we do max-pooling with a 2x2 kernel and stride 2; this effectively drops the original image size from 6x28x28 to 6x14x14. **Second Convolutional Layer:** The second convolutional layer expects 6 input channels and will convolve 16 filters each of size 6x5x5. Since padding is set to 0 and stride is set to 1, the output size is 16x10x10, because (14 - 5) + 1 = 10. This layer therefore has ((5 x 5 x 6) + 1) x 16 = 2,416 parameters. **Second Max-Pooling Layer:** The second down-sampling layer uses max-pooling with a 2x2 kernel and stride set to 2. This effectively drops the size from 16x10x10 to 16x5x5. 4.2. Flattening of Learned Features The output of the final max-pooling layer needs to be flattened so that we can connect it to a fully-connected layer. This is achieved using the `torch.Tensor.view` method. Setting the parameter of the method to `-1` will automatically infer the number of rows required to handle the mini-batch size of the data. 4.3. Learning of Feature Classification Let's now have a look into the non-linear layers of the network illustrated in the following: The first fully-connected layer uses 'Rectified Linear Unit' (ReLU) activation functions to learn potential non-linear combinations of features. The layers are implemented similarly to the fifth lab. Therefore, we will only focus on the number of parameters of each fully-connected layer: **First Fully-Connected Layer:** The first fully-connected layer consists of 120 neurons and thus in total exhibits ((16 x 5 x 5) + 1) x 120 = 48,120 parameters. **Second Fully-Connected Layer:** The output of the first fully-connected layer is then transferred to the second fully-connected layer. The layer consists of 84 neurons equipped with ReLU activation functions and thus in total exhibits (120 + 1) x 84 = 10,164 parameters. The output of the second fully-connected layer is then transferred to the output layer (third fully-connected layer). The output layer is equipped with a softmax (that you learned about in the previous lab 05) and is made up of ten neurons, one for each object class contained in the CIFAR-10 dataset. This layer exhibits (84 + 1) x 10 = 850 parameters. As a result, our CIFAR-10 convolutional neural network exhibits a total of 456 + 2,416 + 48,120 + 10,164 + 850 = 62,006 parameters. (Source: https://www.stefanfiott.com/machine-learning/cifar-10-classifier-using-cnn-in-pytorch/) Now that we have implemented our first Convolutional Neural Network, we are ready to instantiate a network model to be trained:
###Code
model = CIFAR10Net()
###Output
_____no_output_____
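###Markdown
 As a quick sanity check of the feature-map sizes derived above (32x32 to 28x28 after the first convolution, 14x14 after pooling, then 10x10 and finally 5x5), we can push a single random 3x32x32 tensor through the convolutional part of the freshly instantiated model. Note that this cell is an added illustration and not part of the original notebook; the tensor names are arbitrary.
###Code
# pass a dummy CIFAR-10 sized input through the convolutional layers only
dummy_input = torch.randn(1, 3, 32, 32)
x = model.pool1(model.conv1(dummy_input))
print(x.shape)  # expected: torch.Size([1, 6, 14, 14])
x = model.pool2(model.conv2(x))
print(x.shape)  # expected: torch.Size([1, 16, 5, 5])
###Output
_____no_output_____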
###Markdown
Once the model is initialized we can visualize the model structure and review the implemented network architecture by execution of the following cell:
###Code
# print the initialized architectures
print('[LOG] CIFAR10Net architecture:\n\n{}\n'.format(model))
###Output
_____no_output_____
###Markdown
 Looks as intended? Brilliant! Finally, let's have a look at the number of model parameters that we aim to train in the next steps of the notebook:
###Code
# init the number of model parameters
num_params = 0
# iterate over the distinct parameters
for param in model.parameters():
# collect number of parameters
num_params += param.numel()
# print the number of model parameters
print('[LOG] Number of to be trained CIFAR10Net model parameters: {}.'.format(num_params))
###Output
_____no_output_____
###Markdown
 Ok, our "simple" CIFAR10Net model already encompasses an impressive 62,006 model parameters to be trained. Now that we have implemented the CIFAR10Net, we are ready to train the network. However, before starting the training, we need to define an appropriate loss function. Remember, we aim to train our model to learn a set of model parameters $\theta$ that minimize the classification error between the true class $c^{i}$ of a given CIFAR-10 image $x^{i}$ and its predicted class $\hat{c}^{i} = f_\theta(x^{i})$ as faithfully as possible. In this lab we use (similarly to lab 05) the **'Negative Log-Likelihood (NLL)'** loss. During training, the NLL loss will penalize models that result in a high classification error between the predicted class labels $\hat{c}^{i}$ and their respective true class labels $c^{i}$. Let's instantiate the NLL loss via the execution of the following PyTorch command:
###Code
# define the optimization criterion / loss function
nll_loss = nn.NLLLoss()
###Output
_____no_output_____
###Markdown
 Based on the loss magnitude of a certain mini-batch, PyTorch automatically computes the gradients. But even better, based on the gradients, the library also helps us in the optimization and update of the network parameters $\theta$. We will use **Stochastic Gradient Descent (SGD) optimization** and set the learning rate to `0.001`. At each mini-batch step, the optimizer will update the model parameter values $\theta$ according to the degree of classification error (the NLL loss).
###Code
# define learning rate and optimization strategy
learning_rate = 0.001
optimizer = optim.SGD(params=model.parameters(), lr=learning_rate)
###Output
_____no_output_____
###Markdown
 Now that we have successfully implemented and defined the three CNN building blocks, let's take some time to review the `CIFAR10Net` model definition as well as the `loss`. Please read the above code and comments carefully and don't hesitate to let us know any questions you might have. 5. Neural Network Model Training In this section, we will train our neural network model (as implemented in the section above) using the transformed images. More specifically, we will have a detailed look into the distinct training steps as well as how to monitor the training progress. 5.1. Preparing the Network Training So far, we have pre-processed the dataset, implemented the CNN and defined the classification error. Let's now start to train a corresponding model for **20 epochs** with a **mini-batch size of 128** CIFAR-10 images per batch. This implies that the whole dataset will be fed to the CNN 20 times in chunks of 128 images, yielding **391 mini-batches** (50,000 training images / 128 images per mini-batch) per epoch. After the processing of each mini-batch, the parameters of the network will be updated.
###Code
# specify the training parameters
num_epochs = 20 # number of training epochs
mini_batch_size = 128 # size of the mini-batches
###Output
_____no_output_____
###Markdown
 Furthermore, let's specify and instantiate a corresponding PyTorch data loader that feeds the image tensors to our neural network:
###Code
cifar10_train_dataloader = torch.utils.data.DataLoader(cifar10_train_data, batch_size=mini_batch_size, shuffle=True)
###Output
_____no_output_____
###Markdown
 5.2. Running the Network Training Finally, we start training the model. The training procedure for each mini-batch is performed as follows: >1. do a forward pass through the CIFAR10Net network, >2. compute the negative log-likelihood classification error $\mathcal{L}^{NLL}_{\theta}(c^{i};\hat{c}^{i})$, >3. do a backward pass through the CIFAR10Net network, and >4. update the parameters of the network $f_\theta(\cdot)$. To ensure learning while training our CNN model, we will monitor whether the loss decreases with progressing training. Therefore, we obtain and evaluate the classification performance on the entire training dataset after each training epoch. Based on this evaluation, we can assess the training progress and whether the loss is converging (indicating that the model might not improve any further). The following elements of the network training code below should be given particular attention: >- `loss.backward()` computes the gradients based on the magnitude of the classification loss, >- `optimizer.step()` updates the network parameters based on the gradients.
###Code
# init collection of training epoch losses
train_epoch_losses = []
# set the model in training mode
model.train()
# train the CIFAR10 model
for epoch in range(num_epochs):
# init collection of mini-batch losses
train_mini_batch_losses = []
# iterate over all-mini batches
for i, (images, labels) in enumerate(cifar10_train_dataloader):
# run forward pass through the network
output = model(images)
# reset graph gradients
model.zero_grad()
# determine classification loss
loss = nll_loss(output, labels)
# run backward pass
loss.backward()
        # update network parameters
optimizer.step()
        # collect mini-batch classification loss
train_mini_batch_losses.append(loss.data.item())
    # determine mean mini-batch loss of epoch
train_epoch_loss = np.mean(train_mini_batch_losses)
# print epoch loss
now = datetime.utcnow().strftime("%Y%m%d-%H:%M:%S")
print('[LOG {}] epoch: {} train-loss: {}'.format(str(now), str(epoch), str(train_epoch_loss)))
# save model to local directory
model_name = 'cifar10_model_epoch_{}.pth'.format(str(epoch))
torch.save(model.state_dict(), os.path.join("./models", model_name))
    # collect mean mini-batch loss of epoch
train_epoch_losses.append(train_epoch_loss)
###Output
_____no_output_____
###Markdown
 Upon successful training, let's visualize and inspect the training loss per epoch:
###Code
# prepare plot
fig = plt.figure()
ax = fig.add_subplot(111)
# add grid
ax.grid(linestyle='dotted')
# plot the training epochs vs. the epochs' classification error
ax.plot(np.array(range(1, len(train_epoch_losses)+1)), train_epoch_losses, label='epoch loss (blue)')
# add axis legends
ax.set_xlabel("[training epoch $e_i$]", fontsize=10)
ax.set_ylabel("[Classification Error $\mathcal{L}^{NLL}$]", fontsize=10)
# set plot legend
plt.legend(loc="upper right", numpoints=1, fancybox=True)
# add plot title
plt.title('Training Epochs $e_i$ vs. Classification Error $L^{NLL}$', fontsize=10);
###Output
_____no_output_____
###Markdown
 Ok, fantastic. The training error decreases nicely. We could certainly train the network for a couple more epochs until the error fully converges. But let's stay with the 20 training epochs for now and continue with evaluating our trained model. 6. Neural Network Model Evaluation Prior to evaluating our model, let's load the best performing model. Remember that we stored a snapshot of the model after each training epoch in our local model directory. We will now load the last snapshot saved.
###Code
# restore pre-trained model snapshot
best_model_name = './models/cifar10_model_epoch_19.pth'
# load state_dict from path (map to CPU in case no GPU is available)
state_dict_best = torch.load(best_model_name, map_location=torch.device('cpu'))
# init pre-trained model class
best_model = CIFAR10Net()
# load pre-trained model weights
best_model.load_state_dict(state_dict_best)
###Output
_____no_output_____
###Markdown
Let's inspect if the model was loaded successfully:
###Code
# set model in evaluation mode
best_model.eval()
###Output
_____no_output_____
###Markdown
In order to evaluate our trained model, we need to feed the CIFAR10 images reserved for evaluation (the images that we didn't use as part of the training process) through the model. Therefore, let's again define a corresponding PyTorch data loader that feeds the image tensors to our neural network:
###Code
cifar10_eval_dataloader = torch.utils.data.DataLoader(cifar10_eval_data, batch_size=10000, shuffle=False)
###Output
_____no_output_____
###Markdown
We will now evaluate the trained model using the same mini-batch approach as we did when training the network and derive the mean negative log-likelihood loss of all mini-batches processed in an epoch:
###Code
# init collection of mini-batch losses
eval_mini_batch_losses = []
# iterate over all-mini batches
for i, (images, labels) in enumerate(cifar10_eval_dataloader):
# run forward pass through the network
output = best_model(images)
# determine classification loss
loss = nll_loss(output, labels)
    # collect mini-batch classification loss
eval_mini_batch_losses.append(loss.data.item())
# determine mean mini-batch loss over the evaluation data
eval_loss = np.mean(eval_mini_batch_losses)
# print epoch loss
now = datetime.utcnow().strftime("%Y%m%d-%H:%M:%S")
print('[LOG {}] eval-loss: {}'.format(str(now), str(eval_loss)))
###Output
_____no_output_____
###Markdown
 Ok, great. The evaluation loss looks in line with our training loss. Let's now inspect a few sample predictions to get an impression of the model quality. Therefore, we will again pick a random image of our evaluation dataset and retrieve its PyTorch tensor as well as the corresponding label:
###Code
# set (random) image id
image_id = 777
# retrieve image exhibiting the image id
cifar10_eval_image, cifar10_eval_label = cifar10_eval_data[image_id]
###Output
_____no_output_____
###Markdown
Let's now inspect the true class of the image we selected:
###Code
cifar10_classes[cifar10_eval_label]
###Output
_____no_output_____
###Markdown
 Ok, the true class label of the randomly selected image is shown above. Let's inspect the image accordingly:
###Code
# define tensor to image transformation
trans = torchvision.transforms.ToPILImage()
# set image plot title
plt.title('Example: {}, Label: {}'.format(str(image_id), str(cifar10_classes[cifar10_eval_label])))
# un-normalize cifar 10 image sample
cifar10_eval_image_plot = cifar10_eval_image / 2.0 + 0.5
# plot cifar 10 image sample
plt.imshow(trans(cifar10_eval_image_plot))
###Output
_____no_output_____
###Markdown
Ok, let's compare the true label with the prediction of our model:
###Code
cifar10_eval_image.unsqueeze(0).shape
best_model(cifar10_eval_image.unsqueeze(0))
###Output
_____no_output_____
###Markdown
 We can also determine the most probable class predicted by the model:
###Code
cifar10_classes[torch.argmax(best_model(Variable(cifar10_eval_image.unsqueeze(0))), dim=1).item()]
###Output
_____no_output_____
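###Markdown
 Since the network ends in a `LogSoftmax` layer, exponentiating its output yields the class probabilities, so we can also read off how confident the model is in its most probable class. This cell is an added illustration and not part of the original notebook.
###Code
# convert the log-probabilities into probabilities and inspect the highest class probability
torch.exp(best_model(cifar10_eval_image.unsqueeze(0))).max().item()
###Output
_____no_output_____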
###Markdown
Let's now obtain the predictions for all the CIFAR-10 images of the evaluation data:
###Code
predictions = torch.argmax(best_model(next(iter(cifar10_eval_dataloader))[0]), dim=1)
###Output
_____no_output_____
###Markdown
Furthermore, let's obtain the overall classification accuracy:
###Code
metrics.accuracy_score(cifar10_eval_data.targets, predictions.detach())
###Output
_____no_output_____
###Markdown
Let's also inspect the confusion matrix of the model predictions to determine major sources of misclassification:
###Code
# determine the confusion matrix of the predicted and target classes
mat = confusion_matrix(cifar10_eval_data.targets, predictions.detach())
# initialize the plot and define size
plt.figure(figsize=(8, 8))
# plot corresponding confusion matrix
sns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False, cmap='YlOrRd_r', xticklabels=cifar10_classes, yticklabels=cifar10_classes)
plt.tick_params(axis='both', which='major', labelsize=8, labelbottom = False, bottom=False, top = False, left = False, labeltop=True)
# set plot title
plt.title('CIFAR-10 classification matrix')
# set plot axis lables
plt.xlabel('[true label]')
plt.ylabel('[predicted label]');
###Output
_____no_output_____ |
examples/core_functionality/Summary.ipynb | ###Markdown
 Kiri Core Example: Text Summary Text summary takes a chunk of text and extracts the key information.
###Code
# If you've got one, change it here.
api_key = None
from kiri import Kiri
if api_key:
kiri = Kiri(api_key=api_key)
else:
kiri = Kiri(local=True)
# Change this up.
input_text = """
Britain began its third COVID-19 lockdown on Tuesday with the government calling for one last major national effort to defeat the spread of a virus that has infected an estimated one in 50 citizens before mass vaccinations turn the tide.
Finance minister Rishi Sunak announced a new package of business grants worth 4.6 billion pounds ($6.2 billion) to help keep people in jobs and firms afloat until measures are relaxed gradually, at the earliest from mid-February but likely later.
Britain has been among the countries worst-hit by COVID-19, with the second highest death toll in Europe and an economy that suffered the sharpest contraction of any in the Group of Seven during the first wave of infections last spring.
Prime Minister Boris Johnson said the latest data showed 2% of the population were currently infected - more than a million people in England.
“When everybody looks at the position, people understand overwhelmingly that we have no choice,” he told a news conference.
More than 1.3 million people in Britain have already received their first dose of a COVID-19 vaccination, but this is not enough to have an impact on transmission yet.
Johnson announced the new lockdown late on Monday, saying the highly contagious new coronavirus variant first identified in Britain was spreading so fast the National Health Service risked being overwhelmed within 21 days.
In England alone, some 27,000 people are in hospital with COVID, 40% more than during the first peak in April, with infection numbers expected to rise further after increased socialising during the Christmas period.
Since the start of the pandemic, more than 75,000 people have died in the United Kingdom within 28 days of testing positive for coronavirus, according to official figures. The number of daily new infections passed 60,000 for the first time on Tuesday.
A Savanta-ComRes poll taken just after Johnson’s address suggested four in five adults in England supported the lockdown.
“I definitely think it was the right decision to make,” said Londoner Kaitlin Colucci, 28. “I just hope that everyone doesn’t struggle too much with having to be indoors again.”
Downing Street said Johnson had cancelled a visit to India later this month to focus on the response to the virus, and Buckingham Palace called off its traditional summer garden parties this year.
Under the new rules in England, schools are closed to most pupils, people should work from home if possible, and all hospitality and non-essential shops are closed. Semi-autonomous executives in Scotland, Wales and Northern Ireland have imposed similar measures.
As infection rates soar across Europe, other countries are also clamping down on public life. Germany is set to extend its strict lockdown until the end of the month, and Italy will keep nationwide restrictions in place this weekend while relaxing curbs on weekdays.
Sunak’s latest package of grants adds to the eye-watering 280 billion pounds in UK government support already announced for this financial year to stave off total economic collapse.
The new lockdown is likely to cause the economy to shrink again, though not as much as during the first lockdown last spring. JP Morgan economist Allan Monks said he expected the economy to shrink by 2.5% in the first quarter of 2021 -- compared with almost 20% in the second quarter of 2020.
To end the cycle of lockdowns, the government is pinning its hopes on vaccines. It aims to vaccinate all elderly care home residents and their carers, everyone over the age of 70, all frontline health and social care workers, and everyone who is clinically extremely vulnerable by mid-February.
"""
summary = kiri.summarise(input_text)
print(summary)
###Output
Britain begins its third COVID-19 lockdown. Finance minister Rishi Sunak announces a package of business grants. The government is pinning its hopes on vaccines.
|
notebooks/pascal-exploration-based-upon-fast-ai-course.ipynb | ###Markdown
Helper functions for setting up `pandas.DataFrame` fed to the torch `Dataset`
###Code
def get_filenames(data):
filenames = {o[ID]:o[FILE_NAME] for o in data[IMAGES]}
    print('get_filenames')
print('length:', len(filenames), 'next item:', next(iter(filenames.items())))
return filenames
def get_image_ids(data):
image_ids = [o[ID] for o in data[IMAGES]]
print('get_image_ids')
print('length:', len(image_ids), 'next item:', image_ids[0])
return image_ids
def pascal_bb_hw(bb):
return bb[2:]
bbox = train_data[ANNOTATIONS][0][BBOX]
pascal_bb_hw(bbox)
def get_image_w_area(data, image_ids):
image_w_area = {i:None for i in image_ids}
image_w_area = copy.deepcopy(image_w_area)
for x in data[ANNOTATIONS]:
bbox = x[BBOX]
new_category_id = x[CATEGORY_ID]
image_id = x[IMAGE_ID]
h, w = pascal_bb_hw(bbox)
new_area = h*w
cat_id_area = image_w_area[image_id]
if not cat_id_area:
image_w_area[image_id] = (new_category_id, new_area)
else:
category_id, area = cat_id_area
if new_area > area:
image_w_area[image_id] = (new_category_id, new_area)
print('get_image_w_area')
print('length:', len(image_w_area), 'next item:', next(iter(image_w_area.items())))
return image_w_area
###Output
_____no_output_____
###Markdown
train data structs
###Code
train_filenames = get_filenames(train_data)
train_image_ids = get_image_ids(train_data)
train_image_w_area = get_image_w_area(train_data, train_image_ids)
###Output
_____no_output_____
###Markdown
val data structs
###Code
val_filenames = get_filenames(val_data)
val_image_ids = get_image_ids(val_data)
val_image_w_area = get_image_w_area(val_data, val_image_ids)
###Output
_____no_output_____
###Markdown
test data structs
###Code
test_filenames = get_filenames(test_data)
test_image_ids = get_image_ids(test_data)
test_image_w_area = get_image_w_area(test_data, test_image_ids)
###Output
_____no_output_____
###Markdown
train data structs (Legacy)
###Code
train_filenames = {o[ID]:o[FILE_NAME] for o in train_data[IMAGES]}
print('length:', len(train_filenames))
image1_id, image1_fn = next(iter(train_filenames.items()))
image1_id, image1_fn
train_image_ids = [o[ID] for o in train_data[IMAGES]]
print('length:', len(train_image_ids))
train_image_ids[:BATCH_SIZE]
IMAGE_PATH
image1_path = IMAGE_PATH/image1_fn
image1_path
str(image1_path)
im = open_image(str(IMAGE_PATH/image1_fn))
print(type(im))
im.shape
len(train_data[ANNOTATIONS])
# get the biggest object label per image
train_data[ANNOTATIONS][0]
bbox = train_data[ANNOTATIONS][0][BBOX]
bbox
def fastai_bb(bb):
return np.array([bb[1], bb[0], bb[3]+bb[1]-1, bb[2]+bb[0]-1])
print(bbox)
print(fastai_bb(bbox))
fbb = fastai_bb(bbox)
fbb
def fastai_bb_hw(bb):
h= bb[3]-bb[1]+1
w = bb[2]-bb[0]+1
return [h,w]
fastai_bb_hw(fbb)
def pascal_bb_hw(bb):
return bb[2:]
bbox = train_data[ANNOTATIONS][0][BBOX]
pascal_bb_hw(bbox)
###Output
_____no_output_____
###Markdown
show image training example
###Code
def show_img(im, figsize=None, ax=None):
if not ax:
fig,ax = plt.subplots(figsize=figsize)
ax.imshow(im)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
return ax
def draw_rect(ax, b):
patch = ax.add_patch(patches.Rectangle(b[:2], *b[-2:], fill=False, edgecolor='white', lw=2))
draw_outline(patch, 4)
def draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(
linewidth=lw, foreground='black'), patheffects.Normal()])
def draw_text(ax, xy, txt, sz=14):
text = ax.text(*xy, txt,
verticalalignment='top', color='white', fontsize=sz, weight='bold')
draw_outline(text, 1)
ax = show_img(im)
image1_ann = train_data[ANNOTATIONS][0]
b = image1_ann[BBOX]
print(b)
draw_rect(ax, b)
draw_text(ax, b[:2], categories[image1_ann[CATEGORY_ID]])
###Output
_____no_output_____
###Markdown
Pandas DataFrames
###Code
# TRAIN - create a Pandas dataframe for: image_id, filename, category
train_df = pd.DataFrame({
IMAGE_ID: image_id,
IMAGE: str(IMAGE_PATH/image_fn),
CATEGORY: train_image_w_area[image_id][0]
} for image_id, image_fn in train_filenames.items())
print('count:', len(train_df))
print(train_df.iloc[0])
train_df.head()
# VAL - create a Pandas dataframe for: image_id, filename, category
val_df = pd.DataFrame({
IMAGE_ID: image_id,
IMAGE: str(IMAGE_PATH/image_fn),
CATEGORY: val_image_w_area[image_id][0]
} for image_id, image_fn in val_filenames.items())
print('count:', len(val_df))
print(val_df.iloc[0])
val_df.head()
# NOTE: won't work in Kaggle Kernal b/c read-only file system
# BIGGEST_OBJECT_CSV = '../input/pascal/pascal/tmp/biggest-object.csv'
# train_df.to_csv(BIGGEST_OBJECT_CSV, index=False)
###Output
_____no_output_____
###Markdown
subclass Dataset
###Code
class BiggestObjectDataset(Dataset):
def __init__(self, df):
self.df = df
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
im = open_image(self.df.iloc[idx][IMAGE]) # HW
resized_image = cv2.resize(im, (SIZE, SIZE)) # HW
image = np.transpose(resized_image, (2, 0, 1)) # CHW
category = self.df.iloc[idx][CATEGORY]
return image, category
dataset = BiggestObjectDataset(train_df)
inputs, label = dataset[0]
print('label:', label, 'shape:', inputs.shape)
###Output
_____no_output_____
###Markdown
DataLoader
###Code
BATCH_SIZE = 64
NUM_WORKERS = 0
dataloader = DataLoader(dataset, batch_size=BATCH_SIZE,
shuffle=True, num_workers=NUM_WORKERS)
batch_inputs, batch_labels = next(iter(dataloader))
batch_inputs.size()
batch_labels
val_dataset = BiggestObjectDataset(val_df)
val_dataloader = DataLoader(val_dataset, batch_size=BATCH_SIZE,
shuffle=True, num_workers=NUM_WORKERS)
dataloaders = {
'train': dataloader,
'val': val_dataloader
}
dataset_sizes = {
'train': len(dataset),
'val': len(val_dataset)
}
dataset_sizes
# train the model
NUM_CATEGORIES = len(categories)
NUM_CATEGORIES
model_ft = models.resnet18(pretrained=True)
for layer in model_ft.parameters():
layer.requires_grad = False
num_ftrs = model_ft.fc.in_features
print(num_ftrs, NUM_CATEGORIES)
model_ft.fc = nn.Linear(num_ftrs, NUM_CATEGORIES)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer = optim.SGD(model_ft.parameters(), lr=0.01, momentum=0.9)
# epoch - w/ train
epoch_losses = []
epoch_accuracies = []
for epoch in tqdm(range(EPOCHS)):
print('epoch:', epoch)
running_loss = 0.0
running_correct = 0
for inputs, labels in dataloader:
inputs = inputs.to(device)
labels = labels.to(device)
# clear gradients
optimizer.zero_grad()
# forward pass
outputs = model_ft(inputs)
_, preds = torch.max(outputs, dim=1)
labels_0_indexed = labels - 1
loss = criterion(outputs, labels_0_indexed)
# backwards pass
loss.backward()
optimizer.step()
running_loss += loss.item() * inputs.size(0)
running_correct += torch.sum(preds == labels_0_indexed)
epoch_loss = running_loss / len(dataset)
epoch_acc = running_correct.double().item() / len(dataset)
epoch_losses.append(epoch_loss)
epoch_accuracies.append(epoch_acc)
print('loss:', epoch_loss, 'acc:', epoch_acc)
# epoch - w/ train and val
epoch_loss = {'train': np.inf, 'val': np.inf}
epoch_acc = {'train': 0, 'val': 0}
epoch_losses = {'train': [], 'val': []}
epoch_accuracies = {'train': [], 'val': []}
for epoch in tqdm(range(EPOCHS)):
print('epoch:', epoch)
for phase in ['train', 'val']:
if phase == 'train':
model_ft.train()
else:
model_ft.eval()
running_loss = 0.0
running_correct = 0
        for inputs, labels in dataloaders[phase]:
inputs = inputs.to(device)
labels = labels.to(device)
# clear gradients
optimizer.zero_grad()
with torch.set_grad_enabled(phase == 'train'):
# forward pass
outputs = model_ft(inputs)
_, preds = torch.max(outputs, dim=1)
labels_0_indexed = labels - 1
loss = criterion(outputs, labels_0_indexed)
# backwards pass
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_correct += torch.sum(preds == labels_0_indexed)
        epoch_acc[phase] = running_correct.double().item() / dataset_sizes[phase]
        epoch_loss[phase] = running_loss / dataset_sizes[phase]
# running sums
epoch_losses[phase].append(epoch_loss[phase])
epoch_accuracies[phase].append(epoch_acc[phase])
print('phase', phase, 'train loss:', epoch_loss['train'], 'train acc:', epoch_acc['train'], 'val loss:', epoch_loss['val'], 'val acc:', epoch_acc['val'])
###Output
_____no_output_____
###Markdown
Graph loss and accuracy
###Code
epoch_losses
epoch_accuracies
###Output
_____no_output_____
###Markdown
 plot loss and accuracy curves
###Code
plt.plot(epoch_losses['train'])
plt.plot(epoch_losses['val'])
plt.plot(epoch_accuracies['train'])
plt.plot(epoch_accuracies['val'])
###Output
_____no_output_____
###Markdown
show predictions
###Code
preds_count = len(preds)
fig, axes = plt.subplots(1, preds_count, figsize=(16, 16))
for i, ax in enumerate(axes.flat):
im = np.transpose(inputs[i], (1, 2, 0))
ax = show_img(im, ax=ax)
draw_text(ax, (0,0), categories[preds[i].item()+1])
###Output
_____no_output_____ |
tensorflow_privacy/privacy/privacy_tests/membership_inference_attack/codelabs/codelab.ipynb | ###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Assess privacy risks with TensorFlow Privacy Membership Inference Attacks Run in Google Colab View source on GitHub OverviewIn this codelab we'll train a simple image classification model on the CIFAR10 dataset, and then use the "membership inference attack" against this model to assess if the attacker is able to "guess" whether a particular sample was present in the training set. SetupFirst, set this notebook's runtime to use a GPU, under Runtime > Change runtime type > Hardware accelerator. Then, begin importing the necessary libraries.
###Code
#@title Import statements.
import numpy as np
from typing import Tuple, Text
from scipy import special
import tensorflow as tf
import tensorflow_datasets as tfds
# Set verbosity.
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
from warnings import simplefilter
from sklearn.exceptions import ConvergenceWarning
simplefilter(action="ignore", category=ConvergenceWarning)
simplefilter(action="ignore", category=FutureWarning)
###Output
_____no_output_____
###Markdown
Install TensorFlow Privacy.
###Code
!pip3 install git+https://github.com/tensorflow/privacy
from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack import membership_inference_attack as mia
###Output
_____no_output_____
###Markdown
Train a model
###Code
#@markdown Train a simple model on CIFAR10 with Keras.
dataset = 'cifar10'
num_classes = 10
num_conv = 3
activation = 'relu'
lr = 0.02
momentum = 0.9
batch_size = 250
epochs = 100 # Privacy risks are especially visible with lots of epochs.
def small_cnn(input_shape: Tuple[int],
num_classes: int,
num_conv: int,
activation: Text = 'relu') -> tf.keras.models.Sequential:
"""Setup a small CNN for image classification.
Args:
input_shape: Integer tuple for the shape of the images.
num_classes: Number of prediction classes.
num_conv: Number of convolutional layers.
activation: The activation function to use for conv and dense layers.
Returns:
The Keras model.
"""
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Input(shape=input_shape))
# Conv layers
for _ in range(num_conv):
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation=activation))
model.add(tf.keras.layers.MaxPooling2D())
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(64, activation=activation))
model.add(tf.keras.layers.Dense(num_classes))
return model
print('Loading the dataset.')
train_ds = tfds.as_numpy(
tfds.load(dataset, split=tfds.Split.TRAIN, batch_size=-1))
test_ds = tfds.as_numpy(
tfds.load(dataset, split=tfds.Split.TEST, batch_size=-1))
x_train = train_ds['image'].astype('float32') / 255.
y_train_indices = train_ds['label'][:, np.newaxis]
x_test = test_ds['image'].astype('float32') / 255.
y_test_indices = test_ds['label'][:, np.newaxis]
# Convert class vectors to binary class matrices.
y_train = tf.keras.utils.to_categorical(y_train_indices, num_classes)
y_test = tf.keras.utils.to_categorical(y_test_indices, num_classes)
input_shape = x_train.shape[1:]
model = small_cnn(
input_shape, num_classes, num_conv=num_conv, activation=activation)
print('learning rate %f', lr)
optimizer = tf.keras.optimizers.SGD(lr=lr, momentum=momentum)
loss = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
model.compile(loss=loss, optimizer=optimizer, metrics=['accuracy'])
model.summary()
model.fit(
x_train,
y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
print('Finished training.')
###Output
_____no_output_____
###Markdown
Calculate logits, probabilities and loss values for training and test sets.We will use these values later in the membership inference attack to separate training and test samples.
###Code
print('Predict on train...')
logits_train = model.predict(x_train, batch_size=batch_size)
print('Predict on test...')
logits_test = model.predict(x_test, batch_size=batch_size)
print('Apply softmax to get probabilities from logits...')
prob_train = special.softmax(logits_train, axis=1)
prob_test = special.softmax(logits_test, axis=1)
print('Compute losses...')
cce = tf.keras.backend.categorical_crossentropy
constant = tf.keras.backend.constant
loss_train = cce(constant(y_train), constant(prob_train), from_logits=False).numpy()
loss_test = cce(constant(y_test), constant(prob_test), from_logits=False).numpy()
###Output
_____no_output_____
###Markdown
Run membership inference attacks.We will now execute a membership inference attack against the previously trained CIFAR10 model. This will generate a number of scores, most notably, attacker advantage and AUC for the membership inference classifier.An AUC of close to 0.5 means that the attack wasn't able to identify training samples, which means that the model doesn't have privacy issues according to this test. Higher values, on the contrary, indicate potential privacy issues.
###Code
from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack.data_structures import AttackInputData
from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack.data_structures import SlicingSpec
from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack.data_structures import AttackType
import tensorflow_privacy.privacy.privacy_tests.membership_inference_attack.plotting as plotting
labels_train = np.argmax(y_train, axis=1)
labels_test = np.argmax(y_test, axis=1)
input = AttackInputData(
logits_train = logits_train,
logits_test = logits_test,
loss_train = loss_train,
loss_test = loss_test,
labels_train = labels_train,
labels_test = labels_test
)
# Run several attacks for different data slices
attacks_result = mia.run_attacks(input,
SlicingSpec(
entire_dataset = True,
by_class = True,
by_classification_correctness = True
),
attack_types = [
AttackType.THRESHOLD_ATTACK,
AttackType.LOGISTIC_REGRESSION])
# Plot the ROC curve of the best classifier
fig = plotting.plot_roc_curve(
attacks_result.get_result_with_max_auc().roc_curve)
# Print a user-friendly summary of the attacks
print(attacks_result.summary(by_slices = True))
###Output
_____no_output_____ |
Training the CNN + RNN Model.ipynb | ###Markdown
 Step 1: Training Setup Task 1: Setting the following variables
###Code
import torch
import torch.nn as nn
from torchvision import transforms
import sys
sys.path.append('/opt/cocoapi/PythonAPI')
from pycocotools.coco import COCO
from data_loader import get_loader
from model import EncoderCNN, DecoderRNN
import math
import nltk
nltk.download('punkt')
batch_size = 128 # batch size
vocab_threshold = 7 # minimum word count threshold
vocab_from_file = True # if True, load existing vocab file
embed_size = 256 # dimensionality of image and word embeddings
hidden_size = 512 # number of features in hidden state of the RNN decoder
num_epochs = 3 # number of training epochs
save_every = 1 # determines frequency of saving model weights
print_every = 100 # determines window for printing average loss
log_file = 'training_log.txt' # name of file with saved training loss and perplexity
# writing image transformations
transform_train = transforms.Compose([
transforms.Resize(256), # smaller edge of image resized to 256
transforms.RandomCrop(224), # get 224x224 crop from random location
transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
# Build data loader.
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_threshold=vocab_threshold,
vocab_from_file=vocab_from_file)
# The size of the vocabulary.
vocab_size = len(data_loader.dataset.vocab)
# Initialize the encoder and decoder.
encoder = EncoderCNN(embed_size)
decoder = DecoderRNN(embed_size, hidden_size, vocab_size)
# Move models to GPU if CUDA is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
encoder.to(device)
decoder.to(device)
# Define the loss function.
criterion = nn.CrossEntropyLoss().cuda() if torch.cuda.is_available() else nn.CrossEntropyLoss()
# Specify the learnable parameters of the model.
params = list(decoder.parameters()) + list(encoder.embed.parameters())
# Define the optimizer.
optimizer = torch.optim.Adam(params, lr=0.001)
# Set the total number of training steps per epoch.
total_step = math.ceil(len(data_loader.dataset.caption_lengths) / data_loader.batch_sampler.batch_size)
###Output
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Package punkt is already up-to-date!
Vocabulary successfully loaded from vocab.pkl file!
loading annotations into memory...
Done (t=1.04s)
creating index...
###Markdown
Step 2: Train your Model
###Code
from workspace_utils import active_session
import torch.utils.data as data
import numpy as np
import os
import requests
import time
# Open the training log file.
f = open(log_file, 'w')
with active_session():
for epoch in range(1, num_epochs+1):
for i_step in range(1, total_step+1):
# Randomly sample a caption length, and sample indices with that length.
indices = data_loader.dataset.get_train_indices()
# Create and assign a batch sampler to retrieve a batch with the sampled indices.
new_sampler = data.sampler.SubsetRandomSampler(indices=indices)
data_loader.batch_sampler.sampler = new_sampler
# Obtain the batch.
images, captions = next(iter(data_loader))
# Move batch of images and captions to GPU if CUDA is available.
images = images.to(device)
captions = captions.to(device)
# Zero the gradients.
decoder.zero_grad()
encoder.zero_grad()
# Pass the inputs through the CNN-RNN model.
features = encoder(images)
outputs = decoder(features, captions)
# Calculate the batch loss.
loss = criterion(outputs.view(-1, vocab_size), captions.view(-1))
# Backward pass.
loss.backward()
# Update the parameters in the optimizer.
optimizer.step()
# Get training statistics.
stats = 'Epoch [%d/%d], Step [%d/%d], Loss: %.4f, Perplexity: %5.4f' % (epoch, num_epochs, i_step, total_step, loss.item(), np.exp(loss.item()))
# Print training statistics (on same line).
print('\r' + stats, end="")
sys.stdout.flush()
# Print training statistics to file.
f.write(stats + '\n')
f.flush()
# Print training statistics (on different line).
if i_step % print_every == 0:
print('\r' + stats)
# Save the weights.
if epoch % save_every == 0:
torch.save(decoder.state_dict(), os.path.join('./models', 'decoder-%d.pkl' % epoch))
torch.save(encoder.state_dict(), os.path.join('./models', 'encoder-%d.pkl' % epoch))
# Close the training log file.
f.close()
###Output
Epoch [1/3], Step [100/3236], Loss: 2.0393, Perplexity: 7.6855
Epoch [1/3], Step [200/3236], Loss: 2.0037, Perplexity: 7.41624
Epoch [1/3], Step [300/3236], Loss: 2.0837, Perplexity: 8.03388
Epoch [1/3], Step [400/3236], Loss: 2.2009, Perplexity: 9.03354
Epoch [1/3], Step [500/3236], Loss: 2.1110, Perplexity: 8.25680
Epoch [1/3], Step [600/3236], Loss: 2.1678, Perplexity: 8.73933
Epoch [1/3], Step [700/3236], Loss: 2.0030, Perplexity: 7.41118
Epoch [1/3], Step [800/3236], Loss: 2.1789, Perplexity: 8.83651
Epoch [1/3], Step [900/3236], Loss: 2.3058, Perplexity: 10.0317
Epoch [1/3], Step [1000/3236], Loss: 1.9142, Perplexity: 6.7816
Epoch [1/3], Step [1100/3236], Loss: 1.9716, Perplexity: 7.18253
Epoch [1/3], Step [1200/3236], Loss: 2.0434, Perplexity: 7.71652
Epoch [1/3], Step [1300/3236], Loss: 1.9894, Perplexity: 7.31139
Epoch [1/3], Step [1400/3236], Loss: 2.3539, Perplexity: 10.5265
Epoch [1/3], Step [1500/3236], Loss: 1.9957, Perplexity: 7.35769
Epoch [1/3], Step [1600/3236], Loss: 2.2887, Perplexity: 9.86222
Epoch [1/3], Step [1700/3236], Loss: 2.0040, Perplexity: 7.41893
Epoch [1/3], Step [1800/3236], Loss: 1.9025, Perplexity: 6.70271
Epoch [1/3], Step [1900/3236], Loss: 1.8822, Perplexity: 6.56810
Epoch [1/3], Step [2000/3236], Loss: 2.0622, Perplexity: 7.86307
Epoch [1/3], Step [2100/3236], Loss: 1.8720, Perplexity: 6.50140
Epoch [1/3], Step [2200/3236], Loss: 2.2023, Perplexity: 9.04583
Epoch [1/3], Step [2300/3236], Loss: 2.1674, Perplexity: 8.73529
Epoch [1/3], Step [2400/3236], Loss: 1.9473, Perplexity: 7.00948
Epoch [1/3], Step [2500/3236], Loss: 2.7885, Perplexity: 16.2569
Epoch [1/3], Step [2600/3236], Loss: 1.8777, Perplexity: 6.53832
Epoch [1/3], Step [2700/3236], Loss: 2.0274, Perplexity: 7.59430
Epoch [1/3], Step [2800/3236], Loss: 1.9749, Perplexity: 7.20615
Epoch [1/3], Step [2900/3236], Loss: 1.9760, Perplexity: 7.21419
Epoch [1/3], Step [3000/3236], Loss: 2.0106, Perplexity: 7.46804
Epoch [1/3], Step [3100/3236], Loss: 1.9717, Perplexity: 7.18316
Epoch [1/3], Step [3200/3236], Loss: 2.1337, Perplexity: 8.44641
Epoch [2/3], Step [100/3236], Loss: 1.8344, Perplexity: 6.261401
Epoch [2/3], Step [200/3236], Loss: 1.9551, Perplexity: 7.06449
Epoch [2/3], Step [300/3236], Loss: 1.9761, Perplexity: 7.21487
Epoch [2/3], Step [400/3236], Loss: 1.9204, Perplexity: 6.82369
Epoch [2/3], Step [500/3236], Loss: 2.5438, Perplexity: 12.7282
Epoch [2/3], Step [600/3236], Loss: 2.2102, Perplexity: 9.11753
Epoch [2/3], Step [700/3236], Loss: 1.7914, Perplexity: 5.99816
Epoch [2/3], Step [800/3236], Loss: 1.8178, Perplexity: 6.15812
Epoch [2/3], Step [900/3236], Loss: 2.0820, Perplexity: 8.02039
Epoch [2/3], Step [1000/3236], Loss: 2.0786, Perplexity: 7.9930
Epoch [2/3], Step [1100/3236], Loss: 1.8634, Perplexity: 6.44552
Epoch [2/3], Step [1200/3236], Loss: 1.9886, Perplexity: 7.30529
Epoch [2/3], Step [1300/3236], Loss: 1.8705, Perplexity: 6.49162
Epoch [2/3], Step [1400/3236], Loss: 1.8621, Perplexity: 6.43750
Epoch [2/3], Step [1500/3236], Loss: 1.9245, Perplexity: 6.85179
Epoch [2/3], Step [1600/3236], Loss: 1.8070, Perplexity: 6.09217
Epoch [2/3], Step [1700/3236], Loss: 1.8529, Perplexity: 6.37806
Epoch [2/3], Step [1800/3236], Loss: 2.3940, Perplexity: 10.9569
Epoch [2/3], Step [1900/3236], Loss: 1.9118, Perplexity: 6.76503
Epoch [2/3], Step [2000/3236], Loss: 1.9764, Perplexity: 7.21643
Epoch [2/3], Step [2100/3236], Loss: 2.5962, Perplexity: 13.4131
Epoch [2/3], Step [2200/3236], Loss: 2.4606, Perplexity: 11.7120
Epoch [2/3], Step [2300/3236], Loss: 1.9472, Perplexity: 7.00872
Epoch [2/3], Step [2400/3236], Loss: 1.9294, Perplexity: 6.88561
Epoch [2/3], Step [2500/3236], Loss: 3.5868, Perplexity: 36.1201
Epoch [2/3], Step [2600/3236], Loss: 2.1147, Perplexity: 8.28713
Epoch [2/3], Step [2700/3236], Loss: 1.7557, Perplexity: 5.78756
Epoch [2/3], Step [2800/3236], Loss: 1.8051, Perplexity: 6.08063
Epoch [2/3], Step [2900/3236], Loss: 1.8938, Perplexity: 6.64480
Epoch [2/3], Step [3000/3236], Loss: 1.8904, Perplexity: 6.62222
Epoch [2/3], Step [3100/3236], Loss: 1.9451, Perplexity: 6.99419
Epoch [2/3], Step [3200/3236], Loss: 1.8358, Perplexity: 6.27057
Epoch [3/3], Step [100/3236], Loss: 1.8486, Perplexity: 6.350779
Epoch [3/3], Step [200/3236], Loss: 1.8276, Perplexity: 6.21907
Epoch [3/3], Step [300/3236], Loss: 1.8740, Perplexity: 6.51453
Epoch [3/3], Step [400/3236], Loss: 1.8461, Perplexity: 6.33515
Epoch [3/3], Step [500/3236], Loss: 1.8411, Perplexity: 6.30379
Epoch [3/3], Step [600/3236], Loss: 1.8993, Perplexity: 6.68116
Epoch [3/3], Step [700/3236], Loss: 1.7979, Perplexity: 6.03711
Epoch [3/3], Step [800/3236], Loss: 1.8720, Perplexity: 6.50100
Epoch [3/3], Step [900/3236], Loss: 1.7748, Perplexity: 5.89893
Epoch [3/3], Step [1000/3236], Loss: 1.8434, Perplexity: 6.3177
Epoch [3/3], Step [1100/3236], Loss: 1.8287, Perplexity: 6.22581
Epoch [3/3], Step [1200/3236], Loss: 1.8255, Perplexity: 6.20594
Epoch [3/3], Step [1300/3236], Loss: 1.9524, Perplexity: 7.04567
Epoch [3/3], Step [1400/3236], Loss: 1.7988, Perplexity: 6.04225
Epoch [3/3], Step [1500/3236], Loss: 1.9713, Perplexity: 7.18014
Epoch [3/3], Step [1600/3236], Loss: 2.0450, Perplexity: 7.72958
Epoch [3/3], Step [1700/3236], Loss: 1.8698, Perplexity: 6.48678
Epoch [3/3], Step [1800/3236], Loss: 1.7946, Perplexity: 6.01731
Epoch [3/3], Step [1900/3236], Loss: 1.7594, Perplexity: 5.80900
Epoch [3/3], Step [2000/3236], Loss: 1.9765, Perplexity: 7.21761
Epoch [3/3], Step [2100/3236], Loss: 1.8626, Perplexity: 6.44026
Epoch [3/3], Step [2200/3236], Loss: 2.2390, Perplexity: 9.38419
Epoch [3/3], Step [2300/3236], Loss: 1.8710, Perplexity: 6.49481
Epoch [3/3], Step [2400/3236], Loss: 1.8341, Perplexity: 6.25950
Epoch [3/3], Step [2500/3236], Loss: 1.8793, Perplexity: 6.54864
Epoch [3/3], Step [2600/3236], Loss: 1.7952, Perplexity: 6.02100
Epoch [3/3], Step [2700/3236], Loss: 1.6823, Perplexity: 5.37800
Epoch [3/3], Step [2800/3236], Loss: 1.9075, Perplexity: 6.73600
Epoch [3/3], Step [2900/3236], Loss: 1.8130, Perplexity: 6.12919
Epoch [3/3], Step [3000/3236], Loss: 1.7719, Perplexity: 5.88202
Epoch [3/3], Step [3100/3236], Loss: 1.7932, Perplexity: 6.00883
Epoch [3/3], Step [3200/3236], Loss: 1.8262, Perplexity: 6.21033
Epoch [3/3], Step [3236/3236], Loss: 1.9442, Perplexity: 6.98773 |
doc/BookChapters/.ipynb_checkpoints/chapter6-checkpoint.ipynb | ###Markdown
 Two-body problems, from the Gravitational Force to Two-body Scattering Introduction and Definitions Central forces are forces which are directed towards or away from a point called the center of force. A familiar force is the gravitational force and, as we have seen earlier, one of the classical examples is that of the motion of our Earth around the Sun. The Sun, being approximately six orders of magnitude heavier than the Earth, serves as our origin. A force like the gravitational force is a function of the relative distance $\boldsymbol{r}=\boldsymbol{r}_1-\boldsymbol{r}_2$ only, where here $\boldsymbol{r}_1$ and $\boldsymbol{r}_2$ are the positions relative to a defined origin for object one and object two, respectively. These forces depend on the spatial degrees of freedom only (the positions of the interacting objects/particles) and, as we have discussed before, they are so-called conservative forces. As we will see in this chapter, this implies conservation of energy, of the total linear momentum and of angular momentum, and the forces are defined in terms of the gradient of a potential which also depends only on the positions of the particles. With a scalar potential $V(\boldsymbol{r})$ we define the force as the gradient of the potential $$\boldsymbol{F}(\boldsymbol{r})=-\boldsymbol{\nabla}V(\boldsymbol{r}).$$ In general these potentials depend only on the magnitude of the relative position and we will write the potential as $V(r)$ where $r$ is defined as $$r = |\boldsymbol{r}_1-\boldsymbol{r}_2|.$$ In three dimensions our vectors are defined as (for an object/particle $i$) $$\boldsymbol{r}_i = x_i\boldsymbol{e}_1+y_i\boldsymbol{e}_2+z_i\boldsymbol{e}_3,$$ while in two dimensions we have $$\boldsymbol{r}_i = x_i\boldsymbol{e}_1+y_i\boldsymbol{e}_2.$$ In two dimensions the radius $r$ is defined as $$r = |\boldsymbol{r}_1-\boldsymbol{r}_2|=\sqrt{(x_1-x_2)^2+(y_1-y_2)^2}.$$ If we consider the gravitational potential involving two masses $1$ and $2$, we have $$V_{12}(r)=V(r)=-\frac{Gm_1m_2}{|\boldsymbol{r}_1-\boldsymbol{r}_2|}=-\frac{Gm_1m_2}{r}.$$ Calculating the gradient of this potential we obtain the force $$\boldsymbol{F}(\boldsymbol{r})=-\frac{Gm_1m_2}{|\boldsymbol{r}_1-\boldsymbol{r}_2|^2}\hat{\boldsymbol{r}}_{12}=-\frac{Gm_1m_2}{r^2}\hat{\boldsymbol{r}},$$ where we have the unit vector $$\hat{\boldsymbol{r}}=\hat{\boldsymbol{r}}_{12}=\frac{\boldsymbol{r}_2-\boldsymbol{r}_1}{|\boldsymbol{r}_1-\boldsymbol{r}_2|}.$$ Here $G=6.67\times 10^{-11}$ Nm$^2$/kg$^2$, and $\boldsymbol{F}$ is the force on $2$ due to $1$. By inspection, one can see that the force on $2$ due to $1$ and the force on $1$ due to $2$ are equal and opposite. The net potential energy for a large number of masses would be $$V=\sum_{i<j}V_{ij}=\frac{1}{2}\sum_{i\ne j}V_{ij}.$$ In general, the central forces that we will study can be written mathematically as $$\boldsymbol{F}(\boldsymbol{r})=f(r)\hat{r},$$ where $f(r)$ is a scalar function. For the above gravitational force this scalar term is $-Gm_1m_2/r^2$. In general we will simply write this scalar function as $f(r)=\alpha/r^2$, where $\alpha$ is a constant that can be either negative or positive. We will also see examples of other types of potentials in the examples below. Besides general expressions for the potentials/forces, we will discuss in detail the different types of motion that arise, from circular to elliptical, hyperbolic or parabolic.
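 As a small numerical aside (added here for illustration only, not part of the original derivation), the gravitational force expression above is straightforward to evaluate; the masses and positions below are example values for the Earth-Moon pair.
###Code
# illustrative sketch: evaluate the gravitational force on object 2 due to object 1
import numpy as np

G = 6.674e-11                        # gravitational constant in N m^2/kg^2
m1, m2 = 5.972e24, 7.348e22          # example masses (Earth and Moon) in kg

r1 = np.array([0.0, 0.0, 0.0])
r2 = np.array([3.844e8, 0.0, 0.0])   # roughly the Earth-Moon distance in m

r = np.linalg.norm(r1 - r2)          # magnitude of the relative coordinate
rhat = (r2 - r1)/r                   # unit vector from object 1 towards object 2

F_on_2 = -G*m1*m2/r**2 * rhat        # force on 2 due to 1, directed back towards 1
print(F_on_2)                        # approximately [-1.98e+20, 0, 0] N
###Output
_____no_output_____
###Markdown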
 By transforming to either polar coordinates or spherical coordinates, we will be able to obtain analytical solutions for the equations of motion and thereby obtain new insights about the properties of a system. Where possible, we will compare our analytical equations with numerical studies. However, before we arrive at these lovely insights, we need to introduce some mathematical manipulations and definitions. We conclude this chapter with a discussion of two-body scattering. Center of Mass and Relative Coordinates Thus far, we have considered the trajectory as if the force is centered around a fixed point. For two bodies interacting only with one another, both masses circulate around the center of mass. One might think that solutions would become more complex when both particles move, but we will see here that the problem can be reduced to one with a single body moving according to a fixed force by expressing the trajectories for $\boldsymbol{r}_1$ and $\boldsymbol{r}_2$ in terms of the center-of-mass coordinate $\boldsymbol{R}$ and the relative coordinate $\boldsymbol{r}$. We define the center-of-mass (CoM) coordinate as $$\boldsymbol{R}\equiv\frac{m_1\boldsymbol{r}_1+m_2\boldsymbol{r}_2}{m_1+m_2},$$ and the relative coordinate as $$\boldsymbol{r}\equiv\boldsymbol{r}_1-\boldsymbol{r}_2.$$ We can then rewrite $\boldsymbol{r}_1$ and $\boldsymbol{r}_2$ in terms of the relative and CoM coordinates as $$\boldsymbol{r}_1=\boldsymbol{R}+\frac{m_2}{M}\boldsymbol{r},$$ and $$\boldsymbol{r}_2=\boldsymbol{R}-\frac{m_1}{M}\boldsymbol{r}.$$ Conservation of total Linear Momentum In our discussions on conservative forces we defined the total linear momentum as $$\boldsymbol{P}=\sum_{i=1}^Nm_i\frac{d\boldsymbol{r}_i}{dt},$$ where $N=2$ in our case. With the above definition of the center-of-mass position, we see that we can rewrite the total linear momentum as (multiplying the CoM coordinate with $M$) $$\boldsymbol{P}=M\frac{d\boldsymbol{R}}{dt}=M\dot{\boldsymbol{R}}.$$ The net force acting on the system is given by the time derivative of the linear momentum (assuming mass is time independent) and we have $$\boldsymbol{F}^{\mathrm{net}}=\dot{\boldsymbol{P}}=M\ddot{\boldsymbol{R}}.$$ The net force acting on the system is also given by the sum of the forces acting on the two objects, that is we have $$\boldsymbol{F}^{\mathrm{net}}=\boldsymbol{F}_1+\boldsymbol{F}_2=\dot{\boldsymbol{P}}=M\ddot{\boldsymbol{R}}.$$ In our case the forces are given by the internal forces only. The force acting on object $1$ is thus $\boldsymbol{F}_{12}$ and the one acting on object $2$ is $\boldsymbol{F}_{21}$. We have also defined that $\boldsymbol{F}_{12}=-\boldsymbol{F}_{21}$. This means that we have $$\boldsymbol{F}_1+\boldsymbol{F}_2=\boldsymbol{F}_{12}+\boldsymbol{F}_{21}=0=\dot{\boldsymbol{P}}=M\ddot{\boldsymbol{R}}.$$ We could alternatively have written this as $$\ddot{\boldsymbol{R}}_{\rm cm}=\frac{1}{m_1+m_2}\left\{m_1\ddot{\boldsymbol{r}}_1+m_2\ddot{\boldsymbol{r}}_2\right\}=\frac{1}{m_1+m_2}\left\{\boldsymbol{F}_{12}+\boldsymbol{F}_{21}\right\}=0.$$ This has the important consequence that the CoM velocity is a constant of the motion. And since the total linear momentum is given by the time-derivative of the CoM coordinate times the total mass $M=m_1+m_2$, it means that linear momentum is also conserved. Stated differently, the center-of-mass coordinate $\boldsymbol{R}$ moves at a fixed velocity. This has also another important consequence for our forces.
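 A quick numerical check (added for illustration, not part of the original text) confirms that the change of coordinates above can be inverted, i.e. that $\boldsymbol{r}_1$ and $\boldsymbol{r}_2$ are recovered from $\boldsymbol{R}$ and $\boldsymbol{r}$; the masses and positions below are arbitrary example values.
###Code
# illustrative sketch: round-trip between (r1, r2) and the (R, r) coordinates
import numpy as np

m1, m2 = 3.0, 5.0
M = m1 + m2

r1 = np.array([1.0, 2.0, -0.5])
r2 = np.array([-2.0, 0.5, 1.5])

R = (m1*r1 + m2*r2)/M                  # center-of-mass coordinate
r = r1 - r2                            # relative coordinate

print(np.allclose(r1, R + (m2/M)*r))   # True
print(np.allclose(r2, R - (m1/M)*r))   # True
###Output
_____no_output_____
###Markdown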
If we assume that our force depends only on the relative coordinate, it means that the gradient of the potential with respect to the center of mass position is zero, that is $$M\ddot{\boldsymbol{R}}=-\boldsymbol{\nabla}_{\boldsymbol{R}}V =0.$$ If we now switch to the equation of motion for the relative coordinate, we have $$\ddot{\boldsymbol{r}}=\ddot{\boldsymbol{r}}_1-\ddot{\boldsymbol{r}}_2=\left(\frac{\boldsymbol{F}_{12}}{m_1}-\frac{\boldsymbol{F}_{21}}{m_2}\right)=\left(\frac{1}{m_1}+\frac{1}{m_2}\right)\boldsymbol{F}_{12},$$ which we can rewrite in terms of the reduced mass $$\mu=\frac{m_1m_2}{m_1+m_2},$$ as $$\mu \ddot{\boldsymbol{r}}=\boldsymbol{F}_{12},$$ where the reduced mass satisfies $$\frac{1}{\mu}=\frac{1}{m_1}+\frac{1}{m_2}.$$ This has a very important consequence for our coming analysis of the equations of motion for the two-body problem. Since the acceleration for the CoM coordinate is zero, we can now treat the trajectory as a one-body problem where the mass is given by the reduced mass $\mu$, plus a second trivial problem for the center of mass. The reduced mass is especially convenient when one is considering forces that depend only on the relative coordinate (like the gravitational force or the electrostatic force between two charges) because then for, say, the gravitational force we have $$\mu \ddot{\boldsymbol{r}}=-\frac{Gm_1m_2}{r^2}\hat{\boldsymbol{r}}=-\frac{GM\mu}{r^2}\hat{\boldsymbol{r}},$$ where we have defined $M= m_1+m_2$. It means that the acceleration of the relative coordinate is $$\ddot{\boldsymbol{r}}=-\frac{GM}{r^2}\hat{\boldsymbol{r}},$$ and we have that for the gravitational problem, the reduced mass then falls out and the trajectory depends only on the total mass $M$. The standard strategy is to transform into the center of mass frame, then treat the problem as one of a single particle of mass $\mu$ undergoing a force $\boldsymbol{F}_{12}$. Scattering angles can also be expressed in this frame, then transformed into the lab frame. Before we proceed to our definition of the CoM frame, where we have $\boldsymbol{R}=0$, we need to set up energy in terms of the relative and CoM coordinates. Kinetic and total Energy The kinetic energy and momenta also have analogues in center-of-mass coordinates. We have defined the total linear momentum as $$\boldsymbol{P}=\sum_{i=1}^Nm_i\frac{d\boldsymbol{r}_i}{dt}=M\dot{\boldsymbol{R}}.$$ For the relative momentum $\boldsymbol{q}$, we have that the time derivative of $\boldsymbol{r}$ is $$\dot{\boldsymbol{r}} =\dot{\boldsymbol{r}}_1-\dot{\boldsymbol{r}}_2.$$ We also know that the momenta are $\boldsymbol{p}_1=m_1\dot{\boldsymbol{r}}_1$ and $\boldsymbol{p}_2=m_2\dot{\boldsymbol{r}}_2$. Using these expressions we can rewrite $$\dot{\boldsymbol{r}} =\frac{\boldsymbol{p}_1}{m_1}-\frac{\boldsymbol{p}_2}{m_2},$$ which gives $$\dot{\boldsymbol{r}} =\frac{m_2\boldsymbol{p}_1-m_1\boldsymbol{p}_2}{m_1m_2},$$ and dividing both sides by $M$ we have $$\frac{m_1m_2}{M}\dot{\boldsymbol{r}} =\frac{m_2\boldsymbol{p}_1-m_1\boldsymbol{p}_2}{M}.$$ Introducing the reduced mass $\mu=m_1m_2/M$ we have finally $$\mu\dot{\boldsymbol{r}} =\frac{m_2\boldsymbol{p}_1-m_1\boldsymbol{p}_2}{M}.$$ And $\mu\dot{\boldsymbol{r}}$ defines the relative momentum $\boldsymbol{q}=\mu\dot{\boldsymbol{r}}$.
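Before moving on to the kinetic energy, the short cell below illustrates these definitions numerically (all values are examples only): the CoM and relative coordinates reconstruct $\boldsymbol{r}_1$ and $\boldsymbol{r}_2$, and for the Earth-Sun system the reduced mass is essentially the Earth mass, which is why the Sun can be treated as sitting at the origin.
###Code
# Reduced mass and coordinate reconstruction, with example values
import numpy as np

# Earth-Sun reduced mass: mu is almost exactly the Earth mass
m_sun, m_earth = 1.989e30, 5.972e24        # kg
mu = m_sun * m_earth / (m_sun + m_earth)
print(mu / m_earth)                         # differs from 1 only at the 1e-6 level

# CoM and relative coordinates reproduce the original positions (arbitrary example)
m1, m2 = 3.0, 5.0
M = m1 + m2
r1 = np.array([1.0, 2.0, 0.5])
r2 = np.array([-0.5, 0.3, 1.0])
R = (m1 * r1 + m2 * r2) / M                 # center-of-mass coordinate
r = r1 - r2                                 # relative coordinate
print(np.allclose(r1, R + (m2 / M) * r), np.allclose(r2, R - (m1 / M) * r))
###Output
_____no_output_____
###Markdown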
With these definitions we can then calculate the kinetic energy in terms of the relative and CoM coordinates. We have that $$K=\frac{p_1^2}{2m_1}+\frac{p_2^2}{2m_2},$$ and with $\boldsymbol{p}_1=m_1\dot{\boldsymbol{r}}_1$ and $\boldsymbol{p}_2=m_2\dot{\boldsymbol{r}}_2$ and using $$\dot{\boldsymbol{r}}_1=\dot{\boldsymbol{R}}+\frac{m_2}{M}\dot{\boldsymbol{r}},$$ and $$\dot{\boldsymbol{r}}_2=\dot{\boldsymbol{R}}-\frac{m_1}{M}\dot{\boldsymbol{r}},$$ we obtain after squaring the expressions for $\dot{\boldsymbol{r}}_1$ and $\dot{\boldsymbol{r}}_2$ $$K=\frac{(m_1+m_2)\dot{\boldsymbol{R}}^2}{2}+\frac{(m_1+m_2)m_1m_2\dot{\boldsymbol{r}}^2}{2M^2},$$ which we simplify to $$K=\frac{\boldsymbol{P}^2}{2M}+\frac{\boldsymbol{q}^2}{2\mu},$$ using $\boldsymbol{P}=M\dot{\boldsymbol{R}}$ and $\boldsymbol{q}=\mu\dot{\boldsymbol{r}}$. Below we will define a reference frame, the so-called CoM-frame, where $\boldsymbol{R}=0$. This is going to simplify our equations further. Conservation of Angular Momentum The angular momentum (the total one) is the sum of the individual angular momenta. In our case we have two particles/objects only, meaning that our angular momentum is defined as $$\boldsymbol{L} = \boldsymbol{r}_1 \times \boldsymbol{p}_1+\boldsymbol{r}_2 \times \boldsymbol{p}_2,$$ and using that $m_1\dot{\boldsymbol{r}}_1=\boldsymbol{p}_1$ and $m_2\dot{\boldsymbol{r}}_2=\boldsymbol{p}_2$ we have $$\boldsymbol{L} = m_1\boldsymbol{r}_1 \times \dot{\boldsymbol{r}}_1+m_2\boldsymbol{r}_2 \times \dot{\boldsymbol{r}}_2.$$ We define now the CoM-Frame where we set $\boldsymbol{R}=0$. This means that the equations for $\boldsymbol{r}_1$ and $\boldsymbol{r}_2$ in terms of the relative motion simplify and we have $$\boldsymbol{r}_1=\frac{m_2}{M}\boldsymbol{r},$$ and $$\boldsymbol{r}_2=-\frac{m_1}{M}\boldsymbol{r},$$ resulting in $$\boldsymbol{L} = m_1 \frac{m_2}{M}\boldsymbol{r}\times\frac{m_2}{M}\dot{\boldsymbol{r}} +m_2 \frac{m_1}{M}\boldsymbol{r} \times \frac{m_1}{M}\dot{\boldsymbol{r}}.$$ We see that we can rewrite this equation as $$\boldsymbol{L}=\boldsymbol{r}\times \mu\dot{\boldsymbol{r}}=\mu\boldsymbol{r}\times \dot{\boldsymbol{r}}.$$ If we now use a central force, we know that $$\mu\ddot{\boldsymbol{r}}=\boldsymbol{F}(\boldsymbol{r})=f(r)\hat{\boldsymbol{r}},$$ and inserting this in the time derivative of the angular momentum we have $$\dot{\boldsymbol{L}}=\mu\dot{\boldsymbol{r}}\times \dot{\boldsymbol{r}}+\boldsymbol{r}\times \mu\ddot{\boldsymbol{r}}=\boldsymbol{r}\times f(r)\hat{\boldsymbol{r}}=0,$$ which equals zero since we are taking the cross product of the vector $\boldsymbol{r}$ with itself (and of $\dot{\boldsymbol{r}}$ with itself). Angular momentum is thus conserved and, in addition to the total linear momentum being conserved, we know that energy is also conserved for forces that depend on the relative coordinate only. Since angular momentum is conserved, and this is going to be an important element in the analysis which follows, we can then idealize the motion of our two objects as moving in a plane spanned by the relative coordinate and the relative momentum, since the angular momentum is perpendicular to the plane spanned by these two vectors. It means also, since $\boldsymbol{L}$ is conserved, that we can reduce our problem to motion in the $xy$-plane. What we have done then is to reduce a two-body problem in three dimensions with six degrees of freedom (the six coordinates of the two objects) to a problem defined entirely by the relative coordinate in two dimensions.
We have thus moved from a problem with six degrees of freedom to one with two only. Since we also deal with central forces that depend only on the relative coordinate, we will show below that, transforming to polar coordinates, we can find analytical solutions to the equation of motion $$\mu\ddot{\boldsymbol{r}}=\boldsymbol{F}(\boldsymbol{r})=f(r)\hat{\boldsymbol{r}}.$$ Note the boldfaced symbols for the relative position $\boldsymbol{r}$. Our vector $\boldsymbol{r}$ is defined as $$\boldsymbol{r}=x\boldsymbol{e}_1+y\boldsymbol{e}_2$$ and introducing polar coordinates $r\in[0,\infty)$ and $\phi\in [0,2\pi]$ and the transformation $$r=\sqrt{x^2+y^2},$$ and $x=r\cos\phi$ and $y=r\sin\phi$, we will rewrite our equation of motion by transforming from Cartesian coordinates to polar coordinates. By so doing, we end up with two differential equations which can be solved analytically (it depends on the form of the potential). What follows now is a rewrite of these equations and the introduction of Kepler's laws as well. Deriving Elliptical Orbits Kepler's laws state that a gravitational orbit should be an ellipse with the source of the gravitational field at one focus. Deriving this is surprisingly messy. To do this, we first use angular momentum conservation to transform the equations of motion so that it is in terms of $r$ and $\theta$ instead of $r$ and $t$. The overall strategy is to 1. Find equations of motion for $r$ and $t$ with no angle ($\theta$) mentioned, i.e. $d^2r/dt^2=\cdots$. Angular momentum conservation will be used, and the equation will involve the angular momentum $L$. 2. Use angular momentum conservation to find an expression for $\dot{\theta}$ in terms of $r$. 3. Use the chain rule to convert the equation of motion for $r$, an expression involving $r,\dot{r}$ and $\ddot{r}$, to one involving $r,dr/d\theta$ and $d^2r/d\theta^2$. This is quite complicated because the expressions will also involve a substitution $u=1/r$ so that one finds an expression in terms of $u$ and $\theta$. 4. Once $u(\theta)$ is found, you need to show that this can be converted to the familiar form for an ellipse. The equations of motion give $$\begin{eqnarray}\label{eq:radialeqofmotion} \tag{1}\frac{d}{dt}r^2&=&\frac{d}{dt}(x^2+y^2)=2x\dot{x}+2y\dot{y}=2r\dot{r},\\\nonumber\dot{r}&=&\frac{x}{r}\dot{x}+\frac{y}{r}\dot{y},\\\nonumber\ddot{r}&=&\frac{x}{r}\ddot{x}+\frac{y}{r}\ddot{y}+\frac{\dot{x}^2+\dot{y}^2}{r}-\frac{\dot{r}^2}{r}.\end{eqnarray}$$ Recognizing that the numerator of the third term is the velocity squared, and that it can be written in polar coordinates, $$\begin{equation}v^2=\dot{x}^2+\dot{y}^2=\dot{r}^2+r^2\dot{\theta}^2,\label{_auto1} \tag{2}\end{equation}$$ one can write $\ddot{r}$ as $$\begin{eqnarray}\label{eq:radialeqofmotion2} \tag{3}\ddot{r}&=&\frac{F_x\cos\theta+F_y\sin\theta}{m}+\frac{\dot{r}^2+r^2\dot{\theta}^2}{r}-\frac{\dot{r}^2}{r}\\\nonumber&=&\frac{F}{m}+\frac{r^2\dot{\theta}^2}{r}\\\nonumber m\ddot{r}&=&F+\frac{L^2}{mr^3}.\end{eqnarray}$$ This derivation used the fact that the force was radial, $F=F_r=F_x\cos\theta+F_y\sin\theta$, and that angular momentum is $L=mrv_{\theta}=mr^2\dot{\theta}$. The term $L^2/mr^3=mv_{\theta}^2/r$ behaves like an additional force. Sometimes this is referred to as a centrifugal force, but it is not a force. Instead, it is the consequence of considering the motion in a rotating (and therefore accelerating) frame. Now, we switch to the particular case of an attractive inverse square force, $F=-\alpha/r^2$, and show that the trajectory, $r(\theta)$, is an ellipse.
To do this we transform derivatives w.r.t. time toderivatives w.r.t. $\theta$ using the chain rule combined with angularmomentum conservation, $\dot{\theta}=L/mr^2$. $$\begin{eqnarray}\label{eq:rtotheta} \tag{4}\dot{r}&=&\frac{dr}{d\theta}\dot{\theta}=\frac{dr}{d\theta}\frac{L}{mr^2},\\\nonumber\ddot{r}&=&\frac{d^2r}{d\theta^2}\dot{\theta}^2+\frac{dr}{d\theta}\left(\frac{d}{dr}\frac{L}{mr^2}\right)\dot{r}\\\nonumber&=&\frac{d^2r}{d\theta^2}\left(\frac{L}{mr^2}\right)^2-2\frac{dr}{d\theta}\frac{L}{mr^3}\dot{r}\\\nonumber&=&\frac{d^2r}{d\theta^2}\left(\frac{L}{mr^2}\right)^2-\frac{2}{r}\left(\frac{dr}{d\theta}\right)^2\left(\frac{L}{mr^2}\right)^2\end{eqnarray}$$ Equating the two expressions for $\ddot{r}$ in Eq.s ([3](eq:radialeqofmotion2)) and ([4](eq:rtotheta)) eliminates all the derivatives w.r.t. time, and provides a differential equation with only derivatives w.r.t. $\theta$, $$\begin{equation}\label{eq:rdotdot} \tag{5}\frac{d^2r}{d\theta^2}\left(\frac{L}{mr^2}\right)^2-\frac{2}{r}\left(\frac{dr}{d\theta}\right)^2\left(\frac{L}{mr^2}\right)^2=\frac{F}{m}+\frac{L^2}{m^2r^3},\end{equation}$$ that when solved yields the trajectory, i.e. $r(\theta)$. Up to thispoint the expressions work for any radial force, not just forces thatfall as $1/r^2$.The trick to simplifying this differential equation for the inversesquare problems is to make a substitution, $u\equiv 1/r$, and rewritethe differential equation for $u(\theta)$. $$\begin{eqnarray}r&=&1/u,\\\nonumber\frac{dr}{d\theta}&=&-\frac{1}{u^2}\frac{du}{d\theta},\\\nonumber\frac{d^2r}{d\theta^2}&=&\frac{2}{u^3}\left(\frac{du}{d\theta}\right)^2-\frac{1}{u^2}\frac{d^2u}{d\theta^2}.\end{eqnarray}$$ Plugging these expressions into Eq. ([5](eq:rdotdot)) gives anexpression in terms of $u$, $du/d\theta$, and $d^2u/d\theta^2$. Aftersome tedious algebra, $$\begin{equation}\frac{d^2u}{d\theta^2}=-u-\frac{F m}{L^2u^2}.\label{_auto2} \tag{6}\end{equation}$$ For the attractive inverse square law force, $F=-\alpha u^2$, $$\begin{equation}\frac{d^2u}{d\theta^2}=-u+\frac{m\alpha}{L^2}.\label{_auto3} \tag{7}\end{equation}$$ The solution has two arbitrary constants, $A$ and $\theta_0$, $$\begin{eqnarray}\label{eq:Ctrajectory} \tag{8}u&=&\frac{m\alpha}{L^2}+A\cos(\theta-\theta_0),\\\nonumberr&=&\frac{1}{(m\alpha/L^2)+A\cos(\theta-\theta_0)}.\end{eqnarray}$$ The radius will be at a minimum when $\theta=\theta_0$ and at amaximum when $\theta=\theta_0+\pi$. The constant $A$ is related to theeccentricity of the orbit. When $A=0$ the radius is a constant$r=L^2/(m\alpha)$, and the motion is circular. If one solved theexpression $mv^2/r=-\alpha/r^2$ for a circular orbit, using thesubstitution $v=L/(mr)$, one would reproduce the expression$r=L^2/(m\alpha)$.The form describing the elliptical trajectory inEq. ([8](eq:Ctrajectory)) can be identified as an ellipse with onefocus being the center of the ellipse by considering the definition ofan ellipse as being the points such that the sum of the two distancesbetween the two foci are a constant. Making that distance $2D$, thedistance between the two foci as $2a$, and putting one focus at theorigin, $$\begin{eqnarray}2D&=&r+\sqrt{(r\cos\theta-2a)^2+r^2\sin^2\theta},\\\nonumber4D^2+r^2-4Dr&=&r^2+4a^2-4ar\cos\theta,\\\nonumberr&=&\frac{D^2-a^2}{D+a\cos\theta}=\frac{1}{D/(D^2-a^2)-a\cos\theta/(D^2-a^2)}.\end{eqnarray}$$ By inspection, this is the same form as Eq. ([8](eq:Ctrajectory)) with $D/(D^2-a^2)=m\alpha/L^2$ and $a/(D^2-a^2)=A$.Let us remind ourselves about what an ellipse is before we proceed.
###Code
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from math import pi
u=1. #x-position of the center
v=0.5 #y-position of the center
a=2. #radius on the x-axis
b=1.5 #radius on the y-axis
t = np.linspace(0, 2*pi, 100)
plt.plot( u+a*np.cos(t) , v+b*np.sin(t) )
plt.grid(color='lightgray',linestyle='--')
plt.show()
###Output
_____no_output_____
###Markdown
Effective or Centrifugal Potential The total energy of a particle is $$\begin{eqnarray}E&=&V(r)+\frac{1}{2}mv_\theta^2+\frac{1}{2}m\dot{r}^2\\\nonumber&=&V(r)+\frac{1}{2}mr^2\dot{\theta}^2+\frac{1}{2}m\dot{r}^2\\\nonumber&=&V(r)+\frac{L^2}{2mr^2}+\frac{1}{2}m\dot{r}^2.\end{eqnarray}$$ The second term then contributes to the energy like an additional repulsive potential. The term is sometimes referred to as the "centrifugal" potential, even though it is actually the kinetic energy of the angular motion. Combined with $V(r)$, it is sometimes referred to as the "effective" potential, $$\begin{eqnarray}V_{\rm eff}(r)&=&V(r)+\frac{L^2}{2mr^2}.\end{eqnarray}$$ Note that if one treats the effective potential like a real potential, one would expect to be able to generate an effective force, $$\begin{eqnarray}F_{\rm eff}&=&-\frac{d}{dr}V(r) -\frac{d}{dr}\frac{L^2}{2mr^2}\\\nonumber&=&F(r)+\frac{L^2}{mr^3}=F(r)+m\frac{v_\perp^2}{r},\end{eqnarray}$$ which indeed matches the form for $m\ddot{r}$ in Eq. ([3](#eq:radialeqofmotion2)), which included the **centrifugal** force. The following code plots this effective potential for a simple choice of parameters, with a standard gravitational potential $-\alpha/r$. Here we have chosen $L=m=\alpha=1$.
###Code
# Common imports
import numpy as np
from math import *
import matplotlib.pyplot as plt
Deltax = 0.01
#set up arrays
xinitial = 0.3
xfinal = 5.0
alpha = 1.0 # spring constant
m = 1.0 # mass, you can change these
AngMom = 1.0 # The angular momentum
n = ceil((xfinal-xinitial)/Deltax)
x = np.zeros(n)
for i in range(n):
x[i] = xinitial+i*Deltax
V = np.zeros(n)
V = -alpha/x+0.5*AngMom*AngMom/(m*x*x)
# Plot potential
fig, ax = plt.subplots()
ax.set_xlabel('r[m]')
ax.set_ylabel('V[J]')
ax.plot(x, V)
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Gravitational force example Using the above parameters, we can now study the evolution of the system using, for example, the velocity Verlet method. This is done in the code here for an initial radius equal to the minimum of the potential well. We see then that the radius is always the same and the orbit corresponds to a circle (the radius is constant).
###Code
# Common imports
import numpy as np
import pandas as pd
from math import *
import matplotlib.pyplot as plt
import os
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
# Simple Gravitational Force -alpha/r
DeltaT = 0.01
#set up arrays
tfinal = 100.0
n = ceil(tfinal/DeltaT)
# set up arrays for t, v and r
t = np.zeros(n)
v = np.zeros(n)
r = np.zeros(n)
# Constants of the model, setting all variables to one for simplicity
alpha = 1.0
AngMom = 1.0 # The angular momentum
m = 1.0 # scale mass to one
c1 = AngMom*AngMom/(m*m)
c2 = AngMom*AngMom/m
rmin = (AngMom*AngMom/m/alpha)
# Initial conditions
r0 = rmin
v0 = 0.0
r[0] = r0
v[0] = v0
# Start integrating using the Velocity-Verlet method
for i in range(n-1):
# Set up acceleration
a = -alpha/(r[i]**2)+c1/(r[i]**3)
# update velocity, time and position using the Velocity-Verlet method
r[i+1] = r[i] + DeltaT*v[i]+0.5*(DeltaT**2)*a
anew = -alpha/(r[i+1]**2)+c1/(r[i+1]**3)
v[i+1] = v[i] + 0.5*DeltaT*(a+anew)
t[i+1] = t[i] + DeltaT
# Plot position as function of time
fig, ax = plt.subplots(2,1)
ax[0].set_xlabel('time')
ax[0].set_ylabel('radius')
ax[0].plot(t,r)
ax[1].set_xlabel('time')
ax[1].set_ylabel('Velocity')
ax[1].plot(t,v)
save_fig("RadialGVV")
plt.show()
###Output
_____no_output_____
###Markdown
Changing the initial position to a value where the energy is positive leads to a radius that increases with time, a so-called unbound orbit. Choosing on the other hand an initial radius that corresponds to a negative energy and differs from the minimum value leads to a radius that oscillates back and forth between two values. Harmonic Oscillator in two dimensions Consider a particle of mass $m$ in a two-dimensional harmonic oscillator with potential $$V=\frac{1}{2}kr^2=\frac{1}{2}k(x^2+y^2).$$ If the orbit has angular momentum $L$, we can find the radius and angular velocity of the circular orbit as well as the angular frequency of small radial perturbations. We consider the effective potential. The radius of a circular orbit is at the minimum of the potential (where the effective force is zero). The potential is plotted below with the parameters $k=m=1.0$ and $L=1.0$, matching the values used in the code.
###Code
# Common imports
import numpy as np
from math import *
import matplotlib.pyplot as plt
Deltax = 0.01
#set up arrays
xinitial = 0.5
xfinal = 3.0
k = 1.0 # spring constant
m = 1.0 # mass, you can change these
AngMom = 1.0 # The angular momentum
n = ceil((xfinal-xinitial)/Deltax)
x = np.zeros(n)
for i in range(n):
x[i] = xinitial+i*Deltax
V = np.zeros(n)
V = 0.5*k*x*x+0.5*AngMom*AngMom/(m*x*x)
# Plot potential
fig, ax = plt.subplots()
ax.set_xlabel('r[m]')
ax.set_ylabel('V[J]')
ax.plot(x, V)
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
$$\begin{eqnarray*}V_{\rm eff}&=&\frac{1}{2}kr^2+\frac{L^2}{2mr^2}\end{eqnarray*}$$ The effective potential looks like that of a harmonic oscillator forlarge $r$, but for small $r$, the centrifugal potential repels theparticle from the origin. The combination of the two potentials has aminimum for at some radius $r_{\rm min}$. $$\begin{eqnarray*}0&=&kr_{\rm min}-\frac{L^2}{mr_{\rm min}^3},\\r_{\rm min}&=&\left(\frac{L^2}{mk}\right)^{1/4},\\\dot{\theta}&=&\frac{L}{mr_{\rm min}^2}=\sqrt{k/m}.\end{eqnarray*}$$ For particles at $r_{\rm min}$ with $\dot{r}=0$, the particle does notaccelerate and $r$ stays constant, i.e. a circular orbit. The radiusof the circular orbit can be adjusted by changing the angular momentum$L$.For the above parameters this minimum is at $r_{\rm min}=1$. Now consider small vibrations about $r_{\rm min}$. The effective spring constant is the curvature of the effective potential. $$\begin{eqnarray*}k_{\rm eff}&=&\left.\frac{d^2}{dr^2}V_{\rm eff}(r)\right|_{r=r_{\rm min}}=k+\frac{3L^2}{mr_{\rm min}^4}\\&=&4k,\\\omega&=&\sqrt{k_{\rm eff}/m}=2\sqrt{k/m}=2\dot{\theta}.\end{eqnarray*}$$ Here, the second step used the result of the last step from part(a). Because the radius oscillates with twice the angular frequency,the orbit has two places where $r$ reaches a minimum in onecycle. This differs from the inverse-square force where there is oneminimum in an orbit. One can show that the orbit for the harmonicoscillator is also elliptical, but in this case the center of thepotential is at the center of the ellipse, not at one of the foci.The solution is also simple to write down exactly in Cartesian coordinates. The $x$ and $y$ equations of motion separate, $$\begin{eqnarray*}\ddot{x}&=&-kx,\\\ddot{y}&=&-ky.\end{eqnarray*}$$ So the general solution can be expressed as $$\begin{eqnarray*}x&=&A\cos\omega_0 t+B\sin\omega_0 t,\\y&=&C\cos\omega_0 t+D\sin\omega_0 t.\end{eqnarray*}$$ The code here finds the solution for $x$ and $y$ using the code we developed in homework 5 and 6 and the midterm. Note that this code is tailored to run in Cartesian coordinates. There is thus no angular momentum dependent term.
###Code
DeltaT = 0.01
#set up arrays
tfinal = 10.0
n = ceil(tfinal/DeltaT)
# set up arrays
t = np.zeros(n)
v = np.zeros((n,2))
r = np.zeros((n,2))
radius = np.zeros(n)
# Constants of the model
k = 1.0 # spring constant
m = 1.0 # mass, you can change these
omega02 = k/m # Angular frequency squared, omega_0^2 = k/m (used below as a = -omega_0^2 r)
AngMom = 1.0 # The angular momentum
rmin = (AngMom*AngMom/k/m)**0.25
# Initial conditions as compact 2-dimensional arrays
x0 = rmin-0.5; y0= sqrt(rmin*rmin-x0*x0)
r0 = np.array([x0,y0])
v0 = np.array([0.0,0.0])
r[0] = r0
v[0] = v0
# Start integrating using the Velocity-Verlet method
for i in range(n-1):
# Set up the acceleration
a = -r[i]*omega02
# update velocity, time and position using the Velocity-Verlet method
r[i+1] = r[i] + DeltaT*v[i]+0.5*(DeltaT**2)*a
anew = -r[i+1]*omega02
v[i+1] = v[i] + 0.5*DeltaT*(a+anew)
t[i+1] = t[i] + DeltaT
# Plot position as function of time
radius = np.sqrt(r[:,0]**2+r[:,1]**2)
fig, ax = plt.subplots(3,1)
ax[0].set_xlabel('time')
ax[0].set_ylabel('radius squared')
ax[0].plot(t,r[:,0]**2+r[:,1]**2)
ax[1].set_xlabel('time')
ax[1].set_ylabel('x position')
ax[1].plot(t,r[:,0])
ax[2].set_xlabel('time')
ax[2].set_ylabel('y position')
ax[2].plot(t,r[:,1])
fig.tight_layout()
save_fig("2DimHOVV")
plt.show()
###Output
_____no_output_____
###Markdown
With some work using double angle formulas, one can calculate $$\begin{eqnarray*}r^2&=&x^2+y^2\\\nonumber&=&(A^2+C^2)\cos^2(\omega_0t)+(B^2+D^2)\sin^2\omega_0t+2(AB+CD)\cos(\omega_0t)\sin(\omega_0t)\\\nonumber&=&\alpha+\beta\cos 2\omega_0 t+\gamma\sin 2\omega_0 t,\\\alpha&=&\frac{A^2+B^2+C^2+D^2}{2},~~\beta=\frac{A^2-B^2+C^2-D^2}{2},~~\gamma=AB+CD,\\r^2&=&\alpha+(\beta^2+\gamma^2)^{1/2}\cos(2\omega_0 t-\delta),~~~\delta=\arctan(\gamma/\beta),\end{eqnarray*}$$ and see that the radius oscillates with frequency $2\omega_0$. The factor of two comes because the oscillation $x=A\cos\omega_0t$ has two maxima for $x^2$, one at $t=0$ and one a half period later. The following code shows first how we can solve this problem using the radial degrees of freedom only.
###Code
DeltaT = 0.01
#set up arrays
tfinal = 10.0
n = ceil(tfinal/DeltaT)
# set up arrays for t, v and r
t = np.zeros(n)
v = np.zeros(n)
r = np.zeros(n)
E = np.zeros(n)
# Constants of the model
AngMom = 1.0 # The angular momentum
m = 1.0
k = 1.0
omega02 = k/m
c1 = AngMom*AngMom/(m*m)
c2 = AngMom*AngMom/m
rmin = (AngMom*AngMom/k/m)**0.25
# Initial conditions
r0 = rmin
v0 = 0.0
r[0] = r0
v[0] = v0
E[0] = 0.5*m*v0*v0+0.5*k*r0*r0+0.5*c2/(r0*r0)
# Start integrating using the Velocity-Verlet method
for i in range(n-1):
# Set up acceleration
a = -r[i]*omega02+c1/(r[i]**3)
# update velocity, time and position using the Velocity-Verlet method
r[i+1] = r[i] + DeltaT*v[i]+0.5*(DeltaT**2)*a
anew = -r[i+1]*omega02+c1/(r[i+1]**3)
v[i+1] = v[i] + 0.5*DeltaT*(a+anew)
t[i+1] = t[i] + DeltaT
E[i+1] = 0.5*m*v[i+1]*v[i+1]+0.5*k*r[i+1]*r[i+1]+0.5*c2/(r[i+1]*r[i+1])
# Plot position as function of time
fig, ax = plt.subplots(2,1)
ax[0].set_xlabel('time')
ax[0].set_ylabel('radius')
ax[0].plot(t,r)
ax[1].set_xlabel('time')
ax[1].set_ylabel('Energy')
ax[1].plot(t,E)
save_fig("RadialHOVV")
plt.show()
###Output
_____no_output_____
###Markdown
Stability of OrbitsThe effective force can be extracted from the effective potential, $V_{\rm eff}$. Beginning from the equations of motion, Eq. ([1](eq:radialeqofmotion)), for $r$, $$\begin{eqnarray}m\ddot{r}&=&F+\frac{L^2}{mr^3}\\\nonumber&=&F_{\rm eff}\\\nonumber&=&-\partial_rV_{\rm eff},\\\nonumberF_{\rm eff}&=&-\partial_r\left[V(r)+(L^2/2mr^2)\right].\end{eqnarray}$$ For a circular orbit, the radius must be fixed as a function of time,so one must be at a maximum or a minimum of the effectivepotential. However, if one is at a maximum of the effective potentialthe radius will be unstable. For the attractive Coulomb force theeffective potential will be dominated by the $-\alpha/r$ term forlarge $r$ because the centrifugal part falls off more quickly, $\sim1/r^2$. At low $r$ the centrifugal piece wins and the effectivepotential is repulsive. Thus, the potential must have a minimumsomewhere with negative potential. The circular orbits are then stableto perturbation.The effective potential is sketched for two cases, a $1/r$ attractivepotential and a $1/r^3$ attractive potential. The $1/r$ case has astable minimum, whereas the circular orbit in the $1/r^3$ case isunstable.If one considers a potential that falls as $1/r^3$, the situation isreversed and the point where $\partial_rV$ disappears will be a localmaximum rather than a local minimum. **Fig to come here with code**The repulsive centrifugal piece dominates at large $r$ and the attractiveCoulomb piece wins out at small $r$. The circular orbit is then at amaximum of the effective potential and the orbits are unstable. It isthe clear that for potentials that fall as $r^n$, that one must have$n>-2$ for the orbits to be stable.Consider a potential $V(r)=\beta r$. For a particle of mass $m$ withangular momentum $L$, find the angular frequency of a circularorbit. Then find the angular frequency for small radial perturbations.For the circular orbit you search for the position $r_{\rm min}$ where the effective potential is minimized, $$\begin{eqnarray*}\partial_r\left\{\beta r+\frac{L^2}{2mr^2}\right\}&=&0,\\\beta&=&\frac{L^2}{mr_{\rm min}^3},\\r_{\rm min}&=&\left(\frac{L^2}{\beta m}\right)^{1/3},\\\dot{\theta}&=&\frac{L}{mr_{\rm min}^2}=\frac{\beta^{2/3}}{(mL)^{1/3}}\end{eqnarray*}$$ Now, we can find the angular frequency of small perturbations about the circular orbit. To do this we find the effective spring constant for the effective potential, $$\begin{eqnarray*}k_{\rm eff}&=&\partial_r^2 \left.V_{\rm eff}\right|_{r_{\rm min}}\\&=&\frac{3L^2}{mr_{\rm min}^4},\\\omega&=&\sqrt{\frac{k_{\rm eff}}{m}}\\&=&\frac{\beta^{2/3}}{(mL)^{1/3}}\sqrt{3}.\end{eqnarray*}$$ If the two frequencies, $\dot{\theta}$ and $\omega$, differ by aninteger factor, the orbit's trajectory will repeat itself each timearound. This is the case for the inverse-square force,$\omega=\dot{\theta}$, and for the harmonic oscillator,$\omega=2\dot{\theta}$. In this case, $\omega=\sqrt{3}\dot{\theta}$,and the angles at which the maxima and minima occur change with eachorbit. Code example with gravitional forceThe code example here is meant to illustrate how we can make a plot of the final orbit. We solve the equations in polar coordinates (the example here uses the minimum of the potential as initial value) and then we transform back to cartesian coordinates and plot $x$ versus $y$. We see that we get a perfect circle when we place ourselves at the minimum of the potential energy, as expected.
###Code
# Simple Gravitational Force -alpha/r
DeltaT = 0.01
#set up arrays
tfinal = 8.0
n = ceil(tfinal/DeltaT)
# set up arrays for t, v and r
t = np.zeros(n)
v = np.zeros(n)
r = np.zeros(n)
phi = np.zeros(n)
x = np.zeros(n)
y = np.zeros(n)
# Constants of the model, setting all variables to one for simplicity
alpha = 1.0
AngMom = 1.0 # The angular momentum
m = 1.0 # scale mass to one
c1 = AngMom*AngMom/(m*m)
c2 = AngMom*AngMom/m
rmin = (AngMom*AngMom/m/alpha)
# Initial conditions, place yourself at the potential min
r0 = rmin
v0 = 0.0 # starts at rest
r[0] = r0
v[0] = v0
phi[0] = 0.0
# Start integrating using the Velocity-Verlet method
for i in range(n-1):
# Set up acceleration
a = -alpha/(r[i]**2)+c1/(r[i]**3)
# update velocity, time and position using the Velocity-Verlet method
r[i+1] = r[i] + DeltaT*v[i]+0.5*(DeltaT**2)*a
anew = -alpha/(r[i+1]**2)+c1/(r[i+1]**3)
v[i+1] = v[i] + 0.5*DeltaT*(a+anew)
t[i+1] = t[i] + DeltaT
phi[i+1] = t[i+1]*c2/(r0**2)
# Find cartesian coordinates for easy plot
x = r*np.cos(phi)
y = r*np.sin(phi)
fig, ax = plt.subplots(3,1)
ax[0].set_xlabel('time')
ax[0].set_ylabel('radius')
ax[0].plot(t,r)
ax[1].set_xlabel('time')
ax[1].set_ylabel(r'Angle $\cos{\phi}$')
ax[1].plot(t,np.cos(phi))
ax[2].set_ylabel('y')
ax[2].set_xlabel('x')
ax[2].plot(x,y)
save_fig("Phasespace")
plt.show()
###Output
_____no_output_____
###Markdown
Try to change the initial value for $r$ and see what kind of orbits you get.In order to test different energies, it can be useful to look at the plot of the effective potential discussed above.However, for orbits different from a circle the above code would need modifications in order to allow us to display say an ellipse. For the latter, it is much easier to run our code in cartesian coordinates, as done here. In this code we test also energy conservation and see that it is conserved to numerical precision. The code here is a simple extension of the code we developed for homework 4.
###Code
# Common imports
import numpy as np
import pandas as pd
from math import *
import matplotlib.pyplot as plt
DeltaT = 0.01
#set up arrays
tfinal = 10.0
n = ceil(tfinal/DeltaT)
# set up arrays
t = np.zeros(n)
v = np.zeros((n,2))
r = np.zeros((n,2))
E = np.zeros(n)
# Constants of the model
m = 1.0 # mass, you can change these
alpha = 1.0
# Initial conditions as compact 2-dimensional arrays
x0 = 0.5; y0= 0.
r0 = np.array([x0,y0])
v0 = np.array([0.0,1.0])
r[0] = r0
v[0] = v0
rabs = sqrt(sum(r[0]*r[0]))
E[0] = 0.5*m*(v[0,0]**2+v[0,1]**2)-alpha/rabs
# Start integrating using the Velocity-Verlet method
for i in range(n-1):
# Set up the acceleration
rabs = sqrt(sum(r[i]*r[i]))
a = -alpha*r[i]/(rabs**3)
# update velocity, time and position using the Velocity-Verlet method
r[i+1] = r[i] + DeltaT*v[i]+0.5*(DeltaT**2)*a
rabs = sqrt(sum(r[i+1]*r[i+1]))
anew = -alpha*r[i+1]/(rabs**3)
v[i+1] = v[i] + 0.5*DeltaT*(a+anew)
E[i+1] = 0.5*m*(v[i+1,0]**2+v[i+1,1]**2)-alpha/rabs
t[i+1] = t[i] + DeltaT
# Plot position as function of time
fig, ax = plt.subplots(3,1)
ax[0].set_ylabel('y')
ax[0].set_xlabel('x')
ax[0].plot(r[:,0],r[:,1])
ax[1].set_xlabel('time')
ax[1].set_ylabel('x position')
ax[1].plot(t,r[:,0])
ax[2].set_xlabel('time')
ax[2].set_ylabel('y position')
ax[2].plot(t,r[:,1])
fig.tight_layout()
save_fig("2DimGravity")
plt.show()
print(E)
###Output
_____no_output_____ |
viral_events/viral_event_data_download.ipynb | ###Markdown
Notebook description This notebook contains code for generating query strings from Futusome viral event data, contained in the file "keywords_hashtags_initial.csv", and for downloading the data using the generated strings. The "keywords_hashtags_initial.csv" file contains columns for event id, Futusome score, event type, various other quantities describing the event, and the query string corresponding to the event. This notebook produces the file "viral_event_queries.csv", which contains the queries to be used in data download, and the file "queries_orig_matched.csv", which contains the queries mapped to event ids and origin times in Futusome data. Event origin times are used for getting data before and after the event origin in the notebook "event_select_days.ipynb". The scripts generate a .csv file as an output in each step. These can be used for inspecting how the query strings are being processed. Setup Hybra Core The results were obtained using Hybra Core version corresponding to commit e5f1c36 in https://github.com/HIIT/hybra-core/
###Code
from core import hybra
hybra.set_data_path("data/")
###Output
_____no_output_____
###Markdown
Methods for generating queries
###Code
import csv
def get_queries(path, out_path):
# Get query strings and save to .csv file given in out_path
with open(path, 'rb') as f:
reader = csv.reader(f, delimiter=',')
reader.next() # Skip file headings
out = open(out_path, 'wb') # Create .csv file for query strings
writer = csv.writer(out, delimiter=',')
for row in reader:
writer.writerow([row[25]]) # Write query string to .csv
out.close()
def format_queries(query_path, out_path):
# Format query strings not to contain platform types part and save to .csv file given in out_path
with open(query_path, 'rb') as f:
reader = csv.reader(f, delimiter=',')
out = open(out_path, "wb") # Create .csv file for formatted queries
writer = csv.writer(out, delimiter=',')
for row in reader: # Format queries not to contain platform types
query = row[0]
query = query.replace('type:twitter_tweet AND ', '')
query = query.replace('type:facebook* AND ', '')
query = query.replace('type:instagram* AND ', '')
query = query.replace(' AND type:facebook*', '')
writer.writerow( [query] ) # Write formatted query to file
out.close()
def remove_duplicate_queries(path, out_path):
# Remove duplicates from pruned queries and save to .csv file given in out_path
reader = csv.reader(open(path, 'rb'), delimiter=',')
out = open(out_path, "wb") # Create .csv file for unique queries
writer = csv.writer(out, delimiter=',')
dupl_removed = set()
for row in reader:
dupl_removed.add(row[0]) # Only keep unique queries
for q in dupl_removed:
writer.writerow( [q] ) # Write unique queries to file
out.close()
def match_query_ids(orig_file, queries_file, out_path):
# Match queries to ids of the original events and write with original queries to .csv file given in out_path
# Note that formatted queries can match more than one original queries and thus more than one event id
reader_queries = csv.reader(open(queries_file, 'rb'), delimiter = ',')
reader_orig = csv.reader(open(orig_file, 'rb'), delimiter = ',')
out = open(out_path, 'wb') # Create .csv for mapping queries to ids and original queries
writer = csv.writer(out, delimiter = ',')
writer.writerow( ['query', 'event_id', 'orig_query'] ) # Create header row
reader_orig.next() # Skip header
orig = []
for row in reader_orig:
orig.append([row[25], row[0]]) # Get original queries and corresponding ids
for query in reader_queries:
q = query[0]
for item in orig:
if q in item[0]:
writer.writerow( [q, item[1], item[0]] ) # Write queries and each match on own row in out file
out.close()
def find_query_orig_dates(orig_file, queries_id_file, out_path):
# Match queries to event origin times in Futusome viral events data and write to .csv file given in out_path
# Note that formatted queries can match more than one event id and thus have more than one origin time
reader_orig = csv.reader(open(orig_file, 'rb'), delimiter = ',')
reader_id = csv.reader(open(queries_id_file, 'rb'), delimiter = ',')
out = open(out_path, 'wb') # Create .csv file for mapping queries to event origin times
writer = csv.writer(out, delimiter = ',')
reader_id.next() # Skip headers
# Create a dictionary for matching formatted queries to event ids and origin times
match = {}
for row in reader_id:
# Use formatted queries as keys and add dictionary for ids and origin times as value for each key
if row[0] not in match.keys():
match[row[0]] = {'ids' : [row[1]], 'dates' : []}
else:
match[row[0]]['ids'].append(row[1])
# Get origin times from Futusome data
orig_data = {}
for row in reader_orig: # Use event id as key and origin time as value
orig_data[row[0]] = row[10]
for q in match.keys():
for i in match[q]['ids']:
# Match origin times from Futusome data to their corresponding queries using event ids
match[q]['dates'].append(orig_data[i])
writer.writerow(['query', 'id', 'orig_at']) # Write header row
for key, value in match.items():
# Format event ids and origin times and write with corresponding queries to out file
ids = str(value['ids']).replace('[', '')
ids = ids.replace('\'', '')
ids = ids.replace(']', '')
ids = ids.replace(', ', ';')
dates = str(value['dates']).replace('[', '')
dates = dates.replace('\'', '')
dates = dates.replace(']', '')
dates = dates.replace(', ', ';')
writer.writerow([key, ids, dates])
out.close()
###Output
_____no_output_____
###Markdown
Generate queries and origin times and save in .csv
###Code
get_queries('data/csv/keywords_hashtags_initial.csv', 'data/csv/viral_event_queries.csv')
format_queries('data/csv/viral_event_queries.csv', 'data/csv/queries_formatted_dupl.csv')
remove_duplicate_queries('data/csv/queries_formatted_dupl.csv', 'data/csv/queries_formatted.csv')
match_query_ids('data/csv/keywords_hashtags_initial.csv', 'data/csv/queries_formatted.csv', 'data/csv/queries_id_matched.csv')
find_query_orig_dates('data/csv/viral_events.csv', 'data/csv/queries_id_matched.csv', 'data/csv/queries_orig_matched.csv')
###Output
_____no_output_____
###Markdown
Read queries from .csv and download data Note that there are a number of queries which are case-sensitive. If your file system is case-insensitive, you should download these queries into a separate directory to avoid overwriting downloaded data.The following queries come in both lowercase and uppercase varieties:text.hashtag:HIFKlive text.hashtag:hifklive text.hashtag:Huoneentaulu text.hashtag:huoneentaulu text.hashtag:IsacElliotFollowSpree text.hashtag:isacelliotfollowspree text.hashtag:kakutus text.hashtag:kaKUtus text.hashtag:KOVAA text.hashtag:Kovaa text.hashtag:MiskalleKoti text.hashtag:miskallekoti text.hashtag:Museokortti text.hashtag:museokortti text.hashtag:SDPlive text.hashtag:sdplive text.hashtag:SJS2014 text.hashtag:sjs2014 text.hashtag:Taiteeniltakoulu text.hashtag:taiteeniltakoulu text.hashtag:työTetris text.hashtag:työtetris text.hashtag:Vero150v text.hashtag:vero150v text.hashtag:VIIMEISENKERRAN text.hashtag:ViimeisenKerran text.hashtag:Visio2025 text.hashtag:visio2025 text.hashtag:WU19 text.hashtag:wu19 text.hashtag:TongueOutTuesday text.hashtag:tongueouttuesday
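If your file system is case-insensitive, one possible way to list such colliding queries programmatically is sketched below; it assumes the formatted query file generated earlier in this notebook.
###Code
# Group formatted queries by their lowercased form and print the groups that have
# more than one spelling; these are the case-sensitive queries listed above.
import csv
from collections import defaultdict

groups = defaultdict(set)
with open('data/csv/queries_formatted.csv', 'rb') as f:
    for row in csv.reader(f, delimiter=','):
        groups[row[0].lower()].add(row[0])

for variants in groups.values():
    if len(variants) > 1:
        print(variants)
###Output
_____no_output_____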
###Code
import csv
queries = []
# Get queries from file and add to list
with open('data/csv/viral_event_queries.csv', 'rb') as f:
reader = csv.reader(f, delimiter=',')
for row in reader:
queries.append(row[0])
# Data download requires a Futusome API key
for q in queries:
data = hybra.data('futusome', data_folder = 'json/', query = q , api_key = '')
###Output
_____no_output_____ |
Course_project/sounds_classify/sounds_classes.ipynb | ###Markdown
Training and test set labels
###Code
train_label = pd.read_excel('./dataset/train.xlsx', engine='openpyxl', sheet_name="Sheet1")
test_label = pd.read_excel('./dataset/test.xlsx', engine='openpyxl', sheet_name="Sheet1")
###Output
_____no_output_____
###Markdown
Sliding-window scheme: window length 3 s, step 1 s; sliding windows combined with majority voting are used to handle clips of unequal length
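To make the windowing arithmetic concrete before the implementation below: a clip of `n_samples` samples yields `ceil(n_samples/(step*fs)) - (window - step)` segments, with the last segment wrapped around to the start of the clip. The 10 s clip length used here is only an illustrative value.
###Code
# Segment count for the sliding-window scheme (illustrative 10 s clip)
import numpy as np

sr = 16000                    # sampling rate in Hz
win_s, step_s = 3, 1          # window and step in seconds
duration_s = 10               # hypothetical clip length in seconds
n_samples = duration_s * sr

n_segments = int(np.ceil(n_samples / (step_s * sr))) - (win_s - step_s)
print(n_segments)             # 8 windows of 3 s, taken every 1 s
###Output
_____no_output_____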
###Code
fs = 16000
window = 3
step = 1
path = './dataset/train/'
class Generate_data():
def __init__(self, path, train_label, split_rate=0.2, random_state=1, window=3, step=1, fs=16000):
self.path = path
self.train_label = train_label
self.split_rate = split_rate
self.random_state = random_state
self.window = window
self.step = step
self.fs = fs
self.random_state = random_state
# def get_subsig(self, label, filename, sig, fs=self.fs, window=self.window, step=self.step):
def get_subsig(self, label, filename, sig):
subsig_len = np.ceil(sig.shape[0]/(self.step * self.fs)).astype('int') - (self.window - self.step)
window_size = self.fs * self.window
step_size = self.fs * self.step
dataset = []
for i in range(subsig_len):
subsig = sig[i*step_size: i*step_size+window_size]
if i == subsig_len - 1:
subsig = np.concatenate((subsig,sig[0:window_size-len(subsig)]), axis=0)
dataset.append({'label': label, 'filename': filename, 'segement':i+1, 'sig': subsig})
return dataset
def get_dataset(self, data_pd):
dataset = []
for _, line in tqdm(data_pd.iterrows()):
            sig, _ = librosa.load(self.path+'{}'.format(line['filename']), sr=self.fs)
label = line['label']
dataset += self.get_subsig(label=label, filename=line['filename'], sig=sig)
return dataset
def get_Train_Test(self):
Train_pd, Test_pd = train_test_split(self.train_label, test_size=self.split_rate, random_state=self.random_state)
self.Train_dataset = self.get_dataset(Train_pd)
self.Test_dataset = self.get_dataset(Test_pd)
return self.Train_dataset, self.Test_dataset
dataset = Generate_data(path=path, train_label=train_label, split_rate=0.2, random_state=1, window=3, step=1, fs=16000)
Train, Test = dataset.get_Train_Test()
###Output
1600it [00:05, 275.00it/s]
400it [00:01, 290.40it/s]
###Markdown
Feature construction
###Code
class feature_engine():
def __init__(self, dataset):
self.dataset = dataset
self.fs = fs
def get_feature(self, sig):
n_mels=80
melspec = librosa.feature.melspectrogram(sig, sr=self.fs, n_fft=1024, hop_length=512, n_mels=80)
logmelspec = librosa.power_to_db(melspec)
return logmelspec.reshape(n_mels, -1, 1)
def get_XY(self):
X_train = []
Y_train = []
for item in tqdm(self.dataset):
Y_train.append(item['label'])
X_train.append(self.get_feature(item['sig']))
self.X = np.array(X_train)
self.Y = to_categorical(np.array(Y_train).reshape(-1, 1), 2)
def save_train(self, name):
np.save('{}_X'.format(name), self.X)
np.save('{}_Y'.format(name), self.Y)
Train_feature = feature_engine(Train)
Test_feature = feature_engine(Test)
Train_feature.get_XY()
Test_feature.get_XY()
Train_feature.Y.sum(axis=0)
Test_feature.Y.sum(axis=0)
###Output
_____no_output_____
###Markdown
Model architecture and design: a supervised classifier whose input is a sequence of length `window*fs` and whose output is 0 or 1
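As a quick shape check (a sketch using a silent dummy clip): a 3 s input at 16 kHz with `n_fft=1024` and `hop_length=512` gives 1 + 48000//512 = 94 frames, which is where the (80, 94, 1) input shape used when building the model below comes from.
###Code
# Confirm the log-mel feature shape that the network below expects
import numpy as np
import librosa

dummy = np.zeros(16000 * 3, dtype=np.float32)   # silent 3 s clip, used only for the shape
melspec = librosa.feature.melspectrogram(dummy, sr=16000, n_fft=1024, hop_length=512, n_mels=80)
print(melspec.shape)                            # (80, 94)
###Output
_____no_output_____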
###Code
tf.test.gpu_device_name() # test if a GPU is available; training will be much faster on a GPU
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
# def identity_block(X, f, filters, stage, block):
# conv_name_base = 'res' + str(stage) + block + '_branch'
# bn_name_base = 'bn' + str(stage) + block + '_branch'
# F1, F2, F3 = filters
# X_shortcut = X
# X = Conv2D(filters = F1, kernel_size = (1,1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed = 0))(X)
# X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
# X = Activation('relu')(X)
# X = Conv2D(filters = F2, kernel_size = (f,f), strides = (1,1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed = 0))(X)
# X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
# X = Activation('relu')(X)
# X = Conv2D(filters = F3, kernel_size = (1,1), strides = (1,1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed = 0))(X)
# X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)
# X = Add()([X, X_shortcut])
# X = Activation('relu')(X)
# return X
# def convolution_block(X, f, filters, stage, block, s=2):
# conv_name_base = 'res' + str(stage) + block + '_branch'
# bn_name_base = 'bn' + str(stage) + block + '_branch'
# F1, F2, F3 = filters
# X_shortcut = X
# X = Conv2D(filters = F1, kernel_size = (1,1), strides = (s,s), name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed = 0))(X)
# X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
# X = Activation('relu')(X)
# X = Conv2D(filters = F2, kernel_size = (f,f), strides = (1,1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed = 0))(X)
# X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
# X = Activation('relu')(X)
# X = Conv2D(filters = F3, kernel_size = (1,1), strides = (1,1), name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed = 0))(X)
# X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)
# X_shortcut = Conv2D(F3, (1,1), strides = (s,s), name = conv_name_base + '1', kernel_initializer = glorot_uniform(seed=0))(X_shortcut)
# X_shortcut = BatchNormalization(axis = 3, name=bn_name_base + '1')(X_shortcut)
# X = Add()([X, X_shortcut])
# X = Activation('relu')(X)
# return X
# def my_model(input_shape = (80, 94, 1), classes = 2):
# X_input = Input(input_shape)
# # conv1
# X = ZeroPadding2D((3, 3))(X_input)
# X = Conv2D(64, (7, 7), strides = (2,2), name = 'conv1', kernel_initializer = glorot_uniform(seed=0))(X)
# X = BatchNormalization(axis = 3, name = 'bn_conv1')(X)
# X = Activation('relu')(X)
# X = MaxPooling2D((3, 3), strides = (2,2))(X)
# # basicblock1
# X = convolution_block(X, f = 3, filters = [64,64,256], stage = 2, block = 'a', s = 1)
# X = Dropout(0.2)(X)
# X = identity_block(X, 3, [64,64,256], stage=2, block='b')
# X = Dropout(0.2)(X)
# X = identity_block(X, 3, [64,64,256], stage=2, block='c')
# X = Dropout(0.2)(X)
# # X = convolution_block(X, f = 3, filters = [128,128,512], stage = 3, block = 'a', s = 2)
# # X = Dropout(0.2)(X)
# # X = identity_block(X, 3, [128,128,512], stage=3, block='b')
# # X = Dropout(0.2)(X)
# # X = identity_block(X, 3, [128,128,512], stage=3, block='c')
# # X = Dropout(0.2)(X)
# # X = identity_block(X, 3, [128,128,512], stage=3, block='d')
# # X = convolution_block(X, f = 3, filters = [256,256,1024], stage = 4, block = 'a', s = 2)
# # X = Dropout(0.2)(X)
# # X = identity_block(X, 3, [256,256,1024], stage=4, block='b')
# # X = Dropout(0.2)(X)
# # X = identity_block(X, 3, [256,256,1024], stage=4, block='c')
# # X = Dropout(0.2)(X)
# # X = identity_block(X, 3, [256,256,1024], stage=4, block='d')
# # X = Dropout(0.2)(X)
# # X = identity_block(X, 3, [256,256,1024], stage=4, block='e')
# # X = Dropout(0.2)(X)
# # X = identity_block(X, 3, [256,256,1024], stage=4, block='f')
# # X = convolution_block(X, f = 3, filters = [512,512,2048], stage = 5, block = 'a', s = 2)
# # X = Dropout(0.2)(X)
# # X = identity_block(X, 3, [512,512,2048], stage=5, block='b')
# # X = Dropout(0.2)(X)
# # X = identity_block(X, 3, [512,512,2048], stage=5, block='c')
# X = AveragePooling2D((2, 2), name='avg_pool')(X)
# X = Flatten()(X)
# X = Dense(classes, activation = 'softmax', name = 'fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X)
# model = Model(inputs = X_input, outputs = X, name = 'my_model')
# model.compile(optimizer='adam', loss = 'categorical_crossentropy', metrics=['accuracy'])
# return model
from tensorflow.keras import layers, models, Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout, BatchNormalization, Activation, GlobalAveragePooling2D
# Subclass tf.keras.layers.Layer to build the basic residual block used by ResNet-18/34
class CellBlock(layers.Layer):
def __init__(self, filter_num, stride=1):
super(CellBlock, self).__init__()
self.conv1 = Conv2D(filter_num, (3,3), strides=stride, padding='same')
self.bn1 = BatchNormalization()
self.relu = Activation('relu')
self.conv2 = Conv2D(filter_num, (3,3), strides=1, padding='same')
self.bn2 = BatchNormalization()
if stride !=1:
self.residual = Conv2D(filter_num, (1,1), strides=stride)
else:
self.residual = lambda x:x
def call (self, inputs, training=None):
x = self.conv1(inputs)
x = self.bn1(x)
x = self.relu(x)
x = self.conv2(x)
x = self.bn2(x)
r = self.residual(inputs)
x = layers.add([x, r])
output = tf.nn.relu(x)
return output
# Subclass tf.keras.Model to create ResNet-18 and ResNet-34
class ResNet(models.Model):
def __init__(self, layers_dims, nb_classes):
super(ResNet, self).__init__()
self.stem = Sequential([
Conv2D(64, (7,7), strides=(2,2),padding='same'),
BatchNormalization(),
Activation('relu'),
MaxPooling2D((3,3), strides=(2,2), padding='same')
        ]) # Stem block: initial convolution, batch norm, ReLU and max pooling
self.layer1 = self.build_cellblock(64, layers_dims[0])
self.layer2 = self.build_cellblock(128, layers_dims[1], stride=2)
self.layer3 = self.build_cellblock(256, layers_dims[2], stride=2)
self.layer4 = self.build_cellblock(512, layers_dims[3], stride=2)
self.avgpool = GlobalAveragePooling2D()
self.fc = Dense(nb_classes, activation='softmax')
def call(self, inputs, training=None):
x=self.stem(inputs)
# print(x.shape)
x=self.layer1(x)
x=self.layer2(x)
x=self.layer3(x)
x=self.layer4(x)
x=self.avgpool(x)
x=self.fc(x)
return x
def build_cellblock(self, filter_num, blocks, stride=1):
res_blocks = Sequential()
        res_blocks.add(CellBlock(filter_num, stride)) # The first block of each stage may use a stride different from 1
        for _ in range(1, blocks): # blocks = number of CellBlocks that make up this stage
res_blocks.add(CellBlock(filter_num, stride=1))
return res_blocks
def build_ResNet(NetName, nb_classes):
ResNet_Config = {'ResNet18':[2,2,2,2],
'ResNet34':[3,4,6,3]}
return ResNet(ResNet_Config[NetName], nb_classes)
np.random.seed(40)
model = build_ResNet('ResNet18', 2)
model.build(input_shape=(None, 80, 94, 1))
model.compile(optimizer='adam', loss = 'categorical_crossentropy', metrics=['accuracy'])
model.summary()
epoch=82
batch_size=64
class_weights = 1/Train_feature.Y.sum(axis=0)*Train_feature.Y.sum()/2
class_weights = {0:class_weights[0], 1:class_weights[1]}
history1 = model.fit(Train_feature.X, Train_feature.Y,
batch_size=batch_size, epochs=epoch, verbose=1,
validation_split=0.2, validation_steps=(int(0.2*len(Train_feature.X)) // batch_size),
class_weight=class_weights, shuffle=True)
import matplotlib.pyplot as plt
plt.plot(history1.history['loss'])
plt.plot(history1.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.ylim(-0.2, 5)
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper right')
plt.savefig('loss_res18.jpg')
plt.show()
max(history1.history['val_accuracy'])
plt.plot(history1.history['accuracy'])
plt.plot(history1.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.ylim(0.5, 1.05)
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.savefig('accuracy_res18.jpg')
plt.show()
###Output
_____no_output_____
###Markdown
测试集检验
###Code
def sub_devote(filename, sig, fs=16000, window=3, step=1):
subsig_len = np.ceil(sig.shape[0]/(step * fs)).astype('int') - (window - step)
window_size = fs * window
step_size = fs * step
dataset = []
for i in range(subsig_len):
subsig = sig[i*step_size: i*step_size+window_size]
if i == subsig_len - 1:
subsig = np.concatenate((subsig,sig[0:window_size-len(subsig)]), axis=0)
dataset.append(subsig)
return dataset
def get_feature(sig):
n_mels=80
melspec = librosa.feature.melspectrogram(sig, sr=fs, n_fft=1024, hop_length=512, n_mels=80)
logmelspec = librosa.power_to_db(melspec)
return logmelspec.reshape(n_mels, -1, 1)
def clasify(sequence, filename, sig, window=window, step=step):
predict_data = sub_devote(filename=filename, sig=sig, window=window, step=step)
maps = np.array([get_feature(sequencei) for sequencei in predict_data])
results = np.argmax(model.predict(maps), axis=1)
if results.mean()>0.5:
return 1
else:
return 0
test_Pred = []
for line in tqdm(Test):
sig = line['sig']
predict_label = clasify(sig, filename=line['filename'], sig=sig, window=window, step=step)
test_Pred.append(predict_label)
from sklearn.metrics import classification_report, confusion_matrix
import numpy as np
classes=['Woman', 'Man']
print(classification_report(Test_feature.Y.argmax(axis=1), test_Pred, target_names=classes, digits=4))
cm = confusion_matrix(Test_feature.Y.argmax(axis=1), test_Pred)
sns.heatmap(cm, annot=True, fmt='.20g')
plt.xlabel('Y_Pred')
plt.ylabel('Y_True')
plt.savefig('confusion.jpg')
###Output
_____no_output_____
###Markdown
预测结果
###Code
result = []
for _, line in tqdm(test_label.iterrows()):
sig, _ = librosa.load('./dataset/test/{}'.format(line['filename']), sr=fs)
# model predict
predict_label = clasify(sig, filename=line['filename'], sig=sig, window=window, step=step)
result.append([line['filename'], predict_label])
result = pd.DataFrame(result, columns=['filename', 'label'])
result.to_excel('result.xlsx', index=False)
###Output
500it [00:21, 23.45it/s]
###Markdown
PA
###Code
sig, _ = librosa.load('test3.wav', sr=fs)
# model predict
predict_label = clasify(sig, filename=line['filename'], sig=sig, window=window, step=step)
predict_label
import librosa.display
sig, _ = librosa.load('test1.wav', sr=fs)
melspec = librosa.feature.melspectrogram(sig, sr=fs, n_fft=1024, hop_length=512, n_mels=80)
logmelspec = librosa.power_to_db(melspec)
librosa.display.specshow(logmelspec, y_axis='mel', x_axis='time');
plt.title('Spectrogram for 李玉刚');
plt.colorbar();
plt.savefig('李玉刚.png')
sig, _ = librosa.load('test2.wav', sr=fs)
melspec = librosa.feature.melspectrogram(sig, sr=fs, n_fft=1024, hop_length=512, n_mels=80)
logmelspec = librosa.power_to_db(melspec)
librosa.display.specshow(logmelspec, y_axis='mel', x_axis='time');
plt.title('Spectrogram for 周深');
plt.colorbar();
plt.savefig('周深.png')
sig, _ = librosa.load('test3.wav', sr=fs)
melspec = librosa.feature.melspectrogram(sig, sr=fs, n_fft=1024, hop_length=512, n_mels=80)
logmelspec = librosa.power_to_db(melspec)
librosa.display.specshow(logmelspec, y_axis='mel', x_axis='time');
plt.title('Mel Spectrogram for 潘倩倩');
plt.colorbar();
plt.savefig('潘倩倩.png')
librosa.display.specshow(Train_feature.X[7].reshape(80, -1), y_axis='mel', x_axis='time');
librosa.display.specshow(Train_feature.X[4].reshape(80, -1), y_axis='mel', x_axis='time');
plt.title('Mel Spectrogram for Male');
plt.colorbar();
plt.savefig('mel_trainM.png')
###Output
_____no_output_____ |
samples/algorithms/variational-algorithms/Variational Quantum Algorithms.ipynb | ###Markdown
Variational Quantum Algorithms in Q# Abstract In this sample, we will explore how the rich classical control provided by Q# can be used to efficiently write out variational quantum algorithms. In particular, we'll focus on the example of the _variational quantum eigensolver_, also known as VQE. Preamble $\renewcommand{\ket}[1]{\left|#1\right\rangle}$
###Code
from itertools import product, repeat, starmap
import numpy as np
import qutip as qt
import qsharp
###Output
Preparing Q# environment...
###Markdown
For the Q# code embedded in this notebook, it will be helpful to open a few namespaces before we proceed.
###Code
%%qsharp
open Microsoft.Quantum.Arrays;
open Microsoft.Quantum.Characterization;
open Microsoft.Quantum.Convert;
open Microsoft.Quantum.Diagnostics;
open Microsoft.Quantum.Random;
open Microsoft.Quantum.Math;
###Output
_____no_output_____
###Markdown
Introducing the Variational Quantum Eigensolver In variational quantum algorithms, rather than performing all of our computation on the quantum device itself, we use a quantum program to estimate some quantity, then use a classical optimizer to find inputs to our quantum program that minimize or maximize that quantity. That is, rather than thinking of classical computation purely as a pre-processing or post-processing step, our computation uses both classical and quantum computation together. > **💡 TIP:** We could also consider using classical computation while qubits are still alive, rather than returning an estimate to a classical optimizer. This approach is indeed very useful, for instance in iterative phase estimation. For this sample, however, we'll focus on the variational case. For example, consider the problem of finding the _ground state energy_ of a given operator $H$, often called a _Hamiltonian_. The ground state energy of $H$ is defined as its smallest eigenvalue $E_0$, as the Hamiltonian represents the possible energy configurations of a given physical system. As stated, even though this is a minimization, we need to know the eigenvalues of $H$ to make any progress. It's pretty straightforward to rephrase the problem in terms of arbitrary states, however. In particular, the expectation value $\left\langle H \right\rangle = \left\langle \psi | H | \psi \right\rangle$ must be at least as large as $E_0$. To see this, we can expand the expectation value in terms of the eigenvectors $\{\ket{\phi_i}\}$ of $H$, such that $H\ket{\phi_i} = E_i\ket{\phi_i}$. Since the decomposition of $H$ into eigenvectors gives us a basis, we know that $\left\langle \phi_i | \phi_j \right\rangle$ is zero whenever $i \ne j$, allowing us to expand $\ket{\psi}$ into the eigenbasis of $H$ as $\ket{\psi} = \sum_i \alpha_i \ket{\phi_i}$ for some complex coefficients $\{\alpha_i\}$. Using this decomposition, we can expand the expectation $\left\langle \psi | H | \psi \right\rangle$ in terms of the eigenvalues of $H$:$$\begin{aligned} \left\langle \psi | H | \psi \right\rangle & = \sum_i |\alpha_i|^2 \left\langle \phi_i | H | \phi_i \right\rangle \\ & = \sum_i |\alpha_i|^2 \left\langle \phi_i | E_i \phi_i \right\rangle \\ & = \sum_i |\alpha_i|^2 E_i.\end{aligned}$$Since each $|\alpha_i|^2$ is nonnegative, and since $\sum_i |\alpha_i|^2 = 1$, the above gives us that$$ E_0 = \min_i E_i \le \left\langle \psi | H | \psi \right\rangle \le \max_i E_i.$$Thus, we can rephrase the original problem as a minimization not just over eigenstates, but over all arbitrary states,$$ E_0 = \min_{\ket{\psi}} \left\langle \psi | H | \psi \right\rangle,$$where the minimum is achieved when $\ket{\psi} = \ket{\phi_0}$.Using this rephrasing, the _variational quantum eigensolver_ algorithm is just the variational algorithm that we get by using a classical optimizer to find $\min_{\ket{\psi}} \left\langle \psi | H | \psi \right\rangle$. In pseudocode:```operation FindMinimumEigenvalue { Pick an initial guess |ψ⟩. until target accuracy reached { Estimate E = ⟨ψ | H | ψ⟩ using a quantum operation. Use a classical optimizer to pick the next |ψ⟩. 
} Return the minimum E that we found and the state |ψ⟩ that achieved that minimum.}```  To turn the above pseudocode into a real quantum program, we still need to figure out two things:- How to estimate $\left\langle \psi | H | \psi \right\rangle$ for a given state $\ket{\psi}$.- How to optimize over all quantum states $\ket{\psi}$.In the rest of this sample, we'll see how to do each of these, and how you can use Q to write a VQE implementation. Estimating expectation values of $H$ Before proceeding to see how to estimate $\left\langle \psi | H | \psi \right\rangle$, however, it helps to consider what that expectation value _is_. To do so, let's consider a concrete example of a two-qubit Hamiltonian, using QuTiP to construct a Python object representing $H$.
###Code
H = qt.Qobj([
[-2, 0, 0, 3],
[0, 7, 0, 1],
[0, 0, -4, 0],
[3, 1, 0, -1]
], dims=[[2, 2]] * 2)
H
###Output
_____no_output_____
###Markdown
Here, `H` is a Python object representing the 4 × 4 matrix $H$. We can use the `eigenstates` method provided by QuTiP to quickly find the eigenvalues and eigenvectors of $H$:
###Code
H.eigenstates()
###Output
_____no_output_____
###Markdown
In particular, we can use the `min` function provided by Python to minimize over the eigenvalues of $H$ and find its ground state $\ket{\phi_0}$.
###Code
min_energy, ground_state = min(zip(*H.eigenstates()), key=lambda eig: eig[0])
min_energy, ground_state
###Output
_____no_output_____
###Markdown
Here, `ground_state` represents the eigenvector $\ket{\phi_0}$ corresponding to the smallest eigenvalue $E_0$ of $H$. By the above argument, we would expect that $\left\langle \phi_0 | H | \phi_0 \right\rangle = E_0$. We can check that using QuTiP as well:
###Code
(ground_state.dag() * H * ground_state)[0, 0]
###Output
_____no_output_____
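As an added sanity check (not part of the original sample), we can also confirm the variational bound numerically: no randomly chosen state should have an expectation value below `min_energy`. This sketch uses NumPy and QuTiP directly rather than any quantum simulation.

```python
import numpy as np
import qutip as qt

# Draw random normalized two-qubit states and check ⟨ψ|H|ψ⟩ ≥ E0 for each one.
rng = np.random.default_rng(0)
for _ in range(1000):
    amplitudes = rng.normal(size=(4, 1)) + 1j * rng.normal(size=(4, 1))
    psi = qt.Qobj(amplitudes, dims=[[2, 2], [1, 1]]).unit()
    assert qt.expect(H, psi) >= min_energy - 1e-9

print("No random state had energy below E0 =", min_energy)
```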
###Markdown
What's going on here? Effectively, for any operator $O$ such that $O^{\dagger} = O$, expressions like $\left\langle \psi | O | \psi \right\rangle$ represent another way of thinking about quantum measurement. In particular, if we think of the eigenvalues of $O$ as labels for its corresponding eigenvectors, then $\left\langle \psi | O | \psi \right\rangle$ is the average label that we get when we measure $O$ in the basis of its eigenvectors. > **💡 TIP:** This way of thinking about quantum measurement is sometimes called the _observable framework_, with $O$ being called an _observable_. We avoid using this terminology here to avoid confusion, however, as the expectation value of $O$ cannot be observed directly, but only inferred from repeated measurements. For example, if $O = Z$, then the eigenstate $\ket{0}$ is labeled by its eigenvalue $+1$, while $\ket{1}$ is labeled by $-1$. The expectation value $\left\langle \psi | Z | \psi \right\rangle$ is then the probability of getting a `Zero` minus the probability of getting a `One`. More generally, finding the expectation value of an arbitrary Pauli operator for an arbitrary input state is straightforward using the `Measure` and `EstimateFrequencyA` operations.
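Before turning to the Q# implementation, here is a small added NumPy illustration of the identity $\left\langle \psi | Z | \psi \right\rangle = p_0 - p_1$ for a single qubit; it is only a sketch and not part of the original sample.

```python
import numpy as np

# |ψ⟩ = cos(θ/2)|0⟩ + sin(θ/2)|1⟩ for an arbitrary angle θ.
theta = 1.234
psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])

Z = np.diag([1.0, -1.0])
p0 = abs(psi[0]) ** 2   # probability of observing Zero
p1 = abs(psi[1]) ** 2   # probability of observing One

print(p0 - p1, psi.conj() @ Z @ psi)   # the two values coincide (= cos θ)
```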
###Code
%%qsharp
operation EstimatePauliExpectation(pauli : Pauli[], preparation : (Qubit[] => Unit is Adj), nShots : Int) : Double {
return 2.0 * EstimateFrequencyA(
preparation,
Measure(pauli, _),
Length(pauli),
nShots
) - 1.0;
}
###Output
_____no_output_____
###Markdown
Here, we've represented the state $\ket{\psi}$ by an operation `preparation` that prepares that state. Since each Pauli operator other than the identity operator has exactly two eigenvalues, $+1$ and $-1$, corresponding to `Zero` and `One` respectively, we can turn the estimate of the probability $p_0$ with which `Measure(pauli, _)` returns `Zero` into an expectation value $p_0 - p_1 = p_0 - (1 - p_0) = 2 p_0 - 1$.For example, doing nothing prepares the state $\ket{0}$, so we get an expectation value of $1$:
###Code
%%qsharp
operation EstimateExpectationOfZero() : Double {
return EstimatePauliExpectation(
[PauliZ],
NoOp,
100
);
}
EstimateExpectationOfZero.simulate()
###Output
_____no_output_____
###Markdown
Similarly, using `X` to prepare $\ket{1}$ gives us an expectation of $-1$, while using `H` to prepare $\ket{+} = \frac{1}{\sqrt{2}} (\ket{0} + \ket{1})$ gives an expectation value of $0$.
###Code
%%qsharp
operation EstimateExpectationOfOne() : Double {
return EstimatePauliExpectation(
[PauliZ],
ApplyToEachCA(X, _),
100
);
}
EstimateExpectationOfOne.simulate()
%%qsharp
operation EstimateExpectationOfPlus() : Double {
return EstimatePauliExpectation(
[PauliZ],
ApplyToEachCA(H, _),
100
);
}
EstimateExpectationOfPlus.simulate()
###Output
_____no_output_____
###Markdown
Note that in practice, we won't always get exactly 0 due to using a finite number of measurements. In any case, to recap, it's easy to use a quantum program to find $\left\langle \psi | H | \psi \right\rangle$ in the special case that $H$ is a multi-qubit Pauli operator. What about the more general case? The linearity of quantum mechanics saves us again! It turns out that we can expand the expectation of any other operator in terms of expectations of Pauli operators. To see how that works, suppose that $H = 2 Z - X$.
###Code
2 * qt.sigmaz() - qt.sigmax()
###Output
_____no_output_____
###Markdown
We can expand $\left\langle \psi | (2Z - X) | \psi \right\rangle$ by using linearity:$$ \left\langle \psi | (2Z - X) | \psi \right\rangle = 2 \left\langle \psi | Z | \psi \right\rangle - \left\langle \psi | X | \psi \right\rangle.$$ Each of the terms in this expansion is something that we can estimate easily using our `EstimatePauliExpectation` operation from above.
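As an added exact check of this linearity (using QuTiP instead of sampling), we can compare both sides of the expansion for the $\ket{+}$ state:

```python
import qutip as qt

plus = (qt.basis(2, 0) + qt.basis(2, 1)).unit()       # |+⟩
lhs = qt.expect(2 * qt.sigmaz() - qt.sigmax(), plus)  # ⟨+|(2Z - X)|+⟩
rhs = 2 * qt.expect(qt.sigmaz(), plus) - qt.expect(qt.sigmax(), plus)

print(lhs, rhs)   # both equal -1, since ⟨+|Z|+⟩ = 0 and ⟨+|X|+⟩ = 1
```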
###Code
%%qsharp
operation EstimateExpectation(terms : (Double, Pauli[])[], preparation : (Qubit[] => Unit is Adj), nShots : Int) : Double {
mutable sum = 0.0;
for (coefficient, pauli) in terms {
set sum += coefficient * EstimatePauliExpectation(
pauli, preparation, nShots
);
}
return sum;
}
###Output
_____no_output_____
###Markdown
With this, we almost have everything we need to estimate expectations $\left\langle \psi | H | \psi \right\rangle$. We just need a way of finding the decomposition of $H$ into Pauli operators. Thankfully, QuTiP can help here as well.
###Code
def pauli_basis(n_qubits):
scale = 2 ** n_qubits
return {
tuple(P): qt.tensor(*(p.as_qobj() for p in P)) / scale
for P in product(qsharp.Pauli, repeat=n_qubits)
}
def expand_in_pauli_basis(op):
return [
(coeff, list(label))
for label, coeff in {
label: (P.dag() * op).tr()
for label, P in pauli_basis(n_qubits=len(op.dims[0])).items()
}.items()
if abs(coeff) >= 1e-10
]
###Output
_____no_output_____
###Markdown
For example, we can use the two functions above to find that $H = -3 𝟙 \otimes Z + 0.5 X \otimes 𝟙 + 1.5 X \otimes X - 0.5 X \otimes Z - 1.5 Y \otimes Y + 2.5 Z \otimes 𝟙 - 1.5 Z \otimes Z$:
###Code
H_decomposition = expand_in_pauli_basis(H)
H_decomposition
###Output
_____no_output_____
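As an added check, we can rebuild $H$ from these coefficients and confirm that the decomposition is exact. The sketch below reuses the same `.as_qobj()` conversion that `pauli_basis` uses above.

```python
import qutip as qt

# Sum coefficient × (tensor product of Paulis) over all terms in the decomposition.
H_rebuilt = qt.qzero([2, 2])
for coeff, label in H_decomposition:
    H_rebuilt += coeff * qt.tensor(*(p.as_qobj() for p in label))

print((H_rebuilt - H).norm())   # ≈ 0, so the expansion reproduces H exactly
```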
###Markdown
This decomposition is exactly what we need to pass as `terms` above. For example, to estimate the expectation of the $\ket{++}$ state, $\left\langle ++ | H | ++ \right\rangle$, we can pass `terms` to `EstimateExpectation`. In doing so, we'll use the name "energy" for our new operation, reflecting the fact that expectation values of Hamiltonian operators $H$ represent the average energy of a system given the quantum state of that system.
###Code
%%qsharp
operation EstimateEnergyOfPlus(terms : (Double, Pauli[])[]) : Double {
return EstimateExpectation(
terms,
ApplyToEachCA(H, _),
100
);
}
EstimateEnergyOfPlus.simulate(terms=H_decomposition)
###Output
_____no_output_____
###Markdown
This is pretty far from the minimum $E_0$ we found from the eigenvalue decomposition above, so in the next part we'll see one more trick we can use to write our VQE implementation. Preparing ansatz states Recall that our goal is to use a classical optimizer to find the minimum energy $E_0$ of some Hamiltonian operator $H$:$$ E_0 = \min_{\ket{\psi}} \left\langle \psi | H | \psi \right\rangle.$$ In practice, we can't reasonably optimize over all possible quantum states $\ket{\psi}$, so that the above optimization problem is intractable for all but the smallest systems. Instead, we note that we can find an upper-bound for $E_0$ by solving a simpler problem. Suppose that there is a set $A$ of states that are easier to prepare. Then,$$ E_0 = \min_{\ket{\psi}} \left\langle \psi | H | \psi \right\rangle \le \min_{\ket{\psi} \in A} \left\langle \psi | H | \psi \right\rangle.$$ If we choose $A$ carefully so that $\ket{\phi_0} \in A$, this inequality will be tight. More often, however, $\ket{\phi_0}$ won't actually be in $A$, but will be close to some other state in $A$, giving us a reasonable approximation to $E_0$. We say that $A$ is our _ansatz_ for running VQE; in this way, $A$ plays a similar role to a model in an ML training loop, representing what we believe to be a reasonable set of guesses for $\ket{\phi_0}$. One can get pretty involved with their choice of ansatz, but for this sample, we'll choose a really simple one, and set $A$ to be those states that can be prepared by a small number of Pauli rotations.
###Code
%%qsharp
operation PrepareAnsatz(axes : Pauli[][], angles : Double[], register : Qubit[]) : Unit is Adj {
for (axis, angle) in Zipped(axes, angles) {
Exp(axis, angle, register);
}
}
###Output
_____no_output_____
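To build some intuition for what `PrepareAnsatz` does, here is an added QuTiP sketch of the same circuit acting on $\ket{0\cdots0}$, assuming that Q#'s `Exp(axis, angle, register)` applies $e^{i \theta P}$ for the multi-qubit Pauli $P$ given by `axis`; it is an illustrative classical picture only.

```python
import qutip as qt

def prepare_ansatz_classically(axes, angles):
    """Apply exp(i·angle·P) for each Pauli axis P in sequence, starting from |0…0⟩."""
    n_qubits = len(axes[0])
    state = qt.tensor(*[qt.basis(2, 0)] * n_qubits)
    for axis, angle in zip(axes, angles):
        pauli = qt.tensor(*(p.as_qobj() for p in axis))
        state = (1j * angle * pauli).expm() * state
    return state

# Same axes used for ansatz_axes below: a Z⊗Z rotation followed by an X⊗Y rotation.
print(prepare_ansatz_classically(
    [[qsharp.Pauli.Z, qsharp.Pauli.Z], [qsharp.Pauli.X, qsharp.Pauli.Y]],
    [1.2, 1.9]
))
```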
###Markdown
Our minimization $\min_{\ket{\psi} \in A} \left\langle \psi | H | \psi \right\rangle$ is now a minimization over `angles` for some fixed list of Pauli rotation axes. Using `DumpMachine` and `DumpRegister`, we can visualize how this ansatz works for a few different choices of parameters `angles`, convincing ourselves that $A$ contains enough interesting states to find $E_0$.
###Code
%%qsharp
operation DumpAnsatz(axes : Pauli[][], angles : Double[]) : Unit {
use register = Qubit[Length(angles)];
within {
PrepareAnsatz(axes, angles, register);
} apply {
DumpRegister((), register);
}
}
ansatz_axes = [[qsharp.Pauli.Z, qsharp.Pauli.Z], [qsharp.Pauli.X, qsharp.Pauli.Y]]
DumpAnsatz.simulate(axes=ansatz_axes, angles=[1.2, 1.9])
DumpAnsatz.simulate(axes=ansatz_axes, angles=[1.2, -0.7])
###Output
_____no_output_____
###Markdown
At this point, it's helpful to write some new operations that directly estimate the energy of a state given a parameterization of our ansatz, rather than an opaque operation that prepares the state.
###Code
%%qsharp
operation EstimateEnergyAtAnsatz(terms : (Double, Pauli[])[], axes : Pauli[][], angles : Double[], nShots : Int) : Double {
return EstimateExpectation(
terms,
PrepareAnsatz(axes, angles, _),
nShots
);
}
EstimateEnergyAtAnsatz.simulate(
terms=H_decomposition,
axes=ansatz_axes,
angles=[1.2, 1.9],
nShots=1000
)
###Output
_____no_output_____
###Markdown
Running VQE in Q# Using this new operation, we have something that looks much more like the classical optimization problems that we may be used to! Indeed, we can simply use our favorite optimization algorithm to pick different values for `angles` until we find a good approximation for $E_0$. In this sample, we'll use the _SPSA algorithm_ to optimize `angles`, as this algorithm works especially well for objective functions that have some amount of noise, such as that added by taking a finite number of shots above. We won't go through the details of SPSA here, but if you're interested in learning more, check out [`Optimization.qs`](../edit/Optimization.qs) in this folder to see how we implemented SPSA in Q#.
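For intuition only, here is a rough Python sketch of the SPSA idea (a single random simultaneous perturbation gives a two-evaluation gradient estimate); the actual Q# implementation used by this sample lives in `Optimization.qs`, and the gain constants below are illustrative assumptions rather than the options used there.

```python
import numpy as np

def spsa_minimize(objective, theta0, n_iterations=200, a=0.2, c=0.1, seed=0):
    """Bare-bones SPSA: two objective evaluations per step give a stochastic gradient."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    for k in range(1, n_iterations + 1):
        a_k = a / k ** 0.602                       # standard gain-decay exponents
        c_k = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        # Since each delta entry is ±1, dividing by delta equals multiplying by it.
        g_hat = (objective(theta + c_k * delta) - objective(theta - c_k * delta)) / (2 * c_k) * delta
        theta -= a_k * g_hat
    return theta, objective(theta)

# A noisy quadratic stands in for the shot-noise-limited energy estimate.
noisy = lambda t: float(np.sum((t - 1.0) ** 2) + np.random.normal(0, 0.01))
print(spsa_minimize(noisy, [0.0, 0.0]))
```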
###Code
%%qsharp
open Microsoft.Quantum.Samples;
operation FindMinimumEnergy(terms : (Double, Pauli[])[], axes : Pauli[][], initialGuess : Double[], nShots : Int) : (Double[], Double) {
let oracle = EstimateEnergyAtAnsatz(terms, axes, _, nShots);
let options = DefaultSpsaOptions();
return FindMinimumWithSpsa(
oracle,
initialGuess,
options
);
}
FindMinimumEnergy.simulate(terms=H_decomposition, axes=ansatz_axes, initialGuess=[1.2, 1.9], nShots=10_000)
min_energy
###Output
_____no_output_____ |
clustering/tfidf_svd_norm.ipynb | ###Markdown
Import required libraries
###Code
import os
import re
from collections import Counter
from time import time
import gensim
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm
from ds_utils.config import set_display_options, set_random_seed  # set_random_seed is called below; it is assumed to live in ds_utils.config
from ds_utils.clustering import Tokenizer, load_data, clean_news_data, vectorize, mbkmeans_clusters
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer
set_display_options()
set_random_seed()
###Output
[nltk_data] Downloading package stopwords to
[nltk_data] /Users/dylancastillo/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
###Markdown
Read data
###Code
df = load_data("news")
df.shape
df.columns
###Output
_____no_output_____
###Markdown
Clean data
###Code
df = clean_news_data(df)
df.sample(1).T
###Output
_____no_output_____
###Markdown
Review tokens and vocabulary Tokens
###Code
sample_text = df.sample(1)
print(f"SAMPLE TEXT: {sample_text['text'].values[0]}")
print(f"------")
print(f"TOKENS: {sample_text['tokens'].values[0]}")
###Output
SAMPLE TEXT: Klopp savours quality of Liverpool goals after fightback against Newcastle | Sadio Mane grabs two as Liverpool continue perfect start to season at Anfield | Liverpool 3 Newcastle 1
Jürgen Klopp hailed the quality of the goals his Liverpool side scored as the Premier League leaders came from behind to beat Newcastle 3-1 at Anfield.
The Reds were stunned as Steve Bruces Newcastle claimed a shock early lead with a… [+2641 chars]
------
TOKENS: ['klopp', 'savours', 'quality', 'liverpool', 'goals', 'fightback', 'newcastle', 'sadio', 'mane', 'grabs', 'two', 'liverpool', 'continue', 'perfect', 'start', 'season', 'anfield', 'liverpool', 'newcastle', 'jürgen', 'klopp', 'hailed', 'quality', 'goals', 'liverpool', 'side', 'scored', 'premier', 'league', 'leaders', 'came', 'behind', 'beat', 'newcastle', 'anfield', 'reds', 'stunned', 'steve', 'bruces', 'newcastle', 'claimed', 'shock', 'early', 'lead']
###Markdown
Vocabulary
###Code
docs = df["text"].values
tokenized_docs = df["tokens"].values
vocab = Counter()
for token in tokenized_docs:
vocab.update(token)
len(vocab)
vocab.most_common(10)
###Output
_____no_output_____
###Markdown
TF-IDF + SVD + Normalizer
###Code
analyzer = Tokenizer()
bow = TfidfVectorizer(analyzer=analyzer, max_df=.5, min_df=5)
svd = TruncatedSVD(200)
normalizer = Normalizer(copy=False)
vectorizer = make_pipeline(bow, svd, normalizer)
vectorized_docs = vectorizer.fit_transform(docs)
vectorized_docs.shape
###Output
_____no_output_____
###Markdown
Generate and analyze clusters
###Code
clustering, cluster_labels = mbkmeans_clusters(vectorized_docs, 50, print_silhouette_values=True)
df_clusters = pd.DataFrame({
"text": docs,
"tokens": [" ".join(text) for text in tokenized_docs],
"cluster": cluster_labels
})
###Output
For n_clusters = 50
Silhouette coefficient: 0.07
Inertia:7382.775819185531
Silhouette values:
Cluster 37: Size:73 | Avg:0.33 | Min:0.13 | Max: 0.48
Cluster 19: Size:136 | Avg:0.29 | Min:0.06 | Max: 0.43
Cluster 25: Size:119 | Avg:0.26 | Min:-0.00 | Max: 0.44
Cluster 32: Size:106 | Avg:0.24 | Min:0.06 | Max: 0.39
Cluster 47: Size:73 | Avg:0.22 | Min:0.03 | Max: 0.37
Cluster 2: Size:282 | Avg:0.20 | Min:0.05 | Max: 0.36
Cluster 40: Size:185 | Avg:0.19 | Min:0.07 | Max: 0.34
Cluster 26: Size:362 | Avg:0.19 | Min:0.00 | Max: 0.36
Cluster 12: Size:109 | Avg:0.18 | Min:-0.03 | Max: 0.34
Cluster 33: Size:311 | Avg:0.17 | Min:0.01 | Max: 0.30
Cluster 10: Size:179 | Avg:0.17 | Min:-0.02 | Max: 0.35
Cluster 36: Size:104 | Avg:0.14 | Min:0.01 | Max: 0.28
Cluster 41: Size:143 | Avg:0.14 | Min:0.01 | Max: 0.29
Cluster 16: Size:107 | Avg:0.14 | Min:-0.02 | Max: 0.30
Cluster 42: Size:87 | Avg:0.14 | Min:-0.01 | Max: 0.28
Cluster 45: Size:127 | Avg:0.12 | Min:-0.01 | Max: 0.28
Cluster 5: Size:98 | Avg:0.11 | Min:0.00 | Max: 0.28
Cluster 6: Size:106 | Avg:0.11 | Min:-0.07 | Max: 0.28
Cluster 34: Size:164 | Avg:0.10 | Min:-0.10 | Max: 0.24
Cluster 38: Size:127 | Avg:0.09 | Min:-0.09 | Max: 0.20
Cluster 20: Size:200 | Avg:0.08 | Min:-0.08 | Max: 0.20
Cluster 27: Size:265 | Avg:0.08 | Min:-0.05 | Max: 0.20
Cluster 7: Size:125 | Avg:0.07 | Min:-0.08 | Max: 0.19
Cluster 3: Size:213 | Avg:0.07 | Min:-0.05 | Max: 0.19
Cluster 17: Size:206 | Avg:0.07 | Min:-0.05 | Max: 0.18
Cluster 14: Size:205 | Avg:0.07 | Min:-0.04 | Max: 0.21
Cluster 1: Size:143 | Avg:0.06 | Min:-0.04 | Max: 0.21
Cluster 31: Size:165 | Avg:0.06 | Min:-0.07 | Max: 0.21
Cluster 21: Size:158 | Avg:0.06 | Min:-0.04 | Max: 0.16
Cluster 48: Size:115 | Avg:0.05 | Min:-0.06 | Max: 0.17
Cluster 46: Size:169 | Avg:0.05 | Min:-0.08 | Max: 0.17
Cluster 23: Size:231 | Avg:0.05 | Min:-0.08 | Max: 0.15
Cluster 43: Size:109 | Avg:0.04 | Min:-0.08 | Max: 0.17
Cluster 9: Size:293 | Avg:0.04 | Min:-0.06 | Max: 0.12
Cluster 44: Size:187 | Avg:0.04 | Min:-0.06 | Max: 0.13
Cluster 8: Size:297 | Avg:0.04 | Min:-0.08 | Max: 0.16
Cluster 24: Size:130 | Avg:0.04 | Min:-0.08 | Max: 0.14
Cluster 13: Size:205 | Avg:0.03 | Min:-0.06 | Max: 0.12
Cluster 35: Size:165 | Avg:0.02 | Min:-0.08 | Max: 0.14
Cluster 30: Size:249 | Avg:0.02 | Min:-0.06 | Max: 0.13
Cluster 4: Size:186 | Avg:0.02 | Min:-0.07 | Max: 0.12
Cluster 18: Size:307 | Avg:0.02 | Min:-0.06 | Max: 0.10
Cluster 39: Size:120 | Avg:0.01 | Min:-0.07 | Max: 0.10
Cluster 29: Size:379 | Avg:0.01 | Min:-0.09 | Max: 0.10
Cluster 11: Size:371 | Avg:0.00 | Min:-0.12 | Max: 0.09
Cluster 22: Size:327 | Avg:0.00 | Min:-0.12 | Max: 0.09
Cluster 28: Size:252 | Avg:-0.01 | Min:-0.09 | Max: 0.07
Cluster 15: Size:346 | Avg:-0.01 | Min:-0.10 | Max: 0.06
Cluster 49: Size:341 | Avg:-0.02 | Min:-0.09 | Max: 0.03
Cluster 0: Size:425 | Avg:-0.02 | Min:-0.14 | Max: 0.06
###Markdown
Evaluate top terms per cluster (based on clusters' centroids)
###Code
print("Top terms per cluster (based on centroids):")
original_space_centroids = svd.inverse_transform(clustering.cluster_centers_)
order_centroids = original_space_centroids.argsort()[:, ::-1]
terms = bow.get_feature_names()
for i in range(50):
print("Cluster %d:" % i, end='')
for ind in order_centroids[i, :5]:
print(' %s' % terms[ind], end='')
print()
###Output
Top terms per cluster (based on centroids):
Cluster 0: vaping said car strike us
Cluster 1: food meat fast plant based
Cluster 2: hurricane dorian bahamas storm carolina
Cluster 3: man arrested charged old murder
Cluster 4: government nuclear deal iran anti
Cluster 5: chief executive officer ceo said
Cluster 6: south africa african korea attacks
Cluster 7: minister prime trudeau kashmir justin
Cluster 8: year old woman boy died
Cluster 9: trump president donald trumps us
Cluster 10: saudi oil arabia attacks drone
Cluster 11: credit card make know people
Cluster 12: impeachment trump inquiry house democrats
Cluster 13: two years one men women
Cluster 14: million year pay jackpot nearly
Cluster 15: gun state states us united
Cluster 16: russia russian ukraine moscow putin
Cluster 17: climate school change students high
Cluster 18: found could health study may
Cluster 19: hong kong protests protesters police
Cluster 20: police officer officers shot paris
Cluster 21: hours murder air reports correspondent
Cluster 22: former court sexual federal judge
Cluster 23: company billion wework valuation public
Cluster 24: election party leader presidential opposition
Cluster 25: facebook messenger unfolds chat happening
Cluster 26: video abc broadcast interviews breaking
Cluster 27: trade us china economy stocks
Cluster 28: people dublin three said city
Cluster 29: like film one review years
Cluster 30: first time since years year
Cluster 31: day ashes england test australias
Cluster 32: apple iphone event apples pro
Cluster 33: brexit johnson boris deal prime
Cluster 34: democratic presidential debate biden warren
Cluster 35: house trump white president whistleblower
Cluster 36: nfl brown season antonio raiders
Cluster 37: mugabe robert zimbabwe died zimbabwes
Cluster 38: taliban security bolton afghanistan trump
Cluster 39: nintendo announced switch google microsoft
Cluster 40: cup world rugby japan ireland
Cluster 41: league premier manchester season united
Cluster 42: google tech general antitrust facebook
Cluster 43: amazon china chinese companies trade
Cluster 44: bank open us rates rate
Cluster 45: fire boat california people killed
Cluster 46: media facebook social story business
Cluster 47: netanyahu israeli minister prime election
Cluster 48: border trump us mexico president
Cluster 49: one ireland football season win
###Markdown
Evaluate top terms per cluster (based on words frequencies)
###Code
print("Top terms per cluster (based on words frequencies):")
for i in range(50):
empty = ""
most_frequent = Counter(" ".join(df_clusters.query(f"cluster == {i}")["tokens"]).split()).most_common(5)
for t in most_frequent:
empty += f"{t[0]}({str(t[1])}) "
print(f"Cluster {i}: {empty}")
###Output
Top terms per cluster (based on words frequencies):
Cluster 0: said(176) us(146) vaping(143) strike(104) workers(103)
Cluster 1: food(136) meat(73) fast(59) plant(43) said(42)
Cluster 2: hurricane(616) dorian(546) bahamas(280) storm(238) carolina(99)
Cluster 3: man(462) year(75) old(69) said(65) police(58)
Cluster 4: government(279) nuclear(81) deal(76) iran(67) said(66)
Cluster 5: chief(189) executive(119) said(56) officer(37) reuters(36)
Cluster 6: south(207) africa(68) african(55) korea(52) north(39)
Cluster 7: minister(166) prime(98) trudeau(74) kashmir(73) justin(68)
Cluster 8: year(466) old(374) woman(174) boy(75) died(71)
Cluster 9: trump(697) president(464) donald(335) us(166) trumps(99)
Cluster 10: saudi(440) oil(322) arabia(129) attacks(112) us(106)
Cluster 11: credit(94) get(93) money(87) time(83) card(81)
Cluster 12: impeachment(214) trump(177) house(146) president(136) inquiry(111)
Cluster 13: two(371) years(56) one(44) us(29) image(25)
Cluster 14: million(499) year(90) us(68) said(65) pay(50)
Cluster 15: us(186) state(149) states(131) gun(121) said(101)
Cluster 16: russia(199) russian(125) ukraine(85) said(58) moscow(58)
Cluster 17: climate(245) school(216) change(137) students(80) high(40)
Cluster 18: health(134) found(119) could(109) study(107) one(63)
Cluster 19: hong(367) kong(323) police(118) protests(104) protesters(76)
Cluster 20: police(502) officer(104) said(99) officers(84) say(67)
Cluster 21: air(103) hours(58) plane(52) flight(52) murder(45)
Cluster 22: former(264) court(222) sexual(95) charges(82) said(81)
Cluster 23: company(347) billion(108) said(79) business(67) wework(64)
Cluster 24: election(188) party(97) leader(66) opposition(60) presidential(52)
Cluster 25: us(130) world(121) chat(119) facebook(119) messenger(119)
Cluster 26: video(377) national(374) get(368) world(365) online(365)
Cluster 27: us(409) trade(403) china(174) economy(152) stocks(127)
Cluster 28: people(149) said(105) dublin(97) three(81) city(76)
Cluster 29: like(203) one(96) film(90) years(77) review(71)
Cluster 30: first(401) time(125) years(64) year(64) us(55)
Cluster 31: day(229) ashes(101) england(93) test(65) one(48)
Cluster 32: apple(270) iphone(236) apples(74) pro(71) watch(55)
Cluster 33: brexit(577) johnson(464) boris(377) minister(310) prime(288)
Cluster 34: democratic(219) presidential(149) debate(105) biden(103) warren(95)
Cluster 35: house(253) trump(201) president(167) white(163) us(109)
Cluster 36: nfl(136) brown(97) season(84) antonio(65) week(62)
Cluster 37: mugabe(140) robert(139) zimbabwe(63) president(46) died(43)
Cluster 38: trump(140) taliban(135) security(128) bolton(127) us(120)
Cluster 39: nintendo(63) switch(53) microsoft(46) system(42) surface(42)
Cluster 40: cup(353) world(335) rugby(168) ireland(113) win(83)
Cluster 41: league(177) manchester(78) united(76) premier(70) season(64)
Cluster 42: google(184) tech(69) facebook(39) general(39) antitrust(34)
Cluster 43: amazon(165) china(99) years(28) us(26) military(25)
Cluster 44: us(167) bank(167) open(121) rates(99) interest(85)
Cluster 45: fire(170) boat(133) california(126) people(104) killed(67)
Cluster 46: facebook(147) media(138) social(116) story(77) business(76)
Cluster 47: israeli(117) netanyahu(113) minister(83) prime(80) election(66)
Cluster 48: border(163) us(111) trump(74) president(68) syria(67)
Cluster 49: one(162) ireland(135) football(99) season(91) win(71)
###Markdown
Retrieve most representative documents (based on clusters' centroids)
###Code
test_cluster = 37
most_representative_docs = np.argsort(
np.linalg.norm(vectorized_docs - clustering.cluster_centers_[test_cluster], axis=1)
)
for d in most_representative_docs[:10]:
print(docs[d])
print("-------------")
###Output
Opinion: Zimbabwe’s Robert Mugabe Ruined a Once Prosperous Country | The former dictator of Zimbabwe, Robert Mugabe, died September 6, 2019, leaving behind a legacy of economic failure and mass oppression. Image: Meng Chenguang / Zuma Wire |
-------------
Robert Mugabe's most famous quotes | A quick look at President Mugabe's colourful language throughout his 37-year reign as leader of Zimbabwe. | Zimbabwe's Former President Robert Mugabe has died aged 95. The death was announced by his succesor Emmerson Mnangagwa who mourned him as an "icon of liberation."
"It is with the utmost sadness that I announce the passing on of Zimbabwe's founding father and… [+4531 chars]
-------------
Robert Mugabe: World Reacts to Death of Ex-president Who Ruled Zimbabwe for 37 Years—'Even Dictators Finally Die' | The 95-year-old, who died in Singapore, ruled Zimbabwe with an iron fist until toppled by a military coup in 2017. | Robert Mugabe, the former leader of Zimbabwe, has died in a Singapore hospital aged 95. During his long political career, Mugabe went from a widely respected anti-colonial revolutionary to a pariah dictator, destroying the country he once did so much to free.… [+2753 chars]
-------------
Tributes -- and also fierce criticism -- pour in after death of Robert Mugabe | Robert Mugabe, the controversial founding father of Zimbabwe, has died at 95, sparking wildly different reactions around the world. | (CNN)Robert Mugabe, the controversial founding father of Zimbabwe, has died at 95, sparking wildly different reactions around the world.
The uncompromising ex-president, who was deposed in a coup in 2017, left a mixed legacy. He had been touted worldwide as … [+2986 chars]
-------------
Robert Mugabe granted national hero status and official mourning | Days of national mourning commence in Zimbabwe for its controversial former leader. | Image copyrightReutersImage caption
Robert Mugabe was known for his fiery speeches, even in his 90s
Three days of national mourning have begun in Zimbabwe following the death of former president Robert Mugabe.
Mr Mugabe, Zimbabwe's first post-independence… [+5385 chars]
-------------
Robert Mugabe Was Zimbabwe’s Hero and Its Tyrant | The “icon of liberation” who led Zimbabwe to independence also drove the nation into poverty. | Obituaries of Robert Mugabe, the former leader of Zimbabwe who died in Singapore on Friday at the age of 95, tend to divide his history into three eras: the revolutionary leader of the liberation struggle against white minority rule; the statesman who negotia… [+1898 chars]
-------------
Rest in peace, Robert Mugabe: Hero, villain, human | Robert Mugabe helped build Zimbabwe and helped destroy it. | Robert Mugabe has died. May he rest in peace.
He was one of the engineers who built the foundations of modern-day Zimbabwe.
If you ever get the chance, please visit Zimbabwe. It is home to music and fields of maize for as far as the eye can see. There are… [+5223 chars]
-------------
Robert Mugabe: The orator | Robert Mugabe, the ex-Zimbabwean leader who has died aged 95, was known for his fiery speeches. | VideoRobert Mugabe, the ex-Zimbabwean leader who has died aged 95, was known for his fiery speeches.
-------------
Robert Mugabe obituary: Zimbabwe liberator turned ruthless despot | President once hailed as beacon of African liberation whose rule bankrupted country he had fought so hard to win | Robert MugabeBorn: February 21st, 1924Died: September 6th, 2019
As the armoured vehicles rolled in to Harare in November 2017, after weeks of political fencing and brinkmanship, Robert Mugabe could not believe he had lost. The senior military leadership who … [+24522 chars]
-------------
Former Zimbabwe President Robert Mugabe dies aged 95 | The country's first post-independent leader passed away in Singapore where he was seeking treatment. | Former Zimbabwe President Robert Mugabe has died at the age of 95, President Emmerson Mnangagwa said.
"It is with the utmost sadness that I announce the passing on of Zimbabwe's founding father and former President, Cde Robert Mugabe," Mnangagwa tweet early… [+1617 chars]
-------------
###Markdown
Random sample of documents for a given cluster
###Code
for i,t in enumerate(df_clusters.query(f"cluster == {test_cluster}").sample(10).iterrows()):
print(t[1]["text"])
print("-------------")
###Output
Timeline: Key dates in the life of Robert Mugabe | Zimbabwe's former leader died on Friday in Singapore, at the age of 95. | Zimbabwe's former President Robert Mugabe has died at the age of 95.
The rebel who led Zimbabwe to independence and ruled the country for 37 years died on Friday in Singapore, where he had often visited in recent years for medical treatment.
Below are the k… [+4172 chars]
-------------
Robert Mugabe's most famous quotes | A quick look at President Mugabe's colourful language throughout his 37-year reign as leader of Zimbabwe. | Zimbabwe's Former President Robert Mugabe has died aged 95. The death was announced by his succesor Emmerson Mnangagwa who mourned him as an "icon of liberation."
"It is with the utmost sadness that I announce the passing on of Zimbabwe's founding father and… [+4531 chars]
-------------
Robert Mugabe to be buried next week in his village: Family | The ex-Zimbabwean leader's family opposes government plan to bury him at the national monument for liberation heroes. | Zimbabwe's former President Robert Mugabe will not be buried at a national monument for liberation heroes in the capital, Harare, but at his village early next week, his family said.
Mugabe's family and Zimbabwe's government have been at odds over whether he… [+1668 chars]
-------------
Influential Swiss-American photographer Robert Frank dies age 94 | Frank was most known for his work The Americans, which captured ordinary scenes from US life in the 1950s. | Swiss-American photographer Robert Frank, one of the most influential photographers of the 20th century, died on Tuesday at age 94.
Frank was most known for his work The Americans that captured ordinary scenes of life in the 1950s' United States. The Swiss-b… [+1952 chars]
-------------
Robert Mugabe died a 'very bitter' man, nephew says | The long-time Zimbabwean leader was "bitter" about being ousted in 2017, a relative tells the BBC. | Image copyrightGetty ImagesImage caption
Robert Mugabe was ousted by a military coup in 2017 after nearly four decades in power
Robert Mugabe's nephew has said the former Zimbabwean leader died a "very bitter" man.
Mr Mugabe, who died aged 95 last week, l… [+3658 chars]
-------------
Robert Harris: ‘Nobody cares if you lie anymore’ | ‘I didn’t want to write a Brexit novel, but The Second Sleep is informed by the anxieties of the just-in-time supply chain’ | It is one of the most beautiful days of the year when I meet the novelist and political journalist Robert Harris near his west Berkshire home. A haze rises off the nearby canal, and it seems impossible this thickened English tranquillity could be anything but… [+10038 chars]
-------------
Tributes -- and also fierce criticism -- pour in after death of Robert Mugabe | Robert Mugabe, the controversial founding father of Zimbabwe, has died at 95, sparking wildly different reactions around the world. | (CNN)Robert Mugabe, the controversial founding father of Zimbabwe, has died at 95, sparking wildly different reactions around the world.
The uncompromising ex-president, who was deposed in a coup in 2017, left a mixed legacy. He had been touted worldwide as … [+2986 chars]
-------------
Zimbabweans, foreign leaders bid farewell to Robert Mugabe | Thousands pay final respects to former president at state funeral held at national stadium in the capital, Harare. | Zimbabwean politicians, international dignitaries and thousands of citizens have gathered at a stadium in Harare to pay their final respects to the country's founding father, Robert Mugabe.
The state funeral in the capital on Saturday follows a week of dispu… [+3566 chars]
-------------
Your Friday Briefing | North Carolina, Robert Mugabe, Serena Williams: Here’s what you need to know. | The store became a national chain, making its inventor, Clarence Saunders, a tycoon. His Pink Palace Mansion is a Tennessee landmark.
But by 1923 he was involved in a bitter dispute with the New York Stock Exchange. He had cornered Piggly Wiggly stock in ret… [+1112 chars]
-------------
Robert Frank, Pivotal Figure in Documentary Photography, Is Dead at 94 | Mr. Frank’s visually raw and personally expressive style made him one of the most influential photographers of the 20th century. | Robert Frank, one of the most influential photographers of the 20th century, whose visually raw and personally expressive style was pivotal in changing the course of documentary photography, died on Monday in Inverness, Nova Scotia. He was 94.
His death was … [+1327 chars]
-------------
|
9_postdoubleselection.ipynb | ###Markdown
Session 9 - Post-double Selection Contents
- [Frisch-Waugh Theorem](#Frisch-Waugh-Theorem)
- [Omitted Variable Bias](#Omitted-Variable-Bias)
- [Pre-Test Bias](#Pre-Test-Bias)
- [Post-double selection](#Post-double-selection)
- [Double/debiased machine learning](#Double/debiased-machine-learning)
###Code
# Import everything
import pandas as pd
import numpy as np
import seaborn as sns
import statsmodels.api as sm
from numpy.linalg import inv
from statsmodels.iolib.summary2 import summary_col
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Lasso, Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
# Import matplotlib for graphs
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
# Set global parameters
%matplotlib inline
plt.style.use('seaborn-white')
plt.rcParams['lines.linewidth'] = 3
plt.rcParams['figure.figsize'] = (10,6)
plt.rcParams['figure.titlesize'] = 20
plt.rcParams['axes.titlesize'] = 18
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['legend.fontsize'] = 14
###Output
_____no_output_____
###Markdown
Frisch-Waugh theorem Consider the data $D = \{ x_i, y_i, z_i \}_{i=1}^n$ with DGP$$y_i = x_i' \alpha_0 + z_i' \beta_0 + \varepsilon_i.$$The following estimators of $\alpha$ are numerically equivalent (if $[x, z]$ has full rank):
- OLS: $\hat{\alpha}$ from regressing $y$ on $x, z$
- Partialling out: $\tilde{\alpha}$ from regressing $y$ on $\tilde{x}$
- "Double" partialling out: $\bar{\alpha}$ from regressing $\tilde{y}$ on $\tilde{x}$

where the operation of passing from $y, x$ to $\tilde{y}, \tilde{x}$ is called *projecting out $z$*, e.g. $\tilde{x}$ are the residuals from regressing $x$ on $z$:$$\tilde{x} = x - \hat \gamma z = (I - z (z' z)^{-1} z' ) x = (I-P_z) x = M_z x$$I.e. we have done the following:
1. regress $x$ on $z$
2. compute $\hat x$
3. compute the residuals $\tilde x = x - \hat x$

We now explore the theorem through simulation. In particular, we generate a sample from the following model:$$y_i = x_i - 0.3 z_i + \varepsilon_i$$where $x_i, z_i \sim U(0,1)$, $\varepsilon_i \sim N(0,1)$, and $n=1000$.
###Code
np.random.seed(1)
# Init
n = 1000
a = 1
b = -.3
# Generate data
x = np.random.uniform(0,1,n).reshape(-1,1)
z = np.random.uniform(0,1,n).reshape(-1,1)
e = np.random.normal(0,1,n).reshape(-1,1)
y = a*x + b*z + e
# Estimate alpha by OLS
xz = np.concatenate([x,z], axis=1)
ols_coeff = inv(xz.T @ xz) @ xz.T @ y
alpha_ols = ols_coeff[0][0]
print('alpha OLS: %.4f (true=%1.0f)' % (alpha_ols, a))
# Partialling out
x_tilde = (np.eye(n) - z @ inv(z.T @ z) @ z.T ) @ x
alpha_po = inv(x_tilde.T @ x_tilde) @ x_tilde.T @ y
print('alpha partialling out: %.4f (true=%1.0f)' % (alpha_po, a))
# "Double" partialling out
y_tilde = (np.eye(n) - z @ inv(z.T @ z) @ z.T ) @ y
alpha_po2 = inv(x_tilde.T @ x_tilde) @ x_tilde.T @ y_tilde
print('alpha double partialling out: %.4f (true=%1.0f)' % (alpha_po2, a))
###Output
alpha double partialling out: 1.0928 (true=1)
###Markdown
Omitted Variable Bias Consider two separate statistical models. Assume the following **long regression** of interest:$$y_i = x_i' \alpha_0 + z_i' \beta_0 + \varepsilon_i$$Define the corresponding **short regression** as$$ y_i = x_i' \alpha_0 + v_i \quad \text{ with } \quad x_i = z_i' \gamma_0 + u_i$$**OVB Theorem**: Suppose that the DGP for the long regression corresponds to $\alpha_0$, $\beta_0$. Suppose further that $\mathbb E[x_i] = 0$, $\mathbb E[z_i] = 0$, $\mathbb E[\varepsilon_i |x_i,z_i] = 0$. Then, unless $\beta_0 = 0$ or $z_i$ is orthogonal to $x_i$, the (sole) stochastic regressor $x_i$ is correlated with the error term in the short regression, which implies that the OLS estimator of the short regression is inconsistent for $\alpha_0$ due to the omitted variable bias. In particular, one can show that the plim of the OLS estimator $\hat{\alpha}_{SHORT}$ from the short regression is$$\hat{\alpha}_{SHORT} \overset{p}{\to} \frac{Cov(y_i, x_i)}{Var(x_i)} = \alpha_0 + \beta_0 \frac{Cov(z_i, x_i)}{Var(x_i)}$$Consider data $D= (y_i, x_i, z_i)_{i=1}^n$, where the true model is:$$\begin{aligned}& y_i = x_i' \alpha_0 + z_i' \beta_0 + \varepsilon_i \\& x_i = z_i' \gamma_0 + u_i\end{aligned}$$Let's investigate the Omitted Variable Bias by simulation. In particular, we generate a sample from the following model:$$\begin{aligned}& y_i = x_i - 0.3 z_i + \varepsilon_i \\& x_i = 3 z_i + u_i \end{aligned}$$where $z_i,\varepsilon_i,u_i \sim N(0,1)$ and $n=1000$.
###Code
def generate_data(a, b, c, n):
# Generate data
z = np.random.normal(0,1,n).reshape(-1,1)
u = np.random.normal(0,1,n).reshape(-1,1)
x = c*z + u
e = np.random.normal(0,1,n).reshape(-1,1)
y = a*x + b*z + e
return x, y, z
# Init
n = 1000
a = 1
b = -.3
c = 3
x, y, z = generate_data(a, b, c, n)
# Estimate alpha by OLS
ols_coeff = inv(x.T @ x) @ x.T @ y
alpha_short = ols_coeff[0][0]
print('alpha OLS: %.4f (true=%1.0f)' % (alpha_short, a))
###Output
alpha OLS: 0.9115 (true=1)
###Markdown
In our case the expected bias is:$$\begin{aligned}Bias & = \beta_0 \frac{Cov(z_i, x_i)}{Var(x_i)} \\& = \beta_0 \frac{Cov(z_i, z_i' \gamma_0 + u_i)}{Var(z_i' \gamma_0 + u_i)} \\& = \beta_0 \frac{\gamma_0 Var(z_i)}{\gamma_0^2 Var(z_i) + Var(u_i)}\end{aligned}$$which in our case is $b \frac{c}{c^2 + 1}$.
###Code
# Expected bias
bias = alpha_short - a
exp_bias = b * c / (c**2 + 1)
print('Empirical bias: %.4f \nExpected bias: %.4f' % (bias, exp_bias))
###Output
Empirical bias: -0.0885
Expected bias: -0.0900
###Markdown
Pre-test bias Consider data $D= (y_i, x_i, z_i)_{i=1}^n$, where the true model is:$$\begin{aligned}& y_i = x_i' \alpha_0 + z_i' \beta_0 + \varepsilon_i \\& x_i = z_i' \gamma_0 + u_i\end{aligned}$$Where $x_i$ is the variable of interest (we want to make inference on $\alpha_0$) and $z_i$ is a high dimensional set of control variables. From now on, we will work under the following assumptions:- $\dim(x_i)=1$ for all $n$- $\beta_0$ uniformely bounded in $n$- Strict exogeneity: $\mathbb E[\varepsilon_i | x_i, z_i] = 0$ and $\mathbb E[u_i | z_i] = 0$- $\beta_0$ and $\gamma_0$ have dimension (and hence value) that depend on $n$Pre-Testing procedure:1. Regress $y_i$ on $x_i$ and $z_i$2. For each $j = 1, ..., p = \dim(z_i)$ calculate a test statistic $t_j$3. Let $\hat{T} = \{ j: |t_j| > C > 0 \}$ for some constant $C$ (set of statistically significant coefficients).4. Re-run the new "model" using $(x_i, z_{\hat{T},i})$ (i.e. using the selected covariates with statistically significant coefficients).5. Perform statistical inference (i.e. confidence intervals and hypothesis tests) as if no model selection had been done.Pre-testing leads to incorrect inference. Why? Because of test errors in the first stage.
###Code
# T-test
def t_test(y, x, k):
beta_hat = inv(x.T @ x) @ x.T @ y
residuals = y - x @ beta_hat
sigma2_hat = np.var(residuals)
beta_std = np.sqrt(np.diag(inv(x.T @ x)) * sigma2_hat )
return beta_hat[k,0]/beta_std[k]
###Output
_____no_output_____
###Markdown
First of all, the t-test for $H_0: \beta_k = 0$ is$$t = \frac{\hat \beta_k}{\hat \sigma_{\beta_k}}$$where the standard deviation of the OLS coefficient is given by$$\hat \sigma_{\beta_k} = \sqrt{ \hat \sigma^2 \cdot (X'X)^{-1}_{[k,k]} }$$where we estimate the variance of the error term with the variance of the residuals$$\hat \sigma^2 = Var \big( y - \hat y \big) = Var \big( y - X (X'X)^{-1}X'y \big)$$
###Code
# Pre-testing
def pre_testing(a, b, c, n, simulations=1000):
np.random.seed(1)
# Init
alpha = {'Long': np.zeros((simulations,1)),
'Short': np.zeros((simulations,1)),
'Pre-test': np.zeros((simulations,1))}
# Loop over simulations
for i in range(simulations):
# Generate data
x, y, z = generate_data(a, b, c, n)
xz = np.concatenate([x,z], axis=1)
# Compute coefficients
alpha['Long'][i] = (inv(xz.T @ xz) @ xz.T @ y)[0][0]
alpha['Short'][i] = inv(x.T @ x) @ x.T @ y
# Compute significance of z on y
t = t_test(y, xz, 1)
# Select specification based on test
if np.abs(t)>1.96:
alpha['Pre-test'][i] = alpha['Long'][i]
else:
alpha['Pre-test'][i] = alpha['Short'][i]
return alpha
# Get pre_test alpha
alpha = pre_testing(a, b, c, n)
for key, value in alpha.items():
print('Mean alpha %s = %.4f' % (key, np.mean(value)))
###Output
Mean alpha Long = 0.9994
Mean alpha Short = 0.9095
Mean alpha Pre-test = 0.9925
###Markdown
The pre-testing coefficient is much less biased than the short regression one. But it's still biased. However, the main effect of pre-testing is on inference. With pre-testing, the distribution of the estimator is not Gaussian anymore.
###Code
def plot_alpha(alpha, a):
fig = plt.figure(figsize=(17,6))
# Plot distributions
x_max = np.max([np.max(np.abs(x-a)) for x in alpha.values()])
# All axes
for i, key in enumerate(alpha.keys()):
# Reshape exisiting subplots
k = len(fig.axes)
for i in range(k):
fig.axes[i].change_geometry(1, k+1, i+1)
# Add new plot
ax = fig.add_subplot(1, k+1, k+1)
ax.hist(alpha[key], bins=30)
ax.set_title(key)
ax.set_xlim([a-x_max, a+x_max])
ax.axvline(a, c='r', ls='--')
legend_text = [r'$\alpha_0=%.0f$' % a, r'$\hat \alpha=%.4f$' % np.mean(alpha[key])]
ax.legend(legend_text, prop={'size': 10})
# Plot
plot_alpha(alpha, a)
###Output
_____no_output_____
###Markdown
As we can see, the main problem of pre-testing is inference. Because of the testing procedure, the distribution of the estimator is a combination of two different distributions: the one resulting from the long regression and the one resulting from the short regression. **Pre-testing is not a problem in 3 cases**:
- when $\beta_0$ is very large: in this case the test always rejects the null hypothesis $H_0 : \beta_0=0$ and we always run the correct specification, i.e. the long regression
- when $\beta_0$ is very small: in this case the test has very little power. However, as we saw from the Omitted Variable Bias formula, the bias is small.
- when $\gamma_0$ is very small: also in this case the test has very little power. However, as we saw from the Omitted Variable Bias formula, the bias is small.
###Code
# Case 1: different betas and same sample size
b_sequence = b*np.array([0.1,0.3,1,3])
alpha = {}
# Get sequence
for k, b_ in enumerate(b_sequence):
label = 'beta = %.2f' % b_
alpha[label] = pre_testing(a, b_, c, n)['Pre-test']
print('Mean with beta=%.2f: %.4f' % (b_, np.mean(alpha[label])))
# Plot
plot_alpha(alpha, a)
###Output
_____no_output_____
###Markdown
However, the magnitude of $\beta_0$ is a relative concept. For an infinite sample size, $\beta_0$ is always going to be "big enough", in the sense that with an infinite sample size the probability of false negatives in testing $H_0: \beta_0 = 0$ goes to zero. I.e. we always select the correct model specification, the long regression. Let's have a look at the distribution of $\hat \alpha_{\text{PRE-TEST}}$ when the sample size increases.
###Code
# Case 2: same beta and different sample sizes
n_sequence = [100,300,1000,3000]
alpha = {}
# Get sequence
for k, n_ in enumerate(n_sequence):
label = 'n = %.0f' % n_
alpha[label] = pre_testing(a, b, c, n_)['Pre-test']
print('Mean with n=%.0f: %.4f' % (n_, np.mean(alpha[label])))
# Plot
plot_alpha(alpha, a)
###Output
_____no_output_____
###Markdown
As we can see, for large samples, $\beta_0$ is never "small". In the limit, when $n \to \infty$, the probability of false negatives while testing $H_0: \beta_0 = 0$ goes to zero. We face a dilemma:
- pre-testing is clearly a problem in finite samples
- all our econometric results are based on the assumption that $n \to \infty$

The problem is solved by assuming that the value of $\beta_0$ depends on the sample size. This might seem like a weird assumption, but it is just a way to have an asymptotically meaningful concept of "big" and "small". We now look at what happens in the simulations when $\beta_0$ is proportional to $\frac{1}{\sqrt{n}}$.
###Code
# Case 3: beta proportional to 1/sqrt(n) and different sample sizes
beta = b * 30 / np.sqrt(n_sequence)
# Get sequence
alpha = {}
for k, n_ in enumerate(n_sequence):
label = 'n = %.0f' % n_
alpha[label] = pre_testing(a, beta[k], c, n_)['Pre-test']
print('Mean with n=%.0f: %.4f' % (n_, np.mean(alpha[label])))
# Plot
plot_alpha(alpha, a)
###Output
_____no_output_____
###Markdown
Note that the estimator is consistent. But confidence interval coverage is wrong. Pre-testing and Machine Learning How are machine learning and pre-testing related? The best example is Lasso. Suppose you have a dataset with many variables. This means that you have very few degrees of freedom and your OLS estimates are going to be very imprecise. At the extreme, you have more variables than observations, so that your OLS coefficient is undefined since you cannot invert the design matrix $X'X$. In this case, you might want to do variable selection. One way of doing variable selection is pre-testing. Another way is Lasso. A third alternative is to use machine learning methods that do not suffer this curse of dimensionality. The purpose and outcome of pre-testing and Lasso are the same:
- you have too many variables
- you exclude some of them from the regression / set their coefficients to zero

As a consequence, the problems are also the same, i.e. pre-test bias. Post-double selection Consider again data $D= (y_i, x_i, z_i)_{i=1}^n$, where the true model is:$$\begin{aligned}& y_i = x_i' \alpha_0 + z_i' \beta_0 + \varepsilon_i \\& x_i = z_i' \gamma_0 + u_i\end{aligned}$$We would like to guard against pretest bias if possible, in order to handle high dimensional models. A good pathway towards motivating procedures which guard against pretest bias is a discussion of classical partitioned regression. Consider a regression of $y_i$ on $x_i$ and $z_i$. $x_i$ is the 1-dimensional variable of interest, $z_i$ is a high-dimensional set of control variables. We have the following procedure:
1. **First Stage** selection: regress $x_i$ on $z_i$. Select the statistically significant variables in the set $S_{FS} \subseteq z_i$
2. **Reduced Form** selection: regress $y_i$ on $z_i$. Select the statistically significant variables in the set $S_{RF} \subseteq z_i$
3. Regress $y_i$ on $x_i$ and $S_{FS} \cup S_{RF}$

**Theorem**: Let $\{P^n\}$ be a sequence of data-generating processes for $D_n = (y_i, x_i, z_i)^n_{i=1} \in (\mathbb R \times \mathbb R \times \mathbb R^p) ^n$ where $p$ depends on $n$. For each $n$, the data are iid with $y_i = x_i'\alpha_0^{(n)} + z_i' \beta_0^{(n)} + \varepsilon_i$ and $x_i = z_i' \gamma_0^{(n)} + u_i$ where $\mathbb E[\varepsilon_i | x_i,z_i] = 0$ and $\mathbb E[u_i|z_i] = 0$. The sparsity of the vectors $\beta_0^{(n)}$, $\gamma_0^{(n)}$ is controlled by $|| \beta_0^{(n)} ||_0 \leq s$ with $s^2 (\log p)^2/n \to 0$. Suppose further that additional regularity conditions on the model selection procedures and on the moments of the random variables $y_i$, $x_i$, $z_i$ hold, as documented in Belloni et al. (2014). Then the confidence intervals, CI, from the post-double selection procedure are uniformly valid. That is, for any confidence level $\xi \in (0, 1)$,$$ \Pr(\alpha_0 \in CI) \to 1- \xi.$$In order to have valid confidence intervals you want their bias to be negligible. Since$$ CI = \left[ \hat{\alpha} \pm \frac{1.96 \cdot \hat{\sigma}}{\sqrt{n}} \right],$$if the bias is $o \left( \frac{1}{\sqrt{n}} \right)$ then there is no problem since it is asymptotically negligible w.r.t. the magnitude of the confidence interval. If however the bias is $O \left( \frac{1}{\sqrt{n}} \right)$, then it has the same magnitude as the confidence interval and it does not asymptotically vanish. The idea of the proof is to use partitioned regression. An alternative way to think about the argument is: bound the omitted variables bias. Omitted variable bias comes from the product of 2 quantities related to the omitted variable:
1. its partial correlation with the outcome, and
2. its partial correlation with the variable of interest.

If both those partial correlations are $O( \sqrt{\log p/n})$, then the omitted variables bias is $s \times O( \sqrt{\log p/n})^2 = o \left( \frac{1}{\sqrt{n}} \right)$, provided $s^2 (\log p)^2/n \to 0$. Relative to the $ \frac{1}{\sqrt{n}} $ convergence rate, the omitted variables bias is negligible. In our omitted variable bias case, we want $| \beta_0 \gamma_0 | = o \left( \frac{1}{\sqrt{n}} \right)$. Post-double selection guarantees that
- *Reduced form* selection (pre-testing): any "missing" variable has $|\beta_{0j}| \leq \frac{c}{\sqrt{n}}$
- *First stage* selection (additional): any "missing" variable has $|\gamma_{0j}| \leq \frac{c}{\sqrt{n}}$

As a consequence, as long as the number of omitted variables is finite, the omitted variable bias is $$ OVB(\alpha) = |\beta_{0j}| \cdot |\gamma_{0j}| \leq \frac{c}{\sqrt{n}} \cdot \frac{c}{\sqrt{n}} = \frac{c^2}{n} = o \left(\frac{1}{\sqrt{n}}\right).$$
###Code
# Pre-testing code
def post_double_selection(a, b, c, n, simulations=1000):
np.random.seed(1)
# Init
alpha = {'Long': np.zeros((simulations,1)),
'Short': np.zeros((simulations,1)),
'Pre-test': np.zeros((simulations,1)),
'Post-double': np.zeros((simulations,1))}
# Loop over simulations
for i in range(simulations):
# Generate data
x, y, z = generate_data(a, b, c, n)
# Compute coefficients
xz = np.concatenate([x,z], axis=1)
alpha['Long'][i] = (inv(xz.T @ xz) @ xz.T @ y)[0][0]
alpha['Short'][i] = inv(x.T @ x) @ x.T @ y
# Compute significance of z on y (beta hat)
t1 = t_test(y, xz, 1)
# Compute significance of z on x (gamma hat)
t2 = t_test(x, z, 0)
# Select specification based on first test
if np.abs(t1)>1.96:
alpha['Pre-test'][i] = alpha['Long'][i]
else:
alpha['Pre-test'][i] = alpha['Short'][i]
# Select specification based on both tests
if np.abs(t1)>1.96 or np.abs(t2)>1.96:
alpha['Post-double'][i] = alpha['Long'][i]
else:
alpha['Post-double'][i] = alpha['Short'][i]
return alpha
# Get pre_test alpha
alpha = post_double_selection(a, b, c, n)
for key, value in alpha.items():
print('Mean alpha %s = %.4f' % (key, np.mean(value)))
# Plot
plot_alpha(alpha, a)
###Output
_____no_output_____
###Markdown
As we can see, post-double selection has solved the pre-testing problem. Does it work for any magnitude of $\beta$ (relative to the sample size)?We first have a look at the case in which the sample size is fixed and $\beta_0$ changes.
###Code
# Case 1: different betas and same sample size
b_sequence = b*np.array([0.1,0.3,1,3])
alpha = {}
# Get sequence
for k, b_ in enumerate(b_sequence):
label = 'beta = %.2f' % b_
alpha[label] = post_double_selection(a, b_, c, n)['Post-double']
print('Mean with beta=%.2f: %.4f' % (b_, np.mean(alpha[label])))
# Plot
plot_alpha(alpha, a)
###Output
_____no_output_____
###Markdown
Post-double selection always selects the correct specification, the long regression, even when $\beta$ is very small.Now we check the same but for fixed $\beta_0$ and different sample sizes.
###Code
# Case 2: same beta and different sample sizes
n_sequence = [100,300,1000,3000]
alpha = {}
# Get sequence
for k, n_ in enumerate(n_sequence):
label = 'N = %.0f' % n_
alpha[label] = post_double_selection(a, b, c, n_)['Post-double']
print('Mean with n=%.0f: %.4f' % (n_, np.mean(alpha[label])))
# Plot
plot_alpha(alpha, a)
###Output
_____no_output_____
###Markdown
Post-double selection always selects the correct specification, the long regression, even when the sample size is very small.Last, we check the case of $\beta_0$ proportional to $\frac{1}{\sqrt{n}}$.
###Code
# Case 3: beta proportional to 1/sqrt(n) and different sample sizes
beta = b * 30 / np.sqrt(n_sequence)
# Get sequence
alpha = {}
for k, n_ in enumerate(n_sequence):
label = 'N = %.0f' % n_
alpha[label] = post_double_selection(a, beta[k], c, n_)['Post-double']
print('Mean with n=%.0f: %.4f' % (n_, np.mean(alpha[label])))
# Plot
plot_alpha(alpha, a)
###Output
_____no_output_____
###Markdown
Once again post-double selection always selects the correct specification, the long regression. Post-double selection and Machine Learning As we have seen at the end of the previous section, Lasso can be used to perform variable selection in high dimensional settings. Therefore, post-double selection solves the pre-test bias problem in those settings. The post-double selection procedure with Lasso is:
1. **First Stage** selection: lasso $x_i$ on $z_i$. Let the selected variables be collected in the set $S_{FS} \subseteq z_i$
2. **Reduced Form** selection: lasso $y_i$ on $z_i$. Let the selected variables be collected in the set $S_{RF} \subseteq z_i$
3. Regress $y_i$ on $x_i$ and $S_{FS} \cup S_{RF}$

Double/debiased machine learning This section is taken from [Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W., & Robins, J. (2018). "*Double/debiased machine learning for treatment and structural parameters*"](https://onlinelibrary.wiley.com/doi/abs/10.1111/ectj.12097). Consider the following partially linear model$$y = \beta_0 D + g_0(X) + u \\D = m_0(X) + v$$where $y$ is the outcome variable, $D$ is the treatment of interest and $X$ is a potentially high-dimensional set of controls. Naive approach A naive approach to estimation of $\beta_0$ using ML methods would be, for example, to construct a sophisticated ML estimator of $\beta_0 D + g_0(X)$ for learning the regression function $\beta_0 D + g_0(X)$:
1. Split the sample in two: main sample and auxiliary sample
2. Use the auxiliary sample to estimate $\hat g_0(X)$
3. Use the main sample to compute the orthogonalized component of $Y$ on $X$: $\hat u_i = Y_{i}-\hat{g}_{0}\left(X_{i}\right)$
4. Use the main sample to estimate the residualized OLS estimator$$\hat{\beta}_{0}=\left(\frac{1}{n} \sum_{i \in I} D_{i}^{2}\right)^{-1} \frac{1}{n} \sum_{i \in I} D_{i} \hat u_i$$

This estimator is going to have two problems:
1. Slow rate of convergence, i.e. slower than $\sqrt{n}$
2. It will be biased because we are employing high-dimensional regularized estimators (e.g. we are doing variable selection)

Orthogonalization Now consider a second construction that employs an orthogonalized formulation obtained by directly partialling out the effect of $X$ from $D$ to obtain the orthogonalized regressor $v = D - m_0(X)$:
1. Split the sample in two: main sample and auxiliary sample
2. Use the auxiliary sample to estimate $\hat g_0(X)$ from $$ y = \beta_0 D + g_0(X) + u $$
3. Use the auxiliary sample to estimate $\hat m_0(X)$ from $$ D = m_0(X) + v $$
4. Use the main sample to compute the orthogonalized component of $D$ on $X$ as $$ \hat v = D - \hat m_0(X) $$
5. Use the main sample to estimate the double-residualized OLS estimator as $$ \hat{\beta}_{0}=\left(\frac{1}{n} \sum_{i \in I} \hat v_i D_{i} \right)^{-1} \frac{1}{n} \sum_{i \in I} \hat v_i \left( Y - \hat g_0(X) \right) $$

The estimator is unbiased but still has a lower rate of convergence because of sample splitting. The problem is solved by inverting the split sample, re-estimating the coefficient and averaging the two estimates. Note that this procedure is valid since the two estimates are independent by the sample splitting procedure. Application to AJR02 In this section we are going to replicate Section 6.3 of the "*Double/debiased machine learning*" paper based on [Acemoglu, Johnson, Robinson (2002), "*The Colonial Origins of Comparative Development*"](https://economics.mit.edu/files/4123). We first load the dataset.
###Code
# Load Acemoglu Johnson Robinson Dataset
df = pd.read_csv('data/AJR02.csv',index_col=0)
df.head()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 64 entries, 1 to 64
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 GDP 64 non-null float64
1 Exprop 64 non-null float64
2 Mort 64 non-null float64
3 Latitude 64 non-null float64
4 Neo 64 non-null int64
5 Africa 64 non-null int64
6 Asia 64 non-null int64
7 Namer 64 non-null int64
8 Samer 64 non-null int64
9 logMort 64 non-null float64
10 Latitude2 64 non-null float64
dtypes: float64(6), int64(5)
memory usage: 6.0 KB
###Markdown
In their paper, AJR note that their IV strategy will be invalidated if other factors are also highly persistent and related to the development of institutions within a country and to the country’s GDP. A leading candidate for such a factor, as they discuss, is geography. AJR address this by assuming that the confounding effect of geography is adequately captured by a linear term in distance from the equator and a set of continent dummy variables. They include their results in Table 2.
###Code
# Add constant term to dataset
df['const'] = 1
# Create lists of variables to be used in each regression
X1 = df[['const', 'Exprop']]
X2 = df[['const', 'Exprop', 'Latitude', 'Latitude2']]
X3 = df[['const', 'Exprop', 'Latitude', 'Latitude2', 'Asia', 'Africa', 'Namer', 'Samer']]
y = df['GDP']
# Estimate an OLS regression for each set of variables
reg1 = sm.OLS(y, X1, missing='drop').fit()
reg2 = sm.OLS(y, X2, missing='drop').fit()
reg3 = sm.OLS(y, X3, missing='drop').fit()
info_dict={'No. observations' : lambda x: f"{int(x.nobs):d}"}
results_table = summary_col(results=[reg1,reg2,reg3],
float_format='%0.2f',
stars = True,
model_names=['Model 1','Model 2','Model 3'],
info_dict=info_dict,
regressor_order=['const','Exprop','Latitude','Latitude2'])
results_table
###Output
_____no_output_____
###Markdown
Using DML allows us to relax this assumption and to replace it by the weaker assumption that geography can be sufficiently controlled by an unknown function of distance from the equator and continent dummies, which can be learned by ML methods. In particular, our framework is$${GDP} = \beta_0 \times {Exprop} + g_0({geography}) + u \\{Exprop} = m_0({geography}) + v$$so that the double/debiased machine learning procedure is1. Split the sample in two: main sample and auxiliary sample2. Use the auxiliary sample to estimate $\hat g_0({geography})$ from $$ {GDP} = \beta_0 \times {Exprop} + g_0({geography}) + u $$3. Use the auxiliary sample to estimate $\hat m_0({geography})$ from $$ {Exprop} = m_0({geography}) + v $$4. Use the main sample to compute the orthogonalized component of ${Exprop}$ on ${geography}$ as $$ \hat v = {Exprop} - \hat m_0({geography}) $$5. Use the main sample to estimate the double-residualized OLS estimator as $$ \hat{\beta}_{0}=\left(\frac{1}{n} \sum_{i \in I} \hat v_i \times {Exprop}_{i} \right)^{-1} \frac{1}{n} \sum_{i \in I} \hat v_i \times \left( {GDP} - \hat g_0({geography}) \right) $$Since we employ an **instrumental variable** strategy, we replace $m_0({geography})$ with $m_0({geography},{logMort})$ in the first stage.
###Code
# Generate variables
D = df['Exprop'].values.reshape(-1,1)
X = df[['const', 'Latitude', 'Latitude2', 'Asia', 'Africa', 'Namer', 'Samer']].values
y = df['GDP'].values.reshape(-1,1)
Z = df[['const', 'Latitude', 'Latitude2', 'Asia', 'Africa', 'Namer', 'Samer','logMort']].values
###Output
_____no_output_____
###Markdown
Now we write down the whole procedure.
###Code
def estimate_beta(algorithm, alg_name, D, X, y, Z, sample):
# Split sample
D_main, D_aux = (D[sample==1], D[sample==0])
X_main, X_aux = (X[sample==1], X[sample==0])
y_main, y_aux = (y[sample==1], y[sample==0])
Z_main, Z_aux = (Z[sample==1], Z[sample==0])
# Residualize y on D
b_hat = inv(D_aux.T @ D_aux) @ D_aux.T @ y_aux
y_resid_aux = y_aux - D_aux @ b_hat
# Estimate g0
alg_fitted = algorithm.fit(X=X_aux, y=y_resid_aux.ravel())
g0 = alg_fitted.predict(X_main).reshape(-1,1)
# Compute u_hat (outcome residual on the main sample)
u_hat = y_main - g0
# Estimate m0
alg_fitted = algorithm.fit(X=Z_aux, y=D_aux.ravel())
m0 = algorithm.predict(Z_main).reshape(-1,1)
# Compute v_hat (orthogonalized treatment residual on the main sample)
v_hat = D_main - m0
# Estimate beta
beta = inv(v_hat.T @ D_main) @ v_hat.T @ u_hat
return beta
def ddml(algorithm, alg_name, D, X, y, Z, p=0.5, verbose=False):
# Expand X if Lasso or Ridge
if alg_name in ['Lasso ','Ridge ']:
X = PolynomialFeatures(degree=2).fit_transform(X)
# Generate split (fixed proportions)
split = np.array([i in train_test_split(range(len(D)), test_size=p)[0] for i in range(len(D))])
# Compute beta
beta = [estimate_beta(algorithm, alg_name, D, X, y, Z, split==k) for k in range(2)]
beta = np.mean(beta)
# Print and return
if verbose:
print('%s : %.4f' % (alg_name, beta))
return beta
p = 0.5
split = np.random.binomial(1, p, len(D))
split
###Output
_____no_output_____
###Markdown
We now repeat the same process with different algorithms. In particular, we consider:1. Lasso Regression2. Ridge Regression3. Regression Trees4. Random Forest5. Boosted Forests
###Code
# List all algorithms
algorithms = {'Ridge ': Ridge(alpha=.1),
'Lasso ': Lasso(alpha=.01),
'Tree ': DecisionTreeRegressor(),
'Forest ': RandomForestRegressor(n_estimators=30),
'Boosting': GradientBoostingRegressor(n_estimators=30)}
# Loop over algorithms
for alg_name, algorithm in algorithms.items():
ddml(algorithm, alg_name, D, X, y, Z, verbose=True)
# Repeat K times
def estimate_beta_median(algorithms, D, X, y, Z, K):
# Loop over algorithms
for alg_name, algorithm in algorithms.items():
betas = []
# Iterate n times
for k in range(K):
beta = ddml(algorithm, alg_name, D, X, y, Z)
betas = np.append(betas, beta)
print('%s : %.4f' % (alg_name, np.median(betas)))
np.random.seed(123)
# Repeat 100 times and take median
estimate_beta_median(algorithms, D, X, y, Z, 100)
###Output
Ridge : 0.6670
Lasso : 1.2511
Tree : 0.9605
Forest : 0.5327
Boosting : 1.0338
|
backups/read_ascii_cams_diag.ipynb | ###Markdown
Reading of ASCII files created for the CAM diagnostics tool
###Code
%matplotlib inline
%load_ext autoreload
%autoreload 2
import pandas as pd
from glob import glob
import os
import helper_funcs as helpers
import ipywidgets as ipw
###Output
_____no_output_____
###Markdown
1. Paths and global settings (GLOB) Please change accordingly if you execute this notebook on your local machine. 1.1. Paths (PATHS) Here you can specify your paths.
###Code
#folder with ascii files
data_dir = "./data/michael_ascii_read/"
file_type = "webarchive"
# file containing additional information about variables (long names, can be interactively updated below)
varinfo_csv = "./data/var_info.csv"
# Config file for different groups
vargroups_cfg = "./data/varconfig.ini"
# directory to store results
output_dir = "./output/"
###Output
_____no_output_____
###Markdown
Global settings (SETUP) In the following cells you can specify global default settings. Define the group of variables that you are interested in: the default group is set in the next cell, and variable groups can be defined in [varconfig.ini](https://github.com/jgliss/my_notebooks/blob/master/data/varconfig.ini). Use ``[group_name]`` to define a new group and add below it all variables that should belong to the group, in the desired display order (should be self-explanatory when looking at the file, I hope).
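Purely as an illustration of the INI layout (the notebook itself reads this file through its own helpers further below, and the group name ``test`` is only assumed to exist), the standard library can list the defined groups:

```python
import configparser

cfg = configparser.ConfigParser(allow_no_value=True)  # group members are bare keys without values
cfg.optionxform = str                                 # preserve the case of variable names
cfg.read(vargroups_cfg)                               # path defined in the cell above
print(cfg.sections())                                 # the defined group names
print(list(cfg["test"]))                              # members of the "test" group, in file order
```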
###Code
var_group = "test" #group_name (AS STRING, e.g. "test") from varconfig.ini (use None, if you want to use all)
###Output
_____no_output_____
###Markdown
Add data columns to index Use the following list to specify table columns that should be added to the multiindex (Ada, here is where you can add "Obs").
###Code
add_to_index = ["Obs"] #NEEDS TO BE A LIST, EVEN FOR ONLY ONE ITEM
###Output
_____no_output_____
###Markdown
Define which parts of the index should be unstacked The following list can be used to specify how the final tables are displayed. The items in the list need to be names of sub-indices in the Multiindex of the originally loaded file (i.e. "Run", "Years", "Variable", "Description") or data columns that were added to the index (previous option). All values specified here will be unstacked, i.e. moved from the original row index into a column index representation (makes the table view wider).
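For readers less familiar with ``unstack``, here is a tiny, made-up example (run and variable names are invented) of what moving an index level into the columns does:

```python
import pandas as pd

toy = pd.DataFrame(
    {"Bias": [0.10, -0.20, 0.05, 0.30]},
    index=pd.MultiIndex.from_product(
        [["Run1", "Run2"], ["SWCF", "LWCF"]], names=["Run", "Variable"]
    ),
)
# Moving the "Run" level from the rows into the columns makes the table wider:
print(toy.unstack("Run"))
```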
###Code
unstack_indices = ["Run", "Years"]
###Output
_____no_output_____
###Markdown
2. Importing and editing supplementary information Let's begin with reading additional variable information from the file ``varinfo_csv``. Note that this is not strictly required but helps us below to display the results in a more intuitive manner when analysing the data. Note that the following method makes sure the CSV file exists, i.e. if it has not been created before, the information is loaded from Michael's Excel table and then saved at ``varinfo_csv``.
###Code
var_info_dict = helpers.load_varinfo(varinfo_csv)
###Output
_____no_output_____
###Markdown
The following cell opens an interactive widget that can be used to edit the information available for each variable (stored in file ``varinfo_csv``, see previous cell).
###Code
from my_widgets import EditDictCSV
edit_config = EditDictCSV(varinfo_csv)
#show
edit_config()
###Output
_____no_output_____
###Markdown
Now update to the current selection (run everything below if you change the previous cell).
###Code
var_info_dict = edit_config.var_dict
###Output
_____no_output_____
###Markdown
3. Search and load ASCII files, either using .asc or .webarchive file type (GET_FILES) The following cell finds all files in folder ``data_dir``.
###Code
files = sorted(glob(data_dir + "*.{}".format(file_type)))
for file in files:
print(file)
###Output
./data/michael_ascii_read/N1850C53CLM45L32_f09_tn11_191017 (yrs 71-100).webarchive
./data/michael_ascii_read/N1850_f09_tn14_230218 (yrs 1-20).webarchive
./data/michael_ascii_read/N1850_f19_tn14_r227_ctrl (yrs 185-215).webarchive
./data/michael_ascii_read/N1850_f19_tn14_r227_ctrl (yrs 310-340).webarchive
./data/michael_ascii_read/N1850_f19_tn14_r227_ctrl (yrs 80-110).webarchive
./data/michael_ascii_read/N1850_f19_tn14_r265_ctrl_20180411 (yrs 90-120).webarchive
###Markdown
3.1 Shortcuts for Run IDs (optional, may also be changed interactively below) Define a list of short names for the model runs or define a prefix. If undefined (i.e. empty list and ``None``), the original names are used.
###Code
#either
run_ids = list("ABCD") #renames the first 4 runs
#or
run_id_prefix = "Run"
###Output
_____no_output_____
###Markdown
4. Importing multiple result files and concatenating them into one Dataframe (LOAD_FILES) In the following, we load all files into one `Dataframe`. To do this, a custom method `read_and_merge_all` was defined in [helper_funcs.py](https://github.com/jgliss/my_py3_scripts/blob/master/notebooks/helper_funcs.py). The method basically loops over all files and calls the method ``read_file_custom``, which you can also find in [helper_funcs.py](https://github.com/jgliss/my_py3_scripts/blob/master/notebooks/helper_funcs.py).
###Code
merged = helpers.read_and_merge_all(file_list=files, var_info_dict=var_info_dict, replace_runid_prefix=run_id_prefix)
merged
###Output
/home/jonasg/github/my_notebooks/helper_funcs.py:152: UserWarning: Pandas doesn't allow columns to be created via a new attribute name - see https://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute-access
df.test_case = pd.Series(mapping)
###Markdown
5. Rearranging and restructuring of the imported data (REARRANGE) 5.1 Computing RMSE relative error (GET_RMSE_REL) In the following we extract the subset containing the *RMSE* information of the flagged variables for all runs in order to compute the relative error for each run based on the average *RMSE* of all runs:$$\frac{RMSE_{Run}\,-\,\overline{RMSE_{All\,Runs}}}{\overline{RMSE_{All\,Runs}}}$$
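The calculation that ``calc_and_add_relerror`` (used below) encapsulates is essentially a grouped mean; a toy version with invented numbers, assuming the average is taken per variable over all runs, looks like this:

```python
import pandas as pd

toy = pd.DataFrame(
    {"RMSE": [1.00, 1.20, 0.80, 0.90]},
    index=pd.MultiIndex.from_product(
        [["RunA", "RunB"], ["SWCF", "LWCF"]], names=["Run", "Variable"]
    ),
)
mean_rmse = toy["RMSE"].groupby(level="Variable").transform("mean")  # average over all runs
toy["RMSE_ERR"] = (toy["RMSE"] - mean_rmse) / mean_rmse              # relative RMSE error
print(toy)
```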
###Code
merged = helpers.calc_and_add_relerror(merged, colname="RMSE", unstack_indices=unstack_indices)
merged
###Output
_____no_output_____
###Markdown
5.2 Interactive manipulation of the Dataframe (DF_EDIT) The following table widget uses the loaded Dataframe and applies all settings that were defined above.
###Code
from my_widgets import TableEditor
edit = TableEditor(df=merged,
save_dir=output_dir, #defined above
preconfig_file=vargroups_cfg, #defined above
default_group=var_group,
new_run_names=run_ids,
add_to_index_vars=add_to_index,
unstack_indices=unstack_indices)
edit()
###Output
_____no_output_____
###Markdown
Now access the current selection and continue.
###Code
selection = edit.df_edit
selection["Bias"].columns.names
len(selection.columns.levels[0])
###Output
_____no_output_____
###Markdown
5.3 Extracting the Bias of each model run relative to the observations (GET_BIAS) Retrieving a table that illustrates the Bias of each run for each flagged variable is straightforward. We just extract the `Bias` column from our flagged frame:
###Code
bias = selection["Bias"]
bias
###Output
_____no_output_____
###Markdown
5.4 Extracting the RMSE error of each model run relative to the observations (GET_RMSE_ERR) In Section 5.1 we computed and added the relative RMSE error as a new column to the original table. These data can now be accessed as simply as the ``Bias`` table:
###Code
rmse_err_rel = selection["RMSE_ERR"]
rmse_err_rel
###Output
_____no_output_____
###Markdown
6. Conditional formatting of tables (Dataframes) (VISUALISE) This section illustrates how we can perform conditional (colour) formatting of the tables. As discussed above, we can apply background colour gradients to the data. In the example above we had a multiindex data type specifying model run, year-range and variable in stacked format (long table) and the four data columns specifying results from model and observation as well as bias and RMSE. Now, in the following we illustrate how we can apply this colour highlighting to the two unstacked tables that we just created and that contain the Bias and the relative error.
###Code
bias
###Output
_____no_output_____
###Markdown
6.2 How we want it (VIS_RIGHT) In the following, we use a custom display method `my_table_display` (defined in [helper_funcs.py](https://github.com/jgliss/my_py3_scripts/blob/master/notebooks/helper_funcs.py)) in order to perform colour formatting considering all rows and columns at the same time and, furthermore, using a diverging colour map that is dynamically shifted such that the value 0 corresponds to the colour white (method `shifted_color_map`) even if `-vmin != vmax` (which is usually the case).
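A rough, generic equivalent with plain pandas styling is shown below; it is not the notebook's helper, which additionally shifts the colormap so that 0 is rendered as white:

```python
# axis=None applies one gradient over the entire table (all rows and columns at once);
# a diverging colormap such as "RdBu_r" puts positive and negative values on opposite ends.
bias.style.background_gradient(cmap="RdBu_r", axis=None)
```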
###Code
from helper_funcs import my_table_display
my_table_display(bias)
from helper_funcs import df_to_heatmap
ax = df_to_heatmap(bias, figsize=(12,6))
ax.set_xlabel("Variable", weight="bold")
ax.set_ylabel("Run", weight="bold")
ax.figure.tight_layout()
ax.figure.savefig(os.path.join(output_dir, "bias_table.png"))
###Output
_____no_output_____
###Markdown
Now for the typical RMSE error
###Code
my_table_display(rmse_err_rel)
###Output
/home/jonasg/anaconda3/lib/python3.6/site-packages/matplotlib/colors.py:489: RuntimeWarning: invalid value encountered in less
np.copyto(xa, -1, where=xa < 0.0)
###Markdown
7. Concatenate and save results (Bias and typical RMSE) as table (EXPORT) In the following, the two result tables ``bias`` and ``rmse_err_rel`` are merged into one result table, which is then saved as an Excel file.
###Code
result = pd.concat([bias, rmse_err_rel],axis=1, keys=["Bias", "RMSE relative Error"])
result
###Output
_____no_output_____
###Markdown
Now save the combined table as an Excel file.
###Code
writer = pd.ExcelWriter('{}/result.xlsx'.format(output_dir))
result.to_excel(writer)
writer.save()
writer.close()
###Output
_____no_output_____ |
04_Questions_Classification/02_Questions_Classification_FastText.ipynb | ###Markdown
Questions Classification Custom dataset FastText. In this notebook we are going to use the previous notebook as the base for our `FastText` model for predicting question classes. In the last notebook we used an `RNN`; in this notebook we are going to use `FastText`.
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
Imports
###Code
import time
from prettytable import PrettyTable
import numpy as np
from matplotlib import pyplot as plt
import pandas as pd
import torch, os, random
from torch import nn
import torch.nn.functional as F
torch.__version__
###Output
_____no_output_____
###Markdown
Setting up the seeds
###Code
SEED = 42
np.random.seed(SEED)
random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
###Output
_____no_output_____
###Markdown
Loading files. Now we have three files, one for each of the sets that were created, which are:```train.csvtest.csvval.csv```We are going to use torchtext to load these files.**Note:** In the previous notebooks we loaded our files as json files. This time around we are going to load `csv` files instead. The procedure is the same. Paths
###Code
files_path = '/content/drive/MyDrive/NLP Data/questions-classification/pytorch'
train_path = 'train.csv'
test_path = 'test.csv'
val_path = 'val.csv'
###Output
_____no_output_____
###Markdown
Fast Text. According to the `FastText` paper we need to generate bigrams for each sentence. We are going to create a function called `generate_bigrams()` that will generate the bigrams for us. We will pass this function to the `TEXT` field as the preprocessing function.
###Code
def generate_bigrams(x):
n_grams = set(zip(*[x[i: ] for i in range(2)]))
for n_gram in n_grams:
x.append(' '.join(n_gram))
return x
generate_bigrams(['What', 'is', 'the', 'meaning', "of", "OCR", "in", "python"])
###Output
_____no_output_____
###Markdown
Creating the Fields.
###Code
from torchtext.legacy import data
TEXT = data.Field(
tokenize="spacy",
preprocessing = generate_bigrams,
tokenizer_language = 'en_core_web_sm',
)
LABEL = data.LabelField()
fields = {
"Questions": ('text', TEXT),
"Category1": ('label', LABEL)
}
###Output
_____no_output_____
###Markdown
Creating the dataset. We are going to use `TabularDataset.splits()` to create the datasets.
###Code
train_data, val_data, test_data = data.TabularDataset.splits(
files_path,
train=train_path,
test=test_path,
validation=val_path,
format = "csv",
fields=fields,
)
len(train_data), len(test_data), len(val_data)
print(vars(train_data.examples[0]))
###Output
{'text': ['What', 'is', 'the', 'name', 'of', 'Miss', 'India', '1994', '?', 'India 1994', 'the name', 'Miss India', 'name of', 'is the', 'of Miss', 'What is', '1994 ?'], 'label': 'HUM'}
###Markdown
Building the Vocabulary and Loading the `pretrained` word vectors. We are going to use the `glove.6B.100d` word vectors, which were trained on 6 billion tokens, with each word represented by a 100-dimensional vector.**Note:** We should build the vocabulary on the `train` dataset only.
###Code
MAX_VOCAB_SIZE = 100_000_000
TEXT.build_vocab(
train_data,
max_size = MAX_VOCAB_SIZE,
vectors = "glove.6B.100d",
unk_init = torch.Tensor.normal_
)
LABEL.build_vocab(train_data)
###Output
_____no_output_____
###Markdown
Device.
###Code
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
LABEL.vocab.stoi
###Output
_____no_output_____
###Markdown
Creating iterators. We are going to use our favorite iterator, the `BucketIterator`, to create iterators for all the sets that we have. For the `batch_size`, this time around we are going to try a larger batch.
###Code
BATCH_SIZE = 128
train_iter, val_iter, test_iter = data.BucketIterator.splits(
(train_data, val_data, test_data),
device = device,
batch_size = BATCH_SIZE,
sort_key = lambda x: len(x.text),
)
###Output
_____no_output_____
###Markdown
Creating the Model.
###Code
class QuestionsFastText(nn.Module):
def __init__(self,
vocab_size,
embedding_size,
output_dim,
pad_index,
):
super(QuestionsFastText, self).__init__()
self.embedding = nn.Embedding(
vocab_size,
embedding_size,
padding_idx = pad_index
)
self.out = nn.Linear(
embedding_size,
out_features = output_dim
)
def forward(self, text):
embedded = self.embedding(text).permute(1 ,0, 2)
pooled = F.avg_pool2d(embedded,
(embedded.shape[1], 1)
).squeeze(1)
return self.out(pooled)
###Output
_____no_output_____
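The forward pass above simply averages the embedded n-gram vectors over the sequence and feeds the mean to a single linear layer. A quick standalone check (with illustrative shapes) that the ``avg_pool2d`` call is just a mean over the sequence dimension:

```python
import torch
import torch.nn.functional as F

emb = torch.randn(2, 7, 100)                               # [batch, seq_len, emb_dim]
pooled = F.avg_pool2d(emb, (emb.shape[1], 1)).squeeze(1)   # pool over the whole sequence
print(torch.allclose(pooled, emb.mean(dim=1)))             # True
```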
###Markdown
Creating the model instance.
###Code
INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
OUTPUT_DIM = 6
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
questions_model = QuestionsFastText(
INPUT_DIM,
EMBEDDING_DIM,
OUTPUT_DIM,
pad_index = PAD_IDX
).to(device)
questions_model
###Output
_____no_output_____
###Markdown
Model parameters
###Code
def count_trainable_params(model):
return sum(p.numel() for p in model.parameters()), sum(p.numel() for p in model.parameters() if p.requires_grad)
n_params, trainable_params = count_trainable_params(questions_model)
print(f"Total number of paramaters: {n_params:,}\nTotal tainable parameters: {trainable_params:,}")
###Output
Total number of paramaters: 3,726,506
Total tainable parameters: 3,726,506
###Markdown
Loading the pretrained vectors into the embedding layer.
###Code
pretrained_embeddings = TEXT.vocab.vectors
questions_model.embedding.weight.data.copy_(pretrained_embeddings)
###Output
_____no_output_____
###Markdown
Zeroing the `<unk>` and `<pad>` tokens.
###Code
UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]
questions_model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
questions_model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)
questions_model.embedding.weight.data
###Output
_____no_output_____
###Markdown
Loss and optimizer. We are going to use Adam as our optimizer with the default learning rate. We are also going to use `CrossEntropyLoss()` as our loss function.
###Code
optimizer = torch.optim.Adam(questions_model.parameters())
criterion = nn.CrossEntropyLoss().to(device)
###Output
_____no_output_____
###Markdown
Accuracy function. We are going to create the `categorical_accuracy()` function that will calculate the categorical accuracy between the predicted and actual labels.
###Code
def categorical_accuracy(preds, y):
top_pred = preds.argmax(1, keepdim = True)
correct = top_pred.eq(y.view_as(top_pred)).sum()
return correct.float() / y.shape[0]
###Output
_____no_output_____
###Markdown
Training and Evaluation functions.
###Code
def train(model, iterator, optimizer, criterion):
epoch_loss ,epoch_acc = 0, 0
model.train()
for batch in iterator:
optimizer.zero_grad()
text = batch.text
predictions = model(text).squeeze(1)
loss = criterion(predictions, batch.label)
acc = categorical_accuracy(predictions, batch.label)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
def evaluate(model, iterator, criterion):
epoch_loss , epoch_acc = 0, 0
model.eval()
with torch.no_grad():
for batch in iterator:
text = batch.text
predictions = model(text)
loss = criterion(predictions, batch.label)
acc = categorical_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
###Output
_____no_output_____
###Markdown
Training loop. We are going to create helper functions that will help us to visualize our training. 1. Time to string
###Code
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return "{}:{:>02}:{:>05.2f}".format(h, m, s)
###Output
_____no_output_____
###Markdown
2. tabulate training epoch.
###Code
def visualize_training(start, end, train_loss, train_accuracy, val_loss, val_accuracy, title):
data = [
["Training", f'{train_loss:.3f}', f'{train_accuracy:.3f}', f"{hms_string(end - start)}" ],
["Validation", f'{val_loss:.3f}', f'{val_accuracy:.3f}', "" ],
]
table = PrettyTable(["CATEGORY", "LOSS", "ACCURACY", "ETA"])
table.align["CATEGORY"] = 'l'
table.align["LOSS"] = 'r'
table.align["ACCURACY"] = 'r'
table.align["ETA"] = 'r'
table.title = title
for row in data:
table.add_row(row)
print(table)
N_EPOCHS = 100
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start = time.time()
train_loss, train_acc = train(questions_model, train_iter, optimizer, criterion)
valid_loss, valid_acc = evaluate(questions_model, val_iter, criterion)
title = f"EPOCH: {epoch+1:02}/{N_EPOCHS:02} {'saving best model...' if valid_loss < best_valid_loss else 'not saving...'}"
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(questions_model.state_dict(), 'best-model.pt')
end = time.time()
visualize_training(start, end, train_loss, train_acc, valid_loss, valid_acc, title)
###Output
+--------------------------------------------+
| EPOCH: 01/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 1.751 | 0.249 | 0:00:00.30 |
| Validation | 1.641 | 0.346 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 02/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 1.683 | 0.298 | 0:00:00.44 |
| Validation | 1.529 | 0.433 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 03/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 1.620 | 0.383 | 0:00:00.42 |
| Validation | 1.408 | 0.516 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 04/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 1.553 | 0.491 | 0:00:00.41 |
| Validation | 1.269 | 0.606 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 05/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 1.472 | 0.580 | 0:00:00.41 |
| Validation | 1.114 | 0.669 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 06/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 1.378 | 0.670 | 0:00:00.41 |
| Validation | 0.952 | 0.730 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 07/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 1.266 | 0.737 | 0:00:00.41 |
| Validation | 0.798 | 0.782 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 08/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 1.158 | 0.793 | 0:00:00.42 |
| Validation | 0.658 | 0.823 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 09/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 1.023 | 0.830 | 0:00:00.41 |
| Validation | 0.540 | 0.854 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 10/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.922 | 0.858 | 0:00:00.41 |
| Validation | 0.446 | 0.880 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 11/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.802 | 0.886 | 0:00:00.41 |
| Validation | 0.368 | 0.903 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 12/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.723 | 0.903 | 0:00:00.41 |
| Validation | 0.310 | 0.920 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 13/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.623 | 0.921 | 0:00:00.29 |
| Validation | 0.261 | 0.932 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 14/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.559 | 0.933 | 0:00:00.29 |
| Validation | 0.222 | 0.942 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 15/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.484 | 0.941 | 0:00:00.30 |
| Validation | 0.188 | 0.951 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 16/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.427 | 0.950 | 0:00:00.45 |
| Validation | 0.162 | 0.958 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 17/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.372 | 0.955 | 0:00:00.41 |
| Validation | 0.139 | 0.964 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 18/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.336 | 0.963 | 0:00:00.41 |
| Validation | 0.121 | 0.970 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 19/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.295 | 0.967 | 0:00:00.41 |
| Validation | 0.105 | 0.974 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 20/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.266 | 0.972 | 0:00:00.41 |
| Validation | 0.092 | 0.979 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 21/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.243 | 0.977 | 0:00:00.42 |
| Validation | 0.082 | 0.981 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 22/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.219 | 0.980 | 0:00:00.41 |
| Validation | 0.072 | 0.984 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 23/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.198 | 0.983 | 0:00:00.41 |
| Validation | 0.066 | 0.985 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 24/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.175 | 0.985 | 0:00:00.41 |
| Validation | 0.059 | 0.986 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 25/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.162 | 0.987 | 0:00:00.41 |
| Validation | 0.054 | 0.988 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 26/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.148 | 0.988 | 0:00:00.29 |
| Validation | 0.049 | 0.989 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 27/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.136 | 0.989 | 0:00:00.29 |
| Validation | 0.044 | 0.990 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 28/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.126 | 0.990 | 0:00:00.31 |
| Validation | 0.041 | 0.991 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 29/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.116 | 0.991 | 0:00:00.45 |
| Validation | 0.038 | 0.992 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 30/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.108 | 0.991 | 0:00:00.41 |
| Validation | 0.035 | 0.993 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 31/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.095 | 0.993 | 0:00:00.42 |
| Validation | 0.032 | 0.993 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 32/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.090 | 0.993 | 0:00:00.40 |
| Validation | 0.030 | 0.994 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 33/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.087 | 0.993 | 0:00:00.41 |
| Validation | 0.028 | 0.994 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 34/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.078 | 0.994 | 0:00:00.41 |
| Validation | 0.025 | 0.994 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 35/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.073 | 0.994 | 0:00:00.41 |
| Validation | 0.024 | 0.995 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 36/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.067 | 0.995 | 0:00:00.41 |
| Validation | 0.022 | 0.995 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 37/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.066 | 0.995 | 0:00:00.41 |
| Validation | 0.021 | 0.996 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 38/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.057 | 0.995 | 0:00:00.42 |
| Validation | 0.019 | 0.996 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 39/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.054 | 0.996 | 0:00:00.28 |
| Validation | 0.018 | 0.997 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 40/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.054 | 0.996 | 0:00:00.29 |
| Validation | 0.017 | 0.997 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 41/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.047 | 0.996 | 0:00:00.31 |
| Validation | 0.015 | 0.997 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 42/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.048 | 0.997 | 0:00:00.45 |
| Validation | 0.014 | 0.997 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 43/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.044 | 0.997 | 0:00:00.41 |
| Validation | 0.013 | 0.997 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 44/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.041 | 0.997 | 0:00:00.41 |
| Validation | 0.012 | 0.997 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 45/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.039 | 0.997 | 0:00:00.41 |
| Validation | 0.012 | 0.997 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 46/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.038 | 0.997 | 0:00:00.41 |
| Validation | 0.011 | 0.997 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 47/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.037 | 0.997 | 0:00:00.42 |
| Validation | 0.010 | 0.997 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 48/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.034 | 0.997 | 0:00:00.41 |
| Validation | 0.009 | 0.998 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 49/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.031 | 0.997 | 0:00:00.41 |
| Validation | 0.008 | 0.998 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 50/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.030 | 0.997 | 0:00:00.41 |
| Validation | 0.007 | 0.998 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 51/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.029 | 0.997 | 0:00:00.41 |
| Validation | 0.007 | 0.998 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 52/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.028 | 0.998 | 0:00:00.29 |
| Validation | 0.006 | 0.998 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 53/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.026 | 0.998 | 0:00:00.29 |
| Validation | 0.005 | 0.998 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 54/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.025 | 0.998 | 0:00:00.29 |
| Validation | 0.005 | 0.998 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 55/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.023 | 0.998 | 0:00:00.31 |
| Validation | 0.004 | 0.998 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 56/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.022 | 0.998 | 0:00:00.44 |
| Validation | 0.004 | 0.998 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 57/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.021 | 0.997 | 0:00:00.41 |
| Validation | 0.003 | 0.999 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 58/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.020 | 0.998 | 0:00:00.41 |
| Validation | 0.003 | 0.999 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 59/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.021 | 0.998 | 0:00:00.40 |
| Validation | 0.003 | 0.999 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 60/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.019 | 0.998 | 0:00:00.41 |
| Validation | 0.002 | 0.999 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 61/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.018 | 0.998 | 0:00:00.41 |
| Validation | 0.002 | 0.999 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 62/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.017 | 0.999 | 0:00:00.41 |
| Validation | 0.002 | 0.999 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 63/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.017 | 0.999 | 0:00:00.41 |
| Validation | 0.001 | 0.999 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 64/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.016 | 0.999 | 0:00:00.42 |
| Validation | 0.001 | 0.999 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 65/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.015 | 0.999 | 0:00:00.41 |
| Validation | 0.001 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 66/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.015 | 0.999 | 0:00:00.31 |
| Validation | 0.001 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 67/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.015 | 0.999 | 0:00:00.30 |
| Validation | 0.001 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 68/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.013 | 1.000 | 0:00:00.32 |
| Validation | 0.001 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 69/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.013 | 1.000 | 0:00:00.45 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 70/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.013 | 1.000 | 0:00:00.38 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 71/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.012 | 1.000 | 0:00:00.43 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 72/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.011 | 1.000 | 0:00:00.41 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 73/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.011 | 1.000 | 0:00:00.42 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 74/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.010 | 1.000 | 0:00:00.41 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 75/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.009 | 1.000 | 0:00:00.42 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 76/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.010 | 1.000 | 0:00:00.41 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 77/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.010 | 1.000 | 0:00:00.40 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 78/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.009 | 1.000 | 0:00:00.42 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 79/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.009 | 1.000 | 0:00:00.31 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 80/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.009 | 1.000 | 0:00:00.31 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 81/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.008 | 1.000 | 0:00:00.32 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 82/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.008 | 1.000 | 0:00:00.44 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 83/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.008 | 1.000 | 0:00:00.42 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 84/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.007 | 1.000 | 0:00:00.42 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 85/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.007 | 1.000 | 0:00:00.32 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 86/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.007 | 1.000 | 0:00:00.44 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 87/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.007 | 1.000 | 0:00:00.41 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 88/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.007 | 1.000 | 0:00:00.42 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 89/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.006 | 1.000 | 0:00:00.41 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 90/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.006 | 1.000 | 0:00:00.41 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 91/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.006 | 1.000 | 0:00:00.41 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 92/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.006 | 1.000 | 0:00:00.42 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 93/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.006 | 1.000 | 0:00:00.32 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 94/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.006 | 1.000 | 0:00:00.31 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 95/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.006 | 1.000 | 0:00:00.44 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 96/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.005 | 1.000 | 0:00:00.42 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 97/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.005 | 1.000 | 0:00:00.41 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 98/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.005 | 1.000 | 0:00:00.39 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 99/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.004 | 1.000 | 0:00:00.43 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 100/100 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.004 | 1.000 | 0:00:00.42 |
| Validation | 0.000 | 1.000 | |
+------------+-------+----------+------------+
###Markdown
Model Evaluation.
###Code
questions_model.load_state_dict(torch.load('best-model.pt'))
test_loss, test_acc = evaluate(questions_model, test_iter, criterion)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
###Output
Test Loss: 0.000 | Test Acc: 100.00%
###Markdown
Model Inference.We are now ready to make predictions with our model.
###Code
import en_core_web_sm
nlp = en_core_web_sm.load()
reversed_labels = dict([(v, k) for (k, v) in LABEL.vocab.stoi.items()])
reversed_labels
def tabulate(column_names, data, title="QUESTIONS PREDICTIONS TABLE"):
table = PrettyTable(column_names)
table.align[column_names[0]] = "l"
table.align[column_names[1]] = "l"
for row in data:
table.add_row(row)
print(table)
def predict_question_type(model, sentence, min_len = 5, actual_class=0):
model.eval()
with torch.no_grad():
tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
if len(tokenized) < min_len:
tokenized += ['<pad>'] * (min_len - len(tokenized))
indexed = [TEXT.vocab.stoi[t] for t in tokenized]
tensor = torch.LongTensor(indexed).to(device).unsqueeze(1)
probabilities = model(tensor)
prediction = torch.argmax(probabilities, dim=1)
prediction = prediction.item()
table_headers =["KEY", "VALUE"]
table_data = [
["PREDICTED CLASS", prediction],
["ACTUAL CLASS", actual_class],
["PREDICTED CLASS NAME", reversed_labels[prediction]],
]
tabulate(table_headers, table_data)
reversed_labels
###Output
_____no_output_____
###Markdown
Location
###Code
predict_question_type(questions_model, "What are the largest libraries in the US ?", actual_class=4)
###Output
+----------------------+-------+
| KEY | VALUE |
+----------------------+-------+
| PREDICTED CLASS | 4 |
| ACTUAL CLASS | 4 |
| PREDICTED CLASS NAME | LOC |
+----------------------+-------+
###Markdown
Human
###Code
predict_question_type(questions_model, "Who is John Macarthur , 1767-1834 ?", actual_class=1)
###Output
+----------------------+-------+
| KEY | VALUE |
+----------------------+-------+
| PREDICTED CLASS | 1 |
| ACTUAL CLASS | 1 |
| PREDICTED CLASS NAME | HUM |
+----------------------+-------+
###Markdown
DESCRIPTION
###Code
predict_question_type(questions_model, "What is the root of all evil ? ", actual_class=2)
###Output
+----------------------+-------+
| KEY | VALUE |
+----------------------+-------+
| PREDICTED CLASS | 2 |
| ACTUAL CLASS | 2 |
| PREDICTED CLASS NAME | DESC |
+----------------------+-------+
###Markdown
Numeric
###Code
predict_question_type(questions_model, "How many watts make a kilowatt ?", actual_class=3)
###Output
+----------------------+-------+
| KEY | VALUE |
+----------------------+-------+
| PREDICTED CLASS | 3 |
| ACTUAL CLASS | 3 |
| PREDICTED CLASS NAME | NUM |
+----------------------+-------+
###Markdown
ENTITY
###Code
predict_question_type(questions_model, "What films featured the character Popeye Doyle ?", actual_class=0)
###Output
+----------------------+-------+
| KEY | VALUE |
+----------------------+-------+
| PREDICTED CLASS | 0 |
| ACTUAL CLASS | 0 |
| PREDICTED CLASS NAME | ENTY |
+----------------------+-------+
###Markdown
ABBREVIATION
###Code
predict_question_type(questions_model, "What does NECROSIS mean ?", actual_class=5)
###Output
+----------------------+-------+
| KEY | VALUE |
+----------------------+-------+
| PREDICTED CLASS | 2 |
| ACTUAL CLASS | 5 |
| PREDICTED CLASS NAME | DESC |
+----------------------+-------+
|
notebooks/old/training_dnn-test-Copy1.ipynb | ###Markdown
This notebook is a step-by-step guide to training a deep neural network (DNN) in the DeepDeconv framework.
###Code
## Set up the sys.path in order to be able to import our modules
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
import keras.utils
import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
import tensorflow as tf
if tf.test.gpu_device_name():
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
else:
print("Please install GPU version of TF")
tf.__path__
from DeepDeconv.deepnetFCS.DeconvNet import DeconvNet
from deepnetFCS.DeconvNet_custom import UNet2D
nb_scales = 2 #4
nb_layers_per_block = [2,2] #[2,2]#,2] #[4,5,6,7]
nb_filters=8
activation_function= 'relu' #'swish'
resNet=True
layer_string='layer{0}'.format(nb_layers_per_block[0])
for k in range(1,len(nb_layers_per_block)):
layer_string+='x{0}'.format(nb_layers_per_block[k])
network_name='UNet2D_FCS_sc{0}_{1}_{2}_filt{3}'.format(nb_scales,layer_string,activation_function,nb_filters)
if resNet:
network_name+='_resNet'
print("Network Name:",network_name)
dnn = UNet2D(network_name = network_name, img_rows = 96, img_cols = 96, model_file='', verbose=True,
filters=nb_filters,nb_scales=nb_scales, nb_layers_per_block=nb_layers_per_block,
activation_function=activation_function,resNet=resNet)
from keras.utils import plot_model
plot_model(dnn.model, to_file='{0}.png'.format(network_name),show_shapes=True)
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
pydot_obj=model_to_dot(dnn.model,show_shapes=True,show_layer_names=False)
SVG(pydot_obj.create(prog='dot', format='svg'))
from astropy.io import fits as fits
from matplotlib import pyplot as plt
#Input the directory containing the fits file
data_directory = '/data/DeepDeconv/data/vsc_euclidpsfs/'
#Retrieves the list of all the files
import glob
gal_files = glob.glob(data_directory+'image-0*-0.fits')
gal_files.sort()
print(gal_files)
ff=fits.open(gal_files[0])
plt.figure()
for k in range(5):
plt.subplot(2,3,k+1),plt.imshow(ff[k].data[0:96,0:96])
#HDU 0: noisy
#HDU 1: noise free convolved with euclid PSF
#HDU 2: noise free convolved with large PSF (gauss 0.15 FWHM)
#HDU 3: euclid PSF
#HDU 4: noise free convolved with target PSF (gauss 0.07 FWHM)
plt.figure()
plt.imshow(ff[1].data[0:96,0:96]-ff[4].data[0:96,0:96])
#SNR = [20,100]
SNR=100
noiseless_img_hdu = 1
targets_hdu = 4
psf_hdu = 3
deconv_mode = 'TIKHONOV'
#Train with the image-000-0.fits as validation and all the other files as training set
dnn.train_generator(gal_files[1:3], gal_files[0], epochs=4, batch_size=32,
nb_img_per_file=10000, validation_set_size=1000,
noise_std=None, SNR=SNR, model_file='',
noiseless_img_hdu=noiseless_img_hdu, targets_hdu=targets_hdu, psf_hdu=psf_hdu,
image_dim=96, image_per_row=100,
deconv_mode=deconv_mode)
#The train_generator is:
#1) running get_batch_from_fits for validation data: read files, deconv if necessary, return as [ngal,X2D,Y2D,1]
#2) setting a checkpoint for model, saving the model if lower validation loss
#3) using a generator function to obtain dynamic batches: dynamic_batches
# that I modified because it was assuming nb_img_per_file to be 10000 (hardcoded)
#4) running fit_generator with logging and checkpoint callbacks
#I modified
###Output
Model will be saved at /home/fsureau/programs/DeepDeconv/UNet2D_FCS_sc2_layer2x2_relu_filt8_resNet.hdf5
Memory usage for the model + one batch (GB): 0.226000
|
IF402AMI/Gauss_Seidel.ipynb | ###Markdown
###Code
# Defining our function as seidel which takes 3 arguments
# as A matrix, Solution and B matrix
def seidel(a, x ,b):
#Finding length of a(3)
n = len(a)
# for loop for 3 times as to calculate x, y , z
for j in range(0, n):
# temp variable d to store b[j]
d = b[j]
# to calculate respective xi, yi, zi
for i in range(0, n):
if(j != i):
d-=a[j][i] * x[i]
# updating the value of our solution
x[j] = d / a[j][j]
# returning our updated solution
return x
# number of variables to solve for (could also be read with int(input()))
n = 3
a = []
b = []
# initial solution depending on n(here n=3)
x = [0, 0, 0]
a = [[3, -0.1, -0.2],[0.1, 7, -0.3],[0.3, -0.2, 10]]
#a = [[4, 1, 2],[3, 5, 1],[1, 1, 3]]
#b = [4,7,3]
b = [7.85,-19.3,71.4]
print(x)
# run a fixed number of sweeps; one could instead stop once the change between iterations falls below an error tolerance
for i in range(0, 25):
x = seidel(a, x, b)
#print each time the updated solution
print(x)
###Output
[0, 0, 0]
[2.6166666666666667, -2.7945238095238096, 7.005609523809525]
[2.990556507936508, -2.499624684807256, 7.00029081106576]
[3.0000318979108087, -2.499987992353051, 6.999999283215615]
[3.000000352469273, -2.5000000357546064, 6.99999998871083]
[2.9999999980555683, -2.500000000456044, 7.000000000049214]
[2.9999999999880793, -2.4999999999977205, 7.000000000000403]
[3.000000000000103, -2.4999999999999845, 6.999999999999997]
[3.0, -2.5000000000000004, 7.0]
[3.0, -2.5, 7.0]
[3.0, -2.5, 7.0]
[3.0, -2.5, 7.0]
[3.0, -2.5, 7.0]
[3.0, -2.5, 7.0]
[3.0, -2.5, 7.0]
[3.0, -2.5, 7.0]
[3.0, -2.5, 7.0]
[3.0, -2.5, 7.0]
[3.0, -2.5, 7.0]
[3.0, -2.5, 7.0]
[3.0, -2.5, 7.0]
[3.0, -2.5, 7.0]
[3.0, -2.5, 7.0]
[3.0, -2.5, 7.0]
[3.0, -2.5, 7.0]
[3.0, -2.5, 7.0]
|
ATSC_500/ATSC_500_Assignment_VII_Stull_Chap_5_6.ipynb | ###Markdown
ATSC-500 Assignment VII (Stull Chap. 5 & 6)
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
% matplotlib inline
###Output
_____no_output_____
###Markdown
Chapter V - Q3Given the following wind speeds measured at various heights in the boundary layer:| z [m] | U [m/s] || ------ | ------- || 2000 | 10.0 || 1000 | 10.0 || 500 | 9.5 || 300 | 9.0 || 100 | 8.0 || 50 | 7.4 || 20 | 6.5 || 10 | 5.8 || 4 | 5.0 || 1 | 3.7 |Assume that the potential temperature increases with height at the constant rate of 6 K/km. Calculate the bulk Richardson number for each layer and indicate the static and dynamic stability of each layer. Also, show what part of the atmosphere is expected to be turbulent in these conditions.**Ans**Recall the definition of the bulk Richardson number ($R_B$):$$R_B = \frac{g\Delta\overline{\theta_v}\Delta z}{\overline{\theta_v}\left[\left(\Delta\overline{U}\right)^2+\left(\Delta\overline{V}\right)^2\right]}$$Based on the given condition, $\Delta\overline{\theta_v}/\Delta z = 6\times 10^{-3}\ \mathrm{K\cdot m^{-1}}$. Since $\overline{\theta_v}$ is not given in the question, we take $\overline{\theta_v} = 273.15\ K$ as an example.Based on the definitions of static and dynamic stability, the flow is dynamically unstable if $R_B < R_c$ and dynamically stable if $R_B > R_T$. The critical values here are taken as $R_c = 0.21, R_T = 1.0$.
###Code
Z = np.array([1, 4, 10, 20, 50, 100, 300, 500, 1000, 2000])
U = np.array([3.7, 5.0, 5.8, 6.5, 7.4, 8.0, 9.0, 9.5, 10.0, 10.0])
Theta = 6e-3*Z + 273.15
def Rb_1d(Z, U, Theta):
dZ = np.gradient(Z, edge_order=2)
dU = np.gradient(U, edge_order=2)
dTheta = np.gradient(Theta, edge_order=2)
return (9.8*dTheta*dZ)/(Theta*dU**2)
Rb = Rb_1d(Z, U, Theta)
stats_flag = Rb > 0
dyn_flag1 = Rb > 1.0
dyn_flag2 = Rb < 0.21
df = pd.DataFrame()
df['Height [m]'] = Z; df['Rb'] = Rb
df['statically stable'] = stats_flag
df['dynamically stable'] = dyn_flag1
df['dynamically unstable'] = dyn_flag2
df
###Output
_____no_output_____
###Markdown
All the layers in this measurement are statically stable because the potential temperature increases with height; the positive potential-temperature gradient keeps $R_B$ positive.The layer from the surface to 20 m is dynamically unstable due to the strong wind shear, which contributes to the total TKE budget through mechanical production. Kelvin-Helmholtz waves could be found in this layer. As the wind shear decreases with height, the mechanical production decreases and the buoyant production dominates the TKE budget. Thus, the inversion makes all the layers above 100 m dynamically stable.The layer between 20 and 50 m is neither clearly dynamically stable nor unstable; its stability depends on its previous state: the air in this layer will remain turbulent (non-turbulent) if it was turbulent (non-turbulent) before the measurement. Chapter V - Q17Given the following data:$$\begin{equation*}\begin{array}{ll}\overline{w^{'}\theta^{'}}=0.2\ \mathrm{K\cdot m\cdot s^{-1}} & u_* = 0.2\ \mathrm{m\cdot s^{-1}} \\z_i = 500\ \mathrm{m} & k = 0.4 \\\displaystyle\frac{g}{\overline{\theta}} = 0.0333\ \mathrm{m\cdot s^{-2}\cdot K^{-1}} & z = 6\ \mathrm{m} \\z_O = 0.01\ \mathrm{m} & \mathrm{no\ moisture}\end{array}\end{equation*}$$Find:$$\begin{equation*}\begin{array}{ll}L & R_f\ \mathrm{at\ 6m\ (make\ assumptions\ to\ find\ this)} \\z/L & R_i\ \mathrm{at\ 6m\ (make\ assumptions\ to\ find\ this)} \\w_* & \mathrm{dynamic\ stability} \\\theta_* & \mathrm{flow\ state\ (turbulent\ or\ not)} \\\mathrm{static\ stability} & \\\end{array}\end{equation*}$$**Ans**a, b & e) Recall the definition of the Obukhov length:$$L = \frac{-\overline{\theta_v u_*^3}}{k g \left(\overline{w^{'}\theta_v^{'}}\right)}$$When the surface-layer scaling parameter $z/L$ is positive, the layer is statically stable. c & d) The convective velocity scale and temperature scale are defined as:$$w_* = \left[\frac{gz_i}{\overline{\theta_v}}\left(\overline{w^{'}\theta_v^{'}}\right)\right]^{\frac{1}{3}}$$$$\theta_* = \frac{\left(\overline{w^{'}\theta_v^{'}}\right)}{w_*}$$f, g, h & i) By the definitions of the flux and gradient Richardson numbers:$$R_f = \frac{\displaystyle\frac{g}{\overline{\theta}}\left(\overline{w^{'}\theta_v^{'}}\right)}{\left(\overline{u_i^{'}u_j^{'}}\right)\displaystyle\frac{\partial\overline{U_i}}{\partial x_j}}$$$$R_i = \frac{\displaystyle\frac{g}{\overline{\theta}}\frac{\partial\overline{\theta_v}}{\partial z}}{\left[\left(\frac{\partial\overline{U}}{\partial z}\right)^2+\left(\frac{\partial\overline{V}}{\partial z}\right)^2\right]}$$According to similarity theory, the mean wind speed is related to the roughness length and friction velocity *(assuming $\overline{V} = 0$)*:$$\frac{\overline{U}}{u_*} = \frac{1}{k}\ln\frac{z}{z_O} $$$$\frac{\partial\overline{U}}{\partial z} = \frac{u_*}{k\cdot z\cdot z_O}$$The friction velocity is also related to the Reynolds stress:$$u_*^2 = \frac{\tau}{\rho} = \overline{u_i^{'}u_j^{'}}$$For the gradient Richardson number, here we *assume the air follows a dry adiabatic process, $\displaystyle\frac{\partial{\theta}}{\partial z} = 0$, as there is no moisture.*
###Code
w_theta = 0.2; u_s = 0.2
zi = 500; k = 0.4
g_theta = 0.0333; z = 6
z_r = 0.01
L = (-1.0/g_theta)*(u_s**3)/w_theta/k
zeta = z/L
w_s = (zi*g_theta*w_theta)**(1/3)
theta_s = w_theta/w_s
gradU = k*u_s/(z*z_r)
uu = u_s**2
gamma = -9.8e-3
Rf = g_theta*w_theta/gradU/uu
Ri = 0
print("a) Obukhov length: {} [m]".format(L))
print("b) Surface-layer scaling parameter: {}".format(zeta))
print("c) Convective velocity scale {} [m/s]".format(w_s))
print("d) Temperature scale {} [K]".format(theta_s))
print("e) Statically unstable (zeta < 0)")
print("f) Rf = {}".format(Rf))
print("g) Ri = {}, assuming dry adiabatic".format(Ri))
print("h) Dynamically unstable (Ri < Rc)")
print("i) Likely to be turbulent (Rf < 1, Ri < Rc)")
###Output
a) Obukhov length: -3.003003003003003 [m]
b) Surface-layer scaling parameter: -1.9980000000000002
c) Convective velocity scale 1.493303482254768 [m/s]
d) Temperature scale 0.13393124865550848 [K]
e) Statically unstable (zeta < 0)
f) Rf = 0.12487499999999996
g) Ri = 0, assuming dry adiabatic
h) Dynamically unstable (Ri < Rc)
i) Likely to be turbulent (Rf < 1, Ri < Rc)
###Markdown
Chapter VI - Q14, 1514) Let $K_m = 5\ \mathrm{m^2\cdot s^{-1}}$ constant with height. Calculate and plot:$$u^{'}w^{'}\qquad\qquad w^{'}\theta_v^{'}$$from 0 to 50 m using the data from problem 26 of Chapt. 5| $z$ [m] | $\overline{\theta_v}$ [K] | $\overline{U}$ [m/s] || -------- | ------------------------- | -------------------- || 50 | 300 | 14 || 40 | 298 | 10 || 30 | 294 | 8 || 20 | 292 | 7 || 10 | 292 | 7 || 0 | 293 | 2 |15.a) Using the answers from problem (14) above, find the initial tendency for virtual potential temperature for air at a height of 10 m.b) If this tendency does not change with time, what is the new $\mathrm{\overline{\theta_v}}$ at 10 m, onehour after the initial state (i.e., the state of problem 26, Chapt. 5 )?**Ans**According to K-theory:$$\overline{u_j^{'}\zeta^{'}} = -K\frac{\partial \overline{\zeta}}{\partial x_j}$$In the local closure, the initial tendency for virtual potential temperature is described as:$$\frac{\partial \overline{\theta}}{\partial t} = -\frac{\partial \left(\overline{w^{'}\theta_v^{'}}\right)}{\partial z}$$
###Code
K = 5
z = np.array([0, 10, 20, 30, 40, 50])
theta_v = np.array([293, 292, 292, 294, 298, 300])
U = np.array([2, 7, 7, 8, 10, 14])
dz = np.gradient(z, edge_order=2)
uw = -1*K*np.gradient(U, edge_order=2)/dz
w_theta = -1*K*np.gradient(theta_v, edge_order=2)/dz
theta_trend = -1*np.gradient(w_theta)/dz
theta_v_t1 = theta_trend*60*60+theta_v
print("Height: {}".format(z))
print("U-flux: {}".format(uw))
print("T-flux: {}".format(w_theta))
print("The initial tendency for theta_v at 10 m is {} [K/s]".format(theta_trend[1]))
print("One hour later, theta_v at 10 m is {} [K]".format(theta_v_t1[1]))
fig = plt.figure(figsize=(3, 4))
ax = fig.gca()
ax.grid(linestyle=':')
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.xaxis.set_tick_params(labelsize=14)
ax.yaxis.set_tick_params(labelsize=14)
[j.set_linewidth(2.5) for j in ax.spines.values()]
ax.tick_params(axis="both", which="both", bottom="off", top="off", \
labelbottom="on", left="off", right="off", labelleft="on")
ax.set_ylabel('Height [m]', fontsize=14)
ax.set_xlabel('Fluxes', fontsize=(14))
ax.plot(uw, z, lw=3, label='U-flux')
ax.plot(w_theta, z, lw=3, label='T-flux')
LG = ax.legend(bbox_to_anchor=(1.035, 1), prop={'size':14}); LG.draw_frame(False)
###Output
Height: [ 0 10 20 30 40 50]
U-flux: [-3.75 -1.25 -0.25 -0.75 -1.5 -2.5 ]
T-flux: [ 0.75 0.25 -0.5 -1.5 -1.5 -0.5 ]
The initial tendency for theta_v at 10 m is 0.0625 [K/s]
One hour later, theta_v at 10 m is 517.0 [K]
|
Machine_Learning/Regression.ipynb | ###Markdown
Regression - Machine Learning Mateus Victor GitHub: mateusvictor ObjectivesAfter completing this lab you will be able to:- Use scikit-learn to implement simple Linear Regression, Multiple Linear Regression and Polynomial Regression.- Create a model, train, test and use the model Setup and downloading data
###Code
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
# To create the linear regression model
from sklearn import linear_model
# Metric to evaluate the model
from sklearn.metrics import r2_score
# Derives new feature sets from the original feature set
from sklearn.preprocessing import PolynomialFeatures
%matplotlib inline
#Downloading the .csv file
!wget -O FuelConsumption.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv
# Converting the .csv file to a pandas dataframe
df = pd.read_csv("FuelConsumption.csv")
df.head()
###Output
_____no_output_____
###Markdown
Understanding the Data `FuelConsumption.csv`:We have downloaded a fuel consumption dataset, **`FuelConsumption.csv`**, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. [Dataset source](http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ)- **MODELYEAR** e.g. 2014- **MAKE** e.g. Acura- **MODEL** e.g. ILX- **VEHICLE CLASS** e.g. SUV- **ENGINE SIZE** e.g. 4.7- **CYLINDERS** e.g 6- **TRANSMISSION** e.g. A6- **FUEL CONSUMPTION in CITY(L/100 km)** e.g. 9.9- **FUEL CONSUMPTION in HWY (L/100 km)** e.g. 8.9- **FUEL CONSUMPTION COMB (L/100 km)** e.g. 9.2- **CO2 EMISSIONS (g/km)** e.g. 182 --> low --> 0
###Code
# Selecting some features
cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
cdf.head(9)
###Output
_____no_output_____
###Markdown
Data Vizualization
###Code
# Relation between FUELCONSUMPTION and CO2EMISSIONS
plt.figure(figsize=(8, 6))
plt.scatter(cdf.FUELCONSUMPTION_COMB, cdf.CO2EMISSIONS, color='blue')
plt.xlabel('Fuel Comsumption')
plt.ylabel('CO2 Emission')
plt.show()
# Relation between ENGINESIZE and CO2EMISSIONS
plt.figure(figsize=(8, 6))
plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue')
plt.xlabel('Engine Size')
plt.ylabel('CO2 Emission')
plt.show()
# Relation between CYLINDERS and CO2EMISSIONS
plt.figure(figsize=(8, 6))
plt.scatter(cdf.CYLINDERS, cdf.CO2EMISSIONS, color='blue')
plt.xlabel('Cylinders')
plt.ylabel('CO2 Emission')
plt.show()
###Output
_____no_output_____
###Markdown
Creating the train and test dataset Let's split our dataset into train and test sets: 80% of the data for training and 20% for testing. We create a mask that selects random rows using the **np.random.rand()** function:
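The same 80/20 split could also be done with scikit-learn's **train_test_split**. The sketch below is illustrative only (the names `train_alt` and `test_alt` are ours, not part of the original notebook); the rest of the notebook keeps the mask-based split.

```python
# Equivalent 80/20 split with scikit-learn (illustrative, not used below)
from sklearn.model_selection import train_test_split

train_alt, test_alt = train_test_split(cdf, test_size=0.2, random_state=42)
print(len(train_alt), len(test_alt))
```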
###Code
msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]
# Let's vizualise the train and test data distribution
plt.figure(figsize=(8, 6))
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
plt.figure(figsize=(8, 6))
plt.scatter(test.ENGINESIZE, test.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
###Output
_____no_output_____
###Markdown
Simple Linear Regression Creating the model
###Code
regr = linear_model.LinearRegression()
train_x = train[['ENGINESIZE']]
train_y = train[['CO2EMISSIONS']]
# Fitting
regr.fit(train_x, train_y)
# Storing and Printing the coefficient and the intercption
intercept = regr.intercept_[0]
coefficients = regr.coef_[0]
print(f"Coefficient: {coefficients}")
print(f"Intercept: {intercept}")
print(f"Y = {intercept:.2f} + {coefficients[0]:.2f} * X")
# Ploting the output
plt.figure(figsize=(8, 6))
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
# Plot the linear function
plt.plot(train_x, intercept + train_x * coefficients[0], 'r')
plt.xlabel('Engine size')
plt.ylabel('Emission')
plt.show()
###Output
_____no_output_____
###Markdown
Evaluation Evaluation metrics:- Mean Absolute Error (MAE): the mean of the absolute value of the errors. This is the easiest metric to understand, since it is just the average error.- Mean Squared Error (MSE): the mean of the squared errors. It is more popular than the mean absolute error because it penalizes large errors more heavily: squaring increases large errors disproportionately compared with small ones.- Root Mean Squared Error (RMSE): the square root of the MSE, which puts the error back into the units of the target variable.- R-squared: not an error metric, but a popular measure of model accuracy. It represents how close the data are to the fitted regression line. The higher the R-squared, the better the model fits your data. The best possible score is 1.0, and it can be negative (because the model can be arbitrarily worse).
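As a minimal, self-contained illustration of how these metrics are computed (the arrays `y_true_demo` and `y_pred_demo` below are made-up numbers, not taken from this dataset):

```python
# Toy example of the metrics above (hypothetical values, not from FuelConsumption.csv)
import numpy as np
from sklearn.metrics import r2_score

y_true_demo = np.array([250.0, 180.0, 300.0, 210.0])
y_pred_demo = np.array([245.0, 190.0, 290.0, 220.0])

mae = np.mean(np.abs(y_pred_demo - y_true_demo))   # Mean Absolute Error
mse = np.mean((y_pred_demo - y_true_demo) ** 2)    # Mean Squared Error
rmse = np.sqrt(mse)                                # Root Mean Squared Error
r2 = r2_score(y_true_demo, y_pred_demo)            # R-squared
print(mae, mse, rmse, r2)
```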
###Code
test_x = test[['ENGINESIZE']]
test_y = test[['CO2EMISSIONS']]
y_hat = regr.predict(test_x)
print(f"Mean absolute errorr: %.2f" % np.mean(np.absolute(y_hat - test_y)))
print(f"Residual sum of squares: %.2f" % np.mean((y_hat - test_y) ** 2))
print(f"R2-score: {r2_score(test_y, y_hat):.2f}")
###Output
Mean absolute error: 24.39
Residual sum of squares: 1026.45
R2-score: 0.77
###Markdown
Multiple Linear Regression Creating the model
###Code
regr = linear_model.LinearRegression()
train_x = train[['ENGINESIZE', 'CYLINDERS', 'FUELCONSUMPTION_COMB']]
train_y = train[['CO2EMISSIONS']]
regr.fit(train_x, train_y)
# Storing and Printing the coefficients and the intercption
intercept = regr.intercept_[0]
coefficients = regr.coef_[0]
print(f"Coefficients: {coefficients}")
print(f"Intercept: {intercept}")
###Output
Coefficients: [10.9027431 7.71840794 9.44086076]
Intercept: 65.88595692659285
###Markdown
Ordinary Least Squares (OLS) OLS is a method for estimating the unknown parameters in a linear regression model. OLS chooses the parameters of a linear function of a set of explanatory variables by minimizing the sum of the squares of the differences between the target dependent variable and those predicted by the linear function. In other words, it tries to minimize the sum of squared errors (SSE) or mean squared error (MSE) between the target variable (y) and our predicted output ($\hat{y}$) over all samples in the dataset.OLS can find the best parameters using one of the following methods:- Solving the model parameters analytically using closed-form equations (the normal equations)- Using an optimization algorithm (Gradient Descent, Stochastic Gradient Descent, Newton’s Method, etc.)
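As a rough sketch of the closed-form route (the normal equations), shown only for illustration; the notebook itself relies on scikit-learn's **LinearRegression**, and the variable names `X_mat`, `y_vec`, and `theta` are ours:

```python
# Closed-form OLS: theta = (X^T X)^{-1} X^T y, solved without forming an explicit inverse
import numpy as np

X_mat = np.column_stack([np.ones(len(train)),   # intercept column
                         train[['ENGINESIZE', 'CYLINDERS', 'FUELCONSUMPTION_COMB']].values])
y_vec = train['CO2EMISSIONS'].values

theta = np.linalg.solve(X_mat.T @ X_mat, X_mat.T @ y_vec)
print(theta)  # [intercept, coef_ENGINESIZE, coef_CYLINDERS, coef_FUELCONSUMPTION_COMB]
```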
###Code
# Test data
test_x = test[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB']]
test_y = test[['CO2EMISSIONS']]
# Prediction
y_hat = regr.predict(test_x)
# Residual sum of squares
print("Residual Sum of Score %.2f" % np.mean((y_hat - test_y) ** 2))
# 1 is the perfect prediction
print("Variance score: %.2f" % regr.score(test_x, test_y))
###Output
Residual Sum of Score 556.06
Variance score: 0.88
###Markdown
Polynomial Regression Sometimes the trend of the data is not really linear and looks curvy. In this case we can use polynomial regression methods. In fact, many different regressions exist that can be used to fit whatever the dataset looks like, such as quadratic, cubic, and so on, up to arbitrarily high degrees.In essence, we can call all of these polynomial regression, where the relationship between the independent variable x and the dependent variable y is modeled as an nth-degree polynomial in x. Let's say we want a polynomial regression (here, a 2nd-degree polynomial):$$y = b + \theta_1 x + \theta_2 x^2$$Now, the question is: how can we fit our data to this equation when we only have x values, such as **Engine Size**? Well, we can create a few additional features: 1, $x$, and $x^2$.The **PolynomialFeatures()** function in the scikit-learn library derives a new feature set from the original feature set. That is, it generates a matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. For example, let's say the original feature set has only one feature, _ENGINESIZE_. If we select the degree of the polynomial to be 2, then it generates 3 features: degree=0, degree=1 and degree=2: Transforming
###Code
# Training and test set
train_x = train[['ENGINESIZE']]
train_y = train[['CO2EMISSIONS']]
test_x = test[['ENGINESIZE']]
test_y = test[['CO2EMISSIONS']]
# Instantiating a PolynomialFeatures object
poly = PolynomialFeatures(degree=2)
train_x_poly = poly.fit_transform(train_x)
train_x_poly
###Output
_____no_output_____
###Markdown
**fit_transform** takes our x values and outputs our data raised to powers from 0 up to 2 (since we set the degree of our polynomial to 2). The equation and a sample example are displayed below. $$\begin{bmatrix} v_1\\ v_2\\ \vdots\\ v_n\end{bmatrix}\longrightarrow \begin{bmatrix} [ 1 & v_1 & v_1^2]\\ [ 1 & v_2 & v_2^2]\\ \vdots & \vdots & \vdots\\ [ 1 & v_n & v_n^2]\end{bmatrix}$$$$\begin{bmatrix} 2.\\ 2.4\\ 1.5\\ \vdots\end{bmatrix} \longrightarrow \begin{bmatrix} [ 1 & 2. & 4.]\\ [ 1 & 2.4 & 5.76]\\ [ 1 & 1.5 & 2.25]\\ \vdots & \vdots & \vdots\\\end{bmatrix}$$Polynomial regression is a special case of linear regression, where the main idea is how you select your features. Just consider replacing $x$ with $x_1$, $x_1^2$ with $x_2$, and so on. Then the degree-2 equation turns into:$$y = b + \theta_1 x_1 + \theta_2 x_2$$ Modeling Now we can deal with it as a 'linear regression' problem. Therefore, this polynomial regression is considered to be a special case of traditional multiple linear regression, so you can use the same mechanism as linear regression to solve such problems. We can use the **LinearRegression()** function to solve it (a quick by-hand check of the transformed features is sketched below, before the fit):
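The transformation above can also be reproduced by hand, which may make the matrix more concrete. This is only an illustrative sketch (the names `x_vals` and `manual_poly` are ours); the modeling below still uses `train_x_poly` from **PolynomialFeatures**.

```python
# Hand-built degree-2 feature matrix [1, x, x^2]; should match poly.fit_transform(train_x)
x_vals = train_x.values.ravel()          # the ENGINESIZE column as a 1-D array
manual_poly = np.column_stack([np.ones_like(x_vals), x_vals, x_vals ** 2])
print(manual_poly[:3])                   # compare with train_x_poly[:3]
```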
###Code
pregr = linear_model.LinearRegression()
pregr.fit(train_x_poly, train_y)
intercept = pregr.intercept_[0]
coefficients = pregr.coef_[0]
print(f"Coefficients: {coefficients}")
print(f"Intercept: {intercept}")
# Ploting
plt.figure(figsize=(8, 6))
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
# Selecting values from 0 to 10 by 0.1
x_temp = np.arange(0.0, 10.0, 0.1)
plt.plot(x_temp, intercept + coefficients[1] * x_temp + coefficients[2] * (x_temp ** 2), color='red')
plt.xlabel('Engine Size')
plt.ylabel('CO2 Emissions')
plt.show()
###Output
_____no_output_____
###Markdown
Evaluation
###Code
test_x_poly = poly.fit_transform(test_x)
y_hat = pregr.predict(test_x_poly)
print("Mean absolute error: %.2f" % np.mean(np.absolute(y_hat - test_y)))
print("Residual sum of squares: (MSE) %.2f" % np.mean((y_hat - test_y) ** 2))
print("R-Score : %.2f" % r2_score(y_hat, test_y))
###Output
Mean absolute error: 24.42
Residual sum of squares: (MSE) 1010.03
R-Score : 0.72
###Markdown
Modeling with degree three
###Code
train_x = train[['ENGINESIZE']]
train_y = train[['CO2EMISSIONS']]
test_x = test[['ENGINESIZE']]
test_y = test[['CO2EMISSIONS']]
poly = PolynomialFeatures(degree=3)
train_x_poly = poly.fit_transform(train_x)
pregr = linear_model.LinearRegression()
pregr.fit(train_x_poly, train_y)
intercept = pregr.intercept_[0]
coefficients = pregr.coef_[0]
print(f"Intercept: {intercept}")
print(f"Coefficients: {coefficients}")
# Ploting
plt.figure(figsize=(8, 6))
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
x_temp = np.arange(0.0, 10.0, 0.1)
plt.plot(x_temp, intercept + coefficients[1] * x_temp + coefficients[2] * (x_temp ** 2) + coefficients[3] * (x_temp ** 3), '-r')
plt.xlabel('Engine Size')
plt.ylabel('CO2 Emissions')
plt.show()
## Evaluating
test_x_poly = poly.fit_transform(test_x)
y_hat = pregr.predict(test_x_poly)
print(f"R2-Score: {r2_score(y_hat, test_y)}")
###Output
R2-Score: 0.7157985707269677
|
06_starbucksUdacity_trainDNN_classifier_offerViewed.ipynb | ###Markdown
Prepare and Split Data
###Code
Y
X
# Print columns used as model features
print(X.columns)
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
min_max_scaler = preprocessing.MinMaxScaler()
X = min_max_scaler.fit_transform(X)
# Split the dataset into 80% training and 20% testing sets.
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, shuffle=True)
# Then split the training set further into 80% training and 20% validation sets.
X_train, X_val, Y_train, Y_val = train_test_split(X_train, Y_train, test_size=0.2, shuffle=True)
print(X_train)
print(X_train.shape)
print(Y_train)
print(Y_train.shape)
#test_tensor = torch.Tensor(X.values)
# Convert data into torch.Tensor:
X_train = torch.Tensor(X_train).to(device)
X_val = torch.Tensor(X_val).to(device)
X_test = torch.Tensor(X_test).to(device)
Y_train = torch.Tensor(Y_train.values).to(device)
Y_val = torch.Tensor(Y_val.values).to(device)
Y_test = torch.Tensor(Y_test.values).to(device)
# Create datasets for the dataloaders:
train_data = TensorDataset(X_train, Y_train)
test_data = TensorDataset(X_test, Y_test)
val_data = TensorDataset(X_val, Y_val)
# print out some data stats
print('# of training samples: ', len(train_data))
print('# of validation samples: ', len(val_data))
print('# of test samples: ', len(test_data))
batch_size = 128
# Creating the data loaders:
train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_data, batch_size=batch_size, shuffle=True)
val_loader = DataLoader(val_data, batch_size=batch_size, shuffle=True)
loaders = {
'train': train_loader,
'valid': val_loader,
'test': test_loader
}
print(X_train.shape, Y_train.shape)
print(X_test.shape, Y_test.shape)
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
n_classes = 1 # Number of classes
n_features = 17 # Number of features
# define the DNN architecture
class Net(nn.Module):
### TODO: choose an architecture, and complete the class
def __init__(self, n_features):
super(Net, self).__init__()
## Define layers of a DNN
self.fc1 = nn.Linear(n_features, 32)
# linear layer (17 -> 32)
self.fc2 = nn.Linear(32, 32)
# linear layer (32 -> 32)
self.fc3 = nn.Linear(32, 32)
# linear layer (32 -> 32)
self.fc4 = nn.Linear(32, 32)
# linear layer (32 -> 32)
self.fc5 = nn.Linear(32, 32)
# linear layer (32 -> 32)
self.fc6 = nn.Linear(32, 32)
# linear layer (32 -> 32)
self.fc7 = nn.Linear(32, 32)
# linear layer (32 -> 32)
self.fc8 = nn.Linear(32, 32)
# linear layer (32 -> 32)
self.fc9 = nn.Linear(32, 32)
# linear layer (32 -> 16)
self.fc10 = nn.Linear(32, 16)
# linear layer (16 -> 8)
self.fc11 = nn.Linear(16, 8)
# linear layer (8 -> 1)
self.fc12 = nn.Linear(8, n_classes)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
## Define forward behavior
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
#print("a3: ",x.shape)
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = F.relu(self.fc2(x))
#print("a3: ",x.shape)
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = F.relu(self.fc3(x))
#print("a3: ",x.shape)
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = F.relu(self.fc4(x))
#print("a3: ",x.shape)
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = F.relu(self.fc5(x))
#print("a3: ",x.shape)
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = F.relu(self.fc6(x))
#print("a3: ",x.shape)
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = F.relu(self.fc7(x))
#print("a3: ",x.shape)
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = F.relu(self.fc8(x))
#print("a3: ",x.shape)
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = F.relu(self.fc9(x))
#print("a3: ",x.shape)
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = F.relu(self.fc10(x))
#print("a4: ",x.shape)
# add dropout layer
x = self.dropout(x)
# add 3rd hidden layer
x = F.relu(self.fc11(x))
#print("a4: ",x.shape)
# add dropout layer
#x = self.dropout(x)
# add 3rd hidden layer
x = self.fc12(x)
#print("a5: ",x.shape)
return x
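# --- Illustrative note (not part of the original notebook, and not used below) ---
# The same stack of Linear -> ReLU -> Dropout blocks can be written more compactly
# with nn.Sequential. A sketch equivalent to the Net class defined above:
def make_compact_net(n_features, n_classes, p_drop=0.25):
    layers = [nn.Linear(n_features, 32), nn.ReLU(), nn.Dropout(p_drop)]   # fc1
    for _ in range(8):                                                    # fc2..fc9
        layers += [nn.Linear(32, 32), nn.ReLU(), nn.Dropout(p_drop)]
    layers += [nn.Linear(32, 16), nn.ReLU(), nn.Dropout(p_drop)]          # fc10
    layers += [nn.Linear(16, 8), nn.ReLU()]                               # fc11 (no dropout)
    layers += [nn.Linear(8, n_classes)]                                   # fc12: raw logits for BCEWithLogitsLoss
    return nn.Sequential(*layers)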
# instantiate the DNN
model = Net(n_features)
print(model)
# move tensors to GPU if CUDA is available
if use_cuda:
model.cuda()
from torch import optim
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
#optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
"""returns trained model"""
# initialize tracker for minimum validation loss
valid_loss_min = np.Inf
print("Begin training...")
train_losses, valid_losses = [], []
for epoch in range(1, n_epochs+1):
#print("Epoch: ",epoch)
# initialize variables to monitor training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(loaders['train']):
#if batch_idx % 100 == 0:
#print("Epoch: {}, Batch: {}".format(epoch,batch_idx))
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
## find the loss and update the model parameters accordingly
## record the average training loss, using something like
## train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
output = output.view(-1)
target = target.unsqueeze(0).view(-1)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# record the average training loss, using something like
train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
######################
# validate the model #
######################
with torch.no_grad():
model.eval()
for batch_idx, (data, target) in enumerate(loaders['valid']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
## update the average validation loss
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
#output = outputs.view(1, -1) # make it the same shape as output
target = target.unsqueeze(0).view(-1)
output = output.view(-1)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss = valid_loss + ((1 / (batch_idx + 1)) * (loss.data - valid_loss))
train_losses.append(train_loss)
valid_losses.append(valid_loss)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch,
train_loss,
valid_loss
))
## TODO: save the model if validation loss has decreased
if valid_loss < valid_loss_min:
torch.save(model.state_dict(), save_path)
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min, valid_loss))
valid_loss_min = valid_loss
print("Training Complete!")
# return trained model
return model, train_losses, valid_losses
n_epochs = 300 # Number of training epochs
# train the model
model, train_losses, valid_losses = train(n_epochs, loaders, model, optimizer,
criterion, use_cuda, 'model_scratch.pt')
# load the model that got the best validation accuracy
model.load_state_dict(torch.load('model_scratch.pt'))
plt.plot(train_losses, label='Training loss')
plt.plot(valid_losses, label='Validation loss')
plt.legend(frameon=False)
from sklearn.metrics import roc_auc_score
from sklearn.metrics import plot_roc_curve
from sklearn.metrics import roc_curve
from sklearn.metrics import auc
def calcROC(model, X, y):
''' Compute Area Under the Receiver Operating Characteristic Curve (ROC AUC) from prediction scores. '''
print('ROC AUC Score:',roc_auc_score(y, model.predict_proba(X)[:, 1]))
plot_roc_curve(model, X, y)
plt.show()
def test(loaders, model, criterion, use_cuda):
print("Begin Test!")
# monitor test loss and accuracy
test_loss = 0.
correct = 0.
total = 0.
y_pred_list = []
target_list = []
y_true = []
y_pred = []
model.eval()
with torch.no_grad():
for batch_idx, (data, target) in enumerate(loaders['test']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
output = output.view(-1)
target = target.unsqueeze(0).view(-1)
target_list.append(target.cpu().numpy())
# calculate the loss
loss = criterion(output, target)
# update average test loss
test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))
# convert output probabilities to predicted class
#plt.plot(torch.sigmoid(output.data))
pred = torch.round(torch.sigmoid(output.data))
y_pred_list.append(pred.cpu().numpy())
y_pred.append(output.data.cpu().numpy())
#pred = output.data.max(1, keepdim=True)[1]
# compare predictions to true label
correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
total += data.size(0)
y_pred_list = [item for sublist in y_pred_list for item in sublist]
target_list = [item for sublist in target_list for item in sublist]
fpr, tpr, thresholds = roc_curve(target_list, y_pred_list)
roc_auc = auc(fpr, tpr)
print('Test Loss: {:.6f}\n'.format(test_loss))
print('\nTest Accuracy: %2d%% (%2d/%2d)' % (100. * correct / total, correct, total))
print('\nConfusion Matrix:\n')
print(confusion_matrix(target_list, y_pred_list))
print('\nClassification Report:\n')
print(classification_report(target_list, y_pred_list))
# Calculate the ROC Curve and AUC score for the DNN Classifier
print('\nROC AUC Score:\n')
print(roc_auc_score(target_list, y_pred_list))
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange', lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
# call test function
test(loaders, model, criterion, use_cuda)
###Output
_____no_output_____ |
notebooks/daily-drinking-notebook-demo.ipynb | ###Markdown
A Bayesian mixed effects Support Vector Machine for learning and predicting daily substance use disorder patternsThis notebook takes daily substance use logs and learns patterns of alcohol use. Features include:* Standard summaries and graphics of drinking data (e.g., 7-day, 30-day frequencies)* Handles both fixed and random effects in a flexible Bayesian mixed effects SVM (we call it Mixed.SVM)* Provides classification for the variable of interest (heavy drinking in this example). * Provides effect estimates and 95% credible intervals (unlike standard SVMs).* Inference can be drawn using the credible intervals (is the variable associated with heavy drinking days?).* Estimates individual-level risk patterns. DatasetIMPORTANT: THIS NOTEBOOK APPLIES THE TOOLKIT TO SIMULATED DATA! To illustrate the use of the Bayesian mixed effects SVM we simulated data for a cohort of heavy drinkers. You may run the entire notebook on this simulated dataFirst we read in the simulated daily log data. This tracks if the individual used alcohol, nicotine, cannabis, or other drugs.***
###Code
rm(list=ls()) # cleans memory
# !!! Change the next two lines to the location of example.RData.
setwd('/home/path-to-project-folder/') # change path to directory below your Mixed.SVM folder
load('Mixed.SVM/data/processed/example.RData') # loads simulated daily log
head(drinks)
###Output
_____no_output_____
###Markdown
Above we loaded the daily drinking and drug use data:| Variable | Description | Coding || :--------| :--------- | ------|| ursi | Individual ID | || male | Male | 1=male;2=female | | age | Age in years | | Day | Day | || SD | Standard drinks consumed | | AD | Did drinking take place? | 1=yes;0=no || HD | Did heavy drinking take place? | 1=yes;0=no || MD | Did moderate drinking take place? | 1=yes;0=no || NICUSEDAY | Any nicotine use (cigarettes, e-cigs, smokekess, etc) | 1=yes;0=no || THCUSEDAY | Any use of THC | 1=yes;0=no || OTHERDRUGUSE | Any use of any other drugs | 1=yes;0=no | We want to learn the alcohol drinking patterns in this data. In other words, we want to fit a model of heavy drinking (`HD`) considering demographics (`age`, `male`) and additional variables that may change over time and influence risk of heavy drinking, including other substances (nicotine, cannabis, and other drugs; `NICUSEDAY`, `THCUSEDAY`, and `OTHERDRUGUSE`, respectively). *** ModelThe Mixed.SVM is described in detail in the manuscript. The analysis requires several R libraries. We'll go ahead and install and load these now.
###Code
packages <- c("statmod", "mvtnorm",
"Matrix", "splines","xtable","IRdisplay","repr","plotly",
"extrafont","ggplot2","gridExtra","grid","table1")
## Now load or install and load
package.check <- lapply(
packages,
FUN = function(x) {
if (!require(x, character.only = TRUE)) {
install.packages(x, dependencies = TRUE)
library(x, character.only = TRUE)
}
}
)
# set up for plotting figures
options(bitmapType="cairo") #linux
#options(bitmapType="quartz") # mac
set.seed(2) #sets a seed for random number generators used by the algorithm
###Output
Loading required package: statmod
Loading required package: mvtnorm
Loading required package: Matrix
Loading required package: splines
Loading required package: xtable
Loading required package: IRdisplay
Attaching package: ‘IRdisplay’
The following object is masked from ‘package:xtable’:
display
Loading required package: repr
Loading required package: plotly
Loading required package: ggplot2
Attaching package: ‘plotly’
The following object is masked from ‘package:ggplot2’:
last_plot
The following object is masked from ‘package:stats’:
filter
The following object is masked from ‘package:graphics’:
layout
Loading required package: extrafont
Registering fonts with R
Loading required package: gridExtra
Loading required package: grid
Loading required package: table1
Attaching package: ‘table1’
The following objects are masked from ‘package:xtable’:
label, label<-
The following objects are masked from ‘package:base’:
units, units<-
###Markdown
Next we source the algorithm code as well as tools to help summarize the results.
###Code
source('Mixed.SVM/src/R/mixed-svm.R')
###Output
_____no_output_____
###Markdown
The arguments of Mixed.SVM are the following| Argument | Description | Setting || -------- | ----------- | --------- | | Y | Response vector encoded 0 and 1 | `HD` || X | Design matrix corresponding to Y | `male`, `age`, `NICUSEDAY`, `THCUSEDAY`, `OTHERDRUGUSE` || T | Vector of times at which the observations in Y were taken | `Day` || U | Vector which indentifies the user IDs for the observations in `Y`, `X`, and `T` | `ursi` || Tmax | Maximum time to be included in the analysis | 720 || knot.seq | Interior knot set for subject specific trajectories | 0.5 | | Iter | Number of MCMC iterations | 100000 || burn.in | Number of MCMC iterations to discard | 50000We set these parameters for modeling:
###Code
Y <- as.vector(drinks$HD) # outcome
X <- as.matrix(drinks[,c("male","age","NICUSEDAY","THCUSEDAY","OTHERDRUGUSE")]) # covariates
T <- as.vector(drinks$Day) # times
U <- as.vector(drinks$ursi) # individual id
Tmax <- 720
knot.seq <- c(0.5)
Iter <- 100000
burn.in <- 50000
###Output
_____no_output_____
###Markdown
Next we call the Mixed.SVM algorithm.
###Code
# Runs the algorithm
# Remove # to run
#MCMC.res <- Mixed.SVM(Y, X, T, U, Tmax, knot.seq, Iter)
# Saves the results
# Remove # to run
# save(MCMC.res,file='Mixed.SVM/reports/demo-results.RData')
# Here we pull up previously saved results
load(file='Mixed.SVM/reports/demo-results.RData')
###Output
_____no_output_____
###Markdown
*** Results Daily drinking summariesFirst we summarize daily drinking for the simulated input dataset:| Variable | Description || -------- | ----------- || pdd | proportion drinking days || phdd | proportion heavy drinking days || ddd | average standard drinks per drinking day || dpd | average standard drinks per day || totaldays | total days || totaldrinkdays | total number of drinking days || totalheavydays | total number of heavy drinking days | | phwd | proportion heavy drinking days while drinking | | mxd | maximum standard drinks| At the baseline assessment in this study, the participants are interviewed and a detailed log of daily drinking is created for the previous 3 months (90 days). So we start summarizing drinking on day 90. Day 181 corresponds to three month after baseline, day 364 for the nine month visit, and day 637 for the eighteen month visit. For these four visits, we'll summarize these variables for the previous month (30 days).These are easily adjustable for your study, see `time` and `visitdays` below. The `log.summaries` function generates individual level summaries for the the variables described above. `includevisitday` indicates if the visit day should be included in the summary or not.
###Code
iid <- unique(drinks$ursi) # individual id's found in the dataset
time <- 30 # summarize for one month before the visit
visitdays <- c(90,181,364,637) # visit days: baseline, 3-months, 9-months, 18-months
# summarize previous week at baseline, 3-,9-,and 12-months
base <- log.summaries(iid, timeframe=time,visitday=visitdays[1],includevisitday=FALSE)
mo3 <- log.summaries(iid, timeframe=time,visitday=visitdays[2],includevisitday=FALSE)
mo9 <- log.summaries(iid, timeframe=time,visitday=visitdays[3],includevisitday=FALSE)
mo18 <- log.summaries(iid, timeframe=time,visitday=visitdays[4],includevisitday=FALSE)
###Output
_____no_output_____
###Markdown
We summarize drinking over the participants in the study by computing the mean, standard deviation, median, minimum, maximum, and number of missing observations for one month of drinking logs at baseline, 3 months, 9 months, and 18 months.
###Code
base$visit <- "Baseline"
mo3$visit <- "3 Months"
mo9$visit <- "9 Months"
mo18$visit <- "18 Months"
allmo <- rbind(base,mo3,mo9,mo18)
allmo$visit <- factor(allmo$visit,levels=c("Baseline","3 Months", "9 Months", "18 Months"))
label(allmo$pdd) <- "<b>Proportion drinking days"
label(allmo$phdd) <- "<b>Proportion heavy drinking days"
label(allmo$ddd) <- "<b>Average standard drinks per drinking day"
label(allmo$dpd) <- "<b>Average standard drinks per day"
label(allmo$totaldays) <- "<b>Total days"
label(allmo$totaldrinks) <- "<b>Total drinks"
label(allmo$totaldrinkdays) <- "<b>Total number of drinking days"
label(allmo$totalheavydays) <- "<b>Total number of heavy drinking days"
label(allmo$phwd) <- "<b>Proportion heavy drinking days while drinking"
label(allmo$mxd) <- "<b>Maximum standard drinks"
display_html(table1(~pdd + phdd + ddd + dpd + totaldrinks + totaldays + totaldrinkdays +
totalheavydays + phwd + mxd | visit, data=allmo,caption="30-Day Drinking Summaries in Simulated Data", overall=NULL))
###Output
_____no_output_____
###Markdown
Let's take a look at the individual trends in heavy drinking over the duration of the simulated study.
###Code
# put together the summaries of heavy drinking days in the past month at the four assessments
trend <- cbind(base=base$phdd,mo3=mo3$phdd,mo9=mo9$phdd,mo18=mo18$phdd)
h <- t(trend) # transpose for plotting
ave <- rowMeans(h,na.rm=TRUE) # average values across individuals
matplot(visitdays, h, type='l', xlab="Assessment", axes=FALSE, ylab='Proportion Heavy Drinking Days',main="Proportion heavy drinking days 30 days before the baseline, 3-month, 9-month,
and 18-month visits") # plot the trend over the study
axis(2)
axis(side=1,at=visitdays,labels=c("Baseline","3 Months","9 Months","18 Months"))
matlines(visitdays,ave,lwd=4,col="black") # plot the average
###Output
_____no_output_____
###Markdown
The individuals in this study on average drank heavily 25-30% of the past 30 days before each assessment. However, each individual has their own drinking pattern that typical summaries would not capture. Mixed-SVM resultsThe `Mixed.SVM` algorithm can take some time to run 100K iterations. We have loaded the results of a previous run so we can look at the results. Classification Rate
###Code
mean(1-(MCMC.res$Misclassified[burn.in:Iter]/2)/nrow(drinks))
###Output
_____no_output_____
###Markdown
For the simulated data, the model classifies 90% of the daily observations correctly. Performance Metrics
###Code
confusionmatrix <- Pred.summary.MSVM(MCMC.res,Y, X, T, U, Tmax, knot.seq=c(.5), Iter, burn.in=50000)
###Output
_____no_output_____
###Markdown
Here we compute a summary of the performance of the Mixed-SVM algorithm in predicting heavy drinking days. This includes the rate of true positives (TP or sensitivity), false positives (FP), true negatives (TN or specificity), and false negatives (FN).
###Code
print(confusionmatrix)
###Output
$TP
[1] 0.6956982
$FP
[1] 0.03300538
$TN
[1] 0.9669946
$FN
[1] 0.3043018
###Markdown
In this simulated data, the model predicts 70% of the days with heavy drinking and 97% of days without heavy drinking correctly. The false positive rate is 3% and the false negative rate is 30%. Parameter Summary
###Code
# Regression parameter summary
names <- c("male","age","NICUSEDAY","THCUSEDAY","OTHERDRUGUSE")
summary <- Reg.summary(MCMC.res,burn.in,names)
summary
###Output
_____no_output_____
###Markdown
Overall among heavy drinkers in this simulation, we find that age (younger), gender (female), and using nicotine and other drugs are associated with increased risk of heavy drinking episodes. A variable is statistically significant when the 95% credible intervals do not contain zero. Report of subject specific heavy drinking riskNext we look at the estimated risk trajectories for each study participant. Since there are many trajectories and the user may want to quickly examine the entire cohort, we include some code to lay out the graphs 6 per page. For space, we only display the graphs for the first 18 participants.Each graph is labeled with the de-identified participant id. The risk of heavy drinking is charted over the days that individual is followed in the study, up to `Tmax`. Values above zero indicate increased risk of heavy drinking and values below zero indicate decreased risk of heavy drinking. The influence of using or discontinuing use of other substances (e.g., tobacco) can be seen by jumps in risk.
###Code
# prints 6 estimated trajectories per page
subjects<-unique(U)
vplayout <- function(x, y) viewport(layout.pos.row = x, layout.pos.col = y)
pages <- 1 # How many pages of output, 6 plots to a page
for (i in 1:pages) {
grid.newpage()
pushViewport(viewport(layout = grid.layout(3, 2)))
for (j in 1:6) {
index <- i*j
# function that summaries the results from the SVM.Mixed into a plot for an individual
fig1 <- Ind.trajectory(subj=subjects[index], X, T, U, MCMC.res, burn.in, knot.seq, Tmax)
if ((j %% 2) == 0) {
print(fig1, vp = vplayout(j/2, 2))
} else {
print(fig1, vp = vplayout(ceiling(j/2), 1))
}
}
}
###Output
_____no_output_____
###Markdown
Analysis of specific subject drinking trajectoriesThe user may look closely at an individual's estimated risk of heavy drinking trajectory over time. The function `Ind.trajectory.pretty` takes in the subject identifier and the results of `Mixed.SVM` and presents the estimated heavy drinking trajectories for two individuals in the simulated data (subjects `18` and `24`). Each point is the value of the linear predictor for that given day. The further the values are from zero, the more or less likely an individual is to have a heavy drinking day. These values are color coded by the 8 combinations of the 3 time-varying covariates (nicotine, cannabis (THC), and other drug use).
###Code
subjects<-unique(U)
subj <- c(18,24)
options(repr.plot.width=10, repr.plot.height=15)
vplayout <- function(x, y) viewport(layout.pos.row = x, layout.pos.col = y)
grid.newpage()
pushViewport(viewport(layout = grid.layout(length(subj), 1)))
for (j in 1:length(subj)) {
fig1 <- Ind.trajectory.pretty(subj=subj[j], X, T, U, MCMC.res, burn.in, knot.seq, Tmax)
print(fig1,vp=vplayout(j,1))
}
###Output
_____no_output_____
###Markdown
Significance of the individual specific random trajectoriesWe now look at the individual random trajectories and corresponding 95% credible bands for subject `18`. If the bands exclude zero, the random effects are significant for that individual.
###Code
subjects<-unique(U)
subj <- c(18) # specify which subjects to plot
options(repr.plot.width=10, repr.plot.height=15)
vplayout <- function(x, y) viewport(layout.pos.row = x, layout.pos.col = y)
grid.newpage()
pushViewport(viewport(layout = grid.layout(length(subj), 1)))
for (j in 1:length(subj)) {
fig1 <- Ind.trajectory.CI(subj=subj[j], T, U, MCMC.res, burn.in, knot.seq, Tmax)
print(fig1,vp=vplayout(j,1))
}
###Output
_____no_output_____ |
pandas/assignment_week2/.ipynb_checkpoints/Assignment+2-checkpoint.ipynb | ###Markdown
---_You are currently looking at **version 1.2** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._--- Assignment 2 - Pandas IntroductionAll questions are weighted the same in this assignment. Part 1The following code loads the olympics dataset (olympics.csv), which was derived from the Wikipedia entry on [All Time Olympic Games Medals](https://en.wikipedia.org/wiki/All-time_Olympic_Games_medal_table), and does some basic data cleaning. The columns are organized as number of Summer games, Summer medals, number of Winter games, Winter medals, total number of games, and total number of medals. Use this dataset to answer the questions below.
###Code
import pandas as pd
df = pd.read_csv('olympics.csv', index_col=0, skiprows=1)
df.tail()
import pandas as pd
df = pd.read_csv('olympics.csv', index_col=0, skiprows=1)
for col in df.columns:
if col[:2]=='01':
df.rename(columns={col:'Gold'+col[4:]}, inplace=True)
if col[:2]=='02':
df.rename(columns={col:'Silver'+col[4:]}, inplace=True)
if col[:2]=='03':
df.rename(columns={col:'Bronze'+col[4:]}, inplace=True)
if col[:1]=='№':
df.rename(columns={col:'#'+col[1:]}, inplace=True)
names_ids = df.index.str.split('\s\(') # split the index by '('
df.index = names_ids.str[0] # the [0] element is the country name (new index)
df['ID'] = names_ids.str[1].str[:3] # the [1] element is the abbreviation or ID (take first 3 characters from that)
df = df.drop('Totals')
df.tail()
# print(names_ids.str[0].str[:3])
###Output
_____no_output_____
###Markdown
Question 0 (Example)What is the first country in df?*This function should return a Series.*
###Code
# You should write your whole answer within the function provided. The autograder will call
# this function and compare the return value against the correct solution value
def answer_zero():
# This function returns the row for Afghanistan, which is a Series object. The assignment
# question description will tell you the general format the autograder is expecting
return df.iloc[0]
# You can examine what your function returns by calling it in the cell. If you have questions
# about the assignment formats, check out the discussion forums for any FAQs
answer_zero()
###Output
_____no_output_____
###Markdown
Question 1Which country has won the most gold medals in summer games?*This function should return a single string value.*
###Code
def answer_one():
return df['Gold'].argmax()
answer_one()
###Output
C:\ProgramData\Anaconda6\lib\site-packages\ipykernel_launcher.py:2: FutureWarning: 'argmax' is deprecated, use 'idxmax' instead. The behavior of 'argmax'
will be corrected to return the positional maximum in the future.
Use 'series.values.argmax' to get the position of the maximum now.
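###Markdown
As the FutureWarning above suggests, `idxmax` is the non-deprecated way to get the index label of the maximum. A minimal alternative sketch (hypothetical helper name, assuming the same `df` as above):
###Code
def answer_one_idxmax():
    # idxmax returns the index label (here, the country name) of the largest value
    return df['Gold'].idxmax()

answer_one_idxmax()
###Output
_____no_output_____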
###Markdown
Question 2Which country had the biggest difference between their summer and winter gold medal counts?*This function should return a single string value.*
###Code
def answer_two():
df['G_diff']=abs(df['Gold']-df['Gold.1'])
return df['G_diff'].argmax()
answer_two()
###Output
C:\ProgramData\Anaconda6\lib\site-packages\ipykernel_launcher.py:3: FutureWarning: 'argmax' is deprecated, use 'idxmax' instead. The behavior of 'argmax'
will be corrected to return the positional maximum in the future.
Use 'series.values.argmax' to get the position of the maximum now.
This is separate from the ipykernel package so we can avoid doing imports until
###Markdown
Question 3Which country has the biggest difference between their summer gold medal counts and winter gold medal counts relative to their total gold medal count? $$\frac{Summer~Gold - Winter~Gold}{Total~Gold}$$Only include countries that have won at least 1 gold in both summer and winter.*This function should return a single string value.*
###Code
def answer_three():
df['T_Gold']=df['Gold']+df['Gold.1']+df['Gold.2']
eligible_df=df[(df['Gold']>0) & (df['Gold.1']>0)]
eligible_df['G_diff_rel']=abs(eligible_df['Gold']-eligible_df['Gold.1'])/eligible_df['T_Gold']
eligible_df['G_diff_rel'].argmax()
return eligible_df['G_diff_rel'].argmax()
answer_three()
###Output
C:\ProgramData\Anaconda6\lib\site-packages\ipykernel_launcher.py:4: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
after removing the cwd from sys.path.
C:\ProgramData\Anaconda6\lib\site-packages\ipykernel_launcher.py:5: FutureWarning: 'argmax' is deprecated, use 'idxmax' instead. The behavior of 'argmax'
will be corrected to return the positional maximum in the future.
Use 'series.values.argmax' to get the position of the maximum now.
"""
C:\ProgramData\Anaconda6\lib\site-packages\ipykernel_launcher.py:10: FutureWarning: 'argmax' is deprecated, use 'idxmax' instead. The behavior of 'argmax'
will be corrected to return the positional maximum in the future.
Use 'series.values.argmax' to get the position of the maximum now.
# Remove the CWD from sys.path while we load stuff.
###Markdown
Question 4Write a function that creates a Series called "Points" which is a weighted value where each gold medal (`Gold.2`) counts for 3 points, silver medals (`Silver.2`) for 2 points, and bronze medals (`Bronze.2`) for 1 point. The function should return only the column (a Series object) which you created, with the country names as indices.*This function should return a Series named `Points` of length 146*
###Code
def answer_four():
Points=df['Gold.2']*3 +df['Silver.2']*2+df['Bronze.2']*1
return Points
answer_four()
###Output
_____no_output_____
###Markdown
Part 2For the next set of questions, we will be using census data from the [United States Census Bureau](http://www.census.gov). Counties are political and geographic subdivisions of states in the United States. This dataset contains population data for counties and states in the US from 2010 to 2015. [See this document](https://www2.census.gov/programs-surveys/popest/technical-documentation/file-layouts/2010-2015/co-est2015-alldata.pdf) for a description of the variable names.The census dataset (census.csv) should be loaded as census_df. Answer questions using this as appropriate. Question 5Which state has the most counties in it? (hint: consider the sumlevel key carefully! You'll need this for future questions too...)*This function should return a single string value.*
###Code
census_df = pd.read_csv('census.csv')
census_df.head()
def answer_five():
# census_df = pd.read_csv('census.csv')
df1=census_df.set_index(['STNAME'])
u=df1.index.unique() # unique state in index
    s=[len(df1.loc[i]) for i in u] # count the counties state by state from the list of unique states
S2=pd.Series(s, index=u)
return S2.argmax()
    # return u  (unreachable — left over from debugging)
answer_five()
# scratch work (not used by the graded answer):
# df1 = census_df.set_index(['STNAME'])
# df.groupby['']   # broken as written: groupby is a method, e.g. census_df.groupby('STNAME')
###Output
_____no_output_____
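###Markdown
An alternative sketch for the same question (hypothetical helper, assuming the same `census_df`): filtering to county-level rows with `SUMLEV == 50` and counting rows per state avoids including the state-level summary rows in the count.
###Code
def answer_five_groupby():
    counties = census_df[census_df['SUMLEV'] == 50]
    # one row per county, so the group sizes are county counts per state
    return counties.groupby('STNAME').size().idxmax()

answer_five_groupby()
###Output
_____no_output_____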
###Markdown
Question 6**Only looking at the three most populous counties for each state**, what are the three most populous states (in order of highest population to lowest population)? Use `CENSUS2010POP`.*This function should return a list of string values.*
###Code
def answer_six():
df6=census_df
u_county=set(df6['CTYNAME'])
u_State=set(df6['STNAME'])
df6=census_df.set_index(['STNAME','CTYNAME'])
s6=df6['CENSUS2010POP']
l=[] #list l for storing sum of 3 largest(populus county) value of each state
for s in u_State: # iterating over all the states
# going in by multilevel index ['Alabama','Alabama County'] this will list each county index
        k=s6.loc[[s,s]].nlargest(3).sum() # k is the sum of the 3 largest counties for each state
l.append(k) # storing value of each k in list l
pdS=pd.Series(l, index=u_State) # Making a new series from l (list of ) with its indexes from u_state
pdS.nlargest(3) # again applying nlargest on this series
dd=pdS.nlargest(3)
# print(dd.index)
l=dd.index
j=[(l[i]) for i in range(len(l))]
# print(type(l[0]))
return j
answer_six()
###Output
_____no_output_____
###Markdown
Question 7Which county has had the largest absolute change in population within the period 2010-2015? (Hint: population values are stored in columns POPESTIMATE2010 through POPESTIMATE2015, you need to consider all six columns.)e.g. If County Population in the 5 year period is 100, 120, 80, 105, 100, 130, then its largest change in the period would be |130-80| = 50.*This function should return a single string value.*
###Code
def answer_seven():
    df7a=census_df[census_df['SUMLEV']==50] # keep only the county-level rows (SUMLEV == 50)
u_county=set(df7a['CTYNAME']) # Reading all the u_county
u_State=set(df7a['STNAME']) # Reading all the u_state
df7b=df7a.set_index(['CTYNAME','STNAME'])
df7=df7b[['POPESTIMATE2010','POPESTIMATE2011','POPESTIMATE2012','POPESTIMATE2013','POPESTIMATE2014','POPESTIMATE2015']]
df7['Min']=df7.loc[:,['POPESTIMATE2010','POPESTIMATE2011','POPESTIMATE2012','POPESTIMATE2013','POPESTIMATE2014','POPESTIMATE2015']].min(axis=1)
df7['Max']=df7.loc[:,['POPESTIMATE2010','POPESTIMATE2011','POPESTIMATE2012','POPESTIMATE2013','POPESTIMATE2014','POPESTIMATE2015']].max(axis=1)
df7['Diff']=df7['Max']-df7['Min'] # Column showing difference between highest populus and lowest populus year
largest_C=df7['Diff'].nlargest(1) # finding the largest among difference column
r=largest_C.argmax() # returning the index of our largest difference val
return r[0]
answer_seven()
# return "YOUR ANSWER HERE"
###Output
C:\Users\wlaik\Anaconda3\lib\site-packages\ipykernel_launcher.py:10: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
# Remove the CWD from sys.path while we load stuff.
C:\Users\wlaik\Anaconda3\lib\site-packages\ipykernel_launcher.py:11: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
# This is added back by InteractiveShellApp.init_path()
C:\Users\wlaik\Anaconda3\lib\site-packages\ipykernel_launcher.py:12: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
if sys.path[0] == '':
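###Markdown
The SettingWithCopyWarning above comes from adding columns to a filtered slice. A sketch of the same computation on an explicit copy, using `idxmax` instead of the deprecated `argmax` (hypothetical helper, assuming `census_df` is loaded):
###Code
def answer_seven_copy():
    pop_cols = ['POPESTIMATE2010', 'POPESTIMATE2011', 'POPESTIMATE2012',
                'POPESTIMATE2013', 'POPESTIMATE2014', 'POPESTIMATE2015']
    counties = census_df[census_df['SUMLEV'] == 50].copy()
    diff = counties[pop_cols].max(axis=1) - counties[pop_cols].min(axis=1)
    return counties.loc[diff.idxmax(), 'CTYNAME']

answer_seven_copy()
###Output
_____no_output_____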
###Markdown
Question 8In this datafile, the United States is broken up into four regions using the "REGION" column. Create a query that finds the counties that belong to regions 1 or 2, whose name starts with 'Washington', and whose POPESTIMATE2015 was greater than their POPESTIMATE 2014.*This function should return a 5x2 DataFrame with the columns = ['STNAME', 'CTYNAME'] and the same index ID as the census_df (sorted ascending by index).*
###Code
def answer_eight():
census_df = pd.read_csv('census.csv')
df8a=census_df[(census_df['REGION']==2) | (census_df['REGION']==1)]
df8b=df8a[(df8a['CTYNAME']=='Washington County')]
df8c=df8b[(df8b['POPESTIMATE2015']>df8b['POPESTIMATE2014'])]
df8d=df8c[['STNAME','CTYNAME','POPESTIMATE2014','POPESTIMATE2015']]
df8e=df8d[['STNAME','CTYNAME']]
return df8e
answer_eight()
###Output
_____no_output_____ |
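###Markdown
Note that the question asks for county names that *start with* 'Washington', while the solution above matches only the exact string 'Washington County'. A sketch using `str.startswith` (hypothetical helper, assuming the same `census_df`):
###Code
def answer_eight_startswith():
    mask = (census_df['REGION'].isin([1, 2])
            & census_df['CTYNAME'].str.startswith('Washington')
            & (census_df['POPESTIMATE2015'] > census_df['POPESTIMATE2014']))
    return census_df.loc[mask, ['STNAME', 'CTYNAME']]

answer_eight_startswith()
###Output
_____no_output_____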
K Nearest Neighbours Implementation.ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_curve, auc
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from IPython.display import Markdown
from IPython.display import Image
from IPython.display import clear_output
display(Markdown("<center><br><h1>Implementing K Nearest Neighbours For Data Mining Class</h1><center><br><br>"))
def euclidean(X , Y):
return np.sqrt(((X-Y)**2).sum())
def manhattan(X , Y):
return np.abs((X-Y)).sum()
class KNN:
def __init__(self , use_distance = 'euc' , K = 'auto'):
if use_distance == 'euc':
self.distance = euclidean
elif use_distance == 'man':
self.distance = manhattan
self.K = K
def fit(self , X , Y):
self.X = X
self.Y = Y
if self.K == 'auto':
self.K = 5
def predict_one(self , X):
if X.shape[0] != self.X.shape[1]:
raise Exception('Wrong Sized Data. Size Mismatch')
else:
dis = []
for i in range(self.X.shape[0]):
dis.append([self.distance(self.X[i] , X) , self.Y[i]])
dis = np.array(dis)
dis = dis[dis[:,0].argsort(kind = 'mergesort')]
            # majority vote over the class labels of the K nearest neighbours
            labels = np.asarray([row[1] for row in dis[:self.K]], dtype=float).ravel()
            vals , count = np.unique(labels, return_counts=True)
return vals[np.argmax(count)]
def predict(self , X):
if X.shape[1] != self.X.shape[1]:
raise Exception('Wrong Sized Data. Size Mismatch')
else:
dis = []
for i in X:
dis.append(self.predict_one(i))
dis = np.array(dis)
return dis
def predict_accuracy(self , X , Y):
dis = self.predict(X)
acc = sum(1 for x,y in zip(dis,Y) if x == y) / len(dis)
return acc
def print_accuracy(acc , pred , cm = True):
print(' -> Precision Score of KNN Algorithm is ',round(precision_score(pred, acc)*100,2))
print(' -> Recall Score of KNN Algorithm is ',round(recall_score(pred, acc)*100,2))
print(' -> Accuracy Score of KNN Algorithm is ',round(accuracy_score(pred, acc)*100,2))
print(' -> F1 Score of KNN Algorithm is ',round(f1_score(pred,acc)*100,2))
if cm:
cf_matrix = confusion_matrix(acc,pred)
group_names = ['True Neg','False Pos','False Neg','True Pos']
group_counts = ["{0:0.0f}".format(value) for value in cf_matrix.flatten()]
group_percentages = ["{0:.2%}".format(value) for value in cf_matrix.flatten()/np.sum(cf_matrix)]
labels = [f"{v1}\n{v2}\n{v3}" for v1, v2, v3 in zip(group_names,group_counts,group_percentages)]
labels = np.asarray(labels).reshape(2,2)
sns.heatmap(cf_matrix, annot=labels, fmt='', cmap='Blues')
class_1 = np.concatenate((np.random.normal(loc = -1, scale = 1 , size = (200,3)),np.array([1]*200).reshape(200,1)) , axis = 1)
class_2 = np.concatenate((np.random.normal(loc = 1, scale = 1 , size = (200,3)),np.array([-1]*200).reshape(200,1)) , axis = 1)
train, test = pd.DataFrame(np.concatenate((class_1[:160],class_2[:160]),axis = 0), columns = ('C_1','C_2','C_3','C')) , pd.DataFrame(np.concatenate((class_1[160:],class_2[160:]),axis = 0), columns = ('C_1','C_2','C_3','C'))
train , test = train.sample(frac = 1) , test.sample(frac = 1)
sns.pairplot(train, hue ="C", palette ='coolwarm')
sns.displot(data = train , x = 'C_1', kde = True, rug = True, color ='red', bins = 50, hue = 'C' ,palette ='coolwarm')
sns.displot(data = train , x ='C_2', kde = True, rug = True, color ='blue', bins = 50, hue = 'C' ,palette ='coolwarm')
sns.displot(data = train , x ='C_3', kde = True, rug = True, color ='green', bins = 50, hue = 'C' ,palette ='coolwarm')
sns.jointplot(x= 'C_1' , y = 'C_2',data =train , hue = 'C',palette ='coolwarm')
display(Markdown("<h2>Implementing K Nearest Neighbours With Euclidean Distance </h2><br>"))
knn = KNN(K = 3)
knn.fit(train[['C_1','C_2','C_3']].values , train[['C']].values)
out = knn.predict(test[['C_1','C_2','C_3']].values )
display(Markdown("<h4>Results for KNN Algorithm for K = 3 are : </h4>"))
print_accuracy(out , test[['C']].values )
knn = KNN(K = 5)
knn.fit(train[['C_1','C_2','C_3']].values , train[['C']].values)
out = knn.predict(test[['C_1','C_2','C_3']].values )
display(Markdown("<h4>Results for KNN Algorithm for K = 5 are : </h4>"))
print_accuracy(out , test[['C']].values )
knn = KNN(K = 7)
knn.fit(train[['C_1','C_2','C_3']].values , train[['C']].values)
out = knn.predict(test[['C_1','C_2','C_3']].values )
display(Markdown("<h4>Results for KNN Algorithm for K = 7 are : </h4>"))
print_accuracy(out , test[['C']].values )
display(Markdown("<h2>Implementing K Nearest Neighbours With Manhattan Distance </h2><br>"))
knn = KNN(K = 3, use_distance = 'man')
knn.fit(train[['C_1','C_2','C_3']].values , train[['C']].values)
out = knn.predict(test[['C_1','C_2','C_3']].values )
display(Markdown("<h4>Results for KNN Algorithm for K = 3 are : </h4>"))
print_accuracy(out , test[['C']].values )
knn = KNN(K = 5, use_distance = 'man')
knn.fit(train[['C_1','C_2','C_3']].values , train[['C']].values)
out = knn.predict(test[['C_1','C_2','C_3']].values )
display(Markdown("<h4>Results for KNN Algorithm for K = 5 are : </h4>"))
print_accuracy(out , test[['C']].values )
knn = KNN(K = 7, use_distance = 'man')
knn.fit(train[['C_1','C_2','C_3']].values , train[['C']].values)
out = knn.predict(test[['C_1','C_2','C_3']].values )
display(Markdown("<h4>Results for KNN Algorithm for K = 7 are : </h4>"))
print_accuracy(out , test[['C']].values )
###Output
_____no_output_____ |
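###Markdown
As an optional sanity check, the custom implementation can be compared against scikit-learn's `KNeighborsClassifier` on the same split — a minimal sketch, assuming scikit-learn is available (it is already imported above for the metrics):
###Code
from sklearn.neighbors import KNeighborsClassifier

# Fit on the same training split and report accuracy on the test split
sk_knn = KNeighborsClassifier(n_neighbors=5, metric='euclidean')
sk_knn.fit(train[['C_1', 'C_2', 'C_3']].values, train['C'].values)
print('sklearn KNN accuracy:', sk_knn.score(test[['C_1', 'C_2', 'C_3']].values, test['C'].values))
###Output
_____no_output_____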
Lab9/121_ex1.ipynb | ###Markdown
**LAB9**
**AIM: SVM classifier on MNIST dataset, compare the performance of linear, polynomial and RBF kernels.**
Jwalit Shah | CE121
###Code
import matplotlib.pyplot as plt
from sklearn import svm
from sklearn.model_selection import train_test_split
import numpy as np
import seaborn as sb
import matplotlib.pyplot as plt
from sklearn import datasets
mnist=datasets.load_digits()
# print("Features: ",mnist.)
print("Targets: ",mnist.target_names)
mnist.data.shape
print(mnist.target)
X_train,X_test,Y_train,Y_test=train_test_split(mnist.data,mnist.target,test_size=0.2,random_state=121)
clf_linear = svm.SVC(kernel='linear',random_state=121) # Linear Kernel
clf_linear.fit(X_train,Y_train)
#Predict the response for test dataset
y_pred_linear = clf_linear.predict(X_test)
print(y_pred_linear)
from sklearn import metrics
# Model Accuracy: how often is the classifier correct?
print("Accuracy:",metrics.accuracy_score(Y_test, y_pred_linear))
# Model Precision: what percentage of positive tuples are labeled as such?
print("Precision:",metrics.precision_score(Y_test, y_pred_linear,average='weighted'))
# Model Recall: what percentage of positive tuples are labelled as such?
print("Recall:",metrics.recall_score(Y_test, y_pred_linear,average='weighted'))
cm_linear = metrics.confusion_matrix(Y_test, y_pred_linear)
plt.subplots(figsize=(10, 6))
sb.heatmap(cm_linear, annot = True, fmt = 'g')
plt.xlabel("Predicted")
plt.ylabel("Actual")
plt.title("Confusion Matrix")
plt.show()
#using rbf kernel
clf_rbf = svm.SVC(kernel='rbf',gamma=0.005,random_state=122) # rbf Kernel
clf_rbf.fit(X_train,Y_train)
#Predict the response for test dataset
y_pred_rbf = clf_rbf.predict(X_test)
print(y_pred_rbf)
# Model Accuracy: how often is the classifier correct?
print("Accuracy:",metrics.accuracy_score(Y_test, y_pred_rbf))
# Model Precision: what percentage of positive tuples are labeled as such?
print("Precision:",metrics.precision_score(Y_test, y_pred_rbf,average='weighted'))
# Model Recall: what percentage of positive tuples are labelled as such?
print("Recall:",metrics.recall_score(Y_test, y_pred_rbf,average='weighted'))
cm_rbf = metrics.confusion_matrix(Y_test, y_pred_rbf)
plt.subplots(figsize=(10, 6))
sb.heatmap(cm_rbf, annot = True, fmt = 'g')
plt.xlabel("Predicted")
plt.ylabel("Actual")
plt.title("Confusion Matrix")
plt.show()
#using poly kernel
clf_poly = svm.SVC(kernel='poly',degree=3,random_state=121) # poly Kernel
clf_poly.fit(X_train,Y_train)
#Predict the response for test dataset
y_pred_poly = clf_poly.predict(X_test)
print(y_pred_poly)
# Model Accuracy: how often is the classifier correct?
print("Accuracy:",metrics.accuracy_score(Y_test, y_pred_poly))
# Model Precision: what percentage of positive tuples are labeled as such?
print("Precision:",metrics.precision_score(Y_test, y_pred_poly,average='weighted'))
# Model Recall: what percentage of positive tuples are labelled as such?
print("Recall:",metrics.recall_score(Y_test, y_pred_poly,average='weighted'))
cm_poly = metrics.confusion_matrix(Y_test, y_pred_poly)
plt.subplots(figsize=(10, 6))
sb.heatmap(cm_poly, annot = True, fmt = 'g')
plt.xlabel("Predicted")
plt.ylabel("Actual")
plt.title("Confusion Matrix")
plt.show()
###Output
_____no_output_____ |
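###Markdown
A more systematic way to compare the kernels is a cross-validated grid search — a sketch assuming the same `X_train`/`Y_train` split created above; the parameter values are illustrative only.
###Code
from sklearn.model_selection import GridSearchCV

# One parameter grid per kernel family
param_grid = [
    {'kernel': ['linear']},
    {'kernel': ['poly'], 'degree': [2, 3]},
    {'kernel': ['rbf'], 'gamma': [0.001, 0.005, 0.01]},
]
search = GridSearchCV(svm.SVC(random_state=121), param_grid, cv=5)
search.fit(X_train, Y_train)
print('best params:', search.best_params_)
print('best CV accuracy:', search.best_score_)
###Output
_____no_output_____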
examples/mdis_extra_cmp.ipynb | ###Markdown
Comparing various values from ISIS and USGSCSM cameras for a Messenger MDIS NAC PDS3 image
###Code
import ale
from ale.drivers.messenger_drivers import MessengerMdisPds3NaifSpiceDriver
from ale.formatters.usgscsm_formatter import to_usgscsm
import json
import os
from pysis import isis
import pvl
import numpy as np
import knoten
import csmapi
from knoten import csm
# printing config displays the yaml formatted string
print(ale.config)
# config object is a dictionary so it has the same access patterns
print('MDIS spice directory:', ale.config['mdis'])
# updating config for new MDIS path in this notebook
# Note: this will not change the path in `.ale/config.yml`. This change only lives in the notebook.
# ale.config['mdis'] = '/home/kdlee/builds/knoten'
# change to desired PDS3 image path
fileName = '/home/kdlee/builds/ale/EN1072174528M.IMG'
# metakernels are furnsh-ed when entering the context (with block) with a driver instance
# most driver constructors simply accept an image path
with MessengerMdisPds3NaifSpiceDriver(fileName) as driver:
# Get rotation from target_frame to j2000
j2000 = driver.frame_chain
target_frame = j2000.find_child_frame(driver.target_frame_id)
rotation = target_frame.rotation_to(j2000)
# Apply rotation to sensor position and velocity
j2000RotationPos = rotation._rots.apply(driver.sensor_position[0])
j2000RotationVel = rotation._rots.apply(driver.sensor_position[1])
# pass driver instance into formatter function
usgscsmString = to_usgscsm(driver)
# load the json encoded string ISD
usgscsm_dict = json.loads(usgscsmString)
# strip the image file extension and append .json
jsonFile = os.path.splitext(fileName)[0] + '.json'
# write to disk
with open(jsonFile, 'w') as fp:
json.dump(usgscsm_dict, fp)
# Constructs a camera model using usgscsm
model="USGS_ASTRO_FRAME_SENSOR_MODEL" # Make sure this matches your camera model
plugin = csmapi.Plugin.getList()[0]
isd = csmapi.Isd(fileName)
warns = csmapi.WarningList()
if plugin.canModelBeConstructedFromISD(isd, model, warns):
print("CONSTRUCTED CAMERA")
camera = plugin.constructModelFromISD(isd, model)
else:
print("CAN'T CONSTRUCT CAMERA")
for item in warns:
print(item.getMessage())
# Ingest image and spiceinit it
cube = os.path.splitext(fileName)[0] + '.cub'
isis.mdis2isis(from_=fileName, to=cube)
isis.spiceinit(from_=cube, shape='ellipsoid')
# Grab campt output on spiceinit'd cube and load it as a pvl
output = isis.campt(from_=cube)
pvl_output = pvl.loads(output)
# Grab body fixed coordinates from campt pvl output
campt_bodyfixed = pvl_output['GroundPoint']['BodyFixedCoordinate']
campt_bodyfixed = np.asarray(campt_bodyfixed.value) * 1000
# Grab body fixed coordinates from csm
ale_bodyfixed = csm.generate_ground_point(0, (256 - .5, 256 - .5), camera)
ale_bodyfixed = np.array([ale_bodyfixed.x, ale_bodyfixed.y, ale_bodyfixed.z])
# Compare the two body fixed coordinates
ale_bodyfixed - campt_bodyfixed
# Grab sensor position from isd
ale_position = usgscsm_dict['sensor_position']['positions']
ale_position = np.asarray(ale_position)
# Grab spacecraft position from campt pvl output
campt_position = pvl_output['GroundPoint']['SpacecraftPosition']
campt_position = np.asarray(campt_position.value) * 1000
# Compare the two positions
ale_position - campt_position
# Grab InstrumentPosition table from the isis cube using tabledump
instrument_pos_table = str(isis.tabledump(from_=cube, name='InstrumentPosition'))
parsed_string = instrument_pos_table.split(',')
# Grab sensor position from the table dump output
isis_j2000_pos = np.asarray([float(parsed_string[6][4:]), float(parsed_string[7]), float(parsed_string[8])]) * 1000
# Grab ALE's sensor position
ale_j2000_pos = np.asarray(j2000RotationPos)
# Compare the two sensor positions that are in the j2000 reference frame
ale_j2000_pos - isis_j2000_pos
# Grab velocities from the table dump output
isis_j2000_vel = np.asarray([float(parsed_string[9]), float(parsed_string[10]), float(parsed_string[11])]) * 1000
# Grab ALE's velocities
ale_j2000_vel = np.asarray(j2000RotationVel)
# Compare the two velocity lists that are in the j2000 reference frame
ale_j2000_vel - isis_j2000_vel
# Grab spacecraft position and body fixed look vector from csm
locus = camera.imageToRemoteImagingLocus(csmapi.ImageCoord(256 - .5, 256 - .5))
csm_bodyfixedLV = np.asarray([locus.direction.x, locus.direction.y, locus.direction.z])
csm_position = np.asarray([locus.point.x, locus.point.y, locus.point.z])
# Grab spacecraft position and body fixed look vector from campt pvl output
campt_bodyfixedLV = np.asarray(pvl_output['GroundPoint']['LookDirectionBodyFixed'])
campt_position = pvl_output['GroundPoint']['SpacecraftPosition']
campt_position = np.asarray(campt_position.value) * 1000
# Compute the differences
print(csm_bodyfixedLV - campt_bodyfixedLV)
print(csm_position - campt_position)
###Output
[ 3.20945290e-05 -4.88871593e-03 -4.34349143e-03]
[ -66.51548142 -182.55737144 248.80339983]
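###Markdown
The element-wise differences above can also be summarised as single magnitudes — a small sketch using `numpy` (already imported as `np`):
###Code
# Magnitude of the disagreement between the CSM and campt values
print('look vector difference norm:', np.linalg.norm(csm_bodyfixedLV - campt_bodyfixedLV))
print('position difference norm (m):', np.linalg.norm(csm_position - campt_position))
###Output
_____no_output_____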
|
_drafts/modeling-the-nhl-better/old_notebooks/Hockey Model Ideal Data.ipynb | ###Markdown
Generate Ideal Data
###Code
# imports needed by the cells below
import numpy as np
import pandas as pd
import pymc3 as pm
import theano.tensor as tt
from scipy.stats import norm, invgamma
from tqdm import tqdm

n_days = 200
n_teams = 32
gpd = 8
true_Δi_σ = 0.0
true_Δh_σ = 0.0
true_Δod_σ = 0.002
true_i_0 = 1.12
true_h_0 = 0.25
true_o_0 = np.random.normal(0, 0.15, n_teams)
true_o_0 = true_o_0 - np.mean(true_o_0)
true_d_0 = np.random.normal(0, 0.15, n_teams)
true_d_0 = true_d_0 - np.mean(true_d_0)
true_i = np.zeros(n_days)
true_h = np.zeros(n_days)
true_o = np.zeros((n_days, n_teams))
true_d = np.zeros((n_days, n_teams))
true_i[0] = true_i_0
true_h[0] = true_h_0
true_o[0,:] = true_o_0
true_d[0,:] = true_d_0
games_list = []
matches = np.arange(12)
np.random.shuffle(matches)
for t in range(1, n_days):
true_i[t] = true_i[t-1] + np.random.normal(0.0, true_Δi_σ)
true_h[t] = true_h[t-1] + np.random.normal(0.0, true_Δh_σ)
true_o[t,:] = true_o[t-1,:] + np.random.normal(0.0, true_Δod_σ, n_teams)
true_o[t,:] = true_o[t,:] - np.mean(true_o[t,:])
true_d[t,:] = true_d[t-1,:] + np.random.normal(0.0, true_Δod_σ, n_teams)
true_d[t,:] = true_d[t,:] - np.mean(true_d[t,:])
if matches.shape[0]//2 < gpd:
new_matches = np.arange(n_teams)
np.random.shuffle(new_matches)
matches = np.concatenate([matches, new_matches])
for _ in range(gpd):
idₕ = matches[0]
idₐ = matches[1]
logλₕ = true_i[t] + true_h[t] + true_o[t,idₕ] - true_d[t,idₐ]
logλₐ = true_i[t] + true_o[t,idₐ] - true_d[t,idₕ]
sₕ = np.random.poisson(np.exp(logλₕ))
sₐ = np.random.poisson(np.exp(logλₐ))
if sₕ > sₐ:
hw = 1
elif sₕ == sₐ:
p = np.exp(logλₕ)/(np.exp(logλₕ) + np.exp(logλₐ))
hw = np.random.binomial(1, p)
else:
hw = 0
games_list.append([t, idₕ, sₕ, idₐ, sₐ, hw])
matches = matches[2:]
games = pd.DataFrame(games_list, columns=['day', 'idₕ', 'sₕ', 'idₐ', 'sₐ', 'hw'])
games.head()
games['idₕ'].value_counts() + games['idₐ'].value_counts()
###Output
_____no_output_____
###Markdown
Model 1: Daily Updates, No Deltas
###Code
def get_m1_posteriors(trace):
posteriors = {}
h_μ, h_σ = norm.fit(trace['h'])
posteriors['h'] = [h_μ, h_σ]
i_μ, i_σ = norm.fit(trace['i'])
posteriors['i'] = [i_μ, i_σ]
o_μ = []
o_σ = []
d_μ = []
d_σ = []
for i in range(n_teams):
oᵢ_μ, oᵢ_σ = norm.fit(trace['o'][:,i])
o_μ.append(oᵢ_μ)
o_σ.append(oᵢ_σ)
dᵢ_μ, dᵢ_σ = norm.fit(trace['d'][:,i])
d_μ.append(dᵢ_μ)
d_σ.append(dᵢ_σ)
posteriors['o'] = [np.array(o_μ), np.array(o_σ)]
posteriors['d'] = [np.array(d_μ), np.array(d_σ)]
return posteriors
def m1_iteration(obs_data, priors):
idₕ = obs_data['idₕ'].to_numpy()
sₕ_obs = obs_data['sₕ'].to_numpy()
idₐ = obs_data['idₐ'].to_numpy()
sₐ_obs = obs_data['sₐ'].to_numpy()
hw_obs = obs_data['hw'].to_numpy()
with pm.Model() as model:
# Global model parameters
h = pm.Normal('h', mu=priors['h'][0], sigma=priors['h'][1])
i = pm.Normal('i', mu=priors['i'][0], sigma=priors['i'][1])
# Team-specific poisson model parameters
o_star = pm.Normal('o_star', mu=priors['o'][0], sigma=priors['o'][1], shape=n_teams)
d_star = pm.Normal('d_star', mu=priors['d'][0], sigma=priors['d'][1], shape=n_teams)
o = pm.Deterministic('o', o_star - tt.mean(o_star))
d = pm.Deterministic('d', d_star - tt.mean(d_star))
λₕ = tt.exp(i + h + o[idₕ] - d[idₐ])
λₐ = tt.exp(i + o[idₐ] - d[idₕ])
# OT/SO home win bernoulli model parameter
# P(T < Y), where T ~ a, Y ~ b: a/(a + b)
pₕ = λₕ/(λₕ + λₐ)
# Likelihood of observed data
sₕ = pm.Poisson('sₕ', mu=λₕ, observed=sₕ_obs)
sₐ = pm.Poisson('sₐ', mu=λₐ, observed=sₐ_obs)
hw = pm.Bernoulli('hw', p=pₕ, observed=hw_obs)
trace = pm.sample(1000, tune=1000, cores=3, progressbar=False)
posteriors = get_m1_posteriors(trace)
return posteriors
ws = 7
iv1_rows = []
priors = {
'h': [0.25, 0.1],
'i': [1.0, 0.1],
'o': [np.array([0] * n_teams), np.array([0.15] * n_teams)],
'd': [np.array([0] * n_teams), np.array([0.15] * n_teams)]
}
for t in tqdm(range(ws, n_days+1)):
obs_data = games[((games['day'] <= t) & (games['day'] > (t - ws)))]
priors = posteriors = m1_iteration(obs_data, priors);
iv_row = posteriors['h'] + posteriors['i'] + list(posteriors['o'][0]) +list(posteriors['o'][1]) + \
list(posteriors['d'][0]) + list(posteriors['d'][1])
iv1_rows.append(iv_row)
###Output
_____no_output_____
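###Markdown
Once a set of posteriors is available, per-game outcome probabilities can be sketched from the Poisson rates: the regulation goal difference Sₕ − Sₐ follows a Skellam distribution. The helper below is illustrative only (not part of the original model) and assumes `scipy` is available and the `posteriors` dictionary structure used above.
###Code
from scipy.stats import skellam

def home_win_prob(post, id_h, id_a):
    # Posterior-mean goal rates for a single home/away matchup
    lam_h = np.exp(post['i'][0] + post['h'][0] + post['o'][0][id_h] - post['d'][0][id_a])
    lam_a = np.exp(post['i'][0] + post['o'][0][id_a] - post['d'][0][id_h])
    p_reg_win = skellam.sf(0, lam_h, lam_a)   # P(S_h - S_a >= 1)
    p_tie = skellam.pmf(0, lam_h, lam_a)      # ties go to OT/SO
    return p_reg_win + p_tie * lam_h / (lam_h + lam_a)

home_win_prob(posteriors, 0, 1)
###Output
_____no_output_____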
###Markdown
Model 2: Daily Updates with Deltas
###Code
def get_m2_posteriors(trace):
posteriors = {}
h_μ, h_σ = norm.fit(trace['h'])
posteriors['h'] = [h_μ, h_σ]
i_μ, i_σ = norm.fit(trace['i'])
posteriors['i'] = [i_μ, i_σ]
o_μ = []
o_σ = []
d_μ = []
d_σ = []
for i in range(n_teams):
oᵢ_μ, oᵢ_σ = norm.fit(trace['o'][:,i])
o_μ.append(oᵢ_μ)
o_σ.append(oᵢ_σ)
dᵢ_μ, dᵢ_σ = norm.fit(trace['d'][:,i])
d_μ.append(dᵢ_μ)
d_σ.append(dᵢ_σ)
posteriors['o'] = [np.array(o_μ), np.array(o_σ)]
posteriors['d'] = [np.array(d_μ), np.array(d_σ)]
# Deltas
Δ_h_μ, Δ_h_σ = norm.fit(trace['Δ_h'])
posteriors['Δ_h'] = [Δ_h_μ, Δ_h_σ]
Δ_i_μ, Δ_i_σ = norm.fit(trace['Δ_i'])
posteriors['Δ_i'] = [Δ_i_μ, Δ_i_σ]
Δ_od_μ_μ, Δ_od_μ_σ = norm.fit(trace['Δ_od_μ'])
posteriors['Δ_od_μ'] = [Δ_od_μ_μ, Δ_od_μ_σ]
Δ_od_σ_α, _, Δ_od_σ_β = invgamma.fit(trace['Δ_od_σ'])
posteriors['Δ_od_σ'] = [Δ_od_σ_α, Δ_od_σ_β]
return posteriors
def m2_iteration(obs_data, priors):
idₕ = obs_data['idₕ'].to_numpy()
sₕ_obs = obs_data['sₕ'].to_numpy()
idₐ = obs_data['idₐ'].to_numpy()
sₐ_obs = obs_data['sₐ'].to_numpy()
hw_obs = obs_data['hw'].to_numpy()
with pm.Model() as model:
# Global model parameters
h_init = pm.Normal('h_init', mu=priors['h'][0], sigma=priors['h'][1])
Δ_h = pm.Normal('Δ_h', mu=priors['Δ_h'][0], sigma=priors['Δ_h'][1])
h = pm.Deterministic('h', h_init + Δ_h)
i_init = pm.Normal('i_init', mu=priors['i'][0], sigma=priors['i'][1])
Δ_i = pm.Normal('Δ_i', mu=priors['Δ_i'][0], sigma=priors['Δ_i'][1])
i = pm.Deterministic('i', i_init + Δ_i)
Δ_od_μ = pm.Normal('Δ_od_μ', mu=priors['Δ_od_μ'][0], sigma=priors['Δ_od_μ'][1])
        Δ_od_σ = pm.InverseGamma('Δ_od_σ', alpha=priors['Δ_od_σ'][0], beta=priors['Δ_od_σ'][1])
# Team-specific poisson model parameters
o_star_init = pm.Normal('o_star_init', mu=priors['o'][0], sigma=priors['o'][1], shape=n_teams)
Δ_o = pm.Normal('Δ_o', mu=Δ_od_μ, sigma=Δ_od_σ, shape=n_teams)
o_star = pm.Deterministic('o_star', o_star_init + Δ_o)
o = pm.Deterministic('o', o_star - tt.mean(o_star))
d_star_init = pm.Normal('d_star_init', mu=priors['d'][0], sigma=priors['d'][1], shape=n_teams)
Δ_d = pm.Normal('Δ_d', mu=Δ_od_μ, sigma=Δ_od_σ, shape=n_teams)
d_star = pm.Deterministic('d_star', d_star_init + Δ_d)
d = pm.Deterministic('d', d_star - tt.mean(d_star))
# Regulation game time goal Poisson rates
λₕ = tt.exp(i + h + o[idₕ] - d[idₐ])
λₐ = tt.exp(i + o[idₐ] - d[idₕ])
# OT/SO home win bernoulli model parameter
# P(T < Y), where T ~ a, Y ~ b: a/(a + b)
pₕ = λₕ/(λₕ + λₐ)
# Likelihood of observed data
sₕ = pm.Poisson('sₕ', mu=λₕ, observed=sₕ_obs)
sₐ = pm.Poisson('sₐ', mu=λₐ, observed=sₐ_obs)
hw = pm.Bernoulli('hw', p=pₕ, observed=hw_obs)
trace = pm.sample(1000, tune=1000, cores=3, progressbar=False)
posteriors = get_m2_posteriors(trace)
return posteriors
ws = 7
iv2_rows = []
priors = {
'h': [0.25, 0.1],
'i': [1.0, 0.1],
'o': [np.array([0] * n_teams), np.array([0.15] * n_teams)],
'd': [np.array([0] * n_teams), np.array([0.15] * n_teams)],
'Δ_h': [0.0, 0.001],
'Δ_i': [0.0, .001],
'Δ_od_μ': [0.0, 0.0005],
'Δ_od_σ': [5.0, 0.01],
}
for t in tqdm(range(ws, n_days+1)):
obs_data = games[((games['day'] <= t) & (games['day'] > (t - ws)))]
priors = posteriors = m2_iteration(obs_data, priors);
    iv_row = posteriors['h'] + posteriors['i'] + list(posteriors['o'][0]) + list(posteriors['o'][1]) + \
             list(posteriors['d'][0]) + list(posteriors['d'][1]) + posteriors['Δ_h'] + posteriors['Δ_i'] +\
             posteriors['Δ_od_μ'] + posteriors['Δ_od_σ']
iv2_rows.append(iv_row)
###Output
_____no_output_____
###Markdown
Model 3: Daily Updates with Zero Cenetered Deltas
###Code
def get_m3_posteriors(trace):
posteriors = {}
h_μ, h_σ = norm.fit(trace['h'])
posteriors['h'] = [h_μ, h_σ]
i_μ, i_σ = norm.fit(trace['i'])
posteriors['i'] = [i_μ, i_σ]
o_μ = []
o_σ = []
d_μ = []
d_σ = []
for i in range(n_teams):
oᵢ_μ, oᵢ_σ = norm.fit(trace['o'][:,i])
o_μ.append(oᵢ_μ)
o_σ.append(oᵢ_σ)
dᵢ_μ, dᵢ_σ = norm.fit(trace['d'][:,i])
d_μ.append(dᵢ_μ)
d_σ.append(dᵢ_σ)
posteriors['o'] = [np.array(o_μ), np.array(o_σ)]
posteriors['d'] = [np.array(d_μ), np.array(d_σ)]
# Deltas
Δ_h_μ, Δ_h_σ = norm.fit(trace['Δ_h'], loc=0.0)
posteriors['Δ_h'] = [0.0, Δ_h_σ]
Δ_i_μ, Δ_i_σ = norm.fit(trace['Δ_i'], loc=0.0)
posteriors['Δ_i'] = [0.0, Δ_i_σ]
Δ_od_σ_α, _, Δ_od_σ_β = invgamma.fit(trace['Δ_od_σ'])
posteriors['Δ_od_σ'] = [Δ_od_σ_α, Δ_od_σ_β]
return posteriors
def m3_iteration(obs_data, priors):
idₕ = obs_data['idₕ'].to_numpy()
sₕ_obs = obs_data['sₕ'].to_numpy()
idₐ = obs_data['idₐ'].to_numpy()
sₐ_obs = obs_data['sₐ'].to_numpy()
hw_obs = obs_data['hw'].to_numpy()
with pm.Model() as model:
# Global model parameters
h_init = pm.Normal('h_init', mu=priors['h'][0], sigma=priors['h'][1])
Δ_h = pm.Normal('Δ_h', mu=priors['Δ_h'][0], sigma=priors['Δ_h'][1])
h = pm.Deterministic('h', h_init + Δ_h)
i_init = pm.Normal('i_init', mu=priors['i'][0], sigma=priors['i'][1])
Δ_i = pm.Normal('Δ_i', mu=priors['Δ_i'][0], sigma=priors['Δ_i'][1])
i = pm.Deterministic('i', i_init + Δ_i)
Δ_od_σ = pm.InverseGamma('Δ_od_σ', alpha=priors['Δ_od_σ'][0], beta=priors['Δ_od_σ'][1])
# Team-specific poisson model parameters
o_star_init = pm.Normal('o_star_init', mu=priors['o'][0], sigma=priors['o'][1], shape=n_teams)
Δ_o = pm.Normal('Δ_o', mu=0.0, sigma=Δ_od_σ, shape=n_teams)
o_star = pm.Deterministic('o_star', o_star_init + Δ_o)
o = pm.Deterministic('o', o_star - tt.mean(o_star))
d_star_init = pm.Normal('d_star_init', mu=priors['d'][0], sigma=priors['d'][1], shape=n_teams)
Δ_d = pm.Normal('Δ_d', mu=0.0, sigma=Δ_od_σ, shape=n_teams)
d_star = pm.Deterministic('d_star', d_star_init + Δ_d)
d = pm.Deterministic('d', d_star - tt.mean(d_star))
# Regulation game time goal Poisson rates
λₕ = tt.exp(i + h + o[idₕ] - d[idₐ])
λₐ = tt.exp(i + o[idₐ] - d[idₕ])
# OT/SO home win bernoulli model parameter
# P(T < Y), where T ~ a, Y ~ b: a/(a + b)
#pₕ = λₕ/(λₕ + λₐ)
# Likelihood of observed data
sₕ = pm.Poisson('sₕ', mu=λₕ, observed=sₕ_obs)
sₐ = pm.Poisson('sₐ', mu=λₐ, observed=sₐ_obs)
#hw = pm.Bernoulli('hw', p=pₕ, observed=hw_obs)
trace = pm.sample(10000, tune=10000, cores=3)#, progressbar=False)
posteriors = get_m3_posteriors(trace)
return posteriors
ws = 28
iv3_rows = []
# Initialize model with model1 parameters on first 75 days of data
init_priors = {
'h': [0.25, 0.01],
'i': [1.12, 0.01],
'o': [np.array([0] * n_teams), np.array([0.15] * n_teams)],
'd': [np.array([0] * n_teams), np.array([0.15] * n_teams)]
}
init_data = games[(games['day'] <= 75)]
priors = m1_iteration(init_data, init_priors)
priors['Δ_h'] = [0.0, 0.005]
priors['Δ_i'] = [0.0, 0.005]
priors['Δ_od_σ'] = [5.0, 0.01]
for t in tqdm(range(ws, n_days+1)):
obs_data = games[((games['day'] <= t) & (games['day'] > (t - ws)))]
priors = posteriors = m3_iteration(obs_data, priors);
iv_row = posteriors['h'] + posteriors['i'] + list(posteriors['o'][0]) + list(posteriors['o'][1]) + \
list(posteriors['d'][0]) + list(posteriors['d'][1]) + posteriors['Δ_h'] + posteriors['Δ_i'] +\
posteriors['Δ_od_σ']
iv3_rows.append(iv_row)
###Output
_____no_output_____
###Markdown
Model 4: Do not vary h and i with each step
###Code
def get_m4_posteriors(trace):
posteriors = {}
h_μ, h_σ = norm.fit(trace['h'])
posteriors['h'] = [h_μ, h_σ]
i_μ, i_σ = norm.fit(trace['i'])
posteriors['i'] = [i_μ, i_σ]
o_μ = []
o_σ = []
d_μ = []
d_σ = []
for i in range(n_teams):
oᵢ_μ, oᵢ_σ = norm.fit(trace['o'][:,i])
o_μ.append(oᵢ_μ)
o_σ.append(oᵢ_σ)
dᵢ_μ, dᵢ_σ = norm.fit(trace['d'][:,i])
d_μ.append(dᵢ_μ)
d_σ.append(dᵢ_σ)
posteriors['o'] = [np.array(o_μ), np.array(o_σ)]
posteriors['d'] = [np.array(d_μ), np.array(d_σ)]
# Unified o and d variances
o_σ_α, _, o_σ_β = invgamma.fit(trace['o_σ'])
posteriors['o_σ'] = [o_σ_α, o_σ_β]
d_σ_α, _, d_σ_β = invgamma.fit(trace['d_σ'])
posteriors['d_σ'] = [d_σ_α, d_σ_β]
return posteriors
def m4_iteration(obs_data, priors):
idₕ = obs_data['idₕ'].to_numpy()
sₕ_obs = obs_data['sₕ'].to_numpy()
idₐ = obs_data['idₐ'].to_numpy()
sₐ_obs = obs_data['sₐ'].to_numpy()
hw_obs = obs_data['hw'].to_numpy()
with pm.Model() as model:
# Global model parameters
h = pm.Normal('h', mu=priors['h'][0], sigma=priors['h'][1])
i = pm.Normal('i', mu=priors['i'][0], sigma=priors['i'][1])
o_σ = pm.InverseGamma('o_σ', alpha=priors['o_σ'][0], beta=priors['o_σ'][1])
d_σ = pm.InverseGamma('d_σ', alpha=priors['d_σ'][0], beta=priors['d_σ'][1])
# Team-specific poisson model parameters
o_star_init = pm.Normal('o_star_init', mu=priors['o'][0], sigma=o_σ, shape=n_teams)
Δ_o = pm.Normal('Δ_o', mu=0.0, sigma=0.0025, shape=n_teams)
o_star = pm.Deterministic('o_star', o_star_init + Δ_o)
o = pm.Deterministic('o', o_star - tt.mean(o_star))
d_star_init = pm.Normal('d_star_init', mu=priors['d'][0], sigma=d_σ, shape=n_teams)
Δ_d = pm.Normal('Δ_d', mu=0.0, sigma=0.0025, shape=n_teams)
d_star = pm.Deterministic('d_star', d_star_init + Δ_d)
d = pm.Deterministic('d', d_star - tt.mean(d_star))
# Regulation game time goal Poisson rates
λₕ = tt.exp(i + h + o[idₕ] - d[idₐ])
λₐ = tt.exp(i + o[idₐ] - d[idₕ])
# OT/SO home win bernoulli model parameter
# P(T < Y), where T ~ a, Y ~ b: a/(a + b)
pₕ = λₕ/(λₕ + λₐ)
# Likelihood of observed data
sₕ = pm.Poisson('sₕ', mu=λₕ, observed=sₕ_obs)
sₐ = pm.Poisson('sₐ', mu=λₐ, observed=sₐ_obs)
hw = pm.Bernoulli('hw', p=pₕ, observed=hw_obs)
trace = pm.sample(10000, tune=10000, target_accept=0.90, cores=3)#, progressbar=False)
posteriors = get_m4_posteriors(trace)
return posteriors
start_day = 150
ws = 14
iv4_rows = []
# Initialize model with model 1 parameters fitted on the first `start_day` days of data
init_priors = {
'h': [0.25, 0.01],
'i': [1.12, 0.01],
'o': [np.array([0] * n_teams), np.array([0.15] * n_teams)],
'd': [np.array([0] * n_teams), np.array([0.15] * n_teams)],
}
init_data = games[(games['day'] <= start_day)]
priors = m1_iteration(init_data, init_priors)
priors['o_σ'] = [5.0, 0.4]
priors['d_σ'] = [5.0, 0.4]
priors['Δ_od_σ'] = [5.0, 0.1]
print(priors)
for t in tqdm(range(start_day, n_days+1)):
obs_data = games[((games['day'] <= t) & (games['day'] > (t - ws)))]
priors = posteriors = m4_iteration(obs_data, priors);
iv_row = posteriors['h'] + posteriors['i'] + list(posteriors['o'][0]) + list(posteriors['o'][1]) + \
list(posteriors['d'][0]) + list(posteriors['d'][1]) + posteriors['o_σ'] +\
posteriors['d_σ']
iv4_rows.append(iv_row)
true_o
np.array(iv4_rows)
np.array(iv4_rows)[:,4:36]
col_names = ['h_μ', 'h_σ', 'i_μ', 'i_σ'] + ['o{}_μ'.format(i) for i in range(n_teams)] + \
['o{}_σ'.format(i) for i in range(n_teams)] + ['d{}_μ'.format(i) for i in range(n_teams)] + \
['d{}_σ'.format(i) for i in range(n_teams)] + \
['o_σ_α', 'o_σ_β', 'd_σ_α', 'd_σ_β', 'Δ_od_σ_α', 'Δ_od_σ_β']
iv4_df = pd.DataFrame(iv4_rows, columns=col_names)
iv4_df['day'] = list(range(start_day, n_days+1))
iv4_df.head()
iv4_df.to_csv('iv4_df.csv')
lv_df = pd.DataFrame(data={'h':true_h, 'i':true_i})
lv_df = pd.concat([lv_df, pd.DataFrame(data=true_o, columns=['o{}'.format(i) for i in range(n_teams)])], axis=1)
lv_df = pd.concat([lv_df, pd.DataFrame(data=true_d, columns=['d{}'.format(i) for i in range(n_teams)])], axis=1)
lv_df['day'] = list(range(1,n_days+1))
lv_df.iloc[150:155,:].head()
lv_df.to_csv('lv_df.csv')
###Output
_____no_output_____ |
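###Markdown
As a quick check of how well model 4 recovers the simulated ratings, the estimated offensive means on the last fitted day can be correlated with the true values — a sketch assuming `iv4_df` and `lv_df` as constructed above:
###Code
# Correlation between estimated and true offensive ratings on the final day
est_o = iv4_df.iloc[-1][['o{}_μ'.format(i) for i in range(n_teams)]].values.astype(float)
true_o_final = lv_df.iloc[-1][['o{}'.format(i) for i in range(n_teams)]].values.astype(float)
print('offence correlation:', np.corrcoef(est_o, true_o_final)[0, 1])
###Output
_____no_output_____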
notebooks/01-ULMFiT-Yelp.ipynb | ###Markdown
`ULMFiT` Sentiment Analysis of Yelp Reviews* The ULMFiT NLP transfer learning technique was introduced in this 2018 paper (https://arxiv.org/pdf/1801.06146.pdf)* Explanation of usage comes from this excellent course by Rachel Thomas (https://www.fast.ai/2019/07/08/fastai-nlp/) The model works in three stages1. The `AWD-LSTM SequentialRNN` is pretrained on a general-domain corpus, in our case the `WikiText103` dataset.2. The `AWD-LSTM Language model`, trained as a sequence generator, is then fine-tuned on the domain-specific corpus (Yelp reviews).3. The embeddings learnt from these first two steps are imported into a new `classifier model`, which is fine-tuned on the target task (star ratings) with gradual unfreezing of the final layers.
###Code
path = untar_data(URLs.YELP_REVIEWS)
df_train = pd.read_csv(path/'train.csv', header=None, names=['rating', 'text']) \
.sample(frac=0.05, random_state=1)
df_test = pd.read_csv(path/'test.csv', header=None, names=['rating', 'text']) \
.sample(frac=0.05, random_state=1)
###Output
_____no_output_____
###Markdown
* Taking a sample set for development
###Code
print(df_train.shape, df_test.shape)
df_train.head()
# First review
df_train['text'][21194][:500]
###Output
_____no_output_____
###Markdown
Split into `training`, `validation` and `hold-out` test set (df_test)
###Code
from sklearn.model_selection import train_test_split
# Split data into training and validation set
df_trn, df_val = train_test_split(df_train, stratify=df_train['rating'],
test_size=0.2, random_state=1)
print(df_trn.shape, df_val.shape)
%%time
df_trn.to_csv(path / 'train_sample.csv')
df_val.to_csv(path / 'val_sample.csv')
df_test.to_csv(path / 'test_sample.csv')
path.ls()
###Output
_____no_output_____
###Markdown
Tokenization* The first step of processing we make the texts go through is to split the raw sentences into words, or `tokens`.
###Code
%%time
data = TextClasDataBunch.from_csv(path, 'train_sample.csv',
text_cols='text', label_cols='rating')
data.show_batch()
###Output
_____no_output_____
###Markdown
Numericalization into `vocab`* Creating unique tokens for words* Top 60,000 used by default - unknown token `xxunk` used for remainders* Special characters are also tokenised (spaces, punctuation, new lines)* `xxbos` is the token for beginning of sentence etc..
###Code
# Top 10 words
data.vocab.itos[:10]
# Example tokenised review
data.train_ds[0][0].text[:200]
# Example numerical token mapping onto index
data.train_ds[0][0].data[:10]
###Output
_____no_output_____
###Markdown
For sentiment analysis we are creating two models1. A language model `data_lm` (fine-tuned encoder, no labels)2. A text classification model `data_clas` (with labels) 1. Language Model* Model `AWD_LSTM` is pretrained on a processed subset of wikipedia `wikitext-103`* This RNN model is trained to predict what the next word in the sequence is* It has a recurrent structure and a hidden state (updated each time it sees a new word), which contains information about the sentence
###Code
# Decrease batchsize if GPU can't handle the load
bs = 24 # range 12 - 48
%%time
# Language Model data
data_lm = TextLMDataBunch.from_df(path, df_trn, df_val)
print('Training and validation shape:\n', df_trn.shape, df_val.shape)
data_lm.show_batch(rows=1)
# Transfer learning Model AWD_LSTM pre-trained on WikiText103
learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
%%time
# Find best learning rate from slope
learn.lr_find()
learn.recorder.plot(skip_end=15)
# Only the last layer is unfrozen during training
learn.summary()
# Training/fine-tuning final layer to yelp reviews
learn.fit_one_cycle(1, 1e-2, moms=(0.8,0.7))
###Output
_____no_output_____
###Markdown
* `Accuracy` here is the ability of the model to predict the next word in the sequence
###Code
%%time
# Run until valid_loss comes down to training_loss (past this is overfitting to training set)
learn.fit_one_cycle(3, 1e-3, moms=(0.8,0.7))
learn.save_encoder('fine_tuned_enc')
###Output
_____no_output_____
###Markdown
* Now, the encoder is fine tuned to `Yelp Reviews`* The encoder can be used to predict the next word in a sentence* The next step is to remove the final layers of the encoder, and replace them with a classification/regression model
###Code
data_lm.train_ds.inner_df.shape
data_lm.valid_ds.inner_df.shape
learn.predict("I really loved the restaurant, the food was")
###Output
_____no_output_____
###Markdown
Predicting `next word` with language model
###Code
learn.predict("I really loved the restaurant, the food was")
learn.predict("I hated the restaurant, the food tasted")
###Output
_____no_output_____
###Markdown
Generating fake yelp reviews with `RNN` output sequence
###Code
text = "The food is good and the staff"
words = 40
print(learn.predict(text, words, temperature=0.75))
###Output
The food is good and the staff is friendly . They make it a great alternative to the Asian food fare , and the food is delicious . It 's always a good place to eat .
Its fun , very tasty
###Markdown
2. Classification Model for `Sentiment Analysis`
###Code
%%time
# 2. Classification Model
data_clas = TextClasDataBunch.from_df(path, df_trn, df_val, vocab=data_lm.train_ds.vocab)
data_clas.show_batch(rows=1)
###Output
_____no_output_____
###Markdown
* Instantiate new learner, and load embeddings from fine-tuning
###Code
learn_c = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.3)
learn_c.load_encoder('fine_tuned_enc')
learn_c.freeze()
learn_c.lr_find()
learn_c.recorder.plot(skip_end=15)
learn_c.fit_one_cycle(1, 2e-2, moms=(0.8,0.7))
learn_c.save('first')
learn_c.load('first');
###Output
_____no_output_____
###Markdown
* Gradual unfreezing is used to preserve low-level representations and adapt high-level ones
###Code
learn_c.freeze_to(-2)
learn_c.fit_one_cycle(1, slice(1e-2/(2.6**4),1e-2), moms=(0.8,0.7))
learn_c.save('2nd')
learn_c.freeze_to(-3)
learn_c.fit_one_cycle(1, slice(5e-3/(2.6**4),5e-3), moms=(0.8,0.7))
###Output
_____no_output_____
###Markdown
The accuracy suggests the model guesses the correct star value 62% of the time Review Predictions
###Code
# Example review
learn_c.predict("I really loved the restaurant, it was awesome!")
learn_c.predict("I really hated the restaurant, it was disgusting!")
learn_c.predict("I went there with my friends, it was okay.")
# Random sample from the hold-out test set
for index, row in df_test.sample(5).iterrows():
print("\nPrediction:", learn_c.predict(row[1])[0])
print(" Actual:", row[0])
print(f"({index})", row[1])
# Get predictions on validation set
preds, targets = learn_c.get_preds()
predictions = np.argmax(preds, axis = 1)
###Output
_____no_output_____
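###Markdown
Beyond exact-match accuracy, a "within one star" rate is informative for ordinal ratings — a minimal sketch using the `predictions` and `targets` computed above:
###Code
# Exact and near-miss accuracy on the validation set
pred_arr = np.asarray(predictions)
targ_arr = np.asarray(targets)
print('exact-match accuracy: {:.3f}'.format((pred_arr == targ_arr).mean()))
print('within-one-star accuracy: {:.3f}'.format((np.abs(pred_arr - targ_arr) <= 1).mean()))
###Output
_____no_output_____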
###Markdown
Validation set: `Actual` vs. `Prediction`
###Code
ct = pd.crosstab(predictions, targets)
ct.columns.name = 'Actual'
ct.index.name = 'Predicted'
ct.style.background_gradient()
###Output
_____no_output_____
###Markdown
* Blue horizontal line counts the correct guesses* Samples that fall outside the diagonal are incorrect, but usually close to the right star number Prediction errors in `hold-out` test set* The model works well but it's interesting to see it's failure cases
###Code
%%time
h_df = df_test.copy().reset_index()
preds = [int(learn_c.predict(text)[0])+1 for text in df_test.text.head(500)]
h_df['predicition'] = pd.Series(preds)
h_df.head()
###Output
CPU times: user 1min 5s, sys: 13.6 s, total: 1min 19s
Wall time: 1min 19s
###Markdown
Reviews rated `1` but model predicts `5`
###Code
f1 = h_df[(h_df.rating == 1) & (h_df.predicition == 5)].head(3)
f1
f1.iloc[0,2]
###Output
_____no_output_____
###Markdown
Reviews rated `5` but model predicts `1`
###Code
f2 = h_df[(h_df.rating == 5) & (h_df.predicition == 1)].head(3)
f2
f2.iloc[0,2]
###Output
_____no_output_____ |
01-Node2Vec/notebooks/Node2Vec.ipynb | ###Markdown
Node2Vec
###Code
%matplotlib inline
import random
import matplotlib.pyplot as plt
import numpy as np
import networkx as nx
from gensim.models import Word2Vec
###Output
_____no_output_____
###Markdown
Load the graph structure
###Code
INPUT_PATH = "../data/karate.edgelist"
OUTPUT_PATH = "../data/karate.w2v"
def read_graph():
"""
    Build the graph from the input file
"""
    # Create a directed, unweighted graph
nx_graph = nx.read_edgelist(INPUT_PATH, nodetype=int, create_using=nx.DiGraph())
    # Set unit edge weights
for edge in nx_graph.edges():
nx_graph[edge[0]][edge[1]]["weight"] = 1
    # Convert the directed graph to an undirected graph
nx_graph = nx_graph.to_undirected()
return nx_graph
nx_graph = read_graph()
nx.draw(nx_graph, with_labels=True)
plt.show()
###Output
_____no_output_____
###Markdown
Compute the alias tables for nodes and edges
###Code
def alias_setup(normalized_probs):
"""
    Initialize the alias method tables
"""
n = len(normalized_probs)
probs = np.zeros(n)
alias = np.zeros(n, dtype=np.int)
    # Split the probabilities into groups above and below 1
smaller, larger = [], []
for i, p in enumerate(normalized_probs):
probs[i] = p * n
if probs[i] < 1.0:
smaller.append(i)
else:
larger.append(i)
    # Greedily pair probabilities below 1 with larger ones to fill each bucket
while len(smaller) > 0 and len(larger) > 0:
small = smaller.pop()
large = larger.pop()
alias[small] = large
        # Update the remaining probability
probs[large] = probs[large] - (1.0 - probs[small])
if probs[large] < 1.0:
smaller.append(large)
else:
larger.append(large)
return probs, alias
def alias_draw(probs, alias):
"""
    Draw a sample with the alias method
"""
n = len(probs)
index = int(np.floor(np.random.rand() * n))
if np.random.rand() < probs[index]:
return index
else:
return alias[index]
class Graph:
def __init__(self, nx_graph, p, q):
self.nx_graph = nx_graph
self.p = p
self.q = q
def get_alias_edge(self, u, v):
"""
        Get the alias setup for the given edge
"""
unnormalized_probs = []
        # Transition probabilities from the node2vec paper
for k in sorted(self.nx_graph.neighbors(v)):
if k == u:
unnormalized_probs.append(self.nx_graph[v][k]["weight"] / self.p)
elif self.nx_graph.has_edge(k, u):
unnormalized_probs.append(self.nx_graph[v][k]["weight"])
else:
unnormalized_probs.append(self.nx_graph[v][k]["weight"] / self.q)
        # Normalize
norm_const = sum(unnormalized_probs)
normalized_probs = [p / norm_const for p in unnormalized_probs]
return alias_setup(normalized_probs)
def preprocess_transition_probs(self):
"""
        Precompute the transition probabilities
"""
alias_nodes = {}
        # Per-node probabilities and normalization
for u in self.nx_graph.nodes():
unnormalized_probs = [self.nx_graph[u][v]["weight"] for v in sorted(self.nx_graph.neighbors(u))]
norm_const = sum(unnormalized_probs)
normalized_probs = [p / norm_const for p in unnormalized_probs]
alias_nodes[u] = alias_setup(normalized_probs)
            # Print some diagnostic information
if u == 2:
print("node 2 unnormalized_probs:", unnormalized_probs)
print("node 2 norm_const:", norm_const)
print("node 2 normalized_probs:", normalized_probs)
print("node 2 alias_node:", alias_nodes[u])
alias_edges = {}
for e in self.nx_graph.edges():
alias_edges[(e[0], e[1])] = self.get_alias_edge(e[0], e[1])
alias_edges[(e[1], e[0])] = self.get_alias_edge(e[1], e[0])
print("edge 2->3 alias_edge:", alias_edges[(2, 3)])
self.alias_nodes = alias_nodes
self.alias_edges = alias_edges
def generate_node2vec_walk(self, start_node, walk_length):
"""
        Generate a single random walk based on the node2vec algorithm
"""
walk = [start_node]
        # Build a node sequence of the requested length
while len(walk) < walk_length:
curr_node = walk[-1]
            # Sort the neighbours so the order matches the alias table construction
neighbor_nodes = sorted(self.nx_graph.neighbors(curr_node))
if len(neighbor_nodes) > 0:
                # Only one node in the walk so far
if len(walk) == 1:
index = alias_draw(self.alias_nodes[curr_node][0], self.alias_nodes[curr_node][1])
walk.append(neighbor_nodes[index])
                # More than one node in the walk
else:
prev_node = walk[-2]
index = alias_draw(self.alias_edges[(prev_node, curr_node)][0], self.alias_edges[(prev_node, curr_node)][1])
walk.append(neighbor_nodes[index])
else:
break
return walk
def generate_node2vec_walks(self, num_walks, walk_length):
"""
        Generate multiple random walks based on the node2vec algorithm
"""
walks, nodes = [], list(self.nx_graph.nodes())
print("Walk Iteration:")
for i in range(num_walks):
print(i + 1, "/", num_walks)
            # Shuffle the node order
random.shuffle(nodes)
for u in nodes:
walks.append(self.generate_node2vec_walk(u, walk_length))
return walks
graph = Graph(nx_graph, p=1, q=1)
graph.preprocess_transition_probs()
###Output
node 2 unnormalized_probs: [1, 1, 1, 1, 1, 1, 1, 1, 1]
node 2 norm_const: 9
node 2 normalized_probs: [0.1111111111111111, 0.1111111111111111, 0.1111111111111111, 0.1111111111111111, 0.1111111111111111, 0.1111111111111111, 0.1111111111111111, 0.1111111111111111, 0.1111111111111111]
node 2 alias_node: (array([1., 1., 1., 1., 1., 1., 1., 1., 1.]), array([0, 0, 0, 0, 0, 0, 0, 0, 0]))
edge 2->3 alias_edge: (array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]), array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]))
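###Markdown
Alias sampling can be checked empirically: drawing many samples should reproduce the input probabilities. A small sketch using the `alias_setup`/`alias_draw` helpers defined above (the target distribution is arbitrary):
###Code
# Empirical check that alias sampling matches the target distribution
target = [0.1, 0.2, 0.3, 0.4]
probs, alias = alias_setup(target)
draws = [alias_draw(probs, alias) for _ in range(100000)]
freq = np.bincount(draws, minlength=len(target)) / len(draws)
print('target    :', target)
print('empirical :', np.round(freq, 3))
###Output
_____no_output_____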
###Markdown
Generate the node sequences
###Code
walks = graph.generate_node2vec_walks(num_walks=10, walk_length=80)
print("Walk Length:", len(walks[0]))
###Output
Walk Iteration:
1 / 10
2 / 10
3 / 10
4 / 10
5 / 10
6 / 10
7 / 10
8 / 10
9 / 10
10 / 10
Walk Length: 80
###Markdown
Train the model with Word2Vec
###Code
def learn_word2vec_embedding(walks):
"""
    Learn embeddings from the node sequences
"""
    # Convert nodes from int to string
sentences = []
for walk in walks:
sentences.append([str(x) for x in walk])
model = Word2Vec(sentences, size=128, window=10, min_count=0, sg=1, workers=4, iter=1)
model.wv.save_word2vec_format(OUTPUT_PATH)
return model
model = learn_word2vec_embedding(walks)
print("Learning Finish.")
###Output
Learning Finish.
###Markdown
Show the results
###Code
print("Node 17 Embedding:")
print(model.wv["17"])
print("Node 17 and 6 Similarity:", model.wv.similarity("17", "6"))
print("Node 17 and 25 Similarity:", model.wv.similarity("17", "25"))
###Output
Node 17 and 6 Similarity: 0.9986441
Node 17 and 25 Similarity: 0.76830983
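###Markdown
A two-dimensional view of the learned embeddings can help sanity-check the result — a sketch assuming scikit-learn is available; node names are taken from `nx_graph` to avoid depending on a particular gensim vocabulary API.
###Code
from sklearn.decomposition import PCA

# Project the 128-dimensional embeddings down to 2-D and label each point
nodes = [str(n) for n in nx_graph.nodes()]
vectors = np.array([model.wv[n] for n in nodes])
coords = PCA(n_components=2).fit_transform(vectors)
plt.scatter(coords[:, 0], coords[:, 1])
for name, (x, y) in zip(nodes, coords):
    plt.annotate(name, (x, y))
plt.show()
###Output
_____no_output_____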
|
2021-07-29-data-preprocessing.ipynb | ###Markdown
Data preprocessing This notebook is based on https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling.
###Code
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_regression
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
###Output
_____no_output_____
###Markdown
We create synthetic data from a Gaussian distribution:
###Code
X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=1)
plt.subplot(211)
plt.hist(X[:, 0])
plt.subplot(212)
plt.hist(X[:, 1]);
plt.tight_layout(pad=0.1, h_pad=1)
plt.hist(y);
%config InlineBackend.figure_format = "retina"
###Output
_____no_output_____
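###Markdown
Before choosing a scaler it helps to look at the raw scales — a quick sketch printing means and standard deviations of the inputs and the target:
###Code
# The inputs are roughly standard normal already, while the target is much wider
print('X mean (first 3 features):', X.mean(axis=0)[:3])
print('X std  (first 3 features):', X.std(axis=0)[:3])
print('y mean: {:.1f}, y std: {:.1f}'.format(y.mean(), y.std()))
###Output
_____no_output_____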
###Markdown
Multilayer perceptron with unpreprocessed data
###Code
n_split = 500
train_x, test_x = X[:n_split], X[n_split:]
train_y, test_y = y[:n_split], y[n_split:]
###Output
_____no_output_____
###Markdown
The model is a multilayer perceptron with one hidden layer:
###Code
model = Sequential()
model.add(Dense(25, input_dim=20, activation="relu", kernel_initializer="he_uniform"))
model.add(Dense(1, activation="linear"))
model.compile(loss="mse", optimizer=SGD(lr=0.01, momentum=0.9))
###Output
_____no_output_____
###Markdown
Train the model:
###Code
history = model.fit(train_x, train_y, validation_data=(test_x, test_y), epochs=100, verbose=0)
train_mse = model.evaluate(train_x, train_y, verbose=0)
test_mse = model.evaluate(test_x, test_y, verbose=0)
print("Train MSE: {:.3f}, test MSE: {:.3f}".format(train_mse, test_mse))
plt.figure()
plt.plot(history.history["loss"], label="Train")
plt.plot(history.history["val_loss"], label="Test");
###Output
_____no_output_____
###Markdown
What happened? The parameters of the model have exploded during the training:
###Code
model.trainable_variables[0]
###Output
_____no_output_____
###Markdown
Multilayer perceptron with scaled output variable
###Code
train_y = np.reshape(train_y, (-1, 1))
test_y = np.reshape(test_y, (-1, 1))
###Output
_____no_output_____
###Markdown
Now we can apply standardization to the data:
###Code
scaler = StandardScaler()
scaler.fit(train_y)
train_y = scaler.transform(train_y)
test_y = scaler.transform(test_y)
model = Sequential([
Dense(25, input_dim=20, activation="relu", kernel_initializer="he_uniform"),
Dense(1, activation="linear"),
])
model.compile(loss="mse", optimizer=SGD(lr=0.01, momentum=0.9))
history = model.fit(train_x, train_y, validation_data=(test_x, test_y), epochs=10000, verbose=0)
train_mse = model.evaluate(train_x, train_y, verbose=0)
test_mse = model.evaluate(test_x, test_y, verbose=0)
print("Train MSE: {:.3f}, test MSE: {:.3f}".format(train_mse, test_mse))
plt.figure()
plt.semilogy(history.history["loss"], "-", label="Train loss")
plt.semilogy(history.history["val_loss"], "--", label="Test loss")
plt.legend(loc="upper right")
plt.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Multilayer perceptron with scaled input variables
###Code
def get_dataset(input_scaler, output_scaler):
X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=1)
n_split = 500
train_x, test_x = X[:n_split], X[n_split:]
train_y, test_y = y[:n_split], y[n_split:]
if input_scaler is not None:
input_scaler.fit(train_x)
train_x = input_scaler.transform(train_x)
test_x = input_scaler.transform(test_x)
if output_scaler is not None:
train_y = np.reshape(train_y, (-1, 1))
test_y = np.reshape(test_y, (-1, 1))
output_scaler.fit(train_y)
train_y = output_scaler.transform(train_y)
test_y = output_scaler.transform(test_y)
return train_x, train_y, test_x, test_y
def evaluate_model(train_x, train_y, test_x, test_y):
model = Sequential([
Dense(25, input_dim=20, activation="relu", kernel_initializer="he_uniform"),
Dense(1, activation="linear"),
])
model.compile(loss="mse", optimizer=SGD(lr=0.01, momentum=0.9))
history = model.fit(train_x, train_y, validation_data=(test_x, test_y), epochs=100, verbose=0)
train_mse = model.evaluate(train_x, train_y, verbose=0)
test_mse = model.evaluate(test_x, test_y, verbose=0)
return history, train_mse, test_mse
###Output
_____no_output_____
###Markdown
We need to repeat model training multiple times to average over randomness:
###Code
def repeat_model_evaluation(input_scaler, output_scaler, n_repeat=30):
train_x, train_y, test_x, test_y = get_dataset(input_scaler, output_scaler)
results = []
for i in range(n_repeat):
__, __, test_mse = evaluate_model(train_x, train_y , test_x, test_y)
print(f"Run {i:2d}, test_mse = {test_mse:.3f}")
results.append(test_mse)
return results
###Output
_____no_output_____
###Markdown
Now we run simulations with the following configurations of the transformations:- no transformation for input variables, standardization for output variables;- normalization for input variables, standardization for output variables;- standardization for input variables, standardization for output variables.
###Code
results_1 = repeat_model_evaluation(None, StandardScaler())
results_2 = repeat_model_evaluation(MinMaxScaler(), StandardScaler())
results_3 = repeat_model_evaluation(StandardScaler(), StandardScaler())
print("Unscaled inputs, test mse {:.3f} \pm {:.3f}".format(np.mean(results_1), np.std(results_1)))
print("Normalized inputs, test mse {:.3f} \pm {:.3f}".format(np.mean(results_2), np.std(results_2)))
print("Standardized inputs, test mse {:.3f} \pm {:.3f}".format(np.mean(results_3), np.std(results_3)))
###Output
Unscaled inputs, test mse 0.007 \pm 0.005
Normalized inputs, test mse 0.000 \pm 0.000
Standardized inputs, test mse 0.006 \pm 0.003
###Markdown
Let's plot to see visually the differences:
###Code
results = [results_1, results_2, results_3]
labels = ["unscaled", "normalized", "standardized"]
plt.figure()
plt.boxplot(results, labels=labels)
plt.tight_layout(pad=0.1)
np.mean(results_2), np.std(results_2)
###Output
_____no_output_____ |
examples/examples_features.ipynb | ###Markdown
Examples of feature set usage:
###Code
import os,sys
sys.path.append(os.path.abspath(".."))
from features_set import features_set
import pandas as pd
###Output
_____no_output_____
###Markdown
Structure of a module  Binary classes
###Code
# set up the parameters
parameters = {
'feature_path': "../data/features/features.xlsx", # path to csv/xls file with features
'outcome_path': "../data/features/extended_clinical_df.xlsx", #path to csv/xls file with outcome
'patient_column': 'Patient', # name of column with patient id
'patient_in_outcome_column': 'PatientID', # name of column with patient id in clinical data file
'outcome_column': '1yearsurvival' # name of outcome column
}
# initialize feature set
fs = features_set(**parameters)
# excluding patients with unknown outcome (in case they are represented)
fs.handle_nan(axis=0)
fs._feature_outcome_dataframe.head(5)
# visualization of feature values distribution in classes (in .html report)
fs.plot_distribution(fs._feature_column[:100])
###Output
_____no_output_____
###Markdown
Example of plotted distributions of feature values in classes:
###Code
# visualization of feature mutual (Spearman) correlation coefficient matrix (in .html report)
fs.plot_correlation_matrix(fs._feature_column[:100])
###Output
_____no_output_____
###Markdown
Example of feature correlation matrix:
###Code
# visualization of Mann-Whitney Bonferroni corrected p-values for binary classes test (in .html report)
fs.plot_MW_p(fs._feature_column[:100])
###Output
_____no_output_____
###Markdown
Example of Mann-Whitney p-values:
###Code
# visualization of univariate ROC-curves (in .html report)
fs.plot_univariate_roc(fs._feature_column[:100])
###Output
_____no_output_____
###Markdown
Example of univariate feature ROC:
###Code
# calculation of basic statistics for each feature (in .xlsx):
# number of NaN, mean, std, min, max; if applicable: MW-p, univariate ROC AUC, volume correlation
fs.calculate_basic_stats(volume_feature='original_shape_VoxelVolume')
# checking the excel table
print('Basic statistics for each feature')
pd.read_excel('../data/features/features_basic_stats.xlsx')
# volume analysis
fs.volume_analysis(volume_feature='original_shape_VoxelVolume')
###Output
_____no_output_____
###Markdown
Example of volume precision-recall curve: Example of volume Spearman correlation coefficients: Multi-class
###Code
parameters = {
'feature_path': "../data/features/features.xlsx", # path to csv/xls file with features
'outcome_path': "../data/features/extended_clinical_df.xlsx", #path to csv/xls file with outcome
'patient_column': 'Patient', # name of column with patient id
'patient_in_outcome_column': 'PatientID', # name of column with patient id in clinical data file
'outcome_column': 'Overall.Stage' # name of outcome column
}
fs = features_set(**parameters)
fs._feature_outcome_dataframe[fs._feature_outcome_dataframe['Overall.Stage'].isnull()]
fs.handle_nan(axis=0)
fs.plot_distribution(fs._feature_column[:100])
fs.plot_distribution(fs._feature_column[:100], ['I', 'IIIb'])
fs.plot_correlation_matrix(fs._feature_column[:100])
fs.plot_MW_p(fs._feature_column[:100], ['I', 'IIIb'])
fs.plot_univariate_roc(fs._feature_column[:100], ['I', 'IIIb'])
fs.calculate_basic_stats(volume_feature='original_shape_VoxelVolume')
fs.volume_analysis(volume_feature='original_shape_VoxelVolume')
print('Basic statistics for each feature')
pd.read_excel('../data/features/features_basic_stats.xlsx')
###Output
_____no_output_____ |
tutorials/summer2017_nctu/4-3.lane_filter_drive.ipynb | ###Markdown
4-3 Lane Filter and Car Commands 4-3-1 Line Detection 4-3-2 Lane Filter 4-3-3 Car Commands 4-3-1 Line Detection Import Packages and Load the image and resize
###Code
import cv2
import numpy as np
from matplotlib import pyplot as plt
#Use your own image
img = cv2.imread("images/curve-right.jpg")
image_cv = cv2.resize(img, (160, 120),interpolation=cv2.INTER_NEAREST)
dst1 = cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
plt.subplot(121),plt.imshow(dst1,cmap = 'brg')
plt.title('Original Image'), plt.xticks([]), plt.yticks([])
dst2 = cv2.cvtColor(image_cv,cv2.COLOR_BGR2RGB)
plt.subplot(122),plt.imshow(dst2,cmap = 'brg')
plt.title('Resized Image'), plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
Find the Edges You should find the config file 'universal.yaml'
###Code
gray = cv2.cvtColor(image_cv,cv2.COLOR_BGR2GRAY)
edges=cv2.Canny(gray,100,350)
plt.imshow(edges,cmap = 'gray')
plt.title('Edge Image'), plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
Setup HSV Space Threshold You should find the config file 'universal.yaml'
###Code
hsv_white1 = np.array([0,0,150])
hsv_white2 = np.array([180,100,255])
hsv_yellow1 = np.array([25,50,50])
hsv_yellow2 = np.array([45,255,255])
hsv_red1 = np.array([0,100,100])
hsv_red2 = np.array([15,255,255])
hsv_red3 = np.array([165,100,100])
hsv_red4 = np.array([180,255,255])
###Output
_____no_output_____
###Markdown
Threshold colors in HSV space
###Code
hsv = cv2.cvtColor(image_cv,cv2.COLOR_BGR2HSV)
white = cv2.inRange(hsv,hsv_white1,hsv_white2)
yellow = cv2.inRange(hsv,hsv_yellow1,hsv_yellow2)
red1 = cv2.inRange(hsv,hsv_red1,hsv_red2)
red2 = cv2.inRange(hsv,hsv_red3,hsv_red4)
red = cv2.bitwise_or(red1,red2)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(3, 3))
white = cv2.dilate(white, kernel)
yellow = cv2.dilate(yellow, kernel)
red = cv2.dilate(red, kernel)
# Uncomment '#' to plot with color
x = cv2.cvtColor(yellow, cv2.COLOR_GRAY2BGR)
x[:,:,2] *= 1
x[:,:,1] *= 1
x[:,:,0] *= 0
x = cv2.cvtColor(x, cv2.COLOR_BGR2RGB)
y = cv2.cvtColor(red, cv2.COLOR_GRAY2BGR)
y[:,:,2] *= 1
y[:,:,1] *= 0
y[:,:,0] *= 0
y = cv2.cvtColor(y, cv2.COLOR_BGR2RGB)
plt.subplot(131),plt.imshow(white,cmap = 'gray')
plt.title('White'), plt.xticks([]), plt.yticks([])
plt.subplot(132),plt.imshow(yellow,cmap = 'gray')
plt.subplot(132),plt.imshow(x,cmap = 'brg')
plt.title('Yellow'), plt.xticks([]), plt.yticks([])
plt.subplot(133),plt.imshow(red,cmap = 'gray')
plt.subplot(133),plt.imshow(y,cmap = 'brg')
plt.title('Red'), plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
Combine Edge and Colors
###Code
edge_color_white=cv2.bitwise_and(edges,white)
edge_color_yellow=cv2.bitwise_and(edges,yellow)
edge_color_red=cv2.bitwise_and(edges,red)
plt.imshow(edge_color_yellow,cmap = 'gray')
plt.title('Edge Color Y'), plt.xticks([]), plt.yticks([])
plt.subplot(131),plt.imshow(edge_color_white,cmap = 'gray')
plt.title('Edge Color W'), plt.xticks([]), plt.yticks([])
plt.subplot(132),plt.imshow(edge_color_yellow,cmap = 'gray')
plt.title('Edge Color Y'), plt.xticks([]), plt.yticks([])
plt.subplot(133),plt.imshow(edge_color_red,cmap = 'gray')
plt.title('Edge Color R'), plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
Find the lines
###Code
lines_white = cv2.HoughLinesP(edge_color_white,1,np.pi/180,10,np.empty(1),1.5,1)
lines_yellow = cv2.HoughLinesP(edge_color_yellow,1,np.pi/180,10,np.empty(1),1.5,1)
lines_red = cv2.HoughLinesP(edge_color_red,1,np.pi/180,10,np.empty(1),1.5,1)
color = "yellow"
lines = lines_yellow
bw = yellow
if lines is not None:
lines = np.array(lines[0])
print "found lines"
else:
lines = []
print "no lines"
###Output
found lines
###Markdown
Show the lines (yellow)
###Code
image_with_lines = np.copy(dst2)
if len(lines)>0:
for x1,y1,x2,y2 in lines:
cv2.line(image_with_lines, (x1,y1), (x2,y2), (0,0,255), 2)
cv2.circle(image_with_lines, (x1,y1), 2, (0,255,0))
cv2.circle(image_with_lines, (x2,y2), 2, (255,0,0))
plt.imshow(image_with_lines,cmap = 'brg')
plt.title('Line Image'), plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
Find normals
###Code
arr_cutoff = np.array((0, 40, 0, 40))
arr_ratio = np.array((1./160, 1./120, 1./160, 1./120))
normals = []
centers = []
if len(lines)>0:
#find the normalized coordinates
lines_normalized = ((lines + arr_cutoff) * arr_ratio)
length = np.sum((lines[:,0:2]-lines[:,2:4])**2,axis=1,keepdims=True)**0.5
dx = 1.*(lines[:,3:4]-lines[:,1:2])/length
dy = 1.*(lines[:,0:1]-lines[:,2:3])/length
centers = np.hstack([(lines[:,0:1]+lines[:,2:3])/2,(lines[:,1:2]+lines[:,3:4])/2])
#find the vectors' direction
x3 = (centers[:,0:1] - 3.*dx).astype('int')
x3[x3<0]=0
x3[x3>=160]=160-1
y3 = (centers[:,1:2] - 3.*dy).astype('int')
y3[y3<0]=0
y3[y3>=120]=120-1
x4 = (centers[:,0:1] + 3.*dx).astype('int')
x4[x4<0]=0
x4[x4>=160]=160-1
y4 = (centers[:,1:2] + 3.*dy).astype('int')
y4[y4<0]=0
y4[y4>=120]=120-1
flag_signs = (np.logical_and(bw[y3,x3]>0,bw[y4,x4]==0)).astype('int')*2-1
normals = np.hstack([dx, dy]) * flag_signs
flag = ((lines[:,2]-lines[:,0])*normals[:,1] - (lines[:,3]-lines[:,1])*normals[:,0])>0
for i in range(len(lines)):
if flag[i]:
x1,y1,x2,y2 = lines[i, :]
lines[i, :] = [x2,y2,x1,y1]
###Output
_____no_output_____
###Markdown
Draw the normals
###Code
image_with_lines = np.copy(dst2)
if len(centers)>0:
for x,y,dx,dy in np.hstack((centers,normals)):
x3 = int(x - 2.*dx)
y3 = int(y - 2.*dy)
x4 = int(x + 2.*dx)
y4 = int(y + 2.*dy)
cv2.line(image_with_lines, (x3,y3), (x4,y4), (0,0,255), 1)
cv2.circle(image_with_lines, (x3,y3), 1, (0,255,0))
cv2.circle(image_with_lines, (x4,y4), 1, (255,0,0))
plt.subplot(121),plt.imshow(image_with_lines,cmap = 'brg')
plt.title('Line Normals'), plt.xticks([]), plt.yticks([])
image_with_lines = np.copy(dst2)
if len(lines)>0:
for x1,y1,x2,y2 in lines:
cv2.line(image_with_lines, (x1,y1), (x2,y2), (0,0,255), 2)
cv2.circle(image_with_lines, (x1,y1), 2, (0,255,0))
cv2.circle(image_with_lines, (x2,y2), 2, (255,0,0))
plt.subplot(122),plt.imshow(image_with_lines,cmap = 'brg')
plt.title('Line Image'), plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
4-3-2 Lane Filter Import packages
###Code
import numpy as np
from scipy.stats import multivariate_normal, entropy
from scipy.ndimage.filters import gaussian_filter
from math import floor, atan2, pi, cos, sin, sqrt
import time
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
Environment Setup
###Code
# constant
WHITE = 0
YELLOW = 1
RED = 2
lanewidth = 0.4
linewidth_white = 0.04
linewidth_yellow = 0.02
###Output
_____no_output_____
###Markdown
Generate Vote from Line Segments Setup a line segment: * left edge of white lane * right edge of white lane * left edge of yellow lane * right edge of yellow lane
###Code
# right edge of white lane
#p1 = np.array([0.8, 0.24])
#p2 = np.array([0.4, 0.24])
p1 = np.array([lines_normalized[0][0],lines_normalized[0][1]])
p2 = np.array([lines_normalized[0][2],lines_normalized[0][3]])
seg_color = YELLOW
# left edge of white lane
#p1 = np.array([0.4, 0.2])
#p2 = np.array([0.8, 0.2])
#seg_color = WHITE
#plt.plot([p1[0], p2[0]], [p1[1], p2[1]], 'ro')
#plt.plot([p1[0], p2[0]], [p1[1], p2[1]])
#plt.ylabel('y')
#plt.axis([0, 5, 0, 5])
#plt.show()
###Output
_____no_output_____
###Markdown
compute d_i, phi_i, l_i
###Code
t_hat = (p2-p1)/np.linalg.norm(p2-p1)
n_hat = np.array([-t_hat[1],t_hat[0]])
d1 = np.inner(n_hat,p1)
d2 = np.inner(n_hat,p2)
l1 = np.inner(t_hat,p1)
l2 = np.inner(t_hat,p2)
print (d1, d2, l1, l2)
if (l1 < 0):
l1 = -l1;
if (l2 < 0):
l2 = -l2;
l_i = (l1+l2)/2
d_i = (d1+d2)/2
phi_i = np.arcsin(t_hat[1])
if seg_color == WHITE: # right lane is white
if(p1[0] > p2[0]): # right edge of white lane
d_i = d_i - linewidth_white
print ('right edge of white lane')
else: # left edge of white lane
d_i = - d_i
phi_i = -phi_i
print ('left edge of white lane')
d_i = d_i - lanewidth/2
elif seg_color == YELLOW: # left lane is yellow
if (p2[0] > p1[0]): # left edge of yellow lane
d_i = d_i - linewidth_yellow
phi_i = -phi_i
print ('right edge of yellow lane')
else: # right edge of white lane
d_i = -d_i
print ('right edge of yellow lane')
d_i = lanewidth/2 - d_i
print (d_i, phi_i, l_i)
###Output
(0.87462972586141063, 0.87462972586141063, -0.25675984969884147, -0.19137503792186139)
right edge of yellow lane
(-0.65462972586141066, 0.53495507378609641, 0.22406744381035143)
###Markdown
Measurement Likelihood
###Code
# initialize measurement likelihood
d_min = -0.7
d_max = 0.5
delta_d = 0.02
phi_min = -pi/2
phi_max = pi/2
delta_phi = 0.02
d, phi = np.mgrid[d_min:d_max:delta_d, phi_min:phi_max:delta_phi]
measurement_likelihood = np.zeros(d.shape)
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(measurement_likelihood, interpolation='nearest')
fig.colorbar(cax)
plt.ylabel('phi')
plt.xlabel('d')
#ax.set_xticklabels(['']+alpha)
#ax.set_yticklabels(['']+alpha)
plt.show()
i = floor((d_i - d_min)/delta_d)
j = floor((phi_i - phi_min)/delta_phi)
measurement_likelihood[i,j] = measurement_likelihood[i,j] + 1/(l_i)
print (i, j)
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(measurement_likelihood, interpolation='nearest')
fig.colorbar(cax)
plt.ylabel('phi')
plt.xlabel('d')
#ax.set_xticklabels(['']+alpha)
#ax.set_yticklabels(['']+alpha)
plt.show()
###Output
/home/robotvision/anaconda2/lib/python2.7/site-packages/ipykernel/__main__.py:3: DeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
app.launch_new_instance()
###Markdown
4-3-3 Car Command Import Packages
###Code
import numpy as np
import scipy as sp
import cv2
import time
from matplotlib import pyplot as plt
%matplotlib inline
# set display defaults
plt.rcParams['figure.figsize'] = (10, 10) # large images
plt.rcParams['image.interpolation'] = 'nearest' # don't interpolate: show square pixels
###Output
_____no_output_____
###Markdown
ROS Setup
###Code
import sys
# rospy
sys.path.insert(0, '/opt/ros/indigo/lib/python2.7/dist-packages')
# rospkg
sys.path.insert(0, '/usr/lib/python2.7/dist-packages/')
# duckietown_msgs
duckietown_root = '../../' # this file should be run from {duckietown_root}/tutorials/python (otherwise change this line)
sys.path.insert(0, duckietown_root + 'catkin_ws/devel/lib/python2.7/dist-packages')
import rospy
from duckietown_msgs.msg import Twist2DStamped
###Output
_____no_output_____
###Markdown
Take a look at the ROS Topics
###Code
%%bash
rostopic list
###Output
ERROR: Unable to communicate with master!
###Markdown
Initialize a ROS Node * modify "car13" to your duckiebot's name
###Code
rospy.init_node("joystick_jupyter",anonymous=False)
#please replace "car13" with your duckiebot name
pub_car_cmd = rospy.Publisher("/car13/joystick_jupyter/car_cmd",Twist2DStamped,queue_size=1)
###Output
_____no_output_____
###Markdown
Define a function for publishing car command
###Code
def car_command(v, omega, duration):
# Send stop command
car_control_msg = Twist2DStamped()
car_control_msg.v = v
car_control_msg.omega = omega
pub_car_cmd.publish(car_control_msg)
rospy.sleep(duration)
#rospy.loginfo("Shutdown")
car_control_msg.v = 0.0
car_control_msg.omega = 0.0
pub_car_cmd.publish(car_control_msg)
###Output
_____no_output_____
###Markdown
Forward (F), Turn Left (L), or Turn Right (R) Send commands and calibrate your duckiebot Ex1: Forward 0.5 Tile Width
###Code
car_command(0.5, 0, 0.75)
###Output
_____no_output_____
###Markdown
EX2: Turn 45 or 90 Degrees
###Code
car_command(0.2, 4, 1.005)
###Output
_____no_output_____
###Markdown
Setup a Switch for concat of primitives
###Code
class switch(object):
def __init__(self, value):
self.value = value
self.fall = False
def __iter__(self):
"""Return the match method once, then stop"""
yield self.match
raise StopIteration
def match(self, *args):
"""Indicate whether or not to enter a case suite"""
if self.fall or not args:
return True
elif self.value in args: # changed for v1.5, see below
self.fall = True
return True
else:
return False
def motion_concat(concat):
for i in range(len(concat)):
primitives = concat[i]
for case in switch(primitives):
if case('S'):
car_command(0.5, 0, 0.25)
break
if case('L'):
car_command(0.2, 4, 0.82)
break
if case('R'):
car_command(0.2, -4, 0.78)
break
if case('B'):
car_command(-0.4, 0, 0.5)
break
###Output
_____no_output_____
###Markdown
Ex3: Overtaking
###Code
overtaking = "LSRSSSSRSLSS"
motion_concat(overtaking)
###Output
_____no_output_____ |
lectures/notes/python_notes/07a_ode.ipynb | ###Markdown
ODE We will solve the following linear Cauchy problem\begin{align}y^{\prime}(t) &= \lambda y(t)\\y(0) & = 1\end{align}whose exact solution is$$y(t) = e^{\lambda t}$$
###Code
%matplotlib inline
from numpy import *
from matplotlib.pyplot import *
import scipy.linalg
import numpy.linalg
l = -5.
t0 = 0.
tf = 10.
y0 = 1.
s = linspace(t0,tf,5000)
exact = lambda x: exp(l*x)
###Output
_____no_output_____
###Markdown
Forward Euler$$\frac{y_{n}-y_{n-1}}{h} = f(y_{n-1}, t_{n-1})$$
###Code
def fe(l,y0,t0,tf,h):
timesteps = arange(t0,tf+1e-10, h)
sol = zeros_like(timesteps)
sol[0] = y0
for i in range(1,len(sol)):
sol[i] = sol[i-1]*(1+l*h)
return sol, timesteps
y, t = fe(l,y0,t0,tf,0.1)
_ = plot(t,y, 'o-')
_ = plot(s,exact(s))
error = numpy.linalg.norm(exact(t) - y, 2)
print(error)
###Output
0.211605395525
###Markdown
Backward Euler$$\frac{y_{n}-y_{n-1}}{h} = f(y_{n}, t_{n})$$
###Code
def be(l,y0,t0,tf,h):
pass # TODO
###Output
_____no_output_____
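###Markdown
The `be` stub above is left as a TODO; a minimal sketch (one possible implementation for this linear problem, mirroring `fe` above) solves the implicit step in closed form, $y_n = y_{n-1}/(1-\lambda h)$:
###Code
def be_sketch(l, y0, t0, tf, h):
    timesteps = arange(t0, tf + 1e-10, h)
    sol = zeros_like(timesteps)
    sol[0] = y0
    for i in range(1, len(sol)):
        # implicit Euler step, solved analytically for y' = l*y
        sol[i] = sol[i - 1] / (1 - l * h)
    return sol, timesteps
y, t = be_sketch(l, y0, t0, tf, 0.1)
_ = plot(t, y, 'o-')
_ = plot(s, exact(s))
###Output
_____no_output_____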
###Markdown
$\theta$-method$$\frac{y_{n}-y_{n-1}}{h} = \theta\, f(y_{n}, t_{n}) + (1-\theta)\,f(y_{n-1}, t_{n-1})$$
###Code
def tm(theta,l,y0,t0,tf,h):
pass # TODO
###Output
_____no_output_____
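###Markdown
Likewise, a hedged sketch of the $\theta$-method for this linear problem ($\theta=0$ gives forward Euler, $\theta=1$ backward Euler, $\theta=1/2$ Crank-Nicolson), using the same `(sol, timesteps)` interface that the adaptive stepper below expects from `tm`:
###Code
def tm_sketch(theta, l, y0, t0, tf, h):
    timesteps = arange(t0, tf + 1e-10, h)
    sol = zeros_like(timesteps)
    sol[0] = y0
    for i in range(1, len(sol)):
        # (y_n - y_{n-1})/h = theta*l*y_n + (1-theta)*l*y_{n-1}, solved for y_n
        sol[i] = sol[i - 1] * (1 + (1 - theta) * l * h) / (1 - theta * l * h)
    return sol, timesteps
###Output
_____no_output_____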
###Markdown
Simple adaptive time stepper For each time step: - Compute the solution with CN - Compute the solution with BE - Check the difference - If the difference satisfies a given tolerance: keep the higher-order solution, double the step size, and go to the next step - Else: halve the step size and repeat the time step
###Code
def adaptive(l,y0,t0,tf,h0, hmax=0.9,tol=1e-3):
sol = []
sol.append(y0)
t = []
t.append(t0)
h = h0
while t[-1] < tf:
#print 'current t =', t[-1], ' h=', h
current_sol = sol[-1]
current_t = t[-1]
sol_cn, _ = tm(0.5,l,current_sol,current_t, current_t + h, h)
sol_be, _ = tm(1.,l,current_sol,current_t, current_t + h, h)
if (abs(sol_cn[-1] - sol_be[-1]) < tol): #accept
sol.append(sol_cn[-1])
t.append(current_t+h)
h *= 2.
if h > hmax:
h=hmax
else:
h /= 2.
return sol, t
y,t = adaptive(l,y0,t0,tf,0.9)
_ = plot(t,y, 'o-')
_ = plot(s,exact(array(s)))
error = numpy.linalg.norm(exact(array(t)) - y, infty)
print error, len(y)
###Output
0.000817298421905 74
|
code/algorithms/course_udemy_1/Linked Lists/Problems - PRACTICE/Implement a Doubly Linked List.ipynb | ###Markdown
Implement a Doubly Linked List For this interview problem, implement a node class and show how it can be used to create a doubly linked list.
###Code
class Node(object):
def __init__(self, value):
self.prev = None
self.val = value
self.next = None
pass
# Create a Doubly Linked List here
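# A minimal sketch (one possible approach, not the only solution): wire three
# nodes together with prev/next references using the Node class defined above.
head = Node(1)
second = Node(2)
tail = Node(3)
head.next = second
second.prev = head
second.next = tail
tail.prev = second
# Walk forward from the head collecting values (gives [1, 2, 3]),
# then backward from the tail (gives [3, 2, 1]).
node, forward_vals = head, []
while node is not None:
    forward_vals.append(node.val)
    node = node.next
node, backward_vals = tail, []
while node is not None:
    backward_vals.append(node.val)
    node = node.prev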
###Output
_____no_output_____ |
week-5/submission/EVA5_WK_5_Base.ipynb | ###Markdown
Import Libraries
###Code
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
###Output
_____no_output_____
###Markdown
Data Transformations We first start with defining our data transformations. We need to think about what our data is and how we can augment it to correctly represent images it might not see otherwise.
###Code
# Train Phase transformations
train_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)) # The mean and std have to be sequences (e.g., tuples), therefore you should add a comma after the values.
# Note the difference between (0.1307) and (0.1307,)
])
# Test Phase transformations
test_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
###Output
_____no_output_____
###Markdown
Dataset and Creating Train/Test Split
###Code
train = datasets.MNIST('./data', train=True, download=True, transform=train_transforms)
test = datasets.MNIST('./data', train=False, download=True, transform=test_transforms)
###Output
_____no_output_____
###Markdown
Dataloader Arguments & Test/Train Dataloaders
###Code
SEED = 1
# CUDA?
cuda = torch.cuda.is_available()
print("CUDA Available?", cuda)
# For reproducibility
torch.manual_seed(SEED)
if cuda:
torch.cuda.manual_seed(SEED)
# dataloader arguments - something you'll fetch these from cmdprmt
dataloader_args = dict(shuffle=True, batch_size=128, num_workers=4, pin_memory=True) if cuda else dict(shuffle=True, batch_size=64)
# train dataloader
train_loader = torch.utils.data.DataLoader(train, **dataloader_args)
# test dataloader
test_loader = torch.utils.data.DataLoader(test, **dataloader_args)
###Output
CUDA Available? True
###Markdown
Data Statistics It is important to know your data very well. Let's check some of the statistics around our data and what it actually looks like
###Code
# We'd need to convert it into Numpy! Remember above we have converted it into tensors already
train_data = train.train_data
train_data = train.transform(train_data.numpy())
print('[Train]')
print(' - Numpy Shape:', train.train_data.cpu().numpy().shape)
print(' - Tensor Shape:', train.train_data.size())
print(' - min:', torch.min(train_data))
print(' - max:', torch.max(train_data))
print(' - mean:', torch.mean(train_data))
print(' - std:', torch.std(train_data))
print(' - var:', torch.var(train_data))
dataiter = iter(train_loader)
images, labels = dataiter.next()
print(images.shape)
print(labels.shape)
# Let's visualize some of the images
%matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(images[0].numpy().squeeze(), cmap='gray_r')
###Output
/usr/local/lib/python3.6/dist-packages/torchvision/datasets/mnist.py:55: UserWarning: train_data has been renamed data
warnings.warn("train_data has been renamed data")
###Markdown
MORE It is important that we view as many images as possible. This is required to get some idea about image augmentation later on
###Code
figure = plt.figure()
num_of_images = 60
for index in range(1, num_of_images + 1):
plt.subplot(6, 10, index)
plt.axis('off')
plt.imshow(images[index].numpy().squeeze(), cmap='gray_r')
###Output
_____no_output_____
###Markdown
The model Let's start with the model we first saw
###Code
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# Input Block
self.convblock1 = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=32, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU()
) # output_size = 26
# CONVOLUTION BLOCK 1
self.convblock2 = nn.Sequential(
nn.Conv2d(in_channels=32, out_channels=64, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU()
) # output_size = 24
self.convblock3 = nn.Sequential(
nn.Conv2d(in_channels=64, out_channels=128, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU()
) # output_size = 22
# TRANSITION BLOCK 1
self.pool1 = nn.MaxPool2d(2, 2) # output_size = 11
self.convblock4 = nn.Sequential(
nn.Conv2d(in_channels=128, out_channels=32, kernel_size=(1, 1), padding=0, bias=False),
nn.ReLU()
) # output_size = 11
# CONVOLUTION BLOCK 2
self.convblock5 = nn.Sequential(
nn.Conv2d(in_channels=32, out_channels=64, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU()
) # output_size = 9
self.convblock6 = nn.Sequential(
nn.Conv2d(in_channels=64, out_channels=128, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU()
) # output_size = 7
# OUTPUT BLOCK
self.convblock7 = nn.Sequential(
nn.Conv2d(in_channels=128, out_channels=10, kernel_size=(1, 1), padding=0, bias=False),
nn.ReLU()
) # output_size = 7
self.convblock8 = nn.Sequential(
nn.Conv2d(in_channels=10, out_channels=10, kernel_size=(7, 7), padding=0, bias=False),
# nn.ReLU() NEVER!
) # output_size = 1
def forward(self, x):
x = self.convblock1(x)
x = self.convblock2(x)
x = self.convblock3(x)
x = self.pool1(x)
x = self.convblock4(x)
x = self.convblock5(x)
x = self.convblock6(x)
x = self.convblock7(x)
x = self.convblock8(x)
x = x.view(-1, 10)
return F.log_softmax(x, dim=-1)
###Output
_____no_output_____
###Markdown
Model Params Can't emphasize enough how important viewing the Model Summary is. Unfortunately, there is no in-built model visualizer, so we have to take external help
###Code
!pip install torchsummary
from torchsummary import summary
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
print(device)
model = Net().to(device)
summary(model, input_size=(1, 28, 28))
###Output
Requirement already satisfied: torchsummary in /usr/local/lib/python3.6/dist-packages (1.5.1)
cuda
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 32, 26, 26] 288
ReLU-2 [-1, 32, 26, 26] 0
Conv2d-3 [-1, 64, 24, 24] 18,432
ReLU-4 [-1, 64, 24, 24] 0
Conv2d-5 [-1, 128, 22, 22] 73,728
ReLU-6 [-1, 128, 22, 22] 0
MaxPool2d-7 [-1, 128, 11, 11] 0
Conv2d-8 [-1, 32, 11, 11] 4,096
ReLU-9 [-1, 32, 11, 11] 0
Conv2d-10 [-1, 64, 9, 9] 18,432
ReLU-11 [-1, 64, 9, 9] 0
Conv2d-12 [-1, 128, 7, 7] 73,728
ReLU-13 [-1, 128, 7, 7] 0
Conv2d-14 [-1, 10, 7, 7] 1,280
ReLU-15 [-1, 10, 7, 7] 0
Conv2d-16 [-1, 10, 1, 1] 4,900
================================================================
Total params: 194,884
Trainable params: 194,884
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 2.20
Params size (MB): 0.74
Estimated Total Size (MB): 2.94
----------------------------------------------------------------
###Markdown
Training and Testing Looking at logs can be boring, so we'll introduce the **tqdm** progress bar to get cooler logs. Let's write the train and test functions
###Code
from tqdm import tqdm
train_losses = []
test_losses = []
train_acc = []
test_acc = []
def train(model, device, train_loader, optimizer, epoch):
model.train()
pbar = tqdm(train_loader)
correct = 0
processed = 0
for batch_idx, (data, target) in enumerate(pbar):
# get samples
data, target = data.to(device), target.to(device)
# Init
optimizer.zero_grad()
# In PyTorch, we need to set the gradients to zero before starting to do backpropragation because PyTorch accumulates the gradients on subsequent backward passes.
# Because of this, when you start your training loop, ideally you should zero out the gradients so that you do the parameter update correctly.
# Predict
y_pred = model(data)
# Calculate loss
loss = F.nll_loss(y_pred, target)
train_losses.append(loss)
# Backpropagation
loss.backward()
optimizer.step()
# Update pbar-tqdm
pred = y_pred.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
processed += len(data)
pbar.set_description(desc= f'Train set: Loss={loss.item()} Batch_id={batch_idx} Accuracy={100*correct/processed:0.2f}')
train_acc.append(100*correct/processed)
def test(model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
test_losses.append(test_loss)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
test_acc.append(100. * correct / len(test_loader.dataset))
###Output
_____no_output_____
###Markdown
Let's Train and test our model
###Code
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
EPOCHS = 20
for epoch in range(EPOCHS):
print("EPOCH:", epoch)
train(model, device, train_loader, optimizer, epoch)
test(model, device, test_loader)
fig, axs = plt.subplots(2,2,figsize=(15,10))
axs[0, 0].plot(train_losses)
axs[0, 0].set_title("Training Loss")
axs[1, 0].plot(train_acc)
axs[1, 0].set_title("Training Accuracy")
axs[0, 1].plot(test_losses)
axs[0, 1].set_title("Test Loss")
axs[1, 1].plot(test_acc)
axs[1, 1].set_title("Test Accuracy")
###Output
_____no_output_____ |
notebooks/.ipynb_checkpoints/RIPTiDe testing-checkpoint.ipynb | ###Markdown
Contextualizing Transcriptomic Data
###Code
#!/usr/bin/python
# Dependencies
import copy
import time
import numpy
import cobra
import pandas
import bisect
import symengine
from cobra.util import solver
from optlang.symbolics import Zero
from cobra.manipulation.delete import remove_genes
from cobra.flux_analysis.sampling import ACHRSampler
from cobra.flux_analysis import flux_variability_analysis
# Read in transcriptomic read abundances, default is tsv with no header
def read_transcription_file(read_abundances_file, header=False, replicates=False, sep='\t'):
'''Generates dictionary of transcriptomic abundances from a file.
Parameters
----------
read_abundances_file : string
User-provided file name which contains gene IDs and associated transcription values
header : boolean
Defines if read abundance file has a header that needs to be ignored
replicates : boolean
Defines if read abundances contains replicates and medians require calculation
sep: string
Defines what character separates entries on each line
'''
abund_dict = {}
with open(read_abundances_file, 'r') as transcription:
if header == True: header_line = transcription.readline()
for line in transcription:
line = line.split(sep)
gene = str(line[0])
if replicates == True:
abundance = float(numpy.median([float(x) for x in line[1:]]))
else:
abundance = float(line[1])
if gene in abund_dict.keys():
abund_dict[gene] += abundance
else:
abund_dict[gene] = abundance
return abund_dict
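# Hypothetical usage sketch (the file name below is illustrative only, not part of this analysis):
# transcription_dict = read_transcription_file('abundances.tsv', header=True, replicates=True)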
# Ensure that the user provided model and transcriptomic data are ready for RIPTiDe
def initialize_model(model):
# Create a copy of the original model and set new id
riptide_model = copy.deepcopy(model)
riptide_model.id = str(riptide_model.id) + '_riptide'
# Check that the model can grow
solution = riptide_model.optimize()
if solution.objective_value < 1e-6 or str(solution.objective_value) == 'nan':
raise ValueError('ERROR: Provided model objective cannot carry flux! Please correct')
# Calculate flux ranges and remove totally blocked reactions
flux_span = flux_variability_analysis(riptide_model, fraction_of_optimum=0.1)
flux_ranges = {}
blocked_rxns = []
for rxn_id, min_max in flux_span.iterrows():
if max(abs(min_max)) < 1e-6:
blocked_rxns.append(rxn_id)
else:
flux_ranges[rxn_id] = [min(min_max), max(min_max)]
for rxn in blocked_rxns:
riptide_model.reactions.get_by_id(rxn).remove_from_model(remove_orphans=True)
return riptide_model
# Converts a dictionary of transcript distribution percentiles
# Loosely based on:
# Schultz, A, & Qutub, AA (2016). Reconstruction of Tissue-Specific Metabolic Networks Using CORDA.
# PLoS Computational Biology. https://doi.org/10.1371/journal.pcbi.1004808
def assign_coefficients(raw_transcription_dict, model, percentiles, min_coefficients):
# Screen transcriptomic abundances for genes that are included in model
transcription_dict = {}
for gene in model.genes:
try:
transcription_dict[gene.id] = raw_transcription_dict[gene.id]
except KeyError:
continue
# Calculate transcript abundance cutoffs
distribution = transcription_dict.values()
abund_cutoffs = [numpy.percentile(distribution, x) for x in percentiles]
# Screen transcript distribution by newly defined abundance intervals
coefficient_dict = {}
    for gene in transcription_dict.keys():
transcription = transcription_dict[gene]
if transcription in abund_cutoffs:
index = abund_cutoffs.index(transcription)
min_coefficient = min_coefficients[index]
else:
index = bisect.bisect_right(abund_cutoffs, transcription) - 1
min_coefficient = min_coefficients[index]
# Assign corresponding coefficients to reactions associated with each gene
for rxn in list(model.genes.get_by_any(gene)[0].reactions):
if rxn.id in coefficient_dict.keys():
coefficient_dict[rxn.id].append(min_coefficient)
else:
coefficient_dict[rxn.id] = [min_coefficient]
# Assign final coefficients
nogene_coefficient = numpy.median(min_coefficients)
for rxn in model.reactions:
try:
# Take smallest value for reactions assigned multiple coefficients
coefficient_dict[rxn.id] = min(coefficient_dict[rxn.id])
except KeyError:
coefficient_dict[rxn.id] = nogene_coefficient
continue
return coefficient_dict
# Read in user defined reactions to keep or exclude
def incorporate_user_defined_reactions(rm_rxns, reaction_file):
print('Integrating user definitions...')
sep = ',' if '.csv' in str(reaction_file) else '\t'
# Check if file actually exists
try:
with open(reaction_file, 'r') as reactions:
include_rxns = set(reactions.readline().split(sep))
exclude_rxns = set(reactions.readline().split(sep))
except FileNotFoundError:
raise FileNotFoundError('ERROR: Defined reactions file not found! Please correct.')
rm_rxns = rm_rxns.difference(include_rxns)
rm_rxns |= exclude_rxns
return rm_rxns
# Determine those reactions that carry flux in a pFBA objective set to a threshold of maximum
# Based on:
# Lewis NE, et al.(2010). Omic data from evolved E. coli are consistent with computed optimal growth from
# genome-scale models. Molecular Systems Biology. 6, 390.
# Holzhütter, HG. (2004). The principle of flux minimization and its application to estimate
# stationary fluxes in metabolic networks. Eur. J. Biochem. 271; 2905–2922.
def constrain_and_analyze_model(model, coefficient_dict, fraction, sampling_depth):
with model as constrained_model:
# Apply weigths to new expression
pfba_expr = Zero
if sampling_depth == 'minimization':
for rxn in constrained_model.reactions:
pfba_expr += coefficient_dict[rxn.id] * rxn.forward_variable
pfba_expr += coefficient_dict[rxn.id] * rxn.reverse_variable
else:
coeff_range = float(max(coefficient_dict.values())) + float(min(coefficient_dict.values()))
for rxn in constrained_model.reactions:
max_coeff = coeff_range - float(coefficient_dict[rxn.id])
pfba_expr += max_coeff * rxn.forward_variable
pfba_expr += max_coeff * rxn.reverse_variable
# Calculate sum of fluxes constraint
if sampling_depth == 'minimization':
prev_obj_val = constrained_model.slim_optimize()
# Set previous objective as a constraint, allow deviation
prev_obj_constraint = constrained_model.problem.Constraint(constrained_model.objective.expression, lb=prev_obj_val*fraction, ub=prev_obj_val)
constrained_model.add_cons_vars([prev_obj_constraint])
constrained_model.objective = constrained_model.problem.Objective(pfba_expr, direction='min', sloppy=True)
constrained_model.solver.update()
solution = constrained_model.optimize()
# Determine reactions that do not carry any flux in the constrained model
inactive_rxns = set([rxn.id for rxn in constrained_model.reactions if abs(solution.fluxes[rxn.id]) < 1e-6])
return inactive_rxns
else:
# Calculate upper fraction of optimum
fraction_hi = 1.0 + (1.0 - fraction)
# Explore solution space of constrained model with flux sampling, allow deviation
constrained_model.objective = constrained_model.problem.Objective(pfba_expr, direction='max', sloppy=True)
solution = constrained_model.optimize()
flux_sum_obj_val = solution.objective_value
flux_sum_constraint = constrained_model.problem.Constraint(pfba_expr, lb=flux_sum_obj_val*fraction, ub=flux_sum_obj_val*fraction_hi)
constrained_model.add_cons_vars([flux_sum_constraint])
constrained_model.solver.update()
# Perform flux sampling (or FVA)
flux_object = explore_flux_ranges(constrained_model, sampling_depth)
return flux_object
# Prune model based on blocked reactions from minimization as well as user-defined reactions
def prune_model(new_model, rm_rxns, defined_rxns):
# Integrate user definitions
if defined_rxns != False:
rm_rxns = incorporate_user_defined_reactions(rm_rxns, defined_rxns)
# Parse elements highlighted for pruning based on GPRs
final_rm_rxns = []
for rxn in rm_rxns:
test = 'pass'
current_genes = list(new_model.reactions.get_by_id(rxn).genes)
for gene in current_genes:
for rxn_sub in gene.reactions:
if rxn_sub.id not in rm_rxns:
test = 'fail'
else:
pass
if test == 'pass': final_rm_rxns.append(rxn)
# Screen for duplicates
final_rm_rxns = list(set(final_rm_rxns))
# Prune inactive reactions
for rxn in final_rm_rxns:
new_model.reactions.get_by_id(rxn).remove_from_model(remove_orphans=True)
# Prune possible residual orphans
removed = 1
while removed == 1:
removed = 0
for cpd in new_model.metabolites:
if len(cpd.reactions) == 0:
cpd.remove_from_model(); removed = 1
for rxn in new_model.reactions:
if len(rxn.metabolites) == 0:
rxn.remove_from_model(); removed = 1
return new_model
# Analyze the possible ranges of flux in the constrained model
def explore_flux_ranges(model, samples):
try:
sampling_object = ACHRSampler(model)
flux_object = sampling_object.sample(samples)
analysis = 'flux_sampling'
except:
# Handle errors for models that are now too small
print('Constrained solution space too narrow for sampling, performing FVA instead')
flux_object = flux_variability_analysis(model, fraction_of_optimum=0.9)
analysis = 'fva'
return flux_object, analysis
# Constrain bounds for remaining reactions in model based on RIPTiDe results
def apply_bounds(constrained_model, flux_object):
flux_ranges = {}
# Handle FVA dataframe if necessary
if len(flux_object.columns) == 2:
for rxn in constrained_model.reactions:
min_max = list(flux_object.loc[rxn.id])
new_lb = min(min_max)
new_ub = max(min_max)
constrained_model.reactions.get_by_id(rxn.id).bounds = (new_lb, new_ub)
flux_ranges[rxn.id] = [new_lb, new_ub]
# Handle flux sampling results
else:
for rxn in constrained_model.reactions:
distribution = list(flux_object[rxn.id])
new_lb = min(distribution)
new_ub = max(distribution)
constrained_model.reactions.get_by_id(rxn.id).bounds = (new_lb, new_ub)
flux_ranges[rxn.id] = [new_lb, new_ub]
return constrained_model
# Reports how long RIPTiDe took to run
def operation_report(start_time, model, riptide):
# Pruning
perc_removal = 100.0 - ((float(len(riptide.reactions)) / float(len(model.reactions))) * 100.0)
perc_removal = round(perc_removal, 1)
print('\nReactions pruned to ' + str(len(riptide.reactions)) + ' from ' + str(len(model.reactions)) + ' (' + str(perc_removal) + '% reduction)')
perc_removal = 100.0 - ((float(len(riptide.metabolites)) / float(len(model.metabolites))) * 100.0)
perc_removal = round(perc_removal, 1)
print('Metabolites pruned to ' + str(len(riptide.metabolites)) + ' from ' + str(len(model.metabolites)) + ' (' + str(perc_removal) + '% reduction)')
# Flux through objective
new_ov = round(riptide.slim_optimize(), 3)
old_ov = round(model.slim_optimize(), 3)
per_shift = 100.0 - ((new_ov / old_ov) * 100.0)
if per_shift == 0.0:
print('\nNo change in flux through the objective')
elif per_shift > 0.0:
per_shift = round(abs(per_shift), 2)
print('\nFlux through the objective REDUCED to ' + str(new_ov) + ' from ' + str(old_ov) + ' (' + str(per_shift) + '% shift)')
elif per_shift < 0.0:
per_shift = round(abs(per_shift), 2)
print('\nFlux through the objective INCREASED to ' + str(new_ov) + ' from ' + str(old_ov) + ' (' + str(per_shift) + '% shift)')
# Check that prune model can still achieve flux through the objective (just in case)
if riptide.slim_optimize() < 1e-6 or str(riptide.slim_optimize()) == 'nan':
print('\nWARNING: Contextualized model objective can no longer carry flux')
# Run time
duration = time.time() - start_time
    if duration < 60.0:
        duration = round(duration)
        print('\nRIPTiDe completed in ' + str(duration) + ' seconds')
    elif duration < 3600.0:
        duration = round((duration / 60.0), 1)
        print('\nRIPTiDe completed in ' + str(duration) + ' minutes')
    else:
        duration = round((duration / 3600.0), 1)
        print('\nRIPTiDe completed in ' + str(duration) + ' hours')
# Create context-specific model based on transcript distribution
def riptide(model, transcription, defined = False, sampling = 10000, percentiles = [50.0, 62.5, 75.0, 87.5], coefficients = [1.0, 0.5, 0.1, 0.01, 0.001], fraction = 0.8):
'''Reaction Inclusion by Parsimony and Transcriptomic Distribution or RIPTiDe
Creates a contextualized metabolic model based on parsimonious usage of reactions defined
by their associated transcriptomic abundances. Returns a pruned, context-specific cobra.Model
and a pandas.DataFrame of associated flux sampling distributions
Parameters
----------
model : cobra.Model
The model to be contextualized
transcription : dictionary
Dictionary of transcript abundances, output of read_transcription_file()
defined : False or File
Text file containing reactions IDs for forced inclusion listed on the first line and exclusion
listed on the second line (both .csv and .tsv formats supported)
sampling : int or False
Number of flux samples to collect, default is 10000, If False, sampling skipped
percentiles : list of floats
Percentile cutoffs of transcript abundance for linear coefficient assignments to associated reactions
Defaults are [50.0, 62.5, 75.0, 87.5]
coefficients : list of floats
Linear coefficients to weight reactions based on distribution placement
Defaults are [1.0, 0.5, 0.1, 0.01, 0.001]
fraction : float
Minimum percent of optimal objective value during FBA steps
Default is 0.8
'''
start_time = time.time()
# Correct some possible user error
if sampling == False:
pass
elif sampling <= 0:
sampling = 10000
else:
samples = int(sampling)
if len(set(transcription.values())) == 1:
raise ValueError('ERROR: All transcriptomic abundances are identical! Please correct')
if len(coefficients) != len(percentiles) + 1:
raise ValueError('ERROR: Invalid ratio of percentile cutoffs to linear coefficients! Please correct')
fraction = float(fraction)
if fraction <= 0.0:
fraction = 0.8
percentiles.sort() # sort ascending
coefficients.sort(reverse=True) # sort descending
# Check original model functionality
# Partition reactions based on transcription percentile intervals, assign corresponding reaction coefficients
print('Initializing model and parsing transcriptome...')
riptide_model = initialize_model(model)
coefficient_dict = assign_coefficients(transcription, riptide_model, percentiles, coefficients)
# Prune now inactive network sections based on coefficients
print('Pruning zero flux subnetworks...')
rm_rxns = constrain_and_analyze_model(riptide_model, coefficient_dict, fraction, 'minimization')
riptide_model = prune_model(riptide_model, rm_rxns, defined)
# Find optimal solution space based on transcription and final constraints
if sampling != False:
print('Sampling context-specific solution space (longest step)...')
flux_object, analysis_type = constrain_and_analyze_model(riptide_model, coefficient_dict, fraction, samples)
# Constrain new model
riptide_model = apply_bounds(riptide_model, flux_object)
operation_report(start_time, model, riptide_model)
return riptide_model, flux_object
else:
operation_report(start_time, model, riptide_model)
return riptide_model
iCdJ794_cef, iCdJ794_cef_samples = riptide(iCdJ794, cef_dict)
len([x.id for x in iCdJ794.reactions if len(x.gene_reaction_rule) == 0])
len(iCdJ794.genes)
len(iJO1366.reactions)
len([x.id for x in iJO1366.reactions if len(x.gene_reaction_rule) == 0])
len(iJO1366.genes)
794.0/1129.0
1367.0/2583.0
###Output
_____no_output_____
###Markdown
Testing with Toy Model
###Code
import numpy
import pandas
import operator
from cobra.flux_analysis.parsimonious import *
# Use FBA calculations to find new shadow prices
def shadow_prices(model, compartment='all', top=25):
with model as m: solution = pfba(m)
shadow_prices = {}
for cpd, price in solution.shadow_prices.iteritems():
cpd = model.metabolites.get_by_any(cpd)[0]
if compartment != 'all' and compartment != cpd.compartment:
continue
else:
if price > 0.0: shadow_prices[cpd.name] = price
sorted_prices = sorted(shadow_prices.items(), key=operator.itemgetter(1), reverse=True)[0:top]
sorted_prices = pandas.DataFrame(sorted_prices, columns=['metabolite', 'shadow_price'])
return sorted_prices
# Load in example model
toy_model = cobra.io.read_sbml_model('/home/mjenior/Desktop/repos/Cdiff_modeling/data/itest8.sbml')
# gene1 = Glucose transporter
# gene2 = Proline transporter
# gene3 = Glycine transporter
# gene4 = Hydrogen efflux
# gene5 = Carbon dioxide efflux
# gene6 = Phosphate transporter
# gene7 = Glycolysis
# gene8 = Stickland fermentation
toy_model
shadow_prices(toy_model)
# Find most parsimonious route of flux
from cobra.flux_analysis.parsimonious import pfba
toy_solution = pfba(toy_model)
print(toy_solution.fluxes)
# Create associated transcriptomes
glucose_transcriptome = {'gene1':100, 'gene2':1, 'gene3':1, 'gene4':1,
'gene5':1, 'gene6':100, 'gene7':10000, 'gene8':1}
peptide_transcriptome = {'gene1':1, 'gene2':100, 'gene3':100, 'gene4':1,
'gene5':1, 'gene6':1, 'gene7':1, 'gene8':10000}
# Contextualize toy model
toy_model_glucose, glucose_samples = riptide(toy_model, glucose_transcriptome)
shadow_prices(toy_model_glucose)
# Contextualize toy model
toy_model_peptide, peptide_samples = riptide(toy_model, peptide_transcriptome)
shadow_prices(toy_model_peptide)
# Test difference in objective fluxes
gluc_arp = glucose_samples['DM_atp_c']
pep_arp = peptide_samples['DM_atp_c']
import scipy.stats
scipy.stats.wilcoxon(x=gluc_arp, y=pep_arp)
###Output
_____no_output_____
###Markdown
Testing with E.coli K-12 MG1655 model
###Code
def max_doubling_time(model):
with model as m:
growth = m.slim_optimize()
if growth < 1e-6:
growth = 'No growth'
else:
growth = (1.0 / growth) * 3600.0
if growth < 60.0:
growth = str(round(growth, 1)) + ' minutes'
else:
growth = growth / 60.0
growth = str(round(growth, 3)) + ' hours'
print(growth)
def collect_doubling_times(flux_samples, biomass):
biomass = list(flux_samples[biomass])
times = []
for x in biomass:
growth = (1.0 / x) * 3600.0 # Calculated in minutes
growth = round(growth, 2)
times.append(growth)
return times
def collect_growth_rates(flux_samples, biomass):
biomass = list(flux_samples[biomass])
rates = []
for x in biomass:
rate = x / 60.0
rate = round(rate, 3)
rates.append(rate)
return rates
iJO1366_m9_aerobic = cobra.io.read_sbml_model('/home/mjenior/Desktop/repos/Cdiff_modeling/data/riptide_models/iJO1366_m9_aerobic.sbml')
iJO1366_m9_aerobic.objective = iJO1366_m9_aerobic.reactions.BIOMASS_Ec_iJO1366_WT_53p95M
iJO1366_m9_anaerobic = cobra.io.read_sbml_model('/home/mjenior/Desktop/repos/Cdiff_modeling/data/riptide_models/iJO1366_m9_anaerobic.sbml')
iJO1366_m9_anaerobic.objective = iJO1366_m9_anaerobic.reactions.BIOMASS_Ec_iJO1366_WT_53p95M
iJO1366_lb_aerobic = cobra.io.read_sbml_model('/home/mjenior/Desktop/repos/Cdiff_modeling/data/riptide_models/iJO1366_lb_aerobic.sbml')
iJO1366_lb_aerobic.objective = iJO1366_lb_aerobic.reactions.BIOMASS_Ec_iJO1366_WT_53p95M
iJO1366 = cobra.io.read_sbml_model('/home/mjenior/Desktop/iJO1366.xml')
iJO1366.objective = iJO1366.reactions.BIOMASS_Ec_iJO1366_WT_53p95M
# Open all exchanges
exchanges = set()
for rxn in iJO1366.reactions:
if len(rxn.reactants) == 0 or len(rxn.products) == 0:
rxn.bounds = (min(rxn.lower_bound, -1000), max(rxn.upper_bound, 1000))
exchanges |= set([rxn.id])
failed = 0
passed = 0
lens = []
for gene in iJO1366.genes:
current = len(gene.reactions)
lens.append(current)
if current == 271:
print(gene)
if current == 0:
failed += 1
else:
passed += 1
print(failed)
print(passed)
iJO1366.genes.get_by_id('b2215')
iJO1366.genes.get_by_id('b0929')
max_doubling_time(iJO1366)
iJO1366
len(iJO1366.genes)
# Remove blocked reactions to speed up sampling
flux_span = flux_variability_analysis(iJO1366, fraction_of_optimum=0.75)
blocked_rxns = []
for rxn_id, min_max in flux_span.iterrows():
if max(abs(min_max)) < 1e-6:
blocked_rxns.append(rxn_id)
for rxn in blocked_rxns:
iJO1366.reactions.get_by_id(rxn).remove_from_model(remove_orphans=True)
# Flux sampling of base model
iJO1366_sampling_object = ACHRSampler(iJO1366)
iJO1366_base_aerobic_flux_samples = iJO1366_sampling_object.sample(10000)
# Collect base model growth information
base_times = collect_doubling_times(iJO1366_base_aerobic_flux_samples, 'BIOMASS_Ec_iJO1366_WT_53p95M')
print([min(base_times), numpy.median(base_times), max(base_times)])
base_rates = collect_growth_rates(iJO1366_base_aerobic_flux_samples, 'BIOMASS_Ec_iJO1366_WT_53p95M')
print([min(base_rates), numpy.median(base_rates), max(base_rates)])
# Screen data to actually biologically feasible ranges
# Doubling time
screened_base_times = []
for x in base_times:
if x > 0.0: screened_base_times.append(x)
print(len(screened_base_times))
print([min(screened_base_times), numpy.median(screened_base_times), max(screened_base_times)])
# Growth rate
screened_base_rates = []
for x in base_rates:
if x > 0.0: screened_base_rates.append(x)
print(len(screened_base_rates))
print([min(screened_base_rates), numpy.median(screened_base_rates), max(screened_base_rates)])
# Append NAs to be compatible with other data and R
total_nas = len(base_times) - len(screened_base_times)
total_nas = total_nas * ['NA']
screened_base_times += total_nas
total_nas = len(base_rates) - len(screened_base_rates)
total_nas = total_nas * ['NA']
screened_base_rates += total_nas
# Flux sampling of base model
#iJO1366.reactions.get_by_id('EX_o2_e').bounds = (0.0, 0.0) # make anaerobic
#prev_obj_val = iJO1366.slim_optimize()
#prev_obj_constraint = iJO1366.problem.Constraint(iJO1366.objective.expression, lb=prev_obj_val*0.5, ub=prev_obj_val*1.5)
#iJO1366.add_cons_vars([prev_obj_constraint])
#iJO1366_sampling_object = OptGPSampler(iJO1366, processes=4)
#iJO1366_base_anaerobic_flux_samples = iJO1366_sampling_object.sample(10000)
# Write it to a pickle
#import pickle
#pickle_out = open('/home/mjenior/Desktop/iJO1366_base_anaerobic_flux_samples.pickle', 'wb')
#pickle.dump(iJO1366_base_flux_samples, pickle_out)
#pickle_out.close()
# Load in flux samples to avoid repeated computation
import pickle
pickle_in = open('/home/mjenior/Desktop/iJO1366_base_aerobic_flux_samples.pickle', 'rb')
iJO1366_base_aerobic_flux_samples = pickle.load(pickle_in)
pickle_in = open('/home/mjenior/Desktop/iJO1366_base_anaerobic_flux_samples.pickle', 'rb')
iJO1366_base_anaerobic_flux_samples = pickle.load(pickle_in)
base_aerobic_rates = collect_growth_rates(iJO1366_base_aerobic_flux_samples, 'BIOMASS_Ec_iJO1366_WT_53p95M')
print([min(base_aerobic_rates), numpy.median(base_aerobic_rates), max(base_aerobic_rates)])
base_anaerobic_rates = collect_growth_rates(iJO1366_base_anaerobic_flux_samples, 'BIOMASS_Ec_iJO1366_WT_53p95M')
print([min(base_anaerobic_rates), numpy.median(base_anaerobic_rates), max(base_anaerobic_rates)])
# Normalize transcriptome
#gpr_dict = {}
#with open('/home/mjenior/Desktop/Monk_et_al_2016/iJO1366_genes.tsv', 'r') as genes:
# for line in genes:
# line = line.split()
# gpr_dict[line[1]] = line[0]
# Read in transcriptomes
# Data collected from:
# Monk et al. (2016). Multi-omics Quantification of Species Variation of Escherichia coli
# Links Molecular Features with Strain Phenotypes. Cell Systems. 3; 238–251.
# Load in GPR translations
gpr_dict = {}
with open('/home/mjenior/Desktop/repos/Cdiff_modeling/data/transcript/Monk_et_al_2016/iJO1366_genes.tsv', 'r') as genes:
for line in genes:
line = line.split()
gpr_dict[line[1]] = line[0]
# Normalized abundances
# Separate into treatment goups and calculate medians
m9_aerobic = {}
m9_anaerobic = {}
with open('/home/mjenior/Desktop/repos/Cdiff_modeling/data/transcript/Monk_et_al_2016/normalized.tsv', 'r') as transcription:
for line in transcription:
line = line.split()
if line[0] == 'gene':
continue
else:
try:
gene = gpr_dict[line[0]]
except:
continue
m9_aerobic[gene] = numpy.median([int(x) for x in line[1:4]])
m9_anaerobic[gene] = numpy.median([int(y) for y in line[4:7]])
# Rich media (LB) data from:
# Double-stranded transcriptome of E. coli
# Meghan Lybecker, Bob Zimmermann, Ivana Bilusic, Nadezda Tukhtubaeva, Renée Schroeder
# Proceedings of the National Academy of Sciences Feb 2014, 111 (8) 3134-3139; DOI: 10.1073/pnas.1315974111
lb_aerobic = {}
with open('/home/mjenior/Desktop/repos/Cdiff_modeling/data/transcript/SRR941894.mapped.norm.tsv', 'r') as transcription:
header = transcription.readline()
for line in transcription:
line = line.split()
lb_aerobic[line[0]] = float(line[1])
# Aerobic growth in M9 + glucose
iJO1366_m9_aerobic, m9_aerobic_samples = riptide(iJO1366, m9_aerobic)
iJO1366_m9_aerobic
max_doubling_time(iJO1366_m9_aerobic)
m9_aerobic_times = collect_doubling_times(m9_aerobic_samples, 'BIOMASS_Ec_iJO1366_WT_53p95M')
print([min(m9_aerobic_times), numpy.median(m9_aerobic_times), max(m9_aerobic_times)])
m9_aerobic_rates = collect_growth_rates(m9_aerobic_samples, 'BIOMASS_Ec_iJO1366_WT_53p95M')
print([min(m9_aerobic_rates), numpy.median(m9_aerobic_rates), max(m9_aerobic_rates)])
# Run in aerobic exchange conditions
#iJO1366_m9_anaerobic_test, m9_anaerobic_samples_test = riptide(iJO1366, m9_anaerobic)
#max_doubling_time(iJO1366_m9_anaerobic_test)
#m9_aerobic_rates_test = collect_growth_rates(m9_anaerobic_samples_test, 'BIOMASS_Ec_iJO1366_WT_53p95M')
#print([min(m9_aerobic_rates_test), numpy.median(m9_aerobic_rates_test), max(m9_aerobic_rates_test)])
# Anaerobic growth in M9 + glucose
#iJO1366.reactions.get_by_id('EX_o2_e').bounds = (0.0, 0.0) # make anaerobic
iJO1366_m9_anaerobic, m9_anaerobic_samples = riptide(iJO1366, m9_anaerobic)
#iJO1366.reactions.get_by_id('EX_o2_e').bounds = (-1000.0, 1000.0) # revert change
iJO1366_m9_anaerobic
max_doubling_time(iJO1366_m9_anaerobic)
m9_anaerobic_times = collect_doubling_times(m9_anaerobic_samples, 'BIOMASS_Ec_iJO1366_WT_53p95M')
print([min(m9_anaerobic_times), numpy.median(m9_anaerobic_times), max(m9_anaerobic_times)])
m9_anaerobic_rates = collect_growth_rates(m9_anaerobic_samples, 'BIOMASS_Ec_iJO1366_WT_53p95M')
print([min(m9_anaerobic_rates), numpy.median(m9_anaerobic_rates), max(m9_anaerobic_rates)])
# Aerobic growth in LB
iJO1366_lb_aerobic, lb_samples = riptide(iJO1366, lb_aerobic)
iJO1366_lb_aerobic
max_doubling_time(iJO1366_lb_aerobic)
lb_aerobic_times = collect_doubling_times(lb_samples, 'BIOMASS_Ec_iJO1366_WT_53p95M')
print([min(lb_aerobic_times), numpy.median(lb_aerobic_times), max(lb_aerobic_times)])
lb_aerobic_rates = collect_growth_rates(lb_samples, 'BIOMASS_Ec_iJO1366_WT_53p95M')
print([min(lb_aerobic_rates), numpy.median(lb_aerobic_rates), max(lb_aerobic_rates)])
# Compare to base implementation of pFBA
# All coefficients set to 1.0, so transcriptome is irrelevant
iJO1366_pfba, pfba_samples = riptide(iJO1366, m9_aerobic, coefficients=[1.0,1.0,1.0,1.0,1.0])
iJO1366_pfba
max_doubling_time(iJO1366_pfba)
pfba_times = collect_doubling_times(pfba_samples, 'BIOMASS_Ec_iJO1366_WT_53p95M')
print([min(pfba_times), numpy.median(pfba_times), max(pfba_times)])
pfba_rates = collect_growth_rates(pfba_samples, 'BIOMASS_Ec_iJO1366_WT_53p95M')
print([min(pfba_rates), numpy.median(pfba_rates), max(pfba_rates)])
# Compares lists to create diagrams for 4 groups
def venn_comparison(list1, list2, list3, list4):
# Confirm correct data types
list1 = set(list1)
list2 = set(list2)
list3 = set(list3)
list4 = set(list4)
# Identify exclusive elements
list1_only = list1.difference(list2)
list1_only = list1_only.difference(list3)
list1_only = list1_only.difference(list4)
list2_only = list2.difference(list1)
list2_only = list2_only.difference(list3)
list2_only = list2_only.difference(list4)
list3_only = list3.difference(list1)
list3_only = list3_only.difference(list2)
list3_only = list3_only.difference(list4)
list4_only = list4.difference(list1)
list4_only = list4_only.difference(list2)
list4_only = list4_only.difference(list3)
# Find overlap between just 2 groups
list1_list2_overlap = list1.intersection(list2)
list1_list2_overlap = list1_list2_overlap.difference(list3)
list1_list2_overlap = list1_list2_overlap.difference(list4)
list1_list3_overlap = list1.intersection(list3)
list1_list3_overlap = list1_list3_overlap.difference(list2)
list1_list3_overlap = list1_list3_overlap.difference(list4)
list1_list4_overlap = list1.intersection(list4)
list1_list4_overlap = list1_list4_overlap.difference(list2)
list1_list4_overlap = list1_list4_overlap.difference(list3)
list2_list3_overlap = list2.intersection(list3)
list2_list3_overlap = list2_list3_overlap.difference(list1)
list2_list3_overlap = list2_list3_overlap.difference(list4)
list2_list4_overlap = list2.intersection(list4)
list2_list4_overlap = list2_list4_overlap.difference(list1)
list2_list4_overlap = list2_list4_overlap.difference(list3)
list3_list4_overlap = list3.intersection(list4)
list3_list4_overlap = list3_list4_overlap.difference(list1)
list3_list4_overlap = list3_list4_overlap.difference(list2)
# Find overlap in 3 groups
list1_list2_list3_overlap = list1.intersection(list2)
list1_list2_list3_overlap = list1_list2_list3_overlap.intersection(list3)
list1_list2_list3_overlap = list1_list2_list3_overlap.difference(list4)
list1_list2_list4_overlap = list1.intersection(list2)
list1_list2_list4_overlap = list1_list2_list4_overlap.intersection(list4)
list1_list2_list4_overlap = list1_list2_list4_overlap.difference(list3)
list1_list3_list4_overlap = list1.intersection(list3)
list1_list3_list4_overlap = list1_list3_list4_overlap.intersection(list4)
list1_list3_list4_overlap = list1_list3_list4_overlap.difference(list2)
list2_list3_list4_overlap = list2.intersection(list3)
list2_list3_list4_overlap = list2_list3_list4_overlap.intersection(list4)
list2_list3_list4_overlap = list2_list3_list4_overlap.difference(list1)
# Find overlap between all groups
all_list_overlap = list1.intersection(list2)
all_list_overlap = all_list_overlap.intersection(list3)
all_list_overlap = all_list_overlap.intersection(list4)
# Calculate totals in each group
list1_total = float(len(list1))
list2_total = float(len(list2))
list3_total = float(len(list3))
list4_total = float(len(list4))
list1_only_total = float(len(list1_only))
list2_only_total = float(len(list2_only))
list3_only_total = float(len(list3_only))
list4_only_total = float(len(list4_only))
list1_list2_overlap_total = float(len(list1_list2_overlap))
list1_list3_overlap_total = float(len(list1_list3_overlap))
list1_list4_overlap_total = float(len(list1_list4_overlap))
list2_list3_overlap_total = float(len(list2_list3_overlap))
list2_list4_overlap_total = float(len(list2_list4_overlap))
list3_list4_overlap_total = float(len(list3_list4_overlap))
list1_list2_list3_overlap_total = float(len(list1_list2_list3_overlap))
list1_list2_list4_overlap_total = float(len(list1_list2_list4_overlap))
list1_list3_list4_overlap_total = float(len(list1_list3_list4_overlap))
list2_list3_list4_overlap_total = float(len(list2_list3_list4_overlap))
all_list_overlap_total = float(len(all_list_overlap))
# Calculate percent overlaps
list1_only_percent = round(((list1_only_total / list1_total) * 100.0), 1)
list2_only_percent = round(((list2_only_total / list2_total) * 100.0), 1)
list3_only_percent = round(((list3_only_total / list3_total) * 100.0), 1)
list4_only_percent = round(((list4_only_total / list4_total) * 100.0), 1)
temp1 = (list1_list2_overlap_total / list1_total) * 100.0
temp2 = (list1_list2_overlap_total / list2_total) * 100.0
list1_list2_overlap_percent = round(numpy.mean([temp1, temp2]), 1)
temp1 = (list1_list3_overlap_total / list1_total) * 100.0
temp2 = (list1_list3_overlap_total / list3_total) * 100.0
list1_list3_overlap_percent = round(numpy.mean([temp1, temp2]), 1)
temp1 = (list1_list4_overlap_total / list1_total) * 100.0
temp2 = (list1_list4_overlap_total / list4_total) * 100.0
list1_list4_overlap_percent = round(numpy.mean([temp1, temp2]), 1)
temp1 = (list2_list3_overlap_total / list2_total) * 100.0
temp2 = (list2_list3_overlap_total / list3_total) * 100.0
list2_list3_overlap_percent = round(numpy.mean([temp1, temp2]), 1)
temp1 = (list2_list4_overlap_total / list2_total) * 100.0
temp2 = (list2_list4_overlap_total / list4_total) * 100.0
list2_list4_overlap_percent = round(numpy.mean([temp1, temp2]), 1)
temp1 = (list3_list4_overlap_total / list3_total) * 100.0
temp2 = (list3_list4_overlap_total / list4_total) * 100.0
list3_list4_overlap_percent = round(numpy.mean([temp1, temp2]), 1)
temp1 = (list1_list2_list3_overlap_total / list1_total) * 100.0
temp2 = (list1_list2_list3_overlap_total / list2_total) * 100.0
temp3 = (list1_list2_list3_overlap_total / list3_total) * 100.0
list1_list2_list3_overlap_percent = round(numpy.mean([temp1, temp2, temp3]), 1)
temp1 = (list1_list2_list4_overlap_total / list1_total) * 100.0
temp2 = (list1_list2_list4_overlap_total / list2_total) * 100.0
temp3 = (list1_list2_list4_overlap_total / list4_total) * 100.0
list1_list2_list4_overlap_percent = round(numpy.mean([temp1, temp2, temp3]), 1)
temp1 = (list1_list3_list4_overlap_total / list1_total) * 100.0
temp2 = (list1_list3_list4_overlap_total / list3_total) * 100.0
temp3 = (list1_list3_list4_overlap_total / list4_total) * 100.0
list1_list3_list4_overlap_percent = round(numpy.mean([temp1, temp2, temp3]), 1)
temp1 = (list2_list3_list4_overlap_total / list2_total) * 100.0
temp2 = (list2_list3_list4_overlap_total / list3_total) * 100.0
temp3 = (list2_list3_list4_overlap_total / list4_total) * 100.0
list2_list3_list4_overlap_percent = round(numpy.mean([temp1, temp2, temp3]), 1)
temp1 = (all_list_overlap_total / list1_total) * 100.0
temp2 = (all_list_overlap_total / list2_total) * 100.0
temp3 = (all_list_overlap_total / list3_total) * 100.0
temp4 = (all_list_overlap_total / list4_total) * 100.0
all_list_overlap_percent = round(numpy.mean([temp1, temp2, temp3, temp4]), 1)
# Print report to the screen
print('List 1 only: ' + str(list1_only_percent) + '% (' + str(int(list1_only_total)) + ')')
print('List 2 only: ' + str(list2_only_percent) + '% (' + str(int(list2_only_total)) + ')')
print('List 3 only: ' + str(list3_only_percent) + '% (' + str(int(list3_only_total)) + ')')
print('List 4 only: ' + str(list4_only_percent) + '% (' + str(int(list4_only_total)) + ')')
print('')
print('List 1 + List 2: ' + str(list1_list2_overlap_percent) + '% (' + str(int(list1_list2_overlap_total)) + ')')
print('List 1 + List 3: ' + str(list1_list3_overlap_percent) + '% (' + str(int(list1_list3_overlap_total)) + ')')
print('List 1 + List 4: ' + str(list1_list4_overlap_percent) + '% (' + str(int(list1_list4_overlap_total)) + ')')
print('List 2 + List 3: ' + str(list2_list3_overlap_percent) + '% (' + str(int(list2_list3_overlap_total)) + ')')
print('List 2 + List 4: ' + str(list2_list4_overlap_percent) + '% (' + str(int(list2_list4_overlap_total)) + ')')
print('List 3 + List 4: ' + str(list3_list4_overlap_percent) + '% (' + str(int(list3_list4_overlap_total)) + ')')
print('')
print('List 1 + List 2 + List 3: ' + str(list1_list2_list3_overlap_percent) + '% (' + str(int(list1_list2_list3_overlap_total)) + ')')
print('List 1 + List 2 + List 4: ' + str(list1_list2_list4_overlap_percent) + '% (' + str(int(list1_list2_list4_overlap_total)) + ')')
print('List 1 + List 3 + List 4: ' + str(list1_list3_list4_overlap_percent) + '% (' + str(int(list1_list3_list4_overlap_total)) + ')')
print('List 2 + List 3 + List 4: ' + str(list2_list3_list4_overlap_percent) + '% (' + str(int(list2_list3_list4_overlap_total)) + ')')
print('')
print('Shared: ' + str(all_list_overlap_percent) + '% (' + str(int(all_list_overlap_total)) + ')')
# Return new lists
return [list1_only,list2_only,list3_only,list4_only,list1_list2_overlap, list1_list3_overlap, list1_list4_overlap, list2_list3_overlap, list2_list4_overlap, list3_list4_overlap, list1_list2_list3_overlap, list1_list2_list4_overlap, list1_list3_list4_overlap, list2_list3_list4_overlap, all_list_overlap]
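# Editorial sketch (not used by the calls below; venn_partition and named_sets are illustrative names):
# the same four-way partition can be computed more compactly by grouping each element
# by the set of input lists it occurs in.
def venn_partition(named_sets):
    from itertools import chain
    partition = {}
    universe = set(chain.from_iterable(named_sets.values()))
    for element in universe:
        # Key = which named sets contain this element
        key = frozenset(name for name, s in named_sets.items() if element in s)
        partition.setdefault(key, set()).add(element)
    return partition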
# Reactions
iJO1366_m9_aerobic_reactions = [x.id for x in iJO1366_m9_aerobic.reactions]
iJO1366_m9_anaerobic_reactions = [x.id for x in iJO1366_m9_anaerobic.reactions]
iJO1366_lb_aerobic_reactions = [x.id for x in iJO1366_lb_aerobic.reactions]
iJO1366_pfba_reactions = [x.id for x in iJO1366_pfba.reactions]
reactions_comparisons = venn_comparison(iJO1366_pfba_reactions, iJO1366_lb_aerobic_reactions, iJO1366_m9_aerobic_reactions, iJO1366_m9_anaerobic_reactions)
# Metabolites
iJO1366_m9_aerobic_metabolites = [x.id for x in iJO1366_m9_aerobic.metabolites]
iJO1366_m9_anaerobic_metabolites = [x.id for x in iJO1366_m9_anaerobic.metabolites]
iJO1366_lb_aerobic_metabolites = [x.id for x in iJO1366_lb_aerobic.metabolites]
iJO1366_pfba_metabolites = [x.id for x in iJO1366_pfba.metabolites]
metabolites_comparisons = venn_comparison(iJO1366_pfba_metabolites, iJO1366_lb_aerobic_metabolites, iJO1366_m9_aerobic_metabolites, iJO1366_m9_anaerobic_metabolites)
# Screen context specific growth rates by each model's optimal growth
# Determine bounds
pfba_obj_val_lb = iJO1366_pfba.slim_optimize() * 0.8
lb_aerobic_obj_val_lb = iJO1366_lb_aerobic.slim_optimize() * 0.8
m9_aerobic_obj_val_lb = iJO1366_m9_aerobic.slim_optimize() * 0.8
m9_anaerobic_obj_val_lb = iJO1366_m9_anaerobic.slim_optimize() * 0.8
pfba_obj_val_ub = iJO1366_pfba.slim_optimize()
lb_aerobic_obj_val_ub = iJO1366_lb_aerobic.slim_optimize()
m9_aerobic_obj_val_ub = iJO1366_m9_aerobic.slim_optimize()
m9_anaerobic_obj_val_ub = iJO1366_m9_anaerobic.slim_optimize()
# Collect fluxes
pfba_biomass = list(pfba_samples['BIOMASS_Ec_iJO1366_WT_53p95M'])
lb_biomass = list(lb_samples['BIOMASS_Ec_iJO1366_WT_53p95M'])
m9_aerobic_biomass = list(m9_aerobic_samples['BIOMASS_Ec_iJO1366_WT_53p95M'])
m9_anaerobic_biomass = list(m9_anaerobic_samples['BIOMASS_Ec_iJO1366_WT_53p95M'])
# Screen fluxes
pfba_biomass = [x for x in pfba_biomass if x >= pfba_obj_val_lb and x <= pfba_obj_val_ub]
lb_biomass = [x for x in lb_biomass if x >= lb_aerobic_obj_val_lb and x <= lb_aerobic_obj_val_ub]
m9_aerobic_biomass = [x for x in m9_aerobic_biomass if x >= m9_aerobic_obj_val_lb and x <= m9_aerobic_obj_val_ub]
m9_anaerobic_biomass = [x for x in m9_anaerobic_biomass if x >= m9_anaerobic_obj_val_lb and x <= m9_anaerobic_obj_val_ub]
# Convert to per hour rate
pfba_rates = [round((x / 60.0), 3) for x in pfba_biomass]
lb_aerobic_rates = [round((x / 60.0), 3) for x in lb_biomass]
m9_aerobic_rates = [round((x / 60.0), 3) for x in m9_aerobic_biomass]
m9_anaerobic_rates = [round((x / 60.0), 3) for x in m9_anaerobic_biomass]
# Subsample evenly
import random
sub_level = min([len(pfba_rates), len(lb_aerobic_rates), len(m9_aerobic_rates), len(m9_anaerobic_rates)])
pfba_sub = random.sample(range(0,len(pfba_rates)), sub_level)
lb_sub = random.sample(range(0,len(lb_aerobic_rates)), sub_level)
m9a_sub = random.sample(range(0,len(m9_aerobic_rates)), sub_level)
m9n_sub = random.sample(range(0,len(m9_anaerobic_rates)), sub_level)
pfba_rates = [pfba_rates[i] for i in pfba_sub]
lb_aerobic_rates = [lb_aerobic_rates[i] for i in lb_sub]
m9_aerobic_rates = [m9_aerobic_rates[i] for i in m9a_sub]
m9_anaerobic_rates = [m9_anaerobic_rates[i] for i in m9n_sub]
# Convert to strings
pfba_rates = [str(x) for x in pfba_rates]
pfba_rates = 'base_pfba\t' + '\t'.join(pfba_rates) + '\n'
m9_aerobic_rates = [str(x) for x in m9_aerobic_rates]
m9_aerobic_rates = 'm9_gluc_aerobic\t' + '\t'.join(m9_aerobic_rates) + '\n'
m9_anaerobic_rates = [str(x) for x in m9_anaerobic_rates]
m9_anaerobic_rates = 'm9_gluc_anaerobic\t' + '\t'.join(m9_anaerobic_rates) + '\n'
lb_aerobic_rates = [str(x) for x in lb_aerobic_rates]
lb_aerobic_rates = 'lb_aerobic\t' + '\t'.join(lb_aerobic_rates) + '\n'
# Write to file
with open('/home/mjenior/Desktop/repos/Cdiff_modeling/data/new_growth_rates.tsv', 'w') as rates:
rates.write(pfba_rates)
rates.write(m9_aerobic_rates)
rates.write(m9_anaerobic_rates)
rates.write(lb_aerobic_rates)
# Write contextualized models to SBMLs and JSONs
cobra.io.write_sbml_model(iJO1366_m9_aerobic, '/home/mjenior/Desktop/repos/Cdiff_modeling/data/riptide_models/iJO1366_m9_aerobic.sbml')
cobra.io.save_json_model(iJO1366_m9_aerobic, '/home/mjenior/Desktop/repos/Cdiff_modeling/data/riptide_models/iJO1366_m9_aerobic.json')
cobra.io.write_sbml_model(iJO1366_m9_anaerobic, '/home/mjenior/Desktop/repos/Cdiff_modeling/data/riptide_models/iJO1366_m9_anaerobic.sbml')
cobra.io.save_json_model(iJO1366_m9_anaerobic, '/home/mjenior/Desktop/repos/Cdiff_modeling/data/riptide_models/iJO1366_m9_anaerobic.json')
cobra.io.write_sbml_model(iJO1366_lb_aerobic, '/home/mjenior/Desktop/repos/Cdiff_modeling/data/riptide_models/iJO1366_lb_aerobic.sbml')
cobra.io.save_json_model(iJO1366_lb_aerobic, '/home/mjenior/Desktop/repos/Cdiff_modeling/data/riptide_models/iJO1366_lb_aerobic.json')
# Correct the sample labels
def label_flux_samples(file_name, label):
new_name = file_name.rstrip('tsv') + 'format.tsv'
new_file = open(new_name, 'w')
with open(file_name, 'r') as samples:
header = samples.readline()
header = 'sample\t' + header
new_file.write(header)
current = 1
for line in samples:
line = label + '_' + str(current) + '\t' + line
new_file.write(line)
current += 1
new_file.close()
# Write chosen flux sample tables to tsvs
m9_aerobic_samples.to_csv('/home/mjenior/Desktop/repos/Cdiff_modeling/data/flux_samples/M9_aerobic.flux_samples.tsv', sep='\t')
m9_anaerobic_samples.to_csv('/home/mjenior/Desktop/repos/Cdiff_modeling/data/flux_samples/M9_anaerobic.flux_samples.tsv', sep='\t')
lb_samples.to_csv('/home/mjenior/Desktop/repos/Cdiff_modeling/data/flux_samples/LB_aerobic.flux_samples.tsv', sep='\t')
pfba_samples.to_csv('/home/mjenior/Desktop/repos/Cdiff_modeling/data/flux_samples/pFBA.flux_samples.tsv', sep='\t')
label_flux_samples('/home/mjenior/Desktop/repos/Cdiff_modeling/data/flux_samples/M9_aerobic.flux_samples.tsv', 'm9_aer')
label_flux_samples('/home/mjenior/Desktop/repos/Cdiff_modeling/data/flux_samples/M9_anaerobic.flux_samples.tsv', 'm9_anaer')
label_flux_samples('/home/mjenior/Desktop/repos/Cdiff_modeling/data/flux_samples/LB_aerobic.flux_samples.tsv', 'lb')
label_flux_samples('/home/mjenior/Desktop/repos/Cdiff_modeling/data/flux_samples/pFBA.flux_samples.tsv', 'pfba')
###Output
_____no_output_____
###Markdown
Context-specific Essentiality
###Code
import cobra.flux_analysis
iJO1366_essential_genes = cobra.flux_analysis.find_essential_genes(iJO1366)
print('Essential genes: ' + str(len(iJO1366_essential_genes)))
iJO1366_pfba_essential_genes = cobra.flux_analysis.find_essential_genes(iJO1366_pfba)
iJO1366_pfba_essential_genes = set([x.id for x in iJO1366_pfba_essential_genes])
print('Essential genes: ' + str(len(iJO1366_pfba_essential_genes)))
iJO1366_lb_aerobic_essential_genes = cobra.flux_analysis.find_essential_genes(iJO1366_lb_aerobic)
iJO1366_lb_aerobic_essential_genes = set([x.id for x in iJO1366_lb_aerobic_essential_genes])
print('Essential genes: ' + str(len(iJO1366_lb_aerobic_essential_genes)))
iJO1366_m9_aerobic_essential_genes = cobra.flux_analysis.find_essential_genes(iJO1366_m9_aerobic)
iJO1366_m9_aerobic_essential_genes = set([x.id for x in iJO1366_m9_aerobic_essential_genes])
print('Essential genes: ' + str(len(iJO1366_m9_aerobic_essential_genes)))
iJO1366_m9_anaerobic_essential_genes = cobra.flux_analysis.find_essential_genes(iJO1366_m9_anaerobic)
iJO1366_m9_anaerobic_essential_genes = set([x.id for x in iJO1366_m9_anaerobic_essential_genes])
print('Essential genes: ' + str(len(iJO1366_m9_anaerobic_essential_genes)))
# Find those genes shared in all models
core_essential = iJO1366_pfba_essential_genes.intersection(iJO1366_lb_aerobic_essential_genes)
core_essential = core_essential.intersection(iJO1366_m9_aerobic_essential_genes)
core_essential = core_essential.intersection(iJO1366_m9_anaerobic_essential_genes)
print('Essential in all GENREs: ' + str(len(core_essential)))
# Subtract as background from each
iJO1366_pfba_essential_genes = iJO1366_pfba_essential_genes.difference(core_essential)
iJO1366_lb_aerobic_essential_genes = iJO1366_lb_aerobic_essential_genes.difference(core_essential)
iJO1366_m9_aerobic_essential_genes = iJO1366_m9_aerobic_essential_genes.difference(core_essential)
iJO1366_m9_anaerobic_essential_genes = iJO1366_m9_anaerobic_essential_genes.difference(core_essential)
# Find non-essentiality across models
iJO1366_pfba_genes = set([x.id for x in iJO1366_pfba.genes])
iJO1366_lb_aerobic_genes = set([x.id for x in iJO1366_lb_aerobic.genes])
iJO1366_m9_aerobic_genes = set([x.id for x in iJO1366_m9_aerobic.genes])
iJO1366_m9_anaerobic_genes = set([x.id for x in iJO1366_m9_anaerobic.genes])
# Compare overlapping genes
total_genes = set()
total_genes |= iJO1366_pfba_essential_genes
total_genes |= iJO1366_lb_aerobic_essential_genes
total_genes |= iJO1366_m9_aerobic_essential_genes
total_genes |= iJO1366_m9_anaerobic_essential_genes
with open('/home/mjenior/Desktop/repos/Cdiff_modeling/data/essentiality.tsv', 'w') as outfile:
outfile.write('gene\tpfba\tlb_aerobic\tm9_aerobic\tm9_anaerobic\n')
for gene in total_genes:
entry = ['filler','filler','filler','filler']
if gene in iJO1366_pfba_essential_genes:
entry[0] = 2
elif gene in iJO1366_pfba_genes:
entry[0] = 1
else:
entry[0] = 0
if gene in iJO1366_lb_aerobic_essential_genes:
entry[1] = 2
elif gene in iJO1366_lb_aerobic_genes:
entry[1] = 1
else:
entry[1] = 0
if gene in iJO1366_m9_aerobic_essential_genes:
entry[2] = 2
elif gene in iJO1366_m9_aerobic_genes:
entry[2] = 1
else:
entry[2] = 0
if gene in iJO1366_m9_anaerobic_essential_genes:
entry[3] = 2
elif gene in iJO1366_m9_anaerobic_genes:
entry[3] = 1
else:
entry[3] = 0
entry = gene + '\t' + '\t'.join([str(x) for x in entry]) + '\n'
outfile.write(entry)
###Output
Essential in all GENREs: 187
###Markdown
Metatranscriptomic analysis
###Code
def find_source(model, met_id):
generating = set()
for rxn in model.reactions:
for met in rxn.products:
if met_id in met.id:
generating |= set([rxn.id])
print('Metabolite sources: ' + str(len(generating)))
return generating
clinda_k12_metaT = {}
with open('/home/mjenior/Desktop/repos/Cdiff_modeling/data/transcript/clinda_k12.mapped.norm.tsv', 'r') as transcription:
header = transcription.readline()
for line in transcription:
line = line.split()
clinda_k12_metaT[line[0]] = float(line[2])
# Eliminate O2 exchange
iJO1366.reactions.get_by_id('EX_o2_e').bounds = (0.0, 0.0) # make anaerobic
iJO1366_m9_anaerobic, m9_anaerobic_samples = riptide(iJO1366, m9_anaerobic)
m9_anaerobic_atp = find_source(iJO1366_m9_anaerobic, 'atp_c')
m9_anaerobic_atp
max_doubling_time(iJO1366_m9_anaerobic)
iJO1366_invivo_metaT, invivo_metaT_samples = riptide(iJO1366, clinda_k12_metaT)
invivo_rates = collect_growth_rates(invivo_metaT_samples, 'BIOMASS_Ec_iJO1366_WT_53p95M')
print([min(invivo_rates), numpy.median(invivo_rates), max(invivo_rates)])
with open('/home/mjenior/Desktop/repos/Cdiff_modeling/data/invivo_growth_rates.tsv', 'w') as output_file:
for x in invivo_rates: output_file.write(str(x) + '\n')
invivo_anaerobic_atp = find_source(iJO1366_invivo_metaT, 'atp_c')
vitro_ex = set([x.id for x in iJO1366_m9_anaerobic.reactions if 'EX_' in x.id])
vivo_ex = set([x.id for x in iJO1366_invivo_metaT.reactions if 'EX_' in x.id])
vitro_ex_only = vitro_ex.difference(vivo_ex)
vitro_ex_only_input = set()
for y in vitro_ex_only:
if abs(iJO1366_m9_anaerobic.reactions.get_by_id(y).lower_bound) > abs(iJO1366_m9_anaerobic.reactions.get_by_id(y).upper_bound):
vitro_ex_only_input |= set([y])
vivo_ex_only = vivo_ex.difference(vitro_ex)
vivo_ex_only_input = set()
for y in vivo_ex_only:
if abs(iJO1366_invivo_metaT.reactions.get_by_id(y).lower_bound) > abs(iJO1366_invivo_metaT.reactions.get_by_id(y).upper_bound):
vivo_ex_only_input |= set([y])
for x in vitro_ex_only_input: print(iJO1366_m9_anaerobic.reactions.get_by_id(x).reactants[0].name)
for x in vivo_ex_only_input: print(iJO1366_invivo_metaT.reactions.get_by_id(x).reactants[0].name)
max_doubling_time(iJO1366_invivo_metaT)
# Write flux sample tables to tsv
m9_anaerobic_samples.to_csv('/home/mjenior/Desktop/repos/Cdiff_modeling/data/flux_samples/invitro.flux_samples.tsv', sep='\t')
label_flux_samples('/home/mjenior/Desktop/repos/Cdiff_modeling/data/flux_samples/invitro.flux_samples.tsv', 'invitro')
invivo_metaT_samples.to_csv('/home/mjenior/Desktop/repos/Cdiff_modeling/data/flux_samples/invivo.flux_samples.tsv', sep='\t')
label_flux_samples('/home/mjenior/Desktop/repos/Cdiff_modeling/data/flux_samples/invivo.flux_samples.tsv', 'invivo')
invivo_metaT_times = collect_doubling_times(invivo_metaT_samples, 'BIOMASS_Ec_iJO1366_WT_53p95M')
print([min(invivo_metaT_times), numpy.median(invivo_metaT_times), max(invivo_metaT_times)])
invivo_metaT_rates = collect_growth_rates(invivo_metaT_samples, 'BIOMASS_Ec_iJO1366_WT_53p95M')
print([min(invivo_metaT_rates), numpy.median(invivo_metaT_rates), max(invivo_metaT_rates)])
m9 = cobra.io.read_sbml_model('/home/mjenior/Desktop/repos/Cdiff_modeling/data/riptide_models/iJO1366_m9_aerobic.sbml')
lb = cobra.io.read_sbml_model('/home/mjenior/Desktop/repos/Cdiff_modeling/data/riptide_models/iJO1366_lb_aerobic.sbml')
m9_nadph = find_source(m9, 'nadph_c')
lb_nadph = find_source(lb, 'nadph_c')
###Output
Metabolite sources: 4
Metabolite sources: 3
###Markdown
Testing Previous Integration Algorithms
###Code
# Comparison to GIMME and iMAT
import copy
import cobra
from driven.flux_analysis.transcriptomics import *
# Read in formatted data
m9_aerobic_driven = ExpressionProfile.from_csv('/home/mjenior/Desktop/m9_aerobic_expression.csv')
# iMAT
start_time = time.time()
iJO1366_imat_result = imat(iJO1366, expression_profile=m9_aerobic_driven, low_cutoff=100, high_cutoff=1000, fraction_of_optimum=0.75)
duration = time.time() - start_time
duration = round(duration)
print('iMAT finished in ' + str(duration) + ' seconds')
iJO1366_imat_result
iJO1366_imat_result.data_frame
# GIMME
start_time = time.time()
iJO1366_gimme_result = gimme(iJO1366, cutoff=100, expression_profile=m9_aerobic_driven, fraction_of_optimum=0.75)
duration = time.time() - start_time
duration = round(duration)
print('GIMME finished in ' + str(duration) + ' seconds')
iJO1366_gimme_result
test = iJO1366_gimme_result.trim_model(iJO1366)
test.slim_optimize()
iJO1366_gimme_result.data_frame
# Constrain fluxes to match output - GIMME
iJO1366_gimme = copy.deepcopy(iJO1366)
for rxn_id, flux in iJO1366_gimme_result.fluxes.items():
iJO1366_gimme.reactions.get_by_id(rxn_id).bounds = (flux, flux)
# Additional test datasets
# Gao, Y., Yurkovich, J. T., Seo, S. W., Kabimoldayev, I., Dräger, A., Chen, K., … Palsson, B. O. (2018).
# Systematic discovery of uncharacterized transcription factors in Escherichia coli K-12 MG1655.
# Nucleic Acids Research. https://doi.org/10.1093/nar/gky752
mops_glc = {}
with open('/home/mjenior/Desktop/Gao_et_al_2018/GSM3022135_wt_glc1.txt', 'r') as transcription:
header = transcription.readline()
for line in transcription:
line = line.split()
if len(line) < 2: continue
mops_glc[line[0]] = float(line[1])
wt_ph5 = {}
with open('/home/mjenior/Desktop/Gao_et_al_2018/GSM3108934_wt_ph5_1.txt', 'r') as transcription:
header = transcription.readline()
for line in transcription:
line = line.split()
if len(line) < 2: continue
wt_ph5[line[0]] = float(line[1])
wt_ph8 = {}
with open('/home/mjenior/Desktop/Gao_et_al_2018/GSM3108936_wt_ph8_1.txt', 'r') as transcription:
header = transcription.readline()
for line in transcription:
line = line.split()
if len(line) < 2: continue
wt_ph8[line[0]] = float(line[1])
ydcI_ph5 = {}
with open('/home/mjenior/Desktop/Gao_et_al_2018/GSM3108944_delydci_ph5_1.txt', 'r') as transcription:
header = transcription.readline()
for line in transcription:
line = line.split()
if len(line) < 2: continue
ydcI_ph5[line[0]] = float(line[1])
ydcI_ph8 = {}
with open('/home/mjenior/Desktop/Gao_et_al_2018/GSM3108946_delydci_ph8_1.txt', 'r') as transcription:
header = transcription.readline()
for line in transcription:
line = line.split()
if len(line) < 2: continue
ydcI_ph8[line[0]] = float(line[1])
###Output
_____no_output_____
###Markdown
Analyzing the C. difficile 630 model
###Code
# Read in transcript abundance table
def read_transcription_all(transcript_table):
cef = {}
clinda = {}
strep = {}
gnoto = {}
with open(transcript_table, 'r') as transcripts:
firstline = transcripts.readline()
for line in transcripts:
line = line.split(',')
cef[str(line[0])] = float(line[1])
clinda[str(line[0])] = float(line[2])
strep[str(line[0])] = float(line[3])
gnoto[str(line[0])] = float(line[4])
return cef, clinda, strep, gnoto
# Read in all transcription
cef_dict, clinda_dict, strep_dict, gnoto_dict = read_transcription_all('data/transcript/cdf_transcription.sub.format.csv')
iCdJ794 = cobra.io.read_sbml_model('data/iCdJ794.sbml')
iCdJ794_cef, iCdJ794_cef_samples = riptide(iCdJ794, cef_dict)
iCdJ794_clinda, iCdJ794_clinda_samples = riptide(iCdJ794, clinda_dict)
iCdJ794_strep, iCdJ794_strep_samples = riptide(iCdJ794, strep_dict)
iCdJ794_gnoto, iCdJ794_gnoto_samples = riptide(iCdJ794, gnoto_dict)
iCdJ794_pfba, iCdJ794_pfba_samples = riptide(iCdJ794, gnoto_dict, coefficients=[1.0,1.0,1.0,1.0,1.0])
# Save contextualized SBMLs
cobra.io.write_sbml_model(iCdJ794_cef, '/home/mjenior/Desktop/iCdJ794_cef.sbml')
cobra.io.write_sbml_model(iCdJ794_clinda, '/home/mjenior/Desktop/iCdJ794_clinda.sbml')
cobra.io.write_sbml_model(iCdJ794_strep, '/home/mjenior/Desktop/iCdJ794_strep.sbml')
cobra.io.write_sbml_model(iCdJ794_gnoto, '/home/mjenior/Desktop/iCdJ794_gnoto.sbml')
cobra.io.write_sbml_model(iCdJ794_pfba, '/home/mjenior/Desktop/iCdJ794_pfba.sbml')
# Collect fluxes
pfba_biomass = list(iCdJ794_pfba_samples['biomass'])
cef_biomass = list(iCdJ794_cef_samples['biomass'])
clinda_biomass = list(iCdJ794_clinda_samples['biomass'])
strep_biomass = list(iCdJ794_strep_samples['biomass'])
gnoto_biomass = list(iCdJ794_gnoto_samples['biomass'])
# Convert to per hour rate
pfba_biomass_rates = [round((x / 60.0), 3) for x in pfba_biomass]
cef_biomass_rates = [round((x / 60.0), 3) for x in cef_biomass]
clinda_biomass_rates = [round((x / 60.0), 3) for x in clinda_biomass]
strep_biomass_rates = [round((x / 60.0), 3) for x in strep_biomass]
gnoto_biomass_rates = [round((x / 60.0), 3) for x in gnoto_biomass]
# Convert to strings and label
pfba_rate_str = [str(x) for x in pfba_biomass_rates]
pfba_rate_str = 'base_pfba\t' + '\t'.join(pfba_rate_str) + '\n'
cef_rate_str = [str(x) for x in cef_biomass_rates]
cef_rate_str = 'cef\t' + '\t'.join(cef_rate_str) + '\n'
clinda_rate_str = [str(x) for x in clinda_biomass_rates]
clinda_rate_str = 'clinda\t' + '\t'.join(clinda_rate_str) + '\n'
strep_rate_str = [str(x) for x in strep_biomass_rates]
strep_rate_str = 'strep\t' + '\t'.join(strep_rate_str) + '\n'
gnoto_rate_str = [str(x) for x in gnoto_biomass_rates]
gnoto_rate_str = 'gnoto\t' + '\t'.join(gnoto_rate_str) + '\n'
# Write to file
with open('/home/mjenior/Desktop/cdf_growth_rates.tsv', 'w') as rates:
rates.write(pfba_rate_str)
rates.write(cef_rate_str)
rates.write(clinda_rate_str)
rates.write(strep_rate_str)
rates.write(gnoto_rate_str)
def return_exchanges(model):
exchanges = set()
for rxn in model.reactions:
if len(rxn.reactants) == 0 or len(rxn.products) == 0:
exchanges |= set([rxn.id])
return exchanges
# Collect context-specific exchanges
cef_exchanges = return_exchanges(iCdJ794_cef)
clinda_exchanges = return_exchanges(iCdJ794_clinda)
strep_exchanges = return_exchanges(iCdJ794_strep)
pfba_exchanges = return_exchanges(iCdJ794_pfba)
cef_only = cef_exchanges.difference(clinda_exchanges)
cef_only = cef_only.difference(strep_exchanges)
cef_only = cef_only.difference(pfba_exchanges)
cef_only
clinda_only = clinda_exchanges.difference(cef_exchanges)
clinda_only = clinda_only.difference(strep_exchanges)
clinda_only = clinda_only.difference(pfba_exchanges)
clinda_only
strep_only = strep_exchanges.difference(clinda_exchanges)
strep_only = strep_only.difference(cef_exchanges)
strep_only = strep_only.difference(pfba_exchanges)
strep_only
pfba_only = pfba_exchanges.difference(clinda_exchanges)
pfba_only = pfba_only.difference(cef_exchanges)
pfba_only = pfba_only.difference(strep_exchanges)
pfba_only
cef_EX_cpd00221_e = list(iCdJ794_cef_samples['EX_cpd00221_e'])
strep_EX_cpd00138_e = list(iCdJ794_strep_samples['EX_cpd00138_e'])
pfba_EX_cpd00064_e = list(iCdJ794_pfba_samples['EX_cpd00064_e'])
pfba_EX_cpd05178_e = list(iCdJ794_pfba_samples['EX_cpd05178_e'])
import numpy
cef_EX_cpd00221_e = numpy.quantile(cef_EX_cpd00221_e, [0.25,0.5,0.75])
strep_EX_cpd00138_e = numpy.quantile(strep_EX_cpd00138_e, [0.25,0.5,0.75])
pfba_EX_cpd00064_e = numpy.quantile(pfba_EX_cpd00064_e, [0.25,0.5,0.75])
pfba_EX_cpd05178_e = numpy.quantile(pfba_EX_cpd05178_e, [0.25,0.5,0.75])
print(list(cef_EX_cpd00221_e))
print(list(strep_EX_cpd00138_e))
print(list(pfba_EX_cpd00064_e))
print(list(pfba_EX_cpd05178_e))
# Substrate demand calculations
import numpy
import pandas
import operator
from cobra.flux_analysis.parsimonious import *
# Calculate extracellular metabolite shadow prices
def sampled_exchange_fluxes(model, flux_samples):
exch_fluxes = {}
for rxn in model.reactions:
if 'EX_' not in rxn.id: continue
exch_fluxes[rxn.id] = []
print('Collecting exchange fluxes...')
with model as m:
for index in range(0, len(flux_samples.index)):
for rxn in flux_samples.columns:
try:
m.reactions.get_by_id(rxn).bounds = (list(flux_samples[rxn])[index], list(flux_samples[rxn])[index])
except:
continue
solution = m.optimize()
for rxn, flux in solution.fluxes.items():
if 'EX_' not in rxn: continue
exch_fluxes[rxn].append(flux)
print('Calculating ranges...')
summary_stats = {}
for rxn in exch_fluxes.keys():
substrate = list(model.reactions.get_by_id(rxn).metabolites)[0].name
summary_stats[rxn] = [substrate, numpy.percentile(exch_fluxes[rxn], 25), numpy.median(exch_fluxes[rxn]), numpy.percentile(exch_fluxes[rxn], 75),]
ranking = []
for rxn in summary_stats.keys(): ranking.append([rxn] + summary_stats[rxn])
ranking = sorted(ranking, key=operator.itemgetter(3), reverse=True)
temp_dict = {}
for x in ranking: temp_dict[x[0]] = x[1:]
flux_df = pandas.DataFrame.from_dict(temp_dict, orient='index', columns=['Name', 'Q25', 'Median', 'Q75'])
return flux_df
sampled_exchange_fluxes(iJO1366_glucose, glucose_flux_samples)
###Output
Collecting exchange fluxes...
Calculating ranges...
###Markdown
Cross-reference shadow price with metabolomics data
###Code
# Metabolomics
# Read in metabolomics data and collect intensities for two sample groups
def read_metabolomics(intensities_file, group1, group2):
    with open(intensities_file, 'r') as intensities:
        header = intensities.readline().split()
        # Map each sample name to its column position in the header
        group1_idx = [header.index(sample) for sample in group1]
        group2_idx = [header.index(sample) for sample in group2]
        group1_dict = {}
        group2_dict = {}
        for line in intensities:
            line = line.split()
            group1_dict[line[0]] = [float(line[x]) for x in group1_idx]
            group2_dict[line[0]] = [float(line[x]) for x in group2_idx]
    return group1_dict, group2_dict
# Test direction of change and significant differences in metabolite values
import scipy.stats
def test_differences(dict1, dict2, cutoff=0.05):
    diff_dict = {}
    for index in dict1.keys():
        median1 = numpy.median(dict1[index])
        median2 = numpy.median(dict2[index])
        if median1 > median2:
            direction = 1
        elif median1 < median2:
            direction = -1
        else:
            direction = 0
        pval = round(list(scipy.stats.wilcoxon(x=dict1[index], y=dict2[index], zero_method='wilcox'))[1], 3)
        # Keep only metabolites with a significant difference
        if pval > cutoff:
            continue
        diff_dict[index] = [direction, pval]
    return diff_dict
# Identify required growth substrates
from cobra.flux_analysis import flux_variability_analysis
def identify_requirements(model):
    with model as m:
        solution = flux_variability_analysis(m)
        necessary = []
        for index, row in solution.iterrows():
            if 'EX_' not in index:
                continue
            elif row['minimum'] < 0.0 and row['maximum'] <= 0.0:
                cpd = list(m.reactions.get_by_id(index).metabolites)[0].id
                necessary.append(cpd)
    return necessary
# Alter exchange fluxes based on metabolomic shifts
def integrate_changes(model, shifts, required):
    new_model = copy.deepcopy(model)
    for rxn in list(new_model.reactions):
        if 'EX_' in rxn.id:
            cpd = list(rxn.metabolites)[0].id
            if cpd in required:
                continue
            # Remove the exchange if a decrease was recorded for it
            # (shifts is assumed to map reaction IDs to [direction, p-value])
            elif rxn.id in shifts.keys() and shifts[rxn.id][0] < 0:
                rxn.remove_from_model()
    return new_model
# Read in metabolomic results
untreated_mock = {}
cef_630 = {}
cef_mock = {}
clinda_630 = {}
clinda_mock = {}
strep_630 = {}
strep_mock = {}
gnoto_630 = {}
gnoto_mock = {}
with open('data/metabolome/scaled_intensities.tsv', 'r') as metabolome:
firstLine = metabolome.readline()
for line in metabolome:
line = line.split()
untreated_mock[line[0]] = numpy.median([float(x) for x in line[5:14]])
cef_630[line[0]] = numpy.median([float(x) for x in line[14:23]])
cef_mock[line[0]] = numpy.median([float(x) for x in line[23:32]])
clinda_630[line[0]] = numpy.median([float(x) for x in line[32:41]])
clinda_mock[line[0]] = numpy.median([float(x) for x in line[41:50]])
strep_630[line[0]] = numpy.median([float(x) for x in line[50:59]])
strep_mock[line[0]] = numpy.median([float(x) for x in line[59:68]])
gnoto_630[line[0]] = numpy.median([float(x) for x in line[68:77]])
gnoto_mock[line[0]] = numpy.median([float(x) for x in line[77:86]])
# calculates change in concentration of a metabolite across metabolomes
def compare_concentration(metabolome1, metabolome2, metabolite):
conc1 = 10 ** metabolome1[metabolite]
conc2 = 10 ** metabolome2[metabolite]
change = conc2 - conc1
if change == 0.0:
change = change
elif change < 0.0:
change = -numpy.log10(abs(change))
else:
change = numpy.log10(change)
print(metabolite + ': ' + str(change))
# Cefoperazone
compare_concentration(cef_mock, cef_630, 'fructose')
compare_concentration(cef_mock, cef_630, 'N-acetyl-beta-glucosaminylamine')
# Clindamycin
compare_concentration(clinda_mock, clinda_630, 'fructose')
# Streptomycin
compare_concentration(strep_mock, strep_630, 'fructose')
# Gnotobiotic
compare_concentration(gnoto_mock, gnoto_630, 'fructose')
compare_concentration(gnoto_mock, gnoto_630, 'proline')
###Output
fructose: 1.411164101349768
proline: -2.2705935392452883
|
scripts/gan/04_keras_gan_load.ipynb | ###Markdown
- Reference - Blog: https://work-in-progress.hatenablog.com/entry/2019/04/06/113629 - Source: https://github.com/eriklindernoren/Keras-GAN/blob/master/gan/gan.py - Source: https://github.com/eriklindernoren/Keras-GAN/pull/117 - Added functionality to save/load Keras model for intermittent training.
###Code
import os
os.makedirs('data/images', exist_ok=True)
os.makedirs('data/saved_models', exist_ok=True)
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Dense, Reshape, Flatten
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.models import Sequential, Model, model_from_json
from tensorflow.keras.optimizers import Adam
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime
class GAN():
def __init__(self):
self.history = pd.DataFrame({}, columns=['d_loss', 'acc', 'g_loss'])
self.img_save_dir = 'data/images'
self.model_save_dir = 'data/saved_models'
self.discriminator_name = 'discriminator_model'
self.generator_name = 'generator_model'
self.combined_name = 'combined_model'
self.discriminator = None
self.generator = None
self.combined = None
self.img_rows = 28
self.img_cols = 28
self.channels = 1
self.img_shape = (self.img_rows, self.img_cols, self.channels)
self.latent_dim = 100
def init(self):
optimizer = Adam(0.0002, 0.5)
self.discriminator = self.build_discriminator()
self.discriminator.compile(loss='binary_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
self.generator = self.build_generator()
z = Input(shape=(self.latent_dim,))
img = self.generator(z)
# For the combined model we will only train the generator
self.discriminator.trainable = False
# The discriminator takes generated images as input and determines validity
validity = self.discriminator(img)
# The combined model (stacked generator and discriminator)
# Trains the generator to fool the discriminator
self.combined = Model(z, validity)
self.combined.summary()
self.combined.compile(loss='binary_crossentropy', optimizer=optimizer)
def load(self):
optimizer = Adam(0.0002, 0.5)
self.discriminator = self.load_model(self.discriminator_name)
self.discriminator.compile(loss='binary_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
self.discriminator.summary()
self.generator = self.load_model(self.generator_name)
self.generator.summary()
#z = Input(shape=(self.latent_dim,))
#img = self.generator(z)
# For the combined model we will only train the generator
self.discriminator.trainable = False
# The discriminator takes generated images as input and determines validity
#validity = self.discriminator(img)
# The combined model (stacked generator and discriminator)
# Trains the generator to fool the discriminator
#self.combined = Model(z, validity)
self.combined = self.load_model(self.combined_name)
self.combined.summary()
self.combined.compile(loss='binary_crossentropy', optimizer=optimizer)
def build_generator(self):
model = Sequential()
model.add(Dense(256, input_dim=self.latent_dim))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization(momentum=0.8))
model.add(Dense(512))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization(momentum=0.8))
model.add(Dense(1024))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization(momentum=0.8))
model.add(Dense(np.prod(self.img_shape), activation='tanh'))
model.add(Reshape(self.img_shape))
model.summary()
noise = Input(shape=(self.latent_dim,))
img = model(noise)
return Model(noise, img)
def build_discriminator(self):
model = Sequential()
model.add(Flatten(input_shape=self.img_shape))
model.add(Dense(512))
model.add(LeakyReLU(alpha=0.2))
model.add(Dense(256))
model.add(LeakyReLU(alpha=0.2))
model.add(Dense(1, activation='sigmoid'))
model.summary()
img = Input(shape=self.img_shape)
validity = model(img)
return Model(img, validity)
def train(self, epochs, batch_size=128, sample_interval=50, save_interval=50):
# Load the dataset
(X_train, _), (_, _) = mnist.load_data()
print(X_train.shape)
# Rescale -1 to 1
X_train = X_train / 127.5 - 1.
X_train = np.expand_dims(X_train, axis=3)
print(X_train.shape)
# Adversarial ground truths
valid = np.ones((batch_size, 1))
fake = np.zeros((batch_size, 1))
print(datetime.datetime.now().isoformat(), 'Epoch Start')
for epoch in range(epochs):
# Select a random batch of images
idx = np.random.randint(0, X_train.shape[0], batch_size)
imgs = X_train[idx]
noise = np.random.normal(0, 1, (batch_size, self.latent_dim))
gen_imgs = self.generator.predict(noise)
d_loss_real = self.discriminator.train_on_batch(imgs, valid)
d_loss_fake = self.discriminator.train_on_batch(gen_imgs, fake)
d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
noise = np.random.normal(0, 1, (batch_size, self.latent_dim))
g_loss = self.combined.train_on_batch(noise, valid)
# print (datetime.datetime.now().isoformat(), '%d [D loss: %f, acc.: %.2f%%] [G loss: %f]' % (epoch, d_loss[0], 100*d_loss[1], g_loss))
self.history = self.history.append({'d_loss': d_loss[0], 'acc': d_loss[1], 'g_loss': g_loss}, ignore_index=True)
if epoch % sample_interval == 0:
print (datetime.datetime.now().isoformat(), '%d [D loss: %f, acc.: %.2f%%] [G loss: %f]' % (epoch, d_loss[0], 100*d_loss[1], g_loss))
self.sample_images(epoch)
if epoch != 0 and epoch % save_interval == 0:
self.save_models()
print(datetime.datetime.now().isoformat(), 'Epoch End')
def sample_images(self, epoch):
r, c = 5, 5
noise = np.random.normal(0, 1, (r * c, self.latent_dim))
gen_imgs = self.generator.predict(noise)
# Rescale images 0 - 1
gen_imgs = 0.5 * gen_imgs + 0.5
fig, axs = plt.subplots(r, c)
cnt = 0
for i in range(r):
for j in range(c):
axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
axs[i, j].axis('off')
cnt += 1
file_name = '{}.png'.format(epoch)
file_path = os.path.join(self.img_save_dir, file_name)
fig.savefig(file_path)
plt.close()
def plot_history(self, columns=[]):
if len(columns) == 0:
columns = ['d_loss', 'acc', 'g_loss']
self.history[columns].plot()
def save_models(self):
self.save_model(self.discriminator, self.discriminator_name)
self.save_model(self.generator, self.generator_name)
self.save_model(self.combined, self.combined_name)
def save_model(self, model, model_name):
json_path = os.path.join(self.model_save_dir, '{}.json'.format(model_name))
weights_path = os.path.join(self.model_save_dir, '{}.h5'.format(model_name))
with open(json_path, 'w') as f:
f.write(model.to_json())
model.save_weights(weights_path)
print (datetime.datetime.now().isoformat(), 'Model saved.', model_name)
def load_model(self, model_name):
json_path = os.path.join(self.model_save_dir, '{}.json'.format(model_name))
weights_path = os.path.join(self.model_save_dir, '{}.h5'.format(model_name))
with open(json_path, 'r') as f:
loaded_json = f.read()
model = model_from_json(loaded_json)
model.load_weights(weights_path)
return model
gan = GAN()
#gan.load()
gan.init()
gan.train(epochs=100, batch_size=32, sample_interval=10, save_interval=10)
gan.save_models()
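# A minimal usage sketch (assumption, not part of the original run): resuming training
# in a later session from the files written by save_models().
# gan_resumed = GAN()
# gan_resumed.load()
# gan_resumed.train(epochs=100, batch_size=32, sample_interval=10, save_interval=10)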
###Output
2020-07-20T16:09:46.480285 Model saved. discriminator_model
2020-07-20T16:09:46.496689 Model saved. generator_model
2020-07-20T16:09:46.517935 Model saved. combined_model
|
thinkpython/10_listes_student.ipynb | ###Markdown
Lists Lists in Python are an ordered collection of objects. The objects can be of various types. A list can contain another list. A list is a sequence* A list is delimited by square brackets `[]`* The elements are separated by a comma `,`* An element can be accessed by its index `L[1]`* A list can be empty `L=[]`
###Code
a = [10, 20, 30]
b = ['banane', 'orange', 'pomme']
c = [1, 12, 123]
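# As mentioned above, a list can also be empty (illustrative addition)
empty_list = []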
###Output
_____no_output_____
###Markdown
An **index** is used to access a list element.
###Code
a[1], b[2]
###Output
_____no_output_____
###Markdown
A list can **contain elements of different types**.
###Code
c = [1, 1.2, '1.2']
for i in c:
print(i, ' - ', type(i))
###Output
_____no_output_____
###Markdown
A list inside another list is said to be **nested**.
###Code
[1, 2, [3, 4]]
###Output
_____no_output_____
###Markdown
A list that contains no elements is an **empty** list. Lists are mutable Unlike a string, a list is **mutable**
###Code
a = [10, 20, 30]
a
a[1] = 'twenty'
a
###Output
_____no_output_____
###Markdown
The second element `a[1]`, which contained the numeric value 20, has been replaced by the string `'twenty'`.
###Code
b = list(range(10))
b
###Output
_____no_output_____
###Markdown
A slice of a list can be replaced by a single element.
###Code
b[3:7] = 'x'
b
###Output
_____no_output_____
###Markdown
An element can be replaced by a list.
###Code
b[4] = [10, 20]
b
###Output
_____no_output_____
###Markdown
An element can be replaced by a reference to a list.
###Code
b[5] = a
b
###Output
_____no_output_____
###Markdown
If the inserted list `a` is modified, the containing list `b` is also modified. A list variable therefore does not hold a copy of the list, but a reference to it.
###Code
a[0] = 'xxx'
b
###Output
_____no_output_____
###Markdown
Traversing a list The `for` loop iterates over a list, exactly as it does for strings.
###Code
for i in a:
print(i)
###Output
_____no_output_____
###Markdown
To modify a list, we need the index. The loop traverses the list and multiplies each element by 2.
###Code
n = len(a)
for i in range(n):
a[i] *= 2
a
###Output
_____no_output_____
###Markdown
If a list is empty, the loop body is never executed.
###Code
for x in []:
print('this never prints')
###Output
_____no_output_____
###Markdown
**Exercise** Compare iterating over: a list `[1, 2, 3]`, a string `'abc'`, and a range `range(3)` List operations The addition `+` and multiplication `*` operators for numbers have a different interpretation for lists.
###Code
a = [1, 2, 3]
b = ['a', 'b']
###Output
_____no_output_____
###Markdown
The `+` operator concatenates lists.
###Code
a+b
###Output
_____no_output_____
###Markdown
The `*` operator repeats a list.
###Code
b * 3
[0] * 10
###Output
_____no_output_____
###Markdown
The `list` function turns an iterable such as `range(10)` into an actual list.
###Code
list(range(10))
###Output
_____no_output_____
###Markdown
The `list` function also turns strings into an actual list.
###Code
list('hello')
###Output
_____no_output_____
###Markdown
List slices
###Code
t = list('abcdef')
t
###Output
_____no_output_____
###Markdown
The **slice** operator `[m:n]` can be used with lists.
###Code
t[1:3]
###Output
_____no_output_____
###Markdown
All elements from the beginning up to a given index:
###Code
t[:4]
###Output
_____no_output_____
###Markdown
All elements from a given index up to the end:
###Code
t[4:]
a = list(range(10))
a
a[:4]
a[4:]
###Output
_____no_output_____
###Markdown
List methods The `append` method adds an element to the end of a list.
###Code
a = [1, 2, 3]
a.append('a')
a
###Output
_____no_output_____
###Markdown
The `extend` method adds the elements of a list to the end of another list.
###Code
a.extend([10, 20])
a
###Output
_____no_output_____
###Markdown
The `sort` method sorts the elements of a list. It does not return a new sorted list; it modifies the list in place.
###Code
c = [23, 12, 54, 2]
c.sort()
c
###Output
_____no_output_____
###Markdown
The optional `reverse` parameter inverts the sort order.
###Code
c.sort(reverse=True)
c
###Output
_____no_output_____
###Markdown
Letters can be sorted too
###Code
a = list('world')
a.sort()
a
###Output
_____no_output_____
###Markdown
Most list methods return nothing (`None`).
###Code
b = a.sort()
print(a)
print(b)
###Output
_____no_output_____
###Markdown
The built-in function `sorted(L)`, on the other hand, returns a new sorted list.
###Code
a = list('world')
b = sorted(a)
print(a)
print(b)
###Output
_____no_output_____
###Markdown
Map, filter and reduce To add up all the elements of a list, you can initialize the variable `total` to zero and add one element of the list at each iteration. A variable used in this way is called an **accumulator**.
###Code
def somme(t):
total = 0
for i in t:
total += i
return total
b = [1, 2, 32, 42]
somme(b)
###Output
_____no_output_____
###Markdown
Summing the elements of a list is so common that Python provides a built-in `sum` function.
###Code
sum(b)
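# For the "reduce" part of this section's title, the same fold can be written with
# functools.reduce (illustrative sketch, not used elsewhere in this notebook):
# from functools import reduce
# reduce(lambda x, y: x + y, b)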
def tout_en_majuscules(t):
"""t: une liste de mots."""
res = []
for s in t:
res.append(s.capitalize())
return res
tout_en_majuscules(['good', 'hello', 'world'])
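# The same "map" pattern using the built-in map() (illustrative sketch):
# list(map(str.capitalize, ['good', 'hello', 'world']))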
###Output
_____no_output_____
###Markdown
The `isupper` method is true if all letters are uppercase.
###Code
def seulement_majuscules(t):
res = []
for s in t:
if s.isupper():
res.append(s)
return res
b = ['aa', 'AAA', 'Hello', 'HELLO']
seulement_majuscules(b)
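# The same "filter" pattern using the built-in filter() (illustrative sketch):
# list(filter(str.isupper, b))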
###Output
_____no_output_____
###Markdown
A function like `seulement_majuscules` is called a **filter** because it selects only certain elements. Removing elements With the `pop` method you can remove an element.
###Code
a = list('hello')
a
###Output
_____no_output_____
###Markdown
The `pop` method modifies the list and returns an element. Used without an argument, `pop` removes the last element of the list.
###Code
a.pop()
a
###Output
_____no_output_____
###Markdown
Used with an argument, it removes the element at that index from the list.
###Code
a.pop(0)
a
###Output
_____no_output_____
###Markdown
The `del` operator can also remove an element.
###Code
del(a[0])
a
###Output
_____no_output_____
###Markdown
Lists of strings A string is a sequence of characters, and of characters only. A list, on the other hand, is a sequence of elements of any type. The `list` function turns an iterable such as a string into a list.
###Code
s = 'spam'
print(s)
print(list(s))
###Output
_____no_output_____
###Markdown
Since `list` is the name of a built-in function, it should not be used as a variable name. Also avoid the lowercase letter L (`l`), because it looks almost identical to the digit one (`1`); `t` is used here instead. The `split` function splits a sentence into words and returns them in a list.
###Code
s = 'je suis ici en ce moment'
t = s.split()
t
###Output
_____no_output_____
###Markdown
`join` is the inverse of `split`.
###Code
' - '.join(t)
###Output
_____no_output_____
###Markdown
Objects and values Two variables that refer to the same string point to the same object. The `is` operator returns true if both variables point to the same object.
###Code
a = 'banane'
b = 'banane'
a is b
###Output
_____no_output_____
###Markdown
Two variables initialized with the same list are not the same object.
###Code
a = [1, 2, 3]
b = [1, 2, 3]
a is b
###Output
_____no_output_____
###Markdown
In this case we say the two lists are **equivalent**, but not identical, because they are not the same object. Aliasing If a variable is initialized with another variable, then both point to the same object.
###Code
a = [1, 2, 3]
b = a
a is b
###Output
_____no_output_____
###Markdown
If an element of `b` is modified, the variable `a` changes as well.
###Code
b[0] = 42
print(a)
print(b)
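# To get an independent copy instead of a second reference (illustrative sketch):
# c = list(a)  # or a[:]
# c is a       # -> False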
###Output
_____no_output_____
###Markdown
The association between a variable and an object is called a **reference**. In this example there are two references, `a` and `b`, to the same object. If the objects are immutable (strings, tuples) this poses no problem, but with two variables referring to the same list, be careful not to modify it inadvertently. List arguments If a list is passed as a function argument, the function can modify the list.
###Code
def modifie_list(t):
t[0] *= 2 # multiply by two
t[1] = 42 # new assignment
del t[2] # deletion
a = [1, 2, 3, 4, 5]
print(a)
modifie_list(a)
a
b = list('abcde')
modifie_list(b)
b
###Output
_____no_output_____
###Markdown
The `append` method modifies a list, but the `+` operator creates a new list.
###Code
a = [1, 2]
b = a.append(3)
print('a =', a)
print('b =', b)
###Output
_____no_output_____
###Markdown
`append` modifies the list and returns `None`.
###Code
b = a + [4]
print('a =', a)
print('b =', b)
###Output
_____no_output_____
###Markdown
Exercises **Exercise 1** Write a function called `nested_sum` that takes a list of lists of integers and adds up the elements from all of the nested lists.
###Code
def nested_sum(t):
pass
t = [[1, 2], [3], [4, 5, 6]]
nested_sum(t)
###Output
_____no_output_____
###Markdown
**Exercise 2** Write a function called `cumsum` that takes a list of numbers and returns the cumulative sum; that is, a new list where the n-th element is the sum of the first n + 1 elements of the original list.
###Code
def cumsum(t):
pass
t = range(5)
cumsum(t)
###Output
_____no_output_____
###Markdown
**Exercise 3** Write a function called `middle` that takes a list and returns a new list that contains all of the elements except the first and the last.
###Code
def middle(t):
pass
t = list(range(10))
print(t)
print(middle(t))
print(t)
###Output
_____no_output_____
###Markdown
**Exercise 4** Write a function called `chop` that takes a list, modifies it by removing the first and last elements, and returns `None`.
###Code
def chop(t):
pass
t = list(range(10))
print(t)
print(chop(t))
print(t)
###Output
_____no_output_____
###Markdown
**Exercise 5** Write a function called `is_sorted` that takes a list as a parameter and returns True if the list is sorted in ascending order and False otherwise.
###Code
def is_sorted(t):
pass
is_sorted([11, 2, 3])
###Output
_____no_output_____
###Markdown
**Exercise 6** Two words are anagrams if you can rearrange the letters of one to form the other (for example, ALEVIN and NIVELA are anagrams). Write a function called `is_anagram` that takes two strings and returns `True` if they are anagrams.
###Code
def is_anagram(s1, s2):
pass
is_anagram('ALEVIN', 'NIVELA')
is_anagram('ALEVIN', 'NIVEL')
###Output
_____no_output_____
###Markdown
**Exercise 7** Write a function called `has_duplicates` that takes a list and returns `True` if there is at least one element that appears more than once. It should not modify the original list.
###Code
def has_duplicates(t):
pass
t = [1, 2, 3, 4, 1]
has_duplicates(t)
t = [1, 2, 3, 4, '1']
has_duplicates(t)
###Output
_____no_output_____
###Markdown
**Exercise 8** This exercise relates to the so-called birthday paradox, which you can read about at https://fr.wikipedia.org/wiki/Paradoxe_des_anniversaires . If there are 23 students in your class, what are the chances that two of you have the same birthday? You can estimate this probability by generating random samples of 23 birthdays and checking for matches. Hint: you can generate random birthdays with the randint function from the random module.
###Code
import random
def birthdays(n):
pass
m = 1000
n = 0
for i in range(m):
pass
print(n/m)
###Output
_____no_output_____
###Markdown
**Exercise 9** Write a function that reads the file mots.txt from the previous chapter and builds a list with one element per word. Write two versions of this function, one using the append method and the other using the syntax `t = t + [x]`. Which one takes longer to run? Why?
###Code
%%time
fin = open('mots.txt')
t = []
for line in fin:
pass
len(t)
%%time
fin = open('mots.txt')
t = []
i = 0
for line in fin:
pass
###Output
_____no_output_____
###Markdown
The second version gets slower and slower because it has to copy and create a new list each time. **Exercise 10** To check whether a word is in the word list, you could use the `in` operator, but that would be slow, because it checks the words one by one in the order they appear. If the words are in alphabetical order, we can speed things up with a bisection search (also known as binary search), which is similar to what you do when you look up a word in the dictionary. You start in the middle and check whether the word you are looking for comes before the middle word of the list. If it does, you search the first half of the list in the same way. Otherwise, you look in the second half. Either way, you cut the remaining search space in half. If the word list has 130,557 words, it will take about 17 steps to find the word or conclude that it is not there. Write a function called `in_bisect` that takes a sorted list and a target value and returns the index of the value in the list if it is there, or `None` if it is not. Remember that the list must first be sorted alphabetically for this algorithm to work; you will save time if you start by sorting the input list and storing it in a new file (you can use your operating system's sort function if it has one, or else do it in Python), so that you only need to do it once.
###Code
fin = open('mots.txt')
t = []
for line in fin:
mot = line.strip()
t.append(mot)
t.sort()
len(t)
def in_bisect(t, val):
a = 0
b = len(t)-1
    while b >= a:
        i = (b+a) // 2
        print(t[a], t[i], t[b], sep=' - ')
        if val == t[i]:
            return True
        if val > t[i]:
            a = i + 1   # shrink from the left, excluding the middle (avoids an infinite loop)
        else:
            b = i - 1   # shrink from the right, excluding the middle
return False
in_bisect(t, 'MAISON')
###Output
_____no_output_____ |
Embedding/TensorFlow/Advanced/ProtBert-BFD.ipynb | ###Markdown
Extracting protein sequences' features using the ProtBert-BFD pretrained model 1. Load the necessary libraries, including huggingface transformers
###Code
!pip install -q transformers
import tensorflow as tf
from transformers import TFBertModel, BertTokenizer,BertConfig
import re
import numpy as np
###Output
_____no_output_____
###Markdown
2. Load the vocabulary and ProtBert-BFD Model
###Code
tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert_bfd", do_lower_case=False )
model = TFBertModel.from_pretrained("Rostlab/prot_bert_bfd", from_pt=True)
###Output
_____no_output_____
###Markdown
3. Create or load sequences and map rarely occurring amino acids (U, Z, O, B) to (X)
###Code
sequences_Example = ["A E T C Z A O","S K T Z P"]
sequences_Example = [re.sub(r"[UZOB]", "X", sequence) for sequence in sequences_Example]
###Output
_____no_output_____
###Markdown
4. Tokenize and encode the sequences, and load them onto the GPU if possible
###Code
ids = tokenizer.batch_encode_plus(sequences_Example, add_special_tokens=True, padding=True, return_tensors="tf")
input_ids = ids['input_ids']
attention_mask = ids['attention_mask']
###Output
_____no_output_____
###Markdown
5. Extract the sequences' features and load them onto the CPU if needed
###Code
embedding = model(input_ids)[0]
embedding = np.asarray(embedding)
attention_mask = np.asarray(attention_mask)
###Output
_____no_output_____
###Markdown
6. Remove the padding ([PAD]) and special tokens ([CLS], [SEP]) that are added by the ProtBert-BFD model
###Code
features = []
for seq_num in range(len(embedding)):
seq_len = (attention_mask[seq_num] == 1).sum()
seq_emd = embedding[seq_num][1:seq_len-1]
features.append(seq_emd)
print(features)
###Output
[array([[ 0.05551099, -0.10461281, -0.03254102, ..., 0.05091645,
0.0431913 , 0.10181017],
[ 0.13895634, -0.04658355, 0.02193595, ..., 0.06942667,
0.1476294 , 0.0650388 ],
[ 0.14610632, -0.0809287 , -0.12500374, ..., -0.03651187,
0.02485547, 0.0797754 ],
...,
[ 0.02349947, -0.01549821, -0.05685321, ..., -0.01342234,
0.01704329, 0.0643108 ],
[ 0.08129992, -0.10929591, -0.03022921, ..., 0.08717711,
0.02061502, 0.05156738],
[ 0.06197424, -0.06417827, -0.02039693, ..., -0.02796511,
0.08840055, 0.07532725]], dtype=float32), array([[-0.12292586, -0.1095154 , -0.1050995 , ..., 0.08474061,
0.10824808, 0.03636007],
[ 0.11176239, -0.06970412, -0.04924221, ..., 0.02316407,
-0.00177084, -0.0239673 ],
[ 0.10432272, -0.13054034, -0.08879344, ..., -0.14225952,
-0.01330808, -0.04152388],
[-0.00434562, -0.0853354 , -0.05003072, ..., -0.16549371,
-0.03906402, -0.02910578],
[ 0.04236348, -0.1407812 , -0.07057446, ..., -0.00277795,
-0.00963059, -0.03968422]], dtype=float32)]
|
IE/406_machine_learning/m_v_joshi/2019/lab_sessions/solutions/myLab4.ipynb | ###Markdown
QDA on MultiClass 1.Fitting Classifier
###Code
x_train_class = [] ; size_train_class = [] ; y_train_class =[];
cov_class = []; mean_class = []
# u = np.zeros((10,784,784));
first_term = np.zeros((10));
third_term = np.zeros((10));
for k in range(10):
tmp = Xtrain[[i for i, j in zip(count(), ytrain) if j == k],:]
x_train_class.append(tmp)
y_train_class.append(np.ones((len(tmp),1))*k)
# print(len(tmp[0]))
# print(len(x_train_class[0][0]))
size_train_class.append(len(tmp))
cov_class.append(np.cov(tmp.T))
mean_class.append(np.mean(tmp,axis = 0))
# u[k], s, vh = la.svd(cov_class[k])
# S = np.diag(s)
# s_invhalf = np.sqrt(la.inv(S))
first_term[k] = -1*(np.log(np.sqrt(la.norm(cov_class[k]))))
third_term[k] = np.log(size_train_class[k]/ytrain.shape[0])
###Output
_____no_output_____
###Markdown
2.Prediction
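For reference, the discriminant evaluated in the cell below has the usual QDA form $\delta_k(x) = -\tfrac{1}{2}\log\lvert\Sigma_k\rvert - \tfrac{1}{2}(x-\mu_k)^\top \Sigma_k^{-1}(x-\mu_k) + \log \pi_k$, and the predicted class is $\arg\max_k \delta_k(x)$ (note that `first_term` above stands in for the log-determinant term, computed here from a matrix norm rather than the determinant).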
###Code
def predict1(x):
delta = []
for k in range(10):
value = np.array([x - mean_class[k]])
tmp = (-1/2)*np.dot(value,np.dot(la.pinv(cov_class[k]),(value.T)))
delta.append((first_term[k] + tmp + third_term[k]))
return np.argmax(delta)
###Output
_____no_output_____
###Markdown
3.Accuracy
###Code
accu = 0
preds = np.zeros((5,1))
for j in range(5):
rand_index = random.randint(1,9989)
k = 0
for i in range(rand_index,5+rand_index):
        preds[k] = predict1(Xtest[i])  # use the QDA predictor defined above
print(i,j)
k +=1
accu += (preds==ytest[rand_index:5+rand_index]).mean()
print("Accuracy on Test Dataset :",(accu)/5*100,"%")
###Output
Accuracy on Test Dataset : 80.0 %
###Markdown
LDA on Multiclass : $\Sigma$ is common = I
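With a shared identity covariance, the discriminant used below simplifies to $\delta_k(x) = -\tfrac{1}{2}\lVert x-\mu_k\rVert^2 + \log \pi_k$, i.e. assign each point to the nearest class mean, adjusted by the class prior.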
###Code
mean_class_ldamulti = []
u = np.zeros((10,784,784));
second_term = np.zeros((10));
for k in range(10):
tmp = x_train_class[k]
mean_class_ldamulti.append(np.mean(tmp,axis = 0))
second_term[k] = np.log(size_train_class[k]/ytrain.shape[0])
def predict2(x):
delta = []
for k in range(10):
value = np.array([x - mean_class_ldamulti[k]])
tmp = (-1/2)*np.square(la.norm(value))
delta.append((tmp + second_term[k]))
return np.argmax(delta)
accu1 = 0
accu2 = 0
preds1 = np.zeros((5,1))
preds2 = np.zeros((5,1))
for j in range(5):
rand_index = random.randint(1,9989)
k = 0
for i in range(rand_index,5+rand_index):
preds1[k] = predict1(Xtest[i])
preds2[k] = predict2(Xtest[i])
# print(i,j)
k +=1
accu1 += (preds1==ytest[rand_index:5+rand_index]).mean()
accu2 += (preds2==ytest[rand_index:5+rand_index]).mean()
print('QDA -->Accuracy on Test dataset:',(accu1/5)*100)
print('LDA -->Accuracy on Test dataset:',(accu2/5)*100)
###Output
QDA -->Accuracy on Test dataset: 87.99999999999999
LDA -->Accuracy on Test dataset: 80.0
###Markdown
LDA on Binary Class
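Here the two class covariances are pooled as $\Sigma = \tfrac{1}{2}(\Sigma_0 + \Sigma_1)$ and the discriminant used below is $\delta_k(x) = -\tfrac{1}{2}(x-\mu_k)^\top \Sigma^{-1}(x-\mu_k) + \log \pi_k$.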
###Code
mean = []
term = np.zeros((2));
for k in range(2):
tmp = x_train_class[k]
cov_class.append(np.cov(tmp.T))
mean_class.append(np.mean(tmp,axis = 0))
term[k] = np.log(size_train_class[k]/ytrain.shape[0])
cov = (cov_class[0] + cov_class[1])/2
def predict3(x):
delta = []
for k in range(2):
value = np.array([x - mean_class[k]])
tmp = (-1/2)*np.dot(value,np.dot(la.pinv(cov),(value.T)))
delta.append((tmp + term[k]))
return np.argmax(delta)
Xtest_b = Xtest[[i for i, j in zip(count(), ytest) if (j == 0 or j==1)],:]
ytest_b = ytest[[i for i, j in zip(count(), ytest) if (j == 0 or j==1)],:]
ytest_b.shape
accu_b = 0
preds_b = np.zeros((200,1))
for i in range(200):
preds_b[i] = predict3(Xtest_b[i])
# print(i)
accu_b = (preds_b[:200]==ytest_b[:200]).mean()
print('LDA in Binary Classification -->Accuracy on Test dataset:',(accu_b)*100)
###Output
LDA in Binary Classification -->Accuracy on Test dataset: 99.5
|
GAN/All_C_CNN.ipynb | ###Markdown
The All Convolutional Net - PyTorch implementation. This will go over my implementation of the All-CNN-C model, introduced in the paper [Striving For Simplicity: The All Convolutional Net](https://arxiv.org/abs/1412.6806), using the PyTorch library. In a usual CNN, 3 types of layers are used: - Convolution Layer- Pooling Layer- Fully Connected Layer. This paper presents the All-CNN-C convolutional network, which uses - **Convolution with stride 2** instead of MaxPool- **Global averaging and softmax** instead of a Fully Connected Layer. Architecture | Layer | Kernel | Stride | Image Size | |-------|--------|--------|| Conv, ReLU 96 | 3 x 3 | 1 x 1 | 32 x 32 | | Conv, ReLU 96 | 3 x 3 | 1 x 1 | | **Conv, ReLU 96** | **3 x 3** | **2 x 2** || Conv, ReLU 192 | 3 x 3 | 1 x 1 || Conv, ReLU 192 | 3 x 3 | 1 x 1 || **Conv, ReLU 192** | **3 x 3** | **2 x 2** || Conv, ReLU 192 | 3 x 3 | 1 x 1 || Conv, ReLU 192 | 1 x 1 | 1 x 1 | | Conv, ReLU 10 | 1 x 1 | 1 x 1 | 6 x 6 || **Global Average** | **6 x 6** | **1 x 1** || 10 Way Softmax |- *Batch Normalization* was applied to each layer except the first Conv layer- Refer to [this](https://arxiv.org/abs/1502.03167) research paper Creating the Model Using the PyTorch Library Importing necessary modules
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
###Output
_____no_output_____
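###Markdown
Before defining the units, a quick shape check (a sketch added for illustration, not part of the original notebook): a 3 x 3 convolution with stride 2 downsamples a feature map in roughly the same way as 2 x 2 max pooling, which is exactly the substitution the paper makes. The layer arguments here are illustrative only, not the ones used in the model below.
###Code
import torch
import torch.nn as nn

x = torch.randn(1, 96, 32, 32)                                   # dummy CIFAR-sized feature map
conv_downsample = nn.Conv2d(96, 96, kernel_size=3, stride=2, padding=1)
max_pool = nn.MaxPool2d(kernel_size=2, stride=2)
print(conv_downsample(x).shape)   # torch.Size([1, 96, 16, 16])
print(max_pool(x).shape)          # torch.Size([1, 96, 16, 16])
###Output
_____no_output_____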
###Markdown
Modularizing the model - Here, we will break the model down into smaller 'modules', each of which contains... * 2D convolution * Batch Normalization * LeakyReLU activation. We'll do this by creating a PyTorch class, called CUnit, with- \__init\__: this initializes the necessary layers- forward: performs the calculations using the layers defined in \__init\__
###Code
class CUnit(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=1, batch_norm=True):
super(CUnit, self).__init__()
self.conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride, padding=padding)
self.bn = nn.BatchNorm2d(num_features=out_channels)
self.lrelu = nn.LeakyReLU(negative_slope=0.2)
def forward(self, inp, batch_norm=True):
out = self.conv(inp)
if batch_norm:
out = self.bn(out)
out = self.lrelu(out)
return out
###Output
_____no_output_____
###Markdown
Attributes for the CUnit class* \__init\__() - in_channels: depth/channels of input images to the unit - out_channels: depth/channels of output images from the unit - kernel_size: kernel/filter size of the convolution - stride: how many pixels the filter moves in one step of the convolution - padding: padding on input images - batch_norm: whether or not to apply batch normalization in the unit * forward() - inp: input (image matrix) - batch_norm: Boolean value Building the whole model**Now that we have constructed the unit class, let's use it to build the model**
###Code
class all_CNN(nn.Module):
def __init__(self, image_depth, num_classes):
# first, set up parameters and configs
self.image_depth = image_depth
self.num_classes = num_classes
self.num_out1 = 96
self.num_out2 = 64
# Defining dropouts with defined probability
self.drop1 = nn.Dropout(p=0.2)
self.drop2 = nn.Dropout(p=0.5)
# now we create units using the CUnit class, based on the
# model table above...
self.conv1 = CUnit(in_channels=self.image_depth, out_channels=96, stride=1, batch_norm=False)
self.conv2 = CUnit(in_channels=96, out_channels=96)
# here, we'll use 2 stride convolution layer instead of pooling layer
self.convPool1 = CUnit(in_channels=96, out_channels=96, stride=2, padding=0)
self.conv3 = CUnit(in_channels=96, out_channels=192)
self.conv4 = CUnit(in_channels=192, out_channels=192)
# Second ConvPool Layer
self.convPool2 = CUnit(in_channels=192, out_channels=192, stride=2)
self.conv5 = CUnit(in_channels=192, out_channels=192, padding=0)
self.conv6 = CUnit(in_channels=192, out_channels=192, kernel_size=1, padding=0)
self.conv7 = CUnit(in_channels=192, out_channels=self.num_classes, kernel_size=1, padding=0)
# Average Pooling and softmax layers
self.avp = nn.AvgPool2d(6)
        self.softmax = nn.Softmax(dim=1)
def forward(self, x):
# Convolution and convPool computations
x = self.conv1(x)
x = self.conv2(x)
x = self.convPool1(x)
x = self.drop2(x)
x = self.conv3(x)
x = self.conv4(x)
x = self.convPool2(x)
x = self.drop2(x)
x = self.conv5(x)
x = self.conv6(x)
x = self.conv7(x)
# average pooling
avg = self.avp(x)
# changing shape
avg = avg.view(-1, self.num_classes)
# applying softmax
out = self.softmax(avg)
return out
###Output
_____no_output_____
###Markdown
Data Preprocessing and Model Training**Dataset**Here, we will use the CIFAR10 image dataset, an image dataset for classification.- [CIFAR 10 & 100 dataset](https://www.cs.toronto.edu/~kriz/cifar.html)- 32 x 32 pixel images of 10 classes- 60000 images total, 10000 for testing, 50000 for training- 6000 images per class**Preprocessing**- Horizontal flip- Normalization are applied to the data.**Note**: the code is designed so that it will take advantage of the GPU if it is available. Let's get started!
###Code
# Importing stuff...
import os
import torch
from torch.autograd import Variable
import torchvision
import torch.nn as nn
from torch.optim.lr_scheduler import MultiStepLR
from torchvision import transforms
from torchvision.utils import save_image
from all_CNN_model import all_CNN
from logger import Logger
###Output
_____no_output_____
###Markdown
Imported Modules- `os`: used to execute system commands in Python- `torch`: that's the ML library- `torchvision`: for importing CIFAR10; other major datasets are also available through it- `torch.nn`: for layers and activations- `MultiStepLR`: for the adaptive learning rate, refer to the paper- `transforms`: preprocessing CIFAR10- `save_image`: saving images- `all_CNN`: our model- `Logger`: custom logger class that logs training data using TensorFlow (thanks to a GitHub author)
###Code
# Setting up the device, CPU or GPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Hyper Parameters
lr = 0.04 #0.25, 0.01, 0.05, ,
image_size = 32  # for the image's height and width
num_epochs = 50 # how many times to go through the training set
num_classes = 10
batch_size = 64
image_depth = 3 # or channels
sample_dir = 'CIFAR10_sample'
# Create a directory
if not os.path.exists(sample_dir):
os.makedirs(sample_dir)
# Initializing the logger
logPath = 'logs_CNN/'
record_name = 'CIFAR10_' + str(lr)
logger = Logger(logPath + record_name)
###Output
_____no_output_____
###Markdown
A little bit about torch.device...`torch.cuda.is_available()` returns whether CUDA is available. `torch.device` creates a device that can be used later when placing Variables/tensors. This is done so we don't have to rewrite the whole program for the CUDA and CPU options. Data Preprocessing- `transforms.Compose()` accepts a list of transformations and defines the transformation to apply. Here we'll use horizontal flip and normalization- `transforms.RandomHorizontalFlip(p=n)` flips the image horizontally with probability n- `transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))` normalizes the images with the provided mean and standard deviation values, one value for each channel (here 3 channels)
###Code
transform = transforms.Compose([ # transforms.Compose, list of transforms to perform
transforms.RandomHorizontalFlip(p=0.5),
transforms.ToTensor(),
transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))])
### Loading the dataset
train_dataset = torchvision.datasets.CIFAR10(root='CIFAR10_data/', # where at??
train=True,
transform=transform, # pass the transform we made
download=True)
test_dataset = torchvision.datasets.CIFAR10(root='CIFAR10_data/', # where at??
train=False,
transform=transform, # pass the transform we made
download=True)
### Data Loader
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True, drop_last=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False, drop_last=True)
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Dataset and LoadersWe first load the dataset for train and test, and create a data loader for each of them.**Dataset**- `torchvision.datasets.DATASET_NAME()` loads the dataset with that name. * **root**: relative file path to store the data * **train**: bool, train set or not * **transform**: accepts the pre-defined transformation. Here we provide the one we created * **download**: whether to download the dataset **Data Loaders**- `torch.utils.data.DataLoader()` creates a data loader, which provides batches of the data to the model during training * **dataset**: dataset to create the loader from * **batch_size**: batch size * **shuffle**: whether to shuffle the data * **drop_last**: if true, drop the last batch when it is smaller than the batch size Model and Training Setups
###Code
# initialize the model with parameters
D = all_CNN(image_depth, num_classes)
# Device setting
# D.to() moves and/or casts the parameters and buffers to device(cuda), dtype
# setting to whatever device was selected earlier
D = D.to(device)
# Loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(D.parameters(), lr=lr, momentum=0.9, weight_decay=0.001)
## Adaptive Learning Rate ##
scheduler = MultiStepLR(optimizer, milestones=[200, 250, 300], gamma=0.1)
###Output
_____no_output_____
###Markdown
- `model/variable/tensor.to(device)` casts/moves the parameters and buffers to a device (cpu/cuda) or dtype- `MultiStepLR`: adaptive learning rate scheduler * `milestones`: specifies the epochs at which to change the lr * `gamma`: $lr_{new} = \gamma \cdot lr_{old}$ Utility Functions
###Code
# denormalize the image
def denorm(x):
out = (x + 1) / 2
return out.clamp(0, 1)
# evaluate the model
def evaluate(mode, num):
'''
Evaluate using only first num batches from loader
'''
test_loss = 0
correct = 0
# define the mode, to use training set or testing set
if mode == 'train':
loader = train_loader
elif mode == 'test':
loader = test_loader
with torch.no_grad():
for i, (data, target) in enumerate(loader):
# create the variables for image and target
            data, target = Variable(data).to(device), Variable(target).to(device)  # keep tensors on the active device
# forward pass
output = D(data)
# calculate, and add the loss of the batch to total loss
test_loss += criterion(output, target).item()
# make prediction, and get the index numbers as class label
pred = output.data.max(1, keepdim=True)[1]
# compare prediction with the target
correct += pred.eq(target.data.view_as(pred)).cpu().sum()
if i % 10 == 0:
print(i)
if i == num: # break out when numth number of batch
break
sample_size = batch_size * num # How many datapoints
test_loss /= sample_size # average loss
print('\n' + mode + 'set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, sample_size,
100. * correct / sample_size)) # acccuracy
return 100. * correct / sample_size
###Output
_____no_output_____
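###Markdown
A quick illustration (a sketch added here, not part of the original notebook) of the `Tensor.max(dim, keepdim=True)` idiom used in `evaluate` above: `max` returns a (values, indices) pair along the given dimension, and indexing the result with `[1]` picks out the indices, i.e. the predicted class labels.
###Code
import torch
scores = torch.tensor([[0.1, 0.7, 0.2],
                       [0.6, 0.3, 0.1]])        # two samples, three classes
values, indices = scores.max(1, keepdim=True)
print(indices)                                  # tensor([[1], [0]])
print(scores.max(1, keepdim=True)[1])           # same indices, written as in evaluate()
###Output
_____no_output_____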
###Markdown
- `Variable.item()` returns the underlying Python number if the Variable holds a single element- `Variable.data` returns the Tensor- `Tensor.max(dim, keepdim)` returns the max values (and their indices) along dim; set keepdim=True to retain the shape of the original tensor Train the Model!
###Code
#### Start Training ####
# num of batches in the total dataset, here 60000/100
total_step = len(train_loader)
count = 0
for epoch in range(num_epochs): # How many times to go through the dataset
D.train()
for i, (images, labels) in enumerate(train_loader): # each batch
count += 1
# reshape the data, forward pass, and calculate the loss
images = images.reshape(batch_size, image_depth, image_size, image_size).to(device) # reshape and set to cuda/cpu
outputs = D(images) # using real data
        labels = labels.to(device)          # move labels to the same device as the model outputs
        loss = criterion(outputs, labels)
# backpropagation
optimizer.zero_grad() # reset the grad
loss.backward() # backprop
optimizer.step() # update the parameters (weights)
# Printing the training info
if (i+1) % 2 == 0:
print('Epoch [{}/{}], Step [{}/{}], loss: {:.4f}'.format(epoch, num_epochs, i+1, total_step, loss.item()))
print('i+1', i+1, ' Lr:', lr)
### Tensorboard Logging ###
if i % 10 == 0:
# 1. Log scalar values (scalar summary)
info = {'loss':loss.item()}
for tag, value in info.items():
logger.scalar_summary(tag, value, count+1)
# 2. Log values and gradients of the parameters (histogram summary)
for tag, value in D.named_parameters():
tag = tag.replace('.', '/')
logger.histo_summary(tag, value.data.cpu().numpy(), count+1)
logger.histo_summary(tag+'/grad', value.grad.data.cpu().numpy(), i+1)
print('logging on tensorboard...', i)
if (i+1) % 100 == 0:
print('calculating accuracy....')
            ## Call the evaluate function to get train and test accuracy
train_acc = evaluate('train', 50)
test_acc = evaluate('test', 50)
info = {'Train_acc':train_acc, 'Test_acc':test_acc}
for tag, value in info.items():
logger.scalar_summary(tag, value, count+1)
## Save the model checkpoint
torch.save(D.state_dict(), os.path.join(sample_dir, 'D.ckpt'))
###Output
_____no_output_____ |
_notebooks/2020-03-01-test.ipynb | ###Markdown
Statistical Theory> an introduction Statistical Learning Theory *11 October 2019* *DATA 1010*The goal of **statistical learning** is to draw conclusions about an unknown probability measure given independent observations drawn from the measure. These observations are called **training data**. In **supervised learning**, the unknown measure $\mathbb{P}$ is on a product space $\mathcal{X} \times \mathcal{Y}$. In other words, each training observation has the form $(\mathbf{X}, Y)$ where $\mathbf{X}$ is an element of $\mathcal{X}$ and $\mathbf{Y}$ is an element of $\mathcal{Y}$. We aim to use the training data to predict $Y$ given $\mathbf{X}$, where $(\mathbf{X},Y)$ denotes a random variable in $\mathcal{X} \times \mathcal{Y}$ with distribution $\mathbb{P}$. Problem 1Suppose that $\mathbf{X} = [X_1, X_2]$, where $X_1$ is the color of a banana, $X_2$ is the weight of the banana, and $Y$ is a measure of banana deliciousness. Values of $X_1, X_2,$ and $Y$ are recorded for many bananas, and they are used to predict $Y$ for other bananas whose $\mathbf{X}$ values are known. Do you expect the prediction function to be more sensitive to changes in $X_1$ or changes in $X_2$? *Solution.* A supervised learning problem is a **regression** problem if $Y$ is quantitative ($\mathcal{Y}\subset \mathbb{R}$) and a **classification** problem if $\mathcal{Y}$ is a set of labels. ---We call the components of $\mathbf{X}$ *features*, *predictors*, or *input variables*, and we call $Y$ the *response variable* or *output variable*. To predict $Y$ values from $\mathbf{X}$ values, we define a function $h$ from $\mathcal{X}$ to $\mathcal{Y}$, which is called a **prediction function**. Problem 2Explain how a machine learning task like image recognition fits into this framework. In other words, describe an appropriate feature space $\mathcal{X}$, and describe in intuitive terms what the prediction function needs to accomplish. *Solution*. Problem 3List an example of a real-world regression problem and a real-world classification problem. *Solution*. To make meaningful and unambiguous statements about a proposed prediction function $h: \mathcal{X} \to \mathcal{Y}$, we need a rubric by which to assess it. This is customarily done by defining a *loss* (or *risk*, or *error*) $L(h)$, with the idea that smaller loss is better. We might wish to define $L$ only for $h$'s in a specified class $\mathcal{H}$ of candidate functions. Since $L : \mathcal{H} \to \mathbb{R}$ is defined on a set of functions, we call $L$ the **loss functional**. Given a statistical learning problem, a space $\mathcal{H}$ of candidate prediction functions, and a loss functional $L: \mathcal{H} \to \mathbb{R}$, we define the **target function** to be $\operatorname{argmin}_{h \in \mathcal{H}}L(h)$. Let's look at some common loss functionals. For regression, we often use the **mean squared error**: $$L(h) = \mathbb{E}[(h(X)-Y)^2]$$ Problem 4Recall the exam scores example from the KDE section of the statistics course (plot reproduced below), in which we know the exact density of the distribution which generates the hours-score pairs for students taking an exam. (a) What must be true of the class $\mathcal{H}$ of candidate functions in order for the target function to be equal to the regression function $r$?(b) Suppose we collect six observations, as shown below. Can the loss value of the prediction function plotted be decreased by lowering its graph a bit?
###Code
using LinearAlgebra, Statistics, Roots, Optim, Plots, Random
Random.seed!(1234)
# the true regression function
r(x) = 2 + 1/50*x*(30-x)
# the true density function
σy = 3/2
f(x,y) = 3/4000 * 1/√(2π*σy^2) * x*(20-x)*exp(-1/(2σy^2)*(y-r(x))^2)
heatmap(0:0.02:20, -2:0.01:12, f, fillcolor = cgrad([:white, :MidnightBlue]), ratio = 1, fontfamily = "Palatino",
size = (600,300), xlims = (0,20), ylims = (0,12), xlabel = "hours studied", ylabel = "score")
scatter!([(5,2), (5,4), (7,4), (15,4.5), (18, 4), (10,6.1)], markersize = 3, label = "observations")
plot!(0:0.02:20, r, label = "target function", legend = :topleft, linewidth = 2)
###Output
_____no_output_____
###Markdown
*Solution*. Problem 5Think of another loss functional for a regression problem. *Solution.* For classification, we often consider the **misclassification probability** $$L(h) = \mathbb{E}\left[\boldsymbol{1}_{\{h(\mathbf{X}) \neq Y\}}\right] = \mathbb{P}(h(\mathbf{X}) \neq Y). $$ Problem 6Find the target function for the misclassification loss in the case where $\mathcal{X} = \mathbb{R}$, $\mathcal{Y} = \{0,1\}$ and the probability mass on $\mathcal{X} \times \mathcal{Y}$ is spread out according to the **one**-dimensional density function $$f(x,y) = \begin{cases}\frac{1}{3}\mathbf{1}_{\{x \in [0,2]\}} & \text{if }y = 0 \\\frac{1}{6}\mathbf{1}_{\{x \in [1,3]\}} & \text{if }y = 1 \\\end{cases}$$
###Code
plot([(0,0),(2,0)], linewidth = 4, color = :MidnightBlue, label = "probability mass", xlims = (-2,5), ylims = (-1,2))
plot!([(1,1),(3,1)], linewidth = 2, color = :MidnightBlue, primary = false)
###Output
_____no_output_____
###Markdown
If $\mathcal{H}$ contains $G(\mathbf{x}) = \operatorname{argmax}_c\mathbb{P}(Y=c | \mathbf{X} = \mathbf{x})$, then $G$ is the target function for this loss functional. Note that neither of these loss functionals can be computed directly unless the probability measure $\mathbb{P}$ on $\mathcal{X} \times \mathcal{Y}$ is known. Since the goal of statistical learning is to make inferences about $\mathbb{P}$ when it is *not* known, we must approximate $L$ (and likewise also the target function $h$) using the training data. The most straightforward way to do this is to replace $\mathbb{P}$ with the **empirical probability measure** associated with the training data $\{(\mathbf{X}_i, Y_i)\}_{i=1}^n$. This is the probability measure which places $\frac{1}{n}$ units of probability mass at $(\mathbf{X}_i, Y_i)$, for each $i$ from $1$ to $n$. The **empirical risk** of a candidate function $h \in \mathcal{H}$ is the risk functional evaluated with respect to the empirical measure of the training data. A **learner** is a function which takes a set of training data as input and returns a prediction function $\widehat{h}$ as output. A common way to specify a learner is to let $\widehat{h}$ be the **empirical risk minimizer** (ERM), which is the function in $\mathcal{H}$ which minimizes the empirical risk. Problem 7Suppose that $\mathcal{X} = [0,1]$ and $\mathcal{Y} = \mathbb{R}$, and that the probability measure on $\mathcal{X} \times \mathcal{Y}$ is the one which corresponds to sampling $X$ uniformly from $[0,1]$ and then sampling $Y$ from $\mathcal{N}(X/2 + 1, 1)$. Let $\mathcal{H}$ be the set of monic polynomials of degree six or less. Given training observations $\{(\mathbf{X}_i, Y_i)\}_{i=1}^6$, find the risk minimizer and the empirical risk minimizer for the mean squared error.
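For the mean squared error, for instance, replacing $\mathbb{P}$ with the empirical measure of the training data $\{(\mathbf{X}_i, Y_i)\}_{i=1}^n$ gives the empirical risk $\widehat{L}(h) = \frac{1}{n}\sum_{i=1}^{n}(h(\mathbf{X}_i)-Y_i)^2$, and the ERM is $\operatorname{argmin}_{h \in \mathcal{H}} \widehat{L}(h)$.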
###Code
using Pkg
Pkg.add("Polynomials")
using Plots, Distributions, Polynomials, Random
Random.seed!(123)
X = rand(6)
Y = X/2 .+ 1 .+ randn(6)
p = Polynomials.fit(X,Y)
heatmap(0:0.01:1, -4:0.01:4, (x,y) -> pdf(Normal(x/2+1),y), opacity = 0.8, fontfamily = "Palatino",
color = cgrad([:white, :MidnightBlue]), xlabel = "x", ylabel = "y")
plot!(0:0.01:1, x->p(x), label = "empirical risk minimizer", color = :purple)
plot!(0:0.01:1, x -> x/2 + 1, label = "risk minimizer")  # the regression function E[Y|X=x] = x/2 + 1
scatter!(X, Y, label = "training points", ylims = (-1,4), color = :red)
###Output
_____no_output_____ |
jupyter/heidelburg_classifier_performance.ipynb | ###Markdown
__Name__: heidelburg_classifier_performance__Description__: Assess AMR prediction performance in S. Heidelberg __Author__: Matthew Whiteside matthew dot whiteside at canada dot ca__Date__: Nov 6, 2017__TODO__:
###Code
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
from sklearn.externals import joblib
from sklearn.metrics import classification_report
import xgboost as xgb
import os
os.chdir('../pangenome')
import utils
import classify
import config
amr = joblib.load(config.SH['amr'])
amr_list = joblib.load(config.SH['amr_list'])
sample_index = joblib.load(config.SH['sample_index'])
pg = joblib.load(config.SH['pg'])
locus_list = joblib.load(config.SH['locus_list'])
test_train_index = joblib.load(config.SH['test_train_index'])
X_train = pg[test_train_index == 'Training',:].toarray()
X_test = pg[test_train_index == 'Validation',:].toarray()
for drug in amr_list:
y_train = amr[test_train_index == 'Training', amr_list == drug]
y_test = amr[test_train_index == 'Validation', amr_list == drug]
dfile = drug.lower()
rfc = joblib.load(config.SH[dfile+'_rfc'])
gbc = joblib.load(config.SH[dfile+'_gbc'])
xbc = joblib.load(config.SH[dfile+'_xbc'])
rfc.fit(X_train,y_train)
y_pred = rfc.predict(X_test)
print("~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~")
print("Drug: {}, Classifier: Random Forest\n{}\n".format(drug, classification_report(y_test, y_pred)))
gbc.fit(X_train,y_train)
y_pred = gbc.predict(X_test)
print("Drug: {}, Classifier: Gradient Boosting\n{}\n".format(drug, classification_report(y_test, y_pred)))
xbc.fit(X_train,y_train)
y_pred = xbc.predict(X_test)
print("Drug: {}, Classifier: XGBoost\n{}\n".format(drug, classification_report(y_test, y_pred)))
###Output
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Drug: AMP, Classifier: Random Forest
precision recall f1-score support
0 0.88 0.88 0.88 59
1 0.88 0.88 0.88 58
avg / total 0.88 0.88 0.88 117
Drug: AMP, Classifier: Gradient Boosting
precision recall f1-score support
0 0.91 0.86 0.89 59
1 0.87 0.91 0.89 58
avg / total 0.89 0.89 0.89 117
Drug: AMP, Classifier: XGBoost
precision recall f1-score support
0 0.87 0.93 0.90 59
1 0.93 0.86 0.89 58
avg / total 0.90 0.90 0.90 117
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Drug: FOX, Classifier: Random Forest
precision recall f1-score support
0 0.96 0.99 0.97 73
1 0.98 0.93 0.95 44
avg / total 0.97 0.97 0.97 117
Drug: FOX, Classifier: Gradient Boosting
precision recall f1-score support
0 0.94 1.00 0.97 73
1 1.00 0.89 0.94 44
avg / total 0.96 0.96 0.96 117
Drug: FOX, Classifier: XGBoost
precision recall f1-score support
0 0.96 1.00 0.98 73
1 1.00 0.93 0.96 44
avg / total 0.98 0.97 0.97 117
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Drug: STR, Classifier: Random Forest
precision recall f1-score support
0 0.93 0.99 0.96 87
1 0.96 0.80 0.87 30
avg / total 0.94 0.94 0.94 117
Drug: STR, Classifier: Gradient Boosting
precision recall f1-score support
0 0.98 0.99 0.98 87
1 0.97 0.93 0.95 30
avg / total 0.97 0.97 0.97 117
Drug: STR, Classifier: XGBoost
precision recall f1-score support
0 0.97 0.95 0.96 87
1 0.87 0.90 0.89 30
avg / total 0.94 0.94 0.94 117
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Drug: SOX, Classifier: Random Forest
precision recall f1-score support
0 0.98 1.00 0.99 99
1 1.00 0.89 0.94 18
avg / total 0.98 0.98 0.98 117
Drug: SOX, Classifier: Gradient Boosting
precision recall f1-score support
0 0.98 1.00 0.99 99
1 1.00 0.89 0.94 18
avg / total 0.98 0.98 0.98 117
Drug: SOX, Classifier: XGBoost
precision recall f1-score support
0 0.95 1.00 0.98 99
1 1.00 0.72 0.84 18
avg / total 0.96 0.96 0.95 117
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Drug: TCY, Classifier: Random Forest
precision recall f1-score support
0 0.97 0.76 0.85 92
1 0.51 0.92 0.66 25
avg / total 0.87 0.79 0.81 117
Drug: TCY, Classifier: Gradient Boosting
precision recall f1-score support
0 0.96 0.85 0.90 92
1 0.61 0.88 0.72 25
avg / total 0.89 0.85 0.86 117
Drug: TCY, Classifier: XGBoost
precision recall f1-score support
0 0.95 0.91 0.93 92
1 0.72 0.84 0.78 25
avg / total 0.91 0.90 0.90 117
|
01 python/Lecture_01.ipynb | ###Markdown
2020 09 23 notebook test: we will be working in this environment
###Code
help(print)
###Output
Help on built-in function print in module builtins:
print(...)
print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False)
Prints the values to a stream, or to sys.stdout by default.
Optional keyword arguments:
file: a file-like object (stream); defaults to the current sys.stdout.
sep: string inserted between values, default a space.
end: string appended after the last value, default a newline.
flush: whether to forcibly flush the stream.
###Markdown
Jupyter supports *formatting* in md cells: 1. test 1. test 3. test LaTeX is supported inside ipynb files: $$(x+x)^+\rho = \lambda$$ It also supports: 1. links 1. images 3. html tags Can draw images: 
###Code
import os
os.getcwd()
import this
###Output
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
###Markdown
PEP 8 standard for code _PEP 8_ -- Style Guide for Python * lower case, in English * \_ for spaces * must not start with a digit * Russian names are possible, but not recommended
###Code
пук = 7
пук
###Output
_____no_output_____
###Markdown
don't do it this way
###Code
a = 'HI'
if a == 'HI':
print(a)
###Output
HI
###Markdown
\' & \" are interchangeable, but don't mix them within one string
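A quick check (added for illustration):
###Code
print('HI' == "HI")    # True: both quote styles produce the same str
print("it's fine")     # one style can be nested inside the other
###Output
_____no_output_____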
###Code
print('Hello world!')
print(1)
hello = 1; world = 2
print(hello, world)
print(1, 2, 4, sep = ' & ', end = '!!!\n')
###Output
1 & 2 & 4!!!
###Markdown
passed different parameters this time
###Code
help (print)
###Output
Help on built-in function print in module builtins:
print(...)
print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False)
Prints the values to a stream, or to sys.stdout by default.
Optional keyword arguments:
file: a file-like object (stream); defaults to the current sys.stdout.
sep: string inserted between values, default a space.
end: string appended after the last value, default a newline.
flush: whether to forcibly flush the stream.
###Markdown
a function call with no arguments still requires parentheses
###Code
"""
comment
"""
print(1)
5  # a bare integer literal
###Output
1
###Markdown
remember that + is concatenation for strings
###Code
x = 'Hello world'
y = 'Hello python'
print(x + y)
###Output
Hello worldHello python
###Markdown
added try/except; by the way, restarting the kernel and running all cells pauses on an error
###Code
try:
print(Ч)
except NameError:
print(x)
y = 2
x = 2.2
print(type(x))
print(type(y))
2**10
###Output
_____no_output_____
###Markdown
Python uses floats, with rounding
###Code
0.1+0.2
type(0.1+0.2)
0.1+0.2-0.3
0.1+0.2>0.3
###Output
_____no_output_____
###Markdown
float: floating-point numbers with limited precision
###Code
for i in range(100):
num = 0.5 + i
print(round(num))
###Output
0
2
2
4
4
6
6
8
8
10
10
12
12
14
14
16
16
18
18
20
20
22
22
24
24
26
26
28
28
30
30
32
32
34
34
36
36
38
38
40
40
42
42
44
44
46
46
48
48
50
50
52
52
54
54
56
56
58
58
60
60
62
62
64
64
66
66
68
68
70
70
72
72
74
74
76
76
78
78
80
80
82
82
84
84
86
86
88
88
90
90
92
92
94
94
96
96
98
98
100
###Markdown
Rounding goes in different directions: half of the numbers are rounded up and half are rounded down. Ties are rounded to the nearest even number (banker's rounding).
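A minimal check of this rule (added for illustration):
###Code
# ties go to the nearest even integer in Python 3
print(round(0.5), round(1.5), round(2.5), round(3.5))   # 0 2 2 4
###Output
_____no_output_____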
###Code
a =[0.5 + x for x in range(10)]
a
import math
print(math.floor(2.4))
math.ceil(2.3)
###Output
2
###Markdown
The modulus operator always returns a non-negative result here (for a positive divisor): it is what you need to add to a multiple of the divisor to get the dividend.
###Code
print(10%3)
print(-10%3)
10%2.6
10.3%3.1
100%26
int('10',base=7)
###Output
_____no_output_____
###Markdown
V1
###Code
n = 179
r = 0
for i in str(n):
r += int(i)
print(r)
###Output
17
###Markdown
V2
###Code
a = n % 1000 //100
b = n % 100 //10
c = n % 10 //1
a + b + c
i = 0
d_i = n % 10 ** (i+1)//10**i
a, b, c
d_i
###Output
_____no_output_____
###Markdown
V3
###Code
sum(map(int, str(n)))
###Output
_____no_output_____
###Markdown
Given a number of minutes, print the hours and minutes. Hours
###Code
data = input()
type(data)
n = int(data)
h = ( n // 60) % 24
m = n %60
print(h, 'hours', m, 'minutes')
print(f'{h:02d}:{m:02d}')
###Output
03:51
###Markdown
boolean
###Code
2 == 3
3 == 3
2 < True
print(bool(0))
print(bool(10))
###Output
False
True
###Markdown
A non-empty string converts to ```True```
###Code
print(bool('sdf'))
print(bool(''))
###Output
True
False
###Markdown
(∩`-´)⊃━☆゚.*・。゚ Task Vasya is in Italy, where the shop is open from 6 to 8 in the morning and from 16 to 17 in the evening (inclusive). Vasya has not been able to get to the shop for several days and is starving. He can come to the shop at X o'clock. If the shop is open at X o'clock, print True; if it is closed, print False. The single line of input contains an integer X in the range from 0 to 23.
###Code
time = input('enter time:')
h = int(time)
if h <= 8:
res = h >= 6
else:
if h >= 16:
res = h <= 17
else:
res = False
print(res)
## (∩`-´)⊃━☆゚.*・。゚
time = int(input('enter time:'))  # convert to int before comparing with numbers
can_visit = 6 <= time <= 8
can_visit2 = 16 <= time <= 17
print(can_visit or can_visit2)
###Output
True
###Markdown
(∩`-´)⊃━☆゚.*・。゚ Task Pig Latin 1
###Code
word = input('word')
print(word + 'yay')
###Output
word ses
###Markdown
repeat twice and square
###Code
number = input('number')
print(int(number*2)**2)  # repeat the digit string twice, then square
###Output
number 1
###Markdown
Multiply by 11: for a two-digit number, put the sum of its digits in the middle, carrying into the first digit if needed
###Code
num = input()
p1 = num[0]
p2 = num[1]
mid = int(p1)+int(p2)
print(str(int(p1)+mid//10)+str(mid%10)+p2)
int(p1) + mid//10
mid//10
a = 'hello '
a[1]
for i in a:
print(i)
print(a[1:3])
print(a[:3])
numbers = '123456789'
print(numbers[1:8:2]) # take each nth element (from 1 to 8, taking )
print(numbers[::2])
print(numbers[::-1])
print('1', '2', '3', end='!')
print('1', '2', '3', end='\n') # default sep=' ', end='\n'
print('1')
help(print)
'abcb'.find('c')
'abcb'[2]
###Output
_____no_output_____
###Markdown
String literals. In addition, Python 3.6 and later versions introduced an even more advanced way of formatting strings: f-strings (formatted string literals).
###Code
name = input("Enter your name: ")
age = int(input("Enter your age: "))  # the age will be an integer
height = float(input("Enter your height: "))  # added: height is used below but was never defined
print(f"Your name: {name}. Your age: {age}. Height: {height:.2f}")
###Output
_____no_output_____
###Markdown
(∩`-´)⊃━☆゚.*・。゚ Task Pi From the math module, import the variable pi. Using %f formatting (first sentence) and format (second sentence), print the strings from the example. Wherever the number pi appears, it must be the pi variable rounded to the specified number of decimal places. __Output format:__ `The value 22/7 (3.14) is an approximation of the number pi (3.1416)` `The value 22/7 3.142 is an approximation of the number pi 3.141592653589793`
###Code
from math import pi
val = 22/7
print(f'`The value 22/7 ({val:.2f}) is an approximation of the number pi ({pi:.4f})`')
print('`The value 22/7 {:.3f} is an approximation of the number pi ({})`'.format(val, pi))
###Output
`The value 22/7 (3.14) is an approximation of the number pi (3.1416)`
`The value 22/7 3.143 is an approximation of the number pi (3.141592653589793)`
|
code/netset8barebone.ipynb | ###Markdown
Sustainable energy transitions data model
###Code
import pandas as pd, numpy as np, json, copy, zipfile, random, requests, StringIO
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
from IPython.core.display import Image
Image('favicon.png')
###Output
_____no_output_____
###Markdown
Country and region name converters
###Code
#country name converters
#EIA->pop
clist1={'North America':'Northern America',
'United States':'United States of America',
'Central & South America':'Latin America and the Caribbean',
'Bahamas, The':'Bahamas',
'Saint Vincent/Grenadines':'Saint Vincent and the Grenadines',
'Venezuela':'Venezuela (Bolivarian Republic of)',
'Macedonia':'The former Yugoslav Republic of Macedonia',
'Moldova':'Republic of Moldova',
'Russia':'Russian Federation',
'Iran':'Iran (Islamic Republic of)',
'Palestinian Territories':'State of Palestine',
'Syria':'Syrian Arab Republic',
'Yemen':'Yemen ',
'Congo (Brazzaville)':'Congo',
'Congo (Kinshasa)':'Democratic Republic of the Congo',
'Cote dIvoire (IvoryCoast)':"C\xc3\xb4te d'Ivoire",
'Gambia, The':'Gambia',
'Libya':'Libyan Arab Jamahiriya',
'Reunion':'R\xc3\xa9union',
'Somalia':'Somalia ',
'Sudan and South Sudan':'Sudan',
'Tanzania':'United Republic of Tanzania',
'Brunei':'Brunei Darussalam',
'Burma (Myanmar)':'Myanmar',
'Hong Kong':'China, Hong Kong Special Administrative Region',
'Korea, North':"Democratic People's Republic of Korea",
'Korea, South':'Republic of Korea',
'Laos':"Lao People's Democratic Republic",
'Macau':'China, Macao Special Administrative Region',
'Timor-Leste (East Timor)':'Timor-Leste',
'Virgin Islands, U.S.':'United States Virgin Islands',
'Vietnam':'Viet Nam'}
#BP->pop
clist2={u' European Union #':u'Europe',
u'Rep. of Congo (Brazzaville)':u'Congo (Brazzaville)',
'Republic of Ireland':'Ireland',
'China Hong Kong SAR':'China, Hong Kong Special Administrative Region',
u'Total Africa':u'Africa',
u'Total North America':u'Northern America',
u'Total S. & Cent. America':'Latin America and the Caribbean',
u'Total World':u'World',
u'Total World ':u'World',
'South Korea':'Republic of Korea',
u'Trinidad & Tobago':u'Trinidad and Tobago',
u'US':u'United States of America'}
#WD->pop
clist3={u"Cote d'Ivoire":"C\xc3\xb4te d'Ivoire",
u'Congo, Rep.':u'Congo (Brazzaville)',
u'Caribbean small states':'Carribean',
u'East Asia & Pacific (all income levels)':'Eastern Asia',
u'Egypt, Arab Rep.':'Egypt',
u'European Union':u'Europe',
u'Hong Kong SAR, China':u'China, Hong Kong Special Administrative Region',
u'Iran, Islamic Rep.':u'Iran (Islamic Republic of)',
u'Kyrgyz Republic':u'Kyrgyzstan',
u'Korea, Rep.':u'Republic of Korea',
u'Latin America & Caribbean (all income levels)':'Latin America and the Caribbean',
u'Macedonia, FYR':u'The former Yugoslav Republic of Macedonia',
u'Korea, Dem. Rep.':u"Democratic People's Republic of Korea",
u'South Asia':u'Southern Asia',
u'Sub-Saharan Africa (all income levels)':u'Sub-Saharan Africa',
u'Slovak Republic':u'Slovakia',
u'Venezuela, RB':u'Venezuela (Bolivarian Republic of)',
u'Yemen, Rep.':u'Yemen ',
u'Congo, Dem. Rep.':u'Democratic Republic of the Congo'}
#COMTRADE->pop
clist4={u"Bosnia Herzegovina":"Bosnia and Herzegovina",
u'Central African Rep.':u'Central African Republic',
u'China, Hong Kong SAR':u'China, Hong Kong Special Administrative Region',
u'China, Macao SAR':u'China, Macao Special Administrative Region',
u'Czech Rep.':u'Czech Republic',
u"Dem. People's Rep. of Korea":"Democratic People's Republic of Korea",
u'Dem. Rep. of the Congo':"Democratic Republic of the Congo",
u'Dominican Rep.':u'Dominican Republic',
u'Fmr Arab Rep. of Yemen':u'Yemen ',
u'Fmr Ethiopia':u'Ethiopia',
u'Fmr Fed. Rep. of Germany':u'Germany',
u'Fmr Panama, excl.Canal Zone':u'Panama',
u'Fmr Rep. of Vietnam':u'Viet Nam',
u"Lao People's Dem. Rep.":u"Lao People's Democratic Republic",
u'Occ. Palestinian Terr.':u'State of Palestine',
u'Rep. of Korea':u'Republic of Korea',
u'Rep. of Moldova':u'Republic of Moldova',
u'Serbia and Montenegro':u'Serbia',
u'US Virgin Isds':u'United States Virgin Islands',
u'Solomon Isds':u'Solomon Islands',
u'United Rep. of Tanzania':u'United Republic of Tanzania',
u'TFYR of Macedonia':u'The former Yugoslav Republic of Macedonia',
u'USA':u'United States of America',
u'USA (before 1981)':u'United States of America',
}
#Jacobson->pop
clist5={u"Korea, Democratic People's Republic of":"Democratic People's Republic of Korea",
u'All countries':u'World',
u"Cote d'Ivoire":"C\xc3\xb4te d'Ivoire",
u'Iran, Islamic Republic of':u'Iran (Islamic Republic of)',
u'Macedonia, Former Yugoslav Republic of':u'The former Yugoslav Republic of Macedonia',
u'Congo, Democratic Republic of':u"Democratic Republic of the Congo",
u'Korea, Republic of':u'Republic of Korea',
u'Tanzania, United Republic of':u'United Republic of Tanzania',
u'Moldova, Republic of':u'Republic of Moldova',
u'Hong Kong, China':u'China, Hong Kong Special Administrative Region',
u'All countries.1':"World"
}
#NREL solar->pop
clist6={u"Antigua & Barbuda":u'Antigua and Barbuda',
u"Bosnia & Herzegovina":u"Bosnia and Herzegovina",
u"Brunei":u'Brunei Darussalam',
u"Cote d'Ivoire":"C\xc3\xb4te d'Ivoire",
u"Iran":u'Iran (Islamic Republic of)',
u"Laos":u"Lao People's Democratic Republic",
u"Libya":'Libyan Arab Jamahiriya',
u"Moldova":u'Republic of Moldova',
u"North Korea":"Democratic People's Republic of Korea",
u"Reunion":'R\xc3\xa9union',
u'Sao Tome & Principe':u'Sao Tome and Principe',
u'Solomon Is.':u'Solomon Islands',
u'St. Lucia':u'Saint Lucia',
u'St. Vincent & the Grenadines':u'Saint Vincent and the Grenadines',
u'The Bahamas':u'Bahamas',
u'The Gambia':u'Gambia',
u'Virgin Is.':u'United States Virgin Islands',
u'West Bank':u'State of Palestine'
}
#NREL wind->pop
clist7={u"Antigua & Barbuda":u'Antigua and Barbuda',
u"Bosnia & Herzegovina":u"Bosnia and Herzegovina",
u'Occupied Palestinian Territory':u'State of Palestine',
u'China Macao SAR':u'China, Macao Special Administrative Region',
#"C\xc3\xb4te d'Ivoire":"C\xc3\xb4te d'Ivoire",
u'East Timor':u'Timor-Leste',
u'TFYR Macedonia':u'The former Yugoslav Republic of Macedonia',
u'IAM-country Total':u'World'
}
#country entroids->pop
clist8={u'Burma':'Myanmar',
u"Cote d'Ivoire":"C\xc3\xb4te d'Ivoire",
u'Republic of the Congo':u'Congo (Brazzaville)',
u'Reunion':'R\xc3\xa9union'
}
def cnc(country):
if country in clist1: return clist1[country]
elif country in clist2: return clist2[country]
elif country in clist3: return clist3[country]
elif country in clist4: return clist4[country]
elif country in clist5: return clist5[country]
elif country in clist6: return clist6[country]
elif country in clist7: return clist7[country]
elif country in clist8: return clist8[country]
else: return country
###Output
_____no_output_____
###Markdown
Population Consult the notebook entitled *pop.ipynb* for the details of mining the data from the UN Statistics Division online database. Because it serves as the reference database for country names, the cell below needs to be run first, before any other databases.
###Code
try:
import zlib
compression = zipfile.ZIP_DEFLATED
except:
compression = zipfile.ZIP_STORED
#pop_path='https://dl.dropboxusercontent.com/u/531697/datarepo/Set/db/
pop_path='E:/Dropbox/Public/datarepo/Set/db/'
#suppres warnings
import warnings
warnings.simplefilter(action = "ignore")
cc=pd.read_excel(pop_path+'Country Code and Name ISO2 ISO3.xls')
#http://unstats.un.org/unsd/tradekb/Attachment321.aspx?AttachmentType=1
ccs=cc['Country Code'].values
neighbors=pd.read_csv(pop_path+'contry-geotime.csv')
#https://raw.githubusercontent.com/ppKrauss/country-geotime/master/data/contry-geotime.csv
#country name converter from iso to comtrade and back
iso2c={}
isoc2={}
for i in cc.T.iteritems():
iso2c[i[1][0]]=i[1][1]
isoc2[i[1][1]]=i[1][0]
#country name converter from pop to iso
pop2iso={}
for i in cc.T.iteritems():
pop2iso[cnc(i[1][1])]=int(i[1][0])
#country name converter from alpha 2 to iso
c2iso={}
for i in neighbors.T.iteritems():
c2iso[str(i[1][0])]=i[1][1]
c2iso['NA']=c2iso['nan'] #adjust for namibia
c2iso.pop('nan');
#create country neighbor adjacency list based on iso country number codes
c2neighbors={}
for i in neighbors.T.iteritems():
z=str(i[1][4]).split(' ')
if (str(i[1][1])!='nan'): c2neighbors[int(i[1][1])]=[c2iso[k] for k in z if k!='nan']
#extend iso codes not yet encountered
iso2c[729]="Sudan"
iso2c[531]="Curacao"
iso2c[535]="Bonaire, Sint Eustatius and Saba"
iso2c[728]="South Sudan"
iso2c[534]="Sint Maarten (Dutch part)"
iso2c[652]="Saint Barthélemy"
#load h2 min
h2=json.loads(file(pop_path+'h2.json','r').read())
#load tradealpha d
#predata=json.loads(file(pop_path+'/trade/traded.json','r').read())
predata=json.loads(file(pop_path+'/trade/smalltrade.json','r').read())
tradealpha={}
for c in predata:
tradealpha[c]={}
for year in predata[c]:
tradealpha[c][int(year)]=predata[c][year]
predata={}
#load savedata
predata=json.loads(file(pop_path+'savedata6.json','r').read())
data={}
for c in predata:
data[c]={}
for year in predata[c]:
data[c][int(year)]=predata[c][year]
predata={}
#load grids
grid=json.loads(file(pop_path+'grid.json','r').read())
grid5=json.loads(file(pop_path+'grid5.json','r').read())
gridz=json.loads(file(pop_path+'gridz.json','r').read())
gridz5=json.loads(file(pop_path+'gridz5.json','r').read())
#load ndists
ndists=json.loads(file(pop_path+'ndists.json','r').read())
distancenorm=7819.98
#load goodcountries
#goodcountries=list(set(data.keys()).intersection(set(tradealpha.keys())))
goodcountries=json.loads(file(pop_path+'GC.json','r').read())
#goodcountries=goodcountries[:20] #dev
rgc={} #reverse goodcountries coder
for i in range(len(goodcountries)):
rgc[goodcountries[i]]=i
cid={} #reverse goodcountries coder
for i in range(len(goodcountries)):
cid[goodcountries[i]]=i
def save3(sd,countrylist=[]):
#if True:
print 'saving... ',sd,
popsave={}
countries=[]
if countrylist==[]:
c=sorted(goodcountries)
else: c=countrylist
for country in c:
popdummy={}
tosave=[]
for year in data[country]:
popdummy[year]=data[country][year]['population']
for fuel in data[country][year]['energy']:
#for fuel in allfuels:
if fuel not in {'nrg','nrg_sum'}:
tosave.append({"t":year,"u":fuel,"g":"f","q1":"pp","q2":999,
"s":round(0 if (('navg3' in data[country][year]['energy'][fuel]['prod']) \
and (np.isnan(data[country][year]['energy'][fuel]['prod']['navg3']))) else \
data[country][year]['energy'][fuel]['prod']['navg3'] if \
'navg3' in data[country][year]['energy'][fuel]['prod'] else 0,3)
})
tosave.append({"t":year,"u":fuel,"g":"m","q1":"cc","q2":999,
"s":round(0 if (('navg3' in data[country][year]['energy'][fuel]['cons']) \
and (np.isnan(data[country][year]['energy'][fuel]['cons']['navg3']))) else \
data[country][year]['energy'][fuel]['cons']['navg3'] if \
'navg3' in data[country][year]['energy'][fuel]['cons'] else 0,3)
})
#save balances - only for dev
#if (year > min(balance.keys())):
# if year in balance:
# if country in balance[year]:
# tosave.append({"t":year,"u":"balance","g":"m","q1":"cc","q2":999,
# "s":balance[year][country]})
#no import export flows on global
if country not in {"World"}:
flowg={"Import":"f","Export":"m","Re-Export":"m","Re-Import":"f"}
if country in tradealpha:
for year in tradealpha[country]:
for fuel in tradealpha[country][year]:
for flow in tradealpha[country][year][fuel]:
for partner in tradealpha[country][year][fuel][flow]:
tosave.append({"t":int(float(year)),"u":fuel,"g":flowg[flow],"q1":flow,"q2":partner,
"s":round(tradealpha[country][year][fuel][flow][partner],3)
})
popsave[country]=popdummy
countries.append(country)
file('E:/Dropbox/Public/datarepo/Set/json/'+str(sd)+'/data.json','w').write(json.dumps(tosave))
zf = zipfile.ZipFile('E:/Dropbox/Public/datarepo/Set/json/'+str(sd)+'/'+str(country.encode('utf-8').replace('/','&&'))+'.zip', mode='w')
zf.write('E:/Dropbox/Public/datarepo/Set/json/'+str(sd)+'/data.json','data.json',compress_type=compression)
zf.close()
#save all countries list
file('E:/Dropbox/Public/datarepo/Set/universal/countries.json','w').write(json.dumps(countries))
#save countries populations
#file('E:/Dropbox/Public/datarepo/Set/json/pop.json','w').write(json.dumps(popsave))
print ' done'
###Output
_____no_output_____
###Markdown
Impex updating
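For reference, the `influence` score computed below combines the normalized trade matrices with fixed weights and then linearizes the roughly exponential distribution with the power $1/p$ (notation introduced here for readability: $I$, $E$, $RI$, $RE$ are the normalized import, export, re-import and re-export matrices): $\mathrm{infl}(i,j) = \big(\tfrac{12}{36} I_{ij} + \tfrac{6}{36} E_{ji} + \tfrac{4}{36} RI_{ij} + \tfrac{2}{36} RE_{ji} + \tfrac{6}{36} E_{ij} + \tfrac{3}{36} I_{ji} + \tfrac{2}{36} RE_{ij} + \tfrac{1}{36} RI_{ji}\big)^{1/p}$, with $\mathrm{infl}(i,i)$ equal to the self-influence parameter $q$.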
###Code
def updatenormimpex(reporter,partner,flow,value,weight=0.1):
global nimportmatrix
global nexportmatrix
global nrimportmatrix
global nrexportmatrix
i=cid[reporter]
j=cid[partner]
if flow in {"Export","Re-Export"}:
nexportmatrix[i][j]=(nexportmatrix[i][j]*(1-weight))+(value*weight)
nrimportmatrix[j][i]=(nrimportmatrix[j][i]*(1-weight))+(value*weight)
if flow in {"Import","Re-Import"}:
        nimportmatrix[i][j]=(nimportmatrix[i][j]*(1-weight))+(value*weight)
nrexportmatrix[j][i]=(nrexportmatrix[j][i]*(1-weight))+(value*weight)
return
def influence(reporter,partner,selfinfluence=1.0,expfactor=3.0):
#country trade influence will tend to have an exponential distribution, therefore we convert to linear
#with a strength of expfactor
i=cid[reporter]
j=cid[partner]
if i==j: return selfinfluence
else: return (12.0/36*nimportmatrix[i][j]\
+6.0/36*nexportmatrix[j][i]\
+4.0/36*nrimportmatrix[i][j]\
+2.0/36*nrexportmatrix[j][i]\
+6.0/36*nexportmatrix[i][j]\
+3.0/36*nimportmatrix[j][i]\
+2.0/36*nrexportmatrix[i][j]\
+1.0/36*nrimportmatrix[j][i])**(1.0/expfactor)
#load ! careful, need to rebuild index if tradealpha or data changes
predata=json.loads(file(pop_path+'trade/nimpex.json','r').read())
nexportmatrix=predata["nexport"]
nimportmatrix=predata["nimport"]
nrexportmatrix=predata["nrexport"]
nrimportmatrix=predata["nrimport"]
predata={}
import scipy
import pylab
import scipy.cluster.hierarchy as sch
import matplotlib as mpl
import matplotlib.font_manager as font_manager
from matplotlib.ticker import NullFormatter
path = 'Inconsolata-Bold.ttf'
prop = font_manager.FontProperties(fname=path)
labeler=json.loads(file(pop_path+'../universal/labeler.json','r').read())
isoico=json.loads(file(pop_path+'../universal/isoico.json','r').read())
risoico=json.loads(file(pop_path+'../universal/risoico.json','r').read())
def dendro(sd='00',selfinfluence=1.0,expfactor=3.0):
returnmatrix=scipy.zeros([len(goodcountries),len(goodcountries)])
matrix=scipy.zeros([len(goodcountries),len(goodcountries)])
global labs
global labsorder
global labs2
global labs3
labs=[]
labs2=[]
labs3=[]
for i in range(len(goodcountries)):
labs.append(labeler[goodcountries[i]])
labsorder = pd.Series(np.array(labs)) #create labelorder
labsorder=labsorder.rank(method='dense').values.astype(int)-1
alphabetvector=[0 for i in range(len(labsorder))]
for i in range(len(labsorder)):
alphabetvector[labsorder[i]-1]=i
labs=[]
for i in range(len(goodcountries)):
labs.append(labeler[goodcountries[alphabetvector[i]]])
labs2.append(goodcountries[alphabetvector[i]])
labs3.append(isoico[goodcountries[alphabetvector[i]]])
for j in alphabetvector:
matrix[i][j]=influence(goodcountries[alphabetvector[i]],goodcountries[alphabetvector[j]],selfinfluence,expfactor)
returnmatrix[i][j]=influence(goodcountries[i],goodcountries[j],selfinfluence,expfactor)
title=u'Partner Importance of COLUMN Country for ROW Country in Energy Trade [self-influence $q='+\
str(selfinfluence)+'$, power factor $p='+str(expfactor)+'$]'
#cmap=plt.get_cmap('RdYlGn_r') #for logplot
cmap=plt.get_cmap('YlGnBu')
labelpad=32
# Generate random features and distance matrix.
D = scipy.zeros([len(matrix),len(matrix)])
for i in range(len(matrix)):
for j in range(len(matrix)):
D[i,j] =matrix[i][j]
# Compute and plot first dendrogram.
fig = pylab.figure(figsize=(17,15))
sch.set_link_color_palette(10*["#ababab"])
# Plot original matrix.
axmatrix = fig.add_axes([0.3,0.1,0.6,0.6])
im = axmatrix.matshow(D[::-1], aspect='equal', origin='lower', cmap=cmap)
#im = axmatrix.matshow(E[::-1], aspect='auto', origin='lower', cmap=cmap) #for logplot
axmatrix.set_xticks([])
axmatrix.set_yticks([])
# Plot colorbar.
axcolor = fig.add_axes([0.87,0.1,0.02,0.6])
pylab.colorbar(im, cax=axcolor)
# Label up
axmatrix.set_xticks(range(len(matrix)))
mlabs=list(labs)
for i in range(len(labs)):
kz='-'
for k in range(labelpad-len(labs[i])):kz+='-'
if i%2==1: mlabs[i]=kz+u' '+labs[i]+u' '+'-'
else: mlabs[i]='-'+u' '+labs[i]+u' '+kz
axmatrix.set_xticklabels(mlabs, minor=False,fontsize=7,fontproperties=prop)
axmatrix.xaxis.set_label_position('top')
axmatrix.xaxis.tick_top()
pylab.xticks(rotation=-90, fontsize=8)
axmatrix.set_yticks(range(len(matrix)))
mlabs=list(labs)
for i in range(len(labs)):
kz='-'
for k in range(labelpad-len(labs[i])):kz+='-'
if i%2==0: mlabs[i]=kz+u' '+labs[i]+u' '+'-'
else: mlabs[i]='-'+u' '+labs[i]+u' '+kz
axmatrix.set_yticklabels(mlabs[::-1], minor=False,fontsize=7,fontproperties=prop)
axmatrix.yaxis.set_label_position('left')
axmatrix.yaxis.tick_left()
xlabels = axmatrix.get_xticklabels()
for label in range(len(xlabels)):
xlabels[label].set_rotation(90)
axmatrix.text(1.1, 0.5, title,
horizontalalignment='left',
verticalalignment='center',rotation=270,
transform=axmatrix.transAxes,size=10)
axmatrix.xaxis.grid(False)
axmatrix.yaxis.grid(False)
plt.savefig('E:/Dropbox/Public/datarepo/Set/json/'+str(sd)+'/'+'si'+str(selfinfluence)+'expf'+str(expfactor)+'dendrogram.png',dpi=150,bbox_inches = 'tight', pad_inches = 0.1, )
plt.close()
m1='centroid'
m2='single'
# Compute and plot first dendrogram.
fig = pylab.figure(figsize=(17,15))
ax1 = fig.add_axes([0.1245,0.1,0.1,0.6])
Y = sch.linkage(D, method=m1)
Z1 = sch.dendrogram(Y,above_threshold_color="#ababab", orientation='left')
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_axis_bgcolor('None')
# Compute and plot second dendrogram.
ax2 = fig.add_axes([0.335,0.825,0.5295,0.1])
Y = sch.linkage(D, method=m2)
Z2 = sch.dendrogram(Y,above_threshold_color="#ababab")
ax2.set_xticks([])
ax2.set_yticks([])
ax2.set_axis_bgcolor('None')
# Plot distance matrix.
axmatrix = fig.add_axes([0.3,0.1,0.6,0.6])
idx1 = Z1['leaves']
idx2 = Z2['leaves']
#D = E[idx1,:] #for logplot
D = D[idx1,:]
D = D[:,idx2]
im = axmatrix.matshow(D, aspect='equal', origin='lower', cmap=cmap)
axmatrix.set_xticks([])
axmatrix.set_yticks([])
# Plot colorbar.
axcolor = fig.add_axes([0.87,0.1,0.02,0.6])
ac=pylab.colorbar(im, cax=axcolor)
# Label up
axmatrix.set_xticks(np.arange(len(matrix))-0)
mlabs=list(np.array(labs)[idx2])
for i in range(len(np.array(labs)[idx2])):
kz='-'
for k in range(labelpad-len(np.array(labs)[idx2][i])):kz+='-'
if i%2==1: mlabs[i]=kz+u' '+np.array(labs)[idx2][i]+u' '+'-'
else: mlabs[i]='-'+u' '+np.array(labs)[idx2][i]+u' '+kz
axmatrix.set_xticklabels(mlabs, minor=False,fontsize=7,fontproperties=prop)
axmatrix.xaxis.set_label_position('top')
axmatrix.xaxis.tick_top()
pylab.xticks(rotation=-90, fontsize=8)
axmatrix.set_yticks(np.arange(len(matrix))+0)
mlabs=list(np.array(labs)[idx1])
for i in range(len(np.array(labs)[idx1])):
kz='-'
for k in range(labelpad-len(np.array(labs)[idx1][i])):kz+='-'
if i%2==0: mlabs[i]=kz+u' '+np.array(labs)[idx1][i]+u' '+'-'
else: mlabs[i]='-'+u' '+np.array(labs)[idx1][i]+u' '+kz
axmatrix.set_yticklabels(mlabs, minor=False,fontsize=7,fontproperties=prop)
axmatrix.yaxis.set_label_position('left')
axmatrix.yaxis.tick_left()
xlabels = axmatrix.get_xticklabels()
for label in xlabels:
label.set_rotation(90)
axmatrix.text(1.11, 0.5, title,
horizontalalignment='left',
verticalalignment='center',rotation=270,
transform=axmatrix.transAxes,size=10)
axmatrix.xaxis.grid(False)
axmatrix.yaxis.grid(False)
plt.savefig('E:/Dropbox/Public/datarepo/Set/json/'+str(sd)+'/'+'si'+str(selfinfluence)+'expf'+str(expfactor)+'dendrogram2.png',dpi=150,bbox_inches = 'tight', pad_inches = 0.1, )
plt.close()
return [returnmatrix,returnmatrix.T]
###Output
_____no_output_____
###Markdown
###Code
#run once
GC=[] #create backup of global country list
for i in goodcountries: GC.append(i)
file('E:/Dropbox/Public/datarepo/Set/db/GC.json','w').write(json.dumps(GC))
#create mini-world
goodcountries2=["United States of America",#mostinfluential
"Russian Federation",
"Netherlands",
"United Kingdom",
"Italy",
"France",
"Saudi Arabia",
"Singapore",
"Germany",
"United Arab Emirates",
"China",
"India",
"Iran (Islamic Republic of)",
"Nigeria",
"Venezuela (Bolivarian Republic of)",
"South Africa"]
###Output
_____no_output_____
###Markdown
###Code
#[importancematrix,influencematrix]=dendro('00',1,5)
c=['seaGreen','royalBlue','#dd1c77']
levels=[1,3,5]
toplot=[cid[i] for i in goodcountries2]
tolabel=[labeler[i] for i in goodcountries2]
fig,ax=plt.subplots(1,2,figsize=(12,5))
for j in range(len(levels)):
[importancematrix,influencematrix]=dendro('74',4,levels[j])
    z=[np.mean(i) for i in influencematrix] #average country influence across columns
#if you wanted weighted influence, introduce weights (by trade volume i guess) here in the above mean
s = pd.Series(1/np.array(z)) #need to 1/ to create inverse order
s=s.rank(method='dense').values.astype(int)-1 #start from 0 not one
#s is a ranked array on which country ranks where in country influence
#we then composed the ordered vector of country influence
influencevector=[0 for i in range(len(s))]
for i in range(len(s)):
influencevector[s[i]]=i
zplot=[]
zplot2=[]
for i in toplot:
zplot.append(s[i]+1)
zplot2.append(z[i])
ax[0].scatter(np.array(zplot),np.arange(len(zplot))-0.2+0.2*j,40,color=c[j],label=u'$p='+str(levels[j])+'$')
ax[1].scatter(np.array(zplot2),np.arange(len(zplot))-0.2+0.2*j,40,color=c[j],label=u'$p='+str(levels[j])+'$')
ax[0].set_ylim(-1,len(toplot))
ax[1].set_ylim(-1,len(toplot))
ax[0].set_xlim(0,20)
ax[1].set_xscale('log')
ax[0].set_yticks(range(len(toplot)))
ax[0].set_yticklabels(tolabel)
ax[1].set_yticks(range(len(toplot)))
ax[1].set_yticklabels([])
ax[0].set_xlabel("Rank in Country Influence Vector")
ax[1].set_xlabel("Average Country Influence")
ax[1].legend(loc=1,framealpha=0)
plt.subplots_adjust(wspace=0.1)
plt.suptitle("Power Factor ($p$) Sensitivity of Country Influence",fontsize=14)
#plt.savefig('powerfactor.png',dpi=150,bbox_inches = 'tight', pad_inches = 0.1, )
plt.show()
civector={}
for i in range(len(influencevector)):
civector[i+1]={"inf":np.round(z[influencevector[i]],2),"country":labeler[goodcountries[influencevector[i]]]}
pd.DataFrame(civector).T.to_excel('c.xlsx')
c=['seaGreen','royalBlue','#dd1c77']
levels=[1,3,5]
toplot=[cid[i] for i in goodcountries2]
tolabel=[labeler[i] for i in goodcountries2]
fig,ax=plt.subplots(1,2,figsize=(12,5))
for j in range(len(levels)):
[importancematrix,influencematrix]=dendro('00',1,levels[j])
    z=[np.mean(i) for i in importancematrix] #average country dependence across columns
#if you wanted weighted influence, introduce weights (by trade volume i guess) here in the above mean
s = pd.Series(1/np.array(z)) #need to 1/ to create inverse order
s=s.rank(method='dense').values.astype(int)-1 #start from 0 not one
#s is a ranked array on which country ranks where in country influence
#we then composed the ordered vector of country influence
influencevector=[0 for i in range(len(s))]
for i in range(len(s)):
influencevector[s[i]]=i
zplot=[]
zplot2=[]
for i in toplot:
zplot.append(s[i]+1)
zplot2.append(z[i])
ax[0].scatter(np.array(zplot),np.arange(len(zplot))-0.2+0.2*j,40,color=c[j],label=u'$p='+str(levels[j])+'$')
ax[1].scatter(np.array(zplot2),np.arange(len(zplot))-0.2+0.2*j,40,color=c[j],label=u'$p='+str(levels[j])+'$')
ax[0].set_ylim(-1,len(toplot))
ax[1].set_ylim(-1,len(toplot))
ax[0].set_xlim(0,20)
ax[1].set_xscale('log')
ax[0].set_yticks(range(len(toplot)))
ax[0].set_yticklabels(tolabel)
ax[1].set_yticks(range(len(toplot)))
ax[1].set_yticklabels([])
ax[0].set_xlabel("Rank in Country Dependence Vector")
ax[1].set_xlabel("Average Country Dependence")
ax[1].legend(loc=1,framealpha=0)
plt.subplots_adjust(wspace=0.1)
plt.suptitle("Power Factor ($p$) Sensitivity of Country Dependence",fontsize=14)
plt.savefig('powerfactor2.png',dpi=150,bbox_inches = 'tight', pad_inches = 0.1, )
plt.show()
###Output
_____no_output_____
###Markdown
Create the energy cost matrix by filling each entry with the cost for the row country of importing 1 TWh from the column country. Neglecting transport energy costs for now, this will be the extraction energy cost. Let us consider only solar for now. Try the optimization with all three sources and choose the one with the best objective value. The 1 TWh tier changes based on granularity.
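A minimal sketch of the matrix-filling idea described above, using a hypothetical per-country extraction-cost lookup (`extraction_cost` and the short country list are illustrations only, not the notebook's actual data structures):
#hypothetical stand-in for the real extraction-cost model, in arbitrary units per TWh
extraction_cost={'Germany':1.2,'Spain':0.8,'Norway':1.0}
countries=list(extraction_cost.keys())
#entry [row][col]: cost for the row country of importing 1TWh produced by the column country
energycost=[[extraction_cost[col] for col in countries] for row in countries]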
###Code
#weighted resource class calculator
def re(dic,total):
if dic!={}:
i=max(dic.keys())
mi=min(dic.keys())
run=True
keys=[]
weights=[]
counter=0
while run:
counter+=1 #safety break
if counter>1000: run=False
if i in dic:
if total<dic[i]:
keys.append(i)
weights.append(total)
run=False
else:
total-=dic[i]
keys.append(i)
weights.append(dic[i])
i-=1
if i<mi: run=False
if sum(weights)==0: return 0
else: return np.average(keys,weights=weights)
else: return 0
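#Quick sanity check of re() with a hypothetical resource-class histogram
#(keys are integer resource classes, values are the available amounts in each class):
#taking 6 units fills class 3 completely (5) and takes 1 unit from class 2,
#so the weighted average class is (3*5+2*1)/6 ~= 2.83
_demo_classes={3:5.0,2:3.0,1:2.0}
print(re(_demo_classes,6.0))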
region=pd.read_excel(pop_path+'regions.xlsx').set_index('Country')
#load
aroei=json.loads(file(pop_path+'aroei.json','r').read())
groei=json.loads(file(pop_path+'groei.json','r').read())
ndists=json.loads(file(pop_path+'ndists.json','r').read())
#average resource quality calculator for the globe
def update_aroei():
global aroei
aroei={}
groei={}
for c in res:
for r in res[c]:
if r not in groei: groei[r]={}
for cl in res[c][r]['res']:
if cl not in groei[r]: groei[r][cl]=0
groei[r][cl]+=res[c][r]['res'][cl]
for r in groei:
x=[]
y=[]
for i in range(len(sorted(groei[r].keys()))):
x.append(float(sorted(groei[r].keys())[i]))
y.append(float(groei[r][sorted(groei[r].keys())[i]]))
aroei[r]=np.average(x,weights=y)
#https://www.researchgate.net/publication/299824220_First_Insights_on_the_Role_of_solar_PV_in_a_100_Renewable_Energy_Environment_based_on_hourly_Modeling_for_all_Regions_globally
cost=pd.read_excel(pop_path+'/maps/storage.xlsx')
#1Bdi - grid
def normdistance(a,b):
return ndists[cid[a]][cid[b]]
def gridtestimator(country,partner,forceptl=False):
#return normdistance(country,partner)
def electricitytrade(country,partner):
scaler=1
gridpartners=grid5['electricity']
#existing trade partners
if ((partner in gridpartners[country]) or (country in gridpartners[partner])):
scaler+=cost.loc[region.loc[country]]['egrid'].values[0]/2.0
#neighbors, but need to build
elif pop2iso[country] in c2neighbors:
if (pop2iso[partner] in c2neighbors[pop2iso[country]]):
scaler+=cost.loc[region.loc[country]]['grid'].values[0]/2.0*normdistance(country,partner)
#not neighbors or partners but in the same region, need to build
elif (region.loc[country][0]==region.loc[partner][0]):
scaler+=cost.loc[region.loc[country]]['grid'].values[0]*3.0/2.0*normdistance(country,partner)
#need to build supergrid, superlative costs
else:
scaler+=cost.loc[region.loc[country]]['grid'].values[0]*10.0/2.0*normdistance(country,partner)
#need to build supergrid, superlative costs
else:
scaler+=cost.loc[region.loc[country]]['grid'].values[0]*10.0/2.0*normdistance(country,partner)
return scaler
def ptltrade(country,partner):
#ptg costs scale with distance
scaler=1+cost.loc[11]['ptg']*100.0*normdistance(country,partner)
return scaler
if ptltrade(country,partner)<electricitytrade(country,partner) or forceptl:
return {"scaler":ptltrade(country,partner),"tradeway":"ptl"}
else: return {"scaler":electricitytrade(country,partner),"tradeway":"grid"}
#1Bdii - storage &curtailment
def storagestimator(country):
return cost.loc[region.loc[country]]['min'].values[0]
#curtoversizer
def curtestimator(country):
return cost.loc[region.loc[country]]['curt'].values[0]
#global benchmark eroei, due to state of technology
eroei={
#'oil':13,
#'coal':27,
#'gas':14,
#'nuclear':10,
#'biofuels':1.5,
#'hydro':84,
#'geo_other':22,
'pv':13.74,#17.6,
'csp':7.31,#10.2,
'wind':11.17,#20.2 #24
}
#without esoei
#calibrated from global, from Table S1 in ERL paper
###Output
_____no_output_____
###Markdown
ALLINONE
###Code
#initialize renewable totals for learning
total2014={'csp':0,'solar':0,'wind':0}
learning={'csp':0.04,'solar':0.04,'wind':0.02}
year=2014
for fuel in total2014:
total2014[fuel]=np.nansum([np.nansum(data[partner][year]['energy'][fuel]['cons']['navg3'])\
for partner in goodcountries if fuel in data[partner][year]['energy']])
total2014
#scenario id (folder id)
#first is scenario family, then do 4 variations of scenarios (2 selfinfluence, 2 power factor) as 01, 02...
sd='74' #only fossil profiles and non-scalable
#import resources
###################################
###################################
#load resources
#predata=json.loads(file(pop_path+'maps/newres.json','r').read())
predata=json.loads(file(pop_path+'maps/res.json','r').read())
res={}
for c in predata:
res[c]={}
for f in predata[c]:
res[c][f]={}
for r in predata[c][f]:
res[c][f][r]={}
for year in predata[c][f][r]:
res[c][f][r][int(year)]=predata[c][f][r][year]
predata={}
print 'scenario',sd,'loaded resources',
###################################
###################################
#load demand2
predata=json.loads(file(pop_path+'demand2.json','r').read())
demand2={}
for c in predata:
demand2[c]={}
for year in predata[c]:
demand2[c][int(year)]=predata[c][year]
predata={}
print 'demand',
###################################
###################################
#load tradealpha d
#predata=json.loads(file(pop_path+'/trade/traded.json','r').read())
predata=json.loads(file(pop_path+'/trade/smalltrade.json','r').read())
tradealpha={}
for c in predata:
tradealpha[c]={}
for year in predata[c]:
tradealpha[c][int(year)]=predata[c][year]
predata={}
print 'tradedata',
###################################
###################################
#reload impex and normalize
predata=json.loads(file(pop_path+'trade/nimpex.json','r').read())
nexportmatrix=predata["nexport"]
nimportmatrix=predata["nimport"]
nrexportmatrix=predata["nrexport"]
nrimportmatrix=predata["nrimport"]
predata={}
print 'impex',
###################################
###################################
#load latest savedata
#we dont change the data for now, everything is handled through trade
predata=json.loads(file(pop_path+'savedata6.json','r').read())
data={}
for c in predata:
data[c]={}
for year in predata[c]:
data[c][int(year)]=predata[c][year]
predata={}
print 'data'
###################################
###################################
save3('00') #save default
#reset balance
ybalance={}
#recalculate balances
for year in range(2015,2101):
balance={}
if year not in ybalance:ybalance[year]={}
for c in goodcountries:
balance[c]=0
if c in tradealpha:
f1=0
for fuel in tradealpha[c][year]:
if 'Import' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,sum(tradealpha[c][year][fuel]['Import'].values())])
if 'Re-Import' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,sum(tradealpha[c][year][fuel]['Re-Import'].values())])
if 'Export' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,-sum(tradealpha[c][year][fuel]['Export'].values())])
if 'Re-Export' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,-sum(tradealpha[c][year][fuel]['Re-Export'].values())])
if fuel in data[c][year]['energy']:
f1=np.nansum([f1,data[c][year]['energy'][fuel]['prod']['navg3']])
balance[c]-=f1
balance[c]+=demand2[c][year]*8760*1e-12
if 'balance' not in data[c][year]['energy']:
data[c][year]['energy']['balance']={'prod':{'navg3':0},'cons':{'navg3':0}}
data[c][year]['energy']['balance']['prod']['navg3']=max(0,balance[c])#balance can't be negative
data[c][year]['energy']['balance']['cons']['navg3']=max(0,balance[c])
ybalance[year]=balance
save3('01') #save default
def cbalance(year,c):
balance=0
if c in tradealpha:
f1=0
for fuel in tradealpha[c][year]:
if 'Import' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,sum(tradealpha[c][year][fuel]['Import'].values())])
if 'Re-Import' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,sum(tradealpha[c][year][fuel]['Re-Import'].values())])
if 'Export' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,-sum(tradealpha[c][year][fuel]['Export'].values())])
if 'Re-Export' in tradealpha[c][year][fuel]:
f1=np.nansum([f1,-sum(tradealpha[c][year][fuel]['Re-Export'].values())])
if '_' in fuel:
fuel=fuel[fuel.find('_')+1:]
#if fuel in data[c][year]['energy']:
# f1=np.nansum([f1,data[c][year]['energy'][fuel]['prod']['navg3']])
for fuel in data[c][year]['energy']:
if fuel not in {"nrg_sum","nrg"}:
f1=np.nansum([f1,data[c][year]['energy'][fuel]['prod']['navg3']])
balance-=f1
balance+=demand2[c][year]*8760*1e-12
return balance
def res_adv(country,fuel): #this country's wavg resource compared to global
x=[]
y=[]
if fuel=='solar':fuel='pv'
d=groei[fuel] #global wavg resource class
for i in range(len(sorted(d.keys()))):
if float(d[sorted(d.keys())[i]])>0.1:
x.append(float(sorted(d.keys())[i]))
y.append(float(d[sorted(d.keys())[i]]))
x2=[]
y2=[]
if country not in res: return 0
d2=res[country][fuel]['res'] #country's wavg resource class
for i in range(len(sorted(d2.keys()))):
if float(d2[sorted(d2.keys())[i]])>0.1:
x2.append(float(sorted(d2.keys())[i]))
y2.append(float(d2[sorted(d2.keys())[i]]))
if y2!=[]:
print np.average(x2,weights=y2)
print np.average(x,weights=y)
return np.average(x2,weights=y2)*1.0/np.average(x,weights=y)
else: return 0
res_adv('Germany','wind')
def costvectorranker(cv):
k={}
for i in cv:
for j in cv[i]:
k[(i)+'_'+str(j)]=cv[i][j]
return sorted(k.items(), key=lambda value: value[1])
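#Quick check of costvectorranker on a hypothetical cost vector: the nested
#{partner: {fuel: cost}} dict is flattened to 'partner_fuel' keys and sorted by
#ascending cost, so the cheapest (partner, fuel) option comes first.
_cv_demo={'Spain':{'solar':0.4,'wind':0.9},'Norway':{'wind':0.2}}
print(costvectorranker(_cv_demo))
#-> [('Norway_wind', 0.2), ('Spain_solar', 0.4), ('Spain_wind', 0.9)]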
def trade(country,partner,y0,fuel,value,l0):
lifetime=l0+int(random.random()*l0)
tradeable[partner][fuel]-=value
key=tradeway[country][partner]+'_'+fuel
for year in range(y0,min(2101,y0+lifetime)):
#add production
if fuel not in data[partner][year]['energy']:
data[partner][year]['energy'][fuel]={'prod':{'navg3':0},'cons':{'navg3':0}}
data[partner][year]['energy'][fuel]['prod']['navg3']+=value
data[partner][year]['energy']['nrg_sum']['prod']['navg3']+=value
#add consumption
if fuel not in data[country][year]['energy']:
data[country][year]['energy'][fuel]={'prod':{'navg3':0},'cons':{'navg3':0}}
data[country][year]['energy'][fuel]['cons']['navg3']+=value
data[country][year]['energy']['nrg_sum']['cons']['navg3']+=value
#add storage on country side (if not ptl)
if tradeway[country][partner]=='grid':
if fuel not in {'csp'}:
if 'storage' not in data[country][year]['energy']:
data[country][year]['energy']['storage']={'prod':{'navg3':0},'cons':{'navg3':0}}
data[country][year]['energy']['storage']['prod']['navg3']+=value*storagestimator(country)
data[country][year]['energy']['storage']['cons']['navg3']+=value*storagestimator(country)
if country!=partner:
#add import flow
if key not in tradealpha[country][year]:tradealpha[country][year][key]={}
if 'Import' not in tradealpha[country][year][key]:tradealpha[country][year][key]["Import"]={}
if str(pop2iso[partner]) not in tradealpha[country][year][key]["Import"]:
tradealpha[country][year][key]["Import"][str(pop2iso[partner])]=0
tradealpha[country][year][key]["Import"][str(pop2iso[partner])]+=value
#add export flow
if key not in tradealpha[partner][year]:tradealpha[partner][year][key]={}
if 'Export' not in tradealpha[partner][year][key]:tradealpha[partner][year][key]["Export"]={}
if str(pop2iso[country]) not in tradealpha[partner][year][key]["Export"]:
tradealpha[partner][year][key]["Export"][str(pop2iso[country])]=0
tradealpha[partner][year][key]["Export"][str(pop2iso[country])]+=value
#trade diversification necessity
def divfill(cv,divfactor,divbalance):
scaler=min(1.0,divbalance/\
sum([tradeable[cv[i][0][:cv[i][0].find('_')]]\
[cv[i][0][cv[i][0].find('_')+1:]] for i in range(divfactor)])) #take all or partial
for i in range(divfactor):
partner=cv[i][0][:cv[i][0].find('_')]
fuel=cv[i][0][cv[i][0].find('_')+1:]
trade(country,partner,year,fuel,max(0,tradeable[partner][fuel])*scaler,lifetime)
def tradefill(cv):
totrade=[]
tradesum=0
for i in range(len(cv)):
partner=cv[i][0][:cv[i][0].find('_')]
fuel=cv[i][0][cv[i][0].find('_')+1:]
if tradeable[partner][fuel]>balance-tradesum:
totrade.append((cv[i][0],balance-tradesum))
tradesum+=balance-tradesum
break
else:
totrade.append((cv[i][0],tradeable[partner][fuel]))
tradesum+=tradeable[partner][fuel]
for i in totrade:
partner=i[0][:i[0].find('_')]
fuel=i[0][i[0].find('_')+1:]
trade(country,partner,year,fuel,i[1],lifetime)
def omegafill(cv):
global wasalready
totrade=[]
tradesum=0
for i in range(len(cv)):
partner=cv[i][0][:cv[i][0].find('_')]
fuel=cv[i][0][cv[i][0].find('_')+1:]
if country==partner:
if fuel not in wasalready:
wasalready.add(fuel)
if tradeable[partner][fuel]>balance-tradesum:
totrade.append((cv[i][0],balance-tradesum))
tradesum+=balance-tradesum
break
else:
totrade.append((cv[i][0],tradeable[partner][fuel]))
tradesum+=tradeable[partner][fuel]
#trade(country,partner,year,fuel,min(cv[i][1],tradeable[partner][fuel]),lifetime)
for i in totrade:
partner=i[0][:i[0].find('_')]
fuel=i[0][i[0].find('_')+1:]
trade(country,partner,year,fuel,i[1],lifetime)
def nrgsum(country,year):
return np.nansum([data[country][year]['energy'][i]['prod']['navg3'] for i in data[country][year]['energy'] if i not in ['nrg_sum','sum','nrg']])
def liquidcheck(year,country):
oil=data[country][year]['energy']['oil']['prod']['navg3']
gas=data[country][year]['energy']['gas']['prod']['navg3']
try: ptl=sum([sum(tradealpha[country][year][i]['Import'].values()) for i in tradealpha[country][year] if 'ptl' in i])
except: ptl=0
liquidshare=(oil+gas+ptl)/nrgsum(country,year)
return max(0,(h2[country]-liquidshare)*nrgsum(country,year)) #return amount to fill with liquids
def liquidfill(country,year):
toadjust=0
tofill=liquidcheck(year,country)
adjustable={}
if tofill>0:
for fuel in data[country][year]['energy']:
if fuel not in {"nrg","nrg_sum","storage","oil","gas"}:
if data[country][year]['energy'][fuel]['prod']['navg3']>0:
if not np.isnan(data[country][year]['energy'][fuel]['prod']['navg3']):
toadjust+=data[country][year]['energy'][fuel]['prod']['navg3']
for fuel in tradealpha[country][year]:
if fuel not in {"coal","oil","gas"}:
if 'ptl' not in fuel:
if 'Import' in tradealpha[country][year][fuel]:
toadjust+=np.nansum(tradealpha[country][year][fuel]["Import"].values())
#scan fuels to adjust, calculate adjust scaler
adjustscaler=1.0-tofill*1.0/toadjust
#scale down fuels, record what to put back as ptl
for fuel in data[country][year]['energy']:
if fuel not in {"nrg","nrg_sum","storage","oil","gas"}:
if data[country][year]['energy'][fuel]['prod']['navg3']>0:
if not np.isnan(data[country][year]['energy'][fuel]['prod']['navg3']):
data[country][year]['energy'][fuel]['prod']['navg3']*=adjustscaler
if fuel not in adjustable: adjustable[fuel]={}
adjustable[fuel][pop2iso[country]]=data[country][year]['energy'][fuel]['prod']['navg3']*(1-adjustscaler)
for fuel in tradealpha[country][year]:
if fuel not in {"coal","oil","gas"}:
if 'ptl' not in fuel:
if 'Import' in tradealpha[country][year][fuel]:
for p in tradealpha[country][year][fuel]["Import"]:
tradealpha[country][year][fuel]["Import"][p]*=adjustscaler
if fuel[fuel.find('_')+1:] not in adjustable: adjustable[fuel[fuel.find('_')+1:]]={}
adjustable[fuel[fuel.find('_')+1:]][p]=tradealpha[country][year][fuel]["Import"][p]*(1-adjustscaler)
#put back ptl
for fuel in adjustable:
for p in adjustable[fuel]:
if 'ptl_'+str(fuel) not in tradealpha[country][year]:
tradealpha[country][year]['ptl_'+str(fuel)]={}
if 'Import' not in tradealpha[country][year]['ptl_'+str(fuel)]:
tradealpha[country][year]['ptl_'+str(fuel)]["Import"]={}
tradealpha[country][year]['ptl_'+str(fuel)]["Import"][p]=adjustable[fuel][p]
[importancematrix,influencematrix]=dendro(sd,4,3) #2,5, or 4,3
z=[np.mean(i) for i in influencematrix] #average country influence across columns
#if you wanted weighted influence, introduce weights (by trade volume i guess) here in the above mean
s = pd.Series(1/np.array(z)) #need to 1/ to create inverse order
s=s.rank(method='dense').values.astype(int)-1 #start from 0 not one
#s is a ranked array on which country ranks where in country influence
#we then composed the ordered vector of country influence
influencevector=[0 for i in range(len(s))]
for i in range(len(s)):
influencevector[s[i]]=i
CV={}
CV2={}
TB={}
#load data - if already saved
predata=json.loads(file(pop_path+'savedata6.json','r').read())
data={}
for c in predata:
data[c]={}
for year in predata[c]:
data[c][int(year)]=predata[c][year]
predata=json.loads(file(pop_path+'/trade/smalltrade.json','r').read())
tradealpha={}
for c in predata:
tradealpha[c]={}
for year in predata[c]:
tradealpha[c][int(year)]=predata[c][year]
predata={}
fc={"solar":'pv',"csp":'csp',"wind":'wind'}
divfactor=10 #min trade partners in trade diversification
divshare=0.2 #min share of the trade diversification, total
tradeway={}
lifetime=20 #base lifetime
maxrut=0.01 #for each type #max rampup total, if zero 5% of 1% 0.05 / 0.001
maxrur=1.5 #growth rate for each techno #max rampup rate 0.5
omegamin=0.1 #min share of the in-country diversification, per fuel
random.seed(2)
cs=set()
for year in range(2015,2101):
tradeable={}
if year not in TB:TB[year]={}
for i in range(len(goodcountries)):
country=goodcountries[i]
if country not in tradeable:tradeable[country]={'solar':0,'csp':0,'wind':0}
for fuel in {"solar","csp","wind"}:
if fuel not in data[country][year-1]['energy']:
tradeable[country][fuel]=nrgsum(country,year-1)*maxrut
elif data[country][year-1]['energy'][fuel]['prod']['navg3']==0:
tradeable[country][fuel]=nrgsum(country,year-1)*maxrut
else: tradeable[country][fuel]=max(nrgsum(country,year-1)*maxrut,
data[country][year-1]['energy'][fuel]['prod']['navg3']*maxrur)
for i in range(len(influencevector))[:]:#4344
country=goodcountries[influencevector[i]]
cs.add(country)
#if year==2015:
if True:
costvector={}
for j in range(len(goodcountries)):
partner=goodcountries[j]
if partner not in costvector:costvector[partner]={}
transactioncost=gridtestimator(country,partner)
if country not in tradeway:tradeway[country]={}
if partner not in tradeway[country]:tradeway[country][partner]=transactioncost["tradeway"]
for fuel in {"solar","csp","wind"}:
ru0=0
if fuel not in data[partner][year]['energy']: ru = ru0
elif partner not in res: ru = ru0
elif sum(res[partner][fc[fuel]]['res'].values())==0: ru=1
elif data[partner][year]['energy'][fuel]['prod']['navg3']==0: ru=ru0
else: ru=data[partner][year]['energy'][fuel]['prod']['navg3']*1.0/\
sum(res[partner][fc[fuel]]['res'].values())
ru=max(ru,0)
ru=max(1,0.3+ru**0.1) #or 0.3
costvector[partner][fuel]=1.0/influencematrix[influencevector[i]][j]*\
transactioncost['scaler']*\
ru*\
1.0/(eroei[fc[fuel]]*1.0/np.mean(eroei.values())*\
res_adv(partner,fuel)*\
aroei[fc[fuel]]*1.0/np.mean(aroei.values()))
cv=costvectorranker(costvector)
#fulfill trade diversification criterion
balance=divshare*cbalance(year,country)
if balance>0:
divfill(cv,divfactor,balance)
#fulfill in-country diversification criterion
wasalready=set()
balance=cbalance(year,country)*omegamin
if balance>0:
omegafill(cv) #fill first best source to min share
omegafill(cv) #fill second best source to min share
#fill up rest of trade
balance=cbalance(year,country)
if balance>0:
tradefill(cv)
#fill liquids up to min liquid level
liquidfill(country,year)
print i,
#CV2[country]=cv
print year
save3(sd,cs)
file('E:/Dropbox/Public/datarepo/Set/savedata/'+sd+'data.json','w').write(json.dumps(data))
file('E:/Dropbox/Public/datarepo/Set/savedata/'+sd+'trade.json','w').write(json.dumps(tradealpha))
###Output
_____no_output_____ |
Notebooks/05_Autoencoder_empty.ipynb | ###Markdown
Unsupervised learning with Autoencoder. Some pieces of code are taken from https://github.com/kevinzakka/vae-pytorch. Description given by [Wikipedia](https://en.wikipedia.org/wiki/Autoencoder).
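In short, an autoencoder learns an encoder $f$ and a decoder $g$ by minimizing a reconstruction error such as $L(x)=\lVert x-g(f(x))\rVert^2$ (a standard formulation stated here for convenience, not a quote from the linked page).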
###Code
import os
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
%matplotlib inline
###Output
_____no_output_____
###Markdown
Loading MNIST
###Code
#where your MNIST dataset is stored:
data_dir = '/home/lelarge/data'
train_loader = torch.utils.data.DataLoader(
datasets.MNIST(data_dir, train=True, download=True, transform=transforms.ToTensor()),
batch_size=256, shuffle=True)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST(data_dir, train=False, download=True, transform=transforms.ToTensor()),
batch_size=10, shuffle=False)
###Output
_____no_output_____
###Markdown
Helper Functions
###Code
def to_img(x):
x = x.data.numpy()
x = 0.5 * (x + 1)
x = np.clip(x, 0, 1)
x = x.reshape([-1, 28, 28])
return x
def plot_reconstructions(model, conv=False):
"""
Plot 10 reconstructions from the test set. The top row is the original
digits, the bottom is the decoder reconstruction.
The middle row is the encoded vector.
"""
# encode then decode
data, _ = next(iter(test_loader))
if not conv:
data = data.view([-1, 784])
data.requires_grad = False
true_imgs = data
encoded_imgs = model.encoder(data)
decoded_imgs = model.decoder(encoded_imgs)
true_imgs = to_img(true_imgs)
decoded_imgs = to_img(decoded_imgs)
encoded_imgs = encoded_imgs.data.numpy()
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(3, n, i + 1)
plt.imshow(true_imgs[i])
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
ax = plt.subplot(3, n, i + 1 + n)
plt.imshow(encoded_imgs[i].reshape(-1,4))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(3, n, i + 1 + n + n)
plt.imshow(decoded_imgs[i])
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
Simple Auto-EncoderWe'll start with the simplest autoencoder: a single, fully-connected layer as the encoder and decoder.
###Code
class AutoEncoder(nn.Module):
def __init__(self, input_dim, encoding_dim):
super(AutoEncoder, self).__init__()
self.encoder = nn.Linear(input_dim, encoding_dim)
self.decoder = nn.Linear(encoding_dim, input_dim)
def forward(self, x):
encoded = F.relu(self.encoder(x))
decoded = self.decoder(encoded)
return decoded
input_dim = 784
encoding_dim = 32
model = AutoEncoder(input_dim, encoding_dim)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.MSELoss()
###Output
_____no_output_____
###Markdown
Why did we take 784 as input dimension? What is the learning rate?
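A quick way to check both (assuming the standard 28×28 grayscale MNIST images, flattened to a vector, and that `optim.Adam` falls back to its default learning rate because none was passed):
print(28 * 28)                   # 784: each image is flattened to a 784-dimensional vector
print(optimizer.defaults['lr'])  # 0.001, Adam's default learning rate in PyTorch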
###Code
def train_model(model,loss_fn,data_loader=None,epochs=1,optimizer=None):
model.train()
for epoch in range(epochs):
for batch_idx, (data, _) in enumerate(train_loader):
data = data.view([-1, 784])
optimizer.zero_grad()
output = model(data)
loss = loss_fn(output, data)
loss.backward()
optimizer.step()
if batch_idx % 50 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(data_loader.dataset),
100. * batch_idx / len(data_loader), loss.data.item()))
train_model(model, loss_fn,data_loader=train_loader,epochs=15,optimizer=optimizer)
plot_reconstructions(model)
###Output
_____no_output_____
###Markdown
If you remove the non-linearity, what are you doing? Stacked Auto-Encoder
###Code
class DeepAutoEncoder(nn.Module):
def __init__(self, input_dim, encoding_dim):
super(DeepAutoEncoder, self).__init__()
self.encoder = nn.Sequential(
nn.Linear(input_dim, 128),
nn.ReLU(True),
nn.Linear(128, 64),
nn.ReLU(True),
nn.Linear(64, encoding_dim),
nn.ReLU(True),
)
self.decoder = nn.Sequential(
nn.Linear(encoding_dim, 64),
nn.ReLU(True),
nn.Linear(64, 128),
nn.ReLU(True),
nn.Linear(128, input_dim),
)
def forward(self, x):
x = self.encoder(x)
x = self.decoder(x)
return x
input_dim = 784
encoding_dim = 32
model = DeepAutoEncoder(input_dim, encoding_dim)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.MSELoss()
model.encoder
model.decoder
train_model(model, loss_fn,data_loader=train_loader,epochs=15,optimizer=optimizer)
plot_reconstructions(model)
###Output
_____no_output_____
###Markdown
Exercise- Change the loss to a BCE loss. - Implement weight sharing.Hint, a rapid google search gives:https://discuss.pytorch.org/t/how-to-create-and-train-a-tied-autoencoder/2585 Convolutional Auto-EncoderDeconvolution are creating checkboard artefacts see [Odena et al.](https://distill.pub/2016/deconv-checkerboard/)
###Code
class ConvolutionalAutoEncoder(nn.Module):
def __init__(self):
super(ConvolutionalAutoEncoder, self).__init__()
self.encoder = nn.Sequential(
nn.Conv2d(1, 16, 3, stride=3, padding=1), # b, 16, 10, 10
nn.ReLU(True),
nn.MaxPool2d(2, stride=2), # b, 16, 5, 5
nn.Conv2d(16, 8, 3, stride=2, padding=1), # b, 8, 3, 3
nn.ReLU(True),
nn.MaxPool2d(2, stride=1) # b, 8, 2, 2
)
self.decoder = nn.Sequential(
nn.ConvTranspose2d(8, 16, 3, stride=2), # b, 16, 5, 5
nn.ReLU(True),
nn.ConvTranspose2d(16, 8, 5, stride=3, padding=1), # b, 8, 15, 15
nn.ReLU(True),
nn.ConvTranspose2d(8, 1, 2, stride=2, padding=1), # b, 1, 28, 28
)
def forward(self, x):
x = self.encoder(x)
x = self.decoder(x)
return x
model = ConvolutionalAutoEncoder()
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCEWithLogitsLoss()
###Output
_____no_output_____
###Markdown
Why is `train_model(model,loss_fn,data_loader=train_loader,epochs=15,optimizer=optimizer)` not working? Make the necessary modification.
###Code
def train_convmodel(model,loss_fn,data_loader=None,epochs=1,optimizer=None):
model.train()
for epoch in range(epochs):
for batch_idx, (data, _) in enumerate(train_loader):
#
# your code here
#
if batch_idx % 50 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(data_loader.dataset),
100. * batch_idx / len(data_loader), loss.data.item()))
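# One possible completion, shown as a separate function so the exercise above stays
# intact (an assumption about the intended fix): the convolutional model needs 4-D
# image batches, so the data must NOT be flattened to 784, and with BCEWithLogitsLoss
# the raw pixel batch itself is the reconstruction target.
def train_convmodel_solution(model, loss_fn, data_loader=None, epochs=1, optimizer=None):
    model.train()
    for epoch in range(epochs):
        for batch_idx, (data, _) in enumerate(data_loader):
            optimizer.zero_grad()
            output = model(data)          # keep the [batch, 1, 28, 28] shape
            loss = loss_fn(output, data)  # reconstruct the input itself
            loss.backward()
            optimizer.step()
            if batch_idx % 50 == 0:
                print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                    epoch, batch_idx * len(data), len(data_loader.dataset),
                    100. * batch_idx / len(data_loader), loss.data.item()))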
train_convmodel(model, loss_fn,data_loader=train_loader,epochs=15,optimizer=optimizer)
plot_reconstructions(model, conv=True)
###Output
_____no_output_____ |
frameworks/keras/NetworkEditing_TFLite_Keras.ipynb | ###Markdown
Data Preparation
###Code
(x_train, y_train), (x_test, y_test) = mnist.load_data(path=os.path.join('.','keras_mnist'))
type(x_train)
x_train.shape
x_train[0]
y_train
###Output
_____no_output_____
###Markdown
Model
###Code
batch_size = 128
nb_classes = 10
nb_epoch = 1
img_rows, img_cols = 28, 28
pool_size = (2,2)
kernel_size = (3,3)
# (sample_num, row, col, channel)
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255.
x_test /= 255.
# one-hot encoding
# e.g. array([0., 0., 0., 0., 0., 1., 0., 0., 0., 0.])
y_train = np_utils.to_categorical(y_train, nb_classes)
y_test = np_utils.to_categorical(y_test, nb_classes)
model = Sequential()
# first convolutional layer
model.add(Conv2D(32, (kernel_size[0], kernel_size[1]), padding='valid', input_shape=input_shape))
model.add(BatchNormalization(momentum=0.99, epsilon=0.001))
model.add(Activation('relu'))
# second convolutional layer
model.add(Conv2D(32, (kernel_size[0], kernel_size[1])))
model.add(BatchNormalization(momentum=0.99, epsilon=0.001))
model.add(Activation('relu'))
# max pooling layer
model.add(MaxPooling2D(pool_size=pool_size))
# flatten
model.add(Flatten())
# First Dense layer (FC 1)
model.add(Dense(128))
model.add(BatchNormalization(momentum=0.99, epsilon=0.001))
model.add(Activation('relu'))
# Second Dense Layer (FC 2)
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 26, 26, 32) 320
_________________________________________________________________
batch_normalization_1 (Batch (None, 26, 26, 32) 128
_________________________________________________________________
activation_1 (Activation) (None, 26, 26, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 24, 24, 32) 9248
_________________________________________________________________
batch_normalization_2 (Batch (None, 24, 24, 32) 128
_________________________________________________________________
activation_2 (Activation) (None, 24, 24, 32) 0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 12, 12, 32) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 4608) 0
_________________________________________________________________
dense_1 (Dense) (None, 128) 589952
_________________________________________________________________
batch_normalization_3 (Batch (None, 128) 512
_________________________________________________________________
activation_3 (Activation) (None, 128) 0
_________________________________________________________________
dense_2 (Dense) (None, 10) 1290
_________________________________________________________________
activation_4 (Activation) (None, 10) 0
=================================================================
Total params: 601,578
Trainable params: 601,194
Non-trainable params: 384
_________________________________________________________________
###Markdown
Learning Default loss
###Code
# define loss function, optimizer, and metrics
model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Custom loss
###Code
def np_expert_loss(y_true, y_pred):
"""
y_true: [batch_size, nb_classes]
y_pred: [batch_size, nb_classes]
"""
if y_true.ndim == 1:
y_true = y_true.reshape(1, y_true.size)
y_pred = y_pred.reshape(1, y_pred.size)
if y_true.size == y_pred.size:
y_true_idx = y_true.argmax(axis=1)
batch_size = y_true.shape[0]
return -np.sum(np.log(y_pred[np.arange(batch_size), y_true_idx] + 1e-7)) / batch_size
def default_expert_loss(y_true, y_pred):
"""
(Tensor, not numpy array) y_true: [batch_size, nb_classes]
(Tensor, not numpy array) y_pred: [batch_size, nb_classes]
"""
y_res = keras.losses.categorical_crossentropy(y_true, y_pred)
return y_res
def mean_squared_error(y_true, y_pred):
y_res = K.mean(K.square(y_pred - y_true), axis=-1)
print(K.shape(y_res))
return y_res
def expert_loss(y_true, y_pred):
"""
(Tensor, not numpy array) y_true: [batch_size, nb_classes]
(Tensor, not numpy array) y_pred: [batch_size, nb_classes]
"""
if K.ndim(y_true) == 1:
        y_true = K.reshape(y_true, [1, K.shape(y_true)[0]])
        y_pred = K.reshape(y_pred, [1, K.shape(y_pred)[0]])
b_size = K.cast(K.shape(y_pred)[0], dtype="float32")
y_res = -K.sum(y_true * K.log(y_pred + 1e-7)) / b_size # return
return y_res
def correct_ratio(y_true, y_pred):
"""
return: (1) can be a tensor list, (2) can be a tensor scalar
"""
true_list = K.argmax(y_true)
pred_list = K.argmax(y_pred)
correct = K.cast(K.equal(true_list, pred_list), "float32")
mean_correct = K.mean(correct)
return mean_correct
#model.compile(loss=expert_loss, optimizer='adadelta', metrics=['acc', correct_ratio])
model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['acc'], weighted_metrics=['accuracy'])
modelpath = os.path.join('.','keras_model','mlp.h5')
# load the model if it exists
if os.path.isfile(modelpath):
model = load_model(modelpath)
###Output
_____no_output_____
###Markdown
Training
###Code
tbCallBack = keras.callbacks.TensorBoard(log_dir=os.path.join('.','keras_model','graph'), \
histogram_freq=0, \
write_graph=True, \
write_images=True)
# training the model
model.fit(x_train, y_train, \
batch_size=batch_size, \
epochs=nb_epoch, \
verbose=1, \
validation_data=(x_test, y_test), \
shuffle=True, \
callbacks=[tbCallBack])
###Output
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/math_ops.py:2862: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Train on 60000 samples, validate on 10000 samples
Epoch 1/1
60000/60000 [==============================] - 249s 4ms/step - loss: 0.1069 - acc: 0.9713 - weighted_acc: 0.9713 - val_loss: 0.0537 - val_acc: 0.9831 - val_weighted_acc: 0.9831
###Markdown
Evaluating
###Code
score = model.evaluate(x_test, y_test, verbose=0)
print('Test Score: {}.'.format(score[0]))
print('Test Accuracy: {}.'.format(score[1]))
###Output
Test Score: 0.05368674308508634.
Test Accuracy: 0.9831.
###Markdown
Inference
###Code
pred = model.predict(x_test[0:2])
print(np.argmax(pred, axis=1))
###Output
[7 2]
###Markdown
Export
###Code
save_model(model, modelpath)
###Output
_____no_output_____
###Markdown
Advanced: Specific Layer Output
###Code
inp = model.input # input placeholder
print(inp)
outputs = [layer.output for layer in model.layers] # all layer outputs
for out in range(len(outputs)):
print(out, ":", outputs[out])
# inp: the placeholder image
# outputs[-1]: the latest layer
functions = K.function([inp], [outputs[-1]])
keras_partial_res, = functions([x_test[0].reshape(1, 28, 28, 1)])
test_img = np.minimum(x_test[0] * 255., 255.).reshape((28,28)) # reshape to (28, 28) for plt.imshow()
test_img = test_img.astype("uint8")
plt.imshow(test_img, cmap='gray')
plt.show()
np.argmax(keras_partial_res)
###Output
_____no_output_____
###Markdown
Layer Editing Origin Layer
###Code
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 26, 26, 32) 320
_________________________________________________________________
batch_normalization_1 (Batch (None, 26, 26, 32) 128
_________________________________________________________________
activation_1 (Activation) (None, 26, 26, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 24, 24, 32) 9248
_________________________________________________________________
batch_normalization_2 (Batch (None, 24, 24, 32) 128
_________________________________________________________________
activation_2 (Activation) (None, 24, 24, 32) 0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 12, 12, 32) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 4608) 0
_________________________________________________________________
dense_1 (Dense) (None, 128) 589952
_________________________________________________________________
batch_normalization_3 (Batch (None, 128) 512
_________________________________________________________________
activation_3 (Activation) (None, 128) 0
_________________________________________________________________
dense_2 (Dense) (None, 10) 1290
_________________________________________________________________
activation_4 (Activation) (None, 10) 0
=================================================================
Total params: 601,578
Trainable params: 601,194
Non-trainable params: 384
_________________________________________________________________
###Markdown
Layer Insertion
###Code
def show_layer(edit_model):
layers = [l for l in edit_model.layers]
for l in range(len(layers)):
print(l, ":", layers[l].input, "->", layers[l].output)
show_layer(model)
def insert_layer(edit_model, layer_id, new_layer):
"""
layer_id: start from input with 0
"""
layers = [l for l in edit_model.layers]
#for l in layers: print(l.input, l.output)
x = layers[0].output
for i in range(1, len(layers)):
if i == layer_id:
for layer_idx in new_layer:
x = layer_idx(x)
x = layers[i](x)
new_model = Model(input=layers[0].input, outputs=x)
return new_model
inserted_model = insert_layer(model, 11, [Dense(128), BatchNormalization(), Activation('relu')])
inserted_model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1_input (InputLayer) (None, 28, 28, 1) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 26, 26, 32) 320
_________________________________________________________________
batch_normalization_1 (Batch (None, 26, 26, 32) 128
_________________________________________________________________
activation_1 (Activation) (None, 26, 26, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 24, 24, 32) 9248
_________________________________________________________________
batch_normalization_2 (Batch (None, 24, 24, 32) 128
_________________________________________________________________
activation_2 (Activation) (None, 24, 24, 32) 0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 12, 12, 32) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 4608) 0
_________________________________________________________________
dense_1 (Dense) (None, 128) 589952
_________________________________________________________________
batch_normalization_3 (Batch (None, 128) 512
_________________________________________________________________
activation_3 (Activation) (None, 128) 0
_________________________________________________________________
dense_8 (Dense) (None, 128) 16512
_________________________________________________________________
batch_normalization_9 (Batch (None, 128) 512
_________________________________________________________________
activation_7 (Activation) (None, 128) 0
_________________________________________________________________
dense_2 (Dense) (None, 10) 1290
_________________________________________________________________
activation_4 (Activation) (None, 10) 0
=================================================================
Total params: 618,602
Trainable params: 617,962
Non-trainable params: 640
_________________________________________________________________
###Markdown
Replace Layer
###Code
def replace_intermediate_single_layer_in_keras(edit_model, layer_id, new_layer):
layers = [l for l in edit_model.layers]
x = layers[0].output
for i in range(1, len(layers)):
if i == layer_id:
x = new_layer(x)
else:
x = layers[i](x)
new_model = Model(input=layers[0].input, output=x)
return new_model
def replace_intermediate_multiple_layer_in_keras(edit_model, layer_dict):
layers = [l for l in edit_model.layers]
layer_id = list(layer_dict.keys())
x = layers[0].output
for i in range(1, len(layers)):
if i in layer_id:
x = layer_dict[i](x)
else:
x = layers[i](x)
new_model = Model(input=layers[0].input, output=x)
return new_model
replaced_layer = OrderedDict()
replaced_layer[11] = Dense(2)
replaced_layer[12] = Activation('softmax')
replaced_model = replace_intermediate_multiple_layer_in_keras(model, replaced_layer)
replaced_model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1_input (InputLayer) (None, 28, 28, 1) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 26, 26, 32) 320
_________________________________________________________________
batch_normalization_1 (Batch (None, 26, 26, 32) 128
_________________________________________________________________
activation_1 (Activation) (None, 26, 26, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 24, 24, 32) 9248
_________________________________________________________________
batch_normalization_2 (Batch (None, 24, 24, 32) 128
_________________________________________________________________
activation_2 (Activation) (None, 24, 24, 32) 0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 12, 12, 32) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 4608) 0
_________________________________________________________________
dense_1 (Dense) (None, 128) 589952
_________________________________________________________________
batch_normalization_3 (Batch (None, 128) 512
_________________________________________________________________
activation_3 (Activation) (None, 128) 0
_________________________________________________________________
dense_10 (Dense) (None, 2) 258
_________________________________________________________________
activation_9 (Activation) (None, 2) 0
=================================================================
Total params: 600,546
Trainable params: 600,162
Non-trainable params: 384
_________________________________________________________________
###Markdown
Export from .h5 to .tflite
###Code
!pip install tf-nightly
import tensorflow as tf
print("Tensorflow version: {}".format(tf.__version__))
tflite_path = os.path.join('.','keras_model','mlp.tflite')
assert os.path.exists(modelpath), ".h5 file is not found."
converter = tf.contrib.lite.TocoConverter.from_keras_model_file(modelpath)
tflite_model = converter.convert()
open(tflite_path, "wb").write(tflite_model)
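# Note: TocoConverter is deprecated (see the warning in the output below). On TF 1.x
# builds that still expose the Keras-file path, the equivalent call should be:
#   converter = tf.lite.TFLiteConverter.from_keras_model_file(modelpath)
# while on TF 2.x the loaded Keras model object is converted directly:
#   converter = tf.lite.TFLiteConverter.from_keras_model(model)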
###Output
WARNING:tensorflow:From <ipython-input-23-df30169a2d9d>:1: TocoConverter.from_keras_model_file (from tensorflow.lite.python.lite) is deprecated and will be removed in a future version.
Instructions for updating:
Use `lite.TFLiteConverter.from_keras_model_file` instead.
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/lite/python/lite.py:592: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.convert_variables_to_constants
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/graph_util_impl.py:245: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.extract_sub_graph
INFO:tensorflow:Froze 20 variables.
INFO:tensorflow:Converted 20 variables to const ops.
|
evaluation-core.ipynb | ###Markdown
Evaluation Core functions> Helper functions for evaluation operations.
###Code
# export
def box_area(box):
"""
Calculates the area of a bounding box.
Source code mainly taken from:
https://www.pyimagesearch.com/2016/11/07/intersection-over-union-iou-for-object-detection/
`box`: the bounding box to calculate the area for with the format ((x_min, x_max), (y_min, y_max))
return: the bounding box area
"""
return max(0, box[0][1] - box[0][0] + 1) * max(0, box[1][1] - box[1][0] + 1)
# export
def intersection_box(box_a, box_b):
"""
Calculates the intersection box from two bounding boxes with the format ((x_min, x_max), (y_min, y_max)).
Source code mainly taken from:
https://www.pyimagesearch.com/2016/11/07/intersection-over-union-iou-for-object-detection/
`box_a`: the first box
`box_b`: the second box
return: the intersection box
"""
# determine the (x, y)-coordinates of the intersection rectangle
x_a = max(box_a[0][0], box_b[0][0])
y_a = max(box_a[1][0], box_b[1][0])
x_b = min(box_a[0][1], box_b[0][1])
y_b = min(box_a[1][1], box_b[1][1])
return (x_a, x_b), (y_a, y_b)
# export
def union_box(box_a, box_b):
"""
Calculates the union box from two bounding boxes with the format ((x_min, x_max), (y_min, y_max)).
Source code mainly taken from:
https://www.pyimagesearch.com/2016/11/07/intersection-over-union-iou-for-object-detection/
`box_a`: the first box
`box_b`: the second box
return: the union box
"""
# determine the (x, y)-coordinates of the intersection rectangle
x_a = min(box_a[0][0], box_b[0][0])
y_a = min(box_a[1][0], box_b[1][0])
x_b = max(box_a[0][1], box_b[0][1])
y_b = max(box_a[1][1], box_b[1][1])
return (x_a, x_b), (y_a, y_b)
# export
def intersection_over_union(box_a, box_b):
"""
Intersection over Union (IoU) algorithm.
Calculates the IoU from two bounding boxes with the format ((x_min, x_max), (y_min, y_max)).
Source code mainly taken from:
https://www.pyimagesearch.com/2016/11/07/intersection-over-union-iou-for-object-detection/
`box_a`: the first box
`box_b`: the second box
return: the IoU
"""
# determine the (x, y)-coordinates of the intersection rectangle
inter_box = intersection_box(box_a, box_b)
# compute the area of intersection rectangle
inter_area = box_area(inter_box)
# compute the area of both the prediction and ground-truth
# rectangles
box_a_area = box_area(box_a)
box_b_area = box_area(box_b)
# compute the intersection over union by taking the intersection
# area and dividing it by the sum of prediction + ground-truth
# areas - the interesection area
iou = inter_area / float(box_a_area + box_b_area - inter_area)
# return the intersection over union value
return iou
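# Minimal sanity check with two hypothetical boxes in ((x_min, x_max), (y_min, y_max)) format:
#   intersection_over_union(((0, 10), (0, 10)), ((5, 15), (5, 15)))
# the 6x6-pixel overlap (36) over the union area (121 + 121 - 36 = 206) gives roughly 0.17.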
###Output
_____no_output_____
###Markdown
Helper Methods
###Code
# export
def configure_logging(logging_level=logging.INFO):
"""
Configures logging for the system.
:param logging_level: The logging level to use.
"""
logger.setLevel(logging_level)
handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging_level)
logger.addHandler(handler)
###Output
_____no_output_____
###Markdown
Run from command line To run the evaluation module from the command line, use the following command:`python -m mlcore.evaluation.core [parameters]` The following parameters are supported:- `[annotation]`: The path to the VIA annotation file (e.g. *imagesets/segmentation/car_damage/via_region_data.json*)
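For example, using the annotation path mentioned above (shown for illustration): `python -m mlcore.evaluation.core imagesets/segmentation/car_damage/via_region_data.json`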
###Code
# export
if __name__ == '__main__' and '__file__' in globals():
# for direct shell execution
configure_logging()
parser = argparse.ArgumentParser()
parser.add_argument("annotation",
help="The path to the VIA annotation file.")
args = parser.parse_args()
# hide
# for generating scripts from notebook directly
from nbdev.export import notebook2script
notebook2script()
###Output
Converted annotation-core.ipynb.
Converted annotation-folder_category_adapter.ipynb.
Converted annotation-multi_category_adapter.ipynb.
Converted annotation-via_adapter.ipynb.
Converted annotation-yolo_adapter.ipynb.
Converted annotation_converter.ipynb.
Converted annotation_viewer.ipynb.
Converted category_tools.ipynb.
Converted core.ipynb.
Converted dataset-core.ipynb.
Converted dataset-image_classification.ipynb.
Converted dataset-image_object_detection.ipynb.
Converted dataset-image_segmentation.ipynb.
Converted dataset-type.ipynb.
Converted dataset_generator.ipynb.
Converted evaluation-core.ipynb.
Converted geometry.ipynb.
Converted image-color_palette.ipynb.
Converted image-inference.ipynb.
Converted image-opencv_tools.ipynb.
Converted image-pillow_tools.ipynb.
Converted image-tools.ipynb.
Converted index.ipynb.
Converted io-core.ipynb.
Converted tensorflow-tflite_converter.ipynb.
Converted tensorflow-tflite_metadata.ipynb.
Converted tensorflow-tfrecord_builder.ipynb.
Converted tools-check_double_images.ipynb.
Converted tools-downloader.ipynb.
Converted tools-image_size_calculator.ipynb.
|
VeryNiceStarter_v2.ipynb | ###Markdown
helperclass info[Starter for Python Helpers module](https://gist.githubusercontent.com/bxck75/e2ec16b2eecbcd53f106c38ad350244a/raw/25d7ad2073336f191fd0449fb9a21439af812093/Helpers_Start.py) ``` 'HelpCore)', 'c_d', 'cd', 'cdr', 'check_img_list', 'cleanup_files', 'cloner', 'cprint', 'custom_reps_setup', 'docu', 'explore_mod', 'flickr_scrape', 'get_gdrive_dataset', 'get_other_reps', 'git_install_root', 'haar_detect', 'helpers_root', 'if_exists', 'img_batch_rename', 'importTboard', 'install_repos', 'into_func', 'landmarkdetect', 'landmarkdetecter', 'landmarker', 'list_to_file', 'no_action', 'path', 'rec_walk_folder', 'rep', 'root', 'root_dirname', 'root_filename', 'runProcess', 'set_maker', 'sorted_repos', 'sys_com', 'sys_log', 'system_log_file', 'valid_img', 'valid_list'``` code
###Code
!rm -r /content/installed_repos/Python_Helpers
'''#########################################'''
#''' K00B404 aka. BXCK75 '''#
''' notebook/colab kickstarter '''
#''' Look for stuff in main.py in the root '''#
''' Look for defs in Helpers/core.py file '''
################################################
from IPython.display import clear_output as clear
# from pprint import pprint as print
from pprint import pformat
from PIL import Image
import matplotlib.pyplot as plt
import cv2
import random
import os
import sys
import json
import IPython
''' default sample data delete '''
os.system('rm -r sample_data')
''' set root paths '''
root = '/content'
helpers_root = root + '/installed_repos/Python_Helpers'
''' setup install the Helpers module '''
os.system('git clone https://github.com/bxck75/Python_Helpers.git ' + helpers_root)
os.system('python ' + helpers_root + 'setup.py install')
''' preset helpers '''
os.chdir(helpers_root)
import main as main_core
MainCore = main_core.main()
HelpCore = MainCore.Helpers_Core
HC = HelpCore
HC.root = root
FScrape = HC.flickr_scrape
fromGdrive = HC.GdriveD
toGdrive = HC.ZipUp.ZipUp
''' Clear output '''
clear()
# clear output folders of scraper
HelpCore.sys_com('rm -r /content/*_images')
HelpCore.sys_com('rm -r /content/faces')
HelpCore.sys_com('rm -r /content/proc_images')
help(HelpCore.FaceRip)
''' Flickr scraper '''
# Set
search_list, qty, fl_dir = ['face','portrait'], 10, 'flr_images'
# Scrape
HC.FlickrS(search_list,qty,fl_dir)
clear()
''' Google, Bing, Baidu scraper '''
# Set
search,qty,gbb_dir = search_list[0], 10, 'BBS_images'
# Scrape
HC.ICrawL(search, qty, gbb_dir )
clear()
''' Combine the 2 resulting image folders sequencial into the final loot folder'''
# Set
loot_folder ='/content/final_loot'
fld1, fld2, out = fl_dir, gbb_dir, loot_folder
# Combine
HC.combine_img_folders(fld1, fld2, out)
clear()
''' Gloom over loot Mhoaohaoaohaha '''
img_lst = HelpCore.GlobX(loot_folder, '*.*g')
print('looted ' + str(len(img_lst)) + ' images')
img_lst.sort()
''' Capture all faces from the scrape loot'''
resized_face_img_lst = HC.FaceRip(loot_folder)
''' get the resized faces list '''
print('looted ' + str(len(resized_face_img_lst)) + ' images')
resized_face_img_lst.sort()
resized_face_img_lst
sys.exit(1)
''' Count the face loot '''
face_img_lst = HelpCore.GlobX('/content/faces', 'face_img_*.*g')
print('looted ' + str(len(face_img_lst)) + ' images')
face_img_lst.sort()
''' Count the landmark loot must be same nr as face loot '''
landmark_img_lst = HelpCore.GlobX('/content/faces', 'landmark_blank_face_img_*.*g')
print('looted ' + str(len(landmark_img_lst)) + ' images')
landmark_img_lst.sort()
HC.combine_AB('/content/resized_faces/org', '/content/resized_faces/transp')
# # help(HC.ZipUp)
# result=toGdrive('faces_landmark_set',HC.gdrive_root,'/content/faces').ZipUp
# result=toGdrive('combined_landmark_set',HC.gdrive_root,'/content/combined').ZipUp
###Output
_____no_output_____
###Markdown
Experimental
###Code
import numpy as np
def get_one_image(images):
img_list = []
padding = 200
for img in images:
img_list.append(cv2.imread(img))
max_width = []
max_height = 0
for img in img_list:
max_width.append(img.shape[0])
max_height += img.shape[1]
w = np.max(max_width)
h = max_height + padding
# create a new array with a size large enough to contain all the images
final_image = np.zeros((h, w, 3), dtype=np.uint8)
current_y = 0 # keep track of where your current image was last placed in the y coordinate
for image in img_list:
# add an image to the final array and increment the y coordinate
final_image[current_y:image.shape[0] + current_y, :image.shape[1], :] = image
current_y += image.shape[0]
cv2.imwrite('out.png', final_image)
landmark_img_lst
get_one_image(landmark_img_lst)
import numpy as np
# print(os.path.join('ikke','prikke','prakke','prokke.jpg'))
# black blank image
blank_image = np.zeros(shape=[512, 512, 3], dtype=np.uint8)
# print(blank_image.shape)
plt.imshow(blank_image)
# new figure
# shape
shape = blank_image.shape
print(shape)
plt.figure()
# white blank image
blank_image2 = 255 * np.ones(shape=[512, 512, 3], dtype=np.uint8)
plt.imshow(blank_image2)
# HC.Face('/content/final_loot/img_0008.jpg')
# HC.DFace.align((400,400,3), '/content/final_loot/img_0007.jpg')
# align(imgDim, rgbImg, bb=None, landmarks=None, landmarkIndices=INNER_EYES_AND_BOTTOM_LIP)
# help(HC.ColorPrint)
# from colorama import init, Fore, Back, Style
# init(convert=True)
# print(Fore.RED + 'some red text')
# print(Back.GREEN + 'and with a green background')
# print(Style.DIM + 'and in dim text')
# print(Style.RESET_ALL)
# print('back to normal now')
# HC.ColorPrint( 'some red text')
# help(cv2.face_FaceRecognizer.predict)
# # Research list
# cv2.dnn_Layer
# cv2.face_BIF
# cv2.face_FaceRecognizer
# cv2.face_BasicFaceRecognizer
# cv2.face_EigenFaceRecognizer
# cv2.face_FisherFaceRecognizer
# cv2.face_LBPHFaceRecognizer
# cv2.ximgproc_FastLineDetector
# cv2.CascadeClassifier.load() # loads a classifier file
# cv2.CirclesGridFinderParameters
# cv2.CirclesGridFinderParameters2
# cv2.VideoCapture
# cv2.text_TextDetector
# cv2.text_TextDetectorCNN
# dir(HC)
# ''' Go Home '''
# HC.c_d(HC.root,True)
# ''' Set the scrape details '''
# scrape_run_name = 'portrait'
# search_list = [ scrape_run_name ]
# quantity=200
# ''' Run the Scraper '''
# print(HC.FlickrS( search_list, quantity, scrape_run_name ))
# clear()
# img_folder = '/content/' + scrape_run_name
# img_lst = HelpCore.GlobX(img_folder, '*.*g')
# print(len(img_lst))
# for x in range(10):
# rnd_nr = random.randint(0,len(img_lst)-1)
# print(img_lst[rnd_nr])
# single_image = img_lst[rnd_nr]
# # l_img = cv2.imread(img_lst[rnd_nr])
# # plt.imshow(l_img)
# # plt.figure()
# plt.show
# clear()
# from random import Random
# print(Random(len(img_lst)-1))
# Random(5,5)
# HelpCore.GlobX(img_folder, '*.*g')
# img_lst
# rnd_nr = random.randint(0,len(img_lst)-1)
# HelpCore.ShowImg(img_lst[rnd_nr],1)
# HC.Resize.resize_single(img_lst[random.randint(0,len(img_lst)-1)], pad=True, size=400)
dir(HC)
# help()
# HC.Resize.resize_folder('/content/portrait')
#
# HelpCore.haar_detect(img,image_out)
# HelpCore.cloner
# HelpCore.FlickrS
# HelpCore.GlobX
# HelpCore.MethHelp
# HelpCore.fromGdrive
# HelpCore.toGdrive
# HelpCore.ShowImg
# # HelpCore.resize
# # HelperCore.path_split
# # HelpCore.HaarDetect
# # HelpCore.HaarDetect
# org_img = '/content/installed_repos/face-recognition/images/Colin_Powell/Colin_Powell_0004.jpg'
# org_path_dict = path_split(org_img)
# marked_path_dict = org_path_dict
# marked_path_dict['path'][len(marked_path_dict['path'])-1] = marked_path_dict['path'][len(marked_path_dict['path'])-1]+'_marked'
# print(org_path_dict['path'][len(org_path_dict['path'])-1])
# print(marked_path_dict['path'][len(marked_path_dict['path'])-1])
# # print(org_path_dict)
# new_img = org_img.replace('images/','images_marked/').replace('.jpg', '.jpg')
# os.makedirs(new_img, exist_ok=True)
# ''' Detect the landmarks '''
# HelpCore.landmarker( org_img, org_path_dict['path'][:2])
dir(HC)
# cv2.face_FaceRecognizer.predict()
imgs = HelpCore.GlobX('/content/images','*.*g')
n_row, n_col = 3, 3
_, axs = plt.subplots(n_row, n_col, figsize=(10,10))
axs = axs.flatten()
for img, ax in zip(imgs, axs):
i = cv2.imread(img)
ax.set_title(str(img))
ax.axis('off')
ax.imshow(i)
plt.show()
# import cv2
# import matplotlib.pyplot as plt
# random_path = img_lst[random.randint(0,len(img_lst)-1)]
# img = cv2.imread(random_path)
# img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# plt.imshow(img)
# from numpy import random
# import matplotlib.pyplot as plt
# print(img.shape)
# data = random.random((5,5))
# img = plt.imshow(img)
# img.set_cmap('tab10')
# plt.axis('off')
# plt.figure( num=65, figsize=(100,100), dpi=10, facecolor='blue', edgecolor='red', frameon=False, clear=False )
# cv2.imwrite("test.png", img)
# plt.savefig("test.png", bbox_inches='tight')
# plt.show
# img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# plt.imshow(img_rgb)
# dir(cv2)
# help(plt.imshow)
# os.chdir('/content')
# import cv2
# img_path='/content/images_google/000005.jpg'
# img = cv2.imread(img_path)
# detector = cv2.FastFeatureDetector_create()
# # plt.figure(1)
# # img = cv2.cvtColor( img, cv2.COLOR_BGR2Luv)
# # plt.axis('off')
# # plt.imshow(img)
# plt.figure(1)
# img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# plt.axis('off')
# plt.title=''
# plt.imshow(img_rgb)
# # img save
# cv2.imwrite('final_image.png',img)
# plt.show
# plt.figure(2)
# # blank image
# image_blank = np.zeros(shape=(512,512,3),dtype=np.int16)
# plt.imshow(image_blank)
# plt.axis('off')
# plt.show
# cv2.destroyAllWindows()
# HelpCore.GdriveD.GdriveD('1fMOT_0f5clPbZXsphZyrGcLXkIhSDl3o','shape_predictor_194_face_landmarks.zip')
# os.system('unzip /content/shape_predictor_194_face_landmarks.zip')
# help()
import sys
import os
import dlib
import glob
import cv2
import numpy as np
import dlib
import matplotlib.pyplot as plt
def FaceRip(folder='/content/portrait'):
'''
Rip all faces from a images folder
Example:
FaceRip(folder='/content/portrait')
'''
    ''' Download and unzip the dlib 194-point shape predictor '''
HelpCore.c_d(HelpCore.root)
predictor_file = ['shape_predictor_194_face_landmarks.zip','1fMOT_0f5clPbZXsphZyrGcLXkIhSDl3o']
HelpCore.GdriveD.GdriveD(predictor_file[1],predictor_file[0])
os.system('unzip /content/shape_predictor_194_face_landmarks.zip')
''' Detector predictor load'''
predictor = predictor_file[0].replace('zip','dat')
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(predictor)
''' glob the folder '''
lst = HelpCore.GlobX(folder,'*.*g')
    ''' iterate over the images '''
    iter = 0
    for im in range(len(lst)):
        # load the image
        img = cv2.imread(lst[im])
# make grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# detect faces
faces = detector(gray)
# iterate over the faces
for face in faces:
x1 = face.left()
y1 = face.top()
x2 = face.right()
y2 = face.bottom()
# cv2.rectangle(img, (x1-10, y1-10), (x2+10, y2+10), (0, 255, 0), 3) # draw rectangle on the image
size_increase = 50 # increase size of face image output
# save face without landmarkers
res2 = img[y1-size_increase :y2+size_increase ,x1-size_increase :x2+size_increase ]
#grayscale of face
res2_image_gray = gray[y1-size_increase :y2+size_increase ,x1-size_increase :x2+size_increase ]
#save
os.makedirs(HelpCore.root+'/faces', exist_ok=True)
cv2.imwrite(HelpCore.root+'/faces/face_img_'+str(iter)+'.jpg', res2)
cv2.imwrite(HelpCore.root+'/faces/gray_face_img_'+str(iter)+'.jpg', res2_image_gray)
            # get the landmarks of the face
landmarks = predictor(gray, face)
for n in range(0, 194):
x = landmarks.part(n).x
y = landmarks.part(n).y
cv2.circle(img, (x, y), 4, (255, 0, 100), -1)
cv2.circle(gray, (x, y), 4, (255, 100, 0), -1)
# face image
res2 = img[y1-size_increase :y2+size_increase ,x1-size_increase :x2+size_increase ]
#grayscale face image
res2_image_gray = gray[y1-size_increase :y2+size_increase ,x1-size_increase :x2+size_increase ]
#save
os.makedirs(HelpCore.root+'/faces', exist_ok=True)
cv2.imwrite(HelpCore.root+'/faces/landmark_face_img_'+str(iter)+'.jpg', res2)
cv2.imwrite(HelpCore.root+'/faces/landmark_gray_face_img_'+str(iter)+'.jpg', res2_image_gray)
# save total
os.makedirs(HelpCore.root + '/proc_images/total', exist_ok=True)
cv2.imwrite(HelpCore.root + '/proc_images/total/img_'+str(iter)+'.jpg', img)
cv2.imwrite(HelpCore.root + '/proc_images/total/gray_img_'+str(iter)+'.jpg', gray)
iter += 1
FaceRip()
img_lst=HelpCore.GlobX('/content/faces','landmark*.*g')
rnd_img_path = img_lst[random.randint(0,len(img_lst)-1)]
print(rnd_img_path)
HelpCore.ShowImg(rnd_img_path, 1)
dir(HC.ShowImg)
# HelpCore.GlobX('/content/images_bohdgaya', '*.*g')
# from pydrive.auth import GoogleAuth
# from pydrive.drive import GoogleDrive
# def GFoldeR(mode='show', file ,folder='/content')
# gauth = GoogleAuth()
# gauth.LocalWebserverAuth()
# drive = GoogleDrive(gauth)
# if mode == 'create':
# # Create folder.
# folder_metadata = {
# 'title' : '<your folder name here>',
# # The mimetype defines this new file as a folder, so don't change this.
# 'mimeType' : 'application/vnd.google-apps.folder'
# }
# folder = drive.CreateFile(folder_metadata)
# folder.Upload()
# if mode == 'info':
# # Get folder info and print to screen.
# folder_title = folder['title']
# folder_id = folder['id']
# print('title: %s, id: %s' % (folder_title, folder_id))
# if mode == 'upload':
# # Upload file to folder.
# f = drive.CreateFile({"parents": [{"kind": "drive#fileLink", "id": folder_id}]})
# # Make sure to add the path to the file to upload below.
# f.SetContentFile('<file path here>')
# f.Upload()
# import random
# import numpy as np
# import cv2 as cv
# frame1 = cv.imread(cv.samples.findFile('lena.jpg'))
# if frame1 is None:
# print("image not found")
# exit()
# frame = np.vstack((frame1,frame1))
# facemark = cv.face.createFacemarkLBF()
# try:
# facemark.loadModel(cv.samples.findFile('lbfmodel.yaml'))
# except cv.error:
# print("Model not found\nlbfmodel.yaml can be download at")
# print("https://raw.githubusercontent.com/kurnianggoro/GSOC2017/master/data/lbfmodel.yaml")
# cascade = cv.CascadeClassifier(cv.samples.findFile('lbpcascade_frontalface_improved.xml'))
# if cascade.empty() :
# print("cascade not found")
# exit()
# faces = cascade.detectMultiScale(frame, 1.05, 3, cv.CASCADE_SCALE_IMAGE, (30, 30))
# ok, landmarks = facemark.fit(frame, faces=faces)
# cv.imshow("Image", frame)
# for marks in landmarks:
# couleur = (random.randint(0,255),
# random.randint(0,255),
# random.randint(0,255))
# cv.face.drawFacemarks(frame, marks, couleur)
# cv.imshow("Image Landmarks", frame)
# cv.waitKey()
# 194_point_lABDNAKE1fMOT_0f5clPbZXsphZyrGcLXkIhSDl3o
import dlib
from skimage import io
def LandM(image=''):
    ''' Download and unzip the dlib 194-point shape predictor '''
lm_file = [
'shape_predictor_194_face_landmarks.zip',
'1fMOT_0f5clPbZXsphZyrGcLXkIhSDl3o',
]
HC.GdriveD.GdriveD(lm_file[1],lm_file[0])
    os.system('unzip ' + HC.root + '/' + lm_file[0])
    ''' the shape_predictor .dat file is the trained model, extracted into the root directory '''
    predictor_path = HC.root + '/' + lm_file[0].replace('zip','dat')
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(predictor_path)
    ''' pick a random picture from the scraped image list to process '''
random_path = img_lst[random.randint(0,len(img_lst)-1)]
img = cv2.imread(random_path)
dets = detector(img)
print("Number of faces detected: {}".format(len(dets)))
for k, d in enumerate(dets):
print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
k, d.left(), d.top(), d.right(), d.bottom()))
# Get the landmarks/parts for the face in box d.
        shape = predictor(img, d)
        print("Part 0: {}, Part 1: {} ...".format(shape.part(0), shape.part(1)))
        # Draw the face landmarks on the image and display it.
        for n in range(shape.num_parts):
            cv2.circle(img, (shape.part(n).x, shape.part(n).y), 2, (0, 255, 0), -1)
    plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    plt.show()
###Output
_____no_output_____ |
3. Landmark Detection and Tracking(1).ipynb | ###Markdown
Project 3: Implement SLAM --- Project OverviewIn this project, you'll implement SLAM for a robot that moves and senses in a 2-dimensional grid world!SLAM gives us a way to both localize a robot and build up a map of its environment as a robot moves and senses in real-time. This is an active area of research in the fields of robotics and autonomous systems. Since this localization and map-building relies on the visual sensing of landmarks, this is a computer vision problem. Using what you've learned about robot motion, representations of uncertainty in motion and sensing, and localization techniques, you will be tasked with defining a function, `slam`, which takes in six parameters as input and returns the vector `mu`. > `mu` contains the (x,y) coordinate locations of the robot as it moves, and the positions of landmarks that it senses in the worldYou can implement helper functions as you see fit, but your function must return `mu`. The vector, `mu`, should have (x, y) coordinates interlaced, for example, if there were 2 poses and 2 landmarks, `mu` will look like the following, where `P` is the robot position and `L` the landmark position:```mu = matrix([[Px0], [Py0], [Px1], [Py1], [Lx0], [Ly0], [Lx1], [Ly1]])```You can see that `mu` holds the poses first `(x0, y0), (x1, y1), ...,` then the landmark locations at the end of the matrix; we consider a `nx1` matrix to be a vector. Generating an environmentIn a real SLAM problem, you may be given a map that contains information about landmark locations, and in this example, we will make our own data using the `make_data` function, which generates a world grid with landmarks in it and then generates data by placing a robot in that world and moving and sensing over some number of time steps. The `make_data` function relies on a correct implementation of robot move/sense functions, which, at this point, should be complete and in the `robot_class.py` file. The data is collected as an instantiated robot moves and senses in a world. Your SLAM function will take in this data as input. So, let's first create this data and explore how it represents the movement and sensor measurements that our robot takes.--- Create the worldUse the code below to generate a world of a specified size with randomly generated landmark locations. You can change these parameters and see how your implementation of SLAM responds! `data` holds the sensor measurements and motion of your robot over time. It stores the measurements as `data[i][0]` and the motion as `data[i][1]`. Helper functionsYou will be working with the `robot` class that may look familiar from the first notebook. In fact, in the `helpers.py` file, you can read the details of how data is made with the `make_data` function. It should look very similar to the robot move/sense cycle you've seen in the first notebook.
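Before generating any data, here is a minimal sketch (not part of the required project code) of how an interlaced `mu` like the one described above can be split back into poses and landmarks; the `get_poses_landmarks` helper defined later in this notebook does the same thing element by element:
```
import numpy as np

def split_mu(mu, N, num_landmarks):
    # mu is the (2*(N + num_landmarks), 1) solution vector described above
    flat = np.asarray(mu).reshape(-1)
    poses = flat[:2 * N].reshape(N, 2)                  # rows of (x, y) robot poses
    landmarks = flat[2 * N:].reshape(num_landmarks, 2)  # rows of (x, y) landmark positions
    return poses, landmarks
```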
###Code
import numpy as np
from helpers import make_data
# your implementation of slam should work with the following inputs
# feel free to change these input values and see how it responds!
# world parameters
num_landmarks = 5 # number of landmarks
N = 20 # time steps
world_size = 100.0 # size of world (square)
# robot parameters
measurement_range = 50.0 # range at which we can sense landmarks
motion_noise = 2.0 # noise in robot motion
measurement_noise = 2.0 # noise in the measurements
distance = 20.0 # distance by which robot (intends to) move each iteratation
# make_data instantiates a robot, AND generates random landmarks for a given world size and number of landmarks
data = make_data(N, num_landmarks, world_size, measurement_range, motion_noise, measurement_noise, distance)
###Output
Landmarks: [[62, 72], [69, 35], [83, 76], [94, 21], [5, 53]]
Robot: [x=79.67218 y=72.85053]
###Markdown
A note on `make_data`The function above, `make_data`, takes in so many world and robot motion/sensor parameters because it is responsible for:1. Instantiating a robot (using the robot class)2. Creating a grid world with landmarks in it**This function also prints out the true location of landmarks and the *final* robot location, which you should refer back to when you test your implementation of SLAM.**The `data` this returns is an array that holds information about **robot sensor measurements** and **robot motion** `(dx, dy)` that is collected over a number of time steps, `N`. You will have to use *only* these readings about motion and measurements to track a robot over time and determine the location of the landmarks using SLAM. We only print out the true landmark locations for comparison, later.In `data` the measurement and motion data can be accessed from the first and second index in the columns of the data array. See the following code for an example, where `i` is the time step:```measurement = data[i][0]motion = data[i][1]```
###Code
# print out some stats about the data
time_step = 0
print('Example measurements: \n', data[time_step][0])
print('\n')
print('Example motion: \n', data[time_step][1])
###Output
Example measurements:
[[0, 13.647521163475375, 23.706783905618934], [1, 20.934049052716716, -14.573292925217373], [2, 34.1801983802559, 26.448965426336528], [4, -45.11318483455979, 2.98653563016196]]
Example motion:
[18.767291894847673, 6.91294112036149]
###Markdown
Try changing the value of `time_step`; you should see that the list of measurements varies based on what in the world the robot sees after it moves. As you know from the first notebook, the robot can only sense so far and with a certain amount of accuracy in the measure of distance between its location and the location of landmarks. The motion of the robot is always a vector with two values: one for x and one for y displacement. This structure will be useful to keep in mind as you traverse this data in your implementation of slam. Initialize ConstraintsOne of the most challenging tasks here will be to create and modify the constraint matrix and vector: omega and xi. In the second notebook, you saw an example of how omega and xi could hold all the values that define the relationships between robot poses `xi` and landmark positions `Li` in a 1D world, as seen below, where omega is the blue matrix and xi is the pink vector.In *this* project, you are tasked with implementing constraints for a 2D world. We are referring to robot poses as `Px, Py` and landmark positions as `Lx, Ly`, and one way to approach this challenge is to add *both* x and y locations in the constraint matrices.You may also choose to create two of each omega and xi (one for x and one for y positions). TODO: Write a function that initializes omega and xiComplete the function `initialize_constraints` so that it returns `omega` and `xi` constraints for the starting position of the robot. Any values that we do not yet know should be initialized with the value `0`. You may assume that our robot starts out in exactly the middle of the world with 100% confidence (no motion or measurement noise at this point). The inputs `N` time steps, `num_landmarks`, and `world_size` should give you all the information you need to construct initial constraints of the correct size and starting values.*Depending on your approach you may choose to return one omega and one xi that hold all (x,y) positions *or* two of each (one for x values and one for y); choose whichever makes most sense to you!*
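As a small worked check (assuming the single-`omega` layout with interlaced x,y entries; this is only one of the allowed approaches), for `N = 2` time steps and `1` landmark the constraint matrix is `6 x 6` and only the entries for the initial pose are non-zero:
```
import numpy as np

N_demo, num_landmarks_demo, world_demo = 2, 1, 10.0
size = 2 * (N_demo + num_landmarks_demo)        # x and y for every pose and landmark
omega_demo = np.zeros((size, size))
omega_demo[0, 0] = omega_demo[1, 1] = 1.0       # 100% confidence in the initial x and y
xi_demo = np.zeros(size)
xi_demo[0] = xi_demo[1] = world_demo / 2.0      # robot starts in the middle of the world
```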
###Code
def initialize_constraints(N, num_landmarks, world_size):
''' This function takes in a number of time steps N, number of landmarks, and a world_size,
and returns initialized constraint matrices, omega and xi.'''
## Recommended: Define and store the size (rows/cols) of the constraint matrix in a variable
## TODO: Define the constraint matrix, Omega, with two initial "strength" values
## for the initial x, y location of our robot
print('world size:', world_size, 'num_landmarks:', num_landmarks)
size = (N + num_landmarks) * 2
print("size:", size)
omega = np.zeros((size, size))
print("omega.size():", omega.shape)
omega[0,0] = 1
omega[1,1] = 1
## TODO: Define the constraint *vector*, xi
## you can assume that the robot starts out in the middle of the world with 100% confidence
xi = np.zeros(size)
xi[0] = int(world_size / 2)
xi[1] = int(world_size / 2)
"""
print("omega:")
print(omega)
print("xi:")
print(xi)
"""
return omega, xi
###Output
_____no_output_____
###Markdown
Test as you goIt's good practice to test out your code, as you go. Since `slam` relies on creating and updating constraint matrices, `omega` and `xi` to account for robot sensor measurements and motion, let's check that they initialize as expected for any given parameters.Below, you'll find some test code that allows you to visualize the results of your function `initialize_constraints`. We are using the [seaborn](https://seaborn.pydata.org/) library for visualization.**Please change the test values of N, landmarks, and world_size and see the results**. Be careful not to use these values as input into your final slam function.This code assumes that you have created one of each constraint: `omega` and `xi`, but you can change and add to this code, accordingly. The constraints should vary in size with the number of time steps and landmarks as these values affect the number of poses a robot will take `(Px0,Py0,...Pxn,Pyn)` and landmark locations `(Lx0,Ly0,...Lxn,Lyn)` whose relationships should be tracked in the constraint matrices. Recall that `omega` holds the weights of each variable and `xi` holds the value of the sum of these variables, as seen in Notebook 2. You'll need the `world_size` to determine the starting pose of the robot in the world and fill in the initial values for `xi`.
###Code
# import data viz resources
import matplotlib.pyplot as plt
from pandas import DataFrame
import seaborn as sns
%matplotlib inline
# define a small N and world_size (small for ease of visualization)
N_test = 5
num_landmarks_test = 2
small_world = 10
# initialize the constraints
initial_omega, initial_xi = initialize_constraints(N_test, num_landmarks_test, small_world)
# define figure size
plt.rcParams["figure.figsize"] = (10,7)
# display omega
sns.heatmap(DataFrame(initial_omega), cmap='Blues', annot=True, linewidths=.5)
# define figure size
plt.rcParams["figure.figsize"] = (1,7)
# display xi
sns.heatmap(DataFrame(initial_xi), cmap='Oranges', annot=True, linewidths=.5)
###Output
_____no_output_____
###Markdown
--- SLAM inputs In addition to `data`, your slam function will also take in:* N - The number of time steps that a robot will be moving and sensing* num_landmarks - The number of landmarks in the world* world_size - The size (w/h) of your world* motion_noise - The noise associated with motion; the update confidence for motion should be `1.0/motion_noise`* measurement_noise - The noise associated with measurement/sensing; the update weight for measurement should be `1.0/measurement_noise` A note on noiseRecall that `omega` holds the relative "strengths" or weights for each position variable, and you can update these weights by accessing the correct index in omega `omega[row][col]` and *adding/subtracting* `1.0/noise` where `noise` is measurement or motion noise. `Xi` holds actual position values, and so to update `xi` you'll do a similar addition process only using the actual value of a motion or measurement. So for a vector index `xi[row][0]` you will end up adding/subtracting one measurement or motion divided by their respective `noise`. TODO: Implement Graph SLAMFollow the TODO's below to help you complete this slam implementation (these TODO's are in the recommended order), then test out your implementation! Updating with motion and measurementsWith a 2D omega and xi structure as shown above (in earlier cells), you'll have to be mindful about how you update the values in these constraint matrices to account for motion and measurement constraints in the x and y directions. Recall that the solution to these matrices (which holds all values for robot poses `P` and landmark locations `L`) is the vector, `mu`, which can be computed at the end of the construction of omega and xi as the inverse of omega times xi: $\mu = \Omega^{-1}\xi$**You may also choose to return the values of `omega` and `xi` if you want to visualize their final state!**
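To make the update pattern concrete before writing the 2D version, here is a toy 1D sketch (illustrative only, with made-up numbers; it is not part of the required implementation) that encodes the initial pose and a single motion constraint, then solves $\mu = \Omega^{-1}\xi$:
```
import numpy as np

motion_noise = 2.0
dx = 10.0                      # the robot (intends to) move 10 units to the right

omega = np.zeros((2, 2))       # entries for x0 and x1 only
xi = np.zeros(2)

# initial position constraint: x0 = 50 with 100% confidence
omega[0, 0] += 1.0
xi[0] += 50.0

# motion constraint x1 - x0 = dx: add/subtract 1/noise in omega and dx/noise in xi
omega[0, 0] += 1.0 / motion_noise
omega[0, 1] += -1.0 / motion_noise
omega[1, 0] += -1.0 / motion_noise
omega[1, 1] += 1.0 / motion_noise
xi[0] += -dx / motion_noise
xi[1] += dx / motion_noise

mu = np.linalg.inv(omega).dot(xi)
print(mu)                      # -> approximately [50., 60.]
```
A measurement constraint between a pose and a landmark follows the same pattern, with `measurement_noise` in place of `motion_noise`.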
###Code
## TODO: Complete the code to implement SLAM
## slam takes in 6 arguments and returns mu,
## mu is the entire path traversed by a robot (all x,y poses) *and* all landmarks locations
def slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise):
    ## TODO: Use your initialization to create constraint matrices, omega and xi
omega, xi = initialize_constraints(N, num_landmarks, world_size)
#print("omega size:", omega.shape, "xi.size():", xi.shape)
## TODO: Iterate through each time step in the data
#print("N:", N, "len (data):", len(data))
for i, dat in enumerate(data):
## get all the motion and measurement data as you iterate
measurement = dat[0]
motion = dat[1]
## TODO: update the constraint matrix/vector to account for all *measurements*
## this should be a series of additions that take into account the measurement noise
for meas in measurement:
#print('j:', j)
#diff_om = np.zeros(omega.shape)
#diff_xi = np.zeros(xi.shape)
id_m = meas[0]
dx = meas[1] * 1.0 / measurement_noise
dy = meas[2] * 1.0 / measurement_noise
posi_x0 = i * 2
posi_y0 = i * 2 + 1
posi_x1 = N * 2 + id_m * 2
posi_y1 = posi_x1 + 1
# Y direction
omega[posi_y0][posi_y0] += 1.0 / measurement_noise
omega[posi_y0][posi_y1] += -1.0 / measurement_noise
omega[posi_y1][posi_y0] += -1.0 / measurement_noise
omega[posi_y1][posi_y1] += 1.0 / measurement_noise
xi[posi_y0] += -dy
xi[posi_y1] += dy
# X direction
omega[posi_x0][posi_x0] += 1.0 / measurement_noise
omega[posi_x0][posi_x1] += -1.0 / measurement_noise
omega[posi_x1][posi_x0] += -1.0 / measurement_noise
omega[posi_x1][posi_x1] += 1.0 / measurement_noise
xi[posi_x0] += -dx
xi[posi_x1] += dx
## TODO: update the constraint matrix/vector to account for all *motion* and motion noise
dx = motion[0] * 1.0 / motion_noise
dy = motion[1] * 1.0 / motion_noise
#diff_om = np.zeros(omega.shape)
#diff_xi = np.zeros(xi.shape)
posi_x0 = i * 2
posi_y0 = i * 2 + 1
posi_x1 = i * 2 + 2
posi_y1 = i * 2 + 3
# X direction
omega[posi_x0][posi_x0] += 1.0 / motion_noise
omega[posi_x0][posi_x1] += -1.0 / motion_noise
omega[posi_x1][posi_x0] += -1.0 / motion_noise
omega[posi_x1][posi_x1] += 1.0 / motion_noise
xi[posi_x0] += -dx
xi[posi_x1] += dx
# Y direction
omega[posi_y0][posi_y0] += 1.0 / motion_noise
omega[posi_y0][posi_y1] += -1.0 / motion_noise
omega[posi_y1][posi_y0] += -1.0 / motion_noise
omega[posi_y1][posi_y1] += 1.0 / motion_noise
xi[posi_y0] += -dy
xi[posi_y1] += dy
## TODO: After iterating through all the data
## Compute the best estimate of poses and landmark positions
## using the formula, omega_inverse * Xi
print('det omega:', np.linalg.det(omega))
omega_inv = np.linalg.inv(omega)
mu = np.dot(omega_inv, xi)
return mu # return `mu`
###Output
_____no_output_____
###Markdown
Helper functionsTo check that your implementation of SLAM works for various inputs, we have provided two helper functions that will help display the estimated pose and landmark locations that your function has produced. First, given a result `mu` and number of time steps, `N`, we define a function that extracts the poses and landmarks locations and returns those as their own, separate lists. Then, we define a function that nicely print out these lists; both of these we will call, in the next step.
###Code
# a helper function that creates a list of poses and of landmarks for ease of printing
# this only works for the suggested constraint architecture of interlaced x,y poses
def get_poses_landmarks(mu, N):
# create a list of poses
poses = []
for i in range(N):
poses.append((mu[2*i].item(), mu[2*i+1].item()))
# create a list of landmarks
landmarks = []
for i in range(num_landmarks):
landmarks.append((mu[2*(N+i)].item(), mu[2*(N+i)+1].item()))
# return completed lists
return poses, landmarks
def print_all(poses, landmarks):
print('\n')
print('Estimated Poses:')
for i in range(len(poses)):
print('['+', '.join('%.3f'%p for p in poses[i])+']')
print('\n')
print('Estimated Landmarks:')
for i in range(len(landmarks)):
print('['+', '.join('%.3f'%l for l in landmarks[i])+']')
###Output
_____no_output_____
###Markdown
Run SLAMOnce you've completed your implementation of `slam`, see what `mu` it returns for different world sizes and different landmarks! What to ExpectThe `data` that is generated is random, but you did specify the number, `N`, of time steps that the robot was expected to move and the `num_landmarks` in the world (which your implementation of `slam` should see and estimate a position for). Your robot should also start with an estimated pose in the very center of your square world, whose size is defined by `world_size`.With these values in mind, you should expect to see a result that displays two lists:1. **Estimated poses**, a list of (x, y) pairs that is exactly `N` in length since this is how many motions your robot has taken. The very first pose should be the center of your world, i.e. `[50.000, 50.000]` for a world that is 100.0 in square size.2. **Estimated landmarks**, a list of landmark positions (x, y) that is exactly `num_landmarks` in length. Landmark LocationsIf you refer back to the printout of *exact* landmark locations when this data was created, you should see values that are very similar to those coordinates, but not quite (since `slam` must account for noise in motion and measurement).
###Code
# call your implementation of slam, passing in the necessary parameters
mu = slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise)
# print out the resulting landmarks and poses
if(mu is not None):
# get the lists of poses and landmarks
# and print them out
poses, landmarks = get_poses_landmarks(mu, N)
print_all(poses, landmarks)
###Output
world size: 100.0 num_landmarks: 5
size: 50
omega.size(): (50, 50)
det omega: 3.22564288663e+15
Estimated Poses:
[50.000, 50.000]
[69.356, 58.219]
[87.653, 64.251]
[95.054, 44.974]
[99.080, 64.365]
[95.852, 84.442]
[75.783, 80.524]
[56.983, 76.790]
[39.632, 72.233]
[18.348, 68.887]
[18.561, 49.479]
[19.980, 29.602]
[21.985, 9.960]
[40.556, 0.891]
[51.846, 17.593]
[61.414, 33.358]
[71.731, 50.346]
[84.451, 66.114]
[96.612, 82.710]
[80.105, 71.417]
Estimated Landmarks:
[62.733, 72.053]
[69.975, 35.673]
[83.925, 76.316]
[94.403, 22.199]
[6.427, 53.221]
###Markdown
Visualize the constructed worldFinally, using the `display_world` code from the `helpers.py` file (which was also used in the first notebook), we can actually visualize what you have coded with `slam`: the final position of the robot and the position of landmarks, created from only motion and measurement data!**Note that these should be very similar to the printed *true* landmark locations and final pose from our call to `make_data` early in this notebook.**
###Code
# import the helper function
from helpers import display_world
# Display the final world!
# define figure size
plt.rcParams["figure.figsize"] = (20,20)
# check if poses has been created
if 'poses' in locals():
# print out the last pose
print('Last pose: ', poses[-1])
# display the last position of the robot *and* the landmark positions
display_world(int(world_size), poses[-1], landmarks)
###Output
Last pose: (80.10544596859094, 71.41662085749745)
###Markdown
Question: How far away is your final pose (as estimated by `slam`) compared to the *true* final pose? Why do you think these poses are different?You can find the true value of the final pose in one of the first cells where `make_data` was called. You may also want to look at the true landmark locations and compare them to those that were estimated by `slam`. Ask yourself: what do you think would happen if we moved and sensed more (increased N)? Or if we had lower/higher noise parameters. **Answer**: The ground-truth final pose of the robot is [x=79.67218 y=72.85053], which is slightly different from the SLAM estimate (80.10544596859094, 71.41662085749745). The discrepancy is caused by the noise terms in the measurements and the motion. If N increases or the noise parameters increase, the difference would be larger. TestingTo confirm that your slam code works before submitting your project, it is suggested that you run it on some test data and cases. A few such cases have been provided for you, in the cells below. When you are ready, uncomment the test cases in the next cells (there are two test cases, total); your output should be **close to or exactly** identical to the given results. If there are minor discrepancies it could be a matter of floating point accuracy or in the calculation of the inverse matrix. Submit your projectIf you pass these tests, it is a good indication that your project will pass all the specifications in the project rubric. Follow the submission instructions to officially submit!
###Code
# Here is the data and estimated outputs for test case 1
test_data1 = [[[[1, 19.457599255548065, 23.8387362100849], [2, -13.195807561967236, 11.708840328458608], [3, -30.0954905279171, 15.387879242505843]], [-12.2607279422326, -15.801093326936487]], [[[2, -0.4659930049620491, 28.088559771215664], [4, -17.866382374890936, -16.384904503932]], [-12.2607279422326, -15.801093326936487]], [[[4, -6.202512900833806, -1.823403210274639]], [-12.2607279422326, -15.801093326936487]], [[[4, 7.412136480918645, 15.388585962142429]], [14.008259661173426, 14.274756084260822]], [[[4, -7.526138813444998, -0.4563942429717849]], [14.008259661173426, 14.274756084260822]], [[[2, -6.299793150150058, 29.047830407717623], [4, -21.93551130411791, -13.21956810989039]], [14.008259661173426, 14.274756084260822]], [[[1, 15.796300959032276, 30.65769689694247], [2, -18.64370821983482, 17.380022987031367]], [14.008259661173426, 14.274756084260822]], [[[1, 0.40311325410337906, 14.169429532679855], [2, -35.069349468466235, 2.4945558982439957]], [14.008259661173426, 14.274756084260822]], [[[1, -16.71340983241936, -2.777000269543834]], [-11.006096015782283, 16.699276945166858]], [[[1, -3.611096830835776, -17.954019226763958]], [-19.693482634035977, 3.488085684573048]], [[[1, 18.398273354362416, -22.705102332550947]], [-19.693482634035977, 3.488085684573048]], [[[2, 2.789312482883833, -39.73720193121324]], [12.849049222879723, -15.326510824972983]], [[[1, 21.26897046581808, -10.121029799040915], [2, -11.917698965880655, -23.17711662602097], [3, -31.81167947898398, -16.7985673023331]], [12.849049222879723, -15.326510824972983]], [[[1, 10.48157743234859, 5.692957082575485], [2, -22.31488473554935, -5.389184118551409], [3, -40.81803984305378, -2.4703329790238118]], [12.849049222879723, -15.326510824972983]], [[[0, 10.591050242096598, -39.2051798967113], [1, -3.5675572049297553, 22.849456408289125], [2, -38.39251065320351, 7.288990306029511]], [12.849049222879723, -15.326510824972983]], [[[0, -3.6225556479370766, -25.58006865235512]], [-7.8874682868419965, -18.379005523261092]], [[[0, 1.9784503557879374, -6.5025974151499]], [-7.8874682868419965, -18.379005523261092]], [[[0, 10.050665232782423, 11.026385307998742]], [-17.82919359778298, 9.062000642947142]], [[[0, 26.526838150174818, -0.22563393232425621], [4, -33.70303936886652, 2.880339841013677]], [-17.82919359778298, 9.062000642947142]]]
## Test Case 1
##
# Estimated Pose(s):
# [50.000, 50.000]
# [37.858, 33.921]
# [25.905, 18.268]
# [13.524, 2.224]
# [27.912, 16.886]
# [42.250, 30.994]
# [55.992, 44.886]
# [70.749, 59.867]
# [85.371, 75.230]
# [73.831, 92.354]
# [53.406, 96.465]
# [34.370, 100.134]
# [48.346, 83.952]
# [60.494, 68.338]
# [73.648, 53.082]
# [86.733, 38.197]
# [79.983, 20.324]
# [72.515, 2.837]
# [54.993, 13.221]
# [37.164, 22.283]
# Estimated Landmarks:
# [82.679, 13.435]
# [70.417, 74.203]
# [36.688, 61.431]
# [18.705, 66.136]
# [20.437, 16.983]
### Uncomment the following three lines for test case 1 and compare the output to the values above ###
mu_1 = slam(test_data1, 20, 5, 100.0, 2.0, 2.0)
poses, landmarks = get_poses_landmarks(mu_1, 20)
print_all(poses, landmarks)
# Here is the data and estimated outputs for test case 2
test_data2 = [[[[0, 26.543274387283322, -6.262538160312672], [3, 9.937396825799755, -9.128540360867689]], [18.92765331253674, -6.460955043986683]], [[[0, 7.706544739722961, -3.758467215445748], [1, 17.03954411948937, 31.705489938553438], [3, -11.61731288777497, -6.64964096716416]], [18.92765331253674, -6.460955043986683]], [[[0, -12.35130507136378, 2.585119104239249], [1, -2.563534536165313, 38.22159657838369], [3, -26.961236804740935, -0.4802312626141525]], [-11.167066095509824, 16.592065417497455]], [[[0, 1.4138633151721272, -13.912454837810632], [1, 8.087721200818589, 20.51845934354381], [3, -17.091723454402302, -16.521500551709707], [4, -7.414211721400232, 38.09191602674439]], [-11.167066095509824, 16.592065417497455]], [[[0, 12.886743222179561, -28.703968411636318], [1, 21.660953298391387, 3.4912891084614914], [3, -6.401401414569506, -32.321583037341625], [4, 5.034079343639034, 23.102207946092893]], [-11.167066095509824, 16.592065417497455]], [[[1, 31.126317672358578, -10.036784369535214], [2, -38.70878528420893, 7.4987265861424595], [4, 17.977218575473767, 6.150889254289742]], [-6.595520680493778, -18.88118393939265]], [[[1, 41.82460922922086, 7.847527392202475], [3, 15.711709540417502, -30.34633659912818]], [-6.595520680493778, -18.88118393939265]], [[[0, 40.18454208294434, -6.710999804403755], [3, 23.019508919299156, -10.12110867290604]], [-6.595520680493778, -18.88118393939265]], [[[3, 27.18579315312821, 8.067219022708391]], [-6.595520680493778, -18.88118393939265]], [[], [11.492663265706092, 16.36822198838621]], [[[3, 24.57154567653098, 13.461499960708197]], [11.492663265706092, 16.36822198838621]], [[[0, 31.61945290413707, 0.4272295085799329], [3, 16.97392299158991, -5.274596836133088]], [11.492663265706092, 16.36822198838621]], [[[0, 22.407381798735177, -18.03500068379259], [1, 29.642444125196995, 17.3794951934614], [3, 4.7969752441371645, -21.07505361639969], [4, 14.726069092569372, 32.75999422300078]], [11.492663265706092, 16.36822198838621]], [[[0, 10.705527984670137, -34.589764174299596], [1, 18.58772336795603, -0.20109708164787765], [3, -4.839806195049413, -39.92208742305105], [4, 4.18824810165454, 14.146847823548889]], [11.492663265706092, 16.36822198838621]], [[[1, 5.878492140223764, -19.955352450942357], [4, -7.059505455306587, -0.9740849280550585]], [19.628527845173146, 3.83678180657467]], [[[1, -11.150789592446378, -22.736641053247872], [4, -28.832815721158255, -3.9462962046291388]], [-19.841703647091965, 2.5113335861604362]], [[[1, 8.64427397916182, -20.286336970889053], [4, -5.036917727942285, -6.311739993868336]], [-5.946642674882207, -19.09548221169787]], [[[0, 7.151866679283043, -39.56103232616369], [1, 16.01535401373368, -3.780995345194027], [4, -3.04801331832137, 13.697362774960865]], [-5.946642674882207, -19.09548221169787]], [[[0, 12.872879480504395, -19.707592098123207], [1, 22.236710716903136, 16.331770792606406], [3, -4.841206109583004, -21.24604435851242], [4, 4.27111163223552, 32.25309748614184]], [-5.946642674882207, -19.09548221169787]]]
## Test Case 2
##
# Estimated Pose(s):
# [50.000, 50.000]
# [69.035, 45.061]
# [87.655, 38.971]
# [76.084, 55.541]
# [64.283, 71.684]
# [52.396, 87.887]
# [44.674, 68.948]
# [37.532, 49.680]
# [31.392, 30.893]
# [24.796, 12.012]
# [33.641, 26.440]
# [43.858, 43.560]
# [54.735, 60.659]
# [65.884, 77.791]
# [77.413, 94.554]
# [96.740, 98.020]
# [76.149, 99.586]
# [70.211, 80.580]
# [64.130, 61.270]
# [58.183, 42.175]
# Estimated Landmarks:
# [76.777, 42.415]
# [85.109, 76.850]
# [13.687, 95.386]
# [59.488, 39.149]
# [69.283, 93.654]
### Uncomment the following three lines for test case 2 and compare to the values above ###
mu_2 = slam(test_data2, 20, 5, 100.0, 2.0, 2.0)
poses, landmarks = get_poses_landmarks(mu_2, 20)
print_all(poses, landmarks)
###Output
world size: 100.0 num_landmarks: 5
size: 50
omega.size(): (50, 50)
det omega: 7.97912885801e+12
Estimated Poses:
[50.000, 50.000]
[69.181, 45.665]
[87.743, 39.703]
[76.270, 56.311]
[64.317, 72.176]
[52.257, 88.154]
[44.059, 69.401]
[37.002, 49.918]
[30.924, 30.955]
[23.508, 11.419]
[34.180, 27.133]
[44.155, 43.846]
[54.806, 60.920]
[65.698, 78.546]
[77.468, 95.626]
[96.802, 98.821]
[75.957, 99.971]
[70.200, 81.181]
[64.054, 61.723]
[58.107, 42.628]
Estimated Landmarks:
[76.779, 42.887]
[85.065, 77.438]
[13.548, 95.652]
[59.449, 39.595]
[69.263, 94.240]
|
_notebooks/2021-02-09-Contextual_bandits.ipynb | ###Markdown
"Thompson sampling for contextual bandits"> "An introduction to Thompson sampling and how to implement it with probabilistic machine learning to tackle contextual bandits."- toc: false- branch: master- badges: true- comments: true- author: Yves Barmaz- categories: [contextual bandits, reinforcement learning, bayesian modeling, variational inference, probabilistic machine learning, tensorflow-probability] The [multi-armed bandit problem](https://en.wikipedia.org/wiki/Multi-armed_bandit) is inspired by the situation of gamblers facing $N$ slot machines with a limited amount of resources to "invest" in them, without knowing the probability distribution of rewards from each machine. By playing with a machine, they can of course sample its distribution. Once they find a machine that performs well enough, the question is wheter they should try the other ones that might perform even better, at the risk of wasting money because they might be worse. This is an example of the exploration-exploitation tradeoff dilemma. Applications include clinical trial design, portfolio selection, and A/B testing.[Thompson sampling](https://en.wikipedia.org/wiki/Thompson_sampling) is an approximate solution applicable to bandits for which we have a Bayesian model of the reward $r$, namely a likelihood $P(r\vert \theta, a)$ that depends on the action $a$ (the choice of an arm to pull) and a vector of parameters $\theta$, and a prior distribution $P(\theta)$. In certain cases, called contextual bandits, the likelihood also depends on a set of features $x$ observed by the players before they choose an action, $P(r\vert \theta, a, x)$. After each round, the posterior distribution $P(\theta \vert \left\lbrace r_i, a_i, x_i\right\rbrace_{i})$ is updated with the newly observed data. Then a $\theta^\ast$ is sampled from it, the new context $x$ is observed, and the new action is chosen to maximize the expected reward, $a^\ast = \mathrm{argmax}_a \ \mathbb{E}(r\vert \theta, a, x)$.This approach solves the exploration-exploitation dilemma with the random sampling of $\theta^\ast$, which gives to every action a chance to be selected, yet favors the most promising ones. The more data is collected, the more informative the posterior distribution will become and the more it will favor its top performer.This mechanism is illustrated in the [chapter 6](https://nbviewer.jupyter.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter6_Priorities/Ch6_Priors_PyMC3.ipynb) of [Probabilistic Programming & Bayesian Methods for Hackers](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/) and the section 4.4 of [Bayesian Adaptive Methods for Clinical Trials](https://www.routledge.com/Bayesian-Adaptive-Methods-for-Clinical-Trials/Berry-Carlin-Lee-Muller/p/book/9781439825488).Both discuss the case of a binary reward (success and failure) for every action $a$ that follows a Bernoulli distribution with unknown probability of success $p_a$. They assume a beta prior for each of the $p_a$, which is the conjugate prior for the Bernoulli likelihood and makes inference of the posterior straightforward. 
This is particularly appealing when you have to update your posterior after each play. If there are covariates that can explain the probability of success, one of the simplest models for a binary response of the potential actions is the combination of generalized linear models for each action,$$P(r=1 \vert \theta, a, x) = \frac{1}{1 + e^{-\left(\alpha_a + \beta_a^T\,x\right)}}$$Unfortunately, there is no immediate conjugate prior for this type of likelihood, so we have to rely on numerical methods to estimate the posterior distribution. A previous [blog post](https://ybarmaz.github.io/blog/bayesian%20modeling/variational%20inference/tensorflow-probability/2021/02/01/Variational-inference-with-tfp.html) discussed variational inference as a speedier alternative to MCMC algorithms, and we will see here how we can apply it to the problem of contextual bandits with binary response.This problem is relevant in the development of [personalized therapies](https://en.wikipedia.org/wiki/Personalized_medicine), where the actions represent the different treatments under investigation and the contexts $x$ are predictive biomarkers of their response. The goal of a trial would be to estimate the response to each treatment option given biomarkers $x$, and, based on that, to define the best treatment policy. Adaptive randomization through Thompson sampling ensures that more subjects enrolled in the trial get the optimal treatment based on their biomarkers and the knowledge accrued until their randomization, which is certainly more ethical than a randomization independent of the biomarkers.Another example is online ad serving, where the binary response corresponds to a successful conversion, the action is the selection of an ad for a specific user, and the context is a set of features related to that user. When a new ad enters the portfolio and a new click-through rate model needs to be deployed for it, Thompson sampling can accelerate the training phase and reduce the related costs. Bandit modelFor simplicity, we simulate bandits whose true probabilities of success follow logistic models, so we can see how the posterior distributions concentrate around the true values during training. You can run this notebook in Colab to experiment with more realistic models, and vary the number of arms or the context dimension.
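Before moving to the contextual, variational version, here is a minimal sketch of Thompson sampling for plain Bernoulli bandits with conjugate beta priors (the true success probabilities and the number of rounds below are made up for illustration):
```
import numpy as np

rng = np.random.default_rng(0)
true_probs = np.array([0.25, 0.30, 0.35])   # unknown to the player
successes = np.zeros(3)                     # Beta(1 + successes, 1 + failures) posteriors
failures = np.zeros(3)

for _ in range(1000):
    theta = rng.beta(1 + successes, 1 + failures)  # sample p_a from each posterior
    arm = np.argmax(theta)                         # play the most promising arm
    reward = rng.binomial(1, true_probs[arm])
    successes[arm] += reward
    failures[arm] += 1 - reward

print(successes + failures)   # pulls per arm concentrate on the best arm over time
```
The logistic model below plays the same role as the beta posterior, except that the posterior over its weights has to be approximated numerically.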
###Code
#hide
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
#collapse-hide
class ContextualBandit(object):
"""
This class represents contextual bandit machines with n_arms arms and
linear logits of p-dimensional contexts.
parameters:
arm_true_weights: (n_arms, p) Numpy array of real weights.
arm_true_biases: (n_arms,) Numpy array of real biases
methods:
pull( arms, X ): returns the results, 0 or 1, of pulling
the arms[i]-th bandit given an input context X[i].
arms is an (n,) array of arms indices selected by the player
and X an (n, p) array of contexts observed by the player
before making a choice.
get_logits(X): returns the logits of all bandit arms for every context in
the (n, p) array X
get_probs(X): returns sigmoid(get_logits(X))
get_selected_logits(arms, X): returns from get_logits(X) only the logits
corresponding to the selected arms
get_selected_probs(arms, X): returns sigmoid(get_selected_logits(arms, X))
get_optimal_arm(X): returns the arm with the highest probability of success
for every context in X
"""
def __init__(self, arm_true_weights, arm_true_biases):
self._arm_true_weights = tf.convert_to_tensor(
arm_true_weights,
dtype=tf.float32,
name='arm_true_weights')
self._arm_true_biases = tf.convert_to_tensor(
arm_true_biases,
dtype=tf.float32,
name='arm_true_biases')
self._shape = np.array(
self._arm_true_weights.shape.as_list(),
dtype=np.int32)
self._dtype = tf.convert_to_tensor(
arm_true_weights,
dtype=tf.float32).dtype.base_dtype
@property
def dtype(self):
return self._dtype
@property
def shape(self):
return self._shape
def get_logits(self, X):
return tf.matmul(X, self._arm_true_weights, transpose_b=True) + \
self._arm_true_biases
def get_probs(self, X):
return tf.math.sigmoid(self.get_logits(X))
def get_selected_logits(self, arms, X):
all_logits = self.get_logits(X)
column_indices = tf.convert_to_tensor(arms, dtype=tf.int64)
row_indices = tf.range(X.shape[0], dtype=tf.int64)
full_indices = tf.stack([row_indices, column_indices], axis=1)
selected_logits = tf.gather_nd(all_logits, full_indices)
return selected_logits
def get_selected_probs(self, arms, X):
return tf.math.sigmoid(self.get_selected_logits(arms, X))
def pull(self, arms, X):
selected_logits = self.get_selected_logits(arms, X)
return tfd.Bernoulli(logits=selected_logits).sample()
def pull_all_arms(self, X):
logits = self.get_logits(X)
return tfd.Bernoulli(logits=logits).sample()
def get_optimal_arm(self, X):
return tf.argmax(
self.get_logits(X),
axis=-1)
###Output
_____no_output_____
###Markdown
Here we work with a two-dimensional context drawn from two independent standard normal distributions, and we select true weights and biases that correspond to an overall probability of success of about 30% for each arm, a situation that might be encountered in a personalized medicine question.
###Code
true_weights = np.array([[2., 0.],[0., 3.]])
true_biases = np.array([-1., -2.])
N_ARMS = true_weights.shape[0]
CONTEXT_DIM = true_weights.shape[1]
bandit = ContextualBandit(true_weights, true_biases)
population = tfd.Normal(loc=tf.zeros(CONTEXT_DIM, dtype=tf.float32),
scale=tf.ones(CONTEXT_DIM, dtype=tf.float32))
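# optional sanity check (not in the original post): with the weights and biases above,
# the marginal success rate of each arm over the context population should be roughly 30%
X_check = population.sample(10000)
print(tf.reduce_mean(bandit.get_probs(X_check), axis=0))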
#hide_input
X = population.sample(500)
r_0 = bandit.pull(tf.zeros(500, dtype=tf.int64), X)
r_1 = bandit.pull(tf.ones(500, dtype=tf.int64), X)
df = pd.DataFrame(X.numpy(), columns=['x_0', 'x_1'])
df.loc[:,'r0'] = r_0.numpy()
df.loc[:,'r1'] = r_1.numpy()
plt.figure(figsize=(10.0, 5.0))
plt.subplot(1,2,1)
sns.scatterplot(x='x_0', y='x_1', hue='r0', data=df)
plt.title('Reward of arm_0')
plt.autoscale(tight = True)
plt.subplot(1,2,2)
sns.scatterplot(x='x_0', y='x_1', hue='r1', data=df)
plt.title('Reward of arm_1')
plt.autoscale(tight = True)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Thompson samplingA Thompson sampler based on a logistic regression can be implemented as a generalization of the probabilistic machine learning model discussed in the [previous post](https://ybarmaz.github.io/blog/bayesian%20modeling/variational%20inference/tensorflow-probability/2021/02/01/Variational-inference-with-tfp.html). It is essentially a single dense variational layer with one unit per arm of the contextual bandit we want to solve. These units are fed into a Bernoulli distribution layer that simulates the pull of each arm.The parameters $\theta$ of the model are encoded in the `posterior_mean_field` used as a variational family for the dense variational layer, and when we fit the full model to data, it converges to an approximation of the true posterior $P(\theta \vert \left\lbrace r_i, a_i, x_i\right\rbrace_{i})$.A subsequent call of that dense variational layer on a new input $x$ will return random logits drawn from the approximate posterior predictive distribution and can thus be used to implement Thompson sampling (see the `randomize` method in the code). The $a^\ast = \mathrm{argmax}_a \ \mathbb{E}(r\vert \theta, a, x)$ step is the selection of the unit with the highest logit.For training, the loss function is the negative log-likelihood of the observed outcome $r_i$, but only for the unit corresponding to the selected action $a_i$, so it is convenient to combine them into a unique output $y_i=(a_i,r_i)$ and write a custom loss function.
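To see how the combined output $y_i=(a_i,r_i)$ and the one-hot mask interact, here is a small illustration with made-up numbers; it mirrors the `loss_fn` defined in the next cell rather than adding anything new:
```
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

n_arms = 2
logits = tf.constant([[0.3, -1.2]])   # sampled logits for one context, one per arm
y = tf.constant([[1, 0]])             # arm a_i = 1 was pulled and returned reward r_i = 0
rv_y = tfd.Bernoulli(logits=logits)

# negative log-likelihood of the observed reward, kept only for the pulled arm
nll = tf.reduce_sum(-rv_y.log_prob(y[:, 1, tf.newaxis]) * tf.one_hot(y[:, 0], n_arms), axis=-1)
print(nll)
```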
###Code
#collapse-hide
class ThompsonLogistic(tf.keras.Model):
"""
This class represents a Thompson sampler for a Bayesian logistic regression
model.
It is essentially a keras Model of a single layer Bayesian neural network
with Bernoulli output enriched with a Thompson randomization method that
calls only the dense variational layer.
Parameters:
- context_dim: dimension of the context
- n_arms: number of arms of the multi-arm bandit under investigation
- sample_size: size of the current training set of outcome observations,
used to scale the kl_weight of the dense variational layer
Methods:
- randomize(inputs): returns a logit for each arm drawn from the (approximate)
posterior predictive distribution
    - get_weights_stats(): returns means and stddevs of the surrogate posterior
of the model parameters
- predict_probs(X, sample_size): returns the posterior probability of success
for each context in the array X and each arm of the bandit, sample_size specifies
the sample size of the Monte Carlo estimate
- assign_best_mc(X, sample_size): returns the arms with the highest
predict_probs(X, sample_size)
- assign_best(X): returns the arms with the highest expected logit, should
be very similar to assign_best_mc, a little bit less accurate
"""
def __init__(self, context_dim, n_arms, sample_size):
super().__init__()
self.context_dim = context_dim
self.n_arms = n_arms
self.densevar = tfp.layers.DenseVariational(n_arms, posterior_mean_field, prior_ridge, kl_weight=1/sample_size)
self.bernoullihead = tfp.layers.DistributionLambda(lambda t: tfd.Bernoulli(logits=t))
def call(self, inputs):
x = self.densevar(inputs)
return self.bernoullihead(x)
def randomize(self, inputs):
return self.densevar(inputs)
def get_weights_stats(self):
n_params = self.n_arms * (self.context_dim + 1)
c = np.log(np.expm1(1.))
weights = self.densevar.weights[0]
means = weights[:n_params].numpy().reshape(self.context_dim + 1, self.n_arms)
stddevs = (1e-5 + tf.nn.softplus(c + weights[n_params:])).numpy().reshape(self.context_dim + 1, self.n_arms)
mean_weights = means[:-1]
mean_biases = means[-1]
std_weights = stddevs[:-1]
std_biases = stddevs[-1]
return mean_weights, mean_biases, std_weights, std_biases
def assign_best(self, X):
mean_weights, mean_biases, std_weights, std_biases = self.get_weights_stats()
logits = tf.matmul(X, mean_weights) + mean_biases
return tf.argmax(logits, axis=1)
def predict_probs(self, X, sample_size=100):
mean_weights, mean_biases, std_weights, std_biases = self.get_weights_stats()
weights = tfd.Normal(loc=mean_weights, scale=std_weights).sample(sample_size)
biases = tfd.Normal(loc=mean_biases, scale=std_biases).sample(sample_size)
probs = tf.math.sigmoid(tf.matmul(X, weights)+biases[:,tf.newaxis,:])
return tf.reduce_mean(probs, axis=0)
def assign_best_mc(self, X, sample_size=100):
probs = self.predict_probs(X, sample_size)
return tf.argmax(probs, axis=1)
# Specify the surrogate posterior over `keras.layers.Dense` `kernel` and `bias`.
def posterior_mean_field(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
c = np.log(np.expm1(1.))
return tf.keras.Sequential([
tfp.layers.VariableLayer(2 * n,
initializer=tfp.layers.BlockwiseInitializer([
'zeros',
tf.keras.initializers.Constant(np.log(np.expm1(.7))),
], sizes=[n, n]),
dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t[..., :n],
scale=1e-5 + tf.nn.softplus(c + t[..., n:])),
reinterpreted_batch_ndims=1)),
])
# Specify the prior over `keras.layers.Dense` `kernel` and `bias`.
def prior_ridge(kernel_size, bias_size, dtype=None):
return lambda _: tfd.Independent(
tfd.Normal(loc=tf.zeros(kernel_size + bias_size),
scale=tf.concat([2*tf.ones(kernel_size),
4*tf.ones(bias_size)],
axis=0)),
reinterpreted_batch_ndims=1
)
def build_model(context_dim, n_arms, sample_size, learning_rate=0.01):
model = ThompsonLogistic(context_dim, n_arms, sample_size)
# the loss function is the negloglik of the outcome y[:,1] and the head corresponding
# to the arm assignment y[:,0] is selected with a one-hot mask
loss_fn = lambda y, rv_y: tf.reduce_sum(-rv_y.log_prob(y[:,1, tf.newaxis]) * tf.one_hot(y[:,0], n_arms), axis=-1)
model.compile(optimizer=tf.optimizers.Adam(learning_rate=learning_rate), loss=loss_fn)
model.build(input_shape=(None, context_dim))
return model
#hide
class ModelTest:
"""
Automates the collection of performance statistics on test data.
Parameters:
- population: distribution from which the test data is sampled
- bandit: instance of ContextualBandit that specifies the true
data generating process
- sample_size: number of examples drawn from population
Method:
- call(model): returns statistics on how well the model performs on
the test data.
- best_arm_selection_rate is the proportion of optimal arm selection
- model_prob_of_success is the average probability of success based on
the policy implemented by the model
- arms_probs_of_success are the probabilities of success of the
individual arms for comparison
- kl_div is the kl_divergence between the true probabilities of
success and the ones estimated by the model, for each arm
"""
def __init__(self, population, bandit, sample_size):
self.sample_size = sample_size
self.bandit = bandit
self.X = population.sample(sample_size)
        self.arms = tf.cast(tfd.Bernoulli(probs=0.5).sample(sample_size), tf.int64)
self.outcomes = tf.cast(self.bandit.pull(self.arms, self.X), tf.int64)
self.y = tf.concat([self.arms[:, tf.newaxis], self.outcomes[:, tf.newaxis]], axis=1)
self.best_arms = self.bandit.get_optimal_arm(self.X)
self.probs = self.bandit.get_probs(self.X)
def __call__(self, model, plot=False):
selected_arms = model.assign_best_mc(self.X)
proportion_best_arm_selected = tf.reduce_mean(tf.cast(selected_arms==self.best_arms, tf.float32), axis=0).numpy()
model_prob_of_success = tf.reduce_mean(self.bandit.get_selected_probs(selected_arms, self.X), axis=0).numpy()
actual_probs = self.bandit.get_probs(self.X)
arms_probs_of_success = tf.reduce_mean(actual_probs, axis=0).numpy()
predicted_probs = model.predict_probs(self.X)
        if plot:
            df = pd.DataFrame(self.X.numpy(), columns=['x_0', 'x_1'])
            # per-context probability of success under the arms selected by the model
            df.loc[:, 'model prob of success'] = self.bandit.get_selected_probs(selected_arms, self.X).numpy()
            sns.scatterplot(x='x_0', y='x_1', hue='model prob of success', data=df)
            plt.show()
return {'best_arm_selection_rate':proportion_best_arm_selected,
'model_prob_of_success': model_prob_of_success,
'arms_probs_of_success': arms_probs_of_success
}
test = ModelTest(population, bandit, 5000)
###Output
_____no_output_____
###Markdown
Learning strategyIn the learning phase of the model, at each step a new context $x_i$ is observed (or drawn from the population), an action $a_i$ is chosen, a reward $r_i$ is observed (or simulated with `bandit.pull`), and the model is updated.
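As a self-contained warm-up for this loop, the cell below runs a toy Thompson-sampling example for a two-armed Bernoulli bandit with Beta posteriors in plain NumPy; it ignores the context and the neural model used in this notebook, and all names in it are illustrative.
###Code
# Toy, context-free Thompson sampling (illustrative only -- not the TFP model below)
import numpy as np

rng = np.random.default_rng(0)
true_probs = [0.4, 0.6]        # unknown success probabilities of the two arms
alpha = np.ones(2)             # Beta posterior parameters: successes + 1
beta = np.ones(2)              # Beta posterior parameters: failures + 1

for step in range(200):
    sampled = rng.beta(alpha, beta)           # sample one plausible value per arm
    arm = int(np.argmax(sampled))             # act greedily on the sampled values
    reward = rng.random() < true_probs[arm]   # observe a Bernoulli reward
    alpha[arm] += reward                      # update the posterior of the pulled arm
    beta[arm] += 1 - reward

print(alpha / (alpha + beta))                 # posterior mean estimate for each arm
###Output
_____no_output_____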
###Code
#collapse-hide
class BayesianStrategy(object):
"""
Implements an online, learning strategy to solve
the contextual multi-armed bandit problem.
parameters:
bandit: an instance of the ContextualBandit class
methods:
thompson_randomize(X): draws logits from the posterior distribution and
returns the arms with the highest values
_update_step(X, y): updates the model with the new observations
one_trial(n, population): samples n elements from population, selects
an arm for each of them through Thompson sampling,
pulls it, updates the model
train_on_data(X_train, all_outcomes_train): implements Thompson sampling
            on pre-sampled data where an omniscient being has
pulled all the arms. The reason is to compare with
standard Bayesian inference on the same data
evaluate_training_decisions: returns statistics about action selection
during training
"""
def __init__(self, bandit):
self.bandit = bandit
self.context_dim = bandit.shape[1]
self.n_arms = bandit.shape[0]
dtype = tf.float32
self.X = tf.cast(tf.reshape((), (0, self.context_dim)), tf.float32)
self.y = tf.cast(tf.reshape((), (0, 2)), tf.int32)
self.model = build_model(self.context_dim, self.n_arms, 1, learning_rate=0.008)
self.loss = []
self.weights = []
def thompson_randomize(self, X):
return tf.argmax(self.model.randomize(X), axis=1)
def _update_step(self, X, y, epochs=10):
self.X = tf.concat([self.X, X], axis=0)
self.y = tf.concat([self.y, y], axis=0)
weights = self.model.get_weights()
self.model = build_model(self.context_dim, self.n_arms, self.X.shape[0], learning_rate=0.008)
self.model.set_weights(weights)
hist = self.model.fit(self.X, self.y, verbose=False, epochs=epochs)
self.loss.append(hist.history['loss'])
self.weights.append(self.model.get_weights_stats())
def one_trial(self, n, population, epochs=10):
X = population.sample(n)
selected_arms = self.thompson_randomize(X)
outcomes = self.bandit.pull(selected_arms, X)
y = tf.concat([tf.cast(selected_arms[:,tf.newaxis], tf.int32), outcomes[:,tf.newaxis]], axis=1)
self._update_step(X, y, epochs)
def train_on_data_step(self, X, all_outcomes, epochs):
selected_arms = self.thompson_randomize(X)
column_indices = tf.convert_to_tensor(selected_arms, dtype=tf.int64)
row_indices = tf.range(X.shape[0], dtype=tf.int64)
full_indices = tf.stack([row_indices, column_indices], axis=1)
outcomes = tf.gather_nd(all_outcomes, full_indices)
y = tf.concat([tf.cast(selected_arms[:,tf.newaxis], tf.int32), outcomes[:,tf.newaxis]], axis=1)
self._update_step(X, y, epochs)
def train_on_data(self, X_train, all_outcomes_train, batch_size=1, epochs=10):
n_train = X_train.shape[0]
ds = tf.data.Dataset.from_tensor_slices((X_train, all_outcomes_train)).batch(batch_size)
for (X, all_outcomes) in ds:
self.train_on_data_step(X, all_outcomes, epochs)
def train_on_data_standard(self, X_train, all_outcomes_train, epochs=1000):
n_train = X_train.shape[0]
n_zeros = n_train//2
n_ones = n_train - n_zeros
selected_arms = tf.cast(tf.math.floormod(tf.range(n_train), 2), tf.int64)
column_indices = tf.convert_to_tensor(selected_arms, dtype=tf.int64)
row_indices = tf.range(n_train, dtype=tf.int64)
full_indices = tf.stack([row_indices, column_indices], axis=1)
outcomes_train = tf.gather_nd(all_outcomes_train, full_indices)
y_train = tf.concat([tf.cast(selected_arms[:,tf.newaxis], tf.int32), outcomes_train[:,tf.newaxis]], axis=1)
self._update_step(X_train, y_train, epochs)
def evaluate_training_decisions(self):
best_arm_proportion = tf.reduce_mean(tf.cast(
tf.cast(self.y[:,0], tf.int64)==self.bandit.get_optimal_arm(self.X), tf.float32)).numpy()
success_rate = self.y[:,1].numpy().sum()/self.y.shape[0]
prob_of_success = tf.reduce_mean(self.bandit.get_selected_probs(tf.cast(self.y[:,0], tf.int64), self.X), axis=0).numpy()
return {'training_best_arm_proportion': best_arm_proportion,
'training_success_rate': success_rate,
'training_prob_of_success': prob_of_success
}
#hide
X_train = population.sample(80)
outcomes_train = bandit.pull_all_arms(X_train)
strat = BayesianStrategy(bandit)
strat.train_on_data(X_train, outcomes_train, batch_size=1, epochs=15)
result_dict = strat.evaluate_training_decisions()
result_dict.update(test(strat.model))
result_df = pd.DataFrame.from_dict(result_dict, orient='index')
result_df.columns = ['Thompson randomization']
strat_standard = BayesianStrategy(bandit)
sample_size = X_train.shape[0]
strat_standard.train_on_data_standard(X_train[:sample_size], outcomes_train[:sample_size], epochs=800)
result_dict_st = strat_standard.evaluate_training_decisions()
result_dict_st.update(test(strat_standard.model))
result_df_st = pd.DataFrame.from_dict(result_dict_st, orient='index')
result_df_st.columns = [f'Standard randomization']
result_df = pd.concat([result_df, result_df_st], axis=1)
result_df
###Output
_____no_output_____
###Markdown
After 60 to 80 iterations, the surrogate posteriors seem to have converged to distributions that are compatible with the true values of the parameters.
###Code
#hide_input
def plot(w_m, b_m, w_s, b_s, arm, plt_vlines=True, plt_labels=True):
params = tf.cast(tf.linspace(start=-5. ,stop=5., num=500), dtype=tf.float32)
all_means = np.concatenate([b_m[np.newaxis], w_m], axis=0)
all_stds = np.concatenate([b_s[np.newaxis], w_s], axis=0)
true_params = np.concatenate([true_biases[:,np.newaxis], true_weights], axis=1).T
means = all_means[:,arm]
stds = all_stds[:,arm]
true_params = true_params[:,arm]
labels = ['alpha', 'beta_0', 'beta_1']
for i in range(3):
mean=means[i]
std=stds[i]
dist = tfd.Normal(mean,std)
pdf = dist.prob(params)
p =plt.plot(params, pdf)
c = p[0].get_markeredgecolor()
plt.fill_between(params, pdf,0, color = c, alpha = .1)
if plt_vlines:
plt.vlines(true_params[i], 0, dist.prob(true_params[i]) ,
colors = c, linestyles = "--", lw = 2)
if plt_labels:
plt.legend(labels)
sample_size = [1, 10, 20, 40, 60, 80]
plt.figure(figsize=(12.0, 12.))
for j, i in enumerate(sample_size):
plt_labels = (j==0)
n_rows = len(sample_size)
plt.subplot(n_rows, 2, 2*j+1)
plot(*strat.weights[i-1], 0, plt_labels=plt_labels)
plt.title(f'arm_0 weight posteriors after {i} Thompson iterations')
plt.autoscale(tight=True)
plt.subplot(n_rows, 2, 2*j+2)
plot(*strat.weights[i-1], 1, plt_labels=plt_labels)
plt.title(f'arm_1 weight posteriors after {i} Thompson iterations')
plt.autoscale(tight = True)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
For comparison, we can train models on the same sample that has been assigned purely randomly to each arm.
###Code
#hide_input
plt.figure(figsize=(12.0, 2.))
plt.subplot(1,2,1)
plot(*strat_standard.weights[0], 0)
plt.title('arm_0 weight posteriors inferred from an N=40 sample')
plt.autoscale(tight=True)
plt.subplot(1,2,2)
plot(*strat_standard.weights[0], 1)
plt.title('arm_1 weight posteriors inferred from an N=40 sample')
plt.autoscale(tight=True)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
The surrogate posterior distributions look similar to the ones obtained from Thompson sampling, and the predictive performance on a test set is comparable. In the following table, "best_arm_selection_rate" describes how frequently the best action is selected for contexts in the test set according to the predictions of the two models, and "model_prob_of_success" is the average of the true probabilities of success for the actions selected by the model. For reference, "arms_probs_of_success" shows the average of the true probabilities of success for each action in the case where it is always picked. The benefit of Thompson sampling shows up in the decisions made during training. In the same table, "training_best_arm_proportion" indicates how often the best action is selected during training (as expected, roughly half the time for standard randomization), "training_success_rate" is the observed success rate during training, and "training_prob_of_success" is the average probability of success following the assignment decisions made during training.
###Code
#hide_input
result_df
###Output
_____no_output_____ |
notebooks/4.2-jjf-preprocessing-and-training-zipcodeVacancy_zillow_2014-2020.ipynb | ###Markdown
Preprocessing and Training Data Development - Vacancy Rates Goal: Create a cleaned development dataset you can use to complete the modeling step of your project. Steps: ● 1. Create dummy or indicator features for categorical variables ● 2. Standardize the magnitude of numeric features using a scaler ● 3. Split into testing and training datasets
###Code
#imports
import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import __version__ as sklearn_version
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
from sklearn.model_selection import train_test_split, cross_validate, GridSearchCV, learning_curve, TimeSeriesSplit
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.dummy import DummyRegressor
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import SelectKBest, f_regression
import datetime
from pandas_profiling import ProfileReport
#load data
path= '/Users/josephfrasca/Coding_Stuff/Springboard/Capstone_2/data/interim'
os.chdir(path)
# load cleaned data
df = pd.read_csv('vacancy_zillowHomeRent_merge_2014_2020.csv', dtype={'Zipcode': object})
df
#drop margin of error of vacancy rate
df.drop('MOE-VacancyRate%', axis=1, inplace=True)
df.dtypes
#split into two dataframes for future modeling and predicting vacancy rates in 2019-2020
#(take the 2019-2020 slice first, before df is overwritten with the pre-2019 rows)
df_2019_2020 = df[df.Year > 2018]
df = df[df.Year < 2019]
#check NaNs
df.isna().sum()
#drop NaNs
df.dropna(inplace=True)
df
#check unique values for each column
df['CountyName'].value_counts()/len(df)*100
#check partition sizes with a 80/20 train/test split
print('train size:', len(df) * .8, 'test size:', len(df) * .2)
###Output
train size: 12918.400000000001 test size: 3229.6000000000004
###Markdown
1. Create dummy or indicator features for categorical variables. Hint: you’ll need to think about your old favorite pandas functions here, like get_dummies(). Consult this guide for help.
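As a quick illustrative refresher, the toy cell below (hypothetical values, separate from the vacancy data) shows what get_dummies() does.
###Code
# Toy illustration of pd.get_dummies (hypothetical values, not the project data)
toy = pd.DataFrame({'CountyName': ['Kings', 'Queens', 'Kings'],
                    'Rent': [1500, 1700, 1600]})
print(pd.get_dummies(toy))
###Output
_____no_output_____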
###Code
#change zipcode so it's not turned into a dummy variable
df.Zipcode = df.Zipcode.astype('int')
df.dtypes
#get dummy variables for 'object' columns
df = pd.get_dummies(df)
df
###Output
_____no_output_____
###Markdown
2. Split into testing and training datasets. Hint: don’t forget your sklearn functions here, like train_test_split().
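For reference, a plain random split looks like the toy cell below; the cells that follow instead use TimeSeriesSplit so that training rows always precede test rows in time.
###Code
# Illustrative only: a random 80/20 split on toy data (this notebook uses TimeSeriesSplit instead)
X_toy = pd.DataFrame({'feature': range(10)})
y_toy = pd.Series(range(10))
X_tr_toy, X_te_toy, y_tr_toy, y_te_toy = train_test_split(X_toy, y_toy, test_size=0.2, random_state=42)
print(X_tr_toy.shape, X_te_toy.shape)
###Output
_____no_output_____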
###Code
#define variable X, y
X = df.drop('Vacancy_Rate%', axis=1)
y = df['Vacancy_Rate%']
#train test split
#X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
#train, test split for timeseries
tss = TimeSeriesSplit(n_splits = 5)
for train_index, test_index in tss.split(X):
X_train, X_test = X.iloc[train_index, :], X.iloc[test_index,:]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
###Output
_____no_output_____
###Markdown
Establish Baseline Measurement Comparisons Using a DummyRegressor, see what R2, MSE, and MAE would be if the mean of the training target were used as the prediction.
###Code
#initial not even a model
train_mean = y_train.mean()
print(train_mean)
#Fit the dummy regressor on the training data
dumb_reg = DummyRegressor(strategy='mean')
dumb_reg.fit(X_train, y_train)
#create dummy regressor predictions
y_tr_pred = dumb_reg.predict(X_train)
#Make prediction with the single value of the (training) mean.
y_te_pred = train_mean * np.ones(len(y_test))
r2_score(y_train, y_tr_pred), r2_score(y_test, y_te_pred)
#establish baseline for mean absolute error and mean square error
print('MAEs:', mean_absolute_error(y_train, y_tr_pred), mean_absolute_error(y_test, y_te_pred))
print('MSEs:', mean_squared_error(y_train, y_tr_pred), mean_squared_error(y_test, y_te_pred))
###Output
MAEs: 5.074338214794146 5.188601429791711
MSEs: 55.026135025439736 58.8483100007387
###Markdown
3. Standardize the magnitude of numeric features using a scaler. Hint: you might need to employ Python code like this:
###Code
#NOTE Decided not to use scaled data as it led to negative values for R2 scores.
#negative R2 may be because we tried to scale the data after creating the dummy variables?
scaler = StandardScaler()
#fit the scaler on the training set
scaler.fit(X_train)
#apply the scaling to both the train and test split
X_tr_scaled = scaler.transform(X_train)
X_te_scaled = scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Initial Model: Train the model on the train split
###Code
lm = LinearRegression().fit(X_train, y_train)
#Make predictions using the model on both train and test splits
y_tr_pred = lm.predict(X_train)
y_te_pred = lm.predict(X_test)
#Assess model performance
# r^2 - train, test
r2 = r2_score(y_train, y_tr_pred), r2_score(y_test, y_te_pred)
print('r2:', r2)
###Output
r2: (0.7703801642196975, 0.743751199645821)
###Markdown
**This is markedly better performance than when using Dummy variable/mean for R^2 (see earlier):**Dummy R2 = (0.0, -0.001031839268772039)
###Code
#MAE - train, test
mae = mean_absolute_error(y_train, y_tr_pred), mean_absolute_error(y_test, y_te_pred)
print('mae:', mae)
# MSE - train, test
mse = mean_squared_error(y_train, y_tr_pred), mean_squared_error(y_test, y_te_pred)
print('mse:', mse)
###Output
mse: (12.635092088166228, 15.07435889936279)
###Markdown
**This is markedly better performance than the dummy/mean baseline (see earlier):** Dummy MAEs: 5.14329246126588 4.923647741748382, MSEs: 57.38168678632161 48.79685160338582. **MSE is still high (possibly due to this being a large data set).** Save processed data
###Code
#save vacancy rate data for modeling - remember to use random state=42!
df.to_csv(r'/Users/josephfrasca/Coding_Stuff/Springboard/Capstone_2/data/processed/VacancyRate_Zillow_2014_2018', index=False)
df_2019_2020.to_csv(r'/Users/josephfrasca/Coding_Stuff/Springboard/Capstone_2/data/processed/VacancyRate_Zillow_2019_2020', index=False)
#save the scaled training and test splits
#X_tr_scaled.to_csv(r'/Users/josephfrasca/Coding_Stuff/Springboard/Capstone_2/data/processed/X_tr_scaled', index=False)
#X_te_scaled
###Output
_____no_output_____ |
userguide/06. Channel Managers.ipynb | ###Markdown
Channel Managers When constructing a large graph of channels with interdependencies, it can be difficult to keep track of whether you have done a proper number of `tee()` calls. Because the danger of forgetting to tee can be so grave, one can tend to over-tee, leading to memory leaks.Also, there are patterns (like the Possible/Extant/All pattern) that are useful when setting up graphs, but, in the clutter of linear code creating channel after channel, it can become unclear when such patterns are in use or whether they are done properly.Finally, in the spirit of the rest of flowz, it would be nice if channel construction were done lazily, strictly on an as-needed basis.The `ChannelManager` aims to resolve all of those issues. Its use is, as usual, best demonstrated before describing.We'll start with some code borrowed from the Possible/Extant/All pattern example in chapter 4:
###Code
def expensive_deriver(num):
# 10 minutes pass...
return num * 100
# Our fake durable storage holding the first 8 derived elements
storage = {num: expensive_deriver(num) for num in range(8)}
# The ExtantArtifact accessing that data
class ExampleExtantArtifact(ExtantArtifact):
def __init__(self, num):
super(ExampleExtantArtifact, self).__init__(self.get_me, name='ExampleExtantArtifact')
self.num = num
@gen.coroutine
def get_me(self):
raise gen.Return(storage[self.num])
###Output
_____no_output_____
###Markdown
Now the `ChannelManager` code:
###Code
class GuideChannelManager(object):
def __init__(self):
# sets the 'channel_manager' property, which must exist on the class
self.channel_manager = mgmt.ChannelManager()
@mgmt.channelproperty
def extant(self):
print('Creating extant channel')
return IterChannel(KeyedArtifact(i, ExampleExtantArtifact(i)) for i in sorted(storage.keys()))
@mgmt.channelproperty
def possible(self):
print('Creating possible channel')
return IterChannel(KeyedArtifact(i, DerivedArtifact(expensive_deriver, i)) for i in range(10))
@mgmt.channelproperty
def all(self):
print('Creating all channel')
return merge_keyed_channels(self.possible, self.extant)
###Output
_____no_output_____
###Markdown
Note that this class is not actually a subclass of `ChannelManager`, but it makes use of it in two ways:1. It defines a `channel_manager` property on itself that is a private instance of a `ChannelManager`2. It makes references to the `@mgmt.channelproperty` decorator above three methods.Now using this class is straightforward...
###Code
print_chans(GuideChannelManager().all, mode='get')
###Output
Creating all channel
Creating possible channel
Creating extant channel
0
100
200
300
400
500
600
700
800
900
###Markdown
Nice. It worked. Here are the steps that happened:1. The `GuideChannelManager` object was instantiated, which did little other than instantiate a `ChannelManager`.2. The `all` property was asked of the `GuideChannelManager`.3. That called the `all()` method, which referenced the `possible` and `extant` properties.4. Those references, in turn, called the `possible()` and `extant()` methods.5. Each of those methods created an `IterChannel` and returned it.6. The decorators on those methods captured the channels and stored them internally. Subsequent accesses of these properties would no longer call the methods, but, rather, return a `tee()` of their corresponding channels.7. Control returned to the `all()` method where the two channels were used as input in the creation of a new channel, which is returned.8. Again, the decorator on `all()` stored this channel and would `tee()` it on any subsequent requests for the `all` property.9. The channel returned from the `all` property was passed to `print_chans()` and drained.Very notably, _no_ tees were performed on any of these channels. In this configuration, they were all needed only once, so that's what this pattern did. If hand-coded, the final channel would likely have worked with tees of the first two channels in an abundance of caution, possibly causing leaks of the original channels.
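The build-once, reuse-later behaviour in steps 4-6 and 8 can be illustrated with a plain-Python sketch of the general caching-property pattern; this is not flowz's actual implementation (which additionally returns tee()s of the stored channel on reuse), just the idea.
###Code
# Illustrative sketch only -- flowz's channelproperty also tees the stored channel on reuse
class cachingproperty(object):
    def __init__(self, fn):
        self.fn = fn
        self.name = fn.__name__

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        cache = obj.__dict__.setdefault('_cache', {})
        if self.name not in cache:
            cache[self.name] = self.fn(obj)   # first access: run the method and store the result
        return cache[self.name]               # later accesses: reuse the stored result

class Demo(object):
    @cachingproperty
    def chan(self):
        print('building channel')
        return object()

demo = Demo()
demo.chan   # prints 'building channel' once
demo.chan   # second access reuses the cached object
###Output
_____no_output_____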
###Code
# recreate the storage to not mess up other parts of the notebook when run out of order
storage = {num: expensive_deriver(num) for num in range(8)}
###Output
_____no_output_____
###Markdown
Choosing named targets flowz is very likely to be principally used in scripts run as periodic (e.g., nightly) processes to synthesize and analyze data coming in from external sources. In such scripts, in can be handy to assign well-known names to some of the stages and choose the target via a script parameter. Here is a possible pattern for doing that.
###Code
targets = dict()
def configure_channels():
mgr = GuideChannelManager()
targets['possible'] = lambda: mgr.possible
targets['extant'] = lambda: mgr.extant
targets['all'] = lambda: mgr.all
targets['default'] = targets['all']
configure_channels()
def run_target(name='default'):
print_chans(targets[name](), mode='get')
###Output
_____no_output_____
###Markdown
With that in place, the `main()` processing of a script could capture the command-line arguments and end up calling a target like:
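A minimal sketch of such a main() using argparse is shown below (hypothetical and illustrative only, not part of flowz); the next cells simply call run_target directly.
###Code
import argparse

def main(argv=None):
    # Hypothetical command-line wrapper around the targets defined above
    parser = argparse.ArgumentParser(description='Run a named channel target')
    parser.add_argument('target', nargs='?', default='default',
                        help='name of the target to drain')
    args = parser.parse_args(argv)
    configure_channels()
    run_target(args.target)

# e.g. main(['all']); in a real script you would just call main()
###Output
_____no_output_____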
###Code
run_target('extant')
###Output
Creating extant channel
0
100
200
300
400
500
600
700
###Markdown
Or...
###Code
# Calling again to act like a fresh running of the script
configure_channels()
run_target()
# recreate the storage to not mess up other parts of the notebook when run out of order
storage = {num: expensive_deriver(num) for num in range(8)}
###Output
_____no_output_____ |
PS 3 - Lecture on PO.ipynb | ###Markdown
PS 3 Fall 2020 - Lecture Notebook for Week 6 - Potential Outcomes and ArraysHere is how we can do some potential outcomes calculations in Python, which will also give us practice with arrays.We'll be using "made up" data, but to make things more interesting let's use a real example which I've written about: international monitoring of elections.The outcome we care about is how fraudulent elections are, which we'll suppose is measured on a scale from 0 (perfectly clean) to 10 (completely fraudulent). Our independent or treatment variable will be whether international monitors are present. An interesting methodological challenge when studying this question in the real world is that we often measure how fraudulent elections are using reports from international monitors: so the presence of our independent variable may be required to measure the dependent variable! For our exercise here we will sweep this under the rug, and suppose that we get a reliable measurement of fraudulent elections are from other sources. First, let's create an array of potential outcomes without the treatment. That is, how fraudulent would the election be in the (sometimes hypothetical) scenario with no monitors.We are going to use the "numpy" library, which creates some nice functions for dealing with arrays.To keep things simple, we are going to imagine a data set with 8 elections.
###Code
import numpy as np
y0 = np.array([8, 2, 5, 8, 2, 3, 4, 3])
y0
###Output
_____no_output_____
###Markdown
Lets assume that the causal effect is equal to -1 for everyone. In words, monitors decrease the amount of fraud by 1 point on a 10 point scale.To do this, we will define a variable called k (think kappa from the slides), and add that to y0.
###Code
k=-1
y1 = y0 + k
y1
###Output
_____no_output_____
###Markdown
Note that we have done something kind of cool here: we added a number to an array, which is a list of numbers. Numpy deals with this the way that we would like: it subtracts 1 from all of the entries.Now let's suppose that 4 of the countries have election monitors while 4 do not. To capture this, we create an array of 0s and 1,s where 0 means not monitored and 1 means monitored.
###Code
d = np.array([1,0,1,1,0,0,1,0])
d
###Output
_____no_output_____
###Markdown
We are going to compute their realized fraud outcome with a clever array trick. Our goal is to get the value from y0 when d=0 and from y1 when d=1. To do this, we will multiply y1 times d, which will give us the realized outcome from those monitored and 0 otherwise, and then y0 times 1-d, which will give us the realized outcome for the non-monitored and 0 otherwise. So, by adding $y1*d$ and $y0*(1-d)$ we are always going to get the realized outcome plus 0, so the realized outcome.
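In equation form, this is the switching equation $ y_i = d_i \, y_{1i} + (1-d_i)\, y_{0i} $: when $d_i = 1$ the first term keeps $y_{1i}$, and when $d_i = 0$ the second term keeps $y_{0i}$.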
###Code
y1*d
y0*(1-d)
y = y1*d + y0*(1-d)
y
###Output
_____no_output_____
###Markdown
What could we actually observe in reality? The monitor status, and the observed amount of fraud. Here is one way to print that.
###Code
print(np.column_stack((d,y)))
###Output
_____no_output_____
###Markdown
Now let's think about how we can compute a difference of means for those with and without monitors. We will do this in a few steps. First, we want to compute the average level of fraud for countries with monitors. There is a nice trick for this: we will take the "subset" of observed outcomes which are monitored (d==1). The syntax for this is to add the condition we want in square brackets:
###Code
y[d==1]
###Output
_____no_output_____
###Markdown
This returned an array with four entries, which makes sense because four of our countries have monitors. If you return to the printed version above, you can check that it pulled the outcome for the four countries with monitors. We can do the same for the non-monitored countries
###Code
y[d==0]
###Output
_____no_output_____
###Markdown
Now we can compute the average fraud level in the monitored countries with the np.mean function:
###Code
np.mean(y[d==1])
###Output
_____no_output_____
###Markdown
Note that if we did the same thing but looked at our y1 vector, we get the same reason, since the "average potential outcome with monitoring among the monitored" is just "average outcome among the monitored"
###Code
np.mean(y1[d==1])
###Output
_____no_output_____
###Markdown
Now lets do the non-monitored elections:
###Code
np.mean(y[d==0])
###Output
_____no_output_____
###Markdown
Finally, let's put this together and compute our difference of means, which we will save as a variable called dom
###Code
dom = np.mean(y[d==1]) - np.mean(y[d==0])
dom
###Output
_____no_output_____
###Markdown
What does this mean in words? The elections with monitors were almost 3 points more fraudulent than those with no monitors! Maybe the monitors should have stayed home? But wait, as we learned in the lecture, this might not really capture the causal effect (which we assumed was -1). In particular, we can use our selection bias formula from the slides to calculate how wrong our difference of means is.
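In symbols, the difference of means decomposes as $ E[Y \mid D=1] - E[Y \mid D=0] = \underbrace{E[Y_1 \mid D=1] - E[Y_0 \mid D=1]}_{\text{causal effect}} + \underbrace{E[Y_0 \mid D=1] - E[Y_0 \mid D=0]}_{\text{selection bias}} $, which the next cells verify numerically.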
###Code
sb = np.mean(y0[d==1]) - np.mean(y0[d==0])
sb
###Output
_____no_output_____
###Markdown
However, notice that this requires knowing how fraudulent the monitored elections would have been without monitoring: it's an unobserved counterfactual! Still, in this hypothetical setting, we can check that the real causal effect plus our selection bias is equal to the difference of means:
###Code
print(k + sb,dom)
###Output
_____no_output_____
###Markdown
What if we flipped who the monitors went to go check. We can do this by defining a new alternative treatment variable d2 which is equal to 1 when d is 0 and equal to 0 when d is equal to 1
###Code
d2 = 1-d
y2 = y1*d2 + y0*(1-d2)
dom2 = np.mean(y2[d2==1]) - np.mean(y2[d2==0])
dom2
###Output
_____no_output_____
###Markdown
Now the difference of means is very negative! We can also compute the selection bias with this new monitoring regime
###Code
sb2 = np.mean(y0[d2==1]) - np.mean(y0[d2==0])
sb2
###Output
_____no_output_____ |
application_model_zoo/Example - TaoBao Commodity Dataset.ipynb | ###Markdown
Table of contents 1. Installation Instructions 2. Use trained model to segment roads in satellite imagery 3. How to train a custom segmenter using "Massachusetts Roads Dataset" About the networks1. UNet - https://arxiv.org/abs/1505.04597 - https://towardsdatascience.com/understanding-semantic-segmentation-with-unet-6be4f42d4b47 - https://towardsdatascience.com/unet-line-by-line-explanation-9b191c76baf52. FPN - http://openaccess.thecvf.com/content_cvpr_2017/papers/Lin_Feature_Pyramid_Networks_CVPR_2017_paper.pdf - https://towardsdatascience.com/review-fpn-feature-pyramid-network-object-detection-262fc7482610 - https://medium.com/@jonathan_hui/understanding-feature-pyramid-networks-for-object-detection-fpn-45b227b9106c3. PSPNet - https://arxiv.org/abs/1612.01105 - https://towardsdatascience.com/review-pspnet-winner-in-ilsvrc-2016-semantic-segmentation-scene-parsing-e089e5df177d - https://developers.arcgis.com/python/guide/how-pspnet-works/4. Linknet - https://arxiv.org/pdf/1707.03718.pdf - https://neptune.ai/blog/image-segmentation-tips-and-tricks-from-kaggle-competitions Installation - Run these commands - git clone https://github.com/Tessellate-Imaging/Monk_Object_Detection.git - cd Monk_Object_Detection/9_segmentation_models/installation - Select the right requirements file and run - cat requirements_cuda9.0.txt | xargs -n 1 -L 1 pip install
###Code
! git clone https://github.com/Tessellate-Imaging/Monk_Object_Detection.git
# For colab use the command below
! cd Monk_Object_Detection/9_segmentation_models/installation && cat requirements_colab.txt | xargs -n 1 -L 1 pip install
# For Local systems and cloud select the right CUDA version
#! cd Monk_Object_Detection/9_segmentation_models/installation && cat requirements_cuda10.0.txt | xargs -n 1 -L 1 pip install
###Output
_____no_output_____
###Markdown
Use already trained model for demo
###Code
import os
import sys
sys.path.append("Monk_Object_Detection/9_segmentation_models/lib/");
from infer_segmentation import Infer
gtf = Infer();
classes_dict = {
'background': 0,
'foreground': 1
};
classes_to_train = ['background', 'foreground'];
gtf.Data_Params(classes_dict, classes_to_train, image_shape=[384, 384])
# Download trained model
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1jLFbsBarJMe_KIdEabZXALNSss-nEOar' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1jLFbsBarJMe_KIdEabZXALNSss-nEOar" -O seg_taobao_trained.zip && rm -rf /tmp/cookies.txt
! unzip -qq seg_taobao_trained.zip
gtf.Model_Params(model="Unet", backbone="efficientnetb3", path_to_model='seg_taobao_trained/best_model.h5')
gtf.Setup();
gtf.Predict("seg_taobao_trained/test/1.png", vis=True);
gtf.Predict("seg_taobao_trained/test/2.png", vis=True);
gtf.Predict("seg_taobao_trained/test/3.png", vis=True);
gtf.Predict("seg_taobao_trained/test/4.png", vis=True);
###Output
_____no_output_____
###Markdown
Train your own detector Monk Format Dataset Directory Structure
    root_dir
      |
      |----train_img_dir
      |         |
      |         |---------img1.jpg
      |         |---------img2.jpg
      |         |---------..........(and so on)
      |
      |----train_mask_dir
      |         |
      |         |---------img1.jpg
      |         |---------img2.jpg
      |         |---------..........(and so on)
      |
      |----val_img_dir (optional)
      |         |
      |         |---------img1.jpg
      |         |---------img2.jpg
      |         |---------..........(and so on)
      |
      |----val_mask_dir (optional)
      |         |
      |         |---------img1.jpg
      |         |---------img2.jpg
      |         |---------..........(and so on)
Sample Dataset Credits credits: http://www.sysu-hcp.net/taobao-commodity-dataset/
###Code
! wget http://www.sysu-hcp.net/wp-content/uploads/2016/03/Imgs_TCD.zip
! wget http://www.sysu-hcp.net/wp-content/uploads/2016/03/Mask_TCD.zip
! unzip -qq Imgs_TCD.zip
! unzip -qq Mask_TCD.zip
! mkdir updated_masks
! mkdir updated_imgs
import os
import cv2
import numpy as np
from tqdm import tqdm
img_list = os.listdir("Mask_TCD");
for i in tqdm(range(len(img_list))):
img = cv2.imread("Mask_TCD/" + img_list[i])
img = cv2.resize(img, (384, 384))
img[img > 0] = 1
cv2.imwrite("updated_masks/" + img_list[i], img)
img_list = os.listdir("Imgs_TCD");
for i in tqdm(range(len(img_list))):
img = cv2.imread("Imgs_TCD/" + img_list[i])
img = cv2.resize(img, (384, 384))
cv2.imwrite("updated_imgs/" + img_list[i].split(".")[0] + ".png", img)
###Output
_____no_output_____
###Markdown
Training
###Code
import os
import sys
sys.path.append("Monk_Object_Detection/9_segmentation_models/lib/");
from train_segmentation import Segmenter
gtf = Segmenter();
img_dir = "updated_imgs";
mask_dir = "updated_masks";
classes_dict = {
'background': 0,
'foreground': 1
};
classes_to_train = ['background', 'foreground'];
gtf.Train_Dataset(img_dir, mask_dir, classes_dict, classes_to_train)
gtf.Val_Dataset(img_dir, mask_dir)
gtf.List_Backbones();
gtf.Data_Params(batch_size=2, backbone="efficientnetb3", image_shape=[384, 384])
gtf.List_Models();
gtf.Model_Params(model="Unet")
gtf.Train_Params(lr=0.0001)
gtf.Setup();
gtf.Train(num_epochs=300);
gtf.Visualize_Training_History();
###Output
_____no_output_____
###Markdown
Inference
###Code
import os
import sys
sys.path.append("Monk_Object_Detection/9_segmentation_models/lib/");
from infer_segmentation import Infer
gtf = Infer();
classes_dict = {
'background': 0,
'foreground': 1
};
classes_to_train = ['background', 'foreground'];
gtf.Data_Params(classes_dict, classes_to_train, image_shape=[384, 384])
gtf.Model_Params(model="Unet", backbone="efficientnetb3", path_to_model='best_model.h5')
gtf.Setup();
gtf.Predict("updated_imgs/01453.png", vis=True);
###Output
_____no_output_____ |
Assignment 1/Alexnet and SGD with Momentum.ipynb | ###Markdown
Alexnet and SGD with MomentumHere's the implementation mentioned in the report of alexnet and sgd with momentum.
###Code
import torch
import torch.nn as nn


class LikeAlexNet(nn.Module):
    def __init__(self):
        super(LikeAlexNet, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(64, 192, kernel_size=5, padding=2),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(192, 384, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(384, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
)
self.classifier = nn.Sequential(
# nn.Dropout(),
nn.Linear(256 * 1 * 1, 4096),
nn.ReLU(inplace=True),
# nn.Dropout(),
nn.Linear(4096, 4096),
nn.ReLU(inplace=True),
nn.Linear(4096, 2),
)
def forward(self, x):
x = self.features(x)
x = x.view(x.size(0), 256 * 1 * 1)
x = self.classifier(x)
return x
from collections import defaultdict
class myoptimizer():
    def __init__(self, model, lr, beta=0.9, lr_decay=.98):
self.v = defaultdict(int)
self.beta = beta
self.lr = lr
self.model = model
self.lr_decay = lr_decay
def step(self):
with torch.no_grad():
for i,params in enumerate(self.model.parameters()):
self.v[i] = self.beta*self.v[i] + (1-self.beta) * params.grad
params.data -= self.lr * self.v[i]
def zerograd(self):
for params in self.model.parameters():
if params.grad is not None:
params.grad.detach()
params.grad.zero_()
def update_lr(self):
self.lr = self.lr * self.lr_decay
###Output
_____no_output_____ |
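###Markdown
A possible smoke test of the classes above on random tensors is sketched below (illustrative only, not from the report; the 64x64 input size is chosen because it reduces to the 256*1*1 feature map the classifier expects).
###Code
# Hypothetical smoke test with random tensors -- nothing here comes from the original report
import torch
import torch.nn as nn

model = LikeAlexNet()
criterion = nn.CrossEntropyLoss()
opt = myoptimizer(model, lr=0.01, beta=0.9, lr_decay=0.98)

fake_batches = [(torch.randn(4, 3, 64, 64), torch.randint(0, 2, (4,))) for _ in range(3)]

for epoch in range(2):
    for images, labels in fake_batches:
        opt.zerograd()                            # clear old gradients
        loss = criterion(model(images), labels)
        loss.backward()                           # compute gradients
        opt.step()                                # v = beta*v + (1-beta)*grad; w -= lr*v
    opt.update_lr()                               # decay the learning rate once per epoch
print('final lr:', opt.lr)
###Output
_____no_output_____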
bronze/.ipynb_checkpoints/B96_Homework_Ojars-checkpoint.ipynb | ###Markdown
prepared by Abuzer Yakaryilmaz (QuSoft@Riga) | December 09, 2018 I have some macros here. If there is a problem with displaying mathematical formulas, please run me to load these macros.$ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\inner}[2]{\langle 1,2\rangle} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $ Homework (Rotations) Deadline: January 14, 2019Send your solutions to [email protected] free to ask questions by e-mail. Decision problems on streaming inputs 1. Suppose that you read a series of symbols from an alphabet $ \Sigma $. For example, $ \Sigma = \{a,b\} $, and your inputs can be $ aaabbbabababababab $ or $ aaaaaaa $ or $ bbbbbba $, etc.2. You may use one or more qubits for solving the given task. 3. At the beginning, each qubit is set to $ \ket{0} $.4. For each symbol, you fix certain operators and apply them to the quantum register whenever you read this symbol. For example, for each $ a $, you may apply x-gate on each qubit; and, for each $ b $, you may apply z-gate and then h-gate on each qubit.5. After reading whole the input, you make a measurement. You should make a decision on the given input. There will be two possible outcomes. So, you divide all possible outcomes into two sets, and give your decisions accordingly. Example 1 Let $ \Sigma = \{a\} $.We decide whether the length of the given input is odd or even.We use a single qubit. For each symbol, we apply x-gate.If we observe $ 0 $ (resp., $1$) at the end, we output "even" (resp., "odd"). We test our program on randomly generated 10 strings of length less than 50.
###Code
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
def parity_check(input):
qreg = QuantumRegister(1)
creg = ClassicalRegister(1)
mycircuit = QuantumCircuit(qreg,creg)
for i in range(len(input)):
mycircuit.x(qreg[0])
mycircuit.measure(qreg,creg)
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=100)
counts = job.result().get_counts(mycircuit)
return counts
from random import randrange
for i in range(10):
length = randrange(50)
input = ""
for j in range(length):
input = input + "a"
counts = parity_check(input)
print("the input is",input)
print("its length is",length)
print(counts)
for key in counts:
if key=="0":
print("the output 'even' is given",counts["0"],"times")
if key=="1":
print("the output 'odd' is given",counts["1"],"times")
print()
###Output
the input is aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
its length is 33
{'1': 100}
the output 'odd' is given 100 times
the input is aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
its length is 42
{'0': 100}
the output 'even' is given 100 times
the input is aaaaaaaaaaaaaaaaa
its length is 17
{'1': 100}
the output 'odd' is given 100 times
the input is aaaaaaaaaaaaaa
its length is 14
{'0': 100}
the output 'even' is given 100 times
the input is
its length is 0
{'0': 100}
the output 'even' is given 100 times
the input is aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
its length is 44
{'0': 100}
the output 'even' is given 100 times
the input is aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
its length is 33
{'1': 100}
the output 'odd' is given 100 times
the input is aaaaaaaaaaaaaaaaa
its length is 17
{'1': 100}
the output 'odd' is given 100 times
the input is aaaaaaaaaaaaaaaaaaaaaaaaaaaa
its length is 28
{'0': 100}
the output 'even' is given 100 times
the input is aaaaaaaaaaaaaaaaaaaaaaa
its length is 23
{'1': 100}
the output 'odd' is given 100 times
###Markdown
Example 2 Let $ \Sigma = \{a,b\} $.We decide whether the input contains odd numbers of $a$s and odd numbers of $b$s.We use two qubits. For each $a$, we apply x-gate to the first qubit.For each $b$, we apply x-gate to the second qubit.If we observe $ 11 $ at the end, we output "yes". Otherwise, we output "no". We test our program on randomly generated 20 strings of length less than 40.
###Code
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
def double_odd(input):
qreg = QuantumRegister(2)
creg = ClassicalRegister(2)
mycircuit = QuantumCircuit(qreg,creg)
for i in range(len(input)):
if input[i]=="a":
mycircuit.x(qreg[0])
if input[i]=="b":
mycircuit.x(qreg[1])
mycircuit.measure(qreg,creg)
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=100)
counts = job.result().get_counts(mycircuit)
return counts
from random import randrange
for i in range(20):
length = randrange(40)
input = ""
number_of_as=0
number_of_bs=0
for j in range(length):
if randrange(2)==0:
input = input + "a"
number_of_as = number_of_as + 1
else:
input = input + "b"
number_of_bs = number_of_bs + 1
counts = double_odd(input)
print("the input is",input)
print("the number of as is",number_of_as)
print("the number of bs is",number_of_bs)
print(counts)
number_of_yes = 0
number_of_no = 0
for key in counts:
if key=="11":
number_of_yes = counts["11"]
elif key=="00":
number_of_no = number_of_no + counts["00"]
elif key=="01":
number_of_no = number_of_no + counts["01"]
elif key=="11":
number_of_no = number_of_no + counts["10"]
print("number of yes is",number_of_yes,"and number of no is",number_of_no)
print()
###Output
the input is ababaaaabbaaa
the number of as is 9
the number of bs is 4
{'01': 100}
number of yes is 0 and number of no is 100
the input is abaaaabbabbbab
the number of as is 7
the number of bs is 7
{'11': 100}
number of yes is 100 and number of no is 0
the input is aaaabababbbaaaaabaaaabbb
the number of as is 15
the number of bs is 9
{'11': 100}
number of yes is 100 and number of no is 0
the input is bbaaaaaab
the number of as is 6
the number of bs is 3
{'10': 100}
number of yes is 0 and number of no is 0
the input is baabbb
the number of as is 2
the number of bs is 4
{'00': 100}
number of yes is 0 and number of no is 100
the input is babaaabbbbbbaa
the number of as is 6
the number of bs is 8
{'00': 100}
number of yes is 0 and number of no is 100
the input is
the number of as is 0
the number of bs is 0
{'00': 100}
number of yes is 0 and number of no is 100
the input is bababaabbbbaababbbabbbaaaabbbabbabab
the number of as is 15
the number of bs is 21
{'11': 100}
number of yes is 100 and number of no is 0
the input is babbbaababbbbaabbaabbaaaababba
the number of as is 14
the number of bs is 16
{'00': 100}
number of yes is 0 and number of no is 100
the input is bbababaabaababababbbbbaba
the number of as is 11
the number of bs is 14
{'01': 100}
number of yes is 0 and number of no is 100
the input is abbabbbbbbbabaaabaabbbbbaababbbabaabbba
the number of as is 15
the number of bs is 24
{'01': 100}
number of yes is 0 and number of no is 100
the input is ababaaaaaaabaabbaaaabbbbbbbbabaaabbab
the number of as is 20
the number of bs is 17
{'10': 100}
number of yes is 0 and number of no is 0
the input is aaaaab
the number of as is 5
the number of bs is 1
{'11': 100}
number of yes is 100 and number of no is 0
the input is baaabaabababaaaaaaa
the number of as is 14
the number of bs is 5
{'10': 100}
number of yes is 0 and number of no is 0
the input is aabaaababbabbabaaaabaabaaabaaa
the number of as is 20
the number of bs is 10
{'00': 100}
number of yes is 0 and number of no is 100
the input is baabbbaaaabbaabaabbbbabaaaaababaaa
the number of as is 20
the number of bs is 14
{'00': 100}
number of yes is 0 and number of no is 100
the input is baaaabbabbababbbbbaab
the number of as is 9
the number of bs is 12
{'01': 100}
number of yes is 0 and number of no is 100
the input is bbbabbbaaababbaaabb
the number of as is 8
the number of bs is 11
{'10': 100}
number of yes is 0 and number of no is 0
the input is bbbbaaababbaabbbbbaa
the number of as is 8
the number of bs is 12
{'00': 100}
number of yes is 0 and number of no is 100
the input is a
the number of as is 1
the number of bs is 0
{'01': 100}
number of yes is 0 and number of no is 100
###Markdown
Task 1 Let $ \Sigma = \{a\} $.You will read an input of length which is a multiple of $ 8 $: $ 8i \in \{8,16,24,\ldots\} $.Use a single qubit and determine whether the multiple ($ i $) is odd or even.For each $a$, you can apply a rotation.Test your program with the inputs of lengths $ 8, 16, 24, 32, 40, 48, 56, 64, 72, 80 $.
###Code
#
# your solution
#
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
from math import pi
num = 100 # number of shots
theta = pi/2/8 # every quadrant gives answer
for i in range( 1, 11 ):
qreg = QuantumRegister(1)
creg = ClassicalRegister(1)
mycircuit = QuantumCircuit(qreg,creg)
for j in range( 1, i*8+1 ):
mycircuit.ry( 2*theta, qreg[0] )
mycircuit.measure(qreg,creg)
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=num)
counts = job.result().get_counts(mycircuit)
print( "For i=", i, "length=", i*8, " result is ", end="" )
#print( "For i=", i, "length=", i*8, " result is ", counts, end=", " )
if '1' in counts:
if counts['1'] == num:
print( "odd" )
if '0' in counts:
if counts['0'] == num:
print( "even" )
###Output
For i= 1 length= 8 result is odd
For i= 2 length= 16 result is even
For i= 3 length= 24 result is odd
For i= 4 length= 32 result is even
For i= 5 length= 40 result is odd
For i= 6 length= 48 result is even
For i= 7 length= 56 result is odd
For i= 8 length= 64 result is even
For i= 9 length= 72 result is odd
For i= 10 length= 80 result is even
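###Markdown
Why this works (sketch): each symbol applies $ry(2\theta)$ with $\theta = \pi/16$, so after $n$ symbols the qubit is in state $\cos(n\theta)|0\rangle + \sin(n\theta)|1\rangle$. For $n = 8i$ this becomes $\cos(i\pi/2)|0\rangle + \sin(i\pi/2)|1\rangle$, so the measurement gives $1$ with certainty when $i$ is odd and $0$ with certainty when $i$ is even.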
###Markdown
Task 2 Let $ \Sigma= \{a\} $.Determine whether the length of the input is a multiple of 7 or not in the following manner:1. If it is a multiple of 7, then output "yes" with probability 1.2. If it is not a multiple of 7, then output "yes" with probability less than 1.For each $a$, you can apply a rotation.Test your program with all inputs of lengths less than 29.Determine the inputs for which you output "yes" nearly three times less than the output "no".
###Code
#
# your solution
#
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
from math import pi
num = 100 # number of shots
theta = pi/7 # 0 or pi gives answer '0' which is "good"
for i in range( 0, 29 ):
qreg = QuantumRegister(1)
creg = ClassicalRegister(1)
mycircuit = QuantumCircuit(qreg,creg)
for j in range( 1, i+1 ):
mycircuit.ry( 2*theta, qreg[0] )
mycircuit.measure(qreg,creg)
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=num)
counts = job.result().get_counts(mycircuit)
print( "For length=", i, " probability of 'yes' is ", end="" )
if '0' in counts:
print( counts['0'], "%" )
if counts['0']<(100-counts['0'])/3:
print(' "yes" nearly three times less than the output "no"')
else:
print( "0%" )
###Output
For length= 0 probability of 'yes' is 100 %
For length= 1 probability of 'yes' is 83 %
For length= 2 probability of 'yes' is 38 %
For length= 3 probability of 'yes' is 4 %
"yes" nearly three times less than the output "no"
For length= 4 probability of 'yes' is 3 %
"yes" nearly three times less than the output "no"
For length= 5 probability of 'yes' is 40 %
For length= 6 probability of 'yes' is 84 %
For length= 7 probability of 'yes' is 100 %
For length= 8 probability of 'yes' is 79 %
For length= 9 probability of 'yes' is 38 %
For length= 10 probability of 'yes' is 4 %
"yes" nearly three times less than the output "no"
For length= 11 probability of 'yes' is 9 %
"yes" nearly three times less than the output "no"
For length= 12 probability of 'yes' is 45 %
For length= 13 probability of 'yes' is 86 %
For length= 14 probability of 'yes' is 100 %
For length= 15 probability of 'yes' is 81 %
For length= 16 probability of 'yes' is 40 %
For length= 17 probability of 'yes' is 5 %
"yes" nearly three times less than the output "no"
For length= 18 probability of 'yes' is 4 %
"yes" nearly three times less than the output "no"
For length= 19 probability of 'yes' is 41 %
For length= 20 probability of 'yes' is 84 %
For length= 21 probability of 'yes' is 100 %
For length= 22 probability of 'yes' is 81 %
For length= 23 probability of 'yes' is 38 %
For length= 24 probability of 'yes' is 3 %
"yes" nearly three times less than the output "no"
For length= 25 probability of 'yes' is 5 %
"yes" nearly three times less than the output "no"
For length= 26 probability of 'yes' is 39 %
For length= 27 probability of 'yes' is 81 %
For length= 28 probability of 'yes' is 100 %
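###Markdown
Why this works (sketch): with a rotation of $\theta = \pi/7$ per symbol (applied as $ry(2\theta)$), after an input of length $\ell$ the qubit is in state $\cos(\ell\pi/7)|0\rangle + \sin(\ell\pi/7)|1\rangle$, so the probability of answering "yes" is $\cos^2(\ell\pi/7)$. This equals $1$ exactly when $\ell$ is a multiple of $7$ and is strictly below $1$ otherwise, matching the percentages printed above.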
###Markdown
Task 3 Write down possible six different rotation angles that would work for Task 2. Rotations:(Double click to this cell for editing.)1. $ 1\frac{\pi}7 $2. $ 2\frac{\pi}7 $3. $ 3\frac{\pi}7 $4. $ 4\frac{\pi}7 $5. $ 5\frac{\pi}7 $6. $ 6\frac{\pi}7 $ Task 4 Experimentially test each of these rotations for Task 2.
###Code
#
# your solution
#
#If I decided on an angle in Task 2, why should I test it again? Probably I do not understand the idea of tasks 3 and 4.
###Output
_____no_output_____
###Markdown
Task 5We can improve the algorihtm for Task 2.Let $ \Sigma= \{a\} $.Determine whether the length of input is a multiple of 91.There are 90 different rotations that you can use.Randomly pick four of these rotations and fix them.Use four qubits. In each qubit, apply one of these rotations.Test your program with all inputs of lengths less than 92.If the input length is 91, then your program should output "yes" with probability 1.If the input length is not 91, then your program should output "yes" with probability less than $ \epsilon < \frac{1}{2}$.Experimentially verify both cases, and also determine the approximate value of $\epsilon$.
###Code
#
# your solution
#
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
from random import randint
from math import pi
num = 100 # number of shots
angles = []
angles.append( 2*pi/91 )
for i in range(90): # 90 times because 0th is assigned manually before
angles.append( angles[i] + 2*pi/91 )
#print( angles )
fix = [] # fixed rotations
for i in range( 4 ):
r = randint( 0, 90-i ) # smart way to make sure that numbers do not repeat
fix.append( angles[r] )
angles[r] = angles[90-i] # smart way to make sure that numbers do not repeat
print( "Angles: ",fix )
#print( angles )
for n in range(1, 92):
print( "Len=", n)
qreg = QuantumRegister(4)
creg = ClassicalRegister(4)
mycircuit = QuantumCircuit(qreg,creg)
for i in range( 1, n+1 ):
for j in range( 4 ):
mycircuit.ry( fix[j], qreg[j] )
mycircuit.measure(qreg,creg)
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=num)
counts = job.result().get_counts(mycircuit)
if '0000' in counts:
print( "For ", i, "input characters, the result '0000' probability is ", counts['0000'], "%" )
#print( counts )
###Output
Angles: [1.7951958020513108, 5.937955345246646, 3.1070696573965004, 5.868909352860057]
Len= 1
Len= 2
For 2 input characters, the result '0000' probability is 4 %
Len= 3
Len= 4
For 4 input characters, the result '0000' probability is 18 %
Len= 5
Len= 6
Len= 7
Len= 8
Len= 9
Len= 10
Len= 11
Len= 12
For 12 input characters, the result '0000' probability is 2 %
Len= 13
Len= 14
For 14 input characters, the result '0000' probability is 45 %
Len= 15
For 15 input characters, the result '0000' probability is 2 %
Len= 16
For 16 input characters, the result '0000' probability is 3 %
Len= 17
For 17 input characters, the result '0000' probability is 7 %
Len= 18
For 18 input characters, the result '0000' probability is 50 %
Len= 19
Len= 20
For 20 input characters, the result '0000' probability is 5 %
Len= 21
For 21 input characters, the result '0000' probability is 3 %
Len= 22
Len= 23
Len= 24
Len= 25
Len= 26
Len= 27
Len= 28
For 28 input characters, the result '0000' probability is 2 %
Len= 29
Len= 30
For 30 input characters, the result '0000' probability is 1 %
Len= 31
For 31 input characters, the result '0000' probability is 10 %
Len= 32
For 32 input characters, the result '0000' probability is 33 %
Len= 33
For 33 input characters, the result '0000' probability is 1 %
Len= 34
For 34 input characters, the result '0000' probability is 12 %
Len= 35
For 35 input characters, the result '0000' probability is 7 %
Len= 36
For 36 input characters, the result '0000' probability is 2 %
Len= 37
Len= 38
Len= 39
Len= 40
For 40 input characters, the result '0000' probability is 1 %
Len= 41
Len= 42
For 42 input characters, the result '0000' probability is 8 %
Len= 43
For 43 input characters, the result '0000' probability is 4 %
Len= 44
For 44 input characters, the result '0000' probability is 1 %
Len= 45
Len= 46
Len= 47
Len= 48
For 48 input characters, the result '0000' probability is 2 %
Len= 49
For 49 input characters, the result '0000' probability is 15 %
Len= 50
For 50 input characters, the result '0000' probability is 2 %
Len= 51
For 51 input characters, the result '0000' probability is 1 %
Len= 52
Len= 53
Len= 54
Len= 55
For 55 input characters, the result '0000' probability is 5 %
Len= 56
For 56 input characters, the result '0000' probability is 12 %
Len= 57
For 57 input characters, the result '0000' probability is 14 %
Len= 58
For 58 input characters, the result '0000' probability is 1 %
Len= 59
For 59 input characters, the result '0000' probability is 27 %
Len= 60
For 60 input characters, the result '0000' probability is 7 %
Len= 61
Len= 62
For 62 input characters, the result '0000' probability is 1 %
Len= 63
For 63 input characters, the result '0000' probability is 1 %
Len= 64
Len= 65
Len= 66
Len= 67
Len= 68
Len= 69
Len= 70
For 70 input characters, the result '0000' probability is 2 %
Len= 71
For 71 input characters, the result '0000' probability is 8 %
Len= 72
Len= 73
For 73 input characters, the result '0000' probability is 46 %
Len= 74
For 74 input characters, the result '0000' probability is 7 %
Len= 75
For 75 input characters, the result '0000' probability is 2 %
Len= 76
For 76 input characters, the result '0000' probability is 2 %
Len= 77
For 77 input characters, the result '0000' probability is 52 %
Len= 78
For 78 input characters, the result '0000' probability is 1 %
Len= 79
For 79 input characters, the result '0000' probability is 1 %
Len= 80
For 80 input characters, the result '0000' probability is 1 %
Len= 81
Len= 82
Len= 83
Len= 84
Len= 85
For 85 input characters, the result '0000' probability is 1 %
Len= 86
Len= 87
For 87 input characters, the result '0000' probability is 18 %
Len= 88
For 88 input characters, the result '0000' probability is 1 %
Len= 89
For 89 input characters, the result '0000' probability is 3 %
Len= 90
Len= 91
For 91 input characters, the result '0000' probability is 100 %
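###Markdown
Why this works (sketch): qubit $j$ is rotated by a fixed angle $2\pi k_j/91$ per symbol, so after an input of length $n$ it is observed as $0$ with probability $\cos^2(n k_j \pi/91)$, and all four qubits read $0$ with probability $\prod_j \cos^2(n k_j \pi/91)$. For $n = 91$ every factor equals $\cos^2(k_j\pi) = 1$, so "yes" is output with probability $1$; for other lengths at least one of the randomly chosen rotations typically lands far from a multiple of $\pi$, keeping the product well below $1$. The largest "yes" percentage observed above for a length other than $91$ gives an estimate of $\epsilon$.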
###Markdown
Task 6 Repeat Task 5 with five and then six rotations by using five and six qubits, respectively.The value of $ \epsilon $ is expected to decrease if we use more rotations and qubits.
###Code
#
# your solution
#
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
from random import randint
from math import pi
num = 100 # number of shots
for q in range( 5, 6+1):
print('*******************',q,'qbits **************')
angles = []
angles.append( 2*pi/91 )
for i in range(90): # 90 times because 0th is assigned manually before
angles.append( angles[i] + 2*pi/91 )
#print( angles )
fix = [] # fixed rotations
for i in range( q ):
r = randint( 0, 90-i ) # smart way to make sure that numbers do not repeat
fix.append( angles[r] )
angles[r] = angles[90-i] # smart way to make sure that numbers do not repeat
print( "Angles: ",fix )
#print( angles )
for n in range(1, 92):
print( "Len=", n)
qreg = QuantumRegister(q)
creg = ClassicalRegister(q)
mycircuit = QuantumCircuit(qreg,creg)
for i in range( 1, n+1 ):
for j in range( q ):
mycircuit.ry( fix[j], qreg[j] )
mycircuit.measure(qreg,creg)
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=num)
counts = job.result().get_counts(mycircuit)
string = '0' * q
if string in counts:
print( "For ", i, "input characters, the result '0000' probability is ", counts[string], "%" )
###Output
******************* 5 qbits **************
Angles: [1.657103817278133, 0.6214139314792997, 3.9356215660355676, 0.3452299619329443, 4.5570354975148675]
Len= 1
For 1 input characters, the result '0000' probability is 2 %
Len= 2
Len= 3
For 3 input characters, the result '0000' probability is 9 %
Len= 4
Len= 5
Len= 6
Len= 7
Len= 8
Len= 9
Len= 10
Len= 11
For 11 input characters, the result '0000' probability is 8 %
Len= 12
Len= 13
Len= 14
For 14 input characters, the result '0000' probability is 2 %
Len= 15
Len= 16
For 16 input characters, the result '0000' probability is 1 %
Len= 17
Len= 18
For 18 input characters, the result '0000' probability is 11 %
Len= 19
For 19 input characters, the result '0000' probability is 45 %
Len= 20
Len= 21
Len= 22
For 22 input characters, the result '0000' probability is 17 %
Len= 23
For 23 input characters, the result '0000' probability is 1 %
Len= 24
Len= 25
Len= 26
Len= 27
Len= 28
Len= 29
Len= 30
For 30 input characters, the result '0000' probability is 5 %
Len= 31
Len= 32
For 32 input characters, the result '0000' probability is 1 %
Len= 33
For 33 input characters, the result '0000' probability is 4 %
Len= 34
For 34 input characters, the result '0000' probability is 1 %
Len= 35
For 35 input characters, the result '0000' probability is 1 %
Len= 36
Len= 37
For 37 input characters, the result '0000' probability is 12 %
Len= 38
For 38 input characters, the result '0000' probability is 1 %
Len= 39
Len= 40
For 40 input characters, the result '0000' probability is 1 %
Len= 41
For 41 input characters, the result '0000' probability is 2 %
Len= 42
Len= 43
Len= 44
Len= 45
Len= 46
Len= 47
Len= 48
For 48 input characters, the result '0000' probability is 1 %
Len= 49
Len= 50
For 50 input characters, the result '0000' probability is 5 %
Len= 51
For 51 input characters, the result '0000' probability is 1 %
Len= 52
For 52 input characters, the result '0000' probability is 1 %
Len= 53
For 53 input characters, the result '0000' probability is 2 %
Len= 54
For 54 input characters, the result '0000' probability is 4 %
Len= 55
Len= 56
Len= 57
For 57 input characters, the result '0000' probability is 2 %
Len= 58
For 58 input characters, the result '0000' probability is 2 %
Len= 59
Len= 60
Len= 61
For 61 input characters, the result '0000' probability is 6 %
Len= 62
Len= 63
Len= 64
Len= 65
Len= 66
Len= 67
Len= 68
For 68 input characters, the result '0000' probability is 1 %
Len= 69
For 69 input characters, the result '0000' probability is 11 %
Len= 70
Len= 71
Len= 72
For 72 input characters, the result '0000' probability is 40 %
Len= 73
For 73 input characters, the result '0000' probability is 10 %
Len= 74
Len= 75
Len= 76
Len= 77
For 77 input characters, the result '0000' probability is 1 %
Len= 78
Len= 79
Len= 80
For 80 input characters, the result '0000' probability is 9 %
Len= 81
Len= 82
Len= 83
Len= 84
Len= 85
Len= 86
Len= 87
Len= 88
For 88 input characters, the result '0000' probability is 15 %
Len= 89
Len= 90
Len= 91
For 91 input characters, the result '0000' probability is 100 %
******************* 6 qbits **************
Angles: [1.8642417944378997, 6.145093322406413, 4.004667558422156, 5.592725383313701, 4.4879895051282785, 5.937955345246646]
Len= 1
For 1 input characters, the result '0000' probability is 1 %
Len= 2
Len= 3
For 3 input characters, the result '0000' probability is 16 %
Len= 4
Len= 5
Len= 6
Len= 7
Len= 8
Len= 9
Len= 10
Len= 11
Len= 12
Len= 13
Len= 14
Len= 15
Len= 16
Len= 17
For 17 input characters, the result '0000' probability is 10 %
Len= 18
Len= 19
Len= 20
Len= 21
Len= 22
Len= 23
Len= 24
Len= 25
Len= 26
Len= 27
Len= 28
Len= 29
Len= 30
Len= 31
Len= 32
Len= 33
Len= 34
Len= 35
Len= 36
For 36 input characters, the result '0000' probability is 8 %
Len= 37
Len= 38
For 38 input characters, the result '0000' probability is 13 %
Len= 39
Len= 40
Len= 41
Len= 42
Len= 43
Len= 44
For 44 input characters, the result '0000' probability is 1 %
Len= 45
Len= 46
Len= 47
For 47 input characters, the result '0000' probability is 1 %
Len= 48
Len= 49
Len= 50
Len= 51
Len= 52
Len= 53
For 53 input characters, the result '0000' probability is 13 %
Len= 54
Len= 55
For 55 input characters, the result '0000' probability is 3 %
Len= 56
For 56 input characters, the result '0000' probability is 3 %
Len= 57
For 57 input characters, the result '0000' probability is 3 %
Len= 58
Len= 59
Len= 60
Len= 61
Len= 62
Len= 63
Len= 64
Len= 65
Len= 66
Len= 67
Len= 68
Len= 69
Len= 70
Len= 71
Len= 72
Len= 73
Len= 74
For 74 input characters, the result '0000' probability is 11 %
Len= 75
Len= 76
Len= 77
Len= 78
For 78 input characters, the result '0000' probability is 1 %
Len= 79
Len= 80
Len= 81
Len= 82
Len= 83
Len= 84
Len= 85
For 85 input characters, the result '0000' probability is 1 %
Len= 86
Len= 87
Len= 88
For 88 input characters, the result '0000' probability is 17 %
Len= 89
For 89 input characters, the result '0000' probability is 1 %
Len= 90
Len= 91
For 91 input characters, the result '0000' probability is 100 %
|
DeepSurv_Final_Project.ipynb | ###Markdown
TFDeepSurv: Deep Cox proportional risk model and survival analysis implemented with TensorFlow.

1. Differences from DeepSurv

DeepSurv, a package implementing the Deep Cox proportional risk model, is open-source on GitHub. Our work differs in:
- Evaluating variable importance in the deep neural network.
- Identifying ties of death time in the survival data, which implies a different loss function and estimator for the survival function (Breslow or Efron approximation).
- Providing a survival function estimated by three optional algorithms.
- Tuning hyperparameters of the DNN with a scientific method - Bayesian hyperparameter optimization.

2. http://localhost:8888/notebooks/TFDeepSurv_Testing.ipynb

The project is based on research on breast cancer. The paper about this project has been submitted to IEEE JBHI; the status will be updated here once the paper is published.

3. Installation

From source: download the TFDeepSurv package and install from the directory (Python version: 3.x):

git clone https://github.com/liupei101/TFDeepSurv.git
cd TFDeepSurv
pip install .

4. Get it started

4.1 Running with simulated data

4.1.1 Import packages and prepare data

https://github.com/liupei101/TFDeepSurv
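Note that the cells from section 4.1.2 onward refer to `train_data` / `test_data` dictionaries (with keys `'x'`, `'t'` and `'e'`) and to a `dsl` handle from the installed TFDeepSurv package, none of which are created in the cells shown here. The sketch below is one minimal way to prepare such dictionaries from synthetic survival data with plain NumPy/pandas so those cells have inputs to run on; the data-generating process and all names introduced here are illustrative assumptions, and `dsl` itself still has to be imported from TFDeepSurv.

```python
# Minimal, illustrative data preparation (assumed structure: dicts with 'x', 't', 'e')
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n, p = 2000, 10                                  # samples and covariates (10 matches input_nodes below)
X = rng.normal(size=(n, p))
risk = X[:, 0] - 0.5 * X[:, 1]                   # toy linear risk score
t_event = rng.exponential(scale=np.exp(-risk))   # toy event times: higher risk -> earlier events
t_censor = rng.exponential(scale=1.5, size=n)    # independent censoring times
t = np.minimum(t_event, t_censor)
e = (t_event <= t_censor).astype(int)            # 1 = event observed, 0 = censored

df = pd.DataFrame(X, columns=[f"x{i}" for i in range(p)])
idx = rng.permutation(n)
train_idx, test_idx = idx[: int(0.8 * n)], idx[int(0.8 * n):]

train_data = {'x': df.iloc[train_idx].values, 't': t[train_idx], 'e': e[train_idx]}
test_data  = {'x': df.iloc[test_idx].values,  't': t[test_idx],  'e': e[test_idx]}
```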
###Code
import deepsurv
from lifelines import KaplanMeierFitter
%matplotlib inline
import matplotlib.pyplot as plt
from pandas import read_csv
#import pandas as pd
from matplotlib import pyplot
data = read_csv('lung.csv',index_col=0) #, , parse_dates=True, squeeze=True)
#data_cox = pd.read_csv("lung.csv")
#data = data.drop(["Unnamed: 0"],axis=1)
#data_cox.hist()
#pyplot.show()
data.head()
# Print the column names of the dataset
data.columns
data['sex'].hist()
data['status'].hist()
# Cleaning the data :
data_cox = data.dropna(subset=['inst', 'time', 'status', 'age', 'sex', 'ph.ecog','ph.karno', 'pat.karno', 'meal.cal', 'wt.loss'])
print (data_cox.head())
# Create the object for our method
kmf = KaplanMeierFitter()
data_cox.loc[data_cox.status == 1,'dead'] = 0
data_cox.loc[data_cox.status == 2,'dead'] = 1
data_cox.head()
T = data_cox["time"]
E = data_cox["dead"]
kmf.fit(T, event_observed=E)
#import tfdeepsurv
kmf.event_table
kmf.plot()
plt.title('The Kaplan-Meier Estimate')
plt.ylabel('Probability of the Patient Still Alive')
plt.show()
kmf.confidence_interval_[0:30]
# Probability of an individual to die
kmf.cumulative_density_[0:50]
kmf.plot_cumulative_density()
###Output
_____no_output_____
###Markdown
Hazard function: Survival functions are a great way to summarize and visualize a survival dataset, but they are not the only way. If we are interested in the hazard function $h(t)$ of a population, we unfortunately cannot obtain it by transforming the Kaplan-Meier estimate. For that, we use the Nelson-Aalen estimator of the cumulative hazard:

$$\hat{H}(t) = \sum_{t_i \le t}{\frac{d_i}{n_i}}$$

where $d_i$ = number of deaths at time $t_i$ and $n_i$ = number of subjects at risk just prior to time $t_i$.
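As a small worked example (the numbers here are hypothetical, not taken from the lung dataset): suppose deaths occur at times $t_i = 3, 5, 8$ with $d_i = 2, 1, 3$ deaths and $n_i = 10, 8, 6$ subjects at risk. Then

$$\hat{H}(8) = \frac{2}{10} + \frac{1}{8} + \frac{3}{6} = 0.2 + 0.125 + 0.5 = 0.825,$$

which is what the Nelson-Aalen fitter used below computes at $t = 8$ for such data.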
###Code
# Hazard Function
from lifelines import NelsonAalenFitter
naf = NelsonAalenFitter()
naf.fit(T, event_observed=E)
naf.plot_cumulative_hazard()
# We can predict the value of a certain point :
print (naf.predict(1022).round(3))
# Cox regression :
from lifelines import CoxPHFitter
#data = data_cox[[ 'time', 'age', 'sex', 'ph.ecog','ph.karno', 'pat.karno', 'meal.cal', 'wt.loss', 'dead']]
cph = CoxPHFitter()
cph.fit(data_cox,"time",event_col="dead")
cph.print_summary()
# Plot the survival function :
d_data = data_cox.iloc[0:5,:]
cph.predict_survival_function(d_data).plot()
# It represents median time of survival :
CTE = kmf.conditional_time_to_event_
plt.plot(CTE)
###Output
/opt/anaconda3/lib/python3.8/site-packages/lifelines/utils/__init__.py:1111: ConvergenceWarning: Column status have very low variance when conditioned on death event present or not. This may harm convergence. This could be a form of 'complete separation'. For example, try the following code:
>>> events = df['dead'].astype(bool)
>>> print(df.loc[events, 'status'].var())
>>> print(df.loc[~events, 'status'].var())
A very low variance means that the column status completely determines whether a subject dies or not. See https://stats.stackexchange.com/questions/11109/how-to-deal-with-perfect-separation-in-logistic-regression.
warnings.warn(dedent(warning_text), ConvergenceWarning)
/opt/anaconda3/lib/python3.8/site-packages/lifelines/fitters/coxph_fitter.py:1262: ConvergenceWarning: Newton-Rhaphson convergence completed successfully but norm(delta) is still high, 0.451. This may imply non-unique solutions to the maximum likelihood. Perhaps there is collinearity or complete separation in the dataset?
warnings.warn(
###Markdown
https://www.kdnuggets.com/2020/07/complete-guide-survival-analysis-python-part1.html

https://www.kdnuggets.com/2020/07/guide-survival-analysis-python-part-2.html

https://www.kdnuggets.com/2020/07/guide-survival-analysis-python-part-3.html

https://towardsdatascience.com/deep-learning-for-survival-analysis-fdd1505293c9
###Code
import numpy as np
from sklearn import datasets
iris_X, iris_y = datasets.load_iris(return_X_y=True)
iris_X[0:10], iris_y[0:10]
from sklearn.preprocessing import OrdinalEncoder
from sklearn.model_selection import train_test_split
#import sklearn.model_selection as mod_sel
from sklearn.ensemble import RandomForestClassifier
from sksurv.preprocessing import OneHotEncoder
from sksurv.ensemble import RandomSurvivalForest
#import sklearn.model_selection as prep
rstate = 124
# Split the data into train/test subsets
# Build the covariate matrix and the structured survival outcome expected by scikit-survival
from sksurv.util import Surv
feature_cols = ['age', 'sex', 'ph.ecog', 'ph.karno', 'pat.karno', 'meal.cal', 'wt.loss']
X_rf = data_cox[feature_cols].astype(float)
y_rf = Surv.from_arrays(event=data_cox['dead'].astype(bool), time=data_cox['time'])
X_rf_train, X_rf_test, y_rf_train, y_rf_test = train_test_split(X_rf, y_rf, test_size=0.25, random_state=rstate)
rsf = RandomSurvivalForest(n_estimators=50,
                           min_samples_split=7,
                           min_samples_leaf=10,
                           max_features="sqrt",
                           n_jobs=-1,
                           random_state=rstate,
                           verbose=1)
rsf.fit(X_rf_train, y_rf_train)
print('Concordance index on the test set:', rsf.score(X_rf_test, y_rf_test))
# Train/validation/test split and feature mapping for the deep learning model
from sklearn.preprocessing import StandardScaler
from sklearn_pandas import DataFrameMapper
data_ds = data_cox.copy()
df_train = data_ds.copy()
df_test = df_train.sample(frac=0.2)
df_train = df_train.drop(df_test.index)
df_val = df_train.sample(frac=0.2)
df_train = df_train.drop(df_val.index)
# Continuous covariates of the lung dataset, and columns passed through unchanged
cols_stand = ['age', 'ph.karno', 'pat.karno', 'meal.cal', 'wt.loss']
cols_leave = ['sex', 'ph.ecog']
#standardize = [([col], StandardScaler()) for col in cols_stand]
standardize = [([col], None) for col in cols_stand]
leave = [(col, None) for col in cols_leave]
x_mapper = DataFrameMapper(standardize + leave)
x_train = x_mapper.fit_transform(df_train).astype('float32')
x_val = x_mapper.transform(df_val).astype('float32')
x_test = x_mapper.transform(df_test).astype('float32')
###Output
_____no_output_____
###Markdown
DeepSurv/Non-Linear model The NonLinear CoxPH model was popularized by Katzman et al. in DeepSurv: Personalized Treatment Recommender System Using A Cox Proportional Hazards Deep Neural Network, which allows the use of neural networks within the original Cox design and therefore introduces more modeling flexibility. Let's now take a look at how to use the NonLinear CoxPH model on a simulation dataset generated from a parametric model.

https://square.github.io/pysurvival/models/nonlinear_coxph.html

The easiest way to install scikit-survival is to use Anaconda by running:

conda install -c sebp scikit-survival

https://pypi.org/project/scikit-survival/

To Do Task: Revise the following to test the model on the Lung dataset.
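A minimal sketch of that to-do task, assuming the same pysurvival API that is used below on the simulated data: the lung covariates, time and event columns from `data_cox` stand in for the simulated features, `T` and `E`. The feature list and hyperparameters here are illustrative choices, not prescribed by the original.

```python
# Sketch (assumption): NonLinear CoxPH fitted on the lung data held in data_cox
from sklearn.model_selection import train_test_split
from pysurvival.models.semi_parametric import NonLinearCoxPHModel
from pysurvival.utils.metrics import concordance_index

lung_features = ['age', 'sex', 'ph.ecog', 'ph.karno', 'pat.karno', 'meal.cal', 'wt.loss']
X_lung = data_cox[lung_features].astype(float)
T_lung = data_cox['time'].values
E_lung = data_cox['dead'].values

idx_train, idx_test = train_test_split(range(data_cox.shape[0]), test_size=0.2)
X_tr, X_te = X_lung.iloc[idx_train], X_lung.iloc[idx_test]
T_tr, T_te = T_lung[idx_train], T_lung[idx_test]
E_tr, E_te = E_lung[idx_train], E_lung[idx_test]

structure = [{'activation': 'BentIdentity', 'num_units': 50}]
model_lung = NonLinearCoxPHModel(structure=structure)
model_lung.fit(X_tr, T_tr, E_tr, lr=1e-3, init_method='xav_uniform')
print('C-index (lung):', concordance_index(model_lung, X_te, T_te, E_te))
```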
###Code
#### 1 - Importing packages
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split
from pysurvival.models.simulations import SimulationModel
from pysurvival.models.semi_parametric import NonLinearCoxPHModel
from pysurvival.utils.metrics import concordance_index
from pysurvival.utils.display import integrated_brier_score
#%pylab inline
#### 2 - Generating the dataset from a nonlinear Weibull parametric model
# Initializing the simulation model
sim = SimulationModel( survival_distribution = 'weibull',
risk_type = 'gaussian',
censored_parameter = 2.1,
alpha = 0.1, beta=3.2 )
# Generating N random samples
N = 1000
dataset = sim.generate_data(num_samples = N, num_features=3)
# Showing a few data-points
dataset.head(2)
from pysurvival.utils.display import display_baseline_simulations
display_baseline_simulations(sim, figure_size=(10, 5))
print(data_cox.shape)
#### 3 - Creating the modeling dataset
# Defining the features
N = data_cox.shape[0]
features = sim.features
print('features = ', features)
# Building training and testing sets #
index_train, index_test = train_test_split( range(N), test_size = 0.2)
data_train = dataset.loc[index_train].reset_index( drop = True )
data_test = dataset.loc[index_test].reset_index( drop = True )
# Creating the X, T and E input
X_train, X_test = data_train[features], data_test[features]
T_train, T_test = data_train['time'].values, data_test['time'].values
E_train, E_test = data_train['event'].values, data_test['event'].values
#### 4 - Creating an instance of the NonLinear CoxPH model and fitting the data.
# Defining the MLP structure. Here we will build a 1-hidden layer
# with 150 units and `BentIdentity` as its activation function
structure = [ {'activation': 'BentIdentity', 'num_units': 150}, ]
# Building the model
nonlinear_coxph = NonLinearCoxPHModel(structure=structure)
nonlinear_coxph.fit(X_train, T_train, E_train, lr=1e-3, init_method='xav_uniform')
#### 5 - Cross Validation / Model Performances
c_index = concordance_index(nonlinear_coxph, X_test, T_test, E_test) #0.81
print('C-index: {:.2f}'.format(c_index))
ibs = integrated_brier_score(nonlinear_coxph, X_test, T_test, E_test, t_max=10,
figure_size=(20, 6.5) )
print('IBS: {:.2f}'.format(ibs))
###Output
% Completion: 2%| |Loss: 2143.73
###Markdown
We can see that the c-index is well above 0.5 and that the prediction error curve is below the 0.25 limit, so the model is likely to yield good performance. We can show this by randomly selecting data points and comparing the actual and predicted survival functions, computed by the simulation model and the NonLinear CoxPH model respectively.
###Code
#### 6 - Comparing actual and predictions
# Initializing the figure
fig, ax = plt.subplots(figsize=(8, 4))
# Randomly extracting a data-point that experienced an event
choices = np.argwhere((E_test==1.)&(T_test>=1)).flatten()
k = np.random.choice( choices, 1)[0]
# Saving the time of event
t = T_test[k]
# Computing the Survival function for all times t
predicted = nonlinear_coxph.predict_survival(X_test.values[k, :]).flatten()
actual = sim.predict_survival(X_test.values[k, :]).flatten()
# Displaying the functions
plt.plot(nonlinear_coxph.times, predicted, color='blue', label='predicted', lw=2)
plt.plot(sim.times, actual, color = 'red', label='actual', lw=2)
# Actual time
plt.axvline(x=t, color='black', ls ='--')
ax.annotate('T={:.1f}'.format(t), xy=(t, 0.5), xytext=(t, 0.5), fontsize=12)
# Show everything
title = "Comparing Survival functions between Actual and Predicted"
plt.legend(fontsize=12)
plt.title(title, fontsize=15)
plt.ylim(0, 1.05)
plt.show()
###Output
_____no_output_____
###Markdown
4.1.2 Visualize survival status
###Code
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.plotting import add_at_risk_counts
### Visualize survival status
fig, ax = plt.subplots(figsize=(8, 6))
l_kmf = []
# training set
kmf = KaplanMeierFitter()
kmf.fit(train_data['t'], event_observed=train_data['e'], label='Training Set')
kmf.survival_function_.plot(ax=ax)
l_kmf.append(kmf)
# test set
kmf = KaplanMeierFitter()
kmf.fit(test_data['t'], event_observed=test_data['e'], label='Test Set')
kmf.survival_function_.plot(ax=ax)
l_kmf.append(kmf)
#
plt.ylim(0, 1.01)
plt.xlabel("Time")
plt.ylabel("Survival rate")
plt.title("Survival Curve")
plt.legend(loc="best", title="Dataset")
add_at_risk_counts(*l_kmf, ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
4.1.3 Initialize your neural network
###Code
input_nodes = 10
output_nodes = 1
train_X = train_data['x']
train_y = {'e': train_data['e'], 't': train_data['t']}
# the arguments of dsnn is obtained by Bayesian Hyperparameters Tuning
model = dsl.dsnn(
train_X, train_y,
input_nodes, [6, 3], output_nodes,
learning_rate=0.7,
learning_rate_decay=1.0,
activation='relu',
L1_reg=3.4e-5,
L2_reg=8.8e-5,
optimizer='adam',
dropout_keep_prob=1.0
)
# Get the type of ties (three types)
# 'noties', 'breslow' when ties occur or 'efron' when ties occur frequently
print(model.get_ties())
###Output
_____no_output_____
###Markdown
4.1.4 Train neural network model

You can train dsnn via two alternative functions:
- Only for training: model.train(). Refer to section 4.1.4.a.
- For training the model and watching the learning curve: model.learn(). Refer to section 4.1.4.b.

4.1.4.a Training via model.train()
###Code
# Plot curve of loss and CI on train data
model.train(num_epoch=1900, iteration=100,
plot_train_loss=True, plot_train_ci=True)
###Output
_____no_output_____
###Markdown
4.1.4.b Training via model.learn()

NOTE: this function will first clear the running state and then train the model from scratch.
###Code
test_X = test_data['x']
test_y = {'e': test_data['e'], 't': test_data['t']}
# Plot learning curves on watch_list
watch_list = {"trainset": [train_X, train_y], "testset": [test_X, test_y]}
model.learn(num_epoch=1900, iteration=100, eval_list=watch_list,
plot_ci=True)
###Output
_____no_output_____
###Markdown
4.1.5 Evaluate model performance
###Code
test_X = test_data['x']
test_y = {'e': test_data['e'], 't': test_data['t']}
print("CI on train set: %g" % model.score(train_X, train_y))
print("CI on test set: %g" % model.score(test_X, test_y))
###Output
_____no_output_____
###Markdown
4.1.6 Evaluate variable importance
###Code
model.get_vip_byweights()
###Output
_____no_output_____
###Markdown
4.1.7 Get estimation of survival function
###Code
# optional algo: 'wwe', 'bls' or 'kp', the algorithm for estimating survival function
model.survival_function(test_X[0:3], algo="wwe")
###Output
_____no_output_____ |
ipynb/ML_ner_main.ipynb | ###Markdown
Set the random seed
###Code
import os
import random
import numpy as np
import torch

def set_seed(args):
    random.seed(args.seed)
    np.random.seed(args.seed)
    torch.manual_seed(args.seed)
    if not args.no_cuda and torch.cuda.is_available():
        torch.cuda.manual_seed_all(args.seed)
###Output
_____no_output_____
###Markdown
Load the data
###Code
def get_data(args):
    # Read the raw NER data file and return its lines
    with open(os.path.join(args.data_dir, args.data_name), 'r', encoding='utf-8') as f:
        document = f.readlines()
    return document
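# Example usage (hypothetical arguments - the directory, file name and flags below are
# placeholders for illustration, not values taken from this notebook):
# from argparse import Namespace
# args = Namespace(data_dir='./data', data_name='train.txt', seed=42, no_cuda=True)
# set_seed(args)
# lines = get_data(args)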
###Output
_____no_output_____ |
notebooks/BB84_eavesdropping.ipynb | ###Markdown
BB84 Quantum Key Distribution (QKD) Protocol (with eavesdropping)

This notebook is a _demonstration_ of the BB84 protocol for QKD using Qiskit. BB84 is a quantum key distribution scheme developed by Charles Bennett and Gilles Brassard in 1984 ([paper]). The first three sections of the paper are readable and should give you all the necessary information.

[paper]: http://researcher.watson.ibm.com/researcher/files/us-bennetc/BB84highest.pdf
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Importing standard Qiskit libraries
from qiskit import QuantumCircuit, execute
from qiskit.providers.aer import QasmSimulator
from qiskit.visualization import *
###Output
_____no_output_____
###Markdown
Choosing bases and encoding states

Alice generates two binary strings. One encodes the basis for each qubit:

$0 \rightarrow$ Computational basis

$1 \rightarrow$ Hadamard basis

The other encodes the state:

$0 \rightarrow|0\rangle$ or $|+\rangle $

$1 \rightarrow|1\rangle$ or $|-\rangle $

Bob and Oscar also generate a binary string each using the same convention to choose a basis for measurement.
###Code
num_qubits = 32
alice_basis = np.random.randint(2, size=num_qubits)
alice_state = np.random.randint(2, size=num_qubits)
bob_basis = np.random.randint(2, size=num_qubits)
oscar_basis = np.random.randint(2, size=num_qubits)
print(f"Alice's State:\t {np.array2string(alice_state, separator='')}")
print(f"Alice's Bases:\t {np.array2string(alice_basis, separator='')}")
print(f"Oscar's Bases:\t {np.array2string(oscar_basis, separator='')}")
print(f"Bob's Bases:\t {np.array2string(bob_basis, separator='')}")
###Output
Alice's State: [01001010010100001111110110010110]
Alice's Bases: [00000100011100101001000100101001]
Oscar's Bases: [01000000110000010110000101011111]
Bob's Bases: [01111001011010100011100101010000]
###Markdown
Creating the circuit

Based on the following results:

$X|0\rangle = |1\rangle$

$H|0\rangle = |+\rangle$

$HX|0\rangle = |-\rangle$

Our algorithm to construct the circuit is as follows:

1. Whenever Alice wants to encode 1 in a qubit, she applies an $X$ gate to the qubit. To encode 0, no action is needed.
2. Wherever she wants to encode in the Hadamard basis, she applies an $H$ gate. No action is necessary to encode a qubit in the computational basis.
3. She then _sends_ the qubits to Bob (symbolically represented in this circuit using wires).
4. However, Oscar **intercepts** the qubits and measures them by choosing a basis as per his generated random binary string. To measure a qubit in the Hadamard basis, he applies an $H$ gate to the corresponding qubit and then performs a measurement in the computational basis.
5. Oscar now prepares another set of qubits according to his measurements and the bases he chose. He then **re-sends** these qubits to Bob.
6. Bob measures the qubits according to his binary string, using the same method as Oscar.

Since this can be seen as two BB84 steps in tandem, we can use the framework that we developed earlier.
###Code
def make_bb84_circ(enc_state, enc_basis, meas_basis):
'''
enc_state: array of 0s and 1s denoting the state to be encoded
enc_basis: array of 0s and 1s denoting the basis to be used for encoding
0 -> Computational Basis
1 -> Hadamard Basis
meas_basis: array of 0s and 1s denoting the basis to be used for measurement
0 -> Computational Basis
1 -> Hadamard Basis
'''
num_qubits = len(enc_state)
bb84_circ = QuantumCircuit(num_qubits)
# Sender prepares qubits
for index in range(len(enc_basis)):
if enc_state[index] == 1:
bb84_circ.x(index)
if enc_basis[index] == 1:
bb84_circ.h(index)
bb84_circ.barrier()
# Receiver measures the received qubits
for index in range(len(meas_basis)):
if meas_basis[index] == 1:
bb84_circ.h(index)
bb84_circ.measure_all()
return bb84_circ
###Output
_____no_output_____
###Markdown
Simulating intercepted BB84

The 'intercept and re-send' attack can be simulated by breaking the whole process up into two parts. The first part can be thought of as the BB84 protocol happening between Alice and Oscar, and the second part between Oscar and Bob. However, we have to know the result from the first part to create the circuit for the second part. We will do this below.
###Code
bb84_AO = make_bb84_circ(alice_state, alice_basis, oscar_basis)
oscar_result = execute(bb84_AO.reverse_bits(),
backend=QasmSimulator(),
shots=1).result().get_counts().most_frequent()
print(f"Oscar's results:\t {oscar_result}")
# Converting string to array
oscar_state = np.array(list(oscar_result), dtype=int)
print(f"Oscar's State:\t\t{np.array2string(oscar_state, separator='')}")
bb84_OB = make_bb84_circ(oscar_state, oscar_basis, bob_basis)
temp_key = execute(bb84_OB.reverse_bits(),
backend=QasmSimulator(),
shots=1).result().get_counts().most_frequent()
print(f"Bob's results:\t\t {temp_key}")
###Output
Bob's results: 01011010011110100000110111000101
###Markdown
Creating the key

Alice and Bob only keep the bits where their bases match. Oscar also keeps only these bits from his measurements.
###Code
alice_key = ''
bob_key = ''
oscar_key = ''
for i in range(num_qubits):
if alice_basis[i] == bob_basis[i]: # Only choose bits where Alice and Bob chose the same basis
alice_key += str(alice_state[i])
bob_key += str(temp_key[i])
oscar_key += str(oscar_result[i])
print(f"The length of the key is {len(bob_key)}")
print(f"Alice's key contains\t {(alice_key).count('0')} zeroes and {(alice_key).count('1')} ones")
print(f"Bob's key contains\t {(bob_key).count('0')} zeroes and {(bob_key).count('1')} ones")
print(f"Oscar's key contains\t {(oscar_key).count('0')} zeroes and {(oscar_key).count('1')} ones")
print(f"Alice's Key:\t {alice_key}")
print(f"Bob's Key:\t {bob_key}")
print(f"Oscar's Key:\t {oscar_key}")
###Output
The length of the key is 16
Alice's key contains 7 zeroes and 9 ones
Bob's key contains 8 zeroes and 8 ones
Oscar's key contains 8 zeroes and 8 ones
Alice's Key: 0101000011101111
Bob's Key: 0101101000101110
Oscar's Key: 0101101100101100
|
html/html_image_loaded.ipynb | ###Markdown
###Code
%%capture
!wget https://thispersondoesnotexist.com/image -O 'image.png'
import base64
data_uri = base64.b64encode(open('image.png', 'rb').read()).decode('utf-8')
img_tag = '<img src="data:image/png;base64,{0}">'.format(data_uri)
# print(img_tag)
with open('ex.html', 'w') as f:
f.write(img_tag)
from IPython.core.display import HTML
name = 'ex.html'
HTML(filename=name)  # render the saved HTML file rather than the literal string 'ex.html'
###Output
_____no_output_____ |
6.3 taxonomic correlation_Ratio Activation (Python).ipynb | ###Markdown
This notebook uses Spearman correlation to check the association between Ratio_Activation and the taxa of interest.
###Code
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
import numpy as np
from scipy.stats import spearmanr, pearsonr
from statsmodels.sandbox.stats.multicomp import multipletests
import matplotlib.pylab as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
merge taxonomy with mapping file
###Code
taxa = pd.read_csv('../data/RF_taxa_act.txt', sep='\t', index_col='#OTU ID')
print(taxa.shape)
taxa.head()
otu = pd.read_csv('../Qiime_updated/feature-table-rare5807.txt', sep='\t', skiprows=1, index_col='#OTU ID')
otu.head()
# combine taxa with otu
taxa = pd.merge(taxa, otu, left_index=True, right_index=True).transpose()
taxa.shape
taxa.head()
mf = pd.read_csv('../data/mros_mapping_alpha.txt', sep='\t', index_col='#SampleID')
mf = mf[['OHV1D3', 'OHV24D3', 'OHVD3', 'ratio_activation', 'ratio_catabolism', 'VDstatus']]
mf.head()
# need to substract the six otu abundance for each subject
sample_ids = mf.index
taxa = taxa.loc[sample_ids]
taxa.head()
dat = pd.merge(mf, taxa, left_index=True, right_index=True)
dat.head()
vars_vd = np.array(['OHVD3', 'OHV1D3', 'OHV24D3', 'ratio_activation', 'ratio_catabolism'])
dat[vars_vd] = dat[vars_vd].apply(pd.to_numeric, errors='coerce')
dat[vars_vd].describe()
###Output
_____no_output_____
###Markdown
correlation
###Code
otu_cols = dat.columns[mf.shape[1]:dat.shape[1]]
len(otu_cols)
bt = pd.read_csv('../data/RF_taxa_act.txt', sep='\t', index_col='#OTU ID')
bt.head()
bt.index == dat.columns[mf.shape[1]:dat.shape[1]]
results = []
i = 3  # index of 'ratio_activation' in vars_vd
print(vars_vd[i])
for j in range(len(otu_cols)):
    tmp = dat[[vars_vd[i], otu_cols[j]]].dropna(axis=0, how='any')
    rho, pval = spearmanr(tmp[vars_vd[i]], tmp[otu_cols[j]])
    tax = bt['Taxon'][otu_cols[j]]
    results.append([vars_vd[i], otu_cols[j], tax, rho, pval])
# output table
results = pd.DataFrame(results, columns=['vars', 'otu', 'tax',
'rho', 'pval']).dropna(axis=0, how='any')
results['fdr pval'] = multipletests(results['pval'], method = 'fdr_bh')[1]
results = results.sort_values(['fdr pval'], ascending=True)
# specific bacteria
index = results.loc[results['fdr pval'] <= 0.05].index
for i in range(len(index)):
print(results.tax[index[i]], results['fdr pval'][index[i]])
# check
results
results.to_csv('../data/correlation_Act.txt', sep='\t')
###Output
_____no_output_____
###Markdown
double check the results
###Code
dat.rename(columns={dat.columns[6]: bt.Taxon[dat.columns[6]],
dat.columns[7]: bt.Taxon[dat.columns[7]],
dat.columns[8]: bt.Taxon[dat.columns[8]],
dat.columns[9]: bt.Taxon[dat.columns[9]],
dat.columns[10]: bt.Taxon[dat.columns[10]],
dat.columns[11]: bt.Taxon[dat.columns[11]],
dat.columns[12]: bt.Taxon[dat.columns[12]],
dat.columns[13]: bt.Taxon[dat.columns[13]]}, inplace=True)
dat.head()
tmp = dat[['ratio_activation', 'k__Bacteria; p__Firmicutes; c__Clostridia; o__Clostridiales; f__Ruminococcaceae; g__; s__']].dropna(axis=0, how='any')
spearmanr(tmp[tmp.columns[0]], tmp[tmp.columns[1]])
tmp = dat[['ratio_activation', 'k__Bacteria; p__Firmicutes; c__Clostridia; o__Clostridiales; f__Ruminococcaceae; g__Oscillospira; s__']].dropna(axis=0, how='any')
spearmanr(tmp[tmp.columns[0]], tmp[tmp.columns[1]])
tmp = dat[['ratio_activation', 'k__Bacteria; p__Firmicutes; c__Clostridia; o__Clostridiales; f__Lachnospiraceae; g__Dorea; s__']].dropna(axis=0, how='any')
spearmanr(tmp[tmp.columns[0]], tmp[tmp.columns[1]])
tmp = dat[['ratio_activation', 'k__Bacteria; p__Firmicutes; c__Clostridia; o__Clostridiales; f__Clostridiaceae; g__; s__']].dropna(axis=0, how='any')
spearmanr(tmp[tmp.columns[0]], tmp[tmp.columns[1]])
###Output
_____no_output_____ |